text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
React has uses beyond application development. One of the perhaps surprising use cases is to integrate it within a Content Management System (CMS) such as WordPress.
To get a better idea of how this could work out, I am interviewing Tomáš Konrády.
Lundegaard in Prague, living in Hradec Králové. Recently I have started to fell in love with open-source and the Ramda library. The result of that is a few projects.I am frontend developer at
First of them is the ramda-extension where our core team of Ramdists created point-free utility functions composed with only Ramda functions. Second open-source project is a React Union, the topic of this interview.
In my spare time, I either draw, play a guitar or exercise with a kettlebell.
Purpose of the React Union project is to help with developing React applications that are situated in JavaScript unfriendly environments. What do I mean by these? For us, in Lundegaard, it is a Java CMS backend. For others, it can be any non-JavaScript CMS such as WordPress or Drupal.
React Union project consists of three parts:
<Union />component -
Unioncomponent is responsible for assembling one logical virtual DOM from physically distributed HTML fragments.
Assume, that code below is the output of your server:
<html> <body> Generated content by CMS. <div id="news-feed"></div> <script data-</script> Generated content by CMS with nonpredictable markup... <div class="app-container"> <div id="customers-chat"></div> <script data-</script> </div> <script src="js/app.bundle.js"></script> </body> </html>
Pay attention to the
script tags with
data-union-widget attribute. The tag describes which application should be rendered at which place in document (described by
data-union-container attribute).
Now let's look at our index file in which the
Union components are used:
import { Union } from "react-union"; const routes = [ { name: "news-feed", getComponent: done => ( import("./containers/NewsFeed").then(done), ) }, { name: "customers-chat", getComponent: done => ( import("./containers/CustomersChat").then(done), ) }, ]; const App = () => <Union routes={routes} />; export default App;
Union component scans the HTML for our
script tags - we call them widget descriptors. Then, combined with the route definition above, they will become React containers.
The component utilizes portals under the hood so we can be sure that even though the components are physically rendered in different parts in real DOM,
Union will assemble one logical virtual DOM for us. Then we can provide one context to all of our containers. We can share the application state, theme preferences, etc. to all of our containers.
OK, why all the fuzz, why not to render the component directly?
Let's imagine that we don't have control over the response from the server. For example, it can be a result of a CMS, where administrators can drag and drop whatever application or a widget to their views. We do not know in advance what our apps should be rendered.
To sum up, the
Union component allows us to define what React containers can users use in their system. The component will ensure that the right component will be rendered in the right place.
I described just one single use case how
Union could be used. But there is more you can do. For example, you can pass data from a server or even share common data across all rendered containers.
I don't think that there are many other solutions available. I know just about react-habitat. The library is focused on isolated components that neither share context nor state.
But there are surely ways of how to achieve the same (and better):
But sometimes there is no budget to change backend technology, or you and that is where React Union shines.
At my work, there are half of backend programmers certified Liferay developers. They specialize in development for that platform. Liferay is the complex environment written in Java, and a big part of it is CMS. Our clients love using it, and our backend developers have both great inside and knowledge about it.
Neither clients nor backend developers will stop to use Liferay soon.
But I am JavaScript developer, and I don't care what backend technology is used. In advance, Liferay takes about 10 minutes to start. :)
I wanted to use a solution that is agnostic to the CMS platform. The React Union is the result of that.
Dynamic rendering of components is one thing, but state management in CMS environments is the second and maybe more complex one. In Lundegaard we love Redux (yes, we are going to use it even though React hooks are on the way :)). As a result, we started to opensource the
redux-tools - our solution to modular Redux. It is the younger brother of React Union that we use alongside.
Yes, there are trends — both good and bad ones.
Among the good ones, I consider the focus on the overall performance of web applications. We can speak either about the whole philosophy of the Progressive Web Applications or about the direction the React library is heading with focus on responsible GUIs.
The next big thing is undoubtedly WebAssembly (WA). I think once WA will be well supported across browsers than there will start to sublime new remarkable ways and technologies in which we will develop with native performance.
I have to say I am not a big fan of neither TypeScript nor Flow. Those two solutions are the way how to bring the static typing into JavaScript world. But I am aware that I stay behind the smaller group of JavaScript developers with the same opinion.
But I would recommend to everyone from the other group of developers to take a look into Clojure and ClojureScript world. In there they understand (for a long time) that static typing is not the silver bullet for safe apps without bugs.
I would recommend them to dig into basics. It is essential to genuinely know HTML, CSS, and JavaScript before they start to add any of frameworks or libraries into their skill set.
I want to thank all members of our small team that develop React-union project for their hard work! Namely aizerin, jamescoq, and wafflepie.
Thanks for the interview Tomáš! I am not a WordPress developer but I can see how React Union could come in handy in that context and others.
Learn more about React Union at the project site. See also React Union on GitHub. | https://survivejs.com/blog/react-union-interview/ | CC-MAIN-2020-40 | refinedweb | 1,068 | 65.32 |
On 1/11/19 8:20 AM, Laurent Vivier wrote: > On 11/01/2019 01:37, David Gibson wrote: >> On Wed, Jan 09, 2019 at 11:15:26AM +0100, Laurent Vivier wrote: >>> Hi Jon, >>> >>> please cc: qemu-devel and MAINTAINERS when you send a patch. >>> >>> You can have the list of maintainers using a script in qemu directory: >>> >>> ./scripts/get_maintainer.pl XXXX.patch >>> >>> Thanks, >>> Laurent >> >> Hrm. Like the other patch, I didn't seem to receive this - I checked >> back through my archives. I think something must be not working with >> email from Jon to me, although I can't quite imagine what. >> >>> >>> On 03/01/2019 20:58, Jon Diekema wrote: >>>> From: Jon Diekema <address@hidden> >>>> Date: Tue, 25 Dec 2018 04:03:04 -0500 >>>> Subject: Whitespace cleanup: target/ppc/translate_init.inc.c >>>> >>>> Signed-off-by: Jon Diekema <address@hidden> >>>> --- >>>> target/ppc/translate_init.inc.c | 4 ++-- >>>> 1 file changed, 2 insertions(+), 2 deletions(-) >>>> >>>> diff --git a/target/ppc/translate_init.inc.c >>>> b/target/ppc/translate_init.inc.c >>>> index c971a5faf7..b5f4c9bd55 100644 >>>> --- a/target/ppc/translate_init.inc.c >>>> +++ b/target/ppc/translate_init.inc.c >>>> @@ -5237,7 +5237,7 @@ static void init_proc_601(CPUPPCState *env) >>>> 0x00000000); >>>> /* Memory management */ >>>> init_excp_601(env); >>>> - /* XXX: beware that dcache line size is 64 >>>> + /* XXX: beware that dcache line size is 64 >>>> * but dcbz uses 32 bytes "sectors" >>>> * XXX: this breaks clcs instruction ! >>>> */ >>>> @@ -10485,7 +10485,7 @@ static void ppc_cpu_class_init(ObjectClass >>>> *oc, void *data) >>>> cc->tcg_initialize = ppc_translate_init; >>>> #endif >>>> cc->disas_set_info = ppc_disas_set_info; >>>> - >>>> + >>>> dc->fw_name = "PowerPC,UNKNOWN"; >>>> } >>>> >>> >> > > Likewise. Only sent to qemu-trivial ML. Also tools like patchew are only subscribed to address@hidden, so this patch is missing there too. The documentation seems clear enough although: - README: Submitting patches ================== When submitting patches, one common approach is to use 'git format-patch' and/or 'git send-email' to format & send the mail to the address@hidden mailing list. [...] - MAINTAINERS All patches CC here L: address@hidden | https://lists.gnu.org/archive/html/qemu-ppc/2019-01/msg00314.html | CC-MAIN-2019-35 | refinedweb | 316 | 55.84 |
2d Tile Based JRPG
It's a 2d Tile Based JRPG. It's still in pretty early stages.
James Pulec
(jpulec)
Links
-
Releases
2d Tile Based JRPG 0.0 — 16 Jul, 2011
Pygame.org account Comments
}
Lysander 2011-09-06 07:39:59
Can you include an executable package to try out? I use the most recent versions of python and pygame for my own programming and was sadly unable to try this out with my current setup. I fixed about 10 instances of print statements throwing out errors by encapsulating each with parentheses, for instance in ImageData.py -
print "Failed to load texture file '%s'!" %textureFilename
becomes
print("Failed to load texture file '%s'!" %textureFilename)
but I hit a dead end when string.split wasn't recognized as a valid object attribute in Map.py. I'm guessing this is a version issue and the functionality has changed in the most recent releases, but I'm no expert so I can't say for sure.
}
JamesPulec 2011-09-06 21:18:28
Yes. You must be using Python 3. A lot was updated then. Try using Python 2.7. If I get a chance I could try and put an executable out. Would you need a .exe or a .deb, or what would you be looking for?
}
}
Lysander 2011-09-13 06:09:48
You should get on pygame's IRC channel so that we can discuss a few things.
}
Anon 2012-03-20 14:14:31
It's good, but There is a re-occuring attribute error because it says attribute "alive" is not defined. This causes the game to end after attacking in combat.
}
Ino 2015-09-22 14:20:02
It can be fixed by adding one line in PlayerData,py:
class PlayerData(Creature.Creature):
def __init__(self, name):
Creature.Creature.__init__(self, name)
self.facing = 0
self.collisionRect = pygame.Rect(288, 216, 24, 24)
#self.font = pygame.font.Font(None, 24)
self.currentSkin = None
self.rHand = None
self.lHand = None
self.armor = None
self.alive = True #add this line
}
Ino 2015-09-22 14:21:26
There's another error, haven't figure out how to fix it yet:
Traceback (most recent call last):
File "D:\Documents\Downloads\My--RPG-master\My--RPG-master\Game.py", line 350, in <module>
game.mainloop()
File "D:\Documents\Downloads\My--RPG-master\My--RPG-master\Game.py", line 63, in mainloop
self.drawWorld()
File "D:\Documents\Downloads\My--RPG-master\My--RPG-master\Game.py", line 260, in drawWorld
self.battle()
File "D:\Documents\Downloads\My--RPG-master\My--RPG-master\Game.py", line 205, in battle
self.newSkins = self.instance.battleMain()
File "D:\Documents\Downloads\My--RPG-master\My--RPG-master\Battle.py", line 100, in battleMain
self.ret = self.performActions()
File "D:\Documents\Downloads\My--RPG-master\My--RPG-master\Battle.py", line 245, in performActions
self.actionsLength += self.run(self.highestAction[0])
TypeError: unsupported operand type(s) for +=: 'int' and 'list' | http://pygame.org/project-2d+Tile+Based+JRPG-1931-.html | CC-MAIN-2017-13 | refinedweb | 496 | 63.56 |
std::cin.get();to 'pause' the console so it won't close right away.
iostreamstandard library.
#include <windows>Then where you want a pause before continuing, add in
Sleep(1000);1000 gives 1000 ms or 1 second. So, adjust that number to how long of a delay you need. 1500, for 1.5 seconds, etc.
#include <Windows.h>
#define) names should be all uppercase. Since you are not making a macro, you shouldn't have it as all uppercase.
void main()
int main(). So whats the issue with void? I mean it works but people get upset.
int mainwill always work, no matter which version of which compiler you're using.
void mainon the other hand may work just fine in one compiler but fail in another*. So really there is no benefit at all to using void main.
__fooanywhere in your code.
_Fooin the global unnamed namespace.
FileEncryptionStatusin the global unnamed namespace would also cause a name clash with the Win32 API - as would thousands of other names.
void main() {}
OP? | http://www.cplusplus.com/forum/beginner/85879/ | CC-MAIN-2015-32 | refinedweb | 172 | 78.75 |
These options are in the
filebeat namespace.
registry.path
The root path of the registry. If a relative path is used, it is considered
relative to the data path. See the Directory layout section for details.
The default is
${path.data}/registry.
filebeat.registry.path: registry
The registry is only updated when new events are flushed and not on a predefined period. That means in case there are some states where the TTL expired, these are only removed when new event are processed.
The registry stores it’s data in the subdirectory filebeat/data.json. It also contains a meta data file named filebeat/meta.json. The meta file contains the file format version number.
The content stored in filebeat/data.json is compatible to the old registry file data format.
registry.file_permissions
The permissions mask to apply on registry data file. The default value is 0600. The permissions option must be a valid Unix-style file permissions mask expressed in octal notation. In Go, numbers in octal notation must start with 0.
The most permissive mask allowed is 0640. If a higher permissions mask is specified via this setting, it will be subject to a umask of 0027.
This option is not supported on Windows.
Examples:
0640: give read and write access to the file owner, and read access to members of the group associated with the file. 0600: give read and write access to the file owner, and no access to all others.
filebeat.registry.file_permissions: 0600
registry.flush
The timeout value that controls when registry entries are written to disk
(flushed). When an unwritten update exceeds this value, it triggers a write to
disk. When
registry.flush is set to 0s, the registry is written to disk after
each batch of events has been published successfully. The default value is 0s.
The registry is always updated when Filebeat shuts down normally. After an
abnormal shutdown, the registry will not be up-to-date if the
registry.flush
value is >0s. Filebeat will send published events again (depending on values in
the last updated registry file).
Filtering out a huge number of logs can cause many registry updates, slowing
down processing. Setting
registry.flush to a value >0s reduces write operations,
helping Filebeat process more events.
registry.migrate_file
Prior to Filebeat 7.0 the registry is stored in a single file. When you upgrade
to 7.0, Filebeat will automatically migrate the old Filebeat 6.x registry file
to use the new directory format. Filebeat looks for the file in the location
specified by
filebeat.registry.path. If you changed the path while upgrading,
set
filebeat.registry.migrate_file to point to the old registry file.
filebeat.registry.path: ${path.data}/registry filebeat.registry.migrate_file: /path/to/old/registry_file
The registry will be migrated to the new location only if a registry using the directory format does not already exist.
config_dir
[6.0.0] Deprecated in 6.0.0. Use Input config instead.
The full path to the directory that contains additional input configuration files.
Each configuration file must end with
.yml. Each config file must also specify the full Filebeat
config hierarchy even though only the
inputs part of each file is processed. All global
options, such as
registry
shutdown_timeout
General configuration options
These options are supported by all Elastic Beats. Because they are common options, they are not namespaced.
Here is an example configuration:
name: "my-shipper" tags: ["service-X", "web-tier"]
name_root
A list of processors to apply to the data generated by the beat.
See Filter and enhance the exported data for information about specifying processors in your config.
max_procs
Sets the maximum number of CPUs that can be executing simultaneously. The default is the number of logical CPUs available in the system. | https://www.elastic.co/guide/en/beats/filebeat/7.5/configuration-general-options.html | CC-MAIN-2020-16 | refinedweb | 628 | 60.11 |
In my previous post, we discussed the ASP.NET Core Model-View-Controller (MVC) web application framework and what better way to start our journey through ASP.NET Core. This framework is battle-hardened and has been around since early 2009. To touch on the essential components, we reviewed everything that gets scaffolded in a new project. While some of this code is considered to be “boilerplate” code, we must understand it. Remember, we are responsible for all the code we generate, whether made from a new project template, created by scaffolding or hand-coded.
Our next stop is with Microsoft’s new web application framework, ASP.NET Core Razor Pages. While Razor Pages uses many of the same underlying ASP.NET Core components, it takes on a different routing paradigm and a more compacted programming model. Let us get started.
Getting Started
Like any new .NET Core application, we can create a new project using one of the following methods.
Something interesting can be noticed when comparing the Razor Pages template to MVC. In Visual Studio, you have the option to create a “Web Application” project for Razor Pages or a “Web Application (Model-View-Controller)” project for MVC. Razor Pages seems like the default choice.
While it is not a focal point of this article, the ASP.NET Core Identity views have also been rewritten with Razor Pages. So even if your web application is MVC, there might be a little bit of Razor Pages in your application. Perhaps Microsoft is trying to tell us something.
A New Project Template
After looking at the ASP.NET Core MVC template, you will find many similarities with the Razor Pages template. For instance, the wwwroot folder and Program class are virtually unchanged. There are slight differences in the Startup class, and the Pages folder is entirely new. Before we dig into the Razor Pages themselves, let’s look at the startup class variations.
Startup
As a recap, all ASP.NET Core applications use a startup class to register services and configure the HTTP request middleware. This process takes place in the ConfigureServices and Configure methods. I’ve discussed these methods previously, so I won’t go into the details here (for reference, I’ve included the links below).
- ConfigureServices – Service Registration
- Configure – HTTP Request Pipeline
Now, let’s look at how the startup class differs in an ASP.NET Core Razor Pages application. I’ve highlighted two key lines below.
public class Startup { // Removed for brevity... public void ConfigureServices(IServiceCollection services) { services.AddRazorPages(); } public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { // Removed for brevity... app.UseEndpoints(endpoints => { endpoints.MapRazorPages(); }); } }
If we understand the thought process behind the ConfigureServices and Configure methods, the changes we see in the startup class make perfect sense. Instead of configuring MVC controllers and views for dependency injection, Razor Pages need to be registered. We can see these changes reflected in the ConfigureServices method. Furthermore, instead of using the MVC controller routing, Razor Pages are mapped with endpoint routing.
Next, let’s look at the most fundamental part of our new application, the Razor Pages.
The ‘Pages’ Folder
As you’d imagine, the Pages folder is where the Razor Pages reside. Expanding this folder, we can see that we start with a few default pages. If these pages look familiar, they should. Running the Razor Pages template gives us the same web application that our MVC template did.
Right off the bat, you will notice that Razor Pages follows a much more compact programming model than MVC does. In Razor Pages, each request gets routed directly to a page. This routing paradigm is different in MVC, where requests get routed to a controller who generates a model for a view to data bind.
The Razor View Engine
Before we go any further, let’s talk about a fundamental concept. Just like MVC, Razor Pages uses Razor as it’s templating engine. What is Razor? Razor is a server-side templating engine that allows developers to use C# to generate data-driven markup. As we unpack the previous sentence, there are two things to keep in mind. First, Razor is not unique to Razor Pages. MVC also uses Razor to render its views. Second, markup gets generated server-side so, we don’t want to get Razor confused with Blazor.
What does all this mean? The Razor templating syntax has been around for a long time. While Razor is front and center in the name of the new “Razor Pages” framework, it is just a piece of the puzzle. Razor is shared by several .NET Core web application frameworks and focuses solely on generating HTML markup and rendering data objects.
If you would like to read more about the Razor Syntax, Tag Helpers, or any other features provided by Razor View Engine, I recommend reading through the Microsoft documentation.
Razor Pages Architecture
Unlike MVC, which breaks into three separate components, a Razor page is made up of two pieces, a Razor markup file and a C# code file. The Razor markup looks similar to an MVC view; however, there is a unique
@page directive placed at the top of the file to give it the features of a Razor Page.
@page @model IndexModel @{ ViewData["Title"] = "Home page"; } <div class="text-center"> <h1 class="display-4">Welcome</h1> <p>Learn about <a href="">building Web apps with ASP.NET Core</a>.</p> </div>
The Razor markup file also contains a
@model directive that binds the Razor Page to a specific page model. If we expand the Razor Page in Visual Studio, we find a page model class with the same name as the Razor Page with a
.cs suffix. By default, this is the page model referenced by the Razor markup.
public class IndexModel : PageModel { private readonly ILogger<IndexModel> _logger; public IndexModel(ILogger<IndexModel> logger) { _logger = logger; } public void OnGet() { } }
Handler Methods
When an HTTP request is routed to a Razor Page, a naming convention is used to find the appropriate handler method to execute. Handler methods are prefixed with the word “On” followed by the HTTP Verb. For example, the
OnGet method shown above is invoked when an HTTP GET request is routed to the
Index page. To create an asynchronous handler method, the Async suffix can be added to the end. Below is a list of the most frequently used handler methods.
- OnGet or OnGetAsync
- OnPost or OnPostAsync
- OnPut or OnPutAsync
- OnDelete or OnDeleteAsync
Model Binding
One key difference between Razor Pages and MVC is how data gets bound to the Razor markup. With Razor Pages, the page model not only handles requests, but it is also bound directly to the page markup. You can almost think of it like a model and controller combined. Properties exposed in the page model can be accessed directly in the page markup using the @Model syntax.
This condensed strategy works excellent for GET requests.
OnGet handler methods have to populate data into any available public properties, and away we go! For requests where data is being sent from the client, such as POST or PUT, a special
[BindProperty] attribute is required. Similar to a parameter in an MVC controller action, this attribute makes properties available for model binding.
public class CreateModel : PageModel { // Removed for brevity [BindProperty] public Donut Donut { get; set; } public async Task<IActionResult> OnPostAsync() { if (!ModelState.IsValid) { return Page(); } _context.Donuts.Add(Donut); await _context.SaveChangesAsync(); return RedirectToPage("./Index"); } }
When Should I use Razor Pages?
Razor Pages have several benefits over the traditional ASP.NET Core Model-View-Controller (MVC) framework. MVC is entity and action-focused while Razor Pages are more page-focused. This, in itself, has an interesting side effect for MVC. Entities in most MVC applications start with simple CRUD operations; however, this is typically short-lived. As more “actions” are required, controllers quickly become bloated. This is not a concern with Razor Pages. Each page stays focused on a single activity, which allows them to be smaller and less bloated.
Razor Pages are also simple to wrap your head around. The condensed, page-focused architecture is very intuitive. This simplicity reminds many of the legacy ASP.NET Web Forms framework. While there are certainly parallels to draw between the two, Razor Pages maintains a strict separation between the markup and page model. This lack of separation in Web Forms made unit testing difficult and, in many ways, violated the separation of concerns principle.
So when should you use Razor Pages instead of MVC? It depends. In my opinion, Razor Pages shines in smaller, “action” based web applications. For larger CRUD applications, MVC could still a great fit.
What About The Client Side?
As mentioned previously, both MVC and Razor Pages are server-side web applications. This means every time a user navigates to a page, a request is sent to the server where data is retrieved and embedded into an HTML template. This can lead to a little extra overhead, which is evident on slower networks. Over the years, the desire to do more in the browser has lead to an explosion of JavaScript frameworks such as Angular, React, and Vue. Microsoft also has an answer for this in the new ASP.NET Core Blazor framework. If you are curious how Blazor compares to Razor, please take a look at my previous article for more information – What’s the Difference Between Razor and Blazor?!? | https://espressocoder.com/2020/09/01/comparing-asp-net-core-razor-pages-with-mvc/ | CC-MAIN-2020-40 | refinedweb | 1,570 | 57.57 |
//Header file to create linked list. #ifndef RANDOMSHAPELISTGEN_H #define RANDOMSHAPELISTGEN_H #include <iostream> #include <list> #include <stdlib.h> #include <ctime> //Where should I put the code "std::"? class shape { public: shape(){} virtual ~shape(){;}//destructor virtual char *getType() = 0;//does any of this actually need to be virtual? //need code to designate what the score value of each shape is. All shapes are worth 2 points each private://don't think I need any private }; class circle : public shape { public: char *getType() {return "circle";} }; class square : public shape { public: char *getType() {return "square";} }; class triangle : public shape { public: char *getType() {return "triangle";} }; typedef list<shape*> shapeList_t;//this sometimes raises an error sometimes doesn't. Its like my computer can't make up it mind. typedef shapeList_t::iterator shapeListIter_t;//what am I doing wrong here? shapeList_t myList;//declared this outside functions so multiple functions could use it. shapeListIter_t myIter;//same as above shape *curShape;//same as above shapeList_t makeList()//should I have defined this function perhaps in the cpp file? {//also am I defining this function correctly? I think eventually I will need to pass something to it, such as what level the game is on, to be passed to shapes(later though for timer.) int i, numItems=20;//i is a counter, and numItems is simply the number of shapes in the list. int seed; seed = time(0); srand(seed);//makes time the seed for randomization function // Insert for (i=0; i<numItems; i++)//for loop to fill list. { switch (rand() % 3)//random function here, seeded by srand. { case 0: myList.push_back(new circle); break; case 1: myList.push_back(new square); break; case 2: myList.push_back(new triangle); break; } } // Display. Will need to be changed to keep console screen from being a mess. //Also will need to be changed so that only one object at a time is listed, and clears each time object is moved from list or deleted. for (myIter=myList.begin(); myIter!=myList.end(); myIter++) { curShape = *myIter; std::cout << "Shape type: " << curShape->getType() << endl; } return myList;//Not sure if this right. } void deleteList()//function to delete list. Again should I put this in the cpp file? //Also, will this work for when the list is empty or not filled with a number different than 20? { // Cleanup for (myIter=myList.begin(); myIter!=myList.end(); myIter++) { curShape = *myIter; delete curShape; // free the memory that this list item consumes - calls curShape's destructor } myList.clear(); // free the memory that the list uses to point to each list item - empties the list } #endif
// char input //for gameover=false //run game(will call all functions at some point, probably multiple times. //if input = s execute save //if input = j execute join //if input = d execute drop //if input = e execute redeem //gameover conditions met: go to main menu. //cout totalScore // #include <iostream> //May need to include another header file. #include "RandomShapeListGen.h" using namespace std; int main() { bool stackFull = false; //initial value, changes based on return of a function. Controls level loop. bool listEmpty = false; //initial value, changes based on return of a function. Controls level loop. bool gameOver = false;//initial value, controls main loop. bool levelEndCondition= false; //controls level loop. int userScore, totalScore=0; //initial values, totalScore is found at the end of every level. userScore is determined by user functions. int commonShapeScore; int level; while (gameOver==false) { for (level=1; level<=3; level++)//I haven't been able to quite test this, but will it start at level 1 or automatically move to level 2? //Do I need to change the initialization to level=0? { //Code to create new list using resources from header file. //code to create stack using resources from a header file(not done yet, not sure how my code for this will look.) //commonShapeScore=//Code to find number of most abundant shape and multiply by 2. A function. while (levelEndCondition==false) { userScore=0; /*[Display contents of the top of the stack and contents at the beginning of list.]; [perform user functions on stack and list: parameters of functions: Shape score: 2 redeemableScore X = 5*level (if score is greater than this, it is redeemable); user controlled function to move shape at beginning of list to top of stack and next shape in list becomes shape at beginning; automated function that scores points whenever enough shapes of the same type are placed consecutively on top eachother in stack]; [Change displayed representation of stack and list based on what user functions were called (without multiple displays appearing for each time a change is made if possible)]; stackFull=[function to test if stack is full, returns bool]; listEmpty=[function to test if list is empty, returns bool];*/ if (stackFull==true) { levelEndCondition=true; } else if (listEmpty==true) { levelEndCondition=true; } else { cout<<""<<endl; } } totalScore+=userScore; //Code to delete list. //Code to delete stack. if (userScore < (3/4 * commonShapeScore)) { gameOver=true; cout<<3/4*commonShapeScore<<" was the minimum score to proceed. You did not reach it."<<endl; cout<<"Game Over"<<endl; break; } else { if (level==3) { cout<<"You beat all the levels!"<<endl; gameOver=true; } else { cout<<"Going to next level... Press enter to continue:"<<endl; //code to make program wait for enter to be pressed before continuing. } } } } cout<<totalScore<<" is your final score."<<endl; return 0; }
Oh and ignore the text above the include statement in the cpp file, the bottom file, that was some pseudocode I was using earlier. Sorry about that. | http://www.dreamincode.net/forums/topic/285061-functions-to-make-input-trigger-actions-on-list-and-stack-etc-etc/ | CC-MAIN-2017-51 | refinedweb | 900 | 64.61 |
tag:blogger.com,1999:blog-79243588752463744742018-03-06T00:14:18.505-08:00Sports Short ShotsInteresting comments and observations on sports, sport games and sport personalities.Garth*** YankeesThe Minnesota Twins officially have a psychological problem with the Yankees. On this celebration of our Nations birth the Twins took a four game beating at the hands of the Yankees. The Yankees aren't even that good this year, Jeter, Rodriguez, Granderson, Texeira all are missing from the Bronx Bombers. Instead they field a team of rag tag players. Yet they still slug the heck out of the ball and seem to always beat the Twins. <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="clear: left; cssfloat: left; float: left; height: 145px; margin-bottom: 1em; margin-right: 1em; width: 182px;"><img border="0" height="200" oya=" oya="true" src="" /></a> It wouldn't be so bad but no one but people in New York like the Yankees. Losing to them just leaves a bad taste in your mouth. In Boston they always talked about the curse of the Bambino and Boston over came that. I think it should just be the curse of the Yankees. Let's hope it doesn't take the Twins 100 years to overcome that curse.</div><div style="border-bottom: medium none; border-left: medium none; border-right: medium none; border-top: medium none;"><br /></div>I'm glad we only play them 6-7 times a year others the Twins might lose 162 games.<br /><br />Let's hope July 5th is better than July 4th.<img src="" height="1" width="1" alt=""/>Garth EveIt is the eve of the NCAA tournament. Actually it already started with the "first four" a media scam to get more teams into the tournament only to lose in the first round.<br /><br />Anyway, I've picked my brackets. I am in five my track record is not very good but I have hope this year. Who are you picking to make it to the final four?<br /><br />I would like to hear some of your choices.<br /><br />Since I am doing five brackets it changes somewhat but here is a list of the teams I think have the best chance to make the final four.<br /><br />Louisville<br />Indiana<br />New Mexico<br />Georgetown<br />Miami<br /><br />Good Luck on your brackets.<img src="" height="1" width="1" alt=""/>Garth Then There Were TwoThe NFL playoffs have moved to the final stages. The Superbowl will happen on February 3rd. It will be between the AFC champions Baltimore Ravens and the NFC champions San Francisco 49ers.<br /><br />It will be an interesting match-up between the two teams who finished runners up in their conferences a year ago and now they find themselves motivated to win the Superbowl.<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="clear: left; cssfloat: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="99" oea="true" src="" width="200" /></a></div>The Ravens are motivated by trying to win one more Championship for Ray Lewis who has announced his retirement. Once he made that announcement it lit a fire under the rest of the team to go on a mission to win the Superbowl.<br /><br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="clear: right; cssfloat: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="118" oea="true" src="" width="200" /></a></div>The 49ers are motivated by a Quarterback change halfway through the season. A quarterback story that has captured the nation. A quarterback who ran for twice as much yardage against the Packers as Adrian Peterson did the week before.<br /><br />Don't forget the match-up of brother head coaches. That is also a great story.<br /><br />What is my prediction. San Francisco 35 Ravens 27<img src="" height="1" width="1" alt=""/>Garth Then There Were FourFour teams left in the NFL.<br /><br />I was sorely disappointed in the losses by Denver, how could they let that player get behind them like that. I was also disappointed in the loss by Seattle. They came back from 20 points down only to lose in the last 30 seconds. How could they not stop them, they have a good defense.<br /><br />Some good games this weekend, and now there is only four teams remaining. In the AFC we have the Baltimore Ravens heading to Gillette stadium to play the New England Patriots. In the NFC it will be the San Francisco 49ers heading to Atlanta to play the Falcons.<br /><br />Do the Ravens have the desire to win for Ray Lewis' last season?<br /><br />Are the Brady led Patriots a force to be reckoned with?<br /><br />Can Kaepernick have another huge came and the 9ers overcome the west to east travel jinx?<br /><br />Is it finally time for Matty Ice to win one for the Falcons?<br /><br />Who will it be between these four? Here is who I think will be playing in the Super Bowl.<br /><br />The New England Patriots versus the San Francisco 49ers. With the Patriots winning the Super Bowl!!<img src="" height="1" width="1" alt=""/>Garth NEW YEAR!!HAPPY NEW YEAR!!<br /><br />Continue following this blog in 2013 and see a different side and perspective on sports. Who would have thought we would be heading into 2013 with some interesting teams playing in the NFL playoffs.<br /><br />The Vikings in a 6 day rematch with the Packers this time on the frozen tundra of Lambeau.<br />The Redskins led by two rookies and Seahawks led by Wilson and Lynch should be quite a match-up<br /><br />The amazing Colts versus the sliding Texans. Will the rookie continue to shine in the playoffs?<br />Can the Bengals beat the Ravens? Can Flacco be trusted or will Dalton be the QB to shine.<br /><br />This years NFL season was a spectacular ride with AP coming 9 yards short of mortality. Still his feat is more remarkable considering he was coming off of knee surgery. Never underestimate the power of hard work and dedication in a sport.<br /><br />Payton Manning showed he still has what it takes and there was a rise in young guns in the league with Wilson, Luck and RGIII leading the way.<br /><br />The playoffs will prove to be just as exciting. Buckle up for a football ride to the Superbowl.<br /><br /><img src="" height="1" width="1" alt=""/>Garth Is Wrong?What is wrong with the Minnesota Vikings? I contend it is not Percy Harvin being out for a couple weeks. It is not Christian Ponders difficult times although that doesn't help. It isn't their defense or special teams. It definitely isn't Adrian Peterson. It is my contention that it is the coaches and especially the play calling.<br /><br />A prime example came in the game yesterday against the Bears. The Vikings had a 3rd and two from the 6 and instead of running AP for a mere 2 yards they call ponder to throw into the end zone. Of course it didn't work so now they have 4th and 2 and they do the exact play instead of call AP's number. Wouldn't it have been easier to call AP's number on that 3rd and 2 and if he doesn't make it then throw.<br /><br />The Vikings have the best running back in the game and they don't use him on third down and rarely in the red zone. What kind of play calling is that. If something isn't done soon AP will not want to play here anymore he will want to play where he is used and where he will win.<br /><br />I know, teams put 8 in the box to stop AP. If that is the case obviously he still beats the odds because he leads the league in rushing. Why not keep giving him the ball and make the other team actually try to stop the runaway train.<br /><br />This week they play the Packers, one of the worst run defenses in the league. I bet the Vikings try to air it out and not even use AP. But if they make the right calls and play good defense they can beat the Packers. We'll just have to see what happens.<img src="" height="1" width="1" alt=""/>Garth CheerHow about dem Yankees! The best hitting team money can buy and they fall flat on their face against the Detroit Tigers. The Tigers sweep the Yankees in 4 straight after an exciting game one in which the Yankees had an Ibanez comeback only to fall in Extras. <br /><br />The biggest blow for the Yankees came in game one when Derek Jeter broke his ankle. Huge disappointment because he is the inspiration and the straw that stirs the Yankee drink. It might actually had been a different series with Jeter at full strength.<br /><br />I am not a Yankee fan at all so I am glad they lost but I like to watch good baseball and the Tigers totally dominated the Yankees after that game one. I hope the World Series will be more exciting and if it is the Cardinals I'll have to root for the Tigers. The Cardinals won last year so it is up to the Tigers to win.<br /><br />Maybe just maybe the Giants can come back from a 3 to 1 deficit in games.<img src="" height="1" width="1" alt=""/>Garth LockoutHockey preseason should have started by now. Minnesotans like myself would have anxiously been watching the play of the newest Wild players, Zach Parise and Ryan Suter. It was going to be an exciting season. That is right I said WAS. September 15th came and their was no Collective Bargaining agreement between the players and the owners so the owners locked the players out. No practice and so far no preseason.<br /><br />It is very disappointing to hear two groups of people making millions of dollars complain that they don't have enough. If they can't figure it out maybe than can give me some of the money. Maybe they can get President Obama involved since President Obama wants to tax the rich maybe he can just tell both sides to share their wealth with the less fortunate. Oh wait! President Obama won't do that because he doesn't even share his wealth. He is too busy taxing every one to coffer his own agenda to really care about any one else.<br /><br />The way I see the lockout is that the owners have every right to make as much money as they can. It is America and Canada where capitalism is king (or should be). Plus it isn't like the players are hurting for money. In fact here is a quote from Minnesota's recent millionaire Zach Parise.<br /><br /><span style="color: #b45f06;">?"</span> <br /><br />He just signed a 98 million dollar contract. That doesn't sound like a pay cut to me. I love sports and enjoy watching them but I am tired of the cry baby players whining that they don't have enough money and even the minimum amount in most sports are well over the amount I make, and I have my MBA. Most of the players can barely give a competent interview.<br /><br />Then there are the owners making millions of dollars in other ventures and then whine and complain to the state or community that they need a new facility built or they may have to leave.<br /><br />Why don't the player and the owners just <strong>SHUT UP</strong> play the game and let the fans enjoy watching without all of this politics.<img src="" height="1" width="1" alt=""/>Garth RacesSeptember is a time for crisp cool air, back to school, football and of course baseball pennant races. With the introduction of one more wildcard team it has opened the door for more and more races. The usual suspects are still in the races but we have some newcomers this year.<br /><br />If you live in Baltimore baseball is relevant in September again. If you live in the DC area (I know, Baltimore and DC are only a tobacco spit away from each other) you probably weren't alive the last time Washington was in a pennant race. Then there are the no name kids from Oakland playing moneyball again knocking on the door of a wildcard birth.<br /><br />What are some intriguing World Series match-ups?<br /><br />Baltimore vs Washington - Wouldn't that be interesting in a year when Obama and Romney are battling for the nations capital.<br /><br />How about another bay battle San Francisco vs Oakland, minus the earthquake this time.<br /><br />Could Texas do it for the third straight year. Are the Reds ready to create a new "Big Red Machine" and are the Chicago southsiders ready to unleash more homeruns in the post season. Never forget the Bronx Bombers. Of course I can't forget the defending champs St Louis and there is always Atlanta and the pitching of Tampa. <br /><br />Who is your favorite? What match-up would you like to see?<img src="" height="1" width="1" alt=""/>Garth WeekendApart from Wednesday's kickoff game between the Giants and the Cowboys, Sunday marks the opening games to the NFL season. I am going to complete my predictions for each division. I have already looked at the NFC East, North and West. Today I will finish off the rest of them.<br /><br /><strong>NFC South AFC South</strong><br />Atlanta Houston<br />New Orleans Tennessee<br />Carolina Indianapolis<br />Tampa Bay Jacksonville<br /><br /><strong>AFC East AFC North AFC West</strong><br />New England Pittsburgh Denver<br />Buffalo Baltimore Oakland<br />New York Cleveland San Diego<br />Miami Cincinnati Kansas City<br /><br />I would love to see your predictions. As the season progresses I will tell you who I think will be in the playoffs and the Superbowl.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><img src="" height="1" width="1" alt=""/>Garth OvertimesReally Gophers? You can't make life eay on yourselves. Instead you have to take the Runnin' Rebels of UNLV into three overtimes before dipensing of them. What does this mean in the grand scheme of your game. Are you going to be good this year or mediocre and best? Is there a bowl bid in your near future or the toilet bowl? <br /><br /><a href="" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="112" src="" width="200" /></a>Let's hope you make a better showing against New Hampshire on Saturday September 8th. I suppose it didn't help that you played the game at 10 PM Central and out in the Vegas desert. Who really was able to stay up late on a work night and watch that three overtime game. <br /><br />On to more important things. I really like the College Football overtime format better than the pro football format. It gives both teams a chance to have the ball in overtime and control their own destiny. In the pros if the first team scores too bad for the loser they should have done better on the coin flip. Put the fate and the game on line in the hands of a coin, just doesn't really make sense to me. I would love to see the pros change their overtime format. I believe they have slightly for playoff games but not the regular season. Go all the way and change that format. I am not sure if the college format is suitable for the pros but something different would be nice.<br /><br />Congrats Gophers on a win. I hope to see many more of those this year from you.<img src="" height="1" width="1" alt=""/>Garth's blog is not about one of the most beloved Seven Dwarfs. It's about a problem in sports that is in the news again. This time instead of baseball it is cycling. Lance Armstrong has been stripped of his 7 Tour de France victories because of suspicions that he was doping. <br /><br /><a href="" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="200" src="" width="140" /></a>He never failed a drug test but all of the evidence is based on hearsay by others. He took tests during the Tour de France and during many of his competitions and never once failed a test. So why if someone says something it is the truth but the true tests are false.<br /><br />Regardless of whether Armstrong took PEDs or not. Whether Bonds, and any other ball player did as well. It shouldn't matter too much. According to what I have heard 80% of cyclists use PEDs because of the stamina issue with a race like the Tour de France. Doesn't that put 80% of the racers on level ground. Maybe they test everyone in that race everyday and make the results known to the public so when issues like this come up every one knows the truth.<br /><br />Dropping his side of the issue is a class act and I think it probably shows more his innocence because he just decided it isn't worth fighting when he knows he is right and they won't be convinced. Besides Lance Armstrong will probably be known more in the future for his work for cancer research and the philanthropy side of sports. <br /><br />Lance, for me you will always be the seven time Tour de France winner.<img src="" height="1" width="1" alt=""/>Garth NFC East and WestLast week I gave my prediction for the standings in the NFC North this year. This week I am going to give you my prediction for the NFC East and the West. I am no expert but it is fun to see how correct I am. <br /><br />As I said last week I would love for my readers to share their predictions as well.<br /><br />Without further delay:<br /><br />NFC East - This is always a tough division<br /><br />New York Giants (Defending Superbowl champs win the division)<br />Washington Redskins (Behind RG3)<br />Philadelphia Eagles<br />Dallas Cowboys<br /><br />NFC West - Besides the 9ers who else is there<br /><br />San Francisco 49ers<br />St Louis Rams<br />Arizona Cardinals<br />Seattle Seahawks<br /><br />What are your thoughts?<br /><img src="" height="1" width="1" alt=""/>Garth FootballPreseason football has started. The Vikings had their opening game tonight at San Francisco and it is currently sitting at 17 - 6 with 2:19 left in the game. I didn't get a chance to watch much of the first half of the game. My expectations however are very low. I hope they do well and I hope Adrian Peterson can bounce back from his injury and play the whole season.<br /><br /><a href="" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="" /></a>The Vikings are still in rebuilding and learning mode. The NFC North is going to be a tough division with the Packers, Bears and Lions all very tough teams to beat and the Vikings have to play each of them twice.<br /><br />My hope is that Christian Ponder has improved as a Quarterback and we will see some great plays and some exciting games. If they can go 8 - 8 that would definitely be major improvement.<br /><br />Here is my order for the NFC Central finish no records.<br /><br />1. Packers<br />2. Lions<br />3. Bears<br />4. Vikings<br /><br />Sorry Vikings. I just don't think you have it in you to overtake any of those three. I hope you do but I just don't see it happening. <br /><br />Look for my predictions on other divisions as we lead up to the opening kickoff. Share yours with me as well by leaving a comment.<br /><br /><img src="" height="1" width="1" alt=""/>Garth Olympics, Week 1The first week of the Olympics are officially in the books and there was a lot of excitement in the pool and at the gym. I really enjoy the swimming events and now that my daughter is in swimming it makes it even more exciting to watch. Someday I hope to watch her at the Olympics. The most exciting swimming event for me was seeing Rebecca Soni break the world record in the breast stoke and the 15 year old swim the 800 to gold. Missy Franklin is a great newcomer to the swimming and it is always great to see Phelps and Lochte do well.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div>One event that I was excited to see and haven't seen any is the Equestrian. My other daughter has been taking riding lessons and learning equestrian. I wanted to be able to watch some of that event but we don't get the one NBC channel that has had Equestrian on it.<br /><br />Track has started and it is always fun to see the fastest woman and the fastest man. Just saw Bolt bolt in front of everyone and be only the second man to repeat as 100 meter winner. Congratulations Usain Bolt!<br /><br />It has been a great Olympics. My favorite part of the Olympics is watching the underdogs and the athletes work hard to reach their greatest potential. They enjoy the competition and the fun regardless of who wins. It is after all about competing and doing the best you can to say I competed and made the most of it.<br /><br />That is what we all need to do with the gifts God has given us. Do the best we can to please Him.<img src="" height="1" width="1" alt=""/>Garth Goes to the SouthsideAs a Twins fan it is time to say goodbye to Francisco Liriano. You gave us some ups, a no-hitter against the White Sox last year and 15 strikeouts against the A's earlier this month. You also gave us some downs, your injuries and your inconsistency.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div>I still think you could have been the ace the Twins needed they just needed you to go out and pitch. Pitch like you did over the last month on a consistent basis and you could be a premier pitcher in this league.<br /><br />I wish you all the best with the White Sox except when you pitch against the Twins then be the old Frankie and give up hits and bombs. Hopefully the White Sox will use you like they are supposed to and you will find success.<br /><br />I don't know much about the two players the Twins received for Liriano but they better be worth it and help the team in the future.<img src="" height="1" width="1" alt=""/>Garth OpenAnother golf major is in the books. It was another good one. Too bad for Adam Scott who blew a four shot lead with four straight bogeys to finish his round. <br /><br /><br /><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="134" src="" width="200" /></a>It was nice to see Els earn another major. He was the only one who played some consistent golf throughout the tournament. Scott struggled, Woods struggled, Snedeker struggled and McDowell struggled but Els had a 67, 70, 68 and 68, par or under for all three rounds<br /><br /><a href="" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="112" src="" width="200" /></a>On hole 6 Tiger Woods landed in the bunker. Instead of playing safe he tried to get it out and ended up staying in the bunker with an even more difficult shot. He got out of the bunker from his knees and ended up three putting for a triple bogey. On the same hole one group later Graeme McDowell landed in the same bunker but he played it safe by hitting a shot in the same bunker for a better lie. He ended up getting a 5 on the hole. Of course neither of them won the tournament but who would you rather root for the one who plays it safe or the one who risks it all?<br /><br />That ultimately is what golf is all about risk and reward or playing it safe. Els was consistent and pared hole number 6. But he wasn't in the bunker. The turning point for Woods was hole number six had he played it safe he may have had a chance to win, but that isn't the Tiger way.<br /><br />Congratulations Ernie! On to the PGA and the last major of this year.<img src="" height="1" width="1" alt=""/>Garth 4thToday July 4th, fireworks struck the State of Hockey. The top two premier free agents in hockey decided to join the Wild. They decided not to join for just one or two years but for 13, you heard me right 13 years 98 million dollars.<br /><br /><div class="separator" style="clear: both; text-align: center;">="144" src="" width="200" /></a>Zach Parise and Ryan Suter have agreed to come and play in the state of hockey. They will lace their skates in the Excel Energy Center for the next 13 years. Their hope with the strong young nucleus that is already in Minnesota that they can be a contender and even win the Stanley Cup a few times.</div><br />I couldn't be more excited now to get the hockey season started. Let's get the excitement going and start to think of hockey in the middle of the summer. All signs point to a great season and possibly many more for the Wild. <br /><br />Enjoy the fireworks in the State of Hockey!<br /><img src="" height="1" width="1" alt=""/>Garth Is OverThe Miami Heat are NBA champions. I am not too happy. I would have liked to have seen the Thunder win. When the Heat went up 3-1 they went into tonight's game with the attitude to finish it, and they did.<br /><br />Lebron got his championship. Now where will he go to win another one?<br /><br />Time to look toward next season. Football is just around the corner and of course we have baseball.<img src="" height="1" width="1" alt=""/>Garth Par<a href="" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="134" src="" width="200" /></a>Three inches to the left or two inches too far could make the difference between winning and losing in golf. Another US open has come and gone and despite Webb Simpson posting a +1 and shooting a great final round score the true winner of this year's tournament was the Olympic Club. It gobbled up professional golfers and spit them out. Not a single tournament score under par. I would hate to see what would happen if mere mortals played that course. <br /><br />Even Tiger Woods who usually has his mojo on for that final round found himself quickly consumed by the difficulty of the course. In a matter of the first six holes (coined the toughest opening six holes in golf) he found himself with three bogeys and a double, ouch!!<br /><br />For us peasants who love the game, playing and watching it helps to see those professionals struggle. Jim Furyk who was tied or in the lead since some time on Friday found himself on the tee at 16 having three holes left to secure a win hooked his tee shot far left into the rough and dash any hopes he had to win. It is US Open moments like that that bring hope to the rest of us. You can look at the television and say "yep, I've done that".<br /><br />Another example of mortality is Lee Westwood hitting one into the trees and the ball never coming down. I didn't think professionals ever lost a golf ball except for an occasional water shot. <br /><br />Then there are always the great moments, John Peterson's hole in one, Webb Simpson's comeback and the cool play of a 17 year old amateur.<br /><br />PGA, thanks for another great US open!<br /><br /><br /><img src="" height="1" width="1" alt=""/>Garth and HeatDuring these summer months when the weather gets really hot and it collides with a cold front it produces Thunder. <br /><br />The NBA finals are set with the Oklahoma City Thunder playing the Miami Heat. Kevin Durant and Michael Westbrook against the big three from the Heat, James, Wade and Bosh. Who will it be?<br /><br />I am hoping that a cold front comes to the Heat and produces a Thunder win. We saw the Thunder down two games to none and looking almost left for dead by the Spurs come back and win the next four games. We saw the Heat go up two zero on the Celtics only to almost lose it when the Celtics won three straight. <br /><br />Will we see the Heat of games 6 and 7 and the Thunder of games 3-6? If so we will see quite the series. I am not a big Heat fan and I will be pulling for the Thunder. I do want to see some good games and some games that come down to the wire. I would like to hear who everyone is pulling for. Please leave me a comment on who you like in this series. <br /><br />We saw when the Celtics played good defense and made their shots they prevailed. It was when they let LeBron have his way that they found the Heat taking control. The Thunder will have to play good defense and be able to make their shots to win this series. It won't be easy for either team but lets hope it is entertaining.<br /><br /><img src="" height="1" width="1" alt=""/>Garth No NosYou might think I am going to do a retrospective on Minnesota Twins pitchers who have hurled a no hitter. Like last year's Liriano no hitter, or Eric Milton's 1999 version or how about the 1994 no hitter thrown by Scott Erickson. That would be fun but instead I am going to talk about the Twins connection to all three no-hitters thrown in baseball this year.<br /><br />The first one.<br /><ul><li>On April 21st the Chicago White Sox beat the Seattle Mariners 4-0 and the Chicago pitcher was Philip Humber who threw the ultimate no hitter, a perfect game.</li><ul><li>Twins connection: He was part of a trade between the Mets and the Twins. He didn't pitch much for the Twins and was released and pitched for the Royals and eventually came to the White Sox. The Twins could use him this year.</li></ul></ul><br />The second one:<br /><ul><li>On May 2nd Jared Weaver pitcher for the Anaheim Angels facing the Minnesota Twins proceeded to throw a no hitter. Winning by the score of 9-0.</li><ul><li>Twins connection: The Twins were the victims.</li></ul></ul><br />The third one:<br /><ul><li>On Friday June 1st the New York Mets shutout the St. Louis Cardinals 8-0 with Johan Santana throwing his first no hitter and the Mets first in franchise history.</li><ul><li>Twins connection: Johan used to be the ace of the Minnesota Twins pitching staff until they traded him to the Mets for a handful of players including Philip Humber.</li></ul></ul><br />Three interesting Twins connections to the 2012 no hitters. Congratulations to Humber, Weaver and Santana.<img src="" height="1" width="1" alt=""/>Garth Will It Be?In a lock out shortened season the NBA is continuing their long tradition of lengthy playoffs. By the time the Western and Eastern conference champions will be crowned stores will be advertising their back to school supplies.<br /><br />Both the NBA and the NHL have postseasons that last too long. Baseball is trying to extend theirs next year with a one game wildcard playoff.<br /><br />Once the weather turns nice my sights are set on baseball and I lose focus on the NBA and the NHL. I think the NBA has a couple of good conference finals match-ups. Oklahoma City Thunder versus the San Antonio Spurs in the West and in the East Miami Heat versus the Boston Celtics.<br /><br />Can the crusty old Celtics overcome their age and outduel the Miami franchise that is wrought with enormous expectations and so far disappointment? Can the Thunder utilize the tandem of Durant and Westbrook to put to rest a aging Spurs team that is playing great basketball?<br /><br />It should be an exciting time in the NBA over the next two weeks. I am pulling for an NBA finals match up of Boston verses Oklahoma City. If it is Miami versus the Spurs I doubt I'll even watch. <br /><br />Who are you pulling for? Please share.<img src="" height="1" width="1" alt=""/>Garth AgainCould it be true? Could the Twins be on the road to another 99 loss season? Recent games would indicate that they are well on their way to another dismal season. I really hope not or it is going to be a long baseball season. Here are my reasons why the Twins aren't winning.<br /><br /><strong>1.</strong> Starting pitching is horrible. The starters can throw strikes but that is the problem they throw strikes right over the plate. Opposing batters can hit those out of the park.<br /><strong>2</strong>. No superstar- Joe Mauer hits over 300 but most of his hits are singles and when the big hit is needed he can't deliver. They need a player who will step up and get hits when needed. How do think they won two World Series.<br /><strong>3.</strong> Coaching - Gardenhire and staff are good coaches but I think it is time for a change. Sometimes a new face and a different outlook might help energize some of the players. Lariano needs someone who can figure him out and straighten him out.<br /><br />Make these changes and they will avert another 99 loss season. If they stick to the status-quo then get ready for a lot of losses. They will then wonder why no one goes to Target Field.<img src="" height="1" width="1" alt=""/>Garth In TournamentThe Golden Gophers of Minnesota did not make the big dance this year. However, they are making the most of the 32 team party they were invited to, the NIT. They are one win away from bringing home some hardware from the NIT. It was a thrilling overtime battle with Washington but in the end the Gophers prevailed by 1 point. They now face the team from Stanford on the hardcourt of Madison Square Garden.<br /><br />Let's hope this is a prelude to next year. Let's hope they keep playing hard and if Mbakwe will return (Please return) they could be a force to reckon with. They also might graduate to the big dance in 2013. <br /><br />Go Gophers beat Stanford!!!<img src="" height="1" width="1" alt=""/>Garth | http://feeds.feedburner.com/SportsShortShots | CC-MAIN-2018-13 | refinedweb | 6,133 | 80.31 |
What Do you Think? A Proposal to Java Language For String
+1
To support raw string in java will be great.
something like "@" in c# or (r'rawstring') in python are valuable. It always make something like regex more readable.
In general I'm opposed to most proposals for language changes. But this one I think would be useful.
I believe the PHP syntax for this is somewhat like:
END_MARKER>>>
here is some raw
text like SQL or something
END_MARKER
The cool thing is that you can use anything as a marker. I kindof like this. It is useful. If you can parameterize it it would be good too.
Obvious applications: SQL code, message formatting, large text, etc..
For what it's worth, I tend to stick SQL and lengthy regex's into properties files (which support multi line entries) and read them into a program using java.util.Properties .
This allows some degree of flexibility to change/enhance external data and database formats without needing to re-compile/re-issue the binary. If it turns out there's a bug (an actual example being when one of my apps failed to account for Microsoft smart quotes in a regular expression) I can issue an update by simple sending out a properties file, rather than an entire Jar file.
> For what it's worth, I tend to stick SQL and lengthy
> regex's into properties files (which support multi
> line entries) and read them into a program using
> java.util.Properties .
I tested this using NetBeans. It works and is an acceptable solution to this problem. Here is my sql.xml:
[pre]
</p>
Id bigint primary key generated always as identity,
Name varchar(128),
Modified timestamp not null default current timestamp
)]]>
And here is the code snippet:
[pre]
Properties sqlProps = new Properties();
FileInputStream in = new FileInputStream("src/derbytest/sql.xml");
sqlProps.loadFromXML(in);
in.close();
String createTableCustomer = sqlProps.getProperty("CreateTableCustomer");
String createIndexPK_Customer = sqlProps.getProperty("CreateIndexPK_Customer");
[/pre]
An annotation helper to this would be great, to get the string into .class at compile time, so that there would be no runtime penalty (and what if the sql.xml is inside jar):
[pre]
@Property("src/derbytest/sql.xml", "CreateTableCustomer")
String createTableCustomer;
@Property("src/derbytest/sql.xml", "CreateIndexPK_Customer")
String createIndexPK_Customer;
[/pre]
Also xml CDATA sections are a bit confusing, NetBeans editor does not have any structured editing capabilities, it has to be done manually.
>.
Yes but how do you get the property file if it has been packed into jar? The code below does not work anymore, does it?
[pre]
Properties sqlProps = new Properties();
FileInputStream in = new FileInputStream("src/derbytest/sql.xml");
sqlProps.loadFromXML(in);
in.close();
[/pre]
Also having the string at compile time removes one possibility for error, if there is a typo in property key name.
This is getting a bit too detailed and programming-tutorial-y for this forum but anyway:
> Yes but how do you get the property file if it has been packed into jar?
getClass().getResource(). Google should find examples.
Although, whenever I have my sysadmin hat on, I prefer configuration files outside jars so I don't have to do silly unzipping to see them. Server platforms (Unix) have a perfectly good file system for storing files and text processing tools for dealing with them. YMMV.
> Also having the string at compile time removes one possibility
> for error, if there is a typo in property key name.
Sure. As there is a possibility of having a typo in the SQL string, wherever it is stored, or in an annotation parameter. So let's be careful out there. I have written various strings in program code that refer to external objects, e.g. redirect to a JSP page with a specific name. Can't say I recall having had any major trouble with typos. Fairly basic testing should reveal them quickly anyway (three cheers to jUnit!). So can't see myself losing much sleep over that.
I prefer to write simple SQL strings in Strings, and complex ones usually go naturally in stored procedures. I doubt typing " and + characters to split a string is a measurable percentage of a software project. YMMV.
> This is getting a bit too detailed and
> programming-tutorial-y for this forum but anyway:
We are discussing if we should have multi-line strings or not...
> > Yes but how do you get the property file if it has
> been packed into jar?
>
> getClass().getResource(). Google should find
> examples.
The following works if it is in jar or not:
[pre]
Properties sqlProps = new Properties();
InputStream in = getClass().getResourceAsStream("/derbytest/sql.xml");
sqlProps.loadFromXML(in);
in.close();
[/pre]
I think that this is an acceptable solution to multi-line strings. Of course @" "@ would be better and easier in many ways.
> I prefer to write simple SQL strings in Strings, and
> complex ones usually go naturally in stored
> procedures.
Agree.
> I doubt typing " and + characters to
> split a string is a measurable percentage of a
> software project. YMMV.
I hate \n" + ", it feels like Visual Basic, which I have done enough.
@" "@ would make life easier, and it would be trivial to implement, so why not? Also it would help regular expressions.
> @" "@ would make life easier, and it would be trivial to implement, so why not? Also it would help regular expressions.
I agree. Java's poor support for string literals is one of the reasons why Java has a lousy reputation for text processing applications.
Yesterday I was porting a small app from Ruby to Java. Everytime I encountered a regular expression I needed to litter the regexp with a bunch of backslash escapes, to the extent that the Java version was virtually unreadable compared to the Ruby version.
Why?
When I took a Programming Languages course in college, the text book said that readability is a desireable quality of a programming language. Do the designers of Java think otherwise?
I don't think Java programmers are a bunch of masochists. I know I'm not. Java must continue to evolve, both through inovation and borrowing good ideas from other languages. Otherwise, its user base will move on to the growing number of alternatives.
> When I took a Programming Languages course in
> college, the text book said that readability is a
> desireable quality of a programming language. Do the
> designers of Java think otherwise?
I've done a fair bit of Perl, with quite a few multi-line strings. My opinion is that, while it's convenient for cut and paste etc., it's the very opposite of readable in the context of language syntax. It's just not so instantly apparent what's code and what's literal.
To my mind the answer is better handling of String literals by the IDEs.
On the other hand, I do take the point about regexp. Regexps were not used in early versions of Java, but they are increasingly valuable. I think you could make a better case for a regexp literal type, as used in JavaScript. It could, in fact, generate a Pattern value rather than a String value. A "/" delimiter isn't, AFAIKS, a big problem for the syntax (since "/" isn't a valid start of a primary expression).
All string resources must be out of java code!!!
Java is not a script language!
> All string resources must be out of java code!!!
> Java is not a script language!
well said, especially very long strings which are almost always things like status messages and SQL queries which should always be externalised.
i18n and changing database requirements dictate that.
In my experience anyone needing long string constants in his source has a an architecture problem.
Anyone needing copious amounts of annotations has one of those as well...
The quotes and + pretty much clutters the DAO code. Larger SQLs are very difficult to read due to this cluttering. I think Mustang should incorporate some of the good features from C# - Like the free form text as well as ability to extend String class.
Something you don't have to forget is that String is not modifiable, so if you have a lot of lines like these
String stat = "insert into xy;"
+ "insert into xy; "
+ "insert into xy; "
+ "insert into xy; "
...
your app can get slow at runtime!
> Something you don't have to forget is that String is
> not modifiable, so if you have a lot of lines like
> these
>
> String + "insert into xy; "
> + "insert into xy; "
> + "insert into xy; "
> ...
>
> your app can get slow at runtime!
No it doesn't run slow, the compiler concatenates the strings at compile time.
Yeah, I found out also similar to your syntax. The C# way.
static void Main(string[] args)
{
string hello = @"
The Quick Brown
Fox Jumps Over
The Lazy Dog
";
Console.WriteLine(hello);
} // Main()
which is nice.
@" and "@ would make it possible to have embedded " characters as well. In some language this is extended so that you can have arbitrary characters between @ and " so that you can have any string literal, for example:
@a"this makes it possible to have "@ in the string literal"a@
-10
Why the heck do so many people want change for the sake of change.
Learn to use the tools at your disposal rather than force the world into your frame of mind.
Usually people want to do things easily. It is much easier to copy-paste a SQL string to
FROM Customer
WHERE Id = 123"@
than write the following manually:
"SELECT *\nFROM\tCustomer\nWHERE\tId = 123"
and it is also much easier to read the first version later.
Easier to write and easier to read, so better, not?
> "SELECT *\nFROM\tCustomer\nWHERE\tId = 123"
You can improve the readability of this by splitting it into lines and using string concatenation. Also by using spaces instead of \n and \t.
Maybe something like this:
[pre]
String sql =
"from customer " +
"where id = ?";
[/pre]
Out of a, say, six man month project, how many days do you lose if you have to type it like that? Is the typing of extra two quotes and a + sign a significant source of schedule slippage in your typical projects?
Ugly kludge for a teeny weeny insignificant improvement...
But it is better to have the original line feeds and tabs preserved, so that the SQL is formatted correctly when later viewed in log files or SQL monitor.
You just need to approve that multi-line string literals would be a good thing, and the current Java string literals are very primitive and make code ugly and coding unnecessary difficult.
I would personally agree on the sake of SQL being easier with a multi-line string setup as does makes it easier to read, snag out and test, and verify versus a broken up string. Its one of the things I _love_ about Python, but I don't like any way its being proposed in this thread. If it happens someday, cool... If not, then I can keep going without lossing sleep over it, but thats my thoughts.
Not bad. Any fix for the current String literals is more than welcome. A fix that handles both SQL and regular expressions (no need to double every \ character).
My proposal was simply
@"any text here
even line feeds
or tabs or backslashes \ no problem"@
so @" in the beginning and "@ in the end.
For those that want to play with something ......
For info on using it see
line breaks are resolved to the line.separator value at runtime, which is something no-one has discussed in this thread yet.
Bruce
> For those that want to play with something ...
>
>
> t/java/dev/rapt/proposed/generators/LongString.html
I would be happy with this, even though I didn't quite understand it. It would solve the problem of indentation as well, because white space + * would be stripped off from the beginning of line.
> For info on using it see
>
>
> erview-summary.html
>
> line breaks are resolved to the line.separator value
> at runtime, which is something no-one has discussed
> in this thread yet.
Because it does not matter much if it is \n or \r\n or \r because it is just white space in SQL, and in many other cases as well.
again syntactics yes, syntactics.
I think the "import static" is a big non-sense in Java IMHO. Why'd they approved this crap and why not "multi-string" instead!
As its usability and practice, I wonder how many Java Programmers take advantage on this! Take a look on "Static Imports" area . Does this promotes productivity or chaos.
Man, I love static imports :-) I use them intentively.
And since static imports are explicitly specified in imports section, there is no ambiguity or chaos...
I just love it, when properly used, it decreases overall verbosity and improves readability of the code.
Why'd I say non-sense?
>>>> Because its just a redundant
---------------------------------------------
This
---------------------------------------------
import static System.out;
public class NewClass{
public static void main( String[] args ) {
out.println( "Hello" );
}
}
----------------------------------------------
Same as
----------------------------------------------
public class NewClass {
public static PrintStream out = System.out;
public static void main( String[] args ) {
out.println( "Hello" );
}
}
a. What problem does it practically trying to solve why it is endorsed?
b. What problem of productivity does that solution addresses?
c. Before the 'static import', do we have problem with readability of declaring static variables inside the class and using it? - same, it also decreases verbosity.
>>>> Because it is prone to pollute the code and slowly adds confusion.
Try to ask before using this feature,
Does this VERBOSITY BUSTER helps me after a month or two?
But one thing I surely knows of, yes it adds quiz value to Java Certification Exam.
------------------------------------------------
A - class file
------------------------------------------------
package staticImport;
public class A {
public static int AMBIGUOUS_INT = 1;
}
------------------------------------------------
B - class file
------------------------------------------------
package staticImport;
import static staticImport.A.AMBIGUOUS_INT;
public class B {
private static final int AMBIGUOUS_INT = 2;
public static void main( String[] args ) {
System.out.println( AMBIGUOUS_INT );
}
}
------------------------------------------------
Put it this way, you are setting at your workbench try to figure out your next big java5 project. At fresh you gathered your team, plan-design- and ready now to implement. And the members are so excited! You can tell them by the expressions of their faces. Why? Maybe, because of this marvelous new feature. You can't blame them, they've been on horrors of java < 5 'declaring static variables inside the class'.
Its time to implement, then next morning comes (with a confident smile)... start to code... then next morning start to code again and again, without questioning the multi-line string - 'why this feature instead of this feature' in java5 improvements. But anyway, you are happy, they are happy, it saves the day (coding the embedded scripts, the regex, the dao-sql, the xmls, the texts) - still keep telling at their selves, "See we're the best, we go deep into strings escape character, double quotes and plus sign", and still proud!
Lets turn back. Normally as the days goes along, your codes gets more and more. Afterwards - you've finished on time fortunately (with your static import team). Surprise! a loud voice broke the silence. Out of feeling of completeness... someone in your team jumps and shouts with joy, "Yes! What a feature, it decreases verbosity!". Another one cries out loud, "It helps us finish our project on time!". But out of 10 members 1 realizes why not multiline-string instead, keep quiet with a bow head - and reminisced, anyway the project is finished.
Finally, as the months or two goes by, no problem seen so far. Then you start a new fresh project. Ooch! When suddenly (1)someone, somebody else discovered a creepy bug crawling mysteriously! (2)Then another next one, post a new features(willing to buy your working time on time).
Now it is debugging time! It is maintenance time! ...Team! Ready... Start scratching your head and meet your old code friends and start jotting your new years resolution!
Conclusion? I'm afraid on the part of readability. I think its confusability feature as the time of judgement comes. That's why IMHO, I say non-sense as compares to multiline-string feature except for the other features added. But still the keyword is still "be careful" as it applies to every endeavors and setups of using.
sorry, but it's so funny seen so many lines about nothing... just entertaining lines with almost no argument...
well, this is wrong thread to take this any further...
importing statically System.out is just a bad example, suppose you have an enum with 30 items (fields) and you use them all intensively in other class (some parsing stuff or alike). write some code like this with and without static import and you'll understand...
IDE will quickly tell you where statically imported method or field comes from, exactly the same way we are already used to with ordinarily imported classes...
i think that static import is designed consistently enough to be productive
[b]please note[/b], that the same arguments you used against static imports apply also to
- imports at all (static all not, you can import first.SomeClass and second.SomeClass)
- class loading (multiple classes with the same name, incompatible at runtime)
conclusion :-)
static import is safe and can be helpful. more important risks and dangers are elsewhere...
> sorry, but it's so funny seen so many lines about nothing... just entertaining lines with almost no argument...
> well, this is wrong thread to take this any further...
I don't think so, perhaps maybe entertaining.
I am not going to argue you on this - I agree this is the wrong thread. I am just trying to give an insight of 'instead'.
> importing statically System.out is just a bad example, suppose you have an enum with 30 items (fields) and you use them all
> intensively in other class (some parsing stuff or alike). write some code like this with and without static import and you'll understand...
I can use MyEnum.ONE...MyEnum.THIRTY its clearer this way.
If you have two enum with identical items - it will clash, still you resort to conventional way.
Worst still, the compiler won't halt on identical field name; one declared as 'import static' and the other one declared as 'public static' inside the class.
I have no idea on some IDE, but in netbeans it won't give you a warning/error signal.
> IDE will quickly tell you where statically imported method or field comes from, exactly the same
> way we are already used to with ordinarily imported classes...
Yes I know, in netbeans it does.
> i think that static import is designed consistently enough to be productive
> please note, that the same arguments you used against static imports apply also to
I am still not convinced.
Here is mine to picture.
>>>>>>>>>>>> dependents [start] <<<<<<<<<<<<<
---------------------------------------------
File: EnumOf30 package importstatic2.penum
---------------------------------------------
package importstatic2.penum;
public enum EnumOf30 {
ONE, TWO, THREE, FOUR, FIVE, SIX, SEVEN, EIGHT, NINE, TEN,
ELEVEN, TWELVE, THIRTEEN, FOURTEEN, FIFTEEN, SIXTEEN, SEVENTEEN, EIGHTEEN, NINETEEN, TWENTY,
TWENTY_ONE, TWENTY_TWO, TWENTY_THREE, TWENTY_FOUR, TWENTY_FIVE, TWENTY_SIX, TWENTY_SEVEN, TWENTY_EIGHT, TWENTY_NINE, THIRTY
}
---------------------------------------------
File: EnumOf30 package importstatic2.penum2
---------------------------------------------
package importstatic2.penum2;
public enum EnumOf30 {
FOREVER, NEVER
}
---------------------------------------------
File: EnumOfSame30 package importstatic2.penum2
---------------------------------------------
package importstatic2.penum2;
public enum EnumOfSame30 {
ONE, TWO, THREE, FOUR, FIVE, SIX, SEVEN, EIGHT, NINE, TEN,
ELEVEN, TWELVE, THIRTEEN, FOURTHEEN, FIFTEEN, SIXTEEN, SEVENTEEN, EIGHTEEN, NINETEEN, TWENTY,
TWENTY_ONE, TWENTY_TWO, TWENTY_THREE, TWENTY_FOUR, TWENTY_FIVE, TWENTY_SIX, TWENTY_SEVEN, TWENTY_EIGHT, TWENTY_NINE, THIRTY
}
---------------------------------------------
File: ClassSameName package importstatic2.samename
---------------------------------------------
package importstatic2.samename;
public class ClassSameName {
public static void doLikeThis() {
System.out.println( "doLikeThis" );
}
}
---------------------------------------------
File: ClassSameName package importstatic2.samename.with
---------------------------------------------
package importstatic2.samename.with;
public class ClassSameName {
public static void doLikeThat() {
System.out.println( "doLikeThat" );
}
}
>>>>>>>>>>>>>> dependent [end] <<<<<<<<<<<<<<<
> - imports at all (static all not, you can import first.SomeClass and second.SomeClass)
> - class loading (multiple classes with the same name, incompatible at runtime)
---------------------------------------------
File: ClassThatUseStaticImportForEnumOf30 Using static import - Modern Way?
---------------------------------------------
package importstatic2;
import static importstatic2.penum.EnumOf30.*;
import static importstatic2.penum2.EnumOf30.*;
//import static importstatic2.penum2.EnumOfSame30.*; // clashes
import static importstatic2.samename.ClassSameName.*; // incompatible at runtime?
import static importstatic2.samename.with.ClassSameName.*; // incompatible at runtime?
public class ClassThatUseStaticImportForEnumOf30 {
public static int ONE = 1; // compiler wont halt about ambiguity
public static void main( String[] args ) {
System.out.println( ONE );
System.out.println( FOREVER );
doLikeThis(); // Any idea where this came from? scenario: I am on the console mode, I can't use Eclipse or Netbeans
doLikeThat(); // I am on dark age using vi or some editor say tinkering my jnode operating system.
}
}
---------------------------------------------
File: ClassThatDoesNotUseStaticImportForEnumOf30 Compares to Conventional Way: Does not use static import
---------------------------------------------
package importstatic2;
import importstatic2.penum.EnumOf30; // clearer
public class ClassThatDoesNotUseStaticImportForEnumOf30 {
public static void main( String[] args ) {
System.out.println( EnumOf30.ONE ); // clearer
}
}
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
> conclusion :)
Maybe its personal preference.
> static import is safe and can be helpful. more important risks and dangers are elsewhere...
1st stmt, I disagree. 2nd stmt, yes I agree. Again, personal preferences. I don't think this is a breakthrough :( back to the main topic.
> I can use MyEnum.ONE...MyEnum.THIRTY its clearer this
> way.
clearer, yes, and preferred, yes, unless you need to decrease redundant verbosity without sacrifying clarity... static imports give you the tool...
> If you have two enum with identical items - it will
> clash, still you resort to conventional way.
that's it, they will clash the same way regular imports do... and you resolve this just like you do when using javax.jms.MessageListener and mymessaging.MessageListener in the same class... one of them is fully qualified...
what's wrong with that?
> Worst still, the compiler won't halt on identical
> field name; one declared as 'import static' and the
> other one declared as 'public static' inside the
> class.
that's because locally defined fields are by design visible without importing and therefore they are preferred to any imports, static or not...
the same applies to nested classes, try it.
this is totally consistent behaviour and static imports just conform, they do not introduce anything new...
> import static importstatic2.penum.EnumOf30.*;
> import static importstatic2.penum2.EnumOf30.*;
> //import static importstatic2.penum2.EnumOfSame30.*;
> // clashes
first of all, wildcard import is the way to hell, fight against this one, they cause much more damage than static imports ...
second, compiler prefers explicit (named) imports to wildcards...
[code]
enum Color { RED, GREEN, BLUE; }
enum Color2 { RED, GREEN, BLUE; }
...
import static Color.RED;
import static Color2.*;
...
Object red = RED; // no conflict, using Color.RED;
[/code]
but there will be conflict if you wildcard import both of them...
again, what's wrong with that?
> public static int ONE = 1; // compiler wont halt
> halt about ambiguity
because there is no ambiguity, locally visible type/variable overrides any import (static or not)...
(see above)
> doLikeThis(); // Any idea where this came
> his came from? scenario: I am on the console mode, I
> can't use Eclipse or Netbeans
i don't buy this...
if you want to argument like this, you can safely go ahead and
- fight against imports altogether and use only fully qualified names
- attack inheritance and polymorphism
- etc
because in console mode and VI, you are totally LOST anyway, and the bigger code base is the worse. static imports do not change this a lot..
convinced? wanna continue?
you may start a new thread for this, I might join you there for a while :-)
But I think I said enough...
/peace/
./peace/ accepted. I gave up :)
In case others would like to spend with this topic, I made a neighboring thread dedicated to this. Let us hear from others opinion.
>because in console mode and VI, you are totally LOST anyway, and the bigger code base is the worse. static imports do not change this a lot..
My answer of concern continues here: What Do you Think? "import static" vetoed or voted?
BTW, I have no idea how to indent my code snippets. I hate it, its something of a kind messy here when I copy-paste. My apology. NEED LITTLE HELP :(
> BTW, I have no idea how to indent my code snippets. I
> hate it, its something of a kind messy here when I
> copy-paste. My apology. NEED LITTLE HELP :(
[code]interface bar {
public static crud();
}[/code]
Was achieved by wrapping in [[b][/b]code] and [[b][/b]/code] tags. And to get the [ in the previous sentence without it being interpreted as a code tag I used [[[b][/b]b][[b][/b]/b] if you are interested. Then I applied that recursively to write the sentence prior to this one, which looks much 'orribler in the editor than you are seeing :)
Bruce
To get back to the subject, what do you think about the following:
It has a nice example how we can have multi-line strings:
[code]);
}
[/code]
$ is a valid identifier. | https://www.java.net/node/645723?page=1 | CC-MAIN-2015-22 | refinedweb | 4,118 | 66.44 |
view raw
Like the title says, I'm looking for some simple way to run JUnit 4.x tests several times in a row automatically using Eclipse.
An example would be running the same test 10 times in a row and reporting back the result.
We already have a complex way of doing this but I'm looking for a simple way of doing it so that I can be sorta sure that the flaky test I've been trying to fix stays fixed.
An ideal solution would be an Eclipse plugin/setting/feature that I am unaware of.
The easiest (as in least amount of new code required) way to do this is to run the test as a parametrized test (annotate with an @RunWith(Parameterized.class) and add a method to provide 10 empty parameters). That way the framework will run the test 10 times.
This test would need to be the only test in the class, or better put all test methods should need to be run 10 times in the class.
Here is an example:
@RunWith(Parameterized.class) public class RunTenTimes { @Parameterized.Parameters public static List<Object[]> data() { return Arrays.asList(new Object[10][0]); } public RunTenTimes() { } @Test public void runsTenTimes() { System.out.println("run"); } }
With the above, it is possible to even do it with a parameter-less constructor, but I'm not sure if the framework authors intended that, or if that will break in the future.
If you are implementing your own runner, then you could have the runner run the test 10 times. If you are using a third party runner, then with 4.7, you can use the new @Rule annotation and implement the
MethodRule interface so that it takes the statement and executes it 10 times in a for loop. The current disadvantage of this approach is that @Before and @After get run only once. This will likely change in the next version of JUnit (the @Before will run after the @Rule), but regardless you will be acting on the same instance of the object (something that isn't true of the `prameterized runner). This assumes that whatever runner you are running the class with correctly recognizes the @Rule annotations. That is only the case if it is delegating to the JUnit runners.
If you are running with a custom runner that does not recognize the @Rule annotation, then you are really stuck with having to write your own runner that delegates appropriately to that Runner and runs it 10 times.
Note that there are other ways to potentially solve this (such as the Theories runner) but they all require a runner. Unfortunately JUnit does not currently support layers of runners. That is a runner that chains other runners. | https://codedump.io/share/g78QKPRD6KTw/1/easy-way-of-running-the-same-junit-test-over-and-over | CC-MAIN-2017-22 | refinedweb | 459 | 62.27 |
In this article, I will examine how you can improve the performance of an ASP.NET MVC application by taking advantage of the following components:
The easiest way to implement cache on MVC view is to add an [OutputCache] attribute to either an individual controller action or an entire controller class. Here is a controller action GetWeather() that will be cached for 15 seconds.
[OutputCache]
GetWeather()
[OutputCache(Duration = 15, VaryByParam = "None")]
public ActionResult GetWeather(string Id)
{
}
To cache your entire controller, you will implement [OutputCache] attribute on an entire controller class as shown below:
[OutputCache(Duration = 15, VaryByParam = "None")]
public class WeatherController : Controller
{
//
// GET: /Weather/
public ActionResult Index()
{
return View();
}
public ActionResult GetWeather(string Id)
{
}
}
For more details, please read this.
You can also control the cache programmatically by implementing Cache API. The System.Web.Caching.Cache class works like a dictionary. You can add key and item pairs to the Cache class. When you add an item to the Cache class, the item is cached on the server. The following code example adds an item to the cache with a sliding expiration time of 10 minutes:
Cache
System.Web.Caching.Cache
Cache.Insert("Key", "Value",
null, System.Web.Caching.Cache.NoAbsoluteExpiration,
new TimeSpan(0, 10, 0));
The one limitation of the ASP.NET Cache object is that it runs in the same process as your web application. It is not a distributed cache. If you want to share the same ASP.NET Cache among multiple machines, you must duplicate the cache for each machine. In this situation, you need to use a distributed cache. To implement distributed cache, you can use the Microsoft distributed cache (code-named Velocity) with an ASP.NET MVC application. Here is a great article by Stephen Walther where he explains in detail.
You can also cache any HTTP get request in the user browser for a predefined time, if the user requests the same URL in that predefined time the response will be loaded from the browser cache instead of the server. Here is another great article about ASP.NET MVC Action Filter - Caching and Compression by Kazi Manzur where he explains in detail.
get
The easiest way to implement compression is to apply IIS Compression, and here is a great article that explains more in detail.
You can apply the action filter to compress your response in your ASP.NET MVC application. A great article that explains this in detail is ASP.NET MVC Action Filter - Caching and Compression.
There are several problems with ASP.NET MVC application when deployed on IIS 6.0. Here is a solution presented by Omar AL Zabir.
There's also a port of the YUICompress for .NET on CodePlex that compresses any Javascript and Cascading Style Sheets to an efficient level.
You can Build Rich client site User Interfaces with jQuery. There is a great article by Dino Esposito on Building Rich User interfaces. You should also consider using Google hosted AJAX libraries API as it will improve the site performance. Please see Test Drive of the Google Hosted Ajax Libraries.
ASP.NET MVC Client-side Resource Combine is another great library which is available at CodePlex. This library requires you to organize client-side resources into separate sets, each with different configuration settings (although there's nothing stopping you from having a 1-file resource set). The resources in each set are to be minified, combined, compressed, and cached together and therefore can be requested in 1 single HTTP request. Refer to the project CodePlex page for detailed usage and binary/code download. The library uses the great YUI Compressor library for the minification part.
When we develop ASP.NET applications using Visual Studio, the default value for debug attribute is true. These settings will give a poor performance in production if released in the default debug mode. So, never release your website or application with debug mode set to true. It should be set to false in web.config when moving to production.
debug
true
false
<compilation debug="false" />
The following default httpModules element is configured in the root Web.config file in the .NET Framework version 2.0.="RoleManager" type="System.Web.Security.RoleManagerModule" />
" />
</httpModules>
You can remove the modules you don't need in the web.config like so:
<httpModules>
<remove name="PassportAuthentication" />
<remove name="Profile" />
<remove name="AnonymousIdentification" />
</httpModules>
ASP.NET MVC framework offers the following methods to generate URL:
Html.ActionLink()
Html.RouteLink()
Url.Action()
Url.RouteUrl()
Html.RouteLink() is equivalent to Html.ActionLink():
Html.ActionLink()
<%= Html.RouteLink("Click here", new {controller= "Weather", action= "GetWeather"}) %>
//will render the following <a href="/Weather/GetWeather">Click here</a>
Similarly, Url.RouteUrl() is equivalent to Url.Action():
Url.Action()
<%= Url.RouteUrl(new {controller= "Weather", action= "GetWeather"}) %>
will render the following /Weather/GetWeather
However, these methods can have a performance impact on your application. Chad Moran has run performance tests on his blog and he shows improving ASP.NET MVC performance through URL generation.
Here are a few useful links to read more:
In this article, we examined how to improve the performance of an ASP.NET MVC application by taking advantage of the caching and HTTP compression that are available in .NET Framework. We also examined other open source libraries such as JQuery and combining scripts and other resources to improve the MVC application performance.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Chad Moran's link is dead, you may update with the one I just have found; not sure who wrote it orginally, but very sure that your article is very useful and must have working links. :)
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/41930/How-To-Improve-the-Performance-of-ASP-NET-MVC-Web?msg=3179091 | CC-MAIN-2016-40 | refinedweb | 976 | 57.27 |
I want to generate a corresponding object for each line of a text file, store it in an Object type array, and output the contents.
Mathematics, 80, Taro Yamada English, 72, Taro Yamada Mathematics, 90, Ichiro Suzuki
package test; import java.io. *; public class Test { public static void main (String [] args) { try { File file = new File ("test.txt"); dataRoad (file); } catch (IOException e) { } } public static void dataRoad (File file) throws IOException { BufferedReader bufferedReader = new BufferedReader (new InputStreamReader (new FileInputStream (file), "UTF-8")); Object [] text = new Object [3]; String textfile; String [] splitLine = new String [3]; int count = 0; while ((textfile = bufferedReader.readLine ())! = null) { splitLine = textfile.split (",", 0); if (splitLine [0] .equals ("math")) { text [count] = new Math (splitLine [1], splitLine [2]); } else if (splitLine [0] .equals ("English")) { text [count] = new English (splitLine [1], splitLine [2]); } count ++; } bufferedReader.close (); System.out.println (text [0] .toString ()); System.out.println (text [1] .toString ()); System.out.println (text [2] .toString ()); } }
package test; public class Math { private static String score; private static String name; public Math (String score, String name) { this.score = score; this.name = name; } public String toString () { return "score =" + this.score + ", name =" + this.name; } }
package test; public class English { private static String score; private static String name; public English (String score, String name) { this.score = score; this.name = name; } public String toString () { return "score =" + this.score + ", name =" + this.name; } }
When I execute the above Test.java, I want the output to be as follows.
Score = 80, Name = Taro Yamada
Score = 72, Name = Taro Yamada
Score = 90, Name = Ichiro Suzuki
actually
Score = 90, Name = Ichiro Suzuki
Score = 72, Name = Taro Yamada
Score = 90, Name = Ichiro Suzuki
Will be output.
I thought that the cause was that the element of the String array splitLine was changed in the while statement, but in that case, the second line of the output result is not output as score = 90, name = Ichiro Suzuki. I thought.
Why is the content unified to that of the last created object only for objects of the same class type?
It may be a difficult question to understand, but thanks for your cooperation.
- Answer # 1
- Answer # 2
Math/
Englishof
name/
scoreBut
staticThat's why
Related articles
- regarding instance of java object(){}
- [java] i want to return a reference to the youngest object
- javascript - how to convert an array to an object
- java - how to count the bars surrounding 1 in a multidimensional array where 0s and 1s are entered
- ruby - i want to pass an instance variable in the controller to the form object model
- get the class of java array
- java object oriented
- java - an error occurs in the element part of the array and it does not resolve
- [java] handling of array elements and null
- pass by value of java array variable
- java instance generation method
- java - about array code
- javascript - how to loop an object in an array of js
- java - array problems with methods
- vuejs - array of date properties in vew instance is not recognized
- [java] about method call when subclass instance is created with super class type
- c # - [unity] object reference not set to an instance of an object error resolution
- i want to make a multi-dimensional array in java
- assign java array to
Math and English member variables are declared static. When creating a new instance of Math, the name and score are assigned, but the assignment destination is not independent for each instance, and it will be assigned to a common variable.
Please delete the static. | https://www.tutorialfor.com/questions-323730.htm | CC-MAIN-2021-25 | refinedweb | 579 | 59.74 |
Problem:
You have a filename in the Linux shell and want to strip/remove the filename extension from it – e.g. if you have
myarchive.zip, you want to get only
myarchive as output.
You have a filename in the Linux shell and want to strip/remove the filename extension from it – e.g. if you have
myarchive.zip, you want to get only
myarchive as output.
In C/C++ you want to encode/decode something from/to Base64. libtomcrypt is WTFPL-licensed and therefore provides a good choice for both commercial and non-commercial projects.
You want to calculate a hash of any string in C/C++. LibTomCrypt is WTFPL licensed, so it’s a good choice for commercial and non-commercial projects.
Recently hackthissite.org was recommended to me — it’s really fun to play around with, even if I think some of the challenges are not that realistic any more.
I thought it would be just as much fun to post some of my solutions to the programming challenges here. If not absolutely neccessary for understanding the underlying algorithm, I won’t post any information about how to use the programs, because the purpose of these posts shall be to understand it, not to use it in order to solve the HTS challenges..
Instead.
In C++11 you want to iterate over a smart pointer (
auto_ptr,
shared_ptr, …). collection, say a
std::vector, using the new
for loop syntax.
Let’s try it out:
using namespace std; shared_ptr<vector<int> > smartptr(/* A ptr to your vector */); for(int s : smartptr) { /* do something useful */ }
When trying to compile this code, GCC emits the following error message (other lines are omitted for the sake of simplicity)
error: no matching function for call to 'begin(std::shared_ptr<std::vector<int> >&)' error: no matching function for call to 'end(std::shared_ptr<std::vector<int> >&)'
or, when
LANG=de is set:
Fehler: keine passende Funktion für Aufruf von »begin(std::shared_ptr<std::vector<int> >&)« Fehler: keine passende Funktion für Aufruf von »end(std::shared_ptr<std::vector<int> >&)«
In NodeJS, you got a size of a file in bytes, but you want to format it for better readability.
For example, if your size is
10000 bytes, you want to print
10 kilobytes, but if it is
1200000, you want to print
1.20 Megabytes.
To determine the size of a file in NodeJS (e.g. to get the size of
myfile.txt) use
fs.stat() or
fs.statSync() like this:
const fs = require("fs"); //Load the filesystem module const stats = fs.statSync("myfile.txt"); const fileSizeInBytes = stats.size; //Convert the file size to megabytes (optional) const fileSizeInMegabytes = fileSizeInBytes / 1000000.0;
Another option is to use the following function:
function getFilesizeInBytes(filename) { const stats = fs.statSync(filename); const fileSizeInBytes = stats.size; return fileSizeInBytes; } | https://techoverflow.net/page/35/ | CC-MAIN-2019-51 | refinedweb | 468 | 64.61 |
This is a basic tutorial on Perl. The goal is to get a quick working understanding of the language. Examples on this page are based on Perl 5.14.2. (released on )
in Perl, every variable name must start with one of {$ @ %}.
$means the VALUE of the variable is a “scalar”. i.e. string, number.
@means the VALUE of the variable is a array.
%means the VALUE of the variable is a hash table (aka dictionary, associative list.).
# -*- coding: utf-8 -*- # perl use Data::Dumper; # for printing array and hash $Data::Dumper::Indent = 0; # set to print compact $a = 4; # scalar @a = (1, 2, 3); # array %a = ('e' => 4, 'f' => 5, 'g' => 6); # hash print $a, "\n"; # 4 print Dumper(\@a), "\n"; # $VAR1 = [1,2,3]; print Dumper(\%a), "\n"; # $VAR1 = {'e' => 4,'g' => 6,'f' => 5};; # this is 4
Summery: 2 major ways to quote strings strings: single quote and double quote.
'single quote'→ everything is literal.
"double quote"→ backslash is a char escape, and variables inside it will be evaluated.
You can also use the syntax
q(this n that), which is equivalent to
'this n that'.
The parenthesis can be curly brackets
{} or square brackets
[]. It can also be
/ \ | @ and most others ASCII symbols.
# -*- coding: utf-8 -*- # perl # the following are all same $a = q(this 'n' that); $b = q[this 'n' that]; $c = q{this 'n' that}; $d = q|this 'n' that|; $e = "this 'n' that"; $f = 'this \'n\' that';
Similarly,
"…" is same as
qq(…).
substr(‹string›, ‹offset›, ‹number of chars to extract›).
# -*- coding: utf-8 -*- # perl # get substring print substr('012345', 2, 2); # prints 23
Length of string is
length(‹string›).
print length('abc');
use dot to join string.
# -*- coding: utf-8 -*- # perl $s = "a" . "b"; print $s; # ab
String repetition is done with the operator
x.
# -*- coding: utf-8 -*- # perl print 'abc' x 2; # abcabc
〔☛ Python, Ruby, Perl: Basic String Operations〕
Perl does not have a boolean type. Basically, anything that seems should be false, is false. (of course, it can be tricky). The following are false:
undef
""empty string
Everything else is true.
Perl does automatic conversion between number and string, so
'0' is false in some contexts because it converts to
0. But
'0.0' is true, because it remains a string, and is not empty string.
The value of Perl's {array, list, hash}, depends on context (what's adjacent to it), and is not very intuitive.
The best thing is to test what you need exactly. For example, check if the length of a list is 0, or whether a var has value 0, or whether it is
undef.
# -*- coding: utf-8 -*- # perl use strict; if (0) { print "yes"} else { print "no"} # ⇒ no if (0.0) { print "yes"} else { print "no"} # ⇒ no if ("0") { print "yes"} else { print "no"} # ⇒ no if ("") { print "yes"} else { print "no"} # ⇒ no if (undef) { print "yes"} else { print "no"} # ⇒ no
# -*- coding: utf-8 -*- # perl use strict; # empty array is false my @myArray = (); if (@myArray) { print "yes"} else { print "no"} # ⇒ no
# -*- coding: utf-8 -*- # perl use strict; # empty hash is false my %myHash = (); if (%myHash) { print "yes"} else { print "no"} # ⇒ no
# -*- coding: utf-8 -*- # perl use strict; if (1) { print "yes"} else { print "no"} # ⇒ yes if ("0.0") { print "yes"} else { print "no"} # ⇒ yes if (".0") { print "yes"} else { print "no"} # ⇒ yes
# -*- coding: utf-8 -*- # perl use strict; # examples of explicit testing my $x = 5; my $y; if (defined($x)) { print "yes"} else { print "no"} # ⇒ yes if (defined($y)) { print "yes"} else { print "no"} # ⇒ no if ($x == 0) { print "yes"} else { print "no"} # ⇒ no
# -*- coding: utf-8 -*- # perl use strict; # testing array length my @myArray = (); my $myArrayLength = scalar @myArray; if ($myArrayLength == 0) { print "yes"} else { print "no"} # ⇒ yes
#-*- coding: utf-8 -*- # perl # Examples of if $x = 1; if ($x == 1) { print "x yes\n"; } $y = 2; if ($y == 1) { print "y yes\n"; } else { print "y no\n"; } $z = 2; if ($z < 0) { print 'z neg'; } elsif ($z == 0) { print 'z zero'; } elsif ($z == 1) { print 'z one'; } else { print 'z other'; }
perldoc perlsyn
#-*- coding: utf-8 -*- # perl @aa = (1..9); # creates a array 1 to 9 for $xx (@aa) { print $xx } # ⇒ 123456789
Note: Perl also supports loop controls “next”, “last”, “goto” and few others.
#-*- coding: utf-8 -*- # perl for $xx (1..9) { print $xx; if ($xx == 4) { last; # break } } # ⇒ 1234
#-*- coding: utf-8 -*- # perl $x = 1; while ($x <= 9) { print $x, "\n"; $x++; }
# -*- coding: utf-8 -*- # perl @a = (0, 1, 2, 'three', 4, 5, 6, 7); # assigns a list to @a. use Data::Dumper; # loads the list-printing module print '@a is:', Dumper(\@a);
The backslash in front of
@a is necessary. It returns the “reference” of the array
@a, and the argument to Dumper must be a reference. Once a list is assigned to a variable, it's called array.
Perl's concept of “list” and “array” is a bit complex. Basically, when a list is assigned to a variable, it's a array. For detail, see: Perl List vs Array — the Nether Mumble Jumble. When a list/array is in a scalar context, it returns its length. The function
scalar forces things in a scalar context.
To add a element, or join two lists, use
push(‹array›, ‹new item›).
# -*- coding: utf-8 -*- # perl use Data::Dumper; @b = (1, 9); push(@b, 3); # add a element to @b, at the end print Dumper(\@b); # [1, 9, 3]
# -*- coding: utf-8 -*- # perl use Data::Dumper; , [3, 4]]
Square brackets actually creates a reference to a array.
# -*- coding: utf-8 -*- # perl use Data::Dumper; @a = (1, 8); # array $b = [1, 8]; # reference to array print Dumper(\@a); # [1, 8] print Dumper($b); # [1, 8]
To extract list element, append with
[‹index›].
Here a example of extracting sublist (aka slice).
# -*- coding: utf-8 -*- # perl use Data::Dumper; @a = (0, 1, 2, 'three', 4, 5, 6, 7); @b = @a[1..4]; # the 1..4 creates a range print Dumper \@b; # [1, 2, 'three', 4]
To replace parts, just assign them. ⁖
$myArray[3] = "heart";.
# -*-. ⁖
@b = (4, 5, \@myarray, 7).
# -*- coding: utf-8 -*- # perl use Data::Dumper; @a=(1, 2, 3); @b = (4, 5, \@a, 7); # embed @a as sublist. print '@b', Dumper \@b; # [ 4, 5, [ 1, 2, 3 ], 7 ]
To extract element from nested list, use this form:
$‹array name›[‹first level index›]->[‹2nd level index›]->[‹3rd level index›]….
# -*- coding: utf-8 -*- # perl use Data::Dumper; @b = (1, 2, ['x', 'y'], 3); $c = $b[2]->[1]; print $c; # 'y' @b = (1, 2, ['x', [4, 5], 7], 3); $c = $b[2]->[1]->[1]; print $c; # 5
perldoc perldata
Python, Ruby, Perl: List Basics
In Perl, keyed-list is called hash table, or just hash. It is done like this:
# -*- coding: utf-8 -*- # perl use Data::Dumper qw(Dumper); # load the Dumper function for printing array/hash $Data::Dumper::Indent = 0; # make it print in compact style # hash table %hh = ('john'=>3, 'mary'=> 4, 'jane'=> 5, 'vicky'=>7); print Dumper \%hh; # {'jane' => 5,'john' => 3,'vicky' => 7,'mary' => 4}
The line
use Data::Dumper qw(Dumper); loads the function “Dumper” from the package “Data::Dumper”.
The purpose of Dumper is to print array and hash.
Variable of hash datatype must begin with
% in their name.
# -*- coding: utf-8 -*- # perl use Data::Dumper qw(Dumper); # for printing list or hash %hh = ('john' =>3, 'mary' => 4, 'jane' => 5, 'vicky' => 7); print Dumper \%hh; # get value from a key print $hh{'mary'}; # 4 # delete a entry delete $hh{'vicky'}; print Dumper \%hh; # { 'jane' => 5, 'john' => 3, 'mary' => 4 } # get all keys print Dumper [keys %hh]; # [ 'jane', 'john', 'mary' ] # get all values (Perl 5.12. released in 2010) print Dumper [values %hh]; # [ 5, 3, 4] # check if a key exists print exists $hh{'mary'}; # returns 1, meaning true.
If you are going to get values of a hash, you use
$ in front of the hash variable. ⁖
$b{'mary'}.
The Dumper function's argument needs to be a “reference” to
the hash.
So, you can use it like this:
Dumper(\%b)
or
Dumper([%b]).
(parenthesis is usually optional)
〔☛ Python, Ruby, Perl: Dictionary, Hash〕
Use “grep” to remove elements in a list. The form is one of:
grep {‹true/false function name› $_} ‹array›
grep {‹expression on $_› ;} ‹array›
Example:
# -*- coding: utf-8 -*- # perl use Data::Dumper; sub ff {return $_[0] % 2 == 0}; # return true if divisible by 2 print Dumper[ grep {ff $_} (0..10)]; # ⇒ [ 0, 2, 4, 6, 8, 10 ]
@_→ a builtin variable that's all the arguments passed to a subroutine, as array. So,
$_[0]is the first argument passed.
$_→ a builtin variable that's the default input for regex to match, and in general represents a default argument.
The
(0..10) generate a list from 0 to 10.
The
% above is the operator for computing remainder of a division.
The
Data::Dumper module is to import the “Dumper” function for printing list.
Use “map” to apply a function to a list. The basic form is
map {‹function name›($_)} ‹list›. It returns a list.
# -*- coding: utf-8 -*- # perl use Data::Dumper; $Data::Dumper::Indent=0; sub ff {return ($_[0])**2;}; # square a number print Dumper [ map { ff($_)} (0..10)]; # ⇒ $VAR1 = ['0','1','4',9,'16',25,36,49,'64',81,100];
The
** is the exponential operator.
〔☛ Python, Ruby, Perl: Apply a Function to a List〕
In Perl, a library is called a module. The standard filename suffix is “.pm”.
For a script, the filename suffix is “.pl”.
To get a list of standard module that are bundled with Perl (but not necessarily installed). perldoc perlmodlib..
To load a package, call
use ‹package name›;. It will import all functions in that package. Example:
# -*- coding: utf-8 -*- # perl # loading some commonly used packages use Data::Dumper; # for printing list and hash use File::Find; # for traversing directories
To find out what functions are available in a module, read its documentation, for example
perldoc Data::Dumper.
Here is a example showing module paths and loaded modules:
# -*- coding: utf-8 -*- # perl use Data::Dumper; print Dumper \@INC; # prints all module searching paths print Dumper \%INC; # prints all loaded modules __END__ sample output: $VAR1 = [ '/etc/perl', '/usr/local/lib/perl/5.12.4', '/usr/local/share/perl/5.12.4', '/usr/lib/perl5', '/usr/share/perl5', '/usr/lib/perl/5.12', '/usr/share/perl/5.12', '/usr/local/lib/site_perl', '.' ]; $VAR1 = { 'warnings/register.pm' => '/usr/share/perl/5.12/warnings/register.pm', 'bytes.pm' => '/usr/share/perl/5.12/bytes.pm', 'XSLoader.pm' => '/usr/share/perl/5.12/XSLoader.pm', 'Carp.pm' => '/usr/share/perl/5.12/Carp.pm', 'Exporter.pm' => '/usr/share/perl/5.12/Exporter.pm', 'strict.pm' => '/usr/share/perl/5.12/strict.pm', 'warnings.pm' => '/usr/share/perl/5.12/warnings.pm', 'overload.pm' => '/usr/share/perl/5.12/overload.pm', 'Data/Dumper.pm' => '/usr/lib/perl/5.12/Data/Dumper.pm' };
For more info about the predefined variables
@INC and
%INC.
perldoc perlvar
〔☛ Python & Perl: Using Modules/Packages/Library〕
Here is a example of defining a function.
# -*- coding: utf-8 -*- # perl use Data::Dumper; $Data::Dumper::Indent = 0; # print in compact style # define a function sub ff { $a = $_[0]; # get first arg $b = $_[1]; # get second arg # arguments are automatically assigned to array @_ print Dumper(\@_); # prints the array @_ # use “return” to return value and exit the function return $a + $b; } ff(3, 4, "rabbit"); # $VAR1 = [3,4,'rabbit'];
Note: Unlike most other languages, subroutine's parameters are USUALLY not declared.
Arguments are automatically assigned to the array
@_. So,
$_[0] is the first element of the array
@_. The
@_ a builtin variable.
To define a function with optional parameters, just use
defined($_[n]) to check if the argument is given.
# -*- coding: utf-8 -*- # perl # myFun(x,y) returns x+y. y is optional and default to 1. sub myFun { $x = $_[0]; if (defined $_[1]) { $y = $_[1]; } else { $y = 1; } return $x+$y; } print myFun(3); # 4
perldoc perlsub
For another example, see: Python & Perl: Defining A Function ◇ Python, Ruby, Perl: Defining A Function
In the following, i show you how to write a library in Perl by a example.
Save the following 3 lines in a file and name it 〔mymodule.pm〕.
# -*- coding: utf-8 -*- # perl package mymodule; # declaring the module sub f1($){$_[0]+1} # module body 1 # module must return a true value
Then, call it like the following way:
# -*- coding: utf-8 -*- # perl ‹file name›..
Python & Perl: Writing A Module | http://xahlee.info/perl-python/perl_basics.html | CC-MAIN-2013-20 | refinedweb | 2,087 | 69.41 |
10.7. Text Sentiment Classification: Using Recurrent Neural Networks¶
Text classification is a common task in natural language processing, which transforms a sequence of text of indefinite length into a category of text. This section will focus on one of the sub-questions in this field: using text sentiment classification to analyze the emotions of the text’s author. This problem is also called sentiment analysis and has a wide range of applications. For example, we can analyze user reviews of products to obtain user satisfaction statistics, or analyze user sentiments about market conditions and use it to predict future trends. collections import gluonbook as gb from mxnet import gluon, init, nd from mxnet.contrib import text from mxnet.gluon import data as gdata, loss as gloss, nn, rnn, utils as gutils import os import random import tarfile
10.
10.7.1.1. Reading Data¶
We first download this data set to the “../data” path and extract it to “../data/aclImdb”.
In [2]:
# This function is saved in the gluonbook package for future use. def download_imdb(data_dir='../data'): url = ('') sha1 = '01ada507287d82875905620988597833ad4e0903' fname = gutils.download(url, data_dir, sha1_hash=sha1) with tarfile.open(fname, 'r') as f: f.extractall(data_dir) download_imdb()
Next, read the training and test data sets. Each example is a review and its corresponding label: 1 indicates “positive” and 0 indicates “negative”.
In [3]:
def read_imdb(folder='train'): # This function is saved in the gluonbook package for future use. data = [] for label in ['pos', 'neg']: folder_name = os.path.join('../data/aclImdb/', folder, label) for file in os.listdir(folder_name): with open(os.path.join(folder_name, file), 'rb') as f: review = f.read().decode('utf-8').replace('\n', '').lower() data.append([review, 1 if label == 'pos' else 0]) random.shuffle(data) return data train_data, test_data = read_imdb('train'), read_imdb('test')
10.7.1.2. Data Preprocessing¶
We need to segment each review to get a review with segmented words. The
get_tokenized_imdb function defined here uses the easiest method:
word tokenization based on spaces.
In [4]:
def get_tokenized_imdb(data): # This function is saved in the gluonbook package for future use. def tokenizer(text): return [tok.lower() for tok in text.split(' ')] return [tokenizer(review) for review, _ in data]
Now, we can create a dictionary based on the training data set with the words segmented. Here, we have filtered out words that appear less than 5 times.
In [5]:
def get_vocab_imdb(data): # This function is saved in the gluonbook package for future use. tokenized_data = get_tokenized_imdb(data) counter = collections.Counter([tk for st in tokenized_data for tk in st]) return text.vocab.Vocabulary(counter, min_freq=5) vocab = get_vocab_imdb(train_data) '# Words in vocab:', len(vocab)
Out[5]:
('# Words in vocab:', 46151)
Because the reviews have different lengths, so they cannot be directly
combined into mini-batches, we define the
preprocess_imdb function
to segment each comment, convert it into a word index through a
dictionary, and then fix the length of each comment to 500 by truncating
or adding 0s.
In [6]:
def preprocess_imdb(data, vocab): # This function is saved in the gluonbook package for future use. max_l = 500 # Make the length of each comment 500 by truncating or adding 0s. def pad(x): return x[:max_l] if len(x) > max_l else x + [0] * (max_l - len(x)) tokenized_data = get_tokenized_imdb(data) features = nd.array([pad(vocab.to_indices(x)) for x in tokenized_data]) labels = nd.array([score for _, score in data]) return features, labels
10.7.1.3. Create Data Iterator¶
Now, we will create a data iterator. Each iteration will return a mini-batch of data.
In [7]:
batch_size = 64 train_set = gdata.ArrayDataset(*preprocess_imdb(train_data, vocab)) test_set = gdata.ArrayDataset(*preprocess_imdb(test_data, voc)
10.7.2. Use a Recurrent Neural Network Model¶
In this model, each word first obtains a feature vector from the
embedding layer. Then, we further encode the feature sequence using a
bidirectional recurrent neural network to obtain sequence information.
Finally, we transform the encoded sequence information to output through
the fully connected layer. Specifically, we can concatenate hidden
states of bidirectional long-short term memory in the initial time step
and final time step and pass it to the output layer classification as
encoded feature sequence information. In the
BiRNN class implemented
below, the
Embedding instance is the embedding layer, the
LSTM
instance is the hidden layer for sequence encoding, and the
Dense
instance is the output layer for generated classification results.
In [9]:, gb.try_all_gpus() net = BiRNN(vocab, embed_size, num_hiddens, num_layers) net.initialize(init.Xavier(), ctx=ctx)
10.7.2.1. Load Pre-trained Word Vectors¶
Because the training data set for sentiment classification is not very
large, in order to deal with overfitting, we will directly use word
vectors pre-trained on a larger corpus as the feature vectors of all
words. Here, we load a 100-dimensional GloVe word vector for each word
in the dictionary
vocab.
In [11]:
glove_embedding = text.embedding.create( 'glove', pretrained_file_name='glove.6B.100d.txt', vocabulary=vocab)
Then, we will use these word vectors as feature vectors for each word in
the reviews. Note that the dimensions of the pre-trained word vectors
need to be consistent with the embedding layer output size
embed_size in the created model. In addition, we no longer update
these word vectors during training.
In [12]:
net.embedding.weight.set_data(glove_embedding.idx_to_vec) net.embedding.collect_params().setattr('grad_req', 'null')
10.7.2.2. Train and Evaluate the Model¶
Now, we can start training.
In [13]:
lr, num_epochs = 0.01,.6502, train acc 0.605, test acc 0.764, time 63.0 sec epoch 2, loss 0.4378, train acc 0.806, test acc 0.829, time 63.1 sec epoch 3, loss 0.3920, train acc 0.829, test acc 0.809, time 62.3 sec epoch 4, loss 0.3627, train acc 0.844, test acc 0.834, time 62.7 sec epoch 5, loss 0.3328, train acc 0.860, test acc 0.814, time 61.7 sec
Finally, define the prediction function.
In [14]:
# This function is saved in the gluonbook package for future use. def predict_sentiment(net, vocab, sentence): sentence = nd.array(vocab.to_indices(sentence), ctx=gb.try_gpu()) label = nd.argmax(net(sentence.reshape((1, -1))), axis=1) return 'positive' if label.asscalar() == 1 else 'negative'
Then, use the trained model to classify the sentiments of two simple sentences.
In [15]:
predict_sentiment(net, vocab, ['this', 'movie', 'is', 'so', 'great'])
Out[15]:
'positive'
In [16]:
predict_sentiment(net, vocab, ['this', 'movie', 'is', 'so', 'bad'])
Out[16]:
'negative'
10.7.3. Summary¶
- Text classification transforms a sequence of text of indefinite length into a category of text. This is a downstream application of word embedding.
- We can apply pre-trained word vectors and recurrent neural networks to classify the emotions in a text.
10.7.4. Problems¶
- Increase the number of epochs. What accuracy rate can you achieve on the training and testing data sets? What about trying to re-tune other hyper-parameters?
- Will using larger pre-trained word vectors, such as 300-dimensional GloVe word vectors, improve classification accuracy?
- Can we improve the classification accuracy by using the spaCy word tokenization tool? You need to install spaCy:
pip install spacyand install the English package:
python -m spacy download en. In the code, first import spacy:
import spacy. Then, load the spacy English package:
spacy_en = spacy.load('en'). Finally, define the function
def tokenizer(text): return [tok.text for tok in spacy_en.tokenizer(text)]and replace the original
tokenizerfunction. It should be noted that GloVe’s word vector uses “-” to connect each word when storing noun phrases. For example, the phrase “new york” is represented as “new-york” in GloVe. After using spaCy tokenization, “new york” may be stored as “new york”.
10. | http://gluon.ai/chapter_natural-language-processing/sentiment-analysis-rnn.html | CC-MAIN-2019-04 | refinedweb | 1,284 | 52.36 |
Undefined behaviors are like blind spots in a programming language; they are areas where the specification imposes no requirements. In other words, if you write code that executes an operation whose behavior is undefined, the language implementation can do anything it likes. In practice, a few specific undefined behaviors in C and C++ (buffer overflows and integer overflows, mainly) have caused, and are continuing to cause, a large amount of economic damage in the form of exploitable vulnerabilities. On the other hand, undefined behaviors have advantages: they simplify compiler implementations and permit more efficient code to be generated. Although the stakes are high, no solid understanding of the trade-offs exists because, for reasons I don’t understand, the academic programming languages community has basically ignored the issue. This may be starting to change, and recently I’ve learned about two new papers about undefined behavior, one from UIUC and the other (not yet publicly available, but hopefully soon) from MIT will appear in the “Correctness” session at APSYS 2012 later this month. Just to be clear: plenty has been written about avoiding specific undefined behaviors, generally by enforcing memory safety or similar. But prior to these two papers, nothing has been written about undefined behavior in general.
UPDATE: I should have mentioned Michael Norrish’s work. Perhaps this paper is the best place to start. Michael’s thesis is also excellent.
UPDATE: The MIT paper is available now.
People writing formal semantics for C will typically talk about (and specify) undefined behaviour “in general”. Or do you mean undefined behaviours across multiple languages?
“Undefined behavior: C vs. the World”
Ignoring the obvious C++, what are the other languages that explicitly use undefined behavior? Even the languages that have no formal semantics and are defined by “whatever the official interpreter does” are arguably still well-defined by that (though not necessarily in a way that’s useful).
I’ve never seen other languages go beyond unspecified behavior: we must do something, the end result is well-defined, but we don’t document the algorithm by which we get there so we’re free to change it and you can’t complain if your program breaks because you were relying on the specifics. Undefined behavior of the “we can make demons fly out of your nose in the name of optimization” kind seems like a real C-ism to me.
I think academic studies of the pros and cons are long overdue. This won’t lead to UB being banished from the language, of course, but hopefully a more rational approach to when the trade-offs are worth it.
I don’t really understand section 2.6 of the UIUC paper. Are they making the (trivially correct) claim that it’s undecidable to detect UB statically? (It’s undecidable to detect *any* sort of behavior statically; that’s the Halting Problem.) Or are they making the interesting claim that it’s impossible to detect UB at runtime?
They say “this raises the question of whether one can *monitor* for undefined behaviors”, which makes it sound like they are making the interesting claim. But then in the flip() example, they say, “At iteration n of the loop above, r can be any one of 2n values. Because undefinedness can depend on the particular value of a variable, all these possible states would need to be stored and checked at each step of computation” — i.e., “Because r can take on any one of 2^32 values, we need at least 2^32 bits of memory to evaluate the current state of this program”… which is obviously a false claim.
In fact I believe the author of this very blog has written a tool that monitors arbitrary C programs to detect *exactly* the kind of UB (signed left-shift) the detection of which they claim to be undecidable!
*And by “undecidable” I mean “uncomputable”.
*And by “detect X” I mean “invariably decide whether X occurs or not”. Obviously you can statically detect that certain programs don’t use the << operator at all, and others initialize static variables to -1<<1, and so on.
Our semantics for R6RS addresses some of the underspecification in Scheme. In this PDF, check out section 4 (starting on section 4).
(Whoops, that’s the R5RS semantics, but the same issue in the R6 one.)
Hi Michael, can you point to some specific examples (outside of your own work, which I should have mentioned)?
Hi Jeroen, other than C/C++ the place where I’ve most commonly seen undefined behavior is at the machine level. As in, “if you touch bit 3 of the control register, the behavior is undefined.”
For more examples please see Section 2.7 of Chucky’s paper (the first one linked in this post).
Hi Arthur, the UIUC authors are talking about the difficulty of detecting undefined behavior dynamically. I also find this to be interesting! In fact, the other day I tried to write a blog post about it, but then I couldn’t quite make sense of their argument. I eventually gave up since I was not sure which part of the confusion came from the paper and which part came from my own head.
Arthur, the part about flip() in Chucky’s paper is also where I got stopped. My opinion is that this issue can only be cleared up by defining the problem more precisely. Maybe I’ll try to write this blog post again.
regehr, jeroen: Another common place for undefined behaviour is in concurrency. e.g. the JVM memory model makes a lot of language-level behaviour explicitly undefined when there’s no happens-between relationship between two interacting operations.
(I of course mean happens-before. Brain hiccup)
Jeroen: we give some examples of other languages in the (UIUC) paper John linked to. These include Scheme, Haskell, Perl, and Ruby.
Arthur: we are talking about dynamic checking for UB. We argue that it is equivalent to the halting problem, even dynamically. Given
int main(void){
guard();
5 / 0;
}
The only way you can show this program has undefined behaviors is to show guard() terminates. This is obviously undecidable statically, but it is equally undecidable dynamically. Even knowing that you’ve successfully been executing for 30 days doesn’t help you decide whether guard() terminates or not.
The stuff about monitoring is a slightly different take on the idea. Since it’s clear that checking a program for undefinedness is undecidable statically and dynamically, what about simply detecting undefined behavior as you run? It’s also clear you can do this for a single way of evaluation (after all, this is what tools like John’s IOC does), but it only works for a single compiler/doesn’t account for things like C’s nondeterministic behavior. For example:
int choice;
int f(int x) {
return choice = x;
}
int flip() {
return (f(0) + f(1)), choice;
}
Calling flip() will return either 0 or 1 nondeterministically. The flip() function is not undefined, just nondeterministic. Because true detection of UB requires you to consider all the valid ways of evaluating, we explored this in the monitoring bit. This idea is sort of related to runtime predictive analysis.
In the paper we argue that to keep track of all the possible ways of evaluating a program, even while monitoring, is intractable for nondeterministic programs and again undecidable for multi-threaded programs. Again, of course you can keep track of a single evaluation, but that’s not all that interesting. I’m sorry we didn’t explain this better in the TR.
One more caveat, the flip() example given in the paper runs into UB itself quite quickly due to shifting problems etc, but we also explain that it’s for didactic purposes. We simply wanted to show that there might be 2^n possible behaviors for n times through the loop. A complete example would need to use allocated memory, etc. to avoid overflowing and blah blah. To be completely technical, since C has a fixed pointer size and all memory has addresses, C has a finite number of memory and is not turing complete. We figured this wasn’t really relevant.
I hope this makes some sense!
Chucky, thanks for chiming in!
My sticking point (as we discussed) is the question of what counts as a “possible way” of interpreting a C program. I think this can be nailed down, but I didn’t feel like your TR did that.
@Chucky (13): By “undecidable” you also mean “uncomputable”, right? (Yeah, we can ignore the fact that C isn’t a Turing machine.)
That flip() is a bad example, because I happen to believe that it *does* exhibit undefined behavior. You modify “choice” twice without an intervening sequence point (e.g. in the case that the implementation spawns a new thread to compute f(0) and f(1) concurrently). That there happen to be two sequence points upon entry to f({0,1}) and two more sequence points upon return is completely irrelevant. But I recognize that experts disagree [with me ;)] on the subject.
For a little while this morning, I thought you might be saying that it’s difficult to detect the presence of UB in an expression like (a()+b()+c()+…), because you’d have to consider at least N! possible orders of evaluation. But then I remembered that you came up with the really neat idea of having your “C virtual machine” cache writes and flush the cache only at sequence points, which seems (handwave) to allow you to detect the multiple-writes-between-sequence-points kind of UB in basically linear time; it doesn’t *matter* what order the writes came in.
I believe Papaspyrou’s semantics discusses undefined behaviour correctly. I think he and I were the first to try to get it right in a formal setting (his thesis and mine came out at about the same time (late 90s)).
Authur,
The definedness of f(0) + f(1) above comes from, I believe, “Every evaluation in the calling function (including other function calls) that is not otherwise specifically sequenced before or after the execution of the body of the called function is indeterminately sequenced with respect to the execution of the called function.” (n1570, 6.5.2.2:10) I think it used to be less clear in previous versions of C.
I do argue that a()+b()+c()+… means you have to try all the combinations of evaluation if you want to ensure that such a program is without undefined behaviors. Consider:
int a = 0, …, m = 0;
int a(){
a = 1;
}
…
int m(){
m = 1;
}
int n() {
if (a && … && m) {
5 / 0;
}
}
int main(){
a() + … + n();
}
Only those evaluations where n() is called last will exhibit undefined behavior. You really have to consider all possible evaluations to detect stuff since different paths can have different behaviors.
Michael,
Papaspyrou does cover different evaluation orders, and might even handle (x=5) + (x=6) kinds of stuff, but his (and yours (and mine)) misses all kinds of other undefined behavior. One nice one is the (n1570, 6.5.16.”
I’m pretty sure we all three miss that one. There are hundreds more.
Chucky,
I certainly miss that one, not to mention the horrible example that Freek Wiedijk and Robbert Krebbers brought to my attention from Defect Report 260 (see)
Having to track the “provenance” of a pointer just makes me want to give up on the whole language. Again, I blame it on compiler writers with too much influence.
Michael that defect report made me throw up a little.
Not to reveal my age, but I cut my compiler teeth on Ada ’83, and they really tackled, or at least addressed, a lot of important issues, including undefined behavior. As I recall, any program that relied on undefined behavior was erroneous. For example, order of evaluation of subprogram arguments is undefined. If argument evalauations create side effects that in turn affect the program results, that program is incorrect. For example, if you ported that program to a compiler that had different evaluation order the program would compute an incorrect result. The Ada 83 language manual codified the word erroneous. It’s definitely worth a read. I wish all languages had such a good language definition manual. | https://blog.regehr.org/archives/746 | CC-MAIN-2019-39 | refinedweb | 2,058 | 61.97 |
There are certain times for confessions, and now seems like such a time so … here goes: My name is Rob, and I’m an addict. I am addicted to Matlab. If I didn’t have Matlab, my productivity would go to near-zero. For this reason, like any good junkie, I tolerate its expensive fees, obnoxious quirks and serious limitations. That is, until recently.
Some recent events, however, have forced me to start looking for alternatives. After casting around, speaking to friends and colleagues, I have came up with a number of alternatives. These include the “open-source Matlab”, Octave; a programming language called Ruby; and a language called Python. All of these languages have the benefit of being OpenSource, enjoy the support of strong communities, and are relatively easy to learn. So after careful thought and consideration (okay … so some thought and consideration), I have made a serious decision: I am going to learn Python!
A Manifesto
Like all addicts with a serious habit to kick, you have to start with a commitment. Moreover, as many of my past experiments will attest to, it has to be the type of commitment that you will take seriously. If you fail, it has to hurt! And not just a little kind of hurt, either. I’m talking about the kind of hurt that happens when you fall off a seriously pissed off bucking bull. If there isn’t a risk of being maimed … well … that’s just dull. So, here is my manifesto:
I will learn Python.
It’s public, it’s out there and people know. When motivation is needed, the fear of public humiliation is one of the best motivating factors I am aware of.
The Rationale
With the important bit out of the way, I thought I would share a bit of my (somewhat) rational decision making. In doing so, let’s talk about Python for a bit. Once we do that, then let’s talk about why Python is a really good stand in for Matlab. After addressing those two topics, then I’ll really get to the meat of what I intend to do and expound on the frameworks which might get me there.
But first …
Why Python?
The answer to the first question is very simple: Python is really, really cool. Python offers some enormous benefits with almost no costs (at least none that I have found till now). Here are just a few of its strengths.
- Like Matlab, it is a dynamic (or scripting) language.
- Like Matlab, it has an enormous number of existing libraries and functions available. Indeed, after a bit of surface scratching, I’ve come to the conclusion that it might have even more libraries and functions available than Matlab does. Put differently: Python comes with batteries included.
- Python has some seriously beautiful syntax.
Okay, I know how number three sounds, but it’s true. Unlike Matlab, whose syntax evolved from C and C++ in an evolutionary process that makes evolution from pond-scum seem simple; Python was actually designed, in a very intelligent manner! Consider the following code:
def DoSomething:
if condition == option1:
Thingy1()
if condition == option2:
Thingy2()
You might notice a few simple things. First, the code lines up. This might seem trivial, but it really simplifies a lot of problems. Greatest among them is an issue known as readability. Let me put it in a slightly different way, have you ever come back to a project after some time and realized that you have no idea what a given piece of code is doing? I certainly have (shudders). This problem is almost nonexistent in Python.
You can actually find things! That’s because python uses indentation to mark sections of code. Everything indented under DoSomething belongs to that function. Everything indented under the condition1 conditional statement belongs there. This make it very easy to locate and understand what a given chunk of code is doing. Unlike in C or C++, or even C#, you don’t need to wrestle with the brackets. If you are a coder, you know what I mean. If you aren’t, they look like this: { }, [ ], ( ), or < >. No matter who you are, the mere sight of these monstrosities should dredge some horrific memory from your childhood.
So what about Matlab?
Okay, so we can agree that Python is a well engineered language, but how does it compare to Matlab. I’ve already mentioned that it has a huge number of libraries. But, so does Matlab. Indeed, with its various toolboxes, Matlab is one of the most comprehensive languages available. Want to connect to a database and query records, there is a Matlab toolbox which can do that. Want to do image processing? Ditto. What about Medical Image Processing? Turns out, there is a wrapper for the Insight Toolkit. Pretty cool, huh?
With its position as the defacto language of computational engineering, you would assume that Matlab has an edge on Python. As I alluded to above, this is simply not so. While Matlab does indeed have a huge number of add-ons, these add-ons are not free. Quite the opposite, actually. You will pay dearly for them. For the image toolkit, this amounts to several hundred dollars (per academic user, I was unable to find the commercial price). In comparison, there are many image processing libraries written in Python. While some of them may require some form of payment, the others are OpenSource. This means that they are probably free, in every sense. Once you find the website for the tool, you likely only need to click on the “Download Now” button. There is no messy business with credit cards. More important than free, as in “free beer,” is the other type of freedom.
OpenSource PR people will call the second freedom, “free as in speech.” While confusing, it is actually a rather simple idea. Namely: you can inspect the source code of the functions which you are using. Have a question on the input of a specific function? Doesn’t discuss it in the documentation? When using a proprietary tool such as Matlab, you have dead-ended. With Python, not so. You can inspect the source code to determine what inputs are needed. Nifty!
Now where?
As you have probably surmised, the geek meter is clear in the red zone. What can I say? OpenSource makes me misty eyed. For the practically inclined, however, you probably don’t care. So, let me conclude with the practicalities. Python is clearly awesome, and free. Yet, like all good things, it is not quite that simple. In this case, however, complexity is bliss. For, you see, there are multiple types of Python. The first is standard old Python. While boring it might be called, but let us settle on the name CPython instead. But there are other Pythons, one is written in Java and called Jython. Jython can use the not only the Python libraries, but the Java libraries as well. While I agree that Java is a great and evil beast, it is also immensely old and powerful. If it can’t be done in Java, it likely cannot be done.
The other Python is called IronPython, and it is special too. In addition to the Python classes, it also has access to … the Microsoft and Mono .Net libraries. If it is written in C#, it can be used be used from IronPython, as a Python class. Now, that is power! So, while I will eventually spend some time with CPython and Jython, I am beginning my journey with IronPython. As with all great undertakings, beginnings are important. So, here is what I’ve tried.
- I have purchased a guide. Specifically, I bought Michael Foord’s excellent book, “Iron Python in Action” which is available from Manning. Though it technically hasn’t been published yet, you can get an advance copy through the Early Access Program. While slightly expensive ($49.99), it is money well spent. I will probably post a complete review soon.
- I have download the files. You can find the IronPython interpreter and everything else you need to get started at the IronPython Codeplex Website.
- I have found a good text editor. There are some fantastic free and OpenSource editors including Notepad2 and NotePad++. I have used and really like EditPadPro. At present, though, I am thoroughly infatuated an editor called e (also called TextMate on Windows). Stupid name, great text editor.
- I have started reading the book. Chapters 1 to 3 of “IronPython in Action” are great. The first three chapters start with a fantastic summary of Python and .Net.
- I am thoroughly committed! The figure below shows my past experiences in trying to learn to program. It was originally shown to me by a much older (and somewhat wiser), mentor. While it does not make me feel better, it does provide some warning of the stages. Forewarned is forearmed.
- Phase 1 : In the beginning, it’s just hard. Ridiculously so, in fact. The texts and available tutorials assume a basic degree of knowledge that I never seem to have. While it is disheartening, this phase eventually passes. It does involve a lot of head butting, unfortunately.
- Phase 2 : And … you progress to a state of cluelessness. While still out of the depth, it is possible to make progress. It’s just very difficult.
- Phase 3 : But … it gets better, and then it starts becoming fun.
- Finally … a state resembling completion arrives.
While I am still (securely) at phase 2, it is starting to get better, which provides hope.
A Word on the Articles
With the groundwork laid, I will close with a word about these writings. It is not my purpose to teach IronPython with this series of article. Rather, I hope to provide a snapshot of what my learning process looks like. To those seeking instruction, you really should get Michael Foord’s book. All you will find here is a fellow (and somewhat foulmouthed) traveler. I still hope you come back though, at a minimum you will find some words on my own personal discoveries.
Conclusion
With that, I start my journey into unknown territory. As I travel, I will share what I learn. I am junkie with a serious habit to kick. This will hopefully provide some entertainment to those around me. In the words of the young and the eternally clueless: This will be fun!
3 Responses to “Learning IronPython – Part 1 – A Halfhearted Manifesto”
[…], … […]
Hi Rob,
This is Steven with Manning Publications. Could you shoot me an email? I’d like to use a snippet of this post in our marketing…
Steven Hong
Marketing Coordinator
Manning Publications
Care to comment? | http://blog.oak-tree.us/index.php/2008/11/18/ironpython-part1 | CC-MAIN-2017-22 | refinedweb | 1,786 | 76.42 |
Perl's functions (including things that look like functions, like some keywords and named operators) arranged by category. Some functions appear in more than one place.
chomp,
chop,
chr,
crypt,
hex,
index,
lc,
lcfirst,
length,
oct,
ord,
pack,
q//,
qq//,
reverse,
rindex,
sprintf,
substr,
tr///,
uc,
ucfirst,
y///
m//,
pos,
quotemeta,
s///,
split,
study,
qr//
abs,
atan2,
cos,
exp,
hex,
int,
log,
oct,
rand,
sin,
sqrt,
srand
pop,
push,
shift,
splice,
unshift
grep,
join,
map,
qw//,
reverse,
sort,
unpack
delete,
each,
exists,
keys,
values
binmode,
closedir,
dbmclose,
dbmopen,
die,
eof,
fileno,
flock,
format,
getc,
printf,
read,
readdir,
rewinddir,
seek,
seekdir,
syscall,
sysread,
sysseek,
syswrite,
tell,
telldir,
truncate,
warn,
write
pack,
read,
syscall,
sysread,
syswrite,
unpack,
vec
-X,
chdir,
chmod,
chown,
chroot,
fcntl,
glob,
ioctl,
link,
lstat,
mkdir,
open,
opendir,
readlink,
rename,
rmdir,
stat,
symlink,
sysopen,
umask,
unlink,
utime
caller,
continue,
die,
do,
dump,
eval,
exit,
goto,
redo,
return,
sub,
wantarray
caller,
import,
local,
my,
our,
package,
use
defined,
dump,
eval,
formline,
local,
my,
our,
abs,
bless,
chomp,
chr,
exists,
formline,
glob,
import,
lc,
lcfirst,
lock,
map,
my,
no,
our,
prototype,
qr//,
qw//,
qx//,
readline,
readpipe,
ref,
sub*,
sysopen,
tie,
tied,
uc,
ucfirst,
untie,
use
* -
sub was a keyword in perl4, but in perl5 it is an operator, which can be used in expressions.
dbmclose,
dbmopen _;
Returns the absolute value of its argument. If VALUE is omitted, uses
$_.
Accepts an incoming socket connect, just as the accept(2) system call may interface to access setitimer(2) if your system supports it. See perlfaq8 for details.
It is usually a mistake to intermix
alarm and
sleep calls. (
sleep may be internally implemented in your system with perlipc.
eval { local $SIG{ALRM} = sub { die "alarm\n" }; # NB: \n required alarm $timeout; $nread = sysread SOCKET, $buffer, $size; alarm 0; }; if ($@) { die unless $@ eq "alarm\n"; # propagate unexpected errors # timed out } else { # didn't }
For more information see perlipc.
Returns the arctangent of Y/X in the range -PI to PI.
For the tangent operation, you may use the
Math::Trig::tan function, or use the familiar relation:
sub tan { sin($_[0]) / cos($_[0]) }
Note that atan2(0, 0) is not well-defined.
Binds a network address to a socket, just as the bind system call does. Returns true if it succeeded, false otherwise. NAME should be a packed address of the appropriate type for the socket. See the examples in "Sockets: Client/Server Communication" in perlipc. and sets
$! (errno). simply the inverse of
:crlf -- other layers which would affect the binary nature of the stream are also disabled. See PerlIO, perlrun and the discussion about the PERLIO environment variable.
The
:bytes,
:crlf, and
:utf8, and any other directives of the form
:..., are called I/O layers. The
open prag or
:encoding(utf8).
:utf8 just marks the data as UTF-8 without further checking, while
:encoding(utf8) checks the data for actually being valid UTF-8. More details can be found in PerlIO::encoding... It returns true upon success, false otherwise. See the example under
die.
On systems that support fchdir, you might pass a file handle or directory handle as argument. On systems that don't support fchdir, passing handles produces a fatal error at run time.
Changes the permissions of a list of files. The first element of the list must be the numerical mode, which should probably be an octal number, and which definitely should not be a string of octal digits:
0644).
Chops);
Returns.
This
$_.
Closes the file or pipe associated with the file handle,
$? and
${^CHILD_ERROR_NATIVE}..
Closes a directory opened by
opendir and returns the success of that system call.
Attempts to connect to a remote socket, just as the connect system call does. Returns true if it succeeded, false otherwise. NAME should be a packed address of the appropriate type for the socket. See the examples in "Sockets: Client/Server Communication" in perlipc..
Returns the cosine of EXPR (expressed in radians). If EXPR is omitted, takes cosine of
$_.
For the inverse cosine operation, you may use the
Math::Trig::acos() function, or use this relation:
sub acos { atan2( sqrt(1 - $_[0] * $_[0]), $_[0] ) }
Creates a digest string exactly like the crypt(3) function in the C library (assuming that you actually have a version there that has not been extirpated as a potential munitions).
crypt() is a one-way hash function. The PLAINTEXT and SALT, do not assume anything about the returned string itself, or how many bytes in the digest matter.
Traditionally the result is a string of 13 bytes: two first bytes of the salt, followed by 11 bytes from the set
[./0-9A-Za-z], and only the first eight bytes of the digest string) the string back to an eight-bit byte string before calling crypt() (on that copy). If that works, good. If not, crypt() dies with
Wide character in crypt.
[This function has been largely superseded by the
untie function.]
Breaks the binding between a DBM file and a hash.
. Note that a subroutine which.
See also "undef", "exists", "ref".
Given.
Not.
This form of subroutine call is deprecated. See perlsub.,; } }
This.
This function is now largely obsolete, mostly because it's very hard to convert a core file into an executable. That's why you should now invoke it as
CORE::dump(), if you don't want to be warned against a possible typo.
When.
Returns 1 if the next read on FILEHANDLE will return end of file, or if FILEHANDLE is not open. FILEHANDLE may be an expression whose value gives the real filehandle. (Note that this function actually reads a character and then
ungetcs it, so isn't very only detect the end of. shown in this example:
#...
Given an expression that specifies a hash element or array element, returns true if the specified element in the hash or array has ever been initialized, even if the corresponding value is undefined. perlref for specifics on how exists() acts when used on a pseudo-hash.
Use of a subroutine call, rather than a subroutine name, as an argument to exists() is an error.
exists ⊂ # OK exists &sub(); # Error
Evaluates.
Returns e (the natural logarithm base) to the power of EXPR. If EXPR is omitted, gives
exp($_).
Implements the fcntl(2) function. You'll probably have to say
fcntl. Like
ioctl, it maps a
0 return from the system call into
"0 but true" in Perl. This string is true in boolean context and
0 in numeric context. It is also exempt from the normal -w warnings on improper numeric conversions.
REMOTE to";
Returns the file descriptor for a filehandle, or undefined if the filehandle is not open. may return undefined even though they are open.) qw(:flock SEEK_END); # import LOCK_* and SEEK_END constants sub lock { my ($fh) = @_; flock($fh, LOCK_EX) or die "Cannot lock mailbox - $!\n"; # and, in case someone appended while we were waiting...(), locks are inherited across fork() calls, whereas those that must resort to the more capricious fcntl() function lose the locks, making it harder to write servers.
See also DB_File for other flock() examples.
Does.
Declare a picture format for use by the
write function. For example:
format Something = Test: @<<<<<<<< @||||| @>>>>> $str, $%, '$' . int($num) . $str = "widget"; $num = $cost/$quantity; $~ = 'Something'; write;
See perlform for many details and examples.
This perlmodlib.
This.
Returns the packed sockaddr address of other end of the SOCKET connection.
use Socket; $hersockaddr = getpeername(SOCK); ($port, $iaddr) = sockaddr_in($hersockaddr); $herhostname = gethostbyaddr($iaddr, AF_INET); $herstraddr = inet_ntoa($iaddr);
Returns.
Returns.
Returns the current priority for a process, a process group, or a user. (See getpriority(2).) Will raise a fatal exception if used on a machine that doesn't implement getpriority(2).);.:
use File::stat; use User::pwent; $is_his = (stat($filename)->uid == pwent($whoever)->uid);
Even though it looks like they're the same method calls (uid), they aren't, because a
File::stat object is different from a
User::pwent object.
Returns);
Queries"; will".
There is no builtin
import function. It is just an ordinary method (subroutine) defined (or inherited) by modules that wish to export names to another module. The
use function calls the
import method for the package used. See also "use", perlmod, and Exp().
Implements.
Joins the separate strings of LIST into a single string with fields separated by the value of EXPR, and returns that new string. Example:
$rec = join(':', $login,$passwd,$uid,$gid,$gcos,$home,$shell);
Beware that unlike
split,
join doesn't take a pattern as its first argument. Compare "split"..
Returns a lowercased version of EXPR. value of EXPR with the first character lowercased.)).
Creates a new filename linked to the old filename. Returns true for success, false otherwise.
Does the same thing that the listen system call does. Returns true if it succeeded, false otherwise. See the example in "Sockets: Client/Server Communication" in perlipc.
You is the number of years since 1900, not just the last two digits of the year. That is,
$year is
123 in year 2023. The proper way to get a complete 4-digit year is simply:
$year += 1900;
Otherwise you create non-Y2K-compliant programs--and you wouldn't want to do that, would you?"
This scalar value is not locale dependent but is a Perl builtin. For GMT instead of local time use the "gmtime" builtin. See also the
Time::Local module .
See "localtime" in perlport for portability concerns.
The Time::gmtime and Time::localtime modules provides a convenient, by-name access mechanism to the gmtime() and localtime() functions, respectively.
For a comprehensive date and time representation look at the DateTime module on CPAN.
This.
Returns the natural logarithm (base e) of EXPR. If EXPR is omitted, returns.
Does
$_.
The match operator. See perlop.
Evaluates.
Creates the directory specified by FILENAME, with permissions specified by MASK (as modified by
umask). If it succeeds it returns true, otherwise it returns false and sets
$! (errno). If omitted, MASK defaults to 0777..
Calls the System V IPC function msgctl(2). You'll probably have to say
use IPC::SysV;
first to get the correct constant definitions. If CMD is
IPC_STAT, then ARG must be a variable that will hold the returned
msqid_ds structure. Returns like
ioctl: the undefined value for error,
"0 but true" for zero, or the actual return value otherwise. See also "SysV IPC" in perlipc,
IPC::SysV, and
IPC::Semaphore documentation.
Calls the System V IPC function msgget(2). Returns the message queue id, or the undefined value if there is an error. See also "SysV IPC" in perlipc and
IPC::SysV and
IPC::Msg documentation.::Msg documentation.
Calls.
A
my declares the listed variables to be local (lexically) to the enclosing block, file, or
eval. If more than one value is listed, the list must be placed in parentheses.
next command is like the
continue statement in C; it starts the next iteration of the loop:.
See the
use function, which
no is the opposite of..)
Opens.
See example at
readdir.
Returns the numeric (the native 8-bit encoding, like ASCII or EBCDIC, or Unicode) value of the first character of EXPR. If EXPR is omitted, uses
$_.
For the reverse, see "chr". See perlunicode for more about Unicode.
our be converted to a sequence of 4 characters. C char (octet) even under.
The following rules apply:
a,
A,
Z,
b,
B,
h,
H,
@,
x,
X). A numeric repeat count may optionally be enclosed in brackets, as in
pack 'C[80]', @arr..
When used with
Z,
* results in the addition of a trailing null byte (so the packed result will be one longer than the byte
length of the item).
When used with
@, the repeat count represents an offset from the start of the innermost () group.
The repeat count for
u is interpreted as the maximal number of bytes to encode per line of output, with 0, 1 and 2 replaced by 45. The repeat count should not be more than 65.
a,
A, and
Ztypes gobble just one value, but pack it as a string of length count, padding with nulls or spaces as necessary. When unpacking,
Astrips trailing whitespace and nulls,
Zstrips everything after the first null, and
areturns data verbatim.
If the value-to-pack is too long, it is truncated. If too long and an explicit count is provided,
Z packs only
$count-1 bytes, followed by a null byte. Thus
Z always packs a trailing null (except when the count is 0).
band
Bfields pack a string that many bits long. Each character of the input field of pack() generates 1 bit of the result. Each result bit is based on the least-significant bit of the corresponding input character, i.e., on
ord($char)%2. In particular, characters
"0"and
"1"generate bits 0 and 1, as do characters
"\0"and
"\1".
Starting from the beginning of the input string of pack(), each 8-tuple of characters is converted to 1 character of output. With format
b the first character of the 8-tuple determines the least-significant bit of a character, and with format
B it determines the most-significant bit of a character.
If the length of the input string is not exactly divisible by 8, the remainder is packed as if the input string were padded by null characters at the end. Similarly, during unpack()ing the "extra" bits are ignored.
If the input string of pack() is longer than needed, extra characters are ignored. A
* for the repeat count of pack() means to use all the characters of the input field. On unpack()ing the bits are converted to a string of
"0"s and
"1"s.
hand
Hfields pack a string that many nybbles (4-bit groups, representable as hexadecimal digits, 0-9a-f) long.
Each character of the input field of pack() generates 4 bits of the result. For non-alphabetical characters the result is based on the 4 least-significant bits of the input character, i.e., on
ord($char)%16. In particular, characters
"0" and
"1" generate nybbles 0 and 1, as do bytes
"\0" and
"\1". For characters
"a".."f" and
"A".."F" the result is compatible with the usual hexadecimal digits, so that
"a" and
"A" both generate the nybble
0xa==10. The result for characters
"g".."z" and
"G".."Z" is not well-defined.
Starting from the beginning of the input string of pack(), each pair of characters is converted to 1 character of output. With format
h the first character of the pair determines the least-significant nybble of the output character, and with format
H it determines the most-significant nybble.
If the length of the input string is not even, it behaves as if padded by a null character at the end. Similarly, during unpack()ing the "extra" nybbles are ignored.
If the input string of pack() is longer than needed, extra characters are ignored. A
* for the repeat count of pack() means to use all the characters of the input field. On unpack()ing the nybbles are converted to a string of hexadecimal digits.().
/template character allows packing and unpacking of a sequence of items where the packed structure contains a packed item count followed by the packed items themselves.
For
pack you write length-item
/sequence-item and the length-item describes how the length value is packed. The ones likely to be of most use are integer-packing ones like
n (for Java strings),
w (for ASN.1 or SNMP) and
N (for Sun XDR).
For
pack, the sequence-item may have a repeat count, in which case the minimum of that and the number of available items is used as argument for the sequence-item refers to a string type (
"A",
"a" or
"Z"), the length-item is a string length, not a number of strings. If there is an explicit repeat count for pack, the packed string will be adjusted to that given length.
unpack /C2', ord('a') .. ord('z'); gives '2ab'..)
s,
S,
i,
I,
l,
L,
j, and
Jare inherently non-portable between processors and operating systems because they obey the Config:
use Config; print $Config{byteorder}, "\n";.
'x'es while packing. There is no way to pack() and unpack() could know where the characters are going to or coming from. Therefore
pack(and
unpack) handle their output and input as flat sequences of characters.
/template character. Within each repetition of a group, positioning with
@starts again at 0. Therefore, the result of
pack( '@1A((@2A)@3A)', 'a', 'b', 'c' )
is the string "\0a\0\0bc".
xand
Xaccept
!modifier. In this case they act as alignment commands: they jump forward/back to the closest position aligned at a multiple of
countcharacters. For example, to pack() or unpack() C's
struct {char c; double d; char cc[2]}one may need to use the template
W x![d] d W[2]; this assumes that doubles must be aligned on the double's size.
For alignment commands
count of 0 is equivalent to
count of 1; both result in no-ops.
#and goes to the end of line. White space may be used to separate pack codes from each other, but a
!modifier and a repeat count must follow immediately.
""arguments. If TEMPLATE requires fewer arguments to pack() than actually given, extra arguments are ignored.
Examples:
$foo = pack("CCCC",65,66,67,68); # foo eq "ABCD" $foo = pack(().
Declares.
Pops and returns the last value of the array, shortening the array by one element.
If there are no elements in the array, returns the undefined value (although this may happen at other times as well). If ARRAY is omitted, pops the
@ARGV array in the main program, and the
@_ array in subroutines, just like
shift.
Returns performed.
Prints a string or a list of strings. Returns true if successful. FILEHANDLE may be a scalar variable name, in which case the variable contains by default to standard output (or to the last selected output channel--see "select"). If LIST is also omitted, prints
$_ to the currently selected output channel. To set the default output channel
+ or put parentheses around all the arguments.
Note that if you're storing FILEHANDLEs in an array, or if you're using any other expression more complex than a scalar variable to retrieve it, you will have to use a block returning the filehandle value instead:
print { $files[$i] } "stuff\n"; print { $OK ? STDOUT : STDERR } "stuff\n";
Equivalent to
print FILEHANDLE sprintf(FORMAT, LIST), except that
$\ (the output record separator) is not appended. The first argument of the list will be interpreted as the
printf format. See
sprintf for an explanation of the format argument. If
use locale is in effect, and POSIX::setlocale() has been called, the character used for the decimal separator in formatted floating point numbers is affected by the LC_NUMERIC locale. See perllocale and POSIX.
Don't fall into the trap of using a
printf when a simple.
Treats.
Generalized quotes. See "Quote-Like Operators" in perlop.
Regexp-like quote. See "Regexp Quote-Like Operators" in perlop.
Returns the value of EXPR with all non-"word" characters backslashed. (That is, all characters not matching
/[A-Za-z_0-9]/ will be preceded by a backslash in the returned string, regardless of any locale settings.) This is the internal function implementing the
\Q escape in double-quoted strings.
If EXPR is omitted, uses
$_.
Returns a random fractional number greater than or equal to
0 and less than the value of EXPR. (EXPR should be positive.) If EXPR is omitted, the value
1 is used. Currently EXPR with the value
0 is also special-cased as
1 - this has not been documented.)
Attempts actually implemented in terms of either Perl's or system's fread() call., open), the I/O will operate on UTF-8 encoded Unicode characters, not bytes. Similarly for the
:encoding pragma: in that case pretty much any characters can be read.;
Reads from the filehandle whose typeglob is contained in EXPR.line was successful.
for (;;) { undef $!; unless (defined( $line = <> )) { die $! if $!; last; # reached EOF } # ... }
Returns the value of a symbolic link, if symbolic links are implemented. If not, gives a fatal error. If there is some system error, returns the undefined value and sets
$! (errno). If EXPR is omitted, uses
$_.
EXPR is executed as a system command. The collected standard output of the command is returned. In scalar context, it comes back as a single (potentially multi-line) string. In list context, returns a list of lines (however you've defined lines with
$/ or
$INPUT_RECORD_SEPARATOR). This is the internal function implementing the
qx/EXPR/ operator, but you can use it directly. The
qx/EXPR/ operator is discussed in more detail in "I/O Operators" in perlop. pragma, open), the I/O will operate on UTF-8 encoded Unicode characters, not bytes. Similarly for the
:encoding pragma: in that case pretty much any characters can be read.
The.
Returns().
The result
Regexp indicates that the argument is a regular expression resulting from
qr//.
Changes.
Demands that do not support this syntax. The equivalent numeric version should be used instead.
require v5.6.1; # run time version check require 5.6.1; # ditto require 5.006_001; # ditto; preferred for backwards compatibility its nothing, or a list of up to three values in the following order:
$_and returning 1, then returning 0 at "end of file". If there is a filehandle, then the subroutine will be called to act a simple source filter, with the line as read in
$_. Again, return 1 for each valid line, and 0 after all lines have been returned.
$_[1]. A reference to the subroutine itself is passed in as
$_[0].
If an empty list,
undef, or nothing that matches the first 3 values above is returned then
require will look at the remaining elements of @INC. Note that this file handle must be a real file handle (strictly a typeglob, or reference to a typeglob, blessed or unblessed) - tied file handles will be ignored and return value processing will stop. Resets only
Sets the current position to the beginning of the directory for the
readdir routine on DIRHANDLE.
Works just like index() except that it returns the position of the last occurrence of SUBSTR in STR. If POSITION is specified, returns the last occurrence beginning at or before that position.
Deletes the directory specified by FILENAME if that directory is empty. If it succeeds it returns true, otherwise it returns false and sets
$! (errno). If FILENAME is omitted, uses
$_.
To remove a directory tree recursively (
rm -rf on unix) look at the
rmtree function of the File::Path module.
The substitution operator. See perlop.
Forces.
Sets FILEHANDLE's position, just like the
fseek call of
stdio. FILEHANDLE may be an expression whose value gives the name of the filehandle. The values for WHENCE are
0 to set the new position in bytes IO implementations are particularly cantankerous), then you may need something more like this:
for (;;) { for ($curpos = tell(FILE); $_ = <FILE>; $curpos = tell(FILE)) { # search for some stuff and put it into files } sleep($for_a_while); seek(FILE, $curpos, 0); }
Sets the current position for the
readdir routine on DIRHANDLE. POS must be a value returned by
telldir.
seekdir also has the same caveats about possible directory compaction as the corresponding system library routine.
Returns the currently selected filehandle. If FILEHANDLE is supplied, sets the new current default filehandle for output. This has two effects: first, a
write or <FH>) with
select, except as permitted by POSIX, and even then only on POSIX systems. You have to use
sysread instead.
Calls the System V IPC function
semctl. You'll probably have to say
use IPC::SysV;ore documentation.
Calls the System V IPC function semget. Returns the semaphore id, or the undefined value if there is an error. See also "SysV IPC" in perlipc,
IPC::SysV,
IPC::SysV::Semaphore documentation.
Calls the System V IPC function semop to perform, or false if there is an error. As an example, the following code waits on semaphore $semnum of semaphore id $semid:
$semop = pack("s!3", $semnum, -1, 0); die "Semaphore trouble: $!\n" unless semop($semid, $semop);
To signal the semaphore, replace
-1 with
1. See also "SysV IPC" in perlipc,
IPC::SysV, and
IPC::SysV::Semaphore documentation.
Sends pragma, open), the I/O will operate on UTF-8 encoded Unicode characters, not bytes. Similarly for the
:encoding pragma: in that case pretty much any characters can be sent.
Sets().
Sets the current priority for a process, a process group, or a user. (See setpriority(2).) Will produce a fatal error if used on a machine that doesn't implement setpriority(2).
Sets the socket option requested. Returns undefined if there is); {},
INIT {},
CHECK {}, and
END {} constructs.
See also
unshift,
push, and
pop.
shift and
unshift do the same thing to the left end of an array that
pop and
push do to the right end.
Calls: the undefined value for error, "
0 but true" for zero, or the actual return value otherwise. See also "SysV IPC" in perlipc and
IPC::SysV documentation.
Calls the System V IPC function shmget. Returns the shared memory segment id, or the undefined value if there is an error. See also "SysV IPC" in perlipc and
IPC::SysV documentation.
Reads.
Shuts.
Returns the sine of EXPR (expressed in radians). If EXPR is omitted, returns sine of
$_.
For the inverse sine operation, you may use the
Math::Trig::asin function, or use this relation:
sub asin { atan2($_[0], sqrt(1 - $_[0] * $_[0])) }
Causes the script to sleep for EXPR seconds, or forever if no EXPR. Returns the.
Opens a socket of the specified kind and attaches it to filehandle SOCKET. DOMAIN, TYPE, and PROTOCOL are specified the same as for the system call.
Creates an unnamed pair of sockets in the specified domain, of the specified type. DOMAIN, TYPE, and PROTOCOL are specified the same as for the system call of the same name. If unimplemented, yields a fatal error. Returns true if successful.
On systems that support a close-on-exec flag on files, the flag will be set for the newly opened file descriptors, as determined by the value of $^F. See "$^F" in perlvar.
Some systems defined
pipe..
The values to be compared are always passed by reference and should not be modified.
You also cannot exit out of the sort block or subroutine using any of the loop control operators described in perlsyn or with
goto.
When
use locale), and because
sort will trigger a fatal error unless the result of a comparison is defined, when sorting with a comparison function like
$a <=> $b, be careful about lists that might contain a
NaN. The following example takes advantage of the fact that
NaN != NaN to eliminate any
NaNs from the input.
@result = sort { $a <=> $b } grep { $_ == $_ } @input;)) { ... } | http://search.cpan.org/~nwclark/perl/pod/perlfunc.pod | crawl-002 | refinedweb | 4,497 | 65.62 |
.
In asp.net by using “Zxing.Net” library we can generate QR code and read data from that image based on our requirements.
To use “Zxing.Net” library in our application first create new asp.net web application and add Zxing.Net library for that right click on your application à select Manage Nuget Packages à Go to Browse Tab à Search for Zxing.Net à From the list select ZxingNet and install it. Once we install the component that will show like as shown following.
Once we install Zxing.Net package in our application now open your aspx page and write the code like as shown following.
Now open code behind file and write the code like as shown following
C# Code
VB.NET Code
If you observe above code we added a namespace “Zxing” reference in our application to generate and read QR code in our web applications.
Demo
When we run above code we will get result like as shown below
4 comments :
Sir can you update with checkbox details
If checbox is selected it should generate the selected items and generate the Qr code
nice one...
how we add two textbox and generate qr code using this data | https://www.aspdotnet-suresh.com/2017/04/aspnet-generate-and-read-qr-code-in-web-application-using-csharp-vbnet.html | CC-MAIN-2018-17 | refinedweb | 202 | 74.29 |
What is e2e (end-to-end) Testing?
The End to End Testing is used to testing the entire application looks like -
ü All User Interactions
ü All Service Calls
ü Authentication/Authorization of app
ü Everything of App
ü And so on.
This is the actual testing of your apps. It is fast action.
Unit testing and Integrations testing will do as fake calls but e2e testing is done with your actual Services and APIs calls.
ü Stayed Informed - Angular Unit Test - Karma and Jasmine
Test functions–
ü describe – Test suit (just a function)
ü it - The spec or test
ü expect - Expected outcome.
Triple Rule of Testing –
ü Arrange - Create and Initialize the Components
ü Act - Invoke the Methods/Functions of Components
ü Assert - Assert the expected outcome/behaviour
Sample Test example -
app.po.ts –
import { browser, by, element } from 'protractor';
export class AppPage {
navigateTo() {
return browser.get('/');
}
getParagraphText() {
return element(by.css('app-root h1')).getText();
}
}
app.e2e-spec.ts –
import { AppPage } from './app.po';
describe('my-app App', () => {
let page: AppPage;
beforeEach(() => {
page = new AppPage();
});
it('should display welcome message', () => {
page.navigateTo();
expect(page.getParagraphText()).toEqual('Welcome to app!');
});
});
I hope you are enjoying with this post! Please share with you friends!! Thank you!!!
You Might Also Like | https://www.code-sample.com/2017/11/angular-5-e2e-tests-end-to-end-testing.html | CC-MAIN-2018-09 | refinedweb | 211 | 50.43 |
python3 startup time - ideas?
i've just written my 1st Omega app. When I press a button, my python3 code starts up, does some magic, and spells out today's weather forecast through a speaker.
My challenge is python's startup time - it literally takes 30 seconds for it to load the libraries I need and begin work. Do you have any tricks to make it quicker? I know I could have it run as a daemon, but while I'm working on the code I would need some tool to apply changes without restarting the daemon.
Michal
Have you tried compiling to bytecode?
Tried to compile with py_compile - didn't help.
According to "python3 -v" output, most of the time is spent reading system libraries. Take this simple file that does nothing:
import urllib.request
import urllib.parse
import json
Execution takes literally 20 seconds on my Omega.
Michal
- Vinicius Batista last edited by
were you able to compile or something wrong happened?
If possible, would you be able to share the code using github or something else?
Vinicius
I have found that python 2 is faster than 3 on the omega. Have you tried using 2? Any reason you need 3?
@Vinicius-Batista i was able to compile my code, but it wasn't any faster on startup
@Samuel-Mathieson no real reason to use py3 - I was learning Python so I thought I should start with the most recent version.
The time it takes to start bare shell in python2/3 is in my case:
- 4 seconds for Python 2.7
- 5.5 seconds for Python 3.4
But importing urllib.request increases the python3 startup time to 22 secs. For python2, importing urllib does not have that much impact (4->5.5).
Michal
- Vinicius Batista last edited by Vinicius Batista
Hi @Michal-Rok .
Python is an interpreted language, so this is usually slow during initialisation. I just created a small script that reads response from a rest service(http get) and print it on the oled expansion. It takes about 4 seconds to initialise. I believe that it's partially due to omega's hardware limitations.
So, if initialisation time is something crucial for your project, I would suggest look for some alternatives - namely C and C++.
Regards,
Vinicius
Like Vinicius said, if you have no real reason to use something like C and can get away with python, if it is not crucial to have a fast initialization time, I would go with Python. Also, the difference between python 2 and 3 is minimal. Really 3 is just making things a little more consistent. Mostly if you change your print "ok" to print("ok") that should do it. I would go with python 2.
One suggestion. I have found that it is faster to use os.system() for some things rather than their python equivalents. Perhaps you could look into that?
Regards,
Sam. | https://community.onion.io/topic/767/python3-startup-time-ideas/9 | CC-MAIN-2022-40 | refinedweb | 488 | 75.61 |
Replace string across multiple files and increment value
- Philip Lengden
I’m looking for an urgent solution! We need to append a text string in 1800+ XML files and increment the value by 1. Alternatively, I could append the text string with the filename of the doc. Either solution would work.
Example 1
Original string is Benjamin_poster.jpg across all files
New value would be Benjamin_poster1.jpg, Benjamin_poster2.jpg etc.
OR
Example 2
Original string is Benjamin_poster.jpg across all files
New value would be Benjamin_poster[filename1].jpg, Benjamin_poster[filename2].jpg etc.
Any help would be great.
Thank you!
- Philip Lengden
$20 to the first correct submission ; )
Hello, Philip Lengden,
Oh, many thanks for your offer, but I could had help you, for free , just for the pleasure to give you a decent solution ;-))
Unfortunately, this kind of search / replacement can’t be easily achieved, even with the powerful regular expression engine,included in N++, because it needs some calculus, based on the
+operation !
But, don’t be sad ! This task should be easily run, in few lines of code, with a Python or Lua script, which can be performed, from inside Notepad++ ! However, up to now, I’m, personally, still unable to create such scripts !
So, Claudia, Scott and dail, here’s the solution to improve your Christmas's holidays ! Some more beer ? An other gift ?.. I’m joking, of course :-))
Philip, I do hope that someone will help you, very soon !
Best regards,
guy038
- Claudia Frank
Few questions:
Is there only one occurrence of the search string in each file or could it be multiple?
If there are multiple search strings in one file should all be replaced with same value
or should it be already incremented?
If only each file should get the incremented value, what if a file doesn’t have the search string,
can this incremented value be discarded or is it needed to assign it to the next file. Something like
file1 has the search string and will be replace with new_value1
file2 has the search string and will be replace with new_value2
file3 has not the search string
file4 has the search string and will be replaced with value (3 or 4)?
Cheers
Claudia
- Scott Sumner
I have no doubt that with @Claudia-Frank on it, you will get more than a $20 solution. :-)
- Claudia Frank
no pressure, no pressure - I thought of a simple solution like
import os search_string = 'HERE_YOUR_STRING' i = 0 for root, dirs, files in os.walk('C:\\WHAT_EVER_DIR\\'): # take care of double backslash like c:\\temp\\dir1\\ for file in files: fname, ext = os.path.splitext(file) if ext == '.xml': i+=1 full_path = os.path.join(root, file) with open(full_path) as f: s = f.read() s = s.replace(search_string, search_string + str(i)) with open(full_path, "w") as f: f.write(s)
but now, maybe an windows api solution?? ;-))
Cheers
Claudia | https://notepad-plus-plus.org/community/topic/12967/replace-string-across-multiple-files-and-increment-value | CC-MAIN-2017-17 | refinedweb | 482 | 73.37 |
Tutorial: Analyze fraudulent call data with Stream Analytics and visualize results in Power BI dashboard
This tutorial shows you how to analyze phone call data using Azure Stream Analytics. The phone call data, generated by a client application, contains fraudulent calls, which are filtered by the Stream Analytics job. You can use the techniques from this tutorial for other types of fraud detection, such as credit card fraud or identity theft.
In this tutorial, you learn how to:
- Generate sample phone call data and send it to Azure Event Hubs.
- Create a Stream Analytics job.
- Configure job input and output.
- Define queries to filter fraudulent calls.
- Test and start the job.
- Visualize results in Power BI.
Prerequisites
Before you start, make sure you have completed the following steps:
- If you don't have an Azure subscription, create a free account.
- Download the phone call event generator app TelcoGenerator.zip from the Microsoft Download Center or get the source code from GitHub.
- You will need Power BI account.
Create an Azure Event Hub
Before Stream Analytics can analyze the fraudulent calls data stream, the data needs to be sent to Azure. In this tutorial, you will send data to Azure by using Azure Event Hubs.
Use the following steps to create an Event Hub and send call data to that Event Hub:
Select Create a resource > Internet of Things > Event Hubs.
Fill out the Create Namespace pane with the following values:
Use default options on the remaining settings and select Review + create. Then select Create to start the deployment.
When the namespace has finished deploying, go to All resources and find asaTutorialEventHub in the list of Azure resources. Select asaTutorialEventHub to open it.
Next select +Event Hub and enter a Name for the Event Hub. Set the Partition Count to 2. Use the default options in the remaining settings and select Create. Then wait for the deployment to succeed.
Grant access to the event hub and get a connection string
Before an application can send data to Azure Event Hubs, the event hub must have a policy that allows access. The access policy produces a connection string that includes authorization information.
Navigate to the event hub you created in the previous step, MyEventHub. Select Shared access policies under Settings, and then select + Add.
Name the policy MyPolicy and ensure Manage is checked. Then select Create.
Once the policy is created, select the policy name to open the policy. Find the Connection string–primary key. Select the copyfile, so be sure that you open the correct one.
Update the
<appSettings. Don't forget to remove the semicolon that precedes the EntityPath value.
Save the file.
Next open a command window and change to the folder where you unzipped the TelcoGenerator application. Then enter the following command:
.\telcodatagen.exe 1000 0.2 2
This command takes the following parameters:
- Number of call data records per hour.
- Percentage of fraud probability, which is how often the app should simulate a fraudulent call. The value 0.2 means that about 20% of the call records will look fraudulent.
- Duration in hours, which is the number of hours that the app should run. You can also stop the app at and search for Stream Analytics job. Select the Stream Analytics job tile and select Create*.
Fill out the New Stream Analytics job form with the following values:
Use default options on the remaining settings, select Create, and wait for the deployment to succeed.
Configure job input
The next step is to define an input source for the job to read data using the event hub you created in the previous section.
From the Azure portal, open the All resources page, and find the ASATutorial Stream Analytics job.
In the Job Topology section of the Stream Analytics job, select Inputs.
Select + Add stream input and Event hub. Fill out the input form with the following values:
Use default options on the remaining settings and select Save.
Configure job output
The last step is to define an output sink where the job can write the transformed data. In this tutorial, you output and visualize data with Power BI.
From the Azure portal, open All resources, and select the ASATutorial Stream Analytics job.
In the Job Topology section of the Stream Analytics job, select the Outputs option.
Select + Add > Power BI. Then, select Authorize and follow the prompts to authenticate Power BI.
Fill the output form with the following details and select Save:
This tutorial uses the User token authentication mode. To use Managed Identity, see Use Managed Identity to authenticate your Azure Stream Analytics job to Power BI.
Create queries to transform real-time data
At this point, you have a Stream Analytics job set up to read an incoming data stream. The next step is to create a query that analyzes the data in real time. The queries use a SQL-like language that has some extensions specific to Stream Analytics.
In this section of the tutorial, you create and test several queries to learn a few ways in which you can transform an input stream for analysis.
The queries you create here will just display the transformed data to the screen. In a later section, you'll write the transformed data to Power BI.
To learn more about the language, see the Azure Stream Analytics Query Language Reference.
Test using a pass-through query
If you want to archive every event, you can use a pass-through query to read all the fields in the payload of the event.
Navigate to your Stream Analytics job in the Azure portal and select Query under Job topology.
In the query window, enter this query:
SELECT * FROM CallStream
Note
As with SQL, keywords are not case-sensitive, and whitespace is not significant.
In this query,
CallStreamis the alias that you specified when you created the input. If you used a different alias, use that name instead.
Select Test query.
The Stream Analytics job runs the query against the sample data from the input and displays the output at the bottom of the window. The results indicate that the Event Hub and the Streaming Analytics job are configured correctly.
The exact number of records you see will depend on how many records were captured in the sample.
Reduce the number of fields using a column projection
In many cases, your analysis doesn't need all the columns from the input stream. You can use a query to project a smaller set of returned fields than in the pass-through query.
Run the following query and notice the output.
SELECT CallRecTime, SwitchNum, CallingIMSI, CallingNumCalledNum FROM CallStream
Count incoming calls by region: Tumbling window with aggregation
Suppose you want to count the number of incoming calls per region. In streaming data, when you want to perform aggregate functions like counting, you need to segment the stream into temporal units, since the data stream itself is effectively endless. You do this using a Streaming Analytics window function. You can then work with the data inside that window as a unit.
For this transformation, you want a sequence of temporal windows that don't overlap—each window will have a discrete set of data that you can group and aggregate. This type of window is referred to as a Tumbling window. Within the Tumbling window, you can get a count of the incoming calls grouped by
SwitchNum, which represents the country/region where the call originated.
Paste the following query in the query editor:
SELECT System.Timestamp as WindowEnd, SwitchNum, COUNT(*) as CallCount FROM CallStream TIMESTAMP BY CallRecTime GROUP BY TUMBLINGWINDOW(s, 5), SwitchNum
This query uses the
Timestamp Bykeyword in the
FROMclause to specify which timestamp field in the input stream to use to define the Tumbling window. In this case, the window divides the data into segments by the
CallRecTimefield in each record. (If no field is specified, the windowing operation uses the time that each event arrives at the event hub. See "Arrival Time Vs Application Time" in Stream Analytics Query Language Reference.
The projection includes
System.Timestamp, which returns a timestamp for the end of each window.
To specify that you want to use a Tumbling window, you use the TUMBLINGWINDOW function in the
GROUP BYclause. In the function, you specify a time unit (anywhere from a microsecond to a day) and a window size (how many units). In this example, the Tumbling window consists of 5-second intervals, so you will get a count by country/region for every 5 seconds' worth of calls.
Select Test query. In the results, notice that the timestamps under WindowEnd are in 5-second increments.
Detect SIM fraud using a self-join
For this example, consider fraudulent usage to be calls that originate from the same user but in different locations within 5 seconds of one another. For example, the same user can't legitimately make a call from the US and Australia at the same time.
To check for these cases, you can use a self-join of the streaming data to join the stream to itself based on the
CallRecTime value. You can then look for call records where the
CallingIMSI value (the originating number) is the same, but the
SwitchNum value (country/region of origin) is not the same.
When you use a join with streaming data, the join must provide some limits on how far the matching rows can be separated in time. As noted earlier, the streaming data is effectively endless. The time bounds for the relationship are specified inside the
ON clause of the join, using the
DATEDIFF function. In this case, the join is based on a 5-second interval of call data.
Paste the following query in the query editor:))
This query is like any SQL join except for the
DATEDIFFfunction in the join. This version of
DATEDIFFis specific to Streaming Analytics, and it must appear in the
ON...BETWEENclause. The parameters are a time unit (seconds in this example) and the aliases of the two sources for the join. This is different from the standard SQL
DATEDIFFfunction.
The
WHEREclause includes the condition that flags the fraudulent call: the originating switches are not the same.
Select Test query. Review the output, and then select Save query.
Start the job and visualize output
To start the job, navigate to the job Overview and select Start.
Select Now for job output start time and select Start. You can view the job status in the notification bar.
Once the job succeeds, navigate to Power BI and sign in with your work or school account. If the Stream Analytics job query is outputting results, the ASAdataset dataset you created exists under the Datasets tab.
From your Power BI workspace, select + Create to create a new dashboard named Fraudulent Calls.
At the top of the window, select Edit and Add tile. Then select Custom Streaming Data and Next. Choose the ASAdataset under Your Datasets. Select Card from the Visualization type dropdown, and add fraudulent calls to Fields. Select Next to enter a name for the tile, and then select Apply to create the tile.
Follow the step 5 again with the following options:
- When you get to Visualization Type, select Line chart.
- Add an axis and select windowend.
- Add a value and select fraudulentcalls.
- For Time window to display, select the last 10 minutes.
Your dashboard should look like the example below once both tiles are added. Notice that, if your event hub sender application and Streaming Analytics application are running, your Power BI dashboard periodically updates as new data arrives.
Embedding your Power BI Dashboard in a Web Application
For this part of the tutorial, you'll use a sample ASP.NET web application created by the Power BI team to embed your dashboard. For more information about embedding dashboards, see embedding with Power BI article.
To set up the application, go to the PowerBI-Developer-Samples GitHub repository and follow the instructions under the User Owns Data section (use the redirect and homepage URLs under the integrate-web-app subsection). Since we are using the Dashboard example, use the integrate-web-app sample code located in the GitHub repository. Once you've got the application running in your browser, follow these steps to embed the dashboard you created earlier into the web page:
Select Sign in to Power BI, which grants the application access to the dashboards in your Power BI created a simple Stream Analytics job, analyzed the incoming data, and presented results in a Power BI dashboard. To learn more about Stream Analytics jobs, continue to the next tutorial: | https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-real-time-fraud-detection | CC-MAIN-2021-04 | refinedweb | 2,103 | 64.61 |
#include <cpptagdb.h>
Definition at line 128 of file cpptagdb.h.
the names of all directories encountered when populating symbols into members_
Definition at line 141 of file cpptagdb.h.
data structure to hold the names of the files processed by cpptagdb.exe keyed by their basename.ext component.
Definition at line 131 of file cpptagdb.h.
a mapping between class and namespace members and their symbol info.
Definition at line 136 of file cpptagdb.h.
Definition at line 184 of file cpptagdb.h.
Search the set of files found by cpptagdb.exe and append the name of each file whose basename.ext matches fileNodeName to a vector. Wildcards are not supported. fileNodeName must not have directory components.
Definition at line 279 of file cpptagdb.cxx.
Search this Info object for the names of symbols whose member name matches the specified value and append their names to the output vector.
Definition at line 316 of file cpptagdb.cxx.
Find any files in the directories where symbols are defined which match the specified sub path. For example, if relativeSubPath is fred.cpp, then all fred.cpp files in any directory which defined a symbol will be returned. The relativeSubPath can be an absolute path or it can contain ./ or ../ and the function will still work. Note that the current directory is explicitly NOT searched by this function.
Definition at line 296 of file cpptagdb.cxx.
Read the specified file and populate this object from that file.
Definition at line 212 of file cpptagdb.cxx.
Populate this Info object given a file name. Return a negative number if parsing of the file failed and also return the line number through the reference parameter, failedLine.
Definition at line 194 of file cpptagdb.cxx.
Add a symbol to this database given the filename with which it is associated and the line of text from the .tagpp file.
Definition at line 382 of file cpptagdb.cxx.
global free function that writes Info objects a stream.
Definition at line 364 of file cpptagdb.cxx.
All the files encountered while processing symbols.
Definition at line 152 of file cpptagdb.h.
The files processed by cpptagdb.exe. Note that the key is the file node name (basename.ext).
Definition at line 149 of file cpptagdb.h.
namespace and class members
Definition at line 154 of file cpptagdb.h.
A tree structure containing all the symbols read by cpptagdb.exe.
Definition at line 145 of file cpptagdb.h. | http://www.bordoon.com/tools/classcxxtls_1_1CppTagDB_1_1Info.html | CC-MAIN-2019-18 | refinedweb | 409 | 70.5 |
Automated code deployment with Fabric
Gone are the days of source editing on a live server, uploading files to the server manually via FTP and doing post-install stuff by hand. Or at least that should be the case. For single server projects that may still be manageable (yet unbearably annoying), but if a project requires more than one server it is a real pain.
That is one of the reasons I recently have been trying to automate deployment of my projects. The deployment process can be run with one command and will happen mostly without human intervention (YMMV depending on project). If any problems occur on the live version, a patched version can be deployed much quicker than a human could - reducing deployment time from 30+ minutes down to just a few minutes.
Such tools are definitely important for System Administrators, however I would recommend Software Developers to look into it as well, as that may be beneficial for your projects as well.
Today we’ll be looking into Fabric, but there are larger packages with a slightly different purpose (e.g. Puppet or Chef are intended for server management) and they can still be used for this purpose.
Setting up
I will assume that you have your project hosted in a version control system, such as Git (Github) or Mercurial (BitBucket), that can be accessed by your target server. If you don’t use version control - you really really should (Why should I use version control?). If you don’t want to use version control, you still can use Fabric, you’ll just have to do more tailoring of deployment scripts for your situation.
We’ll be using git, but any other version control system would work just as well, just please set up deployment keys for your remote server so that it really can access the repo: For BitBucket, For Github.
Fabric is available through PIP. Run this to install Fabric:
pip install fabric
Once that’s done, run
fab to verify that fabric was successfuly installed. Output should be similar to:
Fatal error: Couldn't find any fabfiles! Remember that -f can be used to specify fabfile path, and use -h for help. Aborting.
Then go to your project folder and create
fabfile.py which will hold deployment code for our project.
Our first fabfile
Before we do that, there are few things we’ll need to know.
fab task1 task2 ... taskn will look for a
fabfile.py nearby and will try to execute functions with names
task1,
task2, …,
taskn within the fabfile. For more extensive list of usage look into
fab -h.
Fabric uses SSH as transport to run commands on remote machines (which usually are UNIX commands). Think of it as automated version of yourself running commands via terminal.
Useful list of commands:
fabric.context_managers.cd(path)- change directory, UNIX, remote
fabric.context_managers.lcd(path)-
cdanalog, local
fabric.operations.local(command, capture=False, shell=None)- execute
commandon local machine
fabric.operations.run(command, shell=True, ...skipped...)- execute
commandon remote machine
fabric.operations.get(remote_path, local_path=None)- download files from remote machine
fabric.operations.pu(local_path, remote_path=None, use_sudo=False, ...skipped...)- upload files to remote machine
That should be enough to get started. For more extensive documentation visit HERE.
And here’s our first fabfile:
from __future__ import with_statement from fabric.api import * from fabric.contrib import files from fabric.contrib.console import confirm env.hosts = ['web1'] DBUSER = 'root' DBNAME = 'prod' def production(version=None): with cd('<project path>'): run('git pull') if not version: run('git checkout origin master') else: run('git checkout live-%s' % version) # use specific tag if files.exists('sql/latest') and confirm('Run latest SQL upgrade?'): with cd(path): run('mysql -u %s -p %s < install.sql' % (DBUSER, DBNAME)) else: print('Skipping SQL upgrade.')
The fabfile is rather simple and its goal is to pull the latest (or a specific) version from git and if needed run SQL upgrade scripts.
env.hosts holds address of the target server.
with is a python context manager,
cd helps to simplify paths.
Fabric will pull requested version from git and check if there are any SQL upgrade files. If there are - it will ask you if you want to run them and do so. Even with commands we’ve covered the script can be easily extended to do more complex things - build from source, sync static resources to backup servers.
This is script is pretty much what I use for my own projects, just slightly modified (e.g. I pull the version from source code, do automatic minification), but so far it is sufficiently convenient.
Last thoughts
I’ve only started using Fabric recently and wouldn’t call myself an experienced user, but so far it’s been a great experience. I like the level of automation that can achieved with it.
One problem I’ve experienced and haven’t been able to find a solution is with SSH key management. For some reason it will use a wrong key (ignoring SSH config) and won’t change to a different one. If you know a solution for this - I would love to hear it.
For official documentation go to Fabric homepage. They have a good tutorial, which you may find more understandable compared to my quick overview of my usage.
For my next adventure I will probably look into Puppet. Justin Weissig on Sysadmincasts.org did an episode on Learning Puppet with Vagrant - it’s very interesting.
If you have any questions or remarks about the post - please let me know either by email or in the comments below. I would love to hear from you. | https://tautvidas.com/blog/2013/08/automated-code-deployment-with-fabric/ | CC-MAIN-2019-43 | refinedweb | 943 | 57.37 |
How to write an Android CPU benchmark tool (part 1)
A benchmark test is a tool that you can use to compare the speed and performance of any given set-up/device. The idea is simple: the program runs a series of complex, arbitrary operations and then measures how long it takes. If the same function takes longer on one device than another, then you can generally conclude that the faster device is the more powerful one. You’ve probably seen benchmark tools used in a number of device reviews, with popular choices on Android being AnTuTu and Geekbench (check out a selection of benchmark tools here).
Of course it’s a bit more complicated than that though. Just like some people are better at maths and others are better at art, your smartphones and tablets can be better at some tasks than others (see Gardner’s theory of multiple intelligences, if you’re in the mood for a massive psychology tangent…). For example, there is a difference between graphical capability and pure processing power and so you need to use different tests for each.
What’s more, is that the amount of time that a function takes can vary each time you run it based on numerous factors, such as background processes you have running. As such, a well designed benchmark tool will run multiple different functions and then compare performance across the board in order to create a fairly consistent ‘score’ that users can judge their device’s performance by.
In this exercise we’re going to be testing processing power specifically, which is handled by the CPU. To do this, we’re going to write a program that challenges the device to perform some complex math and then time how long that takes. Of course this isn’t going to be anywhere near as accurate as a real benchmark tool but it will hopefully be a fun, educational project nonetheless.
Introducing SHA-1
Over the course of this two-part series, we’re going to try out a couple of different tests for our devices. To begin with for part one though, we’ll be looking at SHA-1 encryption.
SHA-1 stands for ‘Secure Hash Algorithm 1’ and is a ‘cryptographic hash function’. It’s a tool that can be used to encrypt data by storing it in a array (an array being a collection of data, like a list). The idea is that any string of characters can be represented by a 40 digit hexadecimal ‘hash value’ (called a message digest), which essentially points to a ‘location’ in the array where the string is stored. Almost like a pigeon hole. The location is worked out by applying a complex algorithm to the given string.
As a cryptographic function, the objective is to encode that information in a way that it can’t be easily decoded. To meet this requirement, the algorithm needs to be such that it would be impossible or near impossible to decipher the string from the hash value alone. This means that the hash value for a password won’t be worth anything to a sneaky hacker. What’s more, there can be no ambiguity – meaning that you can’t assign multiple strings to the same hash value.
SHA-1 is one of the algorithms that has been keeping your private data secure for the last few years while you’ve been browsing the web. Unfortunately, it’s no longer considered secure enough for that use and is in the process of being phased out in favor of SHA-2, SHA-3 and other solutions. This is because ‘brute force’ attacks can crack it if there’s enough power behind them. Essentially, this means a hacker could run an algorithm that would keep guessing until it got it right and today some computers are powerful enough that that wouldn’t take 100 years.
But we’re not actually interested in the security of our encryption. All that matters to us is that the encryption requires a heavy algorithm that slightly taxes the CPU. It happens super fast but it’s enough to form the basis of our little race.
And it’s also quite cool to play around with cryptography. If you want to make this all more exciting, you can pretend you’re a WW2 cryptographer protecting crucial information from hacker-Nazis.
‘Hacker-Nazis’ is the best thing I’ve written this year by the way.
Setting up
We’ve already covered setting up Android Studio in previous articles. Likewise, we’ve gone over the process of building a simple app with a basic UI. So in other words, I’m not going to go through the whole set-up process again here. I will quickly implement a quick UI though, so if you want to follow along you can do. Otherwise, if you’re just here to see how to implement that SHA-1 encryption, you can skip this section.
First create a new project and select ’empty activity’. I called the app and the activity ‘Benchmark’ because why complicate matters? For the look of this app, I’m going to go with a kind of ‘green screen’ aesthetic. It’s a very easy look to create and it feels cool and techy. Also, I may have been playing Tron Run/r too much lately…
So with that in mind, find ‘Colors.xml’ and set the values as so:
<color name="colorPrimary">#6CC417</color> <color name="colorPrimaryDark">#000</color> <color name="colorAccent">#FF4081</color>
Now head over to your ‘activity_benchmark.xml’ file and we’re going to create a linearlayout with a two text views and a button. We’re also going to add a background to the activity (colorPrimaryDark) and make the text our new green color (which is called ‘alien green’. We’re using a linear layout, setting the orientation to horizontal and adding some IDs and onClicks. It should look something like this:
="@color/colorPrimaryDark" android: <TextView android: <Button android: <TextView android: </LinearLayout>
Which in turn should appear like so in the designer:
If we were making this into an actual app, then I’d suggest making a old-school blinking cursor. But that wouldn’t really be a good use of our time right now…
In fact, that’s all we’re going to do for the layout at this point. On to the actual test!
The SHA-1 function
Now comes the moment we’ve been waiting for: adding the SHA-1 function!
Fortunately, this isn’t an algorithm we’re going to write ourselves. Happily, there’s a class that can do this for us called MessageDigest.
Before you do that though, head over to your ‘Strings.xml’ file and insert the following line of code:
<string name="teststring">The big bad wolf</string>
This is going to come in handy as the string that we’ll be encrypting. Why did I choose ‘The big bad wolf’? I literally do not know… You should also make sure to import the classes you’re going to need right at the start to save yourself trouble later on. Just open up Benchmark.java (we’ll be working here from now on) and add the following statements:
import android.util.Base64; import java.io.UnsupportedEncodingException; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import android.os.Bundle; import android.app.Activity; import android.util.Log; import android.view.View; import android.widget.Button; import android.widget.TextView;
Remember: if at any point a command isn’t recognized, you can usually find out why by selecting it and pressing Alt+Return. On doing this, you may get the option to automatically import the class if that’s the problem.
Now let’s create a function that will use MessageDigest to convert the string into a hash value. Use the following but don’t worry too much about precisely what’s going on here:
public void computeSHAHash(String password) { MessageDigest mdSha1 = null; try { mdSha1 = MessageDigest.getInstance("SHA-1"); } catch (NoSuchAlgorithmException e1) { Log.e("Benchmark", "Error initializing SHA1"); } try { mdSha1.update(password.getBytes("ASCII")); } catch (UnsupportedEncodingException e) { // TODO Auto-generated catch block e.printStackTrace(); } byte[] data = mdSha1.digest(); StringBuffer sb = new StringBuffer(); String hex=null; hex = Base64.encodeToString(data, 0, data.length, 0); sb.append(hex); HashValue=sb.toString(); }
The input is passed as a string called ‘password’ (seeing as this is what you would often use SHA-1 for). ‘HashValue’ is our hash value, which is the output. We’re going to be using the HashValue string in multiple functions so add:
private String HashValue;
To the top of your code. We’ll also need to define some of our views here, so while you’re at it, go ahead and include:
private TextView result; private Button compute; private String teststring; private String HashValue; private String tt; private String output; private long tsLong;
This goes underneath ‘Public Class Benchmark…’. Those extra variables (tt, teststring and tsLong) will come in handy in a moment but you may as well add them now.
Now we need to initialize the views and that means identifying them by the IDs we gave them in activity_benchmark.xml. We’re doing this in ‘onCreate’ which is important because it has to come after ‘setContentView’. Until that point, your views don’t really ‘exist’ as far as the Java code is concerned.
Add the following code beneath onCreate:
compute=(Button)findViewById(R.id.btn1); result= (TextView)findViewById(R.id.textView2); teststring= getResources().getString(R.string.teststring);
So now you should have something that looks like this:
Note that we also set ‘teststring’ as the string we added to ‘strings.xml’.
Interactivity and time stamps
This is all good and well but right now the app still doesn’t do anything because our hash function never gets called. We need to make it so that clicking the ‘Begin sequence’ button actually executes it. To do this we added the function onBeginClick() and inside of it we call computeSHAHash().
Since this is a benchmark app, we want to know exactly how long the computeSHAHash function takes to complete. The best way to do this is to get a timestamp is with ‘System.nanoTime’. This doesn’t give us the actual time, but it does access the most accurate internal clock available to us, it operates in nano seconds. This ‘clock’ doesn’t correspond with the real time and date and shouldn’t be used to display such. However, it can nevertheless be used to measure time, which is what we want to do here.
So to take a time stamp before calling we used tsLong = System.nanoTime(); and then after the call to the SHA hash function we take another look at the clock and work out the difference like this: Long ttLong = System.nanoTime() – tsLong;
tsLong is a Long (numerical variable with decimal places) and stands for ‘time started’; whereas ttLong is ‘total time’. Total time is the time at the end of the function, minus ‘time started’. We can also add this to the information we display after the hash has been calculated. This means that the first version of the onBeginClick() function will look like this:
public void onBeginClick (View view) { tsLong = System.nanoTime(); computeSHAHash(teststring); Long ttLong = System.nanoTime() - tsLong; output = "SHA-1 hash: " + " " + HashValue + "\n Time Taken: " + ttLong.toString(); result.setText(output); }
The result should now show the hash value, plus the amount of time it took to calculate it in nanoseconds. The ‘\n’ just means ‘new line’ and can be used in any string. Go ahead and build the project and see how it runs!
Slowing it down
What you’ll find is that this function completes really quickly. This isn’t really much of a challenge for your CPU and that makes it very variable and not particularly useful for comparing different hardware. Try clicking ‘Begin Sequence…’ a few times and you’ll see that the time taken to complete varies drastically.
The solution is to make this a little more challenging. How? By doing the same thing 20,000 times. Pretty much any given task is going to be more difficult when you perform it 20,000 times. Try doing 20,000 press-ups for example. See? I just did that and I’m now slightly tireder than I would be if I had only done one.
To make our hash value function run 20,000 times, we’ll just use a ‘for’ loop. Like so:
for (Integer i = 0; i<20000; i++) { computeSHAHash(teststring); }
For loops run as long as the middle statement is ‘true’, while increasing a variable incrementally. The format is: variable being used, statement that must be true, amount to increase variable by’. We’re simply creating the integer ‘i’, increasing ‘i’ by one each time round and running the hash function over and over until i is equal to 20,000.
Now it takes a bit longer and we get a really long number denoting the number of milliseconds. My Galaxy S6 Edge+ takes roughly 801982125 nanoseconds to do this, which is 801 milliseconds. I’d like to display this as a nice ‘score’ that will remain fairly consistent, so I’m going to divide that number by 100,000,000, giving me ‘8’.
I’m showing all this new information in my final output, so the entire ‘onBeginClick’ is as follows:
public void onBeginClick (View view) { tsLong = System.nanoTime(); for (Integer i = 0; i<20000; i++) { computeSHAHash(teststring); } Long ttLong = System.nanoTime() - tsLong; tt = ttLong.toString(); Integer roundnumber = Math.round(ttLong / 100000000); String score = roundnumber.toString(); output = "SHA-1 hash: " + " " + HashValue + "\n Time Taken: " + tt + "\n Score: " + score; result.settext(output); }
This shows the following:
While I sometimes get a 7, it’s overall pretty consistent. If I run it on my wife’s Galaxy S3 though, we get a less impressive score:
Was this all just a very elaborate ruse to get my wife to upgrade her phone or at least run an update? You decide!
(It goes without saying that in this test, a lower score should be considered preferable. Like golf.)
Finishing up
So we can’t really use this for much but you can see that this simple test takes a variable amount of time from device to device which should at least roughly correlate with CPU performance. If you want to try it yourself but don’t want to write out all the code, you can nab the source from here.
In the next part of this series, I’ll be tidying things up a little with a progress bar and some more UI. Moreover, we’ll add some other tests such as MD5 and if there’s time, we can even make our own little algorithm and see how long each device would take to crack passwords of different lengths. We’ll also look at using threads to get a ‘multi-core’ score, similar to Geekbench.
So stay tuned for that and don’t be too upset if your friends’ phones outperform yours!
- omarionbooshie
- aaronkatrini
- Amalan Dhananjayan
- Joel Schmidt | http://www.androidauthority.com/write-an-android-cpu-benchmark-part-1-679929/ | CC-MAIN-2016-22 | refinedweb | 2,521 | 63.19 |
A library for parsing OpenStreetMap files using HXT into data structures.
Download from hackage at or install by cabal-install.
> cabal install OSM
This example returns all nodes tagged as camp-sites (tourism=camp_site) in the given OSM file.
import Data.Geo.OSM
campSites :: FilePath -> IO [Node]
campSites f = let p = filter ("tourism" `hasTagValue` "camp_site") . (nodes =<<)
in fmap p (readOsmFile f)
Updates the given OSM file with a new OSM file by replacing specific suffixes of ways tagged with "name". e.g. A way such as name="George St" will become name="George Street" and name="Wickham Tce" will become name="Wickham Terrace".
import Data.Geo.OSM
import Data.List
wayTags :: FilePath -> FilePath -> IO ()
wayTags = interactsOSM [" St" ==> " Street",
" Pl" ==> " Place",
" Tce" ==> " Terrace",
" Cct" ==> " Circuit"]
-- Updates the a given name suffix with a new suffix
(==>) :: (NodeWayRelations a)
=> String -- The suffix to fix with the new suffix.
-> String -- The new suffix.
-> a -- The OSM value.
-> a -- The new OSM value.
(==>) x = usingWay . usingTag' . (\y (k, v) ->
let v' = reverse v
in (k, if k == "name" && reverse x `isPrefixOf` v'
then reverse (reverse y ++ drop (length x) v')
else v)) | http://code.google.com/p/geo-osm/ | crawl-003 | refinedweb | 187 | 65.22 |
based only on its outline and silhouette.
We are going to apply the same principles in this post and quantify the outline of Pokemon using shape descriptors.
Looking for the source code to this post?
Jump right to the downloads section.
You might already be familiar with some shape descriptors, such as Hu moments. Today I am going to introduce you to a more powerful shape descriptor — Zernike moments, based on Zernike polynomials that are orthogonal to the unit disk.
Sound complicated?
Trust me, it’s really not. With just a few lines of code I’ll show you how to compute Zernike moments with ease.
Previous Posts
This post is part of an on-going series of blog posts on how to build a real-life Pokedex using Python, OpenCV, and computer vision and image processing techniques. If this is the first post in the series that you are reading, go ahead and read through it (there is a lot of awesome content in here on how to utilize shape descriptors), but then go back to the previous posts for some added context.
- Step 1: Building a Pokedex in Python: Getting Started (Step 1 of 6)
- Step 2: Building a Pokedex in Python: Scraping the Pokemon Sprites (Step 2 of 6)
Building a Pokedex in Python: Indexing our Sprites using Shape Descriptors
At this point, we already have our database of Pokemon sprite images. We gathered, scraped, and downloaded our sprites, but now we need to quantify them in terms of their outline (i.e. their shape).
Remember playing “Who’s that Pokemon?” as a kid? That’s essentially what our shape descriptors will be doing for us.
For those who didn’t watch Pokemon (or maybe need their memory jogged), the image at the top of this post is a screenshot from the Pokemon TV show. Before going to commercial break, a screen such as this one would pop up with the outline of the Pokemon. The goal was to guess the name of the Pokemon based on the outline alone.
This is essentially what our Pokedex will be doing — playing Who’s that Pokemon, but in an automated fashion. And with computer vision and image processing techniques.
Zernike Moments
Before diving into a lot of code, let’s first have a quick review of Zernike moments.
Image moments are used to describe objects in an image. Using image moments you can calculate values such as the area of the object, the centroid (the center of the object, in terms of x, y coordinates), and information regarding how the object is rotated. Normally, we calculate image moments based on the contour or outline of an image, but this is not a requirement.
OpenCV provides the
HuMoments function which can be used to characterize the structure and shape of an object. However, a more powerful shape descriptors can be found in the
mahotas package —
zernike_moments. Similar to Hu moments, Zernike moments are used to describe the shape of an object; however, since the Zernike polynomials are orthogonal to each other, there is no redundancy of information between the moments.
One caveat to look out for when utilizing Zernike moments for shape description is the scaling and translation of the object in the image. Depending on where the image is translated in the image, your Zernike moments will be drastically different. Similarly, depending on how large or small (i.e. how your object is scaled) in the image, your Zernike moments will not be identical. However, the magnitudes of the Zernike moments are independent of the rotation of the object, which is an extremely nice property when working with shape descriptors.
In order to avoid descriptors with different values based on the translation and scaling of the image, we normally first perform segmentation. That is, we segment the foreground (the object in the image we are interested in) from the background (the “noise”, or the part of the image we do not want to describe). Once we have the segmentation, we can form a tight bounding box around the object and crop it out, obtaining translation invariance.
Finally, we can resize the object to a constant NxM pixels, obtaining scale invariance.
From there, it is straightforward to apply Zernike moments to characterize the shape of the object.
As we will see later in this series of blog posts, I will be utilizing scaling and translation invariance prior to applying Zernike moments.
The Zernike Descriptor
Alright, enough overview. Let’s get our hands dirty and write some code.
As you may know from the Hobbits and Histograms post, I tend to like to define my image descriptors as classes rather than functions. The reason for this is that you rarely ever extract features from a single image alone. Instead, you extract features from a dataset of images. And you are likely utilizing the exact same parameters for the descriptors from image to image.
For example, it wouldn’t make sense to extract a grayscale histogram with 32 bins from image #1 and then a grayscale histogram with 16 bins from image #2, if your intent is to compare them. Instead, you utilize identical parameters to ensure you have a consistent representation across your entire dataset.
That said, let’s take this code apart:
- Line 2: Here we are importing the
mahotaspackage which contains many useful image processing functions. This package also contains the implementation of our Zernike moments.
- Line 4: Let’s define a class for our descriptor. We’ll call it
ZernikeMoments.
- Lines 5-8: We need a constructor for our
ZernikeMomentsclass. It will take only a single parameter — the
radiusof.
- Lines 10-12: Here we define the
describemethod, which quantifies our image. This method requires an image to be described, and then calls the
mahotasimplementation of
zernike_momentsto compute the moments with the specified
radius, supplied in Line 5.
Overall, this isn’t much code. It’s mostly just a wrapper around the
mahotas implementation of
zernike_moments. But as I said, I like to define my descriptors as classes rather than functions to ensure the consistent use of parameters.
Next up, we’ll index our dataset by quantifying each and every Pokemon sprite in terms of shape.
Indexing Our Pokemon Sprites
Now that we have our shape descriptor defined, we need to apply it to every Pokemon sprite in our database. This is a fairly straightforward process so I’ll let the code do most of the explaining. Let’s open up our favorite editor, create a file named
index.py, and get to work:
Lines 2-8 handle importing the packages we will need. I put our
ZernikeMoments class in the
pyimagesearch sub-module for organizational sake. We will make use of
numpy when constructing multi-dimensional arrays,
argparse for parsing command line arguments,
pickle for writing our index to file,
glob for grabbing the paths to our sprite images, and
cv2 for our OpenCV functions.
Then, Lines 11-16 parse our command line arguments. The
--sprites switch is the path to our directory of scraped Pokemon sprites and
--index points to where our index file will be stored.
Line 21 handles initializing our
ZernikeMoments descriptor. We will be using a radius of 21 pixels. I determined the value of 21 pixels after a few experimentations and determining which radius obtained the best performing results.
Finally, we initialize our
index on Line 22. Our index is a built-in Python dictionary, where the key is the filename of the Pokemon sprite and the value is the calculated Zernike moments. All filenames are unique in this case so a dictionary is a good choice due to its simplicity.
Time to quantify our Pokemon sprites:
Now we are ready to extract Zernike moments from our dataset. Let’s take this code apart and make sure we understand what is going on:
- Line 25: We use
globto grab the paths to our all Pokemon sprite images. All our sprites have a file extension of .png. If you’ve never used
globbefore, it’s an extremely easy way to grab the paths to a set of images with common filenames or extensions. Now that we have the paths to the images, we loop over them one-by-one.
- Line 28: The first thing we need to do is extract the name of the Pokemon from the filename. This will serve as our unqiue key into the index dictionary.
- Line 29 and 30: This code is pretty self-explanatory. We load the current image off of disk and convert it to grayscale.
- Line 35 and 36: Personally, I find the name of the
copyMakeBorderfunction to be quite confusing. The name itself doesn’t really describe what it does. Essentially, the
copyMakeBorder“pads” the image along the north, south, east, and west directions of the image. The first parameter we pass in is the Pokemon sprite. Then, we pad this image in all directions by 15 white (255) pixels. This step isn’t necessarily required, but it gives you a better sense of the thresholding on Line 39.
- Line 39 and 40: As I’ve mentioned, we need the outline (or mask) of the Pokemon image prior to applying Zernike moments. In order to find the outline, we need to apply segmentation, discarding the background (white) pixels of the image and focusing only on the Pokemon itself. This is actually quite simply — all we need to do is flip the values of the pixels (black pixels are turned to white, and white pixels to black). Then, any pixel with a value greater than zero (black) is set to 255 (white).
Take a look at our thresholded image below:
This process has given us the mask of our Pokemon. Now we need the outermost contours of the mask — the actual outline of the Pokemon.
First, we need a blank image to store our outlines — we appropriately a variable called
outline on Line 45 and fill it with zeros with the same width and height as our sprite image.
Then, we make a call to
cv2.findContours on Lines 46 and 47. The first argument we pass in is our thresholded image, followed by a flag
cv2.RETR_EXTERNAL telling OpenCV to find only the outermost contours. Finally, we tell OpenCV to compress and approximate the contours to save memory using the
cv2.CHAIN_APPROX_SIMPLE flag.
Line 48 handles parsing the contours for various versions of OpenCV.
As I mentioned, we are only interested in the largest contour, which corresponds to the outline of the Pokemon. So, on Line 49 we sort the contours based on their area, in descending order. We keep only the largest contour and discard the others.
Finally, we draw the contour on our outline image using the
cv2.drawContours function. The outline is drawn as a filled in mask with white pixels:
We will be using this outline image to compute our Zernike moments.
Computing Zernike moments for the outline is actually quite easy:
On Line 54 we make a call to our
describe method in the
ZernikeMoments class. All we need to do is pass in the outline of the image and the
describe method takes care of the rest. In return, we are given the Zernike moments used to characterize and quantify the shape of the Pokemon.
So how are we quantifying and representing the shape of the Pokemon?
Let’s investigate:
Here we can see that our feature vector is of 25-dimensionality (meaning that there are 25 values in our list). These 25 values represent the contour of the Pokemon.
We can view the values of the Zernike moments feature vector like this:
So there you have it! The Pokemon outline is now quantified using only 25 floating point values! Using these 25 numbers we will be able to disambiguate between all of the original 151 Pokemon.
Finally on Line 55, we update our index with the name of the Pokemon as the key and our computed features as our value.
The last thing we need to do is dump our index to file so we can use when we perform a search:
To execute our script to index all our Pokemon sprites, issue the following command:
Once the script finishes executing all of our Pokemon will be quantified in terms of shape.
Later in this series of blog posts, I’ll show you how to automatically extract a Pokemon from a Game Boy screen and then compare it to our index.
Summary
In this blog post, we explored Zernike moments and how they can be used to describe and quantify the shape of an object.
In this case, we used Zernike moments to quantify the outline of the original 151 Pokemon. The easiest way to think of this is playing “Who’s that Pokemon?” as a kid. You are given the outline of the Pokemon and then you have to guess what the Pokemon is, using only the outline alone. We are doing the same thing — only we are doing it automatically.
This process of describing an quantifying a set of images is called “indexing”.
Now that we have our Pokemon quantified, I’ll show you how to search and identify Pokemon later in this series of posts.
Dear Adrian,
I need to appreciate your kind sharing and wise approaches here. Actually, I am doing my PhD and on the way about one of my mini projects need to reconrtuct the images from Zernike moments extracted from Mahotas toolbox that you introduced. There is one code I found on the web but relate to matlab and am not very sure of that. I wonder if you could kindly give me an advice . Thank you very much.
Best regards.
Congrats on doing your PhD Hamid, that’s very exciting! As for Zernike Moments, I would suggest looking at the source code directly of the Mahotas implementation. You’ll probably need to modify the code to do the reconstruction. I would also suggest sending a message on GitHub to Luis, the developer and maintainer of Mahtoas — he is an awesome guy and knows a lot about CV.
So even if the image of the Pokemon to be determined is laterally inverted, will the zernike moments search still find it as the normal image in the index?
Correct, Zernike moments are invariant under rotation.
Hi Adrian
While calculating the zernlike moments how to determine or end up using the the right radius value for the specific set of images ?
Could you elaborate on this item of your post pls
Lines 5-8: We need a constructor for our ZernikeMoments class. It will take only a single parameter — the radius of.
The easiest way to do this is to compute the
cv2.minEnclosingCircleof the contour. This will give you a radius that encapsulates the entire object. Or, if you have a priori knowledge about the problem you can hardcode it. I discuss this more inside the PyImageSearch Gurus course.
I just wanted to say your posts are so well written. Even though I learned something I didn’t know, I did not feel lost or confused for a moment. Instead of aversion masquerading as boredom, it is *exciting* and *fun* to read the next line.
You may be naturally gifted, but I suspect you’ve put great energy and care into crafting these lessons.
Thank you
Thank you for the kind words Aven, I really appreciate that 🙂 Comments like these are a real pleasure to read and make my day.
Very nice tutorial Adrian!
One question, can you index multiple images for 1 pokemon?
If so, I can index the images of every generation (gold/silver,black/white,…) and my pokedex will become smarter. This way it might also be able to process a random image of a Pokemon and still be accurate.
Or am I wrong?
Thanks in advance!
Oscar
Hi Oscar — you can certainly generate as many indexes as you want.
Thanks a a lot. I was searching for Zernik moments. I found no thing except one web site providing MATLAB codes which was unclear to me. You explain all things practically, specially where results of codes are provided. I appreciate you dear Adrian.
Thank you, I’m really happy to hear that I could help 🙂
Dear Adrian,
I am getting the following problem when I run the code for this lesson:
$ python index.py –sprites sprites –index index.cpickle
Traceback (most recent call last):
File “index.py”, line 48, in
(cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
ValueError: too many values to unpack
Please suggest solution.
Thanks
Dear Adrian,
Changing the 45th line to the following did it for me.
_,cnts,_ = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
Could you explain what the content of cnts might be? Why is the sorted function required if cv2.RETR_EXTERNAL flag is used? For our case, the image contains only one pokemon sprite. Should it give more than one external contour?
Thanks for the tutorial.
This blog post assumed you are using OpenCV 2.4; however, you are using OpenCV 3, where the
cv2.findContoursreturn signature changed. You can read more about this change here. As for sorting the contours, sometimes there is “noise” in our input image. Sorting just ensures we grab the largest region.
can i use zernike moments to identify hand signs ( like stop, go , follow etc) or is there a more convenient way to do it??
You could, but accuracy wouldn’t be as good as object detection. For objects that do not rotate (or have little rotation) take a look at HOG + Linear SVM.
Hi, Adrian.
All your post are amazing. Thank you very much for your work. I am really learning a lot here. I really love AI, speech recognition and CV projects, due to your blog I am seriously thinking in starting my graduate studies in these areas.
I would like to know, is there any way to obtain an image from its Zernike Moments… I mean, with this piece of code
moments = desc.describe(outline)
we are able to get the Zernike Moments, is there any way to make the inverse process??? Are Zernike moments enough to get back the outline of the image, or we should use some any other kind of feature vectors.
Thanks, again
When you say “obtain an image from its Zernike Moments” are you referring to reconstructing the image based on Zernike Moments? Is there a particular reason you are doing the reconstruction?
if you are python 3,
pls import cPickle this way,
import _pickle as cPickle
Hi Adrian!
I’m really enjoying your posts. Everything is very well explained!
Regarding this post, I just wanted to mention that apparently in Python 3 the dumps() function has varied a little bit:
Python 2 : Return the pickled representation of the object as a string ()
Python 3: Return the pickled representation of the object as a bytes object ()
Thus, we only need to do this modification, in order to obtain a binary file, with the binary representation:
f = open(args[“index”], “wb”)
cPickle.dump(index,f)
f.close()
Thanks Claudia. I’ll also mention that if you’re using Python 3 you should just be using the “pickle” library rather than “cPickle”.
And this is the error showing on my terminal
…
f.write(cPickle.dump(index, “wb”))
TypeError: file must have a ‘write’ attribute
I found a fix to write the index into a file
with open(args[“index”], ‘wb’) as pickle_file:
cPickle.dump(index, pickle_file)
Dear Adrian! Thanks for your post! I noticed that instead of creating a tight bounding box and resizing images to a fixed size, you just created binary masks with the same shape to original images. My question is that will Zernike Moment of a pokemon image be better or worse if I tightly crop and resize it to a fixed size?
Great post. Thanks for sharing. | https://www.pyimagesearch.com/2014/04/07/building-pokedex-python-indexing-sprites-using-shape-descriptors-step-3-6/ | CC-MAIN-2020-05 | refinedweb | 3,326 | 72.36 |
The "using" keyword in C# is one of the best friends of programmers but many of us may not realize this. The "using" keyword is used in two cases - First when importing a namespace in your code and second in a code block.
Here I am talking about using the "using" in the code block.
Let's take a look at a typical code block for reading data from a database table and displaying it in a ListBox control using the DataReader. See Listing 1.
SqlConnection connection = new SqlConnection(connectionString);
SqlDataReader reader = null;
SqlCommand cmd = new SqlCommand(commandString, connection);
connection.Open();
reader = cmd.ExecuteReader();
while (reader.Read())
{
listBox1.Items.Add(reader[0].ToString() + ", " + reader[1].ToString());
}
reader.Close();
connection.Close();
Listing 1.
Now, let's think. What is wrong with the code in Listing 1? What if there is an exception on line listBox1.Items.Add? For example, if the reader[0] brings null data, the ToString() method would fail and there will be an exception and code will exit. As we know, if you open a SqlDataReader or SqlConnection, it is advised to close them to free the connections immediately. But in possible scenario of exception, it will not happen. The code lines reader.Close() and connection.Close() will not be executed if an exception occurs.
To make sure to close DataReader and Connection objects, one possible way of doing is use a try..catch..finally block and close DataReader and Connection in finally block. This will ensure that both DataReader and Connection are closed. See Listing 2.
SqlDataReader reader = null;
try
{
listBox1.Items.Add(reader[0].ToString() + ", " + reader[1].ToString());
}
catch(Exception exp)
// Do something with exception like display a message
finally
Listing 2.
Alternatively, you may use the "using" keyword that will also ensure that the DataReader and Connection objects are closed before exiting the loop. See Listing 3. As you can see from Listing 3, code is much tidy and under the hood, Listing 3 does what Listing 2 would do for you.
using ( SqlConnection connection = new SqlConnection(connectionString) )
using (SqlDataReader reader = cmd.ExecuteReader())
while (reader.Read())
{
}
Listing 3.
View All | http://www.c-sharpcorner.com/article/leveraging-the-using-keyword-in-C-Sharp/ | CC-MAIN-2017-39 | refinedweb | 355 | 60.61 |
Quote:
Update: ASP.NET 5 has been renamed to ASP.NET Core. Check my recent article on getting started with Angular2 in ASP.NET Core.
Update: ASP.NET 5 has been renamed to ASP.NET Core. Check my recent article on getting started with Angular2 in ASP.NET Core.
This post will walk you through the step-by-step procedure on building a simple ASP.NET 5 application using AngularJS with Web API.
Before we dig further, let’s talk about a quick overview of AngularJS and Web API in MVC 6..
AS. Read more here. If you are new to ASP.NET 5 then I would suggest you to read the following articles below to know more about the new features in ASP.NET:
To start, fire up Visual Studio 2015 and create a new ASP.NET 5 project by selecting File > New Project. In the dialog, under Templates > Visual C#, select ASP.NET Web Application as shown in the figure below:
Name your project to whatever you like and then click OK. For this example, I named the project as “AngularJS101”. Now after that, you should be able to see the “New ASP.NET Project” dialog:
AngularJS101
style="height: 433px; width: 640px" data-src="/KB/aspnet/1105435/Capture2.PNG" class="lazyload" data-sizes="auto" data->
Now select ASP.NET 5 Preview Empty template from the dialog above. Then click OK to let Visual Studio generate the necessary files and templates needed for you. You should be able to see something like below:
style="height: 488px; width: 640px" data-src="/KB/aspnet/1105435/Capture3.PNG" class="lazyload" data-sizes="auto" data->
The next thing to do is to create a new folder called “Scripts”. This folder will contain all the JavaScript files needed in our application:
ASP.NET 5 now supports three main package managers: NuGet, NPM and Bower.
A package manager enables you to easily gather all resources that you need for building an application. In other words, you can make use of package manager to automatically download all the resources and their dependencies instead of manually downloading project dependencies such as jQuery, Bootstrap and AngularJS in the web.
NuGet manages .NET packages such as Entity Framework, ASP.NET MVC and so on. You typically specify the NuGet packages that your application requires within project.json file.
NPM is one of the newly supported package managers in ASP.NET 5. This package manager was originally created for managing packages for the open-source NodeJS framework. The package.json is the file that manages your project’s NPM packages. your Project (in this case AngularJS101) and select Add > New Item. In the dialog, select NPM configuration file as shown in the figure below:
style="height: 452px; width: 640px" data-src="/KB/aspnet/1105435/Capture5.PNG" class="lazyload" data-sizes="auto" data->
Click Add to generate the file for you. Now open package.json file and modify it by adding the following dependencies:
{
"version": "1.0.0",
"name": "AngularJS101",
"private": true,
"devDependencies": {
"grunt": "0.4.5",
"grunt-contrib-uglify": "0.9.1",
"grunt-contrib-watch": "0.6.1"
}
}
Notice that you get Intellisense support while you edit the file. A matching list of NPM package names and versions shows as you type.
In package.json file, from the code above, we have added three (3) dependencies named grunt, grunt-contrib-uglify and grunt-contrib-watch NPM packages that are required in our application.
grunt
grunt-contrib-uglify
grunt-contrib-watch
Now save the package.json file and you should be able to see a new folder under Dependencies named NPM as shown in the following:
Right click on the NPM folder and select Restore Packages to download all the packages required. Note that this may take a bit to finish the download, so just be patient and wait ;). After that, the grunt, grunt-contrib-uglify and grunt-contrib-watch NPM packages should now be installed as shown in the following:, and finally save the results to a file named app.js within the wwwroot folder.
Now right click on your project and select Add > New Item. Select Grunt Configuration file from the dialog as shown in the figure below:
style="height: 401px; width: 640px" data-src="/KB/aspnet/1105435/Capture17.PNG" class="lazyload" data-sizes="auto" data->
Then, click Add to generate the file and then modify the code within the Gruntfile.js file so it will look like this:']
}
}
});
grunt.registerTask('default', ['uglify', 'watch']);
};
The code above contains three sections. The first one is used to load each of the Grunt plugins that we need from the NPM packages that we configured earlier. The initConfig() is responsible for configuring the plugins. The Uglify plugin is configured so that it combines and minifies all the files from the Scripts folder and generate the result in a file named app.js within wwwroot folder. The last section contains the definitions for your tasks. In this case, we define a single ‘default’ task that runs ‘uglify’ and then watches for changes in our JavaScript file.
initConfig()
Uglify
Now save the file and let’s run the Grunt file using Visual Studio Task Runner Explorer. To do this, go to View > Other Windows > Task Runner Explorer in Visual Studio main menu. In the Task Runner Explorer, make sure to hit the refresh button to load the tasks for our application. You should see something like this:
style="height: 209px; width: 500px" data-src="/KB/aspnet/1105435/Capture18.PNG" class="lazyload" data-sizes="auto" data->
Now, right click on the default task and select Run. You should be able to see the following output:
style="height: 202px; width: 600px" data-src="/KB/aspnet/1105435/Capture19.PNG" class="lazyload" data-sizes="auto" data->
There are two main files that we need to modify to enable MVC in our ASP.NET 5 application.
First, we need to modify the project.json file to include MVC 6 under dependencies:
"webroot": "wwwroot",
"version": "1.0.0-*",
"dependencies": {
"Microsoft.AspNet.Server.IIS": "1.0.0-beta3",
"Microsoft.AspNet.Mvc": "6.0.0-beta3"
},
"frameworks": {
"aspnet50": { },
"aspnetcore50": { }
},
Make sure to save the file to restore the packages required. The project.json file is used by the NuGet package manager to determine the packages required in your application. In this case, we’ve added Microsoft.AspNet.Mvc.
Microsoft.AspNet.Mvc
Now the last thing is to modify the Startup.cs file to add the MVC framework in the application pipeline. Your Startup.cs file should now look like this:
using System;
using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Http;
using Microsoft.Framework.DependencyInjection;
namespace AngularJS101
{
public class Startup
{
public void ConfigureServices(IServiceCollection services){
services.AddMvc();
}
public void Configure(IApplicationBuilder app){
app.UseMvc();
}
}
}
The ConfigureServices() method is used to register MVC with the ASP.NET 5 built-in Dependency Injection Framework (DI). The Configure() method is used to register MVC with OWIN.
ConfigureServices()
Configure()
The next step is to create a model that we can use to pass data from the server to the browser/client. Now create a folder named “Models” under the root of your project. Within the “Models” folder, create a class named “DOTAHero” and add the following code below:
DOTAHero
using System;
namespace AngularJS101.Models
{
public class DOTAHero
{
public int ID { get; set; }
public string Name { get; set; }
public string Type { get; set; }
}
}
Create another class called “HeroManager” and add the following code below:
HeroManager
using System.Collections.Generic;
using System.Linq;
namespace AngularJS101"},
};
public IEnumerable<DOTAHero> GetAll { get { return _heroes; } }
public List<DOTAHero> GetHeroesByType(string type) {
return _heroes.Where(o => o.Type.ToLower().Equals(type.ToLower())).ToList();
}
public DOTAHero GetHeroByID(int Id) {
return _heroes.Find(o => o.ID == Id);
}
}
}
The HeroManager class contains a readonly property that contains a list of heroes. For simplicity, the data is obviously static. In real scenario, you may need to get the data in a storage medium such as database or any files that stores your data. It also contains a GetAll property that returns all the heroes and a GetHeroesByType() method that returns a list of heroes based on the hero type, and finally a GetHeroByID() method that returns a hero based on their ID.
readonly
static
GetAll
GetHeroesByType()
GetHeroByID()
For this particular example, we will be using Web API for passing data to the browser/client.
Unlike previous versions of ASP.NET, MVC and Web API controllers used the same controller base class. Since Web API is now part of MVC 6, then we can start creating Web API controllers because we already pulled the required NuGet packages for MVC 6 and configured MVC 6 in startup.cs.
Now add an “API” folder under the root of the project:
Then add a Web API controller by right-clicking the API folder and selecting Add > New Item. Select Web API Controller Class and name the controller as “HeroesController” as shown in the figure below:
HeroesController
style="height: 413px; width: 640px" data-src="/KB/aspnet/1105435/Capture9.PNG" class="lazyload" data-sizes="auto" data->
Click Add to generate the file for you. Now modify your HeroesController class so it will look like this:
using System.Collections.Generic;
using Microsoft.AspNet.Mvc;
using AngularJS101.Models;
namespace AngularJS101.API.Controllers
{
[Route("api/[controller]")]
public class HeroesController : Controller
{
// GET: api/values
[HttpGet]
public IEnumerable<DOTAHero> Get()
{
HeroManager HM = new HeroManager();
return HM.GetAll;
}
// GET api/values/7
[HttpGet("{id}")]
public DOTAHero Get(int id)
{
HeroManager HM = new HeroManager();
return HM.GetHeroByID(id);
}
}
}
At this point, we will only be focusing on GET methods to retrieve data. The first GET method returns all the heroes available by calling the GetAll property found in HeroManager class. The second GET method returns a specific hero data based on the ID.
GET
ID
You can test whether the actions are working by running your application in the browser and appending the /api/heroes in the URL. Here are the outputs for both GET actions:
Route: /api/heroes
style="height: 321px; width: 640px" data-src="/KB/aspnet/1105435/Capture10.PNG" class="lazyload" data-sizes="auto" data->
Route: /api/heroes/7
style="height: 323px; width: 640px" data-src="/KB/aspnet/1105435/Capture11.PNG" class="lazyload" data-sizes="auto" data->
Visual Studio 2015 includes templates for creating AngularJS modules, controllers, directives and factories. For this example, we will be displaying the list of heroes using an AngularJS template.
To get started, let's create an AngularJS module by right-clicking on the Scripts folder and selecting Add > New Item. Select AngularJS Module as shown in the figure below:
style="height: 435px; width: 640px" data-src="/KB/aspnet/1105435/Capture12.PNG" class="lazyload" data-sizes="auto" data->
Click Add to generate the file and copy the following code for our AngularJS module:
(function () {
'use strict';
angular.module('heroesApp', [
'heroesService'
]);
})();
The code above defines a new AngularJS module named “heroesApp”. The heroesApp has a dependency on another AngularJS module named “heroesService” which we will create later in the next step.
heroesApp
heroesService
The next thing to do is to create a client-side AngularJS Controller. Create a new folder called “Controllers” under the Script folder as in the following:
Then, right click on the Controllers folder and select Add > New Item. Select AngularJS Controller using $scope as shown in the figure below:
style="height: 378px; width: 640px" data-src="/KB/aspnet/1105435/Capture14.PNG" class="lazyload" data-sizes="auto" data->
Click Add and copy the following code below within your heroesController.js file:
(function () {
'use strict';
angular
.module('heroesApp')
.controller('heroesController', heroesController);
heroesController.$inject = ['$scope','Heroes'];
function heroesController($scope, Heroes) {
$scope.Heroes = Heroes.query();
}
})();
`
The code above depends on the Heroes service that supplies the list of heroes. The Heroes service is passed to the controller using dependency injection (DI). The $inject() method call enables DI to work. The Heroes service is passed as the second parameter to the heroesController() function.
$inject()
heroesController()
We will use an AngularJS Heroes service to interact with our data via Web API. Now add a new folder called “Services” within the Script folder. Right click on the Services folder and select Add > New Item. From the dialog, select AngularJS Factory and name it as “heroesService.js” as in the following:
style="height: 419px; width: 640px" data-src="/KB/aspnet/1105435/Capture15.PNG" class="lazyload" data-sizes="auto" data->
Now click Add and then replace the generated default code with the following:
(function () {
'use strict';
var heroesService = angular.module('heroesService', ['ngResource']);
heroesService.factory('Heroes', ['$resource',
function ($resource) {
return $resource('/api/heroes', {}, {
query: { method: 'GET', params: {}, isArray: true}
});
}
]);
})();
The code above basically returns a list of heroes from the Web API action. The $resource object performs an AJAX request using a RESTful pattern. The heroesService is associated with the /api/heroes route on the server. This means that when you perform a query against the service from your client-side code, the Web API HeroesController is invoked to return a list of heroes.
$resource
Let’s add an AngularJS template for displaying the list of heroes. To do this, we will need an HTML page to render in the browser. In the wwwroot folder, add a new HTML page and name it as “index” for simplicity. Your application structure should now look like this:
The wwwroot folder is a special folder in your application. The purpose is that the wwwroot folder should contain all contents of your website such as HTML files and images needed for your website.
You should not place any of your source code within the wwwroot folder. Instead, source codes such as MVC controllers’ source, model classes and unminified JavaScript and LESS files should be placed outside of the wwwroot folder.
Now, replace the content of index.html with the following:
<!DOCTYPE html>
<html ng-
<head>
<meta charset="utf-8" />
<title>DOTA 2 Heroes</title>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.15/angular.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.3.15/angular-resource.js"></script>
<script src="app.js"></script>
</head>
<body ng-cloak>
<div ng-
<h1>DOTA Heroes</h1>
<table>
<thead>
<tr>
<th>ID</th>
<th>Name</th>
<th>Type</th>
</tr>
</thead>
<tbody>
<tr ng-
<td>{{hero.ID}}</td>
<td>{{hero.Name}}</td>
<td>{{hero.Type}}</td>
</tr>
</tbody>
</table>
</div>
</body>
</html>
There are several things to point out from the markup above:
The html element is embedded with the ng-app directive. This directive associates the heroesApp with the HTML file.
html
ng-app
In the script section, you will notice that I use Google CDN for referencing AngularJS and related libraries. Besides being lazy, it’s my intent to use CDN for referencing standard libraries such as jQuery, AngularJS and Bootstrap to boost application performance. If you don’t want to use CDN, then you can always install AngularJS packages using Bower.
script
The body element is embedded with the ng-cloak directive. This directive hides an AngularJS template until the data has been loaded in the page.
The div element within the body block is embedded with the ng-controller directive. This directive associates the heroesController and renders the data within the div element.
body
ng-cloak
div
ng-controller
heroesController
Finally, the ng-repeat directive is added to the tr element of the table. This will create row for each data that retrieved from the server.
ng-repeat
tr
Here’s the output below when running the page and navigating to index.html:
style="height: 293px; width: 640px" data-src="/KB/aspnet/1105435/Capture20.PNG" class="lazyload" data-sizes="auto" data->
That’s it! I hope you will find this post useful and fun. Stay tuned for more!
CodeProject
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) | https://www.codeproject.com/Articles/1105435/ASP-NET-Jump-Start-to-AngularJS-with-MVC-Web-API?PageFlow=FixedWidth | CC-MAIN-2021-31 | refinedweb | 2,629 | 58.08 |
Hi there fellas. In this post i am going to tell you about pickle. It is used for serializing and de-serializing a Python object structure. Any object in python can be pickled so that it can be saved on disk. What pickle does is that it “serialises” the object first before writing it to file. Pickling is a way to convert a python object (list, dict, etc.) into a character stream. The idea is that this character stream contains all the information necessary to reconstruct the object in another python script.
So lets continue:
1. First of all you have to import it through this command:
import pickle
pickle has two main methods. The first one is dump, which dumps an object to a file object and the second one is load, which loads an object from a file object.
2. Prepare something to pickle:
Now you just have to get some data which you can pickle. For the sake of simplicity i will be pickling a python list. So just read the below code and you will be able to figure it out yourself.
import pickle a = ['test value','test value 2','test value 3'] a ['test value','test value 2','test value 3'] file_Name = "testfile" # open the file for writing fileObject = open(file_Name,'wb') # this writes the object a to the # file named 'testfile' pickle.dump(a,fileObject) # here we close the fileObject fileObject.close() # we open the file for reading fileObject = open(file_Name,'r') # load the object from the file into var b b = pickle.load(fileObject) b ['test value','test value 2','test value 3'] a==b True
The above code is self explanatory. It shows you how to import the pickled object and assign it to a variable. So now we need to know where we should use pickling. It is very useful when you want to dump some object while coding in the python shell. So after dumping whenever you restart the python shell you can import the pickled object and deserialize it. But there are other use cases as well which i found on stackoverflow. Let me list them below.
1) saving a program's state data to disk so that it can carry on where it left off when restarted (persistence)
2) sending python data over a TCP connection in a multi-core or distributed system (marshalling)
3) storing python objects in a database
4) converting an arbitrary python object to a string so that it can be used as a dictionary key (e.g. for caching & memoization).
One thing to note is that there is a brother of pickle as well with the name of cpickle. As the name suggests it is written in c which makes it 1000 times more faster than pickle. So why should we ever use pickle instead of cpickle ? Here’s the reason
>> Because pickle handles unicode objects.
>> Because pickle is written in pure Python, it's easier to debug.
For further reading i suggest the official pickle documentation or if you want to read more tutorials then check out the sqlite tutorial. Now we come to the end of today’s post. I hope you liked it. Do follow my blog to get regular updates. If you have any suggestions or comments then post them below.
31 thoughts on “What is Pickle in python ?”
Thanks, for the pickle example
Nice explanation, very precise !
Reblogged this on Nate's XBRL Blog and commented:
A nice summary of Pickling
Thank’s a lot! I’m using ‘pickle’ to identify unussual AS numbers in an IP networks.
In the example ‘a’ is a list, not a dictionary as stated in the explanatory text.
Hey there! Sorry for not correcting it before. I have corrected it now. I am not sure how that typo slipped through. Thank you very much for letting me know. 🙂
Good
This is more a tutorial on how to pickle that what pickling is and it’s workings in python.
Really very nice explanation….
Good post than I found on stackoverflow, very precise and to the point
Great, thanks for answering my query.
Really to the point article. As pointed earlier also, please update the text to mention the example is of a list , not a dictionary.
Thanks for letting me know. I don’t know how that typo slipped through and why I didn’t correct it before. 🙂 My bad.
This helped me understand pickle in Python a tad bit more, but I would have liked to see a real-world example to understand it is used (as where you mentioned storing a state).
rather “understand how it is used”….
Thanks for step-by-step tutorial!
Thanks. This was a good introduction to pickle.
Yasoob, nice work! I ran your example on python3 and wanted to point out to your readers a minor change for python3 compatibility. Python3 requires an encoding parameter to the open method for reading binary files. The line “b = pickle.load(fileObject)” will raise an exception unless “fileObject = open(file_Name,’r’)” is changed to “fileObject = open(file_Name,’rb’).
Hey Jeff! Thanks for pointing this out. It’s been a long time since I wrote this post so thank you very much for correcting it 🙂
yes I was going to raise the same point. Great Example BTW
Thanks for the explanation!
that’s helpful , thanks 🙂
very nice explanation
Thanks for Simple and yet clear explanation!!
While loading the pickled object back, the file should be opened in “rb” mode instead of just ‘r’ since we have dumped initially by writing in raw binary “wb” mode. If not done, this throws the error (UnicodeDecodeError: ‘utf-8’ codec can’t decode byte 0x80 in position 0: invalid start byte).
My friend, thanks for sharing. However if u load pickle u need to open file as ‘rb’ mode also, otherwise it will not work.
So add to: fileObject = open(file_Name,’r’) —> fileObject = open(file_Name,’rb’)
[Notice ‘rb’ in mode] 🙂
You explained it well, good job. Take care
Very clear to understand, thanks alot.
nice explanation , getting clearly first attempt itselft..hanks yasoob
Really helpful !!! | https://pythontips.com/2013/08/02/what-is-pickle-in-python/ | CC-MAIN-2018-51 | refinedweb | 1,021 | 74.9 |
single string, you query:
mysql> SELECT GROUP_CONCAT(Language) As Languages FROM CountryLanguage WHERE CountryCode = 'THA';
Then the output will be:
You can also use some format of GROUP_CONCAT(). Like
- SELECT GROUP_CONCAT( Language SEPARATOR ‘-‘ )… It will use ‘-‘ instead of ‘,’
- SELECT GROUP_CONCAT( Language ORDER BY Language DESC )… To change the order and shorting output
One thing to remember: GROUP_CONCAT() ignores NULL values.
Advertisements
October 20, 2008 at 5:50 am
update admin_messages
set subject=’function’
where messageId = (select max(messageId) from admin_messages)
this query is create error plz. solve this query solution
regards
Parveen Sharma
March 4, 2011 at 10:35 am
It is not possible to update the same table and use that in the from clause in your subquery. It could be rewritten using joins
October 20, 2008 at 6:15 am
possible more than row result in this format example
name marks
abc 36
xyz 56
def 76
abc 86
abc 66
def 76
Result is
abc xyz def
36 56 76
86 76
66
regards
Parveen Sharma
August 16, 2012 at 9:30 pm
Did you ever get a query to accomplish this? I’m looking to do something quite similar and am unsure how to do it.
October 25, 2009 at 8:44 pm
I think group_concat() has actually different purpose:
e.g. To group or show the people who lives in same address, or show the students who have same score.
query : select student, group_concat(score) from student group by student_name;
and if you want to group which has more than one people, the query can be:
select student, group_concat(score) from student group by student_name having count(student_name) > 1;
I need this email for one of my projects.
Happy to share here 🙂
November 16, 2009 at 4:42 pm
Thanks for the write up, very helpful.
April 22, 2010 at 11:50 pm
Thanks a ton!! This is exactly what I was looking for!
November 30, 2010 at 12:23 pm
Thanks man
August 20, 2011 at 5:21 am
This was useful, thanks! In addition to the above examples, you can use the DISTINCT keyword to get only unique values within the GROUP_CONCAT. e.g. GROUP_CONCAT(DISTINCT Languages)
September 9, 2011 at 10:00 pm
thanks much this is most enlightening Parveen,
is there a function that does the reverse of group_concat
ie. from languages: english,french,chinese
to generate separate rows as follows:
chinese
September 9, 2011 at 10:58 pm
reverse group_concat in order to resolve the following:
table1 project=A lang=chinese,french,english,spanish
table2 lang texte
==== ====
spanish ola
french bonjour
german hallo
i need to do a join of the 2 tables to obtain only the records of table2 based on the languages in table1 from record project=A which has lang=chinese,french,english
such that the result would be
lang texte
==== ====
french bonjour
spanish ola
thanks much,
October 8, 2011 at 12:40 am
Wonderfully useful.
I just used this to create a simple function for looking up users based on profile information in Drupal 6.
see:
May 7, 2012 at 3:29 pm
I had been really glad to learn your internet site, it features specifically the content I’m planning to locate.
May 10, 2012 at 9:35 am
Thanks useful information
May 11, 2012 at 10:36 am
False Hairpiece Com their wide selection of make Synthetic Flowing hair Hair pieces, Particularly long Imitation Wigs, Reasonably priced Fake Wigs, Fast Fabricated Wigs, Internal Fabricated Hairpiece and
May 26, 2012 at 1:23 am
keren. terima kasih..
July 5, 2012 at 7:05 am
Grete Post very Helpful function
December 3, 2012 at 2:11 pm
Thanks, Very easy to understand via example.
December 13, 2012 at 9:02 am
Thanks Qiu…
very usefull…
March 9, 2013 at 1:41 pm
After exploring a number of the blog posts on your website, I seriously like your technique of
writing a blog. I saved it to my bookmark webpage list and will be checking back soon.
Please visit my website too and tell me what you think.
March 24, 2013 at 10:24 pm
I really desired to discuss this particular post,
Roman Shades “MySQL – The GROUP_CONCAT() function | Think Different” together with my own pals on facebook itself.
Ijust simply planned to disperse ur terrific writing!
Many thanks, Geneva
April 7, 2013 at 5:43 pm
Someone necessarily assist to make critically articles I would state.
This is the first time I frequented your website page and to this point?
I amazed with the analysis you made to create this actual
post incredible. Wonderful job!
April 30, 2013 at 5:02 am
I’m very happy to find this page. I need to to thank you for your time due to this wonderful read!! I definitely enjoyed every little bit of it and I have you bookmarked to see new things on your site.
June 18, 2013 at 3:05 pm
Great post. I was checking constantly this weblog and I am impressed!
Very helpful info specially the final phase 🙂 I take care of such information much.
I used to be seeking this certain info for a very long time.
Thank you and good luck.
July 1, 2013 at 9:32 pm
“MySQL – The GROUP_CONCAT() function | Think Different” was a superb
blog post, can’t help but wait to read through more of ur posts.
Time to waste numerous time on-line lol. Thanks for your effort ,Victorina
July 9, 2013 at 10:34 pm
This particular blog, “MySQL – The GROUP_CONCAT() function |
Think Different” was in fact outstanding.
I am printing out a duplicate to demonstrate to my close friends.
Thanks for the post-Reyes
July 12, 2013 at 7:05 am
Greate!!
August 13, 2013 at 5:38 pm
Excellent post. I was checking continuously this weblog and I’m impressed! Very helpful information particularly the ultimate section 🙂 I care for such info much. I used to be looking for this certain information for a very lengthy time. Thank you and best of luck.
November 11, 2013 at 1:32 pm
How can I add a hyperlink to the comma separated values? Imagine I use as the separator can i use those as individual hyperlinks?
December 4, 2013 at 6:19 am
test
December 31, 2013 at 3:14 pm
Hi there very nice site!! Man .. Excellent ..
Wonderful .. I’ll bookmark your web site and
take the feeds also? I’m satisfied to find a lot of useful information here in the post, we’d like develop more strategies
on this regard, thanks for sharing. . . .
. .
July 18, 2014 at 9:01 am
nice
July 18, 2014 at 9:01 am
luking good
July 18, 2014 at 9:02 am
July 18, 2014 at 9:03 am
test
April 8, 2015 at 10:55 am
What i don’t understood is in fact how you’re now
not really a lot more smartly-preferred than you might be now.
You are very intelligent. You realize therefore considerably with regards to
this subject, produced me in my opinion believe it from numerous varied angles.
Its like women and men don’t seem to be fascinated until it is something to accomplish with Woman gaga!
Your individual stuffs great. Always maintain it up!
April 25, 2016 at 1:19 pm
Hello,
I am having problem to search data from two tables where both fields have comma separated values in different order.
For instance, I have two tables:
1. Users
id gender ethnicity
1 male asian,american,african
2 female asian,african,american
3 female any
2. Roles
id ethnicity
25 american,asian,african
102 african,american,asian
451 any
402 any,pecific islander
Now, I want to fetch data from both the tables on the basis of “ethnicity” field. Condition is, at least one value should match from both the fields. Here after comparing both the tables there may be multiple records we may have but I need one record for same user and same role. If same user will have same role and will have multiple records then we need one of all records.
I did R&D but didn’t find anything for this case. So please help me out for this and give me your best solution for this as soon as possible. I hope I will get a solutions from your side for sure.
Thanks in Advance.. 🙂 | https://mahmudahsan.wordpress.com/2008/08/27/mysql-the-group_concat-function/ | CC-MAIN-2017-13 | refinedweb | 1,403 | 68.7 |
from matplotlib import cm
import seaborn as sns
import matplotlib.pyplot as plt
cmap = [cm.inferno(x)[:3] for x in range(0,256)]
sns.palplot(cmap)
cmap2 = [cm.inferno(x)[:3] for x in range(0,256)][100:]
sns.palplot(cmap2)
I believe that by "same resolution" you mean that you want 256 colors in the palette. I would actually think of this as having a different resolution from the original palette in sense that the values are closer together in the color space. In any case, I think you can get what you want by doing:
import numpy as np import seaborn as sns from matplotlib import cm x = np.linspace(.3, 1, 256) pal = cm.inferno(x) sns.palplot(pal) | https://codedump.io/share/5E1VAFQAfCMP/1/i-would-like-to-remove-the-first-n-colors-from-a-colormap-without-losing-the-original-number-of-colours | CC-MAIN-2018-22 | refinedweb | 123 | 77.03 |
Technical Article
Developing Web Services with Java 2 Platform, Enterprise Edition (J2EE) 1.4 Platform:
- A brief overview of web services
- Overview of JSR 109
- Examples of web services and clients
- A flavor of the effort involved in developing web services using the J2EE 1.4 platform
- Sample code that you can adapt to your own web service applications
Overview of Web Services.
Figure 1: J2EE 1.4 Publish-Discover-Invoke.
J2EE 1.4 SDK
- J2EE 1.4 Application Server
- Java 2 Platform, Standard Edition (J2SE) 1.4.2_01
- J2EE Samples (Java Pet Store, Java Adventure Builder, Smart Ticket, and others)
- Sun ONE Message Queue
- PointBase Database Server.
JSR 109:
- Development: Standardizes the web services programming model as well as the deployment descriptors
- Deployment: Describes the deployment actions expected of a J2EE 1.4 container
- Service publication: Specifies how the WSDL is made available to clients
- Service consumption: Standardizes the client deployment descriptors and a JNDI lookup model
J2EE Web Services.
Figure 2: A Java client calling a J2EE web service.
Working with JAX-RPC::
- It must have a public default constructor.
- It must not implement
java.rmi.Remote.
- Its fields must be JAX-RPC supported types. Also, a
publicfield cannot be
finalor
transient, and a non-public field must have the corresponding getter and setter methods.
Creating Web Services
Building an XML-RPC style web service using the J2EE 1.4 platform involves five steps:
- Design and code the web service endpoint interface.
- Implement the interface.
- Write a configuration file.
- Generate the necessary files.
- Use the:
- It extends the
java.rmi.Remoteinterface
- It does not have constant declarations such as
public static final
- Its methods throw the
java.rmi.RemoteException(or one of its subclasses)
- Its method parameters and return data types are supported JAX-RPC types:
- The service name is
MyFirstService.
- The WSDL namespace is
urn:Foo.
- The classes for the service are in the
mathpackage under the
builddirectory.
- The service endpoint interface is
<?xml version="1.0" encoding="UTF-8"?> <definitions name="MyFirstService" targetNamespace="urn:Foo">
<?xml version="1.0" encoding="UTF-8"?> <java-wsdl-mapping >.
Creating Web Service Clients
Now, let's create a client that accesses the math service you have just deployed. A client invokes a web service in the same way it invokes a method locally. There are three types of web service clients:
- Static Stub: A Java class that is statically bound to a service endpoint interface. A stub, or a client proxy object, defines all the methods that the service endpoint interface defines. Therefore, the client can invoke methods of a web service directly via the stub. The advantage of this is that it is simple and easy to code. The disadvantage is that the slightest change of web service definition lead to the stub being useless... and this means the stub must be regenerated. Use the static stub technique if you know that the web service is stable and is not going to change its definition. Static stub is tied to the implementation. In other words, it is implementation-specific.
- Dynamic Proxy: Supports a service endpoint interface dynamically at runtime. Here, no stub code generation is required. A proxy is obtained at runtime and requires a service endpoint interface to be instantiated. As for invocation, it is invoked in the same way as a stub. This is useful for testing web services that may change their definitions. The dynamic proxy needs to be re-instantiated but not re-generated as is the case with stub.
- Dynamic Invocation Interface (DII): Defines
<).
Figure 3: MyFirstService.wsdl()); } } and
lib/endorsed/*.jar to
jdk\jre\lib\ext; this way, you don't have to worry about the classpath for the JAX-RPC classes.
Run the client:
package dynamicproxy; import java.net.URL; import javax.xml.rpc.Service; import javax.xml.rpc.JAXRPCException; import javax.xml.namespace.QName; import javax.xml.rpc.ServiceFactory; import dynamicproxy
And, now run the client:
C:\Sun\APPSER~1\apps\dynamic-proxy> java -classpath build dynamicproxy.MathClient
35
Command-Line J2EE Application Client:); }
Browser-Based Client
<html> <head><title>Hello</title></head> <body bgcolor="#ffcccc"> <h4>Welcome to the Math Web Service</h4>The sum is: <%=result%></h4>.
Figure 4: A JSP-based web client calling a web service
Conclusion.
Acknowledgments
Special thanks to Vijay Ramachandran and Dennis MacNeil of Sun Microsystems, whose feedback helped me improve this article. | https://www.oracle.com/technical-resources/articles/javaee/j2ee-ws.html | CC-MAIN-2021-25 | refinedweb | 729 | 51.24 |
Get Rid of Namespace Parameters When Working with Kubernetes
Have you grown sick and tired of typing
kubectl -n someverylongnamespacename all day long?
There’s a few ways to ditch the namespace flag from every. single. command. you issue. The best choice for you, depends on the number of clusters you work with, their stability and your workflows.
Create a Shell Alias
If you just want to type less and are working with a single namespace, you could
just create an alias in your
.bashrc,
.zshrc or similar. It’s not
fancy, but gets the job done.
You could do it in one line:
alias k='kubectl -n NAMESPACE_NAME'
And use it like this:
$ k get pods
Keeping it simple and practical. If you have multiple namespaces, the next method will serve you better however.
Create a Context
If you are working in a small number of stable, well defined namespaces, you can use contexts to your advantage. This way, you can specify a user, cluster and namespace to use for all subsequent commands.
You can get your current context with
$ kubectl config current-context
And create a new one using a simple command:
$ kubectl config set-context CONTEXT_NAME --namespace=NAMESPACE_NAME \ --cluster=CLUSTER_NAME \ --user=USER_NAME
All of the above upper-case names should be replaced by you with ones which
you need. Look the values up in your
.kube/config file. They don’t need to be upper-case and underscores of course, here’s an example:
$ kubectl config set-context monitoring --namespace=monitoring --cluster=kubernetes --user=kubernetes-admin
From now on, you can switch to that context at any future time with:
$ kubectl config use-context CONTEXT_NAME
Now all your future commands will be issued in that namespace, without specifying a
-n NAMESPACE_NAME or
--namespace=NAMESPACE_NAME flag.
The current context is saved in your config file, and will persist until you change it again.
Fancier
If you don’t mind installing and using helper tools, then
kubens and
kubectx
are worth checking out. You can use them to switch between contexts and namespaces smoothly, with a nice ux.
Get them via a method which suits your OS described on the GitHub repo of the project.
An example usage pattern could be:
$ kubectx CONTEXT_NAME $ kubectx - # back to the previous context | https://vsupalov.com/get-rid-of-kubernetes-namespace-parameters/ | CC-MAIN-2021-31 | refinedweb | 380 | 59.74 |
FULL PRODUCT VERSION : java version "1.5.0_06" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_06-b05) Java HotSpot(TM) Client VM (build 1.5.0_06-b05, mixed mode) java version "1.5.0_08" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_08-b03) Java HotSpot(TM) Client VM (build 1.5.0_08-b03, mixed mode, sharing) java version "1.6.0-beta2" Java(TM) 2 Runtime Environment, Standard Edition (build 1.6.0-beta2-b72) Java HotSpot(TM) Client VM (build 1.6.0-beta2-b72, mixed mode) ADDITIONAL OS VERSION INFORMATION : Windows XP SP2 EXTRA RELEVANT SYSTEM CONFIGURATION : 2GB RAM A DESCRIPTION OF THE PROBLEM : Attempting to read a large file in one chunck with FileInputStream.read() with TOO large heap space causes OutOfMemory exception. According to spec, AFIK, even if the read() runs out of space it should just return with as much data as it can, not throw an exception. And in any case there was plenty of memory available, and further more with smaller heap the code runs ok! STEPS TO FOLLOW TO REPRODUCE THE PROBLEM : Run the attached code with heap space (-Xmx) 1300M and 1400M. With 1300M it runs ok, with 1400M it throws an exception. The "bigfile" in my test case had a length of 251��503��002 bytes. I can provide the file, but it should not make difference as the programs just tries to read all the bytes. EXPECTED VERSUS ACTUAL BEHAVIOR : EXPECTED - Either the FileInputStream.read() should have read the whole file (as there was 1400 MB of memory available and the file size was 'only' 250MB) or it should have read some smaller part of the file and returned the number of bytes read. It should not have thrown an exception. ACTUAL - With TOO large heap size the code throws OutOfMemory exception. ERROR MESSAGES/STACK TRACES THAT OCCUR : C:\tests>"C:\Program Files\Java\jre1.6.0\bin\java.exe" -Xmx1400m -classpath . r3 d size: 251503002 buf ok Exception in thread "main" java.lang.OutOfMemoryError at java.io.FileInputStream.readBytes(Native Method) at java.io.FileInputStream.read(Unknown Source) at r3d.main(r3d.java:19) REPRODUCIBILITY : This bug can be reproduced always. ---------- BEGIN SOURCE ---------- import java.awt.*; import java.io.*; public class r3d { public static void main(String[] args) { try { int size = 501*501*501*2; FileInputStream fis = new FileInputStream("bigfile"); // Any file with size >= 501*501*501*2 System.out.println("size: " + size); byte buf[] = new byte[size]; System.out.println("buf ok"); int bytesRead = fis.read(buf, 0,size); System.out.println("Bytes read " + bytesRead); } catch (Exception e) { e.printStackTrace(); System.out.println(e); System.out.println(); } } } ---------- END SOURCE ---------- CUSTOMER SUBMITTED WORKAROUND : Attempting to read the file in multiple smaller chunks seem to work,. | https://bugs.java.com/bugdatabase/view_bug.do?bug_id=6478546 | CC-MAIN-2018-05 | refinedweb | 463 | 60.72 |
ASP.NET detects browser capabilities (to use CSS, JavaScript etc.) and
generates proper HTML for respective browser from downlevel and up.
The only problem you may encounter if you use older .NET Framework (below
version 4.5 which is current at the time of this answer) with the latest
browsers (such as IE10, IE11 - latest at the time of this answer). Then it
may not recognize the browser and generate incorrect HTML/JavaScript.
Otherwise non MS-browsers have no problem accessing and vieweing ASP.NET
Web sites.
P.S. Of course you can encounter problems if you write IE-specific
client-site JavaScript code, but that would be unrelated to ASP.NET
Try some sort of emulator:
There are many more, also refer to:
how to install multiple versions of IE on the same system?
This is not an answer, but I want to show you a screenshot.
Id is a string value. As the result, when I copy and paste in other
browser, it doesn't work.
I'm using Windows 7 US English only in IE10.
Mark up is like this without src in image -
<img id="ContentOfPage_Image1" src="" style="height:250px;width:250px;"
/>
I'm not familiar with the language. Theoretically, ID should be a number
value instead of string. It is just my 2 cents.
I guess the main problem is that you use pixel precise values for your
images.
<a href="" class="logo" style="width: 1024px; height:
150px;">
<img
src="" alt=""
width="1024" height="162" class="logo_def">
</a>
So, you should change your fixed width values into percentages and removing
the height on your images like:
<a href="" class="logo" style="width: 100%;">
<img
src="" alt=""
class="logo_def">
</a>
I ran your page in internet explorer 8 and it is displayed very wrong, but
after looking at your css file, I noticed some major syntax mistakes as
well. I am not sure of your experience level, and sometimes cross-browser
display can get a little complicated but it looks like your css file needs
some of the basics fixed first. This section in particular:
EDIT: Also, nav is a html5 element which is only supported by IE9 and
above.
nav ul ul {
display: none;
}
nav ul li:hover > ul {
display: block;
}
nav ul:after {
content: ""; clear: both; display: block;
}
nav ul li {
float: left;
}
nav ul li:hover a {
color: #fff;
}
nav ul ul {
background: #5f6975; border-radius: 0px; padding: 0;
position: absolute; top: 1
In Chrome, there is more files download (47) than in firefox/IE (42). With
my konqueror I don't have any problem.
So its look like security issues. For somes resons IE/Firefox don't want to
dowload file that are not where the want.
If I'm right, a solution could be to put all files (even thoses of
wordpress) and all ressources on servers where all security problems have
been solved.
Did you try selenium ?
from selenium import webdriver
driver = webdriver.Ie()
login_btn = driver.find_element_by_id("toolbar-login")
login_btn.send_keys(Keys.RETURN)
user_name = driver.find_element_by_id("ClientBarcode")
user_name.sendKeys("user")
user_name.sendKeys(Keys.RETURN)
Seems to be an IE bug to me.
It calculates the height of the <tr> through the height of the
contents of the highest <td>, disregarding the rowspan="2".
If you remove the rowspan attribute, you can kind of see why it's doing
what it's doing.
The problem for IE is that every single <td> in your second row (the
row containing the <div class="fadein">-cells) has rowspan="2".
Remove that, and the page will no longer show that odd margin/padding in
IE.
It still doesn't look the same, my guess is more rogue rowspans... :)
First: The X-UA-Compatible tag has to be the very first tag in the <
head > section.
Try using the emulate option, which allows quirks mode:
<meta http-
Also, completely removing a "DOCTYPE" from your page has also been known to
help force compatibility mode or putting something like an xml declaration
at the top
<?xml version="1.0" encoding="UTF-8">
If you want your page to render in an older document mode, you can add this
to your markup, early inside the HEAD tag:
<meta http-
Generally speaking, however, you should consider updating your page to use
standards-compliant markup.
Having done a lot of compatibility testing for front ends of major
websites, I can say it is no way a full substitute for using the actual
browser version you want to test in. Whilst it's useful for seeing any
immediate issues, we often found that those who relied on the dev tools in
IE9 onwards were missing issues which were obvious in the full versions of
IE7 and IE8.
I don't think you have to put both "row" and "span12" classes on the same
<div>. Remove the "span12" and see what happens. See the docs.
...I also noticed you have some misaligned tags ( </p> at line 87,
</div> at 910, </center> at 955 and 967).
Make sure to validate with W3C Markup Validation Service before doing
anything else ;)
No, you cannot "pre-approve" your extension to avoid the " is ready for
use" notification. If you attempt to do so, Microsoft will consider your
extension malware and can block it from loading in IE entirely, treat sites
that distribute it as malicious, etc. You simply shouldn't try it.
You must restart the browser to use the extension; there's no way to get it
working until you restart the browser (or start a new tab that ends up in a
new tab process).
The problem with your code is that it's injecting content that is not using
a secure URL (e.g. injecting a HTTP URL into a HTTPS page). What sorts of
URLs are you injecting? Are you using the RES:// protocol or a different
protocol? If you're using a custom URL Protocol that your extension
implements, you need to configure that protocol t
Remove border:0 from #main as it should be - I'm even surprised this does
work as intended in Chrome since IE10 is without a doubt right here in its
behaviour.
After that, please remove that table, we're not living in 1999 anymore.
To answer part of your question, RGBA as a background color isn't supported
in IE8 and earlier. I ran into this problem a while back, and used this
along with background-color RGBA for IE support:
filter: progid:DXImageTransform.Microsoft.Gradient(GradientType=0,
StartColorStr='#7FFFFFFF', EndColorStr='#7FFFFFFF');
"7FFFFFFF" - First two characters are transparency amount, last 6 are
color. All in hex
More information about gradients for IE can be found here:
Hope this helps
I would recommend making the page look correct in browsers that follow web
standards first (Firefox, Chrome). Then fix up for IE later. There is no
way to do what you're suggesting. Those meta tags are for making later
versions of IE behave like earlier versions.
Try the following header:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<!-- saved from url=(0016) -->
<html>
Here you have full description of that feature..
I don't see any particular reason for you to use the charset in css, but I
might be wrong... anyway, you have an error in line 42 - where it is
"y-repeat" it should be "repeat-y".
Can you post your HTML source? I can then try it here.
LM
This seems to be a bug in IE. It doesn't appear to be bounding box
related, because you can encroach inside the bbox in places without
triggering the roll-over.
At first I suspected that it might be using the bezier control points as a
bounding polygon, but when I converted all the curves to lines, it was
still happening.
Then I suspected that it might be something to do with the transform, so I
created the following SVG to test this theory:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg xmlns="" width="100" height="100"
viewBox="0 0 100 100">
<style id="style3001">path { fill : blue; stroke-width: 0 }
path:hover { fill : green }</style>
<g>
<path d="m 50,0 c 25,0 50,25 50,50 l 0,50 l -50,-25 l -50,25 l 0,-
According to this, no.
This is the only polyfill I can find, and it uses a flash fallback:
There is IE10 support below with gradient going from green to red.
CHECK THIS DEMO
#rt-header
{
background: #ff3232; /* Old browsers */
background: -moz-linear-gradient(left, #ff3232 1%, #ff2828 49%, #3fff30
49%, #3fff00 100%); /* FF3.6+ */
background: -webkit-gradient(linear, left top, right top,
color-stop(1%,#ff3232), color-stop(49%,#ff2828), color-stop(49%,#3fff30),
color-stop(100%,#3fff00)); /* Chrome,Safari4+ */
background: -webkit-linear-gradient(left, #ff3232 1%,#ff2828
49%,#3fff30 49%,#3fff00 100%); /* Chrome10+,Safari5.1+ */
background: -o-linear-gradient(left, #ff3232 1%,#ff2828 49%,#3fff30
49%,#3fff00 100%); /* Opera 11.10+ */
background: -ms-linear-gradient(left, #ff3232 1%,#ff2828 49%,#3fff30
49%,#3fff00 100%); /* IE10+ */
background: li
Well, assuming that what you actually need is the arch of the JRE running
on the client machine, I believe this will be helpful: how to detect
browser and OS in Java applet.
Also check if you can run your applet with IE Security modes(Protected
Mode(Security tab) and Enhanced Protected Mode(Advanced tab)) disable.
Regards.
Yes it does.
IE11 has all the same backward-compatibility modes as IE10 did (plus an
IE10-compat mode of course).
In fact, in common with IE10, there are actually two quirks mode which are
very slightly different from each other. ("Quirks mode" and "IE5 Quirks
mode"). But for most purposes you don't really need to know that; it'll
default to the original Quirks mode in the absence of a doctype, just the
same as previous IE versions.
So the short answer to your question is "Yes, you're fine; it's still there
and your page will still work just as well in IE11 as it did in IE10."
However, IE's engineers are trying to discourage the use of these modes.
The main way they've done this is by hiding them in the dev tools panel --
the browser mode option is visible, but you only ever have at m
:one
or
img
{
border: 0 none;
}
In .special-title:before, since you're using absolute positioning, remove
margin-top:-13px;
and just use
top:10px;
instead. Then, declare a height as well as a width.
see
I am creating toolbar for IE.
This is my SetSite method
HRESULT CMyClass::SetupBrowser(IUnknown* pUnkSite) {
ATLASSERT(pUnkSite);
HRESULT hr = E_FAIL;
IOleCommandTarget* pCmdTarget = NULL;
if (SUCCEEDED(pUnkSite->QueryInterface(IID_IOleCommandTarget,
(LPVOID*)&pCmdTarget)) && NULL !=
pCmdTarget) {
IServiceProvider* pSP = NULL;
if (SUCCEEDED(pCmdTarget->QueryInterface(IID_IServiceProvider,
(LPVOID*)&pSP)) && NULL != pSP)
{
CComPtr<IServiceProvider> child_provider;
hr = pSP->QueryService(SID_STopLevelBrowser,
IID_IServiceProvider,
reinterpret_cast<void**>(&child_provider));
if (SUCCEEDED(hr
This happens in Chrome too. Zooming is not reliable and it prone to
mis-aligning elements, especially if you have percentage sizes or indeed
any size that is not a multiple of 100 due to the zoom resulting in
non-integer sizes.
Any solution for this scenario.Below link demonstrates the problem exactly.
The example not created by me Nathan Williams.
<div style="width: 500px; height: 500px; overflow: auto;">
<div style="width: 500px; height: 10000000px; overflow: hidden;
position: relative;">
<span style="left: 0px; top: 0px; position:
absolute;">top</span>
<span style="left: 0px; top: 1193030px; position:
absolute;">IE8 limit (approximate)</span>
<span style="left: 0px; top: 1533900px; position:
absolute;">IE10 limit (approximate)</span>
<span style="left: 0px; top: 9999950px; position:
absolute;">bottom</span>
</div>
</div>
IE9+ are suppose to support placeholder but I find that this is not the
case. There are a few ways to 'pretend' to have placeholder. I find the
best way to have a placeholder look/feel on IE is to create a jquery
function that adds a div behind the input boxes (having the input boxes
transparent) with the text you want displayed. Then when typing or data
present either A display:none the div or opaque the input box.
Other people put the text in the input fields but this can be a pain with
validation and, as you have found, password fields.
something like:
// Check browser type so we don't mess with other browsers that do this
right
if (BrowserDetect.browser === 'Explorer') {
// Find all placeholders
$('[placeholder]').each(function(_index, _this) {
var $this = $(this);
You need to set this particular line when parsing for IE5 and IE6:
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
Check this link:
This might help you understand better.
Not sure if you've tried this already, but a CSS reset script usually does
the trick for me.
It might break your webpage on the browser you are using, but the goal is
to make the webpage look consistent across all browsers.
In the web site, we disabled caching in order to stop people going
backwards using the browser back button.
HttpCacheability.NoCache
IE 8 requires any file that is gonna be displayed to be saved in a
temporary file before being shown. Disabling caching interrupted saving to
a temporary file. Even though the browser downloaded the file successfully,
since it couldn't save the file, it didn't show the pdf content or any
other document which is shown using active x. We learned that this is an IE
8 bug and it got fixed in IE 9.
Changing the caching to private fixed the issue.
Response.Cache.SetCacheability(HttpCacheability.Private);
Response.ClearHeaders();
Response.ContentType = "Application/pdf";
Response.WriteFile(path);
The problem was that I had "var WEB_SOCKET_SWF_LOCATION" in the code.
If you're having the same problem use "WEB_SOCKET_SWF_LOCATION" without the
var in the global namespace.
I personally placed WEB_SOCKET_SWF_LOCATION into the socket.io.js file.
Browser detection is brittle and problematic at best. A cleaner approach
would be to do feature detection (test for things you need and enable (or
disable) features based on those features. Modernizer is a great way to do
this.
There are Win32 functions available that can disable and enable
redirection.
[DllImport("kernel32.dll", SetLastError = true)]
static extern bool Wow64DisableWow64FsRedirection(ref IntPtr ptr);
[DllImport("kernel32.dll", SetLastError = true)]
static extern bool Wow64RevertWow64FsRedirection(IntPtr ptr);
Example:
IntPtr wow64Value = IntPtr.Zero;
// Disable redirection.
Wow64DisableWow64FsRedirection(ref wow64Value);
// Do whatever you need
// .....................
// Re-enable redirection.
Wow64RevertWow64FsRedirection(wow64Value);
Add Doctype like this will solve multiple IE issues
<!--[if IE]>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"">
<![endif]-->
The IE team has been trying to retire VBScript for years.
indicates that support was removed from the ExecScript API. explains
that it's removed from IE11 Edge mode in the Internet Zone.
If you add the following to your HEAD tag, your VBScript will run:
<meta http-
Without seeing all that extra information that Arran requested, it's hard
to help you understand the error.
However if you're just looking for a quick fix that works in all browsers,
I always just use
Thread.sleep(int milliseconds);
for my Selenium tests in C# that need to wait for a page to load or a
certain element to render before continuing.
As KatieK pointed out, you haven't provided enough information to work with
here. However, I had faced something similar a while back, this helped me
solve the issue:
Open Reader
Edit > Preferences.
Internet > Display PDF in Browser
But I am not sure if it's a similar issue with yours too. | http://www.w3hello.com/questions/Disable-Internet-Explorer-11-browser-location-website-redirection | CC-MAIN-2018-17 | refinedweb | 2,581 | 62.88 |
Any::Moose - *deprecated* - use Moo instead!
version 0.19
package Class; # uses Moose if it's loaded or demanded, Mouse otherwise use Any::Moose; # cleans the namespace up no Any::Moose;
package Other::Class; use Any::Moose; # uses Moose::Util::TypeConstraints if the class has loaded Moose, # Mouse::Util::TypeConstraints otherwise. use Any::Moose '::Util::TypeConstraints';
package My::Sorter; use Any::Moose 'Role'; requires 'cmp'; or MooseX::Types use Any::Moose 'X::Types'; # gives you the right class name depending on which Mo*se was loaded extends any_moose('::Meta::Class');..
Squirrel - a deprecated first-stab at Any-Moose-like logic. Its biggest fault was in making the decision of which backend to use every time it was used, rather than just once.
This software is copyright (c) 2012 by Best Practical Solutions.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | http://search.cpan.org/~sartak/Any-Moose-0.19/lib/Any/Moose.pm | CC-MAIN-2017-43 | refinedweb | 156 | 52.49 |
I got a map that extends three layers deep and on the other side I got a plugin that should check if a given three-set of strings is in the map. So far it’s a statically written map directly in the plugin file, but I want to change it so that user of the plugin can freely configure the map.
def myMap = [ 'level1' : [ 'level21' : [ 'level31: true, 'level32' : true, 'level33' : true ], 'level22' : [ 'level31' : true, 'level32' : true ] ] ] assert myMap.get('level1').get('level21').get('level32') == true
Needless to say, this is a ton of syntax overhead and not exactly flexible either. My idea was that this could be written in an external .properties file as nested DSL, but I don’t know what the best way to do that. | https://discuss.gradle.org/t/best-practice-for-map-configurations/19251 | CC-MAIN-2020-40 | refinedweb | 129 | 76.56 |
Name | Synopsis | Description | Parameters | Errors | Examples | Environment Variables | Attributes | See Also
#include <slp.h> SLPError SLPOpen(const char *pcLang, SLPBoolean isAsync, SLPHandle *phSLP);
The SLPOpen() function() function terminates any outstanding calls on the handle.
A pointer to an array of characters containing the language tag set forth in RFC 1766 for the natural language locale of requests issued on the handle. This parameter cannot be NULL.
An SLPBoolean indicating whether or not the SLPHandle should be opened for an asynchronous operation.
A pointer to an SLPHandle in which the open SLPHandle is returned. If an error occurs, the value upon return is NULL.
This function or its callback may return any SLP error code. See the ERRORS section in slp_api(3SLP).
Use the following example to open a synchronous handle for the German (“de”) locale:
SLPHandle HSLP; SLPError err; err = SLPOpen("de", SLP_FALSE, &hSLP)
When set, use this file for configuration.
See attributes(5) for descriptions of the following attributes:
slpd(1M), slp_api(3SLP), slp.conf(4), slpd.reg(4), attributes(5)
System Administration Guide: Network Services
Alvestrand, H. RFC 1766, Tags for the Identification of Languages. Network Working Group. March 1995.
Kempf, J. and Guttman, E. RFC 2614, An API for Service Location. The Internet Society. June 1999.
Name | Synopsis | Description | Parameters | Errors | Examples | Environment Variables | Attributes | See Also | http://docs.oracle.com/cd/E19253-01/816-5170/6mbb5et3m/index.html | CC-MAIN-2014-52 | refinedweb | 220 | 51.24 |
Perl offers you little guidance in this. For instance in list context, localtime gives you a useful list, while in scalar it gives you a formatted result. Or caller will give you an array of information versus the single most likely to be useful fact, which happens to be the first element in the list. An array slice will give you the last element.
Here is a non-exhaustive set of choices that I have personally made in real code:
# In these examples, @ret is your return in list context.
return @ret;
return wantarray ? @ret : \@ret;
return @ret[0..$#ret];
return wantarray ? @ret : $ret[0];
if (wantarray) {
return @ret;
}
elsif (1 == @ret) {
return $ret[0];
}
else {
croak("<function> did not produce a scalar return");
}
# And for something completely different...
sub foo {
if (1 != @_) {
return map foo($_), @_;
}
# body of foo here.
return $whatever;
}
[download]
Therefore I am interested in which of the above (or other variations of your choice) people think is a good default behaviour to standardize on..and more importantly why.
Thoughts?
UPDATE: Removed typo in last example caught by converter. (I had a $ in front of foo.)
To me, it depends A LOT on what the function is being used for. The inconsistency within built in perl functions demonstrates this sort of dwimmery in action.
I tend to prefer the second option, though (return wantarray ? @ret : \@ret;) because I don't like the idea of discarding any data when scalar context is forced. The reference still behaves nicely in boolean context, but it contains all the information of the original array."
The reference still behaves nicely in boolean context, but it contains all the information of the original array.
I have since then drifted away from using it, and the experience of extending the test suite for Text::xSV (which used that interface) has convinced me that it is a complete PITA to work with unless you only want one scalar value.
(Yeah, yeah. I know how to use appropriately placed parens to force list context. To me tricks like that exemplify what people dislike about Perl.)
When the function is localtime-ish, I make it work like that. I have several that return a human readable string in scalar context, but a list of (numeric) elements in list context.
If the function returns a list that has a logical order and a variable number of elements, I return wantarray ? @array : \@array. But not when the function has a prototype of (&@).
I find return @array; annoying, unless the function warns or dies in scalar context. Again, an exception is made for map-ish functions.
I often use void context for mutating values: (Or: The function becomes a procedure in void context)
my @bar = encode @foo; # mutated copy in @bar
my $bar = encode @foo; # mutated copy in @$bar
encode @foo; # @foo's elements are now encoded
[download]
my $row = $db->query('select ...')->hash;
encode values %$row;
[download]
Juerd
# { site => 'juerd.nl', plp_site => 'plp.juerd.nl', do_not_use => 'spamtrap' }
As for return @array; - that made sense to me when I was writing a function that I needed in list context and didn't want to think through scalar context. So I left it with the easiest behaviour to implement, knowing full well that it wouldn't do anything useful in scalar context, meaning that if I ever called it in scalar context then I would have motivation to fix it with some supposition about what it should do in scalar context.
I guess that I have been indoctrinated enough into the concept that side-effects in functions are a bad thing that I strongly avoid writing a function that causes side-effects. (Particularly if it only does it in one context.)
Side effects are okay, as long as they're documented clearly. The efficiency that you get by mutating values instead without copying them first is worth it, in my opinion. I usually document the functions exactly as in my post: a piece of code that calls the function/procedure in all three contexts, each with a useful comment.
As for return @array; - (...) So I left it with the easiest behaviour to implement, knowing full well that it wouldn't do anything useful in scalar context, (...).
Of course, there isn't always a good scalar value. Hence the "unless the function warns or dies in scalar context." in my post. I tend to carp +(caller 0)[3] . ' used in non-list context' unless wantarray;.
null context
It's called void context, not null context.
As you can see, I am hardly consistent.
Therefore I am interested in which of the above (or other variations of your choice) people think is a good default behaviour to standardize on..and more importantly why.
# And for something completely different...
sub foo {
if (1 != @_) {
return map $foo($_), @_;
}
# body of foo here.
return $whatever;
}
[download]
# And for something completely different...
sub foo {
if (1 != @_) {
return map $foo($_), @_;
}
# body of foo here.
return $whatever;
}
[download]
sub foo {
map {
# body of foo here.
$whatever
} @_
}
[download]
Abigail
I have used that style in functions which transform elements where you might reasonably want to transform one or many elements. The hidden assumption is that it only makes sense to impose scalar context when you are transforming a single element, so behaving badly if you try to transform multiple elements in scalar context is OK.
As for why I would like to standardize somewhat, like I said before, it is all about setting expectations. The trouble with doing something different in each case is underscored by the fact that a top-notch Perl programmer like yourself could be tripped up by what a built-in function does in scalar context. (And to be honest upon seeing you claim that the two should be the same, I actually ran a test program before I was confident in claiming that map had the behaviour that I was specifically trying to work around.)
Besides which, I think that it is overkill to have to give an issue like this serious consideration with every function that I write. Having a default that just flows from my fingers would smooth out the development process.
Besides which, I think that it is overkill to have to give an issue like this serious consideration with every function that I write. Having a default that just flows from my fingers would smooth out the development process.
# And for something completely different...
sub foo {
if (1 != @_) {
return map $foo($_), @_;
}
# body of foo here.
return $whatever;
}
[download]
Why the test on the number of arguments, and the recursion? Wouldn't the following be equivalent?:
sub foo {
map {
# body of foo here.
$whatever
} @_
}
[download]
Answering the last question first, no that isn't equivalent to just have a map since a map in scalar context coerces like an array does - it tells you how many elements that you have, not what any of them are.
sub foo {
(map {
# body of foo here.
$whatever
} @_)[0..$#_];
}
[download]
sub foo {
my @ret = map {
# body of foo here
$whatever;
} @_;
@ret[0..$#ret];
}
[download]
if ($results->isa('XML::XPath::NodeSet')) {
return wantarray ? $results->get_nodelist : $results;
# return $results->get_nodelist;
}
[download]
#!/usr/bin/perl -l
use strict;
use warnings;
use Tie::Array::Iterable;
sub foo {
my @list = 0..9;
return wantarray ? @list : Tie::Array::Iterable->new(@list);
}
# access as a list
print for foo();
# or an iterator
my $iter = foo()->from_start();
print ($iter->value),$iter->next until $iter->at_end;
[download]
jeffa
L-LL-L--L-LL-L--L-LL-L--
-R--R-RR-R--R-RR-R--R-RR
B--B--B--B--B--B--B--B--
H---H---H---H---H---H---
(the triplet paradiddle with high-hat)
#!/usr/bin/perl
use strict;
use warnings;
sub make_list {
my @list = @_;
my $count = 0;
sub {wantarray ? do {$count = 0; @list} : $list [$count ++]}
}
my $list = make_list qw /red green blue white brown purple/;
print scalar $list -> (), "\n";
print scalar $list -> (), "\n";
print join " " => $list -> (), "\n";
my @list = qw /one two three four five six/;
my $count = 0;
sub foo {
wantarray ? do {$count = 0; @list} : $list [$count ++]
}
print scalar foo, "\n";
print scalar foo, "\n";
print join " " => foo, "\n";
__END__
red
green
red green blue white brown purple
one
two
one two three four five six
[download]
I do think that many of the internals should check for void context, if possible, like what Juerd said (and like what map does). For example, uc (and friends) would be great like that. I hate having to write $x = uc $x; when the dwimmiest way would be uc $x; and have it mutate in void context.
------
We are the carpenters and bricklayers of the Information Age.
Please remember that I'm crufty and crochety. All opinions are purely mine and all code is untested, unless otherwise specified.
I hate having to write $x = uc $x; when the dwimmiest way would be uc $x; and have it mutate in void context.
The funny (?) thing is that Perl itself is inconsistent when it comes to operators that change strings.
As of last week, Larry's plan is to make Perl 6 much more consistent in that respect. My speculation (not his official pronouncement) is that there'll be two versions of each operator, one that works in-place and one that returns the modified value, leaving the original untouched.
It'll probably go further than Ruby's convention of postfixing ! to methods that operate in-place, perhaps having language support for adding this behavior to your own operators.
It's only in the idea stages, though.
One thing I've done in one project using Class::DBI is to return a closure-as-an-iterator in scalar context (and void context, just because I'm lazy):
use My::DBI::Table; # A Class::DBI instance
my $table; # Intitilized elsewhere
sub get_foos
{
my $foos = $table->foos();
if(wantarray) {
my @array;
while(my $foo = $foos->next()) {
push @array, $foo;
}
return @array;
}
else {
return sub {
my $foo = $foos->next() or return;
return $foo;
};
}
}
[download]
In the real project, the objects returned are actually passed into the constructor of a class that acts as a middle layer between the UI and the database, so the real method is slightly more complex than the above. As is, the above is just a thin layer over the regular Class::DBI methods (which already returns iterators on has_many relationships).
----I wanted to explore how Perl's closures can be manipulated, and ended up creating an object system by accident.
-- Schemer
: () { :|:& };:
Note: All code is untested, unless otherwise stated
I don't think there should be a "standard". That's why this particular feature is in the hands of the programmer; so that they can make the decision about what's best. Sometimes you want to return the first element of an array in scalar context, sometimes you want to return something else.
Why do you think that there should be some standard?
Also, you have completely ignored void context. wantarray returns 3 things you know. Do you think that there should be some "standard" for what to do in void context too?
I am saying that I as a programmer need to make this choice fairly often, what choices should I tend to make, and why?
As for what to do in void context, my default has always been to treat it like scalar, but some people have already said that they standardizing on having functions mutate the input in place in void context. So yes, thinking through void context as well might make sense. (Though I probably won't because the only suggestion that I have seen runs too far counter to the rest of my personal style.)
I am saying that I as a programmer need to make this choice fairly often, what choices should I tend to make, and why?
You use wantarray that much? I've used it less than a couple of handfuls of times in the 11+ years I've been using perl. Most of the time I write subroutines that return lists, I say that's what they do. Period. It's rare that I need different functionality depending on context. It's only when I think "it sure would be nice if this routine did this sometimes and that other times" that wantarray might come into the picture. And when that happens, I've already decided what I need from scalar, list and void contexts. (I don't usually care about void context though)
One thing which I have been meaning to do, but as usual never quite found enough tuits for, was write Sub::Context. With this module, subroutines with context attributes would be expected to return an array or list, but in scalar context would automagically behave as the context attribute specifies without the programmer being forced to write the code for every case.
use Sub::Context qw/arrayref first iterator custom/;
sub foo : arrayref {
# return wantarray ? @results : \@results;
}
sub bar : first {
# return wantarray ? @results : $results[0];
}
sub baz : iterator {
# return wantarray ? @results : get_iterator(@results);
}
sub quux : custom {
# user-defined return behavior?
}
[download]
I've had the sticky note for this module on my desk for about three months now. This seems like as good a place as any for asking for suggestions on the interface and behavior.
Update: chromatic just reminded me that not only is there a module named Sub::Context, which he wrote (and which I've seen, darn it), it does what I was looking for, but in a somewhat different fashion.
Cheers,
Ovid
New address of my CGI Course.
This sounds useful to me. OTOH, I think you're missing some. :last and :count you should have, if only for completeness. :uselessvoid should give warnings if called in void context. :mutatevoid would set $_[0] to the return if used in void context. :warnscalar would warn if called in scalar context. :loop would do the close equivlent of tilly's if @_>1: when called in list context, with more then one arg, just run through the sub for each argument.
Upon reading the update: Oh. I have to say, I like your API better then chromatic's, but see how both could be good in different circumstances. Perhaps you should find another namespace for yours?
Update: Fixed minor markup issue, thanks ysth.
Truth to tell, my personal opinion is that the entire idea of context in Perl is an interesting experiment in language design that other languages have wisely decided not to borrow...
Im on the other side on this one for sure. Context is one my favorite things in perl. Sure occassionally I get betting by it one way or another, but overall I think its a great thing.
Im definately in the don't standardize camp. I would say that part of the artistry of perl is choosing things like this. Its like naming functions. To borrow a chess metaphor, a newbie perl programmer doesnt think about names or context at all. An experienced one deliberates on such things for ages before deciding on the right behaviour, and a grandmaster just picks a name and behaviour and it makes sense. The whole point here is convenience. Does the code read well in its various usages, is the behaviour intuitive and easy to assimilate. Or does it read like gobbleygook and trip you up all the time?
Only experience, and understanding of the usage context (using context at the human level and not perl level) will determine what behaviour is sensible.
I will say however that the whole
return wantarray ? @array : \@ref
[download]
idea doesnt seem like a clean solution. I used to do it a fair bit but I found it trips you up too often. Instead I almost always simply return the reference for things like this. In other scenarios I often make the results vary based on wantarrays status. For instance ive done stuff like this on many an occassion:
sub get_rec {
my $self=shift;
print(join ",",@$self),return unless defined wantarray;
return wantarray ? @$self : join(",",@$self);
}
[download]
Anyway, i would say that context is a bit like a rich cake. A little bit goes a long way, and too much is just awful. But none is even worse. :-)
First they ignore you, then they laugh at you, then they fight you, then you win.
-- Gandhi
Context has never bought me much that I care about. I cannot count the number of times that I have seen people tripped up by it. And, having done it, I never again want to go through the fun of wrapping code whose behaviour is heavily context-dependent. Adding a million and one precisely defined contexts to Perl 6 will move that from being a PITA to effectively impossible.
Arguments about the artistry of Perl I find strangely unmoving. The theory that someone, somewhere, gets it right by habit is all fine and dandy. I daresay that I have more Perl expertise than most, and I sure as heck know that I don't get it right. When I revisit old code, it is a constant complaint. When I visit other people's code, their choices tend to be an irritation for me.
YMMV and apparently does. But I'm someone who is bugged by the feeling of solving artificially imposed problems, and I've come to feel that context is one of those. :-(
(Of course Perl avoids a lot of artificial problems that I see in other languages. I can live with the occasional misfeature that I dislike...)
Well, my view on context is that its a wonderul thing that is easily inappropriately used. I'd guess that I use it in less than 10% of the subroutines I write, but when it makes sense I dont hesitate at all. Used sparingly, only when it makes semantic sense, I feel it reduces the chance for errors.
As for artistry, I think I may not have been quite as clear as I wanted to. I didnt mean to imply the aristry of Perl, but rather artistry of programming. Designing an intuitive, flexible powerful, extensible interface to a body of code IMO requires an artistic touch. Without the art the interface quickly becomes annoying, frustrating, and counterintuitive. An example for instance is accessors in Perl. Do you do them as lvalue subs (I probably wouldn't), seperate gets and sets ala Java, VB etc? (nope blech), single get/set accessors that return the old value (occasionally but not often) or ones that return $self on set? (my preference.) These are artistic decisions. Theres no science involved. They are all functionally equivelent, but from an aesthetic side (look and feel) they are very different. How many times have you used a crappy API and thought to yourself "this API is soooo clunky", im betting lots just like me. I would argue that the clunky API is the product of an unartistic programmer, and a little bit of art would have improved matters greatly.
Anyway, from the artistic point of view you might consider wantarray to be like neon lime green/yellow. Not a lot of paintings would benefit from the color, but its not like you are going to just throw away the tube of paint: one day itll be just the right shade...
YMMV and apparently does. But I'm someone who is bugged by the feeling of solving artificially imposed problems, and I've come to feel that context is one of those. :-(
To me context is like the invisible argument. If it were to be removed then we would probably end up requiring a flag in the parameters, or worse duplicate subroutines, to do the same thing. I think that this would be worse than the occasional mishap that occurs due to context. So to me its not an artificially imposed problem, its an artifically imposed solution. ;-)
This reminds me of a comment I saw once. Some languages totally dispose of pointers (perl references). VB is an example. The argument goes that pointers are a consistent source of error so remove them entirely. The problem then becomes that for nontrivial work you need pointers. So what do you end up doing? Reinventing them. In VB this means using variant arrays and all the horrors that entails. The comment was: "Dont remove features just because they are a source of error, doing so will only require the user to hand code an equivelent, and this is likely to be even worse than the original problem." I would say this applies nicely to context. Removing it would only result in worse problems. At least with context the rules are relatively simple, but more importantly uniform for all subroutines. (I mean rules of how context works, not what a routine returns in a given context.)
A rule of thumb for context IMO is that if the effect of context doesnt make sense when the code is read aloud then it shouldnt be provided, but if it would make sense, then it should be.
Anyway, as always tilly, a thought provoking thread.
As an example, consider this real-world case from Bricolage:
@stories = Bric::Biz::Asset::Business::Story->list();
if (not @stories) {
print "There are no stories.\n";
}
[download]
That works fine, but this doesn't:
if (not Bric::Biz::Asset::Business::Story->list()) {
print "There are no stories.\n";
}
[download]
That's because Bricolage list() methods (and lots of other Bricolage methods) try to be helpful and return array refs in scalar context. I've personally found a number of bugs that turned out to be caused by this problem, and I'm sure there are more waiting to be found.
There may be cases where wantarray() is useful, but I definitely don't consider it a general-purpose tool.
-sam
I think that use of wantarray() to modify return behavior should be generally avoided. In my opinion it is a clear violation of the principal of least surprise.
Having said that, I do use wantarry sometimes, and my reasons are usually effiency related:
Also, it seems like a better idea than just returning the last column, or the number of columns, or anything like that, which would likely be the result of not using wantarray in this case.
There might be better examples, but I'm too lazy to think of them now.
Joost.
--
#!/usr/bin/perl -w
use strict;$;=
";Jtunsitr pa;ngo;t1h\$e;r. )p.e(r;ls ;h;a;c.k^e;rs
";$_=$;;do{$..=chop}while(chop);$_=$;;eval$.;
[download]
The simplest answer to the question is 'it depends'. However, there are a couple of things that I started doing fairly early on in my use of perl. The first is that in most cases, subs that wishes to utilise the calling context to differeciate what they return, should probably be testing for context as the (one of) the first things they do, rather than leaving it as something done on the return line. If the natural scalar context of a sub that can return a list is to return only the size of the list, then there is some economy in knowing that up front. It's inherently cheaper to increment a count for return than to accumulate an array and then discard it and only return the size. Equally, if a sub is called in a void context it is better to to return immediatly having done nothing rather than do stuff and then throw it away. If the sub has side effects and only returns a status value, then if called in a void context, it can be nice to have a (configurable) option to die if the sub fails -- I wish that this was an option for many of the built-in subs.
I don't view the use of context as being unique to perl, although the way it is manifest in other languages is somewhat different. In C++ for example, it could be argued that the incorporation of a methods return type into the method signature is a form of context. The difference is that it is handled entirely by the compiler rather than by the programmer.
One arguement for this approach is that it removes one detail from the auspices of the programmer and is therefore a good thing.
However, an alternative view is that this then requires the programmer to code multiple methods, one for each possible context, and/or requires the use of explicit casting.
Far from being a failed experiment, I think that context is a potentially very useful feature of perl that is only let down by it's currently limited form. IMO there are three limitatations to the current mechanism.
I would like to see this extended by
From what I've seen of P6, all three of these are being addressed in ways that hopefully will make the whole concept of contexts become more useful and consistant. Who knows, maybe they will prove to be so useful that other languages might copy the idea in the future.
The subdivision of scalar context into string and numeric. And the numeric context subdivided into integer and real.
That might be easy in these cases:
if( $foo == bar($baz) ) {
. . .
}
if( $foo eq bar($baz) ) {
. . .
}
[download]
But how could you handle the simple case of assigning the return value to a variable? I suppose the current type of the SV could be used if it already had data in it, but what about variables that were just declared?
List context should be subdivided into list, array and hash context.
I know the differences between list and array context are very subtle, but I'm not sure that there is any use for distinguishing the two here.
But how could you handle the simple case of assigning the return value to a variable?
The keyword was "subdivision". When the context cannot be broken down to a string or numeric context as in the case of assignment to a variable, then the context would be returned as simply 'SCALAR'. If the context can be further broken down, then that context would also be indicated. I haven't thought through how this would be done, but one way might be to return 'SCALAR/STRING' and 'SCALAR/NUMERIC' respectively in the case of your two if statements. Instead of coding
if( want() eq 'SCALAR' ) {
You would code
if( want() =~ /^SCALAR/ ) { if you were only interested in determining scalar context. Integer and real contexts could be a supplement to that.
Alternatively, use a parameter to want()
if( want( 'SCALAR' ) ) { would return true if the context was scalar string, scalar numeric, scalar numeric real or scalar numeric integer, but if( want( 'INTEGER' ) ) { would only return true if the return was being used in an inherently integer context like an array indices, range boundary or as an argument to a function that had been indicated as an integer (using the much improved prototyping facility:).
The main reason I thought of for wanting to distinguish between a list context and an array context, is the following
sub dostuff {
my @array;
# doing stuff building @array
return @array;
}
...
my @returned = dostuff();
[download]
In the above scenario, we have a lexically local @array built up within the sub. When it comes time to return this to the caller, the array is flattened to a list, which currently appears to consume some extra memory over the array itself. On the basis of experiment, this appears to be less than the full size of the array, but is still a fairly substantial chunk of memory if the array is large. This list is then assigned to the array in the outer code consuming another large chunk of ram.
Crude measurements indicate that the total memory allocated by the simple act of returning an array through a list to another array is close to 3 times the original size of the array. For small arrays, not a problem, but for large ones this is expensive.
Yes, you can return a reference to the array, but the extra level of indirection involved can be a PITA, especially if the elements of the array are themselves compound structures.
My thinking is that if the programmer could determine that the destination is an array, then he might also be given access to an alias to that destination array and assign his return values directly to it.
sub DoSomething {
my @array;
alias( @array ) if want( 'ARRAY' );
#Do Stuff Directly to the destination array
return;
}
...
my @returned = DoSomething();
[download]
This would effectively use the same mechanism as for loops do now.
If the results of the sub were being used in a list context, being pushed onto an array or printed in the calling code and the algorithm of the sub permits, then it can sometimes make sense not to acumulate the data into an array internally, but rather simple build and return a list.
sub DoStuff {
die 'Nothing to do in a void context'
if want( 'VOID' );
if( want( 'LIST' ) ) {
return map{
# Do stuff to generate the list
} ...;
}
my @array;
alias( @array ) if want( 'ARRAY' );
push @array, $stuff for ....;
return \@array if want( 'ARRAY/REF' );
# The callers array has been filled directly
# If we were called in an array context.
return;
}
[download]
LW and TheDamian are probably way ahead of me on this, and it is possible that some of the benefits of this type of optimisation can be achieved transparently by comterpreter, but I think that there are cases were it would be useful for the programmer to have this level of control.
Expectations.";
I ++'d a lot of nodes on this thread, but I wish I could "+=3" on this one by sauoq.
The right "default" behavior, in my opinion, is to return @ret; and let the programmer using your function sort it out. Document that your function returns an array and be done with it.
No, please don't!
Document that it returns a list. It's not possible to return an array. It is possible to return an array reference, but this code is not doing that. If you document that some functions returns an array, the user can only guess: is it a list or an array reference?
Document that your function returns a list and be done with it.
Document that it returns a list. It's not possible to return an array.
No. I would (will and do) document that it returns an array. What you are saying might be true from an internals standpoint, but that's irrelevant. Read the code: return @ret; . . . it says "return" and is followed by an array. More importantly, it behaves like an array. It doesn't behave like a list literal (a slice does, however, as ysth points out.) It certainly doesn't behave like an array reference. Documenting that it returns an array is the only clear way to describe its behavior.
If you document that some functions returns an array, the user can only guess: is it a list or an array reference?
When a funtion returns an array reference, document that it returns an array reference. "Array" and "array reference" are not synonyms. The user doesn't have to guess! He just has to read.
If you sometimes describe a reference to an array as "an array", then you are being sloppy and you should stop.
I usually just document my functions to be defined in list context only if that's all I've considered. I someone wants to use it in scalar context it's their dare.
Personally I haven't devoloped a default idea for what you ask of, since I haven't had a need to. For instance, if someone just wants the first return value they may do ($foo) = foo(). If someone wants to get a specific element from it they may do (foo())[$n]. If they want it as a reference, they're free to construct one on the fly: [ foo() ]. If [ foo() ] would prove inefficient I'd usually prefer providing another function for getting a reference, e.g. foo_ref(), and not add behaviour to the existing routine.
There is one special case though, and that's for methods that have one-to-one input-to-output list length mapping. If two arguments are taken, two elements are returned. It may then look like
my ($foo, $bar) = burk('foo', 'bar');
my ($foo, $bar, $baz) = burk('foo', 'bar', 'baz');
[download]
and in this particular case I can find it DWIMy to let
my ($foo) = burk('foo');
[download]
be equivalent to
my $foo = burk('foo');
[download]
but
my $foo = burk('foo', 'bar');
[download]
would emit a warning, since one can guess that this was unintended.
In real life it could look like this:
my $foo = foo(burk());
[download]
and &burk is believed to return just one element. Perhaps the user was wrong in that, and &burk returned more than one element. Or perhaps &burk needed to be called in scalar context to give the expected return behaviour of one element, like localtime(). (Note that this thusly just isn't a short-cut but also imposes extra checks on &burks return.)
To sum it up as a general rule of thumb: If a subroutine has the character of returning a list, but sometimes returns just one element, then it can be made so that in scalar context it returns that very element. But if it's called in scalar context and it in list context would have returned more than one element then it should warn.
Code-wise this means
return @results if wantarray;
carp("More than one value in result in scalar context for &foo")
if @results > 1;
return $results[0];
[download]
This whole post was pretty much a condensed revised repost of the relevant things in posts Re: Context aware functions - best practices? and Re: Re: Re: Context aware functions - best practices?. (Both nodes can be worth reading plus demerphq's reply, dispite (or because) their tendency to elaborate and digress.)
Of course, if the function is for "public" consumption, the return type should be specified in the documentation as well as in the comments preceeding the function.
So far as "standardizing" goes, I think I agree with those who don't standardize. So long as the function is documented properly, why not write in whatever manner best fits its purpose?
I strongly dislike return wantarray ? @ret : \@ret;. The justification that it offers performance benefits for those who want them reeks of premature optimization; if it turns out to be necessary, I'll have the function always return a reference. If it's not necessary, having the function set up this way anyway makes it inconvenient to deal with cases where the result list is expected to have only a single element. It also leads to a conundrum when the array is empty; some people will want to be able to dereference the return value without checking for undef, while others will want the scalar context return value to sensibly work as a boolean. It just doesn't work out; I would never consider this style in any case.
I tend to return @ret[ 0 .. $#ret ]; these days and find it to be completely unsurprising most of the time. It is hardly a rule though, and I may do other things on other occasions.
When we're talking about methods rather than plain functions, and the returned list consists of objects, an iterator object is a helpful option. A particularly nice kind of iterator object relays certain method calls to all of its contained list's objects, so that you can do something like $family->children->sleep(); and have all the children collectively leave for bed.
sub foo {
if (1 != @_) {
return map foo($_), @_;
}
# body of foo here.
return $whatever;
}
[download]
sub foo {
if (1 != @_) {
return map foo($_), @_;
}
# body of foo here.
return $whatever;
}
[download]
sub foo {
( map {
# body of foo here.
} @_ )[ 0 .. $#_ ];
}
[download]
Makeshifts last the longest.
The only default I can think of that makes much sense to me is:
croak ... if ! wantarray;
[download]
So, if you haven't taken the time to decide what should be returned by your list-returning function when used in a scalar context, it makes some sense to prevent people from using your function that way (yet), perhaps even telling the user that you haven't decided what that is supposed to mean yet.
After you've used the function for a while, a sane choice may well become rather obvious and you can put the new scalar-context behavior in place, quite confident that you aren't breaking any existing | http://www.perlmonks.org/index.pl?node_id=311537 | CC-MAIN-2014-23 | refinedweb | 6,132 | 70.63 |
Postcode validation is a requirement that comes up in a lot of my UK-based client projects. Parsing and linting UK postcodes is ripe with edge cases.
Postcodes do change from time to time. Users can be unaware of their postcode changing or they could have submitted their postcode prior to it being decommissioned. Mail might be delivered to decommissioned postcodes and geographic boundary information can exist meaning they can still be useful.
Network requests to validate postcodes can add a lot of overhead when you're batch processing. New postcodes aren't guaranteed to be in every 3rd-party database either so there is an error rate to take into account. Services where a flat number or building name and a postcode given by the user can be used to get their full address (saving time and spelling mistakes) often charge for querying their database so it's worth minimising calls to these services.
Good is not the enemy of perfect and sometimes just knowing that a postcode fits the format of a UK postcode can be enough for certain postcode-related functionality. Linting a postcode before checking with a 3rd-party database can reduce the costs as well.
There are a lot of regex snippets and libraries for parsing UK postcodes. I took a look at a couple of them to see if they could stand up to a database of 2.5 million current and past UK postcodes.
Rob Cowie's Postcode library
The first I looked at was postcode by Rob Cowie. I quickly found that two digits in the outing code (the first 3-4 characters of a UK postcode) caused the library to raise a TypeError:
➫ pip install -e git+
>>> from postcode import uk >>> uk.validate('s11 7ty') Traceback (most recent call last): File "<stdin>", line 1, in <module> File ".../postcode/uk.py", line 82, in validate parts[0] = parts[0][0] TypeError: 'tuple' object does not support item assignment
I raised an issue and began looking around for another library.
Simon Hayward's UK Postcode Parser
The next library I looked at was a fork of ukpostcodeparser by Simon Hayward.
➫ pip install -e git+
I downloaded a list of UK postcodes and ran all of them through the parser to see if it raised any exceptions:
➫ curl -O ➫ unzip postcodes.zip
from ukpostcodeparser import parse_uk_postcode """ # Remove the space between the outward and inward codes postcode = pieces[0].replace(' ', '') try: _postcode = parse_uk_postcode(postcode) except Exception, error: print error, postcode continue if _postcode is None: print 'Invalid postcode', postcode
The CSV file contained 2,545,662 postcodes and of them 7,085 came back as invalid.
➫ wc -l postcodes.csv 2545662 postcodes.csv ➫ python check.py > results ➫ wc -l results 7085 results
I took a sampling of the invalid postcodes to see what they looked like:
➫ sort --random-sort results | head Invalid postcode NPT6ZE Invalid postcode W1R5HD Invalid postcode NPT7HS Invalid postcode W1X8NJ Invalid postcode NPT8AD Invalid postcode NPT5LU Invalid postcode W1M0BN Invalid postcode W1R0DS Invalid postcode NPT1JW Invalid postcode NPT2TW
The NPT outing code for Newport is no longer is use so I'm not so concerned with that one but W1R covers part of central London. I experimented with a few combinations of postcodes and found that if a letter came after any digits in the outing code then the postcode would be seen as invalid by the library even though it is valid. For example: "Golden Square, London, W1R 3AD":
>>> parse_uk_postcode('w1r3ad') Traceback (most recent call last): File "<stdin>", line 1, in <module> File ".../ukpostcodeparser/parser.py", line 129, in parse_uk_postcode raise ValueError('Invalid postcode') ValueError: Invalid postcode
But then I tried another postcode, this one for "216 Oxford Street, London, W1D 1LA" and it did work:
>>> parse_uk_postcode('W1D1LA') ('W1D', '1LA')
Before looking to patch the library I wanted to see if there were any other obvious solutions.
Googling for Regexes
I began googling for regexes which claimed to parse UK postcodes. I can across the following regex and ran it against the 2.5 million postcodes list.
import re pattern = '^() def is_postcode(postcode): postcode = postcode return _POSTCODE_RE.match(postcode) != None """ # Make sure the postcode is in upper case postcode = pieces[0].upper() if not is_postcode(postcode): print 'invalid postcode', postcode
This failed against 8,614 postcodes. Here is a sampling of a few of them:
➫ sort --random-sort results | head invalid postcode W1V 9PD invalid postcode W1Y 8HE invalid postcode W1Y 8DH invalid postcode W1P 7FW invalid postcode W1Y 1AR invalid postcode W1R 6JJ invalid postcode W1M 5AE invalid postcode W1R 1FH invalid postcode NPT 8ET invalid postcode W1R 0HD
Ignoring the old Newport postcodes W1R was still being caught. I could see that only certain W1[A-Z] outing codes were being caught out so I got a list of them together and found only 7 were being flagged up as invalid. I adjusted the regular expression to allow for M, N, P, R, V, X and Y after any digits in the outing code:
pattern = '^([A-PR-UWYZ]([0-9]{1,2}|([A-HK-Y][0-9]|[A-HK-Y][0-9]([0-9]|' + \ '[ABEHMNPRV-Y]))|[0-9][A-HJKMNPRS-UVWXY])\ )
I ran the script again and only the 2,418 depreciated Newport postcodes were seen as invalid.
Seeing the pattern of the W1M, W1N, W1P, W1R, W1V, W1X and W1Y outing codes being the edge cases that tripped up Simon Hayward's library I created a pull request. | https://tech.marksblogg.com/uk-postcodes.html | CC-MAIN-2019-09 | refinedweb | 910 | 57.5 |
Details
Description
Issue Links
Activity
- All
- Work Log
- History
- Activity
- Transitions
Refactoring the XmlUpdateRequestHandler to use constant variables that can be reused by the Stax implementation. Adding a stax implementation for the XmlUpdateRequestHandler. Till now I get an error about missing content stream.
NOTE:
To make the version compile you need to download the JSR 173 API from
and copy it to $SOLR_HOME/lib/.
It seems the diff does not show the other libs you need to compile.
You can download them from:
Fixing bugs from first version.
Adding workaround for problem with direct use of the handler (never gets a stream).
by patching the SolrUpdateServlet
Please test, it works fine for me.
@Larrea
1) standards-based
2) agree
3) agree
4) agree
StAX is become a standard. Not as fast as SAX but nearly. IMO the StAX implementation is as easy to follow as the xpp, personally I think even easier.
Thorsten - this looks good. I cleaned it up a bit and modified it to use
SOLR-139. The big changes I made are:
- It uses two spaces (not tabs or 4 spaces)
- It overwrites the existing XmlUpdateRequestHandler rather then adding a parallel one. (We should either use StAX or XPP, but not both)
- It breaks out the xml parsing so that parsing a single document is an easily testable chunk:
SolrDocument readDoc(XMLStreamReader parser)
- It adds a test to make sure it reads documents correctly
- Since it is the XmlUpdateRequestHandler all the other tests that insert documents use it..
fixed the document parser to handle fields with CDATA.
switch (event) {
// Add everything to the text
case XMLStreamConstants.SPACE:
case XMLStreamConstants.CDATA:
case XMLStreamConstants.CHARACTERS:
text.append( parser.getText() );
break;
...
What is missing with this issue, where can I give a helping had.
>> Solr should assume UTF-8 encoding unless the contentType says otherwise.
>
> In general yes (when Solr is asked for a Reader).
> For XML, we should probably give the parser an InputStream.
>
>
Extracts the request parsing and update handling into two parts.
This adds an "UpdateRequestProcessor" that handles the actual updating. This offers a good place for authentication / document transformation etc. This can all be reuse if we have a JSONUpdate handler. The UpdateRequestProcessor can be changed using an init param in solrconfig,xml:
<requestHandler name="/update" class="solr.XmlUpdateRequestHandler" >
<str name="update.processor.class">org.apache.solr.handler.UpdateRequestProcessor</str>
</requestHandler>
Moved the XPP version to XppUpdateRequestHandler and mapped it to:
<requestHandler name="/update/xpp" class="solr.XppUpdateRequestHandler" />
My initial (not accurate) tests don't show any significant time difference between the two – we should keep both in the code until we are confident the new one is stable.
- - - - -
Thorsten - can you check if the STAX includes are all in good shape? Is it ok to use:
import javanet.staxutils.BaseXMLInputFactory;
dooh – wrong issue
this is the default implementation since r552198
It would be useful if there first were some consensus as to what the goals are for making a change to the XML Update Handler; some possibilities I can think of include:
1) To use standards-based rather than non-standards-based technologies as much as possible
2) To use as few different XML technologies (and coding styles related to the technology) as possible
3) To reduce as much as possible the complexity of code needed for interpreting XML command and/or configuration streams
4) To lower resource consumption and limitations for XML handling, e.g. stream-based rather than random-access
By all means add to that list, prioritize, and remove goals which are not seen as important.
Then it seems to me the question would be how many of those goals are addressed by changing XML Update Handler to stAX, vs. other technologies. One might at the same time also want to look at other places where SOLR decodes XML such as config files, to see if there can be more commonality rather than continued isolation. | https://issues.apache.org/jira/browse/SOLR-133?focusedCommentId=12486195&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-40 | refinedweb | 653 | 54.63 |
import "github.com/documize/community/domain/label"
Handler contains the runtime information such as logging and database.
Add space label to the store.
Delete removes space label from store and removes label association from spaces.
Get returns all space labels.
Update persists label name/color changes.
type Store struct { store.Context store.LabelStorer }
Store provides data access to section template information.
Add saves space label to store.
Delete removes space label from the store.
Get returns all space labels from store.
RemoveReference clears space.labelID for given label.
Update persists space label changes to the store.
Package label imports 15 packages (graph) and is imported by 2 packages. Updated 2019-07-01. Refresh now. Tools for package owners. | https://godoc.org/github.com/documize/community/domain/label | CC-MAIN-2020-40 | refinedweb | 118 | 55.81 |
itclwidget man page
itcl::widget — create a widget class of objects
Warning!
This is new functionality in [incr Tcl] where the API can still change!!
Synopsis
itcl::widget widgetName { inherit baseWidget ?baseWidget...? ...? } widgetName objName ?arg arg ...? objName method ?arg arg ...? widgetName::proc ?arg arg ...?
Description
One of the fundamental constructs in [incr Tcl] is the widget definition. A widget is like a class with some additional features. Each widget acts as a template for actual objects that can be created. The widget itself is a namespace which contains things common to all objects. Each object has its own unique bundle of data which contains instances of the "variables" defined in the widget definition. Each object also has a built-in variable named "this", which contains the name of the object. Widgets can also have "common" data members that are shared by all objects in a widget.
Two types of functions can be included in the widget definition. "Methods" are functions which operate on a specific object, and therefore have access to both "variables" and "common" data members. "Procs" are ordinary procedures in the widget widget can only be defined once, although the bodies of widget methods and procs can be defined again and again for interactive debugging. See the body and configbody commands for details.
Each namespace can have its own collection of objects and widgets. The list of widgets available in the current context can be queried using the "itcl::find widgets" command, and the list of objects, with the "itcl::find objects" command.
A widget can be deleted using the "delete widget" command. Individual objects can be deleted using the "delete object" command.
Widget Definitions
- widget widgetName definition
Provides the definition for a widget named widgetName. If the widget widgetName already exists, or if a command called widgetName exists in the current namespace context, this command returns an error. If the widget definition is successfully parsed, widgetName becomes a command in the current context, handling the creation of objects for this widget.
The widget definition is evaluated as a series of Tcl statements that define elements within the widget. The following widget definition commands are recognized:
- inherit baseWidget ?baseWidget...?
Causes the current widget to inherit characteristics from one or more base widgets. Widgets must have been defined by a previous widget command, or must be available to the auto-loading facility (see "Auto-Loading" below). A single widget definition can contain no more than one inherit command.
The order of baseWidget names in the inherit list affects the name resolution for widget members. When the same member name appears in two or more base widgets, the base widget that appears first in the inherit list takes precedence. For example, if widgets "Foo" and "Bar" both contain the member "x", and if another widget widget constructors that require arguments. Variables in the args specification can be accessed in the init code fragment, and passed to base widget constructors. After evaluating the init statement, any base widget constructors that have not been executed are invoked automatically without arguments. This ensures that all base widgets are fully constructed before the constructor body is executed. By default, this scheme causes constructors to be invoked in order from least- to most-specific. This is exactly the opposite of the order that widgets widget hierarchy are invoked in order from most- to least-specific. This is the order that the widgets widget method, a method can be invoked like any other command-simply by using its name. Outside of the widget context, the method name must be prefaced an object name, which provides the context for the data that it manipulates. Methods in a base widget that are redefined in the current widget, or hidden by another base widget, can be qualified using the "widgetName::method" syntax.
- proc name ?args? ?body?
Declares a proc called name. A proc is an ordinary procedure within the widget widget method or proc, a proc can be invoked like any other command-simply by using its name. In any other namespace context, the proc is invoked using a qualified name like "widgetName::proc". Procs in a base widget that are redefined in the current widget, or hidden by another base widget, can also be accessed via their qualified name.
- variable varName ?init? ?config?
Defines an object-specific variable named varName. All object-specific variables are automatically available in widget widget definition using the configbody command.
- common varName ?init?
Declares a common variable named varName. Common variables reside in the widget namespace and are shared by all objects belonging to the widget. They are just like global variables, except that they need not be declared with the usual global command. They are automatically visible in all widget widget definition. This allows common data members to be initialized as arrays. For example:
itcl::widget Foo { protected widget.
Widget Usage
Once a widget has been defined, the widget name can be used as a command to create new objects belonging to the widget.
- widgetName objName ?args...?
Creates a new object in widget widgetName with the name objName. Remaining arguments are passed to the constructor of the most-specific widget. This in turn passes arguments to base widget widgetName<number>, where the widgetName part is modified to start with a lowercase letter. In widget
Once widget widget where it was defined. If the "config" code generates an error, the variable is set back to its previous value, and the configure method returns an error.
- objName isa widgetName
Returns non-zero if the given widgetName can be found in the object's heritage, and zero otherwise.
- objName info option ?args...?
Returns information related to a particular object named objName, or to its widget definition. The option parameter includes the following things, as well as the options recognized by the usual Tcl "info" command:
- objName info widget
Returns the name of the most-specific widget for object objName.
- objName info inherit
Returns the list of base widgets as they were defined in the "inherit" command, or an empty string if this widget has no base widgets.
- objName info heritage
Returns the current widget name and the entire list of base widgets in the order that they are traversed for member lookup and object destruction.
- objName info function ?cmdName? ?-protection? ?-type? ?-name? ?-args? ?-body?
With no arguments, this command returns a list of all widgets/Procs
Sometimes a base widget has a method or proc that is redefined with the same name in a derived widget. This is a way of making the derived widget handle the same operations as the base widget, but with its own specialized behavior. For example, suppose we have a Toaster widget that looks like this:
itcl::widget Toaster { variable crumbs 0 method toast {nslices} { if {$crumbs > 50} { error "== FIRE! FIRE! ==" } set crumbs [expr $crumbs+4*$nslices] } method clean {} { set crumbs 0 } }
We might create another widget like SmartToaster that redefines the "toast" method. If we want to access the base widget method, we can qualify it with the base widget name, to avoid ambiguity:
itcl::widget SmartToaster { inherit Toaster method toast {nslices} { if {$crumbs > 40} { clean } return [Toaster::toast $nslices] } }
Instead of hard-coding the base widget name, we can use the "chain" command like this:
itcl::widget SmartToaster { inherit Toaster method toast {nslices} { if {$crumbs > 40} { clean } return [chain $nslices] } }
The chain command searches through the widget hierarchy for a slightly more generic (base widget) implementation of a method or proc, and invokes it with the specified arguments. It starts at the current widget context and searches through base widgets in the order that they are reported by the "info heritage" command. If another implementation is not found, this command does nothing and returns the null string.
Auto-Loading
Widget definitions need not be loaded explicitly; they can be loaded as needed by the usual Tcl auto-loading facility. Each directory containing widget definition files should have an accompanying "tclIndex" file. Each line in this file identifies a Tcl procedure or [incr Tcl] widget definition and the file where the definition can be found.
For example, suppose a directory contains the definitions for widgets "Toaster" and "SmartToaster". Then the "tclIndex" file for this directory would look like:
# Tcl autoload index file, version 2.0 for [incr Tcl] # This file is generated by the "auto_mkindex" command # and sourced to set up indexing information for one or # more commands. Typically each line is a command that # sets an element in the auto_index array, where the # element name is the name of a command and the value is # a script that loads the command. set auto_index(:, widgets will be auto-loaded as needed when used in an application.
C Procedures
C procedures can be integrated into an [incr Tcl] widget; } }
C procedures are implemented just like ordinary Tcl commands. See the CrtCommand man page for details. Within the procedure, widget data members can be accessed like ordinary variables using Tcl_SetVar(), Tcl_GetVar(), Tcl_TraceVar(), etc. Widget methods and procs can be executed like ordinary commands using Tcl_Eval(). [incr Tcl] makes this possible by automatically setting up the context before executing the C procedure.
This scheme provides a natural migration path for code development. Widgets can be developed quickly using Tcl code to implement the bodies. An entire application can be built and tested. When necessary, individual bodies can be implemented with C code to improve performance.
Keywords
widget, object, object-oriented | https://www.mankier.com/n/itclwidget | CC-MAIN-2018-39 | refinedweb | 1,568 | 53.61 |
Question
The journal People Management reports on new ways to use directive training to improve the performance of managers. The percentages of managers who benefited from the training from 2003 to 2007 are: 34%, 36%, 38%, 39%, and 41% for 2007. Comment on these results from a quality-management viewpoint.
Answer to relevant QuestionsWhat is the logic behind the control chart for the sample mean, and how is the chart constructed? Create R and s charts. Is the process in control? 121, 122, 121, 125, 123, 121, 129, 123, 122, 122, 120, 121, 119, 118, 121, 125, 139, 150, 121, 122, 120, 123, 127, 123, 128, 129, 122, 120, 128, 120 An article in Money compares investment in an income annuity, offered by insurance companies, and a mix of low-cost mutual funds.6 Suppose the following data are annualized returns (in percent) randomly sampled from these ...The following data are the one-year return to investors in world stock investment funds, as published in Pensions & Investments. The data are in percent return (%): 35.9, 34.5, 33.7, 31.7, 27.5, 27.3, 27.3, 27.2, 27.1 25.5. ...While considering three managers for a possible promotion, the company president decided to solicit information from employees about the managers' relative effectiveness. Each person in a random sample of 10 employees who ...
Post your question | http://www.solutioninn.com/the-journal-people-management-reports-on-new-ways-to-use | CC-MAIN-2016-50 | refinedweb | 228 | 54.73 |
¶
-.8 Changelog
- Next: 0.6 Changelog
- Up: Home
- On this page:
- 0.7 Changelog
-.7 Changelog¶
0.7.11¶no release date
orm¶.
Fixed bug when a query of the form:
query(SubClass).options(subqueryload(Baseclass.attrname)), where
SubClassis a joined inh of
BaseClass, would fail to apply the
JOINinside the subquery on the attribute load, producing a cartesian product. The populated results still tended to be correct as additional rows are just ignored, so this issue may be present as a performance degradation in applications that are otherwise working correctly.).
engine¶
The regexp used by the
make_url()function now parses ipv6 addresses, e.g. surrounded by brackets.
sql¶
Fixed regression dating back to 0.7.9 whereby the name of a CTE might not be properly quoted if it was referred to in multiple FROM clauses.
Fixed bug in common table expression system where if the CTE were used only as an
alias()construct, it would not render using the WITH keyword.
Fixed bug in
CheckConstraintDDL where the “quote” flag from a
Columnobject would not be propagated.
postgresql¶
Added support for PostgreSQL’s traditional SUBSTRING function syntax, renders as “SUBSTRING(x FROM y FOR z)” when regular
func.substring()is used. Courtesy Gunnlaugur Þór Briem.
mysql¶
Updates to MySQL reserved words for versions 5.5, 5.6, courtesy Hanno Schlichting.
tests¶
Fixed an import of “logging” in test_execute which was not working on some linux platforms.
References: #2669, pull request 41
0.7.10¶Released: Thu Feb 7 2013
orm¶.
Query.merge_result()can now load rows from an outer join where an entity may be
Nonewithout throwing an error..8.0b2..
engine¶
Fixed
MetaData.reflect()to correctly use the given
Connection, if given, without opening a second connection from that connection’s
Engine.
sql¶
Backported adjustment to
__repr__for
TypeDecoratorto 0.7, allows
PickleTypeto produce a clean
repr()to help with Alembic.
Fixed bug where
Table.tometadata()would fail if a
Columnhad both a foreign key as well as an alternate “.key” name for the column...
mysql¶
Added “raise_on_warnings” flag to OurSQL dialect.
Added “read_timeout” flag to MySQLdb dialect..
mssql¶
Fixed bug whereby using “key” with Column in conjunction with “schema” for the owning Table would fail to locate result rows due to the MSSQL dialect’s “schema rendering” logic’s failure to take .key into account.
Added a Py3K conditional around unnecessary .decode() call in mssql information schema, fixes reflection in Py3k.
oracle¶
The Oracle LONG type, while an unbounded text type, does not appear to use the cx_Oracle.LOB type when result rows are returned, so the dialect has been repaired to exclude LONG from having cx_Oracle.LOB filtering.
0.7.9¶Released: Mon Oct 01 2012
orm¶.
A warning is emitted when lazy=’dynamic’ is combined with uselist=False. This is an exception raise in 0.8.
Fixed bug whereby user error in related-object assignment could cause recursion overflow if the assignment triggered a backref of the same name as a bi-directional attribute on the incorrect class to the same target. An informative error is raised now.
Fixed bug where incorrect type information would be passed when the ORM would bind the “version” column, when using the “version” feature. Tests courtesy Daniel Miller..
engine¶
Dramatic improvement in memory usage of the event system; instance-level collections are no longer created for a particular type of event until instance-level listeners are established for that event..
Added gaerdbms import to mysql/__init__.py, the absence of which was preventing the new GAE dialect from being loaded.
Fixed cextension bug whereby the “ambiguous column error” would fail to function properly if the given index were a Column object and not a string. Note there are still some column-targeting issues here which are fixed in 0.8.
Fixed the repr() of Enum to include the “name” and “native_enum” flags. Helps Alembic autogenerate.
sql¶
Fixed the DropIndex construct to support an Index associated with a Table in a remote schema.
Fixed bug in over() construct whereby passing an empty list for either partition_by or order_by, as opposed to None, would fail to generate correctly. Courtesy Gunnlaugur Þór Briem.
Fixed CTE bug whereby positional bound parameters present in the CTEs themselves would corrupt the overall ordering of bound parameters. This primarily affected SQL Server as the platform with positional binds + CTE support..
quoting is applied to the column names inside the WITH RECURSIVE clause of a common table expression according to the quoting rules for the originating Column.
Fixed regression introduced in 0.7.6 whereby the FROM list of a SELECT statement could be incorrect in certain “clone+replace” scenarios.
Fixed bug whereby usage of a UNION or similar inside of an embedded subquery would interfere with result-column targeting, in the case that a result-column had the same ultimate name as a name inside the embedded UNION..
Added missing operators is_(), isnot() to the ColumnOperators base, so that these long-available operators are present as methods like all the other operators.
postgresql¶
Columns in reflected primary key constraint are now returned in the order in which the constraint itself defines them, rather than how the table orders them. Courtesy Gunnlaugur Þór Briem..
Added ‘terminating connection’ to the list of messages we use to detect a disconnect with PG, which appears to be present in some versions when the server is restarted.
mysql¶
Updated mysqlconnector interface to use updated “client flag” and “charset” APIs, courtesy David McNelis.
sqlite¶
Added support for the localtimestamp() SQL function implemented in SQLite, courtesy Richard Mitchell.”””).
Adjusted column default reflection code to convert non-string values to string, to accommodate old SQLite versions that don’t deliver default info as a string.
mssql¶
Fixed compiler bug whereby using a correlated subquery within an ORDER BY would fail to render correctly if the statement also used LIMIT/OFFSET, due to mis-rendering within the ROW_NUMBER() OVER clause. Fix courtesy sayap
Fixed compiler bug whereby a given select() would be modified if it had an “offset” attribute, causing the construct to not compile correctly a second time.
Fixed bug where reflection of primary key constraint would double up columns if the same constraint/table existed in multiple schemas.
0.7.8¶Released: Sat Jun 16 2012
orm¶
The ‘objects’ argument to flush() is no longer deprecated, as some valid use cases have been identified.
Fixed bug whereby subqueryload() from a polymorphic mapping to a target would incur a new invocation of the query for each distinct class encountered in the polymorphic result. as this was supposed to be part of that, this is.
Fixed identity_key() function which was not accepting a scalar argument for the identity. .
Fixed bug whereby populate_existing option would not propagate to subquery eager loaders. .
engine¶
Fixed memory leak in C version of result proxy whereby DBAPIs which don’t deliver pure Python tuples for result rows would fail to decrement refcounts correctly. The most prominently affected DBAPI is pyodbc.
Fixed bug affecting Py3K whereby string positional parameters passed to engine/connection execute() would fail to be interpreted correctly, due to __iter__ being present on Py3K string..
sql¶
added BIGINT to types.__all__, BIGINT, BINARY, VARBINARY to sqlalchemy module namespace, plus test to ensure this breakage doesn’t occur again.
Repaired common table expression rendering to function correctly when the SELECT statement contains UNION or other compound expressions, courtesy btbuilder.
Fixed bug whereby append_column() wouldn’t function correctly on a cloned select() construct, courtesy Gunnlaugur Þór Briem.
postgresql¶
removed unnecessary table clause when reflecting enums,. Courtesy Gunnlaugur Þór Briem.
mysql¶
Added a new dialect for Google App Engine. Courtesy Richie Foreman.
oracle¶
Added ROWID to oracle.*.
0.7.7¶Released: Sat May 05 2012
orm¶
Added prefix_with() method to Query, calls upon select().prefix_with() to allow placement of MySQL SELECT directives in statements. Courtesy Diana Clarke
Added new flag to @validates include_removes. When True, collection remove and attribute del events will also be sent to the validation function, which accepts an additional argument “is_remove” when this flag is used.
Fixed issue in unit of work whereby setting a non-None self-referential many-to-one relationship to None would fail to persist the change if the former value was not already loaded..
Fixed bug in 0.7.6 introduced by whereby column_mapped_collection used against columns that were mapped as joins or other indirect selectables would fail to function.
Fixed bug whereby polymorphic_on column that’s not otherwise mapped on the class would be incorrectly included in a merge() operation, raising an error.
Fixed bug in expression annotation mechanics which could lead to incorrect rendering of SELECT statements with aliases and joins, particularly when using column_property().
Fixed bug which would prevent OrderingList from being pickleable. Courtesy Jeff Dairiki
Fixed bug in relationship comparisons whereby calling unimplemented methods like SomeClass.somerelationship.like() would produce a recursion overflow, instead of NotImplementedError.
sql¶
Added new connection event dbapi_error(). Is called for all DBAPI-level errors passing the original DBAPI exception before SQLAlchemy modifies the state of the cursor.
Removed warning when Index is created with no columns; while this might not be what the user intended, it is a valid use case as an Index could be a placeholder for just an index of a certain name.
If conn.begin() fails when calling “with engine.begin()”, the newly acquired Connection is closed explicitly before propagating the exception onward normally.
Add BINARY, VARBINARY to types.__all__.
postgresql¶
Added new for_update/with_lockmode() options for PostgreSQL: for_update=”read”/ with_lockmode(“read”), for_update=”read_nowait”/ with_lockmode(“read_nowait”). These emit “FOR SHARE” and “FOR SHARE NOWAIT”, respectively. Courtesy Diana Clarke
removed unnecessary table clause when reflecting domains.
mysql¶
Fixed bug whereby column name inside of “KEY” clause for autoincrement composite column with InnoDB would double quote a name that’s a reserved word. Courtesy Jeff Dairiki.
Fixed bug whereby get_view_names() for “information_schema” schema would fail to retrieve views marked as “SYSTEM VIEW”. courtesy Matthew Turland..
sqlite¶
Added SQLite execution option “sqlite_raw_colnames=True”, will bypass attempts to remove “.” from column names returned by SQLite cursor.description.
When the primary key column of a Table is replaced, such as via extend_existing, the “auto increment” column used by insert() constructs is reset. Previously it would remain referring to the previous primary key column.
mssql¶
Added interim create_engine flag supports_unicode_binds to PyODBC dialect, to force whether or not the dialect passes Python unicode literals to PyODBC or not.
Repaired the use_scope_identity create_engine() flag when using the pyodbc dialect. Previously this flag would be ignored if set to False. When set to False, you’ll get “SELECT @@identity” after each INSERT to get at the last inserted ID, for those tables which have “implicit_returning” set to False..
0.7.6¶Released: Wed Mar 14 2012
orm¶
Added “no_autoflush” context manager to Session, used with with: will temporarily disable autoflush.
Added cte() method to Query, invokes common table expression support from the Core (see below).
Added the ability to query for Table-bound column names when using query(sometable).filter_by(colname=value).
Fixed event registration bug which would primarily show up as events not being registered with sessionmaker() instances created after the event was associated with the Session class.
Fixed bug whereby a primaryjoin condition with a “literal” in it would raise an error on compile with certain kinds of deeply nested expressions which also needed to render the same bound parameter name more than once..
Fixed bug whereby objects using attribute_mapped_collection or column_mapped_collection could not be pickled.
Fixed bug whereby MappedCollection would not get the appropriate collection instrumentation if it were only used in a custom subclass that used @collection.internally_instrumented.
Fixed bug whereby SQL adaption mechanics would fail in a very nested scenario involving joined-inheritance, joinedload(), limit(), and a derived function in the columns clause.
Fixed the repr() for CascadeOptions to include refresh-expire. Also reworked CascadeOptions to be a <frozenset>.
Improved the “declarative reflection” example to support single-table inheritance, multiple calls to prepare(), tables that are present in alternate schemas, establishing only a subset of classes as reflected.
Scaled back the test applied within flush() to check for UPDATE against partially NULL PK within one table to only actually happen if there’s really an UPDATE to occur.
Fixed bug whereby if a method name conflicted with a column name, a TypeError would be raised when the mapper tried to inspect the __get__() method on the method object.
examples¶
Altered _params_from_query() function in Beaker example to pull bindparams from the fully compiled statement, as a quick means to get everything including subqueries in the columns clause, etc.
engine¶).
Added pool_reset_on_return argument to create_engine, allows control over “connection return” behavior. Also added new arguments ‘rollback’, ‘commit’, None to pool.reset_on_return to allow more control over connection return activity.
Added some decent context managers to Engine, Connection:
with engine.begin() as conn: <work with conn in a transaction>
and:
with engine.connect() as conn: <work with conn>
Both close out the connection when done, commit or rollback transaction with errors on engine.begin().
Added execution_options() call to MockConnection (i.e., that used with strategy=”mock”) which acts as a pass through for arguments.
sql¶
Added support for SQL standard common table expressions (CTE), allowing SELECT objects as the CTE source (DML not yet supported). This is invoked via the cte() method on any select() construct.
Fixed memory leak in core which would occur when C extensions were used with particular types of result fetches, in particular when orm query.count() were called.
Fixed issue whereby attribute-based column access on a row would raise AttributeError with non-C version, NoSuchColumnError with C version. Now raises AttributeError in both cases..
A warning is emitted when a not-present column is stated in the values() clause of an insert() or update() construct. Will move to an exception in 0.8..
Fixed bug in new “autoload_replace” flag which would fail to preserve the primary key constraint of the reflected table.
Index will raise when arguments passed cannot be interpreted as columns or expressions. Will warn when Index is created with no columns at all.
mysql¶
Added support for MySQL index and primary key constraint types (i.e. USING) via new mysql_using parameter to Index and PrimaryKeyConstraint, courtesy Diana Clarke.
Added support for the “isolation_level” parameter to all MySQL dialects. Thanks to mu_mind for the patch here.
sqlite¶
Fixed bug in C extensions whereby string format would not be applied to a Numeric value returned as integer; this affected primarily SQLite which does not maintain numeric scale settings.
mssql¶
Added support for MSSQL INSERT, UPDATE, and DELETE table hints, using new with_hint() method on UpdateBase.
oracle¶
Added a new create_engine() flag coerce_to_decimal=False, disables the precision numeric handling which can add lots of overhead by converting all numeric values to Decimal.
Added missing compilation support for LONG
Added ‘LEVEL’ to the list of reserved words for Oracle.
0.7.5¶Released: Sat Jan 28 2012
orm¶
Added “class_registry” argument to declarative_base(). Allows two or more declarative bases to share the same registry of class names.
query.filter() accepts multiple criteria which will join via AND, i.e. query.filter(x==y, z>q, …) !
New declarative reflection example added, illustrates how best to mix table reflection with declarative as well as uses some new features from..
Fixed regression from 0.7.4 whereby using an already instrumented column from a superclass as “polymorphic_on” failed to resolve the underlying Column.
Raise an exception if xyzload_all() is used inappropriately with two non-connected relationships.
Fixed bug whereby event.listen(SomeClass) forced an entirely unnecessary compile of the mapper, making events very hard to set up at module import time (nobody noticed this ??)
Fixed bug whereby hybrid_property didn’t work as a kw arg in any(), has().
ensure pickleability of all ORM exceptions for multiprocessing compatibility.
implemented standard “can’t set attribute” / “can’t delete attribute” AttributeError when setattr/delattr used on a hybrid that doesn’t define fset or fdel.
Fixed bug where unpickled object didn’t have enough of its state set up to work correctly within the unpickle() event established by the mutable object extension, if the object needed ORM attribute access within __eq__() or similar.
Fixed bug where “merge” cascade could mis-interpret an unloaded attribute, if the load_on_pending flag were used with relationship(). Thanks to Kent Bower for tests.
Fixed regression from 0.6 whereby if “load_on_pending” relationship() flag were used where a non-“get()” lazy clause needed to be emitted on a pending object, it would fail to load.
examples¶
Simplified the versioning example a bit to use a declarative mixin as well as an event listener, instead of a metaclass + SessionExtension.
Fixed large_collection.py to close the session before dropping tables.
engine¶
Added __reduce__ to StatementError, DBAPIError, column errors so that exceptions are pickleable, as when using multiprocessing. However, not all DBAPIs support this yet, such as psycopg2.
Improved error messages when a non-string or invalid string is passed to any of the date/time processors used by SQLite, including C and Python versions.
Fixed bug whereby a table-bound Column object named “<a>_<b>” which matched a column labeled as “<tablename>_<colname>” could match inappropriately when targeting in a result set row.
Fixed bug in “mock” strategy whereby correct DDL visit method wasn’t called, resulting in “CREATE/DROP SEQUENCE” statements being duplicated
sql¶.
Added “false()” and “true()” expression constructs to sqlalchemy.sql namespace, though not part of __all__ as of yet.
Dialect-specific compilers now raise CompileError for all type/statement compilation issues, instead of InvalidRequestError or ArgumentError. The DDL for CREATE TABLE will re-raise CompileError to include table/column information for the problematic column.
Improved the API for add_column() such that if the same column is added to its own table, an error is not raised and the constraints don’t get doubled up. Also helps with some reflection/declarative patterns.
Fixed issue where the “required” exception would not be raised for bindparam() with required=True, if the statement were given no parameters at all.
mysql¶
fixed regexp that filters out warnings for non-reflected “PARTITION” directives, thanks to George Reilly
sqlite¶
the “name” of an FK constraint in SQLite is reflected as “None”, not “0” or other integer value. SQLite does not appear to support constraint naming in any case.
sql.false() and sql.true() compile to 0 and 1, respectively in sqlite
removed an erroneous “raise” in the SQLite dialect when getting table names and view names, where logic is in place to fall back to an older version of SQLite that doesn’t have the “sqlite_temp_master” table.
mssql¶.
oracle¶
Added ORA-03135 to the never ending list of oracle “connection lost” errors
misc¶
Changed LRUCache, used by the mapper to cache INSERT/UPDATE/DELETE statements, to use an incrementing counter instead of a timestamp to track entries, for greater reliability versus using time.time(), which can cause test failures on some platforms..
Fixed inappropriate usage of util.py3k flag and renamed it to util.py3k_warning, since this flag is intended to detect the -3 flag series of import restrictions only.
0.7.4¶Released: Fri Dec 09 2011
orm¶.
IdentitySet supports the - operator as the same as difference(), handy when dealing with Session.dirty etc.
Added new value for Column autoincrement called “ignore_fk”, can be used to force autoincrement on a column that’s still part of a ForeignKeyConstraint. New example in the relationship docs illustrates its use.
Fixed backref behavior when “popping” the value off of a many-to-one in response to a removal from a stale one-to-many - the operation is skipped, since the many-to-one has since been updated..
fixed inappropriate evaluation of user-mapped object in a boolean context within query.get(). Also in 0.6.9.
Added missing comma to PASSIVE_RETURN_NEVER_SET symbol
Cls.column.collate(“some collation”) now works. Also in 0.6.9
the value of a composite attribute is now expired after an insert or update operation, instead of regenerated in place. This ensures that a column value which is expired within a flush will be loaded first, before the composite is regenerated using that value..
Fixed bug whereby a subclass of a subclass using concrete inheritance in conjunction with the new ConcreteBase or AbstractConcreteBase would fail to apply the subclasses deeper than one level to the “polymorphic loader” of each base
Fixed bug whereby a subclass of a subclass using the new AbstractConcreteBase would fail to acquire the correct “base_mapper” attribute when the “base” mapper was generated, thereby causing failures later on.
Fixed bug whereby column_property() created against ORM-level column could be treated as a distinct entity when producing certain kinds of joined-inh joins.
Fixed the error formatting raised when a tuple is inadvertently passed to session.query(). Also in 0.6.9..
__table_args__ can now be passed as an empty tuple as well as an empty dict.. Thanks to Fayaz Yusuf Khan for the patch.
Updated warning message when setting delete-orphan without delete to no longer refer to 0.6, as we never got around to upgrading this to an exception. Ideally this might be better as an exception but it’s not critical either way.
Fixed bug in get_history() when referring to a composite attribute that has no value; added coverage for get_history() regarding composites which is otherwise just a userland function.
examples¶
Fixed bug in history_meta.py example where the “unique” flag was not removed from a single-table-inheritance subclass which generates columns to put up onto the base.
engine¶.
sql¶.
Added accessor to types called “python_type”, returns the rudimentary Python type object for a particular TypeEngine instance, if known, else raises NotImplementedError..
schema¶
Added new support for remote “schemas”:.
Fixed bug whereby TypeDecorator would return a stale value for _type_affinity, when using a TypeDecorator that “switches” types, like the CHAR/UUID type.
Fixed bug whereby “order_by=’foreign_key’” option to Inspector.get_table_names wasn’t implementing the sort properly, replaced with the existing sort algorithm
the “name” of a column-level CHECK constraint, if present, is now rendered in the CREATE TABLE statement using “CONSTRAINT <name> CHECK <expression>”.
MetaData() accepts “schema” and “quote_schema” arguments, which will be applied to the same-named arguments of a Table or Sequence which leaves these at their default of
None.
Sequence accepts “quote_schema” argument
tometadata() for Table will use the “schema” of the incoming MetaData for the new Table if the schema argument is explicitly “None”
Added CreateSchema and DropSchema DDL constructs - these accept just the string name of a schema and a “quote” flag.
When using default “schema” with MetaData, ForeignKey will also assume the “default” schema when locating remote table. This allows the “schema” argument on MetaData to be applied to any set of Table objects that otherwise don’t have a “schema”.
a “has_schema” method has been implemented on dialect, but only works on PostgreSQL so far. Courtesy Manlio Perillo.
postgresql¶
Added create_type constructor argument to pg.ENUM. When False, no CREATE/DROP or checking for the type will be performed as part of a table create/drop event; only the create()/drop)() methods called directly will do this. Helps with Alembic “offline” scripts..
mysql¶
Unicode adjustments allow latest pymysql (post 0.4) to pass 100% on Python 2.
mssql¶
lifted the restriction on SAVEPOINT for SQL Server. All tests pass using it, it’s not known if there are deeper issues however.
repaired the with_hint() feature which wasn’t implemented correctly on MSSQL - usually used for the “WITH (NOLOCK)” hint (which you shouldn’t be using anyway ! use snapshot isolation instead :) )
use new pyodbc version detection for _need_decimal_fix option.
don’t cast “table name” as NVARCHAR on SQL Server 2000. Still mostly in the dark what incantations are needed to make PyODBC work fully with FreeTDS 0.91 here, however.
Decode incoming values when retrieving list of index names and the names of columns within those indexes.
misc¶.
pyodbc-based dialects now parse the pyodbc accurately as far as observed pyodbc strings, including such gems as “py3-3.0.1-beta4”
the @compiles decorator raises an informative error message when no “default” compilation handler is present, rather than KeyError.
0.7.3¶Released: Sun Oct 16 2011. Also in 0.6.9.
orm¶.
Added new flag expire_on_flush=False to column_property(), marks those properties that would otherwise be considered to be “readonly”, i.e. derived from SQL expressions, to retain their value after a flush has occurred, including if the parent object itself was involved in an update..
Fixed a variety of synonym()-related regressions from 0.6:
making a synonym against a synonym now works.
synonyms made against a relationship() can be passed to query.join(), options sent to query.options(), passed by name to query.with_parent().
Fixed bug whereby mapper.order_by attribute would be ignored in the “inner” query within a subquery eager load. . Also in 0.6.9.
Identity map .discard() uses dict.pop(,None) internally instead of “del” to avoid KeyError/warning during a non-determinate gc teardown
Fixed regression in new composite rewrite where deferred=True option failed due to missing import
Reinstated “comparator_factory” argument to composite(), removed when 0.7 was released.
Fixed bug in query.join() which would occur in a complex multiple-overlapping path scenario, where the same table could be joined to twice. Thanks much to Dave Vitek for the excellent fix here.
Query will convert an OFFSET of zero when slicing into None, so that needless OFFSET clauses are not invoked.
Repaired edge case where mapper would fail to fully update internal state when a relationship on a new mapper would establish a backref on the first mapper.
Fixed bug whereby if __eq__() was redefined, a relationship many-to-one lazyload would hit the __eq__() and fail. Does not apply to 0.6.9.
Calling class_mapper() and passing in an object that is not a “type” (i.e. a class that could potentially be mapped) now raises an informative ArgumentError, rather than UnmappedClassError.
New event hook, MapperEvents.after_configured(). Called after a configure() step has completed and mappers were in fact affected. Theoretically this event is called once per application, unless new mappings are constructed after existing ones have been used already.
When an open Session is garbage collected, the objects within it which remain are considered detached again when they are add()-ed to a new Session. This is accomplished by an extra check that the previous “session_key” doesn’t actually exist among the pool of Sessions..
Declarative will warn when a subclass’ base uses @declared_attr for a regular column - this attribute does not propagate to subclasses.
The integer “id” used to link a mapped instance with its owning Session is now generated by a sequence generation function rather than id(Session), to eliminate the possibility of recycled id() values causing an incorrect result, no need to check that object actually in the session.
Behavioral improvement: empty conjunctions such as and_() and or_() will be flattened in the context of an enclosing conjunction, i.e. and_(x, or_()) will produce ‘X’ and not ‘X AND ()’...
Fixed bug whereby with_only_columns() method of Select would fail if a selectable were passed.. Also in 0.6.9.
examples¶
Adjusted dictlike-polymorphic.py example to apply the CAST such that it works on PG, other databases. Also in 0.6.9.
engine¶
The recreate() method in all pool classes uses self.__class__ to get at the type of pool to produce, in the case of subclassing. Note there’s no usual need to subclass pools.
Improvement to multi-param statement logging, long lists of bound parameter sets will be compressed with an informative indicator of the compression taking place. Exception messages use the same improved formatting.
Added optional “sa_pool_key” argument to pool.manage(dbapi).connect() so that serialization of args is not necessary.
The entry point resolution supported by create_engine() now supports resolution of individual DBAPI drivers on top of a built-in or entry point-resolved dialect, using the standard ‘+’ notation - it’s converted to a ‘.’ before being resolved as an entry point.
Added an exception catch + warning for the “return unicode detection” step within connect, allows databases that crash on NVARCHAR to continue initializing, assuming no NVARCHAR type implemented.
schema¶
Modified Column.copy() to use _constructor(), which defaults to self.__class__, in order to create the new object. This allows easier support of subclassing Column.
Added a slightly nicer __repr__() to SchemaItem classes. Note the repr here can’t fully support the “repr is the constructor” idea since schema items can be very deeply nested/cyclical, have late initialization of some things, etc.
postgresql¶
Added “postgresql_using” argument to Index(), produces USING clause to specify index implementation for PG. . Thanks to Ryan P. Kelly for the patch.
Added client_encoding parameter to create_engine() when the postgresql+psycopg2 dialect is used; calls the psycopg2 set_client_encoding() method with the value upon connect.
Fixed bug related to whereby the same modified index behavior in PG 9 affected primary key reflection on a renamed column.. Also in 0.6.9.
Reflection functions for Table, Sequence no longer case insensitive. Names can be differ only in case and will be correctly distinguished.
Use an atomic counter as the “random number” source for server side cursor names; conflicts have been reported in rare cases..
mysql¶
a CREATE TABLE will put the COLLATE option after CHARSET, which appears to be part of MySQL’s arbitrary rules regarding if it will actually work or not. Also in 0.6.9.
Added mysql_length parameter to Index construct, specifies “length” for indexes.
sqlite¶
Ensured that the same ValueError is raised for illegal date/time/datetime string parsed from the database regardless of whether C extensions are in use or not.
mssql¶().
”0” is accepted as an argument for limit() which will produce “TOP 0”.
oracle¶
Fixed ReturningResultProxy for zxjdbc dialect.. Regression from 0.6..
misc¶
Extra keyword arguments to the base Float type beyond “precision” and “asdecimal” are ignored; added a deprecation warning here and additional docs, related to
SQLSoup will not be included in version 0.8 of SQLAlchemy; while useful, we would like to keep SQLAlchemy itself focused on one ORM usage paradigm. SQLSoup will hopefully soon be superseded by a third party project.
Added local_attr, remote_attr, attr accessors to AssociationProxy, providing quick access to the proxied attributes at the class level.
Changed the update() method on association proxy dictionary to use a duck typing approach, i.e. checks for “keys”, to discern between update({}) and update((a, b)). Previously, passing a dictionary that had tuples as keys would be misinterpreted as a sequence.
0.7.2¶Released: Sun Jul 31 2011
orm¶
A rework of “replacement traversal” within the ORM as it alters selectables to be against aliases of things (i.e. clause adaption) includes a fix for multiply-nested any()/has() constructs against a joined table structure.
Fixed bug where query.join() + aliased=True from a joined-inh structure to itself on relationship() with join condition on the child table would convert the lead entity into the joined one inappropriately. Also in 0.6.9..
Load of a deferred() attribute on an object where row can’t be located raises ObjectDeletedError instead of failing later on; improved the message in ObjectDeletedError to include other conditions besides a simple “delete”.
Fixed regression from 0.6 where a get history operation on some relationship() based attributes would fail when a lazyload would emit; this could trigger within a flush() under certain conditions. Thanks to the user who submitted the great test for this.
Fixed bug apparent only in Python 3 whereby sorting of persistent + pending objects during flush would produce an illegal comparison, if the persistent object primary key is not a single integer. Also in 0.6.9
Fixed bug whereby the source clause used by query.join() would be inconsistent if against a column expression that combined multiple entities together. Also in 0.6.9
Fixed bug whereby if a mapped class redefined __hash__() or __eq__() to something non-standard, which is a supported use case as SQLA should never consult these, the methods would be consulted if the class was part of a “composite” (i.e. non-single-entity) result set. Also in 0.6.9.
Added public attribute “.validators” to Mapper, an immutable dictionary view of all attributes that have been decorated with the @validates decorator. courtesy Stefano Fontanelli
Fixed subtle bug that caused SQL to blow up if: column_property() against subquery + joinedload + LIMIT + order by the column property() occurred. . Also in 0.6.9
The join condition produced by with_parent as well as when using a “dynamic” relationship against a parent will generate unique bindparams, rather than incorrectly repeating the same bindparam. . Also in 0.6.9.
Added the same “columns-only” check to mapper.polymorphic_on as used when receiving user arguments to relationship.order_by, foreign_keys, remote_side, etc.
Fixed bug whereby comparison of column expression to a Query() would not call as_scalar() on the underlying SELECT statement to produce a scalar subquery, in the way that occurs if you called it on Query().subquery().
Fixed declarative bug where a class inheriting from a superclass of the same name would fail due to an unnecessary lookup of the name in the _decl_class_registry.
Repaired the “no statement condition” assertion in Query which would attempt to raise if a generative method were called after from_statement() were called.. Also in 0.6.9.
examples¶
Repaired the examples/versioning test runner to not rely upon SQLAlchemy test libs, nosetests must be run from within examples/versioning to get around setup.cfg breaking it.
Tweak to examples/versioning to pick the correct foreign key in a multi-level inheritance situation.
Fixed the attribute shard example to check for bind param callable correctly in 0.7 style.
engine¶
Context manager provided by Connection.begin() will issue rollback() if the commit() fails, not just if an exception occurs.
Use urllib.parse_qsl() in Python 2.6 and above, no deprecation warning about cgi.parse_qsl()
Added mixin class sqlalchemy.ext.DontWrapMixin. User-defined exceptions of this type are never wrapped in StatementException when they occur in the context of a statement execution.
StatementException wrapping will display the original exception class in the message.
Failures on connect which raise dbapi.Error will forward the error to dialect.is_disconnect() and set the “connection_invalidated” flag if the dialect knows this to be a potentially “retryable” condition. Only Oracle ORA-01033 implemented for now.
sql¶
Fixed two subtle bugs involving column correspondence in a selectable, one with the same labeled subquery repeated, the other when the label has been “grouped” and loses itself. Affects.
schema¶
New feature: with_variant() method on all types. Produces an instance of Variant(), a special TypeDecorator which will select the usage of a different type based on the dialect in use.
Added an informative error message when ForeignKeyConstraint refers to a column name in the parent that is not found. Also in 0.6.9.
Fixed bug whereby adaptation of old append_ddl_listener() function was passing unexpected **kw through to the Table event. Table gets no kws, the MetaData event in 0.6 would get “tables=somecollection”, this behavior is preserved.
Fixed bug where “autoincrement” detection on Table would fail if the type had no “affinity” value, in particular this would occur when using the UUID example on the site that uses TypeEngine as the “impl”.
Added an improved repr() to TypeEngine objects that will only display constructor args which are positional or kwargs that deviate from the default.
postgresql¶
Added new “postgresql_ops” argument to Index, allows specification of PostgreSQL operator classes for indexed columns. Courtesy Filip Zyzniewski.
mysql¶
Fixed OurSQL dialect to use ansi-neutral quote symbol “’” for XA commands instead of ‘”’. . Also in 0.6.9.
sqlite¶
SQLite dialect no longer strips quotes off of reflected default value, allowing a round trip CREATE TABLE to work. This is consistent with other dialects that also maintain the exact form of the default.
mssql¶
Adjusted the pyodbc dialect such that bound values are passed as bytes and not unicode if the “Easysoft” unix drivers are detected. This is the same behavior as occurs with FreeTDS. Easysoft appears to segfault if Python unicodes are passed under certain circumstances.
oracle¶
Added ORA-00028 to disconnect codes, use cx_oracle _Error.code to get at the code,. Also in 0.6.9.
Added ORA-01033 to disconnect codes, which can be caught during a connection event.
repaired the oracle.RAW type which did not generate the correct DDL. Also in 0.6.9.
added CURRENT to reserved word list. Also in 0.6.9.
Fixed bug in the mutable extension whereby if the same type were used twice in one mapping, the attributes beyond the first would not get instrumented.
Fixed bug in the mutable extension whereby if None or a non-corresponding type were set, an error would be raised. None is now accepted which assigns None to all attributes, illegal values raise ValueError.
0.7.1¶Released: Sun Jun 05 2011
general¶
Added a workaround for Python bug 7511 where failure of C extension build does not raise an appropriate exception on Windows 64 bit + VC express
orm¶
”delete-orphan” cascade is now allowed on self-referential relationships - this since SQLA 0.7 no longer enforces “parent with no child” at the ORM level; this check is left up to foreign key nullability. Related to
Repaired new “mutable” extension to propagate events to subclasses correctly; don’t create multiple event listeners for subclasses either.
Modify the text of the message which occurs when the “identity” key isn’t detected on flush, to include the common cause that the Column isn’t set up to detect auto-increment correctly;. Also in 0.6.8.
Fixed bug where transaction-level “deleted” collection wouldn’t be cleared of expunged states, raising an error if they later became transient. Also in 0.6.8.
engine¶
Deprecate schema/SQL-oriented methods on Connection/Engine that were never well known and are redundant: reflecttable(), create(), drop(), text(), engine.func
Adjusted the __contains__() method of a RowProxy result row such that no exception throw is generated internally; NoSuchColumnError() also will generate its message regardless of whether or not the column construct can be coerced to a string.. Also in 0.6.8.
sql¶
Fixed bug whereby metadata.reflect(bind) would close a Connection passed as a bind argument. Regression from 0.6.
Streamlined the process by which a Select determines what’s in its ‘.c’ collection. Behaves identically, except that a raw ClauseList() passed to select([]) (which is not a documented case anyway) will now be expanded into its individual column elements instead of being ignored.
postgresql¶
Some unit test fixes regarding numeric arrays, MATCH operator. A potential floating-point inaccuracy issue was fixed, and certain tests of the MATCH operator only execute within an EN-oriented locale for now. . Also in 0.6.8.
mysql¶
Unit tests pass 100% on MySQL installed on windows..
supports_sane_rowcount will be set to False if using MySQLdb and the DBAPI doesn’t provide the constants.CLIENT module.
sqlite¶
Accept None from cursor.fetchone() when “PRAGMA read_uncommitted” is called to determine current isolation mode at connect time and default to SERIALIZABLE; this to support SQLite versions pre-3.3.0 that did not have this feature.
0.7.0¶Released: Fri May 20 2011
orm¶
query.count() emits “count(*)” instead of “count(1)”..
It is an error to call query.get() when the given entity is not a single, full class entity or mapper (i.e. a column). This is a deprecation warning in 0.6.8.
Fixed a potential KeyError which under some circumstances could occur with the identity map, part of
added Query.with_session() method, switches Query to use a different session.
horizontal shard query should use execution options per connection as per)
Fixed the error message emitted for “can’t execute syncrule for destination column ‘q’; mapper ‘X’ does not map this column” to reference the correct mapper. . Also in 0.6.8.
polymorphic_union() gets a “cast_nulls” option, disables the usage of CAST when it renders the labeled NULL columns.
polymorphic_union() renders the columns in their original table order, as according to the first table/selectable in the list of polymorphic unions in which they appear. (which is itself an unordered mapping unless you pass an OrderedDict).
Fixed bug whereby mapper mapped to an anonymous alias would fail if logging were used, due to unescaped % sign in the alias name. Also in 0.6.8.
examples¶
removed the ancient “polymorphic association” examples and replaced with an updated set of examples that use declarative mixins, “generic_associations”. Each presents an alternative table layout.
sql¶
Fixed bug whereby nesting a label of a select() with another label in it would produce incorrect exported columns. Among other things this would break an ORM column_property() mapping against another column_property(). . Also in 0.6.8.
Some improvements to error handling inside of the execute procedure to ensure auto-close connections are really closed when very unusual DBAPI errors occur.
metadata.reflect() and reflection.Inspector() had some reliance on GC to close connections which were internally procured, fixed this.
Added explicit check for when Column .name is assigned as blank string
Fixed bug whereby if FetchedValue was passed to column server_onupdate, it would not have its parent “column” assigned, added test coverage for all column default assignment patterns. also in 0.6.8
postgresql¶
Fixed the psycopg2_version parsing in the psycopg2 dialect.
Fixed bug affecting PG 9 whereby index reflection would fail if against a column whose name had changed. . Also in 0.6.8.
mssql¶
Fixed bug in MSSQL dialect whereby the aliasing applied to a schema-qualified table would leak into enclosing select statements. Also in 0.6.8.
misc¶
This section documents those changes from 0.7b4 to 0.7.0. For an overview of what’s new in SQLAlchemy 0.7, see
Removed the usage of the “collections.MutableMapping” abc from the ext.mutable docs as it was being used incorrectly and makes the example more difficult to understand in any case.
Fixed bugs in sqlalchemy.ext.mutable extension where None was not appropriately handled, replacement events were not appropriately handled.
0.7.0b4¶Released: Sun Apr 17 2011
general¶
Changes to the format of CHANGES, this file. The format changes have been applied to the 0.7 releases.
The “-declarative” changes will now be listed directly under the “-orm” section, as these are closely related.
The 0.5 series changes have been moved to the file CHANGES_PRE_06 which replaces CHANGES_PRE_05..
orm¶
Some fixes to “evaluate” and “fetch” evaluation when query.update(), query.delete() are called. The retrieval of records is done after autoflush in all cases, and before update/delete is emitted, guarding against unflushed data present as well as expired objects failing during the evaluation.
Reworded the exception raised when a flush is attempted of a subclass that is not polymorphic against the supertype.
Still more wording adjustments when a query option can’t find the target entity. Explain that the path must be from one of the root entities.
Some fixes to the state handling regarding backrefs, typically when autoflush=False, where the back-referenced collection wouldn’t properly handle add/removes with no net change. Thanks to Richard Murri for the test case + patch. (also in 0.6.7).
Added checks inside the UOW to detect the unusual condition of being asked to UPDATE or DELETE on a primary key value that contains NULL in it..
a “having” clause would be copied from the inside to the outside query if from_self() were used; in particular this would break an 0.7 style count() query. (also in 0.6.7)
the Query.execution_options() method now passes those options to the Connection rather than the SELECT statement, so that all available options including isolation level and compiled cache may be used.
engine¶
The C extension is now enabled by default on CPython 2.x with a fallback to pure python if it fails to compile.
sql¶
The “compiled_cache” execution option now raises an error when passed to a SELECT statement rather than a Connection. Previously it was being ignored entirely. We may look into having this option work on a per-statement level at some point.
Restored the “catchall” constructor on the base TypeEngine class, with a deprecation warning. This so that code which does something like Integer(11) still succeeds.
Fixed regression whereby MetaData() coming back from unpickling did not keep track of new things it keeps track of now, i.e. collection of Sequence objects, list of schema names.
The limit/offset keywords to select() as well as the value passed to select.limit()/offset() will be coerced to integer. (also in 0.6.7)
fixed bug where “from” clause gathering from an over() clause would be an itertools.chain() and not a list, causing “can only concatenate list” TypeError when combined with other clauses.
Fixed incorrect usage of “,” in over() clause being placed between the “partition” and “order by” clauses.
Before/after attach events for PrimaryKeyConstraint now function, tests added for before/after events on all constraint types.
Added explicit true()/false() constructs to expression lib - coercion rules will intercept “False”/”True” into these constructs. In 0.6, the constructs were typically converted straight to string, which was no longer accepted in 0.7.
schema¶.
postgresql¶
Psycopg2 for Python 3 is now supported.
Fixed support for precision numerics when using pg8000.
sqlite¶
Fixed bug where reflection of foreign key created as “REFERENCES <tablename>” without col name would fail. (also in 0.6.7)
oracle¶
Using column names that would require quotes for the column itself or for a name-generated bind parameter, such as names with special characters, underscores, non-ascii characters, now properly translate bind parameter keys when talking to cx_oracle. (Also in 0.6.7)
Oracle dialect adds use_binds_for_limits=False create_engine() flag, will render the LIMIT/OFFSET values inline instead of as binds, reported to modify the execution plan used by Oracle. (Also in 0.6.7)
misc¶
REAL has been added to the core types. Supported by PostgreSQL, SQL Server, MySQL, SQLite. Note that the SQL Server and MySQL versions, which add extra arguments, are also still available from those dialects.
Added @event.listens_for() decorator, given target + event name, applies the decorated function as a listener.
AssertionPool now stores the traceback indicating where the currently checked out connection was acquired; this traceback is reported within the assertion raised upon a second concurrent checkout; courtesy Gunnlaugur Briem
The “pool.manage” feature doesn’t use pickle anymore to hash the arguments for each pool.
Documented SQLite DATE/TIME/DATETIME types. (also in 0.6.7)
Fixed mutable extension docs to show the correct type-association methods.
0.7.0b3¶Released: Sun Mar 20 2011
general¶
Lots of fixes to unit tests when run under PyPy (courtesy Alex Gaynor).
orm¶.
Improvements to the error messages emitted when querying against column-only entities in conjunction with (typically incorrectly) using loader options, where the parent entity is not fully present.
Fixed bug in query.options() whereby a path applied to a lazyload using string keys could overlap a same named attribute on the wrong entity. Note 0.6.7 has a more conservative fix to this.
examples¶
Updated the association, association proxy examples to use declarative, added a new example dict_of_sets_with_default.py, a “pushing the envelope” example of association proxy.
The Beaker caching example allows a “query_cls” argument to the query_callable() function. (also in 0.6.7)
engine¶
Fixed AssertionPool regression bug.
Changed exception raised to ArgumentError when an invalid dialect is specified.
sql¶
Added a fully descriptive error message for the case where Column is subclassed and _make_proxy() fails to make a copy due to TypeError on the constructor. The method _constructor should be implemented in this case.
Added new event “column_reflect” for Table objects. Receives the info dictionary about a Column before the object is generated within reflection, and allows modification to the dictionary for control over most aspects of the resulting Column including key, name, type, info dictionary..
Added new generic function “next_value()”, accepts a Sequence object as its argument and renders the appropriate “next value” generation string on the target platform, if supported. Also provides “.next_value()” method on Sequence itself.
func.next_value() or other SQL expression can be embedded directly into an insert() construct, and if implicit or explicit “returning” is used in conjunction with a primary key column, the newly generated value will be present in result.inserted_primary_key.
Added accessors to ResultProxy “returns_rows”, “is_insert” (also in 0.6.7)
postgresql¶
Added RESERVED_WORDS for postgresql dialect. (also in 0.6.7)
Fixed the BIT type to allow a “length” parameter, “varying” parameter. Reflection also fixed. (also in 0.6.7)
mssql¶
Rewrote the query used to get the definition of a view, typically when using the Inspector interface, to use sys.sql_modules instead of the information schema, thereby allowing views definitions longer than 4000 characters to be fully returned. (also in 0.6.7)
misc¶
Arguments in __mapper_args__ that aren’t “hashable” aren’t mistaken for always-hashable, possibly-column arguments. (also in 0.6.7)
The “implicit_returning” flag on create_engine() is honored if set to False. (also in 0.6.7)
Added RESERVED_WORDS informix dialect. (also in 0.6.7)
The horizontal_shard ShardedSession class accepts the common Session argument “query_cls” as a constructor argument, to enable further subclassing of ShardedQuery. (also in 0.6.7)
0.7.0b2¶Released: Sat Feb 19 2011
orm¶
Fixed bug whereby Session.merge() would call the load() event with one too few arguments.
Added logic which prevents the generation of events from a MapperExtension or SessionExtension from generating do-nothing events for all the methods not overridden.
examples¶
Beaker example now takes into account ‘limit’ and ‘offset’, bind params within embedded FROM clauses (like when you use union() or from_self()) when generating a cache key.
sql¶
Renamed the EngineEvents event class to ConnectionEvents. As these classes are never accessed directly by end-user code, this strictly is a documentation change for end users. Also simplified how events get linked to engines and connections internally.
The Sequence() construct, when passed a MetaData() object via its ‘metadata’ argument, will be included in CREATE/DROP statements within metadata.create_all() and metadata.drop_all(), including “checkfirst” logic.
The Column.references() method now returns True if it has a foreign key referencing the given column exactly, not just its parent table.
postgresql¶
Fixed regression from 0.6 where SMALLINT and BIGINT types would both generate SERIAL on an integer PK column, instead of SMALLINT and BIGSERIAL
misc¶
Fixed regression whereby composite() with Column objects placed inline would fail to initialize. The Column objects can now be inline with the composite() or external and pulled in via name or object ref.
Fix error message referencing old @classproperty name to reference @declared_attr (also in 0.6.7)
the dictionary at the end of the __table_args__ tuple is now optional.
Association proxy now has correct behavior for any(), has(), and contains() when proxying a many-to-one scalar attribute to a one-to-many collection (i.e. the reverse of the ‘typical’ association proxy use case)
0.7.0b1¶Released: Sat Feb 12 2011
general¶
New event system, supersedes all extensions, listeners, etc.
Logging enhancements
Setup no longer installs a Nose plugin
The “sqlalchemy.exceptions” alias in sys.modules has been removed. Base SQLA exceptions are available via “from sqlalchemy import exc”. The “exceptions” alias for “exc” remains in “sqlalchemy” for now, it’s just not patched into sys.modules.
orm¶
More succinct form of query.join(target, onclause)
Hybrid Attributes, implements/supersedes synonym()
Rewrite of composites
Mutation Event Extension, supersedes “mutable=True”
PickleType and ARRAY mutability turned off by default
Simplified polymorphic_on assignment
Flushing of Orphans that have no parent is allowed.
Warnings generated when collection members, scalar referents not part of the flush
Non-Table-derived constructs can be mapped
Tuple label names in Query Improved
Mapped column attributes reference the most specific column first
Mapping to joins with two or more same-named columns requires explicit declaration
Mapper requires that polymorphic_on column be present in the mapped selectable
compile_mappers() renamed configure_mappers(), simplified configuration internals
the aliased() function, if passed a SQL FromClause element (i.e. not a mapped class), will return element.alias() instead of raising an error on AliasedClass.
Session.merge() will check the version id of the incoming state against that of the database, assuming the mapping uses version ids and incoming state has a version_id assigned, and raise StaleDataError if they don’t match.
Session.connection(), Session.execute() accept ‘bind’, to allow execute/connection operations to participate in the open transaction of an engine explicitly.
Query.join(), Query.outerjoin(), eagerload(), eagerload_all(), others no longer allow lists of attributes as arguments (i.e. option([x, y, z]) form, deprecated since 0.5)
ScopedSession.mapper is removed (deprecated since 0.5).
Horizontal shard query places ‘shard_id’ in context.attributes where it’s accessible by the “load()” event.
A single contains_eager() call across multiple entities will indicate all collections along that path should load, instead of requiring distinct contains_eager() calls for each endpoint (which was never correctly documented).
The “name” field used in orm.aliased() now renders in the resulting SQL statement.
Session weak_instance_dict=False is deprecated.
An exception is raised in the unusual case that an append or similar event on a collection occurs after the parent object has been dereferenced, which prevents the parent from being marked as “dirty” in the session. Was a warning in 0.6.6.
Query.distinct() now accepts column expressions as *args, interpreted by the PostgreSQL dialect as DISTINCT ON (<expr>)..
the value of “passive” as passed to attributes.get_history() should be one of the constants defined in the attributes package. Sending True or False is deprecated.
Added a name argument to Query.subquery(), to allow a fixed name to be assigned to the alias object. (also in 0.6.7)
A warning is emitted when a joined-table inheriting mapper has no primary keys on the locally mapped table (but has pks on the superclass table). (also in 0.6.7). (also in 0.6.7)
Fixed bug where a column with a SQL or server side default that was excluded from a mapping with include_properties or exclude_properties would result in UnmappedColumnError. (also in 0.6.7)
A warning is emitted in the unusual case that an append or similar event on a collection occurs after the parent object has been dereferenced, which prevents the parent from being marked as “dirty” in the session. This will be an exception in 0.7. (also in 0.6.7)
sql¶
Added over() function, method to FunctionElement classes, produces the _Over() construct which in turn generates “window functions”, i.e. “<window function> OVER (PARTITION BY <partition by>, ORDER BY <order by>)”.
LIMIT/OFFSET clauses now use bind parameters
select.distinct() now accepts column expressions as *args, interpreted by the PostgreSQL dialect as DISTINCT ON (<expr>). Note this was already available via passing a list to the distinct keyword argument to select().
select.prefix_with() accepts multiple expressions (i.e. *expr), ‘prefix’ keyword argument to select() accepts a list or tuple.
Passing a string to the distinct keyword argument of select() for the purpose of emitting special MySQL keywords (DISTINCTROW etc.) is deprecated - use prefix_with() for this.
TypeDecorator works with primary key columns
DDL() constructs now escape percent signs
Table.c / MetaData.tables refined a bit, don’t allow direct mutation
Callables passed to bindparam() don’t get evaluated
types.type_map is now private, types._type_map
Non-public Pool methods underscored
Added NULLS FIRST and NULLS LAST support. It’s implemented as an extension to the asc() and desc() operators, called nullsfirst() and nullslast().
The Index() construct can be created inline with a Table definition, using strings as column names, as an alternative to the creation of the index outside of the Table.
execution_options() on Connection accepts “isolation_level” argument, sets transaction isolation level for that connection only until returned to the connection pool, for those backends which support it (SQLite, PostgreSQL)
A TypeDecorator of Integer can be used with a primary key column, and the “autoincrement” feature of various dialects as well as the “sqlite_autoincrement” flag will honor the underlying database type as being Integer-based..
Result-row processors are applied to pre-executed SQL defaults, as well as cursor.lastrowid, when determining the contents of result.inserted_primary_key.
Bind parameters present in the “columns clause” of a select are now auto-labeled like other “anonymous” clauses, which among other things allows their “type” to be meaningful when the row is fetched, as in result row processors.
TypeDecorator is present in the “sqlalchemy” import space..
Column.copy(), as used in table.tometadata(), copies the ‘doc’ attribute. (also in 0.6.7)
Added some defs to the resultproxy.c extension so that the extension compiles and runs on Python 2.4. (also in 0.6.7)
The compiler extension now supports overriding the default compilation of expression._BindParamClause including that the auto-generated binds within the VALUES/SET clause of an insert()/update() statement will also use the new compilation rules. (also in 0.6.7)
SQLite dialect now uses NullPool for file-based databases
The path given as the location of a sqlite database is now normalized via os.path.abspath(), so that directory changes within the process don’t affect the ultimate location of a relative file path.
postgresql¶
When explicit sequence execution derives the name of the auto-generated sequence of a SERIAL column, which currently only occurs if implicit_returning=False, now accommodates if the table + column name is greater than 63 characters using the same logic PostgreSQL uses. (also in 0.6.7)
Added an additional libpq message to the list of “disconnect” exceptions, “could not receive data from server” (also in 0.6.7)
mysql¶
New DBAPI support for pymysql, a pure Python port of MySQL-python.
oursql dialect accepts the same “ssl” arguments in create_engine() as that of MySQLdb. (also in 0.6.7)
mssql¶.
misc¶
Detailed descriptions of each change below are described at:
Added an explicit check for the case that the name ‘metadata’ is used for a column attribute on a declarative class. (also in 0.6.7)
Some adjustments so that Interbase is supported as well. FB/Interbase version idents are parsed into a structure such as (8, 1, 1, ‘interbase’) or (2, 1, 588, ‘firebird’) so they can be distinguished.
flambé! the dragon and The Alchemist image designs created and generously donated by Rotem Yaari.Created using Sphinx 4.5.0. | https://docs.sqlalchemy.org/en/20/changelog/changelog_07.html | CC-MAIN-2022-21 | refinedweb | 9,780 | 57.27 |
This is your resource to discuss support topics with your peers, and learn from each other.
06-08-2012 11:24 PM
I am building an app that reuses several sounds over and over. I am running into a problem with the audio tag when a sound is replayed.
The first time the sound is played it plays fine, but subsequent times it "stutters" at the beginning of the sound clip.
I've broken it down to it's simplest form:
<html>
<head>
<title>sound test</title>
</head>
<body>
<audio src="Right.mp3" controls></audio>
</body>
</html>
I've posted a working sample at
This is works fine in chrome, but you will experience the stutter through the PlayBook browser.
I realize I can re-create the sound every time i want to use it which eliminates this problem. But doing this I was running into memory issues because every audio element is loaded into memory and I could not find a way to release it.
var sound = "Right";
function playSound(sound){
snd = new Audio(sound+".mp3");
snd.play();
}
I thought I had found a work around for this by doing this:
var sound = "Right";
function playSound(sound){
snd = new Audio(sound+".mp3");
snd.addEventListener("ended", function(){
snd.src="";
}, false);
snd.play();
}
This solved the memory eating issue however, this caused random app crashes.
I would love to hear any solutions for repeating audio. I'm sure it's a common thing in games.
Solved! Go to Solution.
06-10-2012 09:04 AM
Could you debug that code on Chrome and add breakpoints at snd = new Audio(sound+".mp3"), snd.src="" and snd.play(). Check what's happening, because I think it's creating multiple Audio instances and calling all them at the same time.
06-10-2012 09:20 AM
Yes using that code it recreates the instance of the sound, but that does not cause shuddering audio.
Using that code each instance is loaded into memory and eventually you can use all the free memory in your playbook causing the app to crash and sometimes even having to reboot the playbook to free the memory.
06-10-2012 04:42 PM
I currently have working code for anyone who is interested.
Although this is not ideal...
var sndRight = new Audio("Right.mp3");
function playSound(){
sndRight.src="Right.mp3";
snd.play();
}
Pros: No "stutter" the second time the clip is played.
No memory issues because we're not creating multiple instances of the sound.
Haven't seen any random crashing.
Cons: It causes a lag while the audio is loaded each time.
So if anyone has any better solutions I'd still love to hear them. That loading lag is a issue I'd prefer not to have.
06-14-2012 09:46 PM - edited 06-14-2012 09:49 PM
I thought I would share my final solution to this problem:
<script type="text/javascript">// // This is a reusable function that will // a) create a duplicate of the sound // b) alternate between the two sound files // c) reload the other one so it's ready for the next play var audio2 = function (filename) { return { audio1: new Audio(filename), audio2: new Audio(filename), usePrimary: false, intName: filename, play: function () { if (this.usePrimary) { this.audio1.play(); this.audio2.load(); } else { this.audio2.play(); this.audio1.load(); } this.usePrimary = !this.usePrimary; } }; } //Define all your sounds here: var sndLeft = new audio2("Left.wav"); var sndRight = new audio2("Right.wav"); var sndBang = new audio2("Bang.wav"); var sndDing = new audio2("Ding.wav"); //call the sounds by doing something like this: function playSnd(){ sndLeft.play(); } </script>
Happy coding...
06-15-2012 01:25 AM - edited 06-15-2012 01:30 AM
i have audio recording pl your software my Mobil massage
06-22-2012 10:15 PM
Your code is probably the best solution so far to the problem of repeating sound clips in WebWorks on Playbook. If I play one sound clip (for example, sndLeft in your example) repeatedly, the sound plays without stuttering. But if I alternate between two sound clips (for example, sndLeft.play(), then sndRight.play(), then sndLeft.play() etc) I still hear the stuttering. My observed behavior is probably different from yours because I am using different sound files (very short clips).
The funny thing is that if I create an audio element with controls (<audio controls='controls' ...>...), I can play that clip, manually drag the slider back to the beginning and play again without stuttering. Unfortunately, there is no programmatic control over that slider control.
Unfortunately, for my own needs it looks like I'll have to wait for a fix from RIM. I filed a JIRA issue in February, but it's probably not visible to the public (TABLET-501). Thanks for sharing your code and experience.
06-28-2012 03:30 AM
I have also struggled with HTML5 audio on the PlayBook and I have found a solution that seems to be working perfectly fine.
First have a look at Tunneltilt sample from BlackBerry:
See the files "sounds.js", "Readme.md" and folder blackberry.custom.audio
They play audio using an adobe air extension, not by using HTML5. This extension has absolutely no problem with playing multiple sound at the same time and the sound is clear. There are some things to notice, thou:
Here's an article that discusses extensions:
Just one last thing. I think that they have a memory leak in the extension code, so here's my version of the "CustomAudio.as" file:
package blackberry.custom.audio { import flash.media.Sound; import flash.media.SoundChannel; import flash.media.SoundTransform; import flash.net.URLRequest; import webworks.extension.DefaultExtension; public class CustomAudio extends DefaultExtension { public function CustomAudio() { super(); } override public function getFeatureList():Array { return new Array ("blackberry.custom.audio"); } public function playFile(eFileName:String, eLoops:int, eVolume:Number, ePan:Number):Number { var req:URLRequest = new URLRequest(eFileName); var snd:Sound = new Sound(); var channel:SoundChannel = new SoundChannel(); snd.load(req); var pausePosition:int = channel.position; var sndTransform:SoundTransform = new SoundTransform(eVolume, ePan); channel = snd.play(pausePosition, eLoops, sndTransform); return 0; } public function setVolume(eSoundID:Number, eNewVolume:Number):void { //this method has been disabled } } }
Hope it helps.
06-28-2012 07:15 AM
Thanks for your reply. Unfortunately the clips I want to play are all very short and it doesn't sound like this will work for this. My above solution works for my current project.
My next project involves a much more random and often overlapping short sound clips (I was seeing a lag in my animations when the load() event was called) so I have been looking into soundjs which does the same sort of thing, passes the sound off to a different (flash) player to handle. I haven't explored it completely but it looks promising.
07-02-2012 06:13 PM - edited 07-03-2012 08:16 AM
I have just found out my solution(the one with the alternating copies of the same sound) only works with developer preview 2.1.0.560 whether or not it will work when 2.1 is officially released is yet to be seen.
The solution provided by razorek works very well.
@razorek Thank you for your help. You provided some really great information. I wish I had revisited this earlier. | https://supportforums.blackberry.com/t5/Web-and-WebWorks-Development/audio-skips-stutters-second-time-it-s-played/m-p/1788933/highlight/true | CC-MAIN-2017-13 | refinedweb | 1,219 | 66.84 |
Archive!
Enhanced email validation using #DNS #MX
If you have a sign-up form, and you are collecting user’s email addresses, then you really want to cut down on the number of typos. I’ve seen as many as 5% of users mistyping “@gmail.com” as “@gmail.con, @gmail.co, @gmai.com, and @gmail.xom etc.”
Many of these email addresses pass basic regex checks, it’s “Gmail.co” is in the correct format for a domain name, but alas, your user will never be able to re-log in.
So, instead of only relying on regexes, you can also use a DNS MX lookup, that can check if there are mail exchanger(s) associated with the domain. This means that “@gmail.com” will work, but “@gmail.co” won’t
Firstly, we have to delve into how to perform a DNS MX lookup in C#, which is a UDP request sent over port 53 to a DNS server, in this case 8.8.8.8, which is Google’s public DNS resolver.
Here’s the class
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Sockets;
using System.Text;
public static class Dns
{
public static IEnumerable<string> MxLookup(string domain)
{
const string strDns = “8.8.8.8”; // Google DNS
var udpClient = new UdpClient(strDns, 53);
// SEND REQUEST——————–
var list = new List<byte>();
list.AddRange(new byte[] { 88, 89, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0 });
var tmp = domain.Split(‘.’);
foreach (string s in tmp)
{
list.Add(Convert.ToByte(s.Length));
var chars = s.ToCharArray();
list.AddRange(chars.Select(c => Convert.ToByte(Convert.ToInt32(c))));
}
list.AddRange(new byte[] { 0, 0, Convert.ToByte(15), 0, 1 });
var req = new byte[list.Count];
for (var i = 0; i < list.Count; i++) { req[i] = list[i]; }
udpClient.Send(req, req.Length);
// RECEIVE RESPONSE————–
IPEndPoint ep = null;
var receiveBuffer = udpClient.Receive(ref ep);
udpClient.Close();
var resp = new int[receiveBuffer.Length];
for (var i = 0; i < resp.Length; i++)
resp[i] = Convert.ToInt32(receiveBuffer[i]);
var status = resp[3];
if (status != 128) return null; // throw new Exception(string.Format(“{0}”, status));
var answers = resp[7];
if (answers == 0) return null; // throw new Exception(“No results”);
var pos = domain.Length + 18;
var lRecords = new List<string>();
while (answers > 0)
{
pos += 14; //offset
var str = GetMxRecord(resp, pos, out pos);
lRecords.Add(str);
answers–;
}
return lRecords;
}
private static string GetMxRecord(int[] resp, int start, out int pos)
{
StringBuilder sb = new StringBuilder();
int len = resp[start];
while (len > 0)
{
if (len != 192)
{
if (sb.Length > 0) sb.Append(“.”);
for (int i = start; i < start + len; i++)
sb.Append(Convert.ToChar(resp[i + 1]));
start += len + 1;
len = resp[start];
}
if (len != 192) continue;
var newPosition = resp[start + 1];
if (sb.Length > 0) sb.Append(“.”);
sb.Append(GetMxRecord(resp, newPosition, out newPosition));
start++;
break;
}
pos = start + 1;
return sb.ToString();
}
}
[Credit due to Christian Salway @ccsalway for this code]
This when called as Dns.MxLookup(“gmail.com”), would return a list of strings as follows;
"gmail-smtp-in.l.google.com", "alt4.gmail-smtp-in.l.google.com", "alt1.gmail-smtp-in.l.google.com", "alt2.gmail-smtp-in.l.google.com", "alt3.gmail-smtp-in.l.google.com"
These correspond to the mail exchange servers used by gmail, and indicate that the domain can receive email, otherwise, this function returns null.
Now, lets create an ASP.NET page that will act as a handler for an Ajax call to validate an email address as follows;
using System;
using System.Collections.Generic;
using Newtonsoft.Json;
public partial class ajax_ValidateEmail : System.Web.UI.Page
{
private class ResponseClass
{
public bool success { get; set; }
public string error { get; set; }
public IEnumerable<string> information { get; set; }
}
protected void Page_Load(object sender, EventArgs e)
{
var response = new ResponseClass();
var email = Request.QueryString[“email”];
if (string.IsNullOrEmpty(email))
{
response.error = “Need on email on querystring”;
}
else
{
var idxAt = email.IndexOf(“@”, StringComparison.CurrentCulture);
if (idxAt == -1)
{
response.error = “Invalid email address”;
}
else
{
var domain = email.Substring(idxAt+1);
var mx = Dns.MxLookup(domain);
if (mx == null)
{
response.error = “Invalid domain”;
}
else
{
response.success = true;
response.information = mx;
}
}
}
var json = JsonConvert.SerializeObject(response, Formatting.Indented);
Response.ContentType = “application/json”;
Response.Write(json);
}
}
This will respond with the following Json in the case of success;
{ "success": true, "error": null, "information": [ "gmail-smtp-in.l.google.com", "alt4.gmail-smtp-in.l.google.com", "alt1.gmail-smtp-in.l.google.com", "alt2.gmail-smtp-in.l.google.com", "alt3.gmail-smtp-in.l.google.com" ] }
And, in the case of failure;
{ "success": false, "error": "Invalid domain", "information": null }
This can then be called from Javascript (jquery) as follows;
var strEmail = $(“#tbEmail”).val();
$.get(“/ajax/ValidateEmail.aspx?email=” + strEmail,
function(response) {
if (!response.success) {
$(“#fgEmail”).addClass(“has-error”);
} else {
$(“#fgEmail”).removeClass(“has-error”);
}
});
You can go a step further and prevent the form submission, at the moment, I’m just relaying user feedback, and see what happens.
To see this live, see
Using #Electron to call a .NET #API
Electron is a platform that allows you develop desktop applications which are cross-platform, and run on standard HTML , CSS , Javascript and Node. The mix of typically client-and-“server side” javascript is very unusual, but quite liberating.
Here is a simple example of using Electron with an API that typically would be called by server-side Node, in a desktop app here –
It’s not really that much beyond the “hello world” example, but shows the basics of using a simple user interface, and calling an API.
The code in “renderer.js” is as follows;
var api = require(‘car-registration-api-uk’);
window.$ = window.jQuery = require(‘jquery’);
$(init);
function init()
{
$(“#btnSearch”).bind(“click”,btnSearch_click);
}
function btnSearch_click()
{
var reg = $(“#reg”).val();
api.CheckCarRegistrationUK(reg,”*** your username here***”,function(data){
$(“#output”).html(data.Description);
});
}
You’ll need an account on RegCheck.org.uk for this to work.
Co2 emissions database available via #EEA
If you are looking for environmental data on vehicles, to judge their impact on the environment, then this dataset available from the EEA has a record of millions of european vehicles, and their CO2 footprint.
It’s available for download here; | https://blog.dotnetframework.org/category/uncategorized/ | CC-MAIN-2019-26 | refinedweb | 1,033 | 52.66 |
.
SVG 1.1 is a modularization of SVG 1.0 [SVG10]. See the Document Type Definition appendix for details on how the DTD is structured to allow profiling and composition with other XML languages. following are the SVG 1.1 namespace, public identifier and system identifier:
The following is an example document type declaration for an SVG document:
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "">
Note that DTD listed in the System Identifier is a modularized DTD (i.e. its contents are spread over multiple files), which means that a validator may have to fetch the multiple modules in order to validate. For that reason, there is a single flattened DTD available that corresponds to the SVG 1.1 modularized DTD. It can be found at.
While a DTD is provided in this specification, the use of DTDs for validating XML documents is known to be problematic. In particular, DTDs do not handle namespaces gracefully. It is not recommended that a DOCTYPE declaration be included 2 Core rootmost ‘svg’ element is the furthest ‘svg’ ancestor element that does not exit an SVG context. See also SVG document fragment.. | http://www.w3.org/TR/SVG11/intro.html | CC-MAIN-2013-48 | refinedweb | 190 | 58.28 |
import "golang.org/x/exp/shiny/text"
Package text lays out paragraphs of text.
A body of text is laid out into a Frame: Frames contain Paragraphs (stacked vertically), Paragraphs contain Lines (stacked vertically), and Lines contain Boxes (stacked horizontally). Each Box holds a []byte slice of the text. For example, to simply print a Frame's text from start to finish:
var f *text.Frame = etc for p := f.FirstParagraph(); p != nil; p = p.Next(f) { for l := p.FirstLine(f); l != nil; l = l.Next(f) { for b := l.FirstBox(f); b != nil; b = b.Next(f) { fmt.Print(b.Text(f)) } } }
A Frame's structure (the tree of Paragraphs, Lines and Boxes), and its []byte text, are not modified directly. Instead, a Frame's maximum width can be re-sized, and text can be added and removed via Carets (which implement standard io interfaces). For example, to add some words to the end of a frame:
var f *text.Frame = etc c := f.NewCaret() c.Seek(0, text.SeekEnd) c.WriteString("Not with a bang but a whimper.\n") c.Close()
Either way, such modifications can cause re-layout, which can add or remove Paragraphs, Lines and Boxes. The underlying memory for such structs can be re-used, so pointer values, such as of type *Box, should not be held over such modifications.
Code:
package main import ( "fmt" "image" "os" "golang.org/x/exp/shiny/text" "golang.org/x/image/font" "golang.org/x/image/math/fixed" ) // toyFace implements the font.Face interface by measuring every rune's width // as 1 pixel. type toyFace struct{} func (toyFace) Close() error { return nil } func (toyFace) Glyph(dot fixed.Point26_6, r rune) (image.Rectangle, image.Image, image.Point, fixed.Int26_6, bool) { panic("unimplemented") } func (toyFace) GlyphBounds(r rune) (fixed.Rectangle26_6, fixed.Int26_6, bool) { panic("unimplemented") } func (toyFace) GlyphAdvance(r rune) (fixed.Int26_6, bool) { return fixed.I(1), true } func (toyFace) Kern(r0, r1 rune) fixed.Int26_6 { return 0 } func (toyFace) Metrics() font.Metrics { return font.Metrics{} } func printFrame(f *text.Frame, softReturnsOnly bool) { for p := f.FirstParagraph(); p != nil; p = p.Next(f) { for l := p.FirstLine(f); l != nil; l = l.Next(f) { for b := l.FirstBox(f); b != nil; b = b.Next(f) { if softReturnsOnly { os.Stdout.Write(b.TrimmedText(f)) } else { os.Stdout.Write(b.Text(f)) } } if softReturnsOnly { fmt.Println() } } } } func main() { var f text.Frame f.SetFace(toyFace{}) f.SetMaxWidth(fixed.I(60)) c := f.NewCaret() c.WriteString(mobyDick) c.Close() fmt.Println("====") printFrame(&f, false) fmt.Println("====") fmt.Println("123456789_123456789_123456789_123456789_123456789_123456789_") printFrame(&f, true) fmt.Println("====") } const mobyDick = "CHAPTER 1. Loomings.\nCall me Ishmael. Some years ago—never mind how long precisely—having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world...\n"
These constants are equal to os.SEEK_SET, os.SEEK_CUR and os.SEEK_END, understood by the io.Seeker interface, and are provided so that users of this package don't have to explicitly import "os".
Box holds a contiguous run of text.
Next returns the next Box after this one in the Line.
f is the Frame that contains the Box.
Text returns the Box's text.
f is the Frame that contains the Box.
TrimmedText returns the Box's text, trimmed right of any white space if it is the last Box in its Line.
f is the Frame that contains the Box.
Caret is a location in a Frame's text, and is the mechanism for adding and removing bytes of text. Conceptually, a Caret and a Frame's text is like an int c and a []byte t such that the text before and after that Caret is t[:c] and t[c:]. That byte-count location remains unchanged even when a Frame is re-sized and laid out into a new tree of Paragraphs, Lines and Boxes.
A Frame can have multiple open Carets. For example, the beginning and end of a text selection can be represented by two Carets. Multiple Carets for the one Frame are not safe to use concurrently, but it is valid to interleave such operations sequentially. For example, if two Carets c0 and c1 for the one Frame are positioned at the 10th and 20th byte, and 4 bytes are written to c0, inserting what becomes the equivalent of text[10:14], then c0's position is updated to be 14 but c1's position is also updated to be 24.
Close closes the Caret.
Delete deletes nBytes bytes in the specified direction from the Caret's location. It returns the number of bytes deleted, which can be fewer than that requested if it hits the beginning or end of the Frame.
DeleteRunes deletes nRunes runes in the specified direction from the Caret's location. It returns the number of runes and bytes deleted, which can be fewer than that requested if it hits the beginning or end of the Frame.
Read satisfies the io.Reader interface by copying those bytes after the Caret and incrementing the Caret.
ReadByte returns the next byte after the Caret and increments the Caret.
ReadRune returns the next rune after the Caret and increments the Caret.
Seek satisfies the io.Seeker interface.
Write inserts s into the Frame's text at the Caret and increments the Caret.
WriteByte inserts x into the Frame's text at the Caret and increments the Caret.
WriteRune inserts r into the Frame's text at the Caret and increments the Caret.
WriteString inserts s into the Frame's text at the Caret and increments the Caret.
Direction is either forwards or backwards.
Frame holds Paragraphs of text.
The zero value is a valid Frame of empty text, which contains one Paragraph, which contains one Line, which contains one Box.
FirstParagraph returns the first paragraph of this frame.
Height returns the height in pixels of this Frame.
Len returns the number of bytes in the Frame's text.
LineCount returns the number of Lines in this Frame.
This count includes any soft returns inserted to wrap text to the maxWidth.
NewCaret returns a new Caret at the start of this Frame.
ParagraphCount returns the number of Paragraphs in this Frame.
This count excludes any soft returns inserted to wrap text to the maxWidth.
SetFace sets the font face for measuring text.
SetMaxWidth sets the target maximum width of a Line of text, as a fixed-point fractional number of pixels. Text will be broken so that a Line's width is less than or equal to this maximum width. This line breaking is not strict. A Line containing asingleverylongword combined with a narrow maximum width will not be broken and will remain longer than the target maximum width; soft hyphens are not inserted.
A non-positive argument is treated as an infinite maximum width.
Line holds Boxes of text.
FirstBox returns the first Box of this Line.
f is the Frame that contains the Line.
Height returns the height in pixels of this Line.
Next returns the next Line after this one in the Paragraph.
f is the Frame that contains the Line.
Paragraph holds Lines of text.
FirstLine returns the first Line of this Paragraph.
f is the Frame that contains the Paragraph.
Height returns the height in pixels of this Paragraph.
LineCount returns the number of Lines in this Paragraph.
This count includes any soft returns inserted to wrap text to the maxWidth.
Next returns the next Paragraph after this one in the Frame.
f is the Frame that contains the Paragraph.
Package text imports 7 packages (graph) and is imported by 1 packages. Updated 2017-06-03. Refresh now. Tools for package owners. | http://godoc.org/golang.org/x/exp/shiny/text | CC-MAIN-2017-34 | refinedweb | 1,295 | 77.84 |
Simplify report views with Groovy's template engine framework
Document options requiring JavaScript are not displayed
Sample code
Help us improve this content
Level: Intermediate
Andrew Glover (aglover@stelligent.com), President, Stelligent Incorporated
15 Feb.
I.
String? But as you can see in Listing 1, Groovy
drops those +s, leaving you with much
cleaner, simpler code.
+
String example1 = "This is a multiline
string which is going to
cover a few lines then
end with a period."
Groovy also supports the notion of here-docs, as shown in
Listing 2. A here-doc is a convenient mechanism for creating formatted
Strings, such as HTML and XML. Notice
that here-doc syntax isn't much different from that of a normal String declaration, except that it requires
Python-like triple quotes.
itext =
"""
This is another multiline String
that takes up a few lines. Doesn't
do anything different from the previous one.
""".
GString
${}
Template engines have been around for a long time and can be found in almost every modern language. Normal Java language has Velocity and FreeMarker, to name two; Python has Cheetah and Ruby ERB; and Groovy has its own engine. See Resources to learn more about template engines.?
lang
${lang}.
length()
lang = "Groovy"
println "I dig any language with ${lang.length()}.
GroovyTestCase
import groovy.util.GroovyTestCase
class <%=test_suite %> extends GroovyTestCase {
<% for(tc in test_cases) {.
person
p
fname
lname.
map
For example, if a simple template had a variable named favlang, I'd have to define a map with a key value of favlang. The key's value would be whatever I
chose as my favorite scripting language (in this case, Groovy, of course).
favlang
In Listing 7, I've defined this simple template, and in
Listing 8, I'll show you the corresponding mapping code.
My favorite dynamic language is ${favlang}
Listing 8 shows a simple class that does five things, two of
which are important. Can you tell what they are?
package com.vanward.groovy.tmpl
import groovy.text.Template
import groovy.text.SimpleTemplateEngine
import java.io.File
class SimpleTemplate{
static void main(args) {
fle = new File("simple-txt.tmpl")
binding = ["favlang": "Groovy"]
engine = new SimpleTemplateEngine()
template = engine.createTemplate(fle).make(binding)
println template.toString()
}
}
Mapping the values for the simple template in Listing 8 was
surprisingly easy.
First, I created a File
instance pointing to the template, simple-txt.tmpl.
File.
binding.
SimpleTemplateEngine.
Person
class Person{
age
fname
lname
String toString(){
return "Age: " + age + " First Name: " + fname + " Last Name: " + lname
}
}
In Listing 10, you can see the mapping code that maps an
instance of the above-defined Person class.
import java.io.File
import groovy.text.Template
import groovy.text.SimpleTemplateEngine
class TemplatePerson{
static void main(args) {
pers1 = new Person(age:12, fname:"Sam", lname:"Covery")
fle = new File("person_report.tmpl")
binding = ["p":pers1]
engine = new SimpleTemplateEngine()."
pers1
When the code in Listing 10 is run, the output will be XML defining the person element, as shown in Listing 11.
.
list
fle = new File("unit_test.tmpl")
coll = ["testBinding", "testToString", "testAdd"]
binding = ["test_suite":"TemplateTest", "test_cases":coll]
engine = new SimpleTemplateEngine().
coll.
println
nfile.withPrintWriter{ pwriter |
pwriter.println("<md5report>")
for(f in scanner){.
nfile
PrintWriter.
<md5report>
<% for(clzz in clazzes) {.
ChecksumClass
The model then becomes the ChecksumClass defined in Listing 15.
class CheckSumClass{
name
value
String toString(){
return "name " + name + " value " + value
}
}
Class definitions are fairly easy in Groovy, no?
Creating a collection
Next, I need to refactor the section of code that previously
wrote to a file -- this time with logic to populate a list with the new
ChecksumClass, as shown in Listing 16.
clssez = []
for(f in scanner){
f.eachLine{ line |
iname = formatClassName(bsedir, f.path)
clsse.
[]
for
line
CheckSumClass
Adding the template mapping
The last thing I need to do is add the template engine-specific
code. This code will perform the run-time mapping and write the
corresponding formatted template to the original file, as shown in
Listing 17.
fle = new File("report.tmpl")
binding = ["clazzes": clzzez]
engine = new SimpleTemplateEngine():
/**
*
*/")
clssez = []
for(f in scanner){
f.eachLine{ line |
iname = formatClassName(bsedir, f.path)
clssez << new CheckSumClass(name:iname, value:line)
}
}
fle = new File("report.tmpl")
binding = ["clazzes": clzzez]
engine = new SimpleTemplateEngine()
template = engine.createTemplate(fle).make(binding)
nfile.withPrintWriter{ pwriter |
pwriter.println template.toString()
}
}
About the author
Andrew Glover is the President of Stelligent Incorporated, a Washington, D.C., metro area company specializing in the construction of automated testing? | http://www.ibm.com/developerworks/java/library/j-pg02155/ | crawl-001 | refinedweb | 733 | 59.5 |
Provided by: perl-doc_5.18.2-2ubuntu1_all
NAME
perlhpux - Perl version 5 on Hewlett-Packard Unix (HP-UX) systems
DESCRIPTION
This document describes various features of HP's Unix operating system (HP-UX) that will affect how Perl version 5 (hereafter just Perl) is compiled and/or runs. Using perl as shipped with HP-UX Application release September 2001, HP-UX 11.00 is the first to ship with Perl. By the time it was perl-5.6.1 in /opt/perl. The first occurrence is on CD 5012-7954 and can be installed using swinstall -s /cdrom perl assuming you have mounted that CD on /cdrom. That build was a portable hppa-1.1 multithread build that supports large files compiled with gcc-2.9-hppa-991112. If you perform a new installation, then (a newer) Perl will be installed automatically. Pre-installed HP-UX systems now have more recent versions of Perl and the updated modules. The official (threaded) builds from HP, as they are shipped on the Application DVD/CD's are available on <> for both PA-RISC and IPF (Itanium Processor Family). They are built with the HP ANSI-C compiler. Up till 5.8.8 that was done by ActiveState. To see what version is included on the DVD (assumed here to be mounted on /cdrom), issue this command: # swlist -s /cdrom perl # perl D.5.8.8.B 5.8.8 Perl Programming Language perl.Perl5-32 D.5.8.8.B 32-bit 5.8.8 Perl Programming Language with Extensions perl.Perl5-64 D.5.8.8.B 64-bit 5.8.8 Perl Programming Language with Extensions To see what is installed on your system: # swlist -R perl # perl E.5.8.8.J Perl Programming Language # perl.Perl5-32 E.5.8.8.J 32-bit Perl Programming Language with Extensions perl.Perl5-32.PERL-MAN E.5.8.8.J 32-bit Perl Man Pages for IA perl.Perl5-32.PERL-RUN E.5.8.8.J 32-bit Perl Binaries for IA # perl.Perl5-64 E.5.8.8.J 64-bit Perl Programming Language with Extensions perl.Perl5-64.PERL-MAN E.5.8.8.J 64-bit Perl Man Pages for IA perl.Perl5-64.PERL-RUN E.5.8.8.J 64-bit Perl Binaries for IA Using perl from HP's porting centre HP porting centre tries to keep up with customer demand and release updates from the Open Source community. Having precompiled Perl binaries available is obvious, though "up-to- date" is something relative. At the moment of writing only perl-5.10.1 was available (with 5.16.3 being the latest stable release from the porters point of view). The HP porting centres are limited in what systems they are allowed to port to and they usually choose the two most recent OS versions available. HP has asked the porting centre to move Open Source binaries from /opt to /usr/local, so binaries produced since the start of July 2002 are located in /usr/local. One of HP porting centres URL's is <> The port currently available is built with GNU gcc. Other prebuilt perl binaries To get even more recent perl depots for the whole range of HP-UX, visit H.Merijn Brand's site at <>. Carefully read the notes to see if the available versions suit your needs. Compiling Perl 5 on HP-UX When compiling Perl, you must use an ANSI C compiler. The C compiler that ships with all HP-UX systems is a K&R compiler that should only be used to build new kernels. Perl can be compiled with either HP's ANSI C compiler or with gcc. The former is recommended, as not only can it compile Perl with no difficulty, but also can take advantage of features listed later that require the use of HP compiler-specific command- line flags. If you decide to use gcc, make sure your installation is recent and complete, and be sure to read the Perl INSTALL file for more gcc-specific details. PA-RISC HP's HP9000 Unix systems run on HP's own Precision Architecture (PA-RISC) chip. HP-UX used to run on the Motorola MC68000 family of chips, but any machine with this chip in it is quite obsolete and this document will not attempt to address issues for compiling Perl on the Motorola chipset. The version of PA-RISC at the time of this document's last update is 2.0, which is also the last there will be. HP PA-RISC systems are usually refered to with model description "HP 9000". The last CPU in this series is the PA-8900. Support for PA-RISC architectured machines officially ends as shown in the following table: PA-RISC End-of-Life Roadmap +--------+----------------+----------------+-----------------+ | HP9000 | Superdome | PA-8700 | Spring 2011 | | 4-128 | | PA-8800/sx1000 | Summer 2012 | | cores | | PA-8900/sx1000 | 2014 | | | | PA-8900/sx2000 | 2015 | +--------+----------------+----------------+-----------------+ | HP9000 | rp7410, rp8400 | PA-8700 | Spring 2011 | | 2-32 | rp7420, rp8420 | PA-8800/sx1000 | 2012 | | cores | rp7440, rp8440 | PA-8900/sx1000 | Autumn 2013 | | | | PA-8900/sx2000 | 2015 | +--------+----------------+----------------+-----------------+ | HP9000 | rp44x0 | PA-8700 | Spring 2011 | | 1-8 | | PA-8800/rp44x0 | 2012 | | cores | | PA-8900/rp44x0 | 2014 | +--------+----------------+----------------+-----------------+ | HP9000 | rp34x0 | PA-8700 | Spring 2011 | | 1-4 | | PA-8800/rp34x0 | 2012 | | cores | | PA-8900/rp34x0 | 2014 | +--------+----------------+----------------+-----------------+ From <> The last order date for HP 9000 systems was December 31, 2008. A complete list of models at the time the OS was built is in the file /usr/sam/lib/mo/sched.models. The first column corresponds to the last part of the output of the "model" command. The second column is the PA-RISC version and the third column is the exact chip type used. (Start browsing at the bottom to prevent confusion ;-) # model 9000/800/L1000-44 # grep L1000-44 /usr/sam/lib/mo/sched.models L1000-44 2.0 PA8500 Portability Between PA-RISC Versions An executable compiled on a PA-RISC 2.0 platform will not execute on a PA-RISC 1.1 platform, even if they are running the same version of HP-UX. If you are building Perl on a PA-RISC 2.0 platform and want that Perl to also run on a PA-RISC 1.1, the compiler flags +DAportable and +DS32 should be used. It is no longer possible to compile PA-RISC 1.0 executables on either the PA-RISC 1.1 or 2.0 platforms. The command-line flags are accepted, but the resulting executable will not run when transferred to a PA-RISC 1.0 system. PA-RISC 1.0 The original version of PA-RISC, HP no longer sells any system with this chip. The following systems contained PA-RISC 1.0 chips: 600, 635, 645, 808, 815, 822, 825, 832, 834, 835, 840, 842, 845, 850, 852, 855, 860, 865, 870, 890 PA-RISC 1.1 An upgrade to the PA-RISC design, it shipped for many years in many different system. The following systems contain with PA-RISC 1.1 chips: 705, 710, 712, 715, 720, 722, 725, 728, 730, 735, 742, 743, 744, 745, 747, 750, 755, 770, 777, 778, 779, 800, 801, 803, 806, 807, 809, 811, 813, 816, 817, 819, 821, 826, 827, 829, 831, 837, 839, 841, 847, 849, 851, 856, 857, 859, 867, 869, 877, 887, 891, 892, 897, A180, A180C, B115, B120, B132L, B132L+, B160L, B180L, C100, C110, C115, C120, C160L, D200, D210, D220, D230, D250, D260, D310, D320, D330, D350, D360, D410, DX0, DX5, DXO, E25, E35, E45, E55, F10, F20, F30, G30, G40, G50, G60, G70, H20, H30, H40, H50, H60, H70, I30, I40, I50, I60, I70, J200, J210, J210XC, K100, K200, K210, K220, K230, K400, K410, K420, S700i, S715, S744, S760, T500, T520 PA-RISC 2.0 The most recent upgrade to the PA-RISC design, it added support for 64-bit integer data. As of the date of this document's last update, the following systems contain PA-RISC 2.0 chips: 700, 780, 781, 782, 783, 785, 802, 804, 810, 820, 861, 871, 879, 889, 893, 895, 896, 898, 899, A400, A500, B1000, B2000, C130, C140, C160, C180, C180+, C180-XP, C200+, C400+, C3000, C360, C3600, CB260, D270, D280, D370, D380, D390, D650, J220, J2240, J280, J282, J400, J410, J5000, J5500XM, J5600, J7000, J7600, K250, K260, K260-EG, K270, K360, K370, K380, K450, K460, K460-EG, K460-XP, K470, K570, K580, L1000, L2000, L3000, N4000, R380, R390, SD16000, SD32000, SD64000, T540, T600, V2000, V2200, V2250, V2500, V2600 Just before HP took over Compaq, some systems were renamed. the link that contained the explanation is dead, so here's a short summary: HP 9000 A-Class servers, now renamed HP Server rp2400 series. HP 9000 L-Class servers, now renamed HP Server rp5400 series. HP 9000 N-Class servers, now renamed HP Server rp7400. rp2400, rp2405, rp2430, rp2450, rp2470, rp3410, rp3440, rp4410, rp4440, rp5400, rp5405, rp5430, rp5450, rp5470, rp7400, rp7405, rp7410, rp7420, rp7440, rp8400, rp8420, rp8440, Superdome The current naming convention is: aadddd ||||`+- 00 - 99 relative capacity & newness (upgrades, etc.) |||`--- unique number for each architecture to ensure different ||| systems do not have the same numbering across ||| architectures ||`---- 1 - 9 identifies family and/or relative positioning || |`----- c = ia32 (cisc) | p = pa-risc | x = ia-64 (Itanium & Itanium 2) | h = housing `------ t = tower r = rack optimized s = super scalable b = blade sa = appliance Itanium Processor Family (IPF) and HP-UX HP-UX also runs on the new Itanium processor. This requires the use of a different version of HP-UX (currently 11.23 or 11i v2), and with the exception of a few differences detailed below and in later sections, Perl should compile with no problems. Although PA-RISC binaries can run on Itanium systems, you should not attempt to use a PA- RISC version of Perl on an Itanium system. This is because shared libraries created on an Itanium system cannot be loaded while running a PA-RISC executable. HP Itanium 2 systems are usually refered to with model description "HP Integrity". Itanium, Itanium 2 & Madison 6 HP also ships servers with the 128-bit Itanium processor(s). The cx26x0 is told to have Madison 6. As of the date of this document's last update, the following systems contain Itanium or Itanium 2 chips (this is likely to be out of date): BL60p, BL860c, BL870c, BL890c, cx2600, cx2620, rx1600, rx1620, rx2600, rx2600hptc, rx2620, rx2660, rx2800, rx3600, rx4610, rx4640, rx5670, rx6600, rx7420, rx7620, rx7640, rx8420, rx8620, rx8640, rx9610, sx1000, sx2000 To see all about your machine, type # model ia64 hp server rx2600 # /usr/contrib/bin/machinfo HP-UX versions Not all architectures (PA = PA-RISC, IPF = Itanium Processor Family) support all versions of HP-UX, here is a short list HP-UX version Kernel Architecture End-of-factory support ------------- ------ ------------ ---------------------------------- 10.20 32 bit PA 30-Jun-2003 11.00 32/64 PA 31-Dec-2006 11.11 11i v1 32/64 PA 31-Dec-2015 11.22 11i v2 64 IPF 30-Apr-2004 11.23 11i v2 64 PA & IPF 31-Dec-2015 11.31 11i v3 64 PA & IPF 31-Dec-2020 (PA) 31-Dec-2022 (IPF) See for the full list of hardware/OS support and expected end-of-life <> Building Dynamic Extensions on HP-UX HP-UX supports dynamically loadable libraries (shared libraries). Shared libraries end with the suffix .sl. On Itanium systems, they end with the suffix .so. Shared libraries created on a platform using a particular PA-RISC version are not usable on platforms using an earlier PA-RISC version by default. However, this backwards compatibility may be enabled using the same +DAportable compiler flag (with the same PA- RISC 1.0 caveat mentioned above). Shared libraries created on an Itanium platform cannot be loaded on a PA-RISC platform. Shared libraries created on a PA-RISC platform can only be loaded on an Itanium platform if it is a PA-RISC executable that is attempting to load the PA-RISC library. A PA-RISC shared library cannot be loaded into an Itanium executable nor vice-versa. To create a shared library, the following steps must be performed: 1. Compile source modules with +z or +Z flag to create a .o module which contains Position-Independent Code (PIC). The linker will tell you in the next step if +Z was needed. (For gcc, the appropriate flag is -fpic or -fPIC.) 2. Link the shared library using the -b flag. If the code calls any functions in other system libraries (e.g., libm), it must be included on this line. (Note that these steps are usually handled automatically by the extension's Makefile). If these dependent libraries are not listed at shared library creation time, you will get fatal "Unresolved symbol" errors at run time when the library is loaded. You may create a shared library that refers to another library, which may be either an archive library or a shared library. If this second library is a shared library, this is called a "dependent library". The dependent library's name is recorded in the main shared library, but it is not linked into the shared library. Instead, it is loaded when the main shared library is loaded. This can cause problems if you build an extension on one system and move it to another system where the libraries may not be located in the same place as on the first system. If the referred library is an archive library, then it is treated as a simple collection of .o modules (all of which must contain PIC). These modules are then linked into the shared library. Note that it is okay to create a library which contains a dependent library that is already linked into perl. Some extensions, like DB_File and Compress::Zlib use/require prebuilt libraries for the perl extensions/modules to work. If these libraries are built using the default configuration, it might happen that you run into an error like "invalid loader fixup" during load phase. HP is aware of this problem. Search the HP-UX cxx-dev forums for discussions about the subject. The short answer is that everything (all libraries, everything) must be compiled with "+z" or "+Z" to be PIC (position independent code). (For gcc, that would be "-fpic" or "-fPIC"). In HP-UX 11.00 or newer the linker error message should tell the name of the offending object file. A more general approach is to intervene manually, as with an example for the DB_File module, which requires SleepyCat's libdb.sl: # cd .../db-3.2.9/build_unix # vi Makefile ... add +Z to all cflags to create shared objects CFLAGS= -c $(CPPFLAGS) +Z -Ae +O2 +Onolimit \ -I/usr/local/include -I/usr/include/X11R6 CXXFLAGS= -c $(CPPFLAGS) +Z -Ae +O2 +Onolimit \ -I/usr/local/include -I/usr/include/X11R6 # make clean # make # mkdir tmp # cd tmp # ar x ../libdb.a # ld -b -o libdb-3.2.sl *.o # mv libdb-3.2.sl /usr/local/lib # rm *.o # cd /usr/local/lib # rm -f libdb.sl # ln -s libdb-3.2.sl libdb.sl # cd .../DB_File-1.76 # make distclean # perl Makefile.PL # make # make test # make install As of db-4.2.x it is no longer needed to do this by hand. Sleepycat has changed the configuration process to add +z on HP-UX automatically. # cd .../db-4.2.25/build_unix # env CFLAGS=+DD64 LDFLAGS=+DD64 ../dist/configure should work to generate 64bit shared libraries for HP-UX 11.00 and 11i. It is no longer possible to link PA-RISC 1.0 shared libraries (even though the command- line flags are still present). PA-RISC and Itanium object files are not interchangeable. Although you may be able to use ar to create an archive library of PA-RISC object files on an Itanium system, you cannot link against it using an Itanium link editor. The HP ANSI C Compiler When using this compiler to build Perl, you should make sure that the flag -Aa is added to the cpprun and cppstdin variables in the config.sh file (though see the section on 64-bit perl below). If you are using a recent version of the Perl distribution, these flags are set automatically. Even though HP-UX 10.20 and 11.00 are not actively maintained by HP anymore, updates for the HP ANSI C compiler are still available from time to time, and it might be advisable to see if updates are applicable. At the moment of writing, the latests available patches for 11.00 that should be applied are PHSS_35098, PHSS_35175, PHSS_35100, PHSS_33036, and PHSS_33902). If you have a SUM account, you can use it to search for updates/patches. Enter "ANSI" as keyword. The GNU C Compiler When you are going to use the GNU C compiler (gcc), and you don't have gcc yet, you can either build it yourself from the sources (available from e.g. <>) or fetch a prebuilt binary from the HP porting center at <> or from the DSPP (you need to be a member) at <> (Browse through the list, because there are often multiple versions of the same package available). Most mentioned distributions are depots. H.Merijn Brand has made prebuilt gcc binaries available on <> and/or <> for HP-UX 10.20 (only 32bit), HP-UX 11.00, HP-UX 11.11 (HP-UX 11i v1), and HP-UX 11.23 (HP-UX 11i v2 PA-RISC) in both 32- and 64-bit versions. For HP-UX 11.23 IPF and HP-UX 11.31 IPF depots are available too. The IPF versions do not need two versions of GNU gcc. On PA-RISC you need a different compiler for 32-bit applications and for 64-bit applications. On PA-RISC, 32-bit objects and 64-bit objects do not mix. Period. There is no different behaviour for HP C-ANSI-C or GNU gcc. So if you require your perl binary to use 64-bit libraries, like Oracle-64bit, you MUST build a 64-bit perl. Building a 64-bit capable gcc on PA-RISC from source is possible only when you have the HP C-ANSI C compiler or an already working 64-bit binary of gcc available. Best performance for perl is achieved with HP's native compiler. Using Large Files with Perl on HP-UX Beginning with HP-UX version 10.20, files larger than 2GB (2^31 bytes) may be created and manipulated. Three separate methods of doing this are available. Of these methods, the best method for Perl is to compile using the -Duselargefiles flag to Configure. This causes Perl to be compiled using structures and functions in which these are 64 bits wide, rather than 32 bits wide. (Note that this will only work with HP's ANSI C compiler. If you want to compile Perl using gcc, you will have to get a version of the compiler that supports 64-bit operations. See above for where to find it.) There are some drawbacks to this approach. One is that any extension which calls any file-manipulating C function will need to be recompiled (just follow the usual "perl Makefile.PL; make; make test; make install" procedure). The list of functions that will need to recompiled is: creat, fgetpos, fopen, freopen, fsetpos, fstat, fstatvfs, fstatvfsdev, ftruncate, ftw, lockf, lseek, lstat, mmap, nftw, open, prealloc, stat, statvfs, statvfsdev, tmpfile, truncate, getrlimit, setrlimit Another drawback is only valid for Perl versions before 5.6.0. This drawback is that the seek and tell functions (both the builtin version and POSIX module version) will not perform correctly. It is strongly recommended that you use this flag when you run Configure. If you do not do this, but later answer the question about large files when Configure asks you, you may get a configuration that cannot be compiled, or that does not function as expected. Threaded Perl on HP-UX It is possible to compile a version of threaded Perl on any version of HP-UX before 10.30, but it is strongly suggested that you be running on HP-UX 11.00 at least. To compile Perl with threads, add -Dusethreads to the arguments of Configure. Verify that the -D_POSIX_C_SOURCE=199506L compiler flag is automatically added to the list of flags. Also make sure that -lpthread is listed before -lc in the list of libraries to link Perl with. The hints provided for HP-UX during Configure will try very hard to get this right for you. HP-UX versions before 10.30 require a separate installation of a POSIX threads library package. Two examples are the HP DCE package, available on "HP-UX Hardware Extensions 3.0, Install and Core OS, Release 10.20, April 1999 (B3920-13941)" or the Freely available PTH package, available on H.Merijn's site (<>). The use of PTH will be unsupported in perl-5.12 and up and is rather buggy in 5.11.x. If you are going to use the HP DCE package, the library used for threading is /usr/lib/libcma.sl, but there have been multiple updates of that library over time. Perl will build with the first version, but it will not pass the test suite. Older Oracle versions might be a compelling reason not to update that library, otherwise please find a newer version in one of the following patches: PHSS_19739, PHSS_20608, or PHSS_23672 reformatted output: d3:/usr/lib 106 > what libcma-*.1 libcma-00000.1: HP DCE/9000 1.5 Module: libcma.sl (Export) Date: Apr 29 1996 22:11:24 libcma-19739.1: HP DCE/9000 1.5 PHSS_19739-40 Module: libcma.sl (Export) Date: Sep 4 1999 01:59:07 libcma-20608.1: HP DCE/9000 1.5 PHSS_20608 Module: libcma.1 (Export) Date: Dec 8 1999 18:41:23 libcma-23672.1: HP DCE/9000 1.5 PHSS_23672 Module: libcma.1 (Export) Date: Apr 9 2001 10:01:06 d3:/usr/lib 107 > If you choose for the PTH package, use swinstall to install pth in the default location (/opt/pth), and then make symbolic links to the libraries from /usr/lib # cd /usr/lib # ln -s /opt/pth/lib/libpth* . For building perl to support Oracle, it needs to be linked with libcl and libpthread. So even if your perl is an unthreaded build, these libraries might be required. See "Oracle on HP-UX" below. 64-bit Perl on HP-UX Beginning with HP-UX 11.00, programs compiled under HP-UX can take advantage of the LP64 programming environment (LP64 means Longs and Pointers are 64 bits wide), in which scalar variables will be able to hold numbers larger than 2^32 with complete precision. Perl has proven to be consistent and reliable in 64bit mode since 5.8.1 on all HP-UX 11.xx. As of the date of this document, Perl is fully 64-bit compliant on HP-UX 11.00 and up for both cc- and gcc builds. If you are about to build a 64-bit perl with GNU gcc, please read the gcc section carefully. Should a user have the need for compiling Perl in the LP64 environment, use the -Duse64bitall flag to Configure. This will force Perl to be compiled in a pure LP64 environment (with the +DD64 flag for HP C-ANSI-C, with no additional options for GNU gcc 64-bit on PA-RISC, and with -mlp64 for GNU gcc on Itanium). If you want to compile Perl using gcc, you will have to get a version of the compiler that supports 64-bit operations.) You can also use the -Duse64bitint flag to Configure. Although there are some minor differences between compiling Perl with this flag versus the -Duse64bitall flag, they should not be noticeable from a Perl user's perspective. When configuring -Duse64bitint using a 64bit gcc on a pa-risc architecture, -Duse64bitint is silently promoted to -Duse64bitall. In both cases, it is strongly recommended that you use these flags when you run Configure. If you do not use do this, but later answer the questions about 64-bit numbers when Configure asks you, you may get a configuration that cannot be compiled, or that does not function as expected. Oracle on HP-UX Using perl to connect to Oracle databases through DBI and DBD::Oracle has caused a lot of people many headaches. Read README.hpux in the DBD::Oracle for much more information. The reason to mention it here is that Oracle requires a perl built with libcl and libpthread, the latter even when perl is build without threads. Building perl using all defaults, but still enabling to build DBD::Oracle later on can be achieved using Configure -A prepend:libswanted='cl pthread ' ... Do not forget the space before the trailing quote. Also note that this does not (yet) work with all configurations, it is known to fail with 64-bit versions of GCC. GDBM and Threads on HP-UX If you attempt to compile Perl with (POSIX) threads on an 11.X system and also link in the GDBM library, then Perl will immediately core dump when it starts up. The only workaround at this point is to relink the GDBM library under 11.X, then relink it into Perl. the error might show something like: Pthread internal error: message: __libc_reinit() failed, file: ../pthreads/pthread.c, line: 1096 Return Pointer is 0xc082bf33 sh: 5345 Quit(coredump) and Configure will give up. NFS filesystems and utime(2) on HP-UX If you are compiling Perl on a remotely-mounted NFS filesystem, the test io/fs.t may fail on test #18. This appears to be a bug in HP-UX and no fix is currently available. HP-UX Kernel Parameters (maxdsiz) for Compiling Perl By default, HP-UX comes configured with a maximum data segment size of 64MB. This is too small to correctly compile Perl with the maximum optimization levels. You can increase the size of the maxdsiz kernel parameter through the use of SAM. When using the GUI version of SAM, click on the Kernel Configuration icon, then the Configurable Parameters icon. Scroll down and select the maxdsiz line. From the Actions menu, select the Modify Configurable Parameter item. Insert the new formula into the Formula/Value box. Then follow the instructions to rebuild your kernel and reboot your system. In general, a value of 256MB (or "256*1024*1024") is sufficient for Perl to compile at maximum optimization.
nss_delete core dump from op/pwent or op/grent
You may get a bus error core dump from the op/pwent or op/grent tests. If compiled with -g you will see a stack trace much like the following: #0 0xc004216c in () from /usr/lib/libc.2 #1 0xc00d7550 in __nss_src_state_destr () from /usr/lib/libc.2 #2 0xc00d7768 in __nss_src_state_destr () from /usr/lib/libc.2 #3 0xc00d78a8 in nss_delete () from /usr/lib/libc.2 #4 0xc01126d8 in endpwent () from /usr/lib/libc.2 #5 0xd1950 in Perl_pp_epwent () from ./perl #6 0x94d3c in Perl_runops_standard () from ./perl #7 0x23728 in S_run_body () from ./perl #8 0x23428 in perl_run () from ./perl #9 0x2005c in main () from ./perl The key here is the "nss_delete" call. One workaround for this bug seems to be to create add to the file /etc/nsswitch.conf (at least) the following lines group: files passwd: files Whether you are using NIS does not matter. Amazingly enough, the same bug also affects Solaris.
error: pasting ")" and "l" does not give a valid preprocessing token
There seems to be a broken system header file in HP-UX 11.00 that breaks perl building in 32bit mode with GNU gcc-4.x causing this error. The same file for HP-UX 11.11 (even though the file is older) does not show this failure, and has the correct definition, so the best fix is to patch the header to match: --- /usr/include/inttypes.h 2001-04-20 18:42:14 +0200 +++ /usr/include/inttypes.h 2000-11-14 09:00:00 +0200 @@ -72,7 +72,7 @@ #define UINT32_C(__c) __CONCAT_U__(__c) #else /* __LP64 */ #define INT32_C(__c) __CONCAT__(__c,l) -#define UINT32_C(__c) __CONCAT__(__CONCAT_U__(__c),l) +#define UINT32_C(__c) __CONCAT__(__c,ul) #endif /* __LP64 */ #define INT64_C(__c) __CONCAT_L__(__c,l)
Miscellaneous). This has probably been fixed on your system by now.
AUTHOR
H.Merijn Brand <h.m.brand@xs4all.nl> Jeff Okamoto <okamoto@corp.hp.com> With much assistance regarding shared libraries from Marc Sabatella. | http://manpages.ubuntu.com/manpages/trusty/man1/perlhpux.1.html | CC-MAIN-2019-18 | refinedweb | 4,758 | 73.68 |
Mercurial > dropbear
view libtommath/bn_mp_sub.c @ 475:52a644e7b8e1 pubkey-options
* Patch from Frédéric Moulins adding options to authorized_keys. Needs review.
line source
#include <tommath.h> #ifdef BN_MP_SUB subtraction (handles signs) */ int mp_sub (mp_int * a, mp_int * b, mp_int * c) { int sa, sb, res; sa = a->sign; sb = b->sign; if (sa != sb) { /* subtract a negative from a positive, OR */ /* subtract a positive from a negative. */ /* In either case, ADD their magnitudes, */ /* and use the sign of the first number. */ c->sign = sa; res = s_mp_add (a, b, c); } else { /* subtract a positive from a positive, OR */ /* subtract a negative from a negative. */ /* First, take the difference between their */ /* magnitudes, then... */ if (mp_cmp_mag (a, b) != MP_LT) { /* Copy the sign from the first */ c->sign = sa; /* The first has a larger or equal magnitude */ res = s_mp_sub (a, b, c); } else { /* The result has the *opposite* sign from */ /* the first number. */ c->sign = (sa == MP_ZPOS) ? MP_NEG : MP_ZPOS; /* The second has a larger magnitude */ res = s_mp_sub (b, a, c); } } return res; } #endif /* $Source: /cvs/libtom/libtommath/bn_mp_sub.c,v $ */ /* $Revision: 1.3 $ */ /* $Date: 2006/03/31 14:18:44 $ */ | https://hg.ucc.asn.au/dropbear/file/52a644e7b8e1/libtommath/bn_mp_sub.c | CC-MAIN-2022-21 | refinedweb | 183 | 68.16 |
Code generation, compilation, and naming conventions in Microsoft Fakes
For the latest documentation on Visual Studio 2017 RC, see Visual Studio 2017 RC Documentation.
This topic discusses options and issues in Fakes code generation and compilation, and describes the naming conventions for Fakes generated types, members, and parameters.
Requirements
- Visual Studio Enterprise
Code generation and compilation
- Configuring code generation of stubs • Type filtering • Stubbing concrete classes and virtual methods • Internal types • Optimizing build times • Avoiding assembly name clashing
- Shim type and stub type naming conventions • Shim delegate property or stub delegate field naming conventions • Parameter type naming conventions • Recursive rules
Configuring code generation of stubs following example illustrates stub types defined in FileSystem.dll:
Type filtering
Filters can be set in the .fakes file to restrict which types should be stubbed. You can add an unbounded number of Clear, Add, Remove elements under the StubGeneration element to build the list of selected types.
For example, this .fakes file generates stubs for types under the System and System.IO namespaces, but excludes any type containing "Handle" in System:
The filter strings use a simple grammar to define how the matching should be done:
Filters are case-insensitive by default; filters perform a substring matching:
elmatches "hello"
Adding
!to the end of the filter will make it a precise case-sensitive match:
el!does not match "hello"
hello!matches "hello"
Adding
*to the end of the filter will make it match the prefix of the string:
el*does not match "hello"
he*matches "hello"
Multiple filters in a semicolon-separated list are combined as a disjunction:
el;womatches "hello" and "world"
Stubbing concrete classes and virtual methods
By default, stub types are generated for all non-sealed classes. It is possible to restrict the stub types to abstract classes through the .fakes configuration file:
Internal types
The Fakes code generator will generate shim types and stub types for types that are visible to the generated Fakes assembly. To make internal types of a shimmed assembly visible to Fakes and your test assembly, add InternalsVisibleToAttribute attributes to the shimmed assembly code that gives visibility to the generated Fakes assembly and to the test assembly. Here's an example:
Internal types in strongly named assemblies
If the shimmed assembly is strongly named and you want access internal types of the assembly:
Both your test assembly and the Fakes assembly must be strongly named.
You must add the public keys of the test and Fakes assembly to the InternalsVisibleToAttribute attributes in the shimmed assemblies. Here's how our example attributes in the shimmed assembly code would look when the shimmed assembly is strongly named:
If the shimmed assembly is strongly named, the Fakes framework will automatically strongly sign the generated Fakes assembly. You have to strong sign the test assembly. See Creating and Using Strong-Named Assemblies.
The Fakes framework uses the same key to sign all generated assemblies, so you can use this snippet as a starting point to add the InternalsVisibleTo attribute for the fakes assembly to your shimmed assembly code.
[assembly: InternalsVisibleTo("FileSystem.Fakes, PublicKey=0024000004800000940000000602000000240000525341310004000001000100e92decb949446f688ab9f6973436c535bf50acd1fd580495aae3f875aa4e4f663ca77908c63b7f0996977cb98fcfdb35e05aa2c842002703cad835473caac5ef14107e3a7fae01120a96558785f48319f66daabc862872b2c53f5ac11fa335c0165e202b4c011334c7bc8f4c4e570cf255190f4e3e2cbc9137ca57cb687947bc")]
You can specify a different public key for the Fakes assembly, such as a key you have created for the shimmed assembly, by specifying the full path to the .snk file that contains the alternate key as the
KeyFile attribute value in the
Fakes\
Compilation element of the .fakes file. For example:
You then have to use the public key of the alternate .snk file as the second parameter of the InternalVisibleTo attribute for the Fakes assembly in the shimmed assembly code:
In the example above, the values
Alternate_public_key and the
Test_assembly_public_key can be the same.
Optimizing build times
The compilation of Fakes assemblies can significantly increase your build time. You can minimize the build time by generating the Fakes assemblies for .NET System assemblies and third-party assemblies in a separate centralized project. Because such assemblies rarely change on your machine, you can reuse the generated Fakes assemblies in other projects.
From your unit test projects, you can simply take a reference to the compiled Fakes assemblies that are placed under the FakesAssemblies in the project folder.
Create a new Class Library with the .NET runtime version matching your test projects. Let’s call it Fakes.Prebuild. Remove the class1.cs file from the project, not needed.
Add reference to all the System and third-party assemblies you need Fakes for.
Add a .fakes file for each of the assemblies and build.
From your test project
Make sure that you have a reference to the Fakes runtime DLL:
C:\Program Files\Microsoft Visual Studio 12.0\Common7\IDE\PublicAssemblies\Microsoft.QualityTools.Testing.Fakes.dll
For each assembly that you have created Fakes for, add a reference to the corresponding DLL file in the Fakes.Prebuild\FakesAssemblies folder of your project.
Avoiding assembly name clashing
In a Team Build environment, all build outputs are merged into a single directory. In the case of multiple projects using Fakes, it might happen that Fakes assemblies from different version override each other. For example, TestProject1 fakes mscorlib.dll from the .NET Framework 2.0 and TestProject2 fakes mscorlib.dll for the .NET Framework 4 would both yield to a mscorlib.Fakes.dll Fakes assembly.
To avoid this issue, Fakes should automatically create version qualified Fakes assembly names for non-project references when adding the .fakes files. A version-qualified Fakes assembly name embeds a version number when you create the Fakes assembly name:
Given an assembly MyAssembly and a version 1.2.3.4, the Fakes assembly name is MyAssembly.1.2.3.4.Fakes.
You can change or remove this version by the editing the Version attribute of the Assembly element in the .fakes:
Shim type and stub type naming conventions
Namespaces
.Fakes suffix is added to the namespace.
For example,
System.Fakesnamespace contains the shim types of System namespace.
Global.Fakes contains the shim type of the empty namespace.
Type names
Shim prefix is added to the type name to build the shim type name.
For example, ShimExample is the shim type of the Example type.
Stub prefix is added to the type name to build the stub type name.
For example, StubIExample is the stub type of the IExample type.
Type Arguments and Nested Type Structures
Generic type arguments are copied.
Nested type structure is copied for shim types.
Shim delegate property or stub delegate field naming conventions
Basic rules for field naming, starting from an empty name:
The method name is appended.
If the method name is an explicit interface implementation, the dots are removed.
If the method is generic,
Ofn is appended where n is the number of generic method arguments.
Special method names such as property getter or setters are treated as described in the following table.
Notes
Getters and setters of indexers are treated similarly to the property. The default name for an indexer is
Item.
Parameter type names are transformed and concatenated.
Return type is ignored unless there’s an overload ambiguity. If this is the case, the return type is appended at the end of the name
Parameter type naming conventions
Recursive rules
The following rules are applied recursively:
Because Fakes uses C# to generate the Fakes assemblies, any character that would produce an invalid C# token is escaped to "_" (underscore).
If a resulting name clashes with any member of the declaring type, a numbering scheme is used by appending a two-digit counter, starting at 01.
Guidance
Testing for Continuous Delivery with Visual Studio 2012 – Chapter 2: Unit Testing: Testing the Inside
Isolating Code Under Test with Microsoft Fakes | https://msdn.microsoft.com/en-us/library/hh708916 | CC-MAIN-2017-13 | refinedweb | 1,271 | 55.03 |
This article is about giving you a simple overview of what you can do with reactjs-popup and how to use it effectively.
Today, we are excited to announce reactjs-popup 1.0.
Reactjs-popup is a simple and very small (3 kb) react popup component, with multiple use cases.
we created reactjs-popup to create a color picker for our project picsrush a new online image editor. After a while, We decided to make it available for everyone in GitHub and npm repository.
Why do you need to choose reactjs-popup over all other implementation?
-, Toast(coming soon) : All in one component🏋️
- Full style customization. 👌
- Easy to use. 🚀
- All these clocks in at around 3 kB zipped. ⚡️
- Animation (coming soon).
How can reactjs-popup help you in your next react project?
If you need to create a simple modal, tooltip or a nested menu this component is your best choice to start with. but first let get started with the component.
Getting Started
This package is available in npm repository as reactjs-popup. It will work correctly with all popular bundlers.
reactjs-popup - React Popup Component - Modals,Tooltips and Menus - All in onegithub.com
npm install reactjs-popup --save
#using yarn
yarn add reactjs-popup -S
Now you can import the component and start using it :
import React from "react";
import Popup from "reactjs-popup";
export default () => (
<Popup trigger={<button> Trigger</button>}
<div>Popup content here !!</div>
</Popup>
);
You can also use it with function as children pattern.
import React from "react";
import Popup from "reactjs-popup";
export default () => (
<Popup trigger={<button>Trigger</button>}
<div>
Content here
<a className="close" onClick={close}>
×
</a>
</div>
)}
</Popup>
);
Complete component API : Reactjs-popup Component API
Use Cases 🙌
ALL in one demo
What’s Next For reactjs-popup ?
The next version of reactjs-popup will support creating Simple Toast with full customization, But our big deal is to add Animation API to the component so feel free if you have any ideas 💪.
Thanks for reading! If you think other people should read this post and use this component, clap for me, tweet and share the post.
Remember to follow me on Medium so you can get notified about my future posts.
It your turn now to try it !!!
Show your support!
That’s all, thank you for your attention, please star the repo to show your support…
Read more stories
| https://hackernoon.com/introducing-reactjs-popup-modals-tooltips-and-menus-all-in-one-227de37766fa | CC-MAIN-2020-10 | refinedweb | 397 | 57.06 |
Calculating Mean
Mean is a simple arithmetic average -- add the values up and divide by the number of values. The formal statistical formula, which will be important later when we discuss variance, is as follows:
The formula looks more complicated than the actual operation in Example 3.(d); } void calculateMean(DataPoint d) { setSum(getSum() + d.getNumber()); setNipMean(getSum() / getNipCount()); setNilMean(getSum() / getFullCount() ); }
I am calculating two flavors of the mean. The nipMean is the mean with null thrown away and the nilMean is the mean with null converted to zero. Let's look at nipMean. The advantage of nipMean is that stores a more accurate mean based on the data as we know it. For example, nipMean would store the mean of (0, 1, 2, null, null) as one ( (0 + 1 + 2) / 3). The nipMean value, however, gives a false total if someone extrapolates the aggregate sum of a population. Going back to my example of (0, 1, 2, null, null), we have five data elements and a mean of one. If we extrapolate the aggregate total with the formula, N * ì, it would give us an population sum of five (5 data points X 1 mean value ). That is not the case. The true sum value is actually 3 (0 + 1 + 2).
I am storing the mean with nulls converted to zero in nilMean. It calcuates the mean as 0.60 with the same population (0, 1, 2, null, null) becomes ((0+1+2+0+0) / 5). The advantage of nilMean is that the population sum value is correct when we multiply the mean by the number of data points ( 0.60 X 5 = 3).
Random Tests for Determined Validity
Only testing convenience data my be the most common testing flaw other than the my-codehas-no-bugs delusion. Probably every programmer has blown a foot off at least once by only testing the first convenient section of data records. I am a little ashamed to admit it, but I have blown a foot off myself. Don't be embarrassed yourself. Use Example 4 to retrieve a random sample of records to test on your next project.
import java.util.Random; public class Sample { private int numberSamples; private int maxValue; private Integer[] samples; private Random generator; private static int currentSample = 0; Sample (int maximum, int numberSamples ) { setGenerator(new Random()); setNumberSamples(numberSamples); setMaxValue(maximum); setSamples(new Integer[numberSamples]); init(); } public int next() { if ( getCurrentSample() == getMaxValue() ) { setCurrentSample(0); } setCurrentSample(getCurrentSample() + 1); return getSamples()[getCurrentSample() - 1]; } void init() { for ( int count = 0; count < getNumberSamples(); count++ ) { getSamples()[count] = getGenerator().nextInt(getMaxValue()); } } }
The example above is pretty simple. It is designed for testing rows of data. The parameter, maximum, is the highest row number of our population. The number of samples you wish to test is passed in numberSamples. The init() method populates the samples array with a list of random row numbers. The next method retrieves the next random row number so you can retrieve a row number, use it to retrieve a row a data, and then verify the data is correct.
A couple of things to remember about the standard Random class. First, you should instantiate Random() only once during any run. It is possible to generate the same list of random numbers if you repeatedly instantiate Random in quick succession. Second, if you seed Random with the same seed number then it will always yield the same list of random numbers. Thus, computer randomness is really an illusion. The random numbers are predetermined given any known seed. That is not really a problem. Some of us that believe in the noetical nature of predeterminism think the physical universe works the same way -- chaos dissolves as knowledge becomes deeper and broader. For practical purposes, however, it is only important to remember to either let the library self-seed by not supplying a seed number or seed Random with a different number every time. | http://www.drdobbs.com/jvm/statistics-in-java/225702182?pgno=3 | CC-MAIN-2014-10 | refinedweb | 652 | 55.34 |
Version 1.55.0
Version 1.55.0
November 11th, 2013 19:50 GMT
Other Downloads
-
- PDF documentation (only for BoostBook based documentation).
News
Support was removed from Config for some very old versions of compilers. The new minimum requirements are:
- Digitial Mars 8.41
- GCC 3.3
- Intel 6.0
- Visual C++ 7.1
Note: This is just the mininimum requirements for Config. Some Boost libraries may have higher mininimum requirements and not support all platforms or compilers.
Other compilers are currently unchanged, but we are considering removing support for some other old compilers. Candidates for removal are:
- Metroworks C++ (i.e. codewarrior)
- SunPro 5.7 and earlier
- Borland C++ Builder 2006 (5.82) and earlier
If you're using any of these, please let us know on the mailing lists. We will take into account any feedback received before making a decision.
Known Bugs with Visual Studio 2013/Visual C++ 12
Visual Studio 2013 was released quite late in the release process, so there exist several unresolved issues. These include:
- Serialization can't compile because of a missing include.
-
- In libraries such as Unordered and MultiIndex, calling overloaded functions with initializer lists can result in a compile error, with Visual C++ claiming that the overloads are ambiguous. This is a Visual C++ bug and it isn't clear if there's a good workaround. This won't affect code that doesn't use initializer lists, or uses an initializer list that doesn't require an implicit conversion (i.e. an initializer list of the container's exact value type).
-
Patches
New Libraries
Updated Libraries
- Accumulators:
-
- Asio:
- Implemented a limited port to Windows Runtime. This support requires that the language extensions be enabled. Due to the restricted facilities exposed by the Windows Runtime API, the port also comes with the following caveats:
- The core facilities such as the
io_service,
strand, buffers, composed operations, timers, etc., should all work as normal.
- For sockets, only client-side TCP is supported.
- Explicit binding of a client-side TCP socket is not supported.
- The
cancel()function is not supported for sockets. Asynchronous operations may only be cancelled by closing the socket.
- Operations that use
null_buffersare not supported.
- Only
tcp::no_delayand
socket_base::keep_aliveoptions are supported.
- Resolvers do not support service names, only numbers. I.e. you must use "80" rather than "http".
- Most resolver query flags have no effect.
-
- Fixed a Windows-specific regression (introduced in Boost 1.54) that occurs when multiple threads are running an
io_service. When the bug occurs, the result of an asynchronous operation (error and bytes transferred) is incorrectly discarded and zero values used instead. For TCP sockets this results in spurious end-of-file notifications (#8933).
-
-
-
-
-
-
-
- Visual C++ language extensions use
genericas a keyword. Added a workaround that renames the namespace to
cpp_genericwhen those language extensions are in effect.
-
-
- Added
use_futuresupport for Microsoft Visual Studio 2012.
-
- Eliminated some unnecessary handler copies.
-
- Atomic:
- Added support for 64-bit atomic operations on x86 target for GCC, MSVC and compatible compilers. The support is enabled when it is known at compile time that the target CPU supports required instructions.
- Added support for 128-bit atomic operations on x86-64 target for GCC and compatible compilers. The support is enabled when it is known at compile time that the target CPU supports required instructions. The support can be tested for with the new
BOOST_ATOMIC_INT128_LOCK_FREEmacro.
- Added a more efficient implementation of
atomic<>based on GCC
__atomic*intrinsics available since GCC 4.7.
- Added support for more ARM v7 CPUs, improved detection of Thumb 2.
- Added support for x32 (i.e. 64-bit x86 with 32-bit pointers) target on GCC and compatible compilers.
- Removed dependency on Boost.Thread.
- Internal lock pool now includes proper padding and alignment to avoid false sharing.
- Fixed compilation with Intel compiler on Windows. Removed internal macro duplication when compiled on Windows.
- Some code refactoring to use C++11 features when available.
-
- Circular Buffer:
-
- Much better documentation.
-
-
-
-
-
-
-
- Filesystem:
-
- Geometry:
- Additional functionality
- Added centroid for segment type
- Added intersects() and disjoints() for Segment-Box and Linestring-Box
- Added rtree creation using packing algorithm
- Added contains() and covers() spatial query predicates
- Added iterative queries
- Bugfixes
- In some cases .back() or .clear() was called, violating the usage of Concepts. Fixed for the reported cases
- Solved tickets
- Graph:
voidis no longer allowed as a bundled property type (for example, in the VertexProperties template parameters to graph types); it did not work reliably before, but a static assertion now forbids it entirely. Use
boost::no_propertyinstead.
- Added support for
finish_edgevisitor event point in depth-first search; the change should be backward-compatible with visitors that do not have that member function.
- Disabled building of tests on Sun compiler.
- Multiple source vertices are supported in non-named-parameter versions of
breadth_first_visit,
breadth_first_search,
dijkstra_shortest_paths, and
dijkstra_shortest_paths_no_init. This feature is not yet documented; to use it, replace the single parameter for the source vertex in each of these functions by two input iterators of the same type containing the source vertices to use.
- Added Hawick circuits algorithm; contributed by Louis Dionne.
- Added edge coloring algorithm; contributed by Maciej Piechotka.
- Added min-cost max-flow algorithm; contributed by Piotr Wygocki.
-
-
-
- Intrusive:
- Source breaking: Deprecated
xxx_dont_splayfunctions from splay containers. Deprecated
splay_set_hookfrom splay containers, use
bs_set_hookinstead. Both will be removed in Boost 1.56.
-
- Big refactoring in order to reduce template and debug symbol bloat. Test object files have been slashed to half in MSVC compilers in Debug mode. Toolchains without Identical COMDAT Folding (ICF) should notice size improvements.
- Implemented SCARY iterators.
- Lexical cast:
-
-
- Documentation improved and more usage examples added.
- Log:
- General changes:
- Added a new configuration macro
BOOST_LOG_WITHOUT_DEFAULT_FACTORIES. By defining this macro the user can disable compilation of the default filter and formatter factories used by settings parsers. This can substantially reduce binary sizes while still retaining support for settings parsers.
- Rewritten some of the parsers to reduce the compiled binary size. The rewritten parsers are more robust in detecting ambiguous and incorrect input.
- The following headers are deprecated:
boost/log/utility/intrusive_ref_counter.hpp,
boost/log/utility/explicit_operator_bool.hpp,
boost/log/utility/empty_deleter.hpp. These headers will be removed in future releases. The contents of these headers were moved to other libraries.
- Bug fixes:
- Fixed
timerattribute generating incorrect time readings on Windows on heavy thread contention when
QueryPerformanceCounterAPI was used.
- Fixed a bug in the filter parser that prevented using parsed filters with some attributes.
- Fixed thread id formatting discrepancies between the default sink and formatters.
-
-
- Math:
-
-
-
-
-
-
-
-
-
- Fix bug in inverse incomplete beta that results in cancellation errors when the beta function is really an arcsine or Student's T distribution.
- Fix issue in Bessel I and K function continued fractions that causes spurious over/underflow.
- Add improvement to non-central chi squared distribution quantile due to Thomas Luu.
-
- Meta State Machine:
- New feature: interrupt states now support a sequence of events to end the interruption.
-
- Multiprecision:
- Added support for Boost.Serialization.
-
-
-
-
-
-
-
-
- Multi-index Containers:
- Boost.MultiIndex has been brought to a higher level of compliance with C++11.
Refer to the compiler specifics section for limitations on pre-C++11 compilers.
multi_index_containeris now efficiently movable.
- Initializer lists supported.
- Emplace functions provided.
- Non-copyable elements (such as
std::unique_ptr<T>) supported. This includes insertion of a range [
first,
last) where the iterators point to a type that is convertible to that of the element: no copy construction happens in the process.
- Random access indices provide
shrink_to_fit().
- The following classes are deprecated:
- Maintenance fixes.
-
-
-
- PropertyMap:
- dynamic_properties objects can now be built by non-destructively chaining
.property(name, pm)calls. Example:
boost::dynamic_properties() .property("color", color_map) .property("pos", position_map)
- The use of raw pointers as property maps is deprecated; it often failed on Visual Studio in the past. This usage has been removed from all tests and examples in Boost.Graph. The replacement to use for vertex properties in graphs (the most common use for this feature) is:
boost::make_iterator_property_map( <pointer or container .begin() iterator>, get(boost::vertex_index, <graph object>))(Note: the lack of namespace qualification on get() in this code is necessary for generic code). Outside a graph context, the closest equivalent is:
boost::make_iterator_property_map( <pointer>, boost::typed_identity_property_map<std::size_t>())There are commented-out static assertions on lines 151 and 159 of
<boost/property_map/property_map.hpp>that can be un-commented to find deprecated uses of pointers in user code.
- Rational:
- Added
lowestand
max_digits10, members of std::numeric_limits added in C++11, to the unit-test code. Needed since Boost.Test refers to one of them when compiled in C++11 mode.
-
-
- Thread:
- New Features:
- Fixed Bugs:
-
- Type Traits:
-
- Utility:
boost::result_ofcan be set to use TR1 protocol by default and fall back to
decltypeif the function object does not support it (like C++11 lambda functions, for example). Define
BOOST_RESULT_OF_USE_TR1_WITH_DECLTYPE_FALLBACKconfiguration macro to enable this mode.
- Improved support for C++11 in the
boost::base_from_memberclass template. The class implements perfect forwarding for the constructor arguments, if the compiler supports rvalue references, variadic templates and function template default arguments.
- Added
boost/utility/explicit_operator_bool.hppand
boost/utility/empty_deleter.hppheaders, which were extracted from Boost.Log. The headers implement utilities for defining explicit conversion operators to
booland a deleter function object that does nothing, respectively.
- Variant:
-
-
Updated Tools
- Quickbook:
- Quickbook 1.6 finalized, see the Quickbook documentation for details.
Compilers Tested
Boost's primary test compilers are:
- Linux:
- Clang: 3.3, 3.2, 3.1, 3.0
- Clang, C++11, libc++: 3.4, 3.3
- GCC: 4.8.1, 4.7.3, 4.6.3, 4.5.3, 4.4.7
- GCC, C++11: 4.8.1
- GCC, C++98: 4.8.1
- OS X:
- GCC: 4.2
- Apple Clang: 5.0
- Apple Clang, C++11: 5.0
- Windows:
- GCC, mingw: 4.8.0, 4.7.2, 4.6.3, 4.5.4, 4.4.7
- Visual C++: 11.0, 10.0, 9.0
Boost's additional test compilers include:
- OS X:
- Apple Clang: 5.0
- Apple Clang, C++11: 5.0
- Clang: trunk
- Clang, C++11: trunk
- GCC: 4.2.1
- Linux:
- Clang: 3.3, 3.2, 3.1, 3.0, trunk
- Clang, C++11: 3.4
- Clang, C++11, libc++: 3.4, 3.3
- GCC: 4.9.0 (experimental), 4.8.1, 4.7.3, 4.6.4, 4.5.3, 4.4.7
- GCC, C++11: 4.8.1
- GCC, C++98: 4.8.1
- Intel: 13.0.1, 12.1.6
- Windows:
- GCC, mingw: 4.8.0, 4.7.2, 4.6.3, 4.5.4, 4.4.7
- Visual C++: 11.0, 10.0, 9.0
Acknowledgements
Beman Dawes, Eric Niebler, Rene Rivera, Daniel James, Vladimir Prus and Marshall Clow managed this release. | https://www.boost.org/users/history/version_1_55_0.html | CC-MAIN-2021-10 | refinedweb | 1,775 | 51.85 |
Is there a
find()
You use
std::find from
<algorithm>, which works equally well for
std::list and
std::vector.
std::vector does not have its own search/find function.
#include <list> #include <algorithm> int main() { std::list<int> ilist; ilist.push_back(1); ilist.push_back(2); ilist.push_back(3); std::list<int>::iterator findIter = std::find(ilist.begin(), ilist.end(), 1); }
Note that this works for built-in types like
int as well as standard library types like
std::string by default because they have
operator== provided for them. If you are using using
std::find on a container of a user-defined type, you should overload
operator== to allow
std::find to work properly:
EqualityComparable concept | https://codedump.io/share/IsICPQ0Ak4cp/1/how-to-search-for-an-element-in-an-stl-list | CC-MAIN-2017-09 | refinedweb | 117 | 55.95 |
# cat /var/tmp/portage/app-arch/lha-114i-r7/temp/aclocal.out
***** aclocal *****
***** PWD: /var/tmp/portage/app-arch/lha-114i-r7/work/lha-1.14i-ac20050924p1
***** aclocal
/usr/share/aclocal/zthread.m4:34: warning: underquoted definition of AM_PATH_ZTHREAD
/usr/share/aclocal/zthread.m4:34: run info Automake 'Extending aclocal'
/usr/share/aclocal/zthread.m4:34: or see
configure.ac:6: error: 'AM_CONFIG_HEADER': this macro is obsolete.
You should use the 'AC_CONFIG_HEADERS' macro instead.
/usr/share/aclocal-1.13/obsolete-err.m4:12: AM_CONFIG_HEADER is expanded from...
configure.ac:6: the top level
autom4te-2.69: /usr/bin/m4 failed with exit status: 1
aclocal-1.13: error: echo failed with exit status: 1
Please note that although this package is stable and automake-1.13 is ~arch,
this is still a priority fix because ~arch users are immediately affected.
I run stable, so I'm having trouble replicating this problem. It builds fine for me with stable automake (1.12.6) installed. I tried upgrading to 1.13 (which installed in a new slot), but it still builds fine for me.
For a quick fix, could you not just add WANT_AUTOMAKE="1.12" to the ebuild? This is done for other stable packages in portage (such as parted, which depends on automake 1.11). So, it seems like that would be an acceptable way solve the problem of building lha.
Alternatively, if you can tell me what I need to do to replicate the problem on my stable amd64 system, I'll be happy to tinker with it to see if I can get it to build with automake 1.13. I apologize if that seems like a dumb question for a proxy maintainer to ask, but I'm not particularly familiar with automake beyond the basics, so I'm not sure how portage determines which version of automake to use when building a package.
Unless things have changed in the last 3 days, in autotools.eclass (which is possible), just having automake-1.13 emerged should have made this ebuild use it. I too run stable and that's all I did to produce this error .... (edit: yes, apparently changes did occur on April 28th which makes automake-1.12 be used whenever configure.ac doesn't support the new 1.13 syntax, according to cvs log on automake.eclass)
At any rate, for testing, you can force it by specifying WANT_AUTOMAKE="1.13" on the command line you use to emerge or ebuild.
I expect that at some point things should still be patched to work with automake-1.13
hmm... so i just tested and this still fails for me:
cd /usr/portage
ebuild sys-devel/automake-wrapper/automake-wrapper-8.ebuild merge
ebuild sys-devel/automake/automake-1.13.1.ebuild merge
ebuild app-arch/lha/lha-114i-r7.ebuild prepare
Unsure why you can't reproduce?
-----
Regarding the "quick fix" , yes, WANT_AUTOMAKE="1.12" is something you could put in the ebuild. However i think that is frowned upon; also the patches in this case are quite trivial so i'm not sure if locking to automake-1.12 is really easier than patching configure.ac
You should talk to flameeyes or vapier about WANT_AUTOMAKE usage; I think it's meant more for when upstreams do not intend to move beyond a particular version, than for when things happen to fail with the latest version in the tree.
(In reply to comment #3)
> You should talk to flameeyes or vapier about WANT_AUTOMAKE usage; I think
> it's meant more for when upstreams do not intend to move beyond a particular
> version, than for when things happen to fail with the latest version in the
> tree.
well, this hasn't been updated for over 6 years, so we can probably safely assume they don't really plan on moving forward. :-) But yeah, I agree that it's not a "proper" solution if other options are available.
Thanks for the extra info. I'll try to spend some more time with it tonight and see if I can get something working (well, broken, to be specific, and then working again).
Created attachment 346938 [details]
lha-114i-r7 patch to support automake-1.13.1
huh... so I tried upgrading automake again tonight and then lha failed immediately. Don't know what the deal is.
Anyway, attached is the tiny patch to the in-tree version of lha to correct this. I removed automake-1.13.1 and tested the patched version of lha against 1.12.6, and it seems to work fine there as well.
Please let me know if there's anything else I can help with.
Hi @ll
I tried to install the lha package today and ran into a problem that I got fixed with the attached patch by Jared B.
But my error wasn't the message with AUTOMAKE_CONFIG_HEADER. I got this one
x86_64-pc-linux-gnu-gcc -DHAVE_CONFIG_H -I.@am__isrc@ -I.. -DEUC -DSUPPORT_LH7 -DPROTOTYPES -O2 -pipe -c getopt_long.c
getopt_long.c:67:25: fatal error: getopt_long.h: No such file or directory
compilation terminated.
make[2]: *** [getopt_long.o] Error 1
make[2]: Leaving directory `/var/tmp/portage/app-arch/lha-114i-r7/work/lha-1.14i-ac20050924p1/src'
I used the patch and it seems to work, too.
Hey, Steffen. Can you please clarify: did applying the patch attached to this bug fix the problem for you? Sounds like it did (was just a different error you initially encountered), but was to confirm.
I'm not sure what I need to do to get this pushed into the tree, but if it fixes a couple different problems then I really don't see why we shouldn't push it. If you can confirm it works for you, I'll check with a couple devs on this to see if we can close out this bug.
Hi Jared,
Yes I applied your patch to the ebuild and after this the package was build successfully.
BUT:
I don't think that this was the result of the patch because I rebuild the package today without the patch and it build the package, too. I think its better to forget my comments on this bug, because I don't know where my error really comes from. I think that my best friends was playing on the machine (its a development and testing machine) with his other compiling crap and so this results in this and 2 other errors. This errors are gone now.
So I think its better to ignore my report here and I will ask him what he has done there.
Sorry for the stress that I have made for you.
regards
j0inty
I actually stumbled upon Steffen's compile error. Strange thing is that I did two emerge -eDN --complete-graph --with-bdeps=y --keep-going=y world, and during the first one it compiled fine, but during the second it failed. And now I can't compile it. And the pastebin-patch is gone.
Tamas, do you get that same error even with the attached patch applied to the ebuild?
Created attachment 365640 [details, diff]
lha-114i-getopt_long.patch
It appears that getopt_long.c is improperly including its header file, including it with:
#include <getopt_long.h>
rather than:
#include "getopt_long.h"
Under most circumstances it seems this works anyway (I guess make/gcc adds the current directory to the include path?), but I'd guess this is what's causing the issues Tamas and Steffan reported. Not sure what would trigger it, but the second option should, I'd think, work in all cases, whereas the first seems to work by chance.
I'm attaching a simple patch to fix this. Please first try the original patch just to see if that has any effect here (so I know for sure one way or another). If it still fails, please try this patch as well and let me know how it works.
Thanks.
Tamas, your first build was successful most probably because originally you had had automake-1.12 on your system, and during your first emerge, lha was built first (with automake-1.12), then automake-1.13 was installed. At the next re-emerge, you already had automake-1.13 so building lha failed.
Guys, can we have a fix for this build failure? I mean committed AND flagged as stable. Because automake-1.13 has been flagged as stable since last September, and lha stable now does not build at all - now it should be trivial to reproduce this problem for everybody. (BTW, why isn't there a machine somewhere building the latest stable packages regularly and feeding errors into the mailboxes of the affected package maintainers? Every time I rebuild my system, there's always at least one package which is broken, even though flagged as stable.)
This is the root of the problem, IMHO:
my 2013 August build with automake-1.12:
i686-pc-linux-gnu-gcc -O2 -march=pentium3 -mtune=core2 -fomit-frame-pointer -Wall -W -DHAVE_CONFIG_H -I. -I. -c -o getopt_long.o getopt_long.c
current build automake-1.13:
i686-pc-linux-gnu-gcc -DHAVE_CONFIG_H -I.@am__isrc@ -I.. -DEUC -DSUPPORT_LH7 -DPROTOTYPES -O2 -march=pentium3 -mtune=core2 -fomit-frame-pointer -c getopt_long.c
While I'm not an automake expert, I'm 100% sure that that @am__isrc@ is plain wrong there.
Personally, I'd see WANT_AUTOMAKE="1.12" as a long-term solution for packages like this. Automake is deliberately not interested in keeping itself without regressions for old code, lha is not updated by the upstream to work with new automake releases, so either somebody keeps an eye on these packages and does the job not done by the upstream, or every now and then we'll get broken stable packages.
Sorry, I copied the wrong line from my old emerge output. This was the good gcc line last August:
i686-pc-linux-gnu-gcc -DHAVE_CONFIG_H -I. -I.. -DEUC -DSUPPORT_LH7 -DPROTOTYPES -O2 -march=pentium3 -mtune=core2 -fomit-frame-pointer -c getopt_long.c
i am just building a new box from scratch and stumbled into the compile issue as well:
x86_64-pc-linux-gnu-gcc -DHAVE_CONFIG_H -I.@am__isrc@ -I.. -DEUC -DSUPPORT_LH7 -DPROTOTYPES -O2 -pipe -mtune=native -march=native -c getopt_long.c
getopt_long.c:67:25: fatal error: getopt_long.h: No such file or directory
-#include <getopt_long.h>
+#include <getopt_long.h>
makes it work - however, to properly fix it someone might also want to find out why USE_GNU isnt defined, which would make this a non issue. (the include is still wrong though)
@ian, can you look into this PR?
it's meant to fix this bug. thank you!
(In reply to Patrice Clement from comment #15)
> @ian, can you look into this PR?
> it's meant to fix this bug. thank you!
It's in tree. This bug can be closed.
Indeed. I merged it. Thank you for the heads up. | https://bugs.gentoo.org/show_bug.cgi?id=467544 | CC-MAIN-2020-24 | refinedweb | 1,828 | 75.81 |
Swift Network Layer, Part 1: Designing The API
Welcome 👋 to part one of the Swift Network Layer Series where we will be laying the foundation for our network layer.
Pre-requisites
- Xcode 11
- Basic understanding of Swift & Networking
API Design
Things I like to keep in mind when designing an API are syntax and style. Syntax defines whether the API is declarative (which defines the computational logic i.e the "what") or imperative (defines state changes i.e the "how") whereas style defines whether the design paradigm is concrete or abstracted with generics and protocols.
The goal of this network layer is to design an API that is easy to use whilst being powerful and easily customizable. To achieve this we need to set some requirements for the framework:
- Generic
- Typesafe
- Declarative
- Single source of truth
- Abstracts implementation details
With the requirements set out what I usually do is dive straight in Xcode and try to imagine what using this framework would look like.
To start we’ll create a new Xcode project and add the following line to
ViewController.swift.
let publisher = client.request([User].self, from: .users)
With the line above we have set a clear goal for what we aim to achieve. One thing I love the most about Swift is the language "readability". When designing an API I aim to have it read like a normal English sentence.
Reading the above line of code: "Client request list of User from users”.
Not exactly everyday language but it’s close enough to give an idea of what that line of code does without any prior knowledge of the implementation.
Let’s break down our initial line of code even further and look at each component
- publisher: hints to the use some publisher-subscriber pattern
- client: is able to perform requests
- User: is some model representing a user
- .users: is some enum encapsulating a resource from which Users can be requested
The client is requesting a list of type User from users and the response will be published to the publisher.
Right now Xcode should be throwing a couple of errors. Xcode is unable to resolve what the client and User are. The next task is to get rid of the errors we currently have and make Xcode happy. This approach to development lends itself well to TDD (Test Driven Development). Whilst we won’t be writing unit tests in this section we will be following a similar pattern of RED, GREEN, REFACTOR found in TDD. I like to call this approach something more sinister like “Error Driven Development” 😅.
Code
Folder Structure
User
For part one, we will be making use of the JSON Placeholder API. If you go to jsonplaceholder.typicode.com/users in your browser you will see a JSON response containing an array of users. Our User model will be a representation of this response. Add the code below to a file of your liking
Network.swift or
ViewController.swift.
- One error down, on to the next one 👉
struct User: Decodable { let id: Int let name: String let username: String let email: String }
Client
The next error we have to address is the
unresolved identifier client. In
Client.swift add the following code
final class Client { }
Now where you declared your publisher above go ahead and create a new instance of Client.
let client = Client()
At this point, you should have the following error in Xcode:
In understandable English, the first error thrown here is telling us that Xcode is unable to infer what we mean by
.users.
The second error is easier to understand, there is no method named request on the client.
Let’s address the easier of the two errors by declaring our request method on the client.
ClientType
In
Client.swift add the following protocol and implementation:
import Combine import Foundation protocol HTTPClient { associatedtype Route: RouteProvider func request<T: Decodable>(_ model: T.Type, from route: Route, urlSession: URLSession) -> AnyPublisher<HTTPResponse<T>, HTTPError> } final class Client<Route: RouteProvider>: HTTPClient { func request<T>(_ model: T.Type, from route: Route, urlSession: URLSession = .shared) -> AnyPublisher<HTTPResponse<T>, HTTPError> { fatalError("Not implemented yet") } }
We declare a HTTPClient protocol which will serve as the blueprint for our client. First, we provide the protocol with an associatedtype of RouteProvider.
… An associated type gives a placeholder name to a type that is used as part of the protocol. The actual type to use for that associated type isn’t specified until the protocol is adopted. Associated types are specified with the associatedtype keyword.
Declare the request function which takes in a model parameter that will represent the response object we want to decode from the request. The route parameter will represent the HTTP route to fetch the expected response object. The last argument is the URLSession which will allow us to pass in a URLSession to use with the request. The function returns a publisher with a generic HTTPResponse on our model object and an HTTPError.
Create the Client class which will implement the HTTPClient protocol. For now, we will leave the implementation empty. I’ve added a fatalError message to ensure that we get back to it. In the code above we made RouteProvider, HTTPResponse, and HTTPError which are types we will declare next.
RouteProvider
This is what I would call the star of the show. RouteProvider defines a protocol for which we can build requests. By using a protocol we can give flexibility to how requests are formed. We can decide to implement the RouteProvider with a class, struct, or enum. I’ve decided to go with the enum approach. This provides users with a single source of truth. We know every time that a request is made to a specific endpoint it will be built using the same parameters.
protocol RouteProvider { var baseURL: URL { get } var path: String { get } var method: HTTPMethod { get } var headers: [String: String] { get } }
PlaceHolderRoute
Declare a Route which will implement the RouteProvider protocol. I’ve made use of the JSONPlaceholder API and added a single case to get users to demonstrate.
enum PlaceHolderRoute { case users } extension PlaceHolderRoute: RouteProvider { var baseURL: URL { guard let url = URL(string: "") else { fatalError("Base URL could not be configured.") } return url } var path: String { switch self { case .users: return "/users" } } var method: HTTPMethod { switch self { case .users: return .get(nil) } } var headers: [String : String] { return [:] } }
HTTPMethod
The HTTPMethod enum will be used to state which HTTP method to use for the request. We will only be supporting the most common HTTP method types in our framework. The HTTP spec does not limit the number of HTTP method types, in fact, there are other spec implementations that use methods like “COPY” and “LOCK” but you don’t come across such often.
enum HTTPMethod { case get([String: String]? = nil) case put(HTTPContentType) case post(HTTPContentType) case patch(HTTPContentType) case delete public var rawValue: String { switch self { case .get: return "GET" case .put: return "PUT" case .post: return "POST" case .patch: return "PATCH" case .delete: return "DELETE" } } }
We once again make use of associated values in our enums to enable us to pass in values to the HTTPMethod.
The GET case will enable us to pass in URL query parameters with our request. We can construct use
.get(["page": 2]) to construct a URL like:
For PUT, POST and, PATCH we make use of HTTPContentType. This type will enable us to post encoded JSON or even an encoded dictionary in the body of the request.
HTTPContentType
enum HTTPContentType { case json(Encodable?) case urlEncoded(EncodableDictionary) var headerValue: String { switch self { case .json: return "application/json" case .urlEncoded: return "application/x-www-form-urlencoded" } } }
By setting the headerValue here we can tightly couple the expected header to the type being sent. This will ensure that we always send the right headers.
EncodableDictionary
This protocol URL encodes Dictionary keys and values and returns them as Data.
Input:
[“Name”: “Malcolm K”, “Emoji”: “🍩”]
Output:
Name=Malcolm%20K&Emoji=%F0%9F%8D%A9 (string representation of data)
protocol EncodableDictionary { var asURLEncodedString: String? { get } var asURLEncodedData: Data? { get } } extension Dictionary: EncodableDictionary { var asURLEncodedString: String? { var pairs: [String] = [] for (key, value) in self { pairs.append("\(key)=\(value)") } return pairs .joined(separator: "&") .addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed) } var asURLEncodedData: Data? { asURLEncodedString?.data(using: .utf8) } }
HTTPError
When communicating with a server via internet errors are expected. Errors can happen due to poor internet connectivity or the server is down. We need a way to handle these errors gracefully. Create and HTTPError struct with a nested Code enum with all the errors you wish to handle.
HTTPURLResponse provides us with an error message given a status code. We will leverage this functionality to display an appropriate message.
print(HTTPURLResponse.localizedString(forStatusCode: 400)) // prints “bad request”
We can decide to either make use of the system provided message or use the Code enum to provide a custom error message as seen below.
struct HTTPError: Error { private enum Code: Int { case unknown = -1 case networkUnreachable = 0 case unableToParseResponse = 1 case badRequest = 400 case internalServerError = 500 } let route: RouteProvider? let response: HTTPURLResponse? let error: Error? var message: String { switch Code(rawValue: response?.statusCode ?? -1) { case .unknown: return "Something went wrong" case .networkUnreachable: return "Please check your internet connectivity" default: return systemMessage } } private var systemMessage: String { HTTPURLResponse.localizedString(forStatusCode: response?.statusCode ?? 0) } }
Here we store a few properties: code, route, response and, error. This gives us flexibility as to how to handle the error. We can decide to show a toast with a message in the response. If needed we can retry the request as we have the route. We can also log an error to a logger or analytics service.
HTTPResponse
Finally, we have HTTPResponse which is a generic type constrained to Decodable. We constrain to Decodable as we expect JSON back from the server which we will decode to a model object.
struct HTTPResponse<T: Decodable> { let route: RouteProvider let response: HTTPURLResponse? let data: Data? var decoded: T? { guard let data = data else { return nil } return try? JSONDecoder().decode(T.self, from: data) } }
Conclusion
That brings us to the end of part one! Our goal was to design a generic, typesafe and, declarative API for our networking library. We started with this line of code giving us some errors.
let publisher = client.request([User].self, from: .users)
No more errors. 👍 Well done on getting this far! 🥳 But, wait when we build and run nothing is happening yet? 🤔
See you in part two where we will start building our URLRequests using Combine and handle server responses!
Thanks to Mpendulo Ndlovu for his editorial work on this post.
Resources
- Swift Enum Documentation
- HTTP in Swift by Dave DeLong
- URLs in Swift by Antoine van der Lee
- Why Swift enums with associated values can’t have a rawValue by Mischa Hildebrand | https://blog.malcolmk.com/swift-network-layer-part-1-designing-the-api-ckewm4w9p002dggs1hqfm03lm?guid=none&deviceId=1671c6dc-b9b7-4071-9dcc-30ee0df00fdc | CC-MAIN-2021-43 | refinedweb | 1,796 | 65.73 |
Builds with 4.7.2 fail if 4.7.1 has been uninstalled
Using VS2008 Pro on Vista.
4.7.1 has an include folder with all the header files for QtGui, QtCore, QtWebkit, etc. needed when building a Qt application. Not there in 4.7.2.
What should environment variable QTDIR refer to?
Seems a fundamental flaw - or am I missing something?
What are you using? Qt SDK, sources...?
If building from the sources, don't forget to run configure, it creates all the include wrappers needed.
Just using the SDK. I am not trying to build the sources.
For the 2008 edition of Visual Studio they provide Qt framework binaries for download, so usually you don't need to build the whole framework.
But if you really need to build Qt yourself, then i recommend the source files, that way it never failed for me (most of the times using VS2010, but i used 2008 once or twice)
Note - I am not trying to build the framework!
I have a Qt GUI Application, at some point I need to
#include <QtGui\QApplication>
But all the header files are removed when you uninstall 4.7.1. They are not reinstalled with 4.7.2 (unless they are hiding)
All the libs (e.g. QtGuid4.lib) are also missing.
OK, here is the explanation. Hope it helps others with the same workflow.
In 4.7.1, the include folder is at,
QtSDK/include
but now in 4.7.2 everything has been moved about, and it is at
QtSDK/Desktop/Qt/4.7.2/msvc2008/include
So you need to update your $QTDIR, and update your path to the bin folder which is at the new location in the new QTDIR.
Also, you will need to update QT Settings in your vs2008 project.
In Qt/QtOptions/Qt Versions, add the new version and path.
Then, in Qt/Qt Project Settings/Properties, apply the new version from the dropdown list. | https://forum.qt.io/topic/4498/builds-with-4-7-2-fail-if-4-7-1-has-been-uninstalled | CC-MAIN-2018-51 | refinedweb | 329 | 77.23 |
?
Two trees with roots A and B, none of which is a single-node tree, are isomorphic if and only if there is a 1-1 correspondence between the subtrees of A and of B such that the corresponding subtrees are isomorphic.
int checkIsomorphism(root_tree1, root_tree2)
{
if ( root_tree1==Null && root_tree2==NULL)
return 1
else if (root_tree1==NULL)
return 0
else if (root_tree2==NULL)
return 0
else if (root_tree1->data!=root_tree2->data)
return 0
else
return (checkIsomorphism(root_tree1->left,root_tree2->right) && checkIsomorphism(root_tree1->right,root_tree2->left)) || (checkIsomorphism(root_tree1->left,root_tree2->left) && checkIsomorphism(root_tree1->right,root_tree2->right)
}
//Asumption checking for binary trees
The condition for two trees to be Isomorphic is that ---
Given the root of first tree say X and root of second tree say Y, The subtrees must be isomorphic(All the four subtrees.....two in first and two in second) and there must be one to one correspondence between subtrees of X and subtres of Y. What it mean is that whether the (1) left subtree of X is isomorpic with right subtree of Y And right subtree of X is isomorpic with left subtree of Y OR (2) left subtree of X is isomorpic with left subtree of Y And right subtree of X is isomorphic with right subtree of Y. One among 1 or 2 must be two.. One to One correspondance for binary trees (2-ary) has this meaning. For K- ary There are K! different conditions which need to be checked for Isomorphism
This was asked today at Amazon interview -
Given two Binary Trees (not BST). Each node of both trees has an integer value. Validate whether both trees have the same integers, there could be repetitive integers.
Example
Tree1:
5
1 6
5 4 3 6
Tree2:
1
3 4
6 5
Output
Identical integers
Sample C/C++/Java code would be helpful.
Given two n-node trees, how many rotations does it take to convert one tree into the other?
Forgot Your Password?
2018 © Queryhome | https://www.queryhome.com/tech/40730/how-to-identify-whether-two-trees-are-isomorphic-or-not | CC-MAIN-2019-04 | refinedweb | 332 | 58.21 |
Return versus Risk
Stocks selected are Royal Bank of Canada and Suncor are the stocks chosen.
In two paragraphs, describe the concept of "return versus risk" and explain how you would use it in selecting a new investment portfolio. Explain how and why you used (or did not use) this concept when you chose your original two stocks. In your explanation, ensure that you answer the following questions:
1. What would you do differently if you were to choose another two stocks for your portfolio? Explain your answer.
2. What specific actions could you take in the future when choosing stock investments to reduce risk and increase the reward in your portfolio?
Solution Preview
The risk/return principle states that if an investor is willing to take on a higher risk level, then the investor has every reason to expect a higher return. In essence, the higher the risk, the higher the return expected (but not necessarily obtained). With higher risk comes the possibility of higher losses as well. So when we look at the concept of risk, we have to come to terms with the amount of risk we are willing to accept. Low risk in the current market translates to an expected return in the range of 2-4%; moderate risk translates to 4-6%; high risk translates to 7% and above. Now if we look at the two stocks selected above, we notice the following:
SUNCOR: Current price is $31.59, with the 52 week hi and lo at $37.37 - $25.95 (so the current price is just about midway between the two. The beta is 1.86, which indicates that the portfolio of this firm is considerably out performing the market. The one year target price is $42.39, which represents a return (before taxes) of approximately 35% if the target price is realized (this would not be considered a low risk scenario - especially since it is a foreign stock and it is in the mining sector, which is notoriously dependent upon the economy.
ROYAL BANK: Current price is $62.94, with the 52 week high and low at $63.76 - $46.80 (so it ...
Solution Summary
This discussion focuses on the ability and prospect of investing in foreign stocks for a risk averse investor. It deals with the return on the investment, the risk involved, and options to consider in order to reduce the risk while increasing the over all return. | https://brainmass.com/economics/international-investment/return-versus-risk-526757 | CC-MAIN-2018-05 | refinedweb | 407 | 63.19 |
So I have:
{
"cmd": "/usr/local/bin/coffee", "$file"],
"selector": "source.coffee"
}
But when I run it I get [Finished]env: node: No such file or directoryIf I run it in the terminal normally it works just fine.... any ideas?
Many thanks
any ideas?
It looks like node isn't in the path that Sublime Text 2 is using. You can verify this by typing in the console:
import os
os.environ"PATH"]
This is a common issue on OS X, take a look at stackoverflow.com/questions/1356 ... es-in-os-x for example.
Hi, thank you for the reply.
It looks like the issue:
import osos.environ"PATH"]'/usr/bin:/bin:/usr/sbin:/sbin'
It doesn't have the correct path which is /usr/local/bin I have export PATH="/usr/local/bin:$PATH" in my ~/.profile so I wonder why it isn't using that? Is there a way I can set the path in sublime?
so I added this to my profile alias s='open -a "Sublime Text 2"' and now if I use s . in my terminal it works and use the right path but using it directly from applications it doesn't.
I would try putting it in the file ~/.MacOSX/environment.plist, like one of the responses on stack overflow suggests. If that file does not exist, create it. You will have to log out and back in for the environment variables to take effect.
don't build commands take a 'path' argument?
Like so:
{
"cmd": "/usr/local/bin/coffee", "$file"],
"selector": "source.coffee",
"path": "/usr/local/bin:$PATH"
}
I fixed this by editing my CoffeeCompile.sublimesettings file:
"coffee_executable": "/usr/local/bin/coffee",
"node_path": "/usr/local/bin" | https://forum.sublimetext.com/t/coffeescript-build-issues/2034/2 | CC-MAIN-2016-44 | refinedweb | 284 | 69.07 |
by Niharika Singh
How to create an application on blockchain using Hyperledger
We are going to build a digital bank using Hyperledger Composer. It will have customers and accounts. At the end of it, you’ll be able to transfer funds and record all transactions on blockchain. We’ll expose a RESTful API for the same, so that even a person who has no clue what blockchain is can make a beautiful user interface (UI) around it. We’ll also create this application’s UI in Angular.
I’m super excited to share this step-by-step guide with you. So let’s get started right away!
When I was first coding this out, I ran into errors. Lots and lots of them. But I think that’s good, because it made me learn a lot of things. Errors are essential. I got to a point where I felt switching it on and off would make things better. It almost made me lose my mind, but it’s an integral part in every hacker’s life.
Before getting started, you need to ensure that the machine you’re using is equipped with the required configurations. You may need to download certain prerequisites and set up a basic dev environment. Below are the links to do that. Follow those steps before starting to develop an application, otherwise you’ll definitely run into stupid errors.
First install the Hyperledger composer. Then install the development environment.
There’s no need to start Playground while you’re installing the environment.
Make sure docker is running, and when you run ./startFabric.sh it’s going to take a couple of minutes. So be patient.
Now that your machine is all set, we can start coding!
Step 1: Outline your Business Network
Our Business Network Definition (BND) consists of the data model, transaction logic, and access control rules. The data model and access control rules are coded in domain specific language (which is very simple to catch up with). The transaction logic will be coded in JavaScript.
To create a BND, we need to create a suitable project structure on disk. We will create a skeleton business network using Yeoman. To create a project structure, open your terminal and run the following command:
$ yo hyperledger-composer
This will shoot out a series of questions as follows. You’ll be required to use your arrow keys to navigate through the answers.
Open this project in your favorite text editor. I’m using Visual Code. This is what the file structure will look like:
Delete the contents of test/logic.js. We won’t be using it at the moment.
Step 2.1: Coding out our Business Network (models/test.cto)
First, we’ll define models/test.cto. It contains the class definitions for all assets, participants, and transactions in the business network. This file is written in Hyperledger Composer Modelling Language.
namespace test
asset Account identified by accountId {o String accountId--> Customer ownero Double balance}
participant Customer identified by customerId {o String customerIdo String firstNameo String lastName}
transaction AccountTransfer {--> Account from--> Account too Double amount}
Account is an asset which is uniquely identified with accountId. Each account is linked with Customer who is the owner of the account. Account has a property of balance which indicates how much money the account holds at any moment.
Customer is a participant which is uniquely identified with customerId. Each Customer has firstName and lastName.
AccountTransfer is a transaction that can occur to and from an Account. And how much money is to be transferred is stored in amount.
Step 2.2: Coding out the Business Network (lib/logic.js)
In this file, we’ll add transaction logic in JavaScript.
/*** Sample transaction* @param {test.AccountTransfer} accountTransfer* @transaction*/
function accountTransfer(accountTransfer) {if (accountTransfer.from.balance < accountTransfer.to.balance) {throw new Error ("Insufficient funds");}
accountTransfer.from.balance -= accountTransfer.amount;accountTransfer.to.balance += accountTransfer.amount;
return getAssetRegistry('test.Account').then (function (assetRegistry) {return assetRegistry.update(accountTransfer.from);}).then (function () {return getAssetRegistry('test.Account');}).then(function (assetRegistry) {return assetRegistry.update(accountTransfer.to);});
}
@param {test.AccountTransfer} accountTransfer is the decorator we put at the top of the file to link the transaction with our JavaScript function. Then we validate if the account where funds are has enough money. Otherwise, an error will be thrown. Then we perform basic addition and subtraction on the account’s balance.
At this point, the most important step is to update this on the blockchain. To do this we call getAssetRegistry API of our assets which is Account. Then we update the retrieved assetRegistry for both the account doling out the funds and the account receiving the funds.
Step 3: Generate the Business Network Archive (BNA)
Now that the business network has been defined, it must be packaged into a deployable business network archive (
.bna) file.
Step 3.1: Navigate into the test-bank app in your terminal.
Step 3.2: Run the following command:
$ composer archive create -t dir -n .
This creates a .bna file in the test-bank folder.
Step 4: Deploy the Business Network Archive file on the Fabric
Step 4.1: Install composer runtime
$ composer runtime install --card PeerAdmin@hlfv1 --businessNetworkName test-bank
Step 4.2: Deploy the business network
$ composer network start --card PeerAdmin@hlfv1 --networkAdmin admin --networkAdminEnrollSecret adminpw --archiveFile test-bank@0.0.1.bna --file networkadmin.card
(Make sure you’re in the test-bank folder).
Step 4.3: Import the network administrator identity as a usable business network card
$ composer card import --file networkadmin.card
Step 4.4: To check that the business network has been deployed successfully, run the following command to ping the network:
$ composer network ping --card admin@test-bank
STEP 5: Expose a RESTful API
To create a RESTful API from your command line, run the following command:
$ composer-rest-server
This will shoot a lot of questions.
Now point your browser to.
You’ll see your beautiful blockchain API.
Now let’s add two Customers.
Fist, let’s add a customer named Niharika Singh:
We get a 200 response code.
Now we’ll add customer named Tvesha Singh in a similar way.
To check if you’ve added them correctly, GET them.
You’ll see two customers in the response body.
Now let’s add 2 accounts linked to these two customers.
Add accounts this way. Now, GET them to check if you’ve added them correctly.
Now let’s transfer 75 from Niharika to Tvesha.
Let’s check if the balance is updated by getting the account information.
Viola! It works. Niharika has 25 now, and Tvesha has 125.
Step 6: Angular Front End
To create Angular scaffolding automatically, run the following command in the test-bank folder:
$ yo
This will ask multiple questions.
And it will take a couple of minutes.
Navigate into the bank-app.
$ npm start
This starts the Angular server.
The Angular file structure is created as follows:
Point your browser to. That’s where the magic is happening! You’ll see this screen:
Now go to Assets in the top right corner and click on Account.
These are the exact accounts we created.
So now you can play around with this.
You have your front end and your backend ready!
All transactions that happen on localhost:3000 are reflected on localhost:4200 and vice versa. And this is all on blockchain.
Recently I wrote an article on blockchain’s use cases. I listed and explained about 20 ideas. They can be found here:
How can India get blockchained?
The blockchain epoch has just begun and like any other technology, blockchain will also hit couple of roadblocks…medium.com
If you have a business idea and want to concretise it with technology and architectural details, feel free to reach me at niharika.3297@gmail.com | https://www.freecodecamp.org/news/ultimate-end-to-end-tutorial-to-create-an-application-on-blockchain-using-hyperledger-3a83a80cbc71/ | CC-MAIN-2021-04 | refinedweb | 1,302 | 60.01 |
Section A (40 Marks)
Question 1
a) Why is a class called a factory of objects?
A class is called a factory of objects because with one class definition, we can create several objects, each with their own state and behavior.
b) State the difference between a boolean literal and a character literal.
A boolean literal occupies 1 byte of storage, whereas a character literal occupies 2 bytes of storage. A boolean literal can store either true or false, whereas a character literal can store Unicode characters.
c) What is the use and syntax of a ternary operator?
A ternary operator is an alternative for if-else statement. It works with three operands.
Following is the syntax:
variable = (condition)? value 1 : value 2;
d) Write one word answer for the following:
(i) A method that converts a string to a primitive integer data type.
Integer.parseInt()
(ii) The default initial value of a boolean variable data type.
false
e) State one similarity and one difference between while and for loop.
Similarity: Both while and for loops are entry-controlled loops.
Difference: The while loop is used when the number of iterations is not known, whereas a for loop is used when the number of iterations is known.
Question 2
a) Write the function prototype for the function “sum” that takes an integer variable x as its argument and returns a value of float data type.
float sum(int x)
b) What is the use of the keyword this?
The keyword this is used to refer to the currently calling object.
c) Why is a class known as composite data type?
A class is known as a composite data type because it is generally user-defined, and it is built using one or more primitive data types.
d) Name the keyword that:
(i) is used for allocating memory to an array.
new
(ii) causes the control to transfer back to the method call.
return
e) Differentiate between pure and impure functions.
A pure function does not change the state of an object, whereas an impure function changes the state of an object.
Question 3
a) Write an expression for:
Math.pow(a + b, n) / (Math.sqrt(3) + b)
b) The following is a segment of a program:
x = 1; y = 1; if(n > 0) { x = x + 1; y = y - 1; }
What will be the value of x and y, if n assumes a value (i) 1 (ii) 0?
(i) x = 2, y = 0
(ii) x = 1, y = 1
c) Analyze the following program segment and determine how many times the body of the loop will be executed (show the working):
x = 5; y = 50; while(x <= y) { y = y / x; System.out.println(y); }
y = 50 –> y = 10 –> y = 2
The loop body executes 2 times.
d) When there are multiple definitions with the same function name, what makes them different from each other?
The number of arguments.
The data types of each argument.
e) Given that
int x[][] = {{2, 4, 6}, {3, 5, 7}};
What will be the value of
x[1][0] and
x[0][2]?
x[1][0] = 3
x[0][2] = 6
f) Give the output of the following code fragment when (i) opn = ‘b’ (ii) opn = ‘x’ (iii) opn = ‘a’:
switch(opn) { case 'a': System.out.println("Platform Independent"); break; case 'b': System.out.println("Object Oriented"); case 'c': System.out.println("Robust and Secure"); break; default: System.out.println("Wrong Input"); }
(i) When opn = ‘b’, the output is:
Object Oriented
Robust and Secure
(ii) When opn = ‘x’, the output is:
Wrong Input
(iii) When opn = ‘a’, the output is:
Platform Independent
g) Consider the following code and answer the questions that follow:
class academic { int x, y; void access() { int a, b; academic student = new academic(); System.out.println("Object created"); } }
(i) What is the object name of class academic?
The object name is student.
(ii) Name the class variables used in the program.
The class variables are x and y.
(iii) Write the local variables used in the program.
The local variables are a and b.
(iv) Give the type of function used and its name.
The type of function is void. It is used to indicate that the function doesn’t return any value.
h) Convert the following segment into an equivalent do loop:
int x, c; for(x = 10, c = 20; c > 10; c = c - 2) x++;
int x = 10, c = 20; while(c > 10){ c = c - 2; x++; }
Section B (60 Marks))
import java.io.*; class Shop{ public static void main(String args[])throws IOException{ BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); System.out.print("Name: "); String name = br.readLine(); System.out.print("Address: "); String address = br.readLine(); System.out.print("Amount of purchase: "); double amount = Double.parseDouble(br.readLine()); System.out.print("Type of purchase: "); char type = Character.toUpperCase(br.readLine().charAt(0)); double discount = 0.0; if(amount <= 25000){ if(type == 'L') discount = 0.0; else if(type == 'D') discount = 5.0; } else if(amount <= 57000){ if(type == 'L') discount = 5.0; else if(type == 'D') discount = 7.6; } else if(amount <= 100000){ if(type == 'L') discount = 7.5; else if(type == 'D') discount = 10.0; } else{ if(type == 'L') discount = 10.0; else if(type == 'D') discount = 15.0; } discount = discount / 100 * amount; double net = amount - discount; System.out.println("Customer's name: " + name); System.out.println("Address: " + address); System.out.println("Net Amount: " + net); } }
import java.io.*; class Triangle{ public static void main(String args[])throws IOException{ BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); System.out.println("Type 1 for Triangle"); System.out.println("Type 2 for Inverted Triangle"); System.out.print("Enter your choice: "); int choice = Integer.parseInt(br.readLine()); switch(choice){ case 1: System.out.print("N = "); int n = Integer.parseInt(br.readLine()); for(int i = 1; i <= n; i++){ for(int j = 1; j <= i; j++){ System.out.print(i + " "); } System.out.println(); } break; case 2: System.out.print("N = "); n = Integer.parseInt(br.readLine()); for(int i = n; i >= 1; i--){ for(int j = 1; j <= i; j++){ System.out.print(i + " "); } System.out.println(); } break; default: System.out.println("Invalid Input"); } } }
Question 6
Write a program to input a sentence and print the number of characters found in the longest word of the given sentence.
For example, if s = “India is my country” then the output should be 7.
import java.io.*; class Longest{ public static void main(String args[])throws IOException{ BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); System.out.print("Sentence: "); String s = br.readLine().trim() + " "; String word = ""; String longest = ""; for(int i = 0; i < s.length(); i++){ char ch = s.charAt(i); if(ch == ' '){ if(longest.length() < word.length()) longest = word; word = ""; } else word += ch; } int len = longest.length(); System.out.println("Length = " + len); } }, computes the product of integer arguments if ch is ‘p’ else adds the integers.
c) void num_calc(String s1, String s2) with two string arguments, which prints whether the strings are equal or not.
class Overload{ public void num_calc(int num, char ch){ if(ch == 's' || ch == 'S'){ int s = num * num; System.out.println("Square = " + s); } else{ int c = num * num * num; System.out.println("Square = " + c); } } public void num_calc(int a, int b, char ch){ if(ch == 'p' || ch == 'P'){ int p = a * b; System.out.println("Product = " + p); } else{ int s = a + b; System.out.println("Sum = " + s); } } public void num_calc(String s1, String s2){ if(s1.equalsIgnoreCase(s2)) System.out.println("They are equal"); else System.out.println("They are unequal"); } }
Question 8
Write a menu-driven program to accept a number from the user and check whether it is a BUZZ number or to accept any two numbers and print the GCD of them.
a) A BUZZ number is the number which either ends with 7 or is divisible by 7.
b) GCD (Greatest Common Divisor) of two integers is calculated by continued division method. Divide the larger number by the smaller; the remainder then divides the previous divisor. The process is repeated until the remainder is zero. The divisor then results the GCD.
import java.io.*; class Menu{ public static void main(String args[])throws IOException{ BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); System.out.println("1. Buzz number"); System.out.println("2. GCD of two numbers"); System.out.print("Enter your choice: "); int choice = Integer.parseInt(br.readLine()); switch(choice){ case 1: System.out.print("N = "); int n = Integer.parseInt(br.readLine()); if(n % 10 == 7 || n % 7 == 0) System.out.println("Buzz number"); else System.out.println("Not a buzz number"); break; case 2: System.out.print("First number: "); int a = Integer.parseInt(br.readLine()); System.out.print("Second number: "); int b = Integer.parseInt(br.readLine()); while(a % b != 0){ int rem = a % b; a = b; b = rem; } System.out.println("GCD = " + b); break; default: System.out.println("Invalid Input"); } } }.
import java.io.*; class Exam{ public static void main(String args[])throws IOException{ BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); int roll[] = new int[50]; int a[] = new int[50]; int b[] = new int[50]; int c[] = new int[50]; double avg[] = new double[50]; for(int i = 0; i < roll.length; i++){ System.out.print("Enter roll: "); roll[i] = Integer.parseInt(br.readLine()); System.out.print("Marks in Subject A: "); a[i] = Integer.parseInt(br.readLine()); System.out.print("Marks in Subject B: "); b[i] = Integer.parseInt(br.readLine()); System.out.print("Marks in Subject C: "); c[i] = Integer.parseInt(br.readLine()); avg[i] = (a[i] + b[i] + c[i]) / 3.0; } System.out.println("Average marks obtained by each student:"); for(int i = 0; i < avg.length; i++) System.out.print(avg[i] + "\t"); System.out.println(); System.out.println("Those scoring above 80:"); for(int i = 0; i < avg.length; i++) if(avg[i] > 80) System.out.print("Roll " + roll[i] + ": " + avg[i] + "\t"); System.out.println(); System.out.println("Those scoring below 40:"); for(int i = 0; i < avg.length; i++) if(avg[i] < 40) System.out.print("Roll " + roll[i] + ": " + avg[i] + "\t"); System.out.println(); } } | https://www.happycompiler.com/class-10-2009/ | CC-MAIN-2020-24 | refinedweb | 1,689 | 60.82 |
Here you can find answers to some of the most frequently asked questions about RobotPy.
Should our team use RobotPy?¶
What we often recommend teams to do is to take their existing code for their existing robot, and translate it to RobotPy and try it out first in the robot simulator, followed by the real robot. This will give you a good taste for what developing code for RobotPy will be like.
Related questions for those curious about RobotPy:
Installing and Running RobotPy¶
How do I install RobotPy?¶
See our getting started guide.
What version of Python do RobotPy projects use?¶
When running RobotPy on a FIRST Robot, our libraries/interpreters use Python 3. This means you should reference the Python 3.x documentation instead of the Python 2.x documentation.
- RobotPy WPILib on the roboRIO uses the latest version of Python 3 at kickoff. In 2020, this was Python 3.8. When using pyfrc or similar projects, you should use a Python 3.6 or newer interpreter (the latest is recommended).
- RobotPy 2014.x is based on Python 3.2.5.
pynetworktables is compatible with Python 3.5 or newer, since 2019. Releases prior to 2019 are also compatible with Python 2.7.
What happens when my code crashes?¶
An exception will be printed out to the console, and the Driver Station log may receive a message as well. It is highly recommended that you enable NetConsole for your robot, so you can see these messages.
Is WPILib available?¶
Of course! Just
import wpilib. Class and function names are identical
to the Java version. Check out the Python WPILib API Reference
for more details.
As of 2020, the API mostly matches the C++ version of WPILib, except that protected functions are prefixed with an underscore (but are availble to all Python code).
From 2015-2019, almost all classes and functions from the Java WPILib are available in RobotPy’s WPILib implementation.
Prior to 2015, the API matched the C++ version of WPILib.
Is Command-based programming available?¶
Of course! Check out the
command package. There
is also some python-specific documentation available.
Is there an easy way to test my code outside of the robot?¶
Glad you asked! Our pyfrc project has a built in lightweight robot simulator you can use to run your code, and also has builtin support for unit testing with py.test.
Note
These are in the process of being updated for 2020
Competition¶
Is RobotPy competition-legal?¶
As RobotPy was not written by anyone involved with the GDC, we can’t provide a guaranteed answer (particularly not for future years). However, we see no reason that RobotPy would not be legal: to the cRIO/RoboRIO, it looks just like any other C++ WPILib-using program that reads text files. RobotPy itself should be considered COTS software as it is freely available to all teams. Teams have been using RobotPy since 2010 without any problems from FIRST, and we expect that to continue.
Caveat emptor: while RobotPy is almost certainly legal to use, your team should carefully consider the risk of using such a large piece of unofficial software; unless RobotPy is used by many teams, if you run into trouble at a competition, there may not be anyone else there to help! However, we’ve found that most problems teams run into are problems with WPILib itself, and not RobotPy.
Also, be sure to keep in mind the fact that Python is a dynamic language and is NOT compiled. This means that typos can easily go undetected until your robot runs that particular line of code, resulting in an exception and 5 second restart. Make sure to test your code thoroughly (see our unit testing documentation).
Is RobotPy stable?¶
Yes! While Python is not an officially supported language, teams have been using RobotPy since 2010, and the maintainer of RobotPy is a member of the WPILib team. Much of the time when bugs are found, they are found in the underlying WPILib, instead of RobotPy itself.
One caveat to this is that because RobotPy doesn’t have a beta period like WPILib does, bugs tend to be found during the first half of competition season. However, by the time build season ends, RobotPy is just as stable as any of the officially suported languages.
How often does RobotPy get updated?¶
RobotPy is a community project, and updates are made whenever community members contribute changes and the developers decide to push a new release.
Historically, RobotPy tends to have frequent releases at the beginning of build season, with less frequent releases as build season goes on. We try hard to avoid WPILib releases after build season ends, unless critical bugs are found.
Performance¶
Is RobotPy fast?¶
It’s fast enough.
We’ve not yet benchmarked it, but it’s almost certainly just as fast as Java for typical WPILib-using robot code. RobotPy uses the native C++ WPILib, and thus the only interpreted portions are your specific robot actions. If you have particularly performance sensitive code, you can write it in C++ and use pybind11 wrappers to interface to it from Python.
RobotPy Development¶
Who created RobotPy?¶
RobotPy was created by Peter Johnson, programming mentor for FRC Team 294, Beach Cities Robotics. He was inspired by the Lua port for the cRIO created by Ross Light, FRC Team 973. Peter is a member of the FIRST WPILib team, and also created the ntcore and cscore libraries.
The current RobotPy maintainer is Dustin Spicuzza, also a member of the FIRST WPILib team.
Current RobotPy developers include:
- Dustin Spicuzza (@virtuald)
- David Vo (@auscompgeek)
- Ellery Newcomer (@ariovistus)
- Tim Winters (@ArchdukeTim)
How can I help?¶
RobotPy is an open project that all members of the FIRST community can easily and quickly contribute to. If you find a bug, or have an idea that you think others can use:
- Test and report any issues you find.
- Port and test a useful library.
- Write a Python module and share it with others (and contribute it to the robotpy-wpilib-utilities package!) | https://robotpy.readthedocs.io/en/2020.0.1/faq.html | CC-MAIN-2020-24 | refinedweb | 1,010 | 65.22 |
To check for the existence of data in a script, use the Exists function.
When checking for the existence of geographic data, use the Exists function, since it recognizes catalog paths. A catalog path is a path name Infrastructure.gdb (a folder) does not contain a file named Infrastructure. In short, Windows doesn't know about feature datasets or feature classes, so you cannot use Python existence functions like os.path.exists. Of course, everything in ArcGIS knows how to deal with catalog paths. Universal Naming Convention (UNC) paths can also be used.
import arcpy arcpy.env.workspace = "d:/St_Johns/data.gdb" fc = "roads" # Clip a roads feature class if it exists # if arcpy.Exists(fc): arcpy.Clip_analysis(fc, "urban_area", "urban_roads")
Tip:
The Exists function honors the geoprocessing workspace environment allowing you to just specify the base name.
If the data resides in an enterprise geodatabase, the name must be fully qualified.
import arcpy
arcpy.env.workspace = "Database Connections/Bluestar.sde"
fc = "ORASPATIAL.Rivers"
# Confirm that the feature class exists
#
if arcpy.Exists(fc):
print("Verified {} exists".format(fc))
In scripting, the default behavior for all tools is to not overwrite any output that already exists. This behavior can be changed by setting the overwriteOutput property to True (arcpy.env.overwriteOutput = True). Attempting to overwrite when the overwriteOutput is False, causes a tool to fail. | https://pro.arcgis.com/en/pro-app/latest/arcpy/get-started/checking-for-existence.htm | CC-MAIN-2021-21 | refinedweb | 225 | 52.36 |
This post assumes you’re familiar with the following Scalaz concepts:
At Avention, we have a significant amount of backend code running in Akka. Most of this code runs in the following State monad:
case class PipelineState(...) type PipelineMonad[+A] = State[PipelineState, A]
The details of PipelineState aren’t relevant here. Briefly, it allows us to easily track statistics about our backend code. These statistics are written to a database for monitoring and debugging purposes. PipelineState allows us to do things like identify key spots in the code and track how many times those spots succeed or fail. It also allows us to easily track execution times for key blocks of code. The details of PipelineState might be a good subject for another blog post, but for now the important point is that most of our backend code runs in PipelineMonad, which is a state monad using PipelineState.
For convenience, we’ve also defined an option transformer wrapped around PipelineMonad. We use this option transformer extensively throughout our code. Here’s the definition:
type PipelineMonadOT[+A] = OptionT[PipelineMonad, A]
Adding More State
What happens when part of our application needs to use more state than just what’s in PipelineState? While in practice this extra state could be some complex data structure, for simple illustrative purposes let’s just use an Int. The traditional functional approach is to define a state transformer monad that mixes our Int state in with PipelineState:
type IntStateT[M[+_], +A] = StateT[M, Int, A] type PipelineMonadWithInt[+A] = StateT[PipelineMonad, Int, A]
We’ll also need to add an option transformer:
type PipelineMonadWithIntOT[+A] = OptionT[PipelineMonadWithInt, A]
We end up with 3 levels of nested monads. The lowest level is a state monad for PipelineState. Wrapped around that is a state transformer for adding the Int state. Wrapped around that is an option transformer. Transformers allow us to nest our monads arbitrarily deep. As we’ll see below, 3 levels is deep enough to cause a lot of confusion.
Let’s look at how we can get different types of values into our PipelineMonadWithIntOT monad. For clarity, I’m putting explicit types on all variables. You wouldn’t do this in practice.
To get a simple non-monadic value into PipelineMonadWithIntOT, you just point it into the monad:
val m1: PipelineMonadWithIntOT[String] = "hello".point[PipelineMonadWithIntOT]
To get an Option into PipelineMonadWithIntOT, you need to first wrap the Option in PipelineMonadWithInt using the point() method. Then you can wrap the result in a PipelineMonadWithIntOT using OptionT.optionT():
val m2: Option[String] = "hello".some val m3: PipelineMonadWithIntOT[String] = OptionT.optionT(m2.point[PipelineMonadWithInt])
If you already have a value wrapped in PipelineMonadWithInt, you can wrap it in PipelineMonadWithIntOT using the liftM() method:
val m4: PipelineMonadWithInt[String] = "hello".point[PipelineMonadWithInt] val m5: PipelineMonadWithIntOT[String] = m4.liftM[OptionT]
Finally, if you have a value wrapped in PipelineMonad, you have to go through a couple of steps. First, you have to use liftM() to wrap the PipelineMonad in PipelineMonadWithInt. Then, you have to use liftM() again to wrap that in PipelineMonadWithIntOT.
val m6: PipelineMonad[String] = "hello".point[PipelineMonad] val m7: PipelineMonadWithIntOT[String] = m6.liftM[IntStateT].liftM[OptionT]
That covers all the types of things you would want to wrap in PipelineMonadWithIntOT. But there is a problem. Let’s try to remove the explicit typing on variable m7:
val m7 = m6.liftM[IntStateT].liftM[OptionT] error: kinds of the type arguments (scalaz.Unapply[scalaz.Monad,IntStateT[PipelineMonad,String]]{type M[X] = IntStateT[PipelineMonad,X]; type A = String}#M,String) do not conform to the expected kinds of the type parameters (type F,type A) in class OptionT. scalaz.Unapply[scalaz.Monad,IntStateT[PipelineMonad,String]]{type M[X] = IntStateT[PipelineMonad,X]; type A = String}#M's type parameters do not match type F's expected parameters: type X is invariant, but type _ is declared covariant val m7 = m6.liftM[IntStateT].liftM[OptionT] ^
This code should work, but the Scala compiler gets confused and gives us an error. We’ve uncovered a bug in the way Scalaz and the compiler interact. (We’re using Scala version 2.10.4. I haven’t tested if this issue exists with later versions.) We need to give the compiler some hints to make it happy, either by putting an explicit type on m7, or by doing something like the following:
val m7 = ( m6.liftM[IntStateT]: PipelineMonadWithInt[String] ).liftM[OptionT]
As you can see, once we start putting transformers inside transformers wrapping values becomes non-trivial. We also start pushing the compiler to its limits. We’re drowning in transformers.
Looking for a Simpler Solution
Can we find a simpler solution that doesn’t involve nested transformers?
We could throw up our hands, say to heck with functional coding, and use a mutable variable to hold our Int. Let’s look for a better solution than that.
Rather than using a state transformer to manage our Int, we could just pass the current Int value into each of our functions. Then our functions could return the next Int value. But this approach could easily cause the problems with tracking intermediate states that I talked about at the beginning of my blog post about the state monad. That is, passing our Int state in and out of functions could lead to ugly, brittle code. The state monad was created explicitly to solve these problems. It would be a shame not to be able to take advantage of it.
We could add our Int as a new field inside the PipelineState case class. That would keep our code nice and simple since all we’d need is PipelineMonad and PipelineMonadOT (the option transformer that wraps PipelineMonad). But this is hacky because it violates separation of concerns. PipelineState and PipelineMonad exist at the lowest levels of our infrastructure. They should have no knowledge about how they are used. Besides, we might have a bunch of different parts of our code that each need to add their own type of state. So, we wouldn’t just be adding an Int to PipelineState; we’d be adding 10 or 20 distinct types of state used by the different sections of our code. Yuck.
Let’s see if we can find a less hacky way to add a new field inside the PipelineState class. We could add a new data field that’s of type Any:
case class PipelineState(data: Any, ...)
We could then stick our Int state into that data field. This approach solves our separation of concerns problem; PipelineState and PipelineMonad have no idea how they’re being used or what type of extra data is stored in them. Also, different parts of our code could pack different types of state into the single data field. Unfortunately, we have to typecast data whenever we pull a value out of it. That’s a pain in the neck. We also lose compile-time type checking. Let’s switch from using Any to using a type parameter:
case class PipelineStateEx[D](data: D, ...)
Much cleaner. Note that we’ve switched the name of the class to PipelineStateEx. We’ll see why in a bit.
Defining Monads Around PipelineStateEx
Now we need to build up a state monad and an option transformer around PipelineStateEx[D]. Before we do that, let’s take a deeper look at how some of our code was defined before we added the data field to PipelineState:
object PipelineStateMgr {] = ... ... }
Don’t worry about what each of the functions do. The key point is that after we define our monads, we define a bunch of helper functions that run in our monads. The rest of our code generally doesn’t access the PipelineState object directly; instead, we access PipelineState through these helper functions. For convenience, we have two versions of each helper: one version that runs in PipelineMonad and one that runs in PipelineMonadOT. Under the covers, the PipelineMonadOT versions just call the corresponding PipelineMonad versions and wrap the results in an option transformer.
Let’s try to modify PipelineStateMgr to use PipelineStateEx[D] with the new data field:
// THE FOLLOWING DOES NOT WORK! object PipelineStateMgr { type PipelineMonad[+A, D] = State[PipelineState[D], A] type PipelineMonadOT[+A, D] = OptionT[PipelineMonad[A, D], A] ... }
We’ve got a problem, namely, PipelineMonad[+A, D] is not a monad. A monad must take exactly one type parameter. But PipelineMonad has two: A and D. We run into problems when we try to use this non-monad with OptionT. OptionT expects its first type parameter to be a monad. Since PipelineMonad is not a monad, we can’t use it with OptionT. Put another way, the new D parameter is screwing everything up.
We can fix this problem by changing PipelineStateMgr to be a trait that takes D as a type parameter:
trait ManagesPipelineState[D] { type PipelineState = PipelineStateEx[D]] = ... ... }
As a convenience, we’ve added a type PipelineState which is just a parameterless version of PipelineStateEx[D]. Once that type is defined, the rest of the trait is identical to our original PipelineStateMgr object. Note that PipelineMonad and PipelineMonadOT now take only a single type parameter each. So they are now valid monads, and everything works. The trick to making things work was to move the type parameter from the definitions of PipelineMonad and PipelineMonadOT up to the trait.
With this trait in place, we can define the following pipeline state manager for code that needs to add an Int to the state:
object IntPipelineStateMgr extends ManagesPipelineState[Int]
We could just as easily create a pipeline state manager for some other data type:
case class Foo(...) object FooPipelineStateMgr extends ManagesPipelineState[Foo]
We could even create a pipeline state manager for code that doesn’t need any extra state:
object UnitPipelineStateMgr extends ManagesPipelineState[Unit]
Let’s see how we can use IntPipelineStateMgr to access or change the current Int when we’re running inside PipelineMonad:
import IntPipelineStateMgr._ ... for { ... squared <- gets { state: PipelineState => val currentNum = state.data currentNum * currentNum } ... _ <- modify { state: PipelineState => state.copy(data = state.data + 1) } ... } yield ...
This is just standard state monad stuff. The gets() call squares the current Int stored in the state without modifying the state. The modify() replaces the current state by incrementing our Int by one.
If we want to use PipelineMonadOT, we need to call liftM[OptionT] on the results of gets() and modify(). Unfortunately, due to a type inferencing bug/limitation in the compiler, we have to provide a bit of type information to make things work:
squared <- { gets { state: PipelineState => val currentNum = state.data currentNum * currentNum }: PipelineMonad[Int] }.liftM[OptionT]
Because that’s a bit ugly, let’s add a wrapOT() method to our ManagesPipelineState[D] trait:
trait ManagesPipelineState[D] { type PipelineState = PipelineStateEx[D] type PipelineMonad[+A] = State[PipelineState, A] type PipelineMonadOT[+A] = OptionT[PipelineMonad, A] def wrapOT[A](m: PipelineMonad[A]): PipelineMonadOT[A] = m.liftM[OptionT] ... }
The wrapOT() method just wraps a PipelineMonad in a PipelineMonadOT. Our code for accessing the current Int stored in the state wrapped up in a PipelineMonadOT now simplifies to
squared <- wrapOT { gets { state: PipelineState => val currentNum = state.data currentNum * currentNum } }
We’ve now found a solution that lets us add arbitrary data to PipelineState so that we don’t have to use state transformers at all. Our wrapOT() helper method helps us get past some deficiencies in the compiler and helps keep our code cleaner.
Reexamining State Transformers
Let’s take a second look at state transformer monads. Our main objection was that wrapping values gets ugly, especially given the compiler issues. But we could write wrapper methods similar to wrapOT() to hide all the ugly wrapping. The only issue then is that we’ve got a lot of boilerplate code. Every time we want to add a different type of state on top of PipelineMonad, we’ve got to add the following:
- A type equivalent to IntStateT
- A type equivalent to PipelineMonadWithInt
- A type equivalent to PipelineMonadWithIntOT
- A handful of helper wrap methods
That’s a bit of a pain. Instead of doing all that boilerplate code, we can build a trait that takes care of all that for us. The following trait assumes the original version of PipelineMonad, that is, the one without the extra data field in PipelineState.
trait ExtendsPipelineState[D] { type ExtStateT[M[+_], +A] = StateT[M, D, A] type ExtPipelineMonad[+A] = StateT[PipelineMonad, D, A] type ExtPipelineMonadOT[+A] = OptionT[ExtPipelineMonad, A] def wrapPipelineMonad[A] (m: PipelineMonad[A]): ExtPipelineMonad[A] = m.liftM[ExtStateT] def wrapExtStateMonad[A] (m: State[D, A]): ExtPipelineMonad[A] = m.lift[PipelineMonad] def wrapToOT[A](a: A): ExtPipelineMonadOT[A] = a.point[ExtPipelineMonadOT] def wrapOptionToOT[A] (m: Option[A]): ExtPipelineMonadOT[A] = OptionT.optionT(m.point[ExtPipelineMonad]) def wrapExtPipelineMonadToOT[A] (m: ExtPipelineMonad[A]): ExtPipelineMonadOT[A] = m.liftM[OptionT] def wrapPipelineMonadToOT[A] (m: PipelineMonad[A]): ExtPipelineMonadOT[A] = wrapExtPipelineMonadToOT(wrapPipelineMonad(m)) def wrapExtStateMonadToOT[A] (m: State[D, A]): ExtPipelineMonadOT[A] = wrapExtPipelineMonadToOT(wrapExtStateMonad(m)) }
Building state transformers and option transformers on top of PipelineMonad is now easy:
object IntPipelineStateExtender extends ExtendsPipelineState[Int] import IntPipelineStateExtender._ val m1: ExtPipelineMonadOT[String] = wrapToOT("hello") val m2: ExtPipelineMonadOT[String] = wrapOptionToOT("hello".some) val m3: ExtPipelineMonadOT[String] = wrapExtPipelineMonadToOT("hello".point[ExtPipelineMonad]) val m4: ExtPipelineMonadOT[String] = wrapPipelineMonadToOT("hello".point[PipelineMonad]) val m5: ExtPipelineMonadOT[Int] = wrapExtStateMonadToOT { gets { n: Int => n * n } }
The boilerplate is gone, and we’ve got nice clean wrappers. If we want to extend with a type other than Int, we just have to do this:
case class Foo(...) object FooPipelineStateExtender extends ExtendsPipelineState[Foo] import FooPipelineStateExtender._
Super easy. Using state transformers to add the state doesn’t seem so bad now.
Summary
When you have code already running in a state monad, adding extra state can be tricky. You can add a transformer around the base state, but the wrapping code gets ugly quickly. Deficiencies in the Scala compiler make things even worse.
We examined two approaches to solving this problem. For the first approach, we added an arbitrary data field to the underlying state monad. We moved our state monad definition from an object to trait ManagesPipelineState[D], where D is the type of the extra state. Then, we created objects that implement the trait such as IntPipelineStateMgr and FooPipelineStateMg. These objects allow us to easily and cleanly work with Int and Foo data directly in the base state monad. There’s no need to resort to using state transformers with this approach.
For our second approach, we created trait ExtendsPipelineState[D] to make state transformers easier to use. This trait encapsulates all the boilerplate type definitions and wrapper methods needed to build a state transformer on top of PipelineMonad. Using ExtendsPipelineState[D], it’s easy to layer state transformers and option transformers on top of PipelineMonad. Just create objects like IntPipelineStateExtender and FooPipelineStateExtender that implement the trait. With those in place, wrapping different types into the monads is easy. Plus you don’t have to do any boilerplate type definitions. | https://softwarecorner.wordpress.com/2015/01/24/drowning-in-monad-transformers/ | CC-MAIN-2017-26 | refinedweb | 2,484 | 56.05 |
Working without branches and PRs
When my team and I started developing new product, I was looking for a way to deliver high quality code effectively. Previously most of us worked with branch-based workflows with Pull Requests. From my experience it's quite standard way. I haven't seen many projects or meet many people working differently. However, I did work once in an green field'ish project with PRs. It was quite annoying. Especially, when for the POC phase I was alone. I was taught to do it like that and I didn't think about more effective ways.
Looking for inspiration
But in the meantime when we were starting that project I joined polish dev Instagram community and had a chance to talk with @andrzejkrzywda . Andrzej is CEO of Arkency, consulting agency that works with Ruby and Rails. The thing that makes Arkency special (and that was mind-blowing for me) is the fact that they have their own, very specific workflow. And yet, very different than what is accepted. Andrzej did some marketing on the Instagram stories. And it couldn't end up different. I bought the Async Remote book, where I learned of how developers in Arkency work... without branches (there are also few other interesting things in their workflow, check the book out).
Chance to test things out
I thought to myself that a Greenfield-ish project, that we need to develop sooner than later, is a good opportunity to at least try no-branches approach from the beginning. At the end of the day it should be quicker. Especially when there's not so many developers in the beginning (our team started with 2 backend and 1 frontend developers only). And there's not so many things that you can break :)
So we have started working that way. But at a certain point you stop developing the product just for sake of developing it. You're getting your first customers and sometimes its getting hard to deliver full feature in few commits. And we felt that we didn't want to go back to the old flow. This felt so much more productive.
Then I reminded myself that there was the answer in the Async Remote book. But before we get there, and see how we can use the azure feature management from the title to keep that flow, let's establish few things.
Let's establish few things
Before you stop reading and think we're crazy because we work without reading each others code or what not, I want you to know one thing. No branches and no Pull Requests is not equal to No Code Reviews. No. We do review each others code. And also, when someone is in doubt, there's a possibility to do the PR. The thing is, it's not mandatory. You can do that when you feel that you need colleagues input before you commit to the master branch. No hard rules. Just guidelines that you follow.
Find the right tool that suits your needs
I am not against branches. Do whatever works for you and your team. I've worked like that for quite some time. But the other way feels way more productive. No offence :)
All right, so we have that established. Let's see how Azure Feature Management can help us :)
The problem
The problem that we're trying to solve with Azure Feature Management is the fact that it sometimes might be hard to commit entire feature within one commit into the master branch, and then deploy it to production. It could mess few things up.
In our case it was the frontend view that has to be invisible conditionally. But it could be different. For example, you might want to change parts of backend code - introduce easily switchable branching with feature toggling. Azure Feature Management would also help you there.
The idea of solving the problem using Feature Management
So to solve this problem we can use Feature management (aka Feature Toggling) in Azure. Azure Feature Management allows you to disable parts of the code that aren't ready. Or hide those parts, if we're thinking about an UI feature.
What I really like about Feature Management in Azure is the fact that you can set it up in a way that the feature can be available only at certain time window or only for certain percentage of users.
Setting up Feature Management in Azure
Setting up the Feature Management in Azure is quite simple. Find the App Configuration resource and follow the steps in the wizard. If you get into trouble, visit this link for more detailed guidance.
Setting up the Feature Management in Azure is quite simple. Find the App Configuration resource and follow the steps in the wizard. If you get into trouble, visit this link for more detailed guidance.
Then in the operations section there's Feature Manager option
Now you can create feature flags :)
Feature flags have option to use feature filters. However this is out of scope of this post.
C# implementation
So now as we have feature management set in the Azure, we can implement it in C#. To solve our problem, we'll create an endpoint that returns list of currently enabled features. Then, the frontend client will be able to manipulate the view and navigation based on the result.
Necessary packages
Let's start with packages. The packages that you should install are
Microsoft.FeatureManagement.AspNetCore, Microsoft.Azure.AppConfiguration.AspNetCore
Program.cs configuration
To enable Azure Feature Management extend your Host builder by highlighted lines (the ConfigureAppConfiguration extension)
Host.CreateDefaultBuilder(args) .ConfigureWebHostDefaults(webBuilder => webBuilder.ConfigureAppConfiguration((hostContext, config) => { var settings = config.Build(); var connection = settings.GetConnectionString("AppConfig"); config.AddAzureAppConfiguration(options => { options.Connect(connection).UseFeatureFlags(); }); }).UseStartup<Startup>())
Startup.cs
Simply Add Feature Management :)
services.AddFeatureManagement();
The Endpoint for Frontend
The endpoint is simple and uses configured Azure Feature Management SDK to manage App Configuration connection. The frontend client gets all the available features with a field describing whether it's enabled or not.
[Route("api/[controller]")] public class FeatureManagementController : Controller { private readonly IFeatureManager _featureManager; public FeatureManagementController(IFeatureManager featureManager) { _featureManager = featureManager; } /// <summary> /// Returns feature flags that are set up for the application /// </summary> /// <returns>List of feature flags. Feature flag contains name and status (enabled or not)</returns> [HttpGet] public async Task<IActionResult> GetFeatureFlags() { var featureFlags = new List<Feature>(); await foreach (var featureFlag in _featureManager.GetFeatureNamesAsync()) // Fetches all Feature names from Azure { // IsEnabledAsync method call returns the value from App Configuration, Feature Management tab for desired feature flag. featureFlags.Add(new Feature(featureFlag, await _featureManager.IsEnabledAsync(featureFlag))); } return Ok(featureFlags); } }
The using that allows you to use IFeatureManager interface:
using Microsoft.FeatureManagement;
Summary
- Azure Feature Management lets you enable and disable certain parts of code
- There's dedicated SDK that makes it easier to fetch available feature flags and their status
- Azure Feature Management is available through App Configuration service on Azure
- Because of feature flags, it's possible to commit and push that is not yet ready to be used by an end user
- It's also possible to manipulate who gets the certain feature enabled by using feature filters, which is out of this blog post scope
Have you ever tried Azure Feature Management? What's your opinion on that?
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/lukaszreszke/disabling-unready-code-with-azure-feature-management-40b5 | CC-MAIN-2022-21 | refinedweb | 1,221 | 55.84 |
Layer ordering?
How can I control the visual (z) ordering or depth of layers to move one to the front when touched? It must be obvious, but I couldn't seem to find it in the documentation. Many thanks!
Are you talking in
sceneor
ui?
For
ui, you'd use a view's
bring_to_frontor
send_to_backmethods.
Yes, it is not obvious how you would directly control the ordering of layers from the
scenedocumentation. One way to "bring to the front" would be to remove, then re-add a layer ( I haven't tested this, but ui worked that way). Sending a specific layer to the back would then seem to require removing all other layers, and re-adding them in order.
JonB, yes it's in a scene. I will try your suggestion re adding and removing all the layers, but this seems a bit much in any kind of dynamic environment with a reasonable number of layers doesn't it?! :/
ccc. I'm not clear what you are pointing at? Please tell me which bit of the documentation controls the layer 'height' (z coordinate). Coz I can't see it. As per original post, I HAVE read the docs...
Surely I can't be the first person to have this issue?! :)
edit
I have tried
remove_layer()and
add_layer()and yes, this will bring a chosen layer to the front of a scene.
Is this really the only way to do it through? I have to keep track of all the layers z heights, and at each redraw every single layer is removed, and then they're all added back, one by one, in the right order? 60 times a second?! :/
moddhayward, try only changing the z-order when you WANT a z-order that is different than the current one. Once you have changed the z-order, it should stay that way until you change it again. As you point out, doing so repeatedly in scene.draw(), up to 60 times a second, is NOT the way to go.
I was pointing to
scene.Layerbecause JonB's question (that you answered later) in the post before mine was about
uivs.
scene.
Ok, I played around with this a little, it turns out that
root_layer.sublayerscan be manipulated, which ultimately defines the drawing order.
For instance
root_layer.sublayers.reverse()works. I would imagine
sortwould also work for arbitrary ordering (e.g. sort with a key), though I didn't try it, since both sort in place. I suspect you could also set this list directly, for instance in Cards.py we see
self.root_layer.sublayers = [], indicating that you can indeed manipulate
sublayersdirectly.
The docs say not to modify the
sublayersdirectly, I suspect that might really mean don't add or remove layers directly using the list -- maybe there is some "registering" of Layers happening under the hood, but once they have been added, you can modify the order manually.
As ccc says, just reorder this list when something changes that requires reordering. Also, for many simple cases, I'd imagine you want a simple bring to front or send to back, which could be implemented as follows inside your scene class:
def bring_to_front(self, layer): """bring layer to the front, if it exists as a sublayer (otherwise nothing changes). This assumes that sublayer[0] is the bottom layer, so sorts the list so layer is at the end.""" self.root_layer.sublayers.sort( key=lambda x: x == layer, reverse=False) def send_to_back(self,layer): """send layer to the back, if it exists as a sublayer (otherwise nothing changes). This assumes that sublayer[0] is the bottom layer, so sorts the list so layer is at the front""" self.root_layer.sublayers.sort( key=lambda x: x == layer, reverse=True)
( I may have mixed these up -- not in front of pythonista right now, and I forgot if the "top layer" was the first or last item in the sublayer list -- the above is written assuming that the top layer is the last item in the list, i.e that add_sublayer appends to the list rather than inserts)
ccc, that wasn't obvious but fair enough! :)
I actually wanted a pretty dynamic environment/game. Even if I 'only change it when it needs changing' I still have to keep track of all z heights and run a chunk of code to decide IF a change has happened, between any of the layers, 60 times a second. This is some distance from simply passing a z height to the canvas, and no doubt considerably slower.
Thank you for all your help though, at least I know the limitations now. | https://forum.omz-software.com/topic/1681/layer-ordering/2 | CC-MAIN-2020-45 | refinedweb | 773 | 71.65 |
Learning C#: How to Master an Object Oriented Programming Language
If you’ve worked with any C-style language, C# will come as second nature to you as a programmer. C# is extremely similar to C++ and especially Java. Java programmers will have no problem learning C#, and for a native C# programmer, moving on to languages such as Java for Android development will be just a matter of learning simple syntax.
C# is a part of the Microsoft .NET library. At first, when .NET was introduced, C# was not as popular as its fellow .NET language, VB.NET. .NET was introduced as the next programming language after Classic ASP and Visual Basic, so most developers flocked to VB.NET as the natural next language to learn. Now, however, MVC C# is the next framework, and it’s been a catalyst for C#’s popularity. If you want to learn C#, there are a few basic frameworks, language syntax and programming style you need to know.
Learning Object Oriented Languages and C#
Object Oriented versus Linear Programming Languages
The first item you need to know is object oriented languages versus the linear execution steps of older languages. Object oriented languages aren’t a new concept. C++ is an object oriented languages. However, many of the older languages were linear. Linear languages run one page from start to finish. There are no compartmentalizing objects, which makes linear languages messier than object oriented languages. Linear languages only allow you to import code from other files, but this code does not need to have any organization or logic flow.
C# is a true object oriented language. At first, you’ll probably get frustrated. Understanding object oriented languages is difficult for most people, especially if you are used to older languages. Object oriented languages use a concept of “classes.” These classes represent parts of your code. For instance, if you have a program about a car, you map out the parts of the car as classes. You’d have a class for the engine, the interior, the exterior and maybe some classes about the dashboard and passengers. The complexity of your classes is dependent on the complexity of your car.
The flow of an object oriented language is completely different from a linear language. When a linear code file executes, the compiler runs through the code line-by-line. With object oriented classes, you call class methods, properties and events at any point in your code. When you call a method, the execution process jumps to the corresponding class and returns to the next line of execution. The C# language is written with Visual Studio, so you can step through your class code to see the flow of execution.
To Get Started
Microsoft offers the Visual Studio software for free on the company’s website. You’ll need the .NET framework installed on your computer, but if you run Windows, you probably have the framework requirements. Visual Studio installs all the necessary software to get started with C# including the C# compiler and the .NET framework if you don’t have it.
A few advantages with Visual Studio will help you get started with the language. First, Visual Studio has an excellent debugger that works with website code, web and Windows services code and class libraries. You can step through your code to more easily find bugs and errors.
Second, Visual Studio has an excellent interface that includes color-coded identification for different code elements such as classes (light blue), primitive data types (dark blue), and strings (red). Visual Studio is flexible and allows you to change these color codes, but they are well known in the industry as the default colors.
IntelliSense is probably Visual Studio’s best feature. IntelliSense tries to “guess” what you want to type next, so you don’t have to fully type out all of your code. Start typing a C# method or property and IntelliSense lets you click “Tab” to finish the syntax without fully typing the code.
Start learning Visual Studio and C# today
Designing Applications and Understanding Classes
Classes are the main component for any C# or object oriented language. Classes are also the biggest hurdle for most people. Classes represent parts of your code, and as you display windows or views in your application, you call these parts as needed. For instance, using the car scenario, you might have a class that describes the engine. The engine can be on or off and the engine also gives the car the ability to move forward.
When you want to show the user a car moving forward, you probably need to call the engine class and its methods that move the car forward. In another window, you might want to show the car moving backwards. You would again call the engine class, except this time you’d call the method that moves the car in reverse.
This is one of C#’s advantages and all object oriented languages for that matter. You only need to create one engine class, and then this class can be called in various parts of your code. With linear code, you need to retype the same code in the execution file. With C# and object oriented code, you simply call the class and execute the parts of the class you need to represent the program’s action.
The classes you create are usually determined when you design the application. Designing applications takes some time to learn, because if you create a poor design, it can make engineering the application more difficult. You might even need to re-code several parts of your program if it is poorly designed.
The basic idea of design is to put yourself in the user’s shoes. What would you want out of the application? After you’ve figured out the basic functionality for the program, you design the classes. These classes usually entwine with your database design as well, but database design is another learning obstacle. While the classes represent your program parts, the database syncs with the classes to store the information.
For most C# programmers, a group of people determine the applications functionality, which makes it easier for the programmer. You take the functional design and turn them into classes. For instance, the functional requirements will tell you that the program is a car and the car needs to move forward, backward, make turns and turn off and on. You then take these functional requirements and use them to design your classes. In this example, you’d use the functional requirements to create methods in your engine class. The methods would represent each car action including the forward and back motion and turning the car off and on.
Building Applications with C#
You have several options when you learn C#, which makes it one of the leading languages to learn for people who plan to write a wide range of applications. Probably the most popular type of application you will eventually build is a web application. Web applications are usually written in MVC C#, but older styles such as web forms are still common.
C# is also a valuable language for writing services. Web services are applications that allow external users to call methods over the web. For instance, Twitter, Facebook and Salesforce all have web services. They are usually referred to as an API. You can write these APIs in C# and publish them to your website.
Windows services are small programs that run on servers or desktops. C# is also used to write these services that run in the machine’s background and execute code on a scheduled basis.
You can build and deploy all of these applications using Visual Studio, which compiles and publishes your app without any manual code copying or moving files to the target machine.
You first need to know the basics, so get started with C# fundamentals.
Learning the C# and Object Oriented Programming Style
With each job you have, you’ll be asked to follow coding guidelines. Most guidelines are universal among other development shops. The standards make it easier for other programmers to maintain your C# code after you’re finished.
To learn basic C# coding style, take note of how the syntax is formatted and presented when you watch programming videos. For instance, camel case is common for variables. Camel case is a format where the first letter is lower case, and each word following the first variable word is upper case. Classes always have upper case for each word in the class definition.
One common issue that most programmers face is understanding that C# (and any C derived language for that matter) is case sensitive. When you create a variable named “myvariable,” this is an entirely different variable from “MyVariable” or “myVariable.” If you get case sensitivity wrong, you wind up creating logic errors in your code. With Visual Studio, IntelliSense will prompt you for the correct variable syntax, which is one benefit to using C# and Visual Studio.
Learn practical C# coding styles and object oriented syntax.
Understanding the .NET Framework
The hardest part about C# is learning the .NET framework. The .NET framework is a large collection of libraries provided by Microsoft when you code in the C# language. Just like a large library, you don’t know where to find certain functions, classes and code you need to complete a project, so you have to look up these parts of your code. For instance, if you want to work with .NET’s XML library, you have to find its namespace.
A namespace is a group of C# .NET libraries that encompass a group of methods. You add these namespaces to the top of your code, so you can use the library functions. There is no way to search for namespaces other than Google or experience. Experienced C# coders will remember most namespaces to add them to their code. As a student or new C# coder, you’ll probably have to use Google. You can also purchase books that give you an overview and reference for main .NET library namespaces.
The .NET framework is huge, and you aren’t expected to know them by heart, even when you code in C# for years. Most developers know that you’ll need to look up namespaces during the day, but it helps to get your feet wet with the popular libraries.
Practice Makes Perfect
While learning C# is time consuming, the best way to learn more quickly and more efficiently is to practice. If you walk away from the language and stop practicing, you’ll find yourself learning all of the basics all over again. Just like any human language, you also need to practice when perfecting a machine language.
You can accomplish practicing using a number of methods. You can program small applications from ideas you come up with. As you program them, you come across certain hurdles that you solve, and this problem-solving helps you understand the language and how to fix certain bugs.
Videos are also a great way to keep your language skills up-to-date. Videos can teach you anything from the basics to more advanced techniques. Videos are also a good option when you do walk away from the language for too long and need to brush up on the basics.
Finally, walking through steps while learning (such as Udemy.com videos) and performing the syntax on your own will help you understand how to work with C# and its libraries. You can’t just watch videos and learn a language. You need to practice. Install Visual Studio when you watch the videos and walk through the steps with the instructor. This is much more beneficial than just watching the videos.
C# is a valuable language to learn, but it’s also fun! C# also gives you an advantage when you want to learn other languages in the future such as C, C++ or Java. You can create a wide range of applications when you know how to code in C#, so you will have an invaluable asset on your resume during your job hunting.
Updated by Jennifer Marsh
Top courses in C#
C# students also learn
Empower your team. Lead the industry.
Get a subscription to a library of online courses and digital learning tools for your organization with Udemy for Business. | https://blog.udemy.com/learning-c-sharp/ | CC-MAIN-2021-21 | refinedweb | 2,068 | 64.1 |
How to Download a Local Copy of DynamoDB from AWS
In contrast to many of the other AWS services, you can actually get a local copy of DynamoDB that you can work with offline. Using DynamoDB offline, on your local machine, means that you can try various tasks without incurring costs or worrying about potential connectivity issues.
In addition, you can use DynamoDB in a test environment, in which you use a copy of your data to mimic real-world situations. Here, you discover how to obtain and install a local copy and then use your copy with Python to perform a test.
Performing the installation
To start using a local copy of DynamoDB, you need Java installed on your system because Amazon supplies DynamoDB as a
.jar file. You can obtain a user-level version of Java. However, if you plan to perform any customizations or feel you might need debugging support, then you need a developer version of Java (the Java Development Kit or JDK). Make sure to get the latest version of Java to ensure that DynamoDB works as expected.
The next step is to download a copy of DynamoDB and extract the files in the archive. Note that you can get versions of DynamoDB that work with Maven and Eclipse. These instructions assume that you use the pure Java version and that you’ve extracted the downloaded archive to a folder named DynamoDB on your hard drive. You may need to bury the archive a level or two deep, but make sure that the path doesn’t contain spaces. The main file you deal with is
DynamoDBLocal.jar.
Starting DynamoDB locally
Open a command prompt or terminal window and ensure that you’re in the location where you extracted the DynamoDB archive (using the
CD command). Type java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar –sharedDb and press Enter to start DynamoDB. Depending on your operating system, you see some startup messages.
When working with Windows, you should also see the message below (other platforms may show other messages). This firewall message tells you that port 8000 isn’t currently open. To make DyanmoDB work properly, you must allow access. If you want to change the port, use the -port command-line switch with a different port number. The page that contains the DynamoDB links also has a list of other command-line switches near the bottom, or you can use the -help command-line switch to see a listing for these command line switches locally.
Overcoming the Windows
OSError issue
When working with Windows, you may encounter a problem that involves seeing an
OSError message output for some Python calls, even if your code is correct. The problem is with the
tz.py file found in the
\Users\<UserName>\Anaconda3\Lib\site-packages\dateutil\tz folder of your Anaconda setup (the same file exists for every Python setup, but in different folders). To fix this problem, you must change the code for the
_naive_is_dst() function so that it looks like this:
def _naive_is_dst(self, dt):
# Original Code
timestamp = _datetime_to_timestamp(dt)
#return time.localtime(timestamp +
time.timezone).tm_isdst
# Bug Fix Code
if timestamp+time.timezone < 0:
current_time = timestamp + time.timezone +
31536000
else:
current_time = timestamp + time.timezone
return time.localtime(current_time).tm_isdst
Fortunately, you don’t have to make the change yourself. You can find the updated
tz.py file in the downloadable source, as explained in the Introduction. Just copy it to the appropriate folder on your system.
At this point, you can begin using your local copy of DynamoDB to perform tasks. | http://www.dummies.com/programming/cloud-computing/amazon-web-services/download-local-copy-dynamodb-aws/ | CC-MAIN-2018-22 | refinedweb | 597 | 55.64 |
Mass Effect Andromeda romance guide: from casual banging to winning hearts romance
– Reyes Vidal romance
– Keri romance
Dead end flirtations
– Lexi “romance”
– Paaran Shie . Pursuing romantic relationships unlocks unique spins on the friendship cutscenes you’d otherwise view after securing a companion’s loyalty – and then another, more intense romance scene to cap it all off.
And yes, some of the final romance scenes in Mass Effect Andromeda are pretty steamy. Unfortunately, your questions about alien genitals will not be answered. Lotta butts though.
But even if the famous banging isn’t your thing, the romances in Mass Effect Andromeda range from sweet flirtations to friends with benefits to tender committed relationships, so you’ll have a fine time mashing the flirt button.
You can juggle multiple relationships, up to a point, as you navigate the tangled world of flirtations, friends with benefits, side pieces and full relationships detailed below..
Although there are some exceptions (described below), in general, Mass Effect Andromeda romances progress in the same way. You need to flirt a minimum of three times with your intended and then complete all their loyalty missions and side quests. Have a few more conversations with them, and they’ll likely bring up the possibility of committing. It’s an obvious yes no moment, with Ryder having to answer something like “I want to be with you” or “sorry, I’m just playing”.
Almost all the loyalty and therefore romance missions in Mass Effect Andromeda are gated by critical path progress, so while you should make sure you check in with your squaddies for flirt opportunities between major story beats, don’t worry if you’re not getting anywhere with your favourite after a few priority ops. Sweetie probably just wants you to crack on with the main quest.
If you’d like to see every romance in Mass Effect Andromeda, our hot tip is to make a save before embarking on crew missions or speaking to them after they email asking to meet. These are the most likely instances for a commitment point. If you simply ignore those quests and scenes until right before the final mission’s point of no return (when it says “embark on this mission”) you can have every romance right on the brink, then split off side saves to see everything.
Less talk, more action: while we’ve divided the information below by the character’s own romantic preferences, we’ll start with a handy summary of the options available to each Ryder twin.
Find more tips, tricks and explanations in our Mass Effect Andromeda guide and walkthrough.
Male Ryder
- Cora
- Peebee
- Vetra
- Jaal
- Avela Kjar
- Gil
- Reyes Vidal
- Keri (fling only)
Female Ryder
- Liam
- Jaal
- Peebee
- Vetra
- Suvi
- Reyes Vidal
- Keri (fling only)
Read on to discover how to initiate Mass Effect Andromeda’s romances, how to advance them, when you’ll reach the point of no return, and how they cancel each other out.
Male Ryder exclusive romance options
Cora
Breaking the hearts of lesbians everywhere, Cora is straight as an arrow; she even turned down an Asari once, even though they’re not technically female. This good news for dude Ryders, though, as she’s all about that … whatever it is people see in him. Begin your assault on Cora’s heart once you have access to the Tempest and leave the Nexus for the first time; you can find her in the tech lab or the cargo bay.
To win Cora’s heart, you’ll need to flirt with her regularly throughout the game, until her loyalty mission unlocks. This is a multi-step quest that takes place over several missions and requires critical path progress to complete. It comes to a head when you track down the lost Asari ark.
Keep on flirting and you’ll cement the relationship with a kiss, and enjoy a couple of other exclusive romance scenes before Ryder and Cora make a very steamy first contact.
Commitment point:. Flirt with him regularly and you’ll soon find yourself with the option to pursue a more than friendly relationship. You can get the ball rolling as soon as you have access to the Tempest and have left the Nexus for the first time; talk to him in the engineering section..
Commitment point: After three flirtations and completing Hunting the Archon, Gil will email you about meeting on Prodromos, where you can declare your intentions.
Avela Kjar
An Angaran historian Ryder meets on Aya, Avela enjoys flirting and will even mash space faces with you on two occasions if you determinedly turn on the charm every time you complete one of her side quests. It’s a weird sort of romance – there’s no commitment conversation or steamy love scene, and we’re not sure it blocks you from other relationships. and You’ll need to complete the secondary quests Recovering the Past and Forgotten History to advance the romance, which culminates after you set out on your final critical path mission and receive an email from your beloved.
Female Ryder exclusive romance options
Liam
Liam is kind of the “default” romance option for a female Ryder, as he’s the first of the squad mates to offer flirting opportunities. His loyalty mission is available relatively early on, and provides plenty more opportunities to indicate your interest. You can even miss a few and still maintain his interest. Begin your flirtation after you gain access to the Tempest and leave the Nexus for the first time; he generally hangs out in a room off the cargo bay.
You can pursue Liam without fear of closing off other romance options right up until after his loyalty mission, so if you’d care to have a quick pash onboard the Tempest, go for it.
Once Liam’s loyalty mission is done, you have the option to lock in a full relationship or let him down gently; when the dust settles on that adventure, the cheerful crisis specialist is yours for the taking. Continue to enjoy his company over several scenes and cement the deal after you visit Eos together in response to an attack.
Commitment point: After completing Liam’s loyalty mission he’ll ask you to meet him at Podromos. This is where you’ll have to make up your mind for good.
Suvi
Scottish accents, amirite? Suvi is a full romance option and pretty easy to complete as long as you’ve turned up in the correct gender. Of all the romance options, she’s the one a female Ryder finds the most intimidating, regularly babbling like a fool. It’s good fun. Suvi sits opposite Kallo on the bridge, so she’s always handy when you’re in the mood to flirt.
To romance Suvi, just flirt with her whenever the option arises: you don’t need to agree with her opinions in order to hold her interest, since she has a mature attitude to debate.
Keep an eye on your email and eventually you’ll receive a missive which invites you to kick off the full romance. You know what to do – or we certainly hope you do.
Commitment point: Flirt with Suvi, then talk to her on the Tempest after completing Hunting the Archon if you’ve decided she’s the one for you.
Male or Female Ryder romance options
Peebee
Peebee is an Asari, and since Asari are mono-gendered and reproduce via a slightly scary form of mind meld, they don’t seem to care what you’ve got in your pants. The most enlightened race in the galaxy, indeed. After you recruit Peebee on Nexus.
When Peebee opens negotiations, you have two options: you can opt for a casual, NSA arrangement allowing you to discreetly indulge in friends-with-benefits whenever you fancy it. In either case, you can keep on carrying on in the airlock, and progress to something more serious later.
Commitment point: Flirt with Peebee a few times to trigger the friends-with-benefit conversation, and either agree to a no strings relationship or tell her you want strings. Then complete her loyalty mission and debrief with her. Leave the Tempest and return for a chat on whether you’re willing to commit.
Vetra
Vetra is a slow burn romance option, as the drifter takes a long time to open up to you. You’ll have only very limited opportunities to flirt with her prior to her loyalty mission – and you must seize every opportunity to lock in her interest. No matter what you do, Vetra’s romance won’t move past flirtation until Priority Ops 6, so be patient.
Don’t be discouraged if she doesn’t seem that receptive; although Vetra is more interested in male Ryders than female, she can be won over. Seems aliens have heard of the Kinsey scale. Look for Vetra in a storage room off the Cargo bay, although after her initial chat with you, she tends to spend her time in the crew quarters with Drack.
To cement things with Vetra, you need to do at least the following flirting:
- Choose “you’re intense” when you first chat on the Tempest.
- Ask about “anyone special” in a subsequent chat on the Tempest.
- Say “I’m here for you both” during Vetra’s loyalty mission.
- Tell Vetra you’re “dreaming of someone”
- Check your email for an invitation, then have Vetra in your squad on a Kadara visit.
after objective Hunting the Archon.
There are several cute scenes in Vetra’s romance after this, so go ahead and commit to the full relationship if you’re ready to forsake all others.
Commitment point: Vetra is only interested in commitment, so when you meet on Kadara and have a pash, you’ll have to make up your mind.
Jaal
Although the Angaran people display their emotions openly, they don’t necessarily act on them, and Jaal’s romance takes time to lock in. Keep flirting with him, but look out for the “couple” option that replaces the usual heart-shaped “flirt” one. Remember Angarans value honesty and emotional openness, so don’t be shy in declaring your intentions. You can start flirting once Jaal joins the squad; look for him in a room off the central research station area.
Jaal’s loyalty mission is required to commit to the relationship, and like Cora’s, it plays out over several missions and is reliant on main quest progress. Once that’s done, you’ll need to keep flirting until the main quest ramps up to its conclusion in order to fully seal the deal on a later visit to Aya.
Before you make your mind up on romancing Jaal, we recommend completing a small quest Liam sets you to craft a special requisition. Complete the crafting assignment and then visit Liam in the cargo hold to witness a conversation between Liam and Jaal offering, uh, revelations on them both. It’s always good to window shop before you get out your credit card.
Commitment point: If you’ve been flirting with Jaal and have completed his loyalty mission, he’ll email you about going to Hvarl. Put him in your squad and follow through for a chance to talk about your relationship.
Reyes Vidal
There’s been a lot of confusion about whether Reyes is available to both Ryder twins, but we’ve confirmed: he’s keen on both of them. Nevertheless, this is a weird relationship, as it is entirely possible to write Reyes out of the plot of your Mass Effect Andromeda playthrough altogether.
You can flirt with Reyes over and over again during the long quest to settle Kadara, which kicks off in Hunting the Archon. Whether you meet with him during that quest, you can check in with him after the Vehn interrogation at Tartarus Bar. Pursue the quests Murder in Kadara Port, Divided Loyalties, Precious Cargo and Night Out to advance things, and then meet with Sloane to bring the questline to its conclusion.
No spoilers, but whether Reyes sticks around and commits to you depends on your actions and choices. It’s him or – well. You’ll see. We promise it’s obvious.
Keri (fling)
A journalist who interviews Ryder on multiple occasions over the course of the game, Keri breaks with series tradition by not being someone you want to punch. She won’t be willing to start a relationship with you while the interviews are ongoing, but if you keep up the charm you can enjoy a one night stand with Keri when ethical considerations are no longer an issue. This is available even if you’re involved with someone else; it won’t affect anything. Look for Keri on the Nexus, in the docking bay or operations, depending on how far you’ve advanced Task: Path of a Hero. She’ll email you when a new interview slot is available.
People who won’t bang you, even if you really try
Lexi
The Tempest’s resident doctor maintains a strictly professional relationship to the crew – mostly. While male and female Ryders can both attempt a flirtation, Lexi knocks that on the head right out of the gate. You’ll have to settle for pining from afar. Drack, on the other hand, might have a chance – despite also being a patient of Lexi’s. Guess Ryder’s just not her type.
Paaran Shie
When you return to Aya after winning the trust of the Angara, you’ll have the freedom of the city – and that means you can track down governor Paaran Shie in her office and let her know you think she’s well fancy. Unfortunately, this hard-working administrator has no time for such frivolities. The bar is just down the road a bit if you need to drown your sorrows. | https://www.vg247.com/2017/12/20/mass-effect-andromeda-romance-guide/ | CC-MAIN-2018-05 | refinedweb | 2,309 | 66.57 |
Created on 2011-09-06 16:28 by eric.araujo, last changed 2011-09-12 15:28 by eric.araujo. This issue is now closed.
The pydoc module has a cram function that could be useful to Python authors, if we made it public (textwrap sounds like a great place).
It is already available:
>>> import pydoc
>>> pydoc.cram('This sentence is too long to fit the space I have made available', 28)
'This sentenc...ade available'
def cram(text, maxlen):
"""Omit part of a string if needed to make it fit in a maximum length."""
if len(text) > maxlen:
pre = max(0, (maxlen-3)//2)
post = max(0, maxlen-3-pre)
return text[:pre] + '...' + text[len(text)-post:]
return text
It could be documented in place, or moved and imported into pydoc. I am +0 at the moment.
A few thoughts:
* no one has ever made a request for this
* different people may want to do it in different ways
(the formulas are hard-wired).
* the '...' connector is hardwired
(perhaps ' ... ' would look less odd).
* we should have a preference for keeping APIs small
(more to learn and remember)
* this is dirt simple string processing and not hard
for people to roll their own if the need arises
* if the API were to be expanded, perhaps it should
be as a part of a focuses, thoughtful effort to
provide a more generic set of text formatting
transformations perhaps modeled on deep experiences
with similar modules in other languages.
(as opposed to piecemeal additions that weren't
designed with a coherent vision).
This pretty well summarizes my vague feelings. I originally used a size 30 in my example, getting 'This sentence...made available' and then realized that it was a complete accident that I got complete words. If anything were made publicly available, I might like a more sophisticated algorithm. I think other things in pydoc are more worth of consideration. So I am now -.5 or more and would not mind closing this one of the four new pydoc exposure issues.
> if the API were to be expanded, perhaps it should be as a part of a
> focuse[d], thoughtful effort to provide a more generic set of text
> formatting transformations perhaps modeled on deep experiences with
> similar modules in other languages. (as opposed to piecemeal additions
> that weren't designed with a coherent vision).
That’s a very strong point.
Thanks for the opinions. | https://bugs.python.org/issue12914 | CC-MAIN-2021-49 | refinedweb | 404 | 65.32 |
“TCP/IP and Ethernet will not be accepted as a valid network implementation as SNA and Token Ring are our preferred standards.” – circa 1993 by nameless corporate Information Systems expert.
I was shocked when I had heard this, and images of ostriches with their heads in the sand immediately came into mind. I was new to my career, and attempting to challenge the old masters of the all mighty Information Systems department was considered blaspheme. After that I was labeled a heretic and could not persue any sort of transformation that would modernize technology or for that matter the business. Over the years the answers were always the same:
- “That will not meet our requirements.”
- “This goes against our security policy.”
- “We do not support that {tool, technology, process}.”
- “If it ain’t broke, why mess with it.”
These statements have been picking up recently from a variety of information technology professionals, and sadly from enterprise architects. To be frank, I fear that many IT departments or IT related businesses are at risk of losing relevance. The world is changing faster than ever, yet there is still this strange clinging to the old ways. The truth is that the Internet has changed business forever, but I believe the potential remains largely unexploited. Enterprise architect professionals cannot meet the demands of the next century if there is no strategy to address business transformation, device consumerization, cloud computing, mobility, and exploitation of data.
So why is IT stuck? The answer came to me while listening to a great presentation at Microsoft’s semi-annual Architecture Summit for employees. Norm Judah our Worldwide Chief Technology Officer for Microsoft Services delivered a compelling presentation on the topic of need versus desire. Over the years many of the roadblocks that were thrown up by IT were based on a personal desire of IT professionals and not on the actual needs of the business. This interesting duality also has forced me to think about truths versus beliefs. I thought of needs as being truths, where evidence was provided in support of the need or requirement. Desire was actually something more personal, having an emotional component that addressed a level of desired functionality.
Many years back, I was working on a large directory implementation that required domain naming services (DNS) in order to function. There was a “desired” functionality (or belief) from the DNS administrators to have the system by managed in a particular way which resulted in a structurally more complicated DNS namespace. As a result, a simple foundational DNS service had more structural elements which in turn made any solution that leveraged it even more complicated and difficult to exploit. I remember pushing my client at the time very hard on this, to examine what were the true “needs” of the DNS administrator. What was the true business need? Where was the evidence? I never got the “truths” I was looking for as I was clearly sticking a knife in someone for just asking the question. As a result, this client is stuck with a very structurally complicated solution that has limited behavioral flexibility. It satisfies the desired functionality of the IT provider but not for the consumer. The cost of the solution is now more expensive to operate, and the cost of lost opportunities because the lack of flexibility is astoundingly more.
This presents us a very interesting physiological exercise on the needs and desires of the self. Carl Jung made the following observation on the topic of Neurosis: “I have frequently seen people become neurotic when they content themselves with inadequate or wrong answers to the questions of life.”
If I may take this one step further: “I have frequently seen IT professionals become neurotic when they content themselves with inadequate or wrong answers to the questions of the business.” I have heard the following statements all within the last six months.
- “We will never use the cloud because of security.”
- “We are required to use {insert protocol here} because of our strict enterprise architecture governance.”
- “Yes we have monolithic system, but we do not have time to architect changes it so we just keep adding and re-engineering it.”
- “We do not need to talk to the business, I have been here 20 years and I know what they need.” (uhhhh, seriously?)
Of course there are real “needs” that are requirements and constraints that are well defined that relate to business policy, regulatory requirements, human safety, privacy, and security. They are usually supported by economic data around value, cost, risk, and opportunity. These are rooted in truth because there is the evidence that they are required. These constraints must be surfaced and any “complicatedness” that is introduced into the solution may be absolutely necessary. On the other hand, adding elements based on notion that is based on “desired” behavior of the technology provider has to be carefully scrutinized as to not artificially introduce complications. As a result: the desire to build something from scratch, the desire to use the coolest new thing, the desire to buy a tool, the desire to be in control, etc. These all leave behind a cacophony of spaghetti-like of solutions with a small number of business capabilities actually delivered and large number technology configurations. Yes you are stuck.
This desire fuels us to build systems that are overly complicated. I am careful not to use the word complex, as there is confusion on the definitions of complicated versus complex. Complicated systems fall in the realm of known and predictable. Complex systems are not always known and not always predictable. For example, a modern passenger airplane is very complicated due to the massive number of elements and variables, mostly of which are tightly managed and controlled. The concept of passenger air travel is very complex as many elements and variables are loosely managed and sometimes outside the possibility of control.
As an engineer I hated randomness. I wanted control. Most IT professionals are still in the world of control based on this desire. Is there really a need for control? Or is this just to satisfy a desire? As an architect, I have grown to appreciate the value of a certain amount of entropy in the systems I am involved with as this allows for emergence; and in turn plants the seeds for business and user innovation. Therefore I have to be comfortable to give up some aspects of control. Standards are important to keep complications to a minimum, but at some point, they can be stifling if they are based on the desires of the provider. I cannot tell you how often I have seen an under exploited solution because of what I call over governance. Therefore I do not mind a certain degree of complexity with respect to the user freedom. But I vigilantly question the motives of an overzealous IT professional, developer, or manager when a solution gets over-engineered due to desire.
It is my belief that enterprise architecture must propose and help build solutions that are simple, yet can evolve and be fully exploited as business demands change. This requires and understanding that discriminates needs from desires, truths from beliefs. This is a conversation that I have with my clients when discussing true business transformation. Desire comes with a price, and we clearly have to understand the cost and value associated with it. Most often, but not always, desired functionality coming from outside of IT most likely will generate value; desire from inside most likely will have an associated cost.
My goal as an architect is to generate enough desire for the consumer (business) while balancing the needs of the provider (IT). Business moves fast, and technology is moving even faster. Managing the mess in the same way we always have will clearly set us up for failure. The definition of insanity according to Albert Einstein was “Doing the same thing over and over again expecting a difficult result.”
The discipline and professionals of enterprise architecture cannot be plagued with this type neurosis. As architects we need to be advocates of the needs of the providers and the consumer’s desires of information technology. Additionally, there is a characteristic of enterprise architects that is often lacking today. The quality of selflessness is one that enterprise architects must come to terms with. Can you as an architect put the desires of your customer before your own? If we are going to address the opportunities of device ubiquity, big data, hyper mobility, and cloud computing; we must all think differently or lose relevancy like SNA and Token Ring. May I interest you in taking a transoceanic ride in a steamship? I thought so.
As always I am interested in your thoughts.
Disclaimer: The opinions and views expressed in this blog are those of the author and do not necessarily state or reflect those of Microsoft Corporation. | https://blogs.msdn.microsoft.com/zen/2012/05/07/modernizing-enterprise-architecture-address-the-neurosis-of-it/ | CC-MAIN-2016-44 | refinedweb | 1,482 | 53.41 |
Receiving
Non-blocking
C | FORTRAN-legacy | FORTRAN-2008
MPI_Irecv
Definition
MPI_Irecv stands for MPI Receive with Immediate return; it does not block until the message is received. To know if the message has been received, you must use MPI_Wait or MPI_Test on the MPI_Request filled. To see the blocking counterpart of MPI_Irecv, please refer to MPI_Recv.
Copy
Feedback
int MPI_Irecv(void* buffer, int count, MPI_Datatype datatype, int sender, int tag, MPI_Comm communicator, MPI_Request* request);.
- request
- The non-blocking operation handle.
Returned value
The error code returned from the non-blocking receive.
- MPI_SUCCESS
- The routine successfully completed.
Example
Copy
Feedback
#include <stdio.h> #include <stdlib.h> #include <mpi.h> /** * @brief Illustrates how to receive a message in a non-blocking fashion. * @details This application is meant to be run with 2 processes: 1 sender and 1 * receiver. The receiver immediately issues the MPI_Irecv, then it moves on * printing a message while the reception takes place in the meantime. Finally, * the receiver waits for the underlying MPI_Recv to print the value received. **/ int main(int argc, char* argv[]) { MPI_Init(&argc, &argv); // Get the number of processes sends the message. int buffer = 12345; printf("[Process %d] I send the value %d.\n", my_rank, buffer); MPI_Ssend(&buffer, 1, MPI_INT, RECEIVER, 0, MPI_COMM_WORLD); break; } case RECEIVER: { // The "slave" MPI process receives the message. int received; MPI_Request request; printf("[Process %d] I issue the MPI_Irecv to receive the message as a background task.\n", my_rank); MPI_Irecv(&received, 1, MPI_INT, SENDER, 0, MPI_COMM_WORLD, &request); // Do other things while the MPI_Irecv completes. printf("[Process %d] The MPI_Irecv is issued, I now moved on to print this message.\n", my_rank); // Wait for the MPI_Recv to complete. MPI_Wait(&request, MPI_STATUS_IGNORE); printf("[Process %d] The MPI_Irecv completed, therefore so does the underlying MPI_Recv. I received the value %d.\n", my_rank, received); break; } } MPI_Finalize(); return EXIT_SUCCESS; } | https://www.rookiehpc.com/mpi/docs/mpi_irecv.php | CC-MAIN-2019-43 | refinedweb | 304 | 59.6 |
python API for roslaunch with parameter
I want to run a roslaunch file with parameter, I follow this tutorial:
import roslaunch uuid = roslaunch.rlutil.get_or_generate_uuid(None, False) cli_args = ['/home/m/catkin_ws/src/rl_nav/launch/nav_gazebo.launch','maze_id:=2'] roslaunch_args = cli_args[1:] roslaunch_file = [(roslaunch.rlutil.resolve_launch_arguments(cli_args)[0], roslaunch_args)] parent = roslaunch.parent.ROSLaunchParent(uuid, roslaunch_file) parent.start()
But when I run this script, there is an error:
TypeError: not all arguments converted during string formatting
Why does it and how can I solve it? Thank you.
Hey, did you ever find a solution for this problem? I am having the same issue. Cheers!
Any update?
I have the same problem. There seems to be discrepancy between the documentation and the code. The code expects a list of strings, but the documentation indicates using a list of tuples of strings. | https://answers.ros.org/question/318117/python-api-for-roslaunch-with-parameter/ | CC-MAIN-2021-43 | refinedweb | 138 | 52.26 |
.
Back. Our co-founder Torkel demo’d this capability on stage with Intel, and you can expect an update it this month, just in time for Monitorama.
The plugin is a cool prototype, but our vision is for Grafana to be able to “command and control” an entire fleet of Snap servers. We quickly realized that there’s a missing piece to achieving that goal, so I started working on a new component of our stack.
Like everything at raintank, it’s been developed in the open, so the code is available on GitHub. 218 commits, 38 days, and one newly born baby boy later, I’m excited to share some details on how things are shaping up!
What are we trying to do?
Here are the highlights of requirements we are trying to hit:
- Allow each Snap agent to be able to register its presence and its capabilities, so that Grafana has a complete picture of your entire fleet.
- Easily deploy and manage tasks on an arbitrary number of agents
- Support pluggable checks, so that customers can extend the capabilities of the system to collect new metrics.
- Support agents running on servers that are behind secured networks, and need to use proxy servers to access Internet-based services.
- Support multitenancy; key for our OpenSaaS deployment model.
- Accomplish our “command and control through Grafana” vision, but not at the expense for people using configuration management.
The “tribe” capabilities in Snap seemed like a good fit for some of this, but we decided to go ahead and create a light layer on top of Snap that is optimized for our Grafana-related goals.
So what’d we build?
There are 2 main components in the system: TaskServer and TaskAgent. Both of these components compliment Snap itself.
TaskServer is a central service for tracking which tasks are running, which tasks should be running, and which metrics are available, from across your entire Snap fleet.
TaskAgent is a small daemon that sits on the same server as Snap. Snap itself is pretty ephemeral, so TaskAgent syncs data between Snap and TaskServer. We’ll be working with our friends at Intel to figure out whether this capability (or perhaps part of it) belongs in Snap itself.
How does it work?
The process flow of the system is quite simple.
- When the TaskAgent starts up, it queries the local Snap daemon to get the list of supported metrics (ie. the metric catalog).
- The TaskAgent then connects to the TaskServer over a web socket, and publishes the metric catalog, which is stored by TaskServer.
- The TaskServer sends a TaskList event to the TaskAgent, detailing all of the Tasks that should be running.
- The TaskAgent syncs this TaskList with Snap, creating tasks that are missing, recreating ones that have changed, and deleting tasks that should not be there.
- Anytime a new ad-hoc task is created, updated or deleted by a user in Grafana, the TaskServer notifies all relevant TaskAgents so that they can synchronize the change with Snap.
- Anytime a new Plugin is loaded or unloaded into Snap, the TaskAgent updates the TaskServer with the updated metric catalog.
This service provides a layer of abstraction which allows Grafana to be aware of and provide management for individual tasks in a responsive and highly scalable way, without the need to talk to each Snap server individually, or being explicitly told about your entire fleet.
Through the use of a task “routing” feature users can define a single task that is deployed to multiple Snap Agents. In addition to just storing the metric catalog of each Snap server users are also able to assign “tags” to their agents and these tags can be used in the task routing. For example, a user could create a task to collect MySQL metrics from all Snap Agents that have the tag “mysql”. If the user deploys a new mysql server it can automatically receive the task definition and start collecting metrics.
Deploying the Task is where the power of this platform shines. The task definition is a simplified version of the Snap task definition. The one thing that you may notice missing is the “process” and “publish” workflow items available for Snap tasks. These are currently hard coded to our own TSDB backend, but we will be opening this up over time to account for all backends.
Task Schema:
{ "name": "Ping raintank", "interval": 10, "route": { "type": "byTag", "config": ["foo", "bar"] }, "metrics": { "/worldping/*/*/ping/*": 0 }, "config": { "/worldping": { "hostname": "", "timeout": 5 } }, "enabled": true }
- name: Unique name for the task.
- interval: The interval the task should run at.
- route: This is how we determine which Agents a task should run on. The route Type can be one of
- ByTag: Route the task to all Agents that have one of the listed tags.
- ById: Route the task to the specific Agent ids listed.
- Any: Route the task to any one Agent that supports the metric. If the Agent running the task fails, the task will be rescheduled on another agent that supports the metrics, if one exists.
- metrics: this is the same format as the “Workflow.collect” section in a Snap Task. The Key:Value pairs represent a metric:version pair. A version of 0, indicates that the latest version should be used. Metric names can include wildcards.
- config: This matches the “workflow.config” section of a Snap task. The top level key is a config namespace, and the child Key:Value pairs are variable:value pairs.
- enabled: flag to allow tasks to be disabled, which will result in the Snap task being removed. Re-enabling a task will cause the task to be recreated and data collected again.
As well as making it easy to deploy tasks for collecting metrics, a goal of the platform it to make it easy to explore the agents and metrics that are available for collection.
anthony:~$ curl -s -H "Authorization: Bearer not_very_secret_key" "*ping*"|json_pp|grep namespace "namespace" : "/worldping/*/*/ping/avg", "namespace" : "/worldping/*/*/ping/loss", "namespace" : "/worldping/*/*/ping/max", "namespace" : "/worldping/*/*/ping/mdev", "namespace" : "/worldping/*/*/ping/median", "namespace" : "/worldping/*/*/ping/min", anthony:~$
Users can also explore their agents to find the different servers that are capable of collecting certain metrics. eg. to list all agents that have the tag “bar” and can collect metrics starting with “/worldping”
anthony:~$ curl -s -H "Authorization: Bearer not_very_secret_key" "*"|json_pp { "body" : [ { "name" : "demo1", "enabled" : true, "public" : true, "onlineChange" : "2016-06-07T22:27:56.947079752Z", "tags" : [ "bar", "baz" ], "online" : true, "created" : "2016-06-07T22:13:17Z", "id" : 1, "enabledChange" : "2016-06-07T22:13:17Z", "updated" : "2016-06-07T22:16:50Z" }, { "tags" : [ "bar", "foo" ], "online" : true, "created" : "2016-06-07T22:13:17Z", "enabledChange" : "2016-06-07T22:13:17Z", "updated" : "2016-06-07T22:15:25Z", "id" : 2, "name" : "demo2", "enabled" : true, "public" : true, "onlineChange" : "2016-06-07T22:28:57.457744733Z" } ], "meta" : { "code" : 200, "message" : "success", "type" : "agents" } }
Cool! How can I use this?
The project is in a very early state, and we want to be careful how we open things up. Under-promise and over-deliver is our mantra.
We are currently testing the new Task Management service with a select group of clients, with apps that collect metrics from third party SaaS services (such as GitHub, Google, NS1, etc). This use case allows us to control the entire experience, and host the entire stack ourselves.
We also plan to transition our (Worldping app)[] to use this platform. Existing ICMP, DNS, and HTTP/S checks are being [refactored as Snap plugins](. Our probe software will be replaced with Snap itself! We hope to complete this work by the end of the month.
Finally, our ultimate goal: opening up the flood gates to allow users to use the platform for monitoring their own infrastructure and applications. We’ll be focusing on creating the following initial Snap-powered apps:
- Mirantis OpenStack (OpenStack per-tenant metrics)
- MySQL (internal perf metrics)
- InfluxDB (internal perf metrics)
- Elasticsearch (internal perf metrics)
- Cassandra (internal perf metrics)
Many of these apps already have corresponding Snap plugins already available. With the recent launch of Grafana.net, the final release of Grafana 3.0, and the addition of TaskServer and TaskManager, we can create Snap-powered Grafana apps that meld Snap plugins with Grafana Dashboards.
We’ve still got a ways to go with stabilizing this new piece of our stack, so we’re considering a phased approach that will allow us to publish the first set of apps sooner rather than later.
We’re really excited about where we’re going with Snap; our goal is to create a 100% open source experience that rivals the best commercial offerings.
Hopefully, we’re well on our way to doing that. If you’re interested in getting a demo of what we’re building, please get in touch at hello@raintank.io.
Let’s democratize metrics together! | https://grafana.com/blog/2016/06/08/democratizing-metrics-with-snap-an-update/ | CC-MAIN-2022-27 | refinedweb | 1,470 | 61.87 |
matplotlib with PyQt GUIsJanuary 20th, 2009 at 10:00 pm
A
The embedding of matplotlib into PyQt is relatively smooth and similar to wxPython, with a couple of tiny differences (which mainly have to do with how the two frameworks handle parenting of GUI elements). Note that I’ve reimplemented only one of the demos, but it’s really sufficient for getting started with combining the two libraries.
Related posts:
March 13th, 2009 at 15:58
That’s cool
However, I’m missing on key feature for my application:
I need to plot a image with a pyqt application. That is no pb AFAICS.
The problem is that I need to be able to click on the image to get the corresponding pixel value ((X,Y) and the value).
A display “x=, y=” within the matplotlib backend is not sufficient.
I need to be able to use these values into my pyqt code.
For instance, each time I click on the image, I show the pixel value within a qlineedit.
Is it even possible? If not, I’m in trouble with pylab+pyqt.
March 13th, 2009 at 17:06
For that I doubt you need matplotlib. Qt has excellent support for showing images, for example with something like QGraphicsView. I’m not sure this is exactly the class you need, but it’s probably not very far from the truth.
March 15th, 2009 at 00:56
It is true that QGraphicsView could do the job when I only want to plot an image.
However, I do need matplotlib because I need all the plotting capabilities of matplotlib (latex, alpha, shapes overplot). OK ok I can do that within the
QGraphicsView framework but I would prefer not to redevelop part of matplotlib in qt4
The only thing I’m missing is a link to get the (x,y,pixel) values under the mouse into my python code.
If it is not possible at all, then I’m going to use QGraphicsView to display images and matplotlib to display functions and scatter plot. I would be the backup solution…but I would lose the nice and easier_to_use subplot capabilities of matplotlib (mixing images and scatter plot for instance).
Do you see a chance to get this link?
March 15th, 2009 at 01:05
Wait wait wait…I’m a dumbass.
I haven’t seen matplotlib.backend_bases.PickEvent … I’m pretty sure that is what I need…
March 27th, 2009 at 01:21
Unfortunately, there is a bug
:
If you zoom on the image using the standard zoom button of the matplotlib toolbar, then the axes go back to there “nominal” state each time the on_draw even is called (e.g. each time you cover/uncover the window).
I’m not sure how it should be fixed…the on_draw even should only redraw a precomputed image…but I have no idea how the code should looks like so far.
April 16th, 2009 at 03:52
I enjoyed your example but I had to make a small change for the matplotlib API’s that are present on Kubuntu 8.04:
#box_points = event.artist.get_bbox().get_points()
box_points = event.artist.get_verts()
msg = “You’ve clicked on a bar with coords:\n %s” % str(box_points)
My API for matplotlib.patches.Rectangle has no get_bbox().
cheers!
June 6th, 2009 at 04:27
Cool – Just the example I was looking for
September 2nd, 2009 at 22:04
Thank you very much for the matplotlib-pyqt interface code. Very helpful
October 8th, 2009 at 10:33
You example doesn’t seem to run with matplotlib 0.99.1.1 and pyqt 4.6 and leads to a segmentation fault. The problem seems to come from the NavigationToolBar (if I comment the 2 lines concerning the NavigationToolBar it’s OK). I guess, this problem comes from the qt4agg backend of matplotlib which has not been updated to the new pyqt4.6 version.
December 15th, 2009 at 19:06
Is it possible to extend the mouse interaction to the axis labels, for instance to pop up some resource browser to edit range, number of decimal points, etc. The current mouse picks seem to only work on the actual bars in the plot area. Any hints please?
December 16th, 2009 at 06:16
@Ian, it could be possible by exploring the event types
mpl_connectaccepts. Here‘s a sample documentation link on the subject.
May 20th, 2010 at 21:22
Tondu – I find that:
will give you the toolbar back.
… and thanks to Eli for this outline of how to embed matplotlib into pyqt.
–Paul
June 14th, 2010 at 14:13
thanks so much, very helpful for integrating matplotlib, would have taken ages without an appropriate demo.
October 5th, 2011 at 16:26
How can I print the plot using QPrinter ?
October 5th, 2011 at 17:04
Nils,
Unfortunately I have absolutely no experience working with
QPrinterand printing in Qt in general. Hopefully someone else will be able to help you. IIRC
matplotlibhas capabilities to export its plots to images and other format – maybe this can be the path.
October 5th, 2011 at 17:13
Eli,
matplotlib provides
How do I proceed if I have stored the plot ?
October 5th, 2011 at 22:18
Nils,
Well, then I guess you can load the image and paint it onto
QPrintersince it’s a
QPaintDevice. Searching through the examples that come with PyQt there are a lot of hits on
QPrinter, so you may find some useful stuff there.
October 8th, 2011 at 18:48
Very useful example.
Thank you very much for your effort
October 26th, 2011 at 10:46
Thanks! This is incredibly useful.
December 27th, 2011 at 07:43
Hello,
First of all, amazing tutorial! However, I am trying to get the figure to dynamically resize to fit its QWidget container.
Any suggestions??
Tyler
January 19th, 2012 at 10:17
I added a printer button to the navigation toolbar in matplotlib.
However the quality of the print-out is poor using QPrinter.
Moreover the frame including the navigation toolbar is printed.
How can I omit that ?
How can I improve the quality of the hardcopy ? If I save the figure before, the print-out of the image is o.k.
Can anybody offer any solutions to this ?
import sys
import numpy as np
from PyQt4.QtCore import *
from PyQt4.QtGui import *
#from xlwt import *
from pylab import plot, show
from matplotlib.figure import Figure
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.backends.backend_qt4agg import NavigationToolbar2QT as NavigationToolbar2
class ViewWidget(QMainWindow):
def __init__(self):
QMainWindow.__init__(self)
# create a simple main widget to keep the figure
self.mainWidget = QWidget()
self.setCentralWidget(self.mainWidget)
layout = QVBoxLayout()
self.mainWidget.setLayout(layout)
# create a figure
self.figure_canvas = FigureCanvas(Figure())
layout.addWidget(self.figure_canvas, 10)
# and the axes for the figure
self.axes = self.figure_canvas.figure.add_subplot(111)
x = np.linspace(0.,2*np.pi,100)
self.axes.plot(x,np.sin(x),label='sin(x) ')
self.axes.plot(x,np.cos(x),label='cos(x) ')
self.axes.figure.set_facecolor('white')
self.axes.grid('on')
self.axes.legend()
# add a navigation toolbar
self.navigation_toolbar = NavigationToolbar2(self.figure_canvas, self)
layout.addWidget(self.navigation_toolbar, 0)
self.print_button = QPushButton()
self.print_button.setIcon(QIcon("Fileprint.png"))
self.print_button.setToolTip("Print the figure")
self.navigation_toolbar.addWidget(self.print_button)
self.connect(self.print_button, SIGNAL('clicked()'), self.goPrinter)
self.quit_button = QPushButton("&Quit")
self.navigation_toolbar.addWidget(self.quit_button)
self.connect(self.quit_button, SIGNAL('clicked()'), self.close)
def goPrinter(self):
printer = QPrinter()
anotherWidget= QPrintDialog(printer,self)
if(anotherWidget.exec_() != QDialog.Accepted):
return
p = QPixmap.grabWidget(self)
printLabel = QLabel()
printLabel.setPixmap(p)
painter = QPainter(printer)
printLabel.render(painter)
painter.end()
show()
if __name__=="__main__":
app=QApplication(sys.argv)
mw=ViewWidget()
mw.show()
sys.exit(app.exec_())
January 20th, 2012 at 15:54
Eli,
here comes the corrected version
February 29th, 2012 at 12:10
I tried to combine an event with annotation.
However, the annotation is not visible, when I click on the curves.
How can I resolve the problem ?
The code is available at
Any pointer would be appreciated.
February 29th, 2012 at 13:04
Nils,
I no longer support this code. Sorry. I suggest you take specific questions elsewhere like StackOverflow or the PyQt/matplotlib forums/mailing lists.
March 12th, 2012 at 19:20
Hi
I get a segmentation fault when I try to save the file. Can you guide me on what the problem could be.
May 14th, 2012 at 13:14
Thank you so much! This saved me so much time of figuring out the right API calls to make
September 27th, 2012 at 10:53
Great work Eli and thanks for making it available to the Python community
One simple question --- which would you choose for building a GUI, wxPython or PyQt?
Any comments positive or negative on wxPython and PyQt would be greatly appreciated.
September 27th, 2012 at 15:11
@Virgil,
I suggest you do a search on my site for PyQt – I’ve written a bit about the reasons for switching to it from wxPython a long while ago.
September 19th, 2013 at 10:49
I’ve got a weird case. The GUI app is basically a form where I fill in parameters. Signal uses .plot() to print a chart in a new pop-up window. Works flawlessly when I run from a python interpreter. But when I launch the GUI app from clicking on it’s desktop icon (like a windows user is prone to do), the GUI is visible, but the expected .plot window never materializes. Not even sure how to debug it since it works without issue from the Spyder built-in interpreter. Any ideas?
January 2nd, 2014 at 09:03
Running produces the following errors — is this a PyQt version issue? Any ideas?
TypeError: ‘PySide.QtGui.QWidget.setParent’ called with wrong argument types:
PySide.QtGui.QWidget.setParent(QWidget)
Supported signatures:
PySide.QtGui.QWidget.setParent(PySide.QtGui.QWidget)
PySide.QtGui.QWidget.setParent(PySide.QtGui.QWidget, PySide.QtCore.Qt.WindowFlags)
January 3rd, 2014 at 21:13
Solved my own problem above. Found solution on stack overflow: | http://eli.thegreenplace.net/2009/01/20/matplotlib-with-pyqt-guis/ | CC-MAIN-2014-10 | refinedweb | 1,680 | 58.48 |
Promises are important building blocks for asynchronous operations in JavaScript. You may think that promises are not so easy to understand, learn, and work with. And trust me, you are not alone!
Promises are challenging for many web developers, even after spending years working with them.
In this article, I want to try to change that perception while sharing what I’ve learned about JavaScript Promises over the last few years. Hope you find it useful.
A
Promise is a special JavaScript object. It produces a value after an
asynchronous (aka, async) operation completes successfully, or an error if it does not complete successfully due to time out, network error, and so on.
Successful call completions are indicated by the
resolve function call, and errors are indicated by the
reject function call.
You can create a promise using the promise constructor like this:
let promise = new Promise(function(resolve, reject) { // Make an asynchronous call and either resolve or reject });
In most cases, a promise may be used for an asynchronous operation. However, technically, you can resolve/reject on both synchronous and asynchronous operations.
Oh, yes! That’s right. We have
callback functions in JavaScript. But, a callback is not a special thing in JavaScript. It is a regular function that produces results after an
asynchronous call completes (with success/error).
The word ‘asynchronous’ means that something happens in the future, not right now. Usually, callbacks are only used when doing things like network calls, or uploading/downloading things, talking to databases, and so on.
While
callbacks are helpful, there is a huge downside to them as well. At times, we may have one callback inside another callback that’s in yet another callback and so on. I’m serious! Let’s understand this “callback hell” with an example.
How to Avoid Callback Hell – PizzaHub Example
Let’s order a Veg Margherita pizza 🍕 from the PizzaHub. When we place the order, PizzaHub automatically detects our location, finds a nearby pizza restaurant, and finds if the pizza we are asking for is available.
If it’s available, it detects what kind of beverages we get for free along with the pizza, and finally, it places the order.
If the order is placed successfully, we get a message with a confirmation.
So how do we code this using callback functions? I came up with something like this:
function orderPizza(type, name) { // Query the pizzahub for a store query(`/api/pizzahub/`, function(result, error){ if (!error) { let shopId = result.shopId; // Get the store and query pizzas query(`/api/pizzahub/pizza/${shopid}`, function(result, error){ if (!error) { let pizzas = result.pizzas; // Find if my pizza is availavle let myPizza = pizzas.find((pizza) => { return (pizza.type===type && pizza.name===name); }); // Check for the free beverages query(`/api/pizzahub/beverages/${myPizza.id}`, function(result, error){ if (!error) { let beverage = result.id; // Prepare an order query(`/api/order`, {'type': type, 'name': name, 'beverage': beverage}, function(result, error){ if (!error) { console.log(`Your order of ${type} ${name} with ${beverage} has been placed`); } else { console.log(`Bad luck, No Pizza for you today!`); } }); } }) } }); } }); } // Call the orderPizza method orderPizza('veg', 'margherita');
Let’s have a close look at the
orderPizza function in the above code.
It calls an API to get your nearby pizza shop’s id. After that, it gets the list of pizzas available in that restaurant. It checks if the pizza we are asking for is found and makes another API call to find the beverages for that pizza. Finally the order API places the order.
Here we use a callback for each of the API calls. This leads us to use another callback inside the previous, and so on.
This means we get into something we call (very expressively)
Callback Hell. And who wants that? It also forms a code pyramid which is not only confusing but also error-prone.
There are a few ways to come out of (or not get into)
callback hell. The most common one is by using a
Promise or
async function. However, to understand
async functions well, you need to have a fair understanding of
Promises first.
So let’s get started and dive into promises.
Just to review, a promise can be created with the constructor syntax, like this:
let promise = new Promise(function(resolve, reject) { // Code to execute });
The constructor function takes a function as an argument. This function is called the
executor function.
// Executor function passed to the // Promise constructor as an argument function(resolve, reject) { // Your logic goes here... }
The executor function takes two arguments,
resolve and
reject. These are the callbacks provided by the JavaScript language. Your logic goes inside the executor function that runs automatically when a
new Promise is created.
For the promise to be effective, the executor function should call either of the callback functions,
resolve or
reject. We will learn more about this in detail in a while.
The
new Promise() constructor returns a
promise object. As the executor function needs to handle async operations, the returned promise object should be capable of informing when the execution has been started, completed (resolved) or retuned with error (rejected).
A
promise object has the following internal properties:
state– This property can have the following values:
pending: Initially when the executor function starts the execution.
fulfilled: When the promise is resolved.
rejected: When the promise is rejected.
2.
result – This property can have the following values:
undefined: Initially when the
statevalue is
pending.
value: When
resolve(value)is called.
error: When
reject(error)is called.
These internal properties are code-inaccessible but they are inspectable. This means that we will be able to inspect the
state and
result property values using the debugger tool, but we will not be able to access them directly using the program.
A promise’s state can be
pending,
fulfilled or
rejected. A promise that is either resolved or rejected is called
settled.
How promises are resolved and rejected
Here is an example of a promise that will be resolved (
fulfilled state) with the value
I am done immediately.
let promise = new Promise(function(resolve, reject) { resolve("I am done"); });
The promise below will be rejected (
rejected state) with the error message
Something is not right!.
let promise = new Promise(function(resolve, reject) { reject(new Error('Something is not right!')); });
An important point to note:
A Promise executor should call only one
resolveor one
reject. Once one state is changed (pending => fulfilled or pending => rejected), that’s all. Any further calls to
resolveor
rejectwill be ignored.
let promise = new Promise(function(resolve, reject) { resolve("I am surely going to get resolved!"); reject(new Error('Will this be ignored?')); // ignored resolve("Ignored?"); // ignored });
In the example above, only the first one to resolve will be called and the rest will be ignored.
A
Promise uses an executor function to complete a task (mostly asynchronously). A consumer function (that uses an outcome of the promise) should get notified when the executor function is done with either resolving (success) or rejecting (error).
The handler methods,
.then(),
.catch() and
.finally(), help to create the link between the executor and the consumer functions so that they can be in sync when a promise
resolves or
rejects.
How to Use the
.then() Promise Handler
The
.then() method should be called on the promise object to handle a result (resolve) or an error (reject).
It accepts two functions as parameters. Usually, the
.then() method should be called from the consumer function where you would like to know the outcome of a promise’s execution.
promise.then( (result) => { console.log(result); }, (error) => { console.log(error); } );
If you are interested only in successful outcomes, you can just pass one argument to it, like this:
promise.then( (result) => { console.log(result); } );
If you are interested only in the error outcome, you can pass
null for the first argument, like this:
promise.then( null, (error) => { console.log(error) } );
However, you can handle errors in a better way using the
.catch() method that we will see in a minute.
Let’s look at a couple of examples of handling results and errors using the
.then and
.catch handlers. We will make this learning a bit more fun with a few real asynchronous requests. We will use the PokeAPI to get information about Pokémon and resolve/reject them using Promises.
First, let us create a generic function that accepts a PokeAPI URL as argument and returns a Promise. If the API call is successful, a resolved promise is returned. A rejected promise is returned for any kind of errors.
We will be using this function in several examples from now on to get a promise and work on it.
function getPromise(URL) { let promise = new Promise(function (resolve, reject) { let req = new XMLHttpRequest(); req.open("GET", URL); req.onload = function () { if (req.status == 200) { resolve(req.response); } else { reject("There is an Error!"); } }; req.send(); }); return promise; }
Example 1: Get 50 Pokémon’s information:
const ALL_POKEMONS_URL = ''; // We have discussed this function already! let promise = getPromise(ALL_POKEMONS_URL); const consumer = () => { promise.then( (result) => { console.log({result}); // Log the result of 50 Pokemons }, (error) => { // As the URL is a valid one, this will not be called. console.log('We have encountered an Error!'); // Log an error }); } consumer();
Example 2: Let’s try an invalid URL
const POKEMONS_BAD_URL = ''; // This will reject as the URL is 404 let promise = getPromise(POKEMONS_BAD_URL); const consumer = () => { promise.then( (result) => { // The promise didn't resolve. Hence, it will // not be executed. console.log({result}); }, (error) => { // A rejected prmise will execute this console.log('We have encountered an Error!'); // Log an error } ); } consumer();
How to Use the
.catch() Promise Handler
You can use this handler method to handle errors (rejections) from promises. The syntax of passing
null as the first argument to the
.then() is not a great way to handle errors. So we have
.catch() to do the same job with some neat syntax:
// This will reject as the URL is 404 let promise = getPromise(POKEMONS_BAD_URL); const consumer = () => { promise.catch(error => console.log(error)); } consumer();
If we throw an Error like
new Error("Something wrong!") instead of calling the
reject from the promise executor and handlers, it will still be treated as a rejection. It means that this will be caught by the
.catch handler method.
This is the same for any synchronous exceptions that happen in the promise executor and handler functions.
Here is an example where it will be treated like a reject and the
.catch handler method will be called:
new Promise((resolve, reject) => { throw new Error("Something is wrong!");// No reject call }).catch((error) => console.log(error));
How to Use the
.finally() Promise Handler
The
.finally() handler performs cleanups like stopping a loader, closing a live connection, and so on. The
finally() method will be called irrespective of whether a promise
resolves or
rejects. It passes through the result or error to the next handler which can call a .then() or .catch() again.
Here is an example that’ll help you understand all three methods together:
let loading = true; loading && console.log('Loading...'); // Gatting Promise promise = getPromise(ALL_POKEMONS_URL); promise.finally(() => { loading = false; console.log(`Promise Settled and loading is ${loading}`); }).then((result) => { console.log({result}); }).catch((error) => { console.log(error) });
To explain a bit further:
- The
.finally()method makes loading
false.
- If the promise resolves, the
.then()method will be called. If the promise rejects with an error, the
.catch()method will be called. The
.finally()will be called irrespective of the resolve or reject.
The
promise.then() call always returns a promise. This promise will have the
state as
pending and
result as
undefined. It allows us to call the next
.then method on the new promise.
When the first
.then method returns a value, the next
.then method can receive that. The second one can now pass to the third
.then() and so on. This forms a chain of
.then methods to pass the promises down. This phenomenon is called the
Promise Chain.
Here is an example:
let promise = getPromise(ALL_POKEMONS_URL); promise.then(result => { let onePokemon = JSON.parse(result).results[0].url; return onePokemon; }).then(onePokemonURL => { console.log(onePokemonURL); }).catch(error => { console.log('In the catch', error); });
Here we first get a promise resolved and then extract the URL to reach the first Pokémon. We then return that value and it will be passed as a promise to the next .then() handler function. Hence the output,
The
.then method can return either:
- A value (we have seen this already)
- A brand new promise.
It can also throw an error.
Here is an example where we have created a promise chain with the
.then methods which returns results and a new promise:
// Promise Chain with multiple then and catch let promise = getPromise(ALL_POKEMONS_URL); promise.then(result => { let onePokemon = JSON.parse(result).results[0].url; return onePokemon; }).then(onePokemonURL => { console.log(onePokemonURL); return getPromise(onePokemonURL); }).then(pokemon => { console.log(JSON.parse(pokemon)); }).catch(error => { console.log('In the catch', error); });
In the first
.then call we extract the URL and return it as a value. This URL will be passed to the second
.then call where we are returning a new promise taking that URL as an argument.
This promise will be resolved and passed down to the chain where we get the information about the Pokémon. Here is the output:
In case there is an error or a promise rejection, the .catch method in the chain will be called.
A point to note: Calling
.then multiple times doesn’t form a Promise chain. You may end up doing something like this only to introduce a bug in the code:
let promise = getPromise(ALL_POKEMONS_URL); promise.then(result => { let onePokemon = JSON.parse(result).results[0].url; return onePokemon; }); promise.then(onePokemonURL => { console.log(onePokemonURL); return getPromise(onePokemonURL); }); promise.then(pokemon => { console.log(JSON.parse(pokemon)); });
We call the
.then method three times on the same promise, but we don’t pass the promise down. This is different than the promise chain. In the above example, the output will be an error.
Apart from the handler methods (.then, .catch, and .finally), there are six static methods available in the Promise API. The first four methods accept an array of promises and run them in parallel.
- Promise.all
- Promise.any
- Promise.allSettled
- Promise.race
- Promise.resolve
- Promise.reject
Let’s go through each one.
The Promise.all() method
Promise.all([promises]) accepts a collection (for example, an array) of promises as an argument and executes them in parallel.
This method waits for all the promises to resolve and returns the array of promise results. If any of the promises reject or execute to fail due to an error, all other promise results will be ignored.
Let’s create three promises to get information about three Pokémons.
const BULBASAUR_POKEMONS_URL = ''; const RATICATE_POKEMONS_URL = ''; const KAKUNA_POKEMONS_URL = ''; let promise_1 = getPromise(BULBASAUR_POKEMONS_URL); let promise_2 = getPromise(RATICATE_POKEMONS_URL); let promise_3 = getPromise(KAKUNA_POKEMONS_URL);
Use the Promise.all() method by passing an array of promises.
Promise.all([promise_1, promise_2, promise_3]).then(result => { console.log({result}); }).catch(error => { console.log('An Error Occured'); });
Output:
As you see in the output, the result of all the promises is returned. The time to execute all the promises is equal to the max time the promise takes to run.
The Promise.any() method
Promise.any([promises]) – Similar to the
all() method,
.any() also accepts an array of promises to execute them in parallel. This method doesn’t wait for all the promises to resolve. It is done when any one of the promises is settled.
Promise.any([promise_1, promise_2, promise_3]).then(result => { console.log(JSON.parse(result)); }).catch(error => { console.log('An Error Occured'); });
The output would be the result of any of the resolved promises:
The Promise.allSettled() method
romise.allSettled([promises]) – This method waits for all promises to settle(resolve/reject) and returns their results as an array of objects. The results will contain a state (fulfilled/rejected) and value, if fulfilled. In case of rejected status, it will return a reason for the error.
Here is an example of all fulfilled promises:
Promise.allSettled([promise_1, promise_2, promise_3]).then(result => { console.log({result}); }).catch(error => { console.log('There is an Error!'); });
Output:
If any of the promises rejects, say, the promise_1,
let promise_1 = getPromise(POKEMONS_BAD_URL);
The Promise.race() method
Promise.race([promises]) – It waits for the first (quickest) promise to settle, and returns the result/error accordingly.
Promise.race([promise_1, promise_2, promise_3]).then(result => { console.log(JSON.parse(result)); }).catch(error => { console.log('An Error Occured'); });
Output the fastest promise that got resolved:
The Promise.resolve/reject methods
Promise.resolve(value) – It resolves a promise with the value passed to it. It is the same as the following:
let promise = new Promise(resolve => resolve(value));
Promise.reject(error) – It rejects a promise with the error passed to it. It is the same as the following:
let promise = new Promise((resolve, reject) => reject(error));
Sure, let’s do it. Let us assume that the
query method will return a promise. Here is an example query() method. In real life, this method may talk to a database and return results. In this case, it is very much hard-coded but serves the same purpose.
function query(endpoint) { if (endpoint === `/api/pizzahub/`) { return new Promise((resolve, reject) => { resolve({'shopId': '123'}); }) } else if (endpoint.indexOf('/api/pizzahub/pizza/') >=0) { return new Promise((resolve, reject) => { resolve({pizzas: [{'type': 'veg', 'name': 'margherita', 'id': '123'}]}); }) } else if (endpoint.indexOf('/api/pizzahub/beverages') >=0) { return new Promise((resolve, reject) => { resolve({id: '10', 'type': 'veg', 'name': 'margherita', 'beverage': 'coke'}); }) } else if (endpoint === `/api/order`) { return new Promise((resolve, reject) => { resolve({'type': 'veg', 'name': 'margherita', 'beverage': 'coke'}); }) } }
Next is the refactoring of our
callback hell. To do that, first, we will create a few logical functions:
// Returns a shop id let getShopId = result => result.shopId; // Returns a promise with pizza list for a shop let getPizzaList = shopId => { const url = `/api/pizzahub/pizza/${shopId}`; return query(url); } // Returns a promise with pizza that matches the customer request let getMyPizza = (result, type, name) => { let pizzas = result.pizzas; let myPizza = pizzas.find((pizza) => { return (pizza.type===type && pizza.name===name); }); const url = `/api/pizzahub/beverages/${myPizza.id}`; return query(url); } // Returns a promise after Placing the order let performOrder = result => { let beverage = result.id; return query(`/api/order`, {'type': result.type, 'name': result.name, 'beverage': result.beverage}); } // Confirm the order let confirmOrder = result => { console.log(`Your order of ${result.type} ${result.name} with ${result.beverage} has been placed!`); }
Use these functions to create the required promises. This is where you should compare with the
callback hell example. This is so nice and elegant.
function orderPizza(type, name) { query(`/api/pizzahub/`) .then(result => getShopId(result)) .then(shopId => getPizzaList(shopId)) .then(result => getMyPizza(result, type, name)) .then(result => performOrder(result)) .then(result => confirmOrder(result)) .catch(function(error){ console.log(`Bad luck, No Pizza for you today!`); }) }
Finally, call the orderPizza() method by passing the pizza type and name, like this:
orderPizza('veg', 'margherita');
If you are here and have read through most of the lines above, congratulations! You should now have a better grip of JavaScript Promises. All the examples used in this article are in this GitHub repository.
Next, you should learn about the
async function in JavaScript which simplifies things further. The concept of JavaScript promises is best learned by writing small examples and building on top of them.
Irrespective of the framework or library (Angular, React, Vue, and so on) we use, async operations are unavoidable. This means that we have to understand promises to make things work better.
Also, I’m sure you will find the usage of the
fetch method much easier now:
fetch('/api/user.json') .then(function(response) { return response.json(); }) .then(function(json) { console.log(json); // {"name": "tapas", "blog": "freeCodeCamp"} });
- The
fetchmethod returns a promise. So we can call the
.thenhandler method on it.
- The rest is about the promise chain which we learned in this article.
Thank you for reading this far! Let’s connect. You can @ me on Twitter (@tapasadhikary) with comments.
You may also like these other articles:
That’s all for now. See you again with my next article soon. Until then, please take good care of yourself.
| https://envo.app/javascript-promise-tutorial-how-to-resolve-or-reject-promises-in-js/ | CC-MAIN-2022-33 | refinedweb | 3,372 | 60.21 |
21 April 2009 17:20 [Source: ICIS news]
LONDON (ICIS news)--DuPont expects some sequential quarter-to-quarter sales growth in its industrial businesses in the second quarter but steep declines in volumes and revenues year on year, the company said on Tuesday.
Modest sales growth was forecast as de-stocking in customer industries slowly comes to an end but the global economic downturn was forecast now to be much worse than earlier projections indicated.
“We expect flat to slightly up sequential sales,” chief financial officer Jeffrey Keefer said during an earnings conference call.
Sales volumes were mixed from a business and country-by-country perspective, DuPont said. There was a sequential month-to-month uptick in sales volume in March, Keefer added. And ?xml:namespace>
The company had seen some benefit in electronics from
“
DuPont reported a severely depressed first quarter with industrial business sales volumes down 30% year on year and losses in three of its industrial segments.
It said, however, that there had been some relief from de-stocking in some markets in March.
“We expect a larger volume contraction in 2009 than projected three months ago,” CEO Ellen Kullman said.
DuPont had projected a decline in global gross domestic product (GDP) in 2009 of 0.9% but the outlook now was for a 2009 global GDP drop of 3.5%, she added.
The US-headquartered chemicals group did not give an earnings outlook for the second quarter because, it said, of global market uncertainty, but its more detailed sales outlook indicated some end to de-stocking in important markets for the company.
It said it expected less de-stocking in industrial chemicals and polymers, in electronics, and building materials, for example, and a sequential increase in motor vehicle construction despite continued auto industry weakness.
Its aramid fibres protective materials business, however, was likely to be affected by late cycle de-stocking in the quarter, it said.
DuPont was trading up 3.5% at $27.7 at 11:37
($1 = €0.77)
For more on DuPont visit ICIS company intelligence
To discuss issues facing the chemical industry go to ICIS connect
Read Paul Hodges' | http://www.icis.com/Articles/2009/04/21/9209892/DuPont-sees-some-sequential-sales-growth-in-second-quarter.html | CC-MAIN-2014-52 | refinedweb | 357 | 52.49 |
Computer Science Archive: Questions from May 02, 2009
- Anonymous askedThe operation signature has all the following except:a. name of the operationb. object it accepts... Show moreThe operation signature has all the following except:a. name of the operationb. object it acceptsc. algorithm it usesd. value returned by the operation• Show less1 answer
- Anonymous askedb. represe... Show moreThe Strategy pattern:a. defines an object that encapsulate how a set of objectsinteractsb. represents an operation to be performed on the elements ofan object's structurec. defines a family of algorithm, encapsulate each one andmake them interchangeabled. none of the above• Show less1 answer
- Gootch askedif (n ==... Show more
Consider the recursive function for calculating the nthFibonacci number:• Show less
int fib(int n) {
if (n == 0)
return 0;
else if (n == 1)
return 1;
else
return fib(n-1) + fib(n-2);
}
(b) In what situations will thefunction terminate? How do you know that it will terminate inthose circumstances?3 answers
- Gootch askedthe C library strncpy() fu... Show more
Give the definition for a function, my_strncpy(), that behavesthe same as
the C library strncpy() function. That is, it takes, twocharacter pointers (dest and src), and a number (max_len) asparameters, and copies the string src to dest up to max_lencharacters to src.
Write a program that uses alinked list to read in an arbitrarily number of lines of text (witha maximum length of 80 characters each)from the standard input andprints them out backwards.• Show less2 answers
- Gootch askedand auxiliary external variable definit... Show more
Supply appropriate definitions for push() and pop() functionsand auxiliary external variable definitions, so that push() andpop() implement the Stack ADT operations using an array and thefollowing main function works. (You may assume that at every pointin the execution there will never have been more than 10 morepush()'es than pop()'s).
int main() {• Show less
char s[6];
int x;
while(scanf("%5s", s) == 1) {
if(isdigit(s))
push(atoi(s));
else {
switch(s[0]) {
case '+': push(pop() + pop()); break;
case '*': push(pop() * pop()); break;
case '-':
x = pop();
push(x-pop());
break;
case '/':
x= pop();
push(x-pop());
break;
default:
return;
}
}
}
}2 answers
- Gootch askedstruct... Show more
Suppose you are developing the scheduler for a real time systemand have
the struct:
struct process{
int process_id;
long deadline;
long wcet; /* worst-caseestimated time */
/* other field */
}
And you need to identify the process that needs torun the soonest in order to meet its deadline (i.e., the one thathas the smallest value of deadline - wcet). Write the code for thepriority queue enqueue function including anysupporting auxiliary functions) for a minimum heap containing pointers to process structusing deadline - wcet as the priority.• Show less1 answer
- Anonymous askedplease make a program for me that will find the inverse of amatrix using two dimensional dynami... Show morehi..please make a program for me that will find the inverse of amatrix using two dimensional dynamic pointers..plz be simple and write a commented code..thanks be fast...• Show less0 answers
- asalvani askedState the numbers that have been sea... Show moreHow many numbers are searched in 1000 entries, and 200 entries?
State the numbers that have been searched and state how itterminates.
• Show less4 answers
- asalvani askedWrite a pseudocode with procedure List with insertion sort,selection sort, bubble sort and quicksort... Show moreWrite a pseudocode with procedure List with insertion sort,selection sort, bubble sort and quicksort.
Give also an explanation of each of the serches regarding theiradvantages and disadvantages in your list.
• Show less3 answers
- Anonymous askedprovides an array of in... Show more
Write a program that implements functions and arrays using pointers. Theuser
provides an array of integerdata, while the system calculates the average, standarddeviation,
Variance and median of the array.• Show less1 answer
- Anonymous askedGeneral Purpose registers wereoriginally introduced into instruction set architecture to help (a)... Show moreQ1 General Purpose registers wereoriginally introduced into instruction set architecture to help (a)speed program execution and (b) reduce the overall size ofprograms. How is it that such registers contribute to (a) and(b)?__________________________________________________________________________Q2 How many bits are required toimplement a direct-mapped cache that can hold 64K bytes of memorydata and each cache block hold 4 bytes? Assume every cache blockneeds one valid bit and the memory address space is 32-bit.• Show less___________________________________________________________________________Q3:a. Develop an SEC code for a 16 bit data word. Generate thecode for the data word 0101 0000 0011 1001b. Show that the codewill correctly identify an error in bit 5.b. How many check bits are needed if the Hamming errorcorrection code is used to detect single bit errors in a 1024-bitdata word?___________________________________________________________________________NEEDHELP1 answer
- asalvani askedWhat sequence of numbers would be printed by the followingrecursive procedure if we started it with... Show moreWhat sequence of numbers would be printed by the followingrecursive procedure if we started it with N assigned the value1?
procedure Exercise(N)
print the value of N;
if (n<3) then (apply the procedure Excersise to thevalue N+1);
print the value of N.
The answer to this given is 1,2,3,3,2,1,..(can u please explain mewhy the numbers are given backwards?, is that because of the copiesthat return back the result to the original copy or somethingelse?
then after that is required what could be the termination conditionin the recursive procedure in the thing above.
• Show less3 answers
- asalvani askedCreate a use case diagram to show the f... Show moresuppose u are going to develop a University enrolment system.
Create a use case diagram to show the following requirements.
The student office provide enrolment instructions to allstudents
The enrolment instructions must contain the class timetable for thecourse offerings
For new students their acceptance forms should also be provided
For continuing students who are (eligible to re-enrol), theirexamination results from the last semester should also beprovided
The enrolment instructions may be sent by email or mail
Please give an explanation together with the use case diagram,thanks
• Show less1 answer
- asalvani askedCreate a class diagram to show the follow... Show moreSuppose u are goin to develop a university enrolment system
Create a class diagram to show the following requirements
Each student has a unique ID in addition to his/hers name. Eachcourse has a unique
course code in addition to its name and credit points.
A course is usually offered in multiple years and semesters.
A student must take one or more course offerings, while a courseofferings has at least
five and at most one hundtred students.
There are new students and continuing students.
A continuing student may have exam results for a given courseoffering.
• Show less1 answer
- Anonymous asked1 answer
- Anonymous askedhow do i read the number if i decla... Show moreI am writing a program using c++
I wish to read the input number
how do i read the number if i declare the number to be char*num;
while executing i input 123
do i use a loop and say cin>>num[i] or do i just saycin>>num;cout<<num
• Show less2 answers
- sadgirl askedform: dd mm yyyy, and t... Show morewrite and run aprogram that asks the user to enter the date in the following• Show less
form: dd mm yyyy, and then the program will print the entered datein a literal form using a switch statement. The programe will keepasking the user to enter the date while the entered day, month oryear is out of range. (i.e. the value of month, day and year shouldbe in the followng range : 1<=m<=12, 1<=d<=31,y>0
sample output
enter the date : 15 4 2009
the date in literal form: April 15, 2009
press any key to continue1 answer
- Anonymous askedQ1.Write a swap function that swaps valuesof two characters first by passing throughreference and se... Show moreQ1.Write a swap function that swaps valuesof two characters first by passing throughreference and secondly by passing through values. Submit completecode with supporting flow charts.• Show less
Q2.Implement a function that calculates the Fibonacci series ofany given number. Submit complete code with supporting flow chartas well as submit the trace of the series when input is 10.(You’re required to search Fibonacci series on theInternet).1 answer
- Anonymous askedQ / Using the following terms given below please draw for methe use case diagram.This diagram is rel... Show moreQ / Using the following terms given below please draw for methe use case diagram.This diagram is related to "Online RailwayReservation System".Please keep in mind to use <extend> and<include> whereever requird appropriately.I will appreciateand welcome if you can do some additional refinement in thediagram.Thanks.1 - Administrator (key player, actor)2- View Schedule <extends> Add schedule, UpdateSchedule3- View Trains <extends> Add trains, edit trains4-View Routes <extends> update routes, add routes5- View Timing <extends> set arrival timing, setdeparture timing6- View Fare <extends> Add ticket fare, Update ticketfare7- View Reports8 - Run Application9-Select Option10- Help• Show less0 answers
- Anonymous asked1 answer
- Anonymous askedThis was a supplemental material my teacher decided to addin the middle of our semester. She wanted... Show moreThis was a supplemental material my teacher decided to addin the middle of our semester. She wanted to try visual C++. No onehas any experience and she is unable to help us. This is thebook.Programming With Visual C++ by james alert.
Here is the program. Page 162 Algorithm 3-3: Vending MachineAlgorithm
1. Declare variables
2. Read the amount ofchange(amount)
3. Process the amount
3.1. Determine how many times 25 (the value of a quarter) goesinto the amount (store and display the quantity)
3.2. Determine the amount left over after quarters have beenremoved.
3.3. Determine how many times 10 (the value of a dime) goes intothe amount (store and display this)
3.4. Determine the amount left over after dimes have beenremoved
3.5. Determine how many times 5 (the value of a nickel) goesinto the amount (store and display this)
3.6. Determine the amount left over after nickels are removed(this is the number of pennies – store and display it)
3.7. Add up the sum of the number of quarters, dimes,nickels, and pennies, and display it.
4. The clear button shouldassign an empty string to the text field of every textbox allowingthe user to begin again.
The diagram of the visual is a square like this
Amount of change (0-99cents) textbox
Buttoncalculate button clear
Number ofdimes text box
Number ofnickels text box
Number ofpennies text box
Totalcoins text box• Show less1 answer
- Anonymous askedI have this code working but i get this one build errror and im afraid its wrong or im... Show morex.I have this code working but i get this one build errror and im afraid its wrong or im doing somehting wrong. below is my code and i have for the program the roman header file includded in my main but i commented it out and includded it in a header file under header files. this is the error i get can any one help.----- Build started: Project: roman12_p713_cbrads01, Configuration: Debug Win32 ------
Compiling...
Main.cpp
c:\users\\documents\visual studio 2008\projects\roman12_p713_\roman12_p713_\main.cpp(29) : fatal error C1083: Cannot open include file: 'romanType.h': No such file or directory
Build log was saved at ":\Users\Documents\Visual Studio 2008\Projects\roman12_p713_\roman12_p713_\Debug\BuildLog.htm"
roman12_p713_cbrads01 - 1 error(s), 0 warning(s)========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========Heres My codeò!x.øi5yš�mx. • Show less
//// RomanType Header FIle
// // romanType.h
// #include<string>
// using namespace std;
// class romanType
// {
// public :
// romanType( string = "" );
//
// void setRoman( string );
//
// void convertToDecimal();
//
// void printRoman();
//
// void printDecimal();
//
//private:
//
// string roman;
//
// int decimal;
//
//}; // end class definition of romanType
// // -----------------------------------------------------
// implementation file romanTypeImp.cpp
#include<iostream>
#include "romanType.h"
using namespace std;
romanType::romanType( string myRoman )
{
roman = myRoman;
decimal = 0;
} // end constructor romanType
void romanType::setRoman( string myRoman )
{
roman = myRoman;
decimal = 0;
} // end function setRoman
void romanType::convertToDecimal()
{
char romans[7] = { 'M', 'D', 'C', 'L', 'X', 'V', 'I'};
int decimals[ 7 ] = { 1000, 500, 100, 50, 10, 5, 1 };
int j, pos;
size_t len = roman.length();
// process the numeral
for ( unsigned int i = 0; i < len - 1; i++ )
{
// find the roman letter
for ( pos = 0; pos < 7; pos++ )
if ( roman.at( i ) == romans[ pos ] )
break;
// check for validity of the roman letter
if ( pos < 7 )
{
// check the next roman letter's value
for ( j = 0; j < pos; j++ )
if ( roman.at( i + 1 ) == romans[ j ] )
break;
// add or subtract the dec. val
// according to the values of j and pos
if ( j == pos )
decimal += decimals[ pos ];
else
decimal -= decimals[ pos ];
}
} // end for
// process the last numeral value
for ( j = 0; j < 7; j++ )
if ( roman.at( len - 1 ) == romans[ j ] )
break;
//add the dec. val of roman letter to the dec. number
decimal += decimals[ j ];
} // end function convertToDecimal
void romanType::printRoman()
{
cout << "õ\Tx.øi5numeral is " << roman;
} // end function printRoman
void romanType::printDecimal()
{
cout << "\n\tThe decimal equivalent of the "
<< "given roman numeral is " << decimal;
} // end function printDecimal
// -----------------------------------------------------
// Main program
#include<iostream>
#include "romanType.h"
using namespace std;
int main() // function main begins program execution
{
// let the user know about the program
cout << "\n\n\tProgram that convert Roman Numeral"
<< " into decimal form.";
// instantiate object of type romanType
romanType r;
string rns[ 3 ] = { "CCCLIX", "MCXIV", "MDCLXVI" };
for ( int i = 0; i < 3; i++ )
{
// set the roman numeral string
r.setRoman( rns[ i ] );
// convert the roman numeral into decimal form
r.convertToDecimal();
// print the roman numeral
r.printRoman();
// print the decimal form of numeral
r.printDecimal();
} // end for
cout << "\n\n\t";
system( "pause" );
return 0; // indicate program executed successfully
} // end of function, main1 answer
- Anonymous askedim working on an assembly project, you should input studentnumber and password, and program will out... Show moreim working on an assembly project, you should input studentnumber and password, and program will output the courses you needto take according to your student number. im trying to compare thepassword entered with the predefined one, but its not working outwell, here is the code in red, and the compare string functions isin blue, please help me fix it
.MODEL SMALL
.STACK 100H
.DATA
st1 db 13,10,"You are a first-year student; youshould register MATH 141, COMP142, ENGC 101.$"
st2 db 13,10,"You are a second-year student; youshould register ENCS 234, COMP 231, ENEE 231.$"
st3 db 13,10,"You are a third-year student; youshould register ENCS 331, COMP 333, ENEE 331.$"
st4 db 13,10,"You are a fourth-year student; youshould register ENCS 432, COMP 433, MATH 331.$"
st5 db 13,10,"You are a fifth-year student; youshould register ENCS 535, ENCS 536, ENCS 539.$"
mess1 db 13,10,"Enter Student Number:$"
mess2 db 13,10,"Enter Password:$"
pass db "word1234$"
stdnum db 9, ?, 9 dup (' ')
stdpas db 9, ?, 9 dup (' ')
.code
start:
call setscr ;set screen
mov Ax, @data
mov ds, Ax
mov dx, offsetmess1 ;print "Enter Student Number"
call prnt
mov ah, 0ah ;take input for studentnumber
mov dx, offset stdnum
int 21h
mov dx, offsetmess2 ;print "Enter Password"
call prnt
mov ah, 0ah ;take input for studentpassword:
cld
mov ax, cs
mov ds, ax
mov es, ax
lea si, pass
lea di, stdpas
mov cx, 8
repe cmpsb
jnz noteq
call prntst1
ret
noteq:
call prntst2
ret
; ---------------- set screen, set color and set cursor---------------
setscr:
mov ax, 0600H
mov cx,0
mov dx, 184FH
mov bh, 7
int 10H
movah,06H
moval,00H
movbh,2CH ;green backgroundand red forground
movcx,0
movdx,184FH
int 10H
movah,2 ;setcursor
movdh,02H ;row number
movdl,00H ;column number
movbh,0 ;page number
int 10H
ret
end start
• Show less0 answers
- Anonymous askedYou have set up a DSL connection to your Windows XP computerand installed the latest service pack fo... Show moreYou have set up a DSL connection to your Windows XP computerand installed the latest service pack for security. What othersteps can you take to ensure the security of your computer? (Chooseall that apply--this question can have multiple answers!)• Show lessa) configure permissionsb) Ensure that Windows Firewall is enabledc) Install DSLsec.d) Disable UDP1 answer
- Anonymous askedWhich of the following does IPSec employ? (Choose all thatapply--this question can have multiple ans... Show moreWhich of the following does IPSec employ? (Choose all thatapply--this question can have multiple answers):a) port protectionb) certificates for authenticationc) shortest route cost calculationd) encryption• Show less1 answer
- Anonymous askedYou monitor the available wireless networks for yourhome. You see five different wireless access poi... Show moreYou monitor the available wireless networks for yourhome. You see five different wireless access points that areavailable, including your own. What can you do to:a) keep others from attaching to your accesspoint?b) keep the other access points from interfering withyour signal?Please try to be as descriptive as possible. Thanks!• Show less1 answer
- Anonymous asked// Precondition:... Show morewrite a recursive function definitionfor the following function:
int squares(intn);
// Precondition: n>=1
// Returns the sum of the squares of the numbers 1 through n.
For Example, squares(3) returns 14 because12+22+ 32 is 14. • Show less2 answers
- Anonymous asked1- Union Person { char name[30]; //30 bytes int age; float height; }; How many bytes will skip after1 answer
- Anonymous askedDefinition of a pipeline of processes for the compression phase, as well as for the decompression ph... More »1 answer
- Anonymous askedCode for the compression phase and experimental results about the ratio of compression obtained. Whe... More »0 answers
- samwelli askedA. Implement the default constructor,... Show more
Use the following UML class diagramfor questions A through D
A. Implement the default constructor, to set the value of theobject to zero dollars and zero cents.
B. Overload the << operatorwhich will print the value in the format $d.cc, where d is dollarvalue and cc is the cents value using two digits.
C. Implement the accessor andmutator methods described in the UML diagram.
D. Implement the method operator +which will add two money values resulting in a money value. Don'tforgot to adjust the results so that the cents are between 0 and99.
• Show less0 answers
- samwelli askedA. Write the C++ statementthat will correctly declare... Show more
Please help me answer A-D. Will give lifesaver
A. Write the C++ statementthat will correctly declare a pointer variable ptr, which has basetype double.
B. Write the C++ statementthat will correctly make pointer variable ptr point to the storagelocation with variable name x, assuming that both variables havealready been declared.
C. Declare an object that is alist of integers, and another object which is an iterator for thatlist.
D. Write the statements thatwill dynamically allocate an array of nine integers, storing thestarting address into pointer myArrPrt, and then the statement thatwill release that dynamically allocated array.
• Show less0 answers
- Anonymous asked4. Write the statements that will dynamically allocate an array ofnine integers, storing the startin0 answers
- Anonymous askedQ) Here is a series of address references given as wordaddresses: 1,4,8,5,20,17,19,56,9,11,4,43,5,6,... Show moreQ) Here is a series of address references given as wordaddresses: 1,4,8,5,20,17,19,56,9,11,4,43,5,6,9, and 17. Assuming adirect-mappedcache with 16 one-word blocks that is initially empty, labeleach reference in thelist as a hit or a miss and show the final contents of thecache.Q) consider above series and Assumming a direct - mappedcache with 4 -word block and a total size of 16 words.Q) consider above series and 4-way associative cache withone-word blocks and a total size of 16 words.Q) consider series and fully associative one word blocks andtotal size of 16 words.please its urgent !!!!! please answer sub questions also.... will rate life saver...• Show less2 answers
- Anonymous asked1 answer
- Anonymous askedAnd how does it apply and implemented i... Show moreIs data encapsulation and information hiding the same thing?
And how does it apply and implemented in Java?
Thank you very much
• Show less1 answer
- Anonymous asked2 answers
- Anonymous askedwhere there is a private... Show moreCan you give an example of operator overloading of two class c1 andc2
c1=c2
where there is a private data int a.
I think this operators overloading should be declared as friendfunction
• Show less2 answers | http://www.chegg.com/homework-help/questions-and-answers/computer-science-archive-2009-may-02 | CC-MAIN-2015-06 | refinedweb | 3,380 | 54.73 |
Visual Basic is by a large margin the most popular programming language in the Windows world. Visual Basic.NET (VB.NET) brings enormous changes to this widely used tool. Like C#, VB.NET is built on the Common Language Runtime, and so large parts of the language are effectively defined by the CLR. In fact, except for their syntax, C# and VB.NET are largely the same language. Because both owe so much to the CLR and the .NET Framework class library, the functionality of the two is very similar.
VB.NET can be compiled using Visual Studio.NET or vbc.exe, a command-line compiler supplied with the .NET Framework. Unlike C#, however, Microsoft has not submitted VB.NET to a standards body. Accordingly, while the open source world or some other third party could still create a clone, the Microsoft tools are likely to be the only viable choices for working in this language, at least for now.
Only Microsoft provides VB.NET compilers today
The quickest way to get a feeling for VB.NET is to see a simple example. The example that follows implements the same functionality as did the C# example shown earlier in this chapter. As you'll see, the differences from that example are largely cosmetic.
' A VB.NET example Module DisplayValues Interface IMath Function Factorial(ByVal F As Integer) _ As Integer Function SquareRoot(ByVal S As Double) _ As Double End Interface Class Compute Implements IMath Function Factorial(ByVal F As Integer) _ As Integer Implements IMath.Factorial Dim I As Integer Dim Result As Integer = 1 For I = 2 To F Result = Result * I Next Return Result End Function Function SquareRoot(ByVal S As Double) _ As Double Implements IMath.SquareRoot Return System.Math.Sqrt(S) End Function End Class Sub Main() Dim C As Compute = New Compute() Dim V As Integer V = 5 System.Console.WriteLine( _ " factorial: ", _ V, C.Factorial(V)) System.Console.WriteLine( _ "Square root of : {1:f4} ", _ V, C.SquareRoot(V)) End Sub End Module
The example begins with a simple comment, indicated by the single quote that begins the line. Following the comment is an instance of the Module type that contains all of the code in this example. Module is a reference type, but it's not legal to create an instance of this type. Instead, its primary purpose is to provide a container for a group of VB.NET classes, interfaces, and other types. In this case, the module contains an interface, a class, and a Sub Main procedure. It's also legal for a module to contain directly method definitions, variable declarations, and more that can be used throughout the module.
A Module provides a container for other VB.NET types
The module's interface is named IMath, and as in the earlier C# example, it defines the methods (or in the argot of Visual Basic, the functions) Factorial and SquareRoot. Each takes a single parameter, and each is defined to be passed by value, which means a copy of the parameter is made within the function. (The trailing underscore is the line continuation character, indicating that the following line should be treated as though no line break were present.) Passing by value is the default, so the example would work just the same without the ByVal indications . Passing by reference is the default in Visual Basic 6, which shows one example of how the language was changed to match the underlying semantics of the CLR.
By default, VB.NET passes parameters by value, unlike Visual Basic 6
The class Compute, which is the VB.NET expression of a CTS class, implements the IMath interface. Each of the functions in this class must explicitly identity the interface method it implements. Apart from this, the functions are just as in the earlier C# example except that a Visual Basic-style syntax is used. Note particularly that the call to System.Math.Sqrt is identical to its form in the C# example. C#, VB.NET, and any other language built on the CLR can access services in the .NET Framework class library in much the same way.
A VB.NET class is an expression of a CTS class
This simple example ends with a Sub Main procedure, which is analogous to C#'s Main method. The application begins executing here. In this example, Sub Main creates an instance of the Compute class using the VB.NET New operator (which will eventually be translated into the MSIL instruction newobj). It then declares an Integer variable and sets its value to 5.
Execution begins in the Sub Main pr ocedure
As in the C# example, this simple program's results are written out using the WriteLine method of the Console class. Because this method is part of the .NET Framework class library rather than any particular language, it looks exactly the same here as it did in the C# example. Not too surprisingly, then, the output of this simple program is
5 factorial: 120 Square root of 5: 2.2361
just as before.
To someone who knows Visual Basic 6, VB.NET will look familiar. To someone who knows C#, VB.NET will act in a broadly familiar way since it's built on the same foundation. But VB.NET is not the same as either Visual Basic 6 or C#. The similarities can be very helpful in learning this new language, but they can also be misleading. Be careful.
VB . NET's similarities to Visual Basic 6 both help and hurt in learning this new language
Like C#, the types defined by VB.NET are built on the CTS types provided by the CLR. Table 4-2 shows most of these types and their VB.NET equivalents.
Notice that some types, such as unsigned integers, are missing from VB.NET. Unsigned integers are a familiar concept to C++ developers but not to typical Visual Basic 6 developers. The core CTS types defined in the System namespace are available in VB.NET just as in C#, however, so a VB.NET developer is free to declare an unsigned integer using
VB.NET doesn't support all of the CTS types
Dim J As System.UInt32
Unlike C#, VB.NET is not case sensitive. There are some fairly strong conventions, however, which are illustrated in the example shown earlier. For people coming to .NET from Visual Basic 6, this case insensitivity will seem entirely normal. It's one example of why both VB.NET and C# exist, since the more a new environment has in common with the old one, the more likely people will adopt it.
VB.NET classes expose the behaviors of a CTS class using a VB-style syntax. Accordingly, VB.NET classes can implement one or more interfaces, but they can inherit from at most one other class. In VB.NET, a class Calculator that implements the interfaces IAlgebra and ITrig and inherits from the class MathBasics looks like this:
Like a CTS class, a VB.NET class can inherit directly from only one other class
Class Calculator Inherits MathBasics Implements IAlgebra Implements ITrig . . . End Class
Note that, as in C#, the base class must precede the interfaces. Note also that any class this one inherits from might be written in VB.NET or in C# or perhaps in some other CLR-based language. As long as the language follows the rules laid down in the CLR's Common Language Specification, cross-language inheritance is straightforward. Also, if the class inherits from another class, it can potentially override one or more of the type members , such as a method, in its parent. This is allowed only if the member being overridden is declared with the keyword Overridable, analogous to C#'s keyword virtual.
VB.NET classes can be labeled as NotInheritable or MustInherit, which means the same thing as sealed and abstract, respectively, the terms used by the CTS and C#. VB.NET classes can also be assigned various accessibilities, such as Public and Friend, which largely map to visibilities defined by the CTS. A VB.NET class can contain variables , methods, properties, events, and more, just as defined by the CTS. Each of these can have an access modifier specified, such as Public, Private, or Friend. A class can also contain one or more constructors that get called whenever an instance of this class is created. Unlike C#, however, VB.NET does not support operator overloading. A class can't redefine what various standard operators mean when used with an instance of this class.
VB.NET doesn't support operator overloading
Interfaces as defined by the CTS are a fairly simple concept. VB.NET essentially just provides a VB-derived syntax for expressing what the CTS specifies. Along with the interface behavior shown earlier, CTS interfaces can inherit from one or more other interfaces. In VB.NET, for example, defining an interface ITrig that inherits from the three interfaces, ISine, ICosine, and ITangent, would look like this:
Like a CTS interface, a VB.NET interface can inherit directly from one or more other interfaces
Interface ITrig Inherits ISine Inherits ICosine Inherits ITangent ... End Interface
Because both are based on the structure type defined by the CTS, structures in VB.NET are very much like structures in C#. Like a class, a structure can contain fields, members, and properties, implement interfaces, and more. VB.NET structures are value types, of course, which means that they can neither inherit from nor be inherited by another type. A simple employee structure might be defined in VB.NET as follows:
VB.NET structures can contain fields, provide methods, and more
Structure Employee Public Name As String Public Age As Integer End Structure
To keep the example simple, this structure contains only data members. As described earlier, however, CTS structures -and thus VB.NET structures -are in fact nearly as powerful as classes.
The idea of passing an explicit reference to a procedure or function and then calling that procedure or function is not something that the typical Visual Basic programmer is accustomed to. Yet the CLR provides support for delegates, which allows exactly this. Why not make this support visible in VB.NET?
VB.NET's creators chose to do this, allowing VB.NET programmers to create callbacks and other event-oriented code easily. Here's an example, the same one shown earlier in C#, of creating and using a delegate in VB.NET:
VB.NET allows creating and using delegates
Module Module1 Delegate Sub SDelegate(ByVal S As String) Sub CallDelegate(ByVal Write As SDelegate) System.Console.WriteLine("In CallDelegate") Write("A delegated hello") End Sub Sub WriteString(ByVal S As String) System.Console.WriteLine( _ "In WriteString: ", S) End Sub Sub Main() Dim Del As New SDelegate( _ AddressOf WriteString) CallDelegate(Del) End Sub End Module
Although it's written in VB.NET, this code functions exactly like the C# example shown earlier in this chapter. Like that example, this one begins by defining SDelegate as a delegate type. As before, SDelegate objects can contain references only to methods that take a single String parameter. In the example's Sub Main method, a variable Del of type SDelegate is declared and then initialized to contain a reference to the WriteString subroutine. (A VB.NET subroutine is a method that, unlike a function, returns no result.) Doing this requires using VB.NET's AddressOf keyword before the subroutine's name. Sub Main then invokes CallDelegate, passing in Del as a parameter.
CallDelegate has an SDelegate parameter named Write. When Write is called, the method in the delegate that was passed into CallDelegate is actually invoked. In this example, that method is WriteString, so the code inside the WriteString procedure executes next. The output of this simple example is exactly the same as for the C# version shown earlier in this chapter:
In CallDelegate In WriteString: A delegated hello
Delegates are another example of the additional features Visual Basic has acquired from being rebuilt on the CLR. While this rethinking of the language certainly requires lots of learning from developers using it, the reward is a substantial set of features.
Like arrays in C# and other CLR-based languages, arrays in VB.NET are reference types that inherit from the standard System.Array class. Accordingly, all of the methods and properties that class makes available are also usable with any VB.NET array. Arrays in VB.NET look much like arrays in earlier versions of Visual Basic. Perhaps the biggest difference is that the first member of a VB.NET array is referenced as element zero, while in previous versions of this language, the first member was element one. The number of elements in an array is thus one greater than the number that appears in its declaration. For example, the following statement declares an array of eleven integers:
Unlike Visual Basic 6, array indexes in VB.NET start at zero
Dim Ages(10) As Integer
Unlike C#, there's no need to create explicitly an instance of the array using New. It's also possible to declare an array with no explicit size and later use the ReDim statement to specify how big it will be. For example, this code
Dim Ages() As Integer ReDim Ages(10)
results in an array of eleven integers just as in the previous example. Note that the index for both of these arrays goes from 0 to 10, not 1 to 10.
VB.NET also allows multidimensional arrays. For example, the statement
Dim Points(10,20) As Integer
creates a two-dimensional array of integers with 11 and 21 elements, respectively. Once again, both dimensions are zero-based , which means that the indexes go from 0 to 10 in the array's first dimension and 0 to 20 in the second dimension.
While the CLR says a lot about what a .NET Framework-based language's types should look like, it says essentially nothing about how that language's control structures should look. Accordingly, adapting Visual Basic to the CLR required making changes to VB's types, but the language's control structures are fairly standard. An If statement, for example, looks like this:
VB . NET's control structures will look familiar to most developers
If (X > Y) Then P = True Else P = False End If
while a Select Case statement analogous to the C# switch shown earlier looks like this:
Select Case X Case 1 Y = 100 Case 2 Y = 200 Case Else Y = 300 End Select
As in the C# example, different values of x will cause y to be set to 100, 200, or 300. Although it's not shown here, the Case clauses can also specify a range rather than a single value.
The loop statements available in VB.NET include a While loop, which ends when a specified Boolean condition is no longer true; a Do loop, which allows looping until a condition is no longer true or until some condition becomes true; and a For…Next loop, which was shown in the example earlier in this section. And like C#, VB.NET includes a For Each statement, which allows iterating through all the elements in a value of a collection type.
VB.NET includes a While loop, a Do loop, a For...Next loop, and a For Each loop
VB.NET also includes a goto statement, which jumps to a labeled point in the program, and a few more choices. The innovation in the .NET Framework doesn't focus on language control structures (in fact, it's not easy to think of the last innovation in language control structures), and so VB.NET doesn't offer much that's new in this area.
The CLR provides many other features, as seen in the description of C# earlier in this chapter. With very few exceptions, the creators of VB.NET chose to provide these features to developers working in this newest incarnation of Visual Basic. This section looks at how VB.NET provides some more advanced features.
VB.NET exposes most of the CLR's features
As mentioned in Chapter 3, namespaces aren't directly visible to the CLR. Just as in C#, however, they are an important part of writing applications in VB.NET. As shown earlier in the VB.NET example, access to classes in .NET Framework class library namespaces looks just the same in VB.NET as in C#. Because the Common Type System is used throughout, methods, parameters, return values, and more are all defined in a common way. Yet how a VB.NET program indicates which namespaces it will use is somewhat different from how it's done in C#. Commonly used namespaces can be identified for a module with the Imports statement. For example, preceding a module with
VB . NET's Imports statement makes it easier to reference the contents of a namespace
Imports System
would allow invoking the System.Console.WriteLine method with just
Console.WriteLine( . . .)
VB.NET's Imports statement is analogous to C#'s using statement. Both allow developers to do less typing. And as in C#, VB.NET also allows defining and using custom namespaces.
One of the greatest benefits of the CLR is that it provides a common way to handle exceptions across all .NET Framework languages. This common approach allows errors to be found in, say, a C# routine and then is handled in code written in VB.NET. The syntax for how these two languages work with exceptions is different, but the underlying behavior, specified by the CLR, is the same.
Like C#, VB.NET uses Try and Catch to provide exception handling. Here's a VB.NET example of handling the exception raised when a division by zero is attempted:
As in C#, try/catch blocks are used to handle exceptions in VB.NET
Try X = Y/Z Catch System.Console.WriteLine("Exception caught") End Try
Any code between the Try and Catch is monitored for exceptions. If no exception occurs, execution skips the Catch clause and continues with whatever follows End Try. If an exception occurs, the code in the Catch clause is executed, and execution continues with what follows End Try.
As in C#, different Catch clauses can be created to handle different exceptions. A Catch clause can also contain a When clause with a Boolean condition. In this case, the exception will be caught only if that condition is true. Also like C#, VB.NET allows defining your own exceptions and then raising them with the Throw statement. VB.NET also has a Finally statement. As in C#, the code in a Finally block is executed whether or not an exception occurs.
VB.NET offers essentially the same exception handling options as C#
Code written in VB.NET is compiled into MSIL, so it must have metadata. Because it has metadata, it also has attributes. The designers of the language provided a VB-style syntax for specifying attributes, but the end result is the same as for any CLR-based language: Extra information is placed in the metadata of some assembly. To repeat once again an example from earlier in this chapter, suppose the Factorial method shown in the complete VB.NET example had been declared with the WebMethod attribute applied to it. This attribute instructs the .NET Framework to expose this method as a SOAP-callable Web service, as described in more detail in Chapter 7. Assuming the appropriate Imports statements were in place to identify the correct namespace for this attribute, the declaration would look like this in VB.NET:
A VB.NET program can contain attributes
<WebMethod()> Public Function Factorial(ByVal F _ As Integer) As Integer Implements IMath.Factorial
This attribute is used by VB.NET to indicate that a method contained in an .asmx page should be exposed as a SOAP-callable Web service. Similarly, including the attribute
<assembly:AssemblyCompanyAttribute("QwickBank")>
in a VB.NET file will set the value of an attribute stored in this assembly's manifest that identifies QwickBank as the company that created this assembly. VB.NET developers can also create their own attributes by defining classes that inherit from System.Attribute and then have whatever information is defined for those attributes automatically copied into metadata. As in C# or another CLR-based language, custom attributes can be read using the GetCustomAttributes method defined by the System.Reflection namespace's Attribute class.
Attributes are just one more example of the tremendous semantic similarity of VB.NET and C#. While they look quite different, the capabilities of the two languages are very similar. Which one a developer prefers will be largely an aesthetic decision.
VB.NET and C# offer very similar features | http://flylib.com/books/en/2.78.1.33/1/ | CC-MAIN-2013-20 | refinedweb | 3,466 | 57.16 |
I wanted to add more functionality to exercise 35 of Zed Shaw's LPTHW. The script runs without crashing and allows the player to revisit previous rooms, as I desired. However, in the bear_room the only way the player can get through is to scream at the bear, switching the bear_moved boolean to True.
If the player does this and then goes backward into the start room, it was my intent that the bear_moved boolean remained in the True position, meaning that the bear would still be moved away from the door upon re-entry.
That isn't happening when I run the script; Upon entering the bear_room for the first time I scream at the bear, causing it to move away from the door. I then back out of the room, returning to the start room. When I go back into the bear_room the bear has mysteriously plopped its fat self in front of the door again.
I placed the bear_moved boolean outside of the function just for this purpose--this was the only thing I could come up with to give that extra functionality to the program.
To recap, why doesn't the bear stay moved when I exit the bear_room and re-enter? How can I achieve the functionality I'm aiming for?
from sys import exit
bear_moved = False
def gold_room():
print "This room is full of gold. How much do you take?"
next = raw_input("> ")
if next.isdigit():
if int(next) > 101:
dead("You greedy bastard!")
elif int(next) < 101:
print "Nice, you're not greedy! You win!"
exit(0)
else:
pass
elif 'back' in next or 'backtrack' in next:
bear_room(bear_moved)
else:
print "Type a number!"
gold_room()
def bear_room(bear_moved):
if bear_moved == False:
print "There's a bear in here."
print "The bear has a bunch of honey."
print "The fat bear is in front of another door."
print "How are you going to move the bear?"
elif bear_moved == True:
print "The bear has moved away from the door."
else:
pass
next = raw_input("> ")
if 'honey' in next:
dead("The looks at you and then slaps your face off.")
elif 'taunt' in next or 'tease' in next or 'scream' in next or 'yell' in next and bear_moved == False:
bear_moved = True
bear_room(bear_moved)
elif 'taunt' in next or 'tease' in next or 'scream' in next or 'yell' in next and bear_moved == True:
dead("The bear flies into a rage and chews your leg off.")
elif 'open' in next and bear_moved == True:
gold_room()
elif 'open' in next and bear_moved == False:
dead("The bear, still standing in the way, lunges up and rips your throat out.")
elif 'back' in next or 'backtrack' in next:
start()
else:
print "I got no idea what this means."
bear_room(bear_moved)
def cthulhu_room():
print "Here you see the great evil Cthulhu."
print "He, it, whatever stares at you and you go insane."
print "Do you flee for your life or eat your head?"
next = raw_input("> ").lower()
if 'flee' in next:
start()
elif 'head' in next:
dead("Well that was tasy!")
else:
cthuhlu_room()
def dead(why):
print why, "Game Over!"
exit(0)
def start():
print "You're in a dark room."
print "You have no idea who you are or how you got here."
print "Your head is throbbing..."
print "There are two doors, one to your left, and one to your right."
print "Which door do you take?"
next = raw_input("> ").lower()
if 'left' in next:
bear_room(bear_moved)
elif 'right' in next:
cthulhu_room()
else:
start()
start()
Here's the top of your bear_room function, with the global functionality fixed. You still have to appropriately alter the calls to the routine.
You are still learning about Boolean values; note the changes I made in your tests. You don't have to test bear_moved == True; the variable itself is a True/False value. Comparing it against True does nothing. Comparing it against False is simply the not operation. I deleted your else: pass because the only way to reach that spot is if bear_moved is NaN (not a number), a value you shouldn't see in this program. Logically, you couldn't get to that clause at all.
Of course, there are still improvements to make in your program: fix the spelling and grammar errors, make the logic flow a little more cleanly (did you do any sort of flow chart for this?), and nest if statements to save work and reading time.
def bear_room(): global bear_moved if not bear_moved: print "There's a bear in here." print "The bear has a bunch of honey." print "The fat bear is in front of another door." print "How are you going to move the bear?" else: print "The bear has moved away from the door." | https://codedump.io/share/0j0RhAMU0Jum/1/why-doesn39t-the-bearboolean-remain-in-its-position | CC-MAIN-2017-17 | refinedweb | 786 | 81.53 |
Applicative do-notation
This is a proposal to add support to GHC for desugaring do-notation into Applicative expressions where possible.
Related proposals
Motivation
- Some Monads have the property that Applicative bind is more efficient than Monad bind. Sometimes this is really important, such as when the Applicative bind is concurrent whereas the Monad bind is sequential (c.f. Haxl). For these monads we would like the do-notation to desugar to Applicative bind where possible, to take advantage of the improved behaviour but without forcing the user to explicitly choose.
- Applicative syntax can be a bit obscure and hard to write. Do-notaiton is more natural, so we would like to be able to write Applicative composition in do-notation where possible. For example:
(\x y z -> x*y + y*z + z*x) <$> expr1 <*> expr2 <*> expr3vs.
do x <- expr1; y <- expr2; z <- expr3; return $ x*y + y*z + z*x
- Do-notation can't be desugared into Applicative in general, but a certain subset of it can be. For Applicatives that aren't also Monads, we would still like to be able to use the do-notation, albeit with some restrictions, and have an Applicative constraint inferred rather than Monad.
Clearly we need Applicative to be a superclass of Monad for this to work, hence this can only happen after the AMP changes have landed.
Since in general Applicative composition might behave differently from monadic bind, any automatic desugaring to Applicative operations would be an opt-in extension:
{-# LANGUAGE ApplicativeDo #-}
Stage 1
This section describes a transformation that could be performed during desugaring. It covers use case (1), but not (2). I'm describing this first because it is easy to understand by itself, and extends to a solution for (2). We might consider implementing this first.
Examples
do x <- A y <- B -- B does not refer to x return (f x y)
desugars to
do (x,y) <- (,) <$> A <*> B return (f x y)
Note that the tuples introduces like this will probably be optimised away when the Monad type is known and its bind can be inlined, but for overloaded Monad code they will not be. In Stage 2 (below) we'll get rid of the tuples in some cases.
In general we might have
do .. stmts1 .. x <- A y <- B z <- E[y] .. stmts2 ..
which we desugar to
do .. stmts1 .. (x,y) <- (,) <$> A <*> B z <- E[y] .. stmts2 ..
this is the best we can do: the rest of the do expression might refer to x or y.
So in general we want to take the largest consecutive sequence of statements where none of the rhs's refer to any of the bound variables, and lift them into an Applicative expression.
A non-binding statement can be considered to be a binding statement with a wildcard pattern.
do x <- A y <- B -- B does not refer to x C -- C does not refer to x or y return (f x y)
desugars to
do (x,y,_) <- (,,) <$> A <*> B <*> C return (f x y)
or we can be slightly more clever:
do (x,y) <- (,) <$> A <*> (B <* C) return (f x y)
What if there are more than 63(?) statements, and we don't have a tuple big enough? We have to desugar to nested tuples in this case. Not a huge problem, this is exactly what we do for pattern bindings.
Stage 2
This covers a more comprehensive transformation that would also enable us to drop a Monad constraint to an Applicative constraint in the typing of do expressions for a certain well-defined subset of the do syntax.
Back to our first example:
do x <- A y <- B -- B does not refer to x return (f x y)
we can go further in desugaring this:
(\x y -> f x y) <$> A <*> B
(obviously the lambda expression can be eta-reduced in this example, but that's not the case in general).
For this to work we have to recognise "return". Or perhaps "pure".
There are two advantages here:
- This code could be typed with an Applicative constraint rather than Monad.
- It leads to more efficient code when the Monad type is not known, because we have eliminated the intermediate pair.
What if the final expression is not a "return"?
do x <- A y <- B -- B does not refer to x f x y
this is
join ((\x y -> f x y) <$> A <*> B)
Note: *not* an Applicative, because "join" is a Monad operation. However we have eliminated the pair.
Problems:
- desugaring comes after typechecking, so the type checker would need its own version of the desugaring rules to be able to tell when a do expression can be fully desugared to Applicative syntax. | https://ghc.haskell.org/trac/ghc/wiki/ApplicativeDo?version=2 | CC-MAIN-2016-40 | refinedweb | 786 | 59.23 |
Jomo Fisher--I'm taking some time now to better understand F#. Right now, I understand the concept of functional code and have some narrow but deep experience by way of the work we did on LINQ for C# 3.0. My goal is to understand F# as well as I understand C#.
First, I downloaded and installed F# from here. Nice that there is source code for the compiler in there. I'll take a closer look at that down the road.
For now, a first program. I put this into a file test.fs:
let rec factorial n = if n <= 1 then 1 else n * factorial (n-1)
let rec factorial n = if n <= 1 then 1 else n * factorial (n-1)
That 'rec' next to the 'let' means recursive. So, unlike C# and VB, F# requires an explicit claim of recursiveness. This would let me grep for all recursive functions in a project. Also there's no class definition required. This is actually pretty nice because it gives a very simple starting point. I don't see a reason that C# couldn't work this way as well.
Compile it:
fsc test.fs
fsc test.fs
And get a warning:
test.fs(1,0): warning: Main module of program is empty: nothing will happen when it is run.
test.fs(1,0): warning: Main module of program is empty: nothing will happen when it is run.
Ok, so this gave me an .exe called test.exe which I expect to have a factorial function in it somewhere. When I put test.exe under Reflector (thanks, as always, Lutz Roeder) and disassemble into C# I see this:
[CompilationMapping(SourceLevelConstruct.Module)]
public class Test
{
// Methods
public static int factorial(int n)
{
if ((n > 1) ^ true)
{
return 1;
}
return (n * factorial(n - 1));
}
}
First, that CompilationMapping attribute is coming from Microsoft.FSharp.Core. I assume this attribute means 'the global methods for this module'. The Test class is in the global namespace. That (n>1)^true is a roundabout way of saying n<=1. Not the most readable thing but I'm not faulting F# for not being disassembleable into pretty C#. Also, I'm not worried about performance. For all I know, the jitter reduces this just fine. If it doesn't, I'm still not worried because performance typically comes from good algorithms, not micro-optimizations.
This posting is provided "AS IS" with no warranties, and confers no rights.
Jomo Fisher--I was curious about type inference in F# and I wondered what would happen if there was really
Interesting that the "rec" declaration for this method is not captured in the metadata. I might expect to see an attribute representing this to make it inspectable at runtime.
Acutally, the rec keyword would be a lovely definition to C# lambdas. Making them recursive isn't the prettiest thing in the world.
Jomo Fisher-- Easily my favorite feature of F# so far is the combination of discriminated union and pattern | http://blogs.msdn.com/b/jomo_fisher/archive/2007/09/12/adventures-in-f-the-lay-of-the-land.aspx | CC-MAIN-2015-18 | refinedweb | 501 | 67.55 |
soundwave.py: 22 points
We’ve implemented a constructor for our
SoundWave class, but now we need some way to save the tone we’ve generated. Let’s do this by adding a method called
save(self, filename) to our
Soundwave class. This method should take one parameter,
filename, (in addition to
self, since all methods take
self as their first parameter). The
save() method will need to make use of the
audio.py module that has been provided to you.
The
audio.py module contains several useful functions for converting your waveform samples into a format necessary for saving WAV audio files. Be sure to
import audio at the top of your
soundwave.py file. Note that you shouldn’t need to change any code in
audio.py.
The audio module function that you’ll need to call is
audio.save(filename, samples, sample_rate). As you can see in the function definition, this function requires you to pass a
filename,
self.samples, and
self.sample_rate as arguments.
You should now be able to run the provided file
middlec.py. When you run this program, it should create a new file in the Files pane on the left side of your replit project called
middlec.wav. If you click on that file and push the play button (the triangle pointing to the right) in the middle of your browser, you should hear a single note (middle C) for approximately 2 seconds. If it works, great, continue onward! Otherwise you’ll need to track down some bugs.
Here’s what your file should sound like: | https://www.cs.oberlin.edu/~cs150/lab-8/part-2/ | CC-MAIN-2022-21 | refinedweb | 264 | 75.3 |
.
A full bio can be found on the web at:
With Office XP, Microsoft introduced a new technology called Smart Tags. No doubt this technology is fascinating, but that also left me wondering what to do with it. I watched Microsoft demo Smart Tags on several occasions; typically, the presenter demonstrated Smart Tags by typing a stock symbol (such as "MSFT") into Microsoft Word. Word then recognized the symbol and underlined the symbol using a red dotted line. When the mouse was moved over the symbol, a little icon appeared and, when clicked, it presented a menu with action items that are related to trading stock (see Figure 1).
While this is very cool, I always wondered how much use it would be to me. First of all, I hardly ever receive documents or emails that have stock symbols in them. Secondly, even if they did, who says I wanted to use Microsoft's stock site (which is where the action items would take me). No, to me, this was not useful at all.
But, as Microsoft claimed, this was just an example. There were other things Word would recognize as well, such as dates and times. This would allow me to directly schedule meetings if someone sent me an email proposing one. That seemed to make more sense, although most of the meeting requests I get already utilize Outlook's (or Exchange's) meeting features. So all this Smart Tag stuff was kind of like supermodels: sexy, but of no direct use to me.
The technology itself, however, had me intrigued, especially the ability to build more useful Smart Tags myself. Think about it: what are the chances that Microsoft will embed SmartTags that directly make sense for your business? However, what if you could get Word, Excel or Outlook to recognize product names that are linked to your order fulfillment system? What if your users could place, fulfill and track customer orders right from within their email? You could provide a whole new user experience. There is little doubt in my mind that users would be more productive in this kind of environment.
Creating the Recognizer
So, let's go ahead and implement our own Smart Tag! Here is the basic concept: a developer can create Smart Tags by implementing a few simple Smart Tag interfaces. This means that you have to build a COM component containing certain methods and properties that can be called by Word and other Office XP applications.
You can use the Visual FoxPro 7 Object Browser (see Figure 2) to explore the Microsoft Smart Tags 1.0 Type Library. There are two interfaces that we will use in this example: ISmartTagRecognizer and ISmartTagAction. The first interface is used to find useful information within documents (strings). The ISmartTagAction interface is used to actually execute desired actions for a discovered phrase.
To start implementing a Smart Tag, create a new PRG, and drag and drop both interfaces from the Object Browser into the VFP 7 source editor. This creates two classes that implement each of the interfaces. In my example, I chose to combine both interface implementations into one object, but whether you have two objects with one interface each, or one object that combines both, doesn't matter.
Listing 1 shows the code that implements our complete Smart Tag. Before we can do anything else, our Smart Tag recognizer engine needs to scan the text passed to it to find matches (product names, in our case). For this example, I used the products table that ships as a sample with Visual FoxPro 7. Whenever an Office XP application is ready for the recognizers to parse the available text, it will call the Recognize() function on the ISmartTagRecognizer interface.
The first parameter passed to this method is the actual text to be parsed. I'm using the FoxTools.fll and its Words() and WordNum() functions to search through the text word by word (see sidebar). I then try to locate records in the product database that contain items whose names start with the word I just identified (I check in two different fields). Note that you never know the length of the product name. Therefore, it's difficult to perform a search on the full product name. It would be much easier to search based on a product ID, but I think it is very unlikely that I'll receive an email that contains our internal IDs.
Whenever I find a match in the database, I take the entire product name (which I just identified as a possible match), and verify whether the entire name in fact does occur in the string. If so, we have a match and it's time to communicate that to the host application. We can do so through the RecognizerSite object that is also passed to the Recognize() method as a parameter (in my example, this parameter is called RecSite to keep the listing more readable in our narrow column width).
So, let's think about what we are trying to accomplish here. First of all, we would like to tell Word (or whatever application we are running), that we have found a string to which we can link some action. However, there is some information needed to perform that action that Word does not have?the product ID, for instance. The way to preserve this information is to use the "Property Bag." This is simply an object that can be used to store values, which can be retrieved later when needed (as would be the case if the user clicked on our Smart Tag). We can use the RecognizerSite to generate a new property bag for us. We then store the ItemID into the property bag by calling its Write() method.
OK, that's it! We can commit our Smart Tag to the host by calling the CommitSmartTag() method on the RecognizerSite object. The method requires four parameters: First, we need to pass a unique identifier, which has to conform to namespace rules. This makes sure the host application can identify the Smart Tag in case other recognizers are also in use. The second parameter is the start position of the identified word, and the third is the length. Finally, the fourth parameter is the property bag we just created.
Once the Smart Tag is committed to the host application, we scan the string word by word until we reach the end.
You are probably wondering why I went through all the trouble of traversing the string word by word. I could have also simply scanned the entire products table and compared each product name with the string I received. This would certainly work. However, if I had a very large product table, this would get very slow; since Office XP calls this method quite often, this would be a bad idea. By looking for individual words first, I eliminate the majority of records and speed up the process significantly.
You will notice that the ISmartTagRecognizer interface implements several properties, as well. Most of them are descriptive in nature and you can modify them at will. There are a few that are important, though: the ProgID has to be the class name of the COM component you are about to create. In my example, I am using a project file named "Order," and the class itself is called "Recognizer." Hence, the class name is "Order.Recognizer." The count is also important, because it identifies how many different kinds of Smart Tags the recognizer implements. In our example, there is only one. And finally, there is the Smart Tag name, which is the same unique name we pass to the CommitSmartTag() method. The two have to match or the host application will not be able to identify the smart tag when the user clicks it.
One other method that's fairly interesting is the download URL. Once a Smart Tag is used in a document, it can embed a URL pointing to the Smart Tag recognizer into the actual document. This way, you can provide the document to a user who doesn't have the recognizer. The document itself would always be functional, but the recognizer functionality would be gone. But if there is a URL embedded, the second user can simply download the custom recognizer.
Implementing the Associated Actions
At this point, we can find product information within a document, but we cannot perform any actions when the user clicks on our Smart Tag.
I intend to implement five different actions associated with products: first, show the actual item to see its details; second, place an order for the item; third, show all orders that are currently pending for the item; fourth, display sales statistics for the item; and fifth, forward product information to a customer.
Whenever a user clicks on a Smart Tag, the host application queries the number of associated actions using the VerbCount property. It passes along the Smart Tag name, just in case. Whenever our name shows up, we simply return the number 5.
Then, the host application queries the names and description of each item, using the VerbNameFromID and VerbCaptionFromID properties. The parameter passed along is the action item number for the name to be queried. We simply return the names of the action items (which usually end up as menu items on the Smart Tag).
This information is sufficient for the host application to build a Smart Tag menu. Once the user clicks one of the menu items, the host application runs the InvokeVerb() method on our recognizer and action object. Again, the VerbID is passed along and we use a simple case statement to execute the appropriate action. In my example, I use a Hyperlink object to navigate to our Web-based order entry system. Note that I'm using the PropertyBag (which is passed along to this method) to retrieve the ItemID (the first property in the bag) and add it to the URL string.
Of course, we could have performed all kinds of actions based on the capabilities of Visual FoxPro. However, make sure to call out to some other process (perhaps by instantiating another COM component or navigating to another application or Web page) to perform a lengthy process. In Visual FoxPro, there is no way to launch a custom thread within a process. Therefore, the Smart Tag Action object would be kept busy until your process finishes. The results of that might be unexpected: from the host application not responding until the action object is done, to a complete crash (I haven't seen this myself, but the Smart Tag SDK makes a big effort to point this out).
Note that you also have the opportunity to communicate with the host application at this point. The second parameter provides the name of the host application (so you can tailor your actions according to the host and send a string to Word or set a cell in Excel, for example). Parameter number three is a reference to the actual application.
Registering our Recognizer and Action Object
Once we compile our component, we are almost done. The only step left is to register the object so Office XP can find and use it. Once the Smart Tag is registered, Office XP will automatically load it and call it regularly whenever the contents of a document change.
To register our component, we need to find out the ClassID (guid) that has been generated for it. This is rather easy. Open the Registry (choose "Run" from the Start menu and type "regedit"). Then, drill down into "My Computer / HKEY_CLASSES_ROOT / Order.Recognizer / CLSID," and double-click the "(Default)" item. Copy the value (guid) into the Clipboard.
Now, we need to add this ClassID to the recognizer and action objects used by Office XP. To do this, drill down into the "My Computer / HKEY_CURRENT_USER / Software / Microsoft / Office / Common / Smart Tag" node. Within that node, you will find two sub-nodes called "Actions" and "Recognizers." Right-click each tag, select "New / Key" and paste the class id as the key name (see Figure 3). Voila!
Note: Make sure to shut down all Office XP applications, such as Word and Outlook, before you make these changes. Oh, by the way: I'm sure you know that changing any of the other stuff in the Registry can be troublesome? OK, just checking.
Now, let's go ahead and test our Smart Tag! Start up Word, and type some text that contains any of the product names found in the products table. If everything went fine, they should be recognized almost immediately. Figures 4, 5 and 6 show our Smart Tag in action.
Conclusion
Creating Smart Tags is not very difficult. Once you get used to the terminology used to implement COM interfaces, you will find it very straightforward. Microsoft has been criticized for adding too much default Smart Tag functionality. For this reason, much of the default Smart Tag functionality has been eliminated from Internet Explorer. Custom Smart Tags however, enable knowledge workers to be more productive in their everyday work environment.
Let me know what uses you come up with for Smart Tag technology, or drop me an email in case you have any questions!
Markus Egger | https://www.codemag.com/article/0201101 | CC-MAIN-2019-13 | refinedweb | 2,212 | 62.38 |
Arch IFC
The Arch and Build Information Modeling (BIM) workbenches feature an Industry Foundation Classes (IFC) importer and exporter. The IFC format is a continuously growing widespread format to interchange data between BIM applications, used in architecture and engineering.
Both the importer and exporter depend on an external piece of open-source software, called IfcOpenShell, which might or might not be bundled with your version of FreeCAD, depending on the platform and where you obtained your FreeCAD package from. If IfcOpenShell is correctly installed, it will be detected by FreeCAD and used to import and export IFC files. An easy way to check if IfcOpenShell is present and available, is either to try to import or export an IFC file, or simply enter the following in the FreeCAD Python Console (found under menu View → Panels):
import ifcopenshell
If no error message appears, everything is fine, IfcOpenShell is correctly installed. Otherwise, you will need to install it yourself. Read on.
Note: The BIM Setup tool will look for IfcOpenShell too and issue a notification if it is not installed.
Note: The Arch Workbench used to feature in the past a simpler IFC importer that doesn't depend on IfcOpenShell. It is still possible to force the use of that old python IFC importer, by enabling the related option in the Arch preferences settings. But this importer has been discontinued, might not work properly, and will only be able to import a very small subset of IFC objects.
The use of IfcOpenShell is highly recommended, since it is much faster and more powerful than the internal parser. We think it is one of the best IFC handlers out there...
Obtaining IfcOpenShell
On the IfcOpenShell website, you will find download links for the various utilities that compose the IfcOpenShell program. What FreeCAD needs is the one called IfcOpenShell-Python. You must take care of choosing the correct architecture for your operating system (32bits or 64bits), and also need the exact same Python version as FreeCAD. The Python version used by FreeCAD is indicated on the first line of the FreeCAD Python Console, found under menu View → Panels. You need a version of IfcOpenShell with the same two first numbers. The third number is not important. For example, if your FreeCAD Python version is 3.7.4, you need an IfcOpenShell version 3.7.
IfcOpenBot
The packages available on the IfcOpenShell website are, however, usually very old and don't support more recent Python versions. We therefore recommend you to use another service provided by the developers of IfcOpenShell, which is called IfcOpenBot. It is an automated system that builds a series of packages from time to time from the IfcOpenShell source code. To download one of these packages, click the "Commits" link on the GitHub repository, and locate commits that have a comment (a small "message" icon). Those comments are where you will find the packeges built by IfcOpenBot.
At the time of writing, the current stable version of IfcOpenShell, which lies in its "master" branch, is v0.5. However, v0.6 is already very stable and contains many improvements such as support for both IFC2x3 and IFC4 at the same time. We recommend you to switch the "branch" button to v0.6 and use one of these instead. Again, make sure you download the correct package for your version of FreeCAD.
Compiling
You can also compile IfcOpenShell yourself, of course. As it has almost the same dependencies as FreeCAD, if you are already compiling FreeCAD yourself, compiling IfcOpenShell will be very straightforward and will normally not require any additional dependency.
Installing IfcOpenShell
The package you downloaded from one of the above locations is a zip file that contains a folder named "ifcopenshell" with several other files and folders inside. To "install" it simply means make this ifcopenshell folder found by Python (so the import ifcopenshell command we used above succeeds). The list of folders where Python looks for modules can be obtained by entering these two lines in the FreeCAD Python Console:
import sys for p in sys.path: print(p)
and press enter twice.
To install IfcOpenShell, just unzip the downloaded package, and place the "ifcopenshell" folder in any of the locations issued by the commands above.
You will notice that some of these locations are system folders (the officially recommended locations are the "site-packages" or "dist-packages" folders), which will make IfcOpenShell installed system-wide and available to other applications such as Blender, but you might prefer to not pollute your system folders with something copied by hand, and place it in one of the folders of FreeCAD itself. Good suggestions are FreeCAD's "bin" folder, or the macros folder (which you can also obtain from menu Macro → Macros)
Once you copied your ifcopenshell folder at one of these locations, test that it works correctly by entering:
import ifcopenshell
in the Python console of FreeCAD. If no error appears, you are good to go.
Importing
All IfcProduct-based entities from IFC2x3 or IFC4 files will be imported into the FreeCAD document. The IFC preferences settings allow you to set how the IFC objects are imported: as full parametric Arch objects (the geometry will, as much as possible, be editable in FreeCAD), as non-parametric Arch objects (objects will carry IFC information and properties but will not be editable), as non-parametric Part shapes (the geometry will be faithfully rendered but IFC information will be discarded), or as one Part shape per floor (one all-in-one object, just for reference). Each of these types looses, App Parts, and new structures such as | https://wiki.freecadweb.org/Arch_IFC/es | CC-MAIN-2020-16 | refinedweb | 933 | 58.42 |
IPAM - What IOS versions are supportedgmherring Oct 7, 2010 12:02 PM
I was wondering if there is a list of support IOS versions for pulling DHCP from Cisco devices.
I have two Cisco 4500 series switches one running version 12.2(53)SG3 that works and another 12.2(31)SGA10 that doesn't
I have also successfully pulled 3550 and 3750 switches, but my 6500s don't work either.
The error I get seems to be related to the existence of the show ip dhcp pool command.
Please don't tell me that you guys designed this around the existence of a command that was only just introduced.
Re: IPAM - What IOS versions are supportedmavturner
Oct 7, 2010 3:07 PM (in response to gmherring)
gmherring,
You are correct in saying we use the 'show ip dhcp pool' command to find information regarding the DHCP pools. This was introduced in 12.2(8)T.
For the devices having a problem, please try to run this command directly and see what response you get. Are those other devices running DHCP?
Mav
Re: IPAM - What IOS versions are supportedjawill8301 Jul 1, 2011 9:53 AM (in response to mavturner)
Mav,
I have just upgraded IOS code on some 3550's we have that run DHCP servers.
Cisco IOS Software, C3550 Software (C3550-IPBASEK9-M), Version 12.2(44)SE6, RELEASE SOFTWARE (fc1)
I still recieve this error:Command Failed
Dhcp Server scanning failed with error:Pools data processing with error 'Invalid input detected' at command 'show ip dhcp pool'Any help is appreciated.
Re: IPAM - What IOS versions are supportedjawill8301 Jul 1, 2011 10:52 AM (in response to jawill8301)
As a followup I did try the command "sh ip dhcp pool". That command is not available. Only the ones listed below.
sh ip dhcp ?
binding DHCP address bindings
conflict DHCP address conflicts
database DHCP database agents
import Show Imported Parameters
relay Miscellaneous DHCP relay information
server Miscellaneous DHCP server information
snooping DHCP snooping
Re: IPAM - What IOS versions are supportedmavturner
Jul 1, 2011 11:54 AM (in response to jawill8301)
I assume you have seen this document. Unfortunately, if those commands do not work, then IPAM will not work.
If you can't run the commands, then there is likely a bug in that IOS version. The best way to resolve this is to open a TAC case or upgrade to a newer version of IOS.
This site says that the command was introduced in 12.2(8) T.
Sorry I couldn't be more helpful here. Anyone else?
Re: IPAM - What IOS versions are supportedjawill8301 Jul 1, 2011 12:21 PM (in response to mavturner)
Hi Mav,
Yes, I did see the articles you mentioned. I just didn't know if there was another way to poll that information. I can see the DHCP server information in the running-config. Just the "show ip dhcp pool" command is not available on the 3550 and 3750 switches.
Re: IPAM - What IOS versions are supportedrulob Oct 19, 2012 4:31 PM (in response to gmherring)
Hello,
We are having the same problem, but we have CISCO Switches 3560. So you should add them to the list of non-compatible Switches. Unfortunately IPAM will be a no go for us since most of our switches are 3560.
Regards
Raul | https://thwack.solarwinds.com/message/146374 | CC-MAIN-2019-22 | refinedweb | 558 | 70.73 |
I recently released an iOS and Android application called OTP Safe to iTunes and Google Play. OTP Safe makes use of the time-based one-time password (TOTP) algorithm commonly used with two-factor authentication (2FA). How exactly, does this algorithm work, and how can we make it work with JavaScript?
Using the following resources as our framework, we can make use of the TOTP algorithm quickly and easily:
For TOTP to work, we are going to need to make use of an HMAC function. JavaScript doesn’t natively have one, but lucky for us there is a great open source library called jsSHA that we can use.
A little background on two-factor authentication and time-based one-time passwords in general. Two-factor authentication is an extra layer of security that many web services offer. You enter your normal username and password followed by a six digit code that changes every thirty seconds. This six digit code is determined based on a secret key that both you and the web service share. The code changes based on the machines system time so it is important the web service and device have accurate times configured.
A complete and working version of my code can be seen below:
TOTP = function() { var dec2hex = function(s) { return (s < 15.5 ? "0" : "") + Math.round(s).toString(16); }; var hex2dec = function(s) { return parseInt(s, 16); }; var leftpad = function(s, l, p) { if(l + 1 >= s.length) { s = Array(l + 1 - s.length).join(p) + s; } return s; }; var base32tohex = function(base32) { var base32chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"; var bits = ""; var hex = ""; for(var i = 0; i < base32.length; i++) { var val = base32chars.indexOf(base32.charAt(i).toUpperCase()); bits += leftpad(val.toString(2), 5, '0'); } for(var i = 0; i + 4 <= bits.length; i+=4) { var chunk = bits.substr(i, 4); hex = hex + parseInt(chunk, 2).toString(16) ; } return hex; }; this.getOTP = function(secret) { try { var epoch = Math.round(new Date().getTime() / 1000.0); var time = leftpad(dec2hex(Math.floor(epoch / 30)), 16, "0"); var hmacObj = new jsSHA(time, "HEX"); var hmac = hmacObj.getHMAC(base32tohex(secret), "HEX", "SHA-1", "HEX"); var offset = hex2dec(hmac.substring(hmac.length - 1)); var otp = (hex2dec(hmac.substr(offset * 2, 8)) & hex2dec("7fffffff")) + ""; otp = (otp).substr(otp.length - 6, 6); } catch (error) { throw error; } return otp; }; }
The above TOTP code can be used like the following:
var totpObj = new TOTP(); var otp = totpObj.getOTP("SECRET_HERE");
Don’t forget that you will need to include these libraries in your HTML file like so:
<html> <head> <script src="sha.js"></script> <script src="totp.js"></script> </head> <body> </body> </html>
Just like that, you now have a perfectly capable way to generate six digit passwords for two-factor services. You can even use this code, like I did, in your JavaScript based Phonegap or Ionic Framework application. | https://www.thepolyglotdeveloper.com/2014/10/generate-time-based-one-time-passwords-javascript/ | CC-MAIN-2022-21 | refinedweb | 473 | 67.04 |
The ODMG API is an implementation of the
ODMG 3.0 Object Persistence API.
The ODMG API provides a higher-level API and query
language based interface over the
PersistenceBroker API.
More detailed information can be found in the ODMG-guide
and in the other reference guides.
This tutorial operates on a simple example class:
package org.apache.ojb.tutorials;
public class Product
{
/* Instance Properties */
private Double price;
private Integer stock;
private String name;
/* artificial property used as primary key */
private Integer id;
/* Getters and Setters */
...
}
The metadata descriptor for mapping this class is described in the
mapping tutorial
When using 1:1, 1:n and m:n references (the example doesn't use it) the ODMG-api
need specific metadata settings on relationship definition, the mandatory settings
are listed in the
ODMG-Guide - additional info see
auto-xxx settings and
repository file settings.
As with the other tutorials, the source code for this tutorial is
contained in the tutorials-src.jar which can be downloaded
here. The source files
are contained in the org/apache/ojb/tutorial2/ directory.
You can try it out with the ojb-blank project which can be downloaded from
the same place and is described in the
Getting started section.
Further information about the OJB odmg-api implementation can be found in
the ODMG guide.
The ODMG implementation needs to have a database opened for it to access.
This is accomplished via the following code:
Implementation odmg = OJB.getInstance();
Database db = odmg.newDatabase();
db.open("default", Database.OPEN_READ_WRITE);
/* ... use the database ... */
db.close();
With method call OJB.getInstance() always a new
org.odmg.Implementation instance will be created and
odmg.newDatabase() returns a new Database instance.
Call db.open(...) opens an ODMG
Database using the name specified in
metadata for the database -- "default" in
this case. Notice the Database is opened in read/write mode. It is possible to open it in read-only or write-only
modes as well.
Once a Implementation instance is created and a
Database has been opened it is available for use. Unlike
PersistenceBroker instances, ODMG
Implementation and Database instances
are threadsafe and can typically be used for the entire lifecycle of an application.
There is no need to call the
Database.close() method until the database
is truly no longer needed.
The
OJB.getInstance() function provides the ODMG
Implementation
instance required for using the ODMG API. From here on out it is straight ODMG
code that should work against any compliant ODMG implementation.
Persisting an object via the ODMG API is handled by writing it to the peristence
store within the context of a transaction:
public static void storeNewProduct(Product product)
{
// get the used Implementation instance
Implementation odmg = ...;
Transaction tx = odmg.newTransaction();
tx.begin();
// get current used Database instance
Database db = odmg.getDatabase(null);
// make persistent new object
db.makePersistent(product);
tx.commit();
}
Once the ODMG implementation has been obtained it is used to begin a transaction,
obtain a write lock on the
Product, and commit the transaction. It is
very important to note that all changes need to be made within transactions in the
ODMG API. When the transaction is committed the changes are made to the database. Until
the transaction is committed the database is unaware of any changes -- they exist
solely in the object model.
The ODMG API uses the OQL query language for obtaining references to persistent objects.
OQL is very similar to SQL, and using it is very similar to use JDBC. The ODMG
implementation is used to create a query, the query is specifed, executed, and a
list fo results is returned:
public static Product findProductByName(String name) throws Exception
{
// get the used Implementation instance
Implementation odmg = ...;
Transaction tx = odmg.newTransaction();
tx.begin();
OQLQuery query = odmg.newOQLQuery();
query.create("select products from "
+ Product.class.getName()
+ " where name = $1");
query.bind(name);
List results = (List) query.execute();
Product product = (Product) results.iterator().next();
tx.commit();
return product;
}
Updating a persistent object is done by modifying it in the context of a transaction,
and then committing the transaction:
public static void sellProduct(Product product, int number)
{
// get the used Implementation instance
Implementation odmg = ...;
Transaction tx = odmg.newTransaction();
tx.begin();
tx.lock(product, Transaction.WRITE);
product.setStock(new Integer(product.getStock().intValue() - number));
tx.commit();
}
The sample code obtains a write lock on the object (before the changes are made),
binding it to the transaction, changes the object, and commits the transaction. The newly modified
Product
now has a new
stock value.
Deleting persistent objects requires directly addressing the
Database which
contains the persistent object. This can be obtained from the ODMG
Implementation by asking for it. Once retrieved, just ask the
Database to delete the object. Once again, this is all done in the context
of a transaction.
public static void deleteProduct(Product product)
{
// get the used Implementation instance
Implementation odmg = ...;
Transaction tx = odmg.newTransaction();
tx.begin();
// get current used Database instance
Database db = odmg.getDatabase(null);
db.deletePersistent(product);
tx.commit();
}
It is important to note that the
Database.deletePerstient() call does
not delete the object itself, just the persistent representation of it. The transient
object still exists and can be used however desired -- it is simply no longer
persistent.
by Brian McCallister | http://db.apache.org/ojb/docu/tutorials/odmg-tutorial.html | CC-MAIN-2013-48 | refinedweb | 869 | 50.12 |
That there's a Primary_carbon. Isn't that a Alkylchloride? Wake up and smell the Ketone, cause that's what it is. Most people wouldn't realise this is a C_ONS_bond. It's a 1,3-Tautomerizable (or your money back). I don't believe it, it's a Rotatable_bond! Wake up and smell the CH-acidic, cause that's what it is.
And here's the code (requires Pybel, and the file SMARTS_InteLigand.txt):
import sys
import random
import pybel
def readsmartsfile(filename="SMARTS_InteLigand.txt"):
patterns = []
inputfile = open(filename, "r")
for line in inputfile:
line = line.strip()
if line and line[0]!="#":
colon = line.find(":")
name = line[:colon]
smarts = line[colon+1:].strip()
patterns.append([pybel.Smarts(smarts), name])
return patterns
phrases = ["I don't believe it, it's a %s!",
"Isn't that a %s?",
"It's a whadyamacallit, a %s.",
"Looks like a %s to me.",
"That there's a %s.",
"Most people wouldn't realise this is a %s.",
"It's a %s (or your money back).",
"Wow, a %s. Last time I saw one of these, I hit the"
"fire alarm and ran.",
"Could be a %s...yes, I'm sure of it.",
"It's a %s if I've ever seen one.",
"Wake up and smell the %s, cause that's what it is.",
"It's a %s. I wish I had one.",
"You've hit the jackpot, you and your %s!",
"A %s. You know, back in the day, we used to have"
"fun with these.",
"It takes me back years, this %s does."]
if __name__=="__main__":
if not len(sys.argv)==2:
sys.exit("You need a SMILES string like CC(=O)CCl")
molecule = pybel.readstring("smi", sys.argv[1])
print "So you want me to tell you about %s, do you?\n" % (
sys.argv[1],)
patterns = readsmartsfile()
print " ".join([random.choice(phrases) % name for
smarts, name in patterns if smarts.findall(molecule)])
1 comment:
That's as mad as a box of %s's, that is! | http://baoilleach.blogspot.com/2008/03/madmol-chemistry-aware-code.html | CC-MAIN-2013-48 | refinedweb | 335 | 89.24 |
Content
All Articles
Python News
Numerically Python
Python & XML
Community
Database
Distributed
Education
Getting Started
Graphics
Internet
OS
Programming
Scientific
Tools
Tutorials
User Interfaces
ONLamp Subjects
Linux
Apache
MySQL
Perl
PHP
Python
BSD
Shortly after I wrote my news article on the Python wiki program MoinMoin, Jürgen Hermann announced a new version with this notice:
This is a security update, which explains the short
release cycle. It replaces some exec() calls by __import__(), which is much safer (or actually, safe in contrast to totally unsafe). IF YOU HAVE INSTALLED RELEASE 0.5 OR 0.6, UPGRADE NOW!
exec()
__import__()
Because of the short release cycle, the code couldn't be very different between 0.6 and 0.7. That would make changes easy to spot. I love learning opportunities. I got a copy of 0.6 to compare with 0.7. There are several places in MoinMoin where Hermann wanted to use the same code but choose between different libraries of functions to be used by that code. This is called a plug-in. Hermann uses plug-ins to select formatters based on mime-type, or to select a particular parser or extension macro set based on the type of information submitted from a form. The name of the plug-in module to use is taken from information passed by CGI.
For example, here is the code used to import a formatter:
exec("from formatter.%s import Formatter" %
(string.replace(mimetype, '/', '_'),))
However, mimetype is taken from information passed to the program from an untrustworthy outside source. This may not seem like a big security leak, but any leak is a dangerous thing. Someone could monkey with the string passed to mimetype. The exec function will execute whatever code is passed to it. Using exec, you might unwittingly execute a dangerous block of code. Here is how Hermann fixed it. He replaced these exec strings with this utility function
mimetype
exec
def importName(modulename, name):
""" Import a named object from a module in the context of this function,
i.e. use fully qualified module paths.
Return None on failure.
"""
try:
module = __import__(modulename, globals(), locals(), [name])
except ImportError:
return None
return vars(module)[name]
The __import__ function only imports the specified code. If the mimetype (passed to this function as modulename) were manipulated now, you would only get an import error, not a surprise intruder.
__import__
modulename
It's errors like these that the taint mode in Perl was designed to catch. In this paranoid mode, information received from outside the program, like the string assigned to mimetype in this case, is rejected as tainted unless it is first checked by a regular expression. It throws in an extra step designed to make you stop and think before leaving the barn door open.
While Python doesn't have an equivalent to taint mode, it does have a module to help you restrict what might be dangerous code, the restricted execution or RExec module. The RExec module is not needed in this instance because Hermann only needs to import installed code, not evaluate user supplied code. For this purpose, the __import__ function does nicely. If, however, you work with a lot of CGI, you should study up on REx. | http://archive.oreilly.com/pub/a/python/2000/12/13/pythonnews.html | CC-MAIN-2015-27 | refinedweb | 538 | 56.55 |
Install GoDaddy SSL on Red Hat Openshift
Deprecated
Openshift v2.0 has now reached End of Life and will be replaced by v3.0.
Red Hat Openshift makes it really easy to install both SSL and use a custom domain. The first step is to upgrade your Openshift to Bronze so that you can gain access to the SSL form feature.
You will also need to purchase an SSL certificate. I purchased mine from GoDaddy.
Setup
The first thing you'll want to do is ensure that you've installed Red Hat's command-line tool using these instructions.
Step 1 - SSH into your Openshift app
Use this
rhc command to log into your app.
rhc ssh -a <app name> --namespace <namespace>
Note: Your
--namespace is usually the name right after your app name. For example:
Step 2 - Change directory
cd ~/app-root/data
Step 3 - Reviewing Openshift File Structure
If you'd like to understand more about the Openshift file structure, here is an excellent diagram.
Create an SSL certificate for Godaddy
Before you generate a CSR, you need to first generate a private key. This private key will be installed on the server together with the issued certificate. A private key should never be shared with anyone and should be protected by a passphrase. There are two ways to generate the CSR and private key.
Step 4 - Create an RSA Private Key
The following command will generate a 2048 bit RSA Private Key and stores it in the file appName.key.
openssl genrsa -des3 -out myPrivKey.key 2048
Step 5 - Create a Certificate Signing Request
After you have generated the private key, use the following command to generate the CSR.
openssl req -new -key myPrivKey.key -out myCert.csr
Step 6 - Complete CSR Form
You will be prompted to enter the some of the following information in order to generate the private key and CSR pair off the web server
Country Name (2 letter code) [XX]: US State or Province Name (full name) []: California Locality Name (eg, city) [Default City]: Los Angeles Organization Name (eg, company) [Default Company Ltd]: Chris Mendez Inc. Organizational Unit Name (eg, section) []: I SKIP THIS Common Name (eg, your name or your server's hostname) []: Email Address []: mail@chrisaiv.com
Step 7 - Copy and Paste
Once the private key and CSR files are generated, display the content of
myCert.csr file. Copy the entire block, including the BEGIN and END lines and paste it into where the CSR is requested on the website where you purchased the SSL.
nano myCert.csr
Step 8 - Download Private Key
Download your private key file and save it as
myPrivKey.key on your computer. Later, you will need to add this key file together with the SSL certificate for your domain to your application.
nano myPrivKey.key
Resources
- Sucuri: How to install an SSL certificate
- Install Openshift Tools
- Openshift: Custom SSL Certificates
- GoDaddy Certificate Signing Request
| https://www.chrisjmendez.com/2016/04/06/how-to-create-an-ssl-certificate-on-openshift-for-godaddy/ | CC-MAIN-2021-04 | refinedweb | 489 | 62.27 |
go to bug id or search bugs for
I am using PHP 403pl1 with Apache 1.3.14. Everything works great except for $HTTP_COOKIE_VARS. It never returns a value. I can retrieve the value via $GLOBALS but I would prefer to stick with $HTTP_COOKIE_VARS. This worked fine until I installed 403pl1 :)
I would be glad to supply more info if needed.
Regards,
Sam Beckwith
sam@samware.net
Add a Patch
Add a Pull Request
$HTTP_COOKIE_VARS is working just fine for me.
Please add some example code in which it doesn't work right.
--Jani
Globally-scoped variables are not imported into the namespace of a class by default. Globally-scoped variable must be imported into local scopes (like class and function definitions) using the globals keyword.
This is not a bug - it is simply the way that the language works.
At 03:05 PM 12/1/00 -0600, you wrote:
> I have more details. The problem only occurs within a
> Class method. To correct the problem I used:
>
> global $HTTP_COOKIE_VARS;
>
> It is my understanding that $HTTP_COOKIE_VARS should
> already be global as is $HTTP_SERVER_VARS etc...
>
> Regards,
> Sam Beckwith | https://bugs.php.net/bug.php?id=8062 | CC-MAIN-2019-22 | refinedweb | 187 | 76.42 |
i'm having a problem with a seg fault, but i'm stumped.
i've found it, but i just do not understand why i get it.
it's in the seventh line of WordTree::insert(const string &).
i have put similar code in the third, fourth and fifth line of main()
and that code doesn't seg fault.
(don't worry, you don't have to count the lines, i have marked them with "//******" and a description)
Code:/*** COMNDENSED TO SAVE SPACE ***/ #ifndef WORDTREE_H #define WORDTREE_H #include <string> #include <iostream> using std::cout; using std::endl; class WordTree { typedef std::string str; public: WordTree(const str & = ""); int search(const str &); int insert(const str &); void print(void); private: WordTree *left; WordTree *right; str key; }; #endifCode:#include "WordTree.h" using namespace std; WordTree::WordTree(const string &src) { key = src; left = right = 0; } int WordTree::insert(const string &src) { if(!key.size()) { key = src; return 1; } // this seg faults if(src < key){ cout << "test" << endl; } //***************** //the real code (i know i can factor this out, and it was to //begin with, but i expanded it and simpified it to find the seg fault //...there's no need to fix or correct this code, i'm going to //revert to my original code. i have only posted it so no one would //accuse me not giving them the whole code and claim the seg fault is elsewhere. /*if(src < key) { if(left) { cout << "CP 2" << endl; return left->insert(src); } else{ cout << "CP 3" << endl; left = new WordTree(); cout << "CP 4" << endl; return left->insert(src); } }else{ cout << "CP 2.b" << endl; if(right){ return right->insert(src); cout << "CP 3.b" << endl; } else{ cout << "CP 4.b" << endl; right = new WordTree(); cout << "CP 5.b" << endl; return right->insert(src); } } */ return 1; } //not even called, but if you have extra time, i'd like someone to comment on it if there's a better way (it's untested) int WordTree::search(const string &src) { if(src == key) return 1; else if(src <= key) if(left) left->search(src); else return 0; else if(right) right->search(src); else return 0; return 0; } //called but still segfaults whether it's called or not void WordTree::print(void) { cout << key << endl; if(left) left -> print(); if(right) right -> print(); }Code:/********* crappy test file, no need to pick it apart *****/ #include <cstdlib> #include <iostream> #include "WordTree.h" #include <string> using namespace std; int main(int argc, char *argv[]) { WordTree wt("n"); wt.print(); //this doesn't seg fault string x = "n"; string y = "p"; cout << (x < y) << endl; //********************* while(x != "stop") { cin >> x; wt.insert(x); } wt.print(); system("PAUSE"); return EXIT_SUCCESS; } | http://cboard.cprogramming.com/cplusplus-programming/69066-found-seg-fault-but-can%27t-explain-why-i-get.html | CC-MAIN-2013-48 | refinedweb | 448 | 75.54 |
Learn what to expect in the new updates
In this tutorial we will work interactively with images. To do so we will use the IPython shell. You can start it with:
$ipython
At the very least, you’ll need to have access to the imshow() function. The easy way for an interactive environment: is to use the matplotlib API (see Artist tutorial) where you use explicit namespaces and control object creation, etc...:
.. sourcecode:: ipython
In [1]: import matplotlib.pyplot as plt In [2]: import matplotlib.image as mpimg In [3]: import numpy as np
You can now access functions like imshow() by using: plt.imshow(yourimage). You can learn more about these functions in the Pyplot tutorial.
Plotting image data is supported by the Pillow). Natively, matplotlib only supports PNG images. The commands shown below fall back on Pillow if the native read fails.
The image used in this example is a PNG file, but keep that Pillow requirement in mind for your own data.
Here’s the image we’re going to play with:
It’s a 24-bit RGB PNG image (8 bits for each of R, G, B). Depending on where you get your data, the other kinds of image that you’ll most likely encounter are RGBA images, which allow for transparency, or single-channel grayscale (luminosity) images. You can right click on it and choose “Save image as” to download it to your computer for the rest of this tutorial.
And here we go...
In [4]: img=mpimg.imread('stinkbug.png') Out[4]: array([[[ 0.40784314, 0.40784314, 0.40784314], [ 0.40784314, 0.40784314, 0.40784314], [ 0.40784314, 0.40784314, 0.40784314], ..., [ 0.42745098, 0.42745098, 0.42745098], [ 0.42745098, 0.42745098, 0.42745098], [ 0.42745098, 0.42745098, 0.42745098]], [[ 0], ..., [], [ 0.4509804 , 0.4509804 , 0.4509804 ], []]], dtype=float32)
Note the dtype there - float32. Matplotlib has rescaled the 8 bit data from each channel to floating point data between 0.0 and 1.0. As a side note, the only datatype that Pillow can work with is uint8. Matplotlib plotting can handle float32 and uint8, but image reading/writing for any format other than PNG is limited to uint8 data. Why 8 bits? Most displays can only render 8 bits per channel worth of color gradation. Why can they only render 8 bits/channel? Because that’s about all the human eye can see. More here (from a photography standpoint): Luminous Landscape bit depth tutorial.
Each inner list represents a pixel. Here, with an RGB image, there are 3 values. Since it’s a black and white image, R, G, and B are all similar. An RGBA (where A is alpha, or transparency), has 4 values per inner list, and a simple luminance image just has one value (and is thus only a 2-D array, not a 3-D array). For RGB and RGBA images, matplotlib supports float32 and uint8 data types. For grayscale, matplotlib supports only float32. If your array data does not meet one of these descriptions, you need to rescale it.
So, you have your data in a numpy array (either by importing it, or by generating it). Let’s render it. In Matplotlib, this is performed using the imshow() function. Here we’ll grab the plot object. This object gives you an easy way to manipulate the plot from the prompt.
In [5]: imgplot = plt.imshow(img)
(Source code, png, hires.png, pdf)
You can also plot any numpy array - just remember that the datatype must be float32 (and range from 0.0 to 1.0) or uint8.
Pseudocolor can be a useful tool for enhancing contrast and visualizing your data more easily. This is especially useful when making presentations of your data using projectors - their contrast is typically quite poor.
Pseudocolor is only relevant to single-channel, grayscale, luminosity images. We currently have an RGB image. Since R, G, and B are all similar (see for yourself above or in your data), we can just pick one channel of our data:
In .png, pdf)
This adds a colorbar to your existing figure. This won’t automatically change if you change you switch to a different colormap - you have to re-create your plot, and add in the colorbar again.
Sometimes you want to enhance the contrast in your image, or expand the contrast in a particular region while sacrificing the detail in colors that don’t vary much, or don’t matter. A good tool to find interesting regions is the histogram. To create a histogram of our image data, we use the hist() function.
In[10]: plt.hist(lum_img.flatten(), 256, range=(0.0,1.0), fc='k', ec='k')
(Source code, png, hires.png, pdf)
Most often, the “interesting” part of the image is around the peak, and you can get extra contrast by clipping the regions above and/or below the peak. In our histogram, it looks like there’s not much useful information in the high end (not many white things in the image). Let’s adjust the upper limit, so that we effectively “zoom in on” part of the histogram. We do this by calling the set_clim() method of the image plot object.
In[11]: imgplot.set_clim(0.0,0.7)
(Source code, png, hires.png, pdf)
Interpolation calculates what the color or value of a pixel “should” be, according to different mathematical schemes. One common place that this happens is when you resize an image. The number of pixels change, but you want the same information. Since pixels are discrete, there’s missing space. Interpolation is how you fill that space. This is why your images sometimes come out looking pixelated when you blow them up. The effect is more pronounced when the difference between the original image and the expanded image is greater. Let’s take our image and shrink it. We’re effectively discarding pixels, only keeping a select few. Now when we plot it, that data gets blown up to the size on your screen. The old pixels aren’t there anymore, and the computer has to draw in pixels to fill that space.. | https://matplotlib.org/1.4.3/users/image_tutorial.html | CC-MAIN-2022-33 | refinedweb | 1,024 | 75.3 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.