Published: 01 Apr 2011
By: Dhananjay Kumar
In this article, we will discuss how to consume data from the cloud in a Windows Phone 7 application.
Cloud and Windows Phone 7 are two terms you often hear from tech-savvy people, so I thought it was the right time to integrate these technologies. In this article, I have tried my best not to focus on the theoretical aspects. Instead, I will present a step-by-step approach to consuming data from the cloud in a Windows Phone 7 application.
There are two main steps involved in this process:
The first step is to create the database. We are going to use the sample School database; the script for the School database can be copied from here.
Right-click on the School database and select Tasks. From Tasks, select Generate Scripts.
In the pop-up wizard, go to the Set Scripting Options step and give the output file a name.
Click on the SQL Azure tab.
You will see the project you have created. Click on the project; in my case, the project name is debugmode. After clicking on the project, you can list all the databases created in your SQL Azure account.
debugmode
Here, in my account, two databases have already been created: master and student. The master database is the default database SQL Azure creates for you.
Click on Create Database.
Give your database a name, select Web as the edition, and specify the maximum size of the database. (You can also select Business as the edition.) Next, click Create; you can then see on the Databases tab that the Demo1 database has been created.
To find the database server name, log in to the Windows Azure portal with your Live credentials and then click on the SQL Azure tab.
Once you have successfully connected to the School database in SQL Azure, copy the script and run it as shown below.
After the script has run successfully, run the command below and all the table names will be listed.
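The listing command itself is not reproduced above; a common way to list the table names in the current database (a sketch, assuming you are connected to the migrated School database) is:

```sql
SELECT name FROM sys.tables ORDER BY name;
```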
In this way, you have successfully migrated the database to SQL Azure.
Create a new project and select the ASP.NET Web Application project template from the Web tab. Give a meaningful name to the web application.
We can create a data model, which can be exposed as a WCF Data Service, in three ways. Here, I am going to use the ADO.NET Entity Model to create the data model. To create an entity model, do the following:
Since we have tables in the SQL Azure database, we are going to choose the option Select from database. Select the tables, views, and stored procedures from the database that you want to make part of your data model.
Creating the proxy
The first thing we need to do is create a proxy of the WCF Data Service for Windows Phone 7. To do this, we run a proxy-generation command, explained below.
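The proxy for a WCF Data Service is typically generated with the DataSvcUtil tool; a sketch of the command (the service URI and output file name here are hypothetical — substitute your own):

```
DataSvcUtil.exe /uri:http://localhost/WcfDataService1.svc ^
                /out:SchoolServiceProxy.cs /DataServiceCollection /Version:2.0
```

The /DataServiceCollection switch generates binding-friendly DataServiceCollection types, which is what we will use from the phone application.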
Create a Windows Phone 7 application: open Visual Studio and select the Windows Phone Application project template from the Silverlight for Windows Phone tab.
Create a ListBox; we will bind the data to this ListBox. On the click event of a Button, the data will be bound to the ListBox.
Add the namespace. Now, on the click event of the button, we need to call the WCF Data Service and bind the ListBox.
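As a sketch of what that click handler can look like (SchoolEntities, Person, and the People entity set are hypothetical names that DataSvcUtil would generate from your model; the service URI is also an assumption):

```
// Assumes: using System.Data.Services.Client; and the generated proxy namespace.
private void button1_Click(object sender, RoutedEventArgs e)
{
    var context = new SchoolEntities(
        new Uri("http://localhost/WcfDataService1.svc/"));
    var people = new DataServiceCollection<Person>(context);

    // Network calls on Windows Phone 7 are asynchronous,
    // so bind only once loading has completed.
    people.LoadCompleted += (s, args) =>
    {
        if (args.Error == null)
            listBox1.ItemsSource = people;
    };
    people.LoadAsync(context.People);
}
```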
The output will appear as shown below.
If we host the WCF Data Service in a Windows Azure Web Role or on AppFabric, then both the data and the service will be completely in the cloud. In the next article, we will host the WCF Data Service as a Windows Azure web role to convert our existing application into a complete tryst of cloud and Windows Phone 7.
Source: http://dotnetslackers.com/articles/net/Tryst-of-SQL-Azure-ODATA-and-Windows-7-Phone.aspx
You'll need to install a library to make the Arduino IDE support the module. This library includes drivers for the LTV-827 photocoupler; the Turta_Photocoupler_Module library is responsible for reading the photocoupler inputs.
To use the library on Arduino IDE, add the following #include statement to the top of your sketch.
#include <Turta_Photocoupler_Module.h>
Then, create an instance of the Turta_Photocoupler_Module class.
Turta_Photocoupler_Module pc;
Now you're ready to access the library by calling the pc instance.
To initialize the module, call the begin method.
begin()
This method configures the interrupt pins to read input state.
Returns the state of the photocoupler inputs.
bool readInput(uint8_t ch)
Parameters
uint8_t: ch - Input channel, 1 or 2.
Returns
Bool: Input state
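A minimal Arduino sketch tying these calls together (the serial output format and poll interval here are illustrative, not from the library documentation):

```
#include <Turta_Photocoupler_Module.h>

Turta_Photocoupler_Module pc;

void setup() {
  Serial.begin(115200);
  pc.begin();  // configures the interrupt pins for input reads
}

void loop() {
  // Read both channels and print their states.
  bool ch1 = pc.readInput(1);
  bool ch2 = pc.readInput(2);
  Serial.print("CH1: ");
  Serial.print(ch1 ? "HIGH" : "LOW");
  Serial.print("  CH2: ");
  Serial.println(ch2 ? "HIGH" : "LOW");
  delay(500);
}
```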
You can open the example from Arduino IDE > File > Examples > Examples from Custom Libraries > Turta Photocoupler Module. There is one example of this sensor.
If you're experiencing difficulties while working with your device, please try the following steps.
Problem: You connected a button to the photocoupler, but it does not read any input.
Cause: The photocoupler module needs an external 24V supply to be activated.
Solution: Use a 24V power supply in your circuit.
Source: https://docs.turta.io/modular/photocoupler/iot-node
Hi, everyone,
I want to find the best method to test whether a module has been loaded; if not, I will load it with require.
I see some people use this method:
my $module = "MyModule::ABC";
eval " require $module " unless $module->can('isa');
In some situations, when the module is an OO module (which generally has a new method), we can use:
$module->can('new')
Is there a universal method, for any module, to test whether it is loaded or not?
Many thanks for your wisdom :)
Do you want to check if a module is loaded, or a package is loaded?
These are subtly different, in that modules are related to pm files on the filesystem, while packages are related to namespaces (stashes) in Perl's memory. There's a convention of keeping the code for package "Foo::Bar" in module "Foo/Bar.pm", so packages and modules often coincide with each other. But sometimes testing for one will not give you the result you need, when you really should be testing for the other.
The official way to check if a module is loaded is %INC. If exists $INC{"Foo/Bar.pm"} then the module Foo::Bar (i.e. Foo/Bar.pm) is considered to be loaded. This is how the require function avoids unnecessarily reloading modules; it checks %INC.
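A sketch of the %INC approach (module_loaded is an illustrative helper name; the package-to-file mapping is the standard `::` to `/` conversion):

```perl
use strict;
use warnings;

# Convert a package name like Foo::Bar to its %INC key "Foo/Bar.pm".
sub module_loaded {
    my ($module) = @_;
    (my $file = $module) =~ s{::}{/}g;
    return exists $INC{"$file.pm"};
}

my $module = "MyModule::ABC";
unless (module_loaded($module)) {
    eval "require $module"
        or warn "Could not load $module: $@";
}
```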
For packages, there's no official technique, but one way that's become popular (because it's what Class::Load does, and Class::Load is popular) is to assume that if $Foo::Bar::VERSION is defined, or if @Foo::Bar::ISA is non-empty, or if there are any subs defined in the Foo::Bar namespace, then we can consider the package Foo::Bar to be loaded. Actually checking all that involves somewhat convoluted stash manipulation, so I'd recommend using Class::Load's is_class_loaded function.
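With Class::Load installed from CPAN, the package-oriented check looks like this:

```perl
use Class::Load qw(is_class_loaded load_class);

# is_class_loaded inspects the Foo::Bar stash, not the .pm file.
unless (is_class_loaded("Foo::Bar")) {
    load_class("Foo::Bar");    # dies if the class cannot be loaded
}
```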
Source: http://www.perlmonks.org/?node_id=1040900
address@hidden writes:

> I think I have found the cause for problem #1. In ftfont_list(), the
> code gathers a list of candidate fonts that match the
> foundry/family/... requirements:
>
>   objset = FcObjectSetBuild (FC_FOUNDRY, FC_FAMILY, FC_WEIGHT, FC_SLANT,
>                              FC_WIDTH, FC_PIXEL_SIZE, FC_SPACING,
>                              FC_CHARSET, FC_FILE,
>   #ifdef FC_FONTFORMAT
>                              FC_FONTFORMAT,
>   #endif /* FC_FONTFORMAT */
>                              NULL);
>   /* ... elided ... */
>   fontset = FcFontList (NULL, pattern, objset);
>
> Note that this doesn't include any registry restriction.
>
> The code loops across the returned fontsets, calling
> ftfont_pattern_entity() to generate font_entity structs. But at no
> point does it attempt to filter the font list by compatible
> registries. We get, for example:
>
>   (gdb) frame
>   #0 ftfont_pattern_entity (p=0x89fce40, frame=148009620, registry=138791553)
>       at /home/upham/src/emacs/Apollo/emacs-cvs/src/ftfont.c:116
>   (gdb) p file
>   $151 = (FcChar8 *) 0x8a70688
>     "/home/upham/.fonts/jmk/neep-alt-iso8859-1-06x11.pcf.gz"
>   (gdb) p registry
>   $152 = 138791553
>   (gdb) xpr registry
>   Lisp_Symbol
>   $153 = (struct Lisp_Symbol *) 0x845ca80
>   "iso10646-1"
>
> Emacs will think that "neep-alt-iso8859-1-06x11.pcf.gz" is a valid
> font for displaying "iso10646-1", but it isn't, and we end up with
> missing code points.
>
> This explains why removing the iso8859-1 fonts fixed the problem
> (except for the mode line file name): the current code also points
> iso8859-1 requesters to iso10646-1 fonts, and those always work. I
> also think this explains why I don't see this consistently across
> hosts: depending on how the font list is ordered (maybe due to inode
> ordering on disk?), some hosts will get a correct iso10646-1 ->
> iso10646-1 mapping first at display time, while others will get an
> incorrect iso10646-1 -> iso8859-1 mapping.
>
> Another family that should have the same problem is misc-fixed, as it
> also has both iso8859-1 and iso10646-1 registry fonts. There may be
> other families that I'm not aware of.

Here's a patch against today's CVS HEAD.

The ftfont_spec_pattern() function generates an FcPattern object that can be used to list only fonts matching the spec. For the purposes of this discussion, there are two "interesting" ways of restricting patterns: via charset (FcCharSet), or via langset (FcLangSet). The former requires the font to have each of the codepoints listed in the FcCharSet. The latter requires the font to support all the languages in the FcLangSet.

1. If we pass a font spec with registry ISO-8859 to ftfont_spec_pattern(), then the code sets up an FcCharSet that has every ASCII codepoint (but not Latin-1, that's commented out for some reason).

2. If we pass a font spec with a non-ISO-8859, non-ISO-10646, non-Unicode-BMP registry, the function immediately returns an empty pattern.

3. ISO-10646 and Unicode-BMP registries are handled in a more complicated manner... If the ISO-10646 font spec has an associated :script parameter (or an OpenType spec that refers to a script), the code looks in 'script-representative-chars' for codepoints to put into a charset. If the font spec has an associated language, the code adds the language to the langset.

However, an ISO-10646 font spec without a special script or language ends up with neither a charset nor a langset. The resulting pattern will match *any* characters and languages. In particular, it will let an ISO-8859 font match the ISO-10646 spec.

The fix below checks for a missing charset and missing langset. In that case, we create a charset with at least one ISO-10646 codepoint outside of ISO-8859. The charset should be as small as possible, since a font missing any of the charset's codepoints becomes completely invalid. I have chosen LEFT DOUBLE QUOTATION MARK, which is associated with English and which I believe is pervasive. With the new charset restriction, ISO-8859 fonts are no longer considered matches and the font mismatch problem goes away.

(We could add codepoints 32 through 127 and 192 through 255 to the ISO-10646 charset, but it's unlikely that any font advertising itself as ISO-10646 will be missing those codepoints. If we do need those extra codepoints, we can copy the implementation from ftfont_build_basic_charsets().)

Derek

--
Derek Upham
address@hidden

------------------------------ cut here ------------------------------
Index: ftfont.c
===================================================================
RCS file: /sources/emacs/emacs/src/ftfont.c,v
retrieving revision 1.9
diff -u -u -r1.9 ftfont.c
--- ftfont.c	3 Apr 2008 08:16:54 -0000	1.9
+++ ftfont.c	6 May 2008 21:08:44 -0000
@@ -38,6 +38,9 @@
 #include "font.h"
 #include "ftfont.h"

+/* Codepoint in ISO-10646 that most English fonts will have. */
+#define CODEPOINT_ISO10646_ENGLISH 0x201C /* LEFT DOUBLE QUOTATION MARK */
+
 /* Symbolic type of this font-driver. */
 Lisp_Object Qfreetype;

@@ -521,6 +524,20 @@
 	}
     }

+  /* Lack of charset and langset at this point indicates a requested
+     ISO-10646 registry with no special script or language
+     requirement. We need a charset with some codepoint outside of
+     the ISO-8859-* range that most "English" fonts will have.
+     Otherwise the resulting pattern will also match ISO-8859 fonts. */
+  if (! charset && ! langset)
+    {
+      charset = FcCharSetCreate ();
+      if (! charset)
+	goto err;
+      if (! FcCharSetAddChar (charset, CODEPOINT_ISO10646_ENGLISH))
+	goto err;
+    }
+
   pattern = FcPatternCreate ();
   if (! pattern)
     goto err;
Source: https://lists.gnu.org/archive/html/emacs-devel/2008-05/msg00396.html
Article Robbie Luman · Feb 28, 2020 3m read Using a Static C Library with the C Callout Gateway in Intersystems IRIS
I ended up changing over to use the CacheActiveX method for connecting and running the query which works really well. I do have a question, though: Is there any way to use a defined ODBC DSN in the connection string using CacheActiveX?
Thanks! I figured out my issue, I had set up my ODBC connection incorrectly and was referencing the wrong namespace. Thank you for the assistance!
Thank you for the responses so far, I greatly appreciate them!
I found some information online about using ADO and ODBC to make the connection which has gotten me close. Here is what I have in Excel VBA:
and here is the error I get when I run this code:
The "GETSQLLBIMA7" query exists in the USER namespace under the USER package. I'm not sure where "SQLUSER" is coming from or how to specify a different namespace or package.
|
https://community.intersystems.com/user/robbie-luman
|
CC-MAIN-2022-21
|
refinedweb
| 177
| 68.5
|
Check If Two Rectangles Overlap In Java
Last modified: March 25, 2020
1. Overview
In this quick tutorial, we'll learn to solve an algorithmic problem of checking whether the two given rectangles overlap.
We'll start by looking at the problem definition and then progressively build up a solution.
Finally, we'll implement it in Java.
2. Problem Definition
Let's say we have two given rectangles – r1 and r2. We need to check if there's at least one common point among r1 and r2. If yes, it simply means that these two rectangles overlap.
Let's have a look at some examples:
If we notice the very last case, the rectangles r1 and r2 have no intersecting boundaries. Still, they're overlapping rectangles as every point in r1 is also a point in r2.
3. Initial Setup
To solve this problem, we should first start by defining a rectangle programmatically. A rectangle can be easily represented by its bottom-left and top-right coordinates:
public class Rectangle {

    private Point bottomLeft;
    private Point topRight;

    //constructor, getters and setters

    boolean isOverlapping(Rectangle other) {
        ...
    }
}
where Point is a class representing a point (x,y) in space:
public class Point {
    private int x;
    private int y;

    //constructor, getters and setters
}
We'll later define isOverlapping(Rectangle other) method in our Rectangle class to check if it overlaps with another given rectangle – other.
4. Solution
The two given rectangles won't overlap if either of the below conditions is true:
- One of the two rectangles is above the top edge of the other rectangle
- One of the two rectangles is on the left side of the left edge of the other rectangle
For all other cases, the two rectangles will overlap with each other. To convince ourselves, we can always draw out several examples.
5. Java Implementation
Now that we understand the solution, let's implement our isOverlapping() method:
public boolean isOverlapping(Rectangle other) {
    if (this.topRight.getY() < other.bottomLeft.getY()
          || this.bottomLeft.getY() > other.topRight.getY()) {
        return false;
    }
    if (this.topRight.getX() < other.bottomLeft.getX()
          || this.bottomLeft.getX() > other.topRight.getX()) {
        return false;
    }
    return true;
}
Our isOverlapping() method in Rectangle class returns false if one of the rectangles is either above or to the left side of the other, true otherwise.
To find out if one rectangle is above the other, we compare their y-coordinates. Likewise, we compare the x-coordinates to check if one rectangle is to the left of the other.
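Putting the pieces together, here is a condensed, runnable sketch (getters inlined for brevity; the class and method names follow the ones defined above) that exercises the method on an overlapping and a disjoint pair:

```java
// Minimal self-contained version of the classes above, for demonstration.
class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

class Rectangle {
    final Point bottomLeft, topRight;
    Rectangle(Point bottomLeft, Point topRight) {
        this.bottomLeft = bottomLeft;
        this.topRight = topRight;
    }

    boolean isOverlapping(Rectangle other) {
        // One rectangle entirely above the other -> no overlap.
        if (this.topRight.y < other.bottomLeft.y || this.bottomLeft.y > other.topRight.y) {
            return false;
        }
        // One rectangle entirely to the left of the other -> no overlap.
        if (this.topRight.x < other.bottomLeft.x || this.bottomLeft.x > other.topRight.x) {
            return false;
        }
        return true;
    }
}

public class OverlapDemo {
    public static void main(String[] args) {
        Rectangle r1 = new Rectangle(new Point(0, 0), new Point(4, 4));
        Rectangle r2 = new Rectangle(new Point(2, 2), new Point(6, 6)); // partial overlap with r1
        Rectangle r3 = new Rectangle(new Point(5, 5), new Point(7, 7)); // disjoint from r1
        System.out.println(r1.isOverlapping(r2)); // true
        System.out.println(r1.isOverlapping(r3)); // false
    }
}
```

Note that the check is symmetric: r1.isOverlapping(r2) and r2.isOverlapping(r1) always agree.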
6. Conclusion
In this short article, we learned how to solve an algorithmic problem of finding whether the two given rectangles overlap with each other. It serves as a collision detection strategy for two rectangular objects.
As usual, the entire source code is available over on GitHub.
Source: https://www.baeldung.com/java-check-if-two-rectangles-overlap
sinl (3p) - Linux Man Pages
sinl: sine function
PROLOG
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
NAME
sin, sinf, sinl - sine function
SYNOPSIS
#include <math.h>
double sin(double x);
float sinf(float x);
long double sinl(long double x);
DESCRIPTION
These functions shall compute the sine of x.
If x is NaN, a NaN shall be returned.
If x is ±0, x shall be returned.
If x is subnormal, a range error may occur and x should be returned.
EXAMPLES
Taking the Sine of a 45-Degree Angle
#include <math.h>
...
double radians = 45.0 * M_PI / 180;
double result;
...
result = sin(radians);
APPLICATION USAGE
These functions may lose accuracy when their argument is near a multiple of pi or is far from 0.0.
Source: https://www.systutorials.com/docs/linux/man/docs/linux/man/3p-sinl/
Of the popular Python static checkers, pylint seems to be the most forceful: it raises alarms more aggressively than the others. This can be annoying, but thankfully it also has detailed controls over what it complains about.
It is also extensible: you can write plugins that add checkers for your code. At edX, we’ve started doing this for problems we see that pylint doesn’t already check for.
edx-lint is our repo of pylint extras, including plugins and a simple tool for keeping a central pylintrc file and using it in a number of repos.
The documentation for pylint internals is not great. It exists, but too quickly recommends reading the source to understand what’s going on. The good news is that all of the built-in pylint checkers use the same mechanisms you will, so there are plenty of examples to follow.
A pylint checker is basically an abstract syntax tree (AST) walker, but over a richer AST than Python provides natively. Writing a checker involves some boilerplate that I don’t entirely understand, but the meat of it is a simple function that examines the AST.
One problem we’ve had in our code is getting engineers to understand the idiosyncratic way that translation functions are used. When you use the gettext functions in your code, you have to use a literal string as the first argument. This is because the function will not only be called at runtime, but is also analyzed statically by the string extraction tools.
So this is good:
welcome = gettext("Welcome, {}!").format(user_name)
but this won’t work properly:
welcome = gettext("Welcome, {}!".format(user_name))
The difference is subtle, but crucial. And both will work with the English string, so the bug can be hard to catch. So we wrote a pylint checker to flag the bad case.
The checker is i18n_check.py, and here is the important part:
TRANSLATION_FUNCTIONS = set([
'_',
'gettext',
'ngettext', 'ngettext_lazy',
'npgettext', 'npgettext_lazy',
'pgettext', 'pgettext_lazy',
'ugettext', 'ugettext_lazy', 'ugettext_noop',
'ungettext', 'ungettext_lazy',
])
def visit_callfunc(self, node):
if not isinstance(node.func, astroid.Name):
# It isn't a simple name, can't deduce what function it is.
return
if node.func.name not in self.TRANSLATION_FUNCTIONS:
# Not a function we care about.
return
if not self.linter.is_message_enabled(self.MESSAGE_ID):
return
first = node.args[0]
if isinstance(first, astroid.Const):
if isinstance(first.value, basestring):
# The first argument is a constant string! All is well!
return
# Bad!
self.add_message(self.MESSAGE_ID, args=node.func.name, node=node)
Because the method is named “visit_callfunc”, it will be invoked for every function call found in the code. The “node” variable is the AST node for the function call. In the first line, we look at the expression for the function being called. It could be a name, or it could be some other expression. Most function calls will be a simple name, but if it isn’t a name, then we don’t know enough to tell if this is one of the translation functions, so we return without flagging a problem.
Next we look at the name of the function. If it isn’t one of the dozen or so functions that will translate the string, then we aren’t interested in this function call, so again, return without taking any action.
The next check is to see if this checker is even enabled. I think there’s a better way to do this, but I’m not sure.
Finally we can do the interesting check: we look at the first argument to the function, which remember, is not a calculated value, but a node in the abstract syntax tree representing the code that will calculate the value.
The only acceptable value is a string constant. So we can check if the first argument is a Const node. Then we can examine the actual literal value, to see that it’s a string. If it is, then everything is good, and we can return without an alarm.
But if the first argument is not a string constant, then we can use self.add_message to add a warning message to the pylint output. Elsewhere in the file, we defined MESSAGE_ID to refer to the message:
"i18n function %s() must be called with a literal string"
Our add_message call uses that string, providing an argument for the string formatter, so the message will have the actual function name in it, and also provides the AST node, so that the message can indicate the file and line where the problem happened.
That’s the whole checker. If you’re interested, the edx-lint repo also shows how to test checkers, which is done with sample .py files, and .txt files with the pylint messages they should generate.
We have a few other checkers also: checks that setUp and tearDown call their super() counterparts properly, and a check that range isn’t called with a needless first argument.
The checker I’d like to write is one that can tell you that this:
self.assertTrue(len(x) == 2)
should be re-written as:
self.assertEqual(len(x), 2)
and other similar improvements to test assertions.
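To see why that rewrite matters, here's a small unittest sketch (mine, not from the post) contrasting the two assertion styles; the payoff is entirely in the failure message:

```python
import unittest

class AssertionStyleDemo(unittest.TestCase):
    def test_boolean_style(self):
        x = [1, 2, 3]
        # On failure this can only report "False is not true" -
        # it has no idea what len(x) actually was.
        self.assertTrue(len(x) == 3)

    def test_equality_style(self):
        x = [1, 2, 3]
        # On failure this reports both values, e.g. "4 != 3",
        # which is what you want while debugging.
        self.assertEqual(len(x), 3)

if __name__ == "__main__":
    unittest.main()
```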
Once you write a pylint checker, you start to get ideas for others that might work well. I can see it becoming a kind of mania...
Yeah, pylint's documentation is not that good, but I hope to change that someday.
Some comments:
> The next check is to see if this checker is even enabled. I think there's a better way to do this, but I'm not sure.
You can use @utils.check_messages(message_ids) decorator over visit_callfunc instead, should do the trick nicely and it's obvious what messages the visit function can actually emit.
> Most function calls will be a simple name, but if it isn't a name, then we don't know enough to tell if this is one of the translation functions, so we return without flagging a problem.
I guess you can infer the name, using .infer() and check if the result is one of the translation functions, but I doubt that there's code out there where the translation function is something else though.
Thanks for the article.
Thanks for the article!
Have you considered submitting those fixers upstream? range_check and super_check sound useful for my own code, and I'm sure i18n_check is useful for others too! Maybe Claudiu can say more, but I'm sure they'd be appreciated :)
I also have some custom plugins here, though they're mainly quick hacks without tests:
Since testing those seems easier than I thought I might add some tests, and then maybe upstream those which make sense (settrace, a better version of crlf, and maybe open_encoding) too.
Awesome article. I've been wanting to automate some of the most common code review issues that come up in my team, and pylint seems like a great tool. To play with it I implemented the assert checks you mentioned, as they are so easily missed and annoying to mass change (in a project with ~700 unit tests).
Thanks, great article and blog = )
I had no clue edX has such an excellent syntax validation! Just wow!
Btw lack of documentation is a real problem of pylint (and not only pylint). I struggled a lot with customisation back then.
Source: https://nedbatchelder.com/blog/201505/writing_pylint_plugins.html
I'm trying to make a program that asks you to make a user name and password and store it, but I have no idea how to store it. Here's what I have so far; it is just the basic outline so you know what I'm talking about. Remember, this is a basic outline, not even close to being done.

Code:
#include <iostream>
#include <stdio.h>
#include <string>

using namespace std;

string C_UserName; // Creat a username
string C_PassWord; // creat a password
char UserName[26]; // If you all ready have an account
char PassWord[26]; // if you all ready have an account
string CreatAcc;   // Creating a account or not.

int main()
{
    cout << "Creat Account \n>";
    cin >> CreatAcc;
    if ( ( CreatAcc == "Yes" ) || ( CreatAcc == "yes" ) )
    {
        cout << "Creat Username \n>";
        getline (cin, C_UserName);
        cout << "Creat Password \n>";
        getline(cin, C_PassWord);
    }
    else if ( ( CreatAcc == "No" ) || ( CreatAcc == "no" ) )
    {
        cout << "Enter your Username \n>";
        cin >> UserName;
        cout << "Enter your Password \n>";
        cin >> PassWord;
    }
    else
    {
        cout << "ERROR!!!!\n";
    }
    system ( "Pause" );
    return(0);
}
Source: http://cboard.cprogramming.com/cplusplus-programming/109807-need-help-program-i%27m-trying-make.html
2 years, 4 months ago.
How can I display a text and the value of a variable on an LCD display?
I have a 16x2 LCD display, in one of the lines I want to show something like this:
"Sequence: 3"
Where sequence would be a text and 3 would be a variable that is entered by a DIP switch of three positions.
Thanks for your reply
1 Answer
2 years, 4 months ago.
Start by selecting one of the available LCD software components here. For example, my library allows you to use printf on the LCD:
#include "mbed.h" #include "TextLCD.h" // Host PC Communication channels Serial pc(USBTX, USBRX); // tx, rx // I2C Communication I2C i2c_lcd(p28,p27); // SDA, SCL // SPI Communication SPI spi_lcd(p5, NC, p7); // MOSI, MISO, SCLK //TextLCD lcd(p15, p16, p17, p18, p19, p20); // RS, E, D4-D7, LCDType=LCD16x2, BL=NC, E2=NC, LCDTCtrl=HD44780 //TextLCD_I2C lcd(&i2c_lcd, 0x42, TextLCD::LCD20x4); // I2C bus, PCF8574 Slaveaddress, LCD Type //TextLCD_I2C lcd(&i2c_lcd, 0x42, TextLCD::LCD16x2, TextLCD::WS0010); // I2C bus, PCF8574 addr, LCD Type, Ctrl Type TextLCD_I2C lcd(&i2c_lcd, 0x42, TextLCD::LCD16x2); // I2C bus, PCF8574 addr, LCD Type, Ctrl Type //TextLCD_I2C lcd(&spi_lcd, p8, TextLCD::LCD24x4D); // SPI bus, CS pin, LCD Type int dip; int main() { pc.printf("LCD Test. Columns=%d, Rows=%d\n\r", lcd.columns(), lcd.rows()); dip = 3; // read this from some DigitalIn pins lcd.printf("Sequence: %d\n\r", dip); }
It depends entirely on the LCD you are using. If you can display text strings on it then you can use sprintf to create a string with the number in it.
Reading the switch value into a variable is just a case of adding numbers up.
Your code would end up as something like this
Source: https://os.mbed.com/questions/85041/How-can-I-display-a-text-and-the-value-o/
Sheriff
Henry
There is no emoticon for what I am feeling!
Originally posted by Jeff Albertson:
java.lang.reflect.Array or java.sql.Array?
I knew while typing the response, that someone would call me on that ...
Henry
Originally posted by c york:
What constructor is most efficient to use to store the data of the file before extracting the data into an object?
I hope I am saying it correctly.
I'm still confused about the original question. By "constructor" do you actually mean "data structure"?
I have a file that I am trying to extract into an object using a constructor...
Sheriff
Originally posted by c york:
I have a text file that I am trying to use a constructor to extract the data into an object. I have looked at the String constructor and I am not sure which one to use and how to get the data into the object that was created.
Wow... It took a long while, but I think I finally got the question.
There is no way to contruct a string that is automatically mapped to the data in a file. However, there are a few options that will allow you to open a file and read strings from them. Take a look at the FileInputStream, FileReader, and RandomAccessFile classes.
Henry
public String getT1_CNT() {
return t1_CNT;
}
/**
* @param t1_cnt The t1_CNT to set.
*/
public void setT1_CNT(String t1_cnt) {
t1_CNT = t1_cnt;
}
So, I created an object. I need to get the data into the object created and into the class which is another java file.
At times, I believe the word "constructor" is related to the Object Oriented concept of constructors, that is to say, the method executed when you instantiate a new object. But after reading the whole question, I have the impression that you may be referring to something else, like a method responsible for constructing something.
Whatever the case, your question as well as your previous example are both unintelligible to me.
Would you please explain your question to me as if I were a 5-year-old kid?
Originally posted by c york:
I thought we were supposed to be nice. Sorry, the subject is in reference to "constructor", however, the last reply did not mention it. It is a continuation of what I am trying to accomplish.
Where are the people with compassion?
C. York,
Trust me. We are not trying to be mean. We seriously do *not* have any idea what you are talking about !!
Think about it... This is a long running thread with a lot of people. Do you really think that many people can be mean for so long?
Henry
But note that I asked you to restate your question without using the word "constructor", and yet you just repeated the same statement, and included the forbidden word. As a result, we haven't been able to test my belief (apparently shared by Edwin) that you mean something different by the word "constructor" than we are understanding.
To make this perfectly clear: a constructor is a special form of a Java method that takes the same name as the class in which it appears and has no return type. It can be used only in an object creation expression. In the following

public class A {
    public A() { }
}

the A() method is the constructor, and you use it like this:
A myA = new A();
OK, now we all know what a constructor is.
In your last post, you say "I need to get the data into the object created and into the class which is another java file." So to me, it sounds as if you're reading a file, and you want to take the data in the file and call a bunch of "setX()" methods on an instance of some class. That would just look like (assuming one object property per line in the file, only one object, and skimping on error handling):
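The code sample that followed this sentence is missing from the archived page; a rough reconstruction of the idea (the field names, setter order, and file layout are assumptions) might look like:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Minimal stand-in for the thread's Applicant class (two of its setters).
class Applicant {
    private String t1_LEN_1;
    private String t1_CNT;
    public void setT1_LEN_1(String v) { t1_LEN_1 = v; }
    public void setT1_CNT(String v)   { t1_CNT = v; }
    public String getT1_LEN_1() { return t1_LEN_1; }
    public String getT1_CNT()   { return t1_CNT; }
}

public class ApplicantLoader {
    // Read one property per line, in a fixed (assumed) order,
    // and push each value into the object through its setters.
    static Applicant load(BufferedReader reader) throws IOException {
        Applicant applicant = new Applicant();
        applicant.setT1_LEN_1(reader.readLine());
        applicant.setT1_CNT(reader.readLine());
        return applicant;
    }

    public static void main(String[] args) throws IOException {
        // A StringReader stands in for a FileReader over the EFT file.
        BufferedReader r = new BufferedReader(new StringReader("0123\n42\n"));
        Applicant a = load(r);
        System.out.println(a.getT1_CNT()); // prints 42
    }
}
```

With a real file you would wrap a FileReader in the BufferedReader instead of the StringReader used here for demonstration.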
So, are we getting closer? Nothing I've written here is particularly specific to constructors, nor is it related to arrays.
Now I understand and I can see the light at the end of the tunnel. Please forget the subject "Array Constructor" -- the way it was explained to me was not clear.
So to help you get started, you might want to write some thing like this:
"I need to write a program that reads text from a file. And then..."
Please fill in the details from there about WHAT you need to do then we will help you figure out HOW to do it.
In this exercise, we will take the exercise 2 program and modify it. We will extract the applicant's demographic information and some transaction-related information from the EFT file. The information extracted will then be stored in an Applicant object. The Applicant class will be provided. In your application, you will need to create the Applicant object using the correct constructor and populate the object with the extracted values. You also need to implement the toString() method that will be used when outputting the Applicant object to a file.
Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them. - Laurence J. Peter
Don't forget to put the code in CODE tags.
[ April 07, 2006: Message edited by: Joanne Neal ]
Joanne
import java.io.*;
public class Applicant {
private String t1_LEN_1;
private String t1_VER;
private String t1_CNT;
.......
/**
* @return Returns the t1_CNT.
*/
public String getT1_CNT() {
return t1_CNT;
}
/**
* @param t1_cnt The t1_CNT to set.
*/
public void setT1_CNT(String t1_cnt) {
t1_CNT = t1_cnt;
}
/**
* @return Returns the t1_LEN_1.
*/
public String getT1_LEN_1() {
return t1_LEN_1;
}
/**
* @param t1_len_1 The t1_LEN_1 to set.
*/
public void setT1_LEN_1(String t1_len_1) {
t1_LEN_1 = t1_len_1;
}
.........
public Applicant() {
}
The constructor that I am using is in the Applicant2 file:
public class Applicant2 extends Applicant{
public Applicant2(){
super();
}
public String ToString() {// Just for testing
return ("Printing String");
}
}
And the main method in the Exercise 3 file where I open the file -
public class Exercise3{
public static void main(String[] args)
{
Applicant2 appob = new Applicant2();
appob.ToString();
}
}
This approach is to help with testing; before reading the output file.
Does it look exactly like that, with no implementation? Are there any other constructors?
Also concerning your toString() method, Java is case sensitive, so if you want to override the toString() method that your class inherits, you must use a lower-case t (toString() instead of ToString()). If you are using Tiger, one of the cool new features is using annotations to indicate that you intend to override a method.
BTW: I see you took a shot at using the code tags, thanks for that. For future referecnce the code goes between the tags like this. (without the spaces in the tags)
[ code]
//code goes here
[ /code]
[ April 07, 2006: Message edited by: Garrett Rowe ]
Source: https://coderanch.com/t/403045/java/Array-Constructor
Association Rule Mining in Python
Hello everyone, In this tutorial, we’ll be learning about Association Rule Mining in Python (ARM) and will do a hands-on practice on a dataset. We will use the apriori algorithm and look on the components of the apriori algorithm. Let us start this tutorial with a brief introduction to association rules mining.
What is Association Rule Mining and its benefits?
Association rule mining is a technique for discovering relationships between items in large transactional databases. Its best-known application is market basket analysis, which looks at which products customers tend to buy together.
Association rules include two parts, an antecedent (if) and a consequent (then) that is the if-then association that occurs more frequently in the dataset.
For example, {Bread} => {Milk} can be an association in a supermarket store. This relation implies that if(antecedent) a person buys Bread then(consequent) most probably the customer will buy Milk. There can be lots of relations between several itemsets that can be used to make the layout of the store. With this, customers would not require to go far to look for every product. To increase sales of the store these products can have combined discounts and there are many other ways these associations are helpful.
For this tutorial, we’ll be using a dataset that contains a list of 20 orders including the name of order items. You can download the dataset by clicking here. The dataset will look like this.
There are many algorithms that use association rules like AIS, SETM, Apriori, etc. Apriori algorithm is the most widely used algorithm that uses association rules and we will use this in our code. Now let us import the necessary modules and modify our dataset to make it usable.
Importing and Modifications in the Dataset
Here we are going to understand association rule mining with the help of the apyori Python library. So let's continue reading…
Install the apyori library using the command line by running the following pip command.
pip install apyori
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from apyori import apriori
Now, let us import the data and apply some modifications to the data. Go through the code below.
data = pd.read_csv(r"D:\datasets(june)\order_data.csv", delimiter=" ", header=None)
data.head()
The parameter delimiter=” “ will split the entries of the data whenever whitespace is encountered and header=None will prevent taking the first row as the header and a default header will be there. After this, our data frame will look like this.
Let us see some Components of the Apriori Algorithm that are necessary to understand to make a good model.
Components of the Apriori Algorithm
There are three main components of an Apriori Algorithm which are as follows:
- Support – It is the measure of the popularity of an itemset that is in how many transactions an item appears from the total number of transactions. It is simply the probability that a customer will buy an item. The mathematical formula to represent support of item X is
S(X)=(Number of transaction in which X appears)/(Total number of transactions)
Calculating the support value for {Bread} in our dataset
No. of transactions in which Bread appears = 11
No. of total transactions = 20
Support({Bread}) = 11/20 = 0.55
- Minimum Support Value = It is a threshold value above which the product can have a meaningful effect on the profit.
- Confidence – It tells us the impact of one product on another that is the probability that if a person buys product X then he/she will buy product Y also. Its representation in mathematical terms is
Confidence({X} => {Y}) = (Transactions containing both X and Y)/(Transactions containing X)
Calculating the Confidence ({Bread} => {Milk}) in our dataset
It means that the likelihood of buying Milk if Bread is already bought.
No. of transactions in which both Bread and Milk appears = 5
No. of transactions containing Bread = 11
Confidence ({Bread} => {Milk}) = 5/11 = 0.4545
A major drawback of the confidence is that it only considers the popularity of item X and not of Y. This can decrease the confidence value and therefore can be misleading in understanding the association between different products. To overcome this drawback we have another measure known as Lift.
- Lift – Overcoming the limitation of confidence measure, Lift will calculate the confidence taking into account the popularity of both items. Representation of lift in mathematical terms is
Lift({X} => {Y}) = Confidence({X} => {Y}) / Support(Y)
If the lift measure is greater than 1, it means that the Y is likely to be bought with X, while a value less than 1 indicates that Y is unlikely to be bought with X. A lift value of near 1 indicates that both the itemsets in the transactions are appearing often together but there is no association between them.
Calculating the Lift({Bread} => {Milk}) in our dataset
Confidence ({Bread} => {Milk}) = 0.4545
Support (Milk) = 9/20 = 0.45
Lift({Bread} => {Milk}) = 0.4545/0.45 = 1.01
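These hand calculations are easy to verify in code; the sketch below uses a small made-up transaction list (not the downloaded dataset, so the numbers differ from the example above) to compute all three measures:

```python
# Toy transactions, each a set of purchased items.
transactions = [
    {"Bread", "Milk"}, {"Bread"}, {"Milk"}, {"Bread", "Milk", "Butter"},
    {"Eggs"}, {"Bread", "Milk"}, {"Milk", "Butter"}, {"Bread"},
]

def support(itemset):
    # Fraction of transactions containing every item of the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    # P(consequent | antecedent)
    return support(antecedent | consequent) / support(antecedent)

def lift(antecedent, consequent):
    # Confidence normalised by the consequent's own popularity.
    return confidence(antecedent, consequent) / support(consequent)

print(support({"Bread"}))               # 5/8  = 0.625
print(confidence({"Bread"}, {"Milk"}))  # 3/5  = 0.6
print(lift({"Bread"}, {"Milk"}))        # 0.6 / 0.625 = 0.96
```

A lift just below 1 here means Bread and Milk co-occur about as often as chance would predict in this toy data.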
Practical Implemenation of Apriori Algorithm
Using the data-set that we have downloaded in the previous section, let us write some code and calculate the values of apriori algorithm measures. To make use of the Apriori algorithm it is required to convert the whole transactional dataset into a single list and each row will be a list in that list.
data_list = []
for row in range(0, 20):
    data_list.append([str(data.values[row, column]) for column in range(0, 9)])

algo = apriori(data_list, min_support=0.25, min_confidence=0.2, min_lift=2, min_length=2)
results = list(algo)
We have created a list of lists, then used the apriori method from the apyori module, and finally converted the result from a generator into a list saved in a variable named results. To make proper decisions and increase the speed of the apriori algorithm, the apriori method takes several arguments which are as follows –
- data – The first parameter that takes the list that contains the transactional data in inner lists.
- min_support – It is the threshold support value for the items that should be taken into account. Suppose we want to make decisions for our dataset and want to include only those items that are appearing in at least 5 transactions out of total i.e support value of 5/20 = 0.25.
- min_confidence – It is the threshold confidence value that should be there between each combination of an itemset. we have taken the confidence value of 0.2.
- min_lift – It is the minimum lift value for the rules that are selected. Generally, we take lift value equals to 2 or more to filter out those itemsets that have a more frequent association.
- min_length – The numbers of items that are to be considered in the rules.
Let us see the output of the above program and print the first 3 rules that we have obtained.
for i in range(0, 3):
    print(f"Required Association No. {i+1} is: {results[i]}")
    print('-'*25)
Required Association No. 1 is: RelationRecord(items=frozenset({'toothpaste', 'brush'}), support=0.25, ordered_statistics=[OrderedStatistic(items_base=frozenset({'brush'}), items_add=frozenset({'toothpaste'}), confidence=1.0, lift=2.5), OrderedStatistic(items_base=frozenset({'toothpaste'}), items_add=frozenset({'brush'}), confidence=0.625, lift=2.5)])
-------------------------
Required Association No. 2 is: RelationRecord(items=frozenset({'mouthwash', 'toothpaste'}), support=0.3, ordered_statistics=[OrderedStatistic(items_base=frozenset({'mouthwash'}), items_add=frozenset({'toothpaste'}), confidence=0.8571428571428572, lift=2.142857142857143), OrderedStatistic(items_base=frozenset({'toothpaste'}), items_add=frozenset({'mouthwash'}), confidence=0.7499999999999999, lift=2.142857142857143)])
-------------------------
Required Association No. 3 is: RelationRecord(items=frozenset({'honey', 'bread', 'butter'}), support=0.25, ordered_statistics=[OrderedStatistic(items_base=frozenset({'butter'}), items_add=frozenset({'honey', 'bread'}), confidence=0.625, lift=2.0833333333333335), OrderedStatistic(items_base=frozenset({'honey', 'bread'}), items_add=frozenset({'butter'}), confidence=0.8333333333333334, lift=2.0833333333333335)])
-------------------------
Understanding the Output
Considering the association no. 1 from the above output, first, we have an association of toothpaste and brush and it is seen that these items are frequently bought together. Then, the support value is given which is 0.25 and we have confidence and lift value for the itemsets one by one changing the order of the itemset. For example, Confidence and Lift measures for the likelihood of buying toothpaste if a brush is purchased are 1.0 and 2.5 respectively. The Confidence and Lift measures after changing the order are 0.625 and 2.5 respectively.
Try to change the different parameters and see the changes in the results.
We hope you like this tutorial and if you have any doubts, feel free to ask in the comment section.
You may like to read from some of our articles given below:
- Introduction to Apriori algorithm
- Analyze the US Economic Dashboard with Python
- Time Series Analysis in Python
Source: https://www.codespeedy.com/association-rule-mining-in-python/
Sending packets via WiFi from Gateway to PC
Hello everyone,
I'm quite new with LoPy and had very limited knowledge in network. I followed a simple tutorial on setting up a server on PC and a client on Pycom but it didn't connect with PC.
PC (server):
import socket               # Import socket module

s = socket.socket()         # Create a socket object
host = name                 # Get local machine name
port = port_number          # Reserve a port for your service.
s.bind((host, port))        # Bind to the port
s.listen(5)                 # Now wait for client connection.

while True:
    c, addr = s.accept()    # Establish connection with client.
    print 'Got connection from', addr
    c.send('Thank you for connecting')
    c.close()               # Close the connection
Pycom(client):
import socket               # Import socket module

s = socket.socket()         # Create a socket object
host = name                 # Get local machine name
port = port_number          # Reserve a port for your service.
s.connect((host, port))
print(s.recv(1024))
s.close()                   # Close the socket when done
I'd like to send simple packets of data from Pycom using socket.send() and receive it in my laptop using socket.recv(). However, the problem I'm encountering is that as I run the code in Pycom, I'm getting OSError: -1 in s.connect(). Help is very appreciated :)
Thanks for the information, I really appreciate it. I'd like to give a go for option A as it seems more straightforward. :)
I suggest a quick Google of how Wi-Fi networks work; it may be useful.
To answer your question: in order for devices to communicate over WiFi they all need to be joined to the same network. This can be done in one of two ways:
A) Using a router - this is the most common way. For example, your LoPy, laptop etc. are all connected to your "home wifi", which is a router advertising something called an SSID. The router keeps track of all devices on the network and is able to pass packets between them.
B) Using your laptop as a router. You can configure your laptop not to join a network but instead to act as the router itself. I don't recommend this path as it requires a good understanding of WiFi networking principles.
Andrei
@finnzc if you want to send the WiFi packet to your laptop you still need to connect the LoPy to your laptop, perhaps with your laptop set up as an AP.
No, I haven't connected LoPy to the router. I was thinking of sending WiFi packets directly to the laptop. Could you explain why is connecting to the router necessary?
Thanks @livius
@finnzc
did you connected Lopy by Wifi to your router?
Source: https://forum.pycom.io/topic/1490/sending-packets-via-wifi-from-gateway-to-pc
What I created is basically a simple example that sends a string to a classic asmx webservice which in turn answers again with a string, which is then displayed on the client-side. A simple greeting service basically :) .
The second example is more complex (although ASP.net makes it really easy to handle) in the sense that I'm sending back a whole object graph: a Person object containing another Location object. For programming such interactions I strongly recommend using Firebug. It allows you to inspect the messages that are sent back and forth between the client and the server, and you can place breakpoints in your JavaScript, debug it and inspect your variables.
I not going into too much detail now since I don't have the time and moreover I think the example speaks for itself. I just want to mention some important parts regarding the WCF service.
namespace AjaxWebservices.Services
{
    [ServiceContract(Namespace = "AjaxWebservices.Services")]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class PersonService
    {
        ...
    }
}
If you see such a service definition and you have already worked with the somewhat more traditional asmx services, then you'd call this from your client JavaScript like
function sendToServer(){
    AjaxWebservices.Services.PersonService.<method-to-invoke>(<parameters>, onSuccess, onFailure);
}
You invoke it with the namespace of the service. But with a WCF service, the namespace defined in the ServiceContract (see above) is the one that actually holds, not the class namespace. In the example above I configured them to have the same name, just for simplicity.
...
For the rest take a look at the example.
Some of you may also ask why not use the famous UpdatePanel control. Well, generally you may use it without any problems and get instant Ajax functionality without even modifying a line of code of your traditional app. And that's the point: my personal opinion is to use it just for such purposes, i.e. to enhance traditional apps, and even in those cases I would be very careful. The major reason is performance. During an asynchronous request to the server, the UpdatePanel will send the whole ViewState of your entire page back to the server. This is needed so that on the server side the whole life-cycle can be completed just as during a normal request. Moreover, the whole rendered HTML will then be sent back for exchanging it. If you just have a small page, this may not be an issue for you. However, I have already had cases where, on a large page, an asynchronous paging of a list took about 50-60 seconds on a 56Kbit line, while my optimized, hand-written JavaScript-webservice solution was able to do it in about 3-4 seconds. This is because these requests are encoded in JSON and really small, and you don't have the ViewState, which can be quite big.
https://juristr.com/blog/2009/11/aspnet-ajax-consuming-webservice-from/
I have some problems when using the dynamic loading API (<dlfcn.h>: dlopen(), dlclose(), etc.) on Android.
I'm using the NDK standalone toolchain (version 8) to compile the applications and libraries.
The Android version is 2.2.1 Froyo.
Here is the source code of the simple shared library.
#include <stdio.h>

int iii = 0;
int *ptr = NULL;

__attribute__((constructor)) static void init()
{
    iii = 653;
}

__attribute__((destructor)) static void cleanup()
{
}

int aaa(int i)
{
    printf("aaa %d\n", iii);
    return i;
}
Here is the program source code which uses the mentioned library.
#include <dlfcn.h>
#include <stdlib.h>
#include <stdio.h>

int main()
{
    void *handle;
    typedef int (*func)(int);
    func bbb;

    printf("start...\n");
    handle = dlopen("/data/testt/test.so", RTLD_LAZY);
    if (!handle) {
        return 0;
    }
    bbb = (func)dlsym(handle, "aaa");
    if (bbb == NULL) {
        return 0;
    }
    bbb(1);
    dlclose(handle);
    printf("exit...\n");
    return 0;
}
With these sources everything is working fine, but when I try to use some STL functions or classes, the program crashes with a segmentation fault when the main() function exits, for example when using this source code for the shared library:
#include <iostream>

using namespace std;

int iii = 0;
int *ptr = NULL;

__attribute__((constructor)) static void init()
{
    iii = 653;
}

__attribute__((destructor)) static void cleanup()
{
}

int aaa(int i)
{
    cout << iii << endl;
    return i;
}
With this code, the program crashes with a segmentation fault during or after the main() function's exit.
I have tried a couple of tests and found the following results.
- Without using the STL, everything works fine.
- When using the STL and not calling dlclose() at the end, everything works fine.
- I tried to compile with various compilation flags like -fno-use-cxa-atexit or -fuse-cxa-atexit; the result is the same.
What is wrong in my code that uses the STL?
Looks like I found the reason for the bug. I have tried another example with the following source files:
Here is the source code of the simple class:
myclass.h
class MyClass
{
public:
    MyClass();
    ~MyClass();
    void Set();
    void Show();
private:
    int *pArray;
};
myclass.cpp
#include <stdio.h>
#include <stdlib.h>
#include "myclass.h"

MyClass::MyClass()
{
    pArray = (int *)malloc(sizeof(int) * 5);
}

MyClass::~MyClass()
{
    free(pArray);
    pArray = NULL;
}

void MyClass::Set()
{
    if (pArray != NULL) {
        pArray[0] = 0;
        pArray[1] = 1;
        pArray[2] = 2;
        pArray[3] = 3;
        pArray[4] = 4;
    }
}

void MyClass::Show()
{
    if (pArray != NULL) {
        for (int i = 0; i < 5; i++) {
            printf("pArray[%d] = %d\n", i, pArray[i]);
        }
    }
}
As you can see from the code, I did not use any STL-related stuff.
Here are the source files of the functions the library exports.
func.h
#ifdef __cplusplus
extern "C" {
#endif

int SetBabe(int);
int ShowBabe(int);

#ifdef __cplusplus
}
#endif
func.cpp
#include <stdio.h>
#include "myclass.h"
#include "func.h"

MyClass cls;

__attribute__((constructor)) static void init()
{
}

__attribute__((destructor)) static void cleanup()
{
}

int SetBabe(int i)
{
    cls.Set();
    return i;
}

int ShowBabe(int i)
{
    cls.Show();
    return i;
}
And finally, this is the source code of the program that uses the library.
main.cpp
#include <dlfcn.h>
#include <stdlib.h>
#include <stdio.h>
#include "../simple_lib/func.h"

int main()
{
    void *handle;
    typedef int (*func)(int);
    func bbb;

    printf("start...\n");
    handle = dlopen("/data/testt/test.so", RTLD_LAZY);
    if (!handle) {
        printf("%s\n", dlerror());
        return 0;
    }
    bbb = (func)dlsym(handle, "SetBabe");
    if (bbb == NULL) {
        printf("%s\n", dlerror());
        return 0;
    }
    bbb(1);
    bbb = (func)dlsym(handle, "ShowBabe");
    if (bbb == NULL) {
        printf("%s\n", dlerror());
        return 0;
    }
    bbb(1);
    dlclose(handle);
    printf("exit...\n");
    return 0;
}
Again, as you can see, the program using the library also does not use any STL-related stuff, but after running the program I got the same segmentation fault during the main(...) function's exit. So the issue is not connected to the STL itself; it is hidden in some other place. Then, after some long research, I found the bug.
Normally, the destructors of static C++ variables are called immediately before the main(...) function exits if they are defined in the main program; if they are defined in some library you are using, the destructors should be called immediately before dlclose(...).
On Android OS, all destructors of static C++ variables, whether defined in the main program or in some library you are using, are called during the main(...) function's exit. So what happens in our case? We have the static C++ variable cls defined in the library we are using. Immediately before the main(...) function exits we call dlclose(...); as a result the library is closed and cls becomes invalid. But the pointer to cls is stored somewhere, and its destructor is to be called during the main(...) function's exit. Because it is already invalid at the time of the call, we get a segmentation fault. So the solution is to not call dlclose(...), and everything should be fine. Unfortunately, with this solution we cannot use __attribute__((destructor)) for deinitializing something we want to deinitialize, because it is called as a result of the dlclose(...) call.
Answer:
I have a general aversion to calling
dlclose(). The problem is that you must ensure that nothing will try to execute code in the shared library after it has been unmapped, or you will get a segmentation fault.
The most common way to fail is to create an object whose destructor is defined in or calls code defined in the shared library. If the object still exists after
dlclose(), your app will crash when the object is deleted.
If you look at logcat you should see a debuggerd stack trace. If you can decode that with the arm-eabi-addr2line tool you should be able to determine if it’s in a destructor, and if so, for what class. Alternatively, take the crash address, strip off the high 12 bits, and use that as an offset into the library that was
dlclose()d and try to figure out what code lives at that address.
Answer:
I encountered the same headache on Linux. A work-around that fixes my segfault is to put these lines in the same file as main(), so that dlclose() is called after main returns:
static void* handle = 0;

void myDLClose(void) __attribute__((destructor));
void myDLClose(void)
{
    dlclose(handle);
}

int main()
{
    handle = dlopen(...);
    /* ... real work ... */
    return 0;
}
The root cause of dlclose-induced segfault may be that a particular implementation of dlclose() does not clean up the global variables inside the shared object.
Answer:
You need to compile with -fpic as a compiler flag for the application that is using dlopen() and dlclose(). You should also try error handling via dlerror(), and perhaps check whether the assignment of your function pointer is valid; even if it's not NULL, the function pointer could be pointing to something invalid from the initialization. dlsym() is not guaranteed to return NULL on Android if it cannot find a symbol. Refer to the Android documentation as opposed to the POSIX-compliant stuff; not everything is POSIX compliant on Android.
Answer:
You should use extern "C" to declare your function aaa().
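The linkage point is worth seeing in action: dlsym() looks up symbols by their exact exported name, and C++ mangles the names of functions not declared extern "C". The dlopen/dlsym pattern can be sketched from Python's ctypes, which wraps dlopen on Linux; the library name "libm.so.6" and the choice of the C-linkage symbol cos are illustrative assumptions, not part of the original question.

```python
# Sketch: the dlopen()/dlsym() pattern via Python's ctypes on a Linux system.
# A symbol with C linkage ("cos") is found under its plain name; a C++ function
# compiled without extern "C" is exported under a mangled name, so a lookup
# like dlsym(handle, "aaa") would fail for it.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")  # dlopen()
cosine = libm.cos                                                 # dlsym()
cosine.restype = ctypes.c_double
cosine.argtypes = [ctypes.c_double]
print(cosine(0.0))  # 1.0
```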
Tags: android, c++
https://exceptionshub.com/c-segmentation-fault-when-using-dlclose-on-android-platform.html
Hi,
I get this error when porting my app from Flex 3 to Flex 4. The code works fine in Flex 3.
1067: Implicit coercion of a value of type mx.rpc:IResponder to an unrelated type mx.rpc:IResponder
The code is unremarkable. I'm extending a delegate class:
import mx.rpc.IResponder;
public class MeetingDelegate extends MyBaseDelegate
{
private var mtg:MeetingVO;
/** constructor */
public function MeetingDelegate(responder:IResponder, mtg:MeetingVO)
{
super(responder);
this.mtg = mtg;
}
}
Is it because the approach to doing HttpService has changed in Flex 4?
The only other thing I can think of that might cause this is that the base class, MyBaseDelegate, comes from a library that is included in the Project's Build Path, and somehow has a different notion of what mx.rpc.IResponder is.
--Henry
https://forums.adobe.com/thread/489612
PROBLEM LINK:
Author: Ke Bi
Tester: Kevin Atienza
Translators: Vasya Antoniuk (Russian), Team VNOI (Vietnamese) and Hu Zecong (Mandarin)
Editorialist: Kevin Atienza
Contest Admin: Praveen Dhinwa
DIFFICULTY:
Medium-Hard
PREREQUISITES:
Suffix array, longest common prefix, string hashing, range minimum query, segment tree
PROBLEM:
A tandem is a nonempty string repeated thrice, i.e., a string of the form AAA for a nonempty string A. A tandem is interesting if the characters following each instance of A are not all the same; otherwise it is boring. Given a string s of length N, how many interesting and boring tandems does s have?
QUICK EXPLANATION:
We enumerate interesting and boring tandems per length of A. Let’s say |A| = L. Thus, the length of the tandem is 3L. Fix this L.
Consider the positions 0, L, 2L, 3L, \ldots. A tandem of length 3L will cover exactly three such positions. So we count boring and interesting tandems per three positions i, j, k with (i,j,k) = (i,i+L,i+2L).
- Let LCP(i,j,k) be the length of the longest common prefix of s[i\ldots N-1], s[j\ldots N-1] and s[k\ldots N-1].
- Let LCS(i,j,k) be the length of the longest common suffix of s[0\ldots i], s[0\ldots j] and s[0\ldots k].
Let V = \min(LCP(i,j,k),L) + \min(LCS(i,j,k),L) - 1. Then:
- If V < L, then there are no tandems.
- If V \ge L and LCP(i,j,k) > L, then there are V-L+1 boring tandems and 0 interesting tandems.
- If V \ge L and LCP(i,j,k) \le L, then there are V-L boring tandems and 1 interesting tandem.
The answers are the sums of all these contributions across all values of L and across all triples (i,j,k).
EXPLANATION:
We will use the notation s_{i, j} to denote the substring from s_i to s_j. Thus, |s_{i,j}| = j-i+1.
Faster than O(N^3)
The obvious O(N^3) brute-force algorithm here doesn’t pass, because N is very large. (And actually, even O(N^2) shouldn’t pass.) So let’s try to improve on O(N^3). Here we’ll try to show a solution that will lead to the intended one.
Let’s enumerate the tandems according to the length of A. Suppose |A| = L, so the tandem AAA has length 3L. For now, let’s fix this L and then count all boring and interesting tandems of length 3L. Ideally our solution will be fast enough so we can do this for each L \in [1,N/3].
Consider the characters s_0, s_L, s_{2L}, s_{3L}, \ldots. There are approximately N/L such positions. But since these are spaced L characters apart from each other, it follows that every tandem of length 3L covers exactly three of these. Therefore, let’s fix three adjacent characters, say s_i, s_j, s_k where (i,j) and (j,k) are spaced L characters away from each other, then count all boring and interesting tandems covering these three.
The tandem must start at some position in [i-L+1,i]. Otherwise, if it starts at some position > i it will not be able to cover s_i, and if it starts at some position \le i-L, it will not be able to cover s_k. But it can't just start at any such position! If we want the string s_{a, a+3L-1} to be a tandem, the following must hold (by definition):
s_{a, a+L-1} = s_{a+L, a+2L-1} = s_{a+2L, a+3L-1}
But due to the way we selected the position a, we find that a \le i \le a+L-1, a+L \le j \le a+2L-1 and a+2L \le k \le a+3L-1. It means that we can decompose the equality above into two parts:
s_{a, i} = s_{a+L, j} = s_{a+2L, k} \qquad \text{and} \qquad s_{i, a+L-1} = s_{j, a+2L-1} = s_{k, a+3L-1}
We can restate these two equalities in another way. The first one is the same as:
The strings s_{0, i}, s_{0, j} and s_{0, k} must have a common suffix of length i-a+1.
The second one is the same as:
The strings s_{i, N-1}, s_{j, N-1} and s_{k, N-1} must have a common prefix of length a+L-i.
This now gives us a way to compute the number of tandems covering s_i, s_j and s_k. Let’s first define two things:
- LCP(i,j,k) is the length of the longest common prefix of s_{i, N-1}, s_{j, N-1} and s_{k, N-1}.
- LCS(i,j,k) is the length of the longest common suffix of s_{0, i}, s_{0, j} and s_{0, k}. (Not to be confused with the longest common subsequence.)
There is a tandem starting at a if i-a+1 \le LCS(i,j,k), a+L-i \le LCP(i,j,k), and a\in [i-L+1,i]. Rearranging these inequalities gives:
\max(i-L+1,\; i-LCS(i,j,k)+1) \le a \le \min(i,\; i+LCP(i,j,k)-L)
which means the number of valid positions a is
\min(i,\; i+LCP(i,j,k)-L) - \max(i-L+1,\; i-LCS(i,j,k)+1) + 1
or 0 if this number is negative. We can actually simplify that number to the following:
\min(LCP(i,j,k),L) + \min(LCS(i,j,k),L) - L
But which among these are boring, and which are interesting? Notice that if s_{a,a+3L-1} and s_{a+1,a+3L} are both tandems, then s_{a,a+3L-1} is automatically boring, because the following characters are the same. It means that only the last tandem, s_{i,i+3L-1} can be interesting. To check if it’s interesting, simply check if s_{i+L} = s_{j+L} = s_{k+L}. If this is true, then the string is boring, otherwise it is interesting!
Computing \min(LCP(i,j,k),L) and \min(LCS(i,j,k),L) naïvely takes O(L) time each. Since we will do this approximately N/L times for each length L, the running time is therefore O(N/L\cdot L) = O(N) per L. So if we do this for each L\in [1,N/3], the running time is O(N^2). This is much better than O(N^3)!
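The O(N^2) counting described above can be sketched directly (a minimal Python sketch of our own, not the setter's or tester's reference code; the helper name count_tandems is illustrative):

```python
# For each length L, visit the consecutive multiples-of-L triples (i, i+L, i+2L),
# compute min(LCP, L) and min(LCS, L) naively, and count tandems from
# V = min(LCP, L) + min(LCS, L) - 1 as in the editorial.
def count_tandems(s):
    n = len(s)
    interesting = boring = 0
    for L in range(1, n // 3 + 1):
        for i in range(0, n - 2 * L, L):       # i, j, k are consecutive multiples of L
            j, k = i + L, i + 2 * L
            lcp = 0                            # min(LCP(i,j,k), L)
            while lcp < L and k + lcp < n and s[i + lcp] == s[j + lcp] == s[k + lcp]:
                lcp += 1
            lcs = 0                            # min(LCS(i,j,k), L)
            while lcs < L and i - lcs >= 0 and s[i - lcs] == s[j - lcs] == s[k - lcs]:
                lcs += 1
            V = lcp + lcs - 1
            if V < L:                          # no tandem covers this triple
                continue
            # LCP(i,j,k) > L exactly when the L-long match extends one character more
            if lcp == L and k + L < n and s[i + L] == s[j + L] == s[k + L]:
                boring += V - L + 1            # every candidate start gives a boring tandem
            else:
                boring += V - L                # all but the last start are boring
                interesting += 1               # the last tandem is interesting
    return interesting, boring
```

For example, "aaaa" contains two tandems of length 3: the one at position 0 is boring (the following characters are all 'a') and the one at position 1 is interesting.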
Faster solution
To improve that solution, we need a faster way to compute LCP(i,j,k) and LCS(i,j,k). We’ll describe various solutions here.
Solution 1: Hashing + Binary Search
If we can compare any two substrings for equality in O(1) time, then we can improve computing LCP and LCS quickly using binary search. One way to do substring comparisons quickly is with hashing!
Specifically, we will use polynomial hashing. We fix a base B and a modulus M. Let's now define the hash function H(s). For the string s = s_0s_1s_2\ldots s_{N-1}, its hash will be the following value:
H(s) = \left(v(s_0) + v(s_1)B + v(s_2)B^2 + \cdots + v(s_{N-1})B^{N-1}\right) \bmod M
where v(c) is another function that maps a character to a number.
We will use this hash to compare substrings in O(1):
- Preprocess the string so that H(s) can be computed in O(1) time for any substring s.
- To compare two substrings s and t, simply compare their hashes, H(s) and H(t).
If two strings are equal, then their hashes must be the same. Unfortunately, if they are not equal, the hashes might still be the same. For this reason, this hashing algorithm fails sometimes. But if you choose B, M and v(c) well enough, then you'll probably still get accepted.
So how do we preprocess the string? The key advantage of polynomial hashing is the following properties: (assuming we have precomputed powers of B modulo M)
- Concatenation. Given H(s) and H(t), H(st) can be computed in O(1) time as H(st) = (H(s) + H(t)B^{|s|})\bmod M.
- Splitting. Given H(st) and H(t), H(s) can be computed in O(1) time as H(s) = (H(st) - H(t)B^{|s|})\bmod M.
These properties are what we can use to preprocess. We simply compute the hashes of all suffixes of the string. This can be done in O(N) by using the concatenation property above. Then, to compute the hash of a substring s, find the appropriate suffixes st and t, then compute H(s) from H(st) and H(t) using the splitting property!
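As a sketch, here is that preprocessing in Python (the base B, modulus M, and the use of ord() for v(c) are arbitrary illustrative choices, not values from the editorial):

```python
# Precompute suffix hashes of s so that H(s[i:j]) is available in O(1).
# Hash used: H(s) = (v(s_0) + v(s_1)*B + v(s_2)*B^2 + ...) mod M, with v = ord.
B, M = 131, (1 << 61) - 1

def preprocess(s):
    n = len(s)
    pw = [1] * (n + 1)                 # pw[t] = B^t mod M
    for t in range(1, n + 1):
        pw[t] = pw[t - 1] * B % M
    suf = [0] * (n + 1)                # suf[i] = H(s[i:]); suf[n] = H("") = 0
    for i in range(n - 1, -1, -1):     # concatenation property, one char at a time
        suf[i] = (ord(s[i]) + B * suf[i + 1]) % M
    return pw, suf

def substring_hash(pw, suf, i, j):
    """H(s[i:j]) in O(1), by the splitting property."""
    return (suf[i] - pw[j - i] * suf[j]) % M

pw, suf = preprocess("abracadabra")
# equal substrings hash equally: s[0:4] and s[7:11] are both "abra"
assert substring_hash(pw, suf, 0, 4) == substring_hash(pw, suf, 7, 11)
```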
So what's the running time of this algorithm? Notice that LCP and LCS are computed with binary search, which takes O(\log N) time each, so doing that N/L times gives O(N/L \cdot \log N). Summing over all L gives
\sum_{L=1}^{N/3} O\!\left(\frac{N}{L}\log N\right) = O(N \log^2 N)
owing to the fact that the Harmonic series grows as O(\log N).
Solution 2: Suffix array + segment tree
Computing longest common prefixes is a standard step when you already have the suffix array. After constructing the suffix array, constructing the LCP array takes just O(N). Afterwards, to find the longest common prefix of two suffixes, s_{i,N-1} and s_{j,N-1}:
- Find the positions of i and j in the suffix array. Let’s say they are i' and j'. Without loss of generality, assume i' \le j'. (We can swap i and j otherwise.)
- The LCP is now \min(LCP[i'+1],LCP[i'+2],\ldots,LCP[j']).
Modifying this for three suffixes is straightforward. Thus, we have reduced LCP to a range minimum query, which is pretty standard.
Similarly, you can construct the "LCS array" by constructing a suffix array for the reverse of s, so LCS can be reduced to a range minimum query as well.
How do we compute range minimum queries? The most standard way is to use segment trees. This way, each range minimum query takes O(\log N) time to compute. Thus, the running time is again O(N \log^2 N) like above. (Note that the suffix array can be constructed in O(N \log^2 N) time.) But if you implement a segment tree normally, you might get TLE, even though the time complexity is correct! If this happens to you, here’s an improvement: Notice that we actually only need \min(LCP(i,j,k),L). Thus, we can add a third argument to our
range_min query call, which denotes the current upper bound.
range_min now looks like this:
def range_min(i, j, L):
    if this.min >= L:   // special pruning: if this.min >= L, it's impossible to improve upon 'L'
        return L
    if i <= this.i and this.j <= j:
        return this.min
    if this.j < i or j < this.i:
        return L
    return left.range_min(i, j, right.range_min(i, j, L))
Notice that the partial results are getting passed everywhere, so that the pruning done by
this.min >= L is maximized. With this, I was able to get the segment tree solution accepted!
Solution 3: Suffix array + sparse table
Another way to compute range minimum queries is by computing a table of minimums. Let M[i][k] be the range minimum in the range [i,i+2^k-1]. We can build the table M[i][k] using the following property:
M[i][k] = \min(M[i][k-1],\; M[i+2^{k-1}][k-1])
Thus, this construction takes O(N \log N) time. Afterwards, each range query runs in O(1) time, because the minimum in the range [i,j] is \min(M[i][k], M[j-2^k+1][k]), where 2^k is the largest power of two \le j-i+1. (Now, computing k takes O(\log N), but you can just precompute it in O(N) time for each length j-i+1.)
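A minimal sketch of this sparse table (our own illustrative code, with inclusive query bounds; the table is indexed table[k][i] rather than M[i][k]):

```python
# Build table[k][i] = min of a[i .. i + 2^k - 1], then answer range minima in
# O(1) by overlapping two blocks of length 2^k that cover [i, j].
def build(a):
    n = len(a)
    table = [a[:]]                     # level 0: blocks of length 1
    k = 1
    while (1 << k) <= n:
        prev = table[k - 1]
        half = 1 << (k - 1)
        table.append([min(prev[i], prev[i + half])
                      for i in range(n - (1 << k) + 1)])
        k += 1
    return table

def range_min(table, i, j):
    """Minimum of a[i..j], inclusive."""
    k = (j - i + 1).bit_length() - 1   # largest power of two <= j - i + 1
    return min(table[k][i], table[k][j - (1 << k) + 1])

table = build([5, 2, 4, 7, 1, 3])
assert range_min(table, 1, 3) == 2     # min(2, 4, 7)
```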
Notice that we moved the \log N factor from query time to construction time; whereas segment trees take O(N) time to construct and have O(\log N) query time, this solution takes O(N \log N) time to construct and has O(1) query time. But this change is significant, because the number of queries that we have is actually:
\sum_{L=1}^{N/3} O\!\left(\frac{N}{L}\right) = O(N \log N)
which means that the overall running time (after constructing the suffix array) is O(N \log N), which is an improvement over O(N \log^2 N)! (Note that there are O(N\log N), and even O(N), algorithms to construct the suffix array.)
Time Complexity:
O(N \log^2 N)
AUTHOR’S AND TESTER’S SOLUTIONS:
Setter
Tester-hash
Tester-tabl
Tester-tree
https://discusstest.codechef.com/t/tandem-editorial/12782
Making our projects wireless always makes them look cool and also extends the range over which they can be controlled. From using a normal IR LED for short-distance wireless control to an ESP8266 for worldwide HTTP control, there are lots of ways to control something wirelessly. In this project we learn how to build wireless projects using a 433 MHz RF module and an AVR microcontroller.
In this project we do the following things:
- We use one Atmega8 for the RF transmitter section and another Atmega8 for the RF receiver section.
- We interface an LED and a pushbutton with the Atmega8 microcontrollers.
- On the transmitter side, we interface a pushbutton with the Atmega and transmit the data. On the receiver side, we receive the data wirelessly and show the output on the LED.
- We use encoder and decoder ICs to transmit 4-bit data.
- The operating frequency is 433MHz, using a cheap RF TX-RX module available in the market.
Components Required
- Atmega8 AVR Microcontroller (2)
- USBASP programmer
- 10-pin FRC cable
- Breadboard (2)
- LEDs (2)
- Pushbutton (1)
- HT12D and HT12E pair
- RX-TX RF Module
- Resistors (10k,47k,1M)
- Jumper Wires
- 5V power supply
Software Used
We use CodeVisionAVR software for writing our code and SinaProg software for uploading our code to Atmega8 using USBASP programmer.
You can download these softwares from the given links:
CodeVisionAVR :
SinaProg :
Before going into the schematics and code, let's understand the working of the RF module with encoder-decoder ICs.
433MHz RF Transmitter and Receiver Module
Learn more about the RF pair in the RF Transmitter and Receiver Circuit. You can understand more about the working of RF by checking the following projects that use the RF pair:
- RF Controlled Robot
- IR to RF Converter Circuit
- RF Remote Controlled LEDs Using Raspberry Pi
- RF Controlled Home Appliances
Circuit Diagram
Circuit Diagram for RF Transmitter side
- Pin D7 of atmega8 -> Pin13 HT12E
- Pin D6 of atmega8 -> Pin12 HT12E
- Pin D5 of atmega8 -> Pin11 HT12E
- Pin D4 of atmega8 -> Pin10 HT12E
- Pushbutton to Pin B0 of Atmega.
- 1M-ohm resistor between pin15 and 16 of HT12E.
- Pin17 of HT12E to data pin of RF transmitter module.
- Pin 18 of HT12E to 5V.
- GND pin 1-9 and Pin 14 of HT12E and Pin 8 of Atmega.
Circuit Diagram for RF Receiver Side
- Pin D7 of atmega8 -> Pin13 HT12D
- Pin D6 of atmega8 -> Pin12 HT12D
- Pin D5 of atmega8 -> Pin11 HT12D
- Pin D4 of atmega8 -> Pin10 HT12d
- LED to Pin B0 of Atmega.
- Pin14 of HT12D to data pin of RF receiver module.
- 47Kohm resistor between pin15 and 16 of HT12D.
- GND pin 1-9 of HT12D and Pin 8 of Atmega.
- LED to pin 17 of HT12D.
- 5V to pin 7 of Atmega and pin 18 of HT12D.
Creating the Project for Atmega 8 using CodeVision
After installing these softwares follow the below steps to create project and writing code:
Step 1. Open CodeVision and click on File -> New -> Project. A confirmation dialogue box will appear. Click on Yes.
Step 2. CodeWizard will open. Click on the first option, i.e. AT90, and click OK.
Step 3. Choose your microcontroller chip; here we will take Atmega8, as shown.
Step 4:- Click on Ports. In the transmitter part, the pushbutton is our input and the 4 data lines are outputs. So, we have to initialize 4 pins of the Atmega as outputs. Click on Port D and make Bits 7, 6, 5 and 4 outputs by clicking on them.
Step 5:- Click on Program -> Generate, Save and Exit. Now, more than half of our work is completed.
Step 6:- Make a new folder on the desktop so that our files remain in one folder; otherwise they will be scattered over the whole desktop. Name your folder as you want, and I suggest using the same name to save the program files.
Three dialogue boxes will appear one after another to save the files. Do the same with the other two dialogue boxes which appear after you save the first.
Now, your workspace looks like this.
Most of our work is completed with the help of the Wizard. Now, we have to write only a few lines of code for the transmitter and receiver parts, and that's it.
Follow the same steps to create the files for the receiver part. In the receiver part, only the LED is our output, so make Port B bit 0 an output.
CODE and Explanation
We will write code for toggling the LED wirelessly using RF. The complete code for both Atmegas, at the transmitter and receiver sides, is given at the end of this article.
Atmega8 code for RF Transmitter:
First, include the delay.h header file to use delays in our code.
#include <io.h>
#include <delay.h>

void main(void)
{
Now, come to the last lines of code where you will find a while loop. Our main code will be in this loop.
In the while loop, we will send the byte 0x10 to PORTD when the button is pressed, and 0x20 when the button is not pressed. You can use any value to send.
while (1)
{
    if(PINB.0 == 1)
    {
        PORTD = 0x10;
    }
    if(PINB.0 == 0)
    {
        PORTD = 0x20;
    }
}
}
Atmega code for RF Receiver
First, declare a variable above the void main function for storing the incoming character from the RF module.
#include <io.h>
#include <stdio.h>
#include <delay.h>

unsigned char byte = 0;

void main(void)
{
Now come to the while loop. In this loop, store the incoming byte in the char variable named byte and check whether it is the same as the one we sent from the transmitter part. If the bytes match, take the NOT of PORTB.0 to toggle the LED.
while (1)
{
    byte = PIND;
    if(PIND.7==0 && PIND.6==0 && PIND.5==0 && PIND.4==1)
    {
        PORTB.0 = ~PORTB.0;
        delay_ms(1000);
    }
}
}
Build the Project
Our code is completed. Now, we have to Build our project. Click on Build the project icon as shown.
After building the project, a HEX file is generated in the Debug -> Exe folder, which can be found in the folder you made previously to save your project. We will use this HEX file to upload to the Atmega8 using the SinaProg software.
Upload the code to Atmega8
Connect your circuit according to the given diagram to program the Atmega8. Hook up one side of the FRC cable to the USBASP programmer; the other side connects to the SPI pins of the microcontroller as described below:
- Pin1 of FRC female connector -> Pin 17 ,MOSI of Atmega8
- Pin 2 connected to Vcc of atmega8 i.e. Pin 7
- Pin 5 connected to Reset of atmega8 i.e. Pin 1
- Pin 7 connected to SCK of atmega8 i.e. Pin 19
- Pin 9 connected to MISO of atmega8 i.e. Pin 18
- Pin 8 connected to GND of atmega8 i.e. Pin 8
Connect the remaining components on the breadboard as per the circuit diagram and open SinaProg.
We will upload the above-generated HEX file using SinaProg, so open it and choose Atmega8 from the Device drop-down menu. Select the HEX file from the Debug -> Exe folder as shown.
Now, Click on Program.
You are done and your Microcontroller is programmed. Use same steps to program another Atmega at receiver side.
Complete code and demonstration Video is given below.
Code for Transmitter Part:
#include <io.h>
#include <delay.h>
void main(void)
{
DDRB=(0<<DDB7) | (0<<DDB6) | (0<<DDB5) | (0<<DDB4) | (0<<DDB3) | (0<<DDB2) | (0<<DDB1) | (0<<DDB0);
// State: Bit7=T Bit6=T Bit5=T Bit4=T Bit3=T Bit2=T Bit1=T Bit0=0
PORTB=(0<<PORTB7) | (0<<PORTB6) | (0<<PORTB5) | (0<<PORTB4) | (0<<PORTB3) | (0<<PORTB2) | (0<<PORTB1) | (0<<PORTB0);
// Port C initialization
// Function: Bit6=Out Bit5=Out Bit4=Out Bit3=Out Bit2=Out Bit1=Out Bit0=Out
DDRC=(1<<DDC6) | (1<<DDC5) | (1<<DDC4) | (1<<DDC3) | (1<<DDC2) | (1<<DDC1) | (1<<DDC0);
// State: Bit6=T Bit5=T Bit4=T Bit3=T Bit2=T Bit1=T Bit0=T
PORTC=(0<<PORTC6) | (0<<PORTC5) | (0<<PORTC4) | (0<<PORTC3) | (0<<PORTC2) | (0<<PORTC1) | (0<<PORTC0);
// Port D initialization
// Function: Bit7=Out Bit6=Out Bit5=Out Bit4=Out Bit3=In Bit2=In Bit1=In Bit0=In
DDRD=(1<<DDD7) | (1<<DDD6) | (1<<DDD5) | (1<<DDD4) | (0<<DDD3) | (0<<DDD2) | (0<<DDD1) | (0<<DDD0);
while (1)
{
    if(PINB.0 == 1)
    {
        PORTD = 0x10;
    }
    if(PINB.0 == 0)
    {
        PORTD = 0x20;
    }
}
}
Code for Receiver Part:
#include <io.h>
#include <delay.h>
// Declare your global variables here
unsigned char byte = 0;
unsigned char lightON = 0;//light status
int LED_status = 0;
void main(void)
{
// Input/Output Ports initialization
// Port B initialization
// Function: Bit7=In Bit6=In Bit5=In Bit4=In Bit3=In Bit2=In Bit1=In Bit0=Out
DDRB=(0<<DDB7) | (0<<DDB6) | (0<<DDB5) | (0<<DDB4) | (0<<DDB3) | (0<<DDB2) | (0<<DDB1) | (1<<DDB0);
// State: Bit7=T Bit6=T Bit5=T Bit4=T Bit3=T Bit2=T Bit1=0 Bit0=0
PORTB=(0<<PORTB7) | (0<<PORTB6) | (0<<PORTB5) | (0<<PORTB4) | (0<<PORTB3) | (0<<PORTB2) | (0<<PORTB1) | (0<<PORTB0);
// Port D initialization
// Function: Bit7=In Bit6=In Bit5=In Bit4=In Bit3=In Bit2=In Bit1=In Bit0=In
DDRD=(0<<DDD7) | (0<<DDD6) | (0<<DDD5) | (0<<DDD4) | (0<<DDD3) | (0<<DDD2) | (0<<DDD1) | (0<<DDD0);
while (1)
{
byte = PIND;
if(PIND.7==0 && PIND.6==0 && PIND.5==0 && PIND.4==1 && LED_status==0)
{
PORTB.0 = ~PORTB.0;
delay_ms(1000);
}
}
}
https://circuitdigest.com/microcontroller-projects/interfacing-rf-module-with-atmega8
Wear a helmet. Even when coding.
In previous article, titled Understanding Delegates and Higher-Order Functions in C#, we were discussing the concept of delegates and higher-order functions
implementation in C#. That article has opened the question of closures, and
this is the opportunity to explain how closures work.
We can start with the function which multiplies its argument with two.
Func<int, int> scale = x => 2 * x;
But that might not be enough. Maybe we wanted to have a function which scales
its argument by some factor.
Func<int, int, int> scale = (factor, k) => factor * k;
But then, look, we might be reluctant to include the factor as the argument.
Because, why the function then? It’s just a multiplication. This lambda is not
of much use. Let’s remove the factor from its signature:
int factor = 2;
Func<int, int> scale = (k) => factor * k;
That makes the factor an unknown value. The lambda needs a variable named factor in order to compile. In C#, we can let the lambda capture a variable which is
accessible in the place where the lambda is defined. Having a factor variable defined right above the lambda will do the job.
The factor variable in this lambda is called a free variable. It is free in the sense that
it is not fixed, or determined, by the lambda itself. It is not present in the
argument list, nor is it defined as a local variable inside the lambda.
Hence, it must come from somewhere else. To be more precise, it must come from
the environment. C# is a statically scoped language, which somewhat reduces the
realm from which free variables can come.
Static scoping, also known as lexical scoping, means that scopes nest
statically, and they can be determined at compile time. Take a look at the entire
source code in which the lambda is defined:
namespace Demo
{
class Program
{
static void Main(string[] args)
{
int factor = 2;
Func<int, int> scale = x => factor * x;
}
}
}
In that respect, the lambda function is one scope. The Main function which contains its definition is the outer scope. The Program class which contains the Main function is its own outer scope. And the global scope, inside of which the Program class is defined, is the next outer level of scope nesting.
Therefore, the compiler will seek the factor variable inside the lambda first, which includes its arguments. If it doesn't find
it there, it will search the containing scope, the Main function's body, where it will visit local variables and arguments. This time
it will be lucky, because a factor variable is declared there as a local variable. But if it weren't there, the
compiler would visit the next outer scope, the Program class, and look into its static data members, since Main is a static method and can only access static class data. And so on. If
none of the scopes defined a variable named factor, we would face a compile-time error.
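The lookup order just described can be exercised directly. Below is a minimal sketch of my own (not from the original text) in which no local named factor exists, so resolution continues outward and lands on a static member of the Program class:

```csharp
using System;

class Program
{
    // No local variable named factor will exist in Run, so lexical
    // lookup continues outward and finds this static class member.
    static int factor = 3;

    public static int Run()
    {
        Func<int, int> scale = x => factor * x;  // refers to the static field
        return scale(5);
    }

    static void Main() => Console.WriteLine(Run());  // prints 15
}
```

As a side note, referring only to a static field is not really a "capture" at all; the compiler can emit a direct field access, since no closure object is needed in that case.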
Quite contrary to this, dynamic scoping would do none of these steps at compile
time. The variable definition would only be resolved at run time, when the lambda is
invoked. If a variable is not defined in the lambda itself, then we have to
search the next outer scope. But that is not the enclosing block of code;
rather, it is the function from inside of which the lambda was invoked. As
you can see, there is no way for the compiler to know all the places from which
a function is going to be invoked at run time, and therefore the compiler will not
even attempt to resolve its free variables.
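Since C# has no dynamic scoping, we can only simulate it. The sketch below is my own illustration, not part of the original text: it resolves the free variable by walking a hand-maintained stack of caller frames at run time, which is the essence of dynamic scoping.

```csharp
using System;
using System.Collections.Generic;

// Caution: this is NOT how C# works - it is a hand-rolled simulation of
// dynamic scoping, where a free variable is resolved by walking the chain
// of *callers* at run time instead of the lexical scopes at compile time.
class DynamicScopeDemo
{
    // The simulated dynamic environment: a stack of name -> value frames,
    // one frame pushed per active function call.
    static readonly Stack<Dictionary<string, int>> Frames =
        new Stack<Dictionary<string, int>>();

    static int Lookup(string name)
    {
        foreach (var frame in Frames)          // newest frame first
            if (frame.TryGetValue(name, out int value))
                return value;
        throw new Exception("Unbound variable: " + name);
    }

    // The "lambda": its free variable factor is resolved only when invoked.
    static int Scale(int x) => Lookup("factor") * x;

    public static int Caller()
    {
        // The caller binds factor = 3 in its own frame...
        Frames.Push(new Dictionary<string, int> { ["factor"] = 3 });
        int result = Scale(5);   // ...so Scale sees 3: dynamic, not lexical
        Frames.Pop();
        return result;
    }

    static void Main() => Console.WriteLine(Caller());   // prints 15
}
```

Here Scale has no idea which factor it will use until somebody calls it; whichever caller pushed a binding last wins.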
For better or worse, it's all static in C#. With static, or lexical, scoping, the C#
compiler will conclude that the factor variable is the one declared right above, within the same enclosing scope
which contains the lambda definition. And then, the adventure can begin. The factor has the value two; hence, the scale function will return twice the argument's
value.
We can then introduce another function, which would be eager to scale some
values it knows about, but, alas, it doesn’t care to know the scaling factor.
static void Work(Func<int, int> scale)
{
int y = scale(5);
Console.WriteLine(y);
}
static void Main(string[] args)
{
int factor = 2;
Func<int, int> scale = x => factor * x;
Work(scale);
Console.ReadLine();
}
The Work function asks for a Func delegate which scales integer values
without knowing the factor. The environment within which the scale function
executes, and from which the function picks the concrete factor value, is called
the closure. The closure was constructed at the place where the lambda was constructed
- down in the Main method. That means that the factor variable was captured, and, it would seem, its value
copied into the closure, in the very line where the entire lambda was fully
defined. That is very important to know.
When the Work function is invoked, with the scale lambda passed to it as the argument, Work will print out the value 10, because the scaling factor's value 2 was captured
before. But problems are lurking on the horizon. What if I changed the value of the factor variable just before calling the Work function? Look at the modified code:
static void Work(Func<int, int> scale)
{
int y = scale(5);
Console.WriteLine(y);
}
static void Main(string[] args)
{
int factor = 2;
Func<int, int> scale = x => factor * x;
factor = 3; // Added instruction
Work(scale);
Console.ReadLine();
}
What do you think the Work function will print this time? Will it remain 10, because the factor value two was captured when the lambda was constructed? Or will the scale function somehow refer to the variable itself, rather than copy its value and
keep it fixed? In the former case, the Work function would still produce the output 10. In the latter, the new scaling factor
would take effect and the output would read 15.
When you run this piece of code, it will print the value 15. Now it will be
interesting to see why that is so. While trying to figure out why the output has
changed, we will reach a better understanding of how closures work in C#.
What we have at this moment is an augmented delegate - a delegate together with its
enclosing environment, known as the closure. When I passed the scale function to the Work function, I didn't just pass a delegate. I passed a closure. And in
that closure, we find the factor variable, too. However, when I try to explain that the variable is passed
together with the delegate, I struggle to find the right words for
it. You will see the trouble if you note that the factor
variable is a plain integer. You cannot just pass an integer variable around;
its value will be copied every time you wish to move it. You cannot even pass a
variable of a reference type around, for that matter. You can only make a copy
of the reference and have the new reference refer to the same object on the
heap.
The factor variable itself cannot travel, you see, and that is the problem when working with
closures. Then how did the Work function know that the factor value had changed from 2 to 3 after it was captured by the lambda? That is the
mystery, and resolving it will require thinking outside the box.
The trick is that after capturing a variable in a closure, there will be no
factor variable anymore. It is not going to be part of the Main function when Main is compiled. Here is how that goes. This segment, where the factor is declared, together with the subsequent lambda declaration, will become the closure.
// Closure:
int factor = 2;
Func<int, int> scale = x => factor * x;
I will implement my own custom closure class, just to show you what will happen
when compiler encounters a lambda with free variables.
class ScaleClosure
{
public int environment;
public int Scale(int arg) => this.environment * arg;
}
The closure consists of the environment and the function. The environment is a public
field of integer type - that will be the scaling factor I need to capture. The
other part is the function. I have retyped the lambda expression, only this
time referring to the local environment in place of the scaling factor.
And then, down below, where I was creating the factor and the lambda, I will instantiate the closure instead.
class ScaleClosure
{
public int environment;
public int Scale(int arg) => this.environment * arg;
}
static void Work(ScaleClosure scale)
{
int y = scale.Scale(5);
Console.WriteLine(y);
}
static void Main(string[] args)
{
var scale = new ScaleClosure() { environment = 2 };
scale.environment = 3;
Work(scale);
Console.ReadLine();
}
There are a couple of new things going on here. The real closure has just
become an object of the specialized closure class. There is nothing static in
the ScaleClosure class. I have just pretended that I am the compiler, and I have manually coded a
specialized closure class for this case. You see, there is no notion of a
general closure. Every concrete closure is tailored to the function it
wraps, and that is something the compiler takes care of. Its function is also
very specific - in my example, the function receives an integer and returns an
integer. The environment is also specific - in my case it is just a single
integer, which is used as the scaling factor inside the function. The closure will
access its environment and pick the values for the free variables; that is why
it had to be instantiated as an object before use.
This sequence of steps is very close to what the compiler does for us when it
encounters a closure in code. A Func delegate will be instantiated in a special
form, a closure, which still acts as a delegate in C#. I cannot mimic that
here, because it would require deriving ScaleClosure from the Func
delegate class, and Func is a sealed class. Only the compiler can produce
specialized delegates in C#, and it does that in a way that is not quite visible
to us. That is why I am asking for a ScaleClosure object in the Work function. In the real example, that would still remain a Func<int, int> delegate. Nevertheless, that doesn't affect my demonstration much, as it
remains functionally correct.
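One detail we can still mimic is Work's realistic signature. An instance method group converts to a matching Func delegate, and the resulting delegate carries the closure object as its target, so Work can keep asking for a Func<int, int>. This is my own variation on the example above:

```csharp
using System;

class ScaleClosure
{
    public int environment;
    public int Scale(int arg) => this.environment * arg;
}

class Program
{
    // Work keeps the realistic signature: it only sees a Func<int, int>.
    static void Work(Func<int, int> scale) => Console.WriteLine(scale(5));

    static void Main()
    {
        var closure = new ScaleClosure { environment = 2 };
        closure.environment = 3;   // mutate the captured "variable"
        Work(closure.Scale);       // method group -> Func<int, int>; prints 15
    }
}
```

The delegate created from closure.Scale holds a reference to the closure object, which is essentially what the compiler-generated delegate does behind the scenes.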
The critical part of this demonstration is capturing the factor variable. Look at the Main method. There is no factor variable there anymore. The scaling factor is now captured in the closure, and
captured in its entirety. There will be no value copying ever again. When the
closure is initialized, the environment is initialized as well (to the value
2, which was the initial scaling factor in the prior example). And then,
just before calling the Work function, where I previously modified the factor variable, I now write the new value into the field inside the closure object.
By acting as a compiler, I have just rewritten the entire code segment,
and by doing that I have introduced an explicit closure for my function with a
free variable. I had to do that, because I had no other way to pass the
variable together with the function. And by saying to pass the variable, I
really mean to pass-the-variable, with no vagueness in what I am saying.
The only instance of the free variable is the one inside the closure object.
That is the magic C# compiler does when it encounters a lambda which captures
surrounding variables. And that is what makes lambdas work like a charm in C#.
Everything will be right where you wanted it to be when you invoke a Func
delegate.
Just to make sure that everything was done right, you can run this code and see
that it prints the value 15 on the output, just as expected.
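One more experiment, consistent with this model (my own sketch, not from the original text): two lambdas that capture the same variable share a single compiler-generated closure object, so a mutation made through one delegate is immediately visible through the other, and through the enclosing method as well.

```csharp
using System;

class SharedClosureDemo
{
    public static int Run()
    {
        int counter = 0;
        Action increment = () => counter++;   // captures the variable itself
        Func<int> read = () => counter;       // captures the same variable

        increment();
        increment();
        // Both delegates refer to one closure field, so read sees
        // every increment - there was never a private copy of counter.
        return read();
    }

    static void Main() => Console.WriteLine(Run());   // prints 2
}
```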
Published: Sep 12, 2017; Modified: Sep 6,
http://codinghelmet.com/?path=howto/understanding-closures-in-cs
I am new to Sublime and Python so this may be a really simple question.
{
"target": "vs_build_solution",
"file_regex": "(...*?)/(([0-9]*)/) : error C[0-9]*:(...*?)",
}
To get output from the custom command into the build results panel, I have tried:
sys.stderr.write
sys.stdout.write
print
Every other build system example I've seen just ends up calling a shell command through the use of 'exec' so I am a bit stuck here.
Basically for a standard build system:
import sublime, sublime_plugin
execcmd = __import__("exec")
class ExampleExecCommand(execcmd.ExecCommand):
def finish(self, *args, **kwargs):
self.append_data(self.proc, "My content to add to build message\n")
super(ExampleExecCommand, self).finish(*args, **kwargs)
Now in your case, it looks like you don't execute an external program but use a COM object, so more tweaks are probably needed to the standard build system. Look at "Sublime Text 2\Packages\Default\exec.py", which contains the build command (ExecCommand).
Maybe the simplest way is to copy the exec.py source into a new file and rewrite it for your needs.
Thank you.
By looking at the exec command (excellent suggestion, by the way), it turns out that what I want to do is rather simple. The exec command simply pipes its output to the "exec" panel.
self.window.get_output_panel("exec")
When I do the same everything seems to work as intended.
In the long run I should probably follow your advice and copy/customize more of the infrastructure in the exec.py implementation.
https://forum.sublimetext.com/t/custom-build-system-command-output/5185
|
Hello list,
During some tests I was doing, I found that logging to file (using notifyme) resulted in lines starting with the \0 character. Further investigation showed that the bug is in ulog.c. The following patch corrects the problem (for me). As I am not a programmer I don't know if this breaks anything... Be warned!
------ begin patch ------
--- ulog/ulog.c.orig	Thu Dec 7 18:48:55 2000
+++ ulog/ulog.c	Thu Oct 25 11:17:36 2001
@@ -399,8 +399,11 @@
 	tbuf[cnt] = 0;
 	}
 #if __linux
-	/* Stream unix domain socket, wants the \0 as message terminator */
-	++cnt;
+	if(logFilename == NULL)	/* using syslogd */
+	{
+	    /* Stream unix domain socket, wants the \0 as message terminator */
+	    ++cnt;
+	}
 #endif
 #if TBUFDIAG
 	fputs(tbuf, stdout);
------ end patch ------
Further info: using Debian Linux unstable on a PPro 200 with 256 MB RAM
Kind Regards,
Rob Epping.
--
Home is where a keyboard is.
http://www.unidata.ucar.edu/mailing_lists/archives/ldm-users/2001/msg00477.html
|
Coding Style
- CamelCase is also prevalent in many other "non-pc" languages such as Smalltalk.
That's clear (not to mention sensible). However, on the main Coding Style page it says this:
We have decided to use 4 spaces as a tab for the project.
Calling it a "tab" (rather than something like "indentation amount") is a little confusing, I think. It made me think that source files should have tab characters, but that the editor should be set up to interpret them as being 4 columns (i.e., taking you to the next multiple of 4 columns).
Preprocessor MACROS
Capitalizing or not?
# define IS_FINITE(_a) (std::isfinite(_a))
Arguments pro capitalizing
- Macros don't have a namespace and can therefore easily clash with other names. This happened in the isFinite case, where across several files we had:
#define isFinite(x) ...
...
class Point {
    bool isFinite(int x);
};
This obviously name-clashed, whereas a capitalized IS_FINITE would never have clashed, as (I guess) nobody uses capitalized function names. Note that an inline function instead of a macro would also have worked.
- Some macros can expand strangely (I don't have a good example now), so it is good that the user knows a macro is being used.
- Note that it is often also possible to define an inline function instead of a macro (which may be a better solution).
Arguments against capitalizing
User Interface Coding Rules
Complex User Interface Items and Containers (Dialogs, Panels, Frames...)
- Each one has to be developed as a C++ class
- Declare all UI widgets that are used in the container as protected, initialisation-time allocated members
protected:
    Gtk::Label _anchorLabel;
- Initialize all widgets in the member initialization list of the container constructor
MyDialog::MyDialog() :
    _anchorLabel(_("Look at this label"))
{ ...
Dialogs
- Dialog source code has to be located in src/ui/dialog/
- A dialog belongs to the namespace Inkscape::UI::Dialog
Widgets
- Widget source code has to be located in src/ui/widget/
- A widget belongs to the namespace Inkscape::UI::Widget
https://wiki.inkscape.org/wiki/index.php?title=Coding_Style&oldid=76904
|
Hi Team,
Hope you are safe and well!
I am trying to explore the DSS API Designer for creating Python and R endpoints.
I have been facing a basic problem reading a dataset from the DSS flow in the API function by using the conventional dkuReadDataset() function.
I understand that it won't be directly referencing my project folder, and APIs work outside the project environment, but how do I extract the path and read the HDFS dataset in a Python/R API endpoint?
Thanks,
Hi @anish_anand
First and foremost, you should really be careful on the load you put on an API endpoint, API node and your network in general. Streaming large datasets might bring your project, browser and network to a halt.
Having said that, there's nothing in principle stopping you from reading the dataset from an endpoint. You might go for different alternatives:
- SQL endpoint (just SELECT * from table)
- Dataset lookup without filters
- A Python function that reads the dataset through the public API, gets a dataframe and returns it.
As you can see all of these would return the desired dataset through your existing DSS node.
If you need to retrieve them directly (bypassing DSS), then dataset > Settings will show you the connection, folders and files (if in FS) or table name (if in DB).
Thanks for your response!
Yes, the approach that you mentioned makes sense. And yes, we will definitely note not to burden the node by reading a large dataset. Any guidelines around the maximum data volume which is permissible?
One additional question -
How do I read an HDFS dataset in my project using a Python API endpoint? I know the API is a separate service altogether and I might have to define the project's location and some keys. And that is what I am trying to identify: which parameters should be defined before reading an HDFS dataset in a Python API?
Please find attached the code I am currently using (a basic dataset read in Python).
Thanks
Since you'll be accessing your data through DSS but from outside DSS you will need to do so either via:
- a SQL endpoint as mentioned, and if your dataset is in a DB
- via public API inside your Python function endpoint.
If you use the Predictive Maintenance sample project as your sandbox, you could do something like the following:
- Define an API service with a Python function API endpoint
- Inside the function provide with the needed credentials (in practice a service account, not personal)
import pandas as pd
import dataikuapi

client = dataikuapi.DSSClient(DSSHost, apiKey)

def api_py_function(project_key, dataset_name):
    dataset = client.get_project(project_key).get_dataset(dataset_name)
    columns = [c.get('name') for c in dataset.get_schema().get('columns')]
    data = []
    for row in dataset.iter_rows():
        data.append(row)
    return pd.DataFrame(columns=columns, data=data).to_json(orient='records')
- In the sample queries section, define parameters as follows:
{
    "project_key": "DKU_PREDICTIVE_MAINTENANCE",
    "dataset_name": "Assets_at_risk"
}
This will return all records in that dataset.
But, as already mentioned, you can see that if the dataset is on the large side, then iterating over the records to compile a result is not efficient.
https://community.dataiku.com/t5/Using-Dataiku-DSS/DSS-API-Designer-Read-dataset-from-DSS-flow-in-R-Python-API/td-p/7493?attachment-id=197
|
Commented on Integrate A Site Template
Why drag the CSS into the public directory? Instead, use resources/css + Laravel Mix. It feels misleading to newcomers.
Replied to Teams Via Subdomain
Something along the lines of this
If you do pursue it with Spark, it'll work almost the same way.
Replied to Where Are You All From?
Canada eh :)
Replied to Check Quantity If It Available On Stock
We don't know what your database looks like for the products or how it's structured. But you can do a check in your controller/service to check stock beforehand (before any save logic) and respond accordingly.
Replied to How Would You Set Up "Credit" In Movie Database
I think the pivot table is the best option. If you want to have parent and sub credits, then apply that on your pivot table:
person_id | movie_id | role_parent_id | role_child_id
Replied to Are Event Listeners Cached In Some Way?
From the docs..
... during your deployment process, you should run the event:cache Artisan command to cache a manifest of all of your application's events and listeners
So yes event & listeners are cached. Clear your cache on deployment and if you are using queue's, restart them on deploy.
Replied to Saving API Usage For Metered Billing
I don't know of any handy package but I want to suggest using Middleware to do the counting for you, it abstracts that logic away from your API endpoint controller methods.
If you do build one, you should publish it as a package, I'm sure others would appreciate it :)
Replied to How To Setup A Callback API Endpoint That Recieves Payment Status From A Payment Gateway
I'm not sure what Payment Processor you are using but you would look at their dev docs to see what is being sent to your server when you provide them with a webhook. Are they providing a POST or GET request?
They should also provide you with some information as to what they are POSTing. Typically a webhook will be a POST especially when they are sending you a bunch of data.
So check your PP's dev docs to see what they send you and how. Outside of that it's up to you to do with the
$request as you choose.
As an example, MailGun will POST to a webhook URL I provide. I used their docs to find out what they POST to me and I actually validate the incoming request (using custom Middleware) well I validate the signature included in the request.
So check the docs, and what you are provided with do what you need to with the
$request :)
Replied to Change Request Body Middleware
You're looking at (from the title) a method for Slim, maybe?
You can do this with Middleware doing so like this:
Example File:
app/Http/Middleware/TestMiddleware.php
<?php

namespace App\Http\Middleware;

use Closure;

class TestMiddleware
{
    /**
     * Handle an incoming request.
     *
     * @param  \Illuminate\Http\Request  $request
     * @param  \Closure  $next
     * @return mixed
     */
    public function handle($request, Closure $next)
    {
        $input = $request->all();

        if (isset($input['foo'])) {
            $input['foo'] = 'bar'; // Input modification
            $request->replace($input);
            \Log::info($request->all()); // Shows modified request
        }

        return $next($request);
    }
}
Replied to Convert String Array To Array Of Int(s)
array_map to intval ;-)
$array = ["123", "34", "223"]; $integerIDs = array_map('intval', $array);
Replied to Saving For Each Loop
Try clearing your cache and cookies:
php artisan cache:clear then delete browser cookies. This issue has been posted previously on this forum, try searching next time :)
Replied to Laravel Make Service Pattern Command
Just a suggestion, good job but perhaps stub out some example methods in the service stub as a starting point for others? :)
Replied to How Can I Show Edit And Delete Button To The Post Owner In Vuejs Comeponent Larave?
You will need some way to know the current logged in user as well as the post user. Since you are passing in the post parameter to the
deletePost method in Vue you should have access to
post.user_id now you just need to find a way to get the currently logged in user.
You can set it like so
Then you would use v-if something like this:
<li><a @Edit Post</a></li>
Replied to Laravel Cashier - Adding Days To Subscription
@Tyler The repo is your friend in this case, you can see what happens then you 'cancel' a subscription via the cancel method.
For Stripe:
The code will cancel the subscription at the end of the billing period. The code will set the
cancel_at_period_end field in stripe to true. Stripe will end the subscription at the end of the billing period.
For BrainTree:
You can see the code uses the
BraintreeSubscription class to call
cancel() which if we dig into the
braintree/braintree_php package you'll see here it calls the Braintree API Gateway and calls here to cancel the subscription in Braintree.
So to answer your question simply:
Yes, there is a difference between using the cancel subscription method and deleting the entry from the DB. The subscription is technically stored on the processor side (Stripe/Braintree); your app's database just keeps a reference to it. Deleting it in your DB will just delete your app's reference; the customer will still be billed by the processor (Stripe/Braintree), since that's where the actual processing/charging magic happens.
Cashier-Braintree is just a wrapper to help facilitate the use of Braintree in Laravel. If you want to use the direct Braintree PHP library, go for it, but I'd personally use Cashier and, if it doesn't do what I need, just extend it to suit my needs.
I hope this helps give you more insight and a better idea on how things work under the hood with Laravel's Cashier/Cashier-Braintree :)
Replied to Integrate PHP Code With Contact Form 7 Fields
@hassanshahzadaheer You can see how to do it in the CF7 docs I've updated my answer to include an example of a dropdown in the CF7 edito.
An example of how to create dropdowns in CF7 editor:
[select cows "1" "2" "3"]
Replied to Laravel Cashier - Adding Days To Subscription
Yes that sounds like your best bet. I'm not familiar at all with BrainTree myself but from the flow you've described.
$user->subscription('fooSub')->cancel();
On step 2, since Cashier (Braintree) doesn't seem to have methods to create custom subscriptions, you'll likely have to do this manually with the Braintree PHP library. You will want to set the billing day of month to the new date you need.
$result = $gateway->subscription()->create([
    'paymentMethodToken' => $token,
    'planId' => 'custom_plan_foo1234',
    'billingDayOfMonth' => 30,
    // ...
]);
After that you can just re-subscribe your user using Cashier like so:
$user->newSubscription('newSub')->create($token); You will need the CC token also to re-subscribe the user.
Replied to 'no Input Specified' Challenge On The Browser When Trying To Load 'homestead.test'
Looks like you have another issue here where the base box won't download. Please check this solution
Replied to Laravel Cashier - Adding Days To Subscription
Yes
anchorBillingCycleOn is for Stripe only. As for BrainTree it doesn't look like there is an option for what you need in Cashier.
I can suggest another option, is using the 'trial days' instead and requiring CC on setup so they have X many days before they are "subscribed". This however only works on initial sign up but not necessarily when they are/have been subscribed already.
You cannot change the billing date on an existing subscription, you must cancel the subscription and create a new one.
With Braintree, the next billing date is calculated, and so can't be changed unfortunately.
Replied to 'no Input Specified' Challenge On The Browser When Trying To Load 'homestead.test'
Make sure your
Homestead.yaml project / folders paths are correct. Then run
vagrant up --provision and try the URL again.
Replied to Lavacharts Gauge Colors
I'm not super familiar with Lavacharts but looking at the docs on both Lavacharts and Google it looks like only
Replied to Database Sessions Pruning
Yes, they are cleaned out. There is another thread on Laracasts here asking the same question. Specifically, check out the comment about the
Lottery.
/*
|--------------------------------------------------------------------------
| Session Sweeping Lottery
|--------------------------------------------------------------------------
|
| Some session drivers must manually sweep their storage location to get
| rid of old sessions from storage. Here are the chances that it will
| happen on a given request. By default, the odds are 2 out of 100.
|
*/
Laravel cleans up expired session entries based on a lottery setting.
However note that not all session drivers require manual cleanup, something like Redis doesn't need it because it automatically deletes expired keys.
You can see in the
StartSession middleware where it cleans up the sessions using the method
collectGarbage, which clears the sessions based on the lottery config (above).
The default configuration is [2, 100]. It means that a random integer is chosen between 1 and 100; if it's lower than or equal to 2, the sweep runs (i.e., there is a 2% chance of sweeping old sessions on every call).
Replied to Integrate PHP Code With Contact Form 7 Fields
You'll want to look into the Hooks the Contact Form 7 provides. You'll hook into the response to output what you want.
As for the dropdowns, you create them in the Contact Form 7 form editor with the provided options.
You can see a list of hooks at (for the latest version of CF7), you'll probably want something like the
wpcf7_ajax_json_echo hook to modify the outbound response returned to the user. Here's an example:
Something like this (untested), but it would go in your wp-content/themes/footheme/functions.php (your theme, functions file).
add_filter('wpcf7_ajax_json_echo', function ($response, $result) {
    $cow = 100;
    $goat = 200;
    $message = ''; // Initialize before appending below

    if (isset($response['cow']) || isset($response['goat'])) {
        if (isset($response['cow'])) {
            $cowSelect = $response['cow'];
            $cow = ($cow - $response['cow']);
            switch ($cowSelect) {
                case '1':
                case '2':
                case '3':
                case '4':
                case '5':
                    $message .= 'Total '.$cow.' cows remaining'.'<br/>';
                    break;
            }
        }
        if (isset($response['goat'])) {
            $goatSelect = $response['goat'];
            $goat = ($goat - $response['goat']);
            switch ($goatSelect) {
                case '1':
                case '2':
                case '3':
                case '4':
                case '5':
                    $message .= 'Total '.$goat.' goats remaining'.'<br/>';
                    break;
            }
        }
    }

    return $response;
}, 10, 2);
Then you just need to create a dropdown with the options you want, make the dropdown required (if it needs to be required) in the Contact Form 7 form editor.
Replied to Injecting Bugsnag In Queuefailed
Just a thought, why not use the event
illuminate.queue.failed and create a listener to log this. The parameters provided are
$connection, $job, $data.
As for your error, you have too many params according to the error in your closure it's providing one, you expect two (JobFailed, Bugsnag).
Queue::failing(function (JobFailed $event) { // $event->connectionName // $event->job // $event->exception });
Replied to Event Listener With Popup Notification?
Using the package I linked above: Laravel Websockets as your pusher replacement
Also you can use this tutorial to give you an idea of what to do in order to build real time user notifications with Laravel Echo (which you'll use with the above package)
Replied to Any Recommendation For Text Editor With Image Uploads?
This is what you're looking for, with this plugin.
Then you would set the upload endpoint in Laravel, so it could be S3/DO or whatever you want at the server endpoint.
If you want just an uploader.. Uppy is great and supports multiple providers which looks awesome - I plan on seeing if I can integrate Uppy with Trumbowyg in the future.
Replied to Multiple Variants Of Same App...how To Manage
I suggest checking out Git Submodules or even think about modularizing your app from a code level.
You have some options:
I've been thinking about this same thing, and I think making things into modules in Laravel and making use of Git Submodules to manage the shared submodules is the best way to manage the code. I guess it depends how much is similar between the two projects.
Replied to Event Listener With Popup Notification?
I think you would be better off either using native web push notifications OR using something like Pusher or the alternatives Laravel Websockets for this and then you can push real-time to specific channels.
Again, if you're just doing all this for one popup, and only ever one, the top two may be overkill.
If you want to do it using your example, the easiest method is setting up a notifications table, where you store the message to the user; on page load you have a partial that checks for any new notifications (provided in a global view composer) and, if there are any, shows them to the user and marks them as read, so on the next page load they don't appear again.
In the backend you'd trigger an event (i.e.
NotifySubmitterOfApproval and in that listener you would create a new
notifications table entry for that user.
Personally if I'm going to be using more than just one notification, I'd make use of webpush/sockets ;-)
Replied to Perfectly Smooth Looped Eased Css Animation
I think this is what you want.. check this out
Other solutions for what you want at
Replied to Route::getRoutes() Returns Only Package Routes
I don't think there's an easy way to specify a package and get the routes, but you can sort by controller using a slightly modified code snippet from here:
$userRoutesArray = [];
$lookFor = 'UserController';

foreach (\Route::getRoutes() as $value) {
    $controller = $value->getAction();

    if (! isset($controller['controller'])) {
        continue;
    }

    if (strpos($controller['controller'], $lookFor)) {
        array_push($userRoutesArray, $value->uri());
    }
}

dd($userRoutesArray);
If you wanted to get all routes from the app, you could run this snippet:
$routes = collect(\Route::getRoutes())->map(function ($route) {
    return $route->uri();
});

dd($routes);
Replied to Create Account When User Registers
If you check in the Spark code, it shows the above but also where it appends the user who registered as the
owner to the team.
If you have access to the repo
If you do not.. this is the code. You can see on the
Spark::interact(AddTeamMemberContract part it adds the user to the team ;-)
public function handle($user, array $data)
{
    event(new TeamCreated($team = Spark::interact(
        TeamRepository::class.'@create', [$user, $data]
    )));

    Spark::interact(AddTeamMemberContract::class, [
        $team, $user, 'owner'
    ]);

    event(new TeamOwnerAdded($team, $user));

    return $team;
}
So I'm not sure why you're looking to change the Spark registration process, as it already creates the team and adds the registered user as the owner on register?
Replied to Trying To Get Property Of Non-object
Try dumping with dd inside your foreach loop to see what's available or what is being returned. See if duedate is in the array. Make sure duedate is not hidden in the Model also.
Replied to How To Process Mail Send Issues?
Typically this depends on what provider you're using. If you're using something like MailGun, you would set up a webhook and have it record any webhook calls for "failed" or the like for those reasons.
If you're using plain PHP's mail() function, you get no delivery feedback beyond whether the message was accepted for sending.
So your best bet at this point is using a third party like MailGun or SendMail and setting up webhooks for these success/failed/dropped/bounced events and using the message id to track them.
Replied to Laravel Telescope System Requirements
Try increasing your PHP's
memory_limit.
There's an Issue thread for composer here with some other debug steps/solutions to try which may help.
Replied to Doing Something Before Delete In Eloquent Models Even In Mass Deleting
You can hook into the deleting event in the Model's boot method OR you can use an observer to observe the
deleting event and do the image deletion there before the Product finishes deleting.
An example of the first thing I suggested is:
public static function boot()
{
    parent::boot();

    static::deleting(function ($product) {
        // Delete your photo here
    });
}
I'd suggest creating an event listener / observer instead tho; it's a bit cleaner, especially when you have more and more logic.
Replied to Validation On Required_if
I'm not entirely sure what you're trying to do. If you want to (on the Update Form Request) only require the banner field when it's filled out, then you would likely use the sometimes validation method.
In some situations, you may wish to run validation checks against a field only if that field is present in the input array. To quickly accomplish this, add the sometimes rule to your rule list:
i.e.
'banner' => 'sometimes|required|mimes:jpeg,png|dimensions:width=1920,height=1080',
Replied to (1/1) FatalErrorException Maximum Execution Time Of 60 Seconds Exceeded
Looks like if the file is large it's taking too long to load it.
In this case I'd suggest offloading this type of large operation to the queue system and queue it.
Alternatively you could batch loop and batch insert the data (instead of one by one), which may speed up the script, but again it all depends on the file size and the contents it has to go through; your best bet is to queue this up.
Replied to Logging Raw Data
This is the function of the logger. I don't think there's an easy way to disable timestamps other than to create your own Logger/driver.
Replied to How To Sort Posts By Latest Commented
Try something like this..
$latestComments = Post::with('comments')->orderBy('comments.created_at', 'desc')->get();
Replied to Achieve Available Venue On That Date.
Hmm you are checking for Venues with bookings between your start and end date.
What about using whereNotBetween?
$availableVenue = Venue::with(['bookings' => function ($query) use ($startDate, $endDate) {
    $query->whereNotBetween('startDate', [$startDate, $endDate])
        ->whereNotBetween('endDate', [$startDate, $endDate]);
}])->get();

dd($availableVenue);
There's likely a more ideal way to do this but it should work...
Replied to How To Validate URL Query Parameters?
Look into using Form Requests to validate your inbound form requests before you need to do any logic.
Replied to Swift_IoException Unable To Open File For Reading [1]
Sounds like your path to the file is wrong. Does it say which line? Perhaps it's the first one, are you referencing an existing file somewhere in SalesregisterExport?
Have you confirmed permissions are correct (so it can create a file)?
Replied to Scheduling Sending Multiple Queued Mails
Can you confirm your server has mail enabled/installed? Try adding this in a php file and running it to test mail.
<?php

$sender = '[email protected]'; // Update this
$recipient = '[email protected]'; // Update this

$subject = "php mail test";
$message = "php test message";
$headers = 'From:' . $sender;

if (mail($recipient, $subject, $message, $headers)) {
    echo "Message accepted";
} else {
    echo "Error: Message not accepted";
}
?>
If that works and you get an email let us know, otherwise it may be you have to enable/fix mail on your local server first before you'll receive anything.
Additionally, if mail is enabled and working properly elsewhere, do you have any errors in your log? Have you tried changing your .env file MAIL_DRIVER to log? If it's logging them, it's likely a mail server configuration issue rather than a Laravel one.
Replied to How Do I Implement An Interface Onto A Laravel Project Properly?
From what I understand, it's not possible to pass the route variable to the Controller's constructor. However there are other methods.. such as getting Route::current()->parameter('accountType') and then setting it in the constructor, but you wouldn't necessarily be injecting the dependency.
For example:
public function __construct(AccountTypeInterface $accountType)
{
    // Use a separate variable so the route parameter doesn't shadow the injected interface
    $slug = \Route::current()->parameter('accountType');

    $this->accountType = $accountType->where('slug', $slug)->first();
}
You may be able to use the request() helper also instead of route()/Route::. This is untested but may be a possible solution.. no, it's not ideal, but it may work.
FWIW: There was a proposal in the Laravel framework repo for this same feature and it was rejected :(
Replied to Validation On Required_if
Have you tried using the required_unless rule?
The field under validation must be present and not empty unless the anotherfield field is equal to any value.
'fee' => 'required',
'amount' => 'bail|required_unless:fee,=,2|integer|gte:1',
bail = Stop running validation rules after the first validation failure.
So it'll try and validate required_unless, but if that fails then it'll bail.
Replied to Best Way To Tackle This App? Ideas Welcome
Welcome! Ok so there are couple ways you can do this but since it's super simple, you can literally use a flag like you mentioned on the users table.
A user registers and by default is a Client.
You don't necessarily have to have a flag to specify client/admin - just admin. This way, by default, all users are clients. You'd have to manually (at least for the first user) update their record to have the
is_admin flag set to true/1.
So you'll have 1 users table, with an additional flag called is_admin set to 0 by default. This way you can use the auth/register system that comes with Laravel.
Now for checking if they are an admin or not.. you can create a custom Blade Directive for use in Blade Views (see the Laravel docs about creating directives) and you would add the helper on the User.php model (which you'd also use to check in the blade directive). Let me give you the code..
User.php (the model, place in the class)
/**
 * A helper to determine if the current user is an admin
 */
public function isAdmin()
{
    return $this->is_admin;
}
AppServiceProvider.php (put this code inside the boot() method)
/**
 * If statement check if user is an admin
 *
 * Usage: @admin yep they are an admin @endadmin
 */
Blade::if('admin', function () {
    return auth()->check() && auth()->user()->isAdmin();
});
Now you can use the directive in your views like this..
@admin This is text for the admin @endadmin
And you can use the helper method in the code/middleware on a user like so:
if ($user->isAdmin()) {
    // Do admin stuff
}
If you wanted, you could create a helper and directive to return bool if they are a client.. like so..
User.php (the model, place in the class)
/**
 * A helper to determine if the current user is a client
 */
public function isClient()
{
    return ! $this->is_admin;
}
AppServiceProvider.php (put this code inside the boot() method)
/**
 * If statement check if user is a client
 *
 * Usage: @client yep they are a client @endclient
 */
Blade::if('client', function () {
    return auth()->check() && auth()->user()->isClient();
});
Hope this helps!
An example middleware for checking if admin is this..
app/Http/Middleware/CheckAdmin.php
<?php

namespace App\Http\Middleware;

use Closure;

class CheckAdmin
{
    /**
     * Handle an incoming request.
     *
     * @param  \Illuminate\Http\Request  $request
     * @param  \Closure  $next
     * @return mixed
     */
    public function handle($request, Closure $next)
    {
        // Is the current user an admin
        if ($request->user()->isAdmin()) {
            return $next($request);
        }

        return response('Unauthorized.', 401);
    }
}
Don't forget to register the middleware in your app\Http\Kernel.php file and use it wherever needed.
Replied to How Can I Get Data From Database?
Hmm, not entirely sure what you're having issues with, as you don't specifically explain. You should be able to access the variable
$totalMarks in your blade view and use it?
Replied to How Can I Get Chunked Value From Nested Eloquent Collection
I'm not sure how products relate to brands (assuming it's by brand_id on products), so something along the lines of..
$brands->groupBy('products.brand_id')
Replied to What Is Your Laravel Design Pattern?
In existing projects you'll have to plan a way to move things part by part or module by module. It'll be tedious in an existing project, especially when it's a larger project.
There's a good Laracasts series if you're looking to learn more about SOLID principles in PHP.
I'd suggest if you are trying to use SOLID to stick to it as much as possible but no code is perfect and you'll likely not be able to stick to it 100% for every piece of code so don't sweat the small things ;-)
https://laracasts.com/@BRAUNSON
[Python] Running a decision tree classifier on breast cancer
Decision Tree (DT) is one of the supervised learning methods used for classification and regression. It is considered a white-box model as the trained model can easily be explained by boolean logic. The goal of the DT algorithm is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features. Various decision tree algorithms are ID3 (Iterative Dichotomiser 3), C4.5, C5.0, and CART (Classification and Regression Trees). In the post, I will use the DT from the scikit-learn library that uses an optimized version of the CART algorithm.
When the DT algorithm learns a model from the given data, it may not use all of the features present in the data. Some features can be very informative, while others can be useless for the model. The scikit-learn library provides an attribute to find those important features used by the model.
This post aims to show how to find the list of important features. The DecisionTreeClassifier() module of scikit-learn has an attribute "feature_importances_" that returns the feature importances as a list. If there are k features in the data, the feature importances list will contain k numerical values. You can map each feature to its importance value.
Here is the complete code to get feature importance and evaluate the DT model on breast cancer data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score, accuracy_score, f1_score
from sklearn import tree
import numpy as np


def main():
    """ start of the code """
    # initialize some variables
    tree_depth = 6
    rseed = 123

    # fetch breast cancer data and labels
    idata = load_breast_cancer()
    X, y, headers = idata['data'], idata['target'], idata['feature_names']

    # train the model and find important features
    clf = tree.DecisionTreeClassifier(max_depth=tree_depth, random_state=rseed)
    clf.fit(X, y)
    feature_importance_1 = dict(zip(headers, clf.feature_importances_))
    feature_importance_2 = {k: v for k, v in feature_importance_1.items() if v != 0}
    print(feature_importance_2)

    # Evaluate the model
    clf = tree.DecisionTreeClassifier(max_depth=tree_depth, random_state=rseed)
    preds = cross_val_predict(clf, X, y, method="predict_proba")
    print("AUC-ROC: ", roc_auc_score(y, preds[:, 1]))
    print("F1 Score: ", f1_score(y, np.round(preds[:, 1])))
    print("Accuracy: ", accuracy_score(y, np.round(preds[:, 1])))


if __name__ == "__main__":
    main()
The above code prints the following feature importance. I have deleted all features from the dictionary that had importance = 0.
{'mean texture': 0.03143202180001385, 'mean smoothness': 0.007067371363261426, 'mean concavity': 0.00883421420407678, 'mean concave points': 0.005679137702620788, 'perimeter error': 0.007368625861991585, 'area error': 0.0020599271469194155, 'smoothness error': 0.001011061323324746, 'compactness error': 0.037943720930179815, 'worst radius': 0.7062764596695585, 'worst texture': 0.05057276067335216, 'worst area': 0.01116564968199479, 'worst smoothness': 0.007441128537094316, 'worst concavity': 0.008790189880800664, 'worst concave points': 0.11435773122481126}
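To rank the importances rather than eyeball the dict, you can sort the items by value. A small sketch using a few of the values reported above (truncated for readability):

```python
# A few of the non-zero importances printed above (values truncated)
importances = {
    'mean texture': 0.0314,
    'worst radius': 0.7063,
    'worst texture': 0.0506,
    'worst concave points': 0.1144,
}

# Sort features by importance, largest first
ranked = sorted(importances.items(), key=lambda kv: kv[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.4f}")
```

As expected, "worst radius" dominates, accounting for roughly 70% of the total importance.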
The performance metrics for the classifier are as follows:
AUC-ROC: 0.9131190211933831
F1 Score: 0.9361702127659575
Accuracy: 0.9209138840070299
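For context on these metrics: F1 is the harmonic mean of precision and recall, which is why it can differ from plain accuracy. A tiny hand-rolled sketch (toy 0/1 labels, not the breast cancer data) that mirrors what sklearn.metrics.f1_score computes for binary labels:

```python
# Hand-rolled binary F1, mirroring sklearn.metrics.f1_score for 0/1 labels
def f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: 2 TP, 1 FP, 1 FN -> precision = recall = 2/3 -> F1 = 2/3
y_true = [1, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1]
print(f1(y_true, y_pred))
```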
https://www.bitsdiscover.com/running-a-decision-tree-classifier-on-breast-cancer/
Seriously, for several years of WPF&Silverlight development I've never had a worse experience, what's up, MS?
I'm using Windows 8 RTM and VS2012 Pro.
XDesproc takes ages to start. I've tried switching to the x86 build target - nothing changed. It hangs for > 3 seconds on almost every code change, sometimes it crashes on its own, sometimes it brings my VS and Blend down with it. It doesn't update user controls even after a rebuild, but I managed to "fix" this by moving all user controls to the top namespace. Blend doesn't update code changed in VS, though "Edit Externally" shows the updated code. Sometimes after changes in VS, Blend's design "desyncs" with the code, and when I click on elements, random parts of code get selected, and nothing works until a Blend restart. Today I've spent an hour trying to find out why intellisense in VS locally stopped working. The problem was in the line "Frame.Navigate(typeof(MainPage))" - "typeof(MainPage)" was causing all intellisense in the block above it to stop working.
It's impossible to work like this, funny thing, I can't even post here with my main account, cause I get "too many redirects" error, when I'm logged in with it.
Unni Ravindranathan, Program Manager, Microsoft Expression?
https://social.msdn.microsoft.com/Forums/en-US/8c468600-fd5d-4307-afa1-14c1adaf41a4/development-tools-are-horribly-broken?forum=toolsforwinapps
> can anyone see where i made my mistake?
Yeah, you lost a bunch of braces which you so carefully put in some time ago.
The system("pause"); did pause the program, but wouldn't let any more user input - i wanted to make it go from the theory to the game part, but it just closes when you hit enter.
[CODE]
#include <iostream>
#include <string>
using namespace std;
int main(void)
{
string whattheytype;
string whattheytype2;
string whattheytype3;
string commands = "commands";
string option1 = "start";
string theory = "theory";/*variables*/
string game ="game";
string leave = "exit";
int pause;
int more;
std::cout<<"this game was ripped off from the choose your own adventure\n";
std::cout<<" book called Undergound Kingdom by Edward Packard\n";
std::cout<<"ok this is a choose your own adventure game called\n";
std::cout<<"Underground kingdom. you will read the page then you will be\n";
std::cout<<"asked to choose which way you want to go and such\n";
std::cout<<"sounds like fun huh? thought so. ok type commands to\n";
std::cout<<"get a list of commands. and type exit to leave";/* all this is is the introductory for the game*/
std::cout<<"\ntype start to start the game!\n"<<endl;
std::cin>>whattheytype;
if (whattheytype == commands)
std::cout<<"just follow the instructions and restart the program\n"<<endl;/*insructions*/
else
if (whattheytype == "exit")
{
std::cout<<"goodbye\n";
system("pause");
}
else
if (whattheytype == "start")/*beginning of actual game*/
{
std::cout<<"WARNING!!!!!!!!!!!!!\n";
std::cout<<"\n"<<endl;
std::cout<<"The Underground Kingdom is not easy to reach. Many Players\n";
std::cout<<"never get ther. Others never return.\n";
std::cout<<" Before starting out on your journey, you may want to read\n";
std::cout<<"Professor Bruckner's theory, which is set fourth on the next block\n";
std::cout<<"of text.\n";
std::cout<<" Professor Bruckner is a rather boring writer, and i wouldn't\n";
std::cout<<"suggest that you bother read his theory exept that, if you ever\n";
std::cout<<"get to the Underground Kingdom, it might save your life. Good luck!\n"<<endl;
std::cout<<"type theory to read the theory type game to start the game\n";
}
std::cin>>whattheytype2;
if (whattheytype2 == theory)
{
std::cout<<" PROFESSOR\n";
std::cout<<" BRUCKNER'S\n";
std::cout<<" THEORY\n";
std::cout<<"---------------------------------------------------------------\n";
std::cout<<"The discovery of the Bottomless Crevasse in Greenland by Dr. Nera \n";
std::cout<<"Vivaldi supports my therory that the earth is not solid, as has\n";
std::cout<<"been thought, but that it is hollow. The Bottomless Crevasse is\n";
std::cout<<"probably the sole route from the earth's surface to a vast Underground\n";
std::cout<<"Kingdom. The only other possible link would be an underground river,\n";
std::cout<<"flowing in alternating directions in response to the tides, but this \n";
std::cout<<"seems unlikely.\n";
std::cout<<"How you may ask, was the earth hollowed out? My studies show that more\n";
std::cout<<"than a billion years ago a tiny blck hole collided with out planet and\n";
std::cout<<"lodged in its center, pulling the whole molten core into an incredibly\n";
std::cout<<"massive sphere only a few hundred meter across.\n";
std::cout<<"If you were to stand on the inner surface of the earth, like a fly on the\n";
std::cout<<"shell of an enormous pumpkin, you would see the black hole directly overhead,\n";
std::cout<<"press any key and hit enter for more of the theory\n";
std::cin>>more;
std::cout<<"like a black sun.\n";
std::cout<<"The Gravity of the earth's thick shell would hold you to the inner shell\n";
std::cout<<"of the earth, though you would weigh much less than you would on the\n";
std::cout<<"outer surface because the mass of the Black Sun would tend to pull \n";
std::cout<<"you toward it. If there were a very tall mountain in the Underground\n";
std::cout<<"Kingdom and you were to climb to the top of it, you might be pulled\n";
std::cout<<"up into the Black Sun because gravity gets stronger as you approach\n";
std::cout<<"a massive object.\n";
std::cout<<"In all other respects the Black Sun would not be dangerous to any creatures\n";
std::cout<<"in the Unerground Kingdom. On the contrary, the Black Sun would be\n";
std::cout<<"necessary to life in th underworld, but in the opposit way that the sun is\n";
std::cout<<"necessary to life on the earth's surface. OUr sun give us heat and keeps us\n";
std::cout<<"from freezing. The Black Sun absorbs heat. If there is an underground \n";
std::cout<<"kingdom, it is the Black Sun that keep its inhabitants from being baked to\n";
std::cout<<"death by the heat within the earth.\n";
std::cout<<"type game to start game!\n";
std::cin>>whattheytype3;
ststem("pause");
}
if (whattheytype3 == "game")
{
std::cout<<"hmm\n";
system("pause");
}
else
if (whattheytype == leave)
{
std::cout<<"press any key to exit\n";
std::cin>>leave;
}
else
if (whattheytype != "start")
{
std::cout<<"i dont understand\n";
}
return 0;
}
[/CODE]
https://cboard.cprogramming.com/cplusplus-programming/47524-compile-error-2-print.html
Groovy script to read test case name in parasoft
Hi,
I am using a groovy script for reading a parameter from the data source and creating a folder at the same location. I am able to do this much, but now I want my folder name to be the same as my test case name. Therefore can you please tell me what the groovy script should be to read the test case name from my test suite so that the folder could be created with the same name.
Answers
Are you doing this from an Extension Tool?
If you are doing it from an Extension Tool, you can use a script like the following:
hi,
Thanks for the script.
I used the same in my groovy script, but it is still not reading the test name from Parasoft.
Please find my groovy script and please tell me what should I add to read the test name and create folder with the same name at the location provided in my data source.
import static java.util.Calendar.*;
import java.text.SimpleDateFormat;
import javax.xml.datatype.XMLGregorianCalendar;
import java.text.DateFormat;
import java.lang.String;
import java.util.regex.Pattern;
import java.sql.Timestamp;
import com.parasoft.api.*;
String path(input, context){
def sdf = new java.text.SimpleDateFormat("dd-MM-yyyy hh-mm-ss aa");
def SysDate = sdf.format( new Date()) ;
value = context.getValue("HeaderValues", "FolderPath")
def Name1=Application.showMessage("test name: " + context.getContextName());
String runFolder = value+Name1+SysDate;
def runFolderDir= new File(runFolder);
runFolderDir.mkdir();
return runFolder;
}
Your script is not quite correct - you are setting the value of "Name1" to the output of Application.showMessage() - which is a void function that doesn't return anything. Change that line to the following two lines:
context.getContextName() will give you the name of the test when running from an Extension Tool, but may not if you are running it from a script in a different tool. So if you're not using an Extension Tool, please let me know where you're inputting the script and I can help you modify it to get the test name.
is there a way to read the name of the scenario thats executing?
SOAtest scenarios are structured in a hierarchy of test suites (scenarios). You can use the Context objects to get information about them. Here is some code that will print out information that helps you to see how the contexts are structured, as well as methods that help you get the name for the parent test suite and the name for the top-level test suite. Put this code into an Extension Tool:
https://forums.parasoft.com/discussion/comment/8914/
Created attachment 17844 [details]
Sample application
I have some Labels in my ListViewItem.
There is a lot of product info in every ListView item. It looks like:
Product Name Price Count
Now I want to add 2 button for every row, if user click right side, the count will increase, if user click left side the count will decrease. For example:
Football $10 0
I click right side of the row, I want to this result:
Football $10 1
But currently result is:
Football $10
The count just disappeared, until I click the window border resize the window, and then the count label will render again.
I have too many problems of that kind; it's too serious for me. I really don't know - is there anyone using Xamarin.Forms on UWP here? Because this mistake is too basic and causes too many problems.
I also upload 2 attachment. one is an example application, another one is a screenshot.
Created attachment 17845 [details]
Screenshot
I have a similar problem. If I have a label inside an AbsoluteLayout, then after updating the label text, the label disappears...
A disabled Entry causes the same problem.
A disabled Entry worked. But the label still does not. So I think someone should test that and confirm.
I also have a similar problem. If I have a label inside a StackLayout, then the label text is not updated and the label disappears until you resize the window.
I replaced the ListView with TemplateView, and all is fine; looks like the problem comes from ListView.
In Windows 10, right click desktop and select "Next desktop background" in the context menu, all labels in first listview will disappear.
Here's the demo code.
/////////////////////
using System.Collections.Generic;
using Xamarin.Forms;

namespace Demo
{
    public class DemoPage : ContentPage
    {
        public DemoPage()
        {
            Content = new StackLayout
            {
                Orientation = StackOrientation.Vertical,
                Children =
                {
                    ListViewThatTextWillDisappearAfterChangeDesktopBackgroud,
                    ListViewThatWorksFine
                }
            };
        }

        private ListView ListViewThatTextWillDisappearAfterChangeDesktopBackgroud
        {
            get
            {
                var viewCellTemplate = new DataTemplate(() =>
                {
                    var label = new Label();
                    label.SetBinding(Label.TextProperty, ".");
                    return new ViewCell { View = label };
                });

                var listView = new ListView
                {
                    ItemTemplate = viewCellTemplate
                };
                listView.SetBinding(ListView.ItemsSourceProperty, nameof(Items));
                return listView;
            }
        }

        private ListView ListViewThatWorksFine
        {
            get
            {
                var textCellTemplate = new DataTemplate(() =>
                {
                    var cell = new TextCell();
                    cell.SetBinding(TextCell.TextProperty, ".");
                    return cell;
                });

                var listView = new ListView
                {
                    ItemTemplate = textCellTemplate
                };
                listView.SetBinding(ListView.ItemsSourceProperty, nameof(Items));
                return listView;
            }
        }

        private List<string> Items => new List<string>
        {
            "1st item",
            "2nd item",
            "3rd item"
        };
    }
}
This is a serious bug. Most ListViews will have labels in them, and they are the most likely things to be updated. The only work around I have found is to use an ObservableCollection as the ItemSource and to replace any updated items directly in the collection, causing the list to be refreshed, however this causes flicker as the list is updated.
I have also noticed a similar effect when altering the visibility of a View in a ViewCell Template in a ListView. For example, if the ViewCell contains a BoxView and its IsVisible property is bound to a bool in the ItemSource, then changing the bool value results in the BoxView always disappearing irrespective of the bool value. However, if Opacity is bound via a ValueConverter, then the Opacity is altered correctly, so it's not a case of any property being altered exhibiting the bug, just some. The two effects could be related. I will take a look through the XF source code when I have time.
The issue identified by @Rhussell is another problem (it is probably related) which has its own bug report. This is an even more serious issue as it makes XF UWP apps unusable.
Adding a slight bit of extra info that might help troubleshoot -
When using the Live View Inspector of Visual Studio 2015, selecting any Label control (directly or by selecting the element's node in the Inspector window) will also cause the Label to disappear.
Has this been addressed or looked into? This is a SERIOUS issue. If there has been investigation, what were the findings?
Just want to say that I, too, am affected by this, and that this is indeed a fairly serious issue. I'm using XF 2.3.4.214-pre5.
This is seriously ridiculous. This is a bug that completely prevents Xamarin Forms from being even remotely usable on UWP, yet hasn't even had a mention in the half a year since it's been reported. If Xamarin wants to claim that they actually support a platform, it would be nice to have the most basic functionality be supported.
I tried Xamarin Forms in a different app after not using it for a few months due to bugs, hoping maybe it improved. Clearly that was a mistake.
Same problem.
And the bug report I filed that was marked as FIXED disappeared... Interesting, huh?
*** This bug has been marked as a duplicate of bug 40139 ***
This should be fixed in 2.3.5-pre1. If you want to see the fix before the next release, you can try the nightly builds:
Seems to work.
Awesome, the new prerelease does fix it.
FYI, I am getting the same issue with 2.3.4.247.
I have a ListView in MasterDetailPage which gets populated with ItemsSource via DataBinding and as soon as I hit ALT key (left or right), all labels are gone.
Created attachment 22838 [details]
Labels gone after pressing ALT key (XF 2.3.4.247, UWP Win10 15063.296)
The XAML view having the issue along with screenshots
Same here.
All labels inside ListViews disappear when pressing ALT.
UWP Win10, XF 2.3.4.247.
Breaking issue!
(In reply to Manuel from comment #18)
> FYI, I am getting the same issue with 2.3.4.247.
>
> I have a ListView in MasterDetailPage which gets populated with ItemsSource
> via DataBinding and as soon as I hit ALT key (left or right), all labels are
> gone.
Manuel,
This appears to be a different bug than the original - can you please open a new Bugzilla case and add your attachment to it?
Thanks.
@E.Z. Hart,
This bug has been posted separately here:
https://bugzilla.xamarin.com/show_bug.cgi?id=44973
5.1.1 (2020-10-02)
Enhancements
- None
Fixed
- Querying on an indexed property may give a “Key not found” exception. (Core upgrade)
- Fix queries for null on non-nullable indexed integer columns returning results for zero entries. (Core upgrade)
Compatibility
- Realm Object Server: 3.23.1 or later.
- Realm Studio: 5.0.0 or later.
Internal
- Using Sync 5.0.28 and Core 6.1.3.
5.1.0 (2020-09-30)
Enhancements
- Greatly improve performance of NOT IN queries on indexed string or int columns. (Core upgrade)
Fixed
- Fixed an issue that would cause using Realm on the main thread in WPF applications to throw an exception with a message "Realm accessed from the incorrect thread". (Issue #2026)
- Fixed an issue that could cause an exception with the message "Opening Realm files of format version 0 is not supported by this version of Realm" when opening an encrypted Realm. (Core upgrade)
- Slightly improve performance of most operations which read data from the Realm file. (Core upgrade)
- Rerunning an equals query on an indexed string column which previously had more than one match and now has one match would sometimes throw a "key not found" exception. (Core upgrade)
- When querying a table where links are part of the condition, the application may crash if objects has recently been added to the target table. (Core upgrade)
Compatibility
- Realm Object Server: 3.23.1 or later.
- Realm Studio: 5.0.0 or later.
Internal
- Using Sync 5.0.27 and Core 6.1.2.
- Added prerelease nuget feed via GitHub packages. (PR #2028)
5.0.1 (2020-09-10)
NOTE: This version bumps the Realm file format to version 11. It is not possible to downgrade to version 10 or earlier. Files created with older versions of Realm will be automatically upgraded. Only Realm Studio 5.0.0 or later will be able to open the new file format.
Enhancements
- Added the notion of "frozen objects" - these are objects, queries, lists, or Realms that have been "frozen" at a specific version. This allows you to access the data from any thread, but it will never change. All frozen objects can be accessed and queried as normal, but attempting to mutate them or add change listeners will throw an exception. (Issue #1945)
- Added Realm.Freeze(), RealmObject.Freeze(), RealmObject.FreezeInPlace(), IQueryable<RealmObject>.Freeze(), IList<T>.Freeze(), and IRealmCollection<T>.Freeze(). These methods will produce the frozen version of the instance on which they are called.
- Added Realm.IsFrozen, RealmObject.IsFrozen, and IRealmCollection<T>.IsFrozen, which return whether or not the data is frozen.
- Added RealmConfigurationBase.MaxNumberOfActiveVersions. Setting this will cause Realm to throw an exception if too many versions of the Realm data are live at the same time. Having too many versions can dramatically increase the filesize of the Realm.
- Add support for SynchronizationContext-confined Realms. Rather than being bound to a specific thread, queue-confined Realms are bound to a SynchronizationContext, regardless of whether it dispatches work on the same or a different thread. Opening a Realm when SynchronizationContext.Current is null - most notably in Task.Run(...) - will still confine the Realm to the thread on which it was opened.
- Storing large binary blobs in Realm files no longer forces the file to be at least 8x the size of the largest blob.
- Reduce the size of transaction logs stored inside the Realm file, reducing file size growth from large transactions.
- String primary keys no longer require a separate index, improving insertion and deletion performance without hurting lookup performance.
Fixed
- Fixed "Access to invalidated List object" being thrown when adding objects to a list while at the same time deleting the object containing the list. (Issue #1971)
- Fixed incorrect results being returned when using
.ElementAt()on a query where a string filter with a sort clause was applied. (PR #2002)
Compatibility
- Realm Object Server: 3.23.1 or later.
- Realm Studio: 5.0.0 or later.
Internal
- Using Sync 5.0.22 and Core 6.0.25.
4.3.0 (2020-02-05)
Enhancements
- Exposed an API to configure the `userId` and `isAdmin` of a user when creating credentials via `Credentials.CustomRefreshToken`. Previously, these values would be inferred from the JWT itself, but as there's no way to enforce the server configuration over which fields in the JWT payload represent the `userId` and the `isAdmin` field, it is now up to the consumer to determine the values for these.
- Improved logging and error handling for SSL issues on Apple platforms.
Fixed
- Realm objects can now be correctly serialized with `System.Runtime.Serialization.Formatters` and `System.Xml.Serialization` serializers. (Issue #1913) The private state fields of the class have been decorated with `[NonSerialized]` and `[XmlIgnore]` attributes so that eager opt-out serializers do not attempt to serialize fields such as `Realm` and `ObjectSchema`, which contain handles to unmanaged data.
- Fixed an issue that would result in a compile error when `[Required]` is applied on an `IList<string>` property. (Contributed by braudabaugh)
- Fixed an issue that prevented projects that include the Realm NuGet package from being debugged. (PR #1927)
- The sync client would fail to reconnect after failing to integrate a changeset. The bug would lead to further corruption of the client’s Realm file. (since 3.0.0).
- The string-based query parser (`results.Filter(...)`) used to need the `class_` prefix for class names when querying over backlink properties. This has been fixed so that only the public `ObjectSchema` name is necessary. For example, `@links.class_Person.Siblings` becomes `@links.Person.Siblings`.
- Fixed an issue where `ClientResyncMode.DiscardLocalRealm` wouldn't reset the schema.
Compatibility
- Realm Object Server: 3.23.1 or later.
Internal
- Upgraded Sync from 4.7.5 to 4.9.5 and Core from 5.23.3 to 5.23.8.
4.2.0 (2019-10-07)
Enhancements
- Added `int IndexOf(object)` and `bool Contains(object)` to the `IRealmCollection` interface. (PR #1893)
- Exposed an API - `SyncConfigurationBase.EnableSessionMultiplexing()` - that allows toggling session multiplexing on the sync client. (PR #1896)
- Added support for faster initial downloads when using `Realm.GetInstanceAsync`. (Issue #1847)
- Added an optional `cancellationToken` argument to `Realm.GetInstanceAsync`, enabling clean cancellation of the in-progress download. (PR #1859)
- Added support for Client Resync, which will automatically recover the local Realm in case the server is rolled back. This largely replaces the Client Reset mechanism for fully synchronized Realms. Can be configured using `FullSyncConfiguration.ClientResyncMode`. (PR #1901)
- Made the `createUser` argument in `Credentials.UsernamePassword` optional. If not specified, the user will be created or logged in if they already exist. (PR #1901)
- Uses Fody 6.0.0, which resolves some of the compatibility issues with newer versions of other Fody-based projects. (Issue #1899)
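The cancelable async open can be sketched like this; the relative realm path and the 30-second budget are illustrative assumptions:

```csharp
// A sketch of canceling an in-progress Realm.GetInstanceAsync download.
var config = new FullSyncConfiguration(new Uri("/myrealm", UriKind.Relative));
using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30)))
{
    try
    {
        var realm = await Realm.GetInstanceAsync(config, cts.Token);
        // Use the fully downloaded Realm here.
    }
    catch (OperationCanceledException)
    {
        // The download did not complete within the time budget.
    }
}
```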
Fixed
Compatibility
- Realm Object Server: 3.23.1 or later.
Internal
- Upgraded Sync from 4.7.0 to 4.7.1.
- Implemented direct access to sync workers on Cloud, bypassing the Sync Proxy: the binding will override the sync session's url prefix if the token refresh response for a realm contains a sync worker path field.
4.1.0 (2019-08-06)
Breaking Changes
- Removed the `isAdmin` parameter from `Credentials.Nickname`. It doesn't have any effect on new ROS versions anyway, as logging in an admin nickname user is not supported - this change just makes it explicit. (Issue #1879)
- Marked the `Credentials.Nickname` method as deprecated - support for the Nickname auth provider is deprecated in ROS and will be removed in a future version. (Issue #1879)
- Removed the `deleteRealm` parameter from `PermissionDeniedException.DeleteRealmInfo` as passing `false` has no effect. Calling the method is now equivalent to calling it with `deleteRealm: true`. (PR #1890)
Enhancements
- Added support for unicode characters in realm path and filenames for Windows. (Core upgrade)
- Added new credentials type: `Credentials.CustomRefreshToken`, which can be used to create a user with a custom refresh token. This will then be validated by ROS against the configured `refreshTokenValidators` to obtain access tokens when opening a Realm. If creating a user like that, it's the developer's responsibility to ensure that the token is valid and refreshed as necessary so that access tokens can be obtained. To that end, you can now set the refresh token of a user object by calling `User.RefreshToken = "my-new-token"`. This should only be used in combination with users obtained by calling `Credentials.CustomRefreshToken`. (PR #1889)
Fixed
- Constructing an IncludeDescriptor made unnecessary table comparisons. This resulted in poor performance when creating a query-based subscription (`Subscription.Subscribe`) with `includedBacklinks`. (Core upgrade)
- Queries involving an indexed int column which were constrained by a LinkList with an order different from the table's order would give incorrect results. (Core upgrade)
- Queries involving an indexed int column had a memory leak if run multiple times. (Core upgrade)
Compatibility
- Realm Object Server: 3.23.1 or later.
Internal
- Upgraded Sync from 4.5.1 to 4.7.0 and Core 5.20.0 to 5.23.1.
4.0.1 (2019-06-27)
Fixed
- Fixed an issue that would prevent iOS apps from being published to the App Store with the following error: `This bundle Payload/.../Frameworks/realm-wrappers.framework is invalid. The Info.plist file is missing the required key: CFBundleVersion.` (Issue #1870, since 4.0.0)
- Fixed an issue that would cause iOS apps to crash on device upon launching. (Issue #1871, since 4.0.0)
4.0.0 (2019-06-13)
Breaking Changes
- The following deprecated methods and classes have been removed:
  - The `SyncConfiguration` class has been split into `FullSyncConfiguration` and `QueryBasedSyncConfiguration`. Use one of these classes to connect to the Realm Object Server.
  - The `TestingExtensions.SimulateProgress` method has been removed as it hasn't worked for some time.
  - The `Property.IsNullable` property has been removed. To check if a property is nullable, check `Property.Type` for the `PropertyType.Nullable` flag.
  - The `Credentials.Provider` class has been removed. Previously, it contained a few constants that were intended mostly for internal use.
  - The `User.ConfigurePersistance` method has been superseded by `SyncConfigurationBase.Initialize`.
  - `User.LogOut` has been removed in favor of `User.LogOutAsync`.
  - `User.GetManagementRealm` has been removed in favor of the `User.ApplyPermissionsAsync` set of wrapper APIs.
  - `User.GetPermissionRealm` has been removed in favor of the `User.GetGrantedPermissions` wrapper API.
- Deprecated the `IQueryable<T>.Subscribe(string name)` extension method in favor of `IQueryable<T>.Subscribe(SubscriptionOptions options)`.
- Reworked the internal implementation of the permission API. For the most part, the method signatures haven't changed, or where they have changed, the API has remained close to the original (e.g. `IQueryable<T>` has changed to `IEnumerable<T>`). (Issue #1863)
  - Changed the return type of `User.GetGrantedPermissionsAsync` from `IQueryable<PathPermission>` to `IEnumerable<PathPermission>`. This means that the collection is no longer observable like regular Realm-backed collections. If you need to be notified of changes to this collection, you need to implement a polling-based mechanism yourself.
  - `PathPermission.MayRead`/`MayWrite`/`MayManage` have been deprecated in favor of a more consistent `AccessLevel` API.
  - In `User.ApplyPermissionsAsync`, renamed the `realmUrl` parameter to `realmPath`.
  - In `User.OfferPermissionsAsync`, renamed the `realmUrl` parameter to `realmPath`.
  - Removed the `PermissionOfferResponse` and `PermissionChange` classes.
  - Removed the `IPermissionObject` interface.
  - Removed the `ManagementObjectStatus` enum.
  - Removed the `User.GetPermissionChanges` and `User.GetPermissionOfferResponses` methods.
  - The `millisecondTimeout` argument in `User.GetGrantedPermissionsAsync` has been removed.
  - The `PermissionException` class has been replaced by `HttpException`.
  - The `AuthenticationException` class has been merged into the `HttpException` class.
Enhancements
- Added `Session.Start()` and `Session.Stop()` methods that allow you to pause/resume synchronization with the Realm Object Server. (Issue #138)
- Added an `IQueryable<T>.Subscribe(SubscriptionOptions, params Expression<Func<T, IQueryable>>[] includedBacklinks)` extension method that allows you to configure additional options for the subscription, such as the name, time to live, and whether it should update an existing subscription. The `includedBacklinks` argument allows you to specify which backlink properties should be included in the transitive closure when doing query-based sync. For example:

```csharp
class Dog : RealmObject
{
    public Person Owner { get; set; }
}

class Person : RealmObject
{
    [Backlink(nameof(Dog.Owner))]
    public IQueryable<Dog> Dogs { get; }
}

var options = new SubscriptionOptions
{
    Name = "adults",
    TimeToLive = TimeSpan.FromDays(1),
    ShouldUpdate = true
};

var people = realm.All<Person>()
                  .Where(p => p.Age > 18)
                  .Subscribe(options, p => p.Dogs);

await people.WaitForSynchronizationAsync();

// Dogs that have an owner set to a person that is over 18
// will now be included in the objects synchronized locally.
var firstPersonDogs = people.Results.First().Dogs;
```

(Issue #1838 & Issue #1834)
- Added a `Realm.GetAllSubscriptions()` extension method that allows you to obtain a collection of all registered query-based sync subscriptions. (Issue #1838)
- Added an `AccessLevel` property to `PathPermission` to replace the now deprecated `MayRead`/`MayWrite`/`MayManage`. (Issue #1863)
- Added a `RealmOwnerId` property to `PathPermission` that indicates who the owner of the Realm is. (Issue #1863)
- Added support for building with `dotnet build` (previously only the `msbuild` command line was supported). (PR #1849)
- Improved query performance for unindexed string columns when the query has a long chain of OR conditions. (Core upgrade)
- Improved performance of encryption and decryption significantly by utilizing hardware optimized encryption functions. (Core upgrade)
- Compacting a realm into an encrypted file could take a really long time. The process is now optimized by adjusting the write buffer size relative to the used space in the realm. (Core upgrade)
- The string-based query parser (`results.Filter("...")`) now supports readable timestamps with a 'T' separator in addition to the originally supported "@" separator. For example: `startDate > 1981-11-01T23:59:59:1`. (Core upgrade)
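The two timestamp separators accepted by the string-based parser can be shown side by side; the `Event` class and its `StartDate` property here are hypothetical:

```csharp
// Both separators produce the same query; only the date
// format in the filter string differs.
var before = realm.All<Event>().Filter("StartDate > 1981-11-01@23:59:59:0");
var after  = realm.All<Event>().Filter("StartDate > 1981-11-01T23:59:59:0");
```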
Fixed
- Fixes an issue where using the `StringExtensions.Contains(string, string, StringComparison)` extension method inside a LINQ query would result in an exception being thrown on .NET Core 2.1+ or Xamarin.iOS/Android projects. (Issue #1848)
- Creating an object after creating an object with an int primary key of `null` would hit an assertion failure. (Core upgrade)
Compatibility
- Realm Object Server: 3.23.1 or later.
Internal
- Upgraded Sync from 3.14.11 to 4.5.1 and Core 5.12.7 to 5.20.0.
3.4.0 (2019-01-09)
NOTE!!! You will need to upgrade your Realm Object Server to at least version 3.11.0 or use Realm Cloud. If you try to connect to a ROS v3.10.x or previous, you will see an error like `Wrong protocol version in Sync HTTP request, client protocol version = 25, server protocol version = 24`.
Enhancements
- Download progress is now reported to the server, even when there are no local changes. This allows the server to do history compaction much more aggressively, especially when there are many clients that rarely or never make local changes. (#1772)
- Reduce memory usage when integrating synchronized changes sent by ROS.
- Added the ability to supply a custom log function for handling logs emitted by Sync by specifying `SyncConfigurationBase.CustomLogger`. It must be set before opening a synchronized Realm. (#1824)
- Add a User-Agent header to HTTP requests made to the Realm Object Server. By default, this contains information about the Realm library version and .NET platform. Additional details may be provided (such as the application name/version) by setting `SyncConfigurationBase.UserAgent` prior to opening a synchronized Realm. If developing a Xamarin app, you can use the Xamarin.Essentials plugin to automate that: `SyncConfiguration.UserAgent = $"{AppInfo.Name} ({AppInfo.PackageName} {AppInfo.VersionString})"`.
Fixed
- Fixed a bug that could lead to crashes with a message such as `Assertion failed: ndx < size() with (ndx, size()) = [742, 742]`.
- Fixed a bug that resulted in an incorrect `LogLevel` being sent to Sync when setting `SyncConfigurationBase.LogLevel`. (#1824, since 2.2.0)
- Fixed a bug that prevented `Realm.GetInstanceAsync` from working when used with `QueryBasedSyncConfiguration`. (#1827, since 3.1.0)
Breaking Changes
- The deprecated method `realm.SubscribeToObjectsAsync` has been removed in this version. (#1772)
- `User.ConfigurePersistence` has been deprecated in favor of `SyncConfigurationBase.Initialize`.
Compatibility
- Realm Object Server: 3.11.0 or later. The sync protocol version has been bumped to version 25. The server is backwards-compatible with clients using protocol version 24 or below, but clients at version 25 are not backwards-compatible with a server at protocol version 24. The server must be upgraded before any clients are upgraded.
Internal
- Upgraded Sync from 3.9.2 to 3.14.11 and Core from 5.8.0 to 5.12.7.
3.3.0 (2018-11-08)
Enhancements
- Exposed an `OnProgress` property on `SyncConfigurationBase`. It allows you to specify a progress callback that will be invoked when using `Realm.GetInstanceAsync` to report the download progress. (#1807)
Fixed
- Trying to call `Subscription.WaitForSynchronizationAsync` on a background thread (without a `SynchronizationContext`) would previously hang indefinitely. Now a meaningful exception will be thrown to indicate that this is not supported and that this method should be called on a thread with a synchronization context. (dotnet-private#130, since v3.0.0)
Compatibility
- Realm Object Server: 3.0.0 or later.
- APIs are backwards compatible with all previous releases in the 3.x.y series.
- File format: Generates Realms with format v9 (Reads and upgrades all previous formats)
3.2.1 (2018-09-27)
Bug fixes
- Fixed a bug that would typically result in exceptions with a message like `An unknown error has occurred. State: *some-number-larger-than-127*` when subscribing to queries. (dotnet-private#128, since 3.0.0)
3.2.0 (2018-08-04)
Enhancements
- `RealmObject` inheritors will now raise `PropertyChanged` after they have been removed from Realm. The property name in the event arguments will be `IsValid`.
- Bundle some common certificate authorities on Linux so connecting to ROS instances over SSL should work out of the box for most certificates. Notably, it will now work out of the box for Realm Cloud instances.
Bug fixes
- When constructing queries that compare an invalid/unmanaged RealmObject (e.g. `realm.All<Foo>().Where(f => f.Bar == someBar)`), a meaningful exception will now be thrown rather than an obscure `ArgumentNullException`.
- Added `ShouldCompactOnLaunch` to the PCL version of the library. (dotnet-private#125)
3.1.0 (2018-07-04)
Enhancements
- Exposed a `ChangeSet.NewModifiedIndices` collection that contains information about the indices of the objects that changed in the new version of the collection (i.e. after accounting for the insertions and deletions).
- Update Fody to 3.0.
Bug fixes
- `WriteAsync` will no longer perform a synchronous `Refresh` on the main thread. (#1729)
- Trying to add a managed Realm Object to a different instance of the same on-disk Realm will no longer throw an exception.
- Removed the `IList` compliance for Realm collections. This fixes an issue which would cause the app to hang on Android when deselecting an item from a ListView bound to a Realm collection.
Breaking Changes
- `SyncConfiguration` is now deprecated and will be removed in a future version. Two new configuration classes have been exposed - `QueryBasedSyncConfiguration` and `FullSyncConfiguration`. If you were using a `SyncConfiguration` with `IsPartial = true`, change your code to use `QueryBasedSyncConfiguration`. Similarly, if `IsPartial` was not set or was set to `false`, use `FullSyncConfiguration`.
- Removed the `IList` compliance for Realm collections. This will prevent automatic updates of ListViews databound to Realm collections in UWP projects.
3.0.0 (2018-04-16)
Enhancements
- Allow `[MapTo]` to be applied on classes to change the name of the table corresponding to that class. (#1712)
- Added an improved API for adding subscriptions in partially-synchronized Realms. `IQueryable<T>.Subscribe` can be used to subscribe to any query, and the returned `Subscription<T>` object can be used to observe the state of the subscription and ultimately remove the subscription. See the documentation for more information. (#1679)
- Added a fine-grained permissions system for use with partially-synchronized Realms. This allows permissions to be defined at the level of individual objects or classes. See the documentation for more information. (#1714)
- Exposed a string-based `IQueryable<T>.Filter(predicate)` method to enable more advanced querying scenarios, such as:
  - Following links: `realm.All<Dog>().Filter("Owner.FirstName BEGINSWITH 'J'")`.
  - Queries on collections: `realm.All<Child>().Filter("Parents.FirstName BEGINSWITH 'J'")` - find all children who have a parent whose name begins with J, or `realm.All<Child>().Filter("Parents.@avg.Age > 50")` - find all children whose parents' average age is more than 50.
  - Subqueries: `realm.All<Person>().Filter("SUBQUERY(Dogs, $dog, $dog.Vaccinated == false).@count > 3")` - find all people who have more than 3 unvaccinated dogs.
  - Sorting: `realm.All<Dog>().Filter("TRUEPREDICATE SORT(Owner.FirstName ASC, Age DESC)")` - find all dogs and sort them by their owner's first name in ascending order, then by the dog's age in descending order.
  - Distinct: `realm.All<Dog>().Filter("TRUEPREDICATE DISTINCT(Age) SORT(Name)")` - find all dogs, sort them by their name and pick one dog for each age value.
  - For more examples, check out the JavaScript query language docs - the query syntax is identical - or the NSPredicate Cheatsheet.
- The `SyncConfiguration` constructor now accepts relative Uris. (#1720)
- Added the following methods for resetting the user's password and confirming their email: `RequestPasswordResetAsync`, `CompletePasswordResetAsync`, `RequestEmailConfirmationAsync`, and `ConfirmEmailAsync`. These all apply only to users created via `Credentials.UsernamePassword` who have provided their email as the username. (#1721)
Bug fixes
- Fixed a bug that could cause deadlocks on Android devices when resolving thread safe references. (#1708)
Breaking Changes
- Uses the Sync 3.0 client which is incompatible with ROS 2.x.
- `Permission` has been renamed to `PathPermission` to more closely reflect its purpose. Furthermore, existing methods to modify permissions only work on full Realms. New methods and classes are introduced to configure access to a partially synchronized Realm.
- The type of `RealmConfiguration.DefaultConfiguration` has changed to `RealmConfigurationBase` to allow any subclass to be set as default. (#1720)
- The `SyncConfiguration` constructor arguments are now optional. The `user` value will default to the currently logged in user and the `serverUri` value will default to `realm://MY-SERVER-URL/default`, where `MY-SERVER-URL` is the host the user authenticated against. (#1720)
- The `serverUrl` argument in `User.LoginAsync(credentials, serverUrl)` and `User.GetLoggedInUser(identity, serverUrl)` has been renamed to `serverUri` for consistency. (#1721)
2.2.0 (2018-03-22)
Enhancements
- Added an `IsDynamic` property to `RealmConfigurationBase`, allowing you to open a Realm file and read its schema from disk. (#1637)
- Added a new `InMemoryConfiguration` class that allows you to create an in-memory Realm instance. (#1638)
- Allow setting elements of a list directly - e.g. `foo.Bars[2] = new Bar()` or `foo.Integers[3] = 5`. (#1641)
- Added Json Web Token (JWT) credentials provider. (#1655)
- Added Anonymous and Nickname credentials providers. (#1671)
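A minimal sketch of the new in-memory configuration; the `"scratch"` identifier is an illustrative assumption:

```csharp
// An in-memory Realm is identified by a string and lives only
// while at least one instance remains open; its contents are
// discarded once the last instance is closed.
var config = new InMemoryConfiguration("scratch");
using (var realm = Realm.GetInstance(config))
{
    // Use the Realm as usual - writes, queries, notifications.
}
```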
Bug fixes
- Fixed an issue where the initial collection change notification was not delivered to all subscribers. (#1696)
- Fixed a corner case where `RealmObject.Equals` would return `true` for objects that are no longer managed by Realm. (#1698)
Breaking Changes
- `SyncConfiguration.SetFeatureToken` is deprecated and no longer necessary in order to use Sync on Linux or server-side features. (#1703)
2.1.0 (2017-11-13)
Enhancements
- Added an `[Explicit]` attribute that can be applied to classes or assemblies. If a class is decorated with it, then it will not be included in the default schema for the Realm (i.e. you have to explicitly set `RealmConfiguration.ObjectClasses` to an array that contains that class). Similarly, if it is applied to an assembly, all classes in that assembly will be considered explicit. This is useful when developing a 3rd party library that depends on Realm, to avoid your internal classes leaking into the user's schema. (#1602)
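The opt-in behavior above can be sketched as follows; the `InternalState` class and its `Key` property are hypothetical:

```csharp
// [Explicit] keeps a library's internal classes out of the
// default schema of any app that references the library.
[Explicit]
public class InternalState : RealmObject
{
    public string Key { get; set; }
}

// The class participates only when explicitly opted in:
var config = new RealmConfiguration
{
    ObjectClasses = new[] { typeof(InternalState) }
};
```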
Bug fixes
- Fixed a bug that would prevent writing queries that check if a related object is null, e.g. `realm.All<Dog>().Where(d => d.Owner == null)`. (#1601)
- Addressed an issue that would cause the debugger to report an unobserved exception being thrown when "Just My Code" is disabled. (#1603)
- Calling `Realm.DeleteRealm` on a synchronized Realm will now properly delete the `realm.management` folder. (#1621)
- Fixed a crash when accessing primitive list properties on objects in realms opened with a dynamic schema (e.g. in migrations). (#1629)
2.0.0 (2017-10-17)
Enhancements
- Added support for collections of primitive values. You can now define properties as `IList<T>`, where `T` can be any type supported by Realm, except for another `IList`. As a result, a lot of methods that previously had constraints on `RealmObject` now accept any type and may throw a runtime exception if used with an unsupported type argument. (#1517)
- Added a `HelpLink` pointing to the relevant section of the documentation to most Realm exceptions. (#1521)
- Added a `RealmObject.GetBacklinks` API to dynamically obtain all objects referencing the current one. (#1533)
- Added a new exception type, `PermissionDeniedException`, to denote permission denied errors when working with synchronized Realms. It exposes a method - `DeleteRealmUserInfo` - to inform the binding that the offending Realm's files should be kept or deleted immediately. This allows recovering from permission denied errors in a more robust manner. (#1543)
- The keychain service name used by Realm to manage the encryption keys for sync-related metadata on Apple platforms is now set to the bundle identifier. Keys that were previously stored within the Realm-specific keychain service will be transparently migrated to the per-application keychain service. (#1522)
- Added a new exception type - `IncompatibleSyncedFileException` - that allows you to handle and perform data migration from a legacy (1.x) Realm file to the new 2.x format. It can be thrown when using `Realm.GetInstance` or `Realm.GetInstanceAsync` and exposes a `GetBackupRealmConfig` method that allows you to open the old Realm file in a dynamic mode and migrate any required data. (#1552)
- Enable encryption on Windows. (#1570)
- Enable Realm compaction on Windows. (#1571)
- `UserInfo` has been significantly enhanced. It now contains metadata about a user stored on the Realm Object Server, as well as a list of all user account data associated with that user. (#1573)
- Introduced a new method - `User.LogOutAsync` - to replace the now-deprecated synchronous call. (#1574)
- Exposed a `BacklinksCount` property on `RealmObject` that returns the number of objects that refer to the current object via a to-one or a to-many relationship. (#1578)
- String primary keys now support `null` as a value. (#1579)
- Add preview support for partial synchronization. Call `Realm.SubscribeToObjectsAsync` with the type of object you're interested in, a string containing a query determining which objects you want to subscribe to, and a callback which will report the results. You may add as many subscriptions to a synced Realm as necessary. (#1580)
- Ensure that Realm collections (`IList<T>`, `IQueryable<T>`) will not change when iterating in a `foreach` loop. (#1589)
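The primitive-collection support above can be illustrated with a model class; the `Student` class and its property names are hypothetical:

```csharp
// Collections of primitives no longer need a wrapper RealmObject.
// As with other Realm collections, the properties are get-only and
// initialized by Realm when the object is managed.
public class Student : RealmObject
{
    public string Name { get; set; }

    public IList<int> Grades { get; }
    public IList<string> Nicknames { get; }
}
```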
Bug fixes
- `Realm.GetInstance` will now advance the Realm to the latest version, so you no longer have to call `Refresh` manually after that. (#1523)
- Fixed an issue that would prevent iOS Share Extension projects from working. (#1535)
Breaking Changes
- `Realm.CreateObject(string className)` now has an additional parameter, `object primaryKey`. You must pass that when creating a new object using the dynamic API. If the object you're creating doesn't have a primary key declared, pass `null`. (#1381)
- `AcceptPermissionOfferAsync` now returns the relative rather than the absolute url of the Realm the user has been granted permissions to. (#1595)
1.6.0 (2017-08-14)
Enhancements
- Exposed a `Realm.WriteCopy` API to copy a Realm file and optionally encrypt it with a different key. (#1464)
- The runtime representations of all Realm collections (`IQueryable<T>` and `IList<T>`) now implement the `IList` interface that is needed for data-binding to `ListView` in UWP applications. (#1469)
- Exposed a `User.RetrieveInfoForUserAsync` API to allow admin users to look up other users' identities in the Realm Object Server. This can be used, for example, to find a user by knowing their Facebook id. (#1486)
- Added a check to verify there are no duplicate object names when creating the schema. (#1502)
- Added more comprehensive error messages when passing an invalid url scheme to `SyncConfiguration` or `User.LoginAsync`. (#1501)
- Added more meaningful error information to exceptions thrown by `Realm.GetInstanceAsync`. (#1503)
- Added a new type - `RealmInteger<T>` - to expose Realm-specific API over base integral types. It can be used to implement counter functionality in synced realms. (#1466)
- Added `PermissionCondition.Default` to apply default permissions for existing and new users. (#1511)
Bug fixes
- Fix an exception being thrown when comparing a non-constant character value in a query. (#1471)
- Fix an exception being thrown when comparing a non-constant byte or short value in a query. (#1472)
- Fix a bug where calling the non-generic version of `IQueryProvider.CreateQuery` on Realm's IQueryable results would throw an exception. (#1487)
- Trying to use an `IList` or `IQueryable` property in a LINQ query will now throw `NotSupportedException` rather than crash the app. (#1505)
Breaking Changes
1.5.0 (2017-06-20)
Enhancements
- Exposed new API on the `User` class for working with permissions: (#1361)
  - `ApplyPermissionsAsync`, `OfferPermissionsAsync`, and `AcceptPermissionOfferAsync` allow you to grant, revoke, offer, and accept permissions.
  - `GetPermissionOffers`, `GetPermissionOfferResponses`, and `GetPermissionChanges` allow you to review objects added via the above mentioned methods.
  - `GetGrantedPermissionsAsync` allows you to inspect permissions granted to or by the current user.
- When used with `RealmConfiguration` (i.e. local Realm), `Realm.GetInstanceAsync` will perform potentially costly operations, such as executing migrations or compaction, on a background thread. (#1406)
- Expose a `User.ChangePasswordAsync(userId, password)` API to allow admin users to change other users' passwords. (#1412)
- Expose a `SyncConfiguration.TrustedCAPath` API to allow providing a custom CA that will be used to validate SSL traffic to the Realm Object Server. (#1423)
- Expose a `Realm.IsInTransaction` API to check if there's an active transaction for that Realm. (#1452)
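A minimal sketch of checking `Realm.IsInTransaction` around a write; the `Person` class and the use of `realm.Add` here are illustrative assumptions:

```csharp
// IsInTransaction reflects whether a write transaction is active
// on this Realm instance.
using (var transaction = realm.BeginWrite())
{
    // realm.IsInTransaction is true inside the transaction.
    realm.Add(new Person { Name = "Alice" });
    transaction.Commit();
}
// realm.IsInTransaction is false again after Commit/Dispose.
```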
Bug fixes
- Fix a crash when querying over properties that have `[MapTo]` applied. (#1405)
- Fix an issue where synchronized Realms did not connect to the remote server in certain situations, such as when an application was offline when the Realms were opened but later regained network connectivity. (#1407)
- Fix an issue where an incorrect property name would be passed to `RealmObject.PropertyChanged` subscribers when the actual changed property is below a `Backlink` property. (#1433)
- Fix an exception being thrown when referencing Realm in a PCL test assembly without actually using it. (#1434)
- Fix a bug where `SyncConfiguration.EnableSSLValidation` would be ignored when passed to `Realm.GetInstanceAsync`. (#1423)
Breaking Changes
- The constructors of `PermissionChange`, `PermissionOffer`, and `PermissionOfferResponse` are now private. Use the new `User.ApplyPermissionsAsync`, `User.OfferPermissionsAsync`, and `User.AcceptPermissionOfferAsync` API. (#1361)
- `User.GetManagementRealm` and `User.GetPermissionRealm` are now deprecated. Use the new permission-related API on `User` to achieve the same results. (#1361)
- `User.ChangePassword(password)` has been renamed to `User.ChangePasswordAsync(password)`. (#1412)
- Removed the following obsolete API: (#1425)
  - `Realm.ObjectForPrimaryKey<T>(long id)`
  - `Realm.ObjectForPrimaryKey<T>(string id)`
  - `Realm.ObjectForPrimaryKey(string className, long id)`
  - `Realm.ObjectForPrimaryKey(string className, string id)`
  - `Realm.Manage<T>(T obj, bool update)`
  - `Realm.Close()`
  - `Realm.CreateObject<T>()`
  - `IOrderedQueryable<T>.ToNotifyCollectionChanged<T>(Action<Exception> errorCallback)`
  - `IOrderedQueryable<T>.ToNotifyCollectionChanged<T>(Action<Exception> errorCallback, bool coalesceMultipleChangesIntoReset)`
  - `IRealmCollection<T>.ObjectSchema`
- `Realm.DeleteRealm` now throws an exception if called while an instance of that Realm is still open.
1.4.0 (2017-05-19)
Enhancements
- Expose a `RealmObject.OnManaged` virtual method that can be used for init purposes, since the constructor is run before the object has knowledge of its Realm. (#1383)
- Expose a `Realm.GetInstanceAsync` API to asynchronously open a synchronized Realm. It will download all remote content available at the time the operation began on a background thread and then return a usable Realm. It is also the only supported way of opening Realms for which the user has only read permissions.
1.3.0 (2017-05-16)
Universal Windows Platform
Introducing Realm Mobile Database for Universal Windows Platform (UWP). With UWP support, you can now build mobile apps using Realm’s object database for the millions of mobile, PC, and Xbox devices powered by Windows 10. The addition of UWP support allows .NET developers to build apps for virtually any modern Windows Platform with Windows Desktop (Win32) or UWP as well as for iOS and Android via Xamarin. Note that sync support is not yet available for UWP, though we are working on it and you can expect it soon.
Enhancements
- Case insensitive queries against a string property now use a new index based search. (#1380)
- Add a `User.ChangePassword` API to change the current user's password if using Realm's 'password' authentication provider. Requires any edition of the Realm Object Server 1.4.0 or later. (#1386)
- `SyncConfiguration` now has an `EnableSSLValidation` property (default is `true`) to allow SSL validation to be specified on a per-server basis. (#1387)
- Add a `RealmConfiguration.ShouldCompactOnLaunch` callback property when configuring a Realm to determine if it should be compacted before being returned. (#1389)
- Silence some benign linker warnings on iOS. (#1263)
- Use reachability API to minimize the reconnection delay if the network connection was lost. (#1380)
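The compaction callback can be sketched as follows; the 50 MB and 50% thresholds are arbitrary example values, and the `(totalBytes, usedBytes)` parameter names are assumptions:

```csharp
// Decide at open time whether the file should be compacted:
// totalBytes is the file size on disk, usedBytes the space
// occupied by live data; return true to compact before opening.
var config = new RealmConfiguration
{
    ShouldCompactOnLaunch = (totalBytes, usedBytes) =>
        totalBytes > 50 * 1024 * 1024 &&
        usedBytes / (double)totalBytes < 0.5
};
var realm = Realm.GetInstance(config);
```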
Bug fixes
- Fixed a bug where `Session.Reconnect` would not reconnect all sessions. (#1380)
- Fixed a crash when subscribing for `PropertyChanged` multiple times. (#1380)
- Fixed a crash when reconnecting to the Object Server. (#1380)
- Fixed a crash on some Android 7.x devices when opening a realm. (#1380)
1.2.1 (2017-05-01)
Bug fixes
- Fixed an issue where an `EntryPointNotFoundException` would be thrown on some Android devices. (#1336)
Enhancements
- Expose `IRealmCollection.IsValid` to indicate whether the realm collection is valid to use. (#1344)
- Update the Fody reference which adds support for building with Mono 5. (#1364)
1.2.0 (2017-04-04)
Realm is now being distributed as a .NET Standard 1.4 library as this is a requirement for supporting UWP. While internally that is a rather big move, applications using it should not be affected. After the upgrade, you'll see a number of new NuGet dependencies being added - those are reference assemblies, already part of mscorlib, so will not affect your application's size or performance. Additionally, we're releasing a new platform specific DataBinding package that contains helper methods that enable two-way databinding scenarios by automatically creating transactions when setting a property.
If you encounter any issues after the upgrade, we recommend clearing the `bin` and `obj` folders and restarting Xamarin Studio. If this doesn't help, please file an issue explaining your solution setup and the type of problems you encounter.
Files written with this version cannot be read by earlier versions of Realm. This version is not compatible with versions of the Realm Object Server lower than 1.3.0.
Bug fixes
- Fixes the `RemoveAll(string)` overload to work correctly. (#1288)
- Resolved an issue that would lead to crashes when refreshing the token for an invalid session. (#1289)
- The `IObservable` returned from `session.GetProgressObservable` will correctly call `OnComplete` when created with `mode: ProgressMode.ForCurrentlyOutstandingWork`. (#1292)
- Fixed a memory leak when accessing string properties. (#1318)
- Fixes an issue when using `EncryptionKey` with synchronized realms. (#1322)
Enhancements
- Introduce APIs for safely passing objects between threads. Create a thread-safe reference to a thread-confined object by passing it to the `ThreadSafeReference.Create` factory method, which you can then safely pass to another thread to resolve in the new realm with `Realm.ResolveReference`. (#1300)
- Introduce API for attempting to reconnect all sessions. This could be used in conjunction with the connectivity plugin to monitor for connectivity changes and proactively request reconnecting, rather than rely on the built-in retry mechanism. (#1310)
- Enable sorting over to-one relationships, e.g. `realm.All<Parent>().OrderBy(p => p.Child.Age)`. (#1313)
- Introduce a `string.Like` extension method that can be used in LINQ queries against the underlying database engine. (#1311)
- Add a `User.IsAdmin` property that indicates whether a user is a Realm Object Server administrator. (#1320)
Breaking Changes
- `DateTimeOffset` properties that are not set will now correctly default to `0001-1-1` instead of `1970-1-1` after the object is passed to `realm.Add`. (#1293)
- Attempting to get an item at an index that is out of range should now correctly throw `ArgumentOutOfRangeException` for all `IRealmCollection` implementations. (#1295)
- The layout of the .lock file has changed, which may affect scenarios where different processes attempt to write to the same Realm file at the same time. (#1296)
- `PropertyChanged` notifications use a new, more reliable, mechanism that behaves slightly differently from the old one. Notifications will be sent only after a transaction is committed (making it consistent with the way collection notifications are handled). To make sure that your UI is promptly updated, you should avoid keeping long-lived transactions around. (#1316)
1.1.1 (2017-03-15)
Bug fixes
- Resolved an issue that prevented compiling for iOS on Visual Studio. (#1277)
1.1.0 (2017-03-03)
Enhancements
- Added Azure Active Directory (AzureAD) credentials provider. (#1254)
Breaking Changes
This is a preparation release for adding UWP support. We have removed all platform-specific logic from the Realm assemblies, and instead weave them in compile time. While this has been tested in all common scenarios, it may create issues with very complex project graphs. If you encounter any of these issues with iOS projects:
- Compilation fails when running the `WeaveRealmAssemblies` task
- App crashes when first accessing a Realm
please file an issue and explain your solution setup.
1.0.4 (2017-02-21)
Bug fixes
- The `Realm` NuGet package no longer clobbers the path to Win32 native binaries in `Realm.Database`. (#1239)
- Fixed a bug where garbage collecting an object with `PropertyChanged` subscribers would cause crashes. (#1237)
1.0.3 (2017-02-14)
Out of Beta!
After about a year and a half of hard work, we are proud to call this a 1.0 release. There is still work to do, but Realm Xamarin is now being used by thousands of developers and has proven reliable.
Sync
Realm Xamarin now works with the Realm Mobile Platform. This means that you can write Xamarin apps that synchronize seamlessly with a Realm Object Server, allowing you to write complex apps with Xamarin that are offline-first and automatically synchronised by adding just a few lines of code. You can read about this in the documentation.
Windows Desktop
Realm Xamarin is no longer iOS and Android only. You can now use it to write .NET programs for Windows Desktop. Add the NuGet package to your regular .NET project and start using Realm. Some features are not supported on Windows yet. Most notably, sync does not yet work for Windows, but also encryption and notifications across processes are missing. We are working on it and you can expect support soon.
Breaking Changes
- `IRealmCollection<T>.ObjectSchema` is deprecated and replaced with `ISchemaSource.ObjectSchema`. (#1216)
Bug fixes
- The `[MapTo]` attribute is now respected in queries. (#1219)
- Letting a Realm instance be garbage collected instead of disposing it will no longer lead to crashes. (#1212)
- Unsubscribing from `RealmObject.PropertyChanged` in a `PropertyChanged` callback should no longer lead to crashes. (#1207)
- `WriteAsync` now advances the read transaction so the changes made asynchronously are available immediately in the original thread. (#1192)
- Queries on backlink properties should no longer produce unexpected results. (#1177)
0.82.1 (2017-01-27)
Bug fixes
- Addressed an issue where obtaining a Realm instance, reading an object, then obtaining another instance on the same thread would cause the object to become invalid and crash the application upon accessing any of its members.
0.82.0 (2017-01-23)
Breaking Changes
- Moved all exceptions under the `Realms.Exceptions` namespace. (#1075)
- Moved `RealmSchema` to the `Realms.Schema` namespace. (#1075)
- Made the `ErrorEventArgs` constructor internal. (#1075)
- Made `ObjectSchema.Builder` and `RealmSchema.Builder` internal. (#1075)
- Passing an object that has `IList` properties to `Add(obj, update: true)` will no longer merge the lists. Instead, the `IList` property will contain only the items in the object. (#1040)
Enhancements
- Added a virtual `OnPropertyChanged` method in `RealmObject` that you can override to be notified of changes to the current object. (#1047)
- Added compile-time checks that `[Required]` is applied on correct property types. (#1072)
- `Realm.Add(RealmObject obj)` will now return the passed-in object, similarly to `Realm.Add<T>(T obj)`. (#1162)
- Added an extension method for `string.Contains` that accepts a `StringComparison` argument and can be used in queries. When querying, only `StringComparison.Ordinal` and `StringComparison.OrdinalIgnoreCase` can be used. When not used in queries, all values for `StringComparison` are valid. (#1141)
Bug fixes
- Adding a standalone object that has an `IList<T>` property that has never been accessed to the Realm will no longer throw a `NullReferenceException`. (#1040)
- `IList<T>` properties will now correctly return `IsReadOnly = true` when managed by a read-only Realm. (#1070)
- The weaver should now correctly resolve references in PCL and netstandard assemblies. (#1117)
- Add some missing methods to the PCL reference assembly. (#1093)
- Disposed realms will not throw `ObjectDisposedException` when trying to access their members. Additionally, disposing a realm will not invalidate other instances on the same thread. (#1063)
0.81.0 (2016-12-14)
Breaking Changes
- The `IQueryable<T>.ToNotifyCollectionChanged` extension methods that accept parameters are now deprecated. There is a new parameterless one that you should use instead. If you want to handle errors, you can do so by subscribing to the `Realm.OnError` event. (#938)
- `RealmResults<T>` is now marked `internal` and `Realm.All<T>()` will instead return `IQueryable<T>`. We've added a new extension method `IQueryable<T>.SubscribeForNotifications(NotificationCallbackDelegate<T>)` that allows subscribing for notifications. (#942)
- `Realm.CreateObject<T>` has been deprecated and will be removed in the next major release. (It could cause a dangerous data loss when using the synchronised realms coming soon, if a class has a PrimaryKey.) (#998)
- `RealmConfiguration.ReadOnly` has been renamed to `RealmConfiguration.IsReadOnly` and is now a property instead of a field. (#858)
- `Realm.All` has been renamed to `Realm.GetAll` and the former has been obsoleted. (#858)
- `Realm.ObjectForPrimaryKey` has been renamed to `Realm.Find` and the former has been obsoleted. (#858)
- `Realm.Manage` has been renamed to `Realm.Add` and the former has been obsoleted. (#858)
- `RealmConfiguration.PathToRealm` has been renamed to `Realm.GetPathToRealm` and the former has been obsoleted. (#858)
- `RealmResults.NotificationCallback` has been extracted as a non-nested class and has been renamed to `NotificationCallbackDelegate`. (#858)
- `Realm.Close` has been removed in favor of `Realm.Dispose`. (#858)
- `RealmList<T>` is now marked `internal`. You should use `IList<T>` to define collection relationships. (#858)
Enhancements
- In data-binding scenarios, if a setter is invoked by the binding outside of write transaction, we'll create an implicit one and commit it. This enables two-way data bindings without keeping around long-lived transactions. (#901)
- The Realm schema can now express non-nullable reference type properties with the new `[Required]` attribute. (#349)
- Exposed a new `Realm.Error` event that you can subscribe to in order to get notified of exceptions that occur outside user code. (#938)
- The runtime types of the collection returned from `Realm.All` and the collection created for `IList<T>` properties on `RealmObject` now implement `INotifyCollectionChanged`, so you can pass them for data-binding without any additional casting. (#938, #909)
- All RealmObjects implement `INotifyPropertyChanged`. This allows you to pass them directly for data-binding.
- Added a `Realm.Compact` method that allows you to reclaim the space used by the Realm. (#968)
- `Realm.Add` returns the added object. (#931)
- Support for backlinks, aka `LinkingObjects`. (#219)
- Added an `IList<T>.Move` extension method that allows you to reorder elements within the collection. For managed lists, it calls a native method, so it is slightly more efficient than removing and inserting an item, but more importantly, it will raise `CollectionChanged` with `NotifyCollectionChangedAction.Move`, which will result in a nice move animation rather than a reload of a ListView. (#995)
Bug fixes
- Subscribing to `PropertyChanged` on a RealmObject and modifying an instance of the same object on a different thread will now properly raise the event. (#909)
- Using `Insert` to insert items at the end of an `IList` property will no longer throw an exception. (#978)
0.80.0 (2016-10-27)
Breaking Changes
- This version updates the file format. Older versions will not be able to open files created with this version. (#846)
- `RealmList<T>` is now marked as internal. If you were using it anywhere, you should migrate to `IList<T>`. (#880)
Enhancements
- iOS "Link All" should now work - we add a `[Preserve]` attribute to all woven members of your `RealmObject` subclasses, so you do not need to manually add `[Preserve(allMembers=true)]`. (#822)
- `Realm.Manage` calls are now much faster. You should prefer that to `Realm.CreateObject` unless you are setting only a few properties, while leaving the rest with default values. (#857)
- Added a `bool update` argument to `Realm.Manage`. When `update: true` is passed, Realm will try to find and update a persisted object with the same PrimaryKey. If an object with the same PrimaryKey is not found, the unmanaged object is added. If the passed-in object does not have a PrimaryKey, it will be added. Any related objects will be added or updated depending on whether they have PrimaryKeys. (#871)
NOTE: cyclic relationships, where object references are not identical, will not be reconciled. E.g. this will work as expected:
    var person = new Person { Name = "Peter", Id = 1 };
    person.Dog = new Dog();
    person.Dog.Owner = person;
However this will not - it will set the Person's properties to the ones from the last instance it sees:
    var person = new Person { Name = "Peter", Id = 1 };
    person.Dog = new Dog();
    person.Dog.Owner = new Person { Id = 1 };
This is important when deserializing data from json, where you may have multiple instances of object with the same Id, but with different properties.
- `Realm.Manage` will no longer throw an exception if a managed object is passed. Instead, it will immediately return. (#871)
- Added a non-generic version of `Realm.Manage`. (#871)
- Added support for nullable integer PrimaryKeys. Now you can have a `long?` PrimaryKey property where `null` is a valid unique value. (#877)
- Added a weaver warning when applying Realm attributes (e.g. `[Indexed]` or `[PrimaryKey]`) on non-persisted properties. (#882)
- Added support for `==` and `!=` comparisons to realm objects in LINQ (#896), e.g.:
```csharp
var peter = realm.All<Person>().FirstOrDefault(d => d.Name == "Peter");
var petersDogs = realm.All<Dog>().Where(d => d.Owner == peter);
```
- Added support for `StartsWith(string, StringComparison)`, `EndsWith(string, StringComparison)`, and `Equals(string, StringComparison)` filtering in LINQ. (#893)
NOTE: Currently only `Ordinal` and `OrdinalIgnoreCase` comparisons are supported. Trying to pass in a different one will result in a runtime error. If no argument is supplied, `Ordinal` will be used.
0.78.1 (2016-09-15)
Bug fixes
- `Realm.ObjectForPrimaryKey()` now returns null if it failed to find an object. (#833)
- Querying anything but persisted properties now throws instead of causing a crash (#251 and #723)
Uses core 1.5.1
0.78.0 (2016-09-09)
Breaking Changes
- The term `ObjectId` has been replaced with `PrimaryKey` in order to align with the other SDKs. This affects the `[ObjectId]` attribute used to decorate a property.
Enhancements
- You can retrieve single objects quickly using `Realm.ObjectForPrimaryKey()` if they have a `[PrimaryKey]` property specified. (#402)
- Manual migrations are now supported. You can specify exactly how your data should be migrated when updating your data model. (#545)
- LINQ searches no longer throw a `NotSupportedException` if your integer type on the other side of an expression fails to exactly match your property's integer type.
- Additional LINQ methods now supported: (#802)
- LastOrDefault
- FirstOrDefault
- SingleOrDefault
- ElementAt
- ElementAtOrDefault
Bug fixes
- Searching char field types now works. (#708)
- Now throws a `RealmMigrationSchemaNeededException` if you have changed a `RealmObject` subclass declaration and not incremented the `SchemaVersion`. (#518)
- Fixed a bug where disposing a `Transaction` would throw an `ObjectDisposedException` if its `Realm` was garbage-collected. (#779)
- Corrected the exception being thrown from `IndexOutOfRangeException` to `ArgumentOutOfRangeException`.
Uses core 1.5.1
0.77.2 (2016-08-11)
Enhancements
- Setting your Build Verbosity to `Detailed` or `Normal` will now display a message for every property woven, which can be useful if you suspect errors with Fody weaving.
- Better exception messages will help diagnose EmptySchema problems. (#739)
- Partial evaluation of LINQ expressions means more expressions types are supported as operands in binary expressions (#755)
- Support for LINQ queries that check for `null` against `string`, `byte[]` and `Nullable<T>` properties.
- Support for `string.IsNullOrEmpty` on persisted properties in LINQ queries.
- Schema construction has been streamlined to reduce overhead when opening a Realm
- Schema version numbers now start at 0 rather than UInt64.MaxValue
Bug fixes
- `RealmResults<T>` should implement `IQueryable.Provider` implicitly. (#752)
- Realms that close implicitly will no longer invalidate other instances (#746)
Uses core 1.4.2
0.77.1 (2016-07-25)
Minor Changes
- Fixed a bug weaving pure PCL projects, released in v0.77.0 (#715)
- Exception messages caused by using incompatible arguments in LINQ now include the offending argument (#719)
- PCL projects using ToNotifyCollectionChanged may have crashed due to mismatch between PCL signatures and platform builds.
Uses core 1.4.0
0.77.0 (2016-07-18)
Broken Version - will not build PCL projects
Breaking Changes
- Sort order change in previous version was reverted.
Major Changes
- It is now possible to introspect the schema of a Realm. (#645)
- The Realm class received overloads for `Realm.CreateObject` and `Realm.All` that accept string arguments instead of generic parameters, enabling use of the `dynamic` keyword with objects whose exact type is not known at compile time. (#646)
- To-many relationships can now be declared with an `IList<DestClass>` rather than requiring `RealmList<DestClass>`. This is significantly faster than using `RealmList` due to caching the list. (Issue #287)
- Creating standalone objects with lists of related objects is now possible. Passing such an object into `Realm.Manage` will cause the entire object graph from that object down to become managed.
Minor Changes
- Fixed a crash on iOS when creating many short-lived realms very rapidly in parallel (Issue #653)
- `RealmObject.IsValid` can be called to check if a managed object has been deleted.
- Accessing properties on invalid objects will throw an exception rather than crash with a segfault (#662)
- Exceptions thrown when creating a Realm no longer leave a leaking handle (Issue #503)
Uses core 1.4.0
0.76.1 (2016-06-15)
Minor Changes
- The `Realm` static constructor will no longer throw a `TypeLoadException` when there is an active `System.Reflection.Emit.AssemblyBuilder` in the current `AppDomain`.
- Fixed an `Attempting to JIT compile` exception when using the Notifications API on iOS devices. (Issue #620)
Breaking Changes
No API change but sort order changes slightly with accented characters grouped together and some special characters sorting differently. "One third" now sorts ahead of "one-third".
It uses the table at
It groups all characters that look visually identical, that is, it puts a, à, å together and before ø, o, ö even. This is a flaw because, for example, å should come last in Denmark. But it's the best we can do now, until we get more locale aware.
Uses core 1.1.2
0.76.0 (2016-06-09)
Major Changes
- `RealmObject` classes will now implicitly implement `INotifyPropertyChanged` if you specify the interface on your class. Thanks to Joe Brock for this contribution!
Minor Changes
- `long` is supported in queries. (Issue #607)
- Linker error looking for `System.String System.String::Format(System.IFormatProvider,System.String,System.Object)` fixed. (Issue #591)
- Second-level descendants of `RealmObject` and static properties in `RealmObject` classes now cause the weaver to properly report errors, as we don't (yet) support those. (Issue #603)
- Calling `.Equals()` on standalone objects no longer throws. (Issue #587)
0.75.0 (2016-06-02)
Breaking Changes
- File format of Realm files is changed. Files will be automatically upgraded but opening a Realm file with older versions of Realm is not possible. NOTE: If you were using the Realm Browser specified for the old format you need to upgrade. Pick up the newest version here.
- `RealmResults<T>` no longer implicitly implements `INotifyCollectionChanged`. Use the new `ToNotifyCollectionChanged` method instead.
Major Changes
- `RealmResults<T>` can be observed for granular changes via the new `SubscribeForNotifications` method.
- `Realm` gained the `WriteAsync` method which allows a write transaction to be executed on a background thread.
- Realm models can now use `byte[]` properties to store binary data.
- `RealmResults<T>` received a new `ToNotifyCollectionChanged` extension method which produces an `ObservableCollection<T>`-like wrapper suitable for MVVM data binding.
Minor Fixes
- Nullable `DateTimeOffset` properties are supported now.
- Setting `null` to a string property will now correctly return `null`.
- Failure to install Fody will now cause an exception like "Realms.RealmException: Fody not properly installed. RDB2_with_full_Realm.Dog is a RealmObject but has not been woven." instead of a `NullReferenceException`.
- The PCL `RealmConfiguration` was missing some members.
- The Fody weaver is now discoverable at non-default nuget repository paths.
0.74.1 Released (2016-05-10)
Minor Fixes
- Realms now refresh properly on Android when modified in other threads/processes.
- Fixes crashes under heavy combinations of threaded reads and writes.
Minor Changes
- The two `Realm` and `RealmWeaver` NuGet packages have been combined into a single `Realm` package.
- The `String.Contains(String)`, `String.StartsWith(String)`, and `String.EndsWith(String)` methods now support variable expressions. Previously they only worked with literal strings.
- `RealmResults<T>` now implements `INotifyCollectionChanged` by raising the `CollectionChanged` event with `NotifyCollectionChangedAction.Reset` when its underlying table or query result is changed by a write transaction.
0.74.0 Private Beta (2016-04-02)
Major Changes
- The Realm assembly weaver now submits anonymous usage data during each build, so we can track statistics for unique builders, as done with the Java, Swift and Objective-C products (issue #182)
- `Realm.RemoveRange<>()` and `Realm.RemoveAll<>()` methods added to allow you to delete objects from a realm.
- `Realm.Write()` method added for executing code within an implicitly committed transaction.
- You can now restrict the classes allowed in a given Realm using `RealmConfiguration.ObjectClasses`.
- LINQ improvements:
- Simple bool searches work without having to use `== true`. (issue #362)
- The `!` operator works to negate either simple bool properties or complex expressions. (issue #77)
- Count, Single and First can now be used after a Where expression (#369), e.g. `realm.All<Owner>().Where(p => p.Name == "Dani").First();` as well as with a lambda expression `realm.All<Owner>().Single(p => p.Name == "Tim");`
- Sorting is now provided using the `OrderBy`, `OrderByDescending`, `ThenBy` and `ThenByDescending` clauses. Sorts can be applied to results of a query from a `Where` clause, or to the entire class by applying them after `All<>`.
- The `String.Contains(String)`, `String.StartsWith(String)`, and `String.EndsWith(String)` methods can now be used in Where clauses.
- DateTimeOffset properties can be compared in queries.
- Support for
armeabibuilds on old ARM V5 and V6 devices has been removed.
Minor Changes
- Finish `RealmList.CopyTo` so you can apply `ToList` to related lists. (issue #299)
- NuGet now inserts `libwrappers.so` for Android targets using `$(SolutionDir)packages` so it copes with the different relative paths in cross-platform (Xamarin Forms) app templates vs pure Android templates.
- `Realm.RealmChanged` event notifies you of changes made to the realm.
- `Realm.Refresh()` makes sure the realm is updated with changes from other threads.
0.73.0 Private Beta (2016-02-26)
Major Changes
- `RealmConfiguration.EncryptionKey` added so files can be encrypted and existing encrypted files from other Realm sources opened (assuming you have the key).
Minor Fixes
- For PCL users, if you use `RealmConfiguration.DefaultConfiguration` without having linked a platform-specific dll, you will now get a warning message with a `PlatformNotSupportedException`. Previously this threw a `TypeInitializationException`.
- Update to Core v0.96.2 and matching ObjectStore (issue #393)
0.72.1 Private Beta (2016-02-15)
No functional changes. Just added library builds for Android 64-bit targets `x86_64` and `arm64-v8a`.
0.72.0 Private Beta (2016-02-13)
Uses Realm core 0.96.0
Major Changes
- Added support for PCL so you can now use the NuGet in your PCL GUI or viewmodel libraries.
0.71.1 Private Beta (2016-01-29)
Minor Fixes
Building iOS apps targeting the simulator sometimes got an error like:
Error MT5209: Native linking error...building for iOS simulator, but linking in object file built for OSX, for architecture i386 (MT5209)
This was fixed by removing a redundant simulator library included in NuGet
0.71.0 Private Beta (2016-01-25)
Uses Realm core 0.95.6.
Platform Changes
Now supporting:
- Xamarin Studio on Mac - iOS and Android
- Xamarin Studio on Windows - Android
- Visual Studio on Windows - iOS and Android
Major Changes
- Added Android support as listed above.
- Added `RealmConfiguration` to provide a reusable way to specify path and other settings.
- Added `Realm.Equals`, `Realm.GetHashCode` and `Realm.IsSameInstance` to provide equality checking, so you can confirm realms opened in the same thread are equal (shared internal instance).
- Added `Realm.DeleteFiles(RealmConfiguration)` to aid in cleaning up related files.
- Added nullable basic types such as `int?`.
- Optimised `Realm.All<userclass>().Count()` to get a rapid count of all objects of a given class.
- Related lists are now supported in standalone objects.
LINQ
- `Count()` on `Where()` implemented.
- `Any()` on `Where()` implemented.
- `First(lambda)` and `Single(lambda)` implemented.
- Significant optimisation of `Where()` to be properly lazy; it was instantiating all objects internally.
API-Breaking Changes
- `[PrimaryKey]` attribute renamed `[ObjectId]`.
- `Realm.Attach(object)` renamed `Manage(object)`.
- Lists of related objects are now declared with `IList<otherClass>` instead of `RealmList`.
Bug fixes
- Bug that caused a linker error for iPhone simulator fixed (#375)
0.70.0 First Private Beta (2015-12-08)
Requires installation from private copy of NuGet download.
State
- Supported IOS with Xamarin Studio only.
- Basic model and read/write operations with simple LINQ `Where` searches.
- NuGet hosted as downloads from private realm/realm-dotnet repo.
Bug ID: 67218
Summary: Combine incorrectly folds (double) (float) (unsigned)
Product: gcc
Version: unknown
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: rtl-optimization
Assignee: unassigned at gcc dot gnu.org
Reporter: rsandifo at gcc dot gnu.org
Target Milestone: ---

simplify-rtx.c folds (double) (float) (some-int) into (double) (some-int) if it can prove that the intermediate (float) doesn't have any effect. This check is based on the number of known "sign bit copies", which inherently assumes a signed interpretation of (some-int) and is therefore only valid for FLOAT. For UNSIGNED_FLOAT we can end up treating a large unsigned value as fitting in a float. The following test case fails at -O and above on aarch64-linux-gnu because of this:

extern void abort (void) __attribute__ ((noreturn));

double __attribute__ ((noinline, noclone))
foo (unsigned int x)
{
  return (double) (float) (x | 0xffff0000);
}

int
main ()
{
  if (foo (1) != 0x1.fffep31)
    abort ();
  return 0;
}
Learn How to Rapidly Create a Customized CMS with this Java Framework!
A look at how this CMS allows you to focus on one thing and one thing only, the definition of your own domain.
Over the years of being a freelance developer for small to medium businesses, there has been one constant, CRUD applications. They usually consisted of some sort of ‘backend’ that handled the content of an app or website.
Well, for a website, the solution is easy: Wordpress, or maybe Joomla?
Well, what about apps? Back to square one...
I Present to You, Elepy.
Elepy (also on GitHub) is an open-source Headless Content Management Framework for Java and Kotlin. The framework comes bundled with an Admin Control Panel that lets you easily control your content.
The framework lets you focus on one thing and one thing only, the definition of your own domain.
Don’t Believe Me? Just Watch!
Creation of a content type is as easy as annotating the POJOs that define your domain.
    import com.elepy.annotations.*;
    import com.elepy.models.TextType;

    @RestModel(slug = "/products", name = "Name")
    public class Product {

        @Identifier
        private String id;

        @Number(minimum = 0)
        private BigDecimal price;

        @Unique
        @Searchable
        private String name;

        @DateTime(max
    }
What about the Database?
Well by default, Elepy uses MongoDB, but provides support for the different flavors of SQL through a Hibernate/JPA extension.
    import com.elepy.Elepy;
    import com.elepy.admin.ElepyAdminPanel;
    import com.mongodb.DB;
    import com.mongodb.MongoClient;

    public class Main {
        public static void main(String[] args) {
            MongoClient mongoClient = someMongoClientWithConfiguration; // Configure your MongoClient
            DB exampleDB = mongoClient.getDB("example1");

            new Elepy()
                .connectDB(exampleDB)
                .onPort(7777)
                .addModel(Product.class)
                .addExtension(new ElepyAdminPanel())
                // Start Elepy!
                .start();
        }
    }
Well, What Now?
That’s it, you just created a CMS!
By navigating to, you can log in and administer your content. Username and password (in version 1): admin
What Else Can Elepy Do?
Well, it’s a long list. A growing list, because Elepy 2 is under construction! To name a few features: pagination, searching, sorting, and 25+ annotations.
But the true beauty of Elepy lies in its customizability and extendability. It has an entire plugin/module ecosystem, a dependency injection micro-framework, and a routing micro-framework that you can make use of. The best part is that every part of the framework can be extended or replaced. Everything from the presentation to the data access layer.
Keep Up With Elepy
- Find documentation on and
- (please) Star, follow and contribute to the project on GitHub
- Google Slides: The Elepy Presentation
- Elepy Example Repo
Opinions expressed by DZone contributors are their own.
You can make a plugin that looks something like this (untested). drag_select is the default mouse behaviour, and this plugin calls that and then navigate_to_definition.
    import sublime_plugin

    class navigate(sublime_plugin.TextCommand):
        # note the underscore
        def run_(self, args):
            self.view.run_command("drag_select", args)
            self.view.run_command("navigate_to_definition", args)
It's working, I am impressed!
What's the underscore for?
When you call a TextCommand, it calls the run_ function. This function strips out some of the arguments that drag_select relies on, and then calls run (no underscore). By overloading run_ instead of run we can keep those extra arguments and pass them to drag_select.
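The dispatch described above can be sketched in plain Python, independent of Sublime's actual API; the class below is a stand-in for `sublime_plugin.TextCommand`, not the real implementation:

```python
class TextCommand:
    """Stand-in for sublime_plugin.TextCommand (illustrative only)."""

    def run_(self, args):
        # Framework entry point: strips extra arguments such as the
        # mouse "event", then forwards the rest to run().
        filtered = {k: v for k, v in args.items() if k != "event"}
        self.run(filtered)

    def run(self, args):
        pass


class Navigate(TextCommand):
    # Overriding run_ (note the underscore) keeps the raw arguments,
    # so inner commands like drag_select still receive the "event" data.
    def run_(self, args):
        self.raw_args = args


cmd = Navigate()
cmd.run_({"event": {"x": 10, "y": 20}, "extend": False})
print("event" in cmd.raw_args)  # True: the event survived the dispatch
```

This is why the plugin overrides `run_` rather than `run`: the default `run_` would have discarded exactly the arguments `drag_select` needs.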
Very clever, I'll remember that!
Thanks!
|
https://forum.sublimetext.com/t/solved-move-the-cursor-on-click/7397/3
|
CC-MAIN-2016-18
|
refinedweb
| 109
| 53.17
|
Introduction
In order for two networked computers to exchange data, a protocol has to be used. A protocol is an agreed method to identify computers (ex. PC, iPhone), applications (ex. browser, web server) and resources (ex. webpage, image, database table). A protocol also facilitates secure (ex. encryption) and reliable (ex. data-loss protection) communication. The World Wide Web (i.e. www) relies heavily on the HTTP (i.e. Hypertext Transfer Protocol) protocol. For example, a browser uses HTTP to request a webpage hosted on a web server. The web server returns the requested page for display. The browser uniquely identifies a webpage resource using a URL (i.e. uniform resource locator). This sounds familiar when we type the address of a website in the browser address bar. The URL fully identifies a resource using a host name (or IP address), a port number and a path to the resource. The host name identifies the destination computer. The port number identifies the service (ex. web, ftp), followed by the path to the resource. To demonstrate that, take a look at the following example:
- Protocol: http
- Host name:
- Port number: 80 but since 80 is the default port number for HTTP we can safely remove :80 from the URL
- Path to resource: /2016/11/python-mutex-example
Note that the path to the resource should not necessarily match the corresponding local file system path on the server. The translation from URL path to system path is taken care of by the web server application. An example showing this translation is provided later in the implementation section.
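As a sketch, Python's standard library can split a URL into exactly these components. The host name below is a placeholder, since the article's original example host was not preserved:

```python
from urllib.parse import urlsplit

# Decompose an example URL into protocol, host, port and path.
# The host name here is a placeholder, not the article's actual site.
parts = urlsplit("http://www.example.com:80/2016/11/python-mutex-example")
print(parts.scheme)    # protocol
print(parts.hostname)  # host name
print(parts.port)      # port number
print(parts.path)      # path to resource
```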
HTTP commands
We learned that a browser communicates with a web server using HTTP protocol by adopting client server model (as opposed to a peer to peer model). Let us now briefly describe some of the methods that the HTTP protocol provides to facilitate communication.
- GET method is used to request a resource. It only retrieves data, for example, a browser uses it to download then display a webpage.
- POST method is used to send data to the destination computer. For example, a browser uses it to submit a web form.
- PUT method creates a new resource if it does not exist, otherwise it updates that resource.
- DELETE method is used to delete a resource.
Please note that as we browse the web, GET and POST commands are used the most.
HTTP request vs response
Let us be more specific and put things into perspective. When a web browser retrieves a webpage, that is application level communication. At the HTTP protocol level, the browser sends an HTTP request and the web server returns an HTTP response. With that said, pay attention to the following points:
- An HTTP method (ex. POST, GET) is part of an HTTP request
- A status code is part of an HTTP response to indicate what happened with the request
- Query parameters are part of an HTTP request. They are used as input and can be sent either in request body or added to the URL
- A request or response contains a headers section to include extra information (meta data). For example, the content type is one of the most important header information when dealing with HTTP response because it specifies the data format being returned
- Both request and response have a body section containing the data (payload) being sent or received in a given format (JSON in our case)
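To make the request/response split concrete, here is what a minimal HTTP exchange looks like on the wire, written out as plain text. The host and payload are illustrative, not taken from the article:

```python
# A minimal HTTP request and response written out as text, showing where
# the method, status code, headers and body live. Values are illustrative.
request = (
    "GET /cars HTTP/1.1\r\n"        # request line: method, path, version
    "Host: example.com\r\n"          # headers section (meta data)
    "Accept: application/json\r\n"
    "\r\n"                           # blank line ends the headers
)
response = (
    "HTTP/1.1 200 OK\r\n"                 # status line with status code
    "Content-Type: application/json\r\n"  # content type of the body
    "\r\n"
    '[{"id": "01", "model": "Honda"}]'    # body (payload) in JSON
)
status_line = response.split("\r\n")[0]
print(status_line)
```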
Python web service
Now that we have a basic idea of how the web works, the next step is to utilize HTTP in order to implement data exchange between computers. In other words, we are going to implement a simple Python web service and develop a client to consume it. Instead of retrieving data in HTML format as in the browser case, we are going to send and receive data in JSON format. That is a naive way to think of a web service. A web service basically exposes an API through endpoints.
API vs end point
Let us quickly clarify what API and end point are. These terms are frequently mentioned when discussing web services. API (application programming interface) is a general term; it can refer to web services, libraries, methods and functions. Without being fancy in the definition, it refers to specifying a service name, input parameters, their types and any returned results. APIs allow us to glue components together and interconnect systems. End point is one end of the communication channel. It is just another way to refer to the URL of the web service. It is also important to note that a web service API typically has more than one end point. Each end point serves a specific purpose. All web service end points share a common base URL. Oh, by the way, did we just say JSON? What on earth is JSON format?
JSON Format
Sometimes, when terms are repeated over and over, we tend to lose track. For that reason, let us clarify a few basic points:
- Text vs binary: JSON data is typically saved to disk in text format. A binary file is an array of bytes representing some sort of custom data (ex. image pixels). On the other hand, bytes in a text file represent characters in a certain encoding.
- ASCII vs UTF8: Characters in a JSON text file can be encoded in ASCII (plain text) or UTF8 (if you want to use international character sets). Note that ASCII and UTF8 are just example encoding schemes. You can use whatever encoding suitable for your application.
- JSON vs XML: that is how we structure the content of the file. In this post, we are going to use JSON format. This is not the place to debate which is better JSON or XML, however JSON seems to win the competition on the web due to simplicity, lightweight and adoption.
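A quick sketch of the text and encoding points above, using Python's json module. The non-ASCII value is just for illustration:

```python
import json

data = {"model": "Skoda_Š"}  # a non-ASCII character to show the difference

# Plain-text (ASCII) JSON: the non-ASCII character is escaped as \u0160
ascii_text = json.dumps(data)

# UTF-8 JSON: keep the character as-is, then encode to bytes for disk/wire
utf8_text = json.dumps(data, ensure_ascii=False)
utf8_bytes = utf8_text.encode("utf-8")

# Both forms decode back to the same object
print(json.loads(ascii_text) == json.loads(utf8_bytes.decode("utf-8")))
```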
JSON (JavaScript Object Notation) is a subset of the JavaScript language syntax where:
- Data is represented in name value pairs separated by commas
- Objects are held in curly braces
- Arrays are held in square brackets
JSON Example
Here is an example
{ "cars": [ { "id":"01", "model": "Honda", "color": "White" }, { "id":"02", "model": "Ford", "color": "Black" } ] }
The JSON file above contains information about two cars. Each car object has three fields. It resembles a database table. Car objects are the rows and fields are the columns. Try to copy the JSON text above and paste it to this online JSON viewer to visualize how the file is structured. We agreed that HTTP is going to be used to send and receive data in JSON format so we are ready now to talk about REST.
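In Python, the json module turns this text into ordinary dictionaries and lists, which is how the server and client later in this article handle the data:

```python
import json

# The cars example from above, as JSON text
text = """
{ "cars": [
    { "id": "01", "model": "Honda", "color": "White" },
    { "id": "02", "model": "Ford",  "color": "Black" }
] }
"""

doc = json.loads(text)        # parse JSON text into a dict
for car in doc["cars"]:       # each car object is like a table row
    print(car["id"], car["model"], car["color"])
```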
What is REST?
REST (REpresentational State Transfer) is a set of conventions to structure a web service. A web service that conforms to these conventions is called a RESTful web service. Fine details of the REST constraints (i.e. client server, stateless, cacheable, layered system, uniform interface, code on demand) are not covered in this article. For more information, please see the references at the end of this article. So what is covered here then? I am going to cover basic design tips and implement a simple RESTful web service along with the corresponding client in Python.
Nouns verbs and representations
When designing RESTful API, we should pay attention to nouns, verbs and representations. Nouns refer to resources (ex. database table) exposed through the API. Verbs are HTTP methods (ex. POST, GET) applied to resources. Representation refers to data format (ex. json, xml, html). In our case, we are going to apply the following design tips.
- Take verbs out of the web service URL and dedicate two base URLs to each resource
- Use HTTP commands to apply verbs to nouns in order to reduce number of base URLs
- Use plural nouns as opposed to singular for better intuition and clarity
Take the same cars example mentioned earlier:
- GET on /cars retrieves all cars
- GET on /cars/id retrieves a specific car with id
- POST on /cars adds a new car
- PUT on /cars/id updates a specific car with id
- DELETE on /cars/id deletes a specific car with id
As you can see, only two base URLs are used (/cars and /cars/id). HTTP methods are used to reduce the number of base URLs. For example, there is no need for base URLs like /getCars and /addCar. Also, we used the plural noun /cars, not /car.
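One way to picture this design is as a routing table keyed by (verb, noun): two URL patterns, with the HTTP method selecting the action. The handler names below are hypothetical, not part of the web.py server implemented later:

```python
# A sketch of the noun/verb mapping: only two URL patterns are needed,
# and the HTTP method picks the handler. Handler names are made up.
routes = {
    ("GET",    "/cars"):      "list_cars",
    ("POST",   "/cars"):      "add_car",
    ("GET",    "/cars/{id}"): "get_car",
    ("PUT",    "/cars/{id}"): "update_car",
    ("DELETE", "/cars/{id}"): "delete_car",
}

# Only two distinct base URLs appear in the table
base_urls = {path for (_method, path) in routes}
print(sorted(base_urls))
```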
Base URL
In base URL, we may include service name and version number. Service name provides a name space in case of hosting more than one service on the server. Version number is handy if we want to update the web service without breaking legacy functionality. Here is an example:
In our implementation, we are not going to use a service name nor version. We are going to keep things simple so that the demonstration is less distracting.
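For illustration, a versioned base URL could be assembled like this. The host name and service name are hypothetical:

```python
# Build endpoint URLs from a versioned base URL.
# Host name and service name are hypothetical.
service = "carservice"
version = "v1"
base = "http://api.example.com/{}/{}".format(service, version)

cars_url = "{}/cars".format(base)       # collection endpoint
car_url = "{}/cars/{}".format(base, 2)  # single-resource endpoint
print(cars_url)
print(car_url)
```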
Python REST API server
We are going to use web.py web server Python module. If you do not have it installed, you can do that using the command:
sudo easy_install web.py or sudo pip install web.py
If you do not have pip or easy_install then you need to install it first.
For the sake of clarity, it is not our intention to have a fancy implementation. Typically, a web service is implemented using a production quality Python REST API framework such as Django or Flask. Also, a database is used to back the web service for persisting data. In our case, the focus is on exposing an API through a web service and consuming that service. For that reason, web.py is more than enough to do the job. For persistence, we are going to use an in-memory Python list. We are not going to use any database nor save to disk. If you turn off the server you will lose the data, but that is fine for a demo. Here is our implementation, commented well so there is no need for extra explanation.
# Import web.py web server module
import web
# Makes dealing with json format easy
import json

# GET on /cars lists all cars
# GET on /cars/10 retrieves car with id = 10
# POST on /cars creates a car
# PUT on /cars/10 updates car with id = 10
# DELETE on /cars/10 deletes car with id = 10

# Regular expressions are used to extract the car id.
# Here is an explanation of that:
# /cars followed by optional / followed by a capture group.
# The syntax for the capture group is (?P...)
# Inside the capture group we have <car_id>[0-9]+
# meaning the ID is one or more digits.
# The capture group can occur 0 or more times because it is followed by ?
# urls is used by the web.py module.
# For each end point there is a method that handles
# it in the HandleRequest class
urls = (r'/cars/?(?P<car_id>[0-9]+)?', 'HandleRequest')

# We are going to use an in-memory list for demonstration only.
# Think of it as an in memory database
cars = []

#--------------------------------------------
# This class is used to handle requests
class HandleRequest():
    # Initialization code
    def __init__(self):
        # Set response content type (json) and
        # encoding (utf8) for all requests
        web.header('Content-Type', 'text/json; charset=utf-8')

    #--------------------------------------------
    # Handle HTTP GET
    def GET(self, car_id=None):
        # If no id is provided list all cars
        if car_id is None:
            print "---- Listing all cars"
            # json.dumps converts the array of dictionaries cars
            # into a json text format
            return json.dumps(cars)
        # Otherwise send back the requested car object
        else:
            # Search for the requested car
            print "---- Retrieving car with id = ", car_id
            for car in cars:
                if int(car['id']) == int(car_id):
                    return json.dumps(car)
            # return empty json
            return "{}"

    #--------------------------------------------
    # Handle HTTP POST
    def POST(self, car_id=None):
        # Append the car object sent in the POST request
        # to the cars array. The sent data is saved in
        # web.data(), json.loads converts the json text
        # data into a dictionary. The cars array is an
        # array of dictionaries, each dictionary stores
        # information about one car.
        if car_id is None:
            # Make sure we do not add a car if the id exists
            found = False
            mycar = json.loads(web.data())
            for car in cars:
                if int(car['id']) == int(mycar['id']):
                    found = True
            if found == False:
                cars.append(mycar)
            print "---- Adding car : ", web.data()
            print "---- Cars added : ", cars
            return web.data()
        else:
            # When posting, no id should be provided
            raise web.badrequest()

    #--------------------------------------------
    # Handle HTTP PUT
    def PUT(self, car_id=None):
        if car_id is None:
            # You can not update a car if the id does not exist
            raise web.badrequest()
        else:
            print "---- Updating car with id = ", car_id
            # Get car from request
            mycar = json.loads(web.data())
            for car in cars:
                if int(car['id']) == int(car_id):
                    car['color'] = mycar['color']
                    car['model'] = mycar['model']
            print "Available cars : ", cars
            return web.data()

    #--------------------------------------------
    # Handle HTTP DELETE
    def DELETE(self, car_id=None):
        # You can not delete a car if the id does not exist
        if car_id is None:
            raise web.badrequest()
        else:
            print "---- Deleting car with id = ", car_id
            # Remove the matching car
            for car in cars:
                if int(car['id']) == int(car_id):
                    cars.remove(car)
            print "Available cars : ", cars
            return json.dumps(cars)

#--------------------------------------------
# Start web server
app = web.application(urls, globals())

# Start script
if __name__ == "__main__":
    app.run()
Python REST API client
Consuming a web service is not hard, we just need to send the appropriate HTTP requests. We can easily do that using a browser or the curl command. In Python, we can use the built in libraries to invoke HTTP, however there is an easier way. We are going to use Kenneth Reitz's requests library, which is a wrapper around Python's built in HTTP support.
Request library
The requests library by Kenneth Reitz allows us to invoke HTTP commands like never before. Here is a quick tutorial.
- If you do not have it installed, you can do so by issuing the command
sudo easy_install requests
import requests
r = requests.get('')
r = requests.post('', data = json)
r = requests.put('hostname.com/cars/1', data = {'key': 'value'})
r = requests.delete('hostname.com/cars/2')
r = requests.head('')
r = requests.options('')
r = requests.get('', params={'key1': 'value1', 'key2': 'value2'})
print r.text, r.encoding, r.headers, r.json, r.status_code
That is more than enough to implement our own client. The code snippet below consumes the web service that we implemented earlier:
# Import the requests library
import requests

# Prints a few response parameters
def printResponse():
    print "Status code : ", r.status_code
    print "Encoding : ", r.encoding
    print "JSON : ", r.json()
    print "Content type : ", r.headers["Content-Type"]
    print ""

# Define 2 car objects in json format
car1 = {"id": 1, "model": "Honda", "color": "White"}
car2 = {"id": 2, "model": "Ford", "color": "Black"}

# HTTP POST: Add first car
r = requests.post('', json=car1)
print "--- Adding car 1\n"
printResponse()

# Add second car
r = requests.post('', json=car2)
print "--- Adding car 2\n"
printResponse()

# HTTP GET: get all cars
r = requests.get("")
print "--- Retrieving all cars\n"
printResponse()

# Print retrieved cars
for car in r.json():
    print('{} {} {}\n'.format(car['id'], car['model'], car['color']))

# GET the second car
r = requests.get("")
print "--- Retrieving car 2\n"
printResponse()

# Print retrieved car
print('\n{} {} {}\n'.format(r.json()['id'], r.json()['model'], r.json()['color']))

# PUT to update first car
car3 = {"id": 1, "model": "New Honda", "color": "New Black"}
r = requests.put("", json=car3)
print "--- Updating car 1\n"
printResponse()

# GET the first car to see if it was updated
r = requests.get("")
print "--- Retrieving car 1\n"
printResponse()

# Print retrieved car
print('{} {} {}\n'.format(r.json()['id'], r.json()['model'], r.json()['color']))

# Delete first car
r = requests.delete("")
print "--- Deleting car 1\n"
printResponse()

# HTTP GET: get all cars
r = requests.get("")
print "--- Retrieving all cars\n"
printResponse()
Testing a web service
To debug a web service, a web browser may be used, but it is not easy to generate all the HTTP methods from a browser. There are dedicated tools for working with web services that one can try (ex. Postman). A simpler and more flexible solution is to use the curl command. Here is a list of curl commands to test the web service implemented earlier.
# Add car 1
curl -X POST -H "Content-Type: application/json" -d '{"id": 1, "model": "Honda", "color": "White" }' -i
# Add car 2
curl -X POST -H "Content-Type: application/json" -d '{"id": 2, "model": "Ford", "color": "Black" }' -i
# Retrieve all cars
curl -X GET -H "Content-Type: application/json" -i
# Get car 2
curl -X GET -H "Content-Type: application/json" -i
# Update car 2
curl -X PUT -H "Content-Type: application/json" -d '{"id": 2, "model": "New Ford", "color": "New Black" }' -i
# Delete car 2
curl -X DELETE
# Retrieve all cars
curl -X GET -H "Content-Type: application/json" -i
Securing a web service
This is a complex and very important topic; however, it is beyond the scope of this article. Users connecting to a private web service must be authenticated before permission is granted. If you want to extend the code in this post to support authentication, you may check the following:
- Secure the web service. Check out this article
- Use authentication in the requests library. Check out this article
Summary
In this Python REST API tutorial, we discussed how the HTTP protocol is used to exchange data between computers. We also described the format of an HTTP URL. Under the hood, HTTP uses requests to send data and responses to retrieve data. Web services can utilize HTTP by exposing resources via API end points. The main theme of this article was REST, a popular web service design style. We also implemented an example RESTful Python web service and client. We ended the article by suggesting how to test and secure a web service.
References
Check the following articles for more information about REST API design and implementation in Python.
- Python requests library
- RESTful API design
- API integration in python
- Designing a RESTful API with Python and Flask
- Django REST Framework
- Python API tutorial
That is all for today. Thanks for visiting. Please leave your comments below.
http://www.8bitavenue.com/2017/03/python-rest-api-example/
- NAME
- SYNOPSIS
- DESCRIPTION
- SUBCLASSES
- CONSTRUCTOR
- METHODS
- Custom observer aspects
- Data Concurrency
- Internal Methods
- The Object Cache
- SEE ALSO
NAME
UR::Context - Manage the current state of the application
SYNOPSIS
use AppNamespace;
my $obj = AppNamespace::SomeClass->get(id => 1234);
$obj->some_property('I am changed');
UR::Context->get_current->rollback; # some_property reverts to its original value
$obj->other_property('Now, I am changed');
UR::Context->commit; # other_property now permanently has that value
DESCRIPTION
The main application code will rarely interact with UR::Context objects directly, except for the commit and rollback methods. It manages the mappings between an application's classes, object cache, and external data sources.
SUBCLASSES
UR::Context is an abstract class. When an application starts up, the system creates a handful of Contexts that logically exist within one another:
- 1. UR::Context::Root - A context to represent all the data reachable in the application's namespace. It connects the application to external data sources.
- 2. UR::Context::Process - A context to represent the state of data within the currently running application. It handles the transfer of data to and from the Root context, through the object cache, on behalf of the application code.
- 3. UR::Context::Transaction - A context to represent an in-memory transaction as a diff of the object cache. The Transaction keeps a list of changes to objects and is able to revert those changes with rollback(), or apply them to the underlying context with commit().
CONSTRUCTOR
- begin
my $trans = UR::Context::Transaction->begin();
UR::Context::Transaction instances are created through begin().
A UR::Context::Root and UR::Context::Process context will be created for you when the application initializes. Additional instances of these classes are not usually instantiated.
METHODS
Most of the methods below can be called as either a class or object method of UR::Context. If called as a class method, they will operate on the current context.
- get_current
my $context = UR::Context::get_current();
Returns the UR::Context instance of the most recently created Context. Can be called as a class or object method.
- query_underlying_context
my $should_load = $context->query_underlying_context();
$context->query_underlying_context(1);
A property of the Context that sets the default value of the $should_load flag inside get_objects_for_class_and_rule, as described below. Initially, its value is undef, meaning that during a get(), the Context will query the underlying data sources only if this query has not been done before. Setting this property to 0 will make the Context never query data sources, meaning that the only objects retrievable are those already in memory. Setting the property to 1 means that every query will hit the data sources, even if the query has been done before.
- get_objects_for_class_and_rule
@objs = $context->get_objects_for_class_and_rule( $class_name, $boolexpr, $should_load, $should_return_iterator );
This is the method that serves as the main entry point to the Context behind the get() and is_loaded() methods of UR::Object, and the reload() method of UR::Context.

$class_name and $boolexpr are required arguments, and specify the target class by name and the rule used to filter the objects the caller is interested in.

$should_load is a flag indicating whether the Context should load objects satisfying the rule from external data sources. A true value means it should always ask the relevant data sources, even if the Context believes the requested data is in the object cache. A false but defined value means the Context should not ask the data sources for new data, but only return what is currently in the cache matching the rule. The value undef means the Context should use the value of its query_underlying_context property. If that is also undef, then it will use its own judgement about asking the data sources for new data, and will merge cached and external data as necessary to fulfill the request.

$should_return_iterator is a flag indicating whether this method should return the objects directly as a list, or an iterator function instead. If true, it returns a subref that returns one object each time it is called, and undef after the last matching object:

my $iter = $context->get_objects_for_class_and_rule(
    'MyClass', $rule, undef, 1
);
my @objs;
while (my $obj = $iter->()) {
    push @objs, $obj;
}
- has_changes
my $bool = $context->has_changes();
Returns true if any objects in the given Context's object cache (or the current Context if called as a class method) have any changes that haven't been saved to the underlying context.
- commit
UR::Context->commit();
Causes all objects with changes to save their changes back to the underlying context. If the current context is a UR::Context::Transaction, then the changes will be applied to whatever Context the transaction is a part of. If the current context is a UR::Context::Process context, then commit() pushes the changes to the underlying UR::Context::Root context, meaning that those changes will be applied to the relevant data sources.

In the usual case, where no transactions are in play and all data sources are RDBMS databases, calling commit() will cause the program to begin issuing SQL against the databases to update changed objects, insert rows for newly created objects, and delete rows for deleted objects, as part of an SQL transaction. If all the changes apply cleanly, it will do an SQL commit, or a rollback if not.
commit() returns true if all the changes have been safely transferred to the underlying context, false if there were problems.
- rollback
UR::Context->rollback();
Causes all objects' changes for the current transaction to be reversed. If the current context is a UR::Context::Transaction, then the transactional properties of those objects will be reverted to the values they had when the transaction started. Outside of a transaction, object properties will be reverted to their values when they were loaded from the underlying data source. rollback() will also ask all the underlying databases to rollback.
- clear_cache
UR::Context->clear_cache();
Asks the current context to remove all non-infrastructural data from its object cache. This method will fail and return false if any object has changes.
- resolve_data_source_for_object
my $ds = $obj->resolve_data_source_for_object();
For the given $obj object, return the UR::DataSource instance that object was loaded from or would be saved to. If objects of that class do not have a data source, then it will return undef.
- resolve_data_sources_for_class_meta_and_rule
my @ds = $context->resolve_data_sources_for_class_meta_and_rule($class_obj, $boolexpr);
For the given class metaobject and boolean expression (rule), return the list of data sources that will need to be queried in order to return the objects matching the rule. In most cases, only one data source will be returned.
- infer_property_value_from_rule
my $value = $context->infer_property_value_from_rule($property_name, $boolexpr);
For the given boolean expression (rule), and a property name that is not mentioned in the rule but is a property of the class the rule is against, return the value that property must logically have.
For example, if this object is the only TestClass object where foo has the value 'bar', it can infer that the TestClass property baz must have the value 'blah' in the current context.

my $obj = TestClass->create(id => 1, foo => 'bar', baz => 'blah');
my $rule = UR::BoolExpr->resolve('TestClass', foo => 'bar');
my $val = $context->infer_property_value_from_rule('baz', $rule);
# $val is now 'blah'
- object_cache_size_highwater
UR::Context->object_cache_size_highwater(5000);
my $highwater = UR::Context->object_cache_size_highwater();
Set or get the value of the Context's object cache pruning high water mark. The object cache pruner will be run during the next get() if the cache contains more than this number of prunable objects. See the "Object Cache Pruner" section below for more information.
- object_cache_size_lowwater
UR::Context->object_cache_size_lowwater(5000);
my $lowwater = UR::Context->object_cache_size_lowwater();
Set or get the value of the Context's object cache pruning low water mark. The object cache pruner will stop when the number of prunable objects falls below this number.
- prune_object_cache
UR::Context->prune_object_cache();
Manually run the object cache pruner.
- reload
UR::Context->reload($object);
UR::Context->reload('Some::Class', 'property_name', value);
Ask the context to load an object's data from an underlying Context, even if the object is already cached. With a single parameter, it will use that object's ID parameters as the basis for querying the data source. reload will also accept a class name and a list of key/value parameters, the same as get.
- _light_cache
UR::Context->_light_cache(1);
Turn on or off the light caching flag. Light caching alters the behavior of the object cache in that all object references in the cache are made weak by Scalar::Util::weaken(). This means that the application code must keep hold of any object references it wants to keep alive. Light caching defaults to being off, and must be explicitly turned on with this method.
Custom observer aspects
UR::Context sends signals for observers watching for some non-standard aspects.
- precommit
After commit() has been called, but before any changes are saved to the data sources. The only parameters to the Observer's callback are the Context object and the aspect ("precommit").
- commit
After commit() has been called, and after an attempt has been made to save the changes to the data sources. The parameters to the callback are the Context object, the aspect ("commit"), and a boolean value indicating whether the commit succeeded or not.
- prerollback
After rollback() has been called, but before any object state is reverted.
- rollback
After rollback() has been called, and after an attempt has been made to revert the state of all the loaded objects. The parameters to the callback are the Context object, the aspect ("rollback"), and a boolean value indicating whether the rollback succeeded or not.
Data Concurrency
Currently, the Context is optimistic about data concurrency, meaning that it does very little to prevent clobbering data in underlying Contexts during a commit() if other processes have changed an object's data after the Context has cached that object. For example, a database has an object with ID 1 and a property with value 'bob'. A program loads this object and changes the property to 'fred', but does not yet commit(). Meanwhile, another program loads the same object, changes the value to 'joe' and does commit(). Finally the first program calls commit(). The final value in the database will be 'fred', and no exceptions will be raised.
As part of the caching behavior, the Context keeps a record of what the object's state was as it was loaded from the underlying Context. This is how the Context knows which objects have been changed during commit().
If an already cached object's data is reloaded as part of some other query, data consistency of each property will be checked. If there are no conflicting changes, then any differences between the object's initial state and the current state in the underlying Context will be applied to the object's notion of what it thinks its initial state is.
In some future release, UR may support additional data concurrency methods such as pessimistic concurrency: check that the current state of all changed (or even all cached) objects in the underlying Context matches the initial state before committing changes downstream. Or allowing the object cache to operate in write-through mode for some or all classes.
Internal Methods
There are many methods in UR::Context meant to be used internally, but are worth documenting for anyone interested in the inner workings of the Context code.
- _create_import_iterator_for_underlying_context
$subref = $context->_create_import_iterator_for_underlying_context(
    $boolexpr, $data_source, $serial_number
);
$next_obj = $subref->();
This method is part of the object loading process, and is called by "get_objects_for_class_and_rule" when it is determined that the requested data does not exist in the object cache, and data should be brought in from another, underlying Context. Usually this means the data will be loaded from an external data source.
$boolexpr is the UR::BoolExpr rule, usually from the application code.

$data_source is the UR::DataSource that will be used to load data from.

$serial_number is used by the object cache pruner. Each object loaded through this iterator will have $serial_number in its __get_serial hashref key.
It works by first getting an iterator for the data source (the $db_iterator). It calls "_resolve_query_plan_for_ds_and_bxt" to find out how data is to be loaded and whether this request spans multiple data sources. It calls "__create_object_fabricator_for_loading_template" to get a list of closures to transform the primary data source's data into UR objects, and "_create_secondary_loading_closures" (if necessary) to get more closures that can load and join data from the primary to the secondary data source(s).
It returns a subref that works as an iterator, loading and returning objects one at a time from the underlying context into the current context. It returns undef when there are no more objects to return.
The returned iterator works by first asking the $db_iterator for the next row of data as a listref. It then asks the secondary data source joiners whether there is any matching data, and calls the object fabricator closures to convert the data source data into UR objects. If any of the objects require subclassing, then additional importing iterators are created to handle that. Finally, the objects matching the rule are returned to the caller one at a time.
- _resolve_query_plan_for_ds_and_bxt
my $query_plan = $context->_resolve_query_plan_for_ds_and_bxt(
    $data_source, $boolexpr_tmpl
);
my ($query_plan, @addl_info) = $context->_resolve_query_plan_for_ds_and_bxt(
    $data_source, $boolexpr_tmpl
);
When a request is made that will hit one or more data sources, _resolve_query_plan_for_ds_and_bxt is used to call a method of the same name on the data source. It returns a hashref used by many other parts of the object loading system, which describes what data source to use, how to query that data source to get the objects, how to use the raw data returned by the data source to construct objects, and how to resolve any delegated properties that are a part of the rule.

$data_source is a UR::DataSource object ID.

$boolexpr_tmpl is a UR::BoolExpr::Template object.

In the common case, the query will only use one data source, and this method returns that data directly. But if the primary data source sets the joins_across_data_sources key on the data structure, as may be the case when a rule involves a delegated property to a class that uses a different data source, then this method returns an additional list of data. For each additional data source needed to resolve the query, this list will have three items:
The secondary data source ID
A listref of delegated UR::Object::Property objects joining the primary data source to this secondary data source.
A UR::BoolExpr::Template rule template applicable against the secondary data source
- _create_secondary_rule_from_primary
my $new_rule = $context->_create_secondary_rule_from_primary( $primary_rule, $delegated_properties, $secondary_rule_tmpl );
When resolving a request that requires multiple data sources, this method is used to construct a rule applicable against the secondary data source.
$primary_rule is the UR::BoolExpr rule used in the original query.
$delegated_properties is a listref of UR::Object::Property objects, as returned by "_resolve_query_plan_for_ds_and_bxt()", linking the primary to the secondary data source.
$secondary_rule_tmpl is the rule template, also as returned by "_resolve_query_plan_for_ds_and_bxt()".
- _create_secondary_loading_closures
my($obj_importers, $joiners) = $context->_create_secondary_loading_closures( $primary_rule_tmpl, @addl_info);
When resolving a request that spans multiple data sources, this method is used to construct two lists of subrefs to aid in the request.
$primary_rule_tmpl is the UR::BoolExpr::Template rule template made from the original rule.
@addl_info is the same list returned by "_resolve_query_plan_for_ds_and_bxt". For each secondary data source, there will be one item, in the same order, in each of the two listrefs that are returned.
$obj_importers is a listref of subrefs used as object importers. They transform the raw data returned by the data sources into UR objects.
$joiners is also a listref of subrefs. These closures know how the properties link the primary data source data to the secondary data source. They take the raw data from the primary data source, load the next row of data from the secondary data source, and return the secondary data that successfully joins to the primary data. You can think of these closures as performing the same work as an SQL join between data in different data sources.
- _cache_is_complete_for_class_and_normalized_rule
($is_cache_complete, $objects_listref) = $context->_cache_is_complete_for_class_and_normalized_rule( $class_name, $boolexpr );
This method is part of the object loading process, and is called by "get_objects_for_class_and_rule" to determine if the objects requested by the UR::BoolExpr $boolexpr will be found entirely in the object cache. If the answer is yes, then $is_cache_complete will be true.
$objects_listref may or may not contain objects matching the rule from the cache. If that list is not returned, then "get_objects_for_class_and_rule" does additional work to locate the matching objects itself via "_get_objects_for_class_and_rule_from_cache".
It does its magic by looking at the $boolexpr and loosely matching it against the query cache, $UR::Context::all_params_loaded.
- _get_objects_for_class_and_rule_from_cache
@objects = $context->_get_objects_for_class_and_rule_from_cache( $class_name, $boolexpr );
This method is called by "get_objects_for_class_and_rule" when _cache_is_complete_for_class_and_normalized_rule says the requested objects do exist in the cache, but did not return those items directly.
The UR::BoolExpr $boolexpr contains hints about how the matching data is likely to be found. Its _context_query_strategy key will contain one of four values:
- 1. all
This rule is against a class with no filters, meaning it should return every member of that class. It calls $class->all_objects_loaded to extract all objects of that class in the object cache.
- 2. id
This rule is against a class and filters by only a single ID, or a list of IDs. The request is fulfilled by plucking the matching objects right out of the object cache.
- 3. index
This rule is against one or more non-id properties. An index is built mapping the filtered properties and their values, and the cached objects which have those values. The request is fulfilled by using the index to find objects matching the filter.
- 4. set intersection
This is a group-by rule and will return a ::Set object.
- _loading_was_done_before_with_a_superset_of_this_params_hashref
$bool = $context->_loading_was_done_before_with_a_superset_of_this_params_hashref( $class_name, $params_hashref );
This method is used by "_cache_is_complete_for_class_and_normalized_rule" to determine if the requested data was asked for previously, either from a get() asking for a superset of the current request, or from a request on a parent class of the current request.
For example, if a get() is done on a class with one param:
@objs = ParentClass->get(param_1 => 'foo');
And then later, another request is done with an additional param:
@objs2 = ParentClass->get(param_1 => 'foo', param_2 => 'bar');
Then the first request must have returned all the data that could have possibly satisfied the second request, and so the system will not issue a query against the data source.
As another example, given those two previously done queries, if another get() is done on a class that inherits from ParentClass
@objs3 = ChildClass->get(param_1 => 'foo');
again, the first request has already loaded all the relevant data, and therefore the system won't query the data source.
- _sync_databases
$bool = $context->_sync_databases();
Starts the process of committing all the Context's changes to the external data sources. _sync_databases() is the workhorse behind "commit".
First, it finds all objects with changes and checks those changed objects for validity with $obj->invalid. If any objects are found invalid, then _sync_databases() will fail. Finally, it bins all the changed objects by data source, and asks each data source to save those objects' changes. It returns true if all the data sources were able to save the changes, false otherwise.
- _reverse_all_changes
$bool = $context->_reverse_all_changes();
_reverse_all_changes() is the workhorse behind "rollback".
For each class, it goes through each object of that class. If the object is a UR::Object::Ghost, representing a deleted object, it converts the ghost back to the live version of the object. For other classes, it makes a list of properties that have changed since they were loaded (recorded in the db_committed hash key in the object), and reverts those changes by using each property's accessor method.
The Object Cache
The object cache is integral to the way the Context works, and is also the main difference between UR and other ORMs. Other systems do no caching and require the calling application to hold references to any objects it is interested in. Say one part of the app loads data from the database and then gives up its references; if another part of the app later does the same or a similar query, it will have to ask the database again.
UR handles caching of classes, objects and queries to avoid asking the data sources for data it has loaded previously. The object cache is essentially a software transaction that sits above whatever database transaction is active. After objects are loaded, any changes, creations or deletions exist only in the object cache, and are not saved to the underlying data sources until the application explicitly requests a commit or rollback.
Objects are returned to the application only after they are inserted into the object cache. This means that if disconnected parts of the application are returned objects with the same class and ID, they will hold references to the exact same object, and changes made in one part will be visible to all other parts of the app. An unchanged object can be removed from the object cache by calling its unload() method.
Since changes to the underlying data sources are effectively delayed, it is possible that the application's notion of an object's current state does not match the data stored in the data source. If this is a problem, you can mitigate it by using the load() class or object method to fetch the latest data. Another issue to be aware of: if multiple programs are likely to commit conflicting changes to the same data, then whichever applies its changes last will win, so some kind of external locking needs to be applied. Finally, if two programs attempt to insert data with the same ID columns into an RDBMS table, the second application's commit will fail, since that will likely violate a constraint.
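To make the identity semantics concrete, here is a small sketch (class and property names are hypothetical, and the class is defined in-memory so no data source is involved) showing that two independent lookups of the same ID hand back the very same cached object:

```perl
use UR;

# Hypothetical in-memory class for illustration.
UR::Object::Type->define(
    class_name => 'Acme::Widget',
    id_by      => 'widget_id',
    has        => [ 'color' ],
);

my $w = Acme::Widget->create(widget_id => 1, color => 'red');

# A later, unrelated part of the program asks for the same ID...
my $same = Acme::Widget->get(widget_id => 1);

# ...and receives a reference to the same object from the cache.
print "same object\n" if $w == $same;

# A change made through one reference is visible through the other.
$w->color('blue');
print $same->color, "\n";
```
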
Object Change Tracking
As objects are loaded from their data sources, their properties are initialized with the data from the query, and a copy of the same data is stored in the object in its
db_committed hash key. Anyone can ask the object for a list of its changes by calling
$obj->changed. Internally, changed() goes through all the object's properties, comparing the current values in the object's hash with the same keys under 'db_committed'.
Objects created through the
create() class method have no 'db_committed', and so the object knows it is a newly created object in this context.
Every time an object is retrieved with get() or through an iterator, it is assigned a serial number in its
__get_serial hash key from the
$UR::Context::GET_SERIAL counter. This number is unique and increases with each get(), and is used by the "Object Cache Pruner" to expire the least recently requested data.
Objects also track what parameters have been used to get() them in the hash
$obj->{__load}. This is a copy of the data in
$UR::Context::all_params_loaded->{$template_id}. For each rule ID, it will have a count of the number of times that rule was used in a get().
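A sketch of how that change tracking surfaces through the API, using the changed() accessor described above (the class is hypothetical and defined in-memory, so there is no db_committed snapshot here; a newly created object reports itself as entirely new):

```perl
use UR;

# Hypothetical in-memory class for illustration.
UR::Object::Type->define(
    class_name => 'Acme::Order',
    id_by      => 'order_id',
    has        => [ 'status' ],
);

# Created objects have no 'db_committed' snapshot, so the Context
# treats the whole object as a pending insert.
my $order = Acme::Order->create(order_id => 7, status => 'open');

# Property changes are tracked against the snapshot taken when the
# object was loaded (or against nothing, for created objects).
$order->status('shipped');

# changed() returns a list describing the object's pending changes.
my @changes = $order->changed;
printf "%d pending change(s)\n", scalar(@changes);
```
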
Deleted Objects and Ghosts
Calling delete() on an object is tracked in a different way. First, a new object is created, called a ghost. Ghost classes exist for every class in the application and are subclasses of UR::Object::Ghost. For example, the ghost class for MyClass is MyClass::Ghost. This ghost object is initialized with the data from the original object. The original object is removed from the object cache, and is reblessed into the UR::DeletedRef class. Any attempt to interact with the object further will raise an exception.
Ghost objects are not included in a get() request on the regular class, though the app can ask for them specifically using
MyClass::Ghost->get(%params).
Ghost classes do not have ghost classes themselves. Calling create() or delete() on a Ghost class or object will raise an exception. Calling other methods on the Ghost object that exist on the original, live class will delegate over to the live class's method.
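Putting the ghost mechanics together in a short fragment, using the MyClass/MyClass::Ghost naming from the text (this is an illustrative sketch, not runnable on its own, since it assumes $obj was previously loaded from a data source):

```perl
# Assuming $obj is a MyClass instance loaded from a data source:
my $id = $obj->id;
$obj->delete;       # $obj is now reblessed into UR::DeletedRef;
                    # any further method call on it raises an exception

# A regular get() on the live class no longer returns it...
my @live = MyClass->get(id => $id);

# ...but the ghost class does, initialized from the original's data,
# until the deletion is committed to the data source.
my $ghost = MyClass::Ghost->get(id => $id);
```
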
all_objects_are_loaded
$UR::Context::all_objects_are_loaded is a hashref keyed by class names. If the value is true, then "_cache_is_complete_for_class_and_normalized_rule" knows that all the instances of that class exist in the object cache, and it can avoid asking the underlying context/datasource for that class' data.
all_params_loaded
$UR::Context::all_params_loaded is a two-level hashref. The first level is class names. The second level is rule (UR::BoolExpr) IDs. The values are how many times that class and rule have been involved in a get(). This data is used by "_loading_was_done_before_with_a_superset_of_this_params_hashref" to determine if the requested data will be found in the object cache for non-id queries.
all_objects_loaded
$UR::Context::all_objects_loaded is a two-level hashref. The first level is class names. The second level is object IDs. Every time an object is created, defined or loaded from an underlying context, it is inserted into the
all_objects_loaded hash. For queries involving only ID properties, the Context can retrieve them directly out of the cache if they appear there.
The entire cache can be purged of non-infrastructural objects by calling "clear_cache".
Object Cache Pruner
The default Context behavior is to cache all objects it knows about for the entire life of the process. For programs that churn through large amounts of data, or live for a long time, this is probably not what you want.
The Context has two settings to loosely control the size of the object cache: "object_cache_size_highwater" and "object_cache_size_lowwater". As objects are created and loaded, a count of uncachable objects is kept in $UR::Context::all_objects_cache_size. The first part of "get_objects_for_class_and_rule" checks to see if the current size is greater than the highwater setting, and calls "prune_object_cache" if so.
prune_object_cache() works by comparing the value $UR::Context::GET_SERIAL had the last time it ran against its current value, and picking a guide for removing objects: the target serial, set at 10% of the difference between the last serial and the current value.
It then executes a loop for as long as $UR::Context::all_objects_cache_size is greater than the lowwater setting. For each uncachable object, if its __get_serial is less than the target serial, it is weakened from any UR::Object::Indexes it may be a member of, and then weakened from the main object cache, $UR::Context::all_objects_loaded.
The application may lock an object in the cache by calling __strengthen__ on it. Likewise, the app may hint to the pruner to throw away an object as soon as possible by calling __weaken__.
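A sketch of wiring up the pruner settings described above (the threshold values are arbitrary examples, not recommendations):

```perl
use UR;

my $context = UR::Context->current;

# When the count of cached objects passes the highwater mark,
# the pruner runs until the count drops below the lowwater mark.
$context->object_cache_size_highwater(100_000);
$context->object_cache_size_lowwater(50_000);

# Objects you cannot afford to lose can be pinned in the cache:
#   $important_obj->__strengthen__;
# and disposable ones marked for early eviction:
#   $bulk_obj->__weaken__;
```
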
SEE ALSO
UR::Context::Root, UR::Context::Process, UR::Object, UR::DataSource, UR::Object::Ghost, UR::Observer
Source: https://metacpan.org/pod/UR::Context
I'm new to containerisation, and have never actually used it in practice... or even used any of my developing skills in practice, since I'm new to this whole 'developing' thing. I think I've got a really basic understanding of containers, but I don't understand how you'd use Kubernetes and Docker together in a workflow?
I really should have made this a like 'Explain Like I'm 5' sorta thing.
Discussion (4)
Docker runs containers. Kubernetes orchestrates containers. Orchestration is how you stitch multiple containers together into a greater whole: for example, you can have several application containers (and scale out to add more application containers at runtime) talking to a database container. It's possible to do this with Docker alone, but prohibitively complicated at the low level of individual containers. The language of "pods", "services", "deployments", and so on that Kubernetes employs lets you operate at the level of multiple interacting containers instead.
Thanks for the reply! That really clears up everything - I thought that Kubernetes ran containers like Docker!
To add:
You could manage a small cluster of Docker containers on one host machine with a good memory and some facility with Bash. Where orchestration really comes into its own is when you need to scale across multiple hosts.
There are other technologies that do overlap with Kubernetes, notably the combination of Docker Swarm + Docker Compose. Kubernetes has support for more complex workflows with stuff like jobs which run once and exit, while Compose is purely about describing what your cluster looks like and then making that happen.
I know this may be a little bit more than ELI5, but if you want to dive further: "containers" don't really exist.
So basically a "container" is a combination of cgroup restrictions and namespaces; kubernetes adds containers to the same namespace so that they can talk to each other as if they were on the same localhost.
Also, kubernetes is testing support for rkt which is kind of a docker competitor on its own.
Now, kubernetes orchestrates containers, this basically means it handles them through some strategies (deployments, daemon sets, replica sets, etc).
Source: https://dev.to/ghost/how-do-containers-work-when-youre-using-both-docker-and-kubernetes-5o6
Parallax Forums > General Forums > BASIC Stamp > CR, meaning
ShortCiruitGeorge
11-27-2006, 12:33 PM
Trying to find out what the CR, command means and why do we use it at times twice? CR,CR
Go easy I'm just starting out.
Kevin Wood
11-27-2006, 12:43 PM
CR stands for "Carriage Return", a term left over from typewriters.
It moves the cursor to the next line, so using several in a row moves the cursor several lines.
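For example, in a PBASIC DEBUG statement (untested sketch):

```basic
' One CR ends the current line; two in a row leave a blank line.
DEBUG "First line", CR
DEBUG "This line is followed by a blank line", CR, CR
DEBUG "Last line"
```
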
Here is more about the history: en.wikipedia.org/wiki/Carriage_return
ShortCiruitGeorge
11-27-2006, 07:54 PM
Thanks, sometimes it's the little things that drive you nuts...
allanlane5
11-27-2006, 09:54 PM
In the 'olden days', when we used Teletype writers (TTY), the CR (Carriage Return) (CHR(13)) ('\r') would move the 'carriage' back to the left, and a LF (Line Feed) (CHR(10)) ('\n') would advance the roller one line. Thus the popularity of CR-LF to end a line in MSDOS text files.
Unix got 'smarter', and so automatically gave you a CR-LF if you only specified an LF.
Source: http://forums.parallax.com/archive/index.php/t-90062.html
Code:
#include <std_disclaimer.h>
/*
 * Your warranty is now void.
 * You are making these modifications to your phone;
 * I accept no responsibility for the consequences, if any, of flashing this kernel.
 */
New FXP releases include boot.img in the update zip, which will flash over Riches kernel
Flash this kernel again after installing a FXP update
F.A.Qs
When are new versions released?
I try to release updates every Monday
My device rebooted or crashed, help
Get me /proc/last_kmsg on pastebin or other appropriate site.
Please be aware I may already be informed on some issues.
Do I need to wipe anything along with flashing this?
It is recommended but not required to wipe cache and dalvik-cache in recovery
Does this kernel have X, Y mod?
Research, check features, changelog & commits
Code:
Riches Kernel 0.6.2
Compiled using Linaro 4.8 2013/10
Dual Recovery (CWM & TWRP) (instructions)
Compiled using -O3
CPU Governors: Smartassv2, minmax, OndemandX
I/O schedulers: SIO (default)
1.53GHz overclock, all devices
GPU OC: 200MHz (Tapioca, Mesona)
GPU OC: 355MHz (JLO)
Dynamic File Sync 1.2
USB: Fast Charge (instructions)
Frandom Support (I recommend CrossBreeder to make use of this)
Other tweaks
Code:
INSTRUCTIONS
Flash the kernel using fastboot:
fastboot flash boot boot.img
fastboot reboot
Go to Settings > About Phone > Kernel Version to check you have the kernel installed

DOWNLOAD
Here

Codenames
Tipo - Tapioca
Miro - Mesona
J - JLO
E - Nanhu
If you enjoy my work and want to give me incentive to continue working on it please donate.
Paypal
Last edited by joebonrichie; 11th December 2013 at 06:08 PM. Reason: updated for release
Source: http://forum.xda-developers.com/xperia-j-e/orig-development/cm10-miro-tipo-j-e-riches-kernel-t2507787
Slide.SDA
Slide Simple Document Archive subproject.
Jakarta-Slide is designed and built to be a library/component of something larger. In and of itself, jakarta-slide is not useful to most people that are 'users'. In addition, jakarta-slide is rather large, complex, and at times rather abstract, making it difficult to implement a solution based on jakarta-slide.
Slide.SDA is a suggested subproject whereby an application is built on top of Jakarta-Slide that shows its immediate usefulness as well as be an example application.
Simple Document Archive - Why 'Document Archive' instead of 'Document Management'? Because if you get 5 people to talk about Document Management, you will get 7 different versions of what 'Document Management' entails, and it will usually entail some type of workflow/signoff/checkin/checkout that most base usecases do not need. Document Archive, in my opinion, consists of:
- Storing a binary file (a TIFF or PDF is the often use-case).
- Storing searchable data related to that file (such as names, client id's, etc).
- A graphical UI for a user to search for that file and view it, retrieve it, etc.
Some more detailed back-end explanations include:
- Digital preservation - keep the data in such a fashion that 10 years from now it could be easily retrieved (i.e. file-based storage in XML metadata and binary file).
- Allow for archival off and retrieval from CD/DVD.
- Allow for cross-searching of multiple 'stores' (usecase: 10 CD's of data).
Sample Usecase: Fax Images
This usecase is based on a company receiving 2,000 faxes a day - enough that you would want a better way to manage and archive the data. Whether it's Sarb-Ox, HIPAA, or some other regulation, you are required to keep every fax you ever get and keep some data related to the fax for easy retrieval. Also, you have an Accounting Department, a Marketing Department, and a couple others that receive a good portion of these faxes and, rather than handing paper around, would like a better way to search and view faxes they get (no workflow, just search and view).
Since most fax software (assume hylafax, Rightfax, Telcom, or a dozen other ones) allows for exporting a TIFF image format and some type of CSV or similar datafile, we will assume going forward that there are TIFF images and DATA we can already use.
DATA:
- faxtonum it was called into (or DID number, or other identifier saying 'go to this department' or 'to this person').
- company that made the fax based on caller-id or DB/file-lookup based on faxreceivenum.
- receivedate is the date the fax came in.
Now, going back to the use-case, you want to keep all this DATA with each fax TIFF image. Hmm... what free software out there can do this already, with an open, documented protocol, where neither the protocol nor the software is OS-specific, and will keep this data in a non-proprietary format so that 10 years from now I can get back to it? Jakarta-Slide!
Implementation ideas:
- File-based XML for metadata repository; use Lucene to speed up searching.
- Allow for a 'FAX' Namespace to hold that related data as well as the default 'DAV' for the DASL searching.
- File-based object storage for the files.
- Archive a 'store' by Month/Year based on receivedate (JAN04, FEB04, etc) to CD. Maybe sub-CD's for each department if they need their own. 'archive store' = readonly.
- Allow for searching across multiple stores that are mounted in Slide as a single group (all faxes) or by department (Accounting faxes).
- Optional: versioning (not useful in this usecase, but would be for others).
User requirements for SDA - this is where it really matters. Allow the user to utilize Jakarta Slide in a way that makes sense.
- For new Namespaces like 'FAX', require a DTD or Schema if you want to restrict/enforce attributes/properties.
- Export based on range of data to another store (for archive?).
- SDA default setup has good default roles, such as 'searcher', 'writer', 'admin' that can be mapped to Realm-based security.
- Default UI for a 'searcher' user via web that is useful and adaptive to namespaces (allows to search in 'FAX' namespace based on 'receivedate' attribute defined by DTD/Schema). Also allow user to quickly identify search range, whether single store or a defined group.
- Allow for easy addition of existing archived stores (i.e. on CD/DVD, or on remote or local file systems).
- Allow for easy grouping of stores for cross-search context.
User Responsibilities:
- User is responsible for 'injecting' the binary file and related data into SDA. SDA simply allows them to do so and handles storing and searching for documents. Recommend WebDAV protocol, WCK for Java, sample in .NET, etc.
- User is responsible for backup/restore of the filesystem where SDA archives data.
- User is responsible for setting up Realm security (tomcat/jboss) and mapping to roles. However, SDA should have some examples for common use-cases (ActiveDirectory, Tomcat configured users, etc.).
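As an illustration of the "injecting via WebDAV" responsibility above, here is a hedged Perl sketch using the CPAN HTTP::DAV client. The server URL, credentials, file paths, and the 'FAX' property names are all hypothetical, and the snippet is untested; it only shows the general shape of a PUT followed by a PROPPATCH of searchable metadata:

```perl
use HTTP::DAV;

# Hypothetical Slide/SDA endpoint and credentials.
my $url = 'http://slide.example.com/sda/faxes/';

my $dav = HTTP::DAV->new;
$dav->credentials(-user => 'archiver', -pass => 'secret', -url => $url);
$dav->open(-url => $url) or die $dav->message;

# Upload the fax TIFF into the archive...
$dav->put(
    -local => '/spool/fax/20040117-0001.tif',
    -url   => $url . '20040117-0001.tif',
) or die $dav->message;

# ...then attach searchable metadata as a WebDAV property in a
# hypothetical 'FAX' namespace.
$dav->proppatch(
    -url       => $url . '20040117-0001.tif',
    -namespace => 'FAX',
    -propname  => 'receivedate',
    -propvalue => '2004-01-17',
) or die $dav->message;
```
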
Source: http://wiki.apache.org/jakarta-slide/Slide.SDA?highlight=ActiveDirectory
MIDI::Trans - Perl extension for quick and easy text->midi conversion.
use MIDI::Trans;
my $TransObj = MIDI::Trans->new( {
    'Delimiter' => '\s+',
    'Note'      => \&note,
    'Volume'    => sub { return(127); },
    'Duration'  => \&somesub,
    'Tempo'     => sub { ... }
} );
if($TransObj->trans( { 'File' => 'text.txt', 'Outfile' => 'out.mid' })) {
    print("success\n");
} else {
    my $error = $TransObj->error();
    die("ERR: $error\n");
}
sub note {
    # do something
    # return a value between 0 and 127,
    # or the string 'rest' (sans quotes) for a rest event
}
sub duration {
    # return some number of quarter notes
}
MIDI::Trans serves as a quick development foundation for text-to-midi conversion algorithms, utilizing MIDI::Simple for output. Using MIDI::Trans, you create callbacks for generating note, volume, duration and tempo values. As your corpus is read, these callbacks are used by MIDI::Trans to generate your midi score. MIDI::Trans is modelled after the text conversion aspects of TransMid, but designed to be more useful to a wider range of tasks, with less overhead.
If you're in a big hurry, and haven't any need for great control over the process, simply read the 'Plug and Play Usage' and 'CallBacks' sections below to get a jump and have your converter implemented in just a few minutes, with just a few statements.
A corpus can be defined as either a string, or text file. MIDI::Trans will then split that corpus into elements based on an element delimiter, provided via argument, and determine some attributes of the corpus based on other data, which can be supplied by the developer. The corpus is then processed, element by element through the use of CallBacks you specify. The normal flow of development looks like this:
Define Parameters For Conversion
Define Functions To Generate Note, Duration, Volume, Tempo from parameters and element values
Specify and process corpus
Create score
Write Output
... this section not yet complete...
None by default.
A MIDI::Trans converter can be written in as few as three statements:
use MIDI::Trans;
my $TransObj = MIDI::Trans->new( {
    'Tempo'            => 140,
    'VolumeCallBack'   => sub { return(127); },
    'NoteCallBack'     => sub { $cnt++; return $cnt % 4 ? 88 : 'rest'; },
    'DurationCallBack' => sub { return(16); },
});
$TransObj->trans({ 'File' => './test.txt', 'Outfile' => 'test.mid' });
Obviously, this isn't very functional, and the more compact we make our code, the less functionality there is available to us.
However, if your conversion process doesn't rely too heavily on controlling the act of conversion itself, this single method will do everything you need for the process of conversion.
Let's discuss what we've done here:
The new() method is called, which initializes the parameters for the converter. The object returned makes itself available to the callbacks, assuming that you've created the variable referencing the object in the same namespace as, or at least scoped as visible to, the callbacks' function definitions. The trans() method acts as a wrapper around the step-by-step process of converting the document. It doesn't give you the ability to control some of the information gathering aspects, nor does it let you handle more than one corpus with a single object, but what it lacks in functionality, it makes up for in ease.
trans HASHREF
Has one argument, which is required: either a reference to, or an anonymous, hash. This hash contains information required to perform the conversion. If any values have already been defined via the new() method, they do not have to be re-defined here. Some names are short-hand for the configuration keys of new(), marked with an asterisk (*); they are otherwise the same. trans() spawns a new instance of MIDI::Trans; this object must be used for attribute and configuration methods for the operation being performed with trans(). That is to say, if you are using trans(), you must also use the trans_obj() method (see below) to return the object operating. Returns true (1) on success, or sets the error() message and returns undef otherwise. The following keys are valid for the hash:

'File' - The file path to read as the corpus. This key is required.
'Outfile' - The file to save MIDI output to. Default value is './out.midi'
'Delimiter' - The element delimiter in the corpus. Default value is '\s+'
'Tempo' - The tempo to use for the score. You can specify either the 'Tempo' or 'TempoCallBack' key, but one must be specified. Will override the value of 'TempoCallBack'.
'TempoCallBack' - Subroutine reference, or anonymous sub block, that will return a tempo value. See the CallBacks section below.
'Volume'* - Subroutine reference, or anonymous sub block, that will return a valid volume value.
'Note'* - Subroutine reference, or anonymous sub block, that will return a valid note value.
'Duration'* - Subroutine reference, or anonymous sub block, that will return a valid duration value.

USAGE:

if( $TransObj->trans( {
        'File'     => './test.txt',
        'Volume'   => sub { ... },
        'Note'     => sub { ... },
        'Duration' => \&some_sub,
        'Tempo'    => 120
    } ) ) {
    # do something
} else {
    my $errmsg = $TransObj->error();
    die("$errmsg\n");
}

OR:

my %hash = (
    'File'     => './test.txt',
    'Volume'   => sub { ... },
    'Note'     => sub { ... },
    'Duration' => \&some_sub,
    'Tempo'    => 120
);
if( $TransObj->trans(\%hash) ) {
    ...
}

Both methods are equivalent. See the CallBacks section, below, for more information about the CallBacks.
Given that you need to have an active MIDI::Trans object to use the attribute and information methods, and that trans() creates a new MIDI::Trans object, the trans_obj() method has been provided to you for accessing the usually-needed methods.
trans_obj
Returns the blessed object being utilized by the trans() method. All methods are available to this object, but with data specific to the current trans() operation.

USAGE:

if( $TransObj->trans( { 'TempoCallBack' => \&tempo, ... } ) ) {
    ...
}

sub tempo {
    # a callback called from trans()
    my $cur_obj  = $TransObj->trans_obj();
    my $num_sent = $cur_obj->sentences();
    ...
}
If MIDI::Trans is like the skeleton for your conversion application, then the CallBacks you define act as the nervous system. The real logic lies in the combination of information and statistics generated by the corpus, your use of configurable options, and the callbacks you define.
CallBacks must be passed as either references or anonymous sub blocks. The following forms are all valid: (examples use the callback configuration methods)
$TransObj->volume_callback( sub { ... } );

sub some_sub { ... }
$TransObj->volume_callback(\&some_sub);

$hashRef->{'key'} = sub { ... };
$TransObj->volume_callback($hashRef->{'key'});

my $sub = sub { ... };
$TransObj->volume_callback($sub);
Most CallBacks are passed two arguments:
The Current Element
The Current Position
The Tempo CallBack is passed no arguments.
Each CallBack is expected to return a specific type and range of data as a return value. Each type is discussed here. Please note that these are just examples and in no way reflect the complexity or interaction available to you.
Volume CallBacks return a numeric value to represent the absolute volume of the current element, in a range of 0-127. Volume CallBacks are called once every element, after Note CallBacks.

Example:

sub VolCallBack {
    my $elem = shift;
    my $enum = shift;
    # in this callback, volume is determined by measuring the
    # length of the input, then comparing that against a
    # constant value, and using that comparison as a multiplier
    # against our maximum volume level.
    my $lpct = length($elem) / 24;
    $lpct = 1 if($lpct > 1);
    # here, we use the round() method supplied by MIDI::Trans
    my $value = $TransObj->round(127 * $lpct);
    return($value);
}
Note CallBacks return a scalar value to represent a note or rest event. The value of the event is either a number, in the case of a note, or the string 'rest', in the case of a rest event. For a note event, you must specify the absolute value of the note as an integer in the range of 0-127. For a rest event, simply return a string with the value 'rest'. Note CallBacks are called once every element. They are processed before Volume and Duration.

Example:

sub NoteCallBack {
    my $elem = shift;
    my $enum = shift;
    my $return;
    # here, if the corpus contains an element with the string
    # 'Eighty-Eight', then a note value of 88 will be returned,
    # rest otherwise.
    if($elem =~ /Eighty-Eight/) {
        $return = 88;
    } else {
        $return = 'rest';
    }
    return($return);
}
Duration CallBacks return a numeric value to represent the duration of the current event, in quarter notes. So the actual duration of the event, in seconds, is determined by the value of the qn_len() configuration method and the tempo returned by your Tempo CallBack. This method is called once every element, after all others.

Example:

  sub DurCallBack {
      my $elem = shift;
      my $enum = shift;
      return(length($elem));
  }
Tempo CallBacks return a numeric value to represent the number of quarter notes per minute to be used in the score. The actual 'tempo' supplied to MIDI::Simple is the result of the following equation:

  round($base_ms / $tempo);

Where round() is the included round() method, $base_ms is the value of the configuration attribute base_milliseconds(), and $tempo is the tempo returned by your Tempo CallBack. Tempo is called once per processing a corpus, before all other CallBacks. Please note that there are no arguments to this CallBack, as it is executed BEFORE any elements are processed.

Example:

  sub TempoCallBack {
      my $num_sents = $TransObj->sentences();
      my $num_words = $TransObj->words();
      # this CallBack utilizes Attribute Retrieval methods to
      # determine the number of words and sentences in the corpus,
      # then uses these values to form a percentage of a constant
      # maximum tempo.
      my $w_to_s_pct = $num_sents / $num_words;
      my $max_tempo = 200;
      my $tempo = $TransObj->round($max_tempo * $w_to_s_pct);
      return($tempo);
  }
Several Attributes of your corpus may be gleaned when read. This is controlled by the 'AllAttributes' configuration value, set by either new() or the configuration method all_attributes(). Currently, those attributes are:

  # of Sentences *
  # of Words *
  # of Elements

(Those marked by an asterisk can be turned off to reduce memory consumption.)
The Sentence Delimiter can be defined as a configuration value. The word boundary may not.
The following methods retrieve attributes about the corpus being processed. They can only be used inside of your CallBacks; they are not available elsewhere.

sentences()
  Returns the number of sentences in the corpus.

words()
  Returns the number of words in the corpus.

elements()
  Returns the number of total elements in the corpus.
new HASHREF
Creates a new instance of the class. Returns a blessed object on success, undef on error. One argument is allowed, a hash reference or anonymous hash, which contains configuration information for the object. The following keys are allowable in the hash, and their values:

'Raise_Error'
  Boolean, die() on any error with message.
  0 = false (default), 1 = true

'ElementDelimiter'
  Default delimiter used to separate elements from the corpus. Should be a valid regular expression as would fit in (?:).
  Default value is '\s+'

'SentenceDelimiter'
  Default delimiter for end of sentence. Follows same rules as ElementDelimiter.
  Default value is '\.|\?|\!'

'NoteCallBack'
  Default callback for obtaining note values. Should be a reference to, or anonymous, subroutine. See the section regarding CallBacks above.
  Default value is undef

'VolumeCallBack'
  Default CallBack for obtaining volume values.

'DurationCallBack'
  Default CallBack for obtaining duration values.

'TempoCallBack'
  Default CallBack for obtaining tempo values.

'Channel'
  Default Channel for MIDI output.
  Default value is '1'

'qn_len'
  Default number of ticks per quarter note. It is safe to leave this unmodified. See MIDI::Simple for more information.
  Default value is '96'

'AllAttributes'
  Boolean, whether or not all attributes of the corpus should be measured when reading it. Can be used to lessen memory usage. See the Attributes section below for more information.
  0 = False, 1 = True (default)

'BaseMilliseconds'
  The base number of ms in a minute. This is used for timing and tempo purposes. It is safe to leave this value unmodified.
  Default value is '60000000'

USAGE:

  my $TransObj = MIDI::Trans->new( {
      'VolumeCallBack' => \&vol,
      'AllAttributes'  => 0
  } );

OR:

  my %attrs = (
      'AllAttributes'  => 1,
      'VolumeCallBack' => \&vol
  );
  my $TransObj = MIDI::Trans->new(\%attrs);
error()
Returns the last set error message, or undef if no error message has been set. USAGE: my $errmsg = $TransObj->error();
reset_error()
Removes the current error message. Causes error() to return undef. Always returns true (1). USAGE: $TransObj->reset_error();
trans HASHREF
The 'wrapper' function for quick use of MIDI::Trans, for more information, see the section entitled Plug and Play Usage above.
trans_obj
See the section entitled Plug and Play Usage above.
read_corpus HASHREF
Reads, parses, and collects attributes about a given corpus (your input data). The corpus may be specified either as a file to read, or a string to parse. Returns true (1) on success, and sets the error message then returns undef on error. More than one corpus may be open at a given time.

A single argument must be specified, which is either a hash reference or an anonymous hash. The hash contains information about the corpus. Three keys are possible:

'Name'
  The 'handy' name, or name you wish to specify for the corpus. This is useful when opening more than one corpus that is a string type. If the corpus type is a string, the name will default to 'String', otherwise the name will default to the file name.

'File'
  When this key is provided, it specifies that the corpus type is a file. This key will override the 'String' key, even if the value is undef -- resulting in an error. The value for this key should be the path to the file you want to parse.

'String'
  When this key is provided, it specifies that the corpus type is a string. This key is overridden by the 'File' key. The string should be passed as the value.

USAGE:

  if( $TransObj->read_corpus({ 'File' => './corpus.txt' }) ) {
      ...
  } else {
      my $error = $TransObj->error();
      die("$error\n");
  }

OR:

  my %corp_dat = (
      'File' => './corpus.txt',
      'Name' => 'Corpus1'
  );
  if( $TransObj->read_corpus(\%corp_dat) ) { ... }

NOTE: When read, the corpus is stored in an ordered list of available corpuses, numbered in the order they were read. This is the preferred method for identifying a corpus to other methods, but most also accept a Name value to identify the corpus, which may be easier to track. The numbering begins at 0.

TODO: Convert all methods to a naming convention.
process HASHREF
Actually performs processing on a given corpus. Runs all of the callbacks, as needed, either on a per-corpus or per-element basis. Generates the data that will be used to create a MIDI score later. Only a single corpus may be specified. Returns true (1) if successful and sets the error message then returns undef otherwise.

A single argument must be specified, which is either a hash reference or an anonymous hash. The hash contains information identifying the corpus. There are two keys possible:

'Name'
  The name as given the corpus -- see read_corpus above. Overrides the 'Num' key. The value should be the name of the corpus.

'Num'
  The number of the corpus. See the NOTE section of read_corpus, above. Is overridden by the 'Name' key.

USAGE:

  if( $TransObj->process( { 'Name' => 'Corpus1' } ) ) {
      ...
  } else {
      my $errmsg = $TransObj->error();
      die("$errmsg\n");
  }

OR:

  my %corp_dat = ( 'Num' => 1 );
  if( $TransObj->process(\%corp_dat) ) { ... }
create_score NUM
Creates a score from the data generated by process(), suitable for writing to a file (see write_file() below). If the corpus hasn't been parsed, or the processing hasn't occurred yet, an error will occur. Returns the MIDI::Simple object from the score on success and sets the error message then returns undef on failure. One argument, the identifying number of the corpus, must be specified. (See the NOTE section of read_corpus() above.)

USAGE:

  my $scoreObj;
  if( $scoreObj = $TransObj->create_score(0) ) {
      ...
  } else {
      my $errmsg = $TransObj->error();
      die("$errmsg\n");
  }
write_file SCORE_OBJECT
Writes a score to a file, given the score object returned from create_score. Returns true (1) on success and sets the error message then returns undef on failure.

USAGE:

  if( $TransObj->write_file($scoreObj) ) {
      ...
  } else {
      my $errmsg = $TransObj->error();
      die("$errmsg\n");
  }
round DECIMAL
Returns the nearest rounded integer given decimal or integer input.

USAGE:

  my $x = $TransObj->round(1.5);           # returns 2
  my $x = $TransObj->round(1.38573927541); # returns 1
These methods allow you to configure the CallBacks currently in use by MIDI::Trans, they also allow you to retrieve a reference to the CallBack in question. All methods accept an argument of either a subroutine reference or anonymous subroutine block. All methods return their subref.
volume_callback()
Sets / Returns the callback for volume USAGE: my $vol_cb = $TransObj->volume_callback(\&somesub);
note_callback()
Sets / Returns the callback for notes USAGE: my $note_cb = $TransObj->note_callback(\&somesub);
duration_callback()
Sets / Returns the callback for duration USAGE: my $dur_cb = $TransObj->duration_callback(\&somesub);
tempo_callback()
Sets / Returns the callback for tempo USAGE: my $tempo_cb = $TransObj->tempo_callback(\&somesub);
These methods, in conjunction with methods such as round() (described under the main METHODS heading), are useful for modifying the way MIDI::Trans is operating, as well as assisting your callbacks in performing their operation, and retrieving operating values. All of these (as well as others) may be utilized in your CallBacks. Be careful, however, of calling process() within a CallBack, as this may result in an infinite loop.
current_elem()
Returns the current element from the list of elements in the corpus. This would typically be used by your callbacks for determining what to generate. USAGE: my $element = $TransObj->current_elem();
current_pos()
Returns the position of the current element in the document, starting at zero. That is, on the 50th element of the corpus, current_pos() would return '49'. USAGE: my $pos = $TransObj->current_pos();
raise_error()
Sets, or returns the current value of the 'Raise_Error' attribute. USAGE: my $RE = $TransObj->raise_error(1);
delimiter()
Sets, or returns the current value of the 'Delimiter' attribute. USAGE: my $Del = $TransObj->delimiter('\s+');
sentence_delimiter()
Sets, or returns the current value of the 'SentenceDelimiter' attribute. USAGE: my $SD = $TransObj->sentence_delimiter('\!\?');
all_attributes()
Sets, or returns the current value of the 'AllAttributes' attribute. USAGE: my $AA = $TransObj->all_attributes(1);
channel()
Sets, or returns the current value of the 'Channel' attribute. USAGE: my $Chan = $TransObj->channel(1);
qn_len()
Sets, or returns the current value of the 'qn_len' attribute. USAGE: my $QL = $TransObj->qn_len(1);
tempo()
Sets, or returns the current value of the 'Tempo' attribute. This is usually set by your tempo CallBack, but can also be set with a default value using the new() attribute. You can use this function to override tempo, but this may have little, if any, effect on create_events(). USAGE: my $Tempo = $TransObj->tempo(120);
C. Church <dolljunkie@digitalkoma.com>
Source: http://search.cpan.org/dist/MIDI-Trans/lib/MIDI/Trans.pm
Have you verified that the output of your XSLT is the same now as it was in the older version? e.g. by putting a <map:serialize/> right after your xslt transform?
I just did a quick test here with 2.1.8, and an xinclude transform
immediately following an xsl transform (which creates the xi:include
elements) works just fine.
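The pipeline shape being described would look something like this (the match pattern, file names, and transformer/serializer type names here are illustrative, not taken from the thread):

```xml
<!-- Illustrative only: an XSLT transform whose stylesheet emits
     xi:include elements, followed by an xinclude transform that
     resolves them before serialization. -->
<map:match pattern="page/*">
  <map:generate src="content/{1}.xml"/>
  <!-- build-includes.xsl writes <xi:include href="..."/> elements -->
  <map:transform src="stylesheets/build-includes.xsl"/>
  <!-- resolve the includes created by the previous step -->
  <map:transform type="xinclude"/>
  <map:serialize type="xml"/>
</map:match>
```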
Schultz, Gary - COMM wrote:
> I performed a pretty exhaustive search and only found information on
> cinclude problems when using POST method. Nothing on cinclude problems in
> general, and nothing detailed on the use of cocoon include transformer.
>
> I should have added that cincludes, includes and xincludes all work when
> placed before the <map:transform. In 2.1.5.1 all three includes also worked
> when placed after the <map:transform. In 2.1.8 the includes only work before
> the <map:transform. The includes are not working after the <map:transform.
> This is a problem in that I'm using the XSLT transformation to create
> cincludes dynamically based on parameters determined by element attribute
> values passed from the XML source.
>
> -----Original Message-----
> From: Ralph Goers [mailto:Ralph.Goers@dslextreme.com]
> Sent: Thursday, December 08, 2005 11:28 PM
> To: users@cocoon.apache.org
> Subject: Re: cinclude, include and xinclude in Cocoon 2.1.8
>
> Did you search the mailing list for similar issues? I seem to recollect that
> the namespace is different so your cinclude might be ignored. But I could be
> wrong.
>
> Schultz, Gary - COMM wrote:
>
>
>>The following code snippet worked in Cocoon 2.1.5.1 but isn't working
>>in Cocoon 2.1.8
>>
>> <map:match pattern="...">
>>   <map:generate src="..."/>
>>   <map:transform src="...">
>>     ...
>>   </map:transform>
>>   <map:transform type="..."/>
>>   <map:serialize/>
>> </map:match>
>>Has there been a change to include transformation in Cocoon 2.1.8? Has
>>the ability to use cinclude after transformations been eliminated? If
>>it did, this is very discouraging. What I'm doing is inside an XHTML
>>template I call a single template that builds a cinclude element. I
>>call this cinclude template several times, each time passing different
>>parameters to produce several cincludes. The cincludes are then
>>processed at the end of the map:match. I need to process the cincludes
>>at the end of the map:match after the transformation.
>>
>>I've thought about using i:include, but I did some searching and
>>cannot find any good examples for using <i:include>. How does one go
>>about using i:include to include only a portion of an xml document?
>>From what I could tell, the 2.1.8 i:include samples only show the use
>>of i:include serving an entire xml file, not a part of a file. I tried
>>using cinclude select but that did not work.
>>
>>
>>
>>Gary T. Schultz
>>IT Administrator
>>Wisconsin Department of Commerce
>>201 W. Washington Ave
>>Madison, WI 53214
>>608-266-1283
>>gschultz@commerce.state.wi.us <mailto:gschultz@commerce.state.wi.us>
Source: http://mail-archives.apache.org/mod_mbox/cocoon-users/200512.mbox/%3C43999797.9060808@lojjic.net%3E
D3DImage Class
An ImageSource that displays a user-created Direct3D surface.
Assembly: PresentationCore (in PresentationCore.dll)
Use the D3DImage class to host Direct3D content in a Windows Presentation Foundation (WPF) application.
Call the Lock method to change the Direct3D content displayed by the D3DImage. Call the SetBackBuffer method to assign a Direct3D surface to a D3DImage. Call the AddDirtyRect method to track updates to the Direct3D surface. Call the Unlock method to display the changed areas.
The D3DImage class manages two display buffers, which are called the back buffer and the front buffer. The back buffer is your Direct3D surface. Changes to the back buffer are copied forward to the front buffer when you call the Unlock method, where it is displayed on the hardware. Occasionally, the front buffer becomes unavailable. This lack of availability can be caused by screen locking, full-screen exclusive Direct3D applications, user-switching, or other system activities. When this occurs, your WPF application is notified by handling the IsFrontBufferAvailableChanged event. How your application responds to the front buffer becoming unavailable depends on whether WPF is enabled to fall back to software rendering. The SetBackBuffer method has an overload that takes a parameter that specifies whether WPF falls back to software rendering.
When you call the SetBackBuffer(D3DResourceType, IntPtr) overload or call the SetBackBuffer(D3DResourceType, IntPtr, Boolean) overload with the enableSoftwareFallback parameter set to false, the rendering system releases its reference to the back buffer when the front buffer becomes unavailable and nothing is displayed. When the front buffer is available again, the rendering system raises the IsFrontBufferAvailableChanged event to notify your WPF application. You can create an event handler for the IsFrontBufferAvailableChanged event to restart rendering again with a valid Direct3D surface. To restart rendering, you must call SetBackBuffer.
Specifically, call the SetBackBuffer(D3DResourceType, IntPtr, Boolean) overload with the backBuffer parameter set to null, and then call SetBackBuffer again with backBuffer set to a valid Direct3D surface.
The following code example shows how to declare a D3DImage in XAML. You must map the System.Windows.Interop namespace, because it is not included in the default XAML namespaces. For more information, see Walkthrough: Hosting Direct3D9 Content in WPF.
<Window x:Class="..."
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:i="clr-namespace:System.Windows.Interop;assembly=PresentationCore">
  <Grid>
    <Image x:Name="...">
      <Image.Source>
        <i:D3DImage x:Name="..." />
      </Image.Source>
    </Image>
  </Grid>
</Window>
SecurityPermission: for access to unmanaged resources. Security action: InheritanceDemand. Associated enumeration: SecurityPermissionFlag.UnmanagedCode
Available since 3.0
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
Source: https://msdn.microsoft.com/en-us/library/system.windows.interop.d3dimage.aspx
I have been able to use .thumbnail to scale the entire image, but I'd like to scale the image while preserving the original dimensions (the scaled image centred within a canvas of the original size).
As @Daniel said, you can create the thumbnail image using .thumbnail(), create a new image with the same size as the original image, and then paste the thumbnail into the new image:
  import PIL.Image

  def scale_image(img, factor, bgcolor):
      # create new image with same mode and size as the original image
      out = PIL.Image.new(img.mode, img.size, bgcolor)
      # determine the thumbnail size
      tw = int(img.width * factor)
      th = int(img.height * factor)
      # determine the position
      x = (img.width - tw) // 2
      y = (img.height - th) // 2
      # create the thumbnail image and paste into new image
      img.thumbnail((tw, th))
      out.paste(img, (x, y))
      return out
factor should be between 0 and 1, and bgcolor is the background color of the new image.
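The centring arithmetic above is independent of Pillow, so it can be sanity-checked on its own (the function name here is just for illustration):

```python
def centered_box(w, h, factor):
    """Return (thumb_w, thumb_h, x, y): the thumbnail size and the
    top-left corner that centres it on the original w x h canvas."""
    tw, th = int(w * factor), int(h * factor)
    return tw, th, (w - tw) // 2, (h - th) // 2

# A 200x100 image scaled by 0.5 gives a 100x50 thumbnail at (50, 25).
print(centered_box(200, 100, 0.5))
```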
Example:
  img = PIL.Image.open('image.jpg')
  new_img = scale_image(img, 0.5, 'white')
  new_img.show()
Source: https://codedump.io/share/XQYlrg2PfCm3/1/pythonpillow-scale-child-image-but-maintain-parent-image-dimensions
CppCon 2014: Herb Sutter "Back to the Basics! Essentials of Modern C++ Style"
Presentation Slides, PDFs, Source Code and other presenter materials are available at:

Herb Sutter - Author, chair of the ISO C++ committee, software architect at Microsoft.

Videos Filmed & Edited by Bash Films.
Since your question is about performance and readability, I suggest you listen to Herb Sutter's speech @ CppCon 2014 here.
In a nutshell:
- Write for clarity and correctness first.
- Avoid premature optimization (prefer clear code over optimized code).
I don't think memory optimization is a concern for a chess program IMHO.
Herb Sutter's advice on this is to start with the standard C++98 approach:
void setString(const std::string& str) { str_ = str; }
And if you need to optimize for rvalues add an overload that takes an rvalue reference:
void setString(std::string&& str) noexcept { str_ = std::move(str); }
Note that most implementations of std::string use the small string optimization, so that if your strings are small a move is the same as a copy anyway and you wouldn't get any benefit.
It is tempting to use pass-by-value and then move (as in Adam Hunyadi's answer) to avoid having to write multiple overloads. But Herb pointed out that it does not re-use any existing capacity of str_. If you call it multiple times with lvalues it will allocate a new string each time. If you have a const std::string& overload then it can re-use existing capacity and avoid allocations.
If you are really clever you can use a templated setter that uses perfect forwarding but to get it completely correct is actually quite complicated.
Book.h
#pragma once
#include <string>

class Book {
public:
    Book() = default;
    ~Book() = default;

    const std::string GetTitle() const;
    const std::string GetAuthor() const;
    const int GetCopyRightYear() const;

    void SetTitle(const std::string);
    void SetAuthor(const std::string);
    void SetCopyRightYear(const int);

    void PrintBook();

private:
    std::string title;
    std::string author;
    int copyright_year;
};
Book.cpp
#include "Book.h"
// ------------------------------
#include <iostream>

void Book::SetTitle(const std::string title_input) {
    title = title_input;
}

const std::string Book::GetTitle() const {
    return title;
}

const int Book::GetCopyRightYear() const {
    return copyright_year;
}

const std::string Book::GetAuthor() const {
    return author;
}

void Book::SetCopyRightYear(const int copyright_year_input) {
    copyright_year = copyright_year_input;
}

void Book::SetAuthor(const std::string author_input) {
    author = author_input;
}

void Book::PrintBook() {
    std::cout << "Title of Book: " << GetTitle() << std::endl;
    std::cout << "Author of Book: " << GetAuthor() << std::endl;
    std::cout << "Copyright Year: " << GetCopyRightYear() << std::endl;
}
main.cpp
// C++ Libraries.
#include <iostream>
#include <string>

// User classes
#include "Book.h"

int main() {
    std::string title_input = "";
    std::string author_input = "";
    int copyright_year_input = 0;

    // research dynamic memory allocation.
    Book book1;
    Book book2;
    Book book3;
    Book book4;

    // user sets book title.
    std::cout << "Enter the book title: ";
    std::getline(std::cin, title_input);
    book1.SetTitle(title_input);

    // user sets the author's name
    std::cout << "Enter the author name: ";
    std::getline(std::cin, author_input);
    book1.SetAuthor(author_input);

    // user inputs the copyright year.
    std::cout << "Enter the copyright year: ";
    std::cin >> copyright_year_input;
    book1.SetCopyRightYear(copyright_year_input);

    // Display the information.
    book1.PrintBook();
    return 0;
}
Notes:
- When you start using multiple namespaces, it's easier to see what is what if you don't predefine them.
- Const correctness means you and other developers know what can be changed and what can't. It also makes things clearer for the compiler.
- std::getline reads the whole line, including the blank spaces.

Just a quick note on clarity and understanding. At the moment your code is messy, which makes it incredibly hard to debug, not only for yourself but for others.

I can't tell on here, but just in case: your classes should be split into header and source files, with a main source file for the main function (entry point). Whether or not you've been told this before, I would highly recommend doing some research into basic C++. I've put some links below to help get you started. Once your code is neatly formatted you might work out what the problem is.
Happy coding :)
References: Herb Sutter Cpp Convention 2014 - Simplicity over Complexity:
Headers and Includes - C++ formatting:
Also see the tutorials on cplusplus.com.
shared_ptr should be treated just like any other class when passing to functions; in this case you should pass by const reference. For the reasoning, see Herb Sutter's talk (jump to min 21:50).
By anonymous 2017-09-20
You are not comparing like-with-like
If you are writing a move-only type like std::unique_ptr then a move assignment operator would be your only choice.

The more typical case is where you have a copyable type, in which case I think you have three options:

1. T& operator=(T const&)
2. T& operator=(T const&) and T& operator=(T&&)
3. T& operator=(T) and move
Note that having both the overloads you suggested in one class is not an option as it would be ambiguous.
Option 1 is the traditional C++98 option and will perform fine in most cases. However, if you need to optimize for r-values you could consider Option 2 and add a move assignment operator.
It is tempting to consider Option 3 and pass-by-value and then move which I think is what you are suggesting. In that case you only have to write one assignment operator. It accepts l-values and at the cost of only one extra move accepts r-values and many people will advocate this approach.
However, Herb Sutter pointed out in his "Back to the Basics! Essentials of Modern C++ Style" talk at CppCon 2014 that this option is problematic and can be much slower. In the case of l-values it will perform an unconditional copy and will not reuse any existing capacity. He provides numbers to backup his claims. The only exception is constructors where there is no existing capacity to reuse and you often have many parameters so pass by-value can reduce the number of overloads needed.
So I would suggest you start with Option 1 and move to Option 2 if you need to optimize for r-values.
Source: https://dev-videos.com/videos/xnqTKD8uD64/CppCon-2014-Herb-Sutter-Back-to-the-Basics-Essentials-of-Modern-C-Style
modulemaker - interactive interface to ExtUtils::ModuleMaker; replaces h2xs -AXn [module]
This document references version 0.51 of modulemaker, released to CPAN on February 9, 2008.
At the command-prompt, simply call:
% modulemaker
... and answer each question.
At the command-prompt, call modulemaker with as many options as you can type correctly:
modulemaker [-CIPVchqs] [-v version] [-n module_name] [-a abstract] [-u author_name] [-p author_CPAN_ID] [-o organization] [-w author_website] [-e author_e-mail] [-l license_name] [-b build_system]
You can specify some of the arguments on the command-line and then -- assuming you don't include the -I option -- modulemaker will switch to interactive mode so that you can finish entering arguments at the prompts.
After calling modulemaker at the command-prompt, you will be presented with a series of menus looking something like this:
  ------------------------

  modulemaker: Main Menu

  Feature                  Current Value
  N - Name of module       ''
  S - Abstract             'Module abstract (<= 44 characters) goes here'
  A - Author information
  L - License              'perl'
  D - Directives
  B - Build system         'ExtUtils::MakeMaker'

  G - Generate module
  H - Generate module; save selections as defaults
  X - Exit immediately

  Please choose which feature you would like to edit:
In many cases you make your selection by typing a single letter or number and hitting the Return key. In the remaining cases, you have to type what you want.
Note that in the Main Menu:
G generates the directories and files requested, then exits.
H generates the directories and files requested, saves the values you have entered (with the exception of the module's name and abstract) in a personal defaults file, then exits. (See the documentation for ExtUtils::ModuleMaker for a more complete discussion of this feature.)
X exits without generating directories or files.
  ------------------------

  modulemaker: Author Menu

  Feature             Current Value
  N - Author          'A. U. Thor'
  C - CPAN ID         'MODAUTHOR'
  O - Organization    'XYZ Corp.'
  W - Website         ''
  E - Email           'a.u.thor@a.galaxy.far.far.away'

  R - Return to main menu
  X - Exit immediately

  Please choose which feature you would like to edit:
The values you enter here to override the Current Values may be good choices for the H 'save selections as defaults' feature in the Main Menu.
Note that you cannot generate directories or files from this menu. You must return (R) to the Main Menu first. You can, however, bail out of the program from this menu with X.
  ------------------------

  modulemaker: License Menu

  ModuleMaker provides many licenses to choose from, many of them approved by opensource.org.

         License Name
     1   Apache Software License (1.1)
     2   Artistic License
     3   Artistic License w/ Aggregation
     4   BSD License
     5   BSD License (Raw)
     6   CVW - MITRE Collaborative Virtual Workspace
     7   GPL - General Public License (2)
     8   IBM Public License Version (1.0)
     9   Intel (BSD+)
    10   Jabber (1.0)
    11   LGPL - GNU Lesser General Public License (2.1)
    12   MIT License
    13   Mozilla Public License (1.0)
    14   Mozilla Public License (1.1)
    15   Nethack General Public License
    16   Nokia Open Source License (1.0a)
    17   Python License
    18   Q Public License (1.0)
    19   Ricoh Source Code Public License (1.0)
    20***Same terms as Perl itself
    21   Sun Internet Standards Source License
    22   The Sleepycat License
    23   Vovida Software License (1.0)
    24   zlib/libpng License
    25   Loose Lips License (1.0)

  # - Enter the number of the license you want to use
  C - Display the Copyright
  L - Display the License
  R - Return to main menu
  X - Exit immediately

  Please choose which license you would like to use:
  ------------------------

  modulemaker: Directives Menu

  Feature             Current Value
  C - Compact         '0'
  V - Verbose         '0'
  D - Include POD     '1'
  N - Include new     '1'
  H - History in POD  '0'
  P - Permissions     '0755 - 493'

  R - Return to main menu
  X - Exit immediately

  Please choose which feature you would like to edit:
As with the Author Menu above, the values you enter here to override the Current Values may be good choices for the H 'save selections as defaults' feature in the Main Menu.
  ------------------------

  Here is the current build system: ExtUtils::MakeMaker

  E - ExtUtils::MakeMaker
  B - Module::Build
  P - Module::Build and proxy Makefile.PL
  R - Return to main menu
  X - Exit immediately

  Please choose which build system you would like to use:
Specify (in quotes) an abstract for this extension
Specify a build system for this extension
Flag for compact base directory name
Omit creating the Changes file, add HISTORY heading to stub POD
Name of Perl module whose methods will override defaults provided in ExtUtils/ModuleMaker.pm and ExtUtils/ModuleMaker/StandardText.pm.
Specify author's e-mail address
Display this help message
Disable INTERACTIVE mode, the command line arguments better be complete
Specify a license for this extension
Specify a name to use for the extension (required)
Specify (in quotes) author's organization
Specify author's CPAN ID
Omit the stub POD section
Do not include a constructor (new()) in the *.pm file.
Set permissions.
Save the selections entered (either as command-line options or as responses to modulemaker's prompts) as your new personal defaults. These will be the values provided by ExtUtils::ModuleMaker or modulemaker the next time you invoke either one of them.
Specify (in quotes) author's name
Specify a version number for this extension
Flag for verbose messages during module creation
Specify author's web site
The code handling the processing of these options is found in package ExtUtils::ModuleMaker::Opts.
ExtUtils::ModuleMaker was originally written in 2001-02 by R. Geoffrey Avery (modulemaker [at] PlatypiVentures [dot] com). Since version 0.33 (July 2005) it has been maintained by James E. Keenan (jkeenan [at] cpan [dot] org).
Copyright (c) 2001-2002 R. Geoffrey Avery; revisions from v0.33 forward by James E. Keenan. See also: ExtUtils::ModuleMaker::StandardText, perlnewmod, h2xs.
Source: http://search.cpan.org/dist/ExtUtils-ModuleMaker/scripts/modulemaker
12 February 2013 20:03 [Source: ICIS news]
MEDELLIN, Colombia (ICIS)--Alpek’s 2012 Q4 net income fell by about 60% year on year to $30m (€23m), despite a 4% increase in sales volumes during the period, the Mexico-based polyethylene terephthalate (PET) producer said on Tuesday.
A 12% decrease in operating income as well as one-time financing expenses and non-cash foreign exchange losses impacted results, the company said.
In addition, soft global demand and pressure in Asian, European and South American markets due to new production capacity also weighed on results.
Accumulated net income for the year stood at $277m, down by about 14% from $322m recorded for the previous year.
Consolidated revenues for the quarter stood at $1.67bn, down by nearly 3% from $1.72bn a year earlier, while accumulated net sales for 2012 decreased by about 1% to $7.27bn.
By segment, Q4 sales revenues of polyester and polyester products totalled $1.30bn, a drop of about 4% year on year, the company said.
Revenues for Alpek’s plastics and chemicals segment, which produces expandable polystyrene (EPS), polypropylene (PP), caprolactam and other products, stood at $373m, up by about 2% compared to the fourth quarter in 2011.
Alpek, a subsidiary of Grupo Alfa, said both segments were impacted by lower prices resulting from this year’s decline in oil and petrochemical feedstock prices.
Average polyester prices in the quarter were down by about 5.9% year on year, while average plastics and chemicals prices were down by about 7.7%.
Despite a drop in Q4 revenue and income, consolidated sales volumes were about 4% higher than in the previous year’s fourth quarter.
Sales volumes of polyester increased by about 2% year on year, Alpek said, notwithstanding continued weakness in export markets and the temporary effects of Hurricane Sandy.
The company’s plastics and chemicals segment volume increased by 11% during the fourth quarter, boosted by robust demand in the PP business, Alpek said.
Alpek owns DAK chemicals.
Alpek's parent company, Grupo Alfa, reported Q4 revenues of $3.69bn, up about 4% year on year. Q4 net income was $115m, up about 60%.
http://www.icis.com/Articles/2013/02/12/9640364/Mexico-Alpek-Q4-income-drops-60-despite-volume-growth.html
The Help Browser includes a preprocessor that allows a single source of html help files from which device-specific help can be extracted for presentation to the user. This allows packages to be distributed with on-line help that adapts to the user's actual device.
The supported syntax is a subset of Server Side Includes (SSI). This syntax was chosen so that the documentation can alternatively be served by an appropriately-configured web browser.
SSI if-else-endif can be used (but not elif), with the expression being a set of variable references separated by "||". Variables that are non-empty are "true" and variables that are empty (or not defined) are "false".
<!--#if expr="$CELL || $VOIP"--> ... <!--#else--> ... <!--#endif-->
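As a rough illustration of the truthiness rule above (a sketch, not Qt's actual implementation), the expression evaluation could look like this in Python:

```python
def eval_ssi_expr(expr, variables):
    """Evaluate an SSI-style expression such as "$CELL || $VOIP".

    A variable reference is true when the variable is defined and
    non-empty; the whole expression is true when any term is true.
    """
    terms = [t.strip() for t in expr.split("||")]
    return any(variables.get(t.lstrip("$"), "") for t in terms)

print(eval_ssi_expr("$CELL || $VOIP", {"CELL": "GSM"}))  # True
print(eval_ssi_expr("$CELL || $VOIP", {"CELL": ""}))     # False
```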
The following variables are pre-defined (or not, depending on configuration) for Qt Extended Documentation:
The string
<!--#echo var="MYVAR"-->
is replaced with the value of the variable MYVAR.
The syntax for setting a variable is:
<!--#set var="MYVAR" value="some text"-->
It is usually useful to define variables inside configuration blocks. Like this:
<!--#if expr="$CELL"--> <!--#set var="YOURDEV" value="your cellphone"--> <!--#else--> <!--#if expr="$TELEPHONY"--> <!--#set var="YOURDEV" value="the phone"--> <!--#else--> <!--#set var="YOURDEV" value="the device"--> <!--#endif--> <!--#endif--> ... Do not wash <!--#echo var="YOURDEV"-->!
The string
<!--#include file="FILENAME"-->
is replaced with the contents of the file FILENAME, which is recursively processed in the manner described here.
One use for this is device-specific documentation. The Qt Extended help attempts to put anything that might vary between devices into a device-specific file. These files all start with "device-" (such as "device-keys.html" and "device-name.html"). By replacing these files, the Qt Extended help can document a different device.
The string
<!--#exec cmd="CMD ARGS"-->
is replaced with the output of the command CMD.
For security reasons only a fixed set of built-in commands are permitted:
qpe-list-content type
- list content of a given type (eg. "Applications")
qpe-list-help-pages filter
- list and link help pages (eg. "foo-*.html")
<!--#if expr="$CELL"--> Stuff for the cell phone... <!--#endif-->
<!--#if expr="$TELEPHONY"--> Stuff for any Phone <!--#if expr="$KEYPAD"--> Stuff for a Keypad Phone <!--#endif--> <!--#endif-->
<!--#include file="device-colors.html"--> Press the <!--#echo var="color-offswitch"--> button to turn off the device.
The preprocessor can set variables to the result of valuespace expressions. The syntax is:
<!--#set var="MYVAR" valuespace="1 + 1" value="some text"-->
The valuespace attribute can be any valid QExpressionEvaluator expression.
The value attribute is used by SSI implementations that do not support the valuespace attribute and is not used by the help preprocessor.
The existence of a valuespace key can be tested for with the syntax:
<!--#set var="MYVAR" valuespace="exists(@/ValueSpace/path)" value="some text"-->
To minimize the number of files that use nonstandard SSI features, a single file should be used to set variables from valuespace expressions. Other help files can make use of these variables by using File Inclusion.
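For example (file names here are illustrative), a single file such as help-variables.html could set the shared variables:

<!--#set var="HOMESCREEN" valuespace="exists(@/UI/HomeScreen)" value=""-->

and every other help page could then begin with:

<!--#include file="help-variables.html"-->

keeping all nonstandard valuespace usage in one place.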
http://radekp.github.io/qtmoko/api/help-preprocessor.html
libwebsockets: Simple WebSocket server
My original idea was to make just a “port” of my previous tutorial on Node.js, but that would be, I think, much more complicated, and I wanted to keep this as simple as possible.
But I still recommend reading the first tutorial about building a web server, because it gives some insight into how libwebsockets works, and it’ll be easier to understand what’s going on here.
BTW, if you’re looking for some more in-depth information on how WebSockets work I recommend this article Websockets 101.
So, to keep it very simple, our WebSocket server will just respond to every request with the same message in reverse order. For example, if we send “Hello, world!” it will respond “!dlrow ,olleH”.
WebSocket server
The first part is very similar to the web server, but since we don’t want to handle any HTTP requests, the
callback_http handler may stay empty.
#include <stdio.h>
#include <stdlib.h>
#include <libwebsockets.h>
static int callback_http(
struct libwebsocket_context *this,
struct libwebsocket *wsi,
enum libwebsocket_callback_reasons reason,
void *user,
void *in,
size_t len
) {
return 0;
}
The interesting part comes now. We need to handle each request and send a response. Note that this callback implements our custom protocol, so it is named callback_dumb_increment to match the protocol list we define later.
static int callback_dumb_increment(
struct libwebsocket_context *this,
struct libwebsocket *wsi,
enum libwebsocket_callback_reasons reason,
void *user,
void *in,
size_t len
) {
switch (reason) {
// just log message that someone is connecting
case LWS_CALLBACK_ESTABLISHED:
printf("connection established\n");
break;
case LWS_CALLBACK_RECEIVE: { // the funny part
// Create a buffer to hold our response
// it has to have some pre and post padding.
// You don't need to care what comes there, libwebsockets
// will do everything for you. For more info see
//
unsigned char *buf = (unsigned char*)
malloc(LWS_SEND_BUFFER_PRE_PADDING + len
+ LWS_SEND_BUFFER_POST_PADDING);
int i;
// `void *in` holds the incoming request. We're just going to
// put it in reverse order into `buf` with the correct offset.
// `len` holds the length of the request.
for (i=0; i < len; i++) {
buf[LWS_SEND_BUFFER_PRE_PADDING + (len - 1) - i ] =
((char *) in)[i];
}
// Log what we received and what we're going to send as a
// response. The `%.*s` syntax is used to print just a part
// of our buffer.
//
printf("received data: %s, replying: %.*s\n", (char *) in,
(int) len, buf + LWS_SEND_BUFFER_PRE_PADDING);
// send response
// just notice that we have to tell where exactly our response
// starts. That's why there's buf[LWS_SEND_BUFFER_PRE_PADDING]
// and how long it is. We know that our response has the same
// length as request because it's the same message
// in reverse order.
libwebsocket_write(wsi, &buf[LWS_SEND_BUFFER_PRE_PADDING],
len, LWS_WRITE_TEXT);
// release memory back into the wild
free(buf);
break;
}
default:
break;
}
return 0;
}
I tried to comment the source code where I think it’s appropriate. The most important call is
libwebsocket_write and the way it sends the response back to the client.
Also worth mentioning are
void *in and
size_t len, which hold the request and its length, respectively.
In the next part we define all protocols we’re using. The first protocol has to be HTTP, then we can put whatever we want. Pay attention to the WebSocket protocol name.
static struct libwebsocket_protocols protocols[] = {
/* first protocol must always be HTTP handler */
{
"http-only", // name
callback_http, // callback
0 // per_session_data_size
},
{
"dumb-increment-protocol", // protocol name - very important!
callback_dumb_increment, // callback
0 // we don't use any per session data
},
{
NULL, NULL, 0 /* End of list */
}
};
The official W3C WebSocket definition says that the WebSocket constructor takes two arguments: the WebSocket server URL and an optional protocol name (or subprotocol, if you want). I didn't know what the second argument was for until I started using libwebsockets. For us the protocol name is
dumb-increment-protocol, and we'll use it later in the JavaScript part.
The rest is the same like in the web server tutorial.
int main(void) {
// server url will be
int port = 9000;
const char *interface = NULL;
struct libwebsocket_context *context;
// we're not using ssl
const char *cert_path = NULL;
const char *key_path = NULL;
// no special options
int opts = 0;
// create libwebsocket context representing this server
context = libwebsocket_create_context(port, interface,
protocols,
libwebsocket_internal_extensions, cert_path,
key_path, -1, -1, opts);
if (context == NULL) {
fprintf(stderr, "libwebsocket init failed\n");
return -1;
}
printf("starting server...\n");
// infinite loop, to end this server send SIGTERM. (CTRL+C)
while (1) {
libwebsocket_service(context, 50);
// libwebsocket_service will process all waiting events with
// their callback functions and then wait 50 ms.
// (this is a single threaded web server and this will keep our
// server from generating load while there are not
// requests to process)
}
libwebsocket_context_destroy(context);
return 0;
}
At this moment you can compile and run the server (see compiling tutorial). You should see something like this:
Compiled without SSL support, serving unencrypted
Listening on port 9000
starting server...
Frontend
Now it’s finally time to write some JavaScript. In order to keep it very simple I’m putting everything in one HTML file:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<script src="">
</script>
<script type="text/javascript">
$(function() {
window.WebSocket = window.WebSocket || window.MozWebSocket;
var websocket = new WebSocket(
'ws://127.0.0.1:9000', 'dumb-increment-protocol');
websocket.onopen = function () {
$('h1').css('color', 'green');
};
websocket.onerror = function () {
$('h1').css('color', 'red');
};
websocket.onmessage = function (message) {
console.log(message.data);
$('div').append($('<p>', { text: message.data }));
};
$('button').click(function(e) {
e.preventDefault();
websocket.send($('input').val());
$('input').val('');
});
});
</script>
</head>
<body>
<h1>WebSockets test</h1>
<form>
<input type="text" />
<button>Send</button>
</form>
<div></div>
</body>
</html>
I think it’s obvious what it does and why. The most surprising part is probably:
new WebSocket('ws://127.0.0.1:9000', 'dumb-increment-protocol');
where
dumb-increment-protocol is the subprotocol name that we specified in
static struct libwebsocket_protocols protocols[] = { … }.
If you think about it, this means that one WebSocket server can run any number of different WebSocket protocols on the same port, completely independent of each other. Well, that’s interesting to me.
Now open
index.html; if the connection to our WebSocket server was successful, you should see a green heading "WebSockets test" like in the image below. Write some clever message in the input field and hit Enter. Our WebSocket server takes your message and sends it back in reverse order. You should also see log messages in the server's console.
Complete source code is, as usual, on gist.github.com; feel free to send pull requests if you find a bug or if you have an idea how to make it even better!
Conclusion
I believe it wasn’t that horrible.
By the way, if you use libwebsockets for some real application, say thank you to the person who created it.
One more thing, I discovered that there’s a forked version of libwebsockets with some additional features. I haven’t tested it yet, but I will as soon as I can and I’ll update this article with more info.
https://medium.com/@martin.sikora/libwebsockets-simple-websocket-server-68195343d64b?source=user_profile---------3-----------
Overview
Blazor is a framework for developing interactive client-side web applications using C# instead of JavaScript.
Source code for this tutorial can be found at:
Companion video tutorial at:
Client-Side Blazor
In the client-side Blazor model, the Blazor app + its dependencies + .NET runtime are downloaded to the browser. The app is then executed directly on the browser UI thread as shown below:
Server-Side Blazor
ASP.NET Core hosts the server-side app and sets up a SignalR endpoint where clients connect. SignalR is responsible for updating the DOM on the client with any changes.
What are we going to do in this Tutorial?
In this tutorial I will show you how to build a server-side Blazor application with the following structure:
1) The class library will have model definitions that are shared between the Web API and Blazor Visual Studio projects. The model that we will create in this tutorial is a simple C# Student class that looks like this:
public class Student {
public string StudentId { get; set; }
[Required]
public string FirstName { get; set; }
[Required]
public string LastName { get; set; }
[Required]
public string School { get; set; }
}
2) The ASP.NET Core Web API app will provide the REST endpoints for a students service that the Blazor client-side application will consume. It will use the GET, POST, PUT and DELETE methods to carry out CRUD operations with the API service. Entity Framework will be used to save data in SQL Server using the "Code First" approach.
3) The Server-side Blazor app will update the DOM on the client using SignalR.
Prerequisites
This tutorial was written while Blazor was in preview mode. I used the following site for setting up my environment:
This is how I setup my development environment:
- .NET Core 3.0 Preview SDK installed from
- Installed Visual Studio 2019
Create a new project in Visual Studio 2019.
Set these values on the next dialog:
Project name: SchoolLibrary. Next, add an ASP.NET Core Web API project named SchoolAPI to the solution: choose ASP.NET Core 3.0 and the API template, then click on Create:
Our Web API project needs to use the Student class in the SchoolLibrary project. Therefore, we will make a reference from the SchoolAPI project into the SchoolLibrary project. Right-click on the SchoolAPI project node then: Add >> Reference... >> Projects >> Solution >> check SchoolLibrary. Then click on OK.
Since we will be using SQL Server, we will need to add the appropriate Entity Framework packages and tooling. From within a terminal window at the root of your SchoolAPI project, run the commands that add the Entity Framework Core packages. We also need to globally install the Entity Framework CLI tool. This tooling changed in .NET Core 3.0 and is installed globally on your computer by running the following command in a terminal window:
dotnet tool install --global dotnet-ef --version 3.0.0-*
Remember to build your entire solution before proceeding. Then, from within a terminal window in the SchoolAPI root directory, run the following command to create migrations:
dotnet-ef migrations add initial_migration
I experienced the following error:
Unable to create an object of type 'SchoolDbContext'. For the different patterns supported at design time, see
I followed the suggested link which obliged me to create DbContextFactory class. This class needs to read the connection string from appsettings.json without dependency injection. For this purpose, I added this helper class that will help read configuration settings from appsettings.json:
public class ConfigurationHelper {
public static string GetCurrentSettings(string key) {
var builder = new ConfigurationBuilder()
.SetBasePath(System.IO.Directory.GetCurrentDirectory())
.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
.AddEnvironmentVariables();
IConfigurationRoot configuration = builder.Build();
return configuration.GetValue<string>(key);
}
}
Next, create another class file named SchoolDbContextFactory.cs. This class implements IDesignTimeDbContextFactory<SchoolDbContext> and will be used to create the database context:
public class SchoolDbContextFactory : IDesignTimeDbContextFactory<SchoolDbContext> {
public SchoolDbContext CreateDbContext(string[] args) {
var optionsBuilder = new DbContextOptionsBuilder<SchoolDbContext>();
var connStr = ConfigurationHelper.GetCurrentSettings("ConnectionStrings:DefaultConnection");
optionsBuilder.UseSqlServer(connStr);
return new SchoolDbContext(optionsBuilder.Options);
}
}
Execute the following terminal command again:
dotnet-ef migrations add initial_migration
You should get no errors, and this results in the creation of a migration file ending with the name ....initial_migration.cs in the Migrations folder. In my case, the seeding part of this file looked like this:
protected override void Up(MigrationBuilder migrationBuilder) {
migrationBuilder.InsertData(
table: "Students",
columns: new[] { "StudentId", "FirstName", "LastName", "School" },
values: new object[,] {
{ "02445eaf-4b31-4266-9234-43a8facdd457", "Jane", "Smith", "Medicine" },
{ "76cc1e56-7aa3-40aa-b1e7-e2dfa3d46489", "John", "Fisher", "Engineering" },
{ "07f046c0-95fb-4425-b52a-4222a71c3f46", "Pamela", "Baker", "Food Science" },
{ "8a4df7f5-2c6a-44d4-9ec1-7824c896c20e", "Peter", "Taylor", "Mining" }
});
}
protected override void Down(MigrationBuilder migrationBuilder) {
migrationBuilder.DropTable(name: "Students");
}
}
Note that the above code also includes commands for inserting sample data.
The next step is to create the BlazorDB database in SQL Server. This is done by running the following command from inside a terminal window at the SchoolAPI folder:
dotnet-ef database update
Once you click on Add, the StudentsController.cs file is created in the Controllers folder. Here is my code for StudentsController.cs:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;
using SchoolAPI;
using SchoolLibrary;
namespace SchoolAPI.Controllers {
[Route("api/[controller]")]
[ApiController]
public class StudentsController : ControllerBase {
private readonly SchoolDbContext _context;
public StudentsController(SchoolDbContext context) {
_context = context;
}
// GET: api/Students
[HttpGet]
public async Task<ActionResult<IEnumerable<Student>>> GetStudents() {
return await _context.Students.ToListAsync();
}
// GET: api/Students/5
[HttpGet("{id}")]
public async Task<ActionResult<Student>> GetStudent(string id) {
var student = await _context.Students.FindAsync(id);
if (student == null) {
return NotFound();
}
return student;
}
// PUT: api/Students/5
[HttpPut("{id}")]
public async Task<IActionResult> PutStudent(string id, Student student) {
if (id != student.StudentId) {
return BadRequest();
}
_context.Entry(student).State = EntityState.Modified;
try {
await _context.SaveChangesAsync();
} catch (DbUpdateConcurrencyException) {
if (!StudentExists(id)) {
return NotFound();
} else {
throw;
}
}
return NoContent();
}
[HttpPost]
public async Task<ActionResult<Student>> PostStudent(Student student) {
_context.Students.Add(student);
await _context.SaveChangesAsync();
return CreatedAtAction("GetStudent", new { id = student.StudentId }, student);
}
// DELETE: api/Students/5
[HttpDelete("{id}")]
public async Task<ActionResult<Student>> DeleteStudent(string id) {
var student = await _context.Students.FindAsync(id);
if (student == null) {
return NotFound();
}
_context.Students.Remove(student);
await _context.SaveChangesAsync();
return student;
}
private bool StudentExists(string id) {
return _context.Students.Any(e => e.StudentId == id);
}
}
}
To allow the Blazor client, which runs on a different origin, to call the students controller in the SchoolAPI project, enable CORS by adding the following at the top of the Configure() method in Startup.cs:
app.UseCors("Policy");
We now have an API and data that we can work with. Let's see how we can use this data in a server-side Blazor app.
Creating our Server-Side Blazor app
Add a server-side Blazor project to your solution. In Solution Explorer, right-click on the solution node then: Add >> New Project...
Choose the "ASP.NET Core Web Application" template then click on Next.
Name the project ServerBlazor, then click on Create.
Choose "ASP.NET Core 3.0" and the "Blazor (server-side)" template then click on Create.
Let us run our Blazor application to see what we have out of the box. Run the ServerBlazor project by hitting CTRL F5 on your keyboard. You will see a UI that looks like this:
Let's find out more about how the app works. Click on Counter in the left navigation.
We need the Newtonsoft.Json package for handling json objects. Therefore, run the following command from a terminal window in the ServerBlazor folder:
dotnet add package Newtonsoft.Json
We will add a Students razor page and menu item to this server-side Blazor project.
Our Blazor project needs to use the Student class in the class library. Therefore, make a reference from the ServerBlazor project to the SchoolLibrary class library project. Right-click on the ServerBlazor project node then: Add >> Reference... >> Projects >> Solution >> check SchoolLibrary.
Add a class file named StudentService.cs in the Data folder. Replace the class with the following code:
public class StudentService {
string baseUrl = "";
public async Task<Student[]> GetStudentsAsync() {
HttpClient http = new HttpClient();
var json = await http.GetStringAsync($"{baseUrl}api/students");
return JsonConvert.DeserializeObject<Student[]>(json);
}
public async Task<Student> GetStudentsByIdAsync(string id) {
HttpClient http = new HttpClient();
var json = await http.GetStringAsync($"{baseUrl}api/students/{id}");
return JsonConvert.DeserializeObject<Student>(json);
}
public async Task<HttpResponseMessage> InsertStudentAsync(Student student) {
var client = new HttpClient();
return await client.PostAsync($"{baseUrl}api/students", getStringContentFromObject(student));
}
public async Task<HttpResponseMessage> UpdateStudentAsync(string id, Student student) {
var client = new HttpClient();
return await client.PutAsync($"{baseUrl}api/students/{id}", getStringContentFromObject(student));
}
public async Task<HttpResponseMessage> DeleteStudentAsync(string id) {
var client = new HttpClient();
return await client.DeleteAsync($"{baseUrl}api/students/{id}");
}
private StringContent getStringContentFromObject(object obj) {
var serialized = JsonConvert.SerializeObject(obj);
var stringContent = new StringContent(serialized, Encoding.UTF8, "application/json");
return stringContent;
}
}
You will need to resolve some of the missing namespaces. Also, make sure you adjust the value of baseUrl to match the URL of your SchoolAPI service.
The above StudentService class provides all the necessary methods for making HTTP requests to the API service with GET, POST, PUT and DELETE methods so that CRUD operations can be processed against data.
We need to configure the StudentService class as a singleton so that we can use dependency injection. Add the following statement to the ConfigureServices() method in Startup.cs:
services.AddSingleton<StudentService>();
Make a duplicate copy of the FetchData.razor file in the Pages node and name the new file Students.razor. Replace its contents with the following code:
@page "/students"
@using ServerBlazor.Data
@inject StudentService studentService
<h1>Students</h1>
<p>This component demonstrates managing students data.</p>
@functions {
protected override async Task OnInitAsync() {
await load();
}
protected async Task load() {
students = await studentService.GetStudentsAsync();
}
}
Let us focus on the @functions block. The OnInitAsync() method is called when the page gets loaded. It calls a local load() method. The load() method makes a call to the student service, which loads a students array with data from our API service. The remaining HTML/Razor code simply displays the data in a table.
Let's add a menu item to the left-side navigation of our Blazor app. Now we are ready to test the server-side Blazor project. To run your app, highlight the SchoolAPI node then hit CTRL F5 to run the server-side application without debugging. This starts the API service.
Next, we will run the server-side Blazor application. Highlight the ServerBlazor node then hit CTRL F5 to run the server-side Blazor app. This is what it should look like when you click on Students:
Adding data
Our Blazor app is not complete without add, edit and delete functionality. Let us start with inserting data. Add the following Insert() method to the @functions block:
protected async Task Insert() {
Student s = new Student() {
StudentId = Guid.NewGuid().ToString(),
FirstName = firstName,
LastName = lastName,
School = school
};
await studentService.InsertStudentAsync(s);
ClearFields();
await load();
}
After data is inserted, the above code clears the fields then loads the data again into an HTML table. Add the following ClearFields() method:
protected void ClearFields() {
studentId = string.Empty;
firstName = string.Empty;
lastName = string.Empty;
school = string.Empty;
}
Run the Blazor server-side project and select Students from the navigation menu. This is what it should look like:
I entered Harry, Green and Agriculture for data and when I clicked on Insert I got the following data inserted into the database:
Updating & Deleting data
To distinguish between INSERT and EDIT/DELETE mode, we shall add an enum declaration to our code. Add the following to the list of instance variables:
private enum MODE { None, Add, EditDelete };
MODE mode = MODE.None;
Student student;
We will add a button at the top of our table for adding data. Add the following markup just above the opening <table> tag:
<button onclick="@Add" class="btn btn-success">Add</button>
Here is the Add() method that is called when the above button is clicked:
protected void Add() {
ClearFields();
mode = MODE.Add;
}
Around line 39, change the @if (students != null) statement to:
@if (students != null && mode==MODE.Add)
Run the server-side Blazor project and confirm that the insert form now appears only after you click on Add. Next, add the following Update() and Delete() methods to the @functions block:
protected async Task Update() {
Student s = new Student() {
StudentId = studentId,
FirstName = firstName,
LastName = lastName,
School = school
};
await studentService.UpdateStudentAsync(studentId, s);
ClearFields();
await load();
mode = MODE.None;
}
protected async Task Delete() {
await studentService.DeleteStudentAsync(studentId);
ClearFields();
await load();
mode = MODE.None;
}
We want to be able to select a row of data and update or delete it. We will add an onclick handler to each row in the HTML table so that clicking a row calls a Show() method with that student's id. Then add the Show() method:
protected async Task Show(string id) {
student = await studentService.GetStudentsByIdAsync(id);
studentId = student.StudentId;
firstName = student.FirstName;
lastName = student.LastName;
school = student.School;
mode = MODE.EditDelete;
}
Let us test our app. I clicked on the row for James Bond and changed Bond to Gardner.
After I clicked on Update, last name was successfully changed to Gardner.
Lastly, let us delete data. I clicked on the Harry Green row. Data was displayed in the Update/Delete form.
When I clicked on Delete, Harry Green got removed.
We are told that SignalR is using web sockets to update the DOM. Let us look into this further. I am using Chrome. I hit F12 in Chrome and went to the Network tab and clicked on WS.
Refresh the page in your browser then click on the resource as shown below:
This will show the web socket traffic.
When you click on any button in the UI, the numbers will change indicating that data is being updated using web sockets.
I thank you for coming this far in the tutorial and wish you much luck in your Blazor adventure.
http://blog.medhat.ca/2019/06/blazor-server-side-app-with-crud.html
In this article, we discuss how to use both Postgres ORM and raw SQL in order to securely query data from a Postgres database with Node.js.
When you’re querying Postgres, you need to choose between:
If you want to use an ORM to query Postgres, I recommend using TypeORM. If you’re starting with a fresh project, you can use their
typeorm init CLI command:
npx typeorm init --name MyProject --database postgres
cd MyProject && yarn
You’ll then need to edit
ormconfig.json to add your database connection options. You’ll need to add a file in
src/entity for each table in your database.
You may also like: Restful API with NodeJS, Express, PostgreSQL, Sequelize, Travis, Mocha, Coveralls and Code Climate.
You can then use a JavaScript API to create records in your database:
import {createConnection} from "typeorm";
import {Photo} from "./entity/Photo";

async function createPhoto() {
  const connection = await createConnection({
    type: 'postgres',
    url: process.env.DATABASE_URL || 'postgres://test:[email protected]/test'
  });
  const photo = new Photo();
  photo.name = "Me and Bears";
  photo.description = "I am near polar bears";
  photo.filename = "photo-with-bears.jpg";
  photo.views = 1;
  photo.isPublished = true;
  const {id} = await connection.manager.save(photo);
  console.log("Photo has been saved. Photo id is", photo.id);
}

createPhoto();
There are advantages to this approach (the biggest being that it supports strong types), but I personally feel that it makes the code pretty hard to read/follow, and the skills you learn on TypeORM will be of no use if you move to a different ORM.
I believe that the simplest and easiest way to query Postgres is to directly write the SQL that will be run against your database.
“Using SQL directly means there’s nothing to configure”
yarn add @databases/pg
You’ll need to set the
DATABASE_URL environment variable to a database connection string.
import connect, {sql} from '@databases/pg';

const db = connect();

export async function getAllUsers() {
  return await db.query(sql`SELECT * FROM users;`);
}

export async function getUserById(userId) {
  return (await db.query(sql`
    SELECT * FROM users WHERE user_id=${userId}
  `))[0];
}

export async function createUser(u) {
  return (await db.query(sql`
    INSERT INTO users (name, email)
    VALUES (${u.name}, ${u.email})
    RETURNING user_id;
  `))[0].user_id;
}

export async function deleteUserById(userId) {
  await db.query(sql`DELETE FROM users WHERE user_id=${userId}`);
}

export async function updateUserById(userId, u) {
  await db.query(sql`
    UPDATE users
    SET name=${u.name}, email=${u.email}
    WHERE user_id=${userId}
  `);
}

export async function upsertUser(userId, u) {
  return (await db.query(sql`
    INSERT INTO users (user_id, name, email)
    VALUES (${userId}, ${u.name}, ${u.email})
    ON CONFLICT (user_id) DO UPDATE
    SET name=${u.name}, email=${u.email}
    RETURNING *;
  `))[0];
}
N.B. The @databases library does not just concatenate your user input into a string of SQL; it separates your parameters from the actual query and uses prepared statements to run the query. It throws a clear runtime exception if you forget to tag your SQL with the sql tag. This means it’s virtually impossible for you to introduce SQL injection vulnerabilities by accident.
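The separation the library performs can be illustrated with a tiny tagged-template sketch (illustrative only, not @databases/pg's internals):

```javascript
// A toy `sql` tag: keeps the literal SQL text and the interpolated
// values apart, so the values can be sent as bound parameters
// instead of being spliced into the query string.
function sql(strings, ...values) {
  // Interleave literal parts with numbered placeholders: $1, $2, ...
  const text = strings.reduce((acc, part, i) => `${acc}$${i}${part}`);
  return { text, values };
}

const userId = "42'; DROP TABLE users; --";
const q = sql`SELECT * FROM users WHERE user_id=${userId}`;
console.log(q.text);   // SELECT * FROM users WHERE user_id=$1
console.log(q.values); // the malicious string travels as data, not SQL
```

A driver can then hand text and values to Postgres as a prepared statement, which is why forgetting the tag (and passing a plain string) deserves a loud runtime error.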
For most projects, I recommend querying your Postgres database directly using
@databases/pg. It gives you the ultimate flexibility. If you need TypeScript types, I recommend declaring the types alongside the SQL queries. TypeScript isn’t currently able to check that the types match your database schema, but at least if they’re in the same file, you’ll probably remember to keep them in sync.
Thanks for reading! If you liked this post, share it with all of your programming buddies!
https://morioh.com/p/60179140f00b
Creating beautiful, aesthetic designs while maintaining accessibility has always been a challenge in the frontend. One particular barrier is the dreaded “:focus” ring. It looks like this:
After clicking any button, the default styling in the browser displays a focus outline. This ugly outline easily mars a perfectly crafted interface.
A quick Stack Overflow search reveals an easy fix: just use a bit of CSS, outline: none; on the affected element. It turns out that many websites use this trick to make their sites beautiful and avoid the focus outline.
Why you should never use outline: none
One word: accessibility. Users who cannot use a mouse and need to navigate the web using their keyboard are entirely lost when you remove your element’s focus outline.
Imagine wanting to click on a button on a page, but not knowing at all which button you're currently hovering over. In fact, you don't have to imagine! Go ahead and implement the outline: none; hack, and then try to tab through your page to get to your element to select it.
Not so easy, right? You can’t tell which element is selected.
It’s already difficult enough to navigate the web with a keyboard when sites are fully accessible. Removing accessible functionality from your site for the sake of design makes it impossible for users who cannot or prefer not to use a mouse to navigate your site.
Now that we've gained some empathy for users with accessibility needs, we can go ahead and put our focus outline back in. We can talk to our product designer about why the design won't look perfect and call this a closed case.
But wait! There's a way to please your design instincts and still allow users with accessibility needs to navigate your page properly.
Previous solutions
Previous non-hacky solutions to this problem involve using the outline: none; trick but also adding an event listener to listen for tabbing events, signifying a keyboard user. At that point, we add a new class to the document that adds the outline back in. This way, keyboard users still see the outline, but regular click users do not.
That sounds great, but generally, we don’t want to modify the document directly when writing React code. What else can we do?
Another approach involves calling onBlur after the event is handled. However, this doesn't always work, as it can cause actions to be fired more than once accidentally. For instance, when using redux-form, if you call onBlur after the onClick action, you would see two submit actions fired: once on the first onClick, and again on the onBlur. Not what we're expecting.
A third solution abandons the outline altogether in favor of a box-shadow instead. However, your designer may not be happy even in this case, since the box-shadow shows not only for keyboard users but also for click users as well.
Furthermore, with any of these approaches, the solution is limited only to a given component. We need a solution that is easier to apply in multiple places. Luckily for us, we can do this easily in React!
How to remove the focus outline in React
Let’s use the first solution from above as inspiration to create a new React component. This component’s only responsibility is to manage whether a focus outline should be shown. The component adds an event listener to listen for tabbing events, just like in the first solution we found.
Here’s how it all comes together:
class AccessibleFocusOutline extends React.Component {
  state = {
    enableOutline: false,
  }

  componentDidMount() {
    window.addEventListener('keydown', this._handleKeydown);
  }

  _handleKeydown = (e) => {
    // Detect a keyboard user from a tab key press
    const isTabEvent = e.keyCode === 9;
    if (isTabEvent) {
      this.setState({enableOutline: true});
    }
  }

  render() {
    return (
      <span
        className={
          this.state.enableOutline ? '' : 'no-outline-on-focus'
        }
      >
        {this.props.children}
      </span>
    );
  }
}
And the CSS for the no-outline-on-focus class looks like this:

.no-outline-on-focus button:focus,
.no-outline-on-focus a:focus {
  outline: none;
}
We add the class by default and remove it when a tabbing event is detected.
Pretty simple, right? This solution works well when we want to wrap a component directly with our new AccessibleFocusOutline component. However, this component only handles <a> and <button> tags. What if we want our accessible focus outline to apply to any element we pass?
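The core detection trick doesn't actually depend on React. Stripped down to plain JavaScript, the idea looks roughly like this (a sketch with made-up names, not the article's component):

// Sketch: hide the focus outline until a Tab keypress reveals a
// keyboard user, then show it from that point on.
function createOutlineToggler(root) {
  root.classList.add('no-outline-on-focus');
  return function handleKeydown(e) {
    const isTabEvent = e.keyCode === 9; // Tab key
    if (isTabEvent) {
      root.classList.remove('no-outline-on-focus');
    }
  };
}

// In a browser you would wire it up with:
//   window.addEventListener('keydown', createOutlineToggler(document.body));

The React component in the article is essentially this logic plus state, so re-rendering happens automatically when the class should change.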
A more extensive solution
In our first approach, we created a component that receives children elements and wraps them with the proper accessible styles. To make this more extensible, let's instead create a component that receives an HTML tag as a prop and creates the element for you with the styles applied. This way, we can pass any element tag to the component, and the styles apply correctly. Instead of an AccessibleFocusOutline component, we end up with an AccessibleFocusOutlineElement component.
Here’s what that looks like:
import classNames from 'classnames';
import omit from 'lodash/omit';

class AccessibleFocusOutlineElement extends React.Component {
  state = {
    enableOutline: false,
  }

  componentDidMount() {
    window.addEventListener('keydown', this._handleKeydown);
  }

  _handleKeydown = (e) => {
    const isTabEvent = e.keyCode === 9;
    if (isTabEvent) {
      this.setState({enableOutline: true});
    }
  }

  render() {
    const {className, tag: Tag} = this.props;

    // Omit the tag prop since we'll use that to create
    // the element and we don't want to forward it.
    // Omit the className prop as we're going to combine
    // it with our own class to control the outline.
    const forwardedProps = omit(this.props, ['tag', 'className']);

    const classes = classNames(
      className,
      {'no-outline-on-focus': !this.state.enableOutline},
    );

    return (
      <Tag className={classes} {...forwardedProps}>
        {this.props.children}
      </Tag>
    );
  }
}
So in this case, our CSS is a little simpler and does not need to look for particular HTML children elements, but rather applies directly to the element being passed to our component:
.no-outline-on-focus:focus {
  outline: none;
}
The result
Here’s the interaction for mouse users: no focus outline for regular click users!
Moreover, for users navigating the site with a keyboard, the focus outline appears as we expect:
Don’t take shortcuts with design and accessibility
It can be tempting to use the first Stack Overflow result that works to achieve a beautiful design, like the outline: none; hack. However, the few minutes that you save implementing the feature could prevent users from navigating your site. Spend a few extra minutes to think about what a user with accessibility needs might experience on your site, and consider testing your page by only navigating with a keyboard.
While great design is always a priority, accessibility should be equally as important. I hope this component example assists you in making your sites more accessible without having to compromise on design.
Do you have another approach to solve the dreaded focus ring? Let us know in the comments or tweet at me @snazbala!
Article edited for clarity that this solution accounts for keyboard users only.
Photo by Elena Taranenko on Unsplash.
|
https://www.eventbrite.com/engineering/how-to-fix-the-ugly-focus-ring-and-not-break-accessibility-in-react/
|
CC-MAIN-2019-35
|
refinedweb
| 1,154
| 56.05
|
The DOM provides various functions to modify the document.
Creating New Elements
You can create new elements using the createElement() function of the document. It takes one argument, the tag name of the element to create. You can then set attributes of the element using the setAttribute() function and append it to the XUL document using the appendChild() function. For example, the following will add a button to a XUL window:
Example 1: Source View
<script>
function addButton() {
  var aBox = document.getElementById("aBox");
  var button = document.createElement("button");
  button.setAttribute("label", "A new Button");
  aBox.appendChild(button);
}
</script>

<box id="aBox" width="200">
  <button label="Add" oncommand="addButton();"/>
</box>
- This example has two parts:
- a box container element in XUL. Notice that this is NOT the same as a vbox or an hbox. (This is discussed more in the Box Model pages.)
- a JavaScript function named "addButton()"
- This script first gets a reference to the box with getElementById(), which is the container to add a new button to. The getElementById() function does not know that the box it is looking for happens to contain the tag with the oncommand attribute that referenced it. getElementById() only knows that the box it is looking for has an id with the value "aBox". This is a subtle dependency between the function and the XUL element to which you should pay attention.
- addButton() then calls the createElement() function to create a new button. Note this button is not visible, nor is it attached to anything yet.
- addButton() then assigns the label 'A new Button' to the button using the setAttribute() function.
- Finally the appendChild() function of the particular box found by getElementById() is called to add the button to it. At this point, the button is attached to a visible box, so it becomes visible as well.
- The button with the label "Add" can be pressed multiple times and it will continue to add new buttons, each of which will have the label "A new Button", and they will only be distinguishable by their place as children in the box element with the id "aBox".
The createElement() function will create the default type of element for the document. For XUL documents, this generally means that a XUL element will be created. For an HTML document, an HTML element will be created, so it will have the features and functions of an HTML element instead. The createElementNS() function may be used to create elements in a different namespace.
The appendChild() function is used to add an element as a child of another element. Three related functions are the insertBefore(), replaceChild() and removeChild() functions. The syntax of these functions is as follows:
parent.appendChild(child);
parent.insertBefore(child, referenceChild);
parent.replaceChild(newChild, oldChild);
parent.removeChild(child);
It should be fairly straightforward from the function names what these functions do.
- The insertBefore() function inserts a new child node before an existing one. This is used to insert into the middle of the list of children of the parent element instead of at the end, like appendChild() does.
- The replaceChild() function removes an existing child and adds a new one in its place at the same position in the list of its parent element.
- Finally the removeChild() function removes a child from the list of its parent element.
Note that for all these functions, the object referred to by the variable referenceChild or the variables newChild and oldChild must already exist or an error occurs. Likewise the object referred to by the variable child which is to be removed must already exist or an error occurs.
Moving Nodes to a different Place
It is often the case that you want to remove an existing element and add it somewhere else. If so, you can just add the element without removing it first. Since a node may only be in one place at a time, the insertion call will always remove the node from its existing location first. This is a convenient way to move nodes around in the document.
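The "insertion removes first" behavior can be modeled with a tiny stand-in for the DOM's parent/child bookkeeping. This is plain JavaScript with made-up names, not real DOM code, but it shows why appending a node that already has a parent moves it rather than duplicating it:

// Minimal stand-in for DOM parent/child bookkeeping.
class FakeNode {
  constructor(name) {
    this.name = name;
    this.parent = null;
    this.children = [];
  }
  appendChild(child) {
    // A node may only be in one place at a time, so detach it
    // from its current parent first.
    if (child.parent) {
      const i = child.parent.children.indexOf(child);
      child.parent.children.splice(i, 1);
    }
    child.parent = this;
    this.children.push(child);
    return child;
  }
}

const boxA = new FakeNode('boxA');
const boxB = new FakeNode('boxB');
const button = new FakeNode('button');

boxA.appendChild(button);
boxB.appendChild(button); // moves the button; boxA is now empty

console.log(boxA.children.length); // 0
console.log(boxB.children.length); // 1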
Copying Nodes
To copy nodes, however, you may call the cloneNode() function. This function makes a copy of an existing node so that you can add it somewhere else. The original node will stay where it is. It takes one boolean argument which indicates whether to copy all of the node's children or not. If false, only the node is copied, so the copy won't have any children. If true, all of the children are copied as well. This is done recursively, so for large tree structures make sure that this is desired before passing true to the cloneNode() function. Here is an example:
Example 2: Source View
<hbox height="400">
  <button label="Copy"
          oncommand="this.parentNode.appendChild(this.nextSibling.cloneNode(true));"/>
  <vbox>
    <button label="First"/>
    <button label="Second"/>
  </vbox>
</hbox>
When the Copy button is pressed:
- we retrieve the nextSibling of the button, which will be the vbox element.
- a copy of this element is made using the cloneNode() function
- and the copy is appended using appendChild().
Note that some elements, such as listbox and menulist, provide some additional specialized modification functions which you should use instead when you can. These are described in the next section.
Manipulating Basic Elements
The main XUL elements such as buttons, checkboxes and radio buttons may be manipulated using a number of script properties. The properties available are listed in the element reference, as those available are different for each element. Common properties that you will manipulate include the label, value, checked and disabled properties. They set or clear the corresponding attribute as necessary.
Label and value properties examples
Here is a simple example which changes the label on a button:
Example 3: Source View
<button label="Hello" oncommand="this.label = 'Goodbye';"/>
When the button is pressed, the label is changed. This technique will work for a variety of different elements that have labels. For a textbox, you can do something similar with the value property.
Example 4: Source View
<button label="Add" oncommand="this.nextSibling.value += '1';"/>
<textbox/>
This example adds a '1' to the textbox each time the button is pressed. The
nextSibling property navigates from the button (this) to the next element, the textbox. The += operator is used to add to the current value so a 1 will be added onto the end of the existing text. Note that you can still enter text into the textbox. You can also retrieve the current label or value using these properties, as in the following example:
Example 5: Source View
<button label="Hello" oncommand="alert(this.label);"/>
Toggling a checkbox
Checkboxes have a checked property which may be used to check or uncheck the checkbox. It should be easy to determine how this is used. In this next example, we reverse the state of the checked property whenever the button is pressed. Note that while the label and value properties are strings, the checked property is a boolean property which will be set either true or false. (In some cases you may need to use setAttribute("checked", "false") instead, because the XBL isn't initiated yet.)
Example 6: Source View
<button label="Change" oncommand="this.nextSibling.checked = !this.nextSibling.checked;"/> <checkbox label="Check for messages"/>
Radio buttons may be selected as well using properties; however, since only one in a group may be selected at a time, the others must all be unchecked when one is checked. You don't have to do this manually, of course. The radiogroup's selectedIndex property may be used to do this. The selectedIndex property may be used to retrieve the index of the selected radio button in the group, as well as to change it.
Changing an element to disabled or enabled
It is common to disable particular fields that don't apply in a given situation. For example, in a preferences dialog, one might have the choice of several possibilities, but one choice allows additional customization. Here is an example of how to create this type of interface.
Example 7: Source View
<script>
function updateState() {
  var name = document.getElementById("name");
  var sindex = document.getElementById("group").selectedIndex;
  name.disabled = sindex == 0;
}
</script>

<radiogroup id="group" onselect="updateState();">
  <radio label="Random name" selected="true"/>
  <hbox>
    <radio label="Specify a name:"/>
    <textbox id="name" value="Jim" disabled="true"/>
  </hbox>
</radiogroup>
In this example a function updateState() is called whenever a select event is fired on the radio group. This will happen whenever a radio button is selected. This function will retrieve the currently selected radio element using the selectedIndex property. Note that even though one of the radio buttons is inside an hbox, it is still part of the radio group. If the first radio button is selected (index of 0), the textbox is disabled by setting its disabled property to true. If the second radio button is selected, the textbox is enabled.
The next section will provide more details about manipulating radio groups as well as manipulating lists.
|
https://developer.mozilla.org/en-US/docs/XUL/Tutorial/Modifying_a_XUL_Interface
|
CC-MAIN-2014-10
|
refinedweb
| 1,514
| 56.35
|
On Fri, Nov 18, 2005 at 05:42:41PM +0300, Bulat Ziganshin wrote: > can anyone write at least the list of record proposals for Haskell? > or, even better, comment about pros and contras for each proposal? I'd benefit from just a list of problems that the record proposals want to solve. I can list the issues that seem important to me, but I am sure my list isn't complete. Also note that some of these goals may be mutually contradictory, but agreeing on the problems might help in agreeing on the solutions. A getter is a way to get a field out of a record, a setter is a way to update a field in a record. These may be either pattern-matching syntaxes, functions or some other odd syntax. Here's the quick summary, expanded below: 1. The field namespace issue. 2. Multi-constructor getters, ideally as a function. 3. "Safe" getters for multi-constructor data types. 4. Getters for multiple data types with a common field. 5. Setters as functions. 6. Anonymous records. 7. Unordered records. 2. Multi-constructor getters. 1. Field namespace issue: Field names should not need to be globally unique. In Haskell 98, they share the function namespace, and must be unique. We either need to make them *not* share the function namespace (which means no getters as functions), or somehow stick the field labels into classes. 2. Multi-constructor getters, ideally as a function: An accessor ought to be able to access an identically-named field from multiple constructors of a given data type: > data FooBar = Foo { name :: String } | Bar { name :: String } However we access "name", we should be able to access it from either constructor easily (as Haskell 98 does, and we'd like to keep this). 3. "Safe" getters for multi-constructor data types. 
Getters ought to be either "safe" or explicitly unsafe when only certain constructors of a data type have a given field (this is my pet peeve): > data FooBar = Foo { foo :: String } | Bar { bar :: String } This shouldn't automatically generate a function of type > foo :: FooBar -> String which will fail when given a FooBar of the Bar constructor. We can always write this function ourselves if we so desire.. 5. Setters as functions. It would be nice to have a setter function such as (but with perhaps a better name) > set_foo :: String -> Foo -> Foo be automatically derived from > data Foo = Foo { foo :: String } in the same way that in Haskell 98 "foo :: Foo -> String" is implicitely derived. Note that this opens up issues of safety when you've got multiple constructors, and questions of how to handle setting of a field that isn't in a particular datum. 6. Anonymous records. This idea is from Simon PJ's proposal, which is that we could have anonymous records which are basically tuples on steroids. Strikes me as a good idea, but requires that we address the namespace question, that is, whether field labels share a namespace with functions. In Simon's proposal, they don't. This is almost a proposal rather than an issue, but I think that it's a worthwhile idea in its own right. 7. Unordered records. I would like to have support for unordered records, which couldn't be matched or constructed by field order, so I could (safely) reorder the fields in a record. This is really an orthogonal issue to pretty much everything else. Argh. When I think about records too long I get dizzy. -- David Roundy
|
http://www.haskell.org/pipermail/haskell-cafe/2005-November/012226.html
|
CC-MAIN-2014-23
|
refinedweb
| 585
| 69.31
|
Red Hat Bugzilla – Bug 75317
Installer exited abnormally
Last modified: 2007-04-18 12:47:16 EDT
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 4.0)
Description of problem:
When running a boot floppy to install Redhat Linux 8.0 the installer exits
abnormally. The installer always gets past the CDROM checking. It exits at
different stages each time, although has never managed to choose packages. The
error occurs with the text and graphical installers. Sometimes an unhandled
exception occurs although I have been unable to capture the text of this.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Load either graphical or text installer from boot floppy.
2. Pass checking of CDROM.
3. Installer will exit at some point before choosing packages or will freeze
when the "choose packages" screen is shown.
Actual Results: Running anaconda the RedHat Linux system installer please wait.
Traceback (most recent call last): file "/usr/bin/anaconda" line 70 in ?
import signal traceback string isys iutil time file "/usr/lib/anaconda/isys.py"
line 18 in ? import_sys import error:/lib/libresolv.so.2:cannot read file data:
is a directory
Install exited abnormally
sending termination signals done
sending kill signals done
disabling swap
unmounting file systems
/mnt/runtime done
disabling /dev/loop0/proc done
/dev/pts done
/mnt/source done
You may safely reboot.
Expected Results: Installer to continue
Additional info:
I am trying to install on an old Pentium 100MHz. It has an 8 speed CDROM drive,
2 harddisks (a 1GB & a 1.6GB), and 48MB RAM in 4 * 72 pin SIMMs (2 * 4MB, 2 * 16MB).
RedHat recognises the GFX card as a Cirrus Logic card; it is based on the
motherboard. It also has a PCI sound card, PCI network card, and an ISA modem. The
machine is an old Packard Bell. I intend to use it as a router.
I forgot to mention. The install media in the CDROM is checked and passes. I
checked the iso images with md5sum before I burnt them.
This is usually a symptom of hardware problems if hte mediacheck passes. Could
you run memtest86 to verify that your memory is good?
There is currently no OS on the machine. How do I run memtest86? Can I make a
boot from the Redhat CDs? I think that the error that was occuring under the
text only install is/was a different error. The text installer seems to be ok
if I use the "specdd" command to disable the monitor probing. The graphical one
still fails though. Thanks for the help.
memtest86 is a stand-alone program which is run from a floppy; you can download
it from
I have tried memtest86 with all tests. No errors were reported.
I believe you will need more RAM to install 8.0.
|
https://bugzilla.redhat.com/show_bug.cgi?id=75317
|
CC-MAIN-2017-26
|
refinedweb
| 476
| 69.18
|
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to use urls as product images?
How can I use urls instead of binary data for the product images? Or, if not possible, how to use the filesystem instead of the database for product images?
I know how to do it with attachments, but how does it work with product images?
Thanks for any proper answer...
if you meant to get an image from a given url: the python code and xml is as follows: python:
import base64   # file encode
import urllib2  # file download from url

class image_url(osv.osv):
    _name = "image.url"
    _columns = {
        'image': fields.binary('Image', required=True),
        'web': fields.char('image url', help='Automatically sanitized HTML contents'),
    }

    def onchange_image(self, cr, uid, ids, web, context=None):
        link = web
        photo = base64.encodestring(urllib2.urlopen(link).read())
        val = {
            'image': photo,
        }
        return {'value': val}

image_url()
xml view:
<record id="view_mrk_form" model="ir.ui.view">
    <field name="name">mark.marksheet.form</field>
    <field name="model">mark.marksheet</field>
    <field name="arch" type="xml">
        <form string="image">
            <field name="image" widget="image" width="110" height="70"/>
            <field name="web" widget="url" on_change="onchange_image(web,context)"/>
        </form>
    </field>
</record>
Perfect..Thank You so much.. It helped a lot.
@Chandni, thank you for your detailed explanation. Do you have a github repository that I can fork this module from? I would like to include it as a dependency in an open-source theme module i'm building and would like to make sure I accredit you correctly.
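The essential part of the answer above is just "download the bytes and base64-encode them" — Odoo's binary fields store base64 text. In isolation it looks like this (Python 3 here, while the answer uses the old Python 2 urllib2 API; the function name is my own):

import base64
from urllib.request import urlopen  # Python 3 equivalent of urllib2

def image_to_base64(url):
    """Fetch an image and return the base64 string an Odoo
    binary field expects."""
    raw = urlopen(url).read()
    return base64.b64encode(raw).decode('ascii')

# The encoding step on its own, with local bytes instead of a download:
encoded = base64.b64encode(b'\x89PNG...').decode('ascii')
decoded = base64.b64decode(encoded)  # round-trips to the original bytes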
Did you mean to get an image from a URL entered and show it in a binary field?
|
https://www.odoo.com/forum/help-1/question/how-to-use-urls-as-product-images-15691
|
CC-MAIN-2017-47
|
refinedweb
| 315
| 59.3
|
In this article, I will discuss some performance issues with collections such as ArrayList during boxing and unboxing operations.
First (before going to collections), we need to determine the general overhead of a boxing/unboxing operation. I've compiled a few small test programs and examined the IL generated using ILDASM. Here are the conclusions:
General
As a general conclusion, we can say that an object is equivalent to a 'void *' in C++. It is always a reference to the contained value; thus, the use of object generates very efficient code that moves pointers through function calls and returns.
a) Value Types
Value Types (int, decimals, structs and so on) are stored and utilized in a basic fashion. When we box/unbox a Value Type, the compiler inserts a specific box/unbox IL instruction which, at runtime, 'makes' the object wrapper that references that Value Type. This kind of operation requires a good deal of manipulation.
Of course, the jitter may optimize this sequence because it knows that we are going to box a specific Value Type. Since the jitter is not documented and doesn't generate any report, we'll leave this part alone.
b) Reference Types
Reference Types (strings, classes, etc), when created, are stored directly as an object. Since the compiler knows that this object is of a predetermined type, there is no need to verify the type of the object each time it is utilized. When we 'box' a Reference Type, the compiler does nothing because, as I said before, a Reference Type is stored as an object. When we utilize a Reference Type that has been boxed, we must write a 'cast' into the source and the compiler inserts a specific 'cast' IL instruction. I don't know how this 'cast' instruction is jitted, but I imagine that, if the object type is the same as the requested type, very few native instructions may be executed.
Conclusion
As a general rule, if we store/retrieve Reference Types in collections, the impact on performance is very small. But if we store Value Types, performance may be severely affected.
To test this assumption, I've written a few small test programs. Each of these programs was executed 10 times and the best results were extracted:
Test 1 - Storing/Retrieving integers to/from ArrayList

using System;
using System.Collections;

namespace test1
{
    class Class1
    {
        static void Main(string[] args)
        {
            int count;
            DateTime startTime = DateTime.Now;
            ArrayList myArrayList = new ArrayList();

            // Repeat test 5 times.
            for (int retry = 5; retry > 0; retry--)
            {
                myArrayList.Clear();

                // Add 'Value Types' to the ArrayList.
                for (count = 0; count < 1000000; count++)
                    myArrayList.Add(count);

                // Retrieve the values.
                int i;
                for (count = 0; count < 1000000; count++)
                    i = (int) myArrayList[count];
            }

            // Print results.
            DateTime endTime = DateTime.Now;
            Console.WriteLine("Start: {0}\nEnd: {1}\nElapsed: {2}", startTime, endTime, endTime - startTime);
            Console.WriteLine("Ready. Push ENTER to finalize...");
            Console.ReadLine();
        }
    }
}
Result: On my machine, this program takes 6,409 seconds as its best time for execution.

Test 2 - Storing/Retrieving strings to/from ArrayList

using System;
using System.Collections;

namespace test1
{
    class Class1
    {
        static void Main(string[] args)
        {
            int count;
            ArrayList myArrayList = new ArrayList();

            // Construct 1000000 strings
            string[] strList = new string[1000000];
            for (count = 0; count < 1000000; count++)
                strList[count] = count.ToString();

            // Repeat test 5 times.
            DateTime startTime = DateTime.Now;
            for (int retry = 5; retry > 0; retry--)
            {
                myArrayList.Clear();

                // Add 'Reference Types' to the ArrayList.
                for (count = 0; count < 1000000; count++)
                    myArrayList.Add(strList[count]);

                // Retrieve the values.
                string s;
                for (count = 0; count < 1000000; count++)
                    s = (string) myArrayList[count];
            }

            // Print results.
            DateTime endTime = DateTime.Now;
            Console.WriteLine("Start: {0}\nEnd: {1}\nElapsed: {2}", startTime, endTime, endTime - startTime);
            Console.WriteLine("Ready. Push ENTER to finalize...");
            Console.ReadLine();
        }
    }
}
This program takes 3,565 seconds on my machine. As we can see, it is much more efficient to store reference types in collections than value types.
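The same trade-off exists on other managed runtimes. For example, in Java — a rough analogue of my own, not the article's C# benchmark — every int added to an ArrayList<Integer> is autoboxed, and every read is unboxed:

import java.util.ArrayList;
import java.util.List;

public class BoxingDemo {
    // Sums 0..n-1 through a List<Integer>, forcing a box on every
    // add and an unbox on every get.
    static int boxedSum(int n) {
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            list.add(i);          // int boxed to Integer
        }
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += list.get(i);   // Integer unboxed back to int
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(boxedSum(1000)); // 499500
    }
}

Each box is a heap allocation, which is exactly the overhead the ArrayList tests above measure for .NET value types.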
|
http://www.c-sharpcorner.com/UploadFile/rfederico/BoxingPerformanceIssuesOnCollections11192005070854AM/BoxingPerformanceIssuesOnCollections.aspx
|
crawl-003
|
refinedweb
| 660
| 50.02
|
I'm trying to model a structure in EF where we have multiple entities, each of which has localization content. All this localization content exists in a single table. (See example below)
I'm running into an issue: when I define the foreign keys with fluent syntax, and EF runs its migration to build the DB, it will apply multiple foreign keys on the LKey field in the Localization table, i.e. FK_Localization_Foo1_LKey / FK_Localization_Foo2_LKey.
This is incorrect, as now its impossible to insert any records into the localization table.
How can I stop this behavior, or what should I change to achieve what I'm looking for in EF?
p.s. I've seen examples suggested where an intermediate table is created, which would only contain LKey as its primary key, and the other tables reference this. I'd like to avoid this if possible to keep our DBA happy.
public class Foo
{
    ...
    [Required][StringLength(5)]
    public string LKey { get; set; }

    public ICollection<Localization> Localizations { get; set; }
}

// many entities similar to 'Foo' with the same navigation

public class Localization
{
    public int LocalizationId { get; set; }

    [Required][StringLength(5)]
    public string LKey { get; set; }

    [Required][StringLength(10)]
    public string LangISO { get; set; }
    ...
}

// inside the OnModelCreating method, there are multiple entries
// like the below, one for each 'Foo' type entity above
modelBuilder.Entity<Foo>()
    .HasMany(pt => pt.Localizations)
    .WithOne()
    .HasPrincipalKey(pt => pt.LKey)
    .HasForeignKey(l => l.LKey);
Update: Below are examples of queries which currently run against this DB(and have ran for years), so its safe to say, the model is not incompatible with a RDB.
When I went asking, our DBA followed up with "this is absolutely possible. It works, as there is a relationship, but no foreign key constraint. Its not NF, but it works just fine."
Perhaps I wasn't clear in my initial question; apologies if this is the case. What we're looking to do is model the relationship and allow the normal EF navigation on the model (which does work with the above code), but not have migrations create the foreign keys.
select f1.Foo1Id, ll.Text
from Foo1 f1
inner join LocalizationLanguage ll on ll.DescriptionKey = f1.DescriptionKey

select f2.Foo2Id, ll.Text
from Foo2 f2
inner join LocalizationLanguage ll on ll.DescriptionKey = f2.DescriptionKey
You are trying to implement a data structure absolutely incompatible with relational databases.
You can't have one table (Localizations) with one field (LKey) which is a foreign key to one of many other tables. Relational DBs do not support this. The DB engine (and you too) will never know which table (Foo1, Foo2, ... FooX) contains the "parent" record for a particular child record LKey='ABCD'.
You have two choices:
If your Foo tables have similar fields, use the Table Per Hierarchy pattern: all your Foo entities will be stored in one physical table with an additional field holding the class name. (The table per type (TPT) and table per concrete type (TPC) patterns will be supported later.)
Reverse the direction of your relationships. Make Localizations your parent table (with LKey as the key) and all the Foo tables children, with LKey as a foreign key.
https://entityframeworkcore.com/knowledge-base/44586436/how-do-i-model-an-alternate-foreign-key-in-entity-framework-core
Object-Oriented Programming in C++
An overview of object-oriented programming in C++.
Key Concepts
Review core concepts you need to learn to master this subject
Classes and Objects
Access Specifiers
Constructors
Inheritance
Polymorphism
Classes and Objects
```cpp
#include <iostream>

class Dog {
public:
  int age;

  void sound() {
    std::cout << "woof\n";
  }
};

int main() {
  Dog buddy;
  buddy.age = 5;
  buddy.sound(); // Outputs: woof
}
```
A C++ class is a user-defined data type that encapsulates information and behavior about an object.
A class can have two types of class members:
- Attributes, also known as member data, consist of information about an instance of the class.
- Methods, also known as member functions, are functions that can be used with an instance of the class.
An object is an instance of a class and can be created by specifying the class name.
https://www.codecademy.com/learn/c-plus-plus-for-programmers/modules/object-oriented-programming-in-cpp
Adding a Movie Kind Field
If we wanted to also keep TV series and mini series in our movie table, we would need another field to store it: MovieKind.
As we didn't add it while creating the Movie table, now we'll write another migration to add it to our database.
Don't modify existing migrations, they won't run again.
Create another migration file under Modules/Common/Migrations/DefaultDB/DefaultDB_20160519_145500_MovieKind.cs:
```csharp
using FluentMigrator;

namespace MovieTutorial.Migrations.DefaultDB
{
    [Migration(20160519145500)]
    public class DefaultDB_20160519_145500_MovieKind : Migration
    {
        public override void Up()
        {
            Alter.Table("Movie").InSchema("mov")
                .AddColumn("Kind").AsInt32().NotNullable()
                .WithDefaultValue(1);
        }

        public override void Down()
        {
        }
    }
}
```
Declaring a MovieKind Enumeration
Now that we have added the Kind column to the Movie table, we need a set of movie kind values. Let's define it as an enumeration at MovieTutorial.Web/Modules/MovieDB/Movie/MovieKind.cs:
```csharp
using Serenity.ComponentModel;
using System.ComponentModel;

namespace MovieTutorial.MovieDB
{
    [EnumKey("MovieDB.MovieKind")]
    public enum MovieKind
    {
        [Description("Film")]
        Film = 1,
        [Description("TV Series")]
        TvSeries = 2,
        [Description("Mini Series")]
        MiniSeries = 3
    }
}
```
Adding Kind Field to MovieRow Entity
As we are not using Sergen anymore, we need to add a mapping for the Kind column in MovieRow.cs manually. Add the following property declaration in MovieRow.cs after the Runtime property:
```csharp
[DisplayName("Runtime (mins)")]
public Int32? Runtime
{
    get { return Fields.Runtime[this]; }
    set { Fields.Runtime[this] = value; }
}

[DisplayName("Kind"), NotNull]
public MovieKind? Kind
{
    get { return (MovieKind?)Fields.Kind[this]; }
    set { Fields.Kind[this] = (Int32?)value; }
}
```
We also need to declare an Int32Field object, which is required by the Serenity entity system. At the bottom of MovieRow.cs, locate the RowFields class and modify it to add a Kind field after the Runtime field:
```csharp
public class RowFields : RowFieldsBase
{
    // ...
    public readonly Int32Field Runtime;
    public readonly Int32Field Kind;

    public RowFields()
        : base("[mov].Movie")
    {
        LocalTextPrefix = "MovieDB.Movie";
    }
}
```
Adding Kind Selection To Our Movie Form
If we build and run our project now, we'll see no change in the Movie form, even though we added the Kind field mapping to MovieRow. This is because the fields shown/edited in the form are controlled by declarations in MovieForm.cs.
Modify MovieForm.cs as below:
```csharp
namespace MovieTutorial.MovieDB.Forms
{
    // ...
    [FormScript("MovieDB.Movie")]
    [BasedOnRow(typeof(Entities.MovieRow))]
    public class MovieForm
    {
        // ...
        public MovieKind Kind { get; set; }
        public Int32 Runtime { get; set; }
    }
}
```
Now, build your solution and run it. When you try to edit a movie or add a new one, nothing will happen. This is expected. If you check the developer tools console of your browser (F12, Inspect Element, etc.) you'll see an error like:

    Uncaught Can't find MovieTutorial.MovieDB.MovieKind enum type!

(You might not get this error with the ASP.NET Core version, as it transforms the T4 templates automatically.)
Please Note!
Whenever such a thing happens — e.g. some button not working, an empty page, a broken grid — please first check the browser console for errors before reporting it.
Why We Had This Error?
This error is caused by the MovieKind enumeration not being available client side. We should run our T4 templates before executing our program.
Now in Visual Studio, click Build -> Transform All Templates again.
Rebuild your solution and execute it. Now we have a nice dropdown in our form to select the movie kind. (For the ASP.NET Core version, just build the project; there is no T4 template step.)
Declaring a Default Value for Movie Kind
As Kind is a required field, we need to fill it in Add Movie dialog, otherwise we'll get a validation error.
But most movies we'll store are feature films, so the default should be Film.
To add a default value for Kind property, add a DefaultValue attribute like this:
```csharp
[DisplayName("Kind"), NotNull, DefaultValue(MovieKind.Film)]
public MovieKind? Kind
{
    get { return (MovieKind?)Fields.Kind[this]; }
    set { Fields.Kind[this] = (Int32?)value; }
}
```
Now, in Add Movie dialog, Kind field will come prefilled as Film.
https://volkanceylan.gitbooks.io/serenity-guide/tutorials/movies/adding_a_movie_kind_field.html
- command autocomplete — start typing "string" and it pops up match, first, etc.
- procedure autocomplete — the Eclipse IDE can also find your own code and prep it for autocomplete as well
- make available generic templates, or define new ones
- set up build scripts (imagine a "build" button for creating packages, removing tedium, etc.)

Tested to work with file, info, interp, namespace, package, string, winfo, wm.

PT: Anyone who has started working on it and needs any help? Please post a site here.
Jacl and Eclipse

During January 2007, Patrick Finnegan posted a HOWTO for running Jacl with Eclipse as an external tool [1]. I'll include his notes here, so that as refinements are found, they can be incorporated. It's now possible to run JACL with the Eclipse IDE as an external tool.

Installation Instructions

1. Install Java.
   - Download the Java runtime and install it on your machine.
2. Install JACL.
   - Download the JACL 1.4.0 jaclBinary140.zip binary.
   - Extract jaclBinary140.zip to a local directory, e.g. C:\Downloads\jacl\jaclBinary140\jacl140\lib\tcljava1.4.0.
   - JACL-enable the Java runtime by copying the JACL jar files from the extract directory to the ext directory of the Java install. The JACL jar files will be dynamically loaded by the JVM at runtime. E.g., copy the following jar files from C:\Downloads\jacl\jaclBinary140\jacl140\lib\tcljava1.4.0 to C:\Program Files\Java\jre1.5.0_10\lib\ext:

     itcl.jar jacl.jar janino.jar tcljava.jar tjc.jar

6. Install Eclipse.
   - Download Eclipse and install it to a local directory. If there is more than one Java runtime installed on the machine, ensure the one used by Eclipse has the Jacl jar files in its ext directory.
   - Download the Eclipse Tcl plugin and install it into Eclipse. Simply follow the standard plugin installation procedure and copy the eclipse directory in the zip file over the Eclipse install directory.
   - Start Eclipse and create a Tcl Project: File → New → Other → Tcl Project. Create a Tcl file called quickTest.tcl in the project: File → New → Other → DLTK Tcl → …

9. Configure JACL as an external tool.
   - Create a new external tool: select Run → External Tools → External Tools. Double-click on Program. Set the following parameters:
     - name: jaclShell
     - location: the Java installation directory, e.g. C:\Program Files\Java\jre1.5.0_10\bin\java.exe
     - working directory: ${container_loc} — this resolves to the folder location of quickTest.tcl in Eclipse
     - arguments: tcl.lang.Shell ${resource_loc} ${resource_name} — resolves to the location of the selected Tcl file, which in this case is quickTest.tcl
   - Close the dialog.
   - Click once on quickTest.tcl to highlight it. From the top menu select Run → External Tools → jaclShell. Do not right-click → Run As. The following is written to the Eclipse console:

     current directory is C:\EclipseWorkSpaces\JaclScripts\testScripts
     My IP Address is: 123.45.67.89

11. And the whole point of this exercise is…
   - Well, it's really code management and the convenience of having Jacl and Java packages in the same workspace.
DKF posted to comp.lang.tcl as a follow-up: I advise putting double quotes around ${resource_loc} and ${resource_name}, because their expansion might have spaces in it (especially ${resource_loc}, if you're keeping your Eclipse workspace below "My Documents"...).
RS 2007-03-09 [2]:

- name: > tkcon
- location: /path/to/send.sh
- working directory: ${project_loc}
- arguments: ${resource_loc} — resolves to the currently selected file
- common tab: (x) Launch in background
- common tab: (x) Allocate Console
The server here:
There is some discussion on dltk at Eclipse Europa TCL Editor (DLTK project). This and the eclipsedltk page should eventually be merged...
http://wiki.tcl.tk/6586
> if (inexact_style != human_round_to_nearest && value < 4294967295u l
> )

I think the space in the above is the problem: "4294967295u l" on your machine, but 18446744073709551615ull with my 10.20 machine.

> Actually, I'm not certain which software I have on this system. I just
> started in this position a bit more than a year ago, so I'm not sure how
> complete some of the packages are... I do have the ANSI C compiler
> installed, though. Is there something I should be specifying to force the
> install to use that package instead of gcc?

You can force the compiler by setting CC and CFLAGS.

    make clean
    ./configure CC=cc CFLAGS='-Ae -O'

That will set -Ae and turn the HP compiler into ANSI mode with extensions enabled. This is the mode you want. Configure should automatically detect the -Ae for this compiler. But not wanting to take too many turns at this, please force it so there is no doubt. Of course the -O turns on the optimizer.

> I see the following values:
> #define HAVE_INTTYPES_H 1
> /* #undef HAVE_STDINT_H */
> /* #undef uintmax_t */
>
> From inttypes.h:
>
> grep -i uintmax_t inttypes.h
> ** intmax_t and uintmax_t are to be the largest signed and unsigned integer
> /* The following macros define I/O formats for intmax_t and uintmax_t.
> extern uintmax_t __strtoull (const char *, char**, int);
> extern uintmax_t __wcstoull(const wchar_t *, wchar_t **, int);
> extern uintmax_t __strtoull ();
> extern uintmax_t __wcstoull();
> #define strtoumax(__a, __b, __c) (uintmax_t)strtoul(__a, __b, __c)
> #define wcstoumax(__a, __b, __c) (uintmax_t)wcstoul(__a, __b, __c)
>
> And:
>
> grep UINTMAX_MAX inttypes.h
> ** and UINTMAX_MAX (maximum value of the largest supported unsigned integer
> #define UINTMAX_MAX UINT64_MAX
> #define UINTMAX_MAX UINT32_MAX

Among other things, the HP compiler is using cpp token pasting to build up the value.
My guess is that your gcc installation does not take this into account, is trying to use the native header files, and the token pasting is failing.

    #ifdef __STDC__
    #define __CONCAT__(_A,_B)    _A ## _B
    #define __CONCAT_U__(_A)     _A ## u
    #if defined(__STDC_EXT__) && !defined(__LP64__)  /* LP64 takes precedence */
    #define __CONCAT_L__(_A,_B)  _A ## _B ## l      /* extra l for long long */
    #else
    #define __CONCAT_L__(_A,_B)  _A ## _B
    #endif
    #else
    #define __CONCAT__(_A,_B)    _A/**/_B
    #define __CONCAT_U__(_A)     _A   /* K&R C does not support u */
    #define __CONCAT_L__(_A,_B)  _A/**/_B/**/l
    #endif

I am seeing the UINT64_MAX definition.

    #define UINT64_MAX UINT64_C(18446744073709551615)
    #define UINT64_C(__c) __CONCAT_L__(__c,ul)

What version of gcc are you using?

    gcc --version

For C code on 10.20 the gcc-2.95 compiler would be a well-known good version.

> I didn't find a stdint.h... Which seems wrong, somehow. :)

HP-UX 10.20 never had that; too old. 10.20 released in 1996.

> Usually I look for the last stable release, but since I am having so much
> fun with this I went ahead and grabbed the alpha release you pointed to.

Actually the test release looks pretty good right now. Try the native ANSI compiler.

Bob
http://lists.gnu.org/archive/html/bug-coreutils/2004-02/msg00022.html
Microsoft Orleans 2.0 was released less than two weeks ago. The biggest win here is .NET Core/Standard support, meaning that Orleans is cross-platform. In this article, we’ll see how to quickly get up and running with Orleans 2.0.
The configuration and hosting APIs have changed considerably, so the instructions here won’t work for earlier versions. See my old “Getting Started with Microsoft Orleans” article from November 2016 if you’re running Orleans 1.4. Orleans 1.5 is also different so you’ll need to check the documentation for that.
In order to keep this article practical and concise, it is necessary to limit its scope. We will not be covering what Orleans is or what it is used for. We will also not create a full project structure that is typical in Orleans solutions. Instead, we’ll keep it simple so that in a short time we have a starting point to explore what Orleans has to offer.
Tip: Use Ctrl+. (Control dot) to resolve namespaces in Visual Studio.
The source code for this article is the Orleans2GettingStarted folder in the Gigi Labs BitBucket repository.
.NET Core Project
To demonstrate the fact that Orleans 2.0 really supports .NET Core, we’ll create a .NET Core console app and set up everything in it. To keep things simple, we’ll run both the client and the silo (server) from this same application.
Install Packages
There are a few packages we’ll need:
```
Install-Package Microsoft.Orleans.Server
Install-Package Microsoft.Orleans.Client
Install-Package Microsoft.Orleans.OrleansCodeGenerator.Build
Install-Package Microsoft.Extensions.Logging.Console
Install-Package OrleansDashboard
```
Here is a summary of what each of these does:

- Microsoft.Orleans.Server — used by the Orleans silo (server).
- Microsoft.Orleans.Client — used by the Orleans client.
- Microsoft.Orleans.OrleansCodeGenerator.Build — required for build-time code generation. Bad things happen if you don't include it.
- Microsoft.Extensions.Logging.Console — optional, but allows us to set up logging to console to see what Orleans is doing.
- OrleansDashboard — optional, but allows us to visualise the operation of silos and grains.
Grain and Grain Interface
We’ll add a simple grain class to the project in order to have a minimal example. Since grains are independent of each other, Internet of Things (IoT) scenarios fit very nicely. Imagine we have a number of temperature sensors deployed in different places. Each one has an ID, and periodically submits temperature readings to a corresponding grain in the Orleans cluster:
```csharp
public class TemperatureSensorGrain : Grain, ITemperatureSensorGrain
{
    public Task SubmitTemperatureAsync(float temperature)
    {
        long grainId = this.GetPrimaryKeyLong();
        Console.WriteLine($"{grainId} received temperature: {temperature}");
        return Task.CompletedTask;
    }
}
```
We’re not doing anything special here. We write out the grain ID and the value we received just so we see something going on in the console. It is important that we inherit from the
Grain base class, and that all our methods return
Task.
We also need a grain interface:
```csharp
public interface ITemperatureSensorGrain : IGrainWithIntegerKey
{
    Task SubmitTemperatureAsync(float temperature);
}
```
The interface must inherit from an Orleans-defined interface that tells what type of grain ID (key) it will use. Our grains will have an ID of type `long` (that's some misleading naming in the interface), but there are other options including GUID or string.
Silo
Taking a look at the Hello World sample gives an idea of how to set up a minimal silo and client. Since this code is going to be async, we'll need to use C# 7.1+ (to support `async`/`await` in `Main()`) or use a workaround. See the last section of "Working with Asynchronous Methods in C#" for how this is done (quick tip: Project Properties -> Build -> Advanced… -> C# latest minor version (latest)).
We can adapt the code from the Hello World sample to run a simple silo:
```csharp
static async Task Main(string[] args)
{
    var siloBuilder = new SiloHostBuilder()
        .UseLocalhostClustering()
        .Configure<ClusterOptions>(options =>
        {
            options.ClusterId = "dev";
            options.ServiceId = "Orleans2GettingStarted";
        })
        .Configure<EndpointOptions>(options =>
            options.AdvertisedIPAddress = IPAddress.Loopback)
        .ConfigureLogging(logging => logging.AddConsole());

    using (var host = siloBuilder.Build())
    {
        await host.StartAsync();
        Console.ReadLine();
    }
}
```
Here, we are setting up a local cluster for development. Thanks to the console logging package we installed earlier and the `ConfigureLogging()` call above, we can see what Orleans is up to:
What is being written out is not important at this stage. The important thing is that the Orleans silo is running.
Client
The same Hello World sample also shows us how to set up a client that connects to the silo. This usually serves as a gateway between the outside world and the Orleans cluster. It could be a Web API, Windows service, etc; but here it will just be in the same console app as the silo.
We’ll wait to start the client after the silo has started. This is easy to do in our case because both are in the same application.
```csharp
using (var host = siloBuilder.Build())
{
    await host.StartAsync();

    var clientBuilder = new ClientBuilder()
        .UseLocalhostClustering()
        .Configure<ClusterOptions>(options =>
        {
            options.ClusterId = "dev";
            options.ServiceId = "Orleans2GettingStarted";
        })
        .ConfigureLogging(logging => logging.AddConsole());

    using (var client = clientBuilder.Build())
    {
        await client.Connect();

        var sensor = client.GetGrain<ITemperatureSensorGrain>(123);
        await sensor.SubmitTemperatureAsync(32.5f);

        Console.ReadLine();
    }
}
```
The setup for the client is very similar to that of the silo, and quite straightforward since we are using the default localhost configurations. One thing you'll notice is the unfortunate inconsistent naming between `host.StartAsync()` and `client.Connect()`; the latter lacks the `-Async()` suffix, even though it also returns a `Task`.
If we run this code, we see that the code in the grain is getting executed, and we see the temperature reading in the console at the end:
Dashboard
Although it works, this example is really boring. We essentially have a Hello World here, styled for IoT. Let’s change the client code to generate some load instead:
```csharp
using (var client = clientBuilder.Build())
{
    await client.Connect();

    var random = new Random();
    string sky = "blue";

    while (sky == "blue") // if run in Ireland, it exits loop immediately
    {
        int grainId = random.Next(0, 500);
        double temperature = random.NextDouble() * 40;
        var sensor = client.GetGrain<ITemperatureSensorGrain>(grainId);
        await sensor.SubmitTemperatureAsync((float)temperature);
    }
}
```
Now we can see the silo brimming with activity, but only in the console:
Now, wouldn’t it be nice if we could see some graphs showing what our grains and silos are doing? As it turns out, we can do that by setting up the Orleans Dashboard, a community-contributed admin dashboard for Microsoft Orleans.
We’ve already installed the package for it, so all we need to do is add it to the silo configuration:
```csharp
var siloBuilder = new SiloHostBuilder()
    .UseLocalhostClustering()
    .UseDashboard(options => { })
    .Configure<ClusterOptions>(options =>
    {
        options.ClusterId = "dev";
        options.ServiceId = "Orleans2GettingStarted";
    })
    .Configure<EndpointOptions>(options =>
        options.AdvertisedIPAddress = IPAddress.Loopback)
    .ConfigureLogging(logging => logging.AddConsole());
```
That sets up the dashboard with all default values (port 8080, no username/password) which you can always change if you need to.
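A sketch of overriding those defaults, based on the options documented in the OrleansDashboard project (the property names come from that project's README; the values here are made up for illustration):

```csharp
.UseDashboard(options =>
{
    options.Port = 9090;        // default is 8080
    options.Username = "admin"; // enables basic auth, off by default
    options.Password = "secret";
})
```

Check the OrleansDashboard README for the full list of supported options before relying on these names.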
So now, if we run the application again and open localhost:8080 in a browser window, we can get some pretty visualisations.
Here’s the high-level view of the cluster:
And here’s a view of the grains that are running. You’ll see we have 500 instances of our TemperatureSensorGrain, which corresponds to the range of grainIds we’re generating at random as we generate load. You’ll also see some internal system-related grains:
Here’s a view of the grain itself, and the methods being called on it:
We can also get a view of the silo:
We haven’t covered everything the dashboard gives you, but you can already see that it gives a lot of visibility into what’s going on. It’s great to track errors, slow requests, throughput, etc.
Linux
So now we have Orleans in a .NET Core project on Windows. That’s great, but the real benefit of Orleans supporting .NET Core is cross-platform deployment. So after installing .NET Core on a Linux machine (I’m using Ubuntu 17.10.1 here), let’s grab the code and run Orleans:
```
git clone
cd gigilabs
cd Orleans2GettingStarted
cd Orleans2GettingStarted
dotnet run
```
Orleans has no problem starting up on this Ubuntu VM:
And here we can see Orleans running with the Orleans Dashboard in the background:
Summary
Orleans 2.0 is based on .NET Standard 2.0, so Orleans can now run on .NET Core and the full .NET Framework alike. It can be run on any platform capable of running .NET Core, Linux being just one example.
A big thanks goes to the Microsoft Orleans team for making this happen! (And to Richard Astbury for the awesome Orleans Dashboard, which he still claims is alpha quality.)
To recap: in order to have a minimal Orleans sample running, we need to:
- Install the necessary packages.
- Add a grain and a grain interface.
- Set up the silo and client.
- Use the grain from the client.
- Optionally, set up logging and the Orleans Dashboard.
This example is meant to get you quickly up and running, and does not delve into any proper project structure or optimisations, which you would normally have when building a serious solution around Orleans. In the next Orleans 2.0 article, we’ll see how to properly organise an Orleans 2.0 solution.
4 thoughts on “Getting Started with Microsoft Orleans 2.0 in .NET Core”
This code is not working. I get the following exception:
Cannot find an implementation class for grain interface: SiloServer.Grains.Contracts.ITemperatureSensorGrain. Make sure the grain assembly was correctly deployed and loaded in the silo.
Versions of packages: 2.0.3 (latest)
I just tried with 2.0.3, and the code for the article works perfectly.

1. Check that your grain actually inherits from `Orleans.Grain`.
2. Make sure you included the code generation package.
3. Take the code of the article as-is from BitBucket, and change the package versions to 2.0.3 if you want.
https://gigi.nullneuron.net/gigilabs/getting-started-with-microsoft-orleans-2-0-in-net-core/
Add Custom Rake Tasks to NetBeans 6 Ruby IDE
By edwingo on Dec 11, 2007
As part of a demo, I wanted to deploy a Ruby on Rails application into a Rails Virtual Appliance using Capistrano. A Rails Virtual Appliance is a Virtual Machine that contains a complete stack of software from the Operating System up to Ruby on Rails itself. If you are using NetBeans 6 as your IDE, you can create a file such as $PROJECT/lib/tasks/demo.rake containing something like the following:
```ruby
# Rake task to deploy to a production server for virtual appliance demo
# 2007-12-10 eeg

# File that contains IP address of virtual appliance. If this file cannot
# be read, then the user will be prompted for the information.
TARGET_IP_FILE = '/ApplianceShare/ipfile.txt'

namespace :demo do
  desc 'Print the target IP address'
  task :print_target_ip do
    puts "Target host IP: #{get_target_host}"
  end

  desc 'Show the coolstack page'
  task :show_coolstack do
    sh "open http://#{get_target_host}/"
  end

  desc 'Show the main depot web application page'
  task :show_app do
    sh "open http://#{get_target_host}:8000/store/"
  end

  desc 'Deploy to production'
  task :deploy do
    target_host = get_target_host
    sh "cap -S target=#{target_host} deploy:setup"
    sh "cap -S target=#{target_host} deploy:cold"
  end

  desc 'Deploy and run the application'
  task :run => [:deploy, :show_app]
end

def get_target_host
  if File.readable?(TARGET_IP_FILE)
    f = File.new(TARGET_IP_FILE)
    line = f.readline
    host_ip = line.chomp
  else
    print "Target host: "
    line = $stdin.gets
    host_ip = line.chomp
  end
  host_ip
end

### Local Variables:
### mode: ruby
### End:
```
Note that the rake task uses `sh` to call Capistrano via the `cap` command to deploy the app into the production machine, which in this case is a Virtual Appliance.
Then, within NetBeans, you can invoke the context menu on the project node in the Projects Window and select "Run Rake Task" and navigate to the task you want to invoke. See this blog entry or wiki page for more information on NetBeans and Rake.
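The same namespaced tasks can also be exercised from a plain Ruby script via the rake library, which is handy for testing outside the IDE. A minimal sketch — only the print task is reproduced here, and the IP value is a stand-in rather than one read from the appliance share:

```ruby
require 'rake'
include Rake::DSL

# Stand-in for the value normally read from /ApplianceShare/ipfile.txt
TARGET_IP = '192.0.2.1'

namespace :demo do
  desc 'Print the target IP address'
  task :print_target_ip do
    puts "Target host IP: #{TARGET_IP}"
  end
end

# Equivalent of choosing "Run Rake Task" -> demo:print_target_ip in the IDE
Rake::Task['demo:print_target_ip'].invoke
```

Namespaced tasks are addressed as `demo:print_target_ip`, exactly as they appear in the NetBeans task list.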
https://blogs.oracle.com/edwingo/tags/rake
Create a components folder in the src folder and add a VideoList.js file in it. Open this file and start by importing the components:
```javascript
import React, { Component } from 'react';
import { View, Text } from 'react-native';
import CardWrapper from '../common/CardWrapper';
import CardInner from '../common/CardInner';

class VideoList extends Component {
  render() {
    return (
      <View>
        <CardWrapper>
          <CardInner>
            <Text>For Title</Text>
          </CardInner>
          <CardInner>
            <Text>For Image</Text>
          </CardInner>
          <CardInner>
            <Text>For Button</Text>
          </CardInner>
        </CardWrapper>
      </View>
    );
  }
}

export default VideoList;
```
So here we created a base design for the video listing. Next we will call this component from a file Index.js, pass a playlist id as a prop from Index.js, and read it in VideoList.js.

If we keep the playlist id out of VideoList's code, it will be easy to change the YouTube playlist id at any time and list any playlist with a quick change.
So now we are going to create Index.js in the same components folder, and add:
```javascript
import React, { Component } from 'react';
import { View } from 'react-native';
import VideoList from './VideoList';

class Index extends Component {
  render() {
    return (
      <View>
        <VideoList playlistid='LLA34Z3lq8FozSQzDHsSLcmQ' />
      </View>
    );
  }
}

export default Index;
```
In this code I imported the VideoList component and rendered it inside the View of the Index component. You can see I am passing playlistid as a prop to the VideoList component — the same process I used for the Header component.

Next, we are going to read this playlistid prop in the VideoList component and print it to the console to see the result.

But first, we will render the Index component in our main App.js file so we can see the result when the app loads. So open App.js and import the Index component:
import Index from './src/components/Index';
Now remove the Text component:

```javascript
<Text> Youtube Video </Text>
```

and add the Index component:

```javascript
<Index />
```
So now it's time to refresh the simulator again and check the result.

Great!! We did it.
Now we will run the debugger in the simulator to check our props, by calling console.log in the render method of VideoList.js.

To run the debugger in Chrome, press the Command + M button in the simulator and select "Debug JS Remotely"; a URL will open in Chrome. Right-click in Chrome, open Inspect, and click on Console.
Now we will add console.log(this.props.playlistid); to the render() method of VideoList.js, so the code will look like this:

```javascript
// ...
class VideoList extends Component {
  render() {
    console.log(this.props.playlistid);
    return (
// ...
```
Now it's time to refresh the simulator and check the Chrome console; you will see the same playlist id that we passed as a prop.

Well, we are getting the YouTube playlist id 🙂
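Stripped of the React Native specifics, the props flow above reduces to plain functions — a component receives a props object from whoever renders it. A minimal plain-JavaScript sketch (no React needed, names mirror the components above):

```javascript
// Corresponds to VideoList reading this.props.playlistid in render()
function VideoList(props) {
  return `playlist: ${props.playlistid}`;
}

// Corresponds to Index rendering <VideoList playlistid='...' />
function Index() {
  return VideoList({ playlistid: 'LLA34Z3lq8FozSQzDHsSLcmQ' });
}

console.log(Index()); // playlist: LLA34Z3lq8FozSQzDHsSLcmQ
```

React does exactly this plumbing for us: the attributes written in JSX become the props object handed to the child component.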
http://blog.stw-services.com/design-video-list-component/
Value types (C# reference)
Value types and reference types are the two main categories of C# types. A variable of a value type contains an instance of the type. This differs from a variable of a reference type, which contains a reference to an instance of the type. By default, on assignment, passing an argument to a method, and returning a method result, variable values are copied. In the case of value-type variables, the corresponding type instances are copied. The following example demonstrates that behavior:
```csharp
using System;

public struct MutablePoint
{
    public int X;
    public int Y;

    public MutablePoint(int x, int y) => (X, Y) = (x, y);

    public override string ToString() => $"({X}, {Y})";
}

public class Program
{
    public static void Main()
    {
        var p1 = new MutablePoint(1, 2);
        var p2 = p1;
        p2.Y = 200;
        Console.WriteLine($"{nameof(p1)} after {nameof(p2)} is modified: {p1}");
        Console.WriteLine($"{nameof(p2)}: {p2}");

        MutateAndDisplay(p2);
        Console.WriteLine($"{nameof(p2)} after passing to a method: {p2}");
    }

    private static void MutateAndDisplay(MutablePoint p)
    {
        p.X = 100;
        Console.WriteLine($"Point mutated in a method: {p}");
    }
}
// Expected output:
// p1 after p2 is modified: (1, 2)
// p2: (1, 200)
// Point mutated in a method: (100, 200)
// p2 after passing to a method: (1, 200)
```
As the preceding example shows, operations on a value-type variable affect only that instance of the value type, stored in the variable.
If a value type contains a data member of a reference type, only the reference to the instance of the reference type is copied when a value-type instance is copied. Both the copy and original value-type instance have access to the same reference-type instance. The following example demonstrates that behavior:
```csharp
using System;
using System.Collections.Generic;

public struct TaggedInteger
{
    public int Number;
    private List<string> tags;

    public TaggedInteger(int n)
    {
        Number = n;
        tags = new List<string>();
    }

    public void AddTag(string tag) => tags.Add(tag);

    public override string ToString() => $"{Number} [{string.Join(", ", tags)}]";
}

public class Program
{
    public static void Main()
    {
        var n1 = new TaggedInteger(0);
        n1.AddTag("A");
        Console.WriteLine(n1);  // output: 0 [A]

        var n2 = n1;
        n2.Number = 7;
        n2.AddTag("B");
        Console.WriteLine(n1);  // output: 0 [A, B]
        Console.WriteLine(n2);  // output: 7 [A, B]
    }
}
```
Note
To make your code less error-prone and more robust, define and use immutable value types. This article uses mutable value types only for demonstration purposes.
Kinds of value types
A value type can be one of the two following kinds:
- a structure type, which encapsulates data and related functionality
- an enumeration type, which is defined by a set of named constants and represents a choice or a combination of choices
A nullable value type `T?` represents all values of its underlying value type `T` and an additional null value. You cannot assign `null` to a variable of a value type, unless it's a nullable value type.
Built-in value types
C# provides the following built-in value types, also known as simple types:
- Integral numeric types
- Floating-point numeric types
- bool that represents a Boolean value
- char that represents a Unicode UTF-16 character
All simple types are structure types and differ from other structure types in that they permit certain additional operations:
You can use literals to provide a value of a simple type. For example, `'A'` is a literal of the type `char` and `2001` is a literal of the type `int`.
You can declare constants of the simple types with the const keyword. It's not possible to have constants of other structure types.
Constant expressions, whose operands are all constants of the simple types, are evaluated at compile time.
Beginning with C# 7.0, C# supports value tuples. A value tuple is a value type, but not a simple type.
C# language specification
For more information, see the following sections of the C# language specification:
https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/value-types?view=netframework-4.8
ChIPS, as well as the CRATES module, which handles file input and output ("ahelp crates"), and the ChIPS module, used for customizing the plots ("ahelp chips").
For people who want to use Matplotlib rather than ChIPS, replace the pychips import with something like:
from matplotlib import pyplot as plt
You may also find it useful to load the numpy module
import numpy as np
Getting Help
There are several ways to access the Sherpa help files.
The ChIPS GUI
The ChIPS GUI can be used to modify any Sherpa plot if the ChIPS plotting backend is in use. It is launched by right clicking on an existing ChIPS window and selecting "Launch GUI" or by running the show_gui command from within a ChIPS session. A Python terminal is built into the GUI, allowing access to ChIPS and other Python commands.
Features include:
- the ability to edit the properties of an object, such as the symbol style of a curve or the font used for an axis label.
XSPEC models update
The XSPEC models have been updated to release 12.10.0e in CIAO 4.11. Support for the following models has been added to Sherpa: xsbrnei, xsbvrnei, xsbvvrnei, xsgrbcomp, xsjet, xsssa, and xszcutoffpl.
The Sherpa prompt
The IPython version packaged with CIAO has been updated to 6.5.0, which has meant that the configuration options we used to use to style the "Sherpa environment" have changed. To make it more obvious that the Sherpa command-line application is based on IPython, the prompt now more closely matches the IPython one. This means that instead of seeing 'sherpa-1>', 'sherpa-2>', ..., you will now see 'sherpa In [1]:', 'sherpa In [2]:', ..., and output will be preceded by 'Out[x]:'.
Plotting with matplotlib
The CIAO 4.11 release is the first release to include matplotlib. The default behavior is for Sherpa to still use ChIPS for plotting, but this can be changed by editing the .sherpa.rc file in your home directory (it is created the first time you run Sherpa if you do not already have one), so that the plot_pkg setting reads "pylab" rather than "chips"; that is
unix% egrep '^plot_pkg' ~/.sherpa.rc plot_pkg : pylab
The '%matplotlib' IPython magic command should be used in a Sherpa interactive session to ensure that matplotlib plots are interactive (so that matplotib.pyplot.show() is not needed to display them).
Documentation
The ahelp files for the Sherpa models (XSPEC and Sherpa) have been updated to match the contents of the Python docstrings for these models.
https://cxc.cfa.harvard.edu/sherpa4.11/ahelp/sherpa4.html
import flash.events.MouseEvent;

stop();

this.bt1_instance.addEventListener(MouseEvent.MOUSE_DOWN, gotoFrame2);
this.bt2_instance.addEventListener(MouseEvent.MOUSE_DOWN, gotoFrame1);
this.bt3_instance.addEventListener(MouseEvent.MOUSE_DOWN, gotoFrame2);

function gotoFrame1(event:MouseEvent):void {
    gotoAndStop(1);
}
function gotoFrame2(event:MouseEvent):void {
    gotoAndStop(2);
}
I have a feeling that you are instantiating the code again if you go back to frame 1, that has all of this code on it.
I would probably put the visual assets from both frames and move them over one. then make sure your actions layer extends over all three frames...
rp / ZA
next, I would do a ctrl + shift + enter and look what line number the error happens on in the debug mode.
this will give you a better idea about where to start looking...
rp / ZA
using Click does make any difference.
The error occures when reaching line 4:
this.bt1_instance.addEvent
"using Click does make any difference."
im confused... it does or doesn't make any difference?
using CLICK opens up your device posibilities to more than just computers with a mouse... it is just better practice to be more general.
and if that line is where the error is occuring, then the issue is either your callback function is not named correctly, or your instance name is named incorrectly, or you have not properly imported the Event classes...
try using the wild card import flash.display.*;
rp / ZA
I accept the fact that Click is better anyhow.
importing flash.display.* still does not help.
I don't think there's a problem with my code / callback function.
As you can see in my question, this is a strange situation that only occurs if the target button (bt1) has a movie clip inside it. It happens on line 4 because this line refers to bt1, but any other piece of code that would do that will cause the error.
is the instance name of the button "bt1_instance" or "bt1" in the properties panel?
if it is bt1, then you need to get rid of the _instance part... I ask this, because in your last comment you called the button bt1, not bt1_instance.
rp / ZA
Note that the code works fine when the movie starts - there isn't any problem accessing the button or setting the event listener. The error only occurs when jumping to frame 2 and returning to frame 1.
thanks
i've duplicated your code and I don't get any errors.
where is this zip file you mentioned - i think like rp says you've got your setup a bit funny.
yeah, the code would work fine if no code was on Frame 1, but as I said, I can imagine that instantiating this code multiple times will cause an error or the application to act funny
rp / ZA
Anyway, the file I'm talking about can be downloaded from:
It's a simple 2 frames, 2 button fla with the code in question.
I'm not sure I understand the "setup" issue, so if you could take a look at the file, that would be great.
Thanks
okay... I am not sure how to better explain this, but you cannot put code on frame 1 and then have the button send the playhead back to frame one again. this seems to be your problem.
you need to move your assets over one frame each.
rp / ZA
I've tried putting the code of
------
function gotoFrame1(event:MouseEven
gotoAndStop(1); }
------
in frame 2, the result was just the same.
The scenario is simple:
frame 1 has a button to jump to frame 2 and frame 2 has a button to jump to frame 1. How should I implement this?
thanks
if the playhead hits that frame 1 again, then it restarts everything, which is probably your issue...
rp / ZA
rp
ID: 24494840
that is the first comment of a few that tried to explain that a possible problem lies with having your code get re-run each time you go back to frame one.
this application should instead all be on frame one, and the buttons should control inner movieclips.
rp
The reason is simple: the code must access components (buttons) that do not exist in all of the frames. I have two keyframes, each with different buttons and different code. If I put the code in a third frame (say frame 1 has the code, and frames 2 & 3 contain the buttons), frame 1 does not contain the buttons from frames 2 & 3, and I simply cannot refer to the button instances. The alternative of keeping those instances in frame 1 and moving them around (say, off stage) is poor.
what rp said is correct. You're getting the error because when you return to frame 1 from frame 2, the code is kicking in before the sprite is reinstantiated, so it doesn't recognize it and throws an error when you try to reference it with the addEventListener.
Here's a work around if you don't want to change your frame setup.
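The attached workaround itself isn't reproduced in this thread, but a common fix in the same spirit (a hypothetical sketch; the instance and function names follow the question) defers the listener wiring until the frame's children have been constructed:

```actionscript
import flash.events.Event;
import flash.events.MouseEvent;

// Frame 1 script: wait until this frame's display objects have been
// (re)instantiated before attaching listeners, so returning to frame 1
// from frame 2 no longer throws Error 1009.
addEventListener(Event.FRAME_CONSTRUCTED, initFrame1);

function initFrame1(e:Event):void {
    removeEventListener(Event.FRAME_CONSTRUCTED, initFrame1);
    if (bt1_instance != null) {
        bt1_instance.addEventListener(MouseEvent.MOUSE_DOWN, gotoFrame2);
    }
}
```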
https://www.experts-exchange.com/questions/24442464/ActionScript-error-TypeError-Error-1009-Cannot-access-a-property.html
Typography
Use the typography modifiers to control text sizing, emphasis, and color.
Source Sans Pro is Pivotal UI's default font family. It is packaged with the typography CSS, so the import statement below includes it in your site.
Classes
In addition to the modifiers listed here, importing typography CSS gives you the
type-{color-name} modifiers listed on the Colors page.
an h1 (32px)
an h2 (24px)
an h3 (20px)
an h4 (18px)
an h5 (16px)
an h6 (13px)
base font size (16px)
large font size (18px)
base font size (16px)
small font size (14px)
extra small font size (12px)
Low emphasis
Default emphasis
High emphasis
Max emphasis
Alternate emphasis (all-caps)
capitalize emphasis
Sometimes you may need to use a heading which has different visual and code semantics. You can accomplish this by combining classes with elements (classes take visual precedence over elements in this case).
I am an h1!
I am an h2!
If it's not a heading but you need similar visual treatment, you can add just the class to any element. However, use headings when possible since they add semantic value.
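For example (a sketch; it assumes the heading modifier classes are named h1 through h6, matching the sizes listed earlier):

```html
<!-- Visually an h2 (24px), semantically an h1: the class wins visually. -->
<h1 class="h2">I am an h1!</h1>

<!-- Not a heading at all, but given h2 sizing; prefer real headings when possible. -->
<div class="h2">Large text without heading semantics</div>
```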
Headings are spaced so their line height and padding are consistent on one or many lines.
One line heading
I am a
multiline heading
Imports
Import CSS:
import 'pivotal-ui/css/typography';
https://styleguide.pivotal.io/modifiers/typography/
GRID::Cluster - Virtual clusters using SSH links
use GRID::Cluster;
use List::Util qw(sum);

my $np    = 4;     # Number of processes
my $N     = 1000;  # Number of iterations
my $clean = 0;     # The files are not removed when the execution is finished

my $machine    = [ 'host1', 'host2', 'host3' ];           # Hosts
my $debug      = { host1 => 0, host2 => 0, host3 => 0 };  # Debug mode in every host
my $max_num_np = { host1 => 1, host2 => 1, host3 => 1 };  # Maximum number of processes supported by every host

my $c = GRID::Cluster->new(host_names => $machine, debug => $debug, max_num_np => $max_num_np)
  || die "No machine has been initialized in the cluster";

# Transference of files to remote hosts
$c->copyandmake(
  dir        => 'pi',
  makeargs   => 'pi',
  files      => [ qw{pi.c Makefile} ],
  cleanfiles => $clean,
  cleandirs  => $clean,  # remove the whole directory at the end
  keepdir    => 1,
);

# This method changes the remote working directory of all hosts
$c->chdir("pi/") || die "Can't change to pi/\n";

# Tasks are created and executed in remote machines using the method 'qx'
my @commands = map { "./pi $_ $N $np |" } 0 .. $np - 1;
print "Pi Value: " . sum(@{ $c->qx(@commands) }) . "\n";
This module is based on the module GRID::Machine. It provides a set of methods to create 'virtual' clusters by the use of SSH links for communications among different remote hosts.
Since the main features of GRID::Machine are zero administration and minimal installation, GRID::Cluster directly inherits these features.
Mainly,
GRID::Cluster provides:
- a qx method. Instead of a single command it receives a list of commands. Commands are executed - via SSH - using the master-worker paradigm.
This module requires these other modules and libraries:
new
This method returns a new instance of an object.
There are two ways to call the constructor. The first one looks like:
my $cluster = GRID::Cluster->new( debug => {machine1 => 0, machine2 => 0,...}, max_num_np => {machine1 => 1, machine2 => 1,...}, );
where:
debug is a reference to a hash that specifies which machines will be in debugging mode. It is optional.
max_num_np is a reference to a hash containing the maximum number of processes supported for each machine.
The second one looks like:
my $cluster = GRID::Cluster->new(config => $config_file_name);
where:
config is the name of the file containing the cluster specification. The specification is written in Perl itself. The code inside the config file must return a list defining the max_num_np and debug parameters as in the previous call. See the following example:
$ cat -n MachineConfig.pm 1 my %debug = (machine1 => 0, machine2 => 0, machine3 => 0, machine4 => 0); 2 my %max_num_np = (machine1 => 3, machine2 => 1, machine3 => 1, machine4 => 1); 3 4 return (debug => \%debug, max_num_np => \%max_num_np);
modput
The syntax of the method
modput is:
my $result = $cluster->modput(@modules);
It receives a list of strings describing modules (like 'Math::Prime::XS'), and it returns a GRID::Cluster::Result object.
An example is in following lines:
$ cat -n modput.pl 1 #!/usr/bin/perl 2 use warnings; 3 use strict; 4 5 use GRID::Cluster; 6 use Data::Dumper; 7 8 my $cluster = GRID::Cluster->new( debug => { orion => 0, beowulf => 0 }, 9 max_num_np => { orion => 1, beowulf => 1 } ); 10 11 12 my $result = $cluster->modput('Math::Prime::XS'); 13 14 $result = $cluster->eval(q{ 15 use Math::Prime::XS qw(primes); 16 17 primes(9); 18 } 19 ); 20 21 print Dumper($result);
When this program is executed, the following output is produced:
$ ./modput.pl $VAR1 = bless( { 'beowulf' => bless( { 'stderr' => '', 'errmsg' => '', 'type' => 'RETURNED', 'stdout' => '', 'errcode' => 0, 'results' => [ 2, 3, 5, 7 ] }, 'GRID::Machine::Result' ), 'orion' => bless( { 'stderr' => '', 'errmsg' => '', 'type' => 'RETURNED', 'stdout' => '', 'errcode' => 0, 'results' => [ 2, 3, 5, 7 ] }, 'GRID::Machine::Result' ) }, 'GRID::Cluster::Result' );
eval
The syntax of the method
eval is:
$result = $cluster->eval($code, @args)
This method evaluates $code in the cluster, passing arguments and returning a GRID::Cluster::Result object.
An example of use:
$ cat -n eval_pi.pl 1 #!/usr/bin/perl 2 use warnings; 3 use strict; 4 5 use GRID::Cluster; 6 use Data::Dumper; 7 8 my $cluster = GRID::Cluster->new( debug => { orion => 0, beowulf => 0, localhost => 0, bw => 0 }, 9 max_num_np => { orion => 1, beowulf => 1, localhost => 1, bw => 1} ); 10 11 my @machines = ('orion', 'bw', 'beowulf', 'localhost'); 12 my $np = @machines; 13 my $N = 1000000; 14 15 my $r = $cluster->eval(q{ 16 17 my ($N, $np) = @_; 18 19 my $sum = 0; 20 21 for (my $i = SERVER->logic_id; $i < $N; $i += $np) { 22 my $x = ($i + 0.5) / $N; 23 $sum += 4 / (1 + $x * $x); 24 } 25 26 $sum /= $N; 27 28 }, $N, $np ); 29 30 print Dumper($r); 31 32 my $result = 0; 33 34 foreach (@machines) { 35 $result += $r->{$_}{results}[0]; 36 } 37 38 print "\nEl resultado del cálculo de PI es: $result\n";
The cluster initialization (lines 8 -- 9) assigns a logical identifier to each machine. In lines 15 -- 28, the
eval method evaluates the block of code located at the
q operator for each machine of the cluster. In lines 32 -- 36, the obtained values are added together. The example then produces the following output:
$VAR1 = bless( { 'bw' => bless( { 'stderr' => '', 'errmsg' => '', 'type' => 'RETURNED', 'stdout' => '', 'errcode' => 0, 'results' => [ '0.785398913397203' ] }, 'GRID::Machine::Result' ), 'beowulf' => bless( { 'stderr' => '', 'errmsg' => '', 'type' => 'RETURNED', 'stdout' => '', 'errcode' => 0, 'results' => [ '0.785398413397751' ] }, 'GRID::Machine::Result' ), 'orion' => bless( { 'stderr' => '', 'errmsg' => '', 'type' => 'RETURNED', 'stdout' => '', 'errcode' => 0, 'results' => [ '0.785397913397739' ] }, 'GRID::Machine::Result' ), 'localhost' => bless( { 'stderr' => '', 'errmsg' => '', 'type' => 'RETURNED', 'stdout' => '', 'errcode' => 0, 'results' => [ '0.785397413397209' ] }, 'GRID::Machine::Result' ) }, 'GRID::Cluster::Result' ); El resultado del cálculo de PI es: 3.1415926535899
The GRID::Cluster::Result object contains the obtained results, and the addition of every results is the final calculation of number PI.
qx
The syntax of the method
qx is:
my $result = $cluster->qx(@commands);
It receives a list of commands and executes each command as a remote process. It uses a farm-based approach. At some time a chunk of commands - the size of the chunk depending on the number of processors - is being executed. As soon as some command finishes, another one is sent to the new idle worker (if there are pending tasks).
In a scalar context, a reference to a list that contains all the results is returned. The list contains the outputs of the @commands. Observe, however, that no assumption can be made about the processor where an individual command c in @commands is executed.
An example of use:
$ cat -n uname_echo_qx.pl 1 #!/usr/bin/perl 2 use strict; 3 use warnings; 4 5 use GRID::Cluster; 6 use Data::Dumper; 7 8 my $cluster = GRID::Cluster->new(max_num_np => {orion => 1, europa => 1},); 9 10 my @commands = ("uname -a", "echo Hello"); 11 my $result = $cluster->qx(@commands); 12 13 print Dumper($result);
The result of this example produces the following output:
$ ./uname_echo_qx.pl $VAR1 = [ 'Linux europa 2.6.24-24-generic #1 SMP Wed Apr 15 15:11:35 UTC 2009 x86_64 GNU/Linux ', 'Hello ' ];
Observe that the first output corresponds to the first command
uname -a, and the second output to the second command
echo Hello. Notice also that we can't assume that the first command will be executed in the first machine, the second one in the second machine, etc. We can only be certain that all the commands will be executed in some machine of the cluster pool.
copyandmake
The syntax of the method
copyandmake is:
my $result = $cluster->copyandmake( dir => $dir, files => [ @files ], # files to transfer make => $command, # execute $command $commandargs makeargs => $commandargs, # after the transference cleanfiles => $cleanup, # remove files at the end cleandirs => $cleanup, # remove the whole directory at the end )
and it returns a GRID::Cluster::Result object.
copyandmake copies (using scp) the files @files to a directory named $dir in the remote machines. The directory $dir will be created if it does not exist. After the file transfer, the command specified by the copyandmake option
make => 'command'
will be executed with the arguments specified in the option makeargs. If the option cleandirs is set, the created directory and all the files below it will be removed. Observe that the directory and the files will be kept if they were not created by this connection. The call to copyandmake by default sets dir as the current directory in the remote machines. Use the option keepdir => 1 to avoid this.
chdir
The syntax of this method is as follows:
my $result = $cluster->chdir($remote_dir);
and it returns a GRID::Cluster::Result object.
The method
chdir changes the remote working directory to $remote_dir in every remote machine.
To install GRID::Cluster, follow these steps:
Host host_1
    HostName myHost1.mydomain.com
    User myUser
Host host_2
    HostName myHost2.mydomain.com
    User anotherUser
. . .
Host host_n
    HostName myHostn.mydomain.com
    User myUser
The basic execution of the script is as follows:
local.machine$ ./pki.pl [-v] -c host_1,host_2,...host_n
In this case, a public/private key pair is generated in the local directory $HOME/.ssh/, using the ssh-keygen command, which must be located in some directory included in $PATH.
The behaviour of the script can be modified by the different supported options. For more information, you can execute the script with the option -h.
Host host_1
    HostName myHost1.mydomain.com
    User myUser
    IdentityFile ~/.ssh/grid_cluster_rsa
Host host_2
    HostName myHost2.mydomain.com
    User anotherUser
    IdentityFile ~/.ssh/grid_cluster_rsa
. . .
Host host_n
    HostName myHostn.mydomain.com
    User myUser
    IdentityFile ~/.ssh/grid_cluster_rsa
$ ssh host_1 Linux host_1 2.6.15-1-686-smp #2 SMP Mon Mar 6 15:34:50 UTC 2006 i686 Last login: Sat Jul 7 13:34:00 2007 from local.machine user@host_1:~$
You can also automatically execute commands in the remote server:
local.machine$ ssh host_1 uname -a Linux host_1 2.6.15-1-686-smp #2 SMP Mon Mar 6 15:34:50 UTC 2006 i686 GNU/Linux
Set the environment variable GRID_REMOTE_MACHINES to point to a set of machines that are available using automatic authentication. For example, in bash:
export GRID_REMOTE_MACHINES=host_1:host_2:...:host_n
Otherwise most connectivity tests will be skipped. This and the previous steps are optional.
perl Makefile.PL
make
make test
make install
http://search.cpan.org/dist/GRID-Cluster/lib/GRID/Cluster.pm
Subclipse is an Eclipse plugin that adds Subversion integration to the Eclipse IDE. Subclipse is licensed under the terms of the Common Public License (CPL) 1.0.
Our overview of Subclipse is below. Subclipse is an Eclipse plug-in that provides the functionality to interact with a Subversion server, and to manipulate a project on a Subversion server from within the Eclipse environment. Eclipse comes with a CVS plug-in already installed, but Subclipse needs to be installed separately.
Subversion is a version control system, like CVS. However, Subversion fixes some of the flaws in CVS like "lack of versioning support for directory and file metadata, namespace problems, complexity in administration, etc." [Visual Guide]. Subversion is under development by the same people who created CVS. Subversion is used as a version control system that allows multiple users to develop on the system at one time. Programmers download the latest version of the code from Subversion, make their changes, and then upload the files back to Subversion. Subversion keeps track of the changes and integrates them back into the main code base, or lets the programmer know if other modifications were made between their last download and subsequent upload.
You can add the Subclipse plug-in to Eclipse by creating an update site in Eclipse and then downloading and installing Subclipse.
2.1 Create an update site for Subclipse
2.1.1 Select Help > Software Updates > Find and Install. Select Search for new features to install. Click Next.
2.1.2 Click the New Remote Site button. Give the update site a name, like Subclipse, and type in the following address:
Click OK. A new update site will be added to the list.
2.1.3 Press the + next to your Subclipse update site. Eclipse will connect with the site and list all of the updates available. Select the most recent Subclipse update by checking the box next to the update. Click Next.
2.1.4 Check the Subversion feature that you want to install. Click Next.
2.1.5 Accept the terms of the licence agreement. Click Next.
2.1.6 Make sure that the Subversion feature is selected to install. Click Finish.
2.1.7 You will be asked to verify the feature that you wish to install. After you have, click Install.
2.1.8 You will now be asked to restart the workbench. Click Yes.
http://www.javafaq.nu/java-downloaddetails-180.html
Hi All,
Please try to find a place by location like " 50.31235 34.86754" (with a space before the numbers), then try "50.31235 34.86754".
I get completely different place results.
The first request returns (in Russian) "Т-17-05, Кнышовка, Гадячский район, Полтавская область, Украина" (T-17-05, Knyshovka, Hadiach district, Poltava region, Ukraine) and points to a road in the forest, the wrong place! The second request returns no text but points to the true geographic location (the town of Okhtyrka).
How can I fix it?
asked 28 Dec '12, 11:58 by VladUA
They're not quite "completely different places" - they're both on the same road.
Yes, but the search results are very different :) I will try to explain.
The right place is located near a building inside the neighborhood, 30 m from the T-17-05, inside the town of Okhtyrka. In the search result we get an address from another region (oblast), about 50 km from the place pointed to by the geo location.
I expect to get the nearest town address, not a road name with the address of the center of that road.
When you use the "Search" box on openstreetmap.org, it uses several different services.
If what you type in is recognised as coordinates, then the "Internal" service will give you a link directly to that point.
If your input is not recognised as coordinates, it will search Nominatim and GeoNames. These are designed for finding placenames and addresses. It seems Nominatim will give you the nearest road or address to the coordinates, which might be some distance away.
The problem is that the internal service only recognises coordinates if they are formatted in a particular way. ie only if they are decimal degrees. So as you've found, it doesn't work if they have extra spaces, or N/S, or degrees/minutes/seconds etc.
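The behaviour described above can be sketched with a strict, anchored pattern for decimal degrees. This is an illustration of the parsing issue, not the actual code openstreetmap.org uses:

```python
import re

# Anchored pattern: two signed decimal numbers separated by whitespace,
# with nothing else before or after.
COORD_RE = re.compile(r"^(-?\d+(?:\.\d+)?)\s+(-?\d+(?:\.\d+)?)$")

def parse_coords(query):
    """Return (lat, lon) if the query is bare decimal degrees, else None."""
    m = COORD_RE.match(query)
    if m is None:
        return None
    return float(m.group(1)), float(m.group(2))

print(parse_coords("50.31235 34.86754"))           # recognised as coordinates
print(parse_coords(" 50.31235 34.86754"))          # leading space: not recognised, falls through to Nominatim
print(parse_coords(" 50.31235 34.86754".strip()))  # trimming the input first fixes it
```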
You could report this by adding a ticket on
It may have been reported previously, so try searching first.
answered 28 Dec '12, 17:19 by Vclaw
Thanks for the response!
But trailing/leading spaces do not affect the numbers inside the request, so don't we have a trivial bug in the request processor?
OK, I will try to open a new ticket.
https://help.openstreetmap.org/questions/18741/strange-find-function-result
NEWS for the 2.5 release
This release includes important portability fixes for Windows
and MacOS. There are also a few new features.
* Warning: The undocumented, internal, function
pkcs1_signature_prefix has been renamed to
_pkcs1_signature_prefix, and with slightly different
behavior. Since this is an internal function, this is not
considered a change of ABI or API. Programs explicitly using
this function will break.
New features:
* Support for the salsa20 stream cipher.
Miscellaneous:
* Various portability fixes for MacOS and M$ Windows. A lot of
this work done by Martin Storsjö.
* In particular, Nettle now hopefully works on 64-bit Windows
builds, "W64", including the x86_64 assembly code.
* Documentation and example programs for the base16 and base64
functions. Was contributed by Jeronimo Pellegrini back in
2006, but unfortunately forgotten until now.
NEWS for the 2.4 release
This is a bugfix release only. It turned out ripemd160 in the
2.3 release was broken on all big-endian systems, due to a
missing include of config.h. nettle-2.4 fixes this.
The library is intended to be binary compatible with
nettle-2.2 and nettle-2.3. The shared library names are
libnettle.so.4.3 and libhogweed.so.2.1, with sonames still
libnettle.so.4 and libhogweed.so.2.
NEWS for the 2.3 release
* Support for the ripemd-160 hash function.
* Generates and installs nettle.pc and hogweed.pc files, for
use with pkg-config. Feedback appreciated. For projects
using autoconf, the traditional non-pkg-config ways of
detecting libraries, and setting LIBS and LDFLAGS, is still
recommended.
* Fixed a bug which made the testsuite fail in the GCM test on
certain platforms. Should not affect any documented features
of the library.
* Reorganization of the code for the various Merkle-Damgård
hash functions. Some fields in the context structs for md4,
md5 and sha1 have been renamed, for consistency.
Applications should not peek inside these structs, and the
ABI is unchanged.
* In the manual, fixed mis-placed const in certain function
prototypes.
The library is intended to be binary compatible with
nettle-2.2. The shared library names are libnettle.so.4.2 and
libhogweed.so.2.1, with sonames still libnettle.so.4 and
libhogweed.so.2.
NEWS for the 1.15 release
Added support for PKCS#1 style RSA signatures using SHA256,
according to RFC 3447. Currently lacks interoperability
testing.
Header files are now C++ aware, so C++ programs using Nettle
should now use plain
#include <nettle/foo.h>
rather than
extern "C" {
#include <nettle/foo.h>
}
as was the recommendation for the previous version. This
breaks.
https://git.lysator.liu.se/lumag/nettle/-/blame/f58d1c288f602d93f9e67b535d84c606047db3b9/NEWS
Connect to Swift Object Storage with CloudBerry
From the Bluemix dashboard side panel, select Overview. Click Show Credentials from your Object Storage instance.
Example:
{
"Object Storage Bluemix": [
{
"name": "Object Storage for Bluemix-public",
"label": "Object Storage Bluemix",
"plan": "Free",
"credentials": {
"auth_url": "",
"swift_url": "",
"sdk_auth_url": "",
"project": "organization-guid",
"region": "dal09",
"username": "maureen1",
}
}
]
}
Download and start CloudBerry Explorer for OpenStack Storage. Click File, then New OpenStack Account. Create a Display name. Complete the fields using your bound credentials as shown in the example. The User name is your "username" from the bound credentials. The Api key is your "password", and the Authentication Service is your "auth_url". Select Tenant name from the drop down. Your Tenant name is your "project". Select Keystone version 2 and the check box for "Use internal URLs when possible". Click Test Connection. You should see a Connection Success message. Select Close. Now connect by selecting Ok and Close.
Connect to Swift Object Storage with CyberDuck
Download and start CyberDuck. Click Open Connection and complete the fields using your bound credentials as shown in the example. The Server field is the “auth_url” from your bound credentials. The Port field is port 443. Username is the “project”. Click Connect.
In the Login window, specify your Tenant ID:Access Key with your “project” and “username” separated by a colon. For the Security Field key, enter your “password”. Click Login.
Background
Migration steps from Object Storage V1 to Beta refresh
Migration steps from Object Storage V2 to Beta refresh
If you have provisioned service instances of Object Storage for Bluemix V1 or V2 and would like to migrate your data to the latest version, then this document is for you.
Object Storage V1 - allows you to provision many Swift accounts underneath a single service instance. Therefore, V1 has a relationship of one service instance to many Swift accounts.
Object Storage V1 Bluemix tile
Sample V1 instance from the dashboard
Object Storage V2 - allows you to only provision one Swift account per Bluemix organization. The Swift account is created when the first service instance is created. Additional service instances only enable other Bluemix users to access the same Swift account. Therefore, V2 has a relationship of one service instance to one Swift account.
Object Storage V2 Bluemix tile
Sample V2 instance from the dashboard
Object Storage (beta refresh) – when you provision a new service instance a single Swift account is created. Therefore, the beta refresh has a relationship of one service instance to one Swift account.
Object Storage beta refresh Bluemix tile (same as V1)
Locate the V1 service instance on your dashboard.
Unbind any applications using this instance.
Open the V1 service instance by clicking on it.
The dashboard displays a list of users. Each user represents a Swift account.
Repeat steps 6 through 12 for each user (Swift account).
START REPEATED STEPS
Get the access token and storage URL for each user by performing a GET operation using the binding credentials base64 encode from the VCAP services.
Sample GET request
Sample binding credentials from VCAP services
Download all the data locally using the access token and storage url from the previous step.
Create a new service instance of the Object Storage beta refresh. Initially, leave this instance unbound.
The dashboard for the new instance displays a Service Credentials link on the left side. Click the link.
Use the Service Credentials to acquire an OpenStack Identity V3 token.
See:
Suggested Identity Request
The response from the previous step includes an X-Subject-Token in the header and an object-storage endpoint in the body.
Upload the data you downloaded in step 7, using the X-Subject-Token to the object-storage endpoint.
END REPEATED STEPS
Delete the Object Storage V1 instance.
Recode your application to use the Object Storage beta refresh.
Bind any applications that were unbound in step 2.
The migration is complete.
Locate the V2 service instance on your dashboard.
Open the V2 service instance by clicking on it.
Select the Show Credentials dropdown.
Use the credentials from the dropdown to acquire an OpenStack Identity V2 token.
See:
Identity v2 token request
The response body from the previous step includes an X-Auth-Token and an object-storage endpoint. Use the publicURL property of the object-storage endpoint.
Download all the data locally using the X-Auth-Token and object-storage URL from the previous step.
Create a new service instance of the Object Storage Beta refresh. Initially, leave this instance unbound.
The dashboard for the new instance displays a Service Credentials link on the left side. Click the link to display the Service Credentials.
Delete the Object Storage V2 instance.
Recode your application to use Object Storage beta refresh.
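The "Identity v2 token request" mentioned in the V2 steps above is likewise a standard Keystone v2 call (a POST to /v2.0/tokens); a sketch of its payload, again with placeholder credentials, might look like this:

```python
import json

def build_v2_auth_payload(username, password, tenant_id):
    # JSON body for POST <auth_url>/v2.0/tokens (Keystone v2).
    return {
        "auth": {
            "passwordCredentials": {
                "username": username,
                "password": password,
            },
            "tenantId": tenant_id,
        }
    }

payload = build_v2_auth_payload("<username>", "<password>", "<projectId>")
print(json.dumps(payload, indent=2))
# The response body carries access.token.id (used as the X-Auth-Token value)
# and a service catalog containing the object-store publicURL.
```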
These are Python snippets for accessing your Bluemix Object Storage service using Keystone v3. They demonstrate how to connect, and how to view, create, and delete containers and objects, download objects, and upload a file.
Prerequisites to install:
pip install python-swiftclient
pip install python-keystoneclient
pip install urllib3 certifi pyopenssl
#First you'll need to import the swiftclient, and your Bluemix credentials to establish a swift connection
import swiftclient.client as swiftclient
# Keystone v3 authentication url for IBM Public Cloud
auth_url = ''
# Uniquely identifies an OpenStack project
project_id = '<your projectId>'
# Uniquely identifies an OpenStack user
user_id = '<your userId>'
# Swift region
region_name = 'dallas'
# Password for authentication
password = '<your password>'
# Get a Swift client connection object
conn = swiftclient.Connection(
key=password,
authurl=auth_url,
auth_version='3',
os_options={"project_id": project_id,
"user_id": user_id,
"region_name": region_name})
# Container name for testing
container_name = 'new-container'
# File name for testing
file_name = 'example_file.txt'
# Create a new container
conn.put_container(container_name)
print("\nContainer %s created successfully." % container_name)
# List your containers
print ("\nContainer List:")
for container in conn.get_account()[1]:
    print(container['name'])
# Create a local file and upload its contents as an object
with open(file_name, 'w') as example_file:
    example_file.write('Hello object storage!')
with open(file_name, 'r') as example_file:
    conn.put_object(container_name,
                    file_name,
                    contents=example_file.read(),
                    content_type='text/plain')
# List objects in a container, and prints out each object name, the file size, and last modified date
print ("\nObject List:")
for container in conn.get_account()[1]:
    for data in conn.get_container(container['name'])[1]:
        print('object: {0}\t size: {1}\t date: {2}'.format(data['name'], data['bytes'], data['last_modified']))
# Download an object and save it locally (get_object returns a
# (headers, body) pair; the body is bytes, so write in binary mode)
obj = conn.get_object(container_name, file_name)
with open(file_name, 'wb') as my_example:
    my_example.write(obj[1])
print("\nObject %s downloaded successfully." % file_name)
# Delete an object
conn.delete_object(container_name, file_name)
print("\nObject %s deleted successfully." % file_name)
# To delete a container. Note: The container must be empty!
conn.delete_container(container_name)
print("\nContainer %s deleted successfully.\n" % container_name)
In this section we'll list several situations where function pointers can be useful.
A common problem is sorting, that is, rearranging a set of items into a desired sequence. It's especially common for the set of items to be contained in an array. Suppose you were writing a function to sort an array of strings. Roughly speaking, the algorithm for sorting the elements of an array looks like this:
    for all pairs of elements in the array
        if the pair is out of order
            swap them

In C, the outline of the function might therefore look like this:
stringsort(char *strings[], int nstrings)
{
    for( all pairs of elements i, j in strings ) {
        if(strcmp(strings[i], strings[j]) > 0) {
            char *tmp = strings[i];
            strings[i] = strings[j];
            strings[j] = tmp;
        }
    }
}

Remember, the library function strcmp compares two strings and returns either a negative number, zero, or a positive number depending on whether the first string was ``less than,'' the same as, or ``greater than'' the second string. If you haven't seen it before, the series of three assignments involving the temporary variable tmp is a standard idiom for swapping two items. (The more obvious pair of assignments
    strings[i] = strings[j];
    strings[j] = strings[i];

would just about as obviously not work, because the first line would obliterate strings[i] before the second line had a chance to assign it to strings[j].)
Naturally, a real sort function implementing a real sorting algorithm will be a bit more elaborate; in particular, the details of how we choose which pairs of elements to compare (and in what order) can make a huge difference in how efficiently the function completes its job. We'll show a complete example in a minute, but for now, our more important focus is on the comparison step. The final ordering of the strings will depend on the strcmp function's definition of what it means for one string to be ``less than'' or ``greater than'' another. How strcmp compares strings (as we saw in chapter 8) is to look at them a character at a time, based on their values in the machine's character set (which is how C always treats characters). In ASCII, the character set that most computers use, the codes representing the letters are in order, so strcmp gives you something pretty close to alphabetical order, with the significant difference that all the upper-case letters come before the lower-case letters. So a string-sorting function built around strcmp would sort the words ``Zeppelin,'' ``able,'' ``baker,'' and ``Charlie'' into the order
    Charlie
    Zeppelin
    able
    baker

and, because it compares character by character rather than by numeric value, it would sort the numeric strings ``1,'' ``2,'' ``3,'' ``4,'' ``12,'' ``24,'' and ``234'' into the order

    1
    12
    2
    234
    24
    3
    4
Depending on circumstances, we might want our string sorting function to sort into the order that strcmp uses, or into ``dictionary'' order (that is, with all the a's together, both upper-case and lower-case), or into numeric order. We could pass our stringsort function a flag or code telling it which comparison strategy to use, although that would mean that whenever we invented a new comparison strategy, we would have to define a new code or flag value and rewrite stringsort. But, if we observe that the final ordering depends entirely on the behavior of the comparison function, a neater implementation is to write our stringsort function to accept a pointer to the function which we want it to use to compare each pair of strings. It will go through its usual routine of comparing and exchanging, but whenever it makes the comparisons, the actual function it calls will be the function pointed to by the function pointer we hand it. Making it sort in a different order, according to a different comparison strategy, will then not require rewriting stringsort at all, but instead will just involve passing it a pointer to a different comparison function (after perhaps writing that function).
Here is a complete implementation of stringsort, which also accepts a pointer to the string comparison function:
void stringsort(char *strings[], int nstrings, int (*cmpfunc)())
{
    int i, j;
    int didswap;

    do {
        didswap = 0;
        for(i = 0; i < nstrings - 1; i++) {
            j = i + 1;
            if((*cmpfunc)(strings[i], strings[j]) > 0) {
                char *tmp = strings[i];
                strings[i] = strings[j];
                strings[j] = tmp;
                didswap = 1;
            }
        }
    } while(didswap);
}

(This code uses a fairly simpleminded sorting algorithm. It repeatedly compares adjacent pairs of elements, keeping track of whether it had to exchange any. If it makes it all the way through the array without exchanging any, it's done; otherwise, it has at least made progress, and it goes back for another pass. This is not the world's best algorithm; in fact it's not far from the infamous ``bubble sort.'' Although our focus here is on function pointers, not sorting, in a minute we'll take time out and look at a simple improvement to this algorithm which does make it a decent one.)
Now, if we have an array of strings
    char *array1[] = {"Zeppelin", "able", "baker", "Charlie"};

we can sort it into strcmp order by calling

    stringsort(array1, 4, strcmp);

Notice that in this call, we are not calling strcmp immediately; we are generating a pointer to strcmp, and passing that pointer as the third argument in our call to stringsort.
If we wanted to sort these words in ``dictionary'' order, we could write a version of strcmp which ignores case when comparing letters:
#include <ctype.h>

int dictstrcmp(char *str1, char *str2)
{
    while(1) {
        char c1 = *str1++;
        char c2 = *str2++;
        if(isupper(c1)) c1 = tolower(c1);
        if(isupper(c2)) c2 = tolower(c2);
        if(c1 != c2)
            return c1 - c2;
        if(c1 == '\0')
            return 0;
    }
}

(The functions isupper and tolower are both from the standard library and are declared in <ctype.h>. isupper returns ``true'' if its argument is an upper-case letter, and tolower converts its argument to a lower-case letter.)
With dictstrcmp in hand, we can sort our array a different way:
    stringsort(array1, 4, dictstrcmp);

(Some C libraries contain case-independent versions of strcmp called stricmp or strcasecmp. Both of these would do the same thing as our dictstrcmp, although neither of them is standard.)
We can also write another string-comparison function which treats the strings as numbers, and compares them numerically:
int numstrcmp(char *str1, char *str2)
{
    int n1 = atoi(str1);
    int n2 = atoi(str2);
    if(n1 < n2) return -1;
    else if(n1 == n2) return 0;
    else return 1;
}
Then, we can sort an array of numeric strings correctly:
    char *array2[] = {"1", "234", "12", "3", "4", "24", "2"};
    stringsort(array2, 7, numstrcmp);
(As an aside, you will occasionaly see code which is supposed to compare two integers and return a negative, zero, or positive result--i.e. just like the numstrcmp function above--but which does so by the seemingly simpler technique of just saying
    return n1 - n2;

It turns out that this trick has a serious drawback. Suppose that n1 is 32000, and n2 is -32000. Then n1 - n2 is 64000, which is not guaranteed to fit in an int, and will overflow on some machines, producing an incorrect result. So the explicit comparison code such as numstrcmp uses is considerably safer.)
Finally, since we've started looking at sorting functions, let's look at a simple improvement to the string sorting function we've just been using. It has been plodding along comparing adjacent elements, so when an element is far out of place, it takes many passes to percolate it to the correct position (one cell at a time). The improvement, based on the work of Donald L. Shell, is to compare pairs of more widely-separated elements at first, then proceed to compare more and more closely-situated elements, until on the last pass (or passes) we compare adjacent elements, as before. Since the earlier passes will have done more of the work (and more quickly), the later passes will just have to do the ``final cleanup.'' Here is the improvement:
void stringsort(char *strings[], int nstrings, int (*cmpfunc)())
{
    int i, j, gap;
    int didswap;

    for(gap = nstrings / 2; gap > 0; gap /= 2) {
        do {
            didswap = 0;
            for(i = 0; i < nstrings - gap; i++) {
                j = i + gap;
                if((*cmpfunc)(strings[i], strings[j]) > 0) {
                    char *tmp = strings[i];
                    strings[i] = strings[j];
                    strings[j] = tmp;
                    didswap = 1;
                }
            }
        } while(didswap);
    }
}

The inner loops are the same as before, except that where we had before always been computing j = i + 1, now we compute j = i + gap, where gap is a new variable (controlled by a third, outer loop) which starts out large (nstrings / 2), and then diminishes until it's 1. Although this code contains three nested loops instead of two, it will end up making far fewer trips through the inner loop, and so will execute faster.
The Standard C library contains a general-purpose sort function, qsort (declared in <stdlib.h>), which can sort any type of data (not just strings). It's a bit trickier to use (due to this generality), and you almost always have to write an auxiliary comparison function. For example, due to the generic way in which qsort calls your comparison function, you can't use strcmp directly even when you're sorting strings and would be satisfied with strcmp's ordering. Here is an auxiliary comparison function and the corresponding call to qsort which would behave like our earlier call to stringsort(array1, 4, strcmp) :
/* compare strings via pointers */

int pstrcmp(const void *p1, const void *p2)
{
    return strcmp(*(char * const *)p1, *(char * const *)p2);
}

...

    qsort(array1, 4, sizeof(char *), pstrcmp);

When you call qsort, it calls your comparison function with pointers to the elements of your array. Since array1 is an array of pointers, the comparison function ends up receiving pointers to pointers. But qsort doesn't know that the array is an array of pointers; it's written so that it can sort arrays of anything. (That's why qsort's third argument is the size of the array element.) Since qsort doesn't know what the type of the elements being sorted is, it uses void pointers to those elements when it calls the comparison function. (The use of void pointers here recalls their use with malloc, where the situation is similar: malloc returns pointers to void because it doesn't know what type we'll use the pointers to point to.) In the ``wrapper'' function pstrcmp, we use the explicit cast (char * const *) to convert the void pointers which the function receives into pointers to (pointers to char) so that when we use one * operator to find out what they point to, we get one of the character pointers (char *) which we know our array1 array actually contains. We pass the resulting two char *'s to strcmp to do most of the work, but we have to do a bit of work first to recover the correct pointers. (The extra const in the cast (char * const *) keeps the compiler from complaining, since the pointers being passed in to the comparison function are actually const void *, meaning that although it may not be clear what they point to, it's guaranteed that we won't be using them to modify whatever it is they point to. We need to keep a level of const-ness in the converted pointers so that the compiler doesn't worry that we're going to accidentally use the converted pointers to modify what we shouldn't.)
That was a rather elaborate first example of what function pointers can be used for! Let's move on to some others.
Suppose you wanted a program to plot equations. You would give it a range of x and y values (perhaps -10 to 10 along each axis), and ask it to plot y = f(x) over that range (e.g. for -10 <= x <= 10). How would you tell it what function to plot? By a function pointer, of course! The plot function could then step its x variable over the range you specified, calling the function pointed to by your pointer each time it needed to compute y. You might call
plot(-10, 10, -10, 10, sin)to plot a sine wave, and
plot(-10, 10, -10, 10, sqrt)to plot the square root function.
(If you were to try to write this plot function, your first question would be how to draw lines or otherwise do graphics in C at all, and unfortunately this is one of the questions which the C language doesn't answer. There are potentially different ways of doing graphics, with different system functions to call, for every different kind of computer, operating system, and graphics device.)
One of the simplest (and allegedly least user-friendly) styles of user interfaces is the command line interface, or CLI. The user types a command, hits the RETURN key, and the system interprets the command. Often, the first ``word'' on the line is the command name, and any remaining words are ``arguments'' or ``option flags.'' The various shells under Unix, COMMAND.COM under MS-DOS, and the adventure game we've been writing are all examples of CLI's. If you sit down to write some code implementing a CLI, it's simple enough to read a line of text typed by the user, and simple enough to break it up into words. But how do you map the first word on the line to the code which implements that command? A straightforward, rather simpleminded way is to use a giant if/else chain:
if(strcmp(command, "agitate") == 0) {
    code for ``agitate'' command
} else if(strcmp(command, "blend") == 0) {
    code for ``blend'' command
} else if(strcmp(command, "churn") == 0) {
    code for ``churn'' command
}
...
else
    fprintf(stderr, "command not found\n");
This works, but can become unwieldy. Another technique is to write several separate functions, each implementing one command, and then to build a table (typically an array of structures) associating the command name as typed by the user with the function implementing that command. In the table, the command name is a string and the function is represented by a function pointer. With this table in hand, processing the user's command becomes a relatively simple matter of searching the table for the matching command string and then calling the corresponding pointed-to function. (We'll see an example of this technique in this week's assignment.)
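Although this section is about C, the table-driven idea is easy to sketch in a few lines of Python, where ordinary function references play the role of C function pointers. The command names and handlers below are invented purely for illustration:

```python
def do_agitate(args):
    return "agitating " + " ".join(args)

def do_blend(args):
    return "blending " + " ".join(args)

# The dispatch table: command name -> handler function.
commands = {
    "agitate": do_agitate,
    "blend": do_blend,
}

def run(line):
    words = line.split()
    if not words:
        return ""
    handler = commands.get(words[0])   # look up the first word in the table
    if handler is None:
        return "command not found"
    return handler(words[1:])          # call the pointed-to function

print(run("blend fruit ice"))   # -> blending fruit ice
print(run("churn butter"))      # -> command not found
```

Adding a new command then means adding one function and one table entry, rather than growing an if/else chain.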
Our last example concerns larger, more elaborate systems which manipulate many types of data. (Here, by ``types of data,'' I am referring to data structures used by the program, presumably implemented as structs, but in any case more elaborate than C's basic data types.) It is often the case that similar operations must be performed on dissimilar data types. The conventional way of implementing such operations is to have the code for each operation look at the data type it's operating on and adjust its behavior accordingly; in the worst case, each piece of code (for each operation) will have a long switch statement or if/else chain switching among n separate, different ways of performing the operation on n different data types. If a new data type is ever added, new cases must be added to all segments of the code implementing all of the operations.
Another way of organizing such code is to place one or more function pointers right there in the data structures describing each data type. These pointers point to functions which are streamlined and optimized for performing the operations on just one data type. (Each piece of data would obviously have its pointer(s) set to point to the function(s) for its own data type.) This idea is one of the cornerstones of Object-Oriented Programming. We could use a version of it in our adventure game, too: rather than writing new, global pieces of code implementing each new command verb we want to let the user type, and then making each of those pieces of code check all sorts of attributes to ensure that the command can't be used on inappropriate objects (``break water with cup'', ``light candle with bucket,'' etc.) we could attach special-purpose pieces of code to the individual objects themselves (via function pointers, of course) and arrange that the code would only fire up if the player were trying to use the relevant object.
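A tiny illustration of the same idea, again sketched in Python rather than C (the object names and behaviors are invented): each object record carries its own behavior function, so no switch over types is needed.

```python
# Each game object carries its own 'use' behavior, like a function
# pointer stored in a C struct.
def use_candle(obj):
    return "The %s flickers to life." % obj["name"]

def use_bucket(obj):
    return "You slosh the %s around." % obj["name"]

candle = {"name": "candle", "use": use_candle}
bucket = {"name": "bucket", "use": use_bucket}

for thing in (candle, bucket):
    # No if/else chain on the object's type: each object knows its own code.
    print(thing["use"](thing))
```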
This page by Steve Summit // Copyright 1996-1999 // mail feedback
Arrays in Python work reasonably well, but compared to Matlab or Octave there are a lot of missing features. There is an array module that provides something more suited to numerical arrays, but why stop there when there is also NumPy, which provides a much better array object. Put simply, if you are going to use something other than the basic Python list as an array you might as well download NumPy - which is available for Python 2 and 3.
Assuming you have NumPy installed, all you need to do to use it is add
import numpy as np
to the start of any program.
The main thing that NumPy brings to the environment is the NumPy array.
This is an object, complete with methods, that wraps a static array of various data types.
Notice that the NumPy array is a completely separate data type from the Python list, and this means you can have two types of array-like entity within your program. The good news is that it is very easy to convert Python data types that are "array-like" to NumPy arrays.
A Python array is dynamic and you can append new elements and delete existing ones. A NumPy array is more like an object-oriented version of a traditional C or C++ array.
You can create NumPy arrays using a large range of data types from int8, uint8, float64, bool and through to complex128. Check the documentation of what is available. There is also a range of type conversion functions available.
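For example, a quick sketch of both creation-time dtypes and later conversion:

```python
import numpy as np

# Choose a dtype at creation time...
a = np.array([1, 2, 3], dtype=np.float64)
b = np.zeros(4, dtype=np.uint8)

# ...or convert an existing array with astype()
c = a.astype(np.int32)

print(a.dtype, b.dtype, c.dtype)   # float64 uint8 int32
```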
To create a NumPy array you can use the low level constructor ndarray. You can pass this a range of arguments to control the type of array you create but the simplest is to pass just the shape of the array. For example:
myArray=np.ndarray((3,3))
creates a 3 by 3 array of floats. The array is created in memory and uninitialized. This means that if you try to make use of any of the elements of myArray you will find some random garbage.
A more usual way of creating an array is to use one of either np.zeros(shape) or np.ones(shape) which create an array of the shape specified initialized to zeros or ones respectively. Similarly
np.arange(start,end,increment)
will create a one dimensional array initialized to values from start to end spaced by increment. There is also the linspace method that will create an array of a specified size with evenly spaced values.
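A quick illustration of both:

```python
import numpy as np

# arange(start, end, increment) -- the end value is excluded
r = np.arange(0, 10, 2)
print(r.tolist())    # [0, 2, 4, 6, 8]

# linspace(start, end, num) -- the end value is included by default
l = np.linspace(0.0, 1.0, 5)
print(l.tolist())    # [0.0, 0.25, 0.5, 0.75, 1.0]
```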
There are lots of other array creation methods including random, identity and so on.
You can also use the array method to convert a Python array object into a NumPy array. For example:
myArray=np.array([[1,2,3],[4,5,6],[7,8,9]])
Now that we can create a NumPy array it's time to find out how to use them.
You can index a NumPy array just like a Python array. So for example after
myArray=np.array([[1,2,3],[4,5,6],[7,8,9]])
you can write
myArray[1][2]
to get the element in row 1, column 2, i.e. 6 (remember NumPy arrays are indexed starting from 0). You can use more complex slicing and it all works exactly as for a Python array.
For example:
myArray[0:2]
is
array([[1, 2, 3],[4, 5, 6]])
and more to the point our original example of our failed attempt at two dimensional slicing still fails:
myArray[0:2][0:2]
is still
array([[1, 2, 3],[4, 5, 6]])
A slice is always a view of the NumPy array, i.e. it isn't a copy, and assigning to a slice changes the array, as is the case with a Python array.
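A short demonstration of the view behaviour:

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
s = a[0:2]        # a view of the first two rows, not a copy
s[0][0] = 99      # writing through the view...
print(a[0][0])    # ...changes the original array: prints 99
```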
The only real difference is that the array has a fixed size and cannot be extended or reduced.
The NumPy array goes well beyond what a standard Python array supports in terms of indexing.
The first big difference is that you can use a tuple as an indexing object.
The simplest case of this is to use a tuple of integers. For example
myArray=np.array([[1,2,3],[4,5,6],[7,8,9]])
myArray[1,2]
is 6.
If myArray was a simple Python array this would generate an error and you would have to write:
myArray[1][2]
Both index methods work with NumPy arrays.
Being able to use a tuple of integers is a simplification of notation but you can go one step further and use a tuple of slicers.
The rule is that each slicer operates on its corresponding dimension. That is, unlike the Python array, where multiple slicers operate on the result of previous slicers, the NumPy array implements things as you might want them to work, i.e. slicing each dimension in turn. For example, if you now try:
myArray[0:2,0:2]
you will discover that it does return the 2x2 sub matrix in the top left hand corner of the original array, i.e.
array([[1, 2],[4, 5]])
This works with any number of dimensions and each slicer is applied to the corresponding dimension to extract a sub-matrix. You can even use a step size to extract, say, every other row and column.
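Putting the two indexing styles side by side:

```python
import numpy as np

myArray = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# Chained slicing re-slices the rows of the first result:
print(myArray[0:2][0:2].tolist())   # [[1, 2, 3], [4, 5, 6]]

# A tuple of slicers slices each dimension in turn:
print(myArray[0:2, 0:2].tolist())   # [[1, 2], [4, 5]]

# A step size extracts every other row and column:
print(myArray[::2, ::2].tolist())   # [[1, 3], [7, 9]]
```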
All you have to remember is to specify the slicers as part of a tuple and not as individual index terms. Notice that if you don't specify one slicer for each dimension, the missing slicers are assumed to be :, i.e the entire dimension. For example
myArray[0:2]
is taken to mean
myArray[0:2,:]
You can also use the ellipsis object to add : slicers if you want to specify slicers from the other end of the dimensions. For example if bigArray has five dimensions:
bigArray[...,0]
specifies all of the rows columns and so on but with the final dimension set to index 0 i.e. it is equivalent to
bigArray[:,:,:,:,0]
Also notice that
myArray[0,:]
is
array([1, 2, 3])
i.e. row zero all column entries as a one dimensional array but
myArray[0:1,:]
is
array([[1, 2, 3]])
i.e. a two dimensional array consisting of just row zero. In general using an integer i returns an array with one less dimension than using the slicer [i:i+1] which returns the same elements.
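The difference in dimensionality is easy to check via the shape attribute:

```python
import numpy as np

myArray = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

print(myArray[0, :].shape)     # (3,)  -- an integer index drops a dimension
print(myArray[0:1, :].shape)   # (1, 3) -- a slice keeps the dimension
```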
Support of rounding mode in constant evaluation resulted in some
number of compile errors for constructs that previously were compiled
successfully. For example, the following code starts failing at
compilation if FP exception behavior is set to strict:
struct S { float f; };
static struct S x = {0.63};
This happens because setting strict behavior sets the rounding mode to
dynamic. In this case the value of the initializer depends on the current
rounding mode and cannot be evaluated at compile time.
Using dynamic as the rounding mode makes no sense outside function bodies.
For example, even if the initializer is evaluated dynamically, this happens
before the execution of main, so there is no possibility of setting a dynamic
rounding mode for it. C requires that initializers are evaluated using the
constant rounding mode. It makes sense to apply this rule to C++ as well.
With this change the dynamic rounding mode is applied to function bodies. In
other cases, like the evaluation of initializers, the constant rounding mode
is applied, which is 'tonearest' unless it was changed by '#pragma STDC
FENV_ROUND'.
thanks for this!
It's far from clear to me that this is correct in C++. In principle, for a dynamic initializer, the rounding mode could have been set by an earlier initializer.
Perhaps we can make an argument that, due to the permission to interleave initializers from different TUs, every dynamic initializer must leave the program in the default rounding mode, but I don't think even that makes this approach correct, because an initializer could do this:
double d;
double e = (fesetround(...), d = some calculation, fesetround(...default...), d);
I think we can only do this in C and will need something different for C++.
(I think this also has problems in C++ due to constexpr functions: it's not the case that all floating point operations that will be evaluated as part of the initializer lexically appear within it.)
I think the initializer for var_04 is evaluated at translation time, therefore the initialization expression would look the same as var_01 above, using the default rounding mode. The same goes for var_05 and var_06: they have "static duration".
Updated patch
I changed the code to confine it to C and constexpr only. Hopefully this will be enough to enable the build of SPECs attempted by @mibintc (). However, in the long-term perspective we should return to this code.
The intent was to align behavior of C and C++. If an initializer is valid in C, then it should produce the same code in C++. If the source code like
float appx_coeffs_fp32[3 * 256] = {
SEGMENT_BIAS + 1.4426950216,
…
produces compact table in C mode and huge initialization code in C++, it would be strange from user viewpoint and would not give any advantage.
C in C2x presents pretty consistent model, provided that #pragma STDC FENV_ROUND FE_DYNAMIC does not set constant rounding mode. Initializers for variables with static allocation are always evaluated in constant rounding mode and user can chose the mode using pragma FENV_ROUND.
When extending this model to C++ we must solve the problem of dynamic initialization. It obviously occurs in runtime rounding mode, so changing between static and dynamic initialization may change semantics. If however dynamic initialization of global variable occurs in constant rounding mode (changing FP control modes in initializers without restoring them is UB), static and dynamic initialization would be semantically equivalent.
We cannot apply the same rule to local static variables, as they are treated differently in C and C++. So the code:
float func_01(float x) {
#pragma STDC FENV_ACCESS ON
static float val = 1.0/3.0;
return val;
}
Would be compiled differently in C and C++.
The statement #pragma STDC FENV_ROUND FE_UPWARD changes constant rounding mode, so evaluation of the initializer for var_04 produces different result than in the case of var_01, which indeed is evaluated using default rounding mode.
Additional note:
If initialization is dynamic and the constant rounding mode is not default, the body of the initializer is executed with the dynamic rounding mode set to the constant mode. That is, the code:
#pragma STDC FENV_ROUND FE_UPWARD
float var = some_init_expr;
generates code similar to:
float var = []()->float {
#pragma STDC FENV_ROUND FE_UPWARD
return some_init_expr;
}();
So initializers of global variables must conform to:
This seems enough to provide compatibility with C and the same semantics for static and dynamic initialization.
ping!
@rsmith Serge changed the patch to confine it to C and constexpr only; is that adequate to move forward with this patch, so we can return to C++ at some point down the road?
This still seems inappropriate for the constexpr case -- we shouldn't have different behavior based on whether an expression appears directly in the initializer or in a constexpr function invoked by that initializer. (It also violates the C++ standard's recommendation that floating-point evaluation during translation and at runtime produce the same value.) Please see the discussion in D87528. Let's continue the conversation there; we shouldn't be splitting this discussion across two separate review threads.
Setting call-site rounding mode is not implemented yet.
In D88498#2329845, @sepavloff wrote:
Reverted check to the previous version, in which it applied to C++ file level variables also.
This presumably reintroduces the misbehavior for
in which some calculation will be treated as being in the default rounding mode, right?
Added a workaround for constexpr functions. Now they are parsed with the constant rounding mode, which allows them to be used with the option -frounding-math.
This is inappropriate. When a constexpr function is invoked at runtime, it should behave exactly like any other function. Marking a function as constexpr should not cause it to round differently when used outside of constant expressions.
Remade the patch
Actually this solution also behaves wrong in some cases.
The tests in this patch exhibit the same behavior with and without the patch applied; I think almost all the functionality changes from here are superseded by the change to consider whether we're in a manifestly constant evaluated context. As far as I can tell, it only affects the behavior of C++ dynamic initializers in FENV_ACCESS ON regions by making calls to feset* be undefined behavior. I'm unconvinced that's the right way to extend the behavior of the FENV_* functionality to C++. Consider this example:
#include <fenv.h>
#pragma STDC FENV_ACCESS ON
struct InRoundingMode {
int mode;
int oldmode = fegetround();
int ok1 = fesetround(mode);
double value;
int ok2 = fesetround(oldmode);
};
double d1 = InRoundingMode{.mode = FE_UPWARD, .value = 1.0 / 3.0}.value;
double d2 = InRoundingMode{.mode = FE_DOWNWARD, .value = 1.0 / 3.0}.value;
I don't think this is unreasonable: this code changes the rounding mode, performs a calculation, and then changes it back, all within a dynamic initializer, and all in an FENV_ACCESS ON region. I think it would be unreasonable to say that the FENV_ACCESS doesn't apply to the initializer, and the initializer therefore has undefined behavior.
So my inclination is to say that the status quo (prior to this patch) is preferable behavior. The new tests look valuable.
As discussed in other review threads, this should be a constant initializer with value 1.0; indeed, that's what we get both with and without the code change from this patch (this test fails when the patch is applied to Clang trunk for this reason).
This is not the diagnostic we give with this patch applied (applying this patch results in this test failing for me): we say "read of non-constexpr variable is not allowed in a constant expression". (We don't consider the initializer of the variable.)
This patch needs to be rebased; a test file with this name already exists.
In D88498#2339630, @rsmith wrote:
The tests in this patch exhibit the same behavior with and without the patch applied; I think almost all the functionality changes from here are superseded by the change to consider whether we're in a manifestly constant evaluated context.
Thank you for the review and the explanations!
So I am abandoning the patch.
I'll try to extract the tests to a new review item.
https://reviews.llvm.org/D88498
Type: Posts; User: knockturnal22
I'll try that. I think you gave me enough info to work with and figure out the next step.
I just want the first 4 elements. I'm trying to get it so I can then turn the y, m, d, t into a standard Java date-time format.
I fixed the problem but now trying to figure out how to isolate the specific columns.
while (inputStream.hasNext()){
String data = inputStream.next();
System.out.println ( data ) ;...
Yes, that's correct. Now trying to get the first 4 columns to display is my current goal.
I guess that's where I'm lost at. I didn't make note of what the values meant. Could you help me with pulling out four columns, which are the year, month, date and time, so I can use that to create something...
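The column isolation being asked about can be sketched like this (the class name and sample line are hypothetical, assuming whitespace-separated columns; the real layout depends on the attached data file):

```java
// Hypothetical sketch: split each whitespace-separated record and keep
// the first four fields (year, month, day, time).
public class FirstFourColumns {

    public static int[] firstFour(String line) {
        String[] cols = line.trim().split("\\s+");
        int[] out = new int[4];
        for (int i = 0; i < 4; i++) {
            out[i] = Integer.parseInt(cols[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        // made-up record: "year month day time temperature"
        int[] r = firstFour("2016 1 15 1230 12.7");
        System.out.println(r[0] + "-" + r[1] + "-" + r[2] + " " + r[3]);
    }
}
```

From there the four int values can be fed into whatever date formatting is needed.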
I do not have an array and that's what's throwing me off. I'm using what I learned and it should show me the first column but it's not.
import java.io.File;
import...
I also attached the file that I was pulling my data.
I'm using Eclipse and i attached a pic of the problem I'm having.
1441
I'm trying to get the year, month, day and time isolated so I can then I can make it populate in a different format.
import java.io.File;
import java.io.FileNotFoundException;
import...
If someone can guide me to where I can go to fix my problems, that would be helpful. My instructor gave me notes on what to do. I'm just trying to interpret the notes myself.
I was getting advice from a classmate and that's where I got stuck at. I do not want it to be infinite as you mentioned. I'm trying to figure out how to have it when type "0" it ends. really do...
I'm new at this. I'm trying to create a slot machine; very basic and simple slot machine.
I just need to know what coding am I missing to finish this.
I apologize I forgot the question.
I've been working on this for a few days now and I'm stuck and burned out. I'm working on a project for a class and trying to create a basic and simple slot machine for my presentation. This is what...
http://www.javaprogrammingforums.com/search.php?s=62cf4581d280d2dd426b330f8672a6b9&searchid=2052620
Subject: [RFC][v6][PATCH 9/9]: Document clone_with_pids() syscall

This gives a brief overview of the clone_with_pids() system call. We should
eventually describe more details either in clone(2) or in a new man page.

Signed-off-by: Sukadev Bhattiprolu <sukadev@vnet.linux.ibm.com>
---
 Documentation/clone-with-pids | 58 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)

Index: linux-2.6/Documentation/clone-with-pids
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6/Documentation/clone-with-pids	2009-09-09 21:53:30.000000000 -0700
@@ -0,0 +1,58 @@
+
+struct pid_set {
+	unsigned int num_pids;
+	pid_t pids[];
+};
+
+clone_with_pids(int flags, void *child_stack_base, int *parent_tid_ptr,
+		int *child_tid_ptr, NULL, struct pid_set *pid_setp)
+
+	The clone_with_pids() system call is identical to clone(), except
+	that it allows the user to specify a pid for the child process
+	in each of the child processes' pid name spaces.
+
+	This system call is meant to be used when restarting an application
+	from an earlier checkpoint. When restarting the application, the
+	processes in the application must get the same pids they had at the
+	time of the checkpoint.
+
+	The 'pid_setp' parameter defines a set of pids to use, one for each
+	pid-namespace of the child process.
+
+	On failure, clone_with_pids() returns -1 and sets 'errno' to one of
+	the following values (the child process is not created).
+
+	EPERM	Caller does not have the SYS_ADMIN privilege needed to execute
+		this call.
+
+	EINVAL	The number of pids specified in 'pid_set.num_pids' exceeds
+		the current nesting level of the parent process.
+
+	EBUSY	A requested 'pid' is in use by another process in that name
+		space.
+
+Example:
+
+	struct pid_set pid_set = { 3, {0, 99, 177} };
+	void *child_stack = malloc(STACKSIZE);
+
+	/* set up child_stack, like with clone() */
+	rc = clone_with_pids(clone_flags, child_stack, NULL, NULL, &pid_set);
+
+	if (rc < 0) {
+		perror("clone_with_pids()");
+		exit(1);
+	}
http://lkml.org/lkml/2009/9/10/42
In the previous post, I explained how delete_node works. Now I am going to explain the clone_node command. First, let's see a picture of this operation.
Please note: we are continuing to modify the outline from the previous post, but positions are renumbered. In the previous post, after deleting the node at position P5, the position that used to be at index 12 (P12) became the new position at index 5. However, every time we generate a picture, positions are renumbered so that we can more easily see what is changing. The actual values of positions are persistent, but their labels on the diagrams are not, and that is intentional.
Method clone_node

This operation makes a clone of the node at the given position and places the new cloned node just after it. Here is the definition of the clone_node(pos) method.
    def clone_node(self, pos):
        (positions, nodes, attrs, levels, expanded, marked) = self.data
        if pos == positions[0]:
            return
        # this node
        i = positions.index(pos)
        gnx = nodes[i]
        sz0 = attrs[gnx].size
        lev0 = levels[i]
        levs = [x - lev0 for x in levels[i:i+sz0]]

        # parent
        pi = parent_index(levels, i)
        pp = positions[pi]
        pgnx = nodes[pi]
        psz = attrs[pgnx].size

        # distance
        di0 = i - pi
        di1 = di0 + sz0

        update_size(attrs, pgnx, sz0)
        for pxi in gnx_iter(nodes, pgnx, psz + sz0):
            A = pxi + di0
            B = pxi + di1
            positions[B:B] = (random.random() for x in range(sz0))
            nodes[B:B] = nodes[A:A+sz0]
            levels[B:B] = levels[A:A+sz0]

        attrs[gnx].parents.append(pgnx)
        self._update_children(pgnx)
Nothing special here. Lines 5-16 are collecting necessary data about the node we are cloning and its parent.
In lines 19-20 we are calculating the relative distance of this node from its parent.
After updating the size of all ancestors (line 22), we have a for loop (lines 23-28) that iterates through each occurrence of the parent node, and for each occurrence we simply insert the cloned block of data. The inserted positions are newly generated; gnxes and levels are just copied.
And finally, in line 30, we add one more link to the parent node. At the end (line 31) we update the list of children in the parent node.
Method indent_node

The example outline is slightly changed to illustrate some special cases that we must handle properly when indenting a node. Here is the definition of the indent_node method:
    def indent_node(self, pos):
        '''Moves right node at position pos'''
        (positions, nodes, attrs, levels, expanded, marked) = self.data

        # this node
        i = positions.index(pos)
        if levels[i-1] == levels[i] - 1:
            # if there is no previous siblings node
            # can't be moved right
            return
        gnx = nodes[i]
        h, b, mps, chn, sz0 = attrs[gnx]
        lev0 = levels[i]

        # parent node
        pi = parent_index(levels, i)
        pp = positions[i]
        pgnx = nodes[pi]
        psz = attrs[pgnx].size

        chindex = levels[pi:i].count(lev0)
        pdist = i - pi

        # new parent
        npi = prev_sibling_index(levels, pi, i)
        npp = positions[npi]
        npgnx = nodes[npi]

        if npgnx in nodes[i:i+sz0]:
            # can not move node in its own subtree
            return

        # link node to the new parent
        mps[mps.index(pgnx)] = npgnx
        attrs[npgnx].children.append(gnx)
        del attrs[pgnx].children[chindex]

        # indent nodes in all clones of parent node
        done_positions = []
        for pxi in gnx_iter(nodes, pgnx, psz):
            a = pxi + pdist
            b = a + sz0
            done_positions.append(positions[a])
            done_positions.append(positions[pxi+npi-pi])
            levels[a:b] = (x+1 for x in levels[a:b])

        # preserve nodes that are being indented
        ns = nodes[i:i+sz0]
        levs = levels[i:i+sz0]

        # now we need to insert it to the destination parent and update outline
        sz1 = attrs[npgnx].size
        for pxi in gnx_iter(nodes, npgnx, sz1):
            if positions[pxi] not in done_positions:
                a = pxi + sz1

                positions[a:a] = [random.random() for x in levs]
                dl = levels[pxi] + 1 - levs[0]
                levels[a:a] = [x+dl for x in levs]
                nodes[a:a] = ns

        update_size(attrs, pgnx, -sz0)
        update_size(attrs, npgnx, sz0)
First it should be noted that levels in the outline may increase only by one. Any two adjacent items in the levels list (let’s name them a and b) must satisfy one of the following expressions:
a == b
a + 1 == b
b < a
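This invariant is easy to check mechanically; a small sketch (the function name is mine). Note that the three cases collapse to b <= a + 1:

```python
def valid_levels(levels):
    """True iff every adjacent pair (a, b) in a levels list satisfies
    one of: a == b, a + 1 == b, or b < a -- i.e. the level never
    jumps up by more than one."""
    return all(b <= a + 1 for a, b in zip(levels, levels[1:]))
```

For example, [0, 1, 2, 2, 1] is a valid levels list, while [0, 2] is not, because a node cannot be more than one level deeper than its predecessor.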
Indenting a node requires increasing its level, and if its level is already greater than that of the node to the left, the indent_node operation can't be performed. In essence, this is the case only when we try to indent a node that is first among its siblings. In lines 7-10 we check for this condition and return immediately if it holds.
Lines 11-27 are collecting necessary data about this node, its parent and its new parent. What is new here is line 21, which calculates the index of this node among its siblings: we count how many nodes have the same level as this node, starting from the parent index plus one. We have already checked that this node is not the first one. Then, with all data collected, we check once again that this operation is legal and that no node will become part of its own subtree (lines 29-31).
In lines 33-37 we change links as required for this operation.
In the loop (lines 39-46) we process all occurrences of our old parent node. In all these occurrences the nodes are already there, and we only need to increase the levels in the block that is at distance pdist. We also gather all positions that are processed, so that we don't process them again in the next phase. In our example outline we would have processed positions P7 and P18 here, but not P4.
Now we have to insert the indented subtree into every other occurrence of our new parent node (position P4). Lines 52-60 process all these remaining occurrences by inserting the indented subtree at a level one greater than the new parent's level (lines 58-59). We need to allocate new positions for these inserted nodes (line 57).
At the end we need to update the sizes of both the old and the new parent node and all their ancestors (lines 62-63).
In the next post you can find a description of the other tree operations.
https://computingart.net/leo-tree-model-6.html
A recursive function is a function that calls itself in its body, either directly or indirectly. Recursive functions have two important components:
Make a recursive call with a slightly simpler argument. This is called the "leap of faith" - your simpler argument should simplify the problem, and you should assume that the recursive call for this simpler problem will just work. As you do more problems, you'll get better at this step.
Use the recursive call. Remember that the recursive call solves a simpler version of the problem. Now ask yourself how you can use this result to solve the original problem.
Problem 1: Write a function sum that takes a single argument n and computes the sum of all integers between 0 and n. Assume n is non-negative.
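One possible solution sketch (it intentionally shadows the built-in sum, since the problem names the function sum):

```python
def sum(n):
    """Sum of the integers from 0 to n; assumes n is non-negative."""
    if n == 0:
        return 0               # base case
    return n + sum(n - 1)      # leap of faith: sum(n - 1) just works
```

For instance, sum(5) computes 5 + 4 + 3 + 2 + 1 + 0 = 15.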
Problem 2: Implement ab_plus_c, a function that takes arguments a, b, and c and computes a*b + c. You can assume a and b are both positive integers. However, you can't use the * operator. Try recursion!
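A possible solution sketch, recursing on b so that repeated addition replaces the forbidden * operator:

```python
def ab_plus_c(a, b, c):
    """Return a * b + c without using the * operator."""
    if b == 0:
        return c               # a * 0 + c == c
    return a + ab_plus_c(a, b - 1, c)
```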
Problem 3: Now write the recursive version of summation. Recall that summation takes two arguments, a number n and a function term, and returns the result of applying term to every number between 0 and n and taking the sum.
def summation(n, term):
    if n == 0:
        return term(0)
    return term(n) + summation(n - 1, term)
repeated, repeated
In Homework 2 you encountered the repeated function, which takes arguments f and n and returns a function equivalent to the nth repeated application of f. This time, we want to write repeated recursively. You'll want to use compose1, given below for your convenience:
def compose1(f, g):
    """Return a function h, such that h(x) = f(g(x))."""
    def h(x):
        return f(g(x))
    return h
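A possible recursive repeated built on compose1, taking repeated(f, 0) to be the identity function:

```python
def compose1(f, g):
    """Return a function h, such that h(x) = f(g(x))."""
    def h(x):
        return f(g(x))
    return h

def repeated(f, n):
    """Return a function equivalent to the nth repeated application of f."""
    if n == 0:
        return lambda x: x     # applying f zero times is the identity
    return compose1(f, repeated(f, n - 1))
```

For example, repeated(increment, 3) applied to 2 yields 5.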
This concludes the recursion portion of this lab. As a parting thought, keep in mind that recursion follows the same rules of evaluation that we've seen throughout the class. Try taking one of the above exercises and typing it into the online tutor! The remainder of the exercises will be various review problems.
For each of the following expressions, what must f be in order for the evaluation of the expression to succeed, without causing an error? Give a definition of f for each expression such that evaluating the expression will not cause an error.
f
f()
f(3)
f()()
f()(3)()
Find the value of the following three expressions, using the given values of t and s.
t = lambda f: lambda x: f(f(f(x)))
s = lambda x: x + 1

t(s)(0)     # 1
t(t(s))(0)  # 2
t(t)(s)(0)  # 3
While Bunch
In lecture, you saw that it was possible to compute factorial iteratively. Let's introduce a new function, a "falling" factorial that takes two arguments, n and k, and returns the product of k consecutive numbers, starting from n and working downwards. We're going to write this iteratively - use a while loop, instead of recursion.
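A possible iterative sketch (the function name falling is mine):

```python
def falling(n, k):
    """Product of k consecutive numbers counting down from n.

    falling(6, 3) is 6 * 5 * 4; falling(n, 0) is taken to be 1
    (the empty product).
    """
    total = 1
    while k > 0:
        total *= n
        n -= 1
        k -= 1
    return total
```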
You have seen continuous calculus in mathematics. However, there's another definition of the derivative. Call the discrete derivative of a function f the quantity: Δf(n) = f(n+1) - f(n). Write a higher order function make_deriv that takes as input f and returns another function that calculates the discrete derivative.
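A possible sketch, directly transcribing the definition Δf(n) = f(n+1) - f(n):

```python
def make_deriv(f):
    """Return the discrete derivative of f: the function n -> f(n+1) - f(n)."""
    def deriv(n):
        return f(n + 1) - f(n)
    return deriv
```

As a sanity check, the discrete derivative of n² at n = 3 is 16 - 9 = 7, which matches the closed form 2n + 1.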
Now, this type of calculus actually mirrors what you already know. For example, the product rule actually holds as well in some form: Δ(f(n)g(n)) = Δf(n) g(n+1) + Δg(n) f(n). Write another higher order function called make_product that takes two functions f and g and returns a function that computes the discrete derivative of the product. You can use the make_deriv function that you defined above.
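A possible sketch using make_deriv (repeated here so the snippet is self-contained):

```python
def make_deriv(f):
    """Discrete derivative: n -> f(n+1) - f(n)."""
    def deriv(n):
        return f(n + 1) - f(n)
    return deriv

def make_product(f, g):
    """Discrete derivative of the product f(n)*g(n), via the discrete
    product rule: delta(fg)(n) = delta(f)(n) g(n+1) + delta(g)(n) f(n)."""
    df, dg = make_deriv(f), make_deriv(g)
    def dprod(n):
        return df(n) * g(n + 1) + dg(n) * f(n)
    return dprod
```

With f and g both the identity, the product is n², and make_product agrees with taking make_deriv of n² directly.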
Draw the environment diagram for the following code:
def blondie(f):
    return lambda x: f(x + 1)

tuco = blondie(lambda x: x * x)
angel_eyes = tuco(2)
https://inst.eecs.berkeley.edu/~cs61a/fa13/lab/lab03/lab03.php
Below is a simplified version of a much more elaborate analysis code, but it will illustrate the problem I'm having. What I want to do is repetitively call an analysis function from my main code and plot the results of that analysis (a curve fit, although that's immaterial here) while in that function. Back in the main code, I want to plot the results of all the curve fits on a single plot. They share a common x axis, but appear at different points along it. What seems to be happening is that the current figure gets set in the function, and doesn't get set back in the main code.
Note that there are two versions of the "problem" function, problem and problem_alt. If you change the main code (move the #), you get the plot I want at the end of the main.
There must be something I can call or set to recover the settings associated with figure(2), but I can't seem to figure it out. Any help would be appreciated.
Thanks,
Bill Wing
#! /usr/bin/env python
# -*- coding: utf-8 -*-
#
# A simple skeleton of the program to work out the plotting problem
#
import numpy as np, matplotlib.pyplot as plt
#
# skeleton subroutines
#
def problem(xdata, ydata, i):
    color_dic = {1: "red", 2: "green", 3: "blue", 4: "cyan"}
    plt.figure(1)
    fig1, ax1 = plt.subplots()
    plt.subplot()
    plt.plot(xdata, ydata, linestyle = '-', color = color_dic[i])
    plt.savefig('Plot for run_num ' + str(i))
    return
def problem_alt(xdata, ydata, i):
    return
t = np.arange(0.0, 2.0, 0.01)
plt.figure(2)
fig2, ax2 = plt.subplots()
for i in range(0,4):
    i = i+1
    problem(t, np.sin(i*np.pi*3*t), i)
    problem_alt(t, np.sin(i*np.pi*3*t), i)
ax2.set_xlim(xmin = 0.0, xmax = 20.0)
plt.subplot()
plt.plot((t+i*3), np.sin(i*np.pi*3*(t+i*3)))
plt.savefig("Global Plot")
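One common fix (a sketch, not the poster's code): avoid matplotlib's implicit "current figure" state entirely and pass Axes objects into the helper. The extra plt.subplots() call inside problem() silently creates a new figure and makes it current, which is why figure(2)'s settings appear lost afterwards.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so this runs without a display
import numpy as np
import matplotlib.pyplot as plt

def analyze(ax, xdata, ydata, color):
    # Plot onto the Axes we were handed -- no global "current figure" state.
    ax.plot(xdata, ydata, linestyle='-', color=color)

t = np.arange(0.0, 2.0, 0.01)
fig1, ax1 = plt.subplots()  # per-run curve fits
fig2, ax2 = plt.subplots()  # summary plot
for i in range(1, 5):
    analyze(ax1, t, np.sin(i * np.pi * 3 * t), 'C%d' % (i - 1))
    ax2.plot(t + i * 3, np.sin(i * np.pi * 3 * (t + i * 3)))
ax2.set_xlim(0.0, 20.0)
fig2.savefig("global_plot.png")
```

Because each plot call names its Axes explicitly, the order of calls no longer matters and fig2 keeps its limits.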
https://discourse.matplotlib.org/t/problem-doing-multiple-independent-plots/20113
The simplest CGI program. This script displays the current version of Python and the environment values.
Discussion
Every CGI programmer should have some simple code handy to drop into their cgi-bin directory. Run this script before wasting time slogging through your Apache configuration files. :-)
cgi.escape. Maybe it's a little picky, but you should import cgi and use cgi.escape() to quote any potential HTML in the response, i.e.
Why? First, it makes the output more readable if you happen to include code for a "<" in your query. Second, it protects you from attacks like those described in the links below if someone should happen to find your script on a live web site.
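Carey's point still applies on modern Python, where the cgi.escape function no longer exists (it was deprecated and then removed in Python 3.8); html.escape is the equivalent. A small sketch of the same quoting idea (the helper's name is mine):

```python
import html

def render_query(query):
    # Escape user-controlled text before embedding it in an HTML response,
    # so markup in the query is displayed rather than interpreted.
    return "<p>You searched for: %s</p>" % html.escape(query)

print(render_query('<script>alert("x")</script>'))
```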
Apache information
CERT Advisory CA-2000-02
Zope Client Side Trojan information
(I wonder if this site allows arbitrary scripts?) Carey Evans
RE: cgi.escape. I've updated the script per Carey's suggestion to include cgi.escape. Thanks. Jeff Bauer
cgi.test() is even easier. Here's an even easier CGI program that I use for testing:
#!/usr/local/bin/python
import cgi
cgi.test()
This handles printing headers, and displays environment and form variables. Amos Latteier
Even smaller ... You may also want to make it shorter ;-)
cgi.test(). I may not have made my intention clear. This is called the "simplest" CGI program, not the "shortest". For a newbie Python programmer trying to write his first CGI script, the cgi.test() method hides too many details to be useful, IMO.
New Programmer Likes. I just downloaded PYTHON, and this is the first script I found in the cookbook that showed how to make a quick web page.
I appreciate it
Thanks
Help please, I am completely new to Python.
I want to test if python is working on my hosting account. I am told it is!
I have copied the text at the bottom of my post, i have then pasted it to notepad, saved it as ptest.cgi and secondly ptest.txt
uploaded it to my cgi-bin, inside a folder called python, chmod 755, then go to my page:
or
both give errors in log as : Premature end of script headers
What am I missing out, please . . .
#!/usr/local/bin/python

print "Content-type: text/html"
print
print "..."
Perfect Example. Thanks for the excellent example. It does exactly what it says. Very simply shows how to create a CGI application. Gives an excellent starting point from which anyone can proceed.
There is nothing worse than trying to get something to at least work (produce a result). Your example does this -- quickly and easily. Once you have a result, you can then build on it knowing that fundamentally it will work.
It was exactly what I was looking for.
Hugh,
My own simple CGI (perhaps not the simplest :-).
'Premature end of script headers' can be caused by using Windows as text editor. Brendan, your trouble with the 'Premature end of script headers' error message might be due to your having created the cgi script using a Windows text editor, and then uploading the script to your server using ftp.
Windows text editors generally save text files with "\r\n" as the set of characters that marks the end of a line; you want to save the file as a "UNIX" format file, so that only "\n" marks the end of a line.
Some more info is here:
Just vacation. Have a great vacation
http://code.activestate.com/recipes/52220/
Developers and hardware enthusiasts no longer need to wait on others to invent or build all the "cool" stuff!
The value of IoT is in both data and control. With home automation it is nice to have a log of events to know when a family member did something like got home or when they turned on the fireplace. The control aspects of IoT are really great for home automation. It's great to be able to play with the cat via my IoT cat toy while we are on a trip anywhere in the world with an internet connection!
My ventures into the IoT space all started with wanting to play with my kids while I was in the office and they were home swimming in the pool. I built an IoT squirt gun out of a netduino microcontroller, a couple of servos, a solenoid valve and a water hose. The details of this project can be read on the following article:
Home Automation with Netduino and Kinect
After having success with the squirt gun I started building other things that I could control in my home over the internet. I used a central netduino microcontroller and started adding control for many things to the same microprocessor. I ran wires through the walls and under my house and through my attic to control things such as the garage door, watering the gardens, and controlling the fireplace. I built what was probably the first IoT control for a fireplace. To top off the project I embedded a URL in a QR barcode that would open a webpage displaying a countdown. The JavaScript on the page calls a Web API which functions as a broker to pass a message over the network to the netduino microcontroller which in turn actuates a solenoid to turn on the gas for the fireplace. The details of that project can be read in the following article:
Using jQuery Mobile with MVC and Netduino for Home Automation
I came up with an idea to remotely control monsters in the yard for Halloween, but did not want to run wires all the way from my existing netduino to the front of the house. The solution was to purchase a $20 Ethernet bridge and a second netduino.
It became extremely cumbersome to run wires through the house to a central microcontroller. I also became concerned and did not want to build too much into the house that I could not take with me if we moved. With the IoT movement starting I realized that I could make each "thing" in the house smart and on the network so that I would not need to run extra wires. There is a lot of benefit to having each "thing" smart, including being flexible with moving things around.
I built the Logical Living open source home automation system out of inexpensive microcontrollers, custom circuits and other mostly household components. The code is open source through this and other Code Project articles. The mobile web interface has nearly a hundred and fifty buttons for controlling features in my house. I organically built up control for each "thing" and am starting to have an intelligent house. The best part is that it did not cost a bunch of money and even my laptop is more expensive than my home automation. There are multiple user interfaces including gestures and speech as can be read about in the following article:
Home Automation with Microsoft Kinect Point Cloud and Speech Recognition
I'm using a fleet of 4 different types of microcontrollers in my home automation system. I have learned the strengths and weaknesses of each type. In total I'm controlling 30 things with nearly 150 features. I have also learned the strengths and weaknesses of different design patterns for IoT.
The Netduino is an awesome open-source electronics prototyping platform based on the .NET Micro Framework. I'm using netduino plus 2 microcontrollers to control the fireplace, 4 gardens, garage, squirt gun, 5 Halloween things including monsters, and a cat toy. The code for the netduino is easy to maintain because it's object oriented and you can do real debugging with breakpoints and such. The netduino plus 2 microcontroller has a built-in Ethernet adapter for network communication. This adapter is not Wi-Fi, but third-party Wi-Fi Ethernet bridges can be added for around $20. The form factor of the netduino plus 2 is the same as arduino, and shields built for the arduino can be used with netduino. There is not nearly as much community around the netduino as there is arduino, but I prefer netduino over most of the arduino microcontrollers for complex projects because of its strengths of running well structured object oriented code; most arduino microcontrollers do not have real debugging. My company does a ton of C# development so it's a bonus to be able to code in C# on the .NET Micro Framework. VB.Net is supported too on the platform, however most people code in C#.
One of my netduinos in my IoT home automation system is tasked with controlling 5 Halloween things. There is a zombie that rubs the ground, a skeleton that jumps up, a skull that launches out on a string, a compressed air scare, and a main feature of a ghost in a coffin that has servos to pan and tilt its head at different angles. All of the devices can be controlled with a mobile website or from a Microsoft Kinect v2 application that senses where the kids are and makes the scene react to the kids' positions. The ghost in the coffin turns it head to look at one of the kids while they walk on the sidewalk beside the display.
Watch video of the Netduino in action controlling IoT monsters!
Watch this IoT Halloween video for more fun including a skull launcher feature!
The LogicalLiving.Netduino project has a class for everything that it controls. In addition, there are classes for Ethernet communication, servos, and a pan tilt class with two servos. These classes are used throughout the application and for an example the PanTilt class is reused for the squirt gun, ghost head control, and twice on the cat toy that will be talked about later in the article.
The Halloween class has private variables for everything that it controls. These private variables configure the pins on the netduino to be an input or an output.
private OutputPort _relayZombie = Config.ReusePins.GetInstance().OutputPort2;
private OutputPort _relaySkeleton = new OutputPort(Pins.GPIO_PIN_A4, false);
private OutputPort _launchSkullMotor = new OutputPort(Pins.GPIO_PIN_A3, false);
private OutputPort _airScare = new OutputPort(Pins.GPIO_PIN_D13, false);
PanTilt _ghost = Config.ReusePins.GetInstance().PanTilt1;
private InputPort _launchSkullLimitSwitch = new InputPort(Pins.GPIO_PIN_A2, true, Port.ResistorMode.PullUp);
I'm using one program that runs on multiple netduinos where different netduinos have different responsibilities. The Code Maintenance section of this article has more detail on this architecture. The code above has a _ghost object that pulls its pin configuration from a ReusePins singleton class to support using the pins for different functionality when the program is running on a different netduino with a different responsibility.
public class ReusePins
{
#region Private Variables
private static ReusePins _reusePins;
#endregion
#region Public Static Methods
public static ReusePins GetInstance()
{
if (_reusePins == null)
{
_reusePins = new ReusePins();
_reusePins.PanTilt1 = new PanTilt(Pins.GPIO_PIN_D9, Pins.GPIO_PIN_D6);
_reusePins.PanTilt2 = new PanTilt(Pins.GPIO_PIN_D10, Pins.GPIO_PIN_D5);
_reusePins.OutputPort1 = new OutputPort(Pins.GPIO_PIN_D3, false);
_reusePins.OutputPort2 = new OutputPort(Pins.GPIO_PIN_A5, false);
}
return _reusePins;
}
#endregion
#region Public Properties
public PanTilt PanTilt1;
public PanTilt PanTilt2;
public OutputPort OutputPort1;
public OutputPort OutputPort2;
#endregion
}
The constructor below reads from the configuration class to set limits for the servos. It is a good idea to put all of the configuration values in its own class so that there is one place to go to make updates.
public Halloween()
{
_ghost.Tilt.DegreeMax = (int)Config.Halloween.GhostTiltDegreeMax;
_ghost.Tilt.DegreeMin = (int)Config.Halloween.GhostTiltDegreeMin;
_ghost.Tilt.InvertAngle = true;
_ghost.Pan.DegreeMax = (int)Config.Halloween.GhostPanDegreeMax;
_ghost.Pan.DegreeMin = (int)Config.Halloween.GhostPanDegreeMin;
_ghost.Pan.InvertAngle = false;
_ghost.SweepSpeedMilliseconds = Config.Halloween.GhostSweepSpeedMilliseconds;
}
There are methods to control all of the actions of the monsters. The method below starts the zombie moving and calls a private method asynchronously through a Time class that I wrote to make working with asynchronous timed events easy.
public void MoveZombieTime(int seconds)
{
this.MoveZombie = true;
Time.RunOnDelay(TurnOffZombieCallback, seconds * 1000);
}
private void TurnOffZombieCallback()
{
this.MoveZombie = false;
}
The AirScare is the latest feature that I have added. It added quite a bit of impact for not much work and hardly any code. I found that you can use a $10 solenoid valve intended for watering a garden to release compressed air. My kids and I built a cool rocket launcher and I re-tasked the parts on Halloween to shoot compressed air through a hose that is fed through a bush to blow on the kids when they get close. The air shoots out so fast that it makes a boom noise that is scary in itself! On Halloween I keep my phone handy and push the button every time that I want to scare the kids with a half a second push from 110 pounds of compressed air and the wonderful boom noise.
public void AirScare()
{
_airScare.Write(true);
Thread.Sleep(500);
_airScare.Write(false);
}
I made a scare routine that moves all of the monsters at once. All of these features can be turned on from anywhere through the mobile web interface. This is the method that I run most often to scare people from a distance. MoveToPosition is a method of the PanTilt class that I originally added for the squirt gun project, but it's handy anytime you want to sweep slowly into a position. The code below sets the sweep speed to 15 milliseconds per degree of movement. The classes that I wrote to control the servos through pulse width modulation (PWM) are included in the source code as well.
public void Scare()
{
MoveZombieTime(20);
MakeSkeletonJump();
_ghost.SweepSpeedMilliseconds = 15;
Time.RunOnDelay(LaunchSkull, 4000);
for (int count = 0; count < 2; count++)
{
_ghost.MoveToPosition(170, 120);
_ghost.MoveToPosition(170, 60);
_ghost.MoveToPosition(10, 90);
_ghost.MoveToPosition(10, 120);
_ghost.MoveToPosition(10, 170);
_ghost.MoveToPosition(50, 50);
_ghost.MoveToPosition(110, 110);
_ghost.MoveToPosition(90, 90);
}
_ghost.DisengageServos();
}
The picture below shows how I mounted the skull to pan and tilt servos and even added LEDs for the eyes!
We have a spoiled cat and I came up with a way to spoil her more with help from the IoT! I built an IoT cat toy that is controlled online so that I can play with the cat while I'm anywhere with an internet connection. The cat toy has a netduino plus 2 microcontroller and a Netgear Ethernet bridge so that I can send messages over Wi-Fi to the cat toy from a broker that is a gateway to the internet. It's a single unit with the only external wires being the power cord.
There is a laser mounted to an assembly of pan and tilt servos. The methods to control the cat toy are in the CatToy class of the LogicalLiving.Netduino project. The laser can be moved around in a random pattern but I found that the cat loses interest if she cannot ever catch the laser. The laser will sweep to a random position and then 25% of the time pause for a short random time before sweeping to the next position.
public void RandomLaserPattern(int repeat)
{
this.FireLaser = true;
Random rnd = new Random();
for (int count = 0; count < repeat; count++)
{
_laser.SweepSpeedMilliseconds = rnd.Next(100);
_laser.MoveToPosition(rnd.Next(175), rnd.Next(60) + 90);
// Pause occasionally, for up to half a second
if (rnd.Next(100) > 75)
Thread.Sleep(rnd.Next(500));
}
_laser.SweepSpeedMilliseconds = 50;
this.FireLaser = false;
_laser.DisengageServos();
}
There is also a mouse puppet on a string connected to servos that moves around in set patterns.
public void MouseLeftRight()
{
_mouse.MoveToPosition(90, 90);
_mouse.Pan.Angle = 30;
Thread.Sleep(600);
_mouse.Pan.Angle = 150;
Thread.Sleep(600);
_mouse.Pan.Angle = 30;
Thread.Sleep(600);
_mouse.MoveToPosition(90, 90);
_mouse.DisengageServos();
}
public void MouseUpDown()
{
_mouse.MoveToPosition(90, 90);
_mouse.Tilt.Angle = 60;
Thread.Sleep(600);
_mouse.Tilt.Angle = 120;
Thread.Sleep(600);
_mouse.Tilt.Angle = 60;
Thread.Sleep(600);
_mouse.MoveToPosition(90, 90);
_mouse.DisengageServos();
}
It is important to notice how similar the code is for moving the cat toy laser, the cat toy mouse puppet, the ghost's head on Halloween, or the squirt gun. Object-oriented code makes the netduino very powerful and leads to highly reusable code.
Z-Wave is a wireless communications protocol designed for home automation. There are tons of commercially available Z-Wave devices, and some are even available on the shelf in retail chain stores. My home automation includes 15 Z-Wave devices, which are mostly lights and other 120 VAC powered things. Z-Wave products are designed using low-cost, low-power RF transceiver chips and typically cost around $50 to control a light or outlet. All of the devices send messages over a mesh network, which uses smart routing to deliver the messages to the targeted Z-Wave device node efficiently. The messages are chatty, to support the routing ability and to confirm that each message reached the proper Z-Wave node. Z-Wave devices need to be close enough together to communicate over their low-power radio; the range is around 100 ft (30 m). Z-Wave devices on their own make a great local network, but they need a broker as a gateway to be controlled from the internet. I'm using an Aeon Z-Stick Z-Wave USB Adapter connected to a broker that I wrote with a Web API.
The LogicalLiving.ZWave project contains all the methods to communicate with the Z-Wave USB Adapter; they are consumed by a Web API and by the LogicalLiving.Zwave.DesktopMessenger Windows Forms application. I built the desktop messenger forms application to learn about the byte arrays that need to be sent and received to support the Z-Wave messages. The byte array messages displayed in the UI helped me figure out how to hack the Aeon Z-Stick Z-Wave USB Adapter to send and receive the proper byte arrays. The desktop messenger is also helpful for troubleshooting Z-Wave devices that are not working; the most common thing I use it for now is figuring out a device's node.
The first step of the CommunicateWithZWave class of the LogicalLiving.ZWave project is to receive a message containing the deviceNode and the deviceState that you want to set it to.
public string Message(DeviceNode deviceNode, DeviceState deviceState)
{
if (_serialPort == null || !_serialPort.IsOpen)
OpenSerialPort();
string message = "";
if (deviceNode == DeviceNode.All)
{
UpdateStateOnAllDevices(deviceState);
}
else
{
message = AssuredZwaveMessage(deviceNode, deviceState);
}
return message;
}
If the serial port is not already open then it opens the serial port and sets up an event handler for serial data received.
private void OpenSerialPort()
{
_serialPort = new SerialPort();
//You can look up the COM port number in the computer devices.
//It shows up as a CP2102 USB to UART Bridge Controller in the computer devices.
_serialPort.PortName = "COM3";
_serialPort.BaudRate = 115200;
_serialPort.Parity = Parity.None;
_serialPort.DataBits = 8;
_serialPort.StopBits = StopBits.One;
_serialPort.Handshake = Handshake.None;
_serialPort.DtrEnable = true;
_serialPort.RtsEnable = true;
_serialPort.NewLine = System.Environment.NewLine;
_serialPort.Open();
_serialPort.DataReceived += new SerialDataReceivedEventHandler(_serialPort_DataReceived);
}
AssuredZwaveMessage is a private method used to send the message that sets the device state on the device node. It retries up to a hundred times before giving up.
private string AssuredZwaveMessage(DeviceNode deviceNode, DeviceState deviceState)
{
string returnMessage = "Message not sent!";
if (_serialPort.IsOpen)
{
byte[] message = new byte[] { 0x01, 0x09, 0x00, 0x13, (byte)deviceNode, 0x03, 0x20, 0x01,
(byte)deviceState, 0x05, 0x00 };
int retryCount = 0;
while (!SendMessage(message) && retryCount++ < 100)
{
Thread.Sleep(100);
}
returnMessage = ByteArrayToString(message);
}
return returnMessage;
}
The SendMessage method sends the byte array to the serial port for the USB adapter. All messages other than acknowledgement messages (0x06) require a checksum at the end, and writing to the serial port needs to be single-threaded.
private Boolean SendMessage(byte[] message)
{
if (_serialPort.IsOpen == true)
{
//All messages other than Acknowledgement Messages (0x06) require a checksum
if (message[0] != 0x06)
{
if (!SetMessagingLock(true)) return false;
_sendAcknowledgementAfterDataReceived = false;
message[message.Length - 1] = GenerateChecksum(message);
}
_serialPort.Write(message, 0, message.Length);
SetMessagingLock(false);
return true;
}
return false;
}
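The GenerateChecksum helper called above is not shown in the article. The Z-Wave serial API frames end in a checksum computed by XOR-ing every byte from the length byte through the last data byte into a starting value of 0xFF. A minimal sketch of that calculation (written in Java here for illustration; the article's broker code is C#, where the logic is identical) might look like:

```java
// Sketch of a Z-Wave serial-frame checksum: XOR bytes 1..n-2 into 0xFF.
// The frame layout matches the byte array built in AssuredZwaveMessage.
public class ZWaveChecksum {
    static byte generateChecksum(byte[] frame) {
        int checksum = 0xFF;
        // Skip the SOF byte (index 0) and the checksum slot (last index)
        for (int i = 1; i < frame.length - 1; i++) {
            checksum ^= frame[i] & 0xFF;
        }
        return (byte) checksum;
    }

    public static void main(String[] args) {
        // "Set device state" frame for node 2, state 0xFF (on)
        byte[] msg = { 0x01, 0x09, 0x00, 0x13, 0x02, 0x03, 0x20, 0x01,
                       (byte) 0xFF, 0x05, 0x00 };
        msg[msg.length - 1] = generateChecksum(msg);
        System.out.printf("checksum = 0x%02X%n", msg[msg.length - 1] & 0xFF);
    }
}
```

The adapter rejects frames whose checksum does not match, which is why SendMessage fills in the last byte just before writing to the serial port.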
Received messages other than acknowledgement messages need to be answered with an acknowledgement message of their own. An event is raised for each serial message received so that the desktop messenger application can display it. This information would normally stay private to the CommunicateWithZWave class, but displaying the byte arrays in the desktop messenger application helps you figure out the messages that need to be sent and received for the Z-Wave devices.
private void _serialPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
int bytesToRead = _serialPort.BytesToRead;
if ((bytesToRead != 0) & (_serialPort.IsOpen == true))
{
byte[] message = new byte[bytesToRead];
_serialPort.Read(message, 0, bytesToRead);
RaiseEvent(EventHandlerMessageReceived, ByteArrayToString(message));
if (_sendAcknowledgementAfterDataReceived)
{
SendAcknowledgementMessage();
}
_sendAcknowledgementAfterDataReceived = true;
}
}
The IR Toy is a USB device that allows you to send and receive IR signals. I use the IR toy to send IR commands to our TV, satellite receiver for TV and music, and audio receiver.
There is a WinLIRC Windows application that you can use to interface with the IR Toy. WinLIRC is the Windows equivalent of LIRC, the Linux Infrared Remote Control program. The configuration files for most commercial remote controls can be found online. You need to write your own broker to be able to control the IR Toy over the internet; the broker needs to take your web requests and launch a process to send the IR code.
The LogicalLiving.IRToy project handles launching the process that sends the IR signals to the devices, based on the configuration files loaded in WinLIRC. Sending an IR command is as simple as launching a process.
private static void SendIR(string remoteControl, string remoteCommand)
{
ProcessStartInfo startInfo = new ProcessStartInfo();
startInfo.FileName = "C:\\LogicalLiving\\LogicalLiving.IRToy\\WinLirc\\Transmit.exe";
startInfo.Arguments = remoteControl + " " + remoteCommand + " 0";
Process processTransmit = Process.Start(startInfo);
processTransmit.WaitForExit();
return;
}
Macros can be built by launching a new process for each command. For example if your favorite music station was channel 6008 then you would run the code below to send the command to change the channel.
SendIR("dish5", "6");
SendIR("dish5", "0");
SendIR("dish5", "0");
SendIR("dish5", "8");
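The digit-by-digit pattern above generalizes: any channel number can be broken into per-digit commands and sent in order. A small sketch of building such a macro (Java for illustration; the helper name is hypothetical, not part of the article's code):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical macro builder: split a channel number into per-digit
// remote-control commands, each of which would be passed to SendIR in turn.
public class ChannelMacro {
    static List<String> commandsFor(String channel) {
        List<String> commands = new ArrayList<>();
        for (char digit : channel.toCharArray()) {
            commands.add(String.valueOf(digit));
        }
        return commands;
    }

    public static void main(String[] args) {
        for (String cmd : commandsFor("6008")) {
            // In the article's broker this would be: SendIR("dish5", cmd);
            System.out.println("send: " + cmd);
        }
    }
}
```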
I'm not proud of my IR code, other than the fact that it works, the end result is neat, and it's handy to be able to control the TV from the internet. I plan on refactoring my IR code and hardware to run on a device that is not connected to a computer. For now it runs on a workstation in the home office and uses an IR repeater that converts the IR signal to a radio signal; radio receivers within line of sight of the devices read the radio signal and transmit the IR signal to them.
The Spark Core is a tiny Wi-Fi development board that makes it easy to create internet-connected hardware. The Spark Core runs the same C-like programming language that Arduino uses, so the masses who are familiar with Arduino are productive from the start. I prefer object-oriented languages with debugging capabilities, but the Spark Core makes up for this and more by having some really cool and incredible features.
Watch my Spark Core IoT Christmas Tree Video!!
I built a 20-foot Christmas tree that the Spark Core controls. The construction was simple and I did the project over two weekends: I constructed the tree the first weekend, then built the control circuit and wrote the software the second weekend. The tree was constructed by attaching 12 strands of lights to two threaded 10-foot cast iron pipes connected together with a plumbing fitting. The strands of lights are staked into the ground with tent stakes. The entire tree and control circuit was built for around $150. The lights that I'm using require 120 VAC power, so I'm using the Spark Core to drive solid state relays that are built for controlling 120 VAC or 240 VAC loads.
The pins are named at the start of the program.
int treeRelay1 = D0;
int treeRelay2 = D1;
int treeRelay3 = D2;
int treeRelay4 = D3;
int treeRelay5 = D4;
int treeRelay6 = D5;
int treeRelay7 = D6;
int treeRelay8 = D7;
int treeRelay9 = A4;
int treeRelay10 = A5;
int treeRelay11 = A6;
int treeRelay12 = A7;
The setup method runs once on reset and is used to configure 12 of the pins on the Spark Core to be digital output pins for each of the 12 branches of the tree.
void setup()
{
pinMode(treeRelay1, OUTPUT);
pinMode(treeRelay2, OUTPUT);
pinMode(treeRelay3, OUTPUT);
pinMode(treeRelay4, OUTPUT);
pinMode(treeRelay5, OUTPUT);
pinMode(treeRelay6, OUTPUT);
pinMode(treeRelay7, OUTPUT);
pinMode(treeRelay8, OUTPUT);
pinMode(treeRelay9, OUTPUT);
pinMode(treeRelay10, OUTPUT);
pinMode(treeRelay11, OUTPUT);
pinMode(treeRelay12, OUTPUT);
}
The lightTree method sets all 12 channels of the tree to the desired state of on or off.
void lightTree(boolean tree1, boolean tree2, boolean tree3, boolean tree4, boolean tree5,
boolean tree6, boolean tree7, boolean tree8, boolean tree9, boolean tree10,
boolean tree11, boolean tree12, int delayMilliSeconds)
{
digitalWrite(treeRelay1, tree1);
digitalWrite(treeRelay2, tree2);
digitalWrite(treeRelay3, tree3);
digitalWrite(treeRelay4, tree4);
digitalWrite(treeRelay5, tree5);
digitalWrite(treeRelay6, tree6);
digitalWrite(treeRelay7, tree7);
digitalWrite(treeRelay8, tree8);
digitalWrite(treeRelay9, tree9);
digitalWrite(treeRelay10, tree10);
digitalWrite(treeRelay11, tree11);
digitalWrite(treeRelay12, tree12);
delay(delayMilliSeconds);
}
The shiftTreeLeft method rotates all of the lights to the left. The delayMilliSeconds parameter sets the time delay between frames of rotation, and loopCount sets the number of frames in the rotation. Most of the time this count is 12 so that the pattern can rotate through all of the branches in the tree.
void shiftTreeLeft(boolean tree1, boolean tree2, boolean tree3, boolean tree4, boolean tree5,
boolean tree6, boolean tree7, boolean tree8, boolean tree9, boolean tree10,
boolean tree11, boolean tree12, int delayMilliSeconds, int loopCount)
{
for (int count = 1; count <= loopCount; count++)
{
lightTree(tree1, tree2, tree3, tree4, tree5, tree6, tree7, tree8, tree9, tree10,
tree11, tree12, delayMilliSeconds);
boolean treeHold = tree1;
tree1=tree2;
tree2=tree3;
tree3=tree4;
tree4=tree5;
tree5=tree6;
tree6=tree7;
tree7=tree8;
tree8=tree9;
tree9=tree10;
tree10=tree11;
tree11=tree12;
tree12=treeHold;
}
}
The fadeTree method is similar to the lightTree method, but each of the 12 branches has multiple states to indicate brightness instead of just a boolean on or off. The states range from 0 to 10, where 0 is off, 5 is 50% bright, and 10 is fully on. The Spark Core has 8 pins that can be used for pulse width modulation (PWM), but I have 12 branches on the tree that need individual control. This method therefore acts like a crude software PWM, cycling the on state with the off state in the correct proportion of time to reach the desired brightness level for each strand of lights on the tree.
void fadeTree(int tree1, int tree2, int tree3, int tree4, int tree5, int tree6, int tree7,
int tree8, int tree9, int tree10, int tree11, int tree12)
{
for (int pulse = 1; pulse <= 10; pulse++)
{
lightTree(pulse<=tree1, pulse <= tree2, pulse<=tree3, pulse<=tree4, pulse<=tree5,
pulse<=tree6, pulse<=tree7, pulse<=tree8, pulse<=tree9, pulse<=tree10,
pulse<=tree11, pulse<=tree12, 5);
}
}
The fadeTree method can be called many times with different frames of data to produce many cool effects.
void fadeTreeRotate()
{
fadeTree(10, 8, 6, 4, 2, 0, 2, 4, 6, 8, 10, 10);
fadeTree(8, 6, 4, 2, 0, 2, 4, 6, 8, 10, 10, 10);
fadeTree(6, 4, 2, 0, 2, 4, 6, 8, 10, 10, 10, 8);
fadeTree(4, 2, 0, 2, 4, 6, 8, 10, 10, 10, 8, 6);
fadeTree(2, 0, 2, 4, 6, 8, 10, 10, 10, 8, 6, 4);
fadeTree(0, 2, 4, 6, 8, 10, 10, 10, 8, 6, 4, 2);
fadeTree(2, 4, 6, 8, 10, 10, 10, 8, 6, 4, 2, 0);
fadeTree(4, 6, 8, 10, 10, 10, 8, 6, 4, 2, 0, 2);
fadeTree(6, 8, 10, 10, 10, 8, 6, 4, 2, 0, 2, 4);
fadeTree(8, 10, 10, 10, 8, 6, 4, 2, 0, 2, 4, 6);
fadeTree(10, 10, 10, 8, 6, 4, 2, 0, 2, 4, 6, 8);
fadeTree(10, 10, 8, 6, 4, 2, 0, 2, 4, 6, 8, 10);
}
The fadeAllTree method is used to fade all the lights on the tree from off to on and then fade them all off.
void fadeAllTree()
{
for (int count = 0; count <= 10; count++)
{
fadeTree(count,count,count,count,count,count,count,count,count,count,count,count);
}
for (int count = 10; count >= 0; count--)
{
fadeTree(count,count,count,count,count,count,count,count,count,count,count,count);
}
}
As in other Arduino-style Wiring programs, the loop method runs forever, causing the tree to repeat its awesome light show!
void loop()
{
shiftTreeLeft(1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 200, 12);
shiftTreeRight(1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 200, 12);
fadeAllTree();
fadeAllTree();
fadeAllTree();
fadeAllTree();
shiftTreeLeft(1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 600, 5);
shiftTreeLeft(1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 40, 12*4);
lightTree(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1000);
diamond(false);
delay(300);
diamond(true);
shiftTreeLeft(1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 150, 12*4);
fadeTreeRotate();
fadeTreeRotate();
fadeTreeRotate();
fadeTreeRotate();
shiftTreeRight(1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 80, 12*6);
lightTree(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1000);
}
It can become difficult to maintain a separate program for each "thing" in your fleet; the more things you have, the harder it is to maintain different versions of the code. In many cases it is easy to have the same program running on a fleet of the same type of microcontroller even when the boards have different functions. Rather than keeping separate programs for each netduino microcontroller, I presently have three netduinos running the same program, and each netduino learns its identity, and sets its responsibilities, from its MAC address. The same program behaves differently based on switches keyed to the microcontroller's identity.
The program on each netduino sets a static IP address based on the identity determined from its MAC address. A more flexible design would be to have each microcontroller connect to the network, obtain a DHCP IP address, and then send a message with its MAC address to an identity service, which would return a message containing identity and configuration information for that specific netduino.
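The identity scheme described above can be sketched as a simple lookup from MAC address to role. The addresses and role names below are made up for illustration (Java here for brevity; the netduino code itself is C#):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical identity lookup: one program image, per-board behavior
// selected by the board's MAC address.
public class IdentityService {
    static final Map<String, String> ROLES = new HashMap<>();
    static {
        ROLES.put("5C:86:4A:00:11:22", "CatToy");          // example addresses,
        ROLES.put("5C:86:4A:00:33:44", "HalloweenProps");  // not a real fleet
        ROLES.put("5C:86:4A:00:55:66", "SquirtGun");
    }

    static String roleFor(String macAddress) {
        return ROLES.getOrDefault(macAddress, "Unconfigured");
    }

    public static void main(String[] args) {
        System.out.println(roleFor("5C:86:4A:00:11:22"));
        System.out.println(roleFor("AA:BB:CC:DD:EE:FF"));
    }
}
```

An identity service like this could live behind the broker, so a freshly flashed board asks for its configuration instead of hard-coding it.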
The "thing" can be a server that accepts inbound requests; this pattern is set up by running a web server on the "thing". That might be OK security-wise on an internal network, but it is not secure for internet communications. One major issue is that a port needs to be opened through the firewall so that requests can be routed to the thing, which is insecure and not very portable to another network. The "thing" can also become overloaded with requests, and this pattern tightly couples it to whatever is communicating with it.
The easiest design pattern for IoT communication is having a "thing" be a client that connects to a service. It is common for the "thing" to post, put or get data. This pattern is more secure than the first pattern because no ports need to be opened up on the firewall. The "thing" can also control how often it connects to the service so it is not going to become overloaded with requests. Like the first pattern the "thing" is still tightly coupled with the service that it is communicating with.
With a polling strategy a "thing" can send and receive messages from a service without opening up a port on the firewall. The polling strategy can be implemented many ways including a long polling or WebSockets. The "thing" is not going to become overloaded with requests, but the service communicating with the "thing" will need to implement a queue to store messages until the thing is ready to process them.
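The queue that the service needs for a polling "thing" can be modeled in a few lines. This sketch (Java, illustrative only) shows the essential shape: publishers enqueue, the thing drains on its own schedule, and no inbound port is ever opened:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal model of a broker-side queue for a polling "thing".
class CommandQueue {
    private final Queue<String> pending = new ArrayDeque<>();

    // Called by web clients: store a command until the thing polls.
    synchronized void publish(String command) { pending.add(command); }

    // Called by the thing on each poll; returns null when nothing is waiting.
    synchronized String poll() { return pending.poll(); }
}

public class PollingDemo {
    public static void main(String[] args) {
        CommandQueue queue = new CommandQueue();
        queue.publish("lights/on");
        queue.publish("lights/off");
        String command;
        while ((command = queue.poll()) != null) {
            System.out.println("thing received: " + command);
        }
    }
}
```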
A broker is software middleware between the "thing" and whatever is communicating with it. Most of my previous IoT implementations have been done with "things" that are tightly coupled to a broker that dispatches messages to other "things". I have learned that a well designed broker can connect publishers with subscribers in a loose coupled approach without opening up a port in the firewall. I'm in the process of re-factoring my code to implement this approach leveraging MQTT protocol and an open source broker called Mosquitto.
A more flexible IoT communication design pattern is to have brokers on both sides of your network firewall: one broker local, and one hosted off your network in the cloud.
This allows for higher reliability; it is important that losing your internet connection does not prevent your home automation things from talking to other local things. If your things and brokers communicate with a polling strategy, then no ports need to be opened on the firewall and your things will not be overloaded with requests.
I have learned quite a bit from building my own IoT home automation.
I hope that you enjoyed this article and hopefully you got some ideas for your own projects!
If you are running Windows 7 then load the first link of Logical Living. If you have Windows 8 or higher and have a Kinect v2 sensor then load the Kinect v2 version. You will need to install the Kinect SDK 2.0 from Microsoft to get the Kinect code working. The speech recognition code requires the Kinect Speech Language Packs and Speech Runtime to work. You can test the speech basics-WPF sample app that installs as a sample with the SDK to make sure you have everything installed to make the speech recognition work. The Kinect Living code can run on a remote machine on the internet and interact with the WebApi on your local network.
Dan tweets IoT topics and technology frequently. Follow Dan on twitter: @LogicalDan.
https://www.codeproject.com/Articles/854907/IoT-for-Home-Automation
I'm working with GCC (g++) on Gentoo Linux. I wrote a C++ program to calculate the mean, variance and standard deviation of 8 numbers, but I feel the code is too long, and I also had trouble with a square function — sqrt works, but how do I get the square of a number? Second thing: how can I let the program ask how many numbers to calculate (in this code, 8 numbers are predefined)? Any ideas??
>>>>>>>>>>>>>>>code>>>>>>>>>>>>>>>>>>>>
#include <iostream>
#include <cmath>   // needed for sqrt()
using namespace std;

int main()
{
    int n, j;
    double max[8], _sum = 0, variance;
    for (n = 0; n <= 7; n++) {
        cout << "Type Numbers: " << endl;
        cin >> max[n];
    }
    n = 0;
    cout << endl;
    for (j = 0; j <= 7; j++) {
        cout << max[n] << endl;
        _sum = _sum + max[n];
        n++;
    }
    double mean = _sum / n;
    cout << endl << "The Sum is: " << _sum;
    cout << endl << "The mean is: " << mean;
    n = 0;
    j = 0;
    _sum = 0;
    for (j = 0; j <= 7; j++) {
        max[n] = (max[n] - mean) * (max[n] - mean);
        _sum = _sum + max[n];
        n++;
    }
    variance = _sum / (n - 1);
    cout << endl << "Variance is: " << variance;
    cout << endl << "Standard Deviation is: " << sqrt(variance) << endl;
    cout << endl;
}
>>>>>>>>>>>>code>>>>>>>>>>>>>>>>>
many thanks, friends
Rolf
http://www.dreamincode.net/forums/topic/20885-variance-and-standard-deviation/
Hello,

I am seriously punching my way to building Swish via .cabal... My head is totally to the wall... punch... punch... Graham Klyne has written some seriously beautiful code, and I am trying to get it to adhere to the contemporary Haskell namespace convention. I am still awaiting a response from Graham vis-a-vis posting on Hackage. I strongly think that in bioinformatics and also the oil/gas industry (<< I am stuck here every day ;^)) Swish is a strong arena of discussion! In any case, there are some issues:

1) I strongly suspect that in Swish 0.2.1 some of Graham's libraries are already superseded by the Haskell prelude, e.g. HUnit, Parsec(!!!), his Sort directory/library. Don... please

2) Graham wrote a deterministic finite automaton, which is giving some grief namespace-wise. Please see the following:

Swish/HaskellRDF/Dfa/Dfa.lhs:1:0:
    Failed to load interface for `Prelude':
      it is a member of package base, which is hidden

Here is a fragment of Dfa.lhs:

> {-# OPTIONS -fglasgow-exts #-}
> {-# OPTIONS -fallow-undecidable-instances #-}
> module Swish.HaskellRDF.Dfa.Dfa (
>     Re(..),
>     matchRe,
>     matchRe2
>     ) where

> {- ????
> import Control.Monad.Identity
> import Control.Monad.Reader
> import Control.Monad.State
> import Data.FiniteMap
> import List
> import Data.Array
> -}

import IOExts

The type of a regular expression.

> data Re t

"Dfa.lhs" 609 lines, 18871 characters

Very kind regards,
Vasili
http://www.haskell.org/pipermail/haskell-cafe/2009-May/061377.html
The reason for this article is to suggest a solution to Fortify Path Manipulation issues.
Many companies, like my own, use a tool called Fortify to analyze their code. Fortify is an HP application that finds memory leaks and security threats. It is likely the most popular and well known tool for these purposes. Companies use it to improve the quality of their code and to prepare for third party audits. There are some Fortify links at the end of the article for your reference.
One of the common issues reported by Fortify is the Path Manipulation issue. The issue is that if you take data from an external source, an attacker can use that source to manipulate your path, enabling the attacker to delete files or otherwise compromise your system.
Like many other people I found this issue in my code and did a search on the internet for a resolution. The most common proposal was to use the Java class FileSystems in the nio package to process the path. The intention is to obfuscate the fact that the path is coming from an external source. While this sometimes works, it does nothing to address the real issue.
The Fortify suggested remedy to this problem is to use a white-list of trusted directories as valid inputs and reject everything else. This solution is not always viable in a production environment because you can't always control where the client will be deploying your application. In my company's situation, we can't control where the client keeps their source data, so we need to support the flexibility of specifying a directory path during execution. In this situation, a list of valid paths will not work.
I am putting forward an alternative remedy.
Allow the user to enter the path and parse the input for a white-list of acceptable characters. Reject from the input, any character you don't want in the path. It could be either removed or replaced. This approach gives you control over what the user can input while still allowing them the flexibility they need to specify their data and configuration.
Below is an example. This does pass the Fortify review. It is important to remember here to return the literal and not the char being checked. Fortify keeps track of the parts that came from the original input. If you use any of the original input, you may still get the error.
Related Links:
Fortify:
HP:
Path Manipulation:
public class CleanPath {

    public static String cleanString(String aString) {
        if (aString == null) return null;
        String cleanString = "";
        for (int i = 0; i < aString.length(); ++i) {
            cleanString += cleanChar(aString.charAt(i));
        }
        return cleanString;
    }

    private static char cleanChar(char aChar) {
        // 0 - 9
        for (int i = 48; i < 58; ++i) {
            if (aChar == i) return (char) i;
        }
        // 'A' - 'Z'
        for (int i = 65; i < 91; ++i) {
            if (aChar == i) return (char) i;
        }
        // 'a' - 'z'
        for (int i = 97; i < 123; ++i) {
            if (aChar == i) return (char) i;
        }
        // other valid characters
        switch (aChar) {
            case '/': return '/';
            case '.': return '.';
            case '-': return '-';
            case '_': return '_';
            case ' ': return ' ';
        }
        return '%';
    }
}
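The character loop in CleanPath above can also be expressed as a single regular-expression replacement. This variant is an illustrative sketch — whether Fortify's taint analysis accepts this form should be verified against your rule pack — keeping the same white-list of alphanumerics, '/', '.', '-', '_' and space:

```java
// White-list path cleaning via regex: any character outside the allowed
// set is replaced with '%', mirroring the cleanChar fallback above.
public class CleanPathRegex {
    public static String cleanString(String aString) {
        if (aString == null) return null;
        return aString.replaceAll("[^A-Za-z0-9/._\\- ]", "%");
    }

    public static void main(String[] args) {
        // ':' and '\' are outside the white-list and become '%'
        System.out.println(cleanString("C:\\data\\report-2020_v1.txt"));
    }
}
```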
https://www.experts-exchange.com/articles/30499/Fortify-Path-Manipulation-Issues.html
Selection sort
In this sort, when sorting in ascending order, we first find the smallest (or largest) element and exchange it with the first (or last) element of the array. Next we apply the same operation to the array excluding the element just placed. This operation repeats (n-1) times, where n is the number of elements in the array. For sorting in descending order we do the reverse of the operation described above.
Selection Sort: let us start with an example.
The first two passes are given in the above figure; there will be (n-1) passes. In this case the array would be sorted from the end, as shown by the highlighted cells.
If we instead choose the smallest number, we exchange the first element with the smallest number found, and hence the array would be sorted from the beginning.
Program-71
#include <iostream>
using namespace std;

void selectionSort(int a[], int n){
    int i, j, t, k = n - 1, big, pos;
    for(i = 1; i <= n - 1; i++){
        big = a[0];
        pos = 0;
        for(j = 1; j <= n - i; j++){
            if(big < a[j]){
                big = a[j];
                pos = j;
            }
        }
        t = a[pos];
        a[pos] = a[k];
        a[k] = t;
        k--;
    }
}

In the function selectionSort(), the first loop counts the number of passes. The second loop, which searches for the largest number, goes from 1 to n-i (so 1 to n-1 on the first pass); it starts from 1 because we take the zeroth element as big before the second loop starts each time. A marker k initially points to position n-1. When, at the end of the second loop, we have found the largest number, we exchange it with the number at position k, and k then moves to position k-1 after every exchange. The process continues for n-1 passes.

int main(){
    int a[10], i;
    for(i = 0; i <= 9; i++){
        cout << "Enter the element :" << i + 1 << ":";
        cin >> a[i];
    }
    selectionSort(a, 10);
    cout << "The sorted array is as under:\n";
    for(i = 0; i <= 9; i++){
        cout << a[i] << " ";
    }
    return 0;
}
https://www.mcqtoday.com/CPP/datastructure/SelectionSort.html
The current stable 2.6 kernel is 2.6.26.3; it was released (along with 2.6.25.16) on August 20.
Both updates contain a large number of fixes for a wide variety of serious
problems.
Kernel development news
Quotes of the week
The kernel isn't sacred and it isn't a separate part of the
system. It needs to be seen as just one component of a fully
integrated system, especially by its developers.
-- Scott James Remnant
How about raising your quality control a bit, so that I don't have
to berate you? Send the _obviously good_ stuff during the merge
window, and don't send the "random crap" AT ALL. And then, during
the -rc series, you don't do any "obviously good" stuff at all, but
you do the "absolutely required" stuff.
Triggers: less busy busy-waiting
Busy waits are always undesirable, but, in some situations, they become
even more so. If the wait is going to be relatively long, it would be
better to put the processor into a lower power state. After all, nobody
cares if it executes its empty loop at full speed, or, even, whether the
loop executes at all. If the wait is running within a virtualized guest,
the situation can be even worse: by looping in the processor, a busy wait
can actively prevent the running of the code which will eventually provide
the event which is being waited for. In a virtualized environment, it is
far better to simply suspend the virtual system altogether than to let it
busy wait.
Jeremy Fitzhardinge has proposed a solution to this problem in the form of
the trigger API. A trigger
can be thought of as a special type of continuation intended for use in a
specific environment: situations where preemption is disabled and sleeping
is not possible, but where it is necessary to wait for an external event.
A trigger is set up in either of the two usual patterns:
#include <linux/trigger.h>
DEFINE_TRIGGER(my_trigger);
/* ... or ... */
trigger_t my_trigger;
trigger_init(&my_trigger);
There is a sequence of calls which must be made by code intending to
wait for a trigger:
trigger_reset(&my_trigger);
while(!condition)
trigger_wait(&my_trigger);
trigger_finish(&my_trigger);
Triggers are designed to be safe against race conditions, in that if a
trigger is fired after the trigger_reset() call, the subsequent
trigger_wait() call will return immediately. As with any such
primitive, false "wakeups" are possible, so it is necessary to check for
the condition being waited for and wait again if need be.
Code which wishes to signal completion to a thread waiting on a trigger
need only make a call to:
void trigger_kick(trigger_t *trigger);
This code should, of course, ensure that the waiting thread will see that
the resource it was waiting for is available before calling
trigger_kick().
A reader of the generic implementation of triggers may be forgiven for
wondering what the point is; most of the functions are empty, and
trigger_wait() turns into a call to cpu_relax(). In
other words, it's still a busy wait, just like before except that now it's
hidden behind a set of trigger functions. The idea, of course, is that
better versions of these functions can be defined in architecture-specific
code.
If the target architecture is actually a virtual machine environment, for
example, a
trigger can simply suspend the execution of the machine altogether. To
that end, there is a new set of paravirt_ops allowing hypervisors to
implement the trigger operations.
Jeremy has also created an implementation for the x86 architecture which
uses the relatively new monitor and mwait instructions.
In this implementation, a trigger is a simple integer variable. A call to
trigger_reset() turns into a monitor instruction,
informing the processor that it should watch out for changes to that
integer variable. The mwait instruction built into
trigger_wait() halts the processor until the monitored variable is
written to. No more busy waiting is required.
There is a certain elegance to the monitor/mwait
implementation, but Arjan van de Ven worries that it may prove to be too slow. So
changes to the x86 implementation are possible. There have not been a lot
of comments about the API itself, though, so the trigger functions may well
make it into the mainline in something close to their current form.
I guess the DAC object in the audio library needs to be updated.
I didn't follow the 4k of posts so I'll shut up now.
Edit... might have been ADCs I'm remembering...
I believe it can be used, but maybe not yet?
But according to Paul somewhere, the DAC is only good for about 10 bits above its noise floor.
Ccrma/~jos is pretty clear and concise. But not too much hand holding.
Know your analog theory too.
Nice list Andrew, thanks for that.
I disagree. If a thread goes too far off course from the initial enquiry the work you do on it becomes too specific to be useful to anyone but the OP.
Here, someone might reasonably stumble upon...
#include <Bounce.h>
// the MIDI channel number to send messages
const int channel = 1;
Bounce button0 = Bounce(0, 5);
Bounce button1 = Bounce(1, 5);
Bounce button2 = Bounce(2, 5);
byte...
Didn't you copy and paste?
Maybe that's good. But you missed altering these:
current_value_1
in the .sendControlChange message calls.
It does sound like too much MIDI being sent is overflowing the event buffer. Is there nothing in your code to limit the density of adjustment messages?
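One common way to limit message density (a sketch, not the poster's code) is to send a controller value only when it has changed and a minimum interval has elapsed. In plain C, with the clock passed in so the logic can be exercised off-hardware; the interval constant is an assumption:

```c
#include <stdint.h>

#define CC_MIN_INTERVAL_MS 10   /* assumed throttle interval */

/* Per-control throttle state. */
typedef struct {
    uint8_t  last_value;
    uint32_t last_send_ms;
    int      sent_once;
} cc_throttle_t;

/* Returns 1 if a CC message should be sent now, 0 if it is suppressed. */
static int cc_should_send(cc_throttle_t *t, uint8_t value, uint32_t now_ms)
{
    if (t->sent_once && value == t->last_value)
        return 0;                        /* no change: stay silent */
    if (t->sent_once && now_ms - t->last_send_ms < CC_MIN_INTERVAL_MS)
        return 0;                        /* too soon: drop this update */
    t->last_value = value;
    t->last_send_ms = now_ms;
    t->sent_once = 1;
    return 1;
}
```

On a Teensy you would call this with millis() before each usbMIDI send, so a fast-moving pot emits at most one CC per interval.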
I'm not clear on the 'window debugger'...
You don't need MIDI, but a piezo to midi sketch is a good place to start.
Paul gives example code for a single piezo trigger....
If I add pots to that code it will do what my original example code already does.
My code at post 55 should work when configured. Unless I still don't understand chains and controlling with CC...
//************LOOP**************
void loop() {
// getAnalogData(); // commented out to avoid garbage MIDI until you are ready with the analog voltage dividers
getDigitalData();
while...
You have to configure it... and you're going to have to start trying to read the code.
Forget the lights for now.
My code should let you configure any number of pins to send any arbitrary CC...
/* bespoke code example
By Leif Oddson
*/
//************LIBRARIES USED**************
So each and every button sends a CC number and value, always the same and only on press and not on release?
So not latching and not ON when pressed and OFF when released. All buttons. Correct?
I finally get what the chain selector is after a quick search.
What you need is another array that holds what value is sent with the CC message instead of just off and on.
Then you can...
Try replacing both sendNoteOn and sendNoteOff with sendControlChange
Is it a radio button effect?
What are you trying to show with the ableton screenies?
I'm on a tablet and no compiler available ATM.
I'll have a look next time I am but no promises as to when.
For latching code.
The main addition is an array to track the state of the button so you can send the opposite message on the next push.
Then you just listen for the falling edge only as you are...
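The falling-edge-plus-toggle idea described above reads roughly like this in plain C (a sketch with simulated pin reads, not the forum poster's actual Arduino code):

```c
#define NUM_BUTTONS 3

/* Previous raw reading per button (1 = released, the pull-up idle state). */
static int prev_reading[NUM_BUTTONS] = {1, 1, 1};
/* Latched on/off state, toggled on each press. */
static int latched[NUM_BUTTONS] = {0, 0, 0};

/* Feed one debounced reading for button i; returns the message to send:
 * 1 = send ON, 0 = send OFF, -1 = send nothing. */
static int update_button(int i, int reading)
{
    int event = -1;
    if (prev_reading[i] == 1 && reading == 0) {  /* falling edge: press */
        latched[i] = !latched[i];                /* toggle stored state */
        event = latched[i];
    }
    prev_reading[i] = reading;
    return event;
}
```

Releases and held buttons produce nothing; only presses generate an alternating ON/OFF, which is the latching behavior being discussed.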
Do you want latching (first press is ON next is OFF)?
It's a bit of a complication but changing to control change is otherwise just swapping note on and off calls with CC call like those in the...
If you could write a concise description of the type of controls and the desired behavior, I'm sure I can point you in the right direction. It's too difficult to extract your goal from scattered...
Er... what I mean is; just tie any pins not set to a functioning wiper to ground.
Any tied to ground should stay quiet for MIDI out.
A quick look shows nothing but I'm terrible at seeing error. A [ /code] would help for readability
The compiler will almost certainly have objection to the syntax (unless you have been very...
Could Thru be on and it's repeating the input?
Line 667 of file MIDI.hpp. shows thru is called at read. Maybe it's resetting the pin's state even when a Thru is off??
If you comment that out you...
Floating means not electrically connected to the circuit via any low impedance path (wire, solder, copper trace).
Configured Pins are those you set up as inputs when your program first loads.
The...
If you run Many_button_knobs with floating configured pins it will send enough garbage MIDI to crash your DAW.
Go ask on GitHub?
He does reply but not right away.
Author supports the product on GitHub. I wouldn't say it's not recommended but I'm not sure what it brings you can't do without it just as easily.
...
Yeah but you can buy quite a few Teensy LC for what a few hours of salary cost.
What about a MUX based thing that switches data lines to different physical USB A connectors. If when not active...
Then use separate power.
I finally read the diode thread... they are there because of the pull-up voltages.*
With voltage dividers the issue is the hot rail*
If you power the voltage...
...also... my PC maintains power to USB when frozen and even when off.
Then you need to have a redundant power supply that powers both teensy if either has 5v.
Otherwise I can't see how you power the voltage dividers.
You can't use regular MIDI? Seems to me you...
Consider connecting inputs only to one Teensy and using Tx on it to send MIDI as send data to the Rx of the other as a MIDI THRU port.
They'd have to share common ground so I think you'd want to...
How have you established it will not respond to valid sysex commands?
I don't know about you but I NEVER send good sysex on the first try. And this thing has four model# bytes and a bunch of...
....I should mention that, years ago, Teensy was the only way to do USB MIDI without fairly onerous kludges needed to get around how the device appears to the host for programing vs in operation....
Does this mean you're sorted?
Apparently Boss pedals are not usb midi compliant but Paul is open to addressing incompatibilities.
it works, and doesn't need a flush or prepacked variables.
Separete bank selectors would be slightly easier but you should be able to use the pad if you want.
You can retain the single loop structure if you have two paths within the loop, one if 'i' represents an event button and another if it is a modifier.
Then set modifier variables when not an...
Do you mean an option selector that is only active while held... like a shift key?
Search MIDI and BANK together for examples. Thay mostly use a variable to store the selected bank states but...
Edit.... nevermind....I think I'm off track...
Topmost row of page 4 indicates minimum supply is 3 volts.
But the performance may be poor given the data provided about higher voltages.
...and if you need to handle timing issues.......
I believe CLOCK is upper case.
Edit.... not sure if it is in MIDI? I see examples on Arduino forum with lowercase.
Edit 2... on my tablet so I can't check, but the library defs seem to say it's...
That's actually the same fatal error plus some new warnings.
You don't need an external MIDI controller library (and you won't really find support for them here as none of the regulars use them). ...
The Rule is full code for a reason....
But if the debug is firing I can't see that the send CC wouldn't also. So how are you confirming no CC is being sent?
This has come up before. Here is a post with some partially tested code.
Instructions are at the bottom of the USB MIDI page
Yes... the analog needs it too; even more so as it needs an accurate reading and not just to cross a threshold.
Switches do not have power connected.
They are connected to ground on one side and the data pin on the other.
If your red wire is ground and the...
It might not be you.
It's possible Teensy is running the code too fast for signals to stabilize after the mux is set.
You could try a 5-microsecond delay between lines 7 and 8 of the analog...
I have a fairly good memory for numbers, phone numbers in particular. This fact amazes my wife. For those numbers I cannot recall to the exact digit, I have a dozen or so slots in my cell phone. However, as the company I worked for grew, so did the list of people with whom I needed to stay in contact. And I didn't just need phone numbers; I needed email and postal addresses as well. My cell phone's limited capabilities were no longer adequate for maintaining the necessary information.
So I eventually broke down and purchased a PDA. I was then able to store contact information for thousands of people. Still, two or three times a day I found myself searching the company's contact database for someone's number or address. And I still had to go to other databases (phone books, corporate client lists, and so on) when I needed to look up someone who worked for a different company.
Computer systems have exactly the same problem as humans: both require the capability to locate certain types of information easily, efficiently, and quickly. During the early days of the ARPAnet, a listing of the small community of hosts could be maintained by a central authority, SRI's Network Information Center (NIC). As TCP/IP became more widespread and more hosts were added to the ARPAnet, maintaining a centralized list of hosts became a pipe dream. New hosts were added to the network before everyone had even received the last, now outdated, copy of the famous HOSTS.TXT file. The only solution was to distribute the management of the host namespace. Thus began the Domain Name System (DNS), one of the most successful directory services ever implemented on the Internet.[1]
[1] For more information on the Domain Name System and its roots, see DNS and BIND, by Paul Albitz and Cricket Liu (O'Reilly).
DNS is a good starting point for our overview of directory services. The global DNS shares many characteristics with a directory service. While directory services can take on many different forms, the following five characteristics hold true (at a minimum):
A directory service is highly optimized for reads. While this is not a restriction on the DNS model, for performance reasons many DNS servers cache the entire zone information in memory. Adding, modifying, or deleting an entry forces the server to reparse the zone files. Obviously, this is much more expensive than a simple DNS query.
A directory service implements a distributed model for storing information. DNS is managed by thousands of local administrators and is connected by root name servers managed by the InterNIC.
A directory service can extend the types of information it stores. Recent RFCs, such as RFC 2782, have extended the types of DNS records to include such things as server resource records (RRs).
A directory service has advanced search capabilities. DNS supports searches by any implemented record type (e.g., NS, MX, A, etc.).
A directory service has loosely consistent replication among directory servers. All popular DNS software packages support secondary DNS servers via periodic "zone transfers" that contain the latest copy of the DNS zone information.
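To make the "search by record type" point concrete, here is a small, hedged C sketch using the standard resolver API (getaddrinfo), which asks the name service for A records; it is illustrative and not taken from the book:

```c
#include <netdb.h>
#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Look up an IPv4 address for a host name and render it as text.
 * Returns 0 on success, or a nonzero getaddrinfo() error code. */
static int resolve_ipv4(const char *host, char *out, size_t outlen)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;       /* restrict the search to A records */
    hints.ai_socktype = SOCK_STREAM;

    int rc = getaddrinfo(host, NULL, &hints, &res);
    if (rc != 0)
        return rc;

    struct sockaddr_in *sin = (struct sockaddr_in *)res->ai_addr;
    inet_ntop(AF_INET, &sin->sin_addr, out, (socklen_t)outlen);
    freeaddrinfo(res);
    return 0;
}
```

Changing ai_family (or using res_query-style interfaces) selects other record types, which is the "search capability" characteristic in directory-service terms.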
Misc #10178
refinements unactivated within refine block scope?
Description
I doubt I am seeing a bug, but I was hoping someone could clarify for me the reason why I am seeing what I see. I tried poring over the spec and wasn't quite able to pin it down.
My use case of refinements is not the normal one, so this is not high priority by any means.
But I am curious why, if I have defined a refinement in, say, module A, and then module B is using A, if B itself has a refine block, A's refinements will not be active within it.
So:
module A
  refine Time
    def weekday
      self.strftime("%A")
    end
end

module B
  using A
  puts Time.now.weekday # 1
  refine ActiveSupport::Time
    def method_missing(method, *args)
      puts Time.now.weekday # 2
      self.to_time.send(method.to_sym, args.first)
    end
  puts Time.now.weekday # 3
end
1 and 3 will be defined, but 2 will not. Is it because according to:
"The scope of a refinement is lexical in the sense that, when control is transferred outside the scope (e.g., by an invocation of a method defined outside the scope, by load/require, etc...), the refinement is deactivated."
refine transfers control outside the scope of the module, so no matter where I put using, it will not have the refinements of A active?
I apologize for my ignorance and greatly appreciate your answers on this matter.
History
#1
[ruby-core:64598]
Updated by Nobuyoshi Nakada over 1 year ago
I can't get your point.
Module#refine requires a block, so your code doesn't work, simply.
#2
[ruby-core:64599]
Updated by Alexander Moore-Niemi over 1 year ago
Nobuyoshi Nakada wrote:
I can't get your point.
Module#refine requires a block, so your code doesn't work, simply.
Yes, I mistakenly left out the "do" after refine ActiveSupport::Time (which should be ActiveSupport::TimeWithZone) and refine Time; with it the code does indeed work, and my question still stands.
#3
[ruby-core:64601]
Updated by Alexander Moore-Niemi over 1 year ago
Here is an executable version of what I was roughing out above, I apologize for not vetting it beforehand to prevent confusion:
require 'active_support/core_ext'

module A
  refine Time do
    def weekday
      self.strftime("%A")
    end
  end
end

module B
  using A
  puts Time.now.weekday # 1
  refine ActiveSupport::TimeWithZone do
    def method_missing(method, *args) # undefined
      puts Time.now.weekday # 2
      self.to_time.send(method.to_sym, args.first)
    end
  end
  puts Time.now.weekday # 3
end
With #2 in, I will error out for undefined method.
#4
[ruby-core:64604]
Updated by Nobuyoshi Nakada over 1 year ago
- Status changed from Feedback to Closed
In general, the scope inside a method definition is different from outside.
Consider method arguments and class/module level local variables.
#5
[ruby-core:64611]
Updated by Alexander Moore-Niemi over 1 year ago
Nobuyoshi Nakada wrote:
In general, the scope inside a method definition is different from outside.
Consider method arguments and class/module level local variables.
So I was correct, in that refine invokes a different scope where the refinements aren't activated? Ok, cool.
That's kind of too bad though, because as you see in my example, it means it is harder to reuse a refinement across different object types. (In my production code I actually have to just duplicate code, which is unfortunate.) I imagine there's no plans to change that in the future, right? That plus the indirect method access (when is that going to happen?) could let me do this:
def method_missing(method, *args)
  if Time.respond_to?(method.to_sym)
    self.to_time.send(method.to_sym, args.first)
  end
end
Thanks again for your responses.
#6
[ruby-core:64612]
Updated by Alexander Moore-Niemi over 1 year ago
I had posted some more code but remembered "send" doesn't apply yet! Sorry for my confusion. Any plans on indirect method access?
On Tue, Feb 10, 2009 at 03:22:46PM -0500, Christoph Hellwig wrote:
...
> Index: xfs/fs/xfs/xfs_cksum.h
> ===================================================================
> --- /dev/null 1970-01-01 00:00:00.000000000 +0000
> +++ xfs/fs/xfs/xfs_cksum.h 2009-02-05 19:04:31.282972630 +0100
> @@ -0,0 +1,62 @@
> +#ifndef _XFS_CKSUM_H
> +#define _XFS_CKSUM_H 1
> +
> +#define XFS_CRC_SEED (~(__uint32_t)0)
Is this the final seed you want to use, or was this just a
work-in-progress-non-zero value for testing?
...
> +/*
> + * Convert the intermediate checksum to the final ondisk format.
> + *
> + * Note that crc32c is already endianess agnostic, so no additional
> + * byte swap is needed.
> + */
> +static inline __be32
> +xfs_end_cksum(__uint32_t crc)
> +{
> + return (__force __be32)~crc;
> +}
Why the bit-wise not?
Josef 'Jeff' Sipek.
--
Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by
definition, not smart enough to debug it.
- Brian W. Kernighan
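For context on the questions above: seeding with all ones and complementing the result are the standard CRC-32 conventions, so the bit-wise not in xfs_end_cksum() is the usual final inversion rather than anything XFS-specific. A minimal bit-at-a-time CRC32C sketch (illustrative only; the kernel uses table- or instruction-based crc32c, not this):

```c
#include <stdint.h>
#include <stddef.h>

/* Bit-at-a-time CRC32C (Castagnoli), reflected polynomial 0x82F63B78.
 * Seed with ~0 and complement the result, mirroring XFS_CRC_SEED and
 * xfs_end_cksum() in the quoted patch. */
static uint32_t crc32c(uint32_t crc, const unsigned char *buf, size_t len)
{
    while (len--) {
        crc ^= *buf++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0x82F63B78U & -(crc & 1));
    }
    return crc;
}
```

With this seed/invert convention, the CRC of "123456789" comes out to the well-known CRC-32C check value 0xE3069283, which is a handy way to validate any implementation.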
Widgets (experimental)
[Note: this document is currently intended to be a roadmap/design document. It may be converted over time to permanent documentation.]
Overview
During 2018 we built out a “widget system” in Zulip. It includes these features:
- /ping
- /day (and /night, /light, /dark)
- /poll (and /tictactoe, /todo) (BETA)
- zform-enabled messages for the trivia_quiz bot (BETA)
The beta features are only turned on for chat.zulip.org as of this writing.
There’s a strong overlap between widgets and slash commands, and many widgets are launched by slash commands. A few exceptions are worth noting. If you type “/me shrugs” in the compose box, it’s just a message that gets slightly customized rendering. And if you type “/settings”, it’s just a shortcut to open the settings popup. Neither of these are really “widgets,” per se.
Another exception, in the opposite direction, is our trivia_quiz bot. It does not involve slash commands. Instead it sends “extra_data” in messages to invoke zforms (which enable button-based UIs in the messages).
Here are some code entities used in the above features:
- ALLOW_SUB_MESSAGES setting
- SubMessage database table
- /json/zcommand API endpoint
- /json/submessage API endpoint
- static/js/zform.js
- static/js/zcommand.js
- static/js/submessage.js
- static/js/voting_widget.js
- static/js/widgetize.js
- static/templates/widgets/
- zerver/lib/widget.py
- zerver/lib/zcommand.py
- zerver/views/submessage.py
Simple slash commands
We support a few very simple slash commands that are intended for single users to do simple tasks:
- Ping the server
- Toggle day/night mode
Data flow
These commands have client-side support in zcommands.js. They send commands to the server using the /json/zcommand endpoint.
In the case of "/ping", the server code in zcommand.py basically just acks the client. The client then computes the round trip time and shows a little message above the compose box that the user can see and then dismiss.

For commands like "/day" and "/night", the server does a little bit of logic to toggle the user's night mode setting, and this is largely done inside zcommand.py. The server sends a very basic response, and then the client actually changes the display colors. The client also shows the user a little message above the compose box instructing them how to reverse the change.
It’s a bit of a stretch to label “/ping” and “/day” as widgets. In some ways they’re just compose-box shortcuts for doing UI tasks. The commands share the new “zcommand” namespace in the code, and both have some common UI for talking to users.
(It's possible that we don't really need a general /json/zcommand endpoint for these, and we may decide later to just use custom API endpoints for each command. There's some logic in having a central API for these, though, since they are typically things that only UI-based clients will invoke, and they may share validation code.)
Poll, todo lists, and games
The most interactive widgets that we built during 2018 are for polls, todo lists, and games. You launch widgets by sending one of the following messages:
- /poll
- /todo
- /tictactoe
These widgets are only turned on if you set the ALLOW_SUB_MESSAGES boolean to True in the appropriate settings.py. Currently the setting is only enabled for dev and our main community realm (chat.zulip.org). Also, only the webapp client provides the "widget experience". Other clients just show raw messages like "/poll" or "/tictactoe".
Our customers have long requested a poll/survey widget. See this issue. There are workaround ways to do polls using things like emoji reactions, but our poll widget provides a more interactive experience.
Data flow
The poll widget uses the “submessage” architecture. We’ll use the poll widget as a concrete example.
The SubMessage table, as the name indicates, allows you to associate multiple submessages to any given Message row. When a message gets sent, there's a hook inside of widget.py that will detect slash commands like "/poll". If a message needs to be widgetized, an initial SubMessage row will be created with an appropriate msg_type (and persisted to the database). This data will also be included in the normal Zulip message event payload. Clients can choose to ignore the submessage-related data, in which case they'll gracefully degrade to seeing "/poll". Of course, the webapp client actually recognizes the appropriate widgets.
The webapp client will next collect poll options and votes from users. The webapp client has code in submessage.js that dispatches events to widgetize.js, which in turn sends events to individual widgets. The widgets know how to render themselves and set up click/input handlers to collect data. They can then post back to /json/submessage to attach more data to the message (and the details are encapsulated with a callback). The server will continue to persist SubMessage rows in the database. These rows are encoded as JSON, and the schema of the messages is driven by the individual widgets. Most of the logic is in the client; things are fairly opaque to the server at this point.
The “submessage” architecture is generic. Our tictactoe widget and todo list widget use the same architecture as “poll”.
If a client joins Zulip after a message has accumulated several submessage events, it will see all of those events the first time it sees the parent message. Clients need to know how to build/rebuild their state as each submessage comes in. They also need to tolerate misformatted data, ideally just dropping data on the floor. If a widget throws an exception, it’s caught before the rest of the message feed is affected.
As far as rendering is concerned, each widget module is given a parent elem when its activate function is called. This is just a <div> inside of the parent message in the message pane. The widget has access to jQuery and template.render, and the developer can create new templates in static/templates/widgets/.

A good way to learn the system is to read the code in static/js/voting_widget.js. It is worth noting that writing a new widget requires only minor backend changes in the current architecture. This could change in the future, but for now a frontend developer mostly needs to know JS, CSS, and HTML.
It may be useful to think of widgets in terms of a bunch of clients exchanging peer-to-peer messages. The server’s only real role is to decide who gets delivered which submessages. It’s a lot like a “subchat” system.
Backward compatibility
Our “submessage” widgets are still evolving, and we want to have a plan for allowing future progress without breaking old messages.
Widget developers can revise code to improve a widget’s visual polish without too much concern for breaking how old messages get widgetized. They will need to be more cautious if they change the actual data structures passed around in the submessage payloads.
For significant schema changes, it would be worthwhile to add some kind of versioning scheme inside of SubMessages, either at the DB level or more at the JSON level within fields. This has yet to be designed. One thing to consider is that most widgets are somewhat ephemeral in nature, so it's not the end of the world if upgrades cause some older messages to be obsolete, as long as the code degrades gracefully.

Mission critical widgets should have a deprecation strategy. For example, you could add optional features for one version bump and then only make them mandatory for the next version, as long as you don't radically change the data model. And if you're truly making radical changes, you can always write a Django migration for the SubMessage data.
Adding widgets
Right now we don’t have a plugin model for the above widgets; they are served up by the core Zulip server implementation. Of course, anybody who wishes to build their own widget has the option of forking the server code and self-hosting, but we want to encourage folks to submit widget code to our codebase in PRs. If we get to a critical mass of contributed widgets, we will want to explore a more dynamic mechanism for “plugging in” code from outside sources, but that is not in our immediate roadmap.
This is sort of a segue to the next section of this document. Suppose you want to write your own custom bot, and you want to allow users to click buttons to respond to options, but you don’t want to have to modify the Zulip server codebase to turn on those features. This is where our “zform” architecture comes to the rescue.
zform (Trivia Quiz bot)
This section will describe our “zform” architecture.
For context, imagine a naive trivia bot. The trivia bot sends a question with the answers labeled as A, B, C, and D. Folks who want to answer the bot have to send an actual Zulip message with something like @trivia_bot answer A to Q01, which is kind of tedious to type. Wouldn't it be nice if the bot could serve up some kind of buttons with canned replies, so that the user just hits a button?
That is where zforms come in. Zulip's trivia bot sends the Zulip server a JSON representation of a form it wants rendered, and then the client renders a generic "zform" with buttons corresponding to short_name fields inside a choices list inside of the JSON payload.

Here is what an example payload looks like:
{ "extra_data": { "type": "choices", "heading": "05: What color is a blueberry?", "choices": [ { "type": "multiple_choice", "reply": "answer 05 A", "long_name": "red", "short_name": "A" }, { "type": "multiple_choice", "reply": "answer 05 B", "long_name": "blue", "short_name": "B" }, { "type": "multiple_choice", "reply": "answer 05 C", "long_name": "yellow", "short_name": "C" }, { "type": "multiple_choice", "reply": "answer 05 D", "long_name": "orange", "short_name": "D" } ] }, "widget_type": "zform" }
When users click on the buttons, generic click handlers automatically simulate a client reply, using a field called reply (in choices) as the content of the message reply. Then the bot sees the reply and grades the answer using ordinary chat-bot coding.
The beautiful thing is that any third party developer can enhance bots that are similar to the trivia_quiz bot without touching any Zulip code, because zforms are completely generic. (The only caveat is that the server must turn on ALLOW_SUB_MESSAGES.)
Data flow
We can walk through the steps from the bot generating the zform to the client rendering it.
First, here is the code that produces the JSON.
def format_quiz_for_widget(quiz_id: str, quiz: Dict[str, Any]) -> str:
    widget_type = 'zform'
    question = quiz['question']
    answers = quiz['answers']
    heading = quiz_id + ': ' + question

    def get_choice(letter: str) -> Dict[str, str]:
        answer = answers[letter]
        reply = 'answer ' + quiz_id + ' ' + letter
        return dict(
            type='multiple_choice',
            short_name=letter,
            long_name=answer,
            reply=reply,
        )

    choices = [get_choice(letter) for letter in 'ABCD']

    extra_data = dict(
        type='choices',
        heading=heading,
        choices=choices,
    )

    widget_content = dict(
        widget_type=widget_type,
        extra_data=extra_data,
    )

    payload = json.dumps(widget_content)
    return payload
The above code processes data that is specific to a trivia quiz, but it follows a generic schema.
The bot sends the JSON payload to the server using the send_reply callback. The bot framework looks for the optional widget_content parameter in send_reply and includes that in the message payload it sends to the server.

The server validates the schema of widget_content using check_widget_content. Then code inside of zerver/lib/widget.py builds a single SubMessage row to contain the zform payload, and the server also sends this payload to all clients who are recipients of the parent message.
When the message gets to the client, the codepath for zform is actually quite similar to what happens with more customized widgets like poll and tictactoe. (In fact, zform is a sibling of poll and tictactoe; zform just has a somewhat more generic job to do.) In static/js/widgetize.js you will see where this code converges, with snippets like this:
widgets.poll = voting_widget;
widgets.tictactoe = tictactoe_widget;
widgets.todo = todo_widget;
widgets.zform = zform;
The code in static/js/zform.js renders the form (not shown here) and then sets up a click handler like below:
elem.find('button').on('click', function (e) {
    e.stopPropagation();

    // Grab our index from the markup.
    var idx = $(e.target).attr('data-idx');

    // Use the index from the markup to dereference our
    // data structure.
    var reply_content = data.choices[idx].reply;

    transmit.reply_message({
        message: opts.message,
        content: reply_content,
    });
});
And then we are basically done!
Slash commands
This document is more about “widget” behavior than “slash command” interfaces, but there is indeed a lot of overlap between the two concepts.
We will soon introduce typeahead capability for slash syntax, including things that are somewhat outliers such as the “/me” command.
If certain widget features are behind feature flags, this will slightly complicate the typeahead implementation. Mostly we just need the server to share any relevant settings with the client.
I'm about to reinstall numpy and scipy on my Ubuntu Lucid. As these things carry quite a few dependencies, I'm wondering if there is a comprehensive test suite to check if the new install really works.
Of course, I can just take a bunch of my scripts and run them one by one to see if they keep working, but that won’t guard against a situation where at some point in the future I’ll try to use something I didn’t use before and it’ll break (or, worse, silently produce nonsence).
Yes. Both packages have a test method for this.
import numpy
numpy.test('full')

import scipy
scipy.test('full')
You will need to have pytest and hypothesis installed to run numpy.test.
Converting to another web framework: Basic apps in Symfony and Django
Many times have I heard the following from a developer: "I am scared to change technologies", "I am excited but I'm afraid it will be entirely different", "I only know <insert web framework here>, I've never seen any <insert another web framework here>". The examples here will be in Symfony2, a modern PHP web framework, and Django, another such framework for Python. Both frameworks are widely used in companies today, for the development of small to medium-sized web applications and backends. In fact, we have a few interesting articles in Python (mostly Django, since it's one of our favourite frameworks) and PHP.
You can learn about the basic similarities between these two frameworks, but even if your speciality is Rails, Spring or any other web framework, you can use this article to better understand your specialty and to let go of any fears about converting. For the sake of good examples, I will be using the user stories and application development from the Symfony2 Jobeet tutorial, along with bits and pieces from the Django Write your first app tutorial.
Installing a web framework is easy
You can always experiment with features of a new web framework release using the modern installers. Nowadays specialised installers (package managers, if you wish) will automatically download the framework version of your choice and will bootstrap a project for immediate running. Usually the steps go as such: install the programming language, the installer, edit a settings file then use a shell utility provided by your framework to run your project on a local server.
In Symfony, it goes like this:
sudo apt-get install php5
sudo curl -LsS -o /usr/local/bin/symfony
sudo chmod a+x /usr/local/bin/symfony
symfony new jobeet 2.8
php app/console server:run
Which leads you to the splash screen of your first Symfony application running on localhost. Sometimes you need to go through additional steps, like adding your timezone in the php.ini file, which takes a couple of minutes of your time. In case you haven’t noticed, our programming language is PHP (the php5 package), our installer is called symfony and we start our project using the 2.8 LTS version of the framework. Our shell utility is app/console, which we will use extensively throughout the development process with various commands. You will also find settings files like parameters.yml and config.yml in the app/config path, where you can edit your defaults.
In Django, the installation is similar:
sudo apt-get install python python-pip python-django-common python-django
django-admin startproject jobeet_py
python manage.py runserver
Again, we have installed python, the pip and django-admin installers, and we have started a project (called jobeet_py). We then run it using the manage.py shell utility (which we will use a lot in Django development) and we will see a welcome screen on localhost. Your settings file is in the root folder of your project, in settings.py.
The whole process is made this way because the programmers require confirmation that their settings and framework installation are correct. After making sure the framework is installed and properly configured, we can move to the next step.
Starting the application
Symfony organises code into bundles, while Django prefers the equivalent app naming. The first things to do after installing the framework itself is to start your own separate project, which will use but not rewrite elements from the framework.
Run this in your symfony project root folder and answer all questions using the default values.
php app/console generate:bundle --namespace=Ens/JobeetBundle --format=yml
It will generate a folder structure in your src folder, under Ens/JobeetBundle. An automatic action is also added, but we will go into more detail on routes, actions and views later. For now, know that your custom code will go inside this newly-created structure. Don’t forget to clear your cache, as explained in the Jobeet tutorial.
In Django, we do more or less the same thing, using our shell utility to create an app:
python manage.py startapp jobeet
You also need to add it to your installed apps in the settings.py file.
INSTALLED_APPS = (
    [...]
    'jobeet',
)
And now you’re good to go.
Hello World! : The triad of URL, Controller, View
You have now reached the essential point of MVC web frameworks. Understanding how to tie the functionality from accessing a URL to computing and visualising the desired information is the crucial part of web development. Doing so will enable you to study related topics, such as external libraries and custom handling of requests, with much ease. Web frameworks work by mapping URL paths to Controller actions, which can be functions or classes and are written by the programmer to contain the logical handling of data. Most of the time, the Controller will also return a view, which is the user-friendly display of the computed data (as HTML and CSS). Sounds simple? Well, the most confusing part of this aspect is that different frameworks tend to name these concepts differently. For example, Symfony calls them Route-Controller-View, while Django calls them URL-View-Template. Whatever they are named, these three, combined with the Model part, represent the fundamentals of web framework development.
What I want to do using both frameworks is to make the root URL / display a simple page with a custom signature. Since projects may contain several sub-projects (bundles, apps), we delegate from the main URL configuration file to specialised ones, located in their corresponding sub-projects. First, let’s see how this looks in Symfony. Consider the main routing file app/config/routing.yml :
ens_jobeet:
    resource: "@EnsJobeetBundle/Resources/config/routing.yml"
    prefix: /
Here we defer routing of our Jobeet routes to the Jobeet Bundle. So next we create this new and specialised routing file in src/Ens/JobeetBundle/Resources/config/routing.yml :
ens_jobeet_homepage:
    path: /
    defaults: { _controller: EnsJobeetBundle:Default:index }
The components of a route definition are the path (here, root URL), the name of the route (here, ens_jobeet_homepage ) and the mapping to a controller action. Next, we write that controller action in src/Ens/JobeetBundle/DefaultController :
<?php

namespace Ens\JobeetBundle\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\Controller;

class DefaultController extends Controller
{
    public function indexAction()
    {
        return $this->render('EnsJobeetBundle:Default:index.html.twig', array(
            'signature' => 'C3-PO, Human-cyborg relations.'
        ));
    }
}
As you can see, our index action is pretty simple. For our “Hello World!” example, we don’t need to connect to the database, change any available data or generally compute much. We simply render the index view and pass along the signature variable. The purpose is simply to demonstrate how variable transmission affects the view. So in our view, located at src/Ens/JobeetBundle/Resources/views/Default/index.html.twig we simply write:
Hello World! I am {{ signature }}.
Therefore in our template we can use the Twig templating engine’s syntax for printing a variable. The double curly braces output the signature variable we previously defined in the controller.
The result? Upon pointing the browser at the local server address, we will see a page that prints:
Hello World! I am C3-PO, Human-cyborg relations.
In Django, the process is almost identical. As we do in Symfony, first we defer app-related URL definition to the app itself. In our main urls.py file, located in the root folder of our Django project, we add this delegation:
from django.conf.urls import patterns, include, url
from django.contrib import admin

urlpatterns = patterns('',
    url(r'^', include('jobeet.urls')),
)
This indicates to our main URL definition file that it needs to add the patterns from the jobeet/urls.py file. We edit this file to contain an index route and tie it to an action.
from django.conf.urls import url
from jobeet import views

urlpatterns = [
    url(r'^$', views.index, name='index'),
]
The defined URL pattern points the root URL to a function inside the jobeet/views.py file. Note the structure of imports, which considers the name of the app, then its contained files as importable. In our views.py file we can now write:
from django.shortcuts import render

def index(request):
    return render(request, 'jobeet/index.html', {
        'signature': 'C3-PO, Human-Cyborg relations.'
    })
Note the similarity between the two views, Symfony and Django. If the functionality is identical, the structure of the action will be similar in any chosen modern web framework. Again we use a rendering function to display a template file (which is HTML in nature) and pass it a custom variable. In Symfony, the context that we send to the view is an array used like a dictionary, while in Django we use Python’s dictionary construct in the same fashion.
The big surprise comes now. In jobeet/templates/jobeet/index.html we can input the exact same text as in our Symfony Twig view. This is made possible by the fact that Twig and Django’s templating engines are similar to a great extent and use the same curly braces syntax when outputting the value of a variable. Furthermore, there are a lot of similarities in the way the two templating engines handle iteration, inheritance and so on. The differences are mostly minor inconveniences, such as Twig maintaining that routes should be expressed with an echoing ({{ path('index') }}) syntax, while Django uses a block syntax ({% url 'index' %}). After creating your triad of elements (URL, action function and template), you can check in your browser that the Django app, run using the runserver command, will again print your greeting from C3-PO.
Wrap-up
I hope that this side-by-side comparison of some simple web framework features has helped you gain some insight into just how similar they can be. It is often not what lies ahead, but our fear of the unknown, that makes us reluctant to any technological change. However, in our times, flexibility and adaptability are essential to developers who desire a long and prosperous career. In this article, I have shown you how to install the Symfony and Django frameworks, how to start your project and how to handle basic URL-Controller-View definition. Unfortunately, I have barely touched the tip of the iceberg. Interesting similarities and differences between web frameworks can be seen in model handling, CRUD approaches, third-party providers, open source communities and so on. If you want us to write more articles about today’s web frameworks and what makes them great, make sure you leave us a comment. Not too keen on PHP and Python? If Javascript is your backend flavour, be sure to check out Paul’s articles on NodeJS (building APIs and creating Admin Panels).
C Structs and Callback Macros | Cypress Semiconductor
C Structs and Callback Macros
Summary: 5 Replies, Latest post by Bob Marlowe on 26 Oct 2016 02:37 AM PDT
Verified Answers: 1
Hello,
The code that I have written uses C structs and accesses the struct variables using the arrow operator. When I tried porting this code into Cypress and started debugging, I get stuck in an infinite loop in cm0Start.c . I tried to read up on callback macros but am unsure of how to go about this. Is it possible to use structs without having to go into the callback macro loop or is it better to not use structs? If not, could someone possibly explain the use of callback macros, please?
Thanks in advance.
There is no limitation using structs. The infinite loop might be caused by using invalid pointers or a clobbered stack.
All this has nothing to do with the principle of callback macros.
Can you post your complete project, so that we all can have a look at all of your settings? To do so, use
Creator->File->Create Workspace Bundle (minimal)
and attach the resulting file.
Bob
As soon as I try assigning a value to a variable in a struct, it goes into that loop.
e.g.
typedef struct Structures {
uint8 somevariable;
} Structure;
Structure somestructure;
somestructure = (Structure*)malloc(sizeof(Structure));
somestructure->somevariable = 0;
Then the line above causes the debugger to go to Cm0Start.c:
/*******************************************************************************
* Function Name: IntDefaultHandler
****************************************************************************//**
*
* This function is called for all interrupts, other than a reset that is called
* before the system is setup.
*
*******************************************************************************/
CY_NORETURN
CY_ISR(IntDefaultHandler)
{
/***************************************************************************
* We must not get here. If we do, a serious problem occurs, so go into
* an infinite loop.
***************************************************************************/
#ifdef CY_BOOT_INT_DEFAULT_HANDLER_EXCEPTION_ENTRY_CALLBACK
CyBoot_IntDefaultHandler_Exception_EntryCallback();
#endif /* CY_BOOT_INT_DEFAULT_HANDLER_EXCEPTION_ENTRY_CALLBACK */
while(1)
{
}
}
Would memory be a problem? I am using rather large structs and am programming on a 128KB flash.
Your project does not contain a main.c.
somestructure is a struct, not a pointer to a struct.
You did not check if malloc() returned NULL
You should increase the heap size to have some ram to allocate.
Bob
I apologize for not posting the code.
Sorry, I meant that somestructure is a pointer:
Structure* somestructure;
But I tried changing everything in my code to only structs and not pointers and the problem disappeared. The problem now is that I wish to use pointers because I may use the data other places. On the other hand, I could always just return structs and pass the values around, or even just use variables.
What I don't understand is why the pointers caused a problem? You mentioned a clobbered stack.
I primarily mentioned that you did not check the returned value from malloc() to be valid.
Bob
In this article you can download the very first alpha version of Aubergine, a BDD/DSL framework for .NET, initially based on Machine.Specifications but later on heavily inspired by Cucumber. It is, AFAIK, the very first Cucumber-like environment available in .NET, and you will see that it is very easy to use. Due to its inspirator (Cucumber), I have decided to use the name Aubergine - I have no idea if the two are actually related or not.
Please do note that it is an alpha version, so right now we only have a single test runner that outputs to the console (i.e. no unit test integration yet). In the article I do include a post-build step, which automatically makes my BDD tests run after each build and displays the output in Notepad, which is fine for me atm.
Anyway, enough with the talkin, Let's Get Busy !!!
First you need to download the example project; it includes all the binaries needed to do your BDD development. You can find it here :
Be.Corebvba.Aubergine.Examples.zip (16,43 kb)
Once you have the zip file, you can either explore the project, or walk through the following scenario to create your own test.
Add a class named "BrowserContext" to your project
Add a class named "Make_sure_my_website_gets_enough_visibility", import the Be.Corebvba.Aubergine namespace, and derive your class from Story
Then you can start typing your story; the final file should look like this:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Be.Corebvba.Aubergine;
namespace Example
{
class Make_sure_my_website_gets_enough_visibility: Story<BrowserContext>
{
As_a website_owner;
I_want to_make_sure_that_I_get_enough_visibility;
So_that I_can_get_enough_traffic;
[Cols("searchengine","search_url","keywords","my_url")]
[Data("google","","core bvba tom janssens","")]
[Data("google","","BDD .Net","")]
[Data("bing","","core bvba tom janssens","")]
class Search_results_for_keywords_on_searchengine_should_contain_my_url : Scenario
{
Given current_url_is_search_url;
When searching_for_keywords;
Then result_should_contain_my_url;
}
}
}
After having created a story with scenarios we need to define the context for these scenarios.
You add all your domain objects that need to be tested to the "BrowserContext" class
In this case it is quite simple :
public class BrowserContext
{
public string Url { get; set; }
public string Result { get; set; }
private WebClient wc = new WebClient();
}
This should be enough to test, but wait, isn't there something missing?
Next up we need to define how to interpret the story. As I already mentioned, this is heavily inspired by Ruby/Cucumber: regular expressions are used to find out how all scenario steps should be matched to a real function. While this may sound complicated, it really isn't; this is the code:
public class BrowserContext
{
public string Url { get; set; }
public string Result { get; set; }
private WebClient wc = new WebClient();
[DSL("current_url_is_(.*)")]
void SetUrl(string url)
{
Url = url;
}
[DSL("searching_for_(.*)")]
void SearchForKeyWords(string keywords)
{
Result = wc.DownloadString(Url + HttpUtility.UrlEncode(keywords));
}
[DSL("result_should_contain_(.*)")]
void ResultShouldContain(string myurl)
{
Result.Contains(myurl).ShouldEqual(true);
}
}
That's all there is to it; you are ready to run your tests now !!!
Ok, since we do not want to run these tests manually each time, but at every build, we need to add a post-build step.
In visual studio you can do it like this :
"$(ProjectDir)\lib\Be.Corebvba.Aubergine.ConsoleRunner.exe" "$(TargetPath)" > "$(TargetDir)output.txt"
"$(TargetDir)output.txt"
exit 0
Now build your project, and the tests will be run!! Your default text editor will start, and it should contain this text:
==STORY================================================================
Make_sure_my_website_gets_enough_visibility => OK
========================================================================
Search_results_for_core bvba tom janssens_on_google_should_contain_ => OK
Given current_url_is_ => OK
When searching_for_core bvba tom janssens => OK
Then result_should_contain_ => OK
Search_results_for_core bvba tom janssens_on_bing_should_contain_ => OK
Given current_url_is_ => OK
When searching_for_core bvba tom janssens => OK
Then result_should_contain_ => OK
Search_results_for_BDD .Net_on_google_should_contain_ => OK
Given current_url_is_ => OK
When searching_for_BDD .Net => OK
Then result_should_contain_ => OK
It's actually quite easy once you get the hang of it; this is pseudocode:
foreach (var story in AllClassesDerivedFromTheAubergineStoryClass)
foreach(var scenario in AllPossibleScenariosFor(story))
create a new context object
foreach (var possiblestep in AllSteps(given, when, then)
in story and scenario)
    find a regex DSL match for possiblestep.name in the context,
    extract all the regex groups and add them as string parameters,
    then call the corresponding step function
If one of these steps fails, then the test fails and the reason is mentioned in the report.
If you expect a step to fail, then you should add a member variable named "<steptype>Exception"; if a step of that type fails, the step is marked as successful, and the Exception variable will contain the exception thrown. You can see an example of this in the example zip file.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Hello... this is only my second time posting on these boards and yet I am asking for help again. :( I have an assignment I am working on that is supposed to be getting us used to writing functions. I am in my first C++ class so everything to this point is really basic. I am not asking for someone to write this program for me, but only point out the errors I am making with writing my functions and calling them. I keep getting errors about changing "double" type and so forth.
My assignment: in short, I am supposed to use already-given data to determine the total cost to produce several open-top cylindrical containers. I have to ask the user to input the dimensions and the cost per square inch of the material being used, and in return output the surface area of the container and the cost of materials for the container in dollars, all formatted to 2 decimal places. I am also required to ask if they want to input another container and, at the very end, display the total cost for all the cylinders entered. I am not sure how to put that into my program... not sure if I need to make some kind of loop with a counter and keep track of the sum or what. So any help there would be appreciated as well! :) My biggest problem is getting the function that does all the computations to work. Sorry to make such a long post but I am going to add in my entire program... its kinda sloppy right now... still just trying to get it to work... but I would really appreciate any help I could get with this! Thank you in advance for your time! =)
Code:
#include <iostream.h>
#include <math.h>
#include <iomanip.h>
//Function Prototype
void program ();
double surface(double, double);
//Main function
int main()
{
//Variable Declarations
int cyl_count;
double radius;
double height;
double volume;
double cost;
double cyl_cost;
char another = 'Y';
//Obtain data from the user
while (another == 'Y')
{
void program ();
cout<<"Enter the radius of the base in inches: "<<endl;
cin>>radius;
cout<<"Enter the height of the container in inches: "<<endl;
cin>>height;
cout<<"Enter the cost per square inch of the material being used in dollars: "<<endl;
cin>>cost;
cout<<"\nTotal surface area for this container is: "<<setprecision(2)
<<setiosflags(ios::fixed | ios:: showpoint)<<surface<<endl;
cyl_cost = cost * surface;
cout<<"\nThe cost of the materials for this container is: "<<setprecision(2)
<<setiosflags(ios::fixed | ios:: showpoint)<<"$"<<cyl_cost<<endl;
cout<<"\nAre there any additional containters to process? Enter Y for yes. "<<endl;
cin>>another;
cyl_count =
}
return 0;
}
//Output program description
void program()
{
cout<<"This program will determine the cost to produce open-top cylindrical containers."<<endl;
return;
}
//Computations
double surface(double radius, double height)
{
double base_area;
double surface_area;
double circumference;
double total_surface_area;
const double PI = 3.14159265;
base_area = PI * pow(radius, 2);
circumference = 2 * PI * radius;
surface_area = circumference * height;
total_surface_area = base_area + surface_area;
return surface;
}
GrlNewB :cool:
Given a string which represents a number, we need to write a function to find the number of substrings of the given string that recursively add up to 9.
Examples
a) Input string : 7299
Output : 6
Here, there are 6 substrings which recursively add up to 9
Sub-strings are: 72, 729, 7299, 9, 99, 9
b) Input string : 999
Output : 6
Here, there are 6 substrings which recursively add up to 9
Sub-strings are: 9, 99, 999, 9, 99, 9
Time complexity: O(n^2)
Algorithm
1. Store the count of the number of substrings in count.
2. Initialize count with 0.
3. Consider every character as the start of a substring; store the sum of digits of the substring in sum.
4. If the current character is 9, increment count.
5. One by one, choose each character after the current character as the end character.
6. Add the end character's digit to sum, keeping sum modulo 9; if the remainder becomes 0 (the digit sum is a multiple of 9), increment count.
7. Return the final count.
C++ Program
#include <bits/stdc++.h>
using namespace std;

int main()
{
    char str[] = "7299";
    // Initialize count = 0
    int count = 0;
    int length = strlen(str);
    for (int i = 0; i < length; i++)
    {
        int sum = str[i] - '0';
        // If the current char is 9, increment count
        if (str[i] == '9')
        {
            count++;
        }
        // Extend the substring one end char at a time; whenever the digit
        // sum becomes a multiple of 9, increment count
        for (int j = i + 1; j < length; j++)
        {
            sum = (sum + str[j] - '0') % 9;
            if (sum == 0)
            {
                count++;
            }
        }
    }
    // Print the final count
    cout << "count of sub-strings for 7299 is: " << count << endl;
    return 0;
}