The Best Java Tools You Never Knew Existed
I was at an awesome presentation.
- ApacheDS - Java LDAP and Kerberos server, very easy to embed and great for testing your directory code. Now under the Apache Directory project which has heaps of other good stuff.
- ASM - small, efficient bytecode manipulation
- CGLIB - built on ASM, it works at a higher level and makes the former look like Assembler by comparison. Only downside is that it's not well documented
- JEXL - easily embeddable expression language, compares with OGNL
- DisplayTag - JSP taglib for table formatting and exporting
- EHCache - easy in-memory caching for everyone. Used this, it's awesome.
- Janino - someone took the time to write an embeddable Java compiler! It lets you use Java like a scripting language, e.g. allowing your users to type Java expressions directly into your GUIs.
- Jar Jar Links - allows you to overcome namespace/package clashes between different versions of libraries used by your product by repackaging them. Similar to the Minijar Maven plugin that approaches the problem slightly differently by removing unused classes. Can be used to resolve dependency issues (classpath hell).
- jDBI - substitute for JDBC that doesn't suck when used directly
- Jetty - web server/servlet container. Lightweight, yet lightning fast. Great for embedding, drives Grails.
- Joda Time - date time library that kicks butt over java.util.Calendar. Intuitive (January is month 1!), easy to use and complete - everything that the java.util classes aren't. Can be used alongside Date and Calendar. Chances are that the JSR based on this will become a part of Java 7, but why wait when you can use it now!
- JSON-lib - best Java JSON library out there. Has good documentation. See also Jaxen.
- Commons Math - everything from linear algebra to statistics
- Mock Javamail - mock test infrastructure for Javamail code. See also Dumpster, Smartster.
- Not-Yet-Commons-SSL - Nice, easy to use Java wrapper over OpenSSL. Encryption for the masses!
- Selenium - test your web app interaction! HTML/JS, browser-based test environment - JUnit starts playing with windows :).
- Selenium IDE - the "IDE" part may be overstating things, but this Firefox plugin generates the basis of your test code for you. See also Canoo Webtest.
- Selenium Maven plugin - self explanatory
- Sitemesh - like Tiles, but non-intrusive and heaps better. It hasn't been updated for ages because it Just Works.
- Smack - Java Jabber client
- XStream - objects -> XML -> objects translation. I can't recommend this one enough, awesome one to have in the toolbox.
- StringTemplate - like Velocity, but better.
- Ivy - this Ant plugin means that dependency management is not just for Maven any more. I worked on a big project that used this and it worked a treat.
- Subetha - a mailing list manager and embeddable mail server. See also James.
- Scriptella - extract, transform and load (ETL) tool for Java.
Feel free to add any gems that you have come across in the comments!
Marcos Silva Pereira replied on Mon, 2008/05/12 - 10:53am
Nice list. But...
Antlr, ASM, CGLIB, EhCache, Jetty, Joda-Time, Selenium*, Sitemesh, XStream and Ivy are well-known tools, aren't they?
Kind Regards
Vadim Pesochinskiy replied on Mon, 2008/05/12 - 2:24pm
Good list. Add this template parser:
FreeMarker - like Velocity, but much better.
Macromedia Flash MX Release Notes
This document addresses issues that are not discussed in the Macromedia Flash MX documentation. This document may be updated as more information becomes available.
Issues for both Windows and Macintosh
Backward compatibility of Flash MX - You cannot open a Macromedia Flash MX source file (.FLA) in Macromedia Flash 5 due to the additional features in Macromedia Flash MX. However, you can now use the Save as Flash 5 feature described in the Using Flash manual to save the document in the Flash 5 .FLA format. When doing this, you will lose any new Flash MX features you may have added to the file.
In addition, you may export your Macromedia Flash MX source file as Flash Player version 2, 3, 4, or 5 format (.SWF) for deployment purposes.
The best way to copy items from files created in Flash 5 to files created in Flash MX is to open the files using Flash MX and then copy and paste within the Flash MX application. If you paste items into Flash MX that were copied using the Flash 5 application, certain Flash MX features are not supported, including:
Library Conflict Resolution
Library Folders
Flash MX Component Data
Flash MX Export Linkage Types
Symbol Source File information
Backward compatibility of Flash Player 6 - see the Flash Player 6 release notes.
Extension Manager - The Macromedia Extension Manager version 1.4 is included on the Flash MX CD-ROM in the Goodies\Macromedia\Extension Manager folder. You may also download the Extension Manager from the Macromedia Exchange for Flash.
Trial Downloads - Trial downloads of Macromedia products are available on the Macromedia Downloads page.
Video and Sound
Video Codecs - Although Flash MX fully supports QuickTime and AVI, you may experience issues with particular codecs. This is a list of known problematic video codecs for QuickTime movie files which can crash the Flash MX application:
In addition, Macromedia Flash MX cannot import MPEG video streams through QuickTime.
We recommend the following video codecs when using QuickTime:
Sound Codecs - In addition, you may experience problems with some sound codecs. In most cases, the video import panel will display a warning about audio tracks using unsupported codecs. The following sound codecs are not supported:
We recommend the following sound codecs when using QuickTime:
Memory - In general, video uses a lot of memory. You might run out of memory when importing long video files. We recommend turning off audio import to save memory, since imported audio will be kept uncompressed in memory.
When importing video on Macintosh OS 9.x, you may get a warning message about lack of memory. We recommend using Macintosh OS X or Windows to import these files.
Like most multimedia development applications, Flash MX will be able to handle large video or other media files better if more memory is allocated to the program. Application performance on the Macintosh with large media files will be greatly improved on machines with 256 MB of RAM or higher.
Video drivers - You may experience some problems with Flash MX, particularly with some of the drawing tools, if you do not have the latest video drivers installed for your video card. Visit your hardware vendor on the Web to obtain the latest drivers for your machine.
Network License Detection and Firewalls - Users who have firewall software installed may get a firewall alert upon launching a Macromedia application.
Unloading Movies - When a movie clip that is to be removed, or one of its components, has an "unload" movie handler(s), the movie can persist for at most one frame longer in order to allow the handler(s) to execute properly. As a result, when a movie is removed (unloaded) and a movie with the same name is subsequently created, the new movie may not play correctly. Therefore, be sure to always rename your movie before unloading it.
Flash MX Keyboard Shortcuts - Flash MX keyboard shortcuts have been re-assigned to be more compatible with other Macromedia products. You may download the Flash MX Keyboard Shortcuts Quick Reference Card from the Macromedia Flash Support Center.
Note: Due to the high volume of e-mail we receive, we are unable to respond to every request. Thank you for using Macromedia Flash MX, and for taking the time to send us your feedback!
Windows-only issues
Installation
You must be logged in as an administrator to install Flash MX on Windows 2000, Windows XP, or Windows NT.
On Windows 2000, some users who are not logged in as administrator may receive the error "The InstallShield Engine (iKernel.exe) could not be launched. Class not registered" when attempting to install Flash MX. Log in as administrator to proceed with the installation.
Searching the Help - On Windows XP, you will need to install Java in order to use the Search function in the Help.
Browser support - Netscape Navigator/Communicator 4.79, Netscape 6.2, and Microsoft Internet Explorer 5:
On Windows 2000 and Windows XP, it will be in C:\Documents and Settings\<username>\Application Data\Macromedia\Flash MX\Configuration\
Note: This folder is a hidden folder by default. To view hidden folders, do the following in the Windows Explorer:
- Select the folder of interest
- Select Tools > Folder Options...
- Select the View tab
- Under the Hidden Files and Folders section, select the Show Hidden Files and Folders radio button.
On Windows NT, it will be in C:\WINNT\Profiles\<username>\Application Data\Macromedia\Flash MX\Configuration\
On Windows 98 SE and Windows Me, it will be in C:\Windows\Application Data\Macromedia\Flash MX\Configuration\
Create a shortcut in this folder to the browser you want to use as the default browser for viewing the product help system and Publish Preview.
Macintosh-only issues
Installation
Apple CarbonLib 1.5 or later is required for installation of Flash MX. If you do not have CarbonLib 1.5, it will be installed for you; however, you will need to restart your machine after the installation has completed.
To install Flash MX on OS X, you must have administrative privileges.
On Macintosh, when installing Flash Player 6 via the Flash MX installer, the installer will launch a browser window. This window may obscure the installer dialog. You must switch back to the installer application in order to complete the installation.
Memory Allocation on Mac OS 9.X - The Flash MX system requirements recommend users run Flash MX on machines with 128MB of free system RAM. Users with less than this, such as users with machines with a total of 128MB of RAM, may experience difficulty with basic operations in Flash MX. Symptoms of this may include crashing, insufficient memory errors, or redraw problems that look similar to poor television reception. Users experiencing these issues can modify their environments to allow Flash MX to run properly by doing one of the following:
OSA Menu Extension - Mac OS 9 users should update their OSA Menu Extension to version 1.22. This is a free update that is compatible with Carbonized Mac applications. Not updating may result in performance problems.
UFS Support - Flash MX will not support Mac users who have formatted their hard drives using UFS. Carbon and Classic have a number of issues with this that are documented by Apple. Unless you are using native apps (AKA Cocoa), UFS is not recommended or supported by Apple.
Launching Flash MX - On Macintosh OS 9.x, if you have Flash 5 installed, double-clicking a .FLA file will launch Flash 5 instead of Flash MX. You can correct this by rebuilding your Macintosh desktop. To rebuild your desktop, restart your computer while holding down the Command and Option keys.
Font Issues - If you have a corrupt font on your system, or a typeface for which the printer font is missing or damaged, you may experience a crash. See TechNote #15830 for more information and troubleshooting suggestions.
MP3 Export - Exporting movies on Mac OS X that contain a sound asset with certain combinations of properties may crash Flash MX when publishing. Selecting a different bit rate should alleviate the crash. There are several ways to correct this:
Use a utility such as Drop-Info or ResEdit to edit the file's Type and Creator.
Browser support - Netscape Navigator/Communicator 4.79, Netscape 6.2, and Microsoft Internet Explorer:
Create a folder called "Browser" in the following location:
On OS X: HardDrive:users:<username>:Library:Application Support:Macromedia:Flash MX:Configuration:
On OS 9.X: HardDrive:System Folder:Application Support:Macromedia:Flash MX:Configuration:
Drag into this folder an alias of the browser you want to use as the default browser for viewing the product help system and Publish Preview.
NOTE: Mac OS X users - To view the help system, and to preview Flash movies in a browser with the Publish Preview feature, please use Internet Explorer. Netscape 6 is not supported for these specific tasks.
Welcome to Just Answer, my name is ***** ***** I will do my best to help you with your issue. First I'll need some more info so I have a more complete picture of your situation.
Where do you live (zip code)? Are you saying the roof only has 1x boards for the roof that's 12' wide? No rafters?
I have been called away. I'll respond when I return (probably less than 1 hour). If you aren't sure about the answers to my questions then picture(s) of the roof will help. You can post pics here by clicking on ADD FILES next to the SEND button.
Ok, thanks for trying Just Answer
struct module in Python and its functions
In this tutorial, we are going to discuss the struct module in Python and its functions. This module is useful for conversion between C structs and Python values. We use a format string to specify the order and size of the values that are to be packed in our struct. Read until the end of the tutorial to understand the concept clearly.
struct module in Python
There are many functions defined in the struct module of Python. A few of them are discussed here.
struct.pack() and struct.unpack()
These two functions are respectively used to pack and unpack values into and out of a Python struct object. The syntax for struct.pack() is as follows:
struct.pack(format_string, value1, value2, ...)
This function takes parameters format_string and values to be packed in the struct. format_string specifies the format of python values. For example, ‘hhl’ for (short, short, long). Here h stands for short and l stands for long.
Other possible formats are:
‘iii’ = int, int, int
‘?qf’ = _Bool, long long, float
‘hiB’ = short, int, unsigned char
We can use i, h, l, q, ?, etc. in any order to format our string.
The return value of struct.pack() is a bytes object containing the packed values provided to the function as parameters.
The syntax for struct.unpack() is as follows:
struct.unpack(format_string, struct_string)
This function returns a tuple of the values that are packed in struct_string. See the code for a better understanding.
import struct

struct_string = struct.pack('hhl', 1, 3, 5)
print(struct_string)

values = struct.unpack('hhl', struct_string)
print(values)
Output:
b'\x01\x00\x03\x00\x05\x00\x00\x00'
(1, 3, 5)
Note that the b prefix on our struct_string marks it as a bytes object (binary data).
struct.calcsize()
We use this struct function to find the size, in bytes, corresponding to a format string. This can be useful when calling the struct.pack_into() and struct.unpack_from() functions, as they require a buffer at least that large.
The syntax for the above function is given here:
struct.calcsize(format_string)
The function returns the size required by the format_string.
See the below code.
import struct

print(struct.calcsize('qf?'))
print(struct.calcsize('ll'))
print(struct.calcsize('?qf'))
Output:
13
8
20
As you can notice, changing the order in the format string affects the size of the struct, because padding bytes are inserted to satisfy each type's alignment requirements.
struct.pack_into() and struct.unpack_from()
The syntax for struct.pack_into is as follows:
struct.pack_into(format_string, buf, offset, v1, v2, ...)
In the above syntax, format_string specifies the data types and order of the values to be inserted into the struct, buf is a writable buffer, and packing starts at the given offset into that buffer. After that, we pass the values to be packed.
The syntax for struct.unpack_from() is as follows:
struct.unpack_from(format_string, buf, offset=0)
This function returns a tuple of the unpacked values.
See the below code to understand the working of these functions.
import struct
import ctypes

size = struct.calcsize('iii')
buf = ctypes.create_string_buffer(size)
struct.pack_into('iii', buf, 0, 5, 5, 5)
print(struct.unpack_from('iii', buf, 0))
Output:
(5, 5, 5)
NOTE: This module also defines an exception struct.error. This exception is raised when we pass a wrong argument in the above functions.
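For example, passing fewer values than the format string expects raises it:

import struct

try:
    struct.pack('hh', 1)  # the format expects two values, but only one is given
except struct.error as err:
    print(err)  # prints an explanation of the mismatch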
Thank you.
How to: Use a Thread Pool (C# and Visual Basic)
Thread pooling is a form of multithreading in which tasks are added to a queue and automatically started when threads are created. For more information, see Thread Pooling (C# and Visual Basic).
using System;
using System.Threading;

public class Fibonacci
{
    private int _n;
    private int _fibOfN;
    private ManualResetEvent _doneEvent;

    public int N { get { return _n; } }
    public int FibOfN { get { return _fibOfN; } }

    // Constructor.
    public Fibonacci(int n, ManualResetEvent doneEvent)
    {
        _n = n;
        _doneEvent = doneEvent;
    }

    // Wrapper method for use with the thread pool.
    public void ThreadPoolCallback(Object threadContext)
    {
        int threadIndex = (int)threadContext;
        Console.WriteLine("thread {0} started...", threadIndex);
        _fibOfN = Calculate(_n);
        Console.WriteLine("thread {0} result calculated...", threadIndex);
        _doneEvent.Set();
    }

    // Recursively calculate the nth Fibonacci number.
    public int Calculate(int n)
    {
        if (n <= 1)
        {
            return n;
        }
        return Calculate(n - 1) + Calculate(n - 2);
    }
}

public class ThreadPoolExample
{
    static void Main()
    {
        const int FibonacciCalculations = 10;

        // One event is used for each Fibonacci object.
        ManualResetEvent[] doneEvents = new ManualResetEvent[FibonacciCalculations];
        Fibonacci[] fibArray = new Fibonacci[FibonacciCalculations];
        Random r = new Random();

        // Configure and start threads using the thread pool.
        Console.WriteLine("launching {0} tasks...", FibonacciCalculations);
        for (int i = 0; i < FibonacciCalculations; i++)
        {
            doneEvents[i] = new ManualResetEvent(false);
            Fibonacci f = new Fibonacci(r.Next(20, 40), doneEvents[i]);
            fibArray[i] = f;
            ThreadPool.QueueUserWorkItem(f.ThreadPoolCallback, i);
        }

        // Wait for all threads in the pool to finish calculating.
        WaitHandle.WaitAll(doneEvents);
        Console.WriteLine("All calculations are complete.");

        // Display the results.
        for (int i = 0; i < FibonacciCalculations; i++)
        {
            Fibonacci f = fibArray[i];
            Console.WriteLine("Fibonacci({0}) = {1}", f.N, f.FibOfN);
        }
    }
}
Following is an example of the output.
launching 10 tasks...
thread 0 started...
thread 1 started...
thread 1 result calculated...
thread 2 started...
thread 2 result calculated...
thread 3 started...
thread 3 result calculated...
thread 4 started...
thread 0 result calculated...
thread 5 started...
thread 5 result calculated...
thread 6 started...
thread 4 result calculated...
thread 7 started...
thread 6 result calculated...
thread 8 started...
thread 8 result calculated...
thread 9 started...
thread 9 result calculated...
thread 7 result calculated...
All calculations are complete.
Fibonacci(38) = 39088169
Fibonacci(29) = 514229
Fibonacci(25) = 75025
Fibonacci(22) = 17711
Fibonacci(38) = 39088169
Fibonacci(29) = 514229
Fibonacci(29) = 514229
Fibonacci(38) = 39088169
Fibonacci(21) = 10946
Fibonacci(27) = 196418
Hierarchy of masks of various shapes.
#include <vcl_iosfwd.h>
#include <vnl/vnl_vector.h>
#include <vil/vil_image_view.h>
#include <rgrl/rgrl_object.h>
#include <rgrl/rgrl_macros.h>
Hierarchy of masks of various shapes.
Disregarding the shape, each mask also provides a bounding box. Denoting the upper left corner as x0, and bottom right as x1 (in 2D case), the bounding box is defined by a tight interval [x0, x1] on all dimensions.
Modifications: Oct. 2006, Gehua Yang (RPI) - moved rgrl_mask_3d_image into a separate file.
Definition in file rgrl_mask.h.
An output operator for displaying a mask_box.
Definition at line 208 of file rgrl_mask.cxx.
An output operator for displaying a mask_box.
Definition at line 216 of file rgrl_mask.cxx.
Intersection Box A with Box B (make it within the range of B).
Definition at line 236 of file rgrl_mask.cxx.
Sergey Beryozkin commented on CXF-6353:
---------------------------------------
CXF HttpConduit/HttpClientPolicy supports a lot of parameters indeed, but I think we should
support at this stage only the very few core parameters (the ones we've just added plus maybe
a few more) - due to the fact they don't offer a portability guarantee even though no CXF
import is required :-) - and also work on the JAX-RS side to ensure most of the well-known
parameters can be represented as standard JAX-RS properties...
Thanks, Sergey
> provide a not typed way to configure cxf jaxrs clients
> ------------------------------------------------------
>
> Key: CXF-6353
> URL:
> Project: CXF
> Issue Type: Improvement
> Reporter: Romain Manni-Bucau
> Assignee: Sergey Beryozkin
> Fix For: 3.1.0, 3.0.5
>
>
> when using jaxrs 2 client api it is quite common to desire to configure timeouts (receive,
connect). To do it today you need to unwrap the Client to get its config then the http conduit.
Would be great to be able to do it - or have a client specific config - through properties
of the client/client builder to try to not force user of the standard API to import CXF.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
The function foldsh in this recipe is a general purpose tool for transforming tree-like recursive data structures while keeping track of shared subtrees.
# By default, a branch is encoded as a list of subtrees; each subtree can be a
# branch or a leaf (=anything non-iterable). Subtrees can be shared:
>>> subtree = [42,44]
>>> tree = [subtree,[subtree]]

# We can apply a function to all leaves:
>>> foldsh(tree, leaf= lambda x: x+1)
[[43, 45], [[43, 45]]]

# Or apply a function to the branches:
>>> foldsh(tree, branch= lambda t,c: list(reversed(c)))
[[[44, 42]], [44, 42]]

# The sharing is preserved:
>>> _[0][0] is _[1]
True

# Summing up the leaves without double counting of shared subtrees:
>>> foldsh(tree, branch= lambda t,c: sum(c), shared= lambda x: 0)
86
In particular, it is useful for transforming YAML documents. An example of this is given below.
Changelog
Revision 2: rewrite using the try...except...else idiom which I didn't know before and makes the logical structure clearer. Functionality is unchanged.
Background
Many transformations on trees follow the same general pattern. In a recursive traversal,
- first, leaves are transformed into results (here, this is the task of the function leaf);
- when all subtrees (children) of a branch have been transformed, the results of these children are combined into a result for the branch (the task of the function branch);
- this way, results are passed up the tree until the result for the whole tree is constructed.
In this recipe, the recursion is performed by foldsh; the transformation functions leaf and branch are supplied by the user. Note that a "result" can be anything, for example
- an integer counting the number of leaves, depth of the tree, maximal breadth of branches;
- a list of all the leaves in the tree;
- another tree.
The general pattern is known as a fold (and in Python as reduce; but both these terms are mostly used for inductive lists) or catamorphism.
In addition to implementing a regular fold, this recipe supports sharing of subtrees. (So the data structure is actually a DAG, but we'll still say tree.) If during the recursive traversal foldsh encounters a subtree that it has seen before, it will not recurse into it, but instead take the previously constructed result, and apply shared to it.
To make foldsh recurse over other tree-like structures, supply it with a different getchildren function. This function will be applied to the tree (and subtrees) to determine whether we have reached a branch or a leaf. If getchildren is applied to a branch, it should return an iterable of subtrees; if applied to a leaf, it should throw an exception.
With no arguments except its first, foldsh(tree) returns a copy of tree where the structure consists of newly constructed lists, and the leaves are shared with tree. In the new structure, the sharing of subtrees is preserved.
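A minimal implementation consistent with the behavior described above (using the try...except...else idiom mentioned in the changelog; details such as the exact defaults may differ from the original code):

def foldsh(tree,
           branch=lambda t, c: c,
           leaf=lambda t: t,
           shared=lambda r: r,
           getchildren=iter):
    results = {}                        # id(branch) -> its result, to detect sharing
    def fold(t):
        if id(t) in results:            # a branch we have already transformed
            return shared(results[id(t)])
        try:
            children = getchildren(t)
        except Exception:               # non-iterable: a leaf
            return leaf(t)
        else:
            result = branch(t, [fold(c) for c in children])
            results[id(t)] = result
            return result
    return fold(tree)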
Examples
import itertools

def countsh(tree):
    '''Count the number of leaf+branch nodes (count shared subtrees once).'''
    def branch(tree, reschildren):
        return 1 + sum(reschildren)
    def leaf(tree):
        return 1
    def shared(tree):
        return 0
    return foldsh(tree, branch, leaf, shared)

def listleaves(tree):
    '''List all leaves (but those from shared subtrees only once).'''
    def leaf(tree):
        return [tree]
    def branch(tree, reschildren):
        result = []
        for rc in reschildren:
            result.extend(rc)
        return result
    def shared(res):
        return []
    return foldsh(tree, branch, leaf, shared)

# Actually, this can also be elegantly solved by returning iterators as
# result. We do not have to make an exception for shared subtrees, because
# these (shared!) iterators will already be exhausted when encountered for
# the second time:
def iterleavesonce(tree):
    '''Iterate all leaves (but those from shared subtrees only once).'''
    def leaf(tree):
        return iter([tree])
    def branch(tree, reschildren):
        return itertools.chain(*reschildren)
    return foldsh(tree, branch, leaf)

# Build a tree out of frozensets instead of lists:
subtree = [42,44]
tree = [subtree,[subtree]]
fstree = foldsh(tree, branch= lambda t,c: frozenset(c))
# fstree is now:
# frozenset([frozenset([42, 44]), frozenset([frozenset([42, 44])])])
# with shared subtrees
Strings
Watch out with strings as leaves. The default getchildren will go into infinite recursion because iter('s') will yield 's' again. To solve this, you can use:
def getchildren(tree):
    assert not isinstance(tree, (str, unicode))
    return iter(tree)
YAML
YAML is a human-readable language for data structure serialization with support for sharing. A YAML sequence corresponds to a Python list and a YAML mapping to a dict. We can use foldsh to recurse into the data structure (for the dicts, we only recurse into the values), and, for example, reverse all the lists:
import yaml  # from the PyYAML package

def reverselists(tree):
    def branch(tree, reschildren):
        if isinstance(tree, dict):
            return dict(zip(tree.iterkeys(), reschildren))
        else:
            return list(reversed(reschildren))
    def getchildren(tree):
        try:
            return tree.itervalues()
        except:
            assert not isinstance(tree, str)
            return iter(tree)
    return foldsh(tree, branch, getchildren=getchildren)

y = yaml.load('''
ingredients:
- &s  # this element is given label s and can be shared later
  - spam
  - SPAM
- eggs
- *s  # like here
- *s  # and here
''')
Sharing is preserved in the resulting YAML:
>>> print(yaml.dump(reverselists(y), default_flow_style=False))
ingredients:
- &id001
  - SPAM
  - spam
- *id001
- eggs
- *id001
Cycles
This recipe does not support cyclic data structures (it will go into infinite recursion). For a way to deal with this, see Recipe 578118.
26 January 2011 08:16 [Source: ICIS news]
SHANGHAI (ICIS)--China’s olefins imports rose year on year in December amid tight domestic supply and the growth is expected to continue in 2011, industry players and analysts said on Wednesday.
“The increase in olefins import in December was closely related to tight domestic supply as two energy giants Sinopec and PetroChina shifted their focus to boost diesel production,” said Yu Chunmei, an analyst from brokerage house Shenyin & Wanguo Securities in Shanghai.
Olefins imports would most likely continue to be strong in 2011 since no big petrochemical complex was scheduled to start up in China this year and domestic consumption was still strong, Yu told ICIS.
Ethylene imports in December jumped 37% year on year to 88,445 tonnes and butadiene imports grew 67% to 38,727 tonnes, according to the data from China Customs.
Naphtha imports soared 22% year on year and 23% month on month to 406,738 tonnes in December, the data showed.
Chinese market players such as Sinopec scrambled for spot naphtha in the last two months of 2010 because of a domestic supply crunch, traders said.
Local refineries ramped up their output of middle distillates at the expense of naphtha and gasoline as frigid temperatures caused a spike in demand for diesel-fired power generators.
“Many polyolefins traders lost huge [amounts of] money in the first half [of 2010] on lower prices, so they imported large volumes of cargoes in the last few months in a bid to make quick money with rising prices,” a trader in Shanghai said in Mandarin.
China imported 248,150 tonnes of linear low density polyethylene (LLDPE) last month, up 16% from a year ago, according to the data.
However, China may see fewer imports in January compared with December as importers were unwilling to build up inventories ahead of the week-long Lunar New Year holiday that starts on 2 February, sources said.
“Naphtha inventories are at a comfortable level [in China] these days. I don’t expect Chinese buyers would repeat what they did for November and December imports,” said a trader from Beijing.
Most Chinese polyolefins stockists and end-users have sufficient inventories because many had refrained from building up stocks during November-December last year due to the economic uncertainty.
The uncertainty was caused by speculations in China late last year about whether the government was going to introduce more credit-tightening policies after it raised interest rates.
Meanwhile, olefins imports in 2010 fell after experiencing a strong 2009 due to the government's fiscal stimulus package at the time.
Ethylene imports fell 16% to 815,405 tonnes in 2010 and propylene imports dipped 2%.
Additional reporting by Felicia Loo and Chow Bee
jon.ebersole | Jul 31, 2008 07:33 PM
Please let me know if I have posted in the wrong area.
I am working with WCF services for the first time. I have a Windows-based project using WPF, my WCF service, and a common set of classes in a class library. I have the common class library as a reference in both my WPF Windows project and in the WCF service running on a website (in IIS 7). I added my WCF service as a service reference to my Windows project so I could use it. My WCF service accepts an object from my shared class library, and then returns an object from my shared class library.
My problem is that when you add the service reference to the Windows project, the service name that you create for that service reference gets lodged in the namespace hierarchy. I am assuming that my WCF service will return objects that are typed by the classes in my shared class library, but they aren't. Here is an example:
I have a function in my WCF service called GetValue. I have a class called Test in a namespace called MyCompany in my shared library, and I make this the parameter to my WCF service and also the type being returned.
The WCF Service would look something like this...
<OperationContract()> _
Function GetValue(ByVal value As MyCompany.Test) As MyCompany.Test
Then, in my WPF project I add that WCF Service to my project and during this step I name it TestService. When I instantiate the service it looks like this...
Dim oService As MyCompany.TestService
Then, I call...
Dim oOldTest As MyCompany.Test
Dim oNewTest As MyCompany.Test
oNewTest = oService.GetValue(oOldTest)
On the last line, it gives an error stating that the value returned from the service is of type MyCompany.TestService.Test and cannot be converted to MyCompany.Test
It injects the service name. I know from previously working with normal web services that I could just go into the Reference.vb file under the web service reference and convert the namespaces in the autogenerated code to what I wanted them to be. Anytime I made changes to the web service, I would update the reference, and then have to manually edit that file again to fix the class typing to make it work. There has to be a better way. I'm finally getting around to finding the *right* way to do this. Can someone point me in the right direction? I've exhausted all research at this point. Any help would be greatly appreciated. Thanks in advance.
Coupling in Java example
Coupling in Java refers to the degree of direct knowledge that one element has of another - in other words, how often changes in class A force related changes in class B.
There are two types of coupling in Java:
- Tight coupling in Java: one class directly creates and uses a concrete instance of another class, so each class knows the implementation details of the other, and a change in one class forces changes in the other.
// Java tight coupling example:
class Animal {
    Body b = new Body();

    public void startRunning() {
        b.running();
    }
}

class Body {
    public void running() {
        System.out.println("Body is running");
    }
}
Explanation: In the above program, the Animal class depends on the Body class; Animal is tightly coupled with Body, which means any change in the Body class requires the Animal class to change. For example, if the Body class's running() method is renamed to going(), then you have to change the startRunning() method to call going() instead of running().
// Java tight coupling concept
class Thatsjavainfo {
    public static void main(String args[]) {
        Thats t = new Thats(5, 5, 5);
        System.out.println(t.T);
    }
}

class Thats {
    public int T;

    Thats(int length, int width, int height) {
        this.T = length * width * height;
    }
}
Explanation: In the above example, there is a strong inter-dependency between both classes. If there is any change in the Thats class, it reflects in the result of the Thatsjavainfo class.
- Loose coupling in Java: classes interact through interfaces (or other abstractions) rather than through concrete implementations, so one class can change without forcing changes in the other.
// Java loose coupling example:
public interface Body {
    void running();
}

class Body1 implements Body {
    public void running() {
        System.out.println("running loose");
    }
}

class Body2 implements Body {
    public void running() {
        System.out.println("running loose more");
    }
}

public class Animal {
    public static void main(String[] args) {
        Body b = new Body1();
        b.running();
    }
}
Explanation: In the above example, the Body1 and Body2 objects are loosely coupled. Body is an interface, so we can inject any of the implementing classes at run time and provide the service to the end user.
// Java loose coupling concept
class Thatsjavainfo {
    public static void main(String args[]) {
        Thats t = new Thats(6, 6, 6);
        System.out.println(t.getT());
    }
}

final class Thats {
    private int T;

    Thats(int length, int width, int height) {
        this.T = length * width * height;
    }

    public int getT() {
        return T;
    }
}
Explanation: In the above program, there is no dependency between the two classes. If we change anything in the Thats class, we don't have to change anything in the Thatsjavainfo class.
Which is better, tight coupling or loose coupling, in Java?
Loose coupling is generally preferred: loosely coupled classes are easier to test, maintain, and extend, while tight coupling makes every change ripple through the dependent classes.
NX: URLy Bird 1.3.1 Connection Factory Design
I am finally in the RMI land! Based on the excellent tutorial that Andrew pointed to:
Fundamentals of RMI Short Course
and the threads on the forum discussing this topic, I came up with the following design. Actually I shouldn't say I did. I adapted Max's DVD design to my needs. After all, he told me to make it Bharat's DVDs!
My GUIController class (as in MVC architecture) in the Presentation Layer calls the RoomConnector (in the Network Layer) class's static methods as follows:
public static DBClient getRemote(String ip) throws RemoteException
and
public static DBClient getLocal() throws IOException
As you can see, both return either remote or local versions of the DataAdapter object as DBClient Interface which is as follows:
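In outline (simplified - just the two business operations; IOException is declared so the remote version can throw RemoteException through it):

public interface DBClient {
    // search for rooms matching the given criteria
    String[][] getRoomsUsingCriteria(String[] criteria) throws IOException;

    // book the room held in the given record number for the given customer
    boolean book(String customerId, int recNo) throws IOException;
}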
Note: The local connection stuff works perfectly. Now I am about to test the RMI networking. Towards that, and the fact that I am using a "ConnectionFactory" pattern which guarantees a unique data object per connection since that is the fundamental premise that my locking strategy is based on, I have the following design:
The getRemote method of the room connector class does the following:
1. It looks up the ConnectionFactory object (read interface) which is shown below:
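Roughly:

public interface ConnectionFactory extends java.rmi.Remote {
    DBClient create() throws java.rmi.RemoteException;
}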
2. It calls the create() method on the connection factory which returns an instance of the DataRemoteImpl class as a DBClient Interface. The connection factory code is shown below:
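Again in outline:

public class ConnectionFactoryImpl extends UnicastRemoteObject implements ConnectionFactory {

    public ConnectionFactoryImpl() throws RemoteException {
    }

    // a brand-new data object per client connection - the premise
    // that my locking strategy relies on
    public DBClient create() throws RemoteException {
        return new DataRemoteImpl();  // each instance wraps its own fresh DataAdapter
    }
}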
The DataRemoteImpl is the remote object which implements the DataRemote interface (extends both Remote and DBClient) and contains a reference to the DataAdapter class. It simply delegates the method calls, e.g., book, getRoomsUsingCriteria etc. to the wrapped instance of the DataAdapter class.
My question is: is this a correct implementation of the Connection Factory Pattern? This seems almost trivial. Am I missing something?
Regards.
Bharat
The only difference between yours and my submission is that
RoomConnector (in the Network Layer) Class's static methods as follows
in my "RoomConnector" I ahd one method name and overloaded it. 1 method for remote one for local. So the client was always calling the same method name, but sent different parameters. For instance, in local mode, no parameter is sent. And in remote mode the server address and IP(if necessary) is sent, and because of the parameters the correct version of the method would be called.
One question though. Is that my "RoomConnector" was instantiated on the clientside not the network layer, so I am a little confused at what you mean here.
Mark
Thank you for going through my design and validating it. You wrote:
One question though: my "RoomConnector" was instantiated on the client side, not the network layer, so I am a little confused about what you mean here.
Sorry, I didn't write it clearly. The RoomConnector indeed is instantiated on the client side in the Presentation Layer in the GUIController class. Is that what you are asking?
Regards.
Bharat
Just one question - you say your GUIController class connects to the connector. Or is it that the controller makes the connection which it passes to the model, and the model does all the work?
Either option sounds counterintuitive to me.
Consider if I wanted to add a set of screens for reporting functionality to your MVC:
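(sketch)

   Booking View ------ Booking Controller ------+
                                                +---> Model ------ Database
   Reporting View ---- Reporting Controller ----+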
As you can see, I only have one model, and it has all the connections to the database. Nothing else has a connection to the database (or is even aware that there is a database present).
If it was the controller calling the methods in your factory, then I would have required two connections to the factory. One for the booking functionality, and one for the reporting functionality.
Or even with the concept of the controller making the connection then passing it to the model - you still have two connections being made and passed to the model. What does the model do with this second connection?
OK - the examples I gave are not very good. In reality you probably want to have a separate instance of the model for reporting and booking functionality. I am sure if I tried I could come up with two different booking views which you could toggle between (something for "normal" users and something for "power" users) in which case they would use the same model.
Regards, Andrew
Very interesting though.
I assume the model is a singleton that contains the various logic for accessing the db to provide booking, reading records etc, that provides a connection to the db server on start up. I can see how it's a very scalable design.
If I ever do this assignment again I will use that pattern.
Tony
You're a bugger, Monkhouse
I assume the model is a singleton that contains the various logic for accessing the db to provide booking, reading records etc, that provides a connection to the db server on start up.
In the example diagram I gave, it would make sense to have a singleton. Likewise in the example I described where the user can toggle between a "normal" and "power-user" screen (in each case you would be displaying the same data) then you would probably look at having a single instance of the model.
A singleton is not always required, although the descriptions of MVC often describe the model as though it were a singleton.
One of the things that an MVC gives you is a clear delineation of responsibilities. The View is only concerned with displaying the data. The Controller is only concerned with translating actions from the view into actions for the model. The model is only concerned with providing access to the data model.
As such, you should be able to swap the View and the Controller if ever you needed a different view. This is where code reuse really comes into play. Say you wanted to have a web version of your product. You would create a servlet as the controller, and a JSP page as the View. Still using exactly the same model. You have only needed to make a very minor change - everything else (the remote access, multi user database) still works. (I did this while studying for SCWCD - I decided to make a web version of FBNS. However after implementing the web version with only two classes, I realised I needed a totally different project :roll: )
Regards, Andrew
I am afraid that I do not understand exactly what you are driving at here. I am a bit familiar with the MVC architecture as implemented in the STRUTS framework. I will try to echo back to you my design (Ok Max's design which I creatively stole and adapted to my requirements) in the following paragraphs and see if I can connect it to what you are saying. That should give you a basis to further clarify what you are getting at.
If you look at the design shown in Max's book on page 209, that is basically what I have. I found that I could reuse it almost one hundred percent! That is why I closed-in (or at least seem to be closing-in until I find otherwise) on a working design so fast. This, along with the tutorial on RMI that you pointed out in another thread, was just what I needed. Actually I didn't even have to finish the RMI tutorial. The second example, Simple Banking System, was enough to give me a basic understanding of the "Factory Pattern" in RMI.
Anyway, here we go:
1. As in Max's example, I have a public class called the "MainWindow". This is the view in my design. The default constructor for this class creates an instance of the GUIController class. The constructor for the GUIController takes a single parameter which is a constant called GUIController.LOCAL_CONNECTION or GUIController.NETWORK_CONNECTION.
2. The GUIController class's matching constructor calls the setConnectionType method in the same class and passes on the parameter received from the MainWindow class. The setConnectionType method calls either the getLocal() or getRemote(String ip) static methods of the RoomConnector class. My RoomConnector class is shown below:
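It boils down to this (trimmed - the registry name is illustrative):

public class RoomConnector {

    public static DBClient getLocal() throws IOException {
        return new DataAdapter();
    }

    public static DBClient getRemote(String ip) throws RemoteException {
        try {
            ConnectionFactory factory = (ConnectionFactory)
                    Naming.lookup("rmi://" + ip + "/RoomConnectionFactory");
            return factory.create();
        } catch (Exception e) {
            e.printStackTrace();
            throw new RemoteException("Cannot connect to " + ip);
        }
    }
}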
Please disregard the e.printStackTrace() calls. I am going to go through the entire application and put in the proper exception handling and logging once I have a working application meeting the requirements. Also, I think it might be a good idea to pass a port number along with the ip address so that the RMI service can be bound to a user-defined port instead of the default 1099 port for the RMI.
As you can see above, the getRemote method of the RoomConnector class returns an instance of DataAdapter class and assigns it to a DBClient client-side variable. Anyway, this return value is what gets stored in an instance variable of the GUIController class called "connection". The type for connection is "DBClient".
3. As I understand this with a bit of the EJB background in J2EE, this is my remote interface for the remote data object. Now I am free to call my "business" methods on it either locally or remotely. These business methods are "book" and "find" since that is the only functionality the client needs to implement.
4. The question is: who calls these business methods, and how - the GUIController class or the RoomTableModel class which extends the AbstractTableModel class? Right? Here we go... The only methods that are overridden in the RoomTableModel class are getColumnCount(), getValueAt(), setValueAt(), getColumnName(), and isCellEditable(). Additionally, there are two methods: 1. addRoomRecord(String [] recordArray), and 2. setHeaderNames(String [] headerArray). The addRoomRecord method is used to add a passed record object to a private ArrayList member called roomRecords. Similarly, the setHeaderNames sets the column header names.
5. Going back to step 1 above, when the MainWindow class instantiates a connection (DBClient) object either locally or remotely, it also creates an instance of the inner class called RoomMainWindow. This does the GUI layout work same as in Max's book. The MainWindow class also instantiates an instance variable called "tableData" which is of type RoomTableModel. This instance variable is the only connection that the MainWindow class has to the Model class. This is what is updated when a business method is called on the GUIController class that returns a new "view" object. Again, as in Max's book, there are two private inner classes called BookRoom and SearchRooms that are tied to the "book" and "search" JButton instances. Within the ActionPerformed methods of these methods, I call a method called setupTable(). This method is defined at the MainWindow class level, therefore, the inner classes have full access to it. This method is as follows:
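Trimmed to the essentials (field names are approximate):

private void setupTable() {
    try {
        // 1. get the new view object from the controller
        tableData = controller.getRoomsUsingCriteria(searchCriteria);
        // 2. and 3. hand it to the JTable
        mainTable.setModel(tableData);
    } catch (Exception e) {
        e.printStackTrace();
    }
}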
As you can see above, the method basically 1. Gets the new view object using the Controller object, 2. Assigns it to the tableData (type is RoomTableModel) instance variable and 3. assigns this tableData object to the mainTable instance variable, where mainTable is an instance of JTable.
With this as background, let me see if I understand what you are driving at:
Just one question - you say your GUIController class connects to the connector. Or is it that the controller makes the connection which it passes to the model, and the model does all the work?
As I see it, the GUIController does make the connection. However, the RoomTableModel class does very little work. There are only two methods that I have added: 1. addRoomRecord(String[] recordArray) and 2. setHeaderNames(String[] headerArray). The addRoomRecord method is called by the getRoomsUsingCriteria(criteria) method of the GUIController class as follows:
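Trimmed:

public RoomTableModel getRoomsUsingCriteria(String[] criteria) throws Exception {
    RoomTableModel model = new RoomTableModel();
    model.setHeaderNames(headerNames);
    String[][] records = connection.getRoomsUsingCriteria(criteria);
    for (int i = 0; i < records.length; i++) {
        model.addRoomRecord(records[i]);
    }
    return model;
}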
I am not sure if I understand your response to Tony's post above. I think that I have given you enough "hooks" into my design. If you repeat what you are saying above with concrete references to my design, I will begin (hopefully) to see the light?
Regards.
Bharat
I think I may try and get Max involved in this discussion, because what he is describing in his book is not MVC as I know it. And he may disagree with everything I say below
My comments have been based on whether this is MVC pattern or not. And I believe that this is not MVC.
What you are doing is using the "View-Helper" design pattern. This is another good pattern, and quite applicable to the assignment.
This post is just going to be discussing the differences between MVC and View-Helper. And why I believe that you are using the latter.
Normally a class diagram for MVC would look like:
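(sketch - arrows show the direction of calls and data)

             actions                 calls
   View ----------------> Controller ---------> Model ------- Database
     ^                                            |
     |                   data                     |
     +--------------------------------------------+
  (TableModel: lives in the View layer, describing the table)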
Note: although the database connection and the TableModel are not normally shown in an MVC, I have put them in there to help show how everything interconnects.
The Model encapsulates the data and the rules used to access the data. It hides all the complexities of getting the data and how the data is stored. So the View does not need to know if the data is comming from one table in a database, 2 tables in a database or a flat file. The View does not need to know whether the data is local or on a remote machine. And so on.
(The TableModel is not an MVC model. It does not abstract the access to the database in any way. It is just a way of describing the table.)
The View displays the data provided by the Model. It sends actions to the Controller, and receives data from the Model. It may receive the data through the push or pull model (where the Model may push the data to all its views, or the Views may have to pull the data from the Model). The View may do some (small) validation of the data, however its focus is on displaying data to the client and allowing the client to enter data. Its focus is not on validation or checking access rights.
The Controller translates actions from the View into calls to the correct methods in the Model. It can also do things like verifying access rights. If you are using a Hierarchical-Model-View-Controller the Controller would also be responsible for linking the Controllers together and for sending the action to the next link in the chain.
Sometimes people decide that the link between the View and the model is undesirable, and they reduce the diagram to:
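(sketch)

   View <--------> Controller <--------> Model ------- Database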
However what you (and Max's book) are describing is quite different:
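(sketch)

   MainWindow (View) <------> GUIController <------> Database
                 (TableModel handed back to the View)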
In terms of patterns, the TableModel is irrelevant in that diagram - it is not required in order to meet a pattern (it is not the model in an MVC). If we take out the Database (I mentioned earlier, this would not normally be shown) then what we are left with is:
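(sketch)

   View <--------> Helper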
Then we see that this is a different pattern altogether - the "View-Helper" design pattern. This is quite a common modification to the MVC design, usually implemented when the controller has little or no functionality.
One of the advantages the MVC gives you is the concept that you can abstract things like access rights into the controller. If you think about it in terms of areas of responsibility, the View is only responsible for displaying the data, the Model is only reponsible for abstracting the data. The Controller is responsible for control - it can verify whether a person only has rights to view available records, or whether they have rights to make bookings, or whether they have rights to add and delete records. (To give one example).
The MVC is often used in web applications where the Model is often a bean, the View is often a JSP, and the controller is often a Servlet.
Regards, Andrew
Thanks for the response. My sole experience with MVC is in the Web world where STRUTS is the dominating standard. You wrote:
This is quite a common modification to the MVC design, usually implemented when the controller has little or no functionality.
I have to agree with you here. The controller servlet that we have for an industrial-strength application that uses Struts 1.0 is a beast! It does all of what you mention:
The Controller is responsible for control - it can verify whether a person only has rights to view available records, or whether they have rights to make bookings, or whether they have rights to add and delete records. (To give one example).
Going through Max's book, I could see these departures from the Struts-style MVC architecture were there, but all along, Max's book maintains a conceptual elegance and uniformity. Therefore, I paid little or no attention to the naming: what he chose to call it "works", and is quite similar to the MVC "mini" architecture.
Like you, I am a fan of design patterns. I have the GoF patterns book, which I found almost incomprehensible until I bought John Metsker's excellent book explaining those patterns.
Another book that I have used extensively since its very first edition is Core J2EE Design Patterns
While I found all the patterns discussed generally quite useful, in my humble opinion the differences between the patterns sometimes tend to be rather trivial. An example is that of Adapter vs. Mediator. To me, they both could have been one and the same. Mind you, they both are, again, quite useful.
I am not a patterns expert by a long shot. I do remain a patterns enthusiast, but I have quit worrying about which specific pattern I am using. Getting a clearer picture will be quite useful though when I start preparing for the SCEA exam. In your experience, how useful do you think is the ability to correctly classify a specific design pattern being used?
In closing, I did go through your explanation carefully, and what you explain makes sense to me. Thanks for providing such an insightful discussion on this topic though.
Regards.
Bharat
I also have the GOF book - I was lucky that I have used most of the patterns before picking up the GOF book, so 90% of it made sense to me. Another good book is Thinking in Patterns by Bruce Eckel (still a "work in progress").
In my humble opinion, the differences between the patterns sometimes tend to be rather trivial.
Agreed. Likewise with the reasoning for using one over another.
I have quit worrying about which specific pattern I am using
In your experience, how useful do you think is the ability to correctly classify a specific design pattern being used?
Hmmmm, there are two separate issues here. One is whether you can spot a pattern in the wild ("oooh, we should use a factory here") and the other is whether we should worry about misnaming a pattern.
The first one is nice, but by no means essential. And trying to identify every pattern that could possibly get used can get in the way of actually getting work done.
The second one though can cause problems. If, for example, you said in your exam that you are using MVC, and the examiner decides that your assignment does not have an MVC, they may decide that the person who did the exam is different from the person who wrote the programs. Instant failure
Likewise if you are in a design review at work, and you start by saying that you use pattern 'x' when you are not really using it, then it is worse than not talking about patterns at all: the audience will be thinking of your code in terms of the wrong pattern, and when they discover that it is not, they will have to go back over stuff that has already been covered in order to think about it in the right way.
Regards, Andrew
Unsurprisingly, as much as I love Andrew, I disagree with his interpretation of MVC. The classic example I use when I teach this stuff is a customer in a restaurant. If you consider the items on the menu to be the Model, the paper, print, and font of the menu to be the View, and the waiter ("no sir, you should really have the red wine with your steak") to be the Controller, you'll get a sense of where things fall into place in MVC.
Correspondingly, the order the waiter delivers to the kitchen is the 'Model' as far as the kitchen is concerned. The Cook is the kitchen's Controller, and so on. Thus, you can have several MVCs sitting next to each other, and feeding each other in turn.
So what does all of this have to do with my design? Well, the TableModel is the Model being created for the consumption of the GUI layer. Then the GUI does its own MVC thing with it. Correspondingly, the DVDRecords are the Model being presented to the middle tier from the backend tier. Thus, contrary to Andrew's interpretation, I see the TableModel as the model, which explains the Model piece in the diagram. In Andrew's interpretation, the Model is strictly what the back end manipulates as a logical abstraction between itself and the database. However, in my interpretation, the model is what the current layer manipulates in lieu of the complexity of the previous layer, not just the database.
If you consider your application to be three 'mini' applications (namely, the GUI, the BusinessLogic, and the DBDriver), then you'll see how each layer produces a model, in turn, for the next layer. The DBDriver layer parses Files and produces Records (Models), which it feeds to the BusinessLayer. The BusinessLayer parses Records and produces TableModels (Models) which it feeds to the next layer, and the GUI consumes TableModels (again, just another model), and provides a visual (model) for the human client.
However, all of this is just the naming of things: it doesn't really have anything to do with the concepts. If you have the concepts down, then you're ok. And it seems like you do, so we're ok
BTW- the reason this is somewhat different from Struts is because Struts actually uses a variation of MVC called Model2. The main difference being that Model2 architecture requires feedback from the User before it update itself, where traditional MVC allows this to driven by the MVC layer directly.
In keeping with the restaurant example, Model2 is a fast food restaurant, where the system can't interact with you until you interact with it first (which makes a lot of sense, in the context of a web browser). The traditional MVC would be more of a traditional system, where the waiter might interact with you without your prompting( say, to take you plate away, or to see if you need more water).
HTH,
M
[ September 15, 2003: Message edited by: Max Habibi ]
Regards, Andrew
[ September 15, 2003: Message edited by: Andrew Monkhouse ]
The Sun Certified Java Developer Exam with J2SE 5: paper version from Amazon, PDF from Apress, Online reference: Books 24x7 Personal blog
( and author)
Sheriff
Originally posted by Andrew Monkhouse:
Hi Max,
) so the waiter comes back and crosses the oysters off the menu (the view gets updated). I then make another choice which gets fulfilled.) so the waiter comes back and crosses the oysters off the menu (the view gets updated). I then make another choice which gets fulfilled.
Regards, Andrew
[ September 15, 2003: Message edited by: Andrew Monkhouse ]
I don't really think of it as an alternative interpretation: I think of it as a different perspective on the same thing. To wit, your post doesn't contradict anything that I explained above
The point to my design is that each layer, ultimately, produces a model for the next layer. The waiter produces a Model for the Kitchen (a customer order), the Kitchen produces a model for the grocer( the items on a grocery list), etc. That's the point of the design in my book. Three small layers, each with their own micro MVC world. I think that you're seeing the model from the mid tier, and assuming it's the model for the GUI tier. I must not have done a great job explaining it in the book.
M
| https://coderanch.com/t/184136/certification/NX-URLy-Bird-Connection-Factory | CC-MAIN-2016-50 | refinedweb | 4,175 | 60.75 |
Let’s take a look at the freely available data on births in the United States, provided by the Centers for Disease Control (CDC). This data can be found at births.csv
import pandas as pd births = pd.read_csv("births.csv") print(births.head()) births['day'].fillna(0, inplace=True) births['day'] = births['day'].astype(int)
births['decade'] = 10 * (births['year'] // 10) births.pivot_table('births', index='decade', columns='gender', aggfunc='sum') print(births.head())
We immediately see that male births outnumber female births in every decade. To see this trend a bit more clearly, we can use the built-in plotting tools in Pandas to visualize the total number of births by year :
import matplotlib.pyplot as plt import seaborn as sns sns.set() birth_decade = births.pivot_table('births', index='decade', columns='gender', aggfunc='sum') birth_decade.plot() plt.ylabel("Total births per year") plt.show()
Further data exploration:
There are a few interesting features we can pull out of this dataset using the Pandas tools. We must start by cleaning the data a bit, removing outliers caused by mistyped dates or missing values. One easy way to remove these all at once is to cut outliers, we’ll do this via a robust sigma-clipping operation:
import numpy as np quartiles = np.percentile(births['births'], [25, 50, 75]) mean = quartiles[1] sigma = 0.74 * (quartiles[2] - quartiles[0])
This final line is a robust estimate of the sample mean, where the 0.74 comes from the interquartile range of a Gaussian distribution. With this we can use the query() method to filter out rows with births outside these values:
births = births.query('(births > @mean - 5 * @sigma) & (births < @mean + 5 * @sigma)') births.index = pd.to_datetime(10000 * births.year + 100 * births.month + births.day, format='%Y%m%d') births['day of week'] = births.index.dayofweek
Using this we can plot births by weekday for several decades:
births_day = births.pivot_table('births', index='day of week', columns='decade', aggfunc='mean') births_day.index = ['Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun'] births_day.plot() plt.ylabel("Average Births by Day") plt.show()
Apparently births are slightly less common on weekends than on weekdays! Note that the 1990s and 2000s are missing because the CDC data contains only the month of birth starting in 1989.
Another interesting view is to plot the mean number of births by the day of the year. Let’s first group the data by month and day separately:
births_month = births.pivot_table('births', [births.index.month, births.index.day]) print(births_month.head()) births_month.index = [pd.datetime(2012, month, day) for (month, day) in births_month.index] print(births_month.head())
Focusing on the month and day only, we now have a time series reflecting the average number of births by date of the year. From this, we can use the plot method to plot the data. It reveals some interesting trends:
fig, ax = plt.subplots(figsize=(12, 4)) births_month.plot(ax=ax) plt.show()
5 Comments
[…] Data Science Project on Birth Rate Analysis […]
Iam getting this error when I run query function:
Python keyword not valid identifier in numexpr query
Try to run it in colab maybe your system is not supporting the environment
Hi Sir, please write a article on real time object detection using computer vision
Hi Irfana, we already have some articles on object detection:
Real-Time Face Mask Detection
Computer Vision Tutorial | https://thecleverprogrammer.com/2020/05/08/birth-rate-analysis/ | CC-MAIN-2021-04 | refinedweb | 559 | 58.38 |
Through this article, I want to ask people to start programming on the GNU/Linux operating system (from here on, referred to as just ‘Linux’). Students who are just getting started in programming; educators who teach or have a role in teaching programming to new students; hobbyists who program on Windows — I’m asking all of you to please read on and give Linux a real good try for at least a week. If you agree that programming on Linux is indeed a better experience than your previous platform, then stay with it, and enjoy the freedom that the rest of us do!
Just to clear any misunderstandings, I am not aiming to get you to write code for the Linux kernel itself (though that could well follow as your comfort and programming proficiency grow). Instead, I’m talking about writing user-space programs — including the exercises, homework, and project work that most computer-science study courses include. Before we start, here’s a disclaimer: this article contains strong personal opinions and beliefs; I do not in any way intend to be offensive, but some of these ideas just might be worth a try — by you — to see if you feel the same way!
Attacking the mindset
It’s commonly believed that Linux is ‘tough to use’. Sure, it’s different from what people who’re used to Windows are accustomed to — but it’s not tough. Once you adjust to the differences, you’ll probably laugh at this misconception yourself, and tell others how wrong their perception is!
Just consider the many computer science students who’ve been inspired by the buzz that Linux has been creating over a long time now. They have resolutely set about learning how to use it on their own initiative — asking questions on mailing lists, forums and over IRC channels. Within a couple of weeks, they are ready to do more than just get around. Often, within a month, they’re so much at home with Linux that they begin introducing others to the OS. Astounding? It may seem so — but it’s just that those students were determined to explore and learn, and ignored the cries of, “It’s tough.”
There is always a learning curve involved whenever one is acquiring a new skill, and Linux is no exception. If students are taught to use and program on Linux, they will not just learn, but will also find it simple. It would just seem natural to them — learning something that they did not know earlier. “Linux is tough” is a modern-day myth that has to be busted. If you are an educator, please do your bit. You are the one that students look up to, and if you show them the right way, they will follow your example.
Getting Linux up and running
Okay, once you have decided to use Linux, how do you go about it? You may have heard of lots of different Linux “distros” (also called distributions): Ubuntu, Fedora, Debian and more. Why so many “Linuxes”? Let me explain. Technically, “Linux” is the name of a kernel (for more information, refer to this Wikipedia article, or the official home of Linux). Since a kernel is of little use on its own, user-space tools from the GNU project (including the most common implementation of the C library, a popular shell, and many common UNIX tools that carry out many basic operating system tasks) were combined with the Linux kernel to make a usable operating system. The graphical user interface (or GUI) used by most Linux systems is built on top of an implementation of the X Window System. Different free software projects and vendors build different combinations of packages and features, to provide varying Linux experiences to different target audiences — thus resulting in myriad Linux distributions.
So which Linux distribution should you use? Ubuntu and Fedora both have individually made the Linux experience very user-friendly for casual users of the computer — for Internet surfing, e-mail and document processing needs. Either of these is ideal for you to get started with.
Linux installation can be somewhat tricky, though, especially if you intend to set up a dual-boot system where you can boot either Linux or your old Windows. Otherwise, it’s quite simple: download the CD (ISO) image, burn it to a disc, boot your computer from it, and let it install! The best way to do a dual-boot set-up the first time is to get hold of someone in your school, locality or office who knows about it, and ask them to guide you.
Also, there are other options if you want to try Linux either without installing it, without replacing Windows or doing a dual-boot set-up. See the Dealing with practicalities section towards the end of this article, for some of these ideas.
The Ultimate Linux Newbie Guide is a good reference to help you learn things yourself. With Linux, an experimental approach to learning helps a lot. So, back up your data, and get started with those install discs if you can’t find anyone to help you out. These days, most Linux distributions come with just the essential applications and libraries installed — which probably won’t be sufficient for programming needs.
To enable easy installation of new software, most distributions have a package manager (in the Linux world, software is distributed in the form of “packages”), which you use to easily download and install new software from the Internet. The Ultimate Linux Newbie Guide is a good reference for this topic.
So that this article will be of maximum utility, I will try to be more general, and avoid favouring any particular distribution.
Choosing a text editor when you install Linux — you can use either. If you install a distribution like Ubuntu, which has the GNOME desktop environment, then you will have gedit already installed. It’s just like Notepad, only more useful and feature-rich.
C/C++ programming on Linux
C is usually the first language taught to many students in Indian engineering schools and colleges, so let’s first look at how we program in C on Linux. Note that the C code that you will write on Linux will be the same that you would write on Windows/DOS, as long as you are writing ANSI C code. Some library functions, such as those provided by
conio.h and
graphics.h, are not part of the ANSI standard. Hence, you won’t be able to use them on Linux.
The C compiler you use on Linux is GCC. It is part of the GNU Compiler Collection. Open a terminal and run the command:
gcc. If you see something like the following output, it means GCC is already installed.
gcc: no input files
However, if you see something like “Command not found”, then you will have to install GCC using the package manager.
Besides a compiler, you will also need the C standard library, called
glibc, to compile your C programs correctly. Type in locate
glibc and check the output. If it shows directory structures of the form
/foo/bar/glibc or the like, then you have
glibc installed; else you need to install it.
Okay, now that we have confirmed the presence of a text editor, a compiler and the standard library, let us write our first code in C on Linux. For the purpose of this article, let’s create a sub-directory called ‘codes’ under your ‘home’ directory, in which we will store all our source code.
Start up gedit and input this simple C code to print the factorial of a number:
#include<stdio.h> int main(int argc, char **argv) { int n, i,fact=1; printf("Enter a number for which you want to find the factorial:: "); scanf("%d", &n); for(i=1;i<=n;i++) fact=fact*i; printf("Factorial of %d is :: %dn", n,fact); return 0; }
Save this code in the codes sub-directory with the name
fact.c. Launch your shell program (terminal), and run
cd codes to go to this directory. Once you are there, issue the following command:
gcc factorial.c
After executing the command, run
ls and you will see an
a.out file in the current directory. This is the executable file of your C program, compiled and linked with the appropriate libraries. To execute it, run (note the leading
./, which is essential!):
./a.out Enter a number for which you want to find the factorial:: 5 Factorial of 5 is :: 120
Congratulations, you have just written your first C program on Linux! That was just the normal C that you write on DOS or Windows — no surprises there! A bit more about this
a.out file: This is the Linux equivalent of the .exe file that you would see under DOS/Windows; it is the executable form of your code. As you might have already guessed, this file cannot be executed on DOS or Windows, since it is in a different format.
Now, instead of having to rename your executable file each time you compile, you can specify the output file name to the compiler:
gcc -o factorial factorial.c
Try a few more programs from your C programming and data structures classes.. The cycle of coding, compilation and execution is very similar to that for C, except for the compiler we use, which is
g++. Check if it’s already installed by running the command in a terminal, like we did for
gcc. Next, use your package manager to check if
libstdc++, the standard C++ library, is installed (if not, install it). Once both are installed, open up gedit and type this simple C++ program:
#include<iostream> #include<string> using namespace std; int main(int argc, char **argv) { string s1="Hello"; string s2="World"; cout <<s1+" " + s2 << "n"; return 0; }
Save this file as
string-demo.cxx in the codes subdirectory.
Compile and execute the file:
g++ -o string-demo string-demo.cxx ./string-demo
Running the above command should output the following on the terminal:
Hello World
The C++ code you see is standard C++, with the
.h omitted from the header files. C++ source files conventionally use one of the suffixes
.C,
.cc,
.cpp,
.c++,
.cp, or
.cxx.
Let us now write a simple C++ program that uses classes:
#include<iostream> using namespace std; class Circle{ float r; public: void init(float x) /* Inline function */ { r = x; } float area(); }; float Circle::area() { return 3.14*r*r; } int main(int argc, char **argv) { float radius; Circle circle; cout << "Enter the radius of the circle:: "; cin >> radius; circle.init(radius); cout << "Area of the Circle:: "<<circle.area()<<"n"; return 0; }
Save the file in the codes sub-directory as
class-demo.cxx.
Compile and execute it:
g++ -o class-demo class-demo.cxx ./class-demo Enter the radius of the circle:: 4 Area of the Circle:: 50.24
Assuming that you have been able to compile these programs successfully, I would now recommend you go ahead and write, compile and test some of your C/C++ assignments and problems using gcc and g++. If you face any issues, you are most welcome to ping me.
Java programming on Linux
Java is perhaps the next most widely taught language in Indian schools and colleges after C/C++. The best part of Java programming on Linux is that you use the same tools that you would use on Windows — yes, the Sun Java Development Kit.
To install the JDK on Linux, download the installer for Linux from its official website.
Choose the
.bin file, and not the
*rpm.bin file, unless you know what you are doing. (The
.bin file is the equivalent of
.exe on Windows). Once the download is complete, in your terminal,
cd to the directory where the file has been downloaded, and use the following commands:
chmod +x jdk-6u18-linux-i586.bin ./jdk-6u18-linux-i586.bin
The file names above might differ depending on the JDK version that you have downloaded. The first line makes the installer executable, and the second line executes it. The installer should start now, and you should see the “Sun Microsystems, Inc. Binary Code License Agreement”.
Accept the licence, and the extraction of the JDK should start. Once the installer has exited, you should see a new sub-directory named ‘jdk1.6.0_18’ inside the current directory. If you are familiar with Java programming on Windows, this should be easily recognisable. Inside this directory is the bin sub-directory, which has the Java compiler (
javac), Java interpreter (
java), and others.
With this, we are all set; let’s write our first Java program on Linux. Fire up gedit and write the following Java code, which shows the usage of an array of integers:
import java.util.Random; class ArrayDemo { public static void main(String[] args) { int[] arr = new int[10]; for(int i=0;i<10;i++) arr[i] = (new Random()).nextInt(); for(int i=0;i<10;i++) System.out.println("Element at index " + i + "is::" + arr[i]); } }
Save the code to a file
ArrayDemo.java, then compile and run it as follows:
/home/amit/jdk1.6.0_18/bin/javac ArrayDemo.java /home/amit/jdk1.6.0_18/bin/java ArrayDemo
Note the first two commands, where I have given the full path to the location of the javac and java executables. Depending on where you have extracted the JDK, your path will vary.
Running the second command should output the following in your terminal:
Element at index 0is:: 480763582 Element at index 1is:: -1644219394 Element at index 2is:: -67518401 Element at index 3is:: 619258385 Element at index 4is:: 810878662 Element at index 5is:: 1055578962 Element at index 6is:: 1754667714 Element at index 7is:: 503295725 Element at index 8is:: 1129666934 Element at index 9is:: 1084281888
So, this is how you can compile, run, test and debug your Java programs.
OpenJDK
An article about Java programming in an open source magazine would be incomplete without talking about OpenJDK. It’s good for you to be aware of this project. As you might have already guessed, it is a GPL-licensed open source implementation of the Java Standard Edition — i.e., the source code of the JDK that you are so familiar with, is also now available for your scrutiny, in case you don’t like something in the current JDK.
So, is this a different Java? No — you write the same Java code. You can install OpenJDK from your Linux distribution’s package manager (it may come pre-installed with some distributions). Installation instructions are available here.
Dealing with practicalities
Due to various reasons, deploying Linux lab-wide may not always be possible. In such cases, it’s a good idea to have a single Linux machine in the lab, acting as an SSH server; you can install the necessary SSH client software on other operating systems, which will enable connecting to the Linux machine remotely.
This machine should be of a relatively good configuration, depending on how many students will be using it for their coding and compilation — a dual- or quad-core CPU with 4 GB of RAM and a hard disk of at least 320 GB is a good idea. For Windows, Putty is a widely used SSH client. If writing the code on Windows and copying it to the Linux machine to compile and run, you will also need to download the
pscp program from the site, which lets you copy files from the local machine to the Linux SSH server.
If you need a GUI session from the Linux server to be accessible on the Windows machine (for example, while doing GUI programming) then investigate the OpenNX server (to be installed on the Linux server machine) and the NoMachine NX client for Windows. A machine with the configuration given above should support around 10 user sessions before it starts slowing down. Fine-tuning the desktop manager (use a light one like LXDE or XFCE) and using lighter editors like GVim for writing code, is a good start.
Another option (which does not need a dedicated Linux server machine) is to install Linux in a virtual machine on your desktop. This could also prove useful on a home computer. VirtualBox is virtualisation software that, when installed on your Windows system, will allow you to create a virtual machine, inside which you can install Linux without disrupting your Windows installation. You will, of course, need some free disk space (8 GB or more) for the virtual machine’s disk file. You don’t need to burn the Linux installation ISO onto a CD in this case — you can simply instruct VirtualBox to use the ISO image file as a disc inserted in the CD-ROM drive of the virtual machine.
This is also a good way to practice installing Linux, and to see how easy it can be. For Ubuntu, in particular, there is Wubi which lets you install (and uninstall) Ubuntu like any other Windows application, in a simple and safe way, “with a single click”. The Ubuntu files are stored in a single folder in your Windows drive, and an option to boot Ubuntu is added to your Windows boot-loader menu.
However, hard-disk access is slightly slower than installation to a dedicated partition. If your Windows drive is very fragmented, the performance will degenerate further. Hibernation is not supported under Wubi. Moreover, the Wubi filesystem is more vulnerable to hard reboots (turning off the power) and power failures than a normal installation to a dedicated partition, which provides a more robust filesystem that can better tolerate such events.
In general, programming on Linux will also require a decent level of familiarity regarding working with shell commands. Get familiar with working with the shell. Try to minimise the use of the mouse :-)
Using your favourite IDE on Linux
If you have been using any IDEs for your development needs, it should be great news that two very popular IDEs — NetBeans and Eclipse — have Linux versions as well, and both of them support C, C++ and Java development. For GNOME-based Linux distributions, Anjuta DevStudio is another powerful IDE for C, C++ and Java (and other languages too). All three should be available in your distribution’s package manager.
To conclude this article, I would like to urge you to make an honest effort to embrace Linux for programming. It’s a much better world to be in. I would love to address any queries/concerns/comments/suggestions that you may have, regarding this article.
Resources
- graphics.h like functionality using GCC
- GNU Compiler Collection
- by Brian W. Kernighan and Dennis M. Ritchie, The C Programming Language
- Bjarne Stroustrup, The C++ Programming Language
- Neil Matthew, Richard Stones, Beginning Linux Programming
- StackOverflow is a community forum where you can post your programming-related questions. It’s languageneutral, which makes it very attractive.
- A basic introduction to SSH
- VirtualBox. User Manual.
- yolinux.com is a good resource for general Linux information.
Pingback: Tweets that mention Write Your Next Program on Linux – LINUX For You Magazine -- Topsy.com
Pingback: Tweets that mention Write Your Next Program on Linux – LINUX For You Magazine -- Topsy.com | http://www.opensourceforu.com/2010/05/write-your-next-program-on-linux/ | CC-MAIN-2014-42 | refinedweb | 3,214 | 61.36 |
list-watchable-software - REPLACE ME
This document describes version 0.04 of list-watchable-software (from Perl distribution Software-Release-Watch), released on 2015-09-04.
Usage:
% list-watchable-software [options] [query]
REPLACE ME
* marks required options.
Set path to configuration file.
Can be specified multiple times.
Set configuration profile to use.
Do not use any configuration file.
Do not read environment for default options.
Return array of full records instead of just ID fields.
By default, only the key (ID) field is returned per result entry.
Select fields to return.
Can be specified multiple times.
Select fields to return (JSON-encoded).
See
--field.
Return field names in each record (as hash/associative array).
When enabled, function will return each record as hash/associative array (field name => value pairs). Otherwise, function will return each record as list/array (field value, field value, ...).
Only return records where the 'id' field contains specified text.
Only return records where the 'id' field is in the specified values (JSON-encoded).
See
--id-in.
Only return records where the 'id' field is in the specified values.
Can be specified multiple times.
Only return records where the 'id' field equals specified value.
Only return records where the 'id' field does not equal specified value.
Only return records where the 'id' field is less than or equal to specified value.
Only return records where the 'id' field is greater than or equal to specified value.
Only return records where the 'id' field does not contain specified text.
Only return records where the 'id' field is not in the specified values (JSON-encoded).
See
--id-not-in.
Only return records where the 'id' field is not in the specified values.
Can be specified multiple times.
Only return records where the 'id' field is less than specified value.
Only return records where the 'id' field is greater than specified value.
Only return records where the 'id' field equals specified value.
Search.
Return records in random order.
Order records according to certain field(s).
A list of field names separated by comma. Each field can be prefixed with '-' to specify descending order instead of the default ascending.]
Only return a certain number of records.
Only return starting from the n'th record.
Default value:
1
Display help message and exit.
Display program's version and exit.
This script has shell tab completion capability with support for several shells.
To activate bash completion for this script, put:
complete -C list-watchable-software list-watchable-software-watchable-software 'p/*/`list-watchable-software`/'-watchable-software.conf,
~/list-watchable-software.conf or
/etc/list-watchable-software:
detail (see --detail) fields (see --field) format (see --format) id (see --id) id.contains (see --id-contains) id.in (see --id-in) id.is (see --id-is) id.isnt (see --id-isnt) id.max (see --id-max) id.min (see --id-min) id.not_contains (see --id-not-contains) id.not_in (see --id-not-in) id.xmax (see --id-xmax) id.xmin (see --id-xmin) naked_res (see --naked-res) query (see --query) random (see --random) result_limit (see --result-limit) result_start (see --result-start) sort (see --sort) with_field_names (see --with-field-names)
~/.config/list-watchable-software.conf
~/list-watchable-software.conf
/etc/list-watchable-software. | http://search.cpan.org/dist/Software-Release-Watch/bin/list-watchable-software | CC-MAIN-2017-13 | refinedweb | 535 | 54.08 |
KiokuDB::Test - Reusable tests for KiokuDB backend authors.
version 0.57
use Test::More; use KiokuDB::Test; use KiokuDB::Backend::MySpecialBackend; my $b = KiokuDB::Backend::MySpecialBackend->new( ... ); run_all_fixtures( KiokuDB->new( backend => $b ) ); done_testing();
This module loads and runs KiokuDB::Test::Fixtures against a KiokuDB directory instance.
Runs all the KiokuDB::Test::Fixture objects against your dir.
If you need a new instance of KiokuDB for every fixture, pass in a code reference.
This will load all the modules in the KiokuDB::Test::Fixture namespace, and run them against your directory.
Fixtures generally check for backend roles and skip unless the backend supports that set of features.. | http://search.cpan.org/~doy/KiokuDB/lib/KiokuDB/Test.pm | CC-MAIN-2015-06 | refinedweb | 106 | 58.38 |
_PORTAL(8) OpenBSD separat-
ed string of options. See the mount(8) man page for possible op-
tions, two sub-namespaces are implemented: tcp and fs. The tcp names-
pace takes a hostname and a port (slash separated) and creates an open
TCP/IP connection. The fs namespace opens the named file, starting back
at the root directory. This can be used to provide a controlled
tcp/ tcp tcp/
fs/ file fs/
FILES
/p/*
SEE ALSO
mount(2), unmount(2), fstab(5), mount(8)
CAVEATS
This filesystem may not be NFS-exported.
HISTORY
The mount_portal utility first appeared in 4.4BSD. | http://www.rocketaware.com/man/man8/mount_portal.8.htm | crawl-002 | refinedweb | 101 | 64.51 |
/foo/bar/index -> Foo::index()
/test123.html -> Bar::default()
/cookies/list/fresh -> Cookies::list()
[download]
# /foo/bar/index
package Foo::Bar;
sub index : Relative { ... }
# /test123.html
package Foo::Bar;
sub test : Path('/test123.html') { ... }
# /cookies/list/fresh
package Cookies;
sub list : Relative Args(1) { ... }
[download]
@Servlet(urlMappings={"/foo", "/bar"})
public class ControllerWithAnnotations {
@GET
public void handleGet(HttpServletRequest req, HttpServletResponse
+res) { ... }
}
[download]
# /foo/bar/index
on 'foo/bar/index' => do { ... }
# /test123.index
on 'test123.html' => do { ... }
# /cookies/list/fresh
on '/cookies/list/*' => do { ... }
[download]
__PACKAGE__->action('/foo/bar/index' => sub { ... });
[download]
# /foo/bar/index
map.connect 'foo/bar/index', :controller => "foo", :action => "index"
# /test123.html
map.connect 'test123.html', :controller => "foo", :action => "test"
# /cookies/list/fresh
map.connect ':controller/:action/:quality', :controller => "cookies",
+:action => "list",
:quality => "fresh", :requirements => { :quality => /\w+/ }
[download]
<%= link_to "Fresh Cookies", :controller => "cookies", :action => "lis
+t", :quality => "fresh" %>
[download]
# /foo/bar/index
r.match("/foo/bar/index").to(:controller => "foo", :action => "index")
# /test123.html
r.match("/test123.html").to(:controller => "foo", :action => "test")
# /cookies/list/fresh
r.match(%r[^/cookies/list/(\w+)$]).to(:controller => "cookies", :actio
+n => "list", :quality => 'path[1]')
[download]
urlpatterns = patterns('',
(r'^foo/bar/index$', 'foo.views.index'),
(r'^test123\.html$', 'foo.views.test'),
(r'^cookies/list/(\w+)$', 'cookies.views.list'),
)
[download]
# /foo/bar/index
$r->route('/foo/bar/index')->to(controller => 'foo', action => 'index'
+);
# /test123.html
$r->route('/test123.html')->to(controller => 'foo', action => 'test');
# /cookies/list/fresh
$r->route('/:controller/:action/:quality', quality => qr/\w+/)
->to(controller => 'cookies', action => 'list', quality => 'fresh');
$c->url_for(controller => 'cookies', action => 'list', quality => 'fre
+sh');
[download]
If you were to, say, use perl 5.10, would you be able to get rid of some of the parsing (which is likely done in perl) and use named captures with regexes?
$r->route(qr[^/(?<controller>[^/]+)/(?<action>[^/]+)/(?<quality>[^/]+)
+(?:/(?<extras>.*))$])->to(...)
# (ok, an x modifier with some gratuitous whitespace might be wise her
+e..)
[download]
So, if you're given a string, you can have the : markers as above. But if you're given a Regexp reference and the current perl is 5.10, you have the opportunity for some much more interesting handling, which you almost don't even have to handle.
$r->route('/:(number)_:(word)/foo', number => qr/\d+/, word => qr/\w+/
+)
[download]
In my HTTP::Server::Brick I followed a simple declarative approach.
I've always liked the OpenACS request processor. In your installation you have a number of standard packages (eg. comments, admin) and you write your own packages for your functionality. You can then instantiate your package (maybe more than once) on a sub-url of your choosing. Instantly all the url paths offered by that package are available relative to that mount point. Since you can instantiate more than once, your code is able to introspect which instance it is to make sure it is, say, retrieving the right blog entries from the database.
OpenACS is built on top of AOLServer which follows a fairly standard declarative dispatcher design where you call a tcl api to register a proc against a particular url.
A party
An organised event
A traditional gathering
With family and friends
Home alone
I don't celebrate the New Year
Adjusting my clocks for the Leap Second
I can't remember
Other
Results (177 votes). Check out past polls. | http://www.perlmonks.org/?node_id=717029 | CC-MAIN-2018-05 | refinedweb | 549 | 50.63 |
This is the assignment question below, ive made the basis of the program but need to know what do next to make the program do what its supposed to..
For the top marks, there should be considerations on the unexpected circumstances such as corrupted individual records etc, and the program should be able to report and handle such anomalies accordingly. Such endeavours should be properly documented as well.
this is what ive done so far..
Code:
#include<iostream>
using namespace std;
int main()
{
long id;
char station;
string rest;
while(true)
{
cerr<<"Please input ID: ";
cin>>id;
if(cin.fail()) break;
cerr<<"Input Station: ";
cin>>station;
if(cin.fail()) break;
cerr<<"Input Rest: ";
cin>>ws;
getline(cin, rest);
if(cin.fail()) break;
cout<<id<<' '<<"@ "<<station<<" "<<rest<<endl;
}
system ("pause");
return 0;
} | http://cboard.cprogramming.com/cplusplus-programming/79662-need-help-cplusplus-program-printable-thread.html | CC-MAIN-2014-23 | refinedweb | 131 | 65.42 |
How to Add Callbacks for a QTWidget in ROS?
I am intending to develop a user interface using rqt in ros. I have developed a user interface containing a QTPushButton and QTLabel in QT Designer. I have imported this ui file into my Python Plugin. So, when I run the plugin, the user interface does pop up. But now I want to add a callback to the PushButton. So, that if I press the Button then the label should show the text "Button Clicked". I have tried many code snippets, but somehow nothing happens when I click the Button.
Can somebody please help. It is very urgent. Can somebody write the actual code for the callback.
My Plugin.py file is given below:
import os import rospy import rospkg from qt_gui.plugin import Plugin from python_qt_binding import loadUi from python_qt_binding.QtGui import QWidget, QPushButton, QLabel class MyPlugin(Plugin): def __init__(self, context): super(MyPlugin, self).__init__(context) self.setObjectName('MyPlugin') from argparse import ArgumentParser parser = ArgumentParser()', 'mainwindow) self.text = QLabel("Data") pushButton = QPushButton("Click Here") pushButton.clicked.connect(self.buttonClicked) def buttonClicked(self): print("Button Clicked") self.text.setText("Button Has been Clicked")
Also, my ui file looks like this "mainwindow.ui":
<ui version="4.0"> <class>Form</class> <widget class="QWidget" name="Form"> <property name="geometry"> <rect> <x>0</x> <y>0</y> <width>400</width> <height>300</height> </rect> </property> <property name="windowTitle"> <string>Form</string> </property> <widget class="QPushButton" name="pushButton"> <property name="geometry"> <rect> <x>80</x> <y>180</y> <width>98</width> <height>27</height> </rect> </property> <property name="text"> <string>PushButton</string> </property> </widget> <widget class="QLabel" name="label"> <property name="geometry"> <rect> <x>100</x> <y>80</y> <width>66</width> <height>17</height> </rect> </property> <property name="text"> <string>TextLabel</string> </property> </widget> </widget> <resources/> <connections/> </ui>
It looks like you're creating a new button rather than retrieving the existing button that is created by the UI file.
Can you tell me how to reference the button from the UI File?
Can somebody please help?
I don't know how to retrieve a button that is created by the UI file offhand, but it shouldn't be hard to find a QT tutorial online.
I have searched the web for the last 2 days. But could not find a way of referencing the buttons from the ui file. I hope somebody else would be able to answer. It should be easy I suppose.
I did this search:... and the first link (for me) looks like a fairly good reference:... .
Thanks. I would have a look at these.
There is a helper function in python_qt_binding called loadUi that will make things easier and more portable than those links. I have detailed its use below. | https://answers.ros.org/question/195152/how-to-add-callbacks-for-a-qtwidget-in-ros/?answer=195261 | CC-MAIN-2022-33 | refinedweb | 462 | 59.09 |
Golden Master Testing: Refactor Complicated Views
Free JavaScript Book!
Write powerful, clean and maintainable JavaScript.
RRP $11.95 View templates should be simple. They should, in fact, be so simple that there is no point in calculating a complexity score for them. In successful applications, however, this is often not the case. Logic sneaks into the templates unnoticed while everyone’s attention is fixed on some emergency elsewhere.
Refactoring templates can be thorny when their test coverage is dismal, which is most of the time. And rightly so! Why bother testing something that is going to change on a regular basis and should not contain any logic?
The trick, then, is to find a way to put a wall at your back when you decide that you do need to move logic out of the templates and into more appropriate locations in your application.
A Real-World Example
Tracks is a time-management tool based on David Allen’s 2002 book Getting Things Done.
The application has been under continuous, active development since Rails was in Beta, keeping up to date with Rails releases. As with many applications that have been around for a while, some parts of the codebase have gotten a bit unwieldy and out of control.
This provides us with an opportunity to examine the Golden Master technique using real-world code rather than a tiny, synthetic example that leaves the reader wondering how to transfer the lesson to actual production code.
Exploring Tracks
Looking closely at the codebase, you’ll discover large models, large controllers, and complex views. The controllers and views, for example, share a huge number of instance variables.
All of these things suggest that there might be missing abstractions and unnamed concepts that could make it easier to reason about the codebase.
The
StatsController exemplifies this. The file is over 450 lines long. It contains 37 methods, only 17 of which are controller actions.
The actions come in two flavors:
- Vanilla HTML pages
- plain-text collections of
key=valuepairs
The
key=value pairs are used to render charts within the HTML responses. The templates for these do some computation, set some local variables, and render a whole host of instance variables that were calculated in the controller.
Also, they are duplicated.
That is, they’re not perfectly duplicated, that would be too easy. They’re similar, but not identical.
In this exercise, we’ll take two templates that are very similar—
actions_visible_running_time_data.html.erb, and
actions_running_time_data.html.erb–and pull all the differences up one level into the controller. Finally, we’ll delete one of the views.
Wait, what do you mean pull stuff into the controller? Isn’t the controller already too big?
Well, yes, it is.
Also, it contains even more duplication than the templates do.
This is a common quandary when refactoring. Very often the act of refactoring makes things temporarily worse. It adds duplication and complexity. It can sometimes feel as though you’re pushing messy code from one junk drawer to another. But then you suddenly identify a useful abstraction, allowing you to collapse the complexity.
Those abstractions are hard to identify when the logic is spread out. Once everything is in the controller, we can get a better idea of what we’re tackling.
A Wall at Your Back
And so, it begins…
Oh wait. It doesn’t. We need to make sure we have good tests.
Running
rake test takes about 20 minutes. That’s too slow to be useful for refactoring. We can narrow it down to just the tests that cover the chart data endpoints.
The easiest way to figure out which tests are implicated is to
raise 'hell' from within the
erb templates we are looking to refactor:
<%- raise "hell" -%>
About 20 minutes later we can confirm that three tests blow up. All three tests are in the
StatsController tests, which can be run in isolation:
$ ruby -Itest test/controllers/stats_controller_test.rb
That takes 5:17 seconds. It’s not fast, but it will do.
How good are the tests? Let’s see what they complain about if we delete the entire contents of the two
erb files.
24 runs, 126 assertions, 0 failures, 0 errors, 0 skips
We’re going to need better tests.
Blissful Ignorance
The actions in the
StatsController look very complicated. If we write tests that make faulty or incomplete assumptions, we might introduce regression bugs while refactoring.
It would be less tedious and error-prone to just assume that whatever is happening right now is exactly what should be happening. If we could just capture the full
key=value output for a chart endpoint, it can be copied/pasted into the test and used as the assertion.
We won’t even have to understand what is going on!
This type of test is dirty, brittle, and very, very handy. Michael Feathers calls it a characterization test in his book Working Effectively with Legacy Code.
A characterization test is a temporary measure. You may never commit it to source control, it’s just there to get you out of a bind. Sometimes you’ll be able to replace it with better tests. Other times, such as with these views, the test will just serve to simplify the views, and then it can be deleted altogether.
Characterizing the Views
The test must:
- Capture the entire HTML response that gets generated when we call the controller action.
- Compare it to the response we’ve previously defined as “good” (i.e. the Golden Master).
- If they’re the same, the test passes. Otherwise it fails.
While we could write the test, assert that the response is “LOLUNICORNS”, watch it fail, and then copy/paste the entire response from the failure message, we’re going to use Llewellyn Falco’s Approval Tests pattern to semi-automate the process.
It goes like this:
- Always capture the response, as
received.html.
- Compare it to
approved.html.
- Fail if
receivedis different from
approved.
In the case of a failure, eyeball the difference by opening both in browsers or using a diff tool. If you like
received better than
approved, manually rename the file.
This gives us a way to handle the very first Golden Master rule: Run the test, see it fail (because
approved is empty), move the received file over to be approved.
Voila, the test is now passing.
Setup
We need to do a bit of setup. First, it’s nice to be able to run these tests completely independently of any other test, so create a new file for them in
test/functional/lockdown_test.rb.
require File.expand_path(File.dirname(__FILE__) + '/../test_helper') class LockdownTest < ActionController::TestCase # tell Rails which controller we're using tests StatsController def test_output_does_not_change # do some setup # call the endpoint # write the response to a file # compare the two files and make an assertion end end
We actually need two tests, one for each endpoint. All but the call the endpoint step is exactly alike in both cases, so create a helper method for all the boiler-plate:
def with_golden_master(key) FileUtils.mkdir_p('.lockdown') FileUtils.touch(".lockdown/#{key}-approved.txt") login_as(:admin_user) Timecop.freeze(Time.utc(2014, 6, 5, 4, 3, 2, 1)) do yield end received = @response.body File.open(".lockdown/#{key}-received.txt", 'w') do |f| f.puts received end approved = File.read(".lockdown/#{key}-approved.txt") unless approved == received assert false, approval_message(key) end end def approval_message(key) <<-MSG FAIL: The output changed. Eyeball the differences with: diff .lockdown/#{key}-received.txt .lockdown/#{key}-approved.txt If you like the #{key}-received.txt output, then cp .lockdown/#{key}-received.txt .lockdown/#{key}-approved.txt MSG end
The fixture data gets generated relative to today’s date, so you will probably need to change the Timecop value to something a week or two in the future in order to get the tests to run.
Then, actually call the controller actions:
def test_visible_running_chart_stays_the_same with_golden_master('vrt') do get :actions_visible_running_time_data end end def test_all_running_chart_stays_the_same with_golden_master('art') do get :actions_running_time_data end end
Run the tests:
$ ruby -Itest test/functional/lockdown_test.rb
Copy the
received file over the
approved file, and re-run the test to see it pass.
How good are these tests?
That depends on how good the data is. Tracks has fixtures, and we end up with more than one datapoint in the charts, so let’s assume that it’s fine.
Refactoring (Finally!)
We’ll do these one at a time. Here’s the actions running time view:
<%- url_labels = Array.new(@count){ |i| url_for(:controller => 'stats', :action => 'show_selected_actions_from_chart', :index => i, :id=> "art") } url_labels[@count]=url_for(:controller => 'stats', :action => 'show_selected_actions_from_chart', :index => @count, :id=> "art_end") time_labels = Array.new(@count){ |i| "#{i}-#{i+1}" } time_labels[0] = "< 1" time_labels[@count] = "> #{@count}" -%> &title=<%= t('stats.running_time_all') %>,{font-size:16},& &y_legend=<%= t('stats.running_time_all_legend.actions') %>,10,0x736AFF& &y2_legend=<%= t('stats.running_time_all_legend.percentage') %>,10,0xFF0000& &x_legend=<%= t('stats.running_time_all_legend.running_time') %>,11,0x736AFF& &y_ticks=5,10,5& &filled_bar=50,0x9933CC,0x8010A0& &values=<%= @actions_running_time_array.join(",") -%>& &links=<%= url_labels.join(",") %>& &line_2=2,0xFF0000& &values_2=<%= @cumulative_percent_done.join(",") %>& &x_labels=<%= time_labels.join(",") %> & &y_min=0& <% # add one to @max for people who have no actions completed yet. # OpenFlashChart cannot handle y_max=0 -%> &y_max=<%=1+@max_actions+@max_actions/10-%>& &x_label_style=9,,2,2& &show_y2=true& &y2_lines=2& &y2_min=0& &y2_max=100&
First, because the values will need to be shared between the view and the controller, turn
url_labels into
@url_labels, and see the tests pass. Do the same for
time_labels.
When we move
@url_labels into the controller, the test fails.
# approved /stats/show_selected_actions_from_chart/avrt?index=0 # received
The controller added to the generated urls. We can force it to return just the path by passing the
:only_path => true option to it:
url_for(:controller => 'stats', :action => 'show_selected_actions_from_chart', :index => i, :id=> "avrt", :only_path => true)
Moving
time_labels into the controller works without a hitch.
The next changes are even easier. For every single interpolated value, make sure that the controller sets an instance variable for the final value:
# Normalize all the variable names @title = t('stats.running_time_all') @y_legend = t('stats.running_time_legend.actions') @y2_legend = t('stats.running_time_legend.percentage') @x_legend =
We can make the same changes in visible actions running time. Now that both templates are identical, both endpoints can render the same template:
render :actions_running_time_data, :layout => false
Delete the unused template, and celebrate.
Then get a shovel, some smelling salts, and get ready to attack that controller. That, however, is a topic for another post (coming soon, don’t fret.)
Get practical advice to start your career in programming!
Master complex transitions, transformations and animations in CSS! | https://www.sitepoint.com/golden-master-testing-refactor-complicated-views/ | CC-MAIN-2021-04 | refinedweb | 1,757 | 58.18 |
ctypes pythonapi version
@omz
ctypes.pythonapialways points to the C API of Python 3 regardless of the default interpreter setting. Is there anyway to access the Python 2 version of
pythonapiobject? It would be even more fantastic if both of them can be accessed without switch interpreter setting.
I also tried to manually load the library with
ctypes.CDLL(os.path.join(os.path.dirname(sys.executable), 'Frameworks/PythonistaKit.framework/PythonistaKit'))
Although it seems to load the Python 2 API and
Py_GetVersiondoes show the version to be 2.7. But it is somehow not really usable. Many API calls working with the Python 3 API would not work or even simply crash the app.
Any help is appreciated.
@ywangd Perhaps the Python 3 interpreter is the "main" DLL that takes priority. The names of the Python 2 and 3 C functions conflict in most places, which means that because
ctypes.pythonapiis really just a
ctypes.PyDLL(None)(i. e. accessing global symbols rather than a specific DLL) you can only really access one version with it and that will not change with the interpreter version of the console.
If you want to call the active version's C API, you need to wrap its DLL in a
ctypes.PyDLLinstead of a
ctypes.CDLLso the GIL stays held while you call its functions. If you want to call the other version's C API, you can use a normal
ctypes.CDLL, but you need to worry about managing the GIL yourself (the
PyGILStatefunctions are probably the easiest way).
Thanks @dgelessus
The use of
PyDLLworked for some initial tests!
@omz @dgelessus
The following simple code using
pythonapiworks well in Python 2 but errors out in Python 3.
import ctypes p3 = ctypes.pythonapi state = p3.PyGILState_Ensure() p3.PyRun_SimpleString('print(42)') p3.PyGILState_Release(state)
The error is
name 'p' is not definedwhich is very weird as it suggests that the API does not even parse the given string correctly. It somehow tries to get a variable named
pwhich is in fact the first character of
@ywangd That's a
unicode/
str/
bytesissue. Short answer, the arguments to Python's C API need to be byte strings (
b"...") unless stated otherwise in the docs. Long explanation below.
:)
In C, the type
charrepresents a byte (which is generally agreed to be 8 bits nowadays). Most code uses
char *(a pointer to a
char, which is effectively used as an array of unknown size) as the data type for "strings". Because a
charis only 8 bit wide, it can't hold a full Unicode code point. There is the
wchar_tdata type, which is not really standardized either, but it's wider than
charand can usually hold a Unicode code point, so APIs that support Unicode properly use
wchar_t *instead of
char *for strings.
In Python 2, the situation is similar.
stris like C's
char *- it's made of 8-bit bytes and can't hold Unicode text properly, and
unicodeis like C's
wchar_t *and supports full Unicode. That's why
ctypesconverts
strto
char *and
unicodeto
wchar_t *and vice versa.
Now Python 3 comes along and cleans up a lot of Python 2's Unicode issues. In Python 3, you have the two data types
bytesand
str. Python 3's
bytesis an 8-bit string like Python 2's
str, and Python 3's
stris a Unicode string like Python 2's
unicode. And most importantly, in both Python versions the string
"hello"is of type
str, which means that under Python 2 it's 8-bit (i. e.
char *) and under Python 3 it's Unicode (i. e.
wchar_t *).
Python's C API functions, such as
PyRun_SimpleStringuse normal
char *for source code. So under Python 2, your code works fine -
"print(42)"is an 8-bit string, gets converted to
char *, which is what
PyRun_SimpleStringwants. Perfect. Under Python 3,
"print(42)"is a Unicode string, which gets converted to
wchar_t *, and then things go wrong. Because
wchar_tis 32 bits wide under iOS, the text
print(42)represented as a
wchar_t *has three null bytes between each character (which would be used if the character had a higher code point in Unicode). Null bytes are also the "end of string" marker in C. Python reads the start of the
wchar_t *string, but expects a
char *- it sees a
p, then a null byte, and thinks "great, I'm done" and so it just runs
pinstead of
print(42).
Thanks a lot @dgelessus ! One cannot ask for a better answer! | https://forum.omz-software.com/topic/3288/ctypes-pythonapi-version/3 | CC-MAIN-2021-49 | refinedweb | 753 | 73.47 |
There are some samples.
1.Open Quote sample application. Change general property "Compiler Collection"
on Sun Compiler collection. Build application. Set main function breakpoint.
Debug project. External Debugging window immediately closed. From debugger
buttons only "Finish Debugger Session" is enabled.
2. Open IO sample project. Change general property "Compiler Collection" on Sun
Compiler collection. Set breakpoint on 26-th line of main.cc(return string).
Debug project. Debugger stopped at return statement. Try "Step Over" few times.
Debugger goes somewhere deep(see Call Stack) and after one of step all debugger
controls besides "Finish Debugger Session" and "Pause" became disabled.
3.Create New Project(C\C++ Application). Create newmain.cc file with code
int main(int argc , char** argv){
int i;
i=0;
i++;
return i;
}
Change general property "Compiler Collection" on Sun Compiler collection. Set
main function breakpoint. Run debugger. Open Local Variables view. Try to do few
steps(over ar into). i variable does not change its value in local var view.
Please, file separate issues for each case, because they look as different
issues. And make sure you copy-paste debugger log from gdb console - I asked
this many times, and I will repeat this request until each issue will be filed
with debugger log :-)
Also it is important to know which version of Sun Studio compilers was used.
There is a big difference between Sun Studio 11, Sun Studio "Mars" Express 2,
and Sun Studio "Mars" Express 3.
Also it is very important to specify operating system.
Thanks in advance,
Nik
Please correct Platform and OS fields. They cannot be "All" because
Sun Compilers don't run on all supported platforms. If the same
problem exists on both Solaris and Linux, then pick one and mention
the other in a comment (or if the problems are different on each
system file a second IZ).
Nik, sure these are different issues. I combined it into one to show that Sun
compilers collection is supported by gdb-lite very poor. Each of the examples
could be P1 if it were reproduced for default compiler collection(GNU), but now
I really do not know which priority have these bugs. It depends on decision
about words in release notes. Will be there said about non-supporting Sun
compilers by gdb-lite or no?
And what do you mean under "debugger log from gdb console"? Where I can find gdb
log in our IDE now?
as for platforms:
1-st and 3-rd problems are common for intel-Solaris(sqao43) and Linux
FC3(sqao35). 2-nd is for intel-solaris.Sun compilers were used from
/set/mars/dist/build35.0/
Alexander, thank you for additional info!
First of all, here is how to get back gdb console:
netbeans -J-Dgdb.console.window=true
Second, all these issues with Sun Studio compilers are caused by debugging
information, generated by compilers. This information is not correct or gdb
does not understand it properly. I suggest to file separate P3 issues.
There is nothink to fix in "gdb-lite" module itself, but we will use these
issues to file bugs against Sun Studio compilers and to track the bug fixing
process.
I changed the priority from P2 to P1 because we plan to support Sun Studio
compilers in release 5.5.1, and right now it is my highest priority task.
The problem with Quote is also related to the dwarf output.
If Quote is built with Sun Studio compilers, gdb cannot set
a breakpoint in "main" function:
bash-3.00$ gdb --i mi dist/Debug/Sun-Solaris-x86/quote1
~"GNU gdb 6.3.50_2004-11-23-cvs\n"
~"
~"This GDB was configured as \"i386-pc-solaris2.11\"..."
~"\n"
(gdb)
-break-insert main
&"Die: DW_TAG_<unknown> (abbrev = 22, offset = 7100)\n"
&"\thas children: TRUE\n"
&"\tattributes:\n"
&"\t\tDW_AT_name (DW_FORM_string) string: \"basic_ostream\"\n"
&"\t\tDW_AT_<unknown> (DW_FORM_string) string: \"nNbasic_ostream3CTACTB_\"\n"
&"\t\tDW_AT_decl_file (DW_FORM_data1) constant: 8\n"
&"\t\tDW_AT_decl_line (DW_FORM_data1) constant: 73\n"
&"Dwarf Error: Cannot find type of die [in module
/export/home/nikm/azureus_src/NB_Projects/Quote1/dist/Debug/Sun-Solaris-x86/quote1]\n"
^error,msg="Dwarf Error: Cannot find type of die [in module
/export/home/nikm/azureus_src/NB_Projects/Quote1/dist/Debug/Sun-Solaris-x86/quote1]"
(gdb)
As a result, "gdb-lite" module cannot start debugging.
Though, there is something strange with this application,
because I don't see such problems with other C++ applications.
The problem with Quote project seems to be caused by "endl".
I created a small test cases, which can be used to reproduce the problem:
-----------------------------------------------------------
#include <iostream>
using namespace std;
int main(int argc, char* argv[])
{
// THIS LINE CAUSES THE PROBLEM DESCRIBED IN IZ 89876
cout<<"Support metric quote program"<<endl<<endl;
return 0;
}
-----------------------------------------------------------
If I select this project and do "Step into Project", debugger cannot
stop in main, and the debugging session exits.
If I comment out "<<endl<<endl" - everything works just fine.
Instead of commenting out "<<endl" I can redefine it, and this also helps:
#define endl "\n"
On Solaris SPARC we can specify the following compiler options:
1. For Sun Studio C compiler: -Wc,-h_gcc -xO0 -g
2. For Sun Studio C++ compiler: -Qoption cg -h_gcc -xO0 -g
This options do not fix all known problems, but they fix some of them:
a) stop at the beginning of function (now prolog presents in dwarf)
b) local variables are visible and have correct values
These options tell Sun Studio compilers to use "cg" instead of "yabe",
and they also tell "cg" to generate a "more gdb compatible" dwarf output.
But, again, these options are for Solaris SPARC only.
Deferred to NetBeans 6. It is possible that planned changes in the Sun Studio
compilers would allow us to improve behavior sooner, so the plan is to re-test
with each new release of the Sun Studio compilers.
I cannot debug any C++ program (Welcome, HelloApp and etc.) which was compiled
by Sun C++ compiler on sqao43.
CC: /set/mars/dist/intel-S2/bin/CC (2007/04/10)
gdb: /set/sqe/tools/gdb/6.4/intel-S2/s10/bin/gdb
Machine: i86pc
System: SunOS
Release: 5.10
bash-3.00$ gdb ./dist/Debug/welcome.10"..."/export/home/tester/Welcome_4/dist/Debug/welcome": not
in executable format: File format not recognized
(gdb)
No sense keeping this open since its a compiler issue, not a debugger one.
*** Issue 121776 has been marked as a duplicate of this issue. *** | https://netbeans.org/bugzilla/show_bug.cgi?id=89876 | CC-MAIN-2015-35 | refinedweb | 1,068 | 56.66 |
Bloomberg is hosting a gathering of developers, students, and others from around
the Clang/LLVM community to spend a weekend learning how to work on Clang, LLVM, and other projects in the LLVM ecosystem. This event is intended to help
new community members get started learning how to contribute, how to work on the code, and get their first patches written and submitted.
Bloomberg is providing the space, food, beverages, travel/lodging for mentors, and the organization of the event, so attendees only need to bring their
laptops (with power adapters!) and a willingness to learn and contribute. At both locations there will be experienced mentors on hand to provide guidance.
The Google Group (mailing list) clang-llvm-sprint-2016-bloomberg@googlegroups.com
can be used to discuss issues of general interest related to the weekend.
Kevin Fleming and Henry Kleynhans are organizing this event, in New York and London, respectively. They can be contacted through the mailing list or via the EventBrite
pages (linked below).
Attendees at this event, and anyone else who wants to chat with them, should use channel #bbgweekend on irc.oftc.net.
If you want to talk about this event on Twitter, please tag TechAtBloomberg (@techatbloomberg).
The event is using a Trello board to track projects and the teams that are working on them. The board can be found
here.
A virtual machine image is available, based on Ubuntu Linux 14.04, that you can use to get your laptop ready for the event quickly. You'll need a virtualization
package (hypervisor); the image was created using VirtualBox 5, but has been tested with VMware and other hypervisors. Since it is a generic image, when you import
it you'll need to ensure that you supply the VM with adequate CPU and memory resources; while you can build LLVM with only one CPU and 4GB of memory, it will be slow
enough to be inconvenient.
New York event attendees should download the image here.
London event attendees should download the image here.
When you boot the image into a machine, you can log in with user 'llvm', password 'llvm'. As is typical with Ubuntu systems, this user has sudo privileges, so you
can install additional software or make configuration changes.
Note: if you use VMware to load this image, you may receive a warning about the image being non-compliant with the Open Virtualization specification; allow the
import to proceed anyway, and you'll have a working VM.
Civic Hall. Please review the menu in advance, and if for any reason the options available
will not meet your requirements, let us know, but be prepared to make alternate arrangements for meals. There are a large number of places to obtain food (from delis to
sandwich shops to sit-down restaurants) in the area around Civic Hall, so you should be able to find something compatible.
CodeNode. Please review the menu in advance, and if for any reason the options available
will not meet your requirements, let us know, but be prepared to make alternate arrangements for meals. On Saturdays there are some shops open in the area around CodeNode,
but none of them are open on Sundays, so you'd need to travel some distance away to find a place to purchase food. | https://llvm.org/devmtg/2016-02/ | CC-MAIN-2021-17 | refinedweb | 548 | 59.23 |
GNU
2018-04-30
Aliases: getcwd(2), getcwd(2), getcwd(2), getcwd(2), getcwd(2), getcwd(2), getcwd(2), getcwd(2), getcwd(2), getcwd(2), getwd(3), getwd(3), getwd(3), getwd(3), getwd(3), getwd(3), getwd(3), getwd(3), getwd(3), getwd(3), getwd(3), getwd)
NAME
getcwd, getwd, get_current_dir_name - get current working directory
SYNOPSIS
#include <unistd.h>
char *getcwd(char *buf, size_t size);
char *getwd(char *buf);
char *get_current_dir_name(void);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
get_current_dir_name():
_GNU_SOURCE
getwd():
DESCRIPTION.
ERRORS
ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO.
BUGS. | https://reposcope.com/man/en/3/getcwd | CC-MAIN-2021-17 | refinedweb | 104 | 54.56 |
Here is the script.(Please note most of the code is probably wrong, I'm still new to C#. I can figure all of that out on my own later, except for the problem I'm posting about)
using UnityEngine;
using System.Collections;
public class Flicker : MonoBehaviour {
// Use this for initialization
void Start () {
transform.light.intensity = 0.39;
}
}
// Update is called once per frame
void Update () {
waitForSeconds(1);
transform.light.intensity = 0.00;
waitForSeconds(0.3);
transform.light.intensity = 0.39;
}
On the first line is says: Unexpected symbol 'void'
I think there's something wrong with waitForSeconds(1);. I'm not that good too but I think it should be new WaitForSeconds(1f); Practice yourself putting f if it's float in C#.
waitForSeconds(1);
new WaitForSeconds(1f);
Answer by robertbu
·
Jan 16, 2014 at 05:04 AM
There are several issues here. The one you are struggling with is caused by line 12. This '}' is closing off the 'class', so all the lines beyond are being treated as outside the class. You can fix this one by moving it from line 12 to line 22. The next problem is that in C#, you have to have an 'f' after any floats since floating point numbers are doubles by default in C#, but standard Unity functions all use floats.
The next problem is not easily solved. You cannot yield in Update() nor do you yield in the way you've tried here (which is what you are trying to do with WaitForSeconds). That is true of Javascript as well. You need to study how to do Coroutines in C#:
Also C# and Javascript are case sensitive, and 'waitForSeconds' should be 'WaitFor.
Unexpected Symbol '_toon = new PlayerChar'
1
Answer
Enexpected Symbol Void (start)
1
Answer
Multiple Cars not working
1
Answer
Distribute terrain in zones
3
Answers
CS1525: 60,68 Unexpected symbol MatchMaxPlayers
1
Answer | https://answers.unity.com/questions/617938/unexpected-symbol-void-c.html | CC-MAIN-2019-39 | refinedweb | 315 | 62.68 |
Opened 20 months ago
Last modified 20 months ago
#5830 new change
Provide access to globals from filters
Description (last modified by kvas)
Background
I can't define a global in globals and access it in filters. It's also not possible to access globals from other globals, which leads to code duplication sometimes.
What to change
I want to be able to access globals from filters and other globals. Example (only showing filter usage):
globals/example.py
example = 'example'
filters/examplify
def examplify(str): return '%s (%s)' % (str, example)
includes/example
<h1>{{ "My test" | examplify }}</h1>
output
<h1>My test (example)</h1>
Change History (4)
comment:1 Changed 20 months ago by juliandoucette
Last edited 20 months ago by juliandoucette (previous) (diff)
comment:2 follow-up: ↓ 3 Changed 20 months ago by kvas
What do you think about globals being accessible from other globals? It seems to me like this would be a reasonable piece of functionality to add to this ticket, and it doesn't seem to make it much harder, so I'd be interested to know if you find it a useful addition?
comment:3 in reply to: ↑ 2 Changed 20 months ago by juliandoucette
comment:4 Changed 20 months ago by kvas
Note: See TracTickets for help on using tickets.
Real world example: The language selector on the new abp.org contains a location code after the language name e.g. "English (US)". We already have a filter called to_og_locale which matches crowdin language codes to ISO locale codes (which include location codes). If I could define og_locales as a global then I could write a filter to provide *just* a location code (instead of a full locale) from a crowdin language. e.g.
(I used two filters to solve this problem today e.g. locale | to_og_locale | to_og_location where to_og_location just blindly returns the split('_')[-1] of what it was given.) | https://issues.adblockplus.org/ticket/5830 | CC-MAIN-2019-22 | refinedweb | 317 | 56.89 |
We finished Chapter 1 by building a parallel dataframe computation over a directory of CSV files using
dask.delayed. In this section we use
dask.dataframe to automatically build similiar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers
dask.delayed.
In this notebook we use the same airline data as before, but now rather than write for-loops we let
dask.dataframe construct our computations for us. The
dask.dataframe.read_csv function can take a globstring like
"data/nycflights/*.csv" and build parallel computations on all of our data at once.
dask.dataframe¶
Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but
dask.dataframe will scale to datasets much larger than memory.
The
dask.dataframe module implements a blocked parallel
DataFrame object that mimics a large subset of the Pandas
DataFrame. One Dask
DataFrame is comprised of many in-memory pandas
DataFrames separated along the index. One operation on a Dask
DataFrame triggers many pandas operations on the constituent pandas
DataFrames in a way that is mindful of potential parallelism and memory constraints.
Related Documentation
Main Take-aways
%run prep.py -d flights
from dask.distributed import Client client = Client(n_workers=4)
We create artifical data.
from prep import accounts_csvs accounts_csvs() import os import dask filename = os.path.join('data', 'accounts.*.csv') filename
Filename includes a glob pattern
*, so all files in the path matching that pattern will be read into the same Dask DataFrame.
import dask.dataframe as dd df = dd.read_csv(filename) df.head()
# load and count number of rows len(df)
What happened here?
len()applied to it
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'), parse_dates={'Date': [0, 1, 2]})
Notice that the respresentation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and dtypes.
df
We can view the start and end of the data
df.head()
df.tail() # this fails
Unlike
pandas.read_csv which reads in the entire file before inferring datatypes,
dask.dataframe.read_csv only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions.
In this case, the datatypes inferred in the sample are incorrect. The first
n rows have no value for
CRSElapsedTime (which pandas infers as a
float), and later on turn out to be strings (
object dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:
dtypekeyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.
samplekeyword (in bytes)
assume_missingto make
daskassume that columns inferred to be
int(which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.
In our case we'll use the first option and directly specify the
dtypes of the offending columns.
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'), parse_dates={'Date': [0, 1, 2]}, dtype={'TailNum': str, 'CRSElapsedTime': float, 'Cancelled': bool})
df.tail() # now works
dask.dataframe¶
We compute the maximum of the
DepDelay column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums
maxes = [] for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)
We could wrap that
pd.read_csv with
dask.delayed so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (
max of the intermediate maxes). This is just noise around the real task, which pandas solves with
df = pd.read_csv(filename, dtype=dtype) df.DepDelay.max()
dask.dataframe lets us write pandas-like code, that operates on larger than memory datasets in parallel.
%time df.DepDelay.max().compute()
This writes the delayed computation for us and then runs it.
Some things to note:
dask.delayed, we need to call
.compute()when we're done. Up until this point everything is lazy.
As with
Delayed objects, you can view the underlying task graph using the
.visualize method:
# notice the parallelism df.DepDelay.max().visualize()
In this section we do a few
dask.dataframe computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call
compute.
If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
# Your code here
len(df)
With pandas, you would use boolean indexing.
# Your code here
len(df[~df.Cancelled])
# Your code here
df[~df.Cancelled].groupby('Origin').Origin.count().compute()
# Your code here
df.groupby("Origin").DepDelay.mean().compute()
# Your code here
df.groupby("DayOfWeek").DepDelay.mean().compute()
When computing all of the above, we sometimes did the same operation more than once. For most operations,
dask.dataframe hashes the arguments, allowing duplicate computations to be shared, and only computed once.
For example, lets compute the mean and standard deviation for departure delay of all non-canceled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.
If we compute them with two calls to compute, there is no sharing of intermediate computations.
non_cancelled = df[~df.Cancelled] mean_delay = non_cancelled.DepDelay.mean() std_delay = non_cancelled.DepDelay.std()
%%time mean_delay_res = mean_delay.compute() std_delay_res = std_delay.compute()
But lets try by passing both to a single
compute call.
%%time mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
Using
dask.compute takes roughly 1/2 the time. This is because the task graphs for both results are merged when calling
dask.compute, allowing shared operations to only be done once instead of twice. In particular, using
dask.compute only does the following once:
read_csv
df[~df.Cancelled])
sum,
count)
To see what the merged task graphs between multiple results look like (and what's shared), you can use the
dask.visualize function (we might want to use
filename='graph.pdf' to zoom in on the graph better):
dask.visualize(mean_delay, std_delay)
Pandas is more mature and fully featured than
dask.dataframe. If your data fits in memory then you should use Pandas. The
dask.dataframe module gives you a limited
pandas experience when you operate on datasets that don't fit comfortably in memory.
During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory. This dataset is small enough that you would normally use Pandas.
We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded
MemoryError: ...
Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as
read_csv, where the data is held on storage accessible to every worker node (e.g., amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.
Dask.dataframe operations use
pandas operations internally. Generally they run at about the same speed except in the following two cases:
groupbyin the next version)
dask.dataframecan call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup.
For the most part, a Dask DataFrame feels like a pandas DataFrame. So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in Schedulers). This lets Dask do operations in parallel and out of core.
In Dask Arrays, we saw that a
dask.array was composed of many NumPy arrays, chunked along one or more dimensions.
It's similar for
dask.dataframe: a Dask DataFrame is composed of many pandas DataFrames. For
dask.dataframe the chunking happens only along the index.
We call each chunk a partition, and the upper / lower bounds are divisions. Dask can store information about the divisions. For now, partitions come up when you write custom functions to apply to Dask DataFrames
crs_dep_time = df.CRSDepTime.head(10) crs_dep_time
To convert these to timestamps of scheduled departure time, we need to convert these integers into
pd.Timedelta objects, and then combine them with the
Date column.
In pandas we'd do this using the
pd.to_timedelta function, and a bit of arithmetic:
import pandas as pd # Get the first 10 dates to complement our `crs_dep_time` date = df.Date.head(10) # Get hours as an integer, convert to a timedelta hours = crs_dep_time // 100 hours_timedelta = pd.to_timedelta(hours, unit='h') # Get minutes as an integer, convert to a timedelta minutes = crs_dep_time % 100 minutes_timedelta = pd.to_timedelta(minutes, unit='m') # Apply the timedeltas to offset the dates by the departure time departure_timestamp = date + hours_timedelta + minutes_timedelta departure_timestamp
We could swap out
pd.to_timedelta for
dd.to_timedelta and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a
dd.to_timedelta that works on Dask DataFrames. What would you do then?
dask.dataframe provides a few methods to make applying custom functions to Dask DataFrames easier:
map_partitions
map_overlap
reduction
Here we'll just be discussing
map_partitions, which we can use to implement
to_timedelta on our own:
# Look at the docs for `map_partitions` help(df.CRSDepTime.map_partitions)
The basic idea is to apply a function that operates on a DataFrame to each partition.
In this case, we'll apply
pd.to_timedelta.
hours = df.CRSDepTime // 100 # hours_timedelta = pd.to_timedelta(hours, unit='h') hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h') minutes = df.CRSDepTime % 100 # minutes_timedelta = pd.to_timedelta(minutes, unit='m') minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m') departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
def compute_departure_timestamp(df): pass # TODO: implement this
departure_timestamp = df.map_partitions(compute_departure_timestamp) departure_timestamp.head()
def compute_departure_timestamp(df): hours = df.CRSDepTime // 100 hours_timedelta = pd.to_timedelta(hours, unit='h') minutes = df.CRSDepTime % 100 minutes_timedelta = pd.to_timedelta(minutes, unit='m') return df.Date + hours_timedelta + minutes_timedelta departure_timestamp = df.map_partitions(compute_departure_timestamp) departure_timestamp.head()
Dask.dataframe only covers a small but well-used portion of the Pandas API. This limitation is for two reasons:
Additionally, some important operations like
set_index work, but are slower
than in Pandas because they include substantial shuffling of data, and may write out to disk.
client.shutdown() | https://nbviewer.jupyter.org/github/dask/dask-tutorial/blob/master/04_dataframe.ipynb | CC-MAIN-2020-40 | refinedweb | 1,831 | 51.55 |
One powerful feature of Windows which can get overlooked is the COM method of accessing other programs. This defines the standard application binary interface so that programs can communicate and use each other almost as you would a library. The interface is language neutral and can be used from Python just as well as any other language.
COM objects use the client-server model. The program being accessed, or library in the above analogy, is the server and the program doing the calling is the client. Most scripts are clients which use various COM servers already installed on the computer (or other computers on the network). To get a reference to the COM server you use the Dispatch method.
For example I can use COM to start Microsoft Word (assuming I have Word installed on the computer) and open a document with the following script
import win32com.client app = win32com.client.Dispatch("Word.Application") doc = app.Documents.Open("file.doc") app.Visible = True
Four lines of code and I have a word processor! The strength of COM is also its weakness, it only defines how the programs talk; it doesn’t define the methods or properties, what is called the application programming interface. That is left up to the server.
So how did I know about the Documents.Open method or the Visible property? In an ideal world the API would be fully documented and available. In reality there is little or no documents or it is not available or it is difficult to make sense of. If the COM server has been registered in a standard way you can use a COM browser to gain information.
A basic browser comes with pywin32. A more powerful browser, oleview, can be downloaded from Microsoft as part of the Windows Server 2003 Resource Kit Tools. If working with Microsoft Office COM objects, my preferred browser is actually the Object Browser that comes with VBA. Just press Alt + F11 from Word and then F2. I find this has the cleanest interface.
The last way of making a start with a COM interface is to simply dump everything that can be found out about the API into a file and use that as a starting point. You can then use this information with the interactive prompt to query the COM object and see what it returns. The program do this is makepy.py which can be found in the win32client\client directory. I’ll hopefully cover using makepy in another article. | https://quackajack.wordpress.com/tag/word/ | CC-MAIN-2018-47 | refinedweb | 417 | 65.42 |
It’s one thing to go on record saying I wouldn’t be creating an iPad-specific version of Flower Garden, and another seeing the iPhone version running on an iPad with all the ugly jaggies and huge, pixelated text. So when it came time to do a new update, I decided to at least take advantage of the iPad capabilities to make the existing app prettier by making Flower Garden into a universal app. It’s now available on the App Store, so go get it and check it out!
Surprisingly, there wasn’t that much documentation on how to go about making a universal iPhone/iPad version. There’s a document from Apple showing some initial steps, but that’s about it. So I wanted to share what I learned along the way, including some very useful tips I learned through trial and error or by doing a lot of digging through Twitter and the development forums.
Flower Garden is a strange hybrid of OpenGL and UIKit, which made things more complicated than if it were just one or the other. Learning curve and all, it took two full days of my time. If you’re using just OpenGL or just UIKit, the conversion to a universal app will be significantly faster.
Universal Project
The first thing you need to do is understand what’s going on with all the SDK versions. At this time, the latest iPhone OS version is 3.1.3, but you’re going to be developing the universal app with SDK 3.2. You want the resulting binary to run on both 3.X OS on iPhones and 3.2 OS on iPad. It’s a very similar situation to when we were writing apps that worked on both 3.X and 2.X.
So fire up XCode 3.2.2 (yes, all those version numbers start getting very confusing–that’s the XCode version that comes with SDK 3.2), and load your iPhone project you want to make into a universal one. If you look under the Project menu item, you’ll see an entry called “Upgrade Current Target For iPad”. The Apple documentation even warns you not to create a universal app in any other way.
Go ahead and use it if you want. It will work… as long as you have a single target you want to convert. For some inexplicable reason [1], it will only work once per project (even though it’s phrased as working with whatever the current target is). When would you have more than one target executable? If you have a free and a paid version for example.
Besides, I’m uncomfortable with automated “smart” tools that do things behind my back without me knowing exactly what’s going on, so I upgraded by hand by looking at the diffs of what the upgrade command did. It turns out it’s extremely simple.
- Under the project properties, set your base SDK to 3.2.
- Set Architectures to “Optimized (arm6, arm7)”
- Uncheck “Build Active Architectures Only”
- In the Deployment section, set it to SDK 3.1.3 (or whatever 3.X you want to target)
That’s it, really. That’s all you need to compile and run your app both on an iPhone and an iPad. When you build for the device, it will compile both arm6 and arm7 versions and create a combined fat executable with both. And yes, this means the size of your executable is going to double (which could be an issue if you’re near the 20MB limit). In the case of Flower Garden, the combined executable is 3.4MB, so that’s not a big deal.
iPad Functionality
What about that new xib file that the upgrade process creates? You don’t really need it. It’s there in case you want to have a different set of xibs for each platform, and it’s hooked up to the info file so the app knows to load it at startup.
For the universal version of Flower Garden, I wasn’t trying to make a whole new brand experience on the iPad. Instead, I was looking for a quick and easy way to make it look much better. Because of it, I decided to reuse the same xib files as for the iPhone version.
That meant I had to go in Interface Builder, and adjust the autosize properties for a lot of the views. Some of them I wanted to stretch and get bigger with the high resolution iPad screen (background views). Others, I wanted to leave at the same size, but remain at the same relative distance from a particular border (buttons). Some other ones, I left the same size and their position just scaled up with the resolution (info boxes). All in all, that was the most time-consuming part of the process. It also required a few tweaks here and there to support the resolution change correctly.
The other big change was updating the resolution of the 3D views. Fortunately, that was just worked without a single line change. The render target code I’m using, takes the view dimensions and creates a frame buffer of the correct size. And it’s not just the dimensions: Remember that the iPad has a different aspect ratio (grumble, grumble), so you need to make sure your perspective and orthographic projections take that into account.
The only other changes I had to make was supporting an upside-down portrait orientation. That was very easy because the frame buffer didn’t have to change. Make sure you support it both at launch time and during gameplay. If you have a root controller whose view is attached directly to the main window from the start, it’s as easy as adding this to your controller:
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return (interfaceOrientation == UIInterfaceOrientationPortrait || interfaceOrientation == UIInterfaceOrientationPortraitUpsideDown); }
And make sure you add the supported orientations to your plist.
Finally, before you submit it to the App Store, you’ll need an iPad-specific icon. The documentation explains how to explicitly list all the icons in the info file, but at the very least, you can provide a file called Icon-72.png that is a 72×72 image and you’re done.
Running On The Simulator
This is one that should be a lot easier than it is. You’re creating a universal binary with SDK 3.2. Now you want to run it on the simulator. No problem, you run it as usual and you get the iPad simulator. You can debug it and do everything you normally do.
Now you want to run it on the simulator on iPhone mode. It turns out, that’s not so obvious.
You can’t just turn the simulator hardware setting to iPhone, because whenever you launch it from XCode it will override that again with the iPad one. Building and running on the 3.1.3 SDK is a no-no because you’re really running a different build than you’re going to be submitting (3.2) and all 3.2 SDK features you’re using are going to result in compile errors.
So after much searching and tweeting, here’s the solution:
- Build for 3.2 SDK on the simulator platform.
- Change project drop-down to 3.1.3 SDK
- Launch with debugger (Cmd + Options + Y). Don’t build and run!
That will launch the simulator on iPhone mode, but still run your 3.2 build. Talk about unintuitive! [2]. The worst thing is, when you’re done, you need to switch the project to 3.2 again or you’ll get tons of compiler errors. That definitely has to go for SDK 4.0.
Here’s an invaluable tip. Maybe it’s obvious to long-term Mac users, but it baffled me for a while. The iPad simulator is very well thought out, and it has a 100% and a 50% mode. That makes it possible to use the simulator on screens without very high resolution. Even my external monitor at 1680×1050 can’t display the iPad simulator at full 1024×768 resolution in portrait mode because of the window borders.
The problem comes in when you want to take a screenshot to add to the App Store or anything else. On the iPhone simulator it was easy, but how do you do that with the iPad since you can’t fit it on the screen? Cmd + Ctrl + C will take a screenshot at full resolution even when running at 50% mode and add it to the clipboard! Switch over to Preview and select New from Clipboard and voila! Full resolution screenshot!
SDK 3.2 Features
Chances are that even if you do a simple iPad port, you’re going to end up using a few 3.2 features. The main one I used in the case of Flower Garden is a UIPopoverController. Here you should follow the steps outlined in the Apple documentation down to a T. But it comes down to this: If a symbol that is defined on 3.2 appears anywhere, even as a member pointer variable in a class, it will compile and run on the iPad fine, but will crash and burn on the iPhone.
So you need to both check that the 3.2 features are available, and you need to “hide” the new symbol so it never appears anywhere: Use an id variable and cast it based on the class info. Even casting it directly to the symbol you want will cause a crash.
Class classPopoverController = NSClassFromString(@"UIPopoverController"); if (classPopoverController) { m_moreGamesPopover = [[classPopoverController alloc] initWithContentViewController:moreGames]; [m_moreGamesPopover setDelegate:self]; [m_moreGamesPopover setPopoverContentSize:CGSizeMake(320, moreGamesHeight) animated:NO]; [m_moreGamesPopover presentPopoverFromRect:m_moreGamesButton.frame inView:self.view permittedArrowDirections:UIPopoverArrowDirectionUp animated:YES]; }
Not pretty, uh? It works, but I couldn’t imagine doing this all over the place.
Gotchas
In the process of creating a universal app, I found a couple more things to watch out for.
When you submit the new binary through iTunes Connect and it’s accepted, the application status will change to something like “Missing screenshot”. If you go back and edit the app information, you’ll see there’s a new set of screenshots you can submit. You’ll need at least one for your application to enter “Waiting for Review” state. And if you have localized descriptions of your app, you’ll need to do that for every language.
Last, and perhaps most puzzling because I never figured this one out: I was not able to get in-app purchases to work with a test account from a development version of a universal app running on the iPad. I must have tried everything, but whenever I tried purchasing anything, it never brought up the familiar “sandbox environment” message. Also, items that hadn’t been approved yet did not show up in the list of available items. The exact same code worked fine on an iPhone, so that’s quite puzzling. Is it a major bug on Apple’s part, or did I miss some obscure “enable IAPs in debug mode” checkbox somewhere? It was a bit of a gamble submitting it like that, but fortunately, the approved version on the App Store allows in-app purchases without any problem.
I thought afterwards that maybe that was because the app version on the App Store was no universal, so the Store Kit server was not allowing the iPad version to access the store through the test account. But I tried it again even after it was approved and I had the same problem.
Has anybody managed to use an App Store test account from an iPad on a universal app?
Conclusion
All in all, it was a relatively painless process considering it’s different hardware, with a different resolution, and it’s the first iteration of the SDK. Was it worth it? I think so. Apart from looking a lot better on an iPad, universal apps get ranked on both iPhone and iPad charts. Flower Garden managed to get all the way up to #14 on the free games chart on the iPad (and around #60 free app overall on the iPad). I’m sure it got some exposure from being so high up, which in turn helped the iPhone rankings as well.
I can’t imagine that my next game is going to be iPad-only, but I’ll certainly have the iPad in mind from the beginning and release it on both platforms. Whether I choose to go universal or separate apps will depend on the game and business decisions, but at least it’s good to know that it’s fairly easy to create a universal app.
[1] Actually, there’s a pattern that is clear by looking at the hoops we have to jump to do a universal build: Apple was clearly scrambling to get this out the door. As a result, things are buggy, unintuitive, and clunky. Hopefully all that will be fixed for SDK 4.0.
[2] See? That confirms point [1].
Haven’t got my iPad yet, but this whole experience of yours will come in very useful. Thank you very much indeed 🙂
Quick note that I found quite useful: UIInterfaceOrientationIsPortrait(interfaceOrientation)
Noel,
I downloaded the software on my ipad for mother’s day and I must say it’s awesome!! It is so ipad-ready I really enjoyed the experience.
How did you get the email to add your icon and other text *after* the image?
shul
Noel, I downloaded the software on my ipad for mother’s day and I must say it’s awesome!! It is so ipad-ready I really enjoyed the experience.
+1
Great post! With WWDC in June, do you expect the 4.0 SDK to roll out before July?
Thanks for the article, very interesting. I was particularly excited to read your solution to the “test in iPhone simulator” conundrum, since I had asked that very question on the Apple forums a few days ago, and was told in no uncertain terms that it was impossible. In fact, I went back there with your tip, and received an admonition from an Apple engineer: “You are asking for obscure, hard-to-track crashes by doing that. If you want to test a Universal app on an iPhone, use real hardware. Period.”
So… be warned.
Thanks everybody. Glad you’re enjoying Flower Garden on the iPad.
Shul, adding the image in the middle of the email requires uploading the image to my server and creating an HTML body that has an img tag pointing to it. As far as I can tell (and I’ve looked all over the place), you can’t reference an attached image in the body.
@Noel Ah! That explains it then, thanks! 🙂
p.s. I looked everywhere as well. For some reason adding html is possible and attaching an image is possible, but combining the two in a way that will show the image and then the text is not..
For what it’s worth, for the Lorax Garden, I ended up just attaching the image. It’s not as nicely integrated because it shows up at the bottom of the email (and it gets resized for you whether you want it or not), but it’s a lot simpler.
That’s an example of an API that the high-level is very convenient, but I really wish Apple would expose some lower-level functionality. Actually, that goes for most APIs 🙂
Thanks for the great info! I have also found the following post from Jeff LaMarche that shows you how to setup different main XIB and different key value pairs in the info.plist and detect iPhone/iPad at run time.
Very valuable info! I was thinking about buying an ipad and this definitely helps my decision. Your guide was very helpful and i learned a lot from it. Thanks for the great article! keep it up!
I also had to set “Targeted Device Family”. That steps seems to be missing here… | http://gamesfromwithin.com/going-the-ipad-way-all-you-wanted-to-know-about-creating-universal-apps | CC-MAIN-2018-47 | refinedweb | 2,685 | 72.56 |
Hello guys and girls.😀 Hope that you're fine! Ready for this day of script! Yes? Good! So, don't waste our time. Let's talk about today script.
An application to handle parking places?
When I go shopping at the supermarket (although it is not always the case), I am fascinated by the way parking keeps cars. So I said to myself: why not write together a little script that allows you to simply manage a parking lot. In this script, let's reduce the number of available parking spaces (12 places). By the way, it's necessary to be familiar with some knowledges such as:
- loop
- dictionaries
- lists
- exceptions
Well, now let's get down to business ...
Script context
What we want firstly is that when a driver want to park his car in parking, the program gives to driver a ticket which indicate where he must park. It will also be necessary to know the places available and unavailable and sometimes the number of cars parked by the caretaker. So, let's go!
Start by create a folder called parking.py.
All our script will be write in this file.
Let's start by define our variables.
from random import randint # let's define variables total_parking_place = 12 # initial parking places number choosen. available_places = [] # register all availables places unavailable_places = [] # register all unavailables places parked_car_list = [] # register all parked cars at the moment relation_car_parking_place = {} # store car and place number in which it's parked parked_car_number = 0 # count all parked cars
As we said, let's say our parking has 12 seats in total, but you can change it in your script. Let's code now our functions! At first, we must initialize the parking spaces.
def reset_place(concerned_list, place_number): """ Reset all place in initial list """ for place in range(1, place_number+1): concerned_list.append(place)
Then, we also need to make a place be available and unavailable if necessary. To do this, we'll define two functions.
def make_a_place_available(place_number): """ Add a place in available list """ unavailable_places.remove(place_number) available_places.append(place_number) available_places.sort() unavailable_places.sort() print(f'Place n°{place_number} is now available.') def make_a_place_unavailable(place_number): """Remove a place in available list""" unavailable_places.append(place_number) available_places.remove(place_number) available_places.sort() unavailable_places.sort() print(f'Place n°{place_number} is now unavailable.')
It can be useful (for example) to know how many places are still available. So let's define a function that does it.
def know_available_places(): """ Display available places """ if len(available_places) > 1: print(f'Available places are: {available_places}.') elif len(available_places) == 1: print(f'Only place n°{available_places} is available.') else: print('All places are unavailable.')
Alright. We can now make that a place will be available or not and know how many places are available. However, it's not enough? Indeed, we have not yet created a process in which a ticket is given to the driver to indicate the number of place where he can park his car. To define the process, think of something. According to you, how do drivers park their cars in a car park:
- option 1: they park their vehicles by occupying the places in order (square 1, square 2, square 3, etc.)
- option 2: they park their vehicles by occupying the places they want ( as long as it's available)
You'll be agree with me if i say that option 2 is what we see in real world. So function we are about to code will generate randomly a available parking place number and return this number.
def choose_available_place(): """ Choose an available place and return it if it's available """ choice = randint(1, total_parking_place) while choice in unavailable_places: choice = randint(1, total_parking_place) return choice
Great work! One last thing before launch the scenario. What we need to register a car and admit it on our parking ? 🤔 For example, we can distinct a car by:
- the mark
- the model
- the color
I grant you, it's not enough, but we want to be simple as we can. For instance, if we need to note for each car the license plate, it can ask user more time, and made program less useful. So, let's use theses three criteria.
What we'll do now is to create a function which take car criteria and admit it to parking.
def add_a_car(car_mark, car_model, car_color): """Register a car with it's caracteristics in parked car list""" the_car = [car_color, car_mark, car_model] choosen_number = choose_available_place() parked_car_list.append(the_car) # link parked car to place where driver parked relation_car_parking_place[choosen_number] = the_car print(f'{car_color} {car_mark} {car_model} added to parking at place n°{choosen_number}') make_a_place_unavailable(choosen_number)
Now, it's time to begin scenario...
Scenario !
1rst thing to do is to fill out available_places list.
reset_place(available_places, total_parking_place)
Then, let's write rest of program.
In our program, our supermarket called Sin houn bou noumun. This in my native language (Attié) and it means The strength is with you (like in Star Wars 💪)!!!
print('******************************') print('Welcome to "Sin hounbou noumun" market.\n') continue_game = True # initialize a boolean which indicate if application continue or stop while continue_game: # program interact with parking guard and display choices correct_choice = False user_choice = '' while not correct_choice: user_choice = input('What do you want to do?\n\ Choose wright number:\n\ 1- give a ticket to a driver (add a car)\n\ 2- make a place available (remove a car)\n\ 3- print availables places list\n\ ') # Let's be sure that user choose a integer, and his choice is in [1:3] try: user_choice = int(user_choice) correct_choice = True if user_choice < 1 or user_choice > 3: correct_choice = False print('Make sure that you choose a number between 1 and 3') else: correct_choice = True except ValueError: print('Sorry, you must choose a number') correct_choice = False if user_choice == 1: # option 1: add a car # Ask car carasteristics to user car_ma = input('Enter the car mark: ') car_mo = input('Enter the car model: ') car_c = input('Enter the car color: ') # Avoid that user inputs not contain car caracteristics while len(car_ma) <= 0 or len(car_mo) <= 0 or len(car_c) <= 0: print('\nMake sure to fill in all the fields.') car_ma = input('Enter the car mark: ') car_mo = input('Enter the car model: ') car_c = input('Enter the car color: ') # If all is good admit car to parking and count it add_a_car(car_ma, car_mo, car_c) parked_car_number += 1 correct_choice = False elif user_choice == 2: # option 2: remove a car print('List of parked cars:') # Display parked cars list for place, car in relation_car_parking_place.items(): print(f'place n°{place}: {car}') '''here, we can add a try...except statement to be sure that user enter correct answer ''' place_wanted = int(input('Choose place you want to remove: ')) place_of_removed_car = 0 if place_wanted in unavailable_places: for place, car in relation_car_parking_place.items(): if place == place_wanted: parked_car_list.remove(car) place_of_removed_car = place esthetic_phrase = ' '.join(car) # Transform list in string print(f'{esthetic_phrase} go out.') del(relation_car_parking_place[place_of_removed_car]) make_a_place_available(place_wanted) else: # tell user that place number he choose is already available print(f'Sorry, {place_wanted} is already available.') else: # option 3: display available places know_available_places() # Ask if parking guard want exit the program exit_question = input('Do you want exit program? (Y/n)') if exit_question.lower() == 'y': continue_game = False print('Bye bye! See you next time!') if parked_car_number == 0: print(f'You didn\'t parked a car.') elif parked_car_number == 1: print(f'You\'ve parked only {parked_car_number} car.') else: print(f'You\'ve parked {parked_car_number} cars')
That's all! You can run script! Our parking places handling application is ready! Here is a screencast about results when i executed script.
This app complete script is on github
Note that you can optimize this script by add additional features such as:
- user authentication (for instance, you can use your knowledge about dict, tuple, list)
- Book seats for VIPs. Suppose some customers are very regular in the supermarket. It would be nice to dedicate to them specific places of sorts that they are always available when these customers are present.
Well, here are some examples of features that can be made. Don't hesitate to improve this script by adding your personal touch. I hope you enjoyed this moment of scripting with me! For my part, it's always a pleasure to see you again on this script project: "Scripting Day"! Go, I wish you a great week and appointment in two weeks for a new script!😎 | https://boanerges.hashnode.dev/create-app-which-handle-parking-places-using-python-cjxj2zv1c001qvhs1e33mt1ew | CC-MAIN-2019-51 | refinedweb | 1,385 | 64 |
ennis SchultzIBM
04 Dec 2003
Read this paper to learn about "Super Helper" classes and how to implement them.
Introduction
IBM® Rational® Functional Tester (formerly known as Rational XDE Tester) provides the ability for you to create your own "super helper" class. This white paper will attempt to answer these questions:
"What Is A Super Helper Class?"
To fully answer this question, we must first discuss object-oriented inheritance, also known as "extension" or "subclassing" in Java.
Java Inheritance
Object-oriented systems allow classes to be defined in terms of other classes. For example, mountain bikes, racing bikes, and tandems are each a type of bicycle, so in object-oriented terminology they are subclasses of the bicycle class. Similarly, the bicycle class is the superclass of mountain bikes, racing bikes, and tandems. This relationship is shown in the following figure.
Subclasses inherit both state and behavior from their superclasses. Using our example, mountain bikes, racing bikes, and tandems share some states such as speed, size, and so on. These common states are captured in the form of variable declarations in the superclass. Common behaviors for bicycles might be braking and changing pedaling speed, for instance. In addition, subclasses can extend the functionality of the superclass by adding their own state(s) and/or behavior(s).
Rational Functional Tester Test Script Inheritance
Now that we understand these relationships, we can discuss Rational Functional Tester test scripts, which are actually subclasses or specializations of a script helper class. Methods in the helper class provide a great deal of capability to the script. These methods give the script the ability to find objects in the object map and execute verification points. The helper class is generated by Rational Functional Tester when you record your script, and is regenerated by Functional Tester as necessary. Since this occurs, you should never modify the helper class by hand, because any user modifications to the helper class will be lost the next time Functional Tester generates it. Each test script has one and only one helper class. Each helper class has one and only one script that extends it.
ALL helper classes must extend the class RationalTestScript. By default, the helper class is a direct subclass of the class RationalTestScript, as shown in the following figure.
Functional Tester provides the ability to insert your own class into this inheritance hierarchy:
A super helper, then, is a class through which you can implement functionality that then becomes available to all Functional Tester scripts whose helper extends the super helper.
"Why Would We Want To Use A Super Helper?"
Let's take a simple example. You may choose to implement an Unexpected Active Window (UAW) handler in your scripts to address the situation where a window or dialog other than the one Functional Tester is attempting to access becomes active. In this state, a UAW handler attempts to dismiss the active window so that Functional Tester can continue. Without the handler, Functional Tester will time out waiting to gain access to the object.
A UAW handler is implemented by overriding the base implementation of the onObjectNotFound handler. The implementation details of this solution are beyond the scope of this paper; however, there is an example of this in the Functional Tester help under User Guide > Advanced Topics > Handling Unexpected Active Windows.
onObjectNotFound
This is all well and good, but to make this functionality available to multiple scripts without using a super helper, you would need to paste this code into every script after it was recorded. Another approach would be to alter the template that Functional Tester uses to generate scripts, so your code is included.
There are many drawbacks to both these approaches, not the least of which is maintenance. If you decided to enhance your handler, you would have to edit the code in each and every script that contains the handler. This is not a trivial task if you have dozens or hundreds of scripts in your datastore!
In contrast, the super helper provides a single place for you to put your handler code. The handler method implementation is then inherited by every helper class that extends your super helper. Since the helper class does not override the method implementation, it is therefore accessible to your script. In the rare instance that you don't want your super helper's method available to your script, or if you want some alternate behavior, you can simply override the method in your script.
"How Do We Implement Our Own Super Helper?"
Let's continue on with a step-by-step example of implementing a UAW handler (an override of the default onObjectNotFound implementation) in a super helper. The steps we will follow are:
Create a Test Folder
The super helper class can be located at the root of our datastore, in a subfolder, or even in an external jar file. In this example, we will create a subfolder called utility in which to store our class.
Create The Super Helper Class
We will use the Eclipse creation wizard to create an empty class.
Type MySuperHelper in the Name box, select the abstract check box, and type com.rational.test.ft.script.RationalTestScript in the Superclass box.
package utilities;
import com.rational.test.ft.script.RationalTestScript;
public abstract class MySuperHelper extends
RationalTestScript {
}
Implement the Exception Handler
Our handler will override the default onObjectNotFound handler in the RationalTestScript class.
import com.rational.test.ft.script.ITestObjectMethodState;
Note -- an example of an onObjectNotFound handler can be found in the Functional Tester help files under Advanced Topics > Handling Unexpected Active Windows
package utilities;
import com.rational.test.ft.script.RationalTestScript;
import com.rational.test.ft.script.ITestObjectMethodState;
public abstract class MySuperHelper extends
RationalTestScript {
/**
* Overrides the base implementation of
* onObjectNotFound. Whenever this event
* occurs, look through all the active domains
* (places where objects might be found).
* For HTML domains (Java and other domains
* are skipped) finds all the top objects.
* If the top object is an Html Dialog,
* types an Enter key to dismiss the dialog.
* Logs a warning when this happens. */
public void onObjectNotFound(
ITestObjectMethodState TestObjectMethodState) {
// Implementation of event handler
// goes here.
}
}
Extend the Super Helper with Our Script's Helper
The script's helper class needs to be altered to extend the new super helper instead of the RationalTestScript class. You may be tempted to browse to the helper and edit it by hand. Resist the temptation! The helper class is managed by Functional Tester and your changes will be overwritten if you hand-edit the helper. Instead, follow the procedure below.
Make Our Super Helper the Default for New Scripts
The preceding procedure only changed a single script's inheritance hierarchy to use the new super helper. Functional Tester has a global setting that allows you to make this configuration the default for all newly created scripts.
A Note on Configuration Management
Your super helper will become an important, shared asset in your testing project. It is therefore imperative that you are able to store it in a robust repository, track and control new versions as they are created, and be able to retrieve old versions when necessary. Rational Functional Tester provides a robust, scalable interface to Rational ClearCase. The use of Rational ClearCase with Functional Tester is beyond the scope of this paper, but it is highly recommended that you manage your super helper class -- along with all your other test assets -- in ClearCase.
About the author
Biography to be posted
Rate this page
Please take a moment to complete this form to help us better serve you.
Did the information help you to achieve your goal?
Please provide us with comments to help improve this page:
How useful is the information? | http://www.ibm.com/developerworks/rational/library/1093.html | crawl-002 | refinedweb | 1,283 | 53.51 |
Looterkings: Re-connecting with Photon Bolt
This guest post was written by Frederik Reher, developer at Looterkings.
Looterkings is an indie studio founded by the Youtubers Manuel Schmitt (aka SgtRumpel), Erik Range (aka Gronkh) and Valentin Rahmel (aka Sarazar) in 2015.
Their first game, the roguelike dungeon-crawler Looterkings, has been released as Early Access on Steam in August 2016 and is currently in development.
It’s an action-multiplayer game for up to 4 players each of whom take control over a little monster-slaying goblin.
Frederik explains the difficulties the developers ran into when trying to let players reconnect to an already open game session and how Photon Bolt’s token system helped to resolve this issue.
Get ‘Looterkings’ on Steam >>
Obstacles
„How about we let players reconnect into a game?“, said the Game Designer and my job began. Up to then, we had stabilised Looterkings‘ multiplayer mode to the point where it did not crash frequently any more and where interaction between players did not cause major data desync between host and clients. We had finally implemented the ‚stacking‘ feature which enables 2 players to combine their goblins for more damage and other benefits, complete with animation synchronisation of 2 separate player-controlled units. How hard could it be?
Well, as it turns out, quite hard. At least if we had not used bolt. When we designed the
interaction system, we did not think about reconnecting. Getting it to work and getting it to
work fast were top priorities. So we ended up with a system relying heavily on global
events in order to prevent cluttering our scenes with BoltEntities. Every mushroom a goblin can pick up is synchronized using events. The same is true for crates, barrels, doors and shops.
A lifesaver – Bolt’s token system
So how to tell a clueless client everything that has happened in the level so far? In other networking solutions, we might have sent an RPC per interactive object, or – even worse -just made the objects into networked entities. But thankfully, bolt has tokens.
Tokens (named IProtocolToken in Bolt) are Bolt‘s way of serializing arbitrary data. They consist of a Read and a Write method, both accepting a UdpPacket. UdpPacket provides methods for serializing and deserializing standard data types. How you layout your data using those methods is up to you. You can add tokens to almost everything in bolt.
Whether it is a connection attempt and you want to send a passphrase to the host, whether it is spawning a unit and passing some parameters, not unlike using a constructor (the attach token is so useful, I wonder why Unity has not implemented a similar system for their Instantiate method). Tokens can also be added to entity states, player commands and, most importantly, to events.
public class MyToken : IProtocolToken { public void Read( UdpPacket packet ) { // deserialize data here } public void Write( UdpPacket packet ) { // serialize data here } }
Data of a running game
So what data does a client need when it connects into a running game? The first thing a client needs to know is how to build the level. Levels in Looterkings are procedurally assembled from pre-created Rooms, Intersections and Deadends. So the first token a
client receives after the level has loaded is the token with the level seed. Depending on the game mode, the next token a client receives contains data about a special mission players can try to accomplish for permanent buffs. Both tokens only require a single type of event which helps keeping the amount of different events manageable.
The next event sent contains information about the level‘s current condition. Our levels have multiple parts that change as players interact and progress. The most important part of those are interactive objects. To make sure crates are destroyed, mushrooms eaten and chests are open, every interactive object‘s state needs to be serialized. Our interactive objects all have an ID, a flag whether they are interactive at the moment and a byte denoting their current state in case of the object has more than two states, e.g. a puzzle using cardinal directions.
Additionally, information about rooms and doors has to be sent. Doors leading to rooms the players have not yet visited display a foggy white frame. Opening and closing doors is also done per event. This means we have to send data on whether a door is open and on whether a room has been visited before.
Lastly, there is information about the other players that needs to be sent. As alluded to in the introduction, two goblins can stack on top of each other, creating a single unit in the process. The new player needs to know who stacks with whom and who‘s on top.
Furthermore, players are surrounded by particle effects every time they level up. We send a player‘s current experience points in the attach token but since the new player is connecting into a level that may be running for minutes already, we have to provide other players‘ current experience points to prevent unwanted particle action from occurring. So the final Read and Write methods are looking like this:
public void Read( UdpPacket packet ) { // entitystatuses (ID, isUsable, state) entityStatuses = new NetworkEntityStatus[ packet.ReadInt() ]; for ( int i = 0; i < entityStatuses.Length; i++ ) { entityStatuses[ i ] = new NetworkEntityStatus( packet.ReadInt( NETWORK_ENTITY_ID_BITS ), packet.ReadBool(), packet.ReadByte() ); } // doorStatuses (ID, isOpen, hasBeenOpen) doorStatuses = new DoorOpenStatus[ packet.ReadByte() ]; for ( int i = 0; i < doorStatuses.Length; i++ ) doorStatuses[ i ] = new DoorOpenStatus( packet.ReadByte(), packet.ReadBool(), packet.ReadBool() ); // roomStatuses (ID, isCleared) roomStatuses = new RoomStatus[ packet.ReadByte() ]; for ( int i = 0; i < roomStatuses.Length; i++ ) sectorStatuses[ i ] = new RoomStatus( packet.ReadByte(), packet.ReadBool() ); // playerStackPartnerIds (positive = on top) playerStackPartnerIds = new int[ 4 ]; for ( int i = 0; i < 4; i++ ) playerStackPartnerIds[ i ] = packet.ReadByte() - 4; // shift by -4 to revert +4 on write // playerCrawlStatuses (-1 = dc, 0 = dead, 1 = alive) playerCrawlStatuses = new int[ 4 ]; for ( int i = 0; i < 4; i++ ) playerCrawlStatuses[ i ] = packet.ReadByte() - 1; // shift by -1 to revert +1 on write // playerExps playerExps = new int[ 4 ]; for ( int i = 0; i < 4; i++ ) playerExps[ i ] = packet.ReadInt(); }
public void Write( UdpPacket packet ) { // entityStatuses (ID, isUsable, state) packet.WriteInt( entityStatuses.Length ); for ( int i = 0; i < entityStatuses.Length; i++ ) { packet.WriteInt( entityStatuses[ i ].id, NETWORK_ENTITY_ID_BITS ); packet.WriteBool( entityStatuses[ i ].isUsable ); packet.WriteByte( entityStatuses[ i ].state ); } // doorStatuses (ID, isOpen, hasBeenOpen) packet.WriteByte( (byte)doorStatuses.Length ); for ( int i = 0; i < doorStatuses.Length; i++ ) { packet.WriteByte( (byte)doorStatuses[ i ].id ); packet.WriteBool( doorStatuses[ i ].isOpen ); packet.WriteBool( doorStatuses[ i ].hasBeenOpen ); } // roomStatuses (ID, isCleared) packet.WriteByte( (byte)roomStatuses.Length ); for ( int i = 0; i < roomStatuses.Length; i++ ) { packet.WriteByte( (byte)roomStatuses[ i ].id ); packet.WriteBool( roomStatuses[ i ].isCleared ); } // playerStackPartnerIds (positive = on top) for ( int i = 0; i < 4; i++ ) packet.WriteByte( (byte)( playerStackPartnerIds[ i ] + 4 ) ); // shift +4 so stack lower does not get lost // playerCrawlStatuses (-1 = dc, 0 = dead, 1 = alive) for ( int i = 0; i < 4; i++ ) packet.WriteByte( (byte)( playerCrawlStatuses[ i ] + 1 ) ); // shift by +1 so -1 does not get lost // playerExps for ( int i = 0; i < 4; i++ ) packet.WriteInt( playerExps[ i ] ); }
So now the client knows what the world around him looks like, there‘s only one thing missing: information about the client itself. Each player can buy multiple weapons, outfits and hats during a run. Only the server and the local player know what items a player possesses and this information changes how items in the shop are displayed. So we have to synchronize equipment manually as well. We can do that easily by serializing an array of ItemIDs which are basically bytes.
Round-up
So this is how we solved the problem of synchronizing reconnecting players usingBolt‘s token system. Could we have achieved similar functionality with other networking solutions? Most likely. Would it have required us to change every interactive object into some form of BoltEntity? Most definitely. Even so be aware that tokens have their drawbacks and limitations. We learned that the hard way when we bloated a token and the event refused to send it, effectively halting the game. But in a nutshell, tokens are a pretty handy tool and they made creating Looterkings an easier process. | https://blog.photonengine.com/looterkings-photon-bolt/ | CC-MAIN-2020-40 | refinedweb | 1,367 | 57.16 |
[1] in the case of
$numerical and a string value[2] in the
case of $string.
[1] IV, for signed integer value, or a few other
possible types for floating-point and unsigned integer
representations.
[2].
As we just explained, to get the code-sharing effect, you
should preload the code before the child processes get spawned. The
right place to preload modules is at server startup.
You can use the PerlRequire
and PerlModule
directives to load commonly used modules such as
CGI.pm and DBI when the server
is started. On most systems, server children will be able to share
the code space used by these modules. Just add the following
directives into httpd.conf:
PerlModule CGI
PerlModule DBI
An even better approach is as follows. First, create a separate
startup file. In this file you code in plain Perl, loading modules
like this:
use DBI ( );
use Carp ( );
1;
(When a module is loaded, it may export symbols to your package
namespace by default. The empty parentheses ( )
after a module's name prevent this.
Don't forget this, unless you need some of these in
the startup file, which is unlikely. It will save you a few more
kilobytes of memory.)
Next, require( ) this startup
file
in httpd.conf with the
PerlRequire directive, placing the directive
before all the other mod_perl configuration directives:
PerlRequire /path/to/startup.pl
As usual, we provide some numbers to prove the theory.
Let's conduct a memory-usage test to prove that
preloading reduces memory requirements.
To simplify the measurement, we will use only one child process. We
will use these settings in httpd.conf:
MinSpareServers 1
MaxSpareServers 1
StartServers 1
MaxClients 1
MaxRequestsPerChild 100
We are going to use memuse.pl (shown in Example 10-8), an Apache::Registry
script that consists of two parts: the first one loads a bunch of
modules (most of which aren't going to be used); the
second reports the memory size and the shared memory size used by the
single child process that we start, and the difference between the
two, which is the amount of unshared memory. Unshared);
printf "%10d %10d %10d (bytes)\n", $size, $share, $diff;
First we restart the server and execute this CGI script with none of
the above modules preloaded. Here is the result:
Size Shared Unshared
4706304 2134016 2572288 (bytes)
Now we take the following code:
use strict;
use CGI ( );
use DB_File ( );
use LWP::UserAgent ( );
use Storable ( );
use DBI ( );
use GTop ( );
1;
and copy it into the startup.pl file. The script
remains unchanged. We restart the server (now the modules are
preloaded) and execute it again. We get the following results:
Size Shared Unshared
4710400 3997696 712704 (bytes)
Let's put the two results into one table:
Preloading Size Shared Unshared
---------------------------------------------
Yes 4710400 3997696 712704 (bytes)
No 4706304 2134016 2572288 (bytes)
---------------------------------------------
Difference 4096 1863680 -1859584
You can clearly see that when the modules weren't
preloaded, the amount of shared memory was about 1,864 KB smaller
than in the case where the modules were preloaded.
Assuming that you have 256 MB dedicated to the web server, if you
didn't preload the modules, you could have 103
servers:
268435456 = X * 2572288 + 2134016
X = (268435456 - 2134016) / 2572288 = 103
(Here we have used the formula that we devised earlier in this
chapter.)
Now let's calculate the same thing with the modules
preloaded:
268435456 = X * 712704 + 3997696
X = (268435456 - 3997696) / 712704 = 371
You can have almost four times as many servers!!!
Remember, however, that memory pages get dirty, and the amount of
shared memory gets smaller with time. We have presented the ideal
case, where the shared memory stays intact. Therefore, in use, the
real numbers will be a little bit different.
Since you will use different modules and different code, obviously in
your case it's possible that the process sizes will
be bigger and the shared memory smaller, and vice versa. You probably
won't get the same ratio we did, but the example
certainly shows the possibilities.
Suppose scripts scripts. This will require both
the code and the scripts to be preloaded at server startup..
The:
We always preload these modules:
use Gtop( );
use Apache::DBI( ); # preloads DBI as well
We are going to run memory benchmarks on five different versions of
the startup.pl file:
Leave the file unmodified.
Install. | http://www.yaldex.com/perl-tutorial-3/0596002270_pmodperl-chp-10-sect-1.html | CC-MAIN-2018-26 | refinedweb | 725 | 69.82 |
ember-cli: 2.13.2
node: 7.8.0
Hello, I'm a newbie at Ember.
I have a route with this code to set a body class. I will use this code on a couple of routes but not all. I was wondering if it's possible to share this code between routes?
TIA
-Emmanuel
import Ember from 'ember';
export default Ember.Route.extend({
activate () {
Ember.$('body').addClass('loading-authentification');
},
deactivate () {
Ember.$('body').removeClass('loading-authentification');
}
});
yes, use mixins
Mixins are largely an anti-pattern now. Make sure you understand exactly how Ember's mixin works and how the order that you mix these mixins actually results in different behavior.
On the larger perspective, it's always good to know the inner working of Ember's object model. Namely how all these works: .reopen, .reopenClass, .extend, Ember.mixin (lower case mixin).
.reopen
.reopenClass
.extend
Ember.mixin
Hate to thread jack, but could you point to some resources about this?
Hello, thanks for the answers!
But, I'm really stuck here. Although I googled a lot, read all documentation that I can find, I can't find a way to do this.
I tried with mixin, reopen and reopenClass.
I think I understand how to create a mixins/reopen/reopenClass but I can't figure out how to put it in the route under activate/deactivate.
Ok I'm almost there:
In ./app.js I have set the import of the reopen file:
import SetBodyClass from './custom/css/set_body_class'
In ./custom/css/set_body_class.js I have the following code:
Ember.Route.reopen({
set_body_class(this_class){
console.log (this_class);
alert(this_class);
return Ember.$('body').addClass(this_class);
}
})
Then in /routes/authentification.js I have this:
import Ember from 'ember';
export default Ember.Route.extend({
activate () {
this.set_body_class('red');
},
deactivate () {
this.set_body_class('blue');
}
});
Although console.log() and alert() are working in the route it seems that jQuery is not working and I can't change the body class of the page.
Holycow it's working now! I forgot to change the name of the css class.
routes/authentification.js
import Ember from 'ember';
export default Ember.Route.extend({
activate () {
this.set_body_class('loading-authentification');
}
});
./custom/css/set_body_class.js
Ember.Route.reopen({
set_body_class(this_class){
return Ember.$('body').addClass(this_class);
}
})
I talked about the issues of Mixins before on this forum. You can see it here
If you're on Ember Community slack channel, you can do a search on mixin and you'll get many opinions about it, many are negative.
What would you use the place of the mixin here? A service is the first thing that pops into my mind but that seems overkill for this. You could use an exported class or a simple function either but they don't feel right compared to the mixin which is built into the framework.
Well, a service is not hooked into the render cycle of the route. OP's use case is route specific.
It is perfectly ok to encapsulate this in a helper or component. I think component works better because it has a teardown phase.
Just make component that attach class to body on insert, then remove the class on teardown.
Fair point but a component comes with a lot more cognitive overhead than a mixin (not to mention number of files). Most Mixins should only describe a simple self contained behaviour or action but don't necessarily have lifecycles themselves or work with the DOM. In that case does a mixin or classical inheritance not make more sense than a component?
Rather than using imperative logic to set a body style, you can also add that to the template and use data mapping from a property to set the class. In particular if this is for navigation the link-to can be used to set the style if the url matches the target of the link-to which allows easy management of the navigation links common in web ui and bootstrap in particular. | https://discuss.emberjs.com/t/share-code-between-routes/13260 | CC-MAIN-2017-30 | refinedweb | 661 | 59.6 |
This Program Has Been Tested: It Works Perfectly!
... or at least that's what I was told when I found this implementation of Hello World in C++:
#include <iostream.h> main() { cout << "Hello World!"; return 0; }
Stashed away at the bottom of the page in the section "Program Notes" is this helpful bit of information:
This program has been tested, it works perfectly without any errors or warnings.
... and Yet, GCC Complains
You may be wondering to yourself "What on Earth prompted them to include that little tidbit of information?". Even in C++ Hello world should not be too hard to get right:
aaron@athena:~/scratch$ g++ hello.cpp In file included from /usr/include/c++/4.2/backward/iostream.h:31, from hello.cpp:1: /usr/include/c++.
... but GCC still complains. The specifics of what it's complaining about are euclidated nicely in this article but gist is that <iostream.h> references Bjarne Stroustrup's original implementation which is only included (if it's included) in modern compiler distributions for compatibility reasons whereas <iostream> is the standardized version which is better, faster, cheaper, more portable and should be used in preference to <iostream.h> whenever possible.
After switching to <iostream> and informing the compiler that we're using namespace std GCC will happily compile and run our simple program:
aaron@athena:~/scratch$ g++ hello.cpp aaron@athena:~/scratch$ ./a.out Hello World!aaron@athena:~/scratch$
For stylistic reasons, I'll add an endl at the end of the message, but that's not fatal. Unfortunately, I'm not quite done. As someone who thinks that my compiler might know a thing or two about code, I like to turn on all of the warnings that my compiler is able to give me. Anything that the compiler is worried enough to raise a warning over is probably worth looking into. Compiling with -Wall -pedantic shows us that GCC is still not happy:
aaron@athena:~/scratch$ g++ -Wall -pedantic hello.cpp hello.cpp:5: error: ISO C++ forbids declaration of ‘main’ with no type
... easily enough fixed, just add an int return type to the main function.
All Fixed Up
After changing the iostream header, adding a namespace declaration, adding a return type and a newline (which is a surprising number of things for "Hello World!" when you think about it) we can now compile with -Wall and -pedantic and GCC won't emit so much as a peep:
#include <iostream> using namespace std; int main() { cout << "Hello World!" << endl; return 0; }
aaron@athena:~/scratch$ g++ -Wall -pedantic hello.cpp aaron@athena:~/scratch$ ./a.out Hello World!
The Moral of the Story
When your students send you email complaining about your implementation of "Hello World" emitting a slew of obscure errors when compiled, posting a note indicating that your program has been compiled and tested and is bug free is probably not the most productive thing you could do. | http://aaron.maenpaa.ca/blog/entries/2009/01/28/this_program_has_been_tested_it_works_perfectly/ | CC-MAIN-2017-13 | refinedweb | 489 | 62.68 |
Have some free text fields in your application that you wish you could search efficiently? Tried using some methods before but found out that they just cannot match the performance needs of your customers? Do I have one weird trick that will solve all your problems? Don’t you already know!? All I do is bring great solutions to your performance pitfalls!
As usual, if you want the TL;DR (too long; didn’t read) version, skip to the end. Just know you are hurting my feelings.
If you open up your version of Sample.Company in the SAMPLES namespace of a recent (2015.1 or later) Caché/Ensemble/HealthShare version you will see a Mission field that is pseudo-randomly generated text. Suppose we want to search this text field. For the purpose of this exercise, I generated about 256,246 companies – feel free to populate some on your own and follow along. Well you might run the following query:
SELECT * FROM Sample.Company WHERE Mission LIKE ‘% agile %’
That is quite the reasonable query, but how does it perform? Well, with no index it clearly has to read each entry, so we get 256,277 global references for our 7854 rows returned. This is not very good! Let’s add an index to Sample.Company and see if we can do better. We add the following:
Index MissionIndex on Mission;
And we build the index and run the same query. What do we see? 279,088 global references.
Ah, I can hear you over there: “But Kyle, that’s MORE global references! Isn’t that bad? I thought indexes were supposed to help!!!”
Well slow down there, 3 exclamation points is just too many. Besides, when confronted with some inconsistent behavior, it’s best to take a moment and think. What is the cost of reading an index vs. reading the full map? Well an index is going to be smaller, so reading the full MissionIndex is going to be a cheaper operation than doing a full table scan. Then we only need to read pieces of the full table for display. So while it’s more Global References, it’s less work (assuming everything is on disk). There is, of course, a lot more to this behavior but I would have to write extensively on the Caché Block structure and that’s just WAY outside the scope of what I’m trying to accomplish here.
OK, so our first crack and cheapening our query might have worked, but certainly less than we’d like. We want lightning fast! We want less global references, and we want it easy! What can we do? The answer is an iFind Index.
Alright, I know you don’t know anything about iKnow and this iFind thing sound suspiciously like iKnow. You want an SQL solution and you really don’t want to learn the ins and outs of a whole new technology to get the performance you want. Don’t worry, although iFind leverages the iKnow engine, you need precisely ZERO knowledge of iKnow to be able to use this. Check it out, you define the index like this:
Index MissioniFind on (Mission) as %iFind.Index.Basic;
What does %iFind.Index.Basic mean? Doesn’t matter! Isn’t that beautiful!? Just put it in and build the index the way you always build indexes (%BuildIndices()). Now, with this index there’s a bit of a change in our query. We need to tell the query to use this new fancy index like so:
SELECT * FROM Sample.Company WHERE %ID %FIND search_index(MissioniFind,’agile’)
The only thing you are entering, is the name of the index and the word you are looking for. OK so that’s a lot to do, add an index, build it up, and use some funkier SQL than you might be used to. Is it worth it? Well this query gets the same 7854 rows back, but does so in 7928 global references.
7928 GLOBAL REFERENCES! That’s barely more Global References than rows! That’s some good stuff. Now I can tell you all have some questions, so I’m going to not only predict the questions but answer them!
Can we combine this with other indexes? Great question! The answer is YES! Index combinations work really well with this technology.
Are there any restrictions? Sadly, yes. You need to have a bitmap enabled ID. That is, the ID for your table needs to be a positive integer. This means no compound IDs, no string IDs, etc.
Does it really work that well? BETTER! If you have a free text field and haven’t at least tried an iFind index, you are doing yourself a disservice!
So what about iKnow? The iFind index is powered by the iKnow engine, however you don’t need to know anything about iKnow to be able to use iFind in your existing applications. Just define, build, and use!
Where can I learn more? Documentation of course! Here:
And Here:
Does iFind do other cool stuff? Of course! It’s powered by iKnow so it can do a lot! You can do Fuzzy Search, Stemming/Compounding, Ranking, and you can look for iKnow entities. If you’re interested, take a look at the %iFind.Index.Semantic and %iFind.Index.Analytic index classes. They use more iKnow bells and whistles than the Basic one we used above. Or, check out this DC post on a demo interface that works on top of any iFind index: .
Most of this is beyond the scope of this post, but if you want to see more details on any of these subjects, just ask! You all know I’m here to inform and enlighten!
---------------------------------------------------------------------------------------------
TL;DR: If you want to do search on a free text field add an iFind index by defining it like so:
Index <IndexName> on <FreeTextField> as %iFind.Index.Basic
Build the index using %BuildIndices (as normal)
Rewrite your query like so:
…WHERE ID %FIND search_index(<IndexName>,<Search Value>) AND …
And reap the benefits of lightning fast free text search!
*No one is hiding this from you! Come on now, you know by now I just do it for the clicks!
Interesting post!
Just curious, what is the performance impact when inserting data to associated table with this iFind index in place?
Excellent question.
Of course, when you are going to be using iFind it does mean a heavier burden for your INSERT/UPDATE/DELETE functions. It also does take up more space on disk than a traditional index. The cost is going to depend on what kind of iFind index you are using and the size of the data being indexed. For my example an insert went from about 5 global references to about 200. But that is, of course, still fast due to the way Caché manages writes to disk. On INSERT went from 0.0002 seconds to 0.0032 seconds. So significantly slower, but still plenty fast. If you test a use case and come up with something different, post it here!!!
Hi Kyle
When / why would I chose an iFind index over a full text index
INDEX MissionKW ON Mission(KEYS) [ TYPE=BITMAP ];
that I can search via SQL as such
SELECT * FROM Sample.Company WHERE Mission %CONTAINS ('agile')
That SQL code seems a lot more intuitive then the one one required by iFind where I have to know the index name.
This deserves a full answer that I will give on Monday - just want to put this here as a placeholder. But, spoiler, iFind is better.
Which Monday is that Kyle?
I sometimes like to call myself Kyle on Thursdays that feel like Mondays ;o)
Like any operator whose name starts with a %, it's an InterSystems-specific one, so both %CONTAINS and %FIND were added to the standard for specific purposes. %CONTAINS has been part of Caché for a long while and indeed offers a simple means to directly refer to your %Text field. So, you don't need to know the name of your index, but you do need to make sure your text field is of type %Text, so there's still a minimum of table metadata you need to be aware of.
In the case of %FIND, we're actually leveraging the %SQL.AbstractFind infrastructure, which is a much more recent addition that allows leveraging code-based filters in SQL. In this case, the code-based filter implementation is provided by the %iFind infrastructure. Because this AbstractFind interface is, what's in a name, abstract, it needs a bit of direction in the form of the index name to wire it to the right implementation, which in our case comes as the index name. As the AbstractFind interface is expected to return a bitmap (for fast filtering), it also needs to be applied to an integer field, typically the IDKey of your class. So, while it offers a lot of programmatic flexibility to implement optimized filtering code (cf this article on spatial index), it does require this possibly odd-looking construct on the SQL side. But that's easily hidden behind a user interface, of course.
I thought IKnow required a special license. Can I really do all of this with my regular old Caché license?
Ah licensing - I should have covered that. This does require an iKnow enabled license, but one of those should be easy to obtain, at least for development purposes. Just contact InterSystems or your already assigned sales representative.
Does this work with character streams also?
Sure! | https://community.intersystems.com/post/free-text-search-way-search-your-text-fields-sql-developers-are-hiding-you | CC-MAIN-2019-18 | refinedweb | 1,597 | 74.69 |
AWT Section Index | Page restrict the size of a component in a BoxLayout?
BoxLayout respects the preferred, maximum and minimum sizes of its managed components. You can manage the sizes in two ways: Subclass your components and override their sizing methods: getPref.. draw underlined text in an AWT Label component?
You can create your own Label class extending original java.awt.Label class and override the paint method. For example import java.awt.*; public class MyLabel extends Label { public MyLab...more
How can I get a reference to the container to which a component has been added?
Call component.getParent();
Can I hide the grey TextField border in an applet while still displaying the text?
You can't. You have no control over the display of the AWT components. They use the native widget set (are drawn outside Java).. othe...more
Can someone please explain the use of getMinimumSize()?
See my article on layout management, Effective Layout Management. The basic idea is that the components state t...more
How can I determine the active JFrame?
Call Frame.getFrames() which will return all frames created by the application. Frame[] f = Frame.getFrames(); Frame active = null; for (int i = 0; i < f.lenght; i++) { if (f[i].isActive()...more
Where can I learn (more) about Java's support for developing multi-threaded programs? | http://www.jguru.com/faq/client-side-development/awt?page=4 | CC-MAIN-2016-50 | refinedweb | 221 | 61.53 |
Entity beans are characterized by the following 3 features.
- They are 'Persistent'. ( they are stored inhard-disk)
- They are shared by many clients.
- They have ,'Primary key'.
also read:
As already mentioned ,Entity beans can be thought of as a record ( or row) in a table of a relational database. ( This is just for easy understanding because, the database can also be Object Database, XML database etc.)
Let us consider a simple Java class named'customer'. Let this class havejust three attributes ,namely,'key','Name'and 'Place'. In a javabean, we would provide accessor methods, such as 'getName()'&'setName(String s)etc. for each attribute. The same method is employed in Entity bean. ( Roughly).
Thus,we deal with Java idiom only andthis is more intuitive.If we have an account bean, we can write code for deposit as follows:
intn=account1.getBalance(); n=n+400; account1.setBalance(n);
Doubtless, this is much simplerand easier than writing sql code.
Entity beans 'persist' (ie) they are stored inEnterprise server's hard disk and so even if there is a shutdown of the server, the beans 'survive' and can be created againfrom the hard disk storage.[A session bean is not stored in hard disk].
A Session bean , whether it isstateless or statefulis meant for a single client. But Entity bean , being a record in a table of a database, is likely to be accessed by a number of clients . So, they are typically shared by a number of clients. For the same reason, entity beans should work within 'Transaction'.managementas specified. in the Deployment descriptor.
Canwe use a session beanfor accessing a database either for 'select' query or for modifying?
Yes. We can, provided that the application is simple &is not accessed by multiple users . But we should note that the session
beanjust displays or manipulates a record in database whereas the entity bean is the record itself!
But, typically in an enterprise situation, a database will be accessed by thousands of clients concurrently, and the very rationale for the development of EJB is to tackle the problems which arise then. That is why, Entity beans are the correct choice for Enterprise
situations.
If we think of an entity bean instance as a record in a table of database, it automatically followsthat itshouldhave a primary key for unique identification of the record..[Many books provide a 'primary key
class'.But it is not atall necessary.] But carefully note that it should be a serializable java class. So, if we provide a primary key as 'int' type, we will have to provide a wrapper class (ie) Integer. This is clumsy. The best and easiest method
is to provide a string type as the primary key. (String class). This is the method that we will be following in our illustarations.
( We are using WebLogic 5.1)
So, in our example, we are having an Accessdatabase named 'customer'.This database has a table known as 'table1'. The table has three columns .
- 'key'(primary key field)
- 'name'
- 'place'
( all of them are ofStringtype)
We create a table like this without any entries and then register it in ODBC.( this is the most familiar and easy approach.We can also use other types of jdbc drivers.)
[This does not mean that the recordshaveto be created only through Entity bean. We can always add, delete and edit records directly in the table].
Entity beans can have two types of Persistence.
- Container-managed Persistence(CMP)
- Bean-managed Persistence type(BMP)
We can declare the type of persistence required by us in the 'Deployment Descriptor'.
In CMP, the bean designer does not have to write any sql-related code at all. The necessary sql statements are automatically generated by the container. The container takes care of synchronizing the entity bean's attributes with the corresponding columns in the table of the database. Such variables are referred to as 'container-managed fields'.
This requirement also is declared by us in the Deployment descriptor.
With CMP, the entity bean class does not contain the code that connects to a database. So, we are able to get flexibility by simply editing the deployment descriptors and weblogic.properties files, without editing and recompiling the java class files.
What are the advantages of CMP?
CMPbeanshave two advantages:
- less code.
- the code is independent of the type of data store such as Relational database.
What are the limitations of CMP?
If we want to have complex joins between different tables, CMP is not suitable. In such cases, we should use BMP.
EXAMPLE FOR CMP -ENTITY BEAN
As before we begin with the Remote Interface file.
/; }
Next we write the home interfcae.
//**********customerHome.java *******************
import javax.ejb.*; import java.rmi.*; public interface customerHomeextends EJBHome { public customerRemote create (String a,String b,String c) throwsRemoteException, CreateException; public customerRemote findByPrimaryKey(String a) throwsRemoteException, FinderException; }
//customerBean.java
import javax.ejb.*; import java.rmi.*; public class customerBean implements EntityBean { public Stringkey; public Stringname; public Stringplace; public StringgetName() { returnname; } publicStringgetPlace() { returnplace; } //--------------------------- public void setName(String b) { name=b; } publicvoidsetPlace(String c) { place=c; } //------------------------------- public StringejbCreate (Stringa, Stringb, Stringc) throws CreateException { this.key=a; this.name=b; this.place = c; return null;//it should be null! } public void ejbPostCreate (String a,String b,String c) throws CreateException{} public void ejbActivate(){} publicvoidejbPassivate(){} public void ejbRemove(){} publicvoidejbLoad(){} public void ejbStore(){} public void setEntityContext(EntityContext ec){ } public void unsetEntityContext(){ } }
We now createthreexml files as given below.
- ejb-jar. xml
- weblogic-ejb-jar.xml
- weblogic-cmp-rdbms-jar.xml
These three files are very important and should be created with utmost care. Remember that XML is case-sensitive and the DTD (Deployment descriptor) for each file expects the correct structure of the document. So type exactly as given.(No formatting..shown here for clarity only)
ejb-jar. xml
<?xml version="1.0"?> <!DOCTYPE ejb-jar PUBLIC '-//Sun Microsystems, Inc.//DTD Enterprise JavaBeans 1.1/-field> <field-name>key</field-name> </cmp-field> <cmp-field> <field-name>name</field-name> </cmp-field> <cmp-field> <field-name>place</field-name> </cmp-field> <primkey-field>key</primkey-field> </entity> </enterprise-beans> </ejb-jar>
weblogic-ejb-jar.xml
<?xml version="1.0" ?> <!DOCTYPE weblogic-ejb-jar PUBLIC "-//BEA Systems, Inc.//DTD WebLogic 5.1.0 EJB//EN" ""> <weblogic-ejb-jar> <weblogic-enterprise-bean> <ejb-name>customerBean</ejb-name> <persistence-descriptor> <persistence-type> <type-identifier>WebLogic_CMP_RDBMS</type-identifier> <type-version>5.1.0</type-version> <type-storage>META-INF/weblogic-cmp-rdbms-jar.xml</type-storage> </persistence-type>
Speak Your Mind | http://www.javabeat.net/entity-beans-in-ejbcmp/ | CC-MAIN-2015-18 | refinedweb | 1,079 | 50.94 |
Short Answer:
NO - Microsoft won't give you any official support.
GS2.0 and GS3.0 have been confirmed to work on all Visual Studio SKUs Microsoft have stated that only c# will be officially supported and only C# within VS will have the IDE enhancements for XNA GS.
Medium Answer:
Not fully - see the Visual Studio FAQ for more details. You can reference the libraries but you won't get starter kits, content pipeline or Xbox deployment and debugging.
Long Answer... aka if you really must try and you understand the above restrictions:
Building content can be done using a C# project just for content or using command line MSBuild both of which require a GS install and cannot be done using the XNA redist.
Deploying to the xbox can be performed using the .ccgame packaging command line tools which require a GS install to obtain. However, your project will be missing some information vital for XNAPack if your project was not created in C# in VS or C# Express. If you get this error: "Error 1601: Packaging the startup assembly...... is unsupported for this toolset. Please use XNA Game Studio Express 1.0 Refresh tools to package this game." Then you need to follow this workaround:
360 potential issues
Other languages may or may not work on the 360 depending on how they compile things. The compact framework does not support all of the IL instructions that the full framework does (see) so your particular compiler may produce code that will not run. Some parts of System.Reflection (reflection.emit for example) are not available on the 360 either which rules out any languages that rely on those. In addition some languages such as VB.Net come with additional libraries (e.g. the My. and the Microsoft.VisualBAsic namespaces). Since these libraries don't exist on the compact framework apps that rely on them will not run on the 360. | http://forums.xna.com/forums/t/1464.aspx | crawl-002 | refinedweb | 322 | 64.91 |
Hi I'm trying to teach myself C# to make myself more marketable as a programmer. So far all I know is C++.
Anyway, a friend advised me just to start writing simple C# programs and then eventually I'll "know" C#. (You know what I'm talking about)
My first simple program I'm trying to write is a guessing game where the computer thinks of a random number between 1 and 100 and the human tries to guess it. Here's my code so far:
Problem is that when I try to compare a variable of type int to a variable of type Random I get an error. What can I do? Thanks, ~AdamProblem is that when I try to compare a variable of type int to a variable of type Random I get an error. What can I do? Thanks, ~AdamCode:using System; using System.Collections.Generic; using System.Text; namespace Guessing_Game { class Program { static void Main(string[] args) { string guess; Console.WriteLine("Welcome to Adam Henderson's C# Guessing Game!\n"); Console.WriteLine("Guess a number from 1 to 100."); guess = Console.ReadLine(); Random random = new Random(); // C#'s way of giving you a random number random.Next(1, 100); int guess2 = Int32.Parse(guess); if (random < guess2) Console.WriteLine("Oh! You guessed too high! Guess again!"); else if (random > guess2) Console.WriteLine("Oh! You guessed too low! Guess again!"); else { Console.WriteLine("You guessed correct! The answer was {0}!", guess2); } } } } | http://cboard.cprogramming.com/csharp-programming/105191-random-int.html | CC-MAIN-2015-35 | refinedweb | 245 | 77.94 |
import "github.com/timtadh/data-structures/tree/bptree"
bpmap.go bptree.go bptree_node.go
A BpMap is a B+Tree with support for duplicate keys disabled. This makes it * behave like a regular Map rather than a MultiMap.
func (self *BpMap) Iterate() (kvi types.KVIterator)
A BpTree is a B+Tree with support for duplicate keys. This makes it behave as * a MultiMap. Additionally you can use the Range operator to select k/v in a * range. If from > to it will iterate backwards.
func (self *BpTree) Backward() (kvi types.KVIterator)
func (self *BpTree) Iterate() (kvi types.KVIterator)
func (self *BpTree) Replace(key types.Hashable, where types.WhereFunc, value interface{}) (err error)
Package bptree imports 2 packages (graph) and is imported by 1 packages. Updated 2017-12-16. Refresh now. Tools for package owners. | https://godoc.org/github.com/timtadh/data-structures/tree/bptree | CC-MAIN-2018-26 | refinedweb | 134 | 54.08 |
The Outlook object model is described in detail in the MSDN library. However, while that tells you what methods, properties and events exist, there's not a huge amount of information about what they're actually for. I found Carter & Lippert's Visual Studio Tools for Office 2007 book very useful in filling in many gaps, but some experimentation was certainly necessary.
I used to write lots of little C# console apps to delve into the guts of the object model but more recently have started using PowerShell for the same thing. As a dynamic language it's much quicker to explore the object model than the edit-compile-debug cycle, especially when the objects identify themselves as just a COM object in the C# debugger, meaning I need to insert casts all over the place. Any dynamic language would do, but PowerShell "knows" about so many different classes of object interface (.NET, COM, WMI, file system, ...) that it's very convenient for all sorts of things.
The point of this add-in is to be able to set the reply all flag in an email message object. From Scott Hanselman's post, we already know we want to tweak the message object's Action elements, but it's probably worth finding out a little more about them first. With PowerShell in one hand and the Outlook object model MSDN pages in the other, let's begin... Fire up the PowerShell ISE (Integrated Scripting Environment) or command shell, and execute the following to get hold of Outlook:
$outlook = new-object -com Outlook.Application
If you now execute "$outlook.GetType()" PowerShell will tell you what sort of object you have - not too surprisingly, an ApplicationClass object - and you can see what the documentation says about this.
$inbox = $outlook.Session.GetDefaultFolder(6)
gets hold of the inbox (the "6" comes from the OlDefaultFolders enumeration) and the following gets us the first message in the inbox:
$item = $inbox.Items.Item(1)
You can now examine the attributes of the object (type MailItem) such as the body or recipient lists, but let's focus on the actions. "$item.Actions" dumps the complete array though the following gives a more easily understood list with just the properties we're interested in.
$item.Actions | select Name,Enabled
which results in:
I've not seen anything that lists a complete set of actions (but, to be honest, I've not looked very hard). The documentation indicates that the Actions list can be indexed by integer or by name, and $item.Actions.Item(1) does result in the same action as $item.Actions.Item("Reply") - additionally indicating that the integer index is 1-based, as most (all?) of the Office collections tend to be. The lookup by name is what the VBA macro that inspired this used, and I used it in the first version of my add-in too. However, a user pointed out that it didn’t work in his German installation of Office: it turns out that the action names are localized, but inspection and experimentation suggests that the integer index values are consistent. Now, I've not seen that documented anywhere, and it’s possible that future versions of Outlook might break my assumption, but I’ve not come up with a better idea.
Having confirmed the name based lookup, and discovered a means of potentially avoiding language problems, it's time to get back to the add-in. First, for some more coverage of using PowerShell with Outlook, see James Manning's blog post on the topic. I also have to admit that using the VBA macro editor built into Outlook would have made the exploration just as simple, perhaps even easier, but I wanted an excuse to start using PowerShell.
Change the stub ComposeButton_Click from last time to the following:
public void ComposeButton_Click(Office.IRibbonControl control, bool pressed)
{
var inspector = control.Context as Outlook.Inspector; if (inspector != null) {
var item = inspector.CurrentItem as Outlook.MailItem;
if (item != null) item.Actions[2].Enabled = !pressed; }}
The "context" for the control passed into the callback is the Outlook window container object associated with the ribbon, an Inspector in this case. From that, we can get hold of the mail item being displayed, and can then enable or disable the value of the appropriate Action (remember, the index is 1-based). I'm not sure of those null checks are necessary or if there's guaranteed to always be an inspector and a mail item but I guess it's better to perform an extra check than it is to crash Outlook, especially since this isn't really performance critical. The inspector check does protect against me accidentally showing this ribbon on a window with some other object in it (e.g., the Outlook main window).
Now, programmatic interaction with Office applications is really COM based, but VSTO does a really good job of hiding, say, COM reference counting from us. Most of the time it works just fine, but I did have some problems reports about phantom windows appearing on the screen, or Outlook not exiting - both of these problems may have been symptoms of something hanging on to a reference when they shouldn't have. Perhaps .NET garbage collection would have released the references but it didn't seem to be doing it in time for these two problems. Of course, these were well nigh impossible to debug because the debugger would also manipulate the references, obscuring whatever what happening within the application. I found that the problems "went away" when I explicitly released all COM objects when I'd finished with them - I don't know if this is the right thing to do, and I'm a bit uneasy at the potential for now causing objects to be prematurely destroyed, but everything does seem to work with this change. My actual button click handler is the more paranoid version that follows - note the bold lines:
public void ComposeButton_Click(Office.IRibbonControl control, bool pressed)
{
var inspector = control.Context as Outlook.Inspector; if (inspector != null) { var item = inspector.CurrentItem as Outlook.MailItem; if (item != null) { var action = items.Actions[2]; action.Enabled = !pressed; Marshal.ReleaseComObject(action); Marshal.ReleaseComObject(item); } Marshal.ReleaseComObject(inspector); }}
The Marshal class is in the System.Runtime.InteropServices namespace. Note that I'm releasing only references which I've created (for example, not the control passed in), and I do it after my last interaction with the reference. As I said earlier, I'm not sure how much of this is necessary, but doing it everywhere seems to be less problematic than doing it nowhere!
Run this, and you'll see that you can send messages with reply-all disabled. But try this: create a new message, click the button, send the message, create a second message - wait a moment, the button seems to indicate that reply-all is already disabled here... If you send that message though, you'll see that reply-all has not been disabled! What's going on here? What seems to be happening is that the ribbon is loaded once and cached in whatever state it was last left in for use in new windows (I guess this is for efficiency: most of the time the ribbon will be identical for different windows of the same type), so the fix is to cause a newly created window to reset its ribbon controls... Perhaps not reset as such, but set it to the value associated with the mail item being displayed in the window (this will be useful later on, when we want to display the state for received messages).
First, we need to add a callback to the ribbon to update the button. Add attribute getPressed="ComposeButton_IsPressed" to the button in the XML script and then add the callback method to Ribbon.cs:
public bool ComposeButton_IsPressed(Office.IRibbonControl control)
{ bool pressed = false;
var inspector = control.Context as Outlook.Inspector; if (inspector != null) { var item = inspector.CurrentItem as Outlook.MailItem; if (item != null) { var action = items.Actions[2]; pressed = !action.Enabled; Marshal.ReleaseComObject(action); Marshal.ReleaseComObject(item); } Marshal.ReleaseComObject(inspector); } return pressed;}
Of course, a better implementation would extract the code common between this and the previous method into a separate function, but I'll leave that as an exercise for the reader.
Creating this callback is not actually sufficient, since it's currently only invoked the very first time the ribbon is displayed - we need to force it to be triggered when new inspector windows are created, and that happens in ThisAddin.cs via, not unexpectedly, a NewInspector event handler. We've already got CreateRibbonExtensibilityObject, so extend it slightly to squirrel away a reference to the ribbon object and make the NewInspector event handler call the ribbon's InvalidateControl method. Something like the following:
private Ribbon ribbon;private Outlook.Inspectors inspectors;protected override Microsoft.Office.Core.IRibbonExtensibility CreateRibbonExtensibilityObject(){ this.ribbon = new Ribbon(); return this.extension;}private void ThisAddIn_Startup(object sender, System.EventArgs e){ this.inspectors = this.Application.Inspectors; this.inspectors.NewInspector += OnNewInspector;}private void ThisAddIn_Shutdown(object sender, System.EventArgs e){ this.inspectors.NewInspector -= OnNewInspector; Marshal.ReleaseComObject(this.inspectors);}private void OnNewInspector(Outlook.Inspector inspector){ if (this.ribbon != null) this.ribbon.InvalidateControl("btnNoReplyAll");}
Note that I capture the Inspectors collection as a class variable - if I were to just replace the body ot ThisAddin_Startup with "this.Application.Inspectors.NewInspector += ..." there is a chance that garbage collection would cause the .NET Inspectors proxy to be reclaimed and the event handler detached. Keeping the collection in a variable with a lifetime as long as required for the add-in means the event handler stays around too. That final function, though short, is a bit untidy, poking its nose inside the Ribbon object - it would be better to write a RibbonInvalidateRibbon method which hides the details and invoke that from here, but I've done enough typing for today.
After all that, you should have a fully functioning no-reply-all button, at least on the compose window. Next time, I'll add some more buttons... | http://blogs.msdn.com/b/gsmyth/archive/2011/08/06/outlook-object-model.aspx | CC-MAIN-2014-41 | refinedweb | 1,676 | 55.03 |
Java
This content was COPIED from BrainMass.com - View the original, and get the already-completed solution here!
Here is a public class called Student
Public class Student {
public int studID = 0;
public char studGrade = 'F';}
Use Notepad to write a test class called testStud.java which will instantiate an object called myStudent. Use the object to write print statments to print the current values of the two attributes. Next, use the object to change the studID to 127 and the studGrade to an A. Write code that will display the two attributes studID and studGrade.© BrainMass Inc. brainmass.com February 24, 2021, 2:30 pm ad1c9bdddf
Solution Summary
Java help is given. | https://brainmass.com/computer-science/java/25368 | CC-MAIN-2021-10 | refinedweb | 113 | 66.23 |
Writing rules on Windows
Common problems of writing portable rules, and some solutions.
This document focuses on writing Windows-compatible rules.
Paths
Problems:
Length limit: maximum path length is 259 characters.
Though Windows also supports longer paths (up to 32767 characters), many programs are built with the lower limit.
Be aware of this limit for the programs you run in actions.
Working directory: is also limited to 259 characters.
Processes cannot `cd` into a directory longer than 259 characters.
Case-sensitivity: Windows paths are case-insensitive, Unix paths are case-sensitive.
Be aware of this when creating command lines for actions.
Path separators: are backslash (`\`), not forward slash (`/`).
Bazel stores paths Unix-style, i.e. with `/` separators. Though some Windows programs support Unix-style paths, others don't. Some built-in commands in cmd.exe support them, some don't.
It's best to always use `\` separators on Windows: replace `/` with `\` when you create command lines and environment variables for actions.
Absolute paths: don't start with slash (`/`).
Absolute paths on Windows start with a drive letter, e.g. `C:\foo\bar.txt`. There's no single filesystem root.
Be aware of this if your rule checks if a path is absolute. (Absolute paths should be avoided though, they are often non-portable.)
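If a rule does need to recognize absolute paths on both platforms, a minimal Starlark sketch of such a check might look like the following (an illustration, not a library function; it treats any drive-letter-prefixed path as absolute):

```python
def _is_absolute(path):
    # Unix-style absolute path ("/foo"), or a Windows
    # drive-letter path such as "C:/foo" or "C:\\foo".
    return path.startswith("/") or (len(path) >= 2 and path[1] == ":")
```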
Solutions:
Keep paths short.
Avoid long directory names, deeply nested directory structures, long file names, long workspace names, long target names.
All of these may become path components of actions’ input files, and may exhaust the path length limit.
Use a short output root.
Use the `--output_user_root=<path>` flag to specify a short path for Bazel outputs. A good idea is to have a drive (or virtual drive) just for Bazel outputs (e.g. `D:\`), and to add this line to your `.bazelrc` file:
```
build --output_user_root=D:/
```

or

```
build --output_user_root=C:/_bzl
```
Use junctions.
Junctions are, loosely speaking[1], directory symlinks. Junctions are easy to create and can point to directories (on the same computer) with long paths. If a build action creates a junction whose path is short but whose target is long, then tools with a short path limit can access the files in the junctioned directory.
In `.bat` files or in cmd.exe you can create junctions like so:
```
mklink /J c:\path\to\junction c:\path\to\very\long\target\path
```
[1]: Strictly speaking Junctions are not Symbolic Links, but for sake of build actions we may regard Junctions as Directory Symlinks.
Replace `/` with `\` in paths in actions and envvars.
When you create the command line or environment variables for an action, make the paths Windows-style. Example:
```python
def as_path(p, is_windows):
    if is_windows:
        return p.replace("/", "\\")
    else:
        return p
```
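A rule implementation could then apply such a helper when assembling an action's command line. The following is a hedged sketch: the `_tool` attribute is hypothetical, and checking `ctx.configuration.host_path_separator` is only one possible way to detect Windows:

```python
def _impl(ctx):
    # One possible Windows check: the host PATH separator is ";" on Windows.
    is_windows = ctx.configuration.host_path_separator == ";"
    out = ctx.actions.declare_file(ctx.label.name + ".txt")
    ctx.actions.run(
        executable = ctx.executable._tool,  # hypothetical tool attribute
        arguments = [as_path(out.path, is_windows)],
        outputs = [out],
    )
    return [DefaultInfo(files = depset([out]))]
```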
Environment variables
Problems:
Case-sensitivity: Windows environment variable names are case-insensitive.
For example, in Java `System.getenv("SystemRoot")` and `System.getenv("SYSTEMROOT")` yield the same result. (This applies to other languages too.)
Hermeticity: actions should use as few custom environment variables as possible.
Environment variables are part of the action’s cache key. If an action uses environment variables that change often, or are custom to users, that makes the rule less cache-able.
Solutions:
Only use upper-case environment variable names.
This works on Windows, macOS, and Linux.
Minimize action environments.
When using `ctx.actions.run`, set the environment to `ctx.configuration.default_shell_env`. If the action needs more environment variables, put them all in a dictionary and pass that to the action. Example:
load("@bazel_skylib//lib:dicts.bzl", "dicts") def _make_env(ctx, output_file, is_windows): out_path = output_file.path if is_windows: out_path = out_path.replace("/", "\\") return dicts.add(ctx.configuration.default_shell_env, {"MY_OUTPUT": out_path})
Actions
Problems:
Executable outputs: Every executable file must have an executable extension.
The most common extensions are `.exe` (binary files) and `.bat` (Batch scripts).
Be aware that shell scripts (
.sh) are NOT executable on Windows, i.e. you cannot specify them as
ctx.actions.run’s
executable. There’s also no
+xpermission that files can have, so you can’t execute arbitrary files like on Linux.
Bash commands: For sake of portability, avoid running Bash commands directly in actions.
Bash is widespread on Unix-like systems, but it’s often unavailable on Windows. Bazel itself is relying less and less on Bash (MSYS2), so in the future users would be less likely to have MSYS2 installed along with Bazel. To make rules easier to use on Windows, avoid running Bash commands in actions.
Line endings: Windows uses CRLF (
\r\n), Unix-like systems uses LF (
\n).
Be aware of this when comparing text files. Be mindful of your Git settings, especially of line endings when checking out or committing. (See Git’s
core.autocrlfsetting.)
Solutions:
Use a Bash-less purpose-made rule.
native.genrule()is a wrapper for Bash commands, and it’s often used to solve simple problems like copying a file or writing a text file. You can avoid relying on Bash (and reinventing the wheel): see if bazel-skylib has a purpose-made rule for your needs. None of them depends on Bash when built/tested on Windows.
Build rule examples:
copy_file()(source, documentation): copies a file somewhere else, optionally making it executable
write_file()(source, documentation): writes a text file, with the desired line endings (
auto,
unix, or
windows), optionally making it executable (if it’s a script)
run_binary()(source, documentation): runs a binary (or
*_binaryrule) with given inputs and expected outputs as a build action (this is a build rule wrapper for
ctx.actions.run)
native_binary()(source, documentation): wraps a native binary in a
*_binaryrule, which you can
bazel runor use in
run_binary()’s
toolattribute or
native.genrule()’s
toolsattribute
Test rule examples:
diff_test()(source, documentation): test that compares contents of two files
native_test()(source, documentation): wraps a native binary in a
*_testrule, which you can
bazel test
On Windows, consider using
.batscripts for trivial things.
Instead of
.shscripts, you can solve trivial tasks with
.batscripts.
For example, if you need a script that does nothing, or prints a message, or exits with a fixed error code, then a simple
.batfile will suffice. If your rule returns a
DefaultInfo()provider, the
executablefield may refer to that
.batfile on Windows.
And since file extensions don’t matter on macOS and Linux, you can always use
.batas the extension, even for shell scripts.
Be aware that empty
.batfiles cannot be executed. If you need an empty script, write one space in it.
Use Bash in a principled way.
In Starlark build and test rules, use
ctx.actions.run_shellto run Bash scripts and Bash commands as actions.
In Starlark macros, wrap Bash scripts and commands in a
native.sh_binary()or
native.genrule(). Bazel will check if Bash is available and run the script or command through Bash.
In Starlark repository rules, try avoiding Bash altogether. Bazel currently offers no way to run Bash commands in a principled way in repository rules.
Deleting files
Problems:
Files cannot be deleted while open.
Open files cannot be deleted (by default), attempts result in “Access Denied” errors. If you cannot delete a file, maybe a running process still holds it open.
Working directory of a running process cannot be deleted.
Processes have an open handle to their working directory, and the directory cannot be deleted until the process terminates.
Solutions:
In your code, try to close files eagerly.
In Java, use
try-with-resources. In Python, use
with open(...) as f:. In principle, try closing handles as soon as possible. | https://docs.bazel.build/versions/master/skylark/windows_tips.html | CC-MAIN-2020-16 | refinedweb | 1,230 | 60.21 |
Huey ShepardPro Student 1,430 Points
I'm having problems with Linq question 2. What am I doing wrong. My answer compiles in Visual Studio.
Challenge Task 2 of 3
Create a public Action field named DisplayResult that takes an int parameter and a Func<int, int> parameter. Initialize it with an anonymous method delegate that takes an int result parameter, and a Func<int, int> named operation.
-----------------------------------------Code from Program.css---------------------------------- using System;
namespace Treehouse.CodeChallenges { public class Program {
public Action<int, Func<int, int>> DisplayResult = delegate (int result, Func<int, int> operation){}; }
}
1 Answer
Steven Parker171,904 Points
It looks like the task 1 code is missing.
In task 1, you created a field named Square, but I don't see it in the code above. In multi-task challenges, each new task must be added to the code created for the previous task(s).
Otherwise, the code you're adding for task 2 looks good. | https://teamtreehouse.com/community/im-having-problems-with-linq-question-2-what-am-i-doing-wrong-my-answer-compiles-in-visual-studio | CC-MAIN-2019-39 | refinedweb | 158 | 66.64 |
Internet Evolution — Name Game
I’m posting these here not because they are particularly good (they weren’t) but because I wanted to leave them online after Internet Evolution, the site which originally hosted them, went away.
The Name Game Moves to the Web
Written by David Manheim 4/28/2011
Addressable space has been an issue with computers since time immemorial. My first computer had expanded, extended, and HMA memory, just so I could use the second DIMM of RAM. We had 2- or 4-gigabyte limits on addressable RAM on 32-bit machines, and FAT16 limits for hard drives that are similar. Hell, IPv6 faces the same issue. We just don’t know how much space we’ll need when designing a system.
But there’s an impending bigger deal, with a less well defined limit: There aren’t enough words to describe what we need!
You may have noticed this issue with acronyms over the last 10 years: There are too many of them that could stand for anything. The FAT16 I mentioned above stands for File Allocation Table, but that’s obvious only to those who know it. It could just as easily stand for Feature Acceptance Test, Factory Allowance Test, Failure Automation Triangle, and so on. Or fat.
Tech acronyms aren’t the only terms for which we’re running out of alternatives. We’re also running out of names for rock bands, for instance. Dave Barry has been worried about this longer than IPv6 has been around, and he has a useful starter list in case any budding musicians get stuck.
Pharmaceuticals is having a similar problem, but with more profound consequences: OPDRA(one of those acronyms again) is the branch of the FDA making sure that prescription names are not similar enough to be confused with one another. Giving clozapine instead of olanzapine, or giving serzone for seroquel, has injured or killed patients. Computerized medical systems would prevent transcription errors, but no one can prevent confusion or misstatements on the part of a prescribing doctor.
In computing, the latest generation of technologies is ignoring the problem and reusing terms with reassigned definitions: I use Windows (wooden framework with a glass pane) and Vista (a distant view or prospect) to get to clouds (a visual mass of water droplets) via LAMP (an artificial source of illumination), so I can access the grid (a regularly spaced pattern of lines).
If we don’t have enough space for new terms, we can always reuse old ones!
But this only works if the technology is widespread, and no one needs to copyright it. Otherwise, we need to expand the namespace, or what I’ve called Wordv1 to Wordv2, which squares the size of the available namespace for the low price of using two-word names.
So we have Facebook and Foursquare, Techcrunch, Techflash, and Techdirt. We even had an “Untitled Startup” for awhile, but it finally found a name that wasn’t already copyrighted and is now “Simply Measured.”
I guess it won’t be long until we complete the circle and names expand into Wordv3, and we get great three-word names like International Beekeeping Mavens. But, don’t worry, if it sounds too long, we can always use the acronym.
— David Manheim works as a terrorism insurance modeling specialist at Risk Management Solutions.
Link: | https://medium.com/@davidmanheim/internet-evolution-name-game-d33c96fd7fa8 | CC-MAIN-2017-43 | refinedweb | 559 | 58.11 |
-- first edition --
November 8, 2001
Haskell has come a long way since the September of 1987, when a
meeting at a functional programming conference decided that more
widespread use of the class of non-strict purely functional
languages was being hampered by the lack of a common language. It is
a credit to everyone involved in the development of Haskell that the
language has achieved many of the goals set out for it.,
University of Kent at Canterbury, UK emails.,
Haskell's central information resource is
It has the language and standard library definitions, links to Haskell implementations, libraries, tools, books, tutorials, people's home pages, communities, projects, news, a wiki, question&answers, applications, educational material, job adverts, Haskell humour, and even merchandise..
Perhaps we can all take the release of this first Communities Report as an occasion for going through the information on haskell.org relating to our own interests and sending."
It has been a long and difficult revision, with a lot of surprising "features" being discovered and removed through a series of draft documents and intensive discussions on the haskell list. Simon Peyton Jones had taken on the job:
The results can be found at:
"I have posted a draft version of both Reports approximately monthly since April. Now I am posting what I hope are final versions, but I want to give one last chance for you to improve my wording. I do not want to do anything new; but I am prepared to fix any errors in the changes I have made. I urgently solicit feedback on these drafts, before the end of November 2001."
The effective lack of a formal semantics for the whole of Haskell (as opposed to academically interesting fragments) has been a constant source of embarrassment (functional languages: solid theoretical basis, effective reasoning about programs, ...; Haskell: ???, ahem, oops).
Similarly, anyone wanting to do meta-programming on Haskell source code, e.g., to prototype language extensions, tended to write their own fragmentary Haskell frontend on top of the few existing parsers for full Haskell.
On both these topics, there seems to have been some progress recently:
There are a few omissions and one deviation (can't use qualified names to refer to top level bindings in the same module, but these top level bindings shadow imported bindings, so it is easy to translate a program from Report-Haskell to KF-Haskell). The omissions are strictness flags in datatypes, the newtype construct (which is indistinguishable from ordinary algebraic data types from a typing point of view) and deriving clauses (which would probably have needed another ten pages or so to specify, most of which would be conncerned with the dynamic semantics)."
Old: " "Core" is an intermediate language used internally by the GHC compiler. It does)."
New: "The newly released GHC 5.02 supports an initial version of a facility for dumping GHC's intermediate code (called Core) into a file for use by other tools. The Core format has been given a formal syntax and semantics (the latter in the form of a definitional interpreter). Details are in
At present, Core can only be dumped (using the -fext-core flag to ghc); ultimately, we hope to be able to load it as well, so that the output of other tools can be fed back into GHC prior to code generation. Feedback on this facility (to glasgow-haskell-users and/or apt@cs.pdx.edu) would be very welcome."
Thomas Hallgren: "The intended use is within the Programatica project [1], where we want to be able work with extended versions of Haskell. The parser (and the abstract synax, I presume) is based on a Happy parser by Simon Marlow and Sven Panne, and I guess it has something in common with the parser used in GHC. The abstract syntax has then been refactored to separate structure and recursion, to make most of the types and related functions reusable without change in extended versions of the language. Tim Sheard talked about his unification algorithm based on the same ideas at ICFP [2].
At the moment, the parser seems to be in descent shape, although it does not yet handle infix operators in 100%accordance with Haskell 98, since that also requires an implementation of the module system...
The implementation of the modules system is close to completion, so we will have something that can parse Haskell 98 correctly within a couple of weeks, I guess.
There is some code for static analysis and type checking, but it is not in good enough shape to be viewed by external eyes yet...
Other than that, I refrain from saying anything definite. My guess is that things will get done when they seem to be needed to make progress in the project. After all, we are lazy functional programmers :-)"
[1]
[2]."
The three Haskell in Haskell frontends developed independently, using similar starting points. Now that they know about each other, it would seem to make sense for the groups to join forces, in order to make best use of sparse resources. Bernie Pope has already indicated that he would be willing to contribute to a joint effort, the other groups have yet to comment. We'll see how the story continues, but in any case, the foundations for Haskell meta-programming have been improved considerably by these projects.
The Journal of Functional Programming will be running a special issue on Haskell (submission, refereeing and editing done; expected publication sometime in 2002). Thanks to Graham Hutton, guest editor for that special issue, we have the titles, authors, and abstracts for the six papers that will appear in it, and it looks to be a very interesting special issue.
CFP:
A Static Semantics for Haskell Karl-Filip Faxen.
Overloading is translated into explicit dictionary passing, as in all current implementations of Haskell. The target language of this translation is a variant of the Girard-Reynolds polymorphic lambda calculus featuring higher order polymorphism and explicit type abstraction and application in the term language. Translated programs can thus still be type checked, although the implicit version of this system is impredicative.
A surprising result of this formalization effort is that the monomorphism restriction, when rendered in a system of inference rules, compromises the principal type property.
Developing a High-Performance Web Server in Concurrent Haskell Simon Marlow.
A Typed Representation for HTML and XML Documents in Haskell Peter Thiemann
We define a family of embedded domain specific languages for generating HTML and XML documents. Each language is implemented as a combinator library in Haskell. The generated HTML/XML documents are guaranteed to be well-formed. In addition, each library can guarantee that the generated documents are valid XML documents to a certain extent (for HTML only a weaker guarantee is possible). On top of the libraries, Haskell serves as a meta language to define parameterized documents, to map structured documents to HTML/XML, to define conditional content, or to define entire web sites.
The combinator libraries support element-transforming style, a programming style that allows programs to have a visual appearance similar to HTML/XML documents, without modifying the syntax of Haskell.
Secrets of the Glasgow Haskell Compiler Inliner Simon Peyton Jones and Simon Marlow.
Faking It: Simulating Dependent Types in Haskell Conor McBride
Dependent types reflect the fact that validity of data is often a relative notion by allowing prior data to affect the types of subsequent data. Not only does this make for a precise type system, but also a highly generic one: both the type and the program for each instance of a family of operations can be computed from the data which codes for that instance.
Recent experimental extensions to the Haskell type class mechanism give us strong tools to relativize types to other types. We may simulate some aspects of dependent typing by making counterfeit type-level copies of data, with type constructors simulating data constructors and type classes simulating datatypes. This paper gives examples of the technique and discusses its potential.
Parallel and Distributed Haskells P.W. Trinder, H-W. Loidl and R.F. Pointon.
Simon Peyton Jones, Simon Marlow, Julian Seward, Reuben Thomas, (with particular help recently from Marcin Kowalczyk, Sigbjorn Finne, Ken Shan)
In early October we released GHC 5.02. This is the first version of GHC that works really solidly on Windows, and it also has a much more robust implementation of GHCi, the interactive version of GHC. Compared to earlier releases our test infrastructure is in much better shape, and we were pretty confident about its reliability. Perhaps in response to this rash claim, lots of people started to use it and, sure enough, a significant collection of bugs were reported. So we will release GHC 5.02.1 early in November. [this has just happened (ed)]
Simon PJ has spent quite a bit of time on a new demand analyser that now replaces the old strictness analyser and CPR analyser. The new thing is much faster, and much smaller (lines of code) than the analysers it replaces. Hopefully a paper will follow.
Simon M wrote a new compacting garbage collector that reduces the amount of real memory you need to run big programs.
Ken Shan has heroically done a fine Alpha port of GHC.
Sadly, Reuben leaves at the end of October, and Julian at the end of Feb 02, when the grant that funds them runs out. That will leave the two Simons on GHC duty. So the tempo of GHC activity will reduce; we have no new sources of money in our sights. GHC is, and remains, an open-source project, and we welcome contributions from others. (Thanks to Ken, Sigbjorn, Marcin, and others who have pitched in recently.)
So our current short-term objective is to get GHC into a really solid, robust state --- rather than adding lots of new features. In particular, we plan to spend the autumn on
There is a never-ending task of filling in things that nearly work and but don't quite do it right. E.g. warning about unused bindings isn't quite right; generics are incomplete; derived read on unboxed types doesn't work; derived Read generates obscene amounts of code; and so on. This is a bit of a thankless task, and we're much more motivated to get on with things that are actually holding people up. So please tell us.
We have not paid serious attention to the quality of the code GHC produces, or the speed at which it produces it, or the space it eats (esp GHCi), for quite a while. So we're going to work on
In particular, Sunwoo Park, a summer intern from CMU, built a prototype implementation of lag/drag/void profiling, and retainer profiling. We plan to integrate these into our main release.
Having said we're not concentrating on new stuff, here are the things that are floating around in our brains. Vote now!
Project Status: maintenance mode, volunteers needed
Hugs 98 is a small and fast interactive programming system, that offers an almost complete implementation of Haskell 98. Its main strengths are
Hugs 98 is open source, and thus dependent on volunteering efforts for its development. In particular, Hugs isn't maintained or supported by OGI anymore. An active mailing list, and the Hugs cvs archive, can both be reached from haskell.org. The new FFI is partially supported in the current release. A new release (tentatively scheduled for Nov. 30), that will include hierarchical module names and the rearranged hslibs, is currently being put together by Sigbjorn Finne, Alastair Reid, Jeff Lewis, and Johan Nordlander.
The particular strengths of the nhc98 compiler are portability, space-efficiency, close adherence to Haskell'98, and extensive tool support to help you to engineer better programs.
Project Status: new project.
Version 1.0 of the Haskell 98 FFI Addendum is nearing publication. This version of the addendum only covers interaction between Haskell and C code in detail, but is designed to be easily extensible with support for other languages, such as C++ and Java. The current draft is available from
The document is complete and if approved in its current form on the FFI mailing list will be circulated on the main Haskell list for comments from the community. The functionality of the FFI as defined in the addendum is currently available in GHC and NHC98; however, there are still syntactic differences between the definition and those implementations. They are expected to be resolved in the near future.
Back in February of this year, Malcolm Wallace proposed an extension to Haskell to support a hierarchical module namespace, and gave some suggestions as to how Haskell might use the extended namespace. The original message is here:
At the same time, a mailing list for discussion of the changes was set up, libraries@haskell.org. The archives are here:
This report details the current status of the proposal, and outlines what we've been up to on the mailing list. There is also an evolving document describing the current proposal; an HTML version can be found here:
Further extensions have been suggested, such as those to allow importing or renaming of multiple modules simultaneously, but none has been settled on. We're waiting until we have more experience with using the hierarchical scheme before deciding what further extensions, if any, are necessary.
Work has begun on constructing the core libraries. The current sources can be perused in the CVS repository, here:
The current status is that most of GHC's old hslibs libraries have been migrated into the new framework, with the exception of posix, edison, HaXml, Parsec and a few others. GHC runs with the new libraries, but the development version of GHC hasn't fully switched over to the new scheme yet - this is expected to happen before the next major release of GHC.:
There has been some recent activity concerning the interaction between concurrency and exceptions, the result being the asynchronous exception API provided by GHC:
A future goal is to specify and standardise the Concurrent Haskell extension as a Haskell 98 addendum.
GpH is a minimal, conservative extension of Haskell'98 to support parallelism. Experience has shown that it is particularly good for constructing symbolic applications, especially those with irregular parallelism, e.g. where the size and number of tasks is dynamically determined. The project has been ongoing since 1994, initially at Glasgow, and now at Heriot-Watt and St Andrews Universities.
GpH extends Haskell'98 with parallel composition: par. Parallel and sequential composition are abstracted over as evaluation strategies to specify more elaborate parallel coordination, e.g. parList s applies strategy s to every element of a list in parallel. Evaluation strategies are lazy higher-order polymorphic functions that enable the programmer to separate the algorthmic and coordination parts of a program. A number of realistic programs have been parallelised using GpH, including a Haskell compiler and a natural language processor.
GpH is publicly available from the page below, and is implemented on GUM, a sophisticated runtime system that extends the standard GHC runtime system to manage dynamically much of the parallel execution, e.g. task and data placement. GUM is portable: using C and standard communication libraries (PVM or MPI), and hence GpH is available on a range of platforms, including shared-memory, distributed memory and workstation clusters, e.g. Beowulf. GpH shares implementation technology with the Eden and GdH languages.
Current work includes making GpH architecture independent, i.e. deliver good parallel performance on a range of platforms. Improved parallel profiling, parallel semantics and abstract machines, and performance comparison with other languages.
Robert Pointon, of Heriot-Watt University, has been working on Glasgow Distributed Haskell (GdH): GdH combines the multiple processes of Concurrent Haskell with the multiple processing elements of Glasgow Parallel Haskell (GpH). In summary the language is a minimal super-set of GpH and Concurrent Haskell and so maintains full backwards compatibility.
To support distribution we have only introduced the notion of "location":
We have used GdH to write applications which include: a distributed file server, multiplayer games, and parallel skeletons.
In terms of ongoing research, GdH is actively being used by the group here for looking at:
Oh, and the implementation is almost ready for public release!
Report by: Björn Lisper
Project Status: dormant
The continuing advances in semiconductor and hardware technology are leading to a situation where transistors are free and communication costly. This will make parallel systems-on-a-chip standard. These systems must be specified and programmed: this requires parallel programming and specification languages. The prevailing, process-parallel programming paradigms are however hard to master for many applications. Thus, efficient system and software development for these applications, on this kind of systems, will require simpler models on a higher level.
One such model is the data parallel model, which provides operations directly on aggregate data structures. These operations are often highly parallel. The data parallel model is particularly apt for data-intensive, computation-oriented applications like image and signal processing, neural network computations, etc. The Data Field Model is an attempt to create a formal, data parallel model that is suitable as a basis for high-level data parallel programming and specification. Data fields generalize arrays: they are pairs (f,b) where f is a function and b is a "bound", an entity which can be interpreted as a predicate (or set). The model postulates some operations of bounds, with certain properties. Common collection-oriented primitives can be defined in terms of these operations, without referring to the actual form of the bounds. Data fields thus make a very generic form of data parallelism possible, where algorithms become less dependent on the actual data parallel data structure.
Data Field Haskell (DFH) is a dialect of the functional programming language Haskell that provides an instance of data fields. This language can be used for rapid prototyping of parallel algorithms, and for parallel high-level specification of systems. DFH provides data fields with "traditional" array bounds and with sparse bounds, infinite data fields with predicates as bounds, and data fields with cartesian product bounds. There is a rich set of operations on data fields and bounds. A forall construct, similar to lambda-abstraction, can be used to define data fields in a succinct and generic manner.
The current version of DFH extends Haskell 98. Its implementation is a modified version of nhc98 pre-release 19 (2000-06-05), originally from the functional programming group at York. Although much of DFH is defined in Haskell itself, a few crucial things aren't, so the implementation is not easily portable to other Haskell systems.
Currently, the project is dormant. One M. Sc. thesis project was recently carried out within the project: Data Field Haskell 98, where an earlier implementation of DFH was ported to Haskell 98.
This activity is a continuation of the Data Fields project at KTH, where also the first prototype implementation of Data Field Haskell was developed:
O'Haskell extends Haskell with support for monadic reactive objects and polymorphic subtyping. An implementation, O'Hugs, is available, which is an interactive programming system derived from Hugs 1.3b. O'Hugs also comes with reactive network programming APIs, and a fairly complete interface to the Tk graphical toolkit.
O'Hugs is maintained (although at a slow pace) by Johan Nordlander, Magnus Carlsson, and Björn von Sydow. New O'Haskell-related developments are currently directed towards the language Timber, which is a strict language with real-time capabilities that has inherited many of O'Haskell's features.
An implicitly parallel dialect of Haskell, with provisions for side effects and special constructs for looping and for detecting termination. Uses a superset of the Eager Haskell parser, so the caveats about missing Haskell 98 features apply here too.
The ideas behind pH are best embodied in Arvind and Nikhil's book, "Implicit Parallel Programming in pH" (Morgan-Kaufman, 2001). There's a compiler release available, but it doesn't match the book as Nikhil never had the chance to make the necessary changes.
People: Alejandro Caro, myself, Arvind, Nikhil, Jacob Schwartz, Mieszko Lis, Lennart Augustsson, etc. I'm the only one of the above currently in academia, and I'm working full-time on Eager Haskell..
Current work includes, the combination of open/closed-world style overloading and a general coherence result.
TIE, a CHR-based type inference engine, and the underlying CHR-solver have been implemented in Haskell (it's only a prototype yet, but we're working on extending TIE).
Report by: Johan Jeuring.
[Generic Haskell version 0.99 has just been released (ed)]
There is a mailing list for Generic Haskell: generic-haskell@cs.uu.nl. See the homepage for how to join.
In other news (as they say on tv;-), Mark Shields and Simon Peyton Jones have taken another go at the topic of "First-Class Modules for Haskell":
From their abstract: . "
Support for Haskell's Foreign Function Interface is separated into language extensions, support libraries built on top of these, and tools that make use of the libraries and extensions. The support libraries are covered in the draft Haskell 98 FFI Addendum, discussed in section ?? of this report.
The language extension that permits hiearchical module namespaces was motivated by the need to organise the growing body of Haskell libraries, both user-contributed and those supported across Haskell implementations. See the subsections on "The Hierarchy" and "The Libraries" in section ??.
Report by: Manuel Chakravarty
Project Status: new project
This is well beyond the scope of what we can achieve with our resources.
We do not intend to solve sophisticated research problems here. We want to get a workable solution quickly. Always remember: Worse is Better.
(Note how many of the functions need to be in the IO monad anyway, because they need to perform file I/O.)
The Haskell GUI will be restricted to GTK+ features that can be implemented on other major platforms with reasonable effort
The API will include a set of convenience libraries on top of the basic API (eg, by providing layout combinators):
The combination of the two would lead to the nice situation where we can use different high-level APIs on different widgets sets.
Back in February, Simon Peyton-Jones issued a rather unusual call to the Haskell mailing list, titled "A GUI toolkit looking for a friend". He was referring to a promising port of the well-known Clean Object I/O library to Haskell:
"Peter Achten, its author, spent a few weeks in Cambridge, porting the Clean library to Haskell. The results are very promising. The main ideas come over fine, translating unique-types to IO monad actions, and the type structure gets a bit simpler.
So what we need now is to complete the port. Peter didn't have time to bring over all the widgets, nor did he have time to clean up the Haskell/C interface. (Clean's FFI is not as well-developed as Haskell's, so the interface can be made much nicer.) The other significant piece of work would be to make it work on Unix, perhaps by re-mapping it to GTK or TkHaskell or something.
So the main burden of this message is:
Would anyone be interested in completing the port?
Fame (if not fortune) await you! The prototype that Peter developed in is the hslibs/ CVS repository, and the GHC team would be happy to work with you to support your work. (The more compiler-independent we can make the library, the better.) Peter Achten is willing to play consultant too. The Clean team are happy for the code to be open source -- indeed, all the hslibs/ code is BSD-licensed.
It would not be the work of a moment. There are subtle issues involved (especially involving concurrency), and the design is not complete, so it isn't just boring hacking. So it should fun."
Krasimir Angelov has been the first volunteer to accept that challenge, with all the small print attached. He has been pretty active in the last few weeks:
"At this time the project is near its completion. The Haskell/C interface is completed. The library supports windows, dialogs and various kinds of controls. However, there are other items that are meant to be completed (menus, timers and other). It can already be used for simple GUI applications. My idea is not only to port the library, but also to extend it with various items.
Project Status: Has been used by a handful users over the last two years and gains some momentum recently
The goal of this project is to provide a binding for the OpenGL rendering library which utilizes the special features of Haskell, like strong typing, type classes, modules, etc., but still has the "flavour" of the ubiquitous C binding. This enables the easy use of the vast amount of existing literature and rendering techniques for OpenGL while retaining the advantages of Haskell on the other hand *is* a goal.
The short-term objectives are ironing out the remaining small portability problems and enhance the packaging of the current distribution. HOpenGL has been reportedly tested on Intel-Linux, Windows 98, and Sparc-Solaris with OpenGL versions ranging from 1.0 to 1.2.1, but there are probably still some combinations which don't work smoothly yet.
A medium-term objective is more or less a rewrite of OpenGL. After some experimentation the best route is probably as follows: There is an official description of the OpenGL API (including all extensions)
in the form of a specialized IDL from which a complete low-level binding could be generated automatically. A layer above this should make this a bit more Haskell-like. Currently a prototype which generates all data types including (un)marshaling functions already exists, but a translator for the API calls themselves has not been written yet.
Currently the coding is almost exclusively done by Sven Panne, but people are invited to join. Anyway, proposals for the user API (the 2nd layer mentioned above) and comments on the current API are much more urgent.
HOpenGL needs the new FFI and complex instance heads, but the latter non-H98 requirement is not really crucial and should be the topic of some debate.
C->Haskell is an interface generator that simplifies the development of Haskell bindings to C libraries. The current stable release is version 0.9.9, which is available in source and binary form for a range of platforms. There is a concise tutorial and the Gtk+HS binding shows that the tool is ready for serious use.
The most recent improvement is support for single-inheritance class hierarchies as they occur in C APIs that use a limited form of object-oriented design (this is currently available from CVS only, version 0.10.x). For the near future, simplified marshalling for common signatures as well as an example-based tutorial are planned. Updates on recent developments are available from the project homepage.
There have been some new or renewed activities in connecting Haskell to other languages recently. Two of these have their files at.
The introduction to Zoltan Varga's Haskell-Corba interface says: "This."
No separate information appears to be available for Ashley Yakeley's Haskell to Java VM Bridge, but the source code is in CVS at sourceforge.
There have been and still are a large number of research projects on tracing lazy functional programs for the purpose of debugging and program comprehension. Most of these projects did not yield tools that can be used for Haskell programs in practice, but in the last few years the number of tracing tools for Haskell has increased.
Freja provides algorithmic debugging of Haskell programs but supports only a subset of Haskell and runs only on Sparc/Solaris. Hood is a portable library that permits to observe data structures at given program points. The February 2001 release of Hugs directly supports a variant of Hood, making observation of user defined data structures easier. GHood extends Hood by a graphical backend which can animate observations, giving insight into dynamic program properties (animations can be added to web pages). There are no concrete plans for further development of these systems in the near future.
The development of the algorithmic debugger Buddha (not currently available) is an ongoing research project.
The Haskell tracing system Hat is based on a multi-purpose trace file. The specially compiled Haskell program creates at runtime a trace file. Hat includes tools to view the trace in various ways: algorithmic debugging a la Freja; Hood-style observation of top-level functions; stack-trace on program abortion; backwards exploration of a computation, starting from (part of) a faulty output or an error message. Hat is developed within an active research project. Hat is currently integrated in nhc98 but in a few months a version that will work together with any Haskell compiler will be available. Hat shall enable tracing of any Haskell98 program, the few remaining language limitations will be lifted. It is already possible to invoke other viewing tools from some of the viewing tools but further integration and general improvement of the viewing tools is planned.
Happy is very much in maintenance mode. It is heavily used in GHC and is relatively bug-free, but maintenance releases are still made occasionally. The latest release is 1.11 (September 2001). Happy's web page is at
The format of this chapter is still in flux, and its potential usefulness became apparent too late to expect good coverage in the first edition of this report, but you might want to think about adding a bit about your own use of Haskell to the next edition, due in about six months.
Haskell is Galois Connection's "not so secret" weapon. We use it to help meet the various demands of our clients in a number of ways:
All clients want programs that do what they should be doing. Haskell gives us a significant head start towards achieving high assurance. Between leveraging the type system, writing programs that are concise enough to actually understand, and writing versions of client code that has the look-and-feel of a specification, we find Haskell a *practical* language to write our programs in.
Domain Specific Language systems are just special purpose compilers, and Haskell excels at writing compilers.
Galois recently had a project to help a client change how an API was used over a large code base. We wrote a translator, based on the SML/C-Kit, that did the translation using a type inference algorithm. OK, SML is not Haskell, but next time we'll use Haskell :-)
The basic idea behind Galois is solving difficult problems using functional languages. Furthermore, we believe that Haskell is the right language for handling the complex problems that arise in many parts of software engineering.
Many research groups have already been covered by their larger projects in other parts of this report, especially if they work almost exclusively on Haskell-related projects, but there are more groups out there who count some Haskell-related work among their interests.). FRP was originally developed by Conal Elliott as part of the Fran animation system. It has three basic ideas: continous-time signals (behaviors), discrete-time signals (messages), used arrows to build a new implementation of FRP that has a number of operational advantages. Although FRP has traditionally been implemented in Haskell, we have also been looking at direct compilation of FRP programs. We are particularly interested in compilation for resource-limited systems such as embedded controllers.
We have not yet released a version of FRP or our FRP-based languages such as Frob or FVision, but we expect to release software before the end of the is being further developed to include asynchronous processes (using Concurrent Haskell).
Over the next year we hope to provide libraries for the Haskell community to work in this area and to attract funding to expand the research.
Chris Reade:
Here at the University of Kent at Canterbury, about half a dozen people pursuing research interests in functional programming have formed a functional programming interest group. Our projects are not limited to Haskell, so not all of them are mentioned here, but there are still quite a few Haskell-related activities:
Keith Hanna is working on bringing together the intuitive graphical interface of spreadsheet-like systems with the expressiveness and type-security of Haskell. A prototype system, named Vital, and an overview paper are available. Stefan Kahrs is interested in the boundaries of type system expressiveness, and has been looking at what one can or cannot do with Haskell types & classes. Chris Ryder's current topic are software metrics for Haskell programs and their visualisation. Simon Thompson, apart from producing educational material to help others learn Haskell (such as "The Craft of Functional Programming"), is working mostly where logic, types, programming, and verification come together.
Tony Daniels still looks at the semantics of time in Fran every now and then. Leonid Timochuk has been working on a Haskell implementation of Aldor--, a functional subset of the dependently typed Aldor language, originally developed for the purpose of computer algebra. Claus Reinke (yours truly), after a stint in the visualisation of Haskell program observations (GHood), has been trying to bring together virtual worlds (in the form of the standard Virtual Reality Modeling Language VRML'97) and functional programming (Haskell, with some FRP ideas) in a project named FunWorlds. More recently, he has also been seen chasing Haskell Community reports. Axel Simon has just joined us on one of the positions we advertised in the Job Adverts part on haskell.org.
In latest developments, Simon and Claus have been investigating the potential for refactoring functional programs.. We have just received confirmation of funding and will be advertising for a postdoctoral researcher soon, but if you are interested, please get in touch with us now!
Haskell metrics:
GHood:
FunWorlds:
Vital:
Some initial info about Refactoring Functional Programs:
As it turns out, many Haskellers do not currently have the benefit of being in a large group of like-minded people. Following the large numbers of students being introduced to Haskell, this group of individuals around the world might well be the largest group of Haskell users and, in fact, many of those students who decide that knowing Haskell is a skill too useful to forget might find themself isolated after leaving their university. As Hal Daume suggests:
"It seems to me that many people are in this situation, which is rather unfortunate. If not only for the ability to walk down the hall and ask someone if they could look over my code. One thing that may perhaps be useful would be to identify serious Haskellers (say people with >10k LOC in Haskell under their belts) who happen to be the only people in their organizations who use Haskell and try to form little groups of maybe 5 people with similar research (or applications).
This would probably cut down on the "what's wrong with my code" posts to the mailing lists and would also give a more personal avenue for discussing issues (I know that personally, since I deal with tons of data in large files, memory management issues, strictness issues, etc. are of prime concern. Other people might have more problems related to, say, multiparameter type classes and whatnot, if their field is more in that direction -- hard to say).
Anyway, that's just off the top of my head...I don't know whether it would actually be useful...one of the niceties about having someone down the hall is you can concurrently look at the code and find the problem (or the necessary optimization, as is often my desire). Whether something like this could work online, I don't know."
Well for a start, here are brief statements by the first few Haskellers to respond to my very late call for "micro-reports", in the hope of finding other Haskellers working in related areas. I hope this section will expand in future reports, and that the Haskell community finds other good ways to support its members. The main Haskell mailing lists (haskell, haskell-cafe) are certainly a good place to start organising more local (or networked, smaller) Haskell interest groups. In some cases, a re-organisation of the current mailing lists might also help - I could imagine a list on optimising and profiling (tools, techniques, and problems). Also, the currently rather inactive group on debuggers could become more lively if it widened its scope to debugging (again, covering tools, techniques, and problems).
The idea here, as in the earlier sections, is to let others know what you are working on, so that Haskellers with related interests can find together for the purposes of cooperation or technical discussions.
Hal Daume () is currently a first year PhD student in Computer Science at the University of Southern California: "My research interests are in the area of computational linguistics, which is, naively, the study of getting computers to understand natural languages (like English). I'm currently using Haskell exclusively to do statistical natural language processing research applications (mostly in summarization and aggregation)."
John Heron (jheron@enteka.com) has been working sporadically on a couple of projects: "None of them are at a stage where there's a whole lot more than talk, but I've done some of thinking about them:
NetInfer takes a router topology expressed in an adjacency matrix and Cisco router configurations. Using this information it infers network reachability information for static routes and RIPv1&2.
dbkit is an interactive, in-memory implementation of Codds Relational Algebra and Tuple Calculus. Initially based on Andrew Rock's RelationalDB module, the emphasis will be on finding an formulation which reflects the theoretical definitions clearly.
Based on my schedule between now and the end of the year, I expect that I can have these two bits working by the end of the year. At that point, perhaps I can spark some interest in the community in helping me out. For now, just talking about them publicly, and public speech's implied burden of clarity is most of the help I need."
John also has some longer-term visions attached to these concrete projects. Check out his projects page at | http://www.haskell.org/communities/11-2001/html/report.html | crawl-001 | refinedweb | 6,394 | 50.97 |
Clojure Data Analysis Cookbook
Over 110 recipes to help you dive into the world of practical data analysis using Clojure
Parallelizing processing with pmap
The easiest way to parallelize data is to take a loop we already have and handle each item in it in a thread.
That is essentially what pmap does. If we replace a call to map with pmap, it takes each call to the function argument and executes it in a thread pool. pmap is not completely lazy, but it's not completely strict, either: it stays just ahead of the output consumed. So if the output is never used, it won't be fully realized.
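To see this semi-strictness for yourself, here's a quick throwaway experiment (our own sketch, not part of the recipe; slow-inc is a made-up function):

(defn slow-inc [n]
  (println "working on" n)  ; log so we can watch the thread pool
  (Thread/sleep 1000)       ; simulate an expensive task
  (inc n))

(def results (pmap slow-inc (range 100)))
(first results)

Forcing only the first element starts just the first batch of workers (pmap keeps roughly two more tasks in flight than you have processors); the rest of the hundred items stay unrealized until something consumes them.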
For this recipe, we'll calculate the Mandelbrot set. Each point in the output takes enough time that this is a good candidate to parallelize. We can just swap map for pmap and immediately see a speed-up.
How to do it...
The Mandelbrot set can be found by watching which points escape quickly, instead of settling on a value, as they are passed repeatedly through the formula that defines the set.
We need a function that takes a point and the maximum number of iterations to try, and returns the iteration on which that point escapes. Escaping just means that the squared magnitude of the value (x² + y² in the code below) gets above 4.
(defn get-escape-point
  [scaled-x scaled-y max-iterations]
  (loop [x 0, y 0, iteration 0]
    (let [x2 (* x x), y2 (* y y)]
      (if (and (< (+ x2 y2) 4)
               (< iteration max-iterations))
        (recur (+ (- x2 y2) scaled-x)
               (+ (* 2 x y) scaled-y)
               (inc iteration))
        iteration))))
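For reference, the loop is just the standard Mandelbrot iteration z(n+1) = z(n)² + c written out over real components: x² − y² and 2xy are the real and imaginary parts of z², and (scaled-x, scaled-y) plays the role of the constant c.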
The scaled points are the pixel points in the output, scaled to relative positions in the Mandelbrot set. Here are the functions that handle the scaling. Along with a particular x-y coordinate in the output, they're given the range of the set and the number of pixels in each direction.
(defn scale-to
  ([pixel maximum [lower upper]]
   (+ (* (/ pixel maximum)
         (Math/abs (- upper lower)))
      lower)))

(defn scale-point
  ([pixel-x pixel-y max-x max-y set-range]
   [(scale-to pixel-x max-x (:x set-range))
    (scale-to pixel-y max-y (:y set-range))]))
The function output-points returns a sequence of x, y values for each of the pixels in the final output.
(defn output-points
  ([max-x max-y]
   (let [range-y (range max-y)]
     (mapcat (fn [x] (map #(vector x %) range-y))
             (range max-x)))))
For each output pixel, we need to scale it to a location in the range of the Mandelbrot set and then get the escape point for that location.
(defn mandelbrot-pixel
  ([max-x max-y max-iterations set-range]
   (partial mandelbrot-pixel
            max-x max-y max-iterations set-range))
  ([max-x max-y max-iterations set-range [pixel-x pixel-y]]
   (let [[x y] (scale-point pixel-x pixel-y
                            max-x max-y set-range)]
     (get-escape-point x y max-iterations))))
At this point, we can simply map mandelbrot-pixel over the results of output-points. We'll also pass in the function to use (map or pmap).
(defn mandelbrot
  ([mapper max-iterations max-x max-y set-range]
   (doall
     (mapper (mandelbrot-pixel max-x max-y max-iterations
                               set-range)
             (output-points max-x max-y)))))
Finally, we have to define the range that the Mandelbrot set covers.
(def mandelbrot-range {:x [-2.5, 1.0], :y [-1.0, 1.0]})
How do these two compare? A lot depends on the parameters we pass them.
user=> (def m (time (mandelbrot map 500 1000 1000 mandelbrot-range)))
"Elapsed time: 28981.112 msecs"
#'user/m
user=> (def m (time (mandelbrot pmap 500 1000 1000 mandelbrot-range)))
"Elapsed time: 34205.122 msecs"
#'user/m
user=> (def m (time (mandelbrot map 1000 1000 1000 mandelbrot-range)))
"Elapsed time: 85308.706 msecs"
#'user/m
user=> (def m (time (mandelbrot pmap 1000 1000 1000 mandelbrot-range)))
"Elapsed time: 49067.584 msecs"
#'user/m
Refer to the following chart:
If we only iterate at most 500 times for each point, it's slightly faster to use map and work sequentially. However, if we iterate 1,000 times each, pmap is faster.
How it works...
This shows that parallelization is a balancing act. If each separate work item is small, the overhead of creating the threads, coordinating them, and passing data back and forth takes more time than doing the work itself. However, when each thread has enough to do to make it worth it, we can get nice speed-ups just by using pmap.
Behind the scenes, pmap takes each item and uses future to run it in a thread pool. It forces only a couple more items than you have processors, so it keeps your machine busy, without generating more work or data than you need.
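In fact, the heart of pmap is small enough to sketch. The version below is a simplified rendering of its strategy (adapted freely; see the clojure.core source for the real thing):

;; Wrap every element in a future, then realize results while
;; staying (number of CPUs + 2) futures ahead of consumption.
(defn pmap-sketch [f coll]
  (let [n (+ 2 (.availableProcessors (Runtime/getRuntime)))
        rets (map #(future (f %)) coll)
        step (fn step [[x & xs :as vs] fs]
               (lazy-seq
                 (if-let [s (seq fs)]
                   ;; realizing (seq fs) launches the future n ahead
                   (cons (deref x) (step xs (rest s)))
                   ;; near the end: drain whatever is left
                   (map deref vs))))]
    (step rets (drop n rets))))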
There's more...
For an in-depth, excellent discussion of the nuts and bolts of pmap, along with pointers about things to watch out for, see David Liebke's talk, From Concurrency to Parallelism ().
See also
The Partitioning Monte Carlo Simulations for better pmap performance recipe
Parallelizing processing with Incanter
One of Incanter's nice features is that it uses the Parallel Colt Java library () to actually handle its processing, so when you use many of the matrix, statistical, or other functions, they're automatically executed on multiple threads.
For this, we'll revisit the Virginia housing-unit census data and we'll fit it to a linear regression.
Getting ready
We'll need to add Incanter to our list of dependencies in our Leiningen project.clj file:
:dependencies [[org.clojure/clojure "1.5.0"]
               [incanter "1.3.0"]]
We'll also need to pull those libraries into our REPL or script:
(use '(incanter core datasets io optimize charts stats))
We can use the following filename:
(def data-file "data/all_160_in_51.P35.csv")
How to do it...
For this recipe, we'll extract the data to analyze and perform the linear regression. We'll then graph the data afterwards.
First, we'll read in the data and pull the population and housing unit columns into their own matrix.
(def data
  (to-matrix
    (sel (read-dataset data-file :header true)
         :cols [:POP100 :HU100])))
From this matrix, we can bind the population and the housing unit data to their own names.
(def population (sel data :cols 0))
(def housing-units (sel data :cols 1))
Now that we have those, we can use Incanter to fit the data.
(def lm (linear-model housing-units population))
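The lm value is a map of results. A couple of keys are worth inspecting at the REPL (a quick sketch; see Incanter's documentation for the full set of keys):

(:coefs lm)    ; intercept and slope of the fitted line
(:r-square lm) ; how much variance the model explains

We'll use the :fitted key below to draw the regression line.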
Incanter makes plotting so easy that it's hard not to look at the data.
(def plot (scatter-plot population housing-units :legend true))
(add-lines plot population (:fitted lm))
(view plot)
Here we can see that the graph of housing units against population makes a very straight line:
How it works…
Under the covers, Incanter takes the data matrix and partitions it into chunks. It then spreads those over the available CPUs to speed up processing. Of course, we don't have to worry about this. That's part of what makes Incanter so powerful.
Partitioning Monte Carlo simulations for better pmap performance
In the Parallelizing processing with pmap recipe, we found that while using pmap is easy enough, knowing when to use it is more complicated. Processing each task in the collection has to take enough time to make the costs of threading, coordinating processing, and communicating the data worth it. Otherwise, the program will spend more time concerned with how (parallelization) and not enough time with what (the task).
The way to get around this is to make sure that pmap has enough to do at each step that it parallelizes. The easiest way to do that is to partition the input collection into chunks and run pmap on groups of the input.
For this recipe, we'll use Monte Carlo methods to approximate pi. We'll compare a serial version, a naïve parallel version, and a version that uses both parallelization and partitioning.
Getting ready
We'll use Criterium to handle benchmarking, so we'll need to include it as a dependency in our Leiningen project.clj file, shown as follows:
:dependencies [[org.clojure/clojure "1.5.0"]
               [criterium "0.3.0"]]
We'll use these dependencies and the java.lang.Math class in our script or REPL.
(use 'criterium.core)
(import [java.lang Math])
How to do it…
To implement this, we'll define some core functions and then implement a Monte Carlo method for estimating pi that uses pmap.
We need to define the functions necessary for the simulation. We'll have one that generates a random two-dimensional point that will fall somewhere in the unit square.
(defn rand-point [] [(rand) (rand)])
Now, we need a function to return a point's distance from the origin.
(defn center-dist [[x y]]
  (Math/sqrt (+ (* x x) (* y y))))
Next we'll define a function that takes a number of points to process, and creates that many random points. It will return the number of points that fall inside a circle.
(defn count-in-circle [n]
  (->> (repeatedly n rand-point)
       (map center-dist)
       (filter #(<= % 1.0))
       count))
That simplifies our definition of the base (serial) version. This calls count-in-circle to get the proportion of random points in a unit square that fall inside a circle. It multiplies this by 4, which should approximate pi.
(defn mc-pi [n]
  (* 4.0 (/ (count-in-circle n) n)))
We'll use a different approach for the simple pmap version. The function that we'll parallelize will take a point and return 1 if it's in the circle, or 0 if not. Then we can add those up to find the number in the circle.
(defn in-circle-flag [p]
  (if (<= (center-dist p) 1.0) 1 0))

(defn mc-pi-pmap [n]
  (let [in-circle (->> (repeatedly n rand-point)
                       (pmap in-circle-flag)
                       (reduce + 0))]
    (* 4.0 (/ in-circle n))))
For the version that chunks the input, we'll do something different again. Instead of creating the sequence of random points and partitioning that, we'll have a sequence that tells how large each partition should be, and have pmap walk across that, calling count-in-circle. This means that creating the larger sequences is also parallelized.
(defn mc-pi-part
  ([n] (mc-pi-part 512 n))
  ([chunk-size n]
   (let [step (int (Math/floor (float (/ n chunk-size))))
         remainder (mod n chunk-size)
         parts (lazy-seq
                 (cons remainder
                       (repeat step chunk-size)))
         in-circle (reduce + 0 (pmap count-in-circle parts))]
     (* 4.0 (/ in-circle n)))))
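To make the partition arithmetic concrete: with n = 1,000,000 and chunk-size = 4096 (the value used in the benchmarks below), step is 244 and remainder is 576, so parts is (576 4096 4096 …) with 244 copies of 4096, and 576 + 244 × 4096 adds back up to exactly 1,000,000.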
Now, how do these work? We'll bind our parameters to names, and then we'll run one set of benchmarks before we look at a table of all of them. We'll discuss the results in the next section.
user=> (def chunk-size 4096)
#'user/chunk-size
user=> (def input-size 1000000)
#'user/input-size
user=> (quick-bench (mc-pi input-size))
WARNING: Final GC required 4.001679309213317 % of runtime
Evaluation count : 6 in 6 samples of 1 calls.
Execution time mean : 634.387833 ms
Execution time std-deviation : 33.222001 ms
Execution time lower quantile : 606.122000 ms ( 2.5%)
Execution time upper quantile : 677.273125 ms (97.5%)
nil
(The original recipe presents the full benchmark results as a table and a chart; the key numbers are discussed in the next section.)
How it works…
There are a couple of things we should talk about here. Primarily, we'll need to look at chunking the inputs for pmap, but we should also discuss Monte Carlo methods.
Estimating with Monte Carlo simulations
Monte Carlo simulations work by throwing random data at a problem that is fundamentally deterministic but for which a more straightforward solution is practically infeasible. Calculating pi is one example of this. If we randomly fill in points in a unit square, the ratio of points that fall within a circle of radius 1 centered on (0, 0) will be approximately π/4. The more random points we use, the better the approximation.
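To spell the geometry out (a quick sanity check, not text from the recipe): the random points land in the unit square, which has area 1, and the quarter disc x^2 + y^2 <= 1 inside it has area π/4. So the expected fraction of points within distance 1 of the origin is π/4, and 4 * (hits / n) estimates π. With n = 1,000,000 points we'd expect roughly 785,000 hits and an estimate near 3.14.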
I should note that this makes a good demonstration of Monte Carlo methods, but it's a terrible way to calculate pi. It tends to be both slower and less accurate than the other methods.
Although not good for this task, Monte Carlo methods have been used for designing heat shields, simulating pollution, ray tracing, financial option pricing, evaluating business or financial products, and many, many more things.
For a more in-depth discussion, Wikipedia has a good introduction to Monte Carlo methods at https://en.wikipedia.org/wiki/Monte_Carlo_method.
Chunking data for pmap
The table we saw earlier makes it clear that partitioning helped: the partitioned version took just 72 percent of the time that the serial version did, while the naïve parallel version took more than three times longer. Based on the standard deviations, the results were also more consistent.
The speedup is because each thread is able to spend longer on each task. There is a performance penalty to spreading the work over multiple threads. Context switching (that is, switching between threads) costs time, and coordinating between threads does as well. But we expect to make that time back, and more, by doing several things at once. However, if each task itself doesn't take long enough, the benefit won't outweigh the costs. Chunking the input—and effectively creating larger individual tasks for each thread—gets around this by giving each thread more to do, and thereby spending less time context switching and coordinating, relative to the overall time spent running.
Finding the optimal partition size with simulated annealing
In the last recipe, Partitioning Monte Carlo simulations for better pmap performance, we more or less guessed what would make a good partition size. We tried a few different values and saw what gave us the best result. However, it's still largely guesswork, since just making the partitions larger or smaller doesn't give consistently better or worse results.
This is the type of task that computers are good at: searching a complex space to find the function parameters that result in an optimal output value. For this recipe, we'll use a fairly simple optimization algorithm called simulated annealing. Like many optimization algorithms, this one is based on a natural process: the way that molecules settle into low-energy configurations as the temperature drops to freezing. This is what allows water to form efficient crystal lattices as it freezes.
In simulated annealing, we feed a state to a cost function. At each point, we evaluate a random neighboring state, and possibly move to it. As the energy in the system (the temperature) goes down, we are less likely to jump to a new state, especially if that state is worse than the current one, according to the cost function. Finally, after either reaching a target output or iterating through a set number of steps, we take the best match found. Like many optimization algorithms, this doesn't guarantee that the result will be the absolute best match, but it should be a good one.
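For comparison, a common choice for that acceptance probability is the Metropolis criterion, sketched here (illustrative only; the recipe below uses a simpler fixed-probability rule, and this assumes temp > 0):

(defn metropolis-accept? [cost new-cost temp]
  ;; Always accept improvements; accept worse states with
  ;; probability exp(-(new-cost - cost) / temp).
  (or (< new-cost cost)
      (< (rand) (Math/exp (/ (- cost new-cost) temp)))))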
For this recipe, we'll use the Monte Carlo pi approximation function that we did in the Partitioning Monte Carlo simulations for better pmap performance recipe, and we'll use simulated annealing to find a better partition size.
Getting ready
We'll need to use the same dependencies, uses, imports, and functions as we did in the Partitioning Monte Carlo simulations for better pmap performance recipe. In addition, we'll also need the mc-pi-part function from that recipe.
How to do it...
For this recipe, we'll first define a generic simulated annealing system, and then we'll define some functions to pass it as parameters.
Everything will be driven by the simulated annealing function that takes all the function parameters for the process as arguments. We'll discuss them in more detail in a minute.
(defn annealing
  [initial max-iter max-cost neighbor-fn cost-fn p-fn temp-fn]
  (let [get-cost (memoize cost-fn)
        cost (get-cost initial)]
    (loop [state initial
           cost cost
           k 1
           best-seq [{:state state, :cost cost}]]
      (println '>>> 'sa k \. state \$ cost)
      (if (and (< k max-iter)
               (or (nil? max-cost) (> cost max-cost)))
        (let [t (temp-fn (/ k max-iter))
              next-state (neighbor-fn state)
              next-cost (get-cost next-state)
              next-place {:state next-state, :cost next-cost}]
          (if (> (p-fn cost next-cost t) (rand))
            (recur next-state next-cost (inc k)
                   (conj best-seq next-place))
            (recur state cost (inc k) best-seq)))
        best-seq))))
For parameters, annealing takes an initial state, a limit to the number of iterations, a target output value, and a series of functions. The first function takes the current state and returns a new neighboring state.
To write this function, we have to decide how best to handle the state for this problem. Often, if the function to evaluate has multiple parameters, we'd use a vector and randomly slide one of its values around. However, for this problem, we only have one input value, the partition size.
So for this problem, we'll instead use an integer between 0 and 20. The actual partition size will be 2 raised to that power. To find a neighbor, we just randomly slide the state value up or down by at most five, within the range of 0 to 20.
(defn get-neighbor [state] (max 0 (min 20 (+ state (- (rand-int 11) 5)))))
The next function parameter for annealing is the cost function. This will take the state and return the value that we're trying to minimize. In this case, we benchmark mc-pi-part with the given partition size (2 raised to the power) and return the average time.
(defn get-pi-cost [n state]
  (let [chunk-size (long (Math/pow 2 state))]
    (first (:mean (quick-benchmark (mc-pi-part chunk-size n))))))
The next function takes the current state's cost, a potential new state's cost, and the current energy in the system (from 0 to 1). It returns the odds that the new state should be used. Currently, this will always move to an improved state, and to a worse state 25 percent of the time (both of these are pro-rated by the temperature).
(defn should-move [c0 c1 t] (* t (if (< c0 c1) 0.25 1.0)))
The final function parameter takes the current percent through the iteration count and returns the energy or temperature as a number from 0 to 1. This can use a number of easing functions, but for this we'll just use a simple linear one.
(defn get-temp [r] (- 1.0 (float r)))
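Other easing curves are easy to drop in. For instance, an exponential decay cools the system faster early on (an illustrative alternative, not used in the recipe):

(defn get-temp-exp [r]
  ;; temperature falls off exponentially with progress r in [0, 1]
  (Math/exp (* -5.0 (float r))))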
That's it. We can let this find a good partition size. We'll start with the value that we used in the Partitioning Monte Carlo simulations for better pmap performance recipe. We'll only allow ten iterations, since the search space is relatively small.
user=> (annealing 12 10 nil get-neighbor
  #_=>   (partial get-pi-cost 1000000)
  #_=>   should-move get-temp)
>>> sa 1 . 12 $ 0.5805938333333334
>>> sa 2 . 8 $ 0.38975950000000004
>>> sa 3 . 8 $ 0.38975950000000004
>>> sa 4 . 8 $ 0.38975950000000004
>>> sa 5 . 8 $ 0.38975950000000004
>>> sa 6 . 8 $ 0.38975950000000004
>>> sa 7 . 6 $ 0.357514
>>> sa 8 . 6 $ 0.357514
>>> sa 9 . 6 $ 0.357514
>>> sa 10 . 6 $ 0.357514
[{:state 12, :cost 0.5805938333333334}
 {:state 8, :cost 0.38975950000000004}
 {:state 6, :cost 0.357514}]
We can see that a partition size of 64 (2^6) gives the best time, and re-running the benchmarks verifies this.
How it works…
In practice, this algorithm won't help if we run it over the full input data. But if we can get a large enough sample, this can help us process the full dataset more efficiently by taking a lot of the guesswork out of picking the right partition size for the full evaluation.
What we did was kind of interesting. Let's take the annealing function apart to see how it works.
The process is handled by a loop inside the annealing function. Its parameters are a snapshot of the state of the annealing process.
(loop [state initial
       cost cost
       k 1
       best-seq [{:state state, :cost cost}]]
We only continue if we still have iterations left and we haven't yet beaten the target cost.
(if (and (< k max-iter) (or (nil? max-cost) (> cost max-cost)))
If we continue, we calculate the next energy and get a potential state and cost to evaluate.
(let [t (temp-fn (/ k max-iter))
      next-state (neighbor-fn state)
      next-cost (get-cost next-state)
      next-place {:state next-state, :cost next-cost}]
If the probability function (should-move, in this case) indicates so, we move to the next state and loop. Otherwise, we stay at the current state and loop.
(if (> (p-fn cost next-cost t) (rand))
  (recur next-state next-cost (inc k)
         (conj best-seq next-place))
  (recur state cost (inc k) best-seq))
If we're done, we return the sequence of best states and costs seen.
best-seq)))))
This provides a systematic way to explore the problem space: in this case to find a better partition size for this problem.
There's more…
Simulated annealing is one of a class of algorithms known as optimization algorithms. All of these take a function (the cost-fn function that we saw) and try to find the largest or smallest value for it. Other optimization algorithms include genetic algorithms, ant colony optimization, particle swarm optimization, and many others. This is a broad and interesting field, and being familiar with these algorithms can be helpful for anyone doing data analysis.
Parallelizing with reducers
Clojure 1.5 introduced the clojure.core.reducers library. This library provides a lot of interesting and exciting features, including composing multiple calls to map and other sequence-processing higher-order functions, and abstracting map and other functions over different types of collections while maintaining the collection type.
Conceptually, initial operations on individual data items, such as map and filter, operate on items of the original dataset. The outputs of those per-item operations are then combined using a reduce function. Finally, the outputs of the reduction step are progressively combined until the final result is produced. This could involve a reduce-type operation, such as addition, or an accumulation, such as the into function. (The original recipe illustrates this with a chart, omitted here.)
Another feature of reducers is that they can automatically partition and parallelize the processing of tree-based data structures. This includes Clojure's native vectors and hash maps.
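A tiny demonstration of that point (a REPL sketch; timings will vary): r/fold runs in parallel over a vector but silently falls back to a serial reduce over a plain lazy sequence.

(require '[clojure.core.reducers :as r])
(r/fold + (vec (range 1000000)))  ; foldable: parallel fork/join
;;=> 499999500000
(r/fold + (range 1000000))        ; lazy seq: reduces serially
;;=> 499999500000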
For this recipe, we'll continue the Monte Carlo simulation example that we started in the Partitioning Monte Carlo simulations for better pmap performance recipe. In this case, we'll write a version that uses reducers and see how it performs.
Getting ready
From the Partitioning Monte Carlo simulations for better pmap performance recipe, we'll use the same imports, as well as the rand-point function, the center-dist function, and the mc-pi function.
Along with these, we'll also need to require the reducers and Criterium libraries:
(require '[clojure.core.reducers :as r])
(use 'criterium.core)
Also, if you're using Java 1.6, you'll need the ForkJoin library, which you can get by adding this to your project.clj dependencies:
[org.codehaus.jsr166-mirror/jsr166y "1.7.0"]
How to do it…
This version of the Monte Carlo pi approximation algorithm will be structured similarly to how mc-pi was in the Partitioning Monte Carlo simulations for better pmap performance recipe. First, we'll define a count-in-circle-r function that uses the reducers library to compose the processing and spread it over the available cores.
(defn count-items [c _] (inc c))

(defn count-in-circle-r [n]
  (->> (repeatedly n rand-point)
       vec
       (r/map center-dist)
       (r/filter #(<= % 1.0))
       (r/fold + count-items)))

(defn mc-pi-r [n]
  (* 4.0 (/ (count-in-circle-r n) n)))
Now, we can use Criterium to compare the two functions.
user=> (quick-bench (mc-pi 1000000))
WARNING: Final GC required 3.023487696312759 % of runtime
Evaluation count : 6 in 6 samples of 1 calls.
Execution time mean : 1.999605 sec
Execution time std-deviation : 217.056295 ms
Execution time lower quantile : 1.761563 sec ( 2.5%)
Execution time upper quantile : 2.235991 sec (97.5%)
nil
user=> (quick-bench (mc-pi-r 1000000))
WARNING: Final GC required 6.398394257011045 % of runtime
Evaluation count : 6 in 6 samples of 1 calls.
Execution time mean : 947.908000 ms
Execution time std-deviation : 306.273266 ms
Execution time lower quantile : 776.947000 ms ( 2.5%)
Execution time upper quantile : 1.477590 sec (97.5%)
Found 1 outliers in 6 samples (16.6667 %)
	low-severe 1 (16.6667 %)
Variance from outliers : 81.6010 % Variance is severely inflated by outliers
nil
Not bad. The version with reducers is over 50 percent faster than the serial one. This is more impressive because we've made relatively minor changes to the original code, especially compared to the version of this algorithm that partitioned the input before passing it to pmap, which we also saw in the Partitioning Monte Carlo simulations for better pmap performance recipe.
How it works…
The reducers library does a couple of things in this recipe. Let's look at some lines from count-in-circle-r. Converting the input to a vector was important, because vectors can be parallelized, but generic sequences cannot be.
Next, these two lines are composed into one reducer function that doesn't create an extra sequence between the call to r/map and r/filter. This is a small, but important, optimization, especially if we'd stacked more functions into this stage of the process.
(r/map center-dist) (r/filter #(<= % 1.0))
The bigger optimization is in the line for r/fold. r/reduce always processes serially, but if the input is a tree-based data structure, r/fold will employ a fork-join pattern to parallelize it. This line takes the place of a call to count by incrementing a counter for every item in the sequence seen so far.
(r/fold + count-items))))
(Graphically, this process forms a tree: per-chunk reductions whose results are merged pairwise. The original recipe shows it as a chart, omitted here.)
The reducers library is still fairly new to Clojure, but it has a lot of promise to automatically parallelize structured operations with a control and simplicity that we haven't seen elsewhere.
There's more...
For more about reducers, see Rich Hickey's blog posts at http://clojure.com/blog/2012/05/08/reducers-a-library-and-model-for-collection-processing.html and its follow-up, Anatomy of a Reducer (http://clojure.com/blog/2012/05/15/anatomy-of-reducer.html). Also, his presentation on reducers for EuroClojure 2012 has a lot of good information.
See also
- The Generating online summary statistics with reducers recipe
Generating online summary statistics with reducers
We can use reducers in a lot of different situations, but sometimes we'll need to change how we process data to do so.
For this example, we'll show how to compute summary statistics with reducers. We'll use some algorithms and formulas first proposed by Tony F. Chan, Gene H. Golub, and Randall J. LeVeque in 1979 and later extended by Timothy B. Terriberry in 2007. These allow us to approximate mean, standard deviation, and skew for online data—that is, for streaming data that we may only see once—so we'll need to compute all the statistics on one pass without holding the full collection in memory.
The following formulae are a little complicated and difficult to read in lisp-notation. But there's a good overview of this process, with formulae, on the Wikipedia page for Algorithms for calculating variance (https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance). And to simplify this example somewhat, we'll only calculate the mean and variance.
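For reference, here is the single-pass update that the code below implements, in conventional notation (my transcription; it matches the Chan-Golub-LeVeque formulation):

    n' = n + 1
    delta = x - mean
    mean' = mean + delta / n'
    M2' = M2 + delta^2 * (n / n')

and for combining two partial accumulations A and B:

    n = nA + nB
    delta = meanB - meanA
    mean = meanA + delta * nB / n
    M2 = M2A + M2B + delta^2 * nA * nB / n

with the sample variance at the end being M2 / (n - 1).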
Getting ready
For this, we'll need to have easy access to the reducers library and the Java Math class.
(require '[clojure.core.reducers :as r])
(import '[java.lang Math])
How to do it…
For this recipe, first we'll define the accumulator data structures and then the accumulator functions. Finally, we'll put it all together.
We need to define a data structure to store all the data that we want to accumulate and keep track of.
(def zero-counts {:n (long 0), :s 0.0, :mean 0.0, :m2 0.0})
Now, we'll need some way to add a datum to the counts and accumulation. The function accum-counts will take care of this.
(defn accum-counts
  ([] zero-counts)
  ([{:keys [n mean m2 s] :as accum} x]
   (let [new-n (long (inc n))
         delta (- x mean)
         delta-n (/ delta new-n)
         term-1 (* delta delta-n n)
         new-mean (+ mean delta-n)]
     {:n new-n
      :mean new-mean
      :s (+ s x)
      :m2 (+ m2 term-1)})))
Next, we'll need a way to combine two accumulators. This has the complete, unsimplified versions of the formulae from accum-counts. Because some of the numbers can get very large and overflow the range of the primitive Java types, we'll use *'. This is a variant of the multiplication operator that automatically promotes values to arbitrary-precision integers (Clojure's BigInt) instead of overflowing.
(defn op-fields
  "A utility function that calls a function on the values of a
  field from two maps."
  [op field item1 item2]
  (op (field item1) (field item2)))

(defn combine-counts
  ([] zero-counts)
  ([xa xb]
   (let [n (long (op-fields + :n xa xb))
         delta (op-fields - :mean xb xa)
         nxa*xb (*' (:n xa) (:n xb))]
     {:n n
      :mean (+ (:mean xa) (* delta (/ (:n xb) n)))
      :s (op-fields + :s xa xb)
      :m2 (+ (:m2 xa) (:m2 xb) (* delta delta (/ nxa*xb n)))})))
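To see the promotion in action (a REPL sketch on a 64-bit JVM):

user=> (* 9223372036854775807 2)   ; Long/MAX_VALUE
ArithmeticException integer overflow
user=> (*' 9223372036854775807 2)
18446744073709551614N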
Now we need a way to take the accumulated counts and values and turn them into the final statistics.
(defn stats-from-sums [{:keys [n mean m2 s] :as sums}]
  {:mean (double (/ s n))
   :variance (/ m2 (dec n))})
Finally, we combine all these functions to produce results.
(defn summary-statistics [coll]
  (stats-from-sums (r/fold combine-counts accum-counts coll)))
For a pointless example, we can use this to find summary statistics on 1,000,000 random numbers:
user=> (summary-statistics (repeatedly 1000000 rand))
{:mean 0.5004908831693459, :variance 0.08346136740444697}
Harnessing your GPU with OpenCL and Calx
For calculations involving matrices and floating-point math, in today's computers our best option is executing them on the graphics processing unit, or GPU. Because these have been so highly tuned for 3D shading and rendering, they can handle these operations very quickly, sometimes an order of magnitude more quickly than general CPUs can.
But programming GPUs is a little different than general programming. For the most part, we're stuck coding in a subset of C with very specific parameters for the parts of the process that are handled by the GPU. There are some projects that convert Java bytecode to GPU code. Unfortunately, at this time, none of them support using a dynamic JVM language, such as Clojure.
For this recipe, we'll use the Calx library (https://github.com/ztellman/calx). This project has a warning about not being under active development, but it's already usable. It is a wrapper around an OpenCL (https://www.khronos.org/opencl/) library, which supports a wide range of video card vendors.
In general, you'll receive the most payoff from the GPU when doing floating-point math, especially vector and matrix operations. Because of this, we'll once again calculate the Mandelbrot set. It makes a good example: it benefits from running on the GPU, and we have an existing implementation to compare against. This will also let us see just how much of a speed increase we're getting.
Getting ready
We need to include a dependency on Calx in our project.clj file.
:dependencies [[org.clojure/clojure "1.5.0"] [calx "0.2.1"]]
And we'll need to import Calx and java.lang.Math into our REPL or script.
(use 'calx)
(import [java.lang Math])
We'll also use the output-points function from the Parallelizing processing with pmap recipe.
How to do it...
Even using Calx, most of this has to be in C. This is encoded as a string that is processed and compiled by Calx.
(def src "
// scale from -2.5 to 1.
float scale_x(float x) {
  return (x / 1000.0) * 3.5 - 2.5;
}

// scale from -1 to 1.
float scale_y(float y) {
  return (y / 1000.0) * 2.0 - 1.0;
}

__kernel void escape(__global float *out) {
  int i = get_global_id(0);
  int j = get_global_id(1);
  int index = j * get_global_size(0) + i;

  float point_x = scale_x(i);
  float point_y = scale_y(j);

  int max_iterations = 1000;
  int iteration = 0;
  float x = 0.0;
  float y = 0.0;

  while (x*x + y*y <= 4 && iteration < max_iterations) {
    float tmp_x = (x*x - y*y) + point_x;
    y = (2 * x * y) + point_y;
    x = tmp_x;
    iteration++;
  }

  out[index] = iteration;
}")
We'll also need a function to handle compiling the source code and passing data around.
(defn -main []
  (let [max-x 1000, max-y 1000]
    (with-cl
      (with-program (compile-program src)
        (time
         (let [out (wrap (flatten (output-points max-x max-y))
                         :float32-le)]
           (enqueue-kernel :escape (* max-x max-y) out)
           (let [out-seq (vec @(enqueue-read out))]
             (spit "mandelbrot-out.txt" (prn-str out-seq))
             (println "Calculated on " (platform) "/" (best-device))
             (println "Output written to mandelbrot-out.txt"))))))))
If we run this, we'll see how fast it is.
user=> (-main)
Calculated on  #<CLPlatform Apple {vendor: Apple, version: OpenCL 1.0 (Dec 26 2010 12:52:21), profile: FULL_PROFILE, extensions: []}> / #<CLDevice ATI Radeon HD 6750M>
Output written to mandelbrot-out.txt
"Elapsed time: 9659.691 msecs"
nil
We can compare that 9.7 seconds to the parallel version of the Mandelbrot set program that we wrote in the Parallelizing processing with pmap recipe.
user=> (def mset (time (mandelbrot pmap max-iterations max-x max-y
  #_=>   mandelbrot-range)))
"Elapsed time: 19126.335 msecs"
#'user/mset
That's about twice as fast. This was a very invasive change, but the results are quite good.
How it works...
There are two parts to this. First, the C code and GPU processing in general, and then the Calx interface to the GPU.
Writing the GPU code in C
A full explanation of GPU programming (or, for that matter, even just an introduction) is beyond the scope of this recipe, so we'll just try to understand what's going on in this specific example. See the There's more... section of this recipe for where to go to learn more.
First, the code to run on the GPU needs to be marked __kernel. Notice that we're just passing one parameter into this function, the output array. The GPU function, escape, will be called many times, based upon the shape of the data.
Each time the GPU calls escape, we have to figure out which index into the data it's executing on. We do that by calling get_global_id(dimension). Instead of scaling the pixel point to the Mandelbrot range in Clojure, I've moved those functions into C (scale_x and scale_y). Otherwise, the code is more or less the same as we were doing before to calculate the Mandelbrot set, just in C.
This pattern—let the GPU handle the looping and only code the inner part of the loop—is generally how we'll structure everything that runs on the GPU.
Wrapping it in Calx
While this all may seem very low-level, Calx is still taking a lot of the pain out of GPU processing. Let's look at what else we have to do. After compiling everything in the call to compile-program, the -main function breaks the process down into three steps. First, we need to move the data over to the GPU. This is done by converting the list of pixel coordinates to a float array on the GPU.
(let [out (wrap (flatten (output-points max-x max-y)) :float32-le)]
Next, we queue the escape function written in C for processing on the GPU. We tell it how many data items we'll need and the parameters to pass to it (in this case, only the float array we just allocated).
(enqueue-kernel :escape (* max-x max-y) out)
Finally, we queue a read operation on the out array, so that we can access its data. This returns a reference, which will block when we dereference it until the data is available.
(let [out-seq (vec @(enqueue-read out))]
GPU processing can be a bit different than the general-purpose programming we're probably used to, but as we saw, it also allows us to get some spectacular performance improvements.
There's more...
To go further with the GPU, you'll want to find some more material. Personally, I found AMD's tutorial to be helpful (amd-accelerated-parallel-processing-app-sdk/introductory-tutorial-to-opencl/). There are a number of other tutorials out on the Internet, and there seemed to be a lot of variance in them. Google can help you there.
And as usual, your mileage may vary, so start on a few tutorials until you find one that makes sense to you.
Summary
In this article we saw several recipes showing how parallel processing can improve performance, and how it shapes the way we structure our programs. Parallelization pays off most when we need to do the same task many, many times, all at once.
About the Author :
Eric Rochester
Eric Rochester enjoys reading, writing, and spending time with his wife and kids. When he’s not doing those things, he programs in a variety of languages and platforms. Currently, he’s been exploring functional programming languages, including Clojure and Haskell. He's also the author of the Clojure Data Analysis Cookbook. He works at the Scholars’ Lab in the library at the University of Virginia, helping humanities professors and graduate students realize their digitally informed research agendas.
The ConViT is simply a ViT, where the first 10 blocks replace SA layers by a GPSA layer with convolutional initialization.
NOTE: This section is a bit math heavy. Please feel free to reach out to me or comment below at the end of this report if you have any questions.
👉: Please don't be confused by this representation of attention. It is just another way of writing Attention(Q, K, V) = \mathrm{Softmax}\!\left(\frac{QK^T}{\sqrt{D_h}}\right)V
❓: Can you think of why the first part represents content interaction and the second part represents positional interaction? Also, how is this different from Vanilla self-attention layer?
👉: I recommend the reader to pause, take a small break and make sure that they can see the relation between figure-3 and the mathematical representation of GPSA layer.
import timm
import torch

m = timm.create_model('convit_tiny')
x = torch.randn(1, 3, 224, 224)
m(x).shape
>> (1, 1000)
I am reading the spring 3.0.x reference documentation to understand Spring Autowired annotation:
3.9.2 @Autowired and @Inject
I am not able to understand the below examples. Do we need to do something in the XML for it to work?
EXAMPLE 1
public class SimpleMovieLister {
private MovieFinder movieFinder;
@Autowired
public void setMovieFinder(MovieFinder movieFinder) {
this.movieFinder = movieFinder;
}
// ...
}
EXAMPLE 2

public class MovieRecommender {

    private final CustomerPreferenceDao customerPreferenceDao;

    @Autowired
    public MovieRecommender(CustomerPreferenceDao customerPreferenceDao) {
        this.customerPreferenceDao = customerPreferenceDao;
    }

    // ...
}
How can a field be autowired when two classes implement the same interface, and which class will be used?
Example:
class Red implements Color
class Blue implements Color
class myMainClass{
@Autowired
private Color color;
draw(){
color.design();
}
}
Which design method will be called? How do I make sure the design method of Red class will be called and not Blue?
TL;DR
The @Autowired annotation spares you the need to do the wiring yourself in the XML file (or any other way); it finds what needs to be injected where and does it for you.
Full explanation
The @Autowired annotation allows you to skip configurations elsewhere of what to inject and just does it for you. Assuming your package is com.mycompany.movies you have to put this tag in your XML (application context file):
<context:component-scan base-package="com.mycompany.movies" />
This tag will do an auto-scanning. Assuming each class that has to become a bean is annotated with a correct annotation like @Component (for a simple bean) or @Controller (for a servlet control) or @Repository (for DAO classes), and these classes are somewhere under the package com.mycompany.movies, Spring will find all of these and create a bean for each one. This is done in 2 scans of the classes - the first time it just searches for classes that need to become a bean and maps the injections it needs to be doing, and on the second scan it injects the beans. Of course, you can define your beans in the more traditional XML file or with a @Configuration class (or any combination of the three). The catch is that you now have two beans implementing the same Color interface, so Spring can't tell on its own which one to inject. To resolve the ambiguity, you can use the @Qualifier annotation and tell it which of the two beans to inject in the following manner:
@Qualifier("redBean")
class Red implements Color {
// Class code here
}
@Qualifier("blueBean")
class Blue implements Color {
// Class code here
}
Or if you prefer to declare the beans in your XML, it would look something like this:
<bean id="redBean" class="com.mycompany.movies.Red"/>
<bean id="blueBean" class="com.mycompany.movies.Blue"/>
In the @Autowired declaration, you need to also add the @Qualifier to tell which of the two color beans to inject:
@Autowired
@Qualifier("redBean")
public void setColor(Color color) {
this.color = color;
}
If you don't want to use two annotations (the @Autowired and @Qualifier) you can use @Resource to combine these two:
@Resource(name="redBean")
public void setColor(Color color) {
this.color = color;
}
The @Resource (you can read some extra data about it in the first comment on this answer) spares you the use of two annotations and instead you only use one.
I'll just add two more comments:
Hi! I am new to python and learning about functions and how to return a value... wait, wrong thread — Hi, I'm really new to python (started this afternoon), but I already ran into a problem.
I try to download a .tar.gz file from a website. The downloading via urllib2 works perfectly.
As tarfile.open can't be used with urllib2 file objects directly (at least it didn't work the first time I tried), I'm saving the content of the .tar.gz file into a named temporary file via copyfileobj. But when I try to open it with tarfile.open it does not work: after reading some files from the archive, it raises an exception which says that a CRC check failed.
When I save the same file into a normal file (opened via open) it works flawlessly, but I would appreciate it, if I can use a temporary file.
I hope there is somebody who can help me.
def readDB(self, db):
    tar = tarfile.open(db, 'r')
    for tarinfo in tar:
        print tarinfo.name, "is", tarinfo.size, "bytes in size and is",
        if tarinfo.isreg():
            print "a regular file."
        elif tarinfo.isdir():
            print "a directory."
        else:
            print "something else."
    tar.close()
works:
file = urllib2.urlopen(url)
f = open("out.tar.gz", "w")
copyfileobj(file, f)
file.close()
f.close()
readDB("out.tar.gz")
does not work
file = urllib2.urlopen(url)
tmp = NamedTemporaryFile("w")
copyfileobj(file, tmp)
file.close()
readDB(tmp.name)
tmp.close()
Offline
i think you get it wrong when you are running a loop on "tar" ,
try this:
for i in tar.getnames():
or if you want tarinfoobjects:
for i in tar.getmembers():
on tarinfo objects you can use the "name" attribute,
good luck and look out for cactus, he's an evil Ruby zealot
arch + gentoo + initng + python = enlisy
Offline
lol, thanks, I'll try to avoid him in my python threads
the problem was not connected to the loop, but to the temp file. I removed the section with NamedTemporaryFile and used mkstemp instead; I think the file got deleted before I was finished with reading.
But python rocks.
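A note for later readers: the likelier cause is buffering. copyfileobj's output can still be sitting in the NamedTemporaryFile's userspace buffer when the archive is reopened by name, so tarfile sees a truncated file and the CRC check fails. Flushing first (and using binary mode) also fixes it; a minimal sketch along the lines of the code above (url and readDB as before):

import shutil
import tempfile
import urllib2

f = urllib2.urlopen(url)
tmp = tempfile.NamedTemporaryFile(mode="wb")  # binary mode for .tar.gz
shutil.copyfileobj(f, tmp)
f.close()
tmp.flush()        # push buffered bytes to disk before reading by name
readDB(tmp.name)
tmp.close()        # closing deletes the temporary file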
Offline | https://bbs.archlinux.org/viewtopic.php?id=10967 | CC-MAIN-2016-30 | refinedweb | 337 | 76.72 |
Subject: [Boost-announce] [Review Results] Range.Ex library accepted into boost
From: Thorsten Ottosen (nesotto_at_[hidden])
Date: 2009-04-27 05:12:49
Dear all,
It is my pleasure that Neil Groves' RangeEx library has been accepted
into boost. Congratulations Neil! There are quite a number of minor
issues that need to be resolved before the library is release ready,
see below for a summary.
Review statistics
-----------------
Full Reviews: 8.
Discussion: extensive.
I had the clear impression that everybody that participated in the
discussion was in favor of this library, albeit they did not have time
to submit a full review.
I did not hear a single statement saying that this library should be
rejected.
Thanks to everybody that participated in the review and its discussions.
Issue Summary
-------------
Below is given a list of topics that must be adressed before the
library can be included into boost. In general, we should try
to discuss them one at a time in seperate threads. Many people
suggested various extensions, new algorithms (e.g. from adope), etc.
**In general
we should not require Neil to add all these if he does not have the time
currently: the basic infrastructure, the concepts, the naming
and organization of the library is much more important. Then we can
add all these things later.**
1: documentation
===============
The documentation should clearly reflect if an algorithm delegates
to a standard algorithm. If the algorithm is new, then it should
state complexity and exception-safety guarantees. More examples would be welcome.
Sometimes the rationale and explanations could be improved, e.g. for
operator|().
2: return type specification for find() etc
===========================================
There were no major objections to the mechanism, but some found
the syntax ugly. I believe the suggested syntax (e.g.)
boost::find[_f,_e]( range, x )
boost::find[_f+1,_e]( range, x )
was considered good by most.
3: namespace organization
=========================
Currently the library uses the namspaces
boost
boost::adaptors
Some raised concern about putting the algorithms straight in
the boost namespace before we have a general agreement on where
to put algorithms (e.g. in boost::algorithm or boost::ranges or whatever).
I think it will make sense to put each algorithm in its own header,
since we might have to include additional standard library headers
for calling algorithms that are implemented as member functions
(but see 7 below).
Feedback is most welcome.
4: naming convention for adaptors
=================================
The following have been proposed throughout the review
(here examplified with "transform"):
| transformed( fun )
| transform( fun )
| transform_view( fun )
| view::transform( fun )
There was no consensus during the review. Other libraries
seem to use the _view suffix, which is a strong argument
for that candidate
(in that case, I suggest to drop the adaptors namespace).
5: naming convention for the adaptor generators
===============================================
Many people like to be able to say make_transform_range(r,fun)
instead of r | transformed( fun ). There was no consensus
on how to name these functions. Here are some candidates:
make_transform_range( r, fun )
transformed( r, fun )
transform( r, fun ) (*)
transform_view( r, fun )
(*) was disliked by some because it has exactly the same spelling as
the algorithm transform. Many felt that the confusion is
too big if view generators are not named differently from the actual
algorithms.
6: reintroducing _if variants of algorithms
===========================================
An important problem with the suggested approach
was that the iterators returned by an algorithm
after applying | filtered( pred ) are now filtered
iterators and so one needs to manually unwrap the return
iterator. This provided enough justification for reintroducing
these algorithms. (One could imagine that this unwrapping
was done automatically by a conversion operator such that
iterator i = boost::find( r | boost::filtered( pred ), x );
"just worked". However, the syntax is still slightly worse
than just using the original algorithm).
Generic programming is concerned with the use of orthogonal
concepts which, put together, yield all possible combinations
of algorithms. If we reintroduce all _if algorithms, we have to ask
ourselves "when does it end?". Should all new algorithms then
simply add _if variants? This seems very much against the spirit
of generic programming.
The problem with the iterators being changed (as in the find example)
seems to suggest the following guideline:
"If the algorithm returns an iterator, it makes sense to provide
an _if overload. Otherwise it does not."
Under this guideline, find_if() should be there, but
count_if() should not be provided.
7: should algorithms call member function when possible?
========================================================
I heard one strong voice against this. The reason was that
the algorithm implemented as member function often has quite
different guarantees w.r.t. complexity, reference and iterator
stabililty. I think this is a very strong argument. Also, we
can add this later if good arguments appear, but it is much
harder to take away. We also have to remember Scott Meyers
item "Beware of container independent code", which also suggests
that this is a bad idea.
At the very minimum, the original algorithm and the member function
should have similar semantics. This seems to suggest that
we can call set::lower_bound() from boost::lower_bound(). In that
case, the best way would probably be to add these as overloads
in the boost namespace:
template< class T, class A >
typename std::set<T,A>::iterator
lower_bound( std::set<T,A>& s, const T& x );
template< class T, class A >
typename std::set<T,A>::const_iterator
lower_bound( const std::set<T,A>& s, const T& x );
8: making return values of algorithms consistent/intuitive
==========================================================
For example, sort(r) returns the sorted range, but
sort_heap(r) does not. Similar issues for partial_sort().
Please go through all algorithms to see if this done correctly.
9: output range concept?
=========================
It appears that the only use for an output iterator in the library
was for boost::copy(rng,iter), and the only use for that function
was for "sinks" like std::ostream_iterator<T>(...).
This seems to be the only safety hole left in the library.
Here's an idea to how we might remove that: create a new "copy sink"
concept:
struct array_copy_sink
{
...
template< class Iterator >
void copy( Iterator begin, Iterator end )
{ ... }
};
Then we might imagine
boost::copy( rng, boost::ostream_sink(std::cout) );
boost::copy( rng, boost::array_sink(an_array) );
boost::copy( rng, boost::ptr_sink(begin,end) );
boost::copy( rng, boost::front_insert_sink(a_deque) );
...
For optimal performance, we also need a "nonoverlapping copy sink"
concept, and an algorithm that expects that sink
(or that boost::copy() determines the type of the sink automatically).
10: automatic projection
=======================
Adope has algorithms overloaded such that projection is very easy:
struct my_type {
int member;
double another_member;
};
//...
my_type a[] = { { 1, 2.5 }, ... };
// sort the array by the first member:
sort(a, less(), &my_type::member);
my_type* i = lower_bound(a, 5, less(), &my_type::member);
We should be able to express this well, albeit not as concise, with
my_type* i =
boost::lower_bound( a | project( &my_type::member ), 5, less() );
where project( ... ) could simply return a transform iterator
constructed with boost::bind().
There is another problem, however, because now the return value is a
transform_iterator of some form. Appending .base() would solve it,
albeit it is somewhat ugly.
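To make that concrete, here is a rough sketch (my own illustration, not
code from the review) of how such a projection can already be assembled
from boost::bind and make_transform_iterator:

    #include <algorithm>
    #include <boost/bind.hpp>
    #include <boost/iterator/transform_iterator.hpp>

    // given: my_type a[N], sorted by 'member'
    my_type* i = std::lower_bound(
        boost::make_transform_iterator( a,     boost::bind( &my_type::member, _1 ) ),
        boost::make_transform_iterator( a + N, boost::bind( &my_type::member, _1 ) ),
        5 ).base();  // .base() unwraps back to my_type*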
Again this might raise the question of an implicit conversion.
I just want to say this is not a critical issue as we can always
add these overloads.
--- End of Issues ----
Again, I would like to thank everybody that participated in the review.
best | http://lists.boost.org/boost-announce/2009/04/0231.php | CC-MAIN-2015-14 | refinedweb | 1,210 | 55.03 |
(…
I tried your Template, but it throws an InvalidCastException at the following line:
»var configFile = project.ProjectItems.Cast<ProjectItem>().FirstOrDefault(i => string.Compare(“web.config”, i.Name, true) == 0 || string.Compare(“app.config”, i.Name, true) == 0);«
Exception detail: Running transformation: System.InvalidCastException: The specified cast is not valid.
at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo)
at System.Runtime.InteropServices.CustomMarshalers.EnumeratorViewOfEnumVariant.GetEnumVariant()
at System.Runtime.InteropServices.CustomMarshalers.EnumeratorViewOfEnumVariant.MoveNext()
at System.Linq.Enumerable.d__b1`1.MoveNext()
at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable`1 source, Func`2 predicate).GetConfigFilePath() in c:\Dropbox\Entwicklung\Itm\Biz\BusinessObjects\ItmDbContext.Views.tt:line 266..TransformText() in c:\Dropbox\Entwicklung\Itm\Biz\BusinessObjects\ItmDbContext.Views.tt:line 38. c:\Dropbox\Entwicklung\Itm\Biz\BusinessObjects\ItmDbContext.Views.tt 266 1 BusinessObjects
I use VS2012 and the project, where the DbContext-Devived Class is located is a Class-Library Project.
Can you tell me where I'm going wrong?
Thanks in advance
Andreas
I am not sure what is the root cause of the problem here (but I think I know a workaround that will prevent the exception). ProjectItems should contain only items that are of type ProjectItem. That's why I used Cast<ProjectItem>(). Apparently in your case ProjectItems contains something that is not of ProjectItem type (it would be interesting to learn what it is) and the query fails. You can rewrite this query as follows:
ProjectItem configFile = null;
foreach (var i in project.ProjectItems)
{
var projectItem = i as ProjectItem;
if (projectItem != null)
{
if (projectItem.Name == "web.config" || projectItem.Name == "app.config")
{
configFile = projectItem;
break;
}
}
}
This should fix the exception. I have not tried using the template with the DbContext derived class in a separate assembly. Let me know if the above works for you. If not, you can provide me with a repro and I will investigate.
With kind regards,
Pawel
Thanks a lot. You did it!
There is only one thing. When I try to use a SQL Server user instead of a Windows user in the connectionString of my DbContext, there is still an error which says that the keyword “userId” is not supported.
In the connectionString I use the keyword “user id” (including the space), which should be right, but the error is raised as long as I use SQL Server user authentication. I hope that my description of the problem is correct.
I hope, that my description of the problem is correct but my english is very bad (I am from Austria). Thanks in advance.
Greets
Andreas
I don’t know if this error is related to the template. Can you show the exact error message, your config and the stack trace? Also, can you make sure that you don’t have userId anywhere in your code (findstr /sim userid *.* should do the trick)?
No worries! Your English is very good!
Is it hard to get this template working with ObjectContext (i.e. database-first) instead of DbContext?
I don’t think it would be too hard. Without too much thinking I believe this mostly the matter of having/not having edmx file rather than ObjectContext vs. DbContext.
I have the same problem
What have you tried and what exactly didn’t work?
[…] Installation steps have not changed comparing to the previous version and you can found them in this post. You shall not have to uninstall the previous version – the new vsix should replace the old […]
I’m getting error :
“Compiling transformation: The type or namespace name ‘ContainerMappingViewGroup’ could not be found (are you missing a using directive or an assembly reference?)”
at line:
private void SaveViews(ContainerMappingViewGroup viewGroup)
Entity Framework 6.0.0 RC1… And Nightly Build (4/10/2012 dated)
Hi Erhan,
There were a few bugs in EF6 RC1 that made it impossible to use pre-generated views. Therefore I decided not to post what I had on the VS Gallery. However the bugs were fixed post EF6 RC1. I did update the template but it’s not on the VS gallery yet (I will post it to the VS Gallery when/before EF6 ships). If you would like to try the latest version (it should work for nightly builds and for the upcoming EF6) you can download it from my github –. Note – I am planning to change the way it interacts with the user’s context – at the moment it requires a special directive and it does not work on VS2010 and VS2012 RTM (you would have to have at least VS 2011 Update 11 to use it).
Thanks for trying and reporting the problem though!
moozzyk, thank you very much.
[…] for EF6 RTM and nightly builds. If you need steps to install and use the template take a look at this post. A couple additional notes: – an updated version of EF Power Tools which allows generating views […] | https://blog.3d-logic.com/2012/10/17/entity-framework-6-and-pre-generated-views/ | CC-MAIN-2018-17 | refinedweb | 792 | 58.69 |
Hello! I am new to python and learning about functions and how to return a value. I have an assignment that says to “create a function who returns the third character in a string, if the string is shorter than three characters it should return false”.
I am able to make the function return the third character in a string, but I can't get it to return False when the string is shorter than three characters.
Anyone who can help me figure out why it won't return False? This is the code I've written:
def third_character(string):
    x = string[2]
    print(x)
    if len(string) > 2:
        return x
    else:
        return False

third_character()  # In there you enter a string with "" around
Thankful for help | https://discuss.codecademy.com/t/return-false-not-working-if-else/463286 | CC-MAIN-2020-16 | refinedweb | 124 | 72.9 |
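A hint on why the False branch never runs: string[2] executes before the length check, so a short string raises an IndexError first. Checking the length before indexing avoids that; a minimal sketch:

def third_character(string):
    if len(string) > 2:
        return string[2]   # only index once we know it's safe
    return False

print(third_character("hi"))     # False
print(third_character("hello"))  # l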
The listSets command is used to get a list of all the sets an object belongs to. To get sets of a specific type for an object, use the type flag as well. To get a list of all sets in the scene, don't pass an object on the command line; use one of the flags instead.
Derived from mel command maya.cmds.listSets
Example:
import pymel.core as pm

# Get a list of all the sets which `nurbsSphere1` belongs to:
pm.listSets( object='nurbsSphere1' )

# Get a list of all the deformer sets in the scene:
pm.listSets( type=2 )

# Get a list of all the rendering sets which `coneShape1` belongs to:
pm.listSets( type=1, object='coneShape1' )
Based on the IRC log.
1. Roll call

Present 15/11
  AT&T                       Mark Jones
  BEA Systems                Mark Nottingham
  Canon                      Jean-Jacques Moreau
  IBM                        David Fallside
  IBM                        Noah Mendelsohn
  Microsoft Corporation      Martin Gudgin
  Microsoft Corporation      Jeff Schlimmer
  Oracle                     Jeff Mischkinsky
  Oracle                     Anish Karmarkar
  SAP AG                     Volker Wiechers
  SeeBeyond                  Pete Wenzel
  Sun Microsystems           Marc Hadley
  Systinet (IDOOX)           Jacek Kopecky
  W3C                        Carine Bournez
  W3C                        Yves Lafon

Excused
  AT&T                       Michah Lerner
  BEA Systems                David Orchard
  Canon                      Herve Ruellan
  IBM                        John Ibbotson
  SAP AG                     Gerd Hoelzing
  Sun Microsystems           Tony Graham
  Systinet (IDOOX)           Miroslav Simek

Regrets
  Ericsson                   Nilo Mitra
  DaimlerChrysler R. & Tech  Mario Jeckle
  Fujitsu Limited            Masahiko Narita
  Fujitsu Limited            Kazunori Iwasa
  Macromedia                 Glen Daniels
  Matsushita Electric        Ryuji Inoue
  Progress Software          Colleen Evans
  Software AG                Michael Champion

Absent
  DaimlerChrysler R. & Tech  Andreas Riegg
  IONA Technologies          Eric Newcomer
  IONA Technologies          Oisin Hurley
  Progress Software          David Chappell
  Software AG                Dietmar Gaertner
  Tibco                      Don Mullen
  Unisys                     Lynne Thompson
  Unisys                     Nick Smilonich

2. Agenda review

3. Approval of minutes

[scribe_mj] minutes of April 2 approved
[scribe_mj] minutes of March 26 approved

4. review of action items...

5. status reports...

6. CR progress and issues...

Implementation and Interop Evidence, re summary table. Chair gives summary:

[scribe_mj] the entire table is green except for feature 79 and 82
[scribe_mj] but we do have a trace for each of those features

We know there are some other discrepancies, but we believe them to be small. E.g. one implementation generates xml:lang="" which is not valid, some implementations use previous 1.2 namespaces. In general, we believe that unless we wait for several more months, we will not see any further significant improvement in the evidence available.

Chair: are there any objections from the WG to going forward with this body of evidence on the basis that it meets our CR exit criteria? No objections.
Chair: we will go forward with this evidence.

Issue 421

[scribe_mj] we incorrectly implemented resolution of issue 355; it will now be addressed by resolving issue 421
[scribe_mj] transmission of comments are required by intermediaries but no clarification text will be added
[scribe_mj] issue 421 is now closed (no objections)
[scribe_mj] issue 422 is closed by removing the unused prefix (no objections)
[Noah] NOTE: Noah has sent closing mail on Issue 421, effectively discharging Action #9 above. Thank you.

7. Proposed Recommendation

[scribe_mj] we will put together our PR package and make a final PR request next week (to the W3C team), assuming that the editors can make changes by the end of this week
[scribe_mj] the wg will review the package early next week
[scribe_mj] the chair will set up the meeting with TimBL and W3T
[scribe_mj] ASAP and the final decision will be made just prior to that meeting

The WG agreed to all this. (No objections)

According to the W3C staff, the Requirements and Usage Scenarios docs should be published as W3C Notes.

[jeffsch] Here's the latest URL

8. Attachments

Chair asks WG members for their opinions about the Microsoft, BEA et al proposal for attachments, and whether the WG should work with it
[scribe_mj] Noah/IBM has concerns that the referencing model still has value. [scribe_mj] THey would be OK taking the include proposal as a starting point for the moment. [scribe_mj] Particular areas of remaining concern are restating the attachment feature spec, rethinking the relationship of this to the rest of the W3C stack (XML Query data model), [scribe_mj] security issues, and casting it in terms of SOAP technology. [scribe_mj] Interop testing would be important, too. [scribe_mj] MarkN feels that BEA's view is consistent with Noah's sentiment. [scribe_mj] Noah: Wants freedom to reconsider "by reference" model if the proposal doesn't pan out. [Noah] Part of Noah's point was that IBM is willing to endorse this as starting point, for now, as long as we do our usual careful job of working through the technology, clarifying its relationship to SOAP itself and to other W3C standards such as the Query Data model, and demonstrating interop (e.g. soapbuilders) [scribe_mj] MarkN: WS-I wants a status report on the XMLP stance on attachments. [scribe_mj] MarkJ: There seem to be a couple of steps forward -- grounding it in the SOAP [scribe_mj] model and quickly addressing any showstopper issues. [scribe_mj] The working group has agreed to extend the time of the call to permit the discussion to continue. The following proposal was typed into IRC at the behest of the WG to ensure the WG is clear about the expectations associated with working with the attachment proposal. [Noah] Proposal on the status of the new attachment proposal (I.e. the document proposed by Microsoft, BEA, etc.) [Noah] We will agree, at least for now, to use this document as the starting point for our next round of work on attachments. [Noah] We will do this in much the same spirit as we accepted SOAP 1.1 as a starting point. We worked through it in detail, had the liberty to make radical changes or reject it completely as we thought appropriate, but also agreed to make no gratuitous changes. [Noah] We also agree that it is not currently cast in terms of a SOAP-compatible specification, so one of our jobs will be to rewrite it in that form. [jeffm] how defines gratuitius? [jeffm] s/how/who/ [Noah] Several participants (including me) have observed that there are a number of important issues including the relationship to XML and WS security specification, perhaps other specifications such as the XML Query data model, etc. that the group will have to work through in deciding whether this technology is in the end appropriate, and what if any refinements are appropriate. [Noah] </End of proposal> [Noah] gratuitous: (from the dictionary) unnecessary or unwarranted [Gudge] OED: gratuitous 1. uncalled for 2. free of charge [Gudge] So, we should make sure we charge for all the changes we make ;-) [jeffm] Yes I know that :-) - the question is how is it decided if a change is gratuitous? No objections are raised by the WG to go forward with the proposal (by Microsoft, BEA, etc.) on the basis proposed by Noah above and given appropriate copyright and IP disclosures. 
[scribe_mj] meeting adjourned [RRSAgent] I see 13 open action items: [RRSAgent] ACTION: Yves to submit media type application to IETF [1] [RRSAgent] recorded in [RRSAgent] ACTION: jeffsch to send closing text for issue 419 [2] [RRSAgent] recorded in [RRSAgent] ACTION: chair to send the letter thanking Bob Cunnings [4] [RRSAgent] recorded in [RRSAgent] ACTION: DavidF send e-mail to implementors for any objections to making their traces public [5] [RRSAgent] recorded in [RRSAgent] ACTION: Nilo to incorporate the modified text for 420 into the primer [6] [RRSAgent] recorded in [RRSAgent] ACTION: go forth and multiply [7] [RRSAgent] recorded in [RRSAgent] ACTION: Editors to incorporate 'proper' resolution of Issue 355 ( not fully implemented previously ) [8] [RRSAgent] recorded in [RRSAgent] ACTION: Noah to send closing email on 421 [9] [RRSAgent] recorded in [RRSAgent] ACTION: editors to make the correction (issue 422) [10] [RRSAgent] recorded in [RRSAgent] ACTION: Jacek to send closing email on 422 [11] [RRSAgent] recorded in [RRSAgent] ACTION: Yves to determine (by next telcon at latest) whether any PR activity is planned for going to PR [12] [RRSAgent] recorded in [RRSAgent] ACTION: chair, JJM, JohnI and w3 staff to strategize on publishing the requirements and usage scenarios docs [13] [RRSAgent] recorded in [RRSAgent] ACTION: group to start formulating the questions we'd like answered [14] [RRSAgent] recorded in [Zakim] WS_XMLP()2:00PM has ended | http://www.w3.org/2000/xp/Group/3/04/09-minutes.html | CC-MAIN-2016-40 | refinedweb | 1,313 | 53.24 |
UI of my application is weird with Qt 4/5 on OS X 10.11 El Capitan
Hi everybody!
I’m currently developing an application on mac OS X 10 and I’m using Qt4 and Xcode 7.
My issue is related to the recent OS X 11 El Capitan. Indeed when I launch my app on OS X 11, all the widget/buttons/graphical stuff are not aligned anymore and even disappear for few of them. I received a lot of mails from my customers about it.
I upgrade to El Capitan and qt5 with Xcode 7 but I got so much error I couldn’t count all of them. especially this one problem-with-qt-5-5 but I can’t use libc++ since I need to use libstdc++ for being able to use <tr1/tulpe> in a part of my project…
So my question is if someone could know from what this kind of problem could come?
Hi and welcome to devnet,
it would be great to have an example of your code and some images showing the issue.
Does this happen only with this project, or also with other projects (for instance an example one)?
Hi and welcome to devnet,
You can disable C++11 so it will link to libstdc++.
CONFIG -= c++11
Other than that, Qt 4's last official release supports OS X 10.10. It should work on 10.11, but there's no guarantee as to the level of support, since Apple makes subtle changes each time.
Thank you for your welcoming messages.
@mcosta unfortunately I won't be able to show an example of the code, because the application is about 500 000 lines and, as I'm new in the company (and to Qt), I don't know where the problem comes from. So if you have an idea of which part of Qt could be responsible for this, I could investigate and show you an example of that part. But basically it's: "I have code which works perfectly on all Mac OS X versions; when the app is launched on El Capitan, everything collapses."
Concerning the image, as I work for a company I would prefer not to share it. But it's simple to picture. Imagine you have a window with some buttons centred. Once it's launched on El Capitan, some buttons disappear and others are moved.
@SGaist How can I do this in Xcode? In my project's Build Settings I can set the "C++ Standard Library" to libc++ or libstdc++. Is that what you're talking about?
The problem is that if I use libstdc++ I get the problem I linked (problem-with-qt-5-5), where the solution is to use libc++; but if I do that, I can't use <tr1/tuple> anymore...
I almost managed to compile after changing some includes to <QtWidgets> and changing my moc script from
"moc ${INPUT_FILE_PATH} -f -o ${INPUT_FILE_DIR}/moc_${INPUT_FILE_BASE}.cxx"
to "moc ${INPUT_FILE_PATH} -f ${INPUT_FILE_PATH} -o ${INPUT_FILE_DIR}/moc_${INPUT_FILE_BASE}.cxx"
I don't understand why I needed to add a path after -f, because the docs say the path is optional...
But now, WOW, I have 600 errors! I don't know if hell exists, but I think it could look like my project! Haha. The "good" news is that it's always the same error; here is a template:
"Undefined symbols for architecture x86_64:
"ClassName::MethodName( various stuff like std::__1::vector<std::__1::queue<float, blabla)", referenced from: -[ClassName MethodName] in FileName.o"
Does anybody know what it means?
Sorry,
but we don't have a crystal ball; if we can't see your code (and it also seems you use a custom Makefile), we can't help you much.
@mcosta I totally understand your thoughts, and I'm sorry if I expressed myself poorly, but the thing is I'm totally new on this project and I know nothing about Qt. That's the reason why I may seem lost.
Furthermore, I agree with you that without any code you couldn't help me (or other people who could have the same problem), and I'm glad that you took time to give me some answers. But the thing is that I have 500 000 lines, so I just don't know what to show you.
The only error I got now is "Undefined symbols for architecture x86_64:
"ClassName::MethodName( parameters)", referenced from: -[ClassName MethodName] in FileName.o" "
Maybe if you have already seen this kind of error with Qt, I could look in the code and show you the relevant part.
By the way, as I'm discovering Qt at the same time, I saw in the project a file named FindQt.cmake, and inside it only talks about Qt 3 and Qt 4. I guess I need to write a CMake file for Qt 5, no?
Hi,
about CMake with Qt5 you can read this page.
about the linking error, it seems some libraries are missing from the link.
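For reference, a minimal Qt 5 CMakeLists.txt might look like the sketch below (the project name and source files are placeholders; adapt them to your tree):

# Minimal Qt 5 CMake sketch; "myapp" and the source list are placeholders.
cmake_minimum_required(VERSION 2.8.11)
project(myapp)

# Let CMake run moc automatically instead of a custom moc script
set(CMAKE_AUTOMOC ON)
set(CMAKE_INCLUDE_CURRENT_DIR ON)

find_package(Qt5Widgets REQUIRED)

add_executable(myapp main.cpp mainwindow.cpp)
target_link_libraries(myapp Qt5::Widgets)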
@mcosta Thank you for the links.
I took some time to reply because I wanted to be sure about something.
I found the real reason for my problem: the way our libraries are built. Some of our in-house libraries are built with libstdc++ and others with libc++. Unfortunately, I cannot change this for the moment. Since the version of Qt 5 that I use forces me to build with libc++, I get a lot of link errors.
So my question is: is it possible to use a version of Qt 5 built with libstdc++? If not, how can I get past the error "no type named 'u16string' in namespace 'std'" in qstring.h when using libstdc++?
Are you by any chance using the old macx-g++ mkspec ? If so, you should change to macx-clang, that should solve your problem.
@SGaist I apologize for my "noob" question, but... where can I check in Xcode whether we use the old macx-g++? I'm really new to Qt.
What I did was: download and link Qt 5 with Homebrew, and add the frameworks I needed in the "Linked Frameworks and Libraries" part of the "General" tab in my Xcode project.
Did you generate the Xcode project using qmake ? | https://forum.qt.io/topic/60776/ui-of-my-application-is-weird-with-qt-4-5-on-os-x-10-11-el-capitan/4 | CC-MAIN-2020-05 | refinedweb | 1,035 | 80.31 |
33 Fundamentals Every JavaScript Developer Should Know
You’ve learned React, mastered npm install, and can find your way around webpack.config.js. You can do just about anything as long as there’s a module for it, and there seems to be a module for everything. But what happens when you can’t find a module? Where do you go when you can’t find the answer on Stack Overflow? You feel like a solid developer, but there’s a gnawing sensation in the pit of your stomach. You don’t like to think about it, but it’s there. The truth… A lot of this is wizardry to you. If there wasn’t an example out there, in some cases you’d have no idea how to get any of this working.
What can you do? What if someone finds out? Maybe that’s the REAL reason white boards scare you? Fret no more fellow developer. We’ve all been there. The solution has been staring you in the face this whole time, and yes we both know you’ve been avoiding it. It might be time you took the time to really understand JavaScript, and maybe even learned a little computer science.
Here’s what you need to do:
- Understand how a line of high-level code like JavaScript turns into and gets executed as a stack frame on the call stack (from high level languages to machine code).
- Understand how different primitive types are stored in memory, down to the addresses, space allocated, and binary representation (if you haven’t run into the word mantissa you’re not far enough).
- Understand the difference between value types and reference types, and assigning values vs assigning pointers.
- Understand implicit typing, explicit typing, nominal typing, structural typing, and duck typing.
- Understand == vs === vs typeof.
- Understand function scope, block scope, and lexical scope.
- Understand the difference between an expression and a statement, and what it means to evaluate an expression.
- Understand what happens (or doesn’t) in memory/on the call stack when an expression is evaluated, argument passed, result returned, etc. vs when a value is assigned or retrieved from a variable.
- Understand IIFE’s, modules, and namespaces. And why ES6 modules and block scope don’t fully replace IIFE’s.
- Understand how the message queue and event loop work in JavaScript specifically, and how it affects timing, hang, async, etc.
- (browser) Understand setTimeout, setInterval, and requestAnimationFrame.
- Understand which operations are more expensive and why (expensive meaning they cost more processing time or memory). Are the number of iterations really the most impactful on performance (typically)? What does Big O notation really tell us? Use jsperf and performance.now to run some tests and find out.
- Understand what opts and deopts are, and how to keep up to date on them across the different JavaScript engines.
- Understand how to represent numbers in binary, hex, dec, scientific notation, etc. in JavaScript and other languages.
- Understand how bitwise operators, typed arrays, and array buffers work. Use understanding RGBA as a way to understand how to manipulate binary data.
- (browser) understanding how the in-memory DOM and layout trees are built and modified, when/how reflows/layouts, composites and repaints are triggered.
- Understand new, constructors, instanceof, and instances.
- Understand prototypical inheritance and the prototype chain, and how even with class, JavaScript still doesn't achieve classical inheritance.
- Understand the differences between Object.create and Object.assign.
- Understand factories and classes, and how these approaches differ.
- Understand the difference between member properties and properties on the prototype.
- Understand pure functions, side effects, and state mutation.
- Understand how almost every for/while loop can be replaced with a map/reduce/filter, and why (see the sketch after this list).
- Understand what lambdas are (LINQ in C# as an example), and how map/reduce/filter + arrow functions change the way you think about your code.
- Understand how closures work, and how they look on the call stack.
- Understand how high order functions work, and when to use them.
- Understand what abstract data structures are, how to build them in JavaScript, and typical use cases for them.
- Understand recursion and how to use them to build abstract data structures.
- Understand that algorithms to solve many common problems exist, and be familiar enough with them to find the one you need on Google because nobody remembers all of them. Or even most of them.
- Understand the difference between the is a and has a relationship when it comes to inheritance, polymorphism, and code reuse.
- Become familiar with the common design patterns, and which ones have uses in JavaScript.
- Understand partial functions, currying, compose, and pipe. And understand why unary functions are so useful.
- Understand how reflection is different in JavaScript than in strongly typed, compiled languages, and why.
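As a quick illustration of the map/reduce/filter point above, here is a small sketch; the data is invented for the example:

// Imperative version: accumulate with a loop and mutable state.
const orders = [
  { item: "coffee", price: 4.5, paid: true },
  { item: "bagel", price: 3.0, paid: false },
  { item: "juice", price: 5.25, paid: true },
];

let total = 0;
for (let i = 0; i < orders.length; i++) {
  if (orders[i].paid) total += orders[i].price;
}

// Declarative version: no mutation, each step is a pure function.
const total2 = orders
  .filter((o) => o.paid)           // keep only paid orders
  .map((o) => o.price)             // project out the prices
  .reduce((sum, p) => sum + p, 0); // fold into a single number

console.log(total === total2); // true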
There you have it. The key to demystification. Did I miss something? If so, feel free to send me a tweet: @stephenthecurt
Stuff I forgot that rock star developers pointed out:
34. Erika Ritter reminded me it's important to understand delegates. Not only what delegation is in the context of the prototype, but delegation as it pertains to this, and understanding how bind, call, and apply work. Bonus: (browser) Understand what event delegation is, and how all three uses of the term delegate are saying the same thing.
C++ Boost Filesystem Library(Part III): Example Programs
My earlier posts on C++ Boost Libraries introduced the Boost Filesystem Library and discussed some example programs that make use of it. I discuss two more examples in this post. (Be sure to include the needed boost header files as noted in the earlier posts, including creation of the alias to the boost filesystem namespace: namespace bfs = boost::filesystem; )
1. A function to search for a file.
The function first verifies that the path passed in 'path_dir' exists and that it is a directory. Then it creates a 'directory_iterator' object (introduced in the previous post) by passing this path to its constructor. The loop then iterates over all the contents of the directory, and for every sub-directory found, it calls itself recursively, passing this new directory as the starting point. Every file that is found is compared with the file to be searched for (file_name), and if a match is found, the path is copied into 'pfound' and the function returns true. If nothing is found by the end of the loop, 'false' is returned. Remember that, as discussed earlier, iter->leaf() returns the last part of a path, the file name. Applying the indirection operator to the directory_iterator object returns the file it is pointing to.
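The original listing is missing from this copy; here is a reconstruction of the function as described, using the old Boost.Filesystem v2-era API (leaf(), etc.) that the post refers to:

#include <boost/filesystem/operations.hpp>
#include <boost/filesystem/path.hpp>
#include <string>

namespace bfs = boost::filesystem;

// Recursively search 'path_dir' for 'file_name'; on success,
// copy the full path into 'pfound' and return true.
bool find_file(const bfs::path& path_dir,
               const std::string& file_name,
               bfs::path& pfound)
{
    if (!bfs::exists(path_dir) || !bfs::is_directory(path_dir))
        return false;

    bfs::directory_iterator end_iter;  // default ctor == end of directory
    for (bfs::directory_iterator iter(path_dir); iter != end_iter; ++iter)
    {
        if (bfs::is_directory(*iter))
        {
            // recurse into sub-directories
            if (find_file(*iter, file_name, pfound))
                return true;
        }
        else if (iter->leaf() == file_name)  // v2 API: leaf() == file name
        {
            pfound = *iter;
            return true;
        }
    }
    return false;
}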
2. A simple directory listing (ls/dir) program.
The function simply iterates over all the contents of the directory passed to it (p) and prints every file and directory that is found. A [Directory] tag is added to the directory names, and at the end the total count of files and directories is printed. This program only prints the contents of the directory passed in; it doesn't traverse its sub-directories. Can you combine the general idea of the above two examples to create a directory listing program that displays the files of all the sub-directories recursively? You can also print more information about the files by using Boost filesystem functions like is_symbolic_link(), file_size(), last_write_time(), etc. [discussed in the earlier posts].
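Again the original listing is missing from this copy; a reconstruction of the described program (same v2-era API caveat) might look like this:

#include <boost/filesystem/operations.hpp>
#include <boost/filesystem/path.hpp>
#include <iostream>

namespace bfs = boost::filesystem;

// Print every entry in directory 'p' (non-recursive), tagging
// sub-directories, then report the totals.
void simple_ls(const bfs::path& p)
{
    unsigned long file_count = 0, dir_count = 0;

    bfs::directory_iterator end_iter;
    for (bfs::directory_iterator iter(p); iter != end_iter; ++iter)
    {
        if (bfs::is_directory(*iter))
        {
            ++dir_count;
            std::cout << iter->leaf() << " [Directory]\n";
        }
        else
        {
            ++file_count;
            std::cout << iter->leaf() << "\n";
        }
    }
    std::cout << file_count << " files, "
              << dir_count << " directories\n";
}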
Most of the examples were inspired by or taken from the Boost website. The website documentation also contains more help on the various formats in which paths to files can be represented. The function 'native_file_string()' used in Example 2 above returns the path in the native format (using '/' as a file separator on Unix platforms and '\' on Windows platforms, for example). To pass path names as 'path' objects, use the services from fstream.hpp. All Filesystem-library-related exceptions are available in exception.hpp. The header convenience.hpp contains a few convenience functions (like change_extension()). You can start from the documentation page on the website for further information.
There...
The opening chapter makes a stab at explaining why you should be interested - which is a good place to start a new language. The assumption here is that you are an existing developer; this is not a book aimed at the absolute newbie (and I'll refer readers to Realm of Racket if that's what you're looking for). Aside from things like concision and leveraging the Java environment, the author rightly homes in on data immutability, which makes this different to Java, C#, Python et al, and which makes writing concurrent code much more straightforward.
The next chapter introduces a Clojure programming environment - in this case it's called Leiningen - and shows how you can use this to get yourself a REPL (read evaluate print loop) so that you can explore the language dynamically. It's in the third chapter that you really start getting to the core of the language, with introductions to the functions, lists, vectors and maps. The examples are all simple, but enough for you to follow along and try things out. If you've never had any exposure to a functional language or to Lisp before then time spent in this chapter will pay off as it really gets to the core of the language. This is followed by a chapter on Clojure types, including showing how these types ultimately resolve to the underlying Java types. In this chapter you also get to work with the idea of functions using functions as data - higher-order functions in other words - and you are introduced to the concept of filter, reduce and map, which are key concepts in Clojure (and other similar languages). This is one of the parts of the book that could have done with some more complex or realistic examples, while the concepts are introduced and explained well enough, for those coming to Clojure from more traditional languages some extra content would probably be beneficial.
Chapter 5 covers a mix of topics: program flow, namespaces, and loading files into the REPL. It's bitty and lacks cohesion. The same goes for the final chapter, which looks at things 'behind the scenes', specifically at vars and macros. These final two chapters really don't work all that well, even though the individual topics they cover are important.
Overall this does deliver a fast introduction to the core language. The explanations are clear and concise, just like Clojure itself. However, the concision does mean that this is a fairly light introduction. The text could have done with additional examples and a little bit more depth in places. Learning a new programming paradigm takes more work than learning a new language in the same family as one you already know. Given that many of the people looking at Clojure for the first time are likely to be coming from object oriented languages then some additional content to help them would make this a more useful book. | http://www.techbookreport.com/tbr0384.html | CC-MAIN-2015-06 | refinedweb | 487 | 54.36 |
#include <hallo.h>
* Radu-Cristian FOTESCU [Sun, Jul 16 2006, 07:44:00PM]:
> I'm terribly sorry to say, but I'm disappointed by the unprofessional
> approach. Or, take it the other way around: if unofficial backports can be
> seen as official, how comes we can't download the "updated CeBIT 2006
> edition" DVD?
>
> This is both frustrating and confusing for the image of the Debian.

Then stop bitching and do something about it. Do you prefer this "new image" over the usual image of "being outdated when released and unusable on most desktops 5-10 months later"?

What about making backports part of the official strategy? E.g. promoting Stable+Backports (approved) as "Debian Stable, Freshmeat series"? A small committee would look at the quality of backports and allow them to be added to official releases of this "special edition", e.g. doing that every two months after things have settled down in the backports archive.

Why do we not do that? Because of additional work? I doubt it. Because CD vendors want to press 100,000 CDs for the next 12 months at once?

Eduard.
Question:
Does anyone know the simplest way to import an OpenSSL RSA private/public key (using a passphrase) with a Python library and use it to decrypt a message.
I've taken a look at ezPyCrypto, but can't seem to get it to recognise an OpenSSL RSA key, I've tried importing a key with importKey as follows:
key.importKey(myKey, passphrase='PASSPHRASE')
myKey in my case is an OpenSSL RSA public/private keypair represented as a string.
This balks with:
unbound method importKey() must be called with key instance as first argument (got str instance instead)
The API doc says:
importKey(self, keystring, **kwds)
Can somebody suggest how I read a key in using ezPyCrypto? I've also tried:
key(key, passphrase='PASSPHRASE')
but this balks with:
ezPyCrypto.CryptoKeyError: Attempted to import invalid key, or passphrase is bad
Link to docs here:
EDIT: Just an update on this. I successfully imported an RSA key, but had real problems decrypting because ezPyCrypto doesn't support the AES block cipher. Just so that people know. I successfully managed to do what I wanted using ncrypt. I had some compilation issues with M2Crypto because of SWIG and OpenSSL compilation problems, despite having versions installed that exceeded the minimum requirements. It would seem that the Python encryption/decryption frameworks are a bit of a minefield at the moment. Ho hum, thanks for your help.
Solution:1
The first error is telling you that importKey needs to be called on an instance of key:

k = key()
k.importKey(myKey, passphrase='PASSPHRASE')
However, the documentation seems to suggest that this is a better way of doing what you want:
k = key(keyobj=myKey, passphrase='PASSPHRASE')
Solution:2
It is not clear what are you trying to achieve, but you could give M2Crypto a try. From my point of view it is the best OpenSSL wrapper available for Python.
Here is a sample RSA encryption/decription code:
import M2Crypto as m2c
import textwrap

key = m2c.RSA.load_key('key.pem', lambda prompt: 'mypassword')

# encrypt something:
data = 'testing 123'
encrypted = key.public_encrypt(data, m2c.RSA.pkcs1_padding)
print "Encrypted data:"
print "\n".join(textwrap.wrap(' '.join(['%02x' % ord(b) for b in encrypted])))

# and now decrypt it:
decrypted = key.private_decrypt(encrypted, m2c.RSA.pkcs1_padding)
print "Decrypted data:"
print decrypted
print data == decrypted
Hello everyone!
Today I created a project that I have been greatly looking forward to: a "Cat Toy," a laser pointer on top of two servos (X- and Y-axis), both controlled via joystick.
I am using an Arduino UNO R3.
As of a couple hours ago, my project is up and working, but I now have another idea to improve it. The problem is I am having trouble coding out exactly what I want.
What I want is for my 'work envelope' (the space that my laser can reach) to be incrementally controlled, i.e. more like an FPS game, where you can move left, stop, let the joystick re-center, then move further left without the dot re-centering. In short, I am seeking to get rid of the snap-back movement that occurs when the joystick is released: incremental vs. absolute coordinates/movement.
Here is my code that is running with two HI-TEC HS-422 digital servos and the sparkfun “retail kit” joystick. The laser is an old .22lr cheap laser sight connected to a digital pin on the UNO.
#include <Servo.h>

Servo servoX;
Servo servoY;

int laser = 2;
int potpinX = A4;
int potpinY = A5;
const int servoPause = 1;

void setup() {
  servoX.attach(9);
  servoY.attach(11);
  pinMode(laser, OUTPUT);
  pinMode(potpinX, INPUT);
  pinMode(potpinY, INPUT);
  digitalWrite(laser, HIGH);
  Serial.begin(9600);
}

void loop() {
  int XpotVal = analogRead(potpinX);
  int YpotVal = analogRead(potpinY);
  int servoAngleX = map(XpotVal, 0, 1023, 180, 0);
  int servoAngleY = map(YpotVal, 0, 1023, 180, 0);
  servoX.write(servoAngleX);
  servoY.write(servoAngleY);
  Serial.print(XpotVal);
  Serial.print(" ");
  Serial.println(YpotVal);
  delay(servoPause);
}
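For reference, here is one possible way to make the control incremental rather than absolute: treat the joystick as a velocity input with a dead zone around centre and accumulate the angle. This is just a sketch, untested on hardware; it reuses the globals from the sketch above, and DEADZONE and MAX_SPEED are tuning guesses.

// Replace loop() above with an incremental (velocity-style) version.
const int DEADZONE = 50;        // ignore stick offsets smaller than this
const float MAX_SPEED = 0.5;    // max degrees per loop at full deflection

float angleX = 90.0;            // accumulated positions, start centred
float angleY = 90.0;

void loop() {
  int dx = analogRead(potpinX) - 512;   // offset from stick centre
  int dy = analogRead(potpinY) - 512;

  // Move only while the stick is pushed past the dead zone; releasing
  // the stick leaves the laser where it is (no snap-back).
  if (abs(dx) > DEADZONE) angleX += (dx / 512.0) * MAX_SPEED;
  if (abs(dy) > DEADZONE) angleY += (dy / 512.0) * MAX_SPEED;

  angleX = constrain(angleX, 0, 180);   // stay inside the work envelope
  angleY = constrain(angleY, 0, 180);

  servoX.write((int)angleX);
  servoY.write((int)angleY);
  delay(servoPause);
}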
Thanks in advance! | https://forum.arduino.cc/t/solved-x-y-laser-control-with-joystick-absolute-vs-incremental-movement/228165 | CC-MAIN-2022-21 | refinedweb | 274 | 54.02 |
The Python try…except block is used to test a particular block of code, to make sure it runs properly before the rest of the program does. Say you have written a big piece of new code for a program: you want to ensure it runs before letting the rest of the program run. A try…except block lets you run your code and manage an exception if one occurs.
Add finally and else statements to process extra code depending on the result of the try…except block. Python interprets the code under the "try" section; if that code fails, the program stops where it detected the error and the "except" block runs instead. The try block lets you test a block of code for errors, while the except block lets you handle the error with a response. The syntax for the try…except block:
try:
your code…
except:
your code…
Exception Handling
The try and except blocks are used to detect and manage exceptions. Python runs the code under the try statement as a normal part of the program. The except statement acts as the program's response to any exception raised in the preceding try clause.
Exceptions are a convenient way to handle errors and special conditions in a program: if you are executing code that may result in an error, use exception handling. You can also raise an exception yourself using the raise statement. Raising an exception interrupts the code currently running and unwinds back to the nearest handler until the exception is dealt with.
Example :
amount = 9000

# check that you are eligible to
# purchase Dsa Self Paced or not
if(amount > 2999)
    print("You are eligible to purchase Dsa Self Paced")
Output :
SyntaxError: invalid syntax

The SyntaxError here comes from the wrong syntax in the code: the if statement is missing its colon. It leads to the termination of the program before it even starts running.
Let us now see what happens with a runtime error. Check out the code below:

Example :

marks = 10000
a = marks / 0
print(a)

Output :

ZeroDivisionError: division by zero
Multiple Except Statements
You can repeat except statements for different error types to catch multiple exceptions. This is useful if you think an exception may be raised but you are unsure which one you will face. An illustration of a try…except block that checks for a NameError:
Example :
try:
    print(ourVariable)
except NameError:
    print('ourVariable is not defined')
except:
    print('Error returned')

Here, the code prints 'ourVariable is not defined' because the try block raises a NameError. We can add handlers for more errors, such as an OSError or a ZeroDivisionError.
Python Finally Block
Suppose you want a message printed in both cases: when an error is raised and when no error is detected. This is what the finally block is for. If you add a finally clause, its body runs regardless of whether the try…except block catches an error, which makes it the right place for cleanup code that must always execute.
try…except blocks let you swiftly debug your Python code: a program attempts to execute the code in a "try" block; if this fails, the "except" block executes; and the code in a "finally" statement runs regardless of whether an "except" block ran. These blocks are useful when you're testing current code or creating new code, making sure your program executes properly and does not contain errors.
Example :
def divide(a, b):
    try:
        result = a // b
    except ZeroDivisionError:
        print("No result, You are dividing by zero ")
    else:
        print("Your answer will be ", result)
    finally:
        print('Finally block is always executed')

divide(5, 4)
Output :
Your answer will be  1
Finally block is always executed
Python User-defined Exception
A user-defined exception in Python is an exception you create yourself, to handle cases where the program would stop abruptly or something would go wrong. Python provides exception handling for these with the help of try…except. Some of the most frequent standard exceptions include IndexError, ImportError, IOError, ZeroDivisionError, TypeError, and FileNotFoundError.
A Python program terminates as soon as it encounters an unhandled error. An error can be a syntax error or an exception. In this article, you will see what an exception is and how it differs from a syntax error. After that, you will learn about raising exceptions and making assertions. Then, you'll finish with a demonstration of the try and except block. A user-defined exception is one where the user creates his or her own error type using the Exception class.
How to create User-defined Exception
Users can define their own exceptions by creating a new class, called an exception class. The class needs to derive, directly or indirectly, from the built-in Exception class. Below is a program that creates a user-defined exception:
Example :
class DH(Exception):

    # Constructor or Initializer
    def __init__(self, value):
        self.value = value

    # __str__ is to print() the value
    def __str__(self):
        return(repr(self.value))

try:
    raise(DH(10*2))

# Value of Exception is stored in error
except DH as error:
    print('A New Exception occurred: ', error.value)

Output :

A New Exception occurred:  20
>
When a user is developing a vast program in Python, it is good practice to store all user-raised exceptions, especially user-defined ones, in a separate file. Many standard modules do this: they tend to define their exceptions in a separate module such as exceptions.py or errors.py.
Creating user-defined exception using multiple inheritance
Exception classes derived from a common base are useful when a single module has to handle several distinct errors. The user creates a base class for the exceptions defined by that module, and various user-defined classes can then inherit from this base class to handle different types of errors. A user-defined exception class can implement everything a normal class can do, but we generally keep them simple and concise. Most implementations declare a custom base class and derive other exception classes from it, as sketched below.
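To make the pattern concrete, here is a minimal sketch (the module concept, class names, and messages are invented for illustration):

# Base class for all exceptions raised by this (hypothetical) module
class NetworkError(Exception):
    pass

# Derived classes handle distinct error conditions
class ConnectionTimeout(NetworkError):
    def __init__(self, host, seconds):
        super().__init__(f"connection to {host} timed out after {seconds}s")

class InvalidResponse(NetworkError):
    def __init__(self, status):
        super().__init__(f"unexpected status code: {status}")

try:
    raise ConnectionTimeout("example.com", 30)
except NetworkError as error:
    # Catching the base class handles every derived error in one place
    print("network problem:", error)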
NAME
read - read from a file descriptor
SYNOPSIS
#include <unistd.h>

ssize_t read(int fd, void *buf, size_t count);
DESCRIPTION
read() attempts to read up to count bytes from file descriptor fd into the buffer starting at buf.
If count is zero, read() returns zero and has no other results. If count is greater than SSIZE_MAX, the result is unspecified.
RETURN VALUE
On success, the number of bytes read is returned (zero indicates end of file), and the file position is advanced by this number. It is not an error if this number is smaller than the number of bytes requested. On error, -1 is returned, and errno is set appropriately.

ERRORS
EINTR - The call was interrupted by a signal before any data was read.
Other errors may occur, depending on the object connected to fd. POSIX allows a read that is interrupted after reading some data to return -1 (with errno set to EINTR) or to return the number of bytes already read.
CONFORMING TO
SVr4, SVID, AT&T, POSIX, X/OPEN, BSD 4.3
RESTRICTIONS
On NFS file systems, reading small amounts of data will only update the time stamp the first time; subsequent calls may not do so. This is caused by client-side attribute caching: most if not all NFS clients leave atime updates to the server, and client-side reads satisfied from the client's cache will not cause atime updates on the server, as no server-side reads take place. Many file systems and disks were considered fast enough that the implementation of O_NONBLOCK was deemed unnecessary. So, O_NONBLOCK may not be available on files and/or disks.
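EXAMPLE
The following sketch (illustrative, not part of the original manual) shows a loop that retries on EINTR and handles short reads, per the semantics described above:

#include <errno.h>
#include <unistd.h>

/* Read exactly 'count' bytes unless EOF or a real error occurs. */
ssize_t read_all(int fd, void *buf, size_t count)
{
    size_t total = 0;
    while (total < count) {
        ssize_t n = read(fd, (char *)buf + total, count - total);
        if (n == 0)                 /* end of file */
            break;
        if (n == -1) {
            if (errno == EINTR)     /* interrupted: retry */
                continue;
            return -1;              /* real error: errno is set */
        }
        total += (size_t)n;         /* short read: keep going */
    }
    return (ssize_t)total;
}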
SEE ALSO
close(2), fcntl(2), ioctl(2), lseek(2), readdir(2), readlink(2), select(2), write(2), fread(3), readv(3)
VSM Cover Story
Learn how to store and retrieve binary large objects in the cloud with Azure's RESTful Storage API.
Microsoft's potential ability to leverage its Visual Studio developer cadres to gain a foothold in the cloud computing market is unsettling to existing and prospective players. Months of unrelenting hype have followed Microsoft Chief Software Architect Ray Ozzie's unveiling of the Windows Azure cloud computing platform during Microsoft's Professional Developers Conference 2008. After about a year of developer feedback, Microsoft is expected to announce the date for commercial release of Windows Azure and the Azure Services Platform at PDC 2009 on Nov. 17.
Amazon Web Services LLC and the Google App Engine have a commanding presence as early infrastructure and Platform as a Service (PaaS) providers. IBM Corp., VMware Inc. and even Oracle Corp. (with its pending Sun Microsystems Inc. acquisition) are among the companies playing a furious game of catch-up to prevent Amazon Web Services, Google Inc. and Microsoft from becoming the enterprise cloud's ruling triumvirate.
The initial appeal of cloud (or "utility") computing is the ability to rapidly scale application capacity as fluctuating demand grows and, just as quickly, to shed resources as traffic subsides. In 2009's tight economy, cloud computing offers a way to reduce the risk of testing Web-based projects that require substantial resources for a minimum implementation. At a growing number of companies, management's quest for "green IT" status may soon validate the increased efficiencies of data center operating scale: efficiencies that can reduce the amount of power consumption and heat generation per unit of computing work and data storage.
Amazon Web Services, Google App Engine and Microsoft Windows Azure all offer these features. However, only Microsoft delivers the unique advantage of minimizing the effort (and thus the cost) required to move .NET projects developed with Visual Studio from on-premises IT facilities to virtualized Windows Server clusters and replicated data storage running on the Windows Azure fabric in Microsoft data centers.
The foundation for the Azure Services Platform is the Windows Azure operating system. This OS provides the fabric for virtualized Windows 2008 Server instances; schemaless persistent table, binary large object (blob) and queue storage; .NET Services for managing authentication/authorization, inter-service communication, workflows, service bus queues and routers; SQL Services for data management with SQL Server's relational database features; and consumer-oriented Live services. Like the Google App Engine, Azure provides a development environment that emulates its cloud OS and storage services on-premises. Cloud-based SQL Analysis Services and SQL Reporting Services are scheduled for post-version 1 release.
Secure and reliable data storage is critical to commercial infrastructure and PaaS offerings. Microsoft announced in late May that its cloud infrastructure had received both Statement on Auditing Standards (SAS) 70 Type I and Type II attestations and the International Organization for Standardization and International Electrotechnical Commission (ISO/IEC) 27001:2005 certification for security.
For developers, integrating replicated data storage with cloud-based Web applications or services improves performance and increases data availability and security, while maintaining scalability for traffic surges. The Azure Services Platform provides four classes of replicated data storage: blobs for unstructured data such as music, pictures and video files up to 50GB; tables for structured data; queues for services and messaging; and relational data. (See "Retire Your Data Center," February 2009, for an overview of Azure's layered architecture and structured table storage, which uses the entity-attribute-value data model.)
Move ASP.NET Apps to the Cloud

Visual Studio 2008, ASP.NET 3.5 Service Pack 1 (SP1) and the Azure Services Platform simplify uploading blobs to scalable, replicated storage in virtual server clusters at Microsoft data centers. Azure blobs offer a simple interface to store unstructured binary data and metadata that corresponds to files in a directory structure. Developers can quickly learn how to create, store and consume blobs of varying sizes with a simple Azure ASP.NET Web application called a WebRole.
The top of Azure's object pyramid is the Account, which is the unit of hosted application and storage ownership, resource consumption accounting and billing. Microsoft hasn't disclosed pricing or Service Level Agreement (SLA) details. The Azure team expects to publish pricing and SLA commitments later this summer and to deliver the Azure Services Platform version 1 as a commercial service by the end of 2009.
If you don't have a token for an Azure-hosted service account (which includes two storage services accounts), you can emulate Azure by downloading and installing the Windows Azure software development kit (SDK) community technology preview (CTP), which requires .NET 3.5 SP1 and Windows Vista. The SDK lets you run ASP.NET Web application code in the local Development Fabric and store blobs, tables and queues in Development Storage, which, by default, is a local SQL Server 2005 or 2008 Express instance.
One of Azure's best features is the ability to develop and debug projects with Visual Studio 2008 in the development environment. Download the Windows Azure Tools for Microsoft Visual Studio CTP to provide templates for cloud-based projects. Visual Studio 2008 is recommended for now. The Windows Azure Tools for Visual Studio May 2009 CTP supports Visual Studio 2010 beta 1. However, the Windows Azure OS doesn't work with .NET 4.0 yet, and the .NET Services CTP (due in March 2009 at press time) doesn't support Visual Studio 2010. Installing the Windows Azure Training Kit is optional but recommended for its hands-on labs.
Using the Azure Services Developer Portal, you can quickly and easily publish your apps to Azure's staging environment for testing with the Azure fabric and cloud-based storage. After you verify operability with a GUID as the service name, a single click exchanges the staging version for the production service, if any exists. The only change you must make to your project code to move from the development environment to the Azure Fabric is to comment-out and add a few lines to the ServiceConfiguration.cscfg file (see Listing 1). You can perform the edits in the Azure Services Developer Portal's Configuration page if you prefer.
Register for a cloud-hosted service account token; the service provisioning process is Byzantine and changes often. The wait is usually only a day or two, down from weeks for early CTPs. Click the Register for Azure Services link to open Microsoft Connect's Applying to the Azure Services Invitations Program page. Complete the survey, click Submit, and wait for the token to arrive in the e-mail account for the Windows LiveID you presented.
When you receive the token (which gives you 2,000 virtual machine (VM) hours, 50GB of cloud storage and 20GB per day of storage traffic at no charge), go to the Azure Services Developer Portal. Sign in with the same LiveID, click the I Agree button to create a new developer portal account, and click Continue to open the Redeem Your Invitation Token page. (Alternately, click the Account tab and the Manage My Tokens link.) Copy the token product key to the Text box and click the New Project link or Claim Token and Continue buttons to open the Project | Create a New Service Component page. Click the Hosted Services button to open the Create a Project: Project Properties page, type a Project Label, optionally fill in the Description text boxes, and click Next. Type a unique DNS name for your hosted service in the Service Name text box and click Check Availability to verify that no one has used it previously.
Microsoft enabled Azure's geo-location service in late April, so you can now specify an Affinity Group to ensure that hosted-service code and your storage account reside in the same data center to maximize performance. To date, only the United States-based data centers-USA-Northwest (Quincy, Wash.) and USA-Southwest (San Antonio, Texas)-are in operation. Select Yes, This Service Is Related to Some of My Other Hosted Services ... and Create a New Affinity Group option buttons, give the affinity group a name, such as USA-NW1, and choose one of the two current data centers from the Region list. Click the Create button to create the new hosted service and open the Create a Service: Hosted Project with Affinity Group page.
Each hosted-service token entitles you to two storage services accounts, which share a similar setup page. To add a storage service account, click the New Project link to reopen the Project | Create a New Service Component page, and Click the Storage Account button. Add a storage Project Label and Description, and click Next to open the Create a Project: Storage Account page. Type a DNS name for the account and select the Yes, This Service Is Related to Some of My Other Hosted Services ... and Existing Affinity Group option buttons and select your Affinity Group name in the list (see Figure 1).
You'll also need to download and install the Windows Azure SDK to provide the local Developer Fabric and Storage services, as well as the StorageClient sample C# class library, which enables manipulating blobs as .NET objects. The Windows Azure SDK May 2009 CTP adds new features to Azure table and blob storage, but the CTP's .NET StorageClient library API doesn't support them. You'll need an updated version of StorageClient to take advantage of new CopyBlob and GetBlockList features for blobs.
PHP developers can create a native interface for Azure storage accounts using the PHP SDK for Windows Azure CTP, which Belgium integrator RealDolmen and project sponsor Microsoft made available in May on Microsoft's open source community hosting site CodePlex. The PHP SDK provides PHP classes for Windows Azure blobs, tables and queues. The May CTP, which is the first preview, is targeted at blob storage.
Create Containers, Blobs and Blocks

Containers hold multiple blobs, emulating the way a folder contains files. Containers are scoped to accounts and, unlike other Azure storage types, can be designated publicly read-only or private when you create the container with Representational State Transfer (REST) requests or .NET code. Private containers require a 256-bit (SHA256) AccountSharedKey authorization string, which you copy from the Primary or Secondary Access Key fields on the storage-service-name Summary page. You can add up to 8KB of metadata to a container in the form of attribute/value pairs. Azure storage offers a REST API, so anyone who knows the Uniform Resource Identifier (URI) for your public storage service (http://oakleaf2store.blob.core.windows.net in this example) can list its containers with a simple HTTP GET request, such as http://oakleaf2store.blob.core.windows.net/?comp=list. Returning a list of blobs in the oakleafblobs container uses a similar syntax, with the container name prefixed to the query string: http://oakleaf2store.blob.core.windows.net/oakleafblobs?comp=list. The Fiddler2 Web Debugging Proxy's Request Builder makes it easy to write and execute RESTful HTTP requests.
The maximum size of an Azure blob is 50GB in the May 2009 CTP, but the largest blob you can upload in a single PUT operation is 64MB. Creating larger blobs requires uploading and appending blocks, which can range up to 4MB in size. To handle interrupted uploads, each block has a sequence number so blocks can be added in any order. You can update blobs by uploading new blocks, which overwrite earlier versions. Azure replicates all blob data at least three times for durability. Strong consistency ensures that a blob is accessible immediately after uploading or altering it. Blobs are scoped to containers, so a GET request to return a specific graphic blob from a public oakleafblobs container simply appends the blob's name to the container URI.
HTTP REST requests use PUT BlobURI to insert a new blob or overwrite an existing blob named BlobURI, which uses the same syntax as GET requests. GET BlobURI can return a range of bytes with the standard HTTP GET operation with a Content-Range entity-header. The REST API also supports HEAD requests to determine the existence of a container or blob without returning its payload. DELETE requests use the same URIs as PUT, GET and HEAD. PUT BlockURI, GET BlockListURI, and PUT BlockListURI use a similar addressing scheme. The Windows Azure SDK's "Blob Storage API" section contains more details about the REST API for Blob Storage.
Objectify Blobs and Containers with StorageClient

Writing the code to send HTTP request and process response messages directly isn't straightforward for most .NET developers. Fortunately, the Azure team provides a sample C# StorageClient class library that enables writing .NET code to manipulate containers and blobs in a more traditional object-oriented style. When you add a reference to StorageClient.dll, you gain access to the Microsoft.Samples.ServiceHosting.StorageClient namespace, which includes BlobStorage (account) and BlobContainer abstract classes, as well as BlobContents, BlobProperties and other classes in BlobStorage.cs. Classes in RestBlobStorage.cs handle translation from .NET method calls to RESTful HTTP requests; helper methods contain code to create blobs from blocks.
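The snippet below sketches that flow. The method names follow the StorageClient sample library as described above, but treat the exact signatures, and the container, file and method names, as approximations:

// Sketch: create a public container and upload a blob with the
// StorageClient sample library. Names and signatures are approximate.
using System.IO;
using Microsoft.Samples.ServiceHosting.StorageClient;

static void UploadSample()
{
    StorageAccountInfo account =
        StorageAccountInfo.GetDefaultBlobStorageAccountFromConfiguration();
    BlobStorage storage = BlobStorage.Create(account);

    BlobContainer container = storage.GetBlobContainer("oakleafblobs");
    container.CreateContainer(null, ContainerAccessControl.Public);

    BlobProperties props = new BlobProperties("logo.png");
    BlobContents contents =
        new BlobContents(File.ReadAllBytes(@"C:\temp\logo.png"));
    container.CreateBlob(props, contents, true);   // overwrite if present
}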
The sample code for the AzureBlobTest.sln solution shows you how to write a WebRole application to create storage account and container objects (see Listing 2), upload blobs from online storage in Windows Live SkyDrive or your test computer's file system (see Listing 3) to the container, and then display selected graphic blob content in a new browser window. All event-handling code is contained in the AzureBlobTest WebRole project's Default.aspx.cs code-behind file. FileEntry.cs defines the FileEntry class for uploading local files. The StorageClient library includes class diagrams for BlobStorage.cs and RestBlobStorage.cs class files.
Install the Windows Azure SDK CTP and the Windows Azure Tools for Microsoft Visual Studio CTP, download the sample code (see Go Online for details), read the Readme.txt file, and then unzip the files to a \AzureBlobText folder. AzureBlobTest.sln connects to the oakleaf2store Storage account by default, but it's easy to change to DeveloperStorage by editing the ServiceConfiguration.cscfg file.
After you redeem your hosted-service token, follow the instructions in the Readme.txt file to publish the project to your own hosted application and storage services in the cloud.
After you try it out for yourself, you'll likely agree that Azure is the easiest and quickest way for .NET developers to experience the enterprise platform of the future: cloud computing with Windows.
Building it: open up scan.py, and let's get started. We'll also be using the imutils package, which you can install via pip (pip install imutils).

We start off by finding the contours in our edged image on Line 37. We also handle the fact that OpenCV 2.4, OpenCV 3, and OpenCV 4 return contours differently.

Line 62 performs the warping transformation. In fact, all the heavy lifting is handled by the four_point_transform function.
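As a rough sketch of the steps just described (illustrative; it assumes edged, orig, and ratio were computed earlier in scan.py, and uses imutils helpers):

# find contours in the edge map, unpacking them in a way that works
# across OpenCV versions via imutils
import cv2
import imutils
from imutils.perspective import four_point_transform

cnts = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:5]

for c in cnts:
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    if len(approx) == 4:        # assume the 4-point contour is the page
        screenCnt = approx
        break

# apply the top-down perspective transform
warped = four_point_transform(orig, screenCnt.reshape(4, 2) * ratio)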
Python + OpenCV document scanning results
And speaking of output, take a look at our example document by running the script:…
Did You Like this Post?
Hey, did you enjoy this post on building a mobile document scanner?
If so, I think you’ll like my book, Practical Python and OpenCV.
Inside you’ll learn how to detect faces in images, recognize handwriting, and utilize keypoint detection and the SIFT descriptors to build a system to recognize the book covers!
Sound interesting?
Just click here and pickup a copy.
And in a single weekend you’ll unlock the secrets the computer vision pros use…and become a pro yourself!
Very informative and detailed explanation. Everything this blogger publishes is gold!
🙂 Thanks, Aaron!
Hello Adrian. Thanks for your awesome posts! I’m completely blind, and your content has greatly helped me develop a proof of concept prototype in Python for an AI-guided vision system for blind people like me.
Right now, I'm trying to add a function that will instruct a blind user whether his or her camera is properly capturing all four points of the object with the largest contour (hence assumed nearest); and
I think I can build on the block of code below to print out statements if the object with the largest contour does not have all four points, but:
I’m thinking of ways to do this that will allow me to instruct the blind user whether to move the focus of his or her camera to the left, right, up, or down …
– I just need approximations. I think this can be done by getting location coordinates of the missing point/s (out of the 4 supposed points) of the object with the largest contours, and using those values to calculate and print out comprehensible instructions for the end user?
Do you have any suggestions on how this can be done? Thanks in advance! 🙂
# if our approximated contour has four points, then we
# can assume that we have found our screen
if len(approx) == 4:
    screenCnt = approx
    break
P.S. This is for providing blind users with guided instructions on how to properly point their camera at a piece of document with text, in order to run the OCR functions of my software…
Hi Marx — this sounds like a wonderful project, thank you for sharing. I think you are on the right track here. Find the largest contour region in the image. If the approximated contour region does not have 4 vertices, tell the end user.
As for determining how the user should move their camera: the angles at the vertices should be (approximately) 90 degrees. Compute the angles at the contour's vertices, and if an angle is not 90 degrees, you'll know which corner is missing and then be able to give the user instructions on how to move the paper/camera.
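Here's a rough sketch of that angle check (illustrative only; it assumes approx is the four-point contour found above, and the 15-degree tolerance is just a starting guess):

import numpy as np

def corner_angles(quad):
    # quad: (4, 2) array of vertices in order; returns the interior
    # angle (in degrees) at each vertex via the dot product of its edges
    angles = []
    for i in range(4):
        p_prev, p, p_next = quad[i - 1], quad[i], quad[(i + 1) % 4]
        v1 = p_prev - p
        v2 = p_next - p
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angles.append(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
    return angles

angles = corner_angles(approx.reshape(4, 2).astype("float"))
# flag the page as skewed if any corner deviates too far from 90 degrees
skewed = any(abs(a - 90) > 15 for a in angles)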
Hi Adrian. Thanks for your help! 🙂
However, I’ve been searching and reading up on how to measure the angles of contour vertices for several hours now, but I just can’t find something that I can comprehend and use. 🙁
I also read up on how to sort contours from left to right and top to bottom, thinking that I'd be able to identify the missing edge and just tell the user to move his or her camera towards that missing edge for now (until I find something that allows me to give the user an approximation of how much to move the camera in that direction)…
Also, I'm thinking it might be better to implement this for a live camera feed, rather than for a captured image?
That's mainly because, before capturing the image for OCR processing, the user will need to know the right time to capture the image, in real time preferably…
I’d greatly appreciate your suggestions regarding this matter. Thanks again! 🙂
If your goal is to provide real-time feedback to the user, then yes, a live camera feed would be more appropriate. However, this makes the task a bit more challenging due to motion blur. I would instead suggest solving the problem for a single image before you start moving on to real-time video processing.
You can eliminate motion blur using a really good camera sensor, and manual control of white balance. This is critical since a lot of sensors come with a controller which by default will try to increase the brightness of the image by capturing multiple frames in succession and adding them together. This process is what creates motion blur so you need to simply disable automatic white balance in the controller, and you’ll get clean frames every time. However this also means that in some situations it will be too dark for the sensor to see anything. One way to solve this is to put a large amount of powerful infrared LED lights around or behind the sensor, and remove the infrared filter from the sensor so it becomes sensitive to infrared light. The sensor will not see colors, but for reading text from a page you don’t need colors. This way your sensor will see images even in total “darkness” without blinding the non-blind with a potentially strong white light. Reach out to me if you’re interested and I will send you information about such a sensor that we use in my company.
Thanks for sharing Skaag!
Hi Adrian, can you help me extract text from scanned images? These images are very low quality.
I am using the pytesseract module to extract text from scanned images, but I am not able to get it working. Please help me extract the text from a scanned image.
Hey Pratap — I’ll be covering how to cleanup images before passing them into Tesseract in my next blog post. Be sure to stay tuned!
Hi Adrian,
I am wondering what the title is of the follow-up post you mentioned above. Please let me know if you have it off the top of your head.
Thanks!
How long does it usually take to finish the process on a minimum config machine?
Less than a second.
Hello Adrian Rosebrock,
I am Vietnamese, and I am very interested in your blog.
But I see almost all your posts work on Mac OS?
I am using Windows, so your code can't work on my computer.
And I am a beginner (I don't have programming knowledge).
But I want to learn face recognition, text recognition and object detection/counting only.
Do you think I can?
I recommend using macOS or Linux to run the examples on this blog. Please note that I do not officially support Windows on this blog.
Hello Coung, I made these examples work on Windows; you should be able to as well.
I work on Mac OS now, but if you put in some effort you should be able to make it work on Windows.
Since you are not a programmer, I would suggest you start there, with some Python tutorials on Windows.
how hard would it be to pan parts of the document so it all fits into one panoramic view?
Awesome question, thanks for asking! Substantially harder, but certainly not impossible. Basically, your first step would be to perform “image stitching” to generate the larger panoramic picture. From there, you could essentially use the same techniques used in this post. Would you be interested in a blog post on image stitching?
Would it be better to do stitching after the perspective transform and threshold, or before?
It depends on what your end goal is, but I would do stitching first, then perform the perspective transform.
Hello Adrian,
An example of image stitching would be great!!
Thanks for your awesome posts
How can you publish applications written in Python to the App Store?
To my knowledge you can't. You must use C++.
Hi Can, you are correct. You must first convert it to iOS, but the algorithm is the same. All you have to do is port the code.
Or run the python code on a server and upload the image from your phone.
Exactly. And for applications that don’t require real-time processing, I highly recommend doing this. You can update your algorithms on the fly and don’t have to worry about users updating their software.
Hi Adrian,
I want to learn how to run a server so that my application gets processed on my computer.
Hi Usama, I don’t have any posts on creating a web API/server for computer vision code yet, but it’s in the queue. I should have a blog post out on it within the next few weeks.
Hi, how can you run a python code on a server? Where can I find a step by step on how to do this? thanks!
You can see this tutorial on converting an image processing pipeline to an API accessible via a server.
Use Kivy (a Python framework to build mobile apps) 🙂
Also, why do you need scikit-image? OpenCV already has adaptive thresholding.
The scikit-image adaptive threshold function is more powerful than the OpenCV one. It includes more than just Gaussian and mean, it includes support for custom filtering along with median (Although I only use Gaussian for this example). I also found it substantially easier to use than the OpenCV variant. In general, I just (personally) like the scikit-image version more.
If somebody wants to use the opencv threshold I think this is an equivalent substitute:
warped = cv2.adaptiveThreshold(warped, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 251, 11)
Cool (post-OCR) improvement would be to recognize receipts and communicate information with budgeting app 😉
Agreed! That would be a really fantastic improvement. And I think (most) budgeting apps provide an API to interface with the app.
Hello guys,
Great Job, good instructions, perfect !!!
Do you know, if and how this could be work on Android devices ?
Absolutely, you would just have to look into using the Java OpenCV bindings.
This example may be a good starting point:
Very informative. Great!! Thanks.
What applications are you using to code? I don’t recognise the icons.
Cheers,
Great post btw
I use Sublime Text 2 most of the time.
I’m sorry, what constitutes the “mobile” part of a mobile doc scanner?
To me: it runs on a smartphone that runs Android, iOS, WP8 or Whatever the name of the BB OS is.
You are correct, the “mobile” part of a mobile document scanner is an app that runs on a smartphone device that utilizes the smartphones camera to capture documents and “scan” them.
I would like to see some OCR on it or on just some simple text or numbers.
Hi Sasa, good point. OCR would be a really great extension. If you’re interested in recognizing text, especially handwritten digits, definitely take a look at my Practical Python and OpenCV book. Inside I cover recognizing handwritten digits. Definitely not the same as OCR, but if you’re interested in recognizing text, it’s a pretty cool place to start.
Yeah!
I played with that some time ago in order to scan books.
But I faced to a harder problem: pages of a book are not flat but warped.
I was able to isolate the curve the page made from the flatbed (*).
And then?
There is two transformation to achieve:
– first to convert from a warped surface to a flat one ;
– secondly to convert from a perspective surface to a rectangular one (easy as you did it).
How can we do the first conversion? Bilnear formula?
Another problem I used to face to is to progressively cancel the shade that appears as the distance between the page surface and the camera increases.
(*) There are different methods to achieve that:
– take a shot at 45° ;
– use the shade as an approximate distance from the lens.
I you have idea how to compute that …
db
I would like to know about this too.
Nice and Clear understanding…
This is a great approach when dealing with small things like a typical receipt. But unless you’re going to take multiple pictures and stitch them together, the resolution will suffer as the item to be scanned gets larger and you have to pull the camera back to get it all into frame. This is where purpose-built document scanners really shine. They can capture a metre-long receipt at full resolution.
hi
What IDE do I use Python and Open CV? Please download link
Hi Mohammad. I use Sublime Text 2 and PyCharm. Definitely check them out!
Thank
You’re very cool
Hi great post!! I really like your web and your tutorials are the best! Just a doubt… is it better to use a bilateral filter instead of a gaussian one to smooth the image? If a recall right, the bilateral filter preserves better features like edges…
Great post, keep doing this great job! Thanks.
Great observation! A bilateral filter could definitely be used in this situation. However, I choose to use Gaussian smoothing, simply because there were less parameters to tune and get correct. Either way, I think the result would be the same.
Thanks!
Hi thanks for the post, really helpful! I noticed the contour detection approach isn’t working to well when part of the document to capture is offscreen. Any idea’s on how to solve this? TIA
In general, you’ll need the entire document to be in your image, otherwise you won’t be able to perform a reliable perspective transform.
This exercise requires scikit-image, which someone who just installed OpenCV and Python on a new Raspberry Pi 2 would not have. Installing scikit-image in turn seems to require scipy, which I am trying to install (slowly using pip install -U scipy) at this very minute. Perhaps a setup step would help.
Good point Joe. How are you liking your Pi 2?
So far the Pi 2 is doing well. The installation of scipy took between 1 and 2 hours (I didn’t time it) and then scikit-image took only minutes. Using the browser thru VNC displaying 1920 x 1080 is a bit slow, I’ll have to work with a smaller screen. I won’t know if the Pi 2 is adequate for my application until I get there–if the application works but is slow I will have to go to a faster system, maybe a Tegra.
If you’re doing most of your work via terminal, I would suggest using SSH forwarding instead of VNC:
$ ssh -X pi@your_ip_address. You’ll be able to execute your Python scripts via command line and the OpenCV windows will still show up.
I withdraw the information about installing scikit-image. I didn’t realize that the first try had failed. In fact, it took over an hour.
Hi Adrian,
Thank you so much for the great article and for the rest of your series!
I’m stuck on the task of correcting the document scan of a sheet of paper that has been folded 2 or 4 times.
Could you please take a look at my question here:.
Will appreciate if you could give some direction on how to achieve this.
If the paper has been creased or folded, then you’ll want to modify Line 48:
if len(approx) == 4:
and find a range of values that work well for the contour approximation. From there, you’ll want to find the top-left, top-right, bottom-right, and bottom-left corners of the approximation region, and then finally apply the perspective transform.
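If the approximation comes back with more than four points, one way to recover the four corners is the sum/difference trick used for ordering points elsewhere in this series. A minimal sketch (the helper name four_corners is mine, not from the post):

import numpy as np

def four_corners(pts):
    # pts is an (N, 2) array of approximated contour points
    rect = np.zeros((4, 2), dtype="float32")
    s = pts.sum(axis=1)
    rect[0] = pts[np.argmin(s)]  # top-left: smallest x + y
    rect[2] = pts[np.argmax(s)]  # bottom-right: largest x + y
    d = np.diff(pts, axis=1)
    rect[1] = pts[np.argmin(d)]  # top-right: smallest y - x
    rect[3] = pts[np.argmax(d)]  # bottom-left: largest y - x
    return rect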
Adrian, thank you for the answer!
But I need to clarify: I’m able to find the corners of a folded/creased paper and perform the proper perspective transform using those four points. In other words, your whole algorithm works fine. What I’m doing is trying to take it a step further 🙂
The step I’m asking about is how to “straighten” (recover the rectangular shape of) the paper with OpenCV, i.e. stretch it so that its edges touch the surrounding rectangle.
I’m really not understanding where you put the path to the file that will be scanned. Can you give an example of proper usage of the code on lines 10-13?
If you have an image named page.jpg in an images directory, then your example command would look like this:
$ python scan.py --image images/page.jpg
Definitely download the code to the post and give it a try!
hi,
your source code was very helpful,
what’s the functions to do some image processing for the scanned image (contrast, btightness, contrast…) ?, i removed the part that gives the image the ‘black and white’ paper effect…
I would suggest taking a look at the official OpenCV docs to get you started.
Great work!!
Sir, can I use this on a Raspberry Pi B+ or 2?
If yes, then how?
Please guide me; I’m working on a related project.
You could certainly use either, but I would suggest going with the Pi 2. From there, you can follow my OpenCV install guide for the Raspberry Pi. Once you have OpenCV installed on your system, you should be able to download and execute the code in this post.
I keep getting this error at line 37.
(cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
ValueError: too many values to unpack
I’ve triend this script on 3-4 images but getting same error. I tried to debug, but didn’t succeed.
I’ve mentioned the solution to this problem many times on the PyImageSearch blog before. The reason you are getting this error is because you are using OpenCV 3 — this post was written for OpenCV 2.4, well before OpenCV 3 was released. Please see this post for a discussion on how the
cv2.findContoursreturn signature changed in OpenCV 3.
try this….
(cnts) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
This is incorrect. For OpenCV 3, it should be:
(_, cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
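If you need the same script to run under both versions, a small version-agnostic sketch (relying only on the length of the returned tuple):

# OpenCV 2.4 and 4.x return (cnts, hierarchy);
# OpenCV 3.x returns (image, cnts, hierarchy)
res = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = res[0] if len(res) == 2 else res[1]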
Many thanks Adrian, this information solved my problem !!
Awesome, I’m glad to hear it Jorvan 🙂
dude! you are so cool 😀 thanks buddy
No problem Parviz 🙂
I was very happy to see this tutorial, but then I found that you didn’t explain how to install OpenCV 2.4.X with Python 2.7.
So please give me a link for doing that.
More generally, there are no date-wise post archives here, so I can’t see everything that is on your website.
And how can I search for a particular post on your website?
There is a search bar at the bottom-right corner of the sidebar on every page of the blog. As far as explaining how to install OpenCV 2.4 and Python 2.7, I cover that in this post and in my book.
I tried a lot of advice from the internet to install scikit-image,
but it fails at this line:
Please tell us how to fix it.
Please see the official scikit-image install instructions. All you need to do is let pip install it for you:
$ pip install -U scikit-image
Thanks a lot for all your work, and for this tip too.
$ pip install -U scikit-image
while installing, it gets stuck at
Running setup.py bdist_wheel for scikit-image …
It’s likely not stuck. It takes a long time to compile and install scikit-image. If you take a look at your processor usage you’ll see that the processor is very busy compiling and installing scikit-image. What system are you trying to install scikit-image on?
Should this be installed inside the (cv) environment or outside?
If you are using Python virtual environments you should be installing inside the “cv” virtual environment.
Thank you very much for the help; I had the same problem. Really, thank you for sharing your knowledge with us.
Hello, Adrian!
I have downloaded your code and tried to launch it on my computer and I failed.
I use PyCharm 4.5, OpenCV300 and Python 2.7.
I think it’s not versions thing.
There is a line on main code (scan.py) :
from skimage.filter import threshold_adaptive
But there is no skimage folder in your project…
What should I do?
Thanks.
The error can be resolved by installing scikit-image:
$ pip install -U scikit-image
However, keep in mind that this tutorial is for OpenCV 2.4.X, not OpenCV 3.0, so you’ll need to adjust Line 37 to be:
(_, cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
Hi Adrian,
I also found that I had to add parens to the print statement arguments on lines 32, 56, and 73 when running with Python 3.5 (they are optional in v2.7).
There is also a warning that the skimage.filter submodule has had a name change to skimage.filters, so I added the ‘s’ on line 7 and now it runs without any errors or warnings. The warning says the old name is deprecated and will be removed in version 0.13. This occurs for both virtual environments/versions of Python.
It seems like these changes might impact some of your other code on the site as well.
Best regards,
Mike
Thanks for the update Mike. I’m still trying to figure out a decent solution to handle the OpenCV 2.4 vs. OpenCV 3 issue. As a blog post coming out this Monday will show, many users still use OpenCV 2.4 (although the number of OpenCV 3 users is growing and will continue to grow). But since this is a teaching blog, it’s not exactly easy to switch the code from one version to another when two equally sized groups of people are using different versions.
Do you have code for building OCR into the scanner?
Not yet, but that’s something I would like to cover in a future blog post.
Adrian,
I am a novice Python developer, but was wondering if the scanning and OCR reading could all be achieved via JavaScript/HTML5? I find it very hard trawling the JavaScript libraries to find something that will mostly do this. I don’t mind gluing things together, but would prefer it if most of the hard work were done by some commercial library if possible, and I obviously don’t mind paying for such a product. Do you know of any libraries that might fit this requirement?
I don’t use JavaScript often, but I’ve heard of Ocard.js being used for OCR with JavaScript. I haven’t personally tried it out, but it might be worth looking into.
Dear Adrian and other members
Your job is highly interesting, I have a related project for 3 weeks.
– First, I have to take one scanned page that can be inclined in any direction and do the transformation, as you did in your code, to produce a PDF.
– Second, snap two pages of a book, transform the edges’ curves into straight lines, and finally get these pages as a rectangle in a PDF.
Now I am trying to use your code as a model, but I don’t have OpenCV on my computer. Which module or function can I use? I have “skimage”. Are there other links or documents that can help solve this problem?
Thank you.
Having scikit-image is a good start, but I would really recommend getting OpenCV installed as well. I have tutorials detailing how to get OpenCV installed on your system here. I’m not sure if I understand the second part of your question, but if you’re trying to “scan” two pages of a book side-by-side, you can try finding the center spine of the book using edge detection. Once you have it, you can apply perspective transforms like the ones used in this post.
Thank you, I am now trying to install OpenCV as well.
In my second task I am trying to scan two pages of a book that are not side-by-side. So I will need to manage the horizontal lines that will look like curves, and the middle line between the pages that will not be clearly visible after scanning.
Please,
Can someone help me with how to proceed in the case of two scanned pages of a book? Two pages are scanned together and I need to do the transformation to get the flat rectangular form of these pages.
As I suggested over email, try to detect the “spine” of the book using the Canny edge detector. If you can determine where the boundaries of the book and the top/bottom intersect, you can essentially split the book in two, determine your four points for the transformation, and apply the perspective warp.
Does anyone have something similar built for JS? We are working on a document scanner and would love some help on where to get started.
OCR is not needed – we only need the cropping, alignment, and conversion to b&w.
Thank you!
Steve
Computer vision is slowly starting to expand to the browser (since computers are becoming faster), but currently there is no full port of OpenCV to JavaScript, making it hard to build applications like these.
If you want to do something similar to this in JavaScript, I would suggest wrapping the document scanner code into an API, like this tutorial does. From there you can make requests to the API from JavaScript and display your results.
Hey Adrian,
This mini project is really handy in terms of usage and fundamental exposure to IP.
But how do you integrate this Python code into a mobile Android application?
I would suggest wrapping your Python code using a web framework such as Django or Flask and then calling it like an API. You can find an example of doing this here.
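A minimal sketch of what such a Flask wrapper could look like; the /scan route and the scan_document helper are hypothetical stand-ins for the pipeline in this post:

from flask import Flask, request
import numpy as np
import cv2

app = Flask(__name__)

@app.route("/scan", methods=["POST"])
def scan():
    # decode the uploaded file into an OpenCV image
    data = np.frombuffer(request.files["image"].read(), dtype="uint8")
    image = cv2.imdecode(data, cv2.IMREAD_COLOR)
    # scan_document would run the edge detection, contour finding,
    # and perspective transform steps from this post
    warped = scan_document(image)
    ok, buf = cv2.imencode(".png", warped)
    return buf.tobytes(), 200, {"Content-Type": "image/png"}

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

The Android app would then POST the photo to this endpoint and display the returned image.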
Hi, scan.py hangs after Step 1. I left it running for more than an hour but it still didn’t finish. I am on Mac OS X El Capitan, using OpenCV 2.4.9 and Python 2.7. I have all the modules installed. I tried your transform_example.py and that works fine. Am I missing anything here? Do I need to hit a command or something?
Commenting out the following helped. Thanks. (Sorry, new to Python and OpenCV, but loving it so far.)
#cv2.waitKey(0)
#cv2.destroyAllWindows()
Interesting, I’m not sure why it would take so long to process the image during Step 1. The
cv2.waitKey
call wouldn't matter, provided that you clicked the window and pressed a button to advance the process. The
cv2.waitKeymethod pauses all execution until a key is pressed.
How do I save the final image to a new file?
You can use the cv2.imwrite method:
cv2.imwrite("output.png", warped)
And if you’re just getting started learning Python + OpenCV, you should definitely take a look at Practical Python and OpenCV. This book can get you up to speed with Python + OpenCV very quickly.
Thanks for this informative article
hi Adrian,
Really nice tutorial there. I am currently trying to follow it to build an app of my own. I wanted to ask if it’s possible to generate a crude 3D wireframe model from a photo, perhaps with the user’s help in correcting the edges: basically take 2-3 photos from a phone and then process them to create a simple 3D wireframe model.
It’s certainly possible, but you’ll realistically need more than 2-3 photos. I would suggest reading up on 3D Reconstruction.
Hi Adrian,
In line 37:
(cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
Would it be better to use cv2.RETR_EXTERNAL as the second parameter of findContours?
For this project, yes, I would suggest using cv2.RETR_EXTERNAL. I should have mentioned that in the original post 🙂 Thanks for noting this!
Hi, and thanks for a great tutorial.
To the point: I’m having trouble with the part where you sort the contours according to area, around line 38 in the sample code. The problem is that the contours corresponding to big blobs (such as the receipt border) do not end up anywhere near the top of the list. Instead the largest contour is a contour around the WHOLE FOODS title.
I have checked that the contours are indeed sorted in descending order by the value returned by contourArea(), and sifted through all contours to verify that the contour enclosing the receipt indeed is present in the list. However, the area corresponding to that contour is uncannily small (about 12 pixels).
The issue, I believe, is that findContours() here finds the contours of the Canny edges rather than the enclosed objects. However, why this happens to me and none of you is beyond my comprehension. Maybe I have an unknown ancestry to Murphy.
Anyway, does anyone here have any idea what might be going on?
Also, I’m using opencv 3.1.0 and python 3.4
Just to clarify, are you using the same Whole Foods receipt as I am in the blog post? It sounds like there might be an issue segmenting the actual receipt from the image. The likely issue is that the border of the receipt outline is not “complete”. Can you take a second look at the edge map of the receipt and ensure that it is all one continuous rectangle?
EDIT: Try changing cv2.RETR_LIST to cv2.RETR_EXTERNAL in cv2.findContours and see if that resolves the issue.
Yes I use the same image as in the post. Well almost. I used gimp to cut out the part containing the receipt.
Anyhow, the problem was a discontinuity in the edge map, just as you suspected. And a solution was to decrease the size of the Gaussian filter ((3,3) worked for me). Maybe the reason that I get this problem, but not anybody else, is that I use an original image of significantly lower resolution. I suspect your mobile camera does better than 496×669?
Thanks for quick response!
Indeed, the cropping in Gimp must have caused some sort of issue. My camera is an iPhone, so the resolution is very high. I actually reduced the resolution of the original image for this example. In fact, I wouldn’t suggest processing images larger than 640 x 480 pixels if at all possible. It’s normal to resize images prior to processing them. The less data there is to process, the faster the algorithm will run. And most computer vision functions expect smaller image sizes for more accurate results (edge detection and finding contours, for example).
If we input an image produced by a flatbed scanner, it shows this error:
AttributeError: ‘NoneType’ object has no attribute ‘shape’
How do we get past it?
Anytime you see an error related to an image being NoneType, it’s 99% of the time due to an image not being loaded from disk properly or read from a stream. Make sure the path you supply to cv2.imread is valid. I demonstrate how to execute the Python script via command line (and pass in the image path argument) in this post.
Hi Adrian,
I run your code on Anaconda, Windows. It runs perfectly.
I want to build a mobile Android app in Android Studio. There are many features, including document scanning: the user takes a picture of an object, and the app is responsible for producing the result from it. But this code needs some bindings with the Android code.
How do I do this? How do I integrate the two together?
I personally don’t use Windows or Visual Studio, nor to I do any coding for the Android environment. That said, you have two options:
1. Convert the code from Python to Java + OpenCV (there are OpenCV bindings available for the Java programming language).
2. Wrap the document scanner code as computer vision API, then upload the image from the Android app to the API, process it, and then return the results.
If the second approach (using HTML5/JavaScript) can be implemented, then exporting to mobile phones as a native application (for instance an APK for Android) would be very easy using Cordova.
Which is the exact same approach I took when building both Chic Engine and ID My Pill 🙂
I also demonstrate how to wrap a computer vision app as an API and access it via PhoneGap/Cordova inside the PyImageSearch Gurus course
Could this be easily modified for video?
Absolutely. You just need to wrap the code in a loop that accesses a video stream. This blog post should be a good starting point.
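As a rough sketch of that idea, process_frame below is a hypothetical helper standing in for the edge detection, contour approximation, and perspective transform steps from this post:

import cv2

cap = cv2.VideoCapture(0)  # or a path to a video file

while True:
    grabbed, frame = cap.read()
    if not grabbed:
        break
    output = process_frame(frame)  # run the document scanning pipeline
    cv2.imshow("Scanner", output)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()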
Thanks Adrian for your quick response. I can’t recommend you enough.
No problem, happy to help 🙂 And thanks for the kind words!
Hi, Adrian,
I tried a lot of different coupons besides your sample “Whole Foods” one.
I had difficulty finding the four points of the edge.
It looks like “approx” is not 4 points for some of them,
so the error is ‘screenCnt’ is not defined.
Thank you !
If the approximated contour does not have 4 points, then you’ll want to play with the percentage parameter of cv2.approxPolyDP. Typical values are in the range of 1-5% of the perimeter. You can try adjusting this value to help with the contour approximation.
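A sketch of what that sweep might look like; the 1-5% range is a rule of thumb, not a guarantee:

peri = cv2.arcLength(c, True)
screenCnt = None
# try progressively looser approximations until we get 4 points
for pct in (0.01, 0.02, 0.03, 0.04, 0.05):
    approx = cv2.approxPolyDP(c, pct * peri, True)
    if len(approx) == 4:
        screenCnt = approx
        break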
I’m having issues with getting a clean contour that represents a full piece of paper. My paper contour is represented by two separate tuples in the cnts array. One tuple is for the left and bottom edge, and a distance away is the tuple for the top and right edge. Adjusting parameters within the cnts array is too late to find a all encompassing document contour.
I tried changing the parameter in findContours() as suggested above from cv2.RETR_LIST to cv2.RETR_EXTERNAL but that did not fix the problem.
I took a photo with my iPhone of an 8×11 piece of paper with regular type against a plain dark background. I intentionally took it at a slight angle to test the transform function. It appears that the assumption of 4 clean points is failing.
If you’re not getting one contour that represents the entire piece of paper, then the issue is likely with the edge map generated by the Canny edge detector. Check the edged image and see if there are any discontinuities along the outline. If so, you’ll want to tune the parameters to
cv2.Cannyto avoid these problems or use a series of dilations to close the gaps in the outline.
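For example, one way to close small gaps (a sketch; the kernel size and iteration count will likely need tuning per image):

import numpy as np
import cv2

# dilate the Canny edge map to bridge small breaks in the outline
edged = cv2.dilate(edged, np.ones((3, 3), dtype="uint8"), iterations=2)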
I got everything installed, all very smooth, but I am experiencing the same problem:
STEP 2: Find contours of paper
Traceback (most recent call last):
File “scan.py”, line 63, in
cv2.drawContours(image, [screenCnt], -1, (0, 255, 0), 2)
NameError: name ‘screenCnt’ is not defined
I tried the auto_canny and still got the same error.
The problem is certainly on my end, but I’m not sure where. Thanks.
If the screenCnt is None, it’s because there are no contours in your image that contain 4 vertices. Take a look at your edge map and explore the contours. You might need to tune the value of the contour approximation.
Thanks for the awesome and detailed explanation. I am facing the same issue that sunchy11 was facing. I am not able to find the rectangle/four points, so it throws the error: name ‘screenCnt’ is not defined. I tried changing the value of the perimeter percentage from 1-5%. Still no luck. Can you please let me know what may be the issue?
Are you using the same example images in this tutorial? Or your own image?
thank you
Hi Adrian,
Great article. I am encountering the issue below when following your instruction. Please advise.
Note that I am using openCV (3.1), python (3.5.1), numpy (1.11.0) & scikit-image (0.12.3)
….site-packages/skimage/filters/thresholding.py”, line 72, in threshold_adaptive
“block_size {0} is even.”.format(block_size))
ValueError: The kwarg block_size must be odd! Given block_size 250 is even.
A few steps I revised in order to make it work:
+ parentheses for the print command: print (“STEP 1: Edge Detection”)
+ skimage.filter —> skimage.filters
+ comment out the following:
# cv2.waitKey(0)
# cv2.destroyAllWindows()
Thanks for sharing TD. It looks like the function signature of threshold_adaptive changed in the latest release of scikit-image. I’ll need to take a closer look at this. I’ll post an update to the blog post when I have resolved the error.
UPDATE: In previous versions of scikit-image (<= 0.11.X) an even block_size was allowed. However, in newer versions of scikit-image (>= 0.12.X), an odd block_size is required. I have updated the code in the blog post (along with the code download) to use the correct block_size so everything should once again work out of the box.
Adrian, this is wonderful!
Unfortunately my code doesn’t run past Step 1…it just stops before “(cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(cnts, key = cv2.contourArea, reverse = True)[:5]”
Do you know how I can fix this?
Hey Mickey — please read the previous comments, specifically my reply to “Ashish”. It details a solution to what is (likely) causing your problem with cv2.findContours.
Hey man,
you’re amazing!!!!
I’m a German student, and I’m working right now with opencv and Python.
I installed OpenCV and Python with another post of yours.
Now I wanted to scan a receipt, and searched the Internet. And what did I find? A post you wrote. 🙂
thanks a lot!
Awesome! I’m glad I could be of help Chuck! 😀
It works well for the given example but not for any other image, so please make it dynamic so it can recognize the edges of any image, i.e., in any color and any light.
It’s hard to ensure guaranteed edges in any color or lighting conditions, but you might want to try the automatic edge detector.
Hey Adrian,
My current set up is OpenCV 3.1.0 + RPi Model B + Python2.7.9
I’ve followed your tutorial on installing OpenCV 3 on the Pi. Did that include installing the scikit-learn module ?
I tried running the code the code and got an error : “No module named skimage.filters” on line 7.
What are the changes needed for the code to work on OpenCV 3? Thanks.
You’ll need to install scikit-image on your system:
Hey Adrian,
I tried “import scipy”; it works outside the cv environment.
When I’m in the cv environment it gives the “No module named scipy” error!!
Is there a way to move the SciPy folder to the correct path?
You need to install SciPy into your virtual environment:
Hey Adrian
Thx for your Kick-Ass-Sample-Code.
One question though:
Line 67: warped = warped.astype(“uint8”) * 255
I don’t really get what’s going on here (and why).
After coming out of the threshold_adaptive function, we need to ensure that the image is an 8-bit unsigned integer data type, which is what OpenCV expects.
I understand that astype casts the complete array to uint8, and that uint8 is an 8-bit unsigned integer. But why are we multiplying by 255?
Because (warped > T) returns an array of booleans, which when translated into integers is either “0” or “1”.
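Putting those pieces together with the newer threshold_local function, a sketch (the block size and offset values are illustrative and may need tuning):

from skimage.filters import threshold_local

# T is a per-pixel threshold image; comparing yields a boolean mask
T = threshold_local(warped, 11, offset=10, method="gaussian")
warped = (warped > T).astype("uint8") * 255  # booleans -> 0/255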
Hi Adrian!
I was wondering if there was a way to adjust the document scanner’s sensitivity to edge detection? I want to detect faint edges on white surfaces. Is there anything i can do about contouring or thresholding?
You can control the edge detection process via the two parameters to the cv2.Canny function. You can read more about these parameters here. But in general, you’re going to have a real hard time detecting faint edges against a white surface. Edge detection requires that there be contrast between the background and foreground.
Hi…
Can I get a certificate after completing this course?
I only offer a Certificate of Completion inside the PyImageSearch Gurus course. A Certificate of Completion is not provided for the free OpenCV/Image Search Engine courses.
This code did not work on the image of a graph I have:
I ran your code and it did not give me the edges as expected. This code is not generic enough to scan any kind of document. It would be nice if a generic code or approach could be suggested, because that is what the professional scanning apps do.
The user should not have to play with parameters like size or Gaussian blur.
While there are such things as “generic edge detectors”, they normally require a bit of machine learning to use. In fact, much of computer vision and machine learning is tuning parameters and learning how to tune them properly. Anyway, you might want to give the auto_canny function a try for parameter free edge detection.
Any idea why this line of the document scanner code:
warped = threshold_adaptive(warped, 255, offset=11)
takes so much time, like 30 seconds?
Thank you
If it’s taking a lot time to process your image, then your image is likely too large. Resize your image and make it smaller. The smaller your image is, the less data there is to process, and thus the faster your program will run.
Small question: in this line:
cv2.drawContours(image, [screenCnt], -1, (0, 255, 0), 2)
What do the brackets around screenCnt do, specifically? Also, is there another tutorial where the functions, algorithms and arguments are explained, or do we need to look at the OpenCV documentation?
The brackets simply wrap the contours array as a list, that’s all. I would suggest either referring to the OpenCV documentation or going through Practical Python and OpenCV for a detailed explanation of cv2.findContours.
Hi Adrian,
Beautiful work.
Is there a way to use a Hough transform (or some other command) to close open contours? Can you run a Hough transform over Canny output? What would that look like?
Sometimes 3 out of 4 edges of a document come out clearly in pictures, but the fourth is only half detected. It’s so close to working perfectly, but I don’t know what to do!
-em
If you have open contours, I would suggest using morphological operations to close the gap. A dilation or a closing operation should work well in that case.
Hi Adrian,
I love your blog!
I am having trouble with the image resolution. I want the output resolution to be similar to the image I am inputting.
Also, for some images that I input I get “NameError: name ‘screenCnt’ is not defined”. Does that mean the program does not detect 4 edges?
Thank you,
John
Yep, that’s correct! I would insert a
printstatement into your code in the
forloop where you loop over the contours to confirm this.
However, a better way to solve this problem would be to keep track of the ratio of the width of original to the resized image. Perform your edge detection and contour approximation on the resized image. Then, multiply your bounding box region by the ratio — this will give you the coordinates in terms of the original image.
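Sketched out, that ratio bookkeeping might look like the following (screenCnt is the 4-point contour found on the resized image, and four_point_transform comes with the post’s download):

import imutils
from pyimagesearch.transform import four_point_transform

ratio = image.shape[0] / 500.0        # original height / resized height
orig = image.copy()
small = imutils.resize(image, height=500)

# ... run edge detection + contour approximation on `small` ...

# scale the contour back up to original-image coordinates
warped = four_point_transform(orig, screenCnt.reshape(4, 2) * ratio)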
Hello Adrian Rosebrock,
Your blog is superb and I’d like to do something like your demo. I want to develop an iOS app which detects objects in video and counts the total number of objects found in the video.
So I plan to divide the video into multiple images, then select an image and try to identify the objects in it. I haven’t gotten much help with OpenCV elsewhere, but your demo can help me.
So can you please tell me how I can use your Python code in my Objective-C code?
Thanks in Advance.
Your project sounds neat! However, I only provide Python code on this blog post — not Objective-C.
Hi, Adrian!
Can this tutorial still be implemented in an app for current Android versions? If yes, how can I get in touch with somebody who does this kind of work?
Thanks
Yes, you can use this algorithm inside of an Android app, but you would need to port it to Java + OpenCV first. As for finding an Android developer, I would suggest using Upwork or Freelancer to find a developer suitable to your needs and budget.
Had to change Line 37 to:
_, cnts, _= cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
with Python 3.5.2 and OpenCV 3
Indeed, this blog post was written well before OpenCV 3 was released. You can read more about the changes to cv2.findContours between OpenCV 2.4 and OpenCV 3 in this post.
How would you apply OCR to the processed image?
There are many ways to apply OCR to the thresholded image, but to start, you could try Tesseract, the open source solution. I also really like the Google Vision API for OCR as well.
Good explanation. Can you tell me where I could use this feature — I mean, where would this document scanner be used?
Hey Tahir — can you elaborate more on what you mean by “where to use this document scanner”? I’m not sure I understand what you mean.
Hi, Adrian. It’s been a while since you created this blog. I got a question for you. I want to build this kind of paper scanner myself. Do you think it’s possible to make the scanner detect some sort of QR codes at each corners of the paper ? And then use the qr codes as the corner of the digitized paper instead of using edge detection like yours. I could use some help. Thanks before, btw
Absolutely — as long as you can detect the four corners and recognize them properly, that’s all that matters. I don’t have any QR code detection blog posts, but you might want to take a look at the ZBar library. Once you detect the markers, order them as I do in this blog post, and then apply the transformation. I would suggest starting with an easier marker than QR codes just to understand the general process.
Thanks for replying. I’m quite new to image processing, so maybe I need to ask a few questions. I know how to detect a certain shape or a square, but I never try to detect 4 squares. What’s the easiest way to do this ? Can you send me a link to your blog that explain this or some other blog maybe ? Also, how can I order them and then apply the transformation ? Is that what line 41 is ?
If you’re trying to understand how to order coordinates, start here.
From there, read this post on applying a perspective transform.
As for detecting squares, the simplest method is to use contour approximation and examine the number of vertices, as this blog post does. I also have an entire blog post dedicated to finding shapes in images.
I hope that helps!
Thanks, this really helps !
Hi Adrian,
Thank you for the very cool article. I am actually trying to port your code to Android (using OpenCV 3.1 and the Android bindings) but I have gotten stuck at the step of applying the Canny filter.
Although I am using the very same parameters as you (and also downscale images to 500 rows), the edge detector does not seem to detect the horizontal edges of the paper, even though there is good contrast and the background is not busy.
It is strange, because vertical and angled edges are picked up nicely.
I have even gone as far as lowering the thresholds to 10 and 20, and while that produces tons of false edges (as expected) it does not produce more than a handful of dots from the horizontal or near-horizontal edges.
I suppose I am missing something trivial. I have even tried the OpenCV Android sample app and its Canny picks up edges nicely.
Your help would be really appreciated.
That is quite strange, although I must admit that I do not have any experience working with the Java + OpenCV bindings outside of testing them out by writing a program to load and display an image to my screen. This does seem like a Java specific issue so I would suggest posting on the OpenCV forums.
Hi, I am a beginner in Python.
I have a question on Line 38:
cnts = sorted(cnts, key = cv2.contourArea, reverse = True)[:5].
How does the “[:5]” notation work?
I understand the “sorted” function, but I cannot understand that part.
Sorry for bothering you.
The [:5] is just an array slice. It simply takes the first 5 elements in the list and discards the rest. You can learn more about array slicing here.
Hi
Thanks a lot for the very informative post. Could you elaborate a bit on why you resize the image before the edge detection and why exactly to a height of 500 pixels? Because I tried your technique on a couple of images with and without resizing to 500 pixels. It worked perfectly for the resized images but for the original ones (they were bigger) the edge and contour detection failed horribly. I would probably just need to tune the parameters a bit differently?
Thanks!
In computer vision and image processing we rarely process images larger than 600 pixels along the maximum dimension. While high resolution images are appealing to the human eye they are simply too much detail for image processing algorithms. You can actually think of resizing an image as a form of “noise removal”. We discard much of the detail so we can focus our attention on a smaller version of the image that still contains the same contents — but with less “noise”, leading to much better results.
Getting error: No module named pyimagesearch.transform.
Please help.
You need to download the source code for this blog post using the “Downloads” section of this tutorial. You are likely forgetting to create the pyimagesearch directory and put an __init__.py file inside of it.
hey there
which libraries are you using, and which versions?
thanks man
This blog post assumes you are using OpenCV 2.4 and Python 2.7. The code can be easily adapted to work with OpenCV 3 and Python 3 as well.
Hi, can I get the same concept (Mobile Document Scanner) on Android?
I only cover Python + OpenCV on this blog. You would need to convert it to Java + OpenCV for Android.
Hi,
Thanks for the awesome and comprehensive tutorial!
I actually want to apply this to some photos of receipts, but unfortunately not all the corners are inside the image (there is even a photo where not a single corner is in the image).
Is there a way to use the four_point_transform in this case? If yes, how? And if not, is there a good way to deskew the image?
Thanks!
If you lack the corners you can apply the perspective transform to the entire image, although I really don’t recommend this. Otherwise, you can try to deskew the image. I’ve been meaning to do a tutorial on this, but the gist is that you threshold the image to reveal the text, then compute a rotated bounding box around a text region. The rotated bounding box will give you the angle that you can correct for. Again, I’ll try to do a tutorial on this in the future.
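A rough sketch of that deskewing idea, assuming thresh is a binary image with text pixels set to non-zero (the angle normalization follows cv2.minAreaRect’s convention):

import numpy as np
import cv2

# collect the (x, y) coordinates of all text pixels
coords = np.column_stack(np.where(thresh > 0))
angle = cv2.minAreaRect(coords.astype("float32"))[-1]
# minAreaRect returns angles in [-90, 0); normalize to a small rotation
angle = -(90 + angle) if angle < -45 else -angle

(h, w) = thresh.shape[:2]
M = cv2.getRotationMatrix2D((w // 2, h // 2), angle, 1.0)
rotated = cv2.warpAffine(thresh, M, (w, h),
    flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)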
Hi Adrian
I’m struggling to understand how exactly the number of vertices are approximated in lines 43 and 44. Would you mind explaining this?
Many thanks
First, we compute the perimeter of the contour. We then take the perimeter and multiply it by a small percentage. The exact value of the percentage may take some fiddling based on your dataset, but typical values are between 0.01-0.05. This percentage controls the actual approximation according to the Ramer-Douglas-Peucker algorithm. The larger epsilon is, the fewer points are included in the actual approximation.
Hi,
I am working on a similar project and your tutorials are of great help.
My goal is to detect the total price mentioned on a receipt. How can we achieve that so I can easily detect the price? Any input will be much appreciated.
It sounds like you are trying to apply Optical Character Recognition (OCR) which is the process of recognizing text in images. This is a very extensive (and challenging field). To start, I would suggest trying to localize where in the image the total price would be (likely towards the bottom of the receipt). From there you can apply an OCR engine like Tesseract or the Google Vision API.
Thanks Adrian!! The practical uses for computer vision techniques are amazing. I like them. Do you have a post or any suggestion on how to run the Python code on Android mobile phones? Regards!
Unfortunately, I don’t know of a way to use OpenCV + Python on a mobile devices. I would instead suggest creating a API endpoint that can process images and have the mobile app call the API.
Hi, it’s a great tutorial. May I ask instead of using skiimage adaptive thresholding, is it possible to use the adaptive thresholding in cv2, such as
cv2.adaptiveThreshold(img,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY,11,2)
since there is a problem importing the skimage Python package into AWS Lambda? If it is possible, it would be great if the cv2 function parameters that best suit this mobile scanner application could be provided. I am new to OpenCV and skimage. Any suggestions would be appreciated. Thanks in advance.
You can certainly use OpenCV’s adaptive thresholding algorithm, that’s no problem at all.
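As a drop-in sketch replacing the scikit-image call (the block size of 11 and constant of 10 are illustrative and may need tuning):

import cv2

# warped must be a grayscale uint8 image
warped = cv2.adaptiveThreshold(warped, 255,
    cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 10)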
Hi there, thanks for the great piece of work. I am having trouble reliably finding the contour. Would there be a way to have cv default to the image boundaries, or maybe a bounding rectangle, if the contours could not be approximated? Thanks again so much, and happy new year.
If you are having trouble finding the contour of the document then I would suggest playing with the edge detection parameters. If the script cannot find this contour then there isn’t really a way to “default” any coordinates unless you were working in a fixed, controlled document where you know exactly where the user was placing the document to be scanned.
You could just check whether screenCnt is None, and in case it is, default to the whole image. Also, I found that blurring the edged image before approximation makes the approximation better.
Hi. Thanks for the article – it was really helpful.
I went further by adding OCR, and optimizing the code for that matter.
My code and thoughts on the subject can be found here:
I hope it will be useful for people who want to make the next step.
Thanks for sharing Alexander!
Hi Adrian,
Thanks for this. This was a good beginning to learn OpenCV.
I struggled to install OpenCV on Mac but was then successful on a Linux box.
I used Python 2.7 and OpenCV 3.1.0.
Line 37 :
(cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
in this example gave me an error (ValueError: too many values to unpack).
I changed left hand side of line 37 to:
(_, cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
and it worked!
Hi Adrian, I followed your tutorial on installing OpenCV on the RPi3, so it is working in a virtual environment.
Now I find that I can use scipy, skimage (and I also tried sklearn) only outside the virtual environment; inside the virtual environment those packages are not found. I have tried many times to “pip uninstall” those packages and install them again from within the virtual environment, but nothing changes.
Maybe you have some tips, since I am completely lost.
Hey Sergi — it sounds like you have installed SciPy, scikit-image, etc. into the global site-packages directory for your system. You mentioned installing them into your virtual environment, but I don’t think the commands worked:
Hello! Thank you for the wonderful work.
I followed your raspberry pi installation guide and managed to install CV 3 on my rpi3 (raspbian jessie with pixel). I went here and downloaded your code.
I’m not able to install scikit-image package in the virtual environment.
I did :
workon cv
pip install -U scikit-image (with sudo too)
first try: failed, a lot of garbage from Python but only “memory error”.
second try: No eggs found in /tmp/easy_install-V0e5rR/Cython-0.25.2/egg-dist-tmp-dt9B8k (setup script problem?)
But it keeps failing. I managed to install scikit-image outside the virtual environment with apt-get but, obviously, it doesn’t work in cv…
pip is working in the cv : workon cv, pip install scipy
python
>>> import scipy
doesn’t return an error. It just took 3 hours 🙂
—
I’ve read crazy solutions like apt-get download the packages and install them manually but I’d like to find a cleaner workaround.
Any idea ?
If you are getting a memory issue, try using the --no-cache-dir flag when installing via pip:
$ pip install scikit-image --no-cache-dir
This should alleviate the issue.
Hello Adrian, can you tell me how to get the transform module? Also, how do I set up the module so that I can use it for other programs as well?
Use the “Downloads” section of this blog post to download the example code (including the “pyimagesearch” and “transform” mini-modules for the example).
Hi!
I don’t know if you’re still active
but anyway
just wanted to thank you
me and my friend are doing a project
and you’re code really saves us
without you we were lost
so – making long story short –
THANK YOU VERY VERY MUCH
YOU’RE OUR SAVIOR!!!
Thank you Guy, I’m happy the tutorial helped you out!
Hello, are there any guides or examples of how to use OpenCV in a Xamarin Android environment? I’m working on an Android app and need to find a good alternative to Scanbot and other expensive solutions.
Hey Karl — I don’t have any experience with OpenCV + Android environments so I’m unfortunately not the right person to ask regarding this question.
Hello Adrian, I just came across this post and it’s very helpful. Do you have anything on OCR,
or is it possible to modify this code for that purpose?
I don’t have any posts on OCR right, but I will be covering OCR in the future.
Can you suggest a tutorial or any other resource for making an Android or iOS mobile scanner app that integrates with the mobile camera and converts the captured image to PDF or JPEG?
Hi Adrian,
Many thanks for uploading your tutorials! I have a problem with installing the pyimagesearch module though. I am using a Jupyter notebook; usually when installing packages I use the following code:
!pip install [PACKAGE NAME]
However, this time it gives the following error: Could not find a version that satisfies the requirement pyimagesearch (from versions: )
No matching distribution found for pyimagesearch
Do you have any idea what might cause the problem?
There is no “pyimagesearch” module on PyPI. You need to use the “Downloads” section of this tutorial to download the code + example images, then place the pyimagesearch directory in the same path as your Jupyter Notebook.
Just wondering … where is the best place to find an Android/iOS developer?
I get “error: argument -i/–image is required” when I run the program.
You need to read up on command line arguments.
i’m using windows
can u please tell me how to do it, with possible changes/solutions?
Sir, how do I save the scanned image?
I would suggest you use cv2.imwrite to write the image to disk. If you’re just getting started learning the basics of OpenCV and Python, I would absolutely suggest you work through Practical Python and OpenCV. This book will help you learn the fundamentals of OpenCV and image processing.
Hey, great tutorial. Very informative and easy to understand.
However, I am unable to figure out how to use it in Android Studio to build the app.
Please help.
Hi Onkar — this blog primarily covers OpenCV and Python; it does not cover Java. I would suggest you read up on the Java + OpenCV bindings and convert the code. Alternatively, you can build a REST application where the Java app sends the image to be processed on a Python server and the results are returned.
The imutils module is not included in the code download for Basic Image Manipulations, so I can’t progress past the first part of this tutorial.
Each blog post is independent from the others. You should either use the “Downloads” section of the blog post to download the code or install imutils via pip:
$ pip install imutils
Hey chief,
I have successfully tried the code on my machine with the provided images, but when I tried images downloaded from the web, I was unable to find contours around the sheet. What can be done to make the code work for any image?
Any help would be greatly appreciated.
Thank you
If there is not enough contrast between the edges of the paper and the background, then the paper may not be detected. Furthermore, if the paper is noisy/folded/etc., the contour approximation won’t be able to locate the paper region (since it might have more than four vertices). In that case, you would want to train a custom object detector. I’ve also seen work on using machine learning specifically to find edge regions (even low contrast ones) in documents. I can’t seem to find the paper though. If I do, I’ll be sure to link you to it.
Hello Adrian,
My name is George. I’m a fellow UMBC alum. Thanks for taking time out to answer questions on your platform. I’m currently building a scanner app, but I’m having a problem with document detection. It scans documents great 90% of the time. However, if the document has an image of a rectangle/box on it, the scanner will detect that image instead of the outermost 4 corners of the document. Could this be happening because the Gaussian blur isn’t reducing noise before edge detection?
Your thoughts are greatly appreciated. Sincerely George
This code will find the largest rectangular region that has 4 corners. It accomplishes this by sorting the contours by area. If you’re detecting a rectangular region inside the document, then you’ll want to double-check your contour approximation. It’s likely that the contour approximation parameters need to be tweaked since there are likely more than 4 corners being detected on the outer document.
Hello Adrian,
I just signed up for your ten-day course; great first class. I am running the code with Python 3.5.2, scikit-image 0.13 and OpenCV 3.2.0 on Ubuntu 16.04.
When I run the code, scikit complains with the following warning:
/home/juan/.virtualenvs/cv/local/lib/python3.5/site-packages/skimage/filters/thresholding.py:222: skimage_deprecation: Function threshold_adaptive is deprecated and will be removed in version 0.15. Use threshold_local instead.
def threshold_adaptive(image, block_size, method=’gaussian’, offset=0,
/home/juan/.virtualenvs/cv/local/lib/python3.5/site-packages/skimage/filters/thresholding.py:224: UserWarning: The return value of threshold_local is a threshold image, while threshold_adaptive returned the *thresholded* image.
warn(‘The return value of threshold_local is a threshold image, while
(the warning cuts off there, but the idea is clear)
So I tried replacing threshold_adaptive with threshold_local, but I get a blurry image instead of the black & white one. I can’t figure out why.
Also, if I remove line 66 (just to see what happens) the image I get is “white & black” (inverted, vs. black & white). I don’t get exactly why. Is it due to the casting when you do the astype?
Finally, all my images are rotated 90 degrees on screen; I’m also not sure why.
Thanks a lot!!
Hi Juan — it’s important to understand that this is just a warning, not an error message. It’s simply alerting you that the
threshold_adaptivefunction will be renamed to
threshold_localin a future release of scikit-image. The functions should be the same (only with a different name), so I’m not sure why they would give different results. I will look into these and see if any parameters have changed.
Hi Adrian and all,
Just wanted to chime in on this.
I’m having the same issue as Juan and it seems that threshold_local doesn’t quite work the same as threshold_adaptive? The work around for threshold_local can be found here:
Here’s how I implemented the doc scanner based on the above scikit doc:
Thank you for sharing, John! It looks like scikit-image has deprecated the function. I’ll get this blog post updated.
Hi Adrian,
I used the code from John Goodman’s post above. It doesnt work as before. Your code was working perfectly earlier. Now it doesnt work the same way. Could you update the example so it works like before using threshold_local.
Thanks.
regards,
Rajeev
Hi Rajeev — certainly, I will get the post updated by the end of January.
I just wanted to follow up here and say that the code + associated downloads have been updated to handle Python 2.7/3, OpenCV 2.4/3, and scikit-image.
usage: scan.py [-h] -i IMAGE
scan.py: error: argument -i/–image is required
I encounter this error.
What happens on line 12? Where should I give the image path, and how is it given? Please provide an example.
Please read up on command line arguments.
Hello Adrian,
I am replicating a similar algorithm in OpenCV4Android, but I’m running into a problem: the output I get from the Canny edge detector is not even close to yours in terms of edge detection quality. It rarely gets all 4 side edges of the documents.
I have resized the image to an even smaller one and tweaked the parameters of both the Gaussian filter and the edge detection. I am even using THRESH_OTSU as a parameter for the Canny edge detection. But no success.
How would you approach this?
Thank you very much
Hi Pablo — I don’t have any experience using the OpenCV + Java bindings, so it’s hard to provide any substantial guidance here. I would likely speak with the Java bindings developers and explain to them how you are getting different results based on the programming language.
Sorry, but how do I install the pyimagesearch package? Please :/
You simply use the “Downloads” section of this blog post to download the source code and example images. This download includes the “pyimagesearch” module covered in this post.
What if my rectangular paper has many tiny rectangles on it?
How will it find the largest rectangle?
Lines 37-50 will find the largest rectangular object in the image.
hey Adrian,
In cases where the receipt’s or paper’s image is not exactly rectangular,
or its outer edge does not form an exact contour,
how should I process the image?
If the outer edge of the document is not rectangular I would suggest being more aggressive with your contour approximation. You will need to reduce the number of points to four in order to apply the perspective transform.
Whenever I try to run the code I get:
Traceback (most recent call last):
File “scan.py”, line 40, in
(cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
ValueError: too many values to unpack (expected 2)
What should i do?
Please read the other comments before posting. Your question has been addressed multiple times in previous comments. See my reply to “Mohd Ali” for more details.
I was trying to draw the contour on the original image (not the scaled-down one), but I must be doing something wrong. How would one go about doing this? A couple of attempts of mine…
cv2.drawContours(orig, [np.multiply(screenCnt,ratio)], -1, (0, 255, 0), 2)
cv2.drawContours(orig, [screenCnt * ratio], -1, (0, 255, 0), 2)
Both of these produce an error saying (-215) npoints > 0 in function drawContours
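A likely fix, assuming the error stems from the floating-point coordinates produced by multiplying by ratio (cv2.drawContours expects integer point arrays):

# cast the rescaled contour back to integers before drawing
cv2.drawContours(orig, [(screenCnt * ratio).astype("int")], -1, (0, 255, 0), 2)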
Awesome post. How would you proceed on transforming the perspective of the whole image?
Let’s say –
1) we detected the reciept with a set of four co-ordinates.
2) the next step, I assume, is to provide four coordinates corresponding to this receipt in the perspective-corrected output image
3) next step is to acquire the relationship (homography matrix) between these two sets of four co-ordinates
4) next to calculate the resultant image size
5) next to warp the image.
Is this how I should do it? Let me know if I am missing anything.
Your question actually reminds me of this StackOverflow question on computing a homography matrix to transform an entire image. I would start there.
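As a sketch of step 5 — src and dst are the two sets of four points from steps 1 and 2, and out_w/out_h are the output size computed in step 4 (all names here are illustrative):

import cv2

# src: the four receipt corners found in the photo (float32, tl/tr/br/bl)
# dst: where those corners should land in the corrected output image
M = cv2.getPerspectiveTransform(src, dst)
# warp the *entire* image, choosing an output size large enough to
# contain the transformed result rather than just the receipt crop
warped = cv2.warpPerspective(image, M, (out_w, out_h))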
Hi Adrian! Today I was porting your example to C++ and you can find it in this link
Could you shine a light on this and tell me why the result image from my program doesn’t look as good as the output of your program?
Thank you!
Hi Carlos — thanks for sharing the C++ implementation, I’m sure many other PyImageSearch readers will benefit from this.
As for your result, I think your adaptive thresholding may be incorrect (disclaimer: I didn’t look at the code, just the output).
You are welcome! Your website is a great source of inspiration and learning; I am happy to contribute at least a little bit.
I just have a question as to how you could grab the text, put it into a variable in Swift, and have it print out. In other words, how could you recognize the words in a captured picture and feed them directly into code?
I don’t cover Swift programming here on PyImageSearch, but the process you are referring to is called Optical Character Recognition.
Hello, do you have a bubble sheet scanner that extracts the data in the sheet, like a, b, c, d, e? Sorry for the English…. And how would you implement it in Android?
Hi Adrian,
I stumbled on your blog post years later. It’s been very educational and informative for me.
Since I didn’t see anyone posting a Flask version of this, I wanted to share a quick and dirty way to use your function with Flask.
Basically make a post request with a file in the body and get back the scanned image as a response.
I hope others find it helpful.
Cheers
Awesome, thanks so much for sharing MichaelCu!
Hi Adrian,
Great and informative post. I do have a question, though, from my experiments with the code. It seems very fragile if there is some occlusion of an edge (say we capture just the document, and only one side has some background). It also seems fragile if the Canny edge detector gets most of the outline of the document but finds a break in one of the edges (say I’m holding the paper in my hand).
In these cases, it seems the
if len(approx) == 4:
screenCnt = approx
break
doesn’t work.
Do you have any advice for handling these cases?
You are correct, you need a good segmentation in order to perform the contour approximation test. You might want to consider training your own custom object detector that specifically detects rectangle/document-like objects.
Hey,
You mentioned an Android app. Is it possible to create an Android app using Python? Any suggestions regarding this?
Thank you,
No, not easily. The problem is getting OpenCV + Python to interface together on the mobile app. I would suggest re-creating the app using the Java + OpenCV bindings for Android.
Hi Adrian,
I’ve had couple of doubts about parameters used in he openCV codes. How do you choose the optimal parameters for Canny function and how it affects the efficiency of edge detection?
How do you Define the Kernel size for gaussian filter?
How do you choose the optimal epsilon for approxPolyDP function?
Your help will be much appreciated!
Thanks
The short version is that you need to experimentally tune the values for both the Canny edge detector and approxPolyDP. The values I use here tend to work well when there is sufficient contrast between the background and foreground.
Can you please tell me how to measure the accuracy of this process on a given image?
Thanks
Hello sir,
I am Ankita, pursuing my undergraduate degree in computer science. I have been following all your projects and think they are great. I would be glad if you could help me with a mini project on fingerprint matching using Python and OpenCV.
Thank you,
Hi Adrian,
This was the first of the free lessons given to me from your course.
I unfortunately was not able to jump right in, as I do not see any mention of how to install the pyimagesearch package in my Windows Python 2.7 (with OpenCV). There is no mention of it on your website, from what I can find. Can you direct me to how I would install it through the console? Thanks!
Alex
Hi Alex — please use the “Downloads” section of this tutorial to grab the source code + example images. This will enable you to grab the “pyimagesearch” module.
Hi Adrian,
Thanks for the awesome article – very informative and well explained. I managed to build a working in-browser version with opencv.js in just a little more than 5 minutes 😉
Cheers,
Julian.
Congratulations Julian, nice job!
Hi Julian, do you mind sharing how you were able to achieve the in-browser version?
Hello,
I’m currently working on a project for a contest and your blog really helped me quicken the process. So far everything went flawless but atm I’m encountering a problem regarding skimage. I’m trying to make a image be readable for tesseract but so far I somehow struggle on getting the same results as you did by using the adaptive threshold function. I even tried adding multiple filters before that but noting seemed to work.
Would really appreciate the help!
Hey Wasabi, thanks for the comment. I’m not sure what the exact issue is in this case. Which version of scikit-image are you using?
Hello
Thanks for this article. I’m new with python and I’m having trouble with cv2.waitKey(0). I have all the modules installed but it freeze.
I would really appreciate the help!
Albe
Hey Albe — make sure you click the active window opened by OpenCV and then press a key on your keyboard. This will advance the script.
Hi,
when I run this code, my image after thresholding isn’t as clear as yours. Could you please explain why this is happening?
Are you using the same example images as the ones in this post? Or are you using different images?
Hello Adrian,
I want to thank you for posting this tutorial.
I will be using your code to scan pictures of music sheets in order to make a Nao robot play simple melodies on piano. This will be done by generating a MIDI file from the scanned sheet.
Can you tell me how to properly credit you? I was thinking of
“Credits to Adrian Rosebrock on”
Is that OK?
Thanks,
Gabriel
Hi Gabriel — that is a perfectly fine credit, thank you. If you publish your code on GitHub or your own personal site I would appreciate a link back to PyImageSearch from the readme file/webpage.
Hi Adrian
There are some cases that the intended contour isn’t a closed one. Is there any way to handle such cases using openCV’s functions or I should write my own algorithm?
Try using morphological operations such as a dilation or closing operation to close the gap between the contours.
Another problem is that paper contour is connected to other contours in the background. this is the link for canny result: ‘’ if you want to see. I know the background is not suitable for the purpose but document scanner apps (like CamScanner) are able to detect the paper even in such cases. I think your assumption is not completely safe at least for a publishing app.
To solve this problem I used a houghline transform to detect lines, but then I don’t know how to extract that final four points. Do you have any idea?.
Hi Adrian, Everytime I run scan.py I am given this message
usage: scan.py [-h] -i IMAGE
scan.py: error: the following arguments are required: -i/–image
can you please help?
Please take a look at the comments before posting as I have addressed this issue a few times. See my reply to “vasanth” on May 7, 2017 to get started.
Hi Adrian, I have one doubt in step 2..This code is only use for rectangular shape objects?
can i use the same logic for t shirt shape images?
Correct, this method is meant to be used for rectangular shaped objects. You could technically use something similar for t-shirt detection but that would require you to obtain a very nice, clean segmentation of the t-shirt. You should instead consider training your own custom object detector or perform some sort of pixel-wise segmentation using deep learning, such as Mask R-CNN or UNet.
i’m getting this when i run this code:
usage: scan.py [-h] -i IMAGE
scan.py: error: argument -i/–image is required
Please help
You can solve this error by reading up on command line arguments.
Good Tutorial. I try to implement it in android. My goal is to Scan a national Id Card. I’m able to draw on the contour but sometimes it draws in information inside my card too. Do you have any suggestion how to avoid it?
I would suggest tuning your contour approximation values. It sounds like the code is finding areas inside your card that has four vertices but the card itself does not have four vertices (at least according to the approximated contour).
Hi Adrian,
Thanks for your feedback. Finally, I did it by looping trough all the countour detected ( boundy rectangle and get the largest rect (Width and height).
Have you written some articles describing how to capture(auto capture) image from camera when Threshold are found (distance between camera and object) lighting condition?
Any suggestions will be appreciated.
Regards….
I would start with this blog post. You can modify it to capture images/frames when the distance reaches some threshold.
Hi Adrian,
i’m getting this when i run this code:
STEP 1: Edge Detection
Traceback (most recent call last):
…
cv2.drawContours(image, [screenCnt], -1, (0, 255, 0), 2)
NameError: name ‘screenCnt’ is not defined
STEP 2: Find contour of paper
Process finished with exit code 1
can you give me a help,thank you.
I get the impression that you may have copied and pasted the code rather than using the “Downloads” section of this post to download the code. Make sure you use the “Downloads” section to download the code — this will help reduce any copy and paste errors as I believe what happened here.
Hi Adrian,
This is an awesome resource, Wondering if you have any suggestions on how to get your code to work for a web app
I’m not sure how to answer this question as the term “web app” is pretty loose. What exactly is the goal of your web app?
hi Adrian ,fantastic way to explain things and show how it is done.
How to install pyimagesearch module for four_point_transform as it shows no module named pyimagesearch .
Or We need to write full code under the function four_point_transform in order to use it
Hey Jatin, make sure you use the “Downloads” section of this blog post to download the source code. It will include the functions/modules that you need.
Adrian – great articles.
I am following the 10 day crash course and have crashed!!
When trying to run scan.py I keep getting the error:
ImportError: cannot import name threshold_local
I have looked through all of the comments and just cannot find a working solution
I am importing it using:
from skimage.filters import threshold_local
All scripts are in a directory named pyimagesearch and I am running the script using:
python scan.py –image images/receipt.jpg
Hey Bill, please see my reply to your other comment. I believe you are using an older version of scikit-image. Please double-check your scikit-image version and let me know which one you are using.
Adrian;
I posted a comment on an error however I thought I had resolved it by amending the imports as follows:
Instead of importing as in your example:
from skimage.filters import threshold_local
I did the following:
from skimage import filters
I thought this would get around the “ImportError: cannot import name threshold_local”
I also amended the script as per John Goodman’s post (isn’t he an actor 🙂 )
I thought this worked as I could then get all the way past step 2 however as soon as it reads the threshold_local it fails with ‘NameError name threshold_local is not defined”
Any ideas?
Hey Bill, can you check which scikit-image version you are using? I think you may be using an older version. Please check and let me know.
thanks for the article, with explanations and code. Loved creating my first program with openCV and python.
Was amazing to see the results, using the scanner on different kinds of receipts and documents, with different margins, backgrounds.
thanks !
Thank you for the kind words, Jesudas. I’m happy you enjoyed the tutorial 🙂
Hi Adrian. Thanks for your crisp, clear and well-explained blog posts.
As in the “Where to next?” section, I tried incorporating the OCR using pytesseract with reference to your article:.
However, I couldn’t get the pytesseract library’s “image_to_string” function to work on the output of this article’s code: the scanned image. Could you please suggest as to what might be wrong with my approach?
Could you be more specific in what you mean by getting the function to work? Is the function returning an error? Is the OCR output not what you expect?
There is no OCR output. The “image_to_string” function does not convert the image into text, so no output is seen.
In that case the Tesseract library likely cannot OCR any of the text and hence returns an empty string. You linked to my previous post on using Tesseract + Python together so to confirm the issue, you should run the code on the example images in the post. If it doesn’t work then there is a problem with your Tesseract install. If it does work then you know Tesseract simply cannot OCR your input image.
Adrian, Thank you for this really good tutorial. It needed some exploration with collecting the transform function from the previous blog entry and tweaking the code/reviewing the comments to ascertain just what each line is doing, but taking apart examples is how you learn best.
Could you please update line 29 of “Edge Detection” from a Python 2 print statement (without parenthesis) to match the other python 3 print statements?
Done! I must have missed that when I updated the post. I have confirmed the source code download is correct though. Thanks Kent!
Hi Andrian,
Can you help me on how to Accessing the Kinect Camera with OpenCV and Python?
Thanks.
Sorry, it’s been a long, long time since I’ve used OpenCV, Python, and the Kinect camera. Any suggestions I would have here would be out dated. I may consider doing a tutorial on it in the future but I’m not sure if/when that may be.
Hi Adrian!
Thank you for your post! It’s very helpful!
I am currently trying your code now, and trained with some objects.
I just wonder, what if the background is lighter than the object, for example on white table.
Somehow it cannot calculate the right “screenCnt”.
I realized when use Canny Edge, it cannot detect the edge of object.
Then, I tried to use another preprocessing beside Gaussian Blur or Grayscale (like dilate, and threshold), only detects one side of the edge (depend on the light). Can you suggest me how to detect the good edge on the lighter background?
Thank you!
how can i create an app using this code?
There are two options:
1. Use the Python code sitting behind a REST API. Your mobile app would upload the original image and the API can return the scanned image.
2. You can convert the Python code to the native language of your mobile device.
Awesome blog, helped me a lot in getting started with Opencv.
Thank you,
Adrian Rosebrock
Thanks Ashish, it is my pleasure to help!
Awesome Adrian Rosebrock, you are so generous.
There is some bugs, if it can not detect 4 count of corner, the value parameter ‘screenCnt’ can’t be define and set value.
This would cause crash.
Hi Adrian!
Thank you for your post! It’s very helpful!
When the background and target gray value are similar, how can we find out the edge as much as possible by adjusting canny‘s parameters?
This method assumes there is sufficient contrast between your background and edge of the document itself. You might want to try histogram equalization, edge detection, and non-maxima suppression. A Hough lines transform may also help.
hey 🙂
can you help me to understand argparse
I am not getting that part correctly.
No problem. Read this tutorial on argparse and you’ll be up to speed in no time 🙂
hello I have a problem when I run the program it say: No module named ‘scipy’
What I can do?
Make sure you install SciPy on your system via pip:
$ pip install scipy
Hey Adrian, thanks for great example!
I’ve been following this as day 3/17 and opted to download example code/library/image but to actually type everything from scratch following your guidelines and code examples (for learning sake, but I digress).
Issues I’ve found so far are really minor but wanted to bring you attention to them:
1) Original provided image of recipe used as-is from downloaded section is rotated (might be issue with macOS / Debian, but it registers as landscape at my end). I had to rotate it 90 degrees to be upright. Since initially I’ve hardcoded rotation angle, it then broke the page example which is properly portrait (incorrectly finding biggest contour) I eventually had to add:
…
# as second required argument (could be handled as defaulted to 0 if omitted ofc)
ap.add_argument(“-r”, “–rotate”, required = True, help = “Rotation angle of the image to be scanned”)
…
# right after initial imread of image
image = imutils.rotate(image, angle = int(args[“rotate”]))
in order to process both original unmodified images (receipt and page) properly (-r 0 for page, -r 90 for receipt).
2) Contrast of gray-scale receipt image was fairly average, so It had to have its threshold value increased from 11 to 81 to achieve results that are a bit weaker but closer to your resulting dark-contrasted images:
…
T = threshold_local(warped, 81, offset = 10, method = “gaussian”)
…
Overall it was excellent read and thanks for great tutorials!
Thank you for sharing, Const! Could you also share your Python, OpenCV, OS version just in case others run into a similar problem?
Please can somebody explain me this line
warped = (warped > T).astype(“uint8”) * 255
and how to write it in node.js as i am building this in node.js.
Hi Adrian,
It is a very good blog to understand the basics how to detect four corners then as per corner how to scan our document but it is only able to detect corners when corners are fully available in image but if I have full screen doc image or docs which are little bit smaller or some one holding docs then it is not able to detect corners and because of that it is not able to scan those images could you please help me how I can achieve scanning for these type of images.
Please give a minute.
I am newbie in image processing. I want to process image with a raspberry pi. I have just setup raspberry pi. Tell me please which software package is require to install for opencv+python. For getting started with raspberry pi image processing.
Thank you in advance.
I provide a number of OpenCV install tutorials here. Give them a look, they will help you install OpenCV on your Pi.
You almost made it sound like it’s possible for someone without a computer or tech background to actually nail this sort of stuff! Awesome.
Thanks James! 🙂
HI! Adrian
First of all, I want to say thanks for awesome tutorials.
Here I’m getting an error to not install pyimagesearch package.
Please share me the version of pyimagesearch to install.
You can download the “pyimagesearch” package by using the “Downloads” section of this tutorial (it is not distributed on PyPI).
Hi Adrian, first thanks for your tutorials. They are super didactic
I ask you a question, could you help me do the same as in this tutorial but using the Hough Transform instead of the four point detection.
Is for a job at the University, i’m just over time and I can’t get me out.
Thank you again for your help!
Hey there — I don’t have any tutorials on using the Hough Transform method but if this project is for a University I really think you should research it, study it, and put in the hard work yourself. If you are going to be teaching others you need to educate yourself.
Hi Adrian,
Thank you for your wonderfull job and script!!
I spend lot of time on ggle to find something like that…..
I have one question for my own project, do you think it’s possible to determine contours with some corner image, qrcode, anaything else ??
I mean a function like that :
Many thanks!
Gilles
I actually discuss how to perform QR code recognition with OpenCV in this tutorial.
Thanks Andrian,
Ur answer comes for another question, how to decode EAN128 on capture 🙂
But my first question is about the edge, in this tutorial you have decide to take a capture of the document by detect his corner.
But can you think it’s possible to define “custom corner” detection, like a litle place in this document itself.
Thanks an thanks a lot !
Gilles
I would suggest taking a look at keypoint detectors, specifically Harris and GFTT which are designed for corner detection. Along with keypoint matching they can be used to detect specific regions of an object.
how can we save the scanned image (step 3) in the directory from which we get an actual image
You can use the
cv2.imwritefunction to save any image you wish to disk.
First of all, thank you for this amazing website. I am starting to learn OpenCV and this site has guided me a lot where to start. I have started learning Python as I am a node guy with some java background.
My question here is, what additional step do you think will be required when doing an ID Scan. We are a background check company and perform OCR on government IDs and validate the data for KYC purpose. But now we are trying to improve our performance and scale. By improving the document orientation and cropping and resizing the ID from the whole image, we can provide clean inputs to our OCR engine.
Other than Edge detection and Transform what other steps do you think will help us to solve this problem. One issue I see is user holding the ID card in hand while taking the photo, causing edge detection issues.
Thanks Vijay, I’m so happy to hear you are enjoying the PyImageSearch blog 🙂
You may be able to get around many of the scanning issues by using a more powerful OCR engine. Have you tried this tutorial?
This is really Great and very bried
Thanks Hancer!
Sir i want to scan pictures from live cam
You’ll want to combine the code from this post with this one on accessing your webcam.
It doesn’t work with me i got the Error :
scan.py : error : argument -i/–image is required
Should my code to look like that ?
ap.add_argument(“-i”, “–image”, required = True,
help = “C:\\Users\\Ahmed\\Desktop\\image.jpg”)
No, that is incorrect. The code doesn’t have to be updated at all. Make sure you read this tutorial on argument parsing before you continue.
Hi Adrian,
There is a error when running args:
error: the following arguments are required: -i/–image
An exception has occurred, use %tb to see the full traceback.
Please help to advise how to fix it.
Thanks,
You need to supply the command line arguments to the script.
Wow.. Superb explaination. Really helpful. Thank you so much!!
You are welcome, I’m glad you found it helpful!
very great information about documents.
sometimes where there are lighting issues the contour method not always works.
and my advice use watershad segmentation to find the shape of document and then perform contours.
for performing this one part is middle of image other is edges of image. then segmentation will give two parts one the document other the background.
Thank you very much for this. I’m adapting it to find multiple documents in a single (scanned) image. I’m finding that each document is actually producing two nearly identical contours, the second slightly smaller than the first.
Any idea what might be causing this? I can just throw out every second contour, but that seems inefficient.
Thanks,
Mike
I would suggest taking a look at your edge map to see if there is some sort of “double contour” going on. Alternatively, you can check to see if the contour is enclosed within another, and if so, discard it.
Hello Adrian,
I have one query. Is it necessary to import imutils? Can I use opencv functions instead of imutils for the same operation?
For resizing? Yes, you certainly can. The difference is that the “imutils” function automatically preserves the aspect ratio for you while “cv2.resize” will not.
how to save that scanned image in system
You can use the “cv2.imwrite” function to write an image to disk.
How to Split image based on dark center line in the book?
Hello Adrian, thanks for your post. It helps to detect the edges of the page perfectly.
Your code works for the single page, but i need to slice the image by detecting the edges and shadow of the center part of the book. Can you help me to overcome this issue.
Is there a way to identify whether an image is scanned or original ?
Sorry, I’m not sure what you mean? Could you clarify?
Awesome resource Adrian,THANKS a lot..
I had a question…how would i import ur transform module in google colab
Excellent article. I also suggest dewarping the pages.
Nice, thanks for sharing!
I have a question? How did you choose the right argument value e.g. when you apply local thresholding
Typically you manually tune it by trial and error.
Hey thanks a lot for this guide Adrian! Is there any licensing on this code/could I build on top of it for my own ideas?
Yes, but read this first.
Hello Adrian,
I was wondering why you use the skimage adaptive thresholding method threshold_local as opposed to the opencv one adaptiveThresholding. did you find it in any way better at doing the job?
By the way, awesome site and great OpenCV tutorials. I’m an undergraduate studying Robotics and your tutorials have helped a ton in strengthening my skills.
No reason other than I like the scikit-image adaptive thresholding method. I find it easier to use and more Pythonic.
Hello Adrian! Thanks a lot for the above algorithm, its amazing to work with such a brilliant code. While i was fiddling around with it on my system, i found out that the algorithm doesnt work very well when white paper is used on white backgrounds, i have tried using every possible method i could think of, either adjusting thresh or blur values, resizing the image, but cant work out any of those. Would be grateful if you could guide me through with this….
Thanks a lot! | https://www.pyimagesearch.com/2014/09/01/build-kick-ass-mobile-document-scanner-just-5-minutes/ | CC-MAIN-2019-43 | refinedweb | 16,262 | 73.88 |
>>。
Dima has a hamsters farm. Soon N hamsters will grow up on it and Dima will sell them in a city nearby.
Hamsters should be transported in boxes. If some box is not completely full, the hamsters in it are bored, that's why each box should be completely full with hamsters.
Dima can buy boxes at a factory. The factory produces boxes of K kinds, boxes of the i-th kind can contain in themselves ai hamsters. Dima can buy any amount of boxes, but he should buy boxes of only one kind to get a wholesale discount.
Of course, Dima would buy boxes in such a way that each box can be completely filled with hamsters and transported to the city. If there is no place for some hamsters, Dima will leave them on the farm.
Find out how many boxes and of which type should Dima buy to transport maximum number of hamsters.
The first line contains two integers N and K (0 ≤ N ≤ 1018, 1 ≤ K ≤ 105) — the number of hamsters that will grow up on Dima's farm and the number of types of boxes that the factory produces.
The second line contains K integers a1, a2, ..., aK (1 ≤ ai ≤ 1018 for all i) — the capacities of boxes.
Output two integers: the type of boxes that Dima should buy and the number of boxes of that type Dima should buy. Types of boxes are numbered from 1 to K in the order they are given in input.
If there are many correct answers, output any of them.
19 3 5 4 10
2 4
28 3 5 6 30
1 5
解题说明:此题是一道模拟题,按照题目意思找出不超过N的最大装货量,并且满足装货量为箱子型号的倍数。可以采用暴力枚举的方法进行判断,找出余数最小的情况。
#include<cstdio> #include<iostream> #include<string> #include<cstring> #include<cmath> using namespace std; int main() { int m, s1; long long n, s = 1e18, s2, x; scanf("%I64d%d", &n, &m); for (int i = 1; i <= m; i++) { scanf("%I64d", &x); if (n%x < s) { s = n % x; s1 = i; s2 = n / x; } } printf("%d %I64d\n", s1, s2); return 0; }
Codeforces Round #464 (Div. 2) B. Hamster FarmB. Hamster FarmDima has a hamsters farm. Soon N hamsters will grow up on it and Dima will sell them ...
- s540239976
- 2018-02-18 12:28:34
- 213
cf 464 B. Hamster FarmB. Hamster Farmtime limit per test2 secondsmemory limit per test256 megabytesinputstandard i...
- PragIncor
- 2018-02-18 13:14:42
- 121
Codeforces Round #464 (Div. 2) B. Hamster Farm 水题B. Hamster Farmtime limit per test2 secondsmemory limit per test256 megabytesinputstandard inputoutp...
- Frost_Bite
- 2018-02-19 22:42:36
- 38
939B. Hamster FarmDima has a hamsters farm. Soon N hamsters will grow up on it and Dima will sell them in a city nearb...
- islittlehappy
- 2018-02-18 09:14:03
- 192
Codeforces Round #464 (Div. 2)B. Hamster Farm#include <bits/stdc++.h> using namespace std; long long a[1000005]; int main() { l...
- qq_37252519
- 2018-02-18 13:38:34
- 30
Codeforces 939B(Hamster Farm)这道题目很明显的就是一道翻译题 很简单没有坑 这里就不详细解释了,详解请看代码。题目:Dima has a hamsters farm. Soon N hamsters will grow up on...
- Puppet__
- 2018-02-21 22:35:42
- 23
利用arpspoof、tcpdump、ferret及hamster劫持登录会话利用arpspoof、tcpdump、ferret及hamster劫持登录会话
- u011843461
- 2015-01-25 22:59:32
- 4106
hamsterMarch 11th, 2010 Key word: hamster,difficult I saw hamster on LJ, I remember some spring ag...
- Cecil1119
- 2010-03-14 15:56:00
- 217
HDU_1198_FarmIrrigationFarm Irrigation Time Limit: 2000/1000 MS (Java/Others) Memory Limit: 65536/32768 K (Java/Others)...
- baidu_29410909
- 2015-08-11 16:02:42
- 301 | https://blog.csdn.net/zhaoxinfan/article/details/79946398 | CC-MAIN-2018-17 | refinedweb | 584 | 73.68 |
NEW: Learning electronics? Ask your questions on the new Electronics Questions & Answers site hosted by CircuitLab.
Microcontroller Programming » Reading from uart stream
I have set up a serial communication link between the Nerd kit and a PC. I am using the uart functions available on the micro and the serial com functions available in C#. As a test and to allow my to get some experience handling data flow in both directions I decided I would to send a constantly changing stream of data each way. The actual data content is not important it is the general concepts that I am trying to get a handle on. I am using the date/time string because the word length and number of digest change form time to time and I wanted to set up a system on the MCU that could deal with that and still display properly.
The Nerdkit hardware is the same as the tempsensor set up in the Nerdkit guide. With the USB serial cable included.
I have the C# code running on the PC sends a custom formatted date/time string out on the serial com each second. The Nerd Kit receives that data and returns to the PC the current room temperature as read by the temp sensor. The idea was to simply display the PC’s current calendar/clock data on the LCD and display the current room temp on the PC. For the most part this setup is working well.
I am getting a temperature read out on the PC that is updated each second as expected. I am also getting most of the date/time data displayed on the LCD as expected. The time count is changing each second. If I reset the system date/time data on the PC it is immediately reflected on the LCD.
There is glitch that I have not been able to track down. The day of the week does not show correctly on the LCD screen. The date, time and temperature all show up as expected. The weekday line on the LCD does not remain constant it often show part of the word usually at least the first three letters. It changes often. It might show”Fry” for a few seconds then “Fray” for a few seconds then something else and so on. I believe I am using the same system to load and display the wkDay variable as the others yet it is displaying something totally different then the other variables. As I said the other lines on the LCD are working perfectly fine. I must be missing something I can’t see a difference with the wkDay variable.
I was thinking that the date stream may have been corrupted in the PC before it was sent out on the serial. In order to verify what was being sent out on the serial by the c# module I used a second USB connector and sent the C# output to PuTTY . That confirmed the data sent by the C# module was good. There is a full word being sent each second for the day of the week as expected.
There must be something in my code on the MCU. Why can’t I break out and display the weekday word just as I am doing with the time and the date?
Note-The screen shot where not taken at the same time therefore the times do not match up.
This is the section of my code that deals with filling and displaying the time/date variables after the ( char was read by the uart.
//reset wkDay
i_next = 0;
memset(wkDay,0x00,sizeof(wkDay));
incoming = uart_read();
//read char and fill WkDay untill # is read on serial
while (incoming != 35) {
wkDay[i_next++] = incoming;
incoming = uart_read();
}
incoming = uart_read();
// reset Date
i_next = 0;
memset(date,0x00,sizeof(date));
//read char fill Date untill $ is read
while (incoming != 36) {
date[i_next++] = incoming;
incoming = uart_read();
}
//reset time
incoming = uart_read();
i_next = 0;
memset(time,0x00,sizeof(time));
// read char fill time untill ) char is read
while (incoming != 41) {
time[i_next++] = incoming;
incoming = uart_read();
}
// print date and time to lcd
lcd_line_one();
fprintf_P(&lcd_stream, PSTR("%s "),time);
lcd_line_two();
fprintf_P(&lcd_stream, PSTR("%s "),wkDay);
lcd_line_three();
fprintf_P(&lcd_stream, PSTR("%s "),date);
LCD display with currupted weekday in line two
Screen shot of Putty rading C# output
Screen shot of PC desktop GUI for C# module show time and temp data.
sask55 -
I had something similar happen to me using "uart_read()". I tried a bunch of things to get it to work, but since I didn't like the way "uart_read()" blocks execution I went on to use UART_RX interrupt instead. Bang...problem solved! Here is the interrupt code I used..._buf[rx_in_idx] = UDR0;
rx_in_idx++;
if (rx_in_idx >= MAX_RX) rx_in_idx = 0; // make buffer "circular"
return;
}
Both the buffer array and the pointer are global and I have a similar pointer in the main section of the code called "rx_out_idx". In the main code, I just test to see if the pointers are not equal and if not, I know there is data to read. So all I have to do is read the buffer, update "rx_out_idx" and process the character. Works great.
PS - You can combine lines 7 and 8 into;
rx_buf[rx_in_buf++] = UDR0;
since it increments the index variable after the operation. Also, I've never needed a MAX_RX size greater than 32, you're results may differ.
Thanks pcbolt
That sounds like great solution for my project. I like the idea of not tying up the processor wait or testing for UART RX to arrive. The buffer is perfect for what I am ultimately attempting to do. I have not had any luck getting the uart interrupt system to work. I think I have tracked down the root of my problem. I believe the uart interrupt is triggering. I do receive data that is echoed back to the PC from the interrupt handler function.
The problem seems to be that none of the changes made to the variables in the interrupt ever show up in the same variables in the main. I have globally declared two volatile variables, outside of any functions, at the top of the code. Everything complies ok no warnings or errors. Yet, when I print (to the LCD) the values of the global variables from the main they never change even when I am receiving echoed data back to the PC on the serial from the interrupt event function. It seams like the Volatile designation is not working
int max_rx = 40;
volatile uint8_t rx_in_idx = 0 ;
volatile char rx_buf[40];_in_idx++;
rx_buf[rx_in_idx] = UDR0;
UDR0= rx_buf[rx_in_idx];
if (rx_in_idx >= max_rx) rx_in_idx = 0; // make buffer "circular"
return;
}
The loop count integer i increments as expected. The value displayed for all the other variables remain constant even when I am getting echoed data back on the PC.
while (rx_buf[rx_out_idx] != 40){ // loop untill ( is read
lcd_line_one();
fprintf_P(&lcd_stream, PSTR("char %c loop %d"),rx_buf[rx_in_idx],i);
lcd_line_two();
fprintf_P(&lcd_stream, PSTR("out idx %d in %d"),rx_out_idx,rx_in_idx);
I am lost any help would be greatly appreciated.
You have to be careful accessing the value of "rx_in_idx" inside your main portion of code since this could change at any moment. What I usually do is take a "snapshot" of it before using it. Here's an example:
in_idx = rx_in_idx; // take "snapshot" of rx_in_idx
while (rx_out_idx != in_idx){ // chase the input pointer
ch = rx_buf[rx_out_idx];
lcd_write_data(ch);
rx_out_idx++;
if (rx_out_idx >= MAX_RX) rx_out_idx = 0;
}
When the program first starts, both of the index variables are set to 0. Then data comes across and the buffer starts filling up and "rx_in_idx" gets incremented. So let say 8 characters come across then "rx_in_idx" is 8 and "rx_out_idx" is 0. The test inside the "while loop" will see that the values are different and start reading the buffer from 0 until "rx_out_idx" equals 8. This will keep happening and eventually "rx_in_idx" will wrap around to 0 after it reaches MAX_RX and start incrementing again. As long as the main code has read the buffer at 0 and a little beyond, everything is fine. The variable "rx_out_idx" just chases "rx_in_idx" around from 0 to MAX_RX and back to 0 again over and over. The only problem occurs when the main code isn't fast enough to keep up with the input and the contents of the buffer are overwritten before the main code can read it. In that case just make MAX_RX larger, but like I said earlier, I usually don't need anything higher than 32. Some of the tests I ran showed the two index variables rarely differed by more than 4 or 5.
Increment rx_in_idx after you store the character otherwise you will never use the 1st character in the buffer which is what your main is testing via rx_out_idx.
ISR(USART_RX_vect){
rx_buf[rx_in_idx] = UDR0;
UDR0= rx_buf[rx_in_idx];
rx_in_idx++;
if (rx_in_idx >= max_rx) rx_in_idx = 0; // make buffer "circular"
return;
}
Pcbolt
I see what you are saying. I will have to make a small change to my approach to averaging the temp sensor readings. I was just adding up the readings as they come back from the adc_read function (tempsensor guild code). Then dividing that sum by the number of loop passes that where carried out in the fraction of each second between subsequent uart read date/time strings. I was thinking since there was most of a second between RX read strings the MCU may as well just do as many temperature reads as possible and average them. In order to accomplish something like that I will have to be able to verify the value of any incoming RX characters inside the while loop as it is running. The arrival of the first char of date/time string the ( ,control character, is what I am using to end the while loop recording temperature readings. At that point the code can break out and display the time/date data to the various lines on the LCD according to the control characters within the string arriving on the uart.
I am hoping to have a code that can be doing something else and still monitoring the uart RX. When a specified control character is detected on the uart the program would change its task and deal with the uart data stream or carry out something different. My hope is to be able to recognize and handle intermittent packets of uart data periodically, as they arrive or shortly after. The buffer would be perfect as it could hold the incoming string data while the other operation finishes what it may be doing. The code could then check and deal with this data when it is logical to do so.
Noter
I do realize what you are saying. It is true that the very first time thru the buffer, the 0 position will not be filled except by the declaration statement. I changed the position of increment because it seamed more direct to have the variable rx_in_idx reference the last charter that was read rather then the next charter to read by the uart. After the initial “start” control character is detected by the main, there are numerous references made, in the code, to the positions of the last character in and the last character out on the circular buffer. I think it could be done ether way, It just make more sense to my to reference the buffer positions this way.
Thanks
noter
I noticed now that I had removed the rx_buf[0] =0; statement from the code I posted here
It was at the top of the code with the declaration of the global variables.
pcbolt
I tried taking a snap shot of the two global variables that are subject to change in the interrupt function. I was hoping that moving the references to these variables out of the fprint f statements it may make a difference. The “snap shot” references still remain within the temp sensor read loop.
It did not change. I really don’t understand why I can’t reference these variables inside that loop and read the current values as set in the interrupt. I know what you are saying about they may change at any time I think I could deal with that. The problem is not that they are changing unexpectedly it is that they are never changing in the main. The values of rx_buf(n) must be changing in the interrupt function as I am reading good data, echoed back, to the PC from that variable.
while (rx_buf[rx_out_idx] != 40){ // loop untill ( is read
rx_in =rx_in_idx;
incoming=rx_buf[rx_in_idx];
lcd_line_one();
fprintf_P(&lcd_stream, PSTR("char %c loop %d"),incoming,i);
lcd_line_two();
fprintf_P(&lcd_stream, PSTR("out idx %d in %d"),rx_out_idx,rx_in);
It would seam that I am missing something with regard to the volatile variables working correctly between the interrupt function and the main. Could it be related somehow to the tempsensor read functions and using the ADC within the while loop?
Could the fact that the MCU may be looping a good part of the time while waiting for the next ADC result be the source of the problem?;
}
Is the do noting loop or the ADC conversions themselves blocking the function of the volatile variables? Any comments on when this may happen? When can I expect not to get volatile variables updating between functions? Specifically is this likely to be an issue when using SPI functions outside the main and the uart interrupt is triggered by rx on the serial?
The compiler tries to be smart about executing your code and if it can load a value once into a register for the duration of a function or subroutine that's what it will do. However that doesn't work out so well if there is an interrupt function that is changing the value in memory so the keyword 'volatile' tells the compiler to always load the variable value from memory right before using it to be sure changes by interrupts are seen or visa versa. So if you're using the volatile keyword in your variable declarations then it is working and if you don't see the values you expect there is some other problem. I have never done any kind of snap shot thing with interrupt variables and have never had a problem because of it. Only the interrupt routine is changing the 'in' index and only the main is changing the 'out' index so there is no conflict. If the 'in' changes while you are in main and processing a character it doesn't matter because it is still not equal to the 'out' value which is all that needs to be tested anyway. You can make all your variables volatile and the program will work fine although a little less efficient than it would otherwise.
It's hard to say what the issue might be just looking at small code snippets. If you will post the whole thing we might see something that is causing the problem.
So that I can get a better understanding of what I am doing wrong I have made a very basic code using the uart interrupt. My hope is to get an understanding of why this code is not working as I think it should. Once I have learned the basic ideas from here I will carry on with my other ideas.
I have stripped down the code to the bare minimum of what I am trying to do. I hope to fill the string array once with the first 39 characters that the uart receives. After that any additional rx characters will be repeatedly stored into the same array position over and over again. All the incoming Rx is just echoed back to the PC. Without line 29 UDR0= rx_buf[rx_in_idx]; I see no text returned on the PC module, with that line in the code I see the text string back on the PC’s rx line. It seams to me that this must be an indication that the uart interrupt is working.
At this point all I wish to do in the main, is display the character contents of the string array one character at a time. My expectation was that before any serial rx is received I would see the # characters one in each of the array elements. Then after the uart interrupt fills the array with incoming characters I would see them as the loop displays the array content. I was also expecting the rx_in_idx value to go to 40
But none of the displayed characters ever change; they remain as they where even when the PC is indicating characters returned from the nerd kit. The rx_in_idx value remains at 0. It should be incremented up to 40 inside the uart event function.
Here is the code I am working with now.
// for NerdKits with ATmega168
#define F_CPU 14745600
#include <stdio.h>
#include <math.h>
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/pgmspace.h>
#include <inttypes.h>
#include <string.h>
#include "../libnerdkits/delay.h"
#include "../libnerdkits/lcd.h"
#include "../libnerdkits/uart.h"
int max_rx = 40;
volatile uint8_t rx_in_idx = 0 ;
volatile char rx_buf[40];
ISR(USART_RX_vect){ // this is the symbol from AVR compiler docs (interrupt.h)
rx_buf[rx_in_idx] = UDR0;
UDR0= rx_buf[rx_in_idx]; //send echo back to PC to check for serial conection and interupt .
if(rx_in_idx < max_rx-1) rx_in_idx++;
return;
}
int main() {
// start up the LCD
lcd_init();
FILE lcd_stream = FDEV_SETUP_STREAM(lcd_putchar, 0, _FDEV_SETUP_WRITE);
lcd_home();
uart_init();
UCSR0B |= (1<<RXCIE0); //set RX interrupt enable bit on USART Control and Status Register 0B
sei(); //enable interrupts
int tc =0;
memset(rx_buf,0x23,sizeof(rx_buf));// fill string array with ####
while(1)
{
lcd_line_one();
fprintf_P(&lcd_stream, PSTR("char %c index %d "),rx_buf[tc],tc);
lcd_line_two();
fprintf_P(&lcd_stream, PSTR("index in val %d "),rx_in_idx);
delay_ms(1000);
tc++;
if (tc >= 40) tc=0;
}
return 0;
}
One more test I have done.
If I replace line 23
UDR0= rx_buf[rx_in_idx];
With
UDR0= rx_in_idx+48;
ASCII code 48 is 0
I then see a continues string of 0 zeros on the PC rx input line.
The uart interrupt is sending back characters on the serial to the PC. The variable rx_in_idx is not being incremented in the function so that explains why it does not show up that way in the main and why the array is never filled. I have now noticed that rx_buf[0] is changed in the array. It always begins as # char as expected it will remains that way until the PC start sending serial data. It then changes. It is always changed to the ASCII code character for the number that is in the variable declare statement.
volatile char rx_buf[40];.
So if array size is delared to 40 I see (, 41 I see ) , 42 I see * and so on
Sorry I did not notice that the array 0 element was updating earlier. Strangely it is not updating to anything that is received on the uart it is directly related to the value in the variable declaration statement no matter what char is recieved.
I have the simple version of the code working now.
Even though the issue was not the same I tried the solution suggested by hevans in this post. I simply changed the number in the declaration statement to a defined value.
#define max_rx 40
volatile uint8_t rx_in_idx =0 ;
volatile char rx_buf[max_rx];
I was not even using a variable in the definition,I was using a number. I have no idea why changing the number 40 to a #define that is defined as 40 works But I quess I should not argue with success. I am not very comfortable with solutions that I don’t understand.
The somewhat related thread and solution
Array definition issue
I am back to my original plan
Thanks
NerdKits » Forums » Microcontroller Programming » Multidimentional string arrays - stuck
This is the thread I was refering to
thread link
I am not sure if anyone cares at this point. It is not the dimensioning of the array that was the issue. It is the int variable max_rx. I was very uncomfortable with my last conclusion. I used the number 40 back in the array definition it works fine. It appears the problem was I cannot assign a value to a global variable the way I was.
int8_t max_rx = 40;
and
int max_rx =40;
do not work to assign a initial value to max_rx global variable.
Soooooooo simple yet it took me forever to figure this out.
Glad you figured it out. That same initialization thing has bitten me a couple of times too. Not too long ago PCBOLT pointed out that this statement in the make file will copy initialized values into ram which is likely the cause of your problem. Add the "-j .data" to the objcopy line in your makefile to have the initial variable values included.
avr-objcopy -j .text -j .data -O ihex
After resolving the issue that was related to the variable initialization everything has gone very smoothly.
I now have the uart rx interrupt working. Using uart interrupt to handle the uart rx has completely eliminated any issues I was having separating and displaying Date/time data as is is coming from the PC. I now have established a clear and reliable two way serial confection between the NerdKit and the PC using C#.
Both the timer event and the serial communications where very easy to drop onto a windows form using MS visual C# 2010 express which is free. The coding required in the c# was very basic. Using visual C# or visual C++ there appears to a lot of functionality built into the toolbox. By using the methods and solutions available with the visual MS .net applications it would seam that even a relative novice can produce interesting results without having to have an intricate understanding of the underling code.
C# has complied my serial connection test and produced an 11KB exe file that should run on any windows based computer with .net4 on it. COM 5 also has to be available as I never made any provision to change the COM port from the desk top window on this version.
Please log in to post a reply. | http://www.nerdkits.com/forum/thread/2501/ | CC-MAIN-2020-29 | refinedweb | 3,708 | 68.81 |
25 Ways to Empower Your Passion and Purpose
Have you discovered your passion and purpose in life? Here's how you can empower them.
**: [log ind for at se URL] with
Vær venlig at Tilmelde dig eller Log ind for at se detaljer.
...one of our clients we have started gathering the decision maker’s names in the marketing department at their CORPORATE OFFICE who are 1st Level and 2nd Level such as you would find for job titles on linked in, to give you a reference point for an example. These high-level marketing directors work for companies that sell exclusively in the industries be the server, all data change in PC2 will be shown simultaneously
.. you like to ask for the data - can you share a bigger sample (at least 100+ contacts)
.. documentati...
I am Akshay Adkane (Aeronautical Engineer). If you find any kind if work regarding typing or data .. please inform me i will do it.
.. sql. 2- open a specific)
Hello, we need to create a web application where a pdf editable source is provided by Admin. This pdf...send it as an email attachment; the application must provide for every instance (a), b) or c)) a Json file (that contains all data entered by the Client) that needs to be uploaded into an ftp endpoint. in the attachment you find an example of Json.
We are keen to find basic level data entry candidate to copy email domains from a Google Sheet into our order management software, Linnworks. Ideally, this should be done within a two week period as there are around 18000 to input.
i have already created a draft sop with the essential data and need professional help to refine it and build an effective SOP. Please find the attached SOP draft below.
I want to develop a proof of concept to find stock price change triggers for minute by minute trading data. I need a developer to make it fully working on a cloud and also train on how the things were developed and deployed.
We are looking for someone to do online research for us. We want to build a .. references as needed. 2. b) Find an application in your everyday life
.. Name Country Website Company Activity Name... BUT NOT THE EMAIL. I need help for: 1-
...User Can Find near By Public Event • Track User Location and calculated Distance and Arrival Timing. • Chat & Group chat function (Like Whatsapp) Website Front End • Login, Register, Profile Functions & Login With Social Media • Add / Upload event ( Public & Private Event) • Invite Friends ( Invite Contacts , Invite By Email ) • Find...
Attached you will find a PDF with heaps of addresses from dentists. Please use the attached Excel file to put your parsed data in the right columns. Sometimes there will be a code next to the year. This has to go into column I. want an algorithm that will find closed loops for attached example. I will have a database as mentioned in the example with the four fields. The algorithm should go through the mentioned database and produce a closed loop(s) if found . I have shown an example of the close loop. Where the system should find this loop. Person 1 Hates Person 2
...as a question and comment area for students and teachers. Example Use Case1: Teacher needs to assign reading material to the class so they search through the news. When they find an appropriate news page they highlight text that they would like to create a question for and then right click and select generate multiple choice question. A window pops up
• “[log ind for at se URL]”, a CLI program that allows the user to ...collection of board games which are stored in a text file. Develop this program before “[log ind for at se URL]”. • “[log ind for at se URL]”, a GUI program that uses the data in the text file and allows the user to find games which match the criteria they specify. Deve...
I am seeking someone experienced with data collection/ price comparison/ data entry to find new listing on [log ind for at se URL] and find a price comparison for completed auctions on ebay.com. A daily report should be sent each day with the price comparisons for any new listings. This had the potential to be an ongoing job. Please bid the amount for one month of) - price breakdown (on admin panel
> Three Login Panels (Super Admin in .Net , Admin in .Net & Client on web based ) > Vast knowledge about .Net Codes , D...tool. > Good Knowledge to build OCR (optical Character Reader) to select a part of image , get it saved and call it over the OCR ; convert image into text and run OCR to find data. > Upload Images / Video Clips etc. All the Best !
STEP 1. Make sure our reservatio..)
...into set S. (i.e., S ← S ∪ {x}) 2. Find successor of a given number x in S. 3. Delete a...
...function that can be called in to other pages. - The existing code the selects/updates db info or extracts pertinent option data from the option string is all working code that can and should be reused. Attached you will find the following: 1) [log ind for at se URL] page - this is the page with the code. Will give unredacted page to winner. 2) datevalue
..
Hello.
...groups. That way we keep the overview when having to much products. For example: Alcohol: - Beer - Whisky - … Soda’s: - Coca-cola - Limonade - Fever-Tree - … If possible all the data from each selling point should come together in one database where we can see what has been sold by which selling point and at what time. That gives us the possibility to see
Find me a DNA database(s) to query Raw DNA Data against. The Raw DNA data will be in an encoded format if that matters at all.
...search key. It will then look at another sheet, find all instances of that search key (product group_, and return (as a dropdown) the relevant products within that group. I have seen a way of doing this with a script, but only when the product group is given once, not multiple times. An example of the raw data is below: Product Group Product Group
.. containing the data to import
Expand all our curre...marketplaces except Japan are in English, please ensure the correct data is transferred. Only Amazon familiar freelancers, you must clearly understand the listing process and maximise listing. For Japan you need to use google translate or another software to translate and list. You may find some are already in the catalogue.
..
Have you discovered your passion and purpose in life? Here's how you can empower them.
Here are some tips to help parents and educators teach kids how to program. | https://www.dk.freelancer.com/job-search/find-data/3/ | CC-MAIN-2018-47 | refinedweb | 1,136 | 73.58 |
In lesson 2.9 -- Naming conflicts and the std namespace [1],.cpp :
foo.h:
goo.h::
This produces the result:
1
The scope resolution operator is very nice because it allows us to specifically pick which namespace we want to look in. It even allows us to do the following:
This produces the result:
7 1
It is also possible to use the scope resolution operator without any namespace (eg.
::doSomething). In that case, it refers to the global namespace.
Multiple namespace blocks with the same name allowed
It’s legal to declare namespace blocks in multiple locations (either across multiple files, or multiple places within the same file). All declarations within the namespace block are considered part of the namespace.
add.h:
subtract.h:
main.c.
In C++17, nested namespaces can also be declared this way:. | https://www.learncpp.com/cpp-tutorial/4-3b-namespaces/print/ | CC-MAIN-2019-13 | refinedweb | 138 | 63.59 |
Hi,
The patch enables finding a child slot in the scoreboard using
find_child_by_pid.
Right now the reported number of childs is 0 which is not true, cause we
have a single child, and it is in the scoreboard image already.
It's not a big problem, but disables writing portable code among
different platforms, cause Netware will always report 1, and the unix
versions >= 1, and the WIN report that as 0!
Index: mpm_winnt.c
===================================================================
RCS file: /home/cvspublic/httpd-2.0/server/mpm/winnt/mpm_winnt.c,v
retrieving revision 1.285
diff -u -3 -r1.285 mpm_winnt.c
--- mpm_winnt.c 2 Jul 2002 19:03:15 -0000 1.285
+++ mpm_winnt.c 4 Jul 2002 06:23:24 -0000
@@ -57,6 +57,7 @@
*/
#define CORE_PRIVATE
+
#include "httpd.h"
#include "http_main.h"
#include "http_log.h"
@@ -1969,7 +1970,7 @@
*result = ap_max_requests_per_child;
return APR_SUCCESS;
case AP_MPMQ_MAX_DAEMONS:
- *result = 0;
+ *result = 1;
return APR_SUCCESS;
}
return APR_ENOTIMPL;
MT. | http://mail-archives.apache.org/mod_mbox/httpd-dev/200207.mbox/%3C001e01c22324$a3cea960$5c00000a@GISDATA.ZG%3E | CC-MAIN-2016-18 | refinedweb | 153 | 68.47 |
Amazing custom metrics using Azure Application Insights
You know that old management saying, that you can't improve what you don't measure? It's annoying because being measured feels personal and uncomfortable, but it's useful in technology because it's totally true. There is a fairly lucrative market around monitoring stuff in the cloud now, because as our systems become more distributed and less monolithic, we do less tracing at the code level and more digestion of interaction data between components.
When I launched the hosted forums, the big change was that I was finally exercising the two-level cache code that I wrote for POP Forums a long time ago. It was of course inspired by the way StackOverflow handles their caching. Basically, the web node checks to see if it has the object in its local memory, and if not, it goes to Redis. If it's not there either, then it goes to the database and puts it in Redis. Then, using Redis' pub/sub messaging, all of the web nodes get a copy and it holds on to it there with the same TTL. The default that I use is 90 seconds, but I don't know if that's "good" or not. I had no context or data to support that it worked at all, actually.
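If it helps to see the shape of that read path, here's a minimal sketch of the idea using IMemoryCache and StackExchange.Redis. The names are hypothetical rather than the actual POP Forums code, and the pub/sub fan-out to the other web nodes is left out:

using System;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;
using StackExchange.Redis;

public class TwoLevelCache
{
    private static readonly TimeSpan Ttl = TimeSpan.FromSeconds(90);
    private readonly IMemoryCache _memory;
    private readonly IDatabase _redis;

    public TwoLevelCache(IMemoryCache memory, IConnectionMultiplexer redis)
    {
        _memory = memory;
        _redis = redis.GetDatabase();
    }

    public async Task<T> GetOrLoadAsync<T>(string key, Func<Task<T>> loadFromDatabase)
    {
        // Level one: the web node's own memory. A hit here costs no network call.
        if (_memory.TryGetValue(key, out T value))
            return value;

        // Level two: Redis. On a hit, repopulate local memory with the same TTL.
        var fromRedis = await _redis.StringGetAsync(key);
        if (fromRedis.HasValue)
        {
            value = JsonSerializer.Deserialize<T>((string)fromRedis);
            _memory.Set(key, value, Ttl);
            return value;
        }

        // Miss everywhere: load from the database, then write back for the next caller.
        value = await loadFromDatabase();
        await _redis.StringSetAsync(key, JsonSerializer.Serialize(value), Ttl);
        _memory.Set(key, value, Ttl);
        return value;
    }
}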
The first step for me was to plug in Azure Application Insights, because it's silly-easy to use. You add a NuGet package to your web app, a little JavaScript to your pages, the app key to your configuration, and enjoy that sweet sweet stream of data in the Azure portal. Instantly I had this beautiful map of curvy lines going between databases and web apps and Azure functions and queues and... not Redis. OK, no problem, a little google-fu reveals that the SDK doesn't have the hooks to monitor Redis calls. So I wander through the Azure portal to my Redis instance, and look up the numbers there. What I find is that there are almost three times as many cache misses as hits, which implies that data isn't repeatedly fetched very often, or something else.
This is actually not super useful, because in theory, my two-level cache would not need to go all the way to Redis if the object is locally stored in the web node's memory. It's also not useful because I wanna see all of this in the pretty application maps!
OK, so with the underlying forum project being open source, I didn't want to bake in this instrumentation because I don't know what others are using. Rather than force them to use App Insights, I decided to wrap all of my cache calls in a simple start/stop interface, and wire in a default implementation that doesn't do anything. That way, a consumer can write their own and swap out the dependencies when they compose their DI container. For example, here's the code wrapping a write to the memory cache:
_cacheTelemetry.Start();
_cache.Set(key, value, options);
_cacheTelemetry.End(CacheTelemetryNames.SetMemory, key);
The _cache is an instance of IMemoryCache, and _cacheTelemetry is an instance of the interface I described above. One could argue that there are better ways to do this, and they're probably right. Ideally, since we're talking about something event-based, having some kind of broader event-based architecture would be neat. But that's how frameworks are born, and we're trying to be simple, so the default ICacheTelemetry is an implementation that does nothing.
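For reference, the seam is tiny. It looks roughly like this, where the no-op class name is my guess:

public interface ICacheTelemetry
{
    void Start();
    void End(string eventName, string key);
}

// The do-nothing default, so consumers who don't use App Insights pay no cost.
public class NullCacheTelemetry : ICacheTelemetry
{
    public void Start() { }
    public void End(string eventName, string key) { }
}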
In the hosted forum app, I use a fairly straightforward implementation. Let's take a look and then I'll explain how it works.
using System.Diagnostics;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public class WebCacheTelemetry : ICacheTelemetry
{
    private readonly TelemetryClient _telemetryClient;
    private Stopwatch _stopwatch;

    public WebCacheTelemetry(TelemetryClient telemetryClient)
    {
        _telemetryClient = telemetryClient;
    }

    public void Start()
    {
        _stopwatch = new Stopwatch();
        _stopwatch.Start();
    }

    public void End(string eventName, string key)
    {
        _stopwatch.Stop();
        var dependencyTelemetry = new DependencyTelemetry();
        dependencyTelemetry.Name = "CacheOp-" + eventName;
        dependencyTelemetry.Properties.Add("Key", key);
        // Stopwatch ticks aren't TimeSpan ticks, so use Elapsed rather than new TimeSpan(ElapsedTicks).
        dependencyTelemetry.Duration = _stopwatch.Elapsed;
        dependencyTelemetry.Type = "CacheOp";
        _telemetryClient.TrackDependency(dependencyTelemetry);
    }
}
When you add Application Insights to your web project, you add
services.AddApplicationInsightsTelemetry() to your
Startup. Under the hood, this adds a shared instance of
TelemetryClient to your DI container. This is what we use to pipe data back to Insights.
DANGER! A lot of the code samples in the SDK documentation show them creating new instances of the telemetry client with default configuration. This is naturally what I did first because I didn't read the documentation in any particular order. Something about this is bad, because my memory usage spiraled out of control until the app stopped responding. My theory is that it didn't have the right config, so it bottled up all of the events in memory and never flushed them, but I don't know for sure. Meh, using dependency injection is always better for testing anyway, so do that.
As shown earlier, the cache calls are wrapped in start and end calls, so all we're doing here is starting a timer, then stopping it and recording the cache event as a
DependencyTelemetry object. For extra flavor, I'm recording the cache key as well, so if something is particularly slow, I can at least infer what the cached data was. For example, a key like "pointbuzz:PopForums.PostPages.17747" tells me that the tenant is "pointbuzz," and the rest says it's a bunch of posts from the topic with ID 17747.
I'm also giving the event a name that's prefixed with "CacheOp-" and not just the operation. Why? Because the lovely map view will group these together when they start with similar names. I learned this quite accidentally, because all of the database calls to tenant databases have a prefix on the database name. That worked out great because it groups the calls to tenant databases from the master database that describes the tenants.
Let's see some raw data! We can enter our Azure Insights in the portal, go to Logs, and query for the dependencies we said we would send:
There's a ton of context in every one of these entries, because they all roll up to the request data from the user. So yeah, I find a lot of weird cache misses from Russian bots that I assume are trying to spam forum topics that are 10 years old.
So that's cool, but remember that my original intent was understanding what the hit ratios were for the cache, for both in-memory and Redis. Well, we can write a query for that as well, and even pin it to a dashboard, if we so choose:
Behold! Useful data! This makes way more sense than my initial look at Redis data. It shows exactly what I would hope for, in fact: There are a ton of cache hits to the in-memory cache, and when those miss, they're persisted to memory and Redis. The bottom line is that the juicy data is right there in the memory most of the time. That's success! If you go back to the raw data above, you can see that those memory hits have a duration of zero, so less than a millisecond. That's obviously faster than crossing the wire to the database.
You can also get a good summary of what's going on, and track down slow hits, by going to the application map:
Right away, I wondered what the deal was with the slow gets from Redis. I'm still looking into it, but it looks like there's some overhead and warm up involved with the connection multiplexer in the StackExchange client. It's used so infrequently that there often isn't a connection to Redis established. That may partially explain why the SetRedis events are so short, because there had to be a connection to find there was nothing there before writing to it. It's all speculation at this point though, and generally pages are arriving in under 100ms.
Azure Application Insights is a very cool tool, and not limited to just the basics. It's important that your monitoring goes deeper, because the lesson I've learned time after time is that contextual monitoring is the thing that will save you. Nothing knows about the health of the system better than the system itself, so you should strive to have it tell you what's up (or down, as the case may be). | https://weblogs.asp.net/jeff/amazing-custom-metrics-using-azure-application-insights | CC-MAIN-2020-34 | refinedweb | 1,425 | 60.65 |
Python language basics 84: starting with file I/O
March 5, 2016 Leave a comment
Introduction
In the previous post we finished discussing the basics of object inheritance in Python. We saw an example where the Dog and a Duck classes inherited from an abstract base class called Animal. All common functionality was encapsulated within the Animal class. Dog and Duck retained the code specific to them.
In this post we’ll start looking into something very different: file input and output.
File related actions
The function to open a file is aptly called “open”. We can pass the following arguments to it:
- The file path
- The file open mode – see below for more details
- The encoding, most often UTF-8 which is by far the most widely used character encoding type on the Internet
The file path is probably clear. It can be something like “source.txt”. An often recurring annoyance is the file path delimiter in various operating systems: do we use ‘/’, or ‘\’ or even ‘\\’ in the file paths? It’s best to start with a built-in function of Python already now that can combine the various sections into OS-specific file paths: os.path.join. We’ll see it in action in a bit.
What’s file open mode about? We can also call it file access mode. There are various things you can perform on a file once it is open: read, write, append something to it and some more. These modes are denoted by the following string characters:
- r: ‘read’, this is the default access mode if the parameter is omitted
- w: ‘write’, remove the existing content of the file and write new content to it
- a: ‘append’, append to the file, do not remove what’s in it
- b: ‘binary’, open for binary reading
- t: ‘text’, open as text
- x: ‘exclusive creation’, create the file, throw exception if the file already exists
These codes can be combined. E.g. ‘rw’ means open the file for reading and writing. You’ll find more access mode examples on this Python tutorial page.
First example: writing to a file
Let’s see the first example. We’ll go through the file I/O related techniques step by step. We’ll start with some code that we’ll improve later on, but it’s important to get the basics right.
The open method returns a file handle that has a write method. The file handle must be closed with the close method after all the writes are done otherwise the file handle remains open. The content will only be written to the file once the file handle is closed.
Here’s an example:
import os.path fullpath = os.path.join('C:/', 'tmp', 'source.txt') file = open(fullpath, mode='wt', encoding='UTF-8') file.write('This is some content, ') file.write('this is some more content, ') file.write('good bye for now.') file.close()
fullpath will be joined to c:\tmp\source.txt on Windows. We then open the file for writing in text mode and write some strings to it. We finally close the file.
Note that I actually have a tmp folder on my C: drive. There was no source.txt file before running the code so it was created for me. However, if I had no tmp folder then the code would fail:
FileNotFoundError: [Errno 2] No such file or directory: ‘C:/nosuchfolder\\source.txt’
The above code is not very robust to say the least. There are several things that can go wrong during file operations:
- The path doesn’t exist
- The path exists but the thread that the code is running under has no access to it, e.g. it’s not allowed to create a new file in that folder
- Something goes wrong when writing to the file, an exception is thrown and the file.close method is never reached. This results in a potential memory leak as the file handle hasn’t been closed
We’ll look into ways to improve the above code in the next post before we consider other file related operations.
Read all Python-related posts on this blog here. | https://dotnetcodr.com/2016/03/05/python-language-basics-84-starting-with-file-io/ | CC-MAIN-2021-39 | refinedweb | 693 | 73.58 |
Let’s Create a Secure HD Bitcoin Wallet in Electron + React.js
(Full Code hosted on GitHub)
As the planet is bracing for an exciting future in crypto, so do we, developers keep up to date with the technology. As there are still few posts written about programming the blockchain, I decided to write a fully fledged HD bitcoin wallet as an Electron desktop app to teach myself and others in the process.
The goal of this article is not to deepen our knowledge on Electron, React or AntD but to explore Bitcoin. For this reason, any UI or interactivity code was omitted and should be viewed directly from the source code on GitHub. Please download the repo now and keep it open as a companion to this article as I am going to be referring to the code rather than pasting it here.
Creating our First Wallet
Download and run the example app.
The first Tab of the example app contains all the functionality we need to create an addresses, receive bitcoins and send them to another address. The other two are mostly fluff as I just felt the need to create something more complete for the heck of it.
The first step in creating a wallet is creating a key pair. Pressing “create” will bring up a modal panel containing a form with the necessary fields. Now, in some examples you might have seen something like this:
const bitcoin = require('bitcoinjs-lib');
const keyPair = bitcoin.ECPair.makeRandom();
const address = keyPair.getAddress();
This is indeed the simplest way to create a key pair and an address but it is not really complete. Modern implementations use a method by which the initial randomness (entropy) used in creating a key is serialised into a mnemonic phrase and saved so that it can later be used to re-create the key. This is extremely useful in the case of importing keys from one wallet to another and restoring lost ones.
/* file: wallet.class.js
* methods: generate, create */
const mnemonic = bip39.generateMnemonic();
const seed = bip39.mnemonicToSeed(mnemonic);
const master = bitcoin.HDNode.fromSeedBuffer(seed);
const derived = master.derivePath("m/44'/0'/0'/0/0");
const address = derived.getAddress();
const privateKey = derived.keyPair.toWIF();
Now, there is a lot going on. A mnemonic is just a random number rendered as an array of words (strings) taken from a pre-set list defined in bip39. The original seed number is split into 5-bit parts where each is used as an offset to the table to retrieve a word. We will not store these words but simply display them, delegating some of the responsibility to the user who could literally store them in her memory as the word “mnemonic” implies.
If the user wishes to discard them, it is up to them. The wallet will be just as functional, apart form the ability to restore the wallet in another system.
Using the seed number (or word list), a master key is created. From that we can derive an infinite number of child keys, each able to derive child keys of its own. Here is how it’s done...
Understanding Key Derivation
HD stands for Hierarchical Deterministic. Imagine the master key being the root node of a tree structure. Each child node is another key that is derived deterministically (i.e. it will be the same every time we derive it) from the master. Each of the children nodes can derive their own keys and so forth.
A derivation path
m/0/1 means that starting from the master (m), we take the 1rst (0-indexed) child and from that the 2nd as seen above (some paths are marked as
m/0'/0' using a single quote after the index number, meaning hardened).
Now, we can throw away the master key and let the user decide how and where to store the mnemonic. This means that the address we end up with is a derivation of a master key that no longer exists anywhere. All that exists is the potential to re-create it through the mnemonic sequence.
Storing Keys Securely
I did not put “secure” on my title as clickbait! As you might have seen from creating a key is that apart from a name you will be asked to provide for a password. This password will encrypt the private part of the key and store it in the database. For this job, I have chosen a pure Javascript flat-file DB called NeDB. It is not designed to store millions of records or to retrieve complex data in milliseconds but it will do for the job.
When saving passwords of any kind on a database we never use the cleartext but only a hashed derivative. In this way, even if the database is compromised, the hacker will not be able to retrieve the original password and use it on our system (or even worst, and since many users use one password for everything… everywhere!). I used the
crypto module of Node to hash the password.
import Datastore from 'nedb';
// pwd is the password we retrieved from the formhash
const
= crypto.pbkdf2Sync(pwd, 'salt',2048
,48
, 'sha512');const cipher = crypto.createCipher('aes-256-cbc', hash);
let encrypted = '';
encrypted += cipher.update(privateKey, 'utf8', 'hex');
encrypted += cipher.final('hex');
const wallet = { name: name, address: address, // metadata
enckey: encrypted, pass: hash, // security
coins: 0, utxos: [] }; // coins
const options = { filename:'./db/wallets.db', autoload:true };
this.db = new Datastore(options);
this.db.insert(wallet, cb); // in cb, notify the user through the UI
(Note: In my code I used the async version of the
pbkdf2 function because these types of algorithms are designed to be intentionally slow so as to avoid a dictionary style attack.)
To decode, ask the user for a password, rehash it and decrypt the key:
consthash
= crypto.pbkdf2Sync(pwd, 'salt',2048
,48
, 'sha512');const cipher = crypto.createDecipher('aes-256-cbc', hash);
let decrypted = '';
decrypted += cipher.update(encryptedData, 'hex', 'utf8');
decrypted += cipher.final('utf8');
Understanding Transactions
Imagine if you will an economy where instead of money, goods are purchased by exchanging receipts from previous purchases. If I owned a store in this strange economy, I could take a receipt from my cashier’s desk, go to a store next to mine and use it as money. I would tell its owner: “Look, someone bought goods from me for a total of 20 klübecks, and I have his signature on the receipt to prove it. I will post this receipt in a public ledger (blockchain) and sign a new one for the same amount in exchange for some goods. You will be able to use this new receipt to make purchases for yourself and if anyone claims that I never had the money to begin with, he could look at the original receipt in the ledger, which proves that I was given the amount.” Now you might ask, how on earth did the first receipt came to be, so as to kick-start the whole system? Well, the people who designed the system kind of cut receipts to themselves… which is called mining in bitcoin speak. I know, it sounds a little simplistic, but I assure you that it’s not far from reality. See, a bitcoin wallet does not store bitcoins, in any form, period! All that exists is the Blockchain: a linked list of transactions from the beginning of time, with each transaction referencing a previous one. There, a transaction (formatted in JSON) looks something like this:
{
"lock_time":0,
"size":191,
"inputs":[
{
"prev_out":{ // 1. (see below)
"index":0,
"hash":"7e3ab0ea65b60f7d1ff4b231016fc958bc0766a4677"
},
"script":"47304402201c3be71e1794621cbe3a7adec1af25f818..."
}
],
"version":1,
"vin_sz":1,
"hash":"5d42b45d5a3ddcf2...",
"vout_sz":1,
"out":[ // 2.
{
"script_string":"OP_DUP OP_HASH160 e81d74...", // 3.
"address":"1NAK3za9MkbAkkSBMLcvmhTD6etgB4Vhpr",
"value":20000,
"script":"76a914e81d742e2c3..."
}
]
}
Each transaction has an identifier in the form of a hash, in the case above it is 5d42b45d5a3dd[…]. It also contains inputs and outputs which indicate where the money came from (previous transactions) and where it is going (valid addresses).
If the transaction above was sent to an address that I own, it would say something like this:
- Someone has sent me Bitcoins through a previous transaction with hash 7e3ab0ea65b60[…]. As a proof of this, I provide the
"script":"473044022…"in my input which can unlock the equivalent script from the previous transactions’s 1rst (0-indexed)
"out". (Note that the amount that was sent together with my address is not presented above. This means that to verify what I am saying one should query the blockchain with the hash provided here and retrieve the transaction it is referring to)
- Using the Bitcoins contained in that previous transaction, I will send 20000 Satoshis (0.00000001 BTC) to the address 1NAK3za9MkbAkk[…]. I record this in the out section above.
- I lock the 20000 Satoshis with
"script":"76a914e8…". For the receiving address to claim them (ex. send them to another address), it needs to create a transaction where the
"inputs":[{"prev_out":{"hash":…will be the hash of this transaction, the
"inputs":[{"prev_out":{"index":…will be 0 referring to it’s first (and only) output, while finally the
"inputs":[{"script":…will match the
"out":[{"script":…on this transaction.
For the moment let’s not concern ourselves with how the scripts match. The point is that transactions work in pairs. Each transaction refers to a previous one which in turn refers to one before, thus forming a long chain which spans from the start of the Bitcoin network continuing to infinity.
Another way of saying it, is that owning Bitcoin is nothing more that owning the cryptographic keys needed to unlock an output script from a previous transaction stored on the Bitcoin Network.
Here are two transactions that illustrate the above:
If we only have the second (right side) transaction in our hands, we need to use the hash and index from prev_out to get the transaction it is referencing and it’s output . There we will find the address and the amount the transaction at hand was using.
Let’s Send Some Money to our Wallet
With the current value of Bitcoin you might think we are crazy. What if something goes wrong? Here however we are not using the real Bitcoin network and it’s Blockchain but another, maintained specifically for the occasion of developing applications called testnet. Let’s visit a faucet and put our new address on the field. We should now have coins! Remember that these coins are not stored anywhere but are registered as UTXOs (Unspend Transaction Outputs) on the global network. Various serives can be used to query that network. Here I have used the official client designed to query
blockchain.info so let’s install it with npm.
import { blockexplorer } from 'blockchain.info';
let testexplorer = blockexplorer.usingNetwork(3); // use testnet
// given a simple wallet object
const resolve = (obj) => {
const utxos = obj.unspent_outputs;
wallet.utxos = utxos;
let satoshis = utxos.reduce((s,c) => s + curr.value, 0);
wallet.coins = satoshis / 100000000;
};
testexplorer.getUnspentOutputs(wallet.address).then(resolve);
Note that the code in the actual app is a little different (to account for multiple wallets) but the essense is that we make a request using the address we had sent the coins to through the faucet and summing all the unspent outputs to get the total value.
The coins accessible to a wallet are the sum total of all the unspent outputs present on the global network that have it’s address.
Now that we summed all outputs to get the coins for each wallet we could sum up all wallets to get a grand total of the Bitcoins we own (on testnet). We could, for the sake of our example, use the real price of Bitcoin to get a sense of what we did. We will use the same client again:
import { exchange } from 'blockchain.info';
const resolve = (price) => {
let total = this.wallets
.map(w => w.price * price)
.reduce((s,c) => s + c, 0);
this.setState({ total: total }); // React's state
};
exchange.getTicker({ currency: 'USD' }).then(resolve);
Making a Transaction
Now for the last stage. Sending money to a receiver. Remember from above that what a wallet has is not Bitcoins but the ability to unlock the script of an output found on a previous transaction. We have just transfered coins from a faucet and can see it when we make a query through blockchain.info.
Let’s create a second address and transfer some money there. Although both will be residing in the same wallet, the payment still needs to go through the network.
First we have to retrieve the private part of our key as we had encrypted it.
// get the encrypted key from the database
const cipher = crypto.createDecipher('aes-256-cbc', password);
let decrypted = cipher.update(wallet.wif_enc, 'hex', 'utf8')
decrypted += cipher.final('utf8');
const key = bitcoin.ECPair.fromWIF(decrypted, net);
Remember how we queried the blockchain and got all the UTXOs for each of our addresses:
import { blockexplorer } from 'blockchain.info';
const testexplorer = blockexplorer.usingNetwork(3); // for testnet
testexplorer.getUnspentOutputs(wallet.address).then(result => {
wallet.utxos = result.unspent_outputs;
});
With this in our hands we can try to sum up enough satoshis to satisfy our send request.
const sending = 200000; // the amount we wish to send
const net = bitcoin.networks.testnet;
const txb = new bitcoin.TransactionBuilder(net);
// loop through the available outputs until the amount is reached
let input = 0;
for (let utx of wallet.utxos) {
txb.addInput(utx.tx_hash_big_endian, utx.tx_output_n);
input += utx.value;
if (input >= sending) break;
}
const change = total - sending;
txb.addOutput(values.address, sending);
// return the rest to the wallet's address
if (change) txb.addOutput(wallet.address, change);
txb.sign(0, key);
const raw = txb.build().toHex();
console.log(raw);
The bytes outputted can be verified with any service.
Lastly, we need to broadcast our transaction to the network. We will do this with our chosen interface from blockchain.info.
import { pushtx } from 'blockchain.info';
const promise = pushtx.usingNetwork(3).pushtx(raw);
const message = 'Transaction Submitted';
promise.then(result => {
if (result === message) { /* handle success */ }
else { /* handle failure */ }
});
If we go about doing that, we will not get through and instead receive a warning saying “min relay fee not met.” For a transaction to be picked up from the network it needs to provide a fee by changing a few lines of code. Note that the fee is not an additional output but an implied amount, deducted by the difference between [input used - (amount send + change)].
const change = input - (sending + fee);
txb.addOutput(values.address, sending);
if (change) txb.addOutput(sw.address, change);
Note: It is not easy to calculate the fee as it is the result of everyone betting for what they think is fair, and the value is relative to that. In this code I have used an API that provides a decent estimate.
Conclusion
I hope that I described a practice that is a little more advanced than the rest of the tutorials found on this site (and the web in general) and it comes from actual experience with Bitcoin applications. | https://medium.com/coinmonks/lets-create-a-secure-hd-bitcoin-wallet-in-electron-react-js-575032c42bf3 | CC-MAIN-2019-04 | refinedweb | 2,501 | 64.81 |
Universal functions in Numpy are simple mathematical functions. It is just a term that we gave to mathematical functions in the Numpy library. Numpy provides various universal functions that cover a wide variety of operations.
The detailed documentation for each and every function is listed in the official website in the URL(). If want you can learn complete functions.
Here is simple python example which illustrates you how to use these universal functions
import numpy as np arr1 = np.array([10, 11, 12, 13, 14, 15]) arr2 = np.array([20, 21, 22, 23, 24, 25]) addition = np.add(arr1, arr2) print("1.addition is",addition) sub = np.subtract(arr2, arr1) print("2.subtraction is",sub) multi = np.multiply(arr1, arr2) print("3.Multiplication is",multi) Div = np.divide(arr1, arr2) print("4.Division is",Div) arr3 = np.array([2, 2, 3, 4, 5, 6]) arr4 = np.array([3, 5, 6, 8, 2, 3]) pow = np.power(arr3, arr4) print("5.powers of array3 rise array4",pow)
The Output is as follows
1.addition is [30 32 34 36 38 40] 2.subtraction is [10 10 10 10 10 10] 3.Multiplication is [200 231 264 299 336 375] 4.Division is [0.5 0.52380952 0.54545455 0.56521739 0.58333333 0.6 ] 5.powers of array3 rise array4 [ 8 32 729 65536 25 216]
Note: only a member of this blog may post a comment. | http://www.tutorialtpoint.net/2021/12/python-progarm-for-universal-functions.html | CC-MAIN-2022-05 | refinedweb | 236 | 63.66 |
As mentioned in the short introduction to the topic of graphs, a graph consists of nodes and edges. As a node is represented by an instance of the Node class, the Edge generic class can be used to represent an edge. The suitable part of code is as follows:
public class Edge<T> { public Node<T> From { get; set; } public Node<T> To { get; set; } public int Weight { get; set; } public override string ToString() { return $"Edge: {From.Data} -> {To.Data}, weight: {Weight}"; } }
The class contains three properties, namely representing nodes adjacent to the edge (From and To), as well as the weight of the edge (Weight). Moreover, the ToString method is overridden to present some basic information about the edge. | https://www.oreilly.com/library/view/c-data-structures/9781788833738/3c376fd7-91a8-459f-85e2-0c3d3c843ff3.xhtml | CC-MAIN-2019-30 | refinedweb | 120 | 65.05 |
XSLT in article is horrible
I have done quite a bit of XSLT in the past; and certainly enough to see that the XSLT sample in the tip is full of bad practices. To name a few:
- the use of xsl:for-each is generally considered bad practice, use xsl:apply-templates instead. XSLT is a functional language, not a procedural one.
- there is no need at all to use an extension function, the XSLT/XPath count() function will do just fine. Probably it will even do better, because it could be optimized by the XSLT compiler.
-more generally, extension functions are non-portable and should only be used for things you can't do in XSLT. In my experience, you hardly ever need them.
- the use of xsl:element and xsl:attribute to create output elements is only needed in specific (generally hairy) situations, and is certainly superfluous in the case of the article.
I am no expert either, but IMO the XSLT in the article looks like it was written by an absolute beginner in XSLT. The author hasn't wrapped his mind aroundthe (functional) concepts of XSLT yet. Which is a pity, because XSL can be highly elegant and concise when written well.
Bad XSLT
I agree with the previous poster's points! I ~have~ found the use of <xsl:element> handy when my input XML & XSL had special namespaces - in those cases, the use of a hardcoded output tag seemed to duplicate the namespace declaration in each of thethose output elements. BUT, I most certainly, wholeheartedly, strenuously agree with the condemnation of using the extension function (unless the use of such was as illustration only) - this XSLT would've been much cleaner without it and using it makes me doubt that the author has much understanding of the richness of node/axes traversal available in XSLT. using <xsl:apply-templates> is also better except in extreme, hack-it-together development.
Web Development Zone TechMail
The Feb. 4, Web Development Zone TechMail discusses transforming XML into HTML. In your application development, have you ever needed to transform XML data into HTML? Will this tip on transforming datainto HTML using XSLT style sheets help you?
This conversation is currently closed to new comments. | https://www.techrepublic.com/forums/discussions/web-development-zone-techmail/ | CC-MAIN-2017-51 | refinedweb | 375 | 61.77 |
My code currently looks like
import UIKit
if let url = URL(string: "") {
do {
let contents = try String(contentsOf: url)
print(contents)
} catch {
// contents could not be loaded
}
} else {
// the URL was bad!
}
\f0\fs24 \cf0 NEW Newcastle 84 }
NEW Newcastle 84
The
.rtf extension on your URL tells me that the file contains Rich Text Format (RTF), which means it has formatting markup mixed in with the text.
If you are on an Apple platform, you can use the platform's RTF parser to load the file as an
NSAttributedString and get the plain text from it:
let richText = try NSAttributedString(url: url, options: [:], documentAttributes: nil) let plainText = richText.string | https://codedump.io/share/HCxXIgWAPVmE/1/read-text-file-from-network-url | CC-MAIN-2019-09 | refinedweb | 110 | 62.21 |
Hide Forgot
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.0.1) Gecko/20020826
Description of problem:
When I choose imap for the mail server it does not load the folders i have on
the mail server. I have done the same thing in mozilla's mail program and it
works properly. I have used evolution in past versions and it worked properly also.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1.launch evolution
2.select mail accounts and then edit my account
3.select imap for the incoming server
Actual Results: does nothing
Expected Results: should show the folders on the imap server.
Additional info:
If you run `CAMEL_VERBOSE_DEBUG=1 evolution-mail` from a terminal, wait 5-10
seconds and then start evolution, what sort of output do you get in the terminal
when you try to see your imap folders?
Also, what is the server side running?
I tried it and in the terminal window nothing comes back but a carriage return
to the next line. I don't know what the server is running but i can tell you
that mozilla mail works perfectly with it.
Yes, you won't get any output in the terminal with evolution-mail running until
after evolution starts and tries to do mail operations -- if you don't get
anything then, you probably need to let the evolution-mail process wait longer
before starting evolution.
When I open evolution and send/get messages it attempts to get them and then
comes back with no messages. In prior releases this process immediately
retrieved mail as it does now with mozilla-mail. If I edit the account and
change the server from imap to pop then it will retrieve messages even though it
is a imap mail server. What would you like to do in order to debug this?
And you don't get any output from the evolution-mail process? Do you get output
from it when you switch to POP?
That is correct.
I will just add that I had problems with evolution and IMAP as well. It also
would not load my inbox from /var/spool/mail and my IMAP
folders /home/user/Mail.
Returned to using Kmail and Mozilla Mail...both work fine.
I have the same problem here. Our server is Red Hat 7.2, fully up2date, running
the Red Hat imap package in SSL mode (imaps).
No problems in other clients. I've attached the debug output from the command
mentioned, just in case it doesn't come from someone else.
Created attachment 80424 [details]
Debug output from eveolution-mail
Running Evolution against an IMAP server on the same machine / user also ends
uphogging CPU since it recursively follows a symbolic link from
~user/.openoffice/user/work -> ~user.
Mozilla mail in the same circumstances doesn't try to be clever...
System is a fresh install of RH8.0
Hello. Is anyone home. I would still like to get evolution working on my laptop.
evolution 1.2.x in rawhide handles imap namespaces much more correctly according
to the imap rfcs, which should work around this problem. | https://bugzilla.redhat.com/show_bug.cgi?id=74561 | CC-MAIN-2019-04 | refinedweb | 534 | 66.03 |
I think it’s worth noting that in Java, 1 == 1L is actually ((long) 1) == 1L, so it’s a (rather mundane) comparison of two longs. Comparing int and boolean (for example) is not allowed, which leads me to think that if, hypothetically, one could prevent the compiler from converting the int to a long, 1 == 1L would not be allowed either. All of this is to say that, if ((any) 1) == ((any) 1L) was somehow valid Java code, I think it would probably yield false, because the types are not comparable.
1 == 1L
((long) 1) == 1L
long
int
boolean
((any) 1) == ((any) 1L)
It is my impression from this discussion that, unlike Java, Scala treats 1 == 1L as a call to a method Int.==(Long), and not as 1.toLong == 1L. If the latter was the case, I think it would be obvious to say that (1: Any) != (1L: Any). However, even in the former case, it’s not clear to me that Any.==(Any) should jump through hoops to call Int.==(Long) if it’s an Int and its argument is a Long.
Int.==(Long)
1.toLong == 1L
(1: Any) != (1L: Any)
Any.==(Any)
Int
Long
Going away from this might be ‘ok’ for library code, but not for user code. It’s a punch in the face of dynamic-feel applications where you don’t want the user to be confronted with 1f versus 1.0 versus 1.
However, this does not solve the problem for composite keys like in Map[(Int,Int),Int] – when keys are case classes or tuples, they will still compare their components and compute hashes with == and ##… though I’m not sure how often people actually use composite keys.
Map[(Int,Int),Int]
==
##
> (1.0:Any) equals (1:Any)
res0: Boolean = false
> (1.0:Any, 2.0:Any) equals (1:Any, 2:Any)
res1: Boolean = true
This doesn’t seem consistent. For it to be, equals on Product classes should rely on equals of their components, and == should rely on == similarly.
equals
Product
I see we have a disconnect on what semantic consistency and elegance means. What I mean by that is: “Be able to describe what the language does with as few principles as possible”. Pre co-operative equality we had
There is a method == in Any, defined as follows:
final def == (that: Any): Boolean =
if (null eq this) null eq that else this equals that
There are overloaded methods in numeric classes that define == for specific combinations of types.
From a spec perspective, (2) could be seen as libraries, so I really care only about (1), which is simple enough. Post co-operative equality things got much messier. One sign of this is that the spec is now actually wrong, in the sense that it does not describe what the actual implemented behavior is (see my earlier post). Even if we would fix the spec it would have to specify the == method on Any as something like this:
Any
final def ==(that: Any): Boolean = this match {
case this: Byte =>
that match {
case that: Byte => this == that
case that: Short => this == that
case that: Char => this == that
case that: Int => this == that
case that: Long => this == that
case that: Float => this == that
case that: Double => this == that
case _ => false
}
case this: Short =>
...
... same as for Byte for all other numeric types
...
case _ =>
if (null eq this) null eq that else this equals that
}
I guess you agree that’s cringeworthy. It’s bloated, and anti-modular in that it ties the definition of Any with the precise set of supported numeric classes. If we ever would want to come back and add another numeric class, the definition would be invalid and would have to be rewritten. We could try to hide the complexity by specifying that == should behave like a multi-method. But that means we pull a rabbit out of our hat, because Scala does not have multi-methods. That’s actually another good illustration of the difference between (1) and (2). Multi-methods are very intuitive so from a viewpoint of (1) are desirable. But adding them to a semantic would be a huge complication.
However, this does not solve the problem for composite keys like in Map[(Int,Int),Int] – when keys are case classes or tuples, they will still compare their components and compute hashes with == and ##…
However, this does not solve the problem for composite keys like in Map[(Int,Int),Int] – when keys are case classes or tuples, they will still compare their components and compute hashes with == and ##…
Thanks for this observation! So because of co-operative equality equals and hashCode now turn out to be broken as well! This is a great demonstration that complexity breeds further complexity.
hashCode
I think equals and hashCode for case classes need to be defined in terms of themselves. It’s weird that they should forward to == and ##. But of course, that would mean we need four instead of two methods per case class to implement equality and hashing.
Unless, of course, we get rid of co-operative equality. It seems the case for doing this gets ever stronger.
In light of this development we might actually need to do this for 2.13. The problem is that we cannot fix the new collections to be more performant and NaN safe without also fixing the generation of equals and hashCode for case classes.
An interesting thing I discovered is that, if cooperative equality is removed (without changing anything else), symmetry for == will be broken for a small number of cases. Specifically:
> (1: Any) == (BigInt(1): Any)
res0: Boolean = false
> (BigInt(1): Any) == (1: Any)
res1: Boolean = true
> (1: Any) == (BigDecimal(1): Any)
res2: Boolean = false
> (BigDecimal(1): Any) == (1: Any)
res3: Boolean = true
In fact, the above cases are already violate symmetry for equals (is that a bug?)
scala> 1 equals BigInt(1)
res4: Boolean = false
scala> BigInt(1) equals 1
res5: Boolean = true
(I’m not saying this is a reason to keep cooperative equality; I’m only noting that it may add complications.)
In fact, the above cases are already violate symmetry for equals (is that a bug?)
I would say, yes. If we want to stay consistent, we should have
BigInt(1) == 1 == true
1 == BigInt(1) == true
BigInt(1).equals(1) == false
1.equals(BigInt(1)) == false
BigInt(1) == 1 == true
BigInt(1).equals(1) == false
BigInt(1) == 1 == true
BigInt(1).equals(1) == false
Aren’t BigInt(1) == 1 and BigInt(1).equals(1) equivalent, assuming BigInt(1) isn’t null (which it isn’t)?
BigInt(1) == 1
BigInt(1).equals(1)
BigInt(1)
null
I agree with that!
I agree they’re messier in code. But the principle is really simple:
Equality behaves the same way regardless of context for standard library types.
Equality behaves the same way regardless of context for standard library types.
You can rewrite it in pseudocode as
forall[A, B, C >: A, D >: B]{ (a: A) == (b: B) iff (c: C) == (d: D) }
if you want a formula. Despite the simple principle and simple formula, though, it’s quite hairy to implement it.
I do agree with all that. Despite being really nice to work with at the user level, it makes certain parts of the implementation very awkward to adjust, effectively freezing that part of the language in stone (or at least greatly raising the barrier to make changes, e.g. with the unsigned numeric types).
But I think the solution, if any, has to be to drop == on Any, because
Do you have a better way to catch behavioral changes in existing code and prevent bugs in future code?
We could add a typeclass that would re-enable == on Any; effectively
trait Equivalence[A] {
def apply(lhs: A, rhs: A): Boolean
}
implicit class AnyHasEquals(a: Any)(implicit eql: Equivalence[Any])
extends AnyVal {
def ==(that: Any) = eql(this, that)
}
One of the implicits one could use could forward to scala.runtime.BoxesRunTime.equals and then the existing behavior would continue (with speed penalty, but at least you can control your destiny then).
scala.runtime.BoxesRunTime.equals
Of course, we’d have to make sure that this was typically zero-cost, to meet speed requirements. Implicit AnyVals leave a lot of crud behind in the bytecode presently.
AnyVal
There’s no way we can drop == on Any or AnyRef. It plays a central role in almost every Scala program.
But migration indeed the crux of the matter. Would it be too risky to revert now? I don’t remember any breakage when we introduced co-operative equality (was it in 2.8?) so at least at the time few programs cared either way. I do remember people being bitten by NaN in collections, but here the situation would improve if we reverted.
Maybe we could put the new behavior under a command-line switch and try to do the community build with the new option? That would give us some indication how widespread problems would be. don’t think this will be much of an issue. Somehow people have no problem in Java or C# with this, nor do I remember our users having had a problem in Scala before we introduced the change.
An idea, but please forgive if it is ridiculous. What if == would return a different boolean type when used between unrelated numerics? Which means 1 == 1L would return a JBoolean. If someone wants to support cooperative equality, then import a JBoolean => Boolean implicit.
JBoolean
JBoolean => Boolean
You wouldn’t drop it on AnyRef; that’s well-defined already to be non-cooperative. Just on Any. You can always .asInstanceOf[AnyRef] when you need non-cooperative equality on Any (and a typeclass could make that better).
AnyRef
.asInstanceOf[AnyRef]
That sounds like one reasonable way to get some data on how widespread problems are.
The thing is, I don’t expect the problems to be very widespread, just rather dire; and they would tend to occur in places where people have done things which are valid but not best practice (hopefully rare in the community build).
For example, suppose there is a site that has User IDs that are given by number, but during account creation there are partial user records that are identified by username instead (which is also guaranteed to be unique). Someone writes
val users: Map[Any, UserRecord] = ...
It really should be Either[String, Long] or somesuch, but hey, it works.
Either[String, Long]
Now suppose there are a set of admin users with predefined user numbers.
users.get(0)
Uh-oh. After the change to equality, the admin user lookups fail.
In C# it doesn’t work in generic context, but I don’t have enough experience with C# to really know whether there are equality pitfalls there.
In Java people do have problems with == vs. equals with stuff like
Long x = 150L; if (x != 150L) System.out.println("What the...?!");
at least judging from StackOverflow questions.
Java forces you to pay attention all the time to whether something is boxed or not in order to even know what method name to use. If you’re already doing that, it’s easy enough to cope with !((Object)1L).equals((Object)1) despite 1L == 1. Again, it’s not even the same method name!
!((Object)1L).equals((Object)1)
1L == 1
The difference to Java is that in Java, it is always obvious whether a type
is unboxed or boxed, so at least people can more easily adapt to unboxed
and boxed types behaving differently.
But that would mean that e.g. HashMap could not use == anymore and would have to fall back on equals. We could do that but doing so would probably already cause most of the migration errors we would expect overall. So, if migration is our main concern, we might as well keep == for Any.
HashMap
Perhaps you’re right–it wouldn’t be worth it to have the compiler help people catch errors in their own code when it’s usage of library code that is most likely to reveal the difference.
There’s no way we can drop == on Any or AnyRef. It plays a central role in almost every Scala program.
Why? IIRC, moving Any.== to Any.AnyOps does not break source-level compatibility.
Any.==
Any.AnyOps
The title of this thread was not meant as a rhetorical question. I started this thread because I was not sure whether I had all the arguments for co-operative equality. In the discussion that followed I did not see any new arguments for it, but several serious new arguments against.
Here’s the case against co-operative equality:
“Break” means: We can construct examples where the outcome violates important laws.
Slow: Map get is at least twice as slow in Scala than in Java because it has to use co-operative equality. Other operations are also affected.
get
Not an equivalence: The culprit here is NaN. The IEEE floating point standard mandates
NaN
NaN != NaN
and that’s what the JVM implements, One can have a philosophical discussion whether that makes sense or not (and there are good arguments for both sides), but the fact is that we will not go against an established standard. The problem is then that with co-operative equality this irregularity, which was restricted to floating point comparisons only, now gets injected into our universal equality relation. I remember having seen bug reports about this. Users get bitten because
mutable.Map[Any, Int](NaN -> 1).get(NaN)
gives a None instead of a Some(1).
None
Some(1)
Now things get ironical. People might turn to Java collections instead of Scala collections to solve the two problems above. Java collections are based on equals instead of ==. Unfortunately, cooperative equality means that equals in Scala is also broken! Consider:
scala> NaN equals NaN
res1: Boolean = true
scala> (NaN, 1) equals (NaN, 1)
res2: Boolean = false
Similarly, but dually,
scala> 1 equals 1L
res3: Boolean = false
scala> Some(1) equals Some(1L)
res4: Boolean = true
So, equals is not even a congruence anymore! In other words, our well-intentioned attempt to improve the API of == has actually ruined the API of equals! (and, no, there’s no easy way to fix this).
Breaks Java collections. The illogical implementation of equals is a problem if we want to use Java collections with Scala case classes as keys.
Messy to specify. That was my original complaint and I have already written too much about it.
For me the most enlightening comments in this thread were the one by @scottcarey where he showed that we need two notions of equality, one an equivalence and the other not, and the one by @LPTK where he showed the problems with equals.
So I am now convinced that we should do what we can to drop cooperative equality on Any (and by extension on all unbounded generic type parameters). As @Ichoran notes, the big problem here is migration. And I am not sure I have a good answer yet, except, try it out on large code bases and see what happens. Hopefully, the instances where the change matters will be far and few between.
For clarity: does this change mean that primitive 1 and 1L will be equal, but boxed 1 and 1L won’t?
That’s what it means, yes.
So these two methods would give different results for isOne(1L)? That’s also far from an ideal situation. Maybe better to go all the way and get rid of universal equality then.
isOne(1L)
def isOne(a: Long) = a == 1
def isOne[A](a: A) = a == 1
Perhaps even more confusing, specializing a class or method might also result in different behavior I guess. | https://contributors.scala-lang.org/t/can-we-get-rid-of-cooperative-equality/1131?page=3 | CC-MAIN-2017-43 | refinedweb | 2,640 | 71.75 |
Manage your sitemaps
Video sitemaps and video sitemap alternatives. Google Video Sitemap is an extension to the Sitemap standard.
While Google recommends using video sitemaps, we also support mRSS feeds.
Guidelines for video sitemaps
Here are basic guidelines for video sitemaps:
- You can create a separate sitemap just for video, or you can embed a video sitemap within an existing sitemap, whichever is more convenient for you.
- You can host multiple videos in one web page.
- Each sitemap entry is the URL of a page that hosts one or more videos. The structure of each sitemap entry is as follows:
<url> <loc></loc> <!-- URL of host page --> <video> ... information about video 1 ... </video> ... as many additional <video> entries as you need ... </url>
- Don't list videos that are unrelated to the host page. For example, if the video is a small addendum to the page, or unrelated to the main text content.
- Each entry in a video sitemap includes a set of required, recommended, or optional values that you supply. Recommended and optional values provide useful metadata that can enhance your video results and improve Google's ability to include your video in search results. See the table below for a list of sitemap elements.
- Google might use text on the video landing page rather than the text you supply in your sitemap, if the page text is deemed more useful than the information in the sitemap.
- Google can't guarantee when or if your videos will be indexed, as Google relies on complex indexing algorithms.
- If Google cannot discover video content at the URL you provide, the sitemap entry will be ignored.
- Each sitemap file that you provide must have no more than 50,000 URL elements. If you have more than 50,000 videos, you can submit multiple sitemaps and a sitemap index file. You cannot nest sitemap index files. Keep in mind that if you are adding optional tags, you may hit the 50MB uncompressed limit before you hit the 50,000 video limit.
- Google must be able to access the source file or player (that is, the file or player cannot be blocked by robots.txt, require a login, or be otherwise inaccessible to Googlebot). Metafiles that require a download of the source via streaming protocols are not supported.
- All files must be accessible to Googlebot. If you want to prevent spammers from accessing your video content at the
<player_loc>or
<content_loc>URLs, verify that any bots accessing your server are really Googlebot.
- Make sure that your robots.txt file isn't blocking any of the items (including the host page URL, the video URL, and the thumbnail URL) included in each sitemap entry. More information about robots.txt.
- Google verifies that the information you provide for each video matches what is on the site. If not, your video might not be indexed.
- You can specify pages from different sites in one sitemap. All sites, including the one containing your sitemap, must be verified in Search Console. More information about managing sitemaps for multiple sites.
Example sitemap
Here is a sample video sitemap with one page hosting one></video:player_loc> <video:duration>600</video:duration> <video:expiration_date>2021:price1.99</video:price> <video:requires_subscription>yes</video:requires_subscription> <video:uploaderGrillyMcGrillerson </video:uploader> <video:live>no</video:live> </video:video> </url> </urlset>
XML namespace
The Video Sitemap tags are defined in the following namespace:
xmlns:video=""
Video sitemap tag definitions
You can find more documentation on media sitemaps at rssboard.org.
While Google recommends using video sitemaps and schema.org's VideoObject to mark up your videos, we also support mRSS feeds.
Google supports mRSS, an RSS module that supplements the element capabilities of RSS 2.0. mRSS feeds are very similar to video sitemaps and can be tested, submitted, and updated just like sitemaps.
Each mRSS feed must be under 50MB in size when uncompressed, and can contain no more than 50,000 video items. If your uncompressed file is larger than 50MB, or you have more than 50,000 videos, you can submit multiple mRSS feeds and a sitemap index file. Sitemap indexes can contain mRSS feeds.RSS vs mRSS – mRSS is a RSS extension used for syndicating multimedia files. It allows for a much more detailed description of the content than the RSS standard.
mRSS Example
Here's an example of an mRSS entry that provides all the key tags that Google uses. This includes
<dcterms:type>live-video</dcterms:type>, which you can use to identify live, streaming videos.
<?xml version="1.0" encoding="UTF-8"?> <rss version="2.0" xmlns: <channel> <title>Example MRSS</title> <link></link> <description>MRSS Example</description> <item xmlns: <link></link> <media:content <media:player <media:title>Grilling Steaks for Summer</media:title> <media:description>Get perfectly done steaks every time</media:description> <media:thumbnail <media:price <media:price </media:content> <media:restrictionus ca</media:restriction> <dcterms:valid xmlns:end=2020-10-15T00:00+01:00; scheme=W3C-DTF</dcterms:valid> <dcterms:type>live-video</dcterms:type> </item> </channel> </rss>
mRSS Tags
The full mRSS specification contains many more optional tags, best practices, and examples. Once you have an mRSS feed, you can test and submit it just like a video sitemap. | https://support.google.com/webmasters/answer/80471?visit_id=637041628107442780-3764205185&hl=en&ref_topic=6080646&rd=1 | CC-MAIN-2019-39 | refinedweb | 873 | 64.2 |
Le mardi 30 septembre 2008 à 17:20 +0200, Tarek Ziadé a écrit : > > Again, when a C library changes its ABI, we do not allow it to keep the > > same name. It's as simple as that. > > I see, so there's no deprecation processes for a package ? Not per se. It is the job of the package manager to propose removing deprecated packages when they are no longer available in the repository. > I mean, if you change a public API of your package , you *have* to > change its name ? Yes, this is the requirement for C libraries, and we try to enforce it as well for other languages. > My convention is to : > - keep the the old API and the new API in the new version, let's say "2.0" > - mark the old API as deprecated (we have this "warning'" module in > Python to do so) > - remove the old API in the next release, like "2.1" > > But I don't want to change the package name. > > And the development cycles in a python package are really short > compared to OS systems, in fact > we can have quite a few releases before a package is really stable. I don’t think the requirements are different from those of C library developers. There are, of course, special cases for libraries that are in development; generally we take a snapshot, give it a specific soname and enforce the ABI compatibility in the Debian package. The other possibility is to distribute the library only in a private directory. Nothing in this process is specific to C; the technical details are different for python modules, but we should be able to handle it in a similar way. > > This is not an improvement, it is a nightmare for the sysadmin. You > > cannot install things as simple (and as critical) as security updates if > > you allow several versions to be installed together. > > mmm... unless the version is "part of the name" in a way.... Yes, this is what C libraries do with the SONAME, for which the convention is to postfix it with a number, which changes when the ABI is changed in an incompatible way. I don’t know whether it would be possible to do similar things with python modules, but it is certainly something to look at. > > Two conflicting versions must not use the same module namespace. > > I have an idea: what about having a "known good set" (KGS) like what > Zope has built on its > side. > > a Known Good Set is a set of python package versions, that are known to provide > the good execution context for a given version of Python. > > Maybe the Python community could maintain a known good set of python > packages at PyPI, with a real work on its integrity, like any > OS-vendor does I believe. Having a body that enforces API stability for a number of packages would probably prevent such issues from happening in those packages. However, that means relying too much and this body, and experience proves it will quickly lag behind. Furthermore, the need to add packages that are not in the KGS to distributions will arise sooner or later. > And maybe this KGS could be used by Debian as the reference of package versions. We will always need, for some cases, more recent packages or packages that are not in the KGS. > -> if a package is listed in this KGS, it defines the version, for a > given version of Python You don’t have to define it so strictly. 
There is no reason why a new version couldn’t be accepted in the KGS for an existing python version, if it has been checked that it will not break existing applications using this module.: <> | https://mail.python.org/pipermail/distutils-sig/2008-September/010148.html | CC-MAIN-2014-15 | refinedweb | 623 | 68.4 |
This does the right thing when the connection succeeds
within the time limit, and also in the case of the problem we
are trying to solve, where it takes a very long time to fail.
But if the connection succeeds after the time limit,
the caller will already have returned, and we'll have made a
connection that nobody knows about!
This is the outline of my second attempt, which I believe is
correct. There are several refinements we'll need to apply
before having a solution that actually works.
// This is just an outline: the real code appears later
JMXConnector connectWithTimeout(JMXServiceURL url, long timeout, TimeUnit unit) {
    final BlockingQueue<Object> mailbox = new ArrayBlockingQueue<Object>(1);
    final ExecutorService executor = Executors.newSingleThreadExecutor();
    executor.submit(new Runnable() {
        public void run() {
            JMXConnector connector = JMXConnectorFactory.connect(url);
            if (!mailbox.offer(connector))
                connector.close();
        }
    });
    Object result = mailbox.poll(timeout, unit);
    if (result == null) {
        if (!mailbox.offer(""))
            result = mailbox.take();
    }
    return (JMXConnector) result;
}
To understand how and why this works, notice that exactly one
object always gets posted to the mailbox. There
are three cases:
1. The connection attempt succeeds within the timeout. The connect
thread posts the connector to the mailbox, and the main thread's
poll call retrieves it.
2. The timeout expires before the connection attempt completes. Then
poll returns null and the main thread posts the empty string to the
mailbox. If the connection attempt later succeeds, the connect
thread's offer fails because the mailbox is already full, so it
knows nobody is waiting for the connector and closes it.
3. The connection attempt completes just after the timeout but before
the main thread has posted the empty string. Then the main thread's
offer fails, which tells it that the connect thread has already
posted its result, so it takes the connector from the mailbox and
returns it after all.
The code above is just an outline, and leaves out some
necessary details. We need to refine it in several ways to make
it work.
The first refinement we'll need is exception handling.
The result of the connection attempt could be an exception
instead of a JMXConnector. This doesn't change the reasoning
above, but it does complicate the code.
The main thread calls BlockingQueue.poll,
which can throw InterruptedException, so we must handle
that.
About half of the final version of connectWithTimeout involves
footering about with exceptions. It's times like this that I'm
inclined to join the checked-exception-haters.
The second refinement is to clean up the connect thread
when we're finished with it. The outline code doesn't call shutdown()
on the ExecutorService, so every time connectWithTimeout is
called, a new single-thread executor is created, and therefore a
new thread. If you're lucky, the garbage-collector will pick up
your executors and their threads at some stage, but you don't
want to depend on luck.
A more subtle point about threads is that the outline code will
create non-daemon threads. Your application will not exit when
the main thread exits if there are any non-daemon threads. So
as written, if you have a thread stuck in a connection attempt
and your application is otherwise finished, it will stay around
until the connection attempt finally times out. That's pretty
much exactly the sort of thing we're trying to avoid. So we'll
need to arrange to create a daemon thread instead.
All right, so here's the real code.
public static JMXConnector connectWithTimeout(
final JMXServiceURL url, long timeout, TimeUnit unit)
throws IOException {
final BlockingQueue<Object> mailbox = new ArrayBlockingQueue<Object>(1);
ExecutorService executor =
Executors.newSingleThreadExecutor(daemonThreadFactory);
executor.submit(new Runnable() {
public void run() {
try {
JMXConnector connector = JMXConnectorFactory.connect(url);
if (!mailbox.offer(connector))
connector.close();
} catch (Throwable t) {
mailbox.offer(t);
}
}
});
Object result;
try {
result = mailbox.poll(timeout, unit);
if (result == null) {
if (!mailbox.offer(""))
result = mailbox.take();
}
} catch (InterruptedException e) {
throw initCause(new InterruptedIOException(e.getMessage()), e);
} finally {
executor.shutdown();
}
if (result == null)
throw new SocketTimeoutException("Connect timed out: " + url);
if (result instanceof JMXConnector)
return (JMXConnector) result;
try {
throw (Throwable) result;
} catch (IOException e) {
throw e;
} catch (RuntimeException e) {
throw e;
} catch (Error e) {
throw e;
} catch (Throwable e) {
// In principle this can't happen but we wrap it anyway
throw new IOException(e.toString(), e);
}
}
private static <T extends Throwable> T initCause(T wrapper, Throwable wrapped) {
wrapper.initCause(wrapped);
return wrapper;
}
private static class DaemonThreadFactory implements ThreadFactory {
public Thread newThread(Runnable r) {
Thread t = Executors.defaultThreadFactory().newThread(r);
t.setDaemon(true);
return t;
}
}
private static final ThreadFactory daemonThreadFactory = new DaemonThreadFactory();
The initCause method is only used once but it's handy to have
around for those troublesome exceptions that don't have a
Throwable cause parameter.
Throwable cause
I think it would be awfully nice if java.util.concurrent
supplied DaemonThreadFactory rather than everyone
having to invent it all the time.
I admit I'm a bit uncomfortable with the code here. I'd be
happier if I didn't need to reason about it in order to convince
myself that it's correct. But I don't see any simpler way of
using the java.util.concurrent API to achieve the same effect.
Uses of cancel or interrupt tend to lead to race conditions,
where the task can be cancelled after it has already delivered
its result, and again we can get a JMXConnector leak; or we
might close a JMXConnector that the main thread is about to
return. I'd be interested in suggestions for
simplification.
This is a useful technique in many cases, subject to the caution above. It's not limited to the JMX
Remote API, either; you might use it when accessing a remote web
service or EJB or whatever, without having to figure out how to
get hold of the underlying Socket so you can set its timeout.
My thanks to Sébastien
Martin for the discussion that led to this entry.
[Tags: jmx timeout concurrent.]
I'm not sure that there is a completely universal solution, and simple concrete pools are available.
I'm certainly not keen on completely abandoning threads though, ever, unless you impose an upper bound on unreaped threads, else you have created a way of blowing up long-running apps at random running out of memory, etc...
Rgds
Damon
Posted by: damonhd on May 23, 2007 at 02:13 PM
Posted by: applebanana8 on May 24, 2007 at 01:44 AM
Damon, I agree with your comment about abandoning threads, which is why I added a caution about when the technique is valid.
I'm not sure what you mean by "simple concrete pools are available". I'd certainly be interested if there's some library that would implement the logic of "wait for the operation or abandon it and clean it up when it finishes".
applebanana8, if you can't create threads, then you are screwed, to use a technical term. Seriously, it would still be possible to fall back on the socket timeout option I described at the outset, but that option does vary from somewhat complicated to horribly complicated.
Posted by: emcmanus on May 24, 2007 at 06:18 AM
I meant that things like Executor.newScheduledThreadPool(int corePoolSize) exist, so that a simple static definition can be used to set up a pool rather than writing a new class, except for paranoid perfectionists like me...
One simple way to cap and use timeouts as just being discussed is to use a thread pool with capped size and then use the facility to have excess work handled in the calling thread when the pool gets full. That might be a reasonable compromise between simplicity and safety, though I'd still prefer to stack up async and relatively cheap file handles than expensive and big threads if possible.
Posted by: damonhd on May 24, 2007 at 01:53 PM
Yes, I deliberately chose a solution with an Executor so you could easily replace it with another flavour of Executor. I discarded an alternative solution that required singleThreadExecutor semantics for this reason. As you say, you could easily avoid catastrophic failure in (putatively) pathological cases with a bounded thread pool.
I do agree that explicit connection timeouts are better if you can persuade your API to provide them. With the JMX Remote API and its RMI connector, that involves some fairly complicated magic with RMIClientSocketFactory, and it doesn't compose well with other uses of socket factories.
Posted by: emcmanus on May 25, 2007 at 01:32 AM
throw initCause(new IOException(e.toString()), e);
Posted by: jswift on September 05, 2007 at 04:04 PM
Posted by: emcmanus on September 06, 2007 at 03:11 AM
Posted by: sabram on February 27, 2008 at 04:41 PM
Posted by: emcmanus on April 04, 2008 at 09:14 AM | http://weblogs.java.net/blog/emcmanus/archive/2007/05/making_a_jmx_co_1.html | crawl-001 | refinedweb | 1,343 | 53.61 |
In this example we’ll be using the Microchip 24LC256 EEPROM, this chip when connected to an Arduino can increase the available memory space by 32kbytes. Here is a pinout of the IC
The address pins, A0, A1, and A2, which are pins 1, 2, and 3 are all connected to ground. Because of this they are all in LOW states (0v). This means that the address pins will have a value of 000 and the I2C address will be 0x50
The SDA pin, pin 5, of the EEPROM connects to analog pin 4 on the arduino.
The SCL pin, pin 6, of the EEPROM connects to analog pin 5 on the arduino.. it also has pull ups on the I2C lines on board
Of course a schematic is always useful to look at, just in case you want build one of these. The IC is an 8 pin DIP so its quite an easy little circuit to build on a breadboard or stripboard, you can use larger sized EEPROMs as well
Lets look at a simple code example which will write some data out and read it back in again, you can see some debug in the serial monitor
Code
The code below is for newer Arduino IDE versions as it uses Wire.receive to Wire.read , if you are still using a pre 1.0 version you need to change the code below to use Wire.send to Wire.write instead
#include <Wire.h> // for I2C #define eeprom_address 0x50 // device address byte d=0; void setup() { Serial.begin(115200); // Initialize the serial Wire.begin(); //write data out Serial.println("Writing data."); for (int i=0; i<10; i++) { writeData(i,i); } Serial.println("Complete"); //read data back Serial.println("Reading data."); for (int i=0; i<10; i++) { Serial.print(i); Serial.print(" : "); d=readData(i); Serial.println(d, DEC); } Serial.println("Complete"); } // writes a byte of data in memory location eaddress void writeData(unsigned int eaddress, byte data) { Wire.beginTransmission(eeprom_address); // set the pointer position Wire.write((int)(eaddress >> 8)); Wire.write((int)(eaddress & 0xFF)); Wire.write(data); Wire.endTransmission(); delay(10); } // reads a byte of data from memory location eaddress byte readData(unsigned int eaddress) { byte result; Wire.beginTransmission(eeprom_address); // set the pointer position Wire.write((int)(eaddress >> 8)); Wire.write((int)(eaddress & 0xFF)); Wire.endTransmission(); Wire.requestFrom(eeprom_address,1); // get the byte of data result = Wire.read(); return result; } void loop() { }
You should see something like
Writing data. Complete Reading data. 0 : 0 1 : 1 2 : 2 3 : 3 4 : 4 5 : 5 6 : 6 7 : 7 8 : 8 9 : 9 Complete
Links
The IC comes in at about $0.65 a piece and the module is under $2
20PCS 24LC256 24LC256-I/P DIP
AT24C256 I2C Interface EEPROM Memory Module | http://arduinolearning.com/learning/basics/interfacing-to-a-24lc256-eeprom.php | CC-MAIN-2022-27 | refinedweb | 466 | 67.55 |
I'm trying to create a problem that calculates Distance Traveled using user-input for their Speed and Time. However, the assignment requires that I write a separate method for the computation of the Distance, so I can recall it back in the Main method, then display the results in another method. But - I keep getting some errors. Any help will surely be appreciated. Thanks in advance.
P.S. Bare with me on my coding. I am an amateur at Java Programming. Still learning.
import java.util.*; import java.io.*; import javax.swing.JOptionPane; public class distTravel { public static void main(String[] args) { // Variables int dist, speed, time; // The 'Distance Traveled' equation. distance(speed, time); // Display Data for(int i=1; i<=time; i++) displayData(i, speed); System.exit(0); } public static double distance(int speed, int time) { // Get the vehicle speed from the user. String input; // To hold user's input. input = JOptionPane.showInputDialog("What is your vehicle's speed in miles-per-hours? "); // Convert the input to a double. speed = Integer.parseInteger(input); while (speed <= 0) { input = JOptionPane.showInputDialog("Your speed must be greater than zero. Please re-enter. "); speed = Integer.parseInteger(input); } // Get the hours traveled from the user. input = JOptionPane.showInputDialog("How many hours have you traveled, thus far? "); time = Integer.parseInteger(input); while (time <= 1) { input = JOptionPane.showInputDialog("How many hours have you traveled for? "); time = Integer.parseInteger(input); } return speed * time; } public static void displayData(int i, int speed) { JOptionPane.showMessageDialog(null, "Hour(s): Distance Traveled" + "----------------------------" + "Hour " + i + ": " + (speed * i) + " miles traveled"); } }
This post has been edited by macosxnerd101: 10 November 2011 - 08:21 PM
Reason for edit:: Please use a descriptive title | http://www.dreamincode.net/forums/topic/255208-calculate-distance-traveled/ | CC-MAIN-2016-50 | refinedweb | 279 | 51.55 |
sub parseLists
{
my $self = shift;
my $text = \$self->{_text};
# So we want to loop through the text line by line
# and be able to modify some lines,
# but we don't want to rebuild/copy the whole text.
my $lf = "\n"; # linebreak
my $lflen = length $lf; # 1
my $pos1 = 0; # left line offset
my $pos2 = 0; # right line offset (1st char after line)
my $len = 0; # line length
my $lendif = 0; # line length difference
my $inlist;
open my $fh, "<:utf8", $text; # Note how we open in UTF-8 mode
# while (<$fh>) # Gets confused when line length changes
# Using seek() is risky, because it reads bytes, not chars!
# However, substr() always counts chars, not bytes.
while (<$fh>)
{
# Get line string without newline character
my $line = substr $_, 0, -$lflen;
my $oldline = $line;
# Calculate offsets
$len = length $line;
$pos1 = $pos2;
$pos2 += $len + $lflen;
# Modify line
# START (not part of loop structure)
my $isasterisk = $line =~ m/^\* /;
my $isindented = $line =~ m/^\ /;
my $isfirst;
if (!$inlist)
{
if ($isasterisk)
{
$isfirst = 1;
$inlist = 1;
}
}
if ($inlist)
{
if (!$isindented && !$isasterisk)
{
substr $line, 0, 0, "</ul>\n";
$inlist = 0;
}
elsif ($isindented)
{
$line =~ s/^\ (.*)/<li class="nobullet">$1<\/li>/;
}
elsif ($isasterisk)
{
$line =~ s/^\* (.*)/<li>$1<\/li>/;
substr $line, 0, 0, "<ul>\n" if $isfirst;
}
}
# END
# Write new line back
substr $$text, $pos1, $len, $line;
# Calculate diff
$lendif = (length $line) - ((length $_) - ($lflen));
# Adjust our and Perl's (!) position counter
$pos2 += $lendif; # That's our counter
seek $fh, $lendif, 1; # That's from Perl / SEEK_CUR
}
}
[download]
In reply to Re^2: Where to put self-made loop logic (separate module)?
by basic6
in thread Where to put self-made loop logic (separate module)?
by basic6
My savings account
My retirement account
My investments
Social Security
Winning the lottery
A Post-scarcity economy
Retirement?! You'll have to pull the keyboard from my cold, dead hands
I'm independently wealthy
Other
Results (77 votes),
past polls | http://www.perlmonks.org/?parent=1010936;node_id=3333 | CC-MAIN-2014-42 | refinedweb | 314 | 63.53 |
No project description provided
Project description
Chepy
Chepy is a python library with a handy cli that is aimed to mirror some of the capabilities of CyberChef. A reasonable amount of effort was put behind Chepy to make it compatible to the various functionalities that CyberChef offers, all in a pure Pythonic manner. There are some key advantages and disadvantages that Chepy has over Cyberchef. The Cyberchef concept of stacking different modules is kept alive in Chepy.
There is still a long way to go for Chepy as it does not offer every single ability of Cyberchef.
Feel free to give the project a ⭐️!
Docs
Refer to the docs for full usage information
Example
For all usage and examples, see the docs.
Chepy has a stacking mechanism similar to Cyberchef. For example, this in Cyberchef:
This is equivalent to
from chepy import Chepy file_path = "/tmp/demo/encoding" print( Chepy(file_path) .load_file() .reverse() .rot_13() .base64_decode() .base32_decode() .hexdump_to_str() .o )
Chepy vs Cyberchef
Advantages
- Chepy is pure python with a supporting and accessible python api
- Chepy has a CLI
- Chepy CLI has full autocompletion.
- Supports pe, elf, and other various file format specific parsing.
- Extendable via plugins
- Infinitely scalable as it can leverage the full Python library.
- Chepy can interface with the full Cyberchef web app to a certain degree. It is easy to move from Chepy to Cyberchef if need be.
- The Chepy python library is significantly faster than the Cyberchef Node library.
- Works with HTTP/S requests without CORS issues.
Disadvantages
- Chepy is not a web app (at least for now).
- Chepy does not offer every single thing that Cyberchef does
- Chepy does not have the
magicmethod (at the moment)
Installation
Chepy can be installed in a few ways.
Pypi
pip3 install chepy # optionally with extra requirements pip3 install chepy[extras]
Git
git clone --recursive cd chepy pip3 install -e . # I use -e here so that if I update later with git pull, I dont have it install it again (unless dependencies have changed)
Docker
docker run --rm -ti -v $PWD:/data securisec/chepy "some string" [somefile, "another string"]
Standalone binary
One can build Chepy to be a standalone binary also. This includes packaging all the dependencies together.
git clone cd chepy pip install . pip install pyinstaller pyinstaller cli.py --name chepy --onefile
The binary will be in the dist/ folder.
Plugins
Check here for plugins docs
Used by
.. toctree:: :maxdepth: 3 :caption: Contents: usage.md examples.md cli.rst chepy.md core.md modules.rst extras.rst plugins.md pullrequest.md config.md faq.md Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search`
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/chepy/ | CC-MAIN-2021-04 | refinedweb | 463 | 64 |
FERROR(3) BSD Programmer's Manual FERROR(3)
clearerr, feof, ferror, fileno - check and reset stream status
#include <stdio.h> void clearerr(FILE *stream); int feof(FILE *stream); int ferror(FILE *stream); int fileno(FILE *stream);
The function clearerr() clears the end-of-file and error indicators for the stream pointed to by stream. The function feof() tests the end-of-file indicator for the stream point- ed to by stream, returning non-zero if it is set. The end-of-file indica- tor can only be cleared by the function clearerr(). The function ferror() tests the error indicator for the stream pointed to by stream, returning non-zero if it is set. The error indicator can only be reset by the clearerr() function. The function fileno() examines the argument stream and returns its in- teger file descriptor.
These functions should not fail and do not set the external variable errno.
open(2), stdio(3)
The functions clearerr(), feof(), and ferror() conform to ANSI X3.159- 1989 ("ANSI C"). The function fileno() conforms to IEEE Std 1003.1-1990 ("POS. | https://www.mirbsd.org/htman/i386/man3/fileno.htm | CC-MAIN-2015-32 | refinedweb | 180 | 65.22 |
On Oct 27, 2010, at 11:02 AM, Simon Fraser wrote: >I've worked through the bzr install[0] and finally have my own separate >install of Python 2.7 with bzr2.3, just for mailman. However, 2.7 >doesn't appear to work - should I be using Python 2.6 instead? Is there >a way to get it working with 2.7? [1] Hi Simon. I can reproduce this with Python2.7 on Ubuntu 10.10. This is caused by the locknix package, which depends on setuptools_bzr. The issues you've identified with bzr on Python2.7 are causing the setuptools_bzr dependency to fail. The upstream fix is to remove this dependency from the locknix setup.py file. I'll try to do that in the next day or so. Once I release a new locknix package, it should work again. (Side note: I want to move locknix into the flufl namespace package, so things will change a bit more at some point in the future, but I won't block a quick fix on that reorganization.) >Incidentally, are there plans to allow installs to a different location? >If you specify a prefix to install under, it checks for the presence of >the Python site-packages and fails if they're not present. Is it going >to be a requirement to install under the Python tree, or is this just >for development? Can you provide the commands that failed for you? I'd like to try to reproduce this. If you can, submit a bug report. Ideally, the 'mailman' package will be installed under site-packages (or dist-packages for the Debuntudes in the audience ;) and you'll have access to just the few command line scripts in /usr/bin. Alternatively, it should be possible to build Mailman 3 in a virtualenv to stick it anywhere you want, though I haven't tried that in a while. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: <> | https://mail.python.org/pipermail/mailman-developers/2010-October/021275.html | CC-MAIN-2019-04 | refinedweb | 341 | 75.71 |
> june 2004
Filter by Day:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
BUG: Policy "Manage individual subscriptions" implemented incorrectly
Posted by Roland at 6/30/2004 11:40:08 PM
OK, I've tried various ways in order to get your attention (MSFT team). Every attempt failed, unfortunately. Here is your bug report file. Take it or ignore it, your decision. Found at The...
more >>
Credentials
Posted by Qbee at 6/30/2004 11:07:02 PM
Hi, if I open the report manager and click on a report to open it, under which credentials is the report genererated? I have a developement computer on which I create my reports. Reporting service is installed on a developmentserver. I use an AS2k cube as a source and runs on another server. A...
more >>
Dynamic Expressions
Posted by Tim Wragg at 6/30/2004 9:42:02 PM
Hello all, With the new chart palette features in SP1 I'm trying to create a dynamic switch expression of colors that is passed from a database, but I can't get the chart to render correctly based on the expression I pass through. Can this be done? What I've done so far is create a db table o...
more >>
Time X axis in Charts
Posted by Rekha at 6/30/2004 8:51:17 PM
Hi all.. I've got a chart with a 3month time series over the X axis and at the moment each day is printed on the graph and it is not really legible..I really need to have the graph shrink to fit an A4 page and only print the first day of each week, hope that makes sense. Any help woul...
more >>
SP1a ?
Posted by BrettF at 6/30/2004 7:39:01 PM
Is Microsoft intending to release SP1a for Reporting Services? Given the issues I've read in this group, my firm is reluctant to deploy SP1. The PC we have trialled it on has had DLL and functionality issues like other people have described. SP1 has been a disaster - we await SP1a to smooth over ...
more >>
The profile for the user is a temporary profile
Posted by Vasko Peter at 6/30/2004 6:25:49 PM
Hi, I am breaking my head on one problem related to Microsoft Reporting Services and I would be very very thankful if you could point me to the right place. I had my MRS running properly but not anymore ..:( now I can not load the Report Manager page any more and I get this error: ....CON...
more >>
Slow report with cube
Posted by burt_5920 NO[at]SPAM yahoo.com at 6/30/2004 5:13:34 PM
I have an Analysis Services OLAP cube with about 40,000 rows in the fact table, and a dimension with four levels- Office, Employee, Client, and Matter. Browsing the cube is instant, but RS reports created with the cube take about 30 seconds to render, and I'm using a parameter so am only deal...
more >>
Groups with Sub-groups. Can't get the sub-group to work properly.
Posted by Roger Twomey at 6/30/2004 4:52:52 PM
I am trying to build a report with grouping and I am not having success. I use a stored procedure to return these fields: PromptBnkMID (UniqueID of a group of questions) PromptBankTitle (Title of a group of questions) PromptID (UniqueID of a question or prompt) PromptText (The question) ...
more >>
Don't see what you're looking for? Search DevelopmentNow.com.
SQL Setup failed to execute command for server configuration
Posted by Halstein Tonheim at 6/30/2004 4:19:56 PM
I am trying to install Reporting Services on an Windows 2003 and get the follwing error message: "SQL Setup failed to execute a command for server configuration. The error was: Windows NT user or group 'webserv\webserv_anon' not found. Check the name again. Refer to the Server error logs ...
more >>
RS - Report Parameters
Posted by Paul D. Johnson at 6/30/2004 4:08:57 PM
I have many reports which are currently run from stored procedures which take multiple parameters, but all reports include Client and User parameters to limit the data returned to only that viewable by the User and only data the for those Clients (string list of integers (i.e., '1,2,5,7')). ...
more >>
RS - Stored Procedure and Report Wizard - not possible?
Posted by Paul D. Johnson at 6/30/2004 4:03:03 PM
I would like to use the Report Wizard Template in combination with data coming from a stored procedure, but cannot seem to find a way to get it to work yet. If I define a new report and add the data source and define the stored procedure and its parameters to RS, then I cannot find a way...
more >>
RS - How-To: Solutions and Projects and Reports
Posted by Paul D. Johnson at 6/30/2004 3:58:11 PM
We are contemplating moving about 200 existing reports into SQL Server Reporting Services and need to know the best way to do some basic setup tasks. We will have up to 5 Report Developers working on this project and need to know the best way to set up the Solution / Project / Report envir...
more >>
Use Custom Assemblies/Objects as a data source?
Posted by kbradfor at 6/30/2004 3:54:02 PM
We would like to move all our reports to Reporting Services...however, some of our current reports call the objects in our in-house developed assemblies rather than the calling the database directly (to use the associated business logic). Is it possible to use these objects (.Net assemblies writt...
more >>
number formating
Posted by Jessica C at 6/30/2004 2:42:06 PM
I have this number 1556.2310 and I want to format it to this $1,556.2310. Any suggestions? Thanks! ...
more >>
Deploy From VS.NET after applying SSL error
Posted by tfogel NO[at]SPAM idmi.com at 6/30/2004 2:35:42 PM
We installed Reporting Services on a machine that also houses the SQL server and IIS originally without an SSL certificate. (RS SP1 is applied) We have now added the SSL Certificate and when we try to deploy a report from VS.NET we get the following error: The underlying connection was ...
more >>
use format function in a cell
Posted by Dushan Bilbija at 6/30/2004 2:01:01 PM
hello how would i embed a format function inside a cell equation? by itself the format function is: format(total,"#,#") but in an equation, i'd like to use: "Total Value is: " & format(total,"#,#") but the dbl quotes generate compilation errors. as do single quotes. so... how to do it?...
more >>
Subscription :- Distribution List
Posted by Vijay Tripathi at 6/30/2004 1:43:02 PM
Do Subscription Support Distribution list? I tried to send e-mail to a group called "help desk" , but the error message says the one or more email address is not correct. all the e-mail address are valid with in the group. Thanks Vijay Tripathi....
more >>
Images in header shifting...
Posted by Lisa at 6/30/2004 12:57:53 PM
I have placed various images in my page header. I am finding that on the first page, my image is shifting about an inch to the left, but on the second page, it is placed correctly. Has anyone found this to be true or know how to fix it? Thanks, Lisa ...
more >>
Matrix PDF
Posted by AHH at 6/30/2004 12:42:03 PM
Is there some sort of calculation to make a matrix export to pdf without page breaks? I have 2 dynamic rows 1 dynamic column 1 value field Please help thanks ...
more >>
Third desperate attempt for "Browser role and subscription problem"
Posted by Roland at 6/30/2004 12:40:32 PM
Dear MSFT-Team, I believe it's hard to follow this dynamic group: Too many problems, indeed... Despite that would you mind to comment on this? Thanks in advance. Roland <snip> I refer to my posting news:uVSl98SXEHA.1888@TK2MSFTNGP11.phx.gbl Q: Is there a need to have the policy "Manage a...
more >>
how to display ReportParameters in report
Posted by stupy1 NO[at]SPAM hotmail.com at 6/30/2004 12:12:01 PM
Hi all, I'm populating a dropdown to let users select a report parameter. I want to be able display in the report what the user selected. More specifically the label of the selected item from the drop down. How can I do this? thanks, Justin...
more >>
Error in sample PrinterDeliveryProvider.cs?
Posted by Stephen Walch at 6/30/2004 11:46:04 AM
The Deliver method in the sample returns false if the delivery is successful and never sets the Notification.Retry flag. Both seem to contradict the documentation. Please confirm that the sample is wrong and the documentation is right. Thanks, - Steve ...
more >>
Symmetric Key
Posted by Ed Sitz at 6/30/2004 11:17:45 AM
Have installed Reporting Services on Windows 2000 Server with SP4, SQL 2000 Standard with SP3a, and Reporting Services Enterprise edition. Reporting Services is running using a domain account. We haven't had any problems with reports running in the browser, however Scheduled Reports aren't ...
more >>
SQL Server Express & Client/Server reporting
Posted by pdxfilter-google NO[at]SPAM yahoo.com at 6/30/2004 11:12:48 AM
We are a small ISV that would like to incorporate simple reporting into our SQL Server based WinForms app. We have no need for the enterprise features of Reporting Services (scheduling, etc.) and only wish to preview and/or print reports real-time. Up until a few days ago, we were under the i...
more >>
SECOND POST: Auto-sizing the Page Header
Posted by Lisa at 6/30/2004 10:43:14 AM
Sorry to ask this again, but I didn't get an answer and I am hoping someone out there might know... Is there a way to auto-size the height of the page header depending on it's contents? For example, I have created a page header with a logo that appears on the first page (1 inch x 1 1/2 inch...
more >>
Printer icon?
Posted by Lisa at 6/30/2004 10:39:43 AM
Is it possible to put a "Printer icon" on the top of every report. Then whenever readers want to print report, this printer icon can reduce the report size and print out the report in one page? Or this printer icon can set the printer layout to be landscape or portrait? ...
more >>
Errors with Service Pack 1
Posted by G. Schmelzer at 6/30/2004 10:38:31 AM
Hi Ng, i have some problems with Rs Sp 1. I installed Sp 1 on the server and on the pc where vs.net is installed. Now I got the following Problem: I design a report and go to the preview mode. If an error is found, for example a syntax problem in an expression, the report isn't displayed. No ...
more >>
Permissions Question
Posted by news.microsoft.com at 6/30/2004 10:38:23 AM
When I hit the Report Manager website on one of my installations and log-in (With a user name that has System Administrator and System User roles) I am presented with the Report Manager screen but am missing most of the controls: I have the Home|My Subscriptions, ... in the upper right, but the ...
more >>
Unable to find Reporting Services WMI namespace on <machinename>
Posted by GDSmith at 6/30/2004 10:01:54 AM
Am getting the message "Unable to find Reporting Services WMI namespace = on <machinename>. Reporting Services may not be installed." when = attempting activation of a newly-installed instance of Report Server = from the existing Report Server. Installing RS and pointing it to a = new databas...
more >>
Formatting PDF (portrait vs. landscape)
Posted by G at 6/30/2004 9:45:06 AM
When calling a report, in the URL i'm using rs:Format=PDF to render the report directly to PDF. I'd like the PDF to automatically format the report in landscape mode. Is this possible? Thanks, Brian ...
more >>
Export to Excel failed with System.OutOfMemoryException
Posted by Keith Kratochvil at 6/30/2004 9:26:46 AM
Server: SQL Server 2000 Ent running on Win2k AS, Ent Edition of Reporting Services + SP1 my workstation: Win2k SP4, Microsoft Development Environment v7.1.3088, Reporting Services Ent Ed + SP1. I am trying to export 55,804 rows to Excel. The table that I am exporting from has 11 columns. ...
more >>
error when trying to restore a sql7 master database
Posted by Francosi Thibault at 6/30/2004 7:11:31 AM
I'm trying to restore a sql7 database on a NT4 Entreprise SQL server. I rebuild the master database with Rebuildm.exe and I start the single user mode with d:\mssql7 \binn\sqlservr.exe -m When I start the restore with Brighstore ArcServe, I have this error: Error: 605. Severity: 21, ...
more >>
Exporting to Excel 2000
Posted by James at 6/30/2004 5:21:01 AM
IT support won't consider giving Excel XP or 2003 to users at this stage. Other than letting them choose CSV as export format for the report and loading the CSV into Excel 2000, does anybody have an idea how I could easily deliver Reports to Excel 2000?...
more >>
SVG for images in reports
Posted by jcaja NO[at]SPAM bool-e.com at 6/30/2004 4:17:50 AM
Hi! Will RS render SVG images in a future release? I think this would be great! Thx!...
more >>
Zoom to Page Width in URL parameter
Posted by jcaja NO[at]SPAM bool-e.com at 6/30/2004 4:14:19 AM
Hi! I'm working with Reporting Services (Spanish) with SP1 (Spanish). I'm trying to pass the parameter Zoom with the option "Page Width" in the URL, but RS always renders the report in 100%. It seems like it doesn't catch this option for the parameter Zoom. I'm following the example in htt...
more >>
Reporting services + OLAP MDX
Posted by Jabadoo at 6/30/2004 3:24:01 AM
Suppose : I have a dimension in a Cube, called DepRec. DepRec is a Parent-Child dimension with extra member properties(descr, type). I build a report with an mdx-query. Now, in that report I need to access the descr member value instead of the code(value), to show on my report. Any ideas ? Thank...
more >>
Matrix sub-total column width
Posted by John H at 6/30/2004 1:45:01 AM
I have a matrix report which has a row total at the end e.g. col1 col2 total 80.1 60.4 140.5 Is there a way in which I can make the total column wider than the data cells in the matrix or alternatively can I change the total format to have no decimal places? At the moment its width seems...
more >>
Excel export exception in reporting services.
Posted by Albert at 6/30/2004 12:37:01 AM
Hi, I'm gettting this error when exporting 1 particular report to excel: Microsoft.ReportingServices.ExcelRendering Specified cast is not valid. What is the possible causes for this error? Is it to do with how the fields were constructed/typed? Thanks, Albert. ...
more >>
SP1 Excel Export bug
Posted by Chris Botha at 6/29/2004 11:44:57 PM
I am pretty impressed with the export format, it looks just like the HTML/PDF. I have a simple report, it has a list and below it a table. There are two data sets, the one for the list always returns one record, and the other for the table returns a bunch (currently three). The problem is with ...
more >>
How to submit a report parameter?
Posted by zxytek at 6/29/2004 11:37:01 PM
Microsoft® SQL Server™ Reporting Services provides a single entry point to the full functionality of the report server: the Reporting Services Web service. The Web service uses Simple Object Access Protocol (SOAP) over HTTP and acts as a communications interface between client programs and the re...
more >>
NOT BottomN
Posted by Kevin Wilson at 6/29/2004 11:26:02 PM
Does anybody know how to create a filter expression where the row is NOT BottomN 1? Cheers...
more >>
Export to PDF
Posted by olap NO[at]SPAM gmsbv.nl at 6/29/2004 10:14:24 PM
Hi, Export to PDF of the sample Reporting Services Report, with Euro sign is ok for the chart but for the table it is cripted. Any idea if this is caused by the euro sign ? Marco...
more >>
What is the equivalent for Crystal Report Global Variables?
Posted by Kevin Buchanan at 6/29/2004 9:27:01 PM
What is the equivalent for Crystal Report Global Variables in Report Services? -- -Kevin...
more >>
Setting Null/Blank values in Matrix
Posted by Dave Clark at 6/29/2004 6:07:34 PM
is there a way to set blank values in a Matrix so it defaults to 0, so it won't appear blank? ...
more >>
How to move/deploy Reports from one server to another
Posted by RA at 6/29/2004 5:20:10 PM
I have some reports(.rdl) files in a solution. It is on MyServer1. And I can see using Internet Explorer; entering Location as Is there a way I can deploy it to other Computer MyServer2 in particular location and then run using Any suggest...
more >>
PDF Language/Font Problem
Posted by Mick Horne at 6/29/2004 4:28:02 PM
Anyone come across this one: I have a report that has to print out its figures in Euros or Dollars - I pass in the locale code as a parameter (DE-DE or EN-US) respectively and change the language of the report to one of those values via an expression. The report runs, looks fine on any PC ov...
more >>
oracle problems
Posted by Dushan Bilbija at 6/29/2004 4:20:58 PM
i'm having trouble accessing oracle data (gee... what a shock) i get the very *helpful* error message: "MinimumCapacity must be non-negative". no idea what it means. i'm running a sql statement which calls a stored proc. it runs fine, gets the data fine (i can't use the wizard because of the a...
more >>
Conditional visibility
Posted by Walter Lundgren at 6/29/2004 4:01:22 PM
I need to pass a parameter to determine the visibility of a remark. If the parameter defines it to be visible, I also need to toggle the visibility of the remark when the user expands or collapses the containing group. I do not want the visibility turned on when the parameter wants it always i...
more >>
Report data source choices
Posted by mark NO[at]SPAM markhibberdandassociates.com at 6/29/2004 3:58:22 PM
Hi all, I'm new to Reporting Sevices, and we're currently in the early stages of designing a Data Warehouse / Data Mart. Initially, the DW will only be used as a data source for reports developed with Reporting Services. My question is twofold: first, is it possible to use Analysis Service...
more >>
Cannot get past error 25619
Posted by Garick Newtzie at 6/29/2004 3:47:20 PM
I've tried the suggestions in the article:;EN-US;Q297989 I've made sure dts is running. SQL server 2000 windows 2000 server non domain controller AD domain. <Func Name='Do_rsManagerShortcut'> <EndFunc Name='Do_rsManagerShortcut' Return='0'...
more >>
·
·
groups
Questions? Comments? Contact the
d
n | http://www.developmentnow.com/g/115_2004_6_0_0_0/sql-server-reporting-services.htm | crawl-001 | refinedweb | 3,347 | 72.05 |
Code. Collaborate. Organize.
No Limits. Try it Today.
I have been doing TDD (Test Driven Development/Design) for quite some now, and I have found this to be pretty useful. Writing test is always kind of pain if you don't get it right. The problem with TDD seems it's more oriented towards writing tests whereas the developer may be more concerned with the design and behaviour of the system.
The problem per se doesn't lie with TDD, but more with the mindset. It's about getting our mind in the right spot. So, here comes BDD a.k.a. Behaviour Driven Development. These are not just terminology changes, but it's about a change in the way we write our tests or rather specification in terms of BDD. Without further ado, let's dig deeper into this unchartered territory.
First things first. I don't wish to replicate things which are already published. So, the best place to get to know some theory is the Wiki: [^].
But for the sake of completeness, here is a short summary. BDD is an agile software development technique that encourages collaboration between developers, QA, and non-technical or business participants in a software project. It's more about business specifications than about tests. You write a specification for a story and verify whether the specs work as expected. The main features of BDD development are outlined below:
Check the References section 'An interesting read', for a more detailed explanation of each of the above points. We will be using the Membership Provider that comes with the default ASP.NET 2 MVC application (ASP.NET 1.0 MVC should also work) to write our stories that will revolve around "Registering a new user" for the site.
But before that, let's have a quick look at the tools that we will be using for this sample story.
The following are the list of tools that I will be using for this demonstration. Please set these tools up before proceeding or trying out. The download is self-contained will all the dependencies. But to get the code template for BDD, you have to install SpecFlow.
SpecFlow is the framework that supports BDD style specifications for .NET.
We will be using the classic NUnit for writing our unit tests.
Moq is an excellent mocking framework for .NET.
SpecFlow is a BDD library/framework for .NET that adds capabilities that are similar to Cucumber. It allows to write specification in human readable Gherkin format. For more info about Gherkin, refer Gherkin project.
Gherkin is the language that Cucumber understands. It is a Business Readable, Domain Specific Language that lets you describe software behaviour without detailing with how that behaviour is implemented. It's simply a DSL for describing the required functionality for a given system. This functionality is broken down by feature, and each feature has a number of scenarios. A scenario is made up of three steps: GIVEN, WHEN, and THEN (which seems to be somewhat related to the AAA (Arrange, Act, Assert) syntax of TDD.
For more about Gherkin, refer to the Gherkin project.
We have named the Class Library Project as "SpecFlowDemo.Specs" to set the mindset that we are writing specifications for our business features.
Our intent is to get this nicely formatted report of our specifications:
Let's have a quick look at the UI for this.
The first step is to add a "SpecFlow" feature. We will keep all our features within the Feature folder in the Spec project that is created above. The feature file is where we're going to define our specifications. It's a simple text file with a custom designer which generates the plumbing spec code.
Let's add a new feature. Right click on the "Features" folder and "Add New" Item, and select "SpecFlowFeature" as shown below:
This creates a new file "RegisterUser.feature" and a designer file "RegisterUser.feature.cs". The default content of this file is shown below. This is in the Gherkin format.
Feature: Addition
In order to avoid silly mistakes
As a math idiot
I want to be told the sum of two numbers
@mytag
Scenario: Add two numbers
Given I have entered 50 into the calculator
And I have entered 70 into the calculator
When I press add
Then the result should be 120 on the screen
The "RegisterUser.feature.cs" file has the plumbing code to automate spec creation using NUnit (in this case). This file should not be manually edited.
The above template says what we we are trying to do, in this case, "Addition", and then there are different scenarios to support the feature.
Every time you save this file, you are invoking a custom tool "SpecFlowSingleFileGenerator". What it does is parse the above file and create the designer file based on the selected unit test framework.
Rather than explaining the above feature, let's dive into our first test case for "Registering a new user".
Feature: Register a new User
In order to register a new User
As member of the site
So that they can log in to the site and use its features
We have outlined our basic requirement in the above feature. Let's have a look at different scenarios that the application may have to deal with, with respect to the above feature.
Type/copy the below scenario to the .feature file.
Scenario: Browse Register page
When the user goes to the register user screen
Then the register user view should be displayed
Compile the spec project. Start up the NUnit GUI and open "SpecFlowDemo.Specs.dll". The first thing, change the following settings from Tools->Settings->Test Loader->Assembly Reload.
Ensure the following choices are checked:
Doing this will automatically execute your tests whenever you compile them.
Now when you execute this test, you should be presented with the following screen. Click on the "Text Ouput" tab in the NUnit GUI.
You can see two "StepDefinitions" which the SpecFlow generated based on the "Scenario" specified in the feature file.
Now in Visual Studio->Your Spec Project->Add a New Class File. In our case, the name is "RegisterUserSteps.cs". This will be your spec class.
Copy the two methods to this file and delete out the line "ScenarioContext.Current.Pendin()".
ScenarioContext.Current.Pendin()
The full source code to our first scenario is shown below. There will be a couple of times I will be showing the full source code for easy understanding, and the rest of the times, I will only show the essential code snippet:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using TechTalk.SpecFlow;
using SpecFlowDemo.Controllers;
using SpecFlowDemo.Models;
using NUnit.Framework;
using System.Web.Mvc;
using Moq;
using System.Web.Security;
namespace SpecFlowDemo.Specs
{
[Binding]
public class RegisterUserSteps
{
ActionResult result;
AccountController controller;
[When(@"the user goes to the register user screen")]
public void WhenTheUserGoesToTheRegisterUserScreen()
{
controller = new AccountController();");
}
}
}
Compile the test and run it in NUnit. You will be presented with the below failure:
The reason for the error is, the "Register" method uses the MembershipService which we need to mock out. Have a look at the Register method of AccountController:
MembershipService
AccountController
public ActionResult Register()
{
ViewData["Title"] = "Register";
ViewData["PasswordLength"] = MembershipService.MinPasswordLength;
return View();
}
To make this work, we need to add an overloaded constructor in the AccountController which will take in the required dependencies.
public AccountController(IFormsAuthenticationService formsService,
IMembershipService memberService)
{
FormsService = formsService;
MembershipService = memberService;
}
Here' is the modified test along with the Moq objects:
[Binding]
public class RegisterUserSteps
{
ActionResult result;
AccountController controller;
Mock<imembershipservice> memberService = new Mock<imembershipservice>();
Mock<iformsauthenticationservice> formsService =
new Mock<iformsauthenticationservice>();
[When(@"the user goes to the register user screen")]
public void WhenTheUserGoesToTheRegisterUserScreen()
{
controller = new AccountController(formsService.Object, memberService.Object);");
}
}
Note in the above test we don't have the "Given" criteria. This is optional though. We have mocked Membership and FormsAuthenticationService and passed to the AccountController. Here is the result of the test:
Membership
FormsAuthenticationService
Now we have a passing test.
Scenario: On Successful registration the user should be redirected to Home Page
Given The user has entered all the information
When He Clicks on Register button
Then He should be redirected to the home page
The test code is described below:
[Given(@"The user has entered all the information")]
public void GivenTheUserHasEnteredAllTheInformation()
{
registerModel = new RegisterModel
{
UserName = "user" + new Random(1000).NextDouble().ToString(),
Password = "test123",
ConfirmPassword = "test123"
};
controller = new AccountController(formsService.Object, memberService.Object);
}
[When(@"He Clicks on Register button")]
public void WhenHeClicksOnRegisterButton()
{
result = controller.Register(registerModel);
}
[Then(@"He should be redirected to the home page")]
public void ThenHeShouldBeRedirectedToTheHomePage()
{
var expected = "Index";
Assert.IsNotNull(result);
Assert.IsInstanceOf<redirecttorouteresult>(result);
var tresults = result as RedirectToRouteResult;
Assert.AreEqual(expected, tresults.RouteValues["action"]);
}
Scenario: Register should return error if username is missing
Given The user has not entered the username
When click on Register
Then He should be shown the error message "Username is required"
[Given(@"The user has not entered the username")]
public void GivenTheUserHasNotEnteredTheUsername()
{
registerModel = new RegisterModel
{
UserName = string.Empty,
Password = "test123",
ConfirmPassword = "test123"
};
controller = new AccountController(formsService.Object,
memberService.Object);
}
[When(@"click on Register")]
public void WhenClickOnRegister()
{
result = controller.Register(registerModel);
}
[Then(@"He should be shown the error message ""(.*)""")]
public void ThenHeShouldBeShownTheErrorMessageUsernameIsRequired(string errorMessage)
{
Assert.IsNotNull(result);
Assert.IsInstanceOf(result);
Assert.IsTrue(controller.ViewData.ModelState.ContainsKey("username"));
Assert.AreEqual(errorMessage,
controller.ViewData.ModelState["username"].Errors[0].ErrorMessage);
}
Some points of interest in this test case: notice the (.*) expression in the "Then" part. This allows you to pass parameters to the test. In this case, the parameter is "Username is required", which is passed form the "feature" file.
Please go through the source to find the complete set of test cases. Hope I was able to touch base on these excellent topics.
As a final note, to get the *HTML* output, do the following steps as shown in the figure below:
Open NUnit GUI and go to Tools->Save results as XML. Give a name to the file and save it in your project location. Then, do the following external tool setting for the VS IDE:
To get the *HTML* report, click on "SpecFlow" from the Tools menu.
Here is the location of my NUnit XML file:
Hope you find this useful. I will update this with more improvements and test cases. For the basics of TDD and BDD, there are numerous articles on CodeProject for | http://www.codeproject.com/Articles/82891/BDD-using-SpecFlow-on-ASP-NET-MVC-Application?fid=1572935&df=90&mpp=25&noise=3&prof=True&sort=Position&view=None&spc=Relaxed | CC-MAIN-2014-15 | refinedweb | 1,730 | 57.16 |
$ cnpm install apollo-client
Ap.
Apollo Client also has view layer integrations for all the popular frontend frameworks. For the best experience, make sure to use the view integration layer for your frontend framework of choice.
Apollo Client can be used in any JavaScript frontend where you want to use data from a GraphQL server. It's:
Get started on the home page, which has great examples for a variety of frameworks.
# installing the preset package npm install apollo-boost graphql-tag graphql --save # installing each piece independently npm install apollo-client apollo-cache-inmemory apollo-link-http graphql-tag graphql --save
To use this client in a web browser or mobile app, you'll need a build system capable of loading NPM packages on the client. Some common choices include Browserify, Webpack, and Meteor 1.3+.
Install the Apollo Client Developer tools for Chrome for a great GraphQL developer experience!
You get started by constructing an instance of the core class
ApolloClient. If you load
ApolloClient from the
apollo-boost package, it will be configured with a few reasonable defaults such as our standard in-memory cache and a link to a GraphQL API at
/graphql.
import ApolloClient from 'apollo-boost'; const client = new ApolloClient();
To point
ApolloClient at a different URL, add your GraphQL API's URL to the
uri config property:
import ApolloClient from 'apollo-boost'; const client = new ApolloClient({ uri: '' });
Most of the time you'll hook up your client to a frontend integration. But if you'd like to directly execute a query with your client, you may now call the
client.query method like this:
import gql from 'graphql-tag'; client.query({ query: gql` query TodoApp { todos { id text completed } } `, }) .then(data => console.log(data)) .catch(error => console.error(error));
Now your client will be primed with some data in its cache. You can continue to make queries, or you can get your
client instance to perform all sorts of advanced tasks on your GraphQL data. Such as reactively watching queries with
watchQuery, changing data on your server with
mutate, or reading a fragment from your local cache with
readFragment.
To learn more about all of the features available to you through the
apollo-client package, be sure to read through the
apollo-client API reference.
Read the Apollo Contributor Guidelines.
Running tests locally:
npm install npm test
This project uses TypeScript for static typing and TSLint for linting. You can get both of these built into your editor with no configuration by opening this project in Visual Studio Code, an open source IDE which is available for free on all platforms.
If you're getting booted up as a contributor, here are some discussions you should take a look at: | https://npm.taobao.org/package/apollo-client | CC-MAIN-2020-10 | refinedweb | 457 | 60.24 |
An open-source library for Java developers
Jean-Marie is a senior software engineer at Raytheon. He can be contacted at [email protected]
The Java Addition to the Default Environment (J.A.D.E. for short) is an open-source library (available at) that fills gaps in the JDK core library. Among the extensions J.A.D.E. 1.0 includes are XML support for all Java classes, generic matrix classes, automatic error calculation on all operations (including numeric errors), more than 400 measurement units for almost 40 different quantities, automatic unit simplification and verification, and support for enumerated types.
The reasons I developed the J.A.D.E. package include:
- To eliminate more interface errors. If a method expects a Length, it gets a Length not some float type that is supposed to be in inches. (This is the kind of problem that led to the 1999 Mars Climate Orbiter fiasco.)
- Quantities for which precision is unknown are not very useful (especially if they are the solutions of a possibly singular system of linear equations).
- To eliminate conversion errors. The U.K. gallon is different from the U.S. gallon, which is also different for liquid or dry products.
- To introduce a Matrix class generic enough to resolve systems of linear equations involving any element: Real, Complex, Big Numbers, Quantities, and the like.
In this article, I'll focus on one of J.A.D.E.'s most useful components XML support.
XML and Java
In Java and XML (O'Reilly & Associates, 2000), Brett McLaughlin wrote: "Java revolutionized the programming world by providing a platform-independent programming language. XML takes the revolution a step further with a platform-independent language for interchanging data." The question programmers are faced with is how to put the two together.
There are already many open-source libraries available to parse XML documents. For example, using JDOM () you can represent XML documents as Java objects. Plus, Sun is currently leading a project that is code-named "Adelard" to generate Java classes from XML Schema.
Unfortunately, the starting point for all of these tools is the XML document. More often than not, it is the other way around you have Java classes and you want to provide persistency to your existing classes using XML. Here's how J.A.D.E. addresses this particular issue.
Creating Objects from XML
Say you wrote a package to represent two-dimensional areas. Your package possibly includes classes such as Point, Ellipse, Polygon, and Area. As an enhancement, you decide to add the capability to load an area from an XML file.
Using J.A.D.E., all you need to do is to provide an XML constructor for all XML elements involved in the construction process (in this case the geometrical objects). An XML constructor is nothing more than a Java constructor with two parameters--one for the attributes and the other for the child elements. When the XML document is parsed, your constructor is called with the parameters set according to the element's attributes and content. The attributes are simple properties of the object being created (represented by a String), whereas the content is the list of all nested elements that have been created recursively.
Once you have written the XML constructors (see Listing One), this code lets you create an area from a file.
Constructor constructor = new Constructor("org.apache.xerces.parsers.SAXParser");
Area area = (Area) constructor.create(file);
As you can see, you need a SAX 2.0 parser at run time. The binary distribution of J.A.D.E. includes a subset of the Xerces 1.2.0 parser developed by the Apache Software Foundation, but any SAX 2.0 parser will do (for a list of SAX 2.0 parsers go to). SAX 1.0 parsers will not work because J.A.D.E. supports namespaces and SAX 1.0 does not. Namespaces map directly to Java packages. Listing Two shows the XML representation of an area without a namespace and Listing Three shows the same area represented using a default namespace; both can be used interchangeably. Also, you may notice that because the class Area may contain instances of itself, the XML representation of an area may include nested area elements.
How Does it Work?
Under the hood, the document handler uses the Reflection API to dynamically create instances of the elements being parsed. How this API works is beyond the scope of this article (for more information read the Reflection API Tutorial at).
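To make this concrete, here is a minimal sketch of the kind of reflective call the handler performs once the parser reports an element. The ElementFactory class and its create method are hypothetical (the actual J.A.D.E. internals are not shown here); only the Attributes and Elements classes from the listings are assumed.

// Hypothetical helper, not the actual J.A.D.E. implementation.
import com.dautelle.xml.*; // Attributes, Elements.

class ElementFactory {
    static Object create(String className, Attributes attributes,
            Elements content) throws Exception {
        // An element name such as "com.dautelle.geom2d.Polygon"
        // maps directly to a Java class name.
        Class elementClass = Class.forName(className);
        // Look up the two-parameter XML constructor; the reflection class
        // is fully qualified to avoid clashing with J.A.D.E.'s own
        // Constructor class used earlier.
        java.lang.reflect.Constructor xmlConstructor =
            elementClass.getConstructor(
                new Class[] {Attributes.class, Elements.class});
        // Invoke it with the attributes and the child elements that
        // have already been created recursively.
        return xmlConstructor.newInstance(
            new Object[] {attributes, content});
    }
}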
It is interesting to note that the constructive approach used by J.A.D.E. works even on immutable objects. Another approach using an interface (the factory method pattern) was studied but rejected because of its lack of support for immutable objects.
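For example, an immutable variant of Point (hypothetical, and again assuming the Attributes and Elements classes from the listings) can assign its final fields directly in the XML constructor, something a factory method operating on an already created instance could not do:

// Hypothetical immutable variant of Point; all fields are final.
public class ImmutablePoint {
    private final double x;
    private final double y;
    // XML constructor: the final fields are assigned exactly once.
    public ImmutablePoint(Attributes attributes, Elements content) {
        x = attributes.getDouble("x");
        y = attributes.getDouble("y");
    }
}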
Saving Objects in XML
In addition to converting XML documents to Java objects, you can also do the reverse. Using J.A.D.E., the instances of a class can be saved in XML format if the class implements the interface com.dautelle .xml.Representable. Basically, the class has to provide two methods: getAttributes() and getContent().
It is up to you to decide what is going to be an XML attribute and what is going to be a child element. Don't forget, however, that if the XML representation of your class is consistent with its XML constructor, then you will be able to save and retrieve any of your Java objects. Listing Four shows the implementation of com.dautelle .xml.Representable for geometrical objects.
Writing a representable object is a two-step process. First, you will need to create an ObjectWriter. This can be done via:
ObjectWriter ow = new ObjectWriter();
Or if you want to use namespaces, you may pass the mapping between the package names and the namespace prefix as argument:
Properties namespaces = new Properties();
namespaces.setProperty("com.dautelle.geom2d", "geom2d");
ObjectWriter ow = new ObjectWriter(namespaces);
The second step is to perform the actual writing. The XML encoding to be used depends on the output type:
- UTF-8 encoding (1 byte per character) for java.io.OutputStream:
FileOutputStream out = new FileOutputStream(file);
ow.write(area, out); // UTF-8 encoding.
- UTF-16 (2 bytes per character) for java.io.Writer:
StringWriter sw = new StringWriter();
ow.write(area, sw); // UTF-16 encoding.
Conclusion
While Java has an excellent API, you cannot always count upon Sun to offer everything you need (or you may wait a very long time). The J.A.D.E. library aims to complement the core JDK. It is open source and will hopefully grow with the contribution of all of you who want to continue to make Java a better platform.
DDJ
Listing One
package com.dautelle.geom2d; public class Point { double x; double y; // XML constructor. public Point(Attributes attributes, Elements content) { x = attributes.getDouble("x"); y = attributes.getDouble("y"); } } public abstract class Surface {} public class Ellipse extends Surface { Point center; double width; double height; // XML constructor. public Ellipse(Attributes attributes, Elements content) { center = (Point) content.get(0); width = attributes.getDouble("width"); height = attributes.getDouble("height"); } } public class Polygon extends Surface { Point [] vertices; // XML constructor. public Polygon(Attributes attributes, Elements content) { vertices = new Point[content.size()]; content.toArray(vertices); } } public class Area extends Surface { Surface [] surfaces; // XML constructor. public Area(Attributes attributes, Elements content) { surfaces = new Surface[content.size()]; content.toArray(surfaces); } }
Listing Two
<?xml version='1.0'?> .Polygon> <com.dautelle.geom2d.Point <com.dautelle.geom2d.Point <com.dautelle.geom2d.Point </com.dautelle.geom2d.Polygon> .Area> </com.dautelle.geom2d.Area>
Listing Three
<?xml version='1.0'?> <Area xmlns="java:com.dautelle.geom2d"> <Ellipse width="1.0" height="20.0"> <Point x= "0.0" y="0.0"/> </Ellipse> <Polygon> <Point x= "-1.0" y="-1.0"/> <Point x= "0.0" y="1.0"/> <Point x= "1.0" y="-1.0"/> </Polygon> <Area> <Ellipse width="1.0" height="20.0"> <Point x= "0.0" y="0.0"/> </Ellipse> </Area> </Area>
Listing Four
package com.dautelle.geom2d; import com.dautelle.xml.*; public class Point extends Representable { double x; double y; public Attributes getAttributes() { Attributes attributes = new Attributes(); attributes.add("x", x); attributes.add("y", y); return attributes; } public Representable[] getContent() { return null; } } public abstract class Surface extends Representable {} public class Ellipse extends Surface { Point center; double width; double height; public Attributes getAttributes() { Attributes attributes = new Attributes(); attributes.add("width", width); attributes.add("height", height); return attributes; } public Representable[] getContent() { return new Representable[] { center }; } } public class Polygon extends Surface { Point [] vertices; public Attributes getAttributes() { return null; } public Representable[] getContent() { return vertices; } } public class Area extends Surface { Surface [] surfaces; public Attributes getAttributes() { return null; } public Representable[] getContent() { return surfaces; } } | http://www.drdobbs.com/jvm/jade-the-java-addition-to-the-default-en/184404483 | CC-MAIN-2018-51 | refinedweb | 1,434 | 50.53 |
I'd like a method
t.find_first [:a, :b, :c], [:d, :e]
t
t.try(:[], :a).try(:[], :b).try(:[], :c) || t.try(:[], :d).try(:[], :e)
def find_first t, *keysets
keysets.each do |keys|
val = val || keys.inject(t){ |h, key| h.try(:[], key) }
end
end
There is no reason to create a special method for this.
Ruby < 2.3
value = t.try(:[], :a).try(:[], :b).try(:[], :c) value ||= t.try(:[], :d).try(:[], :e)
Ruby >= 2.3:
value = t&.[](:a)&.[](:b)&.[](:c) value ||= t&.[](:d)&.[](:e)
And while this question was posed rather obliquely, it looks from the method signature that you might be trying to walk into a nested hash like a params object.
If that is the case, you should take a look at Hash#dig for this. You can always monkeypatch it in if you aren't on 2.3
value = t.dig(:a, :b, :c) value ||= t.dig(:d, :e)
Another pattern for walking into hashes without
Hash#dig is something like this:
params.fetch(:a, {}).fetch(:b, {})[:c] | https://codedump.io/share/64GB9iWPNRVa/1/how-to-chain-a-dereference-in-ruby | CC-MAIN-2017-39 | refinedweb | 172 | 89.04 |
Providing Technology Training and Mentoring For Modern Technology Adoption
In this tutorial, you will create a React application that uses React Hooks to manage state and handle activities. Please refer to the tutorial Setting Up a React Environment to set up a development environment for React before you start working on this tutorial.
For this tutorial, we will be using a node.js based utility named "create-react-app" to create the development setup with the following capabilities:
ES6 & JSX Transpilation.
Module export/import support.
WebPack based development server.
Auto updating of the application in the browser.
Note that create-react-app will not work with earlier versions of node.js or npm.
1. Open a command prompt . Create C:\ReactWork directory and navigate to the C:\ReactWork directory.
2. Check the version of node.
node --version
The node version needs to be 8.10 or greater. If it is not then you need install a new version that complies with this requirement before going further.
check the version of npm.
npm --version
The npm version needs to be 5.6 or greater. If it is not then you need install a new version that complies with this requirement before going further.
3. We will use the 'npx' utility installed along with node which allows us to run create-react-app without first installing it on the local machine.
npx create-react-app react-hooks-app
This command may take several minutes to complete.
The command will create a 'react-hooks-app' directory under ReactWork
4. Navigate into the new directory
cd react-hooks-app
5. Run the application with the following command:
npm start
This starts the development server as well as ES6 /JSX transpilation. The transpilation and server are setup in watch mode which means that they are triggered automatically when files are saved. A browser window displaying the app is also opened automatically. The url being.
6. Wait for the process to fully start up and then check out the browser window. It should show the following:
In this part you will create a React component that displays a list of articles.
1. Edit the \react-hooks-app\src\App.js file and replace its contents with the following and then save the file:
import React from 'react';
import './App.css';
export default function App() {
return (<div className={'app'}>
<h2>React Hooks App</h2>
<li>one</li>
<li>two</li>
<li>three</li>
</div>
);
}
2. Replace the index.css and App.css in \ReactWork\react-hooks-src\src with these files from hooks-app folder. This folder can be downloaded from here:
Copy: \hooks-app\index.css
To: \ReactWork\react-hooks-app\src\index.css
This should overwrite the existing index.css
Copy: \hooks-app\App.css
To: \ReactWork\react-hooks-app\src\App.css
This should overwrite the existing App.css
3. The app will auto-update in the browser and should look like this:
The list above is hard coded. What we want to do now is to display the data programmatically.
Open the file and take a look. It contains an array of articles.
4. Add the following array into App.js after the imports and before the App function:
const initialArticles = ;
5. We want the App to hold its own state. We can do that with the help of the 'useState' hook. To use this hook we first need to update the import statement at the top of App.js to read:
import React, {useState} from 'react';
6. Insert the following as the first line of the App() function (before the return statement):
const = useState(initialArticles);
7. Delete the existing <li> elements being returned from the App() function and replace them with code that uses the JavaScript array map function to output the contents of the array as <li> elements. Your App() function should now look like this:
export default function App() {
const = useState(initialArticles);
return (<div className={'app'}>
<h2>React Hooks App</h2>
<ul>
{articles.map(
(article, index) => {
return <li key={index} >
{article.title}</li>
}
)}
</ul>
</div>
);
}
8. Save the App.js file. The app should now appear like this in the browser:
The current version of the app displays the contents of the initialArticles array. What we really want though is to retrieve the data from a network source. By default the development server (that serves the app) can also serve static files that are placed in the project's \public directory.
Copy: \hooks-app\articles.json
To: \react-hooks-app\public\articles.json
2. We can make use of the 'useEffect' method to insert code that will retrieve data from the article file after the first render. To do that we first need to update the import statement at the top of the file to import the 'useEffect' method:
import React, {useState, useEffect } from 'react';
3. We will need to add a function to retrieve the articles.json file contents over the network. We can do this inside of the App() function by creating a 'const' type with the name 'getArticles' and assigning an anonymous function to it like this:
const getArticles = function(){};
Make sure to place the above code inside the App() function, after the 'useState' line and before the return statement.
4. Update the anonymous function you just created with a JavaScript fetch statement that retrieves the url 'articles.json'
const getArticles = function(){
fetch('articles.json')
.then(response => response.json())
.then(data => {
setArticles(data)
}
);
};
Notice how the above code uses the 'setArticles' function that was returned from our 'useState' call.
5. Insert a 'useEffect' method call inside the App() method so that it appears on the line before the return statement:
useEffect(() => {getArticles()}, [] );
Notice the use of the empty array for the second parameter of useEffect. This makes sure the code is only called once – after the first render.
6. Save the App.js file. The browser should now look like this:
In this part we will add the ability to click an item to select it and show its article text.
1. We'll need to save the index of the selected item in a state variable. To do that add another call to 'useState'. Name the variable 'selectedArticleId' and its modifier method 'setSelectedArticleId'. Add the statement as the first line of the App() method:
const = useState(-1);
2. Add an onClick handler to the <li> elements that sets the selectedArticleId using the associated modifier method.
onClick={(event) => setSelectedArticleId(index)}
3. When an item is selected we want to apply the 'selected' CSS class to the <li>element which will render the element in bold text:
className={(selectedArticleId === index) ? 'selected' : ''}
4. When you are done the App() method's return statement should look like this:
return (<div className={'app'}>
<h2>React Hooks App</h2>
<ul>
{articles.map(
(article, index) => {
return <li key={index}
className={(selectedArticleId === index) ? 'selected' : ''}
onClick={(event) => setSelectedArticleId(index)} >
{article.title}</li>
}
)}
</ul>
</div>
);
5. Save the App.js file. The app should refresh in the browser. Try clicking on an item to select it. The selected item should appear in bold lettering.
6. Once an article is selected we'd like to see its text displayed below the list. When the app comes up though, before an item is selected we'd like to see the word 'none' instead. Lets add a line of code into the App() function that implements that logic and assigns a value to a 'const' variable named 'selectedArticle'. The code can be added just before the return statement.
const selectedArticle = (articles) ?
articles.content : 'none';
7. Now we need to add some HTML that displays the heading “Selected Article” and a paragraph right after that with the article text. These elements should be inserted near the end of the return statement after </ul> and before </div>.
<br /><span className={'bold'}>Selected Article:</span>
<p>{selectedArticle}</p><br />
8. Save the App.js file. The browser should update. Select an article. It should now show below the list:
We would like to be able to add and delete articles from the list. We'll start by adding a section called 'Controls' after the selected article. The section will hold two input fields (for title and content) and two buttons – 'Add Article' and 'Delete Selected'.
1. Before we add the controls section we need to create a state variable to hold the input field contents. The shape of the data will be an object with two properties - title and content. You can add this statement as the first line in the App() function:
const = useState({ title: 'title1', content: 'content1' });
The input fields we display will get their values from formObject.title and formObject.content.
2. Add the following HTML after the paragraph with the selected article and before the closing </div> element (you can cut and paste this text from \hooks-app\controls.html) :
<div className={'controls'}>
<span className={'bold'}>Controls:</span><br/>
<button onClick={null}>Add Article</button>
<button onClick={null}>Delete Selected</button>
<br />
<input type={'text'} name={'title'}
placeholder={'title'} value={formObject.title}
/><br />
<input type={'text'} name={'content'}
placeholder={'content'} value={formObject.content}
/><br />
</div>
3. Save the App.js file. The app should now display a 'controls' section below the selected article:
At this point the buttons don't do anything and the input fields don't update when you type into them. Lets fix that.
4. Create an anonymous function and assign it to a 'const' type named 'changeHandler'. The function should get a new value for the field being changed and assign it to the property of the same name in the formObject. The function should accept an 'event' parameter. (Hint: 'event.target' represents the input field being changed) . Add this code right before the return statement:
const changeHandler = function (event) {
const name = event.target.name;
const value = event.target.value;
formObject = value;
setFormObject({ ...formObject })
}
5. Call the 'changeHandler' function from the 'onChange' event of both <input> elements:
onChange= {(e)=>changeHandler(e)}
6. Save the App.js file. Now keystrokes you enter into the input fields will be added to the formObject and displayed when the component is re-rendered.
7. Update the onClick handler for the 'Add' button to create a new array based on the existing articles array and add the formObject to it. This can be done using the spread operator. Use the 'setArticles' modifier function to update the articles variable with the new array:
onClick={() => setArticles()}
8. Save the App.js file. The app will update in the browser. Type some text into the title and contents input fields. Click on the 'Add Article' button. You should see an article added to the list. Click on the new article in the list and you will see its contents.
Once an article has been selected we want to have the ability to delete it from the list. We will add the functionality now.
1. While we have a button named 'Delete Selected' we want that button to be disabled if no article is selected in the list. Lets create a 'const' type named 'validSelectedArticleId' that we can use to disable/enable the button. This code can be added right before the return statement:
const validSelectedArticleId = function () {
return( selectedArticleId >= 0 && selectedArticleId <articles.length);
}
2. Set the disabled property of the delete button from the value that was just created:
disabled={!validSelectedArticleId()}
3. Save the file and let the browser refresh. The 'Delete Selected' button should be grayed out until an article is selected.
4. Create a 'const' type named 'deleteSelected' and assign an anonymous function to it that deletes the selected article from the articles array and updates the articles array using its setArticles function. You can use the JavaScript array splice function to implement the delete. Add this code right before the return statement.
const deleteSelected = function () {
if (validSelectedArticleId()) {
articles.splice(selectedArticleId, 1);
setArticles();
}
}
5. Update the onClick handler of the 'Delete Selected' button to call the 'deleteSelected' function.
onClick={() => deleteSelected()}
6. Save the App.js file and let the browser update. Select one of the articles by clicking on it in the list. The 'Delete Selected' button should become active. Click on 'Delete Selected'. The selected item should be removed from the list. Refreshing the page will restore the original set of articles.
In this tutorial we:
Created a new app development environment using create-react-app.
Created a React component that displayed a list of articles.
Used React Hooks to manage state inside the component and to retrieve article data from a network source.
Added the ability to select articles from the list and display their contents.
Added the ability to create and add new articles to the list and to delete existing articles.
Your email address will not be published. Required fields are marked * | https://www.webagesolutions.com/blog/archives/4931 | CC-MAIN-2019-43 | refinedweb | 2,125 | 59.19 |
Outdoor all-in-one 200W, 20,000-lumen solar LED sign lights — best reviews of 2021, suitable for government tender projects; min. order: 1 piece. High-brightness integrated all-in-one solar LED lights (Malaysia), 100W, up to 12,000 lm; FOB price: US $399-400 / set. 3,000 sqm fully owned factory focused on LED solar street light and solar light manufacturing.
Westinghouse 2000 Lumen Solar Motion Activated Security Light. SKU SR62AA21H-06. Categories: Solar Flood Lights, Solar Security Lighting.
New COB IP65 waterproof 20W/30W/60W integrated solar rooftop and LED street light, authorized wholesale for outdoor garden lighting; $22.00-$76.20 / set, 1 set (min. order). Quality SMD manufacturer solar power 20W/40W/60W waterproof IP65 outdoor integrated all-in-one. New release, tagged solar landscape lights: Litom custom new-design IP65 waterproof aluminum stake LED street light.
180-Degree 650-Lumen Bronze Solar LED Motion-Activated Flood Light with Timer. Model #MSLED600.
Import quality solar lights in Shenzhen, supplied by experienced manufacturers at Global Sources. Yangzhou Bright Solar Solutions Co. Ltd., 8th year, China (mainland); response time: 24h to 48h. Factory outdoor garden all-in-one integrated 10W LED solar sensor street light, US$ 76.95-81 / set; 10 sets.
LumiGuard™ Pro Solar LED Security Light. The NEW LumiGuard™ Pro wireless floodlight is more responsive, thanks to an upgraded wide-angle motion sensor that provides 270º of coverage: it can now detect people as far away as 20-26 feet. Stop unwelcome intruders in their tracks! Raccoons and thieves will think twice when they see these powerful lights; solar power makes them an excellent option.
Voice Activated, Wi-Fi Connected, White Motion Activated Solar Operated Integrated LED Outdoor Security Flood Light. SECUR360 Outdoor Wi-Fi Connected Solar Powered Motion LED Lighting provides …
1 product rating - LED Solar Flood Light Motion Sensor Security Spot Wall Street Yard Outdoor Lamp. $17.47 (was $18.99). Save up to 12% when you buy more.
2x Solar Powered Motion Sensor Security Flood Light, 100 LED Garden Lamp Outdoor — 4.2 out of 5 stars (86 ratings). 2x 100W LED PIR Motion Sensor Flood Light, Warm White Outdoor Security Spot Lamp — $40.35 new. Sunforce 1900548 Solar Motion LED Security Light, White — 4.4 out of 5 stars.
14,000 lm commercial LED solar street light, motion sensor, dusk-to-dawn, with remote and pole — $28.87 to $96.79, free shipping. 100 LED dual security detector solar spot light, motion sensor outdoor floodlight (75 product ratings) — $18.98, was $27.48, free shipping or best offer. 150,000 lm solar radar motion sensor outdoor light — search result, Zhongshan Junrui Lighting Co., Ltd.
Find Sunforce Solar motion-sensor flood lights at Lowe's today. Shop motion-sensor flood lights and a variety of lighting & ceiling fans products online at Lowes.com.
LED Solar Motion Sensor Flood Light, Activated Outdoor Security With 2 Heads — 5 out of 5 stars (1 product rating).
JESLED Solar Flood Lights Outdoor Motion Sensor, 90 LED Solar Powered Exterior Wall Security Light, Waterproof for Garden, Yard, Patio, Garage; Dusk-to-Dawn, Super Bright, USB Charging & Emergency Lighting (2-Pack) — 4.5 out of 5 stars (485 ratings). $41.99. FREE Shipping by Amazon.
The global 10,000 lumen solar street light / parking lot light (Capsells) market will reach xxx million USD in 2019, with a CAGR of xx% over 2019-2025. Market.biz is designed to provide the best and most penetrating research required by all commercial, industrial and profit-making ventures.
This solar-powered, motion-activated, multi-directional floodlight absorbs sunlight all day through its on-board solar panel, so at night it can flood a wide span with light. No wiring or installation is needed.
4-pack solar diamond lawn lights with stainless steel. JAMIEWIN 120 LED wireless solar flood light, motion sensor security light with 3 lighting modes for garden, street, deck, fence, patio, path (2-pack) — 4.4 out of 5 stars (24 ratings). $179.99 (list $199.99); $12.00 coupon applied at checkout, save $12.00 with coupon. Get it as soon as Thu, May 13.
this item: relightable solar light aa ni-cd 600mah 1.2v rechargable batteries (pack of 8) $9.39 ($9.39 / 1 item) geilienergy triple a nicd aaa 1.2v 600mah triple a rechargeable batteries for solar light lamp $7.99 relightable aa nicd 600mah 1.2v rechargeable batteries for parking light change 1999-2003 toyota solara - 2001 (pack of 20) $15.99 | https://www.lacollinaristorante.it/motion/motion_activated_solar_flood_light_deiserafini_it_2602294.html | CC-MAIN-2021-49 | refinedweb | 818 | 57.77 |
SyntaxError when importing webhelpers.paginate in py3.2
import webhelpers
import webhelpers.paginate
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/kulapard/Projects/test/test_env/lib/python3.2/site-packages/webhelpers/paginate.py", line 250
raise Exception, "getitem without slicing not supported"
SyntaxError: invalid syntax
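For reference, the failing line uses Python 2's comma-based raise statement, which Python 3 rejects at compile time. A minimal sketch of the fix — the message is taken from the traceback above; the surrounding paginate code is assumed:

# Python 2 only -- this form is a SyntaxError under Python 3:
#     raise Exception, "getitem without slicing not supported"
# Python 3 (also valid in Python 2) -- call the exception class instead:
raise Exception("getitem without slicing not supported")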
Thank you for your report. WebHelpers is not yet certified for Python 3. I'll leave this ticket open to address during the conversion. However, it is likely that paginate will be spun off to a separate distribution in the next version.
You can try b492a4ca93f5. I fixed most of the bugs that occur in unit testing, but that doesn't mean my fork is fully functional. I would be very thankful if you could help me test the code.
The ID_AA64MMFR0_EL1:ASIDBits field determines the size of the mm context
id and is used in early boot to make decisions. The value is picked up
from the boot CPU and cannot be delayed until the other CPUs are up. If a
secondary CPU has a smaller size than that of the boot CPU, things will
break horribly and the usual SANITY check is not good enough to prevent
the system from crashing. Prevent this by failing CPUs with an ASID
smaller than that of the boot CPU.

Also moves fail_incapable_cpu() out of CONFIG_HOTPLUG_CPU.

Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
---
 arch/arm64/kernel/cpufeature.c | 81 +++++++++++++++++++++++++++++-----------
 1 file changed, 59 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 5629f2c..769782a 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -293,6 +293,28 @@ static struct arm64_ftr_reg arm64_ftr_regs[] = {
 	ARM64_FTR_REG(SYS_CNTFRQ_EL0, ftr_generic32),
 };
 
+/*
+ *");
+}
+
 static int search_cmp_ftr_reg(const void *id, const void *regp)
 {
 	return (int)(unsigned long)id - (int)((const struct arm64_ftr_reg *)regp)->sys_id;
@@ -459,6 +481,40 @@ static int check_update_ftr_reg(u32 sys_id, int cpu, u64 val, u64 boot)
 }
 
 /*
+ * The asid_bits, which determine the width of the mm context
+ * id, is based on the boot CPU value. If the new CPU doesn't
+ * have an ASID >= boot CPU, we are in trouble. Fail this CPU.
+ */
+static void check_cpu_asid_bits(int cpu,
+				struct cpuinfo_arm64 *info,
+				struct cpuinfo_arm64 *boot)
+{
+	u32 asid_boot = cpuid_feature_extract_unsigned_field(boot->reg_id_aa64mmfr0,
+					ID_AA64MMFR0_ASID_SHIFT);
+	u32 asid_cur = cpuid_feature_extract_unsigned_field(info->reg_id_aa64mmfr0,
+					ID_AA64MMFR0_ASID_SHIFT);
+	if (asid_cur < asid_boot) {
+		pr_crit("CPU%d: has incompatible ASIDBits: %u vs Boot CPU:%u\n",
+			cpu, asid_cur, asid_boot);
+		fail_incapable_cpu();
+	}
+	return;
+}
+
+/*
+ * Checks whether the cpu is missing any of the features
+ * the kernel has already started using at early boot,
+ * before the other CPUs are brought up. This is intended
+ * for checking features where variations can be fatal.
+ */
+static void check_early_cpu_features(int cpu,
+				struct cpuinfo_arm64 *info,
+				struct cpuinfo_arm64 *boot)
+{
+	check_cpu_asid_bits(cpu, info, boot);
+}
+
+/*
  * Update system wide CPU feature registers with the values from a
  * non-boot CPU. Also performs SANITY checks to make sure that there
  * aren't any insane variations from that of the boot CPU.
@@ -469,6 +525,9 @@ void update_cpu_features(int cpu,
 {
 	int taint = 0;
 
+	/* Make sure there are no fatal feature variations for this cpu */
+	check_early_cpu_features(cpu, info, boot);
+
 	/*
 	 * The kernel can handle differing I-cache policies, but otherwise
 	 * caches should look identical. Userspace JITs will make use of
@@ -826,28 +885,6 @@ static u64 __raw_read_system_reg(u32 sys_id)
 }
 
 /*
- *");
-}
-
-/*
  * Run through the enabled system capabilities and enable() it on this CPU.
  * The capabilities were decided based on the available CPUs at the boot time.
  * Any new CPU should match the system wide status of the capability. If the
-- 
1.7.9.5
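For readers unfamiliar with the AArch64 ID registers: ASIDBits is the 4-bit field at bits [7:4] of ID_AA64MMFR0_EL1, and the patch simply compares that field between the boot CPU and each secondary. Below is a rough sketch of that comparison in Python, with illustrative names; the kernel's real helpers are the C functions in the diff above.

ID_AA64MMFR0_ASID_SHIFT = 4  # ASIDBits occupies bits [7:4] of ID_AA64MMFR0_EL1

def extract_unsigned_field(reg, shift, width=4):
    # Pull a 'width'-bit unsigned field out of a 64-bit ID register value.
    return (reg >> shift) & ((1 << width) - 1)

def asid_bits_compatible(mmfr0_boot, mmfr0_secondary):
    # The mm context id width is fixed from the boot CPU during early boot,
    # so a secondary CPU is usable only if its ASID field is at least as large.
    asid_boot = extract_unsigned_field(mmfr0_boot, ID_AA64MMFR0_ASID_SHIFT)
    asid_cur = extract_unsigned_field(mmfr0_secondary, ID_AA64MMFR0_ASID_SHIFT)
    return asid_cur >= asid_boot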