Try/Catch Slowdown: Part 2
Today’s article is in response to a comment left on my article about try/catch slowdowns. This second time around, I will provide an example that is hopefully more “real world” than the one the last article provided.
The comment claimed:
Typical real world applications will not halt the LLVM for 100000000 iterations; therefore the slowness will negligible. Hopefully 10.1 will address this.
I have to concede that summing the first 100,000,000 positive integers is indeed an unrealistic test, and the criticism is therefore valid. This piqued my curiosity. Perhaps when I claimed a 2-3x slowdown I had vastly overstated the effect of a try/catch block on performance, and it may be, as suggested, negligible. The difficulty, though, is in devising a test that both mimics a “real world” application and is simple enough to post on a blog. I cannot hope that readers will come to a full understanding of the computational tasks a full application undertakes, so I feel compelled to provide something simpler. This is what led me to post the simple loop summing up all those integers.
In today’s example, I have chosen a task that is both expensive on its own and realistic enough that it may well be undertaken by an application in the “real world”. Like the last example, it too includes many loop iterations, but this time to a more realistic end. I have chosen to set every pixel on a large, but not unreasonably large, BitmapData. Such a task is undertaken by image encoders, bicubic image resizers, and ray tracers, to name a few examples. My example won’t do anything interesting with the image, just iterate over it to make a funky pattern and then show that pattern. This, I hope, is a true “real world” task:
package
{
	import flash.display.*;
	import flash.events.*;
	import flash.geom.*;
	import flash.text.*;
	import flash.ui.*;
	import flash.utils.*;

	[SWF(backgroundColor="0xEEEADB",frameRate="40",width="640",height="480")]
	/**
	 * A (hopefully) real world test of try/catch performance
	 * @author Jackson Dunstan
	 */
	public class TryCatchTester extends Sprite
	{
		/** If we should use a try/catch block during update */
		private var __useTryCatch:Boolean;

		/** Status display */
		private var __status:TextField;

		/** Display of the results */
		private var __display:Bitmap;

		/** A bitmap to compose the image on before drawing it to the display */
		private var __backBuffer:BitmapData;

		/** Time the last frame began, used to compute elapsed time */
		private var __lastFrameTime:int;

		/**
		 * Application entry point
		 */
		public function TryCatchTester()
		{
			// Setup stage
			stage.align = StageAlign.TOP_LEFT;
			stage.scaleMode = StageScaleMode.NO_SCALE;

			// Setup the display
			var bmd:BitmapData = new BitmapData(2000, 2000, false, 0x00000000);
			__display = new Bitmap(bmd);
			addChild(__display);
			__backBuffer = new BitmapData(2000, 2000, false, 0x00000000);

			// Setup logger
			var format:TextFormat = new TextFormat("_sans", 11);
			__status = new TextField();
			__status.autoSize = TextFieldAutoSize.LEFT;
			__status.background = true;
			__status.backgroundColor = 0xEEEADB;
			__status.selectable = false;
			__status.text = "Use try/catch block: " + __useTryCatch;
			__status.setTextFormat(format);
			__status.defaultTextFormat = format;
			addChild(__status);

			__lastFrameTime = getTimer();
			addEventListener(Event.ENTER_FRAME, onEnterFrame);
			stage.addEventListener(KeyboardEvent.KEY_DOWN, onKeyDown);
		}

		/**
		 * Callback for when a frame is entered
		 * @param ev ENTER_FRAME event
		 */
		private function onEnterFrame(ev:Event): void
		{
			// Compute the time elapsed since the last frame
			var now:int = getTimer();
			var dTime:int = now - __lastFrameTime;
			__lastFrameTime = now;

			// Do work, optionally in a try/catch block
			if (__useTryCatch)
			{
				updateTryCatch(dTime);
			}
			else
			{
				updateNoTryCatch(dTime);
			}
		}

		/**
		 * Update based on elapsed time with NO try/catch
		 * @param elapsed Number of milliseconds elapsed since the last update
		 */
		private function updateNoTryCatch(elapsed:int): void
		{
			var bmd:BitmapData = __backBuffer;
			for (var x:int = 0; x < 2000; ++x)
			{
				for (var y:int = 0; y < 2000; ++y)
				{
					bmd.setPixel32(x, y, x*y);
				}
			}
			__display.bitmapData.draw(bmd);
		}

		/**
		 * Update based on elapsed time WITH a try/catch
		 * @param elapsed Number of milliseconds elapsed since the last update
		 */
		private function updateTryCatch(elapsed:int): void
		{
			try
			{
				var bmd:BitmapData = __backBuffer;
				for (var x:int = 0; x < 2000; ++x)
				{
					for (var y:int = 0; y < 2000; ++y)
					{
						bmd.setPixel32(x, y, x*y);
					}
				}
				__display.bitmapData.draw(bmd);
			}
			catch (err:Error)
			{
			}
		}

		/**
		 * Callback for when a key is pressed anywhere on the stage
		 * @param ev KEY_DOWN event
		 */
		private function onKeyDown(ev:KeyboardEvent): void
		{
			if (ev.keyCode == Keyboard.SPACE)
			{
				__useTryCatch = !__useTryCatch;
				__status.text = "Use try/catch block: " + __useTryCatch;
			}
		}
	}
}
To use this, simply focus the Flash app and press the space bar to toggle the try/catch block. The results I get are:
A 33.5% slowdown is certainly not a 2-3x slowdown as I previously reported. This is not to say that 2-3x is not possible, since I have shown it to you already. However, it may be that 2-3x is the worst case scenario for a try/catch block slowdown. In this more realistic case you can see what is hopefully a more realistic slowdown figure. And still, 33.5% is certainly nothing to sneeze at. That, in a game, would be the difference between running at a smooth 30 FPS and a choppy 20 FPS. Are there other slowdown factors? Invariably! However, one thing seems certain: the slowdown is not, as suggested, negligible. I therefore repeat my recommendation to you: do not wrap performance-critical code in a try/catch block!
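The AVM2 numbers above won't transfer to other runtimes, but the A/B methodology does. As an illustrative sketch only, here is the same experiment in Python (a stand-in language; the 2000x2000 pixel loop is shrunk so it runs quickly, and the arithmetic stands in for setPixel32):

```python
import time

SIZE = 500  # stand-in for the 2000x2000 bitmap loop


def update_no_try():
    # Hot loop with no try block
    total = 0
    for x in range(SIZE):
        for y in range(SIZE):
            total += x * y  # stand-in for setPixel32(x, y, x*y)
    return total


def update_try():
    # Identical hot loop wrapped in a try block
    try:
        total = 0
        for x in range(SIZE):
            for y in range(SIZE):
                total += x * y
        return total
    except Exception:
        return -1


start = time.perf_counter()
update_no_try()
plain = time.perf_counter() - start

start = time.perf_counter()
update_try()
wrapped = time.perf_counter() - start

print("no try/except: %.4fs  with try/except: %.4fs" % (plain, wrapped))
```

In CPython the `try` itself is nearly free when no exception is raised, so expect a much smaller gap than the AVM2 figures above; the point is the toggle-and-measure approach, not the specific ratio.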
#1 by Valentin on October 30th, 2009 · | Quote
More FPS is better, if you are saying that try/catch is slower, looks like you switched the columns.
#2 by jackson on October 30th, 2009 · | Quote
You’re absolutely right. I’ve updated the post with the numbers corrected from what I transcribed. Thanks for catching this!
I would like to be able to execute GRC from a directory structure
moved elsewhere from ${prefix} … meaning, I compile and install into
${prefix}, but then ‘cp -r’ or ‘mv’ or ‘tar - | tar -’ the whole
directory structure elsewhere outside of the usual execution
location. I then create / modify the environment variables such that
they point to the new location (PATH, PYTHONPATH, and
DYLD_LIBRARY_PATH are the 3 primary ones for OSX). From my initial
testing, the rest of GNU Radio can do this with just those 3 variables
(on OSX; the equivalent variables on other platforms)
This doesn’t work because GRC hard-wires the install path (or
something like it) in a few files (I do not know if all of these files
need to be modified to get what I’d like to do working; nor do I know
if these are all of the files that need to be tweaked):
grc/freedesktop/grc_setup_freedesktop.in
grc/src/platforms/base/Constants.py.in
grc/src/platforms/python/Constants.py.in
I can think of two possible solutions:
- Put a switch at the top of each of these files along the lines of:
import os
DATA_DIR = os.getenv("GRC_DATADIR")
if not DATA_DIR:
    DATA_DIR = "@datadir@"
or the equivalent functionality that would work for the @variables@
used in that file (I don’t know if the above would work; it is for
descriptive purposes only): first check the user’s shell environment
for the appropriate variables, and if they don’t exist, then revert to
the default from compiling. This way, users who wish to change the
location can do so easily via their shell environment, and those who
do not set the shell environment variables get the default – same as
current usage.
- Even better would be the use of relative paths (it does work for
me, directly modifying the installed files). This solution would
require knowing the path to $sharedir and the $pythondir and then
figuring out how to get from one to the other (the relative path), but
IMHO is the more elegant solution.
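As a sketch of that second idea, the Constants.py.in files could compute the data directory from the module's own location instead of a baked-in path. Everything below is illustrative, not actual GRC code: the number of `..` hops and the `share/gnuradio/grc` layout are assumptions that depend on the real install tree:

```python
import os

# First honor an explicit override from the user's shell environment.
DATA_DIR = os.environ.get("GRC_DATADIR")

if not DATA_DIR:
    # Fall back to a path computed relative to this file, so the whole
    # tree can be relocated with cp/mv/tar. The number of '..' hops below
    # assumes a layout like $prefix/lib/python2.x/site-packages/grc/...
    # and is only an example.
    here = os.path.dirname(os.path.abspath(__file__))
    prefix = os.path.normpath(os.path.join(here, "..", "..", "..", ".."))
    DATA_DIR = os.path.join(prefix, "share", "gnuradio", "grc")

print(DATA_DIR)
```

This combines both approaches: the environment variable wins when set, and the relative computation makes the default relocatable.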
Thanks in advance! - MLD | https://www.ruby-forum.com/t/grc-request/168957 | CC-MAIN-2022-27 | refinedweb | 347 | 51.96 |
Component that wraps platform ScrollView while providing integration with the touch locking "responder" system.
ScrollView renders all its react child components at once, but this has a performance downside.
Imagine you have a very long list of items you want to display, maybe several screens worth of content. Creating JS components and native views for everything all at once, much of which may not even be shown, will contribute to slow rendering and increased memory usage.
View Props#
Inherits View Props.
StickyHeaderComponent#
A React Component that will be used to render sticky headers, should be used together with
stickyHeaderIndices. You may need to set this component if your sticky header uses custom transforms, for example, when you want your list to have an animated and hidable header. If a component has not been provided, the default
ScrollViewStickyHeader component will be used.
alwaysBounceHorizontal iOS#
When true, the scroll view bounces horizontally when it reaches the end even if the content is smaller than the scroll view itself.
alwaysBounceVertical iOS#
When true, the scroll view bounces vertically when it reaches the end even if the content is smaller than the scroll view itself.
automaticallyAdjustContentInsets iOS#
Controls whether iOS should automatically adjust the content inset for scroll views that are placed behind a navigation bar or tab bar/toolbar.
bounces iOS#
When true, the scroll view bounces when it reaches the end of the content if the content is larger than the scroll view along the axis of the scroll direction. When
false, it disables all bouncing even if the
alwaysBounce* props are
true.
bouncesZoom iOS#
When
true, gestures can drive zoom past min/max and the zoom will animate to the min/max value at gesture end, otherwise the zoom will not exceed the limits.
canCancelContentTouches iOS#
When
false, once tracking starts, won't try to drag if the touch moves.
centerContent iOS#
When
true, the scroll view automatically centers the content when the content is smaller than the scroll view bounds; when the content is larger than the scroll view, this property has no effect.
contentContainerStyle#
These styles will be applied to the scroll view content container which wraps all of the child views. Example:
return (
  <ScrollView contentContainerStyle={styles.contentContainer}>
  </ScrollView>
);
...
const styles = StyleSheet.create({
  contentContainer: {
    paddingVertical: 20
  }
});
contentInset iOS#
The amount by which the scroll view content is inset from the edges of the scroll view.
contentInsetAdjustmentBehavior iOS#
This property specifies how the safe area insets are used to modify the content area of the scroll view. Available on iOS 11 and later.
contentOffset#
Used to manually set the starting scroll offset.

decelerationRate#
A floating-point number that determines how quickly the scroll view decelerates after the user lifts their finger. You may also use the string shortcuts "normal" and "fast" which match the underlying iOS settings.
'normal' 0.998 on iOS, 0.985 on Android.
'fast', 0.99 on iOS, 0.9 on Android.
directionalLockEnabled iOS#
When true, the ScrollView will try to lock to only vertical or horizontal scrolling while dragging.
disableIntervalMomentum#
When true, the scroll view stops on the next index (in relation to scroll position at release) regardless of how fast the gesture is. This can be used for pagination when the page is less than the width of the horizontal ScrollView or the height of the vertical one.
endFillColor Android#
Sometimes a scrollview takes up more space than its content fills. When this is the case, this prop will fill the rest of the scrollview with a color to avoid setting a background and creating unnecessary overdraw.

fadingEdgeLength Android#
Fades out the edges of the scroll content. If the value is greater than 0, the fading edges will be set according to the current scroll direction and position, indicating whether there is more content to show.
indicatorStyle iOS#
The style of the scroll indicators.
'default' same as black.
'black', scroll indicator is black. This style is good against a light background.
'white', scroll indicator is white. This style is good against a dark background.

keyboardDismissMode#
Determines whether the keyboard gets dismissed in response to a drag.
'none' (the default), drags do not dismiss the keyboard.
'on-drag', the keyboard is dismissed when a drag begins.
'interactive' (iOS only), the keyboard is dismissed interactively with the drag and moves in synchrony with the touch.

maintainVisibleContentPosition iOS#
When set, the scroll view will adjust the scroll position so that the first child that is currently visible and at or beyond minIndexForVisible will not change position.
The optional autoscrollToTopThreshold can be used to make the content automatically scroll to the top after making the adjustment, if the user was within the threshold of the top before the adjustment was made.

maximumZoomScale iOS#
The maximum allowed zoom scale.
minimumZoomScale iOS#
The minimum allowed zoom scale.
nestedScrollEnabled Android#
Enables nested scrolling for Android API level 21+.

onScrollToTop iOS#
Fires when the scroll view scrolls to top after the status bar has been tapped.
overScrollMode Android#
Used to override default value of overScroll mode.
Possible values:
'auto' - Allow a user to over-scroll this view only if the content is large enough to meaningfully scroll.
'always' - Always allow a user to over-scroll this view.
'never' - Never allow a user to over-scroll this view.

pagingEnabled#
When true, the scroll view stops on multiples of the scroll view's size when scrolling. This can be used for horizontal pagination.
Note: Vertical pagination is not supported on Android.
persistentScrollbar Android#
Causes the scrollbars not to turn transparent when they are not in use.
pinchGestureEnabled iOS#
When true, ScrollView allows use of pinch gestures to zoom in and out.

scrollEnabled#
When false, the view cannot be scrolled via touch interaction.
Note that the view can always be scrolled by calling scrollTo.
scrollEventThrottle iOS#
This controls how often the scroll event will be fired while scrolling (as a time interval in ms). A lower number yields better accuracy for code that is tracking the scroll position, but can lead to performance problems. The default value is 0, which results in the scroll event being sent only once each time the view is scrolled.
scrollIndicatorInsets iOS#
The amount by which the scroll view indicators are inset from the edges of the scroll view. This should normally be set to the same value as the
contentInset.
scrollPerfTag Android#
Tag used to log scroll performance on this scroll view.

scrollToOverflowEnabled iOS#
When
true, the scroll view can be programmatically scrolled beyond its content size.
scrollsToTop iOS#
When
true, the scroll view scrolls to top when the status bar is tapped.
showsHorizontalScrollIndicator#
When
true, shows a horizontal scroll indicator.
showsVerticalScrollIndicator#
When
true, shows a vertical scroll indicator.
snapToAlignment iOS#
When
snapToInterval is set,
snapToAlignment will define the relationship of the snapping to the scroll view.
Possible values:
'start' (the default) will align the snap at the left (horizontal) or top (vertical).
'center' will align the snap in the center.
'end' will align the snap at the right (horizontal) or bottom (vertical).
stickyHeaderIndices#
An array of child indices determining which children get docked to the top of the screen when scrolling. For example, passing
stickyHeaderIndices={[0]} will cause the first child to be fixed to the top of the scroll view. You can also use like [x,y,z] to make multiple items sticky when they are at the top. This property is not supported in conjunction with
horizontal={true}.
zoomScale iOS#
The current scale of the scroll view content.
The default value is 1.0.
import java.util.prefs.Preferences;

public class Inject {
	public static void main(String[] args) {
		Preferences.userRoot().node("/org/lateralgm").put("NEWKEY", "My New Value");
		System.out.println("Done.");
	}
}
<?xml version="1.0" encoding="UTF-8" standalone="no"?><!DOCTYPE map SYSTEM ""><map MAP_XML_VERSION="1.0"> <entry key="KEY1" value="value1"/> <entry key="KEY2" value="value2"/> ...</map>
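To verify the injection took, a companion sketch can read the key back through the same API. The node path and key mirror the snippet above; the second argument to `get` is the default returned when the key is absent:

```java
import java.util.prefs.Preferences;

public class ReadBack {
    public static void main(String[] args) {
        // Same node as in the Inject example above
        Preferences node = Preferences.userRoot().node("/org/lateralgm");
        // Returns "(not set)" if NEWKEY was never written
        String value = node.get("NEWKEY", "(not set)");
        System.out.println("NEWKEY = " + value);
    }
}
```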
After I got back from the hospital, I started work on Enigma, since other people are actually working on that too, even though I despise C++ (and now I'm re-learning why).
Why do all the pro-Microsoft people have troll avatars?
It'd be cool if it had one where it set values passes as pointers (a _ext version) of the exact collsision coordinates. but im happy for at least 2 weeks.
The problem is that LGM is set up using various design patterns and require a fair amount of abstract thinking. A lot of my desired changes would not be simply "code this", but rather "modularize this so we can use it easier and so it's easier to understand." Something you don't usually get with non-sentient code monkeys in a crate.Also, I should mention that I did nothing special to implement 'other' yet, so as far as I can tell, 'other' is not populated.QuoteIt'd be cool if it had one where it set values passes as pointers (a _ext version) of the exact collsision coordinates. but im happy for at least 2 weeks.The rectangle-line collision algorithm does not support this, afaik - it was implemented in an efficient way just for doing a boolean check. There are various line-clipping algorithms which would give you the coordinates where a line meets the boundaries of a rectangle, but they are nowhere near as efficient as the current algorithm. At any rate, anybody is welcome to implement them and stick them in the collision-utilities backend.
Would it be faster to call that one x times or to find the exact coordinates? You can find the exact and closest with base 2 n runs where n is the distance to check
All I can say is, we'll see. I have no idea whatsoever (which is somewhat unusual; I usually thoroughly investigate the implications of any differences before acting, but in this case I have nothing to go by).I've thought about it, and the idea that scares me the most is that a GM game in which the character could not fall through a 32px gap between blocks of the same size would do so in ENIGMA, or vice versa.All I can say is, we'll do our best and pay careful attention to reports. | https://enigma-dev.org/forums/index.php?topic=704.msg7981 | CC-MAIN-2020-05 | refinedweb | 449 | 62.07 |
> Howdy,
> I don't know Xalan well enough to answer, but it seems pretty
> clear tomcat is looking for a class called Redirect, while
> the stylesheet uses redirect. It seems like a
> case-sensitivity issue, but I can't help much beyond that ;)
>
> Yoav Shapira
I agree. Anyway, a few Googles later and I found this
which uses an alternative namespace declaration...
...
xmlns:
and this seems to work for me. I don't know if there is
anything sinister about using this different declaration, and
I am confused (like the guy in the above mail thread) as to why
the other declaration of
xmlns:redirect=""
works in my command line tests.
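For anyone hitting the same thing, here is a minimal skeleton of how the extension is typically wired up, using the class-based namespace declaration from Xalan's extensions documentation (the output filename and template body are placeholders, not from my actual stylesheet):

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:redirect="org.apache.xalan.lib.Redirect"
    extension-element-prefixes="redirect">

  <xsl:template match="/">
    <!-- Writes a copy of the input to a secondary output file -->
    <redirect:write file="output-part.xml">
      <xsl:copy-of select="."/>
    </redirect:write>
  </xsl:template>

</xsl:stylesheet>
```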
Anyway - there's what I've found out in case anyone else is interested.
Chris
---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-user-help@jakarta.apache.org | http://mail-archives.apache.org/mod_mbox/tomcat-users/200401.mbox/%3CD11FD8AE7B15D64285EA3A41DFCCFFB19B27DC@HORIZON2.horizon-asset.co.uk%3E | CC-MAIN-2015-06 | refinedweb | 147 | 63.7 |
John W. Eaton a écrit :
On 17-Oct-2006, Michael Goffioul wrote:| Oct files needs a different set of compilation flag, to switch from | dllexport to dllimport.| To achieve this, I compile oct-objects in a different oct/ subdirectory. I think that instead of doing this, you should write a new pattern rule that applies only to a particular list of targets. For example, like this: $(DLD_OBJ) : %.o : %.cc ... rule that applies to $(DLD_OBJ) here ... ifdef CXXPICFLAG $(DLD_PICOBJ) : %.o : %.cc ... rule that applies to $(DLD_OBJ) here ... endif Since these rules will only be used in src/Makefile.in, I think they can be defined there instead of in Makeconf.in.
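To make the suggestion concrete, here is a tiny self-contained Makefile showing a static pattern rule that overrides the generic rule for just the listed targets (the names are illustrative, not Octave's actual variables):

```make
OBJS    = foo.o bar.o
DLD_OBJ = dld-a.o dld-b.o

# Generic rule used by most objects
%.o : %.cc
	$(CXX) $(ALL_CXXFLAGS) -c $< -o $@

# Static pattern rule: only the targets listed in $(DLD_OBJ) use this
# recipe, so they can be compiled with different flags
$(DLD_OBJ) : %.o : %.cc
	$(CXX) $(DLD_CXXFLAGS) -c $< -o $@
```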
The attached patch is even simpler and seems to work OK. It shouldn't change anything on other systems. Michael.
Index: src/Makefile.in
===================================================================
RCS file: /cvs/octave/src/Makefile.in,v
retrieving revision 1.417
diff -c -r1.417 Makefile.in
*** src/Makefile.in	13 Oct 2006 18:11:27 -0000	1.417
--- src/Makefile.in	18 Oct 2006 12:33:53 -0000
***************
*** 264,269 ****
--- 267,275 ----
  lex.o parse.o __gnuplot_raw__.o pic/lex.o pic/parse.o pic/__gnuplot_raw__.o: \
    ALL_CXXFLAGS := $(filter-out -Wold-style-cast, $(ALL_CXXFLAGS))

+ $(DLD_PICOBJ): \
+   ALL_CXXFLAGS := $(filter-out $(XTRA_CXXDEFS), $(ALL_CXXFLAGS))
+
  XERBLA = ../libcruft/blas-xtra/xerbla.o

  ifdef FPICFLAG
  PIC_XERBLA = ../libcruft/blas-xtra/pic/xerbla.o
The development of European architecture relied on Greek architecture (beyond criticism).
The character of Mycenaean architecture is very different from that of Hellenic architecture; it consists of rough walling of large blocks of stone, often unworked.
Its most prominent elements are (1) the corbel system and (2) inclined blocks over openings, along with the true arch.
Three significant elements dominated our examination of Aegean architecture: palaces, citadels and tombs.
In both Crete and Mycenae, palaces were important architectural elements.
The organization and form of the palaces, however, differ between the two civilizations.
In Crete, palaces were complex, multifunctional, multi-story buildings.
They were designed to be colorful, relaxed and joyous in nature, reflecting the peaceful lifestyle of the people, while in Mycenae, palaces were single story, organized around a simple rectangular kingly residence, the megaron, accessed through a series of courtyards.
Gypsum, cut stone and timber were the principal building materials of the Aegean.
Gypsum was common in Crete, while limestone was common in Mycenae.
Timber was not very common in either location.
In Crete, gypsum was preferred for wall and frescoed decoration, while timber was used for columns and roofs.
In Mycenae, cut stone was the most common material; it was used with wooden frames for houses or in cyclopean construction for citadels.
The Mycenaeans generally tended to adapt, rather than destroy, Minoan culture, religion and art. They continued to operate the economic system and bureaucracy of the Minoans.
The Mycenaean people were Greek by race.
The Mycenaean kingdom was small and lacked protection and buffer zones to protect the capital.
The people of Mycenae were also more a society of warriors than of traders, which the Cretans were.
Their architecture focused on defense on a grand scale.
The Mycenaean people built fortified kingly palaces located within citadels, instead of the pleasure palaces of the Cretans.
The citadels were usually built along the edge of a sharp change in elevation, on hilltops, to make them difficult for would-be attackers.
The citadels were organized royal living areas enclosed by huge cyclopean walls of roughhewn, immense stone blocks.
The highest degree of sophistication in citadel construction was achieved at Mycenae and Tiryns.
Of the two sites, Tiryns is better preserved.
The two citadels were essentially similar and might have been constructed by the same workmen.

AEGEAN ARCHITECTURE
Minoan Architecture
Domestic Architecture (Palaces)
The palace of King Minos was the largest and most elaborate of the Minoan palaces.
It was the first palace to be discovered and excavated, by the British archaeologist Sir Arthur Evans.
Only the ground floor of a large palace of several stories has partially survived.
The site is complicated and there are controversies about its functions, as the upper floors have remained impossible to reconstruct with certainty.
It was a residence, a religious and an administrative center; the king was Crete's high priest.
The plan suggests it evolved organically around the central courtyard.
The palace covered an area of 10 square kilometers, and it was at least two storeys high.
The various functions of the palace were not distributed in distinct areas. Functions were gathered in chambers and apartments spread around the central courtyard.
The palace had two prominent entrances, one on the north face and another on the west side.
The north entrance appears to be the main entrance and is defended by a guardhouse.
The western entrance was indirect and organic (dog-leg) in form.

Palace of Knossos
Maged Elsamny, PhD
Week 02
Mycenaean Architecture
The palaces were located within fortified citadels, pointing to the defensive orientation of the people.
Citadels and tholos tombs were restricted to the Mycenaean civilization.
Citadels were built on hilltops to fortify and protect kingly residences. They also provided a refuge for the common people during periods of attack.
The citadels incorporated systems of defense and access to water in case of siege.

Infrastructure/Civil Buildings
Tholos were the outstanding tombs of the Mycenaean people.
The tholos were round, beehive-shaped structures covered with a dome roof.
They were accessed by a long causeway called a dromos.
Once a person was buried, the tholos was sealed.
They did not function as funerary chapels, in contrast with the practices we examined during the Egyptian periods.

Funerary Architecture
The most splendid of the tholos tombs at Mycenae is the so-called Treasury of Atreus, or Tomb of Agamemnon.
It was built around 1330 BCE.
The dromos is about 6m wide and 36m long.
Its sidewalls rise to 13.7m high.
The chamber is 14.5m in diameter and 13.2m high.
It is made up of 34 circular courses of masonry.
A lateral chamber, 8.2m square by 5.8m high, was the actual place of burial.
The Treasury of Atreus exhibited the best masonry and the most careful and ambitious construction to be found at Mycenae.

Treasury of Atreus
Mycenae, inaccessible and easily defended, stands midway between Corinth and Argos on the eastern part of the Peloponnese.
The gate consists of great upright stones 3.1 meters high supporting an immense lintel 4.9 meters long and 1.6 meters high.
The lintel defined a gate 2.4 meters deep with an opening 3 meters wide.
Above the lintel is a triangular corbeled opening filled with a stone panel bearing a carved relief depicting two rampant lions facing a central column of the downward-tapering type.
The column was the sacred symbol of the earth that the lions supposedly protected.
The triangular relief carving over the front heralds the temple front of the Greek civilization.

The Citadel at Mycenae
The Citadel of Tiryns
[Diagram of a typical Minoan palace: a central court surrounded by rooms of different functions.]
The Minoan civilization was the first to flourish among the Aegean civilizations, and is named after King Minos.
The remains of this civilization are townhouses and palaces.
The Minoans were traders and seafarers.
The society can be thought of as being made up of near-divine kings presiding over an administration largely concerned with commerce.
The Minoans were a very rich and prosperous society.
The wealth of the society was reflected in the building of palaces as the residences of the powerful rulers who controlled the towns in which they were built.
Minoan cities did not have city walls, which suggests that they were a relaxed, peaceful and easygoing society.
Apart from palaces, the Minoans also built many small country houses scattered over the countryside, and several towns, of which the one attached to the palace of Knossos achieved considerable size.
Buildings were aligned with their surrounding topography, such as mountains, in relation to the sacred or ritual significance of the mountain.
Three Types of Masonry Walls
1- Cyclopean: masses of rock roughly quarried and piled on each other, without cramp-irons, but with clay mortar, the interstices between the larger blocks being filled with smaller blocks.
2- Rectangular: carefully hewn rectangular blocks arranged in regular courses, but the joints between stones in the same course are not always vertical.
3- Polygonal: many-sided blocks accurately worked so as to fit together.
(Photo credits: Supernado, Jastrow, The Lovecraftsman)
(Date ranges shown with the illustrations: 2700-1100 BCE; 2700-1380 BCE; 1600-1100 BCE)
Archeological excavations essentially discovered how the buildings were arranged in plan at the ground level, with no concrete knowledge of how the upper floors of the buildings were organized.
In design, the palaces resemble each other but still preserve unique features.
They were multi-storey buildings, with interior and exterior staircases, light wells, massive columns, storage magazines, and courtyards.
Function rather than form appears to predominate in their organization.
The most striking feature of the palaces is the extraordinary number of rooms they contain. There were rooms of different types, sizes and functions organized around a central courtyard; they served as centers of government, administrative offices, shrines, workshops, and storage spaces.
The courtyards were aligned north-south, the reason for which is not clear.
All the palaces have multiple entrances, most of which led to the courtyard.
The palaces do not suggest formal principles of planning or design.
Their organization is more or less organic in nature, suggesting gradual growth.
Examples of palaces: Knossos, Phaistos, Malia, and Kato Zakros.
Buildings were several storeys high. Typically the lower walls were constructed of stone and rubble, and the upper walls of mud brick.
Ceiling timbers held up the roofs.
The materials used in constructing the villas and palaces varied, and could include sandstone, gypsum, or limestone.
[Diagram: influence and trade links among the Egyptians, the Minoans, the Greek mainland and Mycenae: import of papyrus, export of ceramics, architectural and artistic ideas, hieroglyphs, handicraft items.]
The Columns:
The columns of Minoan architecture are unique because they are wider at the top than at the bottom. The Minoan column was constructed from the trunk of a cypress tree, common in the Mediterranean.
The bulk of the northern part of the East Wing was used for industrial activity.
Industrial activities included jewelry and pottery making, and other light industries.
Towards the southern part of the East Wing is found the Queen's suite.
The queen's suite boasted a bathroom with a sophisticated drainage system of earthenware pots fitted together.
A staircase and a ramp lead from the ground floor of the East Wing to the upper floors.
Archeological evidence suggests that the main living apartments were on the upper levels of the East Wing.
Most of the West Wing was devoted to storage.
The storage rooms were long and narrow shops found against the western wall.
The storage rooms were for oil jars and probably granaries.
The throne room was dark and mysterious; the stone throne was against the north wall, flanked by benches.
The walls were decorated with paintings of sea animals.
The decorations appear to have had a religious purpose rather than a royal one.
A magnificent staircase in the West Wing led to staterooms on the upper floors.
Rooms were generally approached through rows of double doors so that they could be opened, or totally or partially shut off.
Everything was designed to permit the circulation of cool air, to counteract the intense heat of the Cretan summer.
Staircases were also designed to have light wells; these were openings in the roof that admit light into the staircase.
General Characteristics:
The palace did not embody any idea of monumentality or conceptual order.
Rather it was picturesque and colorful, with an atmosphere of comfort and informality.
The building materials of the palace were rich; wood and gypsum were extensively used to achieve fine, bright surfaces.
Wood was used to erect widely spaced columns supporting a lightweight wooden roof.
The columns taper toward the base (wider at the top) and had round capitals.
The perishable nature of these materials has meant that they have not survived to the present.
None of the columns has survived. All the information on them is derived from paintings on walls.
The Cretans loved color and painted their walls and adorned them with reliefs, mostly of sea animals, suggesting that they probably worshipped nature.
The stairways, light wells, and colonnades of downward-tapering wood columns were typically Minoan.
The palace had elaborate and well-developed sanitation and drainage, an example of which is found in the Queen's suite.
ca. 1700 BCE
[Plan of the palace at Knossos, with labeled areas: West Entrance, North Entrance, Industrial Rooms, Staircases to the upper level, Queen's Suite, Ramp, East Wing, West Wing, Throne Room, Storerooms, Storerooms & Shops.]
Reconstruction drawing showing the palace at Knossos.
The "Throne Room" with its gypsum throne and benches built to accommodate sixteen people. (Photo credit: eucharisto deo)
[Further labeled views: West Wing, Staircases at Knossos Palace, Queen's Suite, East Wing.]
In Mycenae, the location was open to attack, and architectural form responded by emphasizing defense.
The emphasis on defense meant that movement in the citadels was directed through a maze to the megaron, to ensure optimal protection.
The focus on palaces stems from the power and authority of the king in both civilizations, which is expressed in palace construction.
In Mycenae there is also evidence of some conscious application of aesthetic principles in the design of the tholos (tombs).
The geometrical relationships between the diameter and height of the tholos point to some conscious formal organization of form.
[Plan labels: Open Area, Store Rooms, Entrance/Gates, Megaron.]
Corbeled gallery in the walls of the citadel, Tiryns, Greece, ca. 1400–1200 BCE. (Photo credit: willbarnes)
Aerial view of the citadel at Tiryns, Greece, ca. 1400–1200 BCE.
Casemates, or covered galleries, protected and concealed troops within the wall.
There were also tunnels within the walls that provided access to water sources beneath the hill.
The tunnels were cunningly camouflaged where they extended beyond the area enclosed within the fortification walls.
The Tiryns citadel also had large galleries to the south and east that were used for storing large quantities of agricultural produce.
All the water and food arrangements ensured that the city could withstand attacks by its enemies for a long time without running out of supplies.
The fortification walls were constructed in the irregular style of masonry construction termed cyclopean.
The citadel had a long, narrow approach on the east side, with two gates which could be barred.
The palace of Tiryns is located within the citadel, to the south.
Additional vacant land is enclosed on the north side.
The
royal residence at Tiryns
is one of the best preserved Mycenaean fortifications
Tiryns was
located on the coast
and was in effect a castle, guarding the beachhead that served as the
port
of Mycenae
The citadel at Tiryns is located on a
low rocky citadel
hill
It was guarded by an
immensely thick wall 11m thick
Although one royalty resided in the citadel
, in times of war the vacant land served as a refuge for the community living in the city
below
The living quarter and lifestyle of the ruler is not much different from that of the other feudal barons.
All the
principal apartments were located on a single floor,
they were made up of a
simple rectangular box with a single door called megaron
The
Rectangular house of the ruler is called the chief megaron
The chief megaron
consists of a veranda, entrance hall and throne room
The
throne room is entered from the entrance hall
, through a door placed axially
In the
center of the throne room is a large circular fire place
Four columns are arranged in a square around the fire place
A
throne is located against the middle of the right-hand wall in the throne room
The
floors and walls are all painted and decorated
A large court lies directly in front of the chief megaron.
The Megaron courtyard is entered from the citadel gate through a series of corridors, entrance portals and other courtyards.
ca. 1400 BCE
ca. 1350 -1250 BCE
Lentil
Corbeled Arch
The Lion Gate
1330 BCE
Reconstruction of the Mycenaean capital, from the Treasury of Atreus in the British Museum
Sections of the tholos
3.1m high stones supporting 4.9m long lentil and 1.6m high
2.4m deep, and 3m wide opening
N
Entrance leading to the courtyard
Courtyards facing the North
The mythical creature (Minotaur) is believed to be in the middle of the courtyard
copyright Perseus Project 1989, drawn by M. W. Cutler based on BSA 25 1921-23 in A.W. Lawrence 1983 81 fig. 56
First circle of royal tombs
Plan of Citadel at Mycenegean Architecture
No description
Please log in to add your comment. | https://prezi.com/9nnnbvm6hszc/aegean-architecture/ | CC-MAIN-2017-51 | refinedweb | 2,542 | 56.79 |
/* Simple implementation of vsprintf for systems without it. Highly system-dependent, but should work on most "traditional" implementations of stdio; newer ones should already have vsprintf. Written by Per Bothner of Cygnus Support. Based on libg++'s "form" (written by Doug Lea; dl@rocky.oswego.edu). <ansidecl.h> #include <stdarg.h> #include <stdio.h> #undef vsprintf #if defined _IOSTRG && defined _IOWRT int vsprintf (char *buf, const char *format, va_list ap) { FILE b; int ret; #ifdef VMS b->_flag = _IOWRT|_IOSTRG; b->_ptr = buf; b->_cnt = 12000; #else b._flag = _IOWRT|_IOSTRG; b._ptr = buf; b._cnt = 12000; #endif ret = _doprnt(format, ap, &b); putc('\0', &b); return ret; } #endif | http://opensource.apple.com/source/libstdcxx/libstdcxx-39/libstdcxx/libiberty/vsprintf.c | CC-MAIN-2016-22 | refinedweb | 109 | 70.7 |
A friend of mine challenged me today to the number game. This is a classical one, where you have to guess a number between 0 and 999, and the computer will tell you whether you were right on or if you were above or below the chosen number.
Instead of doing dichotomy by hand or with a calculator, I wrote the following Forth snippet using the gforth interpreter:
: guess 2dup + 2/ dup . ;
: init 0 999 guess ;
: big nip guess ;
: small -rot big ;
Here is the transcript of an interactive session (what I typed is in black, what was printed by gforth is in red):
init 499 ok
small 749 ok
small 874 ok
big 811 ok
big 780 ok
big 764 ok
small 772 ok
big 768 ok
big 766 ok
big 765 ok
For those not well-versed in Forth, here is how it works:
guesstakes the low bound and the high bound from the stack, put them back there and adds the middle value as well, and prints it.
initstarts a session by putting 0 and 999 on the stack and calls
guessto print the initial value to be entered.
bigremoves the high bound from the stack, leaving only the low bound and the previous middle value, then calls
guessto get a new value.
smallreplaces the low bound on the stack by the previous middle value, and calls
guess. The stack manipulation and the call of
guesswould be done using
-rot nip guess, and I took advantage of
bigby factoring it into
-rot big.
That’s it. Who could now pretend that small isn’t beautiful?]]>.]]>
Il y a maintenant trois ans je prévenais, ainsi que beaucoup d’autres, du danger de la loi sur le droit d'auteur de droits voisins dans la société de l'information, plus connue sous le petit nom de DADVSI. Un amendement, surnommé par les bloggers et les journalistes « Amendement Vivendi Universal », proposait d’interdire la simple mise à disposition du public de logiciels qui pourraient être utilisés pour contrefaire du contenu protégé par le droit d’auteur.
Et voila, on y est. La société civile des producteurs de phonogrammes en France (SPPF) poursuit quatre sociétés américaines au prétexte que leurs logiciels de peer-to-peer permettent de partager du contenu contrefait. Peu importe que ces logiciels servent à partager des logiciels libres ou de la musique légalement partageable, le fait qu’ils puissent être utilisés pour partager des œuvres dont la redistribution est interdite par leurs ayants-droits suffit à donner une chance à la SPPF de gagner une telle action en justice. Le texte de l'article L335-2-1 du code de la propriété intellectuelle est formulé en ces termes :
« Est puni de trois ans d’emprisonnement et de 300 000 euros d’amende le fait :
1° D’éditer, de mettre à la disposition du public ou de communiquer au public, sciemment et sous quelque forme que ce soit, un logiciel manifestement destiné à la mise à disposition du public non autorisée d’oeuvres ou d’objets protégés ;
2° D’inciter sciemment, y compris à travers une annonce publicitaire, à l’usage d’un logiciel mentionné au 1°. »
Notez bien le « manifestement destiné à la mise à disposition du public non autorisée d’oeuvres ou d’objets protégés » employé ici : c’est lui qui peut faire la différence, le juge devant apprécier si cette condition est respectée. La loi pénale étant d’interprétation stricte, c’est peut-être ce qui sauvera nos logiciels de partage du jouc de la justice.
On pourra également remarquer qu’en cas de succès de l’action judiciaire, la SPPF aura beau jeu de la prolonger par l’attaque des fournisseurs de distributions GNU/Linux, car ceux ci fournissent en général certains des logiciels mis en cause (Azureus par exemple) dans leurs paquets. Ainsi que de tous ceux qui fournissent des dépôts, notamment en France, contenant une copie de ces distributions GNU/Linux (je pense à la quasi-totalité des fournisseurs d’accès par exemple). D’ailleurs, une des sociétés attaquées actuellement n’est autre que SourceForge, la plus grande plate-forme de développement de logiciels libres. Et ce uniquement parce qu’elle héberge et distribue un des logiciels de partage incriminés, même si la société ne contribue pas directement à son développement.
Oh, vous me direz que ça ne peut pas arriver chez nous, qu’on ne pourra pas interdire la distribution de logiciels libres sous prétexte que certains pourraient les utiliser illégalement. On parie ?]]>, so I guess I will have to find another brand.]]> it was ninth grade, don’t forget).
I remember myself asking the teacher: “Would it be ok to run a person’s blood through a machine that temporarily increases its pressure in the presence of oxygen?” She looked surprised and told me she didn’t know.
Two days ago, I watched an episode of House M.D in which Dr. House puts a patient into an hyperbaric oxygen chamber to cure a patient from carbon monoxide poisoning. Not quite the same thing as the derivation I was thinking about, but the same principle, increase the blood pressure in an oxygen-saturated environment. This reminded me of the question I asked when I was a child. Twenty years later, I’m proud to know that my idea was not that stupid.]]>
Am I the only one who would have loved to see a second season of Battle Programmer Shirase coming?]]>:
SYMBOL: recursive-block : set-block ( quot -- ) recursive-block set ; : recurse ( quot -- ) recursive-block get call ; : with-recurse ( quot -- ) recursive-block get >r dup set-block call r> set-block ;
A quote passed through
with-recurse can use the
recurse word and re-execute itself.
Now that we have recursion, it is easy to implement
while using currying:
: while ( quot quot -- ) swap [ call recurse ] curry [ slip when ] 2curry with-recurse ; inline
Note that
inline is used here to give the optimizing compiler a chance to build the complete quotation at compile-time if both quotations given to
while are statically known.
To create a vocabulary
recursion containing this code, one can create a file
extra/recursion/recursion.factor containing:
USING: kernel namespaces ; IN: recursion <PRIVATE SYMBOL: recursive-block : set-block ( quot -- ) recursive-block set ; : init-block ( -- ) [ "recurse called without with-recurse" throw ] set-block ; PRIVATE> : recurse ( quot -- ) recursive-block get call ; : with-recurse ( quot -- ) recursive-block get >r dup set-block call r> set-block ; : while ( quot quot -- ) swap [ call recurse ] curry [ slip when ] 2curry with-recurse ; inline MAIN: init-block
Loading this library can be done by issuing the following command in the listener:
"recursion" run.
That's all folks.]]> | http://www.rfc1149.net/blog/feed/atom/ | crawl-002 | refinedweb | 1,111 | 52.33 |
Overview
Atlassian SourceTree is a free Git and Mercurial client for Windows.
Atlassian SourceTree is a free Git and Mercurial client for Mac.
xlwt3
DEVELOPMENT STOPPED - 03.01.2011
I doubt that there will ever be a stable version of xlwt3.
"xlwt3" is the Python 3 port of "xlwt" 0.7.2
"xlwt3" is 100% API compatible to "xlwt" 0.7.2 () but all module names changed to lower-case names, and formula parsing related modules moved to the subpackage 'xlwt3.excel'.
Purpose
Library to create spreadsheet files compatible with MS Excel 97/2000/XP/2003 XLS files, on any platform, with Python 3.1+
Maintainer
- xlwt -- John Machin, Lingfo Pty Ltd <sjmachin AT lexicon.net>
- xlwt3 -- Manfred Moitzi, Python 3 port <mozman AT gmx.at>
Licence
BSD-style (see licences.py)
External modules required
The package itself is pure Python with no dependencies on modules or packages outside the standard Python distribution.
Quick start:
import xlwt3 as')
Installation
Any OS: Unzip the .zip file into a suitable directory, chdir to that directory, then do:
python setup.py install
or:
pip install xlwt3
Download URLs
PyPI:
Bitbucket:
Documentation - Sphinx based HTML documentation
or use the original "xlwt" 0.7.2 documention at and replace every "xlwt" with "xlwt3" or use:
import xlwt3 as xlwt
Documentation can be found in the 'doc' directory of the xlwt3 package. If these aren't sufficient, please consult the code in the examples directory and the source code itself.
Problems
Try the following in this order:
- Read the source
- Ask a question on
- E-mail the xlwt maintainer <sjmachin AT lexicon.net>, including "[xlwt]" as part of the message subject.
- E-mail the xlwt3 maintainer <mozman AT gmx.at>, including "[xlwt]" as part of the message subject.
Acknowledgements
- xlwt is a fork of the pyExcelerator package, which was developed by Roman V. Kiseliov.
- "This product includes software developed by Roman V. Kiseliov <roman AT kiseliov.ru>."
- xlwt uses ANTLR v 2.7.7 to generate its formula compiler.
- a growing list of names; see HISTORY.html: feedback, testing, test files, ... | https://bitbucket.org/luensdorf/xlwt3 | CC-MAIN-2016-44 | refinedweb | 345 | 60.01 |
Agenda
See also: IRC log
See also pictures:
See also: Summary of F2F outcomes
<paulwalk> Locah Project blog post:
topic list for today
<kcoyle>
<rayd> topic list was created as a placeholder originally
now is good time to figure it out
some relate to use cases, some are short and we haven't figured them out etc
discussion should be deliverable oriented, focused discussion
classify discussion into three areas
1. topics covered by use cases, which we should examine further, or for which a use case should be found first
2. topics to be treated as requirements
3. deliverables. things we can achieve
new topics might be created in the course of this discussion
for example, recommended software
first topic, knowledge representation
all about which vocabularies we are using.
michaelp: doesn't fit with a particular use case.
<emma> ...they are all about how we represent our domain knowledge
frsad, for example, has simple model how do we represent that, event or concept
a knowledge representation question
emanuelle: how to model domain
emanuelle: do we want to do in group, or is it for future.
marcia: more than one way to do it. decision to be made by people who assign subject terms
michaelp One of the main ideas of semantic Web: use a URI for real stuff.
<marcia> FRSAD
<marcia> FRSAD is a conceptual model. SKOS can be used to implment the model. But there are two options: SKOS only (lables are properties of a concept) or SKOS + extension for labels
antoine: hard to go into this detail for every model.
Gordon: Generally, we should be recommending VESes as ranges.
gordon: general good practice for linked data. range should be a URI.
emma: need best practice for modelling. is it possible to do this in our timeframe.
michaelp: it is a requirement rather than best practice
Marcia: Differentiate label - FRSAD - SKOS-XL. SKOS without XL works for some vocabularies. We should say: "Here are the two approaches".
Marcia: present different recipes for people to decide.
<marcia> SKOS eXtension for Labels (SKOS-XL)
non bibliographic data
there is one circulation and an identifier use case
emma: rec. dev. is
outside our scope. there are plenty of statistical
ontologies
... If a vocabulary is missing, we can point it out.
gordon: appl profile for collection description.
<markva> anybody interested in statistics models should look at
<TomB> Gordon: There is a Dublin Core application profile for Collection Description -
gordon: there are models in other domains, we don't have to invent everything
next: citations use cases
next: application profiles
tom: requirement to clarify what an app profile is and that there are different approaches, point to one or two, some issues
karen: a small number
of methodologies
... Libraries should try to converge on some common application profiles.
antoine: wonder whether previous item, frs, should be with app profiles.
tom: should there be something on isbd?
gordon: yes
karen: isnt isbd itself an app model
gordon: no it's a data model
<marcia> ISBD is a data model
gordon: its flat, premarc, no concept of authority data
<TomB> Suggest that we mention role of application profiles not only in ISBD but in RDA.
marcia: applic.
profile is more like what steps you need to follow
... Question is if APs are sets of documentations, or APs are technical specifications to be implmented.
tom: role of this group not to say it's one or the other (other being syntax) but point out areas like rda etc
jon: "style" of appl. profile?.
next: legacy data, first subtopic inventory available linked data
gordon: maintenance issue, anything we identify will be out of date soon
tom: do it on fringes but not a core activity
next: vocabularies statuses
gordon: moving targets
karen: difficult for us to know what's being developed and we need better communication channel.
next: Translation of data in MARC format to linked data
mike: translation of data or translation of marc?
latter
"should marc have an rdf representation"
gordon: at least half dozen efforts, experimental, group should take note of that
next: Populating reference data models when legacy data is not perfectly fitting
<TomB> My understanding of this discussion: In Gordon's update of status of new RDF vocabularies (FRBR, etc) - comment on desirability (or not) of expressing the MARC model in RDF
<TomB> ...in addition to the issue of converting MARC records into RDF (not necessarily using an RDF representation of MARC)
<antoine> Scribe: Mark van Assem
GordonD: frbr is 4 records
instead of one
... application profile bridges gap
<TomB> Gordon: Coming around to thinking: MARC to RDF triples, then build it into an ISBD record, or whatever. The promise of linked data, focus shifts from record to statement. Application profile fills the gap. Break down, then build back up.
next: [LLD. COMMON-MODEL]
is same as previous
next :[LLD. AUTHORITIES] is in use cases
TomB: problem with wording of the Topic, entities = vocabulaires?
kcoyle: SKOS is handy to put authorities into
alex: already have FRAD
GordonD: authority data is about labels, not entities themselves
Jeff: but SKOS (XL) does both
kcoyle: were two separate databases; in this new world how we model that
emma: req or not?
kcoyle: comes up in use cases
GordonD: The issue here is bibliographic entities versus real-world entities.
michaelp: this is what KR topic
is about we discussed in begin
... LD challenges our notion that biblio entities are completely cut off from real-world entities.
michaelp: litmus test for FRs
... "crossing the streams" - challenges us to think of authority files in a different way.
GordonD: is there 1-1 relationship between entities and bib entity within semweb?
emma: put it in deliverable
Karen: Used to be a database in the back room.
<marma> Data is here:
<TomB> Jeff: In VIAF, - Martin suggested using FOAF.
Martin suggest to use foaf:focus to link the foaf:person to skos:concept
Antoine: keep the two topics
separate
... be aware that authority data diff of real world
... then how to articulate link
... separate issue and practical solutions, patterns, and cases that use them
Antoine: observable in VIAF, they produce skos:Concept and foaf:Person from same piece of data
next: [LLD. SPECIFIC-VOCABS]]
next: [LLD. SKOS-MULTILINGUAL] is a use case
next: [LLD. SKOS-LIB-KOS] in deliverables
michaelp: is it about what's been done or what difficulties are
GordonD: it's what me and antoine just agreed to look at
next: [LLD. PERSON-NAMES]
rayd: covered in my use case
emma: put in deliverables together with authority data
antoine: and refer from there to
use cases
... use cases can be moved into requirements if turns out it was not done
... merge person-names and person-metadata
next: LLD. IDENTIFIERS] is use case
next: [LLD. LEGACY-IDS] is requirement
kcoyle: issue e.g. ISBN for
manifestation; need to give advice
... think about what ID means
TomB: LCSH cite as example
LarsG: related to digital
preservation
... can of worms; need reqs or recommendations
GordonD: need to expose it as can of worms
<edsu> mmmm, worms
next: [LLD. NAMESPACES]
into requirements
TomB: ld principle that URI resolve to representation
antoine: could we refer to webarch?
TomB: part of TBL's four points
LarsG: not particular topic for lld
emma: do we need to address namespace policy?
TomB: yes, libraries should have
persistence policies, and principles for vocab evolution
... can URI be repurposed? can meaning evolve?
kcoyle: issue what do you do with
multiple copies? how do you identify them?
... important part of structure people need to understand; lot in here that people need to understand so that they do proper LD
antoine: nothing library-specific about it
kcoyle: libraries bring up
interesting cases
... library experience should inform web experience
marcia: other communities gathering resources have no clear roadmap
??:
antoine: Europeana experience: for the moment URIs for digital objects are handled quite badly, after a while URIs are dead
... library practice in web context is poor
... we cannot improve that
emma: should say that practice should be better
LarsG: put persistent identification and resolution services into requirements
<antoine> scribe: Michael
emma: Next section: Semantic web
environmental issues
... Group with requirements
... Next: Linking across datasets
... What links to what? Group with inventory of datasets in deliverables
Jeff: People could still use OWl
to show what is being linked without relying on an
inventory
... Self-description using OWL without defining new level of properties
emma: Next: Alignment of vocabs
antoine: Related to previous discussions about skos mapping properties
GordonD: Also about mapping
models that are independent of SKOS
... There are different mapping approaches
emma: Next: Alignment of
real-world-resource identifiers
... Environmental issue
Antoine: Put into cases for
future action
... Bernard might investigate
RayD: about alignment or assignment?
antoine: Relating library authority file concepts to identifiers for the real thing
kcoyle: What is meant with
alignment?
... Bringing together if there is more than one?
TomB: And specifying realtionship between them
emma: Next one: The Linked Data
paradigm and the Metadata Record paradigm
... Models for packaging Linked Data in records, e.g., Named Graphs
... and Provenance of Linked Data
Jeff: Mikael's email indicates a
lot of tension between metadata models and domain models.
... Lot of confusion between these paradigms
ACTION: Tom to re-categorize AGRIS under Bibliographic Data. [recorded in]
Jeff: How can we help people to think in these paradigms?
kcoyle: Educational vs. proof of
concept. These are two different goals.
... Can we create the data we want to create without using the records paradigm.
John: I don't think we can create
data in absence of a record model.
... Creation, dissemination, and consumption.
... Latter two can happen without record. First one cannot.
antoine: Some of the choices
about the right URI in LD look like record building.
... Even on the basic level about which triples you send out.
<antoine> packaging in linked data dissemination context ->
marcia: Do you mean the presence of an application profile at the time of creation
Diane: Can we use aggregated view instead of record view?
GordonD: Catalogers create a package of descriptions.
Diane: We need to carefully
examine those assumptions.
... Catalogers don't start from nothing and arrive at something that they consider complete.
GordonD: Rarely info in record is
created from scratch
... Reliance on external authority and other sources
Marcia: From the abstract model a record is an aggregate of other description sets.
?: But, if things are added, can this info be consumed back into your aggregated set?
GordonD: Triples will be out
there. Aggregation will happen on the fly.
... Moving to a "post-coordinated" approach.
emma: We have to cut here
... It is in the requirements.
emma: provenance
kcoyle: It is an requirement. Not specific to LLD.
antoine: We can put it in the use case and probably look at the work of the provenance task group.
emma: We can extract some requirements if we put it in the use case category.
Kai: Strongly related to the record / description set issue.
emma: Next: Conversion issues, e.g., URIs, content negotiation, RDF compatibility
Kcoyle: Don't know what it means. Very broad.
Antoine: Could we trash it?
emma: OK
... Next:.
... Related to Gordon's and Antoine's UC?
Antone: This is more related to KOS alignment.
Emma: Maybe we have a gap here in
the use cases.
... We need a use case about the appropriateness of SKOS to cover controlled vocabularies in LLD.
Antoine: Some is covered in postponed SKOS issues.
Emma: Should check there.
Antoine: We should put it in the vocabulary section.
Emma: Next: extraction of semantic data
kcoyle: Perhaps Marcia can explain what is meant here
marcia: The original email was about a framework of showing things.
kcoyle: Let's put it in the deliverables so we remember to look at it when we prepare deliverable.s
emma: next: linked data
management, hosting, and preservation
... vocabulary-specific aspects of management, hosting, and presentation
kcoyle: related to discussion about metadata registries. We need use case.
emma: Put it in use case.
... Next: Versioning, updates
kcoyle: Next three go together.
We need a use case for all of them.
... Dissemination mechanisms: RDF schemas, RDFa, bulk download, feeds, SPARQL...
... DCMI-RDA task group would be a great use case.
GordonD: Many issues have surfaced there.
emma: Issues of Web architecture, e.g., persistent URI design best practices, HTTP
Alexander: I don't see pattern as
architecture patterns, more like modeling
recommendations.
... We should tell peoples about our experiences with our modeling.
emma: Should it be a use case?
kcoyle: We can require things
that we don't know how to do.
... It could be a requirement.
emma: Related to "data caching"?
Alexander: Broader context.
Ingestions, dissemination.
... I want to have a library system that is able to deal with linked data together with classical library data
Mark: Does that exist?
Alexander: No.
Mark: We have that covered in software recommendation.
Alexander: Not so much the issue
what to use, but how to use the tools.
... The IT departments have systems that are going to stay there for a long time.
... We have to come up with ways with doing new stuff with existing infrastructure.
... We are generating LD at runtime. This is not the right approach.
<michaelp> Jeff: I care about that also
Martin: Me also.
michaelp: Perhaps we can do something together
<scribe> ACTION: Alex, Jeff, Martin, MichaelP elaborabe on general purpose IT archtiecture for dealing with linked data with caching feature [recorded in]
<antoine> Scribe: Jeff
Ontology discovery and dissemination [DATA. ONTOLOGY-DISCOVERY]
kcoyle: covered in registry part, discovery part, vocabulary, need a way to find ontologies
marcia: difference between vocabularies/ontologies. format-related
kcoyle: different perspectives on
vocabularies: things divided into class, instance, properties,
(ontologies?) vs. different vocabularies naming concepts
... no vocabulary of vocabularies
... need to be clear about what we mean when we use the term "vocabulary"
# Search Engine Optimization for Library Data Google Rich Snippets, Yahoo SearchMonkey, Facbook's OpenGraph Protocol [edsu, jphipps] [DATA. SEARCH-OPTIMISATION]
alexander: seems to be close to architecture topic
antoine: these systems may be able to understand library models in the future
emma: this needs a use case
antoine: Europeana wants to put RDFa in HTML
<edsu> facebook's rdfa has a notion of book, author, movie
<edsu> also isbn :-)
TomB: is there a role of application profiles in search (e.g. Google)
antoine: if you search Google for a book, you will get a Google Book results near the top. It has special status.
ACTION: Emma and Antoine to create use case DATA.SEARCH-OPTIMIZATION [recorded in]
# Licenses, IP, DRM, other availability/rights/access restriction info [antoine, kcoyle, emmanuelle, aseiler] [MGT. LICENSES]
michael: related to provenance and rights discovery
antoine: need common way (RDF) to discover these things
kcoyle: need a use case for provenance and rights
# Workflows or roadmaps for different kinds Linked Data projects [keckert, emmanuelle] [MGT. WORKFLOWS]
# Examples of business models of managing linked library resources (metadata, vocabulary, and KOS resources) [digikim] [MGT. BIZ-MODELS]
# Common patterns in Linked Data, with examples, and with best practices for "Linked Data friendly" output from traditional library data - to provide guidance and save time - maybe several best practices when there are several good ways to solve a problem. [MGT. PATTERNS]
kcoyle: 1&3 have been covered? 2 is new?
Alexander: more concerned with
common software (architecture) patterns
... it's analogous to Java classes (built in classes)
<edsu>
<TomB>
kcoyle: we need library examples that refer to the "free book"
Alexander: What are the patterns that are pecularly useful in Library Linked Data?
Emma: examples of business models. no use cases.
marcia: Somebody needs to manage
Karen: sustainability is essential
kcoyle: ROI isn't necessarily money. it can also be cost savings
antoine: abstract a business model from existing use cases?
marcia: somebody needs to envision patterns of business models
# Need for training and documentation (a Linked Data primer for libraries ?) [gneher, Jschneid4, keckert, digikim, antoine, emmanuelle, aseiler] [MGT. TRAINING]
emma: a UTube video?
... can we deliver training and documentation?
antoine: our report should be readable as a primer
<Marcia> +1 Antoine primer idea
kcoyle: the community needs to commit to education in this area
TomB: do we need to specify the skillset?
kcoyle: a lot of people as that question, but few answers
emma: use cases address this in problems and limitations
# Mapping Linked Data terminology to library terminology and concepts [kcoyle] [MGT. LEGACY-MAPPING]
antoine: can glossary make these connections?
TomB: part of training and documentation
antoine: can this be a deliverable
emma: just listing the terms is a hard task
# Liaison with standardisation bodies and initiatives (ISO and national bodies, IFLA, International Council on Archives, CIDOC...) [GordonD, emmanuelle] [MGT. STANDARDS-PARTICIPATION]
kcoyle: it's a big one
TomB: Gordon and IFLA are a good example
gordon: need on going organizational commitments
TomB: we need to have ongoing communication
# Outreach to other communities (archives, museums, publishers, the Web) [Jschneid4, GordonD, antoine] [MGT. OUTREACH]
emma: we do have a use case related to archives
kcoyle: these communities also have "bodies" that can become involved
antoine: identify a list of these communities and keep it up to date
emma: use the people in this group to create connections to there groups
ray: "collaboration" is different from "liaison". Liaison is too hard.
kcoyle: but necessary.
TomB: try to disseminate our results as broadly as possible.
ACTION: on everyone to update the Events page () on the wiki regularly [recorded in]
# How to announce new efforts, build appropriate communities around those efforts, get the right players to the table. [kcoyle] [MGT. NEW-EFFORTS]
emma: it's very general
kcoyle: in the future, make sure we outreach to right people
<paulwalk> Re lldvis: The vocabs are mapped to use cases, but the topics have not yet been mapped at all yet - will do this following today's meeting
emma: group with next steps , new efforts, and future working groups.
<TomB> don't we have a page for linking articles, such as my TWR blog post?
# pulling in linked data for end users [USE.END_USERS]
# Computational use of library linked data [USE.COMPU]
# Linked data to enhance professional processes or workflows, for librarians, cataloguers, etc. [USE.PRO]
emma: special effort in use cases
to demonstrate these points
... use cases to enhance current practices
antoine: can we make this a deliverable?
emma: need a specific section in
the deliverable
... that's the end of the list
See post meeting cleaning:Outcome of the topics discussion
time for a group photo | http://www.w3.org/2005/Incubator/lld/minutes/2010/10/24-lld-minutes.html | CC-MAIN-2016-26 | refinedweb | 3,078 | 56.55 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
Project Issue - Subject
Hi,
On sending emails to customers, the subject is from the name field. Is there any way to concatenate the record ID+name? On emails being sent we are looking for the format:
'id'+' - '+'subject'
Ok, based on the below comment, I have this method. My Python is not advanced enough to know what I should do with this method after overriding or even if I were to adjust this to test.
def get_record_data(self, cr, uid, model, res_id, context=None): """ Returns a defaults-like dict with initial values for the composition wizard when sending an email related to the document record identified by ``model`` and ``res_id``. :param str model: model name of the document record this mail is related to. :param int res_id: id of the document record this mail is related to """ doc_name_get = self.pool.get(model).name_get(cr, uid, [res_id], context=context) record_name = False if doc_name_get: record_name = doc_name_get[0][1] values = { 'model': model, 'res_id': res_id, 'record_name': record_name, } if record_name: values['subject'] = 'Re: %s' % record_name return values
Thanks
Hi,
You can do this in two ways: either define a name_get method in the particular model and append whatever string you want,
or inherit the mail.compose.message model, override its get_record_data method, and change record_name there.
Thanks...
Hi, thanks for the response. I am not seeing this method in project_issue.py. Am I looking in the wrong place?
Hi, you need to override that method. If you want to do it via mail.compose.message, just inherit mail.compose.message, override its get_record_data method, and customize it.
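Putting the replies together: to get "id - subject", you inherit mail.compose.message and override get_record_data so it rebuilds the subject after the original method runs. The sketch below is runnable anywhere because it fakes the base model with a stand-in class; in a real addon you would instead declare _inherit = 'mail.compose.message' on the model and keep only the overridden method. The record name and the '%s - %s' format are illustrative assumptions:

```python
# Stand-in for the real mail.compose.message model so the sketch runs
# outside Odoo. In a real addon this class would not exist; you would
# attach to the model via _inherit = 'mail.compose.message' instead.
class BaseComposeMessage(object):
    def get_record_data(self, cr, uid, model, res_id, context=None):
        # The real method looks up the record's display name; faked here.
        record_name = 'Printer is on fire'
        return {'model': model, 'res_id': res_id,
                'record_name': record_name,
                'subject': 'Re: %s' % record_name}

class ComposeMessageWithId(BaseComposeMessage):
    def get_record_data(self, cr, uid, model, res_id, context=None):
        # Let the original method build its defaults first...
        values = super(ComposeMessageWithId, self).get_record_data(
            cr, uid, model, res_id, context=context)
        # ...then rebuild the subject as "id - subject".
        if values.get('record_name'):
            values['subject'] = '%s - %s' % (res_id, values['record_name'])
        return values

data = ComposeMessageWithId().get_record_data(None, 1, 'project.issue', 42)
print(data['subject'])  # -> 42 - Printer is on fire
```

With the real model inherited the same way, mail composed for record 42 would go out with the subject "42 - ..." instead of "Re: ...".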
Code contract on interface methods
- Hello guys,
another quick question: should static analysis work against contracts defined for interfaces? here's a quick example:
[ContractClass(typeof(PersonContract))]
public interface IPerson
{
String FirstName { get; }
String LastName { get; }
void ChangeName(String firstName, String lastName);
}
[ContractClassFor(typeof(IPerson))]
public class PersonContract:IPerson
{
public string FirstName
{
get { return CodeContract.Result<String>(); }
}
public string LastName
{
get { return CodeContract.Result<String>(); }
}
public void ChangeName(string firstName, string lastName)
{
CodeContract.Requires(firstName != null);
CodeContract.Requires(lastName != null);
}
}
public class Person:IPerson
{
private String _firstName;
private String _lastName;
public String FirstName
{
get { return _firstName; }
}
public String LastName
{
get { return _lastName; }
}
public void ChangeName(string firstName, string lastName)
{
_firstName = firstName;
_lastName = lastName;
}
}
Should the following code be caught during static analysis:
var p = new Person();
p.ChangeName(null, null);
As you might guess, static analysis isn't picking this up on my machine...
thanks again
Luis Abreu
Friday, November 07, 2008 8:40 PM
General discussion
All replies
- Sigh. It should. I've fixed it and hope we can put out a new release soon. Sorry for the bugs! But I'm glad you are really trying it out.
Currently the contract class needs to explicitly implement the interface methods, not implicitly as in your example. (But even if you do that, the released version won't work yet.) Do you think that is okay?
Thanks!
Mike
Monday, November 10, 2008 3:40 AM
Moderator
- Hello Mike.
thanks for the tip.
Since the contract class is only there for setting up the contract, I think that explicitly implementation won't be too much to ask for.
thanks again and keep up the good work!
PS: don't forget to update the docs. I think that there's nothing in the current pdf which mentions that the contract class must implement the interface explicitly.
Luis Abreu
Wednesday, November 12, 2008 9:23 PM
- Edited by Luis Miguel Abreu (MVP), Wednesday, November 12, 2008 9:25 PM
Thanks for posting this. What's really weird is that this seems to work in VS2010, but not when run on my TFS 2010 Build server. The VS2010 test seems to realize to expect a Contract exception and that's what the test harness produces in VS2010. But when in TFS 2010, it knows to expect the Contract Exception, but does not receive it. Maybe I just resaid what has already been stated.
As I am evaluating this for use by my team, I think it is an easier sell if the Interface does enforce the contracts, or at least the compiler should verify that the derived class does enforce them. No idea how hard it would be to enforce that. Perhaps that's a Code Analysis feature. But whatever the case, I'd like to keep the code repetition to a minimum or I'll get pushback from the DRY enthusiasts.
Love the product though, so far. Also agree on the docs... didn't even realize they were out of date.
Friday, November 19, 2010 9:05 PM
Should have finished my testing before I posted on this one... but when I add the exact same code contracts from the interface into the class, and rerun the pex explorations, everything works fine on my PC. But once I check in the updated files and tried to build on the TFS Build Server, I get the following:
"The agent process was stopped while the build was running"
This happens on 5 of my 13 generated tests, and then the rest are 'Not Executed'.
I removed the code contracts from the class, regenerated the Pex Explorations, checked in the changes, and the error went away (though I was back to expecting the wrong exception).
Friday, November 19, 2010 9:20 PM
Anyone know why this build error is happening? I've been going through scenarios for days trying to get some consistency, but if I can't get it, I'm going to have a really hard time selling this to my team as a useful tool for our next project.
Thanks
Tuesday, November 23, 2010 7:02 PM
- It sounds as if some part of the contracts tools are not installed on your server. Can you send us the output from the rewriting step?
Mike Barnett
Friday, November 26, 2010 3:55 PM
Moderator
Ah... yes, I had installed the PEX assemblies, but not the Code Contracts on the build server. Installed those, rebuilt my tests and everything works great! Thanks!
Monday, November 29, 2010 6:07 PM
Implement the static argument transformation
The Static Argument transformation optimises
f x y = ....f x' y...
into
f x y = let g x = ....g x'... in g x
Instead of passing
y along unchanged, we make it into a free variable of a local function definition
g.
Unfortunately, it's not always a win. Andre Santos gives a discussion, and quite a few numbers in his thesis.
But sometimes it is a pretty big win. Here's the example that recently motivated me, which Roman Leshchinskiy showed me. You need the attached file Stream.hs, and then try compiling
import Stream

foo :: (a -> b) -> [a] -> [c]
foo f = mapL f
Thus inspired, I think I have a set of criteria that would make the static arg transformation into a guaranteed win:
- there is only one (external) call to the function
- OR its RHS is small enough to inline
- OR it is marked INLINE (?)
So I'd like to try this idea out. | https://gitlab.haskell.org/ghc/ghc/-/issues/888 | CC-MAIN-2020-34 | refinedweb | 162 | 73.78 |
Amazon Interview Question
SDE-2s
- Given a box of dimensions a x b x c and x chocolates, each a cuboid of volume 2 cm3, determine whether all x chocolates can fit in the box.
Country: India
Interview Type: In-Person
bool can_fit(int x, int a, int b, int c) {
#define re(x) (x-(x%2)) /* round down to even */
int n_choc = (re(a)*re(b)*re(c))/8;
return x <= n_choc;
}
Isn't this a simple math problem? Compute total volume of box - as a*b*c. Then find out how many chocolates can fit in to the box by division.
VolBox = a*b*c
VolChoc = 2
X = VolBox / VolChoc
If X >= 1 - return X and say atlteast X chocolates can fit into the box.
else. return 0 and say no chocolates of 2cm3 can fit into the box.
Unless the question is incomplete, this should be the solution
Because they're cuboids, the chocolates can be any shape. Max volume of chocolate equals volume of box. Max count of chocolates equals half the volume of chocolate (in cm^3) or half the volume of the box. Am I missing something?
static int howManyFit(float length, float width, float height) {
return (int)(length * width * height) / 2;
}
static boolean canFit(int count, float length, float width, float height) {
return count <= howManyFit(length, width, height);
}
public void testHowManyFit() {
assertEquals(1, howManyFit(1,1,2));
assertEquals(true, canFit(1, 1,1,2));
assertEquals(false, canFit(2, 1,1,2));
assertEquals(2, howManyFit(1,2,2));
assertEquals(true, canFit(2, 1,2,2));
assertEquals(false, canFit(3, 1,2,2));
}
You have to take into account the fact that cuboids might not fit directly into the bigger cuboid- the volumes might be divisible, but physically, it might not be viable.
For example, if we take rectangles of size 4x3, our area would be 12. If we try to fit squares of 2x2 into that area, at first we might think just to divide 12 by 4 to get 3 squares that fit, but if you actually draw it out, you can only really get 2 squares to fit.
So, we need to break this down based on how many ways we can divide each dimension by 2, and take the floor of that, since anything above that would just be extra space.
- SK July 08, 2015 | https://careercup.com/question?id=5121982330830848 | CC-MAIN-2018-39 | refinedweb | 363 | 66.98 |
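For what it's worth, the thread never pins down the chocolate's shape. If each chocolate is taken to be a 1x1x2 bar (volume 2 cm3) and the box sides are integers, SK's packing concern does not bite for this particular piece: floor(a*b*c/2) bars always physically fit, so the volume bound is tight. (The first C answer rounds every side down to even and divides by 8, which counts 2x2x2 blocks of volume 8, so it undercounts.) A sketch under those assumptions:

```python
def max_chocolates(a, b, c):
    """Most 1x1x2 bars that fit in an a x b x c box with integer sides.

    floor(a*b*c / 2) is always achievable: put floor(a/2) bars in each
    of the b*c rows of length a; if a is odd, the leftover unit cells
    form a 1 x b x c slab that packs the same way, and the two counts
    add up to exactly floor(a*b*c / 2).
    """
    return (a * b * c) // 2

def can_fit(x, a, b, c):
    # True when all x chocolates fit in the box.
    return x <= max_chocolates(a, b, c)

print(max_chocolates(3, 3, 3))  # -> 13, even though every side is odd
print(can_fit(14, 3, 3, 3))     # -> False
```

If the interviewer instead means chocolates of arbitrary shape totaling 2 cm3 each (mlblount45's reading), the same floor(volume/2) formula holds trivially.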
> -----Original Message-----
> From: Derik Crouch [mailto:dcrouch@pretorynet.com]
> Sent: Thursday, January 17, 2002 10:42 PM
> To: soap-dev@xml.apache.org
> Subject: Problem with response from MS Soap
>
>
> Hello,
>
> I'm currently building a Java client that will be using
> Apache Soap API
> to communicate to a MS Soap server. Using a sniffer we're able to see
> the message go out and the response coming back from the
> server. ( We're
> using MSSoapT to see what's going on ) The problem were
> experiencing is
> involving processing of the response. The exception isn't being thrown
> back and it's stopping the JVM . Any advice would be very appreciated
> and thanks for your time.
>
> .............
> ERROR RECEIVED....
> java.lang.NoSuchMethodError
> at org.apache.soap.util.xml.QName.<init>(QName.java:80)
>
> at
> org.apache.soap.util.xml.QName.matches(QName.java:146)
> at
etc.
That looks like the error we get if the XML parser does
not support namespaces. Maybe you have an old parser hanging
around that is being found before the namespace supporting
parser. It took me days to find this.....
wbrogden@bga.com
Author of Soap Programming with Java - Sybex; ISBN: 0782129285 | http://mail-archives.apache.org/mod_mbox/ws-soap-dev/200201.mbox/%3C000c01c1a00f$b350dfe0$42da35d8@bigcow%3E | CC-MAIN-2014-52 | refinedweb | 196 | 59.9 |
How I sent emails 10x faster
require 'config/environment'
require 'user'
require 'notifier'
server do |map_reduce|
map_reduce.type = User
map_reduce.conditions = "opt_out = 0"
end
client do |user|
Notifier.deliver_email(user)
end
This tiny amount of code, with next to nothing to memorize and taking 30 seconds to write, can potentially save you hours of delivery time. Even running 10 clients at once on the SAME MACHINE gave us nearly 10x the speed of running serially. This was not mission critical, but it gives you a good sense of ways to apply Starfish to mission-critical applications.
26 Comments:
that's so simple and clean... way simpler than background drb, even simpler than using ruby threads (which might not provide the parallelism you need anyhow).
3:11 AM, August 22, 2006
Plus, Ruby threads + Rails stack = Crap shoot
10:07 AM, August 22, 2006
Thanks for the example. It seems cleared to me now how you can use Starfish.
9:37 AM, August 23, 2006
Would you mind detailing what exactly the code above is doing.
Thanks.
10:38 AM, August 23, 2006
The code above creates a queue of User objects. The clients grab a user from the queue and send an email to the user, then grab another user and send an email, over and over. If you have 10 clients, it is like splitting up the collection over 10 loops and running them concurrently, hence the 10x speedup.
11:42 AM, August 23, 2006
How does your example code know to use starfish?
12:05 PM, August 23, 2006
You save that file, for example as: email_sender.rb. Then you execute the code by calling: starfish email_sender.rb. Since I wanted 10 clients processing the data, I called starfish email_sender.rb 10 times.
12:11 PM, August 23, 2006
How do you prevent multiple processes from sending e-mails to the same person?
Meaning, if you had a text file contain unique e-mail addresses to mail a message to, how does the script above make sure that only 1 e-mail goes to each unique e-mail address?
1:01 PM, August 23, 2006
Since you are running it on the same machine, how is this different from creating a multi-thread application?
Isn't this essentially all this is?
1:11 PM, August 23, 2006
James: It uses a queue system, so two clients can never grab the same line from a file because once grabbed, that line is not accessible to any other clients. It also has a simple mutex to prevent two calls at the exact same time.
Hank: Have you ever tried multi-threading Rails? It is almost always a big source of headaches do to things since there would be conflicts with multiple attempts to concurrently use the connection to the DB. Plus, obviously I didn't have to call them all on the same server which is a big bonus.
1:23 PM, August 23, 2006
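The exactly-once guarantee described above is the essential trick: taking an item off a shared queue removes it atomically, so no two clients can ever process the same user. Here is a small Python sketch of that idea (illustrative only, not Starfish's actual API, with a list standing in for email delivery):

```python
import threading
import queue

def worker(q, sent, lock):
    # get() atomically removes one item from the shared queue, so no
    # two workers can ever grab the same user -- the same guarantee the
    # Starfish server gives its clients.
    while True:
        user = q.get()
        if user is None:          # sentinel: queue is drained
            break
        with lock:
            sent.append(user)     # stand-in for Notifier.deliver_email(user)

q = queue.Queue()
users = ["user%d@example.com" % i for i in range(100)]
for u in users:
    q.put(u)

sent = []
lock = threading.Lock()
workers = [threading.Thread(target=worker, args=(q, sent, lock))
           for _ in range(10)]
for t in workers:
    q.put(None)                   # one sentinel per worker
for t in workers:
    t.start()
for t in workers:
    t.join()

assert sorted(sent) == sorted(users)  # every user handled exactly once
```

Starfish layers distribution across machines (and the mutex mentioned above) on top of this, but the duplicate-prevention the clients rely on is the same atomic take-from-queue.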
Hey I just wrote the worlds shortest operating system:
#include "theuniverse.h"
int main()
{
run_os()
}
I'm kidding, I'm kidding, but you get my point? What is user doing, what is notifier?
5:54 AM, August 24, 2006
The user is an ActiveRecord object that has a user's email address. The notifier is an ActionMailer object that sends email.
6:44 AM, August 24, 2006
I think I'm starting to grasp this better. One feature you might want to add is a delay time between runs so you don't send too many emails too fast (you can get blacklisted) or some other external requirement.
Thanks,
Adrian Madrid
3:01 PM, August 24, 2006
Holy damn friggin hell, thats elegant code!
Lucas, keep it up.
5:31 PM, August 24, 2006
Starfish looks great and I'd love to use it but am still not sure on the whole process.
Could you write up a little tutorial in the future that goes through the process of doing something with Starfish from start to finish?
10:46 PM, September 02, 2006
This looks amazingly simple, but it's not immediately obvious how to get it to work properly from that example. For example, it seems like the way of starting up clients could cause problems. You said that you called "starfish email_sender.rb" 10 times, but how does each invocation know which "run" it should be a part of? For example, what if the task actually finished after your 8th call to "starfish email_sender.rb" w/o you realizing it? It seems that your remaining 2 calls to "starfish email_sender.rb" would start up the task again and cause duplicate emails to be sent.
Along the same lines, if these are started on remote machines, how do they communicate w/o specifying a head server? Are they storing the queue info and the mutex in the database itself?
Sorry for all the questions!
6:34 PM, September 11, 2006
Jay's questions seem legitimate, or am I just too much of an amateur to get it?
8:11 AM, January 24, 2007
Jay, I'm just reading up on starfish myself, and I found myself asking the same questions as you.
As for your first question (in paragraph 1), I think that's the beauty of the whole server/client system. There is one server that maintains the state of what has been processed and what has not, manages the mutex, etc. So, the clients really don't 'know' anything about the parallelization - they just know how to process one carefully chosen item (in this case a User, I believe).
As for your second question, I found myself wondering the same thing. I think the answer might be hidden in the config details of require 'config/environment'.
rinogo
(starfish.20.rinogo @ xoxy.net)
12:10 PM, May 19
John Vines commented on ACCUMULO-2092:
--------------------------------------
Or we make it part of the client API that there is a propagation delay; there is similar
behavior for some of the other permissions.
> Seeing spurious error message about namespace that does not exist
> -----------------------------------------------------------------
>
> Key: ACCUMULO-2092
> URL:
> Project: Accumulo
> Issue Type: Bug
> Environment: db746960fbcafb1651c15ec2e5493d56acb5065c
> Reporter: Keith Turner
> Assignee: Christopher Tubbs
> Fix For: 1.6.0
>
>
> In the conditional random walk test the following spurious error message occurred. The
test did not fail, it was running fine.
> {noformat}
> 24 17:43:45,065 [conditional.Setup] DEBUG: created table banks
> 24 17:43:45,101 [conditional.Setup] DEBUG: set table.cache.block.enable false
> 24 17:43:45,148 [impl.Tables] ERROR: Table (d) contains reference to namespace () that
doesn't exist
> 24 17:43:47,012 [conditional.Init] DEBUG: Added splits [b100, b200, b300, b400, b500,
b600, b700, b800, b900]
> 24 17:44:00,394 [conditional.Init] DEBUG: Added bank b259 9
> {noformat}
--
This message was sent by Atlassian JIRA
(v6.1.5#6160) | http://mail-archives.apache.org/mod_mbox/accumulo-notifications/201401.mbox/%3CJIRA.12686321.1387913556111.23800.1390883978344@arcas%3E | CC-MAIN-2017-04 | refinedweb | 172 | 50.02 |
It seems like a good idea to us -- no muss, no fuss, the
nodes just appear when the device gets loaded by insmod, and
disappear when the driver gets unloaded. Something like this:
if ((major = register_chrdev(0, "lala", &LalaFops)) <= 0) {
printk("Unable to get major for lala\n") ;
return(1) ;
}
do_unlink("/dev/lala0");
rc = do_mknod("/dev/lala0", S_IFCHR | 0666, MKDEV(major, 0) );
if (rc < 0)
{
printk("Unable to create device node for lala\n");
return (1);
}
To get this to work, we had to arrange to export do_mknod()
and also do_unlink() with the enclosed kernel patch.
So, I thought maybe I'd just pose the question here:
Calling do_mknod() from init_module() - a good idea or not???
-Rick
================= patch ============================
Note:
This device driver will make the device nodes
automatically on module loading. However, for the
time being you must hack your kernel to export
the proper symbols to enable this magic. I hope
to get the changes incorporated into the 2.0.30
kernel as well as the 2.1.X development kernels
The following patch will make the needed changes.
Tested on kernel 2.0.27
*** include/linux/fs.h.orig Wed Apr 2 12:31:11 1997
--- include/linux/fs.h Wed Apr 2 11:45:58 1997
***************
*** 617,622 ****
--- 617,623 ----
extern int open_namei(const char * pathname, int flag, int mode,
struct inode ** res_inode, struct inode * base);
extern int do_mknod(const char * filename, int mode, dev_t dev);
+ extern int do_unlink(const char * filename);
extern int do_pipe(int *);
extern void iput(struct inode * inode);
extern struct inode * __iget(struct super_block * sb,int nr,int crsmnt);
*** kernel/ksyms.c.orig Wed Apr 2 12:17:56 1997
--- kernel/ksyms.c Wed Apr 2 11:44:36 1997
***************
*** 170,175 ****
--- 170,177 ----
X(generic_file_read),
X(generic_file_mmap),
X(generic_readpage),
+ X(do_mknod),
+ X(do_unlink),
/* device registration */
X(register_chrdev),
*** fs/namei.c.orig Wed Apr 2 12:19:08 1997
--- fs/namei.c Wed Apr 2 11:45:13 1997
***************
*** 656,662 ****
return error;
}
! static int do_unlink(const char * name)
{
const char * basename;
int namelen, error;
--- 656,662 ----
return error;
}
! int do_unlink(const char * name)
{
const char * basename;
int namelen, error;
--
Rick Richardson       Sr. Principal Engr.    Can you be sure I'm really me
Digi Intl.            Email: rick@dgii.com   and not my clone??? Has anybody
11001 Bren Rd. East   Fax: (612) 912-4955    seen The Leader's nose???
Minnetonka, MN 55343  Tel: (612) 912-3212
I had an object acting as a singleton and referencing a delegate instance that was created from an object acting as a non-singleton. Bam! Memory leak.
I set up a little test to demonstrate. Here is a class that has two methods that return delegate instances:
public class TestSource
{
    private string internalValue = "test";

    public Func<bool> GetFunc1()
    {
        return () => 1 == 1;
    }

    public Func<string> GetFunc2()
    {
        return () => internalValue;
    }
}
Notice that GetFunc1() doesn’t have any references to internal members, but GetFunc2() does. Here’s the test class:
class Program
{
    static void Main(string[] args)
    {
        // hold no references to TestSource
        TestFuncInstance(new TestSource().GetFunc1());
        TestFuncInstance(new TestSource().GetFunc2());

        Console.WriteLine("Press Enter to continue.");
        Console.ReadLine();
    }

    private static void TestFuncInstance(Delegate func)
    {
        Thread.Sleep(1000); // give some time for GC
        Console.WriteLine(string.Format("Method: {0}", func.Method));
        Console.WriteLine(string.Format("DeclaringType: {0}", func.Method.DeclaringType));
        Console.WriteLine(string.Format("Target: {0}", func.Target ?? "null"));
        Console.WriteLine();
    }
}
This creates two separate instances of TestSource and passes the result of the two GetFunc methods to a test method. Notice that there are no declared variable references to the TestSource object. Here’s the output of the test:
Method: Boolean <GetFunc1>b__0()
DeclaringType: TestingDelegates.TestSource
Target: null

Method: System.String <GetFunc2>b__2()
DeclaringType: TestingDelegates.TestSource
Target: TestingDelegates.TestSource

Press Enter to continue.
I know this test isn't very scientific, but you'll see that Func1's target is null while Func2's target is not. Func2 has to hold a reference to the declaring object so that it can do its job when invoked. Func1 does not need a reference, and seems to free up the declaring object to be garbage collected. This is definitely something to keep in mind when passing around delegates.
INI section representation. More...
#include <inisection.h>
INI section representation.
Provides access to variables in specified INI file section.
Definition at line 40 of file inisection.h.
Creates an invalid IniSection object. Such object should not be used for read/write operations.
Creates a valid IniSection object.
IniSection object will operate on specified Ini object and provide access to variables in specified section.
Definition at line 47 of file inisection.cpp.
Inits specified variable with specified data.
This method serves specifically for init purposes. If variable exists already exists, data will not be modified.
Definition at line 57 of file inisection.cpp.
Deletes specified variable.
Definition at line 73 of file inisection.cpp.
true if setting of given name exists within the section.
Setting contents may be empty and this will still return true. false is returned only if setting doesn't exist at all.
Definition at line 84 of file inisection.cpp.
If true, IniSection object is not valid and should not be used to perform any actions on the Ini file.
Definition at line 90 of file inisection.cpp.
A name (or path) of this section with lettercase preserved.
Definition at line 95 of file inisection.cpp.
Definition at line 100 of file inisection.cpp.
Calls const retrieveSetting().
Definition at line 105 of file inisection.cpp.
Gets a variable but only if it already exists.
Definition at line 115 of file inisection.cpp.
const version of retrieveSetting()
Definition at line 126 of file inisection.cpp.
Gets a variable. Creates it first if it doesn't exist yet.
Definition at line 137 of file inisection.cpp.
Sets a variable directly. Omits the IniVariable system.
Definition at line 154 of file inisection.cpp.
Retrieves a variable directly; omits the IniVariable system.
Definition at line 164 of file inisection.cpp.
Retrieves a variable directly; omits the IniVariable system.
Overload that returns defaultValue if requested value is invalid.
Definition at line 174 of file inisection.cpp. | https://doomseeker.drdteam.org/docs/doomseeker_1.0/classIniSection.php | CC-MAIN-2021-25 | refinedweb | 324 | 55.3 |
17 October 2012 09:21 [Source: ICIS news]
TOKYO (ICIS)--
This was a 6.4% increase from the corresponding period a year earlier, according to the council.
The country’s total domestic shipments were up by 25% month on month to 89,378 tonnes in September, which is a 4.7% year-on-year increase, the VEC said.
Among the domestic shipments, shipments of rigid PVC increased by 30% month on month to 51,741 tonnes, and rose by 8.3% year on year, according to the council.
“When we looked into which applications boosted [the month-on-month increase in shipments], demand for PVC pipes increased. These are used for construction purposes, but we are not seeing continuous increase [in PVC demand] yet. However, spot [demand] has increased in September,” VEC chairman Shunzo Mori said at a press conference.
According to Mori, rigid PVC accounts for about 58% of total domestic demand for PVC, among which about 45% is used for pipes.
September PVC shipments increased a little, but it was driven by demand for pipes, he said, adding that until the construction and housing sectors recover, PVC demand is not likely to rise. “We predict [weak PVC demand] will continue for a while,” Mori said.
Meanwhile, exports of PVC rose by 6.7% to 28,741 tonnes in September from August, but decreased by 25% from September 2011, the VEC said.
In 1.1a10 I have made a fairly significant change by adding a CallStack
to the path of execution through bsh scripts. The CallStack replaces the
simple namespace reference that most of the ASTs were using with a stack of
namespaces showing the chain of callers.
For normal users the only difference will be that they will (soon) start seeing
error messages that show a script "stack trace" to the point of failure. This
should make debugging complex scripts easier.
For developers writing BeanShell specific commands and tools the change is
more fundamental in that we now have two new powerful magic references:
this.caller - a reference to the calling This context of the method.
this.callstack - an array of NameSpaces representing the full callstack.
With this.caller it is finally possible to correctly write beanshell
commands / scripted methods that have side effects in the caller's namespace.
In particular the eval() and source() methods now work correctly with side
effects in local scope. It will also be possible to write other nifty
commands that examine the caller's context and do work... (Think of commands
which act as if they were "sourced" right into place).
Internally these changes were not terribly deep, but they did touch a *lot*
of code. This worries me a little with respect to potential new bugs, so I'd
really like to ask you all to try this release if possible.
As far as users outside of the package - I have tried to preserve all of the
existing APIs, so you shouldn't have to make changes. Although we may need to
add more hooks for preserving the call stack info where it is desired.
I'm shooting for wrapping up the current bug list and making the latest version
the beta in a couple of weeks. Then I'd like to wait a few weeks before making
that the final 1.1 release and moving on towards really new features for
bsh 2.0.
P.S.
As an aside, others might want to use the new callstack as a basis to
experiment with a dynamically bound scripting language vs. the statically
bound way in which BeanShell currently locates variables. With some changes
it would be possible to make scope follow the call chain as opposed to the
namespace parent hierarchy.
The code will be in CVS shortly. Any comments are welcome.
Thanks,
Pat Niemeyer | http://sourceforge.net/p/beanshell/mailman/beanshell-developers/?viewmonth=200105&viewday=30 | CC-MAIN-2014-23 | refinedweb | 413 | 72.76 |
Opened 12 years ago
Closed 12 years ago
#1200 closed Bug (Fixed)
_IEFormElementGetCollection example in helpfile
Description
For the command _IEFormElementGetCollection, the following example is given:
[code]
#include <IE.au3>
$oIE = _IECreate ("")
$oForm = _IEFormGetCollection ($oIE, 0)
$oQuery = _IEFormElementGetCollection ($oForm, 1)
_IEFormElementSetValue ($oQuery, "AutoIt IE.au3")
_IEFormSubmit ($oForm)
[/code]
I'm using IE6 because my work uses it. They won't upgrade, and no amount of shaming, begging, or asking will change this. I'm stuck using it. Well, when I test the example, the script doesn't work as designed. It will put "AutoIt IE.au3" in the address bar. If I change $oQuery = _IEFormElementGetCollection ($oForm, 1) to $oQuery = _IEFormElementGetCollection ($oForm, 2), it works as expected.
Is this a problem of IE6 or a bug? If it is a bad example, then the example in _IEFormGetCollection would also need to be fixed.
This is in the Production version. I've looked in the 3.3.1.1 and I see the same examples listed. | https://www.autoitscript.com/trac/autoit/ticket/1200 | CC-MAIN-2021-25 | refinedweb | 164 | 68.87 |
I2C
I2C is a serial communication bus able to address multiple devices along the same 2-wire bus.
Toit exposes the peripheral through the
i2c library.
Each device must have a unique address. The address can be found in the datasheet for the selected peripheral - it's a 7-bit integer.
In the case of the Bosch BME280 sensor, the address is
0x76:
import gpio
import i2c

main:
  bus := i2c.Bus
    --sda=gpio.Pin 21
    --scl=gpio.Pin 22

  device := bus.device 0x76
Frequency
The default frequency of the I2C bus is 400kHz in Toit. This can be changed with an argument to the
i2c.Bus constructor.
Remember that when changing the frequency, the associated pull-up resistors may have to be changed as well. | https://docs.toit.io/peripherals/i2c | CC-MAIN-2022-40 | refinedweb | 127 | 68.16 |
Feature #17288 (open)
Optimize __send__ call with a literal method name
Description
I made a patch to optimize a
__send__ call with a literal method name. This optimization replaces a
__send__ method call with a
send instruction. The patch is available in this pull-request.
With this change, the redefined
__send__ method is no longer called when it is invoked with a literal method name. I think this is no problem, because the following warning message has been displayed for a long time:
$ ruby -e 'def __send__; end'
-e:1: warning: redefining `__send__' may cause serious problems
This change makes the optimized case 5x-6x faster. The benchmark result is below:
$ make benchmark COMPARE_RUBY="../../ruby/build-o3/ruby" ITEM=vm_send.yml
(snip)
# Iteration per second (i/s)

|            |compare-ruby|built-ruby|
|:-----------|-----------:|---------:|
|vm_send     |     18.536M|  113.778M|
|            |           -|     6.14x|
|vm_send_var |     18.085M|   16.595M|
|            |       1.09x|         -|
Updated by Eregon (Benoit Daloze) 12 months ago
I think
obj.send(:foo) should be optimized too, to not create an arbitrary performance difference between
send and
__send__.
send is also used ~15x more frequently than
__send__ in gems, so that would be even more valuable.
Of course, optimizing
send means checking that
obj.method(:send) corresponds to
Kernel#send (somewhat similar to
+).
matz (Yukihiro Matsumoto) suggested an operator for
__send__ in, that might be a cleaner solution.
OTOH, it would not benefit existing code and not be adopted for a few years.
FWIW in TruffleRuby I've been thinking to optimize all 3 sends (
__send__, send, public_send), since they are hidden from backtraces, etc, and optimizing them makes it simpler to remove them from backtraces, etc. It would also reduce the overhead in interpreter for
send/__send__(:private_method) calls (in JITed code, those have no overhead). I would still check the correct method in all cases, changing semantics should IMHO be the last resort.
Updated by shyouhei (Shyouhei Urabe) 12 months ago
Hello, I'm against this optimisation.
obj.__send__(:method) should just be written as
obj.method.
Not against the ability to write
obj.__send__(:method), but
obj.method must be the preferable way and thus must be the fastest thing.
This patch adds complexity just to encourage people to follow the wrong way. This is -1 to me.
Updated by zverok (Victor Shepelev) 12 months ago
shyouhei (Shyouhei Urabe) what about private methods?
Updated by shyouhei (Shyouhei Urabe) 12 months ago
zverok (Victor Shepelev) wrote in #note-3:
shyouhei (Shyouhei Urabe) what about private methods?
Private methods shall not be called at the first place. Period. That is breaking encapsulation.
Again I’m not against the ability to do such things. But we must not encourage people.
Updated by Eregon (Benoit Daloze) 12 months ago
Here are the first 1000 .send() usages in gems:
420 of them use a literal Symbol for the first argument.
Private methods shall not be called at the first place. Period.
It's not as simple, there are many cases where it's reasonable to call private methods.
For instance things like
Module#{include,prepend,alias_method,define_method} used to be private, and
Module#remove_const still is.
Some gems call their own private methods in tests, which seems fair enough.
shyouhei (Shyouhei Urabe) wrote in #note-2:
Not against the ability to write
obj.__send__(:method), but
obj.methodmust be the preferable way and thus must be the fastest thing.
I would think nobody prefers
obj.__send__(:some_method) to
obj.some_method if
some_method is public, so it seems a non-issue to me.
And anyway
obj.some_method would always be as fast or faster than
obj.__send__(:some_method), never slower (that would be a performance bug).
Updated by shyouhei (Shyouhei Urabe) 12 months ago
Eregon (Benoit Daloze) wrote in #note-5:
Here are the first 1000 .send() usages in gems:
420 of them use a literal Symbol for the first argument.
So? I don’t think we should follow that. If people misunderstand what an OOPL is, we would better not confirm that.
Private methods shall not be called at the first place. Period.
It's not as simple, there are many cases where it's reasonable to call private methods.
For instance things like
Module#{include,prepend,alias_method,define_method}used to be private, and
Module#remove_conststill is.
They are/were private for reasons. Private methods can be made public later, but that must have been done with really careful considerations by the author. Not by callee people.
Some gems call their own private methods in tests, which seems fair enough.
Testing private methods! That itself has a bunch of discussions.
But even if we put those topics aside, do we want to optimise such tests? I feel that is very low-priority.
shyouhei (Shyouhei Urabe) wrote in #note-2:
Not against the ability to write
obj.__send__(:method), but
obj.methodmust be the preferable way and thus must be the fastest thing.
I would think nobody prefers
obj.__send__(:some_method)to
obj.some_methodif
some_methodis public, so it seems a non-issue to me.
And anyway
obj.some_methodwould always be as fast or faster than
obj.__send__(:some_method), never slower (that would be a performance bug).
OK. So the point is wether we want people to call a private method or not. I’m still against that. Encapsulation is a very basic OO principle that Ruby employs. I want that be honoured.
The proposed patch is sending a wrong signal.
Updated by marcandre (Marc-Andre Lafortune) 12 months ago
shyouhei (Shyouhei Urabe) wrote in #note-4:
Private methods shall not be called at the first place. Period. That is breaking encapsulation.
I wish that was the case, but Ruby access is not expressive enough for this to be the case.
class Foo class << self private def special # General users should not call or rely on `special` end end def foo # even from inside our own class... self.class.special # this won't work without `send` end class Bar # Bar is a helper class, written by us def foo Foo.special # we want to call special, we need to use `send` end end end
Note: using
protected changes nothing in this example
In so many gems, you have to either mark method as
private and use
send, or else use
# @api private but that's just a comment.
Updated by shevegen (Robert A. Heiler) 12 months ago
Just a few things:
We need to also remember .instance_variable_get() and .instance_variable_set().
Ruby does not quite use a similar "restriction-style" OOP like, say, java. It will
depend a lot on the style and preferences of the ruby user.
Personally I much prefer the more traditional .send() approach that ruby has had, so
zverok's comment about what to do with private stuff, I say .send() it all the way!
Gimme all the goodies; don't restrict me. :D (I tend to use "private" rarely, and
mostly as cue for documentation, rather than as a "I want to restrict this usage 100%").
On the topic of .send(), .public_send() and .send(), I have a few gems that use
.send(). I like .send(). I do not use .public_send() or .send(), so even a new
operator for .send() would not affect me since I would not need it, most likely.
Last but not least, at first I thought .send() will be deprecated, e. g. I misread
this change here by nobu:
Perhaps this one is also related to the issue here? I think some ruby users may
wonder what to use, e. g. .send() or .public_send() or .send(). This should also
be kept in mind. Note that I have no strong opinion on the issue here at all, my
only concern would be whether .send() would be changed, but I think I misread that
when I first saw the git-change.
Updated by shyouhei (Shyouhei Urabe) 12 months ago
marcandre (Marc-Andre Lafortune) Here you are:
class Foo using Module.new { refine Foo.singleton_class do def special end end } def foo self.class.special end class Bar def foo Foo.special end end end
Updated by mrkn (Kenta Murata) 12 months ago
- Is duplicate of Feature #17291: Optimize __send__ call added
Updated by marcandre (Marc-Andre Lafortune) 12 months ago
shyouhei (Shyouhei Urabe) wrote in #note-9:
marcandre (Marc-Andre Lafortune) Here you are:
[snip brilliant code]
0) I was wrong, I retract what I said.
1) My mind is blown. I have always thought of refinements as a way to safely monkey-patch other people's classes, in particular builtin classes. Never as a way to structure access to our own classes. This can also be very fine-grained if desired. This is brilliant shyouhei (Shyouhei Urabe)!
2) Goto 1
I made a quick POC with RuboCop to isolate methods that I've always wanted to isolate and the only issues was mocking internal methods:
Failure/Error: allow(cop).to receive(:complete_investigation).and_return(cop_report) #<Fake::FakeCop:0x00007fed7e1ce530 ...> does not implement: complete_investigation
I'm fine with this, I already try to avoid stub and mocks anyways (10 tests out of 14515 :-) and I'm sure I can find better ways around that.
I also wanted to check any performance impact. I couldn't see any (running tests or running RuboCop). Are there any known circumstances where performance would be affected?
Are there gems using this technique?
Blog posts discussing this?
Updated by Eregon (Benoit Daloze) 12 months ago
How about a private module instead?
class Foo module Helpers def self.special # General users should not call or rely on `special` :special end end private_constant :Helpers def foo Helpers.special end class Bar # Bar is a helper class, written by us def foo Helpers.special end end end p Foo.new.foo # => :special p Foo::Bar.new.foo # => :special p Foo::Helpers.special # => private constant Foo::Helpers referenced (NameError)
Refinements seems rather heavy to me for this case, notably it creates extra modules, and makes initial lookups slower (once cached it shouldn't matter much).
For an uncached call (e.g.
refine Object and many different receivers at some call site), I think the overhead would be noticeable.
Also if the refinements need to be used in multiple files, the module passed to
using needs to be named, and stored in a private constant.
If done so, there seem little point to
using PrivateHelpers; ...; self.class.foo vs
PrivateHelpers.foo, except maybe for instance methods added on existing classes.
But then one could simply use a private method on
Foo to begin with.
I'm probably biased against refinements because the semantics around refinements + super or eval are fairly messy.
Updated by shyouhei (Shyouhei Urabe) 12 months ago
JFYI I learned the use of "immediate" refinement from rails.
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/17288 | CC-MAIN-2021-43 | refinedweb | 1,789 | 76.11 |
- .19 Wrap-Up
In this chapter, we discussed the difference between non-static and static methods, and we showed how to call static methods by preceding the method name with the name of the class in which it appears and the member-access operator (.). You saw that the Math class in the .NET Framework Class Library provides many static methods to perform mathematical calculations. We also discussed static class members and why method Main is declared static.
We presented several commonly used Framework Class Library namespaces. You learned how to use operator + to perform string concatenations. You also learned how to declare constants with the const keyword and how to define sets of named constants with enum types. We demonstrated simulation techniques and used class Random to generate sets of random numbers. We discussed the scope of fields and local variables in a class. You saw how to overload methods in a class by providing methods with the same name but different signatures. You learned how to use optional and named parameters.
We showed the concise notation of C# 6’s expression-bodied methods and read-only properties for implementing methods and read-only property get accessors that contain only a return statement. We discussed how recursive methods call themselves, breaking larger problems into smaller subproblems until eventually the original problem is solved. You learned the differences between value types and reference types with respect to how they’re passed to methods, and how to use the ref and out keywords to pass arguments by reference.
In Chapter 8, you’ll maintain lists and tables of data in arrays. You’ll see a more elegant implementation of the app that rolls a die 60,000,000 times and two versions of a GradeBook case study. You’ll also access an app’s command-line arguments that are passed to method Main when a console app begins execution. | https://www.informit.com/articles/article.aspx?p=2731935&seqNum=19 | CC-MAIN-2021-39 | refinedweb | 317 | 53.41 |
Hi, I'm implementing a shopping system.
---- webware version CVS-20041011 ----
1. I have to migrate a huge amount of product data from several sources
especially about 2000000 products in csv-format(books), a firebird
database(bookshop stock db)into only one stable db.
I want to use postgre for that purpose.
First I wrote a prototype without middlekit and used the ISBN / EAN
as my primary key for the products table. After writing my Classes.csv I
realized, that middlekit automatically creates a sequence and a primary key
in PostgreSQLGenerator.py:
def primaryKeySQLDef(self, generator):
return "\t%s integer not null primary key default
nextval('%s'),\n" % (self.sqlSerialColumnName(), self.seqName())
Question:
Would it be be more useful to use the implemented primary key format
and hold the ean/isbn as a "long type secondary key" / create an index
on that collumn or
How can I tell midllekit to use a long key, not creating
2. It would be nice having more control about the table indexing
procedure. I'm thinking about write some new code, but in the moment I
don't know where to begin, how middlekit could work with these
modifications, and if someone of you have done some work about it...
thanks
Stefan | http://sourceforge.net/mailarchive/forum.php?thread_name=416DDFDA.8000404%40t-online.de&forum_name=webware-discuss | CC-MAIN-2013-48 | refinedweb | 208 | 53.81 |
simple python module for pushover.net
Project description
A simple Python module for Pushover.net
Example
See example.py for full example:
from coinshot import Coinshot, CoinshotException coinshot_object = Coinshot(application_key[, user_key]) coinshot_object.push(message[, title, user_key, device, url, url_title, priority, timestamp])
shoot
If all you’re looking for is an app to send pushover notifications through (not a library for doing it programmatically, look at bin/shoot.
Usage
Usage: shoot [options] message Options: -h, --help show this help message and exit -a APP_KEY, --application-key=APP_KEY Application key provided by pushover.net -u USER_KEY, --user-key=USER_KEY User key provided by pushover.net -t TITLE, --title=TITLE Notification Title
Notes
- user_key is required either during object initialization or when calling push. If a user_key is provided for both, the one passed to push takes precedent.
- url_title can be passed without url, but will be ignored
What’s the deal with the name?
The name coinshot is derived from a class of characters from Brandon Sanderson’s Mistborn book series. Coinshot Allomancers “burn” steel to Push nearby metals.
Here’s the Wikipedia page
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/coinshot/ | CC-MAIN-2022-05 | refinedweb | 211 | 57.57 |
Python
In this article we will explore how to quickly find the MX records of a large amount of domains using Python. In recent months we have often found ourselves turning to Python to accomplish a variety of tasks – in our case mostly to quickly process and manipulate data. Python is a great language, easy and fun to learn but insanely powerful. You can also extend the capabilities of Python with numerous libraries out there. I will put references to some good learning resources at the end of the article.
MX Records
Mail Exchange (MX) records are DNS records that allow mail to be routed to the right receiving server. When a mail client (like Outlook or Android Mail or even a web client) wants to send email to your address it has to ‘know’ where your mailbox resides. This could be Exchange Online (the best choice by the way, wink wink) or any other service.
By manually analyzing the MX record for a domain with a tool like dig, nslookup or online tools like mxtoolbox you can also find out where the mailbox resides. At Kernel IT we often do this to validate a cutover. This is the last step for example if migrate accounts from G-Suite to Exchange Online.
Coding and Infrastructure
Another interesting trend we notice in our projects is that the lines between infrastructure and application development have blurred. Even if you’re a hardcore engineer who likes physical networking and servers is now necessary to read-out or construct JSON, XML, Powershell scripts or Windows Batch files. I mention all those in one line but be assured that they are wildly different tools and objects.
But back to the Python script:
import dns.resolver import pandas as pd import time import sys pd.__version__ dns.resolver.default_resolver = dns.resolver.Resolver(configure=False) dns.resolver.default_resolver.nameservers = ['8.8.8.8'] filename = 'emails.txt' #loading the file. This is just a simple file with email addresses consecutive lines try: print ("Trying to open file ", filename) with open(filename) as f: domains = [line.rstrip() for line in f] except: print("Error while loading", filename) sys.exit("IO error") else: print (len(domains), "addresses loaded...starting mx lookup.\n\n") time.sleep(1) mxRecords = [] emailAddresses = [] #we use domain.split("@",1)[1] to seperate the domain from the email addresses #the try-catch is necessary to avoid stopping th execution when a lookup fails. for domain in domains: try: answers = dns.resolver.query(domain.split("@",1)[1], 'MX') except: print ("some error") mxRecord = "some error" else: mxRecord = answers[0].exchange.to_text() finally: mxRecords.append(mxRecord) emailAddresses.append(domain) print (domain) time.sleep(.200) #a 200 ms pause is added for good measure #the rest of the program uses pandas to export everything neatly to CSV. It takes to lists "mxRecords" and "emailAddresses" and converts it to a dataframe. df = pd.DataFrame({"EmailAddress":emailAddresses, "MXRecords":mxRecords}) print ("\n", str(len(emailAddresses)), "records processed") df.to_csv(filename, index=False)
The most interesting part of this project was the library “dnspython”. This library does all the lookups and you don’t have to worry about system calls.
Input:
info@kernel.sr info@amazon.com info@google.com
Output:
So in conclusion, Python is a fantastic language for data processing with intuitive constructions. If you combine this with existing libraries you can easily build powerful applications.
References: | https://kernel.sr/find-mx-records-in-large-batches-with-python/hharpal/ | CC-MAIN-2022-21 | refinedweb | 565 | 56.96 |
Back to article
Even if you're not a database wonk, you've probably been hearing some
talk about this newfangled thing called CouchDB. For one thing,
the new Ubuntu desktop uses it for things like the addressbook
and Tomboy notes. So what is CouchDB, anyway?
figure 1
If you're familiar with old-style relational databases, like MySQL,
Postgres and Oracle ... forget all of it.
Couch is part of the new "NoSQL" movement, where your data, rather
than living in one massive relational database on a single server, may
be distributed on many machines across the web. Or it might be sitting
in a DesktopCouch database on your local desktop machine.
To get started with couch, first install couchdb. It's probably
available through your distro's package manager.
The python-couchdb library makes it easy to talk to CouchDB
from Python.
The first thing we need to do is create a database.
You can do that from the python interpreter, pointing python-couchdb
at your couchdb server -- typically.
Let's make a database of restaurants:
$ python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import couchdb
>>> server = couchdb.client.Server('')
>>> db = server.create('restaurants')
Now you need some data. In couchdb, a hunk of data is called a
document. The name is a bit misleading: a Couch document is a single
object, like a restaurant, not anything you'd normally think of as a document.
Your python objects can inherit from Document.
Specify a few attributes and their types,
then add the document to the database:
from couchdb.schema import Document
import datetime
class Restaurant(Document) :
type = couchdb.schema.TextField()
name = couchdb.schema.TextField()
last_visited = couchdb.schema.DateTimeField()
r = Restaurant(type="restaurant", name="Alice's Restaurant",
last_visited=datetime.date(1965, 11, 25))
r.store(db)
An invaluable tool when you're working with CouchDB is its web
interface, called Futon. Just point your browser to
(don't forget the underscore!) and you'll see a screen like Figure 1.
It shows that you have one database, called restaurants.
figure 2
Click on that database to see the documents inside it -- only one
so far. Clicking on the key for the document (Figure 2) shows all the
Field/Value pairs inside the document: the name and last_visited
time you assigned, plus two automatically assigned fields, _id and
_rev, which become important when you change a document.
It's always helpful to keep an eye on Futon when you're developing a
CouchDB application. You can add and remove documents from Futon,
delete the database and start over,
and even do queries, as you'll see shortly. | http://www.linuxplanet.com/linuxplanet/print/7179 | CC-MAIN-2016-22 | refinedweb | 456 | 68.06 |
Is there a full implementation? How is the library used, where is its website?
views:17709
answers:8
how to use xpath in python
The lxml package supports xpath. It seems to work pretty well, although I've had some trouble with the self:: axis. There's also Amara, but I haven't used it personally.
You didn't say what platform you're using, however if you're on Ubuntu you can get it with
sudo apt-get install python-xml. I'm sure other Linux distros have it as well.
If you're on a Mac, xpath is already installed but not immediately accessible. You can set
PY_USE_XMLPLUS in your environment or do it the Python way before you import xml.xpath:
if sys.platform.startswith('darwin'): os.environ['PY_USE_XMLPLUS'] = '1'
In the worst case you may have to build it yourself. This package is no longer maintained but still builds fine and works with modern 2.x Pythons. Basic docs are here.
The latest version of elementtree supports XPath pretty well. Not being an XPath expert I can't say for sure if the implementation is full but it has satisfied most of my needs when working in Python. I've also use lxml and PyXML and I find etree nice because it's a standard module.
NOTE: I've since found lxml and for me it's definitely the best XML lib out there for Python. It does XPath nicely as well (though again perhaps not a full implementation).
libxml2 has a number of advantages:
- Compliance to the spec
- Active development and a community participation
- Speed. This is really a python wrapper around a C implementation.
- Ubiquity. The libxml2 library is pervasive and thus well tested.
Downsides include:
- Compliance to the spec. ElementTree ( doc = ElementTree(file='tst.xml') for e in mydata.findall('/foo/bar'): print e.get('title').text
Use LXML. LXML uses the full power of libxml2 and libxslt, but wraps them in more "Pythonic" bindings than the Python bindings that are native to those libraries. As such, it gets the full XPath 1.0 implementation. Native ElemenTree supports a limited subset of XPath, although it may be good enough for your needs.
Another option is py-dom-xpath, it works seamlessly with minidom and is pure Python so works on appengine.
import xpath xpath.find('//item', doc)
Another library is 4Suite:
I do not know how spec-compliant it is. But it has worked very well for my use. It looks abandoned.
You can use:
PyXML:
from xml.dom.ext.reader import Sax2 from xml import xpath doc = Sax2.FromXmlFile('foo.xml').documentElement for url in xpath.Evaluate('//@Url', doc): print url.value
libxml2:
import libxml2 doc = libxml2.parseFile('foo.xml') for url in doc.xpathEval('//@Url'): print url.content | http://ansaurus.com/question/8692-how-to-use-xpath-in-python | CC-MAIN-2019-35 | refinedweb | 467 | 69.79 |
How much does it cost to create sample React VR app
In this article our dev team is going to showcase the possibilities of React VR by Oculus. We'll make a simple React VR example, a 360-degree virtual reality tour to view in a web browser. With this technology you can even try out virtual reality without a VR headset, which is awesome.
To begin with, let us show you the project we did with React VR, and then elaborate on the process. This is how the final React VR sample of a virtual reality experience will look in your browser.
But first things first. What is React VR?
What is React VR?
React VR is a JavaScript framework that has been developed by Oculus with the aim of creating web-based virtual reality applications. It is part of the bigger React family: React is a JavaScript library by Facebook for building interactive UIs for web and mobile environments, while React Native is used to build native mobile apps in JavaScript.
According to Oculus, React VR projects are going to run in a VR web browser named Carmel. A pre-packaged release of React VR is available for download with the source code on GitHub, so anyone can preview it and give feedback.
Thus, it is a great tool for any web and/or VR developer. Why? Because you can start making VR web apps even without owning a VR headset.
What is WebVR + QUIZ on Virtual Reality
WebVR is an experimental API that makes it easier to create VR experiences for the web. Some call it a VR web browser, in the sense that anyone can try virtual reality directly in a browser, with no need for VR devices like Google Cardboard, Oculus Rift or HTC Vive.
It is similar to the 360-degree videos you've seen on YouTube, but, most notably, you can navigate through it as well. WebVR also works in pairing with the React VR library and the Carmel VR browser.
Now, to spice things up a little bit, we've prepared a little quiz to determine how familiar you are with VR technology. Answer 10 basic questions and press Submit for the results.
Our demo project
Here we will be creating a small VR web application – a React VR example of a virtual reality tour with 360-degree view and navigation. We'll use the sample panoramic images by Oculus, which will look something like this in your browser:
There will be a few locations, like restaurants and coffee shops, for users to see in an all-round view. We will also add rotation and buttons (arrows) to point out the possible directions to enter other locations. As a result, it will serve as a short and plain demonstration of how React VR works and what it can do.
We'll go step by step, showing code samples from our React VR project. At the end you'll also find our team's estimate of how much it costs to create VR web apps with React VR. The final version of our VR tour for web browsers will consist of 5 locations, much like these:
Technical requirements
To create a React VR app, first of all you need Node.js installed. This JavaScript runtime powers code compilation and runs a local server. You will most probably also need NPM, the Node.js package manager with lots of libraries and reusable code, and possibly Three.js as an additional 3D library.
With the WebVR and WebGL APIs we can already render 3D imagery within a web browser, and no special VR devices or headsets are needed to build a React VR app – just the basics are enough. What you actually do need along with React VR for making web apps is:
- A PC/laptop on Windows
- A compatible browser
- A latest Node.js version
- A VR headset (optional)
What are the compatible VR web browsers?
- Chrome for Android/iOS and Windows
- Firefox Nightly
- Carmel VR, a VR web browser by Oculus
- Samsung browser for Gear VR
- WebVR Polyfill for Google Cardboard devices
The React VR framework is usually layered this way: React VR > React runtime > OVRUI > Three.js > WebVR and the browser. OVRUI, by the way, is a library that helps with geometric types and objects for building user interfaces for VR web apps.
React VR project setup
To make a VR web app like our React VR example, first we have to do a short setup with the tools provided by Oculus. To start, we install the React VR CLI tool using NPM:
npm install -g react-vr-cli
With it we create a new directory for web application and name it TMExample for our React VR app project:
react-vr init TMExample
This may take some time, so do not worry. Then we place cd into our directory:
cd TMExample
To check if it’s working, we test a local server with:
npm start
After all of this is done, open the following URL in your web browser:. Clearly, give it few moments to initialize the application, and you should see the animated VR environment like this:
Now you good to go with building your own web based virtual reality apps with React VR. Let’s explain how we built our simple VR tour app.
Creating a VR tour for web
The structure of our app’s directory is as follows:
+-node_modules
+-static_assets
+-vr
-.gitignore
-.watchmanconfig
-index.vr.js
-package.json
-postinstall.js
-rn-cli-config.js
The code of the web app lives in the index.vr.js file, while the static_assets directory hosts external resources (images, 3D models). Every React VR app has to be registered via AppRegistry.registerComponent, which bundles the application and readies it to run. The next steps to highlight in our React VR app project are compiling the 2 main files.
Index.vr.js file
In the constructor we define the data for the VR tour app: scene images, buttons to switch between scenes (with X-Y-Z coordinates), and values for the animations. All the images are kept in the static_assets folder.
constructor (props) {
  super(props);
  this.state = {
    scenes: [
      {scene_image: 'initial.jpg', step: 1, navigations: [{step: 2, translate: [0.73, -0.15, 0.66], rotation: [0, 36, 0]}]},
      {scene_image: 'step1.jpg', step: 2, navigations: [{step: 3, translate: [-0.43, -0.01, 0.9], rotation: [0, 140, 0]}]},
      {scene_image: 'step2.jpg', step: 3, navigations: [{step: 4, translate: [-0.4, 0.05, -0.9], rotation: [0, 0, 0]}]},
      {scene_image: 'step3.jpg', step: 4, navigations: [{step: 5, translate: [-0.55, -0.03, -0.8], rotation: [0, 32, 0]}]},
      {scene_image: 'step4.jpg', step: 5, navigations: [{step: 1, translate: [0.2, -0.03, -1], rotation: [0, 20, 0]}]}
    ],
    current_scene: {},
    animationWidth: 0.05,
    animationRadius: 50
  };
}
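The scenes array above is effectively a small navigation graph: each navigations[].step must point at an existing scene, or a click would lead nowhere. As a quick sanity check (not part of the app itself – the function name validateScenes is ours), the graph can be verified with plain JavaScript:

```javascript
// Hypothetical sanity check for the tour data: every navigation arrow
// must reference a step that exists in the scenes array.
function validateScenes(scenes) {
  var steps = scenes.map(function (s) { return s.step; });
  return scenes.every(function (scene) {
    return scene.navigations.every(function (nav) {
      return steps.indexOf(nav.step) !== -1;
    });
  });
}
```

The five demo scenes form the cycle 1 → 2 → 3 → 4 → 5 → 1, so the check passes for them.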
Then we changed the image output so that it is driven by the state defined in the constructor.
Navigational buttons
In each scene we placed transition buttons for navigating the tour, taking the data from state. We subscribed to the onInput event to handle switching between scenes, binding this to the handlers as well.
onNavigationClick(item, e){
  if(e.nativeEvent.inputEvent.eventType === "mousedown" && e.nativeEvent.inputEvent.button === 0){
    var new_scene = this.state.scenes.find(i => i['step'] === item.step);
    this.setState({current_scene: new_scene});
    postMessage({ type: "sceneChanged"})
  }
}

sceneOnLoad(){
  postMessage({ type: "sceneLoadStart"})
}

sceneOnLoadEnd(){
  postMessage({ type: "sceneLoadEnd"})
}

this.sceneOnLoad = this.sceneOnLoad.bind(this);
this.sceneOnLoadEnd = this.sceneOnLoadEnd.bind(this);
this.onNavigationClick = this.onNavigationClick.bind(this);
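Stripped of the React-specific parts (setState, postMessage), the switching logic is just a lookup by step number. A minimal sketch (nextScene is our own name, not part of the app):

```javascript
// Find the scene for the requested step; fall back to the current scene
// if the step does not exist (find() would return undefined).
function nextScene(scenes, currentScene, targetStep) {
  var found = scenes.find(function (s) { return s.step === targetStep; });
  return found || currentScene;
}
```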
Button animation
Below is the code for the navigation button animations. The animation works by gradually increasing the button size, using the conventional requestAnimationFrame.
this.animatePointer = this.animatePointer.bind(this);

animatePointer(){
  var delta = this.state.animationWidth + 0.002;
  var radius = this.state.animationRadius + 10;
  if(delta >= 0.13){
    delta = 0.05;
    radius = 50;
  }
  this.setState({animationWidth: delta, animationRadius: radius})
  this.frameHandle = requestAnimationFrame(this.animatePointer);
}

componentDidMount(){
  this.animatePointer();
}

componentWillUnmount(){
  if (this.frameHandle) {
    cancelAnimationFrame(this.frameHandle);
    this.frameHandle = null;
  }
}
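The animation itself is a simple state-stepping rule: grow the width by 0.002 and the radius by 10 on every frame, and wrap around once the width reaches 0.13. Extracted into a pure function (our own sketch, using the same constants):

```javascript
// One frame of the pointer animation, as a pure function of the previous state.
function stepAnimation(state) {
  var delta = state.animationWidth + 0.002;
  var radius = state.animationRadius + 10;
  if (delta >= 0.13) { // arrow got too big -> wrap around to the start values
    delta = 0.05;
    radius = 50;
  }
  return { animationWidth: delta, animationRadius: radius };
}
```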
In the componentWillMount function we set the current scene and subscribe to the message event for data exchange with the main thread. We do it this way because a React VR component runs in a separate thread.
In the onMainWindowMessage function we process only one message, with the newCoordinates key; we'll explain why later. Similarly, we subscribed to the onInput event to handle arrow turns.
componentWillMount(){
  window.addEventListener('message', this.onMainWindowMessage);
  this.setState({current_scene: this.state.scenes[0]})
}

onMainWindowMessage(e){
  switch (e.data.type) {
    case 'newCoordinates':
      var scene_navigation = this.state.current_scene.navigations[0];
      this.state.current_scene.navigations[0]['translate'] = [e.data.coordinates.x, e.data.coordinates.y, e.data.coordinates.z]
      this.forceUpdate();
      break;
    default:
      return;
  }
}

rotatePointer(nativeEvent){
  switch (nativeEvent.keyCode) {
    case 38:
      this.state.current_scene.navigations[0]['rotation'][1] += 4;
      break;
    case 39:
      this.state.current_scene.navigations[0]['rotation'][0] += 4;
      break;
    case 40:
      this.state.current_scene.navigations[0]['rotation'][2] += 4;
      break;
    default:
      return;
  }
  this.forceUpdate();
}
Arrow turns are done with ↑→↓ alt keys, for Y-X-Z axes respectively.
Furthermore, see and download the whole index.vr.js file as part of our React VR example.
Client.js file
Moving further into our React VR example of virtual reality web applications, we’ve added the code below into init function. The goal is processing of ondblclick, onmousewheel and message events, where the latter is in rendering thread for message exchanges. Also, we’ve kept a link to vr and vr.player._camera objects.
window.playerCamera = vr.player._camera; window.vr = vr; window.ondblclick= onRendererDoubleClick; window.onmousewheel = onRendererMouseWheel; vr.rootView.context.worker.addEventListener('message', onVRMessage);
We’ve introduced the onVRMessage function for zoom returning to default when scenes change. Also, we have added the loader when scene change occurs.
function onVRMessage(e) { switch (e.data.type) { case 'sceneChanged': if (window.playerCamera.zoom != 1) { window.playerCamera.zoom = 1; window.playerCamera.updateProjectionMatrix(); } break; case 'sceneLoadStart': document.getElementById('loader').style.display = 'block'; break; case 'sceneLoadEnd': document.getElementById('loader').style.display = 'none'; break; default: return; } }
onRendererDoubleClick function for 3D-coordinates calculation and sending messages to vr component to change arrow coordinates. The get3DPoint function is custom to our web VR application and looks like this:
function onRendererDoubleClick(){ var x = 2 * (event.x / window.innerWidth) - 1; var y = 1 - 2 * ( event.y / window.innerHeight ); var coordinates = get3DPoint(window.playerCamera, x, y); vr.rootView.context.worker.postMessage({ type: "newCoordinates", coordinates: coordinates }); }
Switch to mouse wheel
We’ve used the onRendererMouseWheel function for switching zoom to a mouse wheel.
function onRendererMouseWheel(){ if (event.deltaY > 0 ){ if(window.playerCamera.zoom > 1) { window.playerCamera.zoom -= 0.1; window.playerCamera.updateProjectionMatrix(); } } else { if(window.playerCamera.zoom < 3) { window.playerCamera.zoom += 0.1; window.playerCamera.updateProjectionMatrix(); } } }
Exporting coordinates
Then we’ve utilized Three.js to work with 3D-graphics. In this file we’ve rather conveyed one function to export screen coordinated to world coordinates.
import * as THREE from 'three'; export function get3DPoint(camera,x,y){ var mousePosition = new THREE.Vector3(x, y, 0.5); mousePosition.unproject(camera); var dir = mousePosition.sub(camera.position).normalize(); return dir; }
Check and download the whole client.js file on Github demo. There’s probably no need to explain how the cameraHelper.js file works, as it is plain simple, and you can download it as our ReactVR demo as well.
And this is it, this is how we’ve created our short VR tour using React VR framework. The source code of this whole project is available for download at our demo React VR GitHub.
Also, bear in mind, that this was the demonstration of general possibilities of React VR. It has already made VR web apps possible thanks to Facebook efforts, and with official full release of React VR there will be more.
The cost of making a React VR app
Obviously, calculating the cost to make web-based virtual reality applications using React VR can not be precise. It totally depends on the skill-level of chosen ReactJS developers. It is a grey(ish) area at this point in time. This React VR framework by Oculus is still pending for official release and is expected to be enhanced considerably yet. And yet, as we’ve shown in our VR tour demo, making VR web apps is already possible.
And we’ve only used official free samples from React VR library. To build your own project you would have to install a local server and CMS, as well as create 360-degree images and three-dimensional objects. The cost estimate for such VR web app here at ThinkMobiles we imply a $40 hourly rate.
VR web app development: $4.000 – $10.000, 1-3 weeks.
In conclusion
Note, that cost of any custom React VR app for web would depend on the amount of 3D-imagery and backend buildup. On our side, 1 developer was engaged in this task. And yet, it took about one week to create this VR tour from zero.
The principal challenge of our React VR app was data exchange between the main and virtual reality component thread. We’ve actually put together 5 locations to view and switch between for this short web VR tour.
Finally, you can view our VR tour in full, and read more of our related articles about:
Let's Build Your VR App?
I agree to share this request with 10 development companies and get more offers
What’s essentially a slideshow of 360 photos is barely a “VR App”. There’s off the shelf packages that can create this with a few clicks.
A more meaningful comparison would have been something interactive involving real 3D geometry, textures and some form of locomotion.
I don’t want to rehash the old “poisoning the well” argument but we’ve got to start raising people’s expectations of what VR is – even mobile VR.
Nice tutorial.
Faced an issue though. I was trying your demo link () in a Gear VR.
I could not see the default white dot (VR pointer) while in VR mode in your scene.
Tapping while gazing at the VR markers (white circles with arrows) does not take me to a new scene either.
Am I missing something? How do I navigate inside the VR tour via Gear VR?
The same works fine while I am viewing in a web browser, and using mouse.
Ok, thanks let me check
Hot topics | https://thinkmobiles.com/blog/how-to-make-react-vr-app/ | CC-MAIN-2018-34 | refinedweb | 2,399 | 50.33 |
Field containing a plane equation. More...
#include <Inventor/fields/SoSFPlane.h>
Field containing a plane equation.
A field containing a plane equation (an SbPlane).
SoSFPlanes are written to file as four floating point values separated by whitespace. The first three are the normal direction of the plane, the fourth is the distance of the plane from the origin (in the direction of the normal).
SbPlane, SoField, SoSField, SoMFPlane
Default constructor.
Destructor.
Returns the type identifier for this specific instance.
Implements SoTypedObject.
Returns this field's value.
Sets this field to newValue.
Copy from another field of same type.
Sets this field to newValue. | https://developer.openinventor.com/refmans/latest/RefManCpp/class_so_s_f_plane.html | CC-MAIN-2021-25 | refinedweb | 103 | 62.34 |
First of all, I'm a total beginner which is the reason that probably my code contains a few simple and dumb mistakes...but how you see I'm working on it ;)
My task is to write a program that contains a queue which will be returned in reversed order. But my problem is that some restrictions are made and at this moment I'm totally confused :( Maybe someone can help me a little.
Two functions are given:
def enqueue(self,z,q): # z is an element, q = queue return q.append(z) def dequeue(self,q): return q.pop(0)
Further I'm only allowed to use one variable to buffer one element of queue. Calling len(q) and using controlvariables and parameters within loops is permitted.
So here is my sourcecode:
class Queue: def __init__(self): self.z = "" self.q = [] def enqueue(self,z,q): return q.append(z) def dequeue(self,q): return q.pop(0) def queueReverse(q): n = 1 print len(q) for x in range(len(q)-2): for x in range((len(q)-1)-n): enqueue(dequeue(q),q) a = dequeue(q) for x in range(n): enqueue(dequeue(q),q) enqueue(a,q) n += 1 enqueue(dequeue(q),q) return q #--------------- main ---------------- test = Queue() q = [1,2,3,4,5,6,7] test.queueReverse() print q
So, in advance, thanx a lot!
Greetz | https://www.daniweb.com/programming/software-development/threads/157929/need-help-with-queue | CC-MAIN-2018-30 | refinedweb | 231 | 65.12 |
How to reduce app load time in ionic2 ios app?
It takes 16 sec. to initialize the app.
Ionic App takes too long to start
How to reduce app load time in ionic2 ios app?
[TUTORIAL] Here are solutions to few common problems I encountered
Have you build your app with --prod?
Like:
ionic cordova build ios --prod
– prod = plz ionic optimize and shrink my code to allow my app to be loaded faster
Do you have any logs to indicate what’s taking time? Have you looked at component lazy loading?
After using --prod my app takes 16sec to load.
Yes i have Log from my xcode when i’m running my app in device after building it from terminal.
2017-07-10 12:50:55.195 BenzineKostern[202:8142] DiskCookieStorage changing policy from 2 to 0, cookie file:
2017-07-10 12:50:57.962 BenzineKostern[202:8142] Apache Cordova native platform version 4.4.0 is starting.
2017-07-10 12:50:57.965 BenzineKostern[202:8142] Multi-tasking -> Device: YES, App: YES
2017-07-10 12:50:58.250 BenzineKostern[202:8142] Using UIWebView
2017-07-10 12:50:58.258 BenzineKostern[202:8142] [CDVTimer][handleopenurl] 0.675976ms
2017-07-10 12:50:58.273 BenzineKostern[202:8142] Unlimited access to network resources
2017-07-10 12:50:58.274 BenzineKostern[202:8142] [CDVTimer][intentandnavigationfilter] 15.762985ms
2017-07-10 12:50:58.275 BenzineKostern[202:8142] [CDVTimer][gesturehandler] 0.419021ms
2017-07-10 12:50:58.278 BenzineKostern[202:8142] [CDVTimer][admob] 2.825975ms
2017-07-10 12:50:58.377 BenzineKostern[202:8142] [CDVTimer][splashscreen] 98.049998ms
2017-07-10 12:50:58.412 BenzineKostern[202:8142] [CDVTimer][statusbar] 34.932971ms
2017-07-10 12:50:58.428 BenzineKostern[202:8142] [CDVTimer][keyboard] 14.663994ms
2017-07-10 12:50:58.428 BenzineKostern[202:8142] [CDVTimer][TotalPluginStartup] 171.472967ms
2017-07-10 12:50:59.563 BenzineKostern[202:8142] Resetting plugins due to page load.
2017-07-10 12:51:00.582 BenzineKostern[202:8142] THREAD WARNING: [‘Device’] took ‘12.613037’ ms. Plugin should use a background thread.
2017-07-10 12:51:16.051 BenzineKostern[202:8267] CFNetwork SSLHandshake failed (-9806)
2017-07-10 12:51:17.474 BenzineKostern[202:8142] deviceready has not fired after 5 seconds.
2017-07-10 12:51:17.475 BenzineKostern[202:8142] Channel not fired: onDOMContentLoaded
2017-07-10 12:51:17.476 BenzineKostern[202:8142] WARN: Ionic Native: deviceready did not fire within 5000ms. This can happen when plugins are in an inconsistent state. Try removing plugins from plugins/ and reinstalling them.
2017-07-10 12:51:17.476 BenzineKostern[202:8142] Ionic Native: deviceready event fired after 13684 ms
2017-07-10 12:51:19.834 BenzineKostern[202:8142] Finished load of:
2017-07-10 12:51:19.867 BenzineKostern[202:8142] THREAD WARNING: [‘Keyboard’] took ‘18.012939’ ms. Plugin should use a background thread.
2017-07-10 12:51:20.008 BenzineKostern[202:8142] THREAD WARNING: [‘StatusBar’] took ‘137.495850’ ms. Plugin should use a background thread.
2017-07-10 12:51:20.019 BenzineKostern[202:8142] createBanner
2017-07-10 12:51:22.757 BenzineKostern[202:8276] To get test ads on this device, call: request.testDevices = @[ @“18eb32d77e925f20209d35acc9149639” ];
2017-07-10 12:51:23.233 BenzineKostern[202:8142] showBanner
2017-07-10 12:51:23.238 BenzineKostern[202:8142] statusbar offset:20.000000, overlap:0, ad position:8, x:0, y:0
2017-07-10 12:51:23.271 BenzineKostern[202:8142] window, resize,
2017-07-10 12:51:23.277 BenzineKostern[202:8142] THREAD WARNING: [‘AdMob’] took ‘44.109619’ ms. Plugin should use a background thread.
Which native plugins are you using in this app ? Which ones are you loading in the app.module ?
I’m using AdMob to load the banner ads in my app.
other then that i’m using default plugins.
My plugins are listed below:
“cordova-plugin-admobpro” spec="^2.29.21"
“cordova-plugin-console” spec="^1.0.5"
“cordova-plugin-device” spec="^1.1.4"
“cordova-plugin-splashscreen” spec="^4.0.3"
“cordova-plugin-statusbar” spec="^2.2.2"
“cordova-plugin-whitelist” spec="^1.3.1"
“cordova-plugin-x-toast” spec="^2.6.0"
“ionic-plugin-keyboard” spec="^2.2.1"
Can you suggest me sollution for it?
Try a different device, see if it has the same problem.
Try on an emulator, see if same problem.
Try to debug the problem, google for the error message etc.
I have already done that, when i build it again the problem of CFNetwork SSLHandshake failed was resolved.
nopp it’s still take too long to load.
Following command solved my problem:
ionic cordova build ios --prod --aot --minifyjs --minifycss --optimizejs
now it takes only 3-4 sec to start.
You should additionally change your main.ts to the following
import {platformBrowserDynamic} from “@angular/platform-browser-dynamic”;
import {enableProdMode} from “@angular/core”;
import {AppModule} from “./app.module”;
enableProdMode();
platformBrowserDynamic().bootstrapModule(AppModule);
To enable prod mode. Will give you a nice boost
u mean we need not use this in main.ts enableProdMode(); ?? is this included when build with --prod ?, pls clear | https://forum.ionicframework.com/t/ionic-app-takes-too-long-to-start/97546 | CC-MAIN-2018-51 | refinedweb | 862 | 64.17 |
django_quick_test 0.3.1
Django test runner that separates test database creation and test running
django-quick-test
Django quick test is a custom nose based test runner that separates testing and test related database manipulations.
Usualy running this command instead of the default manage.py test will give you 10-15 times speed boost. So you will be able to run your test suite in seconds instead of minutes.
Installation
1. Install the package with pip install django_quick_test or alternatively you can download the tarball and run python setup.py install
- Add quick_test to your INSTALLED_APPS list in settings.py
INSTALLED_APPS = ('quick_test')
- Add your test database details in settings.py.
DATABASES = { 'default':{ 'ENGINE':'', }, 'test':{ 'ENGINE': '', 'NAME': 'test_database', } }
- And finally replace the default Django test runner with this one. Again in settings.py:
TEST_RUNNER = 'quick_test.testrunner.NoseTestSuiteRunner'
Usage
django-quick-test assumes that you have created your test database manualy and you have loaded the required test data(fixtures)
Commands you have to run before using the command
python manage.py syncdb --database=test python manage.py migrate --database=test
and finaly run your tests with
python manage.py quick_test
Additional notes
If you are using the default Django TestCase class you have to ovewrite the _pre_setup method which is executed automatically when you call the class. If you don't overwrite it the quick_test command will still work, but your test data will be lost. Even if you don't have any fixtures in the database overwriting this method will give you additional speed boost.
from django.test import TestCase class SimpleTest(TestCase) def _pre_setup(self): # this method flushes the database and installs # the fixtures defined in the fixtures=[] list # we are doing everything manually, so we don't # really need it # these are the results I get with 1 test before and after ovewriting the method # Before -> Ran 1 test in 2.336s # After -> Ran 1 test in 0.004s pass def test_basic_addition(self): self.assertEqual(1 + 1, 2)
Requirements
Django 1.2+
nose
- Author: Martin Rusev
- Keywords: django,tests,nose,
- License: BSD
- Categories
- Package Index Owner: martin.rusev
- DOAP record: django_quick_test-0.3.1.xml | http://pypi.python.org/pypi/django_quick_test/0.3.1 | crawl-003 | refinedweb | 358 | 56.76 |
Machine Learning Taught Me High School Math—All Over Again
or, Regression for Amateurs (and Humanities Majors)
“It’s just a line.”
There’s a chance I paraphrased that a bit, but it’s something my instructor quipped to my cohort as we started the dreaded “statistical analysis” module of my (ongoing) data science boot camp. Was it a deliberately reductive non-sequitur about the mathematical underpinnings of linear regression, intended as a callback to our first experiences with one-variable equations in middle-school algebra? Sure, but it was also a kind of reassurance: that these concepts, stripped down to their skeletons, are not beyond the scope of my understanding, even as someone whose last formal experience with mathematics was an online stats course I took as a prerequisite for my MSW program.
For some context, I mentioned in a previous blog post that my entry into data science has felt like a hard turn from my background in history and social work, and the shift has rarely felt so stark as when we dove headlong into probabilities. The first (three-week) module of this particular course taught us the tools of the trade: basic Python, data manipulation and analysis with pandas, and plotting data with matplotlib. Some of my classmates had cursory experience with Python or other programming languages, but I felt that we were largely on the same page.
Then “Phase 2” reared its head, the formulae came out to play, and I started panicking. In the academic environs where I’d cut my teeth, “ML” stood for Marxism-Leninism, not Machine Learning. Now, I felt out of my depth, in more ways than one. Our second group project rolled around, and I was considering withdrawing from the course, wholly convinced that I wasn’t going to make it.
Now, before I write anything further, I’d like to disclaim that I am, without question, one of the “amateurs” referenced in the subtitle of this blog post. I do not have a degree in math or statistics, and I have no background in computer science whatsoever. I’ve been studying data science for a meager two months, and machine learning for even less time than that. I have a fraction of the expertise that’s probably necessary to write a post like this with any degree of confidence — possibly less relevant experience than you do, if you’re reading this post — but I’d also like to believe that this lack of technical expertise puts me in a position to articulate, in lay-reader’s terms, one of the most bare-bones examples of machine learning. In turn, I hope this blog post will calm the nerves of readers who might be fascinated with data and interested in a career transition, but worried about the potential barriers to entering a field like data science.
In the process of trying to “dumb down” this material, the probability that I’ll write something that’s flat-out wrong is, to put it charitably, non-negligible. If that happens, feel free to send me an email and call me a hack. That said, let’s turn back the clock.
Hit refresh…
Feature engineering… deep learning… gradient boosting… forget all the jargon, if only momentarily, and think back to Algebra I, or even Pre-Algebra, if that was how your school did things. It started with linear equations — if you’re anything like me, the simple formula y = mx + b is wired somewhere in your neuromuscular system, unlikely ever to fade. You plug in a number for x and get an answer y in return. The result is dependent, of course, on what you input as your independent variable — that is, x — but it also depends on the values of m and b, which represent the slope and the intercept of the line, respectively. Among the most rudimentary methods of visualizing a linear function is a table of values like this (crude) one:
# Sample of x and y values for
# the function y = 3x - 3| x | y |
|-----------|
| 0 | -3 | = (0, -3)
| 1 | 0 | = (1, 0)
| 2 | 3 | = (2, 3)
| 3 | 6 | = (3, 6)
| 4 | 9 | = (4, 9)
Python can handle simple arithmetic operations on its own, and the addition of libraries like NumPy and matplotlib enable us to write out and plot a simple algebraic function like y = 3x – 3 with relative ease:
Then, to get a visual representation of the relationship between x and y, we can type something like…
import matplotlib.pyplot as plt
%matplotlib inlineplt.plot(x, y)
(note: the above code is truncated for brevity; reproducing the figure on display below will require additional argument inputs, which can be found in the full notebook for this blog post)
…which gives us a figure that looks like this:
A little more glamorous than a TI-84 printout, no?
That’s a sleek line, sure, but therein lies the problem: how many practical situations can you think of that can be modeled with something as unsophisticated as a single-variable algebraic equation? When inspecting data — even cleaned, “toy” data used for instructional purposes — relationships between variables will never be this perfectly linear. In practice, we’ll be trying to determine the line of best fit — that is, how can we best express y as a function of x? But even that calculation, when fully deconstructed, isn’t any math you haven’t seen before; it’s just a little more tedious. That’s where machine learning kicks in.
Columns are just variables.
In its broadest definition, machine learning is the process by which algorithms learn from training data (i.e. existing data) in order to make predictions. The subject of this blog post is simple linear regression, a straightforward and easy-to-understand form of supervised learning (hyperlinks provided here and there so you can further explore subjects I don’t really address in any substantial fashion). That might read like precise, technical language — and it is, to an extent — but it’s also not impossible to wrap your head around if you have a baseline understanding of algebraic relationships. Let’s dive into some sample data to tease out what I mean.
For demonstrative purposes, I’ll be working with the
iris dataset that comes pre-loaded with seaborn. It has a digestible number of rows (150), and four of its five columns — petal length, petal width, sepal length, and sepal width — list continuous, numerical data, which makes it extraordinarily convenient for this kind of example. Even better, the relationships between its numerical features (i.e. columns) are linear in nature, meaning that a change in one feature corresponds with a change in another feature.
I’ll go out on a limb and assume that if you’ve navigated to this webpage, you have some kind of programming environment — I used a Jupyter Notebook — that allows you to write and run Python code, and that you’re somewhat familiar with (and have installed!) popular libraries for data science like pandas and Seaborn.
# Standard aliases for package imports
import pandas as pd
import seaborn as sns# Load in toy data, assign to variable `df`
df = sns.load_dataset('iris')
The
iris dataset has five columns — four that describe the width and length of the flower’s petals and sepals, and a fifth that classifies the flower into one of three iris species. We’ll only be using two columns from the dataset for this simple example:
petal_length and
sepal_length. If we imagine that
petal_length is our x (independent variable) and
sepal_length our y (dependent), we can plot the values of those respective columns like so:
Looking at that spread of points, we can conclude pretty safely there does exist some kind of linear relationship between these two variables; as we move along the x-axis and petal length increases, sepal length tends to increase on the y-axis as well. This linear relationship indicates that simple linear regression might be a good fit for a situation like this one. We could calculate this regression by hand, but it’s a little exhausting, especially when scikit-learn contains as many tools as it does for expediting the process.
No crunch necessary.
In some instances, data analysis software does a little too good of a job. seaborn’s
regplot (shorthand for regression plot) method allows us to visualize the line of best fit in a “quick and dirty” fashion — everything happens under the hood — but it doesn’t allow us to ascertain any information about the line itself, or the line’s relationship to the values!
Thankfully, using scikit-learn’s
LinearRegression class is simple:
- Import the relevant class,
LinearRegressionfrom
sklearn.linear_model(classes are always written in ThisFashion; this is also known as “CamelCase”).
- Instantiate the object using parentheses (i.e.
LinearRegression()) and assign it to a variable. I like to use something short but self-explanatory & easy to understand, like
lr.
- We can now access methods associated with the
LinearRegressionclass with dot notation. The
.fitmethod takes in at least two arguments —
Xand
y— and trains the model
lron that data. In other words, the model we created in Step 2 learns from the data stored in
Xand
y.
The block of code below illustrates how to execute those steps in Python. I’ll go out on a limb and assume that if you’ve navigated to this webpage, you have some way of writing and running Python code; I did this all in a Jupyter Notebook, which you can inspect here.
Now that the model,
lr, has been fitted on the data, we can do loads of other stuff with that object. The
LinearRegression class has three different functions:
- Estimator: it can use a
.fit()method to learn from data (already done above!)
- Predictor: it can use a
.predict()method to make predictions based on what it learned while fitting
- Model: it can use a
.score()method to evaluate its predictions
…and it’s just one of numerous algorithms that scikit-learn offers, each of which has its own properties and hyperparamaters. Calling
.score() on the
lr object and passing in your desired x and y data returns what’s called an r² score, a floating point number that ranges from 0.0 (x does not explain any of the variance in y) to 1.0 (x explains 100% of the variance in y), while
.predict() returns predictions made based on the knowledge the model developed in the training process.
If we reframed our problem and examined the dataset a different way, we could use
petal_length and
sepal_length as the (loosely) independent predictor variables we use to guess the
species of an iris — species, in this case, would be our target. One popular classification algorithm is the
RandomForestClassifier.
Like I mentioned up top, this material can get awfully complicated, but it helps me ground myself when I recall that these formulas, no matter how verbose, are ultimately built on mathematical operations and concepts that have been swimming in my head for years.
Additional materials
- A deeper dive into the methods used in this blog post can be found on GitHub!
- A brief blog post recommending that data scientists phase out their use of the
irisdataset as an instructional tool — like I did in this very blog post — in protest of the dataset’s collector’s advocacy for eugenics. | https://medium.com/@emergencykisses/machine-learning-taught-me-high-school-math-all-over-again-d2c1e5daf99a?source=read_next_recirc---------3---------------------68969f41_9545_43ea_9c4c_7b26dc4cc493------- | CC-MAIN-2022-40 | refinedweb | 1,897 | 55.98 |
Methods define the operations that can be executed by any component that you create from a class. In other words, methods are how classes (and instances of those classes) do their work.
Methods can be declared in CLASS blocks. To declare a method, include a METHOD statement in your CLASS block, using the general form

   label: METHOD <parameter-list> </ (method-options)>;

The statements that implement the method can either follow the declaration, or they can reside in a separate SCL entry.
Methods are implemented in METHOD blocks. A METHOD block begins with the METHOD statement, includes the SCL code that implements the method, and then ends with the ENDMETHOD statement.
For example, the Add method can be implemented in the CLASS block as follows:
class Arithmetic;
   add: method n1 n2:num;
      return(n1 + n2);
   endmethod;
endclass;

If you want to implement the Add method in a separate SCL entry, then the CLASS block would contain only the method declaration:
class Arithmetic;
   add: method n1 n2:num / (scl='work.a.b.scl');
endclass;

The work.a.b.scl entry would contain a USECLASS block that implements the Add method:
useclass Arithmetic;
   add: method n1 n2:num;
      return(n1 + n2);
   endmethod;
enduseclass;
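Once the Arithmetic class is compiled, a calling program can create an instance and invoke the method with dot notation. The following is a minimal sketch; the variable names are illustrative, and it assumes the Arithmetic class is available to the compiler (for example, through an IMPORT statement):

```sas
dcl Arithmetic a;
dcl num sum;
a = _new_ Arithmetic();   /* create an instance of the class */
sum = a.add(2, 3);        /* invoke the method; sum contains 5 */
```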
See METHOD for a complete description of implementing methods with the METHOD statement. See The Structure of SCL Programs; Implementing Methods Outside of Classes; and USECLASS for more information about implementing methods in USECLASS blocks.
Note: The method options that you specify in the CLASS block can also be specified in the USECLASS block. Any option that is included in the CLASS block and is used to specify a nondefault value must be repeated in the USECLASS block. For example, if you specify State='O' or Signature='N' in the CLASS block, then you must repeat those options in the USECLASS block. However, the SCL option will be ignored in the USECLASS block.
For compatibility with Version 6, you can also define METHOD blocks in a separate SCL entry outside of CLASS and USECLASS blocks. However, such an application is not a strictly object-oriented application. For these methods, SCL will not validate method names and parameter types during compile time. See Defining and Using Methods for more information about methods that are not declared or implemented within a class.
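For illustration, such a standalone entry might look like the following sketch. The entry name, label, and parameters are all hypothetical, and because the method is not part of a class, the compiler performs no validation of the method name or parameter types:

```sas
/* work.a.meths.scl -- a method defined outside of any  */
/* CLASS or USECLASS block (entry name is hypothetical) */
addtax: method amount rate:num total:update:num;
   total = amount + (amount * rate);
endmethod;
```

A calling program could then run it with a statement such as call method('work.a.meths.scl', 'addtax', amt, rate, tot); with no compile-time checking of the label or the parameter types.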
SCL supports variable method scope, which gives you considerable design flexibility. Method scope can be defined as Public, Protected, or Private. The default scope is Public. In order of narrowing scope, Public methods can be called by any SCL program, Protected methods can be called only by the class that defines them and by subclasses of that class, and Private methods can be called only from within the class that defines them.
class Scope;
   m1: public method n:num return=num /(scl='work.a.uScope.scl');
   m2: private method :char /(scl='work.b.uScope.scl');
   m3: protected method return=num;
      num = 3;
      dcl num n = m1(num);
      return(n);
   endmethod;
   m4: method /(scl='work.c.uScope.scl');
endclass;

By default, method m4 is a public method.
Method names can be up to 256 characters long. Method labels can be up to 32 characters long. The name of a method should match its label whenever possible.
Note: A method that has the same name as the class that contains it is called a constructor. See Defining Constructors for more information.
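As a sketch, a constructor declaration might look like the following. The class name and the initialization statements are illustrative; the constructor runs automatically when an instance is created with the _NEW_ operator:

```sas
class Point;
   dcl num x y;
   Point: method;      /* constructor: same name as the class */
      x = 0;
      y = 0;
   endmethod;
endclass;
```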
If you need the method name to be different from the method label, you must specify either the METHOD or LABEL option in the METHOD statement. These options are mutually exclusive.
Note: In dot notation, always use the method name. When implementing the method, always use the method label.
For example, a label of MyMethod may be sufficient, but if you want the method name to be MySortSalaryDataMethod, you can declare the method as follows:
class a;
   MyMethod: public method sal:num
      /(Method='MySortSalaryDataMethod', SCL='work.a.a.scl');
endclass;

When you implement the method in work.a.a.scl, you identify the method by using the method label as follows:
useclass a;
   MyMethod: public method sal:num;
      ...SCL statements...
   endmethod;
enduseclass;

You would reference this method in dot notation by using the method name as follows:
obj.MySortSalaryDataMethod(n);
Alternatively, you can specify the LABEL option. For example, to specify a method name of Increase and a method label of CalculatePercentIncrease, you could declare the method as follows:
class a;
   Increase: public method
      /(Label='CalculatePercentIncrease', SCL='work.a.b.scl');
endclass;

As in the previous example, you use the method label when you implement the method, and you use the method name when you refer to the method in dot notation. In work.a.b.scl, you would implement the method as follows:
useclass a;
   CalculatePercentIncrease: public method;
      ...SCL statements...
   endmethod;
enduseclass;

You would reference the method in dot notation as follows:
obj.Increase();
In Version 6, SAS/AF software used underscores to separate words in method names (for example, _set_background_color_). The current convention is to use a lowercase letter for the first letter and to subsequently uppercase the first letter of any joined word (for example, _setBackgroundColor).
The embedded underscores have been removed to promote readibility. However, for compatibility, the compiler recognizes _set_background_color_ as equivalent to _setBackgroundColor. All Version 6 code that uses the old naming convention in CALL SEND or CALL NOTIFY method invocations will still function with no modification.
Although it is possible for you to name a new method using a leading underscore, you should use caution when doing so. Your method names may conflict with future releases of SAS/AF software if SAS Institute adds new methods to the parent classes.
When you define a method parameter, you must specify its data type. Optionally, you can also specify its storage type: input, output, or update. The storage type determines how methods can modify each other's parameters. The default parameter storage type is update.
You use the colon (:) delimiter to specify both the storage type and the data type for each method parameter:
In the following example, the TypeStore class defines four methods:
import sashelp.fsp.collection.class;
class TypeStore;
   m1: method n:num a b:update:char return=num /(scl = 'work.a.uType.scl');
   m2: method n:output:num c:i:char /(scl = 'work.b.uType.scl');
   m3: method s:i:Collection /(scl = 'work.c.uType.scl');
   m4: method l:o:list /(scl = 'work.d.uType.scl');
endclass;

The parameter storage type and data type for each method are as follows:
Note: If you specify the storage type for a parameter in the CLASS block, then you must also specify the storage type in the USECLASS block.
An object can be declared as an INTERFACE object, a CLASS object, or a generic OBJECT. If you declare an object as a generic OBJECT, then the compiler cannot validate attributes or methods for that object. Validation is deferred until run time. Any error that results from using incorrect methods or attributes for the generic object will cause the program to halt. For example, if you pass a listbox class to a method that is expecting a collection object, the program will halt.
Object types are treated internally as numeric values. This can affect how you overload methods. See Overloading and List, Object, and Numeric Types for more information.
When you declare or implement a method, you can specify the data type of the return value with the RETURN option. If the method has a RETURN option, then the method implementation must contain a RETURN statement. The method's RETURN statement must specify a variable, expression, or value of the same type. In the following example, method m1 returns a numeric value:
class mylib.mycat.myclass.class;
   m1: method n:num c:char return=num;   /* method declaration    */
      return(n+length(c));               /* method implementation */
   endmethod;
endclass;
A method's signature is a set of parameters that uniquely identifies the method to the SCL compiler. Method signatures enable the compiler to check method parameters at compile time and can enable your program to run more efficiently. All references to a method must conform to its signature definition. Overloaded methods must have signatures. (See Overloading Methods.)
A signature is automatically generated for each Version 8 method unless you specify Signature='N' in the method's option list. By default, Signature='Y' for all Version 8 methods. When you edit a class in the Build window, a signature is generated for each method that is declared in that class when you issue the SAVECLASS command or select

For all Version 6 methods, the default is Signature='N'. See Converting Version 6 Non-Visual Classes to Version 8 Classes for information about adding signatures to Version 6 methods.
For example, the following method declarations show methods that have different signatures:
Method1: method name:char number:num;
Method2: method number:num name:char;
Method3: method name:char;
Method4: method return=num;

Each method signature is a unique combination, varying by argument number and type:
Method1 sigstring: (CN)V
Method2 sigstring: (NC)V
Method3 sigstring: (C)V
Method4 sigstring: ()N
The order of arguments also determines the method signature. For example, the getColNum methods below have different signatures -- (CN)V and (NC)V -- because the arguments are reversed. As a result, they are invoked differently, but they return the same result.
/* method1 */
getColNum: method colname:char number:update:num;
   number = getnitemn(listid, colname, 1, 1, 0);
endmethod;

/* method2 */
getColNum: method number:update:num colname:char;
   number = getnitemn(listid, colname, 1, 1, 0);
endmethod;
You can also use the Class Editor to define method signatures. See SAS Guide to Applications Development for more information.
Signatures are usually represented by a shorthand notation, called a sigstring. This sigstring is stored in the method metadata as SIGSTRING.
A sigstring has the following compressed form:
Each argument type can be one of the following:
Return-type can be any of the above types, or V for void, which specifies that the method does not return a value. The return type cannot be an array.
Arrays are shown by preceding any of the above types with a bracket ( [ ). For example, a method that receives a numeric value and an array of characters and returns a numeric value would have the signature (N[C)N.
Here are some examples of method signatures:
()V. This sigstring is the default signature.
(NCL)N.
(O:sashelp.classes.programHalt.class;N)V
(OC)C.
Note: Although the return type is listed as part of the sigstring, it is not used by SCL to identify the method. Therefore, it is recommended that you do not define methods that differ only in return type. See Overloading Methods for more information.
Signatures are most useful when SCL has to distinguish among the different forms of an overloaded method. The SCL compiler uses signatures to validate method parameters. When you execute your program, SCL uses signatures to determine which method to call.
For example, suppose your program contains the following class:
class Sig;
   /* Signature is (N)C */
   M1: method n:num return=char /(scl='work.a.uSig.scl');
   /* Signature is ([C)V */
   M1: private method n(*):char /(scl='work.a.uSig.scl');
   /* Signature is ()V */
   M1: protected method /(scl='work.a.uSig.scl');
   /* Signature is (NC)V */
   M1: method n:num c:char /(scl='work.a.uSig.scl');
endclass;

Suppose also that your program calls M1 as follows:
dcl char ch;
ch = M1(3);

SCL will call the method with the signature (N)C. If your program calls M1 like this:
M1();

SCL will call the method with the signature ()V.
After defining a signature for a method and deploying the class that contains it for public use, you should not alter the signature of the method in future versions of the class. Doing so could result in program halts for users who have already compiled their applications. Instead of altering an existing signature, you should overload the method to use the desired signature, leaving the previous signature intact.
Within a CLASS block, if a method invokes another method within that same class, then either the second method must be implemented before the first, or the second method must be declared with the Forward='Y' option.
Note: Any methods that are forward-referenced must be implemented in the class in which they are declared.
In the following example, m1 calls m2, so the compiler needs to know the existence of m2 before it can compile m1.
class mylib.mycat.forward.class;
   m2: method n:num c:char return=num / (forward='y');

   m1: method n1 n2:num mylist:list return=num;
      dcl num listLen = listlen(mylist);
      dcl num retVal;
      if (listLen = 1) then
         retVal = m2(n1,'abc');
      else if (listLen = 2) then
         retVal = m2(n2,'abc');
   endmethod;

   m2: method n:num c:char return=num;
      return(n+length(c));
   endmethod;
endclass;
You can overload methods only for Version 8 classes. Method overloading is the process of defining multiple methods that have the same name, but which differ in parameter number, type, or both. Overloading methods enables you to
All overloaded methods must have method signatures because SCL uses the signatures to differentiate between overloaded methods. If you call an overloaded method, SCL checks the method arguments, scans the signatures for a match, and executes the appropriate code. A method that has no signature cannot be overloaded.
If you overload a method, and the signatures differ only in the return type, the results are unpredictable. The compiler will use the first version of the method that it finds to validate the method. If the compiler finds the incorrect version, it generates an error. If your program compiles without errors, then when you run the program, SCL will execute the first version of the method that it finds. If it finds the incorrect version, SCL generates an error. If it finds the correct version, your program might run normally.
Each method in a set of overloaded methods can have a different scope as well. However, the scope is not considered part of the signature, so you may not define two methods that differ only by scope. (See Defining Method Scope.)
Suppose you have the following two methods, where each method performs a different operation on its arguments:
CombineNumerics: public method a:num b:num return=num;
endmethod;

CombineStrings: public method c:char d:char return=char;
endmethod;

Assume that CombineNumerics adds the values of A and B, whereas CombineStrings concatenates the values of C and D. In general terms, these two methods combine two pieces of data in different ways based on their data types.
Using method overloading, these methods could become
Combine: public method a:num b:num return=num;
endmethod;

Combine: public method c:char d:char return=char;
endmethod;
In this case, the Combine method is overloaded with two different parameter lists: one that takes two numeric values and returns a numeric value, and another that takes two character parameters and returns a character value.
As a result, you have defined two methods that have the same name but different parameter types. With this simple change, you do not have to worry about which method to call. The Combine method can be called with either set of arguments, and SCL will determine which method is the correct one to use, based on the arguments that are supplied in the method call. If the arguments are numeric, SCL calls the first version shown above. If the arguments are character, SCL calls the second version. The caller can essentially view the two separate methods as one method that can operate on different types of data.
Here is a more complete example that shows how method overloading fits in with the class syntax. Suppose you create X.SCL and issue the SAVECLASS command, which generates the X class. (Although it is true here, it is not necessary that the class name match the entry name.)
class X;
   Combine: public method a:num b:num return=num;
      dcl num value;
      value = a + b;
      return value;
   endmethod;

   Combine: public method a:char b:char return=char;
      dcl char value;
      value = a || b;
      return value;
   endmethod;
endclass;
You can then create another entry, Y.SCL. When you compile and execute Y.SCL, it instantiates the X class and calls each of the Combine methods.
import X.class;
init:
   dcl num n;
   dcl char c;
   dcl X xobject = _new_ X();
   n = xobject.Combine(1,2);
   c = xobject.Combine("abc","def");
   put n= c=;
The PUT statement produces
n=3 c=abcdef
Another typical use of method overloading is to create methods that have optional parameters.
Note: This example shows two implementations of an overloaded method that each accept different numbers of parameters. Defining One Implementation That Accepts Optional Parameters describes how to use the OPTIONAL option to create a method with one implementation that accepts different numbers of parameters.
For example, suppose we have a method that takes a character string and a numeric value, where the numeric value is used as a flag to indicate a particular action. The method signature would be (CN)V.
M: public method c:char f:num;
   if (f = 1) then
      /* something */
   else if (f = 2) then
      /* something else */
   else
      /* another thing */
endmethod;
If method M is usually called with the flag equal to one, you can overload M as (C)V, where that method would simply include a call to the original M. The flag becomes an optional parameter.
M: public method c:char;
   M(c, 1);
endmethod;
When you want the flag to be equal to one, call M with only a character string parameter. Notice that this is not an error. Method M can be called with either a single character string, or with a character string and a numeric -- this is the essence of method overloading. Also, the call M(c,1); is not a recursive call with an incorrect parameter list. It is a call to the original method M.
This example can also be turned around for cases with existing code. Assume that we originally had the method M with signature (C)V and that it did all the work.
M: public method c:char;
   /* A lot of code for processing C. */
endmethod;
Suppose you wanted to add an optional flag parameter, but did not want to change the (possibly many) existing calls to M. All you need to do is overload M with (CN)V and write the methods as follows:
M: public method c:char f:num;
   Common(c, f);
endmethod;

M: public method c:char;
   Common(c, 0);
endmethod;

Common: public method c:char f:num;
   if (f) then
      /* Do something extra. */
   /* Fall through to same old code for */
   /* processing C.                     */
endmethod;
Notice that when you call M with a single character string, you get the old behavior. When you call M with a string and a (non-zero) flag parameter, you get the optional behavior.
You can use the OPTIONAL option to create an overloaded method with only one implementation that will accept different numbers of parameters, depending on which arguments are passed to it.
In the following example, the method M1 will accept from two to four parameters:
class a;
   M1: public method p1:input:num p2:output:char
       optional=p3:num p4:char / (scl='mylib.classes.old.scl');
endclass;

SCL will generate three signatures for this method:
Lists and objects (variables declared with either the OBJECT keyword or a specific class name) are treated internally as Numeric values. As a result, in certain situations, variables of type List, Numeric, generic Object, and specific class names are interchangeable. For example, you can assign a generic Object or List to a variable that has been declared as Numeric, or you can assign a generic Object to a List. This flexibility enables Version 6 programs in which list identifiers are stored as Numeric variables to remain compatible with Version 8.
The equivalence between objects, lists, and numeric variables requires that you exercise caution when overloading methods with these types of parameters. When attempting to match a method signature, the compiler first attempts to find the best possible match by matching the most parameter types exactly. If no exact match can be found, the compiler resorts to using the equivalence between List, generic Object, and Numeric types.
For example, suppose you have a method M with a single signature (L)V. If you pass a numeric value, a list, or an object, it will be matched, and method M will be called. If you overload M with signature (N)V, then Numeric values will match the signature (N)V, and List values will match the signature (L)V. However, List values that are undeclared or declared as Numeric will now match the wrong method. Therefore, you must explicitly declare them with the LIST keyword to make this example work correctly. Also, if you pass an object, it will match both (L)V and (N)V, so the compiler cannot determine which method to call and will generate an error message.
When you instantiate a class, the new class (or subclass) inherits the methods of the parent class. If you want to use the signature of one of the parent's methods, but you want to replace the implementation with your own implementation, you can override the parent's method. To override the implementation of a method, specify State='O' in the method declaration and in the method implementation. Here is an example for a class named State:
class State;
   _init: method / (state='o');
      _super();
   endmethod;
endclass;
Constructors are methods that are used to initialize an instance of a class. The Object class provides a default constructor that is inherited for all classes. Unless your class requires special initialization, you do not need to create a constructor.
Each constructor has the following characteristics:
Note: Using the _NEW_ operator to instantiate a class is the only way to run constructors. Unlike other user-defined methods, you cannot execute constructors using dot notation. If you instantiate a class in any way other than by using the _NEW_ operator (for example, with the _NEO_ operator), constructors are not executed.
For example, you could define a constructor X for class X as follows:
class X;
   X: method n:num;
      put 'In constructor,' n=;
   endmethod;
endclass;

You can instantiate the class as follows:
init:
   dcl X x = _new_ X(99);
return;

The constructor is run automatically when the class is instantiated. The argument to _NEW_, 99, is passed to the constructor. The output is
In constructor, n=99
Like other methods, constructors can be overloaded. Any void method that has the same name as the class is treated as a constructor. The _NEW_ operator determines which constructor to call based on the arguments that are passed to it. For example, the Complex class defines two constructors. The first constructor initializes a complex number with an ordered pair of real numbers. The second constructor initializes a complex number with another complex number.
class Complex;
   private num a b;

   Complex: method r1:num r2:num;
      a = r1;
      b = r2;
   endmethod;

   Complex: method c:complex;
      a = c.a;
      b = c.b;
   endmethod;
endclass;

This class can be instantiated with either of the following statements:

dcl Complex c = _new_ Complex(1,2);
dcl Complex c2 = _new_ Complex(c);

These statements both create complex numbers. Both numbers are equal to 1 + 2i.
The default constructor does not take any arguments. If you want to create your own constructor that does not take any arguments, you must explicitly override the default constructor. To override the default constructor, specify State='o' in the method options list.
class X;
   X: method /(state='o');
      ...SCL statements to initialize class X...
   endmethod;
endclass;
Constructors can be called explicitly only from other constructors. The _NEW_ operator calls the first constructor. The first constructor can call the second constructor, and so on.
When a constructor calls another constructor within the same class, it must use the _SELF_ system variable. For example, you could overload X as follows:
class X;
   private num m;

   X: method n:num;
      _self_(n, 1);
   endmethod;

   X: method n1:num n2:num;
      m = n1 + n2;
   endmethod;
endclass;

The first constructor, which takes one argument, calls the second constructor, which takes two arguments, and passes in the constant 1 for the second argument.
The following labeled section creates two instances of X. In the first instance, the m attribute is set to 3. In the second instance, the m attribute is set to 100.
init:
   dcl X x = _new_ X(1,2);
   dcl X x2 = _new_ X(99);
return;
Constructors can call parent constructors by using the _SUPER operator. For example, suppose you define class X as follows:
class X;
   protected num m;

   X: method n:num;
      m = n * 2;
   endmethod;
endclass;

Then, you create a subclass Y whose parent class is X. The constructor for Y overrides the default constructor for Y and calls the constructor for its parent class, X.
class Y extends X;
   public num p;

   Y: method n:num /(state='o');
      _super(n);
      p = m - 1;
   endmethod;
endclass;

You can instantiate Y as shown in the following labeled section. In this example, the constructor in Y is called with argument 10. This value is passed to the constructor in X, which uses it to initialize the m attribute to 20. Y then initializes the p attribute to 19.
init:
   dcl Y y = _new_ Y(10);
   put y.p=;
return;

The output would be:
y.p=19
Note: As with other overridden methods that have identical signatures, you must explicitly override the constructor in Y because there is a constructor in X that has the same signature.
The compiler automatically treats as a constructor any void method that has the same name as the class. If you do not want such a method to be treated as a constructor, you can specify constructor='n' in the method declaration.
class X;
   X: method /(constructor='n');
      put 'Is not constructor';
   endmethod;
endclass;

init:
   dcl X x = _new_ X();
   put 'After constructor';
   x.x();
return;

This will result in the following output:
After constructor Is not constructor
You can define the implementation of methods outside the SCL entry that contains the CLASS block that defines the class. This feature enables multiple people to work on class methods simultaneously.
To define class methods in a different SCL entry, use the USECLASS statement block. The USECLASS block binds methods that it contains to the class that is specified in the USECLASS statement. The USECLASS statement also enables you to define implementations for overloading methods. (See Overloading Methods. )
Method implementations inside a USECLASS block can include any SCL functions and routines. However, the only SCL statements that are allowed in USECLASS blocks are METHOD statements.
The USECLASS block binds the methods that it contains to a class that is defined in a CLASS statement block or in the Class Editor. Therefore, all references to the methods and the attributes of the class can bypass references to the _SELF_ variable completely as long as no ambiguity problem is created. Because the binding occurs at compile time, the SCL compiler can detect whether an undefined variable is a local variable or a class attribute. See also Referencing Class Methods or Attributes.
SCL stores metadata for maintaining and executing methods. You can query a class (or a method within a class) to view the method metadata. For example, to list the metadata for a specific method, execute code similar to the following:
init:
   DCL num rc metadata;
   DCL object obj;
   obj = loadclass('class-name');
   /* metadata is a numeric list identifier */
   rc = obj._getMethod('getMaxNum', metadata);
   call putlist(metadata, '', 2);
return;
Copyright 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved. | http://v8doc.sas.com/sashtml/sclr/z1090023.htm | CC-MAIN-2018-05 | refinedweb | 4,603 | 53.51 |
>Just 1%, gg? Damn... I guess it's time to bring out CheetOS XS (extra special). It'll primarily be for the Wombat gaming system that I spoke about in a much earlier thread, but its cross-platform compatibility spreads to refrigerators, freezers (we finally broke the ice-cube matrix!), and thumbtacks. <
Oh yeah, you get 2%, take a percent from the fruitOS
[completely off topic]
Since Windows has X-Box
and
cheezOS has Wombat,
who do you think Sony and Nintendo would pair up with?
My guess is Sony would create Sony Linux
and Nintendo would team up with Mac.
[/completely off topic]
>Dos = Unix (actually DOS was bought)
Windows = MacOS
WindowsNT = Unix
Microsoft office = Corel Whatever it was...
Windows XP = Linux + Unix + MacOS<
It's time I release the SECRET MICROSOFT TEMPLATE CODE. I only have the main function, though.
Code:
#include <otherprogram.h>
#include <modify.h>

#ifdef WIN9X
void main()
#endif
#ifdef WINNT
int main()
#endif
{
    convert(company, OriginalToMicrosoft);
    convert(productName, OriginalToOurs);

    if (win9X)
    {
        createbugs(Lots, restartneeded);
    }
    if (winnt)
    {
        return 0;
    }
}
Here are some questions frequently posted to the JINI-USERS mailing list:
com.sun.rmi.rmid.ExecOptionPermission exceptions when starting Jini?
Jini is a set of APIs and runtime conventions that facilitate the building and deploying of distributed systems. Jini provides "plumbing" that takes care of common but difficult parts of distributed systems.
Jini consists of a programming model and a runtime infrastructure. By defining APIs and conventions that support leasing, distributed events, and distributed transactions, the programming model helps developers build distributed systems that are reliable, even though the underlying network is unreliable. The runtime infrastructure, which consists of network protocols and APIs that implement them, makes it easy to add, locate, access, and remove devices and services on the network.
For a detailed introduction to Jini, see:
What does "Jini" stand for?
Jini isn't an acronym, so it doesn't stand for anything. Though, as Ken Arnold pointed out, it does function as an anti-acronym: "Jini Is Not Initials." It is pronounced the same as the word "genie," so it sounds like jee-nee, not jin-nee.
Where will Jini be useful?
Jini is intended to be of general use, applicable in a wide range of distributed systems environments. In the home, it can help simplify the job of "systems administration" sufficiently that non-sophisticated home owners will be able to compose and use a network of consumer devices. In the enterprise, it can make it easier for traditional systems administrators to manage a dynamic network environment.
Thus, the home and the enterprise are two major network environments where Jini may make sense, but Jini's potential is not limited to these environments. Basically, anywhere where there is a network, Jini could potentially provide the plumbing for the distributed systems running on that network. For example, networks are emerging in cars, which nowadays contain many embedded microprocessors. Were you to plug a new CD player into your car, Jini could, for example, enable the user interface for the CD player to come up on your car's dashboard.
How is Jini licensed?
Sun licenses the Jini source code through the "Sun Community Source Code License," or "SCSL," which is a cross between keeping things proprietary and giving everything away via an open source license. For more information, see:
Also, check out The Jini Community, the central site for signers of the Jini Sun Community Source Licence to interact:
Will Jini work with PersonalJava?
Currently Jini does not work with PersonalJava, because (among other reasons) Jini depends on 1.2 RMI, and the current release of PersonalJava doesn't support 1.2 RMI. Sun says they expect to release later this year a new version of PersonalJava that supports Jini.
Do clients need a priori knowledge to use a service?
To use a service, clients will in general have a priori knowledge of at least one "known" (or "well-known") Java interface that the service object implements. In other words, if a client program makes use of, say, a Jini storage service, the programmer of the client will usually know about a standard (or, "well-known") interface to storage services when the programmer writes the client program.
Clients will usually specify one or more Java interfaces as part of the search criteria when looking for services on a lookup service. This approach means that clients will definitely know how to use the services they find, because by definition they know (have a priori knowledge of) at least one Java interface through which they can interact with the service.
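This lookup-by-type idea can be sketched in plain Java. The sketch below is a toy stand-in for a lookup service, not the real Jini API (which lives in net.jini.core.lookup and involves ServiceTemplate and ServiceRegistrar); the Printer interface and every other name here are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical "well-known" interface the client compiles against.
interface Printer {
    String print(String doc);
}

// Toy stand-in for a lookup service: it returns any registered service
// object whose class implements the requested interface.
class ToyLookup {
    private final List<Object> services = new ArrayList<>();

    void register(Object serviceObject) {
        services.add(serviceObject);
    }

    <T> T lookup(Class<T> wellKnownInterface) {
        for (Object s : services) {
            if (wellKnownInterface.isInstance(s)) {
                return wellKnownInterface.cast(s);
            }
        }
        return null; // no matching service registered
    }
}

class LookupDemo {
    public static void main(String[] args) {
        ToyLookup lookup = new ToyLookup();
        lookup.register((Printer) doc -> "printed:" + doc);

        // The client knows only the Printer interface, not the implementation.
        Printer p = lookup.lookup(Printer.class);
        System.out.println(p.print("report"));
    }
}
```

Because the client asked for Printer.class, whatever comes back is guaranteed to be usable through that interface, which is the point made above about lookup by type.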
Currently, no "well-known" interfaces really exist. These interfaces will likely be defined by industry consortiums. For example, various manufacturers of printers have gotten together to define a hierarchy of interfaces for printing. As families of "well-known" interfaces come into the public domain, they will likely be described and made available at the Jini Community web site:
Currently, two working groups are forming to define "well-known" interfaces for printing and storage services. For information about these and other efforts, visit the Jini Community web site.
Although in the general case, client programs will have a priori knowledge of well-known interfaces implemented by the service objects they use, they needn't always. If a service offers a user interface (UI) the client can display, and that UI is attached to the service in a "standard" way known to the client, the client need not have any a priori knowledge about that service. The client need only have a priori knowledge of where the UI might be stored (likely as an attribute in the service item). Once the client locates the UI, it can pass a reference to the service object to the UI and display it. The user can interact directly with the UI, which interacts with the service via the service object. Because the UI knows how to use the service object, the client doesn't have to.
The serviceui project at jini.org is currently working to define a well-known way for UIs to be attached to services. To find out more, join the serviceui mailing list at:
(You'll have to log in as a Community member, which requires that you agree to the Sun Community Source License terms for Jini.)
What's the difference between Jini and RMI?
Jini is built on top of RMI, so Jini requires RMI. RMI does not, however, require Jini. RMI, which stands for Remote Method Invocation, enables clients to hold references to objects in other JVMs, to invoke methods on those remote objects, and to pass objects to and from those methods as parameters, return values, and exceptions. RMI's object-passing abilities include a mechanism that enables downloading of any required class files across the network to the JVM receiving the object. Jini enables the spontanteous networking of clients and services on a network. Jini makes extensive use of RMI's ability to send objects across the network.
Both Jini and RMI involve a kind of directory service. In the case of Jini, the directory is called the lookup service. In the case of RMI, it is called the registry. The Jini lookup service enables clients to get references to services, which arrive in the local JVM as a reference to a local service object that implements a service interface. Similarly, the RMI registry enables clients to get references to remote objects, which arrive in the local JVM as a reference to a local stub object that implements a remote interface. Despite these similarities, however, several significant differences exist between Jini lookup services and RMI registries.
In general, a client program requires more knowledge about the network to use RMI than Jini, because Jini provides a discovery protocol that enables clients to locate nearby lookup services without prior knowledge of their locations. RMI, by contrast, requires that a client already know where a remote object is installed. In addition, because Jini service providers can also use the discovery protocol to locate lookup services, they can make their services available to the local area network without prior knowledge of that network. Because of the discovery protocol, Jini is able to provide "spontaneous networking" or "plug and work" between devices. Because it lacks a discovery protocol, RMI is not quite so spontaneous.
Another significant difference between RMI and Jini is the difference between a service object and a stub object. Although Jini makes use of RMI to send objects around, the service object on which the client invokes methods need not be an RMI stub. The Jini service object could be an RMI stub, but it's not necessary. A Jini service object can use any network protocol, not just RMI, to communicate back to any server, hardware, or whatever may be across the network. Alternatively, a Jini service object could fully implement the service locally so that it need not do any communication across the network. In the case of RMI, stub objects will always use the RMI wire protocol to communicate across the network to the remote object. A designer of each service decides which (if any) protocol is used to communicate across the network between a service object local to a client and a remote server. This means that Jini effectively raises the level of abstraction of distributed systems programming from the network protocol level to the object interface level. Clients don't have to worry about protocols, they just talk to object interfaces.
The third significant difference between RMI and Jini is that Jini provides a much more expressive way to search for services in a lookup service than RMI provides to search for remote objects in a registry. Each remote object is associated in a registry with a character string name that is unique within that registry. Jini services are registered with a service identifier that is globally unique and permanently maintained by each service, a service object, and any number of attribute objects. Clients can look up a service by service identifier, by the Java type or types of the service object, and by wildcard and exact matching on the attributes. Whereas RMI requires that a client know the name under which a desired remote object was registered, Jini enables a client to just look for the type of service desired. The ability to search for services by Java type is another feature of Jini that supports spontaneous networking. If you enter a LAN environment for the very first time and you want to use a printer, you don't have to figure out what name a printer service was registered under; instead, you just look up the services that implement a well-known Printer interface. Lookup by type also ensures that Jini clients will know how to use whatever it is they get back from their query, because they had to know about the type to do the query in the first place.
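The claim that a service object need not be an RMI stub can be illustrated with a minimal sketch in plain Java (no Jini or RMI classes; all names are invented). The proxy below implements its service entirely inside the client's JVM, so no wire protocol is involved at all; a different provider could hide RMI or raw sockets behind the same interface and the client could not tell the difference:

```java
import java.io.Serializable;

// Hypothetical well-known interface for a conversion service.
interface TemperatureConverter {
    double toFahrenheit(double celsius);
}

// A "smart proxy" a provider might register with a lookup service.
// Serializable so it could travel to the client; once there, the whole
// computation runs locally and never touches the network.
class LocalConverterProxy implements TemperatureConverter, Serializable {
    public double toFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}

class ProxyDemo {
    public static void main(String[] args) {
        TemperatureConverter c = new LocalConverterProxy();
        System.out.println(c.toFahrenheit(100.0)); // prints 212.0
    }
}
```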
In summary, RMI's aim is to enable invocation of methods on objects in remote
JVMs and to enable the sending and receiving of objects between JVMs as
parameters, return values, and exceptions of those remote method calls. Jini's
aim is to enable spontaneous networking of clients and services that
happen to find themselves on a local network.
How do you find out the IP address of a Jini service?
In general, you can't obtain the IP address of a service from the
lookup service. The service proxy is never unmarshalled in the lookup
service, so it's impossible to inspect the proxy for IP addresses.
Also, there is no requirement that the entity that registers the
service must also be the entity that provides the service, so knowing
who performed the join in the general case tells you nothing. Even
after the client retrieves the service proxy, obtaining an IP address
is not possible in general, because a smart proxy can be designed to
completely hide the network implementation details from the client. In
fact, the proxy can entirely implement the service locally, resulting
in no IP address. Or, the proxy could contact several different
servers at various IP addresses simultaneously or at different periods
during its lifetime; this means that there could, in fact, be many IP
addresses. If you are a service provider and want to provide clients
with your IP address, you must explicitly give them a way to get at
it. You could either register an attribute with your service that
gives the IP address, or your service object could implement an
interface that includes a method which returns the IP address.
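As a sketch of the attribute approach, assuming the Jini libraries (net.jini.core.entry.Entry) are on the classpath; the HostAddress entry and its field are hypothetical, not part of any standard:

```java
import net.jini.core.entry.Entry;

// Hypothetical attribute entry a service could register so that clients
// can discover the address of a server backing the service proxy.
// Entry fields must be public, non-primitive object types.
public class HostAddress implements Entry {
    public String address;   // e.g. "192.168.1.17"

    public HostAddress() {}  // entries require a public no-arg constructor

    public HostAddress(String address) {
        this.address = address;
    }
}
```

The service would then include a HostAddress instance among the attribute sets of its service item when it joins a lookup service.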
Where can I get the Jini release?
You can download the Jini binaries and source from the Java Developer Connection at:
You must be a member of the Java Developer Connection to access this page.
If you are not a member, you can become one for free. (Well, for
almost free. You do have to fork over some information about
yourself.)
What version of Java does the Jini release require?
As of January 26th, the Jini binaries work with the JDK 1.2 FCS release. If you downloaded the Jini binaries prior to January 26th, your Jini binaries will NOT work with JDK 1.2 FCS, but with JDK1.2b4.
Get JDK 1.2 FCS from:
Get JDK1.2b4 from the Java Developer Connection at:
Jini will not work with any JDK release prior to JDK 1.2. It is built on top of
many features, such as new features of RMI, that did not exist in
the JDK prior to JDK 1.2.
Where can I get the Jini specifications?
You can download all of the Jini specifications from:
I'm a Jini newbie. How do I get started with Jini?
Once you download the Jini binaries and the appropriate JDK release (see above), you may wish to download some of these examples and try to get them to run:
The example code from Bill Venners' Jini/JavaSpaces talk, and instructions on getting it to compile and run, are available at:
Eran Davidov posted some example code to JINI-USERS, and Ken Arnold made some modifications to get the code to run on UNIX. Eran then updated the code to work with JDK FCS 1.2. This revised version can be downloaded from:
Brian Murphy posted some beginner examples to JINI-USERS, which you can retrieve from the JINI-USERS archives at:
A Jini tutorial is available online at:
Online lecture notes for a course about RMI and Jini are at:
Noel Enete's "Nuggets," which contain example code and detailed instructions, are a great way to get started running Jini:
How do I look up a service?
The ServiceRegistrar interface provides two overloaded lookup() methods with which you can do lookups.
If you just want to get one service that matches the template, use this form:
public Object lookup(ServiceTemplate tmpl) throws java.rmi.RemoteException

If a service matches the template, you will receive back a reference to the service object. If no service matches the template, you'll get back a null reference. If more than one service matches the template, you'll just get back one service object chosen randomly from the matches.

If you want multiple matching service objects, or you want to know how many services total match a template, use this form:

public ServiceMatches lookup(ServiceTemplate tmpl, int maxMatches) throws java.rmi.RemoteException

This method will never return null, but will always return a ServiceMatches object. ServiceMatches objects have two public fields:

public int totalMatches;
public ServiceItem[] items;

If you just want to know how many services match a template, but don't want any service items back, invoke lookup() with maxMatches set to 0. You will get back a ServiceMatches object whose items field is null and whose totalMatches field indicates the total number of services that match the template.
If you want multiple service items, just indicate your maximum in the maxMatches field. You will get back a ServiceMatches object whose items field contains the array of matching service items up to your specified maxMatches. The totalMatches field will still indicate the total number of services that match the template, which may be greater than the length of the returned items array.
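Putting both forms together, a minimal sketch, assuming the Jini libraries are on the classpath; Printer is a hypothetical well-known service interface, and the registrar is assumed to have been obtained via discovery:

```java
import java.rmi.RemoteException;
import net.jini.core.lookup.ServiceMatches;
import net.jini.core.lookup.ServiceRegistrar;
import net.jini.core.lookup.ServiceTemplate;

// Hypothetical well-known service interface clients compile against.
interface Printer {
    void print(String document) throws RemoteException;
}

public class LookupSketch {
    // Single-match form: returns one matching service object, or null.
    public static Object findOne(ServiceRegistrar registrar)
            throws RemoteException {
        ServiceTemplate tmpl = new ServiceTemplate(
            null,                           // any service ID
            new Class[] { Printer.class },  // match by type
            null);                          // any attributes
        return registrar.lookup(tmpl);
    }

    // Multi-match form with maxMatches 0: items is null, but
    // totalMatches still reports how many services match.
    public static int countMatches(ServiceRegistrar registrar)
            throws RemoteException {
        ServiceTemplate tmpl =
            new ServiceTemplate(null, new Class[] { Printer.class }, null);
        ServiceMatches matches = registrar.lookup(tmpl, 0);
        return matches.totalMatches;
    }
}
```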
I've got a problem. What info should I include in my post?
To help those who would help you with your problem, include the following information in your post along with the description of your problem:
How do I associate a user interface (UI) with my service?
As a general rule, you should not make the service object itself extend a class from a user interface library, such as Panel, JPanel, Applet, or JApplet. Think of the service object as the way clients interact with a service through the well-known Java interfaces the service object implements. To associate UI with a service, use attributes.
One reason to keep UI out of your service object is that it may be used in devices that don't have user interface libraries available. If you attach UI to a service item as attributes, those clients that have the user interface libraries available can retrieve your UI and display it. Those clients that don't have user interface libraries available can just use your service object directly. You can therefore associate zero to many UIs with a service by placing entries in the attributes of the service's service item.
Each UI is represented by one UI object, which can but need not represent a graphical UI component. A UI object could represent a voice-only interface, a text I/O interface, a combination of visual and text, a 3D immersive world, and so on. Each one of these UIs, even a 3D immersive world with voice and a virtual text display, would be represented in the attributes by one single UI object.
A "standard" way of attaching a UI to a service is being developed by the serviceui project at jini.org. You can read about their latest ideas here:
To participate in, or just keep track of, the development of a recommended approach to adding UI to a service, including definition of well-known interfaces and classes, join a mailing list for the serviceui project at jini.org.
How can I start multiple Jini lookup services on the same machine?
The only fixed port defined by the Jini specifications is the port for multicast discovery. Because this is a multicast port (unlike broadcast and connection-based ports), multiple lookup services on the same machine should be able to share this port.
Multiple lookup services on the same machine must, however, have different unicast discovery/response ports. The "reggie" implementation of the lookup service, which comes with Sun's Jini release, includes an API (the DiscoveryAdmin interface) that enables you to change the default port number. Also, the -admin GUI of the lookup service browser enables you to configure specific ports. Such changes will remain persistent. If you don't need consistent port numbers, you can just fire off several reggies on the same host, and all reggies except the first will automatically use a dynamically allocated port rather than the standard one for unicast discovery.
Why must service IDs be universally unique, and why must they be remembered once assigned?
Every Jini service is supposed to have a service ID that is globally unique. If a service has no service ID, it can receive one from a Jini lookup service by simply registering with a lookup service. Usually a service includes its service ID as part of the service item it sends to the lookup service when it is registered via the join process. Whenever a lookup service receives a service item that has no service ID, it assigns a new service ID to that service and sends the ID back to the service. The service is responsible for remembering and using that same service ID forever after. In practice, Jini service providers (such as enabled devices) will likely come from the factory with service IDs already installed.
Services are required to maintain the same service ID throughout their lifetimes for several reasons. First, because a service always uses the same service ID, it can't be registered twice in the same lookup service. Every service registered in a lookup service has a unique service ID. If a service registers itself with a lookup service using a service ID that is being used by a service already registered in that lookup service, the already registered service will be replaced by the newly registering service. The lookup service interprets the newly registering service as a replacement (or new version) of the currently registered service with the same service ID.
Another reason services are required to always use the same service ID is that a client can determine that multiple services retrieved from multiple lookup services are actually the same service. If two services retrieved from two different lookup services have the same service ID, then those two services actually represent the same single service, which was registered in two different lookup services.
And lastly, a permanent service ID allows lookup services to be queried by service ID. Queries based on service ID enable clients to find exactly the service they are looking for, independent of service and attribute object equality. A service may be upgraded with a service object that no longer implements the same interfaces as the previous version. Similarly, a service may be upgraded with a new set of attributes that don't have the same types or values as the previous versions. Such new versions of a service may not match queries that matched the old version of the same service.
Despite this potential for incompatible upgrades, clients (such as systems administrators who want to track a service's status) can easily look up the service by service ID. Because the service ID will never change, it can be trusted to always match even if the service object and attributes change drastically.
Can a client make persistent changes to the attributes of a service?
A client can't change the attributes of a service directly. No methods that change attributes exist in the ServiceRegistrar interface. Such methods exist only in the ServiceRegistration interface, which is implemented by the object returned when a service is registered. Thus, a client must go through the service to change an attribute, and the service has the option of denying the request.
Whether or not an attribute change is persistent across network failures
is also the responsibility of the service itself.
If a service wishes to be administrable, its service object should implement the net.jini.admin.Administrable interface. This interface defines just one method, getAdmin(), which returns an object that should implement the net.jini.admin.JoinAdmin interface, as well as other service-specific admin interfaces. If a service doesn't want anyone besides itself to change an attribute, the Entry class for that attribute should implement the net.jini.lookup.entry.ServiceControlled interface.
To what extent is Jini dependent upon IP?
The current implementation of Jini is IP based, though other protocols could potentially be supported in the future. Such support for other protocols may involve gateways to IP.
Currently, a device that wishes to participate in a Jini federation must somehow get an IP address. The device must in fact already have an IP address before it sends the presence announcement packet (multicast request protocol). The lookup service extracts the IP address from the presence announcement packet in order to connect back to the device to complete the discovery process via the unicast discovery protocol.
The Jini specifications themselves are silent about the process by which devices will get IP addresses. In particular, the lookup service has no involvement in the process of assigning IP addresses. It is likely that IP addresses will be dynamically assigned by a DHCP server or some other automatic mechanism when a device goes on-line.
Other than the discovery protocols, the Jini specification is quite independent of IP. Once a lookup service has been found, everything is RMI, and Jini is shielded from protocol details. Programmers who use the classes in the Jini API that support discovery, such as JoinManager and LookupDiscovery, will find that the IP dependency is well hidden from their code. In addition, the APIs that support leasing, distributed events, and distributed transactions are independent of IP.
How can I control the lease length granted by a lookup service when I register my service?
When you register a service either via the ServiceRegistrar or the JoinManager, you can specify a desired lease duration. The lookup service may grant a lease whose duration is shorter (but not longer) than the requested duration. In practice, a lookup service likely has a maximum lease length that it observes when granting leases to registering services. You can manage the maximum lease duration offered by a lookup service, if you have permission, via the admin interface to the lookup service itself. For the details, see the following text, which is based heavily on a post written by Brian Murphy:
If you look at the code in RegistrarImpl.java, you will see

private long minMaxServiceLease = 1000 * 60 * 5;
private long minMaxEventLease = 1000 * 60 * 30;
These values indicate that, by default, the maximum length of a service lease is 5 minutes, and the maximum length of a lease on event notification is 30 minutes. When a service requests a lease duration of Long.MAX_VALUE (either in the service itself or through the JoinManager), the lookup service interprets such a request to mean: "assign a lease duration that is no greater than the default maximum lease duration". Thus, if your service requests a lease duration of Long.MAX_VALUE, then lookup will usually assign a duration that is 5 minutes (or 30 minutes for event notifications), but might assign a smaller duration. The Lookup Specification addresses this issue.
If you want to assign longer lease durations, you can change the default maximum values through the RegistrarAdmin interface. This is something that the administrator of the lookup would typically do based on the needs of the various services that interact with the administrator's lookup. To do this, you would get a reference to the RegistrarAdmin through the Registrar itself. For example,

ServiceRegistrar proxy = (CreateLookup.create(...)).reg;
RegistrarAdmin adminProxy =
    (RegistrarAdmin)(((Administrable)proxy).getAdmin());
adminProxy.setMinMaxServiceLease(1000*60*25);
adminProxy.setMinMaxEventLease(1000*60*90);
Now, when you ask for a lease duration of Long.MAX_VALUE, 25 minutes or less will be granted for services, and 90 minutes or less will be granted for event notifications.
Note that lookup services will not typically allow services lacking the appropriate privileges to change the lookup's configurable items. Usually the lookup service admin will have a GUI (possibly even a GUI registered as a service in the lookup itself) that is password protected, and which allows the administrator - and no one else - to change such items.
Because, in your test environment, you are both the creator/administrator of the lookup service and the creator of the service, you have the ability to change these items from your program.
One final note with respect to lease durations. At the top of RegistrarImpl.java, you will see the constant

MAX_LEASE (= 1000L * 60 * 60 * 24 * 365 * 1000)

This is the largest value that you can request the lookup reset the default max lease duration to; anything greater and you will get an IllegalArgumentException.
How do I perform advanced searches during lookup?
An example of this problem is a client that needs to find a display service that is at least 640 by 480 pixels. Assume that the well-known interface for displays is called Display and that Display services usually register a Resolution attribute that contains a screen width and height in pixels. You could look for a Display service that has a resolution of 640 by 480, but how could you find a Display that has at least 640 by 480 resolution?
Although the lookup service matching semantics only support exact matching, you could first get an array of all the different resolutions supported by currently registered Display services by using the ServiceRegistrar.getFieldValues() method. Once you know the range of available resolutions, you could set the attributeSetTemplates field of the ServiceTemplate to perform exact-match lookups for the resolutions you are interested in.
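A sketch of the first step of that search, assuming the Jini libraries are on the classpath; Display and Resolution are the hypothetical types from the example above, declared here only for illustration:

```java
import java.rmi.RemoteException;
import net.jini.core.entry.Entry;
import net.jini.core.lookup.ServiceRegistrar;
import net.jini.core.lookup.ServiceTemplate;

// Hypothetical well-known display service interface.
interface Display {
    void show(byte[] pixels) throws RemoteException;
}

// Hypothetical attribute entry; entry fields must be public object types.
class Resolution implements Entry {
    public Integer width;
    public Integer height;
    public Resolution() {}
}

public class ResolutionSearch {
    // Ask the lookup service for all distinct width values among
    // registered Display services. A Resolution template with all
    // fields null acts as a wildcard that matches every Resolution.
    public static Object[] availableWidths(ServiceRegistrar registrar)
            throws RemoteException {
        ServiceTemplate tmpl = new ServiceTemplate(
            null,
            new Class[] { Display.class },
            new Entry[] { new Resolution() });
        // setIndex 0 refers to the Resolution template above
        return registrar.getFieldValues(tmpl, 0, "width");
    }
    // Next step (not shown): filter the returned values locally, then
    // perform exact-match lookups for the acceptable resolutions.
}
```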
What is the difference between Jini and CORBA?
CORBA is perhaps more akin to RMI than Jini, but CORBA includes a service called the trader service that is somewhat reminiscent of the Jini lookup service. CORBA, which stands for Common Object Request Broker Architecture, enables you to invoke methods on remote objects written in any programming language. RMI, which stands for Remote Method Invocation, enables you to invoke methods on Java objects in remote virtual machines. The objects needn't have been written in the Java programming language, but they do need to be running in a Java virtual machine. Thus, they most likely were written in some language that was compiled to Java class files, and by far the most common language compiled to Java class files is Java.
CORBA has a naming service that lets you look up a remote object by name and obtain a remote reference to it. RMI has a registry server that lets you do the same thing: look up a remote object by name and obtain a remote reference to it. CORBA departs from RMI, however, in that to use the remote reference by invoking methods on a local stub, CORBA requires that the client have the definition of the stub locally. (In other words, CORBA requires that the code for the stub object be known to the developers that create the client.) RMI, by contrast, can send the class files for a stub object across the wire. Because an RMI client can dynamically load the code for a stub object, that code need not be known to the developers of the client.
CORBA does offer a "dynamic invocation interface" that enables clients to use remote objects without the stub definition, but it is more complex to use than just invoking methods on a local stub. The difference in complexity is similar to the difference between using a Java object through its interface and using the same Java object via the reflection API.
Another difference between CORBA and RMI is that RMI lets you pass objects by value as well as by reference when you invoke a remote method. Up until the CORBA/IIOP 2.2 specification was released in 1998, CORBA only allowed you to pass objects by reference. The 2.2 specification added support for objects by value, but this functionality may not yet be supported in many CORBA implementations.
A significant difference between CORBA and RMI is that when a subclass object is passed to a remote method by value, CORBA will truncate the object to the type declared in the method parameter list. For example, if a CORBA remote method expects an Animal and the client passes a Dog (a subclass of Animal), the remote object will receive an Animal (a truncated Dog), not a Dog. Because RMI can send the class files for the Dog across the network, the RMI remote object will receive a Dog. If the remote virtual machine has no idea what a Dog is, it will pull the class file for Dog down across the network.
As mentioned previously, CORBA does include one service that is reminiscent of Jini's lookup service: the CORBA trader service. Instead of just supplying a name with which a remote object is associated, as you do with the CORBA naming service (or the RMI registry), you describe the type of remote object you are seeking. Similarly, you can look up a Jini service by type. The Jini lookup service offers a bit more flexibility in searching, however, because you can also search by a globally unique service ID and by attributes. But the most important difference between the CORBA trader service and the Jini lookup service lies in what they return as a result of the query. The CORBA trader service returns a remote reference to a matching remote object. The Jini lookup service returns a proxy object by value.
Thus, when you get a remote reference back from the CORBA trader
service (assuming you have the stub definition), you can talk to
the remote object by invoking methods on the local stub. The local
stub will talk across the network to the remote object via CORBA.
When you talk to a Jini service object, on the other hand, that
service object may not talk across the network at all. It may
implement the service in its entirety locally in the client's
virtual machine. Or, it may talk across the network to a server,
servers, or some hardware via sockets and streams. Or, the service
object may actually be an RMI stub that communicates across the
network to a remote RMI object via the RMI wire protocol. The
service object returned by Jini can use any network protocol to
communicate across the network. It could even use CORBA.
Should Jini service interfaces extend Remote?
When you design an object whose methods you want to invoke from a different virtual machine via RMI, you must design an interface through which the client interacts with your object. Your object -- called a remote object -- should extend some subclass of java.rmi.server.RemoteObject, such as UnicastRemoteObject or Activatable. Each interface through which clients will interact with your remote object should extend the tag interface java.rmi.Remote. The Remote interface identifies interfaces whose methods may be invoked from non-local virtual machines. Each method in an interface that extends Remote should declare java.rmi.RemoteException in its throws clause. The checked RemoteException indicates that some problem occurred in fulfilling the remote method invocation request.
When you design a Jini service interface, you define the way in which clients will talk to your Jini service. Clients will receive some implementation of your interface from the lookup service. This object will likely be passed as a parameter or return value of an RMI remote method invocation between the client and the lookup service.
In general, Jini service interfaces should not extend java.rmi.Remote, because that would imply that the service is implemented as an RMI remote object. It is more flexible to let specific implementations of the service declare the Remote interface themselves if they are to be RMI remote objects. By not incorporating Remote in the service interface, you give people who provide implementations of the service the *option* of being an RMI remote object.
The methods of the service interface should, on the other hand, declare java.rmi.RemoteException in their throws clause. Any method that hopes ever to be invoked via RMI must declare java.rmi.RemoteException. Thus, to give providers of implementations of your service interface the option of being an RMI remote object, you must declare RemoteException in each method declared in the service interface.
Adding RemoteException to the throws clauses of the methods of your
Jini service interface indicates not only that RMI may be used by
implementations, but that the network in general may be used by
implementations. Thus, if an implementation of your Jini service
uses sockets and streams to talk across the network to a server, it
should indicate network problems by throwing some subclass of
RemoteException. Likewise, if an implementation uses CORBA or DCOM
to talk across the network to remote objects, it should throw some
subclass of RemoteException to indicate network problems.
Why does an application that exports an RMI remote object need to be able to load the stub class?
A stub class is generated by rmic from the class file that defines a class for a remote object. The stub class implements all the same remote interfaces as the remote object's class. When a client holds a remote reference to a remote object, it is actually holding a local reference to a local instance of the stub class for that remote object. Because the stub class implements all the same remote interfaces as the remote object's class, the client can invoke any method on the stub through the remote interfaces that it could invoke on the remote object.
Since the client must instantiate a stub object, it makes sense that the client needs to be able to load the class file for the stub class. The reason the server must also be able to load the stub class is less obvious. A stub class can be distributed along with clients, which enables the client to load the stub class from a local repository, such as from its local class path. However, RMI offers an alternative that makes it unnecessary to distribute stub classes with clients.
In an RMI method invocation, objects can be passed as parameters, returned as return values, or thrown as exceptions, either by value or by reference. If an object is a remote object, it is passed by reference; otherwise, it is passed by value.
To pass an object by reference, the remote object itself is replaced at serialization time by its stub. In other words, to pass a remote object by reference, RMI passes its stub by value. As with any other serialized object RMI passes along the wire, RMI optionally annotates the serialized stream with a codebase URL. If a recipient of a serialized stub (an RMI client) doesn't have the class file for the stub class available locally, the recipient can go to the codebase URL to download the class file across the network. By enabling the mobility of stub classes across the network, RMI eliminates the necessity to distribute stub classes with clients.
To pass an object by value, the state of the object is serialized, and the serialized stream is optionally annotated with a codebase URL. The codebase specifies how the recipient of the serialized object can get hold of any class files necessary for deserialization that aren't available at the receiving end.
Now, back to the original question: Why does the server that
exports a remote object need to have the stub available? When a
server exports a remote object for use by a client, the server
needs both the remote object class itself and the stub class,
because in the process of exporting a remote object, an instance of
the stub is created in the local (server's) virtual machine. In the
process of exporting a remote object, the stub class is requested
of the class loader that loaded the remote object's class, so the
class must be available locally to that particular class loader.
Whenever a client requests a remote reference to the remote object
via the rmi registry naming server, the client is sent the
serialized stub object. Thus, even though the server application
typically doesn't use the stub class overtly, it's needed by RMI's
serialization machinery when the remote object is exported.
How should I partition classes at the RMI codebase among jar and class files?
The two potentially contradictory goals you should aim for when thinking about how to partition the classes you need to make available through the RMI codebase are minimizing download time for needed classes and minimizing downloading of unneeded classes.
If you place only individual class files at the codebase, then clients won't have to download any class files they don't require. They will, instead, grab each class file they need individually. This approach eliminates the downloading of unneeded classes; however, it may not yield the minimum possible download time for needed classes.
When you place only individual class files at the codebase, clients must make a separate HTTP request for each class file. Were you to place all your class files in a single JAR file, only one HTTP request would be needed to grab the JAR file, which contains all the class files. This approach eliminates the time required by all those HTTP requests for individual class files, but forces all clients to download all class files that come in the JAR package, even if they only need one of those class files.
Nevertheless, depending on how big the JAR file is and how many classes contained in the JAR file are actually needed by a particular client, it is often more efficient to place the class files in a JAR. Not only does the client save time by only having to make one HTTP request, the contents of the JAR file can be compressed. (A JAR file, after all, is a ZIP file.)
Of course, if your JAR file contains many class files that may not be needed by many clients, it may still be more efficient to offer individual class files. An alternative, in-between approach that you can also consider taking when you've got many class files that many clients won't need is to distribute the class files among several JAR files. If you take this approach, you'll need to put a Class-Path: attribute in the manifest file of the initial JAR file that points to the other JAR files. You can place the classes that most clients will need in the initial JAR file, and classes that will be required less frequently in the other JAR files.
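For instance, with hypothetical JAR names, the manifest of the initial JAR might point at the secondary JARs like this:

```
Manifest-Version: 1.0
Class-Path: less-common.jar rarely-used.jar
```

Classes most clients need go in the initial JAR; a class found only in less-common.jar or rarely-used.jar triggers a download of that JAR when it is first required.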
For more information, check out John McClain's presentation on "How to avoid Codebase Problems":
How do I create entry beans for my service's attributes?
To help users view and edit the attributes attached to your Jini service, you can create entry beans for them. An entry bean is a class that follows the JavaBeans naming conventions, which serves as an adapter between a user and a service attribute entry. To view and manipulate the attribute entry, the user interacts with the entry bean.
To be a JavaBean, you need only declare a no-arg constructor and implement Serializable. Nevertheless, most beans go further than that by declaring get and set methods for properties, addListener and removeListener methods for events, and potentially providing other support classes such as BeanInfos, PropertyEditors, and Customizers.
An entry bean is any JavaBean that implements the EntryBean interface:

package net.jini.lookup.entry;

public interface EntryBean {
    public void makeLink(Entry e);
    public Entry followLink();
}
To connect an entry bean to its entry, you pass a reference to the entry to makeLink(). followLink() returns a reference to the Entry passed to makeLink().
You must give each entry bean class a name consisting of the name of the entry plus "Bean". For example, an entry bean for a "Provider" attribute entry would have to be named "ProviderBean".
To enable the bean to be found and loaded, you merely make the class files for the bean (and any supporting class files, such as class files for Customizers, BeanInfos, etc.) available at the codebase that contains the class files for the attribute entry. You needn't register the bean as an attribute. When a tool goes looking for an entry bean to help a user interact with an entry, the tool will look in the codebase specified in the serialized image of the entry object.
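A minimal sketch of such a pairing, assuming the Jini libraries are on the classpath; the Provider entry and its name field are hypothetical:

```java
import java.io.Serializable;
import net.jini.core.entry.Entry;
import net.jini.entry.AbstractEntry;
import net.jini.lookup.entry.EntryBean;

// Hypothetical attribute entry identifying who provides a service.
public class Provider extends AbstractEntry {
    public String name;
    public Provider() {}
}

// The matching entry bean must be named Provider + "Bean".
class ProviderBean implements EntryBean, Serializable {
    private Provider entry;

    public ProviderBean() {}                // JavaBeans no-arg constructor

    public void makeLink(Entry e) { entry = (Provider) e; }
    public Entry followLink()     { return entry; }

    // Bean-style property accessors adapt the entry's public field.
    public String getName()          { return entry.name; }
    public void setName(String name) { entry.name = name; }
}
```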
How are lookups by type implemented?
Jini's lookup service enables clients to look for specific kinds of services. When multiple services of the desired kind exist, clients can use attributes to narrow their search and help them select the best service for their needs.
To specify the kind of service desired, clients specify Java types. The types specified are most often interfaces, but can also be classes. Because developers of Jini clients must indicate the kind of desired service with a Java type, the developer knows about the type at compile-time, and will therefore know how to use whatever object is returned by the lookup service.
To indicate the desired Java type when performing a lookup, the client places Class instances in its service template. The lookup service compares the fully qualified names of the types represented by the client's specified Class instances with the type names of the registered service objects. It doesn't compare any class information.
Because the lookup service compares types by name only, two different types with the same fully qualified name would match. When the service object for such a mismatched type gets back to the client, however, the deserialization process would detect the mismatch and throw an exception.
The reason the lookup service can get by with performing merely a string compare on the type names is that fully qualified names are supposed to be globally unique. An organization such as IBM is responsible for making sure that no two types it releases in the com.ibm namespace have the same fully qualified name, and they aren't supposed to let the world ever see anything they made whose name doesn't start with "com.ibm".
Can a client be notified when new services register with a lookup service?
Yes. The ServiceRegistrar interface includes a
notify() method:
public EventRegistration notify(ServiceTemplate tmpl, int transitions, RemoteEventListener listener, MarshalledObject handback, long leaseDuration) throws RemoteException;
You invoke notify() to register yourself (or some other listener) as interested in receiving notifications of state changes in the lookup service. You describe the services you are interested in with a ServiceTemplate, which can have wildcard (null) fields, which match anything. The transitions are based on a change (or non-change) in the status of what matches your ServiceTemplate before and after any operation performed on the lookup service. To be notified when new services are added to a lookup service, therefore, you simply specify a template that matches any service, and pass TRANSITION_NOMATCH_MATCH as the transition to the notify() method.
Does the discovery protocol limit Jini within a single subnet?
The discovery protocol is really three protocols in one:
When a service provider or client finds itself connected to a new and unfamiliar network, it sends out a presence announcement on a well-known multicast port. This is the multicast request protocol. Lookup services monitor this well-known port. When a lookup service receives a presence announcement, it inspects the packet and decides whether or not to contact the sender. If it decides to make contact, it establishes a direct unicast connection to a host and port included in the presence announcement. The service provider or client sends a ping across this direct connection, and the lookup service responds with a service registrar object. The ping/service registrar conversation is the unicast discovery protocol.
Lookup services may also periodically send a presence announcement to a well-known multicast port, which clients and service providers can monitor. This is the multicast announcement protocol. If a client or service provider receives such a presence announcement, they can make a direct unicast connection back to a host and port number included in the presence announcement. The two parties perform the unicast discovery protocol across this connection: the client or service provider sends a ping, and the lookup service sends a service registrar object.
The unicast discovery protocol can also be used when a client or service provider has a long-term relationship with a lookup service. A client or service provider can simply make a direct unicast connection to a lookup service, and exchange a ping for a service registrar object.
None of these three discovery sub-protocols is necessarily limited to a single
subnet. The unicast discovery protocol, because it involves a known host and
port where the lookup service resides, could be used across the internet. The
other two protocols, because they involve multicast, will be more
geographically limited. Nevertheless, whether or not a multicast packet
reaches beyond the borders of the subnet in which it is launched depends
on how the local system administrators set up the nearby gateways. A
multicast packet may be limited to a single subnet, or it may migrate to
multiple nearby subnets.
How are service IDs generated such that they can be globally unique?
The first time a Jini service registers itself with a lookup service, the lookup service creates a service ID for that service. The service provider is supposed to remember this ID forever. Every time it registers its service with any lookup service from that point forward, it should specify its service ID.
The service ID provided by the lookup service is supposed to be globally unique. In other words, a particular service ID, if it is ever generated, should be generated only once by any lookup service, anywhere in the universe, at any time. How does this work?
First of all, the service ID is 128 bits long. The size of the ID itself makes it unlikely that any two randomly generated service IDs would come out to be the same value, so long as the random number generators started with different seed values.
The Jini lookup service uses a technique that includes randomization, but
also includes other techniques that in effect guarantee the uniqueness of
each service ID until the year 3400. In 60 of the
128 bits, the lookup service expresses the current system time in the number
of 100 nanosecond ticks since 1582. The rest of the ID is a random number
and, in some lookup service implementations, a unique host address for the
lookup service.
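The same 60-bit, 100-nanosecond-ticks-since-1582 timestamp layout was later standardized as RFC 4122 version-1 UUIDs, so you can inspect the scheme with Python's uuid module (the Jini lookup service has its own Java implementation; this is only an illustration of the ID layout):

```python
import uuid
from datetime import datetime, timedelta

# Start of the Gregorian calendar: the epoch for version-1 UUID timestamps.
GREGORIAN_EPOCH = datetime(1582, 10, 15)

u = uuid.uuid1()            # a time-based (version 1) UUID
assert u.version == 1

# u.time is the 60-bit count of 100 ns intervals since the epoch;
# dividing by 10 converts it to microseconds.
stamp = GREGORIAN_EPOCH + timedelta(microseconds=u.time // 10)
print(stamp.year)           # the current year: the timestamp really encodes "now"
```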
What are the ServiceRegistrar's browsing methods for, and how are they used?
The
ServiceRegistrar has three methods that are called "browsing methods,"
getServiceTypes(),
getEntryClasses(), and
getFieldValues(). These three methods
are called "browsing methods" because their intended purpose is to enable clients
to browse the services and attributes in the lookup service.
The
getServiceTypes() method takes a
ServiceTemplate (the same
ServiceTemplate that's passed
to the
lookup() methods) and a
String prefix. The method returns an array of
Class
instances representing the most specific types (classes or interfaces) of the service
objects that match the template which are neither equal to, nor a superclass of, any
of the types specified in the template and that have names that start with the specified
prefix. The service object or objects for whom
Class instances are returned are all
instances of all the types (if any) passed in the template, but the
Class instances
returned are all more specific than (are subclasses or subinterfaces of) the types specified
in the template. Each class appears only once in the returned array, and the order of
the classes in the returned array is arbitrary.
The
getEntryClasses() method takes a
ServiceTemplate, and returns an array of
Class instances
that represent the most specific classes of entries for those service items that match
the template which either don't match any entry template or are a subclass of an
entry template. Each class appears only once in the returned array, and the order of
the classes in the returned array is arbitrary.
The
getFieldValues() method takes a
ServiceTemplate, an integer index, and a
String field
name. The method returns an array of
Objects for the named field of all instances of the
entry that appears in the
ServiceTemplate's
Entry[] array at the passed index in any
matching service item. Each object of a particular class and value appears only once in
the returned array, and the order of the
Object values in the returned array is arbitrary.
The behavior and purpose of these methods can be obscure. A good way to think of them is
enabling clients to sequentially narrow queries of the lookup service. A client, such
as a graphical lookup service browser, could begin by invoking
getServiceTypes() with
an empty template. The
getServiceTypes() method returns all possible service types
registered in the lookup service, which the browser could display. The user could
select one or more types, then push the Requery button. The browser would add those types to
the service template, and invoke
getServiceTypes() again. A smaller list of types would be
returned, and the browser would display those. The user could repeat the cycle until the results were narrow enough that the user could select and use a service.
Thus, these three "browsing methods" are geared towards helping clients, whether a human
user is involved or not, to browse the lookup service. The arrays returned from the
browsing methods can help the client further refine its queries, ultimately resulting
in a
ServiceTemplate that, when passed to
lookup(), returns the most appropriate
service object.
Why do I get
com.sun.rmi.rmid.ExecOptionPermission exceptions when
starting Jini?
An
ExecOptionPermission exception, such as the one
shown below, most likely means that you're having trouble with
the new Java/RMID security system introduced in JDK1.3 and backported to
JDK1.2.2.
java.rmi.activation.ActivateFailedException: failed to activate object; nested exception is: java.security.AccessControlException: access denied (com.sun.rmi.rmid.ExecOptionPermission -Djava.security.policy=
)
An explanation of the new security policy can be found here:
Starting RMID with the
sun.rmi.activation.execPolicy property set
to
none will turn off security checks, and yield a "policy.all" like behavior:
rmid -J-Dsun.rmi.activation.execPolicy=none
Iain Shigeoka created a new ultra-promiscuous policy file that will also turn off security checks in RMID. You can download this file as part of the jini-tools at:
Hi Everyone.
I am new to C++, and trying to complete an assignment. I have to create a class called 'Element', declare it in a header, and implement it in a C++ file. When I try to compile it to object code (-c flag on g++), I get the following error:
g++ -c Element.cpp
Element.cpp:5: error: new types may not be defined in a return type
Element.cpp:5: error: return type specification for constructor invalid
here is the code for the .h:
Code:
//Element.h an item implementation to hold any queue/stack data.
#ifndef ELEMENT_H
#define ELEMENT_H
class Element{
public:
Element(int* newItem);
int* getItem();
void setNext(Element*);
Element * getNext();
private:
int * item;
Element* next;
}
#endif
all the research i have done on this compilation error points to the lack of a semi-colon after a struct... so I really dont know what im looking for. Any ideas on how to fix this?
and the .cpp:
Code:
//Element.cpp implementation to hold data element
#include <stdio.h>
#include "Element.h"
Element::Element(int * newItem){
item = newItem;
}
int* Element::getItem(){
return item;
}
void Element::setNext(Element* newNext){
next = newNext;
}
Element* Element::getNext(){
return next;
}
In addition, I later have to alter this class so it can store any type, not just 'int', using a template. Can anyone tell me where I can start looking to figure out how to do that?
Thanks
In Java, all parameters are passed by value. In C++, a parameter can be passed by value, by reference, or by const-reference. For example:

void f( int a, int &b, const int &c );

Parameter a is a value parameter, b is a reference parameter, and c is a const-reference parameter.
Value Parameters
When a parameter is passed by value, a copy of the parameter is made. Therefore, changes made to the formal parameter by the called function have no effect on the corresponding actual parameter. For example:
void f(int n) { n++; }

int main() {
    int x = 2;
    f(x);
    cout << x;
}

In this example, f's parameter is passed by value. Therefore, although f increments its formal parameter n, that has no effect on the actual parameter x. The value output by the program is 2 (not 3).
Note that if a pointer is passed by value, then although the pointer itself is not affected by changes made to the corresponding formal parameter, the object pointed to by the pointer can be changed. For example:
void f(int *p) {
    *p = 5;
    p = NULL;
}

int main() {
    int x = 2;
    int *q = &x;
    f(q);
    // here, x == 5, but q != NULL
}

In this example, f's parameter is passed by value. Therefore, the assignment p = NULL; in f has no effect on variable q in main (since f was passed a copy of q, not q itself). However, the assignment *p = 5; in f does change the value pointed to by q. To understand why, consider what happens when the example program runs:
After executing the two statements:

    int x = 2;
    int *q = &x;

memory looks like this:

       +---+
    x: | 2 | <--+
       +---+    |
                |
       +---+    |
    q: | --|----+
       +---+

Now function f is called; the value of q (which is the address of x) is copied into a new location named p:

       +---+
    x: | 2 | <--+  <--+
       +---+    |     |
                |     |
       +---+    |     |
    q: | --|----+     |
       +---+          |
                      |
       +---+          |
    p: | --|----------+
       +---+

Executing the two statements in f:

    *p = 5;
    p = NULL;

causes the values of x (the thing pointed to by p) and p to be changed:

       +---+
    x: | 5 | <--+
       +---+    |
                |
       +---+    |
    q: | --|----+
       +---+

       +----+
    p: |NULL|
       +----+

However, note that q is NOT affected.
Reference Parameters

When a parameter is passed by reference, conceptually, the actual parameter itself is passed (and just given a new name -- the name of the corresponding formal parameter). Therefore, any changes made to the formal parameter do affect the actual parameter. For example:
void f(int &n) { n++; }

int main() {
    int x = 2;
    f(x);
    cout << x;
}

In this example, f's parameter is passed by reference. Therefore, the assignment to n in f is actually changing variable x, so the output of this program is 3.
When you write a function whose purpose is to compute two or more values, it makes sense to use reference parameters (since a function can return only one result). For example, if you want to read a list of integers from a file, and you want to know both how many integers were read, as well as the average value that was read, you might use a function like the following:
void f(istream &input, int &numRead, double &average) {
    int k, sum = 0;
    numRead = 0;
    while (input >> k) {
        numRead++;
        sum += k;
    }
    average = (double)sum / numRead;
}

Another common use of reference parameters is for a function that swaps two values:
void swap( int &j, int &k ) {
    int tmp = j;
    j = k;
    k = tmp;
}

This is useful, for example, in sorting an array, when it is often necessary to swap two array elements. The following code swaps the jth and kth elements of array A:
swap(A[j], A[k]);
Const-Reference Parameters

Another reason to use reference parameters is when you don't want the function to modify an actual parameter, but the actual parameter is very large, and you want to avoid the overhead of creating a copy. Of course, this only works if the function does not modify its formal parameter. To be sure that the actual parameter is not "accidentally" modified, you should use a const-reference parameter. Declaring the parameter to be const tells the compiler that it should not be changed; if the function does change the parameter, you will get a compile-time warning (possibly an error on some systems). For example:
void f(const IntList &L) {
    -- the code here cannot modify L or the compiler will complain --
}

The potential use of a const-reference parameter is the reason why member functions that do not modify any data members should be declared const. For example, suppose that the IntList Print member function was not declared const. Then the following code would cause a compile-time error:
void f(const IntList &L) {
    L.Print(cout);
}

Because L is a const-reference parameter, it is the compiler's job to be sure that L is not modified by f (and that means that no data members of L are modified). The compiler doesn't know how the Print function is implemented; it only knows how it was declared, so if it is not declared const, it assumes the worst, and complains that function f modifies its const-reference parameter L.
Array Parameters

Another unfortunate thing about C++ arrays is that they are always passed by reference (even though you don't declare them to be reference parameters). For example:
void f(int A[]) { A[0] = 5; }

int main() {
    int B[10];
    B[0] = 2;
    f(B);
    cout << B[0] << endl;  // the output is 5
}

Although f's parameter looks like it is passed by value (there is no &), since it is an array it is actually passed by reference, so the assignment to A[0] is really assigning to B[0], and the program prints 5 (not 2).
If you want to pass an array by value, you should use a vector, not a regular C++ array (see the last section in the notes on C++ classes for information about vectors).
void Mystery( int & a, int & b, int c ) {
    a = b + c;
    b = 0;
    c = 0;
}

void Print() {
    int x = 0, y = 1, z = 2;
    Mystery(x, y, z);
    cout << x << " " << y << " " << z;
    Mystery(x, y, z);
    cout << x << " " << y << " " << z << endl;
}

What is output when function Print is called?
A. 0 1 2 0 1 2
B. 3 0 0 3 0 0
C. 0 1 2 3 0 0
D. 3 0 2 2 0 2
E. 3 0 0 0 0 0
A. Sqrt is written using recursion; it should be written using
iteration.
B. Sqrt is written using iteration; it should be written using recursion.
C. Sqrt's parameter is a value parameter; it should be a reference parameter.
D. Sqrt's parameter is a reference parameter; it should be a value parameter.
E. Sqrt's parameter is a const reference parameter; it should be a value parameter.
#include <iostream>

void Sum(int a, int b, int & c) {
    a = b + c;
    b = a + c;
    c = a + b;
}

int main() {
    int x = 2, y = 3;
    Sum(x, y, y);
    cout << x << " " << y << endl;
    return 0;
}

What happens when this program is compiled and executed?
A. There is a compile-time error because in the call
Sum(x, y, y),
variable y is passed both by value and by reference.
B. There is a run-time error because in the call Sum(x, y, y), variable y is passed both by value and by reference.
C. The program compiles and executes without error. The output is: 2 3
D. The program compiles and executes without error. The output is: 6 9
E. The program compiles and executes without error. The output is: 2 15
Questions 4 and 5 refer to the following function.
bool Mystery(const vector <int> & A)
// precondition: A is sorted
{
    int k;
    for (k = 1; k < A.size(); k++) {
        if (A[k-1] == A[k]) return true;
    }
    return false;
}
Which of the following best describes what function Mystery does?
A. Always returns true.
B. Always returns false.
C. Determines whether vector A really is sorted.
D. Determines whether vector A contains any duplicate values.
E. Determines whether all values in vector A are the same.
Which of the following is the best reason for parameter A to be passed by const reference rather than by value?
A. A may be modified by function Mystery;
thus, it must be a reference parameter.
B. A is indexed by function Mystery; thus, it must be a reference parameter.
C. A's size member function is used by function Mystery; thus, it must be a reference parameter.
D. It is more efficient to pass A by reference than by value.
E. There is no reason for A to be a reference parameter.
This is the third post in a series of articles where I want to analyze and describe the new upcoming mapping interface providing a fluent interface to NHibernate for the mapping of a domain model to the underlying database.
You can get the source code of the solution accompanying this post here.
In the time between my last post and today a lot has happened to the mapping framework. The contributors are busily improving the source and are also very responsive to my questions and remarks.
In this post I want to focus on how one can map various relations between entities and value objects of a domain model.
Let's have a look at the following simplified model
I have a Blog which has an author of type Person. Each Blog can have many Posts. To each Post readers can give feedback in the form of Comments. Comments are considered value objects in this model, that is they have no identity and are immutable (a reader cannot edit its comment after it has been published...). All other elements are true entities. If I consider the Blog to be the root object then the relation between Blog and Person is of type many-to-one (a person can be the owner of more than one blog). On the other hand the relation between Blog and Post is of type one-to-many. The parent and the children are both entities.
A special case (as we will see) is the relation between Post and Comment (since Comment is a value object). It is also of type one-to-many but this time the parent is an entity and the children are value objects.
How can we map this? Well, let's start with the easy one. In this simplified model the Person class has no external dependencies and is thus easy to map
public class PersonMap : ClassMap<Person>
{
public PersonMap()
{
Id(x => x.Id);
Map(x => x.FirstName);
Map(x => x.LastName);
}
}
The Comment class also has no external dependencies. I want to treat the Comment as a value object, so I map it (as a component inside the PostMap) as follows:
public class PostMap : ClassMap<Post>
{
    public PostMap()
    {
        Id(x => x.Id);
        Map(x => x.Title);
        Map(x => x.Body);
        Map(x => x.PublicationDate);
        HasMany<Comment>(x => x.Comments)
            .Component(c =>
            {
                c.Map(x => x.Text);
                c.Map(x => x.AuthorEmail);
                c.Map(x => x.CreationDate);
            }).AsSet();
    }
}
Note that I have to tell the framework that I want to have a set by using the AsSet method. The default is a bag (represented by a list in .Net).
Finally we can map the Blog class which is now easy
public class BlogMap : ClassMap<Blog>
{
    public BlogMap()
    {
        Id(x => x.Id);
        Map(x => x.Name);
        References(x => x.Author);
        HasMany<Post>(x => x.Posts).AsSet().Cascade.AllDeleteOrphan();
    }
}
Note that we map the many-to-one relation between Blog and Person with the aid of the References method. Note further that we map the collection of Posts with the HasMany method. Since by default this method maps to a "bag" we have to further specify that we want to map with a "set" (--> see my post on collection mapping for the various types of collections). Finally I also tell the framework that I want to have all posts of a blog deleted, if the blog is deleted and that I want to cascade all updates and inserts.
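For comparison, a hand-written XML sketch of what such a BlogMap corresponds to (this is illustrative, not generated output; the column names and attribute details are guesses):

```xml
<?xml version="1.0" encoding="utf-8"?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <class name="Blog" table="[Blog]">
    <id name="Id" column="Id" type="Int64">
      <generator class="identity" />
    </id>
    <property name="Name" column="Name" type="String" />
    <many-to-one name="Author" column="Author_id" />
    <set name="Posts" cascade="all-delete-orphan">
      <key column="Blog_id" />
      <one-to-many class="Post" />
    </set>
  </class>
</hibernate-mapping>
```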
The XML generated by the PostMap class above is shown below. Now you can ask yourself which way of mapping your entities you prefer...
<?xml version="1.0" encoding="utf-8"?>
<hibernate-mapping
<class name="Post" table="[Post]" xmlns="urn:nhibernate-mapping-2.2">
<id name="Id" column="Id" type="Int64">
<generator class="identity" />
</id>
<property name="PublicationDate" column="PublicationDate" type="DateTime">
<column name="PublicationDate" />
</property>
<property name="Body" column="Body" length="100" type="String">
<column name="Body" />
<property name="Title" column="Title" length="100" type="String">
<column name="Title" />
<set name="Comments" cascade="none">
<key column="Post_id" />
<composite-element class="FluentMapping.Domain.Scenario3.Comment,
FluentMapping.Domain, Version=1.0.0.0,
Culture=neutral, PublicKeyToken=null">
<property name="CreationDate" column="CreationDate" type="DateTime">
<column name="CreationDate" />
</property>
<property name="AuthorEmail" column="AuthorEmail" length="100" type="String">
<column name="AuthorEmail" />
<property name="Text" column="Text" length="100" type="String">
<column name="Text" />
</composite-element>
</set>
</class>
</hibernate-mapping>
Ok I admit, when hand crafting the XML we can skip some of the elements, but still...
All tests use the base class FixtureBase which I have introduced in my first post about the mapping framework. For completeness I present here the source once again
public class FixtureBase
{
    protected SessionSource SessionSource;
    protected ISession Session;

    [SetUp]
    public void Before_each_test()
    {
        Session = SessionSource.CreateSession();
        SessionSource.BuildSchema(Session);
        CreateInitialData(Session);
        Session.Flush();
        Session.Clear();
    }

    [TearDown]
    protected virtual void After_each_test()
    {
        Session.Close();
        Session.Dispose();
    }

    protected virtual void CreateInitialData(ISession session)
    {
    }
}
Now let's test whether we can create a blog and add posts to it. Let's start with the former. I define a base class for all my further blog related tests as
public class Blog_Fixture : FixtureBase
{
    protected Person author;

    protected override void CreateInitialData(ISession session)
    {
        base.CreateInitialData(session);
        author = new Person { FirstName = "Gabriel", LastName = "Schenker" };
        session.Save(author);
    }
}
In the CreateInitialData method I create an author object since every blog has to have an author. I save this author object to the database. To make the author available to all child test classes I have declared it as a protected field. Now to the test which tries to create a new blog and verifies that it has been written correctly and completely to the database:
[TestFixture]
public class When_no_blog_exists : Blog_Fixture
{
    [Test]
    public void Can_add_new_blog()
    {
        var blog = new Blog { Name = "Gabriel's Blog", Author = author };
        Session.Save(blog);
        Session.Flush();
        Session.Clear();

        var fromDb = Session.Get<Blog>(blog.Id);
        fromDb.ShouldNotBeNull();
        fromDb.ShouldNotBeTheSameAs(blog);
        fromDb.Id.ShouldEqual(blog.Id);
        fromDb.Name.ShouldEqual(blog.Name);
        fromDb.Author.ShouldNotBeNull();
        fromDb.Author.Id.ShouldEqual(blog.Author.Id);
    }
}
Note that I have inherited this test class from the previously implemented Blog_Fixture class. In the test method I first create a new blog instance. Then I save it to the database. I then flush and clear the session instance to guarantee that all the objects in NHibernate's session cache are written to the DB and that the cache is cleared afterwards, so that when a read operation is invoked the respective object is really retrieved from the database.
If you wonder where all these ShouldXXX methods in the second part of the test come from, then wonder no longer. These are extension methods which I have implemented. They make all the asserts that you would normally do with the aid of NUnit's Assert class. But like this the code is way more readable, isn't it? If you wonder how these methods are implemented then please have a look into the source code of the solution accompanying this post. Search for the class SpecificationExtensions.
When running this test it succeeds!
But we have seen in the past that the framework offers us some help to reduce the size of our test methods. So let's revisit the test and leverage the framework.
[Test]
public void Can_add_new_blog_revisited()
{
    new PersistenceSpecification<Blog>(Session)
        .CheckProperty(x => x.Name, "Gabriel's Blog")
        .CheckProperty(x => x.Author, author)
        .VerifyTheMappings();
}
Yeah, much shorter! That's what I call wrist friendly... Of course, when run, this test also succeeds.
Second we want to try to add a post to an already existing blog. I have the following code for that
public class When_a_blog_exists : Blog_Fixture
{
    private Blog blog;

    protected override void CreateInitialData(ISession session)
    {
        base.CreateInitialData(session);
        blog = new Blog { Name = "Gabriel's Blog", Author = author };
        session.Save(blog);
    }

    [Test]
    public void Can_add_post_to_blog()
    {
        var post = new Post
        {
            Title = "First Post",
            Body = "Just a test",
            PublicationDate = DateTime.Today
        };
        blog.Posts.Add(post);
        Session.Update(blog);
        Session.Flush();
        Session.Clear();

        var fromDb = Session.Get<Blog>(blog.Id);
        fromDb.Posts.Count.ShouldEqual(1);
        fromDb.Posts.First().Id.ShouldEqual(post.Id);
    }
}
In CreateInitialData I set up my context, which in this case is: I have a blog in the database. In the test method I take this existing blog instance and add a new post to it. I then tell the session to update the blog. As usual I flush and clear the session before I assert that the operation was indeed successful.
Now I reload the blog from the database and test whether it has one post as expected and whether it's the post we have added to the blog (it suffices to test the post's id). Note that the method First() applied to the Posts collection of the blog (on the last line of the test) is also an extension method. This extension method just returns the first element of any collection of objects implementing IEnumerable<T> onto which it is applied.
Again when we run the test it is successful.
We want to leverage the framework once more and thus I revisit the test
[Test]
public void Can_add_post_to_blog_revisited()
{
    List<Post> posts = new List<Post>();
    posts.AddRange(new[]
    {
        new Post
        {
            Title = "First Post",
            Body = "Just a test",
            PublicationDate = DateTime.Today
        },
        new Post
        {
            Title = "Second Post",
            Body = "Just another test",
            PublicationDate = DateTime.Today.AddDays(-1)
        }
    });

    new PersistenceSpecification<Blog>(Session)
        .CheckList(x => x.Posts, posts)
        .VerifyTheMappings();
}
Once again I use our friend the PersistenceSpecification class. This time I use its method CheckList to test the Posts collection of the blog instance. This method expects a collection of Post objects which I have defined in the first part of this test. Here I have defined two posts in the list, but a single one would also suffice for the test.
Let me recap: to completely test the mapping of the Blog class I need four lines of code! Nice.
The last thing we have left to test is whether we can add comments to our posts. First I set up my context; that is, I have a blog with one post. I also prepare a comment which I can later add to the post.
public class When_a_blog_with_a_post_exists : Blog_Fixture
{
    private Blog blog;
    private Post post;
    private Comment comment;

    protected override void CreateInitialData(ISession session)
    {
        base.CreateInitialData(session);
        blog = new Blog { Name = "Gabriel's Blog", Author = author };
        post = new Post
        {
            Title = "First Post",
            Body = "Just a test",
            PublicationDate = DateTime.Today
        };
        blog.Posts.Add(post);
        session.Save(blog);
        comment = new Comment("This is my comment", DateTime.Today, "someone@gmail.com");
    }
}
Once my context is set up, writing the test is easy. I read the post from the database, add the prepared comment to it and then flush and clear the session (note that the session automatically realizes that the post is dirty and that an update must be made to the database). Then I re-read the post from the database and verify that indeed one comment was added and that it is the comment which I expect (by comparing its Id).
[Test]
public void Can_add_comment_to_post()
{
    var thePost = Session.Get<Post>(post.Id);
    thePost.Comments.Add(comment);
    Session.Flush();
    Session.Clear();

    var fromDb = Session.Get<Post>(post.Id);
    fromDb.Comments.Count.ShouldEqual(1);
    fromDb.Comments.First().ShouldEqual(comment);
}
And again the test succeeds. The test using the PersistenceSpecification class is like this
[Test]
public void Can_add_comment_to_post_revisited()
{
    new PersistenceSpecification<Post>(Session)
        .CheckProperty(x => x.Title, "Some title")
        .CheckProperty(x => x.Body, "Some text")
        .CheckProperty(x => x.PublicationDate, DateTime.Today)
        .CheckComponentList(x => x.Comments, new[] { comment })
        .VerifyTheMappings();
}
which succeeds as usual!
In this post I have shown you that the mapping framework is indeed ready for mapping more advanced scenarios. I have shown you how to map one-to-many relations where either the children are entities or the children are value objects. I also have shown how to map many-to-one relations.
Enjoy
Lazy loading BLOBS and the like in NHibernate
Legacy DB and one-to-one relations
Linq to NHibernate
> It would be easier to say what's happening if you showed a bit more of
> your build setup, for example the rules that use cspm*.

This is the entire relevant Makefile.am:

EXTRA_DIST = cspm.lex cspm.y

if DO_CSPT
bin_PROGRAMS = cspt
cspt_CXXFLAGS = -Wno-deprecated -DYYDEBUG=1 -include iostream.h
cspt_SOURCES = ParseNode.cc ParseNode.h ParseNodes.cc ParseNodes.h \
	Symbols.cc Symbols.h TranContext.cc TranContext.h
nodist_cspt_SOURCES = lex.yy.cc cspm.tab.cc
BUILT_SOURCES = lex.yy.cc cspm.tab.cc
cspt_LDADD = $(LEXLIB)

lex.yy.cc: cspm.lex cspm.tab.cc ParseNode.h
	$(LEX) -+ -o$@ cspm.lex
	grep -v "class istream" lex.yy.cc > lex.yy.tmp  # get rid of "class istream"
	/bin/rm lex.yy.cc
	/bin/mv lex.yy.tmp lex.yy.cc

cspm.tab.cc: cspm.y ParseNodes.h Symbols.h
	bison -v -d cspm.y
	mv cspm.tab.c cspm.tab.cc

CLEANFILES = lex.yy.cc cspm.output cspm.tab.cc
endif

> Without that information, I can only guess: Your package is broken with
> respect to VPATH builds, i.e., srcdir != builddir. "make distcheck"
> tests this.
>
> Try this: generate your tarball. In an empty directory, do:
>   gzip -dc $PACKAGE.tar.gz | tar xvf -
>   mkdir build
>   cd build
>   ../$PACKAGE/configure [OPTIONS]
>   make
>   make check
>
> If this fails, you may need to add a few $(srcdir) and/or $(builddir)
> prefixes to your rules (Automake mostly does this for you in its
> generated stuff).

This does fail. Could you please give more detail on the solution? I am
not sure what rules you are talking about, nor exactly what you mean by
a prefix. I am new to automake so excuse my scant automake vocabulary.

> Don't hesitate to show the rules; even if it seems to work on your
> system, they can be tricky to get right so they work for any "make"
> implementation.
Compiler internals
From Nemerle Homepage
Introduction
This page is to help anyone get into the code base of the Nemerle compiler. It is already quite a large project, and many explanations are required.
Compiler passes
The compiling process is divided into a few more or less separate phases. Most of them transform some representation of the compiled program, and when connected together they yield the binary executable. This gluing of compiler passes is done in passes.n and can be customized to some degree by changing compilation options.
We now describe each pass in the order in which it appears in the compilation of a common program.
Before we actually load and analyse some source code, we take a look at the options and library references specified on the command line. We need to load every class present in the libraries specified by the user (like -ref:bla.dll) and needed by the compiler (like mscorlib.dll).
All the code devoted to loading external metadata is gathered in ncc/external/. Here, for every assembly, we load its classes and place them in the global namespace tree (described in hierarchy building). We analyze the contents of classes lazily, when they are referenced somewhere in the program. When this occurs for some class Foo, we build a special subclass of Nemerle.Compiler.TypeInfo, which contains information about the external type.
In this stage macros are also loaded and placed in the namespace tree. This is important, because macros introduce new syntax, so when we begin parsing, the syntax extensions associated with macros are also loaded. The nice thing is that you can tell the compiler to load macros from a different library than the default Nemerle.Macros.dll, and this way you can change quite a lot of Nemerle's syntax.
Lexical analysis
The lexer transforms text from compiled files into so-called tokens, which simply represent stuff like identifiers, numbers, operators, etc. Lexing is done inside the lexer classes and consists of loading a file from disk, going through every character in the file, ignoring whitespace (but counting it to create correct location information) and returning tokens one by one when requested.
One thing to note here is that the lexer can be requested to add a new keyword for recognition - this is because our macros allow loading new syntax with new keywords during parsing of a file (by the using Name.Space; directive).
We also have special lexer subclasses, which analyze a given string instead of a file from disk (LexerString).
Preparser
The pre-parsing phase groups the stream of tokens obtained from the lexer into a tree of parentheses. We distinguish four types of them ({} () [] <[ ]>). Tokens inside those parentheses are also divided into groups separated by special separator tokens. This way we can have a general skeleton of the code (a tree based on matched parentheses) quite early in the compilation process. It is useful for our syntax extensions.
This pass is quite simple: it modifies the Next field in Token to point to the next token in the stream. Also, for every pair of braces a special node is created, which contains its inner token stream in the Child field. One more important thing happens in this phase - the preparser recognizes using Name.Space; statements and enables the syntax extensions available in the loaded macros (they are looked up in the namespace tree by namespace).
Parser
Object hierarchy building
The first thing the compiler does with a parsed program is analyse its general structure; it notes the presence of all classes which are defined, builds the inheritance relation between them, adds members and checks the overrides and interface implementations. Between some of these steps, attribute macros are expanded. These operations are gathered inside the ncc/hierarchy/ directory. Here is the exact list of tasks performed, with short explanations:
- Walk the parsed types in all files, and for every type create its TypeBuilder and add it to the global namespace tree. Add nested types to their enclosing types' special list. Here we also expand macros and delegates into their underlying classes. It all happens in ScanTypeHierarchy.n.
- We need to build the generic environment for every class, with special care for nested types and the merging of partial classes (which are gathered in the previous step). It happens in the make_tyenvs function of TypeBuilder.
- Now we expand macros marked as BeforeInheritance.
- Now we build the inheritance relation for classes. Every class (TypeBuilder) is set up with information about all its parents and interfaces, together with substitution information (for example, a class may implement some interface under some generic instantiation, like class Int32 : IComparable[Int32]). We also analyse here which class is an interface, a struct, etc. and make them inherit some faked types (classes are subtypes of object, and structs of System.ValueType).
- Macros with BeforeTypedMembers are expanded here.
- Next we create objects representing the members of classes. Every TypeBuilder iterates over its members (in parsed form) and creates subclasses of MemberBuilder, binds the types specified in their headers (a lookup in the namespace tree is performed), checks some consistency rules about member attributes, etc.
- Macros with the WithTypedMembers flag are expanded on class members, which can now be supplied as various kinds of MemberBuilder.
- We check whether every interface was fully implemented, either implicitly or by the explicit implements keyword.
Data structures
- NamespaceTree holds the whole hierarchy of objects present in the compilation (both loaded from external libraries and from the current compilation) according to their names and nesting inside namespaces and classes. Each class or macro can be found there by its fully qualified name (for example, System.Collections.ArrayList is a leaf on the path through System, Collections and ArrayList). This data structure is organised into a tree, where each node might have some children (nested objects) and a value. The value may be one of the variant options, like Cached for a single, created TypeInfo object, or MacroCall holding an IMacro instance, etc. The code is located in ncc/hierarchy/NamespaceTree.n
- GlobalEnv represents the set of imported namespaces (opened by the using Name.Space; construct), the declared using aliases and the current namespace. It is meant as the global context in which a given program fragment resides. Every identifier generated by the parser contains a reference to its context (it is especially useful for macros). This way, when we see for example WriteLine and the current global env contains using System.Console;, we can interpret it as System.Console.WriteLine.
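As a tiny sketch of that resolution (illustrative Nemerle, not taken from this page):

```
using System.Console;

module Demo {
  Main () : void {
    // The GlobalEnv records the opened namespaces, so the bare
    // name WriteLine is interpreted as System.Console.WriteLine.
    WriteLine ("hello");
  }
}
```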
Typing of method bodies
Some more info is on the page about type inference. | http://nemerle.org/Compiler_internals | crawl-001 | refinedweb | 1,027 | 55.34 |
Spring IDE 2.0 Adds Web Flow and AOP Development Tools
- Support for Spring Web Flow - validation and graphical editing of web flows are now available, and the Eclipse Web Tools Project has been extended to provide content assistance and hyperlinking functionality
- Full XSD-based configuration support - Spring IDE's internal model of bean definitions was completely reworked in order to leverage the Spring Tooling API
- Spring AOP development tools - support now exists for visualization of both <aop:config>-based and @AspectJ-style cross-cutting references, and for validating configurations such as pointcut expressions
- Several usability and UI improvements - A new Spring Explorer replaces the previous Beans View, refactorings have been updated to include Spring Beans in some operations, and several new wizards have been added (e.g. Spring Bean configuration file, new Project)
Spring 2.1 introduces a new bean(<name pattern>) pointcut primitive. This new pointcut primitive is already supported by Spring IDE 2.0. Besides that, Spring 2.1 adds a mechanism that scans a package tree for annotated classes and automatically creates Spring bean definitions from the annotation metadata. Support for this is already built into Spring IDE 2.0. Spring IDE 2.0 also fully supports Eclipse 3.3, which is due out later this week.
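For illustration only (a hypothetical configuration; the aspect bean names are made up, not taken from the article), a bean(<name pattern>) pointcut inside <aop:config> would look roughly like this:

```xml
<aop:config>
    <!-- bean() matches join points on Spring beans whose name fits the pattern -->
    <aop:pointcut id="serviceBeans" expression="bean(*Service)"/>
    <aop:aspect ref="loggingAspect">
        <aop:before pointcut-ref="serviceBeans" method="logEntry"/>
    </aop:aspect>
</aop:config>
```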
InfoQ also asked Dupuis about which Spring subprojects were supported by Spring IDE. He had this response:
Certainly Spring IDE 2.0 aims to fully support Spring 2.0. As already mentioned we have very specialized support tools for Spring Web Flow. Furthermore Spring IDE 2.0 supports Spring bean configurations that are created by Spring JavaConfig; it even tries to parse the dependencies of bean definitions created by JavaConfig from the Java source code (see here).
Spring Security (aka Acegi) will add comprehensive configuration namespaces in the coming version. Work has been started to get tooling support for this valuable enhancement right with the initial release of Spring Security.
Currently we are not planning to have anything special for Spring Modules. If there is a community need for this, we could add support for Spring Modules' namespaces. It is important to note that Spring IDE 2.0 is open for extension. We are following the Eclipse pattern of defining extension points that allow other plug-ins to contribute functionality. Using Spring IDE's extension points a custom namespace developer is able to plug in support for his namespaces without the need to change Spring IDE code (see here). This is just like adding a NamespaceHandler or BeanFactoryPostProcessor to Spring.
Furthermore we are exposing extension points to contribute custom validation rules for Spring bean definitions.
Finally, Dupuis was asked about future plans for the Spring IDE. He left us with these thoughts:
In the future the team will further enhance and streamline the working experience with Spring IDE: We are trying to put an higher emphasis on Spring's theme of Power and Simplicity. Therefore you can expect a close integration with Mylyn, a plug-in that allows focusing the Eclipse workspace on the current task. We will leverage Mylyn to prioritize content assist in Spring IDE's XML editor extension, filter the Spring Explorer and even collapse uninteresting blocks in your XML bean definition file.
Work on this integration has already been started a while ago. Together with the Mylyn team around Mik Kersten we are planning to release a preview of Spring IDE's Mylyn integration around the Eclipse Europa release later this month.
Description:
Diy Digital World Clock using Nodemcu ESP8266 and HMI TFT LCD - In this tutorial, you will learn how to make a World Clock using a 10.1 inch HMI TFT LCD display module by Stone Technologies and a Nodemcu ESP8266 Wifi module by Espressif Systems. This is the smartest World Clock: you don't have to enter the hours, minutes, and seconds manually, and there is also no need to use an RTC "Real Time Clock" module.
All you need to do is select a time zone in your cell phone application designed in Blynk, wait for a minute, and the time will be updated.
This is the Pakistan’s standard time. The values in Red color are the hours; the value in White color is the minutes, and the values in Green color are the seconds. The time is synchronized with the Server timing after every 1 minute, due to which you will always get the exact time. The synchronization time can be changed in the programming. For the long term use, select 10 minutes or more.
If the Wifi is disconnected due to some issue, the World Clock won't stop working: the minutes and seconds are incremented automatically and it will continue to run. When the Wifi connection becomes active again, the World Clock is synchronized with the server time.
Let’s select a different Time Zone; I am going to select New York America. Simply open the Blynk application, select the time zone and that’s it. After, 1 minute the time will be updated.
As you can see, the time is updated, and now I can keep track of all my favorite TV shows and movies from Pakistan. Now, let's check the time in Sydney, Australia.
After the time zone is selected, click on the play button and wait for 1 minute. As you can see in the picture below, the time is updated.
This World Clock is based on the 24-hour time format, due to which we can easily know whether it's daytime or nighttime. This is my 5th tutorial on the 10.1 inch HMI intelligent TFT LCD module, and it builds entirely on my previous 4 tutorials.
In tutorial number 1, I explained how to design a graphical user interface using images designed in Adobe Photoshop: how to use the button function, the data variable function, and the hardware parameter function, and how to use the drag adjustment and slider scale functions for controlling the screen brightness.
Read Article:
HMI 10.1” TFT LCD Module, Display Panel, and Touchscreen by Stone Technologies
10.1” HMI Intelligent TFT LCD Module, Display Panel, & Touchscreen
In tutorial number 2, I explained the commands used for reading and writing, how to control the user interface without pressing the on screen buttons… how to access the brightness control register and so on.
Read Article:
10.1” HMI intelligent TFT LCD UART Serial Communication “Stone Technologies”
10.1” HMI intelligent TFT LCD UART Serial Communication
In tutorial number 3, I explained how to monitor a sensor in real time using Arduino and the HMI Touchscreen TFT LCD display module. In this tutorial I explained in very detail how the Arduino board is interfaced with the HMI TFT LCD module using the MAX232 board.
Read Article:
Arduino HMI intelligent TFT LCD based Sensor Monitoring “Stone Technologies”
Arduino HMI TFT LCD Display Sensor Monitoring
While, in tutorial number 4, I explained how to make a control panel GUI application for the 10.1 inch HMI intelligent TFT LCD module. The Arduino was interfaced with the HMI touchscreen for controlling 220Vac light bulbs.
Read Article:
Arduino 10.1” HMI intelligent TFT LCD 220 Vac Load Controller Panel “Stone Technologies”
Arduino HMI TFT LCD Module Electrical Load controller
After watching my previous tutorials, you will be able to design any kind of monitoring and control system. So, I highly recommend you first watch my previous tutorials; then you can resume from here, as I will be using the same hardware connections, except this time I am using a Nodemcu ESP8266 Wifi module instead of the Arduino. I will explain the modified circuit diagram in a minute.
Without any further delay, let’s get started!!!
The components and tools used in this project can be purchased from Amazon, the components Purchase links are given below:
HMI TFT LCD MODULE Amazon product link:
Arduino Uno:
MAX232 board:
RS232 Cable:
2-channel relay module:
eBay store link:
Aliexpress store link:
*Please Note: These are affiliate links. I may make a commission if you buy the components through these links. I would appreciate your support in this way!
World Clock Circuit Diagram:
As you can see, the circuit diagram is really simple. The 10.1 inch TFT LCD and Nodemcu ESP8266 Wifi modules communicate through the MAX232 board, whose VCC pin is connected with the 3.3 volts pin of the Nodemcu module. If you remember, while using the Arduino I connected the VCC pin of the MAX232 board with the 5 volts, as the Arduino is a 5 volts microcontroller, while the Nodemcu ESP8266 is a 3.3V controller. So, while using the MAX232 board with the Nodemcu ESP8266 Wifi module, make sure you connect the VCC pin with the 3.3 volts pin. The ground of the MAX232 is connected with the Nodemcu module ground, while the TX and RX pins of the MAX232 board are connected with the Nodemcu RX and TX pins.

10.1 inch HMI intelligent TFT LCD Module
Download link of the Nodemcu library for Cadsoft eagle
HMI TFT LCD display module interfacing with Nodemcu ESP8266:
All the connections are done as per the circuit diagram already explained.
RTC “Real Time Clock” Blynk Application:
The step by step Blynk application designing is explained in the video given at the end of this article.
HMI TFT LCD Module World Clock GUI:
The GUI designing steps are exactly the same as explained in the first video. Following are the 4 images which I designed in Adobe Photoshop.
And the following are the World Clock minute and hour hands.
As usual, I added the data variable functions for displaying the Hours, Minutes, and seconds. The variable memory address of the hours is 0002, the variable memory address of the minutes is 0006, and the variable memory address of the seconds is 0008.
Note: The .bmp file does not upload on the server, so I upload it in zip format.
Minute Hand: minutehand
For rotating the needles I used the rotate icon function and the icons generated by the icon generation tool, which I have already explained in the first video. The only things you need to know are that it takes 720 steps to complete one revolution, and the variable memory addresses of the hour and minute hands, which are 0417 and 0422. We will use this information in the Nodemcu programming to control the rotation of the World Clock needles. Now, let's have a look at the Nodemcu programming.
IoT World Clock Nodemcu ESP8266 Programming using Arduino IDE:
Nodemcu ESP8266 based World Clock Programming:
#define BLYNK_PRINT Serial
#include <SPI.h>
#include <ESP8266WiFi.h>
#include <BlynkSimpleEsp8266.h>
//#include <BlynkSimpleEsp32.h>
#include <TimeLib.h>
#include <WidgetRTC.h>
All the libraries used in this project can be downloaded by clicking on the download link given below:
// You should get Auth Token in the Blynk App.
char auth[] = “CQpqUlW7gUMo08LgUSzpjYXxdtrq25Ab”;
// Your WiFi credentials.
// Set password to “” for open networks.
char ssid[] = “AndroidAP7DF8”;
char pass[] = “jamshaid”;
Then I defined a timer and rtc.
BlynkTimer timer;
BlynkTimer timer2;
WidgetRTC rtc;
// for hour hand
#define Sensor1_H 0x04
#define Sensor1_L 0x17
//for minute hand
#define Sensor2_H 0x04
#define Sensor2_L 0x22
// Hour Value
#define Hour_H 0x00
#define Hour_L 0x02
//Minute Value
#define Minute_H 0x00
#define Minute_L 0x06
//senconds Value
#define Seconds_H 0x00
#define Seconds_L 0x08
Above are the high bytes and low bytes memory variable addresses used on the graphical user interface side.
unsigned char sensor1_send[8]= {0xA5, 0x5A, 0x05, 0x82, Sensor1_H, Sensor1_L, 0x00, 0x00};
unsigned char sensor2_send[8]= {0xA5, 0x5A, 0x05, 0x82, Sensor2_H, Sensor2_L, 0x00, 0x00};
unsigned char Hour_send[8]= {0xA5, 0x5A, 0x05, 0x82, Hour_H, Hour_L, 0x00, 0x00};
unsigned char Minute_send[8]= {0xA5, 0x5A, 0x05, 0x82, Minute_H, Minute_L, 0x00, 0x00};
unsigned char Seconds_send[8]= {0xA5, 0x5A, 0x05, 0x82, Seconds_H, Seconds_L, 0x00, 0x00};
I have already explained how these commands work. For the extreme basics, watch my videos on UART serial communication and sensor monitoring, which are available on the Electronic Clinic YouTube channel. These are exactly the same, except for the memory variable addresses, which are changed.
Then I defined two variables mymints and myhours of the type integer.
int mymints;
int myhours;
The clockDisplay() function is a user-defined function, which has no return type and does not take any arguments as the input. The purpose of this function is to access the Current date and time from the Server.
void clockDisplay()
{
String currentTime = String(hour()) + “:” + minute() + “:” + second();
String currentDate = String(day()) + ” ” + month() + ” ” + year();
int hours = hour();
int minutes = minute();
int seconds = second();
myhours = hour();
The hours, minutes, and seconds are accessed from the server using the hour, minute, and second functions.
mymints = map(minute(),0,59,0,720);
As I said earlier, one complete rotation is equal to 720 steps, and as you know, 1 hour is equal to 60 minutes. So, using the map function, the minutes are adjusted onto the 720 steps.
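As a quick check of that arithmetic, here is a Python sketch of Arduino's integer map(), for illustration only (the project itself uses the built-in Arduino function):

```python
def map_range(x, in_min, in_max, out_min, out_max):
    # Integer re-scaling, the way Arduino's map() does it
    return (x - in_min) * (out_max - out_min) // (in_max - in_min) + out_min

# 60 minutes spread over the 720 needle steps of one revolution
print(map_range(0, 0, 59, 0, 720))   # minute 0  -> step 0
print(map_range(30, 0, 59, 0, 720))  # minute 30 -> step 366
print(map_range(59, 0, 59, 0, 720))  # minute 59 -> step 720
```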
As the time from the server is in 24-hour format, we have to find the am and pm for the hour hand. So we simply use if conditions and set the ranges, just like we did for the minutes.
if ( (hour() >=0) && ( hour() <= 12) )
{
myhours = map(hour(),0,12,0,720);
}
if ( (hour() >=13) && ( hour() <= 23) )
{
myhours = map(hour(),12,23,0,720);
}
Finally, we store the high bytes and low bytes in their desired memory locations. If you find this confusing, watch my previous tutorial on sensor monitoring.
Seconds_send[6] = highByte(seconds);
Seconds_send[7] = lowByte(seconds);
Serial.write(Seconds_send,8);
delay(100);
sensor1_send[6] = highByte(myhours);
sensor1_send[7] = lowByte(myhours);
Serial.write(sensor1_send,8);
delay(100);
sensor2_send[6] = highByte(mymints);
sensor2_send[7] = lowByte(mymints);
Serial.write(sensor2_send,8);
delay(100);
Hour_send[6] = highByte(hours);
Hour_send[7] = lowByte(hours);
Serial.write(Hour_send,8);
delay(100);
Minute_send[6] = highByte(minutes);
Minute_send[7] = lowByte(minutes);
Serial.write(Minute_send,8);
delay(100);
// Send time to the App
Blynk.virtualWrite(V1, currentTime);
// Send date to the App
Blynk.virtualWrite(V2, currentDate);
}
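For reference, the highByte()/lowByte() calls above just split a 16-bit value into the two frame bytes. A Python sketch of the same packing, for illustration only:

```python
def high_byte(v):
    # Upper 8 bits of a 16-bit value, like Arduino's highByte()
    return (v >> 8) & 0xFF

def low_byte(v):
    # Lower 8 bits, like Arduino's lowByte()
    return v & 0xFF

# e.g. needle position 720 (0x02D0) travels as the two bytes 2 and 208
print(high_byte(720), low_byte(720))
```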
BLYNK_CONNECTED() {
// Synchronize time on connection
rtc.begin();
}
The rest of the programming is exactly the same; I have been using these instructions in almost all of my IoT-based projects.
void setup()
{
// Debug console
Serial.begin(115200);
Blynk.begin(auth, ssid, pass);
// Other Time library functions can be used, like:
// timeStatus(), setSyncInterval(interval)…
// Read more:
setSyncInterval(1 * 60); // Sync interval in seconds (1 minute)

// Update the digital clock display every second
timer.setInterval(1000L, clockDisplay);
}
void loop()
{
Blynk.run();
timer.run();
}
If you have any questions regarding this project, let me know in a comment.
Watch Video Tutorial: | https://www.electroniclinic.com/diy-digital-world-clock-using-nodemcu-esp8266-and-hmi-tft-lcd/ | CC-MAIN-2020-45 | refinedweb | 1,834 | 62.07 |
This file follows Google coding style, except for the name MEM_ROOT (which is kept for historical reasons). More...
#include <string.h>
#include <memory>
#include <new>
#include <type_traits>
#include <utility>
#include "memory_debugging.h"
#include "my_compiler.h"
#include "my_dbug.h"
#include "my_inttypes.h"
#include "my_pointer_arithmetic.h"
#include "mysql/psi/psi_memory.h"
std::unique_ptr, but only destroying.
Allocate an object of the given type.
Use like this:
Foo *foo = new (mem_root) Foo();
Note that unlike regular operator new, this will not throw exceptions. However, it can return nullptr if the capacity of the MEM_ROOT has been reached. This is allowed since it is not a replacement for global operator new, and thus isn't used automatically by e.g. standard library containers.
TODO: This syntax is confusing in that it could look like allocating a MEM_ROOT using regular placement new. We should make a less ambiguous syntax, e.g. new (On(mem_root)) Foo(). | https://dev.mysql.com/doc/dev/mysql-server/latest/my__alloc_8h.html | CC-MAIN-2022-21 | refinedweb | 175 | 53.98 |
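As a self-contained illustration of this pattern, here is a toy bump allocator standing in for MEM_ROOT (the real MEM_ROOT grows in blocks and does much more; this only sketches the nullptr-returning placement form described above):

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Toy arena: bump-allocates from a fixed buffer, returns nullptr when full.
struct Arena {
  alignas(std::max_align_t) unsigned char buf[1024];
  std::size_t used = 0;

  void *Alloc(std::size_t n) {
    const std::size_t a = alignof(std::max_align_t);
    n = (n + a - 1) & ~(a - 1);                  // keep every pointer aligned
    if (used + n > sizeof(buf)) return nullptr;  // capacity reached: no throw
    void *p = buf + used;
    used += n;
    return p;
  }
};

// Non-throwing placement form: when it returns nullptr, the new-expression
// itself yields nullptr and no constructor runs (the behavior described above).
inline void *operator new(std::size_t size, Arena &arena) noexcept {
  return arena.Alloc(size);
}
inline void operator delete(void *, Arena &) noexcept {}  // matching form

struct Foo {
  int x = 42;
};
```

Callers therefore write Foo *foo = new (arena) Foo(); and must check the result for nullptr, exactly as with new (mem_root) Foo().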
Type: Posts; User: psfign
Judas Priest! No, i dont understand that lol!
But, I just looked at my grade for the paper rock scissors. Got my first 100 without having to go back and make corrections! And that was the semester...
****! I shoulda saw the double declaration there. My bad! Thanks, that cleared it up!
#include <stdlib.h>
#include <iostream>
#include <iomanip>
#include <string>
#include <time.h>
using namespace std;
int userChoice(void);
char getChoice(void);
void winOrLose(int,int);
char...
i have that but it resets after every turn
I always have a problem with keeping score total. How do I return the wins/losses/draws to keep score each time. So am I correct that I cant return the score in a void function like this? Do i need to...
That got it! Thanks! Now, all i have to do is tally up wins/losses
Thanks! I think i have an issue with my winOrLose function too though.
#include <stdlib.h>
#include <iostream>
#include <iomanip>
#include <string>
#include <time.h>
using namespace std;
well this is what i have so far. It wont cout the answer. Im stuck.. :(
#include <stdlib.h>
#include <iostream>
#include <iomanip>
#include <string>
#include <time.h>
using namespace std;
Update:
I'm not any closer than I was yesterday!
No...Im not posting my code. lol
I'm embarrassed this stuff isnt sticking
We havent learned how to do the conv like that, so i think I'd have to do it as
num = Computer's selection
if (num == 1)
return 'R';
Thanks for your help again guys! I didnt post my code...
Thanks!
Yea I know 2k, but i was trying other things that i thought would work....that obviously didnt work.
and looking at your example, im assuming i can use return '1' as code for Rock
yea, i havent looked at doing that switch 2k gave yet. But a couple follow up questions. Am i correct in the thought that i could do something like:
num = Computer's selection
if (num == 1)...
Love Big Bang Theory! :p
But, teacher wants user to input letters. Im not doing very well trying to compare an int to a char.
Any ideas? Ive tried some things that compile, but they dont run.
...
ok, i think i can compare. so ill work on that. ty
well the computer generating it's choices as 1, 2 or 3. So how do i compare
My teacher wants the user to enter in their choices with characters instead of numbers.
R - Rock
P - Paper
S - Scissors.
After the user enters R for their choice, what would I use to convert...
oh ok and i did have to upgrade bc g++ kept crashing.
but i dont understand the void functions. can someone explain how to define them? do I place a value in the ( )?
void printSpecChar(int);...
ok, i'll work on those, but why does stdlib.h work still if it's supposed to be cstdlib? And why didnt i have to use that on win7?
I just purchased a laptop with Windows 8 and installed the portable Dev-C++ because the other one wasnt compatible with Win8.
Now im getting an error when i compile that says:
'undefined...
yup, especially when i suck at it.
well i received verification from the prof last night that i only display the stats after the user enters the correct answer. So, i did the code how I 'designed' and it worked...so woot.
So here's how Im thinking about the design.
1. <Into>
2. clear screen
3. com generates num
do (count number of tries, games & start/stop)
while (guess != num)
if...
well youre right, I have been. But like you said and someone said in my last thread.... Im going to try to write out the program steps on notebook and then write the code from that.
So on this program, I have started off first by creating parts of the program individually.
I created a intro page that just couts the rules, etc.
created a random number generator and a loop for... | http://forums.codeguru.com/search.php?s=4595c2f9580c82d292e880b5d64ec2c7&searchid=1921611 | CC-MAIN-2013-48 | refinedweb | 685 | 85.79 |
Photo by Sergey Pesterev on Unsplash
“Well, it depends” is the prototypical response when asking a software engineer for advice, no matter how straight-forward the problem may seem. When asked “Should we choose TypeScript or Flow for our next React Native project?” however, the answer only depends on one variable: whether or not you work at Facebook.
It’s interesting to consider how we got here. The project team evaluated Flow vs TypeScript for our new React Native app almost three years ago. At the time, TypeScript didn’t support React well, didn’t allow for a gradual opt-in, there was no Babel support, and VSCode wasn’t the editor providing the best JavaScript development experience on the market.
None of these factors are true today. But why bother to migrate?
TypeScript has a lot of value to offer, making the switch highly attractive.
So, we forged ahead! But how’d it go?
$ npx sloc app

---------- Result ------------
Physical : 57903
Source : 50859
Number of files read : 709
----------------------------

$ npx flow-coverage-report -i 'app/**/*.js'
percent: 84 %

$ time flow check
flow 3.64s user 2.31s system 32% cpu 18.458 total
The first step we took was to update our JS tooling configuration. For our project, the procedure was relatively straight-forward, guided mostly by simply copying the configurations directly from a newly instantiated TypeScript template project using the react-native cli.
{ "parser": "typescript" }
One used to have to deal with TSLint when choosing TypeScript, but thankfully TSLint is deprecated, so it's an easy choice in 2019: just stick with ESLint!
Literally no changes necessary (for our usage anyway, and that included plugins and custom rules!)
Done. Next.
Did you know that you can import JavaScript modules relative to a package.json file using the name attribute in the json file, no plugins required?
Black magic! But it was useful black magic!
To accomplish absolute path imports (i.e. import Foo from 'app/ui/components/Foo'), we included in our project a package.json in the /app directory (and several others as well, for a total of 4 package.json's):

{ "name": "app" }
This even works when importing from other folders in the tree (meaning you can import app/components/Foo from outside of the 'app' folder).
Well, TypeScript doesn’t like this black magic fuckery and couldn’t figure out our imports :(. So, we deleted our extra package.json files and chose to use babel instead.
With built in Babel support for TypeScript in Babel 7, converting our babel configuration from Flow to TypeScript was a process of simply removing unused plugins (like @babel/plugin-transform-flow-strip-types).
Here’s our final Babel.config.js, complete with less magical module resolution.
module.exports = {
  presets: ['module:metro-react-native-babel-preset'],
  plugins: [
    [
      'module-resolver',
      {
        root: ['.'],
        alias: { app: './app' },
      },
    ],
  ],
}
While not strictly required for running your React Native app (Babel will just strip/ignore the TS syntax anyway), the tsconfig is required for using the TypeScript compiler (and VSCode tooling) to detect type errors.
Thankfully, we got 90% of the way there by simply copy-pasting the tsconfig from a boilerplate new React Native project.
However, we also include some .json files in our project, which require some additional configuration.
{
  "compilerOptions": {
    "baseUrl": ".",
    "allowJs": true,
    "allowSyntheticDefaultImports": true,
    "esModuleInterop": true,
    "isolatedModules": true,
    "jsx": "react-native",
    "lib": ["es6", "dom"],
    "moduleResolution": "node",
    "noEmit": true,
    "strict": true,
    "target": "esnext",
    "resolveJsonModule": true, // Required for JSON files
    "skipLibCheck": true
  },
  "exclude": [
    "node_modules",
    "e2e",
    "**/*.json", // Don't try and check JSON files
    "**/*.spec.ts"
  ]
}
Thankfully some work has been done in the open source community to aid in the conversion from Flow to TypeScript syntax, and the projects have support for many of the common language features.
However, I cannot say that these projects tend to be very well maintained or mainstream.
There are two dominant solutions available, flow-to-typescript and babel-plugin-flow-to-typescript. We tried the flow-to-typescript library first but abandoned it because it crashed on any file containing a function (WTF?).
Thankfully the babel-plugin-flow-to-typescript worked for us. Almost.
As of writing the same bug related to function support forced us to use a fork. I don’t know what’s up with these tools and supporting plain old functions but whatever, it worked, I guess I should be thankful!
yarn global add @babel/cli @babel/core babel-plugin-flow-to-typescript

# Convert single file
npx babel script.js -o script.ts --plugins=babel-plugin-flow-to-typescript

# Convert all files and delete original JS after conversion
# Prereq: 'brew install parallel'
find app -type f -name '*.js' | parallel "npx babel {} -o {.}.ts && rm {}"
Note: We actually used @zxbodya/babel-plugin-flow-to-typescript due to a bug in the main repo.
Next, rename any files which import React at the top to use .tsx instead:
find app/components -name "*.ts" -exec sh -c 'git mv "$0" "${0%.ts}.tsx"' {} \;
Before proceeding any further, it’s worth checking at this point that the project actually runs. Because Babel is used to compile TypeScript to JavaScript, and Babel simply strips out all TypeScript related syntax, regardless of how many type errors your project has it should “Just Work.”
In our case, it took less than half a day to update configurations, rename files, migrate all syntax, and get a running app again.
But the work is obviously still far from over. increases your overall error count leaving your confidence shaken! const reqs = get(props, 'requests', []) const inProgressIds = reqs.map(req => req.id) const activeRequests = actions // Error shown here .filter(action => inProgressIds.includes(action.id))
Rather than adding a type to
as the error suggests,
action
is missing a type, producing the error below. The root cause could be another issue in the function, or even an entirely different file!is missing a type, producing the error below. The root cause could be another issue in the function, or even an entirely different file!
reqs type PropTypes = {| onPress: () => mixed, custom: boolean, |} // Trivialized component const Button = (props: PropTypes) => <RNButton {...props } />
Under Flow, one must add every prop passed into
toto
<Button>
, where instead we’d prefer to take all the props that, where instead we’d prefer to take all the props that
PropTypes
does and add in our own custom one.does and add in our own custom one.
<RNButton>
//!
$ type-coverage -p tsconfig.json # ignored tests + storybook 687.
If you’re still using Flow with React Native, it’s never been easier to switch!
It took the equivalent of 3 engineers working 10 full.
Create your free account to unlock your custom reading experience. | https://hackernoon.com/migrating-a-50k-sloc-flow-react-native-app-to-typescript-c91aj3ton | CC-MAIN-2021-04 | refinedweb | 1,124 | 57.27 |
Hi all,
I'm developing a plugin and I'd like to log some actions using log4j.
I have to decide where to put the log file generated by log4j.
Which is the safest location, in order not to throw "Access denied" or similar Exceptions?
Is there a way to retrieve such locations in a programmatic way?
Thank you very much
Best
cghersi
Hi all,
The best way is to use standard IntelliJ logging mechanism:
import com.intellij.openapi.diagnostic.Logger;
private static final Logger LOG = Logger.getInstance(YourClass.class.getName());
LOG.xxx(...);
By default INFO level and above is saved to file, you may tune log.xml file in IntelliJ IDEA installation. Logs appear in standard log file (Help | Show logs).
Thank you Alexander,
the retrieval of safe location is not only needed for logging purpose, but also for other tasks.
So, apart from logging purpose, in which location may a write my own files?
Is your question related to the IDE?
System temp folder seems to be the best: com.intellij.openapi.util.io.FileUtil.getTempDirectory(). See also other methods of FileUtil class.
Thank you very much Alexander, that's exactly what I need! | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206767515-Programmatically-retrieve-folder-locations | CC-MAIN-2020-24 | refinedweb | 196 | 59.5 |
Chatlog 2010-02-25
From RDFa Working Group Wiki
See CommonScribe Control Panel, original RRSAgent log and preview nicely formatted version.
14:39:08 <RRSAgent> RRSAgent has joined #rdfa 14:39:08 <RRSAgent> logging to 14:39:27 <manu> trackbot, start telecon 14:39:29 <trackbot> RRSAgent, make logs world 14:39:31 <trackbot> Zakim, this will be 7332 14:39:31 <Zakim> ok, trackbot; I see SW_RDFa()10:00AM scheduled to start in 21 minutes 14:39:32 <trackbot> Meeting: RDFa Working Group Teleconference 14:39:32 <trackbot> Date: 25 February 2010 14:39:52 <manu> Chair: Manu Sporny 14:39:57 <manu> Regrets: Ben Adida 14:41:05 <manu> Agenda: 14:41:22 <manu> Scribe: Shane McCarron 14:43:18 <manu> action-3 due in 1 week 14:43:18 <trackbot> ACTION-3 Get in touch with LibXML developers about TC 142 due date now in 1 week 14:57:13 <manu> trackbot, status 14:58:53 <Zakim> SW_RDFa()10:00AM has now started 14:59:00 <Zakim> +Benjamin 14:59:02 <Zakim> +Knud 14:59:08 <Knud> Knud has joined #rdfa 14:59:12 <mgylling> mgylling has joined #rdfa 14:59:19 <ivan> zakim, dial ivan-voip 14:59:19 <Zakim> ok, ivan; the call is being made 14:59:20 <Zakim> +Ivan 14:59:39 <Zakim> +[IPcaller] 14:59:46 <manu> zakim, I am IPcaller 14:59:46 <Zakim> ok, manu, I now associate you with [IPcaller] 15:00:01 <RobW> RobW has joined #rdfa 15:00:28 <Steven> zakim, dial steven-work 15:00:28 <Zakim> ok, Steven; the call is being made 15:00:29 <Zakim> +Steven 15:00:39 <manu> zakim, who is on the call? 15:00:40 <markbirbeck> markbirbeck has joined #rdfa 15:00:40 <Zakim> On the phone I see Benjamin, Knud, Ivan, [IPcaller], Steven 15:00:53 <Zakim> +mgylling 15:00:59 <manu> zakim, who is on the call? 15:01:04 <Zakim> On the phone I see Benjamin, Knud, Ivan, [IPcaller], Steven, mgylling 15:01:17 <manu> zakim, who is on the call? 15:01:18 <Zakim> On the phone I see Benjamin, Knud, Ivan, [IPcaller], Steven, mgylling 15:01:42 <manu> trackbot, status 15:01:59 <Zakim> + +1.978.692.aaaa 15:02:08 <markbirbeck> zakim, code? 
15:02:08 <Zakim> the conference code is 7332 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), markbirbeck 15:02:22 <ivan> zakim, mute me 15:02:22 <Zakim> Ivan should now be muted 15:02:29 <manu> zakim, who is making noise? 15:02:30 <Steven> zakim, [IP is Manu 15:02:30 <Zakim> +Manu; got it 15:02:43 <Zakim> manu, listening for 10 seconds I heard sound from the following: [IPcaller] (25%), mgylling (5%), Knud (10%) 15:02:44 <Steven> zakim, who is noisy? 15:02:55 <Zakim> Steven, listening for 10 seconds I heard sound from the following: Manu (10%), mgylling (10%), Benjamin (9%) 15:03:07 <manu> zakim, mute mgylling 15:03:07 <Zakim> mgylling should now be muted 15:03:21 <Zakim> +Mark_Birbeck 15:03:30 <Steven> zakim, who is noisy 15:03:30 <Zakim> I don't understand 'who is noisy', Steven 15:03:35 <ivan> zakim, who is here? 15:03:35 <Zakim> On the phone I see Benjamin, Knud, Ivan (muted), Manu, Steven, mgylling (muted), +1.978.692.aaaa, Mark_Birbeck 15:03:37 <Steven> zakim, who is noisy? 15:03:39 <Zakim> On IRC I see markbirbeck, RobW, mgylling, Knud, RRSAgent, Zakim, manu, Steven, Benjamin, ivan, trackbot 15:03:46 <manu> zakim, unmute mgylling 15:03:46 <Zakim> mgylling should no longer be muted 15:03:47 <ivan> zakim, unmute me 15:03:47 <Zakim> Ivan should no longer be muted 15:03:47 <Zakim> Steven, listening for 10 seconds I heard sound from the following: Manu (24%), Benjamin (14%), Knud (5%), Mark_Birbeck (5%) 15:04:00 <Steven> zakim, who is noisy? 15:04:01 <Zakim> +ShaneM 15:04:10 <Zakim> Steven, listening for 10 seconds I heard sound from the following: +1.978.692.aaaa (39%), Knud (39%), Mark_Birbeck (4%), ??P36 (8%) 15:04:12 <ivan> zakim, aaaa is RobW 15:04:13 <Zakim> +RobW; got it 15:04:46 <Steven> zakim, who is noisy? 
15:04:56 <Zakim> Steven, listening for 10 seconds I heard sound from the following: Manu (39%), mgylling (8%), Knud (22%), Mark_Birbeck (33%) 15:05:06 <manu> 15:05:12 <Steven> zakim, mute knud temporarily 15:05:12 <Zakim> Knud should now be muted 15:05:27 <Zakim> Knud should now be unmuted again 15:05:36 <Steven> zakim, mute knud 15:05:36 <Zakim> Knud should now be muted 15:06:03 <Steven> scribenick: markbirbeck 15:06:21 <markbirbeck> Topic: Action items 15:06:34 <manu> action-3? 15:06:34 <trackbot> ACTION-3 -- Manu Sporny to get in touch with LibXML developers about TC 142 -- due 2010-02-24 -- OPEN 15:06:34 <trackbot> 15:06:54 <manu> trackbot, action-3 due in 1 week 15:06:54 <trackbot> ACTION-3 Get in touch with LibXML developers about TC 142 due date now in 1 week 15:07:05 <manu> action-4? 15:07:05 <trackbot> ACTION-4 -- Mark Birbeck to generate spec text for @token and @prefix -- due 2010-02-24 -- OPEN 15:07:05 <trackbot> 15:07:10 <markbirbeck> 15:07:30 <ivan> q+ 15:08:08 <manu> trackbot, comment action-4 proposal at 15:08:08 <trackbot> ACTION-4 Generate spec text for @token and @prefix notes added 15:08:08 <Steven> ack i 15:08:29 <markbirbeck> Ivan: Could we move the write-ups onto the wiki? 15:09:56 <markbirbeck> Mark_Birbeck: Don't mind. It was more that it didn't work as an email. 15:10:06 <manu> action-5? 
15:10:06 <trackbot> ACTION-5 -- Mark Birbeck to generate spec text for pulling in external vocabulary documents -- due 2010-02-24 -- OPEN 15:10:06 <trackbot> 15:10:51 <manu> trackbot, action-4 due in 1 week 15:10:51 <trackbot> ACTION-4 Generate spec text for @token and @prefix due date now in 1 week 15:10:54 <manu> trackbot, action-5 due in 1 week 15:10:54 <trackbot> ACTION-5 Generate spec text for pulling in external vocabulary documents due date now in 1 week 15:11:12 <manu> trackbot, close action-7 15:11:12 <trackbot> ACTION-7 Send Invited Expert proposals to Ivan closed 15:11:25 <ivan> q+ 15:12:11 <markbirbeck> Manu: Have proposed Toby Inkster and Benji. 15:12:52 <manu> ACTION: Manu to get Toby to fill out Invited Expert form. 15:12:52 <trackbot> Created ACTION-10 - Get Toby to fill out Invited Expert form. [on Manu Sporny - due 2010-03-04]. 15:12:56 <markbirbeck> ...Benji says he is too busy to take up the Invited Expert position. Also, he says that he wants large changes to RDFa, and doesn't see that happening. 15:13:19 <markbirbeck> Ivan: Toby needs to fill in the IE form. 15:13:26 <markbirbeck> Manu: Will chase. 15:14:19 <markbirbeck> Ivan: What should we do with the issues that Benjamin raised? 15:14:39 <manu> ISSUE-12? 15:14:39 <trackbot> ISSUE-12 -- Analyze Benji's wishlist and determine if there are suggestions that will improve RDFa -- OPEN 15:14:39 <trackbot> 15:14:52 <markbirbeck> Manu: Think we should keep the issues. Any other views on that? 15:14:56 <ivan> 15:15:29 <markbirbeck> Manu: Can we discuss this on the list? 15:15:57 <markbirbeck> Ivan: Some of them are pretty substantial; change attribute names, harmonise with Microdata. 15:16:44 <markbirbeck> Manu: Agree, but the action is to look at the suggestions and see what might be used from there. 15:18:31 <markbirbeck> q+ 15:18:36 <manu> ack ivan 15:18:38 <ivan> ack ivan 15:18:51 <manu> ack markbirbeck 15:19:36 <manu> Mark_Birbeck: Maybe we should re-visit ISSUE-12 in a couple of weeks. 
15:20:06 <manu> action-8? 15:20:06 <trackbot> ACTION-8 -- Shane McCarron to coordinate with the PFWG on @role and how @role integrates with RDFa -- due 2010-02-26 -- OPEN 15:20:06 <trackbot> 15:20:39 <Knud> (did we talk about Action-6?) 15:21:24 <manu> ISSUE: Coordinate with PFWG on @role and how @role integrates with RDFa 15:21:24 <trackbot> Created ISSUE-17 - Coordinate with PFWG on @role and how @role integrates with RDFa ; please complete additional details at . 15:21:44 <markbirbeck> Shane: RDFa needs a way for other groups to be able to add attributes and elements to the list. 15:21:54 <markbirbeck> s/to the list/to the list of supported attributes/ 15:22:27 <manu> trackbot, drop action-8 15:22:27 <trackbot> Sorry, manu, I don't understand 'trackbot, drop action-8'. Please refer to for help 15:22:32 <manu> trackbot, close action-8 15:22:32 <trackbot> ACTION-8 Coordinate with the PFWG on @role and how @role integrates with RDFa closed 15:22:57 <manu> trackbot, comment action-8 Closed action in favor of creating as ISSUE-17 15:22:58 <trackbot> ACTION-8 Coordinate with the PFWG on @role and how @role integrates with RDFa notes added 15:23:41 <Steven> Yay! RDFa stylesheets! 15:24:05 <manu> action-6? 15:24:05 <trackbot> ACTION-6 -- Shane McCarron to identify the requirements for html2ps and see about getting reSpec to support them -- due 2010-02-24 -- OPEN 15:24:05 <trackbot> 15:24:16 <markbirbeck> Shane: Will write up a proposal on how other people might indicate in documents that other attributes are supported. 15:24:21 <manu> trackbot, close action-6 15:24:21 <trackbot> ACTION-6 Identify the requirements for html2ps and see about getting reSpec to support them closed 15:24:34 <manu> trackbot, comment action-6 it works. 
15:24:34 <trackbot> ACTION-6 Identify the requirements for html2ps and see about getting reSpec to support them notes added 15:24:57 <markbirbeck> Topic: Review High-level Work Plan 15:25:00 <manu> 15:25:42 <markbirbeck> Manu: Very high level proposal on the wiki. 15:26:17 <markbirbeck> ...On the API document we have Benjamin Adrian. 15:26:29 <markbirbeck> ...We probably need other editors -- anyone? 15:26:40 <Steven> q+ 15:26:41 <markbirbeck> q+ 15:27:01 <manu> ack steven 15:27:40 <markbirbeck> Steven: Ben has experience of working on APIs -- would seem an obvious choice. 15:27:43 <manu> ack markbirbeck 15:27:43 <manu> scribenick: manu 15:28:03 <manu> Mark_Birbeck: I was going to offer my services - I've been working on a Javascript library, so I'm interested. 15:28:32 <manu> Mark_Birbeck: Jenni Tennison has a slightly different notion about the API - sits on top of SPARQL. We may try to harmonize with that work. 15:29:08 <ivan> q+ 15:29:52 <manu> Mark_Birbeck: Do we want this in a separate document? 15:29:55 <manu> ack Ivan 15:27:43 <manu> scribenick: markbirbeck 15:30:27 <markbirbeck> Mark_Birbeck: Experience of XForms WG is that if we have 'owners' of work, it tends to create a little fragmentation in the spec. 15:30:53 <markbirbeck> Ivan: True that it's separate in the charter, but might not be a problem to change our minds later. So can defer this. 15:31:07 <manu> q+ to discuss taking the discussion to the mailing list. 15:31:32 <markbirbeck> Mark_Birbeck: Fine. 15:32:34 <markbirbeck> Manu: Could move this to the mailing-list. 15:32:39 <ivan> ack manu 15:32:39 <Zakim> Manu, you wanted to discuss taking the discussion to the mailing list. 15:33:26 <markbirbeck> s/Jenni/Jeni/ 15:33:36 <Steven> q+ 15:33:56 <manu> ack steven 15:34:01 <markbirbeck> s/different notion about the API/different API that is related/ 15:34:35 <Knud> q+ 15:34:42 <markbirbeck> Steven: How were people assigned to issues? Randomly? 
15:34:45 <manu> ack knud 15:34:46 <Knud> I'm muted 15:35:02 <markbirbeck> Manu: No. I tried to look at issues and who had the skills to address them. 15:35:09 <ivan> q+ 15:35:29 <markbirbeck> ...I may have failed, so please comment on whether you think you should be dealing with an issue (and of course, if not). 15:35:35 <manu> ack ivan 15:35:36 <Steven> zakim, mute knud 15:35:37 <Zakim> Knud should now be muted 15:35:48 <markbirbeck> Knud: What is the Triple Store API about? 15:36:07 <markbirbeck> Manu: I assigned you because I thought your name came up on that. 15:36:26 <manu> q+ to explain that RDFa triplestore API is /not/ a required piece of work. 15:36:42 <markbirbeck> Ivan: The idea is that rather than having an RDFa-specific API, we could look at something more generic. 15:37:19 <markbirbeck> zakim, mute me 15:37:19 <Zakim> Mark_Birbeck should now be muted 15:37:29 <manu> q- 15:37:51 <manu> zakim, unmute knud 15:37:51 <Zakim> Knud should no longer be muted 15:38:10 <markbirbeck> ...but it's not a required piece of work. 15:38:20 <markbirbeck> Knud: So it's not a generic API, but a JavaScript one? 15:38:26 <markbirbeck> Ivan: Yes, that's right. 15:38:31 <manu> zakim, mute knud 15:38:31 <Zakim> Knud should now be muted 15:39:26 <markbirbeck> Ivan: If there are documents that we'd like to be involved in, but not as lead editor, what should we do? 15:39:36 <markbirbeck> Manu: Go ahead and add yourself to the end of the list. 15:39:55 <markbirbeck> Topic: ISSUE-2: Extend RDFa to allow URIs in all RDFa attributes 15:39:57 <manu> 15:40:00 <manu> ISSUE-2? 15:40:00 <trackbot> ISSUE-2 -- Extend RDFa to allow URIs in all RDFa attributes -- OPEN 15:40:00 <trackbot> 15:42:12 <markbirbeck> Manu: The RDFa TF discussed and voted to support 'URIs anywhere'. 15:42:19 <ivan> q+ 15:42:26 <manu> ack ivan 15:42:27 <markbirbeck> ...We need to decide whether we will bring that proposal in. 15:42:59 <markbirbeck> Ivan: Might be worth setting the context. 
15:43:20 <markbirbeck> zakim, unmute me 15:43:21 <Zakim> Mark_Birbeck should no longer be muted 15:43:39 <manu> 15:43:51 <manu> That's the latest text. 15:43:20 <markbirbeck> scribenick: manu 15:44:01 <manu> Mark_Birbeck: The key thing is that 15:44:16 <ShaneM> ShaneM has joined #rdfa 15:44:22 <manu> ... realization that after working on CURIEs and URIs is that you could tell the difference between a CURIE and a URI 15:44:49 <manu> ... basically, if there is a prefix, it's a CURIE 15:44:59 <manu> ... if there isn't a prefix, it's a URI. 15:45:19 <manu> i/Mark_Birbeck: The key thing is that/scribenick: manu 15:45:48 <ivan> q+ 15:45:53 <manu> Mark_Birbeck: The motivation for this is that for small pieces of markup, it allows one to just use URIs instead of defining a ton of namespace prefix mappings. 15:46:32 <manu> Mark_Birbeck: It also allows easy cut-and-paste, which is a more overstated problem than it really is, but if you're concerned about that, this goes toward solving that issue. 15:46:55 <manu> Ivan: This is a fairly trivial implementation. 15:47:02 <markbirbeck> 15:47:12 <manu> Ivan: separate issue, how will this change affect values for @resource and @about? 15:47:34 <manu> Ivan: @resource and @about are defined to have URIs or Safe CURIEs, but with the same logic, we can unify all attributes. 15:48:08 <manu> Ivan: let me explain with some markup 15:48:15 <ivan> xmlns="a:xxxxx" resource="a:something" 15:49:13 <manu> Mark_Birbeck: We were warned off of doing that in @href, so that's why we don't support it in @resource. 15:49:28 <ShaneM> 15:49:48 <manu> Ivan: We shouldn't touch @href and @src 15:49:48 <manu> scribenick: markbirbeck 15:50:57 <markbirbeck> Mark_Birbeck: We now use the token 'URIorCURIEs' in the spec, which means it will be everywhere. 15:51:07 <markbirbeck> Shane: Yes, it's already done. 15:51:25 <markbirbeck> Manu: We don't need to agree the solution here, we just need to vote on whether we want this. 
15:51:46 <ShaneM> ACTION: Shane to suggest short names for each working group deliverable by 1 March 2010 15:51:46 <trackbot> Created ACTION-11 - Suggest short names for each working group deliverable by 1 March 2010 [on Shane McCarron - due 2010-03-04]. 15:52:03 <markbirbeck> ...Any other questions on this? Would people feel comfortable doing a staw-poll now? 15:53:08 <manu> PROPOSAL: Add support for full URIs in RDFa attributes where CURIEs are allowed. 15:53:31 <manu> PROPOSAL: Add support for full URIs in RDFa attributes.. 15:53:42 <manu> PROPOSAL: Add support for full URIs in RDFa attributes, barring @href and @src. 15:54:03 <Steven> href has full URIs alrady 15:54:34 <markbirbeck> PROPOSAL: Allow both URIs and CURIEs in all RDFa attributes, except @href and @src. 15:54:38 <ivan> PROPOSAL: for all attributes, except @href and @src, both CURIE and URI-s can be used, with priority to the former 15:56:34 <manu> PROPOSAL: Allow both URIs and CURIEs in all RDFa attributes, except @href and @src. 15:56:38 <ivan> +1 15:56:39 <Steven> +1 15:56:41 <ShaneM> +1 15:56:41 <markbirbeck> +1 15:56:42 <RobW> +1 15:56:43 <manu> +1 15:56:45 <mgylling> +1 15:56:47 <Benjamin> +1 15:56:48 <Knud> +1 15:56:59 <manu> RESOLUTION: Allow both URIs and CURIEs in all RDFa attributes, except @href and @src. 15:57:06 <Steven> rrsagent, make minutes 15:57:06 <RRSAgent> I have made the request to generate Steven 15:57:21 <ivan> q+ 15:57:27 <manu> ack ivan 15:58:45 <markbirbeck> Topic: Ensuring RDFa WG Resolutions are Reflected in HTML+RDFa 15:58:49 <ShaneM> and I will have a draft RDFa Core document by Monday 15:59:13 <markbirbeck> Ivan: How do resolutions such as this work their way into the HTML 5 version of the draft? 15:59:14 <markbirbeck> Manu: In 3-4 months, after we publish RDFa Core 1.1, HTML+RDFa will normatively refer to RDFa Core 1.1. 
Because it normatively refers to RDFa Core 1.1, the resolution we just passed will be included by reference in HTML+RDFa (and thus, be valid in XHTML+RDFa, HTML4+RDFa and HTML5+RDFa markup). 15:59:16 <manu> ISSUE-11? 15:59:16 <trackbot> ISSUE-11 -- Determine if there is a subset of popular prefix declarations that should be enabled in all RDFa Processors -- OPEN 15:59:16 <trackbot> 16:00:24 <Zakim> -RobW 16:00:27 <RobW> RobW has left #rdfa 16:00:28 <Zakim> -mgylling 16:00:34 <Zakim> -Knud 16:00:40 <Zakim> -Steven 16:00:46 <Zakim> -Mark_Birbeck # SPECIAL MARKER FOR CHATSYNC. DO NOT EDIT THIS LINE OR BELOW. SRCLINESUSED=00000279 | https://www.w3.org/2010/02/rdfa/wiki/Chatlog_2010-02-25 | CC-MAIN-2017-43 | refinedweb | 3,246 | 65.46 |
Created on 2008-01-07 11:24 by Romulo A. Ceccon, last changed 2010-08-05 00:36 by terry.reedy. This issue is now closed.
The message for WindowsError is taken from the Windows API's
FormatMessage() function, following the OS language. Currently Python
does no conversion for those messages, so non-ASCII characters end up
improperly encoded in the console. For example:
>>> import os
>>> os.rmdir('E:\\temp')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
WindowsError: [Error 41] A pasta nÒo estß vazia: 'E:\\temp'
Should be: "A pasta não está vazia" [Folder is not empty].
Python could check what is the code page of the current output interface
and change the message accordingly.
Crys, can you confirm this?
It would seem we'll need to fix this twice -- once for 2.x, once for 3.0.
Oh nice ...
Amaury knows probably more about the wide char Windows API than me. The
function Python/error.c:PyErr_SetExcFromWindows*() needs to be modified.
I confirm the problem (with French accents) on python 2.5.
Python 3.0 already fixed the problem by using the FormatMessageW()
unicode version of the API.
We could do the same for python 2.5, but the error message must be
converted to str early (i.e when building the Exception). What is the
correct encoding to use?
"... but the error message must be converted to str early (i.e when
building the Exception)."
Wouldn't that create more problems? What if somebody wants to intercept
the exception and do something with it, like, say, redirect it to a log
file? The programmer must, then, be aware of the different encoding. I
thought about keeping the exception message in Unicode and converting it
just before printing. Is that possible for Python 2.x?
I think this is not possible if we want to preserve compatibility; at
least, str(e.strerror) must not fail.
I can see different solutions:
1) Don't fix, and upgrade to python 3.0
2) Store an additional e.unicodeerror member, use it in a new
EnvironmentError.__unicode__ method, and call this from PyErr_Display.
3) Force FormatMessage to return US-English messages.
My preferred being 1): python2.5 is mostly encoding-naive, python3 is
unicode aware, and I am not sure we want python2.6 contain both code.
Other opinions?
3.0 will be a long way away for many users. Perhaps forcing English
isn't so bad, as Python's own error messages aren't translated anyway?
I would claim that this is not a bug. Sure, the message doesn't come out
correctly, but only because you run it in a cmd.exe window, not in (say)
IDLE.
IIUC, the problem is that Python computes the message in CP_ACP (i.e.
the ANSI code page), whereas the terminal interprets it in CP_OEMCP
(i.e. the OEM code page)..
Forcing English messages would certainly reduce the problems, but it
still might be that the file name in the error message does not come out
correctly.
> Forcing English messages would certainly reduce the problems
And it does not even work: my French Windows XP does not contain the
English error messages :-(
>.
If this is chosen, I propose to use CharToOem as the "unfailing"
conversion function. I will try to come with a patch following this idea.
> If this is chosen, I propose to use CharToOem as the "unfailing"
> conversion function. I will try to come with a patch following this idea.
Sounds fine to me.
Here is a patch. Now I feel it is a hack, but it is the only place I
found where I can access both the exception object and the encoding...
I think WindowsError's message should be English like other errors.
FormatMessageW() function can take dwLanguageId parameter.
So I think Python should pass `MAKELANGID(LANG_ENGLISH, SUBLANG_ENGLISH_US)` to the parameter.
> I think WindowsError's message should be English like other errors.
> FormatMessageW() function can take dwLanguageId parameter.
> So I think Python should pass `MAKELANGID(LANG_ENGLISH,
> SUBLANG_ENGLISH_US)` to the parameter.
On a non-english system FormatMessageW fails with ERROR_RESOURCE_LANG_NOT_FOUND (The specified resource language ID cannot be found in the image file) when called with that parameter.
Should we close this?
There was some opinion that this is not a bug.
The argument for not closing this before "3.0 will be a long way away for many users." is obsolete as 3.1.2 is here and 3.2 will be in less than 6 months.
Or, Amaury, do you have any serious prospect of applying the patch to 2.7?
Somebody should investigate the status of this on 3.x. If the message comes out as a nice Unicode string, I'd close it as fixed. If the message comes out as a byte string, it definitely needs fixing.
For 2.x, the issue is out of date.
The message is definitely an str (unicode) string. WinXP,3.1.2,
import os
try: os.rmdir('nonexist')
except Exception as e:
print(repr(e.args[1]), '\n', repr(e.strerror), '\n', e.filename)
os.rmdir('nonexist')
# prints
'The system cannot find the file specified'
'The system cannot find the file specified'
nonexist
...
WindowsError: [Error 2] The system cannot find the file specified: 'nonexist' | http://bugs.python.org/issue1754 | crawl-003 | refinedweb | 878 | 69.68 |
A couple needs a photo gallery for their wedding, where everyone can browse and upload their own photos.

Beginner developer: "I know of some great software we can use; give me a couple of days."
Skilled developer: "I'll write something up from scratch for you, it will be perfect; give me a couple of weeks."
Wise developer: "Let's just use a Flickr group; give me a couple of minutes."
"Reinventing the wheel" on the web means doing something from scratch vs. using a pre-existing solution, especially referring to solutions that already work particularly well. On the "reinventing" side, you benefit from complete control and learning from the process. On the other side, you benefit from speed, reliability, and familiarity. Also often at odds are time spent and cost.
This screencast takes a look at a few examples that have come up for me recently. The answer, I propose, is like most things in life: somewhere in the middle.
Links from Video:
Some interesting points here, Chris. I reckon that for smaller tasks, such as a poll system, it’s perfectly okay to reinvent the wheel; it certainly gives you a load more control over what you want to do, and if the project allows it then fair play. But for bigger projects I think that unless a client specifically wants a bespoke solution, it’s much better to go with something premade that has been planned carefully and built to a high standard, and adapt that to your needs. I certainly wouldn’t want to remake something like WordPress or Magento!
Great webcast… I love how the web is moving towards outsourcing like this. It means you can get the best of everything, and concentrate on your core content or functionality.
Don’t reinvent the wheel, since it’s already been invented. You could build upon it; that’s always a good idea. I’m not saying it because I’m lazy (which I am), but also because I like to stay practical: projects of more than a year bore me to death.
The only use for inventing a wheel is to invent one when there’s no wheel.
Nice to hear that I am not the only one who is willing to admit that sometimes it comes down to being practically lazy :D
If a new widget interests me and I think I will be using it often down the road, I am much more willing to spend the time learning it.
Not really interesting.
I don’t think that “reinventing the wheel” is necessarily a bad thing, and in no way should anyone be chided for doing so. When you’re working on a larger site that requires things like a forum, a wiki, a shopping cart, and/or a news section in addition to a highly specialized custom CMS, you’re potentially looking at a lot of closed systems that don’t play nicely with each other. Your choices become either forcing your users to register with each and every component they will need permissions for and log in to each one every time they need to access it, or making some hacked-up solution that transparently handles it all. Trying to force each one to use the same “users” table is almost always an exercise in futility because most developers hardcode the table names.
It’s worse when the solutions you’re looking at use different databases or require different PHP configurations, or all want to have a table named “users” with no concept of namespaces. I worked on a highly customized system a while back using PHP/PostgreSQL where the client decided they needed a live chat to provide support for their customers. So, we downloaded a few different ones and picked the least terrible one. It was PHP/MySQL. I wrote my system to require register_globals to be off; this one required register_globals to be on.
For your wedding example, sure, simple solutions like Flickr can be the best choice. Just because dog food might be cheaper or more conveniently packaged, that doesn’t mean you should feed it to your cat. They are different animals with very different dietary needs.
That’s how I feel too. It’s all about assessing those “dietary needs” and choosing a solution that fits best. Sometimes that’s recreating something that’s already been done, sometimes it’s not.
Regarding the single-sign-on issue, I have, in the past, used SymmetricDS in some projects and that worked (after some learning) really well.
If you google “jquery tooltip” or “jquery slider” there are more than 5,000,000 results. There are many posts like “40 Super Useful jQuery Slider” or “25 Useful jQuery Tooltip Plugins”. That is Reinventing the wheel :)
I think this is an extremely interesting topic, so I think that its good to discuss. Its an important thing to keep in mind, because creative types tend to have a compulsion to do something “their” way. That’s sort of what it means to be creative.
On the other hand, I think that this particular discussion was a little shallow. All of the examples really seem to point to an off the shelf solution being the right answer. While I think most of the time that is true, depending on your project, a much more interesting discussion is when is it appropriate to build from scratch. After all, you don’t use the same design for every site. The gray area is much more interesting than the black or the white.
I think this entire website sits in that gray area. On one hand, I often teach techniques: “Here is how to build such and such,” which is often reinventing something that has already been done. On the other hand, you could come here, snag the download, integrate it into a site, and hence nothing has been reinvented.
Here is one we could discuss that I briefly mentioned in the video: You need a system for clients to send you files. What do you do? FTP? Box.net? Build a system from scratch? Doesn’t it depend on the particular circumstances? What are those circumstances?
All coding problems sit on a spectrum of invention/reuse/reinvention. All coding problems also have a measure of how important something is, as well as how challenging. Many of the things I see here are less about how to reinvent a complete feature than they are about specific techniques or knowledge on a topic. If the example reinvents something, then who cares; it’s not about the example, it’s about the knowledge conveyed using the example.
The snippets are a little different, but they’re snippets, not features. In some cases it’s “reinventing,” but most are so small that it would be hard to even turn them into reusable parts. Other snippets are already in reusable form in things like JS libs, but in cases where one isn’t being used, this is a form of reuse through copy/paste.
The gray area I’m talking about are the really hard choices; larger scale projects with bigger features. The chat and file upload examples are both good ones. There is no one right answer as you’ve said yourself, but that’s the interesting part. What are the factors that would go into such a decision? What are some experiences where rolling your own has succeeded instead of failed?
There are people out there who want to write everything from scratch because they want the control or have the hubris to think they could do it better and still fit in the time constraints. And I think that you pegged it in terms of skill level. But thats an impulse control issue. When you come down to the business man bottom line its obvious – get the best result with the least amount of effort. Obviously, reuse as much as possible. When you’ve gotten to that point, then you can have the gray area discussion. When you’re talking about challenging problems, you have to measure at all points – if I build from scratch, how long will it take. If I don’t, what’s available, how hard is it to integrate, how hard is it to make it suit my needs. There are major pitfalls from both sides. Integration can sometimes take a lot longer than building from scratch, or at least building from libraries with a layer of glue. And one of the most dangerous problems is getting majorly invested down either path and realizing you have to go the other way.
I believe I once heard Joel Spolsky say something along the lines of: If it’s core to your business, do it from scratch, otherwise, use something off the shelf as much as possible.
or dropbox
Also – what Jason said.
I think your FTP example was a bit weak. I’m not sure what the ‘client’s demographic, but I think for most people FTP is very geeky, and the amount of time saved in using FTP over a small PHP file (even easier using a premade class, etc.) would be lost in the amount of support time needed to help clients upload the files.
Probably true for a lot of cases. But for example, at our design studio we deal with sending files to printers all the time. The majority of them just have FTP logins that we use and it works fine.
But for argument, let’s say FTP is out, too geeky. Your job is to get a solution in place for having these files be sent. The files that need to be transferred are let’s say typically in the 30-100 MB range. What do you do now?
Just for clarification …
Sent to who? To one person or to be shared?
I think it would depend on the media being transferred aswell.
At the moment, I’m thinking Google Docs. To begin with, uploading and viewing can go straight there. In the future, there is room to expand and provide a nicer experience (still with Google Docs backend) through the Google Document API’s which allow for retrieval, search and upload.
Imagine it’s a design studio and it’s for their clients to send them whatever source files they have.
Google Docs is cool, but I’m thinking it’s just as geeky as FTP. “Go here, sign up for a Google Account, now go here here here attach your files, now go to “share this document” and type in my email address so I can see it…”
FTP is good for some people like printers but not so great for others.
Something like Yousendit is great. I know you can brand pro accounts but some clients want to control everything. So you get called in to make a bespoke solution that essentially does the same thing.
My mum can use Google Docs without a hitch at computers without Word (or Pages). I suppose the process is streamlined a bit, as both she and I already have Google accounts.
box.net easy, super simple, cheap
Lol @ “Aaaand apparently he’s a big dick head”
The chat idea is great. Happy that you left it live, too. Might be a way we can quickly fix some problems that people are posting in the forums.
hi,
yeah it was a pretty interesting screencast. something i have been thinking about a lot recently is having a pic gallery on my site. i have never really found a good solution.
something you said, Chris, was that Flikr had an API and maybe you could do something with that? surely that would be the best of both worlds. you could easily slam ya pics in that and then it would show on your own website?! is that right? i personally don’t have enough knowledge, but from a client’s perspective it would mean that they could put whatever pics they wanted up and have a lot of control, AND it would be on their own website?
that would be awesome wouldn’t it?
Beginner Businessman – Sure I’ll use flickr it’ll be cheap and quick.
Advanced Businessman – Sure I know a great wordpress plugin. Give me a couple of days a $200.
Wise Businessman – I’ll write something from scratch. It’ll really set your wedding photographs off perfectly. Give me a week and $2000.
Poor Dentist – Brush and floss your teeth more than once a day.
Advanced Dentist – Eat what you want and you really don’t need to floss.
Wise Dentist – Here are some sugary snacks, and you really don’t need to brush your teeth. After all cats don’t brush their teeth and they don’t have problems…
Poor Patient – Listens to the advice of the advanced or wise dentist.
Wise Patient – Finds a poor dentist.
Moral of the story: find the solution that best meets your client’s needs, not the solution that best fits your needs.
Two more thoughts: You can shear a sheep many times, but you can only skin it once. Secondly it is not better to build a concrete driveway thick enough to last 100 years when the house will only last 50 years, in fact it is worse.
I’m on the side of not reinventing the wheel. If I can use a pre-canned solution with success, I’ll do it.
Though, I’ve built many things. I even learned assembler. I’ve never used assembler for anything useful. When somebody asks me why I took all that time to learn assembler (well) “Damnit, I wanted to know!” I’m with not taking a huge amount of effort to reinventing the wheel, but rebuilding the wheel to learn how it works. It makes me feel all fuzzy inside.
Yeah! This is nice!
I see coding and webdev as I saw puzzles as I child.
I’d spend the whole month doing large puzzles… and the day I finished them, I waited for my dad to come and after dinner I’d just unassemble them right away. Sometimes you can afford to do it just for fun
it’s all about the balance…
Why use Jquery and not raw java script!?
Or even better,let’s write our own script language!
Why use cars to get from A to B why not Horses?
Or even better, let’s walk .
Why use an Airplane to fly from EU to US why not go there on a ship?
Or even better, let’s swim.
But sometimes someone uses airplane to get from home to school :)
Sometimes you’re both right.
I’m one of those “older folks” who has come from a long career in traditional IT development – Mainframe solutions right through to web solutions.
On looking back over those many years and innumerable projects it seems to me that there exists a similarity between the topic being discussed here and my days at “school”.
In the “school” environment we are taught things from first principles – in other words reinventing the wheel so we have a solid basis for using the techniques and solutions being passed on to us. We then move on to using that knowledge to discover when to use those techniques and solutions to solve problems in life and in our jobs.
However, there are many times when a challenge presents itself which doesn’t have a solution catered for by our carefully accumulated knowledge and we need to find a solution that lets us overcome that challenge. Often, we are under tight deadlines which prevent us from using the “school” technique to develop a solution and this is where we turn to a packaged solution. What we learn quickly (and sometimes not so quickly) is how to effectively use the packaged solution. This is still adding to our bank of knowledge but not at the same level of understanding that we may desire.
Other times, we are not under time pressure and can afford to start from first principles and develop a solution even though a packaged solution may be available. We get enormous satisfaction from this process, we learn new things and fully understand what the solution does and how it does it.
More often than not, we are driven in one of those two directions by time pressure. In either case we still add to our knowledge bank.
I guess my point is that compromise is necessary in some circumstances and not necessary in others.
I personally turned to web development simply to keep my mind active and to continue attending the “University of Life”. My methods these days tend to lean towards using packaged solutions up front and then pulling them apart to gain detailed understanding of what they do and how they do it. Sometimes I then “reinvent the wheel” to make sure my understanding is sound.
Naturally, I am not under any sort of time pressure so I can choose how I want to proceed. People in the workforce, either in a company or independently, rarely have that luxury so packaged solutions are a popular and convenient way to solve problems. That’s life – which is always sprinkled with compromises..
I did the same thing. It only takes one bad egg to ruin the breakfast.
Hahaha! Like that kid that threw his appedix at someone on the street so now no one is allowed to keep theirs after surgery.
@Mike Robertson – yes, I TOTALLY agree.
Really interesting debate Chris. I have always tried to stay somewhere in the middle in this issue. It seems even the best wheels out there can fall short once in a while though. So I think sometimes there is no other option but to reinvent the wheel.
Otherwise, as you said, if you have something you want to get going tomorrow, you just won’t have to time to build your own perfect wheel.
One more issue I though of is customization. A lot of the prepackaged products out there don’t really allow much of that. WordPress on the other hand lets you customize, control and extend as needed, which is why I think WordPress has been so successful.
Interestingly enough I just came across this opportunity to work for a big, well-established company and let’s just say they have invented their own wheels, and it seems that’s where the real deal is at for them. So, we’ll see how I feel about the subject in a few months.
(By the way, do you have an ETA for the new print-outs of your book?)
Cheers!
I’m such a non reinventing the wheel guy. I also enjoy sometimes research deeply on how the wheel got invented to learn, but to keep things running at the pace I need to I’m so grateful of a lot of pre-built accesible options out there. I love how knowing the guts of your field (as you do) gives you always the choice of customizing those solutions with very simple hacks. Using API’s is a very good resource.
Outsourcing some webdev projects on eLance I’ve found a lot of hardcore do it from scratch coders that will resist using WordPress even when explicitly asked for. Usually their code is terrible, and it’s hard to understand for someone else to update later. Sharing solutions and open-source standards and communities is awesome and lets the whole thing grow.
I’ll always try to implement the pre-made solution first. I’ve found that nowadays there’s almost always a couple of solutions for anything you want to do on the web, although you may want or need something else still, and it’s so good to know you can!
Looking forward to watch your screencast…
But Chris pls can you add it on itunes…
Thanks
I totally understand the concept the you’re trying to get across in this video. And its a good one. But the examples you gave don’t really sell the whole idea. Especially to a developer. The majority of the time what we’re asked to deliver is not as simple and trivial as wedding pictures.
There’s also the flipside to the story where you start out with a canned solution and end up hacking it to death to do what the client needs or wants. Then you get into a situation where upgrades are very difficult.
You did make a good point about the “learning” aspect of building things from scratch. If a client asks “Why isn’t this working?” its always much more comforting to know how the internals work so you can address the problem.
Anyhow, good screencast and keep up the good work Chris.
Good show. I feel like this all the time with WordPress and Plugins. Most of the time I probably could provide a better fitting solution writing the stuff myself and putting it into the funtions.php file but end up using an existing Plugin anyway.
I think the key here is to examine your options. What solutions exist. From experience, I have a couple of times reinvented the wheel, where I later found far better solutions that was already done, and being kept up to date. (like when I first tried wordpress!!!)
For me it is important to be able to spend as much time on the things I think are fun and I earn money on, rather than fiddle around with something that someone else is much better at and think is fun.
Boiled down: If I got the time and I get paid, sure I’ll build something from scratch, but only so I can learn something new! Otherwise I’ll google!
Reinventing the wheel??
Just because you can, doesn’t mean you should.
Reinvent it if you want to learn and understand it, and perhaps build upon it to create something totally new.
Otherwise, use what’s available. You would be stupid not to. That’s all there is to it.
I fully agree. Being a student, I force myself to do whatever I can from scratch, even down to graphics, illustration, photography (soon, fonts) etc. Sure, I learn a lot in the process, but I also waste great amounts of time which I couldn’t afford in real projects.
Then again, a small part of my lizard brain enjoys the control I have over code I wrote myself, and that certainly scales (no pun intended) to big projects. This used to be about feature creep, but that really isn’t such a big deal today. Compatibility is, though, and if the best that’s out there is a bad compromise, rolling your own is always an option.
It should be your last, however, so make sure to keep an eye on what’s available.
I am constantly arguing with coders on “reinventing the wheel” there is no good reason to make something like this from scratch if it has been done, plus if you develop something you will forever need to modify it or upgrade it. I have interacted with a few different designers who want to create their own CMS system, I always ask why they just don’t use WordPress or Drupal, these standards have teams of people perfecting it, all you have to do is install it.
Great Topic – Great comments.
I feel very encouraged that there is no single best solution. Most likely this will be a case-by-case decision.
Totally agree with you Chris. What I tend to notice on people re-inventing the wheel (and I don’t understand why) is the amount of people still building their own blogs into a single website ignoring the fantastic tools available like WordPress and Joomla. Each of these tool (as you obviously know) are completly customisable to the point where our own imagination is our only limits. Even so, people still go to all the hard work of building there own easy admin area blog which ultimatly isn’t as good as the tools mentioned but hey, what the hell…Guess it looks better on their invoices.
In regards to JQuery being a reinvention of the wheel, I wouldn’t say so. I consider it an evolution of javascript, making it easier so I would say JQuery is like adding a tyre to the wheel.
Problem is that to the wheel you add more tyres, later cover it with sth. else and you forgot how wheel looks like.
You have to keep in middle, if you are rely too much on upper level layers you may find problems with undestanding and doing more complicated jobs, better you will forgot about principles ! and you start building wheel on wheel.
The book Get Back in the Box by Ruskoff talks about understanding systems from the inside out vs using wizards. He has cool Frontline documentaries too. Thanks for the screencasts, keep them up! Looking forward to using your book in my class soon.
This really should have been a written article. Why did it need video? *scratches head*
I totally agree with you..
As a web designer, and developer, I think in many situations, starting from scratch is a bad idea, for the following reasons:
1) My goal always is to save my clients money.
More time = More Money
so starting from scratch could be a very expensive.
2) I always try to evaluate my clients requirements and needs and look for an already ready solution. Now if that solution that currently exists, doesn’t fulfill what the client wants, they hell yeah you have to start from scratch… But if it does 90% or solves 90% of the problem, then building on top of those plaforms, will be a time savior.
3) Your feedback on Jquery is right on the money.. Jquery made my life easier.. And I wont go back to javascript if Jquery does what I want to do, with less code, and less time
4) Client problem solving is my ultimate goal, and if for example, flickr doesnt do what they want, or there is a requirement that flickr doesn’t fulfill or do, my solution will be the following. I will try to look through its API and see if that can be accomplished, if not, then…. may be I will proceed and start developing the solution from scratch, that is considering time and effort.
Great screencast, and I totally agree with you 100%…
That’s why personal projects are so important for me. They allow me to reinvent the wheel and learn new skills.
I wouldn’t do that on a client project, I’ll just take whatever is easiest/ most efficient for the task at hand.
So yes, there’s a place for both.
Re-inventing the wheel just takes more time and eats into an otherwise profitable project. As the saying goes: “Time is money” and unless you need to build something that’s 110% new and unique – it’s probably been done before by someone more efficient than you. Stand on the shoulders of giants who have gone before you. And be able to leave at 5:30.
Great food for thought, as usual! I feel like I’m right in the middle of this with a number of clients right now.
On the one hand, I can use a pre-packaged wordpress theme, make a few customizations and call it a day. On the other hand, how am I supposed to learn/master any skill unless I practice, and practice on clients projects, as well. I feel like the cost will be about the same (I only charge a portion of my rates if I’m learning a new skill) either way.
However, you are totally right. In some cases it makes way more sense to use a ‘trusted’, ready-to-use solution, like for the wedding. For some projects, I’m not going to ‘re-invent the wheel’. For others, it’s the only way I’m going to push my skills to the next level.
-Jacob
For my study we got the oppertunity to develop something that we were really wanted to learn. This since in a few years we will be working and there are less changes.
Me and and a friend thought yeah lets develop a new look on CMS design. Sure we have to learn all the ins and outs about php and MVC but hell we learn a lot right? But the teacher didn’t aprove , we should focus on new technologies that are just starting like augmented reality or papervision etc.
But why do that while i’ m pretty sure that in the near future i’m not going to develop in this platform or learn as much since there is hardly any resources. Sometimes re-invent the wheel with a focus on adding something new to it isn’t that bad. If it fails at least you learnt a lot. And if it works out you really produced something useful.
That my experience in this topic that i liked to share with you. Not sure if it is exactly what you were aiming for.
How do you make progress if you don’t try it yourself. If you use only pre made solutions you will be lagging behind. And it’s because of those people who reinvet the wheel we have faster cars, bigger boats etc. I think you have to know how things work before you use them. I don’t mean to be an expert but to know the basics. I don’t speak just of jQuery… I mean EVERYTHING from your TV to your car. Otherwise you can’t use them fully or properly. For example 80% of the people who use jQuery and don’t know a thing of JavaScript make ugly or not user friendly apps!
got the book, read the articles, big fan of your work…
my favorite book, my favorite website, my favorite web designer… dig-wp, css-tricks, Chris Coyier! someday ill be as good as you…
But really, no one makes a better wheel than me. | https://css-tricks.com/video-screencasts/80-regarding-wheel-invention/ | CC-MAIN-2017-34 | refinedweb | 5,047 | 80.11 |
trie 1.0.2
A comprehensive Trie implementation in Dart, optimized for autocomplete.
Trie #
A comprehensive Trie implementation in Dart, for Dart developers. Optimized for autocomplete. Made by Christopher Gong and Ankush Vangari.
Created from templates made available by Stagehand under a BSD-style license.
Full API docs are linked here.
Usage #
A simple usage example:
import 'package:trie/trie.dart'; main() { List<String> names = []; //your list goes here Trie trie = new Trie.list(names); trie.addWord("TURING"); print("All names are: " + trie.getAllWords().toString()); print("All names that begin with T are: " + trie.getAllWordsWithPrefix("T").toString()); }
Features and bugs #
Please file feature requests and bugs at the issue tracker. | https://pub.dev/packages/trie | CC-MAIN-2020-34 | refinedweb | 110 | 54.49 |
The .NET Framework simplifies processing and formatting data with the String class and its Split and Join methods or regular expressions. Learn more about using these methods in your application.
Processing string values is an integral aspect of most application development projects. This often involves parsing strings into separate values. For instance, receiving data from an external data source such as a spreadsheet often utilizes a common format like comma-separated values. The .NET String class simplifies the process of extracting the individual values between the commas.
Extracting values
The
Here are the two variables:
- String.Split(char[]) in C# or String.Split(Char()) in VB.NET
- String.Split(char[], int) in C# or String.Split(Char(), Integer) in VB.NET
The following C# snippet populates an array with values contained in a comma-separated string value:
string values = "TechRepublic.com, CNET.com, News.com, Builder.com, GameSpot.com";
string[] sites = values.Split(',');
foreach (string s in sites) {
Console.WriteLine(s);
}
The following output is generated:
TechRepublic.com
CNET.com
News.com
Builder.com
GameSpot.com
The equivalent VB.NET code follows:
Dim values As String
values = "TechRepublic.com, CNET.com, News.com, Builder.com, GameSpot.com"
Dim sites As String() = Nothing
sites = values.Split(",")
Dim s As String
For Each s In sites
Console.WriteLine(s)
Next s
You may specify multiple separator characters, which are contained in a character array. The following code splits a string of values separated by a comma, semicolon, or colon. In addition, it uses the optional second parameter to set the maximum number of items returned at four.
char[] sep = new char[3];
sep[0] = ',';
sep[1] = ':';
sep[2] = ';';
string values = "TechRepublic.com: CNET.com, News.com, Builder.com; GameSpot.com";
string[] sites = values.Split(sep, 4);
foreach (string s in sites) {
Console.WriteLine(s);
}
The following output is generated (notice that the second parameter places the remainder of the string in the last array element):
TechRepublic.com
CNET.com
News.com
Builder.com; GameSpot.com
The equivalent VB.NET code follows:
Dim values As String
values = "TechRepublic.com: CNET.com, News.com, Builder.com; GameSpot.com"
Dim sites As String() = Nothing
Dim sep(3) As Char
sep(0) = ","
sep(1) = ":"
sep(2) = ";"
sites = values.Split(sep, 4)
Dim s As String
For Each s In sites
Console.WriteLine(s)
Next s
While the Split method allows you to easily work with individual elements contained in a string value, you may need to format values according to a predefined format like comma-separated values. The String class makes it easy to assemble a properly formatted string.
Putting it together
The Join method of the String class accepts the character to be used as the separator as its first parameter. The values to be concatenated are passed as the second parameter in the form of a string array. It has one overloaded method signature that accepts integer values as the third and fourth parameters. The third parameter specifies the first array element to use, and the last parameter is the total number of elements to use.
The following C# code sample demonstrates assembling the values used in the previous example:
string sep = ", ";
string[] values = new String[5];
values[0] = "TechRepublic.com";
values[1] = "CNET.com";
values[2] = "News.com";
values[3] = "Builder.com";
values[4] = "GameSpot.com";
string sites = String.Join(sep, values);
Console.Write(sites);
The following output is generated:
TechRepublic.com, CNET.com, News.com, Builder.com, GameSpot.com
The equivalent VB.NET follows:
Dim sep As String
sep = ", "
Dim values(4) As String
values(0) = "TechRepublic.com"
values(1) = "CNET.com"
values(2) = "News.com"
values(3) = "Builder.com"
values(4) = "GameSpot.com"
Dim sites As String
sites = String.Join(sep, values)
Console.Write(sites)
We could use the overloaded format to specify where to begin and how many elements to include in the result. The following sample begins with the second (note that element numbering begins at zero) and returns a maximum of three elements:
Dim sep As String
sep = ", "
Dim values(4) As String
values(0) = "TechRepublic.com"
values(1) = "CNET.com"
values(2) = "News.com"
values(3) = "Builder.com"
values(4) = "GameSpot.com"
Dim sites As String
sites = String.Join(sep, values, 2, 3)
Console.Write(sites)
The starting element number and the maximum values to return must be valid within the string array being used. If either is invalid (i.e., not contained in the array), then an exception is thrown. For this reason, it is a good idea to utilize a try/catch block to handle any problems.
While the String class provides the necessary methods, it isn't the only way to handle the parsing of a string value. Another common approach takes advantage of regular expressions.
Parsing with regular expressions
The .NET Framework provides the Regex class contained in the System.Text.RegularExpressions namespace for using regular expressions within a .NET application. Parsing is only one of the many applications of regular expressions.
Let's examine the parsing of our sample string using regular expressions. The following ASP.NET page uses C# to parse a comma-delimited list of sites into an array:
<%@ Page Language="C#" Debug="true" %>
<%@ Import Namespace="System.Text.RegularExpressions" %>
<script language="C#" runat="server">
private void Page_Load(object sender, System.EventArgs e){
if (!IsPostBack) {
string values = "TechRepublic.com, CNET.com, News.com, Builder.com, GameSpot.com";
string pattern = ",(?=(?:[^\"]*\"[^\"]*\")*(?![^\"]*\"))";
Regex r = new Regex(pattern);
string[] sites = r.Split(values);
foreach (string s in sites) {
Response.Write(s);
Response.Write("<br>");
} } }
</script>
The equivalent VB.NET code follows. Notice that the inclusion of quotation marks in the string value (pattern) causes problems. So, the quotation marks contained in the string must be escaped to be recognized; this may be achieved by placing two of the characters adjacent to each other.
<%@ Page Language="VB" Debug="true" %>
<%@ Import Namespace="System.Text.RegularExpressions" %>
<script language="VB" runat="server">
Sub Page_Load
If Not (IsPostBack) Then
Dim values As String
values = "TechRepublic.com, CNET.com, News.com, Builder.com, GameSpot.com"
Dim pattern As String
pattern = ",(?=(?:[^\""]*\""[^\""]*\"")*(?![^\""]*\\""))"
Dim r As Regex
r = new Regex(pattern)
Dim sites As String()
sites = r.Split(values)
Dim s As String
For Each s In sites
Response.Write(s)
Response.Write("<br>")
Next s
End If
End Sub
</script>
Easily work with data
The .NET Framework makes it easy to work with data regardless of its format. A string containing values separated by a specific character is easily processed via the String class or possibly regular expressions. The method that you decide to use will depend on your specific application.
Miss a column?
Check out the .NET Archive, and catch up on the most recent editions of Tony Patton's column. | https://www.techrepublic.com/article/easily-parse-string-values-with-net/ | CC-MAIN-2019-43 | refinedweb | 1,119 | 52.87 |
The public header files are defined in include/foo and get installed into $prefix/include/foo.
The accessor (set/get) functions that are to be exported are defined as pure virtual functions in this header.
A skeleton of a common public header file looks like:
#ifndef INCLUDED_FOO_BAR_H
#define INCLUDED_FOO_BAR_H
#include <foo/api.h>
#include <gr_sync_block.h>
namespace gr {
namespace foo {
class FOO_API bar : virtual public gr_sync_block
{
public:
// gr::foo::bar::sptr
typedef boost::shared_ptr<bar> sptr;
/*!
* \class bar
* \brief A brief description of what foo::bar does
*
* \ingroup <some group>_blk
*
* A more detailed description of the block.
*
* \param var explanation of argument var.
*/
static sptr make(dtype var);
virtual void set_var(dtype var) = 0;
virtual dtype var() = 0;
};
} /* namespace foo */
} /* namespace gr */
#endif /* INCLUDED_FOO_BAR_H */
The private implementation header files are defined in lib
and do not get installed. We normally define these files to use the
same name as the public file and class with a '_impl' suffix to indicate
that this is the implementation file for the class.
In some cases, this file might be specific to a very particular
implementation and multiple implementations might be available for a
given block but with the same public API. A good example is the use of
the FFTW library for implementing the fft_filter blocks. This is only one of many possible ways to implement an FFT, and so the implementation was named fft_filter_ccc_fftw.
Another library that implements an FFT specific to a platform or
purpose could then be slotted in as a new implementation like fft_filter_ccc_myfft.
All member variables are declared private and use the prefix 'd_'.
As much as possible, all variables should have a set and get function.
The set function looks like void set_var(dtype var), and the get function looks like dtype var().
It does not always make sense to have a set or get for a particular
variable, but all efforts should be made to accommodate it.
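Outside of any GR machinery, the convention amounts to the following pattern (a minimal, GR-independent sketch; the class and member names here are hypothetical, not part of any GNU Radio API):

```cpp
#include <cassert>

// Hypothetical class illustrating the accessor conventions:
// private members carry the 'd_' prefix, the setter is
// void set_var(dtype), and the getter is dtype var().
class example_block {
private:
    float d_gain;

public:
    example_block(float gain) { set_gain(gain); }

    void set_gain(float gain) { d_gain = gain; }
    float gain() const { return d_gain; }
};
```

Routing constructor arguments through the setter, as shown here, keeps any validation or side effects of changing the variable in a single place.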
The Doxygen comments that will be included in the manual are defined
in the public header file. There is no need for Doxygen markup in the
private files, but of course, any comments or documentation that make
sense should always be used.
A skeleton of a common private header file looks like:
#ifndef INCLUDED_FOO_BAR_IMPL_H
#define INCLUDED_FOO_BAR_IMPL_H
#include <foo/bar.h>
namespace gr {
namespace foo {
class FOO_API bar_impl : public bar
{
private:
dtype d_var;
public:
bar_impl(dtype var);
~bar_impl();
void set_var(dtype var);
dtype var();
int work(int noutput_items,
gr_vector_const_void_star &input_items,
gr_vector_void_star &output_items);
};
} /* namespace foo */
} /* namespace gr */
#endif /* INCLUDED_FOO_BAR_IMPL_H */
The source file is lib/bar.cc and implements the actual code for the class.
This file defines the make function for the public
class. This is a member of the class, which means that we can, if
necessary, do interesting things, define multiple factory functions, etc.
Most of the time, this simply returns an sptr to the implementation
class.
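Stripped of the GR-specific pieces, the idiom looks like this (a self-contained sketch using std::shared_ptr in place of boost::shared_ptr; the names are illustrative, not from the GNU Radio API):

```cpp
#include <cassert>
#include <memory>

// The public class exposes only a static make() factory; the
// constructor of the implementation class stays hidden from user code.
struct widget {
    typedef std::shared_ptr<widget> sptr;
    static sptr make(double var);
    virtual double var() = 0;
    virtual ~widget() {}
};

struct widget_impl : public widget {
    double d_var;
    widget_impl(double var) : d_var(var) {}
    double var() { return d_var; }
};

// Because make() is a member function, it could inspect its arguments
// and return different implementations -- this is where multiple
// factory functions or implementation choices (like fft_filter's
// FFTW backend) would hook in.
widget::sptr widget::make(double var)
{
    return widget::sptr(new widget_impl(var));
}
```

User code only ever sees `widget::sptr` and `widget::make()`, so the implementation class can be swapped without touching the public header.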
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#include "bar_impl.h"
#include <gr_io_signature.h>
namespace gr {
namespace foo {
bar::sptr bar::make(dtype var)
{
return gnuradio::get_initial_sptr(new bar_impl(var));
}
bar_impl::bar_impl(dtype var)
: gr_sync_block("bar",
gr_make_io_signature(1, 1, sizeof(in_type)),
gr_make_io_signature(1, 1, sizeof(out_type)))
{
set_var(var);
}
bar_impl::~bar_impl()
{
// any cleanup code here
}
dtype
bar_impl::var()
{
return d_var;
}
void
bar_impl::set_var(dtype var)
{
d_var = var;
}
int
bar_impl::work(int noutput_items,
gr_vector_const_void_star &input_items,
gr_vector_void_star &output_items)
{
const in_type *in = (const in_type*)input_items[0];
out_type *out = (out_type*)output_items[0];
// Perform work; read from in, write to out.
return noutput_items;
}
} /* namespace foo */
} /* namespace gr */
Because of the use of the public header file to describe what we
want accessible publicly, we can simply include the headers in the main
interface file. So in the swig directory there is a single interface file, foo_swig.i:
#define FOO_API
%include "gnuradio.i"
//load generated python docstrings
%include "foo_swig_doc.i"
%{
#include "foo/bar.h"
%}
%include "foo/bar.h"
GR_SWIG_BLOCK_MAGIC2(foo, bar);
NOTE: We are using "GR_SWIG_BLOCK_MAGIC2" for the
definitions now. When we are completely converted over, this will
simply replace "GR_SWIG_BLOCK_MAGIC".
To implement processing, the user must write a "work" routine that reads inputs, processes, and writes outputs.
An example work function implementing an adder in c++
int work(int noutput_items,
         gr_vector_const_void_star &input_items,
         gr_vector_void_star &output_items)
{
    const float *in0 = (const float*)input_items[0];
    const float *in1 = (const float*)input_items[1];
    float *out = (float*)output_items[0];
    //add the two input streams sample by sample
    for (int i = 0; i < noutput_items; i++) {
        out[i] = in0[i] + in1[i];
    }
    //return produced
    return noutput_items;
}
When creating a block, the user must communicate the following to the block:
An IO signature describes the number of ports a block may have and
the size of each item in bytes. Each block has 2 IO signatures: an input
signature, and an output signature.
Some example signatures in c++
-- A block with 2 inputs and 1 output --
gr_sync_block("my adder", gr_make_io_signature(2, 2, sizeof(float)), gr_make_io_signature(1, 1, sizeof(float)))
-- A block with no inputs and 1 output --
gr_sync_block("my source", gr_make_io_signature(0, 0, 0), gr_make_io_signature(1, 1, sizeof(float)))
-- A block with 2 inputs (float and double) and 1 output --
std::vector<int> input_sizes;
input_sizes.push_back(sizeof(float));
input_sizes.push_back(sizeof(double));
gr_sync_block("my block", gr_make_io_signaturev(2, 2, input_sizes), gr_make_io_signature(1, 1, sizeof(float)))
To take advantage of the gnuradio framework, users will create
various blocks to implement the desired data processing. There are
several types of blocks to choose from.
An example sync block in c++
#include <gr_sync_block.h>
class my_sync_block : public gr_sync_block
{
public:
my_sync_block(...):
gr_sync_block("my block",
gr_make_io_signature(1, 1, sizeof(int32_t)),
gr_make_io_signature(1, 1, sizeof(int32_t)))
{
//constructor stuff
}
int work(int noutput_items,
gr_vector_const_void_star &input_items,
gr_vector_void_star &output_items)
{
//work stuff...
return noutput_items;
}
};
The interpolation block is another type of fixed rate block where
the number of output items is a fixed multiple of the number of input
items.
An example interpolation block in c++
#include <gr_sync_interpolator.h>
class my_interp_block : public gr_sync_interpolator
{
public:
my_interp_block(...):
gr_sync_interpolator("my interp block",
in_sig,
out_sig,
interpolation)
{
//constructor stuff
}
//work function here...
};
The basic block provides no relation between the number of input
items and the number of output items. All other blocks are just
simplifications of the basic block. Users should choose to inherit from
basic block when the other blocks are not suitable.
The adder revisited as a basic block in c++
#include <gr_block.h>
class my_basic_block : public gr_block
{
public:
my_basic_block(...):
gr_block("another adder block",
in_sig,
out_sig)
{
//constructor stuff
}
int general_work(int noutput_items,
                 gr_vector_int &ninput_items,
                 gr_vector_const_void_star &input_items,
                 gr_vector_void_star &output_items)
{
    const float *in0 = (const float*)input_items[0];
    const float *in1 = (const float*)input_items[1];
    float *out = (float*)output_items[0];
    //add the two input streams sample by sample
    for (int i = 0; i < noutput_items; i++) {
        out[i] = in0[i] + in1[i];
    }
//consume the inputs
this->consume(0, noutput_items); //consume port 0 input
this->consume(1, noutput_items); //consume port 1 input
//this->consume_each(noutput_items); //or shortcut to consume on all inputs
//return produced
return noutput_items;
}
};
Hierarchical blocks are blocks that are made up of other blocks.
They instantiate the other GNU Radio blocks (or other hierarchical
blocks) and connect them together. A hierarchical block has a “connect”
function for this purpose.
Hierarchical blocks define an input and output stream much like normal blocks. To connect input i to a hierarchical block, the source is (in Python):
self.connect((self, i), <block>)
Similarly, to send the signal out of the block on output stream o:
self.connect(<block>, (self, o))
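As a toy sketch of this endpoint convention (pure Python; ToyHierBlock and its connect are invented names for illustration, not the GNU Radio API):

```python
# Toy model of connect() endpoints: an endpoint is either a block or a
# (block, port) tuple; a bare block is shorthand for port 0.
# Illustrative only, NOT the GNU Radio API.

class ToyHierBlock:
    def __init__(self, name):
        self.name = name
        self.edges = []

    def connect(self, src, dst):
        # normalize a bare block to (block, 0)
        norm = lambda e: e if isinstance(e, tuple) else (e, 0)
        self.edges.append((norm(src), norm(dst)))

hb = ToyHierBlock("resampler")
hb.connect((hb, 1), "inner_filter")   # input stream 1 -> inner block
hb.connect("inner_filter", (hb, 0))   # inner block -> output stream 0

print(hb.edges[0][0][1], hb.edges[1][1][1])
# → 1 0
```

The real hierarchical block API works the same way: `(self, i)` names one of the block's own streams, while a bare block names that block's port 0.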
The top block is the main data structure of a GNU Radio flowgraph.
All blocks are connected under this block. The top block has the
functions that control the running of the flowgraph. Generally, we
create a class that inherits from a top block:
class my_topblock(gr.top_block):
    def __init__(self, <args>):
        gr.top_block.__init__(self)
        <create and connect blocks>

def main():
    tb = my_topblock(<args>)
    tb.run()
The top block has a few main member functions: start(), stop(), wait(), run(), lock(), and unlock().
The N concept allows us to adjust the latency of a flowgraph. By
default, N is large and blocks pass large chunks of items between
each other. This is designed to maximize throughput and efficiency. Since
large chunks of items incur latency, we can force these chunks to a
maximum size to control the overall latency at the expense of
efficiency. A set_max_noutput_items(N) method is defined for a top block to change this number, but it only takes effect during a lock/unlock procedure.
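To get an intuition for the numbers involved, here is a back-of-the-envelope model of why chunk size bounds latency (a simplification for illustration, not actual GNU Radio scheduler internals):

```python
# Rough latency model: if the scheduler hands a block chunks of
# max_noutput_items samples from a stream at sample_rate samples/s,
# the first sample of a chunk waits roughly one chunk duration before
# the chunk is passed downstream. Simplified for intuition only; the
# real scheduler also involves buffer sizes and thresholds.

def chunk_latency_ms(max_noutput_items, sample_rate):
    return 1000.0 * max_noutput_items / sample_rate

for n in (512, 8192):
    print(n, chunk_latency_ms(n, 1e6))
# → 512 0.512
# → 8192 8.192
```

At 1 Msps, capping chunks at 512 items keeps per-hop latency around half a millisecond, while the default large chunks can add several milliseconds per block.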
A tag decorates a stream with metadata. A tag is associated with a
particular item in a stream. An item may have more than one tag
associated with it. The association of an item and tag is made through
an absolute count. Every item in a stream has an absolute count. Tags
use this count to identify the item in the stream with which they are associated.
A PMT is a special data type in gnuradio to serialize arbitrary data. To learn more about PMTs see gruel/pmt.h
Tags can be read from the work function using get_tags_in_range. Each input port/stream can have associated tags.
Example reading tags in c++
int work(int noutput_items,
gr_vector_const_void_star &input_items,
gr_vector_void_star &output_items)
{
std::vector<gr_tag_t> tags;
const uint64_t nread = this->nitems_read(0); //number of items read on port 0
const size_t ninput_items = noutput_items; //assumption for sync block, this can change
//read all tags associated with port 0 for items in this work function
this->get_tags_in_range(tags, 0, nread, nread+ninput_items);
//work stuff here...
}
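The absolute-count bookkeeping can be illustrated without GNU Radio at all; the helper below (a made-up name, not the real get_tags_in_range) mirrors the arithmetic the example above relies on:

```python
# Sketch of the absolute-count arithmetic behind get_tags_in_range.
# tags_in_range is an invented helper, not the GNU Radio API: tags are
# (absolute_offset, payload) pairs, and a work() call sees the items
# in the window [nitems_read, nitems_read + ninput_items).

def tags_in_range(tags, nitems_read, ninput_items):
    lo, hi = nitems_read, nitems_read + ninput_items
    # keep tags inside the window, re-expressed as relative indices
    return [(offset - lo, payload) for (offset, payload) in tags
            if lo <= offset < hi]

# 1000 items already consumed on this port; this call handles 64 more.
tags = [(990, 'before'), (1000, 'first'), (1063, 'last'), (1064, 'after')]
print(tags_in_range(tags, 1000, 64))
# → [(0, 'first'), (63, 'last')]
```

This is why the C++ example computes nread with nitems_read(0) first: tag offsets are absolute, so subtracting nread converts them into indices into the current input buffer.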
Tags can be written from the work function using add_item_tag. Each output port/stream can have associated tags.
Example writing tags in c++
int work(int noutput_items,
gr_vector_const_void_star &input_items,
gr_vector_void_star &output_items)
{
const size_t item_index = ? //which output item gets the tag?
const uint64_t offset = this->nitems_written(0) + item_index;
pmt::pmt_t key = pmt::pmt_string_to_symbol("example_key");
pmt::pmt_t value = pmt::pmt_string_to_symbol("example_value");
//write a tag to output port 0 with given absolute item offset
this->add_item_tag(0, offset, key, value);
//work stuff here...
}
This is the part of the guide where we give tips and tricks for making blocks that work robustly with the scheduler.
If a work function contains a blocking call, it must be written in
such a way that it can be interrupted by boost threads. When the flow
graph is stopped, all worker threads will be interrupted. Thread
interruption occurs when the user calls unlock() or stop() on the flow
graph. Therefore, it is only acceptable to block indefinitely on a boost
thread call such as sleep or a condition variable, or something that uses
these boost thread calls internally such as pop_msg_queue(). If you need
to block on a resource such as a file descriptor or socket, the work
routine should always call into the blocking routine with a timeout.
When the operation times out, the work routine should call a boost
thread interruption point or check boost thread interrupted and exit if
true.
Because work functions can be interrupted, the block's state
variables may be indeterminate next time the flow graph is run. To make
blocks robust against indeterminate state, users should overload the
blocks start() and stop() functions. The start() routine is called when
the flow graph is started before the work() thread is spawned. The
stop() routine is called when the flow graph is stopped after the work
thread has been joined and exited. Users should ensure that the state
variables of the block are initialized properly in the start() routine.
Note: BlocksCodingGuide (original source; this translated compilation is for reference only.)
An interface can have three types of members: constant fields, abstract methods, and nested types.
An interface cannot have mutable instance or class variables. Unlike a class, an interface cannot be instantiated. All members of an interface are implicitly public.
We can declare constant fields in an interface as follows. It declares an interface named Choices, which has declarations of two fields: YES and NO. Both are of int data type.
public interface Choices {
    public static final int YES = 1;
    public static final int NO = 2;
}
All fields in an interface are implicitly public, static, and final.
The Choices interface can be declared as follows without changing its meaning:
public interface Choices {
    int YES = 1;
    int NO = 2;
}
You can access the fields in an interface using the dot notation in the form of
<interface-name>.<field-name>
You can use Choices.YES and Choices.NO to access the values of YES and NO fields in the Choices interface.
The following code demonstrates how to use the dot notation to access fields of an interface.
public class ChoicesTest {
    public static void main(String[] args) {
        System.out.println("Choices.YES = " + Choices.YES);
        System.out.println("Choices.NO = " + Choices.NO);
    }
}
Fields in an interface are always final whether the keyword final is used in its declaration or not. We must initialize a field at the time of declaration.
We can initialize a field with a compile-time or runtime constant expression. Since a final field is assigned a value only once, we cannot set the value of the field of an interface, except in its declaration.
The following code shows some valid field declarations for an interface:
public interface ValidFields {
    int X = 10;
    int Y = X;
    double N = X + 10.5;
    boolean YES = true;
    boolean NO = false;
    Test TEST = new Test();
}
It is a convention to use all uppercase letters in the name of a field in an interface to indicate that they are constants.
The fields of an interface are always public.
This post shows how to add sparklines to a set of data in seven easy steps.
Step 1:
Import the necessary namespaces.
Step 2:
Then initialize the workbook.
Step 3:
Next, we’ll add our data and assign it to a range in the worksheet.
Step 4:
Next, create a table and add the range of data to the table.
Step 5:
Add a formula to sum up the columns.
Step 6:
Add a sparkline to column K. Specify the sparkline type and pass along the source data.
Step 7:
Finally, save the workbook. When we open the spreadsheet, we’ll see that the sparklines have been added in column K.
Matching with allowed mismatch (Java)
[edit] Overview
An implementation of string matching with a defined number of at most k mismatches based on longest common extension (lce) computation as described in Gusfield 1999:200. This is a form of inexact string matching that allows no insertions or deletions but only matches and mismatches. When used with lce computation with suffix trees it has a runtime complexity of O(km), where m is the length of the text. This implementation uses a simple computation of the lce.
[edit] Implementation
The approach is very similar to matching with wildcards: for every position in the text, up to k+1 lce queries are executed; if the extensions reach the end of the pattern, a match with at most k mismatches is reported, otherwise more than k mismatches would be needed. As at most k+1 lce computations are required per position, the runtime complexity is O(km), where m is the length of the text (Gusfield 1999:200). The method takes three parameters: the text, the pattern and the number of allowed mismatches. It returns a collection of strings, the matching substrings. The algorithm, as described in Gusfield 1999:200, consists of four steps:
- Step 1: Set j to 1 and h to i and count to 0 (the implementation below is 0-based, so j starts at 0).
<<init>>=
int j = 0;
int h = i;
int count = 0;
int n = p.length();
- Step 2: Compute the length L of the longest common extension starting at positions j of P and h of T:
<<compute_lce>>=
int L = SimpleLongestCommonExtension.longestCommonExtension(p, j, t, h);
- Step 3: If j + L = n + 1, then a k-mismatch of P occurs in T starting at i (in fact, only count mismatches occur); stop.
<<collect>>=
if (j + 1 + L == n + 1) {
    result.add(t.substring(i, i + n));
    break;
}
- Step 4: If count < k, then increment count by one, set j to j + L + 1, set h to h + L + 1 and go to step 2. If count = k, then a k-mismatch of P does not occur starting at i; stop.
<<match>>=
else if (count < k) {
    count++;
    j = j + L + 1;
    h = h + L + 1;
} else if (count == k) {
    break;
}
[edit] Usage
- A JUnit 4 unit test to demonstrate the usage:
<<test>>=
Collection<String> results = getMatches("abentbananaend", "bend", 2);
assertEquals(Arrays.asList("bent", "bana", "aend"), results);
- The complete program:
<<MatchingWithAllowedMismatch.java>>=
import static org.junit.Assert.assertEquals;
import org.junit.Test;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;

public class MatchingWithAllowedMismatch {
    public static Collection<String> getMatches(String t, String p, int k) {
        Collection<String> result = new ArrayList<String>();
        for (int i = 0; i < t.length(); i++) {
            init
            while (true) {
                compute_lce
                collect
                match
            }
        }
        return result;
    }

    @Test
    public void testGetMismatches() {
        test
    }
}
- The required simple implementation of longest common extension computation:
<<SimpleLongestCommonExtension.java>>= Longest common extension (Java)#8162#SimpleLongestCommonExtension.java
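If you want to sanity-check the algorithm without a Java toolchain, the same scan can be sketched in Python using the naive lce. This is an illustrative port, not part of the original article:

```python
# Illustrative Python port of the k-mismatch scan above, using a
# naive longest-common-extension computation.

def lce(p, j, t, h):
    n = 0
    while j + n < len(p) and h + n < len(t) and p[j + n] == t[h + n]:
        n += 1
    return n

def k_mismatch_matches(t, p, k):
    result = []
    for i in range(len(t) - len(p) + 1):
        j, h, count = 0, i, 0
        while True:
            L = lce(p, j, t, h)
            if j + L == len(p):          # extensions reached pattern end
                result.append(t[i:i + len(p)])
                break
            if count < k:                # spend one of the k mismatches
                count += 1
                j += L + 1
                h += L + 1
            else:                        # a (k+1)-th mismatch is needed
                break
    return result

print(k_mismatch_matches("abentbananaend", "bend", 2))
# → ['bent', 'bana', 'aend']
```

The output reproduces the JUnit expectation from the test chunk above.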
[edit] References
- Gusfield, Dan (1999), Algorithms on Strings, Trees, and Sequences. Cambridge: Cambridge University Press.
Jeff Johnson wrote:
> I would love that! I tried to get ADO to work a few weeks ago but other
> projects took priority. Any help about using ADO from Python would be
> greatly appreciated.
First of all, I recommend getting the "Python Programming on Win32" book by Mark
Hammond and Andy Robinson. I've found it to be immensely useful.
In order to use any COM objects from a multithreaded application such as WebKit,
you have to do some COM initialization magic. The easiest way to do this is to
create a subdirectory underneath your WebWare directory called Win32Kit. Place
the attached file __init__.py into that directory.
You can now magically use COM objects from your WebKit servlets.
You'll want to install MDAC 2.5 which is downloadable from Microsoft -- this
gives you the latest version of ADO with drivers for SQL Server and other
databases.
Run the COM Makepy utility inside of PythonWin on "Microsoft ActiveX Data
Objects 2.5 Library."
Create an ODBC System Data Source for your SQL Server. This can be done in the
Windows Control Panel. The example code below assumes you called it MyDatabase.
Now you're all set -- you can use ADO from your servlets. It's not hard to just
use the ADO objects by themselves, but I've written some helper classes that
make it easier. It's called DatabaseMixin.py -- you can just put it in the same
directory as your servlets if you want.
I recommend installing the mxDateTime package -- it makes time manipulation much
easier. DatabaseMixin.py assumes you have it installed.
Now, you can write servlets like the following:
from Page import Page
from DatabaseMixin import DatabaseMixin
from WebUtils.WebFuncs import HTMLEncode
class ChooseCustomer(Page, DatabaseMixin):
    def writeBody(self):
        self.writeln('<H4>Choose a Customer to work with:</H4>')
        rs = self.recordset('SELECT CustomerID, CustomerName FROM '
                            'Customers ORDER BY CustomerName')
        for record in rs:
            self.writeln('<br><a href="ChooseReport?CustomerID=%d">%s</a>' %
                         (record.CustomerID, HTMLEncode(record.CustomerName)))
That's about it. Give it a try, and let me know if you have problems,
questions, comments, suggestions, improvements, etc. I've glossed over some
details here in the interest of brevity, but I'd be happy to answer any
questions.
--
- Geoff Talvola
Parlance Corporation
gtalvola@...
Why a cloud solution?
In a previous article, I strongly suggested that you use Jupyter to design and work on your machine learning models. Of course I have not changed my mind, quite the contrary. Nevertheless, Jupyter as it is has a big drawback: it must be installed! Of course with Anaconda there are no worries: you just have a button to click.
But unless you have a war machine at your disposal (lucky you!) you will need some power whenever you deal with high volumes. What's more, not everyone has GPUs at their disposal!
In short, a simple answer is to move towards a Cloud solution!
In this regard you will have several solutions available for doing Jupyter in cloud mode. How about using a 100% free solution? Well, I highly recommend Google Colaboratory.
Google Collaboratory in brief
Collaboratory is a Google incubation project created for collaboration (as the name suggests), training and research related to Machine Learning. Collaboratory is a Jupyter notebook environment that really doesn’t require any configuration and runs entirely in the Google cloud.
One constraint: Collaboratory notebooks are saved in Google Drive and can also be shared like Google Docs or Sheets documents. A GitHub gateway (maybe not for long) is also available.
Of course Collaboratory is available for free, you just need to have a Google account.
For more information on Collaboratory I invite you to go to the FAQ.
What’s really great is that you can use a GPU for free for 12 hours (continuous use)!
Getting started with Google Colaboratory
Ok, type in your browser the URL:
Quickly Google Collaboratory offers you either to create a notebook or to get one from Google Drive or Github, or to upload one directly from your computer:
You even have some interesting examples to consult to get the most out of the tool.
So create a notebook or do like me and get one from Google Drive. Good news: Jupyter notebooks are of course compatible with Google Colaboratory.
Getting started is quick for those who are already used to Jupyter. The environment is almost identical except for a few tricks and other small added features (and very useful for that matter).
Just a catch: the data files!
And yes we were in a sweet dream until then. If the world of the cloud does indeed make life easier on the installation side and more generally machine power, there remains a problem:
You have to be able to interact with the rest of the world, and therefore be able to read / write flat files at a minimum!
It seems to be just common sense indeed. Unfortunately this is not the most fun part of the solution. I told you your notebooks are stored in Google Drive. That’s one thing, now data can come from multiple places. In the context of this article, I suggest you place your files in Google Drive. We will see how to retrieve them in Google Colaboratory… because unfortunately it is not automatic!
You will find several examples and ways of doing it (via PyDrive, the API, etc.) in the examples provided by Google. Despite these examples I had a hard time with the file-retrieval phase. Here's how to do it easily with PyDrive.
Downloading a file from Google Drive -> Colaboratory
First of all I have a file (here sample1000.csv) placed in Google Drive:
To be able to retrieve this file I need its Google ID, here’s how:
- Right click on the file
- Choose Get Shareable Link from the drop-down menu:
- Copy the URL, but keep only the ID.
NB: we will only keep the ID portion, e.g.: 1Pl-GxINYFcXL2ASaQjo_BFFiRVIZUObB
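If you'd rather not copy the ID by hand, a small helper can extract it from the shareable link. The URL patterns here are assumptions based on the links Drive generates; adjust the regex if yours differ:

```python
# Helper sketch: pull the file ID out of a Google Drive shareable link.
# Assumes links of the form ...?id=<ID> or .../d/<ID>/view.
import re

def drive_file_id(url):
    m = re.search(r'(?:id=|/d/)([\w-]+)', url)
    return m.group(1) if m else None

print(drive_file_id('https://drive.google.com/open?id=1Pl-GxINYFcXL2ASaQjo_BFFiRVIZUObB'))
# → 1Pl-GxINYFcXL2ASaQjo_BFFiRVIZUObB
```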
Now back to our notebook, enter this code in a cell:
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials

# Google authentication
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)

# Download the file
file_id = '1Pl-GxINYFcXL2ASaQjo_BFFiRVIZUObB'
downloaded = drive.CreateFile({'id': file_id})
downloaded.GetContentFile('sample1000.csv')
Here it is, the file is now present in the Collaboratory environment. You have noticed … no need to specify the directory, the ID allows Google to find it wherever it is in your Drive.
You just have to read it as usual, with Pandas for example:
import pandas as pd

pd.read_csv('sample1000.csv').head()
Be careful: before anything else you will have to install the PyDrive library. This is done via the pip command directly in a cell of the notebook. For example, you can add this command line at the start of the previous code:
!pip install -U -q PyDrive
Uploading a file from Colaboratory -> Google Drive
Now that you can work with your data, you will probably want to get the results of your work (your predictions for example).
For that, this portion of code will help you:
# Create & upload a text file (from here to Google Drive)
uploaded = drive.CreateFile({'title': 'sample1000_resultat.csv'})
uploaded.SetContentString('File content here :-)')
uploaded.Upload()
print('Uploaded file with ID {}'.format(uploaded.get('id')))
And that’s what you get your foot in with this tool.
Please feel free to share your thoughts with me in the comments below.
Prefect Cloud: Up and Running
In this guide, we will look at a minimal and quick way to get Prefect Cloud flow deployments up and running on your local machine. We will write a simple flow, build its Docker storage, deploy to Prefect Cloud, and orchestrate a run with the Local Agent. No extra infrastructure required!
Prerequisites
In order to start using Prefect Cloud we need to set up our authentication. Head to the UI and retrieve a USER API token, which we will use to log into Prefect Cloud. We are also going to want to generate a RUNNER token and save it in a secure place, because we will use it later when creating our Local Agent. For information on how to create a RUNNER token visit the Tokens page.
Let's use the Prefect CLI to log into Cloud. Run this command, replacing $PREFECT_USER_TOKEN with the USER token you generated a moment ago:
$ prefect auth login --token $PREFECT_USER_TOKEN
Login successful!
Now you should be able to begin working with Prefect Cloud! Verify that you have a default project by running:
$ prefect get projects
NAME           FLOW COUNT    AGE         DESCRIPTION
Hello, World!  0             1 day ago
Write a Flow
To start, we're going to write a dummy flow that doesn't do anything. You can make your flow as complex as you want, but this is the flow we'll use for the tutorial!
from prefect import Flow

flow = Flow("my-flow")  # empty dummy flow
Deploying Flow to Cloud
Docker
Make sure that you have Docker installed and running in order for this step to work!
Now that the flow is written, we call the deploy function, which will build our flow's default Docker storage and then send some metadata to Prefect Cloud! Note that in this step no flow code or images ever leave your machine. We are going to deploy this flow to the Prefect Cloud project that we saw at the beginning of this guide.
flow.deploy(project_name="Hello, World!")
A few things are happening in this step. First your flow will be serialized and placed into a local Docker image. By not providing a registry url, the step to push your image to a container registry is skipped entirely. Once the image finishes building, a small metadata description of the structure of your flow will be sent to Prefect Cloud.
Production
The decision to not push our image to a container registry is entirely for the purposes of this tutorial. In an actual production setting you would be pushing the images that contain your flows to a registry so they can be stored and retrieved by agents running on platforms other than your local machine.
You should now be able to see the flow's Docker image on your machine if you would like by running
docker image list in your command line.
After this deployment is complete you should be able to see your flow now exists in Prefect Cloud!
$ prefect get flows
NAME     VERSION    PROJECT NAME    AGE
my-flow  1          Demo            a few seconds ago
Start Local Agent
In order to orchestrate runs of your flow we will need to boot up a Prefect Agent, which will look for flow runs to execute. This is where the RUNNER token you generated earlier becomes useful.
$ prefect agent start -t RUNNER_TOKEN ____ __ _ _ _ | _ \ _ __ ___ / _| ___ ___| |_ / \ __ _ ___ _ __ | |_ | |_) | '__/ _ \ |_ / _ \/ __| __| / _ \ / _` |/ _ \ '_ \| __| | __/| | | __/ _| __/ (__| |_ / ___ \ (_| | __/ | | | |_ |_| |_| \___|_| \___|\___|\__| /_/ \_\__, |\___|_| |_|\__| |___/ 2019-09-01 13:11:58,202 - agent - INFO - Starting LocalAgent 2019-09-01 13:11:58,203 - agent - INFO - Agent documentation can be found at 2019-09-01 13:11:58,453 - agent - INFO - Agent successfully connected to Prefect Cloud 2019-09-01 13:11:58,453 - agent - INFO - Waiting for flow runs...
Here we use the Prefect CLI to start a Local Agent. The RUNNER token that was generated earlier is specified through the -t (--token) argument.
Create a Flow Run
Now that the Local Agent is running we want to create a flow run in Prefect Cloud that the agent can pick up and execute! If you recall, we named our flow my-flow and deployed it to the Hello, World! project. We are going to use these two names in order to create the flow run with the Prefect CLI.
$ prefect run cloud -n my-flow -p "Hello, World!"
Flow Run ID: 43a6624a-c5ce-43f3-a652-55e9c0b20527
Our flow run has been created! We should be able to see this picked up in the agent logs:
2019-09-01 13:11:58,716 - agent - INFO - Found 1 flow run(s) to submit for execution. 2019-09-01 13:12:00,534 - agent - INFO - Submitted 1 flow run(s) for execution.
Now the agent has submitted the flow run and created a Docker container locally. We may be able to catch it in the act with docker ps, but our flow doesn't do anything so it may run too fast for us! (Another option: Kitematic is a great tool for interacting with containers while they are live.)
You can check on your flow run using Prefect Cloud:
$ prefect get flow-runs
NAME       FLOW NAME    STATE      AGE                START TIME
super-bat  my-flow      Success    a few seconds ago  2019-09-01 13:12:02
Congratulations! You ran a flow using Prefect Cloud! Now you can take it a step further and deploy a Prefect Agent on a high availability platform such as Kubernetes.
Using MQTT with Home Assistant
MQTT support was added to Home Assistant recently. The MQTT component will enable you to do all sort of things. Most likely you will use it to communicate with your devices. But Home Assistant doesn’t care where the data is coming from or is limited to real hardware as long as there is MQTT support. This means that it doesn’t matter if the data is coming from a human, a web service, or a device.
A great example is shown in a Laundry Automation post in this blog.
This post will give you a small overview of some other possibilities on how to use MQTT with Home Assistant.
Manual usage
The simplest but not the coolest way as a human to interact with a Home Assistant sensor is launching a command manually. Let’s create a “Mood” sensor. For simplicity Home Assistant and the MQTT broker are both running on the same host. The needed configuration snipplets to add to the
configuration.yaml file consists of two parts: one for the broker and one for the sensor.
mqtt: broker: 127.0.0.1 sensor: - platform: mqtt name: "Fabian's Mood" state_topic: "home-assistant/fabian/mood"
After a restart of Home Assistant the “Mood” sensor will show up in the frontend. For more details about the configuration of MQTT itself and the sensor, please refer to the MQTT component or the MQTT sensor documentation.
Now we can set the mood. The command-line tool mosquitto_pub, which is shipped with mosquitto, is used to send an MQTT message.
$ mosquitto_pub -h 127.0.0.1 -t "home-assistant/fabian/mood" -m "bad"
The Mood sensor
This is a really bad example. Don't do this in the real world, because you won't be able to create diagrams of historical data. Better to use a numerical value.
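If you do want a chartable mood sensor, one simple option is to map mood names onto a small numeric scale before publishing. The sketch below is illustrative only (the mood names, the scale, and the mood_to_level helper are mine, not part of Home Assistant):

```python
# Map mood names onto a numeric scale so Home Assistant can chart the history.
MOOD_SCALE = {"bad": 0, "ok": 1, "good": 2, "great": 3}

def mood_to_level(mood):
    """Return the numeric level for a mood name (case-insensitive)."""
    try:
        return MOOD_SCALE[mood.lower()]
    except KeyError:
        raise ValueError("unknown mood: %r" % mood)

# The printed number is what you would pass to mosquitto_pub -m
print(mood_to_level("good"))
```

The state topic stays the same; only the payload changes, e.g. mosquitto_pub -h 127.0.0.1 -t "home-assistant/fabian/mood" -m "2".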
Python MQTT bindings
The last section was pretty boring, I know. Nobody wants to send MQTT messages by hand if there is a computer on the desk. If you are playing the lottery this section is for you. If not, read it anyway because the lottery is just an example :-).
This example uses the Paho MQTT Python binding because those bindings should be available on the host where Home Assistant is running. If you want to use this example on another machine, please make sure that the bindings are installed (pip3 install paho-mqtt).
The first step is to add an additional MQTT sensor to the configuration.yaml file. The sensor will be called "Lottery" and the unit of measurement will be "No.".
- platform: mqtt
  name: "Lottery"
  state_topic: "home-assistant/lottery/number"
  unit_of_measurement: "No."
Don’t forget to restart Home Assistant to make the configuration active.
To play, we need numbers from 1 to 49 which can be marked on the ticket. Those numbers should be random and displayed in the Home Assistant frontend. The Python script below is another simple example of how to send MQTT messages from the command line; this time in a loop. For further information and examples please check the Paho MQTT documentation.
#!/usr/bin/python3
#
import time
import random

import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

broker = '127.0.0.1'
state_topic = 'home-assistant/lottery/number'
delay = 5

# Send a single message to set the mood
publish.single('home-assistant/fabian/mood', 'good', hostname=broker)

# Send messages in a loop
client = mqtt.Client("ha-client")
client.connect(broker)
client.loop_start()

while True:
    client.publish(state_topic, random.randrange(0, 50, 1))
    time.sleep(delay)
Every 5 seconds a message with a new number is sent to the broker and picked up by Home Assistant. By the way, my mood is much better now.
The Lottery sensor
With only a few lines of Python and an MQTT broker you can create your own "smart device" or send information to Home Assistant that you hadn't thought of. Of course this is not limited to Python. If there is an MQTT library available, the device can be used with Home Assistant.
Arduino
To get started with real hardware that is capable to send MQTT messages, the Arduino platform is an inexpensive way to do it. In this section an Arduino UNO with an Ethernet shield and a photo resistor is used. The photo resistor is connected to analog pin 0 (A0) and has an output from 0 to 1024.
The Arduino UNO with Ethernet shield and photo resistor
The MQTT client for the Arduino needs to be available in your Arduino IDE. Below you will find a sketch which could act as a starting point. Please modify the IP addresses, the MAC address, and the pin as needed and upload the sketch to your Arduino.
/* This sketch is based on the basic MQTT example by */
#include <SPI.h>
#include <Ethernet.h>
#include <PubSubClient.h>

#define DEBUG 1 // Debug output to serial console

// Device settings
IPAddress deviceIp(192, 168, 0, 43);
byte deviceMac[] = { 0xAB, 0xCD, 0xFE, 0xFE, 0xFE, 0xFE };
char* deviceId = "sensor01"; // Name of the sensor
char* stateTopic = "home-assistant/sensor01/brightness"; // MQTT topic where values are published
int sensorPin = A0; // Pin to which the sensor is connected to
char buf[4]; // Buffer to store the sensor value
int updateInterval = 1000; // Interval in milliseconds

// MQTT server settings
IPAddress mqttServer(192, 168, 0, 12);
int mqttPort = 1883;

EthernetClient ethClient;
PubSubClient client(ethClient);

void reconnect() {
  while (!client.connected()) {
#if DEBUG
    Serial.print("Attempting MQTT connection...");
#endif
    if (client.connect(deviceId)) {
#if DEBUG
      Serial.println("connected");
#endif
    } else {
#if DEBUG
      Serial.print("failed, rc=");
      Serial.print(client.state());
      Serial.println(" try again in 5 seconds");
#endif
      delay(5000);
    }
  }
}

void setup() {
  Serial.begin(57600);
  client.setServer(mqttServer, mqttPort);
  Ethernet.begin(deviceMac, deviceIp);
  delay(1500);
}

void loop() {
  if (!client.connected()) {
    reconnect();
  }
  client.loop();
  int sensorValue = analogRead(sensorPin);
#if DEBUG
  Serial.print("Sensor value: ");
  Serial.println(sensorValue);
#endif
  client.publish(stateTopic, itoa(sensorValue, buf, 10));
  delay(updateInterval);
}
The Arduino will send the value of the sensor every second. To use the data in Home Assistant, add an additional MQTT sensor to the configuration.yaml file.
- platform: mqtt
  name: "Brightness"
  state_topic: "home-assistant/sensor01/brightness"
  unit_of_measurement: "cd"
After a restart of Home Assistant the values of your Arduino will be available.
The Brightness sensor
I hope that this post has given you some ideas about using Home Assistant with MQTT. If you are working on a cool project that includes Home Assistant, please let us know.
"Fernando Pérez" <fperez528 at yahoo.com> wrote in message news:9u0jor$2li$1 at peabody.colorado.edu...
> Ursus Horibilis wrote:
>
> > Is there a way to force the Python run time system to ignore
> > integer overflows? I was trying to write a 32-bit Linear
> > Congruential Pseudo Random Number Generator and got clobbered
> > the first time an integer product or sum went over the 32-bit
> > limit.
>
> well, you can always put the relevant parts in a try:..except block, but I
> suspect that will kill the speed you need for a rng.

Yes, I did put the computation in a try-block and you're right, it killed the speed, but worse, it also didn't store the low-order 32 bits of the computation.

> If it's just an academic
> algorithm exercise

It's not. Such computations are routinely used in algorithms like PRNGs and cryptography.

> Out of curiosity, though: I've never written a rng myself. In C, an integer
> overflow silently goes to 0.

No. In C, an integer overflow silently throws away the high-order bits, leaving the low-order bits just as they would be if you had a larger field. As an illustration, assume we are working with 8-bit integers instead of 32-bit. Then here's what happens with integer overflows:

signed char a = 127; /* in binary = 01111111 */
a = a * 2;           /* Because of sign, a = (-2) */
a = a * 2;           /* Because of sign, a = (-4) */
a = a + 128;         /* Sign bit gets inverted, a = 124 */

> Are you relying on this 'feature' in your
> algorithm? I'm just curious as to whether asking to ignore an overflow
> without triggering an exception is a design feature or a flaw of the way you
> are doing things.

The algorithm relies on the ability to ignore the fact that overflow has occurred and continue with the computation using the low-order bits. This is not my algorithm; it's as old as computer science, and then some.
Here is how we implement one particular 32-bit Linear Congruential Pseudo Random Number Generator in C:

unsigned int Lcprng(unsigned int *seed)
{
    *seed = 29 * (*seed) + 13;
    return (*seed);
}

How do you do this in Python?
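One common way to reproduce that wrap-around behaviour in Python, where integers have arbitrary precision, is to mask the result down to 32 bits after each step. The constants 29 and 13 come from the C version above; the function name lcprng is just for illustration:

```python
MASK32 = 0xFFFFFFFF  # keep only the low-order 32 bits, like C's unsigned int

def lcprng(seed):
    """One step of the 32-bit LCG above: seed = 29 * seed + 13 (mod 2**32)."""
    return (29 * seed + 13) & MASK32

# Iterate the generator a few steps
seed = 1
for _ in range(3):
    seed = lcprng(seed)
    print(seed)
```

The & MASK32 plays the role that silent truncation plays in C: the high-order bits are thrown away and the low-order 32 bits are kept.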
I am doing a Nominatim OSM installation as per the steps given in. Currently I am on the following command:
./utils/setup.php --osm-file /usr/share/osmgeplanet/gujarat.osm.bz2 --all
The script is running and the last line of output is "Partitions", and I can see around 589 tables in the DB. But after one or two days there has been no progress. Has the script terminated in the background? How can I find the error? How can I check whether the script is still running in the background?
As I can see in setup.php, after writing the "Partitions" keyword as output, the script goes on to execute a large SQL script (around 1.5 MB of queries) in the postgresql shell. So how can I check if it is still running or not? If it terminated, what was the error?
Please guide.
EDIT
Output of select * from pg_stat_activity where datname = 'nominatim'; is:
datid | datname | procpid | usesysid | usename | current_query | waiting | xact_start | query_start | backend_start | client_addr | client_port
---------+-----------+---------+----------+---------+---------------+---------+------------+-------------------------------+-------------------------------+-------------+-------------
2393113 | nominatim | 4537 | 16385 | root | <idle> | f | | 2012-06-25 13:27:22.345186+02 | 2012-06-25 12:55:53.153181+02 | | -1
2393113 | nominatim | 5399 | 16385 | root | <idle> | f | | 2012-06-25 13:32:27.59859+02 | 2012-06-25 13:27:22.387147+02 | | -1
(2 rows)
So is my process currently running?
asked
25 Jun '12, 13:35
Ravi Kotwani
136●6●6●9
accept rate:
0%
edited
26 Jun '12, 06:49
You really should run the install inside a screen session. That way, if you disconnect (intentionally or not), your commands will continue to execute.
answered
25 Jun '12, 19:10
Norm1
126●4●5●8
accept rate:
0%
edited
05 May '16, 01:25
Harry Wood
9.3k●24●86●126
Normally, Nominatim shouldn't be silent for more than a few hours unless you try to import the entire planet on a slow machine. Even then, it should not hang in this particular line.
You can check what postgresql is doing with psql:
psql -d postgres -c "select * from pg_stat_activity where datname = 'nominatim'"
You can ignore the lines where the query is <IDLE>. But if you see another query there stuck in waiting state, something might be blocking Nominatim. If you can't find a reason, stop the import, restart postgresql and restart the import. If you still get stuck in the same place, file a bug report in trac or github.
answered
25 Jun '12, 17:21
lonvia
5.6k●2●53●80
accept rate:
40%
edited
26 Jun '12, 07:46
I can see 2 rows as the result of above query. Value of "current_query" field name is "<idle>". Value of "waiting" field is "f". For more details of result of above query, you can see my this edited question.
What do these columns mean? Is my process currently running in the backend or not?
You can ignore the lines with <idle>, those are just open connections to the database. If there are no other lines, then it is not a database access that is stuck. I edited my answer to make that more clear.
Finding another cause is more difficult. Just try to restart the import. It shouldn't take more than a few hours overall. If you get stuck again, ask over at the IRC channel.
Checksum drop of metadata traffic on isolated networks with DPDK
Bug Description
When an isolated network uses provider networks for tenants (meaning without virtual routers: no DVR or network node), metadata access occurs in the qdhcp ip netns rather than the qrouter netns.
The following options are set in the dhcp_agent.ini file:
force_metadata = True
enable_
VMs on the provider tenant network are unable to access metadata as packets are dropped due to checksum.
When we added the following in the qdhcp netns, VMs regained access to metadata:
iptables -t mangle -A OUTPUT -o ns-+ -p tcp --sport 80 -j CHECKSUM --checksum-fill
It seems this setting was recently removed from the qrouter netns [0] but it never existed in the qdhcp to begin with.
[0] https:/
Related LP Bug #1831935
See https:/
Brian,
Thanks for getting back to me. It seems this is a duplicate of LP Bug #1722584 [0]. And the explanation for my running into it is that we have not yet pushed your reversion into our Ubuntu packaging.
Marking this bug a duplicate of LP Bug #1722584
[0] https:/
We de-duplicated this bug as we have narrowed the focus. This is DPDK specific.
When using isolated provider networks AND DPDK, metadata is dropped due to an incorrect TCP checksum. Specifically when the provider interface is a DPDK interface.
It is temporarily mitigated by adding the iptables rule:
iptables -t mangle -A OUTPUT -o ns-+ -p tcp --sport 80 -j CHECKSUM --checksum-fill
However this is not sustainable as any restart of openvswitch will clear this setting.
Here is an example of a tcpdump in the qdhcp netns:
172.
23:36:43.932728 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
169.
Continuing with re-transmissions of the same. Note the incorrect cksum.
With the mangle rule in place:
172.
7], length 0
23:40:01.510745 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
169.
cr 2390126443,
23:40:01.510919 IP (tos 0x0, ttl 64, id 38572, offset 0, flags [DF], proto TCP (6), length 52)
172.
23:40:01.510974 IP (tos 0x0, ttl 64, id 38573, offset 0, flags [DF], proto TCP (6), length 229)
172.
77: HTTP, length: 177
GET /openstack HTTP/1.1
Host: 169.254.169.254
User-Agent: Cloud-Init/
Accept: */*
Connection: keep-alive
23:40:01.553043 IP (tos 0x0, ttl 64, id 62471, offset 0, flags [DF], proto TCP (6), length 52)
169.
23:40:02.036984 IP (tos 0x0, ttl 64, id 62472, offset 0, flags [DF], proto TCP (6), length 252)
169.
200: HTTP, length: 200
HTTP/1.1 200 OK
Date: Thu, 13 Jun 2019 23:40:02 GMT
2012-08-10
2013-04-04
2013-10-17
2015-10-15
2016-06-30
2016-10-06
2017-02-22
We can provi...
Further testing shows the provider network is irrelevant. With DPDK and an isolated network (qdhcp only no qrouter) either GRE or provider, any traffic initiated by the qdhcp netns, including response traffic, gets an incorrect TCP checksum.
This packet gets put on the "wire" and it is the VM that drops the packet due to an invalid TCP checksum.
In a DPDK isolated network environment from the qdhcp netns you can see this in action with an arbitrary netcat call:
nc -vz $VM_IP 73 (Any TCP port)
tcpdump on the VM side and you can see
22:06:31.424716 IP (tos 0x0, ttl 64, id 14532, offset 0, flags [DF], proto TCP (6), length 60)
172.
22:06:39.616633 IP (tos 0x0, ttl 64, id 14533, offset 0, flags [DF], proto TCP (6), length 60)
172.
22:06:55.744502 IP (tos 0x0, ttl 64, id 14534, offset 0, flags [DF], proto TCP (6), length 60)
172.
So the VM sees the response traffic from the qdhcp netns but drops it because the TCP checksum is invalid.
When we turn on DVR and create a virtual router (unused) the qrouter netns does not have this problem. I have not root caused why but there are a number of other iptables settings in the qrouter netns that are not in the qdhcp that may be required.
So what we are looking for is differences in the setup of the qdhcp netns from the qrouter netns.
Also, note the qrouter netns works with or without the neutron-
A bit more research. I tried to find differences in iptables rules between qdhcp and qrouter netns-es.
I went so far as to restore iptables-save of qrouter in the qdhcp with no change.
Any TCP connection (or response) in the qdhcp netns generates an invalid TCP checksum.
The issue appears to be that ovs_use_veth=True was set in dhcp_agent.ini. This causes the qdhcp namespace to be connected to the bridge via a veth pair. This appears to leave checksum offloading enabled on the device in the qdhcp namespace. Manually turning off tx-checksum-
Liam - thanks for the information. Since the default value for ovs_use_veth is False in the neutron tree, was it the installer tools that changed the value?
Hi Brian, yes it was, I'll be proposing a change to fix that today.
Maybe we should add some warning about it to our docs too?
Liam, can you clarify what change you're proposing and where?
I'm assuming setting ovs_use_veth=false in dhcp_agent.ini by default?
(and not disabling checksumming somehow).
Hi niveditasinghvi. Yes, I'm proposing ovs_use_veth be set to False rather than manually disabling checksumming. The changes are here: https:/
Reviewed: https:/
Committed: https:/
Submitter: Zuul
Branch: master
commit 051b58f566dd382
Author: Slawek Kaplonski <email address hidden>
Date: Wed Jun 19 14:27:18 2019 +0200
Update DPDK docs with note about using veth pairs
In case when ovs-dpdk is used together with ``ovs_use_veth`` config
option set to True, it cause invalid checksum on all packets send from
qdhcp namespace.
This commit adds short info about this limitation to ovs-dpdk config
guide.
Change-Id: I6237abab3d9e62
Related-Bug: #1832021
Reviewed: https:/
Committed: https:/
Submitter: Zuul
Branch: master
commit 7578326c592a81e
Author: Liam Young <email address hidden>
Date: Wed Jun 19 13:29:18 2019 +0000
Stop using veth pairs to connect qdhcp ns
veth pairs are currently being used to connect the qdhcp namespace
to the underlying bridge. This behaviour appears to only be needed
for old kernels with limited namespaces support (pre trusty).
Change-Id: If1f669de09e249
Closes-Bug: #1832021
Just a late note: the upgrade was applied to the customer in question and it indeed fixed the problem.
charm neutron-
David - please see the link that was the reason I reverted this change, https://lore.kernel.org/patchwork/patch/824819/ - that is basically saying this rule has no effect for TCP, it was only meant for UDP, and was finally changed to log a warning in the kernel.
There is probably something else going on here causing issues, possibly outside of neutron. | https://bugs.launchpad.net/neutron/+bug/1832021/+index | CC-MAIN-2019-43 | refinedweb | 1,172 | 71.14 |
Description
Class for one-degree-of-freedom mechanical parts with associated inertia (mass, or J moment of inertial for rotating parts).
In most cases these represent shafts that can be used to build 1D models of power trains. This is more efficient than simulating power trains modeled with full 3D ChBody objects.
#include <ChShaft.h>
Member Function Documentation
Method to allow deserialization of transient data from archives.
Method to allow de serialization of transient data from archives.
Reimplemented from chrono::ChPhysicsItem.
When this function is called, the speed of the shaft is clamped into limits posed by max_speed and max_wvel - but remember to put the shaft in the SetLimitSpeed(true) mode.
Number of coordinates in the interpolated field, ex=3 for a tetrahedron finite element or a cable, = 1 for a thermal problem, etc.
Implements chrono::ChLoadable.
Tell if the body is active, i.e. it is neither fixed to ground nor in sleep mode.
Increment all DOFs using a delta.
Default is sum, but may override if ndof_x is different than ndof_w, for example with rotation quaternions and angular w vel. This could be invoked, for example, by the BDF differentiation that computes the jacobians.
Implements chrono::ChLoadable.. shaft position by the 'qb' part of the ChVariables, multiplied by a 'step' factor.
pos+=qb*step
Reimplemented from chrono::ChPhysicsItem.
Initialize the 'qb' part of the ChVariables with the current value of shaft speed.
Note: since 'qb' is the unknown, this function seems unnecessary, unless used before VariablesFbIncrementMq()
Reimplemented from chrono::ChPhysicsItem.
Fetches the shaft speed from the 'qb' part of the Ch. | http://api.projectchrono.org/classchrono_1_1_ch_shaft.html | CC-MAIN-2019-30 | refinedweb | 265 | 57.37 |
An array containing the triangles of all submeshes.
When you assign a triangle array, the subMeshCount is set to 1. If you want to have multiple sub-Meshes, use subMeshCount and SetTriangles.
It is recommended to assign a triangle array after assigning the vertex array, in order to avoid out of bounds errors.
#pragma strict

// Builds a Mesh containing a single triangle with uvs.
// Create arrays of vertices, uvs and triangles, and copy them into the mesh.
public class meshTriangles extends MonoBehaviour {
    // Use this for initialization
    function Start() {
        gameObject.AddComponent.<MeshFilter>();
        gameObject.AddComponent.<MeshRenderer>();
        var mesh: Mesh = GetComponent.<MeshFilter>().mesh;
        mesh.Clear();

        // make changes to the Mesh by creating arrays which contain the new values
        mesh.vertices = [new Vector3(0, 0, 0), new Vector3(0, 1, 0), new Vector3(1, 1, 0)];
        mesh.uv = [new Vector2(0, 0), new Vector2(0, 1), new Vector2(1, 1)];
        mesh.triangles = [0, 1, 2];
    }
}
In my last diary[1], I mentioned that I was able to access screenshots exfiltrated by the malware sample. During the first analysis, there were approximately 460 JPEG files available. I continued to keep an eye on the host and the number slightly increased but not so much. My diary conclusion was that the malware looks popular seeing the number of screenshots but wait… Are we sure that all those screenshots are real victims? I executed the malware in my sandbox and probably other automated analysis tools were used to detonate the malware in a sandbox. This question popped up in my mind: How do have an idea about the ratio of automated tools VS. real victims?
I grabbed all the pictures in a local directory and wrote some lines of Python to analyze them. The main question is: how to detect if the screenshot has been taken in a sandbox or a real system? What we can check:
To « translate » this into Python, I used the classic library for working with images: Pillow[2]. extcolors is a small library that works directly on colors[3].
#!/usr/bin/python3
import extcolors
import PIL.Image
import os

folder = "screenshots"
for image in os.listdir(folder):
    img = PIL.Image.open(folder + "/" + image)
    width, height = img.size
    colors, pixel_count = extcolors.extract_from_image(img)
    if width <= 1024 and height <= 768:
        print("Possible sandbox: %s : Size: %dx%d" % (image, width, height))
    else:
        for c in colors:
            hexcolor = '%02x%02x%02x' % c[0]
            percentage = (c[1] / pixel_count) * 100
            if percentage > 93 and hexcolor < "f00000":
                print("Possible sandbox: %s : Color: %s (%6.2f%%)" % (image, hexcolor, percentage))
After some tests, I decided to "flag" a screenshot as coming from a sandbox if the screen resolution is at most 1024x768 or if a single dark color covers more than 93% of the image (to match the classic blue, black, or green backgrounds). Let's execute the script against the collected pictures:
Possible sandbox: 152114211370.jpg : Color: 000000 ( 94.25%)
Possible sandbox: 152117757583.jpg : Color: 000000 ( 98.20%)
Possible sandbox: 152127051988.jpg : Color: 000000 ( 95.09%)
Possible sandbox: 152178310978.jpg : Size: 1024x768
Possible sandbox: 152129950226.jpg : Size: 800x600
Possible sandbox: 152115117436.jpg : Size: 800x600
Possible sandbox: 152135496106.jpg : Color: c7b199 ( 99.23%)
Possible sandbox: 152119090512.jpg : Color: 000000 ( 99.37%)
Possible sandbox: 152129464868.jpg : Color: 2974c7 ( 94.60%)
Possible sandbox: 152153616774.jpg : Size: 800x600
Possible sandbox: 152137277200.jpg : Size: 800x600
Possible sandbox: 152157989841.jpg : Size: 1024x768
...
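To turn this output into the ratio the diary started with, the flagged lines can simply be tallied against the total number of screenshots. This helper is a quick sketch of that bookkeeping (the function name and report handling are mine, not part of the original script):

```python
def sandbox_ratio(report_lines, total_files):
    """Count 'Possible sandbox' lines and return (flagged, percentage)."""
    flagged = sum(1 for line in report_lines
                  if line.startswith("Possible sandbox"))
    return flagged, 100.0 * flagged / total_files

# Tiny fake report for illustration
report = [
    "Possible sandbox: a.jpg : Size: 800x600",
    "Possible sandbox: b.jpg : Color: 000000 ( 98.20%)",
]
print(sandbox_ratio(report, 10))  # -> (2, 20.0)
```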
Here are the results:
Some detected sandboxes:
[1]
[2]
[3]
Xavier Mertens (@xme)
Senior ISC Handler - Freelance Cyber Security Consultant
PGP Key
Artificial intelligence and neural networks are bringing image processing to a whole new level. Processes that used to take days or weeks can now be performed in a matter of hours. But some ambitious people, including Apriorit developers, are going even further and looking for ways to use neural networks that were originally created for image processing to solve video processing and video classification tasks.
In this article, we talk about using Inception V3 for image classification, adding new data classes to the pretrained neural network, retraining it, and then using the modified model for classifying video streams.
A convolutional neural network (CNN) is an artificial neural network architecture targeted at pattern recognition.
A few years later, Google built its own CNN called GoogLeNet, otherwise known as Inception V1, which became the winner of the 2014 ILSVRC with a top-5 error rate of 6.67 percent. The model was then improved and modified several times. As of today, there are four versions of the Inception neural network. In this article, we focus on the use of Inception V3, a CNN model for image recognition pretrained on the ImageNet dataset.
Inception V3 is widely used for image classification with a pretrained deep neural network. In this article, we discuss the use of this CNN for solving video classification tasks, using a recording of an association football broadcast as an example.
To make this task a bit easier, we first need to learn how to add new recognition classes to the Inception V3 network and train it specifically for these classes.
Transfer learning from Inception V3 allows retraining the existing neural network in order to use it for solving custom image classification tasks. To add new classes of data to the pretrained Inception V3 model, we can use the tensorflow-image-classifier repository. This repository contains a set of scripts to download the default version of the Inception V3 model and retrain it for classifying a new set of images using Python 3, Tensorflow, and Keras.
Since adding new data classes to the current neural network doesn’t take much time, you can run all of the development processes either in Google CoLab or on your own machine. If you choose the latter, it’s preferable to start with configuring the python-virtualenv tool.
Let’s move to the process of retraining Inception V3 for classifying new data. In our example below, we train our model to recognize the face of football player Lionel Messi in multiple images. In order to retrain the neural network, we need a dataset with classified images. In our case, we use this dataset containing 1,500 images with faces of three popular football players: Lionel Messi, Andrés Iniesta, and Neymar.
First, we need to create a training_dataset folder with all of the images separated into three subfolders according to the names of the athletes: Messi, Iniesta, and Neymar. As a result, we have the following file structure in a cloned tensorflow-image-classifier repository:
tensorflow-image-classifier
/
--- /training_dataset
| |
| --- /Messi
| | fcbleomessi2.jpg
| | fcbleomessi5.jpg
| | …
| |
| --- /Iniesta
| andresiniesta62.jpg
| andresiniesta63.jpg
| …
| |
| --- /Neymar
| fcbneymar3.jpg
| fcbneymar5.jpg
| …
Then we exclude two images (as test data) from each subfolder containing the athletes' photos and move them to the tensorflow-image-classifier folder so they won't be used for training. Now everything is set up for retraining our Inception V3 model.
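Moving the held-out test images by hand is fine for three classes, but the same split can be scripted. The sketch below is a generic helper, not part of the tensorflow-image-classifier repository (make_test_split and the folder names are my own); it moves the first n images of every class subfolder into a separate test directory:

```python
import os
import shutil

def make_test_split(train_dir, test_dir, n_per_class=2):
    """Move the first n images of every class subfolder from train_dir to test_dir."""
    for class_name in sorted(os.listdir(train_dir)):
        src = os.path.join(train_dir, class_name)
        if not os.path.isdir(src):
            continue  # skip stray files next to the class folders
        dst = os.path.join(test_dir, class_name)
        os.makedirs(dst, exist_ok=True)
        for filename in sorted(os.listdir(src))[:n_per_class]:
            shutil.move(os.path.join(src, filename),
                        os.path.join(dst, filename))
```

Running make_test_split("training_dataset", "test_dataset") would reproduce the manual step above for any number of classes.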
Now we go to the tensorflow-image-classifier folder and launch the ./train.sh script. To help you better understand how it works, here’s a detailed scheme of the Inception model:
The ./train.sh script loads the already trained Inception V3 model, deletes the upper layer, and then trains a new layer on the data classes that we added with images of the football players' faces. The whole process of retraining the model consists of two stages: first, so-called bottleneck values (the outputs of the layer just before the final one) are calculated for every image in the dataset; second, the new top layer is trained on those bottleneck values for our classes.
The result of running the ./train.sh script looks something like this:
...
2019-02-26 19:39:54.605909: Step 490: Train accuracy = 86.0%
2019-02-26 19:39:54.605959: Step 490: Cross entropy = 0.474662
2019-02-26 19:39:54.660586: Step 490: Validation accuracy = 80.0% (N=100)
2019-02-26 19:39:55.161398: Step 499: Train accuracy = 90.0%
2019-02-26 19:39:55.161448: Step 499: Cross entropy = 0.480936
2019-02-26 19:39:55.217443: Step 499: Validation accuracy = 79.0% (N=100)
Final test accuracy = 78.1% (N=151)
The final test accuracy of our classification model is 78.1 percent. This result is good considering the number of images in the dataset. The level of training accuracy can be increased if we increase the number of images used for training the model.
Now we need to check this model on the images we excluded from the training dataset earlier. In order to do this, we use a script from the classify.py repository.
python classify.py fcbneymar64.jpg
neymar (score = 0.68375)
iniesta (score = 0.17061)
messi (score = 0.14564)
python classify.py fcbleomessi30.jpg
messi (score = 0.63149)
iniesta (score = 0.30507)
neymar (score = 0.06344)
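The ranking printed above is simply the class probabilities sorted best-first. Stripped of the TensorFlow plumbing, that step of classify.py boils down to something like this (top_scores is an illustrative name, not a function from the repository):

```python
def top_scores(labels, predictions):
    """Pair each label with its score and sort best-first, like classify.py's output."""
    return sorted(zip(labels, predictions), key=lambda pair: pair[1], reverse=True)

labels = ["messi", "iniesta", "neymar"]
predictions = [0.63149, 0.30507, 0.06344]
for name, score in top_scores(labels, predictions):
    print("%s (score = %.5f)" % (name, score))
```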
As we can see, our retrained CNN can accurately recognize the faces of our athletes. Now it’s time to shift our focus to using Inception V3 for classifying video streams.
After learning how to classify separate images, it’s time to classify a video stream. As you know, a video stream is basically a set of images in a specific format, compressed with a video codec. So the process of recognizing objects in a video stream comes down to breaking the stream into separate images and applying an object recognition algorithm to them. In our case, we’ll perform image recognition using Inception V3.
For instance, we can try to separate a commercial from the video stream of a football game.
First, we need to create a training dataset. We'll use the FFmpeg utility for Linux to cut the video into pieces:

ffmpeg -i input.mp4 -vcodec copy -acodec copy -ss 00:03:00 -t 00:01:20 mixed.mp4

Here the -ss 00:03:00 option sets the start position of the cut and -t 00:01:20 sets its duration.
With the help of this utility, we extract three short video clips. The first video clip was cut from the beginning of the game and contains an advertisement. The second clip is a five-minute recording of the game only. And, finally, the third video includes the final minutes of the first period and the beginning of the commercials.
What we need to do next is break down the first two videos into a series of images for further retraining of our Inception V3 model. We can also use the FFmpeg utility to cut the video frame by frame:
ffmpeg -i Football.mp4 -filter:v fps=5/1 football_%0d.jpeg
Here the fps=5/1 filter extracts five frames per second, and %0d is the frame number placeholder in the output filename. We run this command for both clips, so that the frames end up named football_*.jpeg and commercial_*.jpeg.
/
--- /training_dataset
| |
| --- /football
| | football_1.jpeg
| | football_2.jpeg
| | …
| |
| --- /commercial
| commercial_1.jpeg
| commercial_2.jpeg
| …
We place the frames from the game and the commercials shown before the match (our mini clip #1) into the football and commercial folders, respectively.
Then we launch the ./train.sh script to retrain the network with the new two classes and get the following result:
./train.sh
...
2019-02-27 20:04:06.615386: Step 499: Train accuracy = 99.0%
2019-02-27 20:04:06.615436: Step 499: Cross entropy = 0.042714
2019-02-27 20:04:06.686268: Step 499: Validation accuracy = 99.0% (N=100)
Final test accuracy = 98.0% (N=293)
As you can see, we got a great final test accuracy: 98 percent. Such a high accuracy level is quite predictable, as we used a dataset with more images than in the previous example and had only two classes for image recognition. However, these numbers don’t reflect the whole picture.
Now let’s try to apply the trained model to recognize a commercial in a video stream. In order to do this, we modified the classify.py script from the tensorflow-image-classifier so that our new script, classify_video.py, is capable of:
The result is saved to the recognized.avi video at 10 frames per second. We save it as a slow-motion video on purpose, to make it easier to follow the change of the classification results.
Here’s the full code for the modified classify_video.py script:
import tensorflow as tf
import sys
import os
import cv2
import math
# Disable tensorflow compilation warnings
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
import tensorflow as tf
label_lines = [line.rstrip() for line
in tf.gfile.GFile("tf_files/retrained_labels.txt")]
with tf.gfile.FastGFile("tf_files/retrained_graph.pb", 'rb') as f:
graph_def = tf.GraphDef() ## The graph-graph_def is a saved copy of a TensorFlow graph;
graph_def.ParseFromString(f.read()) #Parse serialized protocol buffer data into variable
_ = tf.import_graph_def(graph_def, name='')
# import a serialized TensorFlow GraphDef protocol buffer,
extract objects in the GraphDef as tf.Tensor
video_path = sys.argv[1]
writer = None
# classify.py for video processing.
# This is the interesting part where we actually changed the code:
#############################################################
with tf.Session() as sess:
video_capture = cv2.VideoCapture(video_path)
i = 0
while True: # fps._numFrames < 120
frame = video_capture.read()[1] # get current frame
frameId = video_capture.get(1) #current frame number
i = i + 1
cv2.imwrite(filename="screens/"+str(i)+"alpha.png", img=frame); # write frame image to file
image_data = tf.gfile.FastGFile
("screens/"+str(i)+"alpha.png", 'rb').read() # get this image file
softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
predictions = sess.run(softmax_tensor, \
{'DecodeJpeg/contents:0': image_data}) # analyse the image
top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]
pos = 1
for node_id in top_k:
human_string = label_lines[node_id]
score = predictions[0][node_id]
cv2.putText(frame, '%s (score = %.5f)' % (human_string, score),
(40, 40 * pos), cv2.FONT_HERSHEY_DUPLEX, 0.8, (255, 255, 255))
print('%s (score = %.5f)' % (human_string, score))
pos = pos + 1
print ("\n\n")
if writer is None:
# initialize our video writer
fourcc = cv2.VideoWriter_fourcc(*"XVID")
writer = cv2.VideoWriter("recognized.avi", fourcc, 10,
(frame.shape[1], frame.shape[0]), True)
# write the output frame to disk
writer.write(frame)
cv2.imshow("image", frame) # show frame in window
cv2.waitKey(1) # wait 1ms -> 0 until key input
writer.release()
video_capture.release()
cv2.destroyAllWindows()
For this script to successfully execute, we need to add the screens folder to the directory with the script. This folder will contain all of the recognized frames.
Now let’s launch our script:
python classify_video.py mixed.mp4
Pay attention to the classification results in the frames from 2:24 to 2:27 in the video. Every time there’s a football field on the screen, the network classifies the frame as a football game.
In the end, we’ll get a video file with the classification results displayed in the upper left corner.
You might also notice strange changes in the results from 2:15 to 2:18, when someone’s face is shown up-close. This is the result of training the model on the game screens with close-ups of coaches or people on the stands. Unfortunately, this problem with recognition of a separate image can’t be solved with Inception V3, as to solve it you need to somehow remember the whole video sequence. But we hope to find a solution soon and will surely share it when we do.
Our experts are actively exploring the latest developments in the field of artificial intelligence and are constantly sharing their experiences here.
The possibilities of deep learning algorithms and modern CNNs aren’t limited to classifying separate images. CNNs can be effectively applied to recognizing patterns in video streams as well. In this article, we showed you an easy way to use the pretrained Inception V3 neural network for video classification.
In our example, we successfully retrained the existing Inception V3 model, added new classes of data to it, and used the modified network to classify video clips. However, we also faced a new challenge in the process: recognizing a video sequence, as Inception V3 only works with separate images.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) | https://codeproject.freetls.fastly.net/Articles/1366433/Using-Modified-Inception-V3-CNN-for-Video-Processi?pageflow=FixedWidth | CC-MAIN-2021-43 | refinedweb | 2,012 | 58.89 |
>>>>> "Mark" == Mark Wielaard <mark@klomp.org> writes: Mark> 2001-07-23 Mark Wielaard <mark@klomp.org> Mark> * HACKING: add description on updating namespace Please check this in. Mark> I also noticed that this file only exists on the main branch, Mark> may I also add it to the gcc-3_0-branch? Sure. Mark> Is there a reason to do it this way for the java.lang, java.io Mark> and java.util (sub)packages, but not for the other packages? We chose these packages on the theory that they were the most common ones, and most likely to be used in CNI code. It was an arbitrary sort of decision. We can always extend it. However, libgcj and gcjh collude in this matter. So if we change one we must change the other. Tom | https://gcc.gnu.org/pipermail/java/2001-July/007230.html | CC-MAIN-2022-21 | refinedweb | 134 | 86.5 |
How Square makes its SDKs
At Square we leverage the OpenAPI standard, Swagger Codegen & GitHub to build and deliver our client SDKs in a scalable way.
The developer platform team at Square is a little different than most. Rather than build separate APIs for our developer products, we focus on exposing the APIs that our first-party product use to create a seamless experience for developers. We have many upstream teams that are stakeholders in our external facing APIs, constantly wanting to expose new features and make improvements. This was an important factor when deciding how we should build our SDKs; we did not want our team to be a bottle-neck, where product teams would have to wait on us to finish updating SDKs before releasing new features. The primary way we avoid that is with SDK generation.
SDK Generation
Instead of writing each of our SDKs by hand (which would not only be time consuming, error prone, and slow down the release of new features into the SDKs) we use a process that relies heavily on SDK generation. There are many flavors of SDK generation out there, so if you are looking into adopting a similar method for your SDKs, be sure to look at a range of the possibilities and find the right one for you. Our preferred flavor uses the OpenAPI specification to define our API endpoints and Swagger Codegen to programmatically generate the code for the SDKs.
API specification
We use the OpenAPI standard to define our APIs. For us, this is a
JSON file that defines the url, what kind of HTTP request to make, as well as what kind of information to provide, or expect to get back for each our API endpoints. Our specification is made up of 3 main parts: general info/metadata, paths, and models.
General info/metadata
This part of the spec contains some of the descriptive information for the API overall, like where you can find licensing information, or who to contact for help.
"info": { "version": "2.0", "title": "Square Connect API", "description": "Client library for accessing the Square Connect APIs", "termsOfService": "", "contact": { "name": "Square Developer Platform", "email": "developers@squareup.com", "url": "" }, "license": { "name": "Apache 2.0", "url": "" } },
Paths
These describe the individual endpoints (or URL paths) for the API. It describes what kind of
HTTP request to make, how it should be authorized, and what kind of information you should add to the request, and what you should expect to get back. In the example below, you can see that it is a
POST request, there are a couple required parameters in the URL, another one in the body, and you get back a
CreateRefundResponse object.
"/v2/locations/{location_id}/transactions/{transaction_id}/refund": { "post": { "tags": [ "Transactions" ], "summary": "CreateRefund", "operationId": "CreateRefund", "description": "Initiates a refund for a previously charged tender.\n\nYou must issue a refund within 120 days of the associated payment. See\n(this article)[] for more information\non refund behavior.", "x-oauthpermissions": [ "PAYMENTS_WRITE" ], "security": [ { "oauth2": [ "PAYMENTS_WRITE" ] } ], "parameters": [ { "name": "location_id", "description": "The ID of the original transaction\u0027s associated location.", "type": "string", "in": "path", "required": true }, { "name": "transaction_id", "description": "The ID of the original transaction that includes the tender to refund.", "type": "string", "in": "path", "required": true }, { "name": "body", "in": "body", "required": true, "description": "An object containing the fields to POST for the request.\n\nSee the corresponding object definition for field details.", "schema": { "$ref": "#/definitions/CreateRefundRequest" } } ], "responses": { "200": { "description": "Success", "schema": { "$ref": "#/definitions/CreateRefundResponse" } } } } },
Models
The models describe the different objects that the API interacts with. They are used primarily for serializing the JSON response from the API into native objects for each language. In this one,
CreateRefundResponse, you can see it has a couple other models that it is comprised of, as well as a description and even an example of what the response looks like.
"CreateRefundResponse": { "type": "object", "properties": { "errors": { "type": "array", "items": { "$ref": "#/definitions/Error" }, "description": "Any errors that occurred during the request." }, "refund": { "$ref": "#/definitions/Refund", "description": "The created refund." } }, "description": "Defines the fields that are included in the response body of\na request to the [CreateRefund](#endpoint-createrefund) endpoint.\n\nOne of `errors` or `refund` is present in a given response (never both).", "example": { "refund": { "id": "b27436d1-7f8e-5610-45c6-417ef71434b4-SW", "location_id": "18YC4JDH91E1H", "transaction_id": "TRANSACTION_ID", "tender_id": "TENDER_ID", "created_at": "2016-02-12T00:28:18Z", "reason": "some reason", "amount_money": { "amount": 100, "currency": "USD" }, "status": "PENDING" } }, "x-sq-sdk-sample-code": { "python": "/sdk_samples/CreateRefund/CreateRefundResponse.python", "csharp": "/sdk_samples/CreateRefund/CreateRefundResponse.csharp", "php": "/sdk_samples/CreateRefund/CreateRefundResponse.php", "ruby": "/sdk_samples/CreateRefund/CreateRefundResponse.ruby" } },
You can see the most recent version of our specification to date version in our Connect-API-Specification repo on GitHub.
The specification is an important part of our generation process, as it is the source of truth about how our APIs work. When other teams want to expand their APIs, release new APIs, or just increase the clarity of a model description, they can make an edit to this single file and have their changes propagate to all of the client SDKs. We actually generate most of our specification from the files that describe the internal service to service communication for even more process automation and easier changes.
Swagger Codegen
Now that we have the specification for our APIs ready to go, how do we turn it into a client facing SDK? The answer is Swagger Codegen. Swagger Codegen is an open source project supported by Smartbear (just like the other Swagger tools) that applies your Open API specification to a series of templates for SDKs in different languages with a little configuration sprinkled in.
Templates
The templates use a language called mustache to define their parts, and for the most part look and read like a file in the desired language. The one below is part of the templates for out PHP SDK. You can see that useful things like code comments are auto generated as well, so that the end SDK can have built in documentation, snippets & more.
<?php {{#models}} {{#model}} /** * NOTE: This class is auto generated by the swagger code generator program. * * Do not edit the class manually. */ namespace {{modelPackage}}; use \ArrayAccess; /** * {{classname}} Class Doc Comment * * @category Class * @package {{invokerPackage}} * @author Square Inc. * @license Apache License v2 * @link */ class {{classname}} implements ArrayAccess { ...
Configuration
These are actually much less complex, and are essentially small
json files that describe aspects of your SDK, generally around how it fits into the relevant package manager.
{ "projectName": "square-connect", "projectVersion": "2.8.0", "projectDescription": "JavaScript client library for the Square Connect v2 API", "projectLicenseName": "Apache-2.0", "moduleName": "SquareConnect", "usePromises": true, "licenseName": "Apache 2.0" }
Because the Codegen project is so active, we actually check in a copy of our template files for each of our supported SDKS, and pin to specific Codegen versions to make sure that we don’t accidentally push breaking changes to our users as a result of all the automation. You can see the all of the templates and config files that power the {Java, PHP, C#, Python, Ruby, JavaScript} SDKs in the same repository as our specification file: Connect-API-Specification.
Other Ideas
Our process has evolved quite a bit, with tools like Travis CI making big impacts in the process. You can use CI & CD tools to make the process more automated but be sure that you have a good suite of test coverage to help prevent unexpected changes from creeping into your released code.
Hope your enjoyed the look into our SDK generation process. You can also see a recorded talk I gave at DevRelCon about the subject here. If you want to learn more about our SDKs, or other technical aspects of Square, be sure to follow on this blog, our Twitter account, and sign up for our developer newsletter! | https://developer.squareup.com/blog/how-square-makes-its-sdks/ | CC-MAIN-2019-26 | refinedweb | 1,296 | 51.38 |
Brian Hook wrote: >> and found the solution of wrapping the headers in extern "C" {...}. > > You need to wrap the headers when included by all the .c files as way. > The easiest thing to do is put the extern "C" in lua.h itself. A still more clean way to do this is to use some kind of home lua header file, like the following one and link to it through all your application instead of using lua.h directly : #ifndef MY_LUA_H #define MY_LUA_H extern "C" { #include "lua.h" } #endif /* MY_LUA_H */ If it still doesn't work then the brute force solution provided by Brian Hook above is the only one. The fact is that the headers you generate or use CANNOT link to lua.h directly. If you use third party library then this may happen without you knowing it ! The reason to use the clean way is to be able to update the lua library in the easiest way possible as we are humans and make mistakes : you will probably forget to reinsert the guardians in lua.h when updating it locally. If you project is open source then it is one more reason to do it in a clean way ! >> unresolved external symbol "struct lua_State * __cdecl >> lua_open(void)" (?lua_open@@YAPAUlua_State@@XZ) > > This indicates that something is still trying to link against it in > C++ form. > > Brian Chucky | http://lua-users.org/lists/lua-l/2004-03/msg00240.html | CC-MAIN-2018-30 | refinedweb | 229 | 72.46 |
Load an acoustic processing dataset into a running APX module
#include <sys/asoundlib.h> int snd_pcm_load_apx_dataset( snd_pcm_t *pcm, uint32_t apx_id, const char *dataset, int *ap_status );
The snd_pcm_load_apx_dataset() function loads an acoustic processing data set into a running APX. A .conf key apx_dataset_qcf_dataset-name is created to look up the acoustic data set filepath, where apx is the type of APX module and dataset-name is the string specified by the dataset argument. For example, for SFO and a dataset name foo: sfo_dataset_qcf_foo.
After it is set, the data set is loaded each time the APX module is started until a new data set is applied.
Because this function is only used with acoustic (SFO, SPM) APX modules, it must be called on the playback PCM device.
EOK on success, or a negative errno value if an error occurred.
This function can also return the return values of devctl() (see devctl() in the QNX Neutrino C Library Reference).
One of the following causes:
QNX Neutrino
This function is not thread safe if the handle (snd_pcm_t) is used across multiple threads. | https://www.qnx.com/developers/docs/7.1/com.qnx.doc.neutrino.audio/topic/libs/snd_pcm_load_apx_dataset.html | CC-MAIN-2022-33 | refinedweb | 179 | 60.24 |
import "gioui.org/io/event"
Package event contains the types for event handling.
The Queue interface is the protocol for receiving external events.
For example:
var queue event.Queue = ... for _, e := range queue.Events(h) { switch e.(type) { ... } }
In general, handlers must be declared before events become available. Other packages such as pointer and key provide the means for declaring handlers for specific event types.
The following example declares a handler ready for key input:
import gioui.org/io/key ops := new(op.Ops) var h *Handler = ... key.InputOp{Key: h}.Add(ops)
Event is the marker interface for events.
Key is the stable identifier for an event handler. For a handler h, the key is typically &h.
type Queue interface { // Events returns the available events for a // Key. Events(k Key) []Event }
Queue maps an event handler key to the events available to the handler.
Package event is imported by 15 packages. Updated 2020-03-28. Refresh now. Tools for package owners. | https://godoc.org/gioui.org/io/event | CC-MAIN-2020-16 | refinedweb | 164 | 71.31 |
Depending on your personal preference, or perhaps the needs of the moment, you can use a C#-style "curly brace" package syntax in your Scala applications, instead of the usual Java-style. As a quick example of what this looks like, here are a couple of simple package names and classes:
package orderentry { class Order class LineItem class Foo } package customers { class Customer class Address class Foo package database { // this is the "customers.database" package/namespace class CustomerDao // this Foo is different than customers.Foo or orderentry.Foo class Foo } }
In this example I've defined three package namespaces:
- orderentry
- customers
- customers.database
As you can see from the code, I also have three classes named "Foo", and they are different classes in different packages. I have:
- orderentry.Foo
- customers.Foo
- customers.database.Foo
Of course this isn't any different than what you can do in Java, but this "curly brace" package syntax is very different than what you can write in Java, so I wanted to share this alternate syntax.
Packages inside packages
As another simple example, you can include one Scala package inside another using curly braces, like this:
package tests package foo { package bar { package baz { class Foo { override def toString = "I'm a Foo" } } } } object PackageTests { def main(args: Array[String]) { // create an instance of our Foo class val f = new foo.bar.baz.Foo println(f.toString) } }
As you can see, after creating the Foo class in the foo.bar.baz package, we can create an instance of it as usual, like this:
val f = new foo.bar.baz.Foo
I hope these Scala packaging examples have been helpful. | https://alvinalexander.com/scala/scala-csharp-style-package-syntax-examples-curly-braces | CC-MAIN-2020-05 | refinedweb | 275 | 50.36 |
Twilio Autopilot Python Quickstart
With Twilio's Autopilot, you can build messaging and voice bots powered by Python Python installed properly next.
Install Python
If you’ve gone through one of our other Python Quickstarts already and have Python installed, you can skip this step and get straight to creating your first bot.
To build and manage your bot with Python you'll need to have a recent version of Python (3.5 or higher).
Checking your Python installation
If you are on a Mac or a Linux/Unix machine, you probably already have Python
installed. On Windows, you typically would not have Python installed out of the box.
Try running the following command from the command line to check your installation:
python --version
You should see a response similar to this:
Python 3.7.5
Any version of Python above 3.5 should work well for this project.
Installing Python on Windows
Windows users can follow this excellent tutorial for installing Python on Windows, or follow the instructions from Python's documentation.
Installing Python on OS X or Linux
We recommend installing Python following the guide for Mac OS X or your Linux distribution. Python with Flask. The task will still say a brief expression and then listen for user input.
You will need to start by creating a new directory for your application. Inside that directory, create two files -
requirements.txt and
app.py. With the
requirements.txt, we will add the Python web framework Flask to our project. Our application code will go into the
app.py Python file.
The
requirements.txt should look like this:
Flask
To start with, your
app.py file can simply be:
from flask import Flask app = Flask(__name__) @app.route('/') def home(): return 'Hello World!'
We will add more to this Python file as we go.
Before doing anything else, install the Flask package with the following command:
python -m pip install -r requirements.txt
If you would like, test your Python installation by running this command:
flask run
And then when you visit this URL: - you should see Hello World in your web browser.
Instead of providing JSON in the Autopilot console, we are going to serve JSON from our Python server. For that purpose, we are going to use Flask's
send_file method to send a file named
dynamicsay.json.
Create a file named
dynamicsay.json in the same directory as the
app.py file, and then copy the following contents into the file:
{ "actions": [ { "say": "What clothes would you like to order?" }, { "listen": true } ] }
In your
app.py file, add the following code:
from flask import send_file @app.route('/dynamicsay', methods=['POST']) def dynamic_say(): return send_file('dynamicsay.json')
Next, start up ngrok to get a public URL for your server. If you're new to ngrok, see our documentation on How to Install ngrok.
ngrok http 5000 Python
app.py for the
collect path that will generate the JSON for the
Say Action to redirect to our
Collect flow. This code parses the answers out of the Memory parameter (which is a JSON construct), and then echoes them back to the user in a confirmation message.
import json from flask import jsonify, request @app.route('/collect', methods=['POST']) def collect(): memory = json.loads(request.form.get('Memory')) answers = memory['twilio']['collected_data']['collect_clothes_order']['answers'] first_name = answers['first_name']['answer'] clothes_type = answers['clothes_type']['answer'] num_clothes = answers['num_clothes']['answer'] message = ( f'Ok {first_name}. Your order for {num_clothes} {clothes_type} is now confirmed.' f' Thank you for ordering with us' ) return jsonify(actions=[{'say': {'speech':. | https://www.twilio.com/docs/autopilot/quickstart/python-quickstart | CC-MAIN-2021-10 | refinedweb | 591 | 67.45 |
mobileOK Basic is the lesser of two levels of claim, the greater level being mobileOK Pro, described separately. Claims to be W3C mobileOK conformant are represented using Description Resources (see [POWDER]), also described separately.
mobileOK Basic primarily assesses basic usability, efficiency and interoperability. This is an updated Candidate Recommendation of mobileOK Basic Tests 1.0, replacing the first Candidate Recommendation published on November 13, with a correction in the mobileOK User-Agent String. The Working Group does not expect substantive changes to the document from this point.
A complete Disposition of Comments record for this document is available.
In order for this document to transition from Candidate Recommendation status, the group has recorded the following exit criteria:
The group does not expect to request transition to Proposed Recommendation before 10 January.
1.1.3 Out of Scope
1.1.4 Beyond mobileOK
1.2 Applicability
1.3 Claiming mobileOK conformance
2 Conformance
2.1 Use of Terms must, should etc.
2.2 Validity of the Tests
2.3 Testing Outcomes
2.4 Conduct of Tests
2.4.1 Order of Tests
2.4.2 HTTP Request
2.4.3 HTTP Response
2.4.4 Meta http-equiv Elements
2.4.5 CSS Style
2.4.6 Included Resources
2.4.7 Linked Resources
2.4.8 Validity
mobileOK Basic is a scheme for assessing whether Web resources (Web content) can be delivered in a manner that is conformant with Mobile Web Best Practices [BestPractices] to a simple and largely hypothetical mobile user agent, the Default Delivery Context.
This document describes W3C mobileOK Basic tests for delivered content, and describes how to emulate the DDC when requesting that content.
mobileOK Basic is the lesser of two levels of claim, the greater level being mobileOK Pro, described separately. Claims to be W3C mobileOK Basic conformant are represented using Description Resources (see [POWDER]), also described separately. mobileOK Basic conformance says nothing about whether other guidelines for development of Web content (such as [WCAG]) have been followed.
The (full) mobileOK Pro conformance still does not suggest that the most sophisticated mobile user experience possible is available. It implies that a functional experience is available which adheres even more closely to the Mobile Web Best Practices.
Like mobileOK Basic, mobileOK Pro conformance says nothing about whether other guidelines for development of Web content (such as [WCAG]) have been followed.
mobileOK Pro tests will be described separately.
Some best practices, like TESTING, are advisable but do not meaningfully translate into concrete tests, whether their outcome is machine- or human-verifiable.
The tests assess whether the content can be provided in a way that achieves basic usability, efficiency, and interoperability with mobile devices. The tests should not be understood to assess thoroughly whether the content has been well-designed for mobile devices. Content is assessed as delivered, when accessed as described in 2.4.2 HTTP Request.
The details of the mechanism for claiming mobileOK conformance will be described separately.
The terms must, should etc. are used with the meanings defined in [RFC2119]. In any test, PASS is achieved if and only if there are no FAILs. No specific PASS outcome is defined for any test.
Tests may also generate a number of informative warnings, which do not affect whether a test has PASSed or not.
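The outcome model above is simple enough to sketch in code. The sketch below is non-normative and its class and method names are invented for illustration; it captures only the rule that PASS is achieved if and only if no FAIL is recorded, and that warnings never change the outcome.

```python
# Non-normative sketch of the outcome model: a test PASSes if and only if
# it records no FAILs; informative warnings never affect the outcome.
class TestResult:
    def __init__(self, name):
        self.name = name
        self.fails = []      # one entry per FAIL condition met
        self.warnings = []   # informative only

    def fail(self, message):
        self.fails.append(message)

    def warn(self, message):
        self.warnings.append(message)

    @property
    def outcome(self):
        return "PASS" if not self.fails else "FAIL"

result = TestResult("PAGE_TITLE")
result.warn("title is unusually long")
assert result.outcome == "PASS"   # a warning alone does not cause failure
result.fail("no title element present")
assert result.outcome == "FAIL"
```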
The following HTTP request headers inform the server that it should deliver content that is compatible with the Default Delivery Context.
Use the HTTP GET method when making requests, except for 3.10 LINK_TARGET_FORMAT where the HEAD method may be used (see 2.4.7 Linked Resources for a discussion of the POST method).
Include a User-Agent header indicating the default delivery context by sending exactly this header:
User-Agent: W3C-mobileOK/DDC-1.0
Do not include cookie related headers.
Include authentication information if required (see 2.4.3 HTTP Response). Once authentication information has been included in a request, subsequent requests for the same realm must include authentication information as described in Section 2 and under "domain" in Section 3.2.1 of [RFC2617].
Implementations must support URIs with both http and https scheme components.
Note: To allow for self-signature of certificates during testing, the signatory of a certificate should not be checked.
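As an illustration only, the request described in this section can be composed with Python's standard library. The sketch models just the User-Agent value given above and the no-cookies rule; everything else a real checker needs (the Accept headers, authentication, redirect handling) is outside this sketch.

```python
# Non-normative sketch: composing the DDC request of 2.4.2 with the
# standard library. Only the User-Agent rule and the no-cookies rule
# from this section are modeled here.
from urllib.request import Request

def ddc_request(url):
    req = Request(url, method="GET")
    req.add_header("User-Agent", "W3C-mobileOK/DDC-1.0")
    # Per 2.4.2, no cookie related headers are ever included.
    return req

req = ddc_request("http://example.org/")
assert req.get_header("User-agent") == "W3C-mobileOK/DDC-1.0"
assert not req.has_header("Cookie")
assert req.get_method() == "GET"
```

No network traffic is performed here; the Request object is only constructed, which is enough to inspect the headers a conforming checker would send.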
If an HTTP request does not result in a valid HTTP response (because of network-level error, DNS resolution error, or non-HTTP response), FAIL
If the response is an HTTPS response:
If the certificate is invalid, FAIL
If the certificate has expired, warn
If the HTTP status indicates redirection (status code 3xx):
Do not carry out tests on the response
If the response relates to a request for the resource under test, or any of its included resources (see 2.4.6 Included Resources):
Include the size of the response in the total described under 3.16 PAGE_SIZE_LIMIT
Include this response under the count described under 3.6 EXTERNAL_RESOURCES
If authentication information was supplied in the HTTP request (i.e. authentication failed), FAIL
Carry out tests on the response
Include the size of the response in the total described under 3.16 PAGE_SIZE_LIMIT
Include this response under the count described under 3.6 EXTERNAL_RESOURCES
Re-request the resource using authentication information
If the response relates to a request for a linked resource (see 2.4.7 Linked Resources), continue with the test (see 3.10 LINK_TARGET_FORMAT) and warn
Otherwise (i.e. for included resources), FAIL
If the HTTP status represents failure (4xx), other than 404 or a request for authentication (e.g. 401), FAIL
Include style elements and linked style sheets in the CSS Style only if their media (see [XHTMLBasic11]) descriptor is either not present or is present and contains values "all" or "handheld"
In the course of assembling the CSS Style use only those CSS rulesets that are not restricted as to their CSS media type or whose CSS media type specifier contains "handheld" or "all".
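The media filtering described above amounts to a simple predicate. The following sketch is non-normative (the function name is invented, and CSS parsing itself is not shown):

```python
# Non-normative sketch of the CSS media filter above: a ruleset is used
# only when it is unrestricted as to media, or when its media list
# contains "all" or "handheld" (compared case-insensitively).
def ruleset_applies(media_list):
    """media_list: media-type strings from a media attribute or @media
    qualifier, or None when the ruleset is unrestricted."""
    if media_list is None:
        return True
    normalized = {m.strip().lower() for m in media_list}
    return bool(normalized & {"all", "handheld"})

assert ruleset_applies(None)                    # unrestricted ruleset
assert ruleset_applies(["HANDHELD", "print"])   # qualifier is case-insensitive
assert not ruleset_applies(["screen", "print"])
```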
Some tests refer to a notion of included resources. Included resources are those referenced by:
the src attribute of img elements
the data attribute of object elements (see notes below)
the href attribute of link elements and xml-stylesheet processing instructions as defined in 2.4.5 CSS Style
images included by background-image and list-style-image properties in the CSS Style (2.4.5 CSS Style)
@import directives in the CSS Style - providing they are unqualified as to presentation media type or qualified by presentation media type "handheld" or "all" (case-insensitive) as defined in 2.4.5 CSS Style
Note: In some circumstances object elements may act as synonyms for other elements such as img and iframe. In these cases it is noted in the relevant section when to regard object elements as equivalents for other elements.
Note: For nested object elements, count only the number of objects that need to be assessed as discussed in 3.15 OBJECTS_OR_SCRIPT.
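A non-normative sketch of collecting the markup-level included resources listed above, using only the standard library HTML parser. Resources that come from the CSS Style (background-image, list-style-image, @import) are deliberately not handled here.

```python
# Non-normative sketch: gathering some of the markup-level included
# resources listed above with the standard HTML parser.
from html.parser import HTMLParser

class IncludedResources(HTMLParser):
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "src" in attrs:
            self.urls.append(attrs["src"])
        elif tag == "object" and "data" in attrs:
            self.urls.append(attrs["data"])
        elif tag == "link" and "href" in attrs:
            self.urls.append(attrs["href"])

p = IncludedResources()
p.feed('<html><head><link rel="stylesheet" href="s.css"/></head>'
       '<body><img src="a.png"/><object data="b.svg"></object></body></html>')
assert p.urls == ["s.css", "a.png", "b.svg"]
```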
Several tests refer to white space. White space has the same definition in this document as in XML; for XML 1.0 see [XML10].
If the Content-Type header value of the response starts with "text/" but does not specify UTF-8 character encoding, warn
Validate the document against the DTD under test, inserting a DOCTYPE if none is present, or replacing the given DOCTYPE with the appropriate DOCTYPE for the DTD under test.
If the document does not contain a DOCTYPE declaration, FAIL
If the document is not an HTML document or it fails to validate according to its given DOCTYPE, FAIL
If the document does not declare the html namespace on its html root element, FAIL
Note: inputmode is part of [XHTMLBasic11].
For each input element with attribute type whose value is "text" or "password" or whose type attribute is missing:
If the element's inputmode attribute is invalid according to Section 5.2 User Agent Behavior of XHTML Basic 1.1 [XHTMLBasic11], FAIL
If the element is empty and an inputmode attribute is not present, warn
Make an inventory of unique included resources, as defined in 2.4.6 Included Resources.
For each such resource:
Request the resource and carry out the procedures discussed under 2.4.3 HTTP Response maintaining a running total of requests made. FAILures that occur in the course of making this assessment contribute to the result of this test.
If the total exceeds 10, warn
If this total exceeds 20, FAIL
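The two thresholds above translate directly into code. This non-normative sketch (function name invented) takes the size of the request inventory and returns the outcome together with any warnings:

```python
# Non-normative sketch of the thresholds above: warn once the inventory
# exceeds 10 requests, FAIL once it exceeds 20.
def external_resources_outcome(request_count):
    fails, warnings = [], []
    if request_count > 10:
        warnings.append("more than 10 external resources")
    if request_count > 20:
        fails.append("more than 20 external resources")
    return ("FAIL" if fails else "PASS", warnings)

assert external_resources_outcome(8) == ("PASS", [])
assert external_resources_outcome(15) == ("PASS", ["more than 10 external resources"])
assert external_resources_outcome(21)[0] == "FAIL"
```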
If an input element with type attribute set to "image" is present, FAIL
For each img element and object element:
If a usemap attribute is present, FAIL
If an ismap attribute is present, FAIL
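The three image-map conditions above can be checked with the standard HTML parser. The sketch is non-normative and the class name is invented for illustration:

```python
# Non-normative sketch of the image-map conditions above:
# input[type=image], and usemap/ismap on img or object, each FAIL.
from html.parser import HTMLParser

class ImageMapCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.fails = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input" and attrs.get("type") == "image":
            self.fails.append("input with type=image")
        if tag in ("img", "object"):
            if "usemap" in attrs:
                self.fails.append("usemap on " + tag)
            if "ismap" in attrs:
                self.fails.append("ismap on " + tag)

c = ImageMapCheck()
c.feed('<img src="map.png" usemap="#m"/><input type="image" src="b.png"/>')
assert c.fails == ["usemap on img", "input with type=image"]
```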
Note: The height and width HTML attributes specify pixels when they are used as a number. No unit is specified.
For each img element and object element whose type attribute starts with "image/":
If the height or width attribute is missing, FAIL
If the height or width attribute
Note: 404 and 5xx HTTP status do not result in failure when conducting this test.
Note: The document body of linked resources is not examined.
For each linked resource, as defined in 2.4.7 Linked Resources:
Request the resource
If the Content-Type header value of the HTTP response is not one of the Internet Media Types listed in the Accept header in 2.4.2 HTTP Request, warn
If the Content-Type header value of the HTTP response does not specify a charset parameter, or does but it is not consistent with the value of the Accept-Charset header in 2.4.2 HTTP Request, warn
If the document contains a
frame ,
frameset or
iframe element
or it contains an
object element which when retrieved has an Internet media type that starts with "text/", "application/xhtml+xml" or "application/vnd.wap.xhtml+xml",
FAIL
and has no
object element ancestor,
If the
innermost nested
object element is empty,
warn
If the
innermost nested
object element content consists only of white space (see
2.4.9 White Space),
FAIL
If none of the nested
object elements is an image that has a content type that matches the headers defined in
2.4.2 HTTP Request and the innermost nested
object element is non-empty and does not consist of text or an
img element that refers to an image that matches the headers defined in
2.4.2 HTTP Request,
FAIL
If the size of the document exceeds 10 kilobytes, FAIL
Add the size to a running total
For each included resource, as defined in 2.4.6 Included Resources:
Request the referenced resource
Add the size of the response body to the running total
(for nested
object elements count only the first object that matches the headers specified in
2.4.2 HTTP Request, if there is one)
If the total exceeds 20 kilobytes, FAIL
This test does not determine whether the title is meaningful.
If a
title element is not present in the
head element, or is empty, or contains only white space (see
2.4.9 White Space),
FAIL
For each
a,
link,
form, and
base element:
If a
target attribute is present,
If its value is not one of "_self", "_parent", or "_top", FAIL
In addition, a human-verifiable test is needed here to verify whether such elements could be replaced with alternative control elements.
In addition, a human test is needed here to verify whether the page is readable without a style sheet.
If the CSS Style (
2.4.5 CSS Style) contains rules referencing the
position,
display or
float properties,
warn or
u elements,
FAIL
If the document contains any
b,
big,
i,
small,
sub,
sup or
tt elements,
warn
If any element has a
style attribute,
warn
If there is CSS Style ( 2.4.5 CSS Style)
If all styles are restricted to media types other than "handheld" or "all" by means of @media at:
This appendix lists all Best Practices and indicates whether each has a corresponding test in mobileOK Basic, mobileOK Pro, both, or neither.
mobileOK Pro is a super-set of mobileOK Basic and so any Best Practice with a corresponding test in mobileOK Basic implicitly has a corresponding test in mobileOK Pro. This table, however, indicates which best practices have a corresponding test that expands on the test, if any, in mobileOK Basic. The tests listed for mobileOK Pro are subject to change as that document is still a work in progress. | http://www.w3.org/TR/2007/CR-mobileOK-basic10-tests-20071130/ | crawl-002 | refinedweb | 1,956 | 53.92 |
Basic Configuration
NATstyle comes packaged inside NaturalONE, the Eclipse-based IDE for NATURAL. As expected NATstyle can be configured in Eclipse preferences. The configuration is saved as
NATstyle.xmlwhich is used when you run NATstyle from the right click popup menu. We will need to modify
NATstyle.xmllater, so let's have a look at it:
<?xml version="1.0" encoding="utf-8"?> <naturalStyleCheck version="1.0" xmlns="" xmlns: <checks type="source"> <check class="CheckLineLength" name="Line length" severity="warning"> <property name="max" value="72" /> <property name="exclude" value="D3" /> </check> <!-- more checks of type source --> </checks> <!-- more checks of other types --> </naturalStyleCheck>(The default configuration file is here together with its XML schema.)
Existing Rules
The existing rules are described in NaturalONE help topic Overview of NATstyle Rules, Error Messages and Solutions. Version 8.3 has 42 rules. These are only a few compared to PMD or SonarQube, which has more than 1000 rules available for Java. Here are some examples what NATstyle can do:
- Source Checks: e.g. limit line length, find tab characters, find empty lines, limit the number of source lines and check a regular expressions for single source lines or whole source file.
- Source Header Checks: e.g. force header or check file naming convention.
- Parser Checks: e.g. find unused local variables, warn if local variable shadows view, find TODO comments, calculate Cyclomatic and NPath complexity, force NATdoc (documentation) tags and check function, subroutine and class names against regular expressions.
- Error (Message File) Checks: e.g. check error messages file name.
- Resource (File) Checks: e.g. check resource file name.
- Library (Folder) Checks: e.g. library folder conventions, find special folders, force group folders and warn on missing NATdoc library documentation.
Some rules like Source/Regular expression for single source lines only allow a single regular expression to be configured. Using alternation, e.g.
a|b|c, in the expression is a way to overcome that, but the expression gets complicated quickly. Another way is to duplicate the
<check>element in the
NATstyle.xmlconfiguration. Assume we do not only forbid
NATstyle.xmllooks like
<checks type="source"> <check class="CheckRegExLine" name="Regular expression for single source lines" ... > <property name="regex" value="PRINT '.*" /> </check> <check class="CheckRegExLine" name="Regular expression for single source lines" ... > <property name="regex" value="REDUCE .* TO 0" /> </check> </checks>While it is impossible to configure these rules in the NaturalONE preferences, it might be possible to run NATstyle with these modified settings. I did not verify that. I execute NATstyle from the command line passing in the configuration file name using the
-cflag. (See the full configuration and script to to run the rules from the command line.)
There is no documented way to create new rules for NATstyle. All rules' classes are defined inside the NATstyle plugin. The configuration XML contains a
classattribute, which is a short name, e.g.
CheckRegExLine. Its implementation is located in the package
com.softwareag.naturalone.natural.natstyle.check.src.sourcewhere
sourceis the group of the rules defined in the
typeattribute of the
<checks>element. I experimented a lot and did not find a way to load rules from other packages than
com.softwareag.naturalone.natural.natstyle. All rules must be defined inside this name space, which is possible.
Source Rules
While I cannot see the actual code of NATstyle rules, Java classes expose their public methods and parent class. I did see the names of the rule classes in the configuration and guessed and experimented with the API a lot. My experience with other static analysis tools, e.g. PMD and Pylint and the good method names of NATstyle code helped me doing so. A basic Source rule looks like that:
package com.softwareag.naturalone.natural.natstyle.check.src.source; // 1. import com.softwareag.naturalone.natural.natstyle.NATstyleCheckerSourceImpl; // other imports ... public class FindFooSourceRule extends NATstyleCheckerSourceImpl { // 2. private Matcher name; @Override public void initParameterList() { name = Pattern.compile("FOO").matcher(""); // 3. } @Override public String run() { // 4. StringBuffer xmlOutput = new StringBuffer(); String[] lines = this.getSourcelines(); // 5. for (int line = 0; line < lines.length; i++) { name.reset(lines[line]); if (name.find()) { setError(xmlOutput, line, "Message"); // 6. } } return xmlOutput.toString(); // 7. } }The marked lines are important:
- Because it is a Source rule, it must be in exactly this package - see the paragraph above.
- Source rules extend
NATstyleCheckerSourceImplwhich provides the lines of the NATURAL source file - see line 6. It has more methods, which have reasonable names, use the code completion.
- You initialise parameters in
initParameterList. I did not figure out how to make the rules configurable from the XML configuration, which will probably happen in here, too.
- The
runmethod is executed for each NATURAL file.
NATstyleCheckerSourceImplprovides the lines of the file in
getSourcelines. You can iterate the lines and check them.
- If there is a problem, call
setError. Now
setErroris a bit weird, because it writes an XML element for the violation report XML (e.g.
NATstyleResult.xml) into a
StringBuffer.
- In the end the return the XML String of all found violations.
<checks type="source"> <check class="FindFooSourceRule" name="Find FOO" severity="warning" /> </checks>(In the example repository, there is a working Source rule FindInv02.java together with its configuration customSource.xml.)
Parser Rules
Now it is getting more interesting. There are 18 rules of this type, which is a good start, but we need moar! Parser rules look similar to Source rules:
package com.softwareag.naturalone.natural.natstyle.check.src.parser; // 1. import com.softwareag.naturalone.natural.natstyle.NATstyleCheckerParserImpl; // other imports ... public class SomeParserRule extends NATstyleCheckerParserImpl { // 2. @Override public void initParameterList() { } @Override public String run() { StringBuffer xmlOutput = new StringBuffer(); // create visitor getNaturalParser().getNaturalASTRoot().accept(visitor); // 3. // collect errors from visitor into xmlOutput return xmlOutput.toString(); } }where
- Like Source rules, Parser rules must be defined under the package
...natstyle.check.src.parser.
- Parser rules extend
NATstyleCheckerParserImpl.
- The NATURAL parser traverses the AST of the NATURAL code. Similar to other tools, NATstyle uses a visitor, the
INaturalASTVisitor. The visitor is called for each node in the AST tree. This is similar to PMD.
INaturalASTVisitorin package
com.softwareag.naturalone.natural.parser.ast.internal. This interface defines 48
visitmethods for the different sub types of
INaturalASTNode, e.g. array indices, comments, operands, system function references like
LOOPor
TRIM, and so on. Still there are never enough node types as the AST does not convey much information about the code, most statements end up as
INaturalASTTokenNode. For example the NATURAL lines
* print with leading blanks PRINT 3X 'Hello'which are a line comment and a print statement, result in the AST snippet
+ TOKEN: * print with leading blanks + TOKEN: PRINT + TOKEN: 3X + OPERAND + SIMPLE_CONSTANT_REFERENCE + TOKEN: 'Hello'Now
'Hello'is a string. This makes defining custom rules possible but pretty hard. To help me understand the AST I created a visitor which dumps the tree as XML file, similar to PMD's designer: DumpAstAsXml.java.
Conclusion
With this information you should be able to get started defining your own NATstyle rules. There is always so much more we could and should check automatically.
2 comments:
Interesting to see this possibility of extending NatStyle! We took a look at NatStyle a while ago and quickly decided that it wouldn't fit our needs - especially due to the lack of custom rules.
In the meantime, one of our students has developed a real "linter" or code checker for Natural with access to the AST and custom rules that can be integrated in SonarQube and even instant feedback inside Eclipse just like with our Java projects. We are in the middle of rolling out this solution in our company.
If you're interested in taking a look at it, feel free to contact me. We're planning to also release the tool as open source.
Best regards,
Stefan
Stefan,
this is a great news. Integration with SonarQube is a must. I was merely looking for options. Yes please keep me in the loop. | https://blog.code-cop.org/2018/08/creating-your-own-natstyle-rules.html?showComment=1535616871124 | CC-MAIN-2019-26 | refinedweb | 1,316 | 51.34 |
Data types and variables in C# programming
Definition of Data Types
A variable holds data of a specific type. When you declare a variable to store data in an application, you need to choose an appropriate data type for that data. Visual C# is a type-safe language, which means that the compiler guarantees that values stored in variables are always of the appropriate type.
Commonly Used Data Types
The following table shows the commonly used data types in Visual C#, and their characteristics.
Declaring and Assigning Variables
Before you can use a variable, you must declare it so that you can specify its name and characteristics. The name of a variable is referred to as an identifier. Visual C# has specific rules concerning the identifiers that you can use:
- An identifier can only contain letters, digits, and underscore characters.
- An identifier must start with a letter or an underscore.
- An identifier for a variable should not be one of the keywords that Visual C# reserves for its own use.
Visual C# is case sensitive. If you use the name MyData as the identifier of a variable, this is not the same as myData . You can declare two variables at the same time called MyData and myData and Visual C# will not confuse them, although this is not good coding practice.
You can declare multiple variables in a single declaration by using the comma separator; all variables declared in this way have the same type.
Declaring a Variable:
// DataType variableName; int price; // OR // DataType variableName1, variableName2; int price, tax;
After you declare a variable, you can assign a value to it by using an assignment statement. You can change the value in a variable as many times as you want during the running of the application. The assignment operator = assigns a value to a variable.
Assigning a Variable, Declaring and Assigning:
//Declaring variableName = value; price = 10; //declaring and assigning variables int price = 10;
Implicitly Typed Variables
When you declare variables, you can also use the var keyword instead of specifying an explicit data type such as int or string. When the compiler sees the var keyword, it uses the value that is assigned to the variable to determine the type.
Declaring a Variable by Using the var Keyword:
var price = 20;
In this example, the price variable is an implicitly typed variable. However, the var keyword does not mean that you can later assign a value of a different type to price. The type of price is fixed, in much the same way as if you had explicitly declared it to be an integer variable.
Implicitly typed variables are useful when you do not know, or it is difficult to establish explicitly, the type of an expression that you want to assign to a variable.
Object Variables
When you declare an object variable, it is initially unassigned. To use an object variable, you must create an instance of the corresponding class, by using the new operator, and assign it to the object variable.
The new Operator:
ServiceConfiguration config = new ServiceConfiguration();
The new operator does two things:
- It causes the CLR to allocate memory for your object
- It then invokes a constructor to initialize the fields in that object.
The version of the constructor that runs depends on the parameters that you specify for the new operator.
Example of Variables
The program below show the use of int, var and string variables:
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace Variables { class Program { static void Main(string[] args) { //declaring and assigning using var var a = "C# Programming"; var b = " Tutorials"; //declaring and assigning string string myFirstName; string myLastName; myFirstName = "Info"; myLastName = "codify"; //declaring and assigning using int and string int x = 1; string y = "4"; //converting int to string string myFirstTry = x.ToString() + y; //converting string to int int mySecondTry = x + int.Parse(y); Console.WriteLine(a + b); Console.WriteLine(myFirstName + myLastName); Console.WriteLine(myFirstTry + " is Admin BM Date of Birth"); } } }
Output: C# Programming Tutorials infocodify 14 is Admin BM Date of Birth
Ads Right | https://www.infocodify.com/csharp/variables | CC-MAIN-2020-45 | refinedweb | 683 | 52.19 |
I am attempting to create a class and subsequent functions simulating a purse.
Here are my instructions :
-Declare Purse Class
-Include 4 private data members
+int pennies, int nickels, int dimes, & int quarters
-Include 4 public functions
+function insert(int p, int n, int d, int q) to initialize pennies, nickels, dimes, & quarters
+function dollars() to return the dollar amount
+function remove(int p, int n, int d, int q) to subtract pennies, nickels, dimes & quarters
+function display() returns a new String with remaining pennies, nickels, dimes & quarters
-Class should include a test driver main()
+Should declare the Purse object p with (2, 3, 0, 1) and invokes display() method to print content of the purse
Here is the code I have, but I am receiving arbitrary values when the program outputs values.
I have tried messing with the syntax with a few of the functions with no luck. Even inputting all 0's into the insert function returns me with very high values.I have tried messing with the syntax with a few of the functions with no luck. Even inputting all 0's into the insert function returns me with very high values.Code:
//Purse
#include <iostream>
#include <iomanip>
#include <string>
using namespace std;
class Purse
{ private:
int pennies;
int nickels;
int dimes;
int quarters;
public:
void insert( int, int, int, int );
void remove( int, int, int, int );
void dollars( int, int, int, int );
void display();
};
//Purse::Purse(
//{
// insert( int p, n, d, q);
//}
void Purse::insert( int p, int n, int d, int q )
{
pennies += p;
nickels += n;
dimes += d;
quarters += q;
}
void Purse::remove( int p, int n, int d, int q )
{
pennies -= p;
nickels -= n;
dimes -= d;
quarters -= q;
}
//void Purse::dollars( int, int, int, int )
//{ float x = pennies + 5*nickels + 10*dimes + 25*quarters;
// x =(float)x/100;
// cout << "\nThe current amount inside of the purse in dollars, is : $" << x << " Dollars.\n";
//}
void Purse::display()
{
cout << "\nCurrent count of purse: " << pennies << " Pennies, " << nickels << " Nickels, " << dimes << " Dimes, and " << quarters << " Quarters.\n" << endl;
}
int main()
{
Purse objectp; //Declare object
objectp.insert( 0, 0, 0, 0);
//objectp.insert( 2, 3, 0, 1 );
objectp.display();
return 0;
}
What can I do to fix this? Thanks in advance. | http://cboard.cprogramming.com/cplusplus-programming/137705-help-class-member-functions-printable-thread.html | CC-MAIN-2016-18 | refinedweb | 369 | 61.19 |
CompilerQOR is part of the Querysoft Open Runtime, the world's first Aspect Oriented, Open Source, Cross Platform Framework written almost entirely in C++.
CompilerQOR
CompilerQOR is a small but vital QOR aspect. Its job is to ensure that no code outside of CompilerQOR itself ever has to know or care which C++ compiler is being used to build it. That doesn't mean code can be written without the limitations of compilers in mind, just that the particular limitation being considered is independent of which compiler is used.
If you've looked at any substantial amount of open source software you'll have seen many things like this:
#ifdef __COMPILER_A
...
# if( __COMPILER_A_VERSION > 5006 )
...
# else
...
# endif
#else
# if ( __COMPILER_VERSION < 1400 )
# ...
# else
....etc
What's usually going on here is that the original code used some feature of the C++ language or compiler, templates, namespaces,
_UBER_FUNKY_BUILTIN_MACRO_ etc., that not everyone's compiler supports.
People who wanted to use the code have patched around this for specific versions of specific compilers that do or don't support feature X. Great, now it works for many people but unfortunately it's also now an unreadable mess rammed full of important meta data about different compilers which is not directly related to the original algorithm and which itself cannot be reused. It may work but from a design, maintainence, readability or reuse point of view its a disaster.
If you don't think the above example is bad enough to warrant doing anything about it then take a look at the source of Make the ubiquitous build tool but not for too long or you might go blind and miss out on the rest of this article.
What CompilerQOR does is allow us to replace the above with:
#if __QCMP_SUPPORTS( FEATURE_X )
...
#else
...
#endif
The further advantage is that the above code will now work with a new compiler even one not in existence when it was written. At most it will need an updated CompilerQOR but if feature X is something fairly ordinary that all new compilers are expected to support then CompilerQOR will define it as supported by default requiring little or no changes at all. Anyone who runs a software project will see the potential advantages; reducing costs and complexity, increasing portability, extending product lifetime and encouraging standards conformance.
Hasn't this been done before? Well of course it has; there are no original lines of code. The Boost libraries do this, as does STLSoft, and the configuration headers of many other fantastic projects like llvm/clang/libc++ have feature macros to specify support for variations between compilers. CompilerQOR builds on the shoulders of these giants without having to carry them around wherever it goes.
C++ language features are crucial but by no means the only thing that varies between compilers. There are also variations in intrinsic functions, language extensions, pragmas, configuration options and the sizes of the fundamental types, all of which come up later in this article. Then there is the Application Binary Interface: how the built code and data structures are actually laid out in memory, v-table differences, integration with the loader binary format, PE or ELF, and injection of code for RunTime Type Information, Exceptions, Security and bootstrapping.
All these things then are in part or in whole the domain of CompilerQOR. As the QOR grows it will become a lot more than a simple set of header files but the goal remains the same. If 'it' varies per compiler 'it' belongs to the CompilerQOR and if 'it' only varies per compiler then 'it' belongs only to the CompilerQOR. Nothing else should depend on which compiler happens to be used. This is what makes CompilerQOR an Aspect as opposed to just a library. The compiler in use affects all the code everywhere in the source tree but we deal with this cross cutting concern in just one place.
Pretty much the first job of CompilerQOR is to work out which compiler is being used to compile it so it can send that compiler down its own happy path in the tree of included files. This is done in the same way its done in the Boost libraries with the input of a few snippets from elsewhere. Each different type of compiler is detected by the predefined preprocessor macros they automatically make available, for example:
# if defined __CODEGEARC__
# define __QCMP_COMPILER __QCMP_CODEGEAR //CodeGear - must be checked for before Borland
...
The predefinition of __CODEGEARC__ is the definitive indication we need in order to detect an Embarcadero compiler in use. But as the comment implies, these compilers also define __BORLANDC__, which is the test macro for the older Borland compilers, so we have to check for CodeGear first to be sure which one it is. These gotchas aside there's no rocket science here, just a chunk of if..else preprocessor logic.
Now we know which compiler is looking at the code we can say which features are available and which are not. We do this by defining all the features we know about in one place "include/CompilerQOR/Common/CompilerFeatures.h" and then by #undef(ining) them in the compiler specific headers where compilers don't support them. We do it this way round for several reasons. There's a definitive list of features in exactly one place. Each compiler header explicitly points out the things which are not up to scratch with that compiler. These are the same things that are likely to cause
compatibility issues so we want them explicitly stated in association with each compiler.
To detect if a feature is supported the __QCMP_SUPPORTS( _FEATUREX ) macro is provided for use where you want to say:
#if __QCMP_SUPPORTS( _FEATUREX )
...
#endif
If you want to do anything fancier with the feature switch such as include it as a condition in QOR_PP_IF( condition, truecase, falsecase ) then you'll need to use __QCMP_FEATURE_TEST( _FEATUREX ) which expands to 0 for unsupported features and 1 for supported features.
This uses officially the nastiest macro hack in the known universe which I reproduce here only to say I don't think I invented this, please don't take out a contract or fatwah on me.
# define __QCMP_FEATURE_TEST( _X ) __QCMP_FEATURE_TEST2( QOR_PP_CAT( __QCMP_FEATURE_TEST_, _X ), (0) )
# define __QCMP_FEATURE_TEST_1 0)(1
# define __QCMP_FEATURE_TEST2( _A, _B ) QOR_PP_SEQ_ELEM( 1, (_A)_B )
Never do that! Don't ask me about it and don't credit me for it. Enough said.
The set of features defined in the sample code is very small. Just enough to allow CompilerQOR itself to compile and for those features to be tested. Many additional features are needed to allow for any and all variations amongst compilers. This work is ongoing research and although it has progressed well beyond what's in the sample code a final definitive list has not been settled. I've reduced the list in the sample code for the sake of simplicity and so that it has a good chance of being forward compatible with future versions, i.e. nothing that's already published there will suddenly disappear in a later version.
In the sample code with this article is a complete port of the Boost Preprocessor library with a few additions as already noted. Boost is fully documented online and I didn't write their PP library so we'll just deal with what's in the added flow.h here.
The only addition we deal with here is QOR_PP_include_IF( condition, file ), which lets a header be included only when the given preprocessor condition is true:
#include QOR_PP_include_IF( SOME_CONDITIONAL_MACRO, "include/ConditionalHeader.h" )
Intrinsic functions are built-in functions for which the compiler has an internal copy of the code. Instead of the code being in a library it's in the compiler itself and gets injected into your program during compilation as inline code. Intrinsic functions can be very useful but can also cripple portability between compilers, as not every one is available in exactly the same way with each compiler. Change compiler and suddenly that function you were calling no longer exists.
We could just bar the use of intrinsic functions from the QOR altogether and in fact outside of CompilerQOR we do but then there'd be no way to make use of them. CompilerQOR solves this by persuading the Compiler in use to inject all the available
intrinsic functions into CompilerQOR and to set preprocessor constants for each one so all other code can find out if a
specific intrinsic is available. Even better the intrinsic functions don't pollute the global namespace. Outside of CompilerQOR itself they appear as member functions of the
CCompiler class.
In the sample code this is implemented for Microsoft VC++ compilers as a proof of concept. This is how it works for the intrinsic form of memcpy:
After compiler discrimination causes the preprocessor to include the header for a Microsoft compiler, three macros name the compiler-specific files:

#define __QCMP_BUILTINS_HEADER "CompilerQOR/MSVC/VC6/Builtins.h"
#define __QCMP_UNBUILTINS_HEADER "CompilerQOR/MSVC/VC6/UnBuiltins.h"
#define __QCMP_BUILTINS_INC "CompilerQOR/MSVC/VC6/Builtins.inl"
Later the generic Compiler.h header #includes __QCMP_BUILTINS_HEADER within an extern "C" section like this, if the CompilerQOR itself is being built.
extern "C"
{
#include __QCMP_BUILTINS_HEADER
}
which expands to:
extern "C"
{
void* memcpy( void* dest, const void* src, size_t count );
...
}
The additional condition which means this is only included if CompilerQOR itself is being built is so that other libraries including this header never see the global namespace declaration of
memcpy.
So now we have declared the global namespace memcpy function but not defined it. Then within the
CCompiler class declaration itself the same header is included again. This time without the
extern "C".
class CCompiler : public CCompilerBase
{
...
# include __QCMP_BUILTINS_HEADER
...
};
This creates a member function declaration matching each of the available intrinsic functions:
class CCompiler : public CCompilerBase
{
...
void* memcpy( void* dest, const void* src, size_t count );
...
};
In the main CompilerQOR.cpp file, the CCompiler member functions are then implemented by including __QCMP_BUILTINS_INC like this:
//--------------------------------------------------------------------------------
namespace nsCompiler
{
#ifdef __QCMP_BUILTINS_INC
# include __QCMP_BUILTINS_INC
#endif
...
Builtins.inl then implements each wrapper:

//--------------------------------------------------------------------------------
namespace nsCompiler
{
//--------------------------------------------------------------------------------
#pragma intrinsic(memcpy)
//--------------------------------------------------------------------------------
void* CCompiler::memcpy( void* dest, const void* src, size_t count )
{
return ::memcpy( dest, src, count );
}
...
The pragma instructs the compiler to use the intrinsic version of memcpy when it comes across otherwise undefined references to it. We haven't defined it, so the CCompiler::memcpy function gets an injected copy of the intrinsic.
After this the final preprocessor defined header is included at global scope
#ifdef __QCMP_UNBUILTINS_HEADER
# include __QCMP_UNBUILTINS_HEADER
#endif
...
#pragma function(memcpy)
...
This instructs the compiler to stop using the intrinsic form of memcpy and it goes back to being an undefined function for the rest of the compilation unit or until the C library defines it again but that's for another article.
At the end of all this prattling on, then, we end up with a CCompiler class that contains member functions that are just calls to each of the compiler's intrinsic functions. The only external interface is that of the exported CCompiler class.
There's one more thing we have to do to make use of this later, and that's to define a macro for each function so we know it's available. This is done in the __QCMP_BUILTINS_HEADER file, giving us #define __QCMP_DECLS_MEMCMP 1, which can be tested for later on.
#ifndef NDEBUG //Debug build
# define __QCMP_REPORTCONIG 1 //Report configuration items during compilation where supported
#endif
Inserting these lines as the first lines in a .cpp file, before any includes, means that when the preprocessor reaches lines like this:
__QCMP_MESSAGE( "Compiler runtime type information enabled." )
You should see 'Compiler runtime type information enabled' coming out on your build console. This will of course depend to some extent on which IDE you use. It works on .Net era versions of Visual Studio and on recent CodeBlocks and Netbeans IDEs.
The __QCMP_MESSAGE macro, just like all the others discussed here and those not hidden in 'details' headers within the preprocessor library, is fine to use in any code that includes CompilerQOR.h.
There are a number of extensions which are often supported by compilers even though they were not or are not specified in the C++ language. Among these are RunTime Type Information (RTTI) and C++ exceptions which I gather are now in the standard in some form but of course have legacy implementations all over the place. CompilerQOR enables other libraries to make use of these extensions portably by detecting and reporting their availability. Each extension is
specified by a definition like this:
#define RunTimeTypeInformation_QCMPSUPPORTED 1

The mixed case names distinguish extensions from language features.
To test for an extension use the name without _QCMPSUPPORTED on the end as a parameter to __QCMP_EXTENSION( _X )
For example: __QCMP_EXTENSION( RunTimeTypeInformation ) will expand to 0 if the extension is unavailable and 1 if it is available.
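As an illustration of how that check gets used (assuming CompilerQOR.h is on the include path; the SAFE_CAST macro here is my own example, not part of the library):

```cpp
#include "CompilerQOR.h"

#if __QCMP_EXTENSION( RunTimeTypeInformation )
    // RTTI is available: dynamic_cast and typeid are safe to use
#   define SAFE_CAST( T, p ) dynamic_cast< T >( p )
#else
    // No RTTI: fall back to an unchecked cast
#   define SAFE_CAST( T, p ) static_cast< T >( p )
#endif
```

Because the macro expands to 0 or 1, it can be used directly in #if lines without any #ifdef gymnastics.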
There are also options that can be predefined for some compiler preprocessors that change the mode of compilation. These generally have to be set outside the code itself in the IDE or Makefile. With CompilerQOR you can choose to do that or in most cases you can define everything in a single Configuration header file which works just as well. The configuration for building CompilerQOR and anything that uses it works like this.
If __QOR_CONFIG_HEADER is defined outside the code in the IDE or Makefile to the name of a file then that file gets included to control the configuration. If it isn't defined then the "DefaultConfig.h" you can see in the sample code is used instead. The
following things can be configured:
__QCMP_REPORTCONIG
__QCMP_REPORTDEFECITS
__QOR_FUNCTION_CONTEXT_TRACKING
__QOR_CPP_EXCEPTIONS
__QOR_ERROR_SYSTEM
__QOR_PERFORMANCE
__QOR_UNICODE
UNICODE
__QOR_PARAMETER_CHECKING_
You're welcome to experiment with the configuration and tell me if any of the combinations don't work, but it will have little or no effect on the sample code for this article. One thing that will have an effect is adding a definition for __QCMP_COMPILER to the configuration. This will override automatic compiler discrimination and 'pretend' a different compiler is in use. There are rare cases where this might be useful, but please don't expect it to work as a general rule. The values for __QCMP_COMPILER can be found in the "include/CompilerQOR/Common/Compilers.h" header file.
We're all familiar with the fundamental types of C++: int, char, volatile unsigned long long. However, the language itself is quite frighteningly vague about exactly what an int is, let alone a long double. In order to be able to move code easily between compilers we need a set of types we can rely on to always be the same size. We can't
guarantee of course that the bytes will always be stored the same way round as that's down to the hardware but we'll leave that issue to hardware abstraction for now. It only really starts to hurt when we want to share binary files between systems with different architectures.
CompilerQOR defines a set of types within the CCompiler class that must be available in some form from each supported compiler. If the compiler doesn't natively provide them then we need to fake them with typedefs so that client code can rely on the same types always being present. Each base type has const and volatile qualified variations and some have signed and unsigned. Here's the set for char:
typedef signed char mxc_signed_char;
typedef const signed char mxc_c_signed_char;
typedef volatile signed char mxc_v_signed_char;
typedef unsigned char mxc_unsigned_char;
typedef const unsigned char mxc_c_unsigned_char;
typedef volatile unsigned char mxc_v_unsigned_char;
and here we fake the wchar_t type for a compiler that doesn't have it built in:
typedef mxc_unsigned_short mxc_wchar_t;
typedef mxc_c_unsigned_short mxc_c_wchar_t;
typedef mxc_v_unsigned_short mxc_v_wchar_t;
These types are pulled back into the global namespace in "CompilerTypes.h", which also acts as a build time check that the CCompiler class has defined them all:
typedef nsCompiler::CCompiler::mxc_unsigned__int64 Cmp_unsigned__int64;
typedef nsCompiler::CCompiler::mxc_c_unsigned__int64 Cmp_C_unsigned__int64;
typedef nsCompiler::CCompiler::mxc_v_unsigned__int64 Cmp_V_unsigned__int64;
All these types end up with a Cmp_qualifier_type form. Above is an example of a sized type where the second underscore is doubled and a bit size is appended. While we can live with variations in the size of a long double these sized types really must be
reliably exactly what they say they are.
CompilerQOR has sized types for 8, 16, 32 and 64 bit signed and unsigned integers and their const and volatile variants.
Types that vary with the word size of the architecture are also useful, and CompilerQOR defines Cmp_int_ptr and Cmp_uint_ptr as integer types exactly the size of a pointer on the current architecture. Finally, Cmp__int3264 is always 32bits on a 32bit machine and 64bits on a 64bit machine even if some weird addressing limitation or extension changes the _ptr types, and byte is always exactly 8 unsigned bits. (I always thought it was a ridiculous shortfall that 'byte' was not a fundamental type, so I sneaked it in.)
The QOR uses the ordinary C++ types for most purposes, but wherever you need to be sure that a 64bit type will be available, or that a variable will be large enough to hold an address, the Cmp_ types come in handy. The fact that they all have single token contiguous names, even Cmp_V_unsigned_long_long, can also be useful if type names need to go through recursive preprocessor macro expansion, where volatile unsigned long long might get interpreted as 4 parameters rather than 1.
So that's what CompilerQOR covers and what it doesn't. Now let's get into the details and walk through adding support for a completely new compiler.
Search or grep the code for:
Note: Add new compiler support here
This will give you a list of places where essential edits are required.
Add a __QCMP_MYCOMPILER definition, a __QCMP_COMPILERHEADER path definition to reference your MyCompiler.h file and add MyCompiler.h into whichever IDE project or Makefile you're using to build CompilerQOR.
Set up all the specific definitions that will only be included if the new compiler is in use.
These all go inside MyCompiler.h or files included from there. To work out what needs to be set, either refer to the header of the most similar compiler already supported as a starting point, or copy the Template.h file provided with the sample code to get you started. If you're an expert on your compiler then in a few minutes you'll have everything in place. If not, then in a few minutes you'll have a lot of questions like "How does my compiler manage warnings?" and "What's the correct equivalent of __declspec(naked)?" I probably can't be of much help (if I knew, I might have added support for your compiler myself), but the members of CodeProject may be of some assistance, along with manuals and Google. At this stage leave all the features turned on unless you know for sure they aren't supported. Unsupported features will be picked up automatically in Step 4.
Build CompilerQOR as a static library with your new compiler. The chances are you'll get errors the first time. The crucial thing is where those errors occur. If you get an error in your new "MyCompiler.h" file, or in generic code that is included after "MyCompiler.h", then you have a fix to make. If you get an error in code that's reached by the build before "MyCompiler.h" is included, then the chances are I have a fix to make: some assumption I've made about what is 'generic' code that will be supported by all compilers was wrong for your case. Please let me know, as these things are generally fixable.
The sample code provides projects to build a static library, StaticCompilerQOR and
an executable TestCmp which links in the static library. TestCmp contains a series of build time and runtime tests for the features specified in
CompilerQOR for the compiler in use. Running TestCmp will output the results of the runtime tests. If you're running it then the compile time tests have passed. Press <return>
a few times to drive it through to the end or it will wait for you indefinitely.
Build and run TestCmp just as it is. If it fails to build then the failure should indicate what needs to be added or changed in "MyCompiler.h" to fix it. You might need to #undef some features at this point. Once it's building we're almost there; run it up and check for any failed tests. The type sizes we can't do much about at the moment, but any failures of the other tests indicate features that need an #undef.
Now you have basic support for yet another compiler, and any code written to make full use of CompilerQOR feature and extension checks has a good chance of working with it. Given that moving a reasonably sized source tree even from one version of the same compiler to another can take days, the savings easily justify the time invested.
Due to the number of compilers and build systems supported, only the Debug build configurations have been set up. Out of the box, no Release build is likely to work. If you want one, you'll need to carefully set the options in the Release configuration after examining the relevant Debug configuration.
Linux builds will report failures when checking the type sizes. This is not an error in itself, but it does point up a pretty hideous inconsistency between Linux and Windows compilers, even when both are GCC based. At the moment we have no SystemQOR library to handle operating system differences for us, so TestCmp can only have one set of 'correct' values to check against. Suggestions on 'permanent' solutions to this problem would be welcome.
This initial version of CompilerQOR is clearly just a start and has a long way to go to fulfill its potential. I don't own or have access to all the world's variety of C++ compilers, so that's where you come in. If your compiler is under-supported, unsupported or even completely unrecognized by CompilerQOR then the QOR needs your input. You make the QOR better and it makes everyone's life better. That's the way open source is supposed to work. The code that accompanies this article is also online at SourceForge where you can contribute to the QOR.
Now we've abstracted the compiler we can feel pretty good, but what about all the differences between machines that we haven't abstracted? The world is full of a mixture of 32bit and 64bit architectures, and by the time that goes away someone will have made the jump to 128bit. We're certainly not walking on water yet if we can't take those differences in our stride. Then there's MMX and SSE and... It looks like we're going to need an ArchQOR.
Because a lot of compiler products are referenced here, many proprietary names are mentioned in the article and source, many of which are trademarks:
Microsoft, Windows, Visual C++ and Visual Studio are trademarks of Microsoft.
Embarcadero and CodeGear are trademarks of Embarcadero.
All other trademarks and trade names mentioned are acknowledged as the property of their respective owners who are in no way responsible for any of the content of this article or the associated source code.
I'd especially like to thank the regulars and occasionals in the CodeProject lounge who helped me prioritize which compilers to support and provided the encouragement necessary to do the same thing seven times in seven different IDEs, only to realize it would have to be done again.
Python's data visualisation libraries are great for exploratory and descriptive data analysis. When you have a new dataset, you may want to look at relationships en masse and then drill down into something that you find particularly interesting. Python's Seaborn module's .pairplot() is one way to carry out your initial look at your data. This example takes a look at a few columns from a fantasy football dataset (edited from here).
Plug in our modules, fire up the dataset and see what we’re dealing with.
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
import numpy as np
data = pd.read_csv("../../Data/Fantasy_Football.csv")
data.head()
So we have a row for each player, containing their names, team and some numerical data including squad number, cost, selection and points.
These numbers, while readable individually, are impossible to read and make much use of beyond a line-by-line understanding. Seaborn's .pairplot() allows us to take in a huge amount of data and see any relationships and the spread of each data point. It will take each numerical column, put them on both the x and y axes and plot a scatter plot where they meet. Where the same variables meet, we get a histogram that shows the distribution of our variables. Let's check out the default plot:
sns.pairplot(data)
plt.show()
So this is a lot of data to look at. While it is very useful, it can be quite overwhelming. Let’s use the ‘vars’ argument within pairplot to focus on a few variables.
We’ll also change our scatterplot to a regression type with ‘kind’, so that we can see the regression model that Seaborn would create if we were to use a reg plot. Now we’ll be able to better see any relationships:
sns.pairplot(data, vars=["now_cost","selected_by_percent","total_points"], kind="reg")
plt.show()
That looks much more manageable! See how easy it is to create a complicated plot, that tells us a lot about our data very quickly? We can now see that most players are picked by nobody/very few people, and that the clearest relationship is between popularity and points – as we’d probably expect. Perhaps less predictably, the relationship between points and cost is comparatively weak.
Summary
This sets us up for a more comprehensive look at fantasy football, but hopefully this article goes to show how easy it can be to knock together an exploratory data visualisation with Seaborn's pairplot. There are many more arguments that we could pass to improve this, from the colour (hue='position', for example) to other types of plots within our pairplot. Take a look at the docs to find out all of your options.
After your exploratory analysis, you might want to check out our describing datasets article to go further! | https://fcpython.com/visualisation/python-visualisation-fantasy-football-data | CC-MAIN-2018-51 | refinedweb | 481 | 63.59 |
Due Date: Friday 10/22 11:59PM.
Navigation
- A. Intro
- B. The Priority Queue Abstract Data Type
- C. Review: Heaps
- D. Exercise: Implementing Your Own Min Heap
- E. Application: Heapsort
- F. Heapify
- G. Submission
A. Intro
Here's an optional intro video for the lab with some explanations and examples. All the information in the video is covered in the spec and timestamps for topics are in the video description.
Today, we'll take a look at the priority queue and how it can be implemented
using a heap. We'll then implement our own priority queue (called
ArrayHeap),
and then discuss some applications of heaps.
As always, you can get the skeleton of this lab with the following commands.
git fetch shared
git merge shared/lab9 -m "get lab9 skeleton"
git push
B. The Priority Queue Abstract Data Type
We've learned about a few abstract data types already, including the stack and queue. The stack is a last-in-first-out (LIFO) abstract data type where, much like a physical stack, we remove the most recently added elements first. The queue is a first-in-first-out (FIFO) abstract data type. When we remove items from a queue, we remove the least recently added elements first.
For example, you can model the back button in your browser as a stack - if you hit it, the most recent page you visited is the one you go to. You can model the GBC burrito bowl line as a queue - the first people to get in line get served first. But what if we want to model an emergency room, where people waiting with the most urgent conditions are served first? We would need to serve patients based on how urgent their condition is, and not by how long ago they arrived.
Sometimes, processing items LIFO or FIFO is not what we want. We may instead want to process items in order of importance or a priority value.
The priority queue is an abstract data type that allows for this that contains the following methods:
- insert(item, priority value): Inserts item into the priority queue with priority value priority value.
- poll(): Removes and returns the highest priority item in the priority queue.
- peek(): Returns the highest priority item.
Note: The priority of an item is a function of its priority value. In a max priority queue, elements with larger priority values will have higher priority. In a min priority queue, elements with smaller priority values will have higher priority. In this lab, you will be dealing with the latter: items with smaller priority values have higher priorities and should be removed from the priority queue first.
Java’s
PriorityQueue class is implemented with a data structure
called a binary min heap. For the remainder of this lab, we will study the heap
data structure and create our own implementation of a priority queue using a
binary min heap. The next section recaps what a binary min heap is.
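Before building our own, it helps to see the built-in one in action; java.util.PriorityQueue uses the elements' natural ordering, so it behaves as a min priority queue by default:

```java
import java.util.PriorityQueue;

public class PQDemo {
    public static void main(String[] args) {
        PriorityQueue<Integer> pq = new PriorityQueue<>(); // min priority queue
        pq.add(5);
        pq.add(1);
        pq.add(3);
        System.out.println(pq.poll()); // removes and returns 1, the smallest
        System.out.println(pq.peek()); // returns 3 without removing it
    }
}
```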
C. Review: Heaps
Heaps were covered in lecture 24. This section is just a reference. Feel free to skip this section if you are already familiar with heaps and come back to this later if you get stuck.
Heap Properties
Heaps are tree-like structures that follow two additional invariants that will be discussed more below. Normally, elements in a heap can have any number of children, but in this class we will restrict our view to binary heaps, where each element will have at most two children.
Invariant 1: Completeness
In order to keep our operations fast, we need to make sure the heap is well balanced. We will define balance in a binary heap's underlying tree-like structure as completeness.
A complete tree has all available positions for elements filled, except for possibly the last row, which must be filled left-to-right. A heap's underlying tree structure must be complete.
Here are some examples of trees that are complete:
And here are some examples of trees that are not complete:
Invariant 2: Heap Property
Here is another property that will allow us to organize the heap in a way that will result in fast operations.
Every element must follow the heap property, which states that each element must be smaller than all of its children or larger than all of its children. The former is known as the min-heap property, while the latter is known as the max-heap property.
If we have a min heap, this guarantees that the element with the lowest priority value will always be at the root of the tree. This helps us access that item quickly, which is what we need for a priority queue!
For the rest of this lab, we will be discussing the representation and operations of binary min heaps. However, this logic can be modified to apply to max heaps or heaps with any number of children.
Heap Representation
We can represent binary trees as arrays. (In this lab, we use ArrayLists.)
- The root of the tree will be in position 1 of the array (nothing is at position 0; this is to make indexing more convenient).
- The left child of a node at position $i$ is at position $2i$.
- The right child of a node at position $i$ is at position $2i + 1$.
- The parent of a node at position $i$ is at position $i / 2$ rounding down.
Here is an example of a binary tree that contains a letter at each node and its array representation below. As we can see, the node B is at index 2 and its left child D is at index 2 * 2 = 4 and its right child E is at index 2 * 2 + 1 = 5. Node G is at index 7 and its parent C is at index 7 / 2 = 3 (rounding down).
Because binary heaps are essentially binary trees, we can use this array representation to represent our binary heaps!
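The index arithmetic above is compact enough to write down directly (1-indexed, matching this lab's representation):

```java
public class HeapIndex {
    static int left(int i)   { return 2 * i; }
    static int right(int i)  { return 2 * i + 1; }
    static int parent(int i) { return i / 2; } // integer division rounds down

    public static void main(String[] args) {
        // Node B at index 2 has children at 4 and 5; node G at 7 has parent at 3
        System.out.println(left(2) + " " + right(2) + " " + parent(7)); // prints 4 5 3
    }
}
```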
Heap Operations
For min heaps, there are four operations that we care about:
- insert: Inserting an element into the heap.
- removeMin: Removing and returning the item with the lowest priority value.
- peek: Returning the lowest priority value item without removal.
- changePriority: Changes the priority of an item to a new priority value.
When we do these operations, we need to make sure to maintain the invariants mentioned earlier (completeness and the heap property). Let's walk through how to do each one.
insert
- Put the item you're adding in the next available spot in the bottom row of the tree. This is equivalent to placing the element in the next free spot in the array representation of the heap. This ensures the completeness of the heap because we're filling in the bottom-most row left to right.
- If the element that has just been inserted is n, swap n with its parent as long as n's priority value is smaller than its parent's, or until n is the new root. If n is equal to its parent, you can either swap the items or not. This process is called bubbling up (or swimming), and it ensures the min-heap property is satisfied because once we finish bubbling n up, all elements below n must be greater than it, and all elements above must be less than it.
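A minimal sketch of the bubble-up step on a 1-indexed, ArrayList-backed min heap (a placeholder fills index 0, mirroring the lab's representation; this is an illustration, not the ArrayHeap solution):

```java
import java.util.ArrayList;

public class BubbleUpSketch {
    /** Swim the element at index i toward the root while it beats its parent. */
    static void bubbleUp(ArrayList<Integer> heap, int i) {
        while (i > 1 && heap.get(i) < heap.get(i / 2)) {
            Integer tmp = heap.get(i);      // swap child and parent
            heap.set(i, heap.get(i / 2));
            heap.set(i / 2, tmp);
            i /= 2;                         // continue from the parent's slot
        }
    }
}
```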
removeMin
- Swap the element at the root with the element in the bottom rightmost position of the tree. Then, remove the right bottommost element of the tree (which should be the previous root and the minimum element of the heap). This ensures the completeness of the tree.
- If the new root n is greater than either of its children, swap it with that child. If it is greater than both of its children, choose the smaller of the two children. Continue swapping n with its children in the same manner until n is smaller than its children or it has no children. If n is equal to both of its children, or is equal to the lesser of the two, you can choose to swap the items or not. This is called bubbling down (or sinking), and it ensures the min-heap property is satisfied because we stop bubbling down only when the element n is less than both of its children and also greater than its parent.
findMin / peek
The element with the smallest value will always be stored at the root due to the min-heap property. Thus, we can just return the root node, without changing the structure of the heap.
changePriority
Find the element whose priority you want to change and change its priority value. Then bubble up or bubble down the element accordingly.
D. Exercise: Implementing Your Own Min Heap
The class
ArrayHeap implements a binary min heap using an underlying
ArrayList.
Open it up and read the provided methods.
Notice in the constructor we call
contents.add(null). Think about how this
affects indexing and why we choose to add a
null element.
Fill in the incomplete methods in ArrayHeap.java (marked with TODOs). You should not edit the methods without TODOs, but you can use them in the code you write. As John DeNero wisely says, code that doesn't respect abstraction barriers should BURN. Respect abstraction barriers! (You should be able to finish the lab without directly accessing the contents ArrayList.)
- First implement getLeftOf(int i), getRightOf(int i), and getParentOf(int i).
- Next, implement min(int index1, int index2), peek(), bubbleUp(int index), and bubbleDown(int index).
- Implement insert(T item, double priority) and removeMin(). Make sure you use the bubbleUp and bubbleDown helper methods when implementing these functions. After implementing, you should now be passing the tests insertOne, insertAscending, insertDescending, insertMany, and removeMinPeekNull.
- Implement changePriority(T item, double priority). Make sure you use the bubbleUp and bubbleDown helper methods when implementing this function. After implementing, you should now be passing the tests changePriorityIncreaseOne, changePriorityDecreaseOne, and changePriorityAll.
All tests are located in
ArrayHeapTest.java.
E. Application: Heapsort
Now, let’s move onto an application of the heap data structure. Suppose you have an array of $N$ numbers that you want to sort smallest-to-largest. One algorithm for doing this is as follows:
- Put all of the numbers in a min heap.
- Repeatedly remove the min element from the heap, and store them in an array in that order.
This is called heapsort.
Now, what is the runtime of this sort? Since each insertion takes proportional to $\log N$ comparisons once the heap gets large enough and each removal also takes proportional to $\log N$ comparisons, the whole process takes proportional to $N \log N$ comparisons.
It turns out we can actually make step 1 of heapsort run faster—proportional to $N$ comparisons—using a process called heapify. (Unfortunately, we can’t make step 2 run any faster than $N \log N$, so the overall heapsort must take $N \log N$ time.)
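The two steps can be sketched with the built-in PriorityQueue (our ArrayHeap would slot in the same way):

```java
import java.util.PriorityQueue;

public class HeapsortSketch {
    static int[] heapsort(int[] a) {
        PriorityQueue<Integer> heap = new PriorityQueue<>();
        for (int x : a) {
            heap.add(x);                  // step 1: N insertions
        }
        int[] sorted = new int[a.length];
        for (int i = 0; i < sorted.length; i++) {
            sorted[i] = heap.poll();      // step 2: repeatedly remove the min
        }
        return sorted;
    }
}
```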
F. Heapify
The algorithm for taking an arbitrary array and making it into a min (or max) heap in time proportional to $N$ is called heapify. Pseudocode for this algorithm is below:
def heapify(array):
    index = N / 2
    while index > 0:
        bubble down item at index
        index -= 1
Conceptually, you can think of this as building a heap from the bottom up. To get a visualization of this algorithm working, click on the BuildHeap button on the USFCA interactive animation of a min heap. This loads a pre-set array and then runs heapify on it.
Try to describe the approach in your own words. Why does the index start at the middle of the array rather than the beginning, 0, or the end, N? How does each bubble down operation maintain heap invariants?
It is probably not immediately clear to you why this heapify runs in $O(N)$. For those who are curious, you can check out an explanation on StackOverflow or Wikipedia.
G. Submission
You should be able to submit the same as always:
<git add/commit>
git tag lab9-0 # or the next highest submission number
git push
git push --tags
Please submit
ArrayHeap.java and your
partner.txt file (left blank if you did not work with a partner). The grader for this assignment will be the same as the tests that we have provided for you. You must pass 5/8 tests for full credit for this lab, but we recommend you try to pass all 8 tests. | https://inst.eecs.berkeley.edu/~cs61b/fa21/materials/lab/lab9/index.html | CC-MAIN-2021-49 | refinedweb | 2,055 | 63.19 |
CSPServer is a VC++ class which makes it relatively easy to create solid, multi-threaded client/server state-based protocol servers (SPS). Common existing standard client/server SPS systems are SMTP, POP3, FTP, NNTP and other systems which millions of people use every day on the internet.
CSPServer gives you a time-tested, well engineered framework to create your own standard protocol server or a new protocol server that suits your needs. CSPServer is used in Santronics Software's intranet hosting product, Wildcat! Interactive Net Server, to provide an integrated multiple protocol intranet hosting system. Proprietary virtual communications technology was removed to make a public socket-based only version of CSPServer.
This article will explain how to use the CSPServer class with working SPS examples. This is the author's first CodeProject article submission, so all commentators and critics are welcome.
A State-based Protocol Server or SPS is a client/server methodology whereby a client application connects to a server application to begin a text-based "controlled" conversation. This controlled conversation is often called the "State Machine."
In a state machine, a connected client will issue a command and then wait for a server response to the command. In a properly designed state machine, the client cannot continue with additional commands until a response has been provided by the server for the current command. It is very important to understand that all conversations in a state machine begin with the client issuing commands. The server will never send data or information to the client unless it was requested or in response to a client command.
CSPServer offers a framework to create your own client/server state machine conversation for your client/server application.
If you understand this basic concept, you can skip the next background section, which illustrates an SPS using a standard SMTP server.
If you ever connected to standard SPS such as SMTP, POP3, FTP, NNTP etc, the first line you see is the welcome line. The best way to see this is to use a standard TELNET client such as the one that comes with Windows. For example, to connect to the Microsoft SMTP server (port 25) using telnet, type the following:
Telnet maila.microsoft.com 25
If successful, you will see the welcome line. You can type HELP to see the available commands. Most standard SPS systems will provide HELP information on the available commands.
However, SPS systems are ultimately designed for automated applications, not human interaction. Client software is used to automate the process, such as sending an email. The following illustrates what typically happens when you want to send an email to anyone in the world.
Example SMTP client/server session:
Let's assume the target address for the email is gbush@whitehouse.gov, and let's assume you are using Outlook Express (OE) to create and send the email. OE has a built-in SMTP client component which is used to send mail to an SMTP server.
The following are the steps taken to send the email.
In summary, the SMTP client commands and SMTP server responses occur as follows:
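A reconstructed summary of the exchange (response texts vary by server; only the numeric codes are standardized):

```
C: HELO client.example.com
S: 250 maila.microsoft.com Hello
C: MAIL FROM:<sender@example.com>
S: 250 Sender OK
C: RCPT TO:<gbush@whitehouse.gov>
S: 250 Recipient OK        (or 550 unknown address on failure)
C: DATA
S: 354 Start mail input; end with <CRLF>.<CRLF>
C: ...message text, then a line containing only "."
S: 250 Message accepted for delivery
C: QUIT
S: 221 Closing connection
```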
Please note how the SMTP server uses numeric response codes for server responses. They control how the client will react. For example, when the client issues the RCPT TO: command, the positive response code is 250 to indicate the address is acceptable. However, negative response codes such as 550 can be issued which means the "unknown address."
The point behind this example is to illustrate the "tight" client/server state machine conversation between a state-based protocol server and a state-based protocol client such as in SMTP. Servers like SMTP have specific RFC design guidelines that describe the proper state machine (commands and responses). The same is true for FTP, NNTP and POP3.
With CSPServer, you can create your own client/server state machine conversation. You can use a similar response code concept for your own for your particular client/server application.
The following figure 1.0 illustrates the "state machine" in CSPServer:
Figure 1.0: CSPServer state machine

Server Applet                                      Client Applet
-------------                                      -------------
Client connection listening thread            <--connect--
[Accept]
Instantiate subclass CSPServer object thread
Call subclass Go() handler
    call subclass SendWelcome() handler       --response-->
    Command1() handler                        <-----------
    Command2() handler                        <-----------
    ...
    CommandN() handler                        <-----------
Run optional subclass Cleanup() handler when client disconnects
When the client first connects, the listening server will start a new CSPServer session, which starts a new thread to manage the client session. The thread handler will call the subclass Go() handler. The subclass Go() handler can be used to collect connection information, but its main goal is to start the state machine engine by calling the inherited Go() handler.
The inherited Go() handler will then call the virtual "SendWelcome()" function and begin the state machine. The SendWelcome() override provides the opportunity for the SPS to introduce itself and also possibly supply "readiness" information to the client.
For those who wish to get started quickly, the following is a "quick how to use" outline. For technical class or code details see the source code and examples provided.
At a minimum, to create an SPS using the CSPServer class, you need to do the following items (in no particular order):
There are other virtual functions you can override, but the constructor and Go() are the only required overrides to start the CSPServer engine.
The SampleServer.cpp source file contains a complete working example of an SPS. The following are the basic steps in creating an SPS:
Add #include <spserver.h> to your source code, and create a CSPServer subclass (i.e., CMySPServer) such as the one shown below.
For the sake of an example, we will create a state machine with 5 commands: "HELLO", "LOGIN", "SHOW", "HELP" and "QUIT". So for each command, a handler is added.
#include <spserver.h>
class CMySPServer: public CSPServer {
typedef CSPServer inherited;
public:
CMySPServer(CSocketIO *s); // REQUIRED
protected:
virtual void Go(); // REQUIRED
virtual void SendWelcome();
private:
static TSPDispatch Dispatch[]; // REQUIRED
BOOL SPD_HELLO(char *args);
BOOL SPD_LOGIN(char *args);
BOOL SPD_SHOW(char *args);
BOOL SPD_HELP(char *args);
BOOL SPD_QUIT(char *args);
};
Please note the SendWelcome() override is optional. However, it is almost always required to send a connection response to the client when the client first connects to the server.
The class CSocketIO is a simple socket wrapper with formatting functions and a socket input circular buffer. This class documentation is not within the scope of this article. See the source file socketio.h/cpp for usage and reference.
CSocketIO
Create the TSPDispatch structure for the subclass member Dispatch declaring the commands and the commands dispatch handles as follows
CSPServer::TSPDispatch CMySPServer::Dispatch[] = {
SPCMD(CMySPServer, "HELLO", SPD_HELLO),
SPCMD(CMySPServer, "LOGIN", SPD_LOGIN),
SPCMD(CMySPServer, "SHOW", SPD_SHOW),
SPCMD(CMySPServer, "HELP", SPD_HELP),
SPCMD(CMySPServer, "QUIT", SPD_QUIT),
{0}
};
For each command in the Dispatch structure, declare a dispatch handler in the subclass using the following prototype:
BOOL dispatch_handler_name(char *args);
Advanced Usage: It is possible to have an single handler for call commands. In this case, you can use the method GetCurrentCommandName() to return the current command issued.
GetCurrentCommandName()
Now begin to add the implementation of the overrides and the dispatch handlers:
//////////////////////////////////////////////////////////////
// Constructor
CMySPServer::CMySPServer(CSocketIO *s)
: CSPServer(s, Dispatch)
{
// Initialize all your session variables here.
// Done is a special BOOL used to exit
// the inherited::Go() handler. One of the
// Dispatch commands should set Done = TRUE;
// i.e., QUIT() command.
Done = FALSE;
// start thread, calls Go() handler. If you
// wish, you can call Start() outside the constructor.
Start();
}
//////////////////////////////////////////////////////////////
// Go() is called by start()
void CMySPServer::Go()
{
// By this point, we have a new thread running.
// This is a good point to collect client ip or
// domain information.
// To start the state machine, you must call
// the inherited Go() function. This will
// starts the thread's socket command line
// reader and dispatcher. Go() returns when
// the Done is set TRUE or if connection drops
// or one of the dispatch handlers return FALSE.
inherited::Go(); // REQUIRED
// we are done, good place to do session
// cleanup.
delete this; // REQUIRED
}
//////////////////////////////////////////////////////////////
// SendWelcome() is called by the inherited Go(). This is
// a good place to provide "server readiness" information
// to the connecting client. Standard SPS use numeric
// response codes to provide this information.
void CMySPServer::SendWelcome()
{
Send("Hello! Server ready\r\n");
}
//////////////////////////////////////////////////////////////
// Dispatch handlers.
//
// Dispatch handlers have one parameter, char *args. It will
// contain the string, if any, passed with the command.
//
// Return TRUE to continue the state machine. If FALSE is
// returned, the session ends. NOTE: Returning FALSE is not
// a good idea in practical designs as it can put the remote
// client in a irregular state. You should always have a
// graceful way to complete a session. Even if you wish to
// show "error" conditions, you should always return TRUE.
BOOL CMySPServer::SPD_HELLO(char *args)
{
Send("--> HELLO(%s)\r\n",args);
return TRUE;
}
BOOL CMySPServer::SPD_LOGIN(char *args)
{
Send("--> LOGIN(%s)\r\n",args);
return TRUE;
}
BOOL CMySPServer::SPD_SHOW(char *args)
{
Send("--> SHOW(%s)\r\n",args);
return TRUE;
}
BOOL CMySPServer::SPD_HELP(char *args)
{
Send("--- HELP commands ---\r\n");
Send("HELLO\r\n");
Send("LOGIN\r\n");
Send("SHOW\r\n");
Send("HELP\r\n");
Send("QUIT\r\n");
Send("--- end of help ---\r\n");
return TRUE;
}
BOOL CMySPServer::SPD_QUIT(char *)
{
Send("<CLICK> Bye!\r\n");
Control->Shutdown(); // Disconnects socket
Done = TRUE; // Tells Go() to exit
return TRUE;
}
Finally, now that you have a CSPServer class ready, you need a listening server thread that will answer incoming socket connections and for each new connection, a CSPServer instance is started.
To create the Listening Server, the CThread class is used:
class CServerThread : public CThread {
typedef CThread inherited;
public:
CServerThread(const DWORD port = 4044, const DWORD flags = 0);
virtual void Stop();
protected:
virtual void Go();
private:
SOCKET serverSock;
DWORD serverPort;
};
The subclass Go() handler is used to create the listening socket server.
Go()
When a new connection is accepted, a new instance of CMySPServer is created passing the peer socket handle as a new CSocketIO object in the CMySPServer constructor. The following is done in the CServerThread::Go() handler:
CMySPServer
CServerThread::Go()
.
.
SOCKET t = accept(serverSock, (sockaddr *)&src, &x);
if (serverSock == INVALID_SOCKET) break; // listening server broken
new CMySPServer(new CSocketIO(t)); // Start new CMySPServer session
.
.
You don't need to work about releasing the objects. The class themselves will do the cleanup.
Example usage of CServerThread in a console application:
CServerThread
CServerThread server(4044);
while (!Abort) {
if (kbhit() && getch() == 27) break; // Escape to Exit
Sleep(30);
}
server.Stop();
See the source file SampleServer.cpp for a complete working example. By default, the example uses port 4044. To test the server, use telnet like so:
Telnet LocalHost 40 | http://www.codeproject.com/Articles/3774/CSPServer-State-based-Protocol-Server-Class?fid=14833&df=90&mpp=10&sort=Position&spc=None&select=1568076&tid=1342082&noise=1&prof=True&view=Expanded | CC-MAIN-2014-10 | refinedweb | 1,765 | 54.93 |
IRC log of rdfcore on 2003-02-14
Timestamps are in UTC.
14:55:03 [RRSAgent]
RRSAgent has joined #rdfcore
14:56:03 [AaronSw]
no he has, he just joins an hour before everyone else
14:56:26 [em]
who owns logger?
14:56:31 [AaronSw]
dajobe
14:56:31 [em]
AaronSw, is he yours?
14:56:35 [em]
thought so
14:56:52 [AaronSw]
dajobe mentioned he's set up as a cron job
14:57:35 [DanC]
logger, learn about chanops, ok?
14:57:37 [logger]
I'm logging. I found 1 answer for 'learn about chanops, ok'
14:57:37 [logger]
0) 2003-02-14 14:57:35 <DanC> logger, learn about chanops, ok?
14:57:58 [AaronSw]
heh
14:58:34 [Zakim]
SW_RDFCore()10:00AM has now started
14:58:41 [Zakim]
+FrankM
14:59:01 [Zakim]
+PatH
14:59:06 [Zakim]
+??P15
14:59:21 [bwm]
Zakim, ??p15 is bwm
14:59:23 [jang_scri]
jang_scri has joined #rdfcore
14:59:23 [Zakim]
+Bwm; got it
14:59:54 [Zakim]
+??P16
15:00:00 [jang_scri]
zakim, ??p16 is ilrt
15:00:01 [Zakim]
+Ilrt; got it
15:00:09 [jang_scri]
zakim, ilrt has jang daveb
15:00:11 [Zakim]
+Jang, Daveb; got it
15:01:06 [Zakim]
+EMiller
15:01:07 [Zakim]
+AaronSw
15:01:49 [jang_scri]
rdf lets you think anything about anything
15:01:53 [jang_scri]
libel law prevents you saying it
15:02:13 [jang_scri]
agenda:
15:02:18 [bwm]
zakim, who is on the phone?
15:02:20 [Zakim]
On the phone I see FrankM, PatH, Bwm, Ilrt, AaronSw, EMiller
15:02:20 [Zakim]
Ilrt has Jang, Daveb
15:02:42 [jang_scri]
regrets patrick danbri jjc
15:02:57 [jang_scri]
scribe jan today...
15:03:16 [jang_scri]
regrets danc
15:03:27 [jang_scri]
frank: regrets gk?
15:03:34 [jang_scri]
bwm: ah yes
15:03:45 [jang_scri]
agenda:
15:03:48 [jang_scri]
any aob?
15:03:51 [jang_scri]
nope
15:04:09 [jang_scri]
next telecon 28 feb (proposing a holiday next week)
15:04:24 [jang_scri]
path: no objection: I'm away the week after.
15:04:43 [jang_scri]
minuites last meeting:
15:04:57 [jang_scri]
15:05:08 [jang_scri]
plus: mike dean was there.
15:05:14 [jang_scri]
scribe next meeting?
15:05:27 [jang_scri]
eric: yep
15:05:48 [JosD]
JosD has joined #rdfcore
15:05:54 [jang_scri]
minutes APPROVED
15:06:15 [jang_scri]
item 6, xml schema
15:06:26 [jang_scri]
15:06:30 [Zakim]
+??P9
15:06:34 [jang_scri]
15:06:49 [jang_scri]
uris for bits of schemas
15:06:50 [JosD]
Zakim, ??P9 is JosD
15:06:52 [Zakim]
+JosD; got it
15:07:03 [jang_scri]
bwm: we should respond,
15:07:15 [jang_scri]
daveb: the xml schema people aren't here, jjc, pats, maybe gk
15:07:19 [AaronSw]
zakim, mute aaronsw
15:07:21 [Zakim]
AaronSw should now be muted
15:07:27 [jang_scri]
could we ask them to look at it?
15:07:36 [jang_scri]
otherwise I'll have a look, although I'm not an expert
15:07:44 [jang_scri]
make precise the requirement though
15:08:00 [jang_scri]
bwm: we agree to the WD they've produced and respond to it from the rdf WG
15:08:09 [Zakim]
+Mike_Dean
15:08:10 [jang_scri]
daveb: ok, I'll review bits that seem relevant
15:08:19 [jang_scri]
daveb: hasn't jjc said stuff?
15:08:27 [jang_scri]
bwm: already, yes, think so... on this document?
15:08:32 [jang_scri]
daveb: I recall that's the case
15:08:36 [mdean]
mdean has joined #rdfcore
15:09:01 [jang_scri]
ACTION daveb - liase with jjc to work up a response on schema 1.1 requirements
15:09:08 [JosD]
should be about
15:09:37 [JosD]
RQ23 endorsed
15:09:49 [jang_scri]
daveb: I'll reply to their message immediately once I've absorbed it, give them a date we'll get back to them on.
15:10:02 [jang_scri]
ACTION daveb give immediate response,
15:10:13 [jang_scri]
ACTION daveb liase etc
15:10:27 [mdean]
Mike is here too (phone and IRC) -- sorry I'm late
15:10:44 [jang_scri]
item 7:
15:10:51 [jang_scri]
15:10:55 [jang_scri]
rdf in html
15:10:59 [jang_scri]
dave's responded...
15:11:04 [jang_scri]
some TAG activity on this
15:11:27 [jang_scri]
daveb: this is one of the three threads in tag on multiple-namespaced documents
15:11:49 [jang_scri]
bwm: my reading of the issue is that it's more to do with html than rdf
15:12:10 [jang_scri]
daveb: we've already made the links on this
15:12:31 [jang_scri]
bwm: the html guys also want to add syntax to html that can be used to represent some (subset) of rdf
15:12:42 [jang_scri]
bwm: my initial reaction is that that's good news!
15:12:51 [jang_scri]
em: do people have a view on that?
15:13:05 [jang_scri]
em: I'm quite encouraged by this.
15:13:17 [jang_scri]
stephen and I've had conversations in the past about this
15:13:26 [jang_scri]
one of the big impediments at the moment is deployment
15:13:32 [jang_scri]
eg, legacy editing environemnts
15:13:51 [jang_scri]
it's like ntriples in html
15:13:58 [jang_scri]
it's a really clear way of doing s/p/o
15:14:07 [jang_scri]
but it'd certainly benefit from this group's review
15:14:25 [jang_scri]
daveb, path, jang: are going to look at it.
15:14:39 [JosD]
Zakim, who is here?
15:14:40 [Zakim]
On the phone I see FrankM, PatH, Bwm, Ilrt, AaronSw (muted), EMiller, JosD, Mike_Dean
15:14:41 [Zakim]
Ilrt has Jang, Daveb
15:14:42 [Zakim]
On IRC I see mdean, JosD, jang_scri, RRSAgent, em, bwm, Zakim, AaronSw, DanC, logger
15:14:45 [jang_scri]
it has the potential to bridge between html meta tags and the abstract model that we're defining
15:14:59 [jang_scri]
daveb: I should point stephen at ntriples so that he sees it made concrete
15:15:04 [jang_scri]
there's the ntriples/test.nt file
15:15:11 [jang_scri]
that demonstrates the kind of things we want to say
15:15:32 [jang_scri]
em: he's not proposing using bnodes, for example.
15:15:39 [jang_scri]
path: we MUST look at this then!
15:15:56 [jang_scri]
bwm: at the back of my mind there are a number of questions:
15:16:13 [jang_scri]
is this html specific or is it a more general syntax that other specs might find easier to embed?
15:16:36 [jang_scri]
another question: do they intend to represent arbitrary graphs or just a subset?
15:17:01 [jang_scri]
em: this proposal doesn't exclude the rdf/xml embedding we've already talked about
15:17:01 [AaronSw]
em: this is intermediate point between HTML and RDF
15:17:22 [jang_scri]
bwm: em, what's the best way forward?
15:17:29 [jang_scri]
em: (a) identify reviewers
15:17:49 [jang_scri]
(b) if the group thinks it's important enough, make it a target of the upcoming tech plenary to get the right people in the room
15:18:09 [jang_scri]
path: I'm going to be in cambridge throughout the plenary week, I can be avaiulable for this
15:18:28 [jang_scri]
bwm: also wonder if we should invite stephen to a telecon, since dave can't be there?
15:18:38 [jang_scri]
em: seconded,
15:18:46 [jang_scri]
wonder if that's the right place though
15:18:57 [jang_scri]
or if there's some specific meeting we can arrange to focus on that
15:19:05 [jang_scri]
daveb: use this slot next week instead?
15:19:14 [jang_scri]
"good idea"s all around
15:19:24 [jang_scri]
daveb: can you contact them about this?
15:19:39 [jang_scri]
em: we've been increasingly trying to affect each other's groups on this
15:19:46 [jang_scri]
getting it on the html agenda is the first step
15:19:53 [jang_scri]
I'm happy to keep pushing and pushing hard
15:20:01 [jang_scri]
I'm potentially at risk next week...
15:20:08 [jang_scri]
...but it may be a good opportunity
15:20:14 [jang_scri]
I can see if I can make it happen
15:20:27 [jang_scri]
em: who'd want to attend?
15:20:34 [jang_scri]
(for next week)
15:20:46 [jang_scri]
path: yes, miked, yes, daveb yes
15:20:47 [jang_scri]
jang yes
15:20:55 [jang_scri]
em: ok
15:21:17 [jang_scri]
ACTION em to set up a discussion between stephen and rdfcore , objective to understand each other on the subject of rdf in html
15:21:34 [jang_scri]
bwm: done with rdf in html?
15:22:03 [jang_scri]
item 8; webont update
15:22:29 [jang_scri]
webont are reviewing our documents, generating a lot of discussion...
15:22:35 [jang_scri]
update (pat maybe, mike?)
15:22:42 [jang_scri]
social meaning:
15:22:46 [jang_scri]
pat verbally blanches
15:23:05 [jang_scri]
bwm: after a general "where webont are on reviewing our specs"
15:23:21 [jang_scri]
path: the difficulty is more that webont's not sure what to say, agreement internally on that
15:23:30 [jang_scri]
sociual meaning is the one that's causing the most debate
15:23:43 [jang_scri]
most of the recent discussions don't appear to impinge on rdf
15:23:48 [jang_scri]
there's an rdfs:comment issue
15:23:54 [jang_scri]
because they don't want comments to be assertions
15:24:08 [jang_scri]
JosD: annotations in general, not just comments
15:24:14 [jang_scri]
there's a rather plsit issue here
15:24:17 [jang_scri]
split, even
15:24:32 [jang_scri]
I'm anxiously awaiting a test case that jjc is producing...
15:24:49 [jang_scri]
path: in the weakened form that pfps' got it to, it shouldn't impact rdf at the moment
15:25:16 [jang_scri]
path: the simpler owl languages don't have a good fit for classes of things that apply to individuals
15:25:23 [jang_scri]
em: can you send me a link to that thread?
15:25:43 [jang_scri]
JosD: not a particular thread, it's tied around everywhere
15:25:57 [jang_scri]
em: i just don't want to take rdfcore time on this, I want to read up on it first
15:26:05 [jang_scri]
JosD: I'll post a link to a good summary message now...
15:26:17 [jang_scri]
daveb: I read the webont logs fairly regularly...
15:26:30 [jang_scri]
there are some things that I'm not sure of, eg
15:26:34 [jang_scri]
1. why rdfs:class and owl: class
15:26:44 [jang_scri]
and 2. why ban some rdf terms from owl?
15:26:52 [jang_scri]
path: owl lite ban, maybe, not owl full
15:27:07 [jang_scri]
daveb: I'm still not very happy with the owl three languages thing
15:27:09 [JosD]
Pat's webont message
15:27:13 [jang_scri]
cheers jos
15:27:25 [JosD]
:-)
15:27:30 [em]
i also note -
re OWL and RDF schema relationshop
15:27:40 [em]
thanks JosD for the pointer
15:27:58 [jang_scri]
bwm: where do people step up from rdfs?
15:28:03 [jang_scri]
not owl light....
15:28:07 [JosD]
right eric
15:28:22 [jang_scri]
path: owl's got a bunch of clean sublanguages
15:28:34 [jang_scri]
path: the other view is that owl full is just a large extension of rdf
15:28:40 [jang_scri]
which you then constrain to get the full languages
15:28:52 [jang_scri]
it depends on whether you see the smaller languages coming first, or last
15:29:23 [jang_scri]
daveb: I see rdf, I see "sameindividual as", I'd like to use that. which owl am I using?
15:29:31 [jang_scri]
path: safe option is to assume you're using owl full
15:30:07 [jang_scri]
path: the other thing you ight find direct feedback on is about xmlliteral, which they really don't like
15:30:14 [jang_scri]
jjc could say more about that
15:30:18 [jang_scri]
JosD: much more, i'd guess(!)
15:30:21 [jang_scri]
moving on
15:30:29 [jang_scri]
item 9: actions from last week.
15:31:20 [jang_scri]
bwm: summarising,
15:31:24 [jang_scri]
concepts defines a triple
15:31:36 [jang_scri]
the subject of a triple is a rdf uriref
15:31:57 [jang_scri]
there was at one point some language in schema, primer that didn't conform to that
15:32:30 [jang_scri]
ACTION daveb: same action as 2003-02-07#3
15:32:44 [jang_scri]
frankm: there's another component of this:
15:32:57 [jang_scri]
the corresponding s/p/o vocab applied to statements
15:33:33 [jang_scri]
bwm: think danbri's got an action to check this
15:33:56 [jang_scri]
path: one way to deal with this is "syntactic object", "semantic subject", etc.
15:34:06 [jang_scri]
daveb: argh! please no
15:34:21 [jang_scri]
... a triple has three parts, called what: nodes? arcs?
15:34:35 [jang_scri]
path: no, they're sets of triples
15:35:18 [jang_scri]
bwm: the key thing is making sure that the subject of a triple is a uiriref
15:35:20 [jang_scri]
not a resource
15:35:27 [jang_scri]
ACTION bwm: check a resource
15:35:35 [jang_scri]
daveb: taking out the word "labelled" - I've done this already
15:35:45 [jang_scri]
daveb's action done.
15:35:54 [jang_scri]
moving on
15:36:04 [jang_scri]
10. format of references in documents
15:36:31 [jang_scri]
frank: in december we agreed the format of references
15:36:53 [jang_scri]
what's in syntax (which jjc proposed we used) doesn't match what I thought we agreed on
15:37:00 [jang_scri]
there's a mixture across these documents
15:37:15 [jang_scri]
frank: let's all agree on one thing, please
15:37:31 [jang_scri]
daveb: syntax wasn't consistent with what we agreed. Think we had a japanese name that didn't fit
15:37:58 [jang_scri]
frankm: I can change the primer to agree with everyone else, but I think we should agree.
15:38:07 [jang_scri]
path: I've changed at least twice.
15:38:12 [jang_scri]
bwm: I'll pick one: what we said before
15:38:24 [jang_scri]
it's not mandatory if the other docs change
15:38:31 [jang_scri]
but it's low down the list of the things we have to do.
15:38:57 [jang_scri]
please conform to the pattern in primer, semantics, if you DO tidy these up
15:39:12 [jang_scri]
em: for all people putting links into documents, please point into the DATED documents
15:39:26 [jang_scri]
a lot of people were putting pointers into the "latest" documetns
15:40:00 [em]
- /tr/rdf-primer
15:40:05 [AaronSw]
if you link to /TR/rdf-concepts/#foo then that might break when #foo becomes #5-foo
15:40:14 [jang_scri]
ACTION em send a followup email on this
15:40:22 [jang_scri]
moving on#
15:40:29 [jang_scri]
11 responses to comments
15:41:06 [jang_scri]
there are quite a few comments languishing there with no responses
15:41:16 [jang_scri]
primer, syntax, semantics ok
15:41:35 [jang_scri]
LCComments end next week, be good to be on top of things at that time
15:41:59 [jang_scri]
frank: the problem isn't in rapidly responding; it's the content of those comments wrt our agreed procedure
15:42:15 [jang_scri]
frank: keeping the ball rolling...
15:42:23 [jang_scri]
bwm: is ok. it's the ones that sit there I'm worried about.
15:43:31 [jang_scri]
item 12:
15:43:36 [jang_scri]
schedule for processing LC comments
15:43:42 [jang_scri]
everyone had a chance to look?
15:43:54 [jang_scri]
15:45:43 [jang_scri]
bwm: goes through his schedule
15:45:52 [jang_scri]
JosD: I'd guess plan for CR,
15:46:17 [jang_scri]
bwm: I need to put together a message proposing to go to PR
15:46:20 [jang_scri]
JosD: "plan a", ok
15:46:31 [jang_scri]
bwm: extra telecons?
15:46:37 [jang_scri]
path: I'm up for it, apart from 18th
15:46:40 [jang_scri]
daveb:@ yes
15:46:45 [jang_scri]
jang_scri: ok
15:46:48 [jang_scri]
frank: ok
15:47:18 [jang_scri]
11, 14 are ok.
15:47:42 [jang_scri]
bwm: the 18th...
15:47:48 [jang_scri]
path: all dates next to that are out
15:48:07 [jang_scri]
bwm: suggest we schedule 18th and avoid path's needed to being there if possible
15:49:08 [jang_scri]
(times an hour later on tuesdays ok for everyone)
15:49:14 [jang_scri]
two hours on the 21st... ok
15:49:19 [jang_scri]
two hours on the 28th:...?
15:49:32 [jang_scri]
path: iffy for me, probably ok, but maybe not network access.
15:49:39 [jang_scri]
bwm: let's schedult ie, see how it goes.
15:50:04 [jang_scri]
bwm: folks happy with that plan then?
15:50:28 [jang_scri]
jang_scri: can't make 28th feb, alas
15:50:42 [jang_scri]
bwm: don't have either concepts editors, crucial we have their agreement
15:50:49 [jang_scri]
but at least for now that's the plan
15:51:05 [jang_scri]
ACTION bwm update schedule page to reflect our current plan
15:51:20 [jang_scri]
daveb: danb, danc be nice if they're there
15:51:52 [jang_scri]
DanC: availability for lc comment review currently being discussed
15:51:55 [jang_scri]
are you?
15:52:06 [DanC]
want me to dial in?
15:52:22 [jang_scri]
nah, we'll take to email
15:52:26 [jang_scri]
^^^ bwm
15:52:30 [DanC]
k
15:52:31 [jang_scri]
item 12 done,
15:52:35 [jang_scri]
any aob?
15:52:46 [jang_scri]
frank: eric, s+w has announced new 50-cal,
15:52:51 [jang_scri]
if you need a "persuader"
15:53:15 [jang_scri]
done, cheers, folks...
15:53:16 [jang_scri]
ooh
15:53:21 [jang_scri]
daveb: next weeks meeting
15:53:24 [jang_scri]
heh
15:53:34 [jang_scri]
em: will announce schedule for next week if there is one
15:53:35 [jang_scri]
done
15:53:35 [Zakim]
-JosD
15:53:37 [Zakim]
-Bwm
15:53:37 [Zakim]
-PatH
15:53:37 [Zakim]
-Ilrt
15:53:38 [Zakim]
-Mike_Dean
15:53:39 [Zakim]
-FrankM
15:53:44 [Zakim]
-AaronSw
15:53:52 [Zakim]
-EMiller
15:53:52 [Zakim]
SW_RDFCore()10:00AM has ended
15:54:44 [em]
RRSAgent, pointer?
15:54:44 [RRSAgent]
See
16:05:04 [AaronSw]
AaronSw has left #rdfcore
16:17:07 [em]
em has left #rdfcore
17:45:13 [Zakim]
Zakim has left #rdfcore
17:53:19 [DanC]
DanC has left #rdfcore | http://www.w3.org/2003/02/14-rdfcore-irc | CC-MAIN-2016-36 | refinedweb | 3,162 | 67.62 |
> On Oct 13, 2016, at 12:47 PM, Tom Prince <tom.pri...@ualberta.net> wrote:
>
> > This applies more generally; no need for any weird hacks. Any 'new' plugin
> > could just opt in to a different syntax; we can just look up until the
> > first ':'; we just need to define a new interface for a new syntax.
>
> I don't think that this provides a good user experience.
>
> 1) There are existing endpoints that want nestable endpoints, so either
>    a) They don't change, somewhat defeating the purpose of having a new
>       syntax (or cluttering the endpoint namespace with less than useful
>       endpoints).
We already have this problem, and we will need to do a doc cleanup /
consolidation / deprecation pass soon. (see: tcp, tcp6, host, ssl, tls...)

>    b) They change incompatibly, defeating the purpose of trying to
>       maintain backwards compatibility.

As you've noticed, we may have several potential "outs" to have
practically-compatible parsing syntaxes; the real problem is the internal
factoring of the parsing APIs rather than the syntax.

> 2) As a user, I need to learn which endpoints support the new syntax, thus
>    potentially needing to know both methods of quoting and switch between
>    them as appropriate.

As a user you're going to need to read the parameter documentation anyway;
learning about new syntax is not much different than learning about a new
parameter. And you may not realize there _is_ a syntax; most configuration
of this type is just copying and pasting a reasonable-looking example. Not
to say that we should be spuriously incompatible for those who have learned
the rules, but the only rule to learn at this point is ": separates
arguments, \ escapes :". We could add one more rule without unduly
stressing the cognitive burden of the endpoint system.

> There are a couple of possible ways around this, without requiring a weird
> hack.
> - I wonder how many endpoint strings have ever been written whose value
>   starts with any of `[` `(` or `{`? I suspect that the number might in
>   fact be 0. In which case, although the change is technically
>   incompatible, in practice it wouldn't be.
> - Alternatively, we could deprecate an unquoted [, ( or { at the beginning
>   of a value, and then after a suitable deprecation period (perhaps
>   additionally a release where it is just an error), we could repurpose
>   one of them to act as quoting (leaving the other two for future
>   extensibility).

I suspect that this would be overkill here; we also have other options,
like '(: :)', which would be totally compatible (there are no _arguments_
anywhere presently named "(").
-g

_______________________________________________
Twisted-Python mailing list
Twisted-Python@twistedmatrix.com
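The quoting rule discussed in the thread (": separates arguments, \ escapes :") can be sketched in a few lines of Python. This is an illustrative parser only, not Twisted's actual implementation; the function name and everything beyond that one rule are assumptions:

```python
def split_endpoint(description):
    """Split an endpoint description on ':', honoring '\\' escapes."""
    args, current, chars = [], [], iter(description)
    for ch in chars:
        if ch == "\\":
            # An escaped character is taken literally, including ':'.
            current.append(next(chars, ""))
        elif ch == ":":
            args.append("".join(current))
            current = []
        else:
            current.append(ch)
    args.append("".join(current))
    return args

print(split_endpoint("tcp:8080:interface=127.0.0.1"))
# -> ['tcp', '8080', 'interface=127.0.0.1']
print(split_endpoint(r"unix:/var/run/my\:socket"))
# -> ['unix', '/var/run/my:socket']
```

In real code the parsed segments would then be dispatched to a plugin named by the first argument, which is where the nesting question above comes in.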
Ngl.chiinv
Evaluates the inverse chi-squared distribution function.
Available in version 1.3.0 or later.
Prototype
x = Ngl.chiinv(p,df)
Arguments

p
A multi-dimensional array or scalar value equal to the integral of the chi-square distribution. [0<p<1]

df
A multi-dimensional array of the same size as p equal to the degrees of freedom of the chi-square distribution. (0, +infinity)
Return value

x
A multi-dimensional array of the same size as p.
Description
This function evaluates the inverse chi-squared distribution function by calculating the upper integration of the non-central chi-square distribution. This gives the same answers as IMSL's "chiin" function.
Examples
The following:
import Ngl

p  = 0.99
df = 2.
x  = Ngl.chiinv(p,df)
print "p =",p,"df =",df,"x=",x

df = 64.
x  = Ngl.chiinv(p,df)
print "p =",p,"df =",df,"x=",x

produces:
p = 0.99 df = 2.0 x= [ 9.21034037]
p = 0.99 df = 64.0 x= [ 93.21685966]
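As a quick sanity check on the df = 2 row (independent of PyNGL): with two degrees of freedom the chi-square CDF reduces to 1 - exp(-x/2), so its inverse has the closed form x = -2·ln(1 - p). This only cross-checks the tabulated value; it is not how Ngl.chiinv computes the general case:

```python
import math

def chiinv_df2(p):
    """Inverse chi-square CDF for the special case df = 2."""
    return -2.0 * math.log(1.0 - p)

print(chiinv_df2(0.99))  # approximately 9.21034, matching the table above
```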
Hello Josiah,

>> i = 0
>> while i != 1:
>>     i += 1
>>     j = 5
>> print j

JC> Maybe you don't realize this, but C's while also 'leaks' internal
JC> variables...
JC>
JC> int i = 0, j;
JC> while (i != 1) {
JC>     i++;
JC>     j = 5;
JC> }
JC> printf("%i %i\n", i, j);

Yeah, it may *leak* it in your example. But the advantage is that it may
*not leak* as well:

for (int i = 0; i <= 5; i++) {
    int j = 5;
}
printf("%i\n", j); // Won't even be compiled

Or in our "while" case:

int i = 0;
while (i != 1) {
    i++;
    int j = 5;
}
printf("%i %i\n", i, j);

JC> If you haven't yet found a good use for such 'leakage', you should spend
JC> more time programming and less time talking; you would find (quite
JC> readily) that such 'leaking' is quite beneficial.

I see such usages and realize the possible gains from them. But over about
five years of software development in a bunch of programming languages,
I've found good uses of non-leaking variables *as well*. Up to the level
of manually "enclosing" the temporary variables used only once:

...
int i = input();
int result;
{
    int j = f1(i);
    int k = f2(i, j);
    result = f3(i, j, k);
} // Now j and k are dead
...

>> accustomed from other languages. That's my sorrowful story.

JC> So you mistyped something. I'm crying for you, really I am.

Yeah, I did. I just forgot to remove the usage / rename a variable never
initialized (I thought) before. And my error was that I relied upon the
interpreter to catch it, mistakenly assuming the locality of variables
inside loops as a consistency with other common languages. Mea culpa,
without any jeers. The only reason I wrote my email was to get an
understanding of the community's assessment of such behaviour, and/or
probable workarounds. No blame against Python - it's great even with this
unusualness.

>> But for the "performance-oriented/human-friendliness" factor, Python
>> is anyway not a rival to C and similar lowlevellers. C has
>> pseudo-namespaces, though.
JC> C does not have pseudo-namespaces or variable encapsulation in for loops.

Neither

for (int i = 0; i <= 5; i++) {
    int j = 5;
}
printf("%i\n", i);

nor

for (int i = 0; i <= 5; i++) {
    int j = 5;
}
printf("%i\n", j);

gets compiled with "gcc -std=c99". That is, each loop introduces a new
scoping level.

JC> Ah hah hah! Look ladies and gentlemen, I caught myself a troll! Python
JC> does not rival C in the performance/friendliness realm? Who are you
JC> trying to kid? There is a reason why high school teachers are teaching
JC> kids Python instead of Pascal, Java, etc., it's because it is easier to
JC> learn and use.

Ohh, my slip got me considered a troll... I meant only that Python does
not rival C in such areas as "performance by all means, even with a
decrease of friendliness" (but the overall performance is indeed good;
I've even seen a VoIP solution written purely in Python). And I believe
none of the Python developers want it to be such a rival.

JC> What you are proposing both would reduce speed and usability, which
JC> suggests that it wasn't a good idea in the first place.

Yes, it lacks performance, but I still believe that an opportunity to
close the visibility of variables (*opportunity* rather than *behaviour
change*!) is better in terms of "friendliness" than the lack of it. Just
because it offers the alternatives. Because C-style closed visibility can
emulate the variable leak, but not vice versa. At least one voice here
(that of Gareth McCaughan) is not strictly against it. So I cannot agree
that such an *option* can reduce usability.

>> "for (int i = 0; i < 10; i++)" works fine nowadays.

JC> I'm sorry, but you are wrong. The C99 spec states that you must define
JC> the type of i before using it in the loop. Maybe you are thinking of
JC> C++, which allows such things.

"gcc -std=c99" compiles the line "for (int i = 0; i <= 5; i++);"
perfectly, along with the other samples mentioned above. To this day, I
have relied upon gcc in terms of standards compatibility...
JC> Test your ideas on comp.lang.python first; when more than a handful of
JC> people agree with you, come back.

Ok. Next time I shall be more careful. Sorry again, and thanks for your
time. And thanks for the more-or-less patient answers.

--
With best regards,
Alexander
mailto:maa_public at sinn.ru
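The Python behaviour being debated in this thread — loop variables and loop-body temporaries surviving past the loop, because Python has no block scope — can be demonstrated in a few lines (the function name here is mine, purely for illustration):

```python
def leaky():
    for i in range(3):
        j = i * 10
    # Both the loop variable and the body's "temporary" survive the loop:
    return i, j

print(leaky())  # -> (2, 20)
```

The usual workarounds are the ones hinted at in the thread: wrap the block in a function so the temporaries die with its scope, or explicitly `del` them when done.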
I wrote an answer below. Check it out.
When you want to use OpenCV or any other native libraries on Play framework, you MUST run your application with the "play start" command, not "play run".
The "play run" command starts your application in development mode, and "play start" starts it in production mode.
I don't know every difference between them, but one obvious thing is this:
only when we use "play start" is a new JVM launched for your application, and it loads the native libraries you specified via System.load("/absolute/path/to/your/so/or/jnilib/inOSX/not/dylib/filename.jnilib");
How to load a native lib is as follows.
Create Global.java with an empty package name. (See this link.)
public class Global extends GlobalSettings {
@Override
public void beforeStart(Application app) {
// TODO Auto-generated method stub
super.beforeStart(app);
String libopencv_java = "/Users/yoonjechoi/git/myFirstApp/target/native_libraries/64bits/libopencv_java246.jnilib";
System.load(libopencv_java);
}
}
Then you can use OpenCV classes in your Play application's controllers.
System.loadLibrary("opencv_java246") doesn't work. I don't know why.
I don't have time to dig into why. -_-;
Please give hints if you know why.
I solved the problem.
For those who use an OS X version other than v10.5 with Apple's JDK (maybe JDK 6), not Oracle's JDK:
If you get an UnsatisfiedLinkError with JNI, CHANGE your native library file extension to ".jnilib".
Even though the java.library.path property is set properly, the JVM may not load a native library if its file extension is not ".jnilib" (such as ".dylib").
According to apple's doc on
."
Maybe on v10.5, ".dylib" works. But in my case on v10.8.4, Mountain Lion, ".dylib" doesn't work.
If you get a problem during Java development on Mac, FIND documentation on developer.apple.com as well as Oracle's docs. Don't forget that you may be using Apple's JDK!!
[UPDATE] This solution is only for Glassfish v3. It doesn't work for the Play framework — I wrote it after experimenting on Glassfish. I'm sorry to have confused you. But I found a way for the Play framework too, and I'm going to write about it right now~
Dear guys
I got very similar problem with play framework.
I posted this, .
ggl, was it a classloader-related problem?
In my case, I can hardly find a clue about where to start, because the other JNI libraries work well but OpenCV doesn't.
Is it a symbol-table problem from compilation, or a classloader-related problem?
Thanks in advance.
I came across the above-stated problem when using Scrapy's FifoDiskQueue. In windows, FifoDiskQueue will cause directories and files to be created by one file descriptor and consumed by another file descriptor.
Sometimes I'll randomly get error messages like this one:
2015-08-25 18:51:30 [scrapy] INFO: Error while handling downloader output
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\twisted\internet\defer.py", line 588, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "C:\Python27\lib\site-packages\scrapy\core\engine.py", line 154, in _handle_downloader_output
self.crawl(response, spider)
File "C:\Python27\lib\site-packages\scrapy\core\engine.py", line 182, in crawl
self.schedule(request, spider)
File "C:\Python27\lib\site-packages\scrapy\core\engine.py", line 188, in schedule
if not self.slot.scheduler.enqueue_request(request):
File "C:\Python27\lib\site-packages\scrapy\core\scheduler.py", line 54, in enqueue_request
dqok = self._dqpush(request)
File "C:\Python27\lib\site-packages\scrapy\core\scheduler.py", line 83, in _dqpush
self.dqs.push(reqd, -request.priority)
File "C:\Python27\lib\site-packages\queuelib\pqueue.py", line 33, in push
self.queues[priority] = self.qfactory(priority)
File "C:\Python27\lib\site-packages\scrapy\core\scheduler.py", line 106, in _newdq
return self.dqclass(join(self.dqdir, 'p%s' % priority))
File "C:\Python27\lib\site-packages\queuelib\queue.py", line 43, in __init__
os.makedirs(path)
File "C:\Python27\lib\os.py", line 157, in makedirs
mkdir(name, mode)
WindowsError: [Error 5] : './sogou_job\\requests.queue\\p-50'
What I learned after a little research is that Error 5 means access is denied. Many explanations on the web attribute this to lacking administrative rights, like this MSDN post, but the reason is not related to access rights: when I run the scrapy crawl command as an Administrator on the command prompt, the problem still occurs.
I have also created a small test to try on windows and linux:
import os
import shutil
import time
for i in range(1000):
somedir = "testingdir"
try:
os.makedirs(somedir)
with open(os.path.join(somedir, "testing.txt"), 'w') as out:
out.write("Oh no")
shutil.rmtree(somedir)
except WindowsError as e:
print 'round', i, e
time.sleep(0.1)
raise
And the output of the above test code is as follows:
round 13 [Error 5] : 'testingdir'
Traceback (most recent call last):
File "E:\FHT360\FHT360_Mobile\Source\keywordranks\test.py", line 10, in <module>
os.makedirs(somedir)
File "C:\Users\yj\Anaconda\lib\os.py", line 157, in makedirs
mkdir(name, mode)
WindowsError: [Error 5] : 'testingdir'
The round is different every time. So if I remove the raise in the end, I will get something like this:
round 5 [Error 5] : 'testingdir'
round 67 [Error 5] : 'testingdir'
round 589 [Error 5] : 'testingdir'
round 875 [Error 5] : 'testingdir'
It simply fails randomly, with a small probability, ONLY on Windows. I tried this test script under Cygwin and Linux, and the error never happens there. I also tried the same code on another Windows machine, and it occurs there too.
Here's the short answer:
disable any antivirus or document indexing or at least configure them not to scan your working directory.
Long answer: you can spend months trying to fix this kind of problem. So far, the only workaround that does not involve disabling the antivirus is to assume that you will not be able to remove all files or directories.
Assume this in your code: try to use a different root subdirectory each time the service starts, clean up the older ones, and ignore the removal failures.
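As a sketch of that strategy — retry removals a few times to ride out a scanner's short-lived lock, and fall back to a fresh subdirectory when a directory cannot be reused — something like the following works (illustrative Python; the directory names are hypothetical):

```python
import errno
import os
import shutil
import time

def rmtree_with_retry(path, attempts=5, delay=0.2):
    """Best-effort removal: retry transient OSError/WindowsError
    (antivirus or indexers briefly holding handles), then give up quietly."""
    for i in range(attempts):
        try:
            shutil.rmtree(path)
            return True
        except OSError:
            time.sleep(delay * (i + 1))  # linear back-off before retrying
    return False  # caller carries on with a fresh directory instead

def fresh_workdir(root="queues"):
    """Pick a new, unused subdirectory instead of insisting on reusing one."""
    i = 0
    while True:
        candidate = os.path.join(root, "run-%d" % i)
        try:
            os.makedirs(candidate)
            return candidate
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
            i += 1  # name taken (e.g. a failed earlier cleanup); try the next

if __name__ == "__main__":
    d = fresh_workdir()
    print(d)
    rmtree_with_retry(d)
    shutil.rmtree("queues", ignore_errors=True)
```

The point is not that the retry always succeeds, but that a failed removal no longer aborts the run: the service simply starts in `run-1` instead of `run-0`.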
Authenticating a OneDrive Account
It seems, these days, that everyone is going "Cloud Crazy." I mean, really, what is the big deal about the cloud?
Well, okay, it's massively accessible and synced across global data centres around the world; but, outside of that, there's nothing massively different about cloud versus a traditional hosting account on a standard server. You could, if you wanted to, set up your own servers in strategic locations around the world, develop your own software to keep them in sync, and then just run standard server operating systems on them, and it would probably work out a lot cheaper.
Before we get sidetracked, however, the one big benefit of globally shared cloud storage is that it's globally shared. The key part here is "Shared."
In today's ever-connected world, we rely more and more on being able to access all of our data on all of our devices, and this is where global shared storage really shines. The problem is, the powers that be (and by that, I mean the marketers and cloud software sales people) would have you believe that you have to be using a certain operating system, running on a specific device that uses special cloud-based software to share your files in this manner.
Case in point: Windows 8 Store apps :-)
If you read any of the books out there, or many of the blog posts floating around, it's all about Windows RT, Windows 8, Store Apps, and all of this stuff that's now being aligned with the new Microsoft "One Vision."
So, What's This Got to Do with OneDrive?
Well, first off, as you already know, OneDrive is Microsoft's entry into the global cloud storage market. What's perhaps not so widely published (there are some articles, but most of them are quite difficult to find) is the fact that you actually can use OneDrive (and many other Live APIs) from within more traditional .NET apps, such as ASP.NET MVC, Win-form, and WPF-based projects.
What I want to show you in this article, and the next, is how you use the LiveSDK rest-based interface to interact with a standard Windows Forms desktop-based application. The code I show, however, can be very easily adapted to other application scenarios very easily.
We'll do four things. In this first part, I'll:
- Authenticate the user to OneDrive
- Get a file listing of available files
Then, in the next article, I'll show how to use that authentication to:
- Upload a file
- Download the file we uploaded
All set? Great, let's get started.
Authenticating Your Application
Like any of the popular frameworks these days, you first need to generate yourself an application key. To do this, you need an MS-Live/MSDN account. Having a Microsoft account should not be a challenge for most Windows developers. If you have a Hotmail/Outlook email account, have (or had) an MS-Messenger account, or have a Skype account, you'll most likely (even though you may not realise it) have a suitable account to use.
I don't have space in this article to go through all the possible variations on getting/signing up for an account, so I'm afraid I'll have to leave that one up to you. Once you have an account, however, the first thing you need to do is browse to:
and sign in (If you're not already).
Once you're on the main OneDrive developers page, you need to click the menu option marked 'Dashboard.' This should take you to your "Microsoft Account Developer Centre", which should look something like this:
Figure 1: The "Microsoft Account Developer Centre"
I already have an application key defined, which, as you can see, is listed on my front page already, Yours, if it's the first time you've used it, will likely be empty. To create a new key, click 'Create Application,' and then fill in the name of your application on the first tab that appears. Click the 'I accept' button to continue.
You're allowed to create up to 100 application keys, the last time I checked. However, as is always the case with these things, if anything is unclear, check it! (It wouldn't be the first time I've breached Ts&Cs without realising.)
Once you've accepted, and moved on to the next page, you should see something like:
Figure 2: After you've been accepted
Most of what you see in the following dialogs is aimed at web apps, but because we're going to be using win-forms, we don't need to be bothered about most of it. Click the 'API Settings' option, and set 'Mobile or Desktop client app' to 'Yes;' for a desktop app, you MUST make sure that you leave the 'Redirect URL' empty. This URL is used during the OAuth2 stage of interacting with the service. In a desktop app, because we have no public endpoint, the OneDrive service has to be told to use a special internal endpoint. We do this by leaving the redirect URL blank, so that it knows to choose its own.
Click save, and then move to the 'App Settings' page. You should now see something like:
Figure 3: The 'App Settings' page
And yes, for those of you who are wondering, I will be deleting this app entry after I finish writing this article!
Make a note of the 'ClientID.' This is the item you'll need to make use of in just a moment. Most of the other stuff is self explanatory, but it doesn't need to be changed to use this account for our demo, so feel free to explore if you want, but for the next part you'll need to fire up Visual Studio.
Getting Down with Some Code
Okay, so now you have a client ID. The next step is to make a start on your project. Hopefully, you've done this many times before, so I'm simply going to tell you to start a standard .NET 4.5 windows forms project. I'll be using VS2013 Enterprise for these articles, but I'll not be doing anything that can't be done with the free/express versions of the same products.
Once you've created your win-forms project, the first thing you need to do is use NuGet to get the required libs for interfacing to the OneDrive API. Perform your usual method of using NuGet and search for the "LiveSDK" package. As of this writing, the current version is 5.6, and it'll be a minimum of this version you need to follow along with this article.
The NuGet page for the libs can be found here:
Once you've installed the NuGet package, you should have everything you need set up and ready to roll.
Getting Authentication from the User
Before you can do anything with the SDK, you first have to get permission from the user of the application to access their OneDrive account. This shouldn't come as a surprise; it's a standard practice these days.
Request Authentication, pass in your token, get a response back, save that response, and then use that response forever until the user either chooses to re-authenticate or removes permission for the application to access their account.
Unfortunately, it's not quite that simple with OneDrive. The process is still the same, but you can't use the returned token indefinitely. In fact, I've found while writing this article, by default the timeout on any token only seems to be approximately one hour.
We'll come back to this in just a moment. For now, however, add a new form to your project, so that you have a 'Form1' and 'Form2' available.
Set 'Form2's properties to have a 'FormBorderStyle' of 'FixedToolWindow' and 'StartPosition' of 'CenterScreen' and then size it to a suitable size for a web dialog, approximately 500x600 pixels.
Once you've got your form set up, from your toolbox, drop a standard web browser control onto your window and set its 'Dock' property to 'Fill'. You should have something that resembles the following:
Figure 4: Dropping a standard web browser control onto your window
Once you've gotten your form set up, you then need to add a bit of code to make the authentication happen. Because we also need access to the authentication token, we're also going to add a custom property to the form class so that it's easier to access.
We need to add code to the form's constructor to browse to the Microsoft Accounts authorization page. We do this by navigating to the start page in our constructor, and then we attach a document-ready handler, which in turn waits until the authentication process has completed.
At this point, we then extract the supplied access token, make it available to the rest of the application, and dismiss the dialog. It's all quite basic stuff, so rather than do a line by line, I'll just present the full form class in just a moment. First, however, we need to discuss the URL used in the sign-in process.
When you want to sign in to a Microsoft Account, you need to provide two items of information in the request. The first is the ClientID you created previously, using your developer account. The second is a list of scopes describing the access your application desires.
A full list of possible scopes can be found here:
For our purposes, however, we only require:
'wl.skydrive_update'
You'll see in the code that we define these as constants in the form class, along with the client ID. Then, we assemble them into the URL:
This is the endpoint you need to call to kick start the authorization process. You can see the full URL and required parameters in the following code. Once we call this endpoint, the user should be presented with an authorization form that looks similar to this:
Figure 5: The sign-in screen
Once you fill in the form and go through the authentication process, the form should then close and we should, at that point, have an access token available that allows us to make use of the various calls that the OneDrive API has available.
The full code for the Web Browser form is as follows:
using System;
using System.Windows.Forms;

namespace Onedrive
{
    public partial class FrmWebBrowser : Form
    {
        public string AccessToken { get { return _accessToken; } }

        private string _accessToken = string.Empty;
        private const string _scope = "wl.skydrive_update";

        // Remember to change this to your ClientID; this one will be
        // invalid by the time this is published
        private const string _clientID = "00000000481275D0";

        private const string _signInUrl = @" oauth20_authorize.srf?client_id={0}&redirect_uri= token&scope={1}";

        private Timer _closeTimer;

        public FrmWebBrowser()
        {
            InitializeComponent();
            StartAuthenticationProcess();
        }

        private void StartAuthenticationProcess()
        {
            AuthenticationBrowser.DocumentCompleted += AuthenticationBrowserDocumentCompleted;
            AuthenticationBrowser.Navigate(string.Format(_signInUrl, _clientID, _scope));
        }

        void AuthenticationBrowserDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
        {
            if (e.Url.AbsoluteUri.Contains("#access_token="))
            {
                var x = e.Url.AbsoluteUri.Split(new[] { "#access_token" }, StringSplitOptions.None);
                _accessToken = x[1].Split(new[] { '&' })[0];
                _closeTimer = new Timer { Interval = 500 };
                _closeTimer.Tick += CloseTimerTick;
                _closeTimer.Enabled = true;
            }
        }

        private void CloseTimerTick(object sender, EventArgs e)
        {
            _closeTimer.Enabled = false;
            DialogResult = DialogResult.OK;
            Close();
        }
    }
}
We should now be able to instantiate an object of our web browser form when needed, and upon showing it, we should automatically get passed to the authentication process. However, because the access token we get is good for at least an hour, we should ideally have some kind of caching strategy. This prevents us having to ask the user to sign in for every call we make. In reality, what you might want to do is write the access token to a temp file in the applications folder. Doing this would allow you to actually exit the application, and, as long as you ran it again within the hour, you should still be okay. In this case, you would likely want to check for the existence of the file as you start up. If the file exists, you could check the creation date, see how long ago it was, and, if less than an hour, read and use the access token stored within.
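The temp-file variant described above is language-independent; as a rough illustration (Python here for brevity — the cache file name and the one-hour TTL are assumptions taken from the text):

```python
import json
import os
import time

CACHE = "access_token.json"   # hypothetical cache file in the app's folder
TTL = 60 * 60                 # roughly one hour, as observed in the article

def save_token(token, path=CACHE):
    # Store the token alongside its creation time
    with open(path, "w") as f:
        json.dump({"token": token, "at": time.time()}, f)

def load_token(path=CACHE, ttl=TTL):
    """Return a cached token if it is younger than ttl seconds, else None."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        data = json.load(f)
    if time.time() - data["at"] >= ttl:
        return None  # stale: the caller should re-authenticate
    return data["token"]
```

A `None` return simply means "show the sign-in dialog again", exactly as the in-memory version below does.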
For our purposes, however, we're just going to simply keep the string in memory, with a time stamp of its generation time. Then, every time we want to use it, check this time and, if more than an hour has passed, re-request authentication using our auth process.
To assist with this, add a new class to your application, called 'AccessToken.cs', and add the following code to it:
using System;

namespace Onedrive
{
    public class AccessToken
    {
        public string Token
        {
            get { return _token; }
            set
            {
                _token = value;
                _generationTime = DateTime.Now;
            }
        }

        public bool IsValid
        {
            get
            {
                if (string.IsNullOrEmpty(_token)) return false;
                return (DateTime.Now - _generationTime).TotalMinutes < 60;
            }
        }

        private string _token = string.Empty;
        private DateTime _generationTime;
    }
}
We now can make use of this new class in our main 'Form1' class by adding the following method:
private string GetAccessToken()
{
    if (myAccessToken != null && myAccessToken.IsValid)
    {
        return myAccessToken.Token;
    }

    using (FrmWebBrowser authBrowser = new FrmWebBrowser())
    {
        if (authBrowser.ShowDialog() != DialogResult.OK)
            return string.Empty;

        myAccessToken = new AccessToken();
        myAccessToken.Token = authBrowser.AccessToken;
        return myAccessToken.Token;
    }
}
Remember, also, to add a class private variable:
private AccessToken myAccessToken;
to store the retrieved access token.
This wraps everything up neatly so that whenever we call GetAccessToken, we'll automatically go through the authentication process if we don't have a valid token or our token has timed out; if the token is still valid, we'll simply return the one we have.
What's Next?
Now that we can authenticate, and we have an access token, we now can use that token to access the user's OneDrive. To finish off this article, I'll show you how to get a listing of files and folders in the root of the logged-in user's drive. Just as with the authentication, to do anything using the live API, we need to use HTTP-based restful calls. The good news is, because we don't require any user interaction any longer, we can do these using the regular web client.
The first thing we need to do is to get the basic information available about the user's OneDrive. We do that by making a get request to
Onto the end of this URL, we need to add a parameter called 'access_token' that contains the access token we obtained previously, using the authentication process. The results from this call will be a snippet of JSON containing various details about the OneDrive you accessed. That will look like the following:
{
  "id": "folder.4515677xxxxxxxxx",
  "from": { "name": null, "id": null },
  "name": "SkyDrive",
  "description": "",
  "parent_id": null,
  "size": 9514852942,
  "upload_location": "",
  "comments_count": 0,
  "comments_enabled": false,
  "is_embeddable": false,
  "count": 8,
  "link": "",
  "type": "folder",
  "shared_with": { "access": "Just me" },
  "created_time": null,
  "updated_time": "2014-08-28T11:46:03+0000",
  "client_updated_time": "2013-08-09T14:30:52+0000"
}
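Because this is a plain REST call, you can also exercise it outside the C# app — from Python, say. The sketch below assumes the historical Live Connect host `https://apis.live.net/v5.0` (verify it against the current docs) and a token obtained as above:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Assumed host: the Live Connect REST endpoint (check current documentation)
BASE = "https://apis.live.net/v5.0"

def drive_info_url(access_token):
    # The token rides in the 'access_token' query parameter, as in the article
    return "{}/me/skydrive?{}".format(BASE, urlencode({"access_token": access_token}))

def get_drive_info(access_token):
    # Returns the drive-info JSON shown above as a Python dict
    with urlopen(drive_info_url(access_token)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    info = get_drive_info("paste-a-valid-token-here")
    print(info["upload_location"])  # the endpoint used next to list the files
```

The `upload_location` field it prints is the same one the C# code extracts to fetch the file listing.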
Among the data returned, you can see the 'id', 'name', 'description', and most importantly, the 'upload_location'. The upload location is important because it's this path you'll use to get a complete file listing of the contents of the drive.
Once you have this upload folder, you then simply just need to make a get request to it again as was done previously, making sure you append the access token. This will also return JSON data, but this time the data will be an array of file objects listing the files and folders available in the drive:
{ "data": [ { "id": "folder.4515677bdf99b35f.4515677BDF99B35F!223", "from": { "name": "Peter Shaw", "id": "4515677bdf99b35f" }, "name": "Blog images", "description": "", "parent_id": "folder.4515677bdf99b35f", "size": 66529, "upload_location": " folder.4515677bdf99b35f.4515677BDF99B35F!223/files/", "comments_count": 0, "comments_enabled": true, "is_embeddable": true, "count": 3, "link": " redir.aspx?cid=4515677bdf99b35f&page=browse& resid=4515677BDF99B35F!223&parId=4515677BDF99B35F!161", "type": "album", "shared_with": { "access": "Shared" }, "created_time": "2009-05-23T10:55:58+0000", "updated_time": "2010-09-28T18:30:53+0000", "client_updated_time": "2010-09-28T18:30:53+0000" }, { ": " y2m73vLY1yhhnhqJXy7_bt8Z6_o7u9ypLVhZMz-oD_hLbZs-qicCq_dTKFP jClvc7JH-BNUg-gAnyMMF-SonUfRPDB1LX2yaQ5lBpRg1V6xDi0/ Tiny.png.jpg?psid=1", ", "upload_location": " file.4515677bdf99b35f.4515677BDF99B35F!1021/content/", "images": [ { "height": 117, "width": 99, "source": " y2mbp4bsz2-2tbDrVIUC6LYbM1gCneqOvtLoLayTvIU1Aw1iTHH4idHb4w2 HOGISHymyob8fz_BP4UAlxlzHH-9BA/Tiny.png.jpg?psid=1&ck=2&ex=720", "type": "normal" }, { "height": 117, "width": 99, "source": " y2m_lCuh5snt-qp4ttijSDi7lrr1TrK4zsQEFJi2bMbTm6URGSIiDZeBmSxrMz- 6PTNFDTKllNEjKZ5YtknyYj3WQ/Tiny.png.jpg?psid=1&ck=2&ex=720", "type": "album" }, { "height": 96, "width": 81, "source": " y2m73vLY1yhhnhqJXy7_bt8Z6_o7u9ypLVhZMz-oD_hLbZs-qicCq_dTKFPjCl vc7JH-BNUg-gAnyMMF-SonUfRPDB1LX2yaQ5lBpRg1V6xDi0/Tiny.png.jpg?psid=1", "type": "thumbnail" }, { "height": 117, "width": 99, ", "type": "full" } ], "link": " redir.aspx?cid=4515677bdf99b35f&page=browse& resid=4515677BDF99B35F!1021&parId=4515677BDF99B35F!161", "when_taken": null, "height": 117, "width": 99, "type": "photo", "location": null, "camera_make": null, "camera_model": null, "focal_ratio": 0, "focal_length": 0, "exposure_numerator": 0, "exposure_denominator": 0, "shared_with": { "access": "Just me" }, "created_time": "2014-08-28T11:26:38+0000", "updated_time": 
"2014-08-28T11:46:03+0000", "client_updated_time": "2014-08-28T11:46:03+0000" } ] }
I've trimmed the preceding code down to show just one file and one folder. As you can see, files that have special significance, such as pictures, have much more extra meta data than regular files, allowing you to do all manner of things with them.
I'll close this article off with a method that will make the request and return the JSON data containing a file list, as follows:
private string GetOneDriveRootListing()
{
    var accessToken = GetAccessToken();
    string jsonData;
    string url = string.Format(@" v5.0/me/skydrive?access_token={0}", accessToken);

    using (var webClient = new WebClient())
    {
        // Get the drive info object and parse out the upload location
        jsonData = webClient.DownloadString(url);
        var driveInfo = JsonConvert.DeserializeObject<OneDriveInfo>(jsonData);

        // Get the file listing from the upload location
        jsonData = webClient.DownloadString(driveInfo.Upload_Location + "?access_token=" + accessToken);
    }

    return jsonData;
}
I've cheated a little bit here to make extracting the upload_location easier. I've used NuGet to add the excellent 'Newtonsoft.JSON' parsing library, rather than try and search out the line and parse it myself. To make that work as intended, you'll need to add a new class to your project, called 'OneDriveInfo.cs', and add the following code to it:
using System;

namespace Onedrive
{
    public class OneDriveInfo
    {
        public string ID { get; set; }
        public FromUser From { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
        public object Parent_ID { get; set; }
        public long Size { get; set; }
        public string Upload_Location { get; set; }
        public int Comments_Count { get; set; }
        public bool Comments_Enabled { get; set; }
        public bool Is_Embeddable { get; set; }
        public int Count { get; set; }
        public string Link { get; set; }
        public string Type { get; set; }
        public SharePermissions Shared_With { get; set; }
        public object Created_Time { get; set; }
        public DateTime Updated_Time { get; set; }
        public DateTime Client_Updated_Time { get; set; }
    }

    public class FromUser
    {
        public object Name { get; set; }
        public object ID { get; set; }
    }

    public class SharePermissions
    {
        public string Access { get; set; }
    }
}
All we need from this at the moment is the upload location to get the file listing. But, for now, all we return is the JSON sent by the file listing. You can display this (rather massive) string easily in your app by doing something like the following:
MessageBox.Show(GetOneDriveRootListing());
This one call will authenticate if it needs to, then call and parse the drive info object, extract the upload location, and return the JSON data representing the entire root contents of the logged-in user's OneDrive.
In the next article, we'll expand on this even further by taking the JSON we got back from our file listing, and turning that into a C# object. We'll do this in such a way that each object can represent the different types of data that can be returned. Then, once we can get a file listing, we'll explore how to upload and download files to and from the logged-in drive.
If you have any suggestions or ideas for articles on specific topics of interest, please let me know.
IT ManagerPosted by Mehmet Hepkorucu on 01/03/2018 02:22am
Dear Sirs, I would like to generate your code with Visual Studio 2015. It seems a little different than above. Could you please instruct me how to write your code in the VS2015 environment. Kind Regards, Mehmet Hepkorucu
Reference of Java codePosted by Vyankatesh on 08/07/2017 10:12pm
Hi, can you please provide a similar reference for Java code to get the token? Thanks, Vyankatesh
OneDrive for Business for AuthenticationPosted by Dipendra Shekhawat on 03/21/2016 10:46pm
Hey Peter!
Onedrive authentication through ASP.NET MVCPosted by Saby on 03/11/2015 05:51am
Hi there, It's been a great learning and I could access my OneDrive files through an ASP.NET form application. Is there by any means a similar way to get the access token through an ASP.NET MVC application? I couldn't quite manage to grasp one... maybe I need more honing as I'm pretty new to drive authentication... it would be immensely helpful if you give your expert insight on the same... Thank you very much
Error in articlePosted by Tim Parr on 11/15/2014 01:53pm
Hi there - great article. Just one point I noticed - in line 5 of your second-to-last code block (fetching the JSON data for the file list) you missed an = from the end of the url string. Also, is there a way of extending the hour that the token is valid for if the app is still connected? I am trying to create an application which will synchronize a folder between two separate oneDrive accounts. Thanks Tim
RE: Error in ArticlePosted by Peter Shaw on 12/03/2014 04:23am
Hi Tim, thanks for pointing that out, I'll get it corrected asap. As for extending the hour, yes there is but it's not easy to do. You can ad an offline access claim (Sorry cant remember the actual definition) when you request authorization. What this then means is that, instead of caching the token and checking it each time, you actually have to set up a timer that repeatedly verifies the authorization in the background. For every hour that goes by, a new token is generated, and you verify that with the old token. Basically, you send the old token and if the old one comes back your ok, if a new one comes back you then have to re-validate the connection with the new token and save that. You need to do this on a regular schedule too, if you only check it on drive access, then you might miss the small window for re-generating the token and be forced to log in again from scratch, so to do this in needs to be done on some kind of timer in the background. All the details are on the OneDrive developers site along with sample code.
What about sporadically used applicationsPosted by T on 02/16/2015 10:43pm
If I want to extend longer than an hour and my app is used briefly, say once every few days, do I need to re-authenticate? Other than cheating by logging the user's id and password (security hazard), is there any way, in winforms (non WinRT), I can avoid having to re-authenticate if my app has not been used for more than 1 hour? I wish to be able to use the currently logged-in Windows user (Microsoft account) to authenticate.
developerPosted by Leon on 01/05/2015 03:05am
Did anyone implement this already? Can you provide an example of how to: - you send the old token and if the old one comes back you're ok, if a new one comes back you then have to re-validate the connection with the new token and save that. You need to do this on a regular schedule too.
Next ArticlePosted by Peter Shaw on 11/06/2014 05:01am
Due to a bit of a mix-up with publishing schedules, this post was originally intended to be released before the previous SkyDrive one. If you wish to read them in order, this post should be read before this post:
This tutorial shows some of the basic capabilities of an Arduino and its integration with Ubidots. After completing it, you'll learn what an Arduino is, what it does, and how you can set it up to turn an LED (or any other actuator) on and off from your Ubidots account. All you need to get started is an Arduino board and the official Arduino WiFi Shield.
What is Arduino?
Arduino is an open-source electronics platform based on easy-to-use hardware and software. The basic board can be connected to a series of devices to expand its possibilities; these devices are called "shields". For this specific tutorial we will use the Arduino WiFi Shield.
About Ubidots
If you’re new around here, Ubidots is a platform that allows you to easily setup an Internet-enabled device (an Arduino in this case) to push data to the cloud and display it in the form of pretty and user friendly graphs. You can also trigger SMS or Email alerts based on your data, and share your graphs in external web or mobile applications.
What you need
– An Arduino Board. In this case we are using the Arduino UNO
– The Arduino WiFi Shield
– USB Cable
– An LED
And a good WiFi connection!
Getting ready
Before we start with this tutorial you have to make sure you have the Arduino IDE up and running.
Hands on
Put the Shield on top of the Arduino board and connect the LED: The positive end in the digital pin 12, and the negative end in any of the pins labeled GND.
To know which is the positive (+) end of the LED, look for the longer leg, as shown in the picture below. In case both legs are the same size, look for the flat section on the bulb, as shown in the picture; it indicates the negative end.
Now go on to your Ubidots account. If you don’t have one yet, you can create one here.
Go to the Device section to create a new device called "control": press "Add new device" and assign the name previously mentioned.
Once the device is created, enter to it and create a new default variable called “led“.
Once the device and the variable are created you should have something like this:
3. Go to the Dashboard section to create a new "switch" control widget to control the status of the LED. Choose the device you created, then the variable, and press "Finish".
Once the widget is created, you will see the widget show in the Dashboard, like below:
4. Next, copy and paste the code below into the Arduino IDE. Once pasted, assign your wifi and token parameters where indicated in the code:
/************************************************************************************************* * This example get the last value of a variable from the Ubidots Cloud() to control * a Led. * * Requirements: * 1. In the Device section, create a new device called "control" * 2. Into the device just created, create a new default variable called "led" * 3. In the Dashboard section, create a new "switch" control widget to control the led 🙂 * * IMPORTANT NOTE: Don't forget assign your WiFi credentials, ubidots token, and the pin where the * led is connected * * This example is given AS IT IS without any warranty * * Made by Maria Carlina Hernandez() **********************************************************/ / * Libraries included / #include <WiFi.h> / * Constants and objects / namespace { const char * SSID_NAME = "assign_wifi_ssid_here"; // Put here your SSID name const char * SSID_PASS = "assign_wifi_ssid_pass_here"; // Put here your Network password const char * SERVER = "industrial.api.ubidots.com"; const char * TOKEN = "assign_your_ubidots_token"; // Assign your Ubidots TOKEN const char * DEVICE_LABEL = "control"; // Assign the device label to get the values of the variables const char * VARIABLE_LABEL = "led"; // Assign the variable label to get the last value const char * USER_AGENT = "ArduinoWifi"; const char * VERSION = "1.0"; const int PORT = 80; int status = WL_IDLE_STATUS; int LED = 12; // assign the pin where the led is connected } WiFiClient client; / * Auxiliar Functions / / this method makes a HTTP connection to the server and send request to get a data / float getData(const char * variable_label) { / Assigns the constans as global on the function / char response; // Array to store parsed data char serverResponse; // Array to store values float num; char resp_str[700]; // Array to store raw data from the server uint8_t j = 0; uint8_t timeout = 0; // Max timeout to retrieve data uint8_t max_retries = 0; // Max retries to make attempt connection / Builds 
the request GET - Please reference this link to know all the request's structures / char data = (char ) malloc(sizeof(char) * 220); sprintf(data, "GET /api/v1.6/devices/%s/%s/lv", DEVICE_LABEL, variable_label); sprintf(data, "%s HTTP/1.1\r\n", data); sprintf(data, "%sHost: industrial.api.ubidots.com\r\n", data); sprintf(data, "%sUser-Agent: %s/%s\r\n", data, USER_AGENT, VERSION); sprintf(data, "%sX-Auth-Token: %s\r\n", data, TOKEN); sprintf(data, "%sConnection: close\r\n\r\n", data); / Initial connection / client.connect(SERVER, PORT); / Reconnect the client when is disconnected / while (!client.connected()) { Serial.println("Attemping to connect"); if (client.connect(SERVER, PORT)) { break; } // Tries to connect five times as max max_retries++; if (max_retries > 5) { Serial.println("Could not connect to server"); free(data); return NULL; } delay(5000); } / Make the HTTP request to the server/ client.print(data); / Reach timeout when the server is unavailable / while (!client.available() && timeout < 2000) { timeout++; delay(1); if (timeout >= 2000) { Serial.println(F("Error, max timeout reached")); client.stop(); free(data); return NULL; } } / Reads the response from the server / int i = 0; while (client.available()) { char c = client.read(); //Serial.write(c); // Uncomment this line to visualize the response from the server if (c == -1) { Serial.println(F("Error reading data from server")); client.stop(); free(data); return NULL; } resp_str[i++] = c; } / Parses the response to get just the last value received / response = strtok(resp_str, "\r\n"); while(response!=NULL) { j++; //printf("%s", response); response = strtok(NULL, "\r\n"); if (j == 10) { if (response != NULL) { serverResponse = response; } j = 0; } } / Converts the value obtained to a float / num = atof(serverResponse); free(data); / Removes any buffered incoming serial data / client.flush(); / Disconnects the client / client.stop(); / Returns de last value of the variable / return num; } / This 
methods print the"); } / * Main_NAME); // Connect to WPA/WPA2 network. Change this line if using open or WEP network: status = WiFi.begin(SSID_NAME, SSID_PASS); // wait 10 seconds for connection: delay(10000); } Serial.println("Connected to wifi"); printWifiStatus(); } void loop() { float value = getData(VARIABLE_LABEL); Serial.print("The value received form Ubidots is: "); Serial.println(value); if ( value == 1.0) { digitalWrite(LED, HIGH); } else { digitalWrite(LED, LOW); } delay(1000); }
Go to your Dashboard and press “Click to switch ON” and if everything is correct, the led on the board will illuminate.
To verify that your device is connected and transmitting data, go to Tools > Serial Monitor and wait for it to load. You can see the variable change: **1.0** when the LED is on and **0.0** when it is off.
Wrapping Up
This tutorial explained how to control an LED remotely from your Ubidots dashboard. The same code can be used to control other things attached to the Arduino, like door openers, pet feeders, water sprinklers, locks, and other triggers.
More examples
For further project ideas, check out:
- Sending light levels data from Electric Imp to Ubidots
- Connecting a Raspberry Pi to Ubidots
- A People counter using Raspberry Pi
- A Parking sensor using Raspberry Pi
To begin solving problems with the Internet of Things today, simply create an Ubidots account and effortlessly send your data to the Ubidots IoT Application Development Platform to develop, visualize, and deploy your Problem Solving Application today! | https://ubidots.com/blog/control-an-led-remotely-with-an-arduino/ | CC-MAIN-2019-39 | refinedweb | 1,257 | 53.31 |
Using Filters in Code Completion
ReSharper allows you to filter completion suggestions by kind of symbol, access modifiers, and so on. You can modify the set of applied filters each time the code completion is invoked and/or choose to keep the state of filters.
By default, ReSharper shows the filter bar at the bottom of the completion popup. In this bar, you can see the state of filters and click the filter icons to toggle them. If the corresponding checkbox is selected in ReSharper's options, you can optionally modify the default state of filters. Note that these filter state controls are synchronized with the filter bar in the completion popup.
Filter modes
You can use each filter either to include or exclude suggestions of specific kind.
'Include' mode
To include only specific kinds of suggestions in the completion list, left-click the corresponding icons on the filter bar. The filter icons for included items get highlighted with a solid background.
In the example below, only namespaces are included in the list:
'Exclude' mode
To exclude specific kinds of suggestions from the completion list, right-click the corresponding icons on the filter bar. The filter icons for excluded items get highlighted with a border.
In the example below, everything except namespaces is excluded from the list:
Shortcuts for completion filters
By default, completion filters have no shortcuts, but you can assign a shortcut to any filter. The table below lists aliases for each filter action. You can use these aliases to find and assign specific shortcuts in Visual Studio options.
Custom filters
ReSharper allows you to define custom filters that you can use to exclude items by their assembly, namespace, and other parameters from completion suggestions.
Define a custom completion filter
Select ReSharper | Options from the main menu or press Alt+R O, then choose Completion Filters on the left.
Make sure that the Enable Filters checkbox is ticked.
Click Add. | https://www.jetbrains.com/help/resharper/Using_Filters_in_Code_Completion.html | CC-MAIN-2022-40 | refinedweb | 303 | 52.6 |
```c
#include <conio.h>
#include <dos.h>
#include <stdlib.h>
#include <bios.h>

char far *position;
int offset;
int row;
int col;
int seg;

#define black 8
#define white 15

///////////prototype///////////
char get_attribute(char, char);
//////////////////////////////////////

struct attrib_bits {
    int fg;
    int bg;
    int intensity;
    int blink;
} attribute;

union attribute {
    struct attrib_bits att_bits;
    char att_byte;
};

void disp_init()
{
    directvideo = 1;
    seg = (biosequip() & 0x30) == 0x30 ? 0xB000 : 0xB800;
}

void display(int row, int col, char ch)
{
    offset = 160 * row + col * 2;
    position = (char*) MK_FP(seg, offset);
//#ifdef using_far_pointer
    union attribute x;
    x.att_bits.blink = 1;
    x.att_bits.bg = white;
    x.att_bits.intensity = 1;
    x.att_bits.fg = 3;
    *position = x.att_byte;
    //*position = get_attribute(x.att_bits.bg, x.att_bits.fg);
    *position++ = ch;
//#endif
}

#ifdef use_get_attribute_func
char get_attribute(char bg, char fg)
{
    return (bg << 4) | fg;
}
#endif

//don't want to use pokeb()
/*
pokeb(seg, offset, ch);
pokeb(seg, offset + 1, attribute);
*/

//#ifdef _testvideo
void main()
{
    disp_init();
    display(1, 10, 's');
    display(2, 5, 'l');
    display(3, 4, 'f');
}
//#endif
```
Wow. You're certainly taxing my memory with 16-bit C and direct console I/O. :)
There are a set of built-in functions to read and write a single location on the screen. You can also read/write a "window" (rectangular block) on the screen. The data to/from this "window" is 2 bytes per cell. One byte contains the Ascii character, the other byte is color, blink, underline, etc.
Would that work for you?
Kent
I understand what you are saying, but it still doesn't help me solve the problem.
For example, check this code for me please and tell me if you see anything wrong with it:
char byte,
the code above has a typo
If you can acquire the address of the console buffer, you should be able to write directly to the buffer and see it reflected on the monitor.
char *Screen; // Address of console buffer
*Screen = 0x05;
*(Screen+1) = 'A';
That should put an 'A' on the screen, though I don't recall the color palette.
I'm not sure if modern 32-bit and 64-bit Windows operating systems actually have a console buffer any more. It might be that you need to run it on a 16-bit O/S. (I'll dig around a bit.)
Kent
first there is another mistake in the code as the character comes before the attribute in the offset
so it should be
*position = ch;
*position++ = attribute
although I have already switched them many times before this it wasn't the actual problem with my code...the problem was
to cast the MK_FP() as a char far* instead of just char*
position = (char far*) MK_FP(seg,offset);
thanks for trying kod, you get points for the effort
Silly byte swapping.... :)
*position = ch;
*position++ = attribute;
Load the two bytes into an integer and the architecture swaps the bytes. (I told you it has been quite a while....)
Your solution:
position = (char far*) MK_FP(seg,offset);
I'm really surprised that it works. position is declared as [char far *] and MK_FP should return a far pointer, though probably type [void]. I cannot image how the recast is helping. Perhaps you made another change at the same time that the recast was inserted into the source?
Also, I find it very interesting that Windows still has a console buffer in the 32 and/or 64 bit O/S. I learned something here. Thanks!
Good Luck,
Kent
Yes, I'm sure it's the casting. Do you know, I'm using Turbo C for this assignment, but on an earlier assignment I did it half way using Borland 5.02 until I found out that it has a problem with debugging in a 16-bit environment. Anyway, the thing is, when I used to compile with Borland without casting MK_FP to char far* it used to give me an error... I'm almost sure this is how it was; I will give it a try at some point to confirm. I still remember it gave me a hard time then, and for the irony I repeated the same mistake :D
But thanks for your help; I'm giving you 250 pts for the effort. Let me know if I can be of any help if you need a memory refresher on 16-bit programming, as it's the main point behind the course. I wouldn't mind going through some discussion, since the course is about programming a kernel: in this assignment we rewrote the keyboard keystroke software, the clock software, and a queue, all using interrupt functions. This is why I wanted to learn how to handle this using a far pointer instead of using pokeb(), as it will probably come in handy in later assignments.
That's rather ambitious. :) I hope that you enjoy the course.
I'm going to track down my old Borland compiler and see what trouble I can create. Memory is an awful thing to lose....
Kent | https://www.experts-exchange.com/questions/26845088/question-regard-biosequip-and-how-change-bg-fg-color.html | CC-MAIN-2018-17 | refinedweb | 879 | 71.14 |
The rise of managed cloud services, cloud-native, and serverless applications brings both new possibilities and challenges. More and more practices from software development processes like version control, code review, continuous integration, and automated testing are applied to cloud infrastructure automation.
Most existing tools suggest defining infrastructure in text-based markup formats, YAML being the favorite. In this article, I’m making a case for using real programming languages like TypeScript instead. Such a change makes even more software development practices applicable to the infrastructure realm.
Sample Application
It’s easier to make a case given a specific example. For this article, we’ll build a URL Shortener application, a basic clone of tinyurl.com or bit.ly. There is an administrative page where we can define short aliases for long URLs:
Now, whenever a visitor goes to the base URL of the application + an existing alias, they get redirected to the full URL.
This app is simple to describe but involves enough moving parts to be representative of some real-world issues. As a bonus, there are many existing implementations on the web to compare with.
Serverless URL Shortener
I’m a big proponent of serverless architecture: the style of cloud applications being a combination of serverless functions and managed cloud services. They are fast to develop, effortless to run, and cost pennies unless the application gets lots of users. However, even serverless applications have to deal with infrastructure, like databases, queues, and other sources of events and destinations of data.
My examples are going to use Amazon’s AWS, but this could be Microsoft Azure or Google Cloud Platform too.
So, the gist is to store URLs with short names as key-value pairs in Amazon DynamoDB and use AWS Lambdas to run the application code. Here is the initial sketch:
The Lambda at the top receives an event when somebody decides to add a new URL. It extracts the name and the URL from the request and saves them as an item in the DynamoDB table.
The Lambda at the bottom is called whenever a user navigates to a short URL. The code reads the full URL based on the requested path and returns a 301 response with the corresponding location.
Here is the implementation of the Open URL Lambda in JavaScript:
```javascript
const aws = require('aws-sdk');
const table = new aws.DynamoDB.DocumentClient();

exports.handler = async (event) => {
    const name = event.path.substring(1);
    const params = {
        TableName: "urls",
        Key: { "name": name }
    };
    const value = await table.get(params).promise();
    const url = value && value.Item && value.Item.url;
    return url
        ? { statusCode: 301, body: "", headers: { "Location": url } }
        : { statusCode: 404, body: name + " not found" };
};
```
That’s 11 lines of code. I’ll skip the implementation of
Add URL function because it's very similar. Considering a third function to list the existing URLs for UI, we might end up with 30-40 lines of JavaScript in total.
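Though the article skips it, here is a minimal sketch of what such an Add URL handler could look like. This is my own illustration, not the original project's code: the event shape and status codes are assumptions, and the table client is injected so the core logic can be exercised without AWS.

```typescript
// Hypothetical "Add URL" handler; request/response shapes are assumed.
// The table client is injected, so the logic runs without a real DynamoDB.
interface TableClient {
    put(params: { TableName: string; Item: { name: string; url: string } }): { promise(): Promise<unknown> };
}

function makeAddHandler(table: TableClient) {
    return async (event: { body?: string }) => {
        const { name, url } = JSON.parse(event.body || "{}");
        if (!name || !url) {
            return { statusCode: 400, body: "name and url are required" };
        }
        await table.put({ TableName: "urls", Item: { name, url } }).promise();
        return { statusCode: 201, body: JSON.stringify({ name }) };
    };
}

// In a real Lambda module, one would wire it up roughly like:
//   const aws = require("aws-sdk");
//   exports.handler = makeAddHandler(new aws.DynamoDB.DocumentClient());
```

The injected client also makes the handler easy to unit-test with a stub table.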
So, how do we deploy the application?
Well, before we do that, we should realize that the above picture was an over-simplification:
- AWS Lambda can’t handle HTTP requests directly, so we need to add AWS API Gateway in front of it.
- We also need to serve some static files for the UI, which we’ll put into AWS S3 and proxy it with the same API Gateway.
Here is the updated diagram:
This is a viable design, but the details are even more complicated:
- API Gateway is a complex beast which needs Stages, Deployments, and REST Endpoints to be appropriately configured.
- Permissions and Policies need to be defined so that API Gateway could call Lambda and Lambda could access DynamoDB.
- Static Files should go to S3 Bucket Objects.
So, the actual setup involves a couple of dozen objects to be configured in AWS:
How do we approach this task?
Options to Provision the Infrastructure
There are many options to provision a cloud application, and each one has its trade-offs. Let’s quickly go through the list of possibilities to understand the landscape.
AWS Web Console
AWS, like any other cloud, has a web user interface to configure its resources:
That’s a decent place to start — good for experimenting, figuring out the available options, following the tutorials, i.e., for exploration.
However, it doesn’t suit particularly well for long-lived ever-changing applications developed in teams. A manually clicked deployment is pretty hard to reproduce in the exact manner, which becomes a maintainability issue pretty fast.
AWS Command Line Interface
The AWS Command Line Interface (CLI) is a unified tool to manage all AWS services from a command prompt. You write the calls like this:
```bash
aws apigateway create-rest-api --name 'My First API' --description 'This is my first API'
aws apigateway create-stage --rest-api-id 1234123412 --stage-name 'dev' --description 'Development stage' --deployment-id a1b2c3
```
The initial experience might not be as smooth as clicking buttons in the browser, but the huge benefit is that you can reuse commands that you once wrote. You can build scripts by combining many commands into cohesive scenarios. So, your colleague can benefit from the same script that you created. You can provision multiple environments by parameterizing the scripts.
Frankly speaking, I’ve never done that for several reasons:
- CLI scripts feel too imperative to me. I have to describe “how” to do things, not “what” I want to get in the end.
- There seems to be no good story for updating existing resources. Do I write small delta scripts for each change? Do I have to keep them forever and run the full suite every time I need a new environment?
- If a failure occurs mid-way through the script, I need to manually repair everything to a consistent state. This gets messy real quick, and I have no desire to exercise this process, especially in production.
To overcome such limitations, the notion of the Desired State Configuration (DSC) was invented. Under this paradigm, we describe the desired layout of the infrastructure, and then the tooling takes care of either provisioning it from scratch or applying the required changes to an existing environment.
Which tool provides DSC model for AWS? There are legions.
AWS CloudFormation
AWS CloudFormation is the first-party tool for Desired State Configuration management from Amazon. CloudFormation templates use YAML to describe all the infrastructure resources of AWS.
Here is a snippet from a private URL shortener example kindly provided on the AWS blog:
```yaml
Resources:
  S3BucketForURLs:
    Type: "AWS::S3::Bucket"
    DeletionPolicy: Delete
    Properties:
      BucketName: !If [ "CreateNewBucket", !Ref "AWS::NoValue", !Ref S3BucketName ]
      WebsiteConfiguration:
        IndexDocument: "index.html"
      LifecycleConfiguration:
        Rules:
          - Id: DisposeShortUrls
            ExpirationInDays: !Ref URLExpiration
            Prefix: "u"
            Status: Enabled
```
This is just a very short fragment: the complete example consists of 317 lines of YAML. That’s an order of magnitude more than the actual JavaScript code that we have in the application!
CloudFormation is a powerful tool, but it demands quite some learning to be done to master it. Moreover, it’s specific to AWS: you won’t be able to transfer the skill to other cloud providers.
Wouldn’t it be great if there was a universal DSC format? Meet Terraform.
Terraform
HashiCorp Terraform is an open source tool to define infrastructure in declarative configuration files. It has a pluggable architecture, so the tool supports all major clouds and even hybrid scenarios.
The custom text-based Terraform `.tf` format is used to define the configurations. The templating language is quite powerful, and once you learn it, you can use it for different cloud providers.
Here is a snippet from AWS Lambda Short URL Generator example:
```hcl
resource "aws_api_gateway_rest_api" "short_urls_api_gateway" {
  name        = "Short URLs API"
  description = "API for managing short URLs."
}

resource "aws_api_gateway_usage_plan" "short_urls_api_usage_plan" {
  name        = "Short URLs admin API key usage plan"
  description = "Usage plan for the admin API key for Short URLS."

  api_stages {
    api_id = "${aws_api_gateway_rest_api.short_urls_api_gateway.id}"
    stage  = "${aws_api_gateway_deployment.short_url_deployment.stage_name}"
  }
}
```
This time, the complete example is around 450 lines of textual templates. Are there ways to reduce the size of the infrastructure definition?
Yes, by raising the level of abstraction. It’s possible with Terraform’s modules, or by using other, more specialized tools.
Serverless Framework and SAM
The Serverless Framework is an infrastructure management tool focused on serverless applications. It works across cloud providers (AWS support is the strongest though) and only exposes features related to building applications with cloud functions.
The benefit is that it’s much more concise. Once again, the tool is using YAML to define the templates, here is the snippet from Serverless URL Shortener example:
```yaml
functions:
  store:
    handler: api/store.handle
    events:
      - http:
          path: /
          method: post
          cors: true
```
The domain-specific language yields a shorter definition: this example has 45 lines of YAML + 123 lines of JavaScript functions.
However, the conciseness has a flip side: as soon as you veer outside of the fairly “thin” golden path — the cloud functions and an incomplete list of event sources — you have to fall back to more generic tools like CloudFormation. As soon as your landscape includes lower-level infrastructure work or some container-based components, you’re stuck using multiple config languages and tools again.
Amazon’s AWS Serverless Application Model (SAM) looks very similar to the Serverless Framework but is tailored to be AWS-specific.
Is that the end game? I don’t think so.
Desired Properties of Infrastructure Definition Tool
So what have we learned while going through the existing landscape? The perfect infrastructure tools should:
- Provide reproducible results of deployments
- Be scriptable, i.e., require no human intervention after the definition is complete
- Define the desired state rather than exact steps to achieve it
- Support multiple cloud providers and hybrid scenarios
- Be universal in the sense of using the same tool to define any type of resource
- Be succinct and concise to stay readable and manageable
- ̶U̶̶̶s̶̶̶e̶̶̶ ̶̶̶Y̶̶̶A̶̶̶M̶̶̶L̶̶̶-̶̶̶b̶̶̶a̶̶̶s̶̶̶e̶̶̶d̶̶̶ ̶̶̶f̶̶̶o̶̶̶r̶̶̶m̶̶̶a̶̶̶t̶̶̶
Nah, I crossed out the last item. YAML seems to be the most popular language among this class of tools (and I haven’t even touched Kubernetes yet!), but I’m not convinced it works well for me. YAML has many flaws, and I just don’t want to use it.
Have you noticed that I haven’t mentioned Infrastructure as code a single time yet? Well, here we go (from Wikipedia):
Infrastructure as code (IaC) is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.
Shouldn’t it be called “Infrastructure as definition files”, or “Infrastructure as YAML”?
As a software developer, what I really want is “Infrastructure as actual code, you know, the program thing”. I want to use the same language that I already know. I want to stay in the same editor. I want to get IntelliSense auto-completion when I type. I want to see the compilation errors when what I typed is not syntactically correct. I want to reuse the developer skills that I already have. I want to come up with abstractions to generalize my code and create reusable components. I want to leverage the open-source community who would create much better components than I ever could. I want to combine the code and infrastructure in one code project.
If you are with me on that, keep reading. You get all of that with Pulumi.
Pulumi
Pulumi is a tool to build cloud-based software using real programming languages. They support all major cloud providers, plus Kubernetes.
Pulumi programming model supports Go and Python too, but I’m going to use TypeScript for the rest of the article.
While prototyping a URL shortener, I explain the fundamental way of working and illustrate the benefits and some trade-offs. If you want to follow along, install Pulumi.
How Pulumi Works
Let’s start defining our URL shortener application in TypeScript. I installed
@pulumi/pulumi and
@pulumi/aws NPM modules so that I can start the program. The first resource to create is a DynamoDB table:
```typescript
import * as aws from "@pulumi/aws";

// A DynamoDB table with a single primary key
let counterTable = new aws.dynamodb.Table("urls", {
    name: "urls",
    attributes: [
        { name: "name", type: "S" },
    ],
    hashKey: "name",
    readCapacity: 1,
    writeCapacity: 1
});
```
I use the `pulumi` CLI to run this program to provision the actual resource in AWS:
```
> pulumi up
Previewing update (urlshortener):

     Type                   Name          Plan
 +   pulumi:pulumi:Stack    urlshortener  create
 +   aws:dynamodb:Table     urls          create

Resources:
    + 2 to create

Do you want to perform this update? yes
Updating (urlshortener):

     Type                   Name          Status
 +   pulumi:pulumi:Stack    urlshortener  created
 +   aws:dynamodb:Table     urls          created

Resources:
    + 2 created
```
The CLI first shows the preview of the changes to be made, and when I confirm, it creates the resource. It also creates a stack — a container for all the resources of the application.
This code might look like an imperative command to create a DynamoDB table, but it actually isn't. If I go ahead and change `readCapacity` to `2` and then re-run `pulumi up`, it produces a different outcome:
```
> pulumi up
Previewing update (urlshortener):

     Type                   Name          Plan
     pulumi:pulumi:Stack    urlshortener
 ~   aws:dynamodb:Table     urls          update  [diff: ~readCapacity]

Resources:
    ~ 1 to update
    1 unchanged
```
It detects the exact change that I made and suggests an update. The following picture illustrates how Pulumi works:
index.ts in the red square is my program. Pulumi's language host understands TypeScript and translates the code to commands to the internal engine. As a result, the engine builds a tree of resources-to-be-provisioned, the desired state of the infrastructure.
The end state of the last deployment is persisted in the storage (can be in pulumi.com backend or a file on disk). The engine then compares the current state of the system with the desired state of the program and calculates the delta in terms of create-update-delete commands to the cloud provider.
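A toy sketch can make that comparison step concrete. This is my own greatly simplified illustration, not Pulumi's actual engine, which also tracks dependency ordering, resource IDs, and replacement semantics:

```typescript
// Toy desired-state diff: resource names mapped to property bags.
type State = Record<string, Record<string, unknown>>;

function planChanges(current: State, desired: State) {
    const plan = { create: [] as string[], update: [] as string[], delete: [] as string[] };
    for (const name of Object.keys(desired)) {
        if (!(name in current)) plan.create.push(name);      // resource is new
        else if (JSON.stringify(current[name]) !== JSON.stringify(desired[name]))
            plan.update.push(name);                          // properties changed
    }
    for (const name of Object.keys(current)) {
        if (!(name in desired)) plan.delete.push(name);      // resource was removed
    }
    return plan;
}
```

Feeding it the `readCapacity` change from above would yield a single update for the `urls` table, matching what the preview printed.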
Help Of Types
Now I can proceed to the code that defines a Lambda function:
```typescript
// Create a Role giving our Lambda access.
let policy: aws.iam.PolicyDocument = { /* Redacted for brevity */ };
let role = new aws.iam.Role("lambda-role", {
    assumeRolePolicy: JSON.stringify(policy),
});
let fullAccess = new aws.iam.RolePolicyAttachment("lambda-access", {
    role: role,
    policyArn: aws.iam.AWSLambdaFullAccess,
});

// Create a Lambda function, using code from the `./app` folder.
let lambda = new aws.lambda.Function("lambda-get", {
    runtime: aws.lambda.NodeJS8d10Runtime,
    code: new pulumi.asset.AssetArchive({
        ".": new pulumi.asset.FileArchive("./app"),
    }),
    timeout: 300,
    handler: "read.handler",
    role: role.arn,
    environment: {
        variables: {
            "COUNTER_TABLE": counterTable.name
        }
    },
}, { dependsOn: [fullAccess] });
```
You can see that the complexity kicked in and the code size is growing. However, now I start to gain real benefits from using a typed programming language:
- I’m using objects in the definitions of other object’s parameters. If I misspell their name, I don’t get a runtime failure but an immediate error message from the editor.
- If I don’t know which options I need to provide, I can go to the type definition and look it up (or use IntelliSense).
- If I forget to specify a mandatory option, I get a clear error.
- If the type of the input parameter doesn’t match the type of the object I’m passing, I get an error again.
- I can use language features like `JSON.stringify` right inside my program. In fact, I can reference and use any NPM module.
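For instance, resource arguments can be computed with ordinary TypeScript instead of being spelled out by hand. The environment names and capacities below are made up for illustration:

```typescript
// Hypothetical: deriving per-environment DynamoDB table arguments in code.
const capacities = { dev: 1, staging: 2, prod: 5 } as const;

function tableArgs(env: keyof typeof capacities) {
    return {
        name: `urls-${env}`,
        hashKey: "name",
        attributes: [{ name: "name", type: "S" }],
        readCapacity: capacities[env],
        writeCapacity: capacities[env],
    };
}

// Each object could then feed a resource constructor in a plain loop:
//   for (const env of ["dev", "staging", "prod"] as const) {
//       new aws.dynamodb.Table(`urls-${env}`, tableArgs(env));
//   }
```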
You can see the code for API Gateway here. It looks too verbose, doesn’t it? Moreover, I’m only half-way through with only one Lambda function defined.
Reusable Components
We can do better than that. Here is the improved definition of the same Lambda function:
```typescript
import { Lambda } from "./lambda";

const func = new Lambda("lambda-get", {
    path: "./app",
    file: "read",
    environment: {
        "COUNTER_TABLE": counterTable.name
    },
});
```
Now, isn’t that beautiful? Only the essential options remained, while all the machinery is gone. Well, it’s not completely gone, it’s been hidden behind an abstraction.
I defined a custom component called `Lambda`:
```typescript
export interface LambdaOptions {
    readonly path: string;
    readonly file: string;
    readonly environment?: pulumi.Input<{ [key: string]: pulumi.Input<string>; }>;
}

export class Lambda extends pulumi.ComponentResource {
    public readonly lambda: aws.lambda.Function;

    constructor(name: string, options: LambdaOptions, opts?: pulumi.ResourceOptions) {
        super("my:Lambda", name, opts);

        const role = //... Role as defined in the last snippet
        const fullAccess = //... RolePolicyAttachment as defined in the last snippet

        this.lambda = new aws.lambda.Function(`${name}-func`, {
            runtime: aws.lambda.NodeJS8d10Runtime,
            code: new pulumi.asset.AssetArchive({
                ".": new pulumi.asset.FileArchive(options.path),
            }),
            timeout: 300,
            handler: `${options.file}.handler`,
            role: role.arn,
            environment: {
                variables: options.environment
            }
        }, { dependsOn: [fullAccess], parent: this });
    }
}
```
The interface `LambdaOptions` defines options that are important for my abstraction. The class `Lambda` derives from `pulumi.ComponentResource` and creates all the child resources in its constructor.
A nice effect is that one can see the structure in `pulumi preview`:
```
> pulumi up
Previewing update (urlshortener):

     Type                            Name               Plan
 +   pulumi:pulumi:Stack             urlshortener       create
 +   my:Lambda                       lambda-get         create
 +   aws:iam:Role                    lambda-get-role    create
 +   aws:iam:RolePolicyAttachment    lambda-get-access  create
 +   aws:lambda:Function             lambda-get-func    create
 +   aws:dynamodb:Table              urls               create
```
The `Endpoint` component simplifies the definition of API Gateway (see the source):
```typescript
const api = new Endpoint("urlapi", {
    path: "/{proxy+}",
    lambda: func.lambda
});
```
The component hides the complexity from the clients — if the abstraction was selected correctly, that is. The component class can be reused in multiple places, in several projects, across teams, etc.
Standard Component Library
In fact, the Pulumi team came up with lots of high-level components that build abstractions on top of raw resources. The components from the `@pulumi/cloud-aws` package are particularly useful for serverless applications.
Here is the full URL shortener application with DynamoDB table, Lambdas, API Gateway, and S3-based static files:
```typescript
import * as aws from "@pulumi/cloud-aws";

// Create a table `urls`, with `name` as primary key.
let urlTable = new aws.Table("urls", "name");

// Create a web server.
let endpoint = new aws.API("urlshortener");

// Serve all files in the www directory to the root.
endpoint.static("/", "www");

// GET /url/{name} redirects to the target URL based on a short-name.
endpoint.get("/url/{name}", async (req, res) => {
    let name = req.params["name"];
    let value = await urlTable.get({name});
    let url = value && value.url;

    // If we found an entry, 301 redirect to it; else, 404.
    if (url) {
        res.setHeader("Location", url);
        res.status(301);
        res.end("");
    } else {
        res.status(404);
        res.end("");
    }
});

// POST /url registers a new URL with a given short-name.
endpoint.post("/url", async (req, res) => {
    let url = req.query["url"];
    let name = req.query["name"];
    await urlTable.insert({ name, url });
    res.json({ shortenedURLName: name });
});

export let endpointUrl = endpoint.publish().url;
```
The coolest thing here is that the actual implementation code of AWS Lambdas is intertwined with the definition of resources. The code looks very similar to an Express application. AWS Lambdas are defined as TypeScript lambdas. All strongly typed and compile-time checked.
It’s worth noting that at the moment such high-level components only exist in TypeScript. One could create their custom components in Python or Go, but there is no standard library available. Pulumi folks are actively trying to figure out a way to bridge this gap.
Avoiding Vendor Lock-in?
If you look closely at the previous code block, you notice that only one line is AWS-specific: the `import` statement. The rest is just naming.

We can get rid of that one too: just change the import to `import * as cloud from "@pulumi/cloud";` and replace `aws.` with `cloud.` everywhere. Now, we'd have to go to the stack configuration file and specify the cloud provider there:
```yaml
config:
  cloud:provider: aws
```
Which is enough to make the application work again!
Vendor lock-in seems to be a big concern among many people when it comes to cloud architectures heavily relying on managed cloud services, including serverless applications. While I don’t necessarily share those concerns and am not sure if generic abstractions are the right way to go, Pulumi Cloud library can be one direction for the exploration.
The following picture illustrates the choice of the level of abstraction that Pulumi provides:
Working on top of the cloud provider’s API and internal resource provider, you can choose to work with raw components with maximum flexibility, or opt-in for higher-level abstractions. Mix-and-match in the same program is possible too.
Infrastructure as Real Code
Designing applications for the modern cloud means utilizing multiple cloud services, which have to be configured to play nicely together. The Infrastructure as Code approach is almost a requirement for keeping the management of such applications reliable in a team setting and over an extended period of time.
Application code and supporting infrastructure become more and more blended, so it’s natural that software developers take the responsibility to define both. The next logical step is to use the same set of languages, tooling, and practices for both software and infrastructure.
Pulumi exposes cloud resources as APIs in several popular general-purpose programming languages. Developers can directly transfer their skills and experience to define, build, compose, and deploy modern cloud-native and serverless applications more efficiently than ever.
Originally published at mikhail.io.
Type: Posts; User: Sterling Isfine
I.E. couldn't handle that syntax:
<script type='text/javascript'>
(function()
{
var allInps = document.getElementsByTagName('input'), allCb = [];
for( var i = 0, len = allInps.length; i <...
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"">
<html>
<head>
<title>Test Page</title>
<meta http-equiv="content-type" content="text/html;...
I've no idea what that means.
No, they're irrelevant. If you apply that code to the table you posted, it will synchronise the checkboxes with the same value attribute whenever one is clicked, which...
Put this block somewhere below the form and it will synchronise all checkboxes in the document with the same value:
<script type='text/javascript'>
(function()
{
var allInps =...
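The snippet above is cut off by the search-results view. What it performs — ticking every checkbox that shares a value with the one just clicked — can be sketched on plain objects rather than real DOM inputs; the function name `syncByValue` and the object shapes here are stand-ins, not the poster's original code:

```javascript
// Sync every checkbox that shares a value with the one just clicked.
// Plain objects stand in for the DOM inputs wired up in the snippet.
function syncByValue(inputs, changed) {
    for (var i = 0, len = inputs.length; i < len; ++i) {
        if (inputs[i] !== changed && inputs[i].value === changed.value) {
            inputs[i].checked = changed.checked;
        }
    }
}

var boxes = [
    { value: "a", checked: false },
    { value: "a", checked: false },
    { value: "b", checked: false }
];

boxes[0].checked = true;      // the user clicks the first box
syncByValue(boxes, boxes[0]); // boxes[1] follows it; boxes[2] is untouched
```

In a page, the loop in the original snippet would attach a click handler to each input and call this logic with the clicked element.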
I know what it meant to do, but starting the array at 0 (as you should) means that 0 is a legitimate value that still fails the test:
if (!readCookie("a")) which should be
if ( readCookie("a") !== null )
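The bug is easy to demonstrate with a minimal sketch. This hypothetical `readCookie` takes the cookie string as a parameter (instead of reading `document.cookie`) so the behaviour is visible outside a browser:

```javascript
// Hypothetical readCookie: parses a numeric value out of a cookie
// string; returns null when the cookie is absent.
function readCookie(name, cookieString) {
    var match = cookieString.match(new RegExp("(?:^|; )" + name + "=([^;]*)"));
    return match ? parseInt(match[1], 10) : null;
}

var jar = "a=0; b=5";

readCookie("a", jar);          // 0 — a legitimate stored value
!readCookie("a", jar);         // true — the buggy test treats 0 as "no cookie"
readCookie("a", jar) !== null; // true — the correct presence test
readCookie("c", jar);          // null — genuinely absent
```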
Not all browsers support onchange for the <form> tag.
Apply the event to the <select> tag.
setCookie should take a days duration parameter.
<select name="view" onChange="jsFnc_OrderView( this...
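A minimal sketch of a `setCookie` that takes the suggested days duration. It builds and returns the cookie string so it can run anywhere; in a browser you would assign the result to `document.cookie`. The function name and signature are assumptions, not the code from the thread:

```javascript
// Build a cookie string with an optional expiry a given number of
// days in the future. Omitting `days` yields a session cookie.
function setCookie(name, value, days) {
    var expires = "";
    if (days) {
        var date = new Date();
        date.setTime(date.getTime() + days * 24 * 60 * 60 * 1000);
        expires = "; expires=" + date.toUTCString();
    }
    return name + "=" + encodeURIComponent(value) + expires + "; path=/";
}

setCookie("view", "detailed");    // "view=detailed; path=/"
setCookie("view", "detailed", 7); // adds "; expires=<one week from now>"
```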
If record is 3 at this point, you are resetting it to 1 and storing it as 2. It can't reach 3.
Try
Your code (if you insist on using cookies) can be simplified to this:
var...
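The off-by-one described above — resetting to 1 before 3 is ever stored — disappears if the wrap happens only after the stored value reaches the maximum. A hypothetical reconstruction of just that step (the cookie plumbing is omitted, and the name `nextRecord` is assumed):

```javascript
// Advance a 1-based record counter, wrapping only AFTER `max`
// has actually been used, so every record gets its turn.
function nextRecord(current, max) {
    return current >= max ? 1 : current + 1;
}

var record = 1;
record = nextRecord(record, 3); // 2
record = nextRecord(record, 3); // 3 — reachable now
record = nextRecord(record, 3); // 1 — wraps only after 3 was used
```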
Option elements do not as standard support events.
Use the onchange event of the <select> tag.
Such a facility just encourages sloppy coding practice. It's like using break or an intermediate return statement: it does the job but makes code harder to debug. If a condition is not met you just...
I can't think of a scenario in which that could happen. Do you have a URL?
This is totally untested but try it. If it fails, post any console error message.
"blank.jpg" can be any replacement image.
<script type="text/javascript" >
function ChangeMedia()
{
var...
Do you have a querystring in your URL?
parent.location or more correctly parent.location.href must match "" exactly.
Try alerting parent.location.href
Try:
if (...
The error console would have indicated that. It doesn't split anything; it just uses a regular expression to find the required name=value pair and saves the second parenthesised match, which is the value....
I think it would be a lot simpler just to read both cookies separately using a dedicated function, then process the results.
function readCookie( cName )
{
var v;
return (...
A while loop is like a for loop without the convenience of separate initialise and increment sections. do-while is like while, but always executes the body at least once. Just change the places where you initialise and...
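The three loop forms can be lined up directly; the for loop's conveniences just move into the surrounding code. A minimal illustration (not from the thread):

```javascript
// The same summation written three ways.
function sumFor(n) {
    var total = 0;
    for (var i = 1; i <= n; ++i) { total += i; }
    return total;
}

function sumWhile(n) {
    var total = 0;
    var i = 1;          // initialise before the loop
    while (i <= n) {
        total += i;
        ++i;            // increment inside the body
    }
    return total;
}

function sumDoWhile(n) {
    var total = 0;
    var i = 1;
    do {                // body always runs at least once
        total += i;
        ++i;
    } while (i <= n);
    return total;
}

sumFor(5);     // 15
sumWhile(5);   // 15
sumDoWhile(5); // 15 — but sumDoWhile(0) returns 1, because the body ran once
```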
Try:
document.write(" ");
You're passing the wrong parameter to the swapimage function. The first parameter should be the id of the image not the map. Firefox is probably combining the two elements internally which is why it...
<input type='text' value='Email' onfocus='if( this.value == this.defaultValue ){this.value = ""; }'
onblur='if( !/\S/.test( this.value ) ){ this.value = this.defaultValue; }' >
Firefox does give a 'too much recursion' error. It seems to be related to selectElement being allowed to call itself.
If you assign the function as an event handler, this points to the element. You can use call to specify the correct scope:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"...
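The point about call can be shown without a browser: invoked as a plain function, `this` is not the element, but `Function.prototype.call` lets you supply it explicitly. The element below is a plain-object stand-in for a real DOM node:

```javascript
function whoAmI() {
    return this && this.id;
}

var element = { id: "menu1" };

// Simulates the browser invoking an assigned handler, i.e.
// element.onclick = whoAmI;  →  inside the handler, this === element
whoAmI.call(element); // "menu1"

// A plain call gets no useful `this` (the global object, or
// undefined in strict mode), so this.id is not "menu1":
whoAmI();
```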
Dynamically created elements don't show in page source nor do they need to in order to be referenced. Post all your code.
Doing that, the first problem you should have encountered is that your if test condition always fails. Testing the conditions individually reveals that element.canHaveChildren isn't supported by...
If a function doesn't behave as you expect, you step through it to find out where it's going wrong. In Javascript the simplest way is to insert alert statements at different points to discover which...
The A in AJAX stands for 'asynchronous'.
validateusername may terminate before the readystatechange handler executes with a status of 4, so the return statement is unreliable at best, useless at worst.
Unless...
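The failure mode — the function returning before the readystatechange handler has fired — can be sketched without a real XMLHttpRequest by parking the "response" in a function we trigger by hand. All names here are hypothetical; the point is the callback restructuring:

```javascript
var deliverResponse = null; // stands in for the network completing later

function validateUsername(name, onResult) {
    // A real xhr.send() would go here. The readystatechange handler
    // fires AFTER this function has already returned, so there is
    // nothing meaningful to return — hand the answer to a callback.
    deliverResponse = function (taken) { onResult(!taken); };
}

var result = "pending";
validateUsername("bob", function (ok) {
    result = ok ? "available" : "taken";
});

var resultBeforeResponse = result; // still "pending": the "request" is in flight
deliverResponse(false);            // the server finally replies: name not taken
// result is now "available"
```

Any code that needs the answer must live in (or be called from) the callback, not after a return value.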
You can't use JS to read the content from another domain.