text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
How do I use glob with urllib2?
So what I have been trying to acheive with no success is creating a list of file names with glob from two sources and comparing them and download file if it doesn't exist.
I can't get past the start because I am not sure how to tell glob to start at the end of the url or directory path.
This is where i was going.
import urllib2, urlparse, glob def getfile(base, fileExt): base = ('') # the files wanted end in _C.rtf # for files on site not in /home/sayth/python/secFiles download files = [] files = files.append(urllib2.urlopen(base + glob.glob('?_C.rtf')))
PS I checked with urllib that the full path was correct. I didn't include full print out but as you can see it works.
>>> import urllib >>> data = urllib.urlopen('').read() >>> print data {\rtf1\ansi \deff1\deflang1033{\fonttbl{\f1\froman\fcharset0\fprq2 Arial;}} \paperh11907\paperw16840\margt794\margb794\margl794\margr794\lndscpsxn\psz9\viewkind1\viewscale84 | https://www.daniweb.com/programming/software-development/threads/375393/use-glob-with-urrlib2 | CC-MAIN-2021-25 | refinedweb | 165 | 66.54 |
4. Simulating with FBA¶
Simulations using flux balance analysis can be solved using
Model.optimize(). This will maximize or minimize (maximizing is the
default) flux through the objective reactions.
In [1]:
import cobra.test model = cobra.test.create_test_model("textbook")
4.1. Running FBA¶
In [2]:
solution = model.optimize() print(solution)
<Solution 0.874 at 0x112eb3d30>
The Model.optimize() function will return a Solution object. A solution object has several attributes:
objective_value: the objective value
status: the status from the linear programming solver
fluxes: a pandas series with flux indexed by reaction identifier. The flux for a reaction variable is the difference of the primal values for the forward and reverse reaction variables.
shadow_prices: a pandas series with shadow price indexed by the metabolite identifier.
For example, after the last call to
model.optimize(), if the
optimization succeeds it’s status will be optimal. In case the model is
infeasible an error is raised.
In [3]:
solution.objective_value
Out[3]:
0.8739215069684307
The solvers that can be used with cobrapy are so fast that for many
small to mid-size models computing the solution can be even faster than
it takes to collect the values from the solver and convert to them
python objects. With
model.optimize, we gather values for all
reactions and metabolites and that can take a significant amount of time
if done repeatedly. If we are only interested in the flux value of a
single reaction or the objective, it is faster to instead use
model.slim_optimize which only does the optimization and returns the
objective value leaving it up to you to fetch other values that you may
need.
In [4]:
%%time model.optimize().objective_value
CPU times: user 3.84 ms, sys: 672 µs, total: 4.51 ms Wall time: 6.16 ms
Out[4]:
0.8739215069684307
In [5]:
%%time model.slim_optimize()
CPU times: user 229 µs, sys: 19 µs, total: 248 µs Wall time: 257 µs
Out[5]:
0.8739215069684307
4.1.1. Analyzing FBA solutions¶
Models solved using FBA can be further analyzed by using summary methods, which output printed text to give a quick representation of model behavior. Calling the summary method on the entire model displays information on the input and output behavior of the model, along with the optimized objective.
In [6]:
model.summary()
IN FLUXES OUT FLUXES OBJECTIVES --------------- ------------ ---------------------- o2_e 21.8 h2o_e 29.2 Biomass_Ecol... 0.874 glc__D_e 10 co2_e 22.8 nh4_e 4.77 h_e 17.5 pi_e 3.21
In addition, the input-output behavior of individual metabolites can also be inspected using summary methods. For instance, the following commands can be used to examine the overall redox balance of the model
In [7]:
model.metabolites.nadh_c.summary()
PRODUCING REACTIONS -- Nicotinamide adenine dinucleotide - reduced (nadh_c) --------------------------------------------------------------------------- % FLUX RXN ID REACTION ---- ------ ---------- -------------------------------------------------- 42% 16 GAPD g3p_c + nad_c + pi_c <=> 13dpg_c + h_c + nadh_c 24% 9.28 PDH coa_c + nad_c + pyr_c --> accoa_c + co2_c + nadh_c 13% 5.06 AKGDH akg_c + coa_c + nad_c --> co2_c + nadh_c + succ... 13% 5.06 MDH mal__L_c + nad_c <=> h_c + nadh_c + oaa_c 8% 3.1 Biomass... 1.496 3pg_c + 3.7478 accoa_c + 59.81 atp_c + 0.... CONSUMING REACTIONS -- Nicotinamide adenine dinucleotide - reduced (nadh_c) --------------------------------------------------------------------------- % FLUX RXN ID REACTION ---- ------ ---------- -------------------------------------------------- 100% 38.5 NADH16 4.0 h_c + nadh_c + q8_c --> 3.0 h_e + nad_c + q...
Or to get a sense of the main energy production and consumption reactions
In [8]:
model.metabolites.atp_c.summary()
PRODUCING REACTIONS -- ATP (atp_c) ---------------------------------- % FLUX RXN ID REACTION --- ------ ---------- -------------------------------------------------- 67% 45.5 ATPS4r adp_c + 4.0 h_e + pi_c <=> atp_c + h2o_c + 3.0 h_c 23% 16 PGK 3pg_c + atp_c <=> 13dpg_c + adp_c 7% 5.06 SUCOAS atp_c + coa_c + succ_c <=> adp_c + pi_c + succoa_c 3% 1.76 PYK adp_c + h_c + pep_c --> atp_c + pyr_c CONSUMING REACTIONS -- ATP (atp_c) ---------------------------------- % FLUX RXN ID REACTION --- ------ ---------- -------------------------------------------------- 76% 52.3 Biomass... 1.496 3pg_c + 3.7478 accoa_c + 59.81 atp_c + 0.... 12% 8.39 ATPM atp_c + h2o_c --> adp_c + h_c + pi_c 11% 7.48 PFK atp_c + f6p_c --> adp_c + fdp_c + h_c 0% 0.223 GLNS atp_c + glu__L_c + nh4_c --> adp_c + gln__L_c +...
4.2. Changing the Objectives¶
The objective function is determined from the objective_coefficient attribute of the objective reaction(s). Generally, a “biomass” function which describes the composition of metabolites which make up a cell is used.
In [9]:
biomass_rxn = model.reactions.get_by_id("Biomass_Ecoli_core")
Currently in the model, there is only one reaction in the objective (the biomass reaction), with an linear coefficient of 1.
In [10]:
from cobra.util.solver import linear_reaction_coefficients linear_reaction_coefficients(model)
Out[10]:
{<Reaction Biomass_Ecoli_core at 0x112eab4a8>: 1.0}
The objective function can be changed by assigning Model.objective,
which can be a reaction object (or just it’s name), or a
dict of
{Reaction: objective_coefficient}.
In [11]:
# change the objective to ATPM model.objective = "ATPM" # The upper bound should be 1000, so that we get # the actual optimal value model.reactions.get_by_id("ATPM").upper_bound = 1000. linear_reaction_coefficients(model)
Out[11]:
{<Reaction ATPM at 0x112eab470>: 1.0}
In [12]:
model.optimize().objective_value
Out[12]:
174.99999999999966
We can also have more complicated objectives including quadratic terms.
4.3. Running FVA¶
FBA will not give always give unique solution, because multiple flux states can achieve the same optimum. FVA (or flux variability analysis) finds the ranges of each metabolic flux at the optimum.
In [13]:
from cobra.flux_analysis import flux_variability_analysis
In [14]:
flux_variability_analysis(model, model.reactions[:10])
Out[14]:
Setting parameter
fraction_of_optimium=0.90 would give the flux
ranges for reactions at 90% optimality.
In [15]:
cobra.flux_analysis.flux_variability_analysis( model, model.reactions[:10], fraction_of_optimum=0.9)
Out[15]:
The standard FVA may contain loops, i.e. high absolute flux values that
only can be high if they are allowed to participate in loops (a
mathematical artifact that cannot happen in vivo). Use the
loopless
argument to avoid such loops. Below, we can see that FRD7 and SUCDi
reactions can participate in loops but that this is avoided when using
the looplesss FVA.
In [16]:
loop_reactions = [model.reactions.FRD7, model.reactions.SUCDi] flux_variability_analysis(model, reaction_list=loop_reactions, loopless=False)
Out[16]:
In [17]:
flux_variability_analysis(model, reaction_list=loop_reactions, loopless=True)
Out[17]:
4.3.1. Running FVA in summary methods¶
Flux variability analysis can also be embedded in calls to summary methods. For instance, the expected variability in substrate consumption and product formation can be quickly found by
In [18]:
model.optimize() model.summary(fva=0.95)
IN FLUXES OUT FLUXES OBJECTIVES ---------------------------- ---------------------------- ------------ id Flux Range id Flux Range ATPM 175 -------- ------ ---------- -------- ------ ---------- o2_e 60 [55.9, 60] co2_e 60 [54.2, 60] glc__D_e 10 [9.5, 10] h2o_e 60 [54.2, 60] nh4_e 0 [0, 0.673] for_e 0 [0, 5.83] pi_e 0 [0, 0.171] h_e 0 [0, 5.83] ac_e 0 [0, 2.06] acald_e 0 [0, 1.35] pyr_e 0 [0, 1.35] etoh_e 0 [0, 1.17] lac__D_e 0 [0, 1.13] succ_e 0 [0, 0.875] akg_e 0 [0, 0.745] glu__L_e 0 [0, 0.673]
Similarly, variability in metabolite mass balances can also be checked with flux variability analysis.
In [19]:
model.metabolites.pyr_c.summary(fva=0.95)
PRODUCING REACTIONS -- Pyruvate (pyr_c) --------------------------------------- % FLUX RANGE RXN ID REACTION ---- ------ ------------ ---------- ---------------------------------------- 50% 10 [1.25, 18.8] PYK adp_c + h_c + pep_c --> atp_c + pyr_c 50% 10 [9.5, 10] GLCpts glc__D_e + pep_c --> g6p_c + pyr_c 0% 0 [0, 8.75] ME1 mal__L_c + nad_c --> co2_c + nadh_c +... 0% 0 [0, 8.75] ME2 mal__L_c + nadp_c --> co2_c + nadph_c... CONSUMING REACTIONS -- Pyruvate (pyr_c) --------------------------------------- % FLUX RANGE RXN ID REACTION ---- ------ ------------ ---------- ---------------------------------------- 100% 20 [13, 28.8] PDH coa_c + nad_c + pyr_c --> accoa_c + c... 0% 0 [0, 8.75] PPS atp_c + h2o_c + pyr_c --> amp_c + 2.0... 0% 0 [0, 5.83] PFL coa_c + pyr_c --> accoa_c + for_c 0% 0 [0, 1.35] PYRt2 h_e + pyr_e <=> h_c + pyr_c 0% 0 [0, 1.13] LDH_D lac__D_c + nad_c <=> h_c + nadh_c + p... 0% 0 [0, 0.132] Biomass... 1.496 3pg_c + 3.7478 accoa_c + 59.81 ...
In these summary methods, the values are reported as a the center point +/- the range of the FVA solution, calculated from the maximum and minimum values.
4.4. Running pFBA¶
Parsimonious FBA (often written pFBA) finds a flux distribution which gives the optimal growth rate, but minimizes the total sum of flux. This involves solving two sequential linear programs, but is handled transparently by cobrapy. For more details on pFBA, please see Lewis et al. (2010).
In [20]:
model.objective = 'Biomass_Ecoli_core' fba_solution = model.optimize() pfba_solution = cobra.flux_analysis.pfba(model)
These functions should give approximately the same objective value.
In [21]:
abs(fba_solution.fluxes["Biomass_Ecoli_core"] - pfba_solution.fluxes[ "Biomass_Ecoli_core"])
Out[21]:
7.7715611723760958e-16 | http://cobrapy.readthedocs.io/en/latest/simulating.html | CC-MAIN-2017-47 | refinedweb | 1,395 | 53.98 |
.
module Main where import FRP.Animas -- Animas is a fork of Yampa import Data.IORef import System.CPUTime sf :: SF () Bool -- The signal function to be run sf = time >>> arr (\t -> if (t < 2) then False else True) -- the time signal function ignores its input and returns the time main :: IO () main = do t <- getCPUTime -- CPUTime in picoseconds timeRef <- newIORef t let init = putStrLn "Hello... wait for it..." sense = (\_ -> do t' <- getCPUTime t <- readIORef timeRef let dt = fromInteger (t'-t)/10^12 writeIORef timeRef t' return (dt, Nothing)) -- we could equally well return (dt, Just ()) actuate = (\_ x -> if x then putStrLn "World!" >> return x else return x) reactimate init sense actuate sf. | http://www.haskell.org/haskellwiki/index.php?title=Yampa/reactimate&oldid=42205 | CC-MAIN-2013-20 | refinedweb | 114 | 72.76 |
Re: Usernames and passwords
- From: "GlowingBlueMist" <nobody@xxxxxxxxxxx>
- Date: Sat, 2 Aug 2008 06:37:03 -0500
"Adi" <vouk.adolf@xxxxxxxx> wrote in message
news:g718o10r90@xxxxxxxxxxxxxxxxxxxxx
Hi, experts!I like to use the Freeware program called "Password Safe". It works just
How to save all Username and Passwords for some internet sites where I
need Username and password for access to those all pages. I have different
Usernames and Passwords and clean tool for computer each time remove all
of them.
I must put this Username and Passwords each 3 or 4 days again for access
those pages.
Is any program to save this Username and Passwords to USB Memory Stick and
running program return back on their's places.
vouk.adolf@xxxxxxxx
fine from a USB memory stick as well as from my hard drive. I use it
mostly on my hard drive but keep a backup copy of the folder on my USB
memory stick as well. That way I have a backup and I can use it on any
computer I happen to be using at the time, like using one of the library's
machines.
You can read about the program an download it at using this link:
Like any program it takes a little getting used to but I like the way it
lets you customize the "Autotype" function if needed. I use the default
"Autotype" function for 99% of the places needing a username and password
but I did have one bank that needed a custom login/password sequence before
it would work. They wanted the Username, a carriage return, wait 3 seconds
and then a second carriage return (acknowledging some policy they wanted
everyone to read on every login), and then the password, and a final
carriage return.
Anyway it's worth looking into, especially if you don't want to deal with
purchasing software or using something that has limitations imposed on it
unless you purchase a license.
.
- References:
- Usernames and passwords
- From: Adi
- Prev by Date: Re: Windows XP Explorer.exe and drwtsn32.exe crashing
- Next by Date: Re: two XP PRO in the same disk
- Previous by thread: Re: Usernames and passwords
- Index(es): | http://www.tech-archive.net/Archive/WinXP/microsoft.public.windowsxp.general/2008-08/msg00233.html | crawl-002 | refinedweb | 365 | 67.69 |
Console output below, google'ed read other peoples problems but unsure of which package to install / I needed (Ubuntu 8.04)
phil@phil-laptop:~$ tuxgdgTraceback (most recent call last): File "/opt/tuxdroid/apps/tux_framework/TFW.py", line 12, in <module> from FWObject import GdgFramework File "/opt/tuxdroid/apps/tux_framework/libs/FWObject.py", line 35, in <module> from GdgObject import * File "/opt/tuxdroid/apps/tux_framework/libs/GdgObject.py", line 38, in <module> from TGFormat import * File "/opt/tuxdroid/apps/tux_framework/libs/TGFormat.py", line 30, in <module> from TGFXml import * File "/opt/tuxdroid/apps/tux_framework/libs/TGFXml.py", line 26, in <module> import xml.dom.extImportError: No module named ext
Fixed it myself, not too happy on having to dig into the programs source code considering i spent £90 on tux droid.
But all you do is comment out :
import xml.dom.ext
on line 26 of file TGFXml.py at /opt/tuxdroid/apps/tux_framework/libs on my system.
problem now is permission denieds on firmware updates.
Ok comments and Ubuntu 8.04 notes so far:
permission problems solved by typing "sudo tuxgdg"
tuxd and tuxttsd not starting up, you need to make a link to the tuxttsd and then add 2 new services under system -> prefs -> services for each to startup.
More to come once I fix this software out. Considering rewriting the python as it uses out of date xml calls and for some strange reason brings up all gadgets on startup with the line commented out and won't save settings.
Previously Phil wrote:
Ok comments and Ubuntu 8.04 notes so far:
permission problems solved by typing "sudo tuxgdg"
I have the same problem and I find the same solution afther an update to the new tuxsetup but I don't find tuxgi.
Mirko
I have the same problem and I find the same solution afther an update to the new tuxsetup but I don't find tuxgi.
Mirko
©
2005-2008
Kysoh S.A. | http://www.tuxisalive.com/tux-droid-forum/copy_of_forumtopic1/193799496 | crawl-001 | refinedweb | 327 | 65.83 |
Recursion (JavaScript)
Recursion is an important programming technique. It is used to have a function call itself from within itself. One example is the calculation of factorials. The factorial of 0 is defined specifically to be 1. The factorials of larger numbers are calculated by multiplying 1 * 2 * ..., incrementing by 1 until you reach the number for which you are calculating the factorial.
The following paragraph is a function, defined in words, that calculates a factorial.
"If the number is less than zero, reject it. If it is not an integer, round it down to the next integer. If the number is zero, its factorial is one. If the number is larger than zero, multiply it by the factorial of the next lesser number."
To calculate the factorial of any number that is larger than zero,.
Recursion and iteration (looping) are strongly related - anything that can be done with recursion can be done with iteration, and vice-versa. Usually a particular computation will lend itself to one technique or the other, and you simply need to choose the most natural approach, or the one you feel most comfortable with.
Clearly, there is a way to get in trouble here. You can easily create a recursive function that does is any chance of an infinite recursion, you can have the function count the number of times it calls itself. If the function calls itself too many times (whatever number you decide that should be) it automatically quits.
Here is the factorial function again, this time written in JavaScript code.
function factorial(aNumber) { // If the number is not an integer, round it down. aNumber = Math.floor(aNumber); // If the number is less than 0, reject it. // If the number is 0, its factorial is 1. // Otherwise, call this recursive procedure again. if (aNumber < 0) { return -1; } else if (aNumber == 0) { return 1; } else { return (aNumber * factorial(aNumber - 1)); } } | https://msdn.microsoft.com/en-us/library/wwbyhkx4.aspx | CC-MAIN-2015-27 | refinedweb | 316 | 57.77 |
Here are details of our 0904 release grouped by each of the HealthVault areas. This release is now available in our Pre Production Envrionment.Please note this release will be available in our production environment around April 29 timeframe.
Improved Caching and Reliability
We have substantially improved availability and caching in HealthVault: Application
will notice more predictable response times under heavy load and better
handling of cache server failures at the Healthvault backend.
Extended Certificates we trust for Digital Signing
Support for Entrust certificates for Digital Signatures in HealthVault:
Beside Komodo, VeriSign and GeoTrust, we now support Entrust certificates for
Globalized Application Provisioning Methods
In this release we are adding ability to have localized versions of
Applications through our AddApplication and UpdateApplication methods. These
methods are available to Master applications for application provisioning.
Improved Direct to Clinical Experience
We are adding a customizable Direct to Clinical Success message:
Applications will now have the ability to submit a customized success message
for the direct to clinical flow.
This release will include changes in our user flow for
signing up for HealthVault. Please review the details since these changes could
affect the way some applications describe and link to our signup process.
New first page
In previous version of HealthVault Shell, a new user of HealthVault sees a
LiveID signin page that enables the user to sign in using an existing Live ID, create
a Live ID or, as an alternative, sign in using OpenID. In ongoing user
research, we’ve found that many users don’t know what Live ID is or whether
they have one, which can cause unwanted user confusion on the first page of the
flow. In this release, users who have not signed in to HealthVault before will
see a simpler page that asks them to enter the e-mail address they want to use
with HealthVault. If the user enters an address, we’ll check to see if it is
already associated with a Live ID. If it is, we’ll show a Live ID signin page
that should already be familiar to the user. If it’s not, we’ll show a page
where the user can create a Live ID. Of course, users will still have the
option to sign in using OpenID instead of Live ID.
Some applications host pages that explain to new users what
will happen when they create an account. We’ve prepared a screen shot of
the new first page of the flow that applications can display on these pages if
they wish.
Streamlined process for creating a Live ID
In previous version, a new user who does not have a Live ID (and who
doesn’t use OpenID) must complete a two-page form in order to create one. In
this release, we have shortened this to one page.
Ability to add records during signup
Users will now have the option to add record(s) during signup, just before
application authorization. This may be helpful for multi-record applications in
particular.
New CREATEACCOUNT target
Welcome the newest member of our Shell Redirect
Interfaces - “CreateAccount”.
Many existing applications have separate links for new and
returning users – for example, a “Sign In” link and a “Get Started” button –
that both point to the AUTH target. In this release, we recommend that
applications point “Sign In” links to the AUTH target and “Get Started” links
to the CREATEACCOUNT target. This isn’t required – sending new users to the
AUTH target will still work fine – but it will deliver what we think is a
better user experience.
Consistent co branding
Co branding will appears on all pages of the flow, including pages hosted
by Live ID. We believe this will help make it clear to users that the entire
flow is related to using the application.
In response to partner feedback, we have made two changes to
the lab test results data type.
1. The
first change is a change in how we interpret the note that is part of a result
(in lab-test-result-type). Some existing systems produce results that are
intended to be displayed or printed using non-HTML format’s, with hard line
breaks and meaningful whitespace.
The new version of the data type now supports that format for this specific
Note property. In the HealthVault shell, these notes are rendered using the
HTML <pre></pre> tags, and that may be appropriate for other
applications that wish to display this data.
2.
The second change was to modify how ranges are
expressed. The existing range type only supported a strict numeric range with a
minimum and a maximum.
We have modified the range so that the range value is now expressed as a
codable value. All applications should put a textual representation of the
range in the text property of this codable value. If there is a specific
vocabulary for the range (for example, a color range expressed as “yellow –
orange”), that can be stored using the coded part of the codable value.
Ranges that are numeric in nature can be placed in the range type. This range
type now supports optional value for minimum and maximum, allowing the
expression of open-ended ranges by setting one and not the other.
In previous releases, if you wanted to fetch a specific
instance of a type, you called GetItem(), and passed in the ID and the sections
that you wanted to fetch:
PersonInfo.SelectedRecord.GetItem(itemId, HealthRecordItemSections.Core
| HealthRecordItemSections.Xml);
It was inconvenient to have to specify the sections and it
wasn’t clear which sections to ask for. We have made two changes to make this
easier.
First, we have added a Default entry to
HealthRecordItemSections, so you can write:
PersonInfo.SelectedRecord.GetItem(itemId, HealthRecordItemSections.Default);
The default is to return the Core and Xml sections.
Second, you can omit the section, in which case the default
section is used:
PersonInfo.SelectedRecord.GetItem(itemId);
We added a similar overload to the GetItemsByType() method.
The presence of different versions of a type (for example,
Encounter and EncounterV1) has created some confusion. In this release, we have
taken all older versions of the types and moved them to a separate dll, named Microsoft.Health.ItemTypes.Old.dll,
and also moved them to the Microsoft.Health.ItemTypes.Old.
We hope that this will make the set of types easier to
understand.
If you wish to use the old types, you will need to reference
the datatypes.old dll in your project, and reference the old namespace | http://blogs.msdn.com/b/healthvault/archive/2009/04/22/healthvault-0904-release-notes.aspx | CC-MAIN-2014-42 | refinedweb | 1,086 | 59.94 |
At 20:23 11.12.2003 +0100, Xavier Noria wrote:
>How modifiable are classes in Python by code? Can you change the list of
>ancestors of a class before any instance is created?
yes __bases__ is writable, although if you have a Java class in there you
should keep it, this is not checked but should be respected. (In CPython
2.2 etc for new-style classes the rules are a bit more involved)
>Can you wrap a method to add pre or post code a la CLOS? Can you redefine
>a class at any time as in Ruby?
if you mean as in:
>>> class A:
... def meth(self): return 1
...
>>> a=A()
>>> class A:
... def meth(self): return 2
...
>>> a.meth()
1
no, because the class statement simply and always creates a new class
object and binds it to a name.
But you can modify the old/original version of the class:
>>> class A:
... def meth(self): return 1
...
>>> a=A()
>>> a.meth()
1
>>> def meth(self): return 2
...
>>> A.meth = meth
>>> a.meth()
2
You can also add "methods" to instances:
...
>>> def meth2(): # no self, really just a function to live in instance
namespace
... return 'a'
...
>>> a.meth2=meth2
>>> a.meth2()
'a'
or
...
>>> import new
>>> def meth2(self):
... return self.meth()*2
...
>>> a.meth2=new.instancemethod(meth2,a,a.__class__)
>>> a.meth2()
4
>>>
El 11/12/2003, a las 16:27, Jeff Emanuel escribi=F3:
> Can your classes extend a common superclass. The constructor
> of the superclass can to the registration.
This is the approach we are towering at somehow, thank you.
We chose an scripting language as Python for several reasons, one of=20
them being its flexibility and the existence of exec() to modify or=20
create source code on the fly if we need it.
The objective is to offer an IDE where you don't work in a normal=20
decoupled object oriented environment, as if you developed using a=20
library. We have to find a balance between offering common things=20
solved out the box and without the needed of code, and programming on=20
the model-builder side.
In this particular case, we want to be able to take advantage of the=20
dynamism of Python to, for instance, let the user associate a widget to=20=
a class through a wizard and have the display of all its instances=20
happen magically in simulation time.
I ask for your advice because unfortunately no one in the team is=20
fluent in Python, so it's difficult to be creative and offer powerful=20
features.
How modifiable are classes in Python by code? Can you change the list=20
of ancestors of a class before any instance is created? Can you wrap a=20=
method to add pre or post code a la CLOS? Can you redefine a class at=20
any time as in Ruby? Is there some online reference where we can have a=20=
general picture of what can be modified of class/method definitions=20
dynamically?
-- fxn | http://sourceforge.net/p/jython/mailman/message/11105846/ | CC-MAIN-2015-11 | refinedweb | 510 | 74.39 |
Functional Programming explained to my grandma
Open Source Your Knowledge, Become a Contributor
Technology knowledge has to be shared and made accessible for free. Join the movement.
Pure Function
Functional programming is all about functions, specifically pure functions.
A pure function is a function which has no side effects. It has two main characteristics:
- If you give the same parameters, you get the same result no matter what
- It will never change its environment
What is a side effect?
A function has a side effect if something is changed during its execution. For example:
- Modifying a variable
- Modifying a data structure in place
- Setting a field on an object
- Throwing an exception or halting with an error
- Printing to the console or reading user input
- Reading from or writing to a file
- Drawing on the screen
Let's look at an example of a function with a side effect. In this example, we have an URL dynamically built.
In this test, I run the "BuildUrl" function twice. It seems this doesn't work well: the first test passed, but in the second one the url is " ...". As the URL attribute is mutable, the URL has been changed two times.
If we want to transform this function into a pure function, we should remove the affectation and just return the new URL.
Moreover, the URL attribute is defined as var, which is mutable. We should consider using val, which cannot be modified. If you change the type of attribute to val, this code will not compile anymore. Why? We cannot modify an immutable value.
var x = 0 x = 2 //perfectly fine val y = 10 y = 8 //will not compile
In a nutshell
The easiest way to visualize what pure functions are all about is to see them as mathematical functions.
Mathematically, you know that 2+2 can always be replaced by 4.
That's what we want to reflect with functional programming.
def add (a: Int, b: Int): Int{ a + b }
The add function can always be substituted by its result.
But,if I don't change my data, my applications will do nothing!
Obviously, we cannot keep everything immutable. But we will keep our changes, modifications, I/O operation, etc. in specific layers outside of our logic. Pure functions allow us to write smaller snippets of code which is easier to read, and thus easier to maintain. | https://tech.io/playgrounds/6247/functional-programming-explained-to-my-grandma/what-are-functions-all-about | CC-MAIN-2018-34 | refinedweb | 397 | 64.1 |
-- .General.Maybe where import Control.Monad import Data.Maybe -- | Indicate error or return a. -- This function for replace `fromJust`, expression `fromJust x` is bad -- when `x` is `Nothing`, so `maybeError` allowed you customize error -- information. maybeError :: Maybe a -> String -> a maybeError m str = fromMaybe (error $ "Just crash : " ++ str) m -- | Maybe boolean. maybeBool :: Maybe a -> Bool -> Bool maybeBool (Just _) b = b maybeBool Nothing b = not b -- | Maybe alternative. maybeAlternate :: Maybe a -> a -> a maybeAlternate = flip fromMaybe -- | Maybe alternative monad. maybeAlternateM :: Monad m => Maybe a -> m a -> m a maybeAlternateM x = maybeBranch x return -- | Apply maybe. maybeApply :: Maybe a -> (a -> b) -> Maybe b maybeApply = flip fmap -- | Apply maybe with monad. maybeApplyM :: Monad m => Maybe a -> (a -> m b) -> m (Maybe b) maybeApplyM m f = maybe (return Nothing) (liftM Just . f) m -- | Maybe transform monad. (?>=>) :: Monad m => Maybe a -> (a -> m (Maybe b)) -> m (Maybe b) m ?>=> f = maybe (return Nothing) f m -- | (>?>=>) :: Monad m => m (Maybe a) -> (a -> m (Maybe b)) -> m (Maybe b) g >?>=> f = g >>= (?>=> f) -- | Maybe tranform (). (?>=) :: Monad m => Maybe a -> (a -> m ()) -> m () m ?>= f = maybe (return ()) f m -- | (>?>=) :: Monad m => m (Maybe a) -> (a -> m ()) -> m () g >?>= f = g >>= (?>= f) -- | Maybe branch. maybeBranch :: Monad m => Maybe a -> (a -> m b) -> m b -> m b maybeBranch (Just a) f _ = f a maybeBranch Nothing _ g = g -- | Maybe head. maybeHead :: [a] -> Maybe a maybeHead = listToMaybe | http://hackage.haskell.org/package/manatee-core-0.0.1/docs/src/Manatee-Toolkit-General-Maybe.html | CC-MAIN-2014-52 | refinedweb | 228 | 77.53 |
]
Filip Bulovic(5)
Doug Doedens(5)
Chris Rausch(4)
Bechir Bejaoui(4)
Frank Gutierrez(3)
manish Mehta(2)
Mahesh Chand(2)
Vivek Gupta(2)
John Hudai Godel(2)
Dipal Choksi(2)
Abebe Assefa(2)
Mike Gold(2)
samuel.ludlow (2)
Anand Kumar Rao(2)
Jigar Desai(2)
Anand Thakur(2)
Munir Shaikh(2)
Hari Shankar(1)
Ashish Banerjee(1)
R. Seenivasaragavan Ramadurai(1)
Raju Pericharla(1)
Tin Lam(1)
Ashish Jaiman(1)
Michael Momcouglu(1)
fdutoit (1)
Chow Lai Man(1)
Michael Evans(1)
Giuseppe Russo(1)
John O Donnell(1)
Malcolm Crowe(1)
Prashant Tailor(1)
gsuttie (1)
Chris Blake(1)
Konstantin Knizhnik(1)
David Sandor(1)
Vishnu Prasad(1)
salvatore.capuano (1)
jimteeuwen (1)
Sam Haidar(1)
Wdenton (1)
Sudheer Adimulam(1)
Rama Mohan(1)
Pradeep Kellangere(1)
Chandrakant Parmar(1)
Santhosh Kumar R V(1)
jr.charles (1)
sudhirmangla (1)
klaus_salchner (1)
Santhi Maadhaven(1)
Harikishan Gireesh(1)
Subburam Karthikeyan(1)
Balaji K(1)
Subramanian Veerappan(1)
Leon Pereira(1)
Leon Pereira(1)
Arun Kumar(1)
K S Ganesh(1)
Nirlep Kaur(1)
Sushmita Kumari(1)
Moses Soliman)
Raj Kumar.
.NET Framework and Web Services - Part 3
Jan 31, 2002.
Here I am going to explain Web methods and how to write Web methods in C# and VB.NET.
VS.NET Tools Intermediate Language Disassembler (ILDAM)
Feb 06, 2002.
"The ILDSAM tool parses any .NET Framework EXE/DLL Module and shows the information in a human-readeble format"
World Clock Using Windows Forms
Feb 11, 2002.
I developed a C# application for finding World Timings given the US Central Timing.
Reflecting Data to .NET Classes: Part I - From HTML Forms
Mar 06, 2002.
Reflection allows us to examine internal details of assemblies and classes at runtime .
Case Study: Demo Networking Financial System
May 23, 2002.
In this tutorial I will discuss some of the design and development issues that one might consider when using .NET framework for developing Network affiliated applications.
Robotics Game Using .NET Languages
May 28, 2002.
It is often not easy to get to know new technologies like .NET if you don’t get introduced to in a work environment.
Utilizing Assembly Information for Your Automated Splash Dialog
Jun 05, 2002.
There are cases when you need to reuse the same splash screen or about box in many applications..
Macro to Update References of all Projects in a Solution
Jul 24, 2002.
When working in enterprise development there are occasions when you have a solution file with 10 or more projects in it all using private assemblies..#.
Using DTS from C#
Sep 16, 2002.
In this article I will concentrate on enumerating, executing and changing properties or global variables of DTS package.
Customize User Interfaces and Pass User Input to Installer Classes
Oct.
Global Assembly Cache(GAC) Hell
Jan 03, 2003...
Implementing Caching in ASP.Net
Jun 10, 2003.
This article explains the concepts, advantages and types of caching and the implementation of caching in ASP.NET applications.
Data Access Layer based on dataSets
Jul 01, 2003.
This article aims to introduce the reader to several conceptual problems encountered in the development of a generic Data Access Layer (from now on referred to as DAL)..
The Graphics Class and Transformations
Apr 01, 2004.
The Graphics class defined the transformation related functionality. This article discussed the Graphics class and its members that participate in transformation process..
How to work with Assemblies in InstallShield Developer 7.0
Oct 14, 2004.
InstallShield Developer 7.0 is the best solution for providing the very easy user interface to author installations having both .NET and side by side components. This article is a step by step walk through of how to create a deployment project using InstallShield.
Active Directory and Microsoft .NET
Oct 18, 2004.
This article will emphasize in the benefits of using the namespace System.DirectoryServices....
Enterprise Library 1.0
Apr 02, 2005.
Enterprise Library is a set of tested, reusable application blocks that address common problems developers face when developing enterprise-based applications...”.
How to Execute an Application in a Remote Application Domain
Jan 18, 2007.
This article explains executing an application in a remote application domain
Assembly in .NET
Feb 23, 2007.
The .NET assembly is the standard for components developed with the Microsoft.NET.#.
Generic Error Logger using ASP.Net & C#
Jul 09, 2007.
In this article I just want to put some light on "global.asax" file & how we can make use of "Application_Error".#.
Compress Web Pages using .NET 2.0 Compression Library
Oct 08, 2007.
This article explains how to create very simple HttpModule to compress your dynamic content using compression library (System.IO.Compression) available in .NET 2.0?
Assembly in .NET 2.0
Nov 13, 2007.
This article gives you an overview of assemblies used in .net 2.0.
Caching in ASP.NET 2.0
Jan 08, 2008.
Caching is a technique of storing a copy of data in memory. You could cache a page or the results of a query. The advantage of caching is to build better performance into your application..
Installing an Assembly: Part II Using the Global Cache
Jan 21, 2008.
This article explains installing an Assembly using the Global Cache.
Part III: Step by Step Procedure of How to Install an Assembly
Jan 22, 2008.
This article is as in the part I and II describes the maner of how to install an assembly.
About Global-Assembly-Cache. | http://www.c-sharpcorner.com/tags/Global-Assembly-Cache | CC-MAIN-2016-36 | refinedweb | 903 | 59.9 |
...for.
I would go for CPC of over $1.50 if possible with minimum daily search of 50 searches per day - if I can on longtail keywords
Personally I'd not go lower than $1 in CPC AND 1000 in search volumes.
But I do have my own formula when it comes to choosing the keyword(s) to target. Unfortunately, I prefer to discuss it in private.
Is the 1000 search volume per day or per month?
Would have to be month, 30,000 would be way too high.
I've just done a keyword search for deodorant and chainsaw and they're showing 1,220,000 and 1,500,000 searches per month each.
I wish you luck hitting the front page with either !
My meaning was 30,000 would be too high for me to tackle for hitting a high ranking on google !
CabinGirl is right! 1000 per month is the least I'd go but with an exception of certain keyword(s) I'll write a hub for.
By the way, DS, the figure you're looking at is not an accurate representation of search volume based on that specific keyword deodorant because you're using *broad* match. The actual figure (*exact* match) is 90,500 - still a good # though.
Anyway, I'm surprised the competition is very low for deodorant so go for it, DS.
I think it is a little more complex than CPC value, you have the broad, phrase and exact options. You should also consider where the keyword competition is i.e. is it in the text or in the title and how much traffic you are likely to get on a topic.
Its a bit like trying to sell a Rolls Royce to get a multi-thousand doller payout where you are happy to see 1 or 2 a month versus selling biros where you may get a few pence but with all the volume it ends up being a pretty penny. Be interesting to know who makes the most profit, Rolls Royce motors or BIC!!!
On the face of it the Rolls Royce dealer is the one with the expensive lifestyle while the pen salesman is the one struggling to make ends meet!
Though I've given a very broad and extremely open to be flawed generalisation there!
But your analogy is an offshoot of what I'm thinking. If there's little traffic, but big bucks it could be worth checking out. If it's little money but a lot of traffic, then it's also worthwhile.
I'm trying to avoid the combination of the little traffic and little money.
I need to find out more about this "broad match" business. I would have thought that a single keyword search would be an exact match. I've got some homework to do.
Why are you in the knowledge exchange forum then? LOL!
To answer the question posed by DS.
Beyond that, I prefer to discuss it in private. Simple enough!
Darkside
I use a minimum of $1 CPC.
As for traffic - I find that the Google Keywords tool isn't that accurate.
Try using another Google tool:
You don't need to sign in, just type in your phrase/words. It brings up a whole bunch of keywords with monthly search, and you can click the magnifying glass icon and go straight through to Google Insights for search to see where the traffic comes from (as in which country).
The country info is very important - sometimes what seems like a good keyword gets all it's traffic from Croatia (where they probably don't have enough advertisers to bid more than a few cents).
I find it interesting that two Google tools show different figures for search, with the tool listed above consistently showing lower numbers than the Keywords tool - but all these tools are estimates. Google would never divulge the true accurate data. Therefore use both to arrive at a sort of rule of thumb about traffic.
Even the CPC data isn't quite accurate - Google charges advertisers based on how relevant their landing page is - the more relevant the less they pay. Advertisers are now cottoning on and improving their sites to drive down their costs. Plus the whole CPC thing is based on a dynamic auction which depends on how many advertisers are bidding at a given moment, which in turn depends on their advertising budgets and time of year (I would think bidding rises towards Christmas).
From my experiments, this is all a very uncertain business, and the only thing you can do to protect yourself is to have a lot of different niches. And you can only do that by simple experimentation, and sometimes just going with your gut or focusing on your interests. Those who focus on just a few niches leave themselves vulnerable to their keyword price dropping due to changes in the economy/fashion.
Hope this helps.
I'm with fayans (generally) but I also found out accidentally that less than $1 can actually generate more than you'd suppose - more so if the search volume (per month) is high.
It's a very weird science.
It is a very weird science. Initially I just wrote about what I liked but now I tend to target my keywords more. However, I always tend to strike a balance. I would generally not go below 75p (I'm in the UK) but it also depends on the number of searches.
I've also discovered that some keywords, despite the high competition, are actually poorly served and therefore despite the stats or whatever - high competition + popular keyword = poor return (for e.g.) - they can be more than worth using.
Does anyone use PPC Web Spy? What's your opinion?
To be fair, the competiton is poor BC. Now go write about body odour
What ??
Have you got your head on BC? I was meaning that what's on the first page or two for deodorant isn't great. Deodorant - body odour? Get it
Of course I do, I ain't stupid but why would I write about body odour. I am just replying to a thread on keyword research and pointing out what I go for. I would never go for a word that is above 8,000.
Like I know there are words out there over a million, someone been spreading rumours that I have been in a coma ?
I don't happen to think you're stupid. Nor have I heard runmours that you're in a coma. But fine BC, take what I said as you will.
@ Darkside - I should think you'll manage it.
I misunderstood you, head deff not on. Methinks I am in a coma and deff am getting rusty. Accept my apology oh green one as I realise you were being helpful !
Apology accepted, of course.
Darkside - seriously, some of the competition is wasted on that keyword, so I don't see why you can't hit the mark. And though you were asking BC (about CPC), I've gone for less, because I know a lot about the subject, and it's paid off. And I imagine will pay off more and more in time.
I'll be coming out with both guns blazing so everyone better have their hands up in the air (even if they have B.O.)
That's only 266 a day searching for that keyword.
1000 search volume is only 33 per day searching for that keyword. Not many people at all. That's a very small audience.
I'm doing a bit of a juggling act, and I don't target a high paying keyword just because there's a lure of big money, it's got to interest me. I've written plenty that will have a very small audience, just because I really dig the topic. But then I wouldn't be specifically targeting a very small niche because I expected there'd be very little competition, with a chance of only 1 or 2 people an hour in the whole world searching for that information (that may sound contradictory, but what I'm saying is, if I'm doing it purely for love, it doesn't matter how little traffic or Adsense $ is involved, but if I'm doing it for a chance for money I won't hunting wallabies when I could be bagging kangaroos. I'm currently researching a topic and gathering information on something that I've just now had a look at with the keyword search tool, 1,900 searches a month and $0.64 average CPC. But I'd not recommend that to anyone who's wanting to make money).
What sort of average CPC would you have as the lower limit?
It will be your usual masterpiece DS
I'll try get everybody keep da noise down
lol
I am happy enough with a cpc of about 30 cents and to be honest I would be happy as well if all my hubs were attracting 33 reads a day but perhaps my sights are set too low.
Deff learnt something from your posts though and might just change the way I think, cheers for that !
Don't take this in the wrong way, but I do think you're aiming too low. Though not as bad as those who target 0 search terms and are justifying it by saying that one day it might be a hot topic!
33 people searching a day won't guarantee all 33 will click through. And a 5% CTR on the ad itself x the low average CPC means it's just crumbs.
Trust me, I've spent a looooong time just writing what I enjoy writing about. And in all honesty, it's not a bad thing to do. It's made me money in the past. But there's a thread by Ryan in the 30 Day Hub Challenge called HubPages Tips: Using keyword research to make a good Hub great! which is worth reading a few times over. I had the opportunity a little while back to see a similar video which was a real eye opener. It's like getting a high powered scope for a rifle, and getting taught how to use it properly.
Someone recently in the forum mentioned about their hub getting good search traffic for the term jet ski. Which is excellent. I did the research and found that jet ski insurance is only getting 4,400 searches a month, but the Estimated Average CPC is $11.50.
Now that's only about 150 a day, but the Average CPC being $11.50 means it's a damn good keyword to target.
I personally wouldn't target it. I have little experience with jet ski's. If I did have an interest or wanted to do it justice, I'd avoid searching for the information online and ring up insurance companies and find out from them what they've got on offer and any other bits of information they have. And use that as the basis for my hub.
I pretty much have my hubs in two baskets: That which I HAVE to write about, and I don't care about the money (none of my How To Hub hubs have Adsense on them) and that which I WANT to write about... and there's a lot of them, so I'm weighing up which ones are going to be more profitable. If it's going to be low on the search volume and CPC then I'll toss it back like an undersize fish.
At the moment I don't have any firm criteria for what my minimum is on both counts. I'm interested in what others have as theirs to learn a thing or two.
You are an inspiration DS not only to myself but newbies reading this. Deff going to take this on board and give that link of Ryans a read. Maybe it's about time I actually discarded the alter-ego crap and actually made a go at earning proper money on here.
Deff some sound stats there and I am deff into stats. I am deff going to reinvent myself. Made me think DS, not many people get that far inside me head
My keyword search starts with the heading then I look at the most to least popular and then think of subcategories n do another search.
dunno if that gives you any ideas.
Oh deff the Title is the most important thing and deff can make a big difference as to whether you get on the google front page and I have changed a few a couple of times to achieve that !
Actual CPC can be miles off of the estimate. If you run ads with a good CTR and a quality landing page you can pay way less. I have run ads with an estimated CPC of £2 + where we only ended up paying 22p.
Personally, I choose CPC at $1 or more and Search Volume over 2000 per month and under 30,000. Some will say that is too high but you have to look at those little green bars to see how heavy the competition is, then go do a search on Google for that keyword or phrase.
In your jet ski insurance example, there are over 940,000 hits but for the exact phrase in quotations there are only 19,400. I wouldn't let that 940,000 scare me if all the other numbers were good. It is safe to assume that a good percentage of those hits have little or no SEO and the strength you get from HP site authority will give you an even better chance at ranking high if the hub is quality.
Also, there are some things that I will write about that have less than $1 CPC but on those articles I'm targeting Amazon or Ebay sales and not adsense revenue. So, my suggestion would be to make out a plan before you write on whether you want to target Adsense or other affiliate sales.
Just wanted to add that in that jet ski example, 6 out of the top 10 on google had the term jet ski insurance in the URL so that should tell you something also.
* removed, as i dont feel like fixing all my embarrassing typos*
We are always in state of learning and improving upon that which we have learned
n is fun when we share n learn from eachother. Kinda reminds me of homework sessions with friends back in 1867 when I was a kid. (sigh)
lol
I'd just like to thank everyone for these posts. I am slowly learning and think it is wonderful for minds to open in helping each other out. Thank you.. I read a lot of these posts and find them very helpful!
I would like to suggest to follow these steps:
1. Seed keywords: you just need to go through the related services/products and try to get basic keywords(not the key phrases).
2. Building key phrases and secondary keywords: i would like to suggest to use keywords tools. they can help you in getting right keyword phrase and its different format of it(or you can say all the synonymous and other stuff comes into this section.)
Some of favorite tools are as follows:
Google keyword tool
Google search based keyword tool
Google Insights
Google trends
Word tracker
(I will always prefer free one)
3. Fine tuning of Keywords:
Manual search comes in picture now. You can fine tune the keywords according to the targeted audience,geographical regions and of course your own limit.
by Glen6 years ago
And it's right under our very noses. What is it? Stuff that you're interested in!Your hobbies, your work, your experiences... do a bit of keyword research to see how many other people are interested in it and what... greathub7 years ago
Hi hubbersI have written two hubs with very high search volume keywords and I have chosen those keywords which had least competition. Its been 3 weeks but there has been no traffic at all from any search engine.Please.... | http://hubpages.com/community/forum/20758/when-doing-keyword-research | CC-MAIN-2017-26 | refinedweb | 2,720 | 79.6 |
for more information.
Thanks...corejava how to validate the date field in Java Script? Hi friend,
date validation in javascript
var dtCh= "/";
var minYear
Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava
Java Interview Questions
Core java Interview Question
page1
An immutable... in the constructor.
Core
java Interview Question Page2
A Java
CoreJava Project
CoreJava Project Hi Sir,
I need a simple project(using core Java, Swings, JDBC) on core Java... If you have please send to my account
corejava - Java Beginners
corejava how to retriving data from html to servlet?how send the data servlet to text.file hai friend,
By using FormBeans we can...
-----------------------------------------------
Read for more
corejava - Java Interview Questions
Core Java vs Advance Java Hi, I am new to Java programming and confuse around core and advance java
corejava - Java Interview Questions
corejava how can we make a narmal java class in to singleton class
corejava
corejava Creating the object using "new" and usins xml configurations whih one is more useful?why
corejava - Java Interview Questions
singleton java implementation What is Singleton? And how can it get implemented in Java program? Singleton is used to create only one...://- - - - - - - -;) Meeya
Corejava - Java Interview Questions
corejava - Java Interview Questions
corejava - Java Beginners
for more information, pass by value semantics Example of pass by value semantics in Core Java. Hi friend,Java passes parameters to methods using pass
corejava - Java Beginners
for more information:... design patterns are there in core java?
which are useful in threads?what r... by GOF(Gang Of Four) are well known, and more are to be discovered on the way
corejava - Java Beginners
is : " + ntw.convert(num));
}
}
For
SEND + MORE = MONEY
SEND + MORE = MONEY Problem Description
Write a program to solve... by exactly one space.
Example
Sample Input:
1
SEND + MORE = MONEY
Sample Output... of magic formula is no more than 3, the number of digits of every number is no more
java - Java Interview Questions
java i want to java&j2ee interview questions.
Regards
Akhilesh Kumar Hi friend,
I am sending you a link. This link will help you.
Read for more information.
java - JSP-Interview Questions
java hi..
snd some JSP interview Q&A
and i wnt the JNI(Java Native Interface) concepts matrial
thanks
krishna Hi friend,
Read more information.
java - Java Interview Questions
/interviewquestions/corejava.shtml
I think this is enough.. but u can see if you wanna something more on java... preparation manner as well..
I have a java interview question URL from where you
interview questin of java - Java Interview Questions
& addvance java in interview? Hi Garima,
I am sending you a link. This link will help you. please visit for more information.
Java - Java Interview Questions
Interview interview, Tech are c++, java, Hi friend,
Now get more...
link.
CoreJava
corejava
Java Interview - Java Interview Questions
Java Interview Please provide some probable Java interviewe Question... you a link. This link will help you.
Please visit for more information.
Thanks
interview questions - EJB
:// Questions: more thing first of all you should sound in programming language...interview questions in Java Need interview questions in Java
java auto mail send - Struts
java auto mail send Hello,
im the beginner for Java Struts. i use java struts , eclipse & tomcat. i want to send mail automatically when... scheduler.It referesh the server can send mail after specify interval.
For more
plz send immediately - Java Beginners
plz send immediately Hi Deepak,
How are you,
Deepak I face some problem in my project,plz help me
My Questin is user input... help me
Hi ragni,
Read for more information
pls send maven interview questions
pls send maven interview questions pls send maven interview questions to anvesh2406@gmail.com
java - Servlet Interview Questions
java servlet interview questions Hi friend,
For Servlet interview Questions visit to :
Thanks
java - Java Server Faces Questions
java Java Server Faces Quedtions Hi friend,
Thanks
core java - Java Beginners
Core Java interview Help Core Java interview questions with answers Hi friend,Read for more information.
Send multipart mail using java mail
Send multipart mail using java mail
This Example shows you how to send multipart mail using java
mail. Multipart is like a container that holds one or more body
Send me Binary Search - Java Beginners
Send me Binary Search how to use Binary think in java
give me the Binary Search programm
thx.. Hi friend,
import java.io....));
}
}
-----------------------------------------
Read for more information.
Thanks
hint - Java Interview Questions
hint Dear roseindia,
i want the java interview question... the following link:
Here you will get lot of interview questions and their answers.
Thanks thanks for your
Java - Java Interview Questions
it as
System.out.println();
For more information on Java visit to :
Thanks...
please send answer for me...
Hi friend,
System.out.println
java - Java Interview Questions
java what is the output for this program..
please send quickly... b=new B();
b.a();
}
}
For read more information :
Thanks
java - Java Interview Questions
/interviewquestions/
Here you will get lot of interview questions... questins in java .wat normally v see in interviews (tech apptitude)is one line... simple 2 answer..but it becomes complicated wen v see questions in jumbled formlet Response Send Redirect - JSP-Servlet
Servlet Response Send Redirect Hi,
Thank you for your previous... be one of my last questions as I am almost finish with my web medical clinic app... to be able to do search on the Mysql table based on more than one column instead of one
Javascript - Java Interview Questions
with running mode.
If you have any problem then send me detail and explain deeply. Please send me source code after that i will implement according to your requirement.
Visit for more information:
core java - Java Interview Questions
();
}
}
For read more information on java visit to :... in core java that is,Is there any instanceInitialization() method is there.If any,what is the purpose?plz its urgent,send me the answer.Thanks inadvance.
java Questions
java Questions do we need to implement all methods interface if yes I want example code
please send me
more circles - Java Beginners
more circles Write an application that uses Circle class you created in the previous assignment.
? The program includes a static method createCircle() that
o reads a radius of a circle from the user
o creates a circle object... :
String val = request.getParaMeter(text1);
Note:
Java accepts
Please Send - Java Beginners
Please Send Hi,
this is perfect ur sending code
I want java script coding
Steps:-If user click on refresh button then page iage will be refresh
Thanks Hi friend,
Thanks
vineet
JAVA - Java Interview Questions
JAVA i need objective Questions and answers ( with 4 or 5 choice) in JAVA. Can anyone help me?
H1!!
pl. mail your email id to
asciimails@gmail.com.
I will send you within 2-3 days.
Krishna
How to send HTTP request in java?
How to send HTTP request in java? How to send HTTP request in java
send ACK to external device
send ACK to external device i want to do connection with an external device that make medical tests to send to it a message but, this device should receive an acknowledgement before the message that i want to send , i want
plz send code for this
plz send code for this Program to calculate the sum of two big numbers (the numbers can contain more than 1000 digits). Don't use any library classes or methods (BigInteger etc
java questions
java questions how is java platform independent?
is jsp thread safe... (known as the Platform independent) is one of the important key feature of java language that makes java as the most powerful language.
When Java Code
send mail using smtp in java
send mail using smtp in java How to send mail using smtp in java?
Sending mail in JSP - SMTP
java,j2ee - Java Server Faces Questions
etc etc. please send to me the link.
3.i want to know more about collections. send me the link.
thanks and regards
siba prasad mishra Hi siba
interview question - Java Interview Questions
interview question hello i want technical interview question in current year
Hi Friend,
Please visit the following links:
pls send reply - Java Beginners
pls send reply i get the error
Error Occurred:java.sql.SQLException: Access denied for user 'root'@'localhost' (using password: YES)
how to remove
java mail send using setText()
java mail send using setText() Hai , Am newly mail send portion in java , Here i write my code ,
final String userName = "xxxxxx@gmail.com... testing");
message.setText("<html><head><title>java
Web application? - Java Interview Questions
for more information.
Thanks... to the databases. In an e-comm scenario when a user (first tier), send a request... varied results. To varying degrees, programmers are proficient in one or more
more doubts sir. - Java Beginners
more doubts sir. Hello sir,
Sir i have executed your code... in the bottom of the page.sir i also need to add some more buttons as in internet exoplorer such as search bar and some more buttons.Sir help me out
jsf - Java Server Faces Questions
in advance. Hi,
Please send me error code and explain in detail.
Visit for more information.
Validation code - Java Interview Questions
Validation code Hi,
Anyone can please send me a javascript function...;
}
}
For more information on Javascript visit to :
http...
*************
Java Script Calender Date Picker
function isNumberKey(value
Java - Java Server Faces Questions
Java I want complete details about Java Server Faces. Hi friend,
Java Server Faces is also called JSF.
JavaServer Faces is a new framework for building Web applications using JSP and Servlet.
Java Server
even more circles - Java Beginners
even more circles Write an application that compares two circle objects.
? You need to include a new method equals(Circle c) in Circle class. The method compares the radiuses.
? The program reads two radiuses from the user
send sms from pc to mobile
send sms from pc to mobile java program to send sms from pc to mobile
send me example of jmsmq - JMS
send me example of jmsmq please send me example about jmsmq (java microsoft message queuing ) library
pls send reply - Java Beginners
pls send reply i get the error
Error Occurred:java.sql.SQLException: Access denied for user 'root'@'localhost' (using password: YES)
how to remove
Hi Friend,
Check your connection string, username
Javascript Function - Java Interview Questions
Javascript Function Hi All,
Can anyone please send me a javascript function to validate the Date of birth field...it should check,leap year...)
:
For more information on Date Validation visit
java - Java Interview Questions
java what is java? Hi friend,
Read for more information,
Thanks
(Roseindia Team
java - Java Interview Questions
java java.lang.exception is apublic or protected or serializable or any thing Hi friend,
For more information on java.lang.exception visit to :
http
java what is the deffirence between java language and other language ?
Hi Friend,
Differences:
1)Java is more secure than any other languages.
2)Java is platform independent.
3)Java is purely Object Oriented
java - Java Interview Questions
Java Abstract Class and Interface Info What are the Java Abstract... but cannot be instantiated.
For more details on abstract methods and class read the given tutorial.
java - Java Interview Questions
serialized.
For more information,visit the following links:
Thanks
java - Java Interview Questions
roseindia.rose.write("hello");
please send answer for me...
Hi friend... out = new BufferedWriter(fstream);
out.write("Hello Java"); Does Java have "goto"? Hi friend,
No, Java....
----------------------------------------------------------------------
Read for more information.
Thanks
java - Java Interview Questions
Java factorial program I wanted to know ..how to create a factorial program in Java using recursive method.Thanks in Advance! Hi friend...(factorial(i)); }}------------------------------read for more information.http
Java - Java Interview Questions
responses to more that one type of client.
For read more infromation visit to :
Thanks
java - Java Interview Questions
or the Java Development Kit is a set of a Java compiler, a Java interpreter, developer tools, Java API libraries,
documentation which can be used by Java developers to develop Java-based applications.JDK is Java development kit. It is used
Java - Java Interview Questions
.
----------------------------------------
Read for more information.
Thanks...Java What are the Disadvantages of Java ? Hi friend,
Java has the following disadvantages.
1. Because programs written in Java
java - Java Interview Questions
java which one is more efficient
int x;
1. if(x==null)
2.if(null==x)
state which one either 1 or 2
java - Java Interview Questions
java how can i configure tomcat in eclipse? Hi friend,
Read for more information.
java - Java Interview Questions
criteria.
In a simple word, you can say Query String send values from one webpage
Java - Java Interview Questions
Java Questions on Java
java - Java Interview Questions
java what are the various names for method overriding and method overloading
Hi,
I am sending a link. This link will help you.
Visit for more information.
JAVA - Java Interview Questions
stream is called as Serialization.
For more information,visit the following link:
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://roseindia.net/tutorialhelp/comment/68133 | CC-MAIN-2015-32 | refinedweb | 2,179 | 57.67 |
Instead of files and directories, I'm going to use States, Counties and Cities. Essentially this application will be used to give the user an easy way to select a city.
Flex offers many components that can help us build this application. The controls I immediately consider for the job are the List Control, DataGrid, and the Accordion (in combination with the List). The List is the obvious control to start with because it represents the data in the right way - a list of states, counties, and cities. The reason I also considered the DataGrid and the Accordion (with the List) is because the they both have a header. I want an easy way to label the three columns/list 'States','Counties' and 'Cities'. With that said, I selected the Accordion with the List option. Using this option also allows for future expansion of the tool. For instance, one could adapt the tool to add country, then state, county, and city. The Accordion naturally has this grouping capability.
With that said, our first code block contains our basic UI. The structure is pretty simple. The layout of the application is vertical. I've added an HBox which contains the main components of the application.
The basic structure of each list is a List Control inside a Canvas Container which is inside of an Accordian Control. The Canvas is there because Accordians must have a container as a child and a List is not a part of the container package. We repeat this 3 times, one for each column and give the appropriate name.
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:
<mx:HBox
<!-- States -->
<mx:Accordion
<mx:Canvas
<mx:List
</mx:Canvas>
</mx:Accordion>
<!-- Counties -->
<mx:Accordion
<mx:Canvas
<mx:List
</mx:Canvas>
</mx:Accordion>
<!-- Cities -->
<mx:Accordion
<mx:Canvas
<mx:List
</mx:Canvas>
</mx:Accordion>
</mx:HBox>
<!-- Selected City -->
<mx:Label
<mx:Script>
<![CDATA[
public function selectCounties():void{
countiesList.dataProvider =
locations.state.(@name==statesList.selectedItem).counties.county.@name
}
public function selectCities():void{
citiesList.dataProvider =
locations.state.(@name==statesList.selectedItem).counties.county.
(@name==countiesList.selectedItem).cities.city.@name;
}
]]>
</mx:Script>
</mx:Application>
I've set the width and height to all containers to 100%. This will make it easy to later embed this application into a web page or other Flex application as a module. Also notice how the dataProvider attribute is only set for the statesList. This is because the countiesList and the citiesList are not populated until a state is selected. These dataProviders are set using ActionScript and are triggered by the click event listeners for both objects.
Here is what the start of our selector looks like:
The Data
The data for our selector is an XML Object:
<!-- Data -->
<mx:XML
<states>
<state name="Florida">
<counties>
<county name="Broward">
<cities>
<city name="Fort Lauderdale"/>
<city name="Coconut Creek"/>
<city name="Plantation"/>
<city name="Pompano Beach"/>
<city name="Cooper City"/>
<city name="Hollywood"/>
<city name="Davie"/>
<city name="Weston"/>
</cities>
</county>
<county name="Palm Beach">
<cities>
<city name="Boynton Beach"/>
<city name="Lake Worth"/>
<city name="West Palm Beach"/>
<city name="Greenacres"/>
<city name="Delray Beach"/>
<city name="Boca Raton"/>
<city name="Palm Springs"/>
<city name="Lake Clark Shores"/>
</cities>
</county>
</counties>
</state>
<state name="Georgia">
<counties>
<county name="Macon">
<cities>
<city name="Barrons Lane"/>
<city name="Bartlett"/>
<city name="Clearview"/>
<city name="Cutoff"/>
<city name="Fields"/>
<city name="Five Points"/>
<city name="Hicks"/>
<city name="Spalding"/>
<city name="Winchester"/>
<city name="Ideal"/>
</cities>
</county>
</counties>
</state>
</states>
</mx:XML>
As you can see above, I did not enter all 50 states. For this example it is only required to get a few states working and populate the remaining ones at a later time. Now, lets look at the structure of the XML object.
The root node is called 'states'. Within each 'states' you will see a 'state' node which contains all the information about a particular state. Within each 'state' node there is a 'counties' node and within each 'counties' node there is a 'county' node. As you can guess, there is a 'cities' node inside of each 'county' node. This a hierarchical breakdown of the data which will make getting the states, cities or counties from an external source easy.
Connecting the controls to the dataProviders
As I mentioned earlier the statesList has a dataProvider from the moment the application creation is completed. The countiesList and citiesList do not have dataProviders until a state is selected. This makes less confusion for end users who may see all three columns populated without having selected anything.
To populate the counties list, we use the click event listener. Our click event is tied to the following code:
public function selectCounties():void{
countiesList.dataProvider =
locations.state.(@name==statesList.selectedItem).counties.county.@name
}
If you haven't heard of E4X, now is a perfect time to introduce it to you. E4X is a specification that has been implemented in Action Script. It allows easy access and manipulation of XML nodes and attributes. It is exactly what we are using here in order to find the counties which belong to a selected state. Lets break down the dataProvider assignment statement.
- First we have the Left Hand Side (LHS) of countiesList.dataProvider. This is the dataProvider property of the countiesList. We want to assign it to the counties of the currently selected state.
- Next we have locations.state.(@name==statesList.selectedItem). This E4X expression simply says: in our locations XML object, find the state node which has a name attribute of statesList.selectedItem. statesList.selectedItem is the name of the currently selected State (from the statesList List). At this point we have access to all the information we need about the state. This is a lot of information which includes all the counties and all the cities. What we really want is just a list of counties. To get the counties we need to use a bit more E4X.
- Finally, get the counties from the state we are viewing. We simple add .counties.county.@name to our E4X. This says, for each state that we find, extract the county names. In more detail, find the counties node of the state, and for every county found extract the value for the name attribute. Notice the @ symbol of front of 'name'. I don't want to dive too much into E4X - see the links at the end of this article for more information.
To get the citiesList populated, we follow the same principal. We find the selected county and get a list of the all the cities in the county. Here is the selectCities method which is called once a county is selected:
public function selectCities():void{
citiesList.dataProvider =
locations.state.(@name==statesList.selectedItem).counties.county.
(@name==countiesList.selectedItem).cities.city.@name;
}
Now we have a basic working multi-list selector. Here is what it looks like:
This is a good example of how to leverage Flex to create powerful apps. A real world app may do some things slightly differently. For instance, we've only loaded two states and a subset of counties and cities for those states. Chances are, the end user would not need all the states data to be loaded at the start of the program. In this instance, we would use something like the HTTP SERVICE tag to retrieve the data from a web server. This keeps the Flex app as light as possible. Getting the data from a web server is a bit out of scope for this article, but I've listed some links below that can get you on your way.
Selecting a City, and Cleaning Up the Navigation
Now that we have the selection of the state, county and city working, we want something to happen once the user does select a city. What we will do is open a new web page once a city is double clicked. For now, the page will be a yahoo.com search page for the city selected. For this task we will use the NavigateToUrl method.
We will need to import the classes needed for navigation.
import flash.net.navigateToURL;
We will also have to enable the double click on the citiesList and add an event listener for double clicking. Here is what the citiesList looks like:
<mx:List
Our new function showCityPage looks like this:
public function showCityPage():void{
navigateToURL(new URLRequest("" +
citiesList.selectedItem));
}
Cleaning up the Navigation
This is where the real magic of navigating to a new URL happens. The navigateToUrl method takes as a parameter a URLRequest object. This object essentially is the link we wish to visit. We create this object on the same line that we call navigateToURL for convenience. As you can see, we are sending the user to "" concatenated with the city which was selected. This will direct us to the yahoo search result for the city we double clicked.
That pretty much takes care of our setting an action once a city is selected. No we can turn our attention to a minor UI problem that happens when a user selects a different state. If a user is viewing a list of cities from a particular state and they change the state, the list of cities is not updated. It should return to a state of being blank since no cities are selected. To combat this problem we will set the dataProvider to null for citiesList whenever a state is selected. Setting it to null effectively removes all data from the control. We can add this single line of code inside the selectCounties method. In his way, it automatically removes the dataProvider any time a state is selected. Here is the new selectCounties function:
public function selectCounties():void{
countiesList.dataProvider = locations.state.
(@name==statesList.selectedItem).counties.county.@name
citiesList.dataProvider=null; // clear the cities list
}
Summary
Well, that's it. We've created a simple to use Flex app that can be expanded for many uses. This type of app is 115 lines long including the data. Not bad. Here are some links that relate to this article. Enjoy!
Related links:
navigateToURL:
URLRequest:
HTTPService:
Accordion:
Canvas:
List:
If you have read this article you may be interested to view :
- Editing DataGrids with Popup Windows
- Data Access Methods in Flex 3
- Flex 101 with Flash Builder 4: Part 1
- Flex 101 with Flash Builder 4: Part 2
- Working with XML in Flex 3 and Java-part1
- Working with XML in Flex 3 and Java-part2 | https://www.packtpub.com/books/content/flex-multi-list-selector-using-list-control-datagrid-and-accordion | CC-MAIN-2016-36 | refinedweb | 1,746 | 64.41 |
Summary: this post explains how to convert a Health & Life Co. HL168Y blood pressure monitor (sold by Maplin for €17) into an ambulatory blood pressure monitor using nothing but a few components costing no more than €3. Basic knowledge of electronics, simple tools and a PIC microcontroller programmer are required. The modification was successfully tested and a chart of the data is presented at the bottom of this post.
Blood pressure (BP) is notoriously difficult to measure. A single reading at a doctor’s surgery can be misleading as BP varies throughout the day and is often biased due to anxiety ("white coat hypertension"). For a better picture an ambulatory blood pressure monitor (ABPM) is used. This is an unobtrusive device that automatically takes readings at fixed intervals, usually 20 minute intervals during the day and 1 hour intervals at night. The data is stored on the device and is transferred to a computer at the end of the measuring period.
ABPMs are generally made for the medical profession and cost in excess of €1K. As far as I tell there are just a few key features that differentiate ABPMs from the cheap consumer models that can cost as little as €17:
- Automatically take measurements
- Transfer data to computer
- Unobtrusive and robust
- Calibrated and certified
Disclaimer: Do not try this with a BP monitor that you need for medical purposes. Modifications will almost certainly void your warranty and may affect the accuracy and performance of the device. If you want to tinker with your BP monitor I recommend buying one specifically for this purpose. Also do not use the data obtained for any serious medical decisions: it's of unknown quality.
Most electronic BPMs use oscillometric measurement. A cuff which is wrapped around the arm (at wrist or upper arm level) is inflated to about 200mmHg (26kPa) pressure. At this pressure the cuff is sufficiently constrictive to block blood circulation to the arm. A pressure sensor monitors the pressure in the cuff. In addition to measuring the absolute pressure in the cuff, this sensor sufficiently sensitive to ‘hear’ the characteristic pulsations of the heart (which are manifested as small variations in pressure).
At 200mmHg blood flow is constricted -- no heart beat will be detected. The cuff is slowly deflated. At the systolic pressure (around 120mmHg for most people) the heart will be able to overcome the cuff pressure. The microcontroller will be able to detect a small pulsing variations in the cuff pressure. The cuff continues to deflate until a point is reached where no heart beat is detected. This is the diastolic pressure.
It is a very simple device essentially comprising a cuff, air pump, pressure sensor. A microcontroller coordinates these components, make the calculations and display the result. As the Maplin device has a 40 reading memory, at its simplest all that is required is a means of pressing the button automatically every 20 minutes. The readings will have to be manually transcribed into a spreadsheet or other analysis tool. In theory, this could be done mechanically (using a solenoid actuator) but would would involve a cumbersome attachment to the front face of the device – not very practical. Fortunately simulating a button press electrically is trivial.
The HL168Y device has 4 buttons: Start/Stop, Mode, Set and Memory. These are standard rubber buttons. When depressed, a conductive rubber foot on the underside of the button squeezes against interleaved copper contacts on the PCB.
On the reverse of the PCB are test pads which provide a convenient way to test or tap into important points in the device. TP4 corresponds to the Start/Stop button. Pulling this line to ground (0V) simulates pressing the button.
A microcontroller unit (MCU) like a PIC from Microchip Technology Inc. is perfect for the task. PICs are small, cheap and operate at voltages down to 2V. The HL168Y uses 3V (2 x AAA cells) so the PIC can draw power from the same battery.
My first choice was a tiny 8 pin 12F675. I figured this was small enough to fit in the spare space inside the device housing. Unfortunately I ran into some programming problems so opted for a larger 18 pin 16F627.
The Hardware
Three points need to be tapped into:
- Red: Battery + terminal (3V)
- Black: Battery - terminal (0V)
- Orange: testpad TP4 (the Start/Stop button)
Solder three wires to the above points. I strongly recommend that the wires are restrained using tape (otherwise it’s all to easy to damage the PCB by accidental tugging on the wires). These three wires connect to pins on the PIC (see fig 4).
To facilitate easy reprogramming of the PIC I soldered a IC socket into some veroboard. The PIC can be easily mounted and removed as needed.
The red wire is connected to Vdd (pin 14), black to Vss (pin 5) and orange to PB0 (pin 6). I found that unless the master clear pin MCLR (pin 4) is tied high the device is unstable. Therefore under the board (not visible in the photo) is a 2K2 resistor linking pin 4 to pin 14 (any value 1K - 10K will do).
Test by powering up. If all goes well the BPM should automatically start taking a reading in 5 minutes (enough time to reset the clock). Tape the attachment to the side of the device using duct tape. Ensure there are no lose wires that can be snagged on something as you go about your day.
The Software
Delays are often achieved by a tight loop or using internal counters. As this is a battery operated device minimizing power consumption is vital. Delays using loops or timers would push current consumption into the mA region, depleting the battery in days.
Fortunately there is a low power alternative: the PIC SLEEP instruction which puts the chip into a deep hibernate mode. The device can be woken from sleep by interrupt (eg button press) or using the device Watchdog Timer. Using the SLEEP instruction I’ve been able to reduce average power consumption into the micro-amp region.
For the first iteration of this project I've decided to use the C compiler that comes with Microchip's free MPLAB IDE. The free C compiler does not produce the most compact code, but as it's such a short program it is not a problem.
The C program below is compiled with MPLAB and the resulting HEX file uploaded onto the 16F627 chip using a Velleman K8048 PIC programmer.
#include <htc.h> __CONFIG(INTIO & WDTEN & PWRTDIS & BORDIS & LVPEN & UNPROTECT); // Internal crystal frequency 4Mhz #define _XTAL_FREQ 4000000 int loopDelay (unsigned char n); int pauseSeconds (unsigned char n); int pauseMinutes (unsigned char n); void init(void) { PORTB = 0x00000001; // set RB0 on latch high before making pin output TRISB = 0b11111110; // set RB0 as output (1=input, 0=output) } void main(void) { init(); while (1) { pauseMinues (5); // Press start button twice (once to wake, once to start) pressStartButton(); pauseSeconds(2); pressStartButton(); pauseMinutes(15); } } /** * Simulate the pressing of the Start button */ int pressStartButton () { PORTB = 0b0; loopDelay(128); PORTB = 0b1; } /** * Seep for one whole cycle of the watchdog timer (about 2s in 16F627) */ int wdtSleep () { CLRWDT(); SLEEP(); } /** * Sleep for 'n' seconds by calling wdtSleep() in a loop */ int pauseSeconds (unsigned char n) { int j; n /= 2; for (j = 0; j < n; j++) { wdtSleep(); } } /** * Sleep for 'n' minutes by calling wdtSleep() in a loop */ int pauseMinutes (unsigned char n) { int i,j; for (j = 0; j < n; j++) { // about 1 minute for (i = 0; i < 30; i++) { wdtSleep(); } } } /** * Cause a delay by looping. Useful for short delays, but uses considerably * more power than using SLEEP */ int loopDelay (unsigned char n) { unsigned char i,j; for (i = 0; i < n; i++) { CLRWDT(); for (j = 0; j < 127; j++) { NOP(); } } }
The Data
I tested this device on myself for a 30 hour continuous stretch over the weekend. It's like wearing a big heavy watch and generally was not a problem. For good readings it is necessary hold still and keep the device at heart level during the 30 seconds or so it takes to measure. It can be easily removed at any time if necessary. I was able to sleep with it (although I think it did affect the quality of my sleep).
The following is a chart of that run. It shows the expected day/night difference, with a noticeable dip during the peak of the sleep cycle at about 04:00.
Limitations
The main problem with this that it starts automatically and continues to trigger a measurement every 20 minutes while batteries are in the device. The only way to turn it off is to remove the battery (which by the way also erases the clock and memory). Also we have to manually key the data from the screen into our spreadsheet / analysis software. This is a major pain point for which I no solution yet. I would also very much like to be able to fit this modification inside the original housing.
Finally a wrist cuff device is not the best for ABPM because readings should be taken while the wrist is at heart height. Any higher or lower will bias the reading. An upper arm cuff is far more preferable as it's always (approx) at heart level and is more comfortable to wear. It may be possible to modify this device into such a configuration, although it will involve tearing it apart.
Phase 2 will attempt to address some of these shortcomings.
11 comments:
Wow, I need somthing like this.
Did you continue your work? You should come up with some professional device.
Please let me know.
Work is ongoing in the few free hours I have. I just got a upper arm cuff device which is more suited for 24 hour monitoring. I hope to have an update in Feb 2011.
It's one thing doing this in a hobby capacity. A commercial product of this nature opens a huge can of worms in terms of certification, product liability insurance and potential patent infringement. If anyone is interested in working with me to develope a commercial product do let me know. There may be a niche for an "open hardware" BP monitor.
How do you handle the abnormal SYS & DIA & PULSE RATE measured data while you are walking or talking ?
Do you know how do the professional ABPM handle this ?
Can u develop some algorithm to filter the above noise ?
Good question re measuring BP while moving. Right now I just rely on the devices own algorithm built into its proprietary controller. As cheap consumer units I would imagine they don't handle that situation well.
However I recall when I wore a 'proper' ABPM for a day (a SpaceLabs model I think), I was told by the nurse to try to be as still as possible during the measurements -- so I'm not sure that even the expensive models can do a good job of handling excessive motion well during measurements.
But you raise a good point. The next thing I want to do here (when I get time) is to eliminate the proprietary controller and see if I can implement the whole process with my own (open source!) firmware on my own MCU.
Hello Joe,
I want to hack a blood pressure monitor for a school project; tap into readings and transmit them for other use. Since you started this project have you found any other models of BP monitors that are particularly easy to access? I have a lot of programming experience but I'm very green when it comes to working with electronics - can you offer any tips on getting started?
Andrew, It turns that all the low cost models I've seen for sale locally are rebadged "Health and Life Company" (Taiwan) models. I think most of them share the same basic design and I suspect the EEPROM storage schema (documented in my blog posts) will be the same for all models. An alternative approach is buy a cheap monitor, rip out the controller board and replace with your own (eg an Arduino or Raspberry Pi). You will likely need a little bit of analog signal conditioning for the pressure sensor prior to feeding it to an ADC. Otherwise it's just IO lines driving the pump and release valve. There are many application notes by semiconductor manufacturers on how to implement a BP monitor (eg google "Freescale AN4328"). Most of the real work will be in software... signal processing the pressure sensor to listen for the systolic and diastolic pressure points. Btw: this is where a beefier board like a Beagle Bone or Raspberry Pi would have the edge on the Arduino). If you have any specific questions please feel to post here, or email me jdesbonnet at gmail dot com.
Hi there Jdes, was wondering if your still working on this project?
I would like to discuss a project with you
Hi Joe,
Where were you able to purchase the Health & Life wrist BP monitor? We were not able to find it on the web.
Thanks!
Hi Joe,
Where did you purchase the Health and Life HL168 Wrist BP Monitor? Could you please provide us with a link. We weren't able to find it on the web.
Thanks!
I got the first model in Maplin (a UK/Ireland chain of stores similar to Radioshack in the US). Later I found models under the brand 'Sanitas' in Lidl which was also a rebranded Health and Life machine. It seems many of the cheap BP monitors are produced by just a few manufacturers and are rebranded.
I found one on Amazon in the US. Search for "Wristech". You should find several results from North American Healthcare. I got one that cost $24.99, and so far it's worked as Joe has described, with the test pads and everything. | http://jdesbonnet.blogspot.co.uk/2010/05/how-to-make-ambulatory-blood-pressure.html | CC-MAIN-2018-17 | refinedweb | 2,318 | 62.27 |
[
]
Romain GERARD commented on CASSANDRA-13418:
-------------------------------------------
Yes I am still looking forward to test the proposition of [~krummas] and to add unit tests.
I am right now in vacation, so I haven't had the time to play around with the code, but it
is defintly in my todo list on my return (begin of August)
So I haven't forgotten the patch.
> Allow TWCS to ignore overlaps when dropping fully expired sstables
> ------------------------------------------------------------------
>
> Key: CASSANDRA-13418
> URL:
> Project: Cassandra
> Issue Type: Improvement
> Components: Compaction
> Reporter: Corentin Chary
> Labels: twcs
> Attachments: twcs-cleanup.png
>
>
> explains it well. If you really
want read-repairs you're going to have sstables blocking the expiration of other fully expired
SSTables because they overlap.
> You can set unchecked_tombstone_compaction = true or tombstone_threshold to a very low
value and that will purge the blockers of old data that should already have expired, thus
removing the overlaps and allowing the other SSTables to expire.
> The thing is that this is rather CPU intensive and not optimal. If you have time series,
you might not care if all your data doesn't exactly expire at the right time, or if data re-appears
for some time, as long as it gets deleted as soon as it can. And in this situation I believe
it would be really beneficial to allow users to simply ignore overlapping SSTables when looking
for fully expired ones.
> To the question: why would you need read-repairs ?
> - Full repairs basically take longer than the TTL of the data on my dataset, so this
isn't really effective.
> - Even with a 10% chances of doing a repair, we found out that this would be enough to
greatly reduce entropy of the most used data (and if you have timeseries, you're likely to
have a dashboard doing the same important queries over and over again).
> - LOCAL_QUORUM is too expensive (need >3 replicas), QUORUM is too slow.
> I'll try to come up with a patch demonstrating how this would work, try it on our system
and report the effects.
> cc: [~adejanovski], [~rgerard] as I know you worked on similar issues already.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@cassandra.apache.org
For additional commands, e-mail: commits-help@cassandra.apache.org | http://mail-archives.apache.org/mod_mbox/cassandra-commits/201707.mbox/%3CJIRA.13061867.1491427038000.270905.1500362342946@Atlassian.JIRA%3E | CC-MAIN-2017-43 | refinedweb | 385 | 59.74 |
The core engine of this tip can be found here (). For sure, this is a tremendous utility application to manage the different passwords that we need to remember. Furthermore, the simpler approach of the code helps the beginner get a faster and firmer jumpstart into .NET cryptography namespace and security concepts.
This is an extension tip for the primary one (). All original credits to the original author
The intent of this extension article is to help it augment the functionality of determining the strength of the password against the industry standard Google's API. The selected password is queried against Google webservice and based on the response received, an appropriate message is shown to the user. Google's Password API is straight-forward and simple. It takes the password as a string (querystring) and returns a number from 1 through 4 indicating poor to strong. This article uses WebClient.DownloadString to grab the output, determine the strength and show the same to the user.
I have also made two other minor changes in the code. Cryptocore.cs declares a few of the members as protected whereas the class itself is declared sealed. This will trigger a compiler warning because the protected access modifier does not sound logical on a sealed class. I have fixed this by changing the access modifier to private.
I hope that the primary article and the extension would be of good value for developers and users alike.
People can consider improving the UI for password entry depending on the webservice' output. I have been using either WebClient DownloadData or DownloadFile hitherto and now I discovered that for simple output, we now have a WebClient.DownloadString that does the trick. Remember but it just returns the body of the response.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
General News Suggestion Question Bug Answer Joke Rant Admin
Man throws away trove of Bitcoin worth $7.5 million | http://www.codeproject.com/Articles/578129/Simple-Password-Manager-with-Google-API-validation?msg=4547720 | CC-MAIN-2013-48 | refinedweb | 331 | 55.84 |
Why we need a new lifetime management model
Managed code relies on Garbage Collection (GC) to manage the lifetime of object. For Add-In, object reference can be cross-appdomain or cross-process. We cannot rely on GC solely to give us automatic lifetime management. We need a new model that works cross any isolation boundary and can keep Add-In alive when it is in use and shut it down when it ends its lifetime. We also want to make sure that the container for the Add-In (either appdomain or process) can be recycled when there is no active Add-In.
We created a lifetime model based on remoting infrastructure. In this model, every object cross the isolation boundary will be the ISponsor of itself. Remoting infrastructure will ask the sponsor whether it should renew the lease and keep it alive. CLR Add-In model has implemented ISponsor logic in a class called ContractBase, which is supposed to be used by all contract based objects that go across the isolation boundary. Add-In and pipeline developers can easily use our ContractBase as the base class for lifetime management. The purpose of this blog is to help you understand our lifetime management model and avoid a few bad coding patterns in your Add-In development.
Lifetime of Add-In and Reference Counting
If you read our blogs or wrote a simple Add-In pipeline before, you would know that an Add-In is kept alive by Add-In Adapter which inherits from ContractBase. The Add-In lifetime management is essentially ContractBase lifetime management.
Below is how the ContractBase life cycle looks like. Assuming the left side is the LA appdomain and the right side is the RA appdomain (don’t assume LA is the Host and RA is the Add-In. It could be the other way around for data adapters)
2 contract ---------------------------------------------------------- 1 new ContractBase( ); TotalToken=0
3 contract.AcquireLifetimeToken ------------------------------- 4 ContractBase; TotalToken =1
5 Use contract functions ----------------------------------------- 5 ContractBase renew lease since TotalToken!=0
6 contract.RevokeLifetimeToken ------------------------------- 7 ContractBase; TotalToken=0
9 contract cannot be used anymore -------------------------------8 If TotalToken==0 disconnect or AD.Unload
First a new ContractBase object will be created in RA and passed to LA as a MarhalByRefObject. Inside LA this object will be treated as a contract (typically it implements both IContract and other functions that are supposed to go across the isolation boundary).
The contract can call AcquireLifetimeToken to notify RA to increase the token count. Once the token has been acquired, the contract interface can be considered as a live object.
I used 5 twice. It means that operations at both sides of the isolation boundary can execute simultaneously.
When contract user finished all the work it intended to do, it could call contract.RevokeLifetimeToken to notify RA side again. This API will decrease the ref count.
Once the ref count reaches 0 it won’t renew the lease; therefore it is disconnected. If the ContractBase is the only adapter of the appdomain and the appdomain is created by our activation API, we may even unload the appdomain.
If we try to use the contract in LA after it is disconnected, we will see exceptions in LA (either RemotingException or AppdomainUnloadedException).
Understand ContractHandle, Remoting and Garbage Collection
From above explanation, we may notice that the lifetime management work is really done by two APIs in IContract, AcquireLifetimeToken and RevokeLifetimeToken. If you compare IContract and IUnknown, you will agree that both interfaces carry some similarities. One advantage of IContract is that it uses a list of tokens to keep track of all the references. Each reference will get a unique token. Therefore it is not likely to release someone else’s token, which is common in COM and it isvery hard to debug.
To improve the user experience dealing with the IContract, we created the ContractHandle class. ContractHandle calls AcquireLifetimeToken in the constructor and RevokeLifetimeToken in the finalizer or its dispose method. This class actually brings us close to the GC world. Since GC can manage the lifetime of ContractHandle, it will call RevokeLifetimeToken if it sees no root for ContractHandle. Acquire-Revoke lifetime token will automatically pair up, and we won’t confuse ourselves with complex if-else blocks.
ContractHandle brings us closer to a GC based world. There are still some significant difference between GC and our lifetime management model. We do have some guidelines in terms of using the ContractBase.
1. Always hold a contract handle before using the contract functionality.
2. Avoid decreasing token count down to zero if contract is still in use.
3. Avoid circular reference of IContract.
4. Be aware of JIT optimization while using ContractHandle.
Violating above rules will cause serious bugs in your product. Even worse, the bugs are totally not deterministic. If GC does not happen at the right moment, you may not even be able to find it in your dev test environment. But sooner or later, customers will see them.
Fortunately, we have run into problems violating above rules. We learned how to debug them, fix them. We want to blog about it so that you don’t need to make the same mistakes.
Demo Scenario
To illustrate the issues that user may see in lifetime management, I created below sample. This sample contains several bugs that I will discuss later. In our pipeline model, Add-In and Add-In base are not really interesting for lifetime management. To simplify the scenario, Add-In and Addin base are intentionally omitted. We will focus on contract, host side adapter and Add-In adapter.
Here is the contract. IDemoContract contains two methods, Reverse and StoreContract. IDemoDataContract contains getter setter to an integer value.
[AddInContract]
public interface IDemoContract : IContract
{
IDemoDataContract Reverse(IDemoDataContract data);
void StoreContract(IDemoDataContract data);
}
public interface IDemoDataContract : IContract
int GetValue();
void SetValue( int x);
public delegate void Func();
Here is the AddInAdapter. It implments the IDemoContract. It also contains DataAdapter which implements IDemoDataContract. To make our debugging experience better, we changed the LeaseTime in the Add-In appdomain to 5 seconds. The default lease time is 5 minutes that is tool long for debugging.
[AddInAdapter]
public class TestAddInAdapter : ContractBase, IDemoContract
ITestAddInBase TestAddInBase;
ContractHandle storedContractHandle;
internal TestAddInAdapter(ITestAddInBase inTestAddInBase)
{
TestAddInBase = inTestAddInBase;
System.Runtime.Remoting.Lifetime.LifetimeServices.LeaseTime = new TimeSpan(0, 0, 5);
System.Runtime.Remoting.Lifetime.LifetimeServices.RenewOnCallTime = new TimeSpan(0, 0, 5);
}
public IDemoDataContract Reverse(IDemoDataContract data)
DataAdapter retData = new DataAdapter();
retData.SetValue(0 - data.GetValue());
return retData;
public void StoreContract(IDemoDataContract data)
storedContractHandle = new ContractHandle(data);
}
}
public class DataAdapter : ContractBase, IDemoDataContract
int m_Value = 0;
public int GetValue() { return m_Value; }
public void SetValue(int num) { m_Value = num; }
}
Here is the HostAdapter. It implements the host side view. DataAdapter2 also implements IDemoDataContract. It supports the SetTrigger function. It will write to console if x is set to zero. RunTest() is a simple method defined in HSVBlogSample
[HostAdapter]
public class TestHostAdapter : HSVBlogSample
IDemoContract contract;
ContractHandle handle;
internal TestHostAdapter(IDemoContract inContract)
contract = inContract;
handle = new ContractHandle(inContract);
public override void RunTest()
DataAdapter2 data = new DataAdapter2();
for (int i = 1; i <= 10; i++)
{
data.SetValue(i);
data.SetTrigger(delegate { Trigger(); });
IDemoDataContract retData = contract.Reverse(data);
Console.WriteLine("The value is {0}", retData.GetValue());
}
contract.StoreContract(data);
public void Trigger() { Console.WriteLine("Value Set to Zero.");}
}
public class DataAdapter2 : ContractBase, IDemoDataContract
Func trigger;
public void SetValue(int num) { m_Value = num; if (m_Value == 0) trigger.Invoke();}
public void SetTrigger(Func func) { trigger = func; }
}
Incorrect Coding Pattern -1 (unprotected IContract issue)
The first issue is in the implementation of Reverse method.
retData.SetValue( 0 - data.GetValue() );
The first question is where is data from. It is called from HostAdapter into AddInAdapter, therefore crossing the isolation boundary. It seems that nobody is managing the lifetime of data. It could be disconnected at the other side of the isolation boundary.
Since data is a contract, it is not safe to use it without taking a ContractHandle. There is a easy way to illustrate the issue here if you add Thread.Sleep() for 10 seconds before data.GetValue(). It is very likely to see a RemotingException.
One way to fix the issue is writing code like below
using (ContractHandle handle = new ContractHandle(data))
DataAdapter retData = new DataAdapter();
retData.SetValue(0 - data.GetValue());
return retData;
A little exercise: there us a similar issue in HostAdapter. Can you find it?
Incorrect Coding Pattern -2 (multiple reference issue)
Assuming you fixed above issue, the next type of issue will surface. It is a little more complex than the first one. You do need to know some background to understand this. Let’s look at the HostAdapter
data.SetValue(i);
data.SetTrigger(delegate { Trigger(); });
IDemoDataContract retData = contract.Reverse(data); /// May throw Here
contract.StoreContract(data); /// May throw Here
The problem here is that we are calling contract.Reverse(data) multiple times on the same data. Reverse(data) is executed at the other side of the isolation boundary. During the first cycle (i=1), the other side of the isolation boundary will have one ContractHandle attached with data. After Reverse(data), that ContractHandle will go out of scope and could be disposed by GC. Therefore the TotalToken becomes zero for data. During the second cycle (i=2), the other side of isolation boundary will call ContractHandle constructor again on the same data, this will try to increase the token count from zero to one. This violates our rule No. 2 above.
Typically you won’t see any exception if GC does not happen. As your code logic becomes complex, this is very likely to give you surprise. If you put GC.Collect() before contract.Reverse, you will see an exception with this message “Once all outstanding references to a Contract have been revoked, new ones may not be acquired.” during the second cycle.
People may ask why don’t you disable this rule and allow token to go down to zero and go back up again. We cannot. We need a way to know when we could do clean up to remove dead appdomains and dispose unmanaged objectes etc. Removing this rule will potentially cause huge memory leak.
There are two solutions for this kind of problem. We can either create unique DataAdpater for each call. Or we can use a ContractHandle to increase the token count before it drops to zero like below.
for (int i = 1; i <= 10; i++)
{
data.SetValue(i);
data.SetTrigger(delegate { Trigger(); });
IDemoDataContract retData = contract.Reverse(data);
Console.WriteLine("The value is {0}", retData.GetValue());
}
contract.StoreContract(data);
Incorrect Coding Pattern -3 (circular reference issue)
I wrote a host with code like below. It simply activates a pipeline and call RunTest(). Token is a valid AddInToken we get from our discovery API. Remember that our HostAdapter inherits from HSVBlogSample.
while (true)
HSVBlogSample hsv = token.Activate<HSVBlogSample>(AddInSecurityLevel.FullTrust);
hsv.RunTest();
hsv = null;
Thread.Sleep(10000); GC.Collect(); GC.WaitForPendingFinalizers();
Thread.Sleep(10000); GC.Collect(); GC.WaitForPendingFinalizers();
Most people would not expect a memory leak in managed application like above. The host view is set to null. GC should take care of the rest.
We indeed has a memory leak here. Before we show the details, I want to explain a little bit about why we have multiple sleep and multiple GC.Collect here. Remember we set the lease time to 5 secondes. Sleep for 10 seconds help remoting to disconnect objects. In our lifetime management model, objects are not cleaned in one shot by GC. Depending on how you connect up your objects, we might need multiple GC and remoting-disconnection to clean up the memory.
In above example, no matter how many times we call GC or let the thread to sleep. The memory leak won’t go away. We can let the program run a few cycles and put a breakpoint at the beginning of the loop, then verify with SOS.
First, you can load SOS and dump appdomains.
.load sos
extension D:\WINDOWS\Microsoft.NET\Framework\v2.0.orcasx86ret\sos.dll loaded
!dumpdomain
--------------------------------------
Domain 9: 0028eaa0
LowFrequencyHeap: 0028eac4
HighFrequencyHeap: 0028eb1c
StubHeap: 0028eb74
Stage: OPEN
SecurityDescriptor: 002f6008
Name: Test
With above code, we can easily find OPEN appdomains like above besides the default appdomain for the host. Now we are sure that appdomains are not unloaded. Therefore someone must be keeping it alive from the host side. Someone must be holding a ContractHandle to keep the addin alive. The dumpheap command can verify that.
!dumpheap -type System.AddIn.Pipeline
Address MT Size
0280a86c 05491914 20
0280cec0 05259f6c 20
0281906c 05491914 20
0281c3dc 04f59f6c 20
total 4 objects
Statistics:
MT Count TotalSize Class Name
05259f6c 1 20 System.AddIn.Pipeline.ContractHandle
04f59f6c 1 20 System.AddIn.Pipeline.ContractHandle
05491914 2 40 System.AddIn.Pipeline.ContractHandle
Total 4 objects
From dumpheap output, we do see live ContracHandle objects in the memory. That is really strange. Let’s find out who is keeping them alive.
!gcroot 0280a86c
Scan Thread 2692 OSTHread a84
Scan Thread 1904 OSTHread 770
Scan Thread 2472 OSTHread 9a8
Scan Thread 2428 OSTHread 97c
Scan Thread 3524 OSTHread dc4
Scan Thread 3520 OSTHread dc0
DOMAIN(0021D738):HANDLE(WeakLn):1da10fc:Root:027bea50(System.Runtime.Remoting.Contexts.Context)->
027911b4(System.AppDomain)->
027be914(System.Runtime.Remoting.DomainSpecificRemotingData)->
0280ace4(System.Runtime.Remoting.Lifetime.LeaseManager)->
0280ad0c(System.Collections.Hashtable)->
0280ad44(System.Collections.Hashtable+bucket[])->
0280b6cc(System.Runtime.Remoting.Lifetime.Lease)->
0280ab2c(DataAdapter2)->
0280b0c0(Func)->
0280a85c(TestHostAdapter)->
0280a86c(System.AddIn.Pipeline.ContractHandle)
DOMAIN(0021D738):HANDLE(WeakSh):1da1298:Root:0280a85c(TestHostAdapter)->
02806818()
DOMAIN(0021D738):HANDLE(WeakSh):1da129c:Root:0280a85c(TestHostAdapter)->
Examine the gcroot output closely will reveal why all those objects are alive.
ContractHandle is kept alive by TestHostAdapter
TestHostAdapter is kept alive by DataAdapter2’s trigger method.
DataAdapter2 is kept alive because remoting did not disconnect it.
It seems remoting LeaseManager still consider DataAdapter2 a live objects.
Why remoting did not disconnect DataAdapter2? Let’s dump the object and find it out.
!do 0280ab2c
Name: DataAdapter2
MethodTable: 05491c5c
EEClass: 051744a0
Size: 44(0x2c) bytes
(D:\vbl\orcas\clrtest\testbin\AddIn\AddInImpl\RunBlogSampleCode\HostSideAdapters\Test1HostSideAdapters.dll)
Fields:
MT Field Offset Type VT Attr Value Name
790f6f50 400018a 4 System.Object 0 instance 0280b200 __identity
00000000 4000143 8 0 instance 0280ab58 m_lifetimeTokens
00000000 4000144 c 0 instance 00000000 m_contractIdentifiers
790f6f50 4000145 10 System.Object 0 instance 0280ab70 m_contractIdentifiersLock
7910016c 4000146 14 System.Random 0 instance 0280ab7c m_random
79104974 4000147 1c System.Boolean 1 instance 0 m_zeroReferencesLeft
790fb9c0 4000148 18 System.Int32 1 instance 0 m_tokenOfAppdomainOwner
790fb9c0 4000003 24 System.Int32 1 instance 10 m_Value
05491d64 4000004 20 Func 0 instance 0280b0c0 trigger
Let’s examing m_lifetimeTokens which is a List<int> defined in ContractBase. The list is to hold all the acquired tokens.
!do 0280ab58
Name: System.Collections.Generic.List`1[[System.Int32, mscorlib]]
MethodTable: 7917bf7c
EEClass: 791a6c60
Size: 24(0x18) bytes
(D:\WINDOWS\Microsoft.NET\Framework\v2.0.orcasx86ret\assembly\GAC_32\mscorlib\2.0.0.0__b77a5c561934e089\mscorlib.dll)
7910f3b0 40009c7 4 System.Int32[] 0 instance 0280ac88 _items
790fb9c0 40009c8 c System.Int32 1 instance 1 _size
790fb9c0 40009c9 10 System.Int32 1 instance 23 _version
790f6f50 40009ca 8 System.Object 0 instance 00000000 _syncRoot
7910f3b0 40009cb 0 System.Int32[] 0 shared static _emptyArray
It turns out the size of the list is 1, which means that someone is still referencing DataAdapter2. That is why remoting did not disconnect it.
Examing AddInAdapter code closely, we will see that TestHostAdapter keeps AddInAdapter alive.
AddInAdapter keeps the storedContractHandle alive, which holds the contract of DataAdapter2.
We got a big circular reference here. In the usual managed application, circular reference will be deemed as dead object. Unfortunaly, with the help of remoting infrastructure, our circular reference becomes real reference and kept alive by remoting LeaseManager.
To fix the above sample, we need to break the circular reference. One simple way is to define the Trigger() function in the HostAdapter to be static. Another way is to avoid storing ContractHandle in the AddInAdapter.
As you can imagine, even more complex circular reference pattern can be created. Stress testing is definitely helpful to discover such kind of issues.
Incorrect Coding Pattern -4 (JIT optimization issue)
When we discuss the first incorrect coding pattern, I left a question. There is one IContract with no protection in the HostAdapter. I don’t know whether you find out the answer or not. Here is the problematic code.
IDemoDataContract retData = contract.Reverse(data);
Console.WriteLine("The value is {0}", retData.GetValue());
Above code may throw exception while doing Console.WriteLine if retData gets disconnected. Since retData is from the other side of the isolation boundary, we need a ContractHandle to protect it.
Here is the fix.
ContractHandle handle2 = new ContractHandle(data);
Now handle2 will call AcquireLifetimeToken and we have some protection here to make sure retData won’t be disconnected.
This is the same issue we discussed in coding pattern -1. So what is the incorrect coding pattern -4?
Take a second look at above fix. This actually is not the right way to fix the problem. The fix itself is the incorrect coding pattern that we want to discuss.
Assuming one GC happens right before Console.WriteLine, will the handle2 be collected? I have seen failures with this coding pattern. It seems that JIT optimization is smart enough to know that ContractHandle will not be referenced later, so it could be assumed dead. Then GC calls its finalizer. Therefore Console.WriteLine is not protected and may still throw object disconnected exception.
The recommended coding pattern looks like below.
using (ContractHandle handle2 = new ContractHandle(retData))
{
}
You can also use a field in the adapter class to hold the ContractHandle if you are sure that you are not going to have circular reference issue.
Summary
In this blog, we looked at our Add-In lifetime management model. Our model sits between pure reference counting model and pure GC based model. It does have pros and cons. We also listed a few coding patterns that user may encounter in their development. Keeping the issues in mind will help you write good quality pipeline code.
All the issues we listed above may not cause problems immediately on a dev machine due to the nature of how GC works. Stress testing and running tests on machines with different configuration are very helpful to discover issues like these before you deploy your app to customers’ machines. | http://blogs.msdn.com/b/zifengh/archive/2008/04/28/coding-patterns-to-avoid-in-addin-pipeline-development.aspx | CC-MAIN-2014-42 | refinedweb | 3,048 | 50.33 |
Please use this thread to post known issues with the Java and S60 development. This is not for possible bugs but only verified issues.
Ron
Please use this thread to post known issues with the Java and S60 development. This is not for possible bugs but only verified issues.
Ron
In case if any one missed: Here is a quick link to "Known Issues In The Nokia 6600 MIDP 2.0 Implementation v1.7" explained much in detail with possible solutions/work arounds.
Regards
Gopal
Hi All,
I am facing a strange problem in Nokia 6600. I have created a simple form with a datefield. The form is being displayed fine. But when I change the GMT offset (+0.00 to -6.00 or any negative offset), the application is crashed. By debugging I came to know that it crashes at the constructor of DateField.
This behaviour is only specific to this phone model. I have tested it on various 6600 phones but it has same behaviour. This thing doesn't occure in any other phone(Nokia 6260, 3230, Sony Ericsson k750i).
Please advice.
Thanks.
Concerning 6230i, and may be some other devices that i don't know.
the bluetooth connexion may not deliver all the data at once. Reading through the Stream may not let you get the whole data ina single shot, even if your buffer is not filled.
One solution is to know the size of the data you are expecting, but in any case, give the bluetooth stack a some time and make several reads separated by a little time, until you read nothing or you are sure that you have all you need.
Hi All
I have developed an application that works fine on 6681,3250.
But when I tested this application on 6131 it does not work.
My application connects to server(aspx page) and writes the content on mobile phone's local memory.This part is not working on 6131.
Any comments???????
Cheers,
Manan
When reporting known issues, try to give information to the fields listed below. Indicate the technology and platform/device, and write at least an overview of the known issue. Give a solution/workaround if you can.
Technology:
Reported against: (Platform/Devices/SW version)
Subject:
Detailed description:
See the following examples:
Resources cannot be prefetched after deallocation
Using Bluetooth serial port in MIDlets
Reinstallation problem in early SW builds on devices compliant with S60 2nd Edition, Feature Pack 1
Canceling the installation of an application fails on the Nokia 6600
Using authorization requires authentication in Bluetooth security manager
Browser Control interface stops working after delete/reconstruct
Last edited by Nokia Ron; 2006-09-08 at 15:39.
MananW, did you get an exception? Remember that the available directories on diifferent phone platforms are different. You should not use hard coded directory names, but get the accessible directores from the system as described in these documents.
Hartti
6131 support CLDC 1.0 or CLDC 1.1?? the solution to ur problem may lie in this??6131 support CLDC 1.0 or CLDC 1.1?? the solution to ur problem may lie in this??
Originally Posted by MananWOriginally Posted by MananW
Hi,
Luca,
About the "KNIEXT_NewGlobal Ref failed" error.
Is your midlet using network connections? It looks like some earlier application left some connections in a bad state, and hence your midlet is not able to open any network connections. This problem should be corrected by turning the phone off and removing the battery for a while (a minute or so)
Hartti
We were able to isolate a problem that appeared first on a Nokia 6280 and possibly exists on some other phones, too.
once a platformrequest is performed with a telephone number and the application will shortly after this exit, the phone number will not be called.
the calling screen appears for some instances, then the applciation just exits.
to reproduce the problem, create a blank midlet, place a platformrequest in the startApp method and call notifyDestroyed directly after this.
interestingly, it works if the midlet is part of a midlet suite with 1+ midlets. then the app does not seem to be destroyed immediately and the call goes through.
a workaroudn is to call Thread.sleep(2000) after the platformrequest. Then the native implementation seems to have enough time to make the call happen.
---
here the code of the test midlet that does not place a call on nokia 6280. note that the boolean return is false!
import javax.microedition.lcdui.Alert;
import javax.microedition.lcdui.Display;
import javax.microedition.midlet.MIDlet;
public class Dummy6280 extends MIDlet {
public Dummy6280(){
}
public void startApp() {
boolean possibleWithoutEnding = false;
try{
possibleWithoutEnding = platformRequest("tel:+491723844659"); // dial the phonenumber.
}catch(Exception e){}
Alert alert = new Alert ("check");
if (possibleWithoutEnding)
{
alert.setString("The MIDlet suite must exit before the call can be invoked!");
}
else
{
alert.setString("Call can be invoked from within the app!");
}
alert.setTimeout (Alert.FOREVER);
Display.getDisplay(this).setCurrent(alert);
}
public void pauseApp() {
}
public void destroyApp(boolean unconditional) {
}
}
if you place a sleep after the request, it should work
possibleWithoutEnding = platformRequest("tel:+491723844659"); // dial the phonenumber.
Thread.sleep(2000)
(plus add throw/catch)
On which phones is this likely to be the same? On 6680 it worked fine without the workaround.
Cheers\
Sven
Hansamann, I have been notified on this issue by another developer two months ago, but for some reason I did not finalize the known issue doc on this. Sorry for that.
In any case you could use a following workaround for this:
instead of just calling notifyDestroyed();
try using this code for 6280 (the sleep period is not "scientifically decided", you could try using different values in there...)
try {
Thread.sleep(3000);
} catch (InterruptedException ex) {
ex.printStackTrace();
}
notifyDestroyed();
Does this solve your issue?
Hartti
General phone number cannot be accessed by using PIM API (JSR-75)
Can't access series 40 General number via JSR75
should be added due to TechLib being not updated
Thank you guys for starting discussion on this topic. This has been really useful for me so far. With limited handset availability, I always had hard time convincing my clients about application compatibility across various platforms.
As far as I can see there are some compatibility issues across nokia platforms themselves. It may not be the right place to ask this question, but how do you guys find the MIDlet compatibility on other platforms like RIM, brew, pam, winMob etc ?
I'm aware that this is Nokia forum and I'm probably asking an off space question. I'd appreciate any if you guys could help - I understand otherwise
thanks
~B
On my Nokia 5500 while removing Items from a Form which is currently on screen the vm sometimes crashes and gives the following error message: App. closed: lcdui.
Haven't looked into exactly how to reproduce it yet but it seems to happen consistently if you remove two items from the form one being the current item.
Anyone else experienced this? | http://developer.nokia.com/community/discussion/showthread.php/84605-Known-Issues?mode=hybrid | CC-MAIN-2014-23 | refinedweb | 1,165 | 56.55 |
Synopsis
The CrateData object is used to store column or image data.
Syntax
CrateData()
Description
The CRATES Library uses CrateData objects to store data values of a columm or image.
Important fields of a CrateData object
Unlike the other parts of the Crates interface, access to information in a CrateData object is restricted to the methods and fields of the Python object (i.e. there are no separate functions). The important fields are listed below.
Reading in a column
Here we read in the X column from the file evt2.fits and inspect the CrateData object that is returned.
>>> cr = read_file("evt2.fits") >>> x = cr.get_column("x") >>> x Name: x Shape: (368303,) Unit: pixel Desc: Sky coords Eltype: Scalar >>> x.values.mean() 4263.6122540408305 >>> y = cr.get_column("y") >>> add_curve(x.values, y.values) >>> set_plot_xlabel(x.name + " (" + x.unit + ")") >>> set_plot_xlabel(y.name + " (" + y.unit + ")")
Modifying a column
With the above set up, we can modify values; for instance
>>> xv = x.values >>> xv += 0.5 >>> x.values.mean() 4264.1040230462422
Note that changing the values in the xv array change the underlying CrateData object. To ensure you are working with a copy of the data (so that changes do not get propagated back to the original Crate), use the NumPy copy() method - e.g.
>>> xv = x.values.copy()
or the copy_colvals() routine from Crates. See the discussion of Copies and Views of the NumPy Tutorial for more information on how NumPy arrays can be copied and shared.
Creating a column
The following lines create a CrateData object storing the values from the z NumPy array and called "zcol" (the unit and description fields are optional but are added here):
>>> cd = CrateData() >>> cd.>> cd.values = z >>> cd.>> cd.unit = 'erg /cm**2 /s'
This can then be added to a table crate using either
>>> add_col(cr, cd)
or
>>> cr.add_column(cd)
where the optional index argument of add_column can be used to place the column at a specific location, rather than at the end (the default).
Vector columns
In the following we access the SKY vector column of a Chandra events file.
>>> sky = cr.get_column("sky") >>> sky Name: sky Shape: (368303, 2) Datatype: float32 Nsets: 368303 Unit: pixel Desc: Sky coords Eltype: Vector NumCpts: 2 Cpts: ['x', 'y'] >>> sky.values.shape (368303, 2) >>> x = sky.values[:,0] >>> x.shape (368303,) >>> y = sky.values[:,1] >>> row0 = sky.values[0,:] >>> print(row0) [ 3820.85449219 3828.33813477]
Virtual columns
There is no significant difference to handling virtual columns (that is, a column which is calculated by applying a transformation to an actual column in a crate):
>>> msc = cr.get_column("MSC") >>> msc.is_virtual() True >>> msc Name: MSC Shape: (368303, 2) Unit: deg Desc: Eltype: Virtual Vector NumCpts: 2 Cpts: ['PHI', 'THETA']
How about images?
As there's no real distinction between a column and image for the CrateData() object, the read, modify, and write sections are essentially the same as above, as shown in this example
>>> cr = read_file("evt2.fits[bin sky=::8]") >>> img = cr.get_image() >>> img Name: EVENTS_IMAGE Datatype: int16 Unit: Desc: Eltype: Array Ndim: 2 Dimarr: (1024, 1024) >>> img.values.mean() 0.35124111175537109 >>> tr = get_transform(cr, 'EQPOS') >>> add_image(np.log10(img.values), tr)
Here the WCS transformation to celestial coordinates, encoded as the EQPOS transform, is extracted from the Crate and sent along to ChIPS in the add_image command so that the axes will be in RA and Dec.
When adding an image to a crate, use either the add_piximg command or the add_image method of the IMAGECrate; that is one of
>>> add_piximg(cr, img) >>> cr.add_image(img)
The CrateData object
There are three CrateData object types: Regular, Vector, and Virtual.
Regular Objects
Regular CrateData objects contain values from an image array or a single table column, which can be composed of either scalar or array values.
Multi-dimensional data
A CrateData object can contain multi-dimensional data, and the interpretation of whether this is an image or an array column is made by adding it to an IMAGECrate or TABLECrate respectively, with the add_col or add_piximg commands.
Vector Columns
Vector columns are two or more columns that have been grouped together under the same name, but each component column has its own name as well. For example, the vector column SKY has two components, X position and Y position. The notation for vectors in the CRATES library is
vector(cpt1, cpt2, ...)
so the sky vector is represented as
SKY(X,Y)
.
Vector CrateData objects have values which consist of two or more CrateData objects. Using the previous example, the SKY vector values point to regular columns X position and Y position.
Virtual Objects
A Virtual CrateData object has values that have been calculated via a transform from another CrateData object. For example, the virtual column RA is defined by a transform associated with the regular column X.
Vector columns can also be virtual. EQPOS is a virtual vector column comprised of two virtual column components RA and DEC. EQPOS(RA,DEC) values are determined by applying a transform to SKY(X,Y) values.
Creating a vector column
Two following routines were added to simplify creating vector columns: create_vector_column and create_virtual_column.
This example shows how to create a SKY vector component made up of X and Y arrays:
x = np.random.normal(4782.3, 5, size=1000).astype(np.float32) y = np.random.normal(5234.1, 5, size=1000).astype(np.float32)
creates 1000 pairs of X,Y values drawn from the normal distribution, and converted to 32-bit floats. To create a SKY vector column you would then say:
sky = create_vector_column('SKY', ['X', 'Y']) sky.unit = 'pixel' sky.desc = 'sky coordinates' sky.values = np.column_stack((x,y))
The sky column can then be added to a crate (a new one in this example) and written out:
cr = TABLECrate() cr.name = 'GAUSS' cr.add_column(sky) cr.write('gauss.fits', clobber=True)
unix% dmlist gauss.fits blocks -------------------------------------------------------------------------------- Dataset: gauss.fits -------------------------------------------------------------------------------- Block Name Type Dimensions -------------------------------------------------------------------------------- Block 1: PRIMARY Null Block 2: GAUSS Table 1 cols x 1000 rows unix% dmlist gauss1.fits cols -------------------------------------------------------------------------------- Columns for Table Block GAUSS -------------------------------------------------------------------------------- ColNo Name Unit Type Range 1 SKY(X,Y) pixel Real4 -Inf:+Inf sky coordinates
Note that the values are added to the vector column - the parent - as an array with shape (nrows, num cpts), which is what column_stack creates. Setting the unit and desc fields are not required, but can provide useful metadata.
Representing bit values
Boolean columns
A column of bollean values - that is True or False - can be created as with any other simple type. For instance, the following adds a column called ENFLAG that indicates whether the energy value of the row is between 500 and 7000:
>>> cr = read_file('evt2.fits') >>> flag = CrateData() >>> flag.>> flag.>> envals = cr.get_column('energy').values >>> flag.values = (envals >= 500) & (envals <= 7000) >>> cr.add_column(flag) >>> cr.write('flagged.evt2')
which creates a file with an ENFLAG column:
>>> !dmlist "flagged.evt2[cols enflag]" cols ----------------------------------------------------------------- Columns for Table Block EVENTS ----------------------------------------------------------------- ColNo Name Unit Type Range 1 ENFLAG Logical Interesting energy
Boolean arrays
The STATUS column of a Chandra event file is a bit column, where each bit represents a different flag value. The Python representation uses an array of np.uint8 values, one for each bit, where a non-zero value indicates set and 0 is unset. For instance:
>>> cr = read_file('evt2.fits') >>> status = cr.get_column('status') >>> print(status) Name: status Shape: (107664, 32) Datatype: uint8 | Bit[32] Nsets: 107664 Unit: Desc: event status bits Eltype: Array Ndim: 1 Dimarr: (32,) >>> status.is_bit_array() True >>> len(status.values[0]) 0 >>> print(status.values[0]) [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] >>> status.values[0][1] = 1 >>> status.values[0][10] = 1 >>> status.values[0][20] = 1 >>> print(status.values[0]) [0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0] >>> cr.write('test.fits', clobber=True) >>> !dmlist "test.fits[#row=1][cols status]" data,clean # status[4] 01000000001000000000100000000000
Boolean arrays
The is_bit_array() method of a CrateData object can be used to determine whether a sequence of bits is being represented as a bit array. The resize_bit_array() method is used to increase or decrease the size of the bit array.
The Crates module is automatically imported into Sherpa sessions, otherwise use one of the following:
from pycrates import *
or
import pycrates
Changes in CIAO 4.8
Support for variable-length arrays
Support for variable-length arrays has been improved and the CrateData object now supports the is_varlen and get_fixed_length_array methods.
Bugs
See the bug pages on the CIAO website for an up-to-date listing of known bugs.
Refer to the CIAO bug pages for an up-to-date listing of known issues. | https://cxc.cfa.harvard.edu/ciao/ahelp/cratedata.html | CC-MAIN-2020-05 | refinedweb | 1,485 | 56.35 |
Hello folks,
so after a nightmarish event of dealing with 3dr, I finally received my gimbal and attempted to install it.
The installation was acceptable, I won't say it went smoothly because, that simply wouldn't be true (the wires definitely could of used more room, or at the very least better placed.
All systems have been updated, including the go pro. I
So upon placing my GoPro Hero3+ Silver (3.02) into the gimbal and powering everything on according to the instructions included by 3dr, I'm asked to update my GoPro firmware, which makes no sense considering I have the latest firmware version (GoPro also confirmed this), and upon dismissing this request, I'm met with a please plug in the hdmi cable screen, thus no video link. Further more upon shutting down the system my GoPro remains powered up, and completely frozen, requiring the lipo to be pulled from the go pro in order to reset it. I have also tried all capture settings (1080p,960,720) (60fps,30fps) (narrow, medium, wide) still causes go pro to crash.
Isn't the gimbal compatible with the Hero3+ silver? It does say in the manual that it in fact does support it fully, however at this point I have a hard time in believing much of what 3dr says.
Now after doing some research I came across a link on 3drpilots.com referring to other users experiencing this same issue, with the response from 3dr claiming it to be a bug that they are working on, and for the time being convert back to the GoPro 2.00 firmware. I myself asked 3dr such, and was simply referred to the online pdf of their included manual, which seems shady to say the least why some users are given a solution and others are simply pushed aside and routed back to the manual.
So upon downgrading firmware (GoPro highly recommended against this I should note), I does still request the firmware on the GoPro to be updated within the Solo App, however upon unplugging and plugging back in the hdmi cable I now get video signal.
However the issue I'm now faced with is the inability to record onto my GoPro as I cannot access the record button due to the gimbal covering, and I can not control the go pro record/stop video the controller or app, and simply only being able to record directly on to my iOS device, which a subpar at best.
So does is anybody else having similar issues? If so have you discovered a solution?
Thanks in advance.
Views: 10837
It's my understanding that GoPro Hero3+ Silver compatibility is still up in the air (pun intended)...
That said, it has been reported that you can get the video feed back by downgrading your GoPro firmware back to v2.0.
You'll still get the error messages, and you still won't be able to control the GoPro, but at least you should get your feed back.
I just received my gimbal and also have a Hero3+ Silver, so I'm going to have to go through this too... when I get the time.
@Steven, thanks for your reply,
I did manage to get this done, however without the ability to record directly to the GoPro, its virtually useless.
And upon looking at my gimbal it even has "built for Hero3+ / Hero 4" so it would seem extremely shady to advertise such then once everyone has received their gimbals, change the compatibility requirements.
I found videos were very useful to show tricks that help with the orientation of the wires, so installing wasn't the worst process I've ever experienced.
Downgrading does work though, although I did run into one issue that was never mentioned in the link I followed to regain video, is that without powering on my Gopro manually prior to starting the solo, then I don't immediately get video feed, I need to unplug and replug in the hdmi cable to the go pro, but when I changed batteries and the go pro was left on it came on without requiring the unplug/replug so thats a start I guess.
I hope for 3dr's sake they solve this otherwise all the documentation advertising compatibility with hero3+ may come back to haunt them, as I'm sure many individuals with GoPro Hero3+ Silvers will be completely furious over such falsities, and would seek refund based on their policy.
Best of luck with your gimbal!
There is supposed to be a major update coming out very soon that will be addressing these issues. I have seen others that ended up going out and buying the H4 and problem solved.
Fortunately I have the 3+ Black and have had no problems.
Submit a support ticket. 3DR has a great support staff.
@Art, that is very unfortunate, hopefully you get it resolved sooner rather than later as I think we've waited long enough for the gimbal at this point!
@Ray, thats good you finally got your gimbal! And I definitely hope you are correct with regards to the update!
Getting my gimbal was probably one of the worst purchasing experiences I have ever had in my lifetime!!!
I'm afraid to invest anymore into this system (GoPro H4 for instance) at this point than absolutely necessary.
I contacted 3DR directly requesting a RMA# to return my Solo and extra batteries, as I felt completely unsatisfied (my whole season was waisted with funds invested into a product that's unusable without a gimbal, and since its already -2*C where I'm located, snows on its way so at this point it wouldn't be used till next season), and I was within their policy grounds for a return.
Then I was informed about how they were sorry, they would send me a gimbal overnighted asap (I stated I wasn't paying more than I would here in Canada, and that I really wanted to return it more than anything). So Wednesday passed with no response, then Thursday I'm told I wouldn't have it for Friday (my final research trip of the season), so once again I was back to emphasizing on returning it. Then at the last second I get an email stating they found one and it was going out overnighted, they'd call for payment info (at which point I re-itterated once more that I'm not paying more than I would here in Canada, otherwise I just want to be done with this nonsense and return it, which the individual says no problem, it will be no more in costs)..Waited for a bit then finally got a call...The guy on the phone had very little idea as to what was occurring and tried to bill me far more, even keeping me on hold for half an hour!!! Then says they'll have to contact the individual I talked to and get back to me...So I waited an hour before giving up, and headed out late for a meeting.
Next day I get an email from the individual saying they sent it anyways that they'll call me for payment and I have to pay what the guy on the phone said....So basically after I told them NO time and time again, they sent it to me anyways, then had the nerve to tell me what I'm paying them. So after responding saying its not happening, I'm not paying more than what I said, that they were told countess times through email and on the phone I want a refund and not to send it if they expect me to pay more.
So then they tried to harass (and somewhat bully) me by calling me, emailing me, saying I have to pay what they want or I have to go out of my way to send it back now (at which point I was late several hours departing waiting for it to arrive, and they were informed I was leaving for a research trip, then surgery, so I would be out of town and range). So their foolishness became my problem further...Waisting even more of my time...They eventually agreed and accepted my original payment offer (although, I'm sure it was only after mentioning that if need be I would take this further, filing complaints).
So I think its safe to say that I'm going to avoid submitting a ticket unless I'm in dire need to do so...Hopefully the update is sooner than later...Just don't need that kind of negativity in my life...
I was THIS close to pulling the trigger on a Phantom 3 but somehow found out about the Solo and that it used GoPro cameras and it was compatible with the 3+ Silver up.
As luck would have it I had a 3+ given to me last Christmas that I didn't want and didn't have any need for. So it was still in its packaging. I figured I could spend a few bucks more on the Solo and Gimbal and have a better drone for not that much more.
Since I received the gimbal I've got about 6 minutes of video. My GoPro is on 3.0.1 and basically locks up every time I connect it to the gimbal. If I've started video then it stops and locks up. I don't even know how I got that six minutes of video.
I can't get the app to upgrade so I'm going to spend some time tonight trying other methods.
At this point I'd settle for at least being able to take video, and suffer without FPV.
On a side note I've now gotten more practice just flying the thing.
@Corbin,
Well if I personally had the choice to do it over again I would have never even considered buying the Solo!
3DR is nothing but a company built on false advertising, and being untruthful.
And as a owner of both the Solo with a gimbal (Hero3+ Silver also) and a Phantom 3 Pro, they really aren't in the same league, with the P3P being far superior on all fronts (except the required cable from tablet/phone to remote), they even offer the so called Smart Shots 3DR claimed no one else would have. The Phantom 3 is truly a unit built for capturing video/images, although I'm not a fan of dji or its "cookie cutter" design approach (I think they could do better with regards to design)...
As for the issue at hand "No Video/GoPro Freeze/No Record", well theres several things I have learned
1. the Hero3+ Silver, while was and sitl is advertised as working with the Solo/Gimbal, it actually does not!!! This was simply a false claim by 3DR (I can post links and docs showing it clearly states fully compatible with Hero3+). Philip from 3DR even stated that they had no intention on fixing this issue on the Solo's main page.
So after waiting almost half a year with over 2k invested, many are left with an unusable unit with either the option of attempting a return (3DR will tried to avoid this at all costs even though its in the documentation so good luck with route), or being forced to go out and invest further into a newer GoPro, at which point you would have spent as much is not more than what an Inspire is worth.
2. There is a slight work around, although it doesn't offer perfect results and is kind of a pain to run through each time (it also may put added wear on your gimbal)
If you downgrade your GoPro firmware to 2.00, which after contact GoPro, they highly recommend against as they said it could easily damage your GoPro by downgrading (I have an email from them stating this if you wish to see it also), you will regain a form of video feed!
This issue with this is that, every time you boot up the system it'll ask you to upgrade the go pro firmware (dismiss this), then it will probably say no hdmi plugged in, so unplugging and plugging back it video will come back after a second.
However this does not give you access to any form of record unless its on the tablet/phone, which will most certainly be effect by reception, so quality will be poor at best from any real distance.
The go pro finally will not shut off once the system of shut down so you will be manually powering down the cam each time! Which could be hard on your gimbal!
A few tricks I've picked up to somewhat work around some of these issues:
1.Set your go pro to 1 button record option! This will allow the go pro to initiate record and stop simply by powering on and off the system! This one is key since the gimbal blocks access to the shutter button!
2. In order to avoid plugging/plugging the hdmi cable each time, manually power on the go pro before the solo ect...After getting it going I changed batteries but did not shut the go pro off and upon repowering the system I immediately had video feed. This will initiate the record with 1 button but also same from having to apply pressure on the gimbal itself in the form of the unplug/plug option, but still unnecessary pressure non the less.
Finally the downgrade will also correct the freeze effect the go pro endures as a result of such a lack of compatibility, however as I stated you will need to manually shut it down afterwards, but as Philip has stated they have no intentions on getting it to be compatible so that means absolutely no record/stop control or go pro itself control from the app or remote.
It's unfortunate this is what is has come down to just to be able to get an expensive product to at least work somewhat, after being told and sold on the fact that it would work!
We should not be forced to put our gopro's in harms way just to simply get Solo to work when it was sold as being fully functional!
It's sad and unfortunate that they have to resort to these tactics (making false claims, being untruthful), just to try and compete, but at this point its not a surprise coming from them...Perhaps formal complaints with business bureaus are in order....
Best of Luck...
This may help you with your issues...or not. I guess we'll see. (I posted this elsewhere)
HOW TO START & STOP RECORD OF GOPRO via A SMARTPHONE...REMOTELY!
><({(º>
BTW: I'm SCREAMING at 3DR about this. It is a glaring omission in their literature. And the tech support guy, as friendly as he was, seemed stunned I didn't know this.
@Fish , thanks for your response,
what model of GoPro model are you using? They all have different levels of compatibility, this is why I ask.
As with the Hero3+ Silver, it automatically freezes upon powering the system up, requiring the battery in the go pro to be removed to get it to power down, with the latest firmware installed.
As for the paddle control, while I am very familiar for which is the record/stop button, with the new firmware update this simply does not work (not supported) with the Hero3+ Silver (perhaps the Hero3+ Black is?) as stated by philip from 3DR on the main Solo discussion page. There's actually a discussion about this very topic on 3drpilots.com also, although I haven't been keeping track of their progress,
If your are in fact using a Hero3+ silver, then that is definitely great that you got it to work!.
Craig.
The Hero 3+ black and 4 black is what I own and both work with full firmware upgrades.
Sorry. | https://diydrones.com/group/solo/forum/topics/no-video-or-connection-between-solo-gimbal-gopro-hero3-silver?commentId=705844%3AComment%3A2104319&xg_source=activity&groupId=705844%3AGroup%3A1960722 | CC-MAIN-2019-18 | refinedweb | 2,694 | 64.44 |
trying to see if i could get a little help with this program i am writing
"Write a complete program that plays a coin flipping game, displays the individual flips, reports when a game is “LOST” or “WON” and shows the number of flips needed to complete the game.
The algorithm is as follows:
• Simulate the flip of a coin using a JAVA random number generator.
• Flip the coin once to initialize the flip value and print out the flip
• Inside of a loop, repeatedly flip the coin until 3 consecutive flips have the same value (3 heads or 3 tails)
a. Display the flip results after each flip
• When the game ends, report the total number of flips
Note: you can use 0 for heads and 1 for tails, but if you have time, add another method that will convert the integers 0 and 1 to the characters ‘H’ and ‘T’"
This is what i got so far, but its not printing the right answer.
import java.util.*; public class CoinFlip { public static void main(String[] args) { Random rand = new Random(); int H = 1; int T = 0; int Heads = 0; int Tails = 0; int tries = 0; int result = rand.nextInt(0+1); while(H != 3 || T !=3) if (result == H){ H = H + 1; System.out.println("Head"); Heads++; } else { if (result == T && T <= 3){ T = T + 1; System.out.println("Tail"); Tails++; } } System.out.println(tries); } }
thanks for any help
This post has been edited by g00se: 09 March 2013 - 02:01 PM
Reason for edit:: Please use code tags | http://www.dreamincode.net/forums/topic/314883-coin-flip-help/page__pid__1816956__st__0 | CC-MAIN-2016-07 | refinedweb | 261 | 60.79 |
jtabbedpane
jtabbedpane How to add jtextfield or jtabbedpane by coding using java netbeans?? plzz help
jtabbedpane
jtabbedpane how to open the jframe when click tab in jtabbedpane
jtabbedpane
Error-
Error- Hello, I would like to know about XSD file.
I try to print XML file but I am getting error SAXException-- says
Content is not allowed in prolog.
Please help me
help!!!!!!!!!!!!!!!!!! import java.awt.;
import java.sql.*;
import...;
JButton b1,b2,b3,b4;
JTabbedPane tp;
createSAccount(){
JTabbedPane tp=new JTabbedPane();
panel=new JPanel(new GridLayout(10,2));
panel1=new JPanel(new
Jtabbedpane - Java Beginners
Jtabbedpane JTabbedPane and i have no idea how to intract... in JTabbedPane and interacting with user...); JTabbedPane tab = new JTabbedPane(); frame.add... JButton in JTabbedPane and interacting with user using this JButton.Code
in the printf method is actually invalid.Therefore error occurs. Anyways, check your code
Logic error? HELP PLEASE! :(
Logic error? HELP PLEASE! :( Hello Guys! i have a huge problem. What...);
System.out.println("Error.");
result = "Client info...) {
ex.printStackTrace();
System.out.println(ex);
System.out.println("Error
java JTabbedPane() - Java Beginners
java JTabbedPane() how to add layout manager in JTabbedPane()
plese... {
private JTabbedPane tabbedPane;
private JPanel panel1;
private JPanel panel2...
tabbedPane = new JTabbedPane();
tabbedPane.addTab("Page 1", panel1
Creating a JTabbedPane Container in Java Swing
... to create the
JTabbedPane container in Java Swing. The example for illustration is given in which, all the things related to the creation of JTabbedPane container
Grid World project Run Error !! help please!!
Grid World project Run Error !! help please!! i'm trying to make...);
}
}
}
}
and Here is my error...:
(it runs at first but when the bug hits a side, the error occurs and i exit the error but run does not work anymore
My Sql Error - Development process
with Microsoft Access As BackEnd Application in this Project .
plz Help me Sir...{
JFrame f;
JPanel p1,p2,p3,p4,p;
JTabbedPane tp;
JLabel l1, l2, l3... GridLayout(2,2));
tp=new JTabbedPane();
l1=new JLabel("Bus No:");
l2=new JLabel
404 error
404 error hello anybody help me
when i deployed my ear file into jboss->server-> default->deploy folder it doesn't shows any error. but when i run this application by using localhost:8080/webapp ,it shows 404
Java Error In comapitable Type in statement - Java Beginners
Java Error In comapitable Type in statement Here I want to Check... Help Me Sir
import java.io.*;
import java.awt.*;
import javax.swing....
{
JFrame j;
JPanel j1,j2,j3;
JTabbedPane tp;
JLabel l1,l2,l3,l4,l5,l6,l7,l8,l9
ERROR with my JAVA code - Java Beginners
ERROR with my JAVA code The error came up in my main method... this....it is really stressing me out!! HELP!!!
import java.awt.*;
import... " );
JTabbedPane tab = new JTabbedPane();
// constructing the Pool Panel
BCMWLTRY.EXE error
BCMWLTRY.EXE error Yesterday I did system restore after it I am getting two problems. 1. showing an error
message An unhandled exception....
Please help me. Thanks in advance
JFREE error
JFREE error hi.........
the code for making chart is: WHICH IS GIVING THE ERROR
import org.jfree.chart.*;
import org.jfree.data.category.... THE RECTANGLEINSETS NA THEN WHY ITS GIVING ME THE ERROR.
tHE IMPORT FILE
help - Struts
studying on struts2.0 ,i have a error which can't solve by myself. Please give me help, thans!
information:
struts.xml
HelloWorld.jsp... execute(){
name = "Hello, " + name + "!";
return SUCCESS;
}
}
but error
Java Final Project Error - Java Beginners
Java Final Project Error It isnt showing the result when I press the proper button...?? It isnt doing the work... can someone help me??
import... Project " );
JTabbedPane tab = new JTabbedPane();
// constructing the Pool
deployment error
deployment error when i deployed my .ear file into the jboss4.2.3 ga it shows this exception in server log
any body please help me,thanks in advance
error is :
012-08-22 15:19:41,068 DEBUG
still error
second button which i,ve added in ma frame.... plzz smone help me to remove error
server error
server error during starting of WAS server in eclipse, everytime this warning comes........
WARN [Settings] Settings: Could not parse struts.locale setting, substituting default VM locale
plz do help me
java, plz help me in doing this - Java Beginners
java, plz help me in doing this # Write a small record management...;
JTabbedPane tp;
ImageIcon btnimg1,btnimg2;
JLabel l1, l2, l3, l4,l5,l6,l7,l8,l9,l10...(2,2));
p4=new JPanel();
tp=new JTabbedPane();
l1=new JLabel("ID:");
l2=new
compilation error
compilation error Hi my program below is not compiling,when I try to compile it i'm getting the error message "tool completed with exit code 1".The loop has to be at the end of the program can you help me.
Blockquoteimport
jsp error
jsp error <p>hi, could please help whenever i run jsp file in my eclipse i got this error.my jsp file is</p>
<%@ page language... encountered an internal error () that prevented it from fulfilling this request
help in java
help in java Create a class named Order that performs order processing of a single item. The class has five
instance variables (fields) : customer.... In your main method, use a trycatch
block to display a meaningful error message when
Need help with this!
Need help with this! Can anyone please help me... to a file at all at this time. Any help would be greatly appreciated, thank you... the entrys in array score = -1 for later error checking.
for(arrayCounter = 0
request to help
or not.
If invalid date is entered, then display an appropriate error message and ask the user
java help
??s not a valid day of a month, then print an error
message to the standard
Error - Struts
inform what is the problem here. Friends I am stuch with this. Please help me... mail id: ksenthu83@gmail.com
Please help me Friends!!!!
Regards
still error
..... can smone help me in removing errors from ds...........
Hi Friend
Error page in JSP
Error page in JSP
In this section we will discuss about "Error page... a coding error which can
occur at any time than while executing the program an exception occurs that
redirects programmer to error page. When you execute
JFREE error again
is the error which i am getting plz help to solve this error
plz help...JFREE error again hi.........
As i had asked u the jfree error i.... but then also its giving me error
i am able to compile the code but now when i am
Java Complication Error - Java Beginners
this error:
Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException... JTabbedPane jtabbedPane;
private JPanel general;
private JPanel..., depthText, volumeText;
//JtabbedPane jtabbedPane = null
ERRor in executing DB program
ERRor in executing DB program While executing following code an Error was generated.can any one help me for this.
LiveDB liveDBObj...=pstmt.executeUpdate(qry);
----------
**ERROR:java.sql.SQLException: You have
hi frnds.. help me..
hi frnds.. help me.. i ve a doubt in incompatible type error (block letter). plz help me clear this error!
thanks in advance.
This is my code...);
for (int i=0; i
}
}
incompatible error ------> List<
error in hibernate example
error in hibernate example i am getting error in this....
log4j:WARN No appenders could be found for logger (org.hibernate.cfg.Environment).
log4j... not found
plz help me out
Java Error - Java Beginners
Java Error Hello Sir ,How i Can solve Following Error
Can not find Symbol - in.readLine();
Here Error Display at in Hi Friend... can help you.
Thanks
error in sample example
error in sample example hi can u please help me
XmlBeanFactory class is deprecation in java 6.0
so, pls send advance version class
Use XmlBeanFactory(Resource) with an InputStreamResource parameter
Stateless Bean Error
Stateless Bean Error Ejb stateless bean giving following error,please help me.
11:49:54,894 INFO [STDOUT] Error:$Proxy72 cannot be cast to com.javajazzu
p.examples.ejb3.stateless.CalculatorRemote
Here, we
Friends need a help on ruby..
Friends need a help on ruby.. Friends i need a ruby script... if the script got failed, the html report will have the error message with the line number on which error is there.
If the script is executed successfully than
deployment error - XML
getting the following error after i deployed my war file on tomcat.I suspecting my web.xml. please help me to fix the problem. and can anyone help me to write web.xml entries to support spring and site mesh?? here is the error on my console
Java Complication Error - Java Beginners
this error:
Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException... MainPool extends JFrame implements ActionListener
{
private JTabbedPane jtabbedPane;
private JPanel general;
private JPanel pools;
private JPanel hotTubs
Jfree create error
Jfree create error hi..............
thanks 4 ur help.
as u said i....
but then also its giving the same error can u tel me why this is happening?
and also u... different or only javac filename.java????????/
plz help
JFree chart package error
package and done what u have said. Its working but its giving me error that some... me the error of one of import file not there.
I just want to ask hw to add import file from our side?
plz help
RFT 8.1.1.3 Error
getting the following error message, I have enabled JRE 1.6 and Browser enablement test is pass. Error message is "exception_message = C:\Program Files\IBM\SDP... please help me out. Thanks
Help Me plzzz
Help Me plzzz org.apache.jasper.JasperException: Unable to compile class for JSP
Generated servlet error:
D:\prash\workspace.metadata.plugins...\Hello_jsp.java:7: cannot access java.lang.Object
Generated servlet error:
bad
runtime error - Java Beginners
runtime error I created a sample java source ,compiled it succesfully using command prompt but when i tried to run i got the folowing error... the path for jdk Can u help me please Hi friend,
Plz give full details
Error using SoapClient() in PHP
Error using SoapClient() in PHP I'm trying to access WSDL(Web... parameter and active SSL on my server, still I'm getting an error.
Here is the code I'm...","Password" => "password"));
Here is the error I'm getting:
Warning
error message - Java Beginners
error message sir,
while i m trying to execute my first program(print my name)I get error message like" javac is not recognized as internal... ,this is the same.how can i take steps to solve this problem pls help.
java compillation error
java compillation error can u plz help me plz
correct the sql error
correct the sql error i am getting a SQL Error while retriving data... help me and studentId's datatye is number......
private void...(null,"Please enter the Student ID to Search","Error",0);
}
else
{boss related linkage error
Jboss related linkage error Please check below error when i run in jboss
java.lang.LinkageError:
loader constraint violation in interface itable... for your help
Need some help urgently
Need some help urgently Can someone please help me with this below question. I need to write a class for this.
If you roll Y standard six-sided dice... number of possible combinations.
The actual question got some error. Its
java error - Java Beginners
.
But if I want to run that program the fetches me the following error.
For example... uninstalled jdk and NetBeans and reinstalled them, Then too its the same error coming...
pls help...
its urgent...
pls reply as soon as possible...
Thank you
error
error while iam compiling iam getting expected error
java help - Java Beginners
java help Code a try statement that catches an IOException and an EOFException. If an IOException occurs, print the message ?An I/O error occurred.? to the console. If an EOFException occurs, print the message ?End of file
JSP SQL Error
JSP SQL Error Hi While trying to execute the below code i'm getting the following error "java.sql.SQLException: Io exception: Got minus one from a read call ". please help me out.
<%
try{
String Username
Programming Error - JSP-Servlet
??? And when i tried its giving me error than cannot find variable.In request.getParameter it wont work in next to next page so please help me how to do through session
Error in context path
following error.. PLZ help me
Tomcat server started.
Undeploying...Error in context path I Tried a Struts2 Login application having...");
}
return ERROR;
}
public int getEmployeeNumber() {
return
error detection - Java Beginners
me an error,of "(" or "[" expected. i tried to change everything i can to repair the error but got stuck. please help me to find out what must i do to repair...error detection
Hi Amardeep and all friends out
Java error stream
In this Tutorial we want to describe you a code that help you to understand
java error...
Java error stream
The Java error stream is used to print error that arises during
Help Required - Swing AWT
Help Required How to bring password field in JDialogueBox??? ... static String HELP = "help";
private JFrame controllingFrame; //needed...");
JButton helpButton = new JButton("Help
Run time error - WebSevices
Run time error Hello,
Anyone know, how run the template files in zend framework.Anybody help me.
Rgds,
Pras code to help in solving the problem :
protected function _run($template
Code Error - WebSevices
Code Error How to insert checkbox values to mysql using php Hi friend,
Code to help in solving the problem :
Insert CheckBox
Thanks
help in java program
help in java program please could you help me with my assignment
I have to submit it on Monday 29 October
please help me :(
Write a program... to illustrate the error in trying to access private data
members from
jsp-servelet,error http404
jsp-servelet,error http404 I am using mysql commandclient to connect with eclipse using jsp and servelet.
I keep getting the error hhtp 404...();
}
}
Please help
SQL QUERY ERROR
SQL QUERY ERROR Im writing a query which shows error of INVALID... THE FORM IS THROWING THE ERROR OF INVALID CHARACTER BUT THE SAME QUERY IS RUNNING IN THE DATABASE.
PLEASE HELP ME. Syntax of insert query is:
INSERT
JSP Error - JSP-Servlet
error.
HTTP Status 500...
description The server encountered an internal error () that prevented it from fulfilling...("Exception in Query"+e.getMessage());
}
return i;
}
}
Please help
error
error i have 404 error in my program plz tell me yhe solution about
compilation error - Java Beginners
this program it is giving an error as integer number is too large.why? and what... not in the Hexadecimal or Octal. Here is fraction of Java code example which will help you
SQL error - JSP-Servlet
to solve this. Help me. Hi friend its possible...
Here is the solution...){
System.out.println("Error occured while updating!!!");
}
con.close
Error - Java Beginners
Error import java.util.*;
import java.io.*;
public class inputdemo
{
public static void main(String[] args) throws IOException...);
System.out.println("The Book Price" +price);
}
}
Help me
Hi
Proogramming Error - JSP-Servlet
:
I want that ChequeNo , ChequeAmt ,BankName shoukd get validated but i tried its getting error sir can u please tell me
Servlet error - Java Beginners
my own servlet program..
The error i'm facing is...
exception...
I hope this may help you..
Happy programming......
Regards...,Britto.M
SubString help - Java Beginners
SubString help Hi , I would appreciate if somebody could help me... a substring of 35 zeros '00000000000000000000000000000000000' and they could... exception if any System.err.println("Error: " + e.getMessage
applet security error - Security
my jsp page, the following security error occured.
ERROR:
Java Plug... on finalization queue
g: garbage collect
h: display this help message
l: dump... without importing the certificate to all the client machines.
kindly help me
Error in a code. - Java Beginners
Error in a code. This is the question I posted yesterday:
"I'm... quite figure out how to write the code for the GUI. Could anyone please help... should give an appropriate error message.
The first part works perfectly fine
trap error - Java Beginners
trap error Hello. Am trying to trap this error that appears when i... fields instead to pop up. how can i do this. please help. source code.... will appreciate help
Zend Db error - WebSevices
Zend Db error Hello friends,
How i get the values from the database using zend framework
Any one know the code. Help me
Insert Operation Error - WebSevices
Insert Operation Error Hai,
How i store my full resume in database[Mysql] using PHP. Any one Know the code..... Help me
Program error - WebSevices
Program error Hello,
Any one know the sample program for Login page using zend framework. Then how i connect my databse file to zend framework. Anyone help me
error
/ServletUserEnquiryForm.shtml
getting an error given below
SQLException caught: [Microsoft][ODBC SQL Server Driver]COUNT field incorrect or syntax error
please suggest
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/49305 | CC-MAIN-2015-27 | refinedweb | 2,825 | 67.35 |
07 January 2009 18:00 [Source: ICIS news]
HOUSTON (ICIS news)--Here is Wednesday’s mid-day ?xml:namespace>
CRUDE: February WTI: $45.29/bbl, down $3.29; February Brent: $48.27/bbl, down $2.26
Crude prices plunged on profit taking from recent advances and in response to the weekly Energy Information Administration (EIA) supply statistics which showed a much greater than forecast build in crude and refined products inventories. WTI (West Texas Intermediate) bottomed out at $44.38/bbl before rebounding.
NATURAL GAS: $5.885/MMBtu, down 9.8 cents
Natural gas futures prices trailed the slide in crude oil prices. Traders expected to see a 79bn cubic foot decline in domestic natgas stocks when the EIA releases its storage report on Thursday.
RBOB: $1.1230/gal, down 6.62 cents
Reformulated gasoline blendstock for oxygenate blending (RBOB) prices dropped alongside falling crude oil as gasoline supplies added 3.3m bbl to domestic inventories. Total motor gasoline supplies were at 211.4m bbl for the week ended 2 January, according to the EIA.
BENZENE: US Gulf coast barges were steady with bids heard around 85 cents/gal and offers notionally at 95 cents/gal FOB (free on board) HTC (
ETHYLENE:
PROPYLENE: No refinery-grade propylene (RGP) spot activity was heard on Wednesday. RGP for January last traded at 14.50 and 15.00 | http://www.icis.com/Articles/2009/01/07/9182503/noon-snapshot---americas-markets-summary.html | CC-MAIN-2013-48 | refinedweb | 225 | 61.33 |
Opened 10 years ago
Closed 10 years ago
#27871 closed defect (fixed)
xplanet +aqua does not build on a case-sensitive filesystem
Description
I reported this bug about two years ago on the xplanet forum, and the problem is supposedly fixed in xplanet subversion... but since no new release has been made for quite some time it might be worth patching the 1.2.1 release for Macports.
#include <Quicktime/Quicktime.h>
in src/libimage/WriteImageQT.cpp should be changed to
#include <QuickTime/QuickTime.h>
and
AQUA_LIBS="-framework Quicktime"
in configure should be
AQUA_LIBS="-framework QuickTime"
Change History (1)
comment:1 Changed 10 years ago by jmroot (Joshua Root)
Note: See TracTickets for help on using tickets.
r74885 | https://trac.macports.org/ticket/27871 | CC-MAIN-2020-50 | refinedweb | 118 | 62.17 |
Man Page
Manual Section... (2) - page: utimes
NAMEutime, utimes - change file last access and modification times
SYNOPSIS
#include <sys/types.h>
#include <utime.h> int utime(const char *filename, const struct utimbuf *times); #include <sys/time.h> int utimes(const char *filename, const struct timeval times[2]);
DESCRIPTIONThe utime() system call changes the access and modification times of the inode specified by filename to the actime and modtime fields of times respectively..
RETURN VALUEOn success, zero is returned. On error, -1 is returned, and errno is set appropriately.
ERRORS
- file system.
CONFORMING TOutime(): SVr4, POSIX.1-2001. POSIX.1-2008 marks utime() as obsolete.
utimes(): 4.3BSD, POSIX.1-2001.
NOTESLinux does not allow changing the timestamps on an immutable file, or setting the timestamps to something other than the current time on an append-only file.
In libc4 and libc5, utimes() is just a wrapper for utime() and hence does not allow a subsecond resolution.
SEE ALSOchattr(1), futimesat(2), stat(2), utimensat(2), futimes(3), futimens | https://linux.co.uk/documentation/man-pages/system-calls-2/man-page/?section=2&page=utimes | CC-MAIN-2017-09 | refinedweb | 168 | 60.01 |
Blue Meanies vs. Capitalist Hamster
ExamsI don't know. I've only finished one so far (1/4) and it didn't go so well. Ask me again in a week's time.
NovarunnerThe radar is gradually unsucking, but it'll still take a lot of time to get it up where I want it.
I want to start implementing the front-end menu for the game, which means I need to finish off my button and radio-button UI object classes. This I will do soon, possibly as a study break.
Propane InjectorI've been speaking to a lot of people who are looking for a game API that's just about in PI's niche, but PI needs a lot more polish before it can be a "real" competitor. Later I'll have to rethink the future of PI and potentially embark on a redesign; one of the things I'm thinking of doing is rejiggering the Game2D and Game3D namespaces to be more complete, including a 3D actor class.
One nice thing I'm working on now: generative textures. I've written a whole set of texture generators for PI and are working on an open source tool to build them using a Lisp-like scripting syntax.
What's next? A loader for the free Torque Constructor (OS X, Windows) map editor format. I do so enjoy this tool.
After that is rejigging the Model class to take more than just md2; I have a nice Lightwave OBJ importer that I wrote the other day and want to do an ms3d loader as well.
_Not much else is going on. My internship starts May 1 so I'm hoping to make some serious progress in the few days I have off before it starts, and dedicate more time to game development than I have thus far these last two busy semesters.
0
Recommended Comments
There are no comments to display.
You need to be a member in order to leave a comment
Sign up for a new account in our community. It's easy!Register a new account
Already have an account? Sign in here.Sign In Now | https://www.gamedev.net/blogs/entry/1436940-blue-meanies-vs-capitalist-hamster/ | CC-MAIN-2018-26 | refinedweb | 361 | 71.95 |
The patch set of this stage introduced quite a few new core APIs to build the foundation for the Link concept, and at the same time, as a solution for handling of various local coordinates. Everything added is callable and can be overridden in Python, which makes it possible to create fully functional Link object in script, as is shown in the attached test script. It is not trivial to implement the link right, which is why a C++ built-in feature, App::Link, will be added in the upcoming final stage.
To test, please clone my branch at. Download and put the testing script in the attachment to your Macro directory (Edit: oops, just found out that I attached wrong script. Please try download again.). You can read the script code and the comments inside, in case you are interested to find out which new APIs are responsible for the new behaviors.
The first demo continues from stage one, and shows the correct tree view behavior of a link group.
Code: Select all
import linkDemo linkDemo.linkGroupTest()
As I mentioned in the previous stage, the objects added to a LinkGroup exists both in group's local coordinate system (CS), and the global CS. All existing FC tools only works with objects in global CS. So, to show the object in global CS, expand the fusion object, and toggle its child's visibility, you can see the object at a different placement. If, however, there is no FC tool claiming a member object, you can't access the group's children in the global CS using tree view, which is why I added a new API to let object control whether to remove the claimed children from tree root. Toggle group object's 'ChildAtRoot' property, you can reveal the fusion object in global CS at the tree root. The other members of the group, box and cone, are not shown at root because they have been claimed by fusion, which has the default remove-children-from-root behavior. Move the fusion object in the global CS also moves the one in the local coordinate, as expected. Moving the group object only affects the object inside group, not those at the global CS.
Because now one object can potentially be inside a lot of different groups, hence, different at coordinates, tree view has a convenience menu action to select all appearances of the objects in the entire application (you will soon find out that an object can now appear in multiple documents).
LinkGroup uses a different approach from GeoGroup to implement the group behavior. It uses Coin3D node sharing for visual representation as mentioned in stage one. The current stage introduced a new API, DocumentObject.getSubObject(), to impelement the non-visual behavior of a group with local coordinate system. Gui.Selection has been modified to accept a new parameter 'resolve' (defaults to True) in some of its API (getSelection(), getSelectionEx(), getCompleteSelection(), getTypeOfObjects(), addSelectionGate(), etc), which calls getSubObject() to resolve the object reference inside a full qualified SubName. Existing FC tools can work with objects inside a LinkGroup without modification. However, the default 'resolve' behavior is to resolve to object in global CS. You can select the box and cone inside the group object, and choose Part.Fusion tool to create a fusion object and notice the resulting fusion's placement. To work directly with objects in local coordinate, the tool needs to be modified to resolve the object manually by calling getSubObject(), which is capable of returning the full transformation matrix of the sub object. However, it is my opinion that this is an unnecessary complication, and that all existing tools shall still operate in global CS as they are now. A special Link type object can be used to link to sub object in nested local coordinates. The link itself still exists in global CS and can be operated as usual by existing tools, with minimum modification.
The testing script implements a Link type called _Link (and the upcoming stage will bring you App::Link) to do just that. The Link can link directly to an object in global CS, overriding its placement, and thus creates a separate entity in global CS. Or, it can link to a sub object inside arbitarily nested groups. It overrides the top level group's placement, but preserves and automatically sychronizes the accumulated local coordinates of the (nested) sub object. The Link is also capable of override the linked object's material. The following test sets a distinct color of each link for easy demonstration. One thing though, neither Link nor LinkGroup can work with App::Part because of App::Part's unique way of grouping children visually. The upcoming App::Link has special code to support that.
Code: Select all
linkDemo.linkTest()
The next demo shows that Part::Feature tool can work with links. Currently, Part workbench only has partial support of link type object. Toolbar commands only works when the link actually points to a real Part::Feature. You can use non Part::Feature linked object in script, as long as the linked object has a proper implementation of getSubObject() (such as LinkGroup here) that can return Part.Shape when asked. You can trick the GUI to accept non Part::Feature links by assigning the link to a Part::Feature first, create the the tool with the link, and then change the link afterwards, as shown in the following screencast. The screencast also shows how to assign the link using GUI. You can either use property editor, which only selects objects in global coordinates, or use drag and drop which supports sub objects in local coordinates.
Code: Select all
linkDemo.linkFusion()
The next demo shows the newly added PropertyXLink that can link to object outside the current document. It can be used just the same as PropertyLink. It stores additional information of the object's document path (relative to the owner document) and the document file's time stamp. As shown in the following screencast, you can simply drag the object across document boundary and drop to the link. Or, simply assign the property with any object using script. It will store the document information if the object is external. The tree view has been enhanced to show external object, and its claimed children. An overlay arrow is shown in the right bottom of the icon to indicate that it is an external object. When the external document is closed, all linked PropertyXLink automatically lose the linked object reference, but still retains the document and object name information, so that when the external document is opened again, the links can be automatically restored. The document recomputation logic is also enhanced to support external object dependency. Finally, the screencast shows that when a document containing PropertyXLink is opened, FC will automatically open all referenced external documents.
Code: Select all
linkDemo.linkXTest()
The final screencast shows some of the other tree view enhancements.
* Sync Selection, when enabled, selection in 3D view will automatically select the corresponding tree item and expand its parent(s) if necessary.
* Sync View, selecting an object in the tree will automatically switch to its 3D view.
* Select all instances, select all occurance of an object, expand the parents if necessary.
* Select all links, select all links (direct or indirect) to some object.
* Hide item, hide the object in the tree view. All tree items (including externally linked) corresponding to this object will be hidden. The setting is stored into 'ShowInTree' property of the object.
* Show hidden item, reveal the hidden items. This setting is per document, and stored in the document's 'ShowHidden' property. When this setting is on, the hidden items' icons are marked by a small 'eye' overlay image in the left top corner.
Edit: oops, just found out that I attached wrong script. Please try download again. | https://forum.freecadweb.org/viewtopic.php?style=3&p=184458 | CC-MAIN-2020-10 | refinedweb | 1,310 | 54.02 |
.
I did a presentation on Indigo to the VBUG London group last night. The turn out was pretty good, although it was obvious that many English football fans were missing (though they probably would have enjoyed the User Group more than the humiliating defeat).
Off the top of my head, here are the things that keep standing out for me about Indigo:
I'm looking forwad to some of the Level 400 sessions at the PDC, such as Kenny Woolf's presentation on Channels.
As Mehran noticed, I couldn't help but show off the Refactor! tool that's shipping for free in Visual Basic 2005. This was in response to an attendee who believed that Visual Studio was 'a multi-megabyte bloated version of notepad'.
Strong bodily reactions are a great way to measure the impact of new technologies. While many US technical conference crowds like clapping when they see new features demoed, I think the best reaction is a more modest bodily reaction. I'll never forget the time I shivered after seeing Eric Gamma demonstrate the Refactoring support in Eclipse a couple of years ago. One area of .NET 2.0 that is generating strong bodily feedback is ASP.NET 2.0's support for Custom Build Providers.
First there was Fritz dropping his jaw when he created a custom build provider that took his custom XML metadata file which he dropped into the app_code directory that automatically generated the strongly-typed classes for him. Tonight I discovered that Kirk Allen Evans had a similarly-sized bodily reaction when he created a Custom Build Provider that automatically created XmlSerializer classes that allows you to drop a schema into the app_code directory and automatically generate the XmlSerializer class.
I'm with these guys: Custom Build Providers are cool (bodily reaction: light goosebumps). I'm passionate about finding ways of driving application development through the use of metadata files and the Custom Build Providers are an innovative way that ASP.NET provides to support this kind of development. The only downside I can see is that the creation of these classes is slightly 'automagical' and requires that developers understand what the framework is providing under the covers. However, I think this initial learning curve is more than compensated for by the productivity that Custom Build providers enable..').
If.
Microsoft.
Mark').
The ‘Understanding
Web Services’ MSDN event back in November went very
well. It was great to see over 200 people who were interested enough to
spend a day learning about web services. Simon Harriyott has a
good report of his day at the event. Simon also has a great post on the different
pronunciations of web service terminology where he questions why UDDI
isn’t pronounced as ‘uddy’:
WSDL is pronounced
"Wizzdull".
WSE is pronounced
"Wizzy".
UDDI is NOT pronounced
"Uddy", but as spelt.
ASMX is pronounced
"Azumex".
Best of all is
WSE-WSDL, which is of course "Wizzy-wizzdull".
It was Mike Shaw’s last day as a Microsoft
employee after 13 years. To celebrate his final talk he showed a VPC demo
of Microsoft Bob, which sadly
didn’t end up working. He did mention Bob’s amazing security
system which let the user choose a new password after three mistakes! Valery
Pryamikov posts more on Bob and how it demonstrates how Microsoft has improved
its approach to security..
In his latest Service Station column, Aaron Skonnard writes about how to use the HttpListener class which comes with Whidbey which allows code running on XP SP2 and Windows Server 2003 to wrap the functionality of HTTP.SYS without having to have IIS on the box. This means it’s possible to listen for incoming HTTP requests within any .NET application domain (e.g windows form, console app, Windows service, serviced component etc) without having to install IIS. Picture being able to ‘call back’ into Windows Forms applications with ASMX web services. While WSE 2.0 has already given us support for web service endpoints on TCP, I’m really excited to see this HTTP support in Whidbey because I think it will be more widely used and deployed. Go and read the article and download the sample code.
John..
Gregor.
To complement Simon Horrell's MSDN article on messaging with WSE 2.0, I came across this CodeProject article by Roman Kiss describing the three levels of messaging within WSE 2.0 as part of his MSMQ custom transport for WSE 2.0. This is an excellent bit of detective work (reflectoring, as they say) as there has not been much written about WSE 2.0's SoapTransport and it's in-memory queue that exist at the lowest level of the WSE messaging stack.
The benefit of the channel/queue model is that the message receiver can retrieve messages from the queue as they are ready to process them, rather than having to process them on demand.
Here's a simple example based on a Windows Forms application that has a button that can be clicked to retrieve messages from the in memory queue and display them in ListBox.
private
In the Form_Load event the channel is retrieved from the SoapTransport.StaticGetInputChannel method, based on the channel listening on a particular network address (in this case using tcp) with particular capabilities (the options are ActivitelyListening, meaning the channel should create a listener at that address, or None, meaning the channel should connect to an existing listener). As messages are received they are placed in an in-memory queue managed by this channel. When the Button_Click event is fired the next message can be retrieved from the channel by calling the Receive() method which returns a SoapEnvelope from which the body can be unpacked.
After writing about how WSE 2.0 can use policy and config files to secure web services with no lines of code, I was thinking about how 'magic' it seemed and had an aha! moment when I realised that this demonstrated the power of the Pipes and Filters pattern or aspect-oriented style approaches. I believe that these approaches will play an important role in service-oriented applications in future. Here are some recent quotes that back this up.
In my post about using Policy with WSE to create secure web services with no lines of code, I mentioned that this seemed like a good practical demonstration of an aspect-like approach, similar to the Pipes and Filters patterns from Gregor's book. The policy filters are hooked into the incoming and outgoing messages pipelines are able to ensure that messages to and from the service conform to a particular policy, including retrieving tokens and signing and encrypting. This means that the security of the service can be configured outside the service code, making for cleaner implementations.
Harry Pierson mentions that WS-Policy is aspect-like in his interview on TheServerSide.NET:
Ted Neward writes about a recent presentation on Shadowfax and mentions it uses the concept of a pipeline of interceptors. He mentions that Shadowfax uses this approach to deliver functionality such as tracing, authorization, duplicate detection, instrumentation, authentication, authorizations and transactions. Chris Garty started an interesting thread about this on the Shadowfax message board, where it was revealed that the Shadowfax team spent some time Gregor Kiczales (one of the creators of AspectJ).
This same pipeline/interception approach has been used in WSE, ASP.NET, Remoting and COM+. Indigo will implement the same kind of approach using Channels and ChannelProviders. I'm going to keep reading around this area to understand more about this approach and where it how it can be used successfully...
my previous post, Mark Naughton asked an excellent question about how he'd apply WSE 2.0 security to a particular scenario. The answer highlights how to determine which SecurityToken to use in your environment, how to encrypt a UsernameToken with an X509 certificates with code and policy as well as handling authorization with X509 certificates and determining how to distinguish tokens received by a service.
Martin's scenario
An End User uses a Web-based UI Application (ASP.NET 1.1). The Web Application talks to a Web Service (ASMX) for data storage and other processing.The Web Service needs to identify the End User and the "direct" calling application (The Web UI App), since there may be more than one "direct" calling application. We also want to sign and encrypt the Soap messages in both directions.
Since we're talking about adding security to a web service, we'll need WSE 2.0 installed on all of the calling applications. What's the best SecurityToken to use?The first thing to consider is what SecurityTokens are applicable to the scenario. Aside from custom xml or binary tokens, the three options that WSE supports out of the box are as follows.
Username Name and Password
X509 Certificate
Kerberos Ticket
So any of these tokens can identify the end user or the application - it's a matter of working out which one works best for your situation. If you can handle the distribution and installation of X509 certificates to all of the calling applications, I'd suggest using them to sign and encrypt the message. In your scenario, the ASP.NET web server could create a UsernameToken to represent the End User of your web application. For best security, I'd suggest encrypting the UsernameToken with the X509 certificate (hiding the username and password/password digest).
The code would look something like this:
// ... Assume we have an X509SecurityToken and a UsernameToken, and a reference to the web service called proxy.
// Add both tokens to the SOAP envelope
proxy.RequestSoapContext.Security.Tokens.Add( x509token );
proxy.RequestSoapContext.Security.Tokens.Add( usernameToken );
/* Encrypt the username token with the X509 token.
When encrypting, WSE looks for XML elements with an Id attribute from the namespace, which the username token uses.
The "#" indicates the Id with this value is local to this message */
proxy.RequestSoapContext.Security.Elements.Add(new EncryptedData( x509token, "#" + usernameToken.Id ));
// Encrypt the message body with the X509 token to ensure no one can read it.
proxy.RequestSoapContext.Security.Elements.Add(new EncryptedData( x509token ));
// Sign the message with the X509 token to ensure its integrity
proxy.RequestSoapContext.Security.Elements.Add(new MessageSignature( x509token ));
You'll also need to decide on what token type to use when sending the signed and encrypted response. Again, I'd recommend using an X509 certificate for the most cryptographically strong security. The downside is that you'll need to install the certificate on each of the clients. If you can't handle this install requirement then you are stuck with UsernameTokens.
Distinguishing the tokens on the serviceUsing the combination of a UsernameToken and an X509SecurityToken to represent the identity of the end user of the calling application and the identity of the calling application itself makes it easy for the web service to work out which token is which. The web service has to search through the tokens in the RequestSoapContext.Security.Tokens collection to locate each token. If you decided to use two username tokens for example, you would need to distinguish them somehow. For username tokens you could achieve this through the username values, or perhaps by giving them well-know identifiers in their securityToken.Id property. For X509 certificates you could use the certificate name.Performing authorizationIf you use a UsernameToken encrypted with the X509SecurityToken, and you don't want to send the password as plain text, then you'll need to create your own UsernameTokenManager. This is responsible for authenticating the user and creating the SecurityToken.Principal object which can be used for authorization. For the X509SecurityToken you can create a custom X509SecurityTokenManager and in the AuthenticateToken method, after calling the base class's implementation, then create your own generic principal and attach this to the SecurityToken.Principal (Ingo Rammer wrote an excellent article on this last September for MSDN, but it's disappeared. You can find my notes on it here). The benefit of doing this is, rather than just testing for the certificate name inside the service code, is that WSE Policy validation input filters inspect the SecurityToken.Principal when verifying the policy assertion.Wrapping it all up with policyAs I've indicated in previous posts, using policy and configuration to avoid writing security code within your service is an excellent idea. You could do that in this situation as well, except that signing the username token is a little tricky to indicate in policy (you'd have to hand-craft the policy XML file as this is a little way-off the WSE Security Settings Tool territory). In a confidentiality assertion you'd need to modify the MessageParts elements in the policy file to indicate the UsernameToken's Id attribute. I'll leave this as an exercise to the reader (as my daughter is about to wake up again), but Aaron Skonnard shows how you can use XPath 1.0 to achieve this in his excellent article, WS-Policy and WSE 2.0 Assertion Handlers..
In the last post I showed how it takes only 1 line of code to ensure that a web service client signs all messages with a UsernameToken by creating a send-side policy with the WSE 2.0 Security Settings Tool. In this post I show the same feat can be achieved with an X509Token without writing a single line of code. I also show how this functionality powers WSE's support for automatic secure conversation without having to write any code, something that blew me away the first time I saw it.
X509Tokens can be located through Policy and ConfigIn the last post I covered how the PolicyEnforcementOutputFilter checks the send-side policy when processing output messages through the Pipeline and attempts to find a matching token to fulfil the policy. In the case of UsernameTokens, this means searching the SoapContext.Security.Tokens collection or looking in the PolicyEnforcementTokenCache (hence the one line of code). However, with X509Tokens it is possible for WSE to locate the certificate without a single line of code. The Security Settings Tool allows you to configure which X509 certificate you would like to use and stores an identifier for this key in the policy file. This information is combined with the the <x509> element in the Microsoft.Web.Services2 config section handler that specifies which certificate store to find the token in. So the combination of the policy file and the config file gives WSE enough information to find the correct X509 certificate without writing any security-related code within the service.
Policy saves code on the receive-side as wellPolicy files can be used to save writing code on the receive-side as well. On the receive-side the PolicyValidationInputFilter is used to validate that the incoming message meets the assertions defined in the policy file. The policy file can perform checks such as whether the message is signed and/or encrypted with a specific token type or token as well as whether particular message parts have been signed. If an incoming message does not satisfy these assertions then a security fault exception is raised before your service code is even executed. As with send-side policy, the WSE 2.0 Security Settings Tool can help you author this policy, saving you from paying the XML angle bracket tax.
The samples provided with WSE 2.0 have examples of solutions that rely on code and the same solutions using policy. Comparing these solutions side-by-side highlights the many benefits of using policy instead of code to perform receive-side validation. The first is that it keeps your service code much cleaner. Second, it saves you having to remember to make the same calls at the start of each service. Third, you can change your security configuration without having to recompile the code.
Putting it all together: automatic secure conversationThe best example I've seen of the power of no-code security through policy and configuration files is the support in WSE 2.0 for automatic secure conversation. WSE supports the WS-SecureConversation specification that defines a SecurityContextToken that is a fast, light-weight security token that can provide message-level secure communication across multiple calls between a client and a service. It's fast because it is based on a shared symmetric key, rather than an asymmetric key (which is over 1,000 times slower to process). WS-SecureConversation builds upon WS-Trust which defines the notion of a Security Token Service that receives RequestSecurityToken messages and returns the issued SecurityContextToken as part of a RequestSecurityTokenResponse message. WS-SecureConversation uses these mechanisms to request and retrieve the SecurityContextToken. While all of this may sound a little complicated, it is possible to achieve all of this in WSE using the Security Settings Tool. Using the ideas presented above, if you use X509Tokens then all of this can be achieved without writing any code. This is the first demo I showed in my TechEd presentation.
Here's my take on how it performs this magic under the covers (feel free to chime in any time Hervey). On the send-side, the PolicyEnforcementOutputFilter loads the policy file which specifies that all sent messages must be signed and encrypted with a SecurityContextToken. I think that WSE makes an assumption that the web service can act as a SecurityTokenService and issue SecurityContextTokens (This is enabled on the service by adding the automaticSecureConversation element to the config file). So when a SecurityContextToken assertion is found in the policy file WSE loads the SecurityContextTokenManager class and calls the LoadTokenFromSecurityTokenAssertion() method. This method retrieves the tokens that will be used to sign the request before calling the RequestTokenFromIssuer() method that sends the RequestSecurityToken message and unpacks the SecurityContextToken from the RequestSecurityTokenResponse message sent back from the token issuer (which is often the same location as the service). The PolicyEnforcmentOutputFilter then uses this SecurityContextToken to sign and encrypt the outgoing messages.
Phew, that certainly was a lot of digging with Reflector. But it illustrates how powerful policy can be: you can request tokens from a token issuer and use them to sign and encrypt messages without writing a single line of code. This blew me away the first time I saw it working (I didn't believe it until I saw the wire-level traces). I pinged John Bristowe and Christian Weyer asking 'how does this work? It seems like Magic but I know it can't be'. When I thought about it more I realised that this was a demonstration of the power of the concepts such as aspect oriented programming or the Pipes and Filters pattern from Gregor Hohpe's Enterprise Integration Patterns. More on this in a future post.
Making more of a good thing: custom policy assertionsAs well as using the built-in WS-SecurityPolicy features that WSE enables with its Security Settings Tool, it is also possible to create your own custom policy assertions as John Bristowe has demonstrated. Aaron Skonnard also has more about custom policy assertions. WSE has great extensibility hooks that let you write code that uses your own policy assertions, allowing you to write validation code in one location that can be hooked into your service through the config file without having to reference it in your code.
While.
I've been blog-lite while I prepared and then gave my Indigo presentation to the London .NET User Group. The audience seemed to enjoy it and I had a good time presenting it. Some quick points:
Doing a presentation where you don't have the current version of the software to demo is a challenge (the PDC Indigo bits are from an older, M4 build, which has been substantially refactored in the now-being-coded M5 build, so there's not a lot of value in showing demos with the PDC bit). In order to get some code samples I had to transcribe screen shots from the PDC videos (difficult since Steve Swartz didn't use word-wrap in Visual Studio). I'm filled with (even more) respect for Don Box and Steve Swartz's presentation skills after realizing they managed to do four Indigo presentations at the PDC without even compiling, let alone running any of the applications.
Given the PowerPoint dependence and Clemens' reports on the complexity of the topic I decide I'd have to resort to audience bribery. I drew on my experience in TheatreSports and brought several bags of Minties from the Australia Shop at Covent Garden (to my horror I discovered today that they are made in New Zealand!) and threw them to the audience whenever I detected the signs of PowerPoint-induced lethargy...
So, the programming model is there for V1, but a replacement for the enterprise aspects of MSMQ will have to wait until the future. This would cover the apps where I've message queue in the past (basically a private queue that one app can post to independent of another that reads the messages off), though its not (yet) the replacement to Tibco I was hoping for.
What's the relationship between Indigo, MSMQ, BizTalk and SQL Server "Yukon" Service Broker? When and where is each technology appropriate? Basically it depends what type of application you are trying to build and what environment it needs to run in. Here's a chart adapted from the DAT406 session:
Where does BizTalk come into it?BizTalk is a product that that builds upon other technologies in the Microsoft platform. As
BWill says in comments on a previous post, choosing BizTalk or Indigo will be a question of how much of the infrastructure you want to build yourself. BizTalk has connectors for MSMQ currently, in future it may connect to Indigo or possibly to Yukon Service Broker. BizTalk 2004 is currently in Beta it's on a different release schedule than the other products.
Where does Yukon Service Broker fit?Here's Roger Wolter's answer from the DAT406: Building Reliable Asychronous Database Applications with Yukon:.
There were some hostile questions in the DAT406 session about why it was necessary to put a messaging layer in the database. John Cavnar-Jonson (who definitely needs a blog) calls it an 'abomination' in the Developmentor Indigo discussion list. Personally, I think it's part of a Dr. Evil style plan from the SQL team - if they were to add a spreadsheet to the product then a great majority of the world's applications wouldn't need an OS letting the SQL team achieve world domination. Seriously though, there seem to be several good reasons:
Developers that know SQL can now develop queued, asynchronous database apps. The ability to have asynchronous queues is a very nice architectural feature. Being able to achieve this with SQL syntax like BEGIN DIALOG ... FROM SERVICE ... TO SERVICE is pretty cool.
It's all in the one box. Everything happens within the database. Backup, restore, installation, configuration, monitoring and security are all there in the one location. So deployment of the database is deployment of the messaging system etc. (no need to hassle with MSMQ installs).
The message broker is the database. It's easy to query the status of messages, processing the queues is as simple as writing a stored procedure, the database can efficiently throttle the queue processing resources and it's possible to farm out message processing work to another machine since all that is required to process a queue is a DB connection string.
It's fast. The Service Broker is fast because there's no need for two-phase commits for transactional messaging, there's no need to cross processes to get to the messaging platform and if the send and receive queues are in the same database then it's very fast.
Neils Berglund from Developmentor has been teaching Yukon for a while to Microsoft employees (such as Tim Sneath) and has an excellent sample chapter on Yukon Service Broker that's available for free download.
How does MSMQ fit into the longer term picture?BWill says in my comments that the Indigo team has shared the love and embraced the MSMQ team into its building. John Cavnar-Jonson did some research at the PDC:.
As I reported from the PDC, the message was:
think there's more to be said in this space. It's likely that this is about achieving an Indigo V1 release (primarily about unifying the three different programming models and baking WS-* specification support into the platform) and then targeting more ambitious goals with future releases.
Which parts of Indigo will ship in Whidbey and which bit will ship before or with Longhorn?Basically, System.Transaction will be in Whidbey, the rest later. I'm still digesting Don's WSV302 Indigo Part2: Secure, Reliable, Transacted Services and Jim Johnson's Transactional Programming on the Windows Platform presentations to understand this more deeply.
Given that the last version of WSE will be wire-level compatible with Indigo and that a future version of WSE is likely to support WS-ReliableMessaging, what are the benefits of Indigo other than the simplified programming model?Even though I love WSE I'm following the words of Hervey and accepting that WSE is V.Last++. This question was me fishing for what features Indigo will provide me with as an architect/developer that I can't get from WSE.
BWill mentions:.
So, no bites as to what the extra functionality of Indigo might be, so I'm still fishing (e.g. digging deeper into the Longhorn SDK Indigo Samples). Of course, Indigo has learnt from WSE, so the Indigo programming model will also be nicer (though the WSE programming model is already small and well refactored).
Indigo is committed to supporting WS-* standards and interoperability, but what extra functionality will be available if the whole environment is made up of Indigo boxes?I'm still trying to get a feel for what features and functionality might be available in Indigo.
BWill mentions the fact that Indigo will likely run faster in an all-Indigo environment.
Indigo does offer Peer to Peer functionalityRobert Scoble tells us:.
It's not difficult to spot an Evangelist with a marketing strength is it :)? Sounds like a simple programming model on top of existing network stacks opens up opportunities to use the Internet for more than just the web browsers against a central web server. I think I'll review WSV306 Indigo and Peer to Peer apps on the train this week.
On the 'death of objects' and the SOA 'paradigm shift'Same started off by saying:
SOA [is] one of those paradigm shifts - it really does mean the death of objects at least as we know them.
SOA [is] one of those paradigm shifts - it really does mean the death of objects at least as we know them.
Before I could suggest that we fine any blogger who uses worn-out expressions like 'it's the death of [technology X]' and 'paradigm shift', Sam later clarifies that he didn't mean objects but OOA/OOD: agree that proclaiming the death of objects was a misstatement. It may be more accurate to say that components, which are based around sharing types between parts of the system, are likely to become less of a primary focus with the rise of SOAs. I agree with what Bryan Noye's says:
I don't think SOA means the death of OOP or Components at all. Just like most people build components using OOP, I think most people will built SOAs using OOP and Components. They are not competing concepts but complementary.
I don't think SOA means the death of OOP or Components at all. Just like most people build components using OOP, I think most people will built SOAs using OOP and Components. They are not competing concepts but complementary.
SOAs are important where there is a need to share messages and interoperate with unknown othersFor my mind, SOAs provide the most benefits when there is a need to share data/information and interoperate with other groups, possibly on unknown platforms, that you have no control over. There will still be plenty of applications that are built in the current n-tier component Enterprise Architectures. Talking with Jim Johnson at the PDC he made the point that the benefits of SOA with external partners may also be benefits within an organisation or service boundary. I agree, but I think the technology platform (e.g. Indigo) and management tools (e.g. whatever Microsoft are planning here) have a long way to go before these benefits outweigh those of using existing component-based technologies.
Interoperating with others often means going for a lowest common-denominator approach which is always going to perform slower than when you can go with a binary format and control both ends of the wire (as Sam mentioned, using ASMX just doesn't give the same performance as current 'binary typed' systems, which is why Indigo will do special things if it knows it is working in an all-Indigo environment).
SOAs are currently still complex and difficult to build and manageSOAs are currently still complex and difficult to build and manage. They are complex because the standards are still being implemented (on a recent project I was on it took a major international bank nearly 3 months to convince a market-leading J2EE vendor to adopt SOAP headers and honour the 'mustUnderstand' attribute). Newer standards like WS-Addressing are still being worked through and implemented. It's difficult because of the layers of the technology that must be understood in order to build the systems. Just read Clemens' description of his latest FABRIQ project to see the level of technology understanding, skill set and experience you need to build an SOA project today. As the tools develop and experience and awareness of SOA's grow they are likely to get easier and simpler to build, in the same way that client server and then n-tier were once considered complex and difficult but are now considered main stream.
Services are about outside, Objects/COM+/Enterprise Services/MSMQ are for insideIt's useful to be aware of boundaries as Don Box demonstrated at the PDC. There are parts of SOA that are designed to be used on public organisational boundaries and some that are better deployed within an organisational boundary or even behind a service boundary. Clemens Vasters' makes the distinction between 'near and far'.
When using services outside an organisational boundary there are benefits to using open standards and working with contracts and schema rather than type, since it's difficult to control what is at the other endpoint. Enterprise Services and MSMQ provide useful functionality that isn't yet covered in WS-* standards, but the problems with these two approaches is that they often share binary type information or require a Microsoft box or adapter at the other end. This doesn't mean they shouldn't be used in SOA, just that they are better used inside the organisation where it's possible to have more control over the communication and the endpoints. Within the service boundary there's still a service to provide, and it's here that technologies like components/COM+/ES/MSMQ are likely to be just as useful as they are today.
Gregor Hohpe's Enterprise Integration Patterns provides useful SOA guidanceI think that Sam Gentile is right, there are some architectural changes that need to move towards SOA and message-oriented systems. Luckily Gregor Hohpe has written Enterprise Integration Patterns - Designing, Building, and Deploying Messaging Solutions a great book distilling his experience of these systems. Hervey Wilson's reading it and its on Ingo Rammer's list of recommended books as well!.
A month after the Indigo 'Kimono opening' at the PDC there's still a lack of clarity about what Indigo is, how it relates to other messaging technologies and what's the best way to start developing applications today. While a lot of this was covered at the PDC my perception is that some of the message hasn't been ack'd successfully from the audience. [Update: See my more recent post 'More on the Microsoft Messaging message' for some answers to these questions]
The Longhorn DevelopMentor mailing list had an excellent exchange yesterday on Indigo, which has lead me to highlight some areas where I'd like a clearer message:
I'm know some of these were addressed at the PDC. I'm still digesting some of it (for example, the PDC DAT406 presentation on the "Yukon" Service Broker shows how MSMQ, Indigo and Service Broker are positioned, though not BizTalk). Other questions I'm still researching.
Web services are being adopted in the wildThere certainly is a lot of material to cover in web services from the specifications to the implementation, so we only briefly touched on Indigo. There were 30 or so developers there. Starting with the mandatory 'polling' questions I was surprised that around 60% of them had used web services already. Of that group about a third had added some form of security (HTTPS, VPN or internal network). There were some good questions in the group ('Can I use WS-Security in a PHP application? Will Microsoft compete with Tibco and the EAI vendors with Indigo and/or BizTalk? Has Don Box stopped using the term GXA?)
Blogs are a great way to learn about Web ServicesI mentioned that blogging is a great way of tracking what's happening web services. I showed how:
How to get started reading blogsFirst download a news aggregator like SharpReader or RSSBandit then download my modified OPML file. It contains many Microsoft bloggers as well as other web service related blogs (including those on Matt Powell's list).
I also promised a link to the code that downloads all of the PDC powerpoint presentations.
Don Box is a great speaker, but as Scott Hanselman and Steve Maine and this post demonstrate, sometimes understanding and explaining the reasons behind his tenets can be difficult. As an intellectual entertainment and learning exercise, I'm throwing my hat into the ring, trying to answer a commentor on Scott's website who asked what is meant by Don's tenet of 'Share Schema, Not Type' (actually it was 'share schema, not class'), or as I like to ask 'Why should I use web services (with schema and WSDL) over .NET remoting or DCOM?'.
Steve's AnswerSteve's answer goes like this:
I like Steve's story, but I'm not sure it's the answer. The open content model is one approach for extending schemas, though it still works like COM and requires that the newest version provide all of the content required for all previous versions, so I'm not sure how much of a victory this is for the schema approach. Also, I'm not sure of the differences in terms of semantic consistency, between a COM interface that defines the types of input parameter and output parameters from a method, and contracts based on schema that defines the format of input and output messages of a service.
Scott's AnswerScott's answer was that types are unique and immutable and require the sharing of an assembly, whereas schema is a description of the XML content that acts like a contract between parties that can be used without the need for an assembly. I think this emphasis on avoiding a binary assembly is closer to what Don was talking about.
What was Don thinking?In Don's PDC Session “Indigo": Services and the Future of Distributed Applications, (slides here) session, Don's tenet was 'Share schema, not class: Integration based on message formats and exchange patterns, not classes and objects' (see Tenet 3 of my transcript of Don's talk at the PDC, which now seems less comprehensible than I'd hoped). In his MSDN article Don restates it as "Services share schema and contract, not class".
"
So, I think the advice behind Don's tenet is to build services using the service-oriented approach (using schema and WSDL to bind the input and output message) rather than an object oriented approach (DCOM or .NET remoting). This is because it is easier to evolve the structure of the input and output messages using schema rather than having to redeploying an assembly or a binary interface description (*.tlb).
Don's argument is that service-oriented approaches can evolve more flexibly because they can use "features such as XML element wildcards (like xsd:any) and optional SOAP header blocks to evolve a service in ways that do not break already deployed code." I'm not entirely convinced that COM didn't provide a similar level of flexibility as the xsd:any wildcard (e.g. using As Object in VB). The optional SOAP headers seems feasible.
What do I think?I think that interoperability across binary platforms is the key reason to favour schema and contracts over classes. Object-orientation and types are all about binary representations that are just too difficult to get all of the binary platform vendors to agree on. Schema provides a machine verifiable mechanism that currently uses text representations which are the lowest common denominator and can be implemented across all platforms.
Within a platform, I think using schema and contracts (web services) over classes (DCOM/remoting) wins on pragmatic grounds because schema and contracts seem easier to deploy (no binary object copying required), debug (it's easy to sniff the packets on the wire to work out what's going on) and evolve (using schema and wildcards seem easier to work with than interfaces in COM, especially when with the XML versioning improvments in Whidbey). a
"We are not building the uber queuing system - we are not a replacement yet for MSMQ - we have support for routing, but we a | http://benjaminm.net/CategoryView,category,WebServices.aspx | crawl-002 | refinedweb | 6,326 | 50.97 |
AS3 language 101 for C/C++ coders
This article covers aspects of ActionScript 3 that would be helpful to C/C++ application engineers transitioning to application development in Flex and Flash.
I've used C/C++ through most of my educational and professional career. I've also done a respectable share of Javascript and Perl. ActionScript 3 could be viewed as an interesting blend of features from all of these languages. Actionscript 3 conforms to the ECMAscript 4 spec. So, it is a standard languange. In addition to the standard language syntax, AS3 contains a custom class framework for working with Flex/Flash.
The following are areas in the AS3 language that I personally found interesting.
Type declarations
Compared to C/C++, the first syntax oddity that you'll notice is how AS3 declares it's variable types. All type declarations are in post-fix notation. For example, in C you would define a function like:
int myFunction(char *str);
In AS3 this same function declaration looks like:
function myFunction(str:String) : int
Typecasting
In AS3, think of typecasting like calling the constructor of the type. Functionally, this isn't what happens, but the syntax is what it appears to be doing.
If you see what appears to be something calling the constructor of a class or type but it does not use "new", it's not calling a constructor. It's performing a typecast. For example:
var foo:SomeClass = SomeClass(someObject);
var bar:SomeClass = new SomeClass(someObject);
The first line is typecasting "someObject" into SomeClass. The second line is creating a new SomeClass object, passing "someObject" as a parameter to the constructor. This subtle difference can have wide ranging effects (new object vs. reusing an existing object, etc.). Depending on what the class constructor takes as a parameter, it is possible that both the typecast and the constructor would compile with no errors/warnings in all situations. So, be careful. The difference between a typecast and a new object is just the "new" keyword.
Variable scope
Variables are scoped to the function. ActionScript employs a system called "variable hoisting", which implicitly pulls all variable declarations in the function (even ones in nested blocks) to the top of the function at compile time. For example:
public function doSomething() : void { var foo:int = 4; if (foo) { var bar:int = 2; } }
With viable hoisting, all declared variables in a function are moved to the top of the function block at compile time. In the above example, the compile time result looks like:
public function doSomething() : void { var foo:int; var bar:int; foo = 4; if (foo) { bar = 2; } }
Note that the "bar" variable is now in scope for the entire function. This subtle variable handling in AS3 may lead to unintended situations since any use of the "bar" variable before the "if(foo)" block is now valid, even though it is not declared until inside the if() block. AS3 will complain if you declare the same variable more than once in the same function, but it won't complain if you use a variable before it's declared.
void *
In AS3, you can use the wild card type to mimic the "void *" type. For example, say you have a factory object that can return objects of many types. This can be implemented as such:
public function wildcard() : void { var anything:* = ObjectFactory.getData(); }
Run time type checking
Since you can pass things around as anything using the wildcard type (void *), you need a way to check the type of the object at runtime. To do this, you can use the "
is" directive:
public function doSomething() : void { var something:* = getData(); if (something is String) { // handle string logic } else { // do something else } }
This allows for runtime type checking which allows your application to perform different logic depending on what the given object is.
No function overloading
There is no function overloading in AS3. However, you can implement a system that mimics function overloading. For example, in C++ you might have some function definitions like:
int doSomething(int i); int doSomething(char *str);
In AS3, you can't overload a function, but the language allows you to make use of the type wildcard "
*" and use the "is" directive as a way of performing runtime checks and branching based on what was passed in. For example:
function doSomething(obj:*) : int { if (obj is int) // do int stuff else if (obj is String) // do string stuff else // what type did you give me?
}
No operator overloading
AS3 has no way to override the meaning of "+", "=", or any other operator. The closest functionality is the get/set member accessors that you can declare to handle the getting and assignment of class members.
Everything is an object
All types within AS3 are derived classes of the base class "Object". Even if a class does not "extend Object", it still does. This means when you pass anything to a function, you are passing that data by reference (pointer) in all cases. If the called function modifies the object you passed, your version of the data will be modified as well.
However, there are some exceptions. Basic types like int, Number, and String are objects as well, but their implementation performs reference counting to make them behave like stack objects. If the called function simply assigns new values to variables of these basic types, the data in the caller function does not get modified.
Only one packaged class per file
When implementing AS3 classes, you can only have one class definition per package declaration per AS file. For example:
Fig. 1 and 2 are valid in AS3. Fig. 3 is not valid. AS3 does not allow you to have more than one class inside a package declaration and you can only have one package declaration per file. The interesting thing to note is that the "PrivateClass" in Fig. 2 is only accessable to code inside that file. Outside that file, the "PrivateClass" is an unknown type. You can use the construct in Fig. 2 to hide implementation classes from the rest of the world.
Being limited to one public class per file may not be a huge problem, but it may come as a logistical hurdle if you are expecting to define multiple classes within the same file. Plan your file structure accordingly.
virtual functions
The major difference between AS3 and C++ when it comes to inherited functions is the fact that the functions in a derived class do not override the base class functions unless you declare a method as "
override". Without this declaration, the base class version of the function will be called in all cases.
You can think of this as almost the opposite of C++'s "virtual" declaration. In C++, once a function is declared virtual, that function is automatically virtual for all derived classes regardless if the derived class declares it as virtual. In AS3, the derived class controls what functions are "virtual" (overridden).
dynamic_cast
In AS3, you will find yourself dealing with interfaces. These are similar in concept to abstract base classes in C++. Objects that implement an interface will be passed around as instances of that interface. So, given an object of a specific interface, how do you get the object as an instance of its subclass? You use the "
as" functionality. For example:
var someInterface:ISomeInterface = factory.getSomeInterface(); var someClass:SomeClass = (someInterface as SomeClass);
If "someInterface" is actually an instance of "SomeClass" or a derived class of "SomeClass", the variable "someClass" will be a reference to that object. If "someInterface" is an instance of some other class, the "someClass" variable will be null.
Raw character strings
AS3 has a primative string type named "String". You can access the characters of the string and do things with them, but what if you want access to the raw ASCII or UTF8 bytes? The easiest way to do this is via the ByteArray() class. You can write a string into a ByteArray object and then pull it back out as raw bytes.
Well, I hope this is useful to anyone making the transition to Flex/ActionScript from C/C++. If you have any questions, found an error in my assessment, or just want to say something, please leave a comment. We're trying to make Flex development as easy as possible for everyone and any feedback is much appreciated.
Thanks!
I just wanted to add one note about the as operator described above in the section titled "dynamic_cast":
The as operator, which is used to cast an object to a type at runtime, can be used not only to cast an instance of an interface (as in the example above) but also to upcast or downcast any type to a possibly related type (i.e. a subclass or superclass). So in the code below, either of the as statements would properly cast the instance:
Of course, the second as statement only works because parent1 was declared as ParentClass but instantiated as a ChildClass instance.
For me personally I think of is and as as a pair of related operators; one tells you whether something is an instance of a type, the other one tells you and casts the object all in one operation.
(Sorry about the short note that turned into a long note. I'm sure you already know all of this anyway, but I thought it might not be precisely clear to everyone, based solely on the context of the article.)
In any case, thanks for the articles, and I'm excited to read more!
Posted by: Paul Robertson | May 31, 2006 07:30 PM
hi,
I can not understand the code by using ":" symbol.
I think it has compile time error. Is there any one who could explane the use of this ":" Symbol in the code for this article?
For instance we have the following code on the article :-
In C: int i = (int)somefloat;
In AS3: var i:int = int(somefloat);
jone,
Posted by: Jone | July 6, 2006 07:03 AM | http://blogs.adobe.com/kiwi/2006/05/as3_language_101_for_cc_coders_1.html | crawl-002 | refinedweb | 1,668 | 61.26 |
Welcome to Cisco Support Community. We would love to have your feedback.
For an introduction to the new site, click here. If you'd prefer to explore, try our test area to get started. And see here for current known issues.
Hi Team,
I have some Cisco 881 router configuration questions and would like some help from you.
I have a web server within my network and I had forwarded port 80 on the Cisco router WAN interface to allow
external connection to the web server. .
I have no problem connecting to this domain name from my home internet.
However, I noticed that I am not able to connect to the public domain name of this server from
my internal office network. Is there any configuration
settings required to allow this to work on my internal network? There is no firewall in my network. Please advise asap.
Below is the Cisco router running configuration .
Regards,
MayThu
Current configuration : 2445 bytes
!
! Last configuration change at xxxxx
version 15.0
no service pad
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
no service password-recovery
hostnamexxxxxxx
boot-start-marker
boot-end-marker
enable password enable
no aaa new-model
memory-size iomem 10
ip source-route
ip dhcp excluded-address 192.168.12.1
ip dhcp pool lan
network 192.168.10.0 255.255.255.0
default-router 192.168.10.1
dns-server 210.23.4.6 210.23.1.3
lease infinite
ip dhcp pool VOICE-POOL
import all
network 192.168.11.0 255.255.255.0
default-router 192.168.11.1
ip dhcp pool GUEST-POOL
network 192.168.12.0 255.255.255.0
default-router 192.168.12.1
ip cef
no ipv6 cef
multilink bundle-name authenticated
license udi pid CISCO881-SEC-K9 sn xxxxx
interface FastEthernet0
description AUTONONOMOUS AIR
switchport trunk allowed vlan 1,2,1002-1005
switchport mode trunk
!
interface FastEthernet1
description AUTONOMOUS
switchport access vlan 2
interface FastEthernet2
interface FastEthernet3
description GUEST VLAN
switchport access vlan 3
interface FastEthernet4
ip address dhcp
ip nat outside
ip virtual-reassembly
duplex full
speed 100
interface Vlan1
ip address 192.168.10.1 255.255.255.0
ip nat inside
interface Vlan2
description VOICE VLAN
ip address 192.168.11.1 255.255.255.0
interface Vlan3
ip address 192.168.12.1 255.255.255.0
ip nat inside
ip forward-protocol nd
no ip http server
no ip http secure-server
no ip nat service sip udp port 5060
ip nat inside source list 1 interface FastEthernet4 overload
ip nat inside source static tcp 192.168.10.248 5500 interface FastEthernet4 5500
ip nat inside source static tcp 192.168.10.252 80 interface FastEthernet4 80
ip route 0.0.0.0 0.0.0.0 dhcp
access-list 1 permit 192.168.10.0 0.0.0.255
access-list 1 permit 192.168.11.0 0.0.0.255
access-list 1 permit 192.168.12.0 0.0.0.255
snmp-server community xxxxx
snmp-serverxxxxx
control-plane
line con 0
no modem enable
line aux 0
line vty 0 4
password enable
scheduler max-task-time 5000
end
Hi,
If you have an internal DNS server then configure a A record with the private IP of the server on this.
Select this internal DNS server as primary and then when hosts on the inside will do name resolution they will get the private IP.
Regards
Alain
Don't forget to rate helpful posts.
Hi Alain,
Thanks for your help. We don't have internal server. Is that why internal network can't go? Should we have internal server?
May Thu
Hi May Thu,
most of applications today requires DNS resolution. Your network can go without DNS but if you will run some web server such is Intranet or other applications for users so they will have to type IP instead of name.
Workaround for this is to edit hosts file on your machine and then you will get name resolution for your system.
Jan
if you rely on an external DNS server then the resolution will get you the external IP instead of the internal IP and in which case you can use NAT NVI config on your Cisco device to enable NAT hairpinning.
int vlan 1
no ip nat inside
no ip redirect
ip nat enable
int f4
no ip nat outside
no ip nat inside source list 1 interface FastEthernet4 overload
no ip nat inside source static tcp 192.168.10.248 5500 interface FastEthernet4 5500
no ip nat inside source static tcp 192.168.10.252 80 interface FastEthernet4 80
ip nat source list 1 interface FastEthernet4 overload
ip nat source static tcp 192.168.10.248 5500 interface FastEthernet4 5500
ip nat source static tcp 192.168.10.252 80 interface FastEthernet4 80 | https://supportforums.cisco.com/t5/lan-switching-and-routing/internal-web-server-nat-issue-in-cisco-881/td-p/2221349 | CC-MAIN-2017-39 | refinedweb | 816 | 65.83 |
This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.
On Fri, Aug 14, 2009 at 3:32 AM, Chandru<chandru@in.ibm.com> wrote:yes,
When a program is restarted within gdb, the initial breakpoint hit messages are not outputted on to the screen. Inform the user that a watchpoint has been hit
Signed-off-by: Chandru Siddalingappa <chandru@ilinux.vnet.ibm.com> ---
--- gdb/breakpoint.c.orig 2009-08-14 17:53:06.000000000 +0530 +++ gdb/breakpoint.c 2009-08-14 17:54:02.000000000 +0530 @@ -842,6 +842,9 @@ update_watchpoint (struct breakpoint *b, struct bp_location *loc; bpstat bs;
+ if (breakpoint_enabled (b)) + mention(b); + unlink_locations_from_global_list (b); for (loc = b->loc; loc;) {
Hi.
If we're stopping because of a watchpoint and not reporting it, that's bad.
But it seems odd that this is happening, and simple experiments don't
reveal anything.
Do you have a testcase?
#include <stdio.h> #include <stdlib.h>
int value1 = -1; int value2 = -1;
int func1 () { value1=2; value2=value1; return 0;
int main () { int i;
value1 =3; value2 = value1; for (i=0; i<2; i++) { value1 = i; value2 = value1; }
return 0; }
(gdb) break main Breakpoint 1 at 0x8048453: file rawatch.c, line 20. (gdb) run Starting program: /home/vrvazque/rawatch
Breakpoint 1, main () at rawatch.c:20 20 value1 =3; (gdb) rwatch value1 Hardware read watchpoint 2: value1 (gdb) awatch value1 Hardware access (read/write) watchpoint 3: value1 (gdb) cont Continuing. Hardware access (read/write) watchpoint 3: value1
Old value = -1 New value = 3 0x08048462 in main () at rawatch.c:21 21 value2 = value1; (gdb) cont ... ... ... (gdb) cont Continuing.
Program exited normally. (gdb) run Starting program: /home/vrvazque/rawatch
Breakpoint 1, main () at rawatch.c:20 20 value1 =3; (gdb) cont | http://sourceware.org/ml/gdb-patches/2009-08/msg00213.html | CC-MAIN-2014-52 | refinedweb | 295 | 69.38 |
Tablet PC Platform Independence
Dr. Neil Roodyn
Independent contractor,
August, 2004
Applies to:
Microsoft® Windows® XP Tablet PC Edition 2005
Application Deployment
Summary: Describes a number of strategies for deploying an application across Windows XP and Windows XP Tablet PC Edition. Details and code examples illustrate single application deployment for multiple platforms. This article assumes you have Microsoft Visual Studio® .NET 2003 and the Windows XP Tablet PC Edition Development Kit 1.7 installed on your development computer. You must have a Tablet PC to test the features specific to Tablet PC; however, you do not need a Tablet PC to complete the exercises or create the application.
Contents
Introduction
Different Strategies
The Common Strategy
Exercise Time
Getting Started
Is this a Tablet PC?
Enabling Controls
Printing
Putting It All Together
Biography
Introduction
Over the last year I helped a number of teams to use the features of the Tablet PC operating system in their applications. Several of the teams I've worked with wanted to ship the same application to multiple types of computers: desktops, laptops, and Tablet PCs. This makes a lot of sense, as the Tablet PC runs a full version of the Windows XP Professional operating system with extra features to support the Tablet PC hardware. I expect if you are reading this, then like the teams I have worked with, you want your application to use the features of the Tablet PC if they are available. Through a series of hands-on exercise we are going to learn how our code can detect if Tablet PC hardware is available and if so, take advantage of the additional features.
Different Strategies
Before we get into writing any code I want to think through some possible strategies we can take to enable us to build one assembly that runs across the different platforms. Once we detect if the device we are running on is a Tablet PC we really only have three choices:
- The If then Add strategy. This strategy sets a flag to indicate if the application is running on a Tablet PC. Each time we encounter an area of the code in which we want add Tablet-specific features we check this flag. If the flag is set we call the code to add the Tablet PC features. This is easy to implement but it can be confusing for the end user if the interface doesn't clearly change to distinguish that it is able to accept Ink as an input.
- The If-Else Strategy. This strategy is similar to the If then Add strategy except that we only show the user interface features for the platform that the application is running on. In this strategy, all the areas of the code that can have Ink input have an
if-elsecondition.
This is not much harder to implement than the first strategy but it provides a much better user interface for the end user.
- Alternate forms strategy. With this strategy you need to create two sets of forms: one set that runs on the Tablet PC and the other set that runs on a standard Windows XP computer. While this might seem like a lot of work, you can use form inheritance to lighten the load. This creates a much more object oriented pattern for your code. You still need to make the decision as to which form to load up through checking a flag that gets set when the application loads up. This method has the advantage of creating cleaner code, and if you stick with the principle of never duplicating code then it can help to separate the business logic from the user interface. The disadvantage of this approach is that it creates a larger assembly file.
The Common Strategy
Whichever strategy you decide is best for your project (and they are all valid) you will need to consider some aspect of how the common language runtime (CLR) works. We want to have some code paths that run only on the Tablet PC and those paths would not only fail to run on a platform other than Windows XP Tablet PC Edition, but they may also fail to compile. Remember that the Microsoft intermediate language (MSIL) you create when you build a managed assembly is compiled by a just-in-time (JIT) compiler before it gets run. If our application is not running on a Tablet PC we don't want the code that is specific to Tablet PC to even get compiled. We can force this to happen by extracting all of our Tablet-specific code into methods that only get called when running on the Tablet PC. Here is an example of what not to do.
This is the right way to do it. If the application doesn't run on a Tablet PC then the InitializeTabletComponents method never gets called. Therefore, the method doesn't get compiled unless it runs on a Tablet PC.
Exercise Time
Through the following exercise we are going to develop a small application that allows the user to create payment checks based on a template provided by their bank. A customer on a computer other than a Tablet PC will be able to enter the payee and amounts with a keyboard, whereas a user with a Tablet PC will be able to write the payee, amount, and sign the check, all by using a pen.
This example uses the second strategy (If–Else) and the common strategy.
Getting Started
In this first exercise we start by creating a new C# Windows Forms project to build up on it throughout the rest of the exercises in this article.
- Create a new C# Windows Application project called CheckWriter, as shown in Figure 1.
Figure 1. Create a new project
- Add a reference to the Tablet PC Development Kit 1.7. In the Solution Explorer window, right-click the References folder in the Project, and then click Add Reference. On the .NET tab in the Add Reference dialog box, on the components list, select Microsoft Tablet PC API, version 1.7.xxxx.x. Click Select, and then click OK. The last step is shown in Figure 2.
Figure 2. Add a reference to the Tablet PC API
- Change the name of the generated Form class from Form1 to CheckForm.
- In the Design View of the CheckForm project, add a PictureBox control to the form. In the properties pane for the PictureBox, change the Name property from pictureBox1 to picCheck. Change the Dock property to Fill by clicking the central area in the drop-down box that appears when you select the Dock property. Finally, set the Image property to something that resembles a check. I have added an image for fictional bank check on my website at. Your form should now look something like Figure 3.
Figure 3. The Check Writer Form with PictureBox control
- Next, put three TextBox controls on the form: one each for the payee, written amount, and numeric amount. Change the Name property for these controls to Payee, Amount and NumAmount respectively, as illustrated in Figure 4.
Figure 4. Add the TextBox controls
- Compile and run the application. We now have a form with a check we can fill in.
Is this a Tablet PC?
Now, we can put in place the code that will determine if we run this application for a Tablet PC or for another type of computer.
- In the CheckForm class we will add a method called IsRunningOnTablet. This method calls the Win32 API GetSystemMetrics to discover if the device is a Tablet PC. We will then store this result in a member variable, so we don't have to make the API call each time we need to know if the application is running on a Tablet PC.
using System.Runtime.InteropServices; . . . [DllImport("user32.dll")] private static extern int GetSystemMetrics(int nIndex); // System metric constant for Windows XP Tablet PC Edition private const int SM_TABLETPC = 86; private readonly bool tabletEnabled; protected bool IsRunningOnTablet() { return (GetSystemMetrics(SM_TABLETPC) != 0); }
- We will call this method from the constructor of the CheckForm class.
Enabling Controls
Now that we know if we are running on a Tablet PC, we can determine which way we want the form to look. If we are not running on a Tablet PC we want the form to be pretty much the way we've designed it so that the user can enter the values in the TextBox controls. On the other hand, if a user has a Tablet PC they won't need the TextBox controls, so we should not enable them. Instead, we should allow the user to write on the check.
- Start by setting up the PictureBox to accept ink input if the application is running on a Tablet PC. We'll need the InkOverlay object elsewhere in our class, so we'll make it a member variable of the class. Remember, the common strategy we discussed earlier means we will have to create a separate method.
private InkOverlay inkOverlay; public CheckForm() { tabletEnabled = IsRunningOnTablet(); InitializeComponent(); if (tabletEnabled) { InitializeTabletComponents(); } } protected void InitializeTabletComponents() { inkOverlay = new Microsoft.Ink.InkOverlay(picCheck.Handle); inkOverlay.Enabled = true; }
- Next, we are going to do something we're not supposed to do. We are going to edit the InitializeComponnent method. The comments tell us not to modify the contents of the method, but it's just code right? As long as we know what we are doing we should be just fine.
We are going to extract the initialization of the controls we want only in a non-Tablet PC application. We will put this code in a separate method called InitializeNonTabletControls.
private void InitializeNonTabletComponents() { this.Payee = new System.Windows.Forms.TextBox(); this.Amount = new System.Windows.Forms.TextBox(); this.NumAmount = new System.Windows.Forms.TextBox(); // // Payee // this.Payee.Location = new System.Drawing.Point(176, 112); this.Payee.Name = "Payee"; this.Payee.Size = new System.Drawing.Size(360, 20); this.Payee.TabIndex = 1; this.Payee.Text = "<Payee>"; // // Amount // this.Amount.Location = new System.Drawing.Point(224, 168); this.Amount.Name = "Amount"; this.Amount.Size = new System.Drawing.Size(392, 20); this.Amount.TabIndex = 2; this.Amount.Text = "<Amount>"; // // NumAmount // this.NumAmount.Location = new System.Drawing.Point(672, 240); this.NumAmount.Name = "NumAmount"; this.NumAmount.Size = new System.Drawing.Size(96, 20); this.NumAmount.TabIndex = 3; this.NumAmount.Text = "<NumAmount>"; this.Controls.Add(this.NumAmount); this.Controls.Add(this.Amount); this.Controls.Add(this.Payee); this.Payee.BringToFront(); this.Amount.BringToFront(); this.NumAmount.BringToFront(); }
- We will call this method in the constructor of our CheckForm class.
- Compile and run this application. On a Tablet PC you should be able to write directly on the check. If you run it on a computer that is not a Tablet PC, you can enter the values in the TextBox controls.
Printing
- Let's now try to use the data entered to actually do something. For this little application we will just print out the entered fields. If this is text in the text box then we will print that on the check, if it is Ink that has been handwritten we will want to print that out instead. Don't forget the common strategy applies here so we will create a separate method for printing the Ink. This method will only get called if the application is running on a Tablet PC.
private PrintDocument prnDoc; private void PrintInk(Graphics g) { inkOverlay.Renderer.Draw(g, inkOverlay.Ink.Strokes); } private void prnDoc_PrintPage(object sender, PrintPageEventArgs e) { e.Graphics.DrawImage(picCheck.Image, 0, 0); if (tabletEnabled) { PrintInk(e.Graphics); } else { int yOffset = 20; int xOffset = 5; Brush brsh = Brushes.Blue; e.Graphics.DrawString(Payee.Text, Payee.Font,brsh, (Payee.Left+2) + xOffset, yOffset+ Payee.Top); e.Graphics.DrawString(Amount.Text,Amount.Font, brsh,(Amount.Left+2) + xOffset, yOffset + Amount.Top); e.Graphics.DrawString(NumAmount.Text,NumAmount.Font, brsh,(NumAmount.Left+2) + xOffset, yOffset + NumAmount.Top ); } }
- We also need to add a button to the form to allow the user to print it out.
- Run this and test it. The printing might not be very accurate, but it works as a starting point to show you how to use the Ink versus text input.
Putting It All Together
You should now have a good idea of how to develop a managed Windows Form application that can run on both the traditional Windows platform and Windows XP Tablet PC Edition. As the Tablet PC becomes more widely adopted, usage will require many Windows applications to support the features of the Tablet PC.
There are some extra things you could do now. You could detect if the pen is in range and then change the interface accordingly. This way even a Tablet PC user can use the application in a traditional manner when they have their Tablet docked or are using their device as a laptop.. | http://msdn.microsoft.com/en-us/library/ms812504.aspx | CC-MAIN-2014-41 | refinedweb | 2,142 | 64.61 |
Creating a custom 404 Page Not Found error page is so easy in Django (all you do is put your own template named “404.html” at the root of your templates directory) that I naturally assumed doing the same for a 403 Forbidden error page would be just as easy. Unfortunately it is not.
After searching around for quite a while last night, I found bits and pieces that I have modified slightly and republished below in a unambiguous step-by-step tutorial (see the “Source and Other Resources” section at the end of the post for a few of the source posts).
The method posted below leverages some custom Django middleware code. Please, if you have a better, more elegant solution I’d love to hear about it in the comments.
Create the Middleware
- Create a directory at the root of your project called “middleware”
- Add a file named “__init__.py” to this directory
- Create a file named “http.py” in this directory with the following contents:
from django.conf import settings from django.http import HttpResponseForbidden from django.template import RequestContext,Template,loader,TemplateDoesNotExist from django.utils.importlib import import_module """ # Middleware to allow the display of a 403.html template when a # 403 error is raised. """ class Http403(Exception): pass class Http403Middleware(object): def process_exception(self, request, exception): from http import Http403 if not isinstance(exception, Http403): # Return None so django doesn't re-raise the exception return None try: # Handle import error but allow any type error from view callback = getattr(import_module(settings.ROOT_URLCONF),'handler403') return callback(request,exception) except (ImportError,AttributeError): # Try to get a 403 template try: # First look for a user-defined template named "403.html" t = loader.get_template('403.html') except TemplateDoesNotExist: # If a template doesn't exist in the projct, use the following hardcoded template t = Template("""{% load i18n %} <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" ""> <html> <head> <title>{% trans "403 ERROR: Access denied" %}</title> </head> <body> <h1>{% trans "Access Denied (403)" %}</h1> {% trans "We're sorry, but you are not authorized to view this page." %} </body> </html>""") # Now use context and render template c = RequestContext(request, { 'message': exception.message }) return HttpResponseForbidden(t.render(c))
- You should now have this:
- /myproject/middleware/
- __init__.py
- http.py
Modify Your Project’s “settings.py”
- Add ” ‘myproject.middleware.http.Http403Middleware’, ” to your MIDDLEWARE_CLASSES
Create a Custom “403.html” page
- Put it at the root of your template directory
- Sample content: (note: assumes you’ve already defined a “base.html” template)
{% extends "base.html" %} {% block title %} | Access Denied{% endblock %} {% block content %} <h1>Access Denied</h1> <span>We're sorry, but you are not authorized to view this page (Error: 403)</span> {% endblock content %}
Raise a 403 Error
- In the file where you want to raise the 403 add this at the top: (I used it in my project’s “view.py” file)
from myproject.middleware.http import Http403
- Raise the 403
if request.user.id != object.user.id: raise Http403
That’s it, you now have a Django template to handle 403 Forbidden errors. I’m sure there’s a way for your front-end, production web server to do the same, but I haven’t explored that yet.
Source and Other Resources
- A middleware solution is here (HT Felipe ‘chronos’ Prenholato)
- A middleware solution is here (HT Glen Zangirolami)
- A potential decorator solution is here (HT Magus)
9 Responses to “Show a Custom 403 Forbidden Error Page in Django”
Hmm, your code (at least middleware) is very similar to one that I wrote a half year ago and can be found at
AFAIK have some discussion on django-dev about add this error in http core
Thanks Felipe, you post was definitely one of the key resource I found to help me solve this problem. Here’s hoping that simpler 403 handling gets added to core.
I like how you added TemplateDoesNotExist. Thanks for the reference!
Thanks for this I found it very useful.
I had to change:
‘message’: exception.message
to:
‘message’: ‘%s’ % exception
otherwise I got an exception “message is not an attribute…”
Cheers,
Alan
Thanks for the correction, Alan.
Thanks Mitch! Good stuff
Hey Mark! Thanks for stopping by, hope all is well.
That’s great, thanks.
If you want to define the 403 handler as a string in urls.py (like it’s done with 404), replace the following line:
callback = getattr(import_module(settings.ROOT_URLCONF),’handler403′)
by the followin one:
name = getattr(import_module(settings.ROOT_URLCONF),’handler403′)
hierarchy = name.split(‘.’)
callback = __import__(hierarchy[0])
for name in hierarchy[1:]:
callback = getattr(callback, name)
Django 1.4 has support for 403 handler similar to 404 and 500
see | http://mitchfournier.com/2010/07/12/show-a-custom-403-forbidden-error-page-in-django/ | CC-MAIN-2014-41 | refinedweb | 777 | 57.27 |
Tutorial
Using Server-Sent Events in Node.js to Build a Realtime goal of this article is to present a complete solution for both the back-end and front-end to handle realtime information flowing from server to client.
The server will be in charge of dispatching new updates to all connected clients and the web app will connect to the server, receive these updates and present them in a nice way.
About Server-Sent Events
When we think about realtime apps, probably one of the first choices would be WebSockets, but we have other choices. If our project doesn’t need a complex real time feature but only receives something like stock prices or text information about something in progress, we can try another approach using Server-Sent Events (SSE).
Server-Sent Events is a technology based on HTTP so it’s very simple to implement on the server-side. On the client-side, it provides an API called
EventSource (part of the HTML5 standard) that allows us to connect to the server and receive updates from it. Before making the decision to use server-sent events, we must take into account two very important aspects:
- It only allows data reception from the server (unidirectional)
- Events are limited to UTF-8 (no binary data)
These points should not be perceived as limitations, SSE was designed as a simple, text-based and unidirectional transport.
Here’s the current support in browsers
Prerequisites
Getting started
We will start setting up the requirements for our server. We’ll call our back-end app
swamp-events:
$ mkdir swamp-events $ cd swamp-events $ npm init -y $ npm install --save express body-parser cors
Then we can proceed with the React front-end app:
$ npx create-react-app swamp-stats $ cd swamp-stats $ npm start
The Swamp project will help us keep realtime tracking of alligator nests
SSE Express Backend
We’ll start developing the backend of our application, it will have these features:
- Keeping track of open connections and broadcast changes when new nests are added
GET /eventsendpoint where we’ll register for updates
POST /nestendpoint for new nests
GET /statusendpoint to know how many clients we have connected
corsmiddleware to allow connections from the front-end app
Here’s the complete implementation, you will find some comments throughout, but below the snippet I also break down the important parts in detail.
// Require needed modules and initialize Express app const express = require('express'); const bodyParser = require('body-parser'); const cors = require('cors'); const app = express(); // Middleware for GET /events endpoint function eventsHandler(req, res, next) { // Mandatory headers and http status to keep connection open const headers = { 'Content-Type': 'text/event-stream', 'Connection': 'keep-alive', 'Cache-Control': 'no-cache' }; res.writeHead(200, headers); // After client opens connection send all nests as string const data = data: ${JSON.stringify(nests)}\n\n; res.write(data); // Generate an id based on timestamp and save res // object of client connection on clients list // Later we'll iterate it and send updates to each client const clientId = Date.now(); const newClient = { id: clientId, res }; clients.push(newClient); // When client closes connection we update the clients list // avoiding the disconnected one req.on('close', () => { console.log(${clientId} Connection closed); clients = clients.filter(c => c.id !== clientId); }); } // Iterate clients list and use write res object method to send new nest function sendEventsToAll(newNest) { clients.forEach(c => c.res.write(data: ${JSON.stringify(newNest)}\n\n)) } // Middleware for POST /nest endpoint async function addNest(req, res, next) { const newNest = req.body; nests.push(newNest); // Send recently added nest as POST result res.json(newNest) // Invoke iterate and send function return sendEventsToAll(newNest); } // Set cors and bodyParser middlewares app.use(cors()); app.use(bodyParser.json()); app.use(bodyParser.urlencoded({extended: false})); // Define endpoints app.post('/nest', addNest); app.get('/events', eventsHandler); app.get('/status', (req, res) => res.json({clients: clients.length})); const PORT = 3000; let clients = []; let nests = [];
The most interesting part is the
eventsHandler middleware, it receives the
req and
res objects that Express populates for us.
In order to establish a stream of events we must set a
200 HTTP status, in addition the
Content-Type and
Connection headers with
text/event-stream and
keep-alive values respectively are needed.
When I described SSE events, I noted that data is limited only to UTF-8, the
Content-Type enforces it.
The
Cache-Control header is optional, it will avoid client cache events. After the connection is set, we’re ready to send the first message to the client: the nests array.
Because this is a text-based transport we must stringify the array, also to fulfill the standard the message needs a specific format. We declare a field called
data and set to it the stringified array, the last detail we should note is the double trailing newline
\n\n, mandatory to indicate the end of an event.
We can continue with the rest of the function that’s not related with SSE. We use a timestamp as a client id and save the
res Express object on the
clients array.
At last, to keep the client’s list updated we register the
close event with a callback that removes the disconnected client.
The main goal of our server is to keep all clients connected, informed when new nests are added, so
addNest and
sendEvents are completely related functions. The
addNest middleware simply saves the nest, returns it to the client which made
POST request and invokes the
sendEvents function.
sendEvents iterates the
clients array and uses the
write method of each Express
res object to send the update.
Before the web app implementation, we can try our server using cURL to check that our server is working correctly.
My recommendation is using a Terminal with three open tabs:
# Server execution $ node server.js Swamp Events service listening on port 3000
# Open connection waiting updates $ curl -H Accept:text/event-stream data: []
# POST request to add new nest $ curl -X POST \ -H "Content-Type: application/json" \ -d '{"momma": "swamp_princess", "eggs": 40, "temperature": 31}'\ -s {"momma": "swamp_princess", "eggs": 40, "temperature": 31}
After the
POST request we should see an update like this on the second tab:
data: {"momma": "swamp_princess", "eggs": 40, "temperature": 31}
Now the
nests array is populated with one item, if we close the communication on second tab and open it again, we should receive a message with this item and not the original empty array:
$ curl -H Accept:text/event-stream data: [{"momma": "swamp_princess", "eggs": 40, "temperature": 31}]
Remember that we implemented the
GET /status endpoint. Use it before and after the
/events connection to check the connected clients.
The back-end is fully functional, and it’s now time to implement the
EventSource API on the front-end.
React Web App Front-End
In this second and last part of our project we’ll write a simple React app that uses the
EventSource API.
The web app will have the following set of features:
- Open and keep a connection to our previously developed server
- Render a table with the initial data
- Keep the table updated via SSE
For the sake of simplicity, the
App component will contain all the web app.
import React, { useState, useEffect } from 'react'; import './App.css'; function App() { const [ nests, setNests ] = useState([]); const [ listening, setListening ] = useState(false); useEffect( () => { if (!listening) { const events = new EventSource(''); events.onmessage = (event) => { const parsedData = JSON.parse(event.data); setNests((nests) => nests.concat(parsedData)); }; setListening(true); } }, [listening, nests]); return ( <table className="stats-table"> <thead> <tr> <th>Momma</th> <th>Eggs</th> <th>Temperature</th> </tr> </thead> <tbody> { nests.map((nest, i) => <tr key={i}> <td>{nest.momma}</td> <td>{nest.eggs}</td> <td>{nest.temperature} ℃</td> </tr> ) } </tbody> </table> ); }
body { color: #555; margin: 0 auto; max-width: 50em; font-size: 25px; line-height: 1.5; padding: 4em 1em; } .stats-table { width: 100%; text-align: center; border-collapse: collapse; } tbody tr:hover { background-color: #f5f5f5; }
The
useEffect function argument contains the important parts. There, we instance an
EventSource object with the endpoint of our server and after that we declare an
onmessage method where we parse the
data property of the event.
Unlike the
cURL event that was like this…
data: {"momma": "swamp_princess", "eggs": 40, "temperature": 31}
…We now we have the event as an object, we take the
data property and parse it giving as a result a valid JSON object.
Finally we push the new nest to our list of nests and the table gets re-rendered.
It’s time for a complete test, I suggest you restart the Node.js server. Refresh the web app and we should get an empty table.
Try adding a new nest:
$ curl -X POST \ -H "Content-Type: application/json" \ -d '{"momma": "lady.sharp.tooth", "eggs": 42, "temperature": 34}'\ -s {"momma":"lady.sharp.tooth","eggs":42,"temperature":34}
The
POST request added a new nest and all the connected clients should have received it, if you check the browser you will have a new row with this information.
Congratulations! You implemented a complete realtime solution with server-sent events.
Conclusion
As usual, the project has room for improvement. Server-sent events has a nice set of features that we didn’t cover and could use to improve our implementation. I would definitely take a look at the connection recovery mechanism that SSE provides out of the box. | https://www.digitalocean.com/community/tutorials/nodejs-server-sent-events-build-realtime-app | CC-MAIN-2020-34 | refinedweb | 1,563 | 50.16 |
Today we are sharing Visual Studio 2015 Update 3 RC. This release candidate primarily focuses on stability, performance, and bug fixes, but we also have some feature updates. I’ll share highlights in the rest of this post.
Tools for Apache Cordova. This update includes TACO Update 9 and TACO Update 10, which adds plugins for Intune, Azure engagement, security, and SQLite storage, as well as the ability to add plugins from the config designer either by npm package name or ID. It also includes support for Cordova 6.1.1.
Application Insights and HockeyApp. Developer Analytics Tools v7.0.1 adds has diagnostics tools events for ASP.NET 5 RC1 and ASP.NET Core RC2 projects. We also improved the search experience: search automatically refreshes if you change search criteria such as filters, date ranges, and selected events, and you can go to code from requests in search and also “find telemetry for this…” in the Search menu. For further details, check out the release notes in Microsoft Azure Documentation.
Debugging and Diagnostics. Update 3 RC includes Diagnostics Tool support for applications running on Windows OneCore devices, including HoloLens and Windows IoT. You will now get better performance and reliability in C++ Edit and Continue when FASTLINK is enabled. And we improved XAML UI Debugging so the new Track Focus feature in the Live Visual Tree will cause any selection changes in the Visual Tree to update the currently focused element.
Visual Studio IDE. This update addresses a lot of feedback regarding problems with subscriptions through an online identity or key used to unlock the IDE. You no longer need to login to my.visualstudio.com to activate your subscription. We have improved error handling for common licensing issues. We’ve begun securing all web links such as our terms of service and privacy statement over HTTPS as we already do for personally identifiable information. Additionally, we have made accessibility improvements in the Account Settings dialog for activating a subscription and entering a product key.
C#/VB/Roslyn. In this release you will see many performance improvements including when running code diagnostics on an entire solution. To learn about code diagnostic performance improvements read the How to: Enable and Disable Full Solution Analysis for Managed Code page on MSDN. Other bug fixes include –
- Performance improvements to the C# background code analysis engine that collects errors and warnings. These improvements have also significantly reduced overall memory consumption.
- Performance improvements to the C# GoTo Implementation and Find All References. You can try these by selecting an object, right-clicking on it and then selecting them from the menu.
- You can now enable an option to suggest usings for types in reference assemblies or NuGet packages. You can try this under Tools > Options > Text Editor > C# > Advanced, “Using Directives”.
- When you apply a “fix all” action to document/project/solution we now display a progress bar.
Other updates. This release includes enhancements to Tools for Universal Windows Apps. Architecture tools has updates to address a lot of your feedback around performance and reliability along with updates to features like Code Map, Layer Validation, and UML Diagrams. Visual C++ has updates to the C++ compiler, C++ libraries and C++ MDD in this release.
Check the Visual Studio 2015 Update 3 RC Release Notes and Visual Studio 2015 Update 3 RC Known Issues for all the details. To learn more about other related downloads, see the Downloads page. You can also access the bits and release notes right now on an Azure-hosted VM or download here. You should be able to install this on top of previous installations of Visual Studio 2015 Update 2. We’ve also released Team Foundation Server 2015 Update 3 RC today. You can see what’s new there in the TFS Update 3 RC Release Notes. And TFS Update 3 RC Known Issues.
As always, we welcome your feedback. For problems, let us know via the Report a Problem option in Visual Studio. For suggestions, let us know through UserVoice.
Just updated to version 2 last week. Still discovering the news.
Vitor – the one thing that should help is our release notes, they are kept up-to-date as one unified list of all the changes & new features in this particular VS 2015 Update 3 release and all other major VS releases as well.
Our hope is that by having this unified list customers can feel confident in catching up on a release without having to hunt down different blog posts. But all feedback is helpful so let us know!
Here is the link to our VS 2015 Update 3 release notes, now updated with RC information:
p.s. the above URL will stay the same when we ship the next release (another RC or RTM for example), so it should always be good to use to track down the info.
I think he wanted to say that you release new versions very often with a lot of new features so we even don’t have time to discover all of them, me too. So thank you very much of that, it is really nicely.
Hi John,
Are you going to update visual studio features timeline:
It looks like out of date.
+1 on this.
Hi Slawek/Matt, thanks for taking the time to provide feedback here. So regarding the Visual Studio Timeline we had on visualstudio.com, you’ll notice that we’ve taken it down, for now, since it was out of date. We’ve been investigating content on that site to ensure we have just the right information to help our customers and partners be successful. It would be super helpful if you could share what sort of information you’d find the most valuable and/or how it helps you be successful. This would certainly help us pull together the most effective content. Mind sharing?
When will you deliver the Xamarin Forms Previewer for VS ?
And consider mergin Xamarin.Forms with UWP to bring us one framework capable of building TRULLY Universal mobile apps ? By trully universal I mean: From a single codebase compile to Android, iOS and W10.
Merge UWP xaml + Xamarin.Forms Xaml.
totally agree !!!
I think I watched a channel9 video of James Montemagno demoing that features even asp.net is included
Agree. There should be some communication about the xamarin merger. What can we expect? What do you intend to do with xamarin?
Hi Eder,
Thanks so much for the feedback! Be sure to head over to Xamarin’s Uservoice (), and leave your product feedback there. 🙂
Does that mean that I longer get logged out of my VSO account all the time?
What?
@Stan. Can you share a bit more detail about the problem you’re seeing?
We were unable to automatically populate your Visual Studio Team Services accounts.
The following error was encountered: TF400813: Resource not available for anonymous access. Client authentication required.
I get that like 2-3 times a week.
Can someone please make it possible to create new window by window > new Window for Razor file. It’s not currently possible to do in Visual studio. I do some edit to Regedit to make it work for me. Sadly after few days it’s gone. Now I need to change it every few days.
Hi Anirudha,
This item is currently on our backlog and will be fixed in a future release.
Thanks,
Mike
We were unable to automatically populate your Visual Studio Team Services accounts.
The following error was encountered: TF400813: Resource not available for anonymous access. Client authentication required.
I get that like 2-3 times a week.
I have the identical problem. Is it caused by Python Tools?
please work on Xamarin forms and support WCF (duplex TCP) for cross platform Xamarin apps.
Hi Ali,
The Xamarin Uservoice portal (available at xamarin.uservoice.com) is a great place to provide suggestions. It appears that we already have a thread on WCF support:
Be sure to vote on product suggestions you find helpful, and leave comments to help our team know what’s important to you. 🙂
Thanks!
Installer keeps bringing up “Visual studio 2015 Update 3 RC has encountered a user-defined breakpoint”. The main window disappeared, but the msiexec continued, throwing up this popup for quite a while and then finished. VS works, about says Update 3 RC, seems to work, but not sure what got installed and what didn’t. If I rerun the installer it shows the feature Update 3 rc grayed out and selected, and I cannot continue.
Hi, thanks for looking into it. The file is here:
Hello Robert,
It appears that you experienced an internal exception during installation of VSU3 RC and VSU3 RC has not completed its installation. I recommend you repair the Professional Visual Studio installation from the Add Remove Program entry.
If possible, after the repair, you can rerun the collect tool and share with me the log for verification.
Thanks,
Chee Seong
229613
Hi,
it took forever (2-3 hours) but it completed. After reboot on first start visual studio completely froze, but after I killed it and restarted it seems to work ok.
Logs are here:
After repair it showed these errors:
Thanks for the additional information.
I followed up with the respective team about the hang. They recently fixed a similar issue. If you can repro, we would appreciate if you can gather a dump file. This would help us verify this.
With respect to the issue during repair, they are not expected to have functional impact.
Thanks,
Chee Seong
This update doesn’t compile C++ when target is set to Windows XP. Code can’t find standard headers.
Include path is missing C:\Program Files (x86)\Windows Kits\10\Include\10.0.10240.0\ucrt and library path is missing C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x86
We fixed this for the final release. I apologize for the trouble.
If this is blocking, a workaround is documented in the VS 2015 Update 3 RC Known Issues:
Regards,
Daniel Griffing
Visual C++ Libraries
Just so you know, the solution didn’t work for me. Luckily, I was planning to drop XP support so it isn’t a blocking issue for me.
Does this RC already include the new C++ code optimizer mentioned here?
Would be awesome!
Yes, the new optimizer is included and enabled by default.
Okay cool, thanks!
Where is VB interactive ? cmon, do it finally
Tristan,
It is still in our backlog. Unfortunately we do not have any ETA at this time.
–Manish
Sadly, its not the update i hoped for. VS 2015 is still very slow, uses a lot of memory and crashes. I’m really tired of working with it but i have no other choice.
Hi Hans,
Sorry to hear you are experiencing issues with the memory usage and stability. We’re tracking these issues that some people are hitting and would appreciate more information from you so we can address them. Can you please submit a feedback item through the “report a problem” menu in the upper right hand corner and include my alias, strodri, in the description? This will let us get some information on how the IDE is performing and follow up to get more information if needed.
Thanks for your feedback and helping us make Visual Studio better.
-Stephen
Looking forward to trying this. I hope it drastically improves the performance. VS2015 is SOOOO SLOOWWWWWWWWWW….
Hello Morrolan. Sorry to hear about the performance issues you are experiencing. To help us better understand what the problem may be, would you mind submitting a feedback item through Visual Studio? You can do this by clicking on the “report a problem” menu in the upper right hand corner of the IDE. Please also add “@Brent” in the title. This will allow us to collect some information from the IDE to help diagnose the issue. Any additional descriptive information regarding specific features or steps to reproduce the problem would be helpful as well.
Thanks,
-Brent
Oh no! More new features! After the trainwreck that was the Update 2 release I was hoping you guys would focus solely on stability and bug fixes. VS 2016 is probably just months from release- why not just save all the new features for it?
After an hour and the progress bar moving only 10%. I gave up! Why do installs take soooooo long!!!. I hate msi installers.
Only wanted the update to see if a seriously annoying multi-monitor IDE bug has been fixed. I’m constantly having to move undocked VS tool windows as they don’t save there window locations between coding and debuging sessions.
Is anyone else finding out that the hash doesn’t match? I’ve downloaded it from both Firefox (twice) and Internet Explorer, and the SHA-1 of the downloaded files doesn’t match.
I’m downloading the “Visual Studio 2015 Update 3 RC Bundle” ISO from here:
This files generates a SHA-1 of 8F4572028BAEDC92398351B0E3BA910D9E16D2B7. This doesn’t match any of the entries on the following page (which is linked to from the page above):
Thank you Rob for your report. We have confirmed that you have the correct SHA1 value and are working to update the published value. We have also confirmed the other published SHA1 values on are correct as well.
Thanks again,
-Larry
Just get us Xamarin updates please.
Hi Jonney,
If you are looking for news about Xamarin, the best place to follow along is at blog.xamarin.com, or on Twitter @xamarinhq. For specific information about individual releases, our release blog over at releases.xamarin.com provides detailed information on every version of Xamarin we ship. 🙂
Thanks for the feedback!
Any one else taking for ever to install this thing. It’s been updating from Update 2 to Update 3 for about 3 hours on my machine, Seems very slow on the “Visual Studio 2015 Software Development Kit Update 2” step. Been there for over an hour – looks like it’s 90%+ done. Please improve install times, and don’t release until they are reasonable – or provide warning that this is the case. I’ve killed 1/2 a day on this already.
Here you go:
Thanks for the log files.
I have completed the initial investigation and create an issue against the appropriate team. I will follow up once the issue is resolved.
Chee Seong
229607
I’ve got these DTAR* directories popping up all over the place because that fix wasn’t deemed important enough to make into Update 2 (if I understand the history on that correctly). Is that fixed now?
Hello John,
There has been a confusion about Layer diagram issues. In the release notes it states:
•It was not possible to validate a layer diagram when its parent modeling project was referencing PCL libraries (for instance ODP.Net). This works now.
This is actually two different issues. Issue 1: Layer diagram is broken for PCL libraries. Issue 2: Layer diagram is broken for ODP.NET since it has obfuscated data in it. These two issues are independent. I know because I was the one who reported it. Issue 1 was resolved in Update 2. I will check Issue 2 soon.
Hi Onurg,
Could you clarify if Layer diagram is fixed or still broken for you in Update 3 RC (for PCL libraries and ODP.NET)?
It’s fixed.
Can I install only the RC3 updates, or I have to reinstall all visual studio?
Hi Carlos,
You can install RC3 on top of your existing installation of Visual Studio 2015.
Radhika Tadinada [MSFT]
Couple of things:
I diffed a file – ugh, no Kdiff3! The update appears to have broken my preference for a diff editor.
And….VS 2015 loads slower now, I am certain of it.
OK, got to the bottom of the problem with the diff tool configuration. I had let Visual Studio create the git repository for a new project, something I had never done before (so, just coincidence I did this right after the update). When VS creates it, it creates an explicit entry in the config file in the repo to use the VS diff tool. This doesn’t seem right to me. The checkbox to create a git repo when creating a new project seems like a minor convenience, to just save me the trouble of going to the command line and typing “git init”. Obviously it’s doing more than that, and that should be made clear. And why would I want it to override my universal Git settings on anything?
Also, these updates uninstall Git and then reinstall it. Why? I had 2.8.3 installed, and after the update, I had 2.8.1.
May we get ISO file for this update. Online installation takes long time. Thanks!
You can find the iso file for the update at
There seems to be no 64-bit Debug DLLs I’m getting missing VCRUNTIME140D.DLL, and MSVCP140D.dll
Also had trouble installing Xamarin from within the VS2015 Community installer, it kept complaining about being unable to download it despite the fact it saturated my 3.6MBits broadband ability each time. It was only by uninstalling the previous Xamarin that I could get it to install, although it still said it didn’t have enough rights to install the Bonjour service
Hi Peter,
Regarding this: “There seems to be no 64-bit Debug DLLs I’m getting missing VCRUNTIME140D.DLL, and MSVCP140D.dll”
Could you verify that the “Common Tools for Visual C++ 2015” optional feature was selected during installation?
In “Programs and Features” (a.k.a. Add/Remove Programs):
1. Select Microsoft Visual Studio Community 2015 (with Updates)
2. Click “Change”
3. In the Visual Studio setup dialog, select “Modify”
4. Check for “Programming Languages->Visual C++->Common Tools for Visual C++ 2015”
5.a. If that is not selected, please select that checkbox and then click “UPDATE”
5.b. If that is selected, it’s likely that there were other setup failures that prevented these files from being installed on disk.
In the later case (if the download issues doe not explain the trouble), we’d be interested in seeing the setup logs from the machine. To collect those:
1) Run the vscollect tool from. This tool collects various log files related to VS Setup and places the output at %TEMP%\vslogs.zip
2) Send me an email (see below), so I can setup an upload location for you.
Thanks,
Daniel Griffing
Visual C++ Libraries
daniel dot griffing at microsoft dot com
We have investigated to migrate our solutions from VS 2013 to VS 2015 with every release (RTM, Upd 1, Upd 2). Unfortunately, far too much is broken. I lost the hope that VS 2015 is usable for us. Please invest much more into compatibility and reliability in VS v15. I’m not interested in any new features until this is fixed:
* Performance and reliability is not good enough for large c# solutions. VS 2013 does a great job!
* Unit test framework seems to forget the test settings file (sometimes – not reproducible)
* Test Agents are not supported anymore since VS 2015 Update 1 anymore. We have build a large testing infrastructure on top of your Test Agents!
* Cannot run unit tests on remote machines anymore (Test Agents).
* Layerdiagram cannot be linked with namespaces (sometimes – not reproducible)
We also have trouble with Visual Studio 2015. Why is there a Update3 if anything is not working right?
Please refer:
Microsoft seems to make only new, but never becomes somewhat stable. We do not develop “Hello World” projects. We need many years a proper development environment.
And there was still the update price for an inferior product.
Finds Microsoft again a reference to the developers?
There is a problem with the language. My Visual Studio was in English before updating, now it is in Spanish, I can’t switch it to english even downloading language the pack, it doesn’t list english as an option and “same as windows” doesn’t work. My windows is in English too but the region is set to Spain. Community edition. Windows 10 pro x64.
Regards.
Hi John,
The Remote Tools for Visual Studio 2015 still offer downloading the Update 2 only.
When is the Update 3 for VS Remote tools is available?
Current Remote Debugger Update 2 does not seem to work: when I try to debug run my UWP App on the remote – I can’t: it says “the project needs to be deployed before it can be started”. I have tried pre-deploying and this does not work either. The remote debugger (Update 2) must be out of sync with the VS2015 Update 3…
Regards,
serikT
update: I cannot debug the UWP on the remote Windows 10.
I can: (1) deploy, (2) start the UWP, (3) attach debugger to the remotely running process (UWP) 🙂
So I hope the VS2015 Remote Tools Update 3 will help!
Hi, I manage the install of these updates to a large team of Dev’s. What is the recommended way to roll these updates out to these guys?
Nothing happens when right-clicking to a Universal App project and select “Store”, “Associate”. Nothing.
VS2015 is extremely slow. Too much overhead, chews up too much bandwidth, with its hunger for telemetry. I would like a stable, private development environment tool without the overhead and the 100% ability to turn the overhead off. Please.
I wonder what are you talking about? On a decent computer it’s fast as hell, fastest of all Visual Studios so far, even with multiple extensions installed. Telemetry is saturating your bandwidth? No kidding. Are you on 16kbit/s?
Universal App Template is broken. When I try to create a new, blank UWP app, I get the following error:
—————————
Microsoft Visual Studio
—————————
The vstemplate file references the wizard class ‘Microsoft.VisualStudio.WinRT.TemplateWizards.ApplicationInsights.Wizard’, which does not exist in the assembly ‘Microsoft.VisualStudio.WinRT.TemplateWizards, Version=14.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a’.
—————————
OK
—————————
Hi Carsten,
I would like to apologize for this issue you are running into. We would love to investigate it – could you please get in touch with me offline at unnir @ Microsoft dot com? Basically, we will need to investigate the logs from Visual Studio setup – to do this, you will need to run on your machine and send us the results of that.
Thanks,
Unni
Program Manager, Visual Studio
Send us to where? Email, website?
Got the same error.
VS2015 Update 2 was installed on my PC and it worked well until yesterday I was not able to build my existing uwp projects.
Then I updated to Update 3. After installation completed, the existing UWP projects could be built and run without problem, but I was not able to create new UWP projects.
the collection log was uploaded to:
I tried this procedure and it works.
1. Open “Programs and Features” on the “Control Panel”.
2. Find “Microsoft Visual Studio 2015 with Updates” and uninstall it.
3. Once uninstalled, restart your computer.
4. Start installing Visual Studio by going to Visual Studio Downloads on the MSDN website, and then selecting the edition you want to download.
5. Run the setup.
6. Choose the “Custom” installation.
7. Once the list of features appear, mark the “Universal Windows App Development Tools” component and start the installation.
Hi, i have the same error, I follow your instructions, it still doesn’t work !
Me too!!!
I uninstalled, restarted, reinstalled… same problem!!!
Hi, my install did not complete and my Surface shutdown before the rollback completed. Now I can’t run the install. How do I back out the failed install of vs2015.3 rc and reapply it?
Sorry but this Visual Studio is the worst of all …
My Team is Losing alot of time with this VS…
-CPU 25% permanently.
-RAM Over 1GB
-So Slowwwwww
RAM consumption over 1GB? C’mon, it’s 2016, maybe it’s time to buy better computers? We’ve got no problems with i5 or i7, 8GB or more RAM and SSD.
– Offtopic –
BTW: Why is VS 2015 (C#) when using “Find All References” for an object/variable not showing it’s declaration anymore?
E.g. the quite important line “bool b = true;” is not listed in the search results…. 🙁
I updated to Visual Studio 2015, Update 3 – and since then visual studio has a really high memory consumption. Out of 8 GB, if I debug, the usage goes to 7.9 instantly, and I get the low memory error, stating that I should close VS 2015.
Is this a known issue, and if yes, when will it be fixed ? Alternatively, can I rollback to Update 1, somehow ?
I’m having the following problem:
Assinatura do problema:
Nome do Evento de Problema: CLR20r3
Assinatura do Problema 01: devenv.exe
Assinatura do Problema 02: 14.0.25402.0
Assinatura do Problema 03: 574fbb1e
Assinatura do Problema 04: Microsoft.TeamFoundation.Git.SharedServices
Assinatura do Problema 05: 14.98.25331.0
Assinatura do Problema 06: 574e119d
Assinatura do Problema 07: 66
Assinatura do Problema 08: 76
Assinatura do Problema 09: System.ComponentModel.Win32
Versão do sistema operacional: 6.1.7601.2.1.0.256.48
Leia nossa declaração de privacidade online:
Se a declaração de privacidade online não estiver disponível, leia nossa declaração de privacidade offline:
C:\Windows\system32\pt-BR\erofflps.txt
I hope the team can fix the VS installer, because I have never be able to installed it without any problem in a Windows 10 computer an on a virtual machine in Parallels. And those cryptic errors with this codes very difficult to find information on the web. For instance, I just can´t install the android emulators.
Hi.
I was going to update from VS2015 Update 2. Unfortunately the installer stopped with an message “Visual Studio 2015 Update 2 (i) This component was already found on your computer. No action will be taken”
Regards
Grzegorz
Same here. Problem is already described in Known Issues for Update 3, but:
– did not use /layout command
– did not use unattended install
A possible solution to use a full product installer results in the same warning.
Strangely this occurred on a fairly fresh pure Win10 installation, whereas my desktop with an updated Win7 to Win10 updated without any issues.
Why is this update taking so long to install? I have been staring at the Applying Visual Studio Update 3 screen for the past 3 hours and its looks to be not even 20% done yet. If I knew it was going to take this long I would never have tried doing the update while at work.
This is really frustrating.
I really think about NOT updating to ANY version in the future if the update process goes like this.
Now I have to repair the Visual Studio Installation because it removed the actual program from my system. NO IDE!!!!!!!
This is by far the worst experience I have ever had with a Microsoft product.
I just want to say thank you and well done on making the overall update process with regards to VS and frameworks (ASP.NET, etc) more seamless while taking care of removing previous versions. This has helped reduce a lot of confusion on my part — in the past I was always scared to remove old installations not knowing what would be affected as a result. One question I do have though: the most recent updates installed .NET Core 1.0.0 – SDK Preview 2, however I still retain a .NET Core 1.0.0 RC2 – SDK Preview. Can the latter be removed safely? | https://blogs.msdn.microsoft.com/visualstudio/2016/06/07/visual-studio-2015-update-3-rc/ | CC-MAIN-2017-30 | refinedweb | 4,625 | 66.23 |
The.
<canvas>
Before we go directly over to the JavaScript, we got to have a look at the HTML of the website. It is quite short and simple:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<title>SpaceShoot SINGLEPLAYER</title>
<link rel="stylesheet" href="style.css" />
</head>
<body>
<div id="controls">
</div>
<canvas id="canvas" width=800 height=600>
Your browser does not support the HTML5 canvas element.
Please consider upgrading to a modern browser like
<a href="">Opera</a>.
</canvas>
<script src="base.js"></script>
<script src="objects.js"></script>
<script src="canvas.js"></script>
<script src="logic.js"></script>
<script src="sockets.js"></script>
<script src="events.js"></script>
</body>
</html>
It could be written a bit shorter. However, I do like the more strict XHTML style. Before I'll explain the JavaScript sources, I want to give you an insight about the CSS used to style the page. It is not much, however, we are having some possibilities here:
html, body
{
margin: 0;
padding: 0;
width: 100%;
height: 100%;
overflow: hidden;
}
canvas
{
width: 100%;
height: 100%;
background: url('bg.jpg');
background-size: 100% 100%;
}
As you can see, the first selector is kind of standard. In order to have the full page available, we tell the <html> as well as the <body> element to have neither a margin nor a padding set. Also we give those elements the full space that is available. In order to avoid any problems that can occur on some browsers, we set the overflow to be hidden, since we do not want to see any scrollbars or such. Now to the <canvas> element. I tried out two possibilities:
<html>
<body>
I also set the background and the background-size rule in order to have the game's background been drawn by CSS. I am not quite sure if this saves performance compared to drawing the background image onto the canvas in each redraw cycle, but I assume it. In the end, it is nice to know that one can combine direct drawing on the canvas with CSS rules.
background
background-size
Enough of CSS and HTML - let's talk about the JavaScript! We have 6 subfiles here. I should note that I split up the code into several files in order to keep readability - if you want to deploy this online, then you should think about combining the files (in this order) as well as minimizing them. The files serve the following purposes:
particle
asteroid
ship
draw()
prototype
logic()
onconnect
WebSocket
draw
Instead of going through the code line by line, I will just write about some implementation details that are good to know. Since our canvas needs to have focus for getting keyboard input, we'll need to give focus to it. This can be done in the following way:
c.canvas.addEventListener('keydown', keyboard.press, false);
c.canvas.addEventListener('keyup', keyboard.release, false);
c.canvas.addEventListener('mouseover', handlefocus, false);
c.canvas.setAttribute('tabindex', '0');
c.canvas.focus();
Looks like a lot of code for just giving focus? But wait! There is more.... Actually the first two lines will just the required callbacks for the necessary keyboard events. Both callbacks are methods of the keyboard object, which will call the manageKeyboard function. The difference is that one calls the manageKeyboard method with a status of true (key pressed), while the other one is calling with a status of false (key released).
keyboard
manageKeyboard
true
false
The next lines do the following: The set that hovering the mouse over the canvas (i.e. the browser viewport) will give focus to the canvas element. So the user can give focus to the adressbar, but will return the focus to the game if he returns the mouse cursor over the game. In order to state clearly that the canvas is the first (and in this case only) element that should get focus on the webpage, we give it the tabindex of 0 (lowest). Then we also call the focus() method.
tabindex
focus()
For drawing high quality games, you need to have high quality graphics. Those graphics need to be available at runtime so that they can be drawn on the screen. In HTML, this is an easy task to fulfill since we have the amazing <img> element. In JavaScript, we can generate those objects without attaching them to a node. This means that the browser will load the image, but not render, i.e. display or draw, the image. For this example, I did require some images to be ready at runtime - so I had to make sure that those resources were available. The following code illustrates how to make sure that those resources are loaded:
<img>
//Create basic objects
explosionImg = document.createElement('img');
asteroidImg = document.createElement('img');
implosionImg = document.createElement('img');
shiposionImg = document.createElement('img');
//Set the required src attribute to those objects
explosionImg.src = 'explosion.png';
asteroidImg.src = 'asteroid.png';
implosionImg.src = 'implosion.png';
shiposionImg.src = 'shiposion.png';
//Set the number of resources that do have to be loaded
var count = 4;
//Set all the onload callbacks to the same method
shiposionImg.onload = implosionImg.onload = explosionImg.onload =
asteroidImg.onload = function() {
--count;//Decrease the number of resources that still have to be loaded
if(count === 0) //If everything was loaded correctly
loop = setInterval(gameLoop, 40); //Start the game loop
};
If more resources are about to come, it would be easier to create another object that manages the resources by itself. A (simple yet powerful) approach would be to just use an array like the following:
//This will be initialized straight from the beginning
var temp = ['explosion.png', 'asteroid.png', /* more entries */];
//...
//This will be executed once necessary
for(var i in temp) {
var count = 0;
var t = document.createElement('img');
t.src = temp[i];
t.onload = function() {
count++;
if(count === temp.length) { /* Do Something */ }
};
temp[i] = t;
}
For this simple example, I thought my code might be more clear. So what am I prefetching here exactly? One image is for the asteroids, and three others are actually spritesheets. Spritesheets can be used to animate things since they contain a lot of (similar) images. In this case, every spritesheet has the dimension of 256 x 256 pixels, with 16 pictures in there. So each picture has 64 x 64 pixels. In order to draw the spritesheets, we need a kind of special object with a special draw method. I did create the following object:
var explosion = function(x, y, size, img) {
this.x = x; //Set object's center x
this.y = y; //Set object's center y
this.size = size; //Set object's size
this.frame = 15; //Initialize the framecount to 16
this.img = img || explosionImg; //Use the given image or a default one
};
The class's name is explosion. It's constructor accepts coordinates (of the center - all coordinates in the game are center coordinates) as well as an individual size and a reference to the (spritesheet) image we are going to draw. The last parameter is optional. If the img parameter is not set, it will have the value undefined (resulting in false) and set explosionImg as the image to use. Also there is a property named frame initialized with a value of 15. This sets the initial number of frames for the given spritesheet to 16. We will use this value in order to set the current index for the frame to display and the lifetime of the object. The object's logic is really simple and looks like this:
explosion
img
undefined
explosionImg
frame
explosion.prototype.logic = function() {
return this.frame--; //First give back current object state
//(0 = dead, else alive) then decrement
};
So here we just decrement the framecount. The return statement will tell the logic loop if the object is still considered alive. Here we see that a current value of 0 (decrementing to -1) will result in false. This means that 16 frames will be displayed, the 17th frame will not be displayed with the object being disposed instead. The drawing method has the following implementation:
return
explosion.prototype.draw = function() {
var pos = (15 - this.frame) * 64; //Set the position in a one-dimensional array
var s2 = this.size / 2; //This variable saves us one execution step
c.save(); //Save the current canvas configuration
//(coordinates, colors, ...)
c.translate(this.x, this.y); //Make coordinate transformation
c.drawImage(this.img, pos / 256, pos % 256, 64, 64, //Draw the image from the source
//to the destination
-s2, -s2, this.size, this.size);
c.restore(); //Restore the previous canvas configuration
};
We set a (one-dimensional) position and use this in order to determine the position in the two-dimensional spritesheet. Here we can also see why I decided to use only center coordinates: I am performing coordinate transformations to the center of the object. This does not help so much for these objects, but they will be very helpful for the spaceship and the asteroids. With those objects rotations are required - rotations which are performed against the center of the object. Coordinate transformations help a lot for performing such tasks. If one did never see the syntax for the drawImage method of the canvas 2D drawing context: It takes 9 parameters, stating which image to draw, with the source images x, y, width and height coordinates. The other parameters are the destination x, y, width and height coordinates. These coordinates are picked after the translation is applied. Here we change the size of the image to draw from 64 x 64 pixels to a different size x size statement.
drawImage
Now we look a bit more into the details of the SpaceShoot game. I did start out with a working singleplayer, just in order to have a working code before I start coding something more sophisticated. For instance, I did use the following variables:
SpaceShoot
var c = document.getElementById('canvas').getContext('2d'), //The Canvas Context to draw on
particles = [], //All particles (ammo) that should be drawn
asteroids = [], //All asteroids that should be drawn
explosions = [], //All explosions that should be drawn
ships = [], //All living ships (self + opponents)
deadVessels = [], //All dead ships (self + opponents)
/* ... */,
multiPlayer = false; //GameMode (SinglePlayer w/o Network, Multiplayer)
Additionally, I also used some constants - to make the game easier to maintain. In order to work with IE9, I had to change const with var for the final version. However, to be more semantic, I will still present it with the const keyword here:
const
var
const MAX_AMMO = 50, //Maximum amount of ammo that can be carried
INIT_COOLDOWN = 10, //Cooldown between two shots
MAX_LIFE = 100, //Maximum amount of life (of the ship)
MAX_PARTICLE_TIME = 200, //Maximum time that a particle (ammo) is alive
ASTEROID_LIFE = 10, //Initial life of an asteroid
MAX_SPEED = 6, //Maximum speed of the sheep
DAMAGE_FACTOR = 15, //Maximum damage per shot
ACCELERATE = 0.1; //Acceleration per round
All in all, we are ready to make our game work with implementing some logic and drawing methods as well as one big loop that forges everything together:
var gameLoop = function() {
if(gameRunning) {
logic(); //Perform our actions!
items(); //Generate new objects like Asteroids
draw(); //Draw all current objects!
++ticks; //Increase game time
}
};
In order to give a good insight into what is actually happening in the big loop, we can have a look at the logic() method that is being called at the beginning. Here, we have the following code:
var logic = function() {
var i;
for(i = explosions.length; i--; )
if(!explosions[i].logic())
explosions.splice(i, 1);
for(i = ships.length; i--; )
ships[i].logic();
for(i = particles.length; i--; )
if(!particles[i].logic())
particles.splice(i, 1);
for(i = asteroids.length; i--; )
if(!asteroids[i].logic())
asteroids.splice(i, 1);
for(i = ships.length; i--; )
if(ships[i].life <= 0) {
explosions.push(new explosion(ships[i].x, ships[i].y, 24, shiposionImg));
deadVessels.push(ships[i]);
ships.splice(i, 1);
}
};
With exception of the ship, everyone does execute its logic followed by returning if the object should be disposed. The ship is an exception in this due to some benefits from this approach. Here I am giving the ship the opportunity to make a move before the asteroid and other important objects get a chance (since they could also harm the ship). So I split the logic of the ship from the test if the ship is now dead to give the player a bit of an edge. As for the for loops, I use a faster version than the more convenient for(i = 0; i < x.length; i++) {...}. I do not want to waste any performance within the loop!
for
for(i = 0; i < x.length; i++) {...}
What can be noted is that once the player's own ship dies (this one has id === 1), the player will not be able to control the game any more. Here a summary screen or something equivalent should be presented. I will have to write something nice looking before I built that in. However, in the current code, there is already some statistics being recorded like the total number of asteroids that have been hit as well as the total number of asteroids that have been destroyed. All those ship specific statistics are part of the ships' base object.
id === 1
In order to implement the server side, I used C# 4.0 in combination with an open-source solution that is called Fleck. Fleck can be integrated into any project using the NuGet package manager - just search for the term "Fleck". The advantages of using this library is that everything from Secure WebSockets to different versions of the WebSocket protocol have been implemented already. So we do not have to focus on tedious tasks like writing the basic server code and testing it - we can directly go into the section where we want to write specific server code that is directly applied to:
In order to prevent the syncing problem, every spaceship is sending its current life status and its current position to the server. This data is then distributed to the other clients. This again has some disadvantages:
Most of these disadvantages can be tackled with the following approach:.
This approach is more advanced and results in somehow better (also node.js would probably then be a superior choice since we could reuse objects we've written for our singleplayer logic). So in order to stay close to a simple solution, I stay with the version that has some drawbacks. BUT I promise to hand in a full working amazing multiplayer experience quite soon.
Here is the basic server construct:
class Program
{
// In order to keep things simple (sync and such)
// I'll start the game when GAMEPLAYERS players are in
const int GAMEPLAYERS = 2;
static void Main(string[] args)
{
var id = 1000; // I give every ship an ID -
// the own ship on every client has ID = 1
var ticks = 0; // I will also record game time
var allSockets = new Hashtable(); // Stores the connections with ship-IDs
var server = new WebSocketServer("ws://localhost:8081"); //Creates the server
var timer = new Timer(40); // This is quite close the interval in JavaScript
server.Start(socket =>
{
socket.OnOpen = () =>
{
//Code what happens when a client connects to the server
};
socket.OnClose = () =>
{
//Code what happens when a client disconnects from the server
};
socket.OnMessage = message =>
{
//Code when a client sends a status update to the server
};
});
//Server side logic
timer.Elapsed += new ElapsedEventHandler((sender, e) => //This callback is
//used to generate objects like Asteroids
{
//Here the server executes some logic and sends the result to all
//connected clients
});
// Here we close everything
var input = Console.ReadLine();
timer.Close();
timer.Dispose();
server.Dispose();
}
}
This code does not look too complicated. Actually, I found it very easy and really powerful. After all, everything you have here is the basis for an awesome multiplayer game. Since I forgot to mention another disadvantage of this approach, I have to do it now. As you can see, I set a fixed number of players per game. In order to keep every client in sync with this very simple approach, I resist on starting the multiplayer game right away (when somebody connects). Instead the server waits until the fixed number of player has been reached and informs everyone that the game has now started.
To not let everyone start in the middle of the game area, I generate a random pair of numbers that correspond to x and y coordinates of the own vessel. In a more advanced server where a player could join any minute (thus corresponding to a server which also does compute some logic), it would be necessary to give the joining player a huge set of data which contains all required information about any object that is including in the gameworld.
The data is transmitted as strings (which is one of the problems of the WebSocket technology). In order to transmit meaningful data, I picked the JSON format. This one provides me with less overhead than XML and has some really nicely implemented parsers in C#. In order to stay simple, I used Microsoft's own HttpFormater extension - it comes with objects that are enclosed in the namespace System.Json.
HttpFormater
System.Json
The following code snippet shows what happens when a client sends some keyboard data to the server:
socket.OnMessage = message =>
{
var json = new JsonObject();
json["logic"] = JsonObject.Parse(message);
json["logic"]["ship"]["id"] = (int)allSockets[socket];
var back = json.ToString();
foreach (IWebSocketConnection s in allSockets.Keys)
if(s != socket)
s.Send(back);
};
Here I do use the JSON library. I do create a new JsonObject and place the contents of the incoming JSON object, which was placed in a string that can be accessed by the message variable, in a property called logic. Since the client does not know which ID its own ship has (own ships have the ID 1) we have to alter the ship's ID before transmitting it to the other clients. This is done by json["logic"]["ship"]["id"] = (int)allSockets[socket];. Here we access logic.ship.id and change the ID by the one that has been stored in the Hashtable. Afterwards, we create one string out of the JSON object and transmit it to every client except the one who sent the message.
JsonObject
string
logic
json["logic"]["ship"]["id"] = (int)allSockets[socket];
logic.ship.id
Hashtable
I should note that collecting keyboard input and sending it once has also a scaling advantage. Here we will usually receive n messages per round (by n clients) and will have to send each one to n-1 clients. This will result to n2-n messages per round that have to be send. Since we receive n messages, the total amount of messages will be n2. This is simply too much if we want to scale this to 100 or 1000 clients (even though the gamefield is too small right now to support more than 20 players). If we consider collecting all keyboard messages and re-sending them, then we have just n outgoing messages. This results in 2n total messages or just 2/n of the original load. In the case of two, we see that this does not have any beneficial effects for our program - so another reason to stick with the simple example.
I did already write a lot about the basic handling of multiplayer events and such. Now I want to go a little bit more into the details. Let's first consider the Websocket setup. I've written the following method:
Websocket
var socketSetup = function() {
if(typeof(WebSocket) !== 'undefined') { //If the type of WebSocket is
//known then they are active
socket = new WebSocket('ws://localhost:8081');//So let us create an
//instance of them!
socket.onopen = function() { //What happens if we can now really connect?!
multiPlayer = true; //We are obviously in multiplayer mode
document.title = 'SpaceShoot MULTIPLAYER'; //And I do want to see this -
//so I'll make a title change
};
socket.onclose = function() {
//Right now nothing will happen on close - maybe switch to single player
//again or something like this...
};
socket.onmessage = function(e) { // A message from the server!
var obj = JSON.parse(e.data); //Parse it - so that we can handle the data
if(obj.asteroid) { //An asteroid has been added - add it to the game
var a = obj.asteroid;
asteroids.push(new asteroid(a.size, a.life, a.angle,
a.rotation, a.x, a.y, a.vx, a.vy));
} else if(obj.added) { //Some ship has connected - add it to the game
for(var i = obj.added.length; i--; )
ships.push(new ship(obj.added[i], { }, 1));
} else if(obj.removed) { //Some ship has disconnected -
//remove it from the game
for(var i = ships.length; i--; )
if(ships[i].id === obj.removed) {
ships.slice(i, 1);
break;
}
} else if(obj.logic) { //Another logic iteration from the server -
//integrate it
for(var i = ships.length; i--; )
if(ships[i].id === obj.logic.ship.id) {
//Update the ship's properties
break;
}
}
else if(obj.status) { //Status update from the server
ships[0].x = obj.status.x; //Sets starting x of own vessel
ships[0].y = obj.status.y; //Sets starting y of own vessel
gameRunning = true; //There is just one status update right now -
//that starts the game
}
};
} else
gameRunning = true; //Single player games will always be started instantly
};
That looks like a lot of code, but there is not so much behind the scenes. First of all, it is checked if the type of the WebSocket object is known. If the type is not known, then there is only one possibility: The browser does not support the WebSocket object! This can either be the case for old browsers or for browsers which have decided to wait until the specification is fully standardized. On such systems, we will directly go into singleplayer mode.
Otherwise we create the socket (i.e., the connection) and tell the browser what to do when we have a running connection. In this case, the onopen event is really important. It is worth noting that WebSocket connection will remain open until they are closed by the browser. This is the case if you change the website, close the browser or tell the socket object explicitly to close. From the other two events, the onclose one is not so interesting. This one will be called if the connection is closed - independent of the origin (could be closed from the server as well). The onmessage event is more important. Here, we will do everything in order to distinguish between the different kinds of messages that the server can send us. Usually one determines a special kind of property like "type" that contains a special meaningful string that helps differentiating between the types of messages. The selection is done by a big switch-case block.
onopen
socket
onclose
onmessage
type
switch-case
Here I did something different (again). Since I just have a limited amount of data, I select by using that undefined properties have always a null value which casts to false. That distinguishes between the addition of a ship, the exit of a competitor, the addition of an asteroid and logic by other ships. Also the status that was mentioned before is included here.
null:
webserver
new WebSocketServer("ws://localhost:8081");
new WebSocket('ws://localhost:8081'); and give you a set of instructions that should work for sure!
WebWorkers
WebSockets
About performance: If the game does not run smooth on your computer, you should consider resizing the browser window or taking a different CSS setting. In the CSS file provided, I included another setting. You just have to change the first line with the second line:
<canvas id="canvas" class="stretched" width=800 height=600><!--Is in use right now -->
<canvas id="canvas" class="centered" width=800 height=600>
<!--Will give you some performance boost-->
The full singleplayer demo can be viewed live at.
I think the code contains a number of things that are quite important or good to know. I also think that the project still leaves room for innovation and improvement. As I stated several times: Security is an issue. Right now, anyone with decent JavaScript skills can break this code and cheat without limits. This is certainly one of the more advanced topics - how to secure those applications.
I really wanted to do something with WebSockets. This was certainly one of the incentives to write this article. The other one was to write a non-trivial game that is still simple just by using the canvas technology. I was really curious about the performance of canvas on several devices (My PC, MacBook Air, WindowsPhone 7 and iPod Touch (3G)). The canvas is still in the early stages - I am impressed by the performance on the big machines but disappointed with the performance on the small ones.
WebSockets
Thanks to Vangos Pterneas for his great article about Kinect & HTML5 using WebSockets and Canvas. That article did contain the information about the Fleck project, which I used for this project. The article can be found here.
Also I do have to say thanks to Jason Staten for creating Fleck. The project is hosted on GitHub and can be found here. I prefer NuGet to include the additional functionality to my projects.</a />
Since the game does not contain final statistics, audio and gauges besides the two status bars, there is a lot of space for improvement. Currently I plan on writing more articles on writing code for providing informative gauges and implementing audio in a nice and unobtrusive way. I will keep this article updated with major improvements on the game and links to other articles that will be related in some. | https://www.codeproject.com/Articles/314965/SpaceShoot-A-multiplayer-game-in-HTML5 | CC-MAIN-2018-51 | refinedweb | 4,276 | 64.3 |
The QTreeView class provides a default model/view implementation of a tree view. More...
#include <QTreeView>
Inherits QAbstractItemView.
Inherited by QHelpContentWidget and QTreeWidget.
The QTreeView class provides a default model/view implementation of a tree view.DirModel(obsolete).
QTreeView supports a set of key bindings that enable the user to navigate in the view and interact with the contents of items:.:
This property holds, this property has a value of 20.
Access functions:
This property holds whether the items are expandable by the user.
This property holds whether the user can expand and collapse items interactively.
By default, this property is true.
Access functions:.
By default, this property is false.
Access functions::
Constructs a tree view with a parent to represent a model's data. Use setModel() to set the model.
See also QAbstractItemModel.
Destroys the tree view.
Collapses the model item specified by the index.(). and header()...
Since 4.7, the returned region only contains rectangles intersecting (or included in) the viewport. | http://idlebox.net/2010/apidocs/qt-everywhere-opensource-4.7.0.zip/qtreeview.html | CC-MAIN-2013-20 | refinedweb | 163 | 53.07 |
I have a networkx graph object, which is weighted and undirected.
I'm trying to predict 10 new links for every nodes with Adamic Adar Index. The function adamic_adar_index in Networkx returns a generator of tuples, which are formatted as (nodeid1, nodeid2, adamic_adar_index). I'm not familiar generators in Python. What I want to do is group the generator by nodeid1 and return the largest 10 index for nodeid1.
Here is my code, where "coauthor" is the network object and "preds" is the generator. The data file is here
import csv
import networkx as nx
g = nx.read_weighted_edgelist("coauthor.csv", delimiter='\t', encoding='utf-8')
coauthor = nx.read_weighted_edgelist("coauthor.csv", delimiter='\t', encoding='utf-8')
preds = nx.adamic_adar_index(coauthor)
Take a look at heapq.nlargest It takes an iterable and returns the n largest things in that iterable. Since I don't have your coauthor list, I'll use the karate graph. Instead of looking at all non-edges right away (as adamic_adar_index does by default), I'm going to go through each node u in G and do this for all non neighbors of u
import networkx as nx import heapq def nonedges(G,u): #a generator with (u,v) for every non neighbor v for v in nx.non_neighbors(G, u): yield (u, v) G = nx.karate_club_graph() for u in G.nodes_iter():# you may want to check that there will be at least 10 choices. preds = nx.adamic_adar_index(G,nonedges(G,u)) tenlargest = heapq.nlargest(10, preds, key = lambda x: x[2]) print tenlargest
Warning: if you aren't careful here there's a bug in the algorithm you described: for node 1 you might find that some of the tuples are being returned as (1, 2, 3.2), (1, 3, 0.3), (4, 1, 100). The way you've described your grouping, you'll miss the (4,1) pair. My example checks each pair twice to avoid this. It may be possible to eliminate this duplication of computer effort with some effort on your part.
Generators and iterators are closely related. More info on iterators is at (you can also find generators on that page). You can think of it as being a list, but there are rules about how you're allowed to access it. Each time you look at it, you get the next element. Once you look at the element, it's removed from the iterator. You can only get one thing at a time from the iterator. In the computer memory, it doesn't have to hold the entire thing (it generates the next element when it's asked for it). So for example, you can see I used an iterator rather than G.nodes() in my loop. This means that the computer never had to hold all the nodes in G in its memory.
for u in G.nodes_iter():
versus
for u in G.nodes() | https://codedump.io/share/a4DMvQqQq7AA/1/python-networkx-link-prediction-with-adamicadarindex | CC-MAIN-2017-09 | refinedweb | 483 | 75.81 |
12 June 2009 17:32 [Source: ICIS news]
TORONTO (ICIS news)--Shell has begun the due diligence process for two refineries in northern ?xml:namespace>
Cornelia Wolber, spokeswoman for Shell Deutschland in
The due diligence process was expected to take several weeks, if not months, she said.
There were investors and firms interested in the refineries, Wolber said, but she declined to say how many and would not disclose any names.
In March, Shell said it planned to sell its refineries in
Shell’s refinery in Heide has a processing capacity of 4.5m tonnes/year, producing fuels and petrochemicals products. Shell also produces ethylene, propylene, benzene, toluene, xylenes at Heide, according to its website.
The refinery in Hamburg-Harburg has a processing refinery of 5.5m tonnes/year. It produces fuels, base oils and waxes. | http://www.icis.com/Articles/2009/06/12/9224746/shell-in-due-diligence-on-germany-refinery-disposals.html | CC-MAIN-2014-52 | refinedweb | 136 | 63.8 |
please help i am really new to c and am having trouble
i need to know how to input a number and print it with the numbers reversed like 123 to 321.
I have not had any luck.I have not had any luck.
please help i am really new to c and am having trouble
i need to know how to input a number and print it with the numbers reversed like 123 to 321.
I have not had any luck.I have not had any luck.
What have you tried?
-Govtcheez
govtcheez03@hotmail.com
>i need to know how to input a number and print it with the numbers reversed like 123 to 321.
Play around with the modulus operator and division and a loop. Or place the number in a string and print it out in reverse with a loop.
-Prelude
My best code is written with the delete key.
Use somewhat stack mechanism.
Create a stack for holding integers.
Pass all digits in the number to the stack as they are encountered and then pop elements off the stack and print it.
Got it ?
If not then the best way is to use strings.
Accept input as a string say 'num'
then call strrev(num);
its simple.
I have made this small program that accompalishes that task.
But you can't input big numbers due to limitation of integer size.
Try changing all int to long for big numbers.
#include <stdio.h>
/* To find length of number eg : 123 length 3 */
int len(int num)
{
int l=1;
while(num=num/10)
l++;
return l;
}
/* --------------------------------------------*/
int power(int num)
{
int ans=1;
while(--num)
ans*=10;
return ans;
}
int main()
{
int n,l,mul,copy,ans=0,rem;
printf("Enter a number : ");
scanf("%d",&n);
copy=n;
l=len(n);
mul=power(l);
do
{
rem=copy%10;
copy=copy/10;
ans+=mul*rem;
mul/=10;
}while(copy);
printf("\nOriginal Number : %d\nReverce Number : %d\n",n,ans);
getch();
}
In the mean time if somebody else has better code then this (I am sure it would be there ) then post it here.
As my program uses to many variables. I will also try finding some different logic to make this happen in small amount of code.
When all else fails, read the instructions.
If you're posting code, use code tags: [code] /* insert code here */ [/code]
I know that code can be written in very short using arrays.
But I think (THINK) that the way suggested by prelude would be to use modulo and division operator in a loop and printing out the reverse of the number digit by digit as calculation goes on.
But what if we need reverse of a number in further calculations in our program.
The code I have written doesn't use arrays or strings. And the output can be used by ohter parts of program also.
(Focusing on modularity not on getting results)
I am not a master in C. That's why i mentioned if somebody has made small program then post the code not algorithm.
Here are two ways:
Code:#include <stdio.h> void revprint(int i) { while (i) { printf ("%d", i%10); i /= 10; } putchar ('\n'); } int revint(int i) { int newnum = 0; while (i) { newnum = (newnum*10) + (i%10); i /= 10; } return newnum; } int main(void) { printf ("By revprint: 123 becomes "); revprint(123); printf ("By revint: %d becomes %d\n", 123, revint(123)); return 0; } /* Output By revprint: 123 becomes 321 By revint: 123 becomes 321 * */
When all else fails, read the instructions.
If you're posting code, use code tags: [code] /* insert code here */ [/code]
Thanks for the code.
It requires practice to be at that level. Hope one day i will reach your level. | http://cboard.cprogramming.com/c-programming/36609-number-crunching.html | CC-MAIN-2016-26 | refinedweb | 629 | 80.31 |
Hello everyone, Apologies in advance if I'm asking an incredibly simple question, but I'm using GHC 6.4 on Windows XP SP2. I was recently playing with implementing the Kocher attack on RSA in Haskell, and along the way I needed a reasonable way to time operations -- the built-in Windows functions don't do such a good job. I wrote a very simple function that uses the rdtsc instruction: --- begin: AccuTime.c #include <windows.h> __declspec(dllexport) ULONGLONG accuticks() { LARGE_INTEGER i; ULONG lo, hi; __asm { rdtsc mov hi, edx mov lo, eax } i.HighPart = hi; i.LowPart = lo; return i.QuadPart; } --- end --- begin: AccuTime.h #ifndef __accutime_h #define __acutime_h #include <windows.h> ULONGLONG accuticks(); #endif /* __accutime_h */ --- end and compiled it to AccuTime.dll. I then attempted to piece together the file I'd need to import this function into Haskell: --- begin: AccuTime.hs module AccuTime where import Foreign.C.Types foreign import stdcall unsafe "accutime.h accuticks" c_accuticks :: IO CULLong accuticks :: IO Integer accuticks = do ull <- c_accuticks return (fromIntegral ull) --- end And then, after passing the correct -L and -l flags to ghci, attempted to load that module. Unfortunately, I got: ___ ___ _ / _ \ /\ /\/ __(_) / /_\// /_/ / / | | GHC Interactive, version 6.4, for Haskell 98. / /_\\/ __ / /___| | \____/\/ /_/\____/|_| Type :? for help. Loading package base-1.0 ... linking ... done. Loading object (dynamic) AccuTime ... done final link ... done Prelude> :l AccuTime.hs Compiling AccuTime ( AccuTime.hs, interpreted ) ghc.exe: panic! (the `impossible' happened, GHC version 6.4): ByteCodeFFI.mkMarshalCode_wrk(x86) L_ Please report it as a compiler bug to glasgow-haskell-bugs at haskell.org, or. > At this point, I'm quite confused. Does anyone else know what I'm doing to cause this panic? Thanks, /g | http://www.haskell.org/pipermail/glasgow-haskell-users/2005-June/008569.html | CC-MAIN-2014-15 | refinedweb | 294 | 62.14 |
Hallo,
ich versuche eine C# SQLite Connection zur Spiceworks DB aufzubauen es schlägt jedoch mit einem
File opened that is not a database file
file is encrypted or is not a database
fehl.
Das ganze ist eine Test Installation auf einem Windows 7 Client. Visual Studio und Spiceworks + SQLite Libary (http:/
using System.Data.SQLite; string datasource ="C:\\spiceworks_prod.db"; SQLiteConnection connection new SQLiteConnection(); connection.ConnectionString = "Data Source=" + datasource + ";Version=3"; connection.Open();
Hat jemand eine Idee was ich noch machen könnte ?
Gruß Hacke!
Feb 25, 2013 at 2:06 UTC
Okay folks i think i got it !!!
http:/
just take the correct connect dll and it works ! i will proof that but it looks good
11 Replies
Feb 24, 2013 at 7:19 UTC
Petes PC Repairs is an IT service provider.
http:/
this can connect to the db, but please be aware non of this is supported by spice works and definitely not recommended
Feb 24, 2013 at 7:28 UTC
My step-son is not home so you will have to read English:
Can you connect to the SpiceWorks databse and open the table from MySQL Workbench?
Feb 24, 2013 at 7:57 UTC
Here's how I connect to the Spiceworks DB without risking problems (actually connect to the backup DB)
http:/
Step #2 specifically.
Feb 24, 2013 at 11:13 UTC
Hey,
thanks for your reply. But all this solution not what i am searching for. I want to connect with C# to Spiceworks DB. I know that with SQLite Firefox Addons it works but not with my C# libary.
So why it works with a new Test SQLite DB. But not with Spiceworks DB ?
Or do you know any libary wich is working ?Edited Feb 24, 2013 at 11:34 UTC
Feb 25, 2013 at 1:33 UTC
1st Post
Hey Same Issue here,
i would be so nice to get direct access from C#
Best Regards
Patrick
Feb 25, 2013 at 1:59 UTC
Maybe this could be the problem ?
http:/
3.7.13 <--- SQLite Manager (Firefox Addon)
http:/
3.6.23.1 <--- SQLite C# Libariy
Could anybody "approve" that ? and maybe have a solution ?
Feb 25, 2013 at 2:06 UTC
Okay folks i think i got it !!!
http:/
just take the correct connect dll and it works ! i will proof that but it looks good
Feb 25, 2013 at 3:23 UTC
Don't forget to run your queries against a backup database, not the live one!!
http:/
Feb 25, 2013 at 4:01 UTC
Thanks but i will give it a try :>
So the guys it works very well just use the correct library
http:/
Feb 25, 2013 at 5:06 UTC
Ok Its done thanks Patrick3874
This discussion has been inactive for over a year.
You may get a better answer to your question by starting a new discussion. | https://community.spiceworks.com/topic/306516-c-connect-to-spiceworks-database | CC-MAIN-2017-39 | refinedweb | 480 | 80.41 |
Code. Collaborate. Organize.
No Limits. Try it Today.
...One article to rule them all, One article to find them,
One article to bring them all, and in the Windows bind them
In the Land of Unicode, where the bits do fly.
(With respect to J.R.R. Tolkien)
Long ago I had a dream: create a program which allows users fine-grained control of MIDI events easier than other editors. For me fate seemed to forever extend, but after being sidetracked by an even more interesting project (if you ever wondered about humanity’s changing conception of God three-thousand years ago, or how religion and science were intimately connected back then, check out my work – you won’t be disappointed) I was able to return to coding.
One of the twists and turns of the editor brought about a Windows wrapper, and I documented it earlier, poorly. I’m writing this to do a better job, as well as update the explanations to the newest version of the code, and make the writings into something I’m proud of, even though with .Net, WTL, and all the other options now available, demand may not be high.
My creation, named ‘DWinLib,’ incorporates several interesting features. People who are fond of mucking around with the guts of the Windows API, or just like seeing how things work, will enjoy some of the information. And if you are curious, the ‘D’ can stand for ‘David’s’, ‘Dandy,’ ‘Delightful,’ or any other wonderful ‘D’ word you want to interject. In my head it is pronounced ‘Duh Win Lib,’ in a bad Brooklyn-mobster-style stereotyped accent.
Many interesting items are tucked in DWinLib. Depending upon your experience, some of them will be more captivating than others.
Personally, two main reasons exist for sharing these articles. First, I believe frameworks of this nature are interesting. After all, the programming challenge kept me coding, even when the rewrite process seemed interminable. For those who share my fascination, I hope this overview is useful.
The second reason for sharing is no other Windows wrapper presentations I’m aware of explains how to wrap child controls easily, nor what is needed to incorporate callbacks for event handlers. Perhaps the knowledge will be beneficial to you.
Moving on to the tangible results, one advantage of DWinLib is resource editors aren’t required. Items derived from DwlControlWin automatically get an ID, which DWinLib processes. The only thing necessary is writing the handling code and telling the control to use it. You must still programatically lay out how things will appear, but if you don’t possess a resource editor, eliminating ID gruntwork greatly simplifies your job. (Later in my endeavors I learned about a free resource editor, enabling you to more easily create programs which don’t take advantage of WinControl processing if you want to go that way).
Another benefit of DWinLib is compiler independency. Visual Studio, DevC, Borland, and other standard C++ compilers are capable of building the code.
If you know C++ programming, a fun aspect of this library is you can easily see what is needed to wrap the Windows API. For those who are still unfamiliar with the language, you will get a better feel for C++ and Microsoft’s programming framework while working your way through the examples.
Another advantage is it is the only wrapper I know of which allows true multiple inheritance with DWinLib objects. I was told that should be bolded, so DWinLib objects can use multiple inheritance. Of course, you could not inherit from two classes with independent window procedures and have the program work correctly, as the basic premise is unsound.
In a semi-related topic, DWinLib allows creating windows on the stack rather than on the heap. A long time has transpired since I last did this, but I seem to remember doing so for a modal dialog box which died upon closing. If I have a chance I’ll create an example, and maybe other instances of applicability will come to mind.
At one time DWinLib allowed you to make flicker-free applications more easily than other frameworks. This is still true for those using the Windows classic theme. Ever since XP, a lot of window flicker when resizing cannot be eliminated because Microsoft never updated some of the common controls to stop the double painting happening on the WM_PAINT and WM_ERASEBKGND messages, and their theming mechanism precludes overriding this correctly, as far as I can tell.
WM_PAINT
WM_ERASEBKGND
WM_ERASEBKGN
An example of the flicker I’m discussing is visible on the scroll bar at the right of this browser window, if you are using a standard layout. Resize the window from the left edge. The scroll bar appears to jump erratically. In windows with dockers on the right, this jumping can be annoying. I haven’t been able to figure out how to overcome this behavior in Windows 7, but remember being particularly pleased when I eliminated the flicker in earlier versions of Windows because those programs ‘just felt right’ during the resize operation.
Other benefits exist to DWinLib, and we will look at many of them in these writings, but the most important one from my perspective is the framework makes Windows programming more fun.
There are several frameworks and products which may be used to wrap Windows. Here is a summary of the ones I remember seeing, or had pointed out to me.
(I believe all of the Windows specific frameworks listed require you to assign and process control IDs yourself if you want to handle child controls. Or you will need to develop your own solution to do this in an easier manner. DWinLib does not have that requirement. I may be wrong regarding WTL.)
I once tried to fit all of the information in the following articles into one document. There was too much stuff for my mind to grok, so I’m breaking everything into smaller blocks, linkification style (kinda like book chapters, or as close to them as I can get in HTML with the least amount of work). Hopefully this approach helps others comprehend everything more easily.
Let us ease into our task by setting up Visual Studio and compiling a very minimal windows wrapper. For programmers creating something extremely simple, that version may do the job, but if your program is so straightforward a pure API adaptation will almost be as easy. As you can guess by the description, the writing contains some of the steps needed for getting Visual Studio up and running, which even beginners creating non-DWinLib programs might benefit from.
As you will see upon reading, these articles were originally created for VSE 2005, which I no longer use. They have been updated to VSE 2008, with the older information left semi-intact. I don’t plan on upgrading to VS 2010 Express or 2012, as 2008 works for me, although the new C++0x/C++11 features will tempt silently, teasing my intellect to overrule my lack of time...
Once you’ve mastered a small program, the next item is the remaining things necessary for building a blank DWinLib program. The presentation would be incomplete without an overview of creating a small, but pretty useless program. Actually, it isn’t completely pointless. If you wonder how to move things around the screen with a mouse you can work out one answer by looking through the code.
Next, a discussion of DWinLib’s guts are in order. Curious to know the magic occurring when you press a button? Or maybe the mechanism which handles control ID processing behnd the scenes? Here’s the place to find out.
That covers the fundamentals of wrapping the Windows’ API, but two additional items are worth bringing up. A problem you will run into, even outside of DWinLib, is handling modal dialog boxes if you don’t have a resource editor. Without using CreateDialog, can the ‘Flashing Window’ effect be achieved when non-dialog-box areas of your program are clicked on? After some research I found the answer is ‘Yes,’ and this allows you to create dialog-like windows in the same manner as other windows inside of DWinLib.
And finally, the last thing to add is DWinLib is good for Unicode applications. To do so, set the ‘Character Set’ option to ‘Use Unicode Character Set’ in the project’s property pages in Visual Studio (or define ‘UNICODE’ and ‘_UNICODE’ somewhere in the headers). To make the task easier, use the ‘wString’ type where you would use std::strings or std::wstrings. This typedef is in ‘DwlTypedefs.h’, and looks like the following:
UNICODE
_UNICODE
_UNICOD
wString
std::strings
std::wstrings
#include <string>
typedef std::basic_string<TCHAR, std::char_traits<TCHAR>, std::allocator<TCHAR> > wString;
This expands to the same thing as std::string or std::wstring in VCE, depending upon what TCHAR becomes due to the definition or non-definition of UNICODE.
std::string
std::wstring
TCHAR
Other routines have been written to make dealing with ASCII and UTF16 issues easier. For examples, look at the code in the DwlAsciiAndUnicodeFileWrapper and DwlFileHistory units (which automate some of the difficulties you will face with those items if you take the tasks upon yourself from scratch). Also peruse a few of the functions in DwlUtilities, which allow you to perform some numerical conversions regardless of whether your program is compiled as UNICODE or non-UNICODE. One last goody in this regard: if you have a string which you know is needed as both UTF-16 and ASCII within your code, use the DwlContainedString wrapping class, defined in DwlUtilities. As noted in the header and before the class body, there are limitations to its use, but for those who don’t have to deal with locales and foreign language characters the class may be exactly what is needed.
DwlAsciiAndUnicodeFileWrapper
DwlAsciiAndUnicodeFileWrapper
DwlFileHistory
DwlUtilities
An additional item worth noting is the ‘PrecompiledHeaders.h’ file is designed to be the first one processed by the preprocessor. It takes care of the Windows includes for the project, as well as any other items you may desire. Once a unit of your project is stable, #include it in the ‘CompletedUnits.h’ file (which is ‘include’d by ‘PrecompiledHeaders.h’), and if you kept the precompiled headers working as in the sample programs, build times will be considerably reduced. (For another method of reducing compile times, which I have used in my own projects, check out my Two-thirds of a pimpl and a grin article. For those who don’t need full pimpls, but get tired of waiting on compiles, that pattern may ease some of your suffering.)
#include
In addition, the last time I played with Dev-C++, true Unicode applications were impossible due to a lack of the Windows ‘c0w32w.lib’ file. All of the strings can be Unicode in a program compiled with Dev-C++, but you will not be able to link to wWinMain, which a true Unicode application calls as the entry point. (Even though VSE doesn’t seem to have a ‘c0w32w.lib’ file, the correct Unicode entry point is called.)
wWinMain
And with that, the end of this DWinLib overview is reached. I hope this will be of use to you in the future, and wish you the best with your endeavors, no matter what framework you choose.
(PS - The MIDI editor based upon DWinLib is also completed. There are plans for many improvements, but for a first edition (even though it’s in its fourth iteration), I’m proud I succeeded in my main goals! I’m particularly pleased with the way all the data windows for each track are synchronized to each other, which I haven’t seen anywhere | http://www.codeproject.com/Articles/13887/DWinLib-An-Overview?fid=296282&df=90&mpp=10&sort=Position&spc=None&select=1460880&tid=1460880 | CC-MAIN-2014-15 | refinedweb | 1,964 | 60.35 |
From fuse
General
How can I umount a filesystem?
FUSE filesystems can be unmounted either with:
umount mountpoint
or
fusermount -u mountpoint
The later does not need root privileges if the filesystem was mounted by the user doing the unmounting.
What's the difference between FUSE and LUFS?
The main difference between them is that in LUFS the filesystem is a shared object (.so) which is loaded by lufsmount, and in FUSE the filesystem is a separate executable, which uses the fuse library. The actual API is very similar, and there's a translator, that can load LUFS modules and run them using the FUSE kernel module (see the lufis package on the FUSE page).
Another difference is that LUFS does some caching of directories and file attributes. FUSE does not do this, so it provides a 'thinner' interface.
By now LUFS development seems to have completely ceased.
Why is it called FUSE? There's a ZX Spectrum emulator called Fuse too.
At the time of christening it, the author of FUSE (the filesystem) hadn't heard of Fuse (the Speccy emulator). Which is ironic, since he knew Philip Kendall, the author of that other Fuse from earlier times. Btw. the author of FUSE (the filesystem) also created a Speccy emulator called Spectemu.
The name wanted to be a clever acronym for "Filesystem in USErspace", but it turned out to be an unfortunate choice. The author has since vowed never to name a project after a common term, not even anything found more than a handful of times on Google.
Is it possible to mount a fuse filesystem from fstab?
Yes, from version 2.4.0 this is possible. The filesystem must adhere to some rules about command line options to be able to work this way. Here's an example of mounting an sshfs filesystem:
sshfs#user@host:/ /mnt/host fuse defaults 0 0
The mounting is performed by the /sbin/mount.fuse helper script. In this example the FUSE-linked binary must be called sshfs and must reside somewhere in $PATH.
Why don't other users have access to the mounted filesystem?
FUSE imposes this restriction in order to protect other users' processes from wandering into a FUSE filesystem that does nasty things to them such as stalling their system calls forever. (See this comment.)
To lift this restriction for all users or for just root, mount the filesystem with the "-oallow_other" or "-oallow_root" mount option, respectively. Non-root users can only use these mount options if "user_allow_other" is specified in /etc/fuse.conf.
It has been argued that allowing users to mount removable media readable by others (as many popular Linux distributions do) presents the same risks and hence there is no point for this FUSE restriction, at least in such distributions.
Licensing issues
Under what license is FUSE released?
The kernel part is released under the GNU GPL.
Libfuse is released under the GNU LGPL.
All other parts (examples, fusermount, etc) are released under the GNU GPL.
Under what conditions may I modify or distribute FUSE?
See the files COPYING and COPYING.LIB in the distribution.
More information can be found at [1]
Under what conditions may I distribute a filesystem which uses libfuse?
See COPYING.LIB in the distribution.
In simple terms as long as you are linking dynamically (the default) there are no limitations on linking with libfuse. For example you may distribute the filesystem itself in binary form, without source code, under any propriatery license.
Under what conditions may I distribute a filesystem that uses the raw kernel interface of FUSE?
There are no restrictions whatsoever for using the raw kernel interface.
API
Which method is called on the close() system call?
flush() and possibly release(). For details see the documentation of these methods in <fuse.h>
Wouldn't it be simpler if there were a single close() method?
No, because the relationship between the close() system call and the release of the file (the opposite of open) is not as simple as people tend to imagine. UNIX allows open files to acquire multiple references
- after fork() two processes refer to the same open file
- dup() and dup2() make another file descriptor refer to the same file
- mmap() makes a memory mapping refer to an open file
This means, that for a single open() system call, there could be more than one close() and possibly munmap() calls until the open file is finally released.
Can I return an error from release()?
No, it's not possible.
If you need to return errors on close, you must do that from flush().
How do I know which is the last flush() before release()?
You can't. All flush() calls should be treated equally. Anyway it wouldn't be worth optimizing away non-final flushes, since it's fairly rare to have multiple write-flush sequences on an open file. The general exception is when a pipe is involved. During pipe construction a pipe a flush() will occur right after the file open(). This flush() may be followed by writes to the file, if any, and then by another flush() before the release().
Why doesn't FUSE forward ioctl() calls to the filesystem?
Because it's not possible: data passed to ioctl() doesn't have a well defined length and structure like read() and write(). Consider using getxattr() and setxattr() instead.
Is there a way to know the uid, gid or pid of the process performing the operation?
Yes: fuse_get_context()->uid, etc.
Why does my filesystem receive getattr() requests if there is no client issuing any getattr() syscalls?
FUSE automatically calls
getattr() whenever a new filesystem object has been created (i.e. after
open(),
create(),
link(),
symlink() and
mkdir() requests). It does so to immediately cache the attributes (ownership,
timestamps, etc.). This is basically an optimization that can not (yet) turned off.
It may also be that the
getattr() calls are due to syscalls generated during path resolution (see
man 7 path_resolution on linux systems). Basically your system will do a
getattr() call for every path component, so when you
chdir() into
/usr/share/man, you will get
getattr() calls for
/usr,
/usr/share and
/usr/share/man.
How should threads be started?
Miscellaneous threads should be started from the init() method. Threads started before fuse_main() will exit when the process goes into the background.
Is it possible to store a pointer to private data in the fuse_file_info structure?
Yes, the 'fh' field is for this purpose. This field may be set in the open() and create() methods, and is available in all other methods having a struct fuse_file_info parameter. Note, that changing the value of 'fh' in any other method as open() or create() will have no affect.
Since the type of 'fh' is unsigned long, you need to use casts when storing and retrieving a pointer. Under Linux (and most other architectures) an unsigned long will be able to hold a pointer.
This could have been done with a union of 'void *' and 'unsigned long' but that would not have been any more type safe as having to use explicit casts. The recommended type safe solution is to write a small inline function that retrieves the pointer from the fuse_file_info structure.
Problems
Version problems
Why do I get Connection Refused after mounting?
Library is too old (< 2.3.0)
You can check which version of the library is being used by foofs by doing ' 'ldd path_to_foofs' . It will return something like this
libfuse.so.2 => /usr/local/lib/libfuse.so.2 (0xb7fc9000) libpthread.so.0 => /lib/tls/libpthread.so.0 (0xb7fb9000) libglib-2.0.so.0 => /usr/lib/libglib-2.0.so.0 (0xb7f39000) libc.so.6 => /lib/tls/libc.so.6 (0xb7e04000)
Then do 'ls -l path_to_libfuse'
> ls -l /usr/local/lib/libfuse.so.2 lrwxrwxrwx 1 root root 16 Sep 26 13:41 /usr/local/lib/libfuse.so.2 -> libfuse.so.2.2.1
Why does fusermount fail with an Unknown option error?
Errors like 'fusermount: Unknown option -o' or 'fusermount: Unknown option --' mean, that an old version of fusermount is being used. You can check by doing 'which fusermount' .
If you installed FUSE from source, then this is probably because there exists a binary package on your system which also contains a fusermount program, and is found first in the path, e.g. in /usr/bin/fusermount.
The solution is to remove the binary package.
What version of FUSE do I need to use FUSE with Linux 2.4?
FUSE 2.5.3 (supports 2.4.21 or later); later versions include a kernel module compatible with Linux 2.6 only (Linux 2.6.9 or later, as of FUSE 2.6.1).
The latest version of the FUSE libraries retains compatibility with the module for 2.4.x .
Installation problems
Why is there an error loading shared libraries?
If you get the following error when starting a FUSE-based filesystem:
foofs: error while loading shared libraries: libfuse.so.2: cannot open shared object file: No such file or directory
check /etc/ld.so.conf for a line containing '/usr/local/lib'. If it's missing, add it, and run ldconfig afterwards.
Why doesn't mounting as user work if installing FUSE from a package?
Distributions often package 'fusermount' without the suid bit, or only executable to the 'fuse' group.
This results in the following message, when trying to mount a filesystem as an unprivileged user:
fusermount: mount failed: Operation not permitted
The simplest solution is to change the mode of 'fusermount':
chmod 4755 /usr/bin/fusermount
Note, you may have to do this after each upgrade.
Other problems
Why are some bytes zeroed when reading a file?
This happens if the filesystem returns a short count from the read() method. If the file wasn't opened in direct I/O mode, the read() method must return exactly the requested number of bytes, unless it's the end of the file.
If the file was opened in direct I/O mode (with direct_io mount option, or by setting the direct_io field of fuse_file_info at open) the read can return a smaller value than requested (but some programs (notably mountlo) mishandle such situation in write and produce broken files). In this case the end of file can be signalled by returning zero.
What do I do if /dev/fuse does not exist?
Maybe the FUSE module is not loaded. As root, try:
modprobe fuse
If nothing changes, as root run:
mknod /dev/fuse c 10 229
What do I do if you don't have access to /dev/fuse?
As root run:
chmod o+rw /dev/fuse
Remember that this will allow ordinary users to mount their own user space filesystems.
If that's not what you want then use different permissions.
Why does cp return operation not permitted when copying a file with no write permissions for the owner?
"cp" calls open(2) with read-only permissions and O_CREAT, the purpose being to atomically obtain a read/write file handle and make the file read-only. Unfortunately, this does not work very well in fuse, since you first get a mknod, and then an open call. At the time of open, you can't distinguish easily wether this is the first open issued by cp, or another process trying to write a read-only file.
Defining the 'create' method solves this problem, however this requires a Linux kernel version of at least 2.6.15 and libfuse version 2.5 or greater (on FreeBSD you'll need fuse4bsd-0.3.0-pre1 or newer for this).
There can be other workarounds, however the easy one is to use the "default_permissions" mount option, and to avoid checking permissions on open. If you store files on a filesystem, this can get tricky because you will have to change the file mode to allow writing. Using the stateful API (i.e. returning an handle on open) will simplify things. In this case, and using "-o default_permissions", when implementing the open call you have to:
1. check if the open is in write mode (i.e. mode has O_RDWR or O_WRONLY)
2. in that case (in mutual exclusion with other open, getattr etc. calls on the same file) change the mode from "M" to "M OR 0o200"
3. open the file, change back the mode even in case of errors, and return the obtained handle
Why doesn't find work on my filesystem?
The st_nlink member must be set correctly for directories to make find work. If it's not set correctly the -noleaf option of find can be used to make it ignore the hard link count (see man find).
The correct value of st_nlink for directories is NSUB + 2. Where NSUB is the number of subdirectories. NOTE: regular-file/symlink/etc entries *do not* count into NSUB, only directories.
If calculating NSUB is hard, the filesystem can set st_nlink of directories to 1, and find will still work. This is not documented behavior of find, and it's not clear whether this is intended or just by accident. But for example the NTFS filesysem relies on this, so it's unlikely that this "feature" will go away.
What is the reason for IO errors?
The kernel part of FUSE returns the EIO error value, whenever the userspace filesystem sends a "bad" reply. Sometimes these are unavoidable, and not necessarily a fault of the filesystem. Possible causes of this are (non-exhaustive)
- the filesystem returned a short count on write().
- the type of the file has changed (e.g. a directory suddenly became a symlink).
- a directory entry contained a filename that was too long (no, ENAMETOOLONG is not the right error here).
- the same node ID value was used for two different directories (i.e. hard-linked directories are not allowed).
- In the GETATTR function, st_mode needs to have a valid filetype bit set, like S_IFREG or S_IFDIR, see the stat manual for more.
I can not know the file size in advance, how do I force EOF from fs_read() to be seen in the application?
Set direct_io in fs_open().
Misc
Can the filesystem ask a question on the terminal of the user?
It would not be possible generally speaking, since it might not be an interactive program but rather a daemon, or a GUI program doing the operation. However you should be able to get the PID for the caller, and by looking in /proc you should be able to find the process tty or something similar.
But this is not recommended. You should rather think about solving this another way.
If a filesystem is mounted over a directory, how can I access the old contents?
There are two possibilities:
The first is to use 'mount --bind DIR TMPDIR' to create a copy of the namespace under DIR. After mounting the FUSE filesystem over DIR, files can still be accessed through TMDIR. This needs root privileges.
The second is to set the working directory to DIR after mounting the FUSE filesystem. For example before fuse_main() do
save_dir = open(DIR, O_RDONLY);
And from the init() method do
fchdir(save_dir); close(save_dir);
Then access the files with relative paths (with newer LIBC versions the *at() functions may also be used instead of changing the CWD).
This method doesn't need root privileges, but only works on Linux (FreeBSD does path resolving in a different way), and it's not even guaranteed to work on future Linux versions.
How do I test my new file system?
There are a number of tools listed here on SourceForge
POSIX testing may be found at POSIX
The ntfs-3g team has made a test suite available at [2] and [3]. See the email fuse archive [4] for more details. | http://sourceforge.net/apps/mediawiki/fuse/index.php?title=FAQ&oldid=189 | CC-MAIN-2014-10 | refinedweb | 2,631 | 65.22 |
C++
Using how doubles represent numbers.
bool isPowerOfTwo(double n) { return n > 0 && !(*(long long*)&n << 12); }
Python Golf
Using n&-n and chained comparisons.
def isPowerOfTwo(self, n): return-n&n==n>0
Python
Assigning a set's method.
class Solution: isPowerOfTwo = {1<<e for e in range(31)}.__contains__
Wooow~ How you discover this? The double type contains 64 bits: 1 for sign, 11 for the exponent, and 52 for the mantissa. Its range is +/–1.7E308 with at least 15 digits of precision. This is how "12" comes. If the number is the power of two, then mantissa part would be 0 which stands for 1.0 | https://discuss.leetcode.com/topic/17864/1-liners-other-than-the-boring-n-n-1-p | CC-MAIN-2017-39 | refinedweb | 110 | 79.56 |
I borrowed the idea from the problem "find the start node of a circular linkedlist".
- Link the last node of A to the head of A. Now the problem change to "Given a linked list (B), detect whether it contains circular part and if so find the start node"
- Set two pointer p1=headB(slow) and p2=headB(fast), move p2 two steps each time and p1 one step each time.
- If A and B has intersection, p1 and p2 will meet at some time. Then set p1=headB, and move p1 and p2 at same speed (one step each time), they will meet again. That node would be intersection node. Return that node.
- If p2 reach the end before they meet, no intersection.
- Don't forget to delink A's tail to head before return in either situation. Remember we are not allowed to change the structure of input list.
ps. I am really confused about how the code formatter works... I tried hundred times before I make it....
public class Solution { public ListNode getIntersectionNode(ListNode headA, ListNode headB) { if (headA == null || headB == null) return null; ListNode tail = headA; while (tail.next != null) tail = tail.next; tail.next = headA; ListNode p1 = headB; ListNode p2 = headB; while(p2.next != null && p2.next.next != null) { p1=p1.next; p2=p2.next.next; if (p1 == p2) break; } if (p2.next == null || p2.next.next == null) { tail.next = null; return null; } p1 = headB; while (p1 != p2) { p1 = p1.next; p2 = p2.next; } tail.next = null; return p1; } } | https://discuss.leetcode.com/topic/10504/accepted-java-solution-borrow-the-idea-from-problem-find-the-start-node-of-a-circular-linkedlist | CC-MAIN-2017-43 | refinedweb | 254 | 86.4 |
/* -----------------------------------------------------------------------------
*
* (c) The GHC Team 1998-2008
*
* Generational garbage collector
*
* Documentation on the architecture of the Garbage Collector can be
* found in the online commentary:
*
*
*
* ---------------------------------------------------------------------------*/
#ifndef GCTHREAD_H
#define GCTHREAD_H
#include "OSThreads.h"
#include "WSDeque_ * my_gct; // the gc_thread that contains this workspace
// where objects to be scavenged go
bdescr * todo_bd;
StgPtr todo_free; // free ptr for todo_bd
StgPtr todo_lim; // lim for todo_bd
WSDeque * todo_q;
bdescr * todo_overflow;
nat n_todo_overflow;
// local information. At some later
point it maybe useful to move this other into the TLS store of the
GC threads
------------------------------------------------------------------------- */
typedef struct gc_thread_ {
#ifdef THREADED_RTS
OSThreadId id; // The OS thread that this struct belongs to
SpinLock gc_spin;
SpinLock mut_spin;
volatile rtsBool wakeup;
;
// Remembered sets on this CPU. Each GC thread has its own
// private per-generation remembered sets, so it can add an item
// to the remembered set without taking a lock. The mut_lists
// array on a gc_thread is the same as the one on the
// corresponding Capability; we stash it here too for easy access
// during GC; see recordMutableGen_GC().
bdescr ** mut_lists;
// --------------------
//;
/* -----------------------------------------------------------------------------
The gct variable is thread-local and points to the current thread's
gc_thread structure. It is heavily accessed, so we try to put gct
into a global register variable if possible; if we don't have a
register then use gcc's __thread extension to create a thread-local
variable.
Even on x86 where registers are scarce, it is worthwhile using a
register variable here: I measured about a 2-5% slowdown with the
__thread version.
-------------------------------------------------------------------------- */
extern gc_thread **gc_threads;
#if defined(THREADED_RTS)
#define GLOBAL_REG_DECL(type,name,reg) register type name REG(reg);
#define SET_GCT(to) gct = (to)
#if defined(sparc_HOST_ARCH) || defined(i386_HOST_ARCH)
// Don't use REG_base or R1 for gct on SPARC because they're getting clobbered
// by something else. Not sure what yet. -- BL 2009/01/03
// Using __thread is better than stealing a register on x86, because
// we have too few registers available. In my tests it was worth
// about 5% in GC performance, but of course that might change as gcc
// improves. -- SDM 2009/04/03
extern __thread gc_thread* gct;
#define DECLARE_GCT __thread gc_thread* gct;
#elif defined(REG_Base) && !defined(i386_HOST_ARCH)
// on i386, REG_Base is %ebx which is also used for PIC, so we don't
// want to steal it
GLOBAL_REG_DECL(gc_thread*, gct, REG_Base)
#define DECLARE_GCT /* nothing */
#elif defined(REG_R1)
GLOBAL_REG_DECL(gc_thread*, gct, REG_R1)
#define DECLARE_GCT /* nothing */
#elif defined(__GNUC__)
extern __thread gc_thread* gct;
#define DECLARE_GCT __thread gc_thread* gct;
#else
#error Cannot find a way to declare the thread-local gct
#endif
#else // not the threaded RTS
extern StgWord8 the_gc_thread[];
#define gct ((gc_thread*)&the_gc_thread)
#define SET_GCT(to) /*nothing*/
#define DECLARE_GCT /*nothing*/
#endif
#endif // GCTHREAD_H | https://gitlab.haskell.org/ghc/ghc/-/blame/174dccda5a8213f9a777ddf5230effef6b5f464d/rts/sm/GCThread.h | CC-MAIN-2021-04 | refinedweb | 435 | 50.8 |
I have recently installed version 14.1 (trying to move from Eclipse), and there's a lot of issues with XML/XSL files I use.
Example 1 (XSL-FO template)
Example 2 (Tapestry .tml file)
- fo: namespace is not recognized and reports an error
- custom namespace declaration is ignored and reports an error
- custom attribute namespace reports an error
etc etc etc
The amount of XML/XSL files in this project is huge, so I simply won't be able to switch to IntelliJ if I can't resolve those "errors" (which are in fact not errors at all).
Could you advise? Many thanks. | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206749775-xml-xsl-validation-errors | CC-MAIN-2020-40 | refinedweb | 104 | 68.7 |
Work is under way on extending the Python interface to Madagascar. With new tools, you should be able to use an interactive Python session rather than a Unix shell to run Madagascar modules. Here are some examples:
import m8r as sf import numpy, pylab f = sf.spike(n1=1000,k1=300)[0] # sf.spike is an operator # f is an RSF file object f.attr() # Inspect the file with sfattr b = sf.bandpass(fhi=2,phase=1)[f] # Now f is filtered through sfbandpass c = sf.spike(n1=1000,k1=300).bandpass(fhi=2,phase=1)[0] # c is equivalent to b but created with a pipe g = c.wiggle(clip=0.02,title='Welcome to Madagascar') # g is a Vplot file object g.show() # Display it on the screen d = b - c # Elementary arithmetic operations on files are defined g = g + d.wiggle(wanttitle=False) # So are operations on plots g.show() # This shows a movie pylab.show(pylab.plot(b)) c = numpy.clip(b,0,0.01) # RSF file objects can be passed to pylab or numpy c.attr() s = c[300:310] print s # Taking a slice outputs a numpy array c = sf.clip(clip=0.01)[b] c.attr() # Alternatively, apply sfclip
For doing reproducible research, one can also use the new syntax inside SConstruct files, as follows:
from rsf.proj import * import m8r as sf Flow('filter',None,sf.spike(n1=1000,k1=300).bandpass(fhi=2,phase=1)) Result('filter',sf.wiggle(clip=0.02,title='Welcome to Madagascar')) End()
See also a 4-line dot-product test and 20-line conjugate-gradient algorithm.
The picture shows a screenshot of an interactive session in a SAGE web-based notebook
| https://reproducibility.org/blog/2008/08/03/extending-python-interface/ | CC-MAIN-2022-33 | refinedweb | 285 | 60.01 |
See: Description
The JHI4 may be used by any Java application that needs to access HDF files. It is extremely important to emphasize that this package is not a pure Java implementation of the HDF library. The JHI4 calls the same HDF library that is used by C or FORTRAN programs. (Note that this product cannot be used in most network browsers because it accesses the local disk using native code.)
The Java HDF Interface consists of Java classes and a dynamically linked native library. The Java classes declare native methods, and the library contains C functions which implement the native methods. The C functions call the standard HDF library, which is linked as part of the same library on most platforms.
The central part of the JHI4 is the Java class hdf.hdflib.HDFLibrary. The HDFLibrary class calls the standard (i.e., `native' code) HDF library, with native methods for most of the HDF5functions.
For example, the HDF library has the function Hopen to open an HDF file. The Java interface is the class hdf.hdflib.HDFLibrary, which has a method:
static native int Hopen(String filename, int flags, flags, jint access) { /* ...convert Java String to (char *) */ /* call the HDF library */ retVal = Hopen((char *)file, (unsigned)flags, (hid_t)access ); /* ... */ }This C function calls the HDF library and returns the result appropriately.
There is one native method for each HDF entry point (several hundred in all), which are compiled with the HDF library into a dynamically loaded library (libhdf_java). Note that this library must be built for each platform.
To call the HDF `Hopen' function, a Java program would import the package 'hdf.hdflib.*', and invoke the method on the class 'HDFLibrary'. The Java program would look something like this:
import hdf.hdflib.*; { /* ... */ try { file = HDFLibrary.Hopen("myFile.hdf", flags, access ); } catch (HDFException ex) { //... } /* ... */ }The HDFLibrary class automatically loads the native method implementations and the HDF library. | https://support.hdfgroup.org/ftp/HDF/releases/HDF4.2.12/src/hdf4_java_doc/overview-summary.html | CC-MAIN-2018-47 | refinedweb | 317 | 58.08 |
Believe it or not—how you answer this question in your day-to-day code reveals your true Python skill level to every master coder who reads your code.
Beginner coders check if a list
a is empty using crude statements like
len(a)==0 or
a==[]. While those solve the problem—they check if a list is empty—they are not what a master coder would do. Instead, the most Pythonic way to check if a list (or any other iterable for that matter) is empty is the expression
not a.
You may call it implicit Booleanness (or, more formal, type flexibility): every object in Python can be implicityl converted into a truth value.
Here’s an example in our interactive Python shell—try it yourself!
Exercise: What’s the output of the code if you add one element to the list
a?
Truth Value Testing and Type Flexibility
Python implicitly associates any object with a Boolean value. Here are some examples:
- The integers 1, 2, and 3 are associated to the Boolean
True.
- The integer 0 is associated to the Boolean
False.
- The strings
'hello',
'42', and
'0'are associated to the Boolean
True.
- The empty string
''is associated to the Boolean
False.
Roughly speaking, each time a Boolean value is expected, you can throw in a Python object instead. The Python object will then be converted to a Boolean value. This Boolean value will be used to decide whether to enter, say, a
while loop or an
if statement. This is called “type flexibility” and it’s one of Python’s core design choices.
Per default, all objects are considered
True if they are semantically non-empty. Empty objects are usually associated to the Boolean
False. More specifically, only if one of the two cases is met, will the result of an object be
False: (i) the
__len__() function returns 0, or (ii) the
__bool__() function returns
False. You can redefine those two methods for each object.
From the Python documentation, here are some common objects that are associated to the Boolean
False:
- Defined constants:
Noneand
False.
- Zero of numerical types:
0,
0.0,
0j,
Decimal(0),
Fraction(0, 1)
- Empty iterables:
'',
(),
[],
{},
set(),
range(0)
Here are some examples:
if []: print('1') if (): print('2') if [()]: print('3') # 3 if 0: print('4') if 0.00: print('5') if 0.001: print('6') # 6 if set(): print('7') if [set()]: print('8') # 8
Again, even if the iterable contains only a single element (that may evaluate to
False like integer
0), the implicit Boolean conversion will return
True because an empty element is an element nonetheless.
PEP8 Recommendation: How to Check if a List is Empty
As some readers argued with me about how to correctly check for an empty list in Python, here‘s the explicit excerpt from the PEP8 standard (Python’s set of rules about how to write readable code):
For sequences, (strings, lists, tuples), use the fact that empty sequences are false:
# Correct: if not seq: if seq:
# Wrong: if len(seq): if not len(seq):
Performance Evaluations
To see which of the three methods is fastest, I repeated each method 100 times using the
timeit library on my notebook with Intel Core i7 (TM) CPU of 8th Generation, 8GB RAM—yes, I know—and NVIDIA Graphic Card (not that it mattered).
Here’s the code:
import timeit import numpy as np setup = 'a = []' method1 = 'if len(a) == 0: pass' method2 = 'if a == []: pass' method3 = 'if not a: pass' t1 = timeit.repeat(stmt=method1, setup=setup, repeat=100) t2 = timeit.repeat(stmt=method2, setup=setup, repeat=100) t3 = timeit.repeat(stmt=method3, setup=setup, repeat=100) print('Method 1: len(a) == 0') print('avg: ' + str(np.average(t1))) print('var: ' + str(np.var(t1))) print() print('Method 2: a == []') print('avg: ' + str(np.average(t2))) print('var: ' + str(np.var(t2))) print() print('Method 3: not a') print('avg: ' + str(np.average(t3))) print('var: ' + str(np.var(t3))) print()
The third method is the most Pythonic one with type flexibility. We measure the elapsed time of 100 executions of each method. In particular, we’re interested in the average time and the variance of the elapsed time. Both should be minimal.
Our thesis is that the third, most Pythonic method is also the fastest because there’s no need to create a new empty list (like in method 2) or performing nested function calls like in method 1. Method 3 consists only of a single function call: converting the list into a Boolean value with the
__bool__ or
__len__ methods.
Here’s the result in terms of elapsed average runtime and variance of the runtimes:
Method 1: len(a) == 0 avg: 0.06273576400000003 var: 0.00022597495215430347 Method 2: a == [] avg: 0.034635367999999944 var: 8.290137682917488e-05 Method 3: not a avg: 0.017685209000000004 var: 6.900910317342067e-05
You can see that the third method is not only 50% faster than method 2 and 75% faster than method 3, it also has very little variance. It’s clearly the best method in terms of runtime performance. Being also the shortest method, you can now see why the method is considered to be most “Pythonic”.! | https://blog.finxter.com/how-to-check-if-a-python-list-is-empty/ | CC-MAIN-2020-40 | refinedweb | 871 | 72.56 |
/* Machine description file for the Motorola Delta.
Tested on mvme147 board using R3V7 without X. Tested with gcc.
Tested on mvme167 board using R3V7 without X. Tested with cc, gnucc, gcc.-3" */
/* The following three symbols give information on
the size of various data types. */
#define SHORTBITS 16 /* Number of bits in a short */
#define INTBITS 32 /* Number of bits in an int */
#define LONGBITS 32 /* Number of bits in a long */
/* Define WORDS_BIG_ENDIAN iff lowest-numbered byte in a word
is the most significant byte. */
#define:
Ones defined so far include vax, m68000, ns16000, pyramid,
orion, tahoe, APOLLO and many others */
#define m68000
#define MOTOROLA_DELTA
/* data space precedes text */
/* Define these if you want to edit files up to 32Mbytes.
Leaving them undefined (files up to 8 Mbytes) should be more efficient. */
/* #define VALBITS 26
#define GCTYPEBITS 5 */
/* Undefine this if you don't want the machine slow down when a buffer
is modified. */
#define CLASH_DETECTION
/* Machine specific stuff */
#define HAVE_PTYS
#define SYSV_PTYS
#ifdef HAVE_INET_SOCKETS /* this comes from autoconf */
# define HAVE_SOCKETS /* NSE may or may not have been installed */
#endif
#define SIGNALS_VIA_CHARACTERS
#define BROKEN_CLOSEDIR /* builtin closedir is interruptible */
#undef HAVE_BCOPY /* b* functions are just stubs to mem* ones */
#define bcopy(from,to,bytes) memcpy(to,from,bytes)
#define bzero(to,bytes) memset(to,0,bytes)
#define bcmp memcmp
#define memmove(t,f,s) safe_bcopy(f,t,s) /* for overlapping copies */
#undef KERNEL_FILE
#define KERNEL_FILE "/sysv68"
#undef LDAV_SYMBOL
#ifdef SIGIO
/* R3V7 has SIGIO, but interrupt input does not work yet.
Let's go on with cbreak code. */
/* # define INTERRUPT_INPUT */
#endif
/* The standard C library is -lc881, not -lc.
-lbsd brings sigblock and sigsetmask.
DO NOT USE -lPW. That version of alloca is broken in versions R3V5,
R3V6, R3V7. -riku@field.fi -pot@cnuce.cnr.it. */
#define LIB_STANDARD -lc881
#define LIB_MATH -lm881
#define LIBS_TERMCAP -lcurses
#define LIBS_SYSTEM -lbsd
#undef sigsetmask
#ifdef HAVE_X_WINDOWS
/* I have not tested X, but I think these are obsolete, so let's
commment them -pot@cnuce.cnr.it */
/* debug switches enabled because of some difficulties w/X11
# define C_DEBUG_SWITCH -g
# define OBJECTS_MACHINE -lg
# define C_OPTIMIZE_SWITCH
# define CANNOT_DUMP
# define XDEBUG */
/* X library is in 'nonstandard' location. */
/* This should be taken care of by configure -pot@cnuce.cnr.it
# define LD_SWITCH_MACHINE -L/usr/lib/X11/ */
# define HAVE_RANDOM
# define BROKEN_FIONREAD /* pearce@ll.mit.edu says this is needed. */
# define HAVE_XSCREENNUMBEROFSCREEN
# undef LIB_X11_LIB /* no shared libraries */
# define LIB_X11_LIB -lX11
# undef USG_SHARED_LIBRARIES /* once again, no shared libs */
# undef LIBX11_SYSTEM /* no -lpt as usg5-3.h expects */
# define LIBX11_SYSTEM -lnls -lnsl_s
#endif /* HAVE_X_WINDOWS */
#ifdef __GNUC__
/* Use builtin alloca. Also be sure that no other ones are tried out. */
# define alloca __builtin_alloca
# define HAVE_ALLOCA
/* Union lisp objects do not yet work as of 19.15. */
/* # undef NO_UNION_TYPE */
/* There are three ways to use the gnucc provided with R3V7. Either
link /bin/ccd/cc to /bin/cc and then configure (supposing that CC
is unset or set to cc). Or configure like this: `CC=/bin/ccd/cc
configure', or else configure like this: `CC=gnucc configure'. */
# ifdef __STDC__
/* Compiling with gnucc (not through ccd). This means -traditional is
not set. Let us set it, because gmalloc.c includes <stddef.h>,
and we don't have that (as of SYSV68 R3V7).
Removing the -finline-functions option to gnucc causes an
executable emacs smaller by about 10%. */
# define C_SWITCH_MACHINE -mfp0ret -m68881 -traditional -Dconst= -fdelayed-branch -fstrength-reduce -finline-functions -fcaller-saves
# define LIB_GCC /lib/gnulib881
# endif /* __STDC__ */
#else
/* Not __GNUC__, use the alloca in alloca.s. */
/* Try to guess if we are using the Green Hills Compiler */
# if defined mc68000 && defined MC68000
/* Required only for use with Green Hills compiler:
-ga Because alloca relies on stack frames. This option forces
the Green Hills compiler to create stack frames even for
functions with few local variables. */
# define C_SWITCH_MACHINE -ga -O
# define GAP_USE_BCOPY /* *++to = *++from is inefficient */
# define BCOPY_UPWARD_SAFE 0
# define BCOPY_DOWNWARD_SAFE 1 /* bcopy does: mov.b (%a1)+,(%a0)+ */
# else
/* We are using the standard AT&T Portable C Compiler */
# define SWITCH_ENUM_BUG
# endif
#endif /* not __GNUC__ */ | https://emba.gnu.org/emacs/emacs/-/blame/3ef14e464b3ca838872a99bfb6a991b4d966a3e0/src/m/delta.h | CC-MAIN-2021-49 | refinedweb | 667 | 55.74 |
The multiprocessing module was added to Python in version 2.6. It was originally defined in PEP 371 by Jesse Noller and Richard Oudkerk. The multiprocessing module allows you to spawn processes in much that same manner than you can spawn threads with the threading module. The idea here is that because you are now spawning processes, you can avoid the Global Interpreter Lock (GIL) and take full advantages of multiple processors on a machine.
The multiprocessing package also includes some APIs that are not in the threading module at all. For example, there is a neat Pool class that you can use to parallelize executing a function across multiple inputs. We will be looking at Pool in a later section. We will start with the multiprocessing module’s Process class.
Getting started with multiprocessing
The Process class is very similar to the threading module’s Thread class. Let’s try creating a series of processes that call the same function and see how that works:
import os from multiprocessing import Process def doubler(number): """ A doubling function that can be used by a process """ result = number * 2 proc = os.getpid() print('{0} doubled to {1} by process id: {2}'.format( number, result, proc)) if __name__ == '__main__': numbers = [5, 10, 15, 20, 25] procs = [] for index, number in enumerate(numbers): proc = Process(target=doubler, args=(number,)) procs.append(proc) proc.start() for proc in procs: proc.join()
For this example, we import Process and create a doubler function. Inside the function, we double the number that was passed in. We also use Python’s os module to get the current process’s ID (or pid). This will tell us which process is calling the function. Then in the block of code at the bottom, we create a series of Processes and start them. The very last loop just calls the join() method on each process, which tells Python to wait for the process to terminate. If you need to stop a process, you can call its terminate() method.
When you run this code, you should see output that is similar to the following:
5 doubled to 10 by process id: 10468 10 doubled to 20 by process id: 10469 15 doubled to 30 by process id: 10470 20 doubled to 40 by process id: 10471 25 doubled to 50 by process id: 10472
Sometimes it’s nicer to have a more human readable name for your process though. Fortunately, the Process class does allow you to access the same of your process. Let’s take a look:
import os from multiprocessing import Process, current_process def doubler(number): """ A doubling function that can be used by a process """ result = number * 2 proc_name = current_process().name print('{0} doubled to {1} by: {2}'.format( number, result, proc_name)) if __name__ == '__main__': numbers = [5, 10, 15, 20, 25] procs = [] proc = Process(target=doubler, args=(5,)) for index, number in enumerate(numbers): proc = Process(target=doubler, args=(number,)) procs.append(proc) proc.start() proc = Process(target=doubler, name='Test', args=(2,)) proc.start() procs.append(proc) for proc in procs: proc.join()
This time around, we import something extra: current_process. The current_process is basically the same thing as the threading module’s current_thread. We use it to grab the name of the thread that is calling our function. You will note that for the first five processes, we don’t set a name. Then for the sixth, we set the process name to “Test”. Let’s see what we get for output:
5 doubled to 10 by: Process-2 10 doubled to 20 by: Process-3 15 doubled to 30 by: Process-4 20 doubled to 40 by: Process-5 25 doubled to 50 by: Process-6 2 doubled to 4 by: Test
The output demonstrates that the multiprocessing module assigns a number to each process as a part of its name by default. Of course, when we specify a name, a number isn’t going to get added to it.
Locks
The multiprocessing module supports locks in much the same way as the threading module does. All you need to do is import Lock, acquire it, do something and release it. Let’s take a look:
from multiprocessing import Process, Lock def printer(item, lock): """ Prints out the item that was passed in """ lock.acquire() try: print(item) finally: lock.release() if __name__ == '__main__': lock = Lock() items = ['tango', 'foxtrot', 10] for item in items: p = Process(target=printer, args=(item, lock)) p.start()
Here we create a simple printing function that prints whatever you pass to it. To prevent the threads from interfering with each other, we use a Lock object. This code will loop over our list of three items and create a process for each of them. Each process will call our function and pass it one of the items from the iterable. Because we’re using locks, the next process in line will wait for the lock to release before it can continue.
Logging
Logging processes is a little different than logging threads. The reason for this is that Python’s logging packages doesn’t use process shared locks, so it’s possible for you to end up with messages from different processes getting mixed up. Let’s try adding basic logging to the previous example. Here’s the code:
import logging import multiprocessing from multiprocessing import Process, Lock def printer(item, lock): """ Prints out the item that was passed in """ lock.acquire() try: print(item) finally: lock.release() if __name__ == '__main__': lock = Lock() items = ['tango', 'foxtrot', 10] multiprocessing.log_to_stderr() logger = multiprocessing.get_logger() logger.setLevel(logging.INFO) for item in items: p = Process(target=printer, args=(item, lock)) p.start()
The simplest way to log is to send it all to stderr. We can do this by calling the log_to_stderr() function. Then we call the get_logger function to get access to a logger and set its logging level to INFO. The rest of the code is the same. I will note that I’m not calling the join() method here. Instead, the parent thread (i.e. your script) will call join() implicitly when it exits.
When you do this, you should get output like the following:
[INFO/Process-1] child process calling self.run() tango [INFO/Process-1] process shutting down [INFO/Process-1] process exiting with exitcode 0 [INFO/Process-2] child process calling self.run() [INFO/MainProcess] process shutting down foxtrot [INFO/Process-2] process shutting down [INFO/Process-3] child process calling self.run() [INFO/Process-2] process exiting with exitcode 0 10 [INFO/MainProcess] calling join() for process Process-3 [INFO/Process-3] process shutting down [INFO/Process-3] process exiting with exitcode 0 [INFO/MainProcess] calling join() for process Process-2
Now if you want to save the log to disk, then it gets a little trickier. You can read about that topic in Python’s logging Cookbook.
The Pool Class
The Pool class is used to represent a pool of worker processes. It has methods which can allow you to offload tasks to the worker processes. Let’s look at a really simple example:
from multiprocessing import Pool def doubler(number): return number * 2 if __name__ == '__main__': numbers = [5, 10, 20] pool = Pool(processes=3) print(pool.map(doubler, numbers))
Basically what’s happening here is that we create an instance of Pool and tell it to create three worker processes. Then we use the map method to map a function and an iterable to each process. Finally we print the result, which in this case is actually a list: [10, 20, 40].
You can also get the result of your process in a pool by using the apply_async method:
from multiprocessing import Pool def doubler(number): return number * 2 if __name__ == '__main__': pool = Pool(processes=3) result = pool.apply_async(doubler, (25,)) print(result.get(timeout=1))
What this allows us to do is actually ask for the result of the process. That is what the get function is all about. It tries to get our result. You will note that we also have a timeout set just in case something happened to the function we were calling. We don’t want it to block indefinitely after all.
Process Communication
When it comes to communicating between processes, the multiprocessing modules has two primary methods: Queues and Pipes. The Queue implementation is actually both thread and process safe. Let’s take a look at a fairly simple example that’s based on the Queue code from one of my threading articles:
from multiprocessing import Process, Queue sentinel = -1 def creator(data, q): """ Creates data to be consumed and waits for the consumer to finish processing """ print('Creating data and putting it on the queue') for item in data: q.put(item) def my_consumer(q): """ Consumes some data and works on it In this case, all it does is double the input """ while True: data = q.get() print('data found to be processed: {}'.format(data)) processed = data * 2 print(processed) if data is sentinel: break if __name__ == '__main__': q = Queue() data = [5, 10, 13, -1] process_one = Process(target=creator, args=(data, q)) process_two = Process(target=my_consumer, args=(q,)) process_one.start() process_two.start() q.close() q.join_thread() process_one.join() process_two.join()
Here we just need to import Queue and Process. Then we two functions, one to create data and add it to the queue and the second to consume the data and process it. Adding data to the Queue is done by using the Queue’s put() method whereas getting data from the Queue is done via the get method. The last chunk of code just creates the Queue object and a couple of Processes and then runs them. You will note that we call join() on our process objects rather than the Queue itself.
Wrapping Up
We have a lot of material here. You have learned how to use the multiprocessing module to target regular functions, communicate between processes using Queues, naming threads and much more. There is also a lot more in the Python documentation that isn’t even touched in this article, so be sure to dive into that as well. In the meantime, you now know how to utilize all your computer’s processing power with Python!
Related Reading
- The Python documentation on the multiprocessing module
- Python Module of the Week: multiprocessing
- Python Concurrency – Porting a Queue to multiprocessing | http://www.blog.pythonlibrary.org/2016/08/02/python-201-a-multiprocessing-tutorial/ | CC-MAIN-2018-43 | refinedweb | 1,734 | 62.68 |
"DarwinParts" is mentioned at the end of the paragraph.
That's supposed to be "DarwinPorts".
there's a missing comma between their names.
... of string escaping depending ...
should be:
... of string escaping, depending ...
string.chr + string.chr + string.chr + string.chr + string.chr
should be:
string[3].chr + string[4].chr + string[5].chr + string[6].chr + string[7].chr
irb(main):003:0> data.each { |x| s << x << ' and a ' }
That's a bug. "s << x <<" should be "s << x.to_s <<"
You can reference any any binary ...
should be:
You can reference any binary ...
mneumonic
should be:
mnemonic
(occurs four times!)
"C-_x_" represents ....
.. should be ...
"C-x" represents ....
... and "M-_x_" represents ...
.. should be ...
... and "M-x" represents
This was an attempt at formatting. "x" is the variable X, not the letter 'x'.
The text should read "C-x" and "M-x" as stated, but "x" should be italicized.
The meaning is "C-whatever" and "M-whatever".
... of a particular in a ...
should be:
... of a particular character in a ...
becase
should be:
because
... and each_bytes yields ..
should read
... and each_byte yields
... (there are some samples are in the Discussion).
should be:
... (there are some samples in the Discussion).
/(w+([-'.]w+)*/
should read:
/(w+([-'.]w+)*)/
Missing double quote in first gsub argument:
"Line one
Line two
".gsub(
", "
")
should read:
"Line one
Line two
".gsub("
", "
")
In rare cases you may ...
should be:
In rare cases, you may ...
nonASCII
should be:
non-ASCII
CHANGE:
"String#count is a method that takes a strong of bytes ..."
TO:
"String#count is a method that takes a string of bytes ..."
CHANGE:
"... and counts how many times those bytes occurs in the string."
TO:
"... and counts how many times those bytes occur in the string."
end
end
... should be ...
end
end
"Its extract_numbers method..." should be
"Its extract method..."
"and work as well as as floats"
should be
"and work as well as floats"
"...and the toal number..." should be
"...and the total number..."
are subscripted. This is almost correct, but the "1" and "2" in "b1"
and "b2" need to be sub-sub-scripted, because "1" and "2" are
themselves subscripts of "b". To represent it as ASCII art:
log
b
1
In "The log base k of x, or logk(x)", the k is correctly subscripted,
but the "(x)" appears to be _superscripted_. As with the "logb1(x)"
and the other earlier examples, the (x) needs to be normal text, on
the same level as "log". Again, ASCII art to the rescue:
log (x)
k
It should be normal text.
"...you'll get the same results whether you multiply A by B and then the result by C, or multiply B by C and then the result by A."
would be less confusing if it read
"...you'll get the same results whether you multiply A by B and then the result by C, or multiply B by C and then A by the result."
since multiplication of matrices is non-abelian.
"is not in ASCII nor Unicode" is incorrect. As stated later in the
recipe, there is a Unicode character for a V with a bar over it.
Omit "nor Unicode".
t.wday # => 3 # Numeric day of week; Sunday
is 0
It should look like this:
t.wday # => 3 # Numeric day of week; Sunday
# # is 0
"...the dstination time zone's offset..." should be
"...the destination time zone's offset..."
"sysem" should be "system"
"Iterate over the array with Enumerable#each."
Change "Enumerable#each" to "Array#each".
... a code block fed to an method like...
should be
... a code block fed to a method like...
'occurance' is a misspelling -- it should be 'occurrence'.
The given implementation of SortedArray changes the semantics of some
of Array's methods. For instance, SortedArray#insert and
SortedArray#push only take one argument, and SortedArray#reverse!
returns nil instead of an array. This implementation works more like
Array.
Replace the first code sample with this:
class SortedArray < Array
def initialize(*args, &sort_by)
@sort_by = sort_by || Proc.new { |x, y| x <=> y }
super(*args)
sort!(&sort_by)
end
def insert(ignore, *values)
values.each do |v|
# The next line could be further optimized to perform a
# binary search.
insert_before = index(find { |x| @sort_by.call(x, v) == 1 })
super(insert_before ? insert_before : -1, v)
end
end
def <<(v)
insert(0, v)
self
end
def push(*values)
insert(0, *values)
end
alias unshift push
Replace the second code sample with this:
%w[ []= collect! flatten! ].each do |method_name|
class_eval %{
def #{method_name}(*args)
super
sort!(&@sort_by)
end
}
end
def reverse
Array.new(super) # Return a normal array.
end
def reverse!
self
end
end
In the 'to_s' method of class Card:
"#{@suit} of #{@rank}" should be
"#{@rank} of #{@suit}"
In the 'initialize' method of class Deck, the parameters in the call to Card.new should be reversed, ie.
"Card.new(rank, suit)" should be
"Card.new(suit, rank)"
"...extract one particular elements, or..." should be
"...extract one particular element, or...
In the "Cartesian product" subsection of section 4.14 "Computing Set Operations and
Arrays" the result for the example [1,2,3].cartesian(["a",5,6]) is missing a final
closing square bracket.
"...This puts 4 is in a..." should be
"...This puts 4 in a..."
The XOR operator alone toggles permission bits rather than clearing
them. To clear bits it must be combined with the AND operator.
Replace the last paragraph before the first code sample with this:
Use the XOR (^) and the AND (&) operators to remove permissions from a
bitmap. Use the OR operator, as seen above, to add permissions:
Replace this line of code:
new_permission = File.lstat("my_file").mode ^ File::O_R
with this:
new_permission = File.lstat("my_file").mode & (0777 ^ File::O_R)
"By passing '
' into IO#each or IO#readlines, you can handle the
newlines of files created on any recent operating system."
Should be changed to:
"By passing "
" into IO#each or IO#readlines, you can handle the
newlines of files created on many operating systems."
With a footnote:
"Technically, Unix uses "
", Mac OS X uses either "
" or "
", and
Windows uses "
", so to support all three systems, the logic will
be slightly more complex."
"decided refer to"
should be
"decided to refer to"
"By this time, you should familiar with..." should be
"By this time, you should be familiar with..."
The last sentence on the page is missing a word (either "if" or "though")
"When you call it, it's exactly as the code block were a Proc object and you had invoked its call method."
should be:
"When you call it, it's exactly as if the code block were a Proc object and you had invoked its call method."
"...you can use the iterator method to build an Enumerable object"
should read
"...you can use the iterator method to build an Enumerator object".
"the simplest and most common data type, and the most common"
is redundant. Omit ", and the most common".
"you can make a generation object of out of any piece of iteration code"
should be
"you can make a generation object out of any piece of iteration code"
def between_setup_and_cleanup
setup
begin
yield
finally
cleanup
end
end
There is no such keyword as "finally" in Ruby. Substitute "ensure" for "finally".
The first sentence of this paragraph starts:
Strict languages enforce strong typing, usually at compile type:
It should say:
Strict languages enforce strong typing, usually at compile time:
change "...from an Java collection?)"
to "...from a Java collection?)"
"directly access to" should be "directly access"
"...were recieved by the..." should be
"...were received by the..."
The paragraph starting "Your module can define an initialize method..." gives an inaccurate
picture. Add a sentence after the first sentence in the paragraph, like this:
...sometimes that doesn't work. If a module defines initialize, a
class that includes the module will no longer be able to call its
superclass's initialize method. There may also be a mismatch of
arguments. For instance, Taggable...
Spelling error: "unsed" should be "unused" followed by "Semidecidable module.
Similarly to the 317 erratum; this sentence is wrong: "When you call
super from within a method (such as initialize), Ruby finds every
ancestor that defines a method with the same name, and calls it too."
It should be this: "When you call super from within a method (such as
initialize), Ruby finds the first ancestor which defines a method with
the same name, and calls that method. That method may decide to call
super itself, sending Ruby searching even further back in the ancestor
tree, and so on."
The Class.included_modules method is redundant, because the Module
class already implements that method. This greatly simplifies the
code. The major difference is that newly included modules are
unshifted onto the beginning of the included_modules data structure,
rather than being pushed onto the end.
Replace the first code sample with this:
class Class
alias_method :old_new, :new
def new(*args, &block)
obj = old_new(*args, &block)
self.included_modules.each do |mod|
mod.initialize if mod.respond_to?(:initialize)
end
obj
end
end
Remove the second code sample.
Remove the references to Initializable in the third code sample:
module A
def self.initialize
puts "A's initialized."
end
end
module B
def self.initialize
puts "B's initialized."
end
end
Replace the final code sample with this:
class BothAAndB
include A
include B
end
both = BothAAndB.new
# B's initialized.
# A's initialized.
(The only change there is that B is now initialized before A. If this
disturbs you, you can substitute reverse_each for each in the first
code sample.)
The text of the Solution needs to be substantially rewritten to
accommodate this simplification; the Discussion a little less
so. Here's a rewrite of those two sections:
==Solution==
A class knows which modules it's included: you can get a list by
calling its Module#included_modules method.
```
Array.included_modules # => [Enumerable, Kernel]
```
To take advantage of this information when an object is initialized,
we need to redefine @Class#new@. Fortunately, Ruby's flexibility lets
us makes changes to the built-in @Class@ class (though this should
never be done lightly). Our new implemenation will call a module-level
@initialize@ method for each included module:
```
class Class
alias_method :old_new, :new
def new(*args, &block)
obj = old_new(*args, &block)
self.included_modules.each do |mod|
mod.initialize if mod.respond_to?(:initialize)
end
obj
end
end
```
We've redefined the @Class#new@ method so that it iterates through all
the modules in @included_modules@, and calls the module-level
@initialize@ method of each.
==Discussion==
Let's define a couple of modules which define @initialize@ module
methods:
```
module A
def self.initialize
puts "A's initialized."
end
end
module B
def self.initialize
puts "B's initialized."
end
end
```
We can now define a class that mixes in both modules:
```
class BothAAndB
include A
include B
end
BothAAndB.included_modules # => [B, A, Kernel]
```
Instantiating the class instantiates the modules, with not a single
@super()@ call in sight!
```
both = BothAAndB.new
# B's initialized.
# A's initialized.
```
The goal of this recipe is very similar to [[73160]]. In that recipe,
you call @super()@ in a class's @initialize@ method to call a mixed-in
module's @initialize@ method. That recipe doesn't require any changes
to built-in classes, so it's often preferable to this one.
But consider a case like the @BothAAndB@ class above. Using the
techniques from [[73160]], you'd need to make sure that both @A@ and
@B@ had calls to @super@ in their @initialize@ methods, so that each
module would get initialized. This solution moves all of that work
into the built-in @Class@ class. The other drawback of the previous
technique is that the user of your module needs to know to call
@super()@ somewhere in their @initialize@ method. Here, everything
happens automatically.
This technique is not without its pitfalls. Anytime you redefine
critical built-in methods like @Class#new@, you need to be careful:
someone else may have already redefined it elsewhere in your
program.
The code example that begins on page 338 and continues at the top of page 339. There are one too few end statements at the
top of page 339 to close all of the blocks in the example.
The code on those two pages should look like this:
class Object
def my_methods_only_no_mixins
self.class.ancestors.inject(methods) do |mlist, ancestor|
mlist = mlist - ancestor.instance_methods unless ancestor.is_a? Class
mlist
end
end
end
instance_variable_set('foo', 4) should be instance_variable_set('@foo', 4)
"Representing Data as MIDI Music" includes a line of
code to scale the data in an array of numbers so that it fits within
the range (21..108). This is the sixth line of code on page 444:
midi_note = (midi_min + ((number-midi_min) * (midi_max-low)/high)).to_i
This line is incorrect. It should be:
midi_note = (midi_min + ((number-low) * ((midi_max-midi_min)/(high-low)))).to_i
Recipe 12.5, "Adding Graphical Context with Sparklines" defines a
method called "scale" on page 421. This method is correct, but only
works when you need to scale a data set from 0 to 100. This
generalization works for any scale, making it reusable in recipe
12.14:
def scale(data, bottom=0, top=100)
min, max = data.min.to_f, data.max.to_f
scale_ratio = (top-bottom)/(max-min)
data.collect { |x| bottom + (x-min) * scale_ratio}
end
Omit the to_f calls if you want your data to be scaled using integer arithmetic.
Define that method, and you can write the loop on page 444 like this:
scale(self, midi_min, midi_max).each do |number|
midi_note = number.to_i
You can get cert validation in HTTPS even if you don't know where on
disk your certificates are located.
Replace the last paragraph on the page with this:
The OpenSSL library should know where your certificates are installed
on your computer. If you create an @OpenSSL::X509::Store@ object that
uses the default paths, you should be able to attach it to your
request object and then set the request's @verify_mode@ to
@OpenSSL::VERIFY_PEER@. Now OpenSSL can verify that you're really
talking to the web server you think you are, and not to an imposter:
Replace the code sample (spilling over onto the next page) with this:
request = Net::HTTP.new(uri.host, uri.port)
request.use_ssl = true
request.cert_store = OpenSSL::X509::Store.new
request.cert_store.set_default_paths
request.verify_mode = OpenSSL::SSL::VERIFY_PEER
response = request.get("/")
# => #<Net::HTTPOK 200 OK readbody=true>
just get rid of the parentheses: change
" ()"
to
""
Query strings in fetched URLs are stripped (14.3, 14.20)
The code in Recipes 14.3 and 14.20 can't be used to request a URL that
contains a query string: the query string gets stripped. For instance,
if you tell it to retrieve "", it
will actually retrieve "".
The easiest way to solve this problem would be to pass the method that
makes the HTTP request the result of @URI#path_query@, instead of the
result of @URI#path@. Unfortunately, the @URI#path_query@ method is
private.
You have a couple options. You can modify the @URI@ class to make
@URI#path_query@ public. You can use the @send@ (Ruby 1.8) or
@funcall@ (Ruby 1.9) tricks to call the private method. You can also
duplicate the functionality of @URI@ (either within @URI@ or outside
it). This is the solution I've chosen.
If you want to create a new method that works like @URI#path_query@,
here's a simple standalone implementation:
```
def path_query(uri)
uri.path + (uri.query ? ('?' + uri.query) : '')
end
```
Otherwise, you can make the following in-place changes:
For 14.3, the first code chunk on page 505 contains this line:
```
return http.get(uri.path, headers)
```
Change it to look like this:
```
path_query = uri.path + (uri.query ? ('?' + uri.query) : '')
return http.get(path_query, headers)
```
Though it's not presented as reusable code, the first code chunk on
page 506 has the same problem. Change this:
```
request = Net::HTTP::Get.new(uri.path)
```
to this:
```
path_query = uri.path + (uri.query ? ('?' + uri.query) : '')
request = Net::HTTP::Get.new(path_query)
```
For 14.20, there's a code chunk on page 553 that looks like this:
```
response = request.send(action, uri.path, data)
```
Change it to this:
```
path_query = uri.path + (uri.query ? ('?' + uri.query) : '')
response = request.send(action, path_query, data)
```
A shorter solution is to send the whole URI: that is, the result of
@URI#to_s@. This worked well in my tests, but the HTTP 1.1 spec
(RFC2616) strongly implies that this format is only acceptable when
you're talking to an HTTP proxy.
The paragraph at the bottom ("The @budget variable...") starts off
well-intentioned but rapidly becomes wrong. When the render method is
called the page is rendered with the current value of
@budget. Execution of the method continues, and the value of @budget
may change afterwards, but the page has already been rendered with the
old value. There is no "envelope".
Replace that paragraph (which spills over onto the next page) with the following:
The @budget variable gets set because execution of the current action
does not stop when you call render. Once render returns (having
rendered the page using the old value of @budget), the rest of the
index method runs and the value of @budget is changed.
"... have access to a method called sessions that returns ..."
should be
"... have access to a method called session that returns ..."
Refers to Recipe 16.7 as where the getQuote method was manually defined.
Should instead refer to Recipe 16.5.
This code won't work and gives an inaccurate picture of what the
Logger class does:
# Keep data for today and the past 20 days
Logger.new('application.log', 20, 'daily')
If the second argument is a number, that number is the number of
logfiles to keep. In that case, the third argument is supposed to be
the maximum size of the logfile.
Replace the bad example with this one:
# Keep up to five logs, each of up to 100 megabytes in size.
Logger.new("application.log", 5, 100 * 1024 * 1024)
ISBN: 0-596-00797-3On page 671 I show how to customize the logger message by overriding
the @Logger::Formatter#call@ method. A less disruptive way to
customize the message is to subclass @Logger#Formatter@ and override
@call@ in the subclass. On page 671, replace the last code fragment
with this:
```
class MyLogger < Logger::Formatter
def initialize()
self.datetime_format=("%Y-%m-%d %H:%M:%S")
end
def call(severity, time, progname, msg)
Format % [severity, format_datetime(time), progname, msg]
end
end
$LOG.formatter = MyLogger.new
$LOG.error('This is much shorter.')
# ERROR [2006-03-31 19:35:01] This is much shorter.
```
To get a @Logger@ object to use a custom formatter, you need to set
its @formatter@ member, as seen above.
There is in fact a simple way to avoid deadlocking a thread with
itself: use the standard library's Monitor class instead of using Mutex.
Change "The second problem is harder to solve: a thread..." to "The second problem is that a thread..."
Replace "Short of hacking Mutex..." with "You can avoid this problem by using the Monitor class instead of Mutex."
Add the following code sample to the end of the Discussion:
require 'monitor'
$lock = Monitor.new
Thread.new do
$lock.synchronize { $lock.synchronize { puts 'I synchronized twice!' } }
end
age is inconsistent. :age=>"26" should be :age=>"27"
The file_mark and file_free functions should be defined above the file_allocate function.
Move the function bodies up so that the first code sample looks like this:
...
static void
file_mark(struct file *f)
Cut the three paragraphs starting "There are some limitations you
should be aware of, though." The first limitation doesn't apply in
current versions (though the RubyInline README still talks about it),
and the second was described in a confusing way.
Replace the two paragraphs starting "Second, if you're..." with the
following:
When it comes time to distribute your program, RubyInline lets you
package a precompiled extension as a RubyGem (see the RubyInline docs
on the @inline_package@ script for details). If you don't distribute
a precompiled extension, your users will need to compile it
themselves. This means they'll need to have the Ruby development
libraries installed, along with a compiler to actually build the
extension.
© 2016, O’Reilly Media, Inc.
(707) 827-7019
(800) 889-8969
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. | http://www.oreilly.com/catalog/errata.csp?isbn=9780596523695 | CC-MAIN-2016-07 | refinedweb | 3,391 | 67.65 |
1. Download and extract the SWiG interface:
Windows version available here:
2. Create your C# console application
3. Create your C++ console application:
Create a new C++ console application inside the existing solution you have open, and call it ‘cpp’. To do this right-click your solution icon and select Add > New Project…:
In the Application Settings select the DLL radio button, check Empty project and click Finish:
4. Create a folder in the C# project for your SWiG-generated C# files:
In this example, we’ll call it
Generated. Right-click the C# project, select Add > New Folder, and rename to Generated:
5. Create the C++ code and interface files in your C++ console project
In your C++ project add the .cpp, .h and .i files.
In this example, the cpp_file.{cpp, h, i} files.
cpp_file.h
#pragma once #ifdef CPP_EXPORTS #define CPP_API __declspec(dllexport) #else #define CPP_API __declspec(dllimport) #endif class CPP_API cpp_file { public: cpp_file(void); ~cpp_file(void); int times2(int arg); };
cpp_file.cpp
#include "cpp_file.h" cpp_file::cpp_file(void) { } cpp_file::~cpp_file(void) { } int cpp_file::times2(int arg) { return arg * 2; }
cpp_file.i
%module cpp %{ #include "cpp_file.h" %} %include <windows.i> %include "cpp_file.h"
6. Set interface file build properties
i. Click on the the
cpp_file.i and select Properties > General > Item Type as Custom Build Tool. Select Apply to create the Custom Build Tool property group.
ii. In Custom Build Tool > General > Command Line enter the command needed for SWiG generation:
swig -csharp -c++ -outdir <MyGenerateFolderLocation> cpp_file.i
In my example, this particular setting is:
c:\swigwin-2.0.9\swigwin-2.0.9\swig -csharp -c++ -outdir C:\dump\swig\swig\Generated cpp_file.i
iii. In the Outputs setting enter ‘cpp_file_wrap.cxx’.
Please note: I had previously experienced problems in performing step 7 described below, with what I think were output directory names that SWiG did not seem to like. For example, using the swig command with an output directory with whitespaces in, such as in this example:
c:\swigwin-2.0.9\swigwin-2.0.9\swig.exe -csharp -c++ -outdir “C:\Users\andy\Documents\visual studio 2010\Projects\swig\swig\Generated” cpp_file.i
Caused output errors similar to the following:
swig error : Unrecognized option studio
swig error : Unrecognized option 2010\Projects\swig\swig\Generated"
Use 'swig -help' for available options.
Using the simpler directory structure, with no whitespaces, fixed this particular problem. Also sometimes it did not seem to know where the swig executable was located, so specifying this explicitly, as in “c:\swigwin-2.0.9\swigwin-2.0.9\swig.exe” instead of just “swig” also cured this particular problem. Feel free to comment with your own experiences.
7. Compile the interface file
Right-click cpp_file.i and select ‘Compile’.
This should create four files, which will probably not yet be visible in your Visual Studio project.
Three in the C# ‘Generated folder’:
cpp.cs cpp_file.cs cppPINVOKE.cs
And one in the C++ project:
cpp_file_wrap.cxx
7. Create a Generated Files filter in the C++ console project and add the generated *.cxx file
Right-click your C++ project and select Add > New Filter. Call it ‘Generated Files’
Add the cpp_file_wrap.cxx to it: right-click the ‘Generated Files’ filter folder and select Add > Existing Item, and select the cpp_file_wrap.cxx file.
8. Add the SWiG-generated C# files to the C# project.
Right-click the ‘Generated’ folder, select Add > Existing Items, navigate to the ‘Generated’ folder and select the three generated C# files:
9. Set the project dependencies
Right-click the solution folder, and select Project Dependencies…
In the C# project’s Properties > Build tab, change the Output Path from bin\Debug to ..\Debug or whatever the relative path to the C++ Project output directory is.
The .exe and .dll need to be in the same directory.
10. Start using the C++ APIs in your C# code
A very simple example.
using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace swig3 { class Program { static void Main(string[] args) { var cpp = new cpp_file(); Console.WriteLine("Five times two = " + cpp.times2(5)); } } }
Do a clean and rebuild of the entire solution. Click ‘Yes’ if prompted to reload any files:
If you get a compiler error similar to this one:
error CS0246: The type or namespace name 'cpp_file' could not be found (are you missing a using directive or an assembly reference?)
Make sure you have actually added the three SWiG-generated C# files to the ‘Generated’ folder, as described in step 8.
11. Try it out
To verify that the interface in place, try stepping through the code with the debugger, perhaps using F11 to step inside the code
cpp.times2(5) to verify that the C++ portions of the code are getting executed as well.
Example console output testing the C++ “times2” API:
Download available
Why not download a fully self-contained Visual Studio 2010 project to interface your C# console application with APIs written in C++, as described in this post? Available for 99 cents, to help cover costs of running this site.
This Visual Studio project contains the necessary SWiG executable and libraries (version 3.0.8) and uses relative paths to access them so that no further configuration is necessary – just build and execute in Release or Debug modes.
I get this compiler errors:
Errore 2 error LNK1120: 1 esterni non risolti
Errore 1 error LNK2019: riferimento al simbolo esterno _main non risolto nella funzione ___tmainCRTStartup
I followed all the steps. What is the problem?
Thanks
Sara
I solved. I didn’t create an empty c++ project.
Thanks.
hi ,
I get this compiler errors:
Error 1 error MSB6006: “cmd.exe” exited with code 1.
What is the problem?
thanks
hande | http://www.technical-recipes.com/2013/getting-started-with-swig-interfacing-between-c-and-c-visual-studio-projects/?replytocom=5563 | CC-MAIN-2017-30 | refinedweb | 955 | 58.58 |
Ruby Array Exercises: Create a new array using first three elements of a given array of integers
Ruby Array: Exercise-27 with Solution
Write a Ruby program to create a new array using first three elements of a given array of integers. If the length of the given array is less than three return the original array.
Ruby Code:
def check_array(nums) front = [] if nums.length >= 3 front[0] = nums[0] front[1] = nums[1] front[2] = nums[2] elsif nums.length == 2 front[0] = nums[0] front[1] = nums[1] else nums.length == 1 front[0] = nums[0] end return front end print check_array([1, 3, 4, 5]),"\n" print check_array([1, 2, 3]),"\n" print check_array([1,2]),"\n" print check_array([1]),"\n"
Output:
[1, 3, 4] [1, 2, 3] [1, 2] [1]
Flowchart:
Ruby Code Editor:
Contribute your code and comments through Disqus.
Previous: Write a Ruby program to find the largest value from a given array of integers of odd length. The array length will be a least 1.
Next: Write a Ruby program to create a new array with the first element of two arrays. If lenght of any array is 0, ignore | https://www.w3resource.com/ruby-exercises/array/ruby-array-exercise-27.php | CC-MAIN-2021-21 | refinedweb | 198 | 62.48 |
GIS Library - Window alignment functions. More...
#include <stdio.h>
#include <math.h>
#include <grass/gis.h>
Go to the source code of this file.
GIS Library - Window alignment functions.
(C) 2001-2008 by the GRASS Development Team
This program is free software under the GNU General Public License (>=v2). Read the file COPYING that comes with GRASS for details.
Definition in file align_window.c.
Align two regions.
Modifies the input window to align to ref region. The resolutions in window are set to match those in ref and the window edges (north, south, east, west) are modified to align with the grid of the ref region. The window may be enlarged if necessary to achieve the alignment. The north is rounded northward, the south southward, the east eastward and the west westward. Lon-lon constraints are taken into consideration to make sure that the north doesn't go above 90 degrees (for lat/lon) or that the east does "wrap" past the west, etc.
Definition at line 41 of file align_window.c.
References G_adjust_Cell_head(), G_col_to_easting(), G_easting_to_col(), G_northing_to_row(), and G_row_to_northing(). | https://grass.osgeo.org/programming6/align__window_8c.html | CC-MAIN-2018-51 | refinedweb | 179 | 69.18 |
Raspberry Pi is a trademark of The Raspberry Pi Foundation. This magazine was created using a Raspberry Pi computer.
19
Welcome to Issue 1 9 of The MagPi magazine. This year has flown by and we are back with our Christmas issue once again! Is Santa bringing you any Raspberry Pi goodies this Christmas? If so we have plenty of great articles to help you put this clever computer straight to good use. Are you bored of having your presents delivered by reindeer? If you fancy a change, why not have your own Pi-powered quadcopter air drop them in? Andy Baker begins his series on building this flying machine covering in this issue the parts required, their function and discusses some of the coding used for lift off.... no, its not powered by Christmas spirit! If you are too busy attending Christmas parties and are missing your favourite soaps, never fear, we have you covered. Geoff has produced a great article on OpenELEC, bringing you catch up TV on your Raspberry Pi so you never have to miss an episode again! Claire Price continues the festive spirit with a fantastic article on Sonic Pi which will have your Raspberry Pi singing along with a rendition of Good King Wenceslas. The news in the UK is currently filled with stories of electricity firms putting up their prices this winter. If you want to be savvy with your heating and electricity bills, without turning off the tree lights and turning the thermostat down, why not cast your eye over Pierres article on environmental monitoring. Alternatively, to warm you up we return to Project Curacao looking at the environmental subsystem used in this remote sensing project. If thats not enough to keep you busy over the holidays, why not paint an electronic masterpiece with XLoBorg? Andy Wilson looks at scrolling an RSS feed on an LCD via GPIO plus we pay a visit to the Pi Store. Finally, find out about events in your area and PC Supplies are yet again generously offering Raspberry Pi goodies in their monthly competition. We are also pleased to be able to announce that printed copies of the magazine are now available from various retailers listed at. The MagPi will be taking a short break over Christmas and the first issue of 201 4 will be published at the start of February. Merry Christmas and best wishes for 201 4.
Contents
4 QUADCOPTER Part 1 : An introduction to building and controlling a quadcopter with the Raspberry Pi
22 PIBRUSH Painting with the XLoBorg accelerometer and magnetometer from PiBorg
TV 28 CATCH-UP Avoid missing your favourite programme by using OpenELEC to watch TV PI STORE 34 THE A look at the diverse range of applications and games
38 COMPETITION Win a Raspberry Pi Model B, breadboard, 1 6GB NOOBS SD card, GPIO Cobbler kit and accessories
MONTH'S EVENTS GUIDE 39 THIS Stevenage UK, Winchester UK, Nottingham UK, Paignton UK, Helsingborg Sweden PI AT CHRISTMAS 40 SONIC Learning to program with Sonic Pi
Andy Baker
Guest Writer
power to each propeller, and hence different amounts of lift to corners of the quadcopter, it is possible to not only get a quadcopter to take-off, hover and land but also by tilting it, move horizontally and turn corners. Each propeller has its own DC brushless motor. These motors can be wired to rotate clockwise or anti-clockwise to match the propeller connected to them. The motor has coils in three groups around the body (called the stator) and groups of magnets attached to the propellor shaft (called the rotor). To move the blades, power is applied to one group of the coils and the rotor magnets are attracted to that coil, moving round. If that coil is then turned off and the next one powered up, the rotor moves around to the next coil. Repeating this around the three coils in sequence results in the motor rototating; the faster you swap between the three powered coils the faster the motor rotates. This makes the motor suitable for digital control the direction and speed of movement of the propeller blade exactly matches the sequence and rate power pulses are applied to the coils. These motors take a lot of power to spin the propeller blades fast enough to force enough air down to make the quadcopter take-off far more power than a Raspberry Pi can provide so an Electronic Speed Controller (ESC) bridges that gap. It translates between a Pulse Width Modulation (PWM) control signal from the Raspberry Pi and converts it to three high-current signals, one for each
Parts of a quadcopter
First a quick breakdown of all the parts that make up a quadcopter. There are four propeller blades. Two of the four are designed to rotate clock-wise; the other two anticlockwise. Blades which are designed to move the same way are placed diagonally opposite on the frame. Organising the blades like this helps stop the quadcopter spinning in the air. By applying different
coil of the motors. They are the small white objects velcrod under the arms of the quadcopter. Next there are sensors attached to the breadboard on the shelf below the Raspberry Pi; these provide information to the Raspberry Pi about rocking and rolling in three dimensions from a gyroscope, plus information about acceleration forward, backwards, left, right, up and down. The sensors connect to the Raspberry Pi GPIO I 2C pins.
That just leaves the beating heart of the quadcopter itself; the Raspberry Pi. Using Python code it reads the sensors, compares them to a desired action (for example take-off, hover, land) set either in code or from a remote control, converts the difference between what the quad is doing (from the sensors) and what it should be doing (from the code or remote control) and changes the power to each of the motors individually so that the desired action and sensor outputs match.
In the circuit diagram you can see I am considering adding a beeper, so I can hear what the quadcopter thinks its doing.
Talking to Phoebe
Whether Phoebe is autonomous or remote controlled someone needs to talk to her to tell her what to do. To that end, Phoebe runs a wireless access point (WAP) so another computer can join her private network and either SSH in or provide remote control commands. You can see how I did this at.
The QUADBLADE class handles the PWM for each blade handling initialization and setting the PWM data to control the propeller blade spin speeds. The PID class is the jigsaw glue and the core of the development and testing. It is the understanding of this which makes configuring a quadcopter both exciting and scary! It is worth an article in its own right for now there is a brief overview of what they do and how at the end. There are utility functions for processing the startup command line parameters, signal handling (the panic button Ctrl-C) and some shutdown code. Last, but not least, there is the big while keep_looping: loop which checks on what it should be doing (take-off, hover, land, etc), reads the sensors, runs the PIDs, updates the PWMs and returns to the start one hundred times a second!
PID
For initial testing the WAP function isnt necessary, any household wireless network will do, but as your quadcopter comes to life you will want to be doing your testing away from animals, children and other valuables you dont want damaged (like yourself). Having a WAP means you can take the testing out into the garden or local park or field. The PID (Proportional, Integral, Differential) class is a relatively small, simple piece of code used to achieve quite a complex task. It is fed a target value and an input value. The difference between these is the error. The PID processes this error and produces an output which aims to shrink the difference between the target and input to zero. It does this repeatedly, constantly updating the output, yet without any idea of what input, output or target actually mean in its real world context as the core of a quadcopter: weight, gravity, wind strength, RC commands, location, momentum, speed and all the other factors which are critical to quadcopters. In the context of a quadcopter, target is a flight command (hover, take-off, land, move forwards), input is sensor data and output is the PWM pulse size for the motors. Phoebe has 4 PIDs running currently pitch, roll, yaw and vertical speed these are the bare minimum needed for an orderly takeoff, hover and landing. The PID's magic is that it does not contain any complex calculations connecting power, weight, blade spin rates, gravity, wind-speed, imbalanced frame, poor center of gravity or the many other
factors that perturb the perfect flight modelled by a pure mathematical equation. Instead it does this by repeated, rapid re-estimation of what the current best guess output must be based only on the target and the input. The P of PID stands for proportional each time the PID is called its output is just some factor times the error in a quadcopter context, this corrects immediate problems and is the direct approach to keeping the absolute error to zero. The I of PID stands for integral each time the PID is called the error is added to a grand total of errors to produce an output with the intent that over time, the total error remains at zero in a quadcopter context, this aims to produce long term stability by dealing with problems like imbalance in the physical frame, motor and blade power plus wind. The D of PID stands for differential each time the PID is called the difference in error since last time is used to generate the output if the error is worse than last time, the PID D output is higher. This aims to produce a predictive approach to error correction. The results of all three are added together to give an overall output and then, depending on the purpose of the PID, applied to each of the blades appropriately. It sounds like magic... and to some extent it is! Every PID has three configurable gain factors configured for it, one each for P, I and D. So in my case I have twelve different gain factors. These are magic numbers, which if too small do nothing, if too large cause chaos and if applied wrongly cause catastrophe. My next article will cover this in much more detail, both how they work and how to tune the gains. In the meantime, use the bill of materials on page 5 and get on with building your own quadcopter. The PID gains in the code Ive supplied should be a reasonable starting point for yours.
make sure shes on absolute horizontal by putting padding under her feet this is absolutely critical if you dont want her to drift in flight well fix this in another article with some more PIDs. Connect the LiPo battery. The ESCs will start beeping loudly ignore them. Wait until the Wifi dongle starts to flash that means Phoebes WAP is working. Connect via SSH / rlogin from a client such as another Raspberry Pi, iPad etc. which you have joined to Phoebes network. Use the cd command to change to the directory where you placed Phoebes code. Then enter:
sudo python . /phoebe. py - c sudo python . /phoebe. py - c - t 550 - v
calibrates the sensors to the flat surface shes on 550 sets up the blades to just under take-off speed - v runs the video camera while shes in flight.
-c -t
Flying Phoebe
At the moment it is simple but dangerous! Put Phoebe on the ground, place a flat surface across her propeller tips, put a spirit level on that surface and
ENVIRONMENTAL MONITOR
Pierre Freyermuth
Guest Writer
through a powered USB hub, or to a network mounted file system, or buffered locally and then uploaded. Writing all of the data allows a complete analysis afterwards and reduces the write access to the storage media in comparison to a rotating buffer. Recording sensor data for one year could represent hundreds of megabytes of data. If a log of data needs to be written then either a hard disk drive or network mounted file system may be needed. Raspbian is a really convenient operating system. The default image now includes the Oracle Java virtual machine, which is very efficient. The default Raspbian image also includes system drivers for the buses (I 2C, SPI, UART) available via the GPIO pins. To install Raspbian on a SD card, follow the official tutorials at. The buses available on the Raspberry Pi GPIO can be controlled using the Pi4J Java library. This library is discussed later in this article. For those unfamiliar with Java, the Cup of Java MagPi series provides a basic introduction in Issues 1 4 and 1 7.
10
Raspberry Pi I 2C bus.
The response should be 1 .7.0_40 or higher. To get the source code needed for this project follow the instructions given at. to check out . The source code must be customised according the sensors connected and the online server used. Details of the Pi4J Java library are given at. The Pi4J library provides direct access to the SPI bus and a lot of GPIO facilities.
Before the BMP085 can be used the i2c kernel module needs to be enabled. From the command line, enter:
cd /etc sudo nano modprobe.d/raspi-blacklist.conf
Look for the entry blacklist i2c-bcm2708 and add a hash # at the beginning of the line so it becomes #blacklist i2c-bcm2708. Press <Ctrl>+<X> then press <Y> and <Enter> to save and exit. Next edit the modules file. From the command line, enter:
sudo nano modules
At start-up, configure the communications for the probes and then the probes themselves.
Add i2c-dev on a new line. Press <Ctrl>+<X> then press <Y> and <Enter> to save and exit. Reboot the Raspberry Pi, open a terminal window and enter:
sudo i2cdetect -y 1
NOTE: Use 0 instead of 1 for the bus number in the above command if you have a revision 1 Raspberry Pi. The revision 1 Pi does not have any mounting holes on the PCB; newer revisions have 2 mounting holes.
The response should show 77 in the address list. More information is provided in the Adafruit tutorial at.
At start-up, load the previous recorded data from log files. Retrieve data from probes and record them to a plain text file on the SD card. Perform averaging with different time scales, in order to present historical data in a convenient way. Upload averaged data to a web server using FTP to provide access via the internet. Use a chart as a graphical user interface, to make local visualisation easy. By recording all of the data, it is then possible to change the analysis method or analyse the full data.
11
Structure
Probes connected to the Raspberry Pi can provide several values that may be independent. The chosen solution consists of an abstract class for probes which must be implemented for each sensor connected to the system. A probe implementation of this class must provide one or several DataChannel objects, which represent the data model. In terms of a model-view-controller pattern, the ProbeManager class can be seen as the controller, customising the view. The DataChannel class includes activities that are common for each type of data - it loads previous data from the log file when it is instantiated, logs new data and performs averaging. The different implementations of AbstractProbe include functionality specific to a given type of sensor. Each derived class should configure the sensor, process or convert data and then add them to the right DataChannel object.
The BMP085probe class overides the abstract method that provides access to the DataChannel objects :
@Override public DataChannel[] getChannels() { return new DataChannel[]{pressureChannel, temperatureChannel}; }
The I2CBus object from the Pi4J library is passed to the BMP085 constructor, since it can be several peripherals on the same I 2C bus. With this bus, we can configure our I 2C device. The BMP085 has the address 077.(); }
The dataReaderThread object is used to send a request for temperature and pressure information. Then the thread reads two bytes of raw temperature information and three bytes of raw pressure information.
bmp085device.write(CONTROL, READTEMPCMD); sleep(50); rawTemperature = readU16(DATA_REG); bmp085device.write(CONTROL, (byte) READPRESSURECMD); sleep(50); msb = readU8(DATA_REG); //Send read temperature command //wait the convertion time //retrieve the 2 bytes //Send read pressure command //wait the convertion time //retrieve the 3 bytes
12
lsb = readU8(DATA_REG+1); xlsb = readU8(DATA_REG+2); rawPressure = ((msb << 16) + (lsb << 8) + xlsb) >> (8-OSS); //make raw pressure integer
This raw data can be converted into units of Pascals and Degrees by following the BMP085 datasheet at and using the calibration values previously read. Java does not support native unsigned interger types. It is therefore more convenient to replace bit shifting with division and a high rate of data. To enhance the precision, a thread can take 5 data points, average them and add this new measurement every second to the dataChannel object. To add the BMP085 to the final program,_1); //Change to BUS_0 for Rev 1 boards. bmp085Probe = new BMP085probe(bus); probeManager.addProbe(bmp085Probe); } catch (IOException e) { e.printStackTrace(); } }
Now the two DataChannel temperature and air pressure values can be acquired, logged, and displayed as a chart. In the next article, additional instructions will be given to export the data and setup a web page to view the data. Until then, try putting the code together and try it out with the hardware configuration described.
13
Andrew Wilson
AndyPi
On the LCD, wire a 1 0K potentiometer (for contrast control) between VSS (1 ) and VO (3), connect K (1 6) to RW (5), and connect K (1 6) to VSS (1 ). LCD to GPIO connections as follows: 1 2 15 14 13 12 11 6 4
LCD pin
For many Raspberry Pi projects, providing visual output is important, but a standard HDMI screen is either too large or unnecessary. For example, you may have a temperature sensor and only need to display the current value, or you may want to display an internet radio station name of a "headless" Raspberry Pi. Alternatively, you could have a standalone display for your latest tweets. This tutorial explains how you can connect an inexpensive HD44780 type 1 6 character 2 line LCD (Liquid Crystal Display) to your Raspberry Pi's GPIO port and display the time and the latest news headline from the BBC RSS feed. The AndyPi website () also contains video and written tutorials on how you can use this LCD to display media information such as mp3 titles using RaspBMC or RaspyFi.
Hardware set-up
VSS VDD A D7 D6 D5 D4 E RS
GND +5v 18 14 23 24 25 08 07
Software set-up
Starting with the latest version of Raspbian, make sure your system is up to date, and install some python related tools:
sudo apt- get update sudo apt- get i nstal l - y python- dev \ python- setuptool s python- pi p
A brief description of pin connections is described here but a full list of parts and detailed construction information is available at. You can buy the parts individually and make your own, or you can purchase a complete, pre-soldered kit of parts from AndyPi for 1 2.99 or + P&P (EU only).
We need to install some python modules; wiringpi for GPIO control; feedparser to read the RSS feed, and the processing module to allow us to use threading (explained later):
14
This script is useful for displaying a short static message, but it's not much use for an RSS feed as it only displays the first 1 6 characters that fit. Instead we can use the function called scroll_clock, which displays the time on one line, and scrolls the full text along the the second. However, in order to update the LCD by moving one character along at a time, the function loops infinitely - and therefore no code after this is executed. To get around this, we can run this function in a thread (a simultaneously running process) so we can then continue to give further commands (in this case to check for the latest news updates). Here we set up a thread process using scroll_clock, start the thread, wait 60 seconds, update "msg" to the latest RSS feed, and stop the thread. The while loop then repeats to continue scrolling the text. In general, it's bad practice to terminate a thread, but in this case we know that scroll_clock is an infinite loop that will never complete.
whi l e True: p = Process(target=l cd. scrol l _cl ock, args=(1, " c" , 0. 3, msg)) p. start() ti me. sl eep(60. 0) msg=feedparser. parse(' http: //feeds. bbci . co. uk/news/rss. xml ?edi ti on=uk' ). entri es[0] . ti tl e p. termi nate()
Python script
The AndyPi LCD class has a number of functions to enable simple control of the LCD. We'll make use of a few of these. Create a new script (in the same folder as AndyPi_LCD.py) as follows:
#! /usr/bi n/python from AndyPi _LCD i mport AndyPi _LCD from processi ng i mport Process i mport ti me i mport feedparser l cd=AndyPi _LCD() l cd. l cd_i ni t() l cd. l ed(512) msg = feedparser. parse( ' http: //feeds. bbci . co. uk/news/rss. xml ?edi t i on=uk' ). entri es[0] . ti tl e
After importing the required modules, we set the variable "lcd" as the AndyPi_LCD() class, initialise the LCD, and set the brightness of the backlight (0=off, 51 2=full brightness, using PWM). Then we can use the feedparser module to set the string variable "msg" to the first title of the BBC world news feed. To display text, you can use the AndyPi_LCD function static_text(). Here we display text on line 1 and 2, and clear it after 30 seconds:
l cd. stati c_text(1, c, Worl d News: ) l cd. stati c_text(2, l , msg) ti me. sl eep(30) l cd. cl s()
The scroll_clock function takes four arguments. Firstly, choose either 1 or 2 to determine which line the clock is placed on. Secondly, choose l, r or c to set the clock alignment. Thirdly, specify the scrolling speed, and the final argument takes any string of characters - here the variable msg (which contains RSS feed text). RSS feeds of many different topics are widely available on the internet, but there are many other things you could use this display for too this script is just the start for your own experiments!
Get full details of all functions of the AndyPi_LCD class and buy the kit from
15
PROJECT CURACAO
John Shovic
Guest Writer
A, with the exception of the AM231 5 outdoor temperature and humidity sensor, which is connected to the Arduino based battery watchdog for reasons given on the next page. A small computer fan under Raspberry Pi control is also connected to provide airflow through the box when inbox temperatures get too high or the indoor humidity gets too high.
System description
Project Curacao consists of four subsystems. A Raspberry Pi Model A is the brains and the overall controller. The Power Subsystem was described in part 1 . In Part 2 we will describe the Environmental Sensor Subsystem.
16
What to measure?
We want to measure the temperature, humidity and local light levels both inside and outside of the containing box to see what is happening in the local environment. This information will be placed in a MySQL database for later analysis.
is
also
on
17
PHYSICAL COMPUTING
Brought to you by ModMyPi
Jacob Marsh
ModMyPi
power our LED. Add this to our GPIO.setup section (line 4), below the input pin setup line:
GPIO. setup(18, GPIO. OUT)
Once GPIO P1 8 [Pin 1 2] has been set as an output, we can turn the LED on with the command GPIO.output(1 8, True). This triggers the pin to high (3.3V). Since our LED is wired directly to this output pin, it sends a current through the LED tha turns it on. The pin can also be triggered low (0V) to turn the LED off, by the command GPIO.output(1 8, False). Now we dont just want our LED to turn on and off, otherwise we would have simply wired it to the button and a power supply. We want the LED to do something interesting via our Raspberry Pi and Python code. For example, let us make it flash by turning it on and off multiple times with a single press of the button! In order to turn the LED on and off multiple times we are going to use a for loop. We want the loop to be triggered when the button has been pressed. Therefore, it needs to be inserted within the if condition 'If input_value == False:', that we created in our original program. Add the following below the line 'print(''Who pressed my button!)' (line 9), making sure the indentation is the same:
for x i n range(0, 3):
20 18
Any code below this function will be repeated three times. Here the loop will run from 0 to 2, therefore running 3 times. Now we will add some code in the loop, such that the LED flashes on and off:
GPIO. output(18, True) ti me. sl eep(1) GPIO. output(18, Fal se) ti me. sl eep(1)
GPIO port. Double check the LED is wired the right way round. If the program still fails, double check each line of the program, remembering that Python is case-sensitive and correct indentation is needed. If is everything is working as expected, you can start playing around a bit with some of the variables. Try adjusting the speed the LED flashes by changing the value given to the time.sleep() function. You can also change the number of times that the LED flashes by altering the number times that the for loop is repeated. For example if you wanted the LED to flash 30 times, change the loop to: for x in range(0, 30). Have a go playing around with both these variables and see what happens!
The LED is triggered on with the command GPIO.output(1 8, True). However, since we do not want to immediately turn it back off, we use the function time.sleep(1 ) to sleep for one second. Then the LED is triggered off with the GPIO.output(1 8,False) command. We use the time.sleep(1 ) function again to wait before the LED is turned back on again. The completed program should be of the form:
i mport RPi . GPIO as GPIO i mport ti me GPIO. setmode(GPIO. BCM) GPIO. setup(17, GPIO. IN) GPIO. setup(18, GPIO. OUT) whi l e True: i nput_val ue = GPIO. i nput(17) i f i nput_val ue == Fal se: pri nt(" Who pressed my button?" ) for x i n range(0, 3): GPIO. output(18, True) ti me. sl eep(1) GPIO. output(18, Fal se) ti me. sl eep(1) whi l e i nput_val ue == Fal se: i nput_val ue = GPIO. i nput(17)
Save the file and open a new terminal window. Then type the following command:
sudo python button_l ed. py
This time when we press the button a message will appear on the screen and the LED should also flash on and off three times! To exit the program script, simply type CTRL+C on the keyboard to terminate it. If it hasn't worked do not worry. Do the same checks we did before. First, check the circuit is connected correctly on the breadboard. Then check that the jumper wires are connected to the correct pins on the
19
# Pl ace any vari abl e defi ni ti ons and # GPIO set- ups here try: # Pl ace your mai n bl ock of code or # l oop here except KeyboardInterrupt: GPIO. cl eanup() # Program wi l l end and GPIO ports # cl eaned when you hi t CTRL+C fi nal l y: GPIO. cl eanup()
Note that Python will ignore any text placed after hash tags (#) within a script. You may come across this a lot within Python, since it is a good way of annotating programs with notes. After we have imported Python modules, setup our GPIO pins, we need to place the main block of our code within the try: condition. This part will run as usual, except when a keyboard interruption occurs (CTRL+C). If an interruption occurs the GPIO ports will be reset when the program exits. The finally: condition is included such that if our program is terminated by accident, if there is an error without using our defined keyboard function, then the GPIO ports will be cleaned before exit.
Notice that only the loop part of the program is within the try: condition. All our imports and GPIO set-ups are left at the top of the script. It is also important to make sure that all of your indentations are correct! Save the file. Then run the program as before in a terminal window terminal:
sudo python button_cl eanup. py
The first time you run the file, you may see the warning message appear since the GPIO ports have not been reset yet. Exit the program with a keyboard interruption (CTRL+X). Then run the program again and hopefully this time no warning messages will appear! This extra code may seem like a waste of time because the program still runs fine without it! However, when we are programming, we always want to try and be in control of everything that is going on. It is good practice to add this code, to reset the GPIO when the progam is terminated.
Open button_led.py in IDLE3 and save it as button_cleanup.py. Now we can add the code previously described into our script. The finished program should
All breakout boards and accessories used in this tutorial are available for worldwide shipping from the ModMyPi webshop at
20
ELECTRONIC ART
Using an accelerometer
Guest Writers
This article describes the PiBrush, a simple on-screen painting system that uses the XLoBorg motion and direction sensor add-on board from PiBorg. The XLoBorg adds an accelerometer and a magnetometer (compass) to your Pi and makes all sorts of motion-based interaction possible like a Kinect, but free and open. The header graphic for this article was created with the PiBrush. The PiBrush is an interactive art and technology exhibit (a rather grand name for something so small) that simulates flicking paint off the end of a paintbrush onto canvas as Jackson Pollock famously did in the 1 940s and 50s. The setup includes two Raspberry Pis (one Model B and one Model A), one XLoBorg, a battery pack, a display and two Wi-Fi dongles. The Model A is held by the user and waved around. It collects acceleration data with the XLoBorg. These data are then transmitted via Wi-Fi to the Model B, which processes the data collected into paint droplets and displays them on the screen. Functionally it looks like this:
22
accel_server.py
The server script is the meat of the programming as it handles the physics simulation and display. First, as we're dealing with sensor input we want to use something called a moving average to store accelerometer readings, which is initialised like this:
# l ength of movi ng average array AL = 20 # accel erometer storage for movi ng average AXa = numpy. zeros((AL, 1)) AYa = numpy. zeros((AL, 1)) AZa = numpy. ones((AL, 1)) # array i ndex for accel erometer data Ai = 0
We may improve on the elastic band in future models. The Model A (client) top and the Model B (server) bottom, where the red button on the server saves the picture and starts a new picture.
A moving average is the average of, in this case, the last 20 readings. Some sort of filtering is almost always necessary when dealing with the sort of analog input data that comes from an accelerometer to prevent sudden jumps in the readings from having an undue impact on the processing. A moving average is a convenient and easy way to do it. We simply input the current reading at index Ai and then increment Ai until it exceeds AL, then wrap it back around:
AXa[Ai ] = fl oat(a[0] ) AYa[Ai ] = fl oat(a[1] ) AZa[Ai ] = fl oat(a[2] ) Ai = Ai + 1 i f Ai == AL: Ai = 0
The Code
The hardware for this project is fairly straightforward. The XLoBorg plugs directly into the GPIO pins as a block on the client, and a push button simply connects a ground and GPIO pin on the server. The software is where the challenge lies: intermediate to advanced code is needed, using Python and some basic physics. The code is available on GitHub at: The core is in accel_client.py which runs on the Model A and accel_server.py which runs on the Model B. The former reads from the sensor and sends data across the network; the latter processes the data and displays the
This is an array of the acclerometer data, as read over the network. I have used NumPy, because it executes more quickly than standard Python lists and there are convenient functions to make things simpler like .ones() to initialise all of AZa to a value of one we assume a starting 1 G of gravity downwards.
Functions
If you don't know, a function is a bit of code written in such a way that it can be executed from different places in the main program. Each call you can pass different arguments (input parameters) that will be operated on and a new
23
result returned. There are two quite important functions used in the code that have to do with polar coordinates. If you know X, Y, Z, those are Cartesian coordinates.
def pol ar(X, Y, Z): x = numpy. l i nal g. norm([X, Y, Z] ) i f (x > 0): y = -math. atan2(Z, X) z = math. asi n(Y / x) el se: y = 0 z = 0 return (x, y, z)
The first thing to be done each loop is to move time forwards. Ideally execution time for each loop is identical and dt (the timestep) is constant. However, this isn't the case, so by knowing how long it's been since the last loop, we can check how far to move things, i.e., distance = velocity * time.
# movi ng averages for AX = numpy. sum(AXa) / AY = numpy. sum(AYa) / AZ = numpy. sum(AZa) / accel erati on AL AL AL
Here polar takes a Cartesian X, Y, and Z coordinate and returns the equivalent polar coordinates, where x is R (the radius to the point from the origin), and y is A and z is B. The latter are the angular rotation about the Z and Y axis. Because the Model A may be rotated relative to the screen, we need a convenient mechanism for rotating the acceleration vectors recorded to ones we can use on the screen. In order to do that though we need to subtract the Earth's gravitational field. This is what the polar coordinates are used for. It's an extension of the regular subtraction of vectors. This function was based on this forum thread. def cartesian(X, A, B):
x = 0 # don' t bother - i sn' t used y = X * math. si n(B) * math. si n(A) z = X * math. cos(B) return (x, y, z)
# combi ned accel erati on for # worki ng out resti ng gravi ty A = math. fabs(numpy. l i nal g. norm([AX, AY, AZ] ) - 1)
After reading in the accelerometer data and putting it in the AXa, etc., arrays, we then need to take the average of that array. .sum() adds up the values in an array. Then to perform an average, we divide by the number of elements AL. Therefore, AX, AY and AZ contain the moving average. The total combined acceleration A is worked out by calculating the Euclidean distance from 0 to the acceleration vector position using linalg.norm(). At rest this should work out to just about 1 (remember we're working in acceleration in G(ravities), which is why we subtract 1 . We then use .fabs() so that we always have a positive result which indicates the difference between acceleration due to gravity and the experienced acceleration. At rest this number should be very small.
# i n a sl ow moment store most recent # di recti on of the gravi tati onal fi el d i f A < 0. 02 and (l ast_ti me - l ast_G) > 0. 12: GX = AX GY = AY GZ = AZ (PGR, PGA, PGB) = pol ar(GX, GY, GZ) l ast_G = l ast_ti me # rotate to screen coordi nates # and subtract gravi ty (PAR, PAA, PAB) = pol ar(AX, AY, AZ) (GAX, GAY, GAZ) = cartesi an(PAR, PAA - PGA + PSGA, PAB - PGB + PSGB) GAZ = GAZ - PGR
Here cartesian, as you might suspect, does the opposite of polar, taking the distance X and rotations A and B and turns them back into Cartesian coordinates. As this code is only used as part of getting coordinates ready for the screen, x, as the coordinate into the screen, is permanently set to 0. This is an optimization to help the code run better on the Pi. The code here is based on this explanation of Cartesian and polar coordinates.
Main program
The main program consists of an infinitely repeating loop that reads the accelerometer data over the network, applies the earlier moving average, and then processes it.
# move ti me forward dt = ti me. ti me() - l ast_ti me l ast_ti me = ti me. ti me()
24
acceleration, we can act on it. I've pointed out how in order to know which way the Model A is moving, we need to know where gravity is first. We can't know it exactly, but we can estimate it. Since A is the acceleration the Model A is experiencing excluding gravity, it's very low when the Model A isn't moving at all. This means that the only acceleration being experienced is due to gravity. Therefore, we can take the estimate and turn it into polar coordinates. For every loop we need to actually do the gravity subtraction and rotation for the screen. We turn the current acceleration into polar coordinates. Then on the next line we need to turn them back into cartesian coordinates, while subtracting the estimated rotation of the gravitational field. The GAZ line after this subtracts gravity from its absolute direction.
velocity, e.g. the speed at which the needle of the speedometer in the car climbs. So now imagine accelerating at 1 kph per second, after 1 0 seconds you'll be going 1 0 kph faster. After this acceleration, instead of 1 00 kph you are now going at 1 1 0 kph. To get the distance, you have to to integrate twice. (Fun fact: if you kept up that acceleration, after an hour you'd be going 3,700 kph and would have traveled 36,370 km. Or almost around the Earth.) We increment the brush velocity by the acceleration, factored by the timestep to keep the animation smooth. I have also added a factor of 1 70 to scale up the acceleration, so it displays nicely on the screen. (This means that one pixel approximately represents 1 70 metres.) The next integration increments the brush position by adding on the current velocity, also multiplied by the timestep and scaled, this time by 1 20. (These values just work, but are physically nonsense.)
# add spl otches. . . . hi gh vel oci ty bi g # spl otches far apart, l ow smal l cl ose i f P > 0: V = numpy. l i nal g. norm([VX, VY] ) S = S + V d = A * random. randi nt(3, 5) * 25 + V
Paintbrush physics
Perhaps the most interesting bit of what is going on within every loop it the paint brush physics. This is the code that controls what happens on the screen. Everything up to this point has been to define GAY and GAZ, two variables indicating horizontal and vertical acceleration relative to the screen. Now we can interpret this acceleration and make something happen.
# accel erati on detecti on for pai nt strokes A = numpy. l i nal g. norm([GAY, GAZ] )
After this, A is the total acceleration of the Model A, with respect to the screen, ignoring gravity. We can use this number to detect roughly if we are speeding up or slowing down. What happens after that?
i f fast == 1: # accel erate the pai nt brush VX = VX - GAY * dt * 170 VY = VY - GAZ * dt * 170 BX = BX + VX * dt * 120 BY = BY + VY * dt * 120
Now that the paintbrush is moving, here comes the last and most important bit of code: making paint droplets. This bit of code is only run while the paintbrush is moving and only if there is paint on the brush. Vaguely, one expects that the further the brush has been swung, more paint should come off. The faster the brush is moving should also cause more paint to come off. V is calculated as the total velocity and S is a running displacement. d is calculated as a rough estimtate of paint droplet spacing. Paint droplets fly off due to two factors: 1 ) Fluid mechanics. Roughly, I'm speaking of the affect of what happens when you move a glass of water too quickly. 2) Air resistance. The force you feel acting on your hand when you stick it out the car window on a drive on a nice summer's day. Both of these factors are rather complex subjects. Therefore, I have lumped them together as the produce similar results paint flying off the brush. d is made up of
This bit of code is actually responsible for moving the brush, and only when we think it's moving. To get position from acceleration, we have to integrate twice. Imagine you are in a car travelling down the motorway at 1 00 kph. In one hour you will have travelled 1 00 km. Easy? That is one simple integration, going from velocity to displacement. Acceleration is the rate of change in
25
A (our acceleration) times a random factor which is the fluid dynamics bit, and V is added for the air resistance bit. The random factor makes things a bit more interesting, perhaps taking into account things like globs of paint or hairs in the brush getting tangled. 25 is the scaling factor this time. (This applies to the acceleration term only as velocity was already scaled before.)
i f S > d: S = S - d P = P - pow(A*4, 2) * math. pi pygame. draw. ci rcl e(screen, (COLR, COLG, COLB), (i nt(BX), i nt(BY)), i nt(A*45)) draw = 1
If we've travelled further than the expected paint droplet seperation according to our approximate simulation, we need to draw a paint droplet! We calculate the amount of paint that was in the droplet arbitrarily, as the acceleration times four, squared, times (pi). (This is the area of a circle formula.) This is subtracted from P, the paint on the paintbrush variable. The paint droplet actually drawn on the screen is done using cricles, with the Pygame function draw.circle(). The drawing on the screen takes place with a random colour, at the paintbrush position (BX, BY). A*45 is the paint droplet radius, where 45 is another scaling factor.
Mobile power for the Raspberry Pi - easy on the one hand, hard on the other: it is easy to plug in a 5V battery pack but when it runs out your Raspberry Pi gets a power cut that might well corrupt the SD cards. Over the last year we've been designing what we hope is the perfect mobile power solution for the Raspberry Pi, which we're calling MoPi, and we've just taken delivery of the second generation prototype. I think it does pretty much everything you could want for your Raspberry Pi on the go: it will accept multiple sources, including standard AA batteries, or your car cigarette lighter, or an old laptop power supply or etc.. it will let you do hot-swap power replacement without stopping work it will shutdown your Raspberry Pi cleanly if the battery charge level gets too low, and it has a neat little power switch on the top to save you logging in to shutdown at other times, it behaves like a uninterrupltible power supply (UPS) when the Raspberry Pi is plugged into a mains supply and it even fits in the Pibow (and other well-known Raspberry Pi cases)
Here's the circuit board, with an eight-pack of AAs (that will get you around 9 hours of mobile time for a Model B Raspberry Pi. More details of the project can be found at:
26
CATCH-UP TV
Geoff Harmer
Guest Writer day from friends or from TV reviews in newspapers or on the web. I am going to show you how to install catch-up TV services using XBMC and the OpenELEC distribution on a Raspberry Pi to work with your HDTV. It takes about 60 minutes to set up and requires no use of Linux commands either!
Permanent components
Do you want to watch TV programmes using your Raspberry Pi? No need to buy a smart TV to get your HDTV to connect to catch-up TV services such as BBC iPlayer and ITV Player in the UK, or CTV and Space and Bravo in Canada, or RTE1 and RTE2 in Ireland or Network1 0 in Australia. No need to have your Raspberry Pi keyboard and mouse and their cables spoiling the look of your HDTV. Use a Raspberry Pi with a Wifi USB dongle running OpenELEC - a compact, high performance Linux distribution that has XBMC media centre software installed - and then hide the Raspberry Pi behind the HDTV. Use a remote control app on your Android, iOS or Blackberry smartphone to control XBMC via your home Wifi so you can select and watch catch-up TV programmes on your HDTV. No Linux commands are needed to install or use OpenELEC with the Raspberry Pi. My style of television watching over the last few years has switched from live viewing and recording, to predominantly using catch-up TV. Catch-up TV is particularly helpful when a missed programme gets a good review the next
Raspberry Pi Model A or Model B. Good quality 5V power adapter for the Raspberry Pi (minimum 700mA or greater). HDMI cable to connect the Raspberry Pi to your HDTV. Wifi nano USB dongle if you do not have easy access to a wired internet connection (the Wifi dongle from uses builtin drivers and does not require a powered USB hub). 4GB+ SD card loaded with the latest NOOBS software from. Android, iOS or Blackberry 1 0 device with a downloaded app to remotely control XBMC, e.g. XRMT for Blackberry 1 0 OS, or Official XMBC Remote for Android, or Official XMBC Remote for iOS.
28
The Raspberry Pi USB ports may not be able to supply enough current for certain Wifi dongles. This can result in video stuttering. Assuming you have a good connection, the solution is to use a powered USB hub with a minimum 2A power supply. Some hubs will also back power the Raspberry Pi so you only need the one power supply. Check for a list of known good hubs.
Note:
setup wizard. Here you choose your region, provide a friendly network name then choose from a list of available wireless connections. You will need to know your wireless connection name and security passphrase. By default your network IP address and DNS servers will be automatically configured. However you may want to change the DNS servers to watch overseas programming using services like, or you may want to change the IP address to use a static IP. If you want to change the network settings later, from the main screen select SYSTEM, OpenELEC and then click on Connecti ons . Click on your connection and choose Edi t from the pop-up menu. A static IP address tends to be more reliable because it remains fixed and so the remote app on your smartphone will always connect. The IP address created using DHCP can vary depending on what other devices are connected to the broadband router.
Installation instructions
Copy the latest NOOBS distribution onto a 4GB+ SD card. When finshed insert the SD card into the Raspberry Pi, plug in the Wifi USB dongle and use a HDMI cable to connect the Raspberry Pi to your HDTV. Plug in the power adaptor and boot up the Raspberry Pi. The first time you start NOOBS you will be asked to choose what operating system to install. Choose OpenELEC . This will take a few minutes to install, after which you can reboot your Raspberry Pi.
Choose to Edi t your connection, as described above and click on IPv4. You will see the IP Address Method is currently set to DHCP . Click on this and change it to Manual . Now click on IP Address and enter an unused address within the range permitted by your broadband router e.g. 1 92.1 68.1 .50. You can leave the Subnet Mask (e.g. 255.255.255.0) and Defaul t Gateway (e.g. 1 92.1 68.1 .254) at their default settings. To change the DNS settings for your connection choose DNS servers . You will see options to enter IP addresses for up to three nameservers. By default these will already have the IP addresses of the DNS servers from your internet provider, but you can choose to use others. For example OpenDNS provides enhanced security using its own free to use DNS server IP addresses at. [Ed: I
Note: If you are using a Raspberry Pi Model A then for this section you will temporarily need to use a powered USB hub so you can connect both the Wifi dongle and a mouse. After your Raspberry Pi has started, XBMC will present its main screen known as Confluence (blue bubbles background) followed by an initial
29
changed these to the Unblock-Us DNS servers so, as an ex-pat, I can still watch BBC iPlayer content in Canada].
At this point it is a good idea to click the Power button on the main screen and reboot the Raspberry Pi. When OpenELEC restarts you should have a working network connection. 1 . Is the display too large for your screen or do you see a black border? To change the screen size select SYSTEM then Setti ngs and click on Appearance . In the Ski n section, change Zoom (up and down keys) to set an appropriate reduction. 2. Stop RSS feeds across the bottom of the Confluence screen. From the Confluence screen select SYSTEM then Setti ngs and click on Appearance . In the Ski n section click on Show RSS news feeds to turn it off. 3. To allow remote wireless control of XBMC select SYSTEM then Setti ngs and click on Servi ces . In the Webserver section click on Al l ow control of XBMC vi a HTTP to turn this option on. Note the default username is xbmc with no password. You can optionally change these. In the Remote control section click on
Al l ow programs control XBMC . on other systems to
Add-ons need to be added to XBMC in order to use catch-up TV. Here are some examples that have been tested to work. UK: BBC iPlayer downloads/list ITV player downloads/list Ireland: files/plugin.video.irishtv-2.0.1 1 .zip Canada: Australia: files/plugin.video.catchuptv.au.ten-0.4.0.zip The Canadian add-in is particularly good as it covers 20 stations, but just like the BBC and ITV add-ins, it is region locked. Fortunately Australia's Network1 0 and Ireland's RTE stations appear to be freely available to all. Using your PC, copy the latest zip files onto a USB memory stick without unzipping them. Now plug the USB memory stick into the USB port of your Raspberry Pi and power-up the Raspberry Pi. To install these add-ons, from the main menu select VIDEOS then Add-ons and click on Get More . At the top of the list click on . . (i.e. two dots). At the top of the next list again click on . . (i.e. two dots). Now click on Instal l from zi p fi l e . A new window will pop-up. Select your USB memory stick. If it is not visible remove it and insert it again. Navigate to the first add-on you wish to install and click it. It gets installed at this point within about 30-40 seconds but it is not obvious. You will be returned to the Instal l from zi p fi l e menu and you may momentarily observe (bottom
4. To ensure OpenELEC has your home workgroup from the main screen select SYSTEM then Setti ngs and click on Servi ces . In the SMB cl i ent section the Workgroup option is set to WORKGROUP by default. If your home network has its own workgroup name then you must change WORKGROUP to your home workgroup name. You can find out your home workgroup name from your Windows PC by opening the Control Panel , open the System and Securi ty category then click on System . The workgroup is shown near the bottom.
30
right of the screen) that the new add-on has been installed. Check by returning to VIDEOS and selecting Add-ons and you should see the name of the add-on you just installed. If it is absent, wait a few minutes for it to appear and if still absent reboot your Raspberry Pi. Repeat for each add-on, one at a time.
connected to the HDTV. The Belkin product has a button on it for selecting which of the 2 inputs to currently use.
XRMTconnection issue
Download and install the Official XBMC Remote app to your Android or iOS device, or install XRMT for your Blackberry 1 0 OS device. As an example, here is how to configure XRMT on your Blackberry 1 0 OS device. Once installed open the app and click the three dots icon (bottom right) and select Add Server . For Server name enter openel ec (lower case). For Server IP address enter the address that you set up for the Raspberry Pi in step 2 on page 23. Leave the Port code as 9090 then click the icon at bottom left. Now scroll down from the top of the app, click on Setti ngs and change the Auto connect by clicking on openel ec below it. Close the settings screen. The app is now ready to use. Once you have installed an XBMC remote app on your smartphone you are ready to control XBMC without a mouse or keyboard. Simply use the remote control app to navigate the XMBC Confluence screen, select VIDEOS and then select Add-ons . Your add-ons will be displayed. Simply click on an add-on such as iPlayer to run it. Enjoy the catch-up TV programmes.
Interesting extra facts and tips
If you have been using a mouse to control XBMC, you may find that your Blackberry XRMT won't connect. Unplug the mouse and reboot the Raspberry Pi and then XRMT should connect. Do you want to access the OpenELEC operating system with Linux commands from your Windows PC? You can do this using a program called PuTTY. PuTTY uses the SSH protocol to connect to the Raspberry Pi. Download the latest version of PuTTY from. greenend.org.uk/~sgtatham/putty/download.html. You also need to set OpenELEC to allow a SSH connection. From the XBMC main screen select SYSTEM then OpenEl ec and click on Servi ces . Select the SSH option and click to enable it. Reboot the Raspberry Pi. Once PuTTY is installed on your Windows PC simply run the file putty. exe . After the PuTTY screen has opened enter the IP address of your Raspberry Pi. Leave the Port as 22 and choose SSH as the connection type. Click on Open then enter the username and password. For OpenELEC the default username is root and the default password is openel ec .
HDMI connections
Are all the HDMI sockets on your TV in use with other devices such as a PVR, bluray player and games console? Not enough HDMI sockets on your TV for your Raspberry Pi? The solution is to use a 2-in/1 -out HDMI switch (e.g. Belkin). Thus both the Raspberry Pi and the other device are
31
THE PI STORE
Ian McAlpine
MagPi Writer
Digital downloads
Apple was one of the first companies to encourage the mass adoption of digital downloads with its iTunes store for music and movies. This was later followed with the App store, the iBooks store and finally iTunes U for students. The App store was the only way you could get applications for Apple mobile devices and later it was extended to Apple's MacBook and iMac computers. Amazon pioneered the digital download of books and today most major bookstores have digital download affiliations. Of course the instant gratification advantages of digital downloads were not lost on game console manufacturers with Microsoft, Sony and Nintendo all introducing digital download stores for their respective game consoles. Digital app stores are less prevalent for Windows based PCs with notable exceptions being game centric Steam and Origin.
34
and Media. Before we explore each of these areas here are some navigation tips that will help you find the content you want. In the Explore tab of the Pi Store there is an option to filter the content by status and also an option to specify the sort order. If you want to explore content which is still work-in-progress then change the status filter to In Progress. You can also set it to Show All to see all Pi Store content. There are many sort options, but the most useful are Most Played, Newest First and Top Rated. Note that currently the Pi Store does not store these settings so the next time you open the Pi Store the status will likely be reset to Finished and the sort order will be reset to Price - Highest. Several programs in the Pi Store do not run on LXDE and require a reboot before they start. Although there is a warning on the application page in the Pi Store, there is no final "Are you sure?" after you click on Launch. So make sure you have everything saved first.
Games
Surprisingly this is not the largest category in the Pi Store, but with 30 titles at the time of writing there is something for all gamers here. In Dr Bulbaceous : Puzzle Solver (US$2.99/ 1 .99) you drop different coloured objects and when 3 or more are touching they disappear. The goal is to clear the screen before the objects reach the top. Although I found the game too easy, kids will enjoy it. It is colourful and well implemented. Tip: To swap your current object for something more potent, press the Enter key. It looks like the mouse should work, but it didn't for me. Alas Zombie X Pro ($1 .60/1 .00) refused to run correctly for me so I cannot comment on it. There is a demo option you can try to see if it works for you.
DmazeD is a lot of fun. You search a maze for the key to the exit, but it is dark so you have a limited field of vision. Listen out for the monster that is also in the maze with you!
Another game I found myself thoroughly enjoying is The Little Crane That Could . Through dexterous use of your crane and its many movements you complete a series of increasingly difficult tasks. Many folks of a certain age (like myself!) use their Raspberry Pi to relive the golden age of home computing and there is no shortage of
35
games inspired from that decade. In the Pi Store these include Abandoned Farmhouse Adventure and King's Treasure - classic text based adventures, Chocolate Doom - play all versions of Doom, NXEngine - a clone of the "Cave Story" platformer, Star Flite - a Star Trek retro game plus an emulator for the Atari800. Other excellent games include Freeciv - an empire building strategy game, Sqrxz 2 and Sqrxz 3 - platformers, the impressive Open Arena - first person shooter, OpenTTD - a simulation game based on Transport Tycoon Deluxe and Iridium Rising - a 3D space game but currently suffering a "Servers Full" issue. The first commercial game on the Pi Store was Storm In A Teacup. It remains one of my favourite Raspberry Pi games, but unfortunately it was recently removed from the Pi Store. Hopefully it will be back soon.
smart phone or tablet device to control OMX Player running on the Raspberry Pi. Simply enter the given URL in your web browser, specify your media folder and then have full control of playlists, volume and all the usual media controls.
based PC emulator and is a great way to run 90's era DOS based games. Another similar program on the Pi Store is RPix86.
Apps
configuration manager for hardware tweaking. It provides a GUI for overclocking every aspect of the Raspberry Pi, but for me its real strength is with its video configuration. Visually get the perfect monitor setup quickly and easily. No more black bars! Multiple presets can be saved so if you take your Raspberry Pi to different locations (e.g. school, Raspberry Jam, hackspace) you can get the best display configuration quickly without changing your 'home' settings.
By far the largest category in the Pi Store, there is an incredible selection of titles ranging from utilities to emulators to media programs to 'heavy-weight' applications. Examples of the latter are Asterisk - a PBX system and the brilliant LibreOffice - a Microsoft Office compatible and equivalent office suite. Four of the six paid applications can be found here. OMX Player Remote GUI (US$3.00/ 1 .80) provides a web UI that can be used on a
I was not familiar with XIX Music Player, but it is pleasantly surprising and is definitely worth downloading. I am using it to listen to internet radio (Absolute Radio from the UK, Hit FM from Belgium, MacJingle Heartbeat from Austria...) while I layout this article using Scribus and all running smoothly on the Raspberry Pi. Staying with the music theme there are two music creation applications in the Pi Store. Schism Tracker is both a MOD file player and a music composition tool. Basic instructions explaining its operation are in issues 2, 1 2 and 1 3 of The MagPi. With PXDRUM you will have a lot of fun creating cool beats. You can simultaneously play eight
36
different percussion instruments and change their sound by loading different drum kits. Start with one of the demo songs and get your groove on!
Pack for your Python and Scratch games... and of course all free to use.
There is also the very excellent Pi3D , a Python module that simplifies developing 3D worlds and provides access to the power of the Raspberry Pi GPU. In addition to both 3D and 2D rendering, Pi3D can load textures, models, create fractal landscapes and offers vertex and fragment shaders. There are also 1 9 impressive demos.
Tutorials
Media
The Tutorials category contains several very useful guides including the Official Raspberry Pi Educational User Manual . Other guides include Python-SQL-Database - a tutorial on how to create database applications using SQL and Python plus Raspberry Invaders - a Python programming course with 1 7 lessons where you learn how to create a space invaders style game using Python.
Dev Tools
Last, but certainly not least, is the Media category. This is where you will find every issue of The MagPi - the first, and in my biased opinion the best, magazine for the Raspberry Pi... and of course it is free! There will typically be a short lag between The MagPi being released at the start of each month and it appearing in the Pi Store. That is because the Pi Store version is the same version that we use for creating the printed version of the magazine so we are doubly thorough with the quality control.
Conclusion
With over 2 million Raspberry Pis sold worldwide and the Pi Store being located on the desktop of Raspbian, it has the potential to be an incredible resource and the "go to" place for Raspberry Pi content. It is easy to upload so why not share your original programs or tutorials with others? The Pi Store is there for your benefit and the benefit of the community, so go use it.
This is the smallest category in the Pi Store yet it contains useful collections such as HUD Sprite Pack, Effects Sprite Pack and Audio Effects
37
DECEMBER COMPETITION
Once again The MagPi and PC Supplies Limited are proud to announce yet another chance to win some fantastic Raspberry Pi goodies!
This month there is one MASSIVE prize! The winner will receive a new Raspberry Pi 51 2MB Model B, an exclusive Whiteberry PCSL case, 1 A PSU, HDMI cable, 1 6GB NOOBS memory card, GPIO Cobbler kit, breadboard and jumper wires! For a chance to take part in this month's competition visit: Closing date is 20th December 201 3. Winners will be notified in the next issue and by email. Good luck! To see the large range of PCSL brand Raspberry Pi accessories visit
November's Winner!
The winner of a new 51 2MB Raspberry Pi Model B plus an exclusive Whiteberry PCSL case, 1 A PSU, HDMI cable, 1 6GB NOOBS memory card, GPIO Cobbler kit, breadboard and jumper wires is Lena Pellow (Cardiff, UK). Congratulations. We will be emailing you soon with details of how to claim your prizes!
38
Then this
39
SONIC Pi AT CHRISTMAS
Learning to program with Sonic Pi
Claire Price
MagPi Writer them into the sound jack port on your Raspberry Pi (see Rapberry Pi diagram below).
Getting Started
If you are using the latest version of NOOBS or Raspbian Wheezy then Sonic Pi should already be installed. You can check this by looking to see if you can find Sonic Pi under Programming in the main menu. However, if you have older versions of these you either need to download the latest image files or you could type in the following into your terminal:
sudo apt-get update ; sudo apt-get install sonic-pi
It is important you update before downloading Sonic Pi otherwise Sonic Pi wont work. To hear the sound generated, you will need speakers or a pair headphones, if you cant get sound through your monitor/screen, and plug
Making Sounds
Open up Sonic Pi. You will see 3 panes (see screenshot on next page).
40
Press play. Can you hear how changing the timings between the notes changes how the melody sounds? In the large pane type in
play 48
Writing chords
We may want to introduce chords into our tunes. The easiest way to do this is to use the play_chord function:
play_chord [48,52,55]
By typing this we are telling the program you want to play MIDI note 48, which is the equivalent of playing a c on a piano. To hear it, we need to press play. This is the triangle button at the top of the page. We should hear a note and in the output pane (which is found on the top right of the screen) we will see what is happening (i.e. note 48 is playing). If you make a mistake while typing in the code in, dont worry. The bottom right hand pane will show you your mistake. Try typing in
ploy 48
This is telling the program we want to play notes 48, 52 and 55 at the same time. By adding the sleep function after play_chord we can change the timings between notes (see previous section). Try this out by typing:
play_chord [48,52,55] sleep 1 play 52
and see what happens when you press play. To create a tune we will want to play multiple notes. Lets type in the following:
play 48 play 52 play 55
Using loops
If we wanted to repeat this section, we could rewrite everything we had just typed or we could use a loop. This is achieved by using times do. By writing a number in front of times do we can get the program to repeat the section that many times. So if we write 2.times do everything will be repeated twice, if we write 3.times do everything will be repeated 3 times and so on. Adding end at the bottom of this section tells the program we only want to repeat this section.
2.times do play 48 sleep 1 play 52 sleep 0.5 end
Press play. You will notice the three notes are played together. This is great if we want to play a number of notes at the same time, but no good if we want to play separate notes. So to make sure the notes play one after another we need to type sleep followed by the number of seconds we want to wait between notes. For instance, if we want a 1 second wait, we would type sleep 1. Lets try putting this into what we have just written.
41
Press play. You should hear all the notes played twice in the loop.
Playing patterns
In our tune, we may have a series of notes requiring the same timings in between, e.g.
play 48 sleep 1 play 52 sleep 1 play 55 sleep 1
notes we have written. This works particularly well if we have written a sequence of notes in any sequence and then decide what we really want is for the notes to be played in numerical order. For instance, we might type
play_pattern_timed [52,48,55],[1]
but we actually want is for the notes to be played 48,52,55. By adding .sort this can be achieved:
play_pattern_timed [52,48,55].sort,[1]
There is nothing wrong with writing the code as in the example, but we could write it another way:
play_pattern_timed [48,52,55],[1]
This is a great way to write sequences as it is more concise and reduces the chance of error as you do not need to repeatedly type play and sleep. We can also use this format to shuffle, reverse and sort the notes we want to play. By adding .shuffle, the program will play the notes in a random order:
play_pattern_timed [48,52,55].shuffle,[1]
Note the .shuffle comes after the notes. If it is placed after the sleep command, i.e. [1 ], then the notes will be played in the exact order we have written them. Try it and see:
play_pattern_timed [48,52,55],[1].shuffle
We can also reverse the order of the notes we have asked the program to play by using
.reverse
play_pattern_timed [48,52,55].reverse,[1]
The sections we want to use, we encapsulate in in_thread do and end so the program knows where to start and finish. The program will then play these two sections at the same time. We can also use other functions within the in_thread do too, in this case 2.times do.
Again if we put .reverse after the [1 ] the notes will be played in the order they have been typed. We can also use
.sort
42
synth), fm, beep and saw_beep. To change the synth, we use with_synth followed by the name of the synth we want to use (in quotation marks, ), e.g.
with_synth dull_bell
end
Anything that follows this command will be played with this synth. So lets try it out:
with_synth dull_bell play 48
You can hear the note sounds very different even though the same note is being played. Try out the other synths and see which you prefer. Now lets write a tune in Sonic Pi!
A lot of this section is the same pattern, i.e. there is a sequence of notes to be played with the same timings so we can use play_pattern_timed. Remember the indents.
play_pattern_timed [55,55,55,57,55,55],[0.5] play 50 sleep 1 play_pattern_timed [52,50,52,54],[0.5] play 55 sleep 1 play 55 sleep 1
Congratulations you have written your first Christmas carol! For more information on Sonic Pi see icpi/ and have fun creating your next tune!
Now we need to tell the program to only repeat this section so we type
43
Merry Christmas and a Happy New Year! Don't forget our first issue of 201 4 will be available online in February.
The MagPi is a trademark of The MagPi Ltd. Raspberry Pi is a trademark of the Raspberry Pi Foundation. The MagPi magazine is collaboratively produced by an independent group of Raspberry Pi owners, and is not affiliated in any way with the Raspberry Pi Foundation. It is prohibited to commercially produce this magazine without authorization from The MagPi Ltd. Printing for non commercial purposes is agreeable under the Creative Commons license below.. This work is licensed under the Creative Commons Attribution-NonCommercialShareAlike 3.0 Unported License. To view a copy of this license, visit:
Alternatively, send a letter to Creative Commons, 444 Castro Street, Suite 900, Mountain View, California, 94041 , USA. | https://ru.scribd.com/document/190389231/The-Raspberry-Pi-Magazine-The-MagPi-Issue-19 | CC-MAIN-2019-30 | refinedweb | 12,441 | 70.94 |
Subject: Re: [boost] [test] Automatic registration of tests created with BOOST_PARAM_TEST_CASE
From: Jan Stolarek (fremenzone_at_[hidden])
Date: 2009-05-10 03:22:52
Ok, I figured out a solution that seems to be quite elegant, not so verbose
and easy to achieve :
using namespace boost::unit_test;
class DataSet {
public :
DataSet( /* params */ ) { /* setup single data set */ }
~DataSet( ) { /* teardown single data set */ }
/* some fields */
}
void testMethod( DataSet* data ) {
/* do something with method that's being tested. Use data somehow */
BOOST_CHECK( /* assertion */ );
}
BOOST_AUTO_TEST_CASE( TestNameFirstDataSet ) {
DataSet dataSet( /* params for first data set */ );
testMethod( dataSet );
}
BOOST_AUTO_TEST_CASE( TestNameSecondDataSet ) {
DataSet dataSet( /* params for second data set */ );
testMethod( dataSet );
}
It's so simple that I wonder why didn't I figure it out right away. It doesn't
require extra lines of code (dataSet has to be created somewhere anyway), it
even allows to get rid of DataProvider class. Also there is no problem of
spliting the tests into many files (I think this could be problematic if I
had used global fixture, since all the data sets would have to be defined in
one file). Any bad sides of this approach?
Jan
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2009/05/151387.php | CC-MAIN-2021-39 | refinedweb | 207 | 58.42 |
pam_get_item - get PAM information
#include <security/pam_appl.h>
int pam_get_item ( pam_handle_t *pamh, int item_type, void **item );
The
pam_get_item()function returns to the caller the PAM information for the item_type supplied. item is assigned the address of the requested item. The data within the item is valid until it is modified by a subsequent call to pam_set_item(). If the item has not been previously set, a NULL pointer is returned.
An item retrieved by
pam_get_item()should not be modified or freed. It will be released by pam_end().
The arguments for
pam_get_item()are:
- pamh (in)
The PAM authentication handle, obtained from a previous call to
pam_start().
- item_type (in)
The item type for which the PAM information is requested. This may item types PAM_AUTHTOK and PAM_OLDAUTHTOK are available only to the PAM service modules for security reasons. (out)
The address of a pointer into which is returned the address of the object requested.
One of the following PAM status codes shall be returned:
- [PAM_SUCCESS]
Successful completion.
- [PAM_SYSTEM_ERR]
System error.
- [PAM_BUF_ERR]
Memory buffer error.
[??] Some characters or strings that appear in the printed document are not easily representable using HTML. | http://pubs.opengroup.org/onlinepubs/008329799/pam_get_item.htm | crawl-003 | refinedweb | 186 | 59.19 |
You need to access code in a .jar or .class file in your project, but Eclipse can't find these files.
Select the project in the Package Explorer, and then select Project Properties to open the Properties dialog. Click the Libraries tab in this dialog, click Add External JARs for .jar files or Add Class Folder for .class files, navigate to the .jar file or to the folder containing .class files, and click OK.
Often you need other code in the build path , such as .class or .jar files. For instance, say you're developing a Java servlet, as shown in Example 4-3.
package org.cookbook.ch04; import java.io.*; import javax.servlet.*; import javax.servlet.http.*; public class ServletExample extends HttpServlet { public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException { response.setContentType("text/html"); PrintWriter out = response.getWriter( ); out.println("<HTML>"); out.println("<HEAD>"); out.println("<TITLE>"); out.println("Using Servlets"); out.println("</TITLE>"); out.println("</HEAD>"); out.println("Using Servlets"); out.println("</BODY>"); out.println("</HTML>"); } }
A lot of the support for servlets is in servlet.jar . Eclipse can't find servlet.jar by itself, so a lot of wavy red lines will appear when it comes to the imports, as shown as in Figure 4-17.
To add servlet.jar to the build path, select Project Properties, and click the Libraries tab. Then click Add External JARs, navigate to servlet.jar , and click OK. Doing so adds servlet.jar to the build path, as shown in Figure 4-18. Click OK to close the Properties dialog, and then build the project; when you do, things will work out fine (and you'll see servlet.jar in the Package Explorer).
If you add multiple .jar files to the classpath, you also can indicate the order in which you want them searched. Just click the Order and Export tab in the Properties dialog, and change the order of imported items by using the Up and Down buttons .
If you know you're going to be using a .jar file such as servlet.jar when you first create the project, you can add that .jar file to the project's classpath in the third pane of the New Project dialog. You'll see the same tabs there as you do in Figure 4-18. Just click the Libraries tab, and add the .jar files you want to the project.
If you know you're going to be using a .jar file such as servlet.jar often, you might want to create a classpath variable . Doing so will save you time when you want to include items in a project's build path. Using classpath variables like this is not only convenient , but also it centralizes your classpath references for easy handling. For example, if you want to use a new version of servlet.jar across multiple projects, all you have to do is to update one classpath variable.
To create a classpath variable, select Window Preferences Java Classpath Variables, as shown in Figure 4-19. Click New, enter the new variable's namewe'll use SERVLET_JAR hereenter its path (or browse to its location), and then click OK. You can see this new variable in the figure.
When you want to add this classpath variable to a project's classpath, open the project's Properties dialog, click the Libraries tab, click the Add Variable button (shown in Figure 4-18), and select the variable you want to add to the classpath.
Recipe 1.5 on creating a Java project. | https://flylib.com/books/en/1.259.1.83/1/ | CC-MAIN-2019-13 | refinedweb | 591 | 77.13 |
On 07/05/2017 11:22 AM, Max Reitz wrote: >>>> return (double)x == x && x == y; >>> >>> Yes, that would do, too; and spares me of having to think about how well >>> comparing an arbitrary double to UINT64_MAX actually works. :-) >> >> On second thought, this won't do, because (double)x == x is always true >> if x is an integer (because this will implicitly cast the second x to a >> double, too). However, (uint64_t)(double)x == x should work. > > Hm, well, the nice thing with this is that (double)UINT64_MAX is > actually UINT64_MAX + 1, and now (uint64_t)(UINT64_MAX + 1) is > undefined... Urgs. (uint64_t)(UINT64_MAX + 1) is well-defined - it is 0. (Adding in unsigned integers is always well-defined - it wraps around on mathematical overflow modulo the integer size. You're thinking of overflow addition on signed integers, which is indeed undefined) > > So I guess one thing that isn't very obvious but that should *always* > work (and is always well-defined) is this: > > For uint64_t: y < 0x1p64 && (uint64_t)y == x > > For int64_t: y >= -0x1p63 && y < 0x1p63 && (int64_t)y == x That's harder to read, compared to the double-cast method which is well-defined after all. > > I hope. :-/ > > (But finally a chance to use binary exponents! Yay!) Justifying the use of binary exponents is going to be harder than that, and would need a really good comment in the code, compared to just using a safe double-cast. -- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3266 Virtualization: qemu.org | libvirt.org
signature.asc
Description: OpenPGP digital signature | https://lists.gnu.org/archive/html/qemu-block/2017-07/msg00219.html | CC-MAIN-2019-35 | refinedweb | 260 | 51.89 |
I got the exception pasted below twice recently. Seems to happen after the
build bot has been up for a few days. The build slave is on the same
physical host, it's a vmware machine.
One thing I'm noticing, not sure it's relevant, is that the date on the
vmware machine seems to skew quite badly. It lags of about half an hour
within a few days. I'm gonna try to fix this in case it's related.
TTimo
--
Traceback (most recent call last):
File "/home/timo/usr/python2.2/lib/python2.2/site-packages/twisted/internet/default.py", line 131, in mainLoop
self.runUntilCurrent()
File "/home/timo/usr/python2.2/lib/python2.2/site-packages/twisted/internet/base.py", line 361, in runUntilCurrent
call.func(*call.args, **call.kw)
File "/home/timo/usr/python2.2/lib/python2.2/site-packages/twisted/internet/defer.py", line 193, in callback
self._startRunCallbacks(result, 0)
File "/home/timo/usr/python2.2/lib/python2.2/site-packages/twisted/internet/defer.py", line 249, in _startRunCallbacks
self._runCallbacks()
--- <exception caught here> ---
File "/home/timo/usr/python2.2/lib/python2.2/site-packages/twisted/internet/defer.py", line 262, in _runCallbacks
self.result = callback(self.result, *args, **kw)
File "/home/timo/usr/python2.2/lib/python2.2/site-packages/buildbot/process/base.py", line 681, in stepDone
return self.buildDone()
File "/home/timo/usr/python2.2/lib/python2.2/site-packages/buildbot/process/base.py", line 718, in buildDone
return self.buildFinished(e, success)
File "/home/timo/usr/python2.2/lib/python2.2/site-packages/buildbot/process/base.py", line 293, in buildFinished
self.builder.setExpectations(self.progress)
File "/home/timo/usr/python2.2/lib/python2.2/site-packages/buildbot/process/base.py", line 569, in setExpectations
self.expectations.update(progress)
File "/home/timo/usr/python2.2/lib/python2.2/site-packages/buildbot/status/progress.py", line 244, in update
new = self.wavg(old, current)
File "/home/timo/usr/python2.2/lib/python2.2/site-packages/buildbot/status/progress.py", line 239, in wavg
return (current * decay) + (old * (1 - decay))
exceptions.TypeError: unsupported operand type(s) for *: 'NoneType' and 'float'
> I got the exception pasted below twice recently. Seems to happen after the
> build bot has been up for a few days. The build slave is on the same
> physical host, it's a vmware machine.
Hmm, it looks like 'current' was None. I think that means the progress record
was updated before the step actually finished. I'd bet some of the
failed-build code paths have problems: probably one of them doesn't mark the
step as having finished, but calls the buildFinished function anyway. If the
step doesn't finish, progress.stopTime isn't set, so the step has an
undefined running time. When the build finishes it tries to average None into
the previous running time, hence your exception.
I've added a test to buildbot/status/progress.py to catch this case and
ignore the time updates. It will also emit a message to the log:
log.msg("Expectations.update: current[%s] was None!" % name)
If you see this message, look back through the display to see what steps have
passed or failed in the current build. I suspect a step is failing in a way
that doesn't terminate the step properly. Another symptom might be if you
click on the "log" link for that step and your browser behaves like it's
waiting for more text after the log is finished downloading. The trick I use
to let you start viewing logs that aren't finished yet (such that the
contents are sent to your browser as they arrive) tends to get confused when
the step fails unexpectedly, such that the step is done but doesn't know it
is done, so it never closes the HTTP connection.
If you see either of these symptoms, could you take a look at the logs and
see if you can figure out which step failed and what messages it emitted as
it failed? I'd like to figure out which code path is not cleaning up
properly.
thanks,
-Brian | http://sourceforge.net/p/buildbot/mailman/message/4575089/ | CC-MAIN-2015-35 | refinedweb | 693 | 52.26 |
lp:~gr409/pragmatic/phi
Created by George Rokos on 2014-03-31 and last modified on 2014-04-02
- Get this branch:
- bzr branch lp:~gr409/pragmatic/phi
Only George Rokos can upload to this branch. If you are George Rokos please log in for upload directions.
Branch merges
Related bugs
Related blueprints
Branch information
- Owner:
- George Rokos
- Status:
- Development
Recent revisions
- 658. By George Rokos on 2014-04-02
Added box200x200.bin
- 657. By George Rokos on 2014-04-01
Added missing file
- 656. By George Rokos on 2014-04-01
Renamed PCS to IDWS
- 655. By George Rokos on 2014-03-31
Created simple binary-format box200x200 for use on the Phi.
- 654. By George Rokos on 2014-01-22
Removed creation of active sub-mesh in coarsening and swapping.
- 653. By George Rokos on 2014-01-22
Re-arranged some barriers
- 652. By George Rokos on 2014-01-21
Bug-fixing in range sorting
- 651. By George Rokos on 2014-01-15
Fixed namespace issues in custom scheduler
- 650. By George Rokos on 2014-01-15
Introduced 2-stage thread barriers and used them in Refine2D
- 649. By George Rokos on 2013-12-22
Reordered operations in refinement
Branch metadata
- Branch format:
- Branch format 7
- Repository format:
- Bazaar repository format 2a (needs bzr 1.16 or later)
- Stacked on:
- lp:pragmatic | https://code.launchpad.net/~gr409/pragmatic/phi | CC-MAIN-2019-35 | refinedweb | 224 | 66.44 |
Agenda
See also: IRC log
<cferris> RESOLUTION: minutes from 5/16 approved as posted
<dorchard> scribe: dorchard
<scribe> scribenick: dorchard
<whenry> I got a 403 error when I tried that Link above
<maryann> thank you william
<maryann> (we;re talking about the F2F)
<fsasaki> f2f in Ireland, hosted by Iona
<whenry> Go ahead talk about me behind my back ! ;-)
almost everybody present here will be present at next f2f
<whenry> Excellent!
<whenry> Charlton, Fabian, William there too!
Thursday will be discussion on follow on f2f
<cferris>
sent off the guidelines doc last week, primer the week before.
<fsasaki> dorchard: one action was not completed
<fsasaki> chris: on the agenda
279 review: have on their agenda a 2nd lc of wsa metadata
<whenry> Peaceful ..
AI 286: maryann will have BOF table @ lunch today.
<Fabian> The microphones are picking up several people speaking
<Fabian> uncertain identities :-)
AI 290
<cferris> RESOLUTION: issue 4522 closed with resolution proposed in
Bug 4567
4572 suggests it should be lowercase..
<PaulC> 4567 and 4572 proposal:
<cferris> RESOLUTION: issues 4567 and 4572 closed with proposed resolution in
Bug 4568: latest namespaces
<cferris>
<cferris> RESOLUTION: issue 4568 closed with proposal in
Bug 4571: QNames/NCNames
<cferris>
<cferris> RESOLUTION: issue 4571 closed with proposal in
Bug 4575:
<cferris>
<cferris> RESOLUTION: issue 4575 closed with proposal in the submitted issue
paulc: how will all the changes get in?
paulc/chris: could editors do in real-time?
asir: seems like impls have already done the "right thing" wrt these fixes
<cferris>
related to scalability issue.
cferris: who agrees with ashok that Dan's
answer is correct wrt to what it says, but would prefer that the policies
intersect.
... ashok, dave
<monica> pong
dale: at the framework level or domain
all: at the framework level.
related 4561, can domain processing "opt-in" to intersection
cferris: if we went with the approach that the framework always intersects, which opposite of current
monica: are there cases in other domains that would take advantage of such matching?
<cferris> ack
monica: did other domains make a mistake assuming absence would match?
tom: seek stability
... ws-a does not want to rely on domain specific processing
... want stability, but could live with it IF we had done it before CR.
<Fabian> there is only one domain that introduced and uses nested policies. we should make sure we do what WS-SecurityPolicy requires.
<fsasaki> dorchard: problem I have: ws-addressing comes with something, others come with other requirements
<charlton> indeed
<fsasaki> .. there is no way to learn from ws-addressing implementation
<fsasaki> .. ws-addressing, ws-security ends up to have to do the same kind of workaround
<fsasaki> ashok: and we don't fix it
<fsasaki> paulc: you go back to WD and it will be done in 6 months
paulc: we could go back to WG and then take 6 months
cferris: who cannot live with the status quo?
<Fabian> can live with status quo, can not live with Dan's interpretation
cferris: on question 1
no hands
fabian: we need to coordinate with ws-security
policy
... probably if we did the right thing for ws-security policy, we cover all the cases
... we introduced nested policy for ws-securitypolicy
<PaulC> no hands
<charlton> can live with status quo
<PaulC> pbc hand
<cferris> raise hand
<monica> raise hand
we have now learned that "xyz hand" is long form
<fsasaki> dorchard: runtime protocol specs said they will not wait for ws-policy
<fsasaki> .. "we will not rely on the CR version of the spec", like RM
<fsasaki> tRutt: rm does both
<fsasaki> dorchard: so policy is of the hook, they decided not to wait
<fsasaki> .. that gives us some room from a scheduling perspective
cferris: except ws-addressing
ashok: what would happen to ws-security policy
asked by fabian
... if we adopt this, it would become easier to use securitypolicy
... could say <x509></x509> would match with all the myriad variations.
... make life much much easier
paulc: refutes ashok, assumes they wouldn't add
anything under x509
... if they revved security policy after policy revved, then there would be a problem
monica: we have that condition anyways
paulc: if I explicitly state what I support,
then I'm robust.
... if I then do wildcards, then somebody can add something new
... if you go look at ws-securitypolicy, they point to 1.5
<PaulC> SP:
<whenry> regrets, I must drop off for another call. Will be back afterward.
lines 171 to 173
monica: if we look at a nested policy expression, ... look at definitions
<maryann> since a nested assertion ( according to our definition) means that the behavior qualifies a parent assertion
<maryann> then at some level the "empty" does imply a certain level of behavior since the parent or root is expressing some behavior
monica: have to ask whether nesting is exclusive or additive?
felix: this would create versioning problems, and problems with proposal from ws-addressing
<maryann> the nested could be additive behavior rather than exclusive behavior that might conflict if you tried to match an empty with a specific sub-assertion
asir: go back to ashok's point that wildcard wouldn't break security policy
<Nadalin> yes it would break SP
<asir> Bottom of Section 3.9
<maryann> i think it depends on the assertion
<Nadalin> it would break anyone use of assertions
<maryann> and the fact that these assertions were designed with an assumption about how the algorithm currently works
asir: brings up httpstoken with parameters
<prasad> Yes in general we cannot guarentee that a nested one would always match empty. In some cases it would and some cases it may not. Depends on the specific case
ashok: no, that's domain specific.
asir: if you have a nested policy, then it
indicates any behaviour.
... this is hard to imagine an app that supports all options
cferris: don't buy the argument that if I added
new extension then I'd get a false positive.
... in the case if I had all the options (ie security policy) then compare all those
... vs letting subsequent behaviour figure out cipher suite..
... had we gone that direction, it might not have been that bad.
<Zakim> dorchard, you wanted to follow up on paul's rebuttal and to ask how ws-sp uses policy 1.5
<fsasaki> dorchard: paulc was arguing against the wildcard proposal based on ws-security policy does
<fsasaki> paulc: I looked at the ws-sx spec, you statement was wrong
<PaulC> Charter text: Web Services Policy should remain compatible with existing policy assertions and offer a smooth migration path for these assertions (where applicable). Existing policy assertions (in specifications that have been submitted to other standards groups) are Web Services Reliable Messaging Policy, Web Services Security Policy, Web Services Atomic Transaction, and Web Services Business Activity Framework.
<fsasaki> dorchard: am I talking against an implementation or a WG?
<fsasaki> paulc: reply is citation from charter text above
<fsasaki> paulc: dorchard said "other TCs have gone ahead without policy, so we can do what the want". That is not true
<fsasaki> .. ws-sx will reference also the CR version of policy
<fsasaki> dorchard: how will security policy use policy?
<fsasaki> .. what will the impact of the change be to security policy? It will break their current work, but they might benefit in the future
<fsasaki> .. what is the compatibility issue?
<fsasaki> paulc: go back to WD, change the NS
<fsasaki> dorchard: they have a reference, but how is policy used?
<fsasaki> maryann: it is more than just a reference, it is deep in the spec
<fsasaki> asir: it is a normative depdendecy
<PaulC> I supplied my comment above in the Charter text.
<cferris> we are breaking for 20 mins
<cferris> ---------------------
<Fabian> Charlton, seems like you got the P6 and P9 wrong :-)
<Dug> (pwd: wspolicy)
<maryann> <break conversation for posterity> there are issues for security policy, when a service takes the option of wildcarding...the use case for the customer side is easier to illustrate.
<maryann> when the service is the one doing wildcarding it becomes very difficult with all the extensibility points in security & security policy, to understand what the service is willing to do....
<maryann> in some sense it would be asserting it could do "anything" that the customer would have in its own policy, and it would be difficult to see how this range of options would be determined, assessesed for interoperability
<maryann> the customer "could" extend with tokens, that the service was not aware of
<maryann> there would need to be more constraints on these extensiblity points
bug 4558 is related..
<fsasaki> continuing meeting
cferris: dispose this, then get back to
4558.
... we have consensus that Dan's message has described the spec.
<fsasaki> Ashok: not technical points, but about the process
ashok: security policy agreed to refer to policy 1.2 and policy 1.5
<fsasaki> .. Paul mentioned that security policy agreed to refer to policy 1.5
ashok: not whole story
... they also have charter to change policy reference(s) (1.2 and 1.5 CR) to policy 1.5 rec
paulc: are you inferring that they were expecting 1.5 ns to change?
This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
<cferris> paul, concerned that were we to place the /ne/ namespace in jeopardy, that the sx, et al tcs might change their direction
ashok: 2nd point, tired of the stick "you
really want to make this change, it'll go back 6 months"
... if we have to go back, then ok.
... let's not use this to stop discussion
cferris: trying to finish this agenda item..
... for those left in the queue, I'd like to close this agenda item.
... do any of you have concerns related to this thread that are not captured by 4558 or 4560 or 4544
asir: what is disposition of this is the example?
prasad: leaving default behaviour as is, and give assertion authors chance to over-ride in domain specific
cferris: 4561
<TRutt__> Empty as wildcard has problems for nested policy, it woujld be better to define a standard wildcard, which can be put into scope for parent policy for which wildcarding is appropriate for matching. I believe wildcarding is not approporated for all assertions which have nested policy assertion types This could be addressed in v.next
cferris: asir, please open issue wrt need better example of empty nested policy item.
<cferris>
<asir> New issue is
<cferris> RESOLUTION: issue 4577 closed with proposal in amended to change 'default' to 'framework'
paulc: how will we proceed?
... what are people's favourite items to talk about?
... perhaps each person talk about what they think is most important.
... heard a suggestion from ashok that dorchard's taxonomy be the starting point.
<prasad>
<fsasaki> (now discussing mail from dorchard above)
<fsasaki> dorchard: tried to describe actual differences between 4 positions I see
<fsasaki> .. in terms of requester / and provider and "pseudo set theory"
<fsasaki> .. I had a single scenario to describe the differences
<fsasaki> (dorchard describes the mail, agreement that the mail is correct until "Strict intersection yields no intersection.")
<fsasaki> now discussing the part starting "There is a policy <Z/> ..."
<fsasaki> cferris: there are two flavors of that: talking about assertion vs. behavior
<fsasaki> dorchard: let's talk about assertions only now
<fsasaki> cferris: ok
<fsasaki> dorchard: nobody is proponent for 1. about "2. AIN Closed world flavour : "
<fsasaki> asir: nobody advocates 2 now
<fsasaki> cferris: agree
<fsasaki> .. IBM never advocated 2
<fsasaki> paul: let's skip history and get through analisys
<fsasaki> dorchard: +1. (now going through option 3/4)
<fsasaki> TRutt: is "client" and "behavior initiator" the same?
<fsasaki> dorchard: for the purpose of this yes
<fsasaki> ashok: question on 3: one use case: both provider and requester have published policies
<fsasaki> dorchard: the scope here is the simplist possible case, somebody starts an HTTP connection and picks up stuff
<fsasaki> ashok: (starts to ask question on 3)
<fsasaki> paul: hold the question, answer will come
<fsasaki> dorchard: now about the table
<fsasaki> maryann: why the "will" column?
<fsasaki> dorchard: client has an intersection result and it will do a,b. It is not a "MUST" because of the intersection
<fsasaki> paul: so this is for the lax intersection case
<fsasaki> dorchard: yes, for strict case the table is boring
<fsasaki> .. will or will not is about the intersection, must and "must not" are about both requester and provider
<fsasaki> .. so will and will not is about the requester only
<fsasaki> paul: everybody agrees with the "will" column?
<fsasaki> ashok: is the will column about requester and provider?
<fsasaki> paul: david said that
<fsasaki> cferris: I think "will" column applies to both
<fsasaki> (discussion on the restructuring of the column currently done by cferris)
<fsasaki> cferris: important to see "who initiates the behavior?" that is different than requester / provider
<fsasaki> .. I am only constraining the initator
<fsasaki> dorchard: I don't agree
<fsasaki> .. in a single interaction, a provider behaves as a response
<fsasaki> cferris: I am not constraining the behavior of a response
<whenry> But how do you really feel?
<maryann> there is a bit of a passionate discussion happening live
<maryann> for those of you remote, we ask your tolerance
<fsasaki> (problems of following the discussion for remote participants, paulc says nothing we can do about that in the current discussion)
<maryann> and we will try to capture the discussion in the scribed text
<whenry> May need to change the rating to "R" ;-)
<fsasaki> dorchard: what are the behaviors in the follow up of an interaction?
<fsasaki> paulc: cferris says the interaction does not constrain the provider
<maryann> chris does not believe that the intersected alternative constrains the provider behavior
<maryann> david seems to have a different view of the behaviors for either party
<maryann> david had tried to reduce the behaviors to a common set and chris feels the distinction is relevant and hence the reduction loses some characterization of behaviors thats important to capture
<fsasaki> dorchard: in the policy framework, there is no constraint on the provider whether it must do D (from cferris perspective)
<fsasaki> cferris: in the policy framework , there is no mechanism to tell which alternative I choose
<fsasaki> .. I'm trying to make a statement in the spec to make clear: if I know what an assertion means in terms of its behavior, and it is not in the alternative selected, it will not be applied
<fsasaki> paulc summarizes:
<fsasaki> paulc: requestor will exibit a,b,c and must not do E. Z,Y,C,D are out of scope
<fsasaki> (proposal is not on IRC)
<fsasaki> paulc: using intersection means "there is an entity that initiates the intersection", in cferris proposal
<fsasaki> .. if messages are going in the other direction, roles are changed
<prasad> ashoh hand
<fsasaki> ashok: I have a policy and do a policy intersection. What I must no do is: the behaviors which are in my policy included
<fsasaki> cferris: correct
<fsasaki> paulc: you do not the things which are in your policy, not talking about the other guy
<fsasaki> paulc: would ashok be happy with the words which asir proposed at ?
<fsasaki> ashok: yes, but with what we have now , we might reword them again
<fsasaki> paulc: cferris, are you fine with what we have now?
<fsasaki> cferris: vocabulary based AIN was subtly different. Now we are constraining what you know
<fsasaki> paulc: two tasked over lunch: 1) take cferris proposal with Asirs text: would that make ashok happy?
<fsasaki> .. and 2) request from dorchard to look more at open world proposal, and 3) from monica
<fsasaki> .. some editorial items
<fsasaki> monica: already in the mail archive
<fsasaki> .. what we have now on the screen should go to the primer
<cferris> we are taking a lunch break... back at 1:00 pm ET
<cferris> email that captures the "whiteboard" dicussion this morning:
<cferris> hi david
<cferris> we will be starting in 15 mins
<maryann> scribenick: maryann
<scribe> scribeNick: maryann
<scribe> scribe: maryann
just resuming
paul needed to leave but will be back
wasn't intending to deal with subtleties, the motivation is to represent some things that came out of the work in WS-Addressing in their attempts to define assertions for the addressing behavior
david - trying to help offer some feedback on reading the policy document, thinks there are some simple ideas that didn't come across
david: some of the behavior you get with bags
as a result of intersection is hard to grasp
... normalizing policy expressions seems to be indirect
starting with
<cferris>
takes a long time to get through the material, paul responded that we considered using other ways of expressing the rules, and it is acknowleged that in retrospect you can always see how you could do something different, but not advocating we do that at this time in the process
chris: is there a short path that could augment what is there with a simple summary?
david: normal form is just a policy
this could be a simplifying principle
chris: is there openess in the working group to try to create some rules to augment the current text?
more of when the rules apply
is the critical thing that is missing
commutivity applies because policies are unordered
that's not a normalization rule
assciativity applies pretty clearly
distributive would be good to state that its a normalization rule
a sentence or two at the beginning of each rule might help
ashok: think the spec would be better with more formal rules
chris: prescriptive, right?
ashok: yes
david: i agree this is formal without being rigorous
asir: i heard dave say he wanted an opening statement.
chris: : i'm talking about text to augment
david: more text for motivation and guidance on
when these rules apply
... its kind of there in the examples, but it would be good to pull it out
asir: there is a mapping from the normal form to the policy
<cferris>
chris: this is a description of what the normal form is
<cferris>
<Levogiro> PUTOOOOOOOOOOOOOOOOOOOOS!!!!
asir: set of axioms are defined
... 4.1 states the mapping
ashok: what does that mean?
asir: data model
david: map from expression to a policy is first
find normal form expressions and here are some normal form rules
... from a mathematical point of view there are some holes
... some of the text seems vague
... its a declarative and axiomatic approach
chris: are you asking for motivation in 4.3.6?
david: yes
... now that i understand i can go off and craft some specific statements
chris: that's what i was looking for
asir: 4.3.6 has hyperlinks to the axioms
<prasad>
chris: what i would recommend is that if you could come up with some specific statements to give this some motivation, the WG is happy to take a look at that
paul: normal form doesn't have a definition, it points to section 4.1 .....used the hyperlink to show the rules, isn't that what you want
david: i think its all in there someplace, but as a newcomer is its hard to find
paul: its a backward reference so maybe that
was wrong
... and there is no definition and that phrase is used quite offten
... david is asking for a motivation
david: it says the intent is to facilitate
interoperability
... really what it does is ground the mapping from expression to policy
... section 4.1
where it defines the element
david: not clear that putting out all lines in a normal form makes things simpler
paul: it says should
... and if you have a long policy that's a good reason
david: here we're saying that you will have to
deal with non-normative expressions, the motivation seems odd
... i think we've hit most of the points
chris: i think the group understands your concerns
<PaulC> Consider changing: The following rules are used to transform a compact policy expression into a normal form policy expression:
chris: hopefully if you could express some suggested changes in the form of " please do x, y , z" ...preferably not a chinese menu :-)
<PaulC> to inlcude a reference to 4.1 for "normal form policy expression"
chirs: WG is willing to entertain improvements
<cferris>
chris: this thread led to the realization that your terms and asirs terms are consistent
<PaulC> and to include a reference to 4.3 for "compact policy expressions"
david: yes, there was some discussion back and
forth
... so i understand that "a" is different from "a,a"
... alternatives that come out of intersection are going to be different than alternatives that come in from either side
... there is no requirement that a policy be reduced to a policy with only one alternative
... policies have set semantics and alternatives have bag semantics
chris: this is a general issue
david: i think the ambiguity is gone from the text
chris: 4552, policies are sets not bags,
... there's a proposal from asir, to add text
david: if that's what you want to say, yes
asir: yes that's what we want to say
<fsasaki> see mail from asir
paul: the code that needs to know about
different parameters is not in our hands
... you can't tell at intersection that they are the same
davidO: in owl you can say two things are the same
david: you can tell if you have two assertions
are spelled exactly the same
... same infoset
... doesn't mean same infoset it means same assertion
paul: what benefit do i get from eliminating duplicates?
ashok: the algorithm is that they take the alternatives and they pull out the assertions and they apply the same thing twice, like encryption
david: seems like the use case doesn't give the result ( in the primer)
<dhull> It seems that in most use cases it doesn't matter exactly what result comes back, just that it comes back at all
chris: we're sliding into the weeds......we
could argue about whether polcies should be bags or sets of alternatives, i
think it only matters that it might be simpler
... we have a proposal for a clarification
... so david, are you satisfied with that?
david: yes
... given that the group has discussed this and said they're ok with it, then i'm ok with it
davido: there is still an issue around duplicates at the end of intersection
monica: section 3.2 says that duplicates may exist
david: there is a direct testable assertion that should be in the interop tests
<cferris> RESOLUTION: issue 4552 closed with text in placed in Terminology section and referenced (linked) from uses of the term as deemed appropriate by editors
<cferris> RESOLUTION: issue 4556 is closed with proposal offered in issue description
<cferris> If two alternatives are compatible, their intersection is an alternative
<cferris> containing
<cferris> all of the occurrances of all of the assertions from each of the alternatives
<cferris> (i.e., the bag
<cferris> union of the two).
<cferris> If two alternatives are compatible, their intersection is an alternative
<cferris> containing
<cferris> all of the occurences of all of the assertions in both alternatives
<cferris> (i.e., the bag
<cferris> union of the two).
<cferris> RESOLUTION: issue 4553 closed with the above text modifying the existing text in section 4.5
david: 4555- the use of ther term
"intersection" was confusing , but the definitions do explain what the group
means
... i might consider "aggregation" or some other term
chris: i do think that our use of the term might introduce confusion, and would it help to have a link to see what we mean and disambiguate it from set intersection
david: some kind of softening might help
ashok: it would be good if we had an exact word
<dhull> "pairwise bag union of compatible alternatives"
paul: give us one
... i'm trying to make a proposal
<dhull> "Policy Intersection is an operation, analogous in some ways to set intersection ..."
paul: policy intersection does not appear in the terminology
<dhull> or "analogous in some cases ..."
ashok: would it be useful to say....and yyy is used for....
paul: that's what it says in 4.5
... you could introduce text and links
... introduce text in a note....."the use of the term intersection does not imply set semantics:
david: you could say policy intersection ....is analogous to set intersection in some cases....
asir: 3rd sentence in first paragraph
<PaulC> Org text:
<PaulC> Intersection is a commutative function that takes two policies and returns a policy.
<PaulC> New text:
<cferris> Policy intersection is communt
chris: there is a proposal from asir that may allow us to close this
issue 4554
david's reply
david: if that is what the WG means then it should be stated
<scribe> ACTION: Paul to make sure that the additional suggestions are not lost for the non-normative docs [recorded in]
<trackbot> Sorry, amibiguous username (more than one match) - Paul
<trackbot> Try using a different identifier, such as family name or username (eg. pknight, pcotton2)
<cferris> RESOLUTION: issue 4554 is closed with the proposal in to change the text in the first paragraph in section 4.5
<fsasaki> ACTION: pcotton2 to make sure that the additional suggestions are not lost for the non-normative docs related to issue 4554 [recorded in]
<trackbot> Created ACTION-300 - Make sure that the additional suggestions are not lost for the non-normative docs related to issue 4554 [on Paul Cotton - due 2007-05-30].
asir: the last issue from David's mail is the issue addressed in 4561
chris: resuming agenda item from before lunch
DavidO- would like to explore the open world
DavidO- this proposal is close to the one called "open world"
DavidO- I've had some trouble with the terms
DavidO- so i'd like to run through this on a more complicated message exchange
DavidO- terms initiatior is this protocol or wsdl in message
taking the Open world from the text proposed from David .....must do what is intersected and that's it
ashok: open world says nothing about what you must not do
david o - you said you can't live with this
chris-- the term "optional" means that there are two alternatives, not that the behavior is optional
davido- the requestor says RM optional
chris- its a matter of being precise in the use of the optional
davido- i don't understand the stridency of your position
davido- if the client choses to do something why is this so bad?
<whenry> Can the speakers speak up please?
i'll ask william, sorry
chris =- i'm making a big deal because if i don't use wsp:optional and I only have alternatives, and I am able to just select things to do anyway, then it negates the value of providing explicit alternatives
davidO: under my definition that's fine
dale: why did you put Y under must not? ( to chris)
chris -- no i didn't that's david's option
paul: your point is that optional is a macro
chris: yes
paul: imagine the case you have 16 alternatives
( to david)
paul: and you get back the one that doesn't have "e" in it, are you expecting that you can go back and if you find "e" is in it you can
davidO: i think its foolish for a client to do that, but its not necessary in the spec to say that
monica: i'd like to hear from dale, because he
raised these issues about open & closed
... there was a long dicussion in WS-TX and they had a hard time characterizing their assertions because they didn't know how to represent it
davidO: i want to try to champion chris's point of view to prove that i understand it
daveO: the requestor has this policy and there
might be a bunch of things that the provider does that it may or may not be
able to do
... in intersection, it explicitly asked whether E was a behavior to do
<Zakim> daveo, you wanted to say chris' point
daveO: if you add behaviors that you didn't get back in intersection then you are throwing out the value of intersection
monica: they had a conundrum and they came to a point where they only expressed what they were required to do
chris: if you can do what you want, what's the value of policy?
paul: we could define the syntax, but intersection has no value
<whenry> +1
paul: its a contract, i'm going to get your policy and this is what i'm going to do as a result of that
daveO: I want to see this under a more complicated message exchange pattern, i don't know what an entity that engages in an interaction means
paul: you're going to do that make connection
to someone
... then something comes back....its got to be going to something
chris: you already did from a reliable message
connection ....from a web services perspective you already did policy
intersection with paul to send them originally
... so you know what's going on here
... you are the entity engaging in that interaction
... conversely, asynchronously, paul;s going to send messages asynchonously and reliably
davidO- when you engage in an interaction, what do you mean?
chris: i have an endpoint
david: what about an endpoint with multiple
messages
... its the first one, that you're engaging in
chris: angels dancing on the end of a pin
... you want to know how do i talk to paul
... so you go and get his policy
davidO; how does this map to the subjects we define
paul: that's what attachment states
... if you have some at one subject or at another subject, that's the one you have to apply the algorithm on that subject
that why we have subject granuarlity in the policy subjects
chris: we would just like to not have
subjectivity in what you can do
... if we say you can do a or b, we want it to be either a or .b
... not that you can do a and b
... we want it to have predictability
david O --- why say in the spec MUST not
chris- it doesn't say must not
paul: is the only question about the verb
<whenry> What text?
paul: straw poll, how many people can live with the text
(delay for cut and paste)
<whenry> only reading it now
5 can
2 cannot
(editing)
."
<whenry> Can live with the inital text
6 can
<whenry> I can live with it
<whenry> Can live with should not but kinda like first one
P1 -."
P2- "If an."
<whenry> +1 to P2
P3- ."
<fsasaki> paulc: example : I have always RM in my policy. So even if I get a intersection result that has not RM in it, I will try to do it
<fsasaki> TRutt: could not live with SHOULD NOT
<fsasaki> ashok: wants to have a stronger word than "does not", e.g. SHOULD NOT or MUST NOT
<fsasaki> paulc: ashok does not want the flexibility in the spec. dave wants the flexibility. Tom is between ashok and dave
paul ( to tom) why did you vote that way?
tom: can we do a vote between 3 & 4
p4 "If an entity includes a policy assertion type A in its policy, and this policy assertion type A does not occur in an intersected policy, then the initiating entity must not apply the behavior implied by assertion type A. If a policy assertion type Z is not included in the policies being intersected then the intersected policy says nothing about the behavior implied by the assertion type Z."
<whenry> Are there penalties for MUST NOT? What will happen? What's the point. Even if we have a MUST NOT people are open to try the great thing is it won't work.
paul -- preference poll for 2, 3, 4
preference 2 - 0
<whenry> What's 1,2,3 ?
<whenry> 1 does not? 2 should not? 3 Must not?
( p1, p2, p3, p4 above)
preference 3- 7 1/2 or 8
preference 4- 1
<whenry> I like 2 "does not" better - let the best practices handle the shoulds
paul- strong preference for 3 and no one "can't live" with 3, so this is consensus
monica: is "entity" sufficient?
chris- i need to think about it over break
dave: i like getting rid if initiating because it gets rid of a lot of issues
<cferris> from the "whiteboard": If an entity includes a policy assertion type A in its policy, and this policy assertion type A does not occur in an intersected policy, then that entity SHOULD NOT apply the behavior implied by assertion type A. If a policy assertion type Z is not included in the policies being intersected then the intersected policy says nothing about the behavior implied by the assertion type Z.
<cferris> note, this link in the log is to the proposal that we have reached consensus on, modulo any "editorial" tweaks
asir: we need to remember that this was only one part of the original proposal
<cferris>
<break>
<cferris>
<asir> ACTION: Asir to close issues from David Hull [recorded in]
<trackbot> Created ACTION-301 - Close issues from David Hull [on Asir Vedamuthu - due 2007-05-30].
<asir> This includes 4552-4556
resuming after break
daveO- summarizing issues with wildcarding & issues with security policy
daveO- some new things emerged in the morning session if you were looking at introducing wildcarding at the provider side
daveO- there is a challenge with regard to scalability and ease of authoring
ashok: i think david raised issues about the performance side, but this is a usefull semantic to express
abbie: wildcarding?
ashok: yes
tom: from ws-addressing perspective the performance issues are not there ( there's only 2) but it may be that not every assertion can use the wildcard feature and we need to think about this more, so it could be a v-next issue
<fsasaki> +1 for v.next
<prasad> +1 to next version. This is not a show stopper
tom: we need to have a way to express whether or not the wildcarding holds or not
asir: we need some experience with examples,
its like an application saying it does anything
... if you are worried about malicious behavior, you can use throttles
... you can have a limit on the number of alternatives
... overloading the existing empty will break existing implementations
<CGI234> ping
<Zakim> CGI, you wanted to respond on breaking issue
davidO: in the current model i don't think
wildcarding breaks implementations
... every spec does not assume wildcarding, they list all the options
davidO- seems to me this is a compatible change
asir: i gave an example, from security policy,
<asir> From the primer - require
daveO-poll.....should we try to fix this now?
daveO- is there anyone else who is interested in solving this now?
exploring solving?
daveO- yes
monica- if we can establish that it won't break existing implementations then we can explore it
tom- asir, it will break implmentations
<asir> Bottom of Section 2.9 -
chris- ashok and dave are the only ones interested in exploring this?
daveO- if its incompatible i'm not sure i'm interested in a change
tom- if you did a new qname for wildcard
tom- this would be a global qname, so i don't see how it could be backward compatible
daveO- i believe the compatability is around assertions,
felix- it(compatability) is about implementations and assertions
<fsasaki> CR requirements are about (not) breaking existing implementations, adding a new qname would be against that
chris- so we're dong interop now, and lets say we can up with a compatible solution that doesn't break the 1.5 implementations, ..this is just an exploration.....of where we are
chris- we have roughly a month to cross t's dot i's in anticipation of transition to PR in June
<dorchard> the key is the phrase "is required".
<dorchard> a non-anonymous client who requires authentication would put their restriction in httpsToken, and then get the right intersection.
chris - we have to each review these changes, deal with any test cases unresolved after this interop....let's asume we get there.....that puts us into PR in July........that requires an AC review...for a month...current course....Sept for PR... if we entertain introducing a new feature...how long would it take to work out a resolution to this?
ashok- couple of weeks
chris- then we're looking at pushing back a month
chris- it pushes us back to last call
ashok- different question....if we start PR process in June..........what will we do in July?
chris- we have primer/ guidelines
<Zakim> dorchard, you wanted to mention when "customrs" would get wildcarding feature..
daveO- it will take us 2 years to get a v-next out and my concern is that a 2 month slip is a tradeoff to a 2 year slipbecause we add it with some other features
chris- i'm asking does that seem fair, to have a week or two to assess and then move on
daveO- i do think we need to have a proposal on the table
chris- i'm just looking to see if people expect to leave the spec open to do this, or if we are aware of the impact to the current track
asir: implementors have spent a lot of time and it would be hard to get implementors to do anything else
<TRutt__> I only want to change the CR namespace in the PR if there is a change necessary to fix a broken spec, the spec is not broken. The wildcard is an enhancement.
asir- its a myth to think it can be done in 2 months
tom- i don't want to partition the space, i hope we can do this without versioning the namespace
tom- its not broken the way it is
felix: it might also involve groups like the WS- addressing
ashok- if what's required is that we write a proposal, then i will write one next week
chris- the proposal needs to have a solid backing and an understanding of how we get where we want to go
ashok: you might need to retest the intersection algorithm
chris: felix, would adding a feature trigger going back?
felix: yes that should have been done before last call
chris- not going to close the door, lets think about it tonight and look at it again tomorrow
chris: if we have to go back to last call, we'd
be adding at least 3 months
... we would need to have a plan by June 6
felix: from my experiences and giving people more time, can start a feature creep
abbie: we should assess right now whether there is interest
<cferris> RESOLUTION: issue 4558 closed with no action as v.next
Description: can a domain define domain-specific processing that could state
that empty nested policy IS compatible with non-empty nested policy? If so,
then I believe the spec should indicate with a MAY.
<cferris>
<charltonb> +1 to MAY
asir: the intersection states that domains can only specify parameters intersection
monica: there is another sentence .....that
says " Because the set of bheaviors indicated tby a policy alternative,
depends on the domain specific semantics of the collected assertions,
determining whether two policy alternatives are compatible generally involves
domain-specific processing."
... i don't understand why we would say that they CAN NOT
<Zakim> dorchard, you wanted to dispute the assertion that it's hard to get implementors to do anything else and to ask why closed extensibility model on domain specific affecting
daveO: i don't understand why we have this
closed domain processing limit on the domain specific processing
... i think we will do harm and prevent this item we just put off for v.next if we do this
asir: I was explaining what's in section 4.5
... to say that two assertions are compatilbe you have to match the qname and the only thing that is delegated is the assertion parameter processing
... you need to determine if each assertion is compatible, and the key statement is that the only thing that is not covered is parameter processing
<TRutt__> The spec should clarify that the use of domain specific intersection processing requires that it be specified with the assertion type definition, In the lack of any domain specifric processing for intersection in the definition of an assertion type, the default intersection processing applies. If the intersection processor has to have a escape table (based on qname) for assertion types wanting to pull parameters into the algorithm, it costs no more
tom: we need to clarify if they don't put domains specific rules, the framework algorithm apples
<prasad> It only says: "As a first approximation, an algorithm is defined herein that approximates compatibility in a domain-independent manner". That is it is only a first approximation?
tom: when you pull in domain specific
processing you may as well pull in everything......parameters & empty
... and the text is ambiguous
felix: the spec says the alogrithm is only an
approximation, and v.next may be totally independent
... so I don't think it breaks v.next
ashok: asir talks about qnames and parameters, but what the spec says before that unless you have domain specific processing, so number one is if you have domain processing, and you can specify whatever you wish, if you don't then you fall back to the approximation
dale: similar to ashok, you can't say
categorically that empty can't be interpreted in a domain specific
processing, then they can do that
... that's what the wording says to me
... if a domain specific algorithm is required......then you say that you don't use the approximation, right now it seem s open
chris ( chair hat off)
chris: reasonable people are coming to
resonably different interpretations which indicates that there is
clarification needed
... there is no processing model for intersection, there are some steps, there is some prose, and it doesn't say explicitly whether you do domian first or second or part of the framework processing
... i think with soap we came up with a clear processing model which said, you can do it anyway you want but the behaviior has to be as if .....
anyone on the phone?
chris: we need to clear this text up
asir: clarification is fine
<TRutt__> if any spec defines an assertion type with domain specific processing, the implementation of that spec has to have a way to "overide" the default processing for that assertion qname. This can be very costly. In fact, an intersection implementation could be designed with a limitation of only doing default processing, and it would work only with policy definitions which rely on the default intersection algorithm In fact, I would ask if there are any
asir: section 4.5 is policy intersection
... everything is based on qnames
... the spec says what is not part of policy intersection
... if a domain says its not based solely on qnames then its a different algorithm
chris: how is that different?
daveO: why on earth do we say the domain can say that something falls out of intersection, but not in intersection
daveO- asir, you are saying that if you go through intersection and the qname match says yes, and the domain goes through and says no .......
<TRutt__> The framework should be clarified that the "domain specific" intersection is limited to processing of elements within the assertion element for a qname (i.e., only pertains to its parameters and nested assertions
asir: it says that in the first statement of the intersection
<fsasaki> +1 to TRutt
tom: the text does not say that anything under a qname is what is considered domain processing
daveO; this is a performance concern....the way it works it kind of scales because once you match, then you can do a lot of other processing and you know exaclty which domain processing to kick off
daveO: in one case you prune the tree of things that don't match, you know you only have to go into 2 to see if there is any domain processing to override the behavior
monica: if you look at the last paragraph in 4.5 you can have more than one assertion of the same type.... lean toward davids argument to allow domains to specify compatilbility
<fsasaki> adjourned for today | http://www.w3.org/2007/05/23-ws-policy-minutes.html | CC-MAIN-2017-30 | refinedweb | 7,448 | 54.46 |
>>."
early gnome (Score:2)
Re:early gnome (Score:4, Insightful)
Personally I liked Gnome 1.x a good deal better than I like the 2.x series.
Except for gnome-terminal. The newer versions of gnome-terminal are better.
But everything else is worse. More dependencies that shouldn't be necessary, worse performance, more emphasis on completely pointless features like the ability to use the file manager as a web browser (WHY would I EVER want that?) but fewer *useful* features (like, the ability to have an always-on-top panel of a particular size in a particular position, which was great for stuff like having a clock just to the left of where the minimize button was on maximized windows), more gratuitous bug-the-user annoyances (like dialog boxes asking you stupid questions and/or unasked-for windows popping up voluntarily every time you connect a USB device or insert a disc), more undesirably arcane Windows-esque stuff (like gconf), more effort required to get the theme the way you like it, and some things you just plain *can't* do, or I have not figured out how (like, changing the icons on the built-in feature buttons on the panel for things like logging out; in 1.x this was as easy as changing the icon on an app launcher).
If Gnome 1.4 were compatible with modern software (both directions: modern versions of the software it requires, like libraries, and, going the other way, modern versions of applications), I'd still be using it. It was good. I have no idea why they decided to screw it up so much. Gnome 2.x comes across as a bad sequel or a poor remake. It is inferior in nearly every respect.
I can't say I'm very excited at the prospect of Gnome 3.0. What features are they going to take away now, the foot menu and the ability to have a clock on the panel? And what are they going to add? A useless 3D "walk through" filesystem animation like in Jurassic Park, which activates automatically every time a filesystem is mounted? Fixed-size desktop-bound "gadgets", like in Windows Seven, which are strictly inferior to panel applets in every way? Take your time, guys, take your time. I'm in no hurry to upgrade.
Re: (Score:2)
> more emphasis on completely pointless features like the ability to use the file manager as a web browser (WHY would I EVER want that?)
Yes, for some reason that kind of "resource namespace abstraction" seems to be some kind of Holy Grail for a lot of developers who assume users will love it; the same thing happens with Kde's Konqueror, year after year, despite 99.99% of the web sites needing something heavy like Firefox et al. and 99.99% of the users desperate due to slow/unresponsive basic file browsin
Re: (Score:3, Insightful)
the ability to use the file manager as a web browser (WHY would I EVER want that?)
Wait, does GNOME even have that? I just tried it in Ubuntu 10.04 Beta 1, and it doesn't work.
On the other hand, KDE has had that for years; Konqueror. (Google it; the top hit says "Konqueror - Konqueror - Web Browser, File Manager - and more!")
I agree with you that I am fine with the file manager and the web browser being two different tools. I guess I don't care if they are merged, but I don't view it as a feature.
As for
Re: (Score:2)
Oh man, nostalgia. I loved Gnome 1.4. Particularly gmc (the old file manager). Simple, clean, and fast -- Nautilus was so terrible initially that I made several efforts atreplacing it with gmc. I don't remember if I was ever sucessful, as it was years ago.
Sure, it got better over several releases (I remember ~2.6-2.8 beginning to be usable again, I believe), but I never have liked new-Gnome as much as old-Gnome -- though XFCE is a somewhat reasonable 'replacement' for it these days.
Re: (Score:3, Interesting)
Actually, I'm leaning toward a custom session consisting of sawfish and gnome-panel. Most of the actually *useful* features of Gnome can be had just by running the panel (and individual apps launched as needed, though the terminal is the only Gnome-branded app I use very much) in conjunction with another window manager. I'm not sure I need the rest of Gnome. And unlike most of Gnome 1.x, sawfish has been maintained and still works just fine with modern stuff.
Setti
Re: (Score:2)
GConf, GStreamer and the other interdependent crud are simply an annoyance.
Unless you want to write apps that store a configuration, or handle sound or video.
Re:early gnome (Score:4, Interesting)
GConf isn't a binary-only registry, it's XML files stored in a directory structure. More importantly, it's a library that provides a convenient way to update and monitor the information in these files.
Linux audio is a bit of a mess, but the mess is due to there being lots of different ways to access the sound hardware (OSS, ALSA, PulseAudio, Jack, whatver else there may be). GStreamer doesn't really contribute to that mess, as it's at a higher level; if you standardized on, say, pulseaudio, you'ld still want something like GStreamer to handle file formats and codecs.
Re: (Score:2)
Sounds like a KDE-type cleanup (Score:5, Insightful)
If this is done properly, I think it'll be good for GNOME. From where I sit, they sound like they're shooting for a major architecture redesign. In other words, this 2.30 release is analogous to the 3.5 releases of KDE.
And I think starting largely from scratch will be a net benefit. I've never personally used GNOME (though I've recommended it to others) and I've found it to be technologically lacking compared to KDE (KParts and KIOSlaves are awesome, and while there are GNOME counterparts they aren't as used).
One thing I think GNOME does very well is their HIG - probably the best outside of Apple. The new release is very simple - dump a lot of legacy code and keep the HIG. Maybe drop the old-fashioned look too.
Though my fantasy is to see them use Qt.
Re: (Score:3, Informative)
KIOSlaves are awesome, and while there are GNOME counterparts they aren't as used.
One neat thing about GVFS, the GNOME abstraction, is that part of it wraps FUSE filesystem modules. Any application, not just GNOME applications, can use filesystems mounted with GNOME's 'connect to server' feature, for instance. I think it's more desirable to write a FUSE module than a KDE-specific KIOSlave.
GNOME sometimes comes across as a hodgepodge of bindings and semi-coherent libraries, but there has been a great deal of work to consolidate [gnome.org] and even eliminate [gnome.org] core libraries, tighten up [gnome.org] coding standard
Re: (Score:2)
I can mount my network drive using GVFS all I want, but I still can't watch anything on it, since VLC has no clue about it.
Yes, you can, that's what the parent means about the FUSE integration. GVFS mounts show up as directories accessible by all applications at ~/.gvfs . VLC doesn't need to know anything about GVFS to access these files.
Re:Sounds like a KDE-type cleanup (Score:5, Insightful)
The KDE type cleanup is what they did for 2.0, which was what made Linus Torvalds say "fuck this shit, I'm switching to KDE" (and, incidentally, what made him say "fuck this shit, I'm switching to Gnome" after trying KDE4). It pissed off a lot of other users as well. Of course, Gnome 2.0 was a bit more stable and less bug-ridden than KDE4, but on the other hand it had almost no features you'd expect from a computer (which was supposedly 'good for you', according to the HIG apologists, pretty much like the absence of multi-tasking on the iPad until yesterday), and took several years before it was as useful as 1.4 (the last version I used).
I forget. Did I have a point with all this? Oh, yes, the cleanup: it sucked the last time, and I hope they manage it better now, or they will probably hear it until the next time some huge project mismanages a major revision. On the other hand, maybe a botched Gnome3 release will help KDE get the recognition it deserves again.
Re:Sounds like a KDE-type cleanup (Score:4, Funny)
Looks like it over-stupidified your comment as well. Tough luck!
Re: (Score:2)
Though my fantasyThough my fantasy is to see them use Qt.
Well, you already have KDE for that don't you?
:-) What I really wish they sussed out once and for all, is freedesktop.org, so you can use whichever desktop you want with whatever tools you want, and it all works. They already co-operate a bit, but I'm not entirely sure how deep it goes in many ways.../p
Re:Sounds like a KDE-type cleanup (Score:5, Insightful)
Re:Sounds like a KDE-type cleanup (Score:5, Interesting)
I love signals and slots. They require a bit of a different way of thinking, and a semi-proprietary compiler (it's open source but still).
It's really the first time since Visual Basic where I think a language has really 'gotten' event-driven programming. Everything else has you writing your own event loops to switch on a message type. Signals/slots let you use a single statement as a patchboard. It's the reason they can have an example where a slider changes a text box in one line of code.
Is it different? Sure. It's slightly different than straight C++, but not by much. It definitely demands a new way of thinking about how to program graphical applications. But if you can manage it, I think it's far superior.
Though I'd also agree that Mono's crapulence is Gnome's biggest problem. I don't want the whole damn framework for some note-app
Re: (Score:2)
Did you read the "Credit where credit is due: QT" paragraphs?
I think you should.
Without Qt's moc, she would have never been inspired to make her own version of moc.
--
BMO
Re:Sounds like a KDE-type cleanup (Score:4, Informative)
is MOC really needed
No, basically. But back in the early nineties, when Qt was first developed, the various C++ features that make these pure-C++ signal and slot libraries usable weren't widely available.
Re:Sounds like a KDE-type cleanup (Score:5, Insightful)
It's a matter of taste. Personally I hate Qt's slot mechanism. And Moc. IMO the problem with GNOME is not GTK+, it's Mono.
I'd say it's Mono, to a lesser extent GTK+, and to a greater extent the fetish of removing any and all features.
Re: (Score:2)
Re: (Score:2)
Is that the fault of KDE/Qt or is it a result of a slow C++ compiler?
Re: (Score:2)
Some valid C++ code indeed requires more work to compile (if it can be compiled at all), but Qt? It does not use features like exceptions or templates...
And GCC C++ is known to not be the fastest C++ compiler...
Gnome Desktop (Score:2)
Re: (Score:2)
Re: (Score:3, Informative)
Re: (Score:2)
Article is bogus, needs proofreading (Score:2, Informative)
GUADEC (Score:5, Interesting)
Not the same stuff - much worse! (Score:5, Insightful)
Yes, I DO remember the early days of Gnome and how much better it was than now:
- automatic save and restore of multi-workspace sessions
- handy window operations like maximize-vertically and maximize-horizontally
- easy to change settings like which app to handle movies, etc.
I remember when clicking on a menu button gave an instant response,
not a several second delay for the first time in a session.
Gnome has become bloated and slower while becoming less stable and less powerful.
It is neither easier nor harder for beginners. It has more eye candy.
Gnome clients have also gone downhill: Evolution used to support my mh mail folders.
Now it uses a database that crashes when I try to load my old mail and fails to work
with my rules. It still doesn't integrate the contact manager with the mail rules.
I'd switch to KDE but they've been destroying themselves even faster!
Re: (Score:2, Insightful)
Re: (Score:2)
This is my impression also.
All the stuff you mention plus keybindings and WM choice. Once upon a time I was totally in love with Gnome. Unix keybindings and WindowMaker integration made it very useable and useful for me. Gnome2 took away all that - took away everything I liked about it - and to ad insult to injury the developers made it a practice to abuse anyone that didnt like the change. It's true I havent tried it recently, and it's also true I probably never will. I am not a masochist.
Re: (Score:2)
> I'd switch to KDE but they've been destroying themselves even faster!
This. First the disaster that was (still is, from my experience) KDE4, and now those brain-dead Gnome 3 UI mockups. What are they all smoking? Can we please have one sane full-featured DE left?
Re: (Score:3, Funny)
Re: (Score:2, Insightful)
GNUstep could've been a winner if they'd only put forth the small degree of effort to at least look and feel like other desktops.
Instead, they chose to stick with those fucking vertical menus that basically nobody else uses. They made it fucking impossible to have GNUstep apps look like GTK+ or Qt apps. And the end result is that nobody uses it.
It's clearly not a language or API problem, as there are may people who like Objective-C and Cocoa, and develop fantastic apps using both. The problem is clearly wit
Re: (Score:2)
- automatic save and restore of multi-workspace sessions
- handy window operations like maximize-vertically and maximize-horizontally
Modern Gnome doesn't have those? That's pathetic. KDE4 has them.
I'd suggest giving KDE4 a second try. Make sure you're using a very new version, though, preferably 4.4. The early versions were very bad, and should never have been released for general consumption.
Why the hang-up with version numbers? (Score:3, Insightful)
"2.30 will probably be the final version of the 2.0 series"
I've noticed that open source software generally seems to be more hung-up and obsessed with version numbers than proprietary software. For example Linus Torvalds has said that there will never be a version 3.0 of the Linux kernel. So I guess 2.9.99.99.999 will be the end of the line.
I don't get the big hang-up with version numbers. Who cares if it is 2.30 or 3.0? My current nVidia video driver for Windows is 196.21 -- as long as it works, who cares?
Re: (Score:2)
So I guess 2.9.99.99.999 will be the end of the line.
With the current prohibition on a stable kernel/driver interface in the linux kernel, 2.9.99.99.998 drivers will be incompatible with 2.9.99.99.999
Re: (Score:2)
Before I get a bunch of "we don't care about binary only" drivers type responses: I'm talking about open source drivers, including those in the main kernel. The constant code churn means constant and frequent changes to driver internals are required just to keep up. New bugs are a frequent result. You are always chasing the latest change with bug fixes to compensate.
Honestly, a well defined and stable interface is sometimes useful.
Re: (Score:2)
For example Linus Torvalds has said that there will never be a version 3.0 of the Linux kernel.
Because there will be no more major revisions to the kernel? That would make sense.
Re: (Score:2, Interesting)
In many open source projects the version numbers have a technical basis, e.g. ABI or API compatibility. In proprietary software it's usually marketing fluff. Unfortunately much open source is heading in the same direction.
key-bindings (Score:2, Insightful)
I'm excited (Score:2)
The stuff that's happening with gnome desktop is fantastic. It's especially nice on small laptop/netbook/tablet machines. The latest Ubuntu (Lucid Lynx in beta) has built in social networking that actually jumps ahead of OSX or Windows. The fact that I have something like TweetDeck built into my OS is pretty cool. Sure there are some rough edges. OSX has rough edges too. But I rarely find myself explaining away huge deficiencies. It's just a different bug from your OSX or Windows bug.
I'm excited. But then a
How far have we come? About a quarter-inch. (Score:3, Insightful)
For those who were around for GNOME 1.2 back in 2000, the 2.30 release stands as evidence that Linux on the desktop and GNOME in particular have made awfully little progress in the last decade. GNOME 2.0 was released in 2002, not 2000, and it was horrid; maybe if your first experience with GNOME was 2.0 then you might think 2.30 was a vast improvement- heck, TWM is a vast improvement on GNOME 2.0. 2.0 was extremely bug-ridden, and if you wanted to change anything from its mind-numbingly bad defaults you had to putz around with finding where in gconf's xml you could go to change things.
If you were around for 1.0, the RH 6.1 "October GNOME" release, or 1.2, you know that GNOME made a lot of progress, was centered on the needs of those most likely to use Linux rather than on unsubstantiated usability claims, and was becoming quick, convenient, and powerful. The progress GNOME made between 1998 and 2000, the big improvements in the 2.2 kernel series, and a host of other developments made it seem like Linux really would overtake Windows for desktop use soon. But I really don't find much about modern versions of GNOME that really improves on 1.2 or maybe 1.4; the last 9 years have seen little improvement in the Linux desktop IMO.
Re: (Score:2, Funny)
Re: (Score:3, Informative)
I remember using GNOME in the late 90s, and if bonobo is dead than it's a good thing. That was a nightmare to mess with.
Re: (Score:2)
Like I said, good riddance to bad rubbish.
To be honest I really don't care anymore. I've been using Windowmaker for about 5 years now.
Re: (Score:2)
dbus was actually initially developed for GNOME; other GNOME-originated libraries that are still in use include GVFS, gconf, and gstreamer. And GTK, though of course not originally developed from GNOME, is now developed in parallel with GNOME (much of the stuff in the depracted GNOME libs is no longer needed because GTK itself provides solutions for the same problems). So I think GNOME is still a pretty vibrant development platform.
Re: (Score:2, Insightful)
LGPL is less free than GPL, really?
Re: (Score:3, Informative)
Qt has been LGPL for a few years now, and KDE has always been part LGPL (like WebKit, a derivative of the old khtml).
Re: (Score:2)
Fine, so then LGPL is less free then LGPL?
Re: (Score:2)
I think the idea is that X + Mono is less free than X.
It still hasn't been proven that Mono isn't a patent trap.
Re: (Score:2)
It still hasn't been proven that Mono isn't a patent trap.
It hasn't been proven that Qt isn't a patent trap, either. Thing is, to prove that, you have to go through the entire patent database, one by one, and see if any of them are applicable. In fact, you can be almost certain that both Qt and KDE are covered by at least a few, though their validity may be questionable (but then the same is also true for any MS patents that supposedly apply to Mono).
Anyway, this turns out to be irrelevant, since, as someone else pointed out in this thread, Gnome does not depend o
Re: (Score:3, Interesting)
Yes, but with Qt, you don't have a rabid Microsoft fan bent on directly implementing Microsoft technology on Linux. Trolltech does not have Miguel. Novell does. Judging from Miguel's actions and his words, I think we have something to worry about. I don't think that Miguel is some sort of Manchurian Candidate, but he is driven by his admiration of everything Microsoft.
--
BMO
Re: (Score:2)
Re: (Score:3, Insightful)
What's fud about it?
Novell signed a contract with Microsoft. It indemnified people who got mono from Novell from liability. It doesn't cover third parties.
You go find the clause that covers third parties and get back to me.
--
BMO
Re: (Score:2)
That may be true. Frankly, I don't give a rat's ass because Microsoft will never, ever go after an insignificant, individual end user like me for patient infringement.
If you're thinking of GNOME in a business setting or are are distributing Mono, you may want to think twice. However, for as long as Mono exists under a free license, I'm happy to use it.
Re:Oh good! (Score:4, Interesting)
The problem is that they're trying to push Gnome/Mono into all the Linux distros, and Linux is increasingly being used by businesses and governments. If there is a patent trap in Mono (which exists regardless of its license), then that means that MS can then sue all those businesses and governments for patent infringement. Of course, the real idea probably isn't for MS to get money from people from patent suits, but basically to scare everyone away from Linux for once and for all, and only use "safe" MS software.
The reason that the Linux software you now enjoy is as usable and functional as it is is because many businesses and governments have been investing in it, working with it, and using it. If it were some project only used and developed by hobbyists at home, like ReactOS, it wouldn't be good for anything but playing around. Instead, we have an OS and thousands of apps that are all free, and we can use to do just about anything you can do in a Windows environment (and sometimes much more). The only thing we can't do is run some Windows-specific apps, but that's becoming less and less of a concern as more companies make Linux versions of their apps, and as more alternative apps become available or mature (e.g. OpenOffice).
So these issues which seem to affect only the larger players may not seem to affect you personally, but in reality they do.
Mono considered harmless (Score:5, Interesting)
If there is a patent trap in Mono (which exists regardless of its license), then that means that MS can then sue all those businesses and governments for patent infringement.
I'm getting sick of this meme.
Tell me, are you a patent attorney? What is your expertise for making claims like this one?
Now I'm not a patent attorney either, but here is my understanding: If Microsoft does assert some kind of submarine patent, the main effect will be to cause GNOME and everybody else to yank out Mono. At that point, we will just have to port the Mono apps to Java or something. That is the absolute worst case. Can you give me an example of any time where some company had a submarine patent, then suddenly asserted it, and successfully extracted a bunch of penalties from businesses and governments?
Furthermore, while I'm still not a patent attorney, I have read Groklaw for a while, and I read some essays there about the "unclean hands" doctrine. If a company has patent rights, and discovers that someone is infringing, that company has a duty to inform the infringers as soon as possible; it is not allowed to just let the patent sit there ticking like a bomb, and then demand extra damages because the infringer was infringing for so long.
So, let's review: Mono is a technology that is very similar to the JVM, which in turn is similar to other virtual systems, going all the way back to the UCSD P-system. The amount of prior art is staggering. Besides that, the only danger is a submarine patent, not a new patent: the
.NET stuff has been around for years and years, and you have to file for a patent before you publicly disclose a technology, or you lose your chance.
So, the alleged threat is that there is a patent already granted, that nobody has noticed, on technology that has a ton of prior art; and Microsoft is deviously not asserting the patent, but is going to later. Microsoft won't care about the negative publicity for itself and for
.NET, because it stands to gain so much and is certain its patent will survive all challenges. And anyone infringing will somehow be on the hook for penalties.
I for one don't believe any of it. C# is as safe as Java and Mono is as safe as the JVM.
steveha
Re: (Score:2)
You go find the clause that covers third parties and get back to me.
Here you go [microsoft.com].
Re: (Score:2)
The MCP only covers the ecma parts.
Anything mono that is not ecma is not covered.
The Novell situation is a whole different kettle of fish.
Re: (Score:2)
The mono trap and GNOME (Score:4, Informative)
> And Gnome has been adopting mono like it doesn't matter.
You are out of date. Have Fedora 13 Alpha + all updates in a VM right now and behold:
[root@Fedora13 ~]# rpm -qa | grep mono
dejavu-sans-mono-fonts-2.30-2.fc12.noarch
liberation-mono-fonts-1.05.2.20091019-5.fc13.noarch
Everything works just fine. They ditched F-Spot for Shotwell and replaced Tomboy with the C++ port GNote. With those gone mono doesn't need to be installed. Somebody caught the cluetrain and stopped Novell from infecting GNOME with their patent poison.
Re:The mono trap and GNOME (Score:4, Interesting)
That's encouraging.
Let's hope it stays that way.
But is that the "installed" or did they remove Tomboy and the rest in the repositories too?
--
BMO
Re: (Score:3, Insightful)
> Let's hope it stays that way.
I'd think the die is now cast. Enough folks yelled "STOP!" loud enough they backed off from making mono a core dependency and replaced a pair of otherwise keeper apps out of the default install. It would be pretty hard to introduce a must have dep now, what would they say to the authors of F-Spot and Tomboy?
> But is that the "installed" or did they remove Tomboy and the rest in the repositories too?
Mono, F-Spot and Tomboy are all still in the repo. No problem with tha
Re: (Score:2)
Remember a while back they were claiming to have some triple-digit number of patents that the Linux kernel infringes on?
Yes, how can one forget?
IBM and TurboHercules debacle
Yes. TurboHercules is suing IBM in a SCO-esque lawsuit. IBM is supposed to take it lying down?
I'm not convinced that mono infringes significantly more or stronger potentially-hostile patents than any other similarly complex piece of software.
It was enough to convince John Dragoon and Ron Hovesepian.
Given the choice between technology t
Re: (Score:2)
Kill GTK+ off? Maybe when Qt has more than one decent theme that doesn't require installing half of KDE, or when Qt themes don't require a compiler to create. Until then, get lost.
Re: (Score:2)
GTK+ themes require a compiler to create as well. Unless you count the color tweaking that you can do with the gtkrc files (something you can do easily through the GUI with KDE and isn't considered a theme). Of course, I don't see why a couple of megs of KDE libs is really a problem unless you are using a ten year old computer (but then it wouldn't be fast enough to use GNOME anyways).
The toolkit/DE zealots really amaze me sometimes.
Re: (Score:2, Insightful)
Oh, for Christ's sake. Mono is safe. Microsoft made a legally-binding promise not to sue. No judge would even hear the case if they attempted to bring a suit against <I honestly have no idea who they'd sue> unless the GNOME people managed to infringe upon something else in Microsoft's portfolio.
We have much bigger fish to fry. Squabbling about Theora and Mono isn't a productive use of your time, no matter how valid your arguments might be. The standards have been set.
Re: (Score:3, Insightful)
The contract that Novell signed is not the same as the "promise" that Microsoft published later. There is a difference. Novell is indemnified, and so are Novell's customers and people (e.g., developers) who get mono directly from Novell instead of a third party.
Come on, Microsoft does not like standards and interoperability. They are already undermining their own OOXML by using the proposed standard (the one that passed ecma, but not iso) that was rejected instead of the one that the ISO actually approve
Re: (Score:2)
The contract that Novell signed is not the same as the "promise" that Microsoft published later. There is a difference.
Yes, there is a difference. In one case, Microsoft can't sue because of a contract. In the other, they can't sue because of promissory estoppel. Either way, they can't sue.
But enjoy the hat, I'm sure it's very shiny and stylish.
Re: (Score:2)
In one case, only ecma mono is covered.
In the other case, all of mono is covered.
The former is the Microsoft promise.
The latter is the Novell contract. [osnews.com]
There you go.
--
BMO
Re: (Score:2)
In one case, only ecma mono is covered.
So only use the ECMA bits (which will soon be factored out), and use the free software stack bindings for everything else, just like, well, basically all existing Gnome Mono applications (last I checked, F-Spot wasn't using winforms).
There, problem solved. Happy now?
Re: (Score:2)
TM Repository is officially the saddest, most pathetic website I've ever seen: A tiny community of people who get together just to snark at Linux propaganda. It's like setting up a site to mock the CPUSA [cpusa.org].
Re: (Score:2)
Tmrepository is entirely unfunny and a lame ripoff of Adequacy.org.
Get some better writers.
--
BMO
Re: (Score:2)
Re:Uhmmmm (Score:4, Interesting)
There are a lot more things I don't like about KDE4. It tries to be all integrated, with a common notification daemon for example, so that status messages can appear with a consistent look in the corner of the screen. The problem is that virtually nothing supports it except for KDE apps that start with "K". If you want that sleek, consistent QT4 look, you're limited to a small subset of free software - there are a lot more GTK applications than QT applications. And I'd prefer to be able to use, for example, a different file manager. Without dolphin, you're unable to take advantage of KIO and whatever search index thing that KDE uses. KDE as a whole seems really tightly coupled - I regularly use gnome apps on my XFCE system without having the gnome libs installed. That's unheard of for KDE.
A particular barrier for me to use KDE is a decent web browser. I've used Konqueror for a few months and it's OK, but KHTML became intolerable. Arora (webkit powered) is good but incomplete. I have similar complaints about the usual KDE chat programs, music players, and Konsole.
Re: (Score:3, Interesting)
The problem is that virtually nothing supports it except for KDE apps that start with "K".
Actually, that's not true. With the latest Fedora, for example, Firefox uses KDE notifications under KDE, and GNOME notifications under GNOME. The integration's spreading really, really quickly, now that the DBUS API has been fixed down for a release cycle.
Re: (Score:3, Informative)
It looks like it's theoretically possible [mozilla.org] to build firefox with Qt widgets thanks to Nokia, but it's difficult and unstable.
And yes obviously you can just load both Qt and GTK libraries but it's ugly and memory-inefficient.
Re: (Score:2, Insightful)
and interestingly in kde 4.4 with firefox 3.6 it's even using the kde notification thing in the corner whenever it finds an update for things. I believe this is because its using a "standard" (don't know if it is or not) dbus thing to do the notifications so that both gnome and kde can use the same code.
Re: (Score:2)
KDE lost me as a user when they took away the ability to have a different background on each desktop (yeah, petty point, but it was still a really neat feature in my opinion.) That did cause me to take a closer look at KDE4 before I dropped it, and I found myself not liking the feel of that environment overall. Then I took a look at Gnome and tried it out briefly, but didn't care much for that either (plus, still no multiple backgrounds).
That was when I finally started taking a more in depth look at alter
Re: (Score:2)
The following article mentions how to do that. In the article, read the paragraphs just after the heading which says "Combine Virtual Desktops With Plasma Activities." Notice that the last sentence says "This will allow you to have a different
Re: (Score:2)
It may have been up through 4.2. I am pretty sure I had given up on KDE before 4.3. And I remember at the time, a web search on KDE +"multiple wallpapers" brought up tons of similar complaints with no one being able to offer any solution.
No matter now. I am becoming more and more enamored with Enlightenment these days, and feel no need to switch back to either of the "Big Two" desktop environments.
:)
Re: (Score:2)
Correct. I need to get work done, not play around with the GUI all day.
Re: (Score:2)
"simplicity"? SIMPLICITY???!
"xfce" is simple.
GNOME, on the other hand, is now a more bloated pig than CDE ever was.. which is amusing, because one of the gnome1 boasts was that it was much lighter than CDE.
Re:Uhmmmm (Score:5, Interesting)
"Simplicity" can mean different things.
Ask a regular user: a "simple" system hides any complexity; in this sense, Ubuntu is simple - everything is automated or set by GUI-based tools.
Ask a developer: a "simple" system is transparent; in this sense, Slackware is simple - there are few GUI-based tools to set the system.
Re:Uhmmmm (Score:5, Insightful)
I'm really tired of the trade-off between simplicity and functionality. This trade-off should not be inherent to either windowing system. Rather, the variety of options presented to the user should be configurable. Each distribution should be able to decide how simple or how configurable they want to make their windowing environment when it is first installed.
Uhuh.
And then people would bitch about bloat because supporting all those features, options, and workflows would required a fuckton more code.
So here's an idea: pick the environment that fits your needs. Gnome, KDE, XFCE, or heck, throw components together that fit your needs. But quit expecting these projects to be infinitely flexible, it's completely unreasonable.
Re: (Score:2)
So here's an idea: pick the environment that fits your needs. Gnome, KDE, XFCE, or heck, throw components together that fit your needs. But quit expecting these projects to be infinitely flexible, it's completely unreasonable.
That was my solution when I saw I didn't like the direction KDE4 was going. Openbox with netwmpager, fbpanel, and feh allowed me to keep my environment free of garbage all over my desktop.
Re:Uhmmmm (Score:4, Insightful)
I agree. I'm about to say something that almost everyone here will think is completely crazy, but I'm going to say it anyway: some of these DEs and Linux distros should focus less on being infinitely flexible and configurable, and more on coming up with one single configuration that works.
Not that I don't appreciate the flexibility and configurability, but having all these different options ought to mean that at least one option is consistant, standard, and controlled. give me a distro that only supports one DE, but make sure that DE's experience is really smooth. Take away all the different themes, and give me one single theme that's extremely polished. Maybe even don't try to support every possible kind of hardware, but certify some set of hardware and support that hardware really well. Go ahead and make some choices for me, just so long as those choices are really good choices.
A lot of people would say that sort of mindset is antithetical to the open source movement, but I don't think it is. Leave it open source, and let other people make any changes they'd like.
Anyway, I it's part of the reason for the success of OSX. While the geeks are all complaining about the lack of configurability, everyone else is happy with how well crafted the defaults are. I think Gnome operates along the same line (but not to the same degree), and while that earns it the wrath of a lot of geeks, it's the reason why so many Linux distros use Gnome as the default environment.
Re: (Score:2)
I think like 99% of linux users use kde
Don't think so.
The latest figure is 98.9%
Re: (Score:2)
Because no one who uses Linux uses the default install of Ubuntu... got it.
Re: (Score:2)
GNOME is just jealous that they could be more popular if they just made it look and work more like KDE [slushdot.com].
Seriously
... look at the difference between the fugliness that is Ubuntu (even with the new "blight" look), and the KDE variant. If they want to fix Ubuntu's visual problems once and for all, they should just do this [slushdot.com]. Because going from Halloween Orange-and-Black to "Rotting Eggplant" might be a change, but it's not much of an improvement
Re: (Score:2)
Or you can switch to a distro that wasn't put together by someone who is color-blind, aided by someone who is spatially challenged
... because like you said, it's STILL ugly.
My opensuse desktop and laptop both look gorgeous with the black Oxygen taskbar. Every once in a while, I change the wallpapers
... just 'cuz ...
Re: (Score:2)
I think a lot of KDE users disappeared with KDE4. (Score:5, Interesting)
I used KDE from KDE 1.0, when I switched away from TWM. I was fully integrated into the KDE "way of life," and reliant on lots of KDE apps.
I tried to use KDE4.0 but after about two weeks it got the boot. Though it has theoretically improved and I keep a KDE 4 installation on my Fedora 12 personal machine, logging into KDE thus far provides no incentive to switch back, despite updates.
Dolphin is still intolerably slow. Important apps still don't share a consistent appearance; Firefox, Chrome, and OpenOffice in particular look good in GNOME but are full of distracting artifacting and other appearance problems in KDE. GNOME apps in general don't mix well with KDE themes right now. The graphics still don't work right. A notification balloon is likely to take out half the taskbar, etc. They blame this on the radeon driver and I believe them, but that's the hardware I have, and GNOME shows none of the same problems. Desktop management for multiple monitors doesn't behave as I expect it to, and it's difficult to create a configuration that jostles well amongst varying configurations of external, internal, or both, monitors without taskbars disappearing or desktops shifting from display to display unexpectedly. The default icon theme is far too colorful and luminous for focused desktop work of the kind that I do (lots of writing, editing, and calculating) but there are few replacement icon sets to be found. The wireless connectivity manager seems incapable of working with my simple home WiFi installation without needing constant reconfiguration and tinkering, while in GNOME it "just works."
Yes, some of these things could be fixed, but to trudge through each one of them would require rather a lot of time and effort that I just don't have to spare. So despite the fact that I'm still not wild about GNOME either, KDE4 is simply not on the cards in the near future for me. What's missing everywhere is polish. Not the kind that makes widget corners have a "glass" appearance, but the kind that keeps widgets from disappearing or artifacting unexpectedly, or the kind that doesn't leave you wondering why the hell the widget doesn't work, or there isn't a widget for that at all, in the first place. Details work. Not big thoughts. KDE needs to cut out the innovation for a while and patch roof leaks.
I wouldn't be surprised to hear that many other KDE users right up through KDE 3.x switched to GNOME with the KDE4 release.
Re: (Score:3, Interesting)
Re: (Score:2)
I use KDE. 3.5.10 is still unbeatable when you want fast, clean and powerful. However, it's officially dead now, and the 4.x series sucks.
KDE4 is full of tiny little features that look like what you're used to, but do something completely different. Or they removed the most important button on the Konsole window, injected Amarok with featuritis, and k3b simply does not exist. (Wanna guess my three favorite applications?)
I'm starting to think Joel was right [joelonsoftware.com]. The KDE project wasted two years trying to produce
Re: (Score:2, Insightful): (Score:3, Informative)
By the way, first person to say that my having successfully made a First Post guarantees that I won't have any grandchildren gets hit. | https://tech.slashdot.org/story/10/04/08/2236241/GNOME-230-End-of-the-2x-Line | CC-MAIN-2016-36 | refinedweb | 7,209 | 72.66 |
I take it you've never had any introduction to algebra? N = some number thaty ou pick.
As for google not turning anything up, somehow I think you didn't even try...
Quzah.
Printable View
No, still nothing. Of couse I tried. I don't understand what they say. They state there is no formula to compute Prime variables. So then how is it my teacher expects me to do this exercise?
Code:
#include <stdio.h>
#define N 200
main()
{
int i, j, a[N];
for (i = 2; i < N; i++) a[i] = 1;
for (i = 2; i < N; i++)
if (a[i])
for (j = i; i*j < N; j++) a[i*j] = 0;
for( i = 2; i < N; i++)
if (a[i]) printf("%4d ", i);
printf("\n");
}
oh and i forgot to say its a algorithm called Sieve of Eratosthenes
My tutor won't expect me to do that... I doubt it.
Okay... here's like my last attempt... it's 2:00 flipping AM and I know I won't get this done, but, let's hope. 1-200, prime.
Prime can only be divided by one and itself. All numbers can be diveded by one and itself.
Okay, so what it I just wrote down the prime numbers upto 10 and then did
//bear with the crap code, I can't type... or see for that matter right now. My mouse crapped out on me, my tutor's arriving in approx 8 hours and I haven't slept for awhile...
rmndr=cntr%2 //rmndr is the remeinder int... cntr is the counter.
if(rmndr==0) { // no remeinder means its not a prime number.
printf("n.a.p"); }//Printed Not a Prime.
rmndr=cntr%3 //rmnder is now = to counter / 3's remeinder.
// And the cycle continues till 9.
// If we have 10, it should print out n.a.p. If we have 11, it should
// break to something that will print it (printf("%d", cntr);
// So how do I use an array with this ...
// Why is god so crueL?
// And can anyone really just break everything down and give me
// simple exp? I give up. There... I can't take this anymore. I have
// like 5 more exercises to do and I can't let this one hold me back.
// P.S. Tylenol suxors. Motrin roxorz.
// P.P.S. Thanks to all for putting up with my idiocy... I know...
// but you see, I only see my tutor once a week and I easily forget
// what she says...so I need to keep on going over things on my
// own and I get confused a lot. T I A.
Have you searched the board? This gets posted at least once a week.
Quzah.
*smiiiiiiile* Thanks... | http://cboard.cprogramming.com/c-programming/22895-basic-array-help-2-print.html | CC-MAIN-2016-18 | refinedweb | 458 | 85.49 |
Betting on Futures In TPL terms, a Future is basically a task that returns a value. It's like a deferred function. You start it running and then use its value later. If the Future hasn't finished calculating its value by the time you need it, it makes you wait while it finishes. For example, consider the Fibonacci numbers. The first two values are defined as Fib(0) = 0 and Fib(1) = 1. For larger values of N, Fib(N) = Fib(N—1) + Fib(N—2). The following code shows a function that calculates Fib(N):
Private Function Fibo(ByVal N As Long) As Long
If N <= 0 Then Return 0
If N = 1 Then Return 1
Return Fibo(N - 1) + Fibo(N - 2)
End Function
There are much more efficient non-recursive implementations of this function, but this version is a good candidate for parallelism because in sequential mode, it takes a long time to calculate Fib(N) when N is larger than 20 or 30. The following code performs a similar calculation but uses Futures to calculate the result for each recursive call:
Private Function FiboFullParallel(ByVal N As Long) As Long
If N <= 0 Then Return 0
If N = 1 Then Return 1
Dim t1 As Tasks.Future(Of Long) = _
Tasks.Future(Of Long).Create( _
Function() FiboFullParallel(N - 1))
Dim t2 As Tasks.Future(Of Long) = _
Tasks.Future(Of Long).Create(_
Function() FiboFullParallel(N - 2))
Return t1.Value + t2.Value
End Function
This function checks the base cases N = 0 and N = 1 as before. It then creates two Future objects. The parameter to the Create method is a lambda (aka inline) function that returns the result of the FiboFullParallel function evaluated for N—1 and N—2. As each Future is created, it may begin executing on any of the system's CPUs. The function finishes by adding the values of the two Futures and returning the result. If the Futures have finished calculating their values by the time the Return statement executes, all is well. If either Future is not finished, the program waits until the executing Future completes its calculation. While this version works, it is extremely slow and inefficient. The problem is that calculating FiboFullParallel(N) requires an enormous number of recursive calls to FiboFullParallel. For example, to calculate FiboFullParallel(10) the program must find FiboFullParallel(9) and FiboFullParallel(8). But to calculate FiboFullParallel(9), the program calculates FiboFullParallel(8) and FiboFullParallel(7); in other words, it has to calculate FiboFullParallel(8) twice. If you follow the calculation further, you'll find that the program must calculate each of the intermediate values a huge number of times. For example, to calculate FiboFullParallel(10), the function calls itself 176 times. To calculate FiboFullParallel(20), the function calls itself 21,890 times. The number of calls grows very quickly as N grows. Here's an improved version:
Private Function FiboSplit(ByVal N As Long) As Long
If N <= 0 Then Return 0
If N = 1 Then Return 1
Dim t1 As Tasks.Future(Of Long) = _
Tasks.Future(Of Long).Create(Function() Fibo(N - 1))
Dim t2 As Tasks.Future(Of Long) = _
Tasks.Future(Of Long).Create(Function() Fibo(N - 2))
Return t1.Value + t2.Value
End Function
This function builds two Future objects but instead of calling back to this function, these Futures invoke the original sequential function Fibo.
For example, when the main program calls FiboSplit(10), the Futures calculate Fibo(9) and Fibo(8). After that, function Fibo does not use Futures so it doesn't pay the overhead that they incur. In this version, the program splits the calculation into two Futures and after that the Futures execute sequentially. The program UseFutures demonstrates these functions (see Figure 3). The Result label shows the results for the 37th Fibonacci number. The Sequential label shows that the original Fibo function took 1.88 seconds while the Split label shows that the FiboSplit function took only 1.20 seconds. The program won't calculate FiboFullParallel for N greater than 25 because it takes too long (about 2.6 seconds for N = 25; probably years for N = 37). You can use the TPL classes and methods to split tasks into pieces that you can then run in parallel but, as the FiboFullParallel function demonstrates, don't split the problem too finely. If you break the problem into pieces that are each very easy to evaluate, the overhead of running them separately will outweigh the benefit of parallelism. Locking Earlier I briefly mentioned that you need to be careful to ensure that tasks running on different threads don't interfere with each other. There are many ways two threads can get in each other's way and I don't want to take the time to cover them all here, but I do want to describe a couple of techniques that you may find useful. One of the easiest ways for two threads to lock horns is if they read and write the same variable. For example, consider the following trivial subroutine. It simply adds an integer value to a total:
Private m_Total As Integer
Private Sub AddNoLock(ByVal i As Integer)
m_Total += i
End Sub
This code performs its addition in a single step using the += operator, which is so quick and simple that you would think nothing could go wrong. Unfortunately, if two threads execute this same piece of code at the same time, they can conflict. Suppose two threads (A and B) are running the AddNoLock subroutine with parameters 1 and 2 respectively. Now suppose thread A starts running its addition statement and reads the value of m_Total, which at the time is 10. Now suppose thread B gets a turn. It reads m_Total, which is still 10, and adds 2 to it, saving the result back in m_Total, which is now 12. Thread B congratulates itself on a job well done and retires. But thread A is still executing. It adds 1 to the value that it read previously (10) and gets 11, which it stores in m_Total. The final result is that m_Total holds the value 11 rather than the expected value of 13. If the two threads had run sequentially, the result would be 13. But because they ran at the same time, the result is 11. This is called a race condition. The two threads are racing to see which can read and write the variable first. Depending on the exact sequence of events, you can get one of several possible outcomes. Race conditions can be extremely hard to debug because they don't happen every time. With the small window for problems in this example (a single line of very fast code), the odds of a problem are fairly small, so the problem won't appear every time you run the program. Even worse, stepping through the code in the debugger will probably prevent the threads from interrupting each other, so you won't see the problem in the debugger at all. So what are the odds that two threads will interlace in just the right way to make this sort of problem occur? If you're an experienced programmer, then you know that this will happen with absolute certainty when you're demonstrating your program to your company president. But even when the big boss isn't there, the odds are still large enough to make race conditions a significant problem. The sample program UseLocks uses the following code to demonstrate this problem:
Dim num_trials As Integer = Val(txtNumTrials.Text)
Dim start_time As Date = Now
For i As Integer = 1 To num_trials
m_Total = 0
Parallel.For(1, 10001, AddressOf AddNoLock)
Next i
Dim stop_time As Date = Now
Dim elapsed As TimeSpan = stop_time - start_time
The program accepts a user-entered number that controls the number of trials. For each trial, the code calls Parallel.For to invoke the AddNoLock subroutine shown earlier 10,000 times. In other words, if the user enters 100 for the number of trials, the program spins out a million (100 x 10,000) threads. That's plenty large enough to cause a race condition most of the time. When I run the program for 100 trials, it takes only about 0.75 seconds to perform all of those additions but the result is not always the same. In fact, if I run only 1 trial, so the program starts 10,000 threads, it produces inconsistent results. There are several ways you can avoid this type of problem. One of the best is to make sure the threads never, ever, ever read and write the same variables. They may read from some shared variables but they should not write into shared variables. If you give the threads separate variables to write into, there should be no problem. The Task example described earlier demonstrated this approach by using separate m_Tasks and m_ElapsedTime array entries for each task. Another way to handle this problem is to use a lock. A lock reserves some resource for a particular thread and denies other threads access to the resource while it is locked. There are a couple of easy ways you can make locks in Visual Studio. In Visual Basic, you can use the SyncLock statement. You pass this command an object used as a token to prevent other threads from acquiring a similar lock. The following code, which is used by sample program UseLocks, demonstrates this approach. The AddSyncLock function uses a SyncLock statement to reserve access to the "Me" object (the object running this code—in this case, the form). It then adds its value to the total and releases the lock. If another thread tries to execute the function, it will block at the SyncLock statement until the first thread releases its lock:
Private Sub AddSyncLock(ByVal i As Integer)
SyncLock Me
m_Total += i
End SyncLock
End Sub
Here's the equivalent C# version:
private void AddSyncLock(int i)
{
lock(this)
{
m_Total += i;
}
}
Operations such as this that add one value to another are so common in complex multi-threaded applications that the Threading namespace provides an Interlocked class to make them easier. This class's static Add method creates an exclusive lock, adds a number to another number, and then releases the lock. Example program UseLocks uses the following code to demonstrate Interlocked.Add:
Private Sub AddInterlocked(ByVal i As Integer)
Threading.Interlocked.Add(m_Total, i)
End Sub
Figure 4 shows program UseLocks in action. The first result, which doesn't use locks, is incorrect. Also notice that the other two results that do use locks take longer—but both return the correct result. There is some overhead in using locks but getting the correct result is worth it.
In the coming years, computers will contain more and more CPUs. The operating system and some fancier compilers will probably take advantage of that additional power to give you better performance even if you do nothing, but to get the full benefit of this extra horsepower you'll need to change the way you program. The Task Parallel Library gives you new tools that let you parallelize some parts of applications relatively easily. Methods such as Parallel.Invoke, Parallel.For, and Parallel.ForEach, and classes such as Task and Future let you launch swarms of threads on different CPUs relatively easily. The library automatically balances the load across the CPUs available. If you have an old-fashioned single CPU system, you pay only a slight performance penalty. If you have a multi-core system, you'll get the benefits of multiple CPUs with very little extra effort. Despite the advantages, the underlying problems of implementing multiple threads haven't gone away. You will need to consider race conditions, deadlocks, non-thread-safe classes, and other issues that can arise when multiple threads execute at the same time. If you keep tasks separate you shouldn't have too much trouble getting started with TPL. More importantly, if you start thinking about and implementing parallelism in your code now, your programs will easily scale to quad-core, eight-core, and even systems with more CPUs. Some day soon, you'll plop a 32-core system on your desk and your programs will have every CPU blazing away at full capacity. But if you ignore parallelism, your customers will be sitting there with 31 idle CPUs (hopefully powered down) wondering why they're not getting better performance from the latest hot hardware. Editor's Note: Be sure to check back soon to read the follow-up article, which shows the kinds of improvements you can expect after applying parallelism to existing code.
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled.
Your name/nickname
Your email
WebSite
Subject
(Maximum characters: 1200). You have 1200 characters left. | http://www.devx.com/dotnet/Article/39204/0/page/5 | CC-MAIN-2014-42 | refinedweb | 2,149 | 62.98 |
Details
Description
if you have a facelets composite component with an attribute "test" that points to a property in a managed bean (e.g. #{myBean.property}
) which is currently null and you refer to that attribute in the implementation via #{cc.attrs.test} you can get the current value (null) or set a new value but you cannot get the type of the property (e.g. String[]). However if the property in the managed bean is non-null you can get the type.
For example:
<cc:interface
<cc:attribute
</cc:interface>
<cc:implementation>
<h:selectManyListbox value="#{cc.attrs.test}
">
<f:selectItems value="#
"/>
</h:selectManyListbox>
</cc:implementation>
--> calling #{cc.attrs.test}.getType() will fail if #{cc.attrs.test}
resolves to null, but will work if #{cc.attrs.test}
resolves to some valid value.
This currently results in a NullPointerException in _SharedRendererUtils.getConvertedUISelectManyValue().
Issue Links
- is duplicated by
MYFACES-3311 Can't resolve converter for cc attributes
- Closed
MYFACES-3316 Problem concerning MyFaces-1890: does not work when dealing with a composite component
- Closed
Activity
- All
- Work Log
- History
- Activity
- Transitions
One solution for this would be to return a special type != Map when resolving #{cc.attrs}
and providing a special ELResolver for this type. Then we could use the original ValueExpressions of the attributes from the composite component to determine the type. Of course this would totally break the spec!!!
What are your opinions about that? Is this too unimportant to make such great changes or should we consult the EG and maybe change this behavior? Maybe in the next major release (2.1)?
For now I'll commit a very ugly temporal workaround for this. Note that mojarra currently does the same thing as this workaround for this special scenario.
I think the only thing we can do is assume this case return null and deal with it, retrieving the real value and check its type. It is possible to change the ELResolver (Flash object implements Map but FlashELResolver resolve its values, instead MapELResolver).
In this example there is no way to check the type without retrieve the value, and note #{cc.attrs}
is a Map<String,Object>. I remember variants of the same issue long time ago on myfaces and in that time the solution was the same.
With this patch the CompositeComponentELResolver is enabled to determine the type of attributes from CompositeComponentAttributesMapWrapper. This would make some scenarios related to composite components work better, although it completely breaks the spec.
I will try to get this into the spec.
For future reference, this issue can be worked around in any JSF 2+ environment by simply adding a custom ELResolver with this method:
@Override
public Class<?> getType(ELContext context, Object base, Object property) {
if (base instanceof CompositeComponentExpressionHolder && property instanceof String) {
ValueExpression expr = ((CompositeComponentExpressionHolder) base).getExpression((String) property);
if (expr != null)
}
return null;
}
Remaining methods should return null/false i.e. do nothing.
Also, spec issue #745 is considered resolved at this point. We just need to address it when we do the JSF 2.2 implementation in MyFaces.
Continued discussion from MyFaces-3311. What is the status on this?
For what I see, it will be fixed in JSF 2.2 (see [1]). But shouldn't this be fixed for older versions too?
[1]:
It's not technically considered a bug, but a spec enhancement, so probably won't be fixed for older versions. Have you tried the workaround at ?
It's no real issue for me at present and there are several ways to make it work (with detours). But I am convinced this should be fixed as IMO this is basic functionality.
Patch to bring MyFaces into compliance with spec v2.2 in this regard. Augments Jakob's original approach as described in the spec.
This latest patch only begins to address the idea of providing a context parameter with which this behavior may be disabled for MyFaces < v2.2.x . We should probably hammer that out on dev@myfaces. I am still looking into reworking my tests into MyFaces' test structure.
I just found out why this happens: #{cc.attrs}
resolves to a Map (CompositeComponentAttributesMapWrapper) and thus javax.el.MapELResolver is used to resolve the values. Here is the important part of the implementation of the getType() method from Tomcat:
if (base instanceof Map<?,?>){ context.setPropertyResolved(true); Object obj = ((Map<?,?>) base).get(property); return (obj != null) ? obj.getClass() : null; }
This explains the behavior. So we can only circumvent this by not using a Map, however I don't know if we should really change this... | https://issues.apache.org/jira/browse/MYFACES-2552?attachmentOrder=asc | CC-MAIN-2017-22 | refinedweb | 758 | 59.3 |
#include <deal.II/base/function_time.h>
Support for time dependent functions. The library was also designed for time dependent problems. For this purpose, the function objects also contain a field which stores the time, as well as functions manipulating them. Time independent problems should not access or even abuse them for other purposes, but since one normally does not create thousands of function objects, the gain in generality weighs out the fact that we need not store the time value for not time dependent problems. The second advantage is that the derived standard classes like
ZeroFunction,
ConstantFunction etc also work for time dependent problems.
Access to the time goes through the following functions:
get_time: return the present value of the time variable.
set_time: set the time value to a specific value.
advance_time: increase the time by a certain time step.
The latter two functions are virtual, so that derived classes can perform computations which need only be done once for every new time. For example, if a time dependent function had a factor
sin(t), then it may be a reasonable choice to calculate this factor in a derived version of set_time(), store it in a member variable and use that one rather than computing it every time
value(),
value_list or one of the other functions of class Function is called.
By default, the advance_time() function calls the set_time() function with the new time, so it is sufficient in most cases to overload only set_time() for computations as sketched out above.
The constructor of this class takes an initial value for the time variable, which defaults to zero. Because a default value is given, none of the derived classes needs to take an initial value for the time variable if not needed.
Definition at line 72 of file function_time.h.
The type this class is initialized with and that is used to represent time.
Definition at line 107 of file function_time.h.
Constructor. May take an initial value for the time variable, which defaults to zero.
Virtual destructor.
Return the value of the time variable.
Set the time to
new_time, overwriting the old value.
Advance the time by the given time step
delta_t.
Store the present time.
Definition at line 113 of file function_time.h. | https://dealii.org/developer/doxygen/deal.II/classFunctionTime.html | CC-MAIN-2021-04 | refinedweb | 375 | 62.98 |
For all the hype about big data, much value resides in the world’s medium and small data. Especially when we consider the length of the feedback loop and total analyst time invested, insights from small and medium data are quite attractive and economical. Personally, I find analyzing data that fits into memory quite convenient, and therefore, when I am confronted with a data set that does not fit in memory as-is, I am willing to spend a bit of time to try to manipulate it to fit into memory.
The first technique I usually turn to is to only store distinct rows of a data set, along with the count of the number of times that row appears in the data set. This technique is fairly simple to implement, especially when the data set is generated by a SQL query. If the initial query that generates the data set is
SELECT u, v, w FROM t;
we would modify it to become
SELECT u, v, w, COUNT(1) FROM t GROUP BY u, v, w;
We now generate a sample data set with both discrete and continuous features.
%matplotlib inline
from __future__ import division
from matplotlib import pyplot as plt import numpy as np import pandas as pd from patsy import dmatrices, dmatrix import scipy as sp import seaborn as sns from statsmodels import api as sm from statsmodels.base.model import GenericLikelihoodModel
np.random.seed(1545721) # from random.org
N = 100001
u_min, u_max = 0, 100
v_p = 0.6
n_ws = 50 ws = sp.stats.norm.rvs(0, 1, size=n_ws) w_min, w_max = ws.min(), ws.max()
df = pd.DataFrame({ 'u': np.random.randint(u_min, u_max, size=N), 'v': sp.stats.bernoulli.rvs(v_p, size=N), 'w': np.random.choice(ws, size=N, replace=True) })
df.head()
We see that this data frame has just over 100,000 rows, but only about 10,000 distinct rows.
df.shape[0]
100001
df.drop_duplicates().shape[0]
9997
We now use
pandas’
groupby method to produce a data frame that contains the count of each unique combination of
x,
y, and
z.
count_df = df.groupby(list(df.columns)).size() count_df.name = 'count' count_df = count_df.reset_index()
In order to make later examples interesting, we shuffle the rows of the reduced data frame, because
pandas automatically sorts the values we grouped on in the reduced data frame.
shuffled_ixs = count_df.index.values np.random.shuffle(shuffled_ixs) count_df = count_df.iloc[shuffled_ixs].copy().reset_index(drop=True)
count_df.head()
Again, we see that we are storing 90% fewer rows. Although this data set has been artificially generated, I have seen space savings of up to 98% when applying this technique to real-world data sets.
count_df.shape[0] / N
0.0999690003099969
This space savings allows me to analyze data sets which initially appear too large to fit in memory. For example, the computer I am writing this on has 16 GB of RAM. At a 90% space savings, I can comfortably analyze a data set that might otherwise be 80 GB in memory while leaving a healthy amount of memory for other processes. To me, the convenience and tight feedback loop that come with fitting a data set entirely in memory are hard to overstate.
As nice as it is to fit a data set into memory, it’s not very useful unless we can still analyze it. The rest of this post will show how we can perform standard operations on these summary data sets.
For convenience, we will separate the feature columns from the count columns.
summ_df = count_df[['u', 'v', 'w']] n = count_df['count']
Suppose we have a group of numbers \(x_1, x_2, \ldots, x_n\). Let the unique values among these numbers be denoted \(z_1, z_2, \ldots, z_m\) and let \(n_j\) be the number of times \(z_j\) apears in the original group. The mean of the \(x_i\)s is therefore
\[ \begin{align*} \bar{x} & = \frac{1}{n} \sum_{i = 1}^n x_i = \frac{1}{n} \sum_{j = 1}^m n_j z_j, \end{align*} \]
since we may group identical \(x_i\)s into a single summand. Since \(n = \sum_{j = 1}^m n_j\), we can calculate the mean using the following function.
def mean(df, count): return df.mul(count, axis=0).sum() / count.sum()
mean(summ_df, n)
u 49.308067 v 0.598704 w 0.170815 dtype: float64
We see that the means calculated by our function agree with the means of the original data frame.
df.mean(axis=0)
u 49.308067 v 0.598704 w 0.170815 dtype: float64
np.allclose(mean(summ_df, n), df.mean(axis=0))
True
We can calculate the variance as
\[ \begin{align*} \sigma_x^2 & = \frac{1}{n - 1} \sum_{i = 1}^n \left(x_i - \bar{x}\right)^2 = \frac{1}{n - 1} \sum_{j = 1}^m n_j \left(z_j - \bar{x}\right)^2 \end{align*} \]
using the same trick of combining identical terms in the original sum. Again, this calculation is easy to implement in Python.
def var(df, count): mu = mean(df, count) return np.power(df - mu, 2).mul(count, axis=0).sum() / (count.sum() - 1)
var(summ_df, n)
u 830.025064 v 0.240260 w 1.099191 dtype: float64
We see that the variances calculated by our function agree with the variances of the original data frame.
df.var()
u 830.025064 v 0.240260 w 1.099191 dtype: float64
np.allclose(var(summ_df, n), df.var(axis=0))
True
Histograms are fundamental tools for exploratory data analysis. Fortunately,
pyplot’s
hist function easily accommodates summarized data using the
weights optional argument.
fig, (full_ax, summ_ax) = plt.subplots(ncols=2, sharex=True, sharey=True, figsize=(16, 6)) nbins = 20 blue, green = sns.color_palette()[:2] full_ax.hist(df.w, bins=nbins, color=blue, alpha=0.5, lw=0); full_ax.set_xlabel('$w$'); full_ax.set_ylabel('Count'); full_ax.set_title('Full data frame'); summ_ax.hist(summ_df.w, bins=nbins, weights=n, color=green, alpha=0.5, lw=0); summ_ax.set_xlabel('$w$'); summ_ax.set_title('Summarized data frame');
We see that the histograms for \(w\) produced from the full and summarized data frames are identical.
Calculating the mean and variance of our summarized data frames was not too difficult. Calculating quantiles from this data frame is slightly more involved, though still not terribly hard.
Our implementation will rely on sorting the data frame. Though this implementation is not optimal from a computation complexity point of view, it is in keeping with the spirit of
pandas’ implementation of quantiles. I have given some thought on how to implement linear time selection on the summarized data frame, but have not yet worked out the details.
Before writing a function to calculate quantiles of a data frame with several columns, we will walk through the simpler case of computing the quartiles of a single series.
u = summ_df.u
u.head()
0 0 1 48 2 35 3 19 4 40 Name: u, dtype: int64
First we
argsort the series.
sorted_ilocs = u.argsort()
We see that
u.iloc[sorted_ilocs] will now be in ascending order.
sorted_u = u.iloc[sorted_ilocs]
(sorted_u[:-1] <= sorted_u[1:]).all()
True
More importantly,
counts.iloc[sorted_ilocs] will have the count of the smallest element of
u first, the count of the second smallest element second, etc.
sorted_n = n.iloc[sorted_ilocs] sorted_cumsum = sorted_n.cumsum() cdf = (sorted_cumsum / n.sum()).values
Now, the \(i\)-th location of
sorted_cumsum will contain the number of elements of
u less than or equal to the \(i\)-th smallest element, and therefore
cdf is the empirical cumulative distribution function of
u. The following plot shows that this interpretation is correct.
fig, ax = plt.subplots(figsize=(8, 6)) blue, _, red = sns.color_palette()[:3] ax.plot(sorted_u, cdf, c=blue, label='Empirical CDF'); plot_u = np.arange(100) ax.plot(plot_u, sp.stats.randint.cdf(plot_u, u_min, u_max), '--', c=red, label='Population CDF'); ax.set_xlabel('$u$'); ax.legend(loc=2);
If, for example, we wish to find the median of
u, we want to find the first location in
cdf which is greater than or equal to 0.5.
median_iloc_in_sorted = (cdf < 0.5).argmin()
The index of the median in
u is therefore
sorted_ilocs.iloc[median_iloc_in_sorted], so the median of
u is
u.iloc[sorted_ilocs.iloc[median_iloc_in_sorted]]
49
df.u.quantile(0.5)
49.0
We can generalize this method to calculate multiple quantiles simultaneously as follows.
q = np.array([0.25, 0.5, 0.75])
u.iloc[sorted_ilocs.iloc[np.less.outer(cdf, q).argmin(axis=0)]]
2299 24 9079 49 1211 74 Name: u, dtype: int64
df.u.quantile(q)
0.25 24 0.50 49 0.75 74 dtype: float64
The array
np.less.outer(cdf, q).argmin(axis=0) contains three columns, each of which contains the result of comparing
cdf to an element of
q. The following function generalizes this approach from series to data frames.
def quantile(df, count, q=0.5): q = np.ravel(q) sorted_ilocs = df.apply(pd.Series.argsort) sorted_counts = sorted_ilocs.apply(lambda s: count.iloc[s].values) cdf = sorted_counts.cumsum() / sorted_counts.sum() q_ilocs_in_sorted_ilocs = pd.DataFrame(np.less.outer(cdf.values, q).argmin(axis=0).T, columns=df.columns) q_ilocs = sorted_ilocs.apply(lambda s: s[q_ilocs_in_sorted_ilocs[s.name]].reset_index(drop=True)) q_df = df.apply(lambda s: s.iloc[q_ilocs[s.name]].reset_index(drop=True)) q_df.index = q return q_df
quantile(summ_df, n, q=q)
df.quantile(q=q)
np.allclose(quantile(summ_df, n, q=q), df.quantile(q=q))
True
Another important operation is bootstrapping. We will see two ways to perfom bootstrapping on the summary data set.
n_boot = 10000
Key to both approaches to the bootstrap is knowing the proprotion of the data set that each distinct combination of features comprised.
weights = n / n.sum()
The two approaches differ in what type of data frame they produce. The first we will discuss produces a non-summarized data frame with non-unique rows, while the second produces a summarized data frame. Each fo these approaches to bootstrapping is useful in different situations.
To produce a non-summarized data frame, we generate a list of locations in
feature_df based on
weights using
numpy.random.choice.
boot_ixs = np.random.choice(summ_df.shape[0], size=n_boot, replace=True, p=weights)
boot_df = summ_df.iloc[boot_ixs]
boot_df.head()
We can verify that our bootstrapped data frame has (approximately) the same distribution as the original data frame using Q-Q plots.
ps = np.linspace(0, 1, 100)
boot_qs = boot_df[['u', 'w']].quantile(q=ps)
qs = df[['u', 'w']].quantile(q=ps)
fig, ax = plt.subplots(figsize=(8, 6)) blue = sns.color_palette()[0] ax.plot((u_min, u_max), (u_min, u_max), '--', c='k', lw=0.75, label='Perfect agreement'); ax.scatter(qs.u, boot)) blue = sns.color_palette()[0] ax.plot((w_min, w_max), (w_min, w_max), '--', c='k', lw=0.75, label='Perfect agreement'); ax.scatter(qs.w, boot);
We see that both of the resampled distributions agree quite closely with the original distributions. We have only produced Q-Q plots for \(u\) and \(w\) because \(v\) is binary-valued.
While at first non-summarized boostrap resampling may appear to counteract the benefits of summarizing the original data frame, it can be quite useful when training and evaluating online learning algorithms, where iterating through the locations of the bootstrapped data in the original summarized data frame is efficient.
To produce a summarized data frame, the counts of the resampled data frame are sampled from a multinomial distribution with event probabilities given by
weights.
boot_counts = pd.Series(np.random.multinomial(n_boot, weights), name='count')
Again, we compare the distribution of our bootstrapped data frame to that of the original with Q-Q plots. Here our summarized quantile function is quite useful.
boot_count_qs = quantile(summ_df, boot_counts, q=ps)
fig, ax = plt.subplots(figsize=(8, 6)) ax.plot((u_min, u_max), (u_min, u_max), '--', c='k', lw=0.75, label='Perfect agreement'); ax.scatter(qs.u, boot_count)) ax.plot((w_min, w_max), (w_min, w_max), '--', c='k', lw=0.75, label='Perfect agreement'); ax.scatter(qs.w, boot_count);
Again, we see that both of the resampled distributions agree quite closely with the original distributions.
Linear regression is among the most frequently used types of statistical inference, and it plays nicely with summarized data. Typically, we have a response variable \(y\) that we wish to model as a linear combination of \(u\), \(v\), and \(w\) as
\[ \begin{align*} y_i = \beta_0 + \beta_1 u_i + \beta_2 v_i + \beta_3 w_i + \varepsilon, \end{align*} \]
where \(\varepsilon \sim N(0, \sigma^2)\) is noise. We generate such a data set below (with \(\sigma = 0.1\)).
beta = np.array([-3., 0.1, -4., 2.])
noise_std = 0.1
X = dmatrix('u + v + w', data=df)
y = pd.Series(np.dot(X, beta), name='y') + sp.stats.norm.rvs(scale=noise_std, size=N)
y.head()
0 7.862559 1 3.830585 2 -0.388246 3 1.047091 4 0.992082 Name: y, dtype: float64
Each element of the series
y corresponds to one row in the uncompressed data frame
df. The
OLS class from
statsmodels comes quite close to recovering the true regression coefficients.
full_ols = sm.OLS(y, X).fit()
full_ols.params
const -2.999658 x1 0.099986 x2 -3.998997 x3 2.000317 dtype: float64
To show how we can perform linear regression on the summarized data frame, we recall the the ordinary least squares estimator minimizes the residual sum of squares. The residual sum of squares is given by
\[ \begin{align*} RSS & = \sum_{i = 1}^n \left(y_i - \mathbf{x}_i \mathbf{\beta}^{\intercal}\right)^2. \end{align*} \]
Here \(\mathbf{x}_i = [1\ u_i\ v_i\ w_i]\) is the \(i\)-th row of the original data frame (with a constant added for the intercept) and \(\mathbf{\beta} = [\beta_0\ \beta_1\ \beta_2\ \beta_3]\) is the row vector of regression coefficients. It would be tempting to rewrite \(RSS\) by grouping the terms based on the row their features map to in the compressed data frame, but this approach would lead to incorrect results. Due to the stochastic noise term \(\varepsilon_i\), identical values of \(u\), \(v\), and \(w\) can (and will almost certainly) map to different values of \(y\). We can see this phenomenon by calculating the range of \(y\) grouped on \(u\), \(v\), and \(w\).
reg_df = pd.concat((y, df), axis=1)
reg_df.groupby(('u', 'v', 'w')).y.apply(np.ptp).describe()
count 9997.000000 mean 0.297891 std 0.091815 min 0.000000 25% 0.237491 50% 0.296838 75% 0.358015 max 0.703418 Name: y, dtype: float64
If \(y\) were uniquely determined by \(u\), \(v\), and \(w\), we would expect the mean and quartiles of these ranges to be zero, which they are not. Fortunately, we can account for is difficulty with a bit of care.
Let \(S_j = \{i\ |\ \mathbf{x}_i = \mathbf{z}_j\}\), the set of row indices in the original data frame that correspond to the \(j\)-th row in the summary data frame. Define \(\bar{y}_{(j)} = \frac{1}{n_j} \sum_{i \in S_j} y_i\), which is the mean of the response variables that correspond to \(\mathbf{z}_j\). Intuitively, since \(\varepsilon_i\) has mean zero, \(\bar{y}_{(j)}\) is our best unbiased estimate of \(\mathbf{z}_j \mathbf{\beta}^{\intercal}\). We will now show that regressing \(\sqrt{n_j} \bar{y}_{(j)}\) on \(\sqrt{n_j} \mathbf{z}_j\) gives the same results as the full regression. We use the standard trick of adding and subtracting the mean and get
\[ \begin{align*} RSS & = \sum_{j = 1}^m \sum_{i \in S_j} \left(y_i - \mathbf{z}_j \mathbf{\beta}^{\intercal}\right)^2 \\ & = \sum_{j = 1}^m \sum_{i \in S_j} \left(\left(y_i - \bar{y}_{(j)}\right) + \left(\bar{y}_{(j)} - \mathbf{z}_j \mathbf{\beta}^{\intercal}\right)\right)^2 \\ & = \sum_{j = 1}^m \sum_{i \in S_j} \left(\left(y_i - \bar{y}_{(j)}\right)^2 + 2 \left(y_i - \bar{y}_{(j)}\right) \left(\bar{y}_{(j)} - \mathbf{z}_j \mathbf{\beta}^{\intercal}\right) + \left(\bar{y}_{(j)} - \mathbf{z}_j \mathbf{\beta}^{\intercal}\right)^2\right). \end{align*} \]
As is usual in these situations, the cross term vanishes, since
\[ \begin{align*} \sum_{i \in S_j} \left(y_i - \bar{y}_{(j)}\right) \left(\bar{y}_{(j)} - \mathbf{z}_j \mathbf{\beta}^{\intercal}\right) & = \sum_{i \in S_j} \left(y_i \bar{y}_{(j)} - y_i \mathbf{z}_j \mathbf{\beta}^{\intercal} - \bar{y}_{(j)}^2 + \bar{y}_{(j)} \mathbf{z}_j \mathbf{\beta}^{\intercal}\right) \\ & = \bar{y}_{(j)} \sum_{i \in S_j} y_i - \mathbf{z}_j \mathbf{\beta}^{\intercal} \sum_{i \in S_j} y_i - n_j \bar{y}_{(j)}^2 + n_j \bar{y}_{(j)} \mathbf{z}_j \mathbf{\beta}^{\intercal} \\ & = n_j \bar{y}_{(j)}^2 - n_j \bar{y}_{(j)} \mathbf{z}_j \mathbf{\beta}^{\intercal} - n_j \bar{y}_{(j)}^2 + n_j \bar{y}_{(j)} \mathbf{z}_j \mathbf{\beta}^{\intercal} \\ & = 0. \end{align*} \]
Therefore we may decompose the residual sum of squares as
\[ \begin{align*} RSS & = \sum_{j = 1}^m \sum_{i \in S_j} \left(y_i - \bar{y}_{(j)}\right)^2 + \sum_{j = 1}^m \sum_{i \in S_j} \left(\bar{y}_{(j)} - \mathbf{z}_j \mathbf{\beta}^{\intercal}\right)^2 \\ & = \sum_{j = 1}^m \sum_{i \in S_j} \left(y_i - \bar{y}_{(j)}\right)^2 + \sum_{j = 1}^m n_j \left(\bar{y}_{(j)} - \mathbf{z}_j \mathbf{\beta}^{\intercal}\right)^2. \end{align*} \]
The important property of this decomposition is that the first sum does not depend on \(\mathbf{\beta}\), so minimizing \(RSS\) with respect to \(\mathbf{\beta}\) is equivalent to minimizing the second sum. We see that this second sum can be written as
\[ \begin{align*} \sum_{j = 1}^m n_j \left(\bar{y}_{(j)} - \mathbf{z}_j \mathbf{\beta}^{\intercal}\right)^2 & = \sum_{j = 1}^m \left(\sqrt{n_j} \bar{y}_{(j)} - \sqrt{n_j} \mathbf{z}_j \mathbf{\beta}^{\intercal}\right)^2 \end{align*}, \]
which is exactly the residual sum of squares for regressing \(\sqrt{n_j} \bar{y}_{(j)}\) on \(\sqrt{n_j} \mathbf{z}_j\).
summ_reg_df = reg_df.groupby(('u', 'v', 'w')).y.mean().reset_index().iloc[shuffled_ixs].reset_index(drop=True).copy() summ_reg_df['n'] = n
summ_reg_df.head()
The design matrices for this summarized model are easy to construct using
patsy.
y_summ, X_summ = dmatrices(""" I(np.sqrt(n) * y) ~ np.sqrt(n) + I(np.sqrt(n) * u) + I(np.sqrt(n) * v) + I(np.sqrt(n) * w) - 1 """, data=summ_reg_df)
Note that we must remove
patsy’s constant column for the intercept and replace it with
np.sqrt(n).
summ_ols = sm.OLS(y_summ, X_summ).fit()
summ_ols.params
array([-2.99965783, 0.09998571, -3.99899718, 2.00031673])
We see that the summarized regression produces the same parameter estimates as the full regression.
np.allclose(full_ols.params, summ_ols.params)
True
As a final example of adapting common methods to summarized data frames, we will show how to fit a logistic regression model on a summarized data set by maximum likelihood.
We will use the model
\[P(s = 1\ |\ w) = \frac{1}{1 + \exp(-\mathbf{x} \gamma^{\intercal})}\].
As above, \(\mathbf{x}_i = [1\ u_i\ v_i\ w_i]\). The true value of \(\gamma\) is
gamma = np.array([1., 0.01, -1., -2.])
We now generate samples from this model.
X = dmatrix('u + v + w', data=df)
p = pd.Series(sp.special.expit(np.dot(X, gamma)), name='p') s = pd.Series(sp.stats.bernoulli.rvs(p), name='s')
logit_df = pd.concat((s, p, df), axis=1)
logit_df.head()
We first fit the logistic regression model to the full data frame.
full_logit = sm.Logit(s, X).fit()
Optimization terminated successfully. Current function value: 0.414221 Iterations 7
full_logit.params
const 0.965283 x1 0.009944 x2 -0.966797 x3 -1.990506 dtype: float64
We see that the estimates are quite close to the true parameters.
The technique used to adapt maximum likelihood estimation of logistic regression to the summarized data frame is quite elegant. The likelihood for the full data set is given by the fact that (given \(u\), \(v\), and \(w\)) \(s\) is Bernoulli distributed with
\[s_i\ |\ \mathbf{x}_i \sim \operatorname{Ber}\left(\frac{1}{1 + \exp(-\mathbf{x}_i \gamma^{\intercal})}\right).\]
To derive the likelihood for the summarized data set, we count the number of successes (where \(s = 1\)) for each unique combination of features \(\mathbf{z}_j\), and denote this quantity \(k_j\).
summ_logit_df = logit_df.groupby(('u', 'v', 'w')).s.sum().reset_index().iloc[shuffled_ixs].reset_index(drop=True).copy() summ_logit_df = summ_logit_df.rename(columns={'s': 'k'}) summ_logit_df['n'] = n
summ_logit_df.head()
Now, instead of each row representing a single Bernoulli trial (as in the full data frame), each row represents \(n_j\) trials, so we have that \(k_j\) is (conditionally) Binomially distributed with
\[k_j\ |\ \mathbf{z}_j \sim \operatorname{Bin}\left(n_j, \frac{1}{1 + \exp(-\mathbf{z}_j \gamma^{\intercal})}\right).\]
summ_logit_X = dmatrix('u + v + w', data=summ_logit_df)
As I have shown in a previous post, we can use
statsmodels’
GenericLikelihoodModel class to fit custom probability models by maximum likelihood. The model is implemented as follows.
class SummaryLogit(GenericLikelihoodModel): def __init__(self, endog, exog, n, **qwargs): """ endog is the number of successes exog are the features n are the number of trials """ self.n = n super(SummaryLogit, self).__init__(endog, exog, **qwargs) def nloglikeobs(self, gamma): """ gamma is the vector of regression coefficients returns the negative log likelihood of each of the observations for the coefficients in gamma """ p = sp.special.expit(np.dot(self.exog, gamma)) return -sp.stats.binom.logpmf(self.endog, self.n, p) def fit(self, start_params=None, maxiter=10000, maxfun=5000, **qwargs): # wraps the GenericLikelihoodModel's fit method to set default start parameters if start_params is None: start_params = np.zeros(self.exog.shape[1]) return super(SummaryLogit, self).fit(start_params=start_params, maxiter=maxiter, maxfun=maxfun, **qwargs)
summ_logit = SummaryLogit(summ_logit_df.k, summ_logit_X, summ_logit_df.n).fit()
Optimization terminated successfully. Current function value: 1.317583 Iterations: 357 Function evaluations: 599
Again, we get reasonable estimates of the regression coefficients, which are close to those obtained from the full data set.
summ_logit.params
array([ 0.96527992, 0.00994322, -0.96680904, -1.99051485])
np.allclose(summ_logit.params, full_logit.params, rtol=10**-4)
True
Hopefully this introduction to the technique of summarizing data sets has proved useful and will allow you to explore medium data more easily in the future. We have only scratched the surface on the types of statistical techniques that can be adapted to work on summarized data sets, but with a bit of ingenuity, many of the ideas in this post can apply to other models.
This post is available as an IPython notebook here. | http://austinrochford.com/posts/2015-08-03-counting-features.html | CC-MAIN-2017-13 | refinedweb | 3,750 | 51.14 |
On 11/17/2011 07:49 PM, Oleg Nesterov wrote:> On 11/17, Pavel Emelyanov wrote:>>>> Gentlemen, please, find some time for this, your ACK/NACK on the API proposal>> is required badly.> > Please.> >>with the security issue solved, but setting sysctl then cloning seems more obfuscatingto me than just passing an array of pids to clone.> OTOH, I do not pretend I understand the user-space needs, so I won't argue.> This series seems correct, the bugs we discussed are fixed.> > But. Speaking of API, it differs a bit compared to the previous version...> >> The API will be used like in the code below>>>> /* restore new pid namespace with an init in it */>> pid = clone(CLONE_NEWPID);> > Yes, CLONE_NEWPID | CLONE_CHILD_USEPIDS is not possible.It should be. If we (in theory, but) restore two pid namespaces with one beinga child of another we will have to create an init of the child ns with predefinedpid in the parent ns.> Then how the array of pids in child_tidptr[] can be useful? If CLONE_NEWPID> can't restore the pid_nr's in the parent namespaces, then probably this> doesn't makes sense at all?> > IOW. I think we should either allow CLONE_NEWPID | CLONE_CHILD_USEPIDS> (with additional check in set_pidmap() to ensure that CLONE_NEWPID> comes with child_tidptr[0] == 1), or we should treat the "overloaded"> child_tidptr as a simple pid_t.The child_tidptr[0] == 1 check will also work. Currently I check for thens->child_reaper being NULL instead.> Again, I won't insist. Just I want to be sure we do not miss something> adding the new API.> > Oleg. | http://lkml.org/lkml/2011/11/17/236 | CC-MAIN-2016-07 | refinedweb | 263 | 80.21 |
Dino Esposito
After covering some basic aspects of Facebook programming in previous columns, I’ll now discuss tools and techniques to view and retrieve content from a Facebook wall in order to share it through other means and catalog it for something else, such as business intelligence (BI) analysis.
Not all companies have the same level of interest in the world of social communities as does Facebook. One lesson I’ve learned, however, is that in all companies, periodically an internal department—usually marketing—ends up with a strong interest in getting closer to customers and, maybe more important, in having customers get closer to the company. A Facebook fan page is one of the tools to attract contacts, and the number of Likes and the level of activity on the page can measure the success of the initiative.
Where does programming fit in? To keep a Facebook page alive and kicking and to stimulate user activity that increases the number of “people talking about this,” you need to post interesting content—and frequently. Sometimes the company can afford a staff of people just researching and creating content for the Facebook page. Sometimes, instead, good content for the Facebook page comes straight from the regular flow of company business. In this case, it would be a bit problematic for employees doing their regular jobs to reserve extra time to report on a Facebook page what they’re doing. Imagine, for example, that news is posted to the Web site. The internal workflow might entail preparing the text, getting it approved, publishing it in the internal system and waiting for the content management system to make it live on the site. If the same news should be published to Facebook, too, most of the time the same person opens the Facebook page as an admin and just posts content manually. It often works this way today, but it’s not an approach that scales. This is just where Facebook programming fits in.
In recent installments of this column, I addressed the main topic of posting to a Facebook wall and the basics of building a Web site and a desktop application that can interact with the Facebook account of the user (you can see all my columns at bit.ly/hBNZA0). For an enterprise scenario where the target wall is the company’s fan page, the approach isn’t really different. All that changes is the account that receives posts.
So the first step toward Facebook programming is definitely finding a way to post content to specific walls in an automated way under the control of your software.
Over time, the content shared to the company’s fan page, which typically includes marketing communications, becomes a useful resource for the company. It becomes valuable information that the company might want to retrieve and further share or analyze. And this is another great fit for Facebook programming.
A quick and simple way to add some Facebook content to your site is through the Like Box widget. The widget lists recent posts made on a Facebook page as well as an optional list of users who like the page. For Web sites interested in using content published to Facebook, this is the first step to accomplish. It’s important to note that the Facebook Like Box social plug-in is only intended to be used with Facebook fan pages and won’t work if you connect it to a personal Facebook account.
Also, note that Facebook differentiates between fan pages and profile pages. The bottom line is that fan pages are for businesses, whereas profile pages are for individuals. There are some differences between the two as far as the allowed actions are concerned. First and foremost, a team of people can have admin rights on a fan page. In addition, posts from a fan page can be specifically targeted by language and location so they reach followers (well, fans, actually) who can best receive them. Fan pages support additional features and can be promoted via ads and sponsored articles.
Conversely, profile pages are intended to let owners stay in touch with friends and family. Being a friend becomes a mandatory condition to get updates, even though through the subscription mechanism you can allow non-friends to get your updates as well.
Configuring the Like Box for a Web page couldn’t be easier. You can preview the content being displayed and grab related HTML directly from the Facebook developers site. Go to bit.ly/hFvo7y for a live demo. In the end, it’s all about arranging a long URL to set on an iframe element. Figure 1 lists the parameters you can use in the URL.
Figure 1 Parameters to Configure the Facebook Like Box
All you do is arrange a URL and bind it to an iframe, as shown in Figure 2.
Figure 2 Binding a URL to an Iframe
<iframe src="//
?href=
&width=292&height=490
&colorscheme=light
&show_faces=false
&stream=true
&header=true
&appId=xxxxxxxxxxxxxxx"
scrolling="no"
frameborder="0"
style="border:none; overflow:hidden; width:292px; height:590px;"
allowTransparency="true">
</iframe>
It goes without saying that you can also embed the Like Box in a desktop application (for example, a Windows Presentation Foundation—or WPF— application) through a WebBrowser control.
As you can see, the plug-in allows for some quick styling that works most of the time. However, if you want to apply your own CSS (to the extent that it’s possible and documented) you should embed the Like Box via JavaScript and HTML5. Figure 3 shows the output of a Like Box for a sample site with custom and regular (light) style.
Figure 3 Sample Like Box
Another quick way to incorporate existing specific Facebook content in Web pages or desktop applications (via Web-browser components) is the Activity plug-in.
This plug-in aggregates stories resulting from the interaction that users have with your site through Facebook. Notice that the target here isn’t a Facebook page but rather an external site. Example actions that generate such feeds are liking content on the site, watching videos, commenting and sharing content from the site. The plug-in is also able to detect whether the current user of the site that contains the Activity plug-in is logged in to Facebook or not. If so, the displayed feed is restricted to friends of the user. Otherwise, the plug-in shows recommendations from across the site, while giving the user the option to log in to Facebook and receive more-targeted feedback. Here’s the markup you need (note that the Activity plug-in can only be consumed through HTML5 markup):
<div class="fb-activity"
data-
</div>
You must incorporate the JavaScript SDK in the page in order for this markup to produce what’s in Figure 4.
Figure 4 The Activity Plug-in in Action
In past columns, I used the Facebook C# SDK to post to the wall both plain text and attachments. Once the post is made, friends and fans can interact with it by liking it, sharing it and commenting on it. Let’s see what it takes to read the timeline of a given user.
Facebook assigns a unique and fairly long ID to any account, whether profile or fan page. Users, however, don’t use this ID to identify pages. So the first thing to do in order to read a timeline is match the public name of the page (or user) to the underlying Facebook ID. As a mere exercise, you can type the following into the address bar of any browser:.
The placeholder “your-page-name” is just the name of the account as you would type it to reach the page. Getting the ID of the account for which you intend to read the timeline is only the first step. You also need to be authenticated to access the feed. The bottom line is that any operation that goes directly against the underlying Facebook Graph API requires OAuth authentication. This means that the same preliminary steps discussed in past columns must also be done here:
Once acquired, the access token can be saved to a cookie and used for every further operation until it expires. Here’s the required code to get the raw feed from the Facebook server:
var name = "name-of-the-user";
// For example, joedummy
var client =
new FacebookClient(access_token);
dynamic user = client.Get(name);
dynamic feed =
client.Get(name + "/feed");
The first call to the Facebook client doesn’t strictly require the access token, as it’s expected to return only public information about the user. The user variable exposes properties such as first_name, last_name, id and location. Depending on your intentions, you might not need to place this call. The second call to the Facebook client is what really does the trick. It takes a string that denotes the path to the user’s feed. You build the path concatenating the account’s name with the /feed string. In return, you get a dynamic C# object built out of an underlying JSON stream. Figure 5 shows the structure of the JSON stream as captured by Fiddler.
Figure 5 The JSON Structure of a Timeline Item in Facebook
It shows that the selected post got two types of actions—it has been liked and commented upon. It also currently counts 14 likes. More details about people who commented on it or liked it are available as you expand nodes. Finally, you find the content of the post. At the JSON level, the content of the post is the message field.
It’s important to note that not all posts have the same structure; this is the reason why the Facebook C# SDK doesn’t use plain, statically defined classes as data transfer objects (DTOs). A common snag is that the post lacks a message, link and picture, but includes a story field. This is the case, for example, when the admin adds a collection of photos.
The Facebook C# SDK just hands you a dynamic C# object. Parsing that into more defined data types—or deciding that the object is good as is to trickle down to the view—is your call. Figure 6 shows some code that attempts to parse the dynamic object into a classic C# class.
Figure 6 Parsing a Dynamic Object into a Classic C# Class
public class FacebookPost
{
public String PostId { get; set; }
public String Author { get; set; }
public String Picture { get; set; }
public String Link { get; set; }
public String Published { get; set; }
public String ContentHtml { get; set; }
private delegate String ExtractDelegate();
public static IList<FacebookPost> Import(dynamic data)
{
var posts = new List<FacebookPost>();
foreach (var item in data)
{
var tempItem = item;
var fb = new FacebookPost
{
PostId = Extract(() => tempItem["id"]),
Published = Extract(() => tempItem["created_time"]),
Picture = Extract(() => tempItem["picture"]),
Link = Extract(() => tempItem["link"]),
Author = Extract(() => tempItem["from"]["name"])
};
try
{
fb.ContentHtml = Extract(() => tempItem["message"]);
}
catch
{
}
if (!String.IsNullOrEmpty(fb.ContentHtml))
posts.Add(fb);
}
return posts;
}
private static String Extract(ExtractDelegate func)
{
try {
return func();
} catch {
return null;
}
}
}
The most annoying part of this code is finding an effective way to check whether a given property is defined on the dynamic object you parse. Quite a few stackoverflow.com users agree on the approach shown in Figure 6, based on a delegate.
Dealing with social networks opens up a whole new world of possibilities, which become opportunities for developers to think of and realize new creative applications. As a .NET developer, you should be a friend of the Facebook C# SDK.: Scott Densmore (Microsoft)Scott Densmore is a a senior development lead at Microsoft working on features in Visual Studio. He has worked on agile development teams for the last 10 years at large corporate environments and small startups. His primary interests are cloud computing, mobile device computing and social computing. You can find him on Twitter at @scottdensmore.
More MSDN Magazine Blog entries >
Browse All MSDN Magazines
Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus. | https://msdn.microsoft.com/en-us/magazine/dn166925.aspx | CC-MAIN-2015-18 | refinedweb | 2,019 | 60.04 |
RavenDB… What am I Persisting, What am I Querying? (part 1)
RavenDB… What am I Persisting, What am I Querying? (part 1)
Join the DZone community and get the full member experience.Join For Free
MariaDB TX, proven in production and driven by the community, is a complete database solution for any and every enterprise — a modern database for modern applications.
Author: Phillip Haydon
A couple of questions that pop’s up a lot in the #RavenDB JabbR chat room by people picking up RavenDB for the first time are; what am I persisting?, and how do I query relationships?.
When we use relational databases we often de-normalize our data into multiple tables, usually this is done to get rid of duplication of data. We do this by adding 100’s of foreign keys to our tables relating things all over the place, we had a CountryId to our Address, a UserId to our Order, an OrderId to our OrderLine.
There’s many reasons why this was done, some of which Oren describes in his Embracing RavenDB post.
Then when we go to query those relationships we have to join data, when we have an entity with multiple relationships we end up getting into complex queries with cartesian joins, performance starts to degrade, and things just get messy.
When working with Document Databases we throw all that out the window and we deal with Root Aggregates. These are objects that are responsible for their child objects, you don’t load the child objects individually, they are loaded with the root or parent object.
The most common example I see is Blog/Posts/Comments, but I’m going to explain an easier scenario.
Order/OrderLine
The Order/Orderline is much easier to understand since it’s a scenario would probably always end up being the same in every system.
It also easier to understand because when displaying an OrderLine in any system, it’s always displayed with the Order details, and never by itself. So when we query for the Order it makes sense to always get the OrderLine at the same time.
When working with Business Rules applied to an Order, they almost always apply to the OrderLines also, so again you’re working with the entire Order, not a portion of it.
When starting out it’s hard to imagine, but the OrderLine is actually part of the Order, it’s not a separate entity, it’s just when we persist it in two tables since that makes more sense in a relational database, and it ends up feeling like two separate things, when in reality, it’s still the same object.
public class Order { public string Id { get; set; } // Other properties… public IEnumerable<OrderLine> Lines { get; set; } } public class OrderLine { public int Quantity { get; set; } public decimal Price { get; set; } public decimal Discount { get; set; } public string SkuCode { get; set; } // Other Properties }
So when we persist this with a Relational Database these would go into two different tables. Order and OrderLine tables, joined by a foreign key.
But now that we are thinking about the Root Aggregate, the Order, when we persist this with RavenDB we persist just the Order. When we persist ‘just the order’ that means we persist the ENTIRE Order object, including the OrderLines, since they are the Order.
When persisted to RavenDB we end up with a JSON document that looks similar to:
{ Id: 'orders/123', Lines: [ { Quantity: 1, Price: 12.95, Discount: null, SkuCode: 'N1C3' }, { Quantity: 3, Price: 6.23, Discount: null, SkuCode: 'F4K21' } ] }
Note: I purposely left out other properties for now.
As you can see we are persisting the entire root object itself. We don’t put OrderLines into a separate document or collection.
Note: I do realise I’ve mentioned persisting the entire object multiple times, but it’s something that some people find hard to wrap their head around at first. It confused me when I first started messing around with MongoDB.
When we query for the Order: session.Load<Order>(“orders/123”); we end up fetching all the OrderLines at the same time. No joins, no separate queries, just the entire order.
In a relational database we would have had to issue 2 separate queries, or join the tables together, like:
SELECT * FROM [Order] o LEFT JOIN [OrderLine] ol ON o.Id = Ol.OrderId
This makes querying the database more complicated than it needs to be. There are other ways around this in a relational database, you can blob the OrderLines. But then you lose the ability to search against OrderLines.
Why this example and not Post/Comments?
I don’t think Post/Comments is a good example to work from, Comment’s can be displayed with a Post, and without a Post, they can be paged, displayed on an individual page, in a ‘latest comments’ column on your blog, etc.
Some of these scenarios may justify putting Comments into their own collection.
However, more often than not, non-popular blogs such as my own only occur a few comments, so there’s no real reason to put them in their own collection, you can easily get away with putting them on the Post document.
I think this comes down to personal preference and the business problem you’re trying to solve, but for a learning exercise it makes it harder to understand. My personal preference is to store Comment’s in a separate collection, because you click through from the post listing screen to the post and load the comment’s, and if there are > x number of comments then I would page them and only display the latest comments, or high rated comments if they were rated voted up/down.
I hope that clear’s up what’s being persisted.
In part 2 I’m going to go over References (Relationships), and in part 3 MapReduce (doing all those fancy SQL queries inside RavenDB and what is happening)
MariaDB AX is an open source database for modern analytics: distributed, columnar and easy to use.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/ravendb%E2%80%A6-what-am-i-persisting | CC-MAIN-2018-39 | refinedweb | 1,029 | 57.91 |
Welcome to the Parallax Discussion Forums, sign-up to participate.
I never thought of working git/github the way you describe - I ONLY thought of it a way of working with others.
Just been watching the Git instructional videos!
RS_Jim wrote: »
Whit+ wrote
Just been watching the Git instructional videos!
I am convinced! How does one get started with GIT?
Jim
Heater. wrote: ».
Don't forget physical copies for when the network or site is down.
Heater. wrote: »
Recently I have been trying to explain to our manager types how it would be better to keep our docs in github so that everyone knows what is what, who changed what and when, and track suggested improvements.
Mickster wrote: »
......
I look at today's bloated .exe files and wonder how much of this mentality there is, out there.
int main (int argc, char* argv[])
{
printf("Hello World!");
return (0);
}
Genetix wrote: »
What about a Text-based User Interface like the old DOS Shell?
Easier to use than the command line but not the hog of a full blown GUI.
I am only an egg -- Stranger in a Strange land, Robert A. Heinlein
What about a Text-based User Interface like the old DOS Shell?
Easier to use than the command line but not the hog of a full blown GUI.
One cannot contain the whole project, or many projects, in ones head all the time. Projects get put aside whilst all the rest of life gets one's attention. Then when one gets back to it, all kind of details about what is what, what is where, and "what the hell was I doing anyway?" are forgotten.
Arguably, the "me" that comes back to a project is not the same "me" that left it a month ago.
Historically we deal with this problem by meticulously taking notes. Engineer's log books and all that. We leave messages to our future selves. The "others".
For software projects git automates that note taking.
With the added bonus that we can share with actual "others" if we so desire.
But, less abstractly, it's great that one can experiment with a new idea, perhaps fail and trash the whole program, and then "git checkout ... ", BOOM, everything is back as it was. No harm done.
This kind of liberates your mind to be adventurous.
Parallax colors simplified:<br>
My experience of old school source management systems is that one would checkout a file, or a bunch of files, hack on them for days or weeks, then check them back in again when whatever you did looked good.
The whole process was so complex, clunky and slow that one did not want to do that check out/check in thing too often. It was like some horrible office administration chore.
The there is git. Never mind the checkout thing, just hack the files you want like you normally do.
If your changes look good add those files to a possible commit. Then commit them.
This is all so quick and easy one ends up doing it multiple times per day.
So much easier than saving a snapshot of a directory, remembering to adjust the date/version extension to the directory name, remembering which version had what changes/fixes in it. Etc, etc.
I got the idea Heater and potatohead from the first mention. This is true in so many fields of endeavor - having a record (memory) of how one's thinking is changing and evolving is an education in itself and leads to new avenues to explore - good ones are integrated and poor ones can be dismissed (or filed for a more appropriate application).
What did someone say above? Comments are left for the next idiot trying to figure out what you did - often you are the idiot!
This thinking about programming is very instructive - often I am only thinking about the problem I am solving. What a marvelous discussion this has turned in to.
Thanks again to all of you - especially those of you I have not replied to specifically. I really appreciate all the thoughtful responses.
"We keep moving forward, opening new doors, and doing new things, because we're curious and curiosity keeps leading us down new paths." - Walt Disney
I've been working with BlocklyProp GitHub and BlocklyProp developers since it first came out - I have been leaving input for others and making suggestions for improvement, but really had NO idea what GitHub was actually tracking and allowing to happen. My mind is blown.
This sort of tracking will be great for me and I now understand (at least a bit more fully) what I have been doing and will continue to do with people obviously much smarter than me!
Thanks for the education! It is amazing when you discover big blind spots - things right in front of you that you just aren't aware of - then (of course), once you are aware - they show up everywhere!
Git will be installed shortly - locally!
"We keep moving forward, opening new doors, and doing new things, because we're curious and curiosity keeps leading us down new paths." - Walt Disney
Jim
See
"We keep moving forward, opening new doors, and doing new things, because we're curious and curiosity keeps leading us down new paths." - Walt Disney
I'd also recommend adding branching and rebasing to your workflow. Break off onto a branch, do your work making frequent commits and then rebase to clean up the history before merging the branch into master...
While I agree it can get complex (and I've got into some massive tangles while learning), I disagree on branching/merging. It doesn't have anything to do with GitHub and can all be done on your local machine and local repo. Say I'm working on a new feature and I get a bug report that comes in or stands in my way. That could get messy if you are doing all of your work on the master branch. If you did this, it's much easier:
1) git branch my_feature (makes a new branch locally)
2) git checkout my_feature (checkout that branch to work on)
3) <do work>
4) git add -u (add all tracked changed files, or specific files, whatever you want to do here)
5) git commit -m "Implemented part 1 of new feature"
* Bug report!!! *
6) git checkout master
7) git checkout -b my_bug_fix (make a new branch and check it out - handy shorthand)
8 ) <do work on bug fix, add, commit, etc>
9) git checkout master
10) <Work on whatever, back to the feature, another bug, etc>
11) git merge my_bug_fix (I'm happy with my bug fix, merge it in to master branch)
12) git branch -d my_bug_fix (cleanup your old merged branches!)
13) git checkout my_feature
14) git rebase master (put the changes in my_feature branch onto the new version of what's in master. Adding interactive flag will let you squash commits, respell messages, etc)
It's a bit confusing at first, but is a wonderful workflow.
If you've messed up your branch you can always checkout and older hash to roll back the whole thing or a specific file. git log is your friend there.
My only little point was that for a beginner to git, likely one guy with his hobby project, all that is too much to grasp from the get go.
Point is, git helps with the simple stuff as well. Hack code, save version, hack code, save version. Oh poop I messed up today, get back to yesterday's version and start again....
It's quick and easy. The other powers of git can be acquired with time.
Recently I have been trying to explain to our manager types how it would be better to keep our docs in github so that everyone knows what is what, who changed what and when, and track suggested improvements.
Oh no, they just want to juggle files on dropbox or google docs.
Grrr...
Don't forget physical copies for when the network or site is down.
"We keep moving forward, opening new doors, and doing new things, because we're curious and curiosity keeps leading us down new paths." - Walt Disney
Way to go!
@Genetix I sympathise with that point of view. I like to stash everything locally.
More and more I find that if there is no network there is no point in continuing.
But this is the beauty of git. If github or bitbucket or whatever goes away, it does not matter. My local repos on my machines are clones.
Have you checked out GitHub pages? Jekyll rendering of anything in a gh-pages branch. Here's an example
The docs:
Jim
He claimed that he'd proven time and again that it was necessary.
His program was a pre-processor for AutoCAD and people actually paid $17,000/seat for this clunky product.
I look at today's bloated .exe files and wonder how much of this mentality there is, out there.
Back in the C64 days it was amazing what programmers could squeeze into 64K but PC programs are resource hogs.
Small, useful programs can be written in C for PCs. Poor programming and GUIs are the resource hogs.
Novel Solutions - - Machinery Design - Product Development
"Necessity is the mother of invention." - Author unknown.
Far too much!
Life is unpredictable. Eat dessert first..
Easier to use than the command line but not the hog of a full blown GUI.
you're joking, right? Try reading this thread using lynx ()
I gave up on that half long ago.........
The link does not work because this forum software is broken. The "(" in the URL ends the URL when it should not. Copy it as text and paste into the browser.
I can't get anything but the first page of this thread in Lynx.
@frank,
Oh my God, ClearCase. I still have nightmares about that beast from my time having to use it at Nokia. Even worse than Visual Source Safe. | http://forums.parallax.com/discussion/comment/1436761/ | CC-MAIN-2019-35 | refinedweb | 1,674 | 81.43 |
Besides the new interfaces for threads and atomic operations that I already mentioned earlier, others of the new features that come with C11 are in reality already present in many compilers. Only that not all of them might agree upon the syntax, and especially not with the new syntax of C11. So actually emulating some these features is already possible and I implemented some of them in P99 on the base of what gcc provides: these are
static_assert(or
_Static_assert) to make compile time assertions (a misnomer, again!)
alignof(or
_Alignof) to get the alignment constraint for a type
alignas(of
_Alignas) to constraint the alignment of objects or
structmembers. Only the variant that receives a constant expression is directly supported. The variant with a type argument can be obtained simply by
alignas(alignof(T)).
noreturn(or
_Noreturn) to specify that a function is not expected to return to the caller
thread_local(or
_Thread_local) for thread local storage
_Genericfor type generic expression.
The most interesting among them is probably the latter feature, type generic expressions.
Gcc already has three builtins that can be used to something like
_Generic:
__typeof__(EXP)gives the type of the expression
EXP
__builtin_types_compatible_p(T1, T2)is true if the two types are compatible
__builtin_choose_expr(CNTRL, EXP1, EXP2)choose between the two expressions at compile time.
This only gives binary decisions and not multiple ones like
_Generic but with a good dose of
P99_FOR we can come to something that resembles a lot. (I spare you the details of the implementation.) The syntax that we support is as follows
#include "p99_generic.h" #define P99_GENERIC(EXP, DEF, ...) <some nasty use of P99 and gcc builtins>
For example as in the following
P99_GENERIC(a, , // empty default expression (int*, a), (double*, x));
That is an expression
EXP, followed by a default value
DEF, followed by a list of type value pairs. So here this is an expression that depending on the type of a will have a type of
int* or
double* that will be set to
a or
x, respectively.
In C11 syntax, the above would be coded with some kind of “label” syntax:
_Generic(a, int*: a, double*: x);
As you can see above, the default value can be omitted. If so, it is replaced with some appropriate expression that should usually give you a syntax error.
Here is an example with a default expression that will be used when none of the types matches:
uintmax_t max_uintmax(uintmax_t, uintmax_t); int max_int(int, int); long max_long(long, long); long long max_llong(long long, long long); float max_float(float, float); double max_double(double, double); a = P99_GENERIC(a + b, max_uintmax, (int, max_int), (long, max_long), (long long, max_llong), (float, max_float), (double, max_double))(a, b);
In C11 syntax
a = _Generic(a + b, default: max_uintmax, int: max_int, long: max_long, long long: max_llong, float: max_float, double: max_double)(a, b);
Here all the expressions evaluate to a function specifier. If
a + b is
int, … or
double the appropriate maximum function is chosen for that type. If none of these matches, the one for
uintmax_t is chosen. The corresponding function is then evaluated with
a and
b as arguments.
- Because the choice expression is
a + bits type is the promoted common type of
aand
b. E.g for all types that are narrower than
int, e.g
short, normally int will be the type of the expression and
max_intwill be the function. If
awould be
unsignedand
bwould be
doublethe result would be
doubleas well.
- The return type of the
_Genericexpression is a function to two arguments. If it would be for
int, e.g, the type would be
int ()(int, int). So the return type of the function call would be
intin that case.
- The arguments are promoted and converted to the expected type of the chosen function.
NB: if the compiler is already C11 complying, the
P99_GENERIC expression will just be translated to the corresponding
_Generic expression.
Otherwise only gcc and compatible compilers are supported.
Addendum: Today I checked out the new version of clang and discovered that their new version 3.1 that came out in December already supports
_Generic directly. Well done.
8 thoughts on “Emulating C11 compiler features with gcc: _Generic”
Just wanted to let you know
P99_GENERICrocks.
Just one thing, it would be great if the default value would allow something to abort compiling, like
currently I use
assert("fail")as default which is of incorrect type, as it returns
void, same behaviour, but feels wrong.
Anyway, great to have this.
You probably mean something like
static_assertsince this would be a compile time error, no?
The idea of
_Genericas by C11 is that there is automatically an error, if no default is given and none of the choices triggers. This should be the case here, too, since then you have an empty expression that messes up the syntax completely.
To improve the error message one could in effect add a
_Pragma. Something like
Unfortunately giving diagnostics through pragmas is not standard, this version here would work for gcc and Co. But other compilers should at least give you a diagnostic for an unknown
#pragma.
Actually a solution with
_Pragmawill not work, because this always gives the diagnostic, regardless of the actual choice expression. But I found another solution that is a bit more sophisticated and uses a gcc extension, namely error attributes to functions. (Using gcc features is not a restriction since we are talking of the mapping of
P99_GENERICto gcc, anyhow.)
I will push a new release with that solution later today.
I created a snippet to demonstrate my problem with default values.
For me, if there is no default value, it does not work at all, even matching generics end up being NULL instead.
Now,
0x7f70f1658c48 (nil) & 0x7f70f1658c48 0x7fff2394d620is not what I expected, but maybe it is my expectations?
Actually I’d expect this to fail during compiling, as you said the default is empty, and nothing matches, so fail?
0x7f70f1658c48 (nil)is wrong to me as the generic for fB should have matched,
0x7f70f1658c48 0x7fff2394d620, &Amatches
struct msg B *?
It works when I use a default value, static_asserts do not work here, assert does, as the return value mismatches and can’t be casted properly.
new version works fine, thanks for taking care.
Thanks to you for testing and giving feedback. Glad that this works, now.
For the values that you saw printed. These were the “correct” ones. You didn’t initialize your variables. The corresponding members that you printed were just uninitialized pointers. If I initialize the variables correctly (
P99_INITcomes handy) it prints
(nil) (nil)for the first print.
Jens
PS: please use
[sourcecode][/sourcecode]tags when you include larger code snipsets in a comment
I saw I was missing proper initialization, still the p99-2012-01-28 version was somewhat inconsistent:
I did not expect
to compile, and the result returned by it was somewhat off my expectations too.
Looking on the cpp results
I think the zero default value for 2012-01-28 was wrong, but well, it is fixed for me in 2012-01-30, which can be seen here:
While the error returned by gcc does not match the information cpp has
it’s enough to see something is wrong.
Thanks again.
Ah, not to forget, if you want to use the snippet for a regression test, it is all yours.
– I would have used the tags, but I was unable to find a link for the syntax – maybe I would see a syntax menu bar if I had js on?
Yes, probably was too naive.
It is very difficult to force gcc to spit out useful error diagnostics for all the cases. After all, this is just an emulation of the feature.
What you see is the fallback diagnostic when
P99_GENERICis used as a statement or full expression. If it is used as part of another expression, you’d basically see what you have seen through the preprocessor.
The diagnostic when it is used in a global declaration context is even worse. It just tells that there is a function call where a compile time expression is expected. I guess we have to live with that for the time being.
No, in the menu you just have simple editing features, but not that one. This is one of the features of wordpress that you’d have to know about.
Jens | https://gustedt.wordpress.com/2012/01/02/emulating-c11-compiler-features-with-gcc-_generic/?shared=email&msg=fail | CC-MAIN-2019-04 | refinedweb | 1,398 | 61.67 |
Route is a client routing library for Dart that helps make building single-page web apps.
Add this package to your pubspec.yaml file:
dependencies: route_hierarchical: any
Then, run
pub install to download and link in the package.
Route is built around
UrlMatcher, an interface that defines URL template
parsing, matching and reversing.
The default implementation of the
UrlMatcher is
UrlTemplate. As an example,
consider a blog with a home page and an article page. The article URL has the
form /article/1234. It can matched by the following template:
/article/:articleId.
Router is a stateful object that contains routes and can perform URL routing on those routes.
The
Router can listen to
Window.onPopState (or fallback to
Window.onHashChange in older browsers) events and invoke the correct
handler so that the back button seamlessly works.
Example (client.dart):
library client; import 'package:route_hierarchical/client.dart'; main() { var router = new Router(); router.root ..addRoute(name: 'article', path: '/article/:articleId', enter: showArticle) ..addRoute(name: 'home', defaultRoute: true, path: '/', enter: showHome); router.listen(); } void showHome(RouteEvent e) { // nothing to parse from path, since there are no groups } void showArticle(RouteEvent e) { var articleId = e.parameters['articleId']; // show article page with loading indicator // load article from server, then render article }
The client side router can let you define nested routes.
var router = new Router(); router.root ..addRoute( name: 'usersList', path: '/users', defaultRoute: true, enter: showUsersList) ..addRoute( name: 'user', path: '/user/:userId', mount: (router) => router ..addRoute( name: 'articleList', path: '/acticles', defaultRoute: true, enter: showArticlesList) ..addRoute( name: 'article', path: '/article/:articleId', mount: (router) => router ..addRoute( name: 'view', path: '/view', defaultRoute: true, enter: viewArticle) ..addRoute( name: 'edit', path: '/edit', enter: editArticle)))
The mount parameter takes either a function that accepts an instance of a new child router as the only parameter, or an instance of an object that implements Routable interface.
typedef void MountFn(Router router);
or
abstract class Routable { void configureRoute(Route router); }
In either case, the child router is instantiated by the parent router an injected into the mount point, at which point child router can be configured with new routes.
Routing with hierarchical router: when the parent router performs a prefix match on the URL, it removes the matched part from the URL and invokes the child router with the remaining tail.
For instance, with the above example lets consider this URL:
/user/jsmith/article/1234.
Route "user" will match
/user/jsmith and invoke the child router with
/article/1234.
Route "article" will match
/article/1234 and invoke the child router with ``.
Route "view" will be matched as the default route.
The resulting route path will be:
user -> article -> view, or simply
user.article.view
router.go('usersList'); router.go('user.articles', {'userId': 'jsmith'}); router.go('user.article.view', { 'userId': 'jsmith', 'articleId', 1234} ); router.go('user.article.edit', { 'userId': 'jsmith', 'articleId', 1234} );
If "go" is invoked on child routers, the router can automatically reconstruct and generate the new URL from the state in the parent routers.
Router.gowhich forces reloading of already active routes.
reload({startingFrom})method which allows to force reload currently active routes.
/foo/:bar*which will fully match
/foo/bar/baz)
BREAKING CHANGE:
The router no longer requires prefixing query param names with route name.
By default all query param changes will trigger route reload, but you can provide
a list of param patterns (via watchQueryParameters named param on addRoute) which
will be used to match (prefix match) param names that trigger route reloading.
A short-hand for "I don't care about any parameters, never reload" is
watchQueryParameters: [].
UrlMatcher.urlParameterNames has been changed from a method to a getter. The client code must be
updated accordingly:
Before:
var names = urlMatcher.urlParameterNames();
After:
var names = urlMatcher.urlParameterNames;
Add this to your package's pubspec.yaml file:
dependencies: route_hierarchical: ^0.7.0
You can install packages from the command line:
with pub:
$ pub get
Alternatively, your editor might support
pub get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:route_hierarchical/click_handler.dart'; import 'package:route_hierarchical/client.dart'; import 'package:route_hierarchical/link_matcher.dart'; import 'package:route_hierarchical/route_handle.dart'; import 'package:route_hierarchical/url_matcher.dart'; import 'package:route_hierarchical/url_template. | https://pub.dev/packages/route_hierarchical | CC-MAIN-2019-26 | refinedweb | 696 | 51.14 |
LifeBoat An Autonomic Backup and Restore Solution
- Marylou Lee
- 3 years ago
- Views:
Transcription
1 Center ABSTRACT allows for a bare metal restore, but makes it hard to access individual files in the backup. Our solution does a file system backup while still allowing the system to be completely restored onto a new hard drive. Windows presents some particularly difficult problems during both backup and restore. We describe the information we store during backup to enable the bare metal restore. We also describe some of the problems we ran into and how we overcame them. Not only do we provide a way to restore a machine, but we also describe the rescue environment which allows machine diagnostics and recovery of files that were not backed up. This paper presents an autonomic workgroup backup solution called LifeBoat that increases the Built-In Value of the PC without adding hardware, administrative cost, or complexity. LifeBoat applies autonomic principles to the age old problem of data backup and recovery. Introduction Supporting PC clients currently represents roughly 50% of overall IT cost (IGS 2001). This number is larger than both server (30%) and network related costs (20%). This provides the motivation for an autonomic approach to reducing the cost of PC clients. So far, thin clients have repeatedly failed in the marketplace. IT attempts to lock down PC clients have not been accepted. In addition, attempts to control the client from the server have failed due to the fact that clients sometimes get disconnected. Fat clients, however, continue to prosper and increase in complexity which drives the maintenance cost up. We believe autonomic clients are critical components of an overall autonomic computing infrastructure. They will help lower the overall cost of ownership and reduce the client down time for corporations. The secure autonomic workgroup backup and recovery system, LifeBoat, provides data recovery and reliability to a workgroup while reducing administrative costs for Windows 2000/XP machines. LifeBoat provides a comprehensive backup solution including backing up data across the peer workstations of a workgroup, centralized server backup, and local backup for disconnected operation. In addition, it provides a complete rescue and recovery environment which allows end users to easily and conveniently restore downed machines. The LifeBoat project increases the Built-in Value of the PC without adding hardware, administrative, or complexity costs. By leveraging several autonomic technologies, the LifeBoat project increases utility while reducing administrative cost. In this paper we first describe the backup portion of LifeBoat. This is split into two sections, the first of which focuses on network backup. LifeBoat leverages a research technology called StorageNet to seamlessly spread backup data across the workstations of a workgroup in a peer-to-peer fashion. We then create a scalable road map from workgroup peer-to-peer to a centrally managed IT solution. The second backup section focuses on backing up to locally attached devices which is a requirement for disconnected operation. We then describe the complete rescue and recovery process which simplifies recovery of files and directories as well as providing disaster recovery from total disk failure. Next we describe a centralized management approach for LifeBoat and how LifeBoat can fit within a corporate environment. We conclude with performance measurements of some example backup and restore operations. Backup LifeBoat supports a number of backup targets such as network peers, a dedicated network server, and locally attached storage devices. The Autonomic Backup Program is responsible for creating a backup copy of a user s file system in such a way as to be able to completely restore the system to its original operating state. This means that the backup must include file data as well as file metadata such as file times, ACL information, ownership, and attributes. Our backups 2004 LISA XVIII November 14-19, 2004 Atlanta, GA 159
2 LifeBoat An Autonomic Backup and Restore Solution Bonkenburg, et al. are performed file-wise to enable users to restore or recover individual files without requiring the restoration of the entire machine such as with a block based solution. Finally, the backups are compressed on the fly in order to save space. The Autonomic Backup Program performs a backup by doing a depth first traversal of the user s file system. As it comes to each file or directory, it creates a corresponding file in the backup and saves metadata information for the file in a separate file called attributes.ntfs which maintains the attributes for all files backed up. There is special processing required for open files locked by the OS. The backup client employs a kernel driver to obtain file handles for reading these locked files. This driver stays resident only for the duration of the backup. When a backup is completed, the client generates a metadata file to describe the file systems which have been backed up. This usage file contains partition, file system, drive lettering, and disk space information. The existence of the usage file indicates that the backup was successful. The output of the backup and format of the backup data depends on the target. For a network backup, the data is stored using a distributed file system known as StorageNet. StorageNet has some unique features which make it especially suited for our peer-to-peer and client-server backup solutions. For a backup to locally attached storage, the backup is stored in a Zip64 archive. StorageNet Overview The storage building block of our distributed file system, StorageNet, is an object storage device called SCARED [2] that organizes local storage into a flat namespace of objects identified by a 128-bit object id. A workstation becomes an object storage device when it runs the daemon to share some of its local storage with its peers. While the object disks we describe here are similar to other object based storage devices [2, 3, 4, 8], our model has much richer semantics to allow it to run in a peer-to-peer environment. Clients request the creation of objects on SCARED devices. When an object is created, the device chooses an object id to identify the newly created object, marks the object as owned by the peer requesting creation, allocates space for it, and returns the object id to the client. Clients then use the object id as a handle to request operations to query, modify, and delete the object. An object consists of data, an access control list (ACL), and an info block. ACLs are enforced by the server so that only authorized clients access the objects. The info block is a variable sized attribute associated with each object that is atomically updated and read and written by the client. The info block is not interpreted by the storage device. One special kind of object creation useful in backup applications is the linked creation of an object. We implement hard links by passing the object id of an existing object when requesting creation. A hard link shares the data and ACL of the linked object, but has its own info block. These linked objects allow us to not only create hard links to files, but also to directories. Hard linked objects are not deleted until the last hard link to the object is deleted. There are two kinds of objects stored on SCARED devices. File objects have semantics similar to local files. They are a stream of bytes that can be read, written, and truncated. Directory objects are the other kind of object; they are an array of variable sized entries. Entries are identified by a unique 128-bit number, the etag, set by the daemon as well as a unique 128-bit number, the ltag, set by the client. The client chooses an ltag by hashing the name of the file or directory represented by an entry. The entry also has variable sized data associated with it that can be read and set atomically. Later we will describe how these objects are used to build a distributed file system, but here we need to point out that the storage devices only manage the allocation and access to the objects they store. They do not interpret the data in those objects, and thus, do not know the relationships between objects or know how the objects are positioned in the file system hierarchy. Because the data stored on the storage devices is not interpreted, the data can be encrypted at the client and stored encrypted on the storage devices. SCARED devices also track the allocations of objects for a given peer to enforce quotas. Later we will explain why quota support is needed, but for now it is important to note this requirement on the storage devices. Along with object management, storage devices also authenticate clients that access them. All communication is done using a protocol that provides mutual authentication and allows identification of the client and enforcement of quotas and access control. Note that communication only occurs between the client and storage device; storage devices are never required to communicate with each other. Clients use data stored on the object storage devices to create a distributed file system. The clients use meta-data attached to each object and directory objects to construct the file system. The directory entries are used to construct the file system hierarchy and the info blocks are used to verify integrity. Figure 1 shows the layout of the directory entries as interpreted by the client. The first three fields are maintained by the storage device. The other fields are stored in the entry data and thus stored opaquely by the storage device. The client needs to store the filename in the entry data since the ltag is the hash of the filename, which is useful for directory lookups, but LISA XVIII November 14-19, 2004 Atlanta, GA
3 Bonkenburg, et al. LifeBoat An Autonomic Backup and Restore Solution the actual filename is needed when doing directory listings. The other important piece of information is the location of the object represented by the entry or, if the entry is a symbolic link, the string representing the symbolic link which is stored in the entry data. Entry tag Lookup tag Version Filename Location type OID Hostname Symlink Figure 1: Layout of the directory entry. Figure 2 shows a fragment of the distributed file system constructed using the structures outlined above. The first device contains two objects. The first object is a directory with two entries. The first entry represents a directory stored on the second device. The second directory entry is a file that is stored on the same device. Peer-to-Peer Network Backup In the peer-to-peer case, our system backs up workstation data onto other workstations in the workgroup. This is accomplished by defining a hidden partition on each workstation that can be used as a target of the backup. The architecture of the software components in the system is completely symmetric. Each workstation runs a copy of the client and the StorageNet server. In this way each station serves as both a backup source and target. In addition, each station runs a copy of the Lifeboat agent process. This always runs, provides, and serves the web user interface that constitutes the policy tool to allow the user to make changes to backup targets, select files for backup, and set scheduling times. At the appointed time, this process will invoke the backup client program as well. The hidden partition is created during the installation process and is completely managed by the StorageNet server on each station. The customer uses the client software to specify what data to backup and on what schedule. The target of the backup is determined by the system and can be changed by the customer on request. In the case of an incremental backup, our StorageNet distributed file system offers some very strong advantages over traditional network file systems. For example, one feature which we use a great deal is the ability to create directory hard links. In this way, if an entire subtree of the file system remains unchanged between a base and an incremental backup, we can simply hard link the entire subtree to the corresponding subtree in the base backup. When individual files remain unchanged, but their siblings do not, we can hard link to the individual files, and create new backup files in the directory. This unique directory and file hard linking ability allows each backup in our file system, both base backup and incremental backups, to look like an entire mirror image of the file system on the machine being backed up. Each incremental backup only takes up the same amount of space as what has changed between backups. In the local case, incremental backups look like a subset of the file system. Pieces of the file system that did not change are simply not copied into the zip. In order to distinguish between files that are unchanged and files that have been deleted, we keep a list of files which have been deleted in DeletedFiles.log. This is used during the restore to know which files not to copy out of the base. For example, consider backing up the file helloworld.txt. In the remote scenario, this file is copied to our StorageNet distributed file system. The filename, file data, file times, and file size are all set in the StorageNet file system. File dates and sizes are not stored redundantly in this case because the cost of looking them up later during an incremental is free. This is because during a remote incremental, we are also doing a depth first traversal on the base backup. File ACL, attributes, and ownership information is placed into attributes.ntfs file for use during restore. The short- name data is stored in the directory entry for this file. Although StorageNet has no 8.3 limitations, it makes provision for this information to maintain full compatibility with Windows file systems. Info Block File data Entry data Entry data Info Block Info Block Entry data Entry data Figure 2: An example file system fragment stored on directory objects on two storage devices LISA XVIII November 14-19, 2004 Atlanta, GA 161
4 LifeBoat An Autonomic Backup and Restore Solution Bonkenburg, et al. Why We Need Quotas Since all the peers store their data remotely, if any peer fails it can recover from its remote backup. It is tempting to randomly spread a given peers backup across all of its peers. Spreading this way gives us some parallelism when doing backups and should speed our backup. However, if we do spread a given machine s backup uniformly across its peers, we cannot tolerate two failures since the second failure will certainly lose backup data that cannot be recovered. Instead of spreading the data across peer machines, we try to minimize the number of peers used by a given machine for a backup. Thus, if all machines have the same size disks, when a second failure happens there will only be a $1 over (n - 1)$ chance that backup data is lost for a given machine. Unfortunately, we cannot assume that all peers have the same sized disks. Thus some peers may store the backup data of multiple clients, and other peers may use multiple peers to store their backup. If the disk sizes are such that a peer s backup must be stored on multiple peers and those peers in turn store backups from multiple peers, the backups can easily degenerate into a uniform backup across all peers unless some form of quotas are used. Peer Backup Scenarios The number of scenarios that are supported by this solution is virtually innumerable. However, there are some attributes that constitute simple scenarios. For example, we can consider the most simple scenario in the peer-to-peer case to be the completely symmetric homogeneous case where all stations provide a hidden partition that is equal in size to their own data partition, and each stations data is backed up to a neighboring station. Figure 3 shows an example for three workstations. A B C Figure 3: Three workstation peer-to-peer case. In this case every machine backs up its data in the hidden partition of its neighbor. A B C Figure 4: Non-homogenous/non-symmetric example. D E F Figure 4 shows a more complicated scenario. In this case, the following statements hold true for the backup group: B holds all of A s data and portions of the data of C A holds all of B s data and portions of the data of C and D C holds all of E s data and portions of the data of D D could be a laptop and stores parts of its data on A and C E holds all of F s data F (as well as possibly other stations) has available target space for a new entrant in the group Either of these scenarios could have resulted from: autonomic system decisions based on the sizes and allocations of a heterogeneous group of workstations user-selection specifies the target of the backup Obviously these two scenarios are not exhaustive. Configurations of arbitrary complexity are supported. We intend to develop heuristics and user interface methods to reduce possible complexity and allow the customer to efficiently manage the backup configuration. Dedicated Server Network Backup One of the big advantages of using dedicated servers as opposed to peers is the availability of service. Because peers are general purpose user machines, they may be turned off, rebooted, or disconnected with a higher probability than with dedicated servers. In a large enterprise environment using a dedicated server approach can guarantee backup availability. Machine stability is important when trying to do backups. Dedicated servers are also easier to manage because of their fixed function. Machines are also easier to update and modify by an admin staff if they belong to the IT department rather than users. The dedicated server solution uses StorageNet in a similar fashion as the peer-to-peer approach. The dedicated server acts as the target StorageNet device for the backup clients and backup data is stored in the same fashion as the peer approach. Indeed, the architecture makes no distinction between dedicated servers and peers. In this way, the dedicated server solution is only a special case peer-to-peer usage scenario. Local Backup For mobile users the ability to perform regular backups to local media is critical. There are several configurations that we must deal with in order to provide local backup. The simplest one is for a system with one internal hard drive which contains the data we wish to back up and one additional hard drive where the backup is stored. The hard drive containing the backup can be either an external USB/Firewire drive or internal hard drive. The user is also allowed to perform backup locally to the source hard drive. In LISA XVIII November 14-19, 2004 Atlanta, GA
5 Bonkenburg, et al. LifeBoat An Autonomic Backup and Restore Solution this case we use a file system filter driver to protect the backup files. While this form of backup wont protect the user from hard drive failures, it will allow recovery from viral attacks or software error. The format of the backup is rather simple. We use a simple directory structure. The main directory is called LifeBoat_Local and for every machine backed up to the drive we add another sub directory. This sub directory, for example test, will contain multiple directories and files. The most important file contains the UUID of the machine that is backed up and is called the machine file. It contains the serial number of the machine and UUID as returned by DMI [1]. We use this file during the restore procedure to automatically detect backups. A sample machine file is in Figure N5U AKVAA2W 00F7D68B-0AA0-D611-88F2-EDDCAE30B833 Figure 5: Typical machine file. The first time a local backup is run, we create a directory called base and place it in LifeBoat_Localest. Additional backups are placed in directories called Incremental 1, Incremental 2, etc. The number of incremental backups is user configurable, with the default value set at five. The full directory structure can be seen in Figure 6. Each of the directories such as base and Incremental 1 contain the following files: usage, attributes.ntfs, backup.lst, and some zip files. In the case of a filesystem that is less than 4 GB compressed, a single zip file, backup.zip suffices. Otherwise, the Zip64 spanning standard is used. Figure 6: Typical directory structure. The first line of the usage file lists descriptions of columns inside the usage file: drive letter, file system type, size of the partition, amount of used space and amount of backed up data. In the next line is an OS descriptor which is important for post processing after restore. Possible descriptors are WinXp, Win200, WinNT4.0, WinNT3.5, Win98, Win95 and WinME. Lines that follow give information about each partition in the system. They are used during restore process. The attributes.ntfs file is of importance only when backing up/restoring NTFS partitions and is not used if the partitions are not NTFS. The attributes.ntfs file contains all file attributes as well as ACL, SACL, OSID and GSID data. We write the data during backup and restore it during the restore post processing step. Backup.zip contains the actual backup of all files. Through extending ZIP functionality to use the current Zip64 specification, we are able to create ZIP files that are very large, dwarfing the original 2 GB limit. If the backup is greater then 4 GB zipped we can create multiple backup files (backup.zip, backup.001, etc.) using the Zip64 spanning standard. We chose 4 GB as our spanning limit in order to allow these files to be read by FAT32 files systems. For example, let s imagine we are doing a base backup and come to the file helloworld.txt which contains data as well as some ACL information. This file would be added to the backup.zip file and compressed, taking care of the filename, file data, modification date, and file size. The file dates and size are also placed in a metadata file, backup.lst, to be used later when creating incremental backups to determine whether the file has changed and needs to be backed up again. File ACL, attributes, and ownership information is placed into the attributes.ntfs file for use during restore. Finally, the short name for this file, for example hellow 1.txt, is stored in the comments section of the zip file. Preserving shortnames across backup and restore turns out to be very important even in later Windows versions. Some Windows applications still expect the short names for files to not change unless the long filename changes as well. A special case of local backup is the backup to yourself case. In this case we have only one hard drive and we want to backup the data to the same drive we are backing up. In the simple case we have multiple partitions on the hard drive, for example we backup drive C to drive D. In a more complex case where we have to deal with a single partition we backup C to C. As far as backup is concerned this is not problematic, however during restore we have to deal with some very specific problems related to NTFS partitions and the lack of write support under Linux. Rescue and Recovery A significant portion of the LifeBoat project focuses on client rescue and recovery. This includes several UI features for Windows as well as a bootable Linux image. The rescue operations allow a user to perform diagnostics and attempt to repair problems. Recovery enables the user to restore individual files or even perform a full restore in the case of massive disk failure. Single File Restore When the system is bootable, it is possible to restore a single file or a group of files from within Windows [6]. In keeping with the autonomic goal of the system, the user interfaces for this system are minimal. From Windows, the restore process uses a simple browser interface to StorageNet using the browser protocol istp://. A screenshot of the istp protocol is below in Figure 7. We have also written a namespace extension for StorageNet which behaves like the ftp namespace extension which ships with Windows. An example 2004 LISA XVIII November 14-19, 2004 Atlanta, GA 163
6 LifeBoat An Autonomic Backup and Restore Solution Bonkenburg, et al. screen looks almost identical to that for ftp:// and uses the analogous Copy-Paste commands (see Figure 8). Figure 7: The istp:// protocol. Figure 8: StorageNet namespace extension. Rescue The LifeBoat Linux boot CD provides various software services that can be used for systems maintenance, rescue, and recovery. The distribution works in almost any PC and can be booted from a number of devices such as a CD-ROM drive, USB keyfob, local hard drive, or even over the network. The CD includes over 101 MB of software including a kernel, Xfree86 4.1, full network services for both PCI and PCMCIA cards and wireless connectivity. An important part in the design of the bootable Linux CD was rescue functionality. We wanted to provide the user with at least a rudimentary set of functions which would enable him to diagnose, report, and fix the problem if at all possible. As part of the CD we included the following set of rescue functions: PCDoctor based diagnostics which lets us run an all encompassing array of hardware tests, AIM as way of quickly communicating with help available online, and the Mozilla web browser. We also developed an application which finds all bookmarks on the local drive, in the local backup, and in the remote backup and makes them available for use in Mozilla. This provides the user with the list of bookmarks that he is used to. At the same time we add a selection of bookmarks which can be custom tailored for a specific company to include their own links to local help desk sites and other useful resources. Even in the face of disaster, an important issue to keep in mind is that a damaged hard disk may still contain some usable data. In the case of a viral attack, boot sectors and system files could be compromised but the user data could be left intact. Performing a full system restore would overwrite any changes made since the last backup. For this case we created an application that browses through all documents that were recently accessed and allows the user to copy them to a safe medium such as a USB keyfob or hard drive. Recovery Full machine recovery is a vital part of any backup solution. The Lifeboat solution uses its bootable Linux CD for full machine restore. This is necessary when the machine cannot be booted to run the Windows based restore utilities. In order to use the CD for system recovery we added a Linux virtual file system (VFS) implementation for StorageNet. Located on the CD is our Rapid Restore Ultra application which is used to restore both local and remote backups. Rapid Restore Ultra is written in C and uses QT for the UI elements [9]. The application comes in two flavors. The first one is intended for a novice user that has no deep knowledge of systems management issues and just wishes to restore the data. The second version is intended for knowledgeable system administrators or advanced users that have deep knowledge of internal systems functioning. The novice user just restores the latest backup and the application determines how the backup is to be restored. Advanced users can select any backup on the discoverable network or local devices, as well as forcing the discovery of backups on a non local networks by entering the IP address or name of a potential server. The user can then manually repartition the drives, and assign drive letters and data LISA XVIII November 14-19, 2004 Atlanta, GA
7 Bonkenburg, et al. LifeBoat An Autonomic Backup and Restore Solution Drive partitions can even be set to different sizes than were originally backed up. This way the user has full control of the restore process. Performing a Full Restore The first step towards machine recovery is creation of new partitions. To do that we can either use the description of partitions from the usage file in the last backup or let the user decide the partition sizes and types. The usage file is used for both local and remote restores and gives all the information about the old partition table. In order to determine new partition sizes we use a simple algorithm that takes into account old partition size, new partition size, the number of partitions, and the percentage of usage. Before we write the partition table we have to make sure we have a valid Master Boot Record (MBR). To be sure we dump our MBR onto the first 32 sectors of the drive. It is important to keep in mind that the MBR that is written at this moment has no partition information. If there were any partition info at this step in the MBR, we couldn t be sure that the disk geometry we are using is correct. After writing the MBR, we write the partition table. After the partition table is successfully written we have to format all partitions. One of the issues is the need to support all of the current Windows file systems such as FAT, FAT 16, FAT 32 and if possible, NTFS. Linux can format all of the FAT file systems, but can t create bootable FAT file systems. In order for a file system to be able to boot, the master boot record must point to a valid boot sector. Support for NT, WIN2k and XP is provided through the use of our application. We pieced together information about Windows boot sectors and after long debugging found a way to create valid boot sectors on our own. The reason why we are unable to use the original boot sectors from a previously backed up machine is simple. Boot sectors are dependent on partition sizes and geometry, thereby requiring us to create them every time we repartition. Another reason for not restoring the boot sector from a backup is that boot sectors are a favorite hiding place for viruses. After the disk is formatted and the boot sectors are written, we start the client application to restore the data. If we are performing remote restore, the client connects to the server and upon successful authorization the files, including the operating system, are copied to the local partition. This process is repeated for as many partitions as necessary. After all the files are transferred the machine is rebooted and available for work. Here is a summary of the steps performed in this process: Write general MBR Write new partition table Format partitions Mount boot partition Start Sys16 (for FAT16) or Sys32 (for FAT32) to create valid boot sector Transfer system files Copy remaining files Unmount partition If we are performing a local restore there are multiple issues we have to face. The first problem is related to having the backup located on the same drive we are trying to restore to. If this is the case, we are unable to reformat the partitions and also we cant change partition sizes. Another issue is related to NTFS support in Linux. Lets say we are backing up to the C drive and it is formatted NTFS. When the restore starts it will find the backup on the first partition and notice that the partition type is NTFS. While Linux has very good support for reading NTFS file systems it has minimal support for writing NTFS. The solution to this, which is detailed in the next section, is a technique for formatting an existing NTFS partition as FAT32 while preserving the backup files. Once all the preparatory steps are successfully completed, we start unzipping data to the desired partition. If we have only a base backup, the restore process ends when unzipping of the base backup.zip file is completed. In case of incremental backups the restore process is more complicated. Suppose we have three incremental backups and the base backup. If we wish to restore the third incremental backup, we start by unzipping the backup.zip located in the Incremental 3 directory. Then we unzip the backup.zip located in the Incremental 2 directory and so on. We do this until we have finished the backup.zip in the base directory. Each time we have to make sure that no files get overwritten. Once unzip finishes we have to create post processing scripts that will run immediately following Wi n d o w s boot. We have to take care of two problems: proper assignment of drive letters and NTFS conversion. In case of a backup with more than three partitions we cant be sure that once Windows comes up it will assign correct drive letters to their respective partitions. It is also possible that we didnt use C, D, or E as drive letter in Windows but for example C, G, and V. While performing backup we add a file called driveletter.sys to each drive on the hard disk. This file only contains the drive letter. The first thing after restore we need to do when windows comes up is change drive letter names. This is done easily by changing registry entries to values we read from driveletter.sys and doesnt even require a reboot. A second problem is related to NTFS partitions. When we restore we create our partitions to be FAT 3 2 and format them accordingly. Once restore is completed and drive letter assignment has run its course, we have to convert those partitions back to NTFS. This is accomplished using the convert.exe utility that is supplied in Windows. Upon completing conversion of the drives to NTFS we have to set attributes and ACLs for all files on that drive. We wrote a simple application that reads 2004 LISA XVIII November 14-19, 2004 Atlanta, GA 165
8 LifeBoat An Autonomic Backup and Restore Solution Bonkenburg, et al. the content of the attributes.ntfs file and sets ACL, System ACL, Owner SID, and Group SID as well as file creation/modification times. This application lets us set all file attributes. Upon completion it deletes the attributes.ntfs file and exits. That is also the last step in post processing. Same Partition NTFS Backup In order to overcome the lack of write support in the Linux NTFS driver we developed a technique whereby an NTFS partition can be formatted as FAT32 while preserving the backup files. This is in essence converting an NTFS partition to FAT32. The conversion process consists of a number of steps. First a meta file which contains data about the file to be preserved is created. Next the set of parameters for formatting the partition as FAT32 is carefully determined. The next step is running through all of the files to be preserved and relocating on disk only those portions that need to be moved in order to survive the format. The partition is formatted and the files are resurrected in the newly created FAT32 partition. Finally, directories are recreated and the files are renamed and moved to their original paths. The set of files that need to be preserved must be known a priori. In the case of the LifeBoat project, this consists of a directory and a small set of potentially large files. The first step is to create the meta file which contains enough information to do a format while preserving these files. The meta file may be created immediately after a backup from within Windows or, if the NTFS partition is readable, it is created in a RAM disk from within the Linux restore environment. C0 C1 C2 C3 FAT NTFS C0 C1 C2 FAT32 Figure 9: The top partition shows the first four clusters of an NTFS partition, each with two sectors per cluster. Below is a FAT32 partition with a FAT size of three sectors followed by the first three data clusters. This illustrates how a FAT32 partition with the same cluster size can be created yet the data is no longer cluster aligned. In the case of Windows, the file locations are available through standard API s, and the meta file contains itself as the first entry. In the case of Linux the NTFS driver does not provide a way to find out the clusters of a file. An ioctl was added to the driver for this purpose. A typical meta file is well under 8K in size so excessive memory use is not a concern. Creating a meta file is not the only preparation required for formatting the NTFS partition as FAT32. The data files all reside on cluster boundaries. Unfortunately, NTFS numbers its clusters starting with zero at the first sector of the drive, while FAT32 begins its clusters at the sector immediately after the file allocation tables. Formatting with the same cluster size does not necessarily mean that the clusters will be aligned properly (see Figure 9). A solution to the cluster alignment problem would be to always format the FAT 3 2 partition with a cluster size of 512 bytes (one sector) and cluster downsizing the extent data by splitting it into 512 byte clusters. In practice this leads to an extremely large file allocation table when partitions run into the gigabytes. The cluster size of the FAT32 file system is determined by constraining the size of the resulting file allocation table to be a configurable maximum size (default 32 MB). The simplest way to determine this is to loop over an increasing number of sectors per cluster in valid increments until the resulting calculation of the fat size exceeds the maximum. In order to align the clusters, we manipulate the number of reserved sectors until the newly created FAT32 partition and the former NTFS partition are cluster aligned. At this point the layout of the FAT32 file system and the potentially larger cluster size is determined. Before formatting can occur, the extents of all the data files must be preprocessed to relocate any extent that is either located before the start of the FAT32 data area or does not start on a cluster boundary. In the best case, the cluster size has not changed, so only the first set of relocations must occur. Otherwise relocating an extent requires allocating free space on the disk at a cluster boundary and possibly stealing from the file s next extent if its length is not an integral number of clusters. Moving an extent s data is time consuming so it is avoided whenever possible. Free space on the disk is found using a sliding bitmap approach. Any cluster that is not in use by an entry in the meta file is considered free. A bitmap is used to mark which clusters are free and which are in use. The relocation process requires that enough free space is available to successfully relocate necessary portions of the files to be preserved. When restoring to the same partition this will always be the case. Formatting is the simplest step. The mkdosfs program performs a semi-destructive format in that it only overwrites the reserved and file allocation table sectors. The -f switch is used to limit the number of file allocation tables to one. Once the file system is formatted as FAT 3 2, entries for the files to be preserved must be created. This is done via a user space FAT 3 2 library written for this purpose. The user space library can mount a FAT 3 2 partition and create directory entries in the root directory. It uses the data from the meta file to resurrect each LISA XVIII November 14-19, 2004 Atlanta, GA
9 Bonkenburg, et al. LifeBoat An Autonomic Backup and Restore Solution meta file entry by creating a directory entry and writing the extents to the file allocation table. Once all of the files have been resurrected, it is safe to use the Linux FAT32 driver to write to the partition. The meta file is traversed once again to create the full paths and rename all of the files to their proper names. Finally, resident files are extracted from the meta file and written. At this point the partition has been converted from NTFS to FAT32 while preserving all of the files necessary to perform a restore. Centralized Management In the case of multiple work groups, management issues become highly important. If a system administrator is supposed to deal with multiple groups with ten or more PCs he will need some sort of an autonomic system to simplify the management of storage. We based our system on IBMDirector which is widely available and boasts a high acceptance rate throughout the industry. To enable IBMDirector for our purposes we extend it in several ways. We developed extensions for the server, console and clients. Below we quickly detail the nature of those extensions. Client side extensions are written in C++. The extensions provide all backup/restore functions as described elsewhere in this document. An important extension is related to communication between the client and server. The communication module relays all the requests and results between the two machines. The client also starts a simple web server which upon authorization provides information about the given client. This feature was implemented for the case where no IBMDirector server is available or when the server is not functioning properly. The information exported on the web page is the same as what can be obtained through the IBMDirector console. The information exported is shown below: workgroup name back-up targets date of last successful backup contact info Number of drives Size of drives Free space on each drive File system on each drive OS used Current status (performing back up, restoring, idle) User name and user info Location of the backup Server and console side extensions are written in C++ and java. They are rather simple since all we need to add on the server are basic GUI elements that allow us to interface with the client and to receive data sent from the clients. The most complex extension is related to extending associations so that all StorageNet devices in the same work group appear in treelike form. The goal of this part of the project is to make a system that will be usable with or without the IBMDirector server. The Corporate Environment Our primary target environment in developing this system is a workgroup satellite office. If this is used in a corporate environment, there is the need for administrator level handling for setup, control and migration. Similar to the workgroup setting, the requirements of the workstation user are limited to: knowing my data is backed up (having confidence) knowing that my data is backed up to an area that will facilitate easy restoration In contrast, the administrator in the corporate environment has requirements for additional control and data, including: wants different user s data to be distributed evenly (or specifically) across several servers wants reports specifying where a user s data is backed up and the usage per server during initial rollout, wants a way to seed the backup server destination to achieve the first goal during server migration, needs a way for the user s data to go to another server. The general processing flow is described below. The asset collection process on the user s machine sends the UUID (machine serial number) to an administrative web server. A long running process on the server discovers available backup targets. The administrator reviews a web page containing unassigned backup clients and discovered servers, and assigns these clients to a server. This information is recorded and used by the client backup process (usually scheduled) to keep the user s data. This assignment information is also used by the file and image restoration processes. Described graphically, we have: Backup Admin Server Figure 10: General flow with dashed lines representing meta-data and solid lines representing backup data LISA XVIII November 14-19, 2004 Atlanta, GA 167
10 LifeBoat An Autonomic Backup and Restore Solution Bonkenburg, et al. The detailed processing flow is: 1. The asset collection process extracts the machine serial number, type, and location and uses HTTP POST to save this in a web server. If the asset collection process is disabled for this client, the user can surf to a well-known web page where the same executable from the asset collection process can be downloaded and run. 2. In parallel to this process, a long-running process resident on the server is busy discovering backup targets. These targets are StorageNet servers. The discovery protocol is limited to the subnet where discovery is issued. Because of this, there is a web service located at a well known address in each subnet that is used by the server-resident process to discover servers in other subnets. The list of available backup targets is maintained and updated in the administrative server. 3. The administrator surfs to a web page containing a list of unassigned clients and available servers. The processing behind this page automatically pre-selects target servers correlating to clients within their respective subnets. For those not pre-selected, or in which an override is requested, the administrator picks a server and one or more clients to back up to the selected server. This causes the machine file mentioned previously to be stored in that backup server. This is used for discovery by the restoration process. 4. The backup process on the client machine will normally be invoked as a result of a scheduled alarm popping. When this occurs, the backup process will check for a machine file (containing its UUID) on all the servers on its subnet. If it finds this, it initiates the backup to that target. 5. If it does not find it, the backup process looks on the administrative server to determine which target it should backup to. If no assigned target is found an error is generated, otherwise the backup process spools the user s data out to the assigned target. 6. When a file-based restore is requested, a process starts going through similar processing to the backup client to locate the user s data. Then a network share using the StorageNet Windows file system driver (FSD) is created pointing to the target backup server. This FSD allows the use of normal Windows-resident tools to access the backup data as described above. 7. When an image-based restore is requested, a process starts going through similar processing to the backup client to locate the user s data. Then a network share using the StorageNet Linux file system driver is created pointing to the target backup server. This file system driver allows the use of normal Linux-resident tools to access the backup data as described above. Figure 11: Image of the FSD accessing a StorageNet server. Performance During the extensive testing we gathered several interesting numbers that reflect the speed and efficiency of the backup and restore process [5]. Figure 12 shows the time in seconds for backing up and restoring 2.3 GB of data for a number of different target locations [7]. The restore process is measured from clicking on the restore button to the finish (reboot of a machine). Our main test machine is a ThinkPad R32 with 256 MB RAM and IBM 20 GB hard drive. A separate series of tests were performed using a 1.6 GHz Pentium M IBM ThinkPad T-40. A 1.7 GB image requires three minutes (156 sec) to backup. The restore from local HDD requires 15.5 minutes from selecting the restore button of which ten minutes is file system preparation and data transfer and seven minutes is rebooting and converting. Backup and Restore Times (seconds) Local HD Local USB1.1 Local USB2.0 Remote 100 MB Backup 2.3 GB NTFS 808s 4254s 575s 1274s Restore 2.3 GB 1100s 4440s 1001s 1200s Figure 12: Backup and restore times in seconds LISA XVIII November 14-19, 2004 Atlanta, GA
11 Bonkenburg, et al. LifeBoat An Autonomic Backup and Restore Solution The final stage requires two minutes to complete the attribute restore into the NTFS file system. This makes the total time 17.5 minutes. We have tested this repeatedly for at least 20 times on four systems with minimal variation. The main variations seem to be related to the file system preparation step which takes a minute or two longer after a base backup is re-established. When compared to Xpoints software: 1. Backup is approximately 3X in performance. 2. Compression is typically 2X better. 3. Our version works without dominating the PC while Xpoint s version of RRPC does not. 4. The restore performance for a base only is similar. 5. Restore of an incremental plus base is dramatically improved in ours since it is essentially the same as a base only while Xpoint s takes about twice as long. [4] Miller, Ethan L., William E. Freeman, Darrell D. E. Long, and Benjamin C. Reed, Strong Security for Network-Attached Storage, FAST 02. [5] Zwicky, E. D., Torture Testing Backup and Archive Programs, Selected Papers in Network and System Administration, USENIX. [6] McMains, J. R., Windows NT Backup and Recovery, McGraw Hill, [7] Stringfellow, S. and M. Klivansky, Backup and Restore Practices for Enterprise, Prentice Hall, [8] Azagury, A., V. Dreizin, M. Factor, E. Henis, D. Naor, N. Rinetzky, O. Rodeh, J. Satran, A. Tavory, and L. Yerushalmi, Towards an Object Store, 20th IEEE Symposium on Mass Storage Systems (MSST), [9] Conclusion LifeBoat provides a way to backup system such that the backup files are accessible for single file restore as well as a full image restore. Our work also shows how Linux can be effectively used to restore a Windows(tm) system while also providing a rescue environment in which a customer can salvage recent files and preform basic diagnostics and productivity work. Most importantly this system allows for a machine to be completely restored from scratch when the boot disk is rendered unbootable. The local backup version of this work shipped as part of IBM s Think In this paper we presented a description of the latest research project in autonomic computing at IBM Almaden Research Center. We described a fully autonomic system for workgroup based workstation backup and recovery with options for both everyday restore of a limited number of files and directories as well as full catastrophe recovery. This project is work in progress and is funded partially by the IBM Personal Systems Institute. References [1] Distributed Management Task Force, System Management BIOS (SMBIOS) Reference Specification, Version 2.3.4, December [2] Reed, Benjamin C., Edward G. Chron, Randal C. Burns, and Darrell D. E. Long, Authenticating Network-Attached Storage, IEEE Micro, Jan [3] Gibson, Garth A., David F. Bagle, Khalil Amiri, Fay W. Chang, Eugene M. Einberg, Howard Gobioff, Chen Lee, Berend Ozceri, Erik Riedel, David Rochberg, and Jim Zelenka, File Server Scaling with Network-Attached Secure Disks, Proceedings of the ACM International Conference on Measurement and Modeling of Computer Systems (Sigmetrics), June LISA XVIII November 14-19, 2004 Atlanta, GA 169
12 LISA XVIII November 14-19, 2004 Atlanta, GA
LifeBoat- An Autonomic Backup and Restore Solution
LifeBoat- An Autonomic Backup and Restore Solution Ted Bonkenburg(1), Dejan Diklic(1), Benjamin Reed(1), Mark Smith(1), Michael Vanover(2), Steve Welch(1), Roger Williams(1) (1) IBM Almaden Research Center,
Acronis True Image 2015 REVIEWERS GUIDE
Acronis True Image 2015 REVIEWERS GUIDE Table of Contents INTRODUCTION... 3 What is Acronis True Image 2015?... 3 System Requirements... 4 INSTALLATION... 5 Downloading and Installing Acronis True Image
A+ Guide to Software: Managing, Maintaining, and Troubleshooting, 5e. Chapter 3 Installing Windows
: Managing, Maintaining, and Troubleshooting, 5e Chapter 3 Installing Windows Objectives How to plan a Windows installation How to install Windows Vista How to install Windows XP How to install Windows
Installing Windows XP Professional
CHAPTER 3 Installing Windows XP Professional After completing this chapter, you will be able to: Plan for an installation of Windows XP Professional. Use a CD to perform an attended installation of Windows Backup & Recovery for Mac. Acronis Backup & Recovery & Acronis ExtremeZ-IP REFERENCE ARCHITECTURE
Acronis Backup & Recovery for Mac Acronis Backup & Recovery & Acronis ExtremeZ-IP This document describes the technical requirements and best practices for implementation of a disaster recovery solution
Microsoft Diagnostics and Recovery Toolset 7 Evaluation Guide
Microsoft Diagnostics and Recovery Toolset 7 Evaluation Guide White Paper Descriptor This document provides administrators with information and steps-by-step technique for deploying Microsoft Diagnostics
Backup & Disaster Recovery Appliance User Guide
Built on the Intel Hybrid Cloud Platform Backup & Disaster Recovery Appliance User Guide Order Number: G68664-001 Rev 1.0 June 22, 2012 Contents Registering the BDR Appliance... 4 Step 1: Register the
1. Product Information
ORIXCLOUD BACKUP CLIENT USER MANUAL LINUX 1. Product Information Product: Orixcloud Backup Client for Linux Version: 4.1.7 1.1 System Requirements Linux (RedHat, SuSE, Debian and Debian based systems such
BackupAssist v6 quickstart guide
Using the new features in BackupAssist v6... 2 VSS application backup (Exchange, SQL, SharePoint)... 2 Backing up VSS applications... 2 Restoring VSS applications... 3 System State backup and restore...
Acronis Backup & Recovery 10 Server for Windows. Installation Guide
Acronis Backup & Recovery 10 Server for Windows Installation Guide Table of contents 1 Before installation...3 1.1 Acronis Backup & Recovery 10 components... 3 1.1.1 Agent for Windows... 3 1.1.2 Management
Online Backup Client User Manual Linux
Online Backup Client User Manual Linux 1. Product Information Product: Online Backup Client for Linux Version: 4.1.7 1.1 System Requirements Operating System Linux (RedHat, SuSE, Debian and Debian based
BackupAssist v6 quickstart guide
New features in BackupAssist v6... 2 VSS application backup (Exchange, SQL, SharePoint)... 3 System State backup... 3 Restore files, applications, System State and mailboxes... 4 Fully cloud ready Internet
User Manual. Copyright Rogev LTD
User Manual Copyright Rogev LTD Introduction Thank you for choosing FIXER1. This User's Guide is provided to you to familiar yourself with the program. You can find a complete list of all the program's
Backup & Recovery. 10 Suite PARAGON. Data Sheet. Automatization Features
PARAGON Backup & Recovery 10 Suite Data Sheet Automatization Features Paragon combines our latest patented technologies with 15 years of expertise to deliver a cutting edge solution to protect home Windows
Acronis True Image 10 Home Reviewer s Guide
Acronis True Image 10 Home Reviewer s Guide Introduction This guide is designed for members of the media who will be evaluating Acronis True Image disk imaging, backup and bare-metal recovery software.
StruxureWare Data Center Expert 7.2.4 Release Notes
StruxureWare Data Center Expert 7.2.4 Release Notes Table of Contents Page # Part Numbers Affected...... 1 Minimum System Requirements... 1 New Features........ 1 Issues Fixed....3 Known Issues...3 Upgrade
AMD RAID Installation Guide
AMD RAID Installation Guide 1. AMD BIOS RAID Installation Guide.. 2 1.1 Introduction to RAID.. 2 1.2 RAID Configurations Precautions 3 1.3 Installing Windows XP / XP 64-bit / Vista / Vista 64-bit With
Enterprise Solution for Remote Desktop Services... 2. System Administration... 3. Server Management... 4. Server Management (Continued)...
CONTENTS Enterprise Solution for Remote Desktop Services... 2 System Administration... 3 Server Management... 4 Server Management (Continued)... 5 Application Management... 6 Application Management (Continued)...
Lesson Plans Microsoft s Managing and Maintaining a Microsoft Windows Server 2003 Environment
Lesson Plans Microsoft s Managing and Maintaining a Microsoft Windows Server 2003 Environment (Exam 70-290) Table of Contents Table of Contents... 1 Course Overview... 2 Section 0-1: Introduction... 4.
Gladinet Cloud Backup V3.0 User Guide
Gladinet Cloud Backup V3.0 User Guide Foreword The Gladinet User Guide gives step-by-step instructions for end users. Revision History Gladinet User Guide Date Description Version 8/20/2010 Draft Gladinet
Acronis Backup & Recovery 10 Server for Windows. Installation Guide
Acronis Backup & Recovery 10 Server for Windows Installation Guide Table of Contents 1. Installation of Acronis Backup & Recovery 10... 3 1.1. Acronis Backup & Recovery 10 components... 3 1.1.1. Agent
EASEUS Todo Backup. Version 1.1
EASEUS Todo Backup Version 1.1 1 Table of Contents Welcome...3 About EASEUS Todo Backup...3 Starting EASEUS Todo Backup...3 Getting started...4 Hardware requirements...4 System requirements...4 Supported
Chapter Contents. Operating System Activities. Operating System Basics. Operating System Activities. Operating System Activities 25/03/2014
Chapter Contents Operating Systems and File Management Section A: Operating System Basics Section B: Today s Operating Systems Section C: File Basics Section D: File Management Section E: Backup Security
AMD RAID Installation Guide
AMD RAID Installation Guide 1. AMD BIOS RAID Installation Guide.. 2 1.1 Introduction to RAID.. 2 1.2 RAID Configurations Precautions 3 1.3 Installing Windows 7 / 7 64-bit / Vista / Vista 64-bit / XP /
RecoveryVault Express Client User Manual
For Linux distributions Software version 4.1.7 Version 2.0 Disclaimer This document is compiled with the greatest possible care. However, errors might have been introduced caused by human mistakes or by
File Services. File Services at a Glance
File Services High-performance workgroup and Internet file sharing for Mac, Windows, and Linux clients. Features Native file services for Mac, Windows, and Linux clients Comprehensive file services using
AMD RAID Installation Guide
AMD RAID Installation Guide 1. AMD BIOS RAID Installation Guide.. 2 1.1 Introduction to RAID.. 2 1.2 RAID Configurations Precautions 3 1.3 Installing Windows 8 / 8 64-bit / 7 / 7 64-bit / Vista TM / Vista
File System Implementation II
Introduction to Operating Systems File System Implementation II Performance, Recovery, Network File System John Franco Electrical Engineering and Computing Systems University of Cincinnati Review Block
NSS Volume Data Recovery
NSS Volume Data Recovery Preliminary Document September 8, 2010 Version 1.0 Copyright 2000-2010 Portlock Corporation Copyright 2000-2010 Portlock Corporation Page 1 of 20 The Portlock storage management
How to recover a failed Storage Spaces How to recover a failed Storage Spaces ReclaiMe Storage Spaces Recovery User Manual 2013 Contents Overview... 4 Storage Spaces concepts and
StruxureWare Data Center Expert 7.2.1 Release Notes
StruxureWare Data Center Expert 7.2.1 Release Notes Table of Contents Page # Part Numbers Affected...... 1 Minimum System Requirements... 1 New Features........ 1 Issues Fixed....2 Known Issues...2 Upgrade
Online Backup Linux Client User Manual
Online Backup Linux Client User Manual Software version 4.0.x For Linux distributions August 2011 Version 1.0 Disclaimer This document is compiled with the greatest possible care. However, errors might
Total Backup Recovery 7
7 TM 7 Automat backup and restore management for all networked laptops & workstations from a centralized administrating console 7 Advanced Workstation assures that critical business information is well
PARALLELS SERVER 4 BARE METAL README
PARALLELS SERVER 4 BARE METAL README This document provides the first-priority information on Parallels Server 4 Bare Metal and supplements the included documentation. TABLE OF CONTENTS 1 About Parallels
Acronis Backup & Recovery 10 Server for Linux. Installation Guide
Acronis Backup & Recovery 10 Server for Linux Installation Guide Table of contents 1 Before installation...3 1.1 Acronis Backup & Recovery 10 components... 3 1.1.1 Agent for Linux... 3 1.1.2 Management
Online Backup Client User Manual
For Linux distributions Software version 4.1.7 Version 2.0 Disclaimer This document is compiled with the greatest possible care. However, errors might have been introduced caused by human mistakes or by
Using iscsi with BackupAssist. User Guide
User Guide Contents 1. Introduction... 2 Documentation... 2 Terminology... 2 Advantages of iscsi... 2 Supported environments... 2 2. Overview... 3 About iscsi... 3 iscsi best practices with BackupAss
LOCKSS on LINUX. CentOS6 Installation Manual 08/22/2013
LOCKSS on LINUX CentOS6 Installation Manual 08/22/2013 1 Table of Contents Overview... 3 LOCKSS Hardware... 5 Installation Checklist... 6 BIOS Settings... 9 Installation... 10 Firewall Configuration...
DISK DEFRAG Professional
auslogics DISK DEFRAG Professional Help Manual / Contents Introduction... 5 Installing the Program... 7 System Requirements... 7 Installation... 7 Registering the Program... 9 Uninstalling
Installing a boot partition
Installing a boot partition This document includes the following topics: Purpose of this document Creating an image of a model computer Setting up workstations as client computers Restoring client computers
Acronis Backup & Recovery 11
Acronis Backup & Recovery 11 Update 0 Installation Guide Applies to the following editions: Advanced Server Virtual Edition Advanced Server SBS Edition Advanced Workstation Server for Linux Server for
Acronis Backup & Recovery 11
Acronis Backup & Recovery 11 Quick Start Guide Applies to the following editions: Advanced Server Virtual Edition Advanced Server SBS Edition Advanced Workstation Server for Linux Server for Windows Workstation....................................
Retrospect 7.7 User s Guide Addendum
Retrospect 7.7 User s Guide Addendum 2011 Retrospect, Inc. All rights reserved. Retrospect 7.7 Users Guide Addendum, first edition. Use of this product (the Software ) is subject to acceptance of the license.
Installing, Uninstalling, and Upgrading Service Monitor
CHAPTER 2 Installing, Uninstalling, and Upgrading Service Monitor This section contains the following topics: Preparing to Install Service Monitor, page 2-1 Installing Cisco Unified Service Monitor, page
Deploying a File Server Lesson 2
Deploying a File Server Lesson 2 Skills Matrix Technology Skill Objective Domain Objective # Adding a New Disk Configure storage 1.5 File Server The most basic and the most universal type of application
Installation Notes for Outpost Network Security (ONS) version 3.2
Outpost Network Security Installation Notes version 3.2 Page 1 Installation Notes for Outpost Network Security (ONS) version 3.2 Contents Installation Notes for Outpost Network Security (ONS) version 3
1-bay NAS User Guide
1-bay NAS User Guide INDEX Index... 1 Log in... 2 Basic - Quick Setup... 3 Wizard... 3 Add User... 6 Add Group... 7 Add Share... 9 Control Panel... 11 Control Panel - User and groups... 12 Group Management...
A Better Approach to Backup and Bare-Metal Restore: Disk Imaging Technology
A Better Approach to Backup and Bare-Metal Restore: Disk Imaging Technology Acronis True Image Enterprise Server for Windows Acronis True Image Server for Windows Acronis True Image Server for Linux Another
Learning Objectives. Chapter 1: Networking with Microsoft Windows 2000 Server. Basic Network Concepts. Learning Objectives (continued)
Chapter 1: Networking with Microsoft Learning Objectives Plan what network model to apply to your network Compare the differences between Windows 2000 Professional, Server, Advanced Server, and Datacenter
Acronis Backup & Recovery 11.5
Acronis Backup & Recovery 11.5 Installation Guide Applies to the following editions: Advanced Server Virtual Edition Advanced Server SBS Edition Advanced Workstation Server for Linux Server for Windows
Reborn Card NET. User s Manual
Reborn Card NET User s Manual Table of Contents Notice Before Installation:... 2 System Requirements... 3 1. First Installation... 4 2. Hardware Setup... 4 3. Express Installation... 6 4. How to setup...
SysPatrol - Server Security Monitor
SysPatrol Server Security Monitor User Manual Version 2.2 Sep 2013 1 Product Overview SysPatrol is a server security monitoring solution allowing one to monitor one or
An Analysis of Propalms TSE and Microsoft Remote Desktop Services
An Analysis of TSE and Remote Desktop Services JULY 2010 This document illustrates how TSE can extend your Remote Desktop Services environment providing you with the simplified and consolidated management
GAUSS 9.0. Quick-Start Guide
GAUSS TM 9.0 Quick-Start Guide Information in this document is subject to change without notice and does not represent a commitment on the part of Aptech Systems, Inc. The software described
WinClon 6 User Guide. With Screenshots. A Windows Embedded Partner
User Guide With Screenshots Table of Contents Product Introduction Product Overview Product Features Product Installation/Registration System Requirements Installation Use as Evaluation Activate on Internet
Chapter 4. Operating Systems and File Management
Chapter 4 Operating Systems and File Management Chapter Contents Section A: Operating System Basics Section B: Today s Operating Systems Section C: File Basics Section D: File Management Section E: Backup
PARALLELS SERVER BARE METAL 5.0 README
PARALLELS SERVER BARE METAL 5.0 README 1999-2011 Parallels Holdings, Ltd. and its affiliates. All rights reserved. This document provides the first-priority information on the Parallels Server Bare Metal.,
Table of Contents. Online backup Manager User s Guide
Table of Contents Backup / Restore Windows System (WBAdmin)... 2 Requirements and recommendations... 2 Overview... 3 1.1 How to backup Microsoft Windows System (WBAdmin)... 5 How to restore Microsoft Windows
Installing and Upgrading to Windows 7
Installing and Upgrading to Windows 7 Before you can install Windows 7 or upgrade to it, you first need to decide which version of 7 you will use. Then, you should check the computer s hardware to make.,
BackupAssist Common Usage Scenarios
WHITEPAPER BackupAssist Version 5 Cortex I.T. Labs 2001-2008 2 Table of Contents Introduction... 3 Disaster recovery for 2008, SBS2008 & EBS 2008... 4 Scenario 1: Daily backups with
Total Backup Recovery 7
7 TM 7 Simplify and automate backup and recovery manageability while maintaining business continuity 7 Advanced Server is FarStone s next generation backup and recovery utility to protect your business
Windows Client/Server Local Area Network (LAN) System Security Lab 2 Time allocation 3 hours
Windows Client/Server Local Area Network (LAN) System Security Lab 2 Time allocation 3 hours Introduction The following lab allows the trainee to obtain a more in depth knowledge of network security and
Xopero Centrally managed backup solution. User Manual
Centrally managed backup solution User Manual Contents Desktop application...2 Requirements...2 The installation process...3 Logging in to the application...6 First logging in to the application...7 First
File System Management
Lecture 7: Storage Management File System Management Contents Non volatile memory Tape, HDD, SSD Files & File System Interface Directories & their Organization File System Implementation Disk Space Allocation | http://docplayer.net/1309060-Lifeboat-an-autonomic-backup-and-restore-solution.html | CC-MAIN-2018-47 | refinedweb | 11,021 | 54.52 |
Autotest Client Tests
[TOC]
References
- Autotest Best Practices
- Autotest Coding Style Guide
- Writing Autotests
- Dynamic Suites Codelab
- Server Side Autotests Codelab
- Result logs
- Client helper libraries
- Troubleshooting ebuild files
Overview
In this codelab, you will build a client-side Autotest to check the disk and cache throughput of a ChromiumOS device. You will learn how to:
Setup the environment needed for autotest
Run and edit a test
Write a new test and control file
Check results of the test
In the process of doing so you will also learn a little about the autotest framework.
Background: holds chrome/chromeos functional tests.
- These include pyauto tests, but not autotest tests (Note: pyauto is deprecated).
- On the DUT, these map to: /usr/local/autotest/deps/chrome_test/test_src/chrome/test/functional/
<cros_checkout>/src/third_party/autotest/client/site_tests: holds autotest tests.
- On the DUT, these map to /usr/local/autotest/tests.
<cros_checkout>/src/platform/factory: holds some private factory tests, although the bulk of factory tests reside in site_tests.
Please consult the dynamic suite codelab to understand how your tests can run as a suite.
Prerequisites
chroot build environment. Autotest source, basic python knowledge.
Objectives.
Running a test on the client
First, get the autotest source:
a. If you Got the Code, you already have autotest.
b. If you do not wish to sync the entire source and reimage a device, you can run tests in a vm.
-
Get an image
Select your image, e.g.,
gsutil ls gs://chromeos-releases/dev-channel/lumpy/*
Copy your image, e.g.,
gsutil cp gs://chromeos-releases/dev-channel/lumpy/3014.0.0/ChromeOS-R24-3014.0.0-lumpy.zip ./
Unzip the image and untar autotest.tar.bz2, e.g.,
unzip ChromeOS-R24-3014.0.0-lumpy.zip chromiumos_qemu_image.bin autotest.tar.bz2
The unzipped folder from 2.b should contain a VM.
.
Through test_that
- enter chroot:
cros_checkout_directory$ cros_sdk
- Invoke test_that, to run login_LoginSuccess on a vm with local autotest bits:
test_that localhost:9222 login.
Directly on the DUT
Editing a Test
For python-only changes, test_that uses
autotest_quickmerge to copy your
python changes to the sysroot. There is no need to run rcp/scp to copy the
change over to your DUT.
The fastest way to edit a test is directly on the client. If you find the text editor on a Chromium OS device non-intuitive then edit the file locally and use a copy tool like rcp/scp to send it to the DUT.
Add a print statement to the login_LoginSuccess test you just ran
rsync it into /usr/local/autotest/tests on the client
rcp path/to/login_LoginSuccess.py root@<DUT_ip>:/usr/local/autotest/tests/login_LoginSuccess/
-.
Writing a New Test:
- run
hdparm -T <disk>
- Search output for timing numbers.
- Report this as a result.
Create a directory in
client/site_tests, name
kernel_HdParmBasic.:
import logging
from autotest_lib.client.bin import testclass.
Emerging and Running
Basic flow:
- Add the new test: add
+tests_kernel_HdParmBasicin the
IUSE_TESTSsection of the autotest-tests ebuild file:
#third_party/chromiumos-overlay/chromeos-base/autotest-tests/autotest-tests-9999.ebuild IUSE_TESTS="${IUSE_TESTS} # some other tests # some other tests # ... +tests_kernel_HdParmBasic "
- cros_workon autotest-test
cros_workon --board=lumpy start autotest-tests
- emerge autotest-tests
emerge-lumpy chromeos-base/autotest-tests
(if that fails because of dependency problems, you can try cros_workon
--board=lumpy autotest-chrome and append
chromeos-base/autotest-chrome to the
line above)
- run test_that
test_that -b lumpy DUT_IP kernel_HdParmBasics
If you’d like more perspective you might benefit from consulting the troubleshooting doc.
Checking results
where you will have to replace ‘
/tmp/test_that.<RESULTS_DIR_HASH>’ with
anything you might have specified through the --results_dir_root option.
You can also find the latest results in
/
Import helpers:
- run
hdparm -T <disk>
This implies running things on the command line, modules to look at are base/site utils.
However common_lib’s ‘utils.py’ conveniently gives us both.
from autotest_lib.client.bin import test, utils
Search output for timing numbers.
Report this as a result.
import logging, re
run_once, cleanup and initialize
If your test manages any state on the DUT it might need initialization and cleanup. In our case the subprocess handles it’s own cleanup, if any. Putting together all we’ve talked about, our run_once method looks like:
import[ logging]((%5C.(swig%7Cpy)$%7C/__init__%5C.(swig%7Cpy)$)&is_navigation=1),[ re]((%5C.(swig%7Cpy)$%7C/__init__%5C.(swig%7Cpy)$)&is_navigation=1)
from[ autotest_lib.client.bin]((%5C.(swig%7Cpy)$%7C/__init__%5C.(swig%7Cpy)$)&is_navigation=1) import[ test]((%5C.(swig%7Cpy)$%7C/__init__%5C.(swig%7Cpy)$)&is_navigation=1),[ utils]((%5C.(swig%7Cpy)$%7C/__init__%5C.(swig%7Cpy)$)&is_navigation=1) class: | https://www.chromium.org/chromium-os/testing/test-code-labs/autotest-client-tests/ | CC-MAIN-2022-27 | refinedweb | 764 | 57.87 |
This week’s tidbit will show you how to launch an app when a user clicks on a link. Last time, I wrote about debugging a view binding issue in Android.
Have you ever had an app launch on your mobile device when you click a link in an email, on a webpage, etc. and wondered how to do the same thing for your app? Wonder no more, because in this simple guide I will show you how to accomplish this!
Android
On Android, this can be accomplished by defining an
<intent-filter> in your app’s
AndroidManifest.xml.
Here’s a simple example:
<intent-filter>
<action android:
<category android:
<category android:
<data
android:scheme="https"
android:
</intent-filter>
Doing something like this would cause any link that began with (the URL for my Medium profile) to launch this app instead.
You can even define multiple
<data> listings to allow multiple URL patterns to invoke your app. (Note that any possible combination of schemes/hosts that you have defined will match — even across
<data> entries)
Depending on your desired behavior, you can even use something completely custom for the
android:scheme, such as
mattstidbits, and omit the
android:host property entirely. This would allow any URL starting with
mattstidbits:// to launch your app. However, this will only work if the user has your app is installed. If you want to direct the user to the Play Store listing for your app if they don’t have it installed, you’ll want to use an
http or
https scheme and a host/domain that you own. Then, you can follow these instructions to create a JSON file that you can embed on your website that will instruct browsers to redirect users to the Play Store listing page for your app.
If you want to have your app access information that’s embedded in the URL (so you can take them to a specific screen within the app), you’ll want to add something like the following to your application’s
onCreate method:
val action: String? = intent?.action
val uri: Uri? = intent?.data
You can then verify that the action was
Intent.ACTION_VIEW and grab the
uri that was passed in and use the methods/properties of the
Uri class to pull out information from the URI to help you handle the request.
It’s also a good idea to test your deep linking, which you can do on a simulator or physical device using the
adb command, like this:
adb shell am start -d "your-deep-link-url"
The Android developer documentation for implementing deep linking is really excellent, which you can find here:
iOS
Disclaimer: I am not an iOS developer by trade, so I can’t provide nearly the level of detail that I can for Android.
Apple appears to offer two different mechanisms for launching an app from a URL — URL Schemes and Universal Links.
- URL Schemes are easier/faster to implement, but prompt the user for permission and don’t work if the app is not installed.
- Universal Links require you to have control of the domain (much like Google’s App Links that I described above), so they require more work to set up, but they won’t prompt the user and allow you to provide a fallback URL if the user doesn’t have the app installed.
This article does a good job of explaining some of the differences:
For now, let’s focus on URL Schemes, since that’s the approach I have worked with before. To set one up, all you have to do is specify the scheme in Xcode via the Info tab in your project’s settings. Note that unlike Android, iOS only lets you define the scheme (the part that comes before the
:// in a URL). So, you could enter
mattstidbits in the “URL Schemes” field, so any links beginning with
mattstidbits:// would launch your app (if it’s installed).
In your
Info.plist file this shows up under the
CFBundleURLSchemes key.
If you want your app to access the information that’s stored in the URL, then you’ll need to handle this inside of your app delegate’s
application() method. There’s a
url argument that will contain the data.
As with Android, you can test these deep links by running the following terminal command:
xcrun simctl openurl booted "your-deep-link-url"
(Again, I don’t pretend to be an iOS developer, so please see the official documentation for how to do this.)
React Native
If you’ve been following my tidbits for a while, you probably wondered if I would touch on this — here’s how to do this in React Native!
React Native has two different setups — if you are using Expo, follow these instructions to specify the scheme you’d like to use within Expo’s JSON config.
However, if you are not using Expo, you generally need to follow the same steps above for iOS/Android to set up deep linking in your
AndroidManifest.xml and
Info.plist. To get this working in a React Native CLI project, you should follow these instructions to configure your project. Note that for iOS you’ll need to add some special code to your AppDelegate to make this work.
Note that both of these options assume you are using React Navigation for managing navigation within your app (which I highly recommend you do!)
If you want to be able to actually do something with the URL (and not just simply invoke the app), you will need to configure a
linking object and assign that to your
NavigationContainer.
The
linking object is a little funky, in that it’s like a copy of your navigation graph. It gives you the power to map URL parameters to internal route names/paths, but does require you to match your app’s navigation graph precisely.
For example, you might set this up as follows:
const config = {
screens: {
Tidbit: 'post/:id',
Profile: 'user',
},
};
const linking = {
prefixes: [''],
config,
};
function App() {
return (
<NavigationContainer linking={linking}>
<Stack.Navigator>
<Stack.Screen name="Tidbit" component={PostDetailScreen} />
<Stack.Screen name="Profile" component={ProfileScreen} />
</Stack.Navigator>
</NavigationContainer>
);
}
Where this can start to get a little tedious is if you have nested navigators (which you likely do), your
linking object will need to also match that nesting.
For more details, see this article.
To test the link with an Expo project, run the following command (substituting
ios in place of
android below if you’d like to change what platform you’re targeting):
npx uri-scheme open "your-deep-link-url://127.0.0.1:19000" --android
For a non-Expo project, you can test the deep linking via the native iOS/Android commands described in the sections above.
Overall strategies
What can we take away from this?
- Deep linking on both iOS/Android/React Native provides sophisticated functionality.
- In nearly all cases, you probably want your links to have the same format across iOS/Android for consistency’s sake (having links work on either platform seamlessly)
- I would recommend starting with the simplest approach and building from there — there can be a lot of red tape to work through to get files changed in your company’s/project’s website in order to use the more sophisticated App Links (Android) or Universal Links (iOS), so if you’re able to get by with the more basic deep linking/URL schemes (where you don’t need to modify a website), then start with that.
Interested in working with me in the awesome Digital Products team here at Accenture? We have an opening:
Do you have other deep linking tips you’d like to share? Let me know in the comments below! | https://medium.com/nerd-for-tech/matts-tidbits-103-launching-an-app-from-a-url-c27da66cd52c?utm_campaign=React%2BNative%2BNow&utm_medium=web&utm_source=React_Native_Now_93 | CC-MAIN-2021-49 | refinedweb | 1,292 | 56.89 |
from django.utils import simplejson
Then:
from django.utils import simplejson
Then:
To create a Google app engine custom 404 page, add a handler that matches everything as the last handler:
(r'.*', DefaultHandler),
And somewhere in that handler, call
self.error(404)
Google App Engine WSGIApplication for Drupal Users:
(r'/path(?:/(.*))?', HandlerClass),
registers HandlerClass for URLs of the form:
/path /path/ /path/something /path/something/else
But not:
/pathfoo
HandlerClass get method should then start with:
def get(self, args): if args is None: args = ""
printf debugging in GreaseMonkey without the modal dialog box of alert():
console.log("Link: %o", document.links[0])to log messages and inspect elements. Don't forget to remove the Firebug code before distributing the script.
window.status = message_string, understanding that the status bar text is transient: many other processes within Firefox set it.
Firefox 2 font configuration in Fedora 8 Linux does not come
from GNOME font settings. If you have, at some point turned on
sub-pixel font rendering for your LCD in Firefox and want to
turn it off because the colors around the edges of the
letters are ugly (or if you want to turn it on because you
think it looks better), the settings are located in
~/.fonts.conf
The default subpixel font setting is off, so to turn off
subpixel rendering in firefox (and reset all the other
settings to defaults), one can simply delete
~/.fonts.conf
This also works for seamonkey
Was getting a lot of
[Errno 12] Timeout: <urlopen error timed out>
with yum. Went into /etc/yum.conf and added a line:
timeout=120
documentation was in man page yum.conf(5).
That helped a bit, but fetching mirrorlists was still giving the Errno 12 timeout error almost instantly. Worked around that by just pointing the yum.repos.d entries at a particular mirror. Probably some bad interaction with going through tor/privoxy (necessary to work around some recent china telecom auto-blocking)
Turning on kernel support for packet forwarding to do IP Masquerading/NAT in Linux (Fedora 7) can be done with:
echo 1 > /proc/sys/net/ipv4/ip_forward
But it can also be done by putting the line
net.ipv4.ip_forward = 1
in /etc/sysctl.conf
Was looking for my note about the new mysql 5 join syntax just now, but couldn't find it because I was searching for "syntax", which wasn't in the post. =P I can never remember if the manual's suggested replacement syntax for the comma is JOIN or INNER JOIN
problem: gnucash-bin: error while loading shared libraries: libgtkhtml-3.8.so.15: cannot open shared object file: No such file or directory
solution: sudo yum -y install gtkhtml38
problem: gnucash-bin: error while loading shared libraries: libgsf-gnome-1.so.114: cannot open shared object file: No such file or directory
solution: sudo yum -y install libgsf-gnome
problem: ERROR: In procedure open-file: ERROR: No such file or directory: "/usr/share/guile/1.8/slib/require"
solution: sudo yum -y update sl! | http://www.advogato.org/person/wtanaka/diary.html?start=17 | CC-MAIN-2013-48 | refinedweb | 507 | 53.21 |
Coveo JavaScript Search UI Framework
You should install the Coveo JavaScript Search UI Framework as an npm package:
npm install --save coveo-search-ui
All resources will be available under
node_modules/coveo-search-ui/bin. You can include those in your pages with
<script> tags. This will make the variable
Coveo globally available in your page.
If you are using a module bundler (Browserify, Webpack, rollup, etc.), you can use
require('coveo-search-ui') or
import * as Coveo from 'coveo-search-ui'.
Alternatively, you can download the Coveo JavaScript Search UI Framework (see Downloading the JavaScript Search Framework).
Since the April 2017 release, it is possible to access the resources of any specific Coveo JavaScript Search Framework
official release (from version
1.2537 on) through a content delivery network (CDN).
You can simply use a URL such as[VERSION]/[PATH_TO_FILE], where you
replace
[VERSION] by the actual release version number you wish to use and
[PATH_TO_FILE] by the path of the file
you require.
For quick access to the latest CDN links, see JavaScript Search Framework CDN Links.
Example:
The following tags include the
1.2537version (April 2017 release) of the
CoveoJsSearch.min.js,
templateNew.jsand
CoveoFullSearchNewDesign.cssfiles.
<head> [ ... ] <script src=""></script> <script src=""></script> <link rel="stylesheet" href="" /> [ ... ] </head>
<!-- Include the library scripts. --> <script src="js/CoveoJsSearch.js"></script> <script src="js/templates/templates.js"></script> <!-- Each DOM element with a class starting with "Coveo" (uppercase) will instantiate a component. --> <body id="search" class='CoveoSearchInterface'> <!-- Each DOM element with a class starting with "coveo-" (lowercase) is strictly for CSS/alignment purpose. --> <div class='coveo-search-section'> <!-- Any Coveo component can be removed (or added); none is actually required for the page to "load". --> <div class="CoveoSearchbox"></div> </div> <!-- The "data-" attributes of each component allow you to pass options to this specific component instance. --> <div class="CoveoFacet" data-</div> <div class="CoveoFacet" data-</div> <script> // The following line shows you how you could configure an endpoint against which to perform your search. // Coveo.SearchEndpoint.configureCloudEndpoint('MyCoveoCloudEnpointName', 'my-authentification-token'); // We provide a sample endpoint with public sources for demo purposes. Coveo.SearchEndpoint.configureSampleEndpoint(); // Initialize the framework by targeting the root in the interface. // It does not have to be the document body. Coveo.init(document.body); </script> </body>
You can find more examples of fully configured pages in the
./pages folder.
A tutorial is available to help you get started (see Coveo JavaScript Search UI Framework Getting Started Tutorial).
You should have node 10.x installed to build this project.
npm install -g yarn yarn global add gulp yarn install gulp
gulp default: Builds the entire project (CSS, templates, TypeScript, etc.)
gulp compile: Builds only the TypeScript code and generates its output in the
./binfolder.
gulp css: Builds only the Sass code and generates its output in the
./binfolder.
gulp sprites: Regenerates the sprites image as well as the generated Sass/CSS code.
gulp unitTests: Builds and runs the unit tests.
gulp doc: Generates the documentation website for the project.
gulp dev: Starts a webpack dev server for the project.
gulp devTest: Starts a webpack dev server for the unit tests.
Make sure you were able to run
gulp entirely without any errors first. Then you can start the dev-server:
gulp dev
This will start a webpack-dev-server instance (see Webpack Dev Server).
You can now load in a web browser.
Any time you hit Save in a source file, the bundle will be recompiled and the dev page will reload.
If you need to modify the content of the search page (i.e., the markup itself, not the TypeScript code), modify the
index.html page under
./bin. This page is not committed to the repository, so you do not have to worry about
breaking anything. However, if you feel like you have a good reason to modify the original
index.html, feel free to
do so.
You might need to assign more memory to Webpack if you see errors about
heap out of memory. To do so, use this command :
node --max_old_space_size=8192 ./node_modules/gulp/bin/gulp.js dev;
Tests are written using Jasmine. You can use
npm run test to run
the tests in Chrome Headless.
If you wish to write new unit tests, you can do so by starting a new webpack-dev-server instance.
To start the server, run
gulp devTest.
Load.
Every time you hit Save in a source file, the dev server will reload and re-run your tests.
Code coverage will be reported in
./bin/coverage
General reference documentation is generated using TypeDoc (see Coveo JavaScript Search UI Framework - Reference Documentation). The generated reference documentation lists and describes all available options and public methods for each component.
Handwritten documentation with more examples is also available (see Coveo JavaScript Search UI Framework Home).
A tutorial is also available (see Coveo JavaScript Search UI Framework Getting Started Tutorial). If you are new to the Coveo JavaScript Search UI Framework, you should definitely consult this tutorial, as it contains valuable information.
You can also use Coveo Search to find answers to any specific issues/questions (see the Coveo Community Portal).
Please use the Coveo community to ask questions or to search for existing solutions. | https://coveo.github.io/search-ui/ | CC-MAIN-2020-10 | refinedweb | 871 | 58.38 |
Due by 11:59pm on Wednesday, 10/15
Submission: See Lab 1 for submission instructions. We have provided a hw5.py starter file for the questions below.
Readings: You might find the following references useful: ***"
In the integer market, each participant has a list of positive integers to trade. When two participants meet, they trade the smallest non-empty prefix of their integers that are equal in sum. equal_prefix = lambda: sum(first[:m]) == sum(second[:n]) "*** YOUR CODE HERE ***" if equal_prefix(): first[:m], second[:n] = second[:n], first[:m] return 'Deal!' else: return 'No deal!'
Linked lists
The linked list data abstraction, used below, contains the following functions.
################################ # Linked list data abstraction # ################################] def print_link(s): """Print elements of a linked list s. >>> s = link(1, link(2, link(3, empty))) >>> print_link(s) 1 2 3 """ line = '' while s != empty: if line: line += ' ' line += str(first(s)) s = rest(s) print(line), empty)))) >>> print_link(x) 3 4 6 6 >>> has_prefix(x, empty) True >>> has_prefix(x, link(3, empty)) True >>> has_prefix(x, link(4, empty)) False >>> has_prefix(x, link(3, link(4, empty))) True >>> has_prefix(x, link(3, link(3, empty))) False >>> has_prefix(x, x) True >>> has_prefix(link(2, empty), link(2, link(3, empty))) False """ "*** YOUR CODE HERE ***"', empty))) >>> x = link('G', link('A', link('T', link('T', aca)))) >>> print_link(x) G A T T A C A >>> has_sublist(x, empty) True >>> has_sublist(x, link(2, link(3, empty))) False >>> has_sublist(x, link('G', link('T', empty))) False >>> has_sublist(x, link('A', link('T', link('T', empty)))) True >>> has_sublist(link(1, link(2, link(3, empty))), link(2, empty)) True >>> has_sublist(x, link('A', x)) False """ "*** YOUR CODE HERE ***"
Finally, write
has_61A_gene to detect
C A T C A T within a linked list
dna sequence.
def has_61A_gene(dna): """Returns whether linked list dna contains the CATCAT gene. >>> dna = link('C', link('A', link('T', empty))) >>> dna = link('C', link('A', link('T', link('G', dna)))) >>> print_link(dna) C A T G C A T >>> has_61A_gene(dna) False >>> end = link('T', link('C', link('A', link('T', link('G', empty))))) >>> dna = link('G', link('T', link('A', link('C', link('A', end))))) >>> print_link(dna) G T A C A T C A T G >>> has_61A_gene(dna) True >>> has_61A_gene(end) False """ "*** YOUR CODE HERE ***"
Note: Subsequence matching is a problem of importance in computational biology. CS 176 goes into more detail on this topic, including methods that handle errors in the DNA (because DNA sequencing is not 100% correct). >>> w(90, 'hax0r') 'Insufficient funds' >>> w(25, 'hwat') 'Incorrect password' >>> w(25, 'hax0r')']" """ "*** YOUR CODE HERE ***"
Suppose that our banking system requires the ability to make joint
accounts. Define a function
make_joint that takes three arguments.
withdrawfunction,
withdrawfunction was defined, and
The
make_joint function returns a
withdraw function that provides
additional access to the original account using either the new or old
password. Both functions draw down the same balance. Incorrect
passwords provided to either function will be stored and cause the
functions to be locked after three wrong attempts.
Hint: The solution is short (less than 10 lines) and contains no
string literals! The key is to call
withdraw with the right password
and ***"
ALL FOLLOWING PROBLEMS ARE CHALLENGE PROBLEMS (OPTIONAL)
Section 2.4.
Implement the function
triangle_area, which ***"
The
multiplier constraint from the readings ***"
Here is the equation solver implementation from the readings:) | http://inst.eecs.berkeley.edu/~cs61a/fa14/hw/released/hw5.html | CC-MAIN-2018-05 | refinedweb | 562 | 58.11 |
Hi,
I came up with a custom content page to check if the device was rotated, my usage was I wanted to change the listview data template if the device rotated for a dashboard view.
Content page
using System; using Xamarin.Forms; namespace Foobar.CustomPages { public class OrientationContentPage : ContentPage { private double _width; private double _height; public event EventHandler<PageOrientationEventArgs> OnOrientationChanged = (e, a) => { }; public OrientationContentPage() : base() { Init(); } private void Init() { _width = this.Width; _height = this.Height; } protected override void OnSizeAllocated(double width, double height) { var oldWidth = _width; const double sizenotallocated = -1; base.OnSizeAllocated(width, height); if (Equals(_width, width) && Equals(_height, height)) return; _width = width; _height = height; // ignore if the previous height was size unallocated if (Equals(oldWidth, sizenotallocated)) return; // Has the device been rotated ? if (!Equals(width, oldWidth)) OnOrientationChanged.Invoke(this,new PageOrientationEventArgs((width < height) ? PageOrientation.Vertical : PageOrientation.Horizontal)); } } }
Enum
using System; namespace Foobar.CustomPages { public class PageOrientationEventArgs: EventArgs { public PageOrientationEventArgs(PageOrientation orientation) { Orientation = orientation; } public PageOrientation Orientation { get; } } public enum PageOrientation { Horizontal = 0, Vertical = 1, } }
Usage
using System; using System.Diagnostics; using Foobar.CustomPages; using Xamarin.Forms.Xaml; namespace Foobar.Views { [XamlCompilation(XamlCompilationOptions.Compile)] public partial class HomePagePhone : OrientationContentPage { public HomePagePhone() { InitializeComponent(); OnOrientationChanged += DeviceRotated; } private void DeviceRotated(object s, PageOrientationEventArgs e) { switch (e.Orientation) { case PageOrientation.Horizontal: break; case PageOrientation.Vertical: break; default: throw new ArgumentOutOfRangeException(); } Debug.WriteLine(e.Orientation.ToString()); } } }
Seems to work okay on hardware, tested with Forms 2.3.3.180 on iPhone6, IPad, Sansumg S6, S2 Tablet and UWP on a nokia 650 lumia.
Might be useful for someone but would be good if it was added to the Forms framework, I'll add a new idea on the evolution forms if there's interest and post up any issues.
@ClintStLaurent @DavidOrtinau @PierceBoggan
@NMackay
Thanks for sharing that. Nice to see I'm not the only one who does a base Page type to inherit from, for all my other pages.
This will fit in the pattern perfectly.
Thanks!
I suspect many of us have done the same sort of thing :-)
This will (correctly) also fire the orientation change event when the app's window is resized (e.g. on UWP desktop) such that which is greater out of width and height changes.
Something like this should really be in the Forms Framework imo.
Anyway, does the job for me
Interesting. I remember one recent Apple DevCon where they advised developers to move away from explicitly detecting rotations, to only detecting resize events. After all, so the reasoning went, a rotation (as far as the window's content is concerned) is only a size change (unless the device/window is square).
So if we follow the Apple advice all we actually need is the
SizeAllocatedevent, which we already have. The fact that you've based your rotation detection on it seems to me to indicate that rotation detection isn't really needed since all the information is already available.
So I'm curious: what use is rotation detection for you? I see your use case, but how does rotation detection help where
SizeAllocateddoesn't?
@DavidDancy
The main point is the Forms framework doesn't provide a convenient hook/event as it stands to change for example a template when flipping from portrait to landscape, also my scenario doesn't cover actually figuring out what orientation your PCL forms app is in at startup to set the initial template, you need the dependency service (it's documented but again, it would help if it was part of the API). These are small quick wins
Besides, it's Forms and we can't follow Apple's advice
In my case, I switch between a single page view and a split screen view (similar to master/detail) depending on whether height > width and whether width > the minimum width that the split screen makes sense for. Whilst I could do that without a concept of portrait and landscape, I do still use the concept to drive the first test (height > width). It's a convenience that I've built into my base page class in similar fashion to how @NMackay has done it. Could be done either way. I'm not sure I'd merge it into the main XF codebase though at this point, as it might encourage people to go against Apple's guidance.
I just wrote a reply to say I think the
SizeAllocatedevent should be sufficient, but thinking about it I've unconvinced myself so I have to reply differently.
Reasoning: the
SizeAllocatedcallback gets called multiple times, with multiple different size values - including some where Height and Width are set to -1 (unknown). This means that we need an extra "filter" on that callback so we can detect when the size is a) stable and b) different than the previous stable size.
So I see your rotation event could be a nice way to package up that information to make sure that we only respond when a response is actually needed.
But I think the real problem is in the fact that Forms calls the
SizeAllocatedcallback before the
View's size is stable.
If Forms were to wait until the size had stable, non-undefined values for both
Widthand
Height, there would be no need to filter anything.
We could then do as Apple advises and handle both rotation and resize scenarios by simply observing the resize.
@DavidDancy
Good shout, OnSizeAllocated is currently called 2/3 times in iOS tested on iPhone6 and iPad4 hardware (and indicated in the guide) and once in Android and UWP (again, tested on hardware) hence my code workarounds (and in the guidance docs). Since I've added a property to set the orientation based on a platform dependency call to get the orientation at startup, I did research a lot and did see if there was a nice way in iOS and again, so much conflicting information, I'm just going on the official recommendation for the dependency service.
Thanks for your feedback & insight on the iOS side.
Finished implementing it my app by adding a property to my dependency service to check if the device is in portrait or landscape. Basically when the user flips the device it shows the dashboard widgets in a grid view (landscape) or a linear list (portrait). The dependency check at startup just means the app presents the widgets in the correct way at 1st load.
Code behind is fine in this case as it's just about adjusting the presentation layer
Yes obviously this must be a part of Forms Framework.
Anyways, it does the job for others also
Thanks a lot for sharing this, I am surprised they haven't implemented anything to do with orientation in the framework.
@DeveshMishra @Aegletes
There's some new features in the pipeline that will make this easier going forwards.
@NMackay Very interesting, I've implemented it with a sensible change:
I have a base view model that parallels the page base (indeed I have various base layers for different contexts for both page and view models) and instead of putting the OnOrientationChanged event in the page base and firing that, I pass it to the corresponding base view model and it fires it - that way the business rules bubble up the view model layers instead of bubbling up the page layers and then having to be passed into each individual view model at the top page's binding context.
But thanks for the core code!
But viewmodels are supposed to be ignorant and agnostic of views. They certainly shouldn't be responding to
OnOrientationChanged. At most they might have a property for
Orientationthat your views make use of. But from the perspective of a ViewModel the UI (if there even is one) is none of its concern.
Here's where your plan goes sideways....
What happens when your ViewModel is the backing source for 6 different views? Maybe 3 of them are orientation dependent and 3 aren't. And does your viewmodel run through
OnOrientationChanged6 times because all 6 views throw the event when the device is rotated?
What - a criticism without a solution!
So there's a whole scope such as binding a menu to 'IsVisible' where it can be argued that the business rule is 'give the user a menu tool' as opposed to the viewmodel knowing about the view, and I take orientation as being of the same ilk, in effect being agnostic to the view by accepting it as a business rule 'this is the orientation of the user's hardware' (ie there's no room to give a user a certain tool here).
And trying to cater for so many platforms, such as presenting Menus in Desktop and Tablet and 'not on iPhone except in Landscape', it's much easier to use a converter bound to a viewmodel than to try OnPlatform in Xaml. And this current project is for 8 platform/idiom combos.
And what really talks is when the client is spending a 1/4 million plus on the app and says to do it the quickest way, so I'll bind to orientation in the view model, and fire off orientation changes in the vms every time, this is just saying 'don't present the user with a tool.
And Btw, so what if a view does not subscribe to an event!
Come up with a solution that allows this to happen without the vm being involved I'm all ears but until you do, I'm too busy making money. | https://forums.xamarin.com/discussion/88646/detecting-page-orientation-change-for-contentpages | CC-MAIN-2019-13 | refinedweb | 1,585 | 50.67 |
/*************************************************************** Here's a good problem to work on, as it takes into account a number of the things we've talked about: The file scores.txt contains the scores of students on various problems on an exam. Each row corresponds to a student, and the scores along that row are that student's scores on problems 1, 2, 3 etc. Your job: figure out which problem was the hardest! You may assume that for every problem, at least one student got full credit for that problem. If the average score for problem X as a percentage of the full credit score for X is less than the average score for problem Y as a percentage of the full credit score for Y, then problem X is "harder" than problem Y. ***************************************************************/ #include <iostream> #include <fstream> #include <string> #include <cstdlib> using namespace std; double** readGrades(int& ns, int& np); double* getAveragePercentage(double** P, int ns, int np); int indexOfMin(double* A, int np); int main() { int ns, np; double** P = readGrades(ns,np); double* A = getAveragePercentage(P,ns,np); int imin = indexOfMin(A,np); cout << "Problem p" << imin+1 << " is hardest (ave = " << A[imin] << "%)" << endl; return 0; } double** readGrades(int& ns, int& np) { // Get size and allocate 2D array ifstream fin("scores.txt"); string junk; fin >> ns >> junk >> np >> junk; double** P = new double*[ns]; for(int i = 0; i < ns; i++) P[i] = new double[np]; // Read values into 2D array for(int j = 0; j < np; j++) // read problem numbers fin >> junk; for(int i = 0; i < ns; i++) for(int j = 0; j < np; j++) fin >> P[i][j]; return P; } double max(double a, double b) { return a < b ? b : a; } double* getAveragePercentage(double** P, int ns, int np) { double* A = new double[np]; for(int j = 0; j < np; j++) { // find maximum score in column j double maxScore = P[0][j]; for(int i = 1; i < ns; i++) maxScore = max(maxScore,P[i][j]); // find average of scores in column j (as a precentage of the top score) double sum = 0; for(int i = 0; i < ns; i++) sum += P[i][j]/maxScore * 100.0; A[j] = sum/ns; } return A; } int indexOfMin(double* A, int np) { int imin = 0; for(int i = 2; i < np; i++) if (A[imin] > A[i]) imin = i; return imin; } | https://www.usna.edu/Users/cs/wcbrown/courses/F16IC210/lec/l25/TE1.cpp.html | CC-MAIN-2018-09 | refinedweb | 389 | 53.99 |
note also bug 282658 Unhandled exception at 0x00e4685e (jsd3250.dll) in mozilla.exe: 0xC0000005: Access violation reading location 0x00000000. EAX = 00000000 EBX = 00000001 ECX = 0B6B8D98 EDX = 00E4683D ESI = 0AE48EE8 EDI = 00000000 EIP = 00E4685E ESP = 0012F1D8 EBP = 0012F1E4 EFL = 00200202 > jsd3250.dll!jsds_NotifyPendingDeadScripts(JSContext * cx=0x0012f224) Line 503 + 0x3 C++ jsd3250.dll!jsds_GCCallbackProc(JSContext * cx=0x00a6d888, JSGCStatus status=JSGC_END) Line 521 C++ js3250.dll!js_GC(JSContext * cx=0x00a6d888, unsigned int gcflags=0) Line 1448 C js3250.dll!js_ForceGC(JSContext * cx=0x00a6d888, unsigned int gcflags=0) Line 1028 + 0x19 C js3250.dll!JS_GC(JSContext * cx=0x00a6d888) Line 1747 + 0x8 C js3250.dll!JS_MaybeGC(JSContext * cx=0x00a6d888) Line 1766 + 0x6 C gklayout.dll!nsJSContext::ScriptEvaluated(int aTerminated=0) Line 1875 + 0xc C++ gklayout.dll!nsJSContext::ScriptExecuted() Line 1946 C++ xpc3250.dll!AutoScriptEvaluate::~AutoScriptEvaluate() Line 107 C++ xpc3250.dll!nsXPCWrappedJSClass::CallMethod(nsXPCWrappedJS * wrapper=0x010778dd, unsigned short methodIndex=55432, const nsXPTMethodInfo * info=0x0012f350, nsXPTCMiniVariant * nativeParams=0x00000004) Line 1588 + 0x11 C++ xpc3250.dll!nsXPCWrappedJS::CallMethod(unsigned short methodIndex=3, const nsXPTMethodInfo * info=0x00a621c0, nsXPTCMiniVariant * params=0x0012f408) Line 450 C++ xpcom_core.dll!PrepareAndDispatch(nsXPTCStubBase * self=0x091cff70, unsigned int methodIndex=3, unsigned int * args=0x0012f4c4, unsigned int * stackBytesToPop=0x0012f4b4) Line 117 + 0x12 C++ xpcom_core.dll!SharedStub() Line 147 C++ xpcom_core.dll!nsObserverService::NotifyObservers(nsISupports * aSubject=0x0cde2210, const char * aTopic=0x00be1f90, const unsigned short * someData=0x00000000) Line 210 C++ necko.dll!nsHttpHandler::NotifyObservers(nsIHttpChannel * chan=0x0cde2210, const char * event=0x00be1f90) Line 480 C++ necko.dll!nsHttpChannel::AsyncOpen(nsIStreamListener * listener=0x0a72fd28, nsISupports * context=0x00000000) Line 3123 C++ imglib2.dll!imgLoader::LoadImage(nsIURI * aURI=0x0a6e5b28, nsIURI * aInitialDocumentURI=0x0b31b408, nsIURI * aReferrerURI=0x0b31b408, nsILoadGroup * aLoadGroup=0x0b59ba10, imgIDecoderObserver * aObserver=0x124aeca0, nsISupports * aCX=0x025600d0, unsigned int aLoadFlags=0, nsISupports * cacheKey=0x00000000, imgIRequest * aRequest=0x00000000, imgIRequest * * _retval=0x124aeca4) Line 510 + 0xc C++ gklayout.dll!nsContentUtils::LoadImage(nsIURI * aURI=0x0a6e5b28, nsIDocument * aLoadingDocument=0x025600d0, nsIURI * aReferrer=0x0b31b408, imgIDecoderObserver * aObserver=0x124aeca0, int aLoadFlags=0, imgIRequest * * aRequest=0x124aeca4) Line 1768 + 0x29 C++ gklayout.dll!nsImageLoadingContent::ImageURIChanged(const nsACString & aNewURI={...}) Line 481 C++ gklayout.dll!nsImageLoadingContent::ImageURIChanged(const nsAString & aNewURI={...}) Line 411 + 0x14 C++ gklayout.dll!nsHTMLImageElement::SetParent(nsIContent * aParent=0x093dd6e8) Line 634 C++ gklayout.dll!nsGenericElement::AppendChildTo(nsIContent * aKid=0x124aec88, int aNotify=0, int aDeepSetDocument=0) Line 2600 C++ gklayout.dll!SinkContext::AddLeaf(nsIHTMLContent * aContent=0x124aec88) Line 1565 C++ gklayout.dll!SinkContext::AddLeaf(const nsIParserNode & aNode={...}) Line 1497 C++ gklayout.dll!HTMLContentSink::AddLeaf(const nsIParserNode & aNode={...}) Line 3127 C++ gkparser.dll!CNavDTD::AddLeaf(const nsIParserNode * aNode=0x0ab3a868) Line 3774 + 0xd C++ gkparser.dll!CNavDTD::HandleDefaultStartToken(CToken * aToken=0x090bb410, nsHTMLTag aChildTag=eHTMLTag_a, nsCParserNode * aNode=0x0ab3a868) Line 1443 + 0x8 C++ gkparser.dll!CNavDTD::HandleStartToken(CToken * aToken=0x00000034) Line 1818 + 0xe C++ gkparser.dll!CNavDTD::HandleToken(CToken * aToken=0x00000034, nsIParser * aParser=0x0ae1bb28) Line 1003 + 0xa C++ gkparser.dll!CNavDTD::BuildModel(nsIParser * aParser=0x0ae1bb28, nsITokenizer * aTokenizer=0x0bdb0520, nsITokenObserver * anObserver=0x00000000, nsIContentSink * aSink=0x0a76a484) Line 472 + 0xa C++ gkparser.dll!nsParser::BuildModel() Line 2032 C++ gkparser.dll!nsParser::ResumeParse(int allowIteration=1, int aIsFinalChunk=1, int aCanInterrupt=1) Line 1894 + 0x6 C++ gkparser.dll!nsParser::ContinueParsing() Line 1430 + 0xc C++ gkparser.dll!nsParserContinueEvent::HandleEvent(PLEvent * aEvent=0x088c3a70) Line 240 C++ xpcom_core.dll!PL_HandleEvent(PLEvent * self=0x088c3a70) Line 693 C xpcom_core.dll!PL_ProcessPendingEvents(PLEventQueue * self=0x00a4dce8) Line 627 + 0x6 C xpcom_core.dll!nsEventQueueImpl::ProcessPendingEvents() Line 402 C++ gkwidget.dll!nsWindow::DispatchPendingEvents() Line 3721 C++ gkwidget.dll!nsWindow::ProcessMessage(unsigned int msg=512, unsigned int wParam=0, long lParam=26542521, long * aRetValue=0x0012fd10) Line 4092 C++ gkwidget.dll!nsWindow::WindowProc(HWND__ * hWnd=0x004c038a, unsigned int msg=512, unsigned int wParam=0, long lParam=38279124) Line 1355 + 0x10 C++ user32.dll!GetDC() + 0x72 user32.dll!GetDC() + 0x154 user32.dll!GetWindowLongW() + 0x127 user32.dll!DispatchMessageW() + 0xf gkwidget.dll!nsAppShell::Run() Line 159 C++ appcomps.dll!nsAppStartup::Run() Line 216 C++ mozilla.exe!main1(int argc=2, char * * argv=0x002a4878, nsISupports * nativeApp=0x0b6b8d98) Line 1321 + 0x9 C++ mozilla.exe!main(int argc=2, char * * argv=0x002a4878) Line 1813 + 0x13 C++ mozilla.exe!WinMain(HINSTANCE__ * __formal=0x00400000, HINSTANCE__ * __formal=0x00400000, char * args=0x0015235a, HINSTANCE__ * __formal=0x00400000) Line 1841 + 0x17 C++ mozilla.exe!WinMainCRTStartup() Line 390 + 0x1b C kernel32.dll!RegisterWaitForInputIdle() + 0x49 - (DeadScript*)esi 0x0ae48ee8 {links= } } } jsdc=0x00a53c58 {links={next=0x00e4c010 __jsd_context_list prev=0x00e4c010 __jsd_context_list } inited=1 data=0x00000000 ...} script=0x00000000 } DeadScript * |+ links {next=0x14aa9cf8 {next=0x0659eb30 {next=0x061bfa60 {next=0x0a4239b8 prev=0x0659eb30 } prev=0x14aa9cf8 {next=0x0659eb30 prev=0x0b6b8d98 } } } } } prev=0x0b6b8d98 ea568 {next=0x0b6b8d98 {next=0x14aa9cf8 prev=0x0b6ea568 } prev=0x06664e68 {next=0x0b6ea568 prev=0x0bc4b008 } } } } PRCListStr |+ jsdc 0x00a53c58 {links={next=0x00e4c010 __jsd_context_list prev=0x00e4c010 __jsd_context_list } inited=1 data=0x00000000 ...} JSDContext * \+ script 0x00000000 jsdIScript * JS_STATIC_DLL_CALLBACK (void) jsds_NotifyPendingDeadScripts (JSContext *cx) { 00E467F8 push ebp 00E467F9 mov ebp,esp 00E467FB push ecx nsCOMPtr<jsdIScriptHook> hook = 0; gJsds->GetScriptHook (getter_AddRefs(hook)); 00E467FC mov eax,dword ptr [gJsds (0E4C0E4h)] 00E46801 push esi 00E46802 push edi 00E46803 xor edi,edi 00E46805 mov dword ptr [hook],edi 00E46808 mov esi,dword ptr [eax] 00E4680A lea ecx,[hook] 00E4680D call nsCOMPtr<nsIEventQueue>::StartAssignment (0E45DFFh) 00E46812 push eax 00E46813 push dword ptr [gJsds (0E4C0E4h)] 00E46819 call dword ptr [esi+18h] DeadScript *ds; #ifdef CAUTIOUS_SCRIPTHOOK JSRuntime *rt = JS_GetRuntime(cx); #endif gJsds->Pause(nsnull); 00E4681C mov eax,dword ptr [gJsds (0E4C0E4h)] 00E46821 mov ecx,dword ptr [eax] 00E46823 push edi 00E46824 push eax 00E46825 call dword ptr [ecx+88h] while (gDeadScripts) { 00E4682B jmp jsds_NotifyPendingDeadScripts+77h (0E4686Fh) ds = gDeadScripts; if (hook) 00E4682D mov eax,dword ptr [hook] 00E46830 cmp eax,edi 00E46832 je jsds_NotifyPendingDeadScripts+45h (0E4683Dh) { /* tell the user this script has been destroyed */ #ifdef CAUTIOUS_SCRIPTHOOK JS_UNKEEP_ATOMS(rt); #endif hook->OnScriptDestroyed (ds->script); 00E46834 push dword ptr [esi+0Ch] 00E46837 mov ecx,dword ptr [eax] 00E46839 push eax 00E4683A call dword ptr [ecx+10h] #ifdef CAUTIOUS_SCRIPTHOOK JS_KEEP_ATOMS(rt); #endif } /* get next deleted script */ gDeadScripts = NS_REINTERPRET_CAST(DeadScript *, PR_NEXT_LINK(&ds->links)); 00E4683D mov eax,dword ptr [esi] if (gDeadScripts == ds) { 00E4683F cmp eax,esi 00E46841 mov dword ptr [gDeadScripts (0E4C0E8h)],eax 00E46846 jne jsds_NotifyPendingDeadScripts+56h (0E4684Eh) /* last script in the list */ gDeadScripts = nsnull; 00E46848 mov dword ptr [gDeadScripts (0E4C0E8h)],edi } /* take ourselves out of the circular list */ PR_REMOVE_LINK(&ds->links); 00E4684E mov ecx,dword ptr [esi+4] 00E46851 mov dword ptr [ecx],eax 00E46853 mov eax,dword ptr [esi] 00E46855 mov ecx,dword ptr [esi+4] 00E46858 mov dword ptr [eax+4],ecx /* addref came from the FromPtr call in jsds_ScriptHookProc */ NS_RELEASE(ds->script); 00E4685B mov eax,dword ptr [esi+0Ch] 00E4685E mov ecx,dword ptr [eax] 00E46860 push eax 00E46861 call dword ptr [ecx+8] /* free the struct! */ PR_Free(ds); 00E46864 push esi 00E46865 mov dword ptr [esi+0Ch],edi 00E46868 call dword ptr [__imp__PR_Free (0E491DCh)] 00E4686E pop ecx 00E4686F mov esi,dword ptr [gDeadScripts (0E4C0E8h)] 00E46875 cmp esi,edi 00E46877 jne jsds_NotifyPendingDeadScripts+35h (0E4682Dh) } gJsds->UnPause(nsnull); 00E46879 mov eax,dword ptr [gJsds (0E4C0E4h)] 00E4687E mov ecx,dword ptr [eax] 00E46880 push edi 00E46881 push eax 00E46882 call dword ptr [ecx+8Ch] } 00E46888 lea ecx,[hook] 00E4688B call dword ptr [__imp_nsCOMPtr_base::~nsCOMPtr_base (0E49230h)] 00E46891 pop edi 00E46892 pop esi 00E46893 leave 00E46894 ret This is a build based on mozilla1.8a5 with a couple of changes to js for certain crashers (but none to jsd). i'm fairly certain i've hit this a couple of times before.
I think I've hit this twice. TB30867552Z and TB31164388W I also think this may be related to Bug 376643. In both of my crashes, firefox was suffering from that bug's symptoms [100% CPU]. In both crashes I had selected "Exit"; all windows closed, but firefox continued to be pegged at 100% CPU until eventually this crash. Is it possible that these pending queued timer events try to fire after the script is dead?
While trying to reproduce the claimed crash associated with bug 379390 (normally I got just a temporary CPU spike/hang) I got this stack. TB31715128 (FF2.0.0.4pre on windows).
Hit it yet again, just as in my previous report. TB32552572K
This crash seems to be a topcrash in Firefox 2.0.0.4, which is weird to me because it's filed in the JavaScript Debugger component. Stack Signature jsds_NotifyPendingDeadScripts 3314a509 Product ID Firefox2 Build ID 2007051502 Trigger Time 2007-06-04 14:18:19.0 Platform Win32 Operating System Windows NT 5.1 build 2600 Module jsd3250.dll + (00004caf) URL visited User Comments Since Last Crash 7224 sec Total Uptime 167528 sec Trigger Reason Access violation Source File, Line No. c:/builds/tinderbox/Fx-Mozilla1.8-release/WINNT_5.2_Depend/mozilla/js/jsd/jsd_xpc.cpp, line 501 Stack Trace jsds_NotifyPendingDeadScripts [mozilla/js/jsd/jsd_xpc.cpp, line 501] jsds_GCCallbackProc [mozilla/js/jsd/jsd_xpc.cpp, line 519] JS_GC [mozilla/js/src/jsapi.c, line 1879] ScopedXPCOMStartup::~ScopedXPCOMStartup [mozilla/toolkit/xre/nsAppRunner.cpp, line 740] main [mozilla/browser/app/nsBrowserApp.cpp, line 61] kernel32.dll + 0x16fd7 (0x7c816fd7)
Samuel, the Bugzilla "JavaScript Debugger" component covers two items: - JSD, the compiled debugging core, which is the jsd3250 module in stacks. - Venkman, one of the front-ends for JSD (Firebug being the commonest other). JSD, being compiled, is shipped with everything (except, sometimes, badly configured Linux distribution builds), and is where the bug (or at least the top of the stack) is. Due to the way certain SpiderMonkey hooks have to be chained, once JSD has added its GC hook (which it will do as soon as any debugger front end is initialised), it will stay hooked until app shutdown. It is also possible JSD is being turned on during startup, but I cannot recall the exact specifics to check this.
note that firebug is also a jsd consumer and iirc it has an option that enables it to start when firefox launches. amusingly, i believe that both venkman and firebug can probably enable jsd at launch such that when firefox updates venkman/firebug are incompatible and thus "disabled" by EM, but because of the way the jsd hook registers (xpcom startup category), it will probably not be disabled by the app update :). but that really isn't particularly relevant. the question is how can script end up null. basically someone needs to figure out if this is a reentrancy issue, a threading/concurrency issue, a lifetime issue, or something else. most likely this will require someone to read the code very very very carefully (I've tried a couple of times and haven't managed to figure out the problem).
It's not realistic to block the branch on this bug when it's both unconfirmed and assigned to someone who probably isn't actively coding any more. Did this become a top crash because we changed something in our code recently, or simply because Firebug has gotten popular?
We should add this information to the list of problematic extensions at mozillazine.org
dveditz: we probably could wallpaper this if someone wanted to, the wallpaper for that is trivially obvious. the problem is that i can't figure out what's up. it doesn't make sense :(
Created attachment 269381 [details] [diff] [review] wallpaper there's no reasonable way for me to test this since i don't have any way to reproduce it (other than general exhaustive use). if i could actually reproduce it, i'd have tried to figure out what was actually wrong and fix that instead...
I believe there is indeed a way to reproduce this behavior, apologies if i was not more clear in my first cross-referenced report of this bug. See Bug 376643 [marked resolved but that's not true for any current build such as firefox 2.0.0.4]. Make sure javascript is enabled. In several tabs or windows, load several copies of the script referenced by that bug. Open a bunch of other tabs and windows to elsewhere for good measure, some of them to other sites with javascript code using setInterval, such as scrollers. If you really want to quickly trigger this bug, make a local copy of the javascript and change the interval to something even faster, like 100 or [yike!] 10 milliseconds. I say yike because Bug 376643 will really hose firefox with such small delays. In any event, having many copies of the script running in separate tabs/windows, proceed to either kill -STOP or pssuspend the process [see Bug 376643 for details]. Wait a good long while, unless you've made a local copy with short delay, in which case a few minutes should suffice. Then kill -CONT or otherwise resume the process. You should see that your CPU is hammered by firefox. As soon as you can, select "Exit" from the File menu. You should observe that your CPU is still hammered, and the windows eventually all close, but your CPU remains hammered. After some time, firefox will crash with a talkback ID which will resolve to this error.
then by all means, please build --enable-debugger-info-modules and add some tracing to figure out what's happening to ds->script. sorry, my current work does not extend anywhere near this area. but people on irc will gladly talk you through instrumenting this code.
Sorry, i'm even less involved here, just volunteering information as i have consistently encountered this bug while contributing to the identification and fixing of the "100% CPU upon resume" bug.
whorfin, is this reproducible with a current trunk nightly?
You're free to follow the reproduction instructions above on a trunk if you'd like. For what it's worth, it just happened to me again on the current release; see TB35632506Y.
Created attachment 285846 [details] Test case with Firebug 1.05, run then change tabs right after. I see this stack trace from Firebug in FF2 once in a while. In the process of developing a profiling test case for performance enhancements for Firebug 1.2, I ran the test in Firebug 1.05. I crashed three times out of maybe 6. It seems to help (that is crash more ;-) if I change tabs soon after the test runs. FF 2.0.0.8 + Firebug 1.05 (getfirebug.com version) + html attached + change tabs. TB 37254658, 37254423, 37254216
One more interesting point: the performance test here creates lots of functions in eval() (and new Function()). The pageads/adsense crash happens right after lots of functions in an eval().
johnjbarton@johnjbarton.com: could you possibly build and try w/ my patch? if you have this reproducable, it should be fairly easy for you to decide whether it fixes the problem. If it does fix it, you should try sticking some printfs into the referenced functions, i suppose.
Since this must have shipped in FF2/FF1.5 not marking as blocking for FF3/Gecko1.9. If you think it should re-nom. If patch comes along we'd obviously consider...
This may have shipped in FF2/1.5 but it's also a topcrash and I think we should fix it. Dan, do you have time to review this? Is there someone else who might?
I've never hit this one on FF3. Doesn't prove much, but I've also not hit a cluster of bugs described in 400532 and 400618. Its my hunch that these are all related to a jsd-gc issue which was uncovered by the discovery of AJAX (many more evals) and Firebug (many more jsd users). They all have the character of multi-thread bugs, being unreliable in place and time. I suggest looking at 400532 first because it is the most reproducible and it can be traced down to a single kind of call if not the same one each time. John.
Comment on attachment 269381 [details] [diff] [review] wallpaper Clearing sr-request, we'll need some sort of module-owner/peer review on this. r- because I don't think this solves anything. There is no way script can be null, and if you're positing memory corruption then I'd prefer a safe null-dereference over taking our chances with a double-free I didn't see any of the talkbacks point at this line, in fact. The large majority of them were crashing on line 501, PR_REMOVE_LINK(&ds->links); A few crashed on line 471, gJsds->GetScriptHook(getter_AddRefs(hook));
I got the same crash some minutes ago while Firefox was closed to apply the todays update. Build ident is Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.10pre) Gecko/20071120 BonEcho/2.0.0.10pre ID:2007112004. I've also Firebug installed. The Talkback id is TB38298366W. Shouldn't the bug be confirmed?
#4 crasher for 2.0.0.9. no crashes (well, one for 2007080200 build) on trunk crash-stats.
This (or something very similar) also happens on Linux, build ID "Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11", every time I shut down the browser (see TB39870148H, TB39772035E, TB39758026X). I do have Firebug 1.05 installed; when I exited the next session after I disabled it to look into bug 366509 the browser didn't crash. (I guess this just confirms everything that others have reported, except to add that it's happening on Linux too.)
re Comment 22: > A few crashed on line 471, gJsds->GetScriptHook(getter_AddRefs(hook)); that's bug 411249
btw, bug 379390 comment 8 has the closest thing i see to smoke. I'll have to think about it a bit more, but it might actually be a useful skid mark, here's the part that interests me: jsds_NotifyPendingDeadScripts [mozilla/js/jsd/jsd_xpc.cpp, line 501] PR_REMOVE_LINK(&ds->links); jsds_GCCallbackProc [mozilla/js/jsd/jsd_xpc.cpp, line 519] js_NewGCThing [mozilla/js/src/jsgc.c, line 1420] js_NewString [mozilla/js/src/jsstr.c, line 2442] js_NewStringCopyZ [mozilla/js/src/jsstr.c, line 2576] JS_NewUCStringCopyZ [mozilla/js/src/jsapi.c, line 4497] I'm wondering if something got dead (more on a weekend)
J. Barton: The test case you post in comment 16 doesn't seem functional; it needs a runner.js and functions [eg, test()] it presumably contains. Can you fix that, or point me to where this came from (almost looks like one of our JS tests?)? Not sure how else to test if this exists on trunk or not, though your comment 21 makes it sound like it probably doesn't?
The test case in comment 16 uses However I can't swear that this file is unchanged since I tried it back in Oct. The code is modified from John Resig's runner.js, see. I've not seen this on FF3. Before firebug-1.1.0b12 we did not call jsd.off() when we should. Since activation in firebug depends on the tab, its possible that tab switching and failure to call jsd.off() could be involved.
Created attachment 307058 [details] [diff] [review] jst's attachment 305891 [details] [diff] [review] from bug 303821 comment 57 these are the only correct/relevant hunks. I've reordered the comment /remove-link / deadscripts lines, changed the comment(s). I have an alternate patch which I'll post shortly.
Created attachment 307066 [details] [diff] [review] my patch i think a lock may still be needed because i don't think the operations here are truly thread safe. (And I don't like the Pause/Unpause stuff, I think it's bogus. If it's trying to avoid thread problems, it fails miserably, as it's amazingly easy to lose the races it's creating.) basically, I worry that the else case in jsds_ScriptHookProc gGCStatus == JSGC_END could conceivably happen on two threads. That might not actually be possible, if the only way to reach it is from the GC thread.
Comment on attachment 307066 [details] [diff] [review] my patch a1.9=beltzner
Comment on attachment 307066 [details] [diff] [review] my patch mozilla/js/jsd/jsd_xpc.cpp 1.87
timeless, does this patch apply on branch as well? This is a topcrasher on branch, so it'd be great to get something there as well.
Created attachment 307571 [details] [diff] [review] backport there's only one difference (reinterpret_cast)
Comment on attachment 307571 [details] [diff] [review] backport We're too late in the cycle to take this now and want some more trunk baking before we take this on the branch. Minusing for 1.8.1.13, but nominating for 1.8.1.14.
Comment on attachment 307571 [details] [diff] [review] backport approved for 1.8.1.15, a=dveditz for release-drivers
Checked fix into the 1.8 branch
I can't validate the crash in 2.0.0.14 with Firebug installed. Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.14) Gecko/2008040413 Firefox/2.0.0.14
verifying based on talkback: this dropped from #20 topcrash in Firefox 2.0.0.14 to #182 in 2.0.0.15 | https://bugzilla.mozilla.org/show_bug.cgi?id=282660 | CC-MAIN-2017-22 | refinedweb | 3,446 | 57.87 |
A few months ago, Microsoft released Beta 1 .NET Framework 3.5 and Visual Studio 2008 codename Orcas (). Microsoft made the version a bit whacky for this release cycle and if you haven’t been paying close attention you wonder whether you somehow missed ASP.NET 3.0 along the way. You didn’t. Microsoft didn’t make an ASP.NET 3.0 update. The .NET 3.0 release cycle focused on add-on libraries that run on top the .NET runtime 2.0. The 3.0 updates brought Windows Presentation Foundation (WPF), Windows Workflow (WF), and Windows Communication Foundation (WCF), and they all consisted entirely of new libraries that run on top of the 2.0 runtime, which itself wasn’t updated by the 3.0 release. For ASP.NET this means that .NET 3.0 really had no effect on the existing feature set and functionality. Daniel Moth sums up the confusing runtime arrangements from 2.0 to 3.5 () in a blog post.
.NET 3.5 will be the next major release of the .NET runtime and it does change the base libraries as well as add a number of new libraries to the core runtime. However, even with these updates, changes to ASP.NET 3.5 are relatively minor. Most of the key feature enhancements in .NET 3.5 are related to core runtime and language enhancements, especially those that relate to Language Integrated Query (LINQ) and its supporting features.
A Different Kind of Update
When Microsoft released ASP.NET 2.0, they made some pretty major changes from version 1.1. Microsoft changed almost every aspect of the HTTP runtime and the core ASP.NET engine changed including a whole new compilation model and new project types. Although you could run 1.x applications with very few (if any) changes, making these applications into full blown 2.0 applications required a host of changes and updates. ASP.NET 2.0 introduced a tremendous amount of new functionality-new controls, many new HTTP runtime features, provider models for many aspects of the runtime. It’s probably not an overstatement to say that Microsoft’s release of ASP.NET 2.0 had huge changes that many of us are still digesting one and half years later.
ASP.NET 3.5, on the other hand, looks to be a much less heavy-handed update and most of the changes are evolutionary rather than revolutionary. In fact, as far as pure runtime features are concerned, not much will change at all. A good number of enhancements that the ASP.NET team will roll into the Orcas release have to do with more tightly integrating technologies that are already released and in use. This includes direct integration of the ASP.NET AJAX features into the runtime and full support for IIS 7’s managed pipeline (which by its own right could have been ASP.NET 3.0!).
For you as a developer, this means that moving to Visual Studio 2008 and ASP.NET 3.5 is probably going to be a lot less tumultuous than the upgrade to 2.0. I, for one, am glad that this is the case as the change from 1.1 to 2.0 involved a lot of work for conversion. In my early experimentation with Orcas Beta 1 I’ve migrated existing applications painlessly to .NET 3.5 and could immediately start adding new 3.5 features without that horrible “legacy” feeling about my code that typically comes with a major .NET version upgrade.
That isn’t to say that .NET 3.5 will not have anything new or the potential for making you *want* to rewrite your applications, but getting a raw 2.0 application running and compiling properly under Visual Studio 2008 either for .NET 2.0 or 3.5 will be very easy.
ASP.NET 3.5: Minimal Changes
ASP.NET 3.5 doesn’t include a ton of new features or controls. In fact, looking through the System.Web.Extensions assembly I found only two new controls: ListView and DataPager.
The ListView control is a new data-bound list control that is a hybrid of a GridView and Repeater. You get the free form functionality and templating of a Repeater with the data editing, selection, and sorting features of a data grid, but without being forced into a tabular layout. The ListView supports layout for table, flow layout, and bullet list layout so it offers a lot of flexibility and some built-in smarts for the type of layout used. All of this provides more flexibility and more control over your layout than either the GridView or the Repeater has on its own. The DataPager control works in combination with a ListView to provide custom paging functionality with the DataPager acting as behavior for the ListView.
The System.Web.Extensions assembly contains all of the new functionality for ASP. System.Web.Extensions and should be familiar since it also contains the ASP.NET AJAX features that Microsoft has already made available as part of the ASP.NET AJAX distribution. With .NET 3.5, System.Web.Extensions becomes a native part of .NET 3.5 so you don’t have to make a separate installation to use ASP.NET AJAX features. Unfortunately it looks like you must still make the ugly configuration settings required by the AJAX features for web.config-I had hoped that the ASP.NET team could have moved these settings into the machine-wide web.config settings.
In addition to these core features that Microsoft will add to System.Web.Extensions already, Microsoft has indicated additional pending features in the ASP.NET Futures CTP code (). The ASP.NET Futures release includes code for a host of features, some of which may or may not make it into the actual framework distribution. It includes a number of additional ASP.NET AJAX Client library features, a host of XAML and Silverlight support features, and an interesting dynamic data framework called Dynamic Data Controls () which is a Rails-like tool to quickly generate a full data manipulation interface to a database. Microsoft has not indicated which of these features of the Futures release will make it into the shipping version of .NET 3.5, but some will undoubtedly make the cut.
The Real Change Maker Is LINQ
While ASP.NET itself may not make a huge wave of changes, that doesn’t mean there won’t be plenty of new stuff for developers to learn. To me, the biggest change in .NET 3.5 is clearly the introduction of Language Integrated Query (LINQ) and I bet that this technology, more than anything else, will have an effect on the way developers code applications, much in the way Generics affected .NET 2.0 coding. Like Generics, LINQ will require a little bit of experimenting to get used to features, but once you ‘get’ the core set of LINQ functionality, LINQ will be hard to live without.
In a nutshell, LINQ provides a new mechanism for querying data from a variety of “data stores” in a way that is more intuitive and less code-intensive than procedural code. With strong similarities to SQL, LINQ uses query parsing techniques within the compiler to reduce the amount of code you have to write for filtering, reordering, and restructuring data. With LINQ you can query any IEnumerable-based list, a database (LINQ to SQL) and XML (LINQ to XML). You can also create new providers so, in theory, LINQ can act as a front end for any data store that publishes a LINQ provider.
LINQ generally returns an IEnumerable-based list of data where the data is specific to the data you are querying. What’s especially nice is that the returned data can be in an inferred format so that you can get strongly-typed data returned from what is essentially a dynamic query. LINQ can either run query results into dynamically constructed types that the compiler generates through type inference, or by explicitly specifying an exact type that matches the query result signature.
LINQ makes it possible to use a SQL-like syntax to query over various types of lists. While this may not sound all that exciting, take a minute to think about how you often deal with list data in an application in order to reorganize the data by sorting or filtering that data. While it’s not too difficult to do this using either raw code with a foreach loop or using .NET 2.0 predicate syntax on the generic collection types, it’s still pretty a pretty verbose process that often splits code into multiple code sections. LINQ instead provides a more concise and dynamic mechanism for re-querying and reshaping data. I have no doubt it will take some time to get used to the power that LINQ offers languages like C# and Visual Basic, but I bet it will end up making a big change in the way that developers write applications.
For example, think of simply querying a list of objects (List<Customer> in this case) for a query condition like this:
var CustQuery = from c in Customers where c.Entered > DateTime.Now.AddDays(-1) && c.Company.Contains("Wind") select new { c.Name, c.Company, Date = c.Entered };
LINQ queries are actually just the definition of the actual query. Nothing actually executes the query to retrieve the data until the data is selected or otherwise accessed. Internally LINQ uses a .Select() extension method on a collection to cause the data to be retrieved and served.
The actual result is a dynamically typed IEnumerable List<AnonymousType> through which you can simple run through with foreach:
foreach (var c in CustQuery) { Response.Write(c.Company + " - " + c.Name + " - " + c.Date.ToString() + " <hr>"); }
Microsoft has made the LINQ syntax a lot more compact than similar procedural code and, to me at least, easier to understand just looking at that code block. What takes a bit of getting used to is just how many things you can actually apply LINQ to as it works with just about any kind of list.
If you look at the small bit of code above you see quite a few new features of the C# language (Visual Basic has most of these same features), which may seem a little unnerving. In order for LINQ to work, Microsoft needed to make a number of modifications to the language compilers and the core runtime. These features that make LINQ work require language features that rely on type inference and the ability to easily shortcut certain verbose language statements like object initialization and delegate-based expression evaluation. Here’s a quick run through of some of the more important language-related features in C# that are related to LINQ but useful in their own right.
Anonymous Types and Implicit Typing
Anonymous types are types that are created without explicitly specifying each one of the member types attached to it. They are basically a shortcut for object creation in which the compiler figures out the type of the specified member based on the value assigned to it.
This is a crucial feature of LINQ which uses it to construct dynamic result types based on the query’s result fields that are chosen. You construct anonymous types like this:
var Person = new { Name = "Rick Strahl", Company = "West Wind", Entered = DateTime.Now };
The var type is an anonymous type that is compiler-generated and has local method scope. The type has no effective name and can’t be constructed directly, but it is returned only as the result of this anonymous type declaration. The type is generated at compile time, not at runtime, so it acts just like any other type with the difference that it’s not visible outside of the local method scope.
Var types are most commonly used with objects, but they can also be used with simple types like int, bool, string etc. You can use a var type in any scenario where the compiler can infer the type:
var name = "Ken Dooit";
Here the compiler creates a string object and any reference to name is treated as string.
By itself this feature sounds like a bad idea-after all, strong typing and compiler safety is a feature of .NET languages that most of us take for granted. But it’s important to understand that behind the scenes the compiler still creates strongly typed objects simply by inferring the type based on the values assigned or parameters expected by a method call or assignment.
I doubt I would ever use this feature with normal types in a method, but it really becomes useful when passing objects as parameters-you can imagine many situations where you create classes merely as message containers in order to pass them to other methods. Anonymous types allow you to simply declare the type inline which makes the code easier to write and more expressive to read.
This is important for LINQ which can return results from a query as a var result and often needs to create types and result values dynamically based on the inputs specified. With LINQ, a query result is var IEnumerable<A> where A is the anonymous type. The type works as if it were a normal .NET type and Visual Studio 2008 is smart enough to even provide you IntelliSense for the object inference. So you can have code like the following (assuming Customer is a previously defined class):
List<Customer> Customers = new List<Customer>(); Customers.Add(new Customer { Name="Rick Strahl", Company = "West Wind", Entered = DateTime.Now. AddDays(-5) } ); Customers.Add(new Customer { Name = "Markus Egger", Company = "EPS", Entered = DateTime.Now }); … add more here var CustQuery = from c in Customers where c.Entered > DateTime. Now.AddDays(-1) select new { c.Name, c.Company, Date = c.Entered }; foreach (var c in CustQuery) { Response.Write(c.Company + " - " + c.Name + " - " + c.Date.ToString() + " <hr>"); }
You’ll notice the var result, which is an IEnumerable of an anonymous type that has Name, Company, and Entered properties. The compiler knows this and fixes up the code accordingly, so referencing c.Company resolves to the anonymous type’s Company field which is a string.
It’s important to understand one serious limitation of this technology: Anonymous types are limited to local method scope, meaning that you can’t pass the type out of the method and expect to get full typing semantics provided outside of the local method that declared the var. Once out of the local method scope, a var result becomes equivalent to an object result with access to any of the dynamic properties available only through Reflection. This makes dynamic LINQ results a little less useful, but thankfully you can also run results into existing objects. So you could potentially rewrite the above query like this:
IEnumerable<Customer> CustQuery = from c in Customers where c.Entered > DateTime.Now. AddDays(-1) && c.Company. Contains("Wind") select new Customer { Name=c.Name, Company=c.Company };
Notice that the select clause writes into a new Customer object rather than generating an anonymous type. In this case you end up with a strong reference to a Customer object and IEnumerable of Customer. This works because the constructor is essentially assigning the values that the new object should take and you end up essentially mapping properties from one type of object to another. So if you have a Person class that also has a Name and Company field, but no Address or Entered field, you can select new Person() and get the Name and Company fields filled.
IEnumerable<Person> CustQuery = from c in Customers where c.Entered > DateTime.Now.AddDays(-1) && c.Company.Contains( "Wind") select new Person { Name=c.Name, Company=c.Company };
You can then pass values returned with this sort of strong typing out of methods because they are just standard .NET classes.
Object Initializers
In the last queries above, I used object initialization syntax to assign the name and company in the Person class. Notice that I could simply assign any property value inside of the curly brackets. It’s a declarative single statement way of initializing an object without requiring a slew of constructors to support each combination of property assignment parameters. In the code above the object is initialized with the values of the object that is currently being iterated-c or a Customer instance in this case.
This is a great shortcut feature that makes for more compact code, but it’s also crucial to get the compact syntax required in LINQ to make property assignments work in the select portion of a LINQ query. Object initializers, in combination with an anonymous type, effectively allow you create a new type inline of your code, which is very useful in many coding situations.
Extension Methods
Extension methods allow extension of existing in-memory objects through static methods that have a specific syntax. Here’s an extension method that extends the string with an IsUpper function in C#:
public static class StringExtensions { public static bool IsUpper( this string s) { return s.ToUpper() == s; } }
This syntax assigns the extension method to a string object by way of the implicit this parameter and the type of the first parameter. The first parameter is implicit and it’s always an instance of the type specified in the first parameter. To use the IsUpper function on the any string instance you have to ensure that the namespace is visible in the current code and, if it is, you can use the extension method like this:
string s = "Hello World"; bool Result = s.IsUpper();
The extension method is scoped to the string class by way of the first parameter, which is always passed to an extension method implicitly. The C# syntax is a little funky and Visual Basic uses an <Extension()> attribute to provide the same functionality. Arguably I find the Visual Basic version more explicit and natural-that doesn’t happen often.
Behind the scenes, the compiler turns the extension method into a simple static method call that passes this-the current instance of the class-as the first parameter. Because the methods are static and the instance is passed as a parameter you only have access to public class members.
This seems like a great tool to add behaviors to existing classes without updating the existing classes. You can even extend sealed classes in this way, which opens a few new avenues of extension.
Lambda Expressions
Lambda expressions () are a syntactic shortcut for anonymous method declarations, and they are what make LINQ filtering and sorting work. In the LINQ code I showed above, I used expressive syntax, which is actually parsed by the compiler and turned into lower level object syntax that chains together the various statement clauses into a single long expression.
One of the sections of a LINQ query deals with the expression parsing for the where clause (or more precisely the .where() and .sort() methods). So this code:
IEnumerable<Customer> custs1= from c in Customers where c.Entered > DateTime.Now.AddDays(-1) select c;
is really equivalent to:
IEnumerable<Customer> custs1 = Customers.Where( c => c.Entered > DateTime.Now.AddDays(-1));
which is also equivalent to:
IEnumerable<Customer> custs2 = Customers.Where( delegate(Customer c) { return c.Entered > DateTime.Now.AddDays(-1); });
So you can think of a lambda expression as a shortcut to an anonymous method call where the left-most section is the parameter name (the type of which is inferred). Lambda expressions can be simple expressions as above, or full code blocks which are wrapped in block delimiters ({ } in C#).
Behind the scenes, the compiler generates code to hook up the delegate and calls it as part of the enumeration sequence that runs through the data to filter and rearrange it.
Lambda expressions can be presented as delegates which is the functional aspect and deals with how they actually get executed. However, you can also assign them as an Expression<Func<>> object which makes it possible to parse the LINQ expression. This low-level feature can be used to implement custom LINQ providers that can do things like LINQ to SQL and LINQ to XML. These technologies take the inbound expressions and translate them into parsing completely separate data stores like a SQL database (in which case the LINQ is translated into SQL statements) or LINQ to XML (in which case XPATH and XmlDocument operations are used to retrieve the data).
It’s a very powerful feature to say the least, but probably one that’s not going to be used much at the application level. Lambda expressions are going to be primarily of interest for framework builders who want to interface with special kinds of data stores or custom generated interfaces. There are already custom interfaces springing up such a LINQ provider for LDAP () and nHibernate ().
LINQ and SQL
One of the major aspects of LINQ is its ability to access database data using this same LINQ-based syntax. Using SQL syntax natively, as opposed to SQL strings coupled with the inferred typing and IntelliSense support that LINQ provides, makes it easier to retrieve data from a database. It also helps reduce errors by providing type checking for database fields to some degree.
LINQ to SQL
You can use LINQ with a database in a couple of ways. The first tool, LINQ to SQL, provides a simplified entity mapping toolset. LINQ to SQL lets you map a database to entities using a new LINQ to SQL designer that imports the schema of the database and creates classes and relationships to build a simple object representation of the database. The mapping is primarily done by class attributes that map entity fields to table fields, and child objects via foreign keys and relations.
Visual Studio 2008 will ship with a LINQ to SQL designer that lets you quickly select an entire database or individual tables and lets you graphically import and map your database.
By default the entire database is imported and the mappings are one to one where each table gets to be a class and each field a property. Relationships are mapped as child collections or properties depending on whether it’s 1 to 1 or 1 to many. You can also choose to import either an entire database or selected tables from the database.
Once the mapping’s been done, the schema is available for querying using standard LINQ syntax against the object types that the mapper creates. The objects are created as partial classes that can be extended and these entity classes are used as the schema that enforces the type safety in the query code.
LINQ to SQL is a great tool for providing a very quick CRUD access layer for a database to provide for simple insert/update/delete functionality using entity objects. It also provides for the strong typing in LINQ queries against the database which means better type safety and the ability to discover what database tables and fields are available right in your code. LINQ to SQL works through a DataContext object, which is a simplified OR manager object. You can also use this object to create new object instances and add them to the database or load and update individual items for CRUD data access. Since CRUD data access often amounts to a large part of an application, this is a very useful addition.
To me, LINQ to SQL is a big improvement over a strongly-typed dataset (if you are using them). It provides much of the functionality that a strongly-typed dataset provides in a more lightweight package and more intuitive manipulation format.
But there’s probably not enough here to make pure Object Relational Mapper diehards happy. The attribute-based model and the nature of the current generator that doesn’t do non-destructive database refreshes for entities are somewhat limiting, but still it’s a great tool to quickly generate a data layer and address the 60-80% of data access scenarios that deal with CRUD operations.
ADO.NET Entity Framework - LINQ to Entities
For those more diehard ORM folks, Microsoft is also working on a more complete object relational framework called ADO.NET Entity Framework (). The Entity framework sports a more complete mapping layer that is based on XML map files. Several map files are used for the physical and logical schema and a mapping layer that relates the two. The Entity framework also integrates more tightly with ADO.NET using familiar ADO.NET objects such as EntityConnection, EntityCommand, and EntityReader to access data. You can query Entity framework data in a number of ways including using LINQ as well as a custom language called Entity Sql, which is a T-SQL-like language for retrieving object data. I’m not quite clear on why you’d need yet another query language beyond raw SQL and beyond the LINQ mapping to confuse the lines even more but it seems to me that LINQ would be the preferred mechanism to query data and retrieve it. The Entity framework also utilizes a data context that LINQ can use for database access so Entity framework objects are directly supported through LINQ.
The Entity Framework provides more control over the object relational mapping process, but it’s also a bit more work to set up and work with. Microsoft will ship the framework with Orcas but it appears that rich design-time support will not arrive until sometime after Orcas is released. Currently there’s a one-time wizard that you can run to create your schemas and mapping layers but after that you have to manually resort to managing the three XML files that provide the entity mapping. It’ll be interesting to keep an eye on the toolset and see how it stacks up against solutions like nHibernate.
Major Visual Studio Changes
Finally, let’s not forget Visual Studio 2008 for the Orcas release cycle. I am happy to see that this version of Visual Studio is not a complete overhaul of the entire environment. Microsoft will add many new features, but the way that projects work will basically stay the same. Visual Studio 2008 also uses the same add-in model as previous versions so most of your favorite plug-ins, templates, and IntelliSense scripts should continue to work in Visual Studio 2008.
Nevertheless there are also some major changes under the hood for ASP.NET.
New Html Editor
I don’t think I know anybody who actually likes the ASP.NET HTML designer in Visual Studio 2005. It’s slow as hell and has a number of issues with control rendering. For me it’s become so much of a problem that I rarely use the designer for anything more than getting controls onto a page. I then spend the rest of my time using markup for page design. It’s not that I don’t want to use a visual designer, but it’s tough when the designer is so slow and unpredictable.
Visual Studio 2008 introduces a brand spanking new designer based on the designer used in Expression Web. Microsoft completely redesigned the editor and, more importantly, they didn’t base it on the slow and buggy HTML edit control. The new editor provides a host of new features including a much more productive split pane view that lets you see both markup and WYSIWYG displays at the same time in real time. You can switch between the panes instantly for editing and see the changes in both and there’s no hesitation when moving between them. The editor’s rendering appears to be more accurate than the old one and maybe-even more important-the editor is considerably faster at loading even complex pages. Considerably faster! Even complex pages that contain master pages and many controls render in less than a second as opposed to 10+ seconds in Visual Studio 2005. Further, because you have split view that shows both design time and markup, you rarely need to switch view modes.
The property sheet now also works in markup view when your cursor is hovering over a control, which also makes markup view more usable. Microsoft added this feature in Visual Studio 2005 SP1 and it didn’t work reliably there, but it works perfectly in Visual Studio 2008 including the ability to get to the event list from markup.
There’s also much deeper support for CSS, which starts with the editor giving you a list of all styles available from all inline styles and external style sheets. The list shows a preview of each style. A CSS properties page also lets you examine which CSS classes are applied to a given HTML or control element and lets you then quickly browse through the styles to see which attributes apply and are overridden. The CSS tools are a little overwhelming because they live on three different windows and have several options to give different views. It takes a little experimenting to figure out all that is available and which of the windows you actually want to use, but it’s well worth the effort.
Improved JavaScript
One highly anticipated feature of Visual Studio 2008 is the improved JavaScript support. With the advent of AJAX, developers will write a lot more JavaScript code in the context of ASP.NET applications and Visual Studio 2008 provides a number of new editor features that will help the minimal JavaScript that Visual Studio has supported.
Visual Studio 2008 provides improved IntelliSense support. Visual Studio supported IntelliSsense for JavaScript prior to Visual Studio 2008 but it is extremely limited-it basically worked for a few known document objects with some static and incomplete member lists. Visual Studio 2008’s support for JavaScript is a more dynamic and can understand JavaScript functions defined in the page or in external JavaScript files that are referenced either via the <script src> tag or by using an ASP.NET AJAX ScriptManager control to add the scripts. JavaScript (.js) files can reference other scripts by using special syntax at the top of the .js file to reference another .js source file.
While the JavaScript IntelliSense works well enough with local functions and reasonably well with the ASP.NET AJAX libraries that are documented and follow the exact ASP.NET AJAX guidelines, I’ve had no luck at all getting the new IntelliSense features to work with other libraries. For example, opening Prototype.js as a related .js file on a page results in no IntelliSense whatsoever on Prototype’s functionality. None of the classes, or even static functions, show up in IntelliSense. It appears that Visual Studio 2008’s IntelliSense only follows the exact guidelines that ASP.NET AJAX requires for class definitions in order to show up in IntelliSense. I sincerely hope that some of these issues get addressed because as it stands currently in Beta 1, the new IntelliSense features in Visual Studio 2008 don’t help me at all with my JavaScript development.
I’m also disappointed that Visual Studio still does not offer support for some sort of JavaScript function/class browsing utility. I frequently look at rather large JavaScript library files and there’s just no good way to browse through the content short of raw text view and using Find. A class/function browser that shows all functions or better yet objects and members (which is more difficult) would certainly make navigating a large library file much easier. No such luck.
On a more positive note, Microsoft directly integrated JavaScript into the ASP.NET debugging process. You can now set breakpoints in JavaScript code in the IDE without having to sprinkle debugger; keywords throughout your code. Running the project will now automatically respect your JavaScript breakpoints in HTML, ASPX pages, and any .js files that are part of the project. The Locals and Watch windows also provide more useful and organized information on objects with class members sorted by type (methods and fields). The debugging experience is seamless so you can debug both client- and server-side code in a single session. This is a great improvement for JavaScript debugging and has made me go back to using Visual Studio script debugging recently. While I won’t throw out my copy of FireBug just yet, I find that for logic debugging the experience is much smoother when directly integrated in Visual Studio 2008.
Multi-Targeting
I want to mention one Visual Studio feature that is not specific to ASP.NET but it’s one of the more prominent features that I think will make the transition to Visual Studio 2008 much easier. Visual Studio 2008 supports targeting multiple versions of the .NET Runtime so you’re not tied to a particular version. You can create projects for .NET 2.0, 3.0, and 3.5, and when you choose one of these project types Visual Studio will automatically add the appropriate runtime libraries to your project. You can also easily switch between versions so you can migrate a Visual Studio 2005 project to Visual Studio 2008 with .NET 2.0, work on that for a while, and then at a later point in time decide to upgrade to .NET 3.5.
This should make it much easier to take advantage of many of the new features of Visual Studio-the new editor and the JavaScript improvements for example-even if you continue to build plain .NET 2.0 applications for some time to come.
Closing Thoughts
I’ve been running Visual Studio 2008 Beta 1 since Microsoft released it in February and I find its overall performance and stability pleasantly surprising. Yes there are parts that are a little unstable, but unlike previous .NET and VS.NET betas, the Visual Studio 2008 beta feels decidedly solid; Visual Studio 2008 is not only usable, but more usable in many ways than Visual Studio 2005. Visual Studio 2008 as a whole feels snappier. The new HTML and markup editor alone is worth the trouble. While Visual Studio 2008 Beta 1 still has some issues, most are minor and outright crashes very rare-in fact, I haven’t crashed Visual Studio 2008 any more than I crash Visual Studio 2005. Even many of the new designers, such as LINQ to SQL Entity designer, work well at this point.
I’m excited about many features in Visual Studio 2008 and although, so far, this release of .NET 3.5 and Visual Studio 2008 has not received the same hype that Visual Studio 2005 received (thankfully), I think this release is turning out to be solid and it brings many new practical features to the framework-as well as improved tools support all in a way that isn’t overwhelming. Personally I prefer this more moderate update approach and so far it’s working out great in that I can use the new technology today with my .NET 2.0 projects while at the same time being able to experiment with the new features for .NET 3.5.
As of the beginning of July I’m still using Beta 1 of .NET 3.5 and Visual Studio 2008. Microsoft has hinted that Beta 2 is on its way before the end of the month and there’s likely to be a go-live license included so you can start thinking about using .NET 3.5 and getting some code out on the Web for real if you choose. Final release has just been announced for February 27th of 2008 with release to manufacturing expected at the end of this year. Given the relative stability of the features and functionality it looks like all of this might actually happen on time too. I’m often critical of the way things are pushed out of the Microsoft marketing machine, but I think this time around Microsoft has struck a good balance and rolled things out at a pace that’s actually works well. Right on! | http://www.codemag.com/article/070123 | CC-MAIN-2016-07 | refinedweb | 6,004 | 61.26 |
Technote (FAQ)
Question
What PTFs have been released for IBM Websphere Voice Response for AIX, Version 4.2?
Answer
The following is a complete listing of PTF updates for WebSphere Voice Response for AIX, Version 4.2 and the fixes included within them, with the most recent PTF updates at the top.
PTF updates can be ordered via your IBM representative OR downloaded from Fix Central (for more details go to How to obtain PTFs for WebSphere Voice Response for AIX )
Back to WVR support
Base changes:
- Fix level 612 - Current Level
- Fix level 565 - Preventative Maintenance
- Fix level 550 - Preventative Maintenance (Fix Pack 2)
- Fix level 510 - Preventative Maintenance
- Fix level 430 - Preventative Maintenance
- Fix level 400 - Preventative Maintenance
- Fix level 321 - Preventative Maintenance
- Fix level 301 - Preventative Maintenance
- Fix level 270 - Preventative Maintenance
- Fix level 200 - Preventative Maintenance (WVR 4.2.3)
- Fix level 190 - Preventative Maintenance
- Fix level 181 - Preventative Maintenance (WVR 4.2.2)
- Fix level 160 - Preventative Maintenance
- Fix level 102 - Preventative Maintenance
- Fix level 65 - Preventative Maintenance
- Fix level 40 - Preventative Maintenance
Features:
Internal Defect fix
- Fixed a problem that was resetting the ownership and permissions on file /etc/rc.dirTalk everytime the user ran vae.setuser. This file should be owned by root so this fix allows you to do this without having to go back again to reset it again.
Internal Defect fix
- DTTA adapter dump files are incorrectly created with a length of 0 bytes. This has no customer implications however the dump files are used as part of the support process.
Internal Defect fix
- This PTF contains changes to the help text associated with the VOIP proxy mode parameter. These changes clarify the support and operation of the Automatic Service Lookup:DNSSRV method.
Internal Defect fix
- Fixed 3 problems in the SIP REGISTER mechanism:
- Fixed the CSEQ to increase by 1 for each new message rather than incorrectly remain at 1.
- Corrected the code to recognise an expiration length if received within the contact header rather than in an expires header.
- Added a RegisterAs option for those registrars that do not deal with hostnames, only IP addresses. By default, WVR will use its hostname contact address.
RegisterAs is used to specify an IP address or hostname to override the default value on a per-host basis in the master.ini file. See the description in $SYS_DIR/voip/master.ini.orig for more details.
Internal Defect fix
- Updates to the dtProblem script to collect information about WVR licence acceptance.
Internal Defect fixes
- Add support for full software TDM in the DTNA. The original support only allowed either normal or full trombone operation on the DTNA. The custom server CA_TDM_Connect subroutine now allows all TDM combinations to be altered on DTNA (as per DTXA and DTTA). This change allows recording of other channels, DSP (WVR) and Line (customer).
- Add socket state information in DTNA to make the socket handling more robust. This will stop situations where sockets can not be bound but are then used potentially resulting in a system crash.
Internal Defect fix
- Fixed a problem to ensure that the VXML2 termchar shadow variable is set after a <record> is terminated by a DTMF key.
Internal Defect fixes
- Usability updates to the DTcheck_bin utility.
- Report back correct error code when attempting to log into the same mailbox simultaneously.
- Fixed auto restart scripts for the DTNA adapter.
- Fixed incorrect message length reported when pausing the message during recording. The length used to just contain the last segment of the message rather than all segments.
- Fixed a potential problem with vae.setuser when changing the default WVR user from dtuser to another AIX user id. Before the fix there was a chance that any new executables shipped as a result of fixes etc would not have their file ownership changed to the new user.
Internal Defect fix
- Fixed a problem with restarting WVR devices if WVR was still runningwhen the machine was rebooted. If the dtline devices end up in the defined state then WVR doesn't recover the devices correctly. This results in script errors appearing in DTstatus.out and trunks failing to appear in WVR.
Fix level 510
APAR IZ67478
PTF U832489 U832213 U832187 U832186 U832212
- Update 4.2.0.510
APAR IZ67478
PTF U832489 U832213 U832187 U832186 U832212
1.This PTF contains the consolidatation of fixes shipped since
the last premaint level.
(APAR IZ67478)
- Update 4.2.0.499
APAR IZ60801
PTF U829118
1.This PTF level is mandatory to allow system to be migrated
from WVR 4.2.3 to WVR 6.1
The option to save to tape using direct write and to save
individual files have been removed and are unsupported
by WVR version 6.1 restoreDT.
(APAR IZ60801)
- Update 4.2.0.474
APAR IZ60109
PTF U828064
1.Modified the DTNA code to prevent an error_id 29800
from occuring if the "Inbound DTMF Method Override"
system parameter is set to "DTMF via SIP info".
(APAR IZ60109)
- Update 4.2.0.465
APAR IZ54200
PTF U827376
1.Updates to PD scripts.
(IZ54200)
2.Corrected the file permission of $SYS_DIR/ipcid.log so that
it is no longer world writeable.
(Defect 36809)
- Update 4.2.0.463
APAR IZ52978 IZ52950 IZ52854
PTF U826173 U825884
1.This fix allows WVR to receive DTNA RTP (VoIP) data sent
from the same machine, a condition which previously resulted
in a 29801 error.
(APAR IZ52978)
2.Fix a device driver crash during the setup of a remote record.
The crash most likely occurs when used with MRCP as this heavily
uses remote record. When the crash occurs the machine will lock
up during a reco attempt.
(APAR IZ52854)
3.This fix corrects a DTMF detection issue with DTNA when using the
Siemens OptiPoint 420 Advance S phone.
(APAR IZ52950)
4.Stop freeing DMA chains on the DTNA. This causes error entries to
be reported in errpt concerning xmfree and "General xmalloc debug
error (Defect 36718)
- Update 4.2.0.457
APAR IZ48300
PTF U824647 U824648
1.This fix corrects a DTNA DTMF defection problem which may
have caused the first DTMF of a call to be lost.
(APAR IZ48300)
2.This PTF also updates trace template required by VOIP_SIP ptf
Fix level 4.2.0.456)
(Defect 36655)
- Update 4.2.0.451
APAR IZ44906
PTF U824350 U824351 U824352
1.Fix a memory freeing error on the DTTA adapter support.
(APAR IZ44906)
2.This fix corrects a problem in the WVR main device driver
which could potentially cause a system crash.
(Defect 36524)
3.The contents of the dtsnmpd.my MIB definition file have
been changed to fully document the meaing of the possible
values for the vpackType variable. The possbile values are:
vpackType OBJECT-TYPE
SYNTAX INTEGER
ACCESS read-only
STATUS mandatory
DESCRIPTION
"Shows the type of each of the packs installed. This value
is not set if the pack is in the 'Defined' or 'Equipped'
state. Possible values are:
o 0 - Unassigned (pack is in 'Defined' or 'Equipped' state)
o 1 - VPACK T1 (no longer supported)
o 2 - VPACK E1 (no longer supported)
o 3 - RPACK (no longer supported)
o 4 - SPACK E1 (no longer supported)
o 5 - Wrap Plug (no longer supported)
o 6 - VPACK (no longer supported)
o 7 - SPACK T1 (no longer supported)
o 8 - XPACK E1
o 9 - XPACK T1
o 10 - XPACK OTHER (DTXA base only or DTXA with Wrap Plug)
o 11 - TPACK E1 (DTTA)
o 12 - TPACK T1 (DTTA)
o 13 - EPACK E1 (DTEA)
o 14 - EPACK T1 (DTEA)
o 15 - NPACK E1 (DTNA)
o 16 - NPACK T1 (DTNA)"
Note (Last line is double : ) = { vpackEntry 2 }
(Defect 36529)
- Update 4.2.0.449
APAR IZ44408
PTF U824348
1.This fix corrects a problem which could result in a
system crash.
(APAR IZ44408)
- Update 4.2.0.448
APAR IZ43471
PTF U824347
1.This fix corrects a deadlock situation with DTNA which
could cause a system hangup.
(APAR IZ43471)
- Update 4.2.0.446
APAR IZ40264
PTF U823638
1..This fix corrects a DTNA DTMF defection problem in WVR.
(IZ42540)
- Update 4.2.0.444
APAR IZ40264
PTF U823635 U823636
1.Change the 10 seconds auto reset/retrieval of mailbox
information to also ensure schedule information is auto
retrieved. This will stop schedule information in system
variables disappearing after 10 seconds.
(APAR IZ40264)
2.This fix corrects a problem which could cause a system
crash under unusual condititons.
(Defect 36548)
- Update 4.2.0.441
APAR IZ37083 IZ37889
PTF U822403
1.This fix corrects an intermittent DTMF detection problem
with DTNA.
(APAR IZ37083)
2.This fix corrects a tromboning deadlock situation with DTNA.
(APAR IZ37889)
- Update 4.2.0.439
APAR IZ37662
PTF U822400 U822399
1.This PTF fixes corrects a problem that has potential to
cause a system crash (but not so far seen)
(APAR IZ37662)
2.This fix corrects a problem in the WVR main device driver
which could potentially cause a system crash.
(Defect 36489)
- Update 4.2.0.433
APAR IZ35009 IZ33121 IZ34658
PTF U821505 U821506
1.This PTF fixes a potential race condition and resulting core
when an TDM abort is issued by the client, and the client
disappears before the response can be sent.
(APAR IZ35009)
2.This fix corrects a problem in DTNA with RFC2833-encoded
DTMF handling where keys were being ignored.
(APAR IZ34658)
3.This fix makes wvrtrunk more robust when interrupted by
e.g. cntrl-C
and also added additional error messages.
(APAR IZ33121)
Fix level 430
APAR IZ33638
PTF U821408 U821409 U821410 U821411 U821412 U821413
- Update 4.2.0.430
APAR IZ33638
PTF U821408 U821409 U821410 U821411 U821412 U821413
1.This PTF corrects a problem which can happen on systems
with the country in an unassigned state.
(APAR IZ33638)
- Update 4.2.0.425
APAR IZ33141
PTF U819609 U819863 U819864
1..This PTF adds India country support for Pack and
System Configuration.
(APAR IZ33141)
- Update 4.2.0.421
APAR IZ30063 IZ30264 IZ30765
PTF U819609 U819863 U819864
This PTF was withdrawn and is replaced by fix level 4.2.0.430
All fixes detailed below can be found in 4.2.0.430
1.This fix corrects an occasional problem where wvrtrunk
gets into an internal permanent error state
(until WVR is restarted) if wvrtrunk is interrupted
before completion (e.g. with cntrl-break).
(APAR IZ30063)
2.This fix allows WVR to correctly receive RTP payload
DTMF keys (RFC2833) when far end delays (does not stop)
voice RTP during DTMF sequences
(APAR IZ30765)
3.This PTF fixes a race condition between a CA_Close_CHP_Link
followed by a CA_Open_CHP_Link from a different custom server
(or different process within the same custom server).
The race condition results in a CHP becoming unusable by
the custom server and a CA_LINK_NOT_OPEN being returned.
(APAR IZ30264)
4.Fix a problem with incorrectly reported adapter loading
with the 64 bit operating system. The problem caused
occasional spikes in reported adapter loading resulting
in errors being generated in WVR.
(Defect 36418)
5.Change access authority on G711 echo cancellation span
in DTEA/DTNA System Parameter group to admin to allow
user to increase echo canc span to 32ms for G711 only.
(Defect 36417)
- Update 4.2.0.417
APAR IZ24049 IZ26968
PTF U819504
1.This fix corrects two locking problems which could
cause occasional system crashes when using DTNA.
(APAR IZ24049)
2.This fix corrects a problem with DTNA (VoIP) where voice
recorded from particular SIP phones resulted in empty voice
recordings.
(APAR IZ26968)
3.This fix corrects an extremely rare red alarm on some
systems which indicated that the DTNA interrupt handling
time had exceeded 20ms. This was due to the Network
Time Protocol daemon in AIX updating the system clock to get
it in sync with other machines in the same network.
(Defect 36380)
- Update 4.2.0.414
APAR IZ25964
PTF U819344 U819345
1.This PTF provides a fix to CHPM crashes when attempting
to report an error.
(APAR IZ24446)
- Update 4.2.0.410
APAR IZ25964
PTF U819344 U819345
This PTF was withdrawn and is replaced by fix level 4.2.0.430
All fixes detailed below can be found in 4.2.0.430
1.This PTF adds base support required by ISDN progress
Indicator Information Element ( IE ).
(APAR IZ25964)
Fix level 400
APAR IZ23274
PTF U818269 U818270 U818271 U818272 U8181294 U8181295 U818296
- Update 4.2.0.400
APAR IZ23274
PTF U818269 U818270 U818271 U818272 U8181294 U8181295 U818296
1.Premaintence level WVR correct system crash with
64bit driver System can crash at shutdown.
(APAR IZ23274)
2.Fix up a 32/64 problem between DTNA and DTDD on
power 6 machines.
(Defect 36369)
3.Tighten requsite filesets.
(36370)
- Update 4.2.0.369
APAR IZ21032
PTF U818264 U818266
1..This fix corrects a bug in the debugmon tool when DTNA
devices are not in the normal order.
Also contains the 'debugrec' voice recording
tool (simplified debugmon)
The debugrec voice recording tool considerably simplifies
what is required to take a diagnostic voice recording or
adapter trace (currently done using the debugmon tool).
Usage: debugrec <trunk> <channel> <option>
<trunk> is the WVR trunk number in the range 1 to 16 and
<channel> is the WVR channel number (both as displayed on
the WVR System Monitor).
All other parameters (e.g. adapter type, trunk type) are
taken from WVR internal configuration information.
debugrec can operate in one of 3 modes:
1) Default (no options) .. recording is continuous
(until stopped) and a stereo wav file is created where
the left channel is 'line out' i.e. what WVR is playing
to the line and the right channel is 'line in' ie.
what WVR is receiving from the line.
File name is /tmp/vrec_line_stereo.wav.
2) Option -v .. debugmon option 'v' compatibility.
Recording is continous (until stopped) to three(DTXA/DTTA)
or four (DTNA) files in /tmp/.
File names are of the form vrec*.
3) Option -a ... debugmon options '8' and '9' compatibility.
Recording is done to a 25 second circular buffer on the
adapter. Also records adapter DSP commands and status.
Creates files in /tmp/ of the form trace_rec*.
(APAR IZ21032)
2.This PTF adds support for DTTA and DTEA on a 64-bit AIX Kernel.
(Defect 36358)
3.Corrected test in diagnostic routine DT6_check_db2
(Defect 36357)
4.Updated dtProblem to collect crontab and atJob entries.
(Defect 36361)
- Update 4.2.0.365
APAR IZ19324
PTF U817181
1.Correction to DTEA/DTTA driver problem.
(APAR IZ19324)
- Update 4.2.0.364
APAR IZ17987
PTF U817178
1.Integrating late DMA write times before logging a 27404
error.
(APAR IZ19363)
2.Updated dtProblem to preserve dates on collected data.
(Defect 36348)
- Update 4.2.0.362
APAR IZ17987
PTF U817178
1.Provides recovery when a DTEA DSP VOIP ports occassionally
goes out of service.
(APAR IZ17987)
- Update 4.2.0.360
APAR IZ17685
PTF U817031 U817037 U817038
1.This PTF adds support for DTNA on a 64-bit AIX Kernel.
NOTE: Only DTNA is supported in 64-bit operation.
(APAR IZ17685)
- Update 4.2.0.351
APAR IZ15552
PTF U816334
1.This PTF Modifies the socket handling code in TSLOT and the
MRCP CS to prevent errno 72 (ECONNABORTED) from
terminating the processes.
This PTF contains the TSLOT changes. If you have
dirTalk.SpeechClient installed on the system, it must be at
level 4.2.0.350 which contains the MRCP CS changes.
(IZ15445)
- Update 4.2.0.349
APAR IZ15369
PTF U816102 U816103
1.This fix allows WVR to recover from DTEA port 'Out of Service'
conditions which may result from a message timing condition
at call hangup.
NOTE This fix also requires dirTalk.VOIP_SIP to be at fix
level 4.2.0.348
(APAR IZ15369)
2.Report reason for Custom Server build failure if the total
number of arguments exceeds the allowed limit.
(Defect 36297)
- Update 4.2.0.343
APAR IZ12489
PTF U815931 U815932 U815933 U815937
1.This PTF adds additional utility functions to Batch Voice
Import (BVI) custom server to easily import and export audio
data between the WVR voice segment database and 'wav' files.
New utilities are called bvi_wav_exp and bvi_wav_imp
(use utility without any parameters for help information).
Refer to the 'The Batch Voice Import process' in the 'WVR
for AIX V4.2 - Application Development using State Tables'
manual for more details.
Link to the WVR for AIX, V4.2 library here.
(APAR IZ12489)
2.Adds base functions required to support SIP enhancements
(See fix level 4.2.0.344 and 4.2.0.345)
(Feature 36151)
3.Prevent Voice Table "Save As" from generating error if source
is from the Default Application.
(Defect 36266)
4.Update diagnostic utility DTmon to show language for Voice
Map Table.
(Defect 36265)
- Update 4.2.0.341
APAR IZ11250 IZ08450
PTF U815635
1.Correct PlayPrompt operation with Large Voice Tables which
could lead to error 800 (Voice Table not Found).
(APAR IZ11250)
2.The configuration parameter Number of Non Swap State Tables
in the Application Server Interface can now be used to
increase the amount of shared memory allocated to prompt
directories and voice tables. Prior to this fix, customers
with a very large number of prompt directories or voice
tables, or those running multiple languages on a system
may have encountered problems accessing prompt or
voice tables.
(APAR IZ08450)
3.Update diagnostic utility DTmon to provide more detailed
information on prompts and Voice Tables.
(Defect 36263)
- Update 4.2.0.338
APAR IZ09678 IZ10879
PTF U815452 U815453 U815457
1.Corrected a problem in vae.setuser when used on a WVR server
with the dirTalk.VRBE filesets installed.
(APAR IZ09678)
2.Corrected the TTL and TOS settings for DTNA RTP packets.
(APAR IZ010879)
3.In AIX 5.3 and later, the compiler flag _MSGQSUPPORT must be
defined in order to use message queues. As message queues are
a common thing to use in with custom servers, this fix defines
the above flag permanently on so that custom servers will function
as before.
(Defect 36261)
- Update 4.2.0.335
APAR IZ09662
PTF U815181 U815182
1.Allow the LogEvent action to write logs which are
greater than 2GB on Large File Enabled file systems.
(APAR IZ09662)
Fix level 321
APAR IZ09215 IZ06395
PTF U814700 U814701 U814702 U814703 U814704 U814705 U814706 U814707 U814708
- Update 4.2.0.321
APAR IZ09215 IZ06395
PTF U814700 U814701 U814702 U814703 U814704 U814705 U814706 U814707 U814708
1.This fix corrects a problem shipped in level 4.2.0.301 of
devices.dirTalk.artic960.rte where silence was being received
from VoIP devices when using DTNA.
(APAR IZ09215)
2.This PTF fixes a problem where CA_Record_Voice could return
a dBm level of zero on the first usage of a channel.
(APAR IZ06395)
- Update 4.2.0.311
APAR IZ06113 IZ06535 IZ06585
PTF U814346
1.This fix corrects the dBm level calculation for
imported voice segments (including the playing of wav
files using vxml where multiple files played sequentially
could be played at different levels prior to this fix).
(APAR IZ06113)
2.Stop potential application import problem involving
removing incorrect files.
(APAR IZ06585)
3.This fix corrects the handling of data and stack limits by
startDT, invoked on re-boot by vaeinit.pre, for limits of
4GB and over. Prior to this fix large limits may have
been misinterpreted or ignored due to integer overlows.
It is advisable to check that the limits specified for
dtuser in /etc/security/limits are correct as detailed in AIX
files reference.
(APAR IZ06535)
- Update 4.2.0.310
APAR IZ06590
PTF U814344 U814345
1.Fix IBM Trombone custom server to stop the 1000 and 1001 errors
Note: Importing IBM_Trombone.imp will revert the IBMTrombone
state Tables to their original forms. If any custom
changes have been made these changes will need to
be reapplied.
(APAR IZ06590)
Fix level 301
APAR IZ05697
PTF U813549 U813755 U813756 U813887 U813888
- Update 4.2.0.301
APAR IZ05697
PTF U813549 U813755 U813756 U813887 U813888
1.Fixes problem with DTNA sending outbound DTMF digits
immediately after receiving inbound digits.
(APAR IZ05697)
- Update 4.2.0.298
APAR IZ05312
PTF U813381
1.Prevent error 20503 with Internal error
error_id = TROMBONE013 being generated if the Incoming
Caller hangs up during the connection of the Outbound Call.
Note: Importing IBM_Trombone.imp will revert the IBMTrombone
state Tables to their original forms. If any custom
changes have been made these changes will need to
be reapplied.
(APAR IZ05312)
- Update 4.2.0.296
APAR IZ04782 IZ05055
PTF U813379 U813380
1.Allow Audio Adapter FC8244 to operate with Ultimedia
Voice Control and Batch Voice Import (BVI).
(APAR IZ04782)
2.Prevent error 1000 being generated if the Incoming Caller
hangs up before the Outbound Call is answered.
Note: Importing IBM_Trombone.imp will revert the
IBMTrombone state Tables to their original forms.
If any custom changes have been made these changes
will need to be reapplied.
(APAR IZ05055)
3.Updated diagnostics tests to handle full hostname when
checking db2 settings.
(Defect 36182)
- Update 4.2.0.295
APAR IY99278 IZ04045
PTF U813377 U813378
1.Fixed a problem concerning respawned CHP's deleteing voice
messages before they can be sent.
(APAR IY99278)
2.This fix resolves a problem where a long hostname in
the file /home/dtdb23in/sqllib/db2nodes.cfg prevented
the WVR system from starting.
(APAR IZ04045)
3.Added call to db2level to diagnostic routines.
(Defect 36168)
4.Updated help text for DTMF variant 1.
(Defect 36141)
5.Corrected path for lsdev in diagnostic routines.
(Defect 36158)
- Update 4.2.0.294
APAR IZ03459
PTF U812855 U812856 U812857
1.Correct occasional loss of voice when tromboning
between DTTA and DTEA.
(APAR IZ03459)
2.This defect corrects the operation of DTMF detection
'Algorithm Variant 1'
(Defect 36165)
- Update 4.2.0.292
APAR IZ01903
PTF U812852
1.This PTF stops CHP's ending up in an infinite loop during
multiple WaitEvent actions after a call has dropped.
(APAR IZ01903)
- Update 4.2.0.285
APAR IY99497
PTF U811971
1.This PTF updates help information concerning enhanced VOIP
functions.
(APAR IY99497)
- Update 4.2.0.283
APAR IY97868
PTF U811969
1.This fix allows an SS7 E1 WVR client to be
configured with a trunk interface E1 Framing Mode
of E1 multiframe (CRC4) instead of the default of
double frame.
Previously a red alarm error 27010 (Pack enablement
failed) would occur when the trunk was enabled.
Leaving a double frame setting when the attached switch
expects multiframe could lead to red alarm RAI errors
17007 (E1 remote alarm indicator qualified alarm)
at trunk startup.
(APAR IY97868)
- Update 4.2.0.279
APAR IY98269
PTF U811644 U811645
1.This fix modifies the $LANG environment variable if the
base AIX system language is set to ja_JP and sets
it to en_US
(APAR IY98269)
2.Correct ISDN Layer 4 trace entries.
(Defect 36123)
3.Updated help text for latest trunk types.
(Defect 33983)
- Update 4.2.0.275
APAR IY97474
PTF U811639 U811640
1.This fix corrects clocking operation for mixed DTTA/DTEA systems
a) DTTA(s) are are always used as source of the main system clock
(on H.100 bus) with DTEAs always being slaves on the bus,
b) The DTEA echo canceller is periodically reset every 10 seconds to
allow it to continue to operate even if slip is occurring in the
TDM network.
NOTE If fileset dirTalk.VOIP_SIP is installed on the system, it will
need updating to fix level 4.2.0.276.
(APAR IY97474)
Fix level 270
APAR IY96222
PTF U811271 U811339 U811340 U811341 U811342 U811343
- Update 4.2.0.270
APAR IY96222
PTF U811271 U811339 U811340 U811341 U811342 U811343
1.dtProblem now collects information on 3270 sessions
configured.
(APAR IY96222)
2.dtSummary will now report machine processor speeds, and
check for the presence of UM.
(Defect 36057)
3.Correct trace formatting for ISDN.
(Defect 36070)
4.Add extra trace entries for error conditions.
(Defect 36073)
- Update 4.2.0.254
APAR IY94296
PTF U810945
1.This PTF changes the handling of VoIP DTMF packets
(RFC2833) to improve the reliability of detection when
used with Cisco equipment.
(APAR IY94296)
- Update 4.2.0.251
APAR IY93740
PTF U810929
1.This change will recover pool buffers in the case of
a suspended X-Server or X-Server death.
(APAR IY93740)
- Update 4.2.0.250
APAR IY93551
PTF U810928
1.Fixed a problem in MWISERVER that caused spurious
error_id 5103 to be reported.
(APAR IY93551)
2.Corrected import checking phase in utility
DT6_check_files.
Prior to this fix, DT6_check_files always
reported not imported.
(Defect 36056)
- Update 4.2.0.243
APAR IY92925
PTF U810595
1.This adds function to SDIEXEC required by SS7 feature.
(APAR IY92925)
- Update 4.2.0.241
APAR IY92374
PTF U810591 U810592
1.Add new column to "DTmon -l" output to show fixed in
memory State Tables.
(APAR IY92374)
2.Prevent "DTmon -l" core dumping in case where State Table
is partially loaded.
(Defect 36048)
3.Updated diagnostic code.
(Defect 36051)
4.Minor correction to ISDN trace formatting to show
the L3 messages correctly.
(Defect 36052)
5. This update will change the way that packetized DTMF
keys (RFC2833 encoded) are handled by WVR to improve robustness
and compatibility with different VoIP phones.
(Defect 36053)
- Update 4.2.0.238
APAR IY91591
PTF U810397 U810588
1.This feature adds a special case to the GetPassword state table
action such that if it is called with a timeout parameter of 0 and
a timeouts allowed parameter of 20 then the caller will not be prompted
for a password, but the mailbox will be locked.
(APAR IY91591)
2.Corrects missing text in System Configuration parameters
(SIP Signalling -> Use Request Header) and DTNA Media->Overload.
See online help text on this parameters for more details.
Note that for the SIP parameter, as well as selecting the header
to be used for the Called Number, it also controls whether the
Request Header is extracted to the Tagged String or not.
(Defect 35946)
- Update 4.2.0.236
APAR IY91328
PTF U810394
1.Fixed UPSERVER (and VAGSERVER) so that they deliver
acknowledgements to listened messages to the correct
sending profile, rather than a random profile.
(APAR IY91328)
2.Updated diagnostic routines
(Defect 36035)
- Update 4.2.0.233
APAR IY90681
PTF U809890
1.The fix corrects a problem with DTEA adapter (VoIP/SIP only) when
the first channel on a DTEA can get locked in a 'dead air' (i.e.
not sending or receiving VoIP packets) state due to ICMP
'Destination Unreachable' messages flooding a queue.
(APAR IY90681)
- Update 4.2.0.231
APAR IY90466 IY90110 IY89379
PTF U809682 U809791
1.Fixes a CHP preformance issue which would be especially apparant
when using a fair number (>5) nested state tables with a largeish
(>50) number of variables.
(But it should improve performance of all state table applications)
(APAR IY90466)
2.Changed library routine Notify_appl to prevent core dump when passed
a negative value. This could cause PROMPTM and other components to
core dump.
(APAR IY90110)
3.Updated firmware for DTEA cards.
This corrects a fault which could cause
'LOSS OF DTMF FOR INBOUND VOIP CALLS'
(APAR IY89379)
4.Added a check to DTst so that specifying a State Table name greater
than 15 characters will cause it to exit with rc=1.
(Defect 36024)
5.Updated diagnostic aids
(Defect 36010)
- Update 4.2.0.229
APAR IY88950
PTF U809562
1.Fixes problem where the same file imported twice (using dtjplex)
was measured at a different db level each time.
(APAR IY88950)
- Update 4.2.0.228
APAR IY89482
PTF U809402 U809403
1.This fix provides SS7 loopback support for T1.
(APAR IY89482)
2.A utility has been added called DT6_check_all.
This utility runs underlying utilities which check db2, devices,
files and space availability and produce a report.
This utility has been added to dtProblem.
NOTE: these utilities must be run as user root and are
diagnostic aids.
(Defect 35999)
- Update 4.2.0.225
APAR IY87920
PTF U809396 U809397
1.Increase the tolerance on Cadence Hangup tone detection.
(APAR IY87920)
- Update 4.2.0.223
APAR IY87497
PTF U808885
1.Increase the size of additional_call_info1 to allow
more information to be passed to the outbound trombone leg.
Note, importing IBM_Trombone.imp will revert the
IBMTrombone state tables back to their original forms.
If any custom changes have been made these changes
will need to be reapplied.
(APAR IY87497)
- Update 4.2.0.222
APAR IY87242 IY87413
PTF U808883 U808884
1.Prevent ISDN error 29212 being generated if a Java
Application is called from a State Table afer a MakeCall.
(APAR IY87242)
2.Fix a problem with TDM connection requests where the
connection id is 0. This primarily affects faxes.
(APAR IY87413)
3.This problem corrects a timing problem on the DTTA and DTEA
adapters which could cause failures on a small number
of adapters.
(Defect 35980)
4.Improvement to dtProblem to detect DTNA adapter settings.
(Defect 35978)
5.Correct trace formatting for BufPool entries.
(Defect 35976)
- Update 4.2.0.219
APAR IY85040
PTF U808520
1.Allow the Signalling channel to display on the System
Monitor when less than 24 channels are configured on
an T1 ISDN trunk.
(APAR IY85040)
- Update 4.2.0.216
APAR IY86011 IY86049 IY85883
PTF U808515 U808514
1.Add extra boundary checking when sending strings from
state tables to custom servers.
(APAR IY86011)
2.Correct a very rare problem with AC and the System Monitor
which can incorrectly report zero Pool Buffers resulting in a
negative number for Pool Buffers in use on the AC
Operations Menu.
(APAR IY86049)
3.A fix has been applied that will cause the failure to
locate .vaeprofile when checking db2_support to notifiy
user of the error.
(APAR IY85883)
- Update 4.2.0.211
APAR IY84166 IY83966 IY84366
PTF U807571 U807570
1.Correct debugmon option "v" (voice record) for DTTA adapter
to prevent buffer overrun errors.
(APAR IY84166)
2.Stop EDGE_ABORT and technical difficulty message being played
during a WaitEvent that has the event cleared as its being
read. This very small timing window can be found if the
WaitEvent detects something at exactly the same time an
external custom server clears the event.
(APAR IY84366)
3.Correct dtProblem script to allow non-numeric PMR numbers
to be entered.
(APAR IY83966)
4.Remove a timing window in trying to connect channels on the
TDM as calls are going away. Before this fix there is a chance
of ending up with one-way audio on the next call.
(Defect 35942)
5.Add pending TDM connection states to TSLOT to handle timing
window whilst hanging up a call as its being tromboned.
(Defect 35938)
6.VOIP_SIP fileset must be at fix level 4.2.0.212 as well
(Defect 35922)
- Update 4.2.0.209
APAR IY83694
PTF U807395
1.This fix corrects a problem where VPD (Vital Product Data)
information such as serial number could be stale
i.e. it could refer to a previous adapter.
(APAR IY83694)
- Update 4.2.0.207
APAR IY82580 IY82257 IY81776
PTF U807392 U807393
1.Improve the general robustness of TSLOT
(cores during error handling).
Stop TSLOT starting connection servers that are not
required due to hardware. Stop TSLOT looping and
filling up the trace.
(APAR IY82580 APAR IY82257 APAR IY81776)
2.The IBM_Trombone custom server has been altered to allow
the outbound call to be aborted before the actual MakeCall
is made. To implement this the IBMTromboneOut state table has
been altered. A WaitEvent has been added just before the
MakeCall to test for EDGE_HUP.
If you wish to allow the outbound call to be aborted before the
MakeCall and have altered (or copied) the IBMTromboneOut state
table then the same change needs to be applied to the altered
(or copied) version. If the state table is left alone
everything will function exactly as before.
Please note, importing the IBM_Trombone import file will
overwrite the IBMTrombone state tables.
(Defect 35920)
Fix level 200
APAR IY81504
PTF U806911 U806921 U806922 U806924 U806925 U806929 U806930
- Update 4.2.0.200
APAR IY81504
PTF U806911 U806921 U806922 U806924 U806925 U806929 U806930
1.Add support for Virtual Adapter ( adapterless DTNA) solution.
(APAR IY81504)
- Update 4.2.0.195
APAR IY81360
PTF U806897
1.Voice Message attachments would not be deleted correctly
when using the System Configuration General parameter
"Set Real Time Migrate Voice Files - ON latest formats".
Note: You can check and correct sny undeleted message
attachments by using the "vm_integrity -v -f" command .
(APAR IY81360)
2.Changed DBCLNUP to remove empty Voice Message Attachment
directories so that empty Voice Message directories will
be deleted.
(Defect 35893)
- Update 4.2.0.192
APAR IY81135
PTF U806885
1.Prevent a very rare occurence of VAGSERVER closing down
due to a corrupted message queue.
(APAR IY81135)
2.Corrected a problem on AC which can cause it to core dump
during password entry if backspace is used.
(Defect 35844)
Fix level 190
APAR IY80213
PTF U806532
- Update 4.2.0.190
APAR IY80213
PTF U806532
1.This PTF corrects a problem seen only on newly installed
systems after fix level 4.2.0.181. The problem was
reported as RC=11.
(APAR IY80213)
- Update 4.2.0.184
APAR IY79984 IY79931
PTF U806377 U806378
1.Correct a problem using Import with Preview which would
sometimes fail with error SQL0117N on Subscriber Classes.
(APAR IY79984)
2.Correct a theoretical timing problem in the DTTA
adapter microcode which could cause cause the
adapter to crash.
(APAR IY79931)
3.The interval of polling for the System Paramters Database
Availablity Check Timeout and File Availablity Check Timeout
is now one quarter of the timeout as descibed in
Configuring the System.
(Defect 35836)
4.Corrected message generated when running DTsnmpd.cfg
(Defect 35830)
5.This fix provides more comprehensive error message reporting
for restoreDT utility
(Defect 35750)
6.Improvements to error reporting in .vaeprofile
(Defect 35749)
Fix level 181
APAR IY78830
PTF U806179 U806184 U806186 U806188 U806189 U806194 U806290
- Update 4.2.0.181
APAR IY78830
PTF U806179 U806184 U806186 U806188 U806189 U806194 U806290
1.This PTF extends support to the following:
Support for AIX 5.3 ML02
Support for LPAR with some limitations. See the Book "General
Information and Planning" GC34-6380-05 for more details.
The following pSeries servers are now supported by WVR 4.2.2
with the DTXA, DTTA and DTEA adapters or in any attached D20
I/O drawer.
eServer p5 520 (9111-520)
eServer p5 550 (9113-550)
Note: The following adapters are supported in these systems
or in any attached D20 I/O drawer, but not if LPAR is
being used.
The DTXA adapter
The SS8 (vendor) SS7 adapter
The Brooktrout (vendor) TR1034 FAX adapter
Note: The older Brooktrout TR114 FAX adapter is NOT supported
in these machines.
The following pSeries server is now supported by WVR 4.2.2
with the DTTA and DTEA adapters or in any attached D20 I/O
drawer.
eServer p5 570 (9115-570)
Note: The following adapters are supported in this system
or in any attached D20 I/O drawer, but not if LPAR is
being used.
The SS8 (vendor) SS7 adapter
The Brooktrout (vendor) TR1034 FAX adapter
Note: The older Brooktrout TR114 FAX adapter is NOT supported
in these machines.
Support for ISA (IBM Support Assistant)
(APAR IY78830)
2.dtProblem has been enhanced to prompt the user to enter the
PMR number if known.
If provided the PMR number will be appended to the start of
the output filename ready for transmission to IBM.
(Defect 35782)
- Update 4.2.0.169
APAR IY78592
PTF U805959
1.Corrects a problem handling adapter EEH errors in which
an error hitting one adapter might affect others.
(APAR IY78592)
- Update 4.2.0.168
APAR IY78530
PTF U805955
1.Correct a rare buffer leak on DTXA and DTTA adapters
when using CCS signalling processes.
This would eventually lead to an adapter crash.
(APAR IY78539)
- Update 4.2.0.166
APAR IY78257
PTF U805948
1.This fix corrects a problem where the execution of
vae.setuser caused the MRCP Custom Server to fail to
start and give no failure indication. Now vae.setuser will
not affect the MRCP Custom Server.
(APAR IY78257)
- Update 4.2.0.164
APAR IY78074
PTF U805946
1.If an adapter crashes when calls are in progress and
the State Table application has non-telephony actions
such as DoNothing it is possible that recovery may fail.
This fix will retry for up to 60 seconds while waiting
for the State Table application to complete.
(APAR IY78074)
- Update 4.2.0.163
APAR IY77767
PTF U805772 U805928 U805929
1.Add FXS Loop Start and FXS Ground Start to default
switch type in Pack Configuration for Japan.
(APAR IY77767)
2.Allows WVR to recover if a DTTA adapter error
(e.g. PCI bus parity error or timeout) occurs on an
adapter other than rpqio0 when operating in an LPAR
environment
(Defect 35800)
3.Change startup scripts to prevent an SSI DB Server without
adapters reporting adapter errors.
(Defect 35790)
Fix level 160
APAR IY77163 IY77220
PTF U805632 U805633 U805634 U805635 U805641 U805706 U805707
- Update 4.2.0.160
APAR IY77163 IY77220
PTF U805632 U805633 U805634 U805635 U805641 U805706 U805707
1.Corrected spelling in Color.res file to prevent errors.
(APAR IY77163)
2.Removed single quote characters from comments in all
$DB/resources/.res files to prevent errors on WVR startup.
Note: If any of these files have been changed then please
backup the changes before installing this PTF and then insert
the changes back into the .res files again afterwards.
(APAR IY77220)
3.Change user help text for error 30510 and also startup
error message to aid user problem determination.
(Defect 35751)
- Update 4.2.0.150
APAR IY75990
PTF U804885 U804884
1.This PTF adds to WVR the ability to record and playback
voice messages in uncompressed form (prior to this change,
all messages were compressed using the built-in 5:1
algorithm).
This feature is controlled via a new System Parameter
in the ASI (Application Server Interface) group known
as 'Voice Message Compression Type'.
Note that WVR must be restarted for
'Voice Message Compression Type' to take effect.
Setting this variable to 'Uncompressed' means that the
Voice Messaging state table actions (Record and Play Voice
Message) will not compress (or decompress) voice messages
between the telephone line and the database. It also means
that the voice messaging custom server actions must specify
uncompressed voice as the data type when the system is set
to use uncompressed messages. The current setting is
available to custom servers in the new global variable
CA_MSG_COMP which will be set to UNCOMPRESSED_VOICE
or COMPRESSED_VOICE depending upon the setting of
'Voice Message Compression Type'.
NOTE that a WVR System (either standalone, Single System
Image (SSI) or (SSI) with UM Inter Node Messaging)
CAN NOT mix compressed and uncompressed messages.
The new system parameter must only be changed when the
voice database is empty if not using IBM Unified Messaging.
IBM Unified Messaging will ship with a utility to migrate
between compressed and uncompressed messages which will
allow a change to an existing IBM Unified Messaging
system without requiring the entire voice message
database to be deleted.
(APAR IY75990)
2.Change user help text for error 17984.
(Defect 35733)
3.Change user help text for error 26003.
(Defect 35732)
- Update 4.2.0.145
APAR IY75802
PTF U804883
1.This fix prevents VAGSERVER core dumping if a
LANGUAGE is corrupt
(APAR IY75802)
2.This fix prevents VAGSERVER or STPDSERVER core dumping
if the database is corrupted with a blank name.
(Defect 35727)
- Update 4.2.0.144
APAR IY75681
PTF U804881
1.Fix the VOX_CTI custom server to solve occasional
timeout problems.
Installing this custom server will replace the
VOX_CTI.ini file in
/usr/lpp/dirTalk/db/current_dir/ca/VOX_CTI_dir
This must be backed up, or recreated after installing
the custom server
(APAR IY75681)
- Update 4.2.0.141
APAR IY74955
PTF U804576
1.Fix the make call response when the inbound call hangs
up whilst an outbound trombone call is being made.
This fix stops the white 1001 error from IBM_Trombone
being generated.
(APAR IY74955)
- Update 4.2.0.139
APAR IY74564 IY74501
PTF U804310 U804311
1.Corrects a problem tromboning between channels
(APAR IY74564)
2.This PTF chamges four system variables to read/write.
The System variables are SV186, SV187, SV188 and SV189
(APAR IY74501)
- Update 4.2.0.137
APAR IY74036
PTF U804303
1.Improve handling of an incorrect/malformed port set in
a CA_TDM_Connect API call.
(APAR IY74036)
- Update 4.2.0.134
APAR IY73949
PTF U804243
1.Increased the maximum number of Custon Servers that
the ASCII Console (AC) can display from 80 to 150.
(APAR IY73949)
- Update 4.2.0.127
APAR IY68504 IY73016
PTF U8039968504)
2.This change checks that the DB/2 file db2nodes.cfg has the
correct hostname configured.
(APAR IY73016)
- Update 4.2.0.123
APAR IY73539
PTF U803963
1.This fix corrects a problem in the DTTA adapter microcode
which resulted in either
1) occasional dead air problems (one way audio) when doing
a trombone between channels on two DTTA adapters.
2) Brroktrout fax problems where it appears that the fax
card is not receiving audio from the line
(APAR IY73539)
- Update 4.2.0.121
APAR IY72979
PTF U803957
1.This fixes a problem where local variables may not be
reset to zero on repeated calls of the same state table.
(APAR IY72979)
- Update 4.2.0.117
APAR IY71878 IY72670
PTF U803948 U803949 U803950 U803952
1.Corrected the cleanup of the db2start log files to prevent
spurious errors.
Before applying the fix you must change the permissions on the
directory to allow uneccessary files to be deleted.
su root
cd /home/dtdb23in/sqllib
chmod g-t log
chmod a-t log
(APAR IY71878)
2.Correction to error that prevents multiple mailbox messages
being deleted though GUI.
(APAR IY72670)
3.This ptf fixes a problem encountered because pSeries firmware
added additional checking.
(Defect 35583)
4.Minor changes to DTTA device driver to fix system test issues
which will not affect WVR normal operation.
(Defect 35464)
Fix level 102
APAR IY71660
PTF U803641 U803642 U803643 U803644 U803645
- Update 4.2.0.102
APAR IY71660
PTF U803641 U803642 U803643 U803644 U803645
1.This PTF supplies enhancements required by the SpeechClient
fileset which supports interfaces to IBM WVS 5.xx systems.
(APAY IY71660)
2.This fix adds a new level of detail to the tracing provided.
Included are more tracepoints, and now the Java layer CallID
is given in the trace output.
(Defect 35546)
- Update 4.2.0.85
APAR IY71431
PTF U803586
1.This PTF increases the number of supported notification
schedules from 5 to 10.
(APAR IY71431)
2.This defect fixes a problem where data from a single function
was incorrectly sent to the trace file for formatting, resulting
in the trace for this function showing an incorrect id value.
(Defect 35655)
- Update 4.2.0.84
APAR IY70771 IY71271 IY71337
PTF U803585
1.This fix prevents large numbers of db2start error log files
collecting in the /home/dtdb23in/sgllib/log directory.
Before starting the system after applying the fix you must change
the permissions on the directory to allow uneccessary files to
be deleted.
Please perform the following commands
su root
cd /home/dtdb23in/sqllib
rm -r log
mkdir log
chown dtdb23in:staff log
chmod u+rwx
chmod g+rws
chmod a+rx
NOTE You MUST PERFORM the above statements as root
(APAR IY70771)
2.Corrected a problem when excluding custom servers from an
application import.
(APAR IY71337)
3.This fix stops VAGSERVER from coring if it receives corrupt content
when trying to open a state table in the GUI.
(APAR IY71271)
- Update 4.2.0.82
APAR IY70756 IY70758
PTF U803321 U803322 U803323
1.Correct the situation where sometimes an adapter problem would
not reliably recycle the adapter.
Note: An adapter problem which previously reported four 17301
errors with "PACK INT,CNF ER" will now report a single 17302
error.
(APAR IY70576)
2.Prevent su authentication errors from OAM occuring when
starting WVR.
(APAR IY70578)
3.This corrects a problem where an 'interrupts have drifted' yellow
warning message 27078 was generated when the adapter performed an
internal resynchonization when the built-in self-test diagnostics
detected that a time-slot slippage had occurred. The changed code
will still perform the rescynchronization operation but in such a
way that interrupt timing is unaffected.
(Defect 35632)
- Update 4.2.0.80
APAR IY69881
PTF U803312
1.Enhancements to support QSIG feature as shipped in fix levels
4.2.0.78 and 4.2.0.79
NOTE slsigpr.h has been shipped in this PTF.
If this file has been used in any privately generated
code, the objects created should be re-compiled using the
latest version shipped in this PTF.
(APAR IY69881)
- Update 4.2.0.77
APAR IY69692
PTF U802840
1.This change limits the notification schedule ids to function in
the range of 0 to 4.
A previous increase in the range enabled by PTF 4.2.0.71 was
found to be unsatisfactory and may cause buffer corruptions.
(APAR IY69692)
- Update 4.2.0.76
APAR IY69689
PTF U802839
1.This ptf fixes a problem encountered because pSeries firmware
added additional checking
(APAR IY69689)
- Update 4.2.0.72
APAR IY68286 IY68860
PTF U802511
1.Show Application Voice Segment list in numerical
sequence (GUI).
(APAR IY68286)
2.This corrects a problem where SDIEXEC could core dump
during trunk enable after restarting a signalling process.
(APAR IY68860)
- Update 4.2.0.71
APAR IY68588
PTF U802489
IT IS STRONGLY RECOMMENDED THAT THIS PTF BE APPLIED ASAP
1.WVR has been modified to allow notification schedule ids in
the range 0-9 inclusive, up from the previous range of 0-4.
(APAR IY68588)
2.Fix a potential loop when hanging up in the middle of a
multiple TSLOT connections
(Defect 35578)
- Update 4.2.0.70
APAR IY68242 IY68377
PTF U802269
1.This defect fixes an error where a voice segment created by saving
an existing segment with a new ID from the GUI was saved in the
database with the wrong compression type.
(APAR IY68242)
2.This corrects a problem where a signalling process sending a
SL_CALL_RECONNECT_CNF with a parameter of SL_REPLY_CALLER_HUNG_UP
would cause the signalling process to become unregisterd.
(APAR IY68377)
3.This update modifies the database constraints on MAILBOX_NFY
to allow up to 10 notification schedules to be configured for
an individual subscriber. The previous limit was 4 notification
schedules
(Defect 35560)
- Update 4.2.0.69
APAR IY67801
PTF U802267
1.Correct a problem to ensure that TSLOT resets all connections
on a given call when hang-up occurs.
(APAR IY67801)
2.Correct one of the TSLOT exceptions to stop a core dump occuring
(Defect 35545)
- Update 4.2.0.68
APAR IY67558 IY67320 IY67557
PTF U802159 U802264
1.Sometimes using wvrtrunk to enable all trunks can result in
"dead aid" on the the channels. This has been corrected.
(APAR IY67558)
2.vm_integrity: scan for duplicate database entries removed for
single mailbox entry checks. ( option -e ).
(APAR IY67320)
3.During a short DBHEALTH outage, if the system parameter
"System Response during Server Outage" is set to
"Busy-out all telephony channels", then sometimes on a system with a
large number of trunks some channels will not automatically
re-enable. This corrects the problem.
(APAR IY67557)
Fix level 65
APAR IY66084 IY67175
PTF U802149 U802150 U802151 U802152
- Update 4.2.0.65
APAR IY66084 IY67175
PTF U802149 U802150 U802151 U802152
1.Script to perform basic db2 runstats against the WVR tables
and indices.
(APAR IY66084)
2.Correct message count decrements to prevent inappropriate
5200 alarms.
(APAR IY67175)
3.Prevent erroneous error message concerning boston.cfg
from appearing.
(Defect 35519)
- Update 4.2.0.60
APAR IY65586
PTF U801250
1.Error message definitions have been updated and corrected
where required.
(APAR IY65586)
- Update 4.2.0.58
APAR IY65256
PTF U801247
1.Fix time slot code to handle 0 sinks (unidirectional TDM
connections)
(APAR IY65256)
2.This fix ensures that members can be deleted from distribution
lists with IDs greater than 32767.
(Defect 35479)
3.This fix allows distribution list IDs up to 65532 to be used
with state tables.
(Defect 35477)
- Update 4.2.0.57
APAR IY65253
PTF U801076
1.DB2 licence daemon (db2licd) stopped on DT_shutdown.
(APAR IY65253)
- Update 4.2.0.51
APAR IY64563
PTF U800854
1.This allows a semi-colon ';' to be added to Referal Extension
when using UpdateProfile.
(APAR IY64563)
2.Adds "ls -lRL" listing of DTJ_HOME to collected diagnostics.
(Defect 35459)
- Update 4.2.0.50
APAR IY64223
PTF U800779
1.This PTF adds support for new devices required by Fax sub system.
(APAR IY64223)
Base sub-system support
- Update 4.2.0.49
APAR IY64184
PTF U800778
1.This PTF adds support for new devices required by Fax sub system.
(APAR IY64184)
Device driver support
- Update 4.2.0.45
APAR IY63949
PTF U800677
1.This PTF updates message files required for multiple ISDN.
(APAR IY63949)
- Update 4.2.0.42
APAR IY63810
PTF U800670 U800671
1.Fixes problem causing occasional voice distortion when adapter
does an internal TDM resync.
(APAR IY63810)
2.When testing fax it was found the TDM user connected slots
were not being reset to system connected slots when packs
were disabled.
The failure would cause outbound CAS calls to fail because
DTMF keys would not be transmitted.
This has now been corrected.
(Defect 35410)
Fix level 40
APAR IY62888
PTF U800501 U800500 U800459 U800458 U800460 U800499
- Update 4.2.0.40
APAR IY62888
PTF U800501 U800500 U800459 U800458 U800460 U800499
1.A code change has been made to prevent spurious error_id 5200
from occurring.
(APAR IY62888)
2.FIx to allow echo cancellation work on inbound java calls.
(APAR IY62894)
3.Fixed incorrect setting of SV573 for SaveVoiceMessage
(APAR IY62883)
4.This fix stops DTmon raising 25032 alarms when it is running at very close
intervals.
(APAR IY62884)
5.Fixes Java assert when processing recognition response in VoiceXML2
when DTMF input is not expected and caller presses DTMF key immediately
after saying a recognised phrase.
(APAR IY62918)
6.Fixed message creation with sent time of 0
(APAR IY62821)
7.Fixes a problem with the quality of the G.723.1 VoIP codec on
the DTEA adapter
(Defect 34875)
- Update 4.2.0.30
APAR IY62658
PTF U800454
1.Fix to imrpove extended error handling (EEH) recovery
(APAR IY62658)
- Update 4.2.0.29
APAR IY62619
PTF U800453
1.This PTF fixes a problem which could cause a DTXA adapter to
freeze under extremely unusual circumstances.
(APAR IY62619)
- Update 4.2.0.28
APAR IY61730
PTF U800261
1.This PTF fix will make the DTXA and DTTA adapter code more
tolerant to short system glitches caused by unexpected device
activity. The error log entry produced as a result of such errors
will now be logged as a yellow warning '17971', rather than a
red error '17302'.
(APAR IY61730)
- Update 4.2.0.27
APAR IY61270
PTF U800259
1.This fix stops wvrtrunk raising 25032 alarms when it is
running at very close intervals to monitor the system.
(APAR IY61270)
- Update 4.2.0.24
APAR IY61485
PTF U800143
1.This fixes a problem where the WVR Java API and CCXML api's could not
pass SIP URI's on an out bound make call to the SIP stack.
(APAR IY61485)
2.This PTF fixes the path to mount_retry in HAstartDT
(Defect 35203)
- Update 4.2.0.22
APAR IY60950
PTF U800027
1.This PTF fixes an error burst generated by the SIP MEDIA
control process when re-cycling trunks.
(APAR IY60950)
- Update 4.2.0.15
APAR IY59123
PTF U499535
1.If 3270 is not installed and the Pack or System configuration
is open in Change mode then it is not possible to start WVR
from another telnet session.
The following error displayed
'Error: failed to run "RDSET3270 disabled" - RC=1'
This has now been corrected.
(APAR IY59123)
2.This PTF fixes a VOIP/SIP configuration in wvrteleconf
problem.
(Defect 35178)
3.Add partial migration support to DTdatabase
(Defect 35177)
4.This PTF fixes TSLOT memory leaks
(Defect 35157)
5.This PTF fixes EDL setting in wvrteleconf
(Defect 35149)
6.Ship wvrteleconf and wvrsysconf .cat file for en_GB
(Defect 35080)
7.This PTF fixes channel group allocation in wvrteleconf
(Defect 35038)
8.SIP Adaptor configuration fixed for wvrteleconf
(Defect 35028)
- Update 4.2.0.14
APAR IY58969 IY58960 IY58928
PTF U498990
1.In some cases starting the WVR GUI using vaeinit when WVR is
already running can result in performance problems.
This is especially noticeable when running Message Center.
This has been corrected.
(APAR IY58928)
2.This fix resolves a problem with the registration of an
MWI signalling process on a machine that does not have any
telephony adapters installed.
(APAR IY58960)
3.This fixes a problem which can give a core dump from SDIEXEC
on system shutdown.
(APAR IY58969)
4.This causes WVR to try again on voice file open if stale NFS file
handle returned
(Defect 35141)
5.This fixes a problem which can occasionally cause fetching of
messages from the database to fail due to the message id becoming
corrupt. This problem will not cause the database to become
unstable - simply just fail for a single call.
(Defect 35139)
6.This changes the error codes from wvrsysconf to be positive.
(Defect 35084)
7.Fixes a possible core in wvrsysconf when rd.data is unavailable.
(Defect 35083)
8.Fixed DTdatabase -m failure on non-networked machine
(Defect 34922)
9.This fixes a maintenance issue with wvrtrunk.
(Defect 33663)
- Update 4.2.0.13
APAR IY58574
PTF U498798
1.Fixed setmwi triggers in DT_Patch_Database
(APAR IY58574)
2.This fixes a problem in the umount_retry script used on systems
with HA. The umount command can hang on nfs mounts when the server
cannot be contacted because the network is down. This enforces a
timeout so that the system will still failover in this
state - rather than hanging at the umount command.
HA Customers should copy the umount_retry script to their own i
location where other HA scripts are located.
(Defect 35085)
3.License information for Unifed Messaging customers.
(Defect 35063)
- Update 4.2.0.10
APAR IY58326
PTF U498785 U498786
1.This fix resolves a problem where DTstatus archive files
are not cleaned up in the $OAM_LOG_PATH directory.
They can then potentially fill the filesystem and stop WVR
running. This problem was seen when large numbers of
errorlog and oamtrace archives were allowed to accumulate in
the $OAM_LOG_PATH directory.
(APAR IY58326)
2.Prior to this fix, the pack config GUI may not have worked
correctly when adapters were removed, reordered or set to the
'defined' state. For example when a DTTA or DTXA card which had
been position between two DTEA cards was physically removed or
taken out of service by relinquishing ownership with the
'dt_setowner -x -u1' command, the VoIP adapter pack config of
the second DTEA did not associate with the correct adapter.
The IP address, subnet mask and default router values were not
displayed at all and the labels for these fields were not
arranged correctly within the dialog box.
(Defect 34888)
3.This fixes a problem in pack configuration which could lead
to incorrectly configured trunks
(Defect 34889)
4.This fixes a problem with wvrtrunk attempting to enable
unconfigured channels.
(Defect 34867)
5.DBHEALTH now detects DB file system full condition
(Defect 32090)
6.This fixes a problem with core dumping when application objects
are copied to another object with the same name.
(Defect 34167)
7.Fixed various minor wvrteleconf problems.
(Defect 34941)
3270 Fixes
- Update 4.2.0.500
APAR IZ60542
PTFs U829858
1.This PTF fixes a problem with 3270 peeker sessions using
up all the buffers up due to a spining loop.
This can occur when the 3270 peeker sessions are being
used on a remote X server, for example Exceed.
If the remote X server connection drops requests are made
for window id 0 which causes the peeker code to loop.
(APAR IZ60542)
- Update 4.2.0.282
APAR IY99360
PTFs U811968
1.Prevent CTRL3270 core dumping in certain circumstances.
(APAR IY99360)
- Update 4.2.0.171
APAR IY78780
PTFs U806142
1.This PTF changes software requirements for dirTalk.3270
when operating on AIX version 5.3.
(APAR IY78780)
- Update 4.2.0.122
APAR IY72354
PTFs U803959
1.The 3270 Script editor GUI could not enter valid values
into the String and Numeric fields using the Put Field
Term Definition GUI.
(APAR IY72354)
ISDN Fixes
- Update 4.2.0.577
APAR IV16543
PTF U850414
1.Remove a race condition in ES services used by ISDN. This race
condition caused error 29106 from es_queue.c
(APAR IV16543)
- Update 4.2.0.574
APAR IV12944
PTF U849731
1.Fix prevents ISDN_MONITOR from core dumping if /tmp is cleared.
(APAR IV12944)
- Update 4.2.0.524
APAR IZ77685
PTF U836469
1.This fix overcomes a possible 29213 error (ISDN channel state
machine invalid primitive) when a outbound/inbound call clash
occurs (glare) on a QSIG ISDN channel. Previously after such
a clash the error could occur on the first inbound call received
on the channel.
(APAR IZ77685)
- Update 4.2.0.505 to 509
APAR IZ67224 IZ67386 IZ67412 IZ67432 IZ67435
PTF U831602 U831603 U831874 U831875 U831876
1.This fix overcomes possible ISDN errors 29109
(ISDN ES buffer pool low) and 29615
(ISDN Layer 1 discarding incoming messages) seen with
2BCT transfer calls on T1 ISDN DMS100 National.
Previously the loss of environment services buffers
associated with these errors could result in failure to
handle any new ISDN calls (inbound or outbound) until
WVR was restarted.
(APAR IZ67224 505 ISDN DMS)
(APAR IZ67386 506 ISDN ATT)
(APAR IZ67412 507 ISDN Euro)
(APAR IZ67432 508 ISDN INS1500)
(APAR IZ67435 509 ISDN com)
- Update 4.2.0.475
APAR IZ60261
PTF U828271
1.This fix helps overcome repetitive 1201 (Line problem/Glare
occurred > 20 times) errors seen occasionally during
QSIG ISDN transfer calls on certain Hicom switches.
Previously when this error occurred the 1201 error would
repeat on each and every follow-on outbound call until
the failing channel was reset at the switch end.
The fix overcomes the problem by setting the channel
disabled to prevent further call attempts.
(APAR IZ60261)
- Update 4.2.0.464
APAR IZ52334 IZ53254
PTF U827375
1.This fix overcomes a possible failure of T1 ISDN blind transfer
(seen on DMS National ISDN). Previously the Transfer Call
action would sometimes return EDGE_OK to the application
despite the CO switch failing to complete the transfer.
(APAR IZ52334)
2.This fix corrects the handling of an outbound ISDN call
when the switch returns a PROGRESS message with
CAUSE = #34 (No circuit/channel available). Previously the
Make Call action would return EDGE_MK_NO_ANSWER rather than
EDGE_MK_NO_LINES_AVAILABLE. The fix also corrects a similar
mishandling of 'user busy'
(PROGRESS with CAUSE = #17 - User busy) i.e. return of
EDGE_MK_PHONE_BUSY rather than EDGE_MK_NO_ANSWER.
(APAR IZ53254)
- Update 4.2.0.450
APAR IZ44739
PTF U824349
1.This fix overcomes a problem with E1 QSIG ISDN, where the
optional parameter CLGN (Calling Party Number) is not
always correctly handled for inbound and outbound calls.
If byte 3 (octet 3) of the CLGN information element is
set to a value 0x20 (number type = national, numbering
plan = unknown) the number is ignored and not presented
to the application (in the CLGN tag). Furthermore, if
an outbound call is attempted with a CLGN byte 3 value
of 0x20 (tag values CLGN.NUMBER_TYPE = 2,
CLGN.NUMBER_PLAN = 0) the call should go through to the
network and not be rejected with a return edge = 6
(OUTBOUND_LINE_PROBLEM). The fix extends support for
CLGN byte 3 values of 0x10, 0x20, 0x30, and 0x40
i.e.
number types of international, national, network
specified and subscriber respectively with numbering
plan unknown.
(APAR IZ44739)
- Update 4.2.0.447
APAR IZ42693
PTF U824345
1.This fix overcomes a possible failure of T1 ISDN 2B-channel blind
transfer.
Previously the Transfer Call action would sometimes unexpectedly
return OUTBOUND_LINE_PROBLEM to the application and transfer would
fail to complete.
(APAR IZ42693)
- Update 4.2.0.442
APAR IZ37058
PTF U822816
1.This fix overcomes a possible CHP core when quiescing
trunks on a T1 ISDN/DMS National system when transfers
are active. Previously errors 1201 (Line Problem) and
29212 (ISDN call state machine invalid primitive)
could also occur.
(IZ37058)
- Update 4.2.0.431
APAR IZ33164
PTF U821503
1.This fix overcomes a regression, specific to E1 Euro
ISDN, which was introduced by APAR IZ26091
(dirTalk.ISDN.Euro-ISDN version 4.2.0.412).
An outbound call (e.g. trombone transfer) with optional
CLPN (Calling Party Number) set by the application may
return OUTBOUND_LINE_PROBLEM despite the
CLPN being valid.
(APAR IZ33164)
- Update 4.2.0.422 to 424
FIX LEVEL / APAR / PTF : 4.2.0.422 / IZ32663 / U820402
FIX LEVEL / APAR / PTF : 4.2.0.423 / IZ32664 / U820473
FIX LEVEL / APAR / PTF : 4.2.0.424 / IZ32705 / U820760
1.The addition of a Progress Indicator IE for EuroISDN required
a change for a common ISDN header file and therefore the Layer
3 executables have been updated.
(APAR IZ32663 IZ32664 IZ32705)
- Update 4.2.0.411 to 412
FIX LEVEL / APAR / PTF : 4.2.0.411 / IZ26090 / U819346
FIX LEVEL / APAR / PTF : 4.2.0.412 / IZ26091 / U819347
1.This change adds support for sending a Progress Indicator
Information Element (IE) with an outgoing ALERTING message
for E1 EuroISDN and QSIG trunks. A new Trunk Interface
Group parameter (Send ISDN Progress Indicator value
on Alerting) defines the description value to be sent
(in octet 4 of Progress Indicator IE).
The default value (of -1) disables sending of
Progress Indicator.
Details of this fix are also documented in TechNote
(APAR IZ26090 IZ26091)
- Update 4.2.0.403 to 406
FIX LEVEL / APAR / PTF : 4.2.0.403 / IZ24030 / U818967
FIX LEVEL / APAR / PTF : 4.2.0.404 / IZ24046 / U818968
FIX LEVEL / APAR / PTF : 4.2.0.405 / IZ24048 / U818969
FIX LEVEL / APAR / PTF : 4.2.0.406 / IZ24029 / U818970
1.Tighten requisite for filesets.
(APAR IZ24030 IZ24046 IZ24048 IZ24029)
- Update 4.2.0.402
APAR IZ24148
PTF U818966
1.This fix overcomes a failure with some outbound calls
(transfers) on T1 ISDN DMS100 National.
If during the outbound call setup the switch returns
a NOTIFY message with optional IE's (Information Elements)
then a STATUS was previously returned with a Cause value
of #96 (Mandatory IE is missing). As a result of this the
switch would not progress the call though to a
CONNECTED state.
(APAR IZ24148)
2.Tighten requisite for filesets.
(Defect 36375)
- Update 4.2.0.370
APAR IZ21397
PTF U818267
1.This fix overcomes a possible 29205 error
(ISDN signalling process signalling library error)
during setup of an outbound ISDN call in which a
PROGRESS message is received.
(APAR IZ21397)
- Update 4.2.0.355
APAR IZ16837
PTF U816492
1.Correct calculation so that ISDN starts if
/var/tmp has greater than 4GB of free space.
(APAR IZ16837)
- Update 4.2.0.354
APAR IZ15405
PTF U816491
1,Prevent hangup if a STATUS message with a CAUSE code of #96 is
received.
(APAR IZ15405)
- Update 4.2.0.353
APAR IZ15553 IZ15755 IZ16048 IZ16070
PTF U816490
1.When making an outbound call, if a PROGRESS mesaage
is received with a CAUSE code of #34
'No circuit/channel available', then an edge of
'EDGE_MK_NETWORK_BUSY' will be returned to a State
Table rather than allowing the action to timeout.
(APAR IZ15553)
2.Prevent error 29212 being generated during ISDN Call
Transfer.
(APAR IZ15755)
3.The ISDN call Tag in SV542 will now correctly report the
PROTOCOL VARIANT if overridden by Multiple ISDN setup.
(APAR IZ16070)
4.Correct leak on QSIG MWI which could cause failure after
a long period of time.
(APAR IZ16048)
- Update 4.2.0.344
APAR IZ12491
PTF U815935
1.Add ability to specify the trunk or trunks on which MWI
is sent using a new parameter 'MWI Trunk' in the
'ISDN Signalling' parameter group.
This new parameter allows selection of a single or
multiple trunks on which MWI requests are to be sent.
Specify the trunk(s) using a comma delimited list of trunk
numbers in the range 1 to 16, or enter 0 to retain the
original operation.
Refer to the WVR publications to see a description of the change.
(APAR IZ12491)
- Update 4.2.0.330 - 334
APAR IZ09288 IZ09317 IZ09324 IZ09326 IZ09329
PTF U815176 U815177 U815178 U815179 U815180
IZ09288 U815176 4.2.0.330 dirTalk.ISDN.com fileset
IZ09317 U815177 4.2.0.331 dirTalk.ISDN.ATT fileset
IZ09324 U815178 4.2.0.332 dirTalk.ISDN.DMS100 fileset
IZ09326 U815179 4.2.0.333 dirTalk.ISDN.Euro-ISDN fileset
IZ09329 U815180 4.2.0.334 dirTalk.ISDN.INS1500 fileset
1.When "Send RESTART on Channel Enable = Yes" the code did
not retry if the switch failed to respond to the RESTART
with an acknowledgement.
This has now been corrected so that RESTART will be
retried two more times before timing out.
(APAR See above)
- Update 4.2.0.289
APAR IZ00020
PTF U812202
1.Correct ISDN layer 4 to correctly obey the Presentation
Restricted attribute of the Calling Number.
(APAR IZ00020)
- Update 4.2.0.287
APAR IY99811
PTF U812107
1.Remove unnecessary error reporting during 2 B-Channel
and RLT Transfer.
(APAR IY99811)
2.Correct a problem on ISDN Trunk 16 where it may fail
to manually enable after being disabled.
(Defect 36146)
- Update 4.2.0.280
APAR IY99346
PTF U811965
1.Correct ISDN DMS call transfer code to prevent
error 29200 with the descriptions "Could not stop
the facility timer" and also "Did not start the
FACILITY timer: timer alreary running"
(APAR IY99346)
2.Improve ISDN layer error reporting.
(Defect 36124)
- Update 4.2.0.264
APAR IY96220
PTF U811214
1.Correct variables and messages used by import check
routines.
(APAR IY96220)
- Update 4.2.0.263
APAR IY96216
PTF U811208
1.Update variables and messages used by import check
routines.
(APAR IY96216)
- Update 4.2.0.262
APAR IY96154
PTF U811207
1.Correct variables and messages used by import check
routines.
(APAR IY96154)
- Update 4.2.0.261
APAR IY96142
PTF U811206
1.Corrects variables and path information used during
import check routines.
(APAR IY96142)
- Update 4.2.0.260
APAR IY95505
PTF U811205
1.Corrected memory leak on QSIG which would cause MWI to
fail after many thousand MWI messages had been sent.
(APAR IY95505)
2.Add extra Trace entries for internal ISDN routines.
(Defect 36072)
3.Correct Trace entry for internal ISDN routine.
(Defect 36071)
- Update 4.2.0.247
APAR IY93040
PTF U810661
1.Add Single Step Transfer custom server for ISDN.
(APAR IY93040)
- Update 4.2.0.244
APAR IY92625
PTF U810596
1.Modifications to QSIG handling of STATUS messages to prevent
call being dropped when forwarded.
(APAR IY92625)
- Update 4.2.0.237
APAR IY91556
PTF U810396
1.Change to 2 B-Channel Transfer so that a Proceeding IE will
no longer initiate a transfer in conformance with Bellcore
GR-2865-CORE.
(APAR IY91556)
- Update 4.2.0.221
APAR IY87181
PTF U808661
1.Add Text strings to some ISDN internal error messages.
(APAR IY87181)
- Update 4.2.0.220
APAR IY85983
PTF U808660
1.Updated Nortel DMS 100 and DMS 250 protocols to support
REDIRN Call Tags on Inbound and Outbound calls.
Also on 2B-Channel Transfer if a "Redirecting Number" IE
exists it will be automatically copied to the outbound call,
and an Original Called Party number will no longer be
generated.
(APAR IY85983)
- Update 4.2.0.206
APAR IY81142
PTF U807391
1.Allow greater than 16 digits in Calling and Called Party Numbers
as part of an outbound SETUP message.
(APAR IY81142)
- Update 4.2.0.202
APAR IY81678
PTF U806946
1.Add support for Virtual Adapter ( adapterless DTNA) solution.
For ISDN fileset.
(APAR IY81678)
- Update 4.2.0.194
APAR IY80909
PTF U806888
1.Correct allowable values of PROGRESS INDICATOR and also
allow PROGRESS before ALERTING on outbound calls.
(APAR IY80909)
- Update 4.2.0.143
APAR IY75430
PTF U804592
1.Allow Redirecting Number octet 3 of the SETUP message to
be FF with octets 3a and 3b not present.
This is to allow SV542 to be populated with the
Redirecting Number.
(APAR IY75430)
- Update 4.2.0.142
APAR IY75375
PTF U804589
1.Allow Redirecting Number octet 3 of the SETUP message to
be FF with octets 3a and 3b not present on ISDN versions T1
National 2 and TR41449/41459.
This is to allow SV542 to be populated with the
Redirecting Number.
(APAR IY75375)
- Update 4.2.0.132
APAR IY73948
PTF U804240
1.This fix completes changes required for DISPLAY.TYPES
(APAR IY73948)
- Update 4.2.0.131
APAR IY73946
PTF U804239
1.Updated due to changes in ISDN.com fileset
(APAR IY73946)
- Update 4.2.0.130
APAR IY73945
PTF U804238
1.Updated due to changes in ISDN.com fileset
(APAR IY73945)
- Update 4.2.0.129
APAR IY73911
PTF U804237
1.Updated due to changes in ISDN.com fileset
(APAR IY73911)
- Update 4.2.0.128
APAR IY73903
PTF U804235
1.Further enhancements to signalling ISDN National
NA007-NA0017.
(APAR IY73903)
- Update 4.2.0.112
APAR IY72461
PTF U803654
1.This fix prevents erroneous status messages being sent
from WVR in response to receipt of a progress IE within a
message when configured as QSIG.
This fix also has added the NFE part of the MWI ASN.1 rose PDU.
The NFE has been defined as sourceEntity = endPINX and
destinationEntity = endPINX only.
(APAR IY72461)
- Update 4.2.0.111
APAR IY72457
PTF U803653
1.A new ISDN tag has been added to provide sending and
receiving of the DISPLAY IE tag, plus support for tag
attributes DISPLAY.INF and DISPLAY.TYPES.
Call transfers type RLT and Two B Channel Transfer will
now automatically foreward a DISPLAY IE to the transferee
if it was present on the original incoming call
Note: Currently only the switch type NT DMS100 with Line
Signalling ISDN National NA007-NA0017 configuration supports
the DISPLAY IE.
(APAR IY72457)
- Update 4.2.0.110
APAR IY72456
PTF U803652
1.This fix adds support for ISDN.com to allow fixes in
fix level 4.2.0.111 and 4.2.0.112 to operate correctly.
(APAR IY72456)
- Update 4.2.0.79
APAR IY69878
PTF U80331178).78
APAR IY69817
PTF U80331017).74
APAR IY68830
PTF U802554
1.Corrects problem where HSF Transfer for New Zealand would
not work for concurrent calls.
(APAR IY68830)
- Update 4.2.0.67
APAR IY66646
PTF U802158
1.This code update fixes an es_buffer leak caused when invoking
a FH type ISDN call transfer.
This feature is currently used in New Zealand.
(APAR IY66646)
- Update 4.2.0.55
APAR IY64700
PTF U8008 INS1500 version
(APAR IY64700)
- Update 4.2.0.54
APAR IY64663
PTF U8008 Euro version
(APAR IY64663)
- Update 4.2.0.53
APAR IY64651
PTF U800856 DMS version
(APAR IY64651)
- Update 4.2.0.52
APAR IY64617
PTF U800855 ATT version
(APAR IY64617)
- Update 4.2.0.44
APAR IY63903
PTF U800676
1.Tech Doc Note for Multiple ISDN support for WVR AIX.
12 October 2004
The following is an explanation and some useful information
about how to use the isdn.ini.sample file which is provided
with the dirTalk.ISDN.com file set to allow the ISDN system
configuration to be overridden. It is primarily made available
to allow ISDN configurations of WVR AIX to allow multiple Q931
protocols to be run on the same P-series machine.
For example the TR41459 module could be configured on some
packs and DMSNAT can now be configured on other packs.
1. The isdn.ini.sample is installed when the dirTalk.ISDN.com
is installed or upgraded.
2. The isdn.ini.sample is installed in to the following
directory /usr/lpp/dirTalk/db/sys_dir/isdn
3. The feature is available when an isdn.ini file exists in
this directory; this can be created by copying
isdn.ini.sample to isdn.ini and changing permissions
to writeable.
4. Changes can be made to any or all of the supported
parameters on a PACK basis. These parameters do not override
the description of how WVR supports ISDN as defined in the
General Information Planning Book.
5. Alternative ISDN modules can be configured by setting the
SignalProcessNumber parameter to the required signal process
number. The signal process numbers are defined in the
slcommon.h file which contains the definition of the
SL_PROT_TYPE structure. The slcommom.h file can be
found in /usr/lpp/dirTalk/include.
6. Changes take affect when a trunk is enabled.
7. Packs that have been updated generate a white notification
alarm number 29618 which states that
"The trunk ISDN configuration as been updated" and indicates
the Trunk number.
8. There are no other console changes. The Custom Sever Manager
will continue to indicate the originally configured ISDN
custom sever.
Note: It is important to set the Run status of any overridden
ISDN custom servers that may also be installed to the Stop
state and the set their IPL status to be INSTALLED by
selecting Auto-Start Off. The primary ISDN custom sever
will control both the original primary ISDN modules and
the overridden ISDN modules, and must be the only ISDN
custom server indicating Run status = WAITING and
IPL status = AUTOEXEC.
9. T309Enabled is usually enabled by the system configuration
should not normally be changed.
10. SendRestartMsgOnChannelEnable parameter can be
enabled/disabled. If this is disabled the switch will
provide the channel restart messages rather than WVR.
11. BChanServiceMessagesEnabled can be enabled/disabled to
control the sending of B channel service messages, which
are used to enable the bearer channels.
12. DChanServiceMessagesEnabled can be enabled/disabled to
control the sending of D channel service messages which are
used to provide D channel backup.
13. MaintenanceProtDisc is used in conjunction with D channel
and B channel service messages and can be set to 3 or 43.
14. The numbering type set in the ISDN system configuration
can be overridden on a pack basis.
NumberingType
a. 0 = unknown
b. 1 = international
c. 2 = national
d. 3 = network-specific
e. 4 = subscriber
f. 5 = abbreviated
15. The numbering plan set in the ISDN system configuration
can be overridden on a pack basis.
NumberingPlan
a. 0 = unknown
b. 1 = ISDN
c. 2 = national
d. 9 = private
16. The signal process selected in pack configuration can be
overridden by setting the ISDNSignalProcess.Number.
The valid numbers are
a. SL_PROC_EUROISDN = 24
( Preferred for E1 ISDN )
a. SL_PROC_5ESS_5E8 = 25
b. SL_PROC_5ESS_5E9 = 26
c. SL_PROC_DMS100_BCS34 = 27
d. SL_PROC_TR41449 = 28
( Preferred also includes TR41459 4ESS )
e. SL_PROC_T1_NATIONAL = 29
( Preferred for Lucent 5ESS switches )
f. SL_PROC_DMS_NATIONAL = 30
( Preferred for Nortel DMS 100 switches )
g. SL_PROC_DMS_250 = 32
( Preferred for Nortel DMS250 switches )
h. SL_PROC_ISDN_INS = 35
( Preferred for Japan INS 1500 )
i. SL_PROC_ISDN_QSIG = 37
( Preferred for E1 QSIG )
17. The ISDN Q.931 signal modules that are started by WVR
when a trunk is enabled can be seen by typing the following
command
ps -eaf | grep ISDNDL3
For example ISDNDL3_EUROISDN 13
18. There is only one ISDN Q.931 signal module per signalling
group, which means for example that an NFAS group of eight
trunks will only require one signalling module.
; File : isdn.ini.sample
;
; Licensed Materials - Property of IBM 5765-001 (C)
; Restricted
; Rights - Use, duplication or disclosure restricted by
; GSA ADP Schedule
; Contract with IBM Corp.
;
; Change History:
;
;
; Purpose :
; This file contains a limited number of isdn configuration
; parameters that can be used to override the original
; configuration set up from the WVR pack and system configuration.
; Its main purpose is to allow different types of isdn q931
; code to co-reside and execute on the same WVR.
; This can be useful for attaching a WVR to different switch
; manufacturers such as Nortel and Lucent.
; For example this file will allow the ISDNDL3_T1NAT signalling
; process to be configured at the time as ISDNDL3_DMSNAT running.
;
;
; Syntax :
; The file must reside in a directory called isdn located
; in $SYS_DIR.
; The actual directory is /usr/lpp/dirTalk/db/sys_dir/isdn.
; The name of the file must be isdn.ini
; This file can be used as a template, copy it to a new file
; called isdn.ini,
; For Example: copy the file by typing
; cp isdn.ini.sample isdn.ini
; Then change the file to be read/write by typing
; chmod +w isdn.ini
;
; Note: T1 and E1 protocols cannot be mixed.
; Different configurations cannot be applied within an
; NFAS group, all packs in that group must use the same
; signal process.
; Each NFAS group can be configured differently, for
; example one group could be T1NAT another group could
; be DMSNAT.
; When overriding the SignalProcssNumber for an NFAS group
; it is only necessary to specify an entry for the primary
; signalling PACK.
;
;
; The tags supported within the isdn.ini file must reside in
; the context of a [PACK_xx] entry. There can be up to
; 16 [PACK_xx] entries present in the file starting from
; [PACK_1] through to [PACK_16].
;
; Not all tags need be present, values not present in isdn.ini
; will be taken from the WVR System Configuration.
;
; The example below shows all supported tag values.
; Remove the ; comment to use an example like this.
;
;[PACK_1]
; T309Enabled = y ; This tag value can y or n
; SendRestartMsgOnChannelEnable = n ; This tag value can y or n
; BChanServiceMessagesEnabled = y ; This tag value can y or n
; DChanServiceMessagesEnabled = n ; This tag value can y or n
; MaintenanceProtDisc = 3 ; tag value only be 3 or 43
; NumberingPlan = 7 ; value only between 0 of 15
; NumberingType = 3 ; value only between 0 of 15
; SignalProcessNumber = 29 ; tag value can only be set
; ; to one of the ISDN signal
; ; process numbers
; ; which are defined in
; ; slcommon.h
;
;
; Further information about system configuration can be found in
; the WVR AIX "Configuring the System" book. Information about
; specific Signal Process Numbers can be found in the SL_PROC_TYPE
; structure which is defined in the slcommon.h file located in
; $VAE/include or /usr/lpp/dirTalk/include directory.
;
;
;
;
;
; A simple example for overriding the signal process.
;
;[PACK_5]
; SignalProcessNumber = 29 ; This tag will set pack 5 to
; execute the Lucent T1 National
; isdn variant. The override does
; not appear on the Custom Server
; Manager window which will continue
; to show the original default
; signal process which must be left
; running. All other ISDN Custom Server
; signal processes that appear in the
; Custom Server Manager should be set
; to INSTALLED by selecting the
; "Auto-Start Off" option.
Example 1
The following example sets PACK 1 (trunk 1) to run the signal
process number 29, which is the Lucent variant of T1 National, and
is typically required for a 5ESS switch. This will run the
ISDNDL3_T1NAT module for pack1, or the primary signalling pack of
an NFAS group which has been configure as pack 1.
[PACK_1]
SignalProcessNumber = 29
Example 2
The following example sets PACK 2 (trunk 2) to run the signal
process number 30, which is the Nortel variant of T1 National, and
is typically required for a DMS100 switch. This will run the
ISDNDL3_DMSNAT module for pack2. The Maintenance Protocol
Discriminator is set to 3 for support of B channel and D channel
Service Messages.
[PACK_2]
MaintenanceProtDisc = 3
SignalProcessNumber = 30
Example 3
The following example sets PACK 3 and PACK 16 to run the signal
process number 32, which is the Nortel variant of T1 National, and
is typically required for a DMS250 switch. This will run the
ISDNDL3_DMS250 module for pack 3 and pack 16. The Maintenance
Protocol Discriminator is set to 43 for support of B channel Service
Messages. The SendRestartMsgOnChannelEnable parameter is set to n
which relies on the switch to provide restart messages rather
than WVR. D Channel service message support required has been
disabled.
[PACK_3]
SendRestartMsgOnChannelEnable = n
DChanServiceMessagesEnabled = n
MaintenanceProtDisc = 43
SignalProcessNumber = 32
[PACK_16]
SendRestartMsgOnChannelEnable = n
DChanServiceMessagesEnabled = n
MaintenanceProtDisc = 43
SignalProcessNumber = 32
(APAR IY63903)
2.This feature provides call Tansfer using Euro ISDN based on
the hook flash feature. This function was requested
specifically by New Zealand.
(Defect 35389)
- Update 4.2.0.43
APAR IY63891
PTF U800672
1.See details in Update 4.2.0.44
(APAR IY63891)
- Update 4.2.0.25
APAR IY61573
PTF U800144
1.This ISDN ptf fixes an ISDN call transfer problem in the 4.2 version
of WVR.
(APAR IY61573)
- Update 4.2.0.19
APAR IY60499
PTF U499203
1.Updated valid combinations of "Screening Indicator" and
"Presentation Indicator" for Original Calling Number and
Calling Party Number.
(APAR IY60499)
BrooktroutFax Fixes
- Update 4.2.0.550
APAR IZ85737
PTF U839630
1.Improved initial fax channel allocation during Brooktrout fax
startup.
(APAR IZ85737)
2.Corrected path information for import check.
(Defect 36079)
APAR IZ85979
PTF U839632
1.Remove excess debug information from the TR1034 Brooktrout
(Defect 35941)
- Update 4.2.0.208
APAR IY83693
PTF U807394
1.This PTF wil stop infinite loop and core in logging code.
(APAR IY83693)
- Update 4.2.0.153
APAR IY77050
PTF U805630
1.Enhance error handling at the end of a fax call.
This change will stop a partially received FAX from being
deleted if there is a problem during FAX transmission.
The following return codes have been implemented
0 - As before, the FAX is completely successful
100 - The far end hung up during the end of fax negotiation.
Valid FAX present
101 - Another error occured during the end of fax negotiation.
Valid FAX present
-490 - As before, FAX failed
(APAR IY77050)
- Update 4.2.0.115
APAR IY72463
PTF U803795
1.This fix ensures that the Brooktrout Fax device retains
the correct permissions after a system reboot.
(APAR IY72463)
- Update 4.2.0.114
APAR IY70584
PTF U803766
1.Changes to the BTFAX_1000 custom server to ensure it uses
the correct configuration file
(APAR IY70584)
2.Changes to the logging process to allow users to move the
current BrooktroutFax.log file without stopping further logging.
A new log file will be created to continue recording log data.
(Defect 35671)
- Update 4.2.0.48
APAR IY64060
PTF U800737
1.This PTF adds support for the new Brooktrout Fax TR1034 Fax
board. Information concerning configuration and use can be
found in "Fax using Brooktrout" manual.
(APAR IY64060)
Device driver
- Update 4.2.0.47
APAR IY64041
PTF U800681
1.This PTF adds support for the new Brooktrout Fax TR1034 Fax
board. Information concerning configuration and use can be
found in "Fax using Brooktrout" manual.
(APAR IY64041)
Base sub-system
- Update 4.2.0.12
APAR IY47312
PTF U495891
1.This PTF corrects the requisite software required by the
dirTalk.BrooktroutFax fileset.
(APAR IY58381)
SP FixesThere no SP Fix Updates on WVR for AIX, V4.2.
GEOTEL Fixes
Internal Defect fix
- Fixed a problem where the GeoTel custom server incorrectly shutdown during a message read. The message read actually works however the trace returned an error. This is misinterpreted as a message read error. Note: Importing the GeoTel custom server will overwrite the $CUR_DIR/service.def in ca/GeoTel_dir. This should be backed up first.
- Update 4.2.0.337
APAR IZ09273 IZ06415
PTF U815451
1.Preserve the tag values in the GeoTel custom server to
ensure ECC vars do not disappear or move.
(APAR IZ09723)
2.Changed the Improve the socket read code to check earlier
for ECONNRESET
(APAR IZ06415)
- Update 4.2.0.291
APAR IZ00740
PTF U812851
1.Catch all cases of the GeoTel ICM socket connection
dropping and reconnect the socket. Note, back up the
service.def file before importing the GeoTel.imp
(APAR IZ00740)
- Update 4.2.0.288
APAR IY99834
PTF U812108
1.Allow CHPs greater than 480 to use the GeoTel custom server.
(APAR IY99834)
- Update 4.2.0.258
APAR IY95504
PTF U811204
1.Improved the handling of trailing whitespace in parameter
definition files.
(APAR IY95504)
- Update 4.2.0.232
APAR IY90691
PTF U809889
1.Fix a problem with ROUTE_SELECT, ROUTE_END and ROUTE_END_EVENT
messages, which ware incorrectly interpreting returned information.
Importing GeoTel.imp will remove all files in the
/usr/lpp/dirTalk/db/current_dir/ca/GeoTel_dir and reset the flags
passed to the custom server. If these have been changed then this
information will need to be backed up before importing the
GeoTel.imp file.
(APAR IY90691)
- Update 4.2.0.218
APAR IY86095
PTF U808518
1.This defect resolves an issue whereby ECC data could
get lost or overwritten when under load.
(APAR IY86095)
2.Added files to allow import check during WVR start up.
(Defect 35954)
3.Improve the socket read functions to handle fragmented packets.
(Defect 35979)
4.Make the socket read code not block if there is no
message to read.
(Defect 35988)
- Update 4.2.0.191
APAR IY81034
PTF U806882
1.This defect resolves an issue whereby if the custom server
is running high load utilising ECC Variables then there
is a possibility of a buffer over-run occuring, causing
the custom server to terminate unexpectedly.
(APAR IY81034)
- Update 4.2.0.167
APAR IY75755
PTF U805954
1.This fix corrects a problem where the delivered message
sent to the Cisco ICM Peripheral Gateway used an incorrect
logical trunk group number if the -c command line
parameter is used and the Channel Group ID of the trunk
group is other than 01.
(APAR IY75755)
2.This feature adds support for up to 30 ECC array indices
spread across the existing five ECC arrays as well as allowing
multiple instances of the custom server to be run.
The support for up to 30 ECC array indices is only available
through the state table interface, the Java interface retains
the current limit of 5.
The existing array support was flawed in that if multiple
arrays were utilised then the data would be stored in the
incorrect locations.
This PTF modifies the behaviour to place the data in the
return parameters in the order in which it is received as
opposed to the custom server attempting to place it in the
order it believed to be correct. The existing custom server
functions remain the same in allowing only up to the first
five array entries to be received. In order to work around
this limit three new functions has been added to the state
table custom server interface.
Retrieve_Index_Value
The Request_Index_Value is issued by the state table
application in order to retrieve the value of a specific array
index which has been sent by the ICM. This command must be
issued by the application after the variables have been
received from the ICM in a message which contains the ECC
variables and before any other messages are picked up by the
state table application. For example, this could be issued
directly after a Run_Script_Request has been issued.
This function can be called repeatedly to retrieve multiple
index values.
If the value of the array index cannot be found then a null
string will be returned.
If the DialogueID cannot be found then
E_INVALID_DIALOGUEID will be returned.
SendData
DialogueID (number)
Obtained using the Create_DialogueID function.
ECCVarArrayTag (number)
The numeric tag value by which this variable is identified.
ECCVarArrayIndex (number)
The numeric array index identifier, the value of which
you wish to retrieve.
ReceiveData
ECCVarArrayString(string[210])
The call related data stored in the specified tag and index.
New_Call_Extended
The New_Call_Extended function provides a way from the state
table interface to send up to 30 array values.
The required parameters are the same as for New_Call (and
the resultant message to ICM is a New_Call) but with the
ability to set more ECC Array values.
SendData
DialogueID (number)
Obtained using the Create_DialogueID function.
TrunkGroupID (number)
The ID of the trunk group on which the call arrived.
Set to SV177 (Current Channel Group) if the -c is
specified as one of the custom server parameters.
Otherwise set to SV166 (Physical Card Number).
When using SV177 as the TrunkGroupID, use the Assign
Data state table action to assign the value of SV177
to a numeric variable. This variable should then be
used as the value that is passed as TrunkGroupID.
TrunkNumber (number) The number of the trunk on which
the call arrived. Set to SV165 (Logical Channel Number)
if the -c is specified as one of the custom server
parameters. Otherwise set to SV167
(Physical Channel Number).
ServiceID (number)
The ID of the service to which this call is assigned.
DialedNumber (string[32])
The number that is used to determine the ICM call type.
ANI (string[40])(optional)
The Calling line ID of the caller.
UserToUserInfo (string[131])(optional)
The ISDN user-to-user information element.
CalledNumber (string[32])(optional)
The complete called number from the network.
DNIS (string[32])(optional)
The DNIS that is provided with the call.
CallVariable1 (string[40])(optional)
Additional VRU information that is to be used
when the ICM script is run.
CallVariable... (string[40])(optional)
Additional VRU information that is to be used
when the ICM script is run.
CallVariable10 (string[40])(optional)
Additional VRU information that is to be used
when the ICM script is run..
ReceiveData
Status (number)
A value from the list of status codes that
describes the result of this request.
Run_Script_Result_Extended
The Run_Script_Result_Extended function provides a way
from the state table interface to send up to 30 array values.
The required parameters are the same as for Run_Script_Request
(and the resultant message to ICM is a Run_Script_Request) but
with the ability to set more ECC Array values.
SendData
DialogueID (number)
Obtained using the Create_DialogueID function.
InvokeID (number)
Set to the InvokeID returned by Run_Script_Request.
ResultCode (number)
Set to true (1) if no errors were found actually
running the script. Set to false (0) if an error
was found.
CallerEnteredDigits (string[40])
Digits that the caller enters.
NewTransaction (number)
Set to true (1) if the VRU PIM should write a
Call Termination record into the database
immediately after processing this message.
CallVariable1 (string[40])(optional)
Additional information that is related to the call.
CallVariable... (string[40])(optional)
Additional information that is related to the call.
CallVariable10 (string[40])(optional)
Additional information that is related to the call.
Using multiple instances of the Cisco ICM custom server
The capability of running multiple instances of the custom
server has been added. In order to utilise this new capability
two new custom server parameters have been added:
-n<number of instances>
The number of instances of the Cisco ICM custom server which
you wish to run. See the section "Running multiple instance
of the Cisco ICM custom server" for more information.
-g<parameter definition file>
The fully qualified name of the parameter definition which
you wish to use as the configuration parameters for the custom
server. See the section "Running multiple instance of the
Cisco ICM custom server" for more information.
In some scenarios customers may wish to run multiple instances
of the Cisco ICM custom server on the same WVR system.
In order to do this the system should be started with
only the -n and -g parameters as described above.
The parameter definition file then contains all of the
attributes required for the different custom server instances.
The format of this file is similar to the attributes which you
specify on the command line, however the instance number must
also be specified for each parameter.
For example, to specify the debug level for an instance the
file would list -d<instance number>lt;debug level>
An example definition file is below:
-d11
-d20
-d31
-f1/home/dirTalk/current_dir/ca/GeoTel_dir/services1.def
-f2/home/dirTalk/current_dir/ca/GeoTel_dir/services2.def
-f3/home/dirTalk/current_dir/ca/GeoTel_dir/services3.def
-B11
-B22
-B33
-K1
-K2
-K3
-U1
-U2
-U3
-V1
-V2
-V3
-W1
-W2
-W3
-X1
-X2
-X3
-Y1
-Y2
-Y3
-Z1
-Z2
-Z3
(Feature 35807)
- Update 4.2.0.135
APAR IY73954
PTF U804259
1.Fix a GeoTel crash when 480 channels are disabled at
the same time. Note installing this PTF will replace
the service.def file.
(APAR IY73954)
2.A new parameter has been added to the custom server.
-e <seconds>
Indicates that trunk group status messages should be
sent every <seconds> seconds. The time interval must
be in the range 1-600 seconds, default value is 60 seconds.
Applying this flag modifies the behaviour of the trunk
group status message such that trunks are considered out
of service if they are busy as well as out of service
(blocked or unavailable). In addition it also allows the
user to modify the frequency at which this check is made.
This parameter should be used in environments where trunk
availability data is critical to the performance of the
platform and it is not possible to achieve the same
results using data on the Cisco ICM platform.
IT IS RECOMMENDED THAT DEBUG LEVEL IS SET TO 0 (-d 0) ON
PRODUCTION SYSTEMS DUE TO INCREASED LOG SIZE AND
PERFORMANCE IMPACT.
(Feature 35572)
3.The GeoTel custom server is now certified as functioning
fully with Cisco ICM version 6.
(Feature 35572)
- Update 4.2.0.116
APAR IY72521
PTF U803796
1.The GeoTel custom server has been modified such that during
failover of a Peripheral Gateway, state tables will now
receive the correct status code.
(APAR IY72521)
- Update 4.2.0.26
APAR IY61713 IY61714 IY61716 IY61717
PTF U800163
1.This PTF fixes core dump from GeoTel custom server when it runs
in Java mode. Sometimes the Java subsystem was releasing a Dialogue
ID after the phone call had already been hung up.
This situation is now handled correctly.
(APAR IY61713)
2.Updates to the GeoTel custom server to enhance performance
(APAR IY61716)
3.Modifies the handling of the TCP/IP socket from Java to the
custom server so that if the Java client is recycled the custom
server automaticaly re-establishes the connection.
(APAR IY61717)
4.This fix prevents previous ECC Vars from persisting if they
are not overwritten. ECC Vars are now properly reset if they are
not explicitly used.
(APAR IY61714)
- Update 4.2.0.16
APAR IY59538
PTF U499718
1.Correct handling of ECC Array Variables.
(APAR IY59538)
ADSI Fixes
There no ADSI Fix Updates on WVR for AIX, V4.2.
TDD Fixes
There no TDD Fix Updates on WVR for AIX, V4.2.
DVT Fixes
There no TDD Fix Updates on WVR for AIX, V4.2.
SS7_D7 Fixes
- Update 4.2.0.553
APAR IZ89192
PTF U839821
1.This fix overcomes a possible SS7 30013 error (Invalid Data Item /
Attempt to access Message Register when not loaded) with an outbound
call attempt when 16 Channel Groups are configured for use (e.g. 1
Channel Group for each of 16 T1/E1 SS7 voice trunks).
(APAR IZ89192)
- Update 4.2.0.550
APAR IZ85776
PTF U839631
1.Improve error condition handling.
(APAR IZ85776)
- Update 4.2.0.502
APAR IZ63867
PTF U829863
1.This fix provides support for AIX 64-bit kernel working
of the device driver (artic8260) for the SS7 signalling
card (SS8/NewNet quad T1/E1 HAX50PCGEN).
The fix includes the D7 1.3.1.19 update package for the
D7 1.3.1.0 base install package. Also corrected in the
update is a possible system crash associated with the
D7 etmod device driver.
(APAR IZ63867)
- Update 4.2.0.436
APAR IZ21900
PTF U821509
1.This fix corrects a problem sometimes seen when
re-introducing a WVR client into a SS7 cluster.
A D7 MAJOR error would sometimes occur (dsmd MAJOR Delaying
dsmd_svc_mtc_syncsegdata_req due to
EDESTBLKD : Destination blocked [211]) and the whole SS7
cluster would crash preventing further calls.
The fix (replacement of D7 1.3.1.17 with version 1.3.1.18)
also corrects some minor D7 related problems
e.g. D7 AccessStatus utility can be invoked from more
than 2 machines in the SS7 cluster.
(APAR IZ21900)
- Update 4.2.0.366
APAR IZ18163
PTF U817183
1.This fix overcomes a possible SS7_D7 custom server core
dump (Error12305 with core file in
/var/adm/ras/dirTalk/core.SS7_D7) when SS7 calls are
ended by the voice application
(APAR IZ18163)
- Update 4.2.0.284
APAR IY97404 IY94586
PTF U811970
1.This fix corrects the recovery from lost SS7 Server
signalling links.
Previously when at least 1 signalling link recovered after
all SS7 ISUP signalling links were lost in a SS7 cluster,
the trunk circuits of a WVR client would recover (become
unblocked, as viewed through 'ss7view -circuit') but this
was not reflected through to the System Monitor GUI
(which remained blocked). As a result, all inbound calls
after this recovery would be rejected (with SS7 error;
"sl_send_indication failed" in ../oamlog/SS7/Errorsxx file).
The fix corrects a regression introduced by the fix for
APAR IY92573 (SS7 PTF U810594 fileset
dirTalk.SS7_D7.Enablement version 4.2.0.242).
(APAR IY97404)
2.This fix prevents the reporting of errors 30028
(DTXA/DTTA Loopback) and 30105 (SS7 Continuity test failed)
if an SS7 Continuity (COT) test call on a previous circuit
is abandoned by the network. Previously this pair of
errors would occur (in errorlog file) if this type of
test call was released (abandoned) by the network rather
than sending a COT success or COT failure as conclusion
of the test.
(APAR IY94586)
- Update 4.2.0.278
APAR IY85051
PTF U811643
1.This fix address a startup problem seen on some
SS7 E1 WVR client systems.
Where a connecting switch sends CGU
(Circuit Group Unblock) ISUP messages as part of the
SS7_D7 custom server start up sequence and when trunks
are enabled, Some circuits fail to become available
(unblocked). Outbound calls are then prevented and
generate the error;
30114 SS7 Error while in SEIZE state
The fix (i.e. replacing D7 1.3.1.15 with version 1.3.1.17)
corrects this problem by ensuring the correct handling of
the CGU ISUP message from the switch.
(APAR IY85051)
You will need to remove any existing .toc file before
trying to upgrade to latest fix level. Please perform
the following command as user root
rm /usr/lpp/dirTalk/sw/ss7/update/.toc
If the .toc file did not exist, it is not a problem.
- Update 4.2.0.248
APAR IY93215
PTF U810899
1.This fix allows the optional ISUP/IAM parameters
Jurisdiction Information (0xC4) and Party Information
(0xFC) for an inbound SS7 T1 call to be presented in the
system variable SV542 (as Tags JINFO and PINFO).
This then allows an application to determine call
charging and redirecting party information.
For an SS7 T1 inbound call application, the
Jurisdiction Information is presented as the value of
Tag JINFO. For Party Information the Tag value is
zero but with string attributes PINFO.CALLING_NAME
and PINFO.REDIRECTING_NAME.
For an SS7 T1 outbound call, this fix also allows
an application to set in SV541 the Tags JINFO and
PINFO for presentation to the remote switch in
an IAM (Initial Address Message).
Support for both new optional parameters is enabled
in configuration file
/usr/lpp/dirTalk/db/current_dir/ca/SS7_D7_cfg/
AnyMachine/Service.cfg
(APAR IY93215)
- Update 4.2.0.242
APAR IY92573 IY85973 IY89932 IY89598 IY91384 IY90726 IY90782
PTF U808886
1.This fix corrects an inconsistency sometimes seen when viewing
an SS7 circuit (CIC) within the SS7 utility 'ss7view -circuit'
verses the System Monitor/Channels GUI. A ss7view CIC state
of 'Far End Service state = Blk' is not always reflected in the
System Monitor which should show the channel as 'Bl' (Blocked)
coloured BLACK and not 'Id' (Idle) coloured BLUE. Previously
relying on the System Monitor could suggest a circuit was available
to handle calls when actually blocked at the network.
(APAR IY92573)
2.This fix overcomes a SS7_D7 custom server core dump or
'segment violation' error (in SS7_D7 Errorsxx log file) on
a WVR client machine when a SS7 Server is shutdown in a
large (at least 60 trunk) SS7 cluster.
(APAR IY90782)
3.This fix allows for the configuring of a T1 SS7 cluster
so that the SS7 signalling link(s) run at 56k bps
(rather than the default of 64k bps). The SS7itty configuration
utility supports this through user selection of either the
Bell Canada or Verizon/MCI SS7 Configuration Pack
(rather than the Standard Configuration Pack) when configuring
for SS7 T1.
(APAR IY85973)
4.This fix helps ensure after a system reboot that
the SS7_D7 custom server automatically start when WVR is
started (assuming AUTOEXEC is set in Custom Server Manager).
Previously an auto restart of SS7_D7 after reboot could result
in failure to start and the error;
A second parent start - now stopped
in the SS7/Errorsxx file.
(APAR IY91384)
5.This fix allows an inbound SS7 T1 or E1 Continuity
(COT) test call on a DTTA type card to proceed as expected without
immediately terminating with error 30028 (Invalid card).
Previously COT tests would only proceed on DTXA cards.
(APAR IY89598)
6.This fix corrects the failure of the SS7_D7 custom server to
acknowledge a CGB (Circuit Group Block) or CGU (Circuit Group
Unblock) message sent from a SS7 switch when the 'Circuit group
supervision message type' is set to 'hardware oriented' rather than
'maintenance oriented'. Previously this would result in a D7 error
of $890117 (ISUP: Unexpected primitive [mod=10 prim=0x90a
Astate=0x0] in AccessAlarms error log. The effect of the error was
to prevent trunks from becoming available at startup and could
result in the switch reporting 'C7SF' signalling failures.
(APAR IY90726)
7.This fix corrects the handling of partial trunk SS7 Circuit
Group Blocks (CGBs) and Circuit Group Unblocks (CGUs) received from
the switch (sometimes sent by SS7 switches at startup). Previously
the error 30114 (SS7 Error while in FAREND_BLOCK/UNBLOCK state)
could occur if the CGB or CGU was directed at only part of a trunk
i.e. not starting at the first circuit in the trunk and not for
the whole of the trunk (24 circuits T1, 30 circuits E1)
(APAR IY89932)
- Update 4.2.0.224
APAR IY85984 IY81336 IY83502 IY83751 IY85331 IY85721 IY86603 IY85975 IY85972
PTF U808886
1.This fix corrects the handling of a Circuit Group
Reset (GRS) ISUP message received from a SS7 network for
a SS7 T1 or E1 trunk.
Previously a GRS did not always unblock all circuits
(CIC's) in the trunk and in turn this could prevent an
outbound call being made on each circuit.
(APAR IY85984)
2.This fix overcomes an input field restriction with
the SS7 configuration utility SS7itty. When adding a
Route Set, the input field 'SS7 Bearer Trunks associated
with this Route Set' was previously restricted to
16 characters. This has been extended to 256 characters
and therefore allows for more complex Route Set
definitions.
(APAR IY86603)
3.This fix corrects the failure to handle an SS7 inbound
call which includes the optional ISUP parameters 0xfc
(PTY_INFO_PARM) and 0xfe (NT proprietary
supplementary end-to-end information request).
Previously such calls were not presented to an application
and therefore could not be answered.
(APAR IY85975)
4.This fix provides automatic handling of SS7 voice trunk
circuits (CICs) when a loss of signal occurs
e.g. a voice trunk is disconnected when the trunk is in
service.
Previously, although a loss of signal alarm was
reported (SL_ALARM_SL on E1, SL_ALARM_LOS on T1)
the trunk CICs would remain locally unblocked.
An inbound call from the switch would then get 'dead air'
(no voice circuit).
The fix also prevents CICs being enabled if a loss of signal
condition is present at startup.
Additionally, this fix provides automatic handling of SS7
voice trunk circuits when a loss of signal condition is
recovered i.e. when SL_ALARM_CLEAR occurs.
If the voice trunk is enabled and loss of signal has
previously been detected the CICs will automatically
become locally unblocked (available for calls) when
the alarm clears e.g. when voice trunk is reconnected.
(APAR IY85972)
5.This fix allows existing SS7 E1/T1 voice calls to continue
(both parties can still hear voice) after all SS7 signalling
links have been disconnected from the SS7 cluster.
Previously existing voice calls would be dropped when the
last SS7 signalling link was disconnected.
As before, new calls are still prevented until a SS7
signalling link is re-established.
(APAR IY85721)
6.This fix corrects the handling of inbound T1 SS7 calls
when 'Continuity check performed on a previous circuit'
is included in the call setup message (IAM).
Previously the call was immediately released on receipt
of 'Continuity check successful' instead of proceeding
in the normal way to an answered state.
(APAR IY85331)
7.This fix corrects the failure to reject a SS7 inbound
call when received without a USI parameter (T1 only) or
without a CDPN parameter (E1 and T1) present in the IAM.
Previously such calls were accepted and could result in a core
dump of the SS7_D7 custom server.
(APAR IY83751)
8.This fix overcomes a problem where shutdown of a SS7
server can cause the other SS7 ISUP server to shutdown
and in turn prevent handling of subsequent SS7 calls by
the SS7 cluster. The problem has been seen to occur only
when the first 7 characters of the WVR client host name
are not unique in a cluster with at least 2 WVR clients.
(APAR IY83502)
9.This fix overcomes the error 30032 (Internal component
failure) when a SS7 outbound call is made with the
optional Tag GENERICADDR set in system variable
SV541. Prior to the fix an outbound call with this
parameter set would not proceed.
(APAR IY81336)
- Update 4.2.0.193
APAR IY77307 IY80720
PTF U806886
1.This fix overcomes the problem of changing on a per call
basis the ISUP/USI (User Service Information) parameter
bytes in an outbound SS7 call. Previously an alteration
using the PUT_TAG/PUT_ATTRIBUTE (e.g. USI.OCTET_2) method
would result in an error 30033 (Attribute USI.OCTET_2 has
invalid number list format).
Also, using the alternative ISUPPARM Tag method was
ineffective i.e. default values (as defined in ISUPParms.cfg)
were still used instead of the cloned set values.
(APAR IY77307)
2.This fix allows, if enabled in Service.cfg, the optional
ISUP/IAM parameter Charge Number (0xEB) for an inbound SS7
T1 call to be presented in system variable SV542
(as Tag value CHARGEN). This then allows an application to
know or control which party is charged for the call.
An additional fix is presenting in system variable SV542
the correct value in Tag REDIRN (Redirecting Number) for
an inbound SS7 call (both T1 and E1).
Previously Redirection Number (0x0C) was presented instead
of Redirecting Number (0x0B). This prevented an inbound
application from correctly determining from where a redirected
call had come from.
(APAR IY80720)
- Update 4.2.0.182
APAR IY77646 IY76175 IY78300 IY78816 IY79085 IY79191 IY74954
PTF U806299
1.This fix addresses a server restart problem after stopping
a working SS7 server in a dual server SS7 cluster.
Stopping then restarting D7 could result in the error;
upmd MAJOR spm_bind() failure: System call timed out [119]
in D7 Mlog error log (/usr/ss8/d7/access/RUN/mlog directory).
Previously this error would prevent new calls from being
handled.
The fix (i.e. replacing D7 1.3.1.11 with version 1.3.1.15)
also corrects the failure to configure D7 with an SS8 adapter
on a Power5 series machine e.g. a 520.
(APAR IY79191 IY74954)
2.This fix overcomes a problem found with SS7_MAINT when
stopping the SS7_D7 custom server whilst StayAlive=Disabled
set in the Service.cfg configuration file.
Previously the shell script would terminate unexpectedly
and leave the SS7_D7 custom server still running.
Using SS7_MAINT to restart the SS7_D7 custom server would
result in the error CA_ALREADY_STARTED.
The Level 3 PD utility ss7Problem has been enhanced to
collect SS7itty Route_Set data (if present). Also, if SS7itty
has not been run on the machine (and directory
/usr/lpp/dirTalk/db/current_dir/ca/SS7_D7_cfg/data is empty)
no error is reported.
(APAR IY79085)
3.This fix corrects a problem found with SS7 configuration when
using SS7itty to configure a SS7 Server. For a configuration
with separate STP's (Signal Transfer Point) and SSP's
(Service Switching Point) the generated mml-ss7 configuration
file (used by the D7 software stack) previously lacked an
ADD-ROUTE statement needed for each named RouteSet. Although
this does not result in an error when loading the SS7 mml
in SS7_MAINT, the voice switching paths are not defined and
therefore cannot be used in calls.
(APAR IY78816)
4.This fix corrects the problem of seeing errors 30008
(Unhandled State Table, state FAREND_INIT stimulus SS7_BLO)
on a WVR client when another WVR client with enabled trunks
in the SS7 cluster is shutdown.
(APAR IY78304)
5.This fix corrects the value in System Variable SV23 (Call Type)
when a redirected (forwarded) inbound call includes the
Tag REDINFO (Redirection Information). Previously SV23 was
inconsistent with the value in REDINFO.REASON
e.g. REDINFO.REASON = User busy,
SV23 = Direct dialed to WebSphere Voice Response.
(APAR IY77648)
6.This fix corrects an auto trunk startup problem seen on
some installations. Also, the possible error 30029
(D7 SS7 Timer problem - Timer Expired for First CGBA timer)
is avoided.
(APAR IY76175)
- Update 4.2.0.161
APAR IY77166 IY77384 IY76762 IY77600 IY77475
PTF U805745
1.This fix corrects the behaviour of the SS7 MakeCall action
when the network immediately returns 'User Busy'
or 'Network Busy'.
Previously 'No Answer' was returned for both these cases.
(APAR IY77166)
2.This fix prevents the SS7 error 30008 (Unhandled State Table:
State ANSW and Stimulus SL_CALL_ANSWER_REQ) when an application
performs a redundant AnswerCall while an outbound call is
already connected.
(APAR IY77600)
3.This fix overcomes the failure to present to an application
the REDINFO tag in SV542 for a SS7 inbound call which is received
from the network containing the optional ISUP IAM parameter
REDI (Redirection Information).
(APAR IY77475)
4.This fix prevents an error (30017 - Configuration Translation
failure) when enabling an ISDN trunk in a mixed ISDN and SS7 trunk
configuration when the SS7_D7 custom server is active.
(APAR IY77384)
5.Modifications to the SS7 AutoStart script to ensure that
the correct environment is set for the WVR user
before starting SS7.
(APAR IY76762)
NOTE file SS7_D7.imp must be imported after installing
this fix for the fix to be activated.
After installing the fix, copy the default version of
Service Service.cfg
at the time of initial installation.
NOTE 2
Perform the following
as user dtuser or equivalent user
cp -p /usr/lpp/dirTalk/sw/ss7/ss7itty/*.dat
$CUR_DIR/ca/SS7_D7_cfg/AnyMachine/.
Restart the SS7_D7 and the D7WVRErrorReport custom servers.
- Update 4.2.0.151
APAR IY76152
PTF U805238
1.This fix overcomes the restriction of not being able
to include bytes 3, 4, and 5 in the USI
(User Service Information) parameter of an outbound
SS7 IAM message (outbound call).
Despite being enabled in file ISUPParms.cfg only
bytes 1 and 2 of the USI parameter could be sent
previously. On certain SS7 switches, the lack of USI
byte 3 can result in an outbound call being rejected.
NOTE file SS7_D7.imp must be imported after installing
this fix for the fix to be activated.
After installing the fix, copy the default version of
ISUPParms ISUPParms.cfg
at the time of initial installation.
Restart the SS7_D7 custom server.
(APAR IY76152)
- Update 4.2.0.118
APAR IY72689
PTF U803951
1.This fix addresses the failure to match inbound SS7 calls
with the desired application. Some switches
e.g. Ericcson MD110 include an 'F' (ST) termination digit in
the Called Number parameter for a call setup (IAM) message.
The fix strips the 'F' digit from the Called Number instead of
converting it to an '?' character which then prevents an
Application Profile match.
This fix also added the configuration option of appending
an 'F' (ST) termination digit to the Called Number parameter of
an outbound SS7 call. Some switches,
e.g. Ericcson MD110 will reject a call to a number that is not
terminated with ST in the Called Number parameter.
Documentation for this new option is included in the
ISUPParms.cfg user configuration file.
(APAR IY72687 IY70997)
2.If after installing this PTF you are informed when start WVR that
d7.xxx filesets are down level, you should shut down WVR and perform
the following
Logon as root
cd /usr/lpp/dirTalk/sw/ss7/update
This directory will contain all d7.xxx updates required.
smitty update_all
(Defect 35704)
3.Tests have been added to ensure that when using SS7 the correct
version of filesets are loaded and the post installation activation
has been performed.
WVR will start regardless but will issue warnings in DTstatus.out
(Defect 35593)
4.This fix overcomes the failure of the SS7 utility SS7_MAINT to
stop the SS7_D7 custom server. Previously the SS7_D7 custom server
would be stopped but would then be automatically restarted
after a period of 30 seconds.
(Defect 35521)
5.The ISUPParm.cfg has the following corrections:-
1: The SS7 REL message will generate the correct IE format
for Cause Ind. (see "REL:Cause").
2: The CVR IE parameters are now configurable.
N.B. This file will be installed into
/usr/lpp/dirTalk/sw/ss7/defcfg/SS7_D7_cfg/AnyMachine directory,
if this is not part of a fresh installation then copy this file to
the following directory
/usr/lpp/dirTalk/db/current_dir/ca/SS7_D7_cfg/AnyMachine/ISUPPparms
be aware that customized modification may have been performed to the
original destination file and those changes may need to be
transferred to its replacement.
(Defect 35488)
6.This fix addresses several E1 ITU Q.784 Compatibility test failures
when attached to an E1 SS7 switch and using the SS7_D7 custom server.
In relation to the failing ITU tests, the following problems are
now corrected in the following ITU test numbers:-
1.3.1.1; Range/Status parameter in CGUA
(Circuit Group Unblock Acknowledge) message is now correctly
set (Status bits were previously set to zero)
1.4.1; Avoids white notify alarm 30005 (Unhandled SS7 message)
1.4.5; Avoids getting stuck in 'call state = TRLC' which then i
cannot be removed by a RSC (Reset) message
1.5.2; Avoids yellow alarm 30203 (D7 major alert/ISUP
Unexpected primitive 0x905)
2.3.1; Avoids white notify alarm 30005 (Unhandled SS7
message/unknown message for ISUP_ALERT)
2.3.3; Avoids white notify alarm 30005 (Unhandled SS7
message/unknown message for ISUP_SETUP)
5.2.4; Support is now included for SUS (Suspend) message
when received from the network during a call)
5.2.9; Avoids a burst of 11 yellow alarms of 30008
(Unhandled State Table)
6.3.1; Avoids yellow alarm 30008 (Unhandled State Table/No
state entry found to match state SACM and stimulus SS7_ERROR)
In addition to the above corrections, several spelling
errors have been corrected in SS7 specific System Monitor
error messages.
(Defect 35473)
7.This fix overcomes the failure of the SS7 utility SS7_MAINT
to delete a SS7 trace file
(/usr/lpp/dirTalk/db/current_dir/oamlog/SS7/SS7-Trace) when using
either the H/A or H/T housekeeping options.
(Defect 35451)
8.This fix addresses a server fail over problem found in dual
server SS7 cluster configurations. Stopping then restarting D7 on
one of the SS7 servers could result in failure to recover and D7
alarm errors on the remaining machines. The fix
(i.e. replacing D7 1.3.1.7 with 1.3.1.11) also overcomes a D7
startup problem found on some combinations of AIX and security
APAR fixes e.g. AIX 5.2 + Maintenance Level 3 with APAR IY64355.
(Defect 35434)
9.This fix corrects an error seen when configuring a SS7
server at the point of loading the D7 SS7 mml configuration
(SS7_MAINT options C and S). Previously the error;
RTSET MO instance does not exist
was being reported in the configuration log file;
/usr/lpp/dirTalk/db/current_dir/ca/SS7_D7_cfg/log/mml.log
corresponding with the ADD-ROUTE statement in the
configuration script file;
/usr/lpp/dirTalk/db/current_dir/ca/SS7_D7_cfg/log/mml.mtp.script.
(Defect 35706)
- Update 4.2.0.66
APAR IY67279
PTF U802154
1.When starting the SS7_D7 customer server an information error
(30015 Configuration Error Detected ), which may not have
occurred previously should no longer appear in the errorlog.
(APAR IY67279)
2.This fix addresses several problems when attached to an SS7
switch and using the SS7_D7 custom server;
If the attached SS7 switch issues a 'Circuit Group Block' or
'Circuit Group Unblock' request then only the specified circuits
rather than all of the circuits within the trunk will be blocked
or unblocked.
(Defect 35314)
- Update 4.2.0.59
APAR IY65548
PTF U801249
1.SS7itty Configurator now support the configuration of
RouteSet for STPs.
(APAR IY65548)
2.The SS7 Message CVR is now supported.
(Defect 35469)
3.Voice bearer traffic loopback can not be initiated from
the ss7view program.
(Defect 35468)
4.SS7itty configurator now supports the HAX44PCGEN card.
(Defect 35452)
5.The SS7itty configurator now support different type of SS7
adaptor on the same machine.
(Defect 35435)
6.The SS7 support WVR Alarm message have been improved.
(Defect 35398)
- Update 4.2.0.46
APAR IY63951
PTF U800680
This PTF provides support for E1 signalling with SS7.
Other defects have also been corrected.
1.Corrects the fault in E1 mode when outbound calls failed with
no lines available when a number were available.
(APAR IY63951)
2.The fault where by SS7_MAINT in option F/2 incorrectly declared
the ISUP process was down has been corretly
(Defect 35243)
3.SS7_MAINT on handle cases where the root password has been
given incorrectly has been corrected.
(Defect 35286)
4.The report of available remove D7 database files on D7 start up
has been corrected.
(Defect 35287)
5.The incorrect detection of SSI system in SSI has been corrected.
Custom server imports will no longer occur on WVR connected to
SSI systems.
(Defect 35303)
6.The help text for SS7itty under the RouteSet menu has been
corrected.
(Defect 35310)
7.A problem when an SS7 Server failed in a 2 server configuration
has been corrected. Previously ISUP trunk would be allocated
to the wrong WVRs.
(Defect 35313)
8.The incorrect extension bit in the Release message has been
corrected. Previously the bit indicate there was more when
there was not.
(Defect 35319)
9.SS7_MAINT now handles multiple Distriubted7 and allows for
selection.
(Defect 35326)
10.When COT (Continuity Test) are now performed the relevant
voice bearer will have loop back asserted.
Previous this would report failure.
(Defect 35332)
11.On rare condition the SS7itty would fail with a memory
violation has been corrected.
(Defect 35350)
12.The situation on outbound calls where the network is not
responding, but D7 is reporting timeout. The problem where
circuit become invisibility allocate has been corrected.
(Defect 35387)
13.SS7_MAINT under housekeep menu option can now delete
SS7-Trace files
(Defect 35391)
14.SS7_MAINT can now process i-Fix files.
(Defect 35393)
15.General typos and correction sto the SS7itty F1 help text
have been performed.
(Defect 35397)
16.General typos and corrections to the WVR alarm message for
SS7 have been performed.
(Defect 35398)
17.The incorrect reference to Line in the SS7itty help text was
corrected.
(Defect 35411)
18.SS7itty now detects duplication in usage of the Trunks when
generating.
(Defect 35413)
19.SS7itty now detects duplication of Point Code with RouteSets
and LinkSets
(Defect 35414)
20.D7 no longer rejects the old style PQ cards with a 75ohm interface.
(Defect 35418)
21.SS7_MAINT will now compare Enablement release with active D7
release and report incompatibilities.
(Defect 35430)
22.SS7_MAINT can now locate the oam log directory in a SS7 Server
configuration.
(Defect 35431)
23.The internal readme has been updated
(Defect 35433)
SpeechClient Fixes
Internal Defect Fixes
- Ensure logging of Speech related connection problems at startup.
- Fixed internal system monitor functionality.
- Move some CPU intensive tracing from trace level 8 to trace level 9.
- Update 4.2.0.557
APAR IZ96532
PTF U842073
1.Fixed an issue where VXML2 hotword recognition would not work if a
Nuance speech server was used. The RECOGNIZE request would fail
with a 403 completion code indicating an unsupported parameter.
(APAR IZ96532)
2.Fixed an issue with MRCP not honouring the negotiated RTSP port on
SETUP messages, which would instead stream to the default port.
(Defect 37063)
3.Fixed an issue where the following error would appear under heavy
call load:
Unhandled exception occurred during MRCP message receive.
Details: java.lang.NullPointerException at
com.ibm.telephony.directtalk.mrcp.ReceiverThread.run(Receiveri
Thread.java:197)
(Defect 7500)
- Update 4.2.0.551
APAR IZ86864
PTF U839805
1.Increase amount of voice data buffered by the device driver to allow
for CPU glitches to be handled better by MRCP. With a smaller buffer
a CPU glitch can cause the voice recognition to stop.
(APAR IZ86864)
- Update 4.2.0.550
APAR IZ85224
PTF U839635
1.There is a minor race condition in the MRCP custom server that can
occur during stopping TTS. If the MRCP custom server is stopping due
to a DTMF then it carries on processing incoming voice from the TTS
engine but doesn't stream it to the line. If during this stop a Speak
complete comes back from the TTS engine the custom server is told to
immediately stop. The custom server ignores this due to stopping
because of the DTMF. Unfortunately this causes the TTS to never
actually stop. A problem occurs when the next TTS is started as this
fails to be setup correctly complaining that the custom server is
already streaming.
(APAR IZ85224)
- Update 4.2.0.529
APAR IZ80913 IZ84682
PTF U837320
1.Fixed a problem with the MRCP custom server hanging at startup,
potentially with 100% CPU.
(APAR IZ84682)
2.Fixed an issue that can cause a MRCP PluginException
(completion-cause 006) if a caller hung up just after a recognition
attempt was started.
(APAR IZ80913)
- Update 4.2.0.526
APAR IZ80094
PTF U836784
1.The code fixes the confidence-threshold, sensitivity
threshold and speedVSaccuracy setting in WVR. With
the fix, if you define a decimal fraction for these 3
properties in a voicexml application, WVR will
convert it to a integer value more accurately.
(APAR IZ80094)
- Update 4.2.0.522
APAR IZ78453 IZ78489
PTF U836467
1.Fixed a problem whereby a no-input event will be reported the a VXML
application as a no-match. The problem only occurs when using Nuance
as the speech server.
(APAR IZ78453)
2.Fixed the handling of RTSP 454 "Session not found" responses when
attempting recognition or TTS. Prior to the fix, the application
would terminate if such a response was received. After installing the
fix, the code will correctly try to re-establish a session to the
speech server.
(APAR IZ78489)
- Update 4.2.0.511
APAR IZ63384 IZ64798 IZ6579
PTF U832491
1.Corrected a potential problem when setting up a connection to an MRCP
server if the response message uses uppercase characters for the
audio format section.
(APAR IZ63384)
2.Corrected a problem that can cause the system to report an "MRCP
Plugin not initialised" message in the wvrtrace files.
(APAR IZ65796)
3.Improved the MRCP plugin import code so that the custom server will
start regardless of the ulimit value set on the machine.
(APAR IZ64798)
- Update 4.2.0.503
APAR IZ63087 IZ63872
PTF U829864
1.Fix a potential timing window in the MRCP distributor thread.
When the problem occurs the MRCP custom server will stop
delivering packets to WVS. This causes the 006 error return
from WVS (no audio streamed). The sleep time in the MRCP
can be on the order of hours, so all reco will fail for
every channel during this time. The timing window can only
occur if act of reading the time between two neighbouring
lines of code is greater than 100ms. Normally this would
happen if the CPU were heavy loaded and the custom server
swapped out for some reason.
(APAR IZ63087)
2.Fix a timing window when stopping reco/tts whilst the system
is very busy. If the state table times out whilst waiting for
the stop and closes the MRCP custom server link then when the
custom server finally responds it results in an exception and
the custom server stopping.
(APAR IZ63872)
- Update 4.2.0.472
APAR IZ59740
PTF U828060
1.Fixed a potential BufferUnderflowException in the MRCP plugin
that could occur when using the recordutterance VXML property.
(APAR IZ59740)
2.Fix a potential null pointer exception which can occur as a
MRCP CSLink message is responded to whilst the MRCP plugin
is checking the validity of the message.
(Defect 7342)
- Update 4.2.0.468
APAR IZ55504 IZ55522
PTF U827379
1.If the TTS being played contains no audio (break tag) and
the TTS engine is Nuance the engine won't stream any audio.
This causes the DDOEP to become upset due to no data being
present for it to write out. The DDOEP needs to be force
cleaned up, however this can not happen if there are no
other channels being used.
So the code now always force cleans up any DDOEP's irrelevant
of whether there are other calls present.
(APAR IZ55504)
2.If the call hangs up just as a start of speech occurs and the
reco thread is attempting to stop the TTS thread an error is
generated. Treat this error as a hangup rather than an error.
(APAR IZ55522)
3.Propagate the hang up occurring during start of speech into
the following reco attempt. Otherwise the VXML browser may
not correctly detect hang up.
(Defect 7324)
4.Fix a race condition between MRCP responses clashing with the
MRCP request being written. This can occur when the system
"pauses" and java halts for a few seconds. The write/timeout
code clashes with the response handling code.
(Defect 7319)
- Update 4.2.0.467
APAR IZ53932
PTF U827378
1.Perform a VXML 2.1 speech recognition using WVS 6.1
(reproducable on Linux version) with recordutternance
property set to true. Fetch timeouts may incorrectly
occur if the packet buffer is 100% utilised
(APAR IZ53932)
- Update 4.2.0.462
APAR IZ52549
PTF U825881
1.Corrected a code defect that could result in a
"PlugInException (102) Set-params failed" error message
being reported.
(APAR IZ52549)
2.Corrected the code logic to prevent the spurious logging
of the following messages:
WVS is configured for pcma, but the telephony network is pcmu.
To improve performace configure WVS for pcmu.
and
WVS is configured for pcmu, but the telephony network is pcma.
To improve performace configure WVS for pcma.
(Defect 7262)
- Update 4.2.0.453
APAR IZ44983
PTF U824384
1.Improve force clean up code for TTS prompts which start
and stop before any audio is streamed.
The original force clean code had a race condition which
can cause the MRCP custom server to crash.
(APAR IZ44983)
- Update 4.2.0.438
APAR IZ36550
PTF U822395
1.Modified the behaviour of TTS enabled VXML applications
so that they honour the bargein section of the VXML2
specification correctly.
(APAR IZ36550)
- Update 4.2.0.435
APAR IZ35027
PTF U821508
1.This PTF fixes a problem with the MRCP custom server when
stopping streaming on a TTS prompt which is for an
unconfigured language.
(APAR IZ35027)
- Update 4.2.0.416
APAR IZ24155
PTF U819377
1.Modified MRCP plugin code to prevent the following error:
Event: error.internal, Error: PlugInException:
LINK_NOT_CONNECTED:MRCPCSLink.checkStatus Link
not connected
(APAR IZ24155)
2.This PTF corrects a problem with CSEQs not matching due to
multiple requests. This happens when a teardown is sent due to
a timeout on a previous request. The teardown causes the
previous request to get a response.
(APAR IZ27918)
3.Stop the MRCP custom server from allowing multiple endpoint
connections.
(APAR IZ27919)
- Update 4.2.0.401
APAR IZ23710
PTF U818800
1.Tighten checks on connected speech technologies.
(APAR IZ23710)
- Update 4.2.0.371
APAR IZ22317
PTF U818268
1.Corrected a NullPointerException that can occur in
the MRCP plugin code if a RECOGNITION-COMPLETE message
is received at the same time as a caller HUP.
(APAR IZ22317)
- Update 4.2.0.368
APAR IZ20931 IZ21095
PTF U817986
1.Modified the MRCP message handling code to improve the
stability under load. Specifically this change prevents
the MRCP custom server from leaking file descriptors.
(IZ20931)
2.Prevented the MRCP plugin from spuriously reporting
DTJ7583 when using TTS from a Nuance server.
(APAR IZ21095)
- Update 4.2.0.330
APAR IZ15445
PTF U816233
1.This PTF Modifies the socket handling code in TSLOT and the
MRCP CS to prevent errno 72 (ECONNABORTED) from
terminating the processes.
This PTF contains the MRCP CS changes. You must also
install fix level dirTalk.DT.rte at level 4.2.0.351
which contains the TSLOT changes.
(IZ15445)
- Update 4.2.0.347
APAR IZ14725
PTF U816100
1.Corrected an ArrayIndexOutOfBounds exception that could
occur when using an application with inline grammars
with MRCP on Nuance.
(APAR IZ14725)
2.Corrected the ulaw/alaw handling for both recognition and
tts when the SpeechClient is used with Nuance.
(Defect 7026)
- Update 4.2.0.297
APAR IZ05665
PTF U813382
1.Improved the handling of grammars that use empty
strings in the result tags.
(APAR IZ05665)
2.Modified the error message text for a failed plugin
installation to improve the end user experience.
(Defect 7015)
3.Fixes internal problem with trace values.
(Defect 36172)
- Update 4.2.0.293
APAR IZ01170 IZ01175 IZ03238
PTF U812854
1.Fixes rare NullPointerException during far end
disconnect processing.
(APAR IZ01170)
2.Correct a serialisation problem that can result in
NullPointerException or ConcurrentModificationException
when MRCP messaging is stressed.
(APAR IZ01175)
3.Modified the MRCP plugin code to check that a
RECOGNNITION-COMPLETE or START-OF-SPEECH event is for the
correct RECOGNIZE request. This prevents us from sending a
spurious second RECOGNIZE request on the same session.
(APAR IZ03238)
- Update 4.2.0.290
APAR IZ00978
PTF U812203
1.This PTF corrects grammar scope order so that if an
utterance is matched in both field level and a higher
grammar (form or link) the field match is reported.
(APAR IZ00978)
- Update 4.2.0.256
APAR IY94941
PTF U811201
1.Modified the MRCP plugin to remove some spurious
debug output.
(APAR IY94941)
- Update 4.2.0.246
APAR IY92939
PTF U810626
1.Configuration changes for SpeechClient.
(APAR IY92939)
- Update 4.2.0.235
APAR IY91009 IY87064
PTF U810391
1.Modified the MRCP plugin to alter the confidence score into
the VXML2 range if this parameter is set to true in dtj.ini
A new parameter has been added to /var/dirTalk/DTBE/dtj.ini:
wvr.use.vxml2.confidencerange.
This controls whether, in an MRCP configured system, the
confidence scores are returned to the VXML2 application
in the range
0 - 100 or 0 - 1.
To use 0 - 100 use the value of false for the new parameter.
To use 0 - 1 use the value of true for the parameter.
Note that the default value is false.
(APAR IY91009)
2.When utterance duration exceeds the maxspeechtimeout property
value a "noinput" event is thrown.
This should be a "maxspeechtimeout" event.
After the installation of this APAR when the utterance
duration exceeds the value set a "maxspeechtimeout" event
will be thrown.
(APAR IY87064)
- Update 4.2.0.227
APAR IY87541
PTF U809400
1.Updated an error message reported when the SpeechClient
import fails
(APAR IY87541)
2.SpeechClient Custom Server MRCP has been added to
the import checker.
(Defect 35952)
- Update 4.2.0.214
APAR IY85213
PTF U807982
1.Changes made to ensure that the correct error event is
thrown when the ASR and TTS servers are unavailable.
NOTE after installation of this PTF you must perform the
following steps.
1) Start Websphere Voice Response for AIX but ensure that
custom server DTJ_VV_Logger is not running.
2) Stop VRBE if running ( dtjstop + dtjshost -exit )
3) Stop DTJ_VV_Logger if running
4) cd /var/dirTalk/DTBE/plugins
5) dtjplgin dtjmrcp.zip
6) Start VRBE ( dtjshost + dtjstart )
(APAR IY85213)
- Update 4.2.0.205
APAR IY82417
PTF U806979
1.Modifications have been made to the configurable options
for SpeechClient voice recognition.
(APAR IY82417)
- Update 4.2.0.165
APAR IY78255
PTF U805947
1.This fix corrects a problem where the execution of
vae.setuser caused the MRCP Custom Server to fail to
start and give no failure indication. Now if the MRCP
Custom Server fails to start because of insufficient
authority then a RED user alarm is raised in the
system monitor window.
vae.setuser is corrected in fix level 4.2.0.166
PTF U805948
(APAR IY78255)
- Update 4.2.0.162
APAR IY77053
PTF U805747
1.This PTF fixes a problem in the Speech Connector such
that outbound MRCP messages were suffering from packet
fragmentation, adding extra delays before receiving the
response from the MRCP server.
(APAR IY77053)
- Update 4.2.0.138
APAR IY74389
PTF U804306
1.This PTF contains extra SpeechClient fixes required for
the static / crash problem.
(APAR IY74389)
- Update 4.2.0.136
APAR IY74146
PTF U804302
1.This PTF uses the correct comfort noise data when the
system has no packets of real data.
(APAR IY74146)
- Update 4.2.0.125
APAR IY73561
PTF U803995
1.Support multiple language reco using MRCP
(APAR IY73561)
2.This fix enables hotword bargein with WVS version 5.1.3
or later
(Defect 6906)
3.This fixes a problem with multilanguage vxml scripts
that gives rise to high network loading, caused be
redundant streaming of voice data
(Defect 6910)
- Update 4.2.0.120
APAR IY72984
PTF U803955
1.Remove extra logging to the DTstatus.out from the
MRCP_Log custom server
(APAR IY72984)
2.Stop occasional white alarms being generated by the
MRCP custom server
(Defect 35701)
VRBE_XML Fixes
Internal Defect Fix
- Fixed an issue where an incorrect error message would be shown if rejecting an inbound call due to the default application being unavailable, instead the message would read "Unknown called number"
Internal Defect Fix
- Fixed an issue where dtjlogmon would fail to realise that the condition that it was searching for had occurred due to either hanging on the dtjflog read or failing to read a large enough part of the log file to actually make a full trace line.
Internal Defect Fixes
- Improve error reporting when attempting to parse an invalid VXMLdocument. The trace and log will contain URI (including attributes), line number, column number and XML parsing error. The logging showing the semi parsed VXML document as been removed as this was confusing and unhelpful.
- Fixed a rare NullPointerException. The NullPointerException was reported as:
(DTJ1008046) FAILURE: VXML2TurnCoordImpl.doField Caught a RuntimeException: java.lang.NullPointerException
at com.ibm.wvr.vxml2.VXML2TurnCoordImpl.doField
(VXML2TurnCoordImpl.java:1359)
at com.ibm.wvr.vxml2.VXML2TurnCoordImpl.doTurn
(VXML2TurnCoordImpl.java:269)
at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
- Update 4.2.0.572
APAR IV12552 IV12570
PTF U848096
1.The error message: "com.ibm.ccx.browser.CCXParser parse() Got
SAXException (Error in <ccxml> element: Document has more than
one <ccxml>" appears despite the document being fetched not
having multiple ccxml elements.
CCXML then refuses to load any more documents with the same error.
(APAR IV12552)
2.Fixed to ensure connection.transfer.disconnect is not thrown to
quickly after a CTI transfer.
(APAR IV12570)
3.Fixed an issue where WVR would fail to fetch certain Nuance builtin
URIs reporting a FileNotFoundException.
(Defect 7581)
4.Fixed the universals help grammar to work with various speech site
documents.
(Defect 7588)
6.Fixed internal system monitor functionality.
(Defect 7591)
7.Move some CPU intensive tracing from trace level 8 to trace level 9.
(Defect 7592)
8.Increased the default trace buffer size and reduce time waiting if
there are no more trace buffers. This does not change the amount of
trace written to disk or the size of the trace files on the disk. The
change is to handle situations when a lot of trace is being generated
and the internal trace system can not keep up. When this happens the
tracing routines will slow down causing VRBE to also slow down.
(Defect 7595)
- Update 4.2.0.570
APAR IV06611
PTF U848089
1.Changes to ensure that the correct connection.disconnect.hangup
event is returned if a caller HUPs before a transfer is started.
(APAR IV06611)
2.Fixed caching mechanism to allow VXML Browsers to continue
operation after failing to cache a successfully fetched document.
Warnings will still be logged to the log files and it is recommended
that these errors be fixed or they may affect your performance but
VRBE should recover without user intervention.
(Defect 7574)
3.Improved application error logging and inclusion of document name
where missing to VoiceXML and Javascript parsing and runtime errors.
(Defect 7551)
- Update 4.2.0.566
APAR IV01546 IV02976
PTF U844287
1.Corrected a potential exception within the dtjstop command that can
VRBE cause nodes to remain running after dtjstop is executed.
(APAR IV01546)
2.Code changes made to ensure that the correct "termtimeout" is used
for a nomatch when using a DTMF grammar.
(APAR IV02976)
3.Fix to enable the passing of the VoiceXML <mark> element to a speech
server in a VoiceXML version 2.0 application.
(Defect 7546)
4.Fix NullPointer Exception when using <initial> element in VXML.
(Defect 7517)
5.ECMAScript version 1.7 now available to configure via the dtjes
script.
(Defect 7542)
6.Fixed an issue resolving specific NumToApps for CCXML services
which overlaps wildcarded non-CCXML application NumToApp definitions.
(Defect 7540)
- Update 4.2.0.562
APAR IZ99396 IZ99397 IV00011
PTF U843719
1.Provide parameters to allow a system administrator to limit the
number of threads WVR will use when fetching resources on system
startup. The following parameters have been added:
wvr.vxml2.fetchthreads.limit
wvr.vxml2.fetchthreads.limit.timer
Where wvr.vxml2.fetchthreads.limit is the maximum number of threads
to use (default is 500) and wvr.vxml2.fetchthreads.limit.timer is the
time in minutes to apply the limit (default 10 minutes). After this
time, WVR will revert to the default maximum thread limit. Setting
wvr.vxml2.fetchthreads.limit.timer to 0 applies the limit
permanently.
(APAR IZ99396)
2.Fixed an issue where VRBE would repeatedly fail to load corrupted
items in the cache to automatically reload the corrupt items from
their source. This would typically be caused by running out of space
on the filesystem.
(APAR IZ99397)
3.Fixed an issue with caching long URIs that did not contain a '?',
and would instead throw an IOException.
(APAR IV00011)
4.Fixed an issue where CCXML would not use the default port on an
outbound <send> HTTP post.
(Defect 7520)
5.Fixed default CCXML HTTP Server port number which incorrectly
specified port 80. The correct default CCXML HTTP Server port number
is 1971.
(Defect 7530)
- Update 4.2.0.556
APAR IZ94402 IZ95648
PTF U842072
1.Fixed a NotSerializableException possible with caching Audio files.
(APAR IZ95648)
2.Fixed an issue where VoiceXML2 applications invoked from the Java API
could fail with a NullPointerException.
(APAR IZ94402)
- Update 4.2.0.554
APAR IZ91373
PTF U840694
1.Fixed a potential "No Reco or TTS plugin found error" when using
CCXML applications.
(APAR IZ91373)
2.Fixed an issue where CCXML wildcard NumToApp mappings would take
precedence over specific VoiceXML or Java API applications. For
example, using the following NumToApp mappings:
NumToApp=111,VXMLApp
NumToApp=1*,CCXMLService
(Defect 7466)
3.Fixed an MRCP resource problem that can result when multiple VXML
documents are called from one CCXML document during a single call.
(Defect 7467)
4.Fixed a potential StringIndexOutOfBoundsException that could occur
is there was an error when using dtjcache to list items.
(Defect 7468)
5.Fixed the VoiceXML caching to allow long URIs to be stored
(Previously, AIX would prevent any URI longer than 254 characters
being stored)
(Defect 7470)
6.Fixed an issue which would cause a second instance of
connection.connected events to be generated when being moved from
one session to another using <move>
(Defect 7474)
7.Fixed an issue where return values from a VoiceXML <exit> would be
preserved for a second VoiceXML document used on the same call.
(Defect 7475)
8.Fixed CCXML-controlled VoiceXML transfers to be more descriptive in
errors. Also fixed dialog.transfer events to contain .uri as per
spec, the .URI parameter has been kept for backwards compatibility.
(Defect 7476)
9.Fixed an issue where a NullPointerExcception would be seen on making
an outbound call from CCXML's <createcall> tag.
(Defect 7477)
10.Fixed an issue where dynamic browsers were not released for five
minutes when VoiceXML made a transfer using CCXML.
(Defect 7480)
11.Fixed an issue with CCXML <move> that could lead the
session.connections entry of a connection to have an
inconsistent
input and dialogid entry.
(Defect 7481)
- Update 4.2.0.550 - Fix Pack 2
APAR IZ85999
PTF U839633
New features contained in this PTF
* Enhanced VoiceXML and CCXML application support for call
information - Provide protocol specific tagging information in
VXML and CCXML.
* VRBE Expire Resource Tool - allows a system administrator to
manually expire a resource in the VXML, CCXML or Audio caches.
* VRBE problem determination utility - enables a system
administrator to collect a dtbeProblem output (or run any
other command) automatically when an error or other message is
reported in VRBE.
For further information on the Fix Pack 2 features please
refer to the TechNote at the following URL:
New fixes contained in this PTF
1.Introduced a new dtj.ini paramter 'wvr.vxml2.grammar.external.fetch'
to provide an override in WVR for unnecessary grammar
fetching/caching when using SpeechServer.
If wvr.vxml2.grammar.external.fetch=false, then http/https/builtin
grammars will no longer be fetched by WVR (either at prefetch or
during document execution), but they still will be fetched by the
speech server. Only file: grammars will continue to be fetched by
WVR.
By default this is set to true (current behavior) ie all grammars
are fetched by WVR.
(Defect 7449)
2.Fixed an issue where cached VoiceXML files would remain in the cache
(but be unused ) when the cache control headers were changed to
nocache or to have an expires value of 0.
(Defect 7447)
3.Fixed a potential NullPointerException when executing a dialogprepare
element in CCXML.
Note that this would not occur with a straight dialogstart.
(Defect 7441)
4.Remove erroneous file from vxi.jar.
(Defect 7437)
5.Allow non W3C defined builtin grammars to be referenced within VXML2
documents as supported by Nuance. e.g. CreditCard.
(Defect 7424)
6.Fixed an incompatibility issue with Nuance recognition and WVR which
could cause an "ERROR: NLSMLException" to be reported in the VRBE
log files.
(Defect 7422)
- Update 4.2.0.530
APAR IZ84683
PTF U837322
1.Fixed an issue that can cause a MRCP PluginException
(completion-cause 006) if a caller hung up just
after a recognitionattempt was started.
(APAR IZ84683)
- Update 4.2.0.525
APAR IZ81207
PTF U836783
1.Corrected a potential NullPointerException when starting VRBE.
The error results in the named application failing to start. The
error text is as follows:
DTJ3027 The request to start application <application name> on
node <node name> at host LocalHost failed.
java.lang.NullPointerException
(APAR IZ81207)
2.When using Java 1.5, <object> element calls to java classes may
fail due to tighter rules on class name definitions. This fix
ensures that java classes are invoked correctly internally and
requires no change to the customers VXML application.
(Defect 7431)
- Update 4.2.0.523
APAR IZ78465
PTF U836468
1.Corrected a potential deadlock when fetching VXML resources that can
cause a memory leak and eventual OutOfMemory error.
(APAR IZ78465)
- Update 4.2.0.518
APAR IZ73213 IZ74903
PTF U835663 U835665
1.Fixed a VarScope error which can occur when a grammar result returned
from a Nuance server includes empty strings. The VarScope error will
look like the following:
(VXI00000) FAILURE: 654322449181048843-0:VarScope::eval:syntax
error:error executing:function evalSI ()
(APAR IZ73213)
2.Fixed an issue that can cause a delay in a speech enabled VXML
application if the caller does not provide any input to the prompt.
(APAR IZ74903)
3.Fixed a problem where the VXML Transfer tag fails to transfer call
when transfer destination is a SIP number.
NOTE: After installation of the PTF a new version of DTJConsult.st
state table installed on the system, which has not been imported.
To import this state table follow the instructions below:
a) Ensure WVR is running.
b) As WVR user (normally dtuser) run:
DTst -export -f $DTJ_DIR/DTJConsult.st.backup -o DTJConsult
c) cd $DTJ_DIR
d) dtjstimp DTJConsult.st
Note: a non zero result means failure
If you have customised DTJConsult state table, you will need to
restore those changes. The modified state table was backed up in
step b) to the file $DTJ_DIR/DTJConsult.st.backup. Changes to the
DTJConsult state table for this fix are:
* add a local variable sipnumber:
LOCAL STRING sipnumber;
* and then add the following code:
AssignData (sipnumber, "LEFT", numberToCall,4);
IF (sipnumber = "sip:")
THEN
AssignData(SV541, "PUT_TAG", "TO_HDR", numberToCall)
;
TransferCall("", "", 0, 0, 0)
edge EDGE_TC_SUCCESSFUL: success
edge EDGE_TC_INVALID_PHONE_NO: invalid_num
edge EDGE_TC_PHONE_BUSY: busy
edge EDGE_TC_NETWORK_BUSY: network_busy
edge EDGE_TC_NO_ANSWER: no_answer
edge EDGE_TC_OUTBOUND_LINE_PROBLEM: failed
edge EDGE_TC_UNEXPECTED_TONE: failed
edge EDGE_HUP: hup
;
ENDIF
* before the line that checks that ringTime = 0:
IF (ringTime = 0)
(Defect 7361)
4.Fixed an issue where CCXML was loading the incorrect VXML document
when many VXML documents are being loaded from CCXML at once.
This was due to accidental sharing of objects that hold the
parameters to be passed to VoiceXML from CCXML. The fix ensures
that the objects are not shared between threads.
(Defect 7400)
5.Fixed an issue with CCXML that would occasionally cause a
CCXThread.run() message in the logs reporting a NullPointerException,
and CCXML wouldn't start up. This is due to a timing error in
sending messages before the internal queues are set up completely.
(Defect 7409)
- Update 4.2.0.512
APAR IZ68012
PTF U832493
1.Fixed a StringIndexOutOfBoundsException raised by the VXML browser
which could cause the browser instance to terminate.
(APAR IZ68012)
- Update 4.2.0.504
APAR IZ64150
PTF U830462 U830463 U830606
1.This fix corrects an out of memory cache problem which
can occur when fetching large numbers of CCXML documents.
(APAR IZ64150)
- Update 4.2.0.471
APAR IZ59164 IZ59310
PTF U828058
1.Fixed a potential NullPointerException within the VXML2
browser which could occur after a VarScope error was caught.
(APAR IZ59164)
2.Maxspeechtimeout events did not generate an
application.lastresult$ variable. However, with the
introduction of recordutterance it is useful for them
to do so in order to access the recording subproperty.
(APAR IZ59310)
- Update 4.2.0.470
APAR IZ56749
PTF U828057
1.Fix potential abnormal termination of dtjflog.
(APAR IZ56749)
2.Help for dtjtrcmod now updated to include the -status
parameter.
(Defect 7322)
- Update 4.2.0.469
APAR IZ56285
PTF U828056
1.Corrected the code to prevent this error from occurring
during a recognition attempt:
(6001195) com.ibm.telephony.directtalk.mrcp.MRCPReco run()
ERROR: Unexpected bargeinType 0
(APAR IZ56285)
- Update 4.2.0.466
APAR IZ53945 IZ53309
PTF U827377
1.Corrected a potential problem when using the
invokeApplication method to move between VXML and
Java applications which could result in the VXML
application remaining active even after the caller has
hung up.
(APAR IZ53945)
2.Add reference locking to handling of the audio cache.
This is to stop the cache handling code leaking memory
when lots of different audio files are used.
(APAR IZ53309)
- Update 4.2.0.461
APAR IZ51751
PTF U825880
1.Fix a timing window with the audio import/playing code.
If two calls attempts to play audio which hasn't been
cached the first call will import the audio. The second
call then incorrectly attempts to play the audio before
its finished importing.
(APAR IZ51751)
- Update 4.2.0.460
APAR IZ50041 IZ50786
PTF U825878
1.WVR Audio cache directives are case sensitive and do
not allow for multiple directives as described within the
W3C specification. This fix allows for multiple directive
and recognised directives can occur anywhere within the
CacheControl directive string.
Improved checking of the Cache-Control directives for audio
file.
(APAR IZ50041)
2.This fixes an emergent scenario where delayed CCXML messages
could be lost if they had exactly the same end timeout value.
This is more likely to happen on systems with a less fine
grained time representation. Outside of CCXML, this change
should have no impact.
(APAR IZ50786)
- Update 4.2.0.458
APAR IZ49889
PTF U825876
1.This PTF corrects a potential Java memory leak within the
VRNode JVM when using VXML2 applications.
(APAR IZ49889)
- Update 4.2.0.455
APAR IZ46787 IZ48373
PTF U824564 U824565
1.Modified the code to enable sharing of cookies between VXML
and audio fetches. Before applying this fix, any cookies set
on a VXML fetch response would not be sent on an audio
fetch request.
(APAR IZ46787)
2.Improved error handling for VoiceXML bargein detection when
hardware grunt detection has overriden the VoiceXML
document via a statetable call and enablement via SV217=1.
Additional logging has also been include to highligh
this as an unsupported configuration
(APAR IZ48373)
3.Prevents exceptions during CCXML call cleanup and
resulting log messages.
(Defect 6981)
4.When the "dtjconf -action export" command was being run,
the following parameters were not being exported if present
in any TelephonySevice definition within the exported
configuration:
ClientName
AAIKeys
CallIDRange
(Defect 7136)
5.Provides a sample to customers on how to use CCXML.
Set up default.cff to point to the Sample7.ccxml file and
run dtjconf to import the default.cff. Then run the code.
The file Sample7.ccxml contains many examples on how to use
ccxml and how it interacts with VoiceXML, and is heavily
commented to explain itself. Thus it is recommended that
customers examine the file themselves to get a better
understanding of the functionality.
(Defect 7077)
- Update 4.2.0.454
APAR IZ45853
PTF U824562
1.This fix removes expired audio cache directories to stop excess
directories existing, which led to a situation where the system
ran out of links in a directory, and an AIX error occured when
creating a new cached file.
After installing this PTF, in order to get the full effect
you should remove the current audio cache manually. To do
this, perform the following:
1) Ensure that VRBE is stopped (dtjstop followed by dtjshost -exit)
2) cd into $CUR_DIR/voice/ext/v2c
3) Remove the "ip address" named directory.
4) Restart VRBE
(APAR IZ45853)
- Update 4.2.0.443
APAR IZ38758
PTF U822817
1.This fix addresses a problem that can cause a plugin to
leak resources. Specifically the MRCP plugin would leak
file descriptors leading to a complete failure of both
reco and TTS.
(APAR IZ38758)
- Update 4.2.0.437
APAR IZ36575
PTF U822398
1.Fixed a NullPointerException that could occur if a
CCXML application calls out to a second VXML dialog
during a single call.
(APAR IZ36575)
- Update 4.2.0.434
APAR IZ35025
PTF U821507
1.Improve HTTP server incoming and outgoing requests in CCXML.
(APAR IZ35025)
- Update 4.2.0.420
APAR IZ29245
PTF U819608
1.When running ISDN/SIP with CCXML it is possible that
the connection.alerting transition does not present
values for connection.local and connection.remote.
The values were available during connection.connected
Fix is to present the information during
connection.alerting state
(APAR IZ29245)
- Update 4.2.0.419
APAR IZ27144
PTF U819375
1.Removed a small timing window which could result in
a NoSuchElementException from the dtjmrcp plugin code.
(APAR IZ28793)
- Update 4.2.0.415
APAR IZ27144
PTF U819375
1.Handle HTTP requests which don't have responses,
ie HTTP server closing the socket
(APAR IZ27144)
2.Fix up namelist handling, and the loss of original
variables when building up the namelist
(Defect 7041)
3.Handle various socket errors for badly constructed HTTP servers
(Defect 7044)
4.Add support for appendix K of the CCXML spec. Allowing
applications to POST events to the CCXML browser, and the
CCXML browser to send HTTP POST requests.
(Defect 7040)
- Update 4.2.0.407
APAR IZ24444
PTF U818973 U818974
1.Tighten requisite filesets.
(APAR IZ24444)
- Update 4.2.0.363
APAR IZ17301
PTF U817179
1.Fix to CCXML browser code to ensure that the connection
class attribute "originator" is correctly set to "local"
for oubound calls and "remote" for inbound calls
(APAR IZ17301)
2.Correct handling of Timer Already Cancelled state in fetch
to prevent exception.
(Defect 7032)
- Update 4.2.0.352
APAR IZ15492
PTF U816335
1.Modified the handling of semantic interpretation or tag
strings in the VXML2 browser code to prevent WVR from
altering the string that the recognizer returns.
(APAR IZ15492)
- Update 4.2.0.346
APAR IZ14702
PTF U816098 U816099
1.Modified the WVR VXML2 audio cache so that it correctly performs
a get-if-modified on expired resources. The behaviour before
this change was to delete expired resources and so force a
re-fetch from the webserver.
In order to use the original behaviour after installing
this fix the following parameter can be set
in $DTBE_HOME/dtj.ini:
wvr.audiocache.no.expiry=false
(APAR IZ14702)
- Update 4.2.0.342
APAR IZ11272
PTF U815930
1.This APAR fixes an intermittent problem that can result
in: DTJ7588 Problem with MRCP Custom Server messages when
first TTS prompt of a call is null string.
(APAR IZ11272)
- Update 4.2.0.340
APAR IZ11252
PTF U815634
1.This PTF expands the Computer Telephony Integration (CTI)
support to include Genesys Framework V7 from the VoiceXML
2.1 and CCXML development environments.
Further details on configuration and usage can be found in
the following technote
(APAR IZ11252)
- Update 4.2.0.339
APAR IZ10989
PTF U815458
1.This APAR fix adds limited validation to the Classpath
entries when a VRBE node is started. If any entries cannot
be located or are not readable message DTJ2009 will be
output to the node.out file.
(APAR IZ10989)
- Update 4.2.0.336
APAR IZ10136
PTF U815450
1.Corrects handling of Timer Already Cancelled state in
fetch to prevent exception.
(APAR IZ10136)
- Update 4.2.0.320
APAR IZ06705
PTF U814349 U814350 U814351 U814352 U814509 U814510 U814610 U814687 U814688 U814697 U814698 U814699
1.Fix ensures that exact NumToApp mapping will always be
selected over a mapping which contains a wildcard
as documented in GC34-6378.
(APAR IZ06705)
2.Corrects minor problem with vrbeProblem data collection.
(Defect 7018)
3.This fix improves hangup detection during a single step
transfer and prevents a potential browser hang.
(Defect 7020)
- Update 4.2.0.300
APAR IZ05722
PTF U813383 U813384
1.Consolidation of previous PTFs covering WVR VRBE_XML filesets.
(APAR IZ05722)
- Update 4.2.0.277
APAR IY97197 IY97219
PTF U811642
1.Corrected the default behaviour of the system when a
maxspeechtimeout event is triggered in a VXML2 application so
that it now conforms to the VXML2 specification.
(APAR IY97197)
2.This fix corrects a problem with the vxml browser so that
it now accepts the recordutterancetype property in vxml2.1
(APAR IY97219)
- Update 4.2.0.266
APAR IY95820 IY89776
PTF U811222 U811255
1.Made a change to ensure that a maxspeechtimeout event can get
thrown when executing a hotword bargein prompt.
(APAR IY95820)
2.This fixes a problem compiling sub rules in dtmf
grammars.
The DTMF grammar chache must be cleared once installed.
rm -fr /var/dirTalk/DTBE/native/aix/dtmfGrammarCache
and then
mkdir /var/dirTalk/DTBE/native/aix/dtmfGrammarCache
(APAR IY89776)
3.Corrects trace entry to aid problem determination.
No user visible chnages.
(Defect 7004)
4.Prevents ASSERT in VXML2SpeechSupport.speechStarted()
after IY95820.
(Defect 7008)
- Update 4.2.0.257
APAR IY94963
PTF U811202
1.Fixed a NullPointerException that could occur when using
the VXML2 <if> statement.
(APAR IY94963)
- Update 4.2.0.255
APAR IY94931
PTF U811200
1.Updates vrbeProblem to pick up log and trace files
from non-default locations.
(APAR IY94931)
- Update 4.2.0.253
APAR IY94397
PTF U810932
1.This update creates the session.ibm.callID session
variable for new incoming calls.
(APAR IY94397)
- Update 4.2.0.252
APAR IY93783
PTF U810930
1.Fixed a problem with transfer type="consultation"
in VXML2.
(APAR IY93783)
- Update 4.2.0.245
APAR IY91534 IY92134
PTF U810623
1.Fixes a problem with duplicate application instances
being displayed by dtjqapps.
(APAR IY91534)
2.Fixed problem when running vrbeProblem script with a relative
output path. Also fixed a problem where a file called
'{OUTPUTFILE}' would be created in the output directory.
(APAR IY92134)
3.Reports unknown errors from the SpeechClient plugin to
browser/application as error.internal
(Defect 6989)
- Update 4.2.0.239
APAR IY91660
PTF U810589
1.Fixed a problem with transfer type="consultation" in VXML2.
(APAR IY91660)
2.Added diagnostic utilities for VRBE environment.
(Defect 6967)
3.Corrected problem with diagnostic routines.
(Defect 6982)
- Update 4.2.0.234
APAR IY90841 IY90627 IY91308
PTF U810390
1.Improved the handling of grammars that use empty strings
in the result tags.
(APAR IY90841)
2.Fixes NullPointerException terminating VXML2 browser when
<dialogterminate> tag is used in CCXML.
(APAR IY90627)
3.Correct problem caused by timeout handling looping in
browser.
(APAR IY91308)
- Update 4.2.0.230
APAR IY90005 IY89358
PTF U809680
1.Corrected the "dtjconf -action export" command so that the
correct value for the AIXPortNumber parameter is displayed
in the resulting output file.
(APAR IY90005)
2.Fixes NullPointerException when application transitions
through multiple documents using fetchaudio and no audio
has been output to caller.
(APAR IY89358)
- Update 4.2.0.226
APAR IY87079 IY87777 IY88142
PTF U809398 U809399
1.This APAR resolves a problem with Italian scansoft
TTS engines when used with multi-lingual applications
where the text '`l5' for the language code is read out by
the scansoft engines before the text in the application.
(APAR IY87079)
2.Added a new parameter so that the default 5 minute timeout
for a CCXML controlled VXML dialog can be modified by the
administrator. The parameter should be set in
$DTBE_HOME/dtj.ini like so:
wvr.ccxml.dialog.timeout=<value in minutes>
Where <value in minutes> is the required timeout value
specified in minutes.
(APAR IY97777)
3.Updated the DTMF grammar compiler to fix a problem.
(APAR IY88142)
4.Fix adds additional trace entry. No user visible changes.
(Defect 6961)
- Update 4.2.0.217
APAR IY85799
PTF U808516
1.Modified order of CCXML transitions when a far end
disconnect is reported to a CCXML document. The new
order should be to receive a connection.disconnected
prior to a dialog.exit
(APAR IY85799)
- Update 4.2.0.213
APAR IY81735
PTF U807981
1.VoiceXML browser caches results of fetch requests when
method="post". This is incorrect as RFC2616 states that
post results should not be cached. Browser corrected to
conform with RFC.
(APAR IY81735)
- Update 4.2.0.210
APAR IY84110 IY82726 IY83674 IY84378
PTF U807567
1.Corrected a parse error when using the http-equiv attribute of
the meta tag in a CCXML application.
(APAR IY84110)
2.Fixes to consulted transfer requests issued via CCXML
(APAR IY83674)
3.Fixed "can't find active ConnectionItem" failures reported in
log.1.log when trying to use Genesys T-Server with CCXML
(APAR IY82726)
4.The default reason applied when a CCXML <disconnecte> call is made
without an application supplied reason, will now result in a
connection.disconnect being generated with a default value
of "near_end_disconnect"
(APAR IY84378)
5.Creates es_MX site documents to override the default Spanish
to replace two character locale information with 5 character
locale required by WVS version 5.
(Defect 6941)
6.Corrected spurious error.notallowed events in the CCXML processor.
(Defect 6951)
7.Fixed an infinite loop when running certain CCXML applications.
(Defect 6946)
- Update 4.2.0.204
APAR IY82081
PTF U806978
1.Site documents for German updated for use with
WVS version 5.
(APAR IY82081)
- Update 4.2.0.203
APAR IY81629
PTF U806976
1.Corrected a problem with recording caller utterances when
using WVS 4.2.
(APAR IY81629)
- Update 4.2.0.183
APAR IY78441 IY78450 IY78503
PTF U806354
1.The problem when a VXML2 document had Cache-Control header
set to public and there was no max-age property set, the
browser would still fetch the resource from the server
each time rather than correctly caching it has been
corrected.
(APAR IY78503)
2.The VXML2 fetch problem when performing root to root
transition has been corrected.
(APAR IY78441)
3.The problem where HTTP redirects were not handled correctly
by the VXML2 browser, specifically when an application
used relative URIs has been corrected.
(APAR IY78450)
- Update 4.2.0.180
APAR IY78792
PTF U806177 U806178
1.Enhancements to vrbeProblem to include
- Prompt user to include PMR (if known) and include
in output filename
- Collect system name and date/timestamp in output
filename
- Collect all logs and trace files rather than a subset
(APAR IY78792)
2.Problem gathering routines updated to work in silent mode
(Defect 6932)
- Update 4.2.0.170
APAR IY78582
PTF U805960
1.Fixed a memory leak/high memory consumption issue
when running DirectTalk beans applications.
(APAR IY78582)
- Update 4.2.0.152
APAR IY76967 IY75319 IY72495
PTF U805629
1.Corrects DTJ6300 Error when fetching document and
content length header not set by the server.
(APAR IY76967)
2.Under some conditions the VXML2 file cache can
become corrupted. This usually occurs when updates
and cache cleanup are occuring in parallel.
Updates are performed when self describing content
like grammars use the meta tag to update cache
properties like Expires.x. Cache cleanup occurs, by
default, when the cache exceeds 80% of it's capacity.
This PTF includes a fix that allows this parallel
activity to occur without causing cache inconsistency.
Cache inconsistency is usually found when cache entries
are zero length.
(APAR IY72495)
3.When using non default value for VXML fetchtimeout a
Memory leak could occur causing timer tasks to be held by
the timer. This has been corrected.
(APAR IY75319)
- Update 4.2.0.133
APAR IY73577
PTF U804242
1.A fix has been made to ensure that VXML2 log data
appears in the correct output location rather than
going to stdout.
(APAR IY73577)
- Update 4.2.0.124
APAR IY73555
PTF U8039973555)
- Update 4.2.0.119
APAR IY72982 IY72688
PTF U803954
1.Removed the following spurious message from output logs:
WARNING: WARNING: ASR during tranfer
not available...application will continue without.
which was output even when there's no ASR defined.
(APAR IY72982)
2.Fixed a problem with speech recognition grammars where
an external grammar rule reference was being used, but
the grammar rule was not being activated.
(APAR IY72688)
- Update 4.2.0.113
APAR IY72464
PTF U803681
1.This PTF corrects a cache replacement problem caused in
fix level 4.2.0.101
Any existing VXML2 cache directories will be removed during
the installation of this PTF.
When using WVR SpeechClient ensure that WVS 5.1 / 5.2
accumulative fix package have been applied.
This fix package is available from the WVS support web site.
(APAR IY72464)
- Update 4.2.0.101
APAR IY71555
PTF U803589 U803590 U803591 U803592 U803612 U803613 U803614 U803636 U803637 U803638 U803639 U803640
1.This PTF adds support in VRBE_XML for IBM WVS 5.x
speech client enabling use of the latest WVS server
with WVR VoiceXML applications.
Support has been added for VoiceXML 2.1 applications
when used in conjunction with IBM WVS 5.x
Various fixes and updates to VoiceXML and CCXML
browser have also been made.
KNOWN limitations:
'hotword' bargintype is not supported on WVS 5.1.1 or WVS 5.1.2
VoiceXML 2.1 <mark> tag is not supported on WVS 5.1.1 or WVS 5.1.2
There is a known problem when returning ECMA variables such as true
and false (for example in the built-in boolean grammar).
This is being addressed and will be resolved in the near future.
In the mean time, please contact a member of the IBM support
team if you have questions relating to this issue.
(APAR IY71555)
- Update 4.2.0.83
APAR IY71216
PTF U803583
1.This PTF removes a spurious message from logs.
(APAR IY71216)
- Update 4.2.0.81
APAR IY68418 IY70360
PTF U803317
1.This fix corrects a NullPointerException when using
cache-control=private for audio resources.
Also includes a fix to correct a problem when using
fetchaudiominimum property in vxml2 could cause the system
to hang if the document is returned before the
fetchaudiominimum completes.
(APAR IY68418)
2.This fix prevents an application from terminating with an
error if a file in the VXML2 filecache is found to be zero
bytes.
(APAR IY70360)
3.This fix prevents a URISyntaxException from occurring when
Cache-Control=private is set for audio resources.
(Defect 6859)
- Update 4.2.0.75
APAR IY69104 IY68788 IY69052
PTF U802834 U802835
1.A change has been made to subdialog handling to prevent an
error being thrown when handling a hangup event.
(APAR IY69104)
2.A fix has been made to prevent a
StringIndexOutOfBoundsException from occurring when
transferring to a short destination number.
(APAR IY68788)
3.This fix corrects a problem when using the VXML2 transfer
feature, using the VXML2 standard characters for Pause (P) and
Wait for dialtone (W). Websphere Voice Response uses "," and "."
respectively, and the VXML2 standard characters were not
previously translated into this syntax.
Applying this fix will allow "P" and "W" in VXML 2 documents
to be executed correctly by Websphere Voice Response.
(APAR IY69057)
4.Correct problem when using bargein and multiple prompts.
(Defect 6785)
- Update 4.2.0.63
APAR IY67121 IY67164
PTF U802147 U802148
1.Additional code added to aid problem debug routines.
(APAR IY67121)
2.Correct problem caused by fix level 4.2.0.56.
Ensures decoding of escaped URI's conforms to VXML standards.
(APAR IY67164)
- Update 4.2.0.62
APAR IY65015
PTF U801600
1.Fixes exception when audio file URI ending with trailing slash
is encountered.
(APAR IY65015)
- Update 4.2.0.61
APAR IY66063 IY66026 IY65511 IY66108
PTF U801255 U801306
1.A code fix has been made to address a memory leak from
the waitForCall method.
(APAR IY66063)
2.A fix has been made to ensure that errors from fetchaudio
do not result in the termination of the application node.
(APAR IY66026)
3.Fixed a problem in the ViaVoice Reco plugin that generated
a class cast exception when running under CCXML.
(APAR IY65511)
4.Fixed a memory leak that could occur in a VXML2 application
which uses lots of unique audio files.
(APAR IY66108)
- Update 4.2.0.56
APAR IY64878 IY64747
PTF U801062
1.This PTF allows <submit> of audio file to web server when
audio file is zero length.
(APAR IY64878)
2.A change has been made to ensure that the URI in any fetch
request is passed as-is to the webserver without any
characters being removed.
Also modifications have been made to the VXML2 cache
implementation to ensure that the filesystem usage is
properly managed.
(APAR IY64747)
3.A fix has been applied to the speech recognition event
queuing mechanism.
(Defect 6728)
4.Fixes possible timing issue on Windows platform where outbound
dialer application can invoke makeCall() before system fully
intitialised and cause startup sequence to enter a wait which
will never complete.
(Defect IC41850)
5.This PTF improves tracing in audio import code
(Defect 6735)
6.Fixed the dtjconf export option to ensure that it does not
generate a NullPointerException.
(Defect 6736)
7.Adds "ls -lRL $DTJ_HOME" to vrbeProblem collected files.
(Defect 6737)
- Update 4.2.0.31
APAR IY59118 IY62794 IY62795 IY62796
PTF U800456
1.Correct a problem where process would incorrectly end initial
tag processing
(APAR IY59118)
2.Correct browser cache management thread failing with
java.lang.StringIndexOutOfBoundsException when zero length
files found in cache. Without cache management cache may continue
to grow in size.
(APAR IY62796)
3.Correct problem caused by cache filling up and process looping.
(APAR IY62794)
4.Correct problem where ArrayIndexOutOfBoundsException was
incorrectly generated.
(APAR IY62795)
- Update 4.2.0.23
APAR IY61425
PTF U800030
1.VXML2 voice reco error message has been corrected to prevent the
possibility of a core dump.
(APAR IY61425)
- Update 4.2.0.17
APAR IY60351 IY60355 IY60356 IY60358 IY603259
PTF U499198 U499199
1.This PTF fixes an issue with short timeouts in fetching a
new page in VoiceXML 1.0.
(APAR IY60351)
2.Fixed an abnormal termination of the VRNode during a recognition
engine failure.
(APAR IY60355)
3.Fixed a problem with the audio cache getting out of step with the
file system where files were being deleted out of step on expiry.
(APAR IY60356)
4.Correct regression in browser
(APAR IY60358)
5.Improvements made to audio handling in VXML2 for zero length
files.
(APAR IY60359)
6.This fix corrects a problem with activity monitoring in
DTJ_VV_Logger
(Defect 5137)
7.This PTF fixes a very small timing window that allowed a VRBE
application node to get stuck in a 100% CPU cycle, locking out
other VoiceXML 2 applications running.
(Defect 6679)
8.Added a check to validate the requested number of NBest results
is in bounds in the WVR Java API.
(Defect 5679)
9.Fixes java.lang.IndexOutOfBoundsException dump in Cisco Java API
(Defect 6631)
10.Cleaned up the error message presented when trying to run a
VoiceXML2 application on the command line before the Node had
been started.
(Defect 5063)
11.Fixed a reintroduced defect when using grammars in WVS which
lead to unexpected browser termination
(Defect 6637)
12.Catch and correctly managing of a previously uncaught exception
which occured when retreiving problematic voice or wav recordings.
These recording are typically 0 bytes in length or simply do not
exist. The browser will now receive SeverStateError controlled
exceptions rather than the unmanaged exception.
(Defect 6650)
13.Fixed a very small timing window that allowed a VRBE application
node to get stuck in a 100% CPU cycle, locking out other
VoiceXML 2 applications running on that node.
(Defect 6676)
14.The Enabled=<Yes|No> flag in a CCXService definition in
default.cff is now correctly read.
(Defect 6536)
15.This change corrects a problem where the CCXML session variable
"startupmode" was not being set to "newcall" if the CCXService
had been configured for singlecall mode.(i.e. when a new CCXML
session is created for each new call arrival).
(Defect 6585)
16.This change fixes the condition where the phone context is being
removed from the destination URI of a dialog.transfer CCXML message
when a transfer is initiated from a VXML2 document.
(Defect 6612)
17.Added processing to support the callerid attribute on the
CCXML <createcall> tag for the SIP protocol. Other protocols
do not support this attribute.
(Defect 6618)
18.Fixed a problem where the hints attribute in CCXML document tags
was being truncated when it was sent to the CHP process.
(Defect 6648)
19.Corrected a situation where far end hangup was not being
detected in CCXML when a VoiceXML 2 dialog was waiting for
user input.
(Defect 6664)
20.Fixed a problem with placing calls through CCXML in a SIP
telephony environment where the destination URI was not being
passed through correctly.
(Defect 6691)
21.Fixed a problem where phone number pattern matching in the NumToApp
statement in default.cff was not working for non-CCXML applications.
(Defect 6711)
VOIP_SIP Fixes
Internal Defect fixes
- Fixed a problem where an unnecesssary 29805 VoIP SIP signalling process Internal Warning error was raised when setting the record route header. The error occurs in function SIPDialogImpl::setRecordRoute with an Unknown Error.
- Fixed a problem that prevented MWI blind notification from working when using VoIP/SIP. The error seen is a 29805 VoIP SIP signalling process Internal Warning error from the DtSipNotify::createNotifyRequest with the message "Unable to create NOTIFY TO header" in DtSipNotify.C
Internal Defect fixes
- Some tracing of NOTIFY used some information from the SIP transaction. If the NOTIFY occurs without a proper transaction this can cause a crash. The trace now only occurs if the transaction is valid.
- Stop MEDIA_CTRL_DTNA dropping into the wrong code path when VOIP is shutdown. This was causing error 29800 to be raised.
- Remove a race condition in ES services used by VoIP. This race condition caused error 29106 from es_queue.
- Remove 100rel from Supported header in outbound INVITE. This can cause the UAS to require 100rel and reliable responses. WVR currently doesn't support this. The result is the 101-199 response is resent until WVR CANCEL's the call.
- Fixed a problem where WVR reports 29805 SIP signalling process error on setRecordRoute for messages that do not have a record route defined. This fix removes the erroneous warnings.
Internal Defect fix
- Fix prevents VOIP_MONITOR from coredumping if /tmp is cleared.
Internal Defect fixes
-.
- Fixed provisional responses for calls with a Require: 100rel.
- Update 4.2.0.567
APAR IV03693
PTF U844292
1.Fix a problem with MakeCall to a IP address that exists but isn't
running SIP. If the MakeCall is then stopped the SIP CANCEL request
also can't be sent. This causes a loop of INVITE and CANCEL requests
to be sent. If a CANCEL fails to send then this will be assumed to
also cancel the original request. This problem is most likely to
occur if the MakeCall is via a trombone.
(APAR IV03693)
2.
(Defect 37178)
3.Fixed provisional responses for calls with a Require: 100rel.
(Defect 37152)
- Update 4.2.0.564
APAR IV02064
PTF U844100
1.Added VOIP SIP support for tromboning between:
1) RFC2833 to SIP INFO DTMF
2) SIP INFO DTMF to RFC2833
3) SIP INFO DTMF tp SIP INFO DTMF
4) RFC2833 to RFC2833 (support already present)
Add support for two SIP INFO DTMF content types:
1) dtmf-relay
2) vnd.nortelnetworks.digits
(APAR IV02064)
2.Register removed from ALLOW as we ignore it, INFO added to ALLOW as
we support it, and 100rel added to Supported as we support it.
(Defect 37141)
3.Added Accept and Accept-encoding headers and in OPTIONS responses
while in a call, as required in RFC 3261.
(Defects 37143 37162)
4.Now reject INVITES with no acceptable media types as 488 - not
acceptable here, as per RFC 3261.
(Defect 37144)
5.Correctly report that we are busy with SIP messages when we have no
channels available and in inbound call is attempted.
(Defect 37145)
- Update 4.2.0.561
APAR IZ99472
PTF U843516
1.Allows "#" in FROM headers in VOIP SIP calls.
(APAR IZ99472)
2.Prevents WVR from including REFER in the ALLOW header when system is
configured to not accept incoming transfer requests.
(Defect 37084)
3.Prevents WVR from attempting an INVITE when making calls to an
invalid address.
(Defect 37082)
4.Refers with a Replaces parameter in the Refer-To header URI now
return a SIP not implemented message rather than address incomplete
(Defect 37080)
- Update 4.2.0.558
APAR IZ93175
PTF U842217
1.Fixed a issue where VOIPL3_SIP core dumped when under high load.
(APAR IZ93175)
2.Fixed an error causing SIP_RESPONSE_ADDRESS_INCOMPLETE to be sent
unnecessarily when processing a REFER message.
(Defect 37081)
3.Fixed incorrect SDP messages when SIP OPTIONS received, and prevents
errors being rasied when an OPTIONS message is received
before a call
is established.
(Defect 37077)
- Update 4.2.0.555
APAR IZ91778 IZ92660 IZ92510
PTF U840695
1.This fix overcomes a 29806 error (VOIP signalling process call state
machine error/Invalid phone number for called party IE) for a VOIP
outbound call StateTable application if any of the the digits
'*#ABCD' is set in the Phone Number parameter for a MakeCall action.
Previously the outbound call would fail to proceed despite the Phone
Number being unused in the call setup (for VOIP outbound calls the
TO_HDR tag set by the application is used).
(APAR IZ91778)
2.This fix overcomes a problem found with outbound VOIP_SIP calls when
the target endpoint is reached via a single outbouund proxy. In some
cases (e.g. if endpoint is using secure SIP i.e. transport=tls) an
outbound call would fail to be established (because SIP ACK response
to 200 OK was directed to the wrong ip address).
(APAR IZ92510)
3.Fix a problem when using the VOIP SIP stack against a specific
ethernet adapter and that adapter is not the first adapter in the
machine. The contact header is always filled out with the ip address
of the first adapter in the machine. With the fix the contact is
filled out with the ip address for configured adapter.
(APAR IZ92660)
- Update 4.2.0.550 - Fix Pack 2
APAR IZ85734 IZ85731
PTF U839629
1.New features contained in this PTF
* Support for SIP Registration - provides support for the SIP
registration method described in RFC 3261.
* New VoIP Signalling - Inbound Call Channel Allocation Method.
A new Inbound Call Channel allocation option called "Allocate
calls for D2IS" has been added to system configuration to control
the channel used for DTNA based calls. This option is for use
when using the Genesys-supplied D2IS custom server in a
behind-the-switch Genesys Framework implementation.
For further information on the Fix Pack 2 features please
refer to the TechNote at the following URL:
2.This fix overcomes a possible VOIP yellow alarm 29805 error (VOIP SIP
signalling process Internal Warning / Failed to create new
callProvider object) which then leads to a failure to handle all
subsequent inbound and outbound VOIP calls until WVR is restarted.
Also fixed is a possible 29806 error (VOIP signalling process call
state machine error / Could not add FINAL_RESPONSE tag to
SL_CALL_DISCONNECT_IND: Too big) which can occur during a VOIP blind
transfer.
(APAR IZ85731)
- Update 4.2.0.519
APAR IZ75568 IZ76479
PTF U835666
1.This fix overcomes a problem found with outbound VOIP SIP calls when
the target endpoint is reached via two or more outbound proxies. In
some cases (e.g. if endpoint is using secure SIP i.e. transport=tls)
an outbound call would fail to be established (because SIP ACK
response to 200 OK was directed to the wrong ip address)
(APAR IZ76479).
2.This fix overcomes an occasional 29805 error (VoIP SIP signalling
process Internal Warning/connid not found) found during VOIP call
transfers despite the transfer to the third party being successful.
(APAR IZ75568)
- Update 4.2.0.514
APAR IZ65932
PTF U832498
1.Fixed a problem in the construction of the 200 message in response to
an incoming INVITE. The 200 media attributes can be incorrect if
the last attribute present in the INVITE happens to be rejected.
For example if the INVITE asks for audio and video. This was
causing the video to be rejected (correctly) followed by the video
being rejected a second time (incorrectly) and the audio being
ignored (also incorrectly).
In the failing case, the invite message would have included content
like this when viewed in a network trace:
Content-Length: 210
v=0
o=user1 53655765 2353687637 IN IP4 9.146.171.66
s=-
c=IN IP4 9.146.171.66
t=0 0
m=audio 6000 RTP/AVP 0
a=rtpmap:0 PCMU/8000
m=video 30000 RTP/AVP 34 99
a=rtpmap:34 H263/90000
a=rtpmap:99 H264/90000
(APAR IZ65932)
- Update 4.2.0.501
APAR IZ62638
PTF U829860
1.This fix modifies the behaviour of a VOIP_SIP blind transfer if the
called third party fails to answer (a timeout occurs waiting for a
NOTIFY 200 SIP message). The Transfer Call action now returns
SL_REPLY_NO_ANSWER instead of SL_REPLY_OUTBOUND_LINE_PROBLEM.
After terminating the transfer (with TerminateCall action) an
application transfer retry should behave as expected without
generating a 29808 error (VOIP signalling process trunk state
machine error - Could not get trunk initial state from the trunk
SIT).
(APAR IZ62638)
- Update 4.2.0.473
APAR IZ60116
PTF U828063
1.This PTF fixes a core dump in the DTNA code along with
improvements to the DTMF detection.
(APAR IZ60116)
- Update 4.2.0.459
APAR IZ50720 IZ52323
PTF U825877
1.This fix removes a problem which when making a TCP non blocking
connect could, in rare circumstances, cause a VOIP crash.
(APAR IZ50720)
2.This fix restores the fix previously provided by APAR IZ43589
(fileset dirTalk.VOIP_SIP version 4.2.0.452) but regressed by
the fix for APAR IZ47300 (fileset dirTalk.VOIP_SIP version
4.2.0.456).
(APAR IZ52323)
- Update 4.2.0.456
APAR IZ42421 IZ47300
PTF U824646
1.This fix prevents a possible 29806 error with a VOIP blind transfer
retry which follows a previous transfer that failed with
OUTBOUND_LINE_PROBLEM because of a 30 second 'ring-no-answer'
Atimeout. lso fixed is a possible 29815 error as a result of a
caller hanging up immediately before a VOIP transfer is attempted.
(APAR IZ42421)
2.This fix implements the following changes in WVR SIP support:
a) Makes TCP socket access non-blocking to prevent permanent hangups
if TCP session hangs
b) Prevent hangup in state machine if switch does not respond
with 200 OK
(APAR IZ47300)
- Update 4.2.0.452
APAR IZ43589
PTF U824353
1.This fix helps prevent a possible hangup (HUP) return
code with the Transfer Call action used with a blind or
attended VOIP transfer. Previously, depending on
machine and application timings the return edge could
vary between OK and HUP despite the transfer being
successful.
(APAR IZ43589)
- Update 4.2.0.445
APAR IZ40641
PTF U823637
1.This fix adds -a option to VOIP_MONITOR to allow SIP trace
to be put to AIX trace.
(APAR IZ40641)
- Update 4.2.0.440
APAR IZ37950
PTF U822402
1.This fix adds additional SIP URI parsing for outbound calls.
(APAR IZ37950)
- Update 4.2.0.432
APAR IZ33913
PTF U821504
1.This fix adds support for the SIP OPTIONS message in an
out of call context where it is used as a 'heartbeat'
message or to query media capabilities. An SDP body
containing the current media (i.e. codec) settings is
returned on the OPTIONS as per RFC3261.
This fix also adds a couple of additional valid formats
to the URI parser for outbound SIP messages.
(APAR IZ33913)
- Update 4.2.0.418
APAR IZ26674 IZ25818
PTF U819605
1.This fix adds additional formats to the SIP URI parser.
(APAR IZ26674)
2.This fix correct WVR SIP operation when a Notify 487 is
received during a SIP transfer operation.
(APAR IZ25818)
3.This fix corrects a problem handling the maddr parameter
when received on a SIP Route header. Also corrects handling
of Record-Route and Route headers for loose and strict
routing.
(Defect 36405)
- Update 4.2.0.413
APAR IZ24154
PTF U819348
1.This PTF corrects a problem which could have led to an
incorrect RTP address being used in a SIP call.
Also, two new System Parameters have been added to the
WebSphere Voice Response system configuration to allow the
administrator to modify the system behavior when using
VOIP/SIP signaling.
The first, 'CHP available call reject threshold', adds the
ability to reject a call if less than a certain number of CHPs
are available and the second, 'Inbound Call Channel Allocation
Method', adds the ability to allocate incoming calls to WVR
internal channels 'round robin' rather than using the highest
available channel.
Details of the new System Parameters are documented in the
Knowledge Center under the 'VoIP SIP Signaling
parameter group'.
(APAR IZ24154)
- Update 4.2.0.408
APAR IZ24001
PTF U818988
1.Tighten requisite filesets.
(APAR IZ24001)
- Update 4.2.0.367
APAR IZ21161 IZ21283
PTF U817185
1.Corrects a problem which can cause a SIP Layer 3 core dump
if a SIP CANCEL is received whilst still processing the
corresponding INVITE.
(APAR IZ21161)
2.This defect corrects some warning errors which occurred
when doing call transfers using SIP REFER/REPLACES with
Cisco CallManager 6.1
(APAR IZ21283)
- Update 4.2.0.361
APAR IZ17691
PTF U817082
1.This PTF contains updates for future enhancements to DTTA
operation.
(APAR IZ17691)
- Update 4.2.0.357
APAR IZ17310
PTF U817030
1.This PTF fixes memory leak in SIP Blind Notify
(used for Message Waiting Indications)
(APAR IZ17310)
2.This PTF fixes core dump processing ACK to SIP CANCEL
(Defect 36339)
- Update 4.2.0.356
APAR IZ16616
PTF U816799
1.Correct calculation so that VOIP starts if
/var/tmp has greater than 4GB of free space.
(APAR IZ16616)
- Update 4.2.0.348
APAR IZ14870
PTF U816101
1.This fix allows WVR to recover from DTEA port 'Out of Service'
conditions which may result from a message timing condition at
call hangup.
NOTE This fix requires fix level 4.2.0.349 also.
(APAR IZ14870)
- Update 4.2.0.345
APAR IZ12639
PTF U815936
1.SIP Enhancements
Adds following funtionality to WVR SIP support as follows;
a) Support SIP Session Timer as defined in RFC 4028. Several
new System Parameters in SIP group are provided to enable
and configure Session Timer.
b) Support outbound DTMF using SIP Info method. Configured using
new value for 'Outbound DTMF Method' in DTEA/DTNA Media system
parameter group.
c) Support SIP Subscribe/Notify for UM Message Waiting notification.
Requires UM 'IMC_Subscribe' custom server to be activated.
d) Various other minor enhancements to improve softswitch and gateway
support
Refer to the 'SIP-specific section' in the 'WVR for AIX V4.2 -
Voice over IP using Session Initiation Protocol' manual for more
details.
Link to the WVR for AIX, V4.2 library here.
(APAR IZ12639)
- Update 4.2.0.322
APAR IZ08694
PTF U815175
1.The fix allows WVR to accept SIP re-Invites with delayed
media within a call. It also corrects the processing
of RecordRoute and Route headers.
(APAR IZ08694)
2.This fix corrects a problem where the RFC2833 (RTP Payload)
payload type (usually 101) was missing on the SDP offer
for an outboung SIP invite.
(APAR IZ09225)
3.Corrects problem in generated SDP when adding annexb=no
for g729, no vad on outbound SIP invite.
(Defect 36240)
- Update 4.2.0.286
APAR IY99568
PTF U811972
1.This update enhances WVR SIP by adding support for the following:
- P-Asserted-ID Header : used to pass proxy-authenticated
'calling' number
- Remote-Party-ID : ditto (older alternative to P-Asserted-ID)
- Privacy header : used to indicate suppression of calling
party number
- Handling of multiple diversion headers to pass Original Called
and Last Redirecting numbers in SV187 and SV188
- Support for system parameter to allow presentation of
calling number when restricted
- Extraction of above headers into SV542 tagged string on
incoming calls
- Generation of headers from SV541 tagged string on outbound
calls
- Support for latest SIP 'caller on hold' method
- Configurable removal of E164 leading digits (starting with '+')
- Support for SIP early media (no SDP offer on incoming INVITE,
WVR responds with offer on 200 OK).
(APAR IY99568)
- Update 4.2.0.276
APAR IY97735
PTF U811641
1.This fix contains the VOIP part to correct clocking operation
for mixed DTTA/DTEA systems.
See fix level 4.2.0.275 for full details of fix.
(APAR IY97735)
- Update 4.2.0.265
APAR IY94651 IY96336
PTF U811215
1.This fix will add a line a:fmtp:18 annex=no to any
SIP SDP generated by WVR (send offer or answer) for
the G729A codec (currently only on DTEA) when Voice
Activity Detection is set OFF for G729A. This will
partially prevent a mismatch of G729A anexb
(voice activity detection) between WVR and the
remote SIP endpoint
(APAR IY94651)
2.This defect adds support for the maddr parameter on
SIP URI's as per RFC3261.
It also allows a non-numeric parameter to be present in
the user part of a SIP URI when extracting called and calling
numbers for passing in SV185/186.
(APAR IY96336)
3.Correct variables and messages used by import check
routines.
(Defect 36074)
- Update 4.2.0.249
APAR IY92940
PTF U810900
1.The fix allow the version (v=) field to be located in any
line of the SDP. This is needed because some ISDN/SS7 gatweways
insert ISUP information prior to the SDP itself.
(APAR IY92940)
- Update 4.2.0.240
APAR IY92115
PTF U810590
1.This fix corrects a problem with WVR SIP support where a
re-invite to put media on hold caused a SIP BYE to be sent.
(APAR IY92115)
2.This defect will change SIP behaviour to always extract the
SIP Request Header (REQ_HDR) into tagged string and SV542 on
an incoming call regardless of the setting of the
'Use Request Header' sysparm.
(Defect 36046)
- Update 4.2.0.215
APAR IY85995
PTF U808513
1.Improve relibility of voip trunk disable during heavy
call load.
Improve SIP tracing with inclusion of message direction
and corrected timestamps.
Ability to extract SIP request header and select app profile
based on it.
(APAR IY85995)
2.Added files to allow import check during WVR start up.
(Defect 35953)
- Update 4.2.0.212
APAR IY84995
PTF U807572
1.Remove potential deadlock from SIP stack during a timeout
(APAR IY84995)
2.DT.rte fileset must be at fix level 4.2.0.211 as well
(Defect 35922)
3.This fix adds the EVENT header to outbound NOTIFY messages
for MWI EVENT: message-summary.
(Defect 35909)
4.Stop auto forcing the signalling to VoIP if the signalling
type is not CAS for DTEA/DTNA trunks.
(Defect 35908)
- Update 4.2.0.201
APAR IY81677
PTF U806945
1.Add support for Virtual Adapter ( adapterless DTNA) solution.
For VOIP fileset.
(APAR IY81677)
- Update 4.2.0.140
APAR IY74570
PTF U804343
1.This fix improves the return codes passed back to
applications making a VoIP supervised transfer call,
allowing application to manage errors such as no answer,
busy etc.
(APAR IY74570)
- Update 4.2.0.126
APAR IY72487 IY72841
PTF U803996
1.This fix corrects an error where the branch ID on an
ACK message confirming a 2XX response to an INVITE was
not unique.
(APAR IY72841)
2.Modify the INVITE so that we don't add our own Route header.
Any existing route informaiton will be retained.
(APAR IY72487)
- Update 4.2.0.73
APAR IY68498
PTF U802526
1.This fix now allows voip numeric tel numbers to
conatain a +.
Eg +12345@bob.uk.ibm.com will arrive in the application
called number as +12345. This will allow vxml applications
to be selected based on a number type of +23445 using the
NumtoApp field.
(APAR IY68498)
- Update 4.2.0.41
APAR IY63806
PTF U800662
1.Problem found with VOIP call transfer, the referred-by header did
not have the "sip:" string at the start of the header.
(APAR IY63806)
- Update 4.2.0.21
APAR IY60974
PTF U800024
1.This PTF fixes a SIP attended transfer problem found after running a
long term test.
(APAR IY60974)
2.This PTF fixes a SIP blind transfer problem found after running a
long term test.
(Defect 35223)
3.Fix to ensure w1 timer stop occurs in request failure for on-hold,
and fix invite failure in farend-hup.
(Defect 35201)
4.This PTF problem which stops channels being occasionally left in a
blocked state when a quiesce command is issued.
(Defect 35172)
5.This PTF fixes a SIP attended transfer problem found after running a
long term test.
(Defect 35110)
6.This PTF fixes an error burst generated by the SIP
MEDIA control process when re-cycling trunks.
(Defect 34957)
7.Fixed a bug affecting SIP over TCP where if the far end unexpectedly
closes the connection then the WVR SIP signalling process can
terminate unexpectedly.
(Defect 34940a)
8.Fixed a bug which sometimes caused quiescing of VOIP channels
to fail.
(Defect 34826)
9.Fixed a memory leak which could occur if WVR fails to send a
blind MWI NOTIFY message.
(Defect 34715)
- Update 4.2.0.18
APAR IY60260
PTF U499201
1.Fix to correct passing up of SIP headers to WVR Applications
when diversion header is included.
(APAR IY60360)
- Update 4.2.0.11
APAR IY58368
PTF U498796
1.This fixes a number of problems found in the SIP
attended transfer feature. SIP attended transfer is based
on the refer replaces method.
(APAR IY58368)
2.This fixes a problem when running SIP without a proxy.
The FROM header was overwriting the request header.
(Defect 35057)
3.This fixes a problem in pack configuration which could lead
to incorrectly configured trunks.
(Defect 34889)
4.Fixed a problem where two instances of VOIP_MONITOR could be
run concurrently.
(Defect 34921) | http://www-01.ibm.com/support/docview.wss?uid=swg21253844 | CC-MAIN-2015-48 | refinedweb | 32,105 | 68.87 |
PrimeNG is a UI library for Angular. It has a list of components that we can use in our Angular applications.
A lot of devies get this error when they try to add PrimeNG:
p-card is not a known Angular component
Note here, that p-card is just an example, and the error could be for any PrimeNG component one might be using. To solve that, make sure you have followed ALL the following steps:
- Installing the packages.
npm install primeng --save npm install primeicons --save
Make sure the following dependencies are added in the package.json file:
"dependencies": { //... "primeng": "^8.0.0", "primeicons": "^2.0.0" },
- Import the necessary modules in the app.module.ts file.
import {CardModule} from 'primeng/card'; import {MenuItem} from 'primeng/api';
In the imports below, add the corresponding module.
imports: [ BrowserModule, AppRoutingModule, FormsModule,
- If you still get the same error, add this to the angular.json file, in the styles:
"styles": [ "node_modules/primeicons/primeicons.css", "node_modules/primeng/resources/themes/nova-light/theme.css", "node_modules/primeng/resources/primeng.min.css", //... ],
This should do the job!
Cheers! Happy coding!
Top comments (0) | https://dev.to/dkp1903/primeng-setup-n6i | CC-MAIN-2022-40 | refinedweb | 187 | 50.53 |
Just.
The electric field of a capacitor (plates separated by $d=2$):
import sys import numpy as np import matplotlib.pyplot as plt from matplotlib.patches import Circle WIDTH, HEIGHT, DPI = 700, 700, 100 def E(q, r0, x, y): """Return the electric field vector E=(Ex,Ey) due to charge q at r0.""" den = ((x-r0[0])**2 + (y-r0[1])**2)**1.5 return q * (x - r0[0]) / den, q * (y - r0[1]) / den # Grid of x, y points nx, ny = 128, 128 x = np.linspace(-5, 5, nx) y = np.linspace(-5, 5, ny) X, Y = np.meshgrid(x, y) # Create a capacitor, represented by two rows of nq opposite charges separated # by distance d. If d is very small (e.g. 0.1), this looks like a polarized # disc. nq, d = 20, 2 charges = [] for i in range(nq): charges.append((1, (i/(nq-1)*2-1, -d/2))) charges.append((-1, (i/(nq-1)*2-1, d/2))) # Electric field vector, E=(Ex, Ey), as separate components Ex, Ey = np.zeros((ny, nx)), np.zeros((ny, nx)) for charge in charges: ex, ey = E(*charge, x=X, y=Y) Ex += ex Ey += ey fig = plt.figure(figsize=(WIDTH/DPI, HEIGHT/DPI), facecolor='k') ax = fig.add_subplot(facecolor='k') fig.subplots_adjust(left=0, right=1, bottom=0, top=1) # Plot the streamlines with an appropriate colormap and arrow style color = np.log(np.sqrt(Ex**2 + Ey**2)) ax.streamplot(x, y, Ex, Ey, color=color, linewidth=1, cmap=plt.cm.plasma, density=3, arrowstyle='->') # Add filled circles for the charges themselves charge_colors = {True: '#aa0000', False: '#0000aa'} for q, pos in charges: ax.add_artist(Circle(pos, 0.05, color=charge_colors[q>0], zorder=10)) ax.set_xlabel('$x$') ax.set_ylabel('$y$') ax.set_xlim(-5,5) ax.set_ylim(-5,5) ax.set_aspect('equal') plt.savefig('capacitor.png', dpi=DPI) plt.show()
The electric field of a polarized disc (plates separated by $d=0.1$):
Comments are pre-moderated. Please be patient and your comment will appear soon.
Baudouin DILLMANN 6 months, 2 weeks ago
Great job,Link | Reply
please replace
ax = fig.add_subplot(facecolor='k')
by
ax = fig.add_subplot(111)
christian 6 months, 2 weeks ago
Thank you.Link | Reply
It shouldn't be necessary to specify 111 for the add_subplot function (this is the default), but I did want the background colour of the Axes to be black. If the code is not working for you, you might need to upgrade your version of Matplotlib.
Cheers, Christian
New Comment | https://scipython.com/blog/the-electric-field-of-a-capacitor/ | CC-MAIN-2021-39 | refinedweb | 422 | 69.79 |
There is a task:
Imagine you have been asked by the Physics department to create a Java program to help them record information about experiments they are running on a particle accelerator.
Each type of particle is uniquely identified by name, and has a mass (a decimal number between 0 and 1 that indicates how fast it breeds), and a radioActivedecayLevel (a decimal number between 1 and 10 that signifies how dangerous it is to humans).
Physics run a number of tests on the particles. Each test has one or more types of particle associated with it (up to a maximum of 5), a textual description of the test, the speed (whole number) the particles reached, and a textual result of the test.
In the box below, create the classes, methods and instance variables you would use to create such a program. Note: there is no need to create the whole program – just define the relevant classes, method signatures, constructors and variables/arrays you would use.
My answers:
pseucode
public class Particle
{
String name;
float mass=decimal # from 0 to 1;
float radioActivedecayLevel=decimal # from 1 to 10;
String description;
String result;
int speed;
int testNumber;
}
Thank you!:)
} | https://www.daniweb.com/programming/software-development/threads/85394/from-java-beginner-am-i-right-with-answers | CC-MAIN-2016-50 | refinedweb | 198 | 51.62 |
We call high level operations that maintain the validity of a halfedge graph Euler Operations.
#include <CGAL/boost/graph/Euler_operations.h>
creates a barycentric triangulation of the face incident to
h.
Creates a new vertex and connects it to each vertex incident to
h and splits
face(h, g) into triangular faces.
h remains incident to the original face. The time complexity is linear in the size of the face.
next(h, g)after the operation, i.e., a halfedge pointing to the new vertex.
Note that
add_center_vertex() does not deal with properties of new vertices, halfedges, and faces.
his not a border halfedge.
remove_center_vertex()
#include <CGAL/boost/graph/Euler_operations.h>
adds a new face defined by a range of vertices (identified by their descriptors,
boost::graph_traits<Graph>::vertex_descriptor).
For each pair of consecutive vertices, the corresponding halfedge is added in
g if new, and its connectivity is updated otherwise. The face can be added only at the boundary of
g, or as a new connected component.
vrcontains at least 3 vertices
boost::graph_traits<Graph>::null_face()if the face could not be added.
#include <CGAL/boost/graph/Euler_operations.h>
appends a new face incident to the border halfedge
h1 and
h2 by connecting the vertex
target(h2,g) and the vertex
target(h1,g) with a new halfedge, and filling this separated part of the hole with a new face, such that the new face is incident to
h2.
h1and
h2are border halfedges,
h1 != h2,
next(h1,g) != h2,
h1and
h2are on the same border.
#include <CGAL/boost/graph/Euler_operations.h>
appends a new face to the border halfedge
h2 by connecting the tip of
h2 with the tip of
h1 with two new halfedges and a new vertex and creating a new face that is incident to
h2.
Note that
add_vertex_and_face_to_border() does not deal with properties of new vertices, halfedges, and faces.
h1and
h2are border halfedges
h1 != h2,
h1and
h2are on the same border.
#include <CGAL/boost/graph/Euler_operations.h>
collapses an edge in a graph.
After the collapse of edge
e the following holds:
eis no longer in
g.
eare no longer in
g.
v0is no longer in
g.
his not a border halfedge,
p_his no longer in
gand is replaced by
o_n_h.
his not a border halfedge,
p_o_his no longer in
gand is replaced by
o_n_o_h.
gthat had
v0as target and source now have
v1as target and source, respectively.
g.
v1.
does_satisfy_link_condition(e,g) == true.
#include <CGAL/boost/graph/Euler_operations.h>
collapses an edge in a graph having non-collapsable edges.
Let
h be the halfedge of
e, and let
v0 and
v1 be the source and target vertices of
h. Collapses the edge
e replacing it with
v1, as described in the paragraph above and guarantees that an edge
e2, for which
get(edge_is_constrained_map, e2)==true, is not removed after the collapse.
v1.
gto be an oriented 2-manifold with or without boundaries. Furthermore, the edge
v0v1must satisfy the link condition, which guarantees that the surface mesh is also 2-manifold after the edge collapse.
get(edge_is_constrained_map, v0v1)==false.
v0and
v1are not both incident to a constrained edge.
#include <CGAL/boost/graph/Euler_operations.h>
fills the hole incident to
h.
hmust be a border halfedge
#include <CGAL/boost/graph/Euler_operations.h>
performs an edge flip, rotating the edge pointed by
h by one vertex in the direction of the face orientation.
hare triangles.
#include <CGAL/boost/graph/Euler_operations.h>
joins the two faces incident to
h and
opposite(h,g).
The faces may be holes.
If
Graph is a model of
MutableFaceGraph the face incident to
opposite(h,g) is removed.
join_face() and
split_face() are inverse operations, that is
join_face(split_face(h,g),g) returns
h.
prev(h,g)
out_degree(source(h,g)), g)) >= 3
out_degree(target(h,g)) >= 3
split_face()
#include <CGAL/boost/graph/Euler_operations.h>
glues the cycle of halfedges of
h1 and
h2 together.
The vertices in the cycle of
h2 get removed. If
h1 or
h2 are not border halfedges their faces get removed. The vertices on the face cycle of
h1 get removed. The invariant
join_loop(h1, split_loop(h1,h2,h3,g), g) returns
h1 and keeps the graph unchanged.
h1.
hand
gare different and have equal number of edges.
#include <CGAL/boost/graph/Euler_operations.h>
joins the two vertices incident to
h, (that is
source(h, g) and
target(h, g)) and removes
source(h,g).
Returns the predecessor of
h around the vertex, i.e.,
prev(opposite(h,g)). The invariant
join_vertex(split_vertex(h,g),g) returns
h. The time complexity is linear in the degree of the vertex removed.
prev(opposite(h,g))
hand
opposite(h,g)is at least 4.
source(h, g)is invalidated
his invalidated
split_vertex()
#include <CGAL/boost/graph/Euler_operations.h>
removes the incident face of
h and changes all halfedges incident to the face into border halfedges.
See
remove_face(g,h) for a more generalized variant.
#include <CGAL/boost/graph/Euler_operations.h>
removes the vertex
target(h, g) and all incident halfedges thereby merging all incident faces.
The resulting face may not be triangulated. This function is the inverse operation of
add_center_vertex(). The invariant
h == remove_center_vertex(add_center_vertex(h,g),g) holds, if
h is not a border halfedge.
prev(h, g)
target(h,g)is a hole. There are at least two distinct faces incident to the faces that are incident to
target(h,g). (This prevents the operation from collapsing a volume into two faces glued together with opposite orientations, such as would happen with any vertex of a tetrahedron.)
add_center_vertex()
#include <CGAL/boost/graph/Euler_operations.h>
removes the incident face of
h and changes all halfedges incident to the face into border halfedges or removes them from the graph if they were already border halfedges.
If this creates isolated vertices they get removed as well.
his not a border halfedge
make_hole()for a more specialized variant.
#include <CGAL/boost/graph/Euler_operations.h>
splits the halfedge
h into two halfedges inserting a new vertex that is a copy of
vertex(opposite(h,g),g).
Is equivalent to
opposite(split_vertex( prev(h,g), opposite(h,g),g), g).
hnewpointing to the inserted vertex. The new halfedge is followed by the old halfedge, i.e.,
next(hnew,g) == h.
#include <CGAL/boost/graph/Euler_operations.h>
splits the face incident to
h1 and
h2.
Creates the opposite halfedges
h3 and
h4, such that
next(h1,g) == h3 and
next(h2,g) == h4. Performs the inverse operation to
join_face().
If
Graph is a model of
MutableFaceGraph and if the update of faces is not disabled a new face incident to
h4 is added.
h3
h1and
h2are incident to the same face
h1 != h2
next(h1,g) != h2and
next(h2,g) != h1(no loop)
#include <CGAL/boost/graph/Euler_operations.h>
cuts the graph along the cycle
(h1,h2,h3) changing the genus (halfedge
h3 runs on the backside of the three dimensional figure below).
Three new vertices, three new pairs of halfedges, and two new triangular faces are created.
h1,
h2, and
h3 will be incident to the first new face.
Note that
split_loop() does not deal with properties of new vertices, halfedges, and faces.
h1,
h2, and
h3denote distinct, consecutive halfedges of the graph and form a cycle: i.e.,
target(h1) == target(opposite(h2,g),g), … ,
target(h3,g) == target(opposite(h1,g),g).
h1,
h2, and
h3are all distinct.
#include <CGAL/boost/graph/Euler_operations.h>
splits the target vertex
v of
h1 and
h2, and connects the new vertex and
v with a new edge.
Let
hnew be
opposite(next(h1, g), g) after the split. The split regroups the halfedges around the two vertices. The edge sequence
hnew,
opposite(next(h2, g), g), ...,
h1 remains around the old vertex, while the halfedge sequence
opposite(hnew, g),
opposite(next(h1, g), g) (before the split), ...,
h2 is regrouped around the new vertex. The split returns
hnew, i.e., the new edge incident to vertex
v. The time is proportional to the distance from
h1 to
h2 around the vertex.
hnew
target(h1, g) == target(h2, g), that is
h1and
h2are incident to the same vertex
h1 != h2, that is no antennas
join_vertex() | https://doc.cgal.org/latest/BGL/group__PkgBGLEulerOperations.html | CC-MAIN-2019-30 | refinedweb | 1,368 | 58.99 |
11 September 2012 17:06 [Source: ICIS news]
LONDON (ICIS)--Here is Tuesday’s end of day European oil and chemical market summary from ICIS.
CRUDE: October WTI: $96.93/bbl, up 39 cents/bbl. October BRENT: $114.89/bbl, up 8 cents/bbl
Crude oil futures edged higher on Tuesday as investors remained optimistic that the US Federal Reserve will show more hints of monetary stimulus to boost the ?xml:namespace>
NAPHTHA: $997-999/tonne, down $6-8/tonne
The cargo range slipped from earlier in the day, with two trades taking place this afternoon. October swaps were assessed at $983-985/tonne.
BENZENE: $1,520-1,600/tonne, up $90-150/tonne
European spot benzene prices continue to firm as traders and consumers struggled to cover short positions. September benzene traded at $1,500-1,525/tonne CIF ARA. October benzene traded at $1,525/tonne.
STYRENE: $1,750-1,810/tonne, up $50-80/tonne
Europe October styrene has traded at $1,750/tonne FOB
TOLUENE: $1,300-1,320/tonne, steady
With benzene and styrene markets dominating the aromatics complex today, there was very little mention of toluene prices. Toluene is also tight and sources say its difficult to suggest even a notional range.
MTBE: $1,432/tonne, unchanged.
Prices are assessed stable with no deals taking place. | http://www.icis.com/Articles/2012/09/11/9594280/evening-snapshot-europe-markets-summary.html | CC-MAIN-2014-52 | refinedweb | 222 | 58.38 |
You should be familiar with the mv(1) command by now, which moves a file from one place to another.
If you're writing shell scripts, or if you're using a library which lets you move files then you don't need to worry about how it works, but if you don't have a library available you might be surprised by the amount of effort required.
It's just a rename(2), right?
If you can guarantee that the destination is on the same filesystem and that you don't care if it replaces some other file, then yes.
Not clobbering
mv(1) has the
-n or
--no-clobber option
to prevent accidentally overwriting a file.
The naive way to do this would be to check whether the file exists before calling rename(2), but this is a TOCTTOU bug which can cause this to overwrite if another thread puts a file there.
To do this safely use renameat2(2) with the
RENAME_NOREPLACE flag,
which will make it fail if the destination already exists.
The destination is on another filesystem
On a modern Linux distribution your files are usually spread across multiple file systems, so your persistent files are on a filesystem mounted from local storage, but your operating system puts temporary files on a different file system so they get removed when your computer shuts down.
Unfortunately, the rename(2) system call does not work if the destination is on a different file system.
Checking ahead of time whether a path is on a different file system
is traditionally handled by calling stat(2),
and checking whether the
st_dev field differs,
but this is another TOCTTOU bug waiting to happen
and rename(2) sets errno(3) to
EXDEV
which lets you know it failed for being on another filesystem
in the same system call you would have made anyway.
If you care about still being able to move the file when its destination is on a different file system then you need a fallback when this happens.
So we fall back to copying the file and removing the old one?
In principle, yes, though actually implementing this is surprisingly difficult.
Handling the fallback logic itself is not straight-forward, we'll get that out of the way first.
We can fallback to rename(2) if renameat2(2) is not implemented but only if we don't need to handle not clobbering the target.
When that happens we need to fall back to the copy,
which can use
O_EXCL with
O_CREAT
to only write to the file if it didn't already exist.
If unlinking the source file fails because the file doesn't exist, then that means that we were able to copy its contents while something else removed it.
Given the file was written to its destination and it no longer exists where it used to it can be argued that the operation as a whole was successful.
/* my-mv.c */ #include <stdbool.h> #include <fcntl.h> #include <stdio.h> #include <unistd.h> #include <errno.h> #include <sys/syscall.h> #if !HAVE_DECL_RENAMEAT2 static inline int renameat2(int oldfd, const char *oldname, int newfd, const char *newname, unsigned flags) { return syscall(__NR_renameat2, oldfd, oldname, newfd, newname, flags); } #endif #ifndef RENAME_NOREPLACE #define RENAME_NOREPLACE (1<<0) #endif int copy_file(const char *source, const char *target, bool no_clobber); int move_file(const char *source, const char *target, bool no_clobber) { int ret; ret = renameat2(AT_FDCWD, source, AT_FDCWD, target, no_clobber ? RENAME_NOREPLACE : 0); if (ret == 0) return ret; if (errno == EXDEV) goto xdev; if (errno != ENOSYS) { perror("renaming file"); return ret; } /* Have to skip to copy if unimplemented since rename can't detect EEXIST */ if (no_clobber) goto xdev; rename: ret = rename(source, target); if (ret == 0) return ret; if (errno == EXDEV) goto xdev; perror("renaming file"); return ret; xdev: ret = copy_file(source, target, no_clobber); if (ret < 0) return ret; if (unlink(source) < 0 && errno != ENOENT) { perror("unlinking source file"); return -1; } return ret; }
So we open both files, and loop reading data then writing it?
This will produce a file that when read, will produce the same stream of bytes as the original.
You could use stdio(3) to copy the contents,
but that will have to be left as an exercise for the reader,
since I don't like its record-based interface,
I prefer to deal with file descriptors over
FILE* handles,
and the buffering makes error handling more… interesting.
So, broadly, the idea is to read into a buffer, then write from the buffer to the target file.
However,
EINTR is a problem,
many system calls can be interrupted before they do anything,
and read(2) and write(2) may return less than you asked for.
Glibc has a handy
TEMP_FAILURE_RETRY macro for handling
EINTR,
but to handle the short reads and writes,
you need to always work in a loop.
int naive_contents_copy(int srcfd, int tgtfd) { /* 1MB buffer, too small makes it slow, shrink this if you feel memory pressure on an embedded device */ char buf[1 * 1024 * 1024]; ssize_t total_copied = 0; ssize_t ret; for (;;) { ssize_t n_read; ret = TEMP_FAILURE_RETRY(read(srcfd, buf, sizeof(buf))); if (ret < 0) { perror("Reading from source"); return ret; } n_read = ret; /* Reached the end of the file */ if (n_read == 0) return n_read; while (n_read > 0) { ret = TEMP_FAILURE_RETRY(write(tgtfd, buf, n_read)); if (ret < 0) { perror("Writing to target"); return ret; } n_read -= ret; total_copied += ret; } } return 0; } int copy_file(const char *source, const char *target, bool no_clobber) { int srcfd = -1; int tgtfd = -1; srcfd = open(source, O_RDONLY); if (srcfd == -1) { perror("Opening source file"); return srcfd; } tgtfd = open(target, O_WRONLY|O_CREAT|(no_clobber ? O_EXCL : 0), 0600); if (tgtfd == -1) { perror("Opening target file"); return tgtfd; } return naive_contents_copy(srcfd, tgtfd); }
Making use of our new function
So now we have a nice
move_file function
that will fall back to copying it if renaming does not work.
But code is of no use in isolation, we need a program for it to live in, and the simplest way to use it is a command-line program.
#include <getopt.h> #include <string.h> int main(int argc, char *argv[]) { char *source; char *target; bool no_clobber = false; enum opt { OPT_NO_CLOBBER = 'n', OPT_CLOBBER = 'N', }; static const struct option opts[] = { { .name = "no-clobber", .has_arg = no_argument, .val = OPT_NO_CLOBBER, }, { .name = "clobber", .has_arg = no_argument, .val = OPT_CLOBBER, }, {}, }; for (;;) { int ret = getopt_long(argc, argv, "nN", opts, NULL); if (ret == -1) break; switch (ret) { case '?': return 1; case OPT_NO_CLOBBER: case OPT_CLOBBER: no_clobber = (ret == OPT_NO_CLOBBER); break; } } if (optind == argc || argc > optind + 2) { fprintf(stderr, "1 or 2 positional arguments required\n"); return 2; } source = argv[optind]; if (argc == optind + 2) target = argv[optind + 1]; else /* Move into the current directory with the same name */ target = basename(source); if (move_file(source, target, no_clobber) >= 0) return 0; return 1; }
$ if echo 'int main(){(void)renameat2;}' | gcc -include stdio.h -xc - -o/dev/null 2>/dev/null; then > HAVE_DECL_RENAMEAT2=1 > else > HAVE_DECL_RENAMEAT2=0 > fi $ make CFLAGS="-D_GNU_SOURCE -DHAVE_DECL_RENAMEAT2=$HAVE_DECL_RENAMEAT2" my-mv $ ./my-mv 1 or 2 positional arguments required $ touch test-file $ ./my-mv test-file clobber-file $ ls test-file clobber-file ls: cannot access test-file: No such file or directory clobber-file $ ./my-mv -n test-file clobber-file rename2: No such file or directory $ touch test-file $ ./my-mv --no-clobber test-file clobber-file rename2: File exists $ ./my-mv test-file clobber-file $ ls test-file clobber-file ls: cannot access test-file: No such file or directory
So we've got a complete fallback for rename(2) now?
Not quite.
For most purposes this is likely to be sufficient, but there's a lot more to a file than the data you can read out of it, far more than I can cover in this article, so there will be follow-up articles to cover copying other aspects of files including:
- Sparseness
- Speed
- Metadata
- Atomicity
- Other types of file
Thanks for this article it's really useful, it would be interesting to know why you're not fond of the stdio interface, and also potentially worth mentioning that EINTR is really only something that needs to be explicitly handled on Linux, from signal(7):
This behaviour seems less than helpful to me, it would be really good to know if there's a good reason why GNU/Linux doesn't just restart the call (as the BSDs do)
Also, I had no idea you could make system calls without having a C wrapper for them, so thanks for that as well!
I don't like the buffering behaviour, it defers writes late enough that I lose context about what bit of the write failed, so when I go to flush and close I can't say how much was actually written.
There's also the fact that some signals can be configured to restart, but not others. Signals in general are a bit of a mess, so my way of dealing with it is to wrap everything in a retry and leave signal handling for shut down and use a better form of IPC for everything else.
Yep, it's a pain when the wrappers don't exist because you need to handle the error return calling convention yourself, since part of what the libc wrappers do is set errno. I tend to use negative error number returns in my own code rather than errno which makes it actually closer to what I prefer. | https://yakking.branchable.com/posts/moving-files-1-copying/ | CC-MAIN-2021-43 | refinedweb | 1,566 | 51.92 |
This section is informative.
The SMIL 2.1 specification leaves the SMIL 2.0 Linking Modules [SMIL20-linking] unchanged.
This section is informative.
The.1 specification allows but does not require that user agents be able to process XPointers in SMIL 2.1 URI attribute values.
Where possible, SMIL linking constructs have the same.1 specification.
SMIL profiles may use XML Base [XMLBase]. The SMIL 2.1.1, and not under any XHTML-related namespace.
The SMIL 2.1 Linking Modules support name fragment identifiers and the '#' connector. The fragment part is an id value that identifies one of the elements within the referenced SMIL document. With this construct, SMIL 2.1 Timing and Synchronization Modules for more information.
There are special semantics defined for following a link containing a fragment part into a document containing SMIL timing. These semantics are defined in the SMIL 2.1 Timing and Synchronization Modules.
Due to its integrating nature, the presentation of a SMIL 2.1 document may involve other (non-SMIL) applications or plug-ins. For example, a SMIL 2.1 user agent may use an HTML plug-in to display an embedded HTML page. Vice versa, an HTML user agent may use a SMIL plug-in to display a SMIL 2.1 document embedded in an HTML page. Note that this is only one of the supported methods of integrating SMIL 2.1 and HTML. Another alternative is to use the merged language approach. See the SMIL 2.1.1 implementations may choose not to comply with this recommendation.
If a link is defined in an embedded SMIL 2.1 document, traversal of the link affects only the embedded SMIL 2.1 document.
If a link is defined in a non-SMIL document which is embedded in a SMIL 2.1 document, link traversal can only affect the presentation of the embedded document and not the presentation of the containing SMIL 2.1 document. This restriction may be relaxed in future versions of SMIL.
When a link into a SMIL 2.1 document contains an un-resolvable fragment identifier ("dangling link") because it identifies an element that is not actually part of the document, SMIL 2.1 software should ignore the fragment identifier, and start playback from the beginning of the document.
When a link into a SMIL 2.1 document contains a fragment identifier which identifies an element that is the content of a switch element, SMIL 2.1 software should interpret this link as going to the outermost ancestor switch element instead. In other words, the link should be considered as accessing the switch ancestor element that is not itself contained within a switch.
The SMIL 2.1.1.1.1.1.1 timing to the a element must specify the default and allowed values of the fill attribute on the a element. Languages applying SMIL 2.1 timing to the a element wishing to remain compatible with SMIL 1.0, such as the SMIL 2.1.1.1.1.1 2.1,.1.1.1> | http://www.w3.org/TR/SMIL2/extended-linking.html | CC-MAIN-2015-27 | refinedweb | 507 | 61.63 |
The sizeof operator is another method to determine the storage requirements of any data type of variable during the execution of a program. For example, we can use the expression.
sizeof(int)
to determine the storage requirements of an int variable on our computer. The operator sizeof can also be used on any variable as shown in the example below.
int kk;
printf("sizeof(kk) = %d bytes", sizeof(kk);
The sizeof operator precedes either a variable name OR a datatype name. For example, if x has been declared to be a float variable, then both the expressions given below will yield the number of bytes required to store a float variable.
sizeof (x)
sizeof (float)
using sizeof with a variable name
using sizeof with a datatype
The program, given below will provide the information about the storage of various data types on your C compiler.
#include <stdio.h>
void main()
{
printf( "Size of char= %d bytes\n", sizeof(char));
printf( "Size of short int= %d byte\n", sizeof(short));
printf( "Size of int= %d byte\n", sizeof(int));
printf( "Size of long int= %d byte\n", sizeof(long int));
printf( "Size of long long int= %d byte\n", sizeof(long int));
printf( "Size of float= %d byte\n", sizeof(float));
printf( "Size of double= %d byte\n", sizeof(double));
printf( "Size of long double= %d byte\n", sizeof(long double));
printf( "Size of enum = %d byte\n", sizeof(enum));
} | http://ecomputernotes.com/what-is-c/operator/sizeof-operator | CC-MAIN-2020-05 | refinedweb | 239 | 52.53 |
I have the following data frame in the picture, i want to take a Plot a histogram to show the distribution of all countries in the world for any given year (e.g. 2010).
Following is my code table generates after the following code of cleaning:
dataSheet = pd.read_excel("",sheetname="Data")
dataSheet = dataSheet.transpose()
dataSheet = dataSheet.drop(dataSheet.columns[[0,1]], axis=1) ;
dataSheet = dataSheet.drop(['World Development Indicators', 'Unnamed: 2','Unnamed: 3'])
In order to plot a histogram of all countries for any given year (e.g. 2010), I would do the following. After your code:
dataSheet = pd.read_excel("? downloadformat=excel",sheetname="Data") dataSheet = dataSheet.transpose() dataSheet = dataSheet.drop(dataSheet.columns[[0,1]], axis=1) dataSheet = dataSheet.drop(['World Development Indicators', 'Unnamed: 2','Unnamed: 3'])
I would reorganise the column names, by assigning the actual country names as column names:
dataSheet.columns = dataSheet.iloc[1] # here I'm assigning the column names dataSheet = dataSheet.reindex(dataSheet.index.drop('Data Source')) # here I'm re-indexing and getting rid of the duplicate row
Then I would transpose the data frame again (to be safe I'm assigning it to a new variable):
df = dataSheet.transpose()
And then I'd do the same as I did before with assigning new column names, so we get a decent data frame (although still not optimal) with country names as index.
df.columns = df.iloc[0] df = df.reindex(df.index.drop('Country Name'))
Now you can finally plot the histogram for e.g. year 2010:
import matplotlib.pyplot as plt df[2010].plot(kind='bar', figsize=[30,10]) | https://codedump.io/share/jjdspWwfPb08/1/how-to-create-histograms-in-panda-python-using-specific-rows-and-columns-in-data-frame | CC-MAIN-2017-13 | refinedweb | 265 | 60.51 |
Objectives
- Introduce the 8×8 LED matrices controlled by a MAX7219.
- Show a prototype
- Display different messages.
Bill of materials
LED matrices
If I you have endured this far, you are already familiar with the idea that when in electronics something sucks, either for being laborious, heavy or annoying, someone invents an integrated circuit that solves the problem for us, so we can dedicate ourselves to think on what we want and not to be concerned whether a wire is loose or you have defined correctly the character arrays.
And as you could suspect, the 8 × 8 LED arrays are no exception.
In fact, we have a widespread integrated circuit manufactured by Maxim, the MAX7219 / MAX7221, that makes us become reconciled to the LEDs arrays and even with the 7-segment LED displays. Their features are:
- Input and output are built-in in series, this way it requires fewer pins.
- It can handle 7-segment LED displays up to 8 digits.
- It can handle bar LED displays.
- It can handle 8×8 LED matrices or up to 64 LED diodes.
- It only requires an external resistor for all the 64 LEDs.
- It includes a BCD decoder (There is no need to draw character arrays, they are all built-in: uppercase, lowercase and even numbers and punctuation signs).
- It handles the multiplexing of characters and digits.
- It includes memory to store the characters.
- It supports SPI and QSPI (someday we’ll have to talk about the SPI bus).
- It’s cheap and available for a few euros.
So the only reason to hand-wire any type of LED displays is to understand the circuit and learn (suffering, of course), but in the real world will use a chip like this, because it will save many hours of uttering insults.
In the future we will include a chapter to show how to deal with these displays using the chip directly, but for now we will deal with a breakout board that includes an 8×8 dot LED array and one of these chips.
For less than the worth of their components, we can buy a breakout board with a 8×8 dot LED matrix governed by a MAX7219 and forget about further complications. They are easy to assemble on a breadboard and besides they can be connected in cascade, i.e., we can connect up to 8 in series. The connection with our Arduino is via an asynchronous SPI serial port.
CIRCUIT WIRING DIAGRAM
Again, when an external controller is used, the connection is trivial.
And for the breadboard:
THE CONTROL PROGRAM
To handle the array, there is a control library available, called LedControlMS and can be downloaded from here adafruit LEDControl library
Once you’ve installed the library, Sketch\ Include Library\ Add ZIP. Library…, you must first include the following statement:
#include "LedControlMS.h"
After that we must indicate how many displays we are going to use, only one for now, and then create a LedControl class object. We must pass it the control pin numbers and the number of matrices that we are going to use:
#define NumMatrix 1
LedControl lc=LedControl(12,11,10, NumMatrix);
When the sketch starts all the arrays are in standby mode, so we must activate them first.
for (int i=0; i< NumMatrix ; i++) { lc.shutdown(i,false); // Activate the matrix lc.setIntensity(i,8); // Set brightness to an intermediate level lc.clearDisplay(i); // Clear it all }
And now we have just to write the message:
lc.writeString(0,"Arduino course by Prometec.net");
In summary, the entire sketch to write a message could be like this: Prog_38_1
#include "LedControlMS.h" #define NumMatrix 1 // It declares how many matrices we are going to use LedControl lc=LedControl(12,11,10, NumMatrix); // Create a LedControl object void setup() { for (int i=0; i< NumMatrix ; i++) { lc.shutdown(i,false); // Activate the matrix lc.setIntensity(i,8); // Set brightness to an intermediate level lc.clearDisplay(i); // Clear it all } } void loop() { lc.writeString(0," Arduino course by Prometec.net "); delay(1000); }
As you can see, just like in the previous chapter, we don’t have to define character arrays or worry about multiplexing the pins.
- I am a staunch defender of working the minimum to achieve our goals (in computer jargon is called optimizing resources), but this is because the guys from Adafruit, to whom we will never thank enough, have developed a library for us.
- But make no mistake, this library below does exactly what we saw in the previous chapter. It defines the character arrays and the LED multiplexing functions.
- If you have interest, (there will be someone, I presume) you can check it, because navigating to \\File\Open … you can find two sketches in your library directory, usually \Documents\Arduino\libraries\LedControlMS in Windows, and by loading LedControlMS.cpp you will see the sketches that form the library (do not panic, these libraries are not intended to be understood by newbies).
Summary
- We have seen, very quickly, the features of MAX7219.
- We will speak further about this IC in the future because is very useful to handle different kinds of LED displays.
- We have seen that the manufacturers ship already 8 × 8 SPI LED matrices with a built in MAX7219, that can be very easily handled if we want to show alphanumeric messages.
Give a Reply | http://prometec.org/displays/8x8-spi-led-matrix-with-max7219/ | CC-MAIN-2021-49 | refinedweb | 893 | 61.87 |
[]( Nelson/HamburgerMenu)
Hamburger Menu
An Elegant UITabBarController Subclass
Features
- Quick installation
- Easy customization
- No changes to Storyboard layout required
Animations
Hamburger Menu features spring animations and a menu button that transitions between opened and closed states.
Setup
Install
Add the Pod to your Podfile
pod "HamburgerMenu"
Change UITabBarController Class
Change the class of your main
UITabBarController to
MenuController from the Module
HamburgerMenu.
This will instantly transform your
UITabBarController into a Hamburger Menu using the default menu nib.
Customize
The hamburger menu is easily customized by providing a subclass of
MenuView to the
MenuController.
1. Create MenuView Nib
Right click on your main Storyboard and select
New File.... Choose
User Interface and create a new
View.
Name the view
MyCustomNib (or whatever you want).
2. Create MenuView Subclass
Right click on the newly created
.xib file and select
New File.... Choose
Source and create a new
Cocoa Touch Class.
Name the class
MyCustomNib (or whatever you named the nib) and make it a subclass of
MenuView.
3. Prepare Nib
Open
MyCustomNib.xib and set the
UIView‘s class to
MyCustomNib.
Open
MyCustomNib.swift and import
HamburgerMenu.
import HamburgerMenu
Open your main Storyboard and set your
MenuController‘s
Menu Nib to
MyCustomNib (or whatever you named your nib). Do this by selecting the
MenuController, going to the property panel, and entering the nib’s (without
.xib) name into the property box.
4. Add UI
You are now free to add whatever UI elements you want to the
MyCustomNib.xib. Whatever you add here will show up in your Hamburger Menu.
Make sure to use Autolayout if you want your Hamburger Menu to behave correctly in all orientations and on all devices.
5. Switching Tabs
Call
self.switchTab(_: Int, andClose: Bool) inside of your
MenuView subclass to change the currently selected tab.
@IBAction func buttonForTabTwoTouchUpInside(sender: UIButton) { self.switchToTab(1, andClose: true) //index 1 = tab #2 }
You can also loop over
self.controller.tabBar.items as is done in the default menu view or
self.controller.viewControllers to dynamically create buttons for all children view controllers.
Look at
DefaultMenuView.swift in the Pod to see examples of this being done with a
UIStackView.
Disclaimer
Apple recommends against using hamburger menus in your UI because they can make your app harder to use. See
Designing Intuitive User Experiences - 211 WWDC 2014 session (at 31’ 57") to learn more.
This repo is for the special cases where a hamburger menu is the better solution or where finalized design files demand it (the latter being the reason it was made).
Author
Tanner Nelson
License
HamburgerMenu is available under the MIT license. See the LICENSE file for more info.
Latest podspec
{ "name": "HamburgerMenu", "version": "1.0.2", "summary": "An Elegant UITabBarController Subclass.", "description": "An Elegant UITabBarController Subclass that makes it easy to add a HamburgerMenu to your project.", "homepage": "", "license": "MIT", "authors": { "Tanner Nelson": "[email protected]" }, "source": { "git": "", "tag": "1.0.2" }, "social_media_url": "", "platforms": { "ios": "8.0" }, "requires_arc": true, "source_files": "Pod/Classes/**/*", "resource_bundles": { "HamburgerMenu": [ "Pod/Assets/*.png" ] }, "dependencies": { "HamburgerButton": [ "~> 1.0" ] } }
Sun, 06 Mar 2016 06:53:04 +0000 | https://tryexcept.com/articles/cocoapod/hamburgermenu | CC-MAIN-2019-22 | refinedweb | 513 | 51.04 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.