what is it
----------
A Python package to parse and build CSS Cascading Style Sheets.

main changes since 0.9.4a4
--------------------------
For full details for 0.9.4b1 see the relevant CHANGELOG.

0.9.4b1:

- FEATURE: Added ``csscombine`` script which currently resolves @import rules into the input sheet. No nested @imports are resolved yet and @namespace rules do not work yet.
- IMPROVEMENT: Added better ``str`` and ``repr`` to cssutils.serializer.Preferences
- IMPROVEMENT: Added position information to some error reportings (Property, CSSMediaRule)
- some internal changes

license
-------
cssutils is published under the LGPL.

download
--------
For download options see the cssutils website. cssutils needs Python 2.4 or higher (tested with Python 2.5 on Vista only).

Bug reports, comments, etc. are very much appreciated!

thanks, Christof

Source: https://mail.python.org/pipermail/python-announce-list/2007-December/006329.html
Introduction
Some very brief notes summarizing the abstract side of Haskell monads. It’s my crib sheet, written partly to straighten matters in my own mind and partly for future reference.
Most of the information here comes from the usual places, notably the Typeclassopedia.1 I’m also indebted to Dominic Prior for many helpful discussions. Dominic is collecting useful and interesting monad examples2 on Google Docs.
Basic definitions
There are (at least) four sensible ways to define monads, but they’re all equivalent: you get the same monad in every case.
- >>= is called 'bind'.
- return isn't like return in other languages.
- Monads also define >> and fail, but we'll ignore them for now.
The standard Haskell formulation
The standard Prelude defines monads thus:
class Monad m where
  (>>=)  :: m a -> (a -> m b) -> m b
  return :: a -> m a
with the (unenforced) rules that:
return a >>= f = f a
a >>= return = a
(a >>= f) >>= g = a >>= (\x -> f x >>= g)
Intuitively return x 'puts' x into the monad in a 'natural' way. Continuing the intuition, x >>= f applies function f to value x.
It’s worth noting the signature for
f,
f :: Monad m => a -> m b
which implies that it's the function's responsibility to put its result into the monad. Conversely >>= gets the value from the monad, then applies the function to it. From the outside, everything stays inside the monad. 'Get' and 'put' are deliberately vague because they mean different things in each monad.

We can make a stronger statement: there are no generic monad functions to take things out of the monad. Put another way, all the function types end in m x, never just x.
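A quick concrete illustration (my sketch, not from the references above), using the Maybe monad: half puts its result into the monad, and >>= threads values through it without ever letting them escape.

```haskell
-- A sketch in the Maybe monad: 'return' wraps a value;
-- '>>=' unwraps, applies the function, and keeps the result wrapped.
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

main :: IO ()
main = do
  print (return 8 >>= half)         -- Just 4  (by law 1, the same as half 8)
  print (Just 8 >>= half >>= half)  -- Just 2
  print (Just 5 >>= half)           -- Nothing: the failure stays inside the monad
```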
Our intuitive view of >>= and return makes the first two monad laws easy to understand.

- The first says that the unwrapping bit of >>= exactly cancels out the wrapping done by return, leaving only the function-applying bit.
- The second says that you get the same cancellation if you do the unwrapping then the wrapping.
The third law tells us how to compose two monadic functions. On the left we apply first f then g to a, whilst on the right we apply the lambda expression to a. So, that lambda expression must encode applying first f then g.
Note that the monad laws are exhaustive in the sense that they cover all the non-trivial binary combinations of return and >>=:

- return … >>=
- >>= … return
- >>= … >>=
Building on functors
Instead of building monads from scratch, we can build them from some of Haskell’s simpler abstract type classes: functor and applicative. In future Haskell might well make this the default.
Let’s look at the declarations:
class Functor f where
  fmap :: (a -> b) -> f a -> f b

class Functor f => Applicative f where
  (<*>) :: f (a -> b) -> f a -> f b
  pure  :: a -> f a

class Applicative m => Monad m where
  (>>=)  :: m a -> (a -> m b) -> m b
  return :: a -> m a
and the laws which instances of these classes should obey:
fmap id = id
fmap (g . h) = (fmap g) . (fmap h)

pure id <*> v = v
pure f <*> pure x = pure (f x)
u <*> pure y = pure ($ y) <*> u
u <*> (v <*> w) = pure (.) <*> u <*> v <*> w

return a >>= f = f a
a >>= return = a
(a >>= f) >>= g = a >>= (\x -> f x >>= g)
Finally, because every monad is an applicative, and every applicative is a functor, we can write the characteristic functions of the simpler classes in terms of the more complicated ones:
fmap f x = pure f <*> x
fmap f x = x >>= return . f

pure = return
f <*> a = f >>= \x -> a >>= \y -> return $ x y
Clearly pure and return are very similar animals, but let's look instead at the function-applying functions:

fmap  :: Functor f => (u -> v) -> f u -> f v
(<*>) :: Applicative a => a (u -> v) -> a u -> a v
(=<<) :: Monad m => (u -> m v) -> m u -> m v

(=<<) = flip (>>=)

We can regard all three functions as tweaking a function so that it applies to a wrapped value. However the function being transformed is different in each case:

- fmap takes a pure function: (u -> v).
- <*> takes a function already in the applicative: a (u -> v).
- =<< takes a function which puts its result into the monad: u -> m v.
Doing it with join
Consider the implementation of fmap with >>=:

fmap f x = x >>= return . f
It’s clear that to some extent
>>= duplicates the functionality in
fmap, and somewhat begs the question whether we could distil the unique part of
>>= into a different function. Happily we can: it’s called
join, and gives us a third way to define a monad:
class Applicative m => Monad m where
  join   :: m (m a) -> m a
  return :: a -> m a
Note that join is almost the inverse to return, but join will only collapse two lots of wrapping into one: it won't return a pure value from the monad. More poetically (ex The Universe of Discourse3):
…a monad must possess a join function that takes a ridiculous burrito of burritos and turns them into a regular burrito.
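A couple of concrete uses (my illustrations, not from the sources): join flattens exactly one layer of wrapping, never the last one.

```haskell
import Control.Monad (join)

main :: IO ()
main = do
  print (join (Just (Just 3)))  -- Just 3
  print (join [[1, 2], [3]])    -- [1,2,3]
  -- join (Just 3) would be a type error: join never unwraps the final layer.
```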
We can implement join in terms of >>=, but we need both join and fmap to implement >>=:
join x = x >>= id

x >>= f = join (fmap f x)
Finally we need different, but equivalent laws for this definition of monads:
return . f = fmap f . return
join . return = id
join . fmap return = id
join . fmap join = join . join
join . fmap (fmap f) = fmap f . join
Kleisli composition
Recall that in the third monad law for >>= we discussed how to compose monadic functions:
(a >>= f) >>= g = a >>= (\x -> f x >>= g)
where the lambda expression on the right hand side applies f then g. The lambda looks a bit unwieldy but happily there is a standard name for this, the Kleisli composition arrow:
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
f >=> g = \x -> f x >>= g
This gives us our fourth and final definition:
class Monad m where
  (>=>)  :: (a -> m b) -> (b -> m c) -> a -> m c
  return :: a -> m a
It transpires that if we rewrite the laws to use >=> instead of >>= they take on a much more elegant form:
return >=> f = f
f >=> return = f
(f >=> g) >=> h = f >=> (g >=> h)
In other words return is the left and right identity for >=>, and >=> is associative.
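Associativity is exactly what makes Kleisli pipelines pleasant to write: no parentheses needed. A small sketch of my own in the Maybe monad:

```haskell
import Control.Monad ((>=>))

halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

decrement :: Int -> Maybe Int
decrement n = if n > 0 then Just (n - 1) else Nothing

-- Associativity means the pipeline needs no parentheses.
pipeline :: Int -> Maybe Int
pipeline = halve >=> decrement >=> halve

main :: IO ()
main = do
  print (pipeline 10)  -- Just 2   (10 -> 5 -> 4 -> 2)
  print (pipeline 8)   -- Nothing  (8 -> 4 -> 3, and 3 is odd)
```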
We can also express fmap, join, and >>= succinctly:
fmap f = id >=> return . f
join = id >=> id
(>>= f) = id >=> f
There’s a fun game to play with the types in the expression for
join. Recall:
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
id :: a -> a
and so in id >=> id we must have:

a ≣ m b
b ≣ m c
and thus:

a ≣ m (m c)

(id >=> id) :: Monad m => m (m c) -> m c
Finally note that the Kleisli arrow is the monadic take on flip (.), not (.):
(.) :: (b -> c) -> (a -> b) -> a -> c
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
Standard Monad Functions
Having defined the monad, one gets a whole variety of fun little functions to play with it. Many of these are listed in Control.Monad4 and you should consult that for full documentation. The notes below are my notes on top of that.
>>
>> is a specialized version of >>=, which is defined for every monad. We omitted it above because it doesn't add anything conceptual to the picture:
(>>) :: Monad m => m a -> m b -> m b
f >> g = f >>= const g
fail
Although it’s included in every monad,
fail is a mistake, born of
do-notation.
liftM, liftM2, …, liftM5
These lift functions of n-arguments into a monadic form:
liftM  :: Monad m => (a -> r) -> m a -> m r
liftM2 :: Monad m => (a -> a1 -> r) -> m a -> m a1 -> m r
…
They can be expressed as a chain of >>=. For example:
liftM2 f x y = x >>= \u -> y >>= \v -> return (f u v)
though perhaps do-notation5 is nicer:
liftM2 f x y = do
  u <- x
  v <- y
  return (f u v)
Finally,

liftM ≣ fmap
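Two small uses of liftM2 (my examples), showing how the same lift reads in different monads:

```haskell
import Control.Monad (liftM2)

main :: IO ()
main = do
  print (liftM2 (+) (Just 1) (Just 2))  -- Just 3
  print (liftM2 (+) [1, 2] [10, 20])    -- [11,21,12,22]
```

In the list monad the lifted (+) pairs every element of the first list with every element of the second, which follows directly from the chain-of->>= definition above.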
ap
ap provides a more scalable way to lift functions into the monad:
liftMn f x1 x2 … xn ≣ return f `ap` x1 `ap` … `ap` xn
The right-hand-side might remind you of applicative:
(pure f) <*> x1 <*> x2 <*> … <*> xn
and indeed we find:
pure ≣ return
<*>  ≣ `ap`
It’s easy to implement
ap directly:
f `ap` x = f >>= \g -> x >>= \y -> return (g y)
But there’s also an elegant relation to
liftM2:
ap = liftM2 id
This is obviously true from the expression for liftM2 above, but I think there is merit in pondering the result until it is obvious without seeing the innards of the lift.
sequence
sequence interchanges the monad and the list:
sequence :: Monad m => [m a] -> m [a]
It can be implemented with foldr, where it bears a striking resemblance to the identity fold:
sequence = foldr (liftM2 (:)) (return [])
idFold   = foldr (:) []
Given the fold, we just convert both the step and base-case to their monadic equivalents and get sequence.
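In the Maybe monad this gives the familiar all-or-nothing behaviour (my examples, not from the sources):

```haskell
main :: IO ()
main = do
  print (sequence [Just 1, Just 2, Just 3])   -- Just [1,2,3]
  print (sequence [Just 1, Nothing, Just 3])  -- Nothing
```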
References
- 1.
- 2.
- 3.
- 4.
- 5. | https://www.mjoldfield.com/atelier/2014/01/monads-algebra.html | CC-MAIN-2019-43 | refinedweb | 1,502 | 61.7 |
Raking MySQL Over Rails
Next, create the states.csv file, and store it within the newly created db/seed directory. A shortened version of the file is presented here:
id, name, abbreviation
1, Alabama, AL
2, Alaska, AK
3, Arizona, AZ
4, Arkansas, AR
5, California, CA
6, Colorado, CO
Next, you'll need to create a Rake file. The process and syntax behind doing so is worthy of an article unto itself, so just trust me on this. I'd been using a homebrewed task for some time, but recently came across a much more succinct solution created by Jeffrey Alan Hardy, which I've slightly modified to account for my preferred use of comma-separated seed files. Paste the following code into a file named seeder.rake and save it to your Rails project's lib/tasks directory:
namespace :db do
  desc "Load seed fixtures (from db/seed) into the current environment's database."
  task :seed => :environment do
    require 'active_record/fixtures'
    Dir.glob(RAILS_ROOT + '/db/seed/*.csv').each do |file|
      Fixtures.create_fixtures('db/seed', File.basename(file, '.*'))
    end
  end
end
To populate the states table, execute the following command from within your project directory:
%>rake db:seed
Check the states table, and you'll see it's been populated!
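For illustration, here is a framework-free sketch (my own, not from the article) of what the task does with each CSV fixture: parse it with headers and build one attribute hash per row, which is what gets inserted into the states table.

```ruby
require "csv"

# Two rows of db/seed/states.csv, inlined for the sketch.
STATES_CSV = <<~CSV
  id, name, abbreviation
  1, Alabama, AL
  2, Alaska, AK
CSV

# Strip the padding that follows each comma in the seed file.
strip = ->(field) { field && field.strip }

rows = CSV.parse(STATES_CSV, headers: true,
                 header_converters: strip,
                 converters: strip).map(&:to_h)

rows.each { |attrs| puts attrs.inspect }
# e.g. {"id"=>"1", "name"=>"Alabama", "abbreviation"=>"AL"}
```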
Migrating Data Between Databases
Ask five developers what the superior database solution is, and you're sure to get five different answers. Like the endless vi/emacs and Linux/Windows arguments, there's never a shortage of opinion when it comes to database adoption. However, reality occasionally can take precedence over preference, and you may find yourself in a position where the client has decided to make a last minute switch to MySQL after you've been developing the application for weeks using PostgreSQL. Due to a variety of inconsistencies among various database solutions, it just isn't as easy to migrate data as one might think.
Noted Rails developer Tobias Lütke encountered a similar problem, and created a great Rake task for dumping database data into YAML format, and repopulating any database supported by Rails Migrations (at the time of writing, MySQL, PostgreSQL, SQLite, SQL Server, Sybase, and Oracle). I won't reproduce the task here because it's rather lengthy. Instead, download it from here and place it into your project's lib/tasks directory.
Next, run the following Rake command to retrieve the data in YAML format.
%>rake db:backup:write
All of the tables found in the current environment's database have been backed up to db/backup! Now, all you need to do is update your database.yml file to point to your new database solution, and then execute the following command to populate the new database:
%>rake db:backup:read
Keep in mind that anything found in that new database will be deleted before the new data is populated!
Conclusion
Rake is an amazing tool capable of doing so much more than what was demonstrated in this tutorial. Be sure to check out the Rake homepage, and this great tutorial for more information about this powerful tool.
About the Author
W. Jason Gilmore
#include <db.h> int DB_ENV->log_stat(DB_ENV *env, DB_LOG_STAT **statp, u_int32_t flags);
The DB_ENV->log_stat() method returns the logging subsystem statistics.
The DB_ENV->log_stat() method creates a statistical structure of type DB_LOG_STAT and copies a pointer to it into user-specified memory. The following DB_LOG_STAT fields will be filled in:
u_int32_t st_cur_file;
The current log file number.
u_int32_t st_cur_offset;
The byte offset in the current log file.
u_int32_t st_disk_file;
The log file number of the last record known to be on disk.
u_int32_t st_disk_offset;
The byte offset of the last record known to be on disk.
u_int32_t st_fileid_init;
The initial allocated file logging identifiers.
u_int32_t st_lg_bsize;
The in-memory log record cache size.
u_int32_t st_lg_size;
The log file size.
u_int32_t st_magic;
The magic number that identifies a file as a log file.
u_int32_t st_maxcommitperflush;
The maximum number of commits contained in a single log flush.
u_int32_t st_maxnfileid;
The maximum number of file logging identifiers used.
u_int32_t st_mincommitperflush;
The minimum number of commits contained in a single log flush that contained a commit.
int st_mode;
The mode of any created log files.
u_int32_t st_nfileid;
The current number of file logging identifiers.
uintmax_t st_rcount;
The number of times the log has been read from disk.
uintmax_t st_record;
The number of records written to this log.
roff_t st_regsize;
The region size, in bytes.
uintmax_t st_region_wait;
The number of times that a thread of control was forced to wait before obtaining the log region mutex.
uintmax_t st_region_nowait;
The number of times that a thread of control was able to obtain the log region mutex without waiting.
uintmax_t st_scount;
The number of times the log has been flushed to disk.
u_int32_t st_version;
The version of the log file type.
u_int32_t st_w_bytes;
The number of bytes over and above st_w_mbytes written to this log.
u_int32_t st_w_mbytes;
The number of megabytes written to this log.
u_int32_t st_wc_bytes;
The number of bytes over and above st_wc_mbytes written to this log since the last checkpoint.
u_int32_t st_wc_mbytes;
The number of megabytes written to this log since the last checkpoint.
uintmax_t st_wcount_fill;
The number of times the log has been written to disk because the in-memory log record cache filled up.
uintmax_t st_wcount;
The number of times the log has been written to disk.
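A hedged usage sketch (not part of the reference page; it assumes an environment already opened with DB_INIT_LOG, and trims error handling):

```c
#include <stdio.h>
#include <stdlib.h>
#include <db.h>

/* Print a few logging statistics for an open environment. */
void print_log_stats(DB_ENV *dbenv)
{
    DB_LOG_STAT *sp;
    int ret;

    if ((ret = dbenv->log_stat(dbenv, &sp, 0)) != 0) {
        fprintf(stderr, "DB_ENV->log_stat: %s\n", db_strerror(ret));
        return;
    }
    printf("current log file:  %lu\n", (unsigned long)sp->st_cur_file);
    printf("current offset:    %lu\n", (unsigned long)sp->st_cur_offset);
    printf("flushes to disk:   %lu\n", (unsigned long)sp->st_scount);

    free(sp);  /* the statistics structure is allocated for the caller */
}
```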
The DB_ENV->log_stat() method may not be called before the DB_ENV->open() method is called.
The DB_ENV->log_stat() method returns a non-zero error value on failure and 0 on success.
The statp parameter references memory into which a pointer to the allocated statistics structure is copied.
The DB_ENV->log_stat() method may fail and return one of the following non-zero errors:
The completion time in seconds since the last frame (Read Only).
This property provides the time between the current and previous frame.
Use Time.deltaTime to move a GameObject in the y direction, at n units per second. Multiply n by Time.deltaTime and add to the y component.

MonoBehaviour.FixedUpdate uses fixedDeltaTime instead of deltaTime. Do not rely on Time.deltaTime inside MonoBehaviour.OnGUI: Unity can call OnGUI multiple times per frame, and the application uses the same deltaTime value per call.
The following example implements a timer. The timer adds deltaTime each frame. The example announces the timer value and resets it to zero when it reaches 2 seconds. The timer does not land exactly on 2.0 when MonoBehaviour.Update adds deltaTime; the test is for the timer passing 2 seconds. The script code subtracts the reported time and the timer restarts, so the restart is not always exactly 0.0. The speed of time can be changed between 0.5 and 2.0; Time.timeScale stores the chosen time-is-passing scale.
using System.Collections; using System.Collections.Generic; using UnityEngine;
// Time.deltaTime example. // // Wait two seconds and display waited time. // This is typically just beyond 2 seconds. // Allow the speed of the time to be increased or decreased. // It can range between 0.5 and 2.0. These changes only // happen when the timer restarts.
public class ScriptExample : MonoBehaviour { private float waitTime = 2.0f; private float timer = 0.0f; private float visualTime = 0.0f; private int width, height; private float value = 10.0f; private float scrollBar = 1.0f;
void Awake() { width = Screen.width; height = Screen.height; Time.timeScale = scrollBar; }
void Update() { timer += Time.deltaTime;
// Check if we have reached beyond 2 seconds. // Subtracting two is more accurate over time than resetting to zero. if (timer > waitTime) { visualTime = timer;
// Remove the recorded 2 seconds. timer = timer - waitTime; Time.timeScale = scrollBar; } }
void OnGUI() { GUIStyle sliderDetails = new GUIStyle(GUI.skin.GetStyle("horizontalSlider")); GUIStyle sliderThumbDetails = new GUIStyle(GUI.skin.GetStyle("horizontalSliderThumb")); GUIStyle labelDetails = new GUIStyle(GUI.skin.GetStyle("label"));
// Set the size of the fonts and the width/height of the slider. labelDetails.fontSize = 6 * (width / 200); sliderDetails.fixedHeight = height / 32; sliderDetails.fontSize = 12 * (width / 200); sliderThumbDetails.fixedHeight = height / 32; sliderThumbDetails.fixedWidth = width / 32;
// Show the slider. Make the scale to be ten times bigger than the needed size. value = GUI.HorizontalSlider(new Rect(width / 8, height / 4, width - (4 * width / 8), height - (2 * height / 4)), value, 5.0f, 20.0f, sliderDetails, sliderThumbDetails);
// Show the value from the slider. Make sure that 0.5, 0.6... 1.9, 2.0 are shown. float v = ((float)Mathf.RoundToInt(value)) / 10.0f; GUI.Label(new Rect(width / 8, height / 3.25f, width - (2 * width / 8), height - (2 * height / 4)), "timeScale: " + v.ToString("f1"), labelDetails); scrollBar = v;
// Display the recorded time in a certain size. labelDetails.fontSize = 14 * (width / 200); GUI.Label(new Rect(width / 8, height / 2, width - (2 * width / 8), height - (2 * height / 4)), "Timer value is: " + visualTime.ToString("f4") + " seconds.", labelDetails); } }
Agenda
See also: IRC log
Steven: Roland Merrick of IBM will be joining me as co-chair; he has a lot of experience in the Forms WG, and is very enthusiastic about joining us. He has an existing call at this time, so has some reorganization to do.
Here is the one from the last WG
Here is the latest one:
Shane: There is a mismatch between the documents section and the milestones
Steven: All the more reason to update the roadmap document
Mark: It looks like the documents section is OK, only the milestones are missing some
<scribe> ACTION: Steven to update the roadmap document [recorded in]
Steven: Do we think this is a good idea?
... example:
Shane: It's a good idea
<scribe> ACTION: Steven investigate starting a blog and wiki [recorded in]
Steven: I saw this as an action from last week
Shane: The final version of role is not ready, sorry
Tina: I had an action item to review the role
document
... I'm not convinced we need it
... I don't like representing semantics in attributes rather than elements
Steven: But it offers extensibility in semantics without having to constantly revise the language
Tina: I'm worried that people will create semantics with <div role=...
Steven: Agreed we have to talk about best practices
Rich: You can't stop people doing the wrong thing; at least we now have a way to extract the real semantics, rather than having to guess
Steven: The nice thing that this offers is a link to the semantic web way of defining semantics
Tina: For people who don't even understand h1, rdf doesn't give them any value
Mark: I think you are missing the point
... you don't have to understand rdf
... there are values you can use
... so we have the best of both worlds: a predefined list, and hooks into rdf which makes it extensible for the future.
Tina: I'm worried that people will add
semantics with role
... if someone makes up their own semantics, there is no way to extract the real meaning
Steven: Yes there is! That's the nice thing about semweb tools: you can define the relationship between different semantics ('ontologies')
Rich: The nice thing about this approach is that you don't need 'skip to' links for instance, and the browser still offers a shortcut to the main content
Steven: Are you going to do a review, or was this it Tina?
Tina: The document is fine, I'm just worried about the principle.
Shane: You know my opinion; they should not call it XHTML unless they use modularization
Tina: There are two markup languages, HTML and XHTML. I'm not sure that HTML5 is even HTML, but that is a different discussion
Rich: I'm worried about the confusion. I have
no problems with HTML5/xml or so
... but not XHTML5
Mark: I don't see why they need two names. They have HTML5, with two serializations. No need for two names
Tina: I agree with the problem of confusion
... I've already seen it amongst developers.
Rich: All existing XHTMLs have been modular, and HTML5 is not. It's a mess.
Yam: It doesn't make any sense for them to produce something called XHTML5
RESOLUTION: We agree that the HTML WG should not use the XHTML name to refer to their XML serialization.
Steven: And now the namespace
... we were criticised in the past for having a different namespace, and therefore we changed it back (correctly in my view)
<ShaneM> I concur.
Rich: I think HTML5 is not backwards compatible
Tina: Agreed, especially with elements changing meaning
Steven: I believe that XHTML2 is more backwards compatible than HTML5, and I plan to make a document comparing them to demonstrate it.
Rich: If the browser manufacturers are going to have to make all these changes for audio, video, canvas and so on, what's the problem with a new namespace?
Steven: Good point
Steven: do the conversion to ASCII
Yam: Quite agree
Mark: Sounds like a good idea
<scribe> ACTION: Steven to get advice from I18N group about international form of URIs [recorded in] | http://www.w3.org/2007/06/20-xhtml-minutes | CC-MAIN-2016-22 | refinedweb | 697 | 64.75 |
I have a directory with ~10,000 image files from an external source.
Many of the filenames contain spaces and punctuation marks that are not DB friendly or Web friendly. I also want to append a SKU number to the end of every filename (for accounting purposes). Many, if not most of the filenames also contain extended latin characters which I want to keep for SEO purposes (specifically so the filenames accurately represent the file contents in Google Images)
I have made a bash script which renames (copies) all the files to my desired result. The bash script is saved in UTF-8. After running, it omits approx 500 of the files (unable to stat file...). How can I tell what character encoding these filenames use?
The only way I've been able to figure out myself is by setting my terminal encoding to UTF-8, then iterating through all the likely candidate encodings with convmv until it displays a converted name that 'looks right'. I have no way to be certain that these 500 files all use the same encoding, so I would need to repeat this process 500 times. I would like a more automated method than 'looks right' !!!
There's no 100% accurate way really, but there's a way to give a good guess.
There is a python library chardet which is available here:
e.g.
See what the current LANG variable is set to:
$ echo $LANG
en_IE.UTF-8
Create a filename that'll need to be encoded with UTF-8
$ touch mÉ.txt
Change our encoding and see what happens when we try and list it
$ ls m*
mÉ.txt
$ export LANG=C
$ ls m*
m??.txt
OK, so now we have a filename encoded in UTF-8 and our current locale is C (standard Unix codepage).
So start up python, import chardet and get it to read the filename. I'm use some shell globbing (i.e. expansion through the * wildcard character) to get my file. Change "ls m*" to whatever will match one of your example files.
>>> import chardet
>>> import os
>>> chardet.detect(os.popen("ls m*").read())
{'confidence': 0.505, 'encoding': 'utf-8'}
As you can see, it's only a guess. How good a guess is shown by the "confidence" variable.
You may find this useful, to test the current working directory (python 2.7):
import chardet
import os
for n in os.listdir('.'):
    print '%s => %s (%s)' % (n, chardet.detect(n)['encoding'], chardet.detect(n)['confidence'])
Result looks like:
Vorlagen => ascii (1.0)
examples.desktop => ascii (1.0)
Öffentlich => ISO-8859-2 (0.755682154041)
Videos => ascii (1.0)
.bash_history => ascii (1.0)
Arbeitsfläche => EUC-KR (0.99)
To recurse trough path from current directory, cut-and-paste this into a little python script:
#!/usr/bin/python
import chardet
import os
for root, dirs, names in os.walk('.'):
    print root
    for n in names:
        print '%s => %s (%s)' % (n, chardet.detect(n)['encoding'], chardet.detect(n)['confidence'])
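On Python 3 (a later addition, not part of the original answer) you can work with the raw bytes of a filename directly, since os.listdir(b'.') returns bytes. Even without chardet you can shortlist plausible encodings with the standard library alone:

```python
# Try to decode a raw (bytes) filename with several likely codecs and
# report which ones succeed; in practice the bytes come from os.listdir(b'.').
CANDIDATES = ["utf-8", "cp1252", "latin-1", "mac_roman"]

def plausible_encodings(raw):
    """Return the candidate encodings that decode the raw filename cleanly."""
    hits = []
    for enc in CANDIDATES:
        try:
            raw.decode(enc)
        except UnicodeDecodeError:
            continue
        hits.append(enc)
    return hits

# 'É' as latin-1 is the lone byte 0xC9, which is not valid UTF-8:
print(plausible_encodings(b"m\xc9.txt"))                    # ['cp1252', 'latin-1', 'mac_roman']
# whereas the UTF-8 spelling decodes as UTF-8 (and, less usefully,
# as any codec that accepts every byte):
print(plausible_encodings("mÉ.txt".encode("utf-8"))[0])     # utf-8
```

Like chardet, this only narrows things down: latin-1 decodes any byte sequence, so a successful decode is a hint, not proof.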
Formatting Numbers
Hi guys,
I'm new on developing software on Qt. I would like to get some numbers formatted as string with 2 digits for every numbers. This to get the numbers with value < 9 formatted as "09".
I'm using the lines of code below but they don't give me back what I would like to obtain and I can't understand where the mistake is. Please could you help me and check my code?
@
uint hours=1, minutes=2, seconds=3;
QString str = QString("%1:%2:%3")
.arg(hours, 1, 10)
.arg(minutes, 2, 10, '0')
.arg(seconds, 2, 10, '0');
ui->chronoDisplay->setText(str); //chronoDisplay is a QLabel
//I'm expecting that str has got this format: "1:02:03"@
Thank you in advance.
I guess that this is getting in your way:
[quote]
If fillChar is '0' (the number 0, ASCII 48), the locale's zero is used.
[/quote]
(from the [[doc:QString]] documentation)
I am wondering, why you are not using [[doc:QTime]] to format your time:
@
QTime t(hours, minutes, seconds);
ui->chronoDisplay->setText(t.toString("h:mm::ss"));
@
Edit: fixed second line of above code (replaced QTime:: with t. )
[quote author="Andre" date="1333447407"]I guess that this is getting in your way: [quote] If fillChar is '0' (the number 0, ASCII 48), the locale's zero is used. [/quote] (from the [[doc:QString]] documentation) I am wondering, why you are not using QTime to format your time: @ QTime t(hours, minutes, seconds); ui->chronoDisplay->setText(QTime::toString("h:mm::ss")); @ [/quote]
Just because I didn't know this possibility... Now that I know it I'm going to try to use it!
Thank you for the advice!
Hi Andre,
I've followed your advice but I've got the same result.
I had forgotten to say that I want this application to run under symbian 3 OS. Could be this the problems?
Could be. Could you check the ouput of QLocale::systemLocale().zeroDigit() ?
If that is a space, then I guess you have your culprit...
I don't think so, because both techniques I've followed have given me back a string filled with "0"; the problem is that the number of 0s returned is more than expected!
Despite the fact I think the problem is different, how can I set the zeroDigit() property to be sure that it returns a '0'?
You can't set that value from inside Qt. It is set by the system. However, could you show us what your output is now then? I was under the impression from your previous posts that you did not get enough 0's. Now, you tell us, you get too many. So how does your output look?
I would like to show you an image which shows the output I'm receiving on the device simulator, but how can I link this image?
Just put the image on some public place (I use my public DropBox folder), and you can link it into your message using the small picture icon in the bar above the editor window.
Ok, this is the result I'm gettim from the simulator after the code is processed.
!(nokia simulator image)!
this is the URL of the image:
[quote author="Andre" date="1333447407"]@ QTime t(hours, minutes, seconds); ui->chronoDisplay->setText(QTime::toString("h:mm::ss")); @ [/quote]
Why this code worked just one time and now it doesn't work again?
This is the error I'm receiving when I try to use the code above
C:\Documents and Settings\tlp31\Documenti\C++_Project\chrono-build-simulator..\chrono\mainwindow.cpp:120: error: cannot call member function 'QString QTime::toString(const QString&) const' without object
and this is the code:
@ QTime t(hours, minutes, seconds);
ui->chronoDisplay->setText(QTime::toString("h:mm::ss"));@
You need to call t::toString, not QTime::toString in your second line. The error message describes that quite clearly, I think.
Note: I now see that I wrote it wrong in my example. My apologies.
I'm sorry but it still doesn't work. The compiler says: [..] 121: error: 't' is not a class or namespace
How I have to initialize a QTime class properly?
@
QTime time(hours, minutes, seconds);
ui->chronoDisplay->setText(time.toString("h:mm:ss"));
@
This should work.
Yes it works, and with this modification also the UI is working properly.
Thank you Andre for your support in this matter. | https://forum.qt.io/topic/15554/formatting-numbers | CC-MAIN-2018-05 | refinedweb | 744 | 73.88 |
First we need some includes to access the normal distribution, the algorithms to find location and scale (and some std output of course).
#include <boost/math/distributions/normal.hpp> // for normal_distribution
using boost::math::normal; // typedef provides default type is double.
#include <boost/math/distributions/cauchy.hpp> // for cauchy_distribution
using boost::math::cauchy; // typedef provides default type is double.
#include <boost/math/distributions/find_location.hpp>
using boost::math::find_location;
#include <boost/math/distributions/find_scale.hpp>
using boost::math::find_scale;
using boost::math::complement;
using boost::math::policies::policy;

#include <iostream>
using std::cout; using std::endl;
using std::left; using std::showpoint; using std::noshowpoint;
#include <iomanip>
using std::setw; using std::setprecision;
#include <limits>
using std::numeric_limits;
Consider an example from K Krishnamoorthy, Handbook of Statistical Distributions with Applications, ISBN 1-58488-635-8, (2006) p 126, example 10.3.7.
"A machine is set to pack 3 kg of ground beef per pack. Over a long period of time it is found that the average packed was 3 kg with a standard deviation of 0.1 kg. Assume the packing is normally distributed."
We start by constructing a normal distribution with the given parameters:
double mean = 3.; // kg
double standard_deviation = 0.1; // kg
normal packs(mean, standard_deviation);
We can then find the fraction (or %) of packages that weigh more than 3.1 kg.
double max_weight = 3.1; // kg
cout << "Percentage of packs > " << max_weight << " is "
  << cdf(complement(packs, max_weight)) * 100. << endl; // P(X > 3.1)
We might want to ensure that 95% of packs are over a minimum weight specification, then we want the value of the mean such that P(X < 2.9) = 0.05.
Using the mean of 3 kg, we can estimate the fraction of packs that meet the specification of 2.9 kg.

double minimum_weight = 2.9;
cout << "Fraction of packs >= " << minimum_weight << " with a mean of " << mean
  << " is " << cdf(complement(packs, minimum_weight)) << endl;
// fraction of packs >= 2.9 with a mean of 3 is 0.841345
This is only 0.84 - less than the target fraction of 0.95. If we want 95% to be over the minimum weight, what should we set the mean weight to be?
Using the KK StatCalc program supplied with the book and the method given on page 126 gives 3.06449.
We can confirm this by constructing a new distribution which we call 'xpacks' with a safety margin mean of 3.06449 thus:
double over_mean = 3.06449; normal xpacks(over_mean, standard_deviation); cout << "Fraction of packs >= " << minimum_weight << " with a mean of " << xpacks.mean() << " is " << cdf(complement(xpacks, minimum_weight)) << endl; // fraction of packs >= 2.9 with a mean of 3.06449 is 0.950005
Using this Math Toolkit, we can calculate the required mean directly thus:
double under_fraction = 0.05; // so 95% are above the minimum weight: mean - sd = 2.9
double low_limit = standard_deviation;
double offset = mean - low_limit - quantile(packs, under_fraction);
double nominal_mean = mean + offset; // mean + (mean - low_limit - quantile(packs, under_fraction));
normal nominal_packs(nominal_mean, standard_deviation);
cout << "Setting the packer to " << nominal_mean << " will mean that "
  << "fraction of packs >= " << minimum_weight << " is "
  << cdf(complement(nominal_packs, minimum_weight)) << endl;
// Setting the packer to 3.06449 will mean that fraction of packs >= 2.9 is 0.95
This calculation is generalized as the free function called find_location.
To use this we will need to
#include <boost/math/distributions/find_location.hpp>
using boost::math::find_location;
and then use find_location function to find safe_mean, & construct a new normal distribution called 'goodpacks'.
double safe_mean = find_location<normal>(minimum_weight, under_fraction, standard_deviation);
normal good_packs(safe_mean, standard_deviation);
with the same confirmation as before:
cout << "Setting the packer to " << nominal_mean << " will mean that " << "fraction of packs >= " << minimum_weight << " is " << cdf(complement(good_packs, minimum_weight)) << endl; // Setting the packer to 3.06449 will mean that fraction of packs >= 2.9 is 0.95
After examining the weight distribution of a large number of packs, we might decide that, after all, the assumption of a normal distribution is not really justified. We might find that the fit is better to a Cauchy Distribution. This distribution has wider 'wings', so that whereas most of the values are closer to the mean than the normal, there are also more values than 'normal' that lie further from the mean than the normal.
This might happen because a larger than normal lump of meat is either included or excluded.
We first create a Cauchy Distribution with the original mean and standard deviation, and estimate the fraction that lie below our minimum weight specification.
cauchy cpacks(mean, standard_deviation); cout << "Cauchy Setting the packer to " << mean << " will mean that " << "fraction of packs >= " << minimum_weight << " is " << cdf(complement(cpacks, minimum_weight)) << endl; // Cauchy Setting the packer to 3 will mean that fraction of packs >= 2.9 is 0.75
Note that far fewer of the packs meet the specification, only 75% instead of 95%. Now we can repeat the find_location, using the cauchy distribution as template parameter, in place of the normal used above.
double lc = find_location<cauchy>(minimum_weight, under_fraction, standard_deviation); cout << "find_location<cauchy>(minimum_weight, over fraction, standard_deviation); " << lc << endl; // find_location<cauchy>(minimum_weight, over fraction, packs.standard_deviation()); 3.53138
Note that the safe_mean setting needs to be much higher, 3.53138 instead of 3.06449, so we will make rather less profit.
And again confirm that the fraction meeting specification is as expected.
cauchy goodcpacks(lc, standard_deviation); cout << "Cauchy Setting the packer to " << lc << " will mean that " << "fraction of packs >= " << minimum_weight << " is " << cdf(complement(goodcpacks, minimum_weight)) << endl; // Cauchy Setting the packer to 3.53138 will mean that fraction of packs >= 2.9 is 0.95
Finally we could estimate the effect of a much tighter specification, that 99% of packs met the specification.
cout << "Cauchy Setting the packer to " << find_location<cauchy>(minimum_weight, 0.99, standard_deviation) << " will mean that " << "fraction of packs >= " << minimum_weight << " is " << cdf(complement(goodcpacks, minimum_weight)) << endl;
Setting the packer to 3.13263 will mean that fraction of packs >= 2.9 is 0.99, but will more than double the mean loss from 0.0644 to 0.133 kg per pack.
Of course, this calculation is not limited to packs of meat, it applies to dispensing anything, and it also applies to a 'virtual' material like any measurement.
The only caveat is that the calculation assumes that the standard deviation (scale) is known with reasonably low uncertainty, something that is not so easy to ensure in practice, and that the distribution itself is well chosen, be it a Normal Distribution, a Cauchy Distribution, or some other.
If one is simply dispensing a very large number of packs, then it may be feasible to measure the weight of hundreds or thousands of packs. With a healthy 'degrees of freedom', the confidence intervals for the standard deviation are not too wide, typically about + and - 10% for hundreds of observations.
For other applications, where it is more difficult or expensive to make many observations, the confidence intervals are depressingly wide.
See Confidence Intervals on the standard deviation for a worked example chi_square_std_dev_test.cpp of estimating these intervals.
Alternatively, we could invest in a better (more precise) packer (or measuring device) with a lower standard deviation, or scale.
This might cost more, but would reduce the amount we have to 'give away' in order to meet the specification. Suppose we guess that a better packer might achieve a standard deviation of 0.05 kg, and construct a distribution pack05 to check.
normal pack05(mean, 0.05);
cout << "Quantile of " << p << " = " << quantile(pack05, p)
  << ", mean = " << pack05.mean() << ", sd = " << pack05.standard_deviation() << endl;
// Quantile of 0.05 = 2.91776, mean = 3, sd = 0.05
cout << "Fraction of packs >= " << minimum_weight << " with a mean of " << mean
  << " and standard deviation of " << pack05.standard_deviation()
  << " is " << cdf(complement(pack05, minimum_weight)) << endl;
// Fraction of packs >= 2.9 with a mean of 3 and standard deviation of 0.05 is 0.97725
So 0.05 was quite a good guess, but we are a little over the 2.9 target, so the standard deviation could be a tiny bit more. So we could do some more guessing to get closer, say by increasing standard deviation to 0.06 kg, constructing another new distribution called pack06.
normal pack06(mean, 0.06);
cout << "Quantile of " << p << " = " << quantile(pack06, p)
  << ", mean = " << pack06.mean() << ", sd = " << pack06.standard_deviation() << endl;
// Quantile of 0.05 = 2.90131, mean = 3, sd = 0.06
cout << "Fraction of packs >= " << minimum_weight << " with a mean of " << mean
  << " and standard deviation of " << pack06.standard_deviation()
  << " is " << cdf(complement(pack06, minimum_weight)) << endl;
// Fraction of packs >= 2.9 with a mean of 3 and standard deviation of 0.06 is 0.95221
Now we are getting really close, but to do the job properly, we might need to use root finding method, for example the tools provided, and used elsewhere, in the Math Toolkit, see Root Finding Without Derivatives.
But in this (normal) distribution case, we can and should be even smarter and make a direct calculation.
Our required limit is minimum_weight = 2.9 kg, often called the random variate z. For a standard normal distribution, the probability is p = N((minimum_weight - mean) / sd).
We want to find the standard deviation that would be required to meet this limit, so that the p th quantile is located at z (minimum_weight). In this case, the 0.05 (5%) quantile is at 2.9 kg pack weight, when the mean is 3 kg, ensuring that 0.95 (95%) of packs are above the minimum weight.
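In symbols (writing Φ for the standard normal CDF and Φ⁻¹ for its quantile function; this notation is an addition, not from the original page):

```latex
p = \Phi\!\left(\frac{z - \mu}{\sigma}\right)
\qquad\Longrightarrow\qquad
\sigma = \frac{z - \mu}{\Phi^{-1}(p)}
```

With z = 2.9, μ = 3 and p = 0.05, Φ⁻¹(0.05) ≈ −1.6449, so σ ≈ (2.9 − 3) / (−1.6449) ≈ 0.0608.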
Rearranging, we can directly calculate the required standard deviation:
normal N01; // standard normal distribution with mean zero and unit standard deviation.
p = 0.05;
double qp = quantile(N01, p);
double sd95 = (minimum_weight - mean) / qp;
cout << "For the " << p << "th quantile to be located at " << minimum_weight
  << ", would need a standard deviation of " << sd95 << endl;
// For the 0.05th quantile to be located at 2.9, would need a standard deviation of 0.0607957
We can now construct a new (normal) distribution pack95 for the 'better' packer, and check that our distribution will meet the specification.
normal pack95(mean, sd95); cout <<"Fraction of packs >= " << minimum_weight << " with a mean of " << mean << " and standard deviation of " << pack95.standard_deviation() << " is " << cdf(complement(pack95, minimum_weight)) << endl; // Fraction of packs >= 2.9 with a mean of 3 and standard deviation of 0.0607957 is 0.95
This calculation is generalized in the free function find_scale, as shown below, giving the same standard deviation.
double ss = find_scale<normal>(minimum_weight, under_fraction, packs.mean()); cout << "find_scale<normal>(minimum_weight, under_fraction, packs.mean()); " << ss << endl; // find_scale<normal>(minimum_weight, under_fraction, packs.mean()); 0.0607957
If we had defined an over_fraction, or percentage that must pass specification
double over_fraction = 0.95;
And (wrongly) written
double sso = find_scale<normal>(minimum_weight, over_fraction, packs.mean());
With the default policy, we would get a message like
Message from thrown exception was: Error in function boost::math::find_scale<Dist, Policy>(double, double, double, Policy): Computed scale (-0.060795683191176959) is <= 0! Was the complement intended?
But this would return a negative standard deviation - obviously impossible. The probability should be 1 - over_fraction, not over_fraction, thus:
double ss1o = find_scale<normal>(minimum_weight, 1 - over_fraction, packs.mean()); cout << "find_scale<normal>(minimum_weight, under_fraction, packs.mean()); " << ss1o << endl; // find_scale<normal>(minimum_weight, under_fraction, packs.mean()); 0.0607957
But notice that using '1 - over_fraction' will lead to a loss of accuracy, especially if over_fraction was close to unity. In this (very common) case, we should instead use the complements, giving the most accurate result.
double ssc = find_scale<normal>(complement(minimum_weight, over_fraction, packs.mean())); cout << "find_scale<normal>(complement(minimum_weight, over_fraction, packs.mean())); " << ssc << endl; // find_scale<normal>(complement(minimum_weight, over_fraction, packs.mean())); 0.0607957
Note that our guess of 0.06 was close to the accurate value of 0.060795683191176959.
We can again confirm our prediction thus:
normal pack95c(mean, ssc); cout <<"Fraction of packs >= " << minimum_weight << " with a mean of " << mean << " and standard deviation of " << pack95c.standard_deviation() << " is " << cdf(complement(pack95c, minimum_weight)) << endl; // Fraction of packs >= 2.9 with a mean of 3 and standard deviation of 0.0607957 is 0.95
Notice that these two deceptively simple questions:
and/or
are actually extremely common.
The weight of beef might be replaced by a measurement of more or less anything, from drug tablet content, Apollo landing rocket firing, X-ray treatment doses...
The scale can be variation in dispensing or uncertainty in measurement.
See find_mean_and_sd_normal.cpp for full source code & appended program output. | http://www.boost.org/doc/libs/1_43_0/libs/math/doc/sf_and_dist/html/math_toolkit/dist/stat_tut/weg/find_eg/find_mean_and_sd_eg.html | CC-MAIN-2013-20 | refinedweb | 2,037 | 50.43 |
Grok, sqlalchemy and traversal¶
Tags: grokkerdam2008, plone
Martijn Faassen showed a grok + sqlalchemy application he's been working on. This got everyone into the spirit and more importantly into the problem domain. Something he used a lot was custom traversal to navigate the relational database model with URLs. With some quick pseudocode:
from zope.location.location import located

class Something(grok.Model):
    def traverse(self, name):
        item = session.query(...).first()  # or .one()
        if item is None:
            return
        return located(item)
Storm is an alternative to sqlalchemy. Storm seems to be oriented more towards the needs of big complex projects. Sqlalchemy seems much more active open source community-wise. Storm is less "friendly" as you have to / can configure everything, just like zope3. Sqlalchemy seems more grok-like in outlook. At least, that's what I extracted from the discussion.
An outcome of the discussion was that grok needs a way to make it really easy to
traverse to attributes. Unrelated to, but useful for, relational db
integration.
grok.traversable("something") was suggested. So this is in
addition to the normal container traversal which effectively works with
dictionary access (
container["subitem_id"]). A shorthand that prevents you
from having to write your own traverse() method in many common cases: so very
grok-like.
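A rough, dependency-free sketch of what such a shorthand might automate (all class and attribute names here are made up for illustration; this is not actual grok API):

```python
# Hypothetical sketch of attribute traversal: a traverse() fallback that
# only exposes attributes explicitly marked as traversable, which is
# roughly what a grok.traversable("something") declaration would automate.
class Model:
    traversable = ()  # attribute names reachable via URL traversal

    def traverse(self, name):
        if name in self.traversable:
            return getattr(self, name)
        return None  # fall through to normal (container) lookup


class Flight(Model):
    traversable = ("pilot",)

    def __init__(self):
        self.pilot = "Amelia"
        self.secret = "not exposed"


flight = Flight()
print(flight.traverse("pilot"))   # -> Amelia
print(flight.traverse("secret"))  # -> None
```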
A useful warning came up in the discussion. In zope, it is really easy to see everything as folders. In sql, you're making it all relational. Both are not 100% right object-oriented wise. Watch out for brainwashing based on your environment.
We went through the zope api docs for zope.app.container, writing down which sqlalchemy information we ought to use for treating the mapped sqlalchemy results as a container. This helped us to flush out some corner cases.
An essential point is to use sqlalchemy and its ORM mapper for the things that it is good at. As grok we won't do the sqlalchemy work, we'll just provide extra grok-specific functionality like traversal and folderish behaviour. Folderish behaviour means, as an example, handling a "delete" correctly. Correctness should be handled on the sqlalchemy level: if you configure your mapper right, the right thing will happen. That might sound dead-pan but it is, actually.
Grok also means: "not invented here: in a positive way". What other people figured out and did right is what we don't have to re-invent. So we won't invent it here.
Something that we'd need to figure out: how and where to define the classes, tables and mappers? Do we want it all easily in one class? Do we need to grok something?
Grok needs the fields for the add and edit forms. We didn't like defining an interface for the fields as that's partially double, partially potentially late-binding, etc. So something simple like:
def form_fields(self):
    return grok.Fields(rdb.Schema(OurClass))
There was much more, but this ought to do for a summary. I didn't contribute much myself (especially since I have basically no grok experience!). I did learn a lot by listening, picking up useful concepts and ways of working. Nice sprint till now! | http://reinout.vanrees.org/weblog/2008/05/01/grok-sqlalchemy-and-traversal.html | crawl-002 | refinedweb | 526 | 60.21 |
Add a PowerShell module to manage Windows Updates
Add a PowerShell module to manage Windows Updates like you can now with the GUI. Like checking for updates, installing all updates, or a selection of updates, creating a report of pending updates, etc. The Windows Update options in the Server Configuration Manager (sconfig) are very limited.
We don’t own the creation of modules for operating system features. I’ll mark this as Survey so we can provide this information to the WU team, but you might also want to consider filing this in the Windows 10 Feedback Hub. Also, see the community-created PSWindowsUpdate link below :)
PLEEEEEEEAAAAAAAAAASE!!!
PSWindowsUpdate is no good when you are managing a server remotely unfortunately. There is a hack which consists in creating a scheduled task and to trigger that task but then we do not have access to the output of the WU commands. And in any case something as central to the management of windows server as managing updates shouldn't rely on a hack. Particularly compared to how easy it is to do the same on a linux server / ssh.
Hi Karl, the best place for WSUS feedback is really in the Feedback Hub.
Karl Wester-Ebbinghaus (@tweet_alqamar) commented
@Zachary can anyone have a look at my WSUS ideas? would be nice to have your feedback.
I second that request. I know about the WMI capabilities and PSWindowsUpdate. A built-in module would ease administration a lot.
This is possible now but requires an understanding of WQL, WMI, and experience with the SMS namespace.
Christopher Zahrobsky commented
Please do not forget the all-important "Uninstall-WUpdate -Name KB3205401"
Luc FULLENWARTH commented | https://windowsserver.uservoice.com/forums/301869-powershell/suggestions/17728717-add-a-powershell-module-to-manage-windows-updates | CC-MAIN-2019-35 | refinedweb | 278 | 56.25 |
kig
#include <bogus_imp.h>
Detailed Description
This ObjectImp is a BogusImp containing only a string value.
Definition at line 167 of file bogus_imp.h.
Member Typedef Documentation
Definition at line 176 of file bogus_imp.h.
Constructor & Destructor Documentation
Construct a new StringImp containing the string d.
Definition at line 54 of file bogus_imp.cc.
Member Function Documentation
Reimplemented from ObjectImp.
Definition at line 208 of file bogus_imp.cc.
Returns a copy of this ObjectImp.
The copy is an exact copy. Changes to the copy don't affect the original.
Reimplemented in TestResultImp.
Definition at line 69 of file bogus_imp.cc.
Get hold of the contained data.
Definition at line 186 of file bogus_imp.h.
Returns true if this ObjectImp is equal to rhs.
This function checks whether rhs is of the same ObjectImp type, and whether it contains the same data as this ObjectImp.
It is used e.g. by the KigCommand stuff to see what the user has changed during a move.
Reimplemented in TestResultImp.
Definition at line 175 of file bogus_imp.cc.
Reimplemented from ObjectImp.
Definition at line 103 of file bogus_imp.cc.
Set the contained data.
Definition at line 190 of file bogus_imp.h.
Returns the ObjectImpType representing the StringImp type.
Definition at line 220 of file bogus_imp.cc.
Returns the lowermost ObjectImpType that this object is an instantiation of.
E.g. if you want to get a string containing the internal name of the type of an object, you can do:
Reimplemented in TestResultImp.
Definition at line 255 of file bogus_imp.cc.
Reimplemented in TestResultImp.
Definition at line 133 of file bogus_imp.cc.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2020 The KDE developers.
Generated on Fri Jan 17 2020 03:27:12 by doxygen 1.8.7 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/4.14-api/kdeedu-apidocs/kig/html/classStringImp.html | CC-MAIN-2020-05 | refinedweb | 313 | 63.05 |
ImportError with --autogenerate
I've got a very simple sqlalchemy project all living in 1 file (for now), including my Base object. I am interested in autogenerating migrations for this project.
As the tutorial specifies, I have imported the project file in env.py and set target_metadata to point at Base.metadata in the imported module. It is giving me an ImportError on the module name, however. To clarify, all my code is in foo.py. env.py has two lines looking like:
from foo import Base target_metadata = Base.metadata
Then when I run:
alembic revision --autogenerate -m "add initial revision"
I get:
ImportError: No module named foo
What am I doing wrong, if anything?
OK this is not really any kind of "bug" in alembic, if you have more issues along this line it's probably best if you post on the mailing list at .
the short answer is "alembic" is a console script, not unlike if you just ran "python" by itself, which then imports the alembic library, which then runs your env.py. So within env.py, if you want to import some other library, such as your "foo", your .py file has to be importable as a module. The quickest path to this is to just set the PYTHONPATH environment variable to point to the directory where your foo.py is present, or in your env.py, to manipulate sys.path to have this location. The more comprehensive way to go is to use virtualenv, where you'd install your foo.py as an application with "python setup.py develop", which implies you'd have a setup.py and such.
an overview of how to have a .py file importable as a module is at .
OH I see, that makes a lot of sense. What I have is structurally a module, but it wasn't anywhere env.py knew to look in. Thanks for your patient and prompt response! | https://bitbucket.org/zzzeek/alembic/issues/87/importerror-with-autogenerate | CC-MAIN-2018-13 | refinedweb | 323 | 77.33 |
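A minimal, self-contained demonstration of the sys.path approach (the directory and module contents below are stand-ins for illustration); in a real project, the same sys.path.insert line near the top of env.py makes a sibling foo.py importable:

```python
import os
import sys
import tempfile

# Stand-in for the project directory that holds foo.py; in a real
# project this would be the folder containing your application code.
project_dir = tempfile.mkdtemp()
with open(os.path.join(project_dir, "foo.py"), "w") as f:
    f.write("Base = 'our declarative Base stand-in'\n")

# This is the line you would put near the top of env.py, pointing at
# wherever foo.py actually lives.
sys.path.insert(0, project_dir)

from foo import Base
print(Base)  # -> our declarative Base stand-in
```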
Believe it or not, you don’t need CSS
@media queries. In this article, learn a few lines of JavaScript that will allow you to mimic the features of
@media queries and build fully responsive websites.
The biggest reason developers don't opt for inline styles in their React projects is that they lack some notable CSS features:
:hover,
:first-childand other pseudo-selectors
@keyframeanimations
@mediaqueries
The usual solution is to use CSS-in-JS libraries like styled-components or Radium to get these missing features. It works, however, I think the broader question is, “Are these really valid critiques of inline styles when the argument relies on CSS jargon?”
While JavaScript can’t have
:hover it does have
onMouseOver. Or for
@keyframe… it has the Greensock animation library, or Popmotion. JavaScript can’t have some CSS features, but it’s always had a robust browser-native API.
If Ya Can’t Beat Em, JavaScript Em
You can even build
@media query-like behavior by just using vanilla JavaScript. It’s incredibly simple.
class Card extends Component {
  constructor() {
    super();
    this.mediaQuery = {
      desktop: 1200,
      tablet: 768,
      phone: 576,
    };
    this.state = { windowWidth: null };
  }

  componentDidMount() {
    window.addEventListener('resize', () => {
      this.setState({ windowWidth: document.body.clientWidth });
    });
  }

  render() {
    return (
      <div style={{
        width: this.state.windowWidth > this.mediaQuery.phone ? '50%' : '100%',
        // more styling :)
      }}>
        <!-- <Card> contents -->
      </div>
    );
  }
}
The important part is
document.body.clientWidth which gives you the width of the browser. Just add a listener for the window “resize” event to update
windowWidth and voilà! There you have it… a fully responsive website!
Advanced 1990s Technology
document.body.clientWidth has been around for a very long time, yet it’s still useful today for building modern, responsive websites. And you can actually get more mileage from this simple solution.
In this next demo, a low-res image is used for smaller devices to save bandwidth, and desktop devices (larger than 768px) uses a high-res image. This is incredibly simple with JavaScript, whereas it would be quite difficult to tackle with
@media queries.
class ImageGallery extends Component {
  constructor() {
    super();
    this.mediaQuery = {
      desktop: 1200,
      tablet: 768,
      phone: 576,
    };
    this.state = { windowWidth: null };
  }

  componentDidMount() {
    window.addEventListener('resize', () => {
      this.setState({ windowWidth: document.body.clientWidth });
    });
  }

  render() {
    const isTablet = this.state.windowWidth < this.mediaQuery.tablet
    const imgUrl = isTablet
      ? ''
      : '' // 👈 this ternary right here

    return (
      <div>
        <h1>Mildly Inspirational Photos</h1>
        <div><img src={imgUrl}/></div>
      </div>
    )
  }
}
This is just one example that shows how versatile a JavaScript-driven solution can be. In fact, you can do a lot of other stuff that
@media queries aren’t able to, like:
- Show React components that are crafted specifically for smaller screens
- Show a modal to download the native iOS app
- Block specific routes that don’t work on a targeted device
We're using the Bootstrap breakpoints here, but you can tailor this for your project's needs.
Conclusion
Try out this JavaScript-driven solution! Inline styles are surprisingly flexible, and in some cases provide more expressive power than CSS.
🌠 See the demo for a more robust implementation that uses the Context API to store the browser width, as well as applying a throttle to the "resize" listener. | https://alligator.io/react/responsive-websites-without-css/ | CC-MAIN-2020-34 | refinedweb | 523 | 57.06 |
For an assignment I have to take a string as an input and write it as a file. Then, a function takes the string from the file and puts each word in a dictionary, with the value being the amount of times that word appears in the string. The words will then be printed in a "tower" (similar to a word cloud) with the size of each word based on the amount of times the word appears in the string.
These are the two important functions:
def word_freq_dict():  # function to count the amount of times a word is in the input string
    file = open("data_file.txt", 'r')
    readFile = file.read()  # reads file
    words = readFile.split()  # splits string into words, puts each word as an element in a list
    word_dict = {}  # empty dictionary for words to be placed in with the amount of times they appear
    for i in words:
        word_dict[i] = word_dict.get(i, 0) + 1  # adds items in "words" to a dictionary and amount of times they appear
    return word_dict

def word_tower():
    t = turtle.Turtle()
    t.hideturtle()  # hides cursor
    t.up()  # moves cursor up
    t.goto(-200, -200)  # starts at the -200,-200 position
    word_freq_dict()  # calls dictionary function
    for key, value in word_dict.items():
        t.write(key, font = ('Arial', value*10, 'normal'))
        t.up(1.5*len(key))
Calling a function doesn't automatically save anything in your current namespace. You have to explicitly assign it.
word_dict = word_freq_dict() | https://codedump.io/share/pIAxwwdSGWXl/1/how-do-i-access-a-dictionary-from-a-function-to-be-used-in-another-function | CC-MAIN-2017-26 | refinedweb | 238 | 72.87 |
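Putting the fix in context, here is a trimmed, runnable version of the counting function, with the file I/O replaced by a literal string so the example is self-contained:

```python
def word_freq_dict(text):
    # Count how many times each word appears in the given string.
    word_dict = {}
    for word in text.split():
        word_dict[word] = word_dict.get(word, 0) + 1
    return word_dict

# The return value must be assigned to a name before it can be used:
word_dict = word_freq_dict("to be or not to be")
for key, value in word_dict.items():
    print(key, value)
# -> to 2 / be 2 / or 1 / not 1
```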
logb, logbf, logbl - get exponent of a floating-point value
Synopsis
Description
Return Value
Errors
Colophon
#include <math.h>
These functions extract the exponent from the internal floating-point representation of x and return it as a floating-point value. The integer constant FLT_RADIX, defined in <float.h>, indicates the radix used for the system's floating-point representation. If FLT_RADIX is 2, logb(x) is equal to floor(log2(x)), except that it is probably faster.
If x is subnormal, logb() returns the exponent x would have if it were normalized.
See math_error(7) for information on how to determine whether an error has occurred when calling these functions.
The following errors can occur:
Pole error: x is zero. A divide-by-zero floating-point exception (FE_DIVBYZERO) is raised.
These functions do not set errno.
C99, POSIX.1-2001.
ilogb(3), log(3)
This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.sgvulcan.com/logbf.3.php | CC-MAIN-2017-09 | refinedweb | 156 | 58.69 |
Transforming Tags into Categorical Data
I’ve encountered a few instances where I need to make clean, dummied data columns from a column that contains a list of attributes. This notebook will go one step further and show an example of generating one such list from a bunch of string fields, generated by concatenating arbitrarily-many
<tag> objects together.
An Example
In the repo Building Machine Learning Powered Applications, the author has a slick chunk of code that takes a DataFrame containing a column with a bunch of tags (I’ve dropped everything else, for simplicity’s sake)
import pandas as pd

df = pd.read_csv('data/writers.csv')
df.head()
and does a bunch of
pandas magic to transform it into neat columns of popular tags that they use for their modelling.
# Select our tags, represented as strings, and transform them into arrays of tags
tags = df["Tags"]
clean_tags = tags.str.split("><").apply(
    lambda x: [a.strip("<").strip(">") for a in x])

# Use pandas' get_dummies to get dummy values
# select only tags that appear over 500 times
tag_columns = pd.get_dummies(clean_tags.apply(pd.Series).stack()).sum(level=0)
all_tags = tag_columns.astype(bool).sum(axis=0).sort_values(ascending=False)
top_tags = all_tags[all_tags > 500]
top_tag_columns = tag_columns[top_tags.index]

final = pd.concat([df, top_tag_columns], axis=1)
final.head()
However, that dense chunk of code is doing a ton, so let’s break it down step by step
Less Magic
For starters, they turn the tag strings into lists of strings with a simple
apply() call and some
>< hunting
tags = df['Tags'] clean_tags = tags.str.split('><').apply( lambda x: [a.strip('<').strip('>') for a in x] ) clean_tags.head()
0 [resources, first-time-author] 1 [fiction, grammatical-person, third-person] 2 [publishing, novel, agent] 3 [plot, short-story, planning, brainstorming] 4 [fiction, genre, categories] Name: Tags, dtype: object
Dummying
This next one is a doozy. Just remember that we’re trying to go from the list of tags above, to an identity matrix like so.
tag_columns = pd.get_dummies(clean_tags.apply(pd.Series).stack()).sum(level=0)
tag_columns.head()
5 rows × 330 columns
Step by step, we start by turning these lists into
pd.Series objects
a = clean_tags.apply(pd.Series)
a.head()
There are 5 columns, because that’s the most tags that are on any of our observations
clean_tags.apply(len).max()
5
We wanted it in the
pd.Series format, so we could call the
.stack() method, which tosses out all of the blank
NaN records and organizes our records into one multi-index Series
b = a.stack()
b.head(10)
0 0 resources 1 first-time-author 1 0 fiction 1 grammatical-person 2 third-person 2 0 publishing 1 novel 2 agent 3 0 plot 1 short-story dtype: object
It’s not immediately intuitive why we want it in a Series until you look at the next step where we create the categorical columns from the data.
If we leverage this clean, stacked Series, we get something that looks pretty reasonable. Note the
330 columns.
c = pd.get_dummies(b)
c.head()
5 rows × 330 columns
On the other hand, if we make that same call with
a (the
NaN-filled DataFrame), we get nearly four times as many columns as our last implementation.
bad = pd.get_dummies(a)
bad.head()
5 rows × 1261 columns
We don’t have to worry about
pandas accidentally using the
NULL data
[x for x in bad.columns if 'NaN' in x]
[]
However, a quick check shows that it is incorrectly placing importance on the column index in which it found the data.
[x for x in bad.columns if 'academic' in x]
['0_academic-writing', '1_academic-writing', '2_academic-writing', '3_academic-writing']
Moving on, we’ve still got this
MultiIndex that we’ve got no real use for.
c.head()
5 rows × 330 columns
Hence the call to
sum(level=0)
d = c.sum(level=0)
d.head()
5 rows × 330 columns
We had to specify
level=0 because we wanted to ensure that our resulting DataFrame still had a row per row of data in our original DataFrame
len(df), len(d)
(7971, 7971)
Calling
c.sum() without any arguments just does a naive sum down the columns
wrong_1 = c.sum()
wrong_1.head()
3-acts 7 academic-writing 277 accessibility 6 acronyms 15 action 3 dtype: int64
And so our data is now a Series, for as many columns we had
print(len(wrong_1))
330
And calling it at
level=1 uses the second level of the MultiIndex to sum each tag by which order they appear.
Here,
3-acts appears as the second tag 3 times, the third once, fourth once, fifth 2 times.
wrong_2 = c.sum(level=1)
print(len(wrong_2))
wrong_2.head()
5
5 rows × 330 columns
But, now we’ve got the neat one-hot representation we were after.
d.head()
5 rows × 330 columns
Filtering
From here, the author casts the columns as
bool (I don’t think that was necessary), does a simple sum (didn’t need to specify the axis here), and sorts the values from highest to lowest
all_tags = tag_columns.astype(bool).sum(axis=0).sort_values(ascending=False)
all_tags.head()
creative-writing    1351
fiction             1253
style                991
characters           609
technique            549
dtype: int64
Then they specify that they only want to use a Tag as a feature if it’s got more than 500 uses
top_tags = all_tags[all_tags > 500]
top_tags.head()
creative-writing    1351
fiction             1253
style                991
characters           609
technique            549
dtype: int64
This narrows our Tag count from 330 to 7
print(len(all_tags), len(top_tags))
330 7
Finally, they use
top_tags.index to get a list of the column names for the tags that meet our criteria, and use that to filter down this intermediate
tag_columns DataFrame
top_tag_columns = tag_columns[top_tags.index]
top_tag_columns.head()
Recombining
Last but not least, they use
pd.concat() to staple this dummied tag dataset to our original DataFrame.
Why they used
pd.concat() and not
df.join() is beyond me, as they both do the trick here.
final = pd.concat([df, top_tag_columns], axis=1)
final.head()
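As a quick sanity check that the two recombination options really are interchangeable here, a toy pair of frames sharing the default index gives identical results either way:

```python
import pandas as pd

left = pd.DataFrame({"title": ["post a", "post b"]})
right = pd.DataFrame({"fiction": [1, 0], "plot": [1, 1]})

via_concat = pd.concat([left, right], axis=1)  # stitch columns side by side
via_join = left.join(right)                    # index-aligned join

print(via_concat.equals(via_join))  # True
```

They only diverge when the indexes don't line up, for example after filtering rows on one side.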
Profiler
Elements, as of Version 10, includes a cross-platform Instrumenting Performance Profiler that can be used on all four platforms to find bottlenecks and get an overall understanding of where your application or parts of your application spend the most processing time.
The profiler is a combination of an Aspect that instruments your code, a cross-platform library that links into your application to gather data, and IDE-integrated tools that let you evaluate the results.
Profiling can be enabled with a few simple steps:
- Add the profiler library for your platform to your app as reference.
- Add the profiler aspect Cirrus Reference to your project.
- Annotate the class or classes you want to profile with the Profile Aspect.
- Add the PROFILE Conditional Define.
- (Re-)Build and run your app.
Both Fire and Water include a new Profiler view (available via "Debug|Show Profiler" and from the Jump Bar) that automatically loads in the results and presents them visually after your app has finished running.
As you tweak your code and run again and again, the profiler view updates with the latest results.
Let's look at these steps in more detail.
1. Adding the Profiler Library
Adding a reference to the profiler library is as easy as opening the "Manage References" sheet (from the "Project" menu, or via ^⇧R in Fire or Alt-Shift-R in Water), selecting the "Other References" tab, and picking the right reference, depending on your platform:
- RemObjects.Elements.Profiler.dll for .NET
- remobjects.elements.profiler.jar for Java
- libElementsProfiler.fx for Cocoa
- ElementsProfiler.fx for Island
2. Adding the Aspect Library
In addition to the main library, you also need to add a reference to the library that defines the
Profile Aspect. To do this, switch to the "Cirrus References" tab of the References dialog and add
- RemObjects.Elements.Profiler.Aspect.dll
Note that regardless of platform, this will be a
.dll reference, as the aspect runs at compile time within the Elements compiler, not runtime.
3. Annotate your Classes
Next, choose the classes you want to profile, and mark them with the
RemObjects.Elements.Profiler.Profile aspect. You can mark as few or as many classes as you want, but you will get best results if you keep profiling focused on a specific area of interest. For example, if loading of documents in your app is slow, annotate the classes that deal with that area of code, but not the rest:
uses RemObjects.Elements.Profiler;

type
  [Profile]
  MyDocument = public class
    ...
  end;
using RemObjects.Elements.Profiler;

[Profile]
public class MyDocument
{
    ...
}
import RemObjects.Elements.Profiler

@Profile
public class MyDocument {
    ...
}
import RemObjects.Elements.Profiler.*

@Profile
public class MyDocument {
    ...
}
Any class marked with
[Profile] (Oxygene and C#) or
@Profile (Swift and Java) will be profiled. But if you want to keep different sets of profiling annotations in place, you can amend the aspect with an optional conditional define name that needs to be present for profiling on that class to become effective.
E.g.
[Profile("PROFILE_DOC_LOAD")] vs.
[Profile("PROFILE_BUSINESS_LOGIC")].
4. Add the
PROFILE Define
Your code is now almost ready to profile, but if you run your app, you will see no change yet. That is because the Profiler aspect takes no effect unless
PROFILE is defined as Conditional Define in your project.
This is done so that you can easily enable and disable profiling without having to keep adding and removing annotations from your code. When you're done profiling, simply remove the define and rebuild, and the aspect will have no effect – no instrumentation will be added to your code, and you will have absolutely zero run-time overhead.
To add the define, open the "Manage Conditional Defines" sheet (again via the "Project" menu, or via ^⇧D in Fire or Alt-Shift-D in Water), choose "Add" and type in
PROFILE as the name of the new define. You can also simply copy the string
PROFILE from here, and press ⌘V (Fire) or Ctrl-V (Water) to paste it in as new define.
5. Rebuild and Run Your App
You can now simply run your app, via "Project|Run" or ⌘R (Fire) or Ctrl-R (Water). Exercise the app as you normally would, to hit the code paths you want to profile (e.g. because they are slow). When you're done, simply quit your app naturally.
The IDE will automatically pick up the profiling results, and you can view them by choosing "Debug|Show Profiler" from the menu, or by choosing the "Profiler" item in the jump bar at the left-most level. (If the Profiler View is already visible, it will automatically update to the latest results.)
Analyzing the Results
The Profiler View consists of two panes.
The top pane shows you a list of all methods that have been marked for profiling and have been hit as part of your projects run. You can sort the list by various values, and you can also choose to either show all calls or filter down to a single thread.
In addition to the name and parameters, for each method you will see how often it was called, as well as how much time it took to execute (all executions combined). The Gross time is the full time the method took to run, from start to finish; the Net time will exclude any time spent in (profiled) sub-calls.
For example, if you have a method
Load() that delegates work to
LoadFromFile() and
ProcessData(), then the Gross time includes the entire duration of
Load, including the actual load and the processing. The Net time shows only the time spent in
Load() itself (or any calls that the profiler does not see because they are not instrumented), and excludes the time spent in
LoadFromFile() and
ProcessData().
In other words, the Gross time gives you the total that a specific task took, while the Net tells you how much time was spent at this level, opposed to deeper in the call hierarchy. If Net time is very small, you know the bulk of the work happened in child calls, and you can focus your investigation there; if Net time is relatively large, you know the most processing happened in that method (or in calls you have not profiled).
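To make the Gross/Net arithmetic concrete, here is a tiny worked example in plain Java (the numbers are invented for illustration, and NetTimeExample is not part of the profiler API):

```java
public class NetTimeExample {
    // Net time = gross time minus the gross time of all profiled child calls.
    static double netTime(double gross, double[] childGross) {
        double net = gross;
        for (double child : childGross) {
            net -= child;
        }
        return net;
    }

    public static void main(String[] args) {
        // Say Load() runs for 100 ms gross, and its profiled children
        // LoadFromFile() and ProcessData() take 60 ms and 30 ms gross:
        double netLoad = netTime(100.0, new double[] { 60.0, 30.0 });
        System.out.println(netLoad); // prints 10.0 -- time spent in Load() itself
    }
}
```

If Net is small relative to Gross, as here, almost all of the time was spent in the child calls.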
As you select a method in the top pane, the bottom pane will adjust to show you all the child methods called from this method, as well as (optionally) all the methods that called into it (you can toggle which ones to show via the "Show" popup button).
For these callees or callers, you too will see the Net and Gross time.
Double-clicking a method in the bottom view will activate it in the top (showing its callees and/or callers, in turn) – allowing you to quickly drill in and out of call hierarchies.
Supported Platforms
While profiling is supported on all platforms, technically, some deployment targets (such as iOS, watchOS, tvOS or Android devices) don't have a good way to provide the results back to the IDE. Currently, gathering profiling results is supported for
- .NET
- Plain Java
- macOS (in Fire)
- iOS and tvOS Simulator (in Fire)
- Island/Windows (in Water)
- Island/Linux (in Water, when running in local Bash for Windows)
Gathering profiling results is not (yet) supported when running on devices, or when running on a remote CrossBox server. | https://docs.elementscompiler.com/Tools/Profiler/ | CC-MAIN-2018-43 | refinedweb | 1,225 | 58.32 |
I just installed version 5.1.64998. I am trying to set up the email settings with Exchange 2010. I keep getting the following error:
32 Replies
Mar 29, 2011 at 1:28 UTC
Long shot, but is Hide From Exchange Address Lists set?
Mar 29, 2011 at 1:34 UTC
I have it set up and working with Exchange 2010, and on my outgoing settings I have it like below:
Mar 29, 2011 at 2:08 UTC
Hide from Exchange Address Lists is not set. I tried Noel9849 Settings structure. I still get the same error.
Thanks for the replies.
Mar 29, 2011 at 3:01 UTC
For my Exchange 2010 settings I have:
Exchange mode
https:/
username (not - domain\username)
for both incoming and outgoing.
Hope this helps.
Mar 29, 2011 at 3:10 UTC
I have tried it with your settings format also Kerrie. I still get the same error.
Thanks for trying.
Apr 6, 2011 at 5:57 UTC
Hi Brian,
Thanks for reporting this.
Could you list out which different username and server formats you have tried?
Have you tried all of these:
- username@domain
- username@domain.internal.com
- DOMAIN\username
- username
and
You can read more on this here: http:/
Apr 12, 2011 at 11:00 UTC
Ben,
Thanks for the reply. I tried all the different scenarios you listed. I am still getting the same error. I have also upgraded to Beta V.
I looked at IIS and the /exchange is being redirected to /owa. Is this a problem?
I am also not sure how to tell if Webdav is enabled.
I can get to the log in page and log in as my desired user with no problems.
I have just read that Webdav has been replaced with Exchange Web Service.
Apr 12, 2011 at 1:45 UTC
I have been able to get the email to work. The only way I was able to get it to work was by setting SW up to use SMTP and POP3. I will continue to try and figure out why this is not working with the Exchange Settings.
Thanks for everyone's help with this.
Apr 12, 2011 at 3:42 UTC
Hey Brian, thanks for trying out the different combinations.
Could you send us your log files for review? I'll PM you with details.
Apr 18, 2011 at 3:58 UTC
Is this a brand new Exchange 2010 instance, or did you migrate from 2003 or 2007?
Based on some of the xml returned it looks like it may have been migrated from 2007.
- If so, have you done any work with autodiscovery since the migration?
- Did you make any changes to autodiscover before or after the migration to 2010?
Apr 19, 2011 at 7:09 UTC
This has been migrated from 2003, but the domain was also migrated to a new domain. The autodiscover has not changed since the initial migration and domain change.
Apr 19, 2011 at 10:38 UTC
Looks like Aqius has the same issue:
http:/
Apr 19, 2011 at 12:24 UTC
We're still working on this - thanks for providing a temporary mailbox. It has been very helpful having this on hand.
Are you familiar with Exchange administration? I'm wondering if you have used Set-OrganizationConfig or Set-CASMailbox to control EWS on your Exchange server.
These two cmdlets allow you to lock down EWS, preventing certain types of clients or browsers (user agents) from accessing Exchange web services (EWS), which Spiceworks tries to use to access the mailbox.
You can block Entourage, for example, or create a list of acceptable user agents. (See the Ews* properties in the TechNet article above.)
I ask because there are differences between the way your server responds to requests when compared to our test server.
When
is accessed it seems to be normal to return a login prompt.
For your server these are the results of my tests with multiple browsers and OSs:
Windows:
- Chrome - yes
- IE - yes
- Firefox - no
OSX:
- Chrome - no
- Safari - no
- Firefox - no
Apr 19, 2011 at 4:27 UTC
Also, for reference here is our server's EWS config. Is yours setup this way?
Note Windows Auth and anonymous auth are enabled.
For comparison, this is a new installation of Exchange 2010 (not migrated), and we generally try to leave everything as close the "defaults" as possible.
Apr 19, 2011 at 4:33 UTC
For your server these are the results of my tests with multiple browsers and OSs:
Windows:
- Chrome - yes
- IE - yes
- Firefox - no
OSX:
- Chrome - no
- Safari - no
- Firefox - no
Again, for comparison, our internal test server is a "yes" in all of the above scenarios. Meaning all browsers prompted for login.
Apr 20, 2011 at 8:33 UTC
My EWS settings match yours.
Apr 20, 2011 at 8:35 UTC
I have not used the cmdlets you spoke of. I am able to log in with Firefox when I go to OWA. I get the following when I go to EWS.
XML Parsing Error: no element found
Location:
Line Number 1, Column 1:
^
Apr 20, 2011 at 9:50 UTC
1st Post
I'm having similar issues and am following this topic with some interest. Spiceworks is a great tool and this is the only aspect I've yet to get working.
Apr 20, 2011 at 10:14 UTC
I am also having the same issue with the email settings. We just moved over from Microsoft hosted to in-house Exchange 2010. It seems like I have the right credentials in the "Outgoing Email" section, because I don't receive an error message, but I do receive the following error message in the "Incoming Email" section using the same credentials I used in the "Outgoing Email" section.
Error on incoming settings: Failed to receive email from exchange server. Message: Undefined namespace prefix
I'm using our webmail address in the server field and I'm using Domain\username in the user field.
Any help would be appreciated. Hope this gets fixed soon.
Thanks
Apr 20, 2011 at 1:29 UTC
Brian, did Exch2010 work for you in the past, with 5.0? I tested there and was unable to save the settings there too (with the same error). Were you using Exch2003 in 5.0, and have only recently migrated to 2010?
Simon, can you provide more details? What have you tried so far? Be sure to run through all the URL/username formats:
Are you running 2010? Did you have this problem in 5.0? If not, did you upgrade Exchange from a previous version recently? Has your Exchange server been migrated from a previous version (2003/2007)?
Marvin, same for you - try the above link to reference different URL and username formats. Your error message is different, which means the problem is likely different from Brian's.
Apr 20, 2011 at 1:35 UTC
5.0 was on Exchange 2003. I just recently, got Spiceworks back up and running after the domain change. I got brave and went ahead with 5.1 since I was not running at all.
Apr 20, 2011 at 1:37 UTC
The username seems fine in my entry for incoming settings, but if I change the server field, then I get the following message:
Error on incoming settings: Unexpected response from the Exchange server received. Are you sure you entered the correct address? | https://community.spiceworks.com/topic/133814-email-settings-for-exchange-2010 | CC-MAIN-2017-04 | refinedweb | 1,238 | 74.08 |
After making a line-following robot, making a robot that can avoid obstacles is usually one of the first projects recommended for both robotics enthusiasts and engineers-to-be. What if we wanted to combine these two ideas to make a robot follow a flat object?
Let me show you how to do that in this project!
Creating Object Focus With an Ultrasonic Sensor
Line-follower robots are usually equipped with color sensors to differentiate between the line they are following and the floor.
Obstacle avoiders utilize infrared or ultrasonic sensors to make sure there is nothing in front of them and keep traversing a preprogrammed course. However, focusing on a single object and calculating its trajectory is another kind of problem.
Attaching a transmitter to the object and a receiver on the follower is the usual solution, but as a challenge we will use only one ultrasonic sensor. By having a servo motor sway the sensor from side to side across a sixty-degree arc, we can measure the distance of an object from both viewable edges. Assuming we are following an object with a flat back side, we can know if the object is turning left or right by measuring these values against each other.
The object of interest (gray) turning right while the ultrasonic sensor (blue) measures edge distance.
With this information, we can instruct our robot to turn accordingly and avoid losing the target object.
For this project, we will only need a four-wheeled robot with appropriate motor drivers, an Arduino Uno board, a micro servo motor, and an HC-SR04 ultrasonic sensor.
Setting up the Hardware
Since the difficult part of this project is the code, the hardware is relatively simple.
I used an Elegoo Smart Robot Car v3 for this project, but any four-wheel-drive robot car with a rotating ultrasonic sensor will work. You may have to tweak some parts of the sketch a bit, but I have marked them for convenience.
Expected motor layout
The ultrasonic sensor will be placed on the front of the robot. Take this into consideration if building your own robot from scratch.
Position of the ultrasonic sensor relative to the rest of the robot.
In order to make your robot work correctly, be sure to mount the ultrasonic sensor on the front of your robot.
Putting the Code in Arduino IDE
The bulk of this project is in the software, so be ready to get your hands dirty with the code.
The only library we will use is the Servo library built-in to the Arduino IDE. For a copy of the Follower Robot sketch, check out the GitHub repository. Now, let’s go over how the code works.
#include <Servo.h>  //servo library

Servo myservo;  // create servo object to control servo

//Ultrasonic sensor variables
int Echo = A4;
int Trig = A5;

//motor controller pins
#define ENA 5
#define ENB 6
#define IN1 7
#define IN2 8
#define IN3 9
#define IN4 11

#define carSpeed 150
#define carSpeed2 150

int rightDistance = 0, leftDistance = 0;
The first thing you will notice is the servo motor. This will be used to swivel the ultrasonic sensor. Next, you will see the pins I have assigned to the ultrasonic sensor. Analog 4 and 5 are used in this case, but if you are planning on using different pins make sure to change these. You can use digital pins if you would like.
I am using A4 and A5 because that is the default configuration for the Elegoo Car. The EN and IN pins are used to assign motor speed values and motor direction, respectively. If you are not using the same car as I am, you will need to change these values to fit your motors’ pins.
carSpeed (0-255) is the maximum analog value the motors will be given. carSpeed2 can be used to make the car turn faster or slower while retaining the same forward and backward carSpeed value. A larger carSpeed value will make your robot faster, but also consume more energy.
The two integers initialized in the last line store the distance in centimeters from the target to the ultrasonic sensor.
void forward() {
  analogWrite(ENA, carSpeed);
  analogWrite(ENB, carSpeed);
  digitalWrite(IN1, HIGH);
  digitalWrite(IN2, LOW);
  digitalWrite(IN3, LOW);
  digitalWrite(IN4, HIGH);
  Serial.println("Forward");
}

void back() {
  analogWrite(ENA, carSpeed);
  analogWrite(ENB, carSpeed);
  digitalWrite(IN1, LOW);
  digitalWrite(IN2, HIGH);
  digitalWrite(IN3, HIGH);
  digitalWrite(IN4, LOW);
  Serial.println("Back");
}

void left() {
  analogWrite(ENA, carSpeed2);
  analogWrite(ENB, carSpeed2);
  digitalWrite(IN1, LOW);
  digitalWrite(IN2, HIGH);
  digitalWrite(IN3, LOW);
  digitalWrite(IN4, HIGH);
  Serial.println("Left");
}

void right() {
  analogWrite(ENA, carSpeed2);
  analogWrite(ENB, carSpeed2);
  digitalWrite(IN1, HIGH);
  digitalWrite(IN2, LOW);
  digitalWrite(IN3, HIGH);
  digitalWrite(IN4, LOW);
  Serial.println("Right");
}

void stop() {
  digitalWrite(ENA, LOW);
  digitalWrite(ENB, LOW);
  Serial.println("Stop!");
}
There are five movement commands for this robot, forward(), back(), left(), right(), and stop().
Each movement is in the format of forward() (pictured above), and moves the robot in the specified direction. Here, ENA and ENB, the front and back half of the robot, are commanded to move at carSpeed. IN1 and IN4, the left side motors’ direction pins, are commanded to turn counterclockwise with the HIGH value. IN2 and IN3, the right side motors’ direction pins, are commanded to turn clockwise. This allows our robot to move straight even though our motors are inverted.
If both sides of the motors moved in the same direction, our robot would simply spin in circles.
//Ultrasonic distance measurement method
int Distance_test() {
  digitalWrite(Trig, LOW);
  delayMicroseconds(2);
  digitalWrite(Trig, HIGH);
  delayMicroseconds(20);
  digitalWrite(Trig, LOW);
  float Fdistance = pulseIn(Echo, HIGH);
  Fdistance = Fdistance / 58;
  return (int)Fdistance;
}
This method sends out a pulse from the ultrasonic sensor. It then uses the speed of sound and the time the pulse took to return to calculate distance. This is one of the most common ways to use an ultrasonic sensor, so I will not go into more detail here.
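In case the magic number 58 looks arbitrary: sound travels at roughly 343 m/s, or about 0.0343 cm per microsecond, and the echo covers the distance twice (out and back). A quick desk check of the arithmetic (in Python, just for the numbers):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s

# Round trip: the pulse travels out and back, so each centimeter of
# distance costs about 2 / 0.0343 = ~58.3 microseconds of echo time.
US_PER_CM_ROUND_TRIP = 2 / SPEED_OF_SOUND_CM_PER_US

def pulse_to_cm(pulse_us):
    """Mirror of Distance_test(): Fdistance = Fdistance / 58."""
    return pulse_us / 58

print(round(US_PER_CM_ROUND_TRIP, 1))  # 58.3
print(pulse_to_cm(1160))               # 20.0
```

So a 1160 microsecond echo corresponds to a target about 20 cm away.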
void setup() {
  myservo.attach(3);  // attach servo on pin 3 to servo object
  Serial.begin(9600);
  pinMode(Echo, INPUT);
  pinMode(Trig, OUTPUT);
  pinMode(IN1, OUTPUT);
  pinMode(IN2, OUTPUT);
  pinMode(IN3, OUTPUT);
  pinMode(IN4, OUTPUT);
  pinMode(ENA, OUTPUT);
  pinMode(ENB, OUTPUT);
  stop();
}
In the setup method we make sure our sensor and motors are ready to go. stop() is used to make sure the motors stop moving before they receive any commands.
void loop() {
  myservo.write(60);  //set servo position to right side
  delay(200);
  rightDistance = Distance_test();

  myservo.write(120);  //set servo position to left side
  delay(200);
  leftDistance = Distance_test();

  if ((rightDistance > 70) && (leftDistance > 70)) {
    stop();
  } else if ((rightDistance >= 20) && (leftDistance >= 20)) {
    forward();
  } else if ((rightDistance <= 10) && (leftDistance <= 10)) {
    back();
    delay(100);
  } else if (rightDistance - 3 > leftDistance) {
    left();
    delay(100);
  } else if (rightDistance + 3 < leftDistance) {
    right();
    delay(100);
  } else {
    stop();
  }
}
In loop(), we have our repeating code. First, we set our servo to sixty degrees, calculate the distance to the object, then move the servo sixty degrees to the left to do it again. Below that is our object-following logic. If there is nothing in front of the robot for seventy centimeters, it will stop moving until something is in front of it. This was done to prevent interference from walls or objects far away from the target object.
Our robot will try to stay ten centimeters behind the target object, moving forwards and backward depending on the target’s position. If either side is three centimeters further away than the other, the robot will turn in that direction to follow its predicted path.
Keep in mind that the sixty-degree arc for the ultrasonic sensor as well as the three-centimeter threshold for turning can be changed depending on the project. If your target object is long, you may benefit from having a wider arc and a smaller turning threshold. All that is left now is to test it out!
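The branch logic in loop() is easy to desk-check off-device before flashing. Here is the same decision table transcribed to Python (the function name is mine; the thresholds are copied from the sketch), which makes it painless to experiment with different turn thresholds:

```python
def decide(right_distance, left_distance):
    """Mirror of the if/else chain in loop(), returning the motion command."""
    if right_distance > 70 and left_distance > 70:
        return "stop"      # nothing in range: wait for a target
    if right_distance >= 20 and left_distance >= 20:
        return "forward"   # target ahead: keep following
    if right_distance <= 10 and left_distance <= 10:
        return "back"      # too close: reverse
    if right_distance - 3 > left_distance:
        return "left"      # right edge farther away than the left edge
    if right_distance + 3 < left_distance:
        return "right"     # left edge farther away than the right edge
    return "stop"          # edges roughly equal but out of the follow band

print(decide(18, 12))  # left
print(decide(12, 18))  # right
```

Adjusting the 3 cm threshold here and re-running is much faster than re-uploading the sketch each time.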
The Completed Robot
So did it work? Check out the video below of my finished project! | https://maker.pro/arduino/tutorial/an-ultrasonic-object-following-robot | CC-MAIN-2019-26 | refinedweb | 1,324 | 53 |
Ok, I know we’ve been going on about custom cells / cell factories a bit recently, but I wanted to do one more post about a very useful topic: caching within cell content.
These days ‘Hello World’ has been replaced by building a Twitter client, so I’ve decided to frame this topic in terms of building a Twitter client. Because I don’t actually care about the whole web service side of thing, I’ve neglected to implement the whole ‘real data’ / web services aspect of it. If you want to see an actual running implementation with real data, have a look at William Antônio’s Twitter client, which is using this ListCell implementation.
So, in all the posts to this site related to cells, I’m sure you’ve probably come to appreciate the ways in which you should create a ListView or TreeView with custom cell factories. Therefore, what I really want to cover in this post is just the custom cell implementation, and the importance of caching. A Twitter client wouldn’t be a true client without showing the users profile image, so this is my target for caching. Without caching, each time the cell was updated (i.e. the content changes due to scrolling, or when we scroll a user out of screen and then back in), we’d have to redownload and load the image. This would lead to considerable lag and a poor user experience. What we need to do is load the image once, cache it, and reuse it whenever the image URL is requested by a cell. At the same time, we don’t want to run the PC dry of memory by loading all profile images into memory. Enter: SoftReference caching.
Word of warning: I’m not a caching expert. It is possible that I’ve done something stupid, and I hope you’ll let me know, but I believe that the code below should at least be decent. I’ll happily update this example if anyone gives me useful feedback.
Check out the code below, and I’ll continue to discuss it afterwards.
[jfx]
import model.Tweet;
import java.lang.ref.SoftReference;
import java.util.HashMap;
import javafx.geometry.HPos;
import javafx.geometry.VPos;
import javafx.util.Math;
import javafx.scene.control.Label;
import javafx.scene.control.ListCell;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;
import javafx.scene.layout.Container;
import javafx.scene.text.Font;
import javafx.scene.text.FontWeight;
// controls whether the cache is used or not. This _really_ shouldn’t be false!
def useCache = true;
// map of String -> SoftReference (of Image)
def map = new HashMap();
def IMAGE_SIZE = 48;
public class TwitterListCell extends ListCell {
// used to represent the users image
var imageView:ImageView;
// a slightly bigger and bolder label for the persons name
var personName:Label = Label {
font: Font.font("Arial", FontWeight.BOLD, 13);
}
// the message label
var message:Label = Label {
textWrap: true
}
override var node = Container {
content: bind [ imageView, personName, message ]
override function getPrefHeight(width:Number):Number {
def w = listView.width;
Math.max(IMAGE_SIZE, personName.getPrefHeight(w) + message.getPrefHeight(w));
}
override function doLayout():Void {
var x:Number = -1.5;
var y:Number = 0;
var listWidth = listView.width;
var cellHeight = height;
// position image
Container.positionNode(imageView, x, y, IMAGE_SIZE, cellHeight,
HPos.CENTER, VPos.TOP, false);
// position text at the same indent position regardless of whether
// an image exists or not
x += IMAGE_SIZE + 5;
var textWidth = listWidth - x;
var personNameHeight = personName.getPrefHeight(textWidth);
Container.resizeNode(personName, textWidth, personNameHeight);
Container.positionNode(personName, x, y, listWidth - x, personNameHeight,
HPos.LEFT, VPos.TOP, false);
y += personNameHeight;
Container.resizeNode(message, textWidth, message.getPrefHeight(textWidth));
Container.positionNode(message, x, y, listWidth - x, height - personNameHeight,
HPos.LEFT, VPos.TOP, false);
}
}
override var onUpdate = function():Void {
var tweet = item as Tweet;
personName.text = tweet.person.name;
message.text = tweet.message;
// image handling
if (map.containsKey(tweet.person.image)) {
// the image has possibly been cached, so lets try to get it
var softRef = map.get(tweet.person.image) as SoftReference;
// get the image out of the SoftReference wrapper
var image = softRef.get() as Image;
// check if it is null – which would be the case if the image had
// been removed by the garbage collector
if (image == null) {
// we need to reload the image
loadImage(tweet.person.image);
} else {
// the image is available, so we can reuse it without the
// burden of having to download and reload it into memory.
imageView = ImageView {
image: image;
}
}
} else {
// the image is not cached, so lets load it
loadImage(tweet.person.image);
}
};
function loadImage(url:String) {
// create the image and imageview
var image = Image {
url: url
height: IMAGE_SIZE
preserveRatio: true
backgroundLoading: true
}
imageView = ImageView {
image: image;
}
if (useCache) {
// put into cache using a SoftReference
var softRef = new SoftReference(image);
map.put(url, softRef);
} else {
map.remove(url);
}
}
}
[/jfx]
You’ll note that in this example most of the code is pretty standard. A few variables are created for the image and text, and then I’ve gone the route of laying the content out in a Container, but you can achieve a similar layout using the available layout containers. Following this I have defined an onUpdate function, which is called whenever the cell should be updated. This is usually called due to a user interaction, which may potentially change the Cell.item value, which would of course require an update of the cell’s visuals.
The bulk, and most important part, of the onUpdate function deals with loading the users profile image, or retrieving and reusing the cached version of it. Note the use of the global HashMap, which maps between the URL of the users image and the Image itself. Because it is global (i.e. static), this map will be available, and used, by all TwitterListCell instances. Also important to note is that I didn’t put the ImageView itself into the HashMap as a Node can not be placed in multiple positions in the scenegraph, but an Image can be.
The rest of the code in this class really just deals with the fact that a SoftReference may clear out its reference to the Image object if the garbage collector needs the memory, in which case we need to reload the image again. The other obvious part is the need to also put the image into the cache if it's not already there.
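The pattern boils down to a tiny amount of language-level machinery, and it isn't JavaFX-specific. Here is the same idea in plain Java (the class and method names are mine, invented for this sketch):

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

class SoftCache<V> {
    interface Loader<V2> { V2 load(String url); }

    // url -> SoftReference(value); the GC may clear entries under memory pressure.
    private final Map<String, SoftReference<V>> map = new HashMap<>();

    V get(String url, Loader<V> loader) {
        SoftReference<V> ref = map.get(url);
        V value = (ref == null) ? null : ref.get();
        if (value == null) {                          // never cached, or collected
            value = loader.load(url);                 // reload...
            map.put(url, new SoftReference<>(value)); // ...and re-cache
        }
        return value;
    }
}

public class SoftCacheDemo {
    public static void main(String[] args) {
        SoftCache<String> cache = new SoftCache<>();
        int[] loads = {0};
        SoftCache.Loader<String> loader = url -> { loads[0]++; return "image:" + url; };

        cache.get("http://example.com/a.png", loader);
        cache.get("http://example.com/a.png", loader); // second hit served from cache
        System.out.println(loads[0]); // prints 1
    }
}
```

The loader runs only when the entry is missing or has been collected, which is exactly the behavior the cell's onUpdate function relies on.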
Shown below is the end result, but remember that there is a working version of this demo in William Antônio’s Twitter client, which is a very early work in progress.
I hope this might be useful to people, and as always we’re keen to hear your thoughts and feedback, and what you’re hoping us to cover. Until next time – cheers! 🙂 | http://fxexperience.com/2010/06/custom-cell-caching/ | CC-MAIN-2020-16 | refinedweb | 1,122 | 56.66 |
I'm trying to make a script that will draw lines on an image in a python GUI. I've been able to get the image on the GUI, but do not know how to draw the additional lines. The script should be able to loop so I can draw more lines.
What I have so far:
import tkinter as Tk
root = Tk.Tk()
background_image=Tk.PhotoImage(file="map.png")
background_label = Tk.Label(root, image=background_image)
background_label.place(x=0, y=0, relwidth=1, relheight=1)
root.wm_geometry("794x370")
root.title('Map')
root.mainloop()
You can do that by first placing your image on a canvas:
import tkinter as Tk

root = Tk.Tk()
canvas = Tk.Canvas(root)
background_image = Tk.PhotoImage(file="map.png")
canvas.pack(fill=Tk.BOTH, expand=1)  # Stretch canvas to root window size.
image = canvas.create_image(0, 0, anchor=Tk.NW, image=background_image)
line = canvas.create_line(10, 10, 100, 35, fill="red")
root.wm_geometry("794x370")
root.title('Map')
root.mainloop()
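The question also asks for a loop so more lines can be drawn. One way (a sketch in the same style, untested against your exact setup) is to bind left-clicks on the canvas and connect each click to the previous one:

```python
def segments(points):
    # Pair consecutive clicks into segments: [p0, p1, p2] -> [(p0, p1), (p1, p2)]
    return list(zip(points, points[1:]))

def main():
    import tkinter as Tk  # imported here so the helper above stays GUI-free

    root = Tk.Tk()
    canvas = Tk.Canvas(root)
    background_image = Tk.PhotoImage(file="map.png")
    canvas.pack(fill=Tk.BOTH, expand=1)
    canvas.create_image(0, 0, anchor=Tk.NW, image=background_image)

    points = []

    def on_click(event):
        points.append((event.x, event.y))
        if len(points) > 1:
            (x0, y0), (x1, y1) = segments(points)[-1]
            canvas.create_line(x0, y0, x1, y1, fill="red")

    canvas.bind("<Button-1>", on_click)
    root.wm_geometry("794x370")
    root.title('Map')
    root.mainloop()

# Call main() to start the app; it is not invoked at import time here,
# so the drawing helper can be exercised without opening a window.
```

Each click appends a point, and every click after the first draws a segment from the previous point, so you can keep drawing as long as the window is open.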
Hi jens, Thank you very much for the code reference.
On Wed, Apr 11, 2018 at 3:37 PM, Jens Breitenstein <mailingl...@j-b-s.de> wrote:

> Hi!
>
> Maybe this helps:
>
> We wrote our own component, modify as you like
>
> public class DateFormatter
> {
>     @Inject private Messages _messages;
>
>     @Parameter(required = true,
>
> where TOC is our Library Alias, replace accordingly.
> And we provide different format settings via our application message
> catalog like this
>
> FORMAT.DateOnly=dd.MM.yyyy
> FORMAT.DateTime=dd.MM.yyyy - HH:mm:ss
>
> Jens
>
> On 10.04.18 at 21:52, Thiago H. de Paula Figueiredo wrote:
>
>> Hi!
>>
>> Your question was clear. There's no such configuration symbol due to the
>> way DateField defines which format to use, which is calling
>> DateFormat.getDateInstance(DateFormat.SHORT, locale).
>>
>> A possibility is to create your own DateField class by copying the source
>> from the Tapestry one, customizing it to your needs and contributing it to
>> the ComponentOverride service. Something like this, not tested:
>>
>> public static void contributeComponentOverride(MappedConfiguration<Class,
>> Class> configuration) {
>>     configuration.add(DateField.class, YourDateField.class);
>> }
>>
>> Doing this, Tapestry will use YourDateField instead of DateField when you
>> have a <t:datefield> or <input t:.
>>
>> On Tue, Apr 10, 2018 at 1:04 PM, abangkis <abang...@gmail.com> wrote:
>>
>>> Hi thiago,
>>>
>>> Sorry, i'm having a bit of trouble understanding your explanation.
>>>
>>> Lets say i pick April 10th, 2018 from my date picker in my page. It will
>>> fill the field with 04/10/2018. While the format we expected is
>>> 10-04-2018. I could override the field and specify the format in the
>>> template
>>>
>>> <t:datefield t:
>>>
>>> but that means i have to do this for every datepicker i have.
>>>
>>> I was thinking maybe there's some kind of contribution that I can
>>> override in the app module. Something like
>>>
>>> @Contribute(DateField.class)
>>> public static void overrideDefaultFormat(MappedConfiguration configuration)
>>> {
>>>     configuration.add(SymbolConstants.DATEFIELD_DEFAULT_FORMAT,
>>>     "dd-MM-yyyy");
>>> }
>>>
>>> I hope this clears up my question.
>>>
>>> Regards
>>>
>>> On Tue, Apr 10, 2018 at 8:17 PM, Thiago H. de Paula Figueiredo <
>>> thiag...@gmail.com> wrote:
>>>
>>>> Hello!
>>>>
>>>> The default format is taken from
>>>> DateFormat.getDateInstance(DateFormat.SHORT, locale), where locale is got
>>>> through @Inject. Is your desired date format the one returned by that
>>>> method for your locale?
>>>>
>>>> On Tue, Apr 10, 2018 at 5:26 AM, abangkis <abang...@gmail.com> wrote:
>>>>
>>>>> Hi, is there a quick way to override tapestry datefield default format?
>>>>> In a single datefield i can do
>>>>>
>>>>> <t:datefield t:
>>>>> t:
>>>>>
>>>>> But it became very repetitive for many pages.
>>>>>
>>>>> Regards
>>>>
>>>> --
>>>> Thiago

--
<>
twitter : @mreunionlabs @abangkis
page :
Opened 3 years ago
Closed 3 years ago
Last modified 2 months ago
#3078 closed Bug (Completed)
bug in _ArrayUnique
Description
"C:\Program Files (x86)\AutoIt3\autoit-v3.3.14.0\Include\Array.au3" (2297) : ==> Array variable has incorrect number of subscripts or subscript dimension range exceeded.: If IsInt($aArray[$iBase]) Then If IsInt(^ ERROR ->21:48:19 AutoIt3.exe ended.rc:1
reproduser:
#include <Array.au3>

Local $aArray[2][3] = [[1,"",""],["aaa","bbb","ccc"]]
_ArrayDisplay($aArray)
Local $NewPatternArray = _ArrayUnique($aArray, 2, 1)
Attachments (2)
Change History (12)
comment:1 Changed 3 years ago by guinness
comment:2 Changed 3 years ago by guinness
- Resolution set to No Bug
- Status changed from new to closed
Please ensure you have upgraded the UDFs as well
comment:3 Changed 3 years ago by anonymous
Why?? I just have clean install of 3.3.14.0 and bug is still crush script... Or you saying UDFs is not updated in this process?
>"C:\Program Files (x86)\AutoIt3\SciTE\AutoIt3Wrapper\AutoIt3Wrapper.exe" /run /prod /ErrorStdOut /in "E:\Program Files\Auto-it scripts\test_507.au3" /UserParams
+>19:23:23 Starting AutoIt3Wrapper v.15.503.1200.1 SciTE v.3.5.4.0 Keyboard:00000409 OS:WIN_XP/Service Pack 2 CPU:X64 OS:X64 Environment(Language:0409)
+> SciTEDir => C:\Program Files (x86)\AutoIt3\SciTE UserDir => C:\Documents and Settings\***\Local Settings\Application Data\AutoIt v3\SciTE\AutoIt3Wrapper SCITE_USERHOME => C:\Documents and Settings\***\Local Settings\Application Data\AutoIt v3\SciTE
>Running AU3Check (3.3.14.0) from:C:\Program Files (x86)\AutoIt3 input:E:\Program Files\Auto-it scripts\test_507.au3
+>19:23:25 AU3Check ended.rc:0
>Running:(3.3.14.0):C:\Program Files (x86)\AutoIt3\autoit3.exe "E:\Program Files\Auto-it scripts\test_507.au3"
--> Press Ctrl+Alt+Break to Restart or Ctrl+Break to Stop
"C:\Program Files (x86)\AutoIt3\Include\Array.au3" (2297) : ==> Array variable has incorrect number of subscripts or subscript dimension range exceeded.:
If IsInt($aArray[$iBase]) Then
If IsInt(^ ERROR
->19:23:28 AutoIt3.exe ended.rc:1
+>19:23:28 AutoIt3Wrapper Finished.
>Exit code: 1 Time: 5.577
comment:4 Changed 3 years ago by anonymous
As an addition: the problem is that my array is 2D, so a line like
If IsInt($aArray[$iBase]) Then
immediately and reasonably causes an error. Am I right?
comment:5 Changed 3 years ago by guinness
I am not able to reproduce it with the reproducer above, so I am suggesting you ensure the array UDF has the same version number as the AutoIt version.
Related:
comment:6 Changed 3 years ago by guinness
- Resolution No Bug deleted
- Status changed from closed to reopened
comment:7 Changed 3 years ago by guinness
I stand corrected, sorry
comment:8 Changed 3 years ago by Melba23
- Milestone set to 3.3.15.1
- Owner set to Melba23
- Resolution set to Completed
- Status changed from reopened to closed
Changed 2 years ago by jayme_fishman@…
test data to reproduce a bug with _ArrayUnique
Changed 2 years ago by jayme_fishman@…
test script to go with test data to reproduce possible bug with _ArrayUnique
comment:9 Changed 2 years ago by guinness
- Milestone changed from 3.3.15.1 to 3.3.14.2
comment:10 Changed 3 months ago by anonymous
I am not seeing the bug | https://www.autoitscript.com/trac/autoit/ticket/3078 | CC-MAIN-2018-09 | refinedweb | 592 | 57.27 |
[Solved] Sending C++ values to QML properties
Hi,
I am trying to change some QML properties from C++ in order to rotate an object, but it looks like it doesn't update or repaint. I followed this document:
...but once I change the property and debug it, the value is correct, yet the object doesn't rotate. This is how I change the property and how I read it back in the debug output:
object->setProperty("canvas3d.xRotSlider", 90);
qDebug() << "Property value:" << object->property("canvas3d.xRotSlider").toDouble();
Any suggestion?
Thanks!
My starting project was the barrel example; it includes JS with QML. But instead of rotating the barrel with the sliders, I would like to rotate it with given C++ variable values. I had no success sending the values to the QML, nor changing the angles in the JS script... I expected something easy and common, but it is proving impossible.
The best way to interact between C++ and QML is to make a class that inherits QQuickItem and create Q_PROPERTY-es in it, then register it in your main (or somewhere else) and import it into QML. For example:
class TestIntegration : public QQuickItem
{
    Q_OBJECT
    Q_PROPERTY(int testProperty READ testProperty WRITE setTestProperty NOTIFY testPropertyChanged)
public:
    TestIntegration(QQuickItem *parent = 0)
        : QQuickItem(parent)
        , m_testProperty(0)
    {}
    int testProperty() const { return m_testProperty; }
public slots:
    void setTestProperty(int testProperty)
    {
        m_testProperty = testProperty;
        emit testPropertyChanged(m_testProperty); // the signal carries the new value
    }
signals:
    void testPropertyChanged(int testProperty);
protected:
    int m_testProperty;
};
Note that the above code can be autogenerated by QtCreator. You just start Q_PROP and it autocompletes it to the entire Q_PROPERTY asking you to set the type and the name of the property. Then you can rightclick Q_PROPERTY and in refactor submenu generate all methods and the field.
After that you just register the class in your main
int main(int argc, char **argv)
{
    ...
    qmlRegisterType<TestIntegration>("YourNamespace", 1, 0, "TestIntegration");
    ...
}
and in QML you import it and make the object somewhere
import YourNamespace 1.0

TestIntegration { id: testIntegration }
and use the property however you like
randomQmlProperty = testIntegration.testProperty
or
testIntegration.testProperty = whateverYouSetFromQml
Cool!
I did what you explained and now i have a C++ class with some Q_PROPERTY's and some functions to modify them from C++. I did some tests to verify the connection between the QML and the new class. But now, as the instance of this class is on QML, How can i access it from C++ in order to change the properties?
Finally i could. I create a function in the QML file to be called from the C++ application. This is mostly the same i tryied at the begining but instead modifying the property from a inherited C++ class i was trying to change the property from an instance created with JavaScript as the "barrel" example that comes with Qt5.5 (in this case the property is change with a slider). The difference is that now when i change the property the event is triggering and before not, i don't know the reason...
Thanks brcha for the help!
Glad to be of assistance :)
I don't know what your project is like. For the most part, I use QML for UI and one (or more) similar integration classes for all the C++ code. Therefore, my main loop is in QML and it calls C++ through the integration class(es), just like I would in the widget-based Qt app call methods when a specific menu item was clicked or something similar. Thus my whole C++ part of the application is instantiated from QML, and it can have signals, slots, trees of new classes and everything else bellow the "main" C++ entry point object.
In my case it is the opposite. I have a C++ project and now I want to display a 3D model rotating depending on the movement of some sensors. Since I had never used QML before, I integrated into my old form a widget container with a QQuickView inside to show the model. I took some JSON models from the examples to get it working, and I actually have my own model designed in 3D Studio and SolidWorks to replace the sample, but I'm "googling" in order to make that step in between; it seems not trivial... | https://forum.qt.io/topic/56526/solved-sending-c-values-to-qml-properties | CC-MAIN-2017-39 | refinedweb | 696 | 52.8
Hi,
I have kind of simple question, but can't find anything usefull on the internet.
In my web application i have a sqldatasource control:
<asp:SqlDataSource
<SelectParameters>
<asp:Parameter
</SelectParameters>
</asp:SqlDataSource>
protected void sourceTrainingDate_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
I am trying to create the following function, which should return a table, but when I try to create it I get the error message "Select statements included within a function cannot return data to a client.". I am looking for a
workaround for this issue.
create function dbo.GetCategoryIdTillBase(@Category_Id as char
Insert into ReportDetail (ParticipantID, ReportID)
select distinct ParticipantID, 9 from OpenCredit
except
select ParticipantID, 9 from StoreCredit where Closed = 0
The issue is that when the above select statement returns no rows, it seems like no record is inserted.
Hi Dude,
I am new to SQL Server 2005. I have a table that contains a lot of data, and I need to fetch a particular value. For example, I want to take the value 110652.795813712 from the FTEBASEPAY column. So I have written the SQL statement as below.
SELECT * From tblEmployees where FTEBASEPAY='53842.7782655718'
But I am not able to get that particular value. I have checked the tblEmployees table manually, and it does contain the particular data ('53842.7782655718').
When I execute the above select statement, there is no result. Has anyone faced the same problem? Please suggest a solution: what do I have to do to overcome this issue? One more piece of information: the FTEBASEPAY data type is float in the tblEmployees table.
Thanks in Advance.
Hello,
I am adding table rows dynamically; after I return from a postback, these rows disappear.
Is there a way to preserve these rows and add them back on page load?
Hello SQL CLR experts,
We're using SQL Server 2008 R2 (French) and Visual Studio 2008. I'm trying to get a "simple" SQL CLR to work, which calls DriveInfo.GetDrive() and returns drive name, space available and total size into a SQL Table from the UDF GetDriveInfo() defined
below. I need help getting the code to work.
I realize that there is already an xp_fixeddrives extended stored procedure that returns only the available space. I want to return the Total Size of each drive, the volume name, etc.
I based my code on the article but I want to use DriveInfo.GetDrive() instead of DirectoryInfo.GetFileSystemInfos(). I was successfully able to get the DirectoryInfo assembly from the article to work, but I am having trouble
getting my DriveInfo.GetDrive() adaptation to work.
In Visual Studio, I created a new C# SQL Server project and am using the following code:
using System;
using System.IO;
using System.Collections;
using System.Data;
using System.Data.SqlClient; | http://www.dotnetspark.com/links/35301-if-value-is-not-mssql-table-return-something.aspx | CC-MAIN-2017-22 | refinedweb | 466 | 50.63 |
IndexError: tuple index out of range in an Onchange Function
when I try to run below code i got this error
==================================================
File "/home/user/odoo/addons/sale_validation/sale.py", line 708, in on_change_line_product
first_line_rec = self.browse(cr, uid, ids, context=context)[0]
File "/home/user/odoo/openerp/models.py", line 5476, in __getitem__
return self._browse(self.env, (self._ids[key],))
IndexError: tuple index out of range
=================================
class purchase_order_line(osv.osv):
    _inherit = 'purchase.order.line'

    def on_change_line_product(self, cr, uid, ids, product_qty, context=None):
        if context is None: context = {}
        res = {}
        line_num = 1
        first_line_rec = self.browse(cr, uid, ids, context=context)[0]
        for line_rec in first_line_rec.order_id.order_line:
            res[line_rec.id] = line_num
            line_num += 1
        return res

    _columns = {
        'line_no': fields.integer(string='Line Number'),
    }
xml tag---------------
<field name="line_no" on_change="on_change_line_product(product_qty,context)"/>
Hello Dep,
Please remove the [0] index from this line.
first_line_rec = self.browse(cr, uid, ids, context=context)[0]
This will not give you an error, but I don't understand what you are trying to do, because there is no logic in writing an onchange method on the line number.
I hope you first make clear what you want to do.
The line above will resolve your issue.
Hi Dep, you can use the code below:
first_line_rec = self.browse(cr, uid, ids[0], context=context)
for line_rec in first_line_rec.order_id.order_line:
    res[line_rec.id] = line_num
    line_num += 1
| https://www.odoo.com/forum/help-1/question/indexerror-tuple-index-out-of-range-in-an-onchange-function-84733 | CC-MAIN-2017-09 | refinedweb | 254 | 52.66
On Sun, Feb 8, 2009 at 1:25 PM, Brian Candler <B.Candler@pobox.com> wrote:
>
> (1) Personally, I'm not yet ready to move all view generation into the
> browser (Futon-like), i.e. where Javascript fetches the JSON, reformats as
> HTML, and submits back as JSON. In any case, supporting browsers without
> Javascript is still a useful capability.
The _show and _list features give you the capability to serve HTML or
other content types directly based on either doc or view queries. They
are a little lacking in documentation, but the test suite should be
enough to get your started.
>
> So for now at least, I still need a layer which will build HTML from a
> document, and allow document create/update via form POST.
>
If it doesn't work already, it'd be trivial to teach Couch to
understand norm HTML form POSTs, with some bare-bones conversion to a
JSON document (eg: each field is treated as a string, in a flat
namespace)
> (2) I need an application logic layer, which enforces business rules.
>
Validation functions can do a lot of this work. One thing that a
pure-Couch solution will discourage you from doing, that is considered
normal in Rails-style apps, is have one action modify or query lots of
documents or views. This is good for latency and cacheability, but may
make some applications harder to build.
> Put another way: document *storage/replication* and application *processing*
> of documents can (and in many cases probably should) remain separate.
> Couchapps remain an interesting idea however.
Yes. It really depends on your development model. One thing Couchapps
have that it's hard to get any other way is the extreme portability.
>
> This means that the application layer has to fetch the named document,
> *then* analyse what type it has, before dispatching to the appropriate
> controller/view logic.
The way _show and _list handle this is by having named functions, so
you'd have a url like
/db/_show/myapp/posts/post-id
or for authors
/db/_show/myapp/authors/author-id
of course asking for an error by calling something like
/db/_show/myapp/authors/post-id is a problem that Rails avoids due to
the Class / Table mapping.
>
> .
Agreed - I'm working on some validation helpers for Couchapp and my
Sofa blog. Part of what's up in the air here is where to draw the line
between the "standard library" of helpers included with CouchDB, and
what should be maintained in its own project.
> Map-reduce functions are also going to have to be robust to data of the
> 'wrong' type being present.
I always check my fields before using them.
if (doc.foo && doc.foo.bar ... etc)
Good questions. I hope some of what I've written helps.
--
Chris Anderson | http://mail-archives.apache.org/mod_mbox/incubator-couchdb-user/200902.mbox/%3Ce282921e0902091130x196773fdkd7558f2abe7cf904@mail.gmail.com%3E | CC-MAIN-2015-40 | refinedweb | 467 | 61.06 |
Managing Real-Time Tasks
Embedded systems need a way to schedule activities and respond to events in an efficient and deterministic manner. MicroPython offers developers several methods to achieve task scheduling.
In this chapter, we will review the methods that are most commonly used by developers and how to use uasyncio to schedule our own real-time tasks.
The following topics will be covered in this chapter:
- The need for real-time scheduling
- MicroPython scheduling techniques
- Writing a scheduling loop with uasyncio
Technical requirements
The example code for this chapter can be found in this book's GitHub repository:
In order to run the examples, you will require the following hardware and software:
- Pyboard Revision 1.0 or 1.1
- Pyboard Series-D
- Terminal application (such as PuTTy, RealTerm, or a Terminal)
- A text editor (such as VS Code or PyCharm)
The need for real-time scheduling
A real-time embedded system is a system with a dedicated purpose. The real-time system may operate standalone or it may be a component or subsystem of a larger device. Real-time systems are often event-driven and must produce the same output and timing when given the same initial conditions. A real-time system might be built using a microcontroller system that uses a bare-metal scheduler or a real-time operating system (RTOS) to schedule all of its system tasks. Alternatively, it could be built using a System on Chip (SoC) or Field Programmable Gate Array (FPGA).
Every embedded system is not necessarily a real-time system. An application processor such as Raspberry Pi using Raspbian or Linux would not be a real-time system because, for a given set of inputs, while the system may give the same output, the time taken can vary wildly due to the multitasking nature of the system. General-purpose operating systems often interrupt tasks to handle OS-related functions, which results in the computing time being variable and non-deterministic.
There are several characteristics that can be used to identify a real-time embedded system:
- They're event-driven as they do not poll inputs.
- They're deterministic because when given the same initial conditions, they produce the same outputs in the same time frame.
- They're resource-constrained in some manner; for example, clock speed, memory, or energy consumption.
- They use a dedicated microcontroller-based processor.
- They may use an RTOS to manage system tasks.
Real-time system types
Real-time systems can be subdivided into two categories: soft real-time and hard real-time systems. Both types require that the system executes in a deterministic and predictable manner. However, they differ in what happens if a deadline is missed. A soft real-time system that misses a deadline is considered to be annoying to its users. It's undesirable for the deadline to be missed and may decrease the usefulness of the system after the deadline, but it's not critical. A hard real-time system, on the other hand, will dramatically decrease its usefulness after a deadline and results in a fatal system fault.
An example of a soft real-time system is a Human Machine Interface (HMI) with a touch controller that is controlling a home furnace. There may be a deadline where the system needs to respond to user input within 1 second of the screen being touched. If a user goes and touches the screen but the system doesn't respond for 3 or 4 seconds, the result is not world ending, but it may make the user complain about how slow the system is.
A hard real-time system could be an electronic braking system that needs to respond to a user pressing the brake pedal within 30 milliseconds. If a user were to press the brake and it took 2 seconds for the brakes to respond, the outcome could be critical. The system's failure to respond could result in injury to the user and dramatically decreases the usefulness of the embedded system.
It is possible to have an embedded system that has a mix of hard and soft requirements. The software in an embedded system is often subdivided into separate tasks based on function and timing requirements. We might find that the user interface on a system is considered to have soft real-time requirements, while the actuator control task must have hard real-time requirements. The type of system that is being built will often factor in the type of scheduler that is used in the solution.
Now, let's explore the different scheduling architectures that can be used with MicroPython to achieve real-time performance.
MicroPython scheduling techniques
When it comes to real-time scheduling using MicroPython, there are five common techniques that developers can employ. These techniques are as follows:
- Round-robin scheduling
- Periodic scheduling using timers
- Event-driven scheduling
- Cooperative scheduling
- MicroPython threads
We'll discuss them in detail in the subsequent sections. In the rest of this chapter, we will build example projects to explore several of these scheduling paradigms. We will also give special treatment to the uasyncio library at the end of this chapter, which is a powerful library for scheduling in MicroPython.
Round-robin scheduling
Round-robin scheduling is nothing more than an infinite loop that is created with a while loop. Inside the loop, developers add their task code and each task is executed sequentially, one after the other. While round-robin is the easiest and simplest scheduling paradigm to implement, there are several problems that developers will encounter when using it. First, getting the application tasks to run at the right rates can be difficult. Any code that is added or removed from the application will result in changes to the loop timing. The reason for this is that there is now more or less code to execute per loop. Second, each task has to be designed to recognize that there are other tasks, which means that they cannot block or wait for an event. They must check and then move on so that the other code has the opportunity to use the processor.
Round-robin scheduling can also be used with interrupts to handle any real-time events that might be occurring in the system. The loop handles all the soft real-time tasks, and then the hard real-time tasks are allocated to interrupt handlers. This helps to provide a balance that ensures each type is executed within a reasonable period of time. Round-robin is a good technique for beginners who are just trying to get a simple application up and running.
As we discussed earlier, adding or removing code affects the loop time, which can affect how the system performs. Round-robin schedulers can handle soft real-time tasks. Any events or hard real-time requirements need to be handled using interrupts. I often refer to this as round-robin scheduling with interrupts. A flowchart showing round-robin scheduling with interrupts can be seen in the following diagram:
The main round-robin loop is often referred to as the background loop. This loop constantly executes in the background when there are no interrupts executing. The interrupts themselves are referred to as the foreground and handle any hard real-time events that need to be handled by the system. These functions trump background tasks and run immediately. It's also important to note that MicroPython handles clearing the interrupt flags for developers, so while they are shown in the preceding diagram, this detail is abstracted and handled by the MicroPython kernel.
In C, an application that uses round-robin scheduling might look something like the following:
int main (void)
{
// Initialize the Microcontroller Unit (MCU) peripherals
System_Init();
while(1)
{
Task1();
Task2();
Task3();
}
// The application should never exit. Return 1 if
// we do reach this point!
return 1;
}
In this example, the code enters into the main function, initializes the microcontroller, and then enters into an infinite while loop that calls each task in order. This is a design pattern that every embedded software developer will have seen early in their career and should be quite familiar with.
Implementing round-robin in MicroPython is very similar:
- First, it's important to recall that the application entry for MicroPython is located within main.py. To access any peripherals, the pyb library needs to be imported into the application (or the machine library for code that can be ported across MicroPython ports).
- Second, any initialization and task functions need to be defined above the main loop. This ensures that they are defined before they are called by the Python interpreter.
- Finally, an infinite loop is created using a while True statement. Each defined task is entered into this loop. The loop's timing can be controlled and tuned using pyb.delay().
Building a task manager using round-robin scheduling
Let's look at an example application that generates an LED railroad lights pattern. From a hardware perspective, this requires the use of two LEDs on the pyboard, such as the blue and yellow LEDs (on the pyboard series-D, you might use the green and blue LEDs). I prefer to use these because when we save new code to the pyboard, the red LED is used to show that the filesystem is being written to, and we don't want to interfere with that indicator. If we want one LED to be on while the other is off and then toggle them back and forth, we will need to initialize the blue LED to be on and the yellow to be off. We can then create two separate tasks, one to control the yellow LED and the other to control the blue LED. The Python code for this is as follows:
import pyb   # For uPython MCU features
import time

# define LED color constants
LED_RED = 1
LED_GREEN = 2
LED_BLUE = 3
LED_YELLOW = 4

def task1():
    pyb.LED(LED_BLUE).toggle()

def task2():
    pyb.LED(LED_GREEN).toggle()
However, the application is not complete until we initialize the LEDs and schedule the tasks to run. The following code shows the LED railroad application's initialization and task execution being written using round-robin scheduling. The main loop is delayed by 150 milliseconds, as well as each loop using the sleep_ms method from the time module. Importing time actually imports the utime module, but importing time can make porting code a little bit easier:
# Setup the MCU and application code to starting conditions
# The blue LED will start on, the green LED will be off
pyb.LED(LED_BLUE).on()
pyb.LED(LED_GREEN).off()

# Main application loop
while True:
    # Run the first task
    task1()
    # Run the second task
    task2()
    # Delay 150 ms
    time.sleep_ms(150)
These two code blocks, when combined, provide us with our first MicroPython application. Running the application on the pyboard can be done by copying the main.py script onto the development board. This can be done either directly, through a Python IDE such as PyCharm, or manually using the following steps:
- Connect the pyboard to your computer with a USB cable.
- Open your Terminal application and connect to the pyboard (refer to the MicroPython documentation | Quick reference for the pyboard | MicroPython tutorial for the pyboard | 3. Getting a MicroPython REPL prompt, for details).
- In the serial Terminal, press Ctrl + C to interrupt any currently running scripts.
- Copy the script to the pyboard USB drive. While the copy is in progress, the red LED will be lit up.
- Once the red light has gone off, the pyboard flash system will be updated.
- In the Terminal, press Ctrl + D to perform a soft reset.
Now, you should see the blue and green LEDs toggling back and forth.
Periodic scheduling using timers
There may be applications where every task that needs to be executed is periodic, such as a push button that needs to be sampled every 10 milliseconds; a display that needs to be updated 60 times per second; or a sensor that is sampled at 10 Hz or interrupts when a value has gone out of range. In purely periodic systems, developers can architect their software to use periodic timers to execute tasks. Each timer can be set up to represent a single task that is executed at the desired rate. When the timer interrupt fires, the task executes.
When using periodic timers for task scheduling, it's important to keep in mind that the task code will be executed from an interrupt handler. Developers should follow best practices for using interrupts, such as the following:
- Keep ISRs short and fast.
- Perform measurements to understand interrupt timing and latency.
- Use interrupt priority settings to emulate pre-emption.
- Make sure that task variables are declared as volatile.
- Avoid calling multiple functions from an ISR.
- Disable interrupts as little as possible.
- Use micropython.schedule() to schedule a function to execute as soon as the MicroPython scheduler is able to.
When using periodic timers to schedule tasks, some of these best practices can be bent slightly. However, if the developer carefully monitors their task timing, bending the rules shouldn't be an issue. If it is, then any hard real-time activity can be handled by the interrupt task and then a round-robin loop can be notified to finish processing the task at a later time.
Timers guarantee that the task will be executed at a regular interval, no matter what is being executed, assuming that a higher-priority interrupt is not executing. The key thing to remember is that these tasks are executed within an interrupt, so the tasks need to be kept short and fast! Developers who use this method should handle any high-priority activity in the task and then offload the rest of the task to the background. For example, a task that handles an incoming byte over a Universal Asynchronous Receiver/Transmitter (UART) device can process the incoming byte by storing it in a circular buffer and then allowing a background task to later process the circular buffer. This keeps the interrupt task short and sweet while allowing the lower-priority processing to be done in the background.
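As a sketch of that split (the names, the buffer size, and the idea of calling `uart_isr_store()` from a UART interrupt handler are illustrative assumptions, not code from this chapter), the ISR does nothing but store the byte into a pre-allocated ring buffer, and the background loop drains it later:

```python
# Illustrative sketch: ISR-side producer and background-side consumer.
# Everything the ISR touches is pre-allocated, so no memory is allocated
# in interrupt context (only small-integer arithmetic happens there).
BUF_SIZE = 64
rx_buf = bytearray(BUF_SIZE)   # pre-allocated storage
rx_head = 0                    # next slot the ISR writes
rx_tail = 0                    # next slot the background task reads

def uart_isr_store(byte):
    """Call this from the interrupt handler: store the byte and return."""
    global rx_head
    nxt = (rx_head + 1) % BUF_SIZE
    if nxt != rx_tail:         # if the buffer is full, the byte is dropped
        rx_buf[rx_head] = byte
        rx_head = nxt

def background_drain():
    """Call this from the round-robin loop: process everything queued."""
    global rx_tail
    out = bytearray()          # allocation is fine here, outside the ISR
    while rx_tail != rx_head:
        out.append(rx_buf[rx_tail])
        rx_tail = (rx_tail + 1) % BUF_SIZE
    return bytes(out)
```

The interrupt stays short and deterministic, while parsing, printing, or any other slow work happens on the drained bytes in the background.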
Interrupts within MicroPython are also special in that they are garbage collector (gc) locked. What this means to a developer is that you cannot allocate memory in an ISR. All memory, classes, and so on need to be allocated before being used by the ISR. This has an interesting side effect in that if something goes wrong while executing an ISR, the developer has no way of knowing what went wrong! To get traceback information in situations where memory can't be allocated, such as in ISRs, developers can use the MicroPython emergency exception buffer. This is done by adding the following line of code to either the top of main.py or boot.py:
micropython.alloc_emergency_exception_buf(100)
This line of code is used to allocate 100 bytes to store the traceback information for ISRs and any other tracebacks that occur in areas where memory cannot be allocated. If an exception occurs, the Python traceback information is saved to this buffer and then printed to the REPL. This allows a developer to then figure out what they did wrong and correct it. The value of 100 is recommended as the buffer size by the MicroPython documentation.
When considering using timers for tasks, it's also important to recognize that each time an interrupt fires on an Arm Cortex®-M processor, there is a 12–15 clock cycle overhead to switch from the main code to the interrupt and then again to switch back. The reason for this overhead is that the processor needs to save and restore context information for the application when switching into and out of the interrupts. The nice thing is that these transitions, while they consume clock cycles, are deterministic!
Building a task manager using periodic scheduling
Setting up a timer to behave as a periodic task is exactly the same as setting up a timer in MicroPython for any other purpose. We can create an application very similar to our round-robin scheduler using timers by initializing a timer for each task in the application. The first timer will control the blue LED, while the second will control the green LED. Each timer will use a callback function to the task code that will be executed when the timer expires.
We can use the exact same format for our code that we used previously. We will initialize the blue LED as on, and the green LED as off. This allows us to let the timers free-run and generate the railroad pattern that we saw earlier. It's important to note that if we let the timer free-run, even if we stop the application in the REPL, the timers will continue to execute! The reason for this is that the timers are hardware peripherals that will run until the peripheral is disabled, even if we exit our application and return to the REPL. I mention this because any print statements you add to your callback functions will continue to populate the REPL, even after you halt the program, which can make it difficult to work or determine the state of the application.
When using timers to set up tasks, there is no need for an infinite while loop like we saw with the round-robin applications. The timers will just free-run. If the infinite loop is not added to main.py, background processing will fall back to the system REPL and sit there instead. I personally still like to include the while loop and some status information so that I know whether the MicroPython interpreter is executing code. In this example, we will put a sleep delay in the main loop and then calculate how long the application has been running.
The Python code for our tasks is identical to the round-robin example, except for the addition of the emergency exception buffer, as shown here:
import micropython # For emergency exception buffer
import pyb # For uPython MCU
import time
micropython.alloc_emergency_exception_buf(100)
LED_RED = 1
LED_GREEN = 2
LED_BLUE = 3
LED_YELLOW = 4
def task1(timer):
pyb.LED(LED_BLUE).toggle()
return
def task2(timer):
pyb.LED(LED_GREEN).toggle()
return
Instead of calling the task code directly, we set up two timers – time 1, and timer 2 – with a frequency of 5 Hz (period of 200 milliseconds) and set up the callback function to call the tasks. The code to accomplish this is as follows:
pyb.LED(LED_BLUE).on()
pyb.LED(LED_GREEN).off()
# Create task timer for Blue LED
TimerBlueLed = pyb.Timer(1)
TimerBlueLed.init(freq=5)
TimerBlueLed.callback(task1)
print("Task 1 - Blue LED Toggle initialized ...")
# Create task timer for Green LED
TimerGreenLed = pyb.Timer(2)
TimerGreenLed.init(freq=5)
TimerGreenLed.callback(task2)
print("Task 2 - Green LED Toggle initialized ...")
The only code that's necessary for this example is the code for the main loop, which will do nothing more than print out how long our application has been running. To accomplish this, we need to sample the application start time using the time module's ticks_ms method and store it in TimeStart. We can then use time.ticks_diff to calculate the elapsed time between the current tick and the application start tick. The final piece of code is as follows:
TimeStart = time.ticks_ms()
while True:
time.sleep_ms(5000)
SecondsLive = time.ticks_diff(time.ticks_ms(), TimeStart) / 1000
print("Executing for ", SecondsLive, " seconds")
Once the code is on the pyboard and executing, the REPL should display the information shown in the following screenshot. It shows timer-based task scheduling, which prints the current execution time in the REPL and toggles between the blue and green LEDs at 5 Hz. At this point, you know how to use timers to schedule periodic tasks:
At this point, we are ready to examine some additional scheduling paradigms that are not completely mainstream within MicroPython, such as thread support.
MicroPython thread mechanism
The last scheduling paradigm that developers can use to schedule tasks is the MicroPython thread mechanism. In a microcontroller-based system, a thread is essentially a synonym for a task. There are some minor differences, but they are beyond the scope of this book. Developers can create threads that will contain task code. Each task could then use several different mechanisms to execute their task code, such as the following:
- Waiting on a queue
- Waiting on time using a delay
- Periodically monitoring for a polled event
The thread mechanism has been implemented directly from Python 3.x and provides developers with an easy method for creating separate tasks in their application. It is important to recognize that the Python thread mechanism is NOT deterministic. This means that it will not be useful for developing software that has a hard real-time requirement. The MicroPython thread mechanism is also currently experimental! Threads are not supported in all MicroPython ports and for the ones that are, a developer usually needs to enable threads and recompile the kernel in order to have access to the capability on offer.
Starting with MicroPython version 1.8.2, there is support for an experimental threads module that developers can use to create separate threads. Using the threads module is not recommended for developers who are just getting started with MicroPython for several reasons. First, by default, threading is not enabled in the MicroPython kernel. Developers need to enable threading and then recompile and deploy the kernel. Second, since the threading module is experimental, it has not been ported to every MicroPython port yet.
If threads aren't officially supported and not recommended, why are we even talking about them? Well, if we want to understand the different scheduling mechanisms available to us with MicroPython, we need to include the mechanisms that are even experimental. So, let's dive in and talk about threading with MicroPython (even though you may not be able to run a threading application until you have learned how to recompile the kernel, which you will do in Chapter 5, Customizing the MicroPython Kernel Start Up Code).
When a developer creates a thread, they are creating a semi-independent program. If you think back to what a typical program looks like, it starts with an initialization section and then enters into an infinite loop. Every thread has this structure! There is a section to initialize the thread and its variables, followed by an independent loop. The loop itself can be periodic by using time.sleep_ms() or it can block an event, such as an interrupt.
Advantages of using threads in MicroPython
From an organizational standpoint, threads can be a good choice for many MicroPython applications, although similar behavior can be achieved using the asyncio library (which we will talk about shortly). There are several advantages that threads provide, such as the following:
- They allow a developer to easily break up their program into smaller constituents that can be assigned to individual developers.
- They help us improve the code so that it's scalable and reusable.
- They provide us with a small opportunity to decrease bugs in an application by breaking the application up into smaller, less complex pieces. However, as we mentioned previously, more bugs can be created by developers who are unfamiliar with how to use threads properly.
Considerations when using threads in MicroPython
For a Python programmer, before using threads in a MicroPython application, it makes a lot of sense to consider the potential consequences before immediately jumping to threads. There are a few important considerations that a developer needs to contemplate:
- Threads are not deterministic. When a Python thread is ready to execute, there is no mechanism in place for one thread to be executed before another.
- There is no real mechanism for controlling time slicing. Time slicing is when the CPU is shared between multiple threads that are currently ready to execute.
- To pass data around the application, developers may need to add additional complexities to their design, such as the use of queues.
- Developers who are not familiar with designing and implementing multi-threaded applications will find that inter-thread communication and syncing is full of pitfalls and traps. More time will be spent debugging and new developers will find that the other methods we've discussed are more appropriate for their applications.
- Support for threading is currently experimental in MicroPython (see).
- Threads are not supported on all MicroPython ports, so the applications may be less portable than expected.
- Threads will use more resources than the other techniques we've discussed in this chapter.
Building a task manager using threads
Despite a few drawbacks to using threads, they can be a very powerful tool for developers who understand how to use them in the context of a real-time embedded system. Let's take a look at how we can implement our railroad blinky LED application using threads. The first step to developing the application is to create our threads, just like how we created our tasks in the previous examples. In this case, though, there are several key modifications that are worth noting.
First, we need to import the threading module (_thread). Second, we need to define a thread as a regular function declaration. The difference here is that we treat each function like a separate application where we insert a while True statement. If the thread were to exit the infinite loop, the thread would cease operating and not use any more CPU time.
In this example, we're controlling the LED toggling time by using the time.sleep_ms function and setting our thread loop time to 150 milliseconds, just like we did in the previous examples. Our code now looks as follows:
import micropython # For emergency exception buffer
import pyb # For uPython MCU features
import time # For time features
import _thread # For thread support
micropython.alloc_emergency_exception_buf(100)
LED_RED = 1
LED_GREEN = 2
LED_BLUE = 3
LED_YELLOW = 4
def task1():
while True:
pyb.LED(LED_BLUE).toggle()
time.sleep_ms(150)
def task2():
while True:
pyb.LED(LED_GREEN).toggle()
time.sleep_ms(250)
We can initialize the system the exact same way that we did before by initializing the blue LED to on and the green LED to off. The difference in our thread application is that we want to write some code that will spawn off our two threads. This can be done with the following code:
pyb.LED(LED_BLUE).on()
pyb.LED(LED_GREEN).off()
_thread.start_new_thread(task1, ())
_thread.start_new_thread(task2, ())
As you can see, we're using the _thread.start_new_thread method here. This method requires two parameters. The first is the function that should be called when the thread is ready to run. In this case, these are our Led_BlueToggle and Led_YellowToggle functions. The second parameter is a tuple that needs to be passed to our threads. In this case, we have no parameters to pass, so we just pass an empty tuple.
Before running this code, it's useful to note that the rest of the script is the same as the code in our timer example. We create an infinite loop for the script and then report how long the application has been running for. As a reminder, the code for this is as follows:
TimeStart = time.ticks_ms()
while True:
time.sleep_ms(5000)
SecondsLive = time.ticks_diff(time.ticks_ms(), TimeStart) / 1000
print("Executing for ", SecondsLive, " seconds")
An interesting question to ask yourself as you run the threaded code is, How long will it take before these LEDs are no longer blinking in an alternating pattern? Since the threads are not deterministic, over time, there is the potential for these threads to get out of sync and for the application to no longer behave the way that we expect it to. If you are going to run the code, let it run for a while, over several hours, a day, or even a week, and observe the application's behavior.
Event-driven scheduling
Event-driven scheduling can be an extremely convenient technique for developers whose systems are driven by events that are happening on the system. For example, the system may need to respond to a user button press, an incoming data packet, or a limit switch being reached by an actuator.
In event-driven systems, there may be no need to have a periodic background timer; instead, the system can just respond to the event using interrupts. Event-driven scheduling may have our common infinite while loop, but that loop will do nothing or put the system into a low-power state until an event occurs. Developers who are using event-driven systems can follow the interrupt best practices that we discussed earlier and should also read the MicroPython documentation on ISR rules, which can be found at. It's important to note that when you do use interrupts, MicroPython automatically clears the interrupt flag for the developer so that using interrupts is simplified.
Cooperative scheduling
Cooperative scheduling is a technique that developers can leverage to achieve task periodicity without using a timer for every task. Cooperative schedulers are one of the most widely used schedulers throughout embedded system history. A quick look at any of the embedded.com embedded systems surveys will easily show that.
A cooperative scheduler often uses a single timer to create a system tick that the scheduler then uses to determine whether the task code should be executed. The cooperative scheduler provides a perfect balance for developers who need periodicity, simplicity, flexibility, and scalability. They are also a stepping stone toward an RTOS.
So far, we have examined the methods that developers can use in MicroPython to schedule activities. In the next section, we will discuss how we can use the asyncio library to cooperatively schedule tasks. This method is perhaps the most commonly used method by MicroPython developers due to its flexibility and precise timing beyond the methods that we have already examined.
Cooperative multitasking using asyncio
So far, we have examined how we can schedule tasks in a MicroPython-based system using round-robin, timers, and threads. While threads may be the most powerful scheduling option available, they aren't deterministic schedulers and don't fit the bill for most MicroPython applications. There is another scheduling algorithm that developers can leverage to schedule tasks within their systems: cooperative scheduling.
A cooperative scheduler, also known as cooperative multitasking, is basically a round-robin scheduling loop that includes several mechanisms to allow a task to yield the CPU to other tasks that may need to use it. The developer can fine-tune the way that their application behaves, and their tasks execute without adding the complexity that is required for a pre-emptive scheduler, like those included in an RTOS. Developers who decide that a cooperative scheduler fits their application best will need to make sure that each task they create can complete before any other task needs to execute, hence the name cooperative. The tasks cooperate to ensure that all the tasks are able to execute their code within their requirements but are not held to their timing by any mechanism.
Developers can develop their own cooperative schedulers, but MicroPython currently provides the asyncio library, which can be used to create cooperatively scheduled tasks and to handle asynchronous events in an efficient manner. In the rest of this chapter, we will examine asyncio and how we can use it for task scheduling within our embedded applications.
Introducing asyncio
The asyncio module was added to Python starting in version 3.4 and has been steadily evolving ever since. The purpose of asyncio is to handle asynchronous events that occur in Python applications, such as access to input/output devices, a network, or even a database. Rather than allowing a function to block the application, asyncio added the functionality for us to use coroutines that can yield the CPU while they wait for responses from asynchronous devices.
MicroPython has supported asyncio in the kernel since version 1.11 through the uasyncio library. Prior versions still supported asyncio, but the libraries had to be added manually. This could be done through several means, such as the following:
- Copying the usyncio library to your application folder
- Using micropip.py to download the usyncio library
- Using upip if there is a network connection
If you are unsure whether your MicroPython port supports asyncio, all you need to do is type the following into the REPL:
import usyncio
If you receive an import error, then you know that you need to install the library before continuing. Peter Hinch has put together an excellent guide regarding asyncio with instructions for installing the library that you can find at.
It's important to note that the support for asyncio in MicroPython is for the features that were introduced in Python 3.4. Very few features from the Python 3.5 or above asyncio library have been ported to MicroPython, so if you happen to do more in-depth research into asyncio, please keep this in mind to avoid hours of debugging.
The main purpose of asyncio is to provide developers with a technique for handling asynchronous operations in an efficient manner that doesn't block the CPU. This is done through the use of coroutines, which are sometimes referred to as coros. A coroutine is a specialized version of a Python generator function that can suspend its execution before reaching a return and indirectly passes control to another coroutine. Coroutines are a technique that provides concurrency to a Python application. Concurrency basically means that we can have multiple functions that appear to be executing at the same time but are actually running one at a time in a cooperative manner. This is not parallel processing but cooperative multitasking, which can dramatically improve the scalability and performance of a Python application compared to other synchronous methods.
The general idea behind asyncio is that a developer creates several coroutines that will operate asynchronously with each other. Each coroutine is then called using a task from an event loop that schedules the tasks. This makes the coroutines and tasks nearly synonymous. The event loop will execute a task until it yields execution back to the event loop or to another coroutine. The coroutine may block waiting for an I/O operation or it may simply sleep if the coroutine wants to execute at a periodic interval. It's important to note, however, that if a coroutine is meant to be periodic, there may be jitter in the period, depending on the timing for the other tasks and when the event loop can schedule it to run again.
The general behavior for how coroutines work can be seen in the following diagram, which represents an overview of using coroutines with the asyncio library. This diagram is a modified version of the one presented by Matt Trentini at Pycon AU in 2019 during his talk on asyncio in MicroPython:
As shown in the preceding diagram, the Event Loop schedules a task to be executed that has 100% of the CPU until it reaches a yield point. A yield point is a point in the coroutine where a blocking operation (asynchronous operation) will occur and the coroutine is then willing to give up the CPU until the operation is completed. At this point, the event loop will then schedule other coroutines to run. When the asynchronous event occurs, a callback is used to notify the Event Loop that the event has occurred. The Event Loop will then mark the original coroutine as ready to run and will schedule it to resume when other coroutines have yielded the CPU. At that point, the coroutine can resume operation, but as we mentioned earlier, there could be some time that elapses between the receipt of the callback and the coroutine resuming execution, and this is by no means deterministic.
Now, let's examine how we can use asyncio to rewrite our blinky LED application using cooperative multitasking.
A cooperative multitasking blinky LED example
The first step in creating a railroad blinky LED example is to import the asyncio library. In MicroPython, there is not an asyncio library exactly, but a uasyncio library. To improve portability, many developers will import uasyncio as if it were the asyncio library by importing it at the top of their application, as follows:
import uasyncio as asyncio
Next, we can define our LEDs, just like we did in all our other examples, using the following code:
LED_RED = 1
LED_GREEN = 2
LED_BLUE = 3
LED_YELLOW = 4
If you look back at our example of writing a thread-based application, you'll recall that our task1 code looked as follows:
def task1():
while True:
pyb.LED(LED_BLUE).toggle()
time.sleep_ms(150)
def task2():
while True:
pyb.LED(LED_GREEN).toggle()
time.sleep_ms(150)
This is important to review because creating a coroutine will follow a similar structure! In fact, to tell the Python interpreter that our tasks are asynchronous coroutines, we need to add the async keyword before each of our task definitions, as shown in the following code:
async def task1():
while True:
pyb.LED(LED_BLUE).toggle()
time.sleep_ms(150)
async def task2():
while True:
pyb.LED(LED_GREEN).toggle()
time.sleep_ms(150)
The functions are now coroutines, but they are missing something very important: a yield point! If you examine each of our tasks, you can tell that we really want our coroutine to yield once we have toggled our LED and are going to wait 150 milliseconds. The problem with these functions as they are currently written is that they are making a blocking call to time.sleep_ms. We want to update this with a call to asyncio.sleep_ms and we want to let the interpreter know that we want to relinquish the CPU at this point. In order to do that, we are going to use the await keyword.
The await keyword, when reached by the coroutine, tells the event loop that it has reached a point in its execution where it will be waiting for an event to occur and it is willing to give up the CPU to another task. At this point, control is handed back to the event loop and the event loop can decide what task should be executed next. Using this syntax, our task code for the railroad blinky LED applications would be updated to the following:
async def task1():
while True:
pyb.LED(LED_BLUE).toggle()
await asyncio.sleep_ms(150)
async def task2():
while True:
pyb.LED(LED_GREEN).toggle()
await asyncio.sleep_ms(150)
For the most part, the general structure of our coroutine/task functions remains the same. The difference is that we define the function as async and then use await where we expect the asynchronous function call to be made.
At this point, we just initialize the LEDs using the following code:
pyb.LED(LED_BLUE).on()
pyb.LED(LED_GREEN).off()
Then, we create our event loop.
Creating the event loop for this application requires just four lines of code. The first line will assign the asyncio event loop to a loop variable. The next two lines create tasks that assign our coroutines to the event loop. Finally, we tell the event loop to run forever and our coroutines to execute. These four lines of code look as follows:
loop = asyncio.get_event_loop()
loop.create_task(task1())
loop.create_task(task2())
loop.run_forever()
As you can see, we can create any number of tasks and pass the desired coroutine to the create_task method in order to get them into the event loop. At this point, you could run this example and see that you have an efficiently running railroad blinky LED program that uses cooperative multitasking.
Going further with asyncio
Unfortunately, there just isn't enough time to discuss all the cool capabilities that are offered by asyncio in MicroPython applications. However, as we progress through this book, we will use asyncio and its additional capabilities as we develop our various projects. For those of you who want to dig deeper right now, I would highly recommend checking out Peter Hinch's asyncio tutorial, which also covers how you can coordinate tasks, use queues, and more, with asyncio. You can find the tutorial and some example code at.
Summary
In this chapter, we explored several different types of real-time scheduling techniques that can be used with a MicroPython project. We found that there are many different techniques that a MicroPython developer can leverage to schedule activities in their application. We found that each of these techniques has its place and varies based on the level of complexity a developer wants to include in their scheduler. For example, MicroPython threads can be used, but they are not fully supported in every MicroPython port and should be considered an in-development feature.
After looking at several techniques, we saw that the asyncio library may be the best choice for developers looking to get started with MicroPython. Python developers are already familiar with it and asyncio provides developers with cooperative scheduling capabilities that can provide them with the ability to handle asynchronous events in an efficient, non-blocking manner. This allows developers to get more out of their applications while wasting fewer cycles.
In the next chapter, we will explore how we can write drivers for a simple application that uses a push button to control the state of its RGB LEDs.
Questions
- What characteristics define a real-time embedded system?
- What four scheduling algorithms are commonly used with MicroPython?
- What best practices should a developer follow when using callbacks in MicroPython?
- What process should be followed to load new code onto a MicroPython board?
- Why would a developer place micropython.alloc_emergency_exception_buf(100) in their application?
- What reasons might deter a developer from using the _thread library?
- What keywords indicate that a function is being defined as a coroutine?
Further reading
Here is a list of references you can refer to: | https://www.packtpub.com/product/micropython-projects/9781789958034 | CC-MAIN-2022-27 | refinedweb | 6,972 | 51.78 |
I just uploaded a new version of doctest[1] to Hackage. WHAT IS doctest? ================. A basic example of usage is at [4]. WHAT'S NEW IN THIS VERSION? =========================== It is now possible to intersperse comments between a longer, continuing example. All examples within the same comment now share a namespace. The following now works : -- | Calculate Fibonacci number of given 'Num'. -- -- First let's set `n` to ten: -- -- >>> let n = 10 -- -- And now calculate the 10th Fibonacci number: -- -- >>> fib n -- 55 fib :: Integer -> Integer fib 0 = 0 fib 1 = 1 fib n = fib (n - 1) + fib (n - 2) Thanks to Sakari Jokinen for this contribution! In addition I changed the name from DocTest to doctest. I think using all lower-case package names is a good thing. And as we will use doctest as a library in the near future, this was the last chance for this change. Cheers, Simon [1] [2] [3] [4] | http://www.haskell.org/pipermail/haskell-cafe/2011-June/093221.html | CC-MAIN-2014-10 | refinedweb | 153 | 75.81 |
The QFile class is an I/O device that operates on files.
Almost all the functions in this class are reentrant when Qt is built with thread support. The exceptions are setEncodingFunction(), setDecodingFunction(), and setErrorString().
#include <qfile.h>
Inherits QIODevice.
QFile is an I/O device for reading and writing binary and text files. A QFile may be used by itself or more conveniently with a QDataStream or QTextStream.
The file name is usually passed in the constructor but can be changed with setName(). You can check for a file's existence with exists() and remove a file with remove().
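A minimal sketch of this housekeeping API. The file name "old.log" is purely illustrative:

```cpp
#include <qfile.h>

// Remove a stale log file if it is present. The name could equally
// well be passed to the QFile constructor instead of setName().
void removeStaleLog()
{
    QFile f;
    f.setName( "old.log" );
    if ( f.exists() )
        f.remove();        // deletes the file on disk
}
```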
The file is opened with open(), closed with close() and flushed with flush(). Data is usually read and written using QDataStream or QTextStream, but you can read with readBlock() and readLine() and write with writeBlock(). QFile also supports getch(), ungetch() and putch().
The size of the file is returned by size(). You can get the current file position or move to a new file position using the at() functions. If you've reached the end of the file, atEnd() returns TRUE. The file handle is returned by handle().
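The positioning functions can be combined as in the following sketch, which reports a file's size and reads its final 16 bytes. The file name "data.bin" is illustrative and error handling is kept minimal:

```cpp
#include <qfile.h>
#include <stdio.h>

void readTail()
{
    QFile f( "data.bin" );
    if ( !f.open( IO_ReadOnly ) )
        return;

    unsigned long sz = (unsigned long) f.size();
    printf( "size: %lu bytes\n", sz );

    if ( sz > 16 )
        f.at( f.size() - 16 );   // seek to 16 bytes before the end

    char buf[16];
    long n = f.readBlock( buf, sizeof(buf) );  // bytes actually read
    if ( n > 0 && f.atEnd() )
        printf( "read the final %ld bytes\n", n );

    f.close();
}
```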
Here is a code fragment that uses QTextStream to read a text file line by line. It prints each line with a line number.
QStringList lines;
QFile file( "file.txt" );
if ( file.open( IO_ReadOnly ) ) {
    QTextStream stream( &file );
    QString line;
    int i = 1;
    while ( !stream.atEnd() ) {
        line = stream.readLine(); // line of text excluding '\n'
        printf( "%3d: %s\n", i++, line.latin1() );
        lines += line;
    }
    file.close();
}
Writing text is just as easy. The following example shows how to write the data we read into the string list from the previous example:
QFile file( "file.txt" );
if ( file.open( IO_WriteOnly ) ) {
    QTextStream stream( &file );
    for ( QStringList::Iterator it = lines.begin(); it != lines.end(); ++it )
        stream << *it << "\n";
    file.close();
}
The QFileInfo class holds detailed information about a file, such as access permissions, file dates and file types.
The QDir class manages directories and lists of file names.
Qt uses Unicode file names. If you want to do your own I/O on Unix systems you may want to use encodeName() (and decodeName()) to convert the file name into the local encoding.
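For example, a Unicode file name can be handed to a POSIX call by converting it to the local 8-bit encoding first. This sketch is Unix-specific; the function name is illustrative:

```cpp
#include <qfile.h>
#include <fcntl.h>
#include <unistd.h>

void rawOpen( const QString &fileName )
{
    QCString local = QFile::encodeName( fileName ); // Unicode -> local encoding
    int fd = ::open( local.data(), O_RDONLY );
    if ( fd != -1 )
        ::close( fd );
}
```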
See also QDataStream, QTextStream, and Input/Output and Networking.
This is used by QFile::setDecodingFunction().
This is used by QFile::setEncodingFunction().
See also setName().
See also size().
Example: distributor/distributor.ui.h.
Reimplemented from QIODevice.
The file is not closed if it was opened with an existing file handle. If the existing file handle is a FILE*, the file is flushed. If the existing file handle is an int file descriptor, nothing is done to the file.
Some "write-behind" filesystems may report an unspecified error on closing the file. These errors only indicate that something may have gone wrong since the previous open(). In such a case status() reports IO_UnspecifiedError after close(), otherwise IO_Ok.
See also open() and flush().
Examples: chart/chartform_files.cpp, distributor/distributor.ui.h, helpviewer/helpwindow.cpp, mdi/application.cpp, qdir/qdir.cpp, qwerty/qwerty.cpp, and xml/outliner/outlinetree.cpp.
Reimplemented from QIODevice.
See also setDecodingFunction().
Example: distributor/distributor.ui.h.
By default, this function converts fileName to the local 8-bit encoding determined by the user's locale. This is sufficient for file names that the user chooses. File names hard-coded into the application should only use 7-bit ASCII filename characters.
The conversion scheme can be changed using setEncodingFunction(). This might be useful if you wish to give the user an option to store file names in UTF-8, etc., but be aware that such file names would probably then be unrecognizable when seen by other programs.
See also decodeName().
Example: distributor/distributor.ui.h.
The returned strings are not translated with the QObject::tr() or QApplication::translate() functions. They are marked as translatable strings in the context QFile. Before you show the string to the user you should translate it first, e.g:
QFile f( "foo.txt" );
if ( !f.open( IO_ReadOnly ) ) {
    QMessageBox::critical( this, tr("Open failed"),
                           tr("Could not open file for reading: %1")
                           .arg( qApp->translate("QFile", f.errorString()) ) );
    return;
}
See also QIODevice::status(), QIODevice::resetStatus(), and setErrorString().
Examples: chart/chartform.cpp, dirview/dirview.cpp, and helpviewer/helpwindow.cpp.
Returns TRUE if this file exists; otherwise returns FALSE.
See also name().
close() also flushes the file buffer.
Reimplemented from QIODevice.
Returns the byte/character read, or -1 if the end of the file has been reached.
See also putch() and ungetch().
Reimplemented from QIODevice.
This is a small positive integer, suitable for use with C library functions such as fdopen() and fcntl(). On systems that use file descriptors for sockets (ie. Unix systems, but not Windows) the handle can be used with QSocketNotifier as well.
If the file is not open or there is an error, handle() returns -1.
See also QSocketNotifier.
Returns the name set by setName().
See also setName() and QFileInfo::fileName().
The mode parameter m must be a combination of the following flags:
The raw access mode is best when I/O is block-operated using a 4KB block size or greater. Buffered access works better when reading small portions of data at a time.
Warning: When working with buffered files, data may not be written to the file at once. Call flush() to make sure that the data is really written.
Warning: If you have a buffered file opened for both reading and writing you must not perform an input operation immediately after an output operation or vice versa. You should always call flush() or a file positioning operation, e.g. at(), between input and output operations, otherwise the buffer may contain garbage.
If the file does not exist and IO_WriteOnly or IO_ReadWrite is specified, it is created.
Example:
QFile f1( "/tmp/data.bin" ); f1.open( IO_Raw | IO_ReadWrite ); QFile f2( "readme.txt" ); f2.open( IO_ReadOnly | IO_Translate ); QFile f3( "audit.log" ); f3.open( IO_WriteOnly | IO_Append );
See also name(), close(), isOpen(), and flush().
Examples: application/application.cpp, chart/chartform_files.cpp, distributor/distributor.ui.h, helpviewer/helpwindow.cpp, qdir/qdir.cpp, qwerty/qwerty.cpp, and xml/outliner/outlinetree.cpp.
Reimplemented from QIODevice.
Opens a file in the mode m using an existing file handle f. Returns TRUE if successful, otherwise FALSE.
Example:
#include <stdio.h> void printError( const char* msg ) { QFile f; f.open( IO_WriteOnly, stderr ); f.writeBlock( msg, qstrlen(msg) ); // write to stderr f.close(); }
When a QFile is opened using this function, close() does not actually close the file, only flushes it.
Warning: If f is stdin, stdout, stderr, you may not be able to seek. See QIODevice::isSequentialAccess() for more information.
See also close().
Opens a file in the mode m using an existing file descriptor f. Returns TRUE if successful, otherwise is one of 0 (stdin), 1 (stdout) or 2 (stderr), you may not be able to seek. size() is set to INT_MAX (in limits.h).
See also close().
Returns ch, or -1 if some error occurred.
See also getch() and ungetch().
Reimplemented from QIODevice.
Reads bytes from the file into the char* p, until end-of-line or maxlen bytes have been read, whichever occurs first. Returns the number of bytes read, or -1 if there was an error. Any terminating newline is not stripped.
This function is only efficient for buffered files. Avoid readLine() for files that have been opened with the IO_Raw flag.
See also readBlock() and QTextStream::readLine().
Reimplemented from QIODevice.
Reads a line of text.
Reads bytes from the file into string s, until end-of-line or maxlen bytes have been read, whichever occurs first. Returns the number of bytes read, or -1 if there was an error, e.g. end of file. Any terminating newline is not stripped.
This function is only efficient for buffered files. Avoid using readLine() for files that have been opened with the IO_Raw flag.
Note that the string is read as plain Latin1 bytes, not Unicode.
See also readBlock() and QTextStream::readLine().
The file is closed before it is removed.
Removes the file fileName. Returns TRUE if successful, otherwise FALSE.
Warning: This function is not reentrant.
Sets the function for decoding 8-bit file names to f. The default uses the locale-specific 8-bit encoding.
See also encodeName() and decodeName().
Warning: This function is not reentrant.
Sets the function for encoding Unicode file names to f. The default encodes in the locale-specific 8-bit encoding.
See also encodeName().
Warning: This function is not reentrant.
Sets the error string returned by the errorString() function to str.
See also errorString() and QIODevice::status().
Do not call this function if the file has already been opened.
If the file name has no path or a relative path, the path used will be whatever the application's current directory path is at the time of the open() call.
Example:
QFile file; QDir::setCurrent( "/tmp" ); file.setName( "readme.txt" ); QDir::setCurrent( "/home" ); file.open( IO_ReadOnly ); // opens "/home/readme.txt" under Unix
Note that the directory separator "/" works for all operating systems supported by Qt.
See also name(), QFileInfo, and QDir.
See also at().
Example: table/statistics/statistics.cpp.
Reimplemented from QIODevice.
This function is normally called to "undo" a getch() operation.
Returns ch, or -1 if an error occurred.
See also getch() and putch().
Reimplemented from QIODevice.
This file is part of the Qt toolkit. Copyright © 1995-2003 Trolltech. All Rights Reserved. | http://doc.trolltech.com/3.2/qfile.html | crawl-002 | refinedweb | 1,568 | 70.5 |
C# for loop is one of the most used loop in C# programming language and it is same as we have seen in C/ C++ or Java ( if you have learned coding in any of the languages). For loop has a initialized variable, condition until when to loop, and third is incremement/decrement of value.
Syntax of for loop
for(initialization;condition; increment/decrement) { //code to execute until condition in the for loop = true }
Where
int i =0;
i < 10;
i++
Console.WriteLine("Value of I ="+i);
Here is the complete example, considering above points
for(var i=0; i < 10; i++) { Console.WriteLine("Value of i ="+i); }
Output
Value of i =0 Value of i =1 Value of i =2 Value of i =3 Value of i =4 Value of i =5 Value of i =6 Value of i =7 Value of i =8 Value of i =9
As the value of i =0; in first loop, it prints 0, then 1 because it was incremented by 1, then 2 and so on.
It will print all the values until the value i < 10, so last value is 9.
We can have nested for loops, means a for loop inside another for loop, which is useful when coding for Pyramid pattern in C#, let's take a look on how it works.
using System; public class ForLoopProgram { public static void Main() { //this for loop is checked for(int i=1;i<=3;i++) { //then this for loop is executed until condition is true //on each outer for loop run, this is executed completed again for(int j=1;j<=3;j++) { Console.WriteLine(i+" "+j); } Console.WriteLine("End of round "+i +" for outer loop"); } } }
Output:
1 1 1 2 1 3 End of round 1 for outer loop 2 1 2 2 2 3 End of round 2 for outer loop 3 1 3 2 3 3 End of round 3 for outer loop
If we will not define any condition inside for loop and leave it as it is, it will be executed infinitely until we stop the program manually by clicking Ctrl+C. Example of infinite for loop is
using System; public class InifiniteForProgram { public static void Main() { for(;;) { Console.WriteLine("Hello World"); } } }
You can increment, decrement value by 2 also inside for loop, using operators +=., example
i+=2
for (int i = 0; i < 20; i+=2) { Console.WriteLine(i); }
If you don't like the above method, you can also leave the increment/decrement part blank and do this inside the {} braces, after main code is executed, example
using System; public class Program { public static void Main() { for (int i = 0; i < 20; ) { Console.WriteLine(i ); //increment value here i= i+2; } } }
Output:
0 2 4 6 8 10 12 14 16 18 | https://qawithexperts.com/tutorial/c-sharp/14/c-sharp-for-loop | CC-MAIN-2021-39 | refinedweb | 467 | 59.26 |
using System; namespace DatimeApplication { class Program { static void Main(string[] args) { DateTime startDate = new DateTime(2013, 1, 1); DateTime endDate = new DateTime(2013, 12, 31); TimeSpan diff = endDate - startDate; int days = diff.Days; int j=0; for (var i = 0; i <= days; i++) { var testDate = startDate.AddDays(i); if (testDate.DayOfWeek != DayOfWeek.Saturday && testDate.DayOfWeek != DayOfWeek.Sunday) { j = j + 1; Console.WriteLine(testDate.ToShortDateString()); } } Console.WriteLine(string.Format("Total Days:{0}",j)); Console.ReadLine(); } } }
If you see above code, Its very easy to find all days except Saturday and Sunday in a year. I have created two dates with Start date and end date and then I found a difference between those two days. And then in for loop I have checked whether they are weekdays then I have printed a date for that. I have calculated all days except Saturday and Sunday for year 2013. At the end I have printed total days also.
Once you run the code following is a output as expected.
That’s it. Hope you like it. Stay tuned for more…
Your feedback is very important to me. Please provide your feedback via putting comments. | http://www.dotnetjalps.com/2013/11/find-days-except-saturday-sunday-year.html | CC-MAIN-2017-04 | refinedweb | 191 | 67.65 |
I am trying to build an application that displays an image in a JScrollPane. I have written a class that extends JPanel of which I override the paintComponent() function to paint the image on the JPanel. This is then added to the JScrollPane. The problem is this, the JPanel is displayed as the same size as the JScrollPane and no scrollbars appear, and only a portion of the image is displayed. I'll post some code excerpts below so you can get an idea of what I'm doing.
The JPanel subclass
public class xImagePanel extends JPanel { BufferedImage image; public xImagePanel(){ image = null; this.setAutoscrolls(true); } public void setImage(String path) throws IOException{ File f = new File(path); image = ImageIO.read(f); } @Override public void paintComponent(Graphics g) { Graphics2D g2 = (Graphics2D)g; g2.drawImage(image, 0,0, this); } }
And the main class where it is added to the scrollpane. I didn't add the entire class because the scrollpane is part of a much larger GUI.
try { JScrollPane jScrollPane1 = new JScrollPane(); xImagePanel imagePanel = new xImagePanel(); imagePanel.setSize(2560, 1600); imagePanel.setImage("...\testIMG.jpg"); //directory path excluded } catch (IOException ex) { Logger.getLogger(xbsMainFrame.class.getName()).log(Level.SEVERE, null, ex); } jScrollPane1.getViewport().add(imagePanel); | http://www.javaprogrammingforums.com/awt-java-swing/5383-painting-image-jpanel-inside-jscrollpane.html | CC-MAIN-2015-14 | refinedweb | 203 | 50.43 |
python-twitter 1.3.1
A Python wrapper around the Twitter API
**A Python wrapper around the Twitter API.**
Author: The Python-Twitter Developers <python-twitter@googlegroups.com>
## Introduction
This library provides a pure Python interface for the [Twitter API]() It works with Python versions from 2.5 to 2.7. Python 3 support is under development.
[Twitter]() provides a service that allows people to connect via the web, IM, and SMS. Twitter exposes a [web services API]() and this library is intended to make it even easier for Python programmers to use.
## Building
From source:
Install the dependencies:
- [Requests]()
- [SimpleJson]()
- [Requests OAuthlib]()
Alternatively use `pip`:
$ pip install -r requirements.txt
Download the latest `python-twitter` library from:
Extract the source distribution and run:
```
$ python setup.py build
$ python setup.py install
```
*Testing*
With setuptools installed:
```
$ python setup.py test
```
Without setuptools installed:
```
$ python twitter_test.py
```
## Getting the code
The code is hosted at [Github]()
```
$ git clone git://github.com/bear/python-twitter.git
$ cd.
The python-twitter library now only supports oAuth authentication as the Twitter devs have indicated that OAuth is the only method that will be supported moving forward.
To create an instance of the `twitter.Api` with login credentials (Twitter now requires an oAuth Access Token for all API calls)
```
>>> import twitter
>>> api = twitter.Api(consumer_key='consumer_key',
consumer_secret='consumer_secret',
access_token_key='access_token',
access_token_secret='access_token_secret')
```
To see if your credentials are successful:
```
>>> print api.VerifyCredentials()
{"id": 16133, "location": "Philadelphia", "name": "bear"}
```
**NOTE -** much more than the small sample given here will print [the google group]() for more discussion.
## Contributors
Originally two libraries by DeWitt Clinton and Mike Taylor which was then merged into python-twitter.
Now it's a full-on open source project with many contributors over time:
* Jodok Batlogg,
* Kyle Bock,
* Brad Choate,
* Robert Clarke,
* Jim Cortez,
* Pierre-Jean Coudert,
* Aish Raj Dahal,
* Thomas Dyson,
* Jim Easterbrook
* Yoshinori Fukushima,
* Hameedullah Khan,
* Osama Khalid,
* Omar Kilani,
* Domen Kožar,
* Robert Laquey,
* Jason Lemoine,
* Pradeep Nayak,
* Ian Ozsvald,
* Nicolas Perriault,
* Glen Tregoning,
* Lars Weiler,
* Sebastian Wiesinger,
* Jake Robinson,
* abloch,
* cahlan,
* dpslwk,
* edleaf,
* ecesena,
* git-matrix,
* sbywater,
* thefinn93,
* themylogin,
and the rest of the python-twitter mailing.
```
2014-02-17
changed version to 1.3 and then to 1.3.1 because I forgot to include CHANGES
fix Issue 143 - GetStatusOembed() url parameter was being stomped on
fix debugHTTP in a brute force way but it works again
Add id_str to Status class
Added LookupFriendship() method for checking follow status
pull request from lucas
Fix bug of GetListMembers when specifying `owner_screen_name`
pull request from shichao-an
2014-01-18
backfilling varioius lists endpoints
added a basic user stream call
2014-01-17
changed to version 1.2
fixed python 3 issue in setup.py (print statements)
fixed error in CreateList(), changed count default for GetFollowers to 200 and added a GetFollowersPaged() method
need to backfill commit log entries!
2013-10-06
changed version to 1.1
The following changes have been made since the 1.0.1 release
Remove from ParseTweet the Python 2.7 only dict comprehension item
Fix GetListTimeline condition to enable owner_screen_name based fetching
Many fixes for readability and PEP8
Cleaning up some of the package importing. Only importing the functions that are needed
Also added first build of the sphinx documentation. Copied some info from the readme to the index page
Added lines to setup.py to help the user troubleshoot install problems. #109
Removed the OAuth2 lines from the readme
Removed OAuth2 library requirements
Added GetListMembers()
2013-06-07
changed version to 1.0.1
added README bit about Python version requirement
2013-06-04
changed version to 1.0
removed doc directory until we can update docs for v1.1 API
added to package MANIFEST.in the testdata directory
2013-05-28
bumped version to 1.0rc1
merged in api_v1.1 branch
The library is now only for Twitter API v1.1
2013-03-03
bumped version to 0.8.7
removed GetPublicTimeline from the docs so as to stop confusing
new folks since it was the first example given ... d'oh!
2013-02-10
bumped version to 0.8.6
update requirements.txt to remove distribute reference
github commit 3b9214a879e5fbd03036a7d4ae86babc03784846
Merge pull request #33 from iElectric/profile_image_url_https
github commit 67cbb8390701c945a48094795474ca485f092049
patch by iElectric on github
Change User.NewFromJsonDict so that it will pull from either
profile_image_url_https or profile_image_url to keep older code
working properly if they have stored older json data
2013-02-07
bumped version to 0.8.5
lots of changes have been happening on Github and i've been
very remiss in documenting them here in the Changes file :(
this version is the last v1.0 API release and it's being made
to push to PyPI and other places
all work now will be on getting the v1.1 API supported
2012-11-04
Api.UserLookUp() throws attribute error when corresponding screen_name is not found
Merge pull request #5 from thefinn93/master
Setup.py crashes because the README file is now named README.md
Update .gitignore to add the PyCharm data directory
2012-10-16
Patch by dan@dans.im
Add exclude_replies parameter to GetUserTimeline
Bug reported by michaelmior on github
get_access_token.py attempts Web auth
2011-12-03
Comment by qfuxiang to the above changeset
The base url was wrong for the Followers API calls
Add include_entities parameter to GetStatus()
Patch by gaelenh
Change PostUpdate() so that it takes the shortened link into
account. Small tweak to the patch provided to make the
shortened-link length set by a API value instead of a constant.
Patch by ceesjan.ytec
AttributeError handles the fact that win* doesn't implement
os.getlogin()
Patch by yaleman
As described at
GET trends (corresponding to Api.GetTrendsCurrent) is now
deprecated in favor of GET trends/:woeid. GET trends also now
requires authentication, while trends/:woeid doesn't.
Patch and excellent description by jessica.mckellar
Currently, two Trends containing the same information
(name, query, and timestamp) aren't considered equal because
__eq__ isn't overridden, like it is for Status, User, and the
other Twitter objects.
Patch and excellent description by jessica.mckellar
All variations on a theme - basically Twitter is returning
something different for an error payload. Changed code to
check for both 'error' and 'errors'.
2011-05-08
A comment in this issue made me realize that the parameter sanity
check for max_id was missing in GetMentions() - added
First pass at working in some of the cursor support that has been
in the Twitter API but we haven't made full use of - still working
out the small issues.
2011-04-16
bumped version to 0.8.3
released 0.8.2 to PyPI
bumped version to 0.8.2
Issue 193
Missing retweet_count field on Status object
Patch (with minor tweaks) by from alissonp
Issue 181
Add oauth2 to install_requires parameter list and also updated
README to note that the oauth2 lib can be found in two locations
Issue 182, Issue 137, Issue 93, Issue 190
language value missing from User object
Added 'lang' item and also some others that were needed:
verified, notifications, contributors_enabled and listed_count
patches by wreinerat, apetresc, jpwigan and ghills
2011-02-26
Issue 166
Added a basic, but sadly needed, check when parsing the json
returned by Twitter as Twitter has a habit of returning the
failwhale HTML page for a json api call :(
Patch (with minor tweaks) by adam.aviv
Issue 187
Applied patch by edward.hades to fix issue where MaximumHitFrequency
returns 0 when requests are maxed out
Issue 184
Applied patch by jmstaley to put into the GetUserTimeline API
parameter list the max_id value (it was being completely ignored)
2011-02-20
Added retweeted to Status class
Fixed Status class to return Hashtags list in AsDict() call
Issue 185
Added retweeted_status to Status class - patch by edward.hades
Issue 183
Removed errant print statement - reported by ProgVal
2010-12-21
Setting version to 0.8.1
Issue 179
Added MANIFEST.in to give setup.py sdist some clues as to what
files to include in the tarball
2010-11-14
Setting version to 0.8 for a bit as having a branch for this is
really overkill, i'll just take DeWitt advice and tag it when
the release is out the door
Issue 175
Added geo_enabled to User class - basic parts of patch provided
by adam.aviv with other bits added by me to allow it to pass tests
Issue 174
Added parts of adam.aviv's patch - the bits that add new field items
to the Status class.
Issue 159
Added patch form adam.aviv to make the term parameter for GetSearch()
optional if geocode parameter is supplied
2010-11-03
Ran pydoc to generate docs
2010-10-16
Fixed bad date in previous CHANGES entry
Fixed source of the python-oauth2 library we use: from brosner
to simplegeo
I made a pass thru the docstrings and updated many to be the
text from the current Twitter API docs. Also fixed numerous
whitespace issues and did a s/[optional]/[Optional]/ change.
Imported work by Colin Howe that he did to get the tests working.
Issue 169
Patch by yaemog which adds missing Trends support.
Issue 168
Only cache successful results as suggested by yaemog.
Issue 111
Added a new GetUserRetweets() call as suggested by yash888
Patch given was adjusted to reflect the current code requirements.
Issue 110
Added a VerifyCredentials() sample call to the README example
Issue 105
Added support for the page parameter to GetFriendsTimeline()
as requested by jauderho.
I also updated GetFriendsTimeline() to follow the current
Twitter API documentation
Somewhere in the patch frenzy of today an extra GetStatus()
def was introduced!?! Luckily it was caught by the tests.
wooo tests! \m/
Setting version to 0.8
r0.8 branch created and trunk set to version 0.9-devel
2010-09-26
Issue 150
Patch by blhobbes which removes a double quoting issue that
was happening for GetSearch()
Reported by huubhuubbarbatruuk
Issue 160
Patch by yaemog which adds support for include_rts and
include_entities support to GetUserTimeline and GetPublicTimeline
Small tweaks post-patch
Applied docstring tweak suggested by dclinton in revision comment
Thanks for the catch!
Issue 164
Patch by yaemog which adds GetRetweets support.
Small tweaks and two typo fixes post-patch.
Issue 165
Patch by yaemog which adds GetStatus support.
Small tweaks post-patch
Issue 163
Patch by yaemog which adds users/lookup support.
Small tweaks to docstring only post-patch.
Changed username/password parameter to Api class to be
consumer_key/consumer_secret to better match the new
oAuth only world that Twitter has demanded.
Added debugHTTP to the parameter list to Api class to
control if/when the urllib debug output is displayed.
2010-08-25
First pass at adding list support.
Added a new List class and also added to the Api class
new methods for working with lists:
CreateList(self, user, name, mode=None, description=None)
DestroyList(self, user, id)
CreateSubscription(self, owner, list)
DestroySubscription(self, owner, list)
GetSubscriptions(self, user, cursor=-1)
GetLists(self, user, cursor=-1)
2010-08-24
Fixed introduced bug in the Destroy* and Create* API calls
where any of the routines were passing in an empty dict for
POST data. Before the oAuth change that was enough to tell
_FetchUrl() to use POST instead of GET but now a non-empty
dict is required.
Issue 144
GetFriends() where it was failing with a 'unicode object has
no attribute get'. This was caused when Twitter changed how
they return the JSON data. It used to be a straight list but
now there are some elements *and* then the list.
2010-08-18
Applied the json/simplejson part of the patch found
in Issue 64 ()
Patch provided by Thomas Bohmbach
Applied patch provided by liris.pp in Issue 147
Ensures that during a PostStatus we count the length using a unicode aware
len() routine. Tweaked patch slightly to take into account that the
twitter.Api() instance may have been setup with None for input_encoding.
2010-08-17
Fixed error in the POST path for _FetchUrl() where by
I show to the world that yes, I do make last minute
changes and completely forget to test them :(
Thanks to Peter Sanchez for finding and pointing to
working code that showed the fix
2010-08-15
Added more help text (I hope it helps) to the README
and also to get_access_token.py.
Added doctext notes to twitter.Api() parameter list
to explain more about oAuth.
Added import exception handling for parse_qs() and
parse_qsl() as it seems those funcitons moved between
2.5 and 2.6 so the oAuth update broke the lib under
python2.5. Thanks to Rich for the bug find (sorry
it had to be found the hard way!)
from changeset 184:60315000989c by DeWitt
Update the generated twitter.py docs to match the trunk
2010-08-14
Fixed silly typo in _FetchUrl() when doing a POST
Thanks to Peter Sanchez for the find and fix!
Added some really basic text to the get_access_token.py
startup output that explains why, for now, you need to
visit Twitter and get an Application key/secret to use
this library
2010-08-12
Updated code to use python-oauth2 library for authentication.
Twitter has set a deadline, 2010-08-16 as of this change, for
the switch from Basic to oAuth.
The oAuth integration was inspired by the work done by
Hameedullah Khan and others.
The change to using python-oauth2 library was done purely to
align python-twitter with an oauth library that was maintained
and had tests to try and minimize grief moving forward.
Slipped into GetFriendsTimeline() a new parameter, retweets, to
allow the call to pull from the "friends_timeline" or the
"home_timeline".
Fixed some typos and white-space issues and also updated the
README to point to the new Twitter Dev site.
2010-08-02
Updated copyright information.
2010-06-13
Applied changeset from nicdumz repo nicdumz-cleaner-python-twitter
r=07df3feee06c8d0f9961596e5fceae9e74493d25
datetime is required for MaximumHitFrequency
Applied changeset from nicdumz repo nicdumz-cleaner-python-twitter
r=dd669dff32d101856ed6e50fe8bd938640b04d77
update source URLs in README
Applied changeset from nicdumz repo nicdumz-cleaner-python-twitter
r=8f0796d7fdcea17f4162aeb22d3c36cb603088c7
adjust tests to reflect -> change
Applied changeset from nicdumz repo nicdumz-cleaner-python-twitter
r=3c05b8ebe59eca226d9eaef2760cecca9d50944a
tests: add .info() method to objects returned by our Mockup handler
This is required to completely mimick urllib, and have successful
response.headers attribute accesses.
Applied partial patch for Issue 113
The partial bit means we changed the parameter from "page" to "cursor"
so the call would work. What was left out was a more direct way
to return the cursor value *after* the call and also in the patch
they also changed the method to return an iterator.
2010-05-17
Issue 50
Applied patch by wheaties.box that implements a new method to return
the Rate Limit Status and also adds the new method MaximumHitFrequency
Multiple typo, indent and whitespace tweaks
Issue 60
Pulled out new GetFavorites and GetMentions methods from the patch
submitted by joegermuska
Issue 62
Applied patch from lukev123 that adds gzip compression to the GET
requests sent to Twitter. The patch was modified to default gzip to
False and to allow the twitter.API class instantiation to set the
value to True. This was done to not change current default
behaviour radically.
Issue 80
Fixed PostUpdate() call example in the README
2010-05-16
Issue 19
TinyURL example and the idea for this comes from a bug filed by
acolorado with patch provided by ghills.
Issue 37
Added base_url to the twitter.API class init call to allow the user
to override the default base. Since Twitter now
supports https for all calls I (bear) changed the patch to default to
https instead of http.
Original issue by kotecha.ravi, patch by wiennat and with implementation
tweaks by bear.
Issue 45
Two grammar fixes for relative_created_at property
Patches by thomasdyson and chris.boardman07
2010-01-24
Applying patch submitted to fix Issue 70
The patch was originally submitted by user ghills, adapted by livibetter and
adapted even further by JimMoefoe (read the comments for the full details :) )
Applying patch submitted by markus.magnuson to add new method GetFriendIDs
Issue 94
2009-06-13
Releasing 0.6 to help people avoid the Twitpocalypse.
2009-05-03
Support hashlib in addition to the older md5 library.
2009-03-11
Added page parameter to GetReplies, GetFriends, GetFollowers, and GetDirectMessages
2009-03-03
Added count parameter to GetFriendsTimeline
2009-03-01
Add PostUpdates, which automatically splits long text into multiple updates.
2009-02-25
Add in_reply_to_status_id to api.PostUpdate
2009-02-21
Wrap any error responses in a TwitterError
Add since_id to GetFriendsTimeline and GetUserTimeline
2009-02-20
Added since and since_id to Api.GetReplies
2008-07-10
Added new properties to User and Status classes.
Removed spurious self-import of the twitter module
Added a NOTICE file
Require simplejson 2.x or later
Added get/create/destroy favorite flags for status messages.
Bug fix for non-tty devices.
2007-09-13
Unset the executable bit on README.: The Python-Twitter Developers
- Keywords: twitter api
- License: Apache License 2.0
- Categories
- Package Index Owner: batlogg, dclinton, bear
- Package Index Maintainer: batlogg
- DOAP record: python-twitter-1.3.1.xml | https://pypi.python.org/pypi/python-twitter/1.3.1 | CC-MAIN-2016-44 | refinedweb | 2,878 | 54.52 |
In this article, we'll teach you how to install, set up, and use the Python library "face recognition" on Ubuntu 16.04.
Requirements
Before proceeding with the usage of this library, you will need the following on your system:
Python 3
In this tutorial, we'll follow the installation of the library with Python 3.
cmake
Your system needs CMake installed in order to build dlib from source.
If it's not installed in your system, you can run the following commands to install it:
# Update repo
sudo apt-get update

# Install cmake if it's not installed
sudo apt-get install build-essential cmake
1. Install and compile dlib
Before proceeding with the installation of the face recognition library, you will need the dlib distributable installed on your system, as well as its Python binding.
To start with the compilation of dlib, clone the repository into some directory on your system:
# Clone the dlib library in some directory of the system
git clone https://github.com/davisking/dlib.git
Then, proceed to build dlib with the following commands:
# get into the cloned directory
cd dlib

# create build directory inside the cloned directory
mkdir build

# Switch to the created directory
cd build

# generate a Makefile in the current directory
cmake ..

# Build dlib!
cmake --build .
This will start the build process, and once it finishes, the native library of dlib will be available on your system. For more information about dlib, please visit the official website.
2. Install Python binding for dlib
After building dlib, switch again to the cloned directory in the previous step:
cd ..
And proceed with the installation of the python bindings running the
setup.py file with Python 3 with the following command:
python3 setup.py install
This will install the binding, and you will be able to import dlib later in your Python code. If you face the following exception during the execution of the previous command:
Traceback (most recent call last):
  File "setup.py", line 42, in <module>
    from setuptools import setup, Extension
ImportError: No module named 'setuptools'
Install the Python 3 setup tools with the following command:
sudo apt-get install python3-setuptools
Now try running the python3 setup.py install command again.
3. Install face recognition library
As mentioned, we'll use the face recognition library. This library allows you to recognize and manipulate faces from Python (or from the command line) with very little code; it is built on top of dlib's deep-learning-based face recognition. You can install it with the following command:
Note
The installation will take a while to download and install, so be patient.
pip3 install face_recognition
If you don't have pip3 installed, install it with the following command:
sudo apt-get -y install python3-pip
For more information about this library, please visit the official repository on GitHub. After installing the library, you will be able to use it either from the CLI or from your Python scripts.
4. How to use
When you install
face_recognition, you get two simple command-line programs:
face_recognition - Recognize faces in a photograph or folder full of photographs.
face_detection - Find faces in a photograph or folder full of photographs.
You will also be able to import the library in your own scripts and use it from there!
Face recognition
For example, with this library you will be able to identify faces by comparing them against a small database of known images. Create a directory that contains the people that the script will be able to identify; in this example we'll have a directory with 3 celebrities.
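The setup we assume in this example can be created like this (the file names are the same ones reused in the Python example later in this article):

```shell
# Create the two directories used in this example
mkdir -p celebrities unknown

# Then copy one clear photo per known person into celebrities/, e.g.:
#   celebrities/Barack Obama.jpg
#   celebrities/Justin Timberlake.jpg
#   celebrities/Ryan Reynolds.jpg
# and the picture whose face you want to identify into unknown/:
#   unknown/unknown_celebrity.jpg
```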
In our command, we'll identify this directory as our source of images. In other directory, we'll store the image of the celebrity that we want to identify from our database, obviously we'll use one of the regitered celebrities, but with another image:
The logic is the following, the library will use the directory of images
celebrities as database and we'll search from who's the image(s) stored in the
unknown directory. You can run the following command to accomplish the mentioned task:
face_recognition ./celebrities/ ./unknown/
Then, the output will be:
The command will output the path to the image that was processed, in our case
unknown_celebrity.jpg and will add the name of the matched image from the
celebrities directory as suffix. In this case, the library was able to identify the actor Ryan Reynolds from our images. Note that this can work with multiple images as well.
As mentioned, the CLI utility is just an extra, one of the fun facts is the hability of writing some code by yourself and identifying the faces with some logic, for example:
import face_recognition # Load the jpg files into numpy arrays obama_image = face_recognition.load_image_file("Barack Obama.jpg") justin_image = face_recognition.load_image_file("Justin Timberlake.jpg") ryan_image = face_recognition.load_image_file("Ryan Reynolds.jpg") unknown_image = face_recognition.load_image_file("unknown_celebrity.jpg") # Get the face encodings for each face in each image file # Since there could be more than one face in each image, it returns a list of encodings. # But since I know each image only has one face, I only care about the first encoding in each image, so I grab index 0. try: obama_face_encoding = face_recognition.face_encodings(obama_image)[0] justin_face_encoding = face_recognition.face_encodings(justin_image)[0] ryan_face_encoding = face_recognition.face_encodings(ryan_image)[0] unknown_face_encoding = face_recognition.face_encodings(unknown_image)[0] except IndexError: print("I wasn't able to locate any faces in at least one of the images. Check the image files. Aborting...") quit() known_faces = [ obama_face_encoding, justin_face_encoding, ryan_face_encoding ] # results is an array of True/False telling if the unknown face matched anyone in the known_faces array results = face_recognition.compare_faces(known_faces, unknown_face_encoding) print("Is the unknown face a picture of Obama? {}".format(results[0])) print("Is the unknown face a picture of Justin? {}".format(results[1])) print("Is the unknown face a picture of Ryan? {}".format(results[2])) print("Is the unknown face a new person that we've never seen before? {}".format(not True in results))
The idea is basically the same, compare the face encoding generated by the images that you have in your "database" with the encoding generated by the image that you want to identify.
Face detection providen directory followed by the coordinates as suffix, for example:
face_detection ./celebrities/
This will generate the following output:
As you can see, you can obtain the coordinates of the identified faces in the image after the first comma of the string. You can use it with your code as well:
import face_recognition image = face_recognition.load_image_file("Ryan Reynolds.jpg") face_locations = face_recognition.face_locations(image) # [(98, 469, 284, 283)] print(face_locations)
Happy coding !
Become a more social person | https://ourcodeworld.com/articles/read/841/how-to-install-and-use-the-python-face-recognition-and-detection-library-in-ubuntu-16-04 | CC-MAIN-2021-39 | refinedweb | 1,075 | 53 |
Object Oriented Programming in Swift
Learn how object oriented programming works in Swift by breaking things down into objects that can be inherited and composed from.
Object oriented programming is a fundamental programming paradigm that you must master if you are serious about learning Swift. That’s because object oriented programming is at the heart of most frameworks you’ll be working with. Breaking a problem down into objects that send messages to one another might seem strange at first, but it’s a proven approach for simplifying complex systems, which dates back to the 1950s.
Objects can be used to model almost anything — coordinates on a map, touches on a screen, even fluctuating interest rates in a bank account. When you’re just starting out, it’s useful to practice modeling physical things in the real world before you extend this to more abstract concepts.
In this tutorial, you’ll use object oriented programming to create your own band of musical instruments. You’ll also learn many important concepts along the way including:
- Encapsulation
- Inheritance
- Overriding versus Overloading
- Types versus Instances
- Composition
- Polymorphism
- Access Control
That’s a lot, so let’s get started! :]
Getting Started
Fire up Xcode and go to File\New\Playground…. Type Instruments for Name, select iOS for Platform and click Next. Choose where to save your playground and click Create. Delete everything from it in order to start from scratch.
Designing things in an object-oriented manner usually begins with a general concept extending to more specific types. You want to create musical instruments, so it makes perfect sense to begin with an instrument type and then define concrete (not literally!) instruments such as pianos and guitars from it. Think of the whole thing as a family tree of instruments where everything flows from general to specific and top to bottom like this:
The relationship between a child type and its parent type is an is-a relationship. For example, “Guitar is-a Instrument.” Now that you have a visual understanding of the objects you are dealing with, it’s time to start implementing.
Properties
Add the following block of code at the top of the playground:
// 1 class Instrument { // 2 let brand: String // 3 init(brand: String) { //4 self.brand = brand } }
There’s quite a lot going on here, so let’s break it down:
- You create the
Instrumentbase class with the
classkeyword. This is the root class of the instruments hierarchy. It defines a blueprint which forms the basis of any kind of instrument. Because it’s a type, the name
Instrumentis capitalized. It doesn’t have to be capitalized, however this is the convention in Swift.
- You declare the instrument’s stored properties (data) that all instruments have. In this case, it’s just the brand, which you represent as a
String.
- You create an initializer for the class with the
initkeyword. Its purpose is to construct new instruments by initializing all stored properties.
- You set the instrument’s
brandstored property to what was passed in as a parameter. Since the property and the parameter have the same name, you use the
selfkeyword to distinguish between them.
You’ve implemented a
class for instruments containing a
brand property, but you haven’t given it any behavior yet. Time to add some behavior in the form of methods to the mix.
Methods
You can tune and play an instrument regardless of its particular type. Add the following code inside the
Instrument class right after the initializer:
func tune() -> String { fatalError("Implement this method for \(brand)") }
The
tune() method is a placeholder function that crashes at runtime if you call it. Classes with methods like this are said to be abstract because they are not intended for direct use. Instead, you must define a subclass that overrides the method to do something sensible instead of only calling
fatalError(). More on overriding later.
Functions defined inside a
class are called methods because they have access to properties, such as
brand in the case of
Instrument. Organizing properties and related operations in a
class is a powerful tool for taming complexity. It even has a fancy name: encapsulation. Class types are said to encapsulate data (e.g. stored properties) and behavior (e.g. methods).
Next, add the following code before your
Instrument class:
class Music { let notes: [String] init(notes: [String]) { self.notes = notes } func prepared() -> String { return notes.joined(separator: " ") } }
This is a
Music class that encapsulates an array of notes and allows you to flatten it into a string with the
prepared() method.
Add the following method to the
Instrument class right after the
tune() method:
func play(_ music: Music) -> String { return music.prepared() }
The
play(_:) method returns a
String to be played. You might wonder why you would bother creating a special
Music type, instead of just passing along a
String array of notes. This provides several advantages: Creating
Music helps build a vocabulary, enables the compiler to check your work, and creates a place for future expansion.
Next, add the following method to the
Instrument class right after
play(_:):
func perform(_ music: Music) { print(tune()) print(play(music)) }
The
perform(_:) method first tunes the instrument and then plays the music given in one go. You’ve composed two of your methods together to work in perfect symphony. (Puns very much intended! :])
That’s it as far as the
Instrument class implementation goes. Time to add some specific instruments now.
Inheritance
Add the following class declaration at the bottom of the playground, right after the
Instrument class implementation:
// 1 class Piano: Instrument { let hasPedals: Bool // 2 static let whiteKeys = 52 static let blackKeys = 36 // 3 init(brand: String, hasPedals: Bool = false) { self.hasPedals = hasPedals // 4 super.init(brand: brand) } // 5 override func tune() -> String { return "Piano standard tuning for \(brand)." } override func play(_ music: Music) -> String { // 6 let preparedNotes = super.play(music) return "Piano playing \(preparedNotes)" } }
Here’s what’s going on, step by step:
- You create the
Pianoclass as a subclass of the
Instrumentparent class. All the stored properties and methods are automatically inherited by the
Pianochild class and available for use.
- All pianos have exactly the same number of white and black keys regardless of their brand. The associated values of their corresponding properties don’t change dynamically, so you mark the properties as
staticin order to reflect this.
- The initializer provides a default value for its
hasPedalsparameter which allows you to leave it off if you want.
- You use the
superkeyword to call the parent class initializer after setting the child class stored property
hasPedals. The super class initializer takes care of initializing inherited properties — in this case,
brand.
- You override the inherited
tune()method’s implementation with the
overridekeyword. This provides an implementation of
tune()that doesn’t call
fatalError(), but rather does something specific to
Piano.
- You override the inherited
play(_:)method. And inside this method, you use the
superkeyword this time to call the
Instrumentparent method in order to get the music’s prepared notes and then play on the piano.
Because
Piano derives from
Instrument, users of your code already know a lot about it: It has a brand, it can be tuned, played, and can even be performed.
The piano tunes and plays accordingly, but you can play it in different ways. Therefore, it’s time to add pedals to the mix.
Method Overloading
Add the following method to the
Piano class right after the overridden
play(_:) method:
func play(_ music: Music, usingPedals: Bool) -> String { let preparedNotes = super.play(music) if hasPedals && usingPedals { return "Play piano notes \(preparedNotes) with pedals." } else { return "Play piano notes \(preparedNotes) without pedals." } }
This overloads the
play(_:) method to use pedals if
usingPedals is true and the piano actually has pedals to use. It does not use the
override keyword because it has a different parameter list. Swift uses the parameter list (aka signature) to determine which to use. You need to be careful with overloaded methods though because they have the potential to cause confusion. For example, the
perform(_:) method always calls the
play(_:) one, and will never call your specialized
play(_:usingPedals:) one.
Replace
play(_:), in
Piano, with an implementation that calls your new pedal using version:
override func play(_ music: Music) -> String { return play(music, usingPedals: hasPedals) }
That’s it for the
Piano class implementation. Time to create an actual piano instance, tune it and play some really cool music on it. :]
Instances
Add the following block of code at the end of the playground right after the
Piano class declaration:
// 1 let piano = Piano(brand: "Yamaha", hasPedals: true) piano.tune() // 2 let music = Music(notes: ["C", "G", "F"]) piano.play(music, usingPedals: false) // 3 piano.play(music) // 4 Piano.whiteKeys Piano.blackKeys
This is what’s going on here, step by step:
- You create a
pianoas an instance of the
Pianoclass and
tuneit. Note that while types (classes) are always capitalized, instances are always all lowercase. Again, that’s convention.
- You declare a
musicinstance of the
Musicclass any play it on the piano with your special overload that lets you play the song without using the pedals.
- You call the
Pianoclass version of
play(_:)that always uses the pedals if it can.
- The key counts are
staticconstant values inside the
Pianoclass, so you don’t need a specific instance to call them — you just use the class name prefix instead.
Now that you’ve got a taste of piano music, you can add some guitar solos to the mix.
Intermediate Abstract Base Classes
Add the
Guitar class implementation at the end of the playground:
class Guitar: Instrument { let stringGauge: String init(brand: String, stringGauge: String = "medium") { self.stringGauge = stringGauge super.init(brand: brand) } }
This creates a new class
Guitar that adds the idea of string gauge as a text
String to the
Instrument base class. Like
Instrument,
Guitar is considered an abstract type whose
tune() and
play(_:) methods need to be overridden in a subclass. This is why it is sometimes called a intermediate abstract base class.
That’s it for the
Guitar class – you can add some really cool guitars now! Let’s do it! :]
Concrete Guitars
The first type of guitar you are going to create is an acoustic. Add the
AcousticGuitar class to the end of the playground right after your
Guitar class:)." } }
All acoustic guitars have 6 strings and 20 frets, so you model the corresponding properties as static because they relate to all acoustic guitars. And they are constants since their values never change over time. The class doesn’t add any new stored properties of its own, so you don’t need to create an initializer, as it automatically inherits the initializer from its parent class,
Guitar. Time to test out the guitar with a challenge!
[spoiler title=”Acoustic Guitar”]Add the following test code at the bottom of the playground right after the
AcousticGuitar class declaration:
let acousticGuitar = AcousticGuitar(brand: "Roland", stringGauge: "light") acousticGuitar.tune() acousticGuitar.play(music)
[/spoiler]
It’s time to make some noise and play some loud music. You will need an amplifier! :]
Private
Acoustic guitars are great, but amplified ones are even cooler. Add the
Amplifier class at the bottom of the playground to get the party started:
// 1 class Amplifier { // 2 private var _volume: Int // 3 private(set) var isOn: Bool init() { isOn = false _volume = 0 } // 4 func plugIn() { isOn = true } func unplug() { isOn = false } // 5 var volume: Int { // 6 get { return isOn ? _volume : 0 } // 7 set { _volume = min(max(newValue, 0), 10) } } }
There’s quite a bit going on here, so lets break it down:
- You define the
Amplifierclass. This is also a root class, just like
Instrument.
- The stored property
_volumeis marked
privateso that it can only be accessed inside of the
Amplifierclass and is hidden away from outside users. The underscore at the beginning of the name emphasizes that it is a private implementation detail. Once again, this is merely a convention. But it’s good to follow conventions. :]
- The stored property
isOncan be read by outside users but not written to. This is done with
private(set).
plugIn()and
unplug()affect the state of
isOn.
- The computed property named
volumewraps the private stored property
_volume.
- The getter drops the volume to 0 if it’s not plugged in.
- The volume will always be clamped to a certain value between 0 and 10 inside the setter. No setting the amp to 11.
The access control keyword
private is extremely useful for hiding away complexity and protecting your class from invalid modifications. The fancy name for this is “protecting the invariant”. Invariance refers to truth that should always be preserved by an operation.
Composition
Now that you have a handy amplifier component, it’s time to use it in an electric guitar. Add the
ElectricGuitar class implementation at the end of the playground right after the
Amplifier class declaration:
// 1 class ElectricGuitar: Guitar { // 2 let amplifier: Amplifier // 3 init(brand: String, stringGauge: String = "light", amplifier: Amplifier) { self.amplifier = amplifier super.init(brand: brand, stringGauge: stringGauge) } // 4 override func tune() -> String { amplifier.plugIn() amplifier.volume = 5 return "Tune \(brand) electric with E A D G B E" } // 5 override func play(_ music: Music) -> String { let preparedNotes = super.play(music) return "Play solo \(preparedNotes) at volume \(amplifier.volume)." } }
Taking this step by step:
ElectricGuitaris a concrete type that derives from the abstract, intermediate base class
Guitar.
- An electric guitar contains an amplifier. This is a has-a relationship and not an is-a relationship as with inheritance.
- A custom initializer that initializes all of the stored properties and then calls the super class.
- A reasonable
tune()method.
- A reasonable
play()method.
In a similar vain, add the
BassGuitar class declaration at the bottom of the playground right after the
ElectricGuitar class implementation:
class BassGuitar: Guitar { let amplifier: Amplifier init(brand: String, stringGauge: String = "heavy", amplifier: Amplifier) { self.amplifier = amplifier super.init(brand: brand, stringGauge: stringGauge) } override func tune() -> String { amplifier.plugIn() return "Tune \(brand) bass with E A D G" } override func play(_ music: Music) -> String { let preparedNotes = super.play(music) return "Play bass line \(preparedNotes) at volume \(amplifier.volume)." } }
This creates a bass guitar which also utilizes a (has-a) amplifier. Class containment in action. Time for another challenge!
[spoiler title=”Electric Guitar”]Add the following test code at the bottom of the playground right after the
BassGuitar class declaration:
let amplifier = Amplifier() let electricGuitar = ElectricGuitar(brand: "Gibson", stringGauge: "medium", amplifier: amplifier) electricGuitar.tune() let bassGuitar = BassGuitar(brand: "Fender", stringGauge: "heavy", amplifier: amplifier) bassGuitar.tune() // Notice that because of class reference semantics, the amplifier is a shared // resource between these two guitars. bassGuitar.amplifier.volume electricGuitar.amplifier.volume bassGuitar.amplifier.unplug() bassGuitar.amplifier.volume electricGuitar.amplifier.volume bassGuitar.amplifier.plugIn() bassGuitar.amplifier.volume electricGuitar.amplifier.volume
[/spoiler]
Polymorphism
One of the great strengths of object oriented programming is the ability to use different objects through the same interface while each behaves in its own unique way. This is polymorphism meaning “many forms”. Add the
Band class implementation at the end of the playground:
class Band { let instruments: [Instrument] init(instruments: [Instrument]) { self.instruments = instruments } func perform(_ music: Music) { for instrument in instruments { instrument.perform(music) } } }
The
Band class has an
instruments array stored property which you set in the initializer. The band performs live on stage by going through the
instruments array in a
for in loop and calling the
perform(_:) method for each instrument in the array.
Now go ahead and prepare your first rock concert. Add the following block of code at the bottom of the playground right after the
Band class implementation:
let instruments = [piano, acousticGuitar, electricGuitar, bassGuitar] let band = Band(instruments: instruments) band.perform(music)
You first define an
instruments array from the
Instrument class instances you’ve previously created. Then you declare the
band object and configure its
instruments property with the
Band initializer. Finally you use the
band instance’s
perform(_:) method to make the band perform live music (print results of tuning and playing).
Notice that although the
instruments array’s type is
[Instrument], each instrument performs accordingly depending on its class type. This is how polymorphism works in practice: you now perform in live gigs like a pro! :]
Access Control
You have already seen
private in action as a way to hide complexity and protect your classes from inadvertently getting into invalid states (i.e. breaking the invariant). Swift goes further and provides four levels of access control including:
- private: Visible just within the class.
- fileprivate: Visible from anywhere in the same file.
- internal: Visible from anywhere in the same module or app.
- public: Visible anywhere outside the module.
There are additional access control related keywords:
- open: Not only can it be used anywhere outside the module but also can be subclassed or overridden from outside.
- final: Cannot be overridden or subclassed.
If you don’t specify the access of a class, property or method, it defaults to
internal access. Since you typically only have a single module starting out, this lets you ignore access control concerns at the beginning. You only really need to start worrying about it when your app gets bigger and more complex and you need to think about hiding away some of that complexity.
Making a Framework
Suppose you wanted to make your own music and instrument framework. You can simulate this by adding definitions to the compiled sources of your playground. First, delete the definitions for
Music and
Instrument from the playground. This will cause lots of errors that you will now fix.
Make sure the Project Navigator is visible in Xcode by going to View\Navigators\Show Project Navigator. Then right-click on the Sources folder and select New File from the menu. Rename the file MusicKit.swift and delete everything inside it. Replace the contents with:
// 1 final public class Music { // 2 public let notes: [String] public init(notes: [String]) { self.notes = notes } public func prepared() -> String { return notes.joined(separator: " ") } } // 3 open class Instrument { public let brand: String public init(brand: String) { self.brand = brand } // 4 open func tune() -> String { fatalError("Implement this method for \(brand)") } open func play(_ music: Music) -> String { return music.prepared() } // 5 final public func perform(_ music: Music) { print(tune()) print(play(music)) } }
Save the file and switch back to the main page of your playground. This will continue to work as before. Here are some notes for what you’ve done here:
final publicmeans that is going to be visible by all outsiders but you cannot subclass it.
- Each stored property, initializer, method must be marked
publicif you want to see it from an outside source.
- The class
Instrumentis marked
openbecause subclassing is allowed.
- Methods can also be marked
- Methods can be marked
finalso no one can override them. This can be a useful guarantee.
Where to Go From Here?
You can download the final playground for this tutorial which contains the tutorial’s sample code.
You can read more about object oriented programming in our Swift Apprentice book or challenge yourself even more with our Design Patterns by Tutorials book.
I hope you enjoyed this tutorial and if you have any questions or comments, please join the forum discussion below! | https://www.raywenderlich.com/599-object-oriented-programming-in-swift | CC-MAIN-2021-04 | refinedweb | 3,225 | 55.84 |
I have been building AJAX applications for a while now and absolutely love AJAX and the improvements it can offer in user-interface design, making applications easy and fun to use. But AJAX does have limitations and I, like many others, have come to the realization that while AJAX is great for most things, it is not the silver bullet. For data-intensive applications, specifically those that involve dynamic charting with vector graphics and data mining, AJAX falls short.
There are a couple of alternatives out there that fill that niche that AJAX still hasn't successfully filled, and Adobe's Flex 2 framework is definitely one of them. Adobe Flex 2 is a rich Internet application framework based on Adobe Flash that enables you to create applications that are cross-platform and browser independent, as they run inside the Flash VM. Flash has fulfilled the promise that Java applets, for a variety of reasons, never delivered. The Flex programming model is fairly simple: developers write MXML and ActionScript source code, and the source code is then compiled into bytecode by the Flex compiler, resulting in a binary file with the *.swf extension. Developers use MXML to declaratively define the application user interface elements and use ActionScript for client logic and procedural control. MXML provides declarative abstractions for client-tier logic and bindings between the user interface and application data. ActionScript 3.0 is an implementation of ECMAScript, and it provides support for strong typing, interfaces, delegation, namespaces, error handling, and ECMAScript for XML (E4X).
Adobe gives away the Flex 2 SDK for free, so anyone can create Flex 2 applications and compile them into SWF bytecode files. Adobe sells Flex Builder, which is the Eclipse-based IDE for Flex development, and Flex Data Services, which is a J2EE component deployed inside a container. It provides adapters to connect to EJBs, JMS queues, backend data stores, etc.
One of the barriers to wider Flex adoption is the proprietary nature of the technology. Flex is a closed technology and Adobe controls every aspect of it. There's nothing wrong with that, but I, and I am guessing a lot of other people, prefer open architectures, open systems and open platforms for application development, to prevent vendor lock-in. Adobe has taken some positive steps by releasing the Flex-Ajax Bridge (FABridge) library, which automatically exposes the public data and methods within a Flex application to the JavaScript engine and vice versa. This enables developers to easily integrate Flex applications with existing sites as well as to deliver new applications that combine Ajax with applications created in Flex. A great example of the Flex-AJAX interaction is the charting application on Google Finance. It was interesting to see that Yahoo also decided to use Flash for charting when they deployed the new version of the Yahoo Finance portal.
Open sourcing Flex would certainly lead to wider adoption of Flex as an application development framework. So why doesn't Adobe do it? It seems to fit the Adobe business model: take a look at Acrobat, Flash, or really any of the other Adobe products, and you'll see that they give away the client for free and monetize the creation part of the process. Take PDF and Acrobat: Adobe gives away the reader for free but makes money by selling Adobe Distiller. Why couldn't that model work for Flex? Open-source Flex and continue making money on Flex Builder, Flex Data Services, training, consulting, support and custom components. I'm sure there is already a fairly robust marketplace for Flex components, but Adobe can take that to the next level. I know Adobe has spent a significant amount of time and money, in terms of engineering effort, to create Flex, but the proprietary nature of it will always be a limiting factor and never let Flex be the premier platform for RIAs. If Adobe waits too long, the browsers will get better and fully support SVG, CSS3 and JavaScript JIT compilers, and the advantage Flex offers will narrow. The next generation of AJAX frameworks is also just around the corner and will compete with Flex. OpenLaszlo is another dark horse in this race that may eat Flex's lunch. OpenLaszlo is everything I want Flex to be: OpenLaszlo programs are written in XML and JavaScript and transparently compiled to Flash. The OpenLaszlo APIs provide animation, layout, data binding, server communication, and declarative UI. And what sets it apart from Flex is that OpenLaszlo is an open source platform. Adobe – Are you listening?
{ 2 trackbacks }
{ 7 comments }
But I'm really trying to figure this out:
tx, jd/adobe
John Dowdell: Are you thinking more about language constructs here, or frameworks, or the sourcecode for a particular MXML->SWF compiler, or something else? What is it that you’d personally hope to gain, should others pursue such a path?
Hi John, and good morning. Great questions. Maybe a logo program? Implementations that pass the 'test' would be certified Flex compliant?
Allowing external 'entities' …'t …'t suddenly turn around and make this proprietary.
Hi!
I have a weird issue. I am creating a React Native app with Expo and Firebase. I had added support for BackHandler, and it worked very well at the time. When I imported firebase into my project, BackHandler stopped working (it closes the app).
I have detected that it’s caused by one import line:
import * as firebase from "firebase";
(I added only the import line to my App.js file, no functions inside the component)
It's so weird, isn't it? Do you have any ideas what I can do?
expo --> 28.0.0
firebase --> 5.2.0
react --> 16.3.1
react-native --> 0.56
----- EDITED -----
I installed firebase 4.10.0, and now it works. | https://forums.expo.io/t/issues-between-react-native-app-and-firebase/11539/1 | CC-MAIN-2018-39 | refinedweb | 113 | 79.97 |
I loved MS Paint. I thought I was going to be an online artist using it. But then I quickly found out its limitations (and my own). The spray can only let you have one color. I wanted 3 or more all mixed together for better blending. So that is the focus of my code kata today.
I picked out three colors in the same spectrum that blended well. Then every time I click and hole it starts adding dots in random points around my mouse. The slower I drag my mouse the denser the dots like a spray can.
Try out making up your own code kata, this one or send me your ideas. I'll try them out as well!
Aaaaaannnnnd some source code. I made the paint dots 4 pixels wide and my spray brush 32 coverage 32 pixels. Copy and paste this into your own python file.
# Have a paintbrush you can control with your mouse that sprays like the old
# MS Paint spray paint brush. But instead of one color, use three different
# colors all at once. That way there is no dominant color. This will help
# blend colors together.
from tkinter import *
import random
class View:
def __init__(self, master):
self.root = master
self.frame = Frame(self.root, width = 800, height = 800)
self.canvas = Canvas(self.frame, width = 800, height = 800)
self.colors = [ '#113d84', '#68768e', '#609bff' ] # blues
self.brush_size = 16
self.dot = 2
self.num = len(self.colors) * 3
self.draw = False
self.canvas.bind('
', self.mouse_button_1_click)
self.canvas.bind('
', self.mouse_button_1_release)
self.canvas.bind('
', self.mouse_button_1_motion)
self.root.bind('
', self.key_pressed)
self.canvas.pack()
self.frame.pack()
def add_points(self, x, y):
for i in range(0, self.num):
x1 = random.randint(-self.brush_size, self.brush_size) + x
y1 = random.randint(-self.brush_size, self.brush_size) + y
self.canvas.create_oval(x1-self.dot, y1-self.dot, x1+self.dot, y1+self.dot, outline = self.colors[i%3], fill = self.colors[i%3], tags = 'dot')
def mouse_button_1_click(self, event):
self.draw=True
self.add_points(event.x, event.y)
def mouse_button_1_motion(self, event):
if self.draw == True:
self.add_points(event.x, event.y)
def mouse_button_1_release(self, event):
self.draw = False
def key_pressed(self, event):
if key == 'R':
self.canvas.delete('dot')
master = Tk()
view = View(master)
master.mainloop() | http://www.glaciergeosciences.com/2018/09/sunday-funday-code-kata-ms-paint-program.html | CC-MAIN-2019-43 | refinedweb | 381 | 63.66 |
Using kivy to realize sound playback seems simple, but there are pits. It is also a summary of digging pits after reading a large number of Internet data.
You can use his SoundLoader component to play audio. You can play sound in two lines. If you just run on the PC, just copy the following code
PC playback
from kivy.network.urlrequest import UrlRequest path="./sound/apple.wav" # It can be a local file or a web address , or mp3 b=SoundLoader.load(filename=path) # load file b.play() # Play sound
Mobile terminal plays local files
However, if you want to play local sound on Android, you need to pay attention to the following problems:
1. At present, Kivy only supports wav format
2. When packaging APK, you should pay attention to modifying relevant parameters. Enter buildozer init in the packaging environment to generate the initialization parameter table. You don't know how to package with buildozer. Please see the tutorial of buildozer packaging APK first. Find the following parameters and add wav. In Linux environment, vim can be used to open files for editing
3. The following two ends should be changed. Don't omit. These two paragraphs are not connected
# (list) Source files to include (let empty to include all the files) source.include_exts = py,png,jpg,kv,atlas,wav
# (list) List of inclusions using pattern matching source.include_patterns = assets/*,images/*.png,sound/*.wav
Mobile terminal plays web page audio / server audio
However, if you want to play the web page / server back-end sound on Android, you need to pay attention to the following problems:
Kivy's player does not support streaming media!!!!, So you need to download it and play it again
Here I created a download button and a play button. Click play to download, wait a while, and then click the play button to play the sound. The effect is as follows
python main program code for reference
Here you will see Import os and from android.permissions import. Note that Android does not need to be installed. It comes with Buildozer and P4a and can only take effect on the Android side. Therefore, it is necessary to judge the system environment first
In addition, the spec file needs to be modified
The main program code of main.py is as follows:,
from kivy.lang import Builder from kivy.uix.screenmanager import ScreenManager, Screen from kivy.app import App from kivy.network.urlrequest import UrlRequest from kivy.core.audio import SoundLoader from kivy.utils import platform #import requests import os if platform == "android": from android.permissions import request_permissions, Permission request_permissions([Permission.READ_EXTERNAL_STORAGE, Permission.WRITE_EXTERNAL_STORAGE]) url="" # Build window button, a download button, a play button kv = Builder.load_string(""" #:import utils kivy.utils <test>: name:"test" BoxLayout: Button: text: "download" on_release:root.buttonClicked() Button: text: "apple" on_release:root.sound() """) class test(Screen): def buttonClicked(self): a= UrlRequest(url, file_path='apple.wav') print("done download") def sound(self): b=SoundLoader.load(filename='./apple.wav') b.play() print("done sound") sm = ScreenManager() # Create window manager screens = [test(name="test")] for screen in screens: # Generate containers for each window sm.add_widget(screen) class ScreenApp(App): def build(self): return sm ScreenApp().run()
The permission needs to be modified in the spec file to obtain network and read-write permissions
# (list) Permissions android.permissions = INTERNET,WRITE_EXTERNAL_STORAGE,READ_EXTERNAL_STORAGE | https://programmer.group/python-kivy-app-development-and-playing-sound-music.html | CC-MAIN-2022-40 | refinedweb | 557 | 51.55 |
ColdFusion 10 - Looping Over Queries Using A For-In Loop In CFScript
Yesterday, I campaigned against using ColdFusion 10's dynamic "query" attribute in the CFLoop tag. That said, I am really excited that CFScript now supports FOR-IN looping for queries. This creates complete uniformity in CFScript for looping over queries, arrays, and structs. And, it definitely makes looping over queries in CFScript much easier and far more intuitive than the index-based looping that was previously required.
NOTE: At the time of this writing, ColdFusion 10 was in public beta.
To quickly demonstrate this new features, I am going to build up a new query and then loop over it using a FOR-IN loop. In the following code, you'll notice that you can still access query meta-data (recordCount and currentRow) using the query object:
- " ]
- ]
- );
- // The query object now supports FOR-IN iteration in CFScript. We
- // can iterate over the query, row by row.
- for (friend in friends){
- // When iterating over a query in CFScript, you can use the
- // main query object to get meta-data; then, use the row
- // object to get row-specific properties.
- writeOutput(
- "[ #friends.currentRow# of #friends.recordCount# ] " &
- friend.name &
- "<br />"
- );
- }
- </cfscript>
As you can see, each row of the query is presented as a struct in which the column names have become struct keys. This makes looping and outputting query data very easy! When we run the above code, we get the following output:
[ 1 of 3 ] Tricia
[ 2 of 3 ] Sarah
[ 3 of 3 ] Joanna
With this level of ease, I can't see much of a use for ever converting a query object to another kind of object (except maybe for with API return data). But, with the way query-iteration now works, you can see that converting a query object to an array-of-structs can be incredibly straightforward - the query iteration is already doing have the work for you:
- " ]
- ]
- );
- // I convert the given query to an array of structs.
- function queryToArray( query ){
- // Define our array to hold the row data for the query.
- var queryAsArray = [];
- // Iterate over the query using a FOR-IN construct. This will
- // automatically convert each row to a struct. At that point,
- // all we have to do is collect it.
- for (var row in query){
- // Add the row-as-struct to our array.
- arrayAppend( queryAsArray, row );
- }
- // Return the query collection as an array.
- return( queryAsArray );
- }
- // Iterate over the query as an array. Since we are using the
- // arrayEach() method, we lose track of the current row and
- // the recordCount.
- arrayEach(
- queryToArray( friends ),
- function( friend ){
- // Here, each "friend" is the struct produced by the
- // FOR-IN iteration above.
- writeOutput( friend.name & "<br />" );
- }
- );
- </cfscript>
Here, we've created a function - queryToArray() - that simply collects the structs produced by each row-iteration and appends them to an array. Then, we iterate over the array using ColdFusion 10's new arrayEach() method and get the following output:
Tricia
Sarah
Joanna
As I said before, ColdFusion 10 is really the version of ColdFusion that makes CFScript look really appealing to me. Up until now, I've been a strong proponent of ColdFusion tags for everything; but, CFScript is really maturing to the point where I can imagine using it on a regular basis.
Looking For A New Job?
- 100% Remote - Sr ColdFusion Developer at Short's Travel Management
- ColdFusion Developer Opportunity at Cavulus
- Senior JavaScript/Angular Engineer at Kelaca
- IS Sr. Systems Analyst - Web Development at Nationwide Children's Hospital
Reader Comments
Hey Ben...you've got a typo in the first para. "perviously required".
We agree on this one :)
Great addition.
I love using the for-in clause now in cfscript! Except when looping over component metadata.
Silly things claim to be arrays when they aren't really, and thus break for-in, at least in cf9, haven't tried 10.
@Andy,
Ah, good catch! Should be fixed upon next cache-clear.
@Sam,
Ha ha, high-five :)
@Jim,
Yeah, I'm definitely loving the uniformity now of the script-based looping. That's really weird about the metadata, though. Maybe it's some weird Java object that is confusing the engine? Very strange.
@Ben
Yeah, when using getComponentMetaData() or getMetaData(), it treats parameters as an an array during a writeDump() (possibly functions as well, i forget now), and you can iterate over them with a normal for loop. But when you try to use for-in, it throws something about trying to reference a complex object as a scalar or something and lists it as a java object.
As an aside since it's related, I'm using railo as a place holder until cf10 comes out. Been going through and converting cfcs to cfscript only versions, using your example for metadata in the /** **/ blocks.
Railo doesn't support metadata (or the accessors flag, or a bunch of other things) in those blocks and forces me list them in the method declaration. According to bug reports, Railo doesn't seem inclined to support this functionality due to semi-justifiable language politics, but that argument falls flat when you look at how C# does webmethods.
Was having a fun time with Railo too until I hit this, now i'm dumping it into waste bin of "meh" could-have-beens. Guess thats what I get for experimenting.
CF10 needs to hurry up and drop so I can spend budget monies on it!
@Ben
Might want to double check your code for sending out comment posts, getting a profile pic that is clearly not your font of manliness on your posts.
@Jim,
"But when you try to use for-in, it throws something about trying to reference a complex object as a scalar or something and lists it as a java object"
99% sure that is fixed in CF10.
Its an annoying bug for sure.
@Jim,
Sounds really strange. Out of curiosity, what are you using the metadata for? Sounds like you're really digging into it.
@Sam,
That's good to hear!
@Ben
I'm updating my ExtDirect stack that supports persistent namespaces for invoking components to the latest ExtDirect spec that supports named arguments.
Thus, as I'm adding components to my registry, instead of just counting the number of required arguments, I'm now tracking named optional and required arguments so that when named arguments are used by ExtDirect, I can ensure the argument collection for requireds is fully populated or not, and handle accordingly.
Once I get that done, I'm adding support to the stack to prevent possible CSRF attacks from the AJAX side of ExtJS.
Nice.
Would be glad to see the "item" attribute added to CFLOOP QUERY in CF10. Than you have both: the for-in-loop as in script and you can use a dynamic QUERY attribute without scoping trouble ;-)
SSD Web Solutions provides domain
registration, web hosting and bulk sms services to small businesses. We
provide web hosting on both platform windows hosting and linux hosting. We
do also provide bulk voice calls for promotional calls to India. To find out more information, go visit:
Wow I Appreciate you, and thanks for sharing this valuble information and it is very useful for me and all. | http://www.bennadel.com/blog/2367-coldfusion-10-looping-over-queries-using-a-for-in-loop-in-cfscript.htm | CC-MAIN-2015-18 | refinedweb | 1,205 | 61.46 |
15 April 2009 12:21 [Source: ICIS news]
LONDON (ICIS news)--BASF is preparing short-time working for up to 3,000 employees at its Ludwigshafen, Germany, production hub as a response to weak demand, the chemicals major said on Wednesday.
“Capacity utilisation rates at many plants have remained very low since the beginning of the year, and there are no signs of a sustained improvement in orders from key customer industries in the foreseeable future,” said the head of human resources at the site, Harald Schwager.
The situation was being assessed unit-by-unit to determine which plants would introduce shorter working hours. The short-time working would take effect on 1 June.
“At the moment, around 600 employees in ?xml:namespace>
BASF said it would look at further measures if the situation does not improve in the second half of the year, including the extension of short-time working beyond production units.
The company said it would announce how many units and employees would be affected by short-time working by the middle of May. It was expecting between 2,000 and 3,000 staff to be affected out of the 32,800 working at the
“Employees will receive a net wage of approximately 90% as a result of short-time work compensation provided by the German government as well as a payment from the company under the terms of the collective wage agreement for the chemical industry,” BASF said in a statement.
“Rapid re-introduction of normal working hours is possible at any time, should demand for BASF products pick up,” it added.
At the end of 2008 BASF, the world’s largest chemicals producer, said it would shut down 80 production units worldwide on a temporary basis and run more than 100 at reduced rates, affecting approximately 20,000 employees worldwide.
The cutbacks are a response to a huge decline in demand for chemicals amid the global economic | http://www.icis.com/Articles/2009/04/15/9208091/BASF-to-cut-hours-for-up-to-3000-Ludwigshafen-staff-in.html | CC-MAIN-2015-22 | refinedweb | 322 | 51.82 |
by Jeffrey Kantor (jeff at nd.edu). The latest version of this notebook is available at.
This example provides an introduction to the use of python for the simulation of a simple process modeled by a pair of ordinary differential equations. See SEMD textbook example 2.1 for more details on the process.
Unlike Matlab, in Python it is always necessary to import the functions and libraries that you intend to use. In this case we import the complete
pylab library, and the function
odeint for integrating systems of differential equations from the
scipy library. The command
%matplotlib inline causes graphic commands to produce results directly within the notebook output cells.
%matplotlib inline import numpy as np import matplotlib.pyplot as plt from scipy.integrate import odeint
rho = 900.0 # density, kg/m**3 w1 = 500.0 # stream 1, kg/min w2 = 200.0 # stream 2, kg/min w = 650.0 # set outflow equal to sum of inflows x1 = 0.4 # composition stream 1, mass fraction x2 = 0.75 # composition stream 2, mass fraction
def func(y,t): V,x = y dVdt = (w1 + w2 - w)/rho dxdt = (w1*(x1-x)+w2*(x2-x))/(rho*V) return [dVdt, dxdt]
V = 2.0 # initial volume, cubic meters x = 0.0 # initial composition, mass fraction t = np.linspace(0,10.0) y = odeint(func,[V,x],t)
plt.plot(t,y) plt.xlabel('Time [min]') plt.ylabel('Volume, Composition') plt.legend(['Volume','Composition']) plt.ylim(0,3) plt.grid() #plt.savefig('BlendingTankStartUp.png')
The blending tank is a system with two state variables (volume and composition). Suppose a mechanism is put in place to force the inflow to equal the outflow, that is
$$w = w_1 + w_2$$
The mechanism could involve the installation of an overflow weir, level controller, or some other device to force a balance between the outflow and total inflows. In this case,
$$\frac{dV}{dt} = 0$$
which means volume is at steady state.
In that case there is just one remaining differential equation
$$\frac{dx}{dt} = \frac{1}{\rho V}( w_1(x_1 - x) + w_1(x_2 - x)) = 0$$
Solving for the steady value of $x$,
$$\bar{x} = \frac{w_1x_1 + w_2x_2}{w_1 + w_2}$$
w1 = 500.0 # stream 1, kg/min w2 = 200.0 # stream 2, kg/min x1 = 0.4 # composition stream 1, mass fraction x2 = 0.75 # composition stream 2, mass fraction x = (w1*x1 + w2*x2)/(w1 + w2) print('Steady State Composition =', x)
Steady State Composition = 0.5 | http://nbviewer.jupyter.org/github/jckantor/CBE30338/blob/master/notebooks/Blending%20Tank%20Simulation.ipynb | CC-MAIN-2018-26 | refinedweb | 408 | 68.67 |
Hi, Attached is a patch to fix and reenable extended precision support for the in-kernel nwfpe floating point emulator on big-endian ARM platforms, and is a first step towards unfscking nwfpe for big-endian. As a reminder: nwfpe uses the FPA floating point format, which is a way of representing IEEE 754 single-precision, double-precision and extended double-precision numbers as arrays of one, two or three 32 bit words. The FPA format uses native endian byte order but big-endian word order, while nwfpe internally uses fully native 'long long' byte order, resulting in a confusing mix of floating point formats: 40 84 d0 00 00 00 00 00 666.0 in IEEE 754 double precision 40 84 d0 00 00 00 00 00 666.0 in big-endian FPA byte order 00 d0 84 40 00 00 00 00 666.0 in little-endian FPA byte order 40 84 d0 00 00 00 00 00 666.0 in big-endian nwfpe internal byte order 00 00 00 00 00 d0 84 40 666.0 in little-endian nwfpe internal byte order Note that to convert from FPA to nwfpe byte order and vice versa: - on little-endian ARM, you have to swap the two halves of the double. - on big-endian ARM, you don't have to do anything as the formats are the same. There are a couple more (ugly) issues remaining: 1) The extended precision format that nwfpe uses differs from the format that is described in the FPA spec. In the spec, the sign bit is in bit 31 of the first word, while in nwfpe, the sign bit is in bit 15 of the first word. What does actual FPA hardware do here? Maybe the spec is just wrong? 2) Ralph Siemsen told me a while ago that, contrary to what the FPA spec says, there are programs that depend on the exact format that is used by the LFM/SFM instructions. 3) The GETFPREGS ptrace call dumps the internal nwfpe state buffer (eeew) to userspace (i.e. in 'fully native long long byte order'), instead of the FPA format or something else more sensible. 
This causes many userland applications that have a need to mess with this data (for example, gdb) to just blindly swap the upper and lower words of the data returned from the kernel, assuming that that is the right way to convert from nwfpe byte order to FPA byte order. As per the above, this method of converting doubles is broken on big-endian. [Also, there is another floating point emulator (fastfpe) in the kernel, which uses a different internal state buffer format, so apart from the fact that our GETFPREGS buffer format is nonsensical, it's not even consistently so.] [Another bug: ptrace_getfpregs in arch/arm/kernel/ptrace.c copies a struct fp_state to userland, which is 35 words big if iWMMXt is not compiled in and 39 words big if it is, using a sizeof(struct user_fp), which is 29 words big.] 4) The ARM ELF core dump format uses yet another definition of the floating point word format using bitfields (struct user_fp), which isn't compatible with any of the other formats, but when a core dump is made, simply copies the same nwfpe local state buffer into the core file (arch/arm/kernel/process.s:dump_fpu). What to do: 1) Someone with actual FPA hardware should test this out, or someone more knowledgable about the FPA spec than me should throw in his 2ct. 2) Anyone have more info on this? (Or maybe he just meant GETFPREGS?) 3) Tricky. We can't really change the little-endian ARM behaviour anymore since this is a user-visible ABI. The problem with deciding how to make GETFPREGS behave on big-endian is that there isn't really any kind of definition of the structure format. There are two ways we can define this structure, which are each compatible with how things are currently done on little-endian: 1) Define GETFPREGS as storing doubles in 'reverse byte order' (i.e. from LSB to MSB.) 2) Define GETFPREGS as storing doubles in 'native byte order but little-endian word order' (i.e. 
little-endian byte order and little-endian word order on little-endian systems, and big-endian byte order and little-endian word order on big-endian systems.) Option 1) would make the big-endian format byte-wise compatible with little-endian, but would require all userspace applications to check if defined(__ARMEB__) and to conditionally byteswap each word (and swap the two words) to convert back to native (FPA) word order if that is the case. Because the float format in core files isn't byte-wise identical anyway, I don't see much value in this option. For applications that swap the two sub-words to convert from kernel ('nwfpe') order to native order, option 2) would have the advantage of not requiring any additional userspace modifications to make those apps work on big-endian. 4) Haven't looked into this too closely yet. Any ideas? cheers, Lennert diff -urN linux-2.6.14.commit/arch/arm/Kconfig linux-2.6.14.snap/arch/arm/Kconfig --- linux-2.6.14.commit/arch/arm/Kconfig 2005-11-06 17:00:50.000000000 +0100 +++ linux-2.6.14.snap/arch/arm/Kconfig 2005-11-06 17:00:31.000000000 +0100 @@ -568,7 +568,7 @@ config FPE_NWFPE_XP bool "Support extended precision" - depends on FPE_NWFPE && !CPU_BIG_ENDIAN + depends on FPE_NWFPE help Say Y to include 80-bit support in the kernel floating-point emulator. Otherwise, only 32 and 64-bit support is compiled in. 
diff -urN linux-2.6.14.commit/arch/arm/nwfpe/fpa11_cpdt.c linux-2.6.14.snap/arch/arm/nwfpe/fpa11_cpdt.c --- linux-2.6.14.commit/arch/arm/nwfpe/fpa11_cpdt.c 2005-10-28 02:02:08.000000000 +0200 +++ linux-2.6.14.snap/arch/arm/nwfpe/fpa11_cpdt.c 2005-11-06 12:24:34.000000000 +0100 @@ -59,8 +59,13 @@ p = (unsigned int *) &fpa11->fpreg[Fn].fExtended; fpa11->fType[Fn] = typeExtended; get_user(p[0], &pMem[0]); /* sign & exponent */ +#ifdef __ARMEB__ + get_user(p[1], &pMem[1]); /* ms bits */ + get_user(p[2], &pMem[2]); /* ls bits */ +#else get_user(p[1], &pMem[2]); /* ls bits */ get_user(p[2], &pMem[1]); /* ms bits */ +#endif } #endif @@ -78,6 +83,7 @@ case typeSingle: case typeDouble: { + /* @@@ big-endian */ get_user(p[0], &pMem[2]); /* Single */ get_user(p[1], &pMem[1]); /* double msw */ p[2] = 0; /* empty */ @@ -177,8 +183,13 @@ } put_user(val.i[0], &pMem[0]); /* sign & exp */ +#ifdef __ARMEB__ + put_user(val.i[1], &pMem[1]); /* msw */ + put_user(val.i[2], &pMem[2]); +#else put_user(val.i[1], &pMem[2]); put_user(val.i[2], &pMem[1]); /* msw */ +#endif } #endif @@ -194,6 +205,7 @@ case typeSingle: case typeDouble: { + /* @@@ big-endian */ put_user(p[0], &pMem[2]); /* single */ put_user(p[1], &pMem[1]); /* double msw */ put_user(nType << 14, &pMem[0]); diff -urN linux-2.6.14.commit/arch/arm/nwfpe/fpa11.h linux-2.6.14.snap/arch/arm/nwfpe/fpa11.h --- linux-2.6.14.commit/arch/arm/nwfpe/fpa11.h 2005-10-28 02:02:08.000000000 +0200 +++ linux-2.6.14.snap/arch/arm/nwfpe/fpa11.h 2005-10-29 01:41:41.000000000 +0200 @@ -60,7 +60,7 @@ #ifdef CONFIG_FPE_NWFPE_XP floatx80 fExtended; #else - int padding[3]; + u32 padding[3]; #endif } FPREG; diff -urN linux-2.6.14.commit/arch/arm/nwfpe/fpopcode.c linux-2.6.14.snap/arch/arm/nwfpe/fpopcode.c --- linux-2.6.14.commit/arch/arm/nwfpe/fpopcode.c 2005-11-06 17:00:11.000000000 +0100 +++ linux-2.6.14.snap/arch/arm/nwfpe/fpopcode.c 2005-11-06 17:00:02.000000000 +0100 @@ -29,14 +29,14 @@ #ifdef CONFIG_FPE_NWFPE_XP const floatx80 floatx80Constant[] = 
{ - {0x0000, 0x0000000000000000ULL}, /* extended 0.0 */ - {0x3fff, 0x8000000000000000ULL}, /* extended 1.0 */ - {0x4000, 0x8000000000000000ULL}, /* extended 2.0 */ - {0x4000, 0xc000000000000000ULL}, /* extended 3.0 */ - {0x4001, 0x8000000000000000ULL}, /* extended 4.0 */ - {0x4001, 0xa000000000000000ULL}, /* extended 5.0 */ - {0x3ffe, 0x8000000000000000ULL}, /* extended 0.5 */ - {0x4002, 0xa000000000000000ULL} /* extended 10.0 */ + { .high = 0x0000, .low = 0x0000000000000000ULL},/* extended 0.0 */ + { .high = 0x3fff, .low = 0x8000000000000000ULL},/* extended 1.0 */ + { .high = 0x4000, .low = 0x8000000000000000ULL},/* extended 2.0 */ + { .high = 0x4000, .low = 0xc000000000000000ULL},/* extended 3.0 */ + { .high = 0x4001, .low = 0x8000000000000000ULL},/* extended 4.0 */ + { .high = 0x4001, .low = 0xa000000000000000ULL},/* extended 5.0 */ + { .high = 0x3ffe, .low = 0x8000000000000000ULL},/* extended 0.5 */ + { .high = 0x4002, .low = 0xa000000000000000ULL},/* extended 10.0 */ }; #endif diff -urN linux-2.6.14.commit/arch/arm/nwfpe/softfloat.h linux-2.6.14.snap/arch/arm/nwfpe/softfloat.h --- linux-2.6.14.commit/arch/arm/nwfpe/softfloat.h 2005-10-28 02:02:08.000000000 +0200 +++ linux-2.6.14.snap/arch/arm/nwfpe/softfloat.h 2005-11-06 16:55:13.000000000 +0100 @@ -51,11 +51,17 @@ Software IEC/IEEE floating-point types. ------------------------------------------------------------------------------- */ -typedef unsigned long int float32; -typedef unsigned long long float64; +typedef u32 float32; +typedef u64 float64; typedef struct { - unsigned short high; - unsigned long long low; +#ifdef __ARMEB__ + u16 __padding; + u16 high; +#else + u16 high; + u16 __padding; +#endif + u64 low; } floatx80; /* | https://lists.debian.org/debian-arm/2005/11/msg00009.html | CC-MAIN-2018-05 | refinedweb | 1,504 | 57.57 |
Announcements
November GameDev Challenge: Pong! 11/01/17
stormrunnerMembers
Content count776
Joined
Last visited
Community Reputation720 Good
About stormrunner
- RankAdvanced Member
Do you use WMP or Winamp?
stormrunner replied to deathtrap's topic in GDNet LoungeWinAmp for audio, MPlayer for video.
Minigames
stormrunner commented on Trapper Zoid's blog entry in #ifdef TRAPPER_ZOIDVoltron.
Programming for *nix
stormrunner replied to I_Smell_Tuna's topic in Everything UnixQuote:Original post by I_Smell_Tuna From what I've read there are several different windowing variants depending on your unix distribution, is that correct? No, the window manager you use is simply your preference. Many distros - such as Ubuntu and Suse - pick one as their "flagship" and perhaps modify it, but that doesn't mean you can't install a different one if you felt the need. Quote:Original post by I_Smell_Tuna Can someone point me in the direction of some of the more popular windowing methods for unix systems, and possibly some tutorials on how to program them? Most Linux window managers are built on top of X, which is based on a client-server model. Coding for X can be a harrowing experience as most tutorials (like this one) will get you up and running with basic functionality, but from there you're mostly on your own as the documentation for some of the more advanced features is somewhat scarce. A better idea is to use one of the window manager libraries, such as Qt which powers KDE or GTK the backend of GNOME. Both of these libraries are well documented and far easier to use than plain X. They will also show you some cool ways of handling messages and interacting with components - you might fight Qt's signals and slots extremely interesting. As for Linux specific system programming, you might find this to be of help.
How to keep Visual C++ projects organized?
stormrunner replied to jackalope's topic in General and Gameplay ProgrammingQuote:Original post by jackalope I use these in lots of different Visual C++ projects. Back in my dark days, I used to use the ".h #includes .cpp" scheme. You could just combine these files into a static library and stick it in the VC library path.
I require spankings.
stormrunner replied to fisheyel83l's topic in GDNet LoungeHappy Birthday !
Who else is alone for valentines?
stormrunner replied to dave's topic in GDNet LoungeQuote:Original post by APC Also, I'm so alone.
APC smash puny DRM
stormrunner replied to AnonymousPosterChild's topic in GDNet LoungeQuote:Original post by AnonymousPosterChild So what the hell can I do? Eh, it depends. I'm assuming these MP3s are stored on your computer. If you have Windows XP installed, just use the tried and true (in my experience) method of reburning the music. In WinMedia, disable the "Obtain licenses for music" option, which should be on the main page. It's also a good time to set the default format (for ripping) to MP3. Using a CD-RW (so you can reuse it) burn the WMAs to it. Rip the music from the disc The new ripped music should be in unprotected format, so you can use a converter on them. Otherwise, you're screwed. I feel your pain - I've got 1.2 gigs of protected WMAs sitting on my Windows partition from years ago when I wasn't aware of what Windows was doing when I copied my CDs (for backup, oh the irony).
horrible performance after installing gentoo (SOLVED)
stormrunner replied to pulpfist's topic in Everything UnixI'll admit I don't know the specific solution(s) to the problems you're having, since almost every Gentoo install is unique and I've encountered the same problems with different solutions. However, you might find some of the following useful : Quote:Original post by pulpfist I tried to run # hdparm -d 1 /dev/hdd and got this output: HDIO_SET_DMA failed: Operation not permitted First, have you read the Gentoo hdparm wiki page, as there is a section that describes the problem you are having (specifically the "Operation not permitted"). If you're convinced that it's not your harddrive that's causing the problem, I'd advise the following : Running hdparm /dev/hda and seeing what it says Boot using the Live CD, run hdparm /dev/hda from there and compare If DMA is enabled from the Live CD (or you can enable it through hdparm -d /dev/hda on the CD ) simply chroot into your Gentoo installation (from the CD to spare a reboot), open up /etc/conf.d/hdparm and modify the settings in there. It's well commented, so it should be fairly simple. Add it to the default runlevel when you're done so the settings will always be used. Quote:Original post by pulpfist After alsa is configured and un-muted, I have sound when the dvd intro is running, but nothing after that. xmms dont give any sound at all, even after configuring it to use the ALSA 1.2.10 output plugin. Did you compile the kernel module with ALSA (in which case you need to ensure it supports your card) or did you use the alsa-drivers package ? As for XMMS, what file type were you trying to play ? Even if you included mp3 in your use flags, you will still need to install xmms-mp3cue so that it can play them. Check and see that XMMS ( and GStreamer if you use that ) supports the music types you are trying to play. Otherwise, it's purely a problem with ALSA. In your message log, I noticed this : "ERROR[ogle_audio]: prepare failed: Device or resource busy". 
Were you running any other applications that needed to access the sound card when you were trying to play music ? If that's the issue (and I've had that happen several times) you might want to look into dmixer.
Feedback needed!
stormrunner replied to MarTor's topic in GDNet LoungeCompleted.
Just for the heck of it =P
stormrunner replied to TFS_Waldo's topic in GDNet LoungeFor my development projects I use a toolbar that links to the base directory. For everything else I just arrange the StartMenu to my liking.
I'm Back!
stormrunner commented on Rob Loach's blog entry in ReminiscenceWelcome back ! Quote:Tao.PhysFs.PhysFs is what it is right now, should it be renamed? Yes. To me, Tao.PhysFs.Fs is a lot easier on the eyes than Tao.PhysFs.PhysFs. Whenever I see the latter, I always wonder if the namespace/extra indirection was really necessary.
[java] Double Buffer with Java Swing?
stormrunner replied to Giuoco's topic in General and Gameplay ProgrammingQuote:Original post by Giuoco Is this true? Yes. Iirc, as of JDK 1.3 all Swing widgets use double buffering by default. Quote:Original post by Giuoco Anyone have any experience with the idea that swing double buffers it's graphics automatically? Yes, what exactly did you want to know ? If you're making a game it's usually not a problem since you'll either use a backbuffer scheme like the one shown in that article (perhaps replacing the BufferedImage with a VolatileImage) or ensure that the widget has been double buffered by Swing and access the backbuffer directly via a BufferStrategy. On the whole, it's fine for Swing to create the a double buffer scheme for you, but what you need to do is to prevent Swing from actually updating it, which will happen every time the display is resized or goes in and out of focus (among 10,000 other things).
Your favorite game
stormrunner replied to Blew's topic in GDNet Lounge
Goldeneye for the Nintendo 64 (Gamespot review)
Other sites you frequently visit.
stormrunner replied to Benjamin Heath's topic in GDNet Lounge
Gamespot, Fedora Linux, various parts of MSDN and TheCodeProject.
Motivation and rewards: what do you want?
stormrunner replied to frob's topic in GDNet Lounge
I'm assuming this is for any generic task...

Quote: 5-10 minute task rewards?
None.

Quote: 1-2 hour task rewards?
Usually a small break, some food, maybe catch a TV series episode (X-Files, Firefly). Then back to work.

Quote: Weekly rewards?
Depends. If it's something that required a lot of hard work and is worth the purchase, perhaps a CD (this is very rare). Usually, though, my dignity and watching a movie (on the Sci-Fi channel, which is my idea of relaxing ;P) are sufficient.

Quote: Monthly rewards?
None.
On 06/23, Sukadev Bhattiprolu wrote:
>
> Oleg Nesterov [oleg@redhat.com] wrote:
> | This is mostly cleanup and optimization, but also fixes the bug.
> |
> | proc_flush_task() checks upid->nr == 1 to detect the case when
> | a sub-namespace exits. However, this doesn't work in case when
> | a multithreaded init execs and calls release_task(old_leader),
> | the old leader has the same pid 1.
> |
> | Move pid_ns_release_proc() to zap_pid_ns_processes(), it is called
> | when we know for sure that init is exiting.
>
> Hmm, I almost agreed, but have a question :-)
>
> Yes, we know that the container-init is exiting. But if its parent (in
> the parent ns) waits on it and calls release_task(), won't we call
> proc_flush_task_mnt() on this container-init? This would happen after
> dropping the mnt in zap_pid_ns_processes(), no?

Indeed. Thanks!

Somehow I forgot that init itself has not passed proc_flush_task().

Oleg.
Provided by: allegro4-doc_4.4.2-4_all
NAME
ustrtod - Converts a string into a floating point number. Allegro game programming library.
SYNOPSIS
#include <allegro.h>

double ustrtod(const char *s, char **endp);
DESCRIPTION
This function converts as many characters of `s' that look like a floating point number into one, and sets `*endp' to point to the first unused character, if `endp' is not a NULL pointer. Example:

    char *endp, *string = "456.203 askdfg";
    double number = ustrtod(string, &endp);
RETURN VALUE
Returns the string converted as a value of type `double'. If nothing was converted, returns zero with *endp pointing to the beginning of s.
SEE ALSO
uconvert(3alleg4), ustrtol(3alleg4), uatof(3alleg4) | http://manpages.ubuntu.com/manpages/trusty/man3/ustrtod.3alleg4.html | CC-MAIN-2019-18 | refinedweb | 113 | 51.55 |
Tech Off Thread (26 posts)
Is Visual Basic inferior to C#?
Is Visual Basic inferior to C#? The reason I am asking this is because I recently found that there are a few things that Microsoft supports in C# which are not supported in VB.
What is going on? Up until recently the main difference in what you could do in VB and C# was the calling of “unsafe” methods.
No, VB is not inferior. It's sometimes also the other way around. VB 9 has XML literals, and C# 3 does not (and probably never will). VB has the My-namespace. There are a lot more default Code Snippets for VB than for C#. The only conclusion I can make is that VB and C# have different features because they are built by different teams and they have different goals. Which is a good thing.
Wow, Thanks TommyCarlier. Hip Hip Hooray!
You can use the My namespace from C#, no problem.
Check out Erik Meijer in the videos; he's a VB.NET enthusiast and can explain why VB.NET is actually superior to C#.
A lot of things: books, websites, 3rd party tools, etc. have more support for C# than VB.NET. I think it's assumed that more people use C#, but I'm not sure that's true - not if all the millions of classic VB programmers upgraded to VB.NET, anyway. But Microsoft has promoted C# as the 'serious' .NET language, so the perception is out there that supporting C# is a must and that VB.NET is a bonus.
At work I write ASP.NET and Web services most of the time and the company uses VB.NET for all its .NET projects. I hadn't used VB.NET before I joined them (although I had used VB6 for many years), but I was a user of C# for several years and convinced them to hire me based on how much transferable knowledge there is between the two as well as my classic VB experience.
Anyway, my observation is that VB.NET is pretty much perfect for the work we do. Mostly due to one feature: the compiler generates code such that in comparison operations, the empty string is equal to null (Nothing in VB.NET). That saves *so* much redundant typing that you have to put in by hand when you write in C#. OK, there are other benefits to VB.NET and there are also some downsides, but certainly anything you can write in C# can also be written in VB.NET.
yes.
VB.NET gets a lot of love from Microsoft. It compiles as you enter the code, for example. That feature is not found in C#. Other things mentioned by Tommy also aren't found in C#.
It's true that you can use the My namespace also in C#, but that means you need to include a VB.NET assembly as a reference (uuuhh, ahh!) in your project. Some My-namespace things are generated by the VB compiler and therefore aren't available in C# (well, the C# compiler doesn't generate them).
VB is also way better at late bound things!
And the best is that you can mix both languages - VB and C#. You need to create two projects in VS, but you can have both in the same solution. If you need the benefits of C#, you can use that language and if you need the benefits of VB, you can use that langauge too. And finally compile them both into the same code!
All languages work so good together! Isn't the .NET world a great world?
my dream is to be able to use different languages in the same project, like in ASP.NET, compile them to different assemblies and then link them together with ilmerge, or something like that...
I think that won't happen - at least not in the near future. Why? Well if you have two classes. Let's say A and B. A has a reference to B in it and B a reference to A. Now you have A written in C# and B in VB. How could the VB compiler compile it without knowing anything of the A class? And how could the C# compiler do the same for the B class?
This is an example of a circular reference. If you have separate assemblies you won't get them that often (and it's a lot easier to eliminate them) than in a single assembly!
Hi
just another comment
It is amazing that people are so strongly pro or anti a language.
However if you look at the world in general there are many things like religion where people have vastly different standpoints!
Anyway from 'my' experience in the business world - one can write a VB.NET application - and write the same C# application and the quality (if the same 'process' was followed) will be exactly the same.
So I ALWAYS try and think of the domain/context etc. that the 'language' will be used in to determine whether it is the 'better' language for the job.
Think about it, I would not try and speak Afrikaans (which is my home language) in America if I'm trying to do business with a large organization ....
So try not to think of which language is better, rather think of it this way by asking yourself.
"In this context, which language would be better?"
And think about things like:
- functional requirements of the system
- developer skills ! ! !
- maintenance, will it be in-house? do the in-house developers know that language? their experience thereof?
So there are many, many more things to consider than just plainly making a statement like ...
language x is better than language y because it is higher up in the alphabet .....
That's something I want to know too.
I know that we can create 2 separate folders under App_Code in ASP.NET application, one for VB.NET and one for C#, to use both language in the same project. But is it possible to create a partial class in both language, some in VB.NET and some in C#?
Somebody remind me what BASIC stands for... "____ All-purpose Symbolic Instruction Code". If only I could remember what the ___ was. I wonder how many more people would develop on C-based languages and other more complex languages if there weren't so many ____'s running around afraid of expanding their knowledge of the way the world works.
... but yes, different teams with different visions combine to form a good thing. Especially when the two teams' work can be accessed from each other.
Flames start.... NOW.
Here's the thing... THEY BOTH MERGE TO IL. Also, some features in IL aren't yet in C# or VB. They use the same namespaces (except Microsoft.VisualBasic) and they use the same intermediate language... IL! If the teams would make them the same, they would be exactly the same and code conversion would be SO much simpler... but it's not!
BTW...
Dim BG as Microsoft.BillGates
Dim VBprogrammer as Person
BG = VBprogrammer
Its true... Gates is a VB fan!
Bill Gates wrote the Basic compiler for DOS, sure he is a VB fan
This isn't as easily doable in ASP.NET using the App_Code folder. If you wanted to have part of an assembly written in C# and another part in VB.NET, the key is to use modules (the C# compiler has a "/t:module" switch that outputs a *.netmodule). Then you can include a module into an assembly using "/addmodule" on the command-line compile.
I've never actually done this (or know anyone who has), but I don't imagine there should be many problems.
This doesn't give you exactly what you want as far as multiple languages in a "project", but if you're using tool around your builds (say for automated builds, etc) then this is even easier... (for instance VSTS, scripted MSBuild, NAnt, CoReXT (ms internal), etc.)
Agree. My dream is to be able to code C# alongside C++ or PowerShell script blocks or any other .NET language. Everything is a code block (i.e. just IL with a different face). IIRC, that is the plan with futures of the Bartok compiler.
An installer for XFree86 4.2.0.1 on Mac OS X is now available. This release is a bug fix and compatibility update to version 4.2.0. All users of XFree86 on Darwin/Mac OS X should update to 4.2.0.1. You must install XFree86 4.2.0 before you apply this update. Separate installers are available for Mac OS X 10.1 and 10.2 (Jaguar).
Bug Fixes:
An XDarwin crash on dual processor machines has been fixed.
- libXt is now a flat namespace image.
- Fixed reading uninitialized memory in libXaw.
- /usr/X11R6/include/X11/bitmaps now includes stipple and Stippler, which has been renamed to avoid a case insensitivity name collision.
- (Jaguar only) A new xterm binary is provided.
- (Jaguar only) A new libGLU is provided for compatibility with gcc 3.1. | https://sourceforge.net/p/xonx/news/2002/08/xfree86-4201/ | CC-MAIN-2017-51 | refinedweb | 138 | 71.21 |
September 2017
Volume 32 Number 9
[.NET Standard] Demystifying .NET Core and .NET Standard
$ dotnet new console -o hello
$ cd hello
$ dotnet run
Hello World!
$ cd ..
$ dotnet new library -o logic
$ cd logic
The logic you want to encapsulate is the construction of a Hello World message, so change the contents of Class1.cs to the following code:
namespace logic
{
  public static class HelloWorld
  {
    public static string GetMessage(string name) => $"Hello {name}!";
  }
}
At this point, you should also rename Class1.cs to HelloWorld.cs:
$ mv Class1.cs HelloWorld.cs
$ cd ../hello
$ dotnet add reference ../logic/logic.csproj
Now, update the Program.cs file to use the HelloWorld class, as shown in Figure 2.
Figure 2 Updating the Program.cs File to Use HelloWorld Class
using System;
using logic;

namespace hello
{
  class Program
  {
    static void Main(string[] args)
    {
      Console.Write("What's your name: ");
      var name = Console.ReadLine();
      var message = HelloWorld.GetMessage(name);
      Console.WriteLine(message);
    }
  }
}
To build and run your app, just type dotnet run:
$ dotnet run
What's your name: Immo
Hello Immo!
You can also create tests from the command line. The CLI supports MSTest, as well as the popular xUnit framework. Let’s use xUnit in this example:
$ cd .. $ dotnet new xunit -o tests $ cd tests $ dotnet add reference ../logic/logic.csproj
Change the UnitTest1.cs contents, as shown in Figure 3, to add a test.
Figure 3 Changing the UnitTest1.cs Contents to Add a Test
using System;
using Xunit;
using logic;

namespace tests
{
  public class UnitTest1
  {
    [Fact]
    public void Test1()
    {
      var expectedMessage = "Hello Immo!";
      var actualMessage = HelloWorld.GetMessage("Immo");
      Assert.Equal(expectedMessage, actualMessage);
    }
  }
}
Now you can run the tests by invoking dotnet test:
$ dotnet test
Total tests: 1. Passed: 1. Failed: 0. Skipped: 0.
Test Run Successful.
To make things a bit more interesting, let’s create a simple ASP.NET Core Web site:
$ cd ..
$ dotnet new web -o web
$ cd web
$ dotnet add reference ../logic/logic.csproj
Edit the Startup.cs file and change the invocation of app.Run to use the HelloWorld class as follows:
app.Run(async (context) =>
{
  var name = Environment.UserName;
  var message = logic.HelloWorld.GetMessage(name);
  await context.Response.WriteAsync(message);
});
To start the development Web server, just use dotnet run again:
$ dotnet run
Hosting environment: Production
Now listening on:
Application started. Press Ctrl+C to shut down.
Browse to the displayed URL.
At this point, your project structure should look like Figure 4.
Figure 4 The Project Structure You Created
$ tree /f
│
├───hello
│       hello.csproj
│       Program.cs
│
├───logic
│       HelloWorld.cs
│       logic.csproj
│
├───tests
│       tests.csproj
│       UnitTest1.cs
│
└───web
        Program.cs
        Startup.cs
        web.csproj
To make it easier to edit the files using Visual Studio, let’s also create a solution file and add all the projects to the solution:
$ cd ..
$ dotnet new sln -n HelloWorld
$ ls -fi *.csproj -rec | % { dotnet sln add $_.FullName }
$ cd logic
$ cat logic.csproj
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
Let’s contrast this with the “hello” console application project file:
$ cd ..\hello
$ cat hello.csproj
<Project Sdk="Microsoft.NET.Sdk">
  <ItemGroup>
    <ProjectReference Include="..\logic\logic.csproj" />
  </ItemGroup>
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
</Project>
$ dotnet add package Huitian.PowerCollections
This library provides additional collection types that the BCL doesn’t provide, such as a bag, which makes no ordering guarantees. Let’s change the hello app to make use of it, as shown in Figure 6.
Figure 6 Sample Application Using PowerCollections
using System;
using Wintellect.PowerCollections;

namespace hello
{
  class Program
  {
    static void Main(string[] args)
    {
      var data = new Bag<int>() { 1, 2, 3 };
      foreach (var element in data)
        Console.WriteLine(element);
    }
  }
}
If you run the program, you’ll see the following:
$ dotnet run
hello.csproj : warning NU1701: Package 'Huitian.PowerCollections 1.0.0' was restored using '.NETFramework,Version=v4.6.1' instead of the project target framework '.NETCoreApp,Version=v2.0'. This may cause compatibility problems.
1
3
2
<PackageReference Include="Huitian.PowerCollections" Version="1.0.0" NoWarn="NU1701" />
Immo Landwerth is a program manager at Microsoft, working on .NET. He focuses on .NET Standard, the BCL and API design.
Discuss this article in the MSDN Magazine forum | https://docs.microsoft.com/en-us/archive/msdn-magazine/2017/september/net-standard-demystifying-net-core-and-net-standard | CC-MAIN-2021-10 | refinedweb | 691 | 54.29 |
Subject: Re: [boost] BOOST_HAS_PRAGMA_ONCE
From: Ion Gaztañaga (igaztanaga_at_[hidden])
Date: 2015-02-05 02:19:54
On 04/02/2015 at 20:28, Stephen Kelly wrote:
> Ion Gaztañaga wrote:
>
>> Yes, it should be a very tiny help.. except if config.hpp also uses
>> pragma once ;-)
>>
>
> You don't think it's, maybe, relying on implementation details of config,
> and, maybe, that's not a good thing?
Maybe ;-) It's not something new in boost:
#ifndef BOOST_CONFIG_HPP
#include <boost/config.hpp>
#endif
Nothing will break if boost/config.hpp does not define BOOST_CONFIG_HPP,
we'll just lose the tiny help. It's ugly, I agree, but I don't think
this is a problem for anyone.
Ion
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2015/02/219942.php | CC-MAIN-2021-39 | refinedweb | 141 | 69.48 |
New project wizard, from scratch.
name the project
include src directory
The next part of the wizard asks me to select what technologies to use.
In the subtree under Web Application I see only Struts and JSF
and I don't see Web Services Client beneath Web Application
The reason I know those options should be there is that I just went through this process to create a new project using those settings, and it downloaded all the missing JAR files and whatnot. I got the (I guess) dreaded "SOAP-ENC:Array" error regarding namespaces. From googling, I was led to understand that maybe I had selected the wrong type of WS. I had selected JAX-WS, but maybe I should have used RPC? So I then shut down IJ, deleted the project folder, restarted, and then proceeded to make a new (second) project, intending to re-select the web services technologies, but to use a different sub-selection. But they've disappeared and won't come back. I've disabled and re-enabled the plugin in the settings.
I don't get it... what did I break??
(IntelliJ 9.0.2 IU 95.66)
Opened 8 years ago
Closed 8 years ago
#7693 closed (invalid)
Different import paths can impact whether doctests are run or not.
Description
My directory structure looks like this:
/appname/models.py
/appname/utils/__init__.py (empty)
/appname/utils/util1.py
in util1.py, I do this:
from appname.models import Class1, Class2
This import causes the doctests in models.py to not run. If I remove util1.py entirely, the doctests in models.py are run.
I patched this by editing line 895 in root/django/trunk/django/test/_doctest.py
I changed it from:
return module.__name__ == object.__module__
to
return module.__name__.find(object.__module__) >= 0
In my example above:
module.__name__ = 'appname.models' and
object.__module__ = 'projectname.appname.models'
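The mismatch is easy to see with plain strings. Here is a standalone sketch (variable names are made up) contrasting the exact equality used by _doctest.py with the substring containment the patch reaches for, using the two dotted paths from the example:

```python
# The same module on disk, imported through two different package roots:
name_seen_by_module = 'appname.models'
name_seen_by_doctest = 'projectname.appname.models'

# Exact comparison treats them as different modules, so the tests are skipped:
print(name_seen_by_module == name_seen_by_doctest)          # False

# Containment of the shorter path in the longer one does match:
print(name_seen_by_doctest.find(name_seen_by_module) >= 0)  # True
```

This only illustrates the string behaviour; whether substring matching is the right fix is a separate question.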
I'm selecting 'has patch' and attaching the file, but frankly am not sure if this is the right way to handle this or not.
Attachments (1)
Change History (2)
Changed 8 years ago by davenaff
comment:1 Changed 8 years ago by russellm
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to invalid
- Status changed from new to closed
There must be something else going on here - you haven't mentioned anything that imports util1.py, so that code won't be imported, and therefore won't be executed. I don't deny you're seeing a problem, but either util1.py isn't the cause, or there is something else going on that your instructions don't cover.
Also - regarding your proposed solution - _doctest.py is a copy of the Python version, provided as a workaround for bugs in older Python versions. Patches proposing fixes to _doctest.py are almost certainly not the right solution to any problem - if they are the solution, they should be submitted upstream to Python itself. | https://code.djangoproject.com/ticket/7693 | CC-MAIN-2016-36 | refinedweb | 299 | 60.61 |
Ryan Weaver on synergies in Drupal, Symfony2, and PHP.
Not being the best is the best
Ryan describes getting into an open source software in a way that describes the power of the community. Talking about the first time he saw a friend do scaffolding in Ruby on Rails, "I watched him do this command and thought 'That was 12 hours of work for me that you just did as one command'. The power of that blew my mind and I had to find something that did that in PHP, which is how I ended coming to Symfony. We've all done it: You start, thinking you're the smartest person in the world and doing your own thing ... I was slapped with the reality check of how wonderful it is for people to collaborate because I never could have done that. I would still be working on trying to get something of that functionality."
Drupal + Symfony - How are we doing?
"The code I've seen from Drupal people that have started to do things in more of the namespace/Symfony kind of way, has been incredible. There's a Behat extension for Drupal (Behat: BDD for PHP). It was made more or less by Open Sourcery in Portland and it is incredible! So you have all this Drupal: functions and hooks and all of as sudden, you look at their code, there's dependency injection and all these really advanced things. I've been really pleasantly surprised by the advanced things that they're already bringing to the code itself."
Drupal 8 + Symfony + All the PHP
In answer to the question "What are you most excited about for Drupal 8?", Ryan explains, "I see an increasing number of people that are coming from the Drupal world that are looking at Symfony. Not they're switching to Symfony, but they're increasing their options." He explains how Drupal-the-CMS can be perfectly complimented by a compatible, highly specific, custom Symfony application. "What's really exciting for a Drupal developer or a Symfony developer is when you're in Drupal and run into Symfony pieces, those are going to be the exact same Symfony pieces that you'll see when you're using the Symfony framework or Silex, which is the micro-framework built with Symfony. So whether you're using Drupal or the Symfony framework, or Silex, it is all exactly the same classes, objects, and ideas. So in Drupal, if you're getting the request object and you're getting post information, or files, or something, that is the Symfony request object, that is the Silex request object! The toolsets are being blurred. How dangerous are we now that Drupal people can do custom things and traditional custom developers can do Drupal more easily. It's really really exciting."
"You could integrate a Symfony or Silex project with Drupal. You could have a Symfony project and it goes through Symfony's routing and you get inside of a controller where you normally render the page. You could forward that over to Drupal, pass the request to Drupal because it is a content page. You boot up the Drupal engine and pass the request over there, because they're using the same technologies. Normally, if you have two PHP projects, you can't really put them together. They'll crash into each other. But inside a single thread, we should be able to pass the information over to Drupal. Drupal does its normal page processing (not knowing it's not really the one processing the request) and then it gives back a finished page. Then we use that finished page inside of Symfony and render something."
"Of course, the same thing can be said in the other direction; having Drupal handle something and forwarding it off to Symfony."
As a Drupalist, I will suddenly have huge swathes of the PHP world directly available to me in my toolbox. "Now with Composer, you can find any PHP library on GitHub and it's going to plug in immediately, because Drupal has taken this step to integrate."
APIs are the Future
On the subject of Drupal 8's representation layer being agnostic and able to produce output in HTML5 or JSON or whatever you need, "We all know we're moving more to APIs. It doesn't mean that the traditional web app is going to go away entirely, but more and more things are being done as an API. Then you have a Javascript front-end, or a device or whatever else. With open source projects, especially as big as Drupal, you can't make that change overnight, so I think it's very forward thinking to put that in now. There's some learning curve and it's a very aggressive step, but fast-forward five years from now and everyone's going to say, 'Thank God we did that!' because if we wait five years, it's gonna be too late." | https://www.acquia.com/de/resources/podcasts/acquia-podcast-126-ryan-weaver-synergies-drupal-symfony2-PHP | CC-MAIN-2015-32 | refinedweb | 833 | 68.7 |
SiteTree Forms and Fields
Occasionally you may want to link some site entities (e.g. Polls, Articles) to certain sitetree items (so as to categorize them). You can achieve this with the help of the generic forms and fields shipped with SiteTree.
TreeItemForm
You can inherit from that form to have a dropdown with tree items for a certain tree:
from django import forms

from sitetree.forms import TreeItemForm


class MyTreeItemForm(TreeItemForm):
    """We inherit from TreeItemForm to allow the user to link some title
    to a sitetree item. Besides the `title` field, this form will have
    a `tree_item` dropdown.
    """
    title = forms.CharField()


# We instruct our form to work with the `main` aliased sitetree,
# and we set the tree item with ID = 2 as the initial value.
my_form = MyTreeItemForm(tree='main', tree_item=2)
You can also use the well-known initial={'tree_item': 2} approach to set an initial sitetree item.
After that, deal with the form just as usual.
TreeItemChoiceField
TreeItemChoiceField is what TreeItemForm uses internally to represent the sitetree items dropdown, and what is used in the Admin contrib on sitetree item create/edit pages.
You can inherit from it (and customize it) or use it as it is in your own forms:
from sitetree.fields import TreeItemChoiceField


class MyField(TreeItemChoiceField):
    # We override the template used to build select choices.
    template = 'my_templates/tree_combo.html'
    # And override the root item representation.
    root_title = '-** Root item **-'
Language design is hard.
Posted to xml by Ehud Lamm on 5/10/04; 12:29:44 PM
From the discussions:
One of the most commonly used mechanisms for composition in XML is embedding, with namespaces being used to distinguish one vocabulary from another in the composed document. The end result is documents which may contain a great many "optional" components, which can create problems in itself (the HL7 model in its latest incarnation moves away from optional elements in favour of OO-style polymorphism, in an attempt to resolve some of these problems).
Jon points towards the amazing proliferation of WS-* specs as an example of greater granularity and composability. It's certainly true that one way to "attack" complexity is to break it down into smaller pieces; but if all you can do with those pieces is put them back together again then at best you have a "build your own monolith" construction kit.
Most real PLs give you more than that. They give you the ability to make simple pieces out of simple pieces, through mechanisms such as recursive definition and parameterisation. One of my favourite examples is the "top level" of HaXml's XML parser, which reads:
xmlParse :: String -> String -> Document
xmlParse name = sanitycheck . papply document emptySTs . xmlLex name
Now that's composition.
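For readers who don't read Haskell, the shape of that definition (a parser assembled by gluing three small functions into a pipeline) can be sketched in Python; the stage functions below are toy stand-ins, not HaXml's real ones:

```python
def compose(*fns):
    """Right-to-left function composition, like Haskell's (.) operator."""
    def composed(value):
        for fn in reversed(fns):
            value = fn(value)
        return value
    return composed

# Toy stand-ins for xmlLex, papply/document, and sanitycheck:
lex = str.split                              # source text -> tokens
parse = lambda tokens: {'children': tokens}  # tokens -> "document"
sanity_check = lambda doc: doc if doc['children'] else None

xml_parse = compose(sanity_check, parse, lex)
print(xml_parse('<a> <b> </a>'))  # {'children': ['<a>', '<b>', '</a>']}
```

The point survives the translation: each stage is usable on its own, and the top-level parser is nothing more than their composition.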
Seems like an example of marketing and positioning of large software corporations to me. This is all supposed to be manageable through buying "wizards" embedded in the IDEs like Visual Studio, Eclipse, etc.
None of these efforts have an appealing "language" perspective from what I can see. It's all apparently done through the magic of "SOAP Headers".
What kind of language is that? Languages are to communicate. I know of no one who can communicate effectively about the WS-*. No one except the snake oil salesmen.
XBRL isn't designed to be hand-written, and that level of simplicity is not a virtue in the design space it targets.
I get the sense that the appeal of the character basis of XML is that it enables an easy form of inspection in language systems that are typically not very open, especially in their support for aggregated objects.
XML would probably not have gained much appeal in language systems like Lisp, Smalltalk, etc. whose binary objects, especially aggregates, are well supported by the system for inspection.
I do get a bit tired of hearing this particulary excuse, or variations thereupon, being trotted out to justify inscrutable or intractable formats. Particularly, I think it fudges an important issue, which is that many XML vocabularies are not only syntactically verbose but highly repetitious - they require you to spell things out, in longhand, over and over again. What they lack is not so much "simplicity" as style.
Based on the few examples I've seen, XBRL looks to me a little like XHTML would look if it didn't have CSS: each item of information has to be tagged with numerous attributes indicating its format, encoding, precision and so on. What I don't see is any way of defining a few common attribute sets and then re-using them throughout the document. Perhaps it can be done, but browsing the spec I didn't see an obvious mechanism for it.
Readability and writeability by a (sufficiently smart) human being seems to me to be a good rule-of-thumb desideratum for language design. Can one think in XBRL?
Handling several PDO Database connections in Symfony2 through the Dependency Injection Container with PHP.
The Symfony framework is closely coupled to Doctrine and it's very easy to use the ORM from our applications. But as I said before, I prefer not to use it. On the other hand, I have another problem. Due to my daily work I need to connect to different databases (not only one) in my applications. In Symfony2 we normally configure the default database in our parameters.yml file:
# parameters.yml
parameters:
    database_driver: pdo_pgsql
    database_host: localhost
    database_port: 5432
    database_name: symfony
    database_user: username
    database_password: password
Ok. If we want to use PDO objects with different databases, we can use something like that:
# parameters.yml
parameters:
    database.db1.dsn: sqlite::memory:
    database.db1.username: username
    database.db1.password: password
    database.db2.dsn: pgsql:host=127.0.0.1;port=5432;dbname=testdb
    database.db2.username: username
    database.db2.password: password
And now create the PDO objects within our code with new \PDO():
$dsn = $this->container->getParameter('database.db1.dsn');
$username = $this->container->getParameter('database.db1.username');
$password = $this->container->getParameter('database.db1.password');
$pdo = new \PDO($dsn, $username, $password);
It works, but it’s awful. We store the database credentials in the service container but we aren’t using the service container properly. So we can do one small improvement. We will create a new configuration file called databases.yml and we will include this new file within the services.yml:
# services.yml
imports:
    - { resource: databases.yml }
And create our databases.yml:
# databases.yml
parameters:
    db.class: Gonzalo123\AppBundle\Db\Db

services:
    db1:
        class: %db.class%
        calls:
            - [setDsn, [%database.db1.dsn%]]
            - [setUsername, [%database.db1.username%]]
            - [setPassword, [%database.db1.password%]]
    db2:
        class: %db.class%
        calls:
            - [setDsn, [%database.db2.dsn%]]
            - [setUsername, [%database.db2.username%]]
            - [setPassword, [%database.db2.password%]]
As we can see we have created two new services in the dependency injection container called db1 (sqlite in memory) and db2 (one postgreSql database) that use the same class (in this case ‘Gonzalo123\AppBundle\Db\Db’). So we need to create our Db class:
<?php
namespace Gonzalo123\AppBundle\Db;

class Db
{
    private $dsn;
    private $username;
    private $password;

    public function setDsn($dsn)
    {
        $this->dsn = $dsn;
    }

    public function setPassword($password)
    {
        $this->password = $password;
    }

    public function setUsername($username)
    {
        $this->username = $username;
    }

    /** @return \PDO */
    public function getPDO()
    {
        $options = array(\PDO::ATTR_ERRMODE => \PDO::ERRMODE_EXCEPTION);
        return new \PDO($this->dsn, $this->username, $this->password, $options);
    }
}
And that’s all. Now we can get a new PDO object from our service container with:
$this->container->get('db1')->getPDO();
Better, isn’t it? But it’s still ugly. We need one extra class (Gonzalo123\AppBundle\Db\Db) and this class creates a new instance of PDO object (with getPDO()). Do we really need this class? the answer is no. We can change our service container to:
# databases.yml parameters: pdo.class: PDO pdo.attr_errmode: 3 pdo.erromode_exception: 2 pdo.options: %pdo.attr_errmode%: %pdo.erromode_exception%: %pdo.class% arguments: - %database.db1.dsn% - %database.db1.username% - %database.db1.password% - %pdo.options% db2: class: %pdo.class% arguments: - %database.db2.dsn% - %database.db2.username% - %database.db2.password% - %pdo.options%
Now we don’t need getPDO() and we can get the PDO object directly from service container with:
$this->container->get('db1');
And we can use something like this within our controllers (or maybe better in models):
<?php namespace Gonzalo123\AppBundle\Controller; use Symfony\Bundle\FrameworkBundle\Controller\Controller; class DefaultController extends Controller { public function indexAction($name) { // this code should be out from controller, in a model object. // It is only an example $pdo = $this->container->get('db1'); $pdo->exec("CREATE TABLE IF NOT EXISTS messages (id INTEGER PRIMARY KEY, title TEXT, message TEXT)"); $pdo->exec("INSERT INTO messages(id, title, message) VALUES (1, 'title', 'message')"); $data = $pdo->query("SELECT * FROM messages")->fetchAll(); // return $this->render('AppBundle:Default:index.html.twig', array('usuario' => $data)); } }
Posted on January 7, 2013, in PDO, php, Symfony2, Technology and tagged DIC, pdo, php, php project, Symfony. Bookmark the permalink. 22 Comments.
interesing, thanks for sharing
wondering what is exactly your use case as regularly we use not these services directly but we use doctrine and it is fine
Great post. Why you use this solution instead of doctrine? Do not lose any of the functionality and security of doctrine?.
I’m thinking right now about this because my aplicación have intensive use of dynamic name tables: Note2010, Note2011…. NoteXXXX mapping from same entity Note and I can not solve all the details. See this post:
What do you think about this?.
Thnks.
I normally don’t use Doctrine because I feel very comfortable with SQL. Also The projects that I normally work in aren’t exclusively PHP projects (we share the database). With Doctrine simple things are simple (with raw sql too) and complex problems become nightmares.
Tkns. I understand and will test with PDO.
Absolutely a great solution. I was already on 50% implementing the same idea, bit I’m taking yours as a base.
Great tuts , I am thinking of implementing multiple database support for my php app. right now, it supports only MySql.
Out of interest when and where does the connection get closed? Does syfmony / doctrine do that or is it simply php?
If you don’t close the connection php (PDO) closes the connecion at the end of the script. Anyway you can close the connection when you want
One other scenario I was playing out is where a controller has more than one service and each service talks to the database. That would mean more than one connection would be established for a single request. Do you see that as a concern and would it be better to make sure a single connection is used?
If you get the connection from the dependecy injection container you don’t need to take care about that. Symfony’s DIC will reuse the connecion (only new connection is done the first time you retrieve the connection)
That makes it handy. Out of interest does silex / pimple reuse its connections like symfony 2 DIC?
with pimple it depens on the way on you create the service. The default way don’t reuse (factory pattern), but you can use share() to reuse service (sigleton pattern). Silex’s DoctrineServiceProvider uses share(), so it reuses connection.
Hi, great post! I have a question, when I log to mysql using the terminal, and try to open my symfony app with the PDO connection, I get this “ContextErrorException: Warning: PDO::__construct(): MySQL server has gone away in “, did that happened to you as well?
Thanks for your reply
I am curious about the “moving into model” part. And skiny controllers.
the statement “$this->container->get(‘db1′);” would work only i controller, because it extends base controller class.
If you create a custom model class, let say Gonzalo123\AppBundle\Model\MessagesModel
class MessagesModel
{
}
how would you get access to container? Would you inject whole container in there as a function parameter? Or is there any better solution, how to get access to the registeerd container service inside the custom class?
Thanks!
Inject the whole container is easy and it works, but your model will be strongly coupled to the container. The best approach is always build to decoupled components. If your model doesn’t know anything about your container, better. Maybe here you can inject the database connection to the model (obviously I assume that you aren’t using Doctrine here).
Hi Gonzalo. Thanks for the tutorial… it’s exactly that i’m looking for. I don’t like ORM’s neither. But i get a problem: I followed the tutorial to use a PDO object (get a PDO directly from service container get me a lot of problems), but when i used in security configuration (login form), throw ‘invalid data source name’ message. Do you have any idea that what about this message?… i’m really stuck with this, and I would greatly appreciate any help.
Sorry for my english…
It’s looks like the PDO’s dsn ins’t correct. Try first to perform a PDO connection with a simple PHP script (without symfony). Another way to detect the problem is to use a remote debugger (with xdebug).
I tried to perform PDO alone and work… but I get a new problem, maybe you can help me: I try to get an instance of the db service, but instead from a Controller class using $this->container->get(), I try to call from class that is mapped like a service too… so, i can’t get the container and use get() method… do you have any idea for how to do that?… I readed in other pages that i must adding ‘arguments: [@service_container]‘ in db service definition, but that don’t work.
thanks a lot again and regards
Hi,
Thank you for this wonderful post. This seems to be doing what I wanted to achieve for last 2 weeks.
However, when I run this I get an error saying “Container” is undefined on “$this->container->get(‘db1′)->getPDO();” call.
Can you please guide how can I solve this ?
My Bad. I solved it by injecting @service_container through service.
Pingback: Handling several DBAL Database connections in Symfony2 through the Dependency Injection Container with PHP « Gonzalo Ayuso | Web Architect
Pingback: Scaling Silex applications « Gonzalo Ayuso | Web Architect | http://gonzalo123.com/2013/01/07/handling-several-pdo-database-connections-in-symfony2-through-the-dependency-injection-container-with-php/ | CC-MAIN-2014-41 | refinedweb | 1,549 | 58.58 |
Get the highlights in your inbox every week.
Try Deno as an alternative to Node.js | Opensource.com
Try Deno as an alternative to Node.js
Deno is a secure runtime for JavaScript and TypeScript.
opensource.com
Subscribe now
Deno is a simple, modern, and secure runtime for JavaScript and TypeScript. It uses the JavaScript and WebAssembly engine V8 and is built in Rust. The project, open source under an MIT License, was created by Ryan Dahl, the developer who created Node.js.Deno's GitHub repository outlines its goals:
- Only ship a single executable (
deno)
- Provide secure defaults
- Unless specifically allowed, scripts can't access files, the environment, or the network.
- Browser compatible: The subset of Deno programs which are written completely in JavaScript and do not use the global
Denonamespace (or feature test for it), ought to also be able to be run in a modern web browser without change.
- Provide built-in tooling like unit testing, code formatting, and linting to improve developer experience.
- Does not leak V8 concepts into user land.
- Be able to serve HTTP efficiently
The repo also describes how Deno is different from NodeJS:
-.
- Uses "ES Modules" and does not support
require(). Third party modules are imported via URLs:
import * as log from "";
Install Deno
denoland.jpg
Deno's website has installation instructions for various operating systems, and its complete source code is available in its GitHub repo. I run macOS, so I can install Deno with HomeBrew:
$ brew install deno
On Linux, you can download, read, and then run the install script from Deno's server:
$ curl -fsSL
$ sh ./install.sh
Run Deno
After installing Deno, the easiest way to run it is:
$ deno run
If you explore the welcome example, you should see a single line that prints "Welcome to Deno" with a dinosaur icon. Here is slightly a more complicated version that also can be found on the website:
import { serve } from "";
const s = serve({ port: 8000 });
console.log("");
for await (const req of s) {
req.respond({ body: "Hello World\n" });
}
Save the file with a
.tx extension. Run it with:
$ deno run --allow-net <name-of-your-first-deno-file.ts>
The
--allow-net flag might not be necessary, but you can use it if you see an error like
error: Uncaught PermissionDenied: network access to "0.0.0.0:8000.
Now, open a browser and visit
localhost:8080. It should print "Hello, World!"
That's it! You can learn more about Deno in this video I recorded.
What do you think about Deno? Please share your feedback in the comments. | https://opensource.com/article/21/2/deno | CC-MAIN-2021-25 | refinedweb | 432 | 64.81 |
Scrappy is a scrapbook environment like we used to have in Smalltalk and VisualAge/Java environments.
Why unit test it when you can scrap it? ;)
Enjoy!
-rg
Scrappy is a scrapbook environment like we used to have in Smalltalk and VisualAge/Java environments.
U?ytkownik "Rhett Guthrie" <rhett@bytecrafters.com> napisa? w wiadomo?ci
news:2895689.1046495041101.JavaMail.javamailuser@localhost...
VisualAge/Java environments.
>
>
>
Thank you Rhett. I just love this little plugin. I missed this feature
compared to VAJ.
Then I installed BeanShell plugin but this was too much. I needed just
IdeaScrappy.
Even the name is kinda yummy....
I can't wait for improvements. Do you think it's possible to highlight Java
code the same way as in
code window? or use other Idea features like completion?
Where did you download this plugin from?
I cannot find any download links on the plugin page
mentioned below.
Marius
Michal Szklanowski wrote:
>>Scrappy is a scrapbook environment like we used to have in Smalltalk and
>>
>>
>>Why unit test it when you can scrap it? ;)
>>
Very sweet. I've never used an IDE with a scrapbook before, but it seems very handy indeed. I'm sure you've already considered lots of possible enhancements,most of which probably boil down to "make Scrappy frames work like IDEA editor frames". One thing I would like to see, though, is support for something like the Java import statement. It's pretty clunky to write
com.mycompany.myClass foo = new com.mycompany.myClass();
when what you would really want is
import com.mycompany.*;
myClass foo = new myClass();
A very cool plugin.
--Dave
Eugene
"Marius Scurtescu" <mscurtescu@healthmetrx.com> wrote in message
news:3E67B3D8.9070504@healthmetrx.com...
>
>
>
>
Java
>
It probably reflects more on my state of mind than your choice of plugin name, but when I first glanced at this plugin's name I misread the capitalization...and I know Idea is definitely not crappy!
Very handy plugin, BTW.
Rhett Guthrie wrote:
I love it - it's one of the (few) features of VisualAge that I actually
liked ;)
But - it doesn't work under Linux: for some reason I don't get a popup menu.
Under Windows it works fine...
CU,
Edwin
Rhett Guthrie wrote:
Which classpath does it have? Can i access classes i have written
in the current project?
Stefan
Very nice! I've been waiting some time for a really cool plugin to show up and here we go!
A question: If I do
java.awt.Frame f = new java.awt.Frame();
f.setSize(200,200);
f.show();
it gives me a new little window, but then I don't have any handle to it. I guess the Frame isn't GCd, because some way the jre has a handle to it. Is it possible to get a new handle to the Frame or could Scrappy have a feature where it saves objects for later use?
Bye,
Dag.
Hello,
I tried it on Solaris, and it didn't work either.
It is a very nice feature, and I would like to try it, if only it can work on multi-platforms :(
Best Regards,
MH
Hey, glad you like Scrappy. I would love to tie into Idea's code-completion and syntax highlighting features for the Idea plugin version of Scrappy (IdeaScrappy). However, I have not seen an Idea API for doing this. Does anyone on this list know how to do this? I tinkered with the Idea Editor and Document classes but did not get very far in the limited time I had, that's why I just used a basic JTextArea.
There has to be an open source Java syntax highlighting Swing component out there. Anyone know of one?
-rg
Yes, you are right. I need to do something about the import awkwardness. I was thinking that at the top of the Scrappy pane there would be a label and a text area:
_______________________
imports: |_______________________|
where you type in a list like:
com.mycompany;java.io;java.util
and that these imports would apply to all tabs. It is more flexible for each tab to have it's own imports, but I wonder how often people would need that flexibility. I bet it is more common that people would want one set of imports for the whole project.
I have also thought about putting check boxes for the common packages, java.io, java.util, javax.ejb, javax.swing, but am not sure how well that would work.
Thoughts?
-rg
Sorry about that. I am using plain vanilla swing components, so I do not know why it doesn't work for you or for the gentleman using Solaris.
Consider using the hotkeys instead. After a while you would probably do it this way anyhow. CtrlD will DoIt, CtrlI will InspectIt, Ctrl+P will PrintIt.
Hope that works.
-rg
Yes, just make sure to fully-qualify your classes. Also, you have to manually make your projects before you can access the latest version of the classes. I would like to hook into the IDE to get it to make the project automatically. Maybe some day.
-rg
I do not understand what you mean by 'handle'. Please clarify your question.
You can, of course, execute that code with an InspectIt and then you can inspect the frame. Just make sure your code ends in an expression that evaluates to your frame. For example:
java.awt.Frame f = new java.awt.Frame();
f.setSize(200,200);
f.show();
f
Adding 'f' as the last line will allow you to print it or inspect it.
Regarding saving objects. That is a neat feature. I had not thought about that. Question though, how would you use this feature?
-rg
After I've executed my piece of code, I got a new window, but there is no way IFAIK to get rid of it! Why? Because I don't have any reference ("handle") to it anymore! 'f' is not known anymore.
Maybe there's a proprietary sun.awt.xxxx-method to get a list of active frames...? But a list of saved objects would also help!
Bye,
Dag.
Dag Welinder wrote:
There's even a non-proprietary one: Frame.getFrames().
Sounds like overkill, compared to just supporting import statements. It looks like you're munging the scrapped text into some anonymous class which you then compile using Pizza, load with a custom loader and execute. Stripping out import statements and putting them at the top of your munged class sounds cheap and easy. Your suggestion is slightly more powerful, but loses for intuitiveness compared to import statements. "Standard is better than better", after all.
(Running IDEA #696 on linux)
I tried using the keyboard-shortcuts, (because I don't get a popup menu - no
biggie), but I still have problems:
This is the result of Ctrl-D :
System.out.println("hi!");
/*
COMPILE FAILED:
scrap line 0: cannot access class IScrap; file
bytecrafters/scrappy/IScrap.class not found
scrap line 0: cannot access class ScrapResult; file
bytecrafters/scrappy/ScrapResult.class not found
2 errors
*/
Is there any way I can squeeze more (debug)info out of the plugin?
CU,
Edwin
I think the version of IDEA is the problem here. One user was on a beta of 3.0 and was getting this very problem. Once he upgraded to a production version it went away.
Scrappy writes some info to the IDEA system log so you might check there. But this is definitely a classpath problem - I really think your version of IDEA does not handle classpaths the same way the version I built Scrappy on does.
-rg
Also, I do not know where System.out.println() goes. Instead of running DoIt on a System.out.println() run a PrintIt on the expression you want to print. E.g., type "hi!", select it, and Ctrl+P.
I know, that is not very obvious. I may try to do some trickery to get redirect System.Out to a popup window if someone writes to it...
-rg
Rhett Guthrie wrote:
Build #696 is the current production version of the IDEA 3.0.x series, so I
don't think that's the problem. Could there be some subtle
platform-dependency problem in the way you build up the classpath?
Well, there's lots there, but nothing that I can relate to Scrappy.
ah, ok, so my sample code was ill-chosen, anyway. 8-)
fwiw, I get the same behaviour with this:
java.util.Date date = new java.util.Date();
/*
COMPILE FAILED:
scrap line 0: cannot access class IScrap; file
bytecrafters/scrappy/IScrap.class not found
scrap line 0: cannot access class ScrapResult; file
bytecrafters/scrappy/ScrapResult.class not found
2 errors
*/
Why not just dump it in the panel itself, like you do with the errors? I
think this is the way VAJ used to work - seemed quite sensible.
If there's anything else I can do to help debug, please let me know - this
is a very useful plugin...
CU,
Edwin
Rhett,
I thought I saw somewhere idea to assume everything being selected if
nothing is selected for
CtrlD, CtrlI or CtrlP. Since I don't see it anywhere now, I'd like to put
some emphasis on this
feature that IMHO could speed up the usage and improve user's experience.
r.
"Rhett Guthrie" <rhett@bytecrafters.com> wrote in message
news:2895689.1046495041101.JavaMail.javamailuser@localhost...
VisualAge/Java environments.
>
>
>
>
> | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206142609--ANN-IdeaScrappy-1-0 | CC-MAIN-2020-05 | refinedweb | 1,571 | 76.32 |
web2py provides a Database Abstraction Layer (DAL) that writes the SQL for you, so the same application code runs on many back-ends. At the time of this writing, the supported databases are SQLite (which comes with Python and thus web2py), PostgreSQL, MySQL, Oracle, MSSQL, FireBird, DB2, Informix, and Ingres, and (partially) the Google App Engine (SQL and NoSQL). Experimentally we support more databases. Please check on the web2py web site and mailing list for more recent adapters. Google NoSQL is treated as a particular case in Chapter 13.
(the MySQL driver, pymysql, ships with web2py)
web2py defines the following classes that make up the DAL:
- DAL represents a database connection.
- Table represents a database table. You do not directly instantiate Table; instead, DAL.define_table instantiates it. The most important methods of a Table are .insert, .truncate, .drop, and .import_from_csv_file.
- Field represents a database field. It can be instantiated and passed as an argument to DAL.define_table.
- Rows is the object returned by a database select. It can be thought of as a list of Row objects.
- Row contains field values.
- Query is an object that represents a SQL "where" clause.
- Set is an object that represents a set of records. Its most important methods are count, update, and delete.
- Expression is something like an orderby or groupby expression. The Field class is derived from Expression.
Connection strings
A connection with the database is established by creating an instance of the DAL object.
It is also possible to set the connection string to None. In this case DAL will not connect to any back-end database, but the API can still be accessed for testing. Examples of this will be discussed in Chapter 7.
Connection pooling
The second argument of the DAL constructor is the pool_size; it defaults to 0. When pooling is enabled, a connection is not closed once a page has been served and the transaction completed; instead it goes back into a pool, and when the next HTTP request arrives web2py tries to obtain a connection from the pool and use that for the new transaction. If there are no available connections in the pool, a new connection is established.
The pool_size parameter is ignored by SQLite and Google App Engine.
Connections in the pools are shared sequentially among threads, in the sense that they may be used by two different but not simultaneous threads. There is only one pool for each web2py process.
Connection pooling is ignored for SQLite, since it would not yield any benefit.
Connection failures
If web2py fails to connect to the database, it waits 1 second and then tries again, up to 5 times, before declaring a failure.
When using connection pooling, a connection is used, put back in the pool and then recycled. It is possible that while the connection is idle in the pool it is closed by the database server. This can be because of a malfunction or a timeout. When this happens web2py detects it and re-establishes the connection.
There is also another argument that can be passed to the DAL constructor to check table names and column names against reserved SQL keywords in target back-end databases.
This argument is check_reserved. It defaults to None; when set, it should be a list of strings containing the database back-end adapter names (the same names used in the connection string), for example check_reserved=['postgres', 'mysql'].
The following database backends support reserved words checking.
DAL, Table, Field
The best way to understand the DAL API is to try each function yourself. This can be done interactively via the web2py shell, although ultimately, DAL code goes in the models and controllers.
Start by creating a connection. For the sake of example, you can use SQLite. Nothing in this discussion changes when you change the back-end engine.
The database is now connected and the connection is stored in the global variable db.
At any time you can retrieve the connection string and the database name.
The most important method of a DAL is define_table.
It defines, stores and returns a Table object called "person" containing a field (column) "name". This object can also be accessed via db.person, so you do not need to catch the return value. With some limitations, you can also use different primary keys; this is discussed in the section on "Legacy databases and keyed tables".
Record representation
It is optional but recommended to specify a format representation for records; the format can be a simple string interpolation pattern or even a function of the record.
These are the default values of a Field constructor:
Not all of them are relevant for every field; some (such as notnull and unique) are enforced at the level of the database, others by the system at the level of forms.
- widget must be one of the available widget objects, including custom widgets, for example SQLFORM.widgets.string.widget. A list of available widgets will be discussed later. Each field type has a default widget.
- label is a string (or something that can be serialized to a string) that contains the label to be used for this field in autogenerated forms.
- comment is a string (or something that can be serialized to a string) that contains a comment associated with this field, and will be displayed to the right of the input field in the autogenerated forms.
- writable: if a field is writable, it can be edited in autogenerated create and update forms.
- readable: if a field is readable, it will be visible in read-only forms.
You can also list the fields that have been defined for a given table:
You can query for the type of a table:
and you can access a table from the DAL connection using:
Similarly you can access fields from their name in multiple equivalent ways:
Given a field, you can access the attributes set in its definition:
including its parent table, tablename, and parent connection:
A field also has methods. Some of them are used to build queries and we will see them later. define_table also takes an optional last argument called "migrate", which must be referred to explicitly by name, as in migrate='person.table'.
The value of migrate is the filename (in the "databases" folder for the application) where web2py stores internal migration information for this table. These files are very important and should never be removed except when the entire database is dropped. In this case, the ".table" files have to be removed manually.
Passing migrate=False to the DAL constructor will set the default value of migrate to False whenever db.define_table is called without a migrate argument.
Migrations can be disabled for all tables at the moment of connection:
db = DAL(...,migrate_enabled=False)
This is the recommended behaviour when two apps share the same database: only one of the two apps should perform migrations, while the other should disable them.
If the metadata for a table gets corrupted, the table can be redefined with fake_migrate=True. This will rebuild web2py metadata about the table according to the table definition. Try multiple table definitions to see which one works (the one before the failed migration and the one after the failed migration). Once successful, remove the fake_migrate=True attribute.
Before attempting to fix migration problems it is prudent to make a copy of "applications/yourapp/databases/*.table" files.
Migration problems can also be fixed for all tables at once by connecting with db = DAL(..., fake_migrate_all=True).
Although if this fails, it will not help in narrowing down the problem.
insert
Given a table, you can insert records:
Insert returns the unique "id" value of each record inserted.
You can truncate the table, i.e., delete all records and reset the counter of the id.
Now, if you insert a record again, the counter starts again at 1 (this is back-end specific and does not apply to Google NoSQL):
web2py also provides a bulk_insert method; it takes a list of dictionaries of fields to be inserted and performs multiple inserts at once.
commit and rollback
No create, drop, insert, truncate, delete, or update operation is actually committed until you issue the commit command.
and roll back, i.e., ignore all operations since the last commit:
If you now insert again, the counter will again be set to 2, since the previous insert was rolled back.
Code in models, views and controllers is enclosed in web2py code that looks like this:
There is no need to ever call
commit or
rollback explicitly in web2py unless one needs more granular control..
In this case, the return values are not parsed or transformed by the DAL, and the format depends on the specific database driver. This usage with selects is normally not needed, but it is more common with indexes.
executesql takes two optional arguments:
placeholders and
as_dict
placeholders is an optional sequence of values to be substituted in or, if supported by the DB driver, a dictionary with keys matching named placeholders in your SQL.
If
as_dict is set to True, and the results cursor returned by the DB driver will be converted to a sequence of dictionaries keyed with the db field names. Results returned with
as_dict = True are the same as those returned when applying .as_list() to a normal select.
_lastsql
Whether SQL was executed manually using executesql or was SQL generated by the DAL, you can always find the SQL code in
db._lastsql. This is useful for debugging purposes:
web2py never generates queries using the "*" operator. web2py is always explicit when selecting fields.
drop
Finally, you can drop tables and all data will be lost::
primarykeyis a list of the field names that make up the primary key.
- All primarykey fields have a
NOT NULLset even if not specified.
- Keyed table can only refer are to other keyed tables.
- Referenceing fields must use the
reference tablename.fieldnameformat.
- The
update_recordfunction is not available for Rows of keyed tables.
Note that currently this is only available for DB2, MS-SQL, Ingres and Informix, but others can be easily:
In your models or controllers, you can commit them concurrently.
Manual uploads
Consider the following model:
Normally an insert is handled automatically via a SQLFORM or a crud form (which is a SQLFORM) but occasionally you already have the file on the filesystem and want to upload it programmatically. This can be done in this way::
You can store the table in a variable. For example, with variable
person, you could do:
You can also store a field in a variable such as
name. For example, you could also do:
You can even build a query (using operators like ==, !=, <, >, <=, >=, like, belongs) and store the query in a variable
q such as in:
When you call
db with a query, you define a set of records. You can store it in a variable
s and write:
It returns an iterable object of class
pydal.objects.Rows whose elements are Row objects.
pydal.objects.Row objects act like dictionaries, but their elements can also be accessed as attributes, like
gluon.storage.Storage.The former differ from the latter because its values are readonly.
The Rows object allows looping over the result of the select and printing the selected field values for each row:
You can do all the steps in one statement:
The select command can take arguments. All unnamed arguments are interpreted as the names of the fields that you want to fetch. For example, you can be explicit on fetching field "id" and field "name":
The table attribute ALL allows you to specify all fields:
Notice that there is no query string passed to db. web2py understands that if you want all fields of the table person without additional information then you want all records of the table person.
An equivalent alternative syntax is the following: not rarely needed.
Shortcuts
The DAL supports various code-simplifying shortcuts. In particular:
returns the record with the given
id if it exists. If the
id does not exist, it returns
None. The above statement is equivalent to
You can delete records by id:
and this is equivalent to
and deletes the record with the given
id, if it exists.
You can insert records:
It is equivalent to
and it creates a new record with field values specified by the dictionary on the right hand side.
You can update records:
which is equivalent to
and it updates an existing record with field values specified by the dictionary on the right hand side.
Fetching a
Row
Yet another convenient syntax is the following: "dog" referencing a "person":
and a simple select from this table:
which is equivalent to
where
._id is a reference to the primary key of the table. Normally
db.dog._id is the same as
db.dog.id and we will assume that in most of this book.
For each Row of dogs it is possible to fetch not just fields from the selected table (dog) but also from linked tables (recursively):
Here
dog.owner.name requires one database select for each dog in dogs and it is therefore inefficient. We suggest using joins whenever possible instead of recursive selects, nevertheless this is convenient and practical when accessing individual records.
You can also do it backwards, by selecting the dogs referenced by a person:
In this last expressions
person.dog is a shortcut for
i.e. the Set of
dogs referenced by the current
person. This syntax breaks down if the referencing table has multiple references to the referenced table. In this case one needs to be more explicit and use a full Query.
Serializing
Rows in views
Given the following action containing a query
The result of a select can be displayed in a view with the following syntax:
Which is equivalent to:. (Note: Using a db in this way in a view is usually not considered good MVC practice.)
Yet it is possible and sometimes convenient to call SQLTABLE explicitly.
The SQLTABLE constructor takes the following optional arguments:
linktothe URL or an action to be used to link reference fields (default to None):
SQLTABLEis useful but there are types when one needs more.
SQLFORM.gridis an extension of SQLTABLE that creates a table with search features and pagination, as well as ability to open detailed records, create, edit and delete records.
SQLFORM.smartgridis a further generalizaiton that allows all of the above but also creates buttons to access referencing records.
Here is an example of usage of
SQLFORM.grid:
and the corresponding view:
{{extend 'layout.html'}} {{=grid}}
SQLFORM.grid and
SQLFORM.smartgrid should be preferrable to
SQLTABLE because they are more powerful although higher level and therefore more constraining. They will be explained in more detail in chapter 8.
orderby,
groupby,
limitby,
distinct
The
select command takes five optional arguments: orderby, groupby, limitby, left and cache. Here we discuss the first three.
You can fetch the records sorted by name:
You can fetch the records sorted by name in reverse order (notice the tilde):
You can have the fetched records appear in random order:
The use of
orderby='<random>'is not supported on Google NoSQL. However, in this situation and likewise in many others where built-ins are insufficient, imports can be used:
And you can sort the records according to multiple fields by concatenating them with a "|":
Using groupby together with orderby, you can group records with the same value for the specified field (this is back-end specific, and is not available on Google NoSQL):
Notice that distinct can also be an expression, for example:
With limitby, you can select a subset of the records (in this case, the first two starting at zero):
Logical operators
Queries can be combined using the binary AND operator "
&":
and the binary OR operator "
|":
You can negate a query (or sub-query) with the "
!=" binary operator:
or by explicit negation with the "
~" unary operator:
Due to Python restrictions on overloading the "and" and "or" operators, these cannot be used in forming queries; the binary operators must be used instead.
It is also possible to build queries using in-place logical operators:
>>> query = db.person.name!='Alex'
>>> query &= db.person.id>3
>>> query |= db.person.name=='John'
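Under the hood this works because &, | and ~ are overloadable in Python (via __and__, __or__ and __invert__), while the keywords and/or are not. Here is a minimal stdlib-only sketch of the idea - illustrative only, not web2py's actual implementation:

```python
class Query:
    """Toy query node: either a leaf condition or a boolean combination."""

    def __init__(self, op, left, right=None):
        self.op, self.left, self.right = op, left, right

    # '&' and '|' build new nodes; 'and'/'or' cannot be overloaded this way
    def __and__(self, other):
        return Query('AND', self, other)

    def __or__(self, other):
        return Query('OR', self, other)

    def __invert__(self):
        return Query('NOT', self)

    def sql(self):
        if self.op == 'NOT':
            return '(NOT %s)' % self.left.sql()
        if self.op in ('AND', 'OR'):
            return '(%s %s %s)' % (self.left.sql(), self.op, self.right.sql())
        return '(%s %s %s)' % self.left  # leaf: a (field, op, value) triple


def condition(field, op, value):
    """Build a leaf Query, e.g. condition('person.id', '>', 3)."""
    return Query(op, (field, op, repr(value)))


query = condition('person.name', '!=', 'Alex')
query &= condition('person.id', '>', 3)      # query = query & ...
query |= condition('person.name', '=', 'John')
```

Because each operator returns a new Query node, the in-place forms &= and |= simply rebuild the tree, which is exactly why they work in the DAL as well.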
count, isempty, delete, update
You can count records in a set:
Notice that count takes an optional distinct argument which defaults to False, and it works very much like the same argument for select.
Sometimes you may need to check if a table is empty. A more efficient way than counting is using the
isempty method:
or equivalently:
You can delete records in a set:
And you can update all records in a set by passing named arguments corresponding to the fields that need to be updated:
Expressions
The value assigned in an update statement can be an expression. For example, consider this model
The values used in queries can also be expressions
update_record
web2py also allows updating a single record that is already in memory using
update_record
update_record should not be confused with update, because for a single row the method update updates the row object but not the database record, whereas update_record also updates the database record.
It is also possible to change the attributes of a row (one at a time) and then call
update_record() without arguments to save the changes:
first and last
Given a Rows object containing records:
are equivalent to
as_dict and as_list
A Row object can be serialized into a regular dictionary using the as_dict() method, and a Rows object can be serialized into a list of dictionaries using the as_list() method. Here are some examples:
These methods are convenient for passing Rows to generic views or for storing Rows in sessions (Rows objects themselves cannot be serialized, since they contain a reference to an open DB connection):
find, exclude, sort
There are times when one needs to filter or reorder rows that have already been fetched, without further database access: find returns a new Rows filtered by a condition and leaves the original unchanged; exclude removes the matching rows from the original Rows and returns them; sort returns the rows sorted by a function.
They can be combined:
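The semantics can be sketched with plain lists of dicts (a toy stand-in, not web2py's implementation):

```python
class Rows(list):
    """Toy stand-in for a DAL Rows object (dicts instead of Row records)."""

    def find(self, cond):
        # new Rows filtered by cond; the original is left untouched
        return Rows(r for r in self if cond(r))

    def exclude(self, cond):
        # removes the matching rows in place and returns them
        removed = Rows(r for r in self if cond(r))
        self[:] = [r for r in self if not cond(r)]
        return removed

    def sort(self, key, reverse=False):
        # returns the rows sorted by a key function
        return Rows(sorted(self, key=key, reverse=reverse))


rows = Rows([{'name': 'Alex'}, {'name': 'Bob'}, {'name': 'Carl'}])
short = rows.find(lambda r: r['name'].startswith('A'))
dropped = rows.exclude(lambda r: r['name'] == 'Bob')
ordered = rows.sort(key=lambda r: r['name'], reverse=True)
```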
Other methods
update_or_insert
Sometimes you need to perform an insert only if there is no record with the same values as those being inserted. This can be done with update_or_insert.
The record will be inserted only if there is no other user called John born in Chicago.
You can specify which values to use as a key to determine if the record exists. For example:
and if there is a John, his birthplace will be updated; otherwise a new record will be created.
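The lookup-then-update-or-insert logic can be sketched in plain Python (the table here is just a list of dicts; names are illustrative, not web2py's internals):

```python
def update_or_insert(table, _key=None, **values):
    """Toy version: table is a list of dicts; _key maps field -> value and
    selects the record to update (it defaults to all of values, mirroring
    the DAL behaviour when no key is given)."""
    key = _key if _key is not None else values
    for record in table:
        if all(record.get(f) == v for f, v in key.items()):
            record.update(values)       # found: update in place
            return record
    table.append(dict(values))          # not found: insert a new record
    return table[-1]


people = []
update_or_insert(people, name='John', birthplace='Chicago')
# keyed on name only: John's birthplace is updated, no duplicate is inserted
update_or_insert(people, _key={'name': 'John'}, name='John', birthplace='Peoria')
```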
validate_and_insert, validate_and_update
The function validate_and_insert works very much like insert, except that it validates the fields before performing the insert; similarly, validate_and_update works very much the same as update, except that it validates the fields before performing the update.
Virtual fields
Old style.
In order to define one or more virtual fields, you have to define a container class, instantiate it and link it to a table or to a select. For example, consider the following table:
One can define a
total_price virtual field as
Notice the recursive field access self.order_item.item.unit_price, where self is the looping record.
They can also act on the result of a JOIN:
Virtual fields can be lazy; all they need to do is return a function, and the value is accessed by calling that function:
or shorter using a lambda function:
New style virtual fields (experimental):
One can define a total_price virtual field, as well as lazy virtual fields which are calculated on-demand, when called.

One to many relation

Consider a table "dog" that refers to the table "person", which we redefine here:
Table "dog" has two fields, the name of the dog and the owner of the dog. When a field type is another table, it is intended that the field reference the other table by its id. In fact, you can print the actual type value and get:
Now, insert three dogs, two owned by Alex and one by Bob:
You can select as you did for any other table:
Because a dog has a reference to a person, a person can have many dogs, so a record of table person now acquires a new attribute dog, which is a Set, that defines the dogs of that person. This allows looping over all persons and fetching their dogs easily:
Inner joins
Another way to achieve a similar result is by using a join, specifically an INNER JOIN. web2py performs joins automatically and transparently when the query links two or more tables, as in the following example:
While previously, when selecting from a single table, you could refer to a field simply by name and it was obvious whether this was the name of a person or a dog, in the result of a join you have to be more explicit and say:
or:
There is an alternative syntax for INNER JOINs:
While the output is the same, the generated SQL in the two cases can be different. The latter syntax removes possible ambiguities when the same table is joined twice and aliased:
>>> db.define_table('dog',
        Field('name'),
        Field('owner1', db.person),
        Field('owner2', db.person))
>>> rows = db(db.person).select(
        join=[db.person.with_alias('owner1').on(db.person.id==db.dog.owner1),
              db.person.with_alias('owner2').on(db.person.id==db.dog.owner2)])
The value of join can be a single db.table.on(...) expression or a list of them.
Left outer join
Notice that Carl did not appear in the list above because he has no dogs. If you intend to select on persons (whether they have dogs or not) and their dogs (if they have any), then you need to perform a LEFT OUTER JOIN. This is done using the argument "left" of the select command. Here is an example:
where:
does the left join query. Here the argument of db.dog.on(...) is the condition required for the join (the same one used above for the inner join).

Grouping and counting

When doing joins, sometimes you want to group rows according to certain criteria and count them: for example, to count the number of dogs owned by every person. web2py allows this as well. First, you need a count operator. Second, you want to join the person table with the dog table by owner. Third, you want to select all rows (person + dog), group them by person, and count them while grouping:
Many to many
In the previous examples, we allowed a dog to have one owner but one person could have many dogs. What if Skipper was owned by Alex and Curt? This requires a many-to-many relation, and it is realized via an intermediate table that links a person to a dog via an ownership relation.
Here is how to do it:
The existing ownership relationship can now be rewritten as:
Now you can add the new relation that Curt co-owns Skipper:
Because you now have a three-way relation between tables, it may be convenient to define a new set on which to perform operations:
Now it is easy to select all persons and their dogs from the new Set:
Similarly, you can search for all dogs owned by Alex:
and all owners of Skipper:
A lighter alternative to many-to-many relations is tagging. Tagging is discussed in the context of the
IS_IN_DB validator. Tagging works even on database backends that do not support JOINs like the Google App Engine NoSQL.
Many to many, list:<type>, and contains
web2py provides the following special field types:
They can contain lists of strings, of integers and of references respectively.
On Google App Engine NoSQL, list:string is mapped into a StringListProperty; the other two are mapped into
ListProperty(int). On relational databases they are mapped into text fields which contain the list of items separated by a |.
Notice that a
list:reference tag field gets a default constraint.
As before, insert a few events, a "port scan", an "xss injection" and an "unauthorized login". For the sake of the example, you can log events with the same event_time but with different severities (1, 2, 3 respectively).
like, startswith, contains, upper, lower
Fields have a like operator that you can use to match strings:
Here "port%" indicates a string starting with "port". The percent sign character, "%", is a wild-card character that means "any sequence of characters".
web2py also provides some shortcuts:
which are equivalent, respectively, to:
The upper and lower methods allow you to convert the value of the field to upper or lower case, and you can also combine them with the like operator:
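The LIKE semantics can be emulated in plain Python by translating the SQL wildcards into a regular expression (a stdlib-only sketch, not web2py's implementation):

```python
import re

def like(value, pattern):
    """Emulate SQL LIKE: '%' matches any sequence, '_' any single character."""
    parts = []
    for ch in pattern:
        if ch == '%':
            parts.append('.*')
        elif ch == '_':
            parts.append('.')
        else:
            parts.append(re.escape(ch))
    return re.fullmatch(''.join(parts), value) is not None

def startswith(value, prefix):
    # field.startswith(x) behaves like like(field, x + '%')
    return like(value, prefix + '%')

def contains(value, text):
    # field.contains(x) behaves like like(field, '%' + x + '%')
    return like(value, '%' + text + '%')
```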
year, month, day, hour, minutes, seconds
The date and datetime fields have day, month and year methods. The datetime and time fields have hour, minutes and seconds methods. Here is an example:
belongs
The SQL IN operator is realized via the belongs method which returns true when the field value belongs to the specified set (list of tuples):
The DAL also allows a nested select as the argument of the belongs operator. The only caveat is that the nested select has to be a
_select, not a
select, and only one field has to be selected explicitly, the one that defines the set.
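The semantics of belongs are those of Python's in operator applied per row (a toy sketch, not web2py's implementation):

```python
def belongs(value, candidates):
    """Toy version of the SQL IN operator realized by belongs."""
    return value in set(candidates)

rows = [{'name': 'Alex'}, {'name': 'Bob'}, {'name': 'Carl'}]
selected = [r for r in rows if belongs(r['name'], ('Alex', 'John'))]
```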
sum, min, max and len
You can also use min and max to retrieve the minimum and maximum values of the selected records:
The .len() method computes the length of string, text or boolean fields.
Expressions can be combined to form more complex expressions. For example, here we are computing the sum of the length of all the severity strings in the logs, increased by one:
Substrings
One can build an expression to refer to a substring. For example, we can group dogs whose name starts with the same three characters and select only one from each group:
Here is _count:

Here is _select:

Here is _delete:

And finally, here is _update:
Moreover, you can always use db._lastsql to return the most recent SQL code, whether it was executed manually using executesql or was SQL generated by the DAL.
Exporting and importing data
CSV (one Table at a time)
When a DALRows object is converted to a string it is automatically serialized in CSV:
You can serialize a single table in CSV and store it in a file "test.csv":
and you can easily read it back:
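These export/import calls read and write ordinary CSV; here is a stdlib-only sketch of the same round-trip without web2py (note that, as with any CSV, values come back as strings and must be re-parsed using the column types):

```python
import csv
import io

records = [
    {'id': 1, 'name': 'Alex'},
    {'id': 2, 'name': 'Bob'},
]

# export: serialize the "table" to CSV text, header row first
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=['id', 'name'])
writer.writeheader()
writer.writerows(records)
csv_text = out.getvalue()

# import: read the rows back; every value comes back as a string
restored = [dict(row) for row in csv.DictReader(io.StringIO(csv_text))]
```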
To import:
Two tables are separated by \r\n\r\n. The file ends with the line "END".
1. Change the above model into:
Note, in the above table definitions, the default value for the two 'uuid' fields is set to a lambda function, which returns a UUID (converted to a string). The lambda function is called once for each record inserted, ensuring that each record gets a unique UUID, even if multiple records are inserted in a single transaction.
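The key point - a callable default is re-evaluated once per insert - can be sketched in plain Python (insert here is a toy stand-in for the DAL method):

```python
import uuid

# mirrors: Field('uuid', length=64, default=lambda: str(uuid.uuid4()))
default = lambda: str(uuid.uuid4())

def insert(table, **fields):
    """Toy insert: a callable default is re-evaluated on every call,
    so each inserted record gets its own fresh UUID."""
    fields.setdefault('uuid', default())
    table.append(fields)
    return fields

people = []
insert(people, name='Alex')
insert(people, name='Bob')
```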
2. Create a controller action to export the database:
3. Create a controller action to import a saved copy of the other database and sync records:
Notice that
session=None disables the CSRF protection since this URL is intended to be accessed from outside.
4. Create an index manually to make the search by uuid faster.
Notice that steps 2 and 3 work for every database model; they are not specific to this example.
DALRows objects also have an xml method (like helpers) that serializes them to XML/HTML:
If you need to serialize the DALRows in any other XML format with custom tags, you can easily do that using the universal TAG helper and the * notation:
This would render similar output, but with the custom tags.
The results of a select are complex, unpickleable objects; they cannot be stored in a session and cannot be cached in any other way than the one explained here.
Self-Reference and aliases
It is possible to define tables with fields that refer to themselves although the usual notation may fail. The following code would be wrong because it uses a variable
db.person before it is defined:
The solution consists of using an alternate notation
In fact
db.tablename and
"reference tablename" are equivalent field types.
If the table refers to itself, then it is not possible to perform a JOIN to select a person and its parents without use of the SQL "AS" keyword. This is achieved in web2py using the
with_alias. Here is an example:
It is also possible to define a dummy table that is not stored in a database in order to reuse it in multiple other places. For example:
This example assumes that standard web2py authentication is enabled.
Common fields and multi-tenancy
db._common_fields is a list of fields that should belong to all the tables. This list can also contain tables, and it is understood as all fields from the table. For example:
and for every record insert, web2py sets this field automatically (this is the basis of multi-tenancy).
It is possible to define new/custom field types. For example, we consider here the example of a field that contains binary data in compressed form:
SQLCustomType is a field type factory. Its
type argument must be one of the standard web2py types. It tells web2py how to treat the field values at the web2py level.
native is the name of the field as far as the database is concerned. Allowed names depend on the database engine.
encoder is an optional transformation function applied when the data is stored and
decoder is the optional reversed transformation function.
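For instance, the encoder/decoder pair for a compressed-text field could be built on zlib (a stdlib-only sketch; the function names are illustrative, not part of web2py):

```python
import zlib

def encoder(value):
    # applied when storing: text -> compressed bytes for the blob column
    return zlib.compress(value.encode('utf-8'))

def decoder(blob):
    # applied when reading: compressed bytes -> original text
    return zlib.decompress(blob).decode('utf-8')

stored = encoder('some long text ' * 100)
roundtrip = decoder(stored)
```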
This feature is marked as experimental. In practice:
This allows us to access any db.table without needing to re-define it.
Copy data from one db into another
Consider the situation in which you have been using the following database:
db = DAL('sqlite://storage.sqlite')
and you wish to move to another database using a different connection string:
db = DAL('postgresql://username:password@localhost/mydb')

Note on new DAL and adapters

The new DAL defines a hierarchy of adapter classes, one per supported back-end; for example, MongoDBAdapter extends NoSQLAdapter (experimental). These adapters override the behavior of the BaseAdapter.
Each adapter has more or less this structure:
Looking at the various adapters as examples, it should be easy to write new ones.
When a db instance is created:
db = DAL('mysql://...')
the prefix in the uri string defines the adapter. The mapping is defined in the following dictionary also in "gluon/dal.py":
the uri string is then parsed in more detail by the adapter itself.
For any adapter you can replace the driver with a different one:
from gluon.dal import MySQLAdapter
MySQLAdapter.driver = mysqldb
and you can specify optional driver arguments and adapter arguments:
db = DAL(..., driver_args={}, adapter_args={})
Gotchas
SQLite does not support dropping and altering columns. That means that web2py migrations will work up to a point. If you delete a field from a table, the column will remain in the database but be invisible to web2py. If you decide to re-instate the column, web2py will try to re-create it and fail. Google SQL has the same problems as MySQL and some more. In particular, table metadata itself must be stored in the database in a table that is not migrated by web2py, because Google App Engine has a read-only file system. Web2py migrations in Google SQL, combined with the MySQL issue described above, can result in metadata corruption. Again, this can be prevented (by migrating all tables at once and then setting migrate=False so that the metadata table is not accessed any more) or it can be fixed a posteriori (by accessing the database using the Google dashboard and deleting any corrupted entry from the table called web2py_filesystem).
MSSQL does not support the SQL OFFSET keyword. Therefore the database cannot do pagination. When doing a limitby=(a,b), web2py will fetch the first b rows and discard the first a. This may result in considerable overhead when compared with other database engines.
Oracle also does not support pagination. It supports neither the OFFSET nor the LIMIT keyword. web2py achieves pagination by translating a db(...).select(limitby=(a,b)) into a complex three-way nested select (as suggested by official Oracle documentation). This works for simple selects but may break for complex selects involving aliased fields and/or joins.
MSSQL has problems with circular references in tables that have ONDELETE CASCADE. This is an MSSQL bug, and you can work around it by setting the ondelete attribute for all reference fields to "NO ACTION". You can also do it once for all before you define tables:
MSSQL also has problems with arguments passed to the DISTINCT keyword and therefore while this works,
db(query).select(distinct=True)
this does not
db(query).select(distinct=db.mytable.myfield)
Google NoSQL (Datastore) does not allow joins, left joins, aggregates, expressions, OR involving more than one table, or the like operator searching inside text fields. It does, however, support the list:<type> field types and the contains operator; searches for content inside these field types are more efficient on Google NoSQL than on SQL databases.
The parent has to take care of its child. This simple idea has big consequences for a thread's lifetime. The following program starts a thread that displays its ID.
// threadWithoutJoin.cpp

#include <iostream>
#include <thread>
int main(){
std::thread t([]{std::cout << std::this_thread::get_id() << std::endl;});
}
But running the program produces an unexpected result.
What's the reason?
The lifetime of the created thread t ends with its callable unit. The creator has two choices. First: it waits until its child is done (t.join()). Second: it detaches itself from its child: t.detach(). A thread t with a callable unit (you can create threads without callable units) is called joinable if neither t.join() nor t.detach() has been called on it. The destructor of a joinable thread calls std::terminate, so the program aborts. That is the reason the run above terminated unexpectedly.
The solution for this problem is simple. By calling t.join(), the program behaves as it should.
// threadWithJoin.cpp

#include <iostream>
#include <thread>
int main(){
std::thread t([]{std::cout << std::this_thread::get_id() << std::endl;});
t.join();
}
Of course, you can use t.detach() instead of t.join() in the program above. The thread t is not joinable anymore, so its destructor doesn't call std::terminate. That seems bad, because now the program behaviour is undefined: the lifetime of the object std::cout is not ensured. The execution of the program goes a little bit oddly.
I will elaborate more on this issue in the next article.
Until now, it was quite easy. But it doesn't have to stay that way. It is not possible to copy a thread (copy semantics); you can only move it (move semantics). In case a thread is moved, it's a lot more difficult to deal with its lifetime in the right way.
// threadMoved.cpp

#include <iostream>
#include <thread>
#include <utility>
int main(){
std::thread t([]{std::cout << std::this_thread::get_id();});
std::thread t2([]{std::cout << std::this_thread::get_id();});
t= std::move(t2);
t.join();
t2.join();
}
Both threads, t and t2, should do a simple job: print their IDs. In addition to that, thread t2 will be moved into t: t= std::move(t2). At the end, the main thread takes care of its children and joins them. But wait - the result is far from my expectations:
What is going wrong? We have two issues: first, by moving t2 into t, the still-joinable thread t is overwritten, and the move assignment of a joinable std::thread calls std::terminate; second, after the move t2 has no associated thread any more, so t2.join() throws a std::system_error exception. I fixed both errors.
// threadMovedFixed.cpp

#include <iostream>
#include <thread>
#include <utility>
int main(){
std::thread t([]{std::cout << std::this_thread::get_id() << std::endl;});
std::thread t2([]{std::cout << std::this_thread::get_id() << std::endl;});
t.join();
t= std::move(t2);
t.join();
std::cout << "\n";
std::cout << std::boolalpha << "t2.joinable(): " << t2.joinable() << std::endl;
}
As a result, thread t2 is not joinable any more.
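The joinable states around join and move can be checked directly with joinable(); here is a small standalone sketch (not from the original post) that records the state at each step:

```cpp
// joinableStates.cpp - standalone sketch, not from the original post

#include <thread>
#include <utility>
#include <vector>

// Walks through the move scenario from above and records
// the joinable() state of both threads at each step.
std::vector<bool> joinableStates(){

  std::vector<bool> states;
  std::thread t([]{});
  std::thread t2([]{});

  states.push_back(t.joinable());    // true: t owns a thread of execution
  t.join();                          // join t before overwriting it
  states.push_back(t.joinable());    // false after join

  t= std::move(t2);                  // safe now: t is no longer joinable
  states.push_back(t.joinable());    // true: t took over t2's thread
  states.push_back(t2.joinable());   // false: the moved-from t2 is empty

  t.join();                          // join the moved thread exactly once
  states.push_back(t.joinable());    // false
  return states;

}
```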
In case it's too bothersome for you to take care of the lifetime of your threads by hand, you can encapsulate a std::thread in your own wrapper class. This class should automatically call join in its destructor. Of course, you can go the other way around and call detach, but you know there are a few issues with detach.
Anthony Williams created such a valuable class. He called it scoped_thread. In the constructor it checks that the thread is joinable, and it finally joins the thread in the destructor. Because the copy constructor and copy assignment operator are declared as delete, objects of scoped_thread cannot be copied or assigned.
// scoped_thread.cpp

#include <iostream>
#include <thread>
#include <utility>
class scoped_thread{
std::thread t;
public:
explicit scoped_thread(std::thread t_): t(std::move(t_)){
if ( !t.joinable()) throw std::logic_error("No thread");
}
~scoped_thread(){
t.join();
}
scoped_thread(scoped_thread&)= delete;
scoped_thread& operator=(scoped_thread const &)= delete;
};
int main(){
scoped_thread t(std::thread([]{std::cout << std::this_thread::get_id() << std::endl;}));
}
In the next post I deal with passing data to threads. (Proofreader Alexey Elymanov)
I'm having some trouble with making a template for email do a specific if statement. Maybe what I'm doing isn't supported?
{% if ticket.c_location == empty %}
Please fill in your location on the web portal next time and don't send an email! The web portal is the preferred method to send in all tickets.
{% endif %}
You can use == null to determine if a field has no value...or you can use == ''
Let me know if that does it for you!
Rob
This topic was created during version 5.0.
The latest version is 7.5.00107.
3 Replies
I also use this as a Liquid quasi-language reference guide:
Looks like null worked. Thanks!
Guinot
All products are sold with 30-day satisfaction guarantee. During the period you can return the item in original packaging and full refund will be given.
Here are some good reasons why choose us as your regular skin care suppliers:
• All our products are sourced from licensed suppliers and are fresh.
• We are trading since 2006 and during this time we have served over 14,000 satisfied customers.
• All orders are nicely wrapped and accompanied with surprise gifts + product samples when available.
• Very personal customer service. No cooperate robots but people with true care for the customer, just as if you would talk to to a friend.
• Wide range of products so you can get the skin care from your favourite brand in one place, not wasting time and money looking for the lotion in one place and cream somewhere else.
• Free advice on what products to choose for your skin type.
Payment methods
We accept Paypal and credit cards.
All payments must be made within 4 days of purchase.
Please contact us if you prefer to pay by credit card.
Shipping can be combined if buying two or more items. Please use the "request total from seller" option before making the payment. If the order total is over $60, economy shipping is not available. For "Standard" or "Express" shipping, a signature will be required upon delivery.
We dispatch our items in 1-2 days time, the speed of delivery depends on the service you choose at the checkout.
Australia "Standard delivery"
We use Royal Mail International Airsure® - express priority registered airmail. Royal Mail delivery aim is 6- working days after posting. A signature will be required upon the delivery.
Australia "Express delivery"
We use DHL Express or UPS Express Saver - fully online-traceable delivery. Shipping time: 1-3 working days to most locations in Australia.
USA, Canada, Brazil, Hong Kong, Singapore, Malaysia, New Zealand & Iceland
We use DHL or UPS Express courier - fully online traceable delivery. Shipping time: 2-4 working days depending on location.
Rest of the world
Royal Mail International Signed for – registered airmail, delivery times vary from 5-14 working days depending on location and other factors.
International Express Shipping
We use DHL Worldwide Express or UPS Express Saver service. Delivery usually take 1 to 3 working days to most locations worldwide.
We offer a 30-day hassle-free return policy. Returns of any items will be subject to our standard terms and conditions for online purchases.
We will refund return (postage) costs if:
- an incorrect item is received;
- the item has arrived damaged (please provide photographic evidence and we will resend the item or issue a full refund).
Buyer pays for return costs if an unwanted item is ordered (or any other reason not related to the item condition). Only unopened items which are fit for resale will be accepted.
If you wish to cancel your order, return it or report other problems, please contact our customer support team via ebay messages or call +44 (0) 1225290231.
Runtime Xtemplate or drop-in templates?
The gxt-legacy.jar has the 2.x style XTemplates, which can be built from runtime strings, at the cost of giving up compile-time generation. Of course, you'll need to use data models that can be introspected at runtime, so normal beans won't work - the compiler renames their fields, so the runtime xtemplates can't turn {user.name} into {f.c}.
So what types of data models would you suggest? JSO objects?
XTemplates can only operate on JSOs, so yes, that can be convenient to have - if your data model can be made into JSOs, that would work well. That said, there are mechanical processes that can turn bean-like objects or other Java types into JSO.
JsoReader (NOT JsonReader, note the missing 'n') is the reverse of this process - we don't have a 'JsoWriter', it not usually being terribly useful. These types all use AutoBeans, which make the string name of each property (get/set method) available at compile time.
The 3.x legacy jar also has the ModelData interface and BaseModelData classes, which can go over the wire using RPC. There is no provided mechanism to turn them into a JSO, though one exists in 2.x and could be easily ported. Check out com.extjs.gxt.ui.client.util.Util.getJsObject(ModelData) - I'm not certain why it was omitted from 3.x's legacy jar, but will look into it for an upcoming release. The com.sencha.gxt.legacy.client.core.js.JsUtil class does have a toJavaScriptObject(Map<String, V>) method, and that needs the map from within a BaseModelData, but has no support for collections.
The important thing in all of these is that the name of each property has to be available at runtime. Without that, you can't create a JSO with the properties that the XTemplate expects to find.
So just to clarify, as I am about to start implementing these changes...
Right now I have the JsonReader class handling things coming back from a HTTPProxy (both being wrapped in a paging loader). The POJO objects these get handled with are all Autobeans.
The next step would be to modify my POJO objects to be JSO overlays that can be deserialized from the incoming JSON (still figuring this part out) and then provide the templates at run-time?
Does that sound correct? Maybe extending JsoReader to parse Json into JSO Autobean? I guess I'm confused where Autobeans come into play and if I still need them, as I'm just handling JSON strings and need a JSO Overlay?
Provided you are already using autobeans elsewhere in your app (and I'm assuming you are, since you are also using a loader, proxy, reader - these tools aren't required just to read in a model and display it with a template), turning a JSON-based autobean into a JSO shouldn't be too bad.
When your app is compiled, it already *is* a JSO, and the autobean wrapping is just enough reflection detail to map to the json stream you are sending over the wire. When you are in Dev Mode, however, you'll probably need to do some additional wrapping, since in dev mode a json-based autobean is backed by the json.jar tools. This would be my first try to handle this - I'd write a sort of JsoWriter class, like what we have already in JsoReader (except backwards). Use the AutoBeanUtils.getAutoBean method to get the AutoBean instance, then grab the underlying Splittable object. It'll be a JsoSplittable when isScript() is true, and can be treated as a real JSO, or it'll be a JsonSplittable, so you can read out the json string, then pass that through JsonUtils.safeEval(String) to get a JSO. This isn't terribly efficient, but since you are only doing it in Dev Mode, it shouldn't be a major concern for real world cases.
At this point, any other approach I'd devise would be to start with taking another look at the problem and working from there - how much reflection is needed in general? Do we really need the full xtemplate language, or can we get by with some other generic way of talking about data? That said, if you've already got AutoBeans in your app, like I said above: you are already dealing with JSOs in your compiled code, so it should be cheap to read out that underlying JSO - just don't modify it as a JSO!
JsoReader will not turn JSON into a JSO. The purpose of the JsoReader is to turn a JSO (like from a JSONP request, or from some JS-based API) into an autobean.
So with this JsoWriter, I just pass my objects through it each time and then pass them to the template for rendering correct? I might take a different path as this path looks like it has a lot of overhead associated with it.
Just so I get the picture right, the code you're implying looks something like the following:
Code:
import com.google.gwt.core.client.GWT;
import com.google.gwt.core.client.JavaScriptObject;
import com.google.gwt.core.client.JsonUtils;
import com.google.web.bindery.autobean.gwt.client.impl.JsoSplittable;
import com.google.web.bindery.autobean.shared.AutoBean;
import com.google.web.bindery.autobean.shared.AutoBeanCodex;
import com.google.web.bindery.autobean.shared.AutoBeanFactory;
import com.google.web.bindery.autobean.shared.Splittable;
import com.sencha.gxt.data.shared.writer.AutoBeanWriter;

public class JsoWriter<M> extends AutoBeanWriter<M, JavaScriptObject> {

  public JsoWriter(AutoBeanFactory factory, Class<M> clazz) {
    super(factory, clazz);
  }

  @Override
  public JavaScriptObject write(M model) {
    if (model == null) {
      return null;
    }
    AutoBean<M> autobean = getAutoBean(model);
    if (autobean == null) {
      throw new RuntimeException(
          "Could not serialize " + model.getClass()
              + " using Autobeans, it appears to not be backed by an autobean."
              + " You may need to implement your own DataWriter.");
    }
    Splittable splittable = AutoBeanCodex.encode(autobean);
    if (GWT.isScript()) {
      return (JsoSplittable) splittable;
    } else {
      return JsonUtils.safeEval(splittable.getPayload());
    }
  }
}
Like I said, this answer assumes you already *need* that loader/reader/proxy and autobean impl - the overhead should be pretty minimal though. If you don't need those pieces, cut to the chase after getting the JSON string, and call JsonUtils.safeEval.
Two details about how you've done it:
* The abstract class AutoBeanWriter is designed to provide the getAutoBean method when you might have an object that is part autobean and part real java bean impl. This isn't needed in your case, since all data is coming from autobeans, so you can stop subclassing and calling getAutoBean. Instead, use AutoBeanUtils.getAutoBean - a much cheaper way of doing the same thing when you are sure there are no non-autobeans to worry about
* This leaves the AutoBeanCodex.encode as the only expensive call in the whole thing - this is internally turning the data into a String and then into a Splittable, to ensure that nothing gets missed. In your case, if you can assume that no data has been changed since it came over the wire, you can write a faster, cheater version of this by calling on the existing AbstractAutoBean.data member.
Finally, before obsessing over performance, as always, check if performance is acceptable. You can start with code that works, and fine tune later. Typically when dealing with large amounts of data, it isn't the processing that is costly in a browser - it is the drawing of that data. The complexity of rendering your templates will likely dwarf any data manipulation in the class you've just built - measure, then cut.
Perfect information. As always, thanks Colin. | http://www.sencha.com/forum/showthread.php?249456-Runtime-Xtemplate-or-drop-in-templates&s=55b51bf927e57ee0ee54077ff89d6704&p=917619 | CC-MAIN-2014-52 | refinedweb | 1,288 | 54.63 |
backlight LED works backwards?
Hello,
I'm testing the arduino, trying to read input CH2, and consequently turn on backlight (pin 13), turn on LED attached to CH3, and update LCD.
However the backlight seem to go on when CH3 is off and vice versa.
here's the code:
#include <UC1701.h>
#include <Wire.h>
#include <Indio.h>
static UC1701 lcd;
void setup() {
lcd.begin();
Indio.digitalMode(2,INPUT); // Set CH2 as an input
Indio.digitalMode(3,OUTPUT); // Set CH3 as an input
pinMode(13,OUTPUT); // 13 is the backlight
lcd.setCursor(0, 1);
lcd.print("channel 2");
}
void loop() {
// check if the pushbutton is pressed.
// if it is, the CH2 button is HIGH:
if (Indio.digitalRead(2) == HIGH) {
// turn LED on:
digitalWrite(13, HIGH);
Indio.digitalWrite(3, HIGH);
// and write high
lcd.setCursor(64, 1);
lcd.print("high");
}
else {
// turn LED off:
digitalWrite(13, HIGH);
Indio.digitalWrite(3, HIGH);
lcd.setCursor(64, 1);
lcd.print("low ");
}
}
am I doing something wrong?
Yeah, i made a mistake when copying the sketch here, actual code said Low on both LEDs in the "else" clause.
Thanks for the reply!
Hello Alessandro, Yes that is correct, the backlight works inverse because it is switched by a PNP transistor. see in your code that you write "HIGH" to D13 twice, basically keeping the backlight off. So to make it work you need to change it to "LOW" in the first "if" routine. | https://industruino.com/forum/help-1/question/backlight-led-works-backwards-59 | CC-MAIN-2022-27 | refinedweb | 238 | 71.31 |
While moving from C++/MFC to C#/Windows Forms has been a very positive experience for me, it didn't quite live up to the expectation I had that I could send objects in all directions and never have to worry about deleting them when they are no longer required. The problem is, when you add an event handler to an event, the event will hold a reference to the object that contains the event handler, and so long as that reference is there the garbage collector can't collect the object containing the event handler. Thus while the need to call
delete is gone, excessive use, on my behalf, of events has led me to a point where I need to track when objects are no longer required so that I can remove their event handlers from the event that they are servicing, and thus enable them to be garbage collected.
For a couple of years, I have been using many workarounds for this problem, including changing my coding style to reduce the incidence of this problem, and keeping lists of weak references of delegates that I iterate over to simulate events, but I finally I decided to sit down and write a simple-to-use solution to this problem that would allow me to use events in the way that I want to.
I am not the first person to try to solve this problem. Microsoftie Greg Schechter has a very inspiring article on his blog about the problem in general, and the solution that he came up with. While the solution was good, it just didn't quite fill my needs as it required changing my existing codebase too much.
Ian Griffiths of Interact Software Ltd. UK had exactly the sort of solution I was looking for on his blog but unfortunately it didn't quite work. Tricky problem this one.
Wesner Moise mentions in his blog that he talked about weak delegates with the .NET Bass Class Library team when he had dinner with them back in March 2005, but I haven't seen any mention anywhere of when such a feature would be implemented so I've decided to go ahead with my own solution. He also mentions on the same page that he has seen two experts screw up an implementation of weak delegates, so here's hoping, with all the possible scrutiny that this article might get, that I haven't added my name to the list of people who have screwed this up.
I have two ways of using weak event handlers depending on whether you need speed, or simplicity. The simple way to use weak event handlers is to use the
WeakEventHandlerFactory class which requires that you create one instance of the
WeakEventHandlerFactory in your class:
private WeakEventHandlerFactory eventHandlers;
And then add as many event handlers as you want, to any event source, using the
AddWeakHandler method:
eventHandlers.AddWeakHandler<MyEventArgs>(eventSource, "EventSourceChanged", eventSource_EventSourceChanged);
To remove an event handler, use
RemoveWeakHandler:
eventHandlers.RemoveWeakHandler<MyEventArgs>(eventSource, "EventSourceChanged", eventSource_EventSourceChanged);
Not really too far removed from the code that is normally used to add and remove event handlers, is it? It should be noted that a new feature of the C# 2.0 compiler is that it will convert a function name to an event handler, so
eventSource_EventSourceChanged will be converted to
new EventHandler<MyEventArgs>(eventSource_EventSourceChanged).
When your object is garbage collected, the
WeakEventHandlerFactory will get garbage collected around the same time, and its finalizer will remove all of the event handlers that it has created that haven't been removed yet. It doesn't matter if there is a delay between your object being garbage collected and the event handler factory being garbage collected as the event handler factory will not attempt to pass on events after your object has been garbage collected.
The caveat with using the event handler factory is that it is 40x slower than adding or removing normal event handlers. For some situations this is unacceptable, so I have created a
WeakEventHandler<> class that requires a little more code to use, but in fact ends up being 2x faster to add and remove when compared to normal event handlers. To use Weak Event Handlers directly a little more code and care is required. For each event handler in your class, you need to have an object like the following in your class declaration:
private WeakEventHandler<MyEventArgs> eventSourceChangedHandler;
Then in your constructor you initialize the
WeakEventHandler<> object in the following way:
eventSourceChangedHandler = new WeakEventHandler<MyEventArgs>(eventSource_EventSourceChanged);
This is similar to the way you initialize a normal event handler, except that you need to keep an instance of the
WeakEventHandler<> in your class. To use the
WeakEventHandler<> object, simply add it or remove it as you would a normal event handler:
eventSource.EventSourceChanged += eventSourceChangedHandler; eventSource.EventSourceChanged -= eventSourceChangedHandler;
The difference though is that you must remove the event handler before your object has been finalized otherwise your application will bomb with a null reference exception next time the event source fires an event that is handled by a
WeakEventHandler<> in your object. Therefore your finalizer needs to remove the
WeakEventHandler<> like so:
~MyClass { eventSource.EventSourceChanged -= eventSourceChangedHandler; }
I've written four utility classes for creating Weak Event Handlers:
WeakReferenceToEventHandler
WeakEventHandlerInternal
WeakEventHandlerFactory
WeakEventHandler
This class is exactly what the class name says. It is a weak reference to an event handler. This class is used by the
WeakEventHandlerInternal only, and there should not be any need to use it directly.
This class contains methods to add an event handler to the event source, and remove an event handler from the event source. All the required details such as the event source object and the event name are stored in this class so that it can remove itself from the event source should the original event handler be garbage collected.
As events can't be passed in through methods, the
AddHandler method uses reflection to add an intermediate event handler to the event source:
public void AddHandler(object eventSource, string eventName) { // Store the event source details this.eventName = eventName; weakReferenceToEventSource = new WeakReference(eventSource); // Create and intermediate handler that the event source will // have a strong reference to. EventInfo eventInfo = eventSource.GetType().GetEvent(eventName); eventInfo.AddEventHandler(eventSource, new EventHandler<TEventArgs>(IntermediateEventHandler)); }
Similarly, the
RemoveHandler method uses reflection to remove the intermediate event handler:
public void RemoveHandler() { if (weakReferenceToEventSource==null) return; object eventSource = weakReferenceToEventSource.Target; if (eventSource != null) { // Get the event using reflection EventInfo eventInfo = eventSource.GetType().GetEvent(eventName); // Remove the intermediate event handler, which will dereference // this weak reference and allow it to be garbage collected. eventInfo.RemoveEventHandler(eventSource, new EventHandler<TEventArgs>(IntermediateEventHandler)); } }
When the event gets fired, the intermediate event handler handles it. It's at this point that a test is done to see if the original event handler has been garbage collected, and if so then the intermediate event handler removes itself from the event source.
The
WeakEventHandlerInternal class is used internally for storing weak event handlers. This class should not be used directly, so I made the constructor
internal. I have a class called
WeakEventHandler to be used directly that is 8x faster than
WeakEventHandlerInternal for adding and removing events. The
WeakEventHandlerInternal class is a container for the strong reference to the original event handler, and also contains an instance of the
WeakReferenceToEventHandler class. It contains methods for adding, removing, and comparing weak event handlers for equality. It is used by the
WeakEventHandlerFactory class.
The
WeakEventHandlerFactory is the class that can be used to create weak event handlers. It contains two methods:
AddWeakHandler
RemoveWeakHandler
The
AddWeakHandler method creates a weak reference to the original event handler, but also stores a strong reference to the event handler so that it isn't garbage collected as soon as we leave the method. The weak reference and strong reference are stored together in an object that is added to an internal list.
The
RemoveWeakHandler does a search through all the stored event handlers looking for a match, and removes the event handler if it finds a match. This method returns a flag that indicates if it was successful or not.
This class can be used directly, but cautiously, for directly creating weak event handlers. It sacrifices ease of use for speed. An instance of
WeakEventHandler must be held for each event handler that is represented by a
WeakEventHandler object, and each
WeakEventHandler object must be removed from the source event before the event handling object is garbage collected.
If you compile the project linked in at the top of this article, you will get an application that looks like the image at the top. This application demonstrates the usefulness of a weak event handler.
The application has three
DataGrid controls that are used for editing the contents of up to three collections. Above each
DataGrid is a unique identifier for the collection being edited in that
DataGrid, as well as the total of all the values held in that collection.
If you edit one of the collection items, the total displayed above the
DataGrid will be updated. If you remove one of the items, you will see a message in the log window indicating that the event handler was successfully removed from the item that you removed from the collection. From this you can deduce that each item in the collection has an event that fires when you change the item. Thus each item has a reference to its parent collection(s) as it is the collection that handles the event and updates the value of the total held in the collection object. The purpose of this application is to show that a weak reference to the event handler is used allowing a container of objects (the collection) to be garbage collected if it is no longer required, even if the items in the collection still exist.
Now for the fun part. Below each
DataGrid are four buttons. They allow you to display, or copy one of the other two collections.
When you copy a collection, a new collection is created and the objects from the source collection are copied into it. Hitting the "Force Garbage Collection" button will force the replaced collection to be garbage collected if it is no longer being displayed anywhere, and the log window will show a message indicating which collection was garbage collected.
Editing an item in a copied collection will update the collection it was copied from, but items added to or removed from the new collection will have no effect on the source collection.
When you "View a Set", the actual collection itself will then be displayed in the destination
DataGrid, as well as the source
DataGrid. Actions of adding/deleting/editing objects in one
DataGrid will be mirrored in the other
DataGrid.
The sample application includes a little benchmark to show how fast different event handler types can be added and removed. I tested adding and removing 1,000,000 event handlers on an old 3.0GHz Pentium 4 and got the following results:
Speed Test: Adding and Removing 1000000 Event Handlers Adding Normally.........Seconds: 0.5781361 Removing Normally.........Seconds: 0.3906325 Adding WeakEventHandler.........Seconds: 0.156253 Removing WeakEventHandler.........Seconds: 0.2343795 Adding WeakEventHandlerInternal.........Seconds: 2.2812938 Removing WeakEventHandlerInternal.........Seconds: 1.9687878 Adding using WeakEventHandlerFactory.........Seconds: 19.7191286 Removing using WeakEventHandlerFactory.........Seconds: 17.1253288
In the sample application I have set the number of event handlers to add and remove to 100,000 so that you don't have to wait too long to see your own results.
There are two collection classes included in the source code for the sample application:
SubTotalCollection
SubTotalCollectionFast
SubTotalCollection uses the
EventHandlerFactory to create weak event handlers, and
SubTotalCollectionFast uses the
WeakEventHandler class directly.
SubTotalCollectionFast is a good example of managing
WeakEventHandler objects. It adds a handler when a
SubTotal object is added to the collection, it removes a handler when a
SubTotal object is removed from the collection, and when it is finalized or cleared it removes event handlers from all the
SubTotal objects still in the collection.
The sample application just uses the
SubTotalCollection class. To make it use the
SubTotalCollectionFast class, just do a search and replace in the Form1.cs file replacing
SubTotalCollection with
SubTotalCollectionFast.
The new
DataGrid control is a lot easier to use than the old one, for editing simple collections. In this sample application, I have objects of type
SubTotal in a collection that I want to edit in a
DataGrid. There were three steps to doing this. The first was declaring a collection class that inherited from
BindingList, like so:
public class SubTotalCollection : BindingList<SubTotal>
Then in Form.cs I created a binding source, attached it to the
DataGrid, and attached a collection to the binding source, and that was it:
BindingSource leftSource = new BindingSource(); dataGridViewLeft.DataSource = leftSource; leftSource.Source = new SubTotalCollection();
Between this version and the first version posted on September 23rd, 2005, I had many attempts at making the factory faster. After putting in a
Dictionary for finding matching event handlers in the
RemoveHandler method I tried changing it to
SortedDictionary<,>, however for this purpose the
SortedDictionary<,> seemed to take twice as long as using the
Dictionary<,> class.
WeakEventHandlerclass.
WeakEventHandlerclass found by St�phane Issartel.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/cs/weakeventhandlerfactory.aspx | crawl-002 | refinedweb | 2,211 | 58.52 |
Hello,
I've just started using mod_python, I wrote my own handler and some modules.
After a bit of work i saw what I believed was bad performance, and I
started profiling.
What I've found is this:
ab -n 100
Trying to fetch non-existant file, that is, just apache: 623 req/sec)
ab -n 100
It's only contents are "def handler(req): return": 324 req/sec
ab -n 100
Contents of python_session.py
---------------
def handler(req):
from mod_python import Session
req.sess = Session.Session(req)
req.sess.save()
return
---------------
8 req/sec !!!!
As we can't use MemorySession on Linux (Linux is multiprocess, right?)
what are the possible alternatives? Would a custom FileSession ala PHP
be faster?
I found an old post from 2001 where some people recommended using a
custom Session Server but I don't know if this recommendation still holds.
Server:
Hardware: CPU 1ghz, 1Gb RAM 133Mhz, 7200 rpm ATA hdd and negligible load
Software: Debian stable, apache 2.0.52 prefork, mod_python 3.1, python 2.4
Thanks in advance for any advice.
PD: Just for the record, a similar php_bare test with "function test() {
return; } test();" and in the same conditions brought 316 req/sec.
--
Juan Alonso | | http://www.modpython.org/pipermail/mod_python/2005-April/017825.html | crawl-002 | refinedweb | 205 | 66.64 |
PySceneDetect requires Python 3.6 or higher.
Download and Installation
Install via pip
Including OpenCV (recommended):
pip install --upgrade scenedetect[opencv]
Including Headless OpenCV (servers):
pip install --upgrade scenedetect[opencv-headless]
PySceneDetect is available via pip as the scenedetect package.
Windows Build (64-bit Only)
Latest Release: v0.6
Release Date: May 29, 2022
Downloads: Installer (recommended), Portable .zip
Post Installation
After installation, you can call PySceneDetect from any terminal/command prompt by typing scenedetect (try running scenedetect help, or scenedetect version).

Installing from source requires Python 3 and the following packages:
- OpenCV: pip install opencv-python
- Numpy: pip install numpy
- Click: pip install Click
- tqdm: pip install tqdm
- appdirs: pip install appdirs
Optional packages:
Video Splitting Tools
For video splitting support, you need to have one of the following tools available (included in Windows builds):
- ffmpeg, required to split video files (split-video or split-video -c/--copy)
- mkvmerge, part of mkvtoolnix, a command-line tool required to split video files in stream copy mode (split-video -c/--copy only)

On Windows this is usually C:\PythonXY\Scripts, where XY is your Python version.
If you have trouble getting PySceneDetect to find
ffmpeg or
mkvmerge, see the section on Manually Enabling
split-video Support on Getting Started: Video Splitting Support Requirements.
To ensure you have all the requirements installed, open a python interpreter, and ensure you can run import numpy and import cv2 without any errors.
Code Signing Policy
Windows EXE/MSI Builds: Free code signing provided by SignPath.io, certificate by SignPath Foundation. | http://scenedetect.com/en/latest/download/ | CC-MAIN-2022-33 | refinedweb | 252 | 53.31 |
Java developers have quickly made Eclipse one of the most popular
Java coding tools on Mac OS X. But although Eclipse is a comfortable
tool to use every day once you know it, it is not easy to get started
with. An earlier ADC article, "Eclipse and
Mac OS X: A Natural Combination" provided an introduction to
Eclipse; this article goes a step further, helping you start
working on projects, get over the initial hurdles and become comfortable with Eclipse.
This article is based on three examples. The first, the classic
HelloWorld example, introduces the IDE, shows you how to create a
project, and how to create, compile, and run a class. For the second
example, we will import the base Swing application created by Xcode and
work with it in Eclipse. You will also learn how to customize the
Eclipse environment and start taking advantage of the Java-specific,
code-aware features such as code assist and refactoring. In the third
example, we will work with an example that introduces the Standard
Widget Toolkit (SWT). SWT is a GUI framework created by the Eclipse
foundation that you can use in place of Swing. This example covers the basic settings needed to create and run an SWT application from within Eclipse.
HelloWorld
To download Eclipse, point Safari at the Eclipse download
page. The site detects that you are using Mac OS X and
highlights the latest release at the top of the page. The current
release is Eclipse Platform 3.0.1 Mac OS X; however, as there have been
improvements to the Mac OS X version of Eclipse since this release,
you should instead download the latest milestone/stable release. You
can see a link to these files at the right side of the page as part
of the list of the top-10 Eclipse downloads. The current milestone as of this writing is
3.1M5a. Follow the link and download a file with a name something like
eclipse-SDK-3.1M5a-macosx-carbon.tar.gz. Expand this into the Eclipse
folder, look inside and double-click on the Eclipse application.
Whether you are coming to Eclipse from another IDE such as Xcode
or if you are coming from a text editor and command-line tools, you
will find there is a bit of a learning curve. The first difference
appears immediately each time you start Eclipse, when you are prompted to
choose your workspace. All projects contained in the same workspace
are visible in some views and you may find that you want to
conceptually separate projects by creating a new workspace for each
project. This is a matter of taste but is unnecessary as your project
files need not be located in the workspace. You will see an example of
this in the next section. For now, choose the default selection for
your workspace as shown in Figure 1. After working with the three
examples detailed in this article, you should have a better feel about
whether or not you prefer a single workspace or separate workspaces
for individual projects or for families of projects.
Figure 1: Selecting Your Workspace.
Create a new project by selecting File > New > Project. Follow
the wizard and select the option "Java Project" and press the Next
button. In Figure 2 you can see that you have the choice to create
the project inside the workspace or elsewhere. You can also create
the project from existing source code. For now, type in the name
HelloWorld for the project and select Finish.
Figure 2: Creating a Project.
You next are asked whether you wish to switch to the
Java perspective. Respond "Yes". This brings you to the view that you
will use for Java development. You can look ahead at Figure 6 to see
the components of the Java perspective. Create a new class using File
> New > Class or by right clicking on HelloWorld and following
the popup menu to select New > Class. You see a class creation
dialog box like that shown in Figure 3. Enter "Welcome" in the Name
field, leave the public radio button selected in the Modifiers, and
leave the value of the Superclass field
as java.lang.Object. All classes in a Java program extend
another class with the Object class at the root of the tree. As is
tradition, classes with Object as their superclass do not include
the extends keyword in the class declaration. You can
also save yourself a bit of typing by checking that you want a method
stub created for public static void main(String[]
args). The filled in form should look like this:
Figure 3: Creating a Java Class.
The following code is generated:
public class Welcome {

    /**
     * @param args
     */
    public static void main(String[] args) {
        // TODO Auto-generated method stub

    }
}
To complete your HelloWorld example, replace the line
//TODO Auto-generated method stub with the
customary System.out.println("Hello, World"); To
experiment with the code assist feature, pause after
typing System. The box shown in Figure 4 appears with
suggestions for completing the code. The fourth entry
is out which is of type PrintStream. Type
the letter "o" and out is selected and a second box appears with documentation further
describing System.out. You can configure the length of the pause before the hints appear using Eclipse > Preferences.
Figure 4: Code Assistance.
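After that edit, the completed class is simply:

```java
public class Welcome {

    /**
     * @param args
     */
    public static void main(String[] args) {
        System.out.println("Hello, World");
    }
}
```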
Eclipse makes it easy for you to refactor your code. You may later
decide to change the name of a class, method, or variable. Eclipse helps
you by identifying all references to the name you wish to change.
Eclipse is even smart enough to realize when you have used the same name
in different contexts so that it is not just doing a global search and
replace. Renaming is one example of supported refactorings in Eclipse.
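To see why that matters, consider a sketch like the following (all names invented for illustration): a plain search-and-replace for count would touch both classes, while Eclipse's Rename refactoring updates only the occurrences bound to the declaration you selected.

```java
// Two unrelated classes happen to use the same field name. Renaming
// Counter.count with the Rename refactoring leaves Inventory.count alone;
// a textual search-and-replace would clobber both.
class Counter {
    int count = 1;
    int current() { return count; }
}

class Inventory {
    int count = 99;          // same name, different declaration
    int current() { return count; }
}

public class RenameDemo {
    public static void main(String[] args) {
        System.out.println(new Counter().current() + " vs " + new Inventory().current());
        // prints "1 vs 99"
    }
}
```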
You may also wish to move a class. Notice that there is a
warning at the top of the wizard shown in Figure 3 recommending that you
not use the default package. Let's go back and create a package named
greetings and move the Welcome class into this new package. If we did
this by hand we would have to remember to change all references to
this class as well as adding a package declaration to the top of the
Welcome class. Use File > New > Package to create a new package
named greetings. Now select the Welcome.java
file. Select the move refactoring either with the menu item Refactor
> Move . . . or by right clicking on Welcome.java and selecting
Refactor > Move . . . from the context menu as shown in Figure
5.
Figure 5: Refactoring to Move Classes.
At the prompt, select the greetings package and press OK. In the
package Explorer view of the Java Perspective, the Welcome.java class
is moved under the greetings package and the default package no longer
appears as it is not needed. You will notice that the source code for
Welcome.java now begins with the declaration package
greetings; One of the strengths of Eclipse is the amount of
information that the IDE infers from the underlying structure of the
program you are writing.
What remains is to run the example. In the outline view or in the
Package Explorer view, notice the green arrow associated with the
Welcome class. Right-click on this arrow and select Run As >
Run. You should see the results in a window like that shown in Figure
6. You can also run the application by selecting Run > Run and
setting variables for Main, Arguments, JRE, Classpath, Source,
Environment, and Common.
Figure 6: The Java Perspective.
At this point you should have a good feel for many of the
strengths of Eclipse. You have now downloaded and begun to configure
the IDE. You have specified your workspace, created a Java project,
and explored the Java perspective. You have created a new class and a
new package, and learned how easy it is to use the Eclipse refactoring
tools to move your class from one package to another. You have
created, compiled, and run the most basic of Java programs. Now, let's
move on to working with a Swing based program.
In this section you will create the default Swing project in Xcode
and work with the generated files using Eclipse. In Xcode choose File
> New Project. When the New Project wizard appears, choose Java
Swing Application, name the project XSwing and create it in the
default location. You can now quit Xcode.
In Eclipse, create another Java project. Give it the name XSwing
and select the radio button "Create project from existing
source". Browse to the directory you just created with Xcode for the
XSwing project. The location should be something like
/Users/yourname/XSwing. You can select the highlighted Finish button
if you like, or you can further configure the project by selecting the
Next button. If you do the latter, you can see that the source code
files visible to the compiler should be AboutBox.java, PrefPane.java,
and XSwing.java. Now select Finish.
Check that the Project > Build Automatically menu item is
selected. Look back at the contents of /Users/yourname/XSwing and you
will find more than a dozen .class files have been generated. Eclipse has automatically built your saved project. Back in
the Eclipse Java perspective, click on the Problems tab in the bottom
right panel. You can see that there are fourteen warnings
generated. These all indicate that a named serializable class does
not declare a static final serialVersionUID field of type long. This
is not the sort of warning that you want to be bothered by.
Open the preference pane using Eclipse > Preferences. You can
see that Eclipse is very customizable. Look under Java > Compiler
> Errors/Warnings. As is shown in Figure 7, locate the item
Serializable class without serialVersionUID and change the selection
from Warning to Ignore. When you press OK you are prompted that
because the compiler settings were changed, a full rebuild is
required. Answer that yes you would like to do the full build now. All
of the warnings now disappear.
Figure 7: Customizing the Errors.
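If you would rather satisfy the warning than suppress it, the fix is exactly the field the message names. A sketch, using an invented class:

```java
import java.io.Serializable;

// Declaring serialVersionUID silences the compiler warning discussed
// above; 1L is a typical starting value for a new class.
public class SerialDemo implements Serializable {
    private static final long serialVersionUID = 1L;

    public static void main(String[] args) {
        System.out.println("serialVersionUID = " + serialVersionUID);
    }
}
```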
There are three classes in the XSwing project but only XSwing.java
contains a main() method. Right-click on the class
XSwing in either the Package Explorer or in the Outline view next to
the green arrow. Select Run As > Run and you are taken to the
run dialog. You can also bring up the run dialog using the menu item
Run > Run or by using the toolbar. Fortunately the keyboard
shortcut Shift-Command-F11 allows you to repeat a run as often as you
would like.
From the run dialog, click New. XSwing appears under Java
applications. Before running the application, click on the Arguments
tab and add the VM
argument -Dapple.laf.useScreenMenuBar=true. This is added
for you in Xcode by default. In Eclipse, you need to manually add this
option yourself so that the menu bar appears at the top of the screen
and not as part of the JFrame. In the next section you will see that
this step is not needed if you are using SWT instead of Swing. Figure
8 shows the result of clicking the Run button.
Figure 8: The Sample Swing Application.
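As an alternative to remembering the VM argument in every launch configuration, the same property can be set programmatically, provided the call runs before any Swing class loads (otherwise it has no effect). The class name here is invented; the property key is the real one:

```java
public class MenuBarSetup {

    /** Set the Mac menu-bar property and return its current value. */
    public static String enableScreenMenuBar() {
        // Equivalent to launching with -Dapple.laf.useScreenMenuBar=true
        System.setProperty("apple.laf.useScreenMenuBar", "true");
        return System.getProperty("apple.laf.useScreenMenuBar");
    }

    public static void main(String[] args) {
        System.out.println("useScreenMenuBar = " + enableScreenMenuBar());
        // Construct and show the Swing UI only after this point.
    }
}
```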
Now that the application is running, let's look at some more of
Eclipse's features for working with the source code. You can reopen
the preferences panel and customize the look and feel of the code with
Java > Code Style > Formatter. Change where the opening brace
appears for methods and classes, adjust the spacing at the beginning
of lines or within methods and control statements. You have control of
scores of variables that help you display the code the way you are
most comfortable.
Once you have the editor configured the way you want, take a hard
look at the code and look for places you may want to make some
changes. The automatic refactorings give you a great deal of power and
flexibility. For example, consider the addMenus()
method.
public void addMenus() {
    // (1)
    fileMenu = new JMenu(resbundle.getString("fileMenu"));
    fileMenu.add(new JMenuItem(newAction));
    fileMenu.add(new JMenuItem(openAction));
    fileMenu.add(new JMenuItem(closeAction));
    fileMenu.add(new JMenuItem(saveAction));
    fileMenu.add(new JMenuItem(saveAsAction));
    // (2)
    mainMenuBar.add(fileMenu);

    editMenu = new JMenu(resbundle.getString("editMenu"));
    editMenu.add(new JMenuItem(undoAction));
    editMenu.addSeparator();
    editMenu.add(new JMenuItem(cutAction));
    editMenu.add(new JMenuItem(copyAction));
    editMenu.add(new JMenuItem(pasteAction));
    editMenu.add(new JMenuItem(clearAction));
    editMenu.addSeparator();
    editMenu.add(new JMenuItem(selectAllAction));
    mainMenuBar.add(editMenu);

    setJMenuBar(mainMenuBar);
}
Select the lines between comment (1) and comment (2). Right-click
on these highlighted lines and select Refactor > Extract
Method. The dialog box shown in Figure 9 appears. Fill in the method
name createFileMenu and press OK.
Figure 9: Extracting Code into a Method.
The highlighted lines are replaced with a call to the newly created
method createFileMenu() and the highlighted lines make
up the body of this new method. Eclipse does prompt you if you need to
pass information in the form of a method parameter or return type. In
this case that was not necessary so createFileMenu()
takes no arguments and has return type void. Repeat this
process to form a new method named createEditMenu(). Now
the addMenus() method looks like this:
createFileMenu()
void
createEditMenu()
addMenus()
public void addMenus() {
createFileMenu();
mainMenuBar.add(fileMenu);
createEditMenu();
mainMenuBar.add(editMenu);
setJMenuBar (mainMenuBar);
}
Notice that addMenus() almost reads like a bulleted
list of the steps you might take to add the menus. If you need to know
how you created the file menu, you know where to look. For now you
want to hide these details. Click on the triangles that appear in the
left margin next to the newly created createFileMenu()
and createEditMenu() methods. This use of code folding
makes it easy to navigate code. Most of the time that you are dealing
with the XSwing class you will have no need of looking into
the createFileMenu() method so go ahead and collapse
it. Figure 10. shows a portion of XSwing.java. The inner class
newActionClass is collapsed by default. Note the green triangle
on the left at the beginning of the paint() method. This
indicates that paint() overrides a method in a super
class.
addMenus()
createFileMenu()
newActionClass
paint()
Figure 10: Collapsing code.
You have now seen how Eclipse supports working with existing
code. You can easily add a project to your workspace even if the code
is located somewhere else. You learned how to customize Eclipse to
support the code style and level of error checking with which you are
comfortable. You collapsed portions of the code to make the source
more readable. Most importantly, you took advantage of Eclipse's
awareness of the code structure to extract lines of code into a newly
created method.
In this third example, you will get an SWT application up and
running. SWT, the Standard Widget Toolkit, is provided by the Eclipse
project as an alternative GUI framework to Swing or AWT. You can
develop SWT applications using a text editor and command line tools or
with another IDE. This would require a separate download of the SWT
jar files and jnilib files. Until recently, configuring Eclipse on
Mac OS X to run an SWT application during development was tricky. Now,
as you will soon see, it is quite easy to compile and run an SWT
application with Eclipse.
In Eclipse, create a new Java Project and name it XSWT. Right-click
on XSWT in the Package Explorer view and select Java Build Path shown below in Figure 11. You
need to add the SWT jar files to this project. You find these in
the Eclipse distribution's plugins folder in the
eclipse/plugins/org.eclipse.swt.carbon_3.1.0/ws/carbon directory. Select the
Libraries tab and click the Add External JARs button. Navigate to this
directory and select both the swt.jar and swt-pi.jar files. Press Open. These files
now appear in the Libraries tab so you can press OK.
Figure 11: Adding the SWT .jar Files.
Running an SWT application in Eclipse on Mac OS X has gotten much
easier recently. Older tutorials direct you to further
configure Eclipse to find the corresponding native files in the
/eclipse/plugins/org.eclipse.swt.carbon_3.1.0/os/macosx/ppc
directory. This is no longer necessary. Eclipse can now find the files
libswt-carbon-3132.jnilib, libswt-pi-carbon-3123.jnilib, and
libswt-webkit-carbon-3123.jnilib without further information. Note
that the number 3132 in the file names will change for each
release.
/eclipse/plugins/org.eclipse.swt.carbon_3.1.0/os/macosx/ppc
The SWT requires that you learn a new API. Eclipse helps with code
assist, but you still need to pick up a book that gives you a high
level look at the libraries. For example, working with SWT's Menus and
MenuItems differs in significant ways from working with Swing's JMenus
and JMenuItems. Here is the core of what the code in this example will do:
setUpDisplay();
createLabel();
createMenuBar();
revealDisplay();
Although there is not a direct mapping from familiar Swing components to SWT
components, you should be able to read and understand the basic GUI code. Create a class called SWTGreeter with the following code:
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;
import org.eclipse.swt.widgets.Menu;
import org.eclipse.swt.widgets.MenuItem;
import org.eclipse.swt.widgets.Label;
import org.eclipse.swt.SWT;
public class SWTGreeter {
private Display display;
private Shell shell;
SWTGreeter() {
setUpDisplay();
createLabel();
createMenuBar();
revealDisplay();
}
private void setUpDisplay() {
display = new Display();
shell = new Shell(display);
shell.setSize(200, 100);
shell.setText("SWTGreeter");
}
private void createMenuBar() {
Menu menu = new Menu(shell, SWT.BAR);
shell.setMenuBar(menu);
MenuItem fileMenuItem = new MenuItem(menu, SWT.CASCADE);
fileMenuItem.setText("File");
MenuItem editMenuItem = new MenuItem(menu, SWT.CASCADE);
editMenuItem.setText("Edit");
}
private void createLabel() {
Label label = new Label(shell, SWT.CENTER);
label.setText("Hello SWT");
label.setBounds(shell.getClientArea());
}
private void revealDisplay() {
shell.open();
while (!shell.isDisposed()) {
if (!display.readAndDispatch())
display.sleep();
}
display.dispose();
}
public static void main(String[] args) {
new SWTGreeter();
}
}
The File and Edit menus do not contain any menu items, but this is
enough code to display the first two drop down menus in the
menubar. Run this as an SWT Application by right-clicking on
SWTGreeter in the Package Explorer and select Run As >
SWTApplication. The result should look like Figure 12.
Figure 12: The SWT Application.
Notice that the File and Edit menus appear where they belong, in
the screen menu bar, without requiring any specific parameters be
set. There is a lot more work that needs to be done to create a fully
functioning SWT application that looks and feels like a native Mac
application but this simple HelloWorld level example is a great place to start.
If you are doing Java development on Mac OS X, Eclipse is a
quickly evolving open source IDE which already has a rich set of tools
designed to help you. You may find yourself scratching your head the
first time you need to accomplish a new task. How do you add a jar
file? How do you make that warning go away? How do you run this
application? As you use Eclipse for your daily coding you will wonder
how you ever did without the code completion, automatic refactorings,
and other code aware features.
Updated: 2005-02-28
Get information on Apple products.
Visit the Apple Store online or at retail locations.
1-800-MY-APPLE | http://developer.apple.com/tools/usingeclipse.html | crawl-002 | refinedweb | 3,308 | 66.23 |
Make bob, bob again!
:) it means we want to remove this idea of core and extra packages and make everything a part of bob.
To do so we need:
- Make sure bob is only a meta package (both in PyPI and conda) with only optional dependencies.
- Bob will (optionally) depend on all core and extra packages. There will be no concept of core and extra packages anymore.
- We will still use semantic versioning in Bob and follow semantic versioning. This means with every release, we could easily be releasing a new major version of Bob: bob 4, bob 5, bob 6
- We could remove bob/bob.nightlies and bob/docs repositories as well and do everything here.
- Since the bob package is not going to install anything, we could potentially list private stable packages here too. However, I am not sure we can build the docs here then.
Here is the idea for its conda recipe at least:
{% set name = 'bob' %} {% set project_dir = environ.get('RECIPE_DIR') + '/..' %} package: name: {{ name }} version: {{ environ.get('BOB_PACKAGE_VERSION', '0.0.1') }} build: number: {{ environ.get('BOB_BUILD_NUMBER', 0) }} run_exports: - {{ pin_subpackage(name) }} script: - cd {{ project_dir }} {% if environ.get('BUILD_EGG') %} - python setup.py sdist --formats=zip {% endif %} - python setup.py install --single-version-externally-managed --record record.txt requirements: run: - python {{ python }} - setuptools {{ setuptools }} - bob.extension - bob.blitz ... - bob.ip.flandmark - bob.bio.base ... - gridtk - bob.pad.voice ... - bob.buildout # even bob.buildout test: imports: - {{ name }} commands: - nosetests --with-coverage --cover-package={{ name }} -sv {{ name }} - conda inspect linkages -p $PREFIX {{ name }} # [not win] - conda inspect objects -p $PREFIX {{ name }} # [osx] requires: - bob-devel {{ bob_devel }}.* - nose - coverage - sphinx - sphinx_rtd_theme - pkgtools - cmake - pkg-config - freetype - {{ compiler('c') }} - {{ compiler('cxx') }} - bob.buildout ...
Then we run
conda render, and create a new recipe based on that while converting
run to
run_constrained. I have explained this in bob.conda#46 (comment 25677) | https://gitlab.idiap.ch/bob/bob/-/issues/249 | CC-MAIN-2021-17 | refinedweb | 309 | 60.11 |
This article demonstrates how to connect to the Leap Motion controller and access basic tracking data. After reading this article and following along with your own basic program, you should have a solid foundation for beginning your own application development.
First, a little background...
The Leap Motion controller encompasses both hardware and software components.
The Leap Motion hardware consists primarily of a pair of stereo infrared cameras and illumination LEDs. The camera sensors look upward (when the device is in its standard orientation). The following illustration shows how a user’s hands look from the perspective of the Leap Motion sensor:
The Leap Motion software receives the sensor data and analyzes this data specifically for hands, fingers, and arms. The software maintains an internal model of the human hand and compares that model to the sensor data to determine the best fit. Sensor data is analyzed frame-by-frame and the service sends each frame of data to Leap Motion-enabled applications. The Frame object received by your application contains all the known positions, velocities and identities of tracked entities, such as hands and fingers. For an overview for the tracking data provided by the controller, read API Overview.
The Leap Motion software runs as a service (Windows) or daemon (Mac and Linux) on the client computer. Native Leap Motion-enabled applications can connect to this service using the API provide by the Leap Motion dynamic libraries (provided as part of the Leap Motion SDK). Web applications can connect to a WebSocket server hosted by the service. The WebSocket provides tracking data as a JSON-formatted message – one message per frame of data. A JavaScript library, LeapJS, provides an API wrapping this data. For more information read System Architecture.
This tutorial also uses command line compilers and linkers (where needed) in order to focus on the code rather than the environment.
If you haven’t already, download and unzip the latest Leap Motion SDK from the developer site and install the latest Leap Motion service.
Open a terminal or console window and navigate to the SDK samples folder.
Sample.py contains the finished code for this tutorial, but to get the most out of this lesson, you can rename the existing file, and create a new, blank Sample.py file in this folder.
In your new Sample.py program, add code to import the Leap Motion libraries. The following code detects whether you are running 32- or 64-bit Python and loads the proper library:
import os, sys, inspect, thread, time src_dir = os.path.dirname(inspect.getfile(inspect.currentframe())) # Windows and Linux arch_dir = '../lib/x64' if sys.maxsize > 2**32 else '../lib/x86' # Mac #arch_dir = os.path.abspath(os.path.join(src_dir, '../lib')) sys.path.insert(0, os.path.abspath(os.path.join(src_dir, arch_dir))) import Leap
Add the “structural” code to define a Python command-line program:
def main(): # Keep this process running until Enter is pressed print "Press Enter to quit..." try: sys.stdin.readline() except KeyboardInterrupt: pass if __name__ == "__main__": main()
Note that the statement: sys.stdin.readline() does not play nicely with IDLE and possibly other IDEs. To use IDLE, you must implement a different way to prevent the program from reaching the end of the main() subroutine and thus exiting immediately.
This code simply prints a message and then waits for keyboard input before exiting. See Running the Sample for instructions on running the program.
The next step is to add a Controller object to the program – which serves as our connection to the Leap Motion service/daemon.
def main(): controller = Leap.Controller() # Keep this process running until Enter is pressed print "Press Enter to quit..." try: sys.stdin.readline() except KeyboardInterrupt: pass
When you create a Controller object, it automatically connects to the Leap Motion service and, once the connection has been established, you can get tracking data from it using the Controller.frame() method.
The connection process is asynchronous, so you can’t create the Controller in one line and expect to get data in the next line. You have to wait for the connection to complete. But for how long?
You can add a Listener object to the Controller, which provides an event-based mechanism for responding to important Controller state changes. This is the approach used in this tutorial – but it is not always the best approach.
The Problem with Listeners: Listener objects use independent threads to invoke the code that you implement for each event. Thus, using the listener mechanism can introduce the complexities of threading into your program. It becomes your responsibility to make sure that the code you implement in your Listener subclass accesses other parts of your program in a thread-safe manner. For example, you might not be able to access variables related to GUI controls from anything except the main thread. There can also be additional overhead associated with the creation and cleanup of threads.
Avoiding Listeners: You can avoid using Listener objects by simply polling the Controller object for frames (or other state) when convenient for your program. Many programs already have a event- or animation-loop to drive user input and animation. If so, you can get the tracking data once per loop – which is often as fast as you can use the data anyway.
The Listener class in the API defines the signatures for a function that will be called when a Controller event occurs. You create a listener by creating a subclass of Listener and implementing the callback functions for the events you are interested in.
To continue this tutorial, add the SampleListener class to your program:
class SampleListener(Leap.Listener): def on_connect(self, controller): print "Connected" def on_frame(self, controller): print "Frame available" def main(): #...
If you have already taken a look at the finished file, you may have noticed that several more callback functions are present. You can add those too, if you wish, but to keep things simple, we will concentrate on |Listener_onConnect|_ and |Listener_onFrame|_.
Now create a SampleListener object using your new class and add it to your controller:)
Now is a good time to test your sample program. Follow the directions in: Running the Sample.
If everything is correct (and your Leap Motion hardware is plugged in), then you should see the string “Connected” printed to the terminal window followed by an rapid series of “Frame available”. If things go wrong and you cannot figure out why, you can get help on our developer forum at developer.leapmotion.com.
Whenever you run into trouble developing a Leap Motion application, try opening the diagnostic visualizer. This program displays a visualization of the Leap Motion tracking data. You can compare what you see in your program to what you see in the visualizer (which uses the same API) to isolate and identify many problems.
When your Controller object successfully connects to the Leap Motion service/daemon AND the Leap Motion hardware is plugged in, The Controller object changes its is_connected property to true and invokes your |Listener.onConnect|_ callback (if there is one).
When the controller connects, you can set controller properties using such methods as Controller.set_policy().
All the tracking data in the Leap Motion system arrives through the Frame object. You can get Frame objects from your controller (after it has connected) by calling the Controller.frame() method. The |Listener_onFrame|_ callback of your Listener subclass is called when a new frame of data becomes available. When you aren’t using a listener, you can compare the id value to that of the last frame you processed to see if there is a new frame. Note that by setting the history parameter of the frame() function, you can get earlier frames than the current one (up to 60 frames are stored). Thus, even when polling at a slower rate than the Leap Motion frame rate, you can process every frame, if desired.
To get the frame, add this call to frame() to your |Listener_onFrame| callback:
def on_frame(self, controller): frame = controller.frame()
Then, print out some properties of the Frame object:
def on_frame(self, controller): frame = controller.frame() print "Frame id: %d, timestamp: %d, hands: %d, fingers: %d" % ( frame.id, frame.timestamp, len(frame.hands), len(frame.fingers))
Run your sample again, put a hand or two over the Leap Motion device and you should now see the basic statistics of each frame printed to the console window.
I’ll end this tutorial here, but you can look at the rest of the code in the original sample program for examples on how to get all the Hand, Finger, Arm, and Bone objects in a frame.
Type the following at your command prompt (with the current directory set the SDK samples folder):
python Sample.py
You should see the message “Connected” printed to standard output when the application initializes and connects to the Leap. You should then see frame information printed each time the Leap dispatches the onFrame event. | https://developer-archive.leapmotion.com/documentation/python/devguide/Sample_Tutorial.html | CC-MAIN-2019-04 | refinedweb | 1,496 | 53.92 |
Hi, I'm new to c++ and programming all together and I really need help.
So what I had to do was display a table showing the medals earned by several countries at a certain competition and their total medal count.
Then I'm asked to write a function which returns the index of the array for the highest number found. This index will be returned to function main where it will be used to display the name of the country and its gold medal count.
I've done the first part but I can't seem to figure out how to do the second part. I'm guessing the problem is with the function definition for HighestGold which I basically just guessed at.
I'm a total beginner so please dumb down your explanation as much as you can.
Thanks!
#include <iostream> #include <string> #include <conio.h> #include <iomanip> const int COLUMNS = 3; const int COUNTRIES = 7; const int MEDALS = 3; int RowTotal(int table[][COLUMNS], int row); //funtion prototype for RowTotal int HighestGold (int table[][COLUMNS], int col); //function prototype for HighestGold using namespace std; int main() { int total; //total medals for a country string countries[] = {"Canada", "China", "Japan" , "Russia", "Switzerland", "Ukraine", "United States"}; int medalCounts[COUNTRIES][MEDALS] = {{3, 2, 1}, {2, 5, 1}, {1, 4, 0}, {5, 0, 1}, {0, 1, 0}, {0, 0, 1}, {6, 2, 0}}; cout << setw(12) << "Country" << setw(12) << "Gold" << setw(12) << "Silver" << setw(10) << "Bronze" << setw(10) << "Total" << endl << endl; cout << "---------------------------------------------------------\n"; //print countries, counts, and row totals for (int i = 0; i < COUNTRIES; i++) { cout << setw(13) << countries[i]; //proccess the ith row for (int j = 0; j < MEDALS; j++) { cout << setw(10) << medalCounts[i][j]; } total = RowTotal(medalCounts, i); //calculate total medals for the ith row cout << setw(12) << total << endl << endl; HighestGold; //find country with highest gold count cout << "The country with the highest gold count is " << countries << " with " << medalCounts << " gold medals."; } _getch(); return 0; } //----------------------------------------------------------------------------------------------- int RowTotal(int table[][COLUMNS], int row) { int total = 0; for (int j = 0; j < COLUMNS; j++) { total = total + table[row][j]; } return total; } //----------------------------------------------------------------------------------------------- int HighestGold(int table[][COLUMNS], int col) { int index = 0; int highest = 0; bool found = false; while( (!found) && (highest < col) ) { if (highest == table[highest]) found = true; else highest++; } if (found) return highest; } | 
https://www.daniweb.com/programming/software-development/threads/328399/find-largest-number-in-an-array | CC-MAIN-2018-43 | refinedweb | 383 | 55.92 |
Connecting with the ODBC Driver
This chapter describes how to create an ODBC logical connection definition for the SQL Gateway, and use the Data Migration Wizard. See Using ODBC with InterSystems Software for complete information on how to use InterSystems ODBC.
The following topics are discussed:
Creating ODBC Connections for External Sources — describes how to create an ODBC logical connection definition for the SQL Gateway.
Using the Data Migration Wizard — describes how to migrate data from external ODBC sources and create an appropriate InterSystems IRIS class definition to store the data.
Creating ODBC Connections for External Sources.
The following topics are discussed in this section:
Defining a Logical Connection in the Management Portal
Creating an ODBC Connection through the SQL Gateway
Implementation-specific Options
For OS-specific instructions on how to create a DSN, see the following sections in Using ODBC with InterSystems Software:
“Using an InterSystems Database as an ODBC Data Source on Windows”
“Using an InterSystems Database as an ODBC Data Source on UNIX®”
Defining a Logical Connection in the Management Portal
To define a gateway connection for an ODBC-compliant data source, perform the following steps:
Define an ODBC data source name (DSN) for the external database. See the documentation for the external database for information on how to do this.
In the Management Portal, go to the System Administration > Configuration > Connectivity > SQL Gateway Connections page.
Click Create New Connection.
On the Gateway Connection page, enter or choose values for the following fields:
For Type, choose ODBC.
Connection Name — Specify an identifier for the connection, for use within InterSystems IRIS.
Select an existing DSN — Choose the DSN that you previously created. You must use a DSN, since the ODBC SQL Gateway does not support connections without a DSN.
User — Specify the name for the account to serve as the default for establishing connections, if needed.
Password — Specify the password associated with the default account.
For example, a typical connection might use the following values:
For the other options, see “Implementation-specific Options” later in this section.
Optionally test if the values are valid. To do so, click the Test Connection button. The screen will display a message indicating whether the values you have entered in the previous step allow for a valid connection.
To create the named connection, click Save.
Click Close.
Creating an ODBC Connection through the SQL Gateway
InterSystems IRIS provides ODBC drivers and thus can be used as an ODBC data source. That is, an InterSystems IRIS instance can connect to itself or to another InterSystems IRIS instance via.
To configure an InterSystems IRIS instance (InterSystems IRIS_A) to use another InterSystems IRIS instance (InterSystems IRIS_B) as an ODBC data source, do the following:
On the machine that is running InterSystems IRIS_A, create a DSN that represents the namespace in InterSystems IRIS_B that you want to use.Tip:
If InterSystems IRIS_B is installed on this machine, a suitable DSN might already be available, because when you install InterSystems IRIS, the installer automatically creates DSNs.
Within InterSystems IRIS_A, use the SQL Gateway to create an ODBC connection that uses that DSN. Provide the following details:
For Type, choose ODBC.
Connection Name — Specify an identifier for the connection, for use within InterSystems IRIS_A.
Select an existing DSN — Choose the DSN that you previously created for InterSystems IRIS_B.
For example, a typical connection might use the following values:Tip:
You do not need to specify User and Password because that information is part of the DSN itself.
Click Save.
Click Close.
Implementation-specific Options
Before you define an SQL gateway connection, you should make sure that you understand the requirements of the external database and of the database driver, because these requirements affect how you define the connection. The following options do not apply to all driver implementations.
The Enable legacy outer join syntax (Sybase) option controls whether the SQL gateway connection will enable you use to use legacy outer joins. Legacy outer joins use SQL syntax that predates the SQL-92 standard. To find out whether the external database supports such joins, consult the documentation for that database.
The Needs long data length option controls how the SQL gateway connection will bind data. The value of this option should agree with the SQL_NEED_LONG_DATA_LEN setting of the database driver. To find the value of this setting, use the ODBC SQLGetInfo function. If SQL_NEED_LONG_DATA_LEN equals Y, then select the Needs long data length option; otherwise clear it.
The Supports Unicode streams option controls whether the SQL gateway connection supports Unicode data in streams, which are fields of type LONGVARCHAR or LONGVARBINARY.
Clear this check box for Sybase. If you are using a Sybase database, all fields you access via the SQL gateway should include only UTF-8 data.
Select this check box for other databases., which is consistent with the behavior of typical ODBC clients. CAST.
Using the Data Migration Wizard
The Management Portal provides a wizard that you can use to migrate data from an external table or view.
When you migrate data from a table or view in an external source, the system generates a persistent class to store data of that table or view and then copies the data. This wizard assumes that the class should have the same name as the table or view from which it comes; similarly, the property names are the same as in the table or view. After the class has been generated, it does not have any connection to external data source.
If you have not yet created an SQL Gateway connection to the external database, do so before you begin (see “Creating Gateway Connections for External Sources”).
From the Management Portal select System Explorer, then SQL. Select a namespace with the Switch option at the top of the page; this displays the list of available namespaces.
At the top of the page, click the Wizards drop-down list, and select Data Migration.
On the first page of the wizard, select the table or view, as follows:
Select a destination namespace — Select the InterSystems IRIS namespace to which the data will be copied.
Schema Filter — Specify a schema (class package) name that contains the table or view. You can specify a name with wildcards to return multiple schemas, or % to return all schemas. For example, C% will return all schemas in the namespace beginning with the letter C. Use of this filter is recommended, as it will shorten the return list of schemas to select from, and thus improve loading speed.
Table Filter — Specify a table or view name. You can specify a name with wildcards to return multiple tables and/or views, or % to return all tables/views.
Table type — Select TABLE, VIEW, SYSTEM TABLE, or ALL. The default is TABLE.
Select a SQL Gateway connection — Select the SQL Gateway connection to use.
Click Next.
On the next page, you can optionally specify the following information for each class:
New Schema — Specify the package to contain the class or classes. Be sure to follow the rules for ObjectScript identifiers, including length limits (see the section on Naming Conventions in Defining and Using Classes).Tip:
To change the package name for all classes, type a value at the top of this column and then click Change all.
Copy Definition — Select this check box to generate this class, based on the table definition in the external source. If you have already generated the class, you can clear this check box.
Copy Data — Select this check box to copy the data for this class from the external source. When you copy data, the wizard overwrites any existing data in the InterSystems IRIS class.
Click Next. The wizard displays the following optional settings:
Disable validation — If checked, data will be imported with %NOCHECK specified in the restriction parameter of the INSERT command.
Disable journaling for the importing process — If checked, journaling will be disabled for the process performing the data migration (not system-wide). This can make the migration faster, at the cost of potentially leaving the migrated data in an indeterminate state if the migration is interrupted by a system failure. Journaling is re-enabled at the end of the run, successful or not.
Defer indices — If checked, indices are built after the data is inserted. The wizard calls the class' %SortBegin() method prior to inserting the data in the table. This causes the index entries to be written to a temporary location for sorting. They are written to the actual index location when the wizard calls the %SortEnd() method after all rows have been inserted. Do not use Defer Indices if there are Unique indices defined in the table and you want the migration to catch any unique constraint violations. A unique constraint violation will not be caught if Defer Indices is used.
Disable triggers — If checked, data will be imported with %NOTRIGGER specified in the restriction parameter of the INSERT command.
Delete existing data from table before importing — If checked, existing data will be deleted rather than merged with the new data.
Click Finish. The wizard opens a new window and displays the Background Jobs page with a link to the background tasks page. Click Close to start the import immediately, or click the given link to view the background tasks page. In either case, the wizard starts the import as a background task.
In the Data Migration Wizard window, click Done to go back to the home page of the Management Portal.
Microsoft Access and Foreign Key Constraints
When you use the Data Migration Wizard with Microsoft Access, the wizard tries to copy any foreign key constraints defined on the Access tables. To do this, it queries the MSysRelationships table in Access. By default, this table is hidden and does not provide read access. If the wizard can't access MSysRelationships, it migrates the data table definitions to InterSystems SQL without any foreign key constraints.
If you want the utility to migrate the foreign key constraints along with the table definitions, set Microsoft Access to provide read access for MSysRelationships, as follows:
In Microsoft Access, make sure that system objects are displayed.
Click Tools > Options and select the setting on the View tab.
Click Tools > Security > User and Group Permissions. Then select the Read check box next to the table name. | https://docs.intersystems.com/irisforhealthlatest/csp/docbook/DocBook.UI.Page.cls?KEY=BSQG_ODBC | CC-MAIN-2021-10 | refinedweb | 1,712 | 52.8 |
ADA Guidelines for Infection Control
Second Edition
Authorised by FS Fryer, Federal President, Australian Dental Association Inc.

Published by the Australian Dental Association Inc.
PO Box 520, St Leonards NSW 1590 Australia
Phone: +612 9906 4412
Fax: +612 9906 4917
Email: adainc@ada.org.au
Web:

©Australian Dental Association Inc. 2012
First published 2009

This work is copyright. Apart from any permitted use under the Copyright Act 1968, no part of this work may be reproduced by any process without written permission from the publisher. Enquiries should be directed to the Australian Dental Association Inc. at the above address.

Disclaimer: The routine work practices outlined in these guidelines are designed to reduce the number of infectious agents in the dental practice environment; prevent or reduce the likelihood of transmission of these infectious agents from one person or item/location to another; and make items and areas as free as possible from infectious agents. Professional judgement is essential in determining the necessary application of these guidelines to the particular circumstances of each individual dental practice.

ISBN 978-0-909961-41-1
FOREWORD 5
INTRODUCTION 6
DEFINITIONS 7
A. INFECTION CONTROL 8
  1. What is infection control? 9
  2. Duty of care 10
     Infected dental practitioners 11
B. STANDARD PRECAUTIONS OF INFECTION CONTROL 12
  1. Hand hygiene 12
     Hand care 13
  2. Personal protective equipment 13
     Gloves 13
     Masks 14
     Eye protection 14
     Protective clothing 15
     Footwear 15
  3. Surgical procedures and aseptic technique 15
  4. Management of sharps 15
     Disposal of sharps 16
  5. Management of clinical waste 16
  6. Environment 16
     Design of premises 17
     Cleaning the environment 17
     Treatment areas 17
C. INFECTION CONTROL STRATEGIES WITHIN THE OPERATING FIELD 19
  1. Clean and contaminated zones 19
  2. Waterlines and water quality 20
     Water quality 20
  3. Single use items 20
D. INSTRUMENT REPROCESSING 21
  1. Categories of instruments: infection risk relative to instrument use 21
  2. Instrument reprocessing area and workflow 21
     Design of reprocessing area 22
  3. Transfer of contaminated instruments and transfer of sharps 22
  4. Cleaning 23
     Manual cleaning 23
     Mechanical cleaning 24
     Drying instruments 24
  5. Packaging prior to steam sterilisation 24
  6. Steam sterilisation 25
     Portable bench top steam sterilisers 25
     Maintenance and testing 25
     Validation of the sterilisation process 26
     Monitoring of cycles 26
     Operating the steam steriliser 26
     Steam steriliser performance tests 26
     Loading 27
     Drying 27
     Checking the completed load 27
     Steam steriliser monitoring tests 28
     Chemical indicators 28
     Biological indicators 29
  7. Disinfection 29
     Thermal disinfection 29
     Chemical disinfection 29
  8. Storage of processed instruments 30
     User checks to be made before using 30
     Unwrapped semi-critical and non-critical items 31
E. DOCUMENTATION AND PRACTICE PROTOCOLS FOR INFECTION CONTROL 32
  1. Maintaining sterilisation records 32
  2. Batch control identification 33
  3. Infection control for dental practitioners and clinical support staff 34
     Immunisation 34
     Immunisation records 34
     Education 34
     Exposure incident protocol 35
  4. Infection control manual and other practice management issues 35
     Infection control manual 36
F. SPECIAL AREAS AND THEIR PARTICULAR DENTAL INFECTION CONTROL REQUIREMENTS 37
  1. Dental radiology and photography 37
  2. High technology intra-oral equipment and devices 37
     Curing light 38
     Air abrasion, electrosurgery units and lasers 38
     Implants 38
  3. Dental laboratory and dental prosthetics 38
  4. Handpiece management 39
  5. Specimens 40
  6. Nickel-titanium (NiTi) endodontic files 40
     Cleaning rotary endodontic files 40
  7. Relative analgesia equipment 40
  8. Nursing home visits 40
G. INFECTIOUS DISEASES, ALLERGIES AND TRANSMISSION-BASED PRECAUTIONS FOR INFECTION CONTROL 41
  1. Creutzfeldt-Jakob disease (CJD) 41
  2. Measles, mumps, tuberculosis 41
  3. Staphylococcus aureus (MRSA) 41
  4. Avian flu 42
  5. Latex sensitivity of dental staff or patients 42
  6. Bloodborne viruses and the infected dental practitioner 42
     Exposure prevention methods and exposure prone techniques 42
APPENDIX 43
  Blood and body fluid exposure protocol 43
REFERENCES AND ADDITIONAL READING 47
Foreword

The current edition of the ADA Guidelines is the result of over 20 years of dedicated work by the members of the ADA's Infection Control Committee. During that time the Committee has assisted external expert bodies such as the NHMRC and the Communicable Diseases Network of Australia (CDNA) help define safe practice. Quite fittingly, the ADA Guidelines are now recognised as a key source of information for the NHMRC Guidelines.

The production of this document has required a considerable effort over a long period. Special thanks and acknowledgment are due to the current members of the ADA's Infection Control Committee (chaired by Professor Laurence Walsh) for their generous donation of time and their technical advice and expertise in preparing this document. The ADA declares that no conflict of interest existed in the development of these guidelines, and that they have been developed independently without any corporate interest or sponsorship.

It is the intention of the Australian Dental Association Inc. (ADA) that these infection control guidelines will be updated every three years to ensure that they remain aligned to the evidence base of infection control.

F Shane Fryer
President
Australian Dental Association Inc.
Introduction

This document describes the infection control procedures that dental practitioners and their clinical support staff are expected to follow in a dental practice. It outlines the primary responsibilities of dental practitioners in relation to infection control, and provides the rationale for those obligations. These guidelines are mainly evidence-based or otherwise based on current international best practice, and have been drawn from current expert knowledge and advice in infection control. These guidelines will be regularly reviewed and updated in light of changes in the knowledge base. References used to prepare these guidelines are listed at the end of the document and can be sourced for further information.

This second edition of the ADA Guidelines incorporates a number of key areas drawn from the NHMRC Australian Guidelines for the Prevention and Control of Infection in Healthcare (published in October 2010). The NHMRC Guidelines should be regarded as a companion document to the ADA document as it addresses the foundations of infection control across all healthcare settings, including dental practice, and provides specific advice on situations where additional risk-based precautions are warranted. This revision of the ADA Guidelines also incorporates information from the CDNA Australian National Guidelines for the Management of Health Care Workers known to be infected with Blood-Borne Viruses (published in September 2011).

Practitioners should also be aware of the development of systems for accreditation of healthcare facilities as a national initiative from the Australian Commission on Quality and Safety in Health Care (ACQSHC). This body has developed a set of uniform standards to apply across all health services that set the minimum expected level of safe and quality care to be provided to patients. One of the 10 standards developed by ACQSHC is a standard on Healthcare-Associated Infection. This standard was written in concert with the NHMRC Australian Guidelines for the Prevention and Control of Infection in Healthcare, which aims to prevent infection of patients within the healthcare system and to manage infections effectively when they occur to minimise their consequences.

Each dental practitioner is responsible for implementing these guidelines in their clinical practice and for ensuring their clinical support staff are familiar with and able to apply them. This includes knowing how infections are transmitted, what vaccinations are needed and why, what personal protection is needed and when and how to use it correctly, as well as the details of how to keep the practice clean and hygienic, and what to do in the event of an exposure incident such as a skin penetrating injury with a sharp instrument. Compliance with procedures is more likely if those involved in carrying them out understand the rationale behind the requirements. All clinical support staff require appropriate training in the infection control measures that they are expected to undertake on an everyday basis.

Individual dental practices must have their own infection control procedures in place, which are tailored to their particular daily routines. Effective infection control involves not only maintaining documentation about the various procedures and processes in a specific manual, but reviewing protocols, training and documentation on a regular basis, and ensuring that staff members undertake the procedures in a consistent and uniform manner.

The routine work practices outlined in these guidelines are designed to reduce the number of infectious agents in the dental practice environment; prevent or reduce the likelihood of transmission of these infectious agents from one person or item/location to another; and make items and areas as free as possible from infectious agents. It is important to acknowledge that professional judgement is essential in determining the necessary application of these guidelines to the situation of the individual dental practice environment.
Definitions

Bloodborne viruses (BBVs) include hepatitis B (HBV), hepatitis C (HCV) and human immunodeficiency (HIV). These viruses are transmitted primarily by blood-to-blood contact.

Clinical support staff are those staff other than registered dental practitioners who assist in the provision of dental services – namely dental chairside assistants (dental nurses), dental laboratory assistants and dental technicians.

Contaminated zone is that area of work in which contamination by patient fluids (blood and saliva) may occur by transfer, splashing or splatter of material. It includes the operating field in the dental operatory, as well as the instrument cleaning area within the sterilising room. Contamination must be confined and contained to this area.

Dental Board refers to the Dental Board of Australia.

Dental practitioners is an inclusive term that refers to those registered by the Dental Board to provide clinical dental care to patients, and comprises dentists, dental specialists, dental therapists, dental hygienists, dental prosthetists, and oral health therapists.

Dental staff is an inclusive term for all those employed in a dental practice setting – namely dental practitioners, clinical support staff and clerical or administrative staff.

Disinfection is the destruction of pathogenic and other kinds of microorganisms by physical or chemical means.

Exposure incident is any incident where a contaminated object or substance breaches the integrity of the skin or mucous membranes or comes into contact with the eyes.

Exposure prone procedures (EPPs) are procedures where there is a risk of injury to dental staff resulting in exposure of the patient's open tissues to the blood of the staff member. These procedures include those where the dental staff's hands (whether gloved or not) may be in contact with sharp instruments, needle tips or sharp tissues (spicules of bone or teeth) inside a patient's open body cavity, wound or confined anatomical space where the hands or fingertips may not be completely visible at all times. Three different types of EPPs are described in the CDNA Australian National Guidelines for the Management of Health Care Workers known to be infected with Blood-Borne Viruses (2011).¹

The majority of procedures in dentistry are Category 1 EPPs because they are undertaken with the hands and fingertips of the dental practitioner visible and outside the mouth most of the time. The possibility of injury to the practitioner's gloved hands from sharp instruments and/or tissues is slight, and the risk of the practitioner bleeding into a patient's open tissues is remote. If injury occurs it is likely to be noticed and acted upon quickly to avoid the dental practitioner's blood contaminating a patient's open tissues.

In a smaller group of procedures, designated as Category 2 EPPs, injury to the practitioner's gloved hands from sharp instruments and/or tissues is unlikely; however, the fingertips may not be visible at all times. In such circumstances it is possible that exposure of the patient's open tissues to the practitioner's blood may go unnoticed or would not be noticed immediately.

Category 3 EPPs in dentistry are those surgical procedures where the fingertips are out of sight for a significant part of the procedure, or during certain critical stages. Such procedures include: maxillofacial surgery, oral surgical procedures including surgical removal of teeth and dento-alveolar surgery, periodontal surgical procedures, endodontic surgical procedures, and implant surgical procedures (such as implant placement and recovery). The definition of Category 3 EPPs excludes forceps extraction of highly mobile or exfoliating teeth.

Invasive procedure is any procedure that pierces skin or mucous membrane or enters a body cavity or organ. This includes surgical entry into tissues, cavities or organs, or repair of traumatic injuries to the soft tissues. A surgical procedure is one where there is a planned breach of a patient's skin or mucosa and penetration into deeper layers of tissue which have a different immune response.

¹ From section B5.3 of the NHMRC 2010 Australian Guidelines for the Prevention and Control of Infection in Healthcare and Appendix 1 of the CDNA Australian National Guidelines for the Management of Health Care Workers known to be infected with Blood-Borne Viruses.
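The three EPP categories amount to a simple classification by how often the practitioner's fingertips are out of sight. The illustrative lookup below paraphrases the definitions above; the function name and structure are hypothetical, and any real categorisation of a procedure rests on the CDNA guidelines and clinical judgement, not on code.

```python
# Illustrative mapping of the three EPP categories described above.
EPP_CATEGORIES = {
    1: "Hands and fingertips visible and outside the mouth most of the time; "
       "risk of injury to the practitioner's gloved hands is slight.",
    2: "Injury to gloved hands unlikely, but fingertips may not be visible "
       "at all times; exposure of patient tissues to blood may go unnoticed.",
    3: "Fingertips out of sight for a significant part of the procedure or "
       "during critical stages (e.g. maxillofacial or oral surgery).",
}

def describe_epp(category: int) -> str:
    """Return the description for an EPP category (1, 2 or 3)."""
    try:
        return EPP_CATEGORIES[category]
    except KeyError:
        raise ValueError(f"Unknown EPP category: {category}") from None
```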
A. Infection control

1. What is infection control?

The purpose of infection control in dental practice is to prevent the transmission of disease-producing agents such as bacteria, viruses and fungi from one patient to another patient, from dental practitioner and dental staff to patients, and from patients to dental practitioner or other dental staff. In the dental practice setting, it is necessary that endogenous spread of infection also be prevented by limiting the spread of infectious agents.

Successful infection control involves:
• understanding the basic principles of infection control;
• adhering to good personal hygiene practices, particularly efficient hand hygiene;
• keeping up-to-date regarding specific infectious diseases, particularly newly-evolving infection challenges such as avian flu, H1N1 influenza, and multiple resistant organisms, and how to take precautions against them; and
• identifying the settings that need modified procedures (e.g. nursing homes).

Microorganisms can spread by direct contact from one person to another, or through indirect contact via instruments and equipment – when infectious patients have contact with other patients, when the dental staff member's hands or clothing become contaminated, where patient-care devices are shared between patients, or where environmental surfaces are not regularly decontaminated. In addition, microorganisms can also spread by airborne transmission – when dental staff or others inhale small particles that contain infectious agents. A number of infectious agents, including viral influenza, can be transmitted through respiratory droplets (i.e. large-particle droplets >5 microns in size) that are generated by a patient who is coughing, sneezing or talking. Transmission via large droplets (splash and splatter) requires close contact, as large droplets do not remain suspended in the air. Droplet transmission can occur when infectious respiratory droplets are expelled by coughing, sneezing or talking, and come into contact with another's mucosa (eyes, nose or mouth), or when a staff member's hands become contaminated with respiratory droplets and are transferred to susceptible mucosal surfaces such as the eyes. In dental practice, microorganisms may be inhaled, ingested, injected, implanted, or splashed onto the skin or mucosa, either directly into or via contaminated hands.

Whether or not the spread of microorganisms results in clinical infection depends in part on the virulence (power to infect) of a particular microorganism and on the susceptibility of the host. Patients and dental staff have varying susceptibilities to infection depending on their age, state of health, underlying illnesses, disease, and immune status (which may be impaired by medication, cancer therapy and other factors such as malnutrition and hormone deficiency). For instance, hepatitis B virus (HBV) is highly infectious and the chance that this disease will be transmitted by a contaminated penetrating injury² to a non-immune person is approximately one in three (depending on the infective status of the source of injury). In comparison, the chance of transmission of the hepatitis C virus (HCV) by similar means is one in 30, and for HIV/AIDS, one in 300.

Infection control focuses on limiting or controlling factors that influence the transmission of infection or that contribute to the spread of microorganisms. The spread of microorganisms can be reduced by:
• limiting surface contamination by microorganisms;
• using disposable products where appropriate (e.g. paper towels);
• using personal protective equipment;
• creating systems that allow infection control procedures to be implemented effectively and make compliance with them easy (this includes having clear procedural documentation, and comprehensive training of dental staff together with a process of regular monitoring of the application of these systems and procedures); and
• following risk minimisation techniques such as using rubber dam and pre-procedural mouthrinsing.

Standard precautions are the basic processes of infection control that will minimise the risk of transmission of infection, and are required for the treatment of all dental patients regardless of whether a particular patient is infected with or is a carrier of an infectious disease. They apply to all situations whenever dental practitioners or their clinical support staff touch the mucous membranes or non-intact skin of a dental patient, when handling blood (including dried blood), saliva and other body fluids (excluding sweat) whether containing visible blood or not, when handling items contaminated with saliva (e.g. radiographs, dentures, orthodontic appliances, wax rims and other prosthetic work that have been in a patient's mouth), and when cleaning and processing instruments. Standard precautions are also essential when cleaning the dental surgery environment.

These standard precautions minimise the risk of transmission of infection from person to person, and include:
• undertaking regular hand hygiene before gloving and after glove removal;
• using personal protective barriers such as gloves, masks, eye protection and gowns;
• wearing appropriate protective equipment during clinical procedures and when cleaning and reprocessing instruments;
• using, where appropriate, environmental barriers such as plastic coverings on surfaces and items that may become contaminated and that are difficult to clean;
• appropriately handling sharps;
• appropriately reprocessing reusable instruments;
• using aseptic non-touch techniques where indicated;
• appropriately handling used linen and clinical gowns;
• effectively undertaking environmental cleaning;
• correctly handling contaminated waste; and
• respiratory hygiene and cough etiquette.

There are a number of situations where patients have a specific highly infectious condition that necessitates the use of transmission-based precautions in addition to standard precautions, to address the increased risk of transmission. Transmission-based precautions are applied to patients suspected or confirmed to be infected with agents transmitted by the contact, droplet or airborne routes. The requirements for transmission-based precautions are listed in Section B5.2 of the 2010 NHMRC Australian Guidelines for the Prevention and Control of Infection in Healthcare (2010 NHMRC Guidelines). The application of transmission-based precautions is particularly important in containing multi-resistant organisms (MROs) in hospital environments and in the management of outbreaks of norovirus gastroenteritis in institutions such as hospitals and nursing homes.

The range of measures used in transmission-based precautions depends on the route(s) of transmission of the infectious agent involved. In brief, contact precautions are used when there is a risk of direct or indirect contact transmission of infectious agents (e.g. MRSA, Clostridium difficile, or highly contagious skin infections/infestations) that are not effectively contained by standard precautions alone. Droplet precautions are intended to prevent transmission of infectious agents spread through respiratory or mucous membrane contact with respiratory secretions; because these microorganisms do not travel over long distances in droplets or aerosols, positive pressure ventilation is not required. Airborne precautions, such as wearing P2 (N95) surgical respirators, are designed to reduce the likelihood of transmission of microorganisms that remain infectious over time and distance when suspended in the air. These agents may be inhaled by susceptible individuals who have not had face-to-face contact with (or been in the same room as) the infectious individual. Infectious agents for which airborne precautions are indicated include measles, chickenpox (varicella) and Mycobacterium tuberculosis. At the present time there is a lack of evidence from clinical trials regarding the additional benefit of P2 (N95) respirators over conventional surgical masks for reducing the risk of transmission of viral influenza.

Because the majority of procedures undertaken in dentistry generate aerosols, it is important to recognise that patients with active tuberculosis, measles, chickenpox or viral influenza pose a considerable risk to dental staff and patients if they undergo dental treatment. The agents of most concern to dental practice are respiratory viruses. For patients for whom airborne precautions are indicated, formal risk assessment should be undertaken so that the need for dental treatment is determined. Non-urgent treatment should be delayed or postponed. In general, there will be few situations where the use of analgesics and appropriate antimicrobial agents will not allow a delay until the patient is no longer infectious. If such patients need urgent care, transmission-based precautions must be followed. The additional measures would include these patients being seen as the last patient of the day. Use of pre-procedural mouthrinses and rubber dam would be essential, together with minimising the use of aerosol-generating techniques.

² A penetrating injury is any injury from a sharp object such as an injection needle, scalpel blade, dental bur or denture clasp contaminated with a patient's blood or saliva.

2. Duty of care

Dental practitioners have a common law legal duty of care to their patients. The Dental Board stipulates that dental practitioners must practise in a way that maintains and enhances public health and safety by ensuring that the risk of the spread of infectious diseases is prevented or minimised.³ Dental practitioners must ensure the premises in which they practise are kept in a clean and hygienic state to prevent or minimise the spread of infectious diseases, and must ensure that effective infection control measures are in place and are complied with in the practice.

Dental practitioners must:
• develop and implement work practices to ensure compliance with infection control standards;
• document their infection control protocols in a practice manual;
• ensure that all dental staff have read the practice manual and have been trained in the infection control protocols used in the practice;
• provide their dental staff with access to key resources such as these ADA Guidelines, the 2010 NHMRC Guidelines, and the relevant Australian Standards (AS/NZS 4815, Office-based health care facilities – Reprocessing of reusable medical and surgical instruments and equipment, and maintenance of the associated environment; or AS4187 for hospitals);
• implement a hand hygiene program consistent with the national hand hygiene initiative from Hand Hygiene Australia (HHA) which promotes the use of alcohol-based hand rubs in situations where hands are not visibly contaminated;
• implement specific training and education on personal protective equipment;
• implement systems for the safe handling and disposal of sharps;
• implement systems for processing of reusable instruments and devices;
• implement systems for environmental cleaning, and two cycles of cleaning for environmental surfaces;
• implement systems to prevent and manage occupational exposure to bloodborne viruses;
• have in place a system of reporting, monitoring and rectifying breaches of infection control protocols (which would involve addressing this topic in staff meetings and recording the outcomes from such discussions);
• ensure an immunisation program for dental staff is in place and is in accordance with the current edition of the Australian Immunisation Handbook;
• maintain a vaccination record for each member of the dental staff (see Section 3 of this document for a list of recommended immunisations);
• maintain an allergy record for each member of the dental staff;
• maintain a record of workplace incidents and accidents (including sharps injuries) as required by national OHS legislation;
• be aware of their immune status;
• follow through after potential exposures to bloodborne viruses, including reporting the incident if it was an occupational exposure, undergoing testing, and if necessary, seeking specialist medical management (note that it is not necessary for practitioners to stop performing EPPs after the exposure, unless they are found to have become infected with the bloodborne virus);
• seek expert medical advice from an infectious diseases specialist familiar with the requirements of dental practice or from an expert advisory panel if diagnosed with a bloodborne virus (such advice could include a prohibition on undertaking exposure prone procedures (EPPs) if viraemic); and
• ensure that, in attending a patient, they take such steps as are practicable to prevent or minimise the spread of infectious diseases.

In addition, under workplace health and safety legislation, practice owners have an obligation to provide and maintain a safe working environment for employees and for members of the public. This means that practice owners must provide their employed dental practitioners and dental staff with the required materials and equipment to allow these employees to fulfil their legal obligations for implementing effective infection control in their workplace. Likewise, employers of dental practitioners should also consider, and comply with, relevant anti-discrimination, privacy, industrial relations and equal employment opportunity legislation.

It is a breach of anti-discrimination laws for dental practitioners to refuse to treat or impose extra conditions on a patient who has a disability such as being infected with or being a carrier of a bloodborne virus. The law demands that dental practitioners take reasonable steps to accommodate a patient's disability. Relevant State, Territory and Commonwealth legislation is listed in the References and Additional Reading.

³ See the Dental Board of Australia Guidelines on Infection Control, July 2010, www.dentalboard.gov.au/Codes-and-Guidelines.aspx

Infected dental practitioners

Dental practitioners who are infected with, or who are carriers of, bloodborne viruses should seek the advice of infectious disease specialists familiar with the requirements of dental practice and an advisory panel regarding their fitness to practice. This includes seeking treatment, which may modify their illness to the extent that restrictions on practice can be lifted. They may need to modify their clinical practice, and this includes not undertaking EPPs, in accordance with the relevant policies of the Dental Board and the current CDNA Australian National Guidelines for the Management of Health Care Workers known to be infected with Blood-Borne Viruses (CDNA Australian National Guidelines). It is not appropriate for a practitioner to rely on their own assessment of the risk that they pose to patients. If a dental practitioner knows or suspects that they have been infected with a bloodborne virus, they should consult an appropriately experienced medical practitioner for their management.

The Dental Board stipulates that all dental practitioners must be aware of their infectious status for the bloodborne viruses HBV, HCV and HIV. Under the current Dental Board policy (July 2010), practitioners diagnosed with a bloodborne viral infection must cease performing EPPs if viraemic. Risks of transmission from clinician-to-patient or from patient-to-clinician are dependent on a range of factors including the infectivity of the individual (for example viral load and effect of viral treatments), the clinical treatment type, and operator skill and experience.

Effective anti-viral drug treatment protocols reduce the infectivity of individuals, and persistent negative results for PCR may result in a review of the infectious status of the practitioner. The CDNA Australian National Guidelines stipulate that HIV antibody positive practitioners must not perform EPPs, and impose limits on practitioners infected with HBV or HCV according to their infectivity as assessed by viral load and antibody levels. Dental practitioners must not perform EPPs while they are HBV DNA positive, but may be permitted to return to normal clinical work following spontaneous clearing of HBV DNA or clearing of HBV DNA in response to anti-viral treatment. Likewise, dental practitioners must not perform EPPs while they are HCV RNA positive, but may be permitted to return to normal working arrangements and perform EPPs after successful treatment or following spontaneous clearing of HCV RNA. The CDNA Australian National Guidelines recommend that testing for HBV and HCV should be performed three-monthly and yearly, respectively, to ensure that virus levels remain undetectable. As a result of limitation of practice, further HCV RNA and HBV DNA testing will be required in such cases for an extended period.

While the protection of the public's health is paramount, employers must ensure that the status and rights of infected staff members as employees are safeguarded. Anti-discrimination, privacy, industrial relations and equal opportunity laws apply.

This policy also applies to students. Upon entry into university dental training programs, students are required to undergo testing for BBVs. If found to be positive for one or more BBVs, they must not proceed with their dental studies. Intending students with a bloodborne viral infection must be advised that they will not be able to complete their clinical course requirements or be allowed to practice as a dental practitioner. Advice on alternative careers and counselling should be made available.
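The per-injury transmission chances cited earlier (roughly one in three for HBV in a non-immune person, one in 30 for HCV, and one in 300 for HIV) can be compared with a short calculation. This is an illustrative sketch using only the figures quoted in the text; actual risk depends on the infective status of the source and other clinical factors.

```python
# Approximate per-injury transmission probabilities cited in the text for a
# contaminated penetrating injury to a non-immune person.
RISK_PER_INJURY = {"HBV": 1 / 3, "HCV": 1 / 30, "HIV": 1 / 300}

def relative_risk(virus: str, baseline: str = "HIV") -> float:
    """How many times riskier one virus is than the baseline, per injury."""
    return RISK_PER_INJURY[virus] / RISK_PER_INJURY[baseline]

print(f"HBV is {relative_risk('HBV'):.0f}x the per-injury risk of HIV")
print(f"HCV is {relative_risk('HCV'):.0f}x the per-injury risk of HIV")
```

The hundredfold gap between HBV and HIV per-injury risk is one reason HBV immunisation of dental staff is treated as non-negotiable.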
Standard precautions of infection control

The following standard precautions form the basis of infection control and must be carried out routinely for all patients.

1. Hand hygiene

Hand hygiene is a general term applying to processes aiming to reduce the number of microorganisms on hands. This includes either the application of a waterless antimicrobial agent, e.g. alcohol-based hand rub (ABHR), to the surface of the hands, or the use of soap/solution (plain or antimicrobial) and water. Comprehensive information on contemporary hand hygiene measures is found on the Hand Hygiene Australia (HHA) website, www.hha.org.au. This site also has posters on ‘How to Hand Rub’ and ‘How to Handwash’ which can be downloaded for use in dental practice.

Hand hygiene must be undertaken before and after every patient contact. Simply put, the HHA protocol is to use an ABHR for all clinical situations where hands are visibly clean. The normal routine in dental practice should be for dental staff to use ABHR between patient appointments and during interruptions within the one appointment. Unlike detergents, ABHR do not remove skin lipids and they do not require paper towel for drying, and ABHR can be used as often as is required.

Hands must be washed with soap and water when visibly dirty or contaminated with proteinaceous material, or visibly soiled with blood or other body fluids. The rationale is that washing hands with soap and water is preferred in these situations because it guarantees a mechanical removal effect, followed by patting dry with single use towels. Hands must always be washed at the start of a working session, before gloves are put on and after they are taken off, after toilet breaks, and on leaving the surgery at the end of the day. If hands are washed, wet hands must be dried with single use linen or disposable paper towels. Handwashing should be undertaken in dedicated (clean) sinks preferably fitted with non-touch taps (or carried out using a non-touch technique) and not in the (contaminated) sinks used for instrument cleaning. If touch taps are used the taps may be turned on and off with a paper towel.

Both alcohol-based gels and solutions with proven efficacy that have been designed for use in healthcare settings are available. A range of ABHR products are registered with the TGA and these contain a sufficiently high level of alcohol (ethanol or isopropanol) to achieve the desired level of decontamination, and will leave the hands in a dry state after being rubbed on for 15-20 seconds. ABHR products designed for domestic use lack TGA registration; as a result, such domestic products must not be used in clinical settings. Practitioners must not use ABHR products that do not carry TGA approval. There is insufficient evidence at present to recommend the use of alcohol-containing foams for hand hygiene.

ABHR must only be used on dry skin, because having wet hands dilutes the product thus decreasing its effectiveness. The hand rub is applied onto dry hands and rubbed on for 15-20 seconds, after which time the hands will be dry. Washing hands with soap and water immediately before or after using an ABHR is not only unnecessary, but may lead to dermatitis. For this reason, it is both desirable and convenient to position ABHR dispensers close to the clinical working area (but away from contamination by splash and aerosols), rather than at an existing handwashing sink. It is not permitted to ‘top up’ bottles of ABHR because the outside of the dispenser may become contaminated; once empty, the dispenser should be discarded and not re-used. Attempts to recycle/re-use ABHR dispensers have not proven to be cost effective in Australia to date.

Suitable ABHR will typically contain a skin emollient to minimise the risk of skin irritation and drying, and have minimal colour and fragrances. The initial use of ABHR by staff with existing skin irritation often results in a stinging sensation; however, this usually declines over several weeks with the ongoing use of an emollient-containing ABHR. If symptoms persist, medical opinion should be sought. Regular use of skin moisturisers both at work and at home should be promoted, bearing in mind that any moisturising skin care products used in the dental practice must be compatible with the ABHR used. Dental staff must be educated regarding the correct use of ABHR and handwashing products, and on caring for their hands. For further information on hand decontamination with ABHR, see www.hha.org.au.

12 ©2012
Hand care

Hands must be well cared for, because intact skin is a first line defence mechanism against infection. Damaged skin can not only lead to infection in the host, but can also harbour higher numbers of microorganisms than intact skin and hence increase the risk of transmission to others. Damaged skin in dental practitioners and clinical support staff is an important issue because of the high frequency of dry, itchy skin from irritant contact dermatitis, which is primarily caused by frequent and repeated use of handwashing products – especially soaps, other detergents and paper towel use – that result in skin drying. Other factors that may contribute to dermatitis include fragrances and preservatives in hand care products (which can cause contact allergies), donning gloves while hands are still wet, using hot water for handwashing, failing to use moisturisers, and using poor quality paper towels.

The hands of dental staff should be free of jewellery and false nails. All hand, wrist or nail jewellery should be removed prior to putting on gloves as its presence compromises the fit and integrity of gloves and also promotes significant growth of skin microorganisms. (A plain band ring such as a wedding ring may be left on for non-surgical procedures but may cause irritation of the underlying skin, in which case it must not be worn.) Artificial fingernails can harbour microorganisms and must not be worn. Preferably no nail polish should be worn by dental staff; any nail polish should be clear. All fingernails must be kept short to prevent glove tears and to allow thorough cleaning of the hands, and any cuts or abrasions covered with waterproof dressings. Because lacerated, chafed or cracked skin can allow entry of microorganisms, any cuts or open wounds need to be covered with a waterproof dressing.

2. Personal protective equipment

The wearing of protective personal clothing and equipment where aerosols are likely to be generated is an important way to reduce the risk of transmission of infectious agents. Not only must dental practitioners and clinical support staff be provided with all appropriate necessary protective clothing and equipment for the procedure being undertaken, they also need to be educated about how to use these items correctly. Barrier protection, including gloves, mask, eyewear and gown, must be removed before leaving the work area (e.g. dental surgery, instrument processing or laboratory areas).

Gloves

Dental practitioners and clinical support staff must wear gloves whenever there is risk of exposure to blood, saliva or other body secretions or when hands will come in contact with mucous membranes. This means gloves must be worn for all clinical procedures. Gloves also need to be worn when cleaning instruments and environmental surfaces. The type of glove worn must be appropriate to the task. Non-sterile examination gloves may be worn for non-surgical general dental procedures. Sterile gloves must be worn when a sterile field is necessary for procedures such as oral, periodontal or endodontic surgery. Heavy-duty utility, puncture-resistant gloves must be used during instrument cleaning; for instance, disposable latex or nitrile gloves are appropriate for cleaning the dental operatory during changeover between patient appointments. These utility gloves can be reused, but must be washed in detergent after each use, stored dry and replaced if torn, cracked, peeling or showing signs of deterioration. Gloves supplied for use in dental practice are required to conform to AS/NZS 4011.

A new pair of gloves must be used for each patient and changed as soon as they are cut, torn or punctured. Gloves used in patient care must not be washed or reused. Wearing gloves does not replace the need for hand hygiene because hands may still become contaminated as a result of manufacturing defects in new gloves that were not obvious to the user, or because of damage (such as tears and pinpricks) that occurs to the gloves during use. Gloves must be removed as soon as clinical treatment is complete and hand hygiene undertaken immediately to avoid the transfer of microorganisms to other patients or environments. Gloves must be removed or overgloves worn before touching any environmental surface without a barrier or before accessing clean areas. The Practice Manual should list the protocols for glove wearing and for hand hygiene before gloving and after de-gloving.

The use of powder-free gloves for patient care is recommended strongly because this reduces exposure of staff to latex proteins via both respiratory and contact routes, and thereby minimises the risk of developing latex allergy. If the dental practitioner, clinical support staff member or patient has a proven or suspected allergy to latex, alternatives such as neoprene or nitrile gloves must be used, rather than disposable latex gloves. A latex-free protocol must also be followed, including use of non-latex rubber dam and use of non-latex materials such as prophylaxis cups. Note that patients with multiple food allergies have an elevated possibility of latex allergy and it is prudent to use a latex-free approach when treating such patients. For further information on latex sensitivity see the ADA’s The Practical Guides and www.ada.org.au.

Masks

Dental procedures can generate large quantities of aerosols of three microns or less in size and a number of diseases may be transmitted via the airborne (inhalational) route. In the dental surgery environment, the most common causes of airborne aerosols are the high speed air rotor handpiece, the ultrasonic scaler and the triplex syringe. The aerosols produced may be contaminated with bacteria and fungi from the oral cavity (from saliva and dental biofilms), as well as viruses from the patient’s blood. Therefore, dental practitioners and clinical support staff must wear suitable fluid-resistant surgical masks that block particles of three microns or less in size. Because masks protect the mucous membranes of the nose and mouth, they must be worn wherever there is a potential for splashing, splattering or spraying of blood, saliva or body substances, or where there is a probability of the inhalation of aerosols with a potential for transmission of airborne pathogens. However, it is suggested that masks be worn at all times when treating patients to prevent contamination of the working area with the operator’s respiratory or nasal secretions/organisms. Surgical masks for dental use are fluid-repellent paper filter masks and are suitable for both surgical and non-surgical dental procedures that generate aerosols. The filtration abilities of a mask begin to decline with moisture on the inner and outer surfaces of the mask after approximately 20 minutes. Masks supplied for use in dental practice are required to conform to AS 4381.

The following are some basic protocols to be observed in relation to masks as items of personal protective equipment.

Masks must:
• be fitted and worn according to the manufacturer’s instructions – this means using both tie strings where the mask has two ties, and adapting the mask to the bridge of the nose
• cover both the nose and mouth
• where possible be folded out fully to cover the chin and upper neck
• be removed by touching the strings and loops only
• be removed and discarded as soon as practicable after use.

Masks must not:
• be touched by the hands while being worn
• be worn loosely around the neck while the dental practitioner or clinical support staff member walks around the premises.

Eye protection

Dental practitioners and clinical support staff must wear protective eyewear to protect the mucous membranes of the eyes during procedures where there is the potential for penetrating injury or exposure to aerosols, or splattering or spraying with blood, saliva or body substances. Eyewear protects the eye from a broad range of hazards including projectiles and for this reason eyewear should be worn for most clinical procedures. Protection from projectiles is particularly important during scaling, when using rotary instruments, when cutting wires and when cleaning instruments and equipment. Eyewear must be optically clear, anti-fog, distortion-free, close-fitting and should be shielded at the sides. Spectacles for vision usually do not provide sufficient protection. Prescription lenses are not considered a substitute for protective eyewear unless they are inserted in frames the design of which provides a suitable level of protection to the orbital region. An alternative to protective eyewear is a face shield; however, this does not protect from inhaled microorganisms and must be worn in conjunction with a surgical mask. Reusable or disposable eyewear that is supplied for use in dental practice is required to conform to AS 1337.

All patients must be offered protective eyewear. Patients must be provided with protective eyewear to minimise the risk of possible injury from materials or chemicals used during treatment. Tinted lenses may protect patients from the glare of the operating light. If patients refuse to wear the protective glasses, the risks should be explained and the refusal noted in their dental records.

Protective clothing

Protective clothing (e.g. reusable or disposable gown, laboratory coat or uniform) should be worn while treating patients when aerosols or splatter are likely to be generated or when contamination with blood or saliva is possible. The most suitable type of protective clothing varies according to the nature of the procedure and the equipment used and is a matter of professional judgement. Where there is a risk of large splashes with blood or body substances, impermeable protective clothing must be worn. Items of protective clothing must be changed as soon as possible when they become visibly soiled, after repeated exposure to contaminated aerosols, or at the end of the day. Items of disposable protective clothing should be placed in general waste after use, or if visibly contaminated with blood these must be disposed of according to local waste management regulations. Uniforms worn by dental practitioners and clinical support staff must be clean and in good condition. The protective gown worn in the clinical area must be removed before eating, drinking, taking a break or leaving the practice premises for a meal or other break.

Footwear

Dental practitioners and clinical support staff should wear enclosed footwear that will protect them from injury or contact with sharp objects (e.g. accidentally dropped sharps or spilt chemicals).

Surgical procedures and aseptic technique

The principles of sterile aseptic technique must be applied to all surgical procedures undertaken in the dental practice setting. Sterile gloves must be used when EPPs such as incision into mucosal soft tissues, surgical penetration of bone or elevation of a muco-periosteal flap are undertaken. In addition, sterile gloves are required for the surgical removal of teeth, for periodontal surgery, surgical endodontics and for dental implant placement. Sterile gloves supplied for use in dental practice are required to conform to AS/NZS 4179. The following additional requirements are necessary to provide for asepsis and a sterile field: these procedures include specific requirements for surgical handwashing (using an anti-microbial handwashing solution), gowning and gloving, and long hair must be tied back and covered and beards must be covered.

Occasionally, for minor oral surgery procedures, conditions of limited access and poor visibility will exist such that there is a risk of a penetrating injury to dental staff with the subsequent possibility of exposure of the patient to the blood of the dental staff member. Consequently, it is essential that all sharp instruments must be handled and used with care, and that the techniques employed minimise the risk of penetrating injuries to dental staff.

Management of sharps

The practise of dentistry frequently involves the use of sharp instruments. Inappropriate handling of sharps, both during and after treatment, is the major cause of penetrating injuries which involve potential exposure to bloodborne diseases in the dental surgery. Sharp instruments such as scalpels and scalers must never be passed by hand between dental staff members and must be placed in a puncture-resistant tray or bowl after each use. Instruments and sharp items must be carried from the surgery to the sterilising area in a lidded puncture-resistant sharps transport container.
Disposal of sharps

The clinician who has used a disposable sharp item must be responsible for its immediate safe management or disposal after use. This must be at the point of use (i.e. the operatory or treatment room) unless transferred in appropriate containers. Needles must not be re-sheathed unless an approved recapping device or single-handed technique is used. Contaminated needles must never be bent or broken by hand or removed from disposable syringes. Used disposable needle syringe combinations, needles, scalpel blades, burs, empty or partially used cartridges of local anaesthetic solution, orthodontic bands, endodontic files and other single use sharp items must be discarded in clearly labelled, puncture and leakproof containers. Appropriate sharps containers are those that conform to AS 4031 or AS/NZS 4261 as applicable. A separate sharps container should be located in each operatory, close to the point of use of any disposable sharp. Sharps containers must be placed in a safe position within the treatment room to avoid accidental tipping over and must be out of the reach of small children. Sharps containers must be sealed when they have been filled to the line marked on the container; these must not be overfilled and must not be compacted by hand.

The dental practice must have an easily accessible, clear set of written instructions on the appropriate action to take in the event of an exposure incident such as a sharps injury. These instructions must be understood and followed by all dental staff. For further information see Appendix: Blood and Body Fluid Exposure Protocol.

Management of clinical waste

Management of medical and related waste must conform to local State or Territory regulations. Waste in the dental practice should be separated according to its category (medical or non-medical) at the point of generation. Medical waste includes recognisable human tissues (excluding teeth) and material or solutions containing free-flowing blood. Such waste must be placed in appropriate leak-resistant bags and then yellow containers bearing the international black biohazard symbol and clearly marked medical waste. Bags and containers for medical waste should be appropriately colour coded and labelled as biohazard or medical waste. Standard precautions (gloves, mask, protective eyewear) must be used when handling medical waste bags and containers. Medical waste and sharps containers must be stored securely before collection by licensed waste contractors for final disposal using approved technologies by licensed/accredited contractors. Medical waste and hazardous chemical waste (which includes some chemicals and mercury used in dental practise) must never be disposed of at local refuse tips that use compaction of an open landfill.

Extracted teeth once cleaned of visible blood and saliva may be given to the patient, or alternatively wrapped in paper towel or placed in a disposable cup and covered with setting plaster before disposal in the general waste. Local regulations on waste management and disposal of teeth may apply. In some states and territories it is illegal to incinerate teeth restored with amalgam because of issues with mercury vapour emissions; therefore those teeth must not be placed in medical waste or into sharps containers. For further information see the ADA’s The Practical Guides and www.ada.org.au.

Environment

A range of environmental controls can be used to reduce the risk of transmission of infectious agents in the dental practice. These should be considered when designing or refurbishing a dental practice.
Design of premises

The design of the premises and the layout of the dental surgery and treatment areas are important factors in implementing successful infection control. State and Territory public health regulations require that premises be kept clean and hygienic. Work areas should be well lit and well ventilated with sufficient uncluttered and easily cleaned bench space to accommodate necessary equipment.

The dental operatory and the instrument reprocessing rooms must have clearly defined clean and contaminated zones. The clean zones of the dental practice include office areas, the staff room, waiting and reception areas as well as those areas used for storage of supplies and of sterilised instruments and equipment. The contaminated zone is the area contaminated with material from patient care, as well as the instrument cleaning area. In the dental operatory, workflow for instruments and materials must be from the clean to the contaminated zone. Likewise, staff may move from the clean zone to the contaminated zone but never the reverse direction. Care must be taken to avoid contaminated instruments or equipment re-entering clean areas.

Work surfaces and bench tops in treatment areas must be non-porous, impervious to water, smooth without crevices and have sealed joins to facilitate cleaning and prevent the accumulation of contaminated matter. Surfaces of dental units must be impervious as they may become contaminated with potentially infective material. Floor coverings in the dental operatory must be non-slip and impervious with sealed joins. Welded vinyl flooring is widely used as it is long wearing and easily cleaned. Coved joints of the flooring with the walls are preferred for ease of cleaning. Carpet is acceptable in the waiting room but must not be used in clinical, laboratory and instrument reprocessing areas as it is not impervious.

Eating and common room areas for dental staff must be separate from patient treatment areas and the dental laboratory. Food must not be stored in a refrigerator with dental materials, sealed clinical specimens or medical products such as drugs or blood because of the risks of cross-contamination. Lunchroom crockery must not be washed in the handwash sinks or in instrument wash basins.

Treatment areas

Routine cleaning of the contaminated zone within the dental operatory is necessary to maintain a safe environment because deposits of dust, soil and microbes on environmental surfaces can transmit infection. Dental chairside assistants should put on new gloves for cleaning working surfaces during the changeover between patients, rather than using gloves contaminated during chairside assisting work on the previous patient. Patient notes written up by hand or electronically must follow a protocol which prevents environmental contamination of the hard copy notes or computer keyboard. Computer keyboards in the dental operatory may harbour microorganisms such as Staphylococcus aureus (MRSA) and they should be covered where possible in treatment areas, and cleaned regularly in non-treatment areas. A number of keyboards are available that have flat surfaces and can be wiped over with detergent or with alcohol-impregnated wipes between patient appointments.

Cleaning the environment

Although surfaces such as floors, walls and curtains pose minimal risk of disease transmission in a dental practice, these surfaces nevertheless must be maintained in a clean and hygienic condition. Written cleaning protocols for the practice must be prepared, including methods and frequency of cleaning. Standard precautions (including wearing of personal protective equipment as applicable) must be implemented when cleaning these surfaces. The practice should develop a sequence so that areas including floors, window sills, door handles, telephone handsets are cleaned on a weekly basis. Moreover, a schedule for cleaning of solid surfaces in the waiting room must be prepared. Environmental surfaces such as bench tops outside the contaminated zone must be cleaned at least weekly using detergent and water. Walls, blinds and window curtains in patient care areas must be cleaned when they are visibly dusty or soiled. Sinks and wash basins must be cleaned at least daily, or more often if appropriate.

Because cleaning methods must avoid the generation of aerosols, damp dusting, dust-retaining mops and vacuum cleaners are recommended. Brooms must not be used in clinical areas as these disperse dust and bacteria into the air. Mops and cloths must be cleaned after use and allowed to dry before reuse; alternatively single use, disposable mop heads or cloths may be used.

A neutral detergent and warm water solution or commercially packaged pre-moistened, neutral detergent wipes should be used for all routine and general cleaning. Neutral pH detergents are best for environmental cleaning because they are less likely than acid or alkaline detergents to damage metals such as stainless steel or to cause skin irritation. Neutral detergents also leave little residue on surfaces. Fresh cleaning solutions of detergent should be prepared as instructed by the manufacturer daily. Containers for these fresh solutions should be emptied, washed and dried overnight prior to refilling for subsequent use. Working surfaces in the contaminated zone must be cleaned after every patient by wiping the surface with a neutral detergent. General work surfaces in the dental operatory that are outside the contaminated zone must be cleaned after each session or when they become visibly soiled.
Infection control strategies within the contaminated zone

The boundaries of the contaminated zone need to be clearly defined, because this has implications for surface management and for the placement of equipment. The goal during dental treatment is to contain contamination within this zone. A system of zoning aids and simplifies the decontamination process, both by determining what is touched and where the spread of droplets, splash and splatter will occur. Reducing the extent of contamination of the dental operatory can be achieved in part by use of rubber dam, pre-procedural antiseptic mouthrinses, high volume evacuation and correct patient positioning. Rubber dam minimises the spread of blood or saliva. When rubber dam is not applied, high volume aspiration becomes essential.

1. Clean and contaminated zones

Within the dental surgery, clean and contaminated zones must be clearly demarcated. All dental staff must understand the purpose of and requirements within each zone, and adhere to the outlined protocols. This applies particularly in an operatory utilised by multiple dental practitioners, and where dental assistants are not routinely assigned to the same operatory. Clean areas include those surfaces and drawers where clean or sterilised instruments are stored and that never come in contact with contaminated instruments or equipment. Dental practitioners and clinical support staff should not bring personal effects, changes of clothing or bags into clinical areas where cross-contamination is likely to occur.

All surfaces and items within the contaminated zone must be deemed contaminated by the treatment in progress. These surfaces must be cleaned and the items in the zone disposed of, decontaminated, or cleaned and sterilised before the next patient is treated. Note: Any instruments placed into the contaminated zone for a treatment session but not used during that session must be regarded as contaminated.

It is recommended wherever possible that materials such as cotton rolls, dental floss, gingival retraction cord and restorative materials should be pre-dispensed from bulk supplies that are kept in drawers or containers to keep these bulk supplies free of contamination from splashes or aerosols. For this reason all bulk supplies such as opened boxes of gloves, cotton rolls or gauze must be stored outside the contaminated zone and protected from contamination from splashes and aerosols. However, if additional instruments and materials have to be retrieved from outside the contaminated zone during a patient treatment, it must be by a method that does not contaminate other instruments or materials in the drawers. The options include:
• drawers are opened by elbow touch, or with the use of overgloves or single use barriers on drawer handles
• retrieval of instruments and materials is undertaken using a no-touch technique such as use of transfer tweezers.

If transfer tweezers are used, these must be kept separate from the other instruments. Otherwise, gloves must be removed and hands decontaminated with ABHR before dispensing additional materials.

Clinical contact surfaces in the contaminated zone that are not barrier protected must be cleaned after each patient. Any surface barriers used on such surfaces should be disposed of after each patient treatment, and a new barrier placed. Items where barrier protection may be required include:
• the operating light handle, the X-ray head and the polymerising light
• the triplex syringe, tubing for suction, intra-oral camera and fibre-optic illuminator
• the bracket table and handle, and instrument cradles.

For equipment that is difficult to clean, a protective covering such as a plastic wrap may be necessary and use of barriers may be preferable. However, if barriers are not used, then a documented cleaning protocol should be followed.
Whenever it is necessary to move from the contaminated zone to a clean zone to touch non-clinical items without a barrier, gloves must be removed and hands washed or decontaminated with ABHR before touching the item. The individual then must re-glove before re-entering the contaminated zone.

Waterlines and water quality

Most dental unit waterlines contain biofilm, which acts as a reservoir of microbial contamination. Biofilm in dental unit waterlines may be a source of known pathogens (e.g. Pseudomonas aeruginosa, non-tuberculous mycobacteria, and Legionella spp.). Biofilm levels in dental equipment can be minimised by using a range of measures, including water treatments using ozonation or electrochemical activation; chemical dosing of water (e.g. with hydrogen peroxide, peroxygen compounds, silver ions, or nanoparticle silver); flushing lines (e.g. triple syringe and handpieces) after each patient use; and flushing waterlines at the start of the day to reduce overnight or weekend biofilm accumulation.7 An independent water supply can help to reduce the accumulation of biofilm.

Waterlines must be cleaned and disinfected in accordance with the manufacturer’s instructions, and all waterlines must be fitted with non-return (anti-retraction) valves to help prevent retrograde contamination of the lines by fluids from the oral cavity.

Air and waterlines from any device connected to the dental water system that enters the patient’s mouth (e.g. handpieces, ultrasonic scalers, and air/water syringes) should be flushed for a minimum of two minutes at the start of the day and for 30 seconds between patients. Flushing each day has been shown to reduce levels of bacteria in dental unit waterlines.5 This is particularly important after periods of non-use (such as vacations and long weekends).

Sterile irrigants such as sterile water or sterile saline as a coolant are required for surgical procedures such as dentoalveolar surgery and dental implant placement, and when treating immunocompromised patients. Similarly, water for tooth irrigation during cavity preparation and for ultrasonic scaling should be of no less than potable standard (Australian Drinking Water Guidelines 2011). In line with the Australian drinking water quality guidelines, it is recommended that water from dental unit waterlines contain less than 200 colony forming units per mL.6

The manufacturer’s directions for appropriate methods to maintain the recommended quality of dental water and for monitoring water quality should be followed. Bacterial levels can be tested using commercially available test strips or through commercial microbiology laboratories.

Single use items

Single ‘one patient’ use sterile instruments should be used whenever indicated by the clinical situation, but must be discarded after use. These items include, but are not limited to, local anaesthetic needles and cartridges, suture needles, suture materials, and scalpel blades. Dental items designated as single use by the manufacturer must not be reprocessed and reused on another patient. Suture needles and scalpel blades must be used for one patient and then disposed of immediately into an approved sharps container.

Dental local anaesthetic solution and needles must be sterile at time of use and are single-patient use only. Incompletely used local anaesthetic cartridges must be discarded after each patient use. Cartridges of local anaesthetic must be stored appropriately to prevent their environmental contamination by aerosols, splatter and droplets generated by clinical patient care. Containers of medicaments, including topical anaesthetic tubes or jars and endodontic medicaments, must be kept free of environmental contamination.

Instruments that are very small and/or sharp and are difficult to clean should be considered single use, and must not be reused unless a validated and safe cleaning process is employed. This issue is very relevant to matrix bands, reamers and broaches, and stainless steel endodontic files. Such items are to be considered single use items as currently no cleaning method has yet been validated as being effective in removing organic material from these items.

5 Watanabe E, Agostinho AM, Matsumoto W, Ito I. Dental unit water: bacterial decontamination of old and new dental units by flushing water. Int J Dent Hyg. 2008;6(1):56-62. See also McKnight SA 3rd, Ferguson BL, Martel CR, Pasley-Mowry C. How does time-dependent dental unit waterline flushing affect planktonic bacteria levels? J Dent Educ. 2002;66(4):549-55.
6 See CDC (2003) Guidelines for Infection Control in Dental Health-Care Settings, page 29: “…the number of bacteria in water used as a coolant/irrigant for nonsurgical dental procedures should be at a minimum < 500 CFU/mL, the regulatory standard for safe drinking water established by EPA and APHA/AWWA.”
7 Wirthlin MR, Marshall GW, Rowland RW. Formation and decontamination of biofilms in dental unit waterlines. J Periodontol. 2003 Nov;74(11):1595-609.

©2012
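The water-quality thresholds quoted above (less than 200 CFU/mL recommended for dental unit waterlines; < 500 CFU/mL as the potable-water limit quoted by the CDC) can be applied mechanically to a test result. The sketch below is illustrative only — the function and constant names are my own, not from the guidelines or any standard.

```python
# Hypothetical helper for triaging a waterline bacterial count against the
# two thresholds stated in the guideline text. Names are assumptions.

RECOMMENDED_CFU_PER_ML = 200    # recommended limit for dental unit waterline output
POTABLE_LIMIT_CFU_PER_ML = 500  # EPA/APHA/AWWA drinking water standard quoted by CDC

def classify_waterline_result(cfu_per_ml: float) -> str:
    """Return an action category for a measured bacterial count (CFU/mL)."""
    if cfu_per_ml < RECOMMENDED_CFU_PER_ML:
        return "acceptable"
    if cfu_per_ml < POTABLE_LIMIT_CFU_PER_ML:
        return "above recommended level - review flushing and dosing regimen"
    return "exceeds potable limit - treat lines and retest before clinical use"
```

For example, a strip reading of 150 CFU/mL would classify as acceptable, while 800 CFU/mL would prompt line treatment and retesting. The action wording is a sketch; the thresholds themselves come from the text above.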
Categories of instruments: infection risk relative to instrument use

Contaminated instruments can transmit infections to patients during clinical procedures. The risk of this happening is related to the site of use, and how much reprocessing or preparation for reuse is required for reusable instruments and equipment depends on their intended use. The Spaulding classification describes three instrument/risk categories (critical, semi-critical and non-critical), each of which has specific reprocessing requirements.8 For guidance on specific dental items in office practice, see section 12.

Category of instrument / Reprocessing requirements*

• Critical item: where there is entry or penetration into sterile tissue, cavity or bloodstream (e.g. extraction, surgical dental procedures such as the removal of a fully impacted tooth, and endodontic procedures on vital pulp tissue). These instruments must be sterile at the time of use and must be either ‘single use disposable’ or capable of being steam sterilised. Critical items must be used immediately after sterilisation or bagged prior to sterilisation and kept stored in bags until used. It may be appropriate to use batch control identification for these surgical instruments. Examples: dental forceps and elevators, flap retractors and surgical burs, instruments used in the placement of implants, implantable items including mini implants, and surgical dental handpieces.

• Semi-critical item: where there is contact with intact non-sterile mucosa or non-intact skin. Instruments should be ‘single use disposable’ or cleaned and re-sterilised after each patient. Instruments used in semi-critical procedures should, as a general rule, be sterilised between patients but do not need batch control identification and are not required to be sterile at the point of use. After processing, semi-critical instruments should be stored in a way to prevent contamination prior to use by being kept bagged in closed drawers or in dedicated containers such as instrument cassettes. Examples: mouth mirrors, dental tweezers and probes, restorative instruments, and metal impression trays.

• Non-critical item: where there is contact with intact skin (lowest risk). Cleaning alone with detergent and water is generally sufficient; in some rare instances thermal disinfection using heat and water is acceptable and professional judgement needs to be exercised (e.g. thermal disinfection of denture polishing buffs may be appropriate as these are unlikely to be contaminated with blood). Instruments must be sterilised where possible, and when not possible a barrier must be placed (e.g. curing light tip). Examples: prosthetic gauges and measuring devices, Willis gauges, face bows, bib chains, Dappens dishes, protective eyewear, and other non-critical items when used occasionally in the mouth (e.g. Le Cron carver); after processing, these instruments should be stored in the same way as semi-critical instruments to prevent environmental contamination prior to use.

Equipment and instruments that are used in the treatment of mucosal lesions or diseased soft tissue and that come in direct contact with mucosa and gingiva must be single use. Examples are electrosurgery, cryotherapy and related devices and tips.

Instruments stored in bags that are found to be damaged must be re-sterilised before use.

Instrument reprocessing

Because contaminated instruments can transmit infections between patients, correct reprocessing of instruments between each patient use is essential. The type of instrument and its intended use will determine the method of reprocessing; as a general rule, if an instrument cannot be cleaned it cannot be safely reprocessed. Reprocessing of instruments must be in accordance with AS/NZS 4815 for office practice or AS/NZS 4187 for hospital practice.

Instrument reprocessing area and workflow

Part of the dental premises must be designated as the reprocessing area for reusable instruments (including cleaning, packaging and sterilising) and not used for any other purpose. Ideally, this should be a dedicated room separate from the treatment room(s), but if that is not possible because of limited space, instrument reprocessing should occur well clear of the contaminated zone, with good workflow processes established and when there is minimal risk of aerosol contamination of the reprocessing area.

8 This is based on the Spaulding classification system as described in section B4.1 of the 2010 NHMRC Guidelines.
* Reprocessing is all steps necessary to make a contaminated reusable device ready for its intended use. These steps may include cleaning, functional testing, packaging, labelling, disinfection and sterilisation.
Design of the reprocessing area

The instrument reprocessing area must be appropriate in layout and size for the volume of instruments being reprocessed. The following are design features of the reprocessing area that will facilitate successful infection control:
• instrument flow in one direction – from dirty to clean;
• smooth work surfaces without crevices, made of non-porous materials such as stainless steel or laminate, to facilitate cleaning;
• no inaccessible areas where moisture or soil can accumulate;
• non-slip water-impervious flooring that is readily cleanable;
• good lighting to minimise the risk of sharps injury and enable inspection of cleaned instruments;
• efficient ventilation;
• work benches of a standard height and storage cupboards located at heights that minimise bending over or stretching overhead;
• sufficient bench space for drying and packaging areas to enable efficient work practices;
• sufficient drawers, cupboards and shelves to keep work benches as clutter-free as possible and to facilitate storage of sterilised packages as well as general items such as labelling guns, logbooks, cleaning agents and self-sealing bags;
• sinks that are deep enough, with taps provided with anti-splash devices to prevent splashing – ideally there should be several sinks, one for handwashing and one for washing contaminated instruments;
• hot and cold water taps that should ideally be non-touch or electronic in operation, with liquid handwash dispensers operated by elbow, knee or foot.

The reprocessing area must be divided into distinct areas for:
• receiving, cleaning and decontamination;
• preparation and packaging;
• sterilisation;
• a cooling area for sterile items awaiting storage; and
• storage.

To minimise particulate contamination and bio-burden (pathogenic bacteria, fungi and viruses), the principles of environmental control need to be observed. A systematic approach to the decontamination of instruments after use will ensure that dirty instruments are segregated from clean items. The cleaning process should flow in one direction, from contaminated area and items to clean area and items. Remember: instruments must pass in one direction only – from contaminated through to clean.

Processed instruments must not be stored in an area where contaminated instruments are held or cleaned, or where there is a possibility of contamination from organisms carried in droplets or aerosols. Trays of instruments, when removed from the steam steriliser, should be placed on racks and not directly on the bench, to prevent damage from water condensation under the cooling packages. This is essential to prevent damage to packs.

Transfer of contaminated instruments and transfer of sharps

Instruments should be carried to the sterilising area in a cassette or in a container that preferably is lidded and puncture-proof, to minimise handling and prevent the potential for a penetrating injury if the container is dropped. The contaminated instruments should be carried with gloved hands to the cleaning area and placed on the bench in the ‘contaminated zone’ of the sterilising room. The gloves must then be taken off and hands washed. Once the cleaning process commences, heavy-duty utility gloves must be worn.

If instrument washing must take place in the clinical or laboratory area due to limitations of space, then contaminated areas and instrument washing sinks must be clearly designated.
Cleaning

Used dental instruments are often heavily contaminated with blood and saliva unless pre-cleaned by wiping at the chairside. Such pre-cleaning is strongly recommended because it improves the safety and effectiveness of instrument reprocessing. In addition, removing the organic material lessens the chance of microorganisms multiplying on the instruments before reprocessing commences.

Cleaning significantly reduces the number of microorganisms that need to be killed during sterilisation or disinfection. The presence of organic material left on instruments/equipment may prevent the penetration of steam during sterilisation; therefore instruments must be completely cleaned before being disinfected or sterilised. If the item is not clean, the sterilisation process for that item will be compromised. Even when potentially disease-producing organisms are killed, released endotoxins may remain and may sometimes cause fevers in patients if introduced into cuts or wounds. Residual contaminants, even if sterile, can produce severe complications such as granulomas if they enter a cut in skin or an ulcer in a breach of the oral epithelium.

Dental instruments and devices that are contaminated with blood, saliva, cements and other contaminants must be treated to prevent the substances drying on them. If saliva dries and coagulates – particularly if blood is present or if hot water is used for cleaning – it can entrap the organisms inside the mass formed and inhibit penetration of the sterilising/disinfecting agent. It is recommended that gross soil be removed from instruments by wiping them at the chairside onto an adhesive-backed sponge or dampened gauze on the bracket table, using a one-handed method to prevent the risk of sharps injury during the wiping action. Alternatively, if they are unable to be cleaned immediately, the instruments may be soaked in detergent or an enzymatic agent to prevent hardening of residue. This will reduce the need for intensive cleaning by hand at a later stage.

Clinical support staff who clean and reprocess instruments must be given formal training in the relevant procedures. These staff must use heavy-duty utility (puncture and chemical-resistant) gloves, and wear eye protection/face shield and a mask. A waterproof/fluid-resistant gown/apron is also recommended. Splashes of cleaning agents on a person’s skin must be washed quickly with clean water and then treated in accordance with the manufacturer’s instructions.

Instruments can be cleaned either by hand or mechanically (in either an ultrasonic bath or instrument washer/disinfector). Cleaning techniques should aim to avoid spraying liquids into the air.

Manual cleaning

Cleaning dental instruments by hand is the least efficient method. Lukewarm tap water is suitable for manual cleaning of instruments. Hot water is not used at this stage as it coagulates protein, which increases the difficulty of cleaning. In a like manner, cold water solidifies lipids and should not be used. The instruments should be fully immersed in a dedicated instrument cleaning sink that is pre-filled with warm water and detergent. A long-handled instrument brush should be used to remove debris until the item is visibly clean. A wire bur brush maintained in good condition may be used for cleaning tungsten carbide and diamond burs.

A mildly alkaline, low foaming, free rinsing, non-abrasive liquid detergent should be used, as this is much more effective than a neutral pH detergent in removing blood and fatty substances. Common household detergents must not be used due to their high foaming properties and the difficulties in rinsing items free of detergent residue, which in turn can interfere with the sterilising/disinfecting process. In addition, too much foam prevents the operator from seeing instruments under the water in the sink and thereby greatly increases the risk of cuts and penetrating injuries from sharp instruments.

Mechanical cleaning

Mechanical cleaning of instruments can be carried out in instrument washers or ultrasonic cleaners. Automated mechanical cleaning is preferred to manual cleaning as it is more efficient, reduces the risk of exposure to blood, and reduces the risk of penetrating skin injuries from sharp or pointed instruments.9 The lid should be kept on the ultrasonic cleaner when in use to prevent dispersion of aerosols and droplets of fluids.

After either manual or mechanical cleaning, instruments should be checked visually under good lighting to ensure all soil/contaminants are removed, and those with visible residual soil/contamination must be re-cleaned. Damaged or rusted instruments must be repaired or discarded.

9 See article by Miller CH, Tan CM, Beiswanger MA, Gaines DJ, Setcos JC & Palenik CJ. ‘Cleaning dental instruments: measuring the effectiveness of an instrument washer/disinfector’ Am J Dent 2000;13(1):39-43.
Abrasive cleaners such as steel wool and abrasive cleaning powders should not be used, as these can damage instruments and residues may be left. After manual cleaning, instruments are to be rinsed thoroughly with warm to hot running water to remove all traces of detergent. Cleaning brushes used for manual cleaning must be washed, rinsed and then stored dry.

Ultrasonic cleaners that comply with AS 2773 may be used for instrument cleaning. Ultrasonic cleaners are particularly useful for cleaning jointed instruments such as scissors, stainless steel syringes or those with serrated beaks such as artery and extraction forceps, and dental burs which are reprocessable. Items must be free of visible soil before being placed in an ultrasonic cleaner. In addition:
• lids, gaskets and strainers must be cleaned daily;
• cleaning fluid must be changed a minimum of twice daily (or when it appears heavily contaminated);
• water must be de-gassed before use;
• instruments must be completely submerged in fluid;
• the lid must be closed during operation (to avoid dispersal of aerosols), and no part of the operator’s fingers or hands is permitted to be immersed in the fluid during operation of the cleaner;
• at the end of each day, the ultrasonic cleaner tank must be emptied, cleaned and left dry; and
• an aluminium foil test (or another approved performance test) must be performed daily and the result recorded.

Instrument washers/disinfectors are more efficient at pre-sterilisation cleaning than either ultrasonic cleaners or manual cleaning, especially for small items such as nickel-titanium endodontic files (following a validated protocol). Instrument washers are also more efficient than a domestic dishwasher; it is not acceptable to use a domestic dishwasher to process dental instruments. Instrument washers have a drying cycle that eliminates the need for a separate drying step. However, instrument washers must not be used as a substitute for sterilisation where the items can be sterilised.

There are both bench top and floor-mounted models of instrument washers for use in dental practice. These connect into the water supply and drainage systems and must be serviced according to the manufacturer’s instructions. Such systems must comply with AS/NZS 2945 or AS 3836. Washer/disinfectors must be well maintained and cleaned regularly to prevent formation of biofilms that could contaminate the instruments being processed. For further information see the ADA’s The Practical Guides and www.ada.org.au.

Drying instruments

As residual moisture may impede the sterilisation process, instruments to be sterilised by steam should be dried. Suitable methods include using a drying cabinet, using a lint-free cloth or wipe, and using a short rinse in very hot water.

Packaging prior to steam sterilisation

Instruments that must be sterile at time of use (i.e. critical instruments that penetrate normally sterile tissue) must be bagged or wrapped prior to sterilisation. After sterilisation, critical instruments must remain bagged or wrapped until use. In an emergency situation a critical instrument may be processed unbagged and then transported to the operatory in a sterile container for immediate use. Where possible, non-critical instruments should be stored in cassettes or bagged, since these methods facilitate storage and protect against contamination from aerosols.

Paper bags/wraps conforming to AS 1079.2 and textile linen wraps conforming to AS 3789.2 are suitable for steam sterilisation. Paper and synthetic packaging is designed to be used once and then discarded, as contact with steam alters its properties.
Packaging and wrapping materials must permit the removal of air, the penetration of steam into the pack, and the removal of steam and water vapour after sterilisation. Likewise, cassettes used for packaging instrument sets must be perforated to allow for penetration of steam and efficient drying. Instruments with hinges or ratchets must remain open and unlocked. Sharp instruments should be packaged in such a way as to prevent perforation of the pack.

Packs or bags must be sealed prior to processing. This can be done by using a heat sealing machine, applying steam steriliser tape, or by using bags that are self-sealing. String, domestic adhesive tape, staples and elastic bands are not suitable for sealing packs.

Identification colour-coded tapes on instruments must not be used, as these can prevent the penetration of steam under the tape, compromising patient safety. Adhesive stickers may harbour microorganisms in their adhesive layer and may detach from the instrument during surgery. Further, silicone rubber rings used to identify instruments may impede sterilisation and, if used, microorganisms may be present under the rubber ring after sterilisation, thus compromising the sterility of the instrument. Therefore, etching of instruments as a method of identification is preferred for critical instruments. Felt tipped non-toxic marking pens and rubber stamps using water-resistant ink may be used for the labelling of packs and bags, on the laminated side of packs, prior to sterilisation.

Steam sterilisation

Sterilisation is the process of rendering an item free of all forms of viable microorganisms, including spores. In office-based dental practice, the most efficient and simplest means of sterilising dental instruments is steam under pressure (commonly called steam sterilising or autoclaving). It involves the combination of heat and moisture maintained at the right temperature and pressure for the right length of time to kill microorganisms. Dry heat sterilisation and chemiclaves are not recommended for routine sterilising of dental instruments and equipment. Ultraviolet light and boiling water do not sterilise instruments and must not be used.

The sterilisation process requires that all air in the chamber be replaced by steam. There are several types of sterilisation cycles, including:
• N class cycles – used for unwrapped, solid items. Steam pushes the air downwards using gravity and forces it out a port in the bottom of the chamber.
• S class cycles – specified by the manufacturer and used with multi-pulse vacuum steam sterilisers to suit loads of certain types and configurations.
• B class cycles – for hollow objects where the ratio of the length of the hollow portion to its diameter is more than 1:5. In these cycles there is a greater challenge for air removal. Air is exhausted by a mechanical pump to create a vacuum before steam is introduced into the chamber.

Some steam sterilisers are capable of being operated through more than one kind of cycle, depending on the circumstances and the type of instruments. Before steam sterilising an instrument, the operator must verify that the item is suitable for the process (some instruments made of plastic cannot withstand the process).

Portable bench top steam sterilisers (formerly called autoclaves)

Small, portable or bench top steam sterilisers are the most reliable and efficient sterilising units for use in office-based practice. Such sterilisers must be TGA-approved and operated according to the standards AS/NZS 4187 and AS/NZS 4815 and the manufacturer’s instructions. Steam sterilisers without a drying cycle are suitable only for sterilising unwrapped items, which must then be used immediately after sterilisation if they are critical items. Steam sterilisers which incorporate a drying cycle in their design can be used to process both wrapped and bagged items.

Maintenance and testing

All steam sterilisers must be commissioned on installation, and the function of the steriliser must be checked.

Validation of the sterilisation process

In order to ensure appropriate sterilisation of items in the surgery, a concept known as validation of the sterilisation process is undertaken. It is validation of the total process. The validation process involves the following steps:

Commissioning (installation qualification and operational qualification): a commissioning report includes installation documents and operation verification. This is performed by the service technician when new or repaired sterilisers are installed in the practice.

Performance qualification:
a. Physical qualification (by a qualified instrument technician or manufacturer’s technician):
• Calibration report (12-monthly); and
• Penetration report, which checks the physical attributes of the steriliser. This record is obtained after major repairs or when pack contents or packaging changes significantly.
b. Microbiological report to confirm functioning of the steriliser using a biological indicator (spore test).

The Validation Report summarises satisfactory completion of commissioning, operational and performance qualification.

Monitoring of cycles

It cannot be assumed that sterilisation has been achieved without the appropriate testing and load checking. In order to ensure the items are sterilised, it is necessary to regularly monitor the sterilisation process to ensure the process has met all parameters and that, consequently, the reprocessed instruments can be assumed to be sterile. Time, temperature and, where applicable, pressure must be measured with continuous, automatic, permanent monitoring (e.g. process recorder, printer or data logger). Where these parameters are displayed on the devices/gauges of steam sterilisers which have no recording device, readings of the sterilising process should be documented at intervals of 10 seconds.

Alternatively, a biological indicator (spore test) or chemical indicator (Class 4 or greater) for steam sterilisers, or a Class 3 indicator for dry heat sterilisers, can be used for each load. The processed chemical indicator must achieve all sterilisation parameters applicable to the indicator used, and that information must be recorded.
Operating the steam steriliser

As with all infection control procedures, clinical support staff must be trained in the correct operation of the steam steriliser. An operator’s manual must be available on site, and the unit must be used according to the manufacturer’s instructions.

Steam steriliser performance tests

Steam sterilisers, particularly those capable of running a B Class cycle, are complex machines. The steam steriliser’s performance must also be monitored by periodic testing (including daily and weekly tests as described in AS/NZS 4815). There are a range of tests that must be carried out prior to commencing the first sterilising cycle for sterilisers with a Class B cycle. In summary these include:

Leak rate test – a simple push-button operation that is built into steam sterilisers with a Class B cycle. It tests the security of seals on the machine. Most modern pre-vacuum steam sterilisers incorporate automatic air leak detection; in the absence of automatic air leak detection, this test should be run every working day.

Air removal and steam penetration test (Class 2 chemical indicator) – a Bowie-Dick-type test for use when processing porous loads, or a process challenge device (PCD) – also known as a helix test – for non-porous loads. To ensure air removal, a Bowie-Dick-type test must be performed before the first sterilising cycle of the day in order to determine whether the steam steriliser is operating correctly in terms of its air removal capabilities. When pre-vacuum sterilisers are used to process solid or cannulated (hollow) loads using type B cycles, a daily helix test is to be conducted, and a leak rate test is only performed weekly.

Modern steam sterilisers have an integral printer or data logger to allow the parameters reached during the sterilisation cycle to be recorded for routine monitoring. Existing older type bench top steam sterilisers must, where possible, be fitted with mechanisms to record these sterilising parameters electronically. If no such mechanism is available, parameters must be monitored and recorded manually, or process indicators must be used for each cycle. Logs and printouts must be retained for inspection and monitoring.

For wrapped items, a Class 1 indicator must be included on the outside of each package as a visual check of the item having been through the process. A Class 1 chemical indicator must be placed in each loading tray being processed if non-bagged items are loaded.

Loading

The steam steriliser can only work effectively if steam can circulate freely and touch every surface of every instrument. The steam steriliser trays should not be crowded, and items must not be packed one on top of the other. There are several stacking devices that enable correct loading of the steam steriliser; hollow items should be loaded according to the manufacturer’s instructions. Correct loading also reduces damage to packs and their contents, and maximises the efficient use of the steam steriliser. Items waiting to be sterilised must be stored in a dedicated ‘pre-sterilisation’ area, not in the steam steriliser. This will minimise the risk that these items might be recirculated as already sterilised instruments.

For dental instruments and equipment, steam sterilisers must reach a holding temperature of 134–137 °C for three minutes for unwrapped loads.

Drying

Steam sterilisers used to process packaged items must have a dedicated drying cycle so that a dry load is produced. Packaged or unpackaged items must never be dried by opening the door of the steam steriliser before the drying cycle is completed. In those units without a drying cycle, allow unwrapped instruments to dry and cool in the steam steriliser before they are handled, to avoid contamination and thermal injury. Unwrapped critical instruments that must be sterile at the time of use must be used immediately after completion of the sterilising process. Cooling items must not be placed on solid surfaces, since condensation of vapour inside the pack may result. Forced cooling of items by external fans or boosted air conditioning must not be used.

Checking the completed load

A number of variables influence the process of sterilisation: the quality of cleaning (residual bio-burden), the choice of packaging materials, the packaging technique, the steriliser loading technique, the sterilant quality (levels of ions and lubricants), and the cycle parameters (time, temperature, saturated steam). Once the sterilising process (including the drying cycle) is complete, a number of checks need to be made and the results recorded:
• check the readings – pressure, temperature, time – on the steam steriliser’s instruments and compare them to the recommended values; and
• visually check that bags and their contents are dry.

If any reading is outside its specified limits, the sterilisation cycle must be regarded as unsatisfactory (regardless of results obtained from chemical indicators) and the sterilising cycle repeated. If the second cycle is unsatisfactory, the steam steriliser must not be used until the problem has been rectified by an instrument technician.
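The holding parameters stated above — 134–137 °C held for three minutes for unwrapped loads — can be checked against a logged temperature trace. The sketch below is purely illustrative (the function name, sampling interval and data format are my own assumptions, not from AS/NZS 4815); real cycle verification must follow the steriliser manufacturer's instructions and the standard.

```python
# Hypothetical check of a data-logger temperature trace against the quoted
# holding parameters: a contiguous run within 134-137 degC lasting >= 3 minutes.

def holding_time_ok(readings, min_temp=134.0, max_temp=137.0,
                    required_seconds=180, interval_seconds=10):
    """readings: temperatures (degC) sampled every `interval_seconds`.
    Returns True if some contiguous in-band run lasts at least `required_seconds`."""
    run = 0  # accumulated seconds of the current in-band run
    for temp in readings:
        run = run + interval_seconds if min_temp <= temp <= max_temp else 0
        if run >= required_seconds:
            return True
    return False
```

Note the check requires a *contiguous* run: a single reading outside the band resets the accumulated holding time, mirroring the guideline's position that a cycle with any out-of-limits reading must be regarded as unsatisfactory.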
Check that the external (Class 1) chemical indicator on the bag and any internal (Class 4, 5 or 6) chemical indicators have made the required colour change. A correct colour change indicates that the sterilising parameters of temperature, pressure and time have been achieved. If one pack has not changed, the whole load must be regarded as suspect. Instrument packs must not be used if mechanical or chemical indicators indicate some flaw in the sterilising process.

Check each bag to ensure that it is undamaged and properly sealed. If the bag/packaging is compressed, torn, unsealed or wet, or if items have been dropped on the floor or placed on contaminated surfaces, the affected instruments must be considered contaminated and must be repackaged and reprocessed.

Cool air pockets (which may be caused by an overcrowded chamber), incorrect positioning, incorrect wrapping, or incorrect use of packaging materials are very common causes of failed sterilisation in downwards displacement steam sterilisers. Air pockets occur less often in pre-vacuum steam sterilisers. With a pre-vacuum steam steriliser, an air removal test such as a helix test or Bowie-Dick-type test must be run each day. When using a B Class cycle to sterilise porous loads of cotton rolls, gauze post-extraction packs, cotton wool and the like, a Bowie-Dick-type test is recommended. If hollow loads such as handpieces are to be sterilised in a B Class cycle, the appropriate test is the helix-type PCD.

Steam steriliser monitoring tests

It is necessary to regularly monitor the sterilisation cycle to ensure the sterility of reprocessed instruments. Instruments are assumed to have been sterilised when the correct sterilisation parameters have been achieved. It is at this point that the probability of residual viable organisms remaining is less than one in a million (the sterility assurance level).

Chemical indicators

Chemical indicators show that certain temperatures, times and pressures have been reached during the sterilising process. Chemical indicators provide information about conditions in the steam steriliser at the specific locations where they are placed, whether in the chamber, in packs of a steam steriliser load, or in a process challenge device. Some indicators, such as Class 1 types, are only sensitive to changes of temperature, whilst others, such as Classes 5 and 6, are sensitive to variables such as temperature, time and water (as delivered by saturated steam).

• Class 1 – these are intended for use on individual packs of wrapped instruments to indicate that the unit has been exposed to the sterilisation process (e.g. steam steriliser indicating tape, indicating labels). These indicators usually fail only when there is gross malfunction of the steam steriliser.
• Class 2 – a specific test – either a Bowie-Dick-type test for use when processing porous loads, or a helix process challenge device (PCD) for solid or hollow instruments – which measures the effectiveness of air removal and even penetration of steam in a pre-vacuum steriliser.
• Class 3 – indicators of this kind respond to only one critical variable (e.g. temperature). These indicators have poor accuracy and are only used with dry heat sterilisers; they have limited value in general dentistry.
• Class 4 – designed to react to two or more of the critical sterilising variables (e.g. temperature, time and pressure) and indicate exposure to a sterilisation cycle at the values of the variable as stated by the manufacturer. These show a gradual colour change during sterilising. Their accuracy is +/-2 °C and +/-25% on time.
• Class 5 – an integrating indicator indicating time, temperature and moisture, sometimes called a biological emulator because it is timed to change colour at a temperature of 134 °C. Their accuracy is +/-1 °C and +/-15% on time.
• Class 6 – Class 6 indicators have the highest precision – their accuracy is +/-1 °C and +/-5% on time. A Class 6 indicator must be used in each load when using an ‘on-loan’ steam steriliser, when awaiting a technician to carry out IQ and PQ on a newly purchased or majorly-repaired steam steriliser, or when using a steam steriliser without a printer.

As noted earlier, if un-bagged semi-critical or non-critical instruments are processed, a Class 1 indicator must be placed in each load. For wrapped loads, the Class 1 indicator on each pack must be examined after the sterilising cycle to ensure that the pack has been exposed to a sterilising process.
Biological indicators

Only biological indicators that use highly heat-resistant spores actually show that sterility has been achieved. The preferred test organism for steam sterilisation is Geobacillus stearothermophilus. Steam sterilisers that have not been calibrated or validated should be monitored by a weekly test using a biological indicator, or alternatively each load must be processed with a biological emulator.

Where instruments are intended to be sterile at point of use, an internal multi-parameter time and temperature chemical indicator should be used within each package. While AS/NZS 4815 permits chemical indicators between Classes 4 and 6 to be used for such a purpose, a Class 6 indicator is preferable because of its ability to provide additional information on steam quality that is not provided by Class 4 and 5 indicators.

7. Disinfection

Disinfection does not ensure the degree of safety associated with sterilisation because it does not always destroy all microbial forms (e.g. bacterial spores). It is not a sterilising process and must not be used where reusable instruments can withstand steam sterilisation.

Thermal disinfection using washer-disinfectors

Thermal disinfection uses heat and water at temperatures which destroy pathogenic non-sporing vegetative organisms. It may be used for non-critical instruments and some semi-critical items (e.g. prosthetic instruments) which cannot be steam sterilised. A common use for thermal disinfection in dentistry is for disinfecting some prosthetic instruments, polishing buffs and brushes. Most instruments used in dental prosthetics are semi-critical or non-critical items and many can be disinfected by heat and water in a thermal disinfector. However, as single use disposable instruments are now available, the use of a thermal disinfector should be minimised.

The process

The item to be thermally disinfected must be cleaned prior to disinfection. If an item is not clean it cannot be disinfected. Wet instruments can be placed into the thermal disinfector. Most units connect directly to mains water and drain directly into the normal waste plumbing. If a high temperature thermal disinfector is used the proper temperature and time parameters must be ensured. The chamber of the thermal disinfector must be cleaned regularly.

Small electric ovens and microwaves must not be used as a means of thermal disinfection in dental practice. Ultraviolet cabinets must not be used for disinfection of instruments.

Chemical disinfection using instrument disinfectants – high level

For practical purposes there is no place for cold high level chemical disinfection (e.g. glutaraldehyde) in dentistry. Chemical disinfectants should only be used when thermal disinfection is unsuitable (e.g. some prosthetic or laboratory items). Instrument disinfectants must be TGA-registered. Products must be used at the recommended concentration for soaking and exposure time. Different types of disinfectants must not be mixed or combined and must be used before expiry dates. Unused product must be discarded each day – 'topping up' is not acceptable.

Instruments must not be stored in disinfectant solutions either before or after thermal disinfection or sterilising. Likewise, instruments must not be left overnight in solutions inside the chamber of an ultrasonic cleaner; the chamber should be emptied and the instruments rinsed thoroughly at the end of the day.

For further information see the ADA's The Practical Guides and www.ada.org.au.
8. Storage of processed instruments

The correct storage of processed instruments is important to protect them from environmental contamination. In the dental surgery the major source of environmental contamination is splashes of fluids that strike items and surfaces, from clinical procedures and handwashing, and aerosols of airborne bacteria and viruses which settle over time on instruments and equipment.

Critical instruments

Critical instruments/items must be stored in a way that maintains the integrity of packs and prevents contamination from any source. This is necessary so that the instruments are sterile at the time of use. It is important that critical wrapped instruments are stored in a clean dry area, protected from direct sunlight, in an area that is protected from splashing and aerosols produced during equipment washing, ultrasonic cleaning and reprocessing, and are subjected to minimal handling before use.

Storage areas for sterilised instruments in packs must be dedicated for that purpose only and be free of dust, insects and vermin. For open shelving, all items must be stored above floor level by at least 250 mm and from ceiling fixtures by at least 400 mm. This will facilitate environmental cleaning, allow unrestricted airflow and prevent heating and degradation of the packaging material. If the area used for storage is too small, too high, crowded or awkward it makes access difficult, which in turn increases the likelihood of compromising the packaging.

During storage, packs can be contaminated by:
• over-handling – this can happen through excessive transferring from one place to another, or during rotation of instrument packs, from over-stocking storage areas or from bundling packs together using rubber bands;
• moisture – if the pack is placed on a wet bench top, splashed with water, other liquids or aerosols, comes out of the steam steriliser wet or is placed on a wet surface; or
• penetration – if instruments break through the surface of the pack.

Drawers or sealed containers are preferred for the storage of sterile wrapped items because the drawers or containers can be located at a height that allows the contents to be easily seen so that the most recently processed items are placed towards the back of the drawer. Care should be taken when moving packages of instruments within drawers to reduce the chance of a surface breach through instruments perforating the paper or textile of the package. Items required to remain sterile must not be stored in ultra-violet cabinets or disinfectant solutions as these processes will compromise sterility.

User checks to be made before using

The integrity of bagged/wrapped packs must be checked before using the instruments. Wrapped packages of sterilised instruments must be examined before opening to ensure the barrier wrap has not been compromised during storage. A package is considered to be non-sterile when it is damaged or open, or is dropped or placed on a contaminated surface. Packages that show evidence of damage must not be used. If there is any doubt that sterility was obtained during processing, or the instrument pack has been compromised, re-clean, repack and re-sterilise.

Semi-critical instruments

Storage of unwrapped semi-critical instruments and non-critical items must be in clean, dry, dedicated containers or drawers to protect them from environmental contamination. Semi-critical instruments must be stored away from the contaminated zone. Storage containers used for semi-critical instruments must be kept clean, dry, dust-free and in good condition, and be cleaned periodically. Cardboard boxes must not be used as storage containers for instruments as these are porous, cannot be adequately cleaned and may harbour organisms. Instrument cassettes and instrument packs must be kept in such a way that contamination from splashes and aerosols does not occur. Keeping trays and cassettes of semi-critical instruments in closed drawers, cupboards or lidded containers will help to protect them from contamination by aerosols and splashes. The drawers or containers must be cleaned with detergent and water periodically and all instruments in the drawers must be reprocessed before replacement in drawers.

Unwrapped semi-critical and non-critical items

As mentioned above, instruments must be stored dry, and in a way that will prevent contamination prior to use. This can be achieved by storing in:
• instrument cassettes in drawers, cupboards or the like;
• trays in closed drawers lined with plastic sheeting; or
• trays or cassettes in sealable plastic containers with lids.

Care must be taken to ensure that storage areas in the dental operatory do not become contaminated during patient treatment. As described earlier, de-gloving, overgloving or using a suitable no-touch technique (transfer tweezers) must be used to access items.
E. Documentation and practice protocols for infection control

1. Maintaining sterilisation records

Under section 8.2 of AS 4815, dental practitioners have a duty to maintain records relating to the sterilisation process. Maintenance of these records provides evidence of quality management processes and allows for batch control identification of critical instruments. These sterilisation records include maintenance records, records of validation, and daily steriliser cycle records. The latter must incorporate batch information where batch control identification is used for packages of critical instruments.

For each sterilising cycle (even those that do not include any packs of critical instruments) the results of the cycle must be recorded, as follows:
• steam steriliser number or code (to identify the machine the item was sterilised in);
• date;
• cycle or load number;
• contents of load – e.g. wrapped or unwrapped items;
• batch numbers of packs included in that load (if any);
• cycle parameters used (time and temperature), ensuring these are appropriate for the load type being processed (whether wrapped or unwrapped);
• result of the chemical indicators used in the cycle;
• result of the steam steriliser physical readouts or printout for that cycle; and
• identification (signature or initials) of the person who has checked the steam steriliser readouts and chemical indicator result, and who authorises release of the load for use.

The cycle record should be initialled by the dental staff member reviewing them. This checking should include all external and internal chemical indicators. Keeping chemical indicators is not required as these are not a substitute for a permanent record of a sterilising process and because exposed chemical indicators may change with time and therefore are not a reliable record.

Routine recording of cycle data from sterilisers enables identification of items should the question arise as to whether sterility problems or another failure occurred with the load. How long documentation needs to be kept varies depending on the states or territories, but is typically seven years. Since it is necessary to keep documentation for an extended period, it is important that printer readouts remain legible for at least seven years. For this reason, if the steriliser data is not scanned or in electronic form, ink based printouts are preferred as thermal printouts may need to be copied to ensure that they remain readable when archived.

It is also necessary that records are maintained for daily tests on ultrasonic cleaners (such as the foil test), and performance tests performed on the sterilising equipment (such as spore tests, air leakage and air removal tests). A certificate of calibration and operational qualification should be issued by the technician carrying out the process and also must be kept as part of the documentation for the dental practice.

The results of any performance qualification tests for sterilisers must also be recorded, including:
• the date of the test;
• the location and number of the steam steriliser (if there are multiple steam sterilisers in the practice);
• the type of biological indicator used and the batch number;
• the name of the operator running the performance qualification;
• the brand and type of packaging system tested; and
• the exact parameters which have been tested.

The above data showing that the steam steriliser met performance data must be recorded.

Whenever instruments are packaged, it is essential to determine what steam steriliser cycle parameters are required for successful air removal and steam penetration. This validation of the conditions is necessary when there is a change in the type of packaging material used. Validation must be repeated annually, even when there has been no change in the type or method of instrument packaging.

Validation of cycle parameters involves using multiple biological (spore) tests. It is important to ensure prior to the validation process that the biological indicators to be used have not expired. For long thin pouches, it is necessary to use three biological indicators in each test pack, one placed at each end and one in the middle of the pouch. With larger packs, one indicator should be placed in each corner and one in the centre of the pack. The test pack with multiple indicators must be prepared in triplicate such that one can be processed on each of three consecutive cycles. A 10th indicator is not sterilised, but rather is used as a positive control. After the three cycles have been completed, the 10 biological indicators – comprising the nine which have been processed, and the 10th (as a control) – are then developed and the results recorded.

Where the parameters are appropriate for the removal of air and the penetration of steam, then all nine steam steriliser indicators should show no colour change; in other words, they should indicate complete killing of the spores or deactivation of the spore enzymes as appropriate. If there is a colour change, which signifies a failed test result, the holding time for the steam steriliser should be increased in increments of one or two minutes, and the entire validation procedure repeated, in order to establish the minimum time required.

With instruments for routine dentistry that are handled in trays and do not require packaging, problems of air removal are minimal. For such loads, validation is not necessary. Rather, validation is directed to items required to be sterile at point of use (critical items), i.e. those critical instruments used in surgical procedures. This approach does not apply to semi-critical items used in routine dentistry.

The results of the validation process must be recorded. The information should include:
• the date of the test;
• the location and number of the steam steriliser (if there are multiple steam sterilisers in the practice);
• the type of biological indicator used and the batch number;
• the name of the operator running the validation tests;
• the brand and type of packaging system tested; and
• the exact parameters which have been validated.

2. Batch control identification

As a quality assurance or risk reduction measure, dental practices should use a batch control identification system for critical packages of equipment. This requirement arises from AS 4815 section 8.5.1, which states that batch control numbers should be in place to link steriliser cycle batch information of a critical item that has been sterilised, to the patient. Batch control identification links a pack of surgical instruments used on a patient to a particular sterilising cycle, and thereby allows dental practitioners to demonstrate that those critical dental instruments used on that patient have been through a particular steriliser cycle with verifiable performance data. Thus, in office-based general dental practice the use of batch code identification would be limited.

As described in AS 4815, the batch control identification includes the steriliser identification number or code (if there is more than one steriliser within the facility), the date of sterilisation, and the cycle or load number. A batch code comprises a simple sequence of numbers, such as that produced from a labelling gun, or can be combinations of a number sequence with codes for the date and the steam steriliser number (if the practice has several steam sterilisers). Batch information can be recorded on packs prior to steam sterilising using non-soluble permanent marker ink, provided that the inks and adhesives used can tolerate steam sterilising, or by using adhesive labels applied with a labelling gun. Several segmented (piggyback) adhesive label systems are available, where one part of the label is peeled off the pack when setting up for the procedure, and placed directly under the day's entry on the patient's hard copy chart.

At the time of the critical procedure, as instruments are removed from their packages, the now-empty packages should not immediately be placed into the waste, but rather put to one side in a clean zone of the operatory so that the batch number information can later be recorded into the treatment records of the patient by the clinician responsible.

For further information see the ADA's The Practical Guides and www.ada.org.au.
3. Infection control for dental practitioners and clinical support staff

Immunisation

Dental practitioners and clinical support staff are at risk of exposure to many common vaccine-preventable diseases (VPDs) through contact with patients and the general community. Immunisations substantially reduce the potential for acquisition of disease, thereby limiting further transmission to other dental staff and patients. All dental practitioners and clinical support staff are to be advised to have immunisations and are to be offered relevant vaccinations consistent with the NHMRC's The Australian Immunisation Handbook.10 The current edition of the Australian Immunisation Handbook is the 9th edition, published in 2008.

The expectation for all healthcare workers – and thus for dental practitioners and clinical support staff – is immunisation to HBV, pertussis (whooping cough), measles – mumps – rubella (if non-immune), varicella (if seronegative), and annual immunisation for viral influenza. Those who work with remote indigenous communities are advised to also receive immunisation for hepatitis A, while those at high risk of exposure to drug-resistant cases of tuberculosis should also undergo vaccination with BCG.

All dental practitioners and clinical support staff should be vaccinated against HBV if they have no documented evidence of pre-existing immunity (from natural infection or prior vaccination) and ensure they are assessed for immunity post-vaccination. After a full course of HBV immunisation or rubella vaccination, testing for antibody levels should be carried out to identify poor responders. Any staff member has the right to refuse vaccination; however, this refusal must be documented with their reason for refusal noted and signed by him/her, and all dental staff should be advised of the potential consequences of non-immunisation.

Immunisation records

The practice must develop and maintain regularly updated immunisation/health records for dental staff. It is recommended that dental staff also maintain their own immunisation and screening records.

Education

Dental staff must be provided with comprehensive training in the full range of infection control procedures that they are expected to know about and carry out in their day-to-day work. Dental practices should have education programs to support their immunisation strategy. This pre-service training should include the practical implementation of occupational health and safety and infection control measures used in the practice. Regular refresher training is also appropriate to ensure that the necessary infection control measures are being complied with and understood.

New clinical dental staff should complete an induction program. This induction program should comprise the following:
• general orientation to the physical environment of the practice;
• practice expectations in terms of infection control and safe working procedures;
• recommendations for vaccination prior to commencing work (HBV and others);
• emergency procedures for fire and medical emergencies;
• reporting requirements for sharps injuries and workplace incidents;
• policy on wearing and cleaning of uniforms;
• first aid procedures;
• use of personal protective equipment;
• safety rules in terms of hair, footwear and jewellery;
• identification of clean and contaminated zones;
• procedures for changeover between patients;
• management of waste streams and hazardous substances;
• confidentiality of patient information; and
• instrument cleaning and sterilisation.

This information should be provided when employees are first appointed. To supplement and update the information provided from the initial induction, regular staff meetings should be held to discuss infection control matters.

Exposure incident protocol

In the healthcare environment, the term 'exposure incident' refers to any incident where a contaminated object or substance breaches the integrity of the skin or mucous membranes or comes into contact with the eyes. This includes:
• penetrating injuries of the skin caused by sharps (e.g. dental instruments, needles and scalpel blades);
• an injury that involves direct skin contact with blood or saliva visibly contaminated with blood and where there is compromised skin integrity, such as a cut, open wound, abrasion or dermatitis;
• bites or scratches inflicted by patients; and
• direct contact with blood or body fluids with the mucous membrane of the mouth, nose or eyes.

While the site where such sharps injuries are sustained can become infected with microorganisms, the major area of concern to dental practitioners and clinical support staff is the risk of the transmission of HIV, HBV and HCV by contaminated blood. For exposures involving the skin, the larger the area of skin exposed and the longer the time of contact, the more important it is to verify that all the relevant skin area is intact.

To comply with occupational health and safety legislation, all exposure incidents must be recorded and followed up. For sharps injuries, the required post-injury counselling may be undertaken by a designated medical practitioner or infection control practitioner, and blood samples for testing are obtained from the source (i.e. the patient) wherever practicable. These tests include HBV, HCV and HIV. This process would normally be overseen by specialists in infectious diseases. Follow-up tests must be offered after a significant exposure incident. Where the source is positive, follow-up tests will need to be repeated at intervals for the injured person, to assess the status of seroconversion. Postexposure prophylaxis may be available from public hospitals. Services such as sharps injury telephone hotlines may also be of value. For further information see Appendix: Blood and Body Fluid Exposure Protocol.

4. Infection control manual and other practice management issues

Each dental practitioner has a duty to:
• take a detailed medical history to establish if a patient may be more susceptible to infection and therefore may require transmission-based precautions to prevent infection (e.g. patients with leukaemia or neutropenia may require antibiotic prophylaxis);
• ensure adequate physical facilities are maintained and all equipment is always in sound working order by regular quality checks; and
• provide infection control education and training in hygiene and management of infectious hazards.

10 See National Health and Medical Research Centre The Australian Immunisation Handbook, 9th Edition (2008), pp 104-107. This resource can be accessed at: http://www.health.gov.au/internet/immunise/publishing.nsf/Content/Handbook-home. The Immunise Australia website provides further information and resources.
Dental practices should:
• maintain awareness of new vaccine-preventable diseases (such as H1N1 and other forms of viral influenza), and ensure dental staff at risk are fully immunised when these vaccines become available (including annual influenza immunisation);
• inform dental staff when they are employed of the health screening policies of the practice;
• inform patients of the risks associated with their dental care and the protocols in place for protecting their privacy and confidentiality;
• inform patients of the infection control strategies in place and provide information about procedures for dealing with concerns about infection control procedures;
• ensure dental staff are adequately informed of the rights and responsibilities of patients, especially in their right to refuse to give information on their infectivity status or to refuse to be tested for a bloodborne virus;
• offer testing following occupational exposure such as a sharps injury;
• provide dental staff infection control measures including personal protective equipment and immunisation;
• develop a plan for infection control within the practice; and
• provide a specific program of education and training in infection control principles, policies and procedures for dental staff.

Infection control manual

A comprehensive infection control manual which is pertinent to the daily routines of the practice must be developed. It must describe the infection control procedures for the practice as a whole and be used as the foundation for training dental staff. All staff in the practice need to know who is responsible for ensuring certain activities are carried out and to whom to report any accidents or incidents. Practice manuals must be updated regularly if and when new guidelines are produced from the Dental Board, the ADA and the NHMRC.

The manual must include information about and specifications for:
• methods of hand hygiene (both routine and surgical);
• personal protective equipment requirements;
• defined zones that require barrier protection and cleaning between patients;
• setting up the treatment area between patients;
• processing of reusable items (cleaning, disinfection, packaging, sterilisation, storage);
• single use items;
• handling and disposal of sharps;
• protocol following an exposure incident, e.g. a sharps injury;
• recording of information during patient treatment in a manner to avoid cross-contamination;
• use of computers and computer-run equipment during patient treatment in a manner to avoid cross-contamination;
• processing of radiographs in a manner to avoid cross-contamination;
• management of waterlines used in direct patient contact;
• environmental cleaning protocol;
• waste disposal;
• immunisation requirements;
• quality control mechanisms including documentation for the maintenance and monitoring of equipment;
• effective reporting systems for breaches of protocols and safe work practices; and
• handling latex allergy in dental patients and dental staff.
F. Special areas and their particular dental infection control requirements

Some aspects of dental care, or particular settings in which dental care is provided, present specific challenges to dental practitioners and clinical support staff in implementing effective infection control measures. These are outlined below.

1. Dental radiology and photography

Any items or materials placed in a patient's mouth which are subsequently removed for processing must be considered biologically contaminated and must be handled in a safe manner. Gloves must be worn when taking radiographs and handling contaminated film packets or sensors. Other personal protective equipment (e.g. mask, protective eyewear) must be used if spattering of blood or other body fluids is likely.

The use of heat-tolerant or disposable intra-oral radiograph devices (unless using digital radiography) is recommended wherever possible, and these semi-critical items (e.g. film-holding and positioning devices) must be cleaned and then either heat-sterilised or barrier protected before use on subsequent patients. Digital radiography sensors come into contact with mucous membranes and are considered semi-critical devices, and they must be cleaned and covered with a barrier before use on subsequent patients.

Radiography equipment (e.g. radiograph tube head and control panel) which has become contaminated must be cleaned after each patient use. Alternatively, barrier protection can be applied, which must be changed after each patient use.

After exposure of the radiograph, dry the film packet with a paper towel to remove blood or excess saliva and place in a container (such as a disposable cup) for transport to the developing area. Exposed radiographs need to be transported and handled carefully to avoid contamination of the developing equipment. Protective barriers should be used on developing equipment where possible, and when surfaces become contaminated the surfaces must then be cleaned.

Most state regulations accept film packets and barrier envelopes that have been contaminated with saliva or blood to be disposed of as general waste. However, some regional authorities require these to be treated as contaminated medical waste which is placed in yellow containers or plastic bags which are appropriately marked with the international biohazard symbol and collected and disposed of by a licensed operator.

2. High technology intra-oral equipment and devices

High technology intra-oral equipment and devices include, for example:
• the handle and tip of the curing light;
• intra-oral cameras and image capture devices;
• electronic periodontal probe;
• apex locators;
• occlusal analysers;
• air abrasion;
• lasers;
• electrosurgery units;
• CAD/CAM; and
• computer components associated with CAD/CAM and other electronic devices.

Dental practitioners and clinical support staff should consult the manufacturers about the appropriate barrier and cleaning/sterilisation procedures required for these devices. If the item is exposed to mucous membrane or body fluids and cannot tolerate heat sterilisation then, at a minimum, it must be cleaned first then protected with a single use barrier before patient use.
Curing light

Curing light tips are semi-critical pieces of equipment and should be heat sterilised or have an appropriate barrier placed over the tip for each patient. Barrier protection is an appropriate level of infection control for all light curing tips. Although some curing light tips may be heat sterilised, this is not necessary if an appropriate barrier has been applied to the tip during the treatment of the patient, as the equipment is not intended to contact mucosa. Another advantage of a barrier is that the sensitive light-conducting rods are protected from accidental damage or material contamination. The handle of the curing light and the tips must always be cleaned prior to having the barriers placed, and a new barrier used for each patient.

When replacing barriers:
• remove the contaminated barrier/covering while gloves are still on;
• remove gloves and decontaminate/wash hands; and
• if there is any chance of saliva or blood contamination of the item, it should be cleaned by wiping with a neutral detergent before the next barrier is put in place.

It is not always essential (but it is highly recommended) to clean items between change of barriers. Barriered items must be cleaned each day.

Air abrasion, electrosurgery units and lasers

Electrosurgery units, dental lasers and air abrasion/particle beam devices create particular bio-aerosol hazards. Air abrasion devices create alumina dust, which can be a respiratory irritant for dental practitioners and clinical support staff as well as patients, and high volume suction devices are essential during their use.

Most bacteria and viruses are rendered non-viable by laser or electrosurgery. However, some pathogenic viruses such as human papilloma virus are not inactivated by laser or electrosurgery procedures and remain viable within the plume (smoke) created from soft tissue vaporisation. As well as particles of tissue and fragments of microorganisms, plume also contains gases (e.g. hydrogen cyanide, benzene and formaldehyde) which are irritant and noxious. There is no evidence that bloodborne viral diseases such as HIV or HBV can be transmitted through aerosolisation and inhalation of plume or other dental aerosols, even though fragments may be present in the plume. Moreover, the presence of an infectious agent in plume might not be sufficient to cause disease from airborne exposure, especially if the agent's normal mode of transmission is not airborne.

Evacuation systems which will remove plume vapour and particles must be used whenever electrosurgery units, dental lasers and air abrasion/particle beam units are in use. High filtration surgical masks combined with high volume suction can prevent inhalation of particles in plume by dental practitioners, clinical support staff and patients.

Implants

In the surgical procedures involved in the placement of implants, both the instruments used and the implants must be sterile at the time of use. Full aseptic procedures with sterile fields must be employed. Explanted devices must not be reprocessed and reused.

3. Dental laboratory and dental prosthetics

Standard precautions and safe work practices must be used in the dental laboratory. The most important phase is the thorough cleaning of material that has contacted oral tissue (e.g. impressions). Thorough rinsing with cold running water, followed by the application of a diluted detergent and further rinsing, must continue until all visible contamination is removed. Even after cleaning there may still be biological contamination present, and at all stages of handling of the prosthetic item standard precautions must be applied.11 Manufacturers' instructions for disinfectants need to be carefully followed when cleaning and disinfecting prosthetic items and materials.

Attachments and materials which are used in the operatory on contaminated prostheses or stages of prosthetic work (e.g. polishing mops) should be either single use or cleaned and preferably heat sterilised after each patient use. When polishing appliances which have been worn in the mouth, polishing pumice should be dispensed for individual use and the pumice tray cleaned after each use. The area for grinding or cutting plaster and making models and the area for instrument management and sterilisation must be well separated and not used at the same time if both procedures utilise the same room.

All materials transported to and from dental laboratories must first be cleaned and placed in a sealed bag or container; check with local authorities about transport process requirements.

For further information on infection control in the dental laboratory see the ADA's The Practical Guides and www.ada.org.au.

Dental handpieces

There continues to be debate about the effective decontamination of handpieces. Handpieces must be sterilised between patients, and their internal aspects cleaned and lubricated prior to sterilising. If a dedicated handpiece cleaning system is not used:
• clean the outside of the handpiece with detergent and water – never clean or immerse the handpiece in disinfectant solutions or the ultrasonic cleaner;
• lubricate the handpiece with pressurised oil for the recommended period (e.g. using an aerosol spray can or, because of their lower oil dosing rates, an automated lubricating device);
• clean off excess oil;
• sterilise in a steam steriliser; and
• after sterilising, run the handpiece briefly before use to clear excess lubricant.

Ultrasonic scaler handpieces must likewise be sterilised between patients. Surgical handpieces must be sterilised using a B type cycle in a pre-vacuum steriliser. In theory, a pre-vacuum steam steriliser will remove the air from the lumen of a dental handpiece.

11 Some states/territories specify disinfection plus cleaning.
Handpieces should not be fitted to the dental unit until the time of use on a patient and once fitted to the dental unit and exposed to contamination during treatment they must be reprocessed even if not actually used on that patient. The exterior surfaces of handpieces must be cleaned thoroughly. 4. it is strongly recommended that automatic flush-through and lubricant systems be used for cleaning and lubricating dental handpieces. allowing steam to penetrate more quickly. the following protocol should be adopted for the pre-sterilisation cleaning of handpieces: • • • • • • place a blank bur in the chuck during cleaning to prevent contamination and damage of the handpiece bearings. Care needs to be taken to ensure lubricants used do not compromise the sterilisation process and this can be achieved by replacing each week the deionised water in steam sterilisers which recycle water from one cycle to the next. If unsuitable for heat sterilisation these items should be thermally disinfected (e. impressions. Similarly. according to the manufacturer’s instructions (e.org. handpieces must then be stored in a way to prevent contamination.These include: • • • • all materials.ada.g.ada. intra. any instruments. Handpiece management All dental handpieces must be cleaned and lubricated in accordance with the manufacturer’s instructions and must be sterilised after each patient. Current opinion is that effective pre-sterilisation cleaning of dental handpieces and subsequent processing in a downward displacement steam steriliser is acceptable for general dental treatment. dental prostheses. implantable items must be sterile at time of implantation. For further information on handpiece management see the ADA’s The Practical Guides and www. equipment.and extra-oral appliances must be thoroughly cleaned before insertion and adjustment. repaired appliances or relined appliances.
and occasionally there is a bedridden patient at a private home or hospital who needs dental care. Where possible. 6. Once the specimen has been placed in the container.12 Cleaning rotary nickel-titanium endodontic files • • • • • • Immediately after use remove stoppers and insert the files into a scouring sponge soaked with chlorhexidine gluconate aqueous solution. Clean the files by using 10 vigorous in-and-out strokes in the sponge. Place the files in a wire mesh basket and immerse in a suitable enzymatic cleaning solution for 30 minutes. try-ins and articulators must be transported in sealed plastic containers. It is preferable to use plastic zipper bags carrying the appropriate designation provided by the pathology laboratory. clean and disinfect the outside of the container before placing it into the transport bag or container. Waste should be separated at the point of generation. leak-proof container labelled with the biohazard symbol. Specimens To protect those handling and transporting biopsy specimens. 40 ©2012 . After use the instruments must be placed in a rigid sealed container for transport back to the dental surgery for cleaning and reprocessing. All sterilisable components can be processed in a steam steriliser at 134 °C. instruments should be cleaned immediately after use with detergent and water or sprayed with a cleaner to prevent hardening of debris before transport back to the dental clinic or laboratory. General waste should be disposed of in the general waste of the nursing/private home or hospital. The often inadequate facilities can make the provision of treatment difficult. 49 (1): 20-27. Nickel-titanium (NiTi) endodontic files When nickel-titanium endodontic rotary files are reprocessed the pre-sterilising cleaning process must be validated as being effective. Gloves must be worn when handling pathology specimens and specimen containers. During transport. 
all instruments and materials must be carried in lidded metal or rigid plastic clean containers to prevent damage or spillage. this must be packaged appropriately in a sealed container to prevent leakage during transport. not reused. Nursing home visits There are many dental patients whose dental treatment must be provided in a nursing home. The exceptions are usually the scavenger control valve. Drain and rinse in running water for 20 seconds. standard precautions apply – these include wearing gloves and other protective clothing and proper hand decontamination. Sharps and medical waste must be dealt with according to State regulations (a designated sharps container (AS/NZ 3816) must be transported with other instruments and equipment for this purpose). Dental practitioners and clinical support staff may need to carry all necessary personal protective equipment with them. Cleaning can be done manually or by thermal disinfector. Items such as impressions. Appropriate biohazard labelling must be placed on pathology specimen containers before dispatch. In providing dental care in these settings. Linsuwanont P. 12 Taken from: Parashos P. A verifiable process is described below. Re-usable masks must be cleaned and sterilised. If a biopsy specimen container is visibly contaminated. 7. ‘A cleaning protocol for rotary nickel-titanium endodontic instruments’ Aust Dent J 2004. Relative Analgesia Most componentry of relative analgesia equipment can be sterilised. (the vacuum control block) and depending on the model the fresh gas hose.5. each specimen must be placed in a sturdy. Follow this by 15 minutes ultrasonification in the enzymatic cleaning solution. Proceed to steam sterilisation. Some nasal hoods (masks) are disposable and these must be discarded. Impressions should be rinsed of blood and saliva prior to transportation to the laboratory. Messer HH.
Infectious diseases, allergies and transmission-based precautions for infection control

There are some situations that require additional infection control measures beyond the standard precautions already outlined. These additional measures are now referred to as transmission-based precautions, and they must be applied for patients with known or suspected infectious diseases not managed by standard precautions alone (for example measles, mumps, tuberculosis, avian flu and SARS). Transmission-based precautions are tailored to the specific infectious agent concerned and may include measures to prevent airborne, droplet or contact transmission.

Most patients for whom transmission-based precautions are required would normally be quarantined to their home or too ill to consider any treatments other than relief of the most severe dental infection, given that pain can be reduced through the appropriate use of analgesics until the patient is no longer infectious and has reached the end of any mandatory period of quarantine.

1. Measles, mumps, tuberculosis

Infection by airborne transmission of respiratory secretions can occur with pulmonary tuberculosis and measles. Tuberculosis is spread by droplets or by direct contact and has been transmitted as a result of dental procedures. Patients with these diseases should have their dental treatment deferred until they are no longer infectious. Where treatment cannot be deferred (e.g. facial swelling), transmission-based precautions must be used for provision of dental treatment. A dental practice which considers treating such patients should only do so after having conducted a written risk assessment. The patient should be seen as the last patient of the day. When treating such patients it would also be prudent for clinical staff to wear well-adapted close fitting masks with high filtration capabilities (such as P2/N95 surgical respirators). The use of rubber dam, where possible, for restorative work is recommended to reduce exposure of dental practitioners and clinical support staff to potentially infected aerosols. Care should be taken to limit the zone of contamination and in disposal of waste, and it would be prudent to use additional cycles of surface cleaning at the end of the appointment.

2. Methicillin-resistant Staphylococcus aureus (MRSA)

Methicillin-resistant Staphylococcus aureus (MRSA) is a bacterium which is resistant to common antibiotics and, as a result, infections caused by this organism are difficult to treat. MRSA colonises the nose, axillae and perineum, and abnormal skin (such as wounds, ulcers and eczematous skin). It is not normally found in the oral cavity but may occasionally be isolated from oral infections. MRSA can survive on surfaces such as computer keyboards for days, and for weeks under acrylic nails.14 No special infection control precautions are necessary for the dental treatment of patients colonised with MRSA, but care should be taken to prevent colonisation of the operatory. Dental staff who are known to be colonised with MRSA must not undertake or assist with major surgical procedures in hospitals.

3. Creutzfeldt-Jakob disease (CJD)

In all patients with potential CJD infection, including those in both high and low risk categories, instruments used in routine dental and endodontic procedures which come into contact with lower infectivity tissues can be routinely reprocessed.13 Nearly all patients and dental procedures fall in this category. For further information see the link on the ADA's website to the chapter on Classical Creutzfeldt-Jakob disease in the Creutzfeldt-Jakob Disease Infection Control Guidelines.

13 See section B5.2 of the 2010 NHMRC Guidelines.
14 See e.g. articles by: Schultz, Gill, Zubairi, Huber, Gordin (2003) Bacterial contamination of computer keyboards in a teaching hospital; and Rutala, White, Gergen, Weber (2006) Bacterial contamination of keyboards: efficacy and functional impact of disinfectants.
4. Avian flu

Avian flu is a highly pathogenic and contagious Type A H5N1 influenza virus which normally only infects birds and occasionally pigs. Should avian flu enter Australia as a human-to-human transmission of the virus, transmission-based precautions will be essential. For further information on avian flu see www.ada.org.au.

5. Latex sensitivity of dental practitioners and clinical support staff or patients

Suspected natural latex allergy (NLA) in dental practitioners, clinical support staff or patients must be treated as a serious medical issue. Symptoms may manifest as delayed hypersensitivity such as rash, conjunctivitis or rhinitis (Type 4), which could then progress with time to an acute allergic anaphylactic reaction (Type 1), which may result in death. All patient medical histories and new dental staff employment forms must include questions about NLA and/or sensitivity or allergy to latex/rubber products.

If latex sensitivity is identified, then a 'latex free' environment should be created for the persons affected. This involves the use of latex-free gloves and removal from the operatory of other identifiable latex products that are likely to cause a reaction. Such items would include latex gloves, latex rubber dam, latex prophylaxis cups, rubber bite blocks, bungs in some local anaesthetics, latex components of relative analgesia equipment, and latex rubber alginate mixing bowls. Non-latex versions of gloves, dental dam, prophylaxis cups, bite blocks, and alginate mixing bowls are available. Patients, dental practitioners or clinical support staff with proven anaphylactic reactions to latex may need to wear a medical alert bracelet and carry self-injectable adrenaline. When selecting hand care creams, care should be taken to ensure latex and chlorhexidine compatibility; these creams should not be petroleum-based. For further information on latex sensitivity see the ADA's The Practical Guides and www.ada.org.au.

6. Bloodborne viruses and the infected dental practitioner

Infection control against bloodborne viruses is based on the premise that for a person to be infected all of the following three conditions must be present:
• a susceptible host (i.e. anyone who is exposed to body fluids containing Human Immunodeficiency Virus (HIV), HCV or HBV, or anyone who has not been vaccinated against HBV or who does not have HBV antibody);
• a virus with sufficient virulence (infectivity) and dose (numbers) to cause infection; and
• a portal through which the virus may enter the host, that is, a break in the skin or sharps injury.

All patients need to be treated as potentially infectious and standard precautions applied to minimise the risk of transmission of infection from person to person. Exposure to infected blood can result in transmission from patient to practitioner, from practitioner to patient, and from one patient to another. Although transmission of bloodborne pathogens (e.g. HBV, HCV, and HIV) in dental healthcare settings can have serious consequences, such transmission is rare.

Dental practitioners and students have a responsibility to know their antibody status for bloodborne viruses such as HBV, HCV and HIV. Those who carry a bloodborne virus have a legal, professional and ethical responsibility to review the way they practice dentistry in line with medical advice from their treating specialist physician and advisory panels. They must avoid exposure prone procedures if they are viraemic. Current national policies for managing healthcare workers with a bloodborne viral illness should be followed.

7. Exposure prevention methods and exposure prone procedures

Avoiding occupational exposures to blood is the primary way to prevent transmission of HBV, HCV, and HIV. Exposures occur through percutaneous injury (e.g. a penetrating injury or cut with a sharp object), as well as through contact between potentially infectious blood, tissues, or other body fluids and mucous membranes of the eye, nose, mouth, or non-intact skin (e.g. exposed skin that is chapped, abraded, or shows signs of dermatitis). The majority of exposures in dentistry are preventable, and methods to reduce the risk of blood contacts have included: use of standard precautions, use of devices with features engineered to prevent sharp injuries, and modifications of work practices.
Appendix: Blood and Body Fluid Exposure Protocol

First aid
• Stop work immediately, regardless of the situation (e.g. even if administering local anaesthetic or undertaking another type of invasive procedure).
• Allow the wound to bleed and clean it thoroughly with soap and lukewarm water. There is no benefit in squeezing the wound. Do not apply disinfectants as some are irritants and retard healing.
• Flush mucous membranes/conjunctiva with normal saline or water. If contact lenses are worn, remove after flushing the eye and clean as usual.
• Further management of the wound is dependent on the nature of the injury.

Each dental practice should have a clear set of written instructions on the appropriate action to take in the event of a sharps injury to either staff or patients. These instructions should include emergency contact numbers for expert advice (this should name the medical practitioner experienced in dealing with such cases); they must be easily accessible and understood, and all dental practitioners must follow them.

Assessment and record
An assessment of the risk of transmission is an urgent priority to determine whether post-exposure prophylaxis (PEP) is necessary. Expert medical advice from an S-100 prescriber or an infectious diseases specialist is usually required to determine the need and type of PEP for the exposed person and the necessity or otherwise of testing the blood of the patient after appropriate pre-testing counselling.

Factors that influence whether an exposure has the potential to transmit a bloodborne virus (BBV) infection include:
• the type of exposure (mucosal splash vs. a deeply penetrating skin injury);
• the type of body substance (e.g. how much blood is present in the saliva);
• the volume of blood or body fluids;
• the length of time in contact with blood or body fluids; and
• the time which has elapsed since the exposure.

In addition, to complete an accurate assessment after a sharps injury, the following factors should be considered:
• the type of device involved;
• whether a solid sharp object or hollow bore object or needle was involved;
• the gauge of the needle;
• the procedure for which the device was used (e.g. into a vein or artery);
• the presence of visible blood on the device causing the injury;
• whether the injury was through a glove or clothing;
• whether a deep injury occurred in the exposed person; and
• whether the source patient is viraemic (e.g. with advanced/terminal HIV disease or a high viral load).

A full record of the incident should be made including details of:
• who was injured;
• the time the injury occurred;
• how the incident occurred;
• the type of exposure;
• what action was taken;
• who was informed and when; and
• the details of the patient being treated.

Finally, the record of all these details should be signed by those involved in the incident.

Testing
Testing should be offered following all occupational exposure to blood or body substances, particularly all 'contaminated' sharps injuries (e.g. those involving exposure to blood or blood-contaminated saliva via an instrument, bur, or contaminated wire).

Baseline tests
Baseline serum is requested from the injured staff member AND the patient (known source). The staff member should be tested at the time of the injury to establish their serological status at the time of the exposure for:
• HIV antibody;
• HBsAg (hepatitis B surface antigen) and antibody to hepatitis B surface antigen (anti-HBs); and
• HCV antibody (hepatitis C antibody).

This testing should be done as soon as possible after the injury (ideally the same day), bearing in mind the window period of the tests. If the injured staff member has ever had a blood test that demonstrates HBV immunity (anti-HBs antibodies > 10 IU/mL) – whether from vaccination or past infection – they are protected, and there is no need for hepatitis B immunoglobulin after a potential or confirmed exposure to HBV.

Testing the source patient
If a situation arises where there is a need to know the infectious status of a patient (such as a sharps injury), the patient has a responsibility to provide information or consent for testing that enables the practice or responsible health professional to ensure the safe management of the injured staff member. Informed and voluntary consent must be obtained before taking a blood sample to test for any purpose. When the responsible medical practitioner is obtaining this consent, the patient should be offered pre-test counselling to provide details on the test procedure and the long and short-term consequences to the patient of the test results. Post-test counselling may also be required, particularly if the result is positive.

The source individual should be tested for:
• HIV antibody;
• HBsAg (hepatitis B surface antigen); and
• HCV antibody.

If the source patient is found to be positive for a BBV, additional tests would usually then be ordered to assess infectivity (e.g. hepatitis B 'e' antigen, HBV DNA, and HCV RNA – the latter two by polymerase chain reaction assay). If the source individual tests positive for either of these hepatitis B or C markers, additional testing of the injured person may be required; assessment by an infectious disease physician is recommended, to consider whether post-exposure prophylaxis and appropriate long-term follow-up should be offered.

Refusal for testing
If the source patient refuses testing, treat the situation the same as the 'positive patient' scenario below. In this case, the refusal for testing should be documented.
Source negative
If blood tests show that the source patient is negative for HIV, HBV and HCV, no further follow-up of the exposed staff member is generally necessary, unless there is reason to suspect the source person:
• is seroconverting to one of these viruses; or
• was at high risk of bloodborne viral infection at the time of the exposure (because they have recently engaged in behaviours that are associated with a risk for transmission of these viruses).

During the window period the patient may be infectious, but this is undetectable by testing: the window period causes a FALSE NEGATIVE test result. The window period is six months for HBV and HCV. The window period for HIV is usually three months but it can, very rarely, be longer. The use of polymerase chain reaction (PCR) testing for HIV/viral RNA can identify 90% of infections within four weeks, significantly reducing this window period.

Source positive for HIV
If the source is KNOWN or SHOWN to be positive for antibodies to HIV (or is at high risk of seroconverting), the assessment of the injured person needs to take into account the risk of seroconversion, which is as follows:
• after a sharps injury with HIV-infected blood: 0.3%
• after a mucous membrane exposure to HIV-infected blood: 0.09%

As only a very small proportion of occupational exposures to HIV result in transmission of the virus, the side effects and toxicity of HIV post-exposure prophylaxis (PEP) must be carefully considered against its efficacy. PEP is only indicated if there has been a significant exposure and a proper risk assessment has been undertaken by a medical practitioner experienced in HIV management. HIV PEP is typically two or three orally administered anti-retroviral drugs and should be administered to the recipient within 24-36 hours after exposure (and preferably within two hours), on the advice of an infectious diseases physician. This therapy should be continued for four weeks.
• PEP is recommended for percutaneous (skin penetrating) exposure to potentially infectious blood or body fluids (because of the increased risk of HIV transmission).
• PEP should be offered (but not actively recommended) for exposure of ocular mucous membrane or non-intact skin to potentially infectious blood or body fluids (as there is less increased risk of HIV transmission).
• PEP should not be offered for an exposure to non-bloodstained saliva (as this is not potentially infectious for HIV).

Source positive for hepatitis B
If the source is KNOWN or SHOWN to be positive for hepatitis B surface antigen (HBsAg), the assessment of the injured person needs to take into account their immunity: the level of antibodies is important. If the staff member is immune to HBV (anti-HBs antibodies > 10 IU/mL), they are protected. If levels of immunity are relatively low (i.e. between 10 and 100 IU/mL), a booster injection would be prudent.

If the staff member is NOT IMMUNE (e.g. has never been immunised, did not seroconvert to the vaccine (a non-responder), or has antibody levels to HBsAg less than 10 IU/mL), the correct treatment is to:
1. Give a single dose of hepatitis B immunoglobulin (HBIG) within 48-72 hours; AND
2. Start a course of HBV immunisation. HBV vaccine should be given within seven days of exposure, and then repeated at one to two months and again at six months after the first dose. Following the final vaccine dose, the level of immunity (antibodies to surface antigen) should be checked two to four weeks later.

The risks of transmission after a sharps injury from a positive source vary according to whether active viral replication is occurring. If this HBV prophylaxis is not undertaken, the risk of transmission of HBV is 6% if the source is 'e' antigen negative, but more than 30% if the source is hepatitis B 'e' antigen positive.

Source positive for hepatitis C
If the source is KNOWN or SHOWN to be positive for antibodies to HCV, the risk of transmission is 1.8–3.1%; the risk increases to 10% if the source is PCR positive, while if the source is HCV RNA negative by PCR assay the risk is greatly reduced. There is no effective post-exposure prophylaxis (PEP) for HCV. The injured staff member should be re-tested for HCV antibodies at three and six months, in addition to their baseline test. Regular liver function tests such as ALT and AST (e.g. at two, three and six months) can be undertaken and possible clinical signs and symptoms monitored by an infectious diseases physician or gastroenterologist, with specific therapy considered if appropriate.

Counselling
Some people find the experience of an occupational exposure to HCV and HIV very distressing, and they should be given the opportunity for immediate counselling to address anxieties. The exposed person should be advised on ways to prevent transmission of bloodborne viral diseases to others. This will include advice about safe sex, safe injecting/safe needle use, breastfeeding, blood donation and safe work practices. A staff member who has been exposed to HIV (or HCV) should not donate blood, semen, organs or tissue for six months, and they should not share implements that may be contaminated with even a small amount of blood (e.g. razors or toothbrushes).

Follow up

Testing for injured person
Follow-up blood tests for the injured person should be undertaken at one, three and six months, and follow-up undertaken to detect any febrile illness occurring within three months of exposure (possibly representing a HIV seroconversion illness).
References and additional reading

1. Australian Capital Territory Health. Infection Control Policy (Document number 2006/0003814). 2006.
2. Australian Capital Territory. Public Health (Infection Control) Code of Practice 2005. Accessed May 2008 from www.legislation.act.gov.au/di/2005-303/current/rtf/2005-303.rtf.
3. Australian Dental Association Inc. The Practical Guides. Sydney: ADA Inc, 2006.
4. Australian Dental Association Victorian Branch Inc. Systematic Operating Procedures: A manual for infection control and occupational health and safety for the dental practice, 7th edition. Melbourne: ADAVB Inc, 2005.
5. Bednarsh HS, Eklund KJ, Mills S. Dental unit waterlines: check your dental unit water IQ. Reprinted from Access, ©1997 by the American Dental Hygienists' Association.
6. British Dental Association. Advice Sheet A12 – Infection Control in Dentistry. Accessed August 2007.
7. Centers for Disease Control and Prevention. Guidelines for Infection Control in Dental Health-Care Settings – 2003. MMWR 2003;52(RR-17). Atlanta: US Department of Health and Human Services, 2003.
8. Communicable Diseases Network Australia, the National Public Health Partnership and the Australian Health Ministers' Advisory Council. Infection Control Guidelines for the Prevention of Transmission of Infectious Diseases in the Health Care Setting (ICG). Canberra: Department of Health and Ageing. Accessed July 2007 from www.health.gov.au/internet/main/publishing.nsf/Content/icg-guidelines-index.htm.
9. Communicable Diseases Network Australia. Guidelines for Managing Blood-Borne Virus Infection in Health Care Workers. Australia: Department of Health and Ageing. Accessed May 2008 from www.health.gov.au/internet/main/publishing.nsf/Content/cda-cdna-bloodborne.htm.
10. Dental Board of Queensland. Code of Practice: Infection control. Endorsed 22 September 2005. Accessed November 2007 from www.dentalboard.qld.gov.au.
11. Dental Board of Queensland. Policy No. 4 – Infection Control Guidelines. Endorsed January 2004. Accessed November 2007 from www.dentalboard.qld.gov.au.
12. Dental Practice Board of Victoria (2007). Information Sheet: Infection control. Accessed May 2008 from www.dentprac.vic.gov.au.
13. Dental Practice Board of Victoria (2007). Information Sheet: Practice inspection checklist. Accessed May 2008 from www.dentprac.vic.gov.au.
14. FDI World Dental Federation (2003). FDI Policy Statement – Infection Control in Dentistry. Accessed November 2007 from www.fdiworldental.org/federation/assets/statements/english/infection_control_and_hygiene/infection_control.pdf.
15. Griffith University (2006). Infection control policy. Accessed May 2008 from www.gu.edu.au/policylibrary.nsf/mainsearch/5033ccaebb0841094a2571720063f738?opendocument.
16. Immunise Australia Program. Accessed May 2008 from www.immunise.health.gov.au.
17. International Organization for Standardization. ISO 17664:2004 – Sterilisation of medical devices – Information to be provided by the manufacturer for the processing of resterilisable medical devices.
18. Legislation relating to anti-discrimination and equal opportunity: Anti-Discrimination Act 1977 (New South Wales); Anti-Discrimination Act 1991 (Queensland); Anti-Discrimination Act 1996 (Northern Territory); Anti-Discrimination Act 1998 (Tasmania); Discrimination Act 1991 (Australian Capital Territory); Equal Opportunity Act 1984 (South Australia); Equal Opportunity Act 1984 (Western Australia); Equal Opportunity Act 1995 (Victoria); Disability Discrimination Act 1992 (Commonwealth); Human Rights and Equal Opportunity Commission Act 1986 (Commonwealth).
19. NHS Scotland. Safety Action Notice SAN(SC)03/11: Decontamination of reusable medical devices: control of aqueous solutions in ultrasonic cleaners. Accessed May 2008 from www.nhsscotland.com/shs/hazards_safety/hazardsp_P_4.HTM.
20. Northern Territory Department of Health and Community Services (2005). Infection Control Guidelines.
21. NSW Health Department. Circular 2002/80.
22. NSW Health Department. Infection Control Policy Directives: PD2005_247 – Infection Control Policy; PD2005_414 – Infection Control Program Quality Monitoring Policy; PD2005_203 – Management of Reportable Infection Control Incidents; PD2005_311 – HIV, Hepatitis B or Hepatitis C Health Care Workers Infected; PD2005_354 – Workcover NSW Reporting Requirements: Occupational Exposures to Blood-Borne Pathogens. Accessed July 2007 from www.health.nsw.gov.au.
23. Parashos P, Linsuwanont P, Messer HH. A cleaning protocol for rotary nickel-titanium endodontic instruments. Aust Dent J 2004;49(1):20-27.
24. Queensland Health (2001). Infection Control Guidelines for Oral Health Care Settings, 2nd Edition.
25. Royal Australian College of General Practitioners (n.d.). Code of Practice on Infection Control. Accessed August 2007.
26. Rutala WA, White MS, Gergen MF, Weber DJ. Bacterial contamination of keyboards: efficacy and functional impact of disinfectants. Infect Control Hosp Epidemiol 2006;27:372–377.
27. Schultz M, Gill J, Zubairi S, Huber R, Gordin F. Bacterial contamination of computer keyboards in a teaching hospital. Infect Control Hosp Epidemiol 2003;24:302–303.
28. Standards Australia. AS 1079 – Packaging of items (sterile) for patient care – Non-reusable papers – For the wrapping of goods undergoing sterilisation in health care facilities.
29. Standards Australia. AS 2773.1 – Ultrasonic cleaners for health care facilities – Non-portable.
30. Standards Australia. AS 2773.2:1999 – Ultrasonic cleaners for health care facilities – Benchtop.
31. Tan CM, Gaines DJ. Cleaning dental instruments: measuring the effectiveness of an instrument washer/disinfector.

©2012
Accessed August 2007 from. Dental Board of the Northern Territory. Hepatitis B and Hepatitis C – Management of Health Care Workers Potentially Exposed PD2005_162 – HIV. AS 2945:2002. 21. RACGP Standards for General Practitioners. Palenik CJ. 26. 28.2:1994. 22. The Australian Sterilising Handbook. Batch-type washer/disinfectors for health care facilities. AS 2773.asp 23.au/policies/a-z/i. 9th Edition Canberra: NHMRC. Beiswanger MA.pdf Organization for Safety and Asepsis Procedures (2007). Accessed August 2007 from. 20.gov.nsf/Content/Handbook-home New Zealand Dental Association and Dental Council of New Zealand (2002).nt. Zubairi S. Code of Practice – Control of Cross Infection in Dental Practice. Am J Dent 2000. Aust Dent J 2004.nsw.gov.qld.au/chrisp/ic_guidelines/contents. Gergen MF.Equal Opportunity Act 1995. Weber DJ. Setcos JC. gov. Gordin F. Miller CH. Criterion 5. NHS Scotland.au/health/org_supp/prof_boards/dental/code_of_practice_infection_control. 24.health. 2008. 19. Infect Control Hosp Epidemiol 2003. A cleaning protocol for rotary nickel-titanium endodontic instruments. CDC Guidelines: From Policy to Practice. Standards Australia.au/internet/immunise/publishing. National Health and Medical Research Council. Standards Australia.au/policy/ohb/publications/infection_control_guidelines. Huber R. Accessed November 2007 from.
J Pediatr Oncol Nurs 2002:19:164– 171. 45. Standards Australia. Standards Australia. Standards Australia. Office-based health care facilities – Reprocessing of reusable medical and surgical instruments and equipment. AS 3836:1998. AS/NZS 3816:1998. 49 ©2012 . 42. AS/NZS 4261:1994. Standards Australia. Dental Board of Australia. Standards Australia. Standards Australia. National Health and Medical Research Council. AS/NZS 4187:2003. Guidelines on infection control. Standards Australia. Walsh LJ (2011) The University of Queensland School of Dentistry Infection Control Management Plan. 44. Standards Australia. Single-use face masks for use in health care. 47. Standards Australia. disinfecting and sterilising reusable medical and surgical instruments and equipment. Australian National Guidelines for the Management of Health Care Workers known to be infected with Blood-Borne Viruses. 43. Single-use sterile surgical rubber gloves – Specification. 38. AS 3789. Textiles for health care facilities and institutions – Theatre linen and pre-packs. 48. Reusable containers for the collection of sharp items used in human and animal medical applications. 46. 41. STEAM STERILISER September 2011. AS 4031:1992. Single-use examination gloves – Specification. October 2010. AS 4381:2002. AS/NZS 4179:1997. Management of clinical and related wastes. July 2010. Standards Australia. AS/NZS 4815:2006. AS/NZS 4011:1997. 36. and maintenance of the associated environment. 39. CDNA. Toles A.34. 37. Non-reusable containers for the collection of sharp medical items used in health care areas. Cleaning. 40. Australian guidelines for the prevention and control of infection in healthcare. and maintenance of associated environments in health care facilities. 35. Rack conveyor washers for health care facilities.2:1991. Artificial nails: Are they putting patients at risk? A review of the research. 
| https://www.scribd.com/doc/141380648/infection-Control-Guidelines-2012 | CC-MAIN-2017-34 | refinedweb | 26,314 | 50.23 |
Learn C the Hard Way (Zed A. Shaw).
The Gnu C Tutorial
Hacking: The Art of Exploitation
Today I stumbled across an excellent series of lectures by Jerry Cain.
This link points at the third one, in which he describes using pointers to do a little swap function. I thought I'd have a go at coding it and came up with this.
So, if I have got this right:
Code: Select all
#include <stdio.h>

// swap function
void swap_nums(int *, int *); // prototype a function that takes pointers.

int main()
{
    int x = 10;
    int y = 66;

    printf("x = %d y= %d\n", x, y);
    swap_nums(&x, &y); // pass the addresses of x and y to my function
    printf("x = %d y= %d\n", x, y);

    return 0;
}

void swap_nums(int *a, int *b)
{
    int c;
    c = *a;
    *a = *b;
    *b = c;
}
I pass pointers to my function.
"a" stores the memory location where 10 is stored and "b" the address where I have 66 stored.
c = *a
The asterisk "dereferences" the pointer - in other words now the actual value at that address is put into "c".
*a = *b
The value held at address a is made to equal the value held at address b.
*b = c
The value held at address b is made to equal c (just an int, not a pointer!)
Then we return to main and the values have been swapped!
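One way to convince yourself the pointers are doing the work: the same function written with plain int parameters only swaps its own local copies. This is just an illustration I've added, not from the lecture:

```c
#include <stdio.h>

/* Pass-by-value version, for contrast: a and b are COPIES of the
   caller's ints, so the swap happens only inside this function. */
void swap_by_value(int a, int b)
{
    int c;
    c = a;
    a = b;
    b = c;
    printf("inside: a = %d b = %d\n", a, b); /* swapped in here only */
}
```

Calling swap_by_value(x, y) with x = 10 and y = 66 prints "inside: a = 66 b = 10", yet x and y are unchanged afterwards, which is exactly why the version above passes &x and &y instead.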
Item 3 in the 2nd Edition of Effective Java explains three ways of implementing a singleton in Java, the last of which is “Enum as Singleton”. This uses an Enum with a single element as a simple and safe way to provide a singleton. It’s stated as being the best way to implement a singleton (at least, for Java 5 onwards and where the additional flexibility of the “static factory method” approach isn’t required).
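For anyone who hasn't seen it, the whole idiom fits in a few lines. This is a minimal sketch of my own (Bloch's book uses a different example class), with a little state added to show the instance behaves like any other object:

```java
// Single-element enum as a singleton: the JVM guarantees exactly one
// INSTANCE per classloader, and the serialization issue mentioned below
// is handled by the Enum machinery for free.
enum Counter {
    INSTANCE;

    private int count = 0;  // a singleton may carry state

    public int next() {
        return ++count;
    }
}
```

Client code accesses it as Counter.INSTANCE.next().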
But is this technique a good or bad idea? Is anyone actually doing it this way? If you’ve used it or encountered it, are you happy with it or do you have any reservations?
Please note: I’m not interested in any wars over whether singletons are evil or not. The concept exists, one comes across them in real code, and there are reasonable discussions to be had over whether they are always a bad idea or have their uses in particular situations. None of that is relevant to how best to implement a singleton if one ever does wish to do so, or the pros and cons of different implementation techniques.
OK, with that dispensed with, what should we make of the “Enum as Singleton” technique?
From my point of view, it works, the code is trivially simple, and it does automatically take care of the “serialization” issue (that is, maintaining one instance per classloader even in the face of serialization and deserialization of the instance). But it feels too much like a trick, and (arguably) not in the spirit of the concept of an enumeration type. When I see an Enum that isn’t being used to enumerate a set of constants and that only has one element, I think I’m more likely to have to stop and figure out what’s going on rather than immediately and automatically thinking “oh, here’s a singleton”. If it becomes more common I’ll no doubt get used to seeing this idiom, but if so I might then find myself misled by any “normal” enumeration that just happens to only have one element.
Another concern is that whilst the use of a static factory method to provide a singleton offers more flexibility than either the use of a public static member or a single-element Enum, it requires different client code for accessing the singleton. So using either of the latter two approaches means that you risk having to change client code if you ever need to “upgrade” the singleton to the more flexible “static factory method” approach.
A further issue is how best to name Enum classes and instances that are implementing singletons. Should one stick to the usual naming conventions for Enums, or adopt some other naming convention (and maybe include “Singleton” in the name to make the intent clear)? And what if the singleton object is mutable in any way? Or is that a more general issue over the naming of enumeration “constants” if they are actually mutable? Or maybe it makes more sense to say that Enums must be genuine constants and should never, ever be mutable – in which case “Enum as Singleton” shouldn’t be used for any singleton with mutable state, which limits its applicability even more?
So now that the “Enum as Singleton” technique has been widely known for a few years, does anyone have any significant experiences from real-world use of it? Or any other opinions on this technique?
I think it's a good solution to use an Enum to implement a Singleton in Java. Enums are more versatile than one might think; see the post "10 examples of enum in Java" to know what else you can do with enums in Java.
Thanks
Javin
10 interview questions on Singleton in Java
Very nice article and explanation of enum and singleton; the two concepts are confusing, but this article explains them well. I would also like to share one link about enum in Java.
What is instance control?
Instance control basically refers to keeping a single instance of the class, i.e. the singleton design pattern.
From Java 1.5 onwards we should always prefer an ENUM to create a singleton instance. It is absolutely safe; the JVM guarantees that. All earlier mechanisms of restricting a class to a single instance are broken in one way or another.
So in any interview you can confidently say that an ENUM provides a perfect singleton implementation.
Now what is readResolve?
readResolve is nothing but a method provided in a serializable class. This method is invoked when a serialized object is deserialized. Through the readResolve method you can control how the instance will be created at the time of deserialization. Let's try to understand some code:
public class President {

    private static final President singlePresident = new President();

    private President() {
    }
}
This class generates a single instance of the President class; singlePresident is a static instance and the same one is used throughout.
But do you see where it breaks?
Please visit for more details on this
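The break the comment is hinting at is serialization: deserializing a serialized President yields a second, distinct instance. readResolve, described above, is the conventional repair. A sketch with an invented class name (not the commenter's code):

```java
// Hypothetical singleton (name invented) showing the readResolve repair.
class Captain implements java.io.Serializable {
    private static final long serialVersionUID = 1L;
    private static final Captain INSTANCE = new Captain();

    private Captain() { }

    public static Captain getInstance() {
        return INSTANCE;
    }

    // Invoked by the deserialization machinery; returning INSTANCE
    // throws away the freshly allocated duplicate object.
    private Object readResolve() {
        return INSTANCE;
    }
}
```

After a serialize/deserialize round trip, readResolve hands back INSTANCE, so an == check still holds. The single-element enum approach gets this guarantee without any of this ceremony.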
1) Through ENUM you can implement a singleton within the scope of a ClassLoader. If you want a singleton across more than one ClassLoader (across multiple nodes, for example), it's hard to implement it with ENUM.
2) An Enum can't inherit anything from a superclass.
3) Mostly you don't care whether the class has only one instance per ClassLoader. You need an instance which behaves as a singleton in a specific scope.
So for example, you have a Server and an Android client. In both places you have a User. On Android the User is a singleton in ClassLoader scope; on the Server it is a singleton in Session scope.
So when you have implementation:
public abstract class AbstractUser implements IUser {…}
public class ServerUser extends AbstractUser {…}
public class AndroidUser extends AbstractUser {
public static IUser getInstance() {…}
}
You implement the singleton on Android, but you can reuse a lot of functionality from the non-singleton case.
So there are a lot of situations where you can't use the ENUM singleton. If you use it anyway, you will end up with more than one singleton pattern in your code. And when you notice that the singleton isn't a singleton in some scope, you must refactor the code.
So I decide, no, the using ENUM as singleton is antipattern. | https://closingbraces.net/2011/07/04/enum-as-singleton/ | CC-MAIN-2018-34 | refinedweb | 1,018 | 60.04 |
Fields In Matlab Members In Csharp
ISFIELD in Matlab
Let's make a struct with the fields b and c; b has the subfields b1 and b2:
>> A = [];
>> A.b = [];
>> A.b.b1 = 'b1';
>> A.b.b2 = 'b2';
>> A.c = 3.14;
>> A
A =
    b: [1x1 struct]
    c: 3.1400
>> A.b
ans =
    b1: 'b1'
    b2: 'b2'
Now we might want to know if there is a field in A that is called b:
>> isfield(A,'b')
ans =
     1
GETMEMBERS in C#
We create a silly class Ring with an integer id and a string owner. Then we get a Type object describing Ring (Type theType = przs.GetType()) and print its members on the Console.
using System;
using System.Reflection;

namespace hasMemberTest
{
    // Silly class to test on
    class Ring
    {
        public int Id;
        public string Owner;

        // ctor sets id and owner
        public Ring(int id, string BelongsTo)
        {
            this.Id = id;
            this.Owner = (string)BelongsTo.Clone();
        }

        // testprogram
        static void Main(string[] args)
        {
            // create the precious
            Ring przs = new Ring(1, "Sauron, who else? (A stinking hobbit?)");
            Console.WriteLine("Ctor'ed ring #{0} belonging to '{1}'.", przs.Id, przs.Owner);

            // get type and members
            Type theType = przs.GetType();
            MemberInfo[] mbrInfoArray = theType.GetMembers();

            // write all members
            Console.WriteLine("The '{0}' has the following members", theType);
            foreach (MemberInfo mbrInfo in mbrInfoArray)
            {
                Console.WriteLine(" - {0,32} is a {1,16}", mbrInfo, mbrInfo.MemberType);
            }
        }
    }
}
The output should be something like this:
Ctor'ed ring #1 belonging to 'Sauron, who else? (A stinking hobbit?)'.
The 'hasMemberTest.Ring' has the following members
 - System.Type GetType() is a Method
 - System.String ToString() is a Method
 - Boolean Equals(System.Object) is a Method
 - Int32 GetHashCode() is a Method
 - Void .ctor(Int32, System.String) is a Constructor
 - Int32 Id is a Field
 - System.String Owner is a Field
So, indeed, we can check if there is a field in a class - but I would not recommend it since there are better ways of doing it. (See Csharp Inheritance.)
This page belongs to the category Kategori Programmering.
Jun 22, 2012 05:53 PM
This question is from a Best Practices point of view.
Jun 22, 2012 06:40 PM
Hi,
I would keep the logic out of the webform.cs because I think that's a 'view' - personally I prefer to collect the information there but not process it there - pass it on - if possible to a purpose-built dll (I assume you have one anyway) - so I like to use an intermediate class, which interfaces with the dll. So, my web forms inherit from this intermediate class and the methods are static, so it's kind of like this: -
using MyDAL;

public class IntermediateClass : System.Web.UI.Page
{
    public static Result AddCompany(Company company)
    {
    }
}

//THEN IN WEB FORM
public partial class MyWebForm : IntermediateClass
{
    protected void btn_Click(object sender, EventArgs e)
    {
        //since we inherit
        Company c = new Company();
        c.ID = (int)hiddenField.Value;
        c.Name = txtName.Text;
        AddCompany(c);
        //rest.
    }
}
This is one method to go about things. Sorry if I barked up the wrong tree.
Hi to all,
AVANTI 0.2.6 - FFmpeg/Avisynth GUI (June 2008 release) available.
This version has the following changes:
..1.. Improved user tools for complex command line arguments.
..2.. Improved and enhanced the "advanced database manager".
..3.. Updated codecs/formats database for latest FFmpeg versions.
..4.. Adapted database to decode ogg theora (ogg, ogm extensions).
..5.. Added ogg theora templates for encoding.
..6.. Fixed bug in version overwrite option (database not preserved).
..7.. Fixed bug in "Preferences" (some were not updated immediately).
..8.. Updated chm help for renewed database manager.
You can read more and download Avanti 0.2.6 here at
Mainly a version of bug fixes and maintenance to be up-to-date
before the summer break.
The codecs/formats database wasn't updated for a long time
but now contains additions from the latest FFmpeg versions.
If you use the "overwrite" option to upgrade an older version,
also read the help chapter of the "Advanced database manager".
In case of questions or major bug reports (or bad weather),
I'll be around for support and/or a revision.
Chris.
thanks for the new version
Avanti 0.2.6 revision 1 available.
Occasionally new FFmpeg versions have changes that are not foreseen and
this requires the release of a revision to keep Avanti compatible.
The latest known Windows builds by Ramiro Polla (SVN-r13242) and Sherpya
(Sherpya-r13537) appear to have revised documentation embedded.
The codecs list in these versions now includes comments (like the formats list
already did) and the current Avanti version isn't prepared for that. Reading the
list into the "Advanced database manager" fails and needs some adapting.
I also found that muxing with FFmpeg is still tricky and have my doubts about
m2v/ac3 and m2v/mp2 (AVI muxing does fine for me). The automatic insertion
of the "-genpts 1" command for m2v is removed to allow some source dependent
experiments on the command line with the -genpts and -copyts commands.
Because these builds perform very well and seem to have (almost) all former
codec problems fixed (Sherpya build still has a broken x264), I decided to
release Avanti 0.2.6 revision 1 which is adapted for using these versions.
If you already installed version 0.2.6, you only need to replace the
"Avanti-GUI.exe" executable. This will preserve your current settings.
Note: I use revisions when changes only concern bug fixes and/or adaptions.
The documentation in Avanti packages is highly related to the specific
version and since there are no new features added, it would give me a lot
of extra (needless) work to change it all.
Chris.
just registered and thanks for the help so far. i'm using this to get xvids on my ps3. works great till i need to "force avisynth". dunno how to do that and can't find it in avanti.
Any help would be greatly appreciated. i've got ffmpeg/avisynth/avanti installed and it works great till i hit a few xvids that are i guess encoded or something wrong.
thanks..
thanks a bunch Chris K.
it still dies on me, guess the xvid is encoded weird. funny though that it's in a series and the few before it work fine.
you guys have any tips if an xvid won't encode right? these are all dbz episodes. i assume they're all the same but i guess not.
Hi Chris!
I've been trying to mux an mp3+png into a flv file and Avanti so far has worked but it reconverts the mp3 files, is there any way to bypass the mp3 conversion, just mux mp3+png.
I tried using ffmpeg and mencoder but the final flv is twice as big: with ffmpeg and mencoder, (160k mp3 + png) results in a 14 MB file, while the original sources are less than 7 MB.
With Avanti the final size is 7 MB but the mp3 gets reconverted; can you help? I'm using some scripts 45Tripp posted in another thread.
ffmpeg:
Code:
ffmpeg -loop_input -f image2 -i video.png -r 1 -vcodec flv -i audio.mp3 -acodec copy -qscale 2 -g 5 -cmp 3 -subcmp 3 -mbd 2 -flags trell temp.avi -shortest
ffmpeg -i temp.avi -vcodec copy -acodec copy video.flv
Mencoder:
Code:
mencoder mf://video.png -mf fps=1/388:type=png -audiofile audio.mp3 -ovc lavc -oac copy -vf harddup -lavcopts vcodec=flv:vqscale=2:keyint=5 -ofps 1 -of lavf -o final.flv
Avanti:
If i choose "mux audio" it tells me to "Please load UNI_MUX_SETUP template to set MUX environment.", i load that template, choose settings but an error message appears:
Thanks
well frifox wanted the best quality.
if you want you can set them so as not to use cq encoding,
also the ffmpeg command can be done in one now:
Code:
ffmpeg -loop_input -f image2 -i video.png -r 1 -vcodec flv -i audio.mp3 -acodec copy -b 50k -g 5 -cmp 3 -subcmp 3 -mbd 2 -flags trell video.flv -shortest
Code:
mencoder mf://video.png -mf fps=1/388:type=png -audiofile audio.mp3 -ovc lavc -oac copy -vf harddup -lavcopts vcodec=flv:vbitrate=50:keyint=5 -ofps 1 -of lavf -o final.flv
for avanti,
i think i once made a 'copy' codec in the database.
i think i keep forgetting to ask Chris for a 'stream copy' option as i'm still waiting
on ffmpeg ogm remux to be fixed.
one way of doing it is to use the -new function,
in the video options at the beginning add this:
Code:
-new -i "@source1" -i "@source2" -dvs -acodec copy
Code:
-y "@destin1"
tripp
"I'll give you five dollars if you let me throw a rock at you"
Hi Ricardo,
I've tried this variant on the -new suggestion by Tripp and it worked (with some restrictions):
Code:
-new -loop_input -i "@source1" -i "@source2" -dvs -acodec copy -shortest -y "@destin1"
All audio settings are ignored anyway with -acodec copy on the command line.
You won't get progression display.
Just wait patiently for the "Process successfully finished after ..... hms" message.
I tried with a mp3 of 938.7 kb and the final flv became 1,4 mb.
Chris.
Hi 45tripp
Thanks for the scripts, it works.
Chris and Tripp, adding that bit at the beginning/end, with or without quotes, doesn't work; this is the code that appears, can you show me where to add it?
Code:
-g 160 -cmp 3 -subcmp 3 -mbd 2 -flags aic+cbp+mv0+mv4+trell -sws_flags lanczos
Code:
             ffmpeg      mencoder    avanti
General #0
File size  : 10.4 MiB  : 10.8 MiB  : 7.67 MiB
PlayTime   : 6mn 28s   : 6mn 28s   : 6mn 28s
Bit rate   : 224 Kbps  : 234 Kbps  : 166 Kbps
This is the complete command line with the hq tweaking included:
Code:
-new -loop_input -i "@source1" -i "@source2" -dvs -g 160 -cmp 3 -subcmp 3 -mbd 2 -flags aic+cbp+mv0+mv4+trell -sws_flags lanczos -acodec copy -shortest -y "@destin1"
uni_flash_hq_mod.7z
the diff is what Chris added to the commandline:
Code:
-loop_input
Code:
-shortest
without it you sorta get a bastard single frame matched to the audio; can't seek the single frame.
take it as a reserved bonus, no re-encode but with no seeking.
tripp
Thanks Chris and 45Tripp for the explanations and "fix".
Well I've been messing with the modded template and if I set it to 20fps, the picture becomes blurry after ten seconds and progressively so until the end of the song. But when set to 10fps it appears to retain the original picture quality to the end, although it does become slightly muddy. Any idea why this is? The source picture is JPEG btw.
well it's a moot point, because 1fps is more than enough for a single frame.
but if you insist...
left bitrate at 50k?
you'll have to increase it substantially.
or use qscale encoding.
tripp
Hi everyone. I apologize if this is in the wrong place, but I have some questions.
First, what is the best video scale algorithm for downsizing videos? Second, is there a tutorial for using the VBR feature? What are the best values to enter there in order to optimize the file size, say for a 5 minute video at 750 kb/s?
Also, is Avanti still being developed?
- Originally Posted by dark_guard
i use lanczos for up and down.
others use spline for up and down.
some say one better for up and another for down, and the inverse,
some like a simple bicubic for down, i don't like the softness.
try them out.
pick one
Originally Posted by dark_guard
what's your source what's the goal?
Originally Posted by dark_guard
a suggestion, a bug to report?
Chris K still maintains it.
tripp
- Originally Posted by 45tripp
I was wondering if there was some subtle difference between them all that I didn't notice.
Originally Posted by 45tripp
Originally Posted by 45tripp
- Originally Posted by dark_guard
Originally Posted by dark_guard
most probably progressive.
the best way to handle is to get dgindex
import the vob. press F5, check "video type" anything with film over 95% means you can 'force film'.
'video'->'field operation'->'forced film'
press F4 to save project.
you'll get a .d2v file.
install avisynth if you haven't already.
open a txt editor (notepad)
dgindex came with DGDecode.dll, place it somewhere convenient, or note its path.
in the text write:
Code:
LoadPlugin("path\DGDecode.dll")
MPEG2Source("moviename.d2v")
now treat 'movie.avs' as your movie.
i.e. drag it into Avanti's input.
for flv, the hq flash template in Avanti is as good as it gets.
just change bitrate and resolution
that's sorenson spark,
i've not really been following the later testing on youtube,
but i see success in putting h264 up.
There aren't any good ready-made templates for that in Avanti.
but you can make one.
Originally Posted by 45tripp
Originally Posted by dark_guard
post it.
gl
tripp
I've got a question concerning interlaced mpeg4(XViD)-material:
I've got some files which are interlaced (AviSynth-Test shows "TFF"), but reported progressive in apps like MPEG4Modifier and which don't play well on my standalone.
Seems the device reads the flag and presumes progressive content; esp. annoying when I want to jump back/forth or ffwd, etc.
So on doom9's forum I already learned that I would have to re-encode the files.
I tried AVANTI for that job, and the outcome is really fine - so first I want to say: Thank you, Chris K!!
But the files again are interlaced (I'd like to keep them that way - I de-activated "Deinterlace" in AVANTI), and again are reported as being "progressive".
Is this an option I would have to add in the commandline for ffmpeg? And if so, how would I do it?
Thanks for any help!
- Originally Posted by nbarzgar
nice to hear you like Avanti
Originally Posted by nbarzgar
Code:
-flags ilme+ildct
you can't do it with xvid within ffmpeg
gl
tripp
Then you must also code this logic into your program.
Your source shows no such attempt to address this,
- the machine will do what you ask and follow what you told it.
Oh, sorry. I had the wrong code attached. Here I try to sendCommand:
#include <SoftwareSerial.h>
#include <Nextion.h>

int relay1 = 13;
int relay2 = 12;
boolean button1State;
boolean button2State;

void setup() {
  delay(1000);
  Serial.begin(9600);
  myNextion.init();  // send the initialization commands for Page 0
  pinMode(relay1, OUTPUT);
  pinMode(relay2, OUTPUT);
  delay(1000);
  digitalWrite(relay1, HIGH);
  digitalWrite(relay2, HIGH);
}

void loop() {
  String message = myNextion.listen();  // check for message
  if (message == "65 0 2 0 ffff ffff ffff") {
    myNextion.buttonToggle(button1State, "bt1", 0, 2);
    if (button1State == HIGH) {
      digitalWrite(relay2, HIGH);
      delay(100);
      digitalWrite(relay1, LOW);
      delay(100);
      myNextion.sendCommand("bt1.picc=0");  // set "bt1" image to 0
      myNextion.sendCommand("ref bt1");     // refresh
    } else {
      // turn led1 off:
      digitalWrite(relay1, HIGH);
    }
  }
  if (message == "65 0 3 0 ffff ffff ffff") {
    myNextion.buttonToggle(button2State, "b1", 0, 2);
    if (button2State == HIGH) {
      digitalWrite(relay1, HIGH);
      delay(100);
      digitalWrite(relay2, LOW);
      delay(100);
      myNextion.sendCommand("bt0.picc=0");  // set "bt0" image to 0
      myNextion.sendCommand("ref bt0");     // refresh
    } else {
      // turn led2 off:
      digitalWrite(relay2, HIGH);
    }
  }
}
Skip your code for a moment. First you need to deal with your logic.
Logic starts in the mind, then flows to paper, then into design and code.
First I am not a fan of an unsupported unofficial library using ffff instead of 0xFF
- So personally I think someone attempting to use that which even the authors have
abandoned has selected their troubles as a challenge, and .. challenge you have.
Second, I am not a fan of debugging code.
- it may be the artistic license of the programmer to code in a fashion they wish
- but Booleans are true or false -- pins are HIGH and LOW. Booleans are not HIGH
(unless they are from Colorado, where they are legally allowed to be HIGH)
- I am certainly not a fan of having to investigate the authors library to see what they have done
Combining two dual states as you have, is a tri state
- state 0 - both off
- state 1 - relay A on
- state 2 - relay B on
( state 3 - relay A and B on is not permitted or not desired, I would argue the first)
IF you deal with the logic in this tri state, then you may have better clarity
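To make that tri-state concrete, here is a framework-free sketch in plain C++. The names and the helper function are mine, not the Nextion library's; on the MCU, each state change would additionally be mapped to the digitalWrite calls and to both buttons' .val in one place:

```cpp
// State 0: both relays off; 1: relay A on; 2: relay B on.
// The "both on" state 3 is simply not representable.
enum class RelayState { AllOff = 0, AOn = 1, BOn = 2 };

// A button press toggles "its" relay; switching one relay on
// implicitly forces the other off, so state 3 can never occur.
RelayState press(RelayState s, char button)
{
    if (button == 'A')
        return (s == RelayState::AOn) ? RelayState::AllOff : RelayState::AOn;
    if (button == 'B')
        return (s == RelayState::BOn) ? RelayState::AllOff : RelayState::BOn;
    return s;  // unknown button: state unchanged
}
```

Because state 3 has no representation, the "other relay must go off" rule cannot be violated by accident and needs no after-the-fact correction.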
The Nextion side behaviour of a dual state button is to toggle in dual state regardless.
state = 1 - state
this means using two dual states, the undesired state 3 may occur and will need correction
Dual States are not set to check conditions by default, so ...
Is this logic Nextion side? or MCU side?
Or do you select a different way to represent it Nextion side?
I am not certain that your MCU side relay code (2nd code) is failing.
(you did not say what is failing, just throw all code up and said "here, fix it")
I would also not have tried to change the dual state's .picc but its .val
- changing the picture simply will only mess up how the state is displayed
I hope some of this is helpful, I am just not in the mood to code it myself
Jaakko Uotila
Hi. I am trying to control two relays from a Nextion screen and it works fine. However, both relays can't be On at the same time, so if relay1 is On and I turn relay2 On, relay1 should turn Off. This I can also do, but what I don't understand is how to also change the relay1 button on the screen to Off. Tried sendCommand but can't get it to work :( Could you please help me.
Here is the code and HMI file.
Language page crashes in only-installable mode
Bug Description
Binary package hint: ubiquity
This is an oem install that included a preseed for showing only the installable languages on the oem page. I'm finding that if the apt cache hasn't been loaded before the page gets loaded, it will crash.
The attached logs show that crash.
This can also be reproduced with this simple python script if the apt cache hasn't yet been loaded:
import sys
sys.path.insert(0, '/usr/lib/
from ubiquity import misc
misc.drop_
import apt
apt.cache.Cache()
This is the (cropped) return from that script:
File "/usr/lib/
self.
File "/usr/lib/
self._cache = apt_pkg.
SystemError: E:Could not open file /var/cache/
Making a minor modification, it will work however:
import sys
sys.path.insert(0, '/usr/lib/
from ubiquity import misc
misc.drop_
import apt
apt.cache.Cache()
drop_privileges is applied to ubiquity when first loaded, so it's effective on all pages. The primary difference between those two methods is that drop_all_privileges will set the euid/egid as well as real uid/gid while drop_privileges only sets the euid/egid.
ProblemType: Bug
DistroRelease: Ubuntu 10.10
Package: ubiquity 2.4.1
ProcVersionSign
Uname: Linux 2.6.35-22-generic i686
Architecture: i386
Date: Sat Sep 25 22:32:08 2010
DistributionCha
# This is a distribution channel descriptor
# For more information see http://
canonical-
InstallationMedia: Ubuntu 10.10 "Maverick" - Build i386 LIVE Binary 20100925-01:44
ProcEnviron:
PATH=(custom, no user)
LANG=en_US.UTF-8
SHELL=/bin/bash
SourcePackage: ubiquity
This bug was fixed in the package ubiquity - 2.4.2
---------------
ubiquity (2.4.2) maverick; urgency=low
[ Jonathon Riddell ]
* gui/qt/stepLanguage.ui: nicer icons and better layout on
language page (LP: #628808)
[ Mario Limonciello ]
* Set the LANG before running oem-config-remove.
* Prevent a crash of debconf-communicate when removing oem-config.
(LP: #641478)
* Raise privileges when running the language page in only-installable
mode. (LP: #647792)
* Automatic update of included source packages: flash-kernel
2.28ubuntu10.
[ Evan Dandrea ]
* Don't let Jockey's automatic driver installation failing cause the
entire prepare page to fail.
* Make sure $LANGUAGE gets set in the parallel debconf child process,
so that we get translated descriptions (LP: #646109).
* Set the effective UID in regain_privileges_save so we don't try to
setgroups([]) as a regular user (LP: #646827).
[ Didier Roche ]
* depends on latest libindicator-dev for ABI change (LP: #637692, #647739)
-- Mario Limonciello <email address hidden> Sat, 25 Sep 2010 18:24:06 -0500 | https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/647792 | CC-MAIN-2019-22 | refinedweb | 424 | 50.43 |
Run GNU/Linux? Be a real man and write your own OS. :)
(OK, I read that the Cassiopeia E-11 has 8Mb of memory... probably already more than enough to run Linux. Running GNU/Linux is another matter though.):zort: man,
that's pure evil.
So, IOCCC 2006 and Bitwise 2007 are both over. This leaves the ICFP Contest.
In other news, I see that the Tubes of the Internet are now filled with the latest antics of the wikiclowns:
I hope both projects will soon collapse under the sheer weight of their own stupidity, and we can have something better.
I.
Update:
Update #2: I see the solutions for Bitwise 2007 are up. Well, fzort and myself took part, and we got into the top 50... but I still wanted to know how well my solutions compare against the `official' solutions. So...
My approach:
Remove the outer layer of leaves, and repeat until the number of nodes ≤ 2; then the remaining node(s) will be the answer.
This will be O(n) except that in one place (marked /* ugh */) I do a linear search in the adjacency list for a node.
My code:
#include <stdio.h>
int main()
{
unsigned test_cases, n, e, i, j, k, frog, lvi;
static unsigned deg[1000], adjl[1000][1000], lv[2][1000], nlv[2];
scanf("%u", &test_cases);
while (test_cases--) {
scanf("%u", &n);
e = n - 1;
for (i = 0; i < n; ++i)
deg[i] = 0;
while (e--) {
scanf("%u %u", &i, &j);
adjl[i][deg[i]++] = j;
adjl[j][deg[j]++] = i;
}
nlv[0] = 0;
for (i = 0; i < n; ++i)
if (deg[i] < 2)
lv[0][nlv[0]++] = i;
frog = 0;
while (n > 2) {
nlv[!frog] = 0;
for (lvi = 0; lvi < nlv[frog]; ++lvi) {
--n;
i = lv[frog][lvi];
if (deg[i] == 0)
continue;
j = adjl[i][0];
/* ugh */
for (k = 0; adjl[j][k] != i; ++k);
adjl[j][k] = adjl[j][--deg[j]];
if (deg[j] == 1)
lv[!frog][nlv[!frog]++] = j;
}
frog = !frog;
}
for (lvi = 0; lvi < nlv[frog]; ++lvi)
printf("%u ", lv[frog][lvi]);
putchar('\n');
}
return 0;
}
Official solution:
Step 1: Identify any leaf of the tree.
Step 2: Find one of the farthest leaves from that leaf (and name it L1).
Step 3: Find one of the farthest leaves from L1, and name it L2.
Step 4: The middle point(s) of the path L1-L2 is the ‘center’ of the graph and the ideal location for Spiderman to buy a house for Mary-Jane.
Conclusion: my code seems to work, but I think the official solution makes more sense.
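The official recipe is easy to sketch; here is my own toy version (Python for brevity, tree given as an adjacency dict — not the contest code):

```python
from collections import deque

def bfs_farthest(adj, src):
    """Breadth-first search; return (a farthest node from src, parent map)."""
    parent = {src: None}
    queue = deque([src])
    last = src
    while queue:
        node = queue.popleft()
        last = node                      # last node dequeued is a farthest one
        for nxt in adj[node]:
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return last, parent

def tree_center(adj):
    l1, _ = bfs_farthest(adj, next(iter(adj)))   # steps 1+2: farthest leaf L1
    l2, parent = bfs_farthest(adj, l1)           # step 3: farthest leaf L2
    path = []                                    # step 4: middle of path L1-L2
    node = l2
    while node is not None:
        path.append(node)
        node = parent[node]
    n = len(path)
    return sorted({path[(n - 1) // 2], path[n // 2]})
```

For a path of even length there are two middle nodes, which is why the result is a set of one or two vertices.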
My approach:
For each denomination a[i], maintain 2 arrays, one for when a[i] is used, one for when a[i] is not used. Once each a[i] is processed, throw it away. In other words, successively find the number of coins needed to achieve a sum of 0, 1, 2, ..., S, using only { a[1] }, { a[1], a[2] }, { a[1], a[2], a[3] }, ...
(The "frog" thing in the code is totally unneeded, but I can't be bothered to clean it up.)
My code:
#include <limits.h>
#include <stdio.h>
int main()
{
unsigned test_cases, s, d, a, b, t, c1, c2, frog;
static unsigned coins[2][10001];
scanf("%u", &test_cases);
while (test_cases--) {
scanf("%u %u", &s, &d);
frog = 0;
coins[frog][0] = 0;
for (t = 1; t <= s; ++t)
coins[frog][t] = UINT_MAX;
while (d--) {
scanf("%u %u", &a, &b);
if (b == 0)
b = 1;
for (t = 0; t < a * b && t <= s; ++t)
coins[!frog][t] = UINT_MAX;
for (; t <= s; ++t) {
if (coins[frog][t - a * b] == UINT_MAX)
c1 = UINT_MAX;
else
c1 = coins[frog][t - a * b] + b;
if (coins[!frog][t - a] == UINT_MAX)
c2 = UINT_MAX;
else
c2 = coins[!frog][t - a] + 1;
coins[!frog][t] = c1 < c2 ? c1 : c2;
}
for (t = 0; t <= s; ++t)
if (coins[frog][t] < coins[!frog][t])
coins[!frog][t] = coins[frog][t];
frog = !frog;
}
if (coins[frog][s] == UINT_MAX)
printf("-1\n");
else
printf("%u\n", coins[frog][s]);
}
return 0;
}
Official solution:
n(S) = min(∀i, V[i])
where, V[i] = min { n(S - a[i] * (b[i] + j)) + (b[i] + j) }, ∀j, j ∈ [0, b[i])
Initialization Step:
n(x) = 0, if x = 0
n(x) = ∞, if x != 0
Conclusion: I don't know what the official solution is saying. :|
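For what it's worth, a brute-force restatement of the problem itself (my own sketch, handy for checking either DP on small cases): each denomination a comes with a minimum count b — if you use it at all, you must use at least b coins of it, with b = 0 treated as 1, as in the C code above:

```python
from functools import lru_cache

def min_coins(total, denoms):
    """denoms is a tuple of (a, b) pairs; return the minimum number of
    coins summing to total, or None if it cannot be done."""
    INF = float("inf")

    @lru_cache(maxsize=None)
    def best(s, i):
        if s == 0:
            return 0
        if i == len(denoms):
            return INF
        a, b = denoms[i]
        b = max(b, 1)
        r = best(s, i + 1)               # denomination i unused
        k = b
        while a * k <= s:                # or used with count k >= b
            r = min(r, k + best(s - a * k, i + 1))
            k += 1
        return r

    r = best(total, 0)
    return None if r == INF else r
```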
My approach:
Consider powers g, g^2, g^3, ..., of a generator g of the multiplicative group A = { 1, 2, ..., p - 1 }. For each possible order d | p - 1 of a group element, there are φ(d) elements with this order -- all the g^(i(p-1)/d) where i is coprime with d. Thus
S = Σ[d|p-1] dφ(d)
To efficiently compute this function, factorize p - 1 into powers of distinct primes: p - 1 = q[1]^k[1] q[2]^k[2] ... q[m]^k[m], where all q[.] are distinct and k[.] > 0. Then any factor of p - 1 can be represented uniquely as q[1]^i[1] q[2]^i[2] ... q[m]^i[m] where 0 ≤ i[v] ≤ k[v] for all v. Now φ(q[1]^i[1] q[2]^i[2] ... q[m]^i[m]) = φ(q[1]^i[1])φ(q[2]^i[2])...φ(q[m]^i[m]), so after playing some games with the distributive law,
S = Π[1≤v≤m] Σ[0≤i[v]≤k[v]] q[v]^i[v]φ(q[v]^i[v])
  = Π[1≤v≤m] (1 + Σ[1≤i≤k[v]] q[v]^i . (q[v]^(i-1)) . (q[v] - 1))
(The function prime_power_order(.) probably needs a renaming.)
My code:
#include <limits.h>
#include <stdio.h>
static unsigned long long prime_power_order(unsigned q, unsigned k)
{
/* find sum of orders for a cyclic group of order q^k */
unsigned i;
unsigned long long qq = 1, s = 1;
for (i = 1; i <= k; ++i) {
s += qq * q * qq * (q - 1);
qq *= q;
}
return s;
}
int main()
{
unsigned test_cases;
unsigned long p, pp, w, k;
unsigned long long s;
scanf("%u", &test_cases);
while (test_cases--) {
scanf("%lu", &p);
s = 1;
pp = p - 1;
for (w = 2; (unsigned long long)w * w <= pp; ++w) {
if (pp % w != 0)
continue;
k = 0;
while (pp % w == 0) {
pp /= w;
++k;
}
s *= prime_power_order(w, k);
}
if (pp > 1)
s *= prime_power_order(pp, 1);
printf("%llu\n", s);
}
return 0;
}
Official solution:
Note that if n=Π[p|n] p^a then the sum is
Π[p|n] {1+p(p-1)+p^2(p^2-p)+p^3(p^3-p^2)+...+p^a(p^a-p^(a-1))}
which can be also written as
Π[p|n] (1-p+p^2-p^3+...-p^(2a-1)+p^(2a))
Conclusion: really the same solution, except it didn't occur to me to do the final simplification 1 + Σ[1≤i≤k] q^i . q^(i-1) . (q - 1) = 1 - q + q^2 - q^3 + ... + q^(2k).
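A quick numeric check (mine) that the two forms agree — the left side is the loop from prime_power_order() above, the right side the alternating sum:

```python
def sum_orders_factor(q, k):
    # 1 + sum over 1 <= i <= k of q^i * phi(q^i), as in prime_power_order()
    s, qq = 1, 1
    for _ in range(k):
        s += qq * q * qq * (q - 1)
        qq *= q
    return s

def alternating(q, k):
    # 1 - q + q^2 - q^3 + ... + q^(2k)
    return sum((-q) ** i for i in range(2 * k + 1))
```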
fzort solved this problem.
My approach:
I was intending to cast the problem as one of finding the maximum matching in a bipartite graph, but I was too exhausted. :/
Official solution:
The problem reduces to finding maximum cardinality matching in bipartite graphs.
Conclusion: boom.
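The reduction is standard; a tiny augmenting-path matcher (my sketch, not the contest code) looks like:

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm.  adj[u] lists the right-side
    vertices reachable from left vertex u.  Returns the matching size."""
    match = [-1] * n_right              # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # take v if free, or reroute its current partner
                if match[v] == -1 or try_augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))
```

O(V·E) is plenty for contest-sized inputs; Hopcroft-Karp would be the faster alternative.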
"Jihadi"? I want to seek out and physically harm the person who coined this stupid word. People who engage in jihads are either "jihadists" or "mujahidin". A "Jihadi" is a citizen of the hypothetical nation of Jihad.
The .tk domain for my Pax Neo-TeX doesn't forward URL paths -- zompower.tk forwards to fzort.org/bi/neo-tech/, but zompower.tk/vote.php won't forward to fzort.org/bi/neo-tech/vote.php. To work around this, I added some PHP at the top of the index page to redirect to the appropriate page if the referrer contains a path to some other page:
<?
$refr = $_SERVER['HTTP_REFERER'];
if (isset($refr)) {
$refrp = parse_url($refr);
$rhost = $refrp['host'];
$rpath = $refrp['path'];
if (($rhost === 'zompower.tk' ||
$rhost === '') &&
$rpath !== '' && $rpath !== '/' && $rpath !== '/index.php') {
$rfrag = $refrp['fragment'];
$rqy = $refrp['query'];
if (isset($rfrag) && $rfrag !== '')
$rfrag = '#' . $rfrag;
else if (isset($rqy) && $rqy !== '')
$rfrag = '#' . $rqy;
header("Location:" .
$rpath . $rfrag);
}
}
?>
The code also converts query strings (e.g. ?foo) into anchors (e.g. #foo).
This trick may be useful with other forwarding services and web sites! | http://www.advogato.org/person/bi/diary.html?start=68 | CC-MAIN-2013-48 | refinedweb | 1,380 | 73.98 |
Hey guys,
I've been plugging away at this and keep running into compiler errors or the program simply not functioning as intended.
I'm generating chains of numbers from file1 and comparing and checking them to file2. If the generated chain is within certain lengths then it's output to a file containing the newly generated strings.
For the record, file1 is 8k integers and file2 is 27mil integers.
I need to do this for 3 different mathematical formulas while still using the same table of numbers.
The integers are generally stored in a file in this format and are sorted numerically from low to high:
1 2 3 4 5 6 7 8
9 10 11 12 13 14 15 16
...
...
Although sometimes there are extra spaces because running Replace with Notepad++ across 27mil integers causes freezes.
Here's what I was thinking/trying:
I'm mostly using vectors here but not being able to easily search them is a pain, and doing a linear search on a 27mil element vector up to 10 times for each of the 8k elements in the original vector is... computer melting.

Code:
#include <iostream>
#include <fstream>
#include <algorithm>
#include <vector>
#include <string.h>
#include <stdlib.h>

using namespace std;

// reads the file and stores it in a vector
void Read(string filename, vector<long> &output);
// calculates all the sets of chains for the first equation
void Chain1(vector<long> &dataFile1, vector<long> &dataFile2);
// second set of equations for another output file
void Chain2(vector<long> &dataFile1, vector<long> &dataFile2);
// third set
void Chain3(vector<long> &dataFile1, vector<long> &dataFile2);

int main()
{
    vector<long> FromFile1;   // file1, 8k integers
    vector<long> FromFile2;   // file2, 27mil integers

    string file1 = "one.txt";
    string file2 = "two.txt";

    Read(file1, FromFile1);
    Read(file2, FromFile2);

    Chain1(FromFile1, FromFile2);
    Chain2(FromFile1, FromFile2);
    Chain3(FromFile1, FromFile2);
}

void Read(string filename, vector<long> &output)
{
    ifstream file(filename.c_str());
    vector<char *> dataFromFile;
    char *cstr;

    if (file.is_open()) {
        do {
            cstr = (char *)malloc(256);
            file.getline(cstr, 256);
            dataFromFile.push_back(cstr);
        } while (!file.eof());
        cout << "File successfully read" << endl;
    } else
        cout << "Failed to open file" << endl;

    vector<char *> tokens;
    char *pChar;
    int numberFromFile;

    for (size_t i = 0; i < dataFromFile.size(); i++) {
        pChar = strtok(dataFromFile[i], " ");
        while (pChar != NULL) {
            tokens.push_back(pChar);
            pChar = strtok(NULL, " ");
        }
        numberFromFile = atoi(tokens[0]);
        output.push_back(numberFromFile);
        tokens.clear();
    }

    // free the memory allocated earlier
    for (size_t i = 0; i < dataFromFile.size(); i++)
        free(dataFromFile[i]);

    // close the file
    file.close();
}

// function to create chains
void Chain1(vector<long> &dataFile1, vector<long> &dataFile2)
{
    // iterator ints (for loop is giving my compiler fits)
    size_t i = 0, g = 0, n = 0;
    // storage vector for the number chains as they're generated
    vector<long> chains;
    // the current number being tested for addition to the chain
    long testNumber = 0;
    // file to output chains to
    ofstream compChains("Chains1.txt");

    while (i < dataFile1.size()) {
        // start the chain with the current element of the smaller file
        testNumber = dataFile1[i];
        chains.push_back(testNumber);
        do {
            // simple equation to find the next possible number in the chain
            testNumber = testNumber * 2 + 1;
            // test whether the new number is in the larger list
            // (I know find doesn't work like this for vectors)
            if (testNumber == dataFile2[dataFile2.find(testNumber)]) {
                // add the newly generated number if it passes and iterate
                chains.push_back(testNumber);
                g++;
            } else {
                // new number isn't on the list; break out and check the chain
                break;
            }
            if (chains.size() >= 10)
                break;   // check the chain once it has 10 or more elements
        } while (g < dataFile2.size());

        // only chains of 6 numbers or more are output to a new file
        // and cout so I can check on progress
        if (chains.size() > 5) {
            cout << "Success!" << endl;
            for (n = 0; n < chains.size(); n++) {
                cout << chains[n] << ' ';
                compChains << chains[n] << ' ';
            }
            cout << endl;
            compChains << endl;
        }
        // clear chains and move on to the next element of dataFile1
        chains.clear();
        i++;
    }
}

void Chain2(vector<long> &dataFile1, vector<long> &dataFile2)
{
    // same format, just a different equation to generate the next number
}

void Chain3(vector<long> &dataFile1, vector<long> &dataFile2)
{
    // same format, just a different equation to generate the next number
}
So what kind of container should I use?
I only have ints going into the dataFile containers so this container doesn't need to be templated out with multiple members it just needs to be quickly accessed by index and by searching.
I've been wailing away at this for 3 days now and it either gives me compiler errors because xContainer can't compare int/iterator/whatever or it's not indexing and pulling the files correctly or something along those lines.
I could use some help. | http://forums.devshed.com/programming-42/trouble-comparing-elements-files-955445.html | CC-MAIN-2014-52 | refinedweb | 844 | 50.87 |
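To be concrete, the kind of lookup I think I need (a sketch, untested against the real data): keep the 27-mil table in a sorted vector and use std::binary_search instead of a linear scan, which drops each membership test to O(log n):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// O(log n) membership test; 'table' must already be sorted
// (the input files are, or call std::sort once after loading).
bool inTable(const std::vector<long>& table, long x) {
    return std::binary_search(table.begin(), table.end(), x);
}
```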
-- $Id: CHANGES,v 4.65 2021/11/26 21:07:21 tom Exp $

2021/11/26 (4.7t)
  - enable lint-library feature by default, rather than only if a lint program was found (report by Michael Zucchi).
  - updated configure macros
  - update config.guess, config.sub

2021/03/03 (4.7s)
  - ignore preprocessor output with zero line-numbers.
  - add some null-pointer checks in lintlibs.c, to work with compilers emitting unconventional preprocessor-lines.
  - add .c.i rule to makefile
  - updated configure macros

2021/01/10 (4.7r)
  - add -n and -N options (prompted by discussion with "Radisson97").
  - ignore -f and -O options if their values are respectively non-numeric or numeric, e.g., when a user confuses CFLAGS with CPPFLAGS.
  - sort usage-message.
  - modify configure script to error out if lex/yacc are not found (report by "Radisson97").
  - updated configure macros
  - update config.guess, config.sub

2020/10/11 (4.7q)
  - align manpage formatting with ded.
  - add configure-check for preprocessor -C option, absent in some c89/c99.
  - change lexer's keyword-matching to a lookup table, avoiding ambiguity.
  - drop Apollo extensions from lexer.
  - updated configure macros
  - update config.guess, config.sub

2020/07/16 (4.7p)
  - add keywords for gcc "additional floating types" (report by Aaron Sosnick).
  - modify testing makefile/scripts to support an external script which collects the warning messages from different implementations of yacc.
  - drop obsolete dist/MANIFEST rules from top-level makefile; the MANIFEST file has long been generated from an external script.
  - comment-out unterminated string example in testing/syntax.c, since newer gcc C preprocessor no longer ignores text which is ifdef'd out.
  - updated configure macros
  - update config.guess, config.sub

2015/07/05 (4.7m)
  - add --with-man2html option to configure script
  - update config.guess, config.sub/03 , Ubuntu #275248).
  - add configure --disable-leaks option.
  - use configure macro CF_XOPEN_SOURCE to make mkstemp() prototyped on Linux.
  - remove isascii() usage.
  - code cleanup, to remove K&R relics.
  - update config.guess, config.sub

2008/01/01 (4.7f)
  - add symbol for __gnuc_va_list
  - add some data for c99 syntax to test-cases, e.g., long long.
  - review/fix some additional places where need_temp() call is needed.
  - fix a reference to unallocated storage when reading from standard input (Fedora #315061).
  - minor updates to configure script macros
  - update mkdirs.sh (for parallel makes)
  - update config.guess, config.sub
  - rename install.sh to install-sh

2005/12/08 (4.7e)

2005/08/21 (4.7d)
  - modified configure script and makefile to work with cygwin
  - fix configure check for yacc errors broken by 4.7c changes.
  - change fixed buf[] in grammar.y to an allocated buffer temp_buf[].
  - eliminate fixed limit on number of -I options.
  - improve parsing for "asm", adding GCC's __asm__ keyword and modifying grammar to work with declarations such as
        extern int __asm__ (mkstemp, (char *__template), mkstemp64);
  - add config.guess, config.sub (needed for cross-compiling, etc).
  - add configure check for Intel compiler.
  - modify filename comparison for lint-library to ignore leading "./".
2004/03/09 (4.7b)
  - added new -X option to limit the levels of include-files from which an extern can come (Debian #235824).
  - added new -i option to support inline function prototypes (Debian #228801, patch by Kenneth Pronovici).

2003/04/05 (4.7a)
  - add definition of YYFLAG, to enable the error-reporting code with bison 1.875
  - add definition of YYSTYPE, to allow this to build with recent (aka "broken") versions of bison (Debian #166140, Lukas Geyer <lukas@debian.org>).
  - add gcc-specific __builtin_va_arg keyword (Debian #175862, Kenneth Pronovici <pronovic@debian.org>).
  - modify syntax.c to change token after "#endif" to a comment, thereby avoiding deprecation warning from gcc 3.2, which would cause "make check" to show unexpected results.
  - resync with version 4.7 at 2003/01/05
  - add gcc-specific __builtin_va_list keyword.

2002/02/25 (4.6e)
  - correct length allocated for filename in include_file(), which was not big enough if the $CPROTO environment variable was corrupted.  From report by <Sweasel18@aol.com>, using sharefuzz:
  - update configure.in to generate config.h directly using autoconf patch from
  - remove makefile rules that attempt to recreate the configure script.  As demonstrated in numerous packages, it always reflects poor design.
  - remove makefile rules to make shar files (comp.sources.misc is long gone).
  - stop using changequote(), workaround for bugs in autoconf 2.5x

2000/11/09 (4.7)
  - Report source file and line number in error messages in gcc-style format.

2000/08/10 (4.6d)
  - use newer versions of mkdirs.sh and install.sh
  - regenerate configure script with autoconf 2.13
  - restructure aclocal.m4
  - modify Makefile.in to allow $(bindir) and $(mandir) to be altered independently of $(prefix) and $(exec_prefix) (patch by Carsten Leonhardt <leo@debian.org>).

2000/07/08 (4.6c)
  - add a clause to handle "__extension__" before extern declarations.
    (report by Bob van der Poel <bvdpoel@uniserve.com>)

1999/12/27 (4.6b)
  - correct check for size of vec[] array in yaccExpected(), broken in 4.6a changes.  (report by Wolfgang Wander)

1999/12/19
  - add keywords "restrict", "_Bool", "_Complex", "_Imaginary" based on c9x draft.
  - add keywords "__restrict__" and "__restrict", for gcc.  (report by Wolfgang Wander <wwc@rentec.com>)

1999/12/14 (4.6a)
  - change vec[] array in yaccExpected() to be dynamically allocated.  It was a fixed-size (10 entries) array before.  Also fix problem reported by Wolfgang Wander <wwc@rentec.com>, which is that if the array were empty, it was passed to qsort() with a zero-size, causing a core dump.
  - add to syntax.c & corresponding test-cases the typedef/identifier example.

1999/10/21
  - allow identifiers to be the same as typedef names, handle this case:
        typedef int badStyle;
        void badFunc(int *badStyle) { }
    (reported by Paul Haas <paulh@iware.com>)

1999/01/03
  - add '__extension__' keyword

1998/01/21 (4.6)
  - Since cproto is no longer being distributed on USENET in shar file format, the patchlev.h and MANIFEST files have been removed.  The patchlevel number has been removed from the version number.
  - Moved files out of the porting directory into separate platform specific directories.  The emx directory has files used to compile using EMX.  The os2 directory has files used to compile on OS/2 using Watcom C/C++.  The win32 directory has files to compile on Windows 95 and Windows NT using Watcom C/C++.
  - correct order of include-path to keep standard include (e.g., /usr/include) at the end of the search list.
  - modified lint-library include-stack recovery to work on OSF/1.
  - supply default initializer for lint-library const data
  - corrected reset of __attribute__((noreturn))
  - added '__volatile', '__const__', '__inline' keywords to grammar to accommodate gcc.
  - modified configure script to add several development/debugging options (i.e., --with-trace, --with-dbmalloc, --with-dmalloc).
  - modified ifdefs to avoid using predefined 'unix' (not defined on AIX or CLIX)

1996/04/15 (Patchlevel 5)
  - corrected instance of fclose on a file pointer after pclose was done (found with Electric Fence).
  - corrected script make_bat.sh to prevent expansion of \n in argument-string.
  - Modified handling of lint library option to allow generation of lint libraries for ANSI compilers (set -a -l).
  - corrected a missing 'void' in parameter list
  - modified to allow compile/test with 'dmalloc' library in addition to 'dbmalloc'.
  - corrected memory leak in yyerror.c, and allocation-size for cpp command.
  - added keywords to work with gcc 2.7.0
  - corrected unresolved references to flush_varargs() when OPT_LINTLIBRARY is not defined

1995/08/24 (Patchlevel 4)
  - Added -S option to only output static declarations.
  - Fix: The configure script didn't replace @CFLAGS@, @CPPFLAGS@ and @LDFLAGS@ in the makefile generated from makefile.in.
  - Fix: The -a option generated incorrect function definitions.
  - update test-cases for the -f2 fix in patch 3.
  - remove dependency on GNU-make from makefile.in
  - corrected configuration script that tests whether yyerror can be extended (had cached wrong flag, preventing some configurations from being recognized).
  - added calls for 'flush_varargs()' to correct situation in lint-library output where VARARGS comments were not reset properly when a function was skipped.
  - improved the logic of 'strip_name()' (used to compute include-directives for the lint-library option) so that it recognizes the conventional include directory created by a GCC install.

1995/01/06 (Patchlevel 3)
  - check for a special case in lint-library generation, i.e., prototype arguments of the form "*()", which need more parentheses for SunOS's lint.
  - modified configure.in, aclocal.m4, makefile.in to work with autoconf 2.1 (also added install.sh - note that "install-sh" is not an MS-DOS-compatible filename).
  - derive the program name from the argv[0] entry, in case it's installed under a different name.
  - Fix: The -f2 option generated incorrect prototypes for functions that take a variable argument list.
  - use 'sed' rather than 'flip' to apply trailing CR's to MS-DOS scripts.

1994/10/26 (Patchlevel 2)
  - modified grammar to recognize C++ ref-variables (i.e., '&' as a prefix to identifiers at the top lexical level).  Lint libraries are formatted without this '&', since lint doesn't grok C++.  This corrects an error in cproto 4.0 which caused '&' characters to be suppressed altogether, e.g., when filtering with the "-t" option.
  - modified rules that generate MANIFEST to put auto-generated scripts there also, if they've been created (e.g., cd testing; make scripts).
  - modified makefile.in to have shar target for both GNU shar and Rick Salz's cshar utility.
  - use 'const' in yyerror.c, otherwise the error-reporting auto-configuration does not work.
  - don't use "#elif" preprocessor control; not all systems support it.

1994/10/25 (Patchlevel 1)
  - Added testing scripts for MS-DOS and VMS.
  - Added makefile for Borland C++ for OS/2.
  - Fix: When the -a, -t or -b options were used, '&' characters were stripped from the output files.
  - Fix: The system.h file should define EXIT_SUCCESS and EXIT_FAILURE regardless of the presence of <stdlib.h>.

1994/09/24 (Patchlevel 0, dickey)
  - corrected two malloc-defects in lint library generation (one place where generated parameter name was copied rather than allocated, and another memory leak).
  - corrected generation of lint library function body, to handle function pointers.
  - changed the implicit lint library function type from "" to "void", to avoid special-cases in the function-body generation.
  - added logic using 'NestedParams' to suppress prototype-arguments in lint library function-pointer arguments.
  - corrected lint-library function parameter derived from prototype "char [2]" (parameter name wasn't supplied).
  - added PRINTFLIKE and SCANFLIKE to the comments interpreted for the lint library translation.
  - modified "LINT_PREPRO" comment to pass-thru all comment text if no count is given.
  - added new comment keyword "LINT_SHADOWED" to generate "#undef symbol" before each function template in lint libraries (useful for processing files that define both macros and functions with the same names).
  - interpret GCC's __attribute__ noreturn and format for lint-library text (GCC 2.5.8 preprocessor passes these macros through, GCC 2.6 apparently does not).
  - treat carriage-return (^M) as whitespace where appropriate.
  - added configuration-test to avoid conflict with prototype for 'popen()'
  - added several function-pointer regression test-cases

Version 3

1994/08/31 (Patchlevel 9, dickey)
  - use 'autoconf' to make a UNIX configure-script.
  - recognize GCC extensions '__inline' and '__attribute__'
  - added ifdef OPT_LINTLIBRARY to allow compiling without the lint library code (saves about 4kb).
  - corrected some logic that made incorrect commenting for options -c -f1 (e.g., "void (*Sigdisp(int sig, void (*func)(int sig)))(int sig)").
  - corrected logic that macroizes (e.g., with P_) functions like 'Sigdisp' (it wasn't doing anything about the trailing "(int sig)").
  - corrected handling of K&R conversion of mixed-mode functions (K&R style with prototypes in arguments) to avoid losing comments.
  - modified logic for options -c -f2 so that cproto inserts a space before the beginning of a comment when it immediately follows an '*'.
  - enhanced error reporting with new module yyerror.c which (attempts to) report the expected token type/name when a syntax error is encountered.
  - modified the grammar.y file to try to recover from errors at the next semicolon (as well as the next right curly bracket).
  - modified to process lex/yacc files with gcc as C-preprocessor.
  - Added option "-O" to force redirection of errors without shell operations (useful for VAX/VMS and MS-DOS in particular).
  - added "\s" as a synonym for space in the format options (-P, -F, -C)
  - tested on Solaris with lex/yacc and flex 2.4.6 / bison 1.22 (SunOS explorer 5.3 Generic_101318-42 sun4m sparc; gcc 2.6.0)
  - tested on SunOS 4.1.1 with lex/yacc and flex 2.4.6 / bison 1.22 (SunOS calvin 4.1.1 1 sun4c)
  - tested on IRIX with lex/yacc (IRIX dbs3 5.2 02282015 IP19 mips)
  - tested on Linux 0.99.15 with lex/yacc and flex 2.4.6 / bison 1.22 / byacc 1.9
  - tested on MS-DOS with flex 2.37 / byacc 1.9 (built with turboc.mak).  (Bison uses too much stack space).
  - tested on VAX/VMS 6.1 with VAX-C 3.2 and flex 2.4.6 / bison 1.22.
  - moved non-UNIX files into 'porting' subdirectory.
  - added 'testing' subdirectory, for simple regression tests.
  - tested for memory leaks with dbmalloc (on Linux).

1993/06/09 (Patchlevel 8, internal: dickey/cthuang)
  - added files 'lintlibs.c' and 'strkey.c'
  - Allow dollar signs in identifiers
  - Defined FAIL, SUCCESS to use in 'exit()' calls (VMS is approximately the reverse of UNIX).
  - Added option "-o" to force redirection without shell operations (useful for VAX/VMS in particular).
  - Added options "-l" (lintlibrary format), "-T" (typedefs), "-x" (externs in include-files).
  - Added "-C" option to cpp-invocation (to support VARARGS-decoding for -l option).
  - Modified grammar.y so that if -T option is turned on, instances of untagged struct, union or enum are shown with the contents of the curly braces.
  - Modified lex.l so that it sets 'return_val' iff at least one return statement within curly braces returns an expression.  Use this to support -l option.
  - Modified semantic.c (for -l option) to put tabs after short names for better readability.  Also (only -l option), put a blank line before function definitions and typedefs.
  - Corrected lex.l so that it recognizes preprocessor lines within curly braces.
  - Modified 'gen_prototype()' to trim 'extern' and 'auto' keywords from the text (so that 'extern' is emitted in this function only if the -e option is specified).
    Do this to support -l option (and to correct normal usage, which implies that -e option is needed to put an 'extern' before declaration).
  - Modified semantic.c to use 'put_string()' and related procedures to simplify pretty-printing of lint-library text (mainly to control blank lines).  (See 'fmt_library()').
  - linted some xmalloc calls using ALLOC macro.

1993/06/01 (Patchlevel 7, cthuang)
  - Fix: The processing of string literals is now more robust.
  - Removed the -f4 option which generated prototypes like
        int main P_((int argc, char **argv));
    Use the -m option now to put a guard macro around the prototype parameter list.  Use the -m option together with -f3 (which is the default) to produce the same output as the old -f4 option.  The option to set the guard macro name is now -M.
  - Comments in prototype parameter lists are now disabled by default.  Use the -c option now to output these comments.
  - Can now process #include directives in which the file is specified with a #define macro.
  - Now does not convert function definitions that take the formal parameter va_alist from <varargs.h>.
  - Now recognizes the GNU C modifiers __const and __inline__.

Patchlevel 6
  - Fix: A function in lex.l exploited the ANSI C feature of concatenating string literals.  This prevented the module from being compiled with pre-ANSI C compilers.

Patchlevel 5
  - Fix: The -v option did not output declarations for function pointers.
  - Fix: String literals continued over more than one line messed up the line number count.
  - Fix: The program generated incorrect prototypes for functions that take a variable argument list using <varargs.h>.
  - Fix: When converting functions from the standard input, cproto generated no output if no functions needed to be converted.
  - Fix: Now does not output a warning if an untagged struct is found in a typedef declaration.
  - Added the -b option which rewrites function definition heads to include both old style and new style declarations separated by a conditional compilation directive.  For example, the program can generate
        #ifdef ANSI_FUNC
        int main (int argc, char *argv[])
        #else
        int main (argc, argv)
        int argc;
        char *argv[]
        #endif
        { }
    Added the -B option to set the preprocessor directive that appears at the beginning of such definitions.
  - Added the keyword "interrupt" to the set of type qualifiers when compiled on a UNIX system.
  - The MS-DOS version now recognizes the type modifiers introduced by Microsoft C/C++ 7.00.
  - Now recognizes ANSI C trigraphs (yuck!).
  - Now use "#if __STDC__" instead of "#if defined(__STDC__)".
  - GNU bison orders the y.tab.c sections differently than yacc, which resulted in references to variables before they were declared.  The grammar specification was modified to also be compatible with bison.

Patchlevel 4
  - Fix: A typedef name defined as a pointer to char, short or float was incorrectly promoted if it was used to specify a formal parameter.  For example, for the definition
        typedef char *caddr_t;
        int strlen (s)
        caddr_t s;
        { }
    cproto generated the incorrect prototype
        int strlen(int s);
  - Added implementation of the ANSI function tmpfile() for systems that don't have it.
  - If compiled with Microsoft C, cproto preprocesses its input by running the command "cl /E".  To eliminate the error messages when the file <malloc.h> is included, the program now recognizes the specifier _based(void).

Patchlevel 3
  - Fix: The program didn't generate prototypes for functions defined with the extern specifier.
  - Fix: The -c option didn't output a space before parameter names in generated prototypes.
  - Added the -E option to specify a particular C preprocessor to run or to stop the program from running the C preprocessor.
  - Added the -q option to stop the program from outputting error messages when it cannot read the file specified in an #include directive.
- Made the yacc specification compatible with UNIX SYSVR4 yacc. Patchlevel 2 - Fix: The function definition conversion may produce a mangled function definition if an #include directive appears before the function and no comments appear between the directive and the function. - Fix: The size of the buffer allocated for the C preprocessor command string did not include enough space for options set in the environment variable CPROTO. - Replaced the -n option with -c which disables all comments in the generated prototypes. - Replaced the enum's with #define constants to accommodate C compilers that don't like enumerators in constant expressions. Patchlevel 1 - Fix: The program was calling ftell() on an invalid FILE pointer. Patchlevel 0 - Added options to convert function definitions between the old style and ANSI C style. - Options can be specified from the environment variable CPROTO. - The MS-DOS version recognizes more Microsoft C and Borland C++ type modifiers (such as _cdecl, _far, _near). - Fix: Formal parameters specified with typedef names were not promoted. For example, for the definition typedef unsigned short ushort; void test (x) ushort x; { } cproto generated the incorrect prototype void test(ushort x); while the correct one is void test(int x); - Fix: Incorrect prototypes were generated for functions that returned function pointers. For example, cproto generated an incorrect prototype for the function definition void (*signal(int x, void (*func)(int y)))(int z) { } - Fix: Changed calls to memory allocation functions to abort the program if they fail. Version 2 Patchlevel 3 - Made cproto compatible with GNU flex. - After compiling with the preprocessor symbol TURBO_CPP defined, on MS-DOS systems, cproto will pipe its input through the Turbo C preprocessor. - Fix: Typedef names may now be omitted from typedef declarations. For example, every C compiler I tried accepts typedef int; and some even give warnings when encountering this statement. 
Patchlevel 2 - Cproto is now able to generate prototypes for functions defined in lex and yacc source files named on the command line. Lex and yacc source files are recognized by the .l or .y extension. - Fix: The memory allocated to the typedef symbol table was not being freed after scanning each source file. - Fix: Failure to reset a variable during error recovery caused segmentation faults. Patchlevel 1 - Fix: Cproto incorrectly generated the parameter "int ..." in prototypes of functions taking variable parameters. - Fix: Function definitions can now be followed by an optional semicolon. I found this feature in every C compiler I tried. Patchlevel 0 - Added formal parameter promotion. - Added prototype style that surrounds prototypes with a guard macro. - Handles C++ style comment //. - Nifty new way to set prototype output format. - Got rid of the shell wrapper used to pipe the input through the C preprocessor (cpp). - For the port to MS-DOS, I modified cproto to run without cpp, but since I didn't want to reimplement cpp, the program processes only the #include and #define directives and ignores all others. Macro names defined by the #define directive are treated like typedef names if they appear in declaration specifiers. Version 1 Patchlevel 3 - Fix: identical typedef names and struct tags should be allowed. For example: typedef struct egg_salad egg_salad; struct egg_salad { int mayo; }; void dine(egg_salad l) { } Patchlevel 2 - Fix: A typedef statement should allow a list of typedefs to be declared. Example: typedef int a, *b; - Fix: When run with the -v option on this input, cproto did not output a declaration for variable "b": char *a="one"; char *b="two"; - The options were renamed. Added new options that change the output format of the prototypes. Patchlevel 1 - Fix: Incorrect prototypes were produced for functions that take function pointer parameters or return a function pointer. 
For example, cproto produced an erroneous prototype for this function definition: void (*signal (sig, func))() int sig; void (*func)(); { /* stuff */ } - The lexical analyser now uses LEX. It should still be compatible with FLEX. | https://www.invisible-island.net/cproto/CHANGES.html | CC-MAIN-2022-21 | refinedweb | 3,797 | 51.24 |
Find Questions & Answers
Can't find what you're looking for? Visit the Questions & Answers page!
I am trying to integrate SAP with sfdc lightning connect and I am using ()
this url while configuring External data source. I got the below error -
Error received from the external system: 403: /IWFND/MED/170No service found for namespace , name SALESORDERXX, version 0001/SAP/SALESORDERXX000172A7A5E67491F18A9B97005056A216B820161108113533.0970000Run transaction /IWFND/ERROR_LOG on SAP.
Even when I tried to hit this url directly from the browser then also it is giving the same error.
And I am using the username and password when browser ask is the username generated by SAP ( started from P*******).
The error is talking about namespace. Am I using the right namespace with my request?
What is missed here ? Thanks in advance. | https://answers.sap.com/questions/53564/sfdc-lightning-connect-with-sap.html | CC-MAIN-2018-13 | refinedweb | 130 | 68.67 |
Hey all, i'm having a bit of trouble with this - it's a program that should display the factorials of all numbers from 0 to k in the range 1-10. here's what i keep getting:
Enter a number to see factorials for: 10
10! = 100
9! = 90
8! = 80
7! = 70
6! = 60
5! = 50
4! = 40
3! = 30
2! = 20
1! = 10
The factorial of 10 is 3628800
Press any key to continue . . .
...and here is what it needs to look like:
0! = 1
1! = 1
2! = 2
3! = 6
4! = 24
5! = 120
6! = 720
7! = 5040
8! = 40320
9! = 362880
10! = 3628800
Any help would be SO appreciated! Thanks all :)
#include<iostream> using namespace std; int factorial(int); void main() { int x; int factorial=1; cout << "Enter a number to see factorials for: "; cin >> x; cout << "\n"; for (int i = x; i; i--) { cout << i << "!"<< " = " << (i*x)<<endl; factorial *= i; } cout << "The factorial of "<<x<<" is "<<factorial<<endl; } int factorial(int n) { if (n <= 1) return 1; return n * factorial(n-1); } | https://www.daniweb.com/programming/software-development/threads/114750/factorial-help | CC-MAIN-2017-26 | refinedweb | 178 | 72.9 |
IEventCancelable
Link to ieventcancelable
This interface is extended by all Events that can be cancelled.
That means you can cancel them using CrT or check if they have been canceled.
Note that events that have been canceled before CrT receives them will not be checked by the handlers.
Also, if you register multiple handlers, and one of them cancels the event, the other CrT handlers will still receive it!
Importing the class
Link to importing-the-class
It might be required to import the class to avoid errors.
import crafttweaker.event.IEventCancelable;
What can be done with them?
Link to what-can-be-done-with-them
event.cancel();Method, returns void (nothing).
event.canceled;Getter, returns a bool.
event.canceled = true;Setter | https://docs.blamejared.com/1.12/en/Vanilla/Events/Events/IEventCancelable | CC-MAIN-2022-27 | refinedweb | 122 | 67.86 |
Working in Python, how can a method owned by a metaclass (metamethod) be retrieved through a class that it instantiated? In the following scenario, it's easy – just use
getattr
with_metaclass
class A(type):
class A(type):
"""a metaclass"""
def method(cls):
m = "this is a metamethod of '%s'"
print(m % cls.__name__)
class B(with_metaclass(A, object)):
"""a class"""
pass
B.method()
# prints: "this is a metamethod of 'B'"
'method'
dir(B)
super
class A(type):
"""a metaclass"""
def method(cls):
m = "this is a metamethod of '%s'"
print(m % cls.__name__)
class B(with_metaclass(A, object)):
"""a class"""
@classmethod
def method(cls):
super(B, cls).method()
B.method()
# raises: "AttributeError: 'super' object has no attribute 'method'"
As you put it, and discovered in practice,
A.method is not on B's lookup chain - The relation of classes and metaclasses is not one of inheritance - it is one of 'instances' as a class is an instance of the metaclass.
Python is a fine language in which it behaves in expected ways, with very few surprises - and it is the same in this circumstances: If we were dealing with 'ordinary' objects, your situation would be the same as having an instance
B of the class
A. And
B.method would be present in
B.__dict__ - a "super" call placed on a method defined for such an instance could never get to
A.method - it would actually yield an error. As
B is a class object,
super inside a method in B's
__dict__ is meaningful - but it will search on B's
__mro__ chain of classes (in this case,
(object,)) - and this is what you hit.
This situation should not happen often, and I don't even think it should happen at all; semantically it is very hard to exist any sense in a method that would be meaningful both as a metaclass method and as a method of the class itself. Moreover, if
method is not redefined in
B note it won't even be visible (nor callable) from B's instances.
Maybe your design should:
a. have a baseclass
Base, using your metaclass
A, that defines
method instead of having it defined in
A - and then define
class B(Base): instead
b. Or have the metaclass
A rather inject
method in each
class it creates, with code for that in it's
__init__ or
__new__ method - along:
def method(cls): m = "this is an injected method of '%s'" print(m % cls.__name__) class A(type): def __init__(cls, name, bases, dct): dct['method'] = classmethod(method)
This would be my preferred approach - but it does not allow
one to override this
method in a class that uses this metaclass -
without some extra logic in there, the approach above would rather override any such
method explicit in the body.
The simpler thing is to have a base class
Base as above, or to inject a method with a different name, like
base_method on the final class, and hardcode calls to it in any overriding
method:
class B(metaclass=A): @classmethod def method(cls): cls.base_method() ...
(Use an extra
if on the metaclass's
__init__ so that the default
method is aliased to
base_method )
What you literally asked for begins here
Now, if you really has a use case for methods in the metaclass to be called from the class, the "one and obvious" way is to simply hardcode the call, as it was done before the existence of
super
You can do either:
class B(metaclass=A): @classmethod def method(cls): A.method(cls) ...
Which is less dynamic, but less magic and more readable - or:
class B(metaclass=A): @classmethod def method(cls): cls.__class__.method(cls) ...
Which is more dynamic (the
__class__ attribute works for
cls just like it would work in the case
B was just some instance of
A like my example in the second paragraph:
B.__class__ is A)
In both cases, you can guard yourself against calling a non existing
method in the metaclass with a simple
if hasattr(cls.__class__, "method"): ... | https://codedump.io/share/DMROlpR7AsFE/1/how-to-retrieve-a-metaclass-method | CC-MAIN-2017-51 | refinedweb | 677 | 69.55 |
On Tue, Feb 22, 2011 at 08:18:21PM +0100, Stefan Behnel wrote: > W.. I can strip them out afterwards, but it helps me figure out what I've broken if I shift too much around at the same time. I don't know enough about Python's trace module to know if I can turn on tracing only for functions defined in a single module or not, since otherwise its hard for me to separate signal from noise. > Some of the log statements span more than one line, which makes it trickier > to strip them out with sed&friends (but backing out the initial changeset > would likely make it simple enough to remove the rest manually). Hmm, perhaps I'll condense the logging statements down onto one (long) line a piece, that will make it easy to comment/uncomment them with sed/emacs/etc. I suppose once Cython can compile the logging module we could leave them in with reduced overhead ;). > Also note that it's best to write runnable tests ("tests/run/"). The > tests in "tests/compile/" are only compiled and imported. See the > hacking guide in the wiki. I know you're not there yet with your > implementation, I'm just mentioning it. Thanks for the tip. >.. The CtxAttribute class is, as its docstring says, just a hook for its deepcopy method. With an alternative deepcopy implementation, CtxAttribute could be replaced with the standard `object`, so don't worry too much about its name at this point ;). >. > I also doubt that Cython allows you to call an attribute "cdef", you'll > need to change that. It seems to work for me: >>> import Cython.Compiler.Parsing as P >>> P.__file__ 'Cython/Compiler/Parsing.so' >>> c = P.CSource() >>> dir(c) [..., 'cdef', 'deepcopy', 'extern', 'name', 'namespace'] >>> c.cdef 0 >>> c.cdef = 1 >>> c.cdef 1 However, I agree that it's generally a bad idea to play around with keywords. I'll revert it to the wordier-but-less-confusing `cdef_flag`. --: <> | https://mail.python.org/pipermail/cython-devel/2011-February/000098.html | CC-MAIN-2014-10 | refinedweb | 331 | 74.69 |
I want to make a pretty simple game for the android and iPhone but I am not entirely sure where to start and what I need to know so could someone point me in the right direction? Thanks.
Want to make a game for android and iPhone. Where do I start?
#1 Members - Reputation: 101
Posted 22 February 2014 - 08:32 AM
#2 Members - Reputation: 1322
Posted 22 February 2014 - 10:15 AM
I want to make a pretty simple game for the android and iPhone but I am not entirely sure where to start and what I need to know so could someone point me in the right direction? Thanks.
Its simple.
Want to make games for android? Learn JAVA.
Want to make games for iPhone? Learn Objective-C.
"Don't gain the world and lose your soul. Wisdom is better than silver or gold." - Bob Marley
#3 Members - Reputation: -400
Posted 22 February 2014 - 10:50 AM
#4 Members - Reputation: 101
Posted 22 February 2014 - 11:07 AM
So C# would be the best to learn?
#5 Members - Reputation: 1322
Posted 22 February 2014 - 11:37 AM
So C# would be the best to learn?
C# is the best to learn if and only you use unity for game development.
"Don't gain the world and lose your soul. Wisdom is better than silver or gold." - Bob Marley
#6 Crossbones+ - Reputation: 4027
Posted 22 February 2014 - 11:38 AM
The problem with programming for both iPhone and Android is that Android uses open technologies, while iPhone uses apple's own breed of C, called Objective-C, that doesn't seem to be used anywhere outside of iThings.
HTML5 is a good choice, but even though it was thought as the new programming wonder, that initial hype has gone cold over problematic and incomplete implementations on most platforms; it's taking way too long to be fully supported. Still, I guess this is just a question of time.
There is the option of using Unity with C# and their JavaScript, it would enable you to export your game to both platforms, but Unity isn't free. The good news is that you can use a free version to learn, but it is missing some performance features and has an in-built ad that generates no revenue for you. If you want to break from the free version limitations, you'd need to pay a monthly subscription (Android and iPhone independently) or acquire the license (also separately, really expensive for an individual). It is worth mentioning that the Unity 2D systems are pretty new and I always avoid using brand new technology, I prefer waiting until I deem them as mature enough.
Also consider Gamesalad, Construc2, Stencyl, GameMaker and other similar, usually ignored, but one of these is what I probably would pick (personally) for mobile. They are cheaper than Unity, and you can make games just as fast. There is also, and other open source alternatives.
You could also use Objective-C, and then port to Android using Apportable, but I personally recommend Unity and those cheaper tools over this any day.
A last option that comes to mind (and really not suited to any beginners) is programming in C and keeping a clean and thin interface; just a little portion of the code would need changes when moving from Android NDK to the Objective-C iOS API (or vice versa).
The only thing I wouldn't recommend is using Flash or Java, since you'd need to rewrite your game completely in order to release for iThings.
Edited by dejaime, 22 February 2014 - 12:04 PM.
#7 Moderators - Reputation: 21080
Posted 22 February 2014 - 03:00 PM
If your only goal is to make simple games on the devices, and you don't care about programming, and you aren't asking about a career, you only want to build a game, then use one of the tools just mentioned.
There are many great games that have used GameSalad, GameMaker, and the rest.
If you are thinking about a long-term career in making games, then you will certainly need to learn much more. You won't be making AAA blockbusters with them, but for relatively simple games, or for some simple experimentation and some fun times, the little engines can be great fun.
#8 Members - Reputation: 106
Posted 22 February 2014 - 04:06 PM
Based on my experience, I would suggest that even if your goal is a long-term career in programming, definitely start off with GameMaker.
If you're a total beginner, it's a great place to start. It has a lot of "drop-and-drag" stuff (for people who don't know code yet), but if you want to tap into its full potential, learn GML (GameMaker's own little "programming language"). That's a good way to get acquainted with programming basics (variables, arrays, loops, etc). It is also an "object-oriented" system, so when you get into "real" programming languages you'll already understand some of the basic object-oriented concepts.
The other alternative (learning a "real" language from scratch) can be a total pain and hardly worth it for a simple game. GameMaker abstracts away a lot of the annoying little micro-details needed to do basic stuff like playing sounds, drawing graphics, etc. and in my experience with Java these things are unnecessarily overcomplicated. I think I've learned them pretty well, but if I had just gotten into Java with no previous experience, I think I would have given up a long time ago (lol). For example:
// GML: sound_play("boom");
This is by a longshot as easy as it gets. It's straightforward and the intent is clear: play the sound "boom" (lol).
// Try to do the same thing in Java: import javax.sound.sampled.*; import java.io.*; public class Sound extends Thread { public AudioInputStream audio_snd; public Clip audio_clip; public String address; public Sound(String address){ try { start(); this.address = address; audio_snd = AudioSystem.getAudioInputStream(new File(address)); AudioFormat audio_format=audio_snd.getFormat(); DataLine.Info info = new DataLine.Info(Clip.class, audio_snd.getFormat(), ((int)audio_snd.getFrameLength()*audio_format.getFrameSize())); audio_clip = (Clip) AudioSystem.getLine(info); audio_clip.open(audio_snd); } catch (Exception e) { e.printStackTrace(); } } public void playFromCurrentPosition(){ audio_clip.start(); } public void play(){ audio_clip.setFramePosition(0); audio_clip.start(); } } class soundPlayer { public static void main(String[] args){ Sound boom = new Sound("sounds\\boom.wav"); boom.play(); } }
It took me Ages to figure this out, and I still don't quite understand every bit of it. But I made it work; that's the main thing. But to get there, I had to learn not only the basics of Java but also how to work with all these other things (AudioInputStream, AudioFormat, DataLine, FloatControl, etc.), all just to play a sound. I've had similar experiences with the web (JavaScript can do a lot these days, but sound is still a little tricky), and I'm sure Objective-C, C# etc. have their fair share of extra work to manage the basics as well. And don't get me started on getting the AVD (android emulator for Java) working for the first time! : )
And besides, Game Maker has a lot of potential (and apparently even more since I last used it, to the point where it can supposedly create games for every platform known to man, including Android and "iThings" (love that btw)). I once developed an entire "console" type system - a program that ran my games from CDs (like the PS3 or or Xbox). It had built-in joystick support, two-player games, and all kinds of cool little extras. GM can also do multiplayer games, though I never experimented with that. And they have a free version, so you can try it out before you go and buy the full package. I could go on about this, but I think you get the idea. : )
Anyway, I hope this advice helps; I started off as a total noob, not knowing squat, and while I've still got a lot to learn, I've come a long way; and GM helped me get there.
Edited by ElGeeko7, 22 February 2014 - 04:09 PM.
#9 Members - Reputation: 1632
Posted 25 February 2014 - 11:31 PM
If you want cross-platform with the same codebase/project, you want something like Unity or GameMaker. Unity is free, but generally harder and more complex to learn. It is also really powerful and good for 3d. GameMaker Studio is much simpler, though it is quite powerful in it's own right. It is more meant for 2d though, and also much easier to learn. I use it a lot. The catch though is that you can't get a free version to export to mobile platforms, while with Unity you can.
There are other free solutions, but from what I've seen of them they either aren't as powerful, or are more difficult to work with, or are platform specific.
#10 Members - Reputation: 172
Posted 06 April 2014 - 06:17 PM
Interesting question, I had a similar question. I'm thinking of games that don't really use graphics or physics, such as my first app I created. I have a video if you follow the link in my signature. It's basically a quiz game, buttons and text, that's about it.
Will programs like Unity work for creating multi-platform apps like this, or is it really oriented for gaming?
Thank you in advance!
Please check out my website for more information on my available apps:
PopQuiz!, free trial on Windows Store:
PopQuiz!, on Google Play:
Debouncer Demo on Google Play: | http://www.gamedev.net/topic/653691-want-to-make-a-game-for-android-and-iphone-where-do-i-start/ | CC-MAIN-2014-42 | refinedweb | 1,612 | 61.67 |
Ok, This is driving me nuts, and searching google just leads me to dead ends so last resort to ask. I am try to create a file save system, but where the user can specify the name of the file. is this possible using ofstream? this wont work but an example of what I am trying to do
So is there a way to do something like that?So is there a way to do something like that?Code:#include <iostream> #include <fstream> using namespace std; int main() { string save = "data"; ofstream out ( + data +".txt" ); cin.get(); }
Thank you SO much | http://cboard.cprogramming.com/windows-programming/80896-save-system.html | CC-MAIN-2015-35 | refinedweb | 101 | 82.75 |
Subject: Re: [PATCH] uts: Don't randomize "struct uts_namespace". To: Linus Torvalds <torva...@linux-foundation.org>, Ken'ichi Ohmichi <oomi...@mxs.nes.nec.co.jp>, Masaki Tachibana <mas-tachib...@vf.jp.nec.com>, Kazuhito Hagio <k-ha...@ab.jp.nec.com> Cc: Kees Cook <keesc...@chromium.org>, Linux Kernel Mailing List <linux-ker...@vger.kernel.org> From: Tetsuo Handa <penguin-ker...@i-love.sakura.ne.jp> Date: Sat, 7 Jul 2018 08:10:08 +0900
Advertising
Hello Ken'ichi, I noticed that makedumpfile ( ) can no longer detect kernel version correctly because "struct uts_namespace" (which is exposed to userspace via vmcoreinfo) is subjected to randomization by GCC_PLUGIN_RANDSTRUCT kernel config option since 4.13. The code was introduced by below commit. commit bfc8fe181c822ad0d8495ceda3c7109a407192f0 Author: ken1_ohmichi <ken1_ohmichi> Date: Fri Dec 22 07:41:14 2006 +0000 linux-2.6.19 support. On linux-2.6.18 or former, the release information could be gotten from the symbol "system_utsname". But on linux-2.6.19, it can be done from the symbol "init_uts_ns". A new makedumpfile can get the release information from the existing symbol. Can you detect kernel version without using "struct uts_namespace" ? On 2018/07/07 1:11, Linus Torvalds wrote: > On Fri, Jul 6, 2018 at 3:07 AM Tetsuo Handa > <penguin-ker...@i-love.sakura.ne.jp> wrote: >> >> I noticed that makedumpfile utility is failing to check kernel version, for >> it depends on offset of "struct uts_namespace"->name being sizeof(int). > > For something like this, we fix makedumpfile instead. This is not a > "user program" using system calls etc, this is something that delves > into the kernel dump and tries to make sense of it. > > Where is the makedumpfile source code? What is it trying to do, and why? > > One option is to just say "hey, you can't make much sense of a > randomized kernel dump anyway, so don't even try". > > Linus > _______________________________________________ kexec mailing list kexec@lists.infradead.org | https://www.mail-archive.com/kexec@lists.infradead.org/msg20301.html | CC-MAIN-2018-30 | refinedweb | 322 | 60.51 |
Manages a collection of source files and derived data (ASTs, indexes), and provides language-aware features such as code completion. More...
#include <ClangdServer.h>
Manages a collection of source files and derived data (ASTs, indexes), and provides language-aware features such as code completion.
The primary client is ClangdLSPServer which exposes these features via the Language Server protocol. ClangdServer may also be embedded directly, though its API is not stable over time.
ClangdServer should be used from a single thread. Many potentially-slow operations have asynchronous APIs and deliver their results on another thread. Such operations support cancellation: if the caller sets up a cancelable context, many operations will notice cancellation and fail early. (ClangdLSPServer uses this to implement $/cancelRequest).
Definition at line 58 of file ClangdServer.h.
Creates a new ClangdServer instance.
ClangdServer uses CDB to obtain compilation arguments for parsing. Note that ClangdServer only obtains compilation arguments once for each newly added file (i.e., when processing the first call to addDocument) and reuses those arguments for subsequent reparses. However, ClangdServer will check whether the compilation arguments changed on calls to forceReparse().
Definition at line 172 of file ClangdServer.cpp.
Definition at line 233 of file ClangdServer.cpp.
References FeatureModules, and clang::clangd::Deadline::infinity().
Add a File to the list of tracked C++ files or update the contents if File is already tracked.
Also schedules parsing of the AST for it on a separate thread. When the parsing is complete, DiagConsumer passed in constructor will receive onDiagnosticsReady callback. Version identifies this snapshot and is propagated to ASTs, preambles, diagnostics etc built from it. If empty, a version number is generated.
Definition at line 247 of file ClangdServer.cpp.
Apply the code tweak with a specified ID.
Definition at line 646 of file ClangdServer.cpp.
References Action, and clang::clangd::trace::Metric::Counter.
Definition at line 1000 of file ClangdServer.cpp.
References clang::clangd::timeoutSeconds().
Run code completion for File at Pos.
This method should only be called for currently tracked files.
Definition at line 369 of file ClangdServer.cpp.
Creates a context provider that loads and installs config.
Errors in loading config are reported as diagnostics via Callbacks. (This is typically used as ClangdServer::Options::ContextProvider).
Definition at line 288 of file ClangdServer.cpp.
References clang::clangd::Context::clone(), and clang::clangd::Context::current().
Runs an arbitrary action that has access to the AST of the specified file.
The action will execute on one of ClangdServer's internal threads. The AST is only valid for the duration of the callback. As with other actions, the file must have been opened.
Definition at line 975 of file ClangdServer.cpp.
References Action, and Name.
Fetches diagnostics for the current version of the File.

This might fail if the server is busy (building a preamble) and would require a long time to prepare diagnostics. If it fails, clients should wait for onSemanticsMaybeChanged and then retry.
Definition at line 980 of file ClangdServer.cpp.
References Action, and Diags.
Get all document links in a file.
Definition at line 912 of file ClangdServer.cpp.
References Action, and clang::clangd::getDocumentLinks().
Retrieve the symbols within the specified file.
Definition at line 822 of file ClangdServer.cpp.
References Action, and clang::clangd::getDocumentSymbols().
Enumerate the code tweaks available to the user at a specified point.
Tweaks where Filter returns false will not be checked or included.
Definition at line 611 of file ClangdServer.cpp.
Gets the installed feature module of a given type, if any.
This exposes access the public interface of feature modules that have one.
Definition at line 190 of file ClangdServer.h.
References clang::clangd::FeatureModuleSet::get().
Definition at line 193 of file ClangdServer.h.
References clang::clangd::FeatureModuleSet::get().
Returns estimated memory usage and other statistics for each of the currently open files.
Overall memory usage of clangd may be significantly more than reported here, as this metric does not account (at least) for:
Definition at line 995 of file ClangdServer.cpp.
Get document highlights for a given position.
Definition at line 725 of file ClangdServer.cpp.
References Action, clang::clangd::findDocumentHighlights(), and Pos.
Get code hover for a given position.
Definition at line 737 of file ClangdServer.cpp.
References Action, FormatStyle(), and Pos.
Retrieve implementations for virtual method.
Definition at line 857 of file ClangdServer.cpp.
References Action, clang::clangd::findImplementations(), and Pos.
Retrieve locations for symbol references.
Definition at line 869 of file ClangdServer.cpp.
Retrieve symbols for types referenced at Pos.
Definition at line 846 of file ClangdServer.cpp.
References Action, clang::clangd::findType(), and Pos.
Retrieve ranges that can be used to fold code within the specified file.
Definition at line 834 of file ClangdServer.cpp.
References Action, and clang::clangd::getFoldingRanges().
Run formatting for the File with content Code.

If Rng is non-null, formats only that region.
Definition at line 463 of file ClangdServer.cpp.
References Code, getDraft(), clang::clangd::InvalidParams, and Range.
Run formatting after TriggerText was typed at Pos in File with content Code.
Definition at line 501 of file ClangdServer.cpp.
References Action, Code, getDraft(), clang::clangd::InvalidParams, Pos, and clang::clangd::positionToOffset().
Describe the AST subtree for a piece of code.
Definition at line 936 of file ClangdServer.cpp.
References Action, and Inputs.
Gets the contents of a currently tracked file.
Returns nullptr if the file isn't being tracked.
Definition at line 280 of file ClangdServer.cpp.
References clang::clangd::DraftStore::getDraft().
Referenced by formatFile(), and formatOnType().
Resolve incoming calls for a given call hierarchy item.
Definition at line 786 of file ClangdServer.cpp.
References clang::clangd::incomingCalls().
Resolve inlay hints for a given document.
Definition at line 795 of file ClangdServer.cpp.
Find declaration/definition locations of symbol at a specified position.
Definition at line 693 of file ClangdServer.cpp.
References Action, clang::clangd::locateSymbolAt(), and Pos.
Called when an event occurs for a watched file in the workspace.
Definition at line 806 of file ClangdServer.cpp.
Definition at line 154 of file ClangdServer.cpp.
Get information about call hierarchy for a given position.
Definition at line 775 of file ClangdServer.cpp.
References Action, Pos, and clang::clangd::prepareCallHierarchy().
Test the validity of a rename operation.
If NewName is provided, it performs a name validation.
Definition at line 529 of file ClangdServer.cpp.
References Action, Pos, clang::clangd::rename(), and Results.
Builds a nested representation of memory used by components.
Definition at line 1038 of file ClangdServer.cpp.
References clang::clangd::MemoryTree::child().
Remove
File from list of tracked files, schedule a request to free resources associated with it.
Pending diagnostics for closed files may not be delivered, even if requested with WantDiags::Auto or WantDiags::Yes. An empty set of diagnostics will be delivered, with Version = "".
Definition at line 364 of file ClangdServer.cpp.
References clang::clangd::DraftStore::removeDraft().
Rename all occurrences of the symbol at the
Pos in
File to
NewName.
If WantFormat is false, the final TextEdit will be not formatted, embedders could use this method to get all occurrences of the symbol (e.g. highlighting them in prepare stage).
Definition at line 555 of file ClangdServer.cpp.
Requests a reparse of currently opened files using their latest source.
This will typically only rebuild if something other than the source has changed (e.g. the CDB yields different flags, or files included in the preamble have been modified).
Definition at line 270 of file ClangdServer.cpp.
Resolve type hierarchy item in the given direction.
Definition at line 765 of file ClangdServer.cpp.
References clang::clangd::resolveTypeHierarchy().
Definition at line 924 of file ClangdServer.cpp.
References Action, and clang::clangd::getSemanticHighlightings().
Get semantic ranges around a specified position in a file.
Definition at line 893 of file ClangdServer.cpp.
Provide signature help for
File at
Pos.
This method should only be called for tracked files.
Definition at line 434 of file ClangdServer.cpp.
Switch to a corresponding source file when given a header file, and vice versa.
Definition at line 705 of file ClangdServer.cpp.
References Action, clang::clangd::getCorrespondingHeaderOrSource(), and clang::clangd::ThreadsafeFS::view().
Get symbol info for given position.
Clangd extension - not part of official LSP.
Definition at line 881 of file ClangdServer.cpp.
References Action, clang::clangd::getSymbolInfo(), and Pos.
Get information about type hierarchy for a given position.
Definition at line 751 of file ClangdServer.cpp.
References Action, clang::clangd::getTypeHierarchy(), and Pos.
Retrieve the top symbols from the workspace matching a query.
Definition at line 811 of file ClangdServer.cpp. | https://clang.llvm.org/extra/doxygen/classclang_1_1clangd_1_1ClangdServer.html | CC-MAIN-2022-27 | refinedweb | 1,426 | 53.98 |
Old is 300
Thoughts on these Fabrixquare chelseas?
This really is the fuccboi general huh.
Looks nice. Asks have a few Chelseas that looks kinda nice. Will they just fall apart in a month if i use them for about 2km of walking per week.
>>9463992
Fuck typo. I mean asos
>>9463992
I bought some from Asos and they were sized really weirdly and disgusting leather so I returned them. Pretty cheap though.
W2c good t-shirts for layering?
>>9463824
>200$ for fake Trickers that look like rubber boots.
is there any good stores around stl?
Wanna buy a pair of sneakers and have the option between leather soles and rubber soles. Whats the difference between the two?
I'm not entirely sure how to go about this, but it seemed like something /fa/ would know.
What is the style of overall-type pants thing in this picture called? Specifically the kind that up past your waist like this. I searched, but I'm an idiot and don't know shit, so if you guys could help out I'd really appreciate it.
>>9465080
high-waisted overalls? trying googling that, you might find something similar
should I leave my pants tucked or untucked? can I only tuck with skinny jeans? it comes untucked pretty easily
>>9465114
def untucked. roll, pinroll, or stack instead
>>9465114
That looks fucking stupid. Untuck or cuff/pinroll.
>>9465065
if you have leather soles and it's wet or snowy outside you're gonna get fucked up
>>9465125
whys that? what kind of damage can I be expecting?
>>9465114
looks weird. untucked
also lacewrapping is for meme shoes or combat boots only
What's /fa/'s opinion on Muji? I have one local and I may drop in.
>>9465132
a broken neck when you slip and fall
probably mess up the sole too if that's your concern
>>9465143
its like uniqlo with better quality IMO
>>9465132
depends on how tall you are and how long the fall distance is when you slip and get destroyed like bambi on ice
also I'm pretty sure a leather outsole won't last as long as rubber if you live in a place where rain and snow is common
just do like I did, go talk to my local cobbler, he'll have all the answers you want
Okay so what are the best CP alternatives. Don´t say Stans or that Kent Wang crap. Help a brother out
>>9465392
royal republiq
>>9465400
thanks, I´ll check it out. any experience quality-wise?
>>9465392
svensson
>>9465392
Adidas released the rod laver, the original shoe common projects are based on
>>9465431
yea I looked into that. toebox looks kinda funky though
>>9465429
lookin good. all the versions I dig are sold out though. thanks for the tip
>>9465420
nope sry | https://4archive.org/board/fa/thread/9463824/fuccboi-general | CC-MAIN-2017-51 | refinedweb | 471 | 83.76 |
As previously established, a union of the form
union some_union {
type_a member_a;
type_b member_b;
...
};
int strict_aliasing_example(int *i, float *f)
{
*i = 1;
*f = 1.0;
return (*i);
}
int strict_aliasing_example(int *i, float *f)
{
*i = 1;
*f = 1.0;
return (1);
}
*f
*i
float
int
int breaking_example(void)
{
union {
int i;
float f;
} fi;
return (strict_aliasing_example(&fi.i, &fi.f));
}
fi.i
fi.f
Starting with your example:
int strict_aliasing_example(int *i, float *f) { *i = 1; *f = 1.0; return (*i); }
Let's first acknowledge that, in the absence of any unions, this would violate the strict aliasing rule if
i and
f both point to the same object; assuming the object has no effective type, then
*i = 1 sets the effective type to
int and
*f = 1.0 then sets it to
float, and the final
return (*i) then accesses an object with effective type of
float via an lvalue of type
int, which is clearly not allowed.
The question is about whether this would still amount to a strict-aliasing violation if both
i and
f point to members of the same union. On union member access via the "." member access operator, the specification says (6.5.2.3):
A postfix expression followed by the . operator and an identifier designates a member of a structure or union object. The value is that of the named member (95) and is an lvalue if the first expression is an lvalue.
The footnote 95 referred to in above says:
If the member used to read.
This is clearly intended to allow type punning via a union, but it should be noted that (1) footnotes are non-normative, that is, they are not supposed to proscribe behaviour, but rather they should clarify the intention of some part of the text in accordance with the rest of the specification, and (2) this allowance for type punning via a union is made only for access via the union member access operator, which means that your example stores via a pointer to a non-existing union member, and thereby commits a strict aliasing violation since it accesses the member that is active using an lvalue of unsuitable type. (I might add that I can not see how the footnote describes behavior that is otherwise inherent in the specification - that is, it seems to break the ISO rule of not proscribing behaviour; nothing else in the specification seems to make any allowance for type punning via a union).
With this in mind, your example clearly violates the strict aliasing rule if
f and
i point to the same object (or indeed, if they point to overlapping objects).
There is often confusion caused by another part of the specification, however, also.
Although this does not apply to your example since there is no common initial sequence, some people read this as being a general rule for governing type punning; they believe that it implies that it should be possible to use type punning (or at least a limited form of it, based on a common initial sequence) using two pointers to different union members whenever the complete union declaration is visible (since that is what it says in the paragraph quoted above). However, I would point out that the paragraph above still only applies to union member access via the "." operator. The problem with reconciling this understanding is, in that case, that the complete union declaration must anyway be visible, since otherwise you would not be able to refer to the union members. It is this glitch in the wording that, I think, makes some people believe the common-initial-sequence exception is intended to apply globally, not just for member access via the "." operator, as an exception to the strict aliasing rule; and, having come to this conclusion, a reader might then (incorrectly) interpret the footnote regarding type punning to apply globally also.
(Incidentally, I am aware of several compilers that do not implement the "global common initial sequence" rule - I assume that their authors do not interpret the specification to imply this rule. I am not aware of any compilers which implement the "global common initial sequence" rule while not also allowing arbitrary type punning).
At this point you could well question how reading a non-active union member via the member-access operator doesn't violate strict aliasing, if doing the same via a pointer does so. This is again an area where the specification is somewhat hazy; the key is in deciding which lvalue is responsible for the access. For instance, if a union object
u has a member
a and I read it via the expression
u.a, then we could interpret this as either an access of the member object (
a) or as merely an access of the union object (
u). In the latter case, there is no aliasing violation since it is specifically allowed to access an object (i.e. the active member object) via an lvalue of aggregate type containing a suitable member (6.5¶7). Indeed, the definition of the member access operator in 6.5.2.3 does support this interpretation, if somewhat weakly: the value is that of the named member - while it is potentially an lvalue, it is not necessary to access the object referred to by that lvalue in order to obtain the value of the member, and so strict aliasing violation is avoided.
(To me it seems under-specified, generally, just when an object has "its stored value accessed ... by an lvalue expression" as per 6.5¶7; we can of course make a reasonable determination for ourselves, but then we must be careful to allow for type-punning via unions as per above, or otherwise be willing to disregard footnote 95. Despite the often unnecessary verbiage, the specification is sometimes lacking in necessary detail).
Arguments about union semantics invariably refer to DR 236 at some point. I would note that: | https://codedump.io/share/kmizaz4u7DGk/1/is-the-strict-aliasing-rule-incorrectly-specified | CC-MAIN-2017-04 | refinedweb | 987 | 55.17 |
I am trying to find the midpoint between 2 vertices (vectors).
It seems that mathutils used to have a function MidpointVecs, but now this does not exisr anymore?
I use Blender 2.5 and higher.
Richard
Midpoint of 2 vectors
Scripting in Blender with Python, and working on the API
Moderators: jesterKing, stiv
4 posts • Page 1 of 1
- Posts: 7
- Joined: Mon Dec 24, 2012 12:55 pm
- Location: Breda, the Netherlands
Do it yourself:
cu
Mr.Yeah
Code: Select all
def MidpointVecs(vec1, vec2):
vec = vec1 + vec2
vec = vec / 2
return vec
cu
Mr.Yeah
- Posts: 7
- Joined: Mon Dec 24, 2012 12:55 pm
- Location: Breda, the Netherlands
4 posts • Page 1 of 1
Who is online
Users browsing this forum: No registered users and 2 guests | https://www.blender.org/forum/viewtopic.php?t=25993&view=next | CC-MAIN-2016-18 | refinedweb | 130 | 78.89 |
Don't mind me much, but I was going through mark ups.
Was there a rule about using the text in copy.txt as is, w/o changing the order? Many have break this rule, and the winner Candygirl is one of them.
Also, some added content too, not just mark up. Sadly, Pikacsu81 did so. I think there was a rule about it too.
Again, don't mind me much. I may have understood wrong about the rules.
It's not reflected in the rules, this resolving you talk about.
The rules () state:
Copy is supplied, which can be re-arranged as needed, however no additions or omissions are allowed. [...] Doing so will disqualify the offending contestant's entry!
If you want to "play" the rules, I can easily do so too.
In the rules, there are conflicting statements with what you said (about decorative) and
Any mark-up may be added to the copy
which let's you add...
Well, when I see this, I think of SVG. For both decorative purpose allowed, and added mark up allowed.
Yet, the rules state:
No SVG.
SVG decoration is no worse nor better than adding symbols in the mark up or content with CSS content. Yet I love my SVG better!
As I've said before, don't mind me much. I know it was hard for the staff, and each and everyone of the contestants can interpret a rule according to its morning coffee strength. :lol:
But we should be honest about it so that future contests will not have the same situations stand in the way.
You can't alter the text, but you can rearrange as much as you want.
Yes exactly
Yes, Μitică as you will have noticed several entries rearranged large 'text blocks' or sections, which was permissible. For example:
HTML/CSS [h3]Once you've decided on a design, we will convert that design to HTML and CSS. HTML is the language used on the Internet to mark up websites, whereas CSS is used to define how the website looks.Plus one or two added or removed or forgot main 'copy.txt' as in 'textual words', which strictly speaking wasn't allowed. Obviously 'title' attributes were allowed.
Hmmm... I must've missed that bit about rearranging.
How about added content:
Copy is supplied, which can be re-arranged as needed, however no additions or omissions are allowed.This includes using CSS to permanently style copy away (eg. by using display:none; ). Doing so will disqualify the offending contestant's entry!
Some winning entiries have :before and :after adding content, others have extra content added directly in the mark up.
Don't fret we dealt with it as we deemed appropriate.
JUDGEcoder
I'm not.
Just some inconsistencies.
Candygirl has this:
h1:before {
content:"\\2653";
and some more like this:
#twitter a:before {
content:"t";
When discussing the rules:
Originally Posted by YuriKolovskywhat about text that can be added with CSS "content:"?Originally Posted by ScallioXTXAdding text is adding text, the means doesn't matter. So, no.
Pikacsu87 has this:
<span class="minifish8" title="My name is Twitty.">
<span class="fishbody">•</span>
<span class="fishtail">•</span>
<span class="fishtailin">•</span>
<span class="fishtailend">•</span>
<span class="fisheyebg">•</span>
<span class="fisheye">•</span>
<span class="fisheyeref">•</span>
</span>
and when talking about the rules:
Originally Posted by YuriKolovskyIs it allowed to edit the text to add harmless humor?
Originally Posted by xhtmlcoderYou can add comments to the CSS /* I am a comment */ and HTML source code <!-- Hello World --> like in the 'terminal emulation' example.
Though NOT really any additional hard-copy text other than in that seen in the *.txt file.
The post explaining the rules stated this:
Previously I examined the design part of the entries, and pointed out to some very good ones, that I liked very much. I wanted to also see who's mark up and/or CSS stands out, and the above are just some thoughts I had looking at the pages. That's all. No other hidden purpose. And don't blame me, I didn't make the rules!
Yes, you made some interesting observations and we thank you for your in-depth interest in the competition entries and their code, etc. It shows you have enjoyed yourself studying how different people tackled the subject matter.
Luckily the Judges took such things into consideration.
I am sure you have your favourites for semantic markup are there any of them you particularity though did very well or liked on the markup front?
Yes I did.It would be poor taste to discuss mine, but others have had some interesting approaches.
scout1idf is one of the few that had correctly wrapped embedded style in an XHTML doc:
<style type="text/css">
/*<![CDATA[*/
zcorpan made a show off leaving out from his mark up (optionally, as the specs say) elements like html, head, body, putting only the HTML (probably we should get used to this no version) DTD:
<!doctype html>
thierry-koblentz fine use of namespaces:
<html xmlns="" xmlns:
and there is definitely more to find later in its CSS and mark up how he uses this. Like I gave Pikacsu87 1st place (in my book) for design, I give thierry-koblentz 1st place for interesting mark up.
These are just a few things about the mark up, probably there is much more to be noted. Tomorrow I'll probably post something about the CSS code in the designs, things I find interesting.
Yes, I knew Simon would use minimised markup even his design was 'minimalist' though I think he intended to do some use of SVG fonts (or something cunning) but got sidetracked.
Hm. I was not here during the month of Feb so I missed most of that thread about the rules but... Noonope's comments and quotes bother me.
If anyone had bothered to let ME know that added content via CSS was not allowed, I would not have used such techniques in my own example. To have examples for a contest and then say "but you can't do that" is AT BEST confusing and at worst, unfair.
I also had added a legend, because a form deserves a fieldset and a fieldset demands a legend... which should not be empty of text.
In any case, I guess I'll leave it at that.
Hi poes,
I'm a little in the wild here. Which one is yours?
I also added a fieldset. And a legend. You could have added titles. Or comments. Those are not the ones questioned.
On the other hand, if someone had bothered to tell ALL the contestants that adding chars in content (like • or }) is OK, even though the rules said NO, or that you can use CSS content to add "t" or some other chars, even though the rules said NO, that would've been fair. For ALL the contestants.
As of consequence, some added extra content to help them gain an advantage. It helped them or not, it's not relevant.
I'm also a little confused when you say you are bothered by "Noonope's comments and quotes". Do you feel I did a bad thing?
Now that you mention it... "unfair" is quite right... especially considering:
It's unfair specifically because it was not treated the same way as other "disallowed" content. Had the contest been fair, the additional content should have been stripped out, too, and those submissions judged sans-augmentation rather than just being a deduction of points.
Just my 2 cents... I happened to break the "no content" rule, too... so I don't feel burned by it at all, but I feel it was a silly rule. The contest runners should think about their rules carefully for future contests... don't say something is "not allowed" if it's merely a negative... say it's "frowned upon".
In all, I feel like the rules were too restrictive in some areas and too loose in others, and ultimately forced people to go make their own fonts in order to place better. Typography flexibility in CSS3 is not about making your own fonts to circumvent the use of graphics... that's in my opinion not the spirit of the contest, and those who did that shouldn't have been awarded points for it... only a few entries that I can think of used typography well, the way it was intended - mine (in my logo and tag line) Hueij's (logo and titles) and Yuri's (throughout his design, minus the fb and twitter logos which I feel were a hack). The point of the contest (in my opinion) was to see how attractive a website could be created sans-images. An image of a fish made using fonts is still an image. Get me? Again, not complaining... I just feel the spirit of the contest was lost in focus on markup and circumvention of the rules with the single hole allowed.
I also feel that more weight should have been put on aesthetics... a "markup" contest where design has some weight has not nearly as much real-world significance as a "design" contest where markup has some weight. I think it's a tragedy that some of the best designed submissions were not in the top 5 - bad markup or not... if they followed the spirit of the contest and didn't completely hack things up, they deserved to place. Also, I think it's a tragedy that Pikacsu placed... even though I feel it's the best designed entry in the whole contest and is my favorite overall, it circumvented the rules and didn't follow the spirit of the contest, ultimately using images in a no-images contest.
Again, just my 2 cents...
Cheers.
Your confusing everyone now :SThe initial rules might not have been 100% clear, but they got cleared up.
Due to the fact that it was me who asked all those content related questions, If you continue reading they eventually got resolved, allowing to add "decorative content" like for example a large secondary letter behind the first letter of a paragraph would be allowed, also the rules applied to html, so I played it safe and added all the "only decorative" content with CSS, which won't affect the search engine indexing in any way and leaves the copy.txt intact.
Also like me, english is not your first language, so make sure to check, check and double check the slightest misconceptions there might be with the rules involved, you will be surprised by how many problems will disappear.
@Stomme poes wb! yes the CSS "content:" was allowed, so it's all fair. Also your comment about fieldset's in your entry and the lack of it in the links above, revealed it for me
+1 Using custom fonts and symbols (or even divs as shapes to create imagery) like SVG is against the spirit of the contest. The "no SVG" rule reinforces that. The judges should have been less strict about judging based on a formula, and more open-minded about what was the spirit of the contest.
But as I saw it, it was a markup/design quiz that wanted to avoid images (which SVG is part of, images as in custom content that is not part of the existing content), but still allow styling of existing content with CSS.
I agree that rules need a couple more days of refining and yes the rule clarification was never added to the front post.
[OT]
which is why I don't drink coffee :)[/OT]
I'm not the one confusing anybody, you are. You say that there are rules and amendments to the rules hidden in the posts following the rules, that don't reflect in the rules.
As for SVG, your every day char, like the letter A, is nothing more than a SVG. SVG is not image, though you may draw this conclusion.
SVG is an image, it's a vector image.
I didn't write the rules (none of the Judges did). I know there was conflicting statements in there (I stated that several dozen times during the running of the competition) that the wording should have been better worded.
The three key points were fairly evenly distributed. The 'Attractiveness' had a lot more weight than 'HTML' as it happened. Though you'd need high in both; as all in the top 10 were very close.
The [Judges] were open minded or at least tolerated a lot of mistakes due to the confusion at the start.
To Err Is Human, To Forgive Divine.
If this thread gets locked soon don't be surprised.
To be honest I am not going to waste my time re-listening to how the rules were 'misinterpreted' or not.
All I will say is next time the Rules will be worded clearer as a result of the previous feedback we already have in the archives.
It need not be dragged back up as now we are just 'flogging a dead horse' purely for the sake of debate in this thread. | https://www.sitepoint.com/community/t/spf-pure-html-css-comp-questions-and-discussions/80678 | CC-MAIN-2015-48 | refinedweb | 2,196 | 72.26 |
8 years later,
"BigInt has been shipped in Chrome and is underway in Node, Firefox, and Safari."
BigInt: Arbitrary precision integers in JavaScript:
BigInt is a new primitive that provides a way to represent whole numbers larger than 2^53, which is the largest number Javascript can reliably represent with the Number primitive.
But the tragic historical fact that until now, all Javascript numbers were floats means that it's impossible to implement this backward-compatibly without significant syntactic warts:
Many (all?) other dynamically typed programming languages which have multiple numeric types implement a numeric tower. This forms an ordering between types -- on the built-in numeric types, when an operator is used with operands from two types, the greater type is chosen as the domain, and the "less general" operand is cast to the "more general" type. Unfortunately, as the previous example shows, there is no "more general" type between arbitrary integers and double-precision floats. The typical resolution, then, is to take floats as the "more general" type.
Silently losing precision sometimes may be a problem, but in most dynamically typed programming languages which provide integers and floats, integers are written like 1 and floats are written like 1.0. It's possible to scan code for operations which may introduce floating point precision by looking for a decimal point. JavaScript exacerbates the scope of losing precision by making the unfortunate decision that a simple literal like 1 is a float. So, if mixed-precision were allowed, an innocent calculation such as
2n ** 53n + 1 would produce the float 2 ** 53 -- defeating the core functionality of this feature.
To avoid this problem, this proposal bans implicit coercions between Numbers and BigInts, including operations which are mixed type.
1n + 1 throws a TypeError. So does passing 1n as an argument into any JavaScript standard library function or Web API which expects a Number. Instead, to convert between types, an explicit call to Number() or BigInt() needs to be made to decide which domain to operate in. 0 === 0n returns false.
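All of which is straightforward to verify in a console; a quick sketch of the rules above:

```javascript
// Mixed BigInt/Number arithmetic throws instead of silently coercing.
let threw = false;
try {
  1n + 1;
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw); // true

// Explicit conversion picks the domain to operate in.
console.log(BigInt(1) + 1n);      // 2n
console.log(Number(2n ** 53n));   // 9007199254740992

// Loose equality still compares values across the two types;
// strict equality does not, since the types differ.
console.log(0 == 0n);   // true
console.log(0 === 0n);  // false
```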
That's all well and good, but obviously the burning questions that I want the answers to are, "What is MOST-POSITIVE-BIGNUM, and how long does it take the Javascript console to print it?"
I wasn't able to figure out which of the several BigInt Javascript packages out there most closely resemble this spec, nor was I able to figure out which of them is the one being used by Chrome. But I assume that they are all representing the bits of the BigInt inside an array (whose cells are either uint32 or IEEE-754 floats) which means that the gating factor is the range of the integer representing the length of an array (which by ECMA is 2^32-1). So the answer is probably within spitting distance of either
As a first attempt, typing 10n ** 1223146n - 1n into Chrome's JS console made it go catatonic for a minute or so, but then it spat out a string of 1,223,147 nines. (Fortunately it truncated it rather than printing them all.) So that's bigger than the Explorer version!
> (2n ** 32n - 1n) ** (2n ** 32n - 1n)
× Uncaught RangeError: Maximum BigInt size exceeded
> (2n ** 32n - 1n) ** (2n ** 31n - 1n)
× Uncaught RangeError: Maximum BigInt size exceeded
> (2n ** 32n - 1n) ** (2n ** 30n - 1n)
... and maybe we have a winner, or at least a lower bound on the max?
That last one has made Chome sit there chewing four CPU-hours so far, so it's either still doing the exponentiation, or it's trying to grind it to decimal. If it ever comes back with a result, I'll update this post...
Update: After 87 hours, I stopped waiting.
Previously, previously, previously, previously, previously, previously.
Needless to say,
1n/3n is defined to give the wrong answer because it's 1957, and it will always be 1957. Whenever I get depressed because JS seems to have won despite its legion of horrors, I have to stop and remind myself, well, PHP could have won instead.
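For the record, what that looks like in the console (BigInt division truncates toward zero, exactly as C integer division does):

```javascript
// BigInt / is integer division, truncating toward zero.
console.log(1n / 3n);    // 0n
console.log(7n / 2n);    // 3n
console.log(-7n / 2n);   // -3n (truncation toward zero, not floor)
console.log(7n % 2n);    // 1n
```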
What does "it will always be 1957" mean?
It means FORTRAN, but not yet Lisp.
(Amusingly (I work in a Fortran shop), for Fortran itself the year is now well after 1957, and indeed it's well after 1970: the people who design Fortran understand that computers are not giant PDP-11s any more.)
Gotcha. Thank you!
For those of us who grew up writing PHP and are thus certifiably dumb, can you explain a little more what this means, and why 1n/3n is defined incorrectly?
Because 1/3 is not equal to 0.
Is this worse than C or is it 1957 most everywhere? Time to bring back Pascal?
(crunkit 600) $ cat /tmp/foo.c
#include <stdio.h>
int main() {
double r = 1/3;
printf ("%f\n", r);
}
(crunkit 601) $ gcc -Wall -o /tmp/foo /tmp/foo.c # Look Ma - No errors!
(crunkit 602) $ /tmp/foo
0.000000
There is a complicated relation between dates such that 1957 on big machines is the same as 1970 on small machines (which may be the same as 1980 on microprocessor-based machines). For C it will always be small-machine-1970.
I can't resist adding this story which happened to me, today. Intel have a C compiler which, they claim, gives very good performance on their hardware. One of the things it does is to cope with the fact that (because it is 1970) C doesn't have exponentiation by compiling various
pow* functions inline (and a lot of other numerical functions obviously). So the compiler knows the argument types of these things, since it's generating inline code for them. So, of course, if you add an incorrect declaration for these functions it's going to warn you about that, obviously, because it knows the real signature.
But, because it is 1970 and the compiler is running on a machine which can execute some tens of thousands of instructions a second and probably can't store that many error messages anyway, that would be way too expensive to do: instead the thing completely silently generates code which can never produce the right answer.
We paid money for this compiler, I'm told.
I assume that what you mean by "1n/3n is defined to give the wrong answer" is that it is insane for a basic mathematical operator to default to loss of precision, and I agree.
Which makes me wonder: are there any languages subsequent to Common Lisp that had a fundamental "ratio" type? Or did that concept fall completely out of favor?
For those not in the know: In Common Lisp the numeric tower went: fixnum, bignum, ratio, float, complex. The result of integer division that would result in a remainder was always promoted to the next type up that would preserve information, so (/ 4 6) → a ratio object with numerator 2, denominator 3. If you wanted to convert that to a float, and lose information in doing so, since 2/3 can't be represented as an IEEE-754 float, you'd do that explicitly.
(This required the compiler to be pretty good at static type inferencing to figure out when a piece of code was actually doing integer math all the way through, but, it was.)
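For comparison, the ratio idea is easy to sketch on top of the new BigInt: a toy, not Common Lisp's numeric tower, and every name below is invented for illustration.

```javascript
// Toy exact-rational type over BigInt. Always stored in lowest terms,
// with the sign carried on the numerator.
const gcd = (a, b) => {
  a = a < 0n ? -a : a;
  b = b < 0n ? -b : b;
  while (b) [a, b] = [b, a % b];
  return a;
};

function ratio(num, den) {
  if (den === 0n) throw new RangeError("zero denominator");
  if (den < 0n) { num = -num; den = -den; }
  const g = gcd(num, den) || 1n;
  return { num: num / g, den: den / g };
}

const div = (a, b) => ratio(a, b);                        // exact, never truncates
const mul = (r, s) => ratio(r.num * s.num, r.den * s.den);

const twoThirds = div(4n, 6n);
console.log(twoThirds);                     // { num: 2n, den: 3n }
console.log(mul(twoThirds, div(3n, 2n)));   // { num: 1n, den: 1n }
```

Note what falls out for free: (/ 4 6) reducing to 2/3 is just a gcd away once arbitrary-precision integers exist underneath.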
Yes, that's what I meant: given it was possible to coerce reasonable numerical performance from Lisp compilers in 1990 or before, and people have been working on compiler technology furiously since then I can't see why the default should not be 'correct where possible' with the compiler working out where 'fast' was also possible but not sacrificing correctness for that unless you told it that was OK. Unfortunately the compiler technology people have been working on since 1990 seems to have been 'compile code written for a giant PDP-11 really well for machines which look nothing like PDP-11s'.
I don't know of post-CL languages with (what I would call) good numerical systems unless you count the billion Scheme variants. I am sure there must be some, I just have not been paying attention.
(This is all just 'worse is better' of course, and I think that war was lost long ago.)
It tickles me to no end to give you the answer and await your reaction: yes, there is a language like that – and its name is Perl 6.
(In fact this point is used as a selling point.)
I once wrote a thing which not only debunked the silly 'Python is almost Lisp' thing that some famous Lisp hackers have claimed, but claimed in turn that if you want a language in the spirit of CL without actually being itself Lisp, then Perl is your best bet.
People from the language police visited my house shortly afterwards carrying cattle prods and all copies have been duly burnt.
I wrote a little while back,
But then someone reminded me that Emacs supposedly has lexical closures now and I don't even know what's real any more.
Languages with ratio-of-bignum types? The ones I can think of right now are Scheme, Haskell, and possibly Aldor. I'm actually surprised JavaScript didn't go that way, since any float can be represented precisely as a rational number.
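The claim that any finite float is exactly a rational is easy to check in Python, where `Fraction` and `as_integer_ratio` expose the exact stored value (the specific numbers below assume IEEE-754 doubles):

```python
from fractions import Fraction

# 0.1 cannot be stored exactly; the double you actually get is this rational:
print(Fraction(0.1))             # 3602879701896397/36028797018963968
print((0.1).as_integer_ratio())  # (3602879701896397, 36028797018963968)

# The denominator is a power of two, as it must be for a binary float:
print(36028797018963968 == 2**55)  # True
```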
Julia!
Kill me now
Almost All Real Numbers are Normal.
It seems like a sound argument to say that having accepted that we can only represent Almost None of the reals, it's a very small sacrifice to give up on a proportion of this already vanishingly small subset for convenience of implementation.
I confess that we can use the same logic to conclude things will be a lot easier if we get rid of zero but - unlike 2/3 - zero is an additive identity, and you probably want one of those in your system of arithmetic, so that's an argument for it to stay.
Is that still true of the computable numbers, though?
Mmmmm...it’s easy to get Mathematica to do rational arithmetic, but I suspect that’s not really what you’re looking for...
Algebra systems pretty much have to have rationals, and really need arbitrarily-large integers (and therefore rationals with arbitrarily large numerators & denominators). That is, as far as I know, why Lisps often have these things: most of the interesting algebra systems were written in Lisp.
As of earlier today, GNU Emacs also has bignums. No word on MOST-POSITIVE-BIGNUM yet.
OH MY GOD
I have C code now that generates what I think is Emacs's new MOST-POSITIVE-BIGNUM, but it needs 16 GB and that's all the RAM I've got, so I've yet to find the patience to add one to it and let it churn through swap forever to discover whether (1+ most-positive-bignum) is negative...
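For contrast: in a language whose integers are limited only by available memory (Python's ints, like Emacs 27's GMP-backed bignums), incrementing can never wrap to negative — there is simply no fixed most-positive value baked into the type. A quick sketch:

```python
# Arbitrary-precision integers never overflow or wrap:
x = 2**1000          # a 302-digit number
assert x + 1 > x     # incrementing can never go negative
print(len(str(x)))   # 302
```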
......So the way you indicate that a numeric literal is not a Number is by appending an "n"?
Yeah I guess the n is for bigiNt. Good thing Javascript is case sensitive!
Geek bites chicken, news at 11.
tags: mutants, perversions
I'll present my counterpoints to Tom and then shut up. If folks want the
--package option, as long as the warnings are in place, I won't fight any
more (just squirm a little).
Tom wrote:
> > First of all, this goes against JAX-RPC.
>
> This is an extension switch that in no way breaks JAX-RPC. By default we
> do everything JAX-RPC (vaguely) specifies.
It breaks JAX-RPC. Section 4.3.1: "However, the JAX-RPC 1.0 requires that
a namespace definition in a WSDL document MUST be mapped to a UNIQUE Java
package name."
>
> >?").
Simple! They're in different namespaces! Understand that and you're a
long way to understanding namespaces.
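The conflict-resolution point can be made concrete. Below is a deliberately simplified, hypothetical sketch of the kind of default namespace-to-package mapping a tool like WSDL2Java performs (the real tool handles far more cases — here we just reverse the host name and ignore path segments). Two types both named `Foo`, but in different namespaces, land in different Java packages, so there is no clash:

```python
from urllib.parse import urlparse

def ns_to_package(namespace: str) -> str:
    """Rough sketch: reverse the host name of the namespace URI."""
    host = urlparse(namespace).hostname or ""
    return ".".join(reversed(host.split(".")))

# Two "Foo" types from different namespaces get distinct packages:
print(ns_to_package("http://foo.com/bar"))    # com.foo
print(ns_to_package("http://test.foo.com"))   # com.foo.test
```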
>.
I think this is the essence of where we disagree. Allowing people to just
get their work done without understanding namespaces is like letting folks
program in Java without understanding packages (ie., putting everything in
the default, unnamed package). I believe this is a disservice to users.
Tom does not.
>
> >.
First of all, you DO NOT have to specify anything on the command line. If
no mapping is specified, it will create a package - in this case,
"com.foo". If we want tests that go to test.foo, we could simply create
the namespace "foo.test". If we don't want to change the namespace then,
yes, I agree that -N is a little less usable than -p. But you could also
use the NStoPkg.properties file, do the mapping ONCE, and still not have to
put anything on the command line.
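The one-time mapping referred to above lives in NStoPkg.properties, an ordinary Java properties file. An illustrative entry (the namespaces here are made up; note that Java properties syntax requires escaping the colon in the key):

```properties
# NStoPkg.properties — map each namespace to a Java package, once
http\://foo.com/bar=com.foo
http\://foo.com/test=test.foo
```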
>
>.
You have to know more to do the right thing, but we'll let you do the wrong
thing easily? Tsk tsk.
>
> I think this switch only enhances the usability of the tool and does not
> prevent the correct handling of namespaces.