Timeline ...
- 23:59 Changeset [3907] by
lang/perl/ShipIt-Step-CommitMessageWrap?: Checking in changes prior to ...
- 23:59 Changeset [3906] by
oops
- 23:57 Changeset [3905] by
lang/perl/ShipIt-Step-CommitMessageWrap?: added requires ShipIt? in ...
- 23:53 Changeset [3904] by
check for NULL values in sen_index_upd
- 23:51 Changeset [3903] by
make sure to export RC constants
- 23:36 WikiStart edited by
- add overlast, hajimehoshi, murky-satyr, mercysluck, MiCHiLU, m-takagi (diff)
- 23:29 Changeset [3902] by
lang/perl/Games-Go: Add bin/start_go.pl
- 21:17 Changeset [3901] by
lang/python/cidr-mobilejp: added new project cidr-mobilejp
- 21:13 Changeset [3900] by
clean
- 21:13 Changeset [3899] by
clean
- 20:48 Changeset [3898] by
change users table
- 20:43 Changeset [3897] by
add main layout
- 19:56 Changeset [3896] by
change user auth
- 19:52 Changeset [3895] by
change user auth
- 19:52 Changeset [3894] by
change user auth
- 19:51 Changeset [3893] by
change user auth
- 19:50 Changeset [3892] by
change user auth
- 19:50 Changeset [3891] by
change user auth
- 19:40 Changeset [3890] by
change user auth
- 17:44 Changeset [3889] by
lang/perl/misc/pmsetup: CodeRepos? friendly
- 17:29 Changeset [3888] by
lang/perl/ShipIt-Step-CommitMessageWrap?: Tagging version '0.03' using ...
- 17:29 Changeset [3887] by
lang/perl/ShipIt-Step-CommitMessageWrap?: Checking in changes prior to ...
- 17:28 Changeset [3886] by
lang/perl/ShipIt-Step-CommitMessageWrap?: tag_version method hook point is ...
- 17:21 Changeset [3885] by
lang/perl/ShipIt-Step-CommitMessageWrap?: Checking in changes prior to ...
- 17:15 Changeset [3884] by
lang/perl/ShipIt-Step-CommitMessageWrap?: oops
- 17:13 Changeset [3883] by
lang/perl/ShipIt-Step-CommitMessageWrap?: fixed UUV warnings
- 17:09 Changeset [3882] by
-
- 17:01 Changeset [3881] by
lang/perl/ShipIt-Step-CommitMessageWrap?: Changes date format changed
- 17:01 Changeset [3880] by
Tagging version '0.03' using shipit.
- 17:01 Changeset [3879] by
Checking in changes prior to tagging of version 0.03. Changelog diff ...
- 16:57 Changeset [3878] by
lang/perl/ShipIt-Step-CommitMessageWrap?: add podspell test word
- 16:55 Committers/yappo edited by
- (diff)
- 16:51 Changeset [3877] by
lang/perl/ShipIt-Step-CommitMessageWrap?: wrote code
- 16:49 Changeset [3876] by
Tagging version '0.04' using shipit.
- 16:49 Changeset [3875] by
Checking in changes prior to tagging of version 0.04. Changelog diff ...
- 16:44 Changeset [3874] by
r3859@mnk (orig r509): tokuhiro | 2007-08-08 09:25:37 -0700 fixed ...
- 16:44 Changeset [3873] by
r3651@mnk (orig r301): tokuhiro | 2007-03-04 18:28:16 -0800 updated ...
- 16:44 Changeset [3872] by
r3639@mnk (orig r289): tokuhiro | 2007-02-23 06:40:02 -0800 released ...
- 16:44 Changeset [3871] by
r3627@mnk (orig r277): tokuhiro | 2007-02-17 21:01:02 -0800 added ...
- 16:44 Changeset [3870] by
mkdir
- 16:39 Changeset [3869] by
mkdir tags
- 16:39 Changeset [3868] by
mkdir
- 15:59 Changeset [3867] by
Tagging version '0.09' using shipit.
- 15:59 Changeset [3866] by
Checking in changes prior to tagging of version 0.09. Changelog diff ...
- 15:56 Changeset [3865] by
svn move ...
- 15:53 Changeset [3864] by
lang/perl/ShipIt-Setup-CommitMessageWrap?: renamed
- 15:52 Changeset [3863] by
lang/perl/ShipIt-Setup-CommitMessageWrap?: s/Setup/Step/
- 15:51 Changeset [3862] by
Tagging version '0.08' using shipit.
- 15:51 Changeset [3861] by
Checking in changes prior to tagging of version 0.08. Changelog diff ...
- 15:50 Changeset [3860] by
WWW-MobileCarrierJP: renamed.
- 15:47 Changeset [3859] by
lang/javascript/bomb: mine sweeper by kanasansoft.
- 15:47 Changeset [3858] by
lang/perl/ShipIt-Setup-CommitMessageWrap?: import
- 15:18 Changeset [3857] by
lang/perl/misc/pmsetup: svn friendly
- 15:05 Changeset [3856] by
Tagging version '0.07' using shipit.
- 15:05 Changeset [3855] by
Checking in changes prior to tagging of version 0.07. Changelog diff ...
- 14:24 Changeset [3854] by
lang/java/lib2chj: category.toString == category name
- 12:47 Changeset [3853] by
lang/ruby/misc/vox-upload-photo.rb:
remove magic numbers
- 04:55 Changeset [3852] by
lang/ruby/misc/vox-upload-photo.rb:
mechanzie hack for ruby 1.8.4
- 04:50 Changeset [3851] by
lang/ruby/misc/vox-upload-photo.rb
Add
- 04:27 Changeset [3850] by
lang/ruby/ActionScriptPreprocessor: added a comment.
- 04:24 Changeset [3849] by
lang/ruby/ActionScriptPreprocessor: added SHA1 sample.
- 03:57 Changeset [3848] by
Tagging version '0.04' using shipit.
- 03:57 Changeset [3847] by
Checking in changes prior to tagging of version 0.04. Changelog diff ...
- 03:56 Changeset [3846] by
Tagging version '0.03' using shipit.
- 03:56 Changeset [3845] by
Checking in changes prior to tagging of version 0.03. Changelog diff ...
- 03:54 Changeset [3844] by
Language-Kemuri: initial import
- 03:54 Changeset [3843] by
added tags dir
- 03:48 Changeset [3842] by
mkdir
- 03:48 Changeset [3841] by
mkdir
- 03:43 Changeset [3840] by
Tagging version '0.10' using shipit.
- 03:43 Changeset [3839] by
Checking in changes prior to tagging of version 0.10. Changelog diff ...
- 03:38 Changeset [3838] by
Net-CIDR-MobileJP: don't depend to UNIVERSAL::require.
- 03:37 Changeset [3837] by
Net-CIDR-MobileJP: please run without YAML.pm
- 03:11 Changeset [3836] by
lang/perl/HTTP-MobileAgent?-Plugin-Locator: little bit configurable.
- 03:03 Changeset [3835] by
lang/perl/HTTP-MobileAgent?-Plugin-Locator: remove dependency to ...
- 02:30 Changeset [3834] by
Moxy: some changes.
- 02:24 Changeset [3833] by
lang/ruby/ActionScriptPreprocessor: fixed lexer
- 01:58 Changeset [3832] by
lang/ruby/ssb: (ssb 0.2) introduce rack
- 00:14 Changeset [3831] by
lang/ruby/ActionScriptPreprocessor: implemented #include directive
12/30/07:
- 23:39 Changeset [3830] by
lang/java/misc/maven2repo: Java Twitter update
- 22:12 Changeset [3829] by
lang/ruby/ActionScriptPreprocessor: recursive expansion
- 21:47 Changeset [3828] by
lang/ruby/ActionScriptPreprocessor: updated test.
- 21:42 Changeset [3827] by
lang/javascript/userscripts/voxeditingwithwysiwygorh.user.js:
Also add ...
- 21:29 Changeset [3826] by
Added a folder remotely
- 21:28 Changeset [3825] by
new project
- 21:14 Changeset [3824] by
lang/python/pytc: release version 0.3
- 21:12 Changeset [3823] by
lang/python/pytc: version 0.3
- 21:04 Changeset [3822] by
added TCBDB.range and TCBDB.rangefwm
- 20:55 Changeset [3821] by
lang/ruby/misc/update-userscript.org.rb:
the message position was wrong
- 20:54 Changeset [3820] by
lang/javascript/userscripts/voxeditingwithwysiwygorh.user.js:
embedding ...
- 20:20 Changeset [3819] by
changed PyString?_AS_STRING to PyString?_AsString
- 20:14 Changeset [3818] by
see (2007/01/04) version ...
- 20:10 Changeset [3817] by
see (2005/09/13)
- 19:55 Changeset [3816] by
Encode-JP-Mobile: ucm version convert-pictogram branch.
- 19:54 Changeset [3815] by
Encode-JP-Mobile: *.ucm version branch.
- 19:44 Changeset [3814] by
lang/javascript/userscripts/voxeditingwithwysiwygorh.user.js:
added a description
- 19:42 Changeset [3813] by
lang/javascript/userscripts/voxeditingwithwysiwygorh.user.js:
embedding ...
- 18:19 Changeset [3812] by
add user auth
- 18:16 Changeset [3811] by
lang/javascript/jsdeferred/trunk/binding/userscript.js,
...
- 18:14 Changeset [3810] by
lang/javascript/jsdeferred/trunk/binding/userscript.js:
adapted to GM_xhr
- 17:26 Changeset [3809] by
this versionup enables create object from id
- 14:52 Changeset [3808] by
reinstall timezone plugins
- 14:44 Changeset [3807] by
lang/ruby/misc/update-userscript.org.rb:
ARGF
- 14:41 Changeset [3806] by
misc/update-userscript.org.rb:
add
- 14:31 Changeset [3805] by
reinstall timezone plugins
- 14:13 Changeset [3804] by
delete timezone plugins
- 14:11 Changeset [3803] by
delete timezone plugins
- 13:34 Changeset [3802] by
start WikipeJaGo? project
- 11:21 Changeset [3801] by
Encode-JP-Mobile: refactoring.
- 11:05 Changeset [3800] by
r2003@mnk (orig r1539): miyagawa | 2007-11-14 13:33:21 -0800 stop ...
- 11:03 Changeset [3799] by
fixed an incorrect module name
- 10:37 Changeset [3798] by
lang/perl/Games-Go: Change method name. 'display' -> 'show'
- 07:58 Changeset [3797] by
lang/ruby/compface/trunk/README.ja: add.
- 07:35 Changeset [3796] by
lang/c/vmware-time/README.ja: pset.
- 07:26 Changeset [3795] by
lang/c/vmware-time: import.
- 07:10 Changeset [3794] by
force deletion
- 07:09 Changeset [3793] by
platform/nadoka: I have no idea. Where is the right place for this?
- 07:03 Changeset [3792] by
lang/ruby/misc/shystd: import.
- 07:00 Changeset [3791] by
lang/ruby/misc/equire: import.
- 06:59 Changeset [3790] by
lang/ruby/misc/dirhash: import.
- 06:56 Changeset [3789] by
lang/ruby/misc/chamber: import.
- 06:53 Changeset [3788] by
lang/ruby/compface: import.
- 06:49 Changeset [3787] by
lang/ruby/pcompress: import.
- 06:47 Changeset [3786] by
lang/ruby/tee: import.
- 06:35 Changeset [3785] by
lang/ruby/uuid: import.
- 06:26 Changeset [3784] by
lang/ruby/nadoka plugins: import.
- 05:55 Changeset [3783] by
dotfiles/emacs/shyouhei/.emacs.d/elisp/*.el: ...
- 05:39 Changeset [3782] by
dotfiles/emacs/shyouhei: imported.
- 05:09 Changeset [3781] by
dotfiles/zsh/shyouhei: imported.
- 04:55 Changeset [3780] by
dotfiles/screen/shyouhei: ditto
- 04:55 Changeset [3779] by
dotfiles/screen/shyouhei: ditto
- 04:54 Changeset [3778] by
dotfiles/screen/shyouhei: realized the dotfile itself can apparently go in directly, without a README or the like
- 04:43 Changeset [3777] by
dotfiles/screen/shyouhei-screen: add README
- 04:31 Changeset [3776] by
dotfiles/screen/shyouhei-screen: everyone's config is so simple
- 04:18 Changeset [3775] by
/websites/coderepos.org/trac/share/js/TracUtils.js: add icon for me.
- 04:06 Changeset [3774] by
lang/vim/hatena/plugin/hatena.vim: add g:chalice_curl_options. ...
- 03:40 Committers/shyouhei created by
-
- 03:30 Committers/motemen created by
-
- 03:29 Changeset [3773] by
websites/coderepos.org/trac/share/js/TracUtils.js: added icon
- 03:12 Changeset [3772] by
lang/vim/hatena: import initial version
- 02:00 Changeset [3771] by
lang/perl/Games-Go: Format for module template.
- 01:27 Changeset [3770] by
lang/perl/Games-Go: First version. Display board.
- 01:02 Changeset [3769] by
Encode-JP-Mobile: hmm... fix strange code :)
- 01:01 Changeset [3768] by
Fix typo in the Senna::Constants namespace
- 00:49 Changeset [3767] by
Initial commit for incomplete Senna rewrite. It's incomplete. Did I say it ...
- 00:47 Changeset [3766] by
Directory for svk import.
- 00:35 Changeset [3765] by
lang/perl/Encode-JP-Mobile: do not use `eval'
- 00:24 Changeset [3764] by
Encode-JP-Mobile: perl 5.10 friendly..
Source: http://coderepos.org/share/timeline?from=2008-01-01T20%3A54%3A03Z%2B0900&precision=second
From my understanding the way to add a wanted prefix to an element is to use @Namespace(reference="something"). But that sort of destroys the idea of having a prefix which is supposed to operate as an alias, not having to type the long URLs.
When I tried @Namespace(prefix="something") the generated XML contained an empty xmlns declaration as well.
My suggestion is that declaring @Namespace(prefix="something") means that only the prefix is added to the element name, no xmlns declaration.
If this is not possible due to some additional need to use prefix to set empty xmlns then I suggest that you introduce something like @Namespace(alias="something") that will only prepend the element.
Niall Gallagher
2011-04-08
This is the way it works now, if you have already added the URL then it does not get added again if its within scope.
Niall Gallagher
2011-04-28
You can do this already, if you declare only the namespace, it will inherit the prefix from any previous declaration.
Source: http://sourceforge.net/p/simple/feature-requests/16/
Locks, Actors, And STM In Pictures
All programs with concurrency have the same problem.
Your program uses some memory:
When your code is single-threaded, there's just one thread writing to memory. You are A-OK:
But if you have more than one thread, they could overwrite each other's changes!
You have three ways of dealing with this problem:
- Locks
- Actors
- Software Transactional Memory
I'll solve a classic concurrency problem all three ways and we can see which way is best. I'm solving the Dining Philosophers problem. If you don't know it, check out part 1 of this post!
Locks
When your code accesses some memory, you lock it up:
mutex == the lock.
critical section == the code locked with a mutex.
Now if a thread wants to run this code, he (or she) needs the key. So only one thread can run the code at a time:
Sweet! Only one thread can write to that memory at a time now. Problem solved! Right?
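A minimal sketch of that in plain Ruby, using the standard-library Mutex (the counter and iteration counts here are illustrative, not from the post):

```ruby
# Two threads increment a shared counter. The mutex makes each
# read-modify-write atomic, so no update is lost.
counter = 0
mutex = Mutex.new

threads = 2.times.map do
  Thread.new do
    10_000.times do
      mutex.synchronize { counter += 1 } # the critical section
    end
  end
end
threads.each(&:join)

puts counter # => 20000
```

Without the `synchronize` call, the read-modify-write in `counter += 1` can interleave between threads and updates get lost.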
Here's a Ruby implementation of the resource hierarchy solution.
Each Philosopher gets a left and a right fork (both forks are mutexes):
class Philosopher
  def initialize(name, left_fork, right_fork)
    @name = name
    @left_fork = left_fork
    @right_fork = right_fork
  end
Now we try to get the forks:
while true
  @left_fork.lock
  puts "Philosopher #@name has one fork..."
  if @right_fork.try_lock
    break
  else
    puts "Philosopher #@name cannot pickup second fork"
    @left_fork.unlock
  end
end
- A philosopher picks up fork 1. He waits till he has it (lock waits).
- He tries to pick up fork 2, but doesn't wait (try_lock doesn't wait).
- If he didn't get fork 2, he puts back fork 1 and tries again.
Full code here. Here's an implementation using a waiter instead.
Locks are super tricky to use. If you use locks, get ready for all sorts of subtle bugs that will make your threads deadlock or starve. This post talks about all the problems you could run into.
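The resource-hierarchy trick boils down to one rule: number the locks and always acquire the lower number first, so no cycle of waiting threads can ever form. A plain-Ruby sketch of that rule (the fork count and loop counts are made up for illustration):

```ruby
# Deadlock-free by lock ordering: every philosopher locks the
# lower-numbered fork first, so circular waiting is impossible.
forks = Array.new(3) { Mutex.new }

def eat(forks, left, right)
  first, second = [left, right].minmax # fixed global order
  forks[first].synchronize do
    forks[second].synchronize do
      # "eating" happens here
    end
  end
end

threads = 3.times.map do |i|
  Thread.new { 100.times { eat(forks, i, (i + 1) % forks.size) } }
end
threads.each(&:join)
puts "no deadlock"
```

Compare with the try-and-back-off loop above: ordering prevents deadlock outright, while `try_lock` merely recovers from contention (and can still starve).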
Actors
I love actors! You love actors! Actors are solitary and brooding. Every actor manages its own state:
Actors ask each other to do things by passing messages:
Actors never share state so they never need to compete for locks for access to shared data. If actors never block, you will never have deadlock! Actors are never shared between threads, so only one thread ever accesses the actor's state.
When you pass a message to an actor, it goes in his mailbox. The actor reads messages from his mailbox and does those tasks one at a time:
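Under the hood, an actor is roughly one thread plus one queue. A dependency-free sketch of the mailbox idea in plain Ruby (the class and message names are mine, not Celluloid's):

```ruby
# A tiny "actor": a single thread owns the state (@name) and drains
# a mailbox, handling one message at a time.
class TinyActor
  def initialize
    @mailbox = Queue.new
    @name = nil
    @thread = Thread.new do
      loop do
        msg = @mailbox.pop
        break if msg == :stop
        selector, arg, reply = msg
        result =
          case selector
          when :set_name then @name = arg
          when :get_name then @name
          end
        reply << result if reply # answer synchronous callers
      end
    end
  end

  def async(selector, arg = nil)  # fire-and-forget message
    @mailbox << [selector, arg, nil]
  end

  def call(selector, arg = nil)   # synchronous message
    reply = Queue.new
    @mailbox << [selector, arg, reply]
    reply.pop
  end

  def stop
    @mailbox << :stop
    @thread.join
  end
end

d = TinyActor.new
d.async(:set_name, "snowy")
puts d.call(:get_name) # => snowy
d.stop
```

Because only the actor's own thread ever touches `@name`, no lock is needed; the mailbox serializes all access.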
My favorite actor library for Ruby is Celluloid. Here's a simple actor in Celluloid:
class Dog
  include Celluloid

  def set_name(name)
    @name = name
  end

  def get_name
    @name
  end
end
See that include Celluloid? That's all it takes, and now every Dog is an actor!
> d = Dog.new
=> #<Celluloid::ActorProxy(Dog:0x3fe988c0d60c)>
> d.set_name "snowy"
=> "snowy"
Here we are telling the actor, d, to set its name to "snowy" synchronously. Here we instead pass it a message to set the name asynchronously:
d.async.set_name "snoopy"
=> nil
d.get_name
=> "snoopy"
Pretty cool. To solve the dining philosophers problem, we need to model the shared state using an actor. So we introduce a Waiter:
class Waiter
  include Celluloid

  FORK_FREE = 0
  FORK_USED = 1

  def initialize(forks)
    @philosophers = []
    @eating = []
    @forks = [FORK_FREE, FORK_FREE, FORK_FREE, FORK_FREE, FORK_FREE]
  end
end
The waiter is in charge of forks:
When a Philosopher gets hungry, he lets the waiter know by passing a message:
def think
  puts "#{name} is thinking"
  sleep(rand)
  puts "#{name} gets hungry"
  waiter.async.hungry(Actor.current)
end
When the waiter gets the message, he sees if the forks are available.
- If they are available, he will mark them as 'in use' and send the philosopher a message to eat.
- If they are in use, he tells the philosopher to keep thinking.
def hungry(philosopher)
  pos = @philosophers.index(philosopher)
  left_pos = pos
  right_pos = (pos + 1) % @forks.size

  if @forks[left_pos] == FORK_FREE && @forks[right_pos] == FORK_FREE
    @forks[left_pos] = FORK_USED
    @forks[right_pos] = FORK_USED
    @eating << philosopher
    philosopher.async.eat
  else
    philosopher.async.think
  end
end
Full code here. If you want to see what this looks like using locks instead, look here.
The shared state is the forks, and only one thread (the waiter) is managing the shared state. Problem solved! Thanks Actors!
Software Transactional Memory
I'm going to use Haskell for this section, because it has a very good implementation of STM.
STM is very easy to use. It's just like transactions in databases! For example, here's how you pick up two forks atomically:
atomically $ do
  leftFork <- takeFork left
  rightFork <- takeFork right
That's it! No need to mess around with locks or message passing. Here's how STM works:
- You make a variable that will contain the shared state. In Haskell this variable is called a TVar. You can write to a TVar using writeTVar or read it using readTVar. A transaction deals with reading and writing TVars.
- When a transaction is run in a thread, Haskell creates a transaction log that is for that thread only.
- Whenever a block of shared memory is written to (with writeTVar), the address of the TVar and its new value are written into the log instead of to the TVar.
- Whenever a block is read (using readTVar):
  - first the thread searches the log for the value (in case the TVar was written by an earlier call to writeTVar).
  - if nothing is found, the value is read from the TVar itself, and the TVar and the value read are recorded in the log.
In the meantime, other threads could be running their own atomic blocks, and modifying the same TVar.
When the atomically block finishes running, the log gets validated. Here's how validation works: we check each readTVar recorded in the log and make sure it matches the value in the real TVar. If they match, the validation succeeds! And we write the new value from the transaction log into the TVar.
If validation fails, we delete the transaction log and run the block all over again:
Since we're using Haskell, we can guarantee that the block had no side-effects...i.e. we can run it over and over and it will always return the same result!
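The validate-and-retry loop can be mimicked in plain Ruby with a version-stamped box. This is a toy illustration of the commit-time validation idea, not how GHC's STM is actually implemented:

```ruby
# Toy optimistic "TVar": each commit bumps a version number. A
# transaction runs on a snapshot, then commits only if nobody else
# committed in the meantime; otherwise it re-runs, like an STM retry.
class ToyTVar
  attr_reader :value, :version

  def initialize(value)
    @value = value
    @version = 0
    @lock = Mutex.new # protects only the short commit step
  end

  def atomically
    loop do
      seen_version = @version
      new_value = yield @value          # run the block on a snapshot
      committed = @lock.synchronize do
        if @version == seen_version     # validation
          @value = new_value
          @version += 1
          true
        else
          false                         # someone else won; retry
        end
      end
      return new_value if committed
    end
  end
end

box = ToyTVar.new(0)
threads = 4.times.map do
  Thread.new { 1000.times { box.atomically { |v| v + 1 } } }
end
threads.each(&:join)
puts box.value # => 4000
```

Real STM tracks every TVar read in the log and can block instead of spinning, but the shape is the same: run freely, validate at commit, retry on conflict.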
Haskell also has TMVars, which are similar. A TMVar either holds a value or is empty:
You can put a value in a TMVar using putTMVar, or take the value in the TMVar using takeTMVar.
- If you try to put a value in a TMVar and there's something there already, putTMVar will block until it is empty.
- If you try to take a value from a TMVar and it is empty, takeTMVar will block until there's something in there.
Our forks are TMVars. Here are all the fork-related functions:
newFork :: Int -> IO Fork
newFork i = newTMVarIO i

takeFork :: Fork -> STM Int
takeFork fork = takeTMVar fork

releaseFork :: Int -> Fork -> STM ()
releaseFork i fork = putTMVar fork i
A philosopher picks up the two forks:
(leftNum, rightNum) <- atomically $ do
  leftNum <- takeFork left
  rightNum <- takeFork right
  return (leftNum, rightNum)
He eats for a bit:
putStrLn $ printf "%s got forks %d and %d, now eating" name leftNum rightNum
delay <- randomRIO (1,3)
threadDelay (delay * 1000000)
putStrLn (name ++ " is done eating. Going back to thinking.")
And puts the forks back.
atomically $ do
  releaseFork leftNum left
  releaseFork rightNum right
Actors require you to restructure your whole program. STM is easier to use – you just specify what parts should run atomically. Clojure and Haskell both have core support for STM. It's also available as modules for a lot of other languages: C, Python, Scala, JavaScript etc etc.
I'm pretty excited to see STM used more!
Conclusion
Locks
- available in most languages
- give you fine-grained control over your code
- Complicated to use. Your code will have subtle deadlock / starvation issues. You probably shouldn't use locks.
Actors
- No shared state, so writing thread-safe code is a breeze
- No locks, so no deadlock unless your actors block
- All your code needs to use actors and message passing, so you may need to restructure your code
STM
- Very easy to use, don't need to restructure code
- No locks, so no deadlock
- Good performance (threads spend less time idling)
References
- Dining Philosopher implementations in different languages
- Simon Peyton Jones explains STM
- Carl Hewitt explains Actors
- On Akka, Erlang, and ATOM by tarcieri
For more drawings, check out Functors, Applicatives, and Monads in pictures.
Source: http://adit.io/posts/2013-05-15-Locks,-Actors,-And-STM-In-Pictures.html
I’ve been playing with the Kinect SDK Beta for the past few days and have noticed a few projects on CodePlex worth checking out. I decided to blog about them to help spread awareness. If you want to learn more about the Kinect SDK, then check out my “Busy Developer’s Guide to the Kinect SDK Beta”.
Let’s get started:
KinectContrib is a set of VS2010 Templates that will help you get started building a Kinect project very quickly. Once you have it installed you will have the option to select the following Templates:
Please note that KinectContrib requires the Kinect for Windows SDK beta to be installed.
Kinect Templates after installing the Template Pack.
The reference to Microsoft.Research.Kinect is added automatically.
Here is a sample of the code for the MainWindow.xaml in the “Video” template:
<Window x:Class="KinectVideoApplication1.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Grid>
        <Image Name="videoImage"/>
    </Grid>
</Window>
and MainWindow.xaml.cs
using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using Microsoft.Research.Kinect.Nui;
namespace KinectVideoApplication1
{
public partial class MainWindow : Window
{
//Instantiate the Kinect runtime. Required to initialize the device.
//IMPORTANT NOTE: You can pass the device ID here, in case more than one Kinect device is connected.
Runtime runtime = new Runtime();
public MainWindow()
{
InitializeComponent();
//Runtime initialization is handled when the window is opened. When the window
//is closed, the runtime MUST be uninitialized.
this.Loaded += new RoutedEventHandler(MainWindow_Loaded);
this.Unloaded += new RoutedEventHandler(MainWindow_Unloaded);
//Handle the content obtained from the video camera, once received.
runtime.VideoFrameReady += new EventHandler<Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs>(runtime_VideoFrameReady);
}
void MainWindow_Unloaded(object sender, RoutedEventArgs e)
{
runtime.Uninitialize();
}
void MainWindow_Loaded(object sender, RoutedEventArgs e)
{
//Since only a color video stream is needed, RuntimeOptions.UseColor is used.
runtime.Initialize(Microsoft.Research.Kinect.Nui.RuntimeOptions.UseColor);
//You can adjust the resolution here.
runtime.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
}
void runtime_VideoFrameReady(object sender, Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs e)
{
PlanarImage image = e.ImageFrame.Image;
BitmapSource source = BitmapSource.Create(image.Width, image.Height, 96, 96,
PixelFormats.Bgr32, null, image.Bits, image.Width * image.BytesPerPixel);
videoImage.Source = source;
}
}
}
You will find this template pack is very handy especially for those new to Kinect Development.
Next up is the Coding4Fun Kinect Toolkit, which contains extension methods and a WPF control to help you develop with the Kinect SDK.
After downloading the package simply add a reference to the .dll using either the WPF or WinForms version.
Now you will have access to several methods that can help you save an image: (for example)
For a full list of extension methods and properties, please visit the site at.
Kinductor – This is a great application for just learning how to use the Kinect SDK. The project uses MVVM Light and is a great start for those learning how to structure their first Kinect application.
Conclusion: Things are already getting easier for those working with the Kinect SDK. I imagine that after a few more months we will see the SDK go out of beta and allow commercial applications to run using it. I am very excited and hope that you continue reading my blog for more Kinect, WPF and Silverlight news.
Source: http://gamecontest.geekswithblogs.net/mbcrump/archive/2011/06/21/3-incredibly-useful-projects-to-jump-start-your-kinect-development.aspx
Refactoring a current project to include robot legs
I'm trying to refactor a current, semi large project to use robot legs.
My first task is to load a configuration settings file (AIR) and make it available inside the app.
So I can get that going fine. I have a service read in the data and populate a model. I have a singleton mapped and I'm ready to go.
Now my existing code base follows some bad practices. I have a controller that is about 1500 lines long that runs most of my app. It extends EventDispatcher and dispatches events.
The controller also holds some data and references to models. It's some whacky hybrid of a monolithic controller/model :).
I pass that controller around to different parts of the app, mostly view components to interact with it. I know..bad practice..that's why I'm here.
I'd like to switch this controller over to an actor and start to divide and conquer this sucker. My problem is that this controller dispatches events. I'm not sure how to keep dispatching events with this class and change it to an actor. I see there's an IEventDispatcher on it. Should I create an instance of EventDispatcher and populate that? I think that's wrong because I'm guessing it will be fulfilled when I inject it.
Any tips on going about this? I'd like to takes bites of my elephant than try to refactor the whole thing at once.
Thanks!
-Nate
Comments are currently closed for this discussion. You can start a new one.
Support Staff 1 Posted by Ondina D.F. on 10 Mar, 2012 11:02 AM
Hi Nate,
I assume you have something like this in your controller:
dispatchEvent(new SomeCustomEvent(SomeCustomEvent.SOME_CUSTOM_TYPE, value));
If you’d extend Actor you’d only have to replace dispatchEvent with dispatch in your code:
dispatch(new SomeCustomEvent(SomeCustomEvent.SOME_CUSTOM_TYPE,value));
But, if you want to use "dispatchEvent" instead of "dispatch", you can create your own "Actor" like this:
Then:
public class SomeDispatcher extends BaseDispatcher
dispatchEvent(new SomeCustomEvent(SomeCustomEvent.SOME_CUSTOM_TYPE, value));
As you know, you can inject the IEventDispatcher wherever you need it:
Does this help?
Ondina
2 Posted by Nate on 12 Mar, 2012 08:57 PM
I'm a little confused.
If I change my "controller" class to an actor and replace dispatchEvent with dispatch, this breaks all event listeners on that class. If I did this, I would have to re-write my app to use the RL event dispatching.
Is this correct?
Support Staff 3 Posted by Ondina D.F. on 13 Mar, 2012 12:56 PM
Hey Nate,
I thought your intention was to refactor your “controller” so you can take advantage of the shared event dispatcher provided by robotlegs. I think changes to other parts of your app are unavoidable, if you want to do it à la robotlegs :)
A temporary solution, until you define your new classes, that should replace the huge controller:
SomeDispatcher (from my previous post) would be your “controller” class. You provide it with an IEventDispatcher, so you can communicate with other rl classes. Have 2 methods: addEventListener and dispatchEvent, so you can add event listeners to it.
Mapping:
injector.mapSingleton(SomeDispatcher);
SomeDispatcher.as
Now, you can inject SomeDispatcher in SomeMediator and let someDispatcher dispatch an event on the shared event dispatcher.
SomeMediator.as
AnotherMediator.as has registered a listener for the event dispatched by someDispatcher:
As you can see above, SomeMediator passes the injected someDispatcher to SomeView.
In SomeView:
As you can see, you can use the old syntax someDispatcher.dispatchEvent and someDispatcher.addEventListener.
But, as a side effect, the events dispatched by someDispatcher in SomeView will be heard by AnotherMediator as well, which is not exactly how rl-MVCS should work. Therefore that should be just a temporary solution!!!
Does that help?
Others may chime in with a better solution(?)
Ondina
Ondina D.F. closed this discussion on 30 Mar, 2012 09:01 AM.
Source: https://robotlegs.tenderapp.com/discussions/questions/847-refactoring-a-current-project-to-include-robot-legs
Hello again Bela-heads, I'm trying to use the Bela to send sensor data via OSC to my macbook (more specifically the Wekinator application that assists with machine learning). And I imagine I'm getting into trouble with IP addresses. I assume the Bela's IP address is 192.168.7.2? Although in the OSC code example it says it sends by default on 127.0.0.1
Basically I want to know how to send OSC from the Bela to my Mac via the USB cable. Ideally I'd like to be able to receive it on the Wekinator program but I'll settle for picking it up in PD on my mac.
Cheers!
Any device will have as many IP addresses as it has network interfaces that are connected to a network.
Additionally, many devices (including Bela and your Mac) have an IP to refer to themselves through a virtual "loopback" device, that is 127.0.0.1 (also known as localhost, if you ever heard this term).
So in the network that is between your Mac an Bela (ethernet over USB), Bela has IP 192.168.7.2 and your computer has 192.168.7.1. Each device can reach the other (or itself) using the corresponding IP address.
Additionally, each device can reach itself on 127.0.0.1.
Sending messages back to 127.0.0.1 can be used to communicate between different processes running on the same device. For instance, the Bela Scope implements this sort of communication between the Bela program and the IDE, to serve the Scope to the client (*this is going to be swapped out for a more efficient implementation in the future).
The OSC example sends OSC messages back to Bela on `127.0.0.1` and is designed as a quick demonstration to pass messages to this other test program.
In your case, as you want to pass messages to your computer, you can change the remoteIp variable in the OSC example to the IP of your Mac, that is 192.168.7.1.
And hopefully it will work!
As always Giulio, perfect explanation and is now working a treat with both PD and Wekinator, couldn't be happier with this forum!
Cheers
Hi all!
I hoped I could get a little elaboration on this. Basically I'm trying to do the same as ReverendGreen; I'd like to send sensor data into my mac via USB, to be fed into Pd and also Wekinator.
I assume that the only way to do this is by using OSC. Is this correct?
I've noted in the OSC example that the remote port is set to 7563. So, I've set Wekinator to receive data via port 7563, and changed the remoteIp variable in the example code to "192.168.7.1" . However, I run the example code, but always get the message bela:timeout on the IDE console, and no data is received in Wekinator.
I've understood (from the instructions in the example code) that the OSC example is supposed to run 'alongside' the program osc.js, which can be accessed via the link you posted above.
However, I don't understand how the two programs talk to each other, how to run osc.js (particularly whether I should run it on the bela or my mac), or whether there is anything else I haven't understood.
Also worth noting: my sensors are hooked up to analog ins 0, 1, 2 and 3 on the Bela.
I humbly throw myself at your mercy! Thank you again for all of your help.
papa_sangre: "I assume that the only way to do this is by using OSC. Is this correct?"
Are you sending low-rate data or are you planning to send full-bandwidth? For low-rate data, you could use MIDI if you prefer. With the new v0.3 image, Bela shows up as a MIDI device on your computer, so data sent from the board to the port hw:0,0,0 will be received on the host computer. However, I think the OSC approach is better, as it scales more easily if you use a Wireless or wired connection other than the regular USB.
OSC would also potentially allow you to send audio-rate data, and I have done it in the past; however, this poses some issues of bandwidth and clock drift that could be difficult to address.
Back to basic OSC communication: the example consists of a bi-directional communication to an OSC server. The docs say:
In `setup()` it sends an OSC message to address `/osc-setup`, then waits
1 second for a reply on `/osc-setup-reply`.
If the response is not received within a second, then you get the timeout message.
The docs go on:
in `render()` the code receives OSC messages, parses them, and sends
back an acknowledgment.
So in order for any message to be sent out at this point, the code expects a message to come in first.
For the above message/acknowledgment mechanism to work, the example expects a properly configured server, hence the necessity to run node resources/osc/osc.js alongside it. However, if you simply want to send messages to Wekinator, then you do not need most of the above and your code could be something like:
#include <Bela.h>
#include <OSCServer.h>
#include <OSCClient.h>
OSCServer oscServer;
OSCClient oscClient;
int localPort = 7562;
int remotePort = 7563;
const char* remoteIp = "192.168.7.1";
bool setup(BelaContext *context, void *userData)
{
    oscServer.setup(localPort); // this is not actually needed if you do not need to receive messages
    oscClient.setup(remotePort, remoteIp);
    return true;
}

void render(BelaContext *context, void *userData)
{
    oscClient.queueMessage(oscClient.newMessage.to("/weka-address").add(4.2f).end());
}

void cleanup(BelaContext *context, void *userData)
{
}
Thanks for the in-depth reply Giulio, very much appreciated!
I'm using continuous data (the x, y, and z axes of an accelerometer plus a piezo), mostly to control synth parameters in a Pd patch.
I'd like to run these 4 inputs directly into Pd on my mac (and later Wekinator), so I can test my patches in real time, instead of importing them into the IDE each time to test.
Latency isn't super-important, as the patch will eventually be run in standalone mode.
I'm assuming, for this purpose, that I won't need to worry about the message/acknowledgement mechanism in the OSC example.
I've run the code you posted above, while running a patch in Pd on my mac (as advised by another forum) that looks like this:
But (no big surprise) nothing is printed to the Pd console.
I know this all must be straightforward to solve, but being a newbie at Pd, OSC, C++, and generally everything Bela-related, I'm struggling. Is there something obvious I'm missing?
A massive thank-you for your help so far
papa_sangre OK, I've just tried [print] straight from [netreceive -u -b] while running the code above on the bela, but still nothing is printed to the console.
That's weird. Did you click the [listen 7563( message again after renaming [netreceive -u] to [netreceive -u -b]? Every time you rename an object, it gets destroyed and reconstructed, so it loses its old state. Anyhow, I tried the C++ code above with this Pd patch and it works like a charm.
papa_sangre I'm assuming, for this purpose, that I won't need to worry about the message/acknowledgement mechanism in the OSC example.
Exactly.
papa_sangre 1 mode switch detected on the audio thread!
2 mode switches detected on the audio thread!
3 mode switches detected on the audio thread!
That's a bug I raised here, but it is not the root of your problem.
papa_sangre Should the string "/weka-address" in the code be replaced with something else, or is this just a placeholder? I tried placing the path for Pd in there, since I wasn't sure of that string's purpose.
Not as far as testing [netreceive] goes: [netreceive] knows nothing about the content of the packets that it receives; in fact, when you [print] from it you will see only numbers (each represents one byte). You can then use [oscparse] to retrieve the OSC message.
papa_sangre And finally, can I send each analog input to a different port, or else how will I separate the 4 signals when they reach Pd?
Whichever you prefer. You can use different addresses in your C++ code, e.g.:
oscClient.queueMessage(oscClient.newMessage.to("/analog0").add(analogRead(context, 0, 0)).end());
oscClient.queueMessage(oscClient.newMessage.to("/analog1").add(analogRead(context, 0, 1)).end());
oscClient.queueMessage(oscClient.newMessage.to("/analog2").add(analogRead(context, 0, 2)).end());
oscClient.queueMessage(oscClient.newMessage.to("/analog3").add(analogRead(context, 0, 3)).end());
and then use [route analog0 analog1 analog2 analog3] in Pd. Note that [oscparse] outputs a list:

print: list weka-address 4.2

so you will have to use [list trim] or [route list] first.
Can you try to [print] straight from [netreceive -u]? If you see something there but not from [oscparse], then there may be something wrong in the OSC formatting.
Actually, no: what you need is [netreceive -u -b]. Without -b, [netreceive] expects a FUDI string, but we encode OSC as binary, hence the need for -b!
Thanks Giulio,
OK, I've just tried [print] straight from [netreceive -u -b] while running the code above on the bela, but still nothing is printed to the console.
When I run the code, I periodically get messages in the IDE console like this:
1 mode switch detected on the audio thread!
2 mode switches detected on the audio thread!
3 mode switches detected on the audio thread!
etc.
Should the string "/weka-address" in the code be replaced with something else, or is this just a placeholder? I tried placing the path for Pd in there, since I wasn't sure of that string's purpose.
And finally, can I send each analog input to a different port, or else how will I separate the 4 signals when they reach Pd?
Thanks again
Hey Giulio, thanks so much for the in-depth response! That makes sense out of a lot that had been confusing.
I've tried your C++ code above, with no alterations, and the exact Pd patch you've posted in the previous comment, and run the C++ code in the IDE, but no luck. I closed and reopened Pd with the example patch, and also tried the old configuration [listen 7563( - [netreceive -u -b] - [print]. I tried clicking and re-clicking on [listen 7563(, and tried the patch with and without [oscparse]. Still, so far, nothing is printed to the Pd console.
I'm running Pd vanilla, version 0.48.0, and the IDE in Safari 11.0.2 on a MacBook Pro running macOS High Sierra 10.13.2. The Bela has the sensors continually hooked up to it and sending data.
Is there anything else particularly obvious I might be missing?
Thanks so much for your help!
I am running the C++ and Pd code above with Pd 0.48 on OS X 10.11; I am not sure whether something works differently in High Sierra (which I think you are running). The version of Safari wouldn't make a difference.
However, I just noticed that sometimes Pd just freezes, possibly because of the large number of packets coming in and the printing. Try to edit the render() function as follows, to only send a few messages per second:
void render(BelaContext *context, void *userData)
{
    static int count = 0;
    count++;
    if((count % 1000) == 0)
        oscClient.queueMessage(oscClient.newMessage.to("/weka-address").add(4.2f).end());
}
If this still does not work, let's try the following. Save this file on your computer and compile it with gcc udp-server.c -o udp-server, then run ./udp-server 7563 on your computer. If the Bela program is running, you should see some data coming in (the printed values won't make sense, because the program tries to parse the received packet as floats). If you do, it means the packets are reaching your computer and it may be a Pd problem.
If no packets are coming through, then copy this other file to your Bela* and compile it on Bela with gcc udp-client.c -o udp-client, then run it with ./udp-client 192.168.7.1 7563. It will then display a prompt Please enter the message:. Type in some gibberish and hit enter. This should send a message to udp-server on your computer (so make sure you still have a terminal open with ./udp-server 7563 running and waiting for messages): as you hit enter in the udp-client window, you should receive a packet in udp-server.
* I assume you know your way around the terminal; in case you don't, do:
scp /path/to/udp-client.c root@192.168.7.2:
ssh root@192.168.7.2
gcc udp-client.c -o udp-client
./udp-client 192.168.7.1 7563
A million thanks Giulio - just getting my head around a couple other issues, and then I'll try the solutions above and post back here. I tried editing the render() function without success, but hopefully the udp client/server solution works!
Hi Giulio,
So I've moved forward a little, but the terminal is another tool I'm new to - I've come to Bela from a performance background, with very little computer programming experience.
I should mention: when I run your C++ code with your altered render() function above, I get the error Underrun detected: 2 blocks dropped. And when I ran the Pd patch above ( [netreceive -u -b 7563] - [oscparse] - [print] ), it wasn't receiving anything yet.
So, following your advice, I saved the udp-server.c file onto my desktop, compiled it with the command gcc udp-server.c -o udp-server in the terminal (which created an executable file on my desktop), made sure my working directory was still Desktop, and then ran the command ./udp-server 7563 in the terminal while your C++ code was running in the IDE (I tried your original code and also the altered render() version). Unfortunately, still nothing printed to the terminal (although it was clear that udp-server was running).
So, I've saved the udp-client.c file to my desktop, but I'm new to the terminal, so I'm confused about how to execute your instructions above for this next step.
Should I copy and paste the 4 lines of code you've posted at the bottom of your comment into the terminal, and run it all at once? Or one line at a time? And where on the Bela should I move the udp-client.c file to?
Many thanks again for the clarification!
So I gave it my best guess and tried your second suggestion. From my test, the udp-server terminal window didn't show any packets received.
Here is what showed up in the two terminal windows; hopefully it illustrates whether I've done something wrong!
Now that's weird. This may mean that your system is blocking that port or all UDP packets from that IP address.
This sort of transcends my expertise and I don't believe it is a Bela issue, rather an issue with your system. Do you have a firewall or something? You could try a few more things:
try with a different port (maybe somewhere in the range 49152–65535). Change the port on both the client and the server and see if it works.
swap client and server: compile udp-client on the Mac and udp-server on Bela and run them (this time, it will be ./udp-client 192.168.7.2 7563, because you are trying to send packets to Bela, whose IP address is 192.168.7.2).
Maybe pf (Packet Filter) is running on your computer? Can you try

sudo pfctl -s all

? When you type that, you will be prompted for your password: type it in and hit enter (no asterisk will be displayed).
If at some point the output says Status: Enabled, then try

sudo pfctl -d

to temporarily disable pf, and try the UDP stuff again.
What are metaclasses
Python programmers often say "everything is an object": everything you can see in Python, including int, float, functions and so on, is an object. But in daily development, when we talk about objects, we may not immediately think of classes themselves. In fact, a class is also an object, and since a class is an object, something must create it. This is where metaclasses come in: a metaclass is the class that creates classes.
What does a metaclass do
A metaclass intercepts the class creation process, modifies the class, and then returns the modified class.
Stated like that, metaclasses seem simple, but because a metaclass can change the creation process of a class and do tricks inside it, the result can become extremely obscure and difficult to understand.
type is the metaclass of all classes in Python. If you keep calling type() on an object, you will find that it eventually points to type. type is special: type is its own metaclass.
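This chain can be checked directly (a quick illustration, not from the original article):

```python
class Foo:
    pass

f = Foo()
print(type(f))     # <class '__main__.Foo'>: the instance's class
print(type(Foo))   # <class 'type'>: the class's class, i.e. its metaclass
print(type(type))  # <class 'type'>: type is its own metaclass
```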
You can use type to create a class like this:

class Base:
    def __repr__(self):
        return self.__class__.__name__

def hello(self):
    print("hello")

Test = type("Test", (Base,), {"hello": hello})
Here, the first parameter accepted by type is the name of the class; the second parameter is a tuple specifying the classes to inherit from; and the third parameter is a dictionary holding the attributes and methods of the class, so that the generated class will have these attributes.
The above code is equivalent to the following code:
class Test(Base):
    def hello(self):
        print("hello")
In addition, you can create your own metaclass by inheriting from type, and then use this metaclass to create classes:
class Meta(type):
    def __new__(mcs, name, bases, attrs):
        for k in list(attrs.keys()):
            if k.lower() != k:
                # Cast attribute name to lowercase
                attrs[k.lower()] = attrs.pop(k)
        return super(Meta, mcs).__new__(mcs, name, bases, attrs)

class Case(metaclass=Meta):
    def Hello(self):
        print("hello")
In the above metaclass, we inspect the class attributes; if an attribute name is not lowercase (not in accordance with PEP 8 style), we force it to lowercase. In this way, we force classes using the metaclass to conform to a certain coding style (their attributes must be lowercase). This is just the "hello world" of metaclass applications; metaclasses can do much more.
Let's implement a slightly more complex requirement.
We know that namedtuple in Python can conveniently express a record of data. Suppose we want to implement a namedtuple with a metaclass. For simplicity, we only require that this implementation accepts positional parameters. A possible implementation is as follows:
from operator import itemgetter

def __new__(cls, *args, **kwargs):
    return tuple.__new__(cls, args)

def namedtuple(name, fields):
    fields = fields.split(",") if isinstance(fields, str) else fields
    attrs = {fld: property(itemgetter(i)) for i, fld in enumerate(fields)}
    attrs["__new__"] = __new__
    return type(name, (tuple,), attrs)

Student = namedtuple("Student", ["id", "name", "score"])
In this way, we have a simple named tuple that we can use much like Python's own:
stu = Student(1, "zhangsan", 100)  # This implementation does not support keyword parameters
print(stu.id)
Several methods to pay attention to in metaclasses
In a metaclass, we usually define __new__, __init__, or __call__ to control the class creation process. __new__ is called before the class is created, so modifications can be made there; __init__ is called after the class is created, and can modify the created class; __call__ is called when the class is instantiated. Generally, we only need to define one of them; which one depends on the business scenario.
In addition, if we need to accept additional parameters, we can also define the __prepare__ method. This method prepares the namespace for the class, but it is usually not necessary to define it; the default is fine.
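The order in which these three hooks fire can be observed with a small tracing metaclass (a sketch with made-up names, not from the original article):

```python
calls = []

class Tracing(type):
    def __new__(mcs, name, bases, attrs):
        calls.append("__new__")    # fires while the class object is being built
        return super().__new__(mcs, name, bases, attrs)

    def __init__(cls, name, bases, attrs):
        calls.append("__init__")   # fires right after the class object exists
        super().__init__(name, bases, attrs)

    def __call__(cls, *args, **kwargs):
        calls.append("__call__")   # fires when the class is instantiated
        return super().__call__(*args, **kwargs)

class Demo(metaclass=Tracing):
    pass

Demo()
print(calls)  # ['__new__', '__init__', '__call__']
```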
For example, we want to use metaclasses to implement a singleton pattern. Here is a simple example:
class Singleton(type):
    __instance = None

    def __call__(cls, *args, **kwargs):
        if cls.__instance is None:
            cls.__instance = super().__call__(*args, **kwargs)
            return cls.__instance
        else:
            # If the instance already exists, return it directly
            return cls.__instance

class Test(metaclass=Singleton):
    def __init__(self):
        print('init in Test class')

if __name__ == "__main__":
    Test()
    Test()
In the above example, we implemented the __call__ method in the metaclass to perform a check when the class is instantiated: if the class already has an instance, we return the existing one, ensuring the class is instantiated only once. Of course, this metaclass has many defects; for example, when multiple classes use it, their instances can get mixed up. That is another problem, which I won't expand on here.
How to use metaclasses
The answer is: usually, you don't need to use metaclasses.
There is a widely spread explanation about the use of metaclasses in Python. The original version is as follows:
Metaclasses are deeper magic than 99% of users should ever worry about. If you wonder whether you need them, you don't (the people who actually need them know with certainty that they need them, and don't need an explanation about why). -Tim Peters
In our development work, we rarely need to create a class dynamically. And if we do need to change a class dynamically, we can also do it with a class decorator or a monkey patch; both are easier to understand than metaclasses.
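For instance, the singleton above can be rewritten as a class decorator (a sketch, not from the original article), which many people find easier to follow than the metaclass version:

```python
def singleton(cls):
    """Class decorator: cache the first instance of cls and return it on every later call."""
    instances = {}

    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]

    return get_instance

@singleton
class Config:
    pass

print(Config() is Config())  # True
```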
This chapter provides an introduction to Object-Oriented Programming (OOP) in C++. We start by looking into a simple program that rolls a dice. We write the code and compile, link, and execute the program.
Then we continue by constructing a simple object-oriented hierarchy, involving the Person base class and its two subclasses, Student and Employee. We also look into pointers and dynamic binding.
Finally, we create two simple data types: stack and queue. A stack is constituted of a set of values ordered in a bottom-to-top manner, where we are interested in the top value only. A queue is a traditional queue of values, where we add values at the rear and inspect values at the front.
In this chapter, we will cover the following topics:
- We start by implementing a simple game: rolling the dice. Its main purpose is to provide an introduction to the environment and teach you how to set up the project, and how to compile, link, and execute the program.
- Then we start looking at object-oriented programming by writing a class hierarchy with Person as the base class and Student and Employee as subclasses. This provides an introduction to inheritance, encapsulation, and dynamic binding.
- Finally, we write classes for the abstract data types stack and queue. A stack is a structure where we both add and remove values at the top, while a queue is more like a traditional queue where we add values at the rear and remove them from the front.
As an introduction, we start by writing a program that rolls a dice. We use the built-in random generator to generate an integer value between one and six, inclusive:
Main.cpp
#include <cstdlib>
#include <ctime>
#include <iostream>
using namespace std;

int main() {
    srand((int) time(nullptr));
    int dice = (rand() % 6) + 1;
    cout << "Dice: " << dice << endl;
    return 0;
}

In the preceding program, the initial include directives allow us to include header files, which mostly hold declarations of the standard library. We need the cstdlib header file to use the random generator, the ctime header file to initiate the random generator with the current time, and the iostream header file to write the result.
The standard library is stored in a namespace called std. A namespace can be considered a container holding code. We gain access to the standard library with the using namespace directive.
Every C++ program holds exactly one main function, and the execution of the program always starts in the main function. We use the srand and time standard functions to initialize the random generator, and rand to generate the actual random value. The percent sign (%) is the modulus operator, which divides two integers and gives the remainder of the division. In this way, the value of the dice integer variable is always at least one and at most six. Finally, we write the value of the dice variable with cout, which is an object used by the standard library to write text and values.
The programs of the first four chapters were written with Visual Studio, while the programs of the remaining chapters are written with Qt Creator.
The following are instructions on how to create a project, write the code, and execute the application. Once we have started Visual Studio, we follow these steps to create our project:
- First, we select the New and Project items in the File menu, as shown in the following screenshot:
- We choose the Win32 Console Application type, and name the project Dice:
- In the first dialog, we just press the Next button:
- In the second dialog, we choose the Empty project checkbox and click on the Finish button. In this way, a project without files will be created:
- We choose a C++ File (.cpp) and name it Main.cpp:
- Then, we input the code in the Main.cpp file:
- Finally, we execute the program. The easiest way to do this is to choose the Start Debugging or Start Without Debugging menu option. In this way, the program is compiled, linked, and executed:
Let's continue by looking at a simple class that handles a car, including its speed and direction. A class is a very central feature in object-oriented languages. In C++, its specification is made up of two parts: its definition and its implementation. The definition part is often placed in a header file (with the .h suffix), while the implementation part is placed in a file with the .cpp suffix, as in the Car.h and Car.cpp files. However, template classes, which are introduced in Chapter 3, Building a Library Management System, are stored in one file only.
A class is made up of its members, where a member is a field or a method. A field holds a value of a specific type. A method is a mathematical abstraction that may take input values and return a value. The input values of a method are called parameters. However, in C++ it is possible to define a function without parameters and without return types.
An object is an instance of the class; we can create many objects of one class. The methods can be divided into inspectors, which do not modify the object, and modificators, which do.
Ideally, the methods of a class don't give direct access to the fields, as this would mean that the method names/types would have to change if the fields change. Instead, the methods should give access to a class property. These are the conceptual elements of a class that may not map to a single field. Each member of the class is public, protected, or private:
- A public member is accessible by all other parts of the program.
- A protected member is accessible only by its own members or members of its subclasses, which are introduced in the next section.
- A private member is accessible by its own members only. However, that is not completely true. A class can invite other classes to become its friends, in which case they are given access to its private and protected members. We will look into friends in the next chapter.
The following Car class definition has two constructors and one destructor. They always have the same name as the class, Car in this case. The destructor is preceded by a tilde (~). A constructor without parameters is called the default constructor.
Note
More than one method can have the same name, as long as they have different parameter lists, which is called overloading. More specifically, it is called context-free overloading. There is also context-dependent overloading, in which case two methods have the same name and parameter list, but different return types. However, context-dependent overloading is not supported by C++.
Consequently, a class can hold several constructors, as long as they have different parameter lists. However, the destructor is not allowed to have parameters. Therefore, a class can hold only one destructor:
Car.h
class Car {
public:
    Car();
    Car(int speed, int direction);
    ~Car();
The getSpeed and getDirection methods are inspectors returning the current speed and direction of the car. The return values hold the int type, which is short for integer. They are marked as constant with the const keyword, since they do not change the fields of the class. However, a constructor or destructor cannot be constant:

    int getSpeed() const;
    int getDirection() const;
The accelerate, decelerate, turnLeft, and turnRight methods are modificators, setting the current speed and direction of the car. They cannot be marked as constant, since they change the fields of the class:

    void accelerate(int speed);
    void decelerate(int speed);
    void turnLeft(int degrees);
    void turnRight(int degrees);
The m_speed and m_direction fields hold the current speed and direction of the car. The m_ prefix indicates that they are members of a class, as opposed to fields local to a method:

private:
    int m_speed, m_direction;
};
In the implementation file, we must include the Car.h header file. The #include directive is part of the preprocessor and simply causes the content of the Car.h file to be included in the file. In the previous section, we included system files with the angle bracket characters (< and >). In this case, we include local files with quotes ("). The system include files (with angle brackets) hold system code that is part of the language, while local include files (with quotes) hold code that we write ourselves, as part of our project. Technically, the system include files are often included from a special directory in the file system, while the local include files are often included locally in the file system:
Car.cpp
#include "Car.h"
The default constructor initializes both speed and direction, setting them to 0. The colon (:) notation is used to initialize the fields. The text between two slashes (//) and the end of the line is called a line comment and is ignored:

Car::Car()
 :m_speed(0), m_direction(0) {
    // Empty.
}
The second constructor initializes both speed and direction to the given parameter values:

Car::Car(int speed, int direction)
 :m_speed(speed), m_direction(direction) {
    // Empty.
}
In the preceding constructors, it would be possible to use the assignment operator (=) instead of the class initialization notation, as in the following code. However, that is considered inefficient, since the code may be optimized with the preceding initialization notation. Note that we use one equals sign (=) for assignments. For the comparison of two values, we use two equals signs (==), which are introduced in Chapter 2, Data Structures and Algorithms:

Car::Car() {
    m_speed = 0;
    m_direction = 0;
}
The destructor does nothing in this class; it is included only for the sake of completeness:
Car::~Car() {
    // Empty.
}
The getSpeed and getDirection methods simply return the current speed and direction of the car:

int Car::getSpeed() const {
    return m_speed;
}

int Car::getDirection() const {
    return m_direction;
}
A plus sign directly followed by an equals sign is called compound assignment and causes the right value to be added to the left value. In the same way, a minus sign directly followed by an equals sign causes the right value to be subtracted from the left value.
The text between a slash (/) directly followed by an asterisk (*) and an asterisk directly followed by a slash is called a block comment and is ignored:
void Car::accelerate(int speed) {
    m_speed += speed;
    /* Same effect as:
       m_speed = m_speed + speed; */
}

void Car::decelerate(int speed) {
    m_speed -= speed;
}

void Car::turnLeft(int degrees) {
    m_direction -= degrees;
}

void Car::turnRight(int degrees) {
    m_direction += degrees;
}
Now it is time to test our class. To do so, we include the Car.h file, just as we did in the Car.cpp file. However, we also include the system iostream header file. As in the previous section, the system headers are enclosed in angle brackets (< and >). We also use the namespace std to gain access to its functionality.
Main.cpp
#include <iostream>
using namespace std;

#include "Car.h"
In C++, a function can be part of a class or free-standing without a class. Functions of a class are often called methods. A function is a mathematical abstraction: it has input values, which are called parameters, and returns a value. However, in C++ a function is allowed to have zero parameters, and it may return the special type void, indicating that it does not return a value.
As mentioned in the previous section, the execution of the program always starts at the function named main, and every program must have exactly one function named main. Unlike some other languages, it is not necessary to name the file Main. However, in this book, every file holding the main function is named Main.cpp out of convenience. Standard C++ requires main to return int; returning 0 (which main does implicitly) indicates success. Note that while constructors and destructors never return values and are not marked with void, other methods and functions that do not return values must be marked with void:
int main() {
We create an object of the Car class that we call redVolvo. An object is an instance of the class; redVolvo is one of many cars:
Car redVolvo;
When writing information, we use the cout object (short for console output), which normally writes to a text window. The operator made up of two left angle brackets (<<) is called the output stream operator. The endl directive makes the next output start at the beginning of the next line:
    cout << "Red Volvo Speed: " << redVolvo.getSpeed() << " miles/hour"
         << ", Direction: " << redVolvo.getDirection() << " degrees" << endl;

    redVolvo.accelerate(30);
    redVolvo.turnRight(30);
    cout << "Red Volvo Speed: " << redVolvo.getSpeed() << " miles/hour"
         << ", Direction: " << redVolvo.getDirection() << " degrees" << endl;

    redVolvo.decelerate(10);
    redVolvo.turnLeft(10);
    cout << "Red Volvo Speed: " << redVolvo.getSpeed() << " miles/hour"
         << ", Direction: " << redVolvo.getDirection() << " degrees" << endl;
A blueFiat object is a constant object of the Car class. This means that it can only be initialized by one of the constructors and then inspected, but not modified. More specifically, only constant methods can be called on a constant object, and only methods that do not modify the fields of the object can be constant:
    const Car blueFiat(100, 90);
    cout << "Blue Fiat Speed: " << blueFiat.getSpeed() << " miles/hour"
         << ", Direction: " << blueFiat.getDirection() << " degrees" << endl;
}
When we execute the code, the output is displayed in a command window:
In this section, we modify the Car class. In the earlier version, we initialized the fields in the constructors. An alternative way is to initialize them directly in the class definition. However, this feature shall be used with care, since it may result in unnecessary initializations. If the second constructor in the Car class is called, the fields are initialized twice, which is inefficient.
Car.h
class Car {
public:
    // ...

private:
    int m_speed = 0, m_direction = 0;
};
While the Car class is defined in the Car.h file, its methods are defined in the Car.cpp file. Note that we begin by including the Car.h file, in order for the definitions of the methods to comply with their declarations in Car.h:
Car.cpp
#include "Car.h"

Car::Car() {
    // Empty.
}

Car::Car(int speed, int direction)
 :m_speed(speed), m_direction(direction) {
    // Empty.
}
Moreover, the Car class of the previous section has some limitations:
- It is possible to accelerate the car indefinitely, and it is possible to decelerate the car to a negative speed
- It is possible to turn the car so that the direction is negative or more than 360 degrees
Let's start by setting the maximum speed of the car to
200 miles/hour. If the speed exceeds
200 miles per hour we set it to
200 miles/hour. We use the
if statement, which takes a condition, and executes the following statement if the condition is true. In the case here, the statement
(m_speed = 200;) is enclosed by braces. This is not necessary, since it is only one statement; however, it would be necessary for more than one statement. In this book, we always use the braces for clarity, regardless of the number of statements.
Car.cpp
void Car::accelerate(int speed) { m_speed += speed; if (m_speed > 200) { m_speed = 200; } }
If the speed becomes negative, we change the sign of the speed to make it positive. Note that we cannot write
m_speed -= m_speed. That would set the speed to zero since it would subtract the speed from itself.
Since the value is negative, it becomes positive when we change the sign. We also turn the car by
180 degrees to change its direction. Note that we also, in this case, must check that the car does not exceed the speed limit.
Also, note that we must check whether the direction is less than 180 degrees. If it is, we add
180 degrees; otherwise, we subtract
180 degrees to keep the direction in the interval
0 to
360 degrees. We use the
if...else statement to do that. If the condition of the
if statement is not true, the statement after the
else keyword is executed:
void Car::decelerate(int speed) { m_speed -= speed; if (m_speed < 0) { m_speed = -m_speed; if (m_speed > 200) { m_speed = 200; } if (m_direction < 180) { m_direction += 180; } else { m_direction -= 180; } } }
When turning the car, we use the modulo (
%) operator. When dividing by
360, the modulo operator gives the remainder of the division. For instance, when 370 is divided by
360 the remainder is 10:
void Car::turnLeft(int degrees) { m_direction -= degrees; m_direction %= 360; if (m_direction < 0) { m_direction += 360; } } void Car::turnRight(int degrees) { m_direction += degrees; m_direction %= 360; }
The
main function creates one object of the
Car class,
redVolvo. We start by writing its speed and direction, then we accelerate and turn left and again write its speed and direction. Finally, we decelerate and turn right and write its speed and direction one last time:
Main.cpp
#include <iostream>
using namespace std;
#include "Car.h"

int main() {
  Car redVolvo(20, 30);
  cout << "Red Volvo Speed: " << redVolvo.getSpeed() << " miles/hour"
       << ", Direction: " << redVolvo.getDirection() << " degrees" << endl;
  redVolvo.accelerate(30);
  redVolvo.turnLeft(60);
  cout << "Red Volvo Speed: " << redVolvo.getSpeed() << " miles/hour"
       << ", Direction: " << redVolvo.getDirection() << " degrees" << endl;
  redVolvo.decelerate(60);
  redVolvo.turnRight(50);
  cout << "Red Volvo Speed: " << redVolvo.getSpeed() << " miles/hour"
       << ", Direction: " << redVolvo.getDirection() << " degrees" << endl;
  return 0;
}
When we execute the code, the output is displayed in a command window as follows:
Let's continue with a class hierarchy, where
Person is the base class with
Student and
Employee as its subclasses:
As a person has a name, we use the C++ standard class string to store the name. The
virtual keyword marks that the print method is intended to be overridden in the subclasses and called with dynamic binding:
Person.h
class Person { public: Person(string name); virtual void print(); private: string m_name; };
We include the
string header, which allows us to use the
string class:
Person.cpp
#include <string> #include <iostream> using namespace std; #include "Person.h" Person::Person(string name) :m_name(name) { // Empty. } void Person::print() { cout << "Person " << m_name << endl; }
The
Student and
Employee classes are subclasses of
Person, and they inherit
Person publicly. Sometimes the term extension is used instead of inheritance. The inheritance can be
public,
protected, or
private:
- With public inheritance, all members of the base class keep their access level in the subclass
- With protected inheritance, all public members of the base class become protected in the subclass
- With private inheritance, all public and protected members of the base class become private in the subclass
The
Student and
Employee classes have the text fields
m_university and
m_company:
Student.h
class Student : public Person { public: Student(string name, string university); void print(); private: string m_university; };
The file
Student.cpp defines the methods of the
Student class:
Student.cpp
#include <string> #include <iostream> using namespace std; #include "Person.h" #include "Student.h"
The subclass can call a constructor of the base class by stating its
name with the colon notation (
:). The constructor of
Student calls the constructor of
Person with the name as a parameter:
Student::Student(string name, string university) :Person(name), m_university(university) { // Empty. }
We must state that we call
Person rather than
Student by using the double colon notation (
::):
void Student::print() { Person::print(); cout << "University " << m_university << endl; }
The
Employee class is similar to
Student. However, it holds the field
m_company instead of
m_university.
Employee.h
class Employee : public Person { public: Employee(string name, string company); void print(); private: string m_company; };
The file
Employee.cpp defines the methods of the
Employee class.
Employee.cpp
#include <string> #include <iostream> using namespace std; #include "Person.h" #include "Employee.h"
The constructor initializes the name of the person and the company they are employed by:
Employee::Employee(string name, string company) :Person(name), m_company(company) { // Empty. } void Employee::print() { Person::print(); cout << "Company " << m_company << endl; }
Finally, the
main function starts by including the system header files
string and
iostream, which hold declarations for string handling and input and output streams. Since all standard declarations are placed in the std namespace, we gain access to them with the using namespace directive.
Main.cpp
#include <string> #include <iostream> using namespace std; #include "Person.h" #include "Student.h" #include "Employee.h"
We define three objects,
person,
student, and
employee, holding the names Monica, Demi, and Charles, and call
print on each of them:
int main() {
  Person person("Monica");
  person.print();

  Student student("Demi", "MIT");
  student.print();

  Employee employee("Charles", "Microsoft");
  employee.print();
The asterisk (
*) marks that
personPtr is a pointer to an object of the
Person class, rather than a
Person object. A pointer to an object holds the memory address of the object, rather than the object itself. However, at the moment it does not hold any address at all. We will soon assign it the address of an object:
Person *personPtr;
The ampersand (
&) is an operator that provides the address of an object, which is assigned to the pointer
personPtr. We assign
personPtr in turn to the addresses of the
Person,
Student, and
Employee objects and call
print. Since print is marked virtual in
Person, it is not necessary to mark it virtual in
Student and
Employee. When accessing a member through a pointer to an object, we use the arrow (
->) operator instead of the dot operator.
When
personPtr points at an object of
Person, print in
Person is called:
personPtr = &person; personPtr->print();
When
personPtr points at an object of
Student,
print in Student is called:
personPtr = &student; personPtr->print();
When
personPtr points at an object of
Employee, print in
Employee is called:
personPtr = &employee; personPtr->print(); }
This process is called dynamic binding. If we omit the virtual marking in
Person, static binding would occur and print in
Person would be called in all cases.
The concept of object-oriented programming is built on the three cornerstones of encapsulation, inheritance, and dynamic binding. A language that does not support any of these features cannot be called object-oriented.
A stack is a simple data type where we add values to the top, remove the value on the top, and can only inspect the top value. In this section, we implement a stack of integers. In the next chapter, we look into template classes that can hold values of arbitrary types. We use a linked list, which is a construction where a pointer points at the first cell in the linked list, and each cell holds a pointer to the next cell in the linked list. Naturally, the linked list must end eventually. We use
nullptr to mark the end of the linked list, which is a C++ standard pointer to a special null address.
To begin with, we need a class to hold each cell of the linked list. The cell holds an integer value and a pointer to the next cell in the list, or
nullptr if it is the last cell of the list. In the following section, we will look into cell classes that hold pointers to both the previous and the next cell.
Cell.h
class Cell { public: Cell(int value, Cell *next);
It is possible to implement methods directly in the class definition; they are called inline methods. However, it is usually done for short methods only. A rule of thumb is that inline methods shall not exceed one line:
int value() const { return m_value; } Cell *next() const { return m_next; }
Each cell holds a value and the address of the next cell in the linked list:
private: int m_value; Cell *m_next; };
Cell.cpp
#include "Cell.h"
A cell is initialized with a value and a pointer to the next cell in the linked list. Note that
m_next has the value
nullptr if the cell is the last cell in the linked list:
Cell::Cell(int value, Cell *next) :m_value(value), m_next(next) { // Empty. }
In a stack, we are interested in its top value only. The default constructor initializes the stack to be empty. Push adds a value at the top of the stack, top returns the top value, pop removes the top value, size returns the number of values in the stack, and empty returns
true if the stack is empty. The bool type is a logical type that can hold the values
true or
false.
Stack.h
class Stack { public: Stack(); void push(int value); int top(); void pop(); int size() const; bool empty() const;
The
m_firstCellPtr field is a pointer to the first cell of the linked list holding the values of the stack. When the stack is empty,
m_firstCellPtr will hold the value
nullptr. The
m_size field holds the current size of the stack:
private: Cell *m_firstCellPtr; int m_size; };
The
cassert header is included for the assert macro, which is used to test whether certain conditions are true. A macro is part of the preprocessor, which performs text replacements.
Stack.cpp
#include <cassert> using namespace std; #include "Cell.h" #include "Stack.h"
The default constructor sets the stack to empty by initializing the pointer to the first cell to
nullptr and the size to zero:
Stack::Stack() :m_firstCellPtr(nullptr), m_size(0) { // Empty. }
When pushing a new value at the top of the stack, we use the new operator to dynamically allocate the memory needed for the cell. If we run out of memory,
nullptr is returned, which is tested by the assert macro. If
m_firstCellPtr equals
nullptr, the execution is aborted with an error message. The exclamation mark (
!) followed by an equals sign (
=) constitutes the not-equal operator. Two plus signs (
++) constitute the increment operator, which means that the value is increased by one.
The increment operator actually comes in two versions: prefix (
++m_size) and postfix (
m_size++). In the prefix case, the value is first increased and then returned, while in the postfix case the value is increased but the original value is returned. However, in this case, it does not matter which version we use, since we are only interested in the result: that the value of
m_size is increased by one:
void Stack::push(int value) { m_firstCellPtr = new Cell(value, m_firstCellPtr); assert(m_firstCellPtr != nullptr); ++m_size; }
When returning the top value of the stack, we must first check that the stack is not empty, since it would be illogical to return the top value of an empty stack. If the stack is empty, the execution is aborted with an error message. The single exclamation mark (
!) is the logicalÂ
not operator. We return the top value, which is stored in the first cell in the linked list:
int Stack::top() { assert(!empty()); return m_firstCellPtr->value(); }
We must also check that the stack is not empty when popping the top value of the stack. We set the pointer to the first cell in the linked list to point at the next cell. However, before that, we must store the first pointer in
deleteCellPtr, in order to deallocate the memory of the cell it points at.
We deallocate the memory with the
delete operator:
void Stack::pop() { assert(!empty()); Cell *deleteCellPtr = m_firstCellPtr; m_firstCellPtr = m_firstCellPtr->next(); delete deleteCellPtr;
In the same way as the increment operator above, two minus signs (
--) constitute the
decrement operator, which decreases the value by one:
--m_size; }
The
size method simply returns the value of the
m_size field:
int Stack::size() const { return m_size; }
A stack is empty if the pointer to the first cell pointer equals
nullptr. Informally, we say that the pointer is null if it equals
nullptr:
bool Stack::empty() const { return (m_firstCellPtr == nullptr); }
We test the stack by pushing, topping, and popping some values.
Main.cpp
#include <string>
#include <iostream>
using namespace std;
#include "Cell.h"
#include "Stack.h"

int main() {
  Stack s;
  s.push(1);
  s.push(2);
  s.push(3);
When printing a Boolean value, the
stream operator does not print
true or
false, but rather one for
true and zero for
false. In order to really print
true or
false we use the
conditional operator. It takes three operands, separated by a question mark (
?) and a colon (
:). If the first value is
true the second value is returned. If the first value is
false the third value is returned:
cout << "top " << s.top() << ", size " << s.size() << ", empty " << (s.empty() ? "true" : "false") << endl; s.pop(); s.pop(); s.push(4); cout << "top " << s.top() << ", size " << s.size() << ", empty " << (s.empty() ? "true" : "false") << endl; }
A queue models a traditional waiting line; we enter values at the rear, and inspect and remove values at the front. It is also possible to query the number of values it holds and whether it is empty.
Similar to the stack in the previous section, we implement the queue with a linked list. We reuse the
Cell class; however, in the queue case, we need to set the next link of a cell. Therefore, we rename
next to
getNext and add the new
setNext method:
Cell.h
class Cell { public: Cell(int value, Cell *next); int value() const {return m_value;} Cell *getNext() const { return m_next; } void setNext(Cell* next) { m_next = next; } private: int m_value; Cell *m_next; };
We implement the queue with a linked list in a manner similar to the stack. The constructor initializes an empty queue,
enter enters a value at the rear of the queue,
remove removes the value at its front,
size returns the current size of the queue, and
empty returns
true if it is empty:
Queue.h
class Queue { public: Queue(); void enter(int value); int first(); void remove(); int size() const; bool empty() const;
In the stack case, we were only interested in its top, which was stored at the beginning of the linked list. In the queue case, we are interested in both the front and rear, which means that we need to access both the first and last cell of the linked list. Therefore, we have the two pointers,Â
m_firstCellPtr and
m_lastCellPtr, pointing at the first and last cell in the linked list:
private: Cell *m_firstCellPtr, *m_lastCellPtr; int m_size; };
Queue.cpp
#include <CAssert> using namespace std; #include "Cell.h" #include "Queue.h"
When the queue is created, it is empty; the pointers are null and the size is zero. Since there are no cells in the linked list, both cell pointers point at
nullptr:
Queue::Queue() :m_firstCellPtr(nullptr), m_lastCellPtr(nullptr), m_size(0) { // Empty. }
When entering a new value at the rear of the queue, we check whether the queue is empty. If it is, both pointers are set to point at the new cell. If it is not, the next-pointer of the last cell is set to point at the new cell, and then the last-cell pointer is set to the new cell:
void Queue::enter(int value) { Cell *newCellPtr = new Cell(value, nullptr); if (empty()) { m_firstCellPtr = m_lastCellPtr = newCellPtr; } else { m_lastCellPtr->setNext(newCellPtr); m_lastCellPtr = newCellPtr; } ++m_size; }
The first method simply returns the value of the first cell in the linked list:
int Queue::first() { assert(!empty()); return m_firstCellPtr->value(); }
The
remove method sets the first cell to point at the second cell. However, first we must store its address in order to deallocate its memory with the C++ standard
delete operator:
void Queue::remove() { assert(!empty()); Cell *deleteCellPtr = m_firstCellPtr; m_firstCellPtr = m_firstCellPtr->getNext(); delete deleteCellPtr; --m_size; } int Queue::size() const { return m_size; } bool Queue::empty() const { return (m_firstCellPtr == nullptr); }
We test the queue by entering and removing a few values. We enter the values one, two, and three, which are placed in the queue in that order. We then remove the first two values, and enter the value four. Then the queue holds the values three and four:
Main.cpp
#include <cmath>
#include <string>
#include <iostream>
using namespace std;
#include "Cell.h"
#include "Queue.h"

int main() {
  Queue q;
  q.enter(1);
  q.enter(2);
  q.enter(3);
  cout << "first " << q.first() << ", size " << q.size()
       << ", empty " << (q.empty() ? "true" : "false") << endl;
  q.remove();
  q.remove();
  q.enter(4);
  cout << "first " << q.first() << ", size " << q.size()
       << ", empty " << (q.empty() ? "true" : "false") << endl;
  return 0;
}
In this chapter, we have looked into the basics of object-oriented programming. We started by creating a project and executing a program for rolling a die. We have also created a class hierarchy, including the base class
Person and its two subclasses
Student and
Employee. By defining pointers to the objects, we have performed the dynamic binding.
Finally, we have created two data types: stack and queue. A stack is a structure where we are interested in the value at the top only. We can add values at the top, inspect the top value, and remove the top value. A queue is a traditional queue where we enter values at the rear while we inspect and remove values from the front.
In the next chapter, we will continue to create more advanced data types, such as lists and sets. We will also look into more advanced features of C++.
Class represents a document reading filter.
#include <PCDM_ReaderFilter.hxx>
Class represents a document reading filter.
It allows setting attributes (by class name) that must be skipped during document reading, or attributes that are the only ones to be retrieved. In addition, it is possible to define one or several subtrees (by entry) that must be retrieved during reading. Other labels are created, but no attributes are stored on them.
Map from the tag of a label to the sub-tree of this tag. Used for fast browsing of the tree and comparison with the entities that must be read.
Supported modes of appending the file content into existing document.
Creates an empty filter, so, all will be retrieved if nothing else is defined.
Creates a filter to skip only one type of attributes.
Creates a filter to read only sub-labels of a label-path. Like, for "0:2" it will read all attributes for labels "0:2", "0:2:1", etc.
Creates a filter to append the content of file to open to existing document.
Destructor for the filter content.
Adds sub-tree path (like "0:2").
Adds attribute to read by type. Disables the skipped attributes added.
Adds attribute to read by type name. Disables the skipped attributes added.
Adds skipped attribute by type.
Adds skipped attribute by type name.
Makes filter pass all data.
Iteration to the child with defined tag.
Returns true if appending to the document is performed.
Returns true if only part of the document tree will be retrieved.
Returns true if content of the currently iterated label must be read.
Returns true if attribute must be read.
Returns true if content of the label must be read.
Returns true if attribute must be read.
Returns true if some sub-label of the currently iterated label is passed.
Returns true if some sub-label of the given label is passed.
Returns the append mode.
Starts the tree iterator. It is used for fast searching of passed labels if the whole tree of labels is parsed. So, after the iteration starts, the methods Up and Down must be called on each iteration step.
Iteration to the child label.
Append mode for reading files into existing document.
Pointer to the current node of the iterator.
If a node is not described in the read entries, the iterator goes inside this subtree just by keeping the depth of iteration.
Class names of the only attributes to read (if it is not empty, mySkip is unused).
Class names of attributes that must be skipped during the read.
Paths to the labels that must be read. If it is empty, read all.
Whole tree that corresponds to the retrieved document.
So, I am in the mid part of my C++ class and am having trouble with a vector project at the end of the chapter. I don't expect the answer, but if someone would point me in the right direction I would greatly appreciate it :)
#1). Write a loop that will print the values in a 20 element vector named quantity to the screen.
(When I compile the code it increments by 10 for the 20 values and I'm pretty sure this isn't correct)
#2). Write a loop that will initialize the values in a 100 element vector of type double, named x with the loops index value. For example, the element at subscript 0 should have a value of 0, and the value at subscript 11 should have a value of 11.
(When I compile the code it doesn't give me any values, just "0").
Here is my code:
// vector.cpp Project 9-1 pg. 232
#include <iostream>
#include <vector>
using namespace std;

vector <long> LongVector; // These 3 vectors are just step 1 of the project. Please ignore.
vector <float> n (10);
vector <int> z (100, 0);

int main()
{
    // loop that prints 20 values
    vector <double> quantity (20);
    for (int index = 0; index < 20; index++)
    {
        cout << "Value " << index << quantity [index] << ": " << endl;
    }
    cout << endl << endl; // added this for neatness :)

    // loop that prints 100 elements
    vector <double> x (100);
    for (double index = 0; index < 100; index++)
    {
        cout << "Element " << index + 1 << ": " << x [index] << endl;
    }
    return 0;
}
Hey, folks! In this article, we will be understanding the working of the Python strftime() function along with its variations.
So, let us get started.
Python has a variety of modules, each offering clusters of functions to work on data. The Python time module is used to perform manipulations on timestamps.
Further,
Python strftime() function accepts a time in various forms and returns a string with the time represented in a standard form.
Variant 1: Python strftime() to get the current time
Python strftime() function can be used along with the datetime module to fetch the current timestamp in a proper form according to the format codes.
Syntax:
datetime.now().strftime('format codes')
So, format codes are actually the predefined codes used to represent the timestamp in a proper and standard manner. We’ll be understanding more about format codes further in this article.
Example:
from datetime import datetime current_timestamp = datetime.now() tym = current_timestamp.strftime("%H:%M:%S") date = current_timestamp.strftime("%d-%m-%Y") print("Current Time:",tym) print("Current Date:",date)
In the above example, we have used
datetime.now() method to fetch the current timestamp and then have passed the same to the strftime() function to represent the timestamp in a standard format.
We have used format codes carrying the significance as below:
%H– Representing ‘hours‘ in a 24-hour format.
%M– Representing ‘minutes‘ as a decimal number.
%S– Represents the ‘seconds‘ part of the timestamp.
Output:
Current Time: 16:28:40 Current Date: 28-04-2020
Variant 2: Python strftime() with pre-defined timestamp
It so happens at times that we need to display the timestamp of a historical dataset. The same can be performed using the Python strftime() function.
The
datetime.fromtimestamp() method is used to fetch the predefined timestamp. Further, strftime() function can be used to represent it in a standard form using various format codes, as explained above.
Syntax:
datetime.fromtimestamp(timestamp).strftime()
Example:
from datetime import datetime given_timestamp = 124579923 timestamp = datetime.fromtimestamp(given_timestamp) tym = timestamp.strftime("%H:%M:%S") date = timestamp.strftime("%d-%m-%Y") print("Time according to the given timestamp:",tym) print("Date according to the given timestamp:",date)
Output:
Time according to the given timestamp: 03:02:03 Date according to the given timestamp: 13-12-1973
Using different Format codes with Python strftime() function
Python strftime() function uses format codes to represent the timestamp in a standard and maintained format. Further, we can use format codes to segregate days, hours, weeks, etc from the timestamp and display the same.
Let’s understand the format codes with the help of some examples.
Example 1: Format code — ‘%A‘ to display the current day of the week.
from time import strftime day = strftime("%A") print("Current day:", day)
Output:
Current day: Tuesday
Example 2: Format code — ‘%c’ to display the current localtime.
Format code – ‘%c’ is used to display the current localtime which follows the following format:
Day Month Date hours:mins:sec Year
from time import strftime day = strftime("%c") print("Current timestamp:", day)
Output:
Current timestamp: Tue Apr 28 16:42:22 2020
Example 3: Format code — ‘%R‘ to represent the time in a 24 hour format.
from time import strftime day = strftime("%R") print("Current time in a 24-hour format:", day)
Output:
Current time in a 24-hour format: 16:44
Example 4: Format code — ‘%r’ to display the time in H:M:S format along with the AM/PM designation.
from time import strftime day = strftime("%r") print("Current time -- hours:mins:seconds", day)
Output:
Current time -- hours:mins:seconds 05:05:19 PM
Example 5:
from time import strftime day = strftime("%x -- %X %p") print("Local date and time:", day)
In the above example, we have used format code ‘%x’ to represent the local timestamp in terms of date and ‘%X’ to represent the local time in the form of H:M:S. The ‘%p’ format code is used to represent whether the timestamp belongs to AM or PM.
Output:
Local date and time: 04/28/20 -- 17:08:42 PM
Conclusion
Thus, in this article, we have understood the working of Python strftime() function along with the format codes used.
In order to understand the list of available format codes, please find the link to visit the official documentation in the References.
Introduction to annotated XML schema decomposition
The annotated XML schema decomposition feature introduced in DB2 Viper can be used to decompose XML documents, wholly or partially, into relational tables. It uses annotations in XML schema as the mapping language to map information in an XML document to relational tables. The new XML decomposition feature uses XML Schema as the platform to obtain mapping information. Since, XML Schema is an open standard, it only requires the addition of a few proprietary features in the mapping language specification. This allows for a shorter learning curve as opposed to an equivalent mapping specification in a completely proprietary language. Note that annotations added to the XML Schema do not participate in validation of corresponding XML documents. This allows users to use the same XML schema for mapping and validation of XML documents.
The new XML decomposition feature requires that the annotated XML schema be registered in the XML Schema Repository. XML Schema Repository is also a new feature introduced in DB2 Viper. As the name suggests, it is a repository for XML schemas that may consist of one or more XML Schema documents. The scope of XML schemas registered in XSR is limited to the database. Any XML schema that is registered with XSR can be used for two purposes:
- Validation of XML documents as they are inserted into XML type columns
- Validation of XML documents as they are being decomposed into relational tables
Any XML schema registered in XSR can be enabled to be used with XML decomposition if the XML schema has at least one decomposition-related annotation present. When an XML schema is enabled for decomposition, checks are made to ensure the correctness of the annotations, compatibility of XML Schema types to DB2 column type and existence of relational tables and columns specified in annotations. If the annotated XML schema is deemed valid, mapping information is extracted and stored in catalog tables in a ready-to-use binary format.
Finally, any XML document that conforms to the annotated XML schema can now be decomposed into relational tables as per the specified mapping.
The annotated XML schema mapping language is designed with the aim to serve users who already have an existing relational schema with several applications, and now want to consume data from XML documents into that existing relational schema. The structure of the XML documents, as expressed by an XML schema, could be very different from the relational schema, as the users may have little or no control over the design of the XML schema. Consequently, the mapping language is designed to provide greater flexibility and granular control over the entire process of decomposition. There are 11 different mapping constructs (some of them even have multiple enumerated values) to express mapping from elements and attributes declared in the XML schema to table-column pairs in the relational schema. Each of these constructs is designed to primarily serve a well-defined purpose. Most of these constructs are a result of feature requests that came from XML Extender customers. Table 1 lists these constructs and their primary goals:
Table 1. Mapping constructs and their primary goals
For convenience, all of the listed annotations can be specified as non-native attributes of the element/attribute declarations, or as children of the xsd:annotation/xsd:appinfo hierarchy using the db2-xdb:rowSetMapping construct. For more details on each of these annotations, refer to the DB2 documentation.
Besides the various mapping constructs introduced in annotated XML schema decomposition, it also features a new algorithm that can determine one-to-one or one-to-many relationships without any explicit (user-specified) mapping construct. The algorithm determines the relationship by looking at the maxOccurs property and the model groups involved in the items mapped to the same rowSet. Thanks to this algorithm, and unlike other decomposition solutions, there are no constraints restricting the elements/attributes mapped to the same table to siblings or children. As long as all the elements/attributes involved in mapping to the same rowSet form a legal one-to-one or one-to-many relationship, the elements/attributes mapped to the same table can come from any part of the XML schema. The new algorithm, by way of detecting a one-to-many relationship, allows multi-valued dependencies to be decomposed into relational tables. The algorithm not only creates rows based on the cardinality of the elements/attributes involved, but also based on the type of the model group.
The new XML decomposition also has a strong type validator and type-conversion engine. During registration, it disallows elements/attributes from being decomposed into target columns if the declared schema type is incompatible with the target column's type. However, it allows type conversion where possible; for instance, an element/attribute of type integer can be mapped to a column of type character.
Due to the new architecture and algorithm, early performance comparisons indicate that the new annotated schema XML decomposition is much faster than XML Extender Decomposition.
Migration considerations
Following are the primary migration considerations when migrating from XML Extender Decomposition to annotated XML schema decomposition:
- Migration of mapping document: Users need to migrate their DAD to annotated XML schema in order to use the new functions. This is perhaps the biggest part of the migration. However, there is help in the form of a tool that is described below. The tool will help users convert their XML schemas to annotated XML schema documents based on the DAD.
- Addition of a new registration step: Users now need to register the annotated XML schema in the XML Schema Repository. This is analogous to the enable_collection step in XML Extender. Since that step was optional in XML Extender, some users might not have a collection at all. With annotated XML schema decomposition, however, it is mandatory to register the XML schema in the XML Schema Repository and enable it for decomposition before decomposing any XML documents against it.
- Migration of the stored procedure call: The new stored procedure call looks as follows:
xdbDecompXML ( rschema , xmlschemaname , xmldoc , documentid , validation , reserved , reserved , reserved )
rschema: The relational schema within which the XSRObject is created. The XSRObject is created as a consequence of registering and completing the XML schema, which is annotated for decomposition in this case.
xmlschemaname: The name of the XSRObject, which contains the XML schema that describes, through annotations, how the XML document should be decomposed.
xmldoc: A parameter of type BLOB used to pass the XML document to be decomposed.
documentid: An identifier for the XML document. This is used to report diagnostic information in the db2diag.log and is also returned as part of the error message.
validation: Set to one if the XML document should be validated as it is shredded; set to zero otherwise.
reserved: All the reserved parameters should be passed in as null.
There are five variations of this stored procedure, based on the size of the XML document to be decomposed. All of them have the same parameters, except for the size of the BLOB used to pass in the XML document. Table 2 describes the different variations of the stored procedure:
Table 2. Different variations of the stored procedure
Note that the annotated XML schema stored procedures are built in and belong to the SYSPROC schema. Their use does not require enabling the database, as is the case with XML Extender.
- Error handling: The new stored procedure returns an SQLCA with proper SQLCODE and SQLSTATE values, more in line with the error-handling framework used with SQL. This allows applications to use existing infrastructure, if available, to handle SQL errors. The SQLCA can be used to format and build localized messages.
Note that, like all new XML-related features introduced in the DB2 Viper release, annotated XML schema decomposition is supported only on Unicode, single-partition databases.
XML schema converter tool
The DAD to annotated XML schema converter utility helps users convert their XML schemas to annotated XML schemas based on the mapping rules described in the DAD. The XML schema and the DAD must describe the same set of XML documents, although it is acceptable for the XML schema to describe a superset of the documents described by the DAD. Users who do not have an XML schema can easily generate one from the corresponding DTD, or even from XML documents, using freely available tools. The converter tool then takes the XML schema and the DAD as input and produces an annotated XML schema. The tool supports the import, include, and redefine constructs of XML Schema; if an XML schema is spread across many schema documents, only the path of the primary schema document (the document through which all other schema documents can be reached via import, include, or redefine) is needed. The tool will annotate element/attribute declarations across schema documents. Note that since DADs do not support namespaces, the import construct cannot actually occur in this scenario.
Usage: DAD2AS dad_filename.dad primary_schema_doc.xsd defaultSQLSchemaName
This utility takes three inputs -- a DAD file, the primary XML schema document, and the default SQL schema in which the tables reside. Based on the mapping information specified in the DAD, it converts the specified XML schema into an annotated XML schema, which can be used with the annotated XML schema decomposition feature of DB2 Viper.
Parameters
The specified DAD (an RDB_node mapping DAD), which may currently be in use with XML Extender shredding, is assumed to be valid. It provides the mapping information to this tool. The utility reads the DAD file, infers the mapping, and applies the same mapping to the appropriate elements/attributes in the XML schema provided as the second parameter.
The XML schema must be in accordance with the DAD; there should not be an element/attribute used in any context in the DAD that is not defined in the same context in the XML schema. In other words, the DAD and XML schema must be such that they work on the same family of XML documents. (If the users do not have an XML schema but have a DTD, they can use various tools to convert a DTD to XML schema.) The tool assumes that none of the elements/attributes defined in the XML schema will be in any namespace, as the namespace qualification of elements/attributes is not supported in DAD either.
Users are advised to keep a copy of their schema documents before running this tool, since the tool will modify the primary schema document and any document that is part of the schema referenced via xsd:include, xsd:import, or xsd:redefine.
The third parameter is used to qualify any unqualified table references. If a reference to a table is not qualified in the DAD, it will be assumed to reside in the specified default SQL schema. However, this parameter is required even if all the tables in the DAD are qualified.
Note: Running the tool multiple times on the same schema documents will result in an error during decomposition enablement in the XSR, because on each run the tool adds the same set of mappings to the same set of elements/attributes.
Tool prerequisites
- JRE 1.4.2 or JRE 1.5
- JARs for Eclipse Modeling Framework
- JARs for XSD plug-in
- JARs for xml4j-4_2_2 or an equivalent Xerces-J
The links to download these tools are listed below.
How to run it
Being written in Java™, this tool works on all platforms supported by a JRE. However, .bat files are provided for convenience on Windows®. The .bat files set the PATH and CLASSPATH variables; users should set the paths of the JARs in CLASSPATH and the path of the JDK according to their installation. There are two flavors of the .bat file: Dad2AS_Eclipse.bat and Dad2AS_DB2DWB.bat. Dad2AS_Eclipse.bat is customized for users who download Eclipse and related plug-ins directly, and Dad2AS_DB2DWB.bat is for users who have downloaded DB2 Developer Workbench. Note that DB2 Developer Workbench ships with all the required plug-ins.
Sample
A sample is also provided in the attached file in the Download section. Once you have unzipped the download, the sample can be found in the sample sub-directory. The sample has a DAD file, the original XML schema, and the converted annotated XML schema that was produced using this tool.
The DAD file is Order.dad.
The original XSD file is Order.xsd.
The annotated XML schema created using this tool is Order_AS.xsd.
Note that after the annotations were inserted into Order_AS.xsd, it was run through a pretty-printing XML tool for readability.
Limitations of the tool
- The tool does not add a db2-xdb:condition annotation to an element/attribute declaration for a condition specified on a non-root element in the DAD using the <condition> construct. Such annotations can either be added manually, or the source can be modified to add them.
Conclusion
Annotated XML schema decomposition is a new feature introduced in DB2 Viper. It has been written from scratch with new algorithms and a new mapping language, which together provide a highly flexible, efficient, and methodical way of decomposing XML documents. It allows XML schemas that differ significantly in shape from the target relational schema to be decomposed. Early performance comparisons already show that the new feature is much faster than XML Extender Decomposition. Given the performance benefits, flexibility, and predictability of results offered by this new feature, it is well worth migrating to annotated XML schema decomposition in DB2 Viper. The DAD to annotated XML schema converter tool helps with migration by converting the DAD to an equivalent annotated XML schema, making the task much easier.
Download
Resources
Learn
- developerWorks DB2 UDB page: Learn more about DB2 Viper.
- Information management and XML technology page: Find articles and resources for implementing XML technology with DB2.
- developerWorks Information Management zone: Find more resources for DB2 UDB developers and administrators.
- Stay current with developerWorks technical events and webcasts.
Get products and technologies
- Download DB2 Viper to test XML schema decomposition for yourself.
- Download the Eclipse SDK.
- Download Xerces-J.
http://www.ibm.com/developerworks/data/library/techarticle/dm-0604pradhan/
Transparent Scene background using objc_utils?
I'm trying to overlay a SceneView on top of a WebView. The goal is to have the WebView show mp4 video using HTML while the Scene in the SceneView renders game objects and processes touch events. (I got the mp4 and HTML part working.)
However, the Scene always has a solid background_color that blocks the WebView content, and I cannot make the Scene background transparent.
Here is a toy example:
from scene import *
import ui

class GameScene(Scene):
    def setup(self):
        self.background_color = None  # the background_color here only accepts RGB but not RGBA
        self.view.alpha = 0.2  # this changes the alpha of the entire SceneView, making both the background and the contents transparent
        sp = SpriteNode('emj:Christmas_Tree', anchor_point=(0, 0), position=(100, 100), parent=self)

w, h = ui.get_window_size()
frame = (0, 0, w, h)
v = ui.View(frame=frame)
webview = ui.WebView(frame=(w/4, 0, w/2, h))
webview.load_url('')
v.add_subview(webview)
gameview = SceneView()
gameview.scene = GameScene()
gameview.frame = (w/4, 0, w/2, h/2)
v.add_subview(gameview)
# overlay SceneView() on top of WebView()
gameview.bring_to_front()
v.present('full_screen')
Thanks!
I tried to use objc_util to change the background color, but have not figured out a solution. This is what I've tried so far:
from objc_util import ObjCInstance, ObjCClass
import objc_util

UIColor = ObjCClass('UIColor')
clear_color = UIColor.color(red=0.0, green=0.0, blue=0.0, alpha=0.0)
objv = ObjCInstance(self.view)
objv.backgroundColor = clear_color  # this makes the **View** background of the SceneView transparent, but does not alter the **Scene** background
v1 = objv.subviews()[0]  # a GLKView instance
v1.alpha = 0.2  # this makes both the background and the content of the scene transparent
v1.backgroundColor = clear_color  # this does not do anything
I need some help on how to set the background of SceneView() to be transparent, so that I can see the content of the WebView beneath it. Thanks!
I saw some threads on how to make the background transparent in Objective-C using glClearColor. But I'm not sure how to do it in Pythonista.
I've tried to help you but without success 😭, sorry
@cvp Thanks for trying! I'm stuck at the point where I couldn't find a way to access scene.background_color from ObjC.
A few thoughts, have not tried yet:
SpriteKit has a SKVideoNode, although this might be tricky to integrate into a SceneView. I'm not sure whether the underlying SKScene business is exposed.
Have you tried ObjCInstance(scene) to see if it returns an ObjC object? Or perhaps ObjCInstance(scene.view)? If so, then you should have access to background_color (though you might also need to set ObjCInstance(scene.view).allowsTransparency = True).
I've tried but still without success.
ObjCInstance(scene.view) has no attribute allowsTransparency.
I also tried to set its background color as ObjCClass('UIColor').clearColor().
Okay, played around with this a little.
From what I can tell, we may need to swizzle glkView_drawInRect_ to call glClearColor, though I'm not even sure that will work. You also have to set opaque=False on the SceneView and perhaps the GLKView... but again, since glClearColor is not called in drawInRect, I don't think that helps us.
Another approach, which does seem to work but will be super annoying, is to set the mask() on the ObjCInstance(sceneview).layer(), or perhaps ObjCInstance(sceneview).glkView().layer(). You would then create a CALayer positioned over your sprites. You'd have to keep repositioning the layers as the sprites move, which might not be reliable, and at least would take some math.
gist.github.com/144fba5549ece5df62550dd456272bf1
is a simple working example of a SceneView on top of a webview, with a mask layer. Touches work over the original scene size (note you can touch outside the tiny pythonista logo). I didn't try getting the mask size exactly right, or try repositioning from within the scene... we'll call that an exercise for the reader!
@JonB The scene module isn't based on SpriteKit at all, even though the API is very similar (and I originally wanted to use SpriteKit). It's all just OpenGL, basically.
@cvp @JonB Thanks a lot! I'll see if the mask layer is good enough for my purpose.
I'm trying to implement some GLKit similar to this:
@omz Any suggestion?
@Cethric has a whole set of bindings for doing low level opengl work.
Update: Now I add the webview as a background subview of the SceneView, via gameview.add_subview(webview) and webview.send_to_back(). As long as the GLKView in front has glClearColor(0, 0, 0, 0) in its glkView_drawInRect_() delegate method, the GLKView will be transparent.
However, as you can see from the following code, once I plug a new delegate into the GLKView of the SceneView, its connection to the Scene is also lost (I got the transparent background, but am no longer able to draw a SpriteNode on it).
from objc_util import *
from scene import *
import ui

glClearColor = c.glClearColor
glClearColor.restype = None
glClearColor.argtypes = [c_float, c_float, c_float, c_float]

glClear = c.glClear
glClear.restype = None
glClear.argtypes = [c_uint]

GL_COLOR_BUFFER_BIT = 0x00004000

def glkView_drawInRect_(_self, _cmd, view, rect):
    glClearColor(1, 0, 0, 0.3)
    glClear(GL_COLOR_BUFFER_BIT)

MyGLViewDelegate = create_objc_class('MyGLViewDelegate',
                                     methods=[glkView_drawInRect_],
                                     protocols=['GLKViewDelegate'])

class ChristmasScene(Scene):
    def setup(self):
        objv = ObjCInstance(self.view)
        delegate = MyGLViewDelegate.alloc().init()
        objv.glkView().setDelegate_(delegate)
        objv.glkView().setOpaque_(False)
        sp = SpriteNode('emj:Christmas_Tree', anchor_point=(0, 0), position=(500, 300), parent=self)

w, h = ui.get_window_size()
webview = ui.WebView(frame=(w/4, 0, w/2, h))
webview.load_url('')
gameview = SceneView()
gameview.scene = ChristmasScene()
gameview.add_subview(webview)
webview.send_to_back()
gameview.present('full_screen')
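As an aside, the glClearColor/glClear bindings above follow the standard ctypes pattern of declaring restype and argtypes before calling into a C library. The same pattern with a portable libc function (strlen stands in for glClearColor here; nothing OpenGL- or Pythonista-specific is assumed):

```python
import ctypes
import ctypes.util

# load the platform's C library (assumes a standard libc is available)
libc = ctypes.CDLL(ctypes.util.find_library('c'))

# declare the signature before calling, exactly as done for glClearColor above
strlen = libc.strlen
strlen.restype = ctypes.c_size_t
strlen.argtypes = [ctypes.c_char_p]

print(strlen(b'hello'))  # 5
```

Declaring argtypes up front lets ctypes convert and validate arguments instead of guessing, which is why the OpenGL bindings above do the same for their float and uint parameters.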
Just figured out the solution!
It's very simple. Just call glClearColor(0, 0, 0, 0) and glClear(GL_COLOR_BUFFER_BIT) at every frame in draw():
class ChristmasScene(Scene):
    def setup(self):
        objv = ObjCInstance(self.view)
        objv.glkView().setOpaque_(False)
        sp = SpriteNode('emj:Christmas_Tree', anchor_point=(0, 0), position=(500, 300), parent=self)

    def draw(self):
        glClearColor(0, 0, 0, 0)
        glClear(GL_COLOR_BUFFER_BIT)

# The rest of the code is the same as in the most recent reply. No need to set up a separate delegate.
For this to work, the WebView has to be a subview of the SceneView. Adding the WebView and the SceneView as subviews of the same superview does not work well. Hope this doesn't break other drawing steps. Thanks again @JonB and @cvp for the help! Previous code examples from @omz and @Cethric also helped a lot for me to understand how things work.
https://forum.omz-software.com/topic/3926/transparent-scene-background-using-objc_utils
Python alternatives for PHP functions
def addslashes(s):
d = {'"':'\\"', "'":"\\'", "\0":"\\\0", "\\":"\\\\"}
return ''.join(d.get(c, c) for c in s)
s = "John 'Johny' Doe (a.k.a. \"Super Joe\")\\\0"
print s
print addslashes(s)
#John 'Johny' Doe (a.k.a. "Super Joe")\
#John \'Johny\' Doe (a.k.a. \"Super Joe\")\\\
def addslashes(s):
l = ["\\", '"', "'", "\0", ]
for i in l:
if i in s:
s = s.replace(i, '\\'+i)
return s
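The order of that list matters: the backslash must be escaped first, otherwise the backslashes added for the other characters would themselves get escaped. A quick sketch (the function names here are mine) showing the difference:

```python
def addslashes_backslash_last(s):
    # wrong order: the backslash added for the quote gets escaped again afterwards
    for ch in ['"', "'", "\0", "\\"]:
        s = s.replace(ch, '\\' + ch)
    return s

def addslashes_backslash_first(s):
    # correct order: escape pre-existing backslashes before adding new ones
    for ch in ["\\", '"', "'", "\0"]:
        s = s.replace(ch, '\\' + ch)
    return s

print(addslashes_backslash_last("'"))   # \\'  (double-escaped)
print(addslashes_backslash_first("'"))  # \'
```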
import time
s = "John 'Johny' Doe (a.k.a. \"Super Joe\")\\\0" * 1000
def addslashes(s):
t1 = time.time()
d = {'"':'\\"', "'":"\\'", "\0":"\\\0", "\\":"\\\\"}
s = ''.join(d.get(c, c) for c in s)
print time.time() - t1
def addslashes1(s):
t1 = time.time()
l = ["\\", '"', "'", "\0", ]
for i in l:
if i in s:
s = s.replace(i, '\\'+i)
print time.time() - t1
addslashes(s)   # prints 0.158378839493
addslashes1(s)  # prints 0.000412940979004
(PHP 4, PHP 5)
addslashes — Quote string with slashes
Returns a string with backslashes before characters that need to be quoted in database queries etc. These characters are single quote ('), double quote ("), backslash (\) and NUL (the NULL byte).
An example use of addslashes() is when you're entering data into a database. For example, to insert the name O'reilly into a database, you will need to escape it. Most databases do this with a \, which would mean O\'reilly. This is only to get the data into the database; the extra \ will not be inserted.
Having the PHP directive magic_quotes_sybase set to on will mean ' is instead escaped with another '.
The PHP directive magic_quotes_gpc is on by default, and it essentially runs addslashes() on all GET, POST, and COOKIE data. Do not use addslashes() on strings that have already been escaped with magic_quotes_gpc, as you'll then do double escaping. The function get_magic_quotes_gpc() may come in handy for checking this.
The string to be escaped.
Returns the escaped string.
Example #1 An addslashes() example
<?php
$str = "Is your name O'reilly?";
// Outputs: Is your name O\'reilly?
echo addslashes($str);
?>
http://www.php2python.com/wiki/function.addslashes/
Amareshwari Sriramadasu commented on MAPREDUCE-2765:
----------------------------------------------------
Patch is very close. Some comments on code:
CopyMapper:
bq. String workDir = conf.get("mapred.local.dir") + "/work";
Please use the String constant defined for this config. That would ease any change in the
config name, deprecation, etc.
Do you want to add ssl.client* config names to DistCp constants?
DistCpConstants:
bq. public static final String CONF_LABEL_NUM_MAPS = "mapred.map.tasks";
Please use the string constant defined for this config. This is a deprecated config and will
soon be removed.
bq. public static final String CONF_LABEL_TOTAL_BYTES_TO_BE_COPIED = "mapred.total.bytes.expected";
bq. public static final String CONF_LABEL_TOTAL_NUMBER_OF_RECORDS = "mapred.number.of.records";
Shall we change these names to start with distcp.?
bq. All public classes and public methods need javadoc
Can you check this once again? For ex. DistCp.java does not have javadoc
DistCp:
bq. String userChosenName = getConf().get("mapred.job.name");
bq. job.getConfiguration().set("mapred.map.tasks.speculative.execution", "false");
Please use the String constant defined for this config. That would ease any change in the
config name, deprecation, etc.
bq. A hack to get the DistCp job id. New Job API doesn't return the job id
This is no more required. MAPREDUCE-118 is resolved now.
CopyCommitter:
bq. //Skip the root folder, preserve the status after atomic commit is complete
Do you want to change this comment to reflect why we are skipping root folder?
bq. Deleting attempt temp files happens in each attempt. Why are we doing delete again in
Committer? Committer should just delete the work path.
bq. The methods deleteMissing(), preserveFileAttributes() etc need more doc.
bq. DynamicInputFormat creates FileSplits with zero length. Instead should it be created with
the size of chunk as the size of the split.
I don't see any change for the above from the previous patch. Are you planning to do them?
Also, please run findbugs, javadoc and releaseaudit and make sure there are no warnings.
--
This message is automatically generated by JIRA.
For more information on JIRA, see:
http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-issues/201108.mbox/%3C415202926.4363.1314077131688.JavaMail.tomcat@hel.zones.apache.org%3E
React useEffect hook with code examples
So you learned how to use React useState and how to emulate the traditional setState in a functional React component.
But now you might be scratching your head and asking yourself, “How do I emulate a React component lifecycle?”
Don’t worry; in this article we will go over:
- What is React useEffect
- How to use React useEffect
- How to replicate the componentDidMount lifecycle
- How to replicate the componentWillUnmount lifecycle
- How to create self-sufficient code for the componentDidUpdate lifecycle
So if you’re not familiar with how to use it, read a previous article that covers it: Introduction to React hooks.
Come back when you’re ready!
What is React useEffect?
React useEffect is a function that gets executed for 3 different React component lifecycles: componentDidMount, componentDidUpdate, and componentWillUnmount.
Basic usage of useEffect
import React, { useState } from 'react';

const App = () => {
  const [message, setMessage] = useState('Hi there, how are you?');
  return <h1>{message}</h1>;
};

export default App;
In the code example above, I’m using React useState to save a message. I’m then grabbing my state variable, message, and printing it to the screen.
But now I want to use useEffect to change the message a second after the component has mounted.
import React, { useState, useEffect } from 'react';

const App = () => {
  const [message, setMessage] = useState('Hi there, how are you?');

  useEffect(() => {
    console.log('trigger use effect hook');
    setTimeout(() => {
      setMessage("I'm fine, thanks for asking.");
    }, 1000);
  });

  return <h1>{message}</h1>;
};

export default App;
I’ve imported useEffect from the React library and added my effect under the useState line.
useEffect accepts a function as its first argument. This function handler takes care of any side effects you like; it is a callback that runs after one of the React component lifecycles has been triggered.
It worked! But there’s a problem. Take a look at the console log: the effect got triggered twice.
This behavior is not optimal, because if you have multiple effects or have new prop values being tossed from a parent component, it may trigger the effect multiple times.
This may cause inconsistency, weird side effects, or freeze your app in an infinite loop.
Let’s optimize this code a little bit more.
React useEffect: The componentDidMount hook
The goal now is to execute the setMessage function only on the componentDidMount lifecycle. That way, if the App component receives any new props or if the state changes, the effect won’t be triggered again.
import React, { useState, useEffect } from 'react';

const App = () => {
  const [message, setMessage] = useState('Hi there, how are you?');

  useEffect(() => {
    console.log('trigger use effect hook');
    setTimeout(() => {
      setMessage("I'm fine, thanks for asking.");
    }, 1000);
  }, []);

  return <h1>{message}</h1>;
};

export default App;
Can you see the difference from the old example?
If you haven’t caught it yet, I added an empty array ([]) as a second argument to the useEffect hook function.
If you take a look at the console log, it only shows “trigger use effect hook” once.
Here’s an example output with another console log message that says, “App component rendered,” after the effect function.
The second console message should only execute when the render lifecycle gets triggered.
If we take a look at the console log again, we can see the order of the lifecycles that it went through.
- Rendered lifecycle
- componentDidMount lifecycle
- Rendered lifecycle
This is the normal behavior that you would see in a traditional React class component.
By passing an empty array ([]) as a second argument, you’re letting React know that your useEffect function doesn’t depend on any values from props or state.
This will help you avoid triggering the effect on the componentDidUpdate lifecycle.
React useEffect: The componentWillUnmount hook
I showed an example of how to avoid a trigger from the componentDidUpdate lifecycle with useEffect.
But what if you have code that needs to get cleaned up on a componentWillUnmount cycle? How do we replicate the componentWillUnmount lifecycle?
P.S. this lifecycle is also known as the cleanup in a React hook function.
In the next example I will demonstrate a use case where you’ll need to clean up your code when a component unmounts:

const WindowWidthSize = () => {
  const [windowWidthSize, setWindowWidthSize] = useState(window.innerWidth);
  const handleResize = () => setWindowWidthSize(window.innerWidth);

  useEffect(() => {
    window.addEventListener('resize', handleResize);
  }, []);

  return (
    <h1>
      The window size {windowWidthSize} pixels
    </h1>
  )
};
Above, I have created a new function component called WindowWidthSize. The objective of this component is to print out the width of the current window.
As you can see, I’m using useState to keep track of the width, and right below it I’m adding a window event listener for the resize event. So every time the user resizes the browser, it gets the new width, saves it into state, and prints out the new width.
Okay, so here we have another React component that uses the WindowWidthSize component, and it has a magical button. When a user clicks this magical button, the WindowWidthSize component will vanish before your eyes.
Great, the code works. And if you look at the bottom right, there is a blue highlighted section called Window.
The blue highlight shows the window event listener I have added.
When I click on the magical button, the WindowWidthSize component will no longer exist. But there’s a small problem: the window event listener is still lingering around.
Situations like these are bad because this is what causes memory leaks in your app.
And you don’t want to be greedy with your customers’ limited bandwidth.
Let’s use the cleanup technique that useEffect gives us to get rid of this lingering window event:

const WindowWidthSize = () => {
  const [windowWidthSize, setWindowWidthSize] = useState(window.innerWidth);
  const handleResize = () => setWindowWidthSize(window.innerWidth);

  useEffect(() => {
    window.addEventListener('resize', handleResize);
    return () => window.removeEventListener('resize', handleResize);
  }, []);

  return (
    <h1>
      The window size {windowWidthSize} pixels
    </h1>
  )
};
I added a return function inside the useEffect function, and inside that return function I’m removing the event listener that I originally added.
When a functional component unmounts, the logic inside the returned function will get executed.
So remember to clean up your code if necessary by returning a function inside the useEffect function.
React useEffect: The componentDidUpdate hook
By default, useEffect will trigger any time an update happens to the React component. This means that if the component receives new props from its parent, or even when you change the state locally, the effect will run again.
In case you need the effect to trigger on the componentDidUpdate lifecycle, you want to make it as self-sufficient as possible. If you don’t, you may run into an infinite loop of updates and just run your computer hot.
const Counter = () => {
  const [counter, setCounter] = React.useState(0);

  React.useEffect(() => {
    setCounter(c => c + 1);
  }, []);

  return (
    <div style={{textAlign: 'center'}}>
      <h1>Counter: {counter}</h1>
    </div>
  );
};
Right above is a simple counter app. All it does is print a number to the user.
P.S. useState also accepts functions as an argument. This might be a better choice if you want a more accurate representation of the previous data.
I want this counter to increase on every second.
const Counter = () => {
  const [counter, setCounter] = React.useState(0);

  React.useEffect(() => {
    const s = setInterval(() => {
      setCounter(c => c + 1);
    }, 1000);
  }, []);

  return (
    <div style={{textAlign: 'center'}}>
      <h1>Counter: {counter}</h1>
    </div>
  );
};
This is good, but not good enough!
If you have multiple setState calls inside this component, or it receives new props, this can throw the useEffect hook off track, causing your app to fall out of sync or just freeze.
const Counter = () => {
  const [counter, setCounter] = React.useState(0);

  React.useEffect(() => {
    const s = setInterval(() => {
      setCounter(c => c + 1);
    }, 1000);
    return () => clearInterval(s);
  }, [counter]);

  return (
    <div style={{textAlign: 'center'}}>
      <h1>Counter: {counter}</h1>
    </div>
  );
};
Now it’s more self-sufficient.
One, I’ve added a cleanup function to clear the interval whenever the component unmounts. Two, I’ve added the counter variable inside the array in the second argument of the useEffect function.
This tells React to only trigger the effect when counter is a different value.
If counter has not changed in value, the effect won’t execute.
This is helpful because you can safely add multiple useEffects and setStates, or even pass down new prop values, and it won’t desynchronize your counter component.

The same idea applies to fetching data inside an effect; define an async function and call it from useEffect:

async function fetchCats() {
  try {
    const response = await fetch('url/to/cat/api');
    const { cats } = await response.json();
    console.log(cats) // [ { name: 'Mr. Whiskers' }, ...cats ]
  } catch(e) {
    console.error(e);
  }
}

React.useEffect(() => {
  fetchCats();
}, []);
Conclusion
React.useEffect is a basic hook that gets triggered on a combination of 3 React component lifecycles:
- componentDidMount
- componentDidUpdate
- componentWillUnmount
If you’re planning to use React hooks, you must know how to execute your effects at the right time; otherwise you might run into problems for your users.
I like to tweet about React and post helpful code snippets. Follow me there if you would like some too!
https://linguinecode.com/post/getting-started-with-react-useeffect
In order to best use the tool, I needed to know how to write vectorized code. It turns out that is not easy (I am taking a course on machine learning that uses Octave, which also emphasizes vectorized solutions, which drives me crazy), and I didn't want to go down the Cython path just yet. Therefore I cheated by splitting the data into chunks, to be processed in different processes.
So for example, if I want to create a new column based on whatever I have in each row, I would normally call the .apply method and (im)patiently wait for the thing to complete (at this stage iterating through 2.5 million rows).
dataframe.apply(foo, axis=1)
So I cheated by splitting the data into chunks and processing them concurrently, as follows:
from concurrent.futures import ProcessPoolExecutor
from itertools import chain

with ProcessPoolExecutor(max_workers=njobs) as pool:
    jobs = []
    for i in range(0, data.shape[0], batch_size):
        max_size = min(data.shape[0], i + batch_size)
        jobs.append(pool.submit(data[i:max_size].apply, foo, axis=1))
    result = list(chain.from_iterable(job.result() for job in jobs))
I was also asked to keep track of the changes, which sometimes means returning two columns. So what I did was ensure foo returns a list/tuple of n items, and reconstruct the result as follows
def foo(row):
    return ('lorem', 'ipsum')

# the code above...
new_result = pandas.DataFrame(result, index=data.index, columns=('result', 'meta'))
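The whole chunk-and-submit pattern can be exercised with nothing but the standard library; here is a minimal runnable sketch, with plain lists standing in for the DataFrame and a made-up foo:

```python
# Sketch of the chunk-and-submit pattern above, using a plain list instead of
# a DataFrame so it needs nothing beyond the standard library.
from concurrent.futures import ProcessPoolExecutor
from itertools import chain

def foo(chunk):
    # stand-in for DataFrame.apply over one chunk of rows:
    # return a (value, derived) tuple per row
    return [(x, x * x) for x in chunk]

def parallel_apply(data, batch_size=3, njobs=2):
    with ProcessPoolExecutor(max_workers=njobs) as pool:
        # submit one job per chunk; job order preserves row order
        jobs = [pool.submit(foo, data[i:i + batch_size])
                for i in range(0, len(data), batch_size)]
        return list(chain.from_iterable(job.result() for job in jobs))

if __name__ == '__main__':
    print(parallel_apply(list(range(10))))
```

Because results are collected in submission order, the flattened output lines up with the original row order, which is what makes the DataFrame reconstruction with `index=data.index` work.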
Sometimes you need to pass other stuff to the processing function, but a lambda is out of the question, as Python refuses to pickle lambdas. This is when partial becomes useful (had I known this earlier it would have saved me from all the lambdas). So instead of doing this
def foo(row, something):
    return '{}'.format(something)

dataframe.apply(lambda row: foo(row, 'i am cute'), axis=1)
use partial instead
from functools import partial

dataframe.apply(partial(foo, something='i am cute'), axis=1)
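The pickling distinction is easy to verify directly: ProcessPoolExecutor has to pickle the callable it ships to worker processes, lambdas are not picklable, and a partial over a module-level function is. A small demonstration (foo here is a stand-in for the real row function):

```python
# Why partial works where a lambda fails with ProcessPoolExecutor:
# lambdas cannot be pickled; partials over module-level functions can.
import pickle
from functools import partial

def foo(row, something):
    return '{} {}'.format(row, something)

try:
    pickle.dumps(lambda row: foo(row, 'i am cute'))
    print('lambda pickled')          # never reached
except Exception:
    print('lambda failed to pickle')

# A partial over a module-level function round-trips through pickle fine.
bound = partial(foo, something='i am cute')
restored = pickle.loads(pickle.dumps(bound))
print(restored('row1'))              # row1 i am cute
```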
Besides dealing with pandas, I also spent quite some time wrestling with scikit-learn (and even submitted my first ever pull request to a public project, which was eventually merged, w00t): from vectorizing raw data, to building classifiers, to querying. So there was this time I wanted the sparse matrix returned by the vectorizer (and the whole set of transformers) and a pandas Series to form a new DataFrame, mainly for indexing convenience, so I did this
dataframe = pandas.DataFrame(
    zip(list(sparse_matrix_by_vectorizer), some_series.values.tolist()),
    index=some_series.index)
However, if a large matrix is used, it can easily OOM the server. So I usually do this in conjunction with the chunking code above.
Speaking of memory usage, this combo is not really friendly. I can’t count how many times I OOMed the development server (and got important processes killed by the OOM killer). Plus the performance of some of the components is just really bad. I still can’t get my collection, which has 58,000 dimensions, through TruncatedSVD down to 500 dimensions. Brute-force kneighbors queries are REALLY slow, and I settled on LSHForest (not sure how it impacts accuracy for now). And with the LSHForest index, the pickled file is even larger than the pickled tf-idf matrix itself.
Over the past few days I discovered gensim, blaze and spotify annoy, which claim to help with the memory usage problem, search speed, etc. Hopefully I will have time to put them into the code.
https://cslai.coolsilon.com/2015/12/04/random-notes-on-pandas-and-scikit-learn/
A Strategy for Defining Order Relations
Last week, I proposed a comparison function for pairs of integers and asked whether it met the requirements for C++ order relations:
struct Thing { int a, b; };

bool compare(const Thing& t1, const Thing& t2) {
    return t1.a < t2.a && t1.b < t2.b;
}
As one reader was kind enough to point out, it doesn't work because it's not transitive. For example, take three objects t1(a1, b1), t2(a2, b2), and t3(a3, b3) with

a1 < a2, b1 > b2;
a1 < a3, b1 > b3;
a2 < a3, b2 < b3.

Then t1 and t2 are unordered, t1 and t3 are unordered, but t2 < t3.
This reader proposed an interesting alternative:
bool compare(const Thing& t1, const Thing& t2) {
    return (t1.a * t1.a + t1.b * t1.b) < (t2.a * t2.a + t2.b * t2.b);
}
In effect, this comparison function is a generalization of the one I proposed two weeks ago :
bool compare(int m, int n) {
    return abs(m) < abs(n);
}
Both of these functions implement a notion of magnitude, which we can think of as the distance between a given value and a zero point. In two dimensions, the distance between a point with coordinates (x, y) and zero is the square root of x² + y². The two-dimensional version of the comparison function compares the sums of the squares directly instead of comparing their square roots.
Both of these functions are flawed because their computations might overflow. Aside from those flaws, however, these functions implement a generally useful strategy for defining order relations: If you want to compare two complicated objects, the strategy is to define a function that computes a single number from each object and then compares those numbers.
As long as equal objects yield equal numbers, this strategy clearly results in a valid order relation, because it is defined in terms of comparing numbers. However, there is no guarantee that the resulting order relation will be useful or even comprehensible. For example, the mapping from objects to numbers might be a cryptographic hash. In that case, the ordering would be somewhere between very difficult and impossible to understand. It would still be a valid ordering, but understanding it would be as hard as cracking the hash code.
For an order relation based on a mapping to numbers to be intuitively useful, then, the number that comes from each object needs to be related to the object in a useful way. For that matter, we don't even need to use numbers; what we need is a strategy for finding a useful mapping.
One such strategy might be to take each element of an object that is being compared, convert it somehow to a string, and concatenate all those strings to obtain a single string that describes the entire object. Comparing objects then turns into comparing strings.
This observation suggests that there is a natural connection between comparing objects with multiple components and comparing strings. That connection leads us to the notion of dictionary order, which we will describe next week.
http://www.drdobbs.com/cpp/a-strategy-for-defining-order-relations/240147625?cid=SBX_ddj_related_commentary_default_the_double_metaphone_search_algorithm&itc=SBX_ddj_related_commentary_default_the_double_metaphone_search_algorithm
This is why Iridium (satellite phones) was killed. (Score:1)
Satellite phones are the ultimate stealth device. The best they could do is trace you to the western hemisphere. Iridium didn't fail. It was killed by a collusion of law-enforcement agencies and world govt's that still have monopolies on their telcos. They saw Iridium as an "illegal circumvention device" to get around local monopolies. And law enforcement saw the potential for hiding from any trace.
This method of security has been around for years (Score:2)
Re:Ahh, I get it... (Score:1)
As someone else kindly pointed out, this is the same as frequency hopping in radio communications. You change frequency so often that an observing third party can't pick out the whole conversation. In fact (in the radio world), your communications pretty much just look like noise. This requires a fast processor, though, to implement properly.
Now, I admit, I haven't read the article, so this is a mostly uninformed point of view. However, if they plan to implement it anything like frequency hopping, I'm betting that the idea here is that this is for communication between two hosts that are aware of each other. They agree upon some IP hopping pattern at the beginning of the communication or even before the communication begins, or figure out what the pattern is based on some set of acceptable patterns or, well, you get the idea. From that point on, the receiver listens for communications at the IPs that fall in the agreed-upon pattern. Voila, you are suddenly able to pick out the GOOD traffic. In order for it to be really secure, you need to be using IP hopping in both directions.
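The "agreed-upon pattern" idea from the comment above can be sketched in a few lines: both endpoints seed the same PRNG with a shared secret, so each can compute the other's next address without ever transmitting it. This is purely illustrative, not a real protocol; the pool and the secret are made up:

```python
# Toy sketch of shared-seed IP hopping: both sides derive the same schedule
# from a shared secret, so the schedule never crosses the wire.
import random

POOL = ['10.0.0.%d' % i for i in range(1, 255)]  # hypothetical address pool

def hop_sequence(shared_secret, n):
    rng = random.Random(shared_secret)  # deterministic for a given secret
    return [rng.choice(POOL) for _ in range(n)]

sender = hop_sequence('shared-secret', 5)
receiver = hop_sequence('shared-secret', 5)
assert sender == receiver  # identical schedules, computed independently
print(receiver)
```

The obvious weakness, raised elsewhere in this thread, is that the scheme is only as strong as the generator: anyone who recovers the seed, or can predict the PRNG from observed addresses, recovers the whole schedule.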
Re:...or you could use a real service. (Score:1)
...or you could use a real service. (Score:2)
Re:...or you could use a real service. (Score:2)
Mobile IP (Score:2)
Nothing new. . . (Score:1)
Re:Nothing new. . . (Score:1)
the specifications of how this has being done
should be
the specifications for how this has been done
It's tough writing comments on the run!
Re:Hoax? (Score:1)
Similar thing (Score:1)
Even though this technology is a bit different I don't see it being that great for security. Inside a corporate network this would be fine but on the internet this would be useless.
- If my IP address rotated all of the time how do my packets get back to me?
- Do these guys have to be my ISP? Wouldn't that mean that they would act as a firewall/router to the outside world?
I think I'll stick with encryption.
Re:Similar thing (Score:1)
It was a small hydro company. At least someone there was a little pro-active about security.
Re:what about MAC address? (Score:2)
Yes, but this changes with every hop of the packet. The initial MAC ID is from your computer, and is the MAC ID of your NIC.
Once this packet hits the first router, it forwards the packet and it now contains the MAC ID of the router's NIC.
The only time tracking MAC IDs is useful is if you are on a broadcast LAN, like ethernet w/ dumb hubs, and you can sniff traffic. Otherwise, it's all the routers'/switches' MACs...
Re:IP Spread Spectrum (Score:2)
there goes tcpwrappers (Score:2)
Do I have to prompt for a kerberos session every time the IP changes during a session? How easy would it be to hijack the session by fooling the stack into thinking that it legitimately changed to the attacker's IP? How easy would it be to DoS via spoofing parts of the protocol?
Frequency hopping radios are nifty, but we're not talking about beaming light. IP is much more complicated, and has more weak points.
IP V6 Sooner than Later (Score:1)
havn't they ever heard of encryption?
sheesh
KGB Expert my eye! (Score:1)
I do hope... (Score:1)
Whilst it is impractical/impossible to lease IP addresses through DHCP multiple times per minute, it does sound as though it is a sort of faster DHCP.
I believe cable companies advertise the fact they use DHCP as a means of protecting the user from attacks in just this manner; of course the other reason is they don't want any pesky 'ol servers set up on their network....
If this uses a lot of IP addresses.... (Score:1)
Just a variation on spread spectrum (Score:2)
If you wanted to get fancy you could simultaneously assign several IPs and spread the packets amongst them (as well as periodically changing the IPs), to really confuse someone doing traffic analysis.
Re:Police Radios (Score:1)
I've been trained on the SINCGARS model radio in the Army. It can jump between 100 frequencies per second and you can supposedly still make out what is being said, even if 30% of the hopset is being jammed.
Let's clear up the obscurity... (Score:2)
This is not about securing a system, it's about making it harder to find, period, as you said.
A grain of salt (Score:1)
From the 'gotcha' front, I wonder how they addressed the problem of allowable IP addresses to jump between. I would think you would always be limited to the subdomain routed by the host you connect to. If you are connecting from and to huge domains, it's not as bad, but if there is very little traffic in or out, it would be easy to reassemble your session simply by ignoring the IPs and capturing everything. Either way, you are relying on obscurity to provide security.
Although I wouldn't call it a bad idea, I don't think this qualifies as a good one.
-Trevor
Robert Hanssen wanted to work here. (Score:1)
Invicta doesn't appear to have a website. Maybe because they don't have an IP address for search engines to crawl? How would that work, anyway? If it switches addresses all the time, how do you keep a connection open?
Re:Not a troll... (Score:1)
Yep, get Windows 2000. It can change IP's, DNS servers, and more without rebooting.
sounds like IP spoofing (Score:2)
Maybe they have a lot of destination addresses too, but somewhere, somehow this has to be routed to the receiving end. Of course, it could be the central server, but then that would be nothing but a router.
Of course, one could also split an encrypted file/text in blocks, and send those in a particular order to/from a number of IP addresses. Kind of like a key. But that would be a pointless exercise: from 8 IPs to 8 IPs would be equivalent to 6 bits of extra keyspace (2^6 possibilities). It would just be a little harder to get all of the traffic.
----------------------------------------------
Multiproxy (Score:1)
Re:Security through Vapor? (Score:2)
You'll basically have an SLA ID (Site-Level Aggregation Identifier) of 16 bits and an Interface ID of 64 bits. How can any company need more than this? Well, for starters, every company I know of over 1000 employees has many service providers for different divisions, acquired companies, failover, etc. Since those high 48 bits are used to identify unicast addressing and an ISP, you will have to have multiple SLA ID blocks....
When I posted, I thought the Interface ID was only 32 bits, so this is a much better situation. Certainly in a world where people allocate addresses as efficiently as we did in the early days of IPv4, we need not worry.
I give IPv6 unicast address space 10 years (5 more than my previous estimate) before we run out, and have to start chopping up the IPX space to give out....
Re:Security through Vapor? (Score:2)
The number was not 1 billion in the article, it was "billions", so your comment, "Now as we all know, 32 bits is roughly 1 billion
No, the space given to companies is very generous (vastly more so than with v4) but if companies start planning based on using 32 bits per unique device, we won't last very long....
Re:Security through Vapor? (Score:2)
Assertion of fact 1: There are 64+16 bits of address available per ISP customer entity in IPv6. How the first 16 are managed is still slightly up in the air, and may not be available to the customer to directly manipulate.
Assertion of fact 2: The article suggested that using "billions" of IP addresses per device would soon be reasonable. Because of "increases in cyberspace".
Assertion of fact 3: Most medium-to-large companies will (conservatively) use 8-16 bits on subnetting, regardless of their actual need. How do I know this? Every such company I've interacted with ALREADY uses that much space in private addressing, and every one of them that I've spoken to plans to allocate IPv6 space to all of their private addresses, even if they're non-routable. This fact is based on the speculation that they will follow through with their plans, and that I've seen a representative sample.
Extrapolation/Speculation 1: If companies start thinking in terms of using 32 bits of address for a single device (64 TIMES the normal allocation per device), you'll start seeing more abuses ballooning out from there (I cite a major backbone provider that currently uses two
Extrapolation/Speculation 2: Given about 16 bits of subnetting space left over for your average large company on day one and the above speculation, I expect that to get used up in about 5-10 years. Why? Well, for one 5-10 years is the span of time that it took to go from "class B addresses are being restricted" to "we're breaking up class As to avoid an IP address crisis" in the 32-bit address space. Also, in the next 5-10 years, I expect to see 1) every household in the US and other major nations become IPv6 address space consumers 2) easily an order of magnitude more multi-home companies 3) massive need for routable IPs in public places on wireless LANs. Take the coffee shop in Mountain View (Dana St Roasting Company) as an example. Such a place will need to allocate 128 IPs even if their peak crowd of 128 users all have IPs in every other public place that they use the network.
5 years was never a hard number in my original message, and when I found out that the allocation was 64 not 32 bits per customer, I backed off to "5-10" years, but there's no argument that before that article showed up 128-bit addresses seemed like a whole hell of a lot more network, and the end of IPv6 address space may have just become visible on the horizon....
Then again, I thought that IPv4 addresses were too limited back in '89 and admitted that I was wrong in '91.... It's a matter of perspective and experience that makes us able to critique the past so clearly; I doubt that all of what I've said here will be certain.
Re:Security through Vapor? (Score:3)
Company X has 100 applications that require a VPN (say, 100 data feed vendors). So, they do the usual IP address math that big companies do (round up to the nearest obscene order of magnitude). So, a billion addresses per application is roughly 32 bits.
Now, I need about 100 of those, but clearly growth is a concern, so let's say I need about 8 bits worth.
Ok, so before that company even gets off the ground, before they even start deploying IPv6 on their servers, desktops, etc., they're using 256 COMPANIES' worth of standard IPv6 allocation. If every company does this (and of course, this is a conservative example), we're talking about a gold rush on IPv6 addresses that would exhaust the non-reserved addresses trivially in the first 5 years.
Let's not be hasty, though, let's assume that we can multiplex these puppies. So, one device might be able to handle multiple servers and clients and rotate the IPs correctly using one IP space. Cool, so for each server-side device IBM buys, only one company's worth of v6 allocation need be used. That should give us another couple of years of life on the namespace.
All things considered, this is a very bad idea. Rotating through 20 addresses to confuse the issue can add some difficulty for crackers, but using "billions" of addresses will add you to my "rude Internet citizens" list.
Security through Vapor? (Score:5)
How about an Anonymous IP Router? (Score:1)
This would be less centralized and offer anonymity, in addition to making it hard to trace the connection.
Re:SecureID (Score:1)
bitch to set up but it works great, especially with kerberos... and it works with unixes and mainframes too. had an old universe mainframe install running it once.
*shudder* (Score:2)
------
Re:...or you could use a real service. (Score:1)
Imbed data in ip address? (Score:1)
Re:Security through Vapor? (Score:1)
I am not a security expert, but this idea doesn't strike me as very useful or secure. Maybe somebody with more knowledge of the Internet than talking about "cyber addresses" could add some ideas.
Working with MS networking.... (Score:3)
security through obscurity (Score:4)
All this sounds like is a time based routing mechanism nothing more, and I don't really see how changing the IP address is going to save a misconfigured machine. For one, somewhere down the line the address is going to delegate out, so if say someone is browsing via 10.10.1.16 and they're browsing say something on my server and my logs show:
198.81.129.14
"" "Mozilla/4.0 (compatible; MSIE 5.5;Windows NT 5.0)"
Then about one second later
198.81.129.193 "" "Mozilla/4.0 (compatible; MSIE 5.5;Windows NT 5.0)"
Now this is typically another visitor or whatever, but if the connections were repetitive enough, with the same browser fingerprint coming through, I could probably correlate them both together by their netblocks, depending on who owned the block. So unless they plan on purchasing completely obsolete netblocks, like say 198.81.129.0-255 then 198.83.0.0-255, then how do they expect to stay obscured from view? Keep in mind that there are hardly any complete netblocks to purchase in that fashion (a class A is close to impossible), so what are they really planning on doing?
Now if they partnered with ISP's to snag dhcp addresses not being used from a wide variety of places, say Earthlink here, MomandPopISP there, then it'd be a plus for them however simple traceroutes, and block lookups can give you their information. (who owns the block etc)
All it sounds like is a sort of DHCP round-robin routing setup, which is still not going to save them; if someone is really intent on getting access to their networks, they'd run out of address ranges before their scheme would work.
Now on the spook/snoop side of things... I say TMTOWTPGPSAM! (There's More Than One Way To PGP Sign A Message) to keep info from eyes other than the intended recipient.
Re:Ahh, I get it... (Score:1)
Good question!..
This isn't like hopping ports, which theoretically might be able to do the same thing.. You're hopping IP's which means that the DNS server needs to know where the hell you are. If the DNS server knows where you are, then what's stopping me from querying the DNS server to find you?? I like the idea of hopping ports every few seconds, and firing encrypted packets that contain the next port, or a random seed included with the original request that defines the jump sequence...
It's heavy in fluff, no substance (Score:2)
--Mike--
Re:Security through Vapor? (Score:2)
Unconquered? Same planet, same people, doh. (Score:2)
Closed source network hardware + Promiscuity between security layers = Lower security
So this is the latest "unbreakable" huh? I'm sure nobody at the NSA, CIA, or KGB wants to know what's in those networks too. Cute.
How do you know this isn't just opening a big fat vpn tunnel right into your company so other people can look at your network? Cuts both ways.
Oh, check out -- Looks like they haven't bothered to buy up their domain for a whole year. That's confidence I suppose.. Guess there's no site to have taken down.
Another story from a year ago here [inet-one.com].
I haven't seen anything except untechnical fluff articles and only a couple over a year. The idea of a Russian guy calling his system Latin for "Unconquered" isn't slick, it's dumb. You just need someone at their physical location, something he should know about. What idiot will trust him to install the thing?
Are we *sure* he's stopped working for KGB/CIA? (Score:2)
Re:IP V6 Sooner than Later (Score:1)
Frequency hopping, random thoughts etc. (Score:2)
The concept of frequency hopping [hedylamarr.at] was invented by Hedy Lamarr [hedylamarr.at] in the 1930's. It is currently being used in several countries as a secure way of sending military orders.
The advantage of frequency hopping over IP hopping seems to be that it's (probably) harder to predict frequencies than it is to predict IP addresses. No doubt they will/have figure/d out how to allocate a large enough IP space to make a fairly secure transmission and how to sync the sender and receiver.
(...and what to do about the unused IP's... hmmm... You only really need one big pool of IP addresses for a set of computers, don't you? Then it's just a matter of juggling the IP's around and making sure every computer in the set knows what IP they themselves and their respective communication partner have at any moment... The more computers that are communicating over the pool of IP's, the more secure the channel is.)
And now, let's all repeat the mantra of the day: Computers do what we tell them to do. Thus no computer system will ever be completely secure.
Re:Very cool idea.... (Score:1)
Sounds like obscurity to me.
Also I doubt they'll be able to apply this to servers. All the crap I had to go through when my IP address changed - notifying all my clients, changing zone files, updating WHOIS info,
...
Now imagine doing this every 5 minutes when your IP changes.
save some money... (Score:2)
just use dyndns.org [dyndns.org] yourself!
Re:Sequencing, randomness, etc ... (Score:1)
Or charity work for unemployed Russian tech workers
...
Sequencing, randomness, etc ... (Score:2)
If the sequence used by these cards is not completely random then observing the stream of packets from either of the two connected computers will allow one to extrapolate the formula used to sequence the address progress.
If I have the formula I need only a small ordered list of the IP addresses being used and I can predict what the next IP address will be. With that, I am in the loop.
This sounds like a glorified network card to me. This might confuse the kiddiez, but I suspect persons who use this company's products would be much better relying on very strong encryption and rigid security practices.
Security through TCP Sequence Number (Score:2)
Single point of failure/false sense of security (Score:2)
Invicta, Invita... (Score:2)
---
Fluffware (Score:2)
In order to route IPs on the Internet, route aggregation is required. An end host isn't going to be able to switch its address amongst many different network addresses, only to different IPs in a subnet. Given that someone who wants to compromise a machine has to have a way to find/connect to it first, it is trivial to relocate a machine. Also, see if ARIN wants to assign whole blocks of IPs for machines to hop around on.
IPv6? Maybe that would make this slightly more useful. But if a machine is supposed to be accessible, you have to make it known where it is -- if it isn't accessible, then you SHOULD just put a firewall blocking all inbound traffic, and that's that.
Another day, another "revolution". *sigh*
Re:Can someone explain the advantages of this? (Score:1)
something like this where two computers want to talk to eachother.
comp1: hey comp2
comp2: sup, here's my new ip
comp1[new ip] i'm down, here's my public key, and my new ip
comp2[new ip]: cool, here's my public key and my new ip
comp1[new ip/encrypted] here's my new super secret recipe for grits.
something like that, which is basically a public key/private key system with changing ip's. which i suppose would help with man in the middle attacks.. maybe.
-Jon
Police Radios (Score:1)
"Set us up the IP block!" -- caffeinated_spork (Score:2)
--
I think it's a guard against packet sniffing... (Score:1)
Seth
Isn't this less useful... (Score:1)
I could see the utility of two distant computers carrying on a conversation, each changing their IP to be a pain to packet sniffers, I guess, but wouldn't encryption be more secure?
Ahhh. One feature would be the ability to specify which applications are available to the outside. Well, couldn't a firewall do that as well?
Also, if the IP address keeps changing, how exactly would their servers be available? If not by DNS (which wouldn't change fast enough and would defeat the purpose) then they'd only be available by IP, right? Of course, if the IP keep changing, how would you know the one for the server you wish to connect to?
Sounds daft to me.
--
AOL's Proxy Cache Already Does This (Score:1)
152.163.188.1
152.163.188.35
152.163.188.65
152.163.188.37
. . .
Often, the IP is changing between requesting each graphic from the index.html page.
This is probably just simple load-balancing taking place, but the results are very similar to this so-called "new technology" for hiding the source IP.
Even with a sufficiently large and diverse IP pool, this would essentially be only as secure as the random number generator that picks your next IP address. And we all know how robust and un-crackable random number algorithms are. . .
later,
kristau
Problems (Score:2)
So this is "not suited for widespread deployment."
Analog to CDMA (Score:1)
Hmmmmm... (Score:1)
Re:IP V6 Sooner than Later (Score:1)
If you really think about it, this is not going to benefit most companies one iota for protecting their corporate traffic. Think about it...most big corps run their own dark fiber now, have their own networks and routing policies trans/internationally. So what if you modulate your IP space out in the big bad world? Smaller corps use VPN's and such in addition to smaller pipes. The paranoid ones encrypt everything. I don't see how modulating IP's, which must fit into predictable ranges anyways, is going to really help things along much.
However, there is one area of use by big corps that can afford this toy. That is snooping on their competition. Most IDS systems don't come with the kind of configuration that can handle fingerprinting this toy's kind of traffic. You have to script it in or find a vendor(with really good consulting staff) that knows their stuff. This toy gives folks with deep pockets a substantial advantage over someone who won't or can't buy it or the protection they need.
As for the infallibility of this toy, it would be interesting to see some pros tear it apart and see what ticks. I am betting there are some holes there.
mrgoat
No, that was INVITA.. (Score:1)
..as in:
We'd like to "invita" you to our country, special party, very elite, BYOB, and don't forget your toothbrush.
Maybe "invita-tion" would make a good New Hacker's Dictionary term: getting invited to something that is going to be detrimental to your health/career/whathaveyou. Probably not in the hacker psyche deeply enough though.
Re:IP V6 Sooner than Later (Score:2)
Nothing is really "completely random." When creating large sets of random numbers you usually have to rely on some algorithm to create them, which rules out the "random" bit by the definition of an algorithm.
--
Re:security through obscurity (Score:2)
You're assuming they plan to own the IP addresses they say they're coming from--i.e., non-spoofed IP's. Given that this plan originated from the CIA and (former?) KGB I would say that that's a dangerous assumption.
--
Someone sell these people an Egress (Score:1)
Idiocy (Score:1)
If you own multiple connections to the internet in more than one country and could switch between them, it would be more interesting. But different RTTs and switchover times will kill you then.
Sounds simply useless...
Better schedule your transfers. (Score:1)
But seriously... It sounds good in theory, but if developers don't code their apps for the instantaneous IP change, it could seriously cause major headaches.
Also there would need to be downtime for an IP before it was used again; otherwise I could make a request, then (if hypothetically I changed IPs and my old one was assigned immediately) the other user with my old IP would receive the packets. Which could be a huge security risk if transferring sensitive material.
next P2P application (Score:1)
Just an idea.
reuters a good source? (Score:2)
I'm assuming their solution is hardware based ("special cards"), with a star topology from the central unit. I'm sure that the special cards will not be running a variation of ethernet, but some other, more secure transport. If it is standard ethernet, the network would be switched.
The "central unit" acts as a switch / router, and allows some kind of address changing. No other hubs / switches are on the network, except perhaps between "central units"
I am assuming that reuters or yahoo is wrong, and this protocol is based on the switching of MAC addresses, rather than IP.
If so, then the whole network would have to be revamped in order to put this in place. Existing routers would most likely not be able to handle MAC switching - perhaps a software upgrade could change that though. I'm pretty sure that the company would just sell their central units as hubs / switches. Why not have a monopoly on the proprietary network that you designed?
So, while they are at it, they might as well couple this with fiber optics, with the central unit watching the strength of the signal for drops (i.e. a fiber optic tap is detectable, unlike ethernet, which can be tapped just by planting something on the cat-5 jacket (CIA $$$$ stuff) or by cutting into the wire and installing a repeater/sniffer unit). We are talking about fairly expensive "spy" stuff either way.
If not, if the address switching is indeed IP, there would certainly be a way to sniff the network and to filter out MAC addresses from all other data being sent across the network. If the "special cards" or the network were designed to prevent sniffing, that would
Either way - it is essentially security through obscurity, but it makes life a lot harder for those trying to compromise computers - although hosting a server with this would be difficult - unless the central unit acted as a gateway of some kind.
More info is certainly needed - if someone can post some that would really clear things up.
The slashdot 2 minute between postings limit: Pissing off hyper caffeinated /.'ers since Spring 2001.
Seems like a lot of trouble to go through (Score:1)
The more things change... (Score:4)
--
Re:Security through Vapor? (Score:1)
Let's assume there are 4 billion people (2^32), each wanting to run 4 billion apps (2^32), each requiring 4 billion IPs (2^32).
Total IP consumption: 2^96. So we can grow by a factor of 4 billion (2^32) before we need to worry.
2^128 is Big, Really Big. In fact you don't want to know quite how big it is.
So long, And thanks for all the fish.
Tim.
Re:Not a troll... (Score:1)
Just another step in the ladder (Score:3)
Of course, a centralized server would need to route which would be a major bandwidth bottleneck.
And, of course, a centralized server could also be very easily tapped by a Carnivore-like device.
I guess it could scare off a few skript kiddies though.
Re:IP V6 Sooner than Later (Score:4)
Downfall (Score:2)
How can I short this company? (Score:1)
(BTW, I've already patented a similar security method: I train packs of chipmunks to plug and unplug 10baseT cables into random ports, thwarting any attempt to break in across the related links.)
--Mike
This is a great concept... (Score:3)
Re:This method of security has been around for yea (Score:1)
It's called a "dial-up account", and I've been using its security feature of random IP addresses (aka "dynamic" IP) for years. News sure travels slowly, huh?
Yeah, crappy ISPs can now advertise their high line drop rates as "security features".
Repeat?? (Score:2)
Odd that they would have the same name...
Viv
-----------
On a similar note... (Score:1) [digitech.org]
It's a pretty simple idea, not very flashy, and, oh, it's a freaking bandwidth hog. But, at the same time, it might be fun to play with.
Ed R.Zahurak
Re:IP V6 Sooner than Later (Score:2)
Why let somebody else have all of the fun (and profits)? Create a small division, not associated with the rest of your company, specifically designed to "break" this unbreakable scheme.
Re:Hoax? (Score:2)
Not a troll... (Score:1)
Perhaps it would be wise now not to expect security on a Win box? (But that would definitely be read as a troll.)
BW limitations (Score:1)
Put another way, the router used has to handle all that bandwidth anyway, just make it a little smarter.
New clothes on old ideas? (Score:1)
Spread spectrum meets NAT.
Still, it looks new and interesting, but it still depends on a lot of out-of-band information, and I'd hate to be in charge of their BGP tables.
//jbaltz
--
Re:Very cool idea.... (Score:4)
I'd have to say that this is a not-so-classic example, and in fact a neat idea, but when it comes down to it, it's still securing a system by making it difficult to find.
It's admittedly a neat technology, but is it really secure?
--CTH
--
Re:Ahh, I get it... (Score:2)
The real difference between the frequency-hopping analogy and reality is the simple fact that unlike FH communications, the internet is supposed to be as interoperable as possible. A Mac can look at the same web page as a Solaris box, or even Windows (if it stays up long enough, obviously, for the page to load). This is accomplished through...wait for it...well-documented and widely disseminated standards. To make the comparison with frequency-hopping systems accurate, you'd need to have all or most transceiver manufacturers decide on a few standards, then agree to make all of their systems so that they work with all other ones (by adhering to the standard). And once you do that, how well do you think frequency-hopping will hide what you're saying?
Ahh, I get it... (Score:3)
They keep moving around so many times a second that the bad guys can't find them. If a bad guy manages to ping an address that's a target, by the time he even types the "n" in "nmap" it's another address.
But the GOOD traffic can find them? How the hell does this thing know the difference? It sounds like they came up with a great way to hide a computer (especially if they end up trying to pretend to be someone else's IP range in the process), but they totally ignore the fundamental problem: how to tell good traffic from bad without a human having to examine it. This has to be some of the worst snake oil I have ever seen.
variation of DHCP? (Score:2)
Isn't this just a variation of some kind of dynamic host configuration?
Unfortunately, in both cases, hit the control server (e.g. DHCP, trn, etc.) and the whole system is down. There is also the caveat that at some point the dynamic address must be available to the public (in my case via dynamic DNS), so if my script kiddies were smart enough, they could have had their program get my address from my DNS server and adjust their attack accordingly. Or taken down the DNS server, so I would have defeated my purpose.
In either case you shouldn't rely on security through haystack and needle methods. You can always burn the haystack if you don't care about the needle.
Not really a shield (Score:4)
Ravenous Bugblatter Beast of Langley, VA (Score:2)
--Blair
Weakest Link. (Score:2)
What they fail to mention is that their Central Control Unit is running an out-of-the-box copy of RedHat 6.0.
-
Seriously; how secure can this be if it is revolving around a single control unit (or a cluster of them) that dictates, records, logs, and monitors the IP addresses?
Sounds to me like they're selling us NSA-quality security, along with NSA-approved backdoors and line tapping capability.
-
How about this--instead of having a single control center managing the IP pool, we create a peer-to-peer network where, upon joining, you effectively 'donate' your 'IP address' (some form of tunneling/encapsulation would be in use?) to the community pool.
The network client continuously searches for a new partner to exchange addresses with, based on specified variables, and trades your address with theirs.
Instead of being a one-to-one swap, it's going to be: take an address, pass it on. The first few may be easy to track, but once you've done your 10th or 40th swap (each sequential exchange gives the new partner the address you procured in the last exchange), the paper trail is extensive.
Just a random thought, it may be effective when combined with some existing solutions.
Jason Fisher [feroxtech.com]
Hoax? (Score:2)
what about MAC address? (Score:2)
Still, I wonder about a few things. First, how can you implement time-based IP-hopping when IP is not time-dependent? That is, what happens when the connection between the two machines encounters a bit of congestion? The destination will have hopped on to a new address and the packets will never arrive... unless there's something I'm missing.
Second, don't the packets contain things like the MAC address of the ethernet card? Are they saying that their technology either will not include this information, or switch it right along with the IP address?
As glorious as it sounds, somehow I don't see this being nearly as effective against MitM as signal-hopping with radio frequencies. With a radio scanner you would either have to monitor all available frequencies to try to put the session together or synch with the session and hop along with it, which is fairly difficult. However with packet sniffing, everything that passes is available for reading. The only way I can see this being halfway useful is if somehow every address used had a different route between the two machines, which isn't really feasible.
So... it's a nice idea I suppose but it sounds to me like it's mostly hype.
Re:Not a troll... (Score:2)
43rd Law of Computing:
https://slashdot.org/story/01/05/22/1857203/security-through-varying-ips
Hey there,
I am making a 2D top-down game.
I was able to get the enemies to follow the players correctly. But, I want them to retreat when they are too close to the player.
How can I accomplish this?
A* just handles movement and path finding. Any decision making about what the destination should be has to be implemented by the end user, either via a third-party AI behaviour addon or custom code.
I haven't personally played around with any 3rd party behaviour editors; we built our own solution for our project.
Depending on your needs and skill level there is always something to fit your needs.
Behaviour Designer I think is one of the most popular ones.
I decided to make something myself. But I am not sure how to override the A* behaviour, because A* is always telling the enemy to follow the player, and I don't want that.
How can I do that?
Hi
You might want to remove the AIDestinationSetter component and instead manually set the ai.destination property to wherever you want the agent to move.
I suppose you’re using the DestinationSetter script?
Remove the scripts from your agents. And in your behaviour scripts you set the destination on the movement script.
I modified the AIDestinationSetter script to make the agent retreat when too close. However, it's not working consistently.
Here’s the code:
```csharp
public class AIDestinationSetter : VersionedMonoBehaviour {
    /// <summary>The object that the AI should move to</summary>
    public Transform target;
    [SerializeField] float retreatDistance;
    IAstarAI ai;

    void OnEnable () {
        ai = GetComponent<IAstarAI>();
        // Update the destination right before searching for a path as well.
        // This is enough in theory, but this script will also update the destination every
        // frame as the destination is used for debugging and may be used for other things by other
        // scripts as well. So it makes sense that it is up to date every frame.
        if (ai != null) ai.onSearchPath += Update;
    }

    void OnDisable () {
        if (ai != null) ai.onSearchPath -= Update;
    }

    /// <summary>Updates the AI's destination every frame</summary>
    void Update () {
        if (target != null && ai != null) {
            if (Vector2.Distance(transform.position, target.position) > retreatDistance) {
                ai.destination = target.position;
            } else {
                Vector3 dirToPlayer = (target.transform.position - transform.position).normalized;
                ai.destination = Quaternion.Euler(0, 0, 180) * dirToPlayer;
            }
        }
    }

    public void ChangeTarget(Transform newTarget) {
        if (newTarget != target) target = newTarget;
        Debug.Log(target);
    }
}
```
Hey,
Here is an updated destination position code
```csharp
Vector3 dirToPlayer = (target.transform.position - transform.position).normalized;
ai.destination = target.transform.position + dirToPlayer * retreatDistance;
```
Thanks for the help, it actually worked better like this
```csharp
ai.destination = dirToPlayer * retreatDistance;
```
But that would make the destination centered around the world origin (0, 0), so it might not always go in the correct direction.
Alternatively on the second line you could swap retreatDistance for another variable, so you can control them individually.
https://forum.arongranberg.com/t/make-seekers-retreat-when-too-close-to-target/7755
In addition to booleans, there are atomics for pointers, integrals and user-defined types. The rules for user-defined types are special.
The atomic wrapper on a pointer T* (std::atomic<T*>) or on an integral type (std::atomic<integ>) enables the CAS (compare-and-swap) operations.
The atomic pointer std::atomic<T*> behaves like a plain pointer T*. So std::atomic<T*> supports pointer arithmetic and pre- and post-increment or pre- and post-decrement operations. Have a look at the short example.
```cpp
int intArray[5];
std::atomic<int*> p(intArray);
p++;
assert(p.load() == &intArray[1]);
p += 1;
assert(p.load() == &intArray[2]);
--p;
assert(p.load() == &intArray[1]);
```
In C++11 there are atomic types for the known integral data types. As ever, you can read all the details about atomic integral data types, including their operations, on en.cppreference.com. A std::atomic<integral type> supports everything that a std::atomic_flag or a std::atomic<bool> is capable of, and even more.
The composite assignment operators +=, -=, &=, |= and ^= and their counterparts std::atomic<>::fetch_add(), std::atomic<>::fetch_sub(), std::atomic<>::fetch_and(), std::atomic<>::fetch_or() and std::atomic<>::fetch_xor() are the most interesting ones. There is a little difference between the atomic read-modify-write operations: the composite assignment operators return the new value, the fetch variations the old value. A deeper look gives more insight. There is no multiplication, division or shift operation in an atomic way, but that is not a big restriction, because these operations are relatively seldom needed and can easily be implemented. How? Look at the example.
```cpp
// fetch_mult.cpp

#include <atomic>
#include <iostream>

template <typename T>
T fetch_mult(std::atomic<T>& shared, T mult){
  T oldValue = shared.load();
  while (!shared.compare_exchange_strong(oldValue, oldValue * mult));
  return oldValue;
}

int main(){
  std::atomic<int> myInt{5};
  std::cout << myInt << std::endl;
  fetch_mult(myInt, 5);
  std::cout << myInt << std::endl;
}
```
I should mention one point. The multiplication will only happen if the relation oldValue == shared holds. So, to be sure that the multiplication always takes place, I put the compare_exchange_strong call in a while loop. The result of the program is not so thrilling: it prints 5 and then 25.
The implementation of the function template fetch_mult is generic, too generic. So you can use it with an arbitrary type. If I use the C-string "5" instead of the number 5, the Microsoft compiler complains that the call is ambiguous.
"5" can be interpreted as a const char* or as an int. That was not my intention. The template argument should be an integral type. This is the right use case for concepts lite. With concepts lite, you can express constraints on the template parameter. Sad to say, they will not be part of C++17; we should hope for the C++20 standard.
```cpp
template <typename T>
  requires std::is_integral<T>::value
T fetch_mult(std::atomic<T>& shared, T mult){
  T oldValue = shared.load();
  while (!shared.compare_exchange_strong(oldValue, oldValue * mult));
  return oldValue;
}
```
The predicate std::is_integral<T>::value will be evaluated by the compiler. If T is not an integral type, the compiler will complain. std::is_integral is a type trait from the new type-traits library, which is part of C++11. The requires clause defines the constraint on the template parameter. The compiler checks the contract at compile time.
You can define your own atomic types.
There are a lot of serious restrictions on a user-defined type MyType to get an atomic type std::atomic<MyType>. There are restrictions on the type itself, and there are restrictions on the available operations that std::atomic<MyType> can perform.
For MyType there are the following restrictions:
- The copy assignment operator for MyType, for all base classes of MyType and for all non-static members of MyType must be trivial.
- MyType must not have virtual methods or virtual base classes.
- MyType must be bitwise copyable.
You can check the constraints on MyType with the functions std::is_trivially_copy_constructible, std::is_polymorphic and std::is_trivial at compile time. All these functions are part of the type-traits library.
For the user-defined type std::atomic<MyType>, only a reduced set of operations is supported.
To get the big picture, the following table displays the atomic operations dependent on the atomic type.

| Operation | atomic_flag | atomic&lt;bool&gt; | atomic&lt;user&gt; | atomic&lt;T*&gt; | atomic&lt;integral&gt; |
|---|---|---|---|---|---|
| test_and_set | yes | | | | |
| clear | yes | | | | |
| is_lock_free | | yes | yes | yes | yes |
| load | | yes | yes | yes | yes |
| store | | yes | yes | yes | yes |
| exchange | | yes | yes | yes | yes |
| compare_exchange_weak/strong | | yes | yes | yes | yes |
| fetch_add, += | | | | yes | yes |
| fetch_sub, -= | | | | yes | yes |
| fetch_or, \|= | | | | | yes |
| fetch_and, &= | | | | | yes |
| fetch_xor, ^= | | | | | yes |
| ++, -- | | | | yes | yes |
The functionality of the class template std::atomic and the flag std::atomic_flag can also be used as free functions. Because the free functions use pointers instead of references, they are compatible with C. The atomic free functions support the same types as the class template std::atomic, and in addition the smart pointer std::shared_ptr. That is special, because std::shared_ptr is not an atomic data type. The C++ committee recognised the necessity that instances of smart pointers, which maintain under their hood a reference counter and an object, must be modifiable in an atomic way.
```cpp
std::shared_ptr<MyData> p;
std::shared_ptr<MyData> p2 = std::atomic_load(&p);
std::shared_ptr<MyData> p3(new MyData);
std::atomic_store(&p, p3);
```
To be clear: the atomic characteristic only holds for the reference counter, not for the object. That is the reason we will get a std::atomic_shared_ptr in the future (I'm not sure if that future is called C++17 or C++20; I was often wrong in the past), which is based on a std::shared_ptr and guarantees the atomicity of the underlying object. The same holds for std::weak_ptr, a temporary owner of the resource that helps to break cyclic dependencies of std::shared_ptr: the name of the new atomic std::weak_ptr will be std::atomic_weak_ptr. To make the picture complete, the atomic version of std::unique_ptr is called std::atomic_unique_ptr.
Now the foundations of the atomic data types are laid. In the next post, I will talk about the synchronisation and ordering constraints on atomics.
I'm really learning a lot from your posts with great insight of modern multi threading techniques in C++ and like to thank you wholeheartedly for that.
However, I've a doubt regarding Mutex and Atomic operations.
I was comparing some performance measures between using Mutexes and Atomics.
I implemented a matrix dot-product (collected from Net) using a mutex and the same algo using an atomic variable, like this:
Solution 1:
```cpp
static std::mutex myMutex;
std::lock_guard<std::mutex> mtex(myMutex);
```
Solution 2:
std::atomic &result;
In both cases, two vectors of equal length are taken, and each thread performs element-by-element multiplication and adds the "partial result" of each such multiplication to a scalar variable called "result". For example:

```cpp
result += partial_sum;
```
While running these two different solution side-by-side on an 8-core with 1 to 100 threads, I've observed that sometimes the Atomic operation is slower than the mutex operation.
For example (using ):
For 1 threads, Mutex - Atomic performance diff is: -0.0032668
For 2 threads, Mutex - Atomic performance diff is: 0.0070272
For 3 threads, Mutex - Atomic performance diff is: 0.0050193
For 4 threads, Mutex - Atomic performance diff is: -0.0031304
For 5 threads, Mutex - Atomic performance diff is: -0.0195508
For 6 threads, Mutex - Atomic performance diff is: 0.0166401
...
...
For 98 threads, Mutex - Atomic performance diff is: -0.0070265
For 99 threads, Mutex - Atomic performance diff is: 0.0030019
For 100 threads, Mutex - Atomic performance diff is: 0.0040034
Could you please kindly explain why the atomic version sometimes performs slower than the mutex version?
Thanks.
I assume you have compiled it with full optimisation.
If you sum up the local results with minimal synchronisation, the kind of synchronisation does not matter; therefore I would assume quite similar numbers for atomics or locks.
https://modernescpp.com/index.php/atomics
- Language Philosophies
- Objects and Primitives
- Files and Compilation Units
- Object Models
- Static Behavior
- Different Syntax
- Summary
Files and Compilation Units
Both Objective-C and Java store source code in files (unlike Smalltalk and a lot of Lisp environments). In Java, there is no difference between a file and a compilation unit. Objective-C, unfortunately, inherits the horrible mess that is the C preprocessor. In Objective-C, a compilation unit is a source file and every other file that has been included in it by the preprocessor.
Objective-C extends the C preprocessor slightly, adding a #import directive. This is similar to #include but prevents multiple inclusion. If you #import a file, then you will only get one copy of it in the current compilation unit, no matter how many times you #import it.
In Java, the closest equivalent is the import directive. This is very different; it specifies a namespace that should be used to resolve a class. The Objective-C equivalent is much simpler; it just includes the text content of the specified file.
The preprocessor adds a bit of flexibility to Objective-C. You get things such as conditional compilation and some primitive metaprogramming tools, but it's quite basic.
http://www.informit.com/articles/article.aspx?p=1568732&seqNum=3
Braille application.
I am thinking of an application to input text using Braille.
I'm thinking about Braille in Japanese.
Is it possible with Pythonista?
Should I turn off the import?
It's a mystery, but my wife's iPhone 7 can't download a Japanese file.
Is something missing?
@shinya.ta which step does not work?
- tapping Download
- options
- Run Pythonista3 script
- Import file
tapping Download
Run Pythonista3 script
The color of the file shown is different.
It is similar to the symptom that my iPhone could not execute.
But, in my case, it was because I downloaded the same file many times.
My wife is the first time to download it.
@shinya.ta said:
The color of the file shown is different
I don't understand that. After "run Pythonista", did you tap on " import file"?
@shinya.ta And did you retry pasting my little script about speech with objectivec?
The problem of my wife's iPhone 7 has been solved.
After restarting, it was solved.
The iPhone gave me a speech, but I don't have the explanation of Kanji, so I don't know which Kanji I chose.
In the former cursor moving application, I had good connection with Voice Over.
@shinya.ta I'll continue to search this problem of VoiceOver on TableView content but I don't have a lot of free time this week, because I'm in holiday (4 days) in France...
Did you on your iPhone XS Max test my little script about objectivec speech?
```python
from objc_util import *

AVSpeechUtterance = ObjCClass('AVSpeechUtterance')
AVSpeechSynthesizer = ObjCClass('AVSpeechSynthesizer')
AVSpeechSynthesisVoice = ObjCClass('AVSpeechSynthesisVoice')

voices = AVSpeechSynthesisVoice.speechVoices()
for i in range(0, len(voices)):
    if 'ja-JP' in str(voices[i].description()):
        voice_jp = i
        break
    # print(i, voices[i], voices[i].description())
voice = voices[voice_jp]  # Japon = 31,32,33

synthesizer = AVSpeechSynthesizer.new()
utterance = AVSpeechUtterance.speechUtteranceWithString_("こんにちは、友よ")
utterance.rate = 0.5
utterance.voice = voice
utterance.useCompactVoice = False
synthesizer.speakUtterance_(utterance)
```
Is this all right?
@shinya.ta Yes, now the script is right but I hoped you will hear the speech like I did on my iPad mini4.
Really, I don't understand what is the problem on your iPhone
Are you sure the volume is set in your control center?
@shinya.ta could you try on another device than your iPhone XS Max. I remember that you always got speech problem with it.
Test only to be sure the program is ok for you
Because it runs ok on my iPad mini 4 and Pythonista speech does not work
@shinya.ta Could we try to summarize remaining problems?
- complex punctuations in Braille
- VoiceOver on conversion TableView elements
- speech on your iPhone
- you decide the buttons titles, so VoiceOver speaks them
Is something missing?
The speech on my iPhone is not much of a problem.
I wanted to know the difference between my wife's iPhone and my iPhone.
You need a punctuation mark.
@shinya.ta said:
You need a punctuation mark.
I don't understand
@shinya.ta said:
The narration of conversion elements is always necessary.
That, I understand but not yet solved...
Does the iPhone of your wife not speak each row when scroll via buttons?
Complicated punctuation marks in braille are required.
You need to decide the title of the button and read it.
https://forum.omz-software.com/topic/5584/braille-application/220
We're excited to announce the beta release of our React Chat Components. We've used our existing React SDK to create ready-to-use components that make it easy to add essential features like chat to your app, while reducing your time to market and allowing you to focus on building what you actually care about.
Sign up for our React Chat Components beta using the form below to access the new components and to provide essential feedback on them to help influence and improve our product. Then, read on to learn the best practices for implementing and using these new components in your app.
PubNub’s React Chat Components empower developers like you to create rich chat experiences without the opportunity cost of building chat when you could be focused on other parts of your application. You wouldn’t reinvent the wheel, so don’t reinvent chat UI components and build the infrastructure yourself when PubNub has already done it for you. There’s no need to spend the time building something that already exists when you can use PubNub’s ready-to-use React Chat Components to easily add chat features to your application.
Our set of React Chat Components are packed with the features you’d expect from most chat experiences:
User and Channel Metadata: Fetch metadata about users, channels, and memberships from objects storage using custom hooks.
Channel Subscriptions: Automatic subscriptions to current channel and optional subscriptions to other channels and channel groups.
Messages: Publish and listen to text messages and fetch history for each channel.
User Presence: Fetch currently present users and listen to new presence events like a user subscribing or leaving.
Typing Indicators: Typing indicators displayed as text notifications or messages.
Chat Message Reactions: Publish and display message reactions (emojis) for messages.
Our React Chat Components are flexible to your needs so you’re enabled to create chat for various use cases all with different functionalities and customizable looks. Customizations include:
Component options to tweak the functionality of each component.
Built-in light and dark themes.
Extra customization with CSS variables.
Build a telemedicine chat app, multiplayer chat lobby, live support chat center, live event chat lobby, or another style of chat with minimal effort. Build a powerful and modular chat app using these components as a starting point. Best of all: There’s no need to deal with server code at all. All of the infrastructure is provided and powered by PubNub.
In this tutorial you’ll discover how to set up and use PubNub React Chat components to build a chat application. All the steps you need to use the components are included in this post.
How does PubNub work with React Chat Components?
Before using PubNub React Chat Components, you should know why PubNub is the best choice for powering them. PubNub is used for real-time message delivery, metadata, user presence, and other chat related data to facilitate the actions within the chat UI components.
PubNub is a great backend API for all types of chat applications because it offers a lot of functionality that you won’t have to build yourself. Some essential advantages are:
PubNub delivers billions of messages every day and will scale with your application as it grows.
User presence features are built-in. Easily create online and offline indicators and typing indicators.
Storage and Playback for message history. History is essential for a complete user experience.
Push Notifications support for iOS and Android mobile apps. Keep users online by alerting them of new messages.
PubNub React Chat Components have PubNub services fully incorporated and are ready to use. It’s recommended that you’re familiar with React if you wish to modify these React components for use in your application. However, it’s possible to follow along, explore, and use these React Chat Components without prior experience.
Available chat components
The following chat components are included with PubNub React Chat Components:
Chat - Required state provider.
Message List - Shows list of channel messages.
Message Input - Input for sending new messages.
Channel List - List of channels you can access and send messages to.
Channel Members - List of users in a channel and updates with new users.
Typing Indicator - Indication of typing activity before messages are sent to a channel.
Now that you have an understanding of the available components, let’s explore how you set up a development environment to try them out.
Creating a React development environment
Before you use PubNub React Chat Components, you need the tools necessary to build a chat application in React.
Set up a Node.js development environment
If you’ve already installed Node.js, you can skip to setting up Git.
This post requires you to install Node.js. To install Node.js you need to visit the Node.js Downloads page and select the installer for your platform.
Check the installation. Open your terminal and run the following commands to ensure Node.js is installed:
node -v
npm -v
You should see a version number after you run these commands.
Set up Git
You need Git to clone the PubNub React Chat Components from the GitHub repo.
If you’ve already installed Git, you can skip to setting up a chat API for React Chat Components.
Even if you have Git installed you may want to ensure that it’s the latest version. Follow the Getting Started - Installing Git guide to install Git.
Next, check the Git installation. Open your terminal and run the following command to ensure Git is installed:
git --version
You should see a version number after you run the command.
Your development environment is now ready. Next, you need to configure your PubNub API keys for React Chat Components.
Setting up a chat API for React Chat Components
You’ll need to set up a PubNub account to use the PubNub React Chat Components. The end result will be two API keys that you’ll use in your chat application. A PubNub account and API keys are always free (perfect for startups).
You’ll first need to sign up for a PubNub account.
Go to Apps.
Click Create New App.
Give your app a name, and select Chat App as the app type.
Click Create.
Click your new app to open its settings.
When you create a new app the first set of keys is generated automatically. However, a single app can have as many keysets as you like. PubNub recommends that you create separate keysets for production and test environments.
Select a keyset.
Enable the Channel Presence feature for your keyset.
Enable the Storage and Playback feature for your keyset.
Enable the Stream Controller feature for your keyset.
Enable the Objects feature for your keyset and select a region.
Save the changes.
Copy the Publish and Subscribe keys for the next steps.
Running the sample React Chat Components app
Now you’re ready to get started building with PubNub React Chat Components. The best way to start exploring is to run one of the sample chat component apps.
First, open your terminal and clone the React Chat Components repository. This repository contains the open source PubNub powered React chat components and sample apps.
git clone
Navigate into the samples folder and install the dependencies. It’s normal for this process to take a few minutes to install the dependencies with npm.
cd react-chat-components/samples
npm install
Now you need to configure the components with the API keys you obtained from your PubNub dashboard. These keys allow you to send and receive messages on the PubNub Network with the PubNub Real-Time Messaging API. You must enable the Channel Presence feature, the Storage and Playback feature, Stream Controller feature, and Objects feature for your API keys for the chat components to work properly with full functionality.
Open samples/pubnub-keys.json. Paste your Publish and Subscribe keys from the keyset you created from your PubNub Dashboard.
Run the project and try out each of the components in the Sample Chat source code.
npm start
The React Chat Components samples should now open an HTML webpage in your browser.
Open the Sample Chat web app. You should see the user interface and placeholder messages.
Try inputting text in the text input area in the user interface and press enter. You should see your message appear in the message list area.
Click on another channel to change channels. Start a direct chat by clicking on another user’s name.
Reference the PubNub Chat Components Documentation to learn more about how to use the chat app components.
How to use React Chat Components in your application
Now that you’ve seen how the components work together, you can include them in an existing React app.
Chat app component installation
First, install the PubNub React Chat Components and all required dependencies using npm.
npm install --save pubnub pubnub-react @pubnub/react-chat-components
Using PubNub Chat App Components
Import PubNub, PubNub React Provider and the components.
import PubNub from "pubnub";
import { PubNubProvider } from "pubnub-react";
import {
  Chat,
  MessageList,
  MessageInput,
  ChannelList,
  MemberList,
} from "@pubnub/react-chat-components";
Create your PubNub client and the rest of the configuration for the Chat component, which serves as a common context for all of the components:
const pubnub = new PubNub({
  publishKey: "MY_PUBLISH_KEY_HERE",
  subscribeKey: "MY_SUBSCRIBE_KEY_HERE",
  uuid: "myUniqueUUID",
});
const currentChannel = "myCurrentChannel";
const theme = "light";
Replace the placeholders “MY_PUBLISH_KEY_HERE” and “MY_SUBSCRIBE_KEY_HERE” with your Publish and Subscribe keys from the keyset you created from your PubNub Dashboard.
Set up the PubNub Provider with your newly created client.
const MyComponent = () => {
  return <PubNubProvider client={pubnub}></PubNubProvider>;
};
Place the PubNub React Chat Components within the state provider in the order that your app requires. Components may be tweaked later by using option properties and CSS variables.
const MyComponent = () => {
  return (
    <PubNubProvider client={pubnub}>
      <Chat {...{ currentChannel, theme }}>
        <MessageList />
        <MessageInput />
      </Chat>
    </PubNubProvider>
  );
};
Explore the PubNub Chat Components Documentation to learn more about how to use PubNub React Chat Components in your React chat app.
Why use PubNub React Chat Components?
With PubNub React chat components you can build chat in a way that’s customized perfectly for your needs. PubNub provides a real-time publish/subscribe messaging API built on a global network. PubNub has multiple points of presence on every inhabited continent, supports dozens of SDKs, and has lots of features you’ll need in a chat application use case.
Use PubNub to build features into your application like:
Want to get started building live chat or messaging into your app today? Get in touch with our sales team to quickly get your in-app chat up and running.
|
https://www.pubnub.com/blog/react-chat-components-beta-build-chat-experiences/
|
CC-MAIN-2021-31
|
refinedweb
| 1,764
| 56.45
|
Hello.
I have a question.
Choosing between pygame and pyglet, I lean more toward pyglet. But I am frightened of the decorators.
Tell me please, is it possible to write pyglet code the same way as in pygame? Using while True and so on?
Thanks in advance!
Hello.
2 2017-04-17 08:06:59 (edited by magurp244 2017-04-17 09:26:17)
It is possible to use a while loop for updating in Pyglet; it's not much different from previous examples. You would still need to either subclass the window or use "@window.event" tags for drawing and input functions, and also manually dispatch events or it will lock up.
3 2017-04-17 09:18:34 (edited by magurp244 2017-04-17 09:31:51)
Actually, digging a bit into Pyglet's documentation, I think I've come up with a better example that's a lot closer to the pygame version. Instead of having to use event tags or window subclassing, you can also push keyboard handling onto the window's event stack and handle it similarly to pygame.
import pyglet
from pyglet.window import key

def Example():
    #create window
    window = pyglet.window.Window(640, 480, caption='Example')
    #load sound
    sound = pyglet.media.load('example.wav', streaming=False)
    #load image
    picture = pyglet.image.load('example.png')
    #load keyboard handler
    keyboard = key.KeyStateHandler()
    #add keyboard handler to window
    window.push_handlers(keyboard)

    #main update loop
    while True:
        if keyboard.items():
            #play sound
            if keyboard.get(key.SPACE):
                sound.play()
            #quit program
            if keyboard.get(key.ESCAPE):
                window.close()
            #clear keyboard buffer
            keyboard.clear()
        #clear screen
        window.clear()
        #draw picture to window
        picture.blit(0, 0)
        #update screen
        window.flip()
        #dispatch events
        window.dispatch_events()

Example()
Can I use pygame and pyglet at the same time?
That example should work... But (unless I missed something), you need to tick the clock.
Also what real advantage is there to be gained by manually doing this over using app.run() (asking the original poster). Decorators aren't difficult to use, and there are plenty of examples in pyglet's documentation.
Just because pyglet's way is different from other libraries doesn't mean it's "bad" or needs to be changed to act like other libraries, especially if it requires code that essentially duplicates something the library can already do (and probably a lot faster).
Blademan
Twitter: @bladehunter2213
I just do not understand how to make a game with pyglet.
I do not understand the decorators; I'm used to the simple game style with while True.
Hello Jonikster.
I can't tell you how to make a game, I'm not doing anything like that yet. But to answer your previous question, yes you can use pygame and pyglet in the same script without any problems. I'm not sure how you code using both of them, but yes it can be done. I would suggest reading the documentation, that's always a good starting point.
Hth.
What are you doing! Don't think, shoot!
@Blademan
Hm, you may be right about the clock. Reading the pyglet programming guide, though, it seems to suggest it's more to make sure certain scheduled events are called, including sounds and video played through pyglet, not that it's explicitly required. The example seems to work fine, though I wouldn't necessarily call it ideal.
This might not seem like the most straightforward way of going about using pyglet, but doing it this way could still prove to be educational. Most people start off with the standard approach and migrate to topics like this for edge cases; in this instance he may start off here and migrate to the standard approach. It's sort of like starting off learning Assembler instead of C, I suppose, heh.
@jonikster
I'm sure you could use Pyglet and Pygame at the same time, but I don't see much reason to do so. Both have similar functions such as audio output, keyboard input, and window display functions, so mixing them together as opposed to sticking with one individually doesn't come with many benefits and could cause potential conflicts.
What do you like about pyglet over pygame?
It seems to me that decorators are easier than classes and objects. But I know that with decorators the speed of the program is lower, and I do not know classes and objects very well.
@magurp: True, though I get the feeling he'll try this way, and either (a) conclude that it's too hard and will switch to something easier (bgt comes to mind as the go-to easy language), or (b) he'll stick with this way, because it works and the correct way is "weird".
I think you're right about the clock ticking bit not (technically) being essential, but (in my admittedly limited experience) all but the simplest games will need a "frame rate" to schedule even the most basic physics (gravity comes to mind), as well as automatic enemy movement and any number of other things that depend on consistent ticking.
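To illustrate why consistent ticking matters, here is a generic sketch of a fixed-timestep loop (plain Python, not pyglet's clock API): the simulation advances in equal dt steps no matter how fast or slow frames arrive, so a fast machine and a slow machine end up in the same game state.

```python
def fixed_timestep(frame_times, dt=0.02):
    """Advance a simulation in equal dt steps, given the wall-clock
    duration of each rendered frame. Leftover time carries over in
    the accumulator instead of being lost or double-counted."""
    accumulator = 0.0
    position, velocity = 0.0, 1.0
    steps = 0
    for frame in frame_times:
        accumulator += frame
        while accumulator >= dt:
            position += velocity * dt   # physics uses dt, not frame jitter
            accumulator -= dt
            steps += 1
    return position, steps

# A fast machine (many short frames) and a slow machine (few long frames)
# cover the same 0.2s of wall time and simulate the same game time:
fast = fixed_timestep([0.005] * 40)
slow = fixed_timestep([0.05] * 4)
```

Without the accumulator, the object would simply move once per loop iteration, which is exactly the "unplayably fast on modern processors" failure mode mentioned later in this thread.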
To the original poster: Just try examples. I don't care if you think decorators are weird; if you've read the documentation you should know you don't even need to use them (assuming you know how to use class inheritance or can follow examples). There is plenty of material online about decorators (tutorials, how-tos, explanations of how they work), but as I mentioned, you don't really need to use them at all.
As for why I like pyglet? I like the event framework. I like that I can create events in my classes for in-game objects that allows code to "subscribe" to their events. It makes it really easy to have lots of things happen without specifically calling methods to do those things.
As a basic example, take a stationary item called a switch. When a player "uses" that switch, the use method only needs to dispatch, say, an on_use event. Any code that wants to run when that switch is used only needs to tell the event handler so, either by a decorator, or by calling a register method specifying which instance and event, or by pushing a class with lots of events relating to that switch on top of the event stack.
If I want to update that switch so that more things happen when it's used, all I need to do is register the desired code as a handler for whatever instance and event I want.
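The subscribe pattern described above can be sketched in a few lines of plain Python (a generic illustration of the idea, not pyglet's actual EventDispatcher API):

```python
class EventDispatcher:
    """Minimal publish/subscribe dispatcher: handlers register for a
    named event, and every registered handler runs when it fires."""
    def __init__(self):
        self._handlers = {}

    def register(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def dispatch(self, event, *args):
        for handler in self._handlers.get(event, []):
            handler(*args)

class Switch(EventDispatcher):
    def use(self):
        # The switch doesn't know who cares; it just announces the event.
        self.dispatch("on_use", self)

# Any number of listeners can subscribe without Switch.use() ever changing:
log = []
switch = Switch()
switch.register("on_use", lambda s: log.append("light toggled"))
switch.register("on_use", lambda s: log.append("sound played"))
switch.use()
```

The point is the decoupling: adding a new reaction to the switch means registering one more handler, not editing the switch's own code.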
It also has really convenient keyboard, mouse, and joystick functions, though I haven't really messed with pygame so I can't compare them.
Blademan
Twitter: @bladehunter2213
@Blademan
Well, yes and no. You don't necessarily need the clock to handle the framerate, physics, movement, etc., as the main loop iterates through cycles acting like a clock tick. Doing it this way can be prone to wild fluctuations in the flow of the game based on resource load, though. If you're not building real-time applications (card or turn-based games, say), it's not as much of an issue, but yeah, in almost all cases it's much more convenient and reliable to use the clock. There are actually a number of classic games coded like this that haven't held up very well over the years, largely because they are unplayably fast on current-gen processors without emulation software like DOSBox to throttle the processor speed.
@jonikster
If it helps, why not break each class or part you want to play around with into smaller examples to help familiarize yourself with them? That way you only need to focus on one or two as you go. It will take some time and patience, but breaking things and learning about them is part of the process and an occupational hazard of being a programmer, heh.
Nods. I'm assuming here that you want a stable game, not one that arbitrarily speeds up and slows down.
Blademan
Twitter: @bladehunter2213
|
http://forum.audiogames.net/viewtopic.php?pid=307679
|
CC-MAIN-2017-39
|
refinedweb
| 1,341
| 71.44
|
Hi Kirby,

Many thanks for the thoughtful review of my PythonOOP chapter. I especially like the suggestions to put more emphasis on the role of over-rides in *specializing* the behavior of subclasses, and to at least mention commonly-used CS terms like "polymorphism". I'll keep your suggestions for the next time I do any revisions on this chapter. If you or anyone else wants to make more extensive modifications, target a different audience, etc., I'll be glad to work with you. No reason we can't have multiple versions targeted to different groups.

If my Animals class hierarchy seems artificial, it may be because I tried to illustrate everything I considered important, and do it in the smallest example that would still be clear.

The comparison of classes to modules is not motivated by any affinity to Perl. What little I know of Perl, I don't like. Rather, it is part of an answer to the question "Why do we need classes?" Modules are the closest thing to classes as a unit of encapsulation. With a simple function, you could generate instances of modules as easily as instantiating objects from classes. You could even change the language slightly, and allow multiple modules per file.

As for making OOP a "revelation", I think that is the wrong way to teach it, especially for non-CS students. I much prefer to treat it as just another way to encapsulate pieces of a program, a way that is very natural when you are modeling objects in the real world, and can be beneficial even when there is no parallel in the real world - the benefit flowing from the extra flexibility you have in encapsulation. That is something all engineers will understand. The "revelation" approach completely turned me off when I first read about OOP in the early 90's. As a skeptical engineer, I was left with the impression that OOP, polymorphism, etc., was a pile of hype, and there was nothing there I couldn't do in FORTRAN, BASIC, or C.
It wasn't until I started using Python in 2002, that I saw the benefits. I like that Python doesn't mystify OOP, or push it the way Java does, but just makes it a natural thing to do. Again, if anyone wants to modify this document for their own purposes, I'll be glad to help. -- Dave At 08:21 PM 1/17/2009 -0800, kirby urner wrote: >Hi David -- > >I've been looking at your PythonOOP. > >Why use classes? All programming aside, I think it's a fairly strong >grammatical model of how people think, basically in terms of >noun.adjective (data attribute) and noun.verb() (callable method). >All talk of computer languages aside, we're very noun-oriented, think >in terms of "things" and these things either "are" or "have" >attributes and "do" verby stuff. > >OOP is about making the language talk about the problem domain, help >us forget that under the hood nightmare of chips and registers, >needing to allocate memory... all that stupid computer stuff that >nobody cares about (smile). > >Of course I like your nomenclature of a Cat inheriting from Animal as >I'm always tying "hierarchy" back to "zoology" and "taxonomy" as those >were original liberal arts tree structures (class hierarchies, >kingdoms, domains, namespaces). We like "astral bodies" subclassed >into stars, planets, planetoids (like Pluto), and moons. We like >"Reptiles" including "Snakes" which of course includes "Pythons" >(everything is a python in Python, i.e. an object with a __rib__ >cage). > >Having a Dog and a Monkey motivates why ancestor classes are >important: generic shared stuff, like digestion, might be handled in >a shared eat() method, each with its own self.stomach of course. With >a younger set (pre-college), those parentheses connote lips, with args >as oral intake. In the class of classes though... we give birth. 
> >Per a recent PPUG, I'm these days thinking to use a collections.deque >for my digestive tract maybe: > >>>> class Animal: > def __init__(self): > self.stomach = deque() > def eat(self, item): > self.stomach.append(item) > def poop(self): > if len(self.stomach)>0: > return self.stomach.popleft() > > >>>> zebra = Animal() >>>> zebra.eat('straw') >>>> zebra.stomach >deque(['straw']) >>>> zebra.poop() >'straw' >>>> zebra.stomach >deque([]) >>>> > >Some will want to say digestive_tract instead of stomach. > >Now you can develop your Dog and Cat, inheriting eat() and __init__ >from Animal, yet each subclass providing its own __repr__ methods. >"Hello world, I'm a Cat at %s" % id(self). Yes, you could make the >__repr__ invoke __class__.__name__ and share it -- show that later.... >? > >>>> class Animal: > def __init__(self): > self.stomach = deque() > def eat(self, item): > self.stomach.append(item) > def poop(self): > if len(self.stomach)>0: > return self.stomach.popleft() > def __repr__(self): > return "I'm a %s at %s" % (self.__class__.__name__, id(self)) > > >>>> class Dog(Animal): > pass > >>>> class Cat(Animal): > pass > >>>> thing1 = Dog() >>>> thing2 = Cat() >>>> thing1 >I'm a Dog at 138223820 >>>> thing2 >I'm a Cat at 138223852 > >Students see how inheritance means specializing as you go down the >tree (consistent with most introductory treatments). > >Your spin seems to tilted towards Cats with not enough other >subclasses of Animal to make inheritance seem all that necessary. >Your ancestor is mostly for "herding cats" (counting instances), isn't >so much a "blueprint" for different subclasses of the Animal idea >(aardvark versus eel). > >More generally I think using an ancestor class to do instance counting >is a little on the "too difficult" side for a core introductory >running example. It leads you to introduce an interception of __del__ >as your first example of operator over-riding. 
This seems rather >unpythonic, as the __del__ method is rather rarely intercepted, >compared to say __set__ and __get__ (per Alex Martelli in this lecture >@ Google: >). > >I was also struck by how often you compare classes to modules (modules >in Chapter 16 right?), suggesting a Perlish upbringing -- as did your >suggestion to use '_' in place of 'self' for "maximal uncluttering" >(paraphrase). > >When Perl decided to make the leap to OO, it was "the module" that got >blessed for this purpose yes? Are you a closet Perl Monger then? > >I notice you tend to favor your private terminology and don't resort >to the CS vocabulary as often, e.g. "polymorphic" and "polymorphism" >are not included. If you're wanting your students to gain fluency >with the surrounding literature, I think at least that one deserves >more focus, in conjunction with inheritance. > >I think the a first class could be all data (like a C struct). Like, >this example says a lot: > >>>> class Foo: > bar = 1 > > >>>> f = Foo() >>>> g = Foo() >>>> f.bar = 1 >>>> f.bar = 2 >>>> g.bar >1 >>>> f.bar >2 >>>> f.__dict__ >{'bar': 2} >>>> g.__dict__ >{} >>>> g.bar >1 > >however once you start adding methods, I don't think postponing the >appearance of __init__ for so long is a good idea. I would focus on a >Dog and a Cat (or any other two animals), both with __init__ >constructors and eat methods, then (as a next step) introduce the >Animal superclass as a case of "refactoring" by inheritance -- a good >time to mention the "is a" abstraction (a Dog "is an" Animal). Show >how you're economizing on code by having an ancestor, make OO seem >like a revelation (which it is/was). > >I don't think you're alone in finding __ribs__ somewhat intimidating, >strange-looking (a hallmark of Python), but I think these should be >tackled right away, perhaps with built-in data types first, i.e. that >2 .__add__(2) example. How about a function call ribs() that prints >the special names in any object... 
> >>>> import re >>>> ex = re.compile("__[a-z]+__") > >>>> def ribs(it): > return re.findall(ex, " ".join(dir(it))) > >I think of __init__ as being triggered by a "birth syntax" i.e. to >call a class directly, as in Animal(), is to call for an instance of >that class, and that invokes __init__ (__new__ first though). > >>>> class Foo: > def __init__(self): > print("I am born!") > > @staticmethod > def __call__(): > print("What?") > > >>>> f = Foo() >I am born! >>>> f() >What? >>>> Foo.__call__() >What? > >I like your point that intercepting __add__ is just like >"intercepting" an Animal's 'talk' method in a subclass, i.e. >interception is happening in both cases, it's just that some names are >special (as you say) in being triggered by *syntax* rather than their >actual names -- although that form is also usable (as you point out >using __add__) -- plus some special names *don't* have special >triggering syntax, i.e. just use the __rib__ directly, as in "if >__name__ == '__main__':" > >I like this section: > >Background on Languages >- languages poster >- progression of languages 1 2 > - domain-specific (FORTRAN, COBOL) > - general-purpose, high-level (C) > - one step above assembler > - problem was complexity, one big pile of data > - object-oriented (C++) > - data now contained within independent objects > - human oriented (Java, Python) > - garbage collection > - dynamic typing > >... and your focus on Mandelbrot / fractals. > >Lots of substance. > >Kirby > >On Thu, Jan 15, 2009 at 3:09 PM, David MacQuigg ><macquigg at ece.arizona.edu> wrote: >> I >> >> >> _______________________________________________ >> Edu-sig mailing list >> Edu-sig at python.org >>
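For anyone who wants to try the deque-based Animal example from the quoted message directly, here is the same logic as a clean, unquoted Python 3 snippet (same classes and methods as Kirby's version above):

```python
from collections import deque

class Animal:
    def __init__(self):
        # A deque models the digestive tract: food goes in one
        # end and comes out the other, first-in first-out.
        self.stomach = deque()

    def eat(self, item):
        self.stomach.append(item)

    def poop(self):
        if len(self.stomach) > 0:
            return self.stomach.popleft()

    def __repr__(self):
        return "I'm a %s at %s" % (self.__class__.__name__, id(self))

class Dog(Animal):
    pass

class Cat(Animal):
    pass

zebra = Animal()
zebra.eat('straw')
```

Dog and Cat inherit __init__, eat, and poop unchanged, and __repr__ reports the subclass name automatically via __class__.__name__.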
|
https://mail.python.org/pipermail/edu-sig/2009-January/009033.html
|
CC-MAIN-2014-15
|
refinedweb
| 1,532
| 65.01
|
1. Introduction
We will see how we can play sound files in C#. Sound can be played in two different ways: one way is using the System.Media namespace, and the other is using the Windows Media Player ActiveX control.
In this article, I will cover both methods by playing a ".wav" file. You can explore further functionality in each area yourself. OK, let us start.
2. About the Sample
The sample application’s screenshot is shown below:
The Play button will start playing the sound and the Stop button will stop it. The Loop It checkbox is enabled when you click the Stop button and disabled while the sound is playing. When the checkbox is checked, the sound file is played repeatedly. The next portion of the form is occupied by the Windows Media Player ActiveX control, which plays the same sound file played by the SoundPlayer.
3. Adding sound as Resource
In the first method, we are going to use the sound as an application resource. So first we add the sound file to our project, and then drag and drop it into the resource collection. This whole step is shown in the below video.
Video Steps
- Create a folder for sound
- Add a single sound wav file to the sound folder in project explorer
- Drag and Drop this added sound file to the application’s resource collection.
4. Source Code for using SoundPlayer
1) Before the form class definition, use the required namespace for playing the sound. In the first method, we are going to access the SoundPlayer class to play the sound. Below is the namespace inclusion:
//Sample 01: Include the Required Namespace
using System.Media;
2) Next, a private member of type SoundPlayer is created. We will use this instance player for playing the sound. Below is the code:
//Sample 02: Create Sound Player
private SoundPlayer player = new SoundPlayer();
3) When the Play button is clicked, we first disable the checkbox. Then we play the sound that we added as a resource. While the sound plays, we check the status of the checkbox and, based on that, play the sound once or repeatedly. Below is the code for that:
//Sample 03: Play button handler
private void btnPlay_Click(object sender, EventArgs e)
{
//Sample 03_1: Disable the Checkbox button
chkLoop.Enabled = false;
//Sample 03_2: Play the wav or do play looping
player.Stream = Properties.Resources.song1;
if (chkLoop.Checked)
{
player.PlayLooping();
}
else
{
player.Play();
}
}
In the above code, we accessed the sound using the "Properties.Resources" and the sound resource is assigned to the "Stream Property" of the player object. The "Play() Method" will play the sound once and "PlayLooping() Method" will play the sound file multiple times.
The IntelliSense feature will help you add the resource added to the resource collection. We added song1.wav as a sound resource and in the below video you can see accessing the added resource in code with IntelliSense help.
Video Steps
- Move to the Tag sample 3_2 in the code window
- Shows IntelliSense help for the added sound resource
4) When the Stop button is clicked, we call the Stop method of the player and meanwhile enable the checkbox so that the play mode can be changed. Below is the code for the stop handler:
//Sample 04: Stop the Sound Player
private void btnStop_Click(object sender, EventArgs e)
{
player.Stop();
chkLoop.Enabled = true;
}
Note that the above method using the SoundPlayer class supports only .wav files. To play other sound formats or even videos, we should use the Windows Media Player control.
5. The Windows Media Player Control
Windows Media Player control is not a standard control. It is available as an ActiveX control once Windows Media Player is installed on your system. This control supports rich functionality, but for this sample I use only the simple function of playing a sound file.
Look at the below video that shows how the window media player control is added to the form.
Video Steps
- From the toolbox general area, choose items option is selected using the context menu.
- In the displayed dialog, COM components tab opened
- Window media player control is selected from the bottom of the list
- The Component added to the toolbox is placed on the form for use
After placing the control, its default name is changed to WMPlay. In the form load handler, the below piece of code is added:
//Sample 05: Set Sound URL for the Media Player
private void frmSound_Load(object sender, EventArgs e)
{
string URL = "song1.wav";
WMPlay.settings.autoStart = false;
WMPlay.URL = URL;
}
In the form load, the "autoStart" property of the control is set to false. That means even though the control knows what it needs to play, it will wait for the user to say "OK, now play the loaded file". After turning off Auto Start, we specify the URL, which is nothing but the path to the sound file. Here I don't specify any path, so the control assumes the file is available in the same location as the exe.
When the form is displayed, the media player is ready to play the loaded sound file. All it needs is somebody pressing the play button on it, and once it is pressed, the media player starts playing the sound.
The Windows Media Player ActiveX control is a rich control, and you can play different types of sound as well as video files.
Leave your comment(s) here.
|
http://www.mstecharticles.com/2012/06/c-playing-sound-using-systemmdia-and.html
|
CC-MAIN-2017-17
|
refinedweb
| 951
| 70.73
|
Writing this fails the type check:
val list = List(1,3,5,2,4)
list sortBy (i => -i) //this is ok
def wrappedSort[A,B](a: List[A])(by: A => B): List[A] = {
a sortBy by
} // this fails type check
wrappedSort(list)(i => -i) //So this won't work either
We know the compile error is: No implicit Ordering defined for B.
To make it work, I had to give the wrapper method the same implicit argument as the wrapped method's, which is:
import math.Ordering
def wrappedSort[A,B](a: List[A])(by: A => B)(implicit ord: Ordering[B]): List[A] = {
a sortBy by
}
But this is quite annoying. When working to abstract over or extend some library code, I come across complicated context bounds that I must re-implement manually. Is there a workaround so that I don't have to specify the implicit argument in my own abstractions?
In this case, Predef (which is in scope everywhere unless you specifically disable it in the compiler) contains an instance of Ordering[Int]. Hence, the compiler doesn't fail, because the implicit is in scope and it knows exactly which implicit it has to search for. When you provide an abstract B with that function signature, you effectively tell the compiler that B will not be supplied with an Ordering for all B (which is effectively every single type in the known universe), and the requirement that B have an Ordering, as dictated by the sortBy function on List, fails immediately.
There really isn't a way to avoid this nor should you want to avoid it at all. It's by design and a safety feature of the compiler.
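While the implicit requirement itself cannot be dropped, standard Scala context-bound syntax makes propagating it less verbose: [B: Ordering] is shorthand for an extra (implicit ord: Ordering[B]) parameter list, so it desugars to exactly the signature above without spelling it out:

```scala
object WrappedSortDemo {
  // [B: Ordering] desugars to (implicit ord: Ordering[B]),
  // so the library's requirement is forwarded, just tersely.
  def wrappedSort[A, B: Ordering](a: List[A])(by: A => B): List[A] =
    a.sortBy(by)

  def main(args: Array[String]): Unit =
    println(wrappedSort(List(1, 3, 5, 2, 4))(i => -i))
}
```

This is sugar only: callers still need an Ordering[B] in scope, which is precisely the safety guarantee the answer describes.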
|
http://jakzaprogramowac.pl/pytanie/59533,why-defining-a-wrapping-method-over-another-method-with-an-implicit-argument-doesn-39-t-work
|
CC-MAIN-2017-43
|
refinedweb
| 301
| 55.07
|
Hello
This is somewhat difficult to estimate, but assuming your parents were invested in savings bonds, US Treasuries, or basic bank savings/money market funds, the average rate for 2008 was approximately 2-2.5%.
Using this as a baseline, I would estimate the balance required to earn $45K at 2-2.5% growth to be approximately $1,800,000 to $2,250,000.
But if they were invested in other items then this figure would be distorted.
They could have held personal loans or other items which would have generated a larger return (i.e. 7%)
Assuming a maximum rate of 10%, the amount would be about $450,000.
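All of these estimates come from the same formula, principal = annual interest / rate; a quick sketch to check the figures:

```python
def principal_needed(interest, rate):
    """Balance required to earn `interest` per year at a given annual rate."""
    return interest / rate

low  = principal_needed(45_000, 0.02)    # conservative 2% estimate
high = principal_needed(45_000, 0.025)   # 2.5% estimate
cap  = principal_needed(45_000, 0.10)    # 10% upper-bound estimate
```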
Ok. Thanks. I appreciate that response. I'm just doing some investigating. Thank you again!
|
http://www.justanswer.com/finance/82593-2008-45-000-interest-earned-stated-parent-s.html
|
CC-MAIN-2014-23
|
refinedweb
| 129
| 66.74
|
Like unit testing, for performance
A modern load testing tool for developers and testers in the DevOps era.
Download · Install · Documentation · Community
k6 is a modern load testing tool, building on Load Impact's years of experience in the load and performance testing industry. It provides a clean, approachable scripting API, local and cloud execution, flexible configuration, with command & control through CLI or a REST API.
This is how load testing should look in the 21st century.
Menu
Features
- HTTP/1.1, HTTP/2 and WebSocket protocol support
- TLS features: client certificates, configurable SSL/TLS versions and ciphers
- Batteries included: Cookies, Crypto, Custom metrics, Encodings, Environment variables, JSON, HTML forms, files, flexible execution control, and more.
- Built-in HAR converter: record browser sessions as `.har` files and directly convert them to k6 scripts
- Flexible metrics storage and visualization: InfluxDB (+Grafana), JSON or Load Impact Insights
- Cloud execution and distributed tests (currently only on infrastructure managed by Load Impact, with native distributed execution in k6 planned for the near future!)
There's even more! See all features available in k6.
Install
Mac
Install with Homebrew by running:
brew install k6
Windows
You can manually download and install the official `.msi` installation package or, if you use the chocolatey package manager, follow these instructions to set up the k6 repository.
Linux
For Debian-based Linux distributions, you can install k6 from the private deb repo like this:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 379CE192D401AB61
echo "deb stable main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install k6
And for rpm-based ones like Fedora and CentOS:
wget -O bintray-loadimpact-rpm.repo
sudo mv bintray-loadimpact-rpm.repo /etc/yum.repos.d/
sudo dnf install k6  # use yum instead of dnf for older distros
Docker
docker pull loadimpact/k6
Pre-built binaries & other platforms
If there isn't an official package for your operating system or architecture, or if you don't want to install a custom repository, you can easily grab a pre-built binary from the GitHub Releases page. Once you download and unpack the release, you can optionally copy the `k6` binary it contains somewhere in your `PATH`, so you are able to run k6 from any location on your system.
Build from source
k6 is written in Go, so it's just a single statically-linked executable and very easy to build and distribute. To build from source you need Git and Go (1.11 or newer). Follow these instructions:
- Run `go get github.com/loadimpact/k6` which will:
  - git clone the repo and put the source in `$GOPATH/src/github.com/loadimpact/k6`
  - build a `k6` binary and put it in `$GOPATH/bin`
- Make sure you have `$GOPATH/bin` in your `PATH` (or copy the `k6` binary somewhere in your `PATH`), so you are able to run k6 from any location.
- Tada, you can now run k6 using `k6 run script.js`
Running k6
k6 works with the concept of virtual users (VUs) that execute scripts - they're essentially glorified, parallel `while(true)` loops. Scripts are written using JavaScript, as ES6 modules, which allows you to break larger tests into smaller and more reusable pieces, making it easy to scale tests across an organization.
Scripts must contain, at the very least, an exported `default` function - this defines the entry point for your VUs, similar to the `main()` function in many languages. Let's create a very simple script that makes an HTTP GET request to a test website:
import http from "k6/http";

export default function() {
  let response = http.get("");
};
The script details and how we can extend and configure it will be explained below, but for now simply save the above snippet as a `script.js` file somewhere on your system. Assuming that you've installed k6 correctly, on Linux and Mac you can run the saved script by executing `k6 run script.js` from the same folder. For Windows the command is almost the same - `k6.exe run script.js`.
If you decide to use the k6 docker image, the command will be slightly different. Instead of passing the script filename to k6, a dash is used to instruct k6 to read the script contents directly via the standard input. This allows us to avoid messing with docker volumes for such a simple single-file script, greatly simplifying the docker command:

docker run -i loadimpact/k6 run - <script.js
In some situations it may also be useful to execute remote scripts. You can do that with HTTPS URLs in k6 by importing them in the script via their URL or simply specifying their URL in the CLI command:
k6 run github.com/loadimpact/k6/samples/http_2.js (k6 "knows" a bit about github and cdnjs URLs, so this command is actually shorthand for
k6 run raw.githubusercontent.com/loadimpact/k6/master/samples/http_2.js)
For more information on how to get started running k6, please look at the Running k6 documentation page. If you want to know more about making and measuring HTTP requests with k6, take a look here and here. And for information about the commercial Load Impact services like distributed cloud execution (the
k6 cloud command) or Insights (
k6 run -o cloud), you can visit loadimpact.com or view the Load Impact documentation.
Overview
In this section we'll briefly explore some of the basic concepts and principles of how k6 works. If you want to learn more in-depth about the k6 scripting API, results output, and features, you can visit the full k6 documentation website at docs.k6.io.
Init and VU stages
Earlier, in the Running k6 section, we mentioned that scripts must contain a
default function. "Why not just run my script normally, from top to bottom", you might ask - the answer is: we do, but code inside and outside your
default function can do different things.
Each virtual user (VU) executes your script in a completely separate JavaScript runtime, parallel to all of the other running VUs. Code inside the
default function is called VU code, and is run over and over, for as long as the test is running. Code outside of the
default function is called init code, and is run only once per VU, when that VU is initialized.
VU code can make HTTP and websocket requests, emit metrics, and generally do everything you'd expect a load test to do, with a few important exceptions - you can't load anything from your local filesystem or import any other modules. This all has to be done from the init code. That's also the reason why we initialize all needed VUs before any of them starts the actual load test by executing the
default function.
But there's another, more interesting reason. By forcing all imports and file reads into the init context, we design for distributed execution. We know which files will be needed, so we distribute only those files to each node in the cluster. We know which modules will be imported, so we can bundle them up in an archive from the get-go. And, tying into the performance point above, the other nodes don't even need writable file systems - everything can be kept in-memory.
This means that if your script works when it's executed with
k6 run locally, it should also work without any modifications in a distributed execution environment like
k6 cloud (that executes it in the Load Impact cloud infrastructure) or, in the future, with the planned k6 native cluster execution mode.
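The init/VU split described above can be modeled in a few lines of plain JavaScript. This is a hypothetical sketch (the runVU, init and defaultFn names are invented for illustration, and it is not a runnable k6 script): init code executes once per VU, while the default function runs once per iteration.

```javascript
// Hypothetical model of the k6 VU lifecycle (plain JavaScript, not a k6 script):
// init code runs once per VU, the default function runs once per iteration.
function runVU(script, iterations) {
    const log = [];
    const context = script.init(log);      // init stage: once per VU
    for (let i = 0; i < iterations; i++) {
        script.defaultFn(context, log);    // VU stage: repeated for the whole test
    }
    return log;
}

const script = {
    init: (log) => { log.push("init"); return { counter: 0 }; },
    defaultFn: (ctx, log) => { ctx.counter++; log.push("iteration " + ctx.counter); },
};

console.log(runVU(script, 3));
// "init" appears once, the iteration entries three times
```

Each VU would get its own context object, which is why state inside one VU never leaks into another.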
Script execution
For simplicity, unlike many other JavaScript runtimes, a lot of the operations in k6 are synchronous. That means that, for example, the
let response = http.get("") call from the Running k6 example script will block the VU execution until the HTTP request is completed, save the response information in the
response variable and only then continue executing the rest of the script - no callbacks and promises needed.
This simplification works because k6 isn't just a single JavaScript runtime. Instead each VU independently executes the supplied script in its own separate and semi-isolated JavaScript runtime, in parallel to all of the other running VUs. This allows us to fully utilize modern multi-core hardware, while at the same time lowering the script complexity by having mostly synchronous functions. Where it makes sense, we also have in-VU parallelization, for example the
http.batch() function (which allows a single VU to make multiple simultaneous HTTP requests like a browser/real user would) or the websocket support.
As an added bonus, there's an actual
sleep() function! And you can also use the VU separation to reuse data between iterations (i.e. executions of the
default function) in the same VU:
var vuLocalCounter = 0;

export default function() {
    vuLocalCounter++;
}
Script options and execution control
So we've mentioned VUs and iterations, but how are those things controlled?
By default, if nothing is specified, k6 runs a script with only 1 VU and for 1 iteration only. Useful for debugging, but usually not very useful when doing load testing. For actual script execution in a load test, k6 offers a lot of flexibility - there are a few different configuration mechanisms you can use to specify script options, and several different options to control the number of VUs and how long your script will be executed, among other things.
Let's say that you want to specify the number of VUs in your script. In order of precedence, you can use any of the following configuration mechanisms to do it:
Command-line flags: k6 run --vus 10 script.js, or via the short -u flag syntax if we want to save 3 keystrokes (k6 run -u 10 script.js).
Environment variables: setting K6_VUS=20 before you run the script with k6. Especially useful when using the docker k6 image and when running in containerized environments like Kubernetes.
Your script can export an options object that k6 reads and uses to set any options you want; for example, setting VUs would look like this:
export let options = {
    vus: 30,
};

export default function() { /* ... do whatever ... */ }
This functionality is very useful, because here you have access to key-value environment variables that k6 exposes to the script via the global __ENV object, so you can use the full power of JavaScript to do things like:
export let options = {};
if (__ENV.script_scenario == "staging") {
    options = { /* first set of options */ };
} else {
    options = { /* second set of options */ };
}
Or any variation of the above, like importing different config files, etc. Also, having most of the script configuration right next to the script code makes k6 scripts very easily version-controllable.
A global JSON config. By default k6 looks for it in the config home folder of the current user (OS-dependent, for Linux/BSDs k6 will look for config.json inside of ${HOME}/.config/loadimpact/k6), though that can be modified with the --config/-c CLI flag. It uses the same option keys as the exported options from the script file, so we can set the VUs by having config.json contain { "vus": 1 }. Although it rarely makes sense to set the number of VUs there, the global config file is much more useful for storing things like login credentials for the different outputs, as used by the k6 login subcommand...
Configuration mechanisms have an order of precedence. As presented, options at the top of the list override configuration mechanisms that are specified lower in the list. If we used all of the above examples for setting the number of VUs, we would end up with 10 VUs, since the CLI flags have the highest priority. Also please note that not all of the available options are configurable via all different mechanisms - some options may be impractical to specify via simple strings (so no CLI/environment variables), while other rarely-used ones may be intentionally excluded from the CLI flags to avoid clutter - refer to the options docs for more information.
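The precedence rules can be sketched as a simple merge, where later (higher-priority) sources override earlier ones. This is an illustrative model, not k6's actual option-resolution code; the resolveOptions name is invented:

```javascript
// Sketch of the option precedence described above
// (assumed order: CLI > environment > script options > global JSON config).
// Object.assign lets later sources in the argument list win.
function resolveOptions(jsonConfig, scriptOptions, envOptions, cliOptions) {
    return Object.assign({}, jsonConfig, scriptOptions, envOptions, cliOptions);
}

const resolved = resolveOptions(
    { vus: 1 },    // global config.json
    { vus: 30 },   // exported script options
    { vus: 20 },   // K6_VUS environment variable
    { vus: 10 }    // --vus CLI flag
);
console.log(resolved.vus); // 10 - the CLI flag wins
```

If the CLI flag were absent, the environment variable's value of 20 would win, and so on down the list.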
As shown above, there are several ways to configure the number of simultaneous virtual users k6 will launch. There are also different ways to specify how long those virtual users will be running. For simple tests you can:
- Set the test duration with the --duration/-d CLI flag (or the K6_DURATION environment variable and the duration script/JSON option). For ease of use, duration is specified with human readable values like 1h30m10s - k6 run --duration 30s script.js, k6 cloud -d 15m10s script.js, export K6_DURATION=1h, etc. If set to 0, k6 won't stop executing the script unless the user manually stops it.
- Set the total number of script iterations with the --iterations/-i CLI flag (or the K6_ITERATIONS environment variable and the iterations script/JSON option). k6 will stop executing the script whenever the total number of iterations (i.e. the number of iterations across all VUs) reaches the specified number. So if you have k6 run --iterations 10 --vus 10 script.js, then each VU would make only a single iteration.
For more complex cases, you can specify execution stages. They are a list of duration/target-VUs pairs. These pairs instruct k6 to linearly ramp up, ramp down, or stay at the number of VUs specified for the period specified. Execution stages can be set via the stages script/JSON option as an array of { duration: ..., target: ... } pairs, or with the --stage/-s CLI flags and the K6_STAGE environment variable via the duration:target,duration:target... syntax.
For example, the following options would have k6 linearly ramping up from 5 to 10 VUs over a period of 3 minutes (k6 starts with the vus number of VUs, or 1 by default), then staying flat at 10 VUs for 5 minutes, then ramping up from 10 to 35 VUs over the next 10 minutes, before finally ramping down to 0 VUs over the final 90 seconds.
export let options = {
    vus: 5,
    stages: [
        { duration: "3m", target: 10 },
        { duration: "5m", target: 10 },
        { duration: "10m", target: 35 },
        { duration: "1m30s", target: 0 },
    ]
};
Alternatively, you can use the CLI flags
--vus 5 --stage 3m:10,5m:10,10m:35,1m30s:0 or set the environment variables
K6_VUS=5 K6_STAGE="3m:10,5m:10,10m:35,1m30s:0" to achieve the same results.
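The linear ramping behavior of stages can be illustrated with a small interpolation helper. This is only a sketch of the idea (vusAtTime is a hypothetical function, durations are simplified to seconds, and this is not how k6 itself schedules VUs):

```javascript
// Illustrative sketch: map elapsed time (in seconds) to a linearly
// interpolated VU target, given a starting VU count and a stages list.
function vusAtTime(startVus, stages, t) {
    let from = startVus;
    let elapsed = 0;
    for (const stage of stages) {
        if (t < elapsed + stage.duration) {
            const progress = (t - elapsed) / stage.duration;
            return Math.round(from + (stage.target - from) * progress);
        }
        elapsed += stage.duration;
        from = stage.target;   // next stage ramps from this stage's target
    }
    return stages.length ? stages[stages.length - 1].target : startVus;
}

const stages = [
    { duration: 180, target: 10 },  // 3m ramp 5 -> 10
    { duration: 300, target: 10 },  // 5m flat at 10
    { duration: 600, target: 35 },  // 10m ramp 10 -> 35
    { duration: 90,  target: 0 },   // 1m30s ramp 35 -> 0
];
console.log(vusAtTime(5, stages, 0));    // 5 - the starting VU count
console.log(vusAtTime(5, stages, 90));   // 8 - halfway through the first ramp
console.log(vusAtTime(5, stages, 300));  // 10 - inside the flat stage
```

The point of the sketch is that each stage ramps from wherever the previous stage ended, which is why the flat stage above needs the same target as the ramp before it.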
For a complete list of supported k6 options, refer to the documentation at docs.k6.io/docs/options.
Hint: besides accessing the supplied environment variables through the
__ENV global object briefly mentioned above, you can also use the execution context variables
__VU and
__ITER to access the current VU number and the number of the current iteration for that VU. These variables can be very useful if you want VUs to execute different scripts/scenarios or to aid in generating different data per VU.
http.post("", {username: `testuser${__VU}@testsite.com`, /* ... */})
For even more complex scenarios, you can use the k6 REST API and the
k6 status,
k6 scale,
k6 pause,
k6 resume CLI commands to manually control a running k6 test. For cloud-based tests, executed on Load Impact's managed infrastructure via the
k6 cloud command, you can also specify the VU distribution percentages for different load zones when executing load tests, giving you scalable and geographically-distributed test execution.
Setup and teardown
Beyond the init code and the required VU stage (i.e. the
default function), which is code run for each VU, k6 also supports test-wide setup and teardown stages, like many other testing frameworks and tools. The
setup and
teardown functions, like the
default function, need to be
exported. But unlike the
default function,
setup and
teardown are only called once for a test -
setup() is called at the beginning of the test, after the init stage but before the VU stage (
default function), and
teardown() is called at the end of a test, after the last VU iteration (
default function) has finished executing. This is also supported in the distributed cloud execution mode via
k6 cloud.
export function setup() {
    return { v: 1 };
}

export default function(data) {
    console.log(JSON.stringify(data));
}

export function teardown(data) {
    if (data.v != 1) {
        throw new Error("incorrect data: " + JSON.stringify(data));
    }
}
A copy of whatever data
setup() returns will be passed as the first argument to each iteration of the
default function and to
teardown() at the end of the test. For more information and examples, refer to the k6 docs here.
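The copy semantics can be modeled in plain JavaScript. The sketch below (runTest is a hypothetical harness, not the k6 runtime) shows why mutations inside one iteration never affect what teardown() sees:

```javascript
// Model of the setup/default/teardown data hand-off described above:
// each stage receives a copy of what setup() returned.
function runTest(setup, defaultFn, teardown, iterations) {
    const data = setup();
    for (let i = 0; i < iterations; i++) {
        defaultFn(JSON.parse(JSON.stringify(data))); // each iteration gets a copy
    }
    teardown(JSON.parse(JSON.stringify(data)));      // teardown gets a copy too
}

runTest(
    () => ({ v: 1 }),
    (data) => { data.v++; },  // mutating the iteration's copy has no lasting effect
    (data) => {
        if (data.v != 1) throw new Error("incorrect data: " + JSON.stringify(data));
    },
    3
);
console.log("teardown saw the original setup data");
```

A practical consequence: if you need to share mutable state between iterations of the same VU, use an init-context variable (like the vuLocalCounter example earlier), not the setup() data.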
Metrics, tags and groups
By default k6 measures and collects a lot of metrics about the things your scripts do - the duration of different script iterations, how much data was sent and received, how many HTTP requests were made, the duration of those HTTP requests, and even how long the TLS handshake of a particular HTTPS request took. To see a summary of these built-in metrics in the output, you can run a simple k6 test, e.g.
k6 run github.com/loadimpact/k6/samples/http_get.js. More information about the different built-in metrics collected by k6 (and how some of them can be accessed from inside of the scripts) is available in the docs here.
k6 also allows the creation of user-defined
Counter,
Gauge,
Rate and
Trend metrics. They can be used to more precisely track and measure a custom subset of the things that k6 measures by default, or anything else the user wants, for example tracking non-timing information that is returned from the remote system. You can find more information about them here and a description of their APIs here.
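As a rough mental model of one of these metric types, a Rate metric can be thought of as tracking the fraction of truthy values added to it. The class below is an illustrative plain-JavaScript model, not k6's actual implementation:

```javascript
// Simplified model of a Rate metric: the fraction of added samples
// that were truthy/non-zero.
class Rate {
    constructor(name) {
        this.name = name;
        this.trues = 0;
        this.total = 0;
    }
    add(value) {
        this.total++;
        if (value) this.trues++;
    }
    get rate() {
        return this.total === 0 ? 0 : this.trues / this.total;
    }
}

const failureRate = new Rate("check_failure_rate");
[true, false, false, false].forEach((failed) => failureRate.add(failed));
console.log(failureRate.rate); // 0.25 - one failure out of four samples
```

This is the shape of metric used in the thresholds example further down, where a `rate<0.01` expression is checked against exactly this kind of fraction.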
Every measurement metric in k6 comes with a set of key-value tags attached. Some of them are automatically added by k6 - for example a particular
http_req_duration metric may have the
method=GET,
status=200,
url=, etc. system tags attached to it. Others can be added by users - globally for a test run via the
tags option, or individually as a parameter in a specific HTTP request, websocket connection,
userMetric.Add() call, etc.
These tags don't show up in the simple summary at the end of a k6 test (unless you reference them in a threshold), but they are invaluable for filtering and investigating k6 test results if you use any of the outputs mentioned below. k6 also supports simple hierarchical groups for easier code and result organization. You can find more information about groups and system and user-defined tags here.
Checks and thresholds
Checks and thresholds are some of the k6 features that make it very easy to use load tests like unit and functional tests and integrate them in a CI (continuous integration) workflow.
Checks are similar to asserts, but differ in that they don't halt execution. Instead they just store the result of the check, pass or fail, and let the script execution continue. Checks are great for codifying assertions relating to HTTP requests/responses. For example, making sure an HTTP response code is 2xx.
Thresholds are global pass/fail criteria that can be used to verify if any result metric is within a specified range. They can also reference a subset of values in a given metric, based on the used metric tags. Thresholds are specified in the options section of a k6 script. If they are exceeded during a test run, k6 would exit with a nonzero code on test completion, and can also optionally abort the test early. This makes thresholds ideally suited as checks in a CI workflow!
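To make the threshold idea concrete, here is an illustrative sketch of how an expression like "p(95)<500" could be evaluated against collected duration samples. This is not k6's internal code, and the percentile method shown (nearest-rank) is just one common choice:

```javascript
// Nearest-rank percentile over a list of samples.
function percentile(samples, p) {
    const sorted = [...samples].sort((a, b) => a - b);
    const idx = Math.ceil((p / 100) * sorted.length) - 1;
    return sorted[Math.max(0, idx)];
}

// Evaluate a "p(P) < limit" style threshold against the samples.
function thresholdPasses(samples, p, limit) {
    return percentile(samples, p) < limit;
}

const durations = [120, 180, 210, 250, 300, 320, 350, 400, 450, 700]; // ms
console.log(percentile(durations, 95));           // 700
console.log(thresholdPasses(durations, 95, 500)); // false - this run would fail
```

A single slow outlier is enough to push the 95th percentile over the limit here, which is exactly why percentile-based thresholds catch tail-latency problems that averages hide.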
import http from "k6/http";
import { check, group, sleep } from "k6";
import { Rate } from "k6/metrics";

// A custom metric to track failure rates
var failureRate = new Rate("check_failure_rate");

// Options
export let options = {
    stages: [
        // Linearly ramp up from 1 to 50 VUs during first minute
        { target: 50, duration: "1m" },
        // Hold at 50 VUs for the next 3 minutes and 30 seconds
        { target: 50, duration: "3m30s" },
        // Linearly ramp down from 50 to 0 VUs over the last 30 seconds
        { target: 0, duration: "30s" }
        // Total execution time will be ~5 minutes
    ],
    thresholds: {
        // We want the 95th percentile of all HTTP request durations to be less than 500ms
        "http_req_duration": ["p(95)<500"],
        // Requests with the staticAsset tag should finish even faster
        "http_req_duration{staticAsset:yes}": ["p(99)<250"],
        // Thresholds based on the custom metric we defined and use to track application failures
        "check_failure_rate": [
            // Global failure rate should be less than 1%
            "rate<0.01",
            // Abort the test early if it climbs over 5%
            { threshold: "rate<=0.05", abortOnFail: true },
        ],
    },
};

// Main function
export default function () {
    let response = http.get("");

    // check() returns false if any of the specified conditions fail
    let checkRes = check(response, {
        "http2 is used": (r) => r.proto === "HTTP/2.0",
        "status is 200": (r) => r.status === 200,
        "content is present": (r) => r.body.indexOf("Welcome to the LoadImpact.com demo site!") !== -1,
    });

    // We reverse the check() result since we want to count the failures
    failureRate.add(!checkRes);

    // Load static assets, all requests
    group("Static Assets", function () {
        // Execute multiple requests in parallel like a browser, to fetch some static resources
        let resps = http.batch([
            ["GET", "", null, { tags: { staticAsset: "yes" } }],
            ["GET", "", null, { tags: { staticAsset: "yes" } }]
        ]);
        // Combine check() call with failure tracking
        failureRate.add(!check(resps, {
            "status is 200": (r) => r[0].status === 200 && r[1].status === 200,
            "reused connection": (r) => r[0].timings.connecting == 0,
        }));
    });

    sleep(Math.random() * 3 + 2); // Random sleep between 2s and 5s
}
You can save the above example as a local file and run it, or you can also run it directly from the github copy of the file with the
k6 run github.com/loadimpact/k6/samples/thresholds_readme_example.js command. You can find (and contribute!) more k6 script examples here:
Outputs
To make full use of your test results and to be able to fully explore and understand them, k6 can output the raw metrics to an external repository of your choice.
The simplest output option, meant primarily for debugging, is to send the JSON-encoded metrics to a file or to
stdout. Other output options are sending the metrics to an InfluxDB instance, an Apache Kafka queue, or even to the Load Impact cloud. This allows you to run your load tests locally or behind a company firewall, early in the development process or as a part of a CI suite, while at the same time being able to store their results in the Load Impact cloud, where you can use Insights for comparison and analysis. You can find more information about the available outputs here and about Load Impact Insights here and here.
Modules and JavaScript compatibility
k6 comes with several built-in modules for things like making (and measuring) HTTP requests and websocket connections, parsing HTML, reading files, calculating hashes, setting up checks and thresholds, tracking custom metrics, and others.
You can, of course, also write your own ES6 modules and
import them in your scripts, potentially reusing code across an organization. The situation with importing JavaScript libraries is a bit more complicated. You can potentially use some JS libraries in k6, even ones intended for Node.js if you use browserify, though if they depend on network/OS-related APIs, they likely won't work. You can find more details and instructions about writing or importing JS modules here.
Need help or want to contribute?
Types of questions and where to ask:
- How do I? -- Stack Overflow (use tags: k6, javascript, load-testing) or the Discourse forum at community.k6.io
- I got this error, why? -- Stack Overflow or community.k6.io
- I got this error and I'm sure it's a bug -- file an issue
- I have an idea/request -- file an issue
- Why do you? -- Slack
- When will you? -- Slack
If your questions are about any of the commercial Load Impact services like managed cloud execution and Load Impact Insights, you can contact support@loadimpact.com or write in the
#loadimpact channel in Slack.
If you want to contribute or help with the development of k6, start by reading CONTRIBUTING.md. Before you start coding, especially when it comes to big changes and features, it might be a good idea to first discuss your plans and implementation details with the k6 maintainers. You can do this either in the github issue for the problem you're solving (create one if it doesn't exist) or in the
#developers channel on Slack.
i have a sorted array of integers and i'm trying to find the two closest numbers. anyone have an idea of how to do this?
yes, i realize that...
Sort it. Walk it. Subtract them as you go.
would anyone mind explaining how to do it?
i've gotten this far, but i can't figure out how to store the smallest number while i compare it to the rest of the results.
the whole point is printing the two closest numbers so it will also have to remember those two numbers.
i'm having a hard time wrapping my mind around it.
Last edited by ominub; 10-20-2009 at 07:42 PM. Reason: clarification
Well, you said how. Store the smallest number (in a variable called, ooh, "smallest" maybe). (EDIT based on your edit: So you need to store two numbers (and therefore in two variables) in addition to the "smallest" above.)
There are no special C functions for doing this, so code it up like you would do it by hand.
Say your values were: 1,3,5,6, 9. How would you find the two numbers that are closest?
1 - 3 = diff = mingap = 2
3 - 1 is the same as 1 - 3 for gap, so we only need to subtract with the adjacent, larger, number.
3 - 5 = diff not lower than mingap, either.
5 - 6 = diff = new mingap of 1
As a practical matter, I'd prefer to work down the array values, to avoid any negative numbers or absolute value stuff.
Imagine you were told that you would be shown a series of cards with numbers on them. Each number is to be greater than the next. You will be shown 50 numbers, each one for as long as you require, but only one at a time. Your task is to identify the two closest together numbers. You may not write anything down, but you're free to use a basic calculator if you wish.
Would you be able to identify the closest two numbers after all cards have been shown to you?
This is exactly the kind of simple problem that anyone hoping to be a programmer should definitely be able to solve. If you can't come up with a logical solution then you're just not applying your mind to the problem. Think about it for a while longer until you eventually solve it on your own.
If you then know how to solve the problem but can't code up a solution because you're new to C then that's fine, just give it a shot anyway and post what you get.
As everybody said above, this is the pseudocode for it.
As the array is sorted:
Code:
DIFF = ARR[1] - ARR[0]
START_ELEMENT = 0
END_ELEMENT = 1
For i = 1 to N - 2
    TEMP_DIFF = ARR[i+1] - ARR[i]
    if (TEMP_DIFF < DIFF)
        START_ELEMENT = i
        END_ELEMENT = i + 1
        DIFF = TEMP_DIFF
    endif
I am trying to implement binary search tree. One method which I am trying to implement in the most efficient and stylish way is node insertion.
I am under the impression that while (true) is a bad practice, correct?
while (true) {
    if (n < currentNode.data) {
        if (currentNode.left == null) {
            currentNode.left = new node(n);
            break;
        } else {
            currentNode = currentNode.left;
        }
    } else if (currentNode.right == null) {
        currentNode.right = new node(n);
        break;
    } else {
        currentNode = currentNode.right;
    }
}
and here is the whole code:
package graph;

public class BSearchTree {
    private node head = null;

    public BSearchTree(int[] entries) {
        for (int a : entries) {
            insert(a);
        }
    }

    public void insert(int n) {
        if (head == null) {
            head = new node(n);
            return;
        }
        node currentNode = head;
        while (true) {
            if (n < currentNode.data) {
                if (currentNode.left == null) {
                    currentNode.left = new node(n);
                    break;
                } else {
                    currentNode = currentNode.left;
                }
            } else if (currentNode.right == null) {
                currentNode.right = new node(n);
                break;
            } else {
                currentNode = currentNode.right;
            }
        }
    }

    public static void main(String[] args) {
        BSearchTree bst = new BSearchTree(new int[]{2, 4, 1, 5});
        System.out.println(bst.toString());
    }

    private class node {
        int data = -1;
        node left = null;
        node right = null;

        public node(int n) {
            data = n;
        }
    }
}
What is scope? In programming languages, it is the idea of where the value of a variable is known. For instance, in Tcl, you might have the following:

#! /usr/local/bin/tclsh
set gvar 123 ; # this is a variable created in the global namespace
proc nest1 {} {
    set lvar {This is a test}
    puts $lvar ; # this variable is local to nest1
    puts $gvar
}
nest1
puts $lvar

which will raise an error when nest1 attempts to print gvar as coded above. This is because even though the gvar variable was created in the global namespace, nest1 won't look at variables declared there without the coder taking special action. So, now we will try the code again:
proc nest2 {} {
    global gvar
    set lvar {This is a test}
    puts $lvar ; # this variable is local to nest2
    puts $gvar
}
nest2
puts "lvar 2nd print $lvar"

Now nest2 should work fine - because tclsh was told by the global call that it should look in the global namespace when references to gvar occur. However, the 2nd print of lvar will fail. lvar was created within nest2, so it is not available to print outside of the proc. The only way to let some other proc, or code in the global namespace, access lvar is to put it into the global namespace. There is a second way to put variables into a specific namespace. Here's nest3, which puts both variables into the global namespace.
proc nest3 {} {
    set ::lvar {This is a test}
    puts $::lvar ; # this variable now lives in the global namespace
    puts $::gvar
}
nest3
puts "lvar 2nd print $::lvar"

Currently, using the namespace variable notation apparently is a slight performance hit, but using a consistent notation may be worth not having to remember to code global statements...
Tcl has several layers of scoping - from an article by Bryan Oakley, we read:
Sektor van Skijlen wrote:
> Could someone explain to me, what is the proper syntax with correct scope for
> variables, which are:
>
> - declared inside a namespace

variable varname ?initValue?

> - globally

global varname ?varname ...?
LV: With the advent of namespaces, one can provide a more consistent variable usage by using ::varname as opposed to invoking the global command with varname.

RHS: However, if you're accessing the variable a lot, it's worth using the global command, since it is faster to access the variable that way. Assuming performance matters in what you are doing.
% proc p1 {} { global v1 ; for {set i 0} {$i < 1000} {incr i} { set v1 } }
% proc p2 {} { for {set i 0} {$i < 1000} {incr i} { set ::v2 } }
% time { p1 } 100
1802 microseconds per iteration
% time { p2 } 100
4520 microseconds per iteration

schofield: Consider the above procs. Both p1 & p2 access the variables v1 & v2, respectively, many times. Should you only need to access the variable once or maybe twice, it is faster to fully qualify the variable name and pay the price of having tcl look up the variable, which is cheaper than the "global" call.
% proc p3 {} {global v3; set v3}
% proc p4 {} {set ::v4}
% set v3 1
1
% set v4 1
1
% time p3 100
41 microseconds per iteration
% time p4 100
33 microseconds per iteration
% proc g1 {} {global v3}
% time g1 100
51 microseconds per iteration

What I'd like to know is why it is more expensive to "global v3" than it is to "global v3; set v3"? This happened consistently with many "time" invocations.
LV: Curious - is it due to additional parsing? Or is Tcl just missing an optimization?

RHS: My guess (and it's just a guess) is that, when it goes to look up the variable:
- For the one linked via global, it's now a local variable so it's in the local table. This means that it finds the variable immediately
- For the qualified name, it has to parse the name, recognize the namespace it's in, and then go look in that namespace's table to get the value
> - accessed in a proc defined in the same namespace

Use variable if you wish to reference the namespace variable, global if you want to reference the global variable. Use neither to reference a local variable.
Please add other information, discussion, corrections to this page.
Note that merely declaring a variable with the global or variable command does NOT create the variable. It just tells Tcl where to go looking for the variable if there's a reference to it.
#include <LCD4Bit_mod.h>

LCD4Bit_mod lcd = LCD4Bit_mod(2);
int val = 0;

void setup()
{
    lcd.init();
    lcd.clear();
}

void loop()
{
    lcd.cursorTo(1, 0);
    lcd.print(val);
    val++;
}

#include <LCD4Bit_mod.h>

LCD4Bit_mod lcd = LCD4Bit_mod(2);
int val = 1;

void setup()
{
    lcd.init();
    lcd.clear();
}

void loop()
{
    lcd.cursorTo(1, 0);
    lcd.print(val);
}

#include <LiquidCrystal.h>

LiquidCrystal lcd(8, 9, 4, 5, 6, 7);
int val = 0;

void setup()
{
    lcd.begin(16, 2); // set up the columns and rows
    lcd.clear();
}

void loop()
{
    lcd.setCursor(0, 0);
    lcd.print(val);
    val++; // note: "val = val++" is undefined behavior; a plain increment is what's intended
}
It displays it too fast to see what is really being displayed ...
void loop()
{
    // ..
    if (millis() - prevTime > 500) // refresh rate 2x per sec
    {
        updateDisplay();
        prevTime = millis();
    }
    // .. do the rest
if (millis()%250) print_stuff(); in your loop. It will only print 4 times a second, fast enough.
It now counts to 500 in one second or so. Still not really fast - should it not be able to count to thousands in a second?
Let's put things into perspective here. The I2C protocol ...
But I2C (or any other serial) protocol is irrelevant in this particular situation.
And when I glance at the library itself, I see that they have added a 1 ms delay every time the Enable pin is pulsed, which is probably quite a lot longer than required.
I'm a real dolt when it comes to solving maths problems. I've made a function that returns the nearest specified multiple of a specified value, including a specified offset. I think it's pretty good, but I was hoping that someone could see if I'm doing it as efficiently as possible.
By offsets, I mean offsetting a series of multiples. For example, if the multiple is 5 and the offset is 2, the series will go like this:
2, 7, 12, 17, 22, 27 etc.

Code:
int nMult(int value, int mult, int offset)
{
    return ((mult * round((float)(value - offset) / mult)) + offset);
}

int round(float value)
{
    if ((value - int(value)) < .5)
        return int(value);
    else
        return int(value) + 1;
}
The reason I made the function is that list box heights in Win32 operate with multiple 16, offset 6. Any height specified for the height of a list box is rounded to the nearest multiple and offset. This caused me problems, so I made a function to round my window height appropriately.
You are given N qubits which are guaranteed to be in one of two basis states on N qubits. You are also given two bitstrings bits0 and bits1 which describe these basis states.
Your task is to perform necessary operations and measurements to figure out which state it was and to return 0 if it was the state described with bits0 or 1 if it was the state described with bits1. The state of the qubits after the operations does not matter.
You have to implement an operation which takes the following inputs:
An array of boolean values represents a basis state as follows: the i-th element of the array is true if the i-th qubit is in state |1⟩, and false if it is in state |0⟩. For example, the array [true; false] describes the 2-qubit state |10⟩.
Your code should have the following signature:
namespace Solution {
    open Microsoft.Quantum.Primitive;
    open Microsoft.Quantum.Canon;

    operation Solve (qs : Qubit[], bits0 : Bool[], bits1 : Bool[]) : Int
    {
        body
        {
            // your code here
        }
    }
}
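The usual approach to this problem is to find any index where bits0 and bits1 differ (one must exist, since the two states are distinct) and measure only that qubit. The sketch below covers just the classical decision logic, not the Q# solution itself; the measurement outcome is stood in for by a plain bit list.

```python
def solve(measured_bits, bits0, bits1):
    """Decide which of two known basis states the register was in.

    measured_bits stands in for the measurement results. In the real
    task you would measure only the first qubit at which bits0 and
    bits1 differ and compare that single outcome.
    """
    # Find the first index where the two candidate states disagree.
    i = next(k for k in range(len(bits0)) if bits0[k] != bits1[k])
    # If the measured value matches bits1 there, the state was bits1.
    return 1 if measured_bits[i] == bits1[i] else 0

print(solve([True, False], [True, False], [False, False]))  # 0: matches bits0
print(solve([True, False], [False, False], [True, False]))  # 1: matches bits1
```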
|
http://codeforces.com/problemset/problem/1001/F
|
People new to programming often ask for suggestions of what projects they should work on and a common reply is, “Write a text adventure game!” I think there are even some popular tutorials floating around that assign this as homework since I see it so much. This is a really good suggestion for a few reasons:
- The concept is familiar and fun (everyone loves games!)
- They can be written using core libraries
- The UI is the console
But new programmers often struggle with knowing where to start. That’s why I wrote and published Make Your Own Python Text Adventure. This book is a structured approach to learning Python that teaches the fundamentals of the language, while also guiding the development of your own customizable text adventure game.
For those of you who know some Python and just need a little guidance, there’s an abbreviated version of the book material here on the blog. It assumes you are familiar with basic programming concepts (if-statements, loops, objects, etc.), but are still new to writing full applications.
- Part 1: Items and Enemies
- Part 2: The World Space
- Part 3: Player Actions
- Part 4: The Game Loop
- Appendix A: Saving a Game
Just looking for some code? You can view the tutorial version of the game on GitHub.
Hello, I’ve been testing out your Python Adventure, and for the life of me I’m unable to get it to work.
I receive the following error.
Traceback (most recent call last):
  File "adventuretutorial/game.py", line 5, in <module>
    from adventuretutorial import world
ImportError: No module named 'adventuretutorial'
I’ve done a little research but I can’t find anything that’s related to the problem I’m having. I’ve tested it on multiple versions of Linux / Windows and multiple versions of Python: 2.7, 3, 3.2, 3.3, 3.4.
But the same issue continues to occur!
I would love a pointer in the right direction to help me resolve this, as I’m really interested in learning Python; this guide is very comprehensive and covers many areas I find interesting.
Best regards,
Robert Barton
Please see Part #4:
Pingback: Capstone Day 5 | KWNull
Hello,
I have been working through your tutorial and completed it. It has been very useful in improving my knowledge of python programing. However I am unable to run the program. I am running the latest version of python 3 and I am on a mac. The terminal error says this:
Traceback (most recent call last):
  File "/Users/macuser/Documents/Scripting/Python/adventuretutorial/game.py", line 2, in <module>
    from player import Player
  File "/Users/macuser/Documents/Scripting/Python/adventuretutorial/player.py", line 4, in <module>
    class Player:
  File "/Users/macuser/Documents/Scripting/Python/adventuretutorial/player.py", line 5, in Player
    inventory = [items.Gold(15), items.Rock()]
  File "/Users/macuser/Documents/Scripting/Python/adventuretutorial/items.py", line 14, in __init__
    super().__init__(name="Gold",
TypeError: super() takes at least 1 argument (0 given)
I believe the error is something to do with the super() function in the items module. However, my code is identical to yours. I do not see how I can fix this.
If you have any idea I would appreciate some help.
Thank you for your time.
Regards, Iain
I have a strong feeling you are trying to run this using Python 2. The zero-argument form of super() was introduced in Python 3. Depending on how you have Python installed, running python at the command line could open a 2.x.x version and running python3 could open a 3.x.x version. It really depends on how you have things configured.
I am having the exact same problem as Iain and am running the code in the latest version of Python 3. What do you think the problem could be?
I can confirm this error exists in Python 2.x, but not in 3.x. When you go to a command line and type “python” you will enter an interactive python session and the first line will give you the version number. I have a feeling your PATH is not configured to use the newer version.
I had been running the code with python 3 and was eventually able to fix the errors I was getting. Thank you for your help!
`super` has been part of Python since 2.2. The Python 3 version of `super` doesn’t take any arguments, which is why Iain got the original error: he was using it according to Python 3 convention and not Python 2 convention.
There is a great example of future-proofing one’s use of `super` on this page:
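A minimal illustration of the two conventions follows; the Item/Gold classes are stand-ins modeled on the tutorial's items module, not its actual code. Run under Python 3 both forms work; under Python 2 the zero-argument form raises the TypeError from the traceback above.

```python
class Item:
    def __init__(self, name, value):
        self.name = name
        self.value = value

class Gold(Item):
    def __init__(self, amt):
        # Python 3 style: zero-argument super(). Under Python 2 this
        # raises "TypeError: super() takes at least 1 argument (0 given)".
        super().__init__(name="Gold", value=amt)

class GoldPortable(Item):
    def __init__(self, amt):
        # Explicit form: works on both Python 2 (new-style classes)
        # and Python 3.
        super(GoldPortable, self).__init__(name="Gold", value=amt)

print(Gold(15).value)         # 15
print(GoldPortable(15).name)  # Gold
```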
Hi, I was wondering, when I play this game it kind of just piles up the text, is there end way to clear the text every time I go to a different tile?
Sure, check out this Stack Overflow question.
I can’t get this game to work either. I get the same ‘no module named adventuretutorial’ error. How can I fix this problem? I’ve troubleshooted for hours lol
I have the latest Python version also
I am using Enthought Canopy for coding in Python. It’s fine and all, but it doesn’t support Python 3. What can I use to run this game, and how?
Vanilla Python? I’m not sure I understand the question. Download and install Python 3.x and you should be good to go.
Well, thank you very much for this tutorial. I have encountered a couple of errors, and I have re-checked the code and the listed code on the posts many times. Here are the errors:
Error 1:
Traceback most recent call:
File Game.py, Line 27 in
play()
File Game.py, Line 9, in play
room = World.tile_exists(Hero.loaction_x, Hero.location_y
Error:(I don’t remember too well) Something like ” no attribute tile_exists for World ”
Error 2:
Traceback most recent call:
File Game.py, Line 27 in
play()
File Game.py, Line 10, in play
Hero = Hero()
UnboundLocalError: local variable ‘Hero’ (the last part is blurry in my head) something like used before exists or something.
Error 2 I can easily fix by calling it ‘player’ instead of Hero, but Error 1 is really creating a wall for me here. I will put another post showing the exact wording.
If the error was ('module' object has no attribute ''), then I would make sure you review your map and the assumptions I make in the code about that map.
TY I fixed both, but found another error:
Traceback (most recent call last):
  File "Game.py", line 24, in <module>
    play()
  File "Game.py", line 6, in play
    player = Hero()
TypeError: 'module' object is not callable
Here is the code:
import World, Hero
Well, did you also name your Player class Hero? If the Hero class does not exist, then the code will fail when it tries to create a new Hero object.
I named the Player class Hero.
IDK what to do.
Hi, I forked the adventure and ported it to Python 2.x.
Modifications:
– View a map of the dungeon.
– Gold is cumulative
– Loot does not respawn
– map.txt works off spaces instead of tabs so that working in text editors is easier
Thanks!
I like the way you have put all this together. While I like the object oriented aspect of things, I’m still trying to go back to basics… like how would I store the action verbs and a list of objects the way old text adventures were written in BASIC. And I was thinking the exact same thing. Learning Python so I’ll try writing a text adventure!
My second choice would be a D&D Character Generator and then a simple game out of it 🙂
I love your tutorial. Could you possibly explain how I could add a specific enemy tile that wasn’t random? I am new to python. I have been able to modify a few things but I can’t seem to add a specific enemy. I can however change the “r” values to produce a specific enemy but I don’t want the same enemy every time. Can you help? Thank you.
Thanks, I’m glad you’re enjoying it! I would create a new class called GiantSpiderTile (or whatever your enemy is). Then in the init() function, just set self.enemy = enemies.GiantSpider() explicitly. Not sure how far you are in the book, but don’t forget to add your new tile type to tile_type_dict.
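Put together, that suggestion might look roughly like this; the Enemy base class and the spider's stats are assumptions for illustration, not the book's actual code.

```python
class Enemy:
    def __init__(self, name, hp, damage):
        self.name, self.hp, self.damage = name, hp, damage

class GiantSpider(Enemy):
    def __init__(self):
        super().__init__(name="Giant Spider", hp=10, damage=2)

class GiantSpiderTile:
    """A tile that always spawns the same enemy. In the book this
    would subclass EnemyTile and be registered in tile_type_dict."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.enemy = GiantSpider()  # set explicitly instead of randomly

tile = GiantSpiderTile(0, 0)
print(tile.enemy.name)  # Giant Spider
```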
Thank you for the reply. I will try that out tonight! I’ve been trying to learn Python on my own for a few months now. I have bought a few books but yours is by far the most interesting to me. BTW, have you ever thought about making a supplemental to your book that covers modifications? I’d pay for that. Thanks.
My hope is that by the end of the book, you have all the tools you need to complete some of the modifications I suggest in the last chapter. Although I know it can be intimidating going from something guided to unguided. If you need a little more nudging, feel free to email me.
on p67 in the code at the top
line 35 seems incomplete.
Should it read:
print(“Enemy does {} damage. You have {}HP remaining.”.format(self.enemy.damage,player.hp))
Nice catch, you’re correct, that last line got cut off. I’ll fix that the next time I publish, thanks!
Hi, I have a question. I’m on page 63 on the Enemies chapter. The book says the code is runnable but I’m getting some errors when I move to an EnemyTile. In game.py there is the line:
print(room.intro_text())
This fails with the NotImplementedError that was created for the MapTile class. However, since the EnemyTile is declared as such:
class EnemyTile(MapTile) it will try to call MapTile.intro_text().
I assume I need to call the intro_text() in enemies but I don’t see this in your book at this point.
Take a look at page 62 where the text starts, “In order to alert the player about the enemy…”. Right below that, you should see a code snippet that shows the intro_text method. Hope that helps!
I have added the intro_text() into the enemy class but it never gets called. This is what it looks like right now:
game.py
room = world.tile_at(player.x, player.y)
This returns the EnemyTile found in the world_map. According to the code for EnemyTile, it doesn’t implement its own intro_text, and since it is a subclass of MapTile it will use the intro_text() there and get the NotImplementedError.
Ah, that’s why: you should be adding the intro_text() method to the EnemyTile class, not the Enemy class. I can see how that might be unclear, but I switched over to the EnemyTile class at the bottom of page 61. We are referencing the Enemy class simply to get access to the name of the Enemy, but the intro text takes place in the tile.
Ahh, got it.
Thanks
Hi, I have bought your book and have tried to run all the code but I am getting an Error:
I have tried comparing my code to the code in the book but they appear to be the same.
I have never got this error before and am finding it hard to work out what is wrong.
That error means Python is trying to look up ' ' in your tile_type_dict but it is not finding a match. I would take a look at your world_dsl variable, because that is where the map of tiles is defined. My guess is you have some extra spaces where you don’t want them. Try turning on your editor’s “show all characters” mode. That may help you track down the problem.
Hi, I started your tutorial and I am new to Python so this will be a great way for me to learn new things. I downloaded the final project on Github to make sure it works on my version of Python, and I get an error when trying to run the game.py in the Python 3.5 IDLE. This is the error:
Traceback (most recent call last):
  File "C:\Users\Ben\Documents\Python Programs\text-adventure-tut-master\adventuretutorial\game.py", line 31, in <module>
    play()
  File "C:\Users\Ben\Documents\Python Programs\text-adventure-tut-master\adventuretutorial\game.py", line 10, in play
    world.load_tiles()
  File "C:\Users\Ben\Documents\Python Programs\text-adventure-tut-master\adventuretutorial\world.py", line 18, in load_tiles
    with open('resources/map.txt', 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'resources/map.txt'
Take a look at Part 4 where I describe how to run the game. Also, I’d suggest learning how to run it from the command line first before introducing the complexity of an IDE. Happy coding!
Thanks, but I don’t understand how to change the path. I don’t understand the link in Part 4 since I work on Windows, not Linux. I understand how to change the path in the environment variables, but I added my game folder as a path there and nothing has changed (it still can’t find the directory).
OK, try this:
cd C:\Users\Ben\Code\PythonGame
python adventuretutorial/game.py
and press enter. What happens?
Hello, and thank you very much for the tutorial! I got it all to work and it works great, except for some reason, when I’m in an enemy room and I kill the enemy inside it, instead of giving me actions to choose from to exit the room, it raises the error “TypeError: ‘method’ object is not iterable”. Any idea about why that may be? Thanks again!
-Justin Hangingtree
This is a guess, but do you have available_actions = room.available_actions instead of available_actions = room.available_actions() in your code? The error is telling you that you are trying to loop through a method object. You actually want to loop through what the method returns, not the method itself. To call the method, you have to include the parentheses. Hope that helps!
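A tiny reproduction of that error; the Room class and its action list are hypothetical.

```python
class Room:
    def available_actions(self):
        return ['n', 's', 'a']

room = Room()

actions = room.available_actions      # a bound method object, not a list
try:
    for a in actions:
        pass
except TypeError as e:
    print(e)  # 'method' object is not iterable

for a in room.available_actions():    # parentheses call it; loops fine
    print(a)
```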
Hello! I’m creating a text game using your book and I’ve run into a problem in the world.py chapter. Whenever I run the game I get the error: ‘AttributeError: ‘NoneType; object has no attribute ‘intro_text’. I’ve gone over and over my code and I can’t seem to find where I’ve gone off course.
I’m having a lot of fun with this, and I’m hoping to get the game to functioning soon so I can use it as a means of proposing to my boyfriend. Any help would be appreciated!
Take a look at the comments in Part 3 where this has been discussed a bit. It means Python can’t find the tile at the location you gave it, so it returns None. Because None is not the starting room, you get an error that intro_text is not part of None. You can usually fix this by modifying your map file or changing the starting location of the player. Good luck and I hope he says “Yes”!
Your book was a great help. I got the yes! Definitely made for a great proposal story! Thanks a bunch!
That’s awesome to hear, congratulations!
I’m up to page 60 in the book version of your excellent tutorial. I’m finding this WAY easier to understand than Learn Python the Hard Way.
If I run the code as it stands on page 60, I get the following error when I reach a None tile. Is this error expected at this point? Or did I type something wrong? (Btw, I already took a look in the comments at Page 3 and Page 4 of the online tutorial and did not find the answer to my question.)
PS C:\PythonGame> python .\game.py
Escape from Cave Terror!
You find yourself in a cave with a flickering torch on the wall.
You can make out four paths, each equally dark and foreboding.
Action: e
This is a very boring part of the cave.
Action: e
Traceback (most recent call last):
  File ".\game.py", line 30, in <module>
    play()
  File ".\game.py", line 10, in play
    print(room.intro_text())
AttributeError: 'NoneType' object has no attribute 'intro_text'
I’m glad you’re enjoying the book! It doesn’t look like you’ve done anything wrong, that’s actually one of the bugs that gets fixed in the next chapter. See the “Limiting Actions” section of Chapter 13.
Thanks, Phillip!
It might be worth revising the following line in the next edition from…
“Notably, the game doesn’t end when you reach the VictoryTile and
the player can also wrap around the map.”
to something like
“Notably, the game doesn’t end when you reach the VictoryTile and
entering a None tile triggers an error.”
I also found a potential typo in the book version. The x/y coordinates appear to be flipped in the second example:
Page 62:
]
world_map = [
[None,VictoryTile(1,0),None],
[None,BoringTile(1,1),None],
[BoringTile(0,2),StartTile(1,2),BoringTile(2,2)],
[None,BoringTile(1,3),None]
Page 68:
]
world_map = [
[None,VictoryTile(0,1),None],
[None,EnemyTile(1,1),None],
[EnemyTile(2,0),StartTile(2,1),EnemyTile(2,2)],
[None,EnemyTile(3,1),None]
Nice catch, I’ll fix that in the next update!
Thank you, Phillip, for a wonderful resource. I completed the tutorial and am on my way to customizing the game.
One feature I would like to add is the ability to save progress and restore later. (Serialization.) Research suggests that using Pickle to save certain player/world variables is the way to go, however, I am unable to figure out exactly what to do. Is it very tricky to do?
Thank you.
Best,
Tony
It’s probably not too hard. I’d make a save-data class that you write all the relevant variables into, and then use the examples here as a guide. You might need to do some restructuring to make it easier to inject variables during unpickling.
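A minimal pickle round-trip along those lines might look like this. SaveData and its fields are hypothetical names invented for the sketch; a real save would carry the actual Player/World objects or item instances rather than plain strings.

```python
import pickle

class SaveData:
    """Hypothetical container for everything worth persisting."""
    def __init__(self, player_x, player_y, hp, inventory):
        self.player_x = player_x
        self.player_y = player_y
        self.hp = hp
        self.inventory = inventory

def save_game(data, path):
    # Pickle serializes the whole object graph in one call.
    with open(path, 'wb') as f:
        pickle.dump(data, f)

def load_game(path):
    with open(path, 'rb') as f:
        return pickle.load(f)

save_game(SaveData(2, 1, 87, ["Gold(15)", "Rock"]), "save.dat")
restored = load_game("save.dat")
print(restored.hp)  # 87
```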
Thanks, Phillip!
Another feature I’m struggling with:
I’d like to add a locked room and another room featuring a key to unlock the locked room. I think I’ve got the key room figured out, but I’m having trouble figuring out how to make a locked room with the way the following is structured:
def get_available_actions(room, player):
    actions = OrderedDict()
    print("Choose an action: ")
    if player.inventory:
        action_adder(actions, 'i', player.print_inventory, "Print inventory")
    if isinstance(room, world.TraderTile):
        action_adder(actions, 't', player.trade, "Trade")
    if isinstance(room, world.EnemyTile) and room.enemy.is_alive():
        action_adder(actions, 'a', player.attack, "Attack")
    else:
        if world.tile_at(room.x, room.y - 1):
            action_adder(actions, 'n', player.move_north, "Go north")
        if world.tile_at(room.x, room.y + 1):
            action_adder(actions, 's', player.move_south, "Go south")
        if world.tile_at(room.x + 1, room.y):
            action_adder(actions, 'e', player.move_east, "Go east")
        if world.tile_at(room.x - 1, room.y):
            action_adder(actions, 'w', player.move_west, "Go west")
    if player.hp < 100:
        action_adder(actions, 'h', player.heal, "Heal")
    return actions
The code looks to see if a room exists next to the current room. Technically, the locked room exists, but I don't want to let the player enter it until they find the key.
Another limitation of the above is adjacent rooms that I want to put a wall between. (Instead of, say, travelling east into the next room, the player would see a wall and be forced to travel north, then east, then south.) I could put a "None" room in between the two rooms, but this would produce an undesired column.
Any way to do the above things without having to drastically restructure the way the DSL generates?
Thank you for your time and expertise.
Best,
Tony
So instead of just using if world.tile_at(room.x - 1, room.y), you will probably want to create a new method to handle all of the logic for figuring out what the player can do. To accommodate locks, I would check 1) if the room exists, 2) if the room requires a key, and 3) if the player has the correct key in the inventory. Then you only add the action if the result of that new method is true.
For walls, you could create a wall tile for the DSL. Then you would just need to add another check to the new method to make sure the tile is not a wall. The type() function may help.
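One possible shape for that helper, with the three checks in order; the required_key attribute and the Key item are invented for illustration and are not part of the book's code.

```python
class Key:
    pass

class Room:
    def __init__(self, required_key=None):
        # Hypothetical attribute: the item class the player must carry
        # to enter this room, or None for an open room.
        self.required_key = required_key

class Player:
    def __init__(self, inventory):
        self.inventory = inventory

def can_move_to(room, player):
    """Return True if the player may enter `room`."""
    if room is None:                      # 1) the room must exist
        return False
    if room.required_key is None:         # 2) does it need a key?
        return True
    return any(isinstance(item, room.required_key)  # 3) player has it?
               for item in player.inventory)

locked = Room(required_key=Key)
print(can_move_to(locked, Player([])))       # False
print(can_move_to(locked, Player([Key()])))  # True
```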
Thanks, Phillip! I’ll give all this a try.
Re: Wall tile, that would produce an extra column. I think instead, I’ll make certain World tiles that have variables that place walls. (Maybe something like wall_east, wall_west, etc.) And then in the game logic, I’ll check to see if the wall is there and prohibit travelling in that direction.)
Hi,
I have gone through your tutorials online and in the book and I am having the same error (though I can get your github code to work so I know I’ve made a typo somewhere and I can’t figure it out). I have tried going through and comparing your code to mine and I still can’t find any differences besides customizable things. Do you have any idea what might be going on here?
Below is my code for the available_actions function in my tiles.py
And here is my code for the ViewInventory action:
It seems to be upset I haven’t given ViewInventory any arguments in the available_actions function but your code doesn’t do that either, so I’m not sure what I’ve done wrong. Any direction would be much appreciated!
Thank you!
~Adrianna
The only difference I notice is that I name all of my parameters in the __init__ method. This is kind of tricky, but if you keep going back, you see that the base Action class uses **kwargs. Because of that, I believe you need to name all of your parameters, otherwise Python does not know which belong in **kwargs and which do not.
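A miniature of that pattern: the class names mirror the tutorial's, but the bodies are stubs. Naming every argument in the super().__init__ call keeps the explicit parameters and **kwargs cleanly separated once subclasses start forwarding extra data.

```python
class Action:
    def __init__(self, method, name, hotkey, **kwargs):
        self.method = method
        self.name = name
        self.hotkey = hotkey
        self.kwargs = kwargs  # extra data forwarded to the method later

class ViewInventory(Action):
    def __init__(self):
        # Naming every argument routes each one to its explicit
        # parameter; anything left over would land in **kwargs.
        super().__init__(method=print, name="View inventory", hotkey='i')

a = ViewInventory()
print(a.hotkey)  # i
print(a.kwargs)  # {}
```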
I am really enjoying your book. But I am having a serious problem. My code is exactly as yours. I keep getting a “Player object has no attribute ‘inventory.’ error. This is when I try to run the game importing the player module and the items modules (all this before the World Building chapter). Says my problem is in line 13 (print_inventory) of the player.py. Cut and pasting my code from the modules here:
(and next module is where I think the problem is, on line 13, but I am pretty sure there are no differences in my code and what is in the book. Any advice you could give me is welcome.)
Just a follow-up: I see that when I posted, it took out my spacing. My spaces were in there and match the book. I’m really confused as to what line 13 in player.py is doing to generate the error. Again, the error is
“Player object has no attribute ‘inventory.’
This happens when I try to access the inventory when running the game.
thanks for any assistance.
Nothing looks obviously wrong to me. How are you trying to run the game? Can you include the full error message?
Hi, I have recently bought your great programming book but want to add some more features. I have encountered a problem when trying to make a “Warp Tile” which transports the player to another position on the map. Here is my code:
Hi Edward,
Neat idea! The reason your player won’t be moved is because you have a return statement before you make changes to the player. The return keyword exits the method. If you want to make changes to the player, you should use the modify_player() method. Take a look at the final EnemyTile at the end of the Enemies section in Chapter 12 for an example. Happy coding!
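Sketched out, the fix might look roughly like this; WarpTile and its destination fields flesh out the commenter's idea, while modify_player mirrors the book's hook. The class bodies here are illustrative, not the book's code.

```python
class MapTile:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def modify_player(self, player):
        pass  # most tiles do nothing to the player

class WarpTile(MapTile):
    """Transports the player to a fixed destination on the map."""
    def __init__(self, x, y, dest_x=0, dest_y=0):
        super().__init__(x, y)
        self.dest_x, self.dest_y = dest_x, dest_y

    def modify_player(self, player):
        # Mutate the player first; a `return` placed before these
        # lines would exit the method and nothing would happen.
        player.x = self.dest_x
        player.y = self.dest_y

class Player:
    def __init__(self):
        self.x, self.y = 2, 2

p = Player()
WarpTile(2, 2, dest_x=0, dest_y=4).modify_player(p)
print((p.x, p.y))  # (0, 4)
```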
Great tutorial. Gave me a leg up in getting my own thing started. Take a look and let me know what you think so far.
That’s really cool! I like how you implemented a save feature and it’s neat to see that you were able to pull in some RPG elements into the game. Nice work. 🙂
Hello!
I have gone through the book and everything is working except for one thing. The blank areas in the DSL (none) are not rooms the player can go through. So when I have it setup like this:
The player does not have the option of going west at the start. I am not sure where my error is but do you have any ideas?
Thanks!
Are you saying that the player should not be able to walk West, but the player actually can? If so, check your code for get_available_actions. If you’re getting an actual error from Python, that might include some helpful details.
Hi! I’ve been following your book and have run into an error somewhere in Chapter 13 (I haven’t been able to test the code until the end since it’s all related.)
It seems the map hasn’t generated properly as I am receiving this error immediately upon running the program:
“AttributeError: ‘NoneType’ object has no attribute ‘intro_text'”,
I know that the player location is correct as I can get the program to print it before it terminates, (1,2)
Here’s my map generation code: (Maybe it’s something to do with the order of functions?)
Thanks!
That error means that your tile_at is returning None instead of an actual room. You mentioned that the player’s location is (1,2). Is that x=1, y=2 (correct) or x=2, y=1 (incorrect)? This might help:
Hmmm, it’s x = 1, y = 2…
“Tile at y=2, x=1 does not exist.”
Did the map not generate properly?
Sorry, my knowledge of python is pretty basic soo 😮
So apparently it works when I include “world.parse_world_dsl()” at the beginning of game.py. I suppose I didn’t call the function anywhere, and that’s why the map wasn’t built. Would you mind referring me to where you called the function? I must’ve missed something.
It looks like I reference that part of the code in chapter 14, sorry for the confusion! I will add a clarification in the next release of the book.
It’s all good! Thanks for your help 🙂
Hi again XD Just have a couple questions on expanding the game. I’ve finished the main content now, thanks for the awesome book! 🙂
Just wondering if there was a way to make an X, Y, Z coordinate system? I was planning on having a castle which has multiple floors. I suppose when the player goes up / down I could teleport them to another part of the grid and just say it’s down, would that be an efficient way of doing it? The map would be pretty big. :/
Also, is there a way to separate rooms without a “None” tile? Say these two rooms along a corridor are adjacent, but you can’t enter one room from the other; you have to come out and enter through the corridor. (Without adding a None space in between them and hence another tile to the corridor.)
Sorry for all the questions! Thank you!!
It’s definitely possible to do a 3D coordinate system. The simplest way to do this is to have a list of lists of lists. That is to say: floors, rows, and columns. This is also called a 3D-array since there are three dimensions. I would then build each floor as its own DSL string and include a method to patch them together.
Sometimes it is hard keeping track of all three dimensions in your head, especially if you end up with code like floors[z][y - 1][x + position], so you might decide to make a “Floor” class to make things a bit easier to work with.
There are two ways you could create a “wall”. You could make it its own room type that gets included in the map, and then the tile_at method would ignore any “Wall” tiles. If you want to do something more complex, you could try introducing a new character to the DSL, like ! to mean “impenetrable”. So you would end up with something like |EN!EN|FD|. But if you did that, you would need to determine the adjacent tiles during parsing and keep that information in the MapTile class. Then the tile_at method would also need to be smarter, because it would have to refer back to that information you stored during parsing.
Hope that helps, happy programming!
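A bare-bones sketch of the floors/rows/columns idea, with tiles reduced to strings and the suggested Floor wrapper included. All names here are illustrative, not from the book.

```python
class Floor:
    """Wraps one 2D grid so callers never juggle [z][y][x] by hand."""
    def __init__(self, grid):
        self.grid = grid

    def tile_at(self, x, y):
        if 0 <= y < len(self.grid) and 0 <= x < len(self.grid[y]):
            return self.grid[y][x]
        return None  # off the edge of this floor

class World3D:
    def __init__(self, floors):
        # floors is a list of 2D grids: one DSL-parsed grid per floor.
        self.floors = [Floor(g) for g in floors]

    def tile_at(self, x, y, z):
        if 0 <= z < len(self.floors):
            return self.floors[z].tile_at(x, y)
        return None

world = World3D([
    [["ST", "EN"],
     ["EN", "UP"]],   # floor 0; "UP" could hold stairs to floor 1
    [["DN", "VT"],
     [None, "EN"]],   # floor 1; "DN" leads back down
])
print(world.tile_at(1, 1, 0))  # UP
print(world.tile_at(1, 0, 1))  # VT
```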
Hi again! I’ve been trying to implement those things for the last couple of days, but as I am a bit of a newbie to Python, I’m not making much progress 🙁
Wouldn’t making a “wall” tile essentially be a NoneTile anyway?
like |EN|WA|EN| = |EN| |EN|?
Kinda want something like
|EN|EN|
|ST!TT|
meaning, to get to the trader from the Start tile, you’ll have to fight 2 enemies. I’m not too sure I understand your second proposed solution. How would you store the adjacent tiles and improve the tile_at function? 😮 Sorry, I’m hopeless haha
Hey, amazing book. These are my first steps in Python and programming and I am 3/4 through your book and everything is going great so far. Really enjoying it.
Once the game is complete, can it be made to play outside a python terminal? For example, is there a way I can get all my finished code to run in Frotz or something similar with all my other adventure games? I’d like to learn how to port it to other platforms so I can share my work with my friends, perhaps if they could play the game on Android phones?
I’m sorry if this is a dumb noob question… but that’s what I am so far :/
Glad you’re enjoying the book! It is possible to make the game an “application” using third-party tools like py2exe. It looks like it might be possible to do Android too using this python-to-android tool, but I’ve never tried it. Good luck, let me know how it works out!
Hey, im loving your book, i am up to the point where we begin to expand the world.
i am getting a Error:
below is a a copy of my Code:
thanks in advance
If you take a look at the error message, it is telling you there is a problem with line 13 of player.py. It then includes the line with the error: self. x = world.start_tile_location[0]. I am not sure if this is a copy/paste error, but the code you pasted has a space between self. and x, which might be the problem. Hope that helps!
Hello, I recently decided to try and start learning Python. I really liked this article, and I went through it, and the code seemed right; however, it wouldn’t start. So I decided to download it and see if yours worked on my computer, and found that it didn’t; instead, it gives these errors:
Any ideas?
Are you following this advice?
If so, are you using an IDE?
I found that moving resources within the tutorial directory made it work, though I’m still confused as to why the code I made was unable to work. Here are the errors on that:
Traceback (most recent call last):
  File "C:/Users/jhard/PycharmProjects/ShadowsOfTlanen/game.py", line 28, in <module>
    play()
  File "C:/Users/jhard/PycharmProjects/ShadowsOfTlanen/game.py", line 10, in play
    print(room.intro_text())
AttributeError: 'NoneType' object has no attribute 'intro_text'
It seems to be a problem with ‘intro_text’, yet I’m not sure what is causing it.
Some other people have had that same problem. Take a look at the other comments on this page. Let me know if you still are stuck!
So now the intro_text glitch is gone, but now it is having different trouble:
File "C:\Users\jhard\PycharmProjects\ShadowsOfTlanen\world.py", line 18, in load_tiles
    _world[(x, y)] = None if tile_name == '' else getattr(__import__('tiles'), tile_name)(x, y)
AttributeError: module 'tiles' has no attribute 'EmptyPassage'
Anything I’m missing here? I’ve been looking through the comments and trying solutions. (BTW, thanks for the replies)
Here’s the EmptyPassage code:
class EmptyPassage(MapTile):
    def intro_text(self):
        return """
        The empty corridor is deathly silent, and you feel the desire to hasten through as quickly as possible.
        """
Fantastic tutorial! Very clear, thoughtful instructions.
I have one question though:
So I’ve spent several hours fleshing out the map and adding monsters and items to create an exciting text adventure, but now let’s say I want to distribute the programme to friends using a stand-alone executable. Using this tutorial as an example, how would I compile game.py with all of its dependencies, including map.txt?
I’ve been reading all over StackOverflow and several other sites looking for answers, and tinkering with PyInstaller every which way, but to no avail.
PyInstaller will probably work fine. The easiest option might be to distribute both the map file and the exe and either assume they are in the same directory or ask the user where the map is. But if you do want to bundle the map inside the exe, this looks like the way to do it. However, I have never used PyInstaller so I can’t verify that.
Hi Phillip,
First; Thanks for the great book. I had a lot of fun learning python this way and you did a great job explaining the game as you went.
Second; I am trying to add an “Escape” action as an acceptable action during a fight. I want it to force the user to move to the previous tile that they occupied, as opposed to picking a direction, which might allow them to simply bypass enemies. Any recommendations on how to achieve this?
Thanks!
You’d need to keep track of the last room the player is in. Something like self.last_room = (3, 4). When the player moves, you would update that value. Then you would add an escape method that would set the player’s self.x and self.y to the values in self.last_room. Happy programming!
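Sketched out, that might look like this; the single move method is a simplification of the book's move_north/move_south family, used here just for illustration.

```python
class Player:
    def __init__(self):
        self.x, self.y = 2, 1
        self.last_room = (2, 1)  # where the player came from

    def move(self, dx, dy):
        # Remember the current tile before stepping off it.
        self.last_room = (self.x, self.y)
        self.x += dx
        self.y += dy

    def escape(self):
        # Flee back to the previously occupied tile, no direction asked.
        self.x, self.y = self.last_room

p = Player()
p.move(1, 0)       # walk east into an enemy room at (3, 1)
p.escape()         # escape back the way we came
print((p.x, p.y))  # (2, 1)
```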
Hey Phillip!
This is an awesome tutorial! I’ve been trying to expand it by having NPCs in a tile and giving the player an option to talk to them, but I’m having trouble in my code. So ideally the dropdown would be:
….
Choose an action:
t: Talk to NPC
BUT I keep getting this error and I can’t figure it out for the life of me:
TypeError: __init__() missing 3 required positional arguments: ‘method’, ‘name’, and ‘hotkey’
Here’s my code pertaining to the talk action
In my actions.py code, I clearly have the arguments and I modeled the format after your attack action in the tutorial and yet I’m having so much trouble! Any idea what might be causing the error?
Thanks!
It looks OK to me. What is the line actually causing the error?
Hey Phillip,
First of all, I loved your book; I worked through it in roughly a week! And I've been expanding it ever since! Now I have some ideas for the future, like actually adding graphics, perhaps using PyGame since it seems easy to use.
Do you have any pointers or tips, perhaps tutorial references, for converting the stuff in your book to an actual graphical RPG? (Simple of course, maybe just some pixel art or something.)
Thanks in advance!
Greetz,
Thursten
I don't have much experience with graphics programming, but my advice would be to start learning how to load a sprite and have it move based on user input. PyGame is a commonly used package to help with this. Here's one tutorial I found (I can't verify its accuracy), but it looks like it would be a good start.
Hi, my son has been working through your book (which he is finding great fun) but has struck a problem at the end of chapter 10. The code appears to be correct but this is the error:
Any tips would be appreciated.
Glad your son is enjoying it! Since chapter 10 involves moving things around to different files, I wonder if something just wasn't correctly copied? The error means that the Player class does not have the most_powerful_weapon method. It could also be something like def most_powerful_weapon() (incorrect) vs. def most_powerful_weapon(self) (correct).
If neither of those suggestions work, post the whole Player class and I will take a look.
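As an aside, the two signatures mentioned above can be shown in a minimal sketch (the Weapon/Player shapes here are illustrative assumptions, not the book's exact code):

```python
class Weapon:
    def __init__(self, name, damage):
        self.name = name
        self.damage = damage

class Player:
    def __init__(self):
        self.inventory = [Weapon("Rock", 5), Weapon("Dagger", 10)]

    # Correct: the first parameter is self. Without it, calling
    # player.most_powerful_weapon() fails, because Python passes the
    # instance as the first argument automatically.
    def most_powerful_weapon(self):
        return max(self.inventory, key=lambda weapon: weapon.damage)

print(Player().most_powerful_weapon().name)
```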
My son has checked multiple times for errors but he probably missed one and also the most_powerful_weapon looks ok so:
Ah, OK so the problem is the indentation. Right now that method is "outside" of the Player class. It needs to be indented inside the class.
Great thanks! I’ll get him to check it again (the last few comments were actually him….he’s keen to make it go!) He’s really impressed (and so am I) with the level of help that you have given him. I keep telling him that Python is fussy about indenting and this is a perfect example 🙂
Oh sorry, the indents actually didn’t show when I copy pasted
Thanks so much, it works now. Sorry about that indent error, amazing program 🙂
^^ My son means your program is amazing, way cooler than starting with “Hello world.” Thanks again.
Hi! Is there a way to make the game playable from the browser so it could be more easily shared?
Python is a great web programming language, so it would be possible, but I think users would have a better experience if most of the work was done in the browser using JavaScript. Either way, the amount of work to do that is well beyond the scope of this tutorial.
You can use the py2exe utility to distribute Python programs more easily on Windows. And with minimal effort, you could make the code Python 2 compatible; Python 2 is available by default on most Linux and macOS installations.
Pingback: Classes: Raymond Hettinger’s ‘Python’s Class Development Toolkit’ Talk – Just Tryin' to Python
Hi! I have your book and I’m teaching my son to code with it. I keep running into problems, over and over. I am wondering if you have a repository somewhere of all the code in the book, so that I can compare it against ours. I downloaded the code on Github, but that doesn’t seem to have everything from the book. Just wondering if you have it all somewhere – maybe with a password based on text from a page number, so you know we bought the book? 😀
Thanks! The code from the book is on the Apress GitHub here.
Thanks! I’ll have a look.
That was much easier to line up with the book. Thank you.
Hello!
My name is Matt, and I'm an amateur Python coder.
I just want to say that your book really helped me understand how to work with Python in pretty short time, and I’m glad I bought it!
After I finished the material from the book, I started working on the RPG by myself. I managed to add stuff like new monsters, items and rooms (like a "rest room" that allows the player to regain HP once) and even figured out how to add an EXP system with leveling up (a simple one so far).
I’ve been thinking to eventually make a proper story for this game, but here’s one problem I have:
I feel that making the game with just one map will not be enough for me, and I wanted to add stairs that would allow the player to go to a different dungeon level, and later maybe even add an option to leave the dungeon and explore the world.
But I have trouble figuring out how to do this properly. I know that I should have several different maps like the one created during the tutorial (and probably at one point have separate files for the world maps and tiles) but what would be the best way to allow the player to move from one map to another? And how should I modify the DSL?
Do you have any advice?
I hope for a quick reply!
Glad you enjoyed the book! You’re right that the method presented in the book doesn’t really scale up easily to multiple maps/levels. I would suggest doing something similar to what is done in the tutorial here on the site. That is to say, put your maps into files that live outside of the code.
You might even find that one map file per level is too confining and you need a "map file" and a "room data file". The map file would simply have IDs that refer to rooms in the data file. The data file would contain information like enemies in the room, loot, descriptions, etc. You could also scrap the map altogether and put the positional information into the data file! Using a data file definitely gives you more flexibility, room variety, etc., but at the expense of making your code a bit more complex.
Tracking the player’s level is pretty easy, I’d just add another attribute to the player to store which level they are working on. And then the player’s position is calculated using x, y, and the level.
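The "map file + room data file" split described above can be sketched like this. The file formats, field names, and the load_world helper are illustrative assumptions, not from the book or the tutorial:

```python
# The map file holds only room IDs laid out on a grid;
# an empty cell means "no room here".
MAP_FILE = """\
R1,R2
,R3
"""

# The data file (an inline dict here for brevity) holds everything else.
ROOM_DATA = {
    "R1": {"description": "A damp cave entrance.", "enemies": []},
    "R2": {"description": "A narrow tunnel.", "enemies": ["Giant Spider"]},
    "R3": {"description": "A treasure room.", "enemies": ["Ogre"]},
}

def load_world(map_text, room_data, level=0):
    """Build a dict keyed by (x, y, level) so multiple levels can coexist."""
    world = {}
    for y, line in enumerate(map_text.splitlines()):
        for x, room_id in enumerate(line.split(",")):
            if room_id:
                world[(x, y, level)] = room_data[room_id]
    return world

world = load_world(MAP_FILE, ROOM_DATA)
print(world[(1, 1, 0)]["description"])
```

Stairs then become just another room property, e.g. an entry that names the level to switch to, and the player's position is the (x, y, level) triple.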
Hi, this tutorial has been very helpful, but I’m having some trouble figuring out a few errors:
Traceback (most recent call last):
From what I can see I have done everything as written in the guide
If you could help me figure it out, it would be greatly appreciated.
You have a typo, it should be split, not spilt.
Thanks for the help, but now there are some other errors.
I am still a little new to Python, so I'm sorry if it's something very simple again.
You have another typo. Try comparing your code against my code on the specific line that’s giving you the error.
Hi Phillip,
Thanks for your book. That's a great piece of work! Can you help me reach the game itself?
I tried
but I cannot have the link work.
Maybe for the next edition you could insert a short url
best,
That’s a good suggestion for the short link! Most readers are using the ebook where you can just click the link. Here’s the link (you had a typo):
Hi Phillip, I am getting these errors and I'm stuck and don't know what to do. I've looked at the other comments from people who also got this error and I still don't know what to do.
Traceback (most recent call last):
It likely means that your map is invalid. Make sure you review how to create the map and the assumptions about the file that the code makes.
Thank you, I found white space in the map, but now I'm getting this other error when I'm playing the game. I think it has something to do with the map; I tried redoing the map several times and I still get this error when playing the game. Thanks.
Traceback (most recent call last):
File "/home/ubuntu/workspace/adventuretutorial/game.py", line 29, in <module>
play()
File "/home/ubuntu/workspace/adventuretutorial/game.py", line 25, in play
player.do_action(action, **action.kwargs)
File "/home/ubuntu/workspace/adventuretutorial/player.py", line 19, in do_action
action_method(**kwargs)
File "/home/ubuntu/workspace/adventuretutorial/player.py", line 31, in move_north
self.move(dx=0, dy=-1)
File "/home/ubuntu/workspace/adventuretutorial/player.py", line 28, in move
print(world.tile_exists(self.location_x, self.location_y).intro_text())
AttributeError: 'NoneType' object has no attribute 'intro_text'
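For reference, a traceback like this means tile_exists() returned None: the move stepped onto a coordinate with no tile, often because of a stray character or inconsistent row length in the map file. A defensive sketch (the function and tile names are assumptions mirroring the tutorial, not its actual code) would check for None before using the tile:

```python
# Minimal stand-in for the tutorial's world: only two tiles exist.
_world = {(0, 0): "Cave", (0, 1): "Tunnel"}

def tile_exists(x, y):
    # Returns None when the player steps off the map, like the tutorial.
    return _world.get((x, y))

def move(x, y, dx, dy):
    tile = tile_exists(x + dx, y + dy)
    if tile is None:
        return "You can't go that way!", x, y   # stay put instead of crashing
    return tile, x + dx, y + dy

print(move(0, 0, 0, -1)[0])
```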
Hello! I was wondering if you know what my problem is. I'm experiencing some trouble with my game.py file when I try to launch it. I am fairly new to Python. Please let me know which pictures you need, or pieces of code. I hope I'm not taking up too much of your time!
What command are you using to run the game and what error message do you see?
Pingback: What You Can Do With Python – BODRAY
Hey Philip,
Doing a gamebook/text adventure for a CS class.
My code looks good, but when I run it, it doesn't end after each path the way I had envisioned it in my head.
It will run you through the next lot of code until it runs all paths that lead to the end of the code.
Here's what a few sections look like:
# Homework_2_Make_a_gamebook
# Overview: A gamebook is a work of fiction that allows the reader to participate in the story by making choices that
# affect the outcome of the story. Of course, the reader does not really affect the outcome of the story,
# the book simply presents multiple paths that the reader may take and the reader only chooses one on any particular
# read-through.
def main():
    print("A Federation Starship receives a distress signal from an unknown vessel. They are far from home and not in "
          "immediate contact with support, and the frequency of the distress signal is of unknown origin.....\n")
    print("The Captain is alerted and makes his way to the bridge of the Starship. He discusses his options with his "
          "Senior Staff..... \n")
    print("They decide that there are 3 options that make sense: To ignore the distress signal, open up communication, "
          "or wait for support from the nearest Federation vessel.\n")
    print("Your crew looks at you for a decision: ignore, contact or wait, what will you choose?\n")
    option1 = input("Please enter your choice: ignore/contact/wait\n")
    if option1 == 'ignore':
        print("Your crew carries out your orders, but it is clear not everyone agrees with your decision. Do you confront "
              "the unhappy crew members, yes or no?\n")
        option2 = input("Please enter your choice: yes/no\n")
        if option2 == 'yes':
            print("After discussing your decision with the disagreeing crew members: do you accept the criticism gracefully, "
                  "or do you see the criticism as a personal attack? Do you move on, or do you punish dissenters?\n")
            option3 = input("Please enter your choice: move on/punish\n")
            if option3 == 'move on':
                print("A few months later, another Federation vessel receives a distress signal from the same unknown frequency. "
                      "They initiate contact and are ambushed and killed by a god-like alien race. Your crew is grateful for your "
                      "intuition.\n")
            elif option3 == 'punish':
                print("You decide to punish the crew members who disagreed with your order, a mutiny forms and you are imprisoned "
                      "by your crew.\n")
                exit()
        elif option2 == 'no':
            print("A short time later your ship receives another distress signal from the same frequency as the previous "
                  "distress signal. Your crew logs the incident but does not inform you about it. You become aware of the "
                  "incident when reviewing the log. Do you avoid any further discussion about the "
                  "second incident, or do you get more information from the crew members involved?\n")
            option4 = input("Please enter your choice: avoid/more information\n")
            if option4 == 'avoid':
                print("The source of the distress signal is eventually contacted by a rival Space Federation. The distress signal "
                      "was transmitted by a god-like alien race. Your inaction costs the Federation a powerful ally.\n")
            elif option4 == 'more information':
                print("All information pertaining to the distress signals are sent to Federation Headquarters for further "
                      "analysis.\n")
Any feedback and help is greatly appreciated mate.
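A common cause of every path running is that the nested blocks lost their indentation (or branches were written as sequential if statements), so control falls through into the next storyline. One way to make each path end cleanly is to return as soon as a branch reaches an ending. This is a sketch, not the original assignment's code, and the endings are shortened stand-ins for the full paragraphs:

```python
def play(option1, option2="no"):
    # Each branch returns immediately, so only one ending can ever run
    # per playthrough; nothing "falls through" into later paths.
    if option1 == "ignore":
        if option2 == "yes":
            return "Your crew is grateful for your intuition."
        return "Your inaction costs the Federation a powerful ally."
    if option1 == "contact":
        return "You open hailing frequencies with the unknown vessel."
    return "You wait for support from the nearest Federation vessel."

print(play("ignore", "yes"))
```

Printing the chosen ending happens once, at the call site, which also makes each path easy to test in isolation.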
Pingback: What Can I Do With Python? – Real Python - Almas Ali
Hi, I loved your tutorial but when I try to run the game I keep getting this error:
Traceback (most recent call last):
File "c:\Users\Owner\tutorial\game.py", line 48, in <module>
play()
File "c:\Users\Owner\tutorial\game.py", line 27, in play
world.load_tiles()
File "c:\Users\Owner\tutorial\world.py", line 25, in load_tiles
_world[(x, y)] = None if tile_name == '' else getattr(__import__('tiles'), tile_name)(x, y)
AttributeError: module 'tiles' has no attribute '
18.5.4. Transports and protocols (callback based API)¶
18.5.4.1. Transports¶
Transports are classes provided by
asyncio in order to abstract
various kinds of communication channels. You generally won't instantiate
a transport yourself; instead, you will call an
AbstractEventLoop method which will create the transport and try to
initiate the underlying communication channel, calling you back when it
succeeds.
Changed in version 3.6: The socket option TCP_NODELAY is now set by default.
18.5.4.1.1. BaseTransport¶
class asyncio.BaseTransport¶
Base class for transports.
close()¶
Close the transport. If the transport has a buffer for outgoing data, buffered data will be flushed asynchronously. No more data will be received. After all buffered data is flushed, the protocol's connection_lost() method will be called with None as its argument.
get_extra_info(name, default=None)¶
Return optional transport information. name is a string representing the piece of transport-specific information to get, default is the value to return if the information doesn't exist.
This method allows transport implementations to easily expose channel-specific information.
Changed in version 3.5.1: 'ssl_object' info was added to SSL sockets.
set_protocol(protocol)¶
Set a new protocol. Switching protocol should only be done when both protocols are documented to support the switch.
New in version 3.5.3.
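A small self-contained sketch of get_extra_info() in practice; for socket transports, 'peername' is the remote address and 'sockname' the local one. The PeerLogger protocol and loopback wiring are illustrative, not from the documentation, and asyncio.run() requires Python 3.7+:

```python
import asyncio

peers = []  # record what the server-side protocol observed

class PeerLogger(asyncio.Protocol):
    def connection_made(self, transport):
        # Ask the transport for channel-specific info: here, the
        # remote (host, port) pair of the connecting client.
        peers.append(transport.get_extra_info("peername"))
        transport.close()

async def demo():
    loop = asyncio.get_running_loop()
    server = await loop.create_server(PeerLogger, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    transport, _ = await loop.create_connection(asyncio.Protocol,
                                                "127.0.0.1", port)
    await asyncio.sleep(0.1)   # give the server's callback time to run
    transport.close()
    server.close()
    await server.wait_closed()

asyncio.run(demo())
print(peers[0][0])
```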
18.5.4.1.2. ReadTransport¶
class asyncio.ReadTransport¶
Interface for read-only transports.
pause_reading()¶
Pause the receiving end of the transport. No data will be passed to the protocol's data_received() method until resume_reading() is called.
resume_reading()¶
Resume the receiving end. The protocol's data_received() method will be called once again if some data is available for reading.
18.5.4.1.3. WriteTransport¶
class asyncio.WriteTransport¶
Interface for write-only transports.
abort()¶
Close the transport immediately, without waiting for pending operations to complete. Buffered data will be lost. No more data will be received. The protocol's connection_lost() method will eventually be called with None as its argument.
can_write_eof()¶
Return True if the transport supports write_eof(), False if not.
get_write_buffer_limits()¶
Get the high- and low-water limits for write flow control. Return a tuple (low, high) where low and high are positive number of bytes.
Use set_write_buffer_limits() to set the limits.
New in version 3.4.2.
set_write_buffer_limits(high=None, low=None)¶
Set the high- and low-water limits for write flow control. These two values control when the protocol's pause_writing() and resume_writing() methods are called. If only the high-water limit is given, the low-water limit defaults to an implementation-specific value less than or equal to the high-water limit.
write(data)¶
Write some data bytes to the transport.
This method does not block; it buffers the data and arranges for it to be sent out asynchronously.
writelines(list_of_data)¶
Write a list (or any iterable) of data bytes to the transport. This is functionally equivalent to calling write() on each element yielded by the iterable, but may be implemented more efficiently.
write_eof()¶
Close the write end of the transport after flushing buffered data. Data may still be received.
This method can raise NotImplementedError if the transport (e.g. SSL) doesn't support half-closes.
18.5.4.1.4. DatagramTransport¶
class asyncio.DatagramTransport¶
Interface for datagram (UDP) transports.
sendto(data, addr=None)¶
Send the data bytes to the remote peer given by addr (a transport-dependent target address). If addr is None, the data is sent to the target address given on transport creation. This method does not block; it buffers the data and arranges for it to be sent out asynchronously.
18.5.4.1.5. BaseSubprocessTransport¶
class asyncio.BaseSubprocessTransport¶
get_pid()¶
Return the subprocess process id as an integer.
get_pipe_transport(fd)¶
Return the transport for the communication pipe corresponding to the integer file descriptor fd:
- 0: writable streaming transport of the standard input (stdin), or None if the subprocess was not created with stdin=PIPE
- 1: readable streaming transport of the standard output (stdout), or None if the subprocess was not created with stdout=PIPE
- 2: readable streaming transport of the standard error (stderr), or None if the subprocess was not created with stderr=PIPE
- other fd: None
get_returncode()¶
Return the subprocess returncode as an integer or None if it hasn't returned, similarly to the subprocess.Popen.returncode attribute.
kill()¶
Kill the subprocess, as in subprocess.Popen.kill().
On POSIX systems, the function sends SIGKILL to the subprocess. On Windows, this method is an alias for terminate().
send_signal(signal)¶
Send the signal number to the subprocess, as in subprocess.Popen.send_signal().
terminate()¶
Ask the subprocess to stop, as in subprocess.Popen.terminate().
On POSIX systems, this method sends SIGTERM to the subprocess. On Windows, the Windows API function TerminateProcess() is called to stop the subprocess.
18.5.4.2. Protocols¶
asyncio provides base classes that you can subclass to implement your network protocols. Those classes are used in conjunction with event loops: the event loop calls the methods of the protocol.
18.5.4.2.1. Protocol classes¶
class asyncio.Protocol¶
The base class for implementing streaming protocols (for use with e.g. TCP and SSL transports).
class asyncio.DatagramProtocol¶
The base class for implementing datagram protocols (for use with e.g. UDP transports).
class asyncio.SubprocessProtocol¶
The base class for implementing protocols communicating with child processes (through a set of unidirectional pipes).
18.5.4.2.2. Connection callbacks¶
These callbacks may be called on Protocol, DatagramProtocol and SubprocessProtocol instances:
BaseProtocol.connection_made(transport)¶
Called when a connection is made.
The transport argument is the transport representing the connection. You are responsible for storing it somewhere (e.g. as an attribute) if you need to.
BaseProtocol.connection_lost(exc)¶
Called when the connection is lost or closed. The argument is either an exception object or None. The latter means a regular EOF is received, or the connection was aborted or closed by this side of the connection.
SubprocessProtocol.pipe_data_received(fd, data)¶
Called when the child process writes data into its stdout or stderr pipe. fd is the integer file descriptor of the pipe. data is a non-empty bytes object containing the data.
SubprocessProtocol.pipe_connection_lost(fd, exc)¶
Called when one of the pipes communicating with the child process is closed. fd is the integer file descriptor that was closed.
18.5.4.2.3. Streaming protocols¶
The following callbacks are called on Protocol instances:
Protocol.data_received(data)¶
Called when some data is received. data is a non-empty bytes object containing the incoming data.
Protocol.eof_received()¶
Called when the other end signals it won't send any more data (for example by calling write_eof(), if the other end also uses asyncio).
18.5.4.2.4. Datagram protocols¶
The following callbacks are called on DatagramProtocol instances:
DatagramProtocol.datagram_received(data, addr)¶
Called when a datagram is received. data is a bytes object containing the incoming data. addr is the address of the peer sending the data; the exact format depends on the transport.
DatagramProtocol.error_received(exc)¶
Called when a previous send or receive operation raises an OSError. exc is the OSError instance. This method is called in rare conditions, when the transport (e.g. UDP) detects that a datagram couldn't be delivered to its recipient. In many conditions though, undeliverable datagrams will be silently dropped.
18.5.4.2.5. Flow control callbacks¶
These callbacks may be called on Protocol, DatagramProtocol and SubprocessProtocol instances:
BaseProtocol.pause_writing()¶
Called when the transport's buffer goes over the high-water mark.
BaseProtocol.resume_writing()¶
Called when the transport's buffer drains below the low-water mark.
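The flow control callbacks (pause_writing() and resume_writing()) are plain methods on your protocol, so the back-pressure bookkeeping can be sketched in isolation. The ThrottledSender class is illustrative, not part of asyncio:

```python
import asyncio

class ThrottledSender(asyncio.Protocol):
    """Track pause/resume signals so a write loop can consult them."""
    def __init__(self):
        self.can_write = True

    def pause_writing(self):
        # The transport's outgoing buffer went over the high-water mark.
        self.can_write = False

    def resume_writing(self):
        # The buffer drained below the low-water mark; writing is safe again.
        self.can_write = True

proto = ThrottledSender()
proto.pause_writing()     # simulate the transport applying back-pressure
print(proto.can_write)
proto.resume_writing()
print(proto.can_write)
```

In a real protocol the transport itself invokes these methods; a coroutine producing data would typically wait on an asyncio.Event toggled by them rather than poll a flag.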
18.5.4.2.6. Coroutines and protocols¶
Coroutines can be scheduled in a protocol method using ensure_future(), but there is no guarantee made about the execution order. Protocols are not aware of coroutines created in protocol methods and so will not wait for them.
18.5.4.3. Protocol examples¶
18.5.4.3.1. TCP echo client protocol¶
TCP echo client using the AbstractEventLoop.create_connection() method, send data and print data received.
18.5.4.3.2. TCP echo server protocol¶
TCP echo server using the AbstractEventLoop.create_server() method, send back received data and close the connection.
18.5.4.3.3. UDP echo client protocol¶
UDP echo client using the AbstractEventLoop.create_datagram_endpoint() method, send data and close the transport when we received the answer.
18.5.4.3.4. UDP echo server protocol¶
UDP echo server using the AbstractEventLoop.create_datagram_endpoint() method, send back received data.
18.5.4.3.5. Register an open socket to wait for data using a protocol¶
Wait until a socket receives data using the AbstractEventLoop.create_connection() method with a protocol.
The watch a file descriptor for read events example uses the low-level AbstractEventLoop.add_reader() method to register the file descriptor of a socket.
The register an open socket to wait for data using streams example uses high-level streams created by the open_connection() function in a coroutine.
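A compact, self-contained sketch in the same callback style, combining a TCP echo server and client on an ephemeral loopback port. This is not the documentation's example code; it uses asyncio.run(), available since Python 3.7:

```python
import asyncio

class EchoServer(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport          # keep the transport to reply on

    def data_received(self, data):
        self.transport.write(data)          # echo the bytes straight back
        self.transport.close()

class EchoClient(asyncio.Protocol):
    def __init__(self, message, on_reply):
        self.message = message
        self.on_reply = on_reply            # future resolved with the echo

    def connection_made(self, transport):
        transport.write(self.message.encode())

    def data_received(self, data):
        self.on_reply.set_result(data.decode())

async def main():
    loop = asyncio.get_running_loop()
    server = await loop.create_server(EchoServer, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reply = loop.create_future()
    await loop.create_connection(lambda: EchoClient("Hello World!", reply),
                                 "127.0.0.1", port)
    result = await reply
    server.close()
    await server.wait_closed()
    return result

print(asyncio.run(main()))
```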
Recently I wrote about an exciting new project coming to Solaris by way of the Network Auto-Magic project. I also talked about releasing a sneak peek at the promise of the Network Auto-Magic project in an upcoming OpenSolaris release. Today I am going to discuss where we are with getting this functionality into your hands.
But before we talk about releases and dates, I would like to step back a little and discuss the rationale behind the Network Auto-Magic project and the various enhancements it brings both to sysadmins as well as the so-called "end users".
The Network Auto Magic project consists of three main components. One of these is around simplifying service configuration and discovery on a network. The second is adding Network Profiles support. And the third and major component is developing a comprehensive UI to configure, automate and manage Solaris networking configuration. Let's consider each one of these in further detail.
The service discovery aspect will be implemented by enhancing the framework from Apple's Bonjour technology. One of the strengths of this technology is that it is built on top of one of the most robust and well understood internet protocols: DNS. Specifically, the technology allows applications to discover advertised services on a network. The project will deliver a public library which can be used by developers to make simple modifications to their application/service so that the services can participate in network service discovery. This reduces configuration: rather than an admin having to hard-code a particular service to a certain device, your application is now free to auto-discover it. Eventually applications and clients can become smarter too: they can 'probe' the network on startup, and unless they find a service on the network, there is no need for them to keep trying to reach a server. Like other network services delivered in Solaris, all of this functionality will be fully integrated with the Service Management Facility (SMF). This component will soon be released via an OpenSolaris build, so stay tuned!
Network profiles, the primary component of the Network Auto-Magic project, are one of the ways to simplify and automate network configuration and management. They work by allowing users to specify collections of various network properties and have them be managed automatically based on different network environments. A Network Configuration Profile (NCP) will also include policy, such as which network interfaces to use, whether they should be activated automatically, and so on. At any given time, exactly one NCP and one Environment are active. Users may modify the NCP to specify how Solaris should react in a particular network environment and have the right sets of actions automatically take place. For example, if you check email at your neighborhood Starbucks you may want your laptop to connect to the WiFi access point with the correct security flavor automatically, start DHCP on it and enable DNS for host resolution. You want to turn off wired interfaces and perhaps have the display appear scrambled to anyone besides you! (We are still working on the latter.) Then when you go back to your office and connect to a wired connection, you might expect to shut down the WiFi interface, enable certain services (such as NFS file sharing or NIS for host resolution) and have your browser use the proxy servers defined via Gconf.
Finally, let's discuss the third component of the project: the comprehensive UI. The first thing long-time users of Solaris will notice, when the entire project is delivered, is that we are not just delivering incremental ease of use by cleaning up redundant code or even replacing multiple layers of CLI with a "high-level" CLI. NWAM will do both of those, but it certainly does not stop there. It also delivers a comprehensive GUI with the same look and feel as the Java Desktop System. We have published a Flash based prototype based on our UI specification. It's not functionally complete and some aspects are likely to change in the final version, but it does give you an idea of what you might see. And that's not all: there will also be a separate Status Notification GUI that will give you a quick snapshot of the current network status. For example, it will graphically display the signal strength of the selected WiFi network. Routine tasks such as enabling or disabling an interface (on multiple homed machines) no longer require invoking (or knowledge of) complex CLI such as ifconfig(1M) or dladm(1M).
Now, for the sneak peek! Starting with Solaris Express Developer Edition 5/07, you will be able to preview some of the Network Auto-Magic functionality. If you are installing Solaris on a supported laptop this sneak peek is for you. (Specifically, there is a limitation that only one link is active at a time.) The major new functionality supported with this release of Solaris Express Developer Edition is WiFi support, and with Network Auto Magic it just works "out of the box". All flavors of WEP and WPA2 are supported for the first time. Obviously not all laptops are supported, but common WiFi chipset implementations such as Atheros and Intel Centrino are. Solaris Express Developer Edition Release 5/07 will be available around mid-June 2007. Let's explore how the NWAM preview works.
This release of Solaris Developer Express includes the 'NWAM daemon' which allows for automated network configuration on laptops and desktop machines. This daemon monitors an available Ethernet interface and automatically enables DHCP on it. If no interface is plugged into a wired network, the NWAM daemon conducts a wireless scan and queries the user for a WiFi access point to connect to via a popup GUI. Once you select a WiFi access point and connect to it successfully that choice will be saved in a file. The next time you are in the vicinity of that WiFi network, Solaris will connect to it without user intervention. For now, there is no profile support so you wouldn't be able to do the things I described in the Starbucks example above. Also, wired interfaces are preferred over wireless, although this is easily changed. For further details, please see the nwamd man page.
While we cannot talk about the schedule for when the rest of this functionality will be available we are currently working hard to ensure it meets with the expectations of the Solaris user community. We would love to hear your experience with the Network Auto Magic project and indeed all of new Solaris. It certainly isn't your grandfather's Solaris any more and with your input we hope to make it even easier to use.( Jun 09 2007, 04:51:35 PM PDT ) Permalink
Solaris Networking has always been known to be on the cutting edge of innovation, whether it is sterling performance or next generation virtualization and resource control. Clearly, one cannot build an imposing structure without a strong foundation and Solaris is no exception. Several engineering years of building high quality infrastructure is one reason why Sun is reclaiming its position in the workstation and server marketplace.
While having high quality plumbing is a prerequisite whether you are building a home or an operating system, many people these days like shiny fixtures to go along. And of course the solution needs to be elegant and easy to use. For many years, Sun customers have thought of Solaris Networking to be that way- very high quality and capable of heavy lifting but somewhat challenging to use.
Why is this important? For starters, an approachable Solaris will enable both developers and customers to use it more easily and help grow the community of Sun users. It will make Solaris a stronger contender for mobile platforms and for small and medium business customers. The latter often lack dedicated and advanced Unix configuration expertise. Finally, recent Linux distros and OS X have lifted the bar on configuration and management user interfaces and Solaris needs to do a better job competing in this area. At the same time, traditional data center customers need to lower TCO by reducing administration and management complexity.
After spending over a year working on design, we are in the process of implementing and delivering on the promise of significantly simplified and automated Solaris network configuration and management via an exciting new project that we call Network Auto Magic. Network Auto Magic or NWAM for short has a thriving community on OpenSolaris so join in and give us your feedback. Better yet, download the prototype and give it a spin! We are planning on releasing the prototype via OpenSolaris in the very near future so stay tuned. And there is much more to come. John Beck recently presented at the Bay Area OpenSolaris User Group meeting.
No doubt the iPhone will be a hit and help to change the balance of power between wireless operators and handset makers. In the US, I can see why many folks might jump for the iPhone. Still I predict that the iPhone will not be the market defining gadget that the iPod was- at least until Apple recognizes and addresses some obvious shortcomings. Here is why:
1. No Java! Yeah, Apple has redefined the music player landscape single-handedly, but even it cannot beat the momentum behind the billions of phones that support mobile Java. Looking at just one segment of the market, Java game downloads, it's unclear why Apple chose not to play in it. Sure, OS X is cool as a development platform but it is absolutely no contest to J2ME. For my personal cellphone, that support was instrumental in my being able to download the free and capable Gmail mobile client.
2. The GSM market in the US isn't the place for mobile innovation! Only 2 of the 4 major network operators use GSM. Furthermore, under their iron grip there is little incentive for the average user to go out and buy a particular handset. Many folks typically first choose the network operator and then pick a locked handset from a limited list. This is the opposite of the experience in many fast-growing worldwide mobile markets, where handsets are not subsidized and not sold locked. In other words, what if one likes the iPhone but does not want to (or cannot, for coverage or another reason) use Cingular? Will the iPhone single-handedly cause a move away from CDMA, and away from the other GSM operator to Cingular?
Finally, US network operators currently charge exorbitantly for a data plan compared to European or Asian operators, causing few to use data services. Will the iPhone encourage Cingular to make its data plans more affordable? I sure hope Apple isn't planning to sell both a network and software locked handset in the rest of the world!
3. My own cellphone is a Sony Ericsson W810i with a 2 GB Memory Stick Duo in it. It has a 2 MP Auto Focus camera with photo light, a music player that supports MP3 and AAC, a WAP 2.0 compliant browser, Bluetooth 1.2, and USB synchronization (that works perfectly well with Solaris Nevada!). And did I mention the hundreds of mp3 songs on this phone? All this for under $300. Because it is unlocked and has Quad frequency GSM support it can be used virtually anywhere in the world. Other than WiFi support, I guess I don't see a single missing feature that would make it worthwhile to upgrade to the iPhone at $499. At that price point one can get a network unlocked Nokia N-series handset with WiFi support. The recently introduced N95 even supports high-speed networks, GPS and a 5 MP camera. In other words, Apple will have plenty of competition from well established market players who have strong, mutually beneficial relationships with network operators around the world, quite unlike the situation in the digital music player market when Apple introduced the iPod.
Sure, none of the above are significant issues for the iPhone. After all this is a 1.0 product and Apple has done well building on its first iPod. To the extent that Apple recognizes and addresses some of these issues, the iPhone can only help to jump start the maturing of the GSM market in the US and hasten Apple's transition to a consumer electronics company.
( Jan 15 2007, 07:29:56 PM PST )
Permalink
Not so long ago, Sun hardware and the Solaris operating system used to be synonymous with having an Internet presence of any kind. Then the NASDAQ crashed, the bubble burst and unfortunately Sun and Solaris pulled back from its position of pre-eminence.
Fast forward to 2006. Sun is back into the game with AMD 64 based Opteron servers. And then, there is Solaris 10!
As I mentioned in a previous blog entry, Solaris 10 now ships with a recent release of BIND 9, the defacto standard implementation of DNS client and servers on Unix/Linux. What better than the stringent performance and security requirements of one of the nodes of a root DNS server to demonstrate that Solaris 10 can do the heavy lifting of virtually anything one can throw at it?
Solaris 10 now powers one of the global nodes of F-Root, itself one of the 13 root DNS servers of the Internet. Just one more confirmation that Solaris 10 is helping Sun get back to its position of eminence.
( Oct 25 2006, 07:47:19 PM PDT )
Permalink
Tamarack is an exciting new project. The project name is taken from the road on which one of the team members has a mountain cabin, and the logo is inspired by its namesake restaurant and casino in Reno, which is not surprising when the release vehicle is codenamed Nevada!
For several years and multiple releases, Solaris has had a sub-par user experience with removable media and hotpluggable devices compared to competitive desktop environments. The solution was incomplete, complex, and did not integrate with the desktop.
Whether it is a memory stick or a Secure Digital card, a digital camera or an iPod, a music CD or blank DVD media, Tamarack seamlessly integrates with the desktop to bring a significantly enhanced user experience compared to previous releases of Solaris. More importantly, Tamarack does this via a modern, open source and extensible framework, HAL.
I haven't posted here for a really long time. I guess now is as good a time as any to update my blog with what I have been doing recently.
For starters, I am now a software development manager in the KISS (Keep it Simple, Solaris) organization managing a team working on Network and I/O Approachability.
Specifically, this means I manage multiple networking and I/O projects that will make Solaris more "approachable" or which automate configuration where and when it makes sense. I will be describing various exciting initiatives and projects in this space.
( Oct 16 2006, 11:45:53 AM PDT )
I was one of the hundreds lining up at my neighborhood Best Buy to snap up the Sony PSP (a.k.a. the PlayStation Portable). I would have loved to be at the official launch party at the Sony Metreon or the main one in New York City. Well, at least the weather in San Jose is much nicer this time of the year than SF or NYC! Now I must admit that when it comes to handheld gaming consoles, I'm a newbie. My experience has been limited to watching my 7-year-old son show off his Pokedex on his Game Boy Advance SP.
But a recent trip to Tokyo, Japan changed all that. I was fortunate to have time to visit the Sony Building in Ginza. For a gadget freak like me, it was heaven: the very latest technologies, in some cases technologies that the rest of the world outside Japan may not see for months, or years.
One such gadget was the PSP. At that time the PSP had only been launched in Japan and sold out almost overnight. The US launch was still a month away. But the console just blew me away. The quality of the 16:9 format screen was a work of art: the videos were flawless, the games engrossing. And it had built-in Wi-Fi, which meant I might even be able to use it to browse and email at my neighborhood Starbucks!
Fast forward to the present. While the PSP has lived up to much of its hype for video and gaming, Sony has still kept the PSP a largely restricted, proprietary device, whether it's internet access or the use of the "universal media disks". Of course, some folks quickly figured out a way to use a built-in browser in one of Sony's own games, Wipeout Pure, to surf the web using the built-in Wi-Fi. Isn't it interesting, though, that it's not so much a PSP hack as a well-known DNS hack, record spoofing, which is both very old and very common.
All you have to do is set up a DNS server serving the "hacked" resource records and get a DNS resolver (read: browser) to somehow point to it. In this case, of course, you are telling your resolver that you want that alternate DNS resolution! The PSP resolver still makes a DNS request as before and thinks it's fetching the real address record, but of course it's not! It's getting the IP address where you set up your own website! Unfortunately, these types of attacks can be minimized but not entirely eliminated until DNSSEC allows all DNS servers to have cryptographically signed records. (The BIND 9 DNS server, newly available in Solaris 10, has some helpful features such as no automatic glue fetching, a random ID pool, the ability to create "split" views of your namespace, and so on.)
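To make the mechanics concrete, here is a minimal sketch (our own illustration, not the actual PSP hack) of what a record-spoofing DNS server does: it echoes the query back with an answer record pointing at whatever IP address you choose. The packet layout follows RFC 1035; the function name and the fixed 60-second TTL are assumptions for the example.

```python
import struct

def build_spoofed_response(query: bytes, ip: str) -> bytes:
    """Answer any single-question A-record query with the given IP.

    A record-spoofing sketch using the RFC 1035 wire format.
    """
    qid = query[:2]                      # echo the 16-bit query ID
    question = query[12:]                # QNAME + QTYPE + QCLASS, copied verbatim
    # Header: standard-response flags, 1 question, 1 answer, 0 authority/additional
    header = qid + struct.pack(">HHHHH", 0x8180, 1, 1, 0, 0)
    # Answer: compression pointer to QNAME at offset 12, TYPE=A, CLASS=IN,
    # TTL=60 seconds, RDLENGTH=4, then the 4 address octets
    answer = struct.pack(">HHHIH", 0xC00C, 1, 1, 60, 4)
    answer += bytes(int(octet) for octet in ip.split("."))
    return header + question + answer
```

A spoofing server would simply receive a query on UDP port 53 and send this response back; the resolver trusts it because the query ID matches, which is exactly why signed records are needed to close the hole.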
So what next for the PSP platform and handheld gaming in general? Based on purely anecdotal data and speculation, I believe that handheld consoles will continue to "grow up" as baby boomers and much of the "developed" world ages. The traditional handheld market already appears to be declining rapidly, and Sony itself has exited this segment in most worldwide markets. This means the Sonys of the world will put more non-gaming functionality into these types of consoles. After all, the past ten years have been characterized by vendors putting Wi-Fi into PDAs, Bluetooth into cellphones, PDAs into phones (or vice versa), cameras into phones, GPS into PDAs, gaming consoles into phones, and so on. However, in my opinion this has been more akin to throwing darts and seeing which ones stick than any real movement towards convergence and ubiquity. The next decade could see some real convergence emerge, with functions customers like to see on the same device coming together, while the ones not particularly interesting fall back to smaller, niche markets. The only question is what those functions will be!
( Jun 13 2005, 11:24:19 AM PDT )
{root:dnssrv:25} svcs dns/server
STATE          STIME    FMRI
online         15:57:57 svc:/network/dns/server:default
{root:dnssrv:26} rndc stop
{root:dnssrv:27} svcs dns/server
STATE          STIME    FMRI
disabled       15:58:07 svc:/network/dns/server:default
{root:dnssrv:29} svcs dns/server
STATE          STIME    FMRI
online         16:00:00 svc:/network/dns/server:default
{root:dnssrv:30} svcadm disable dns/server
{root:dnssrv:31} rndc status
rndc: connect failed: connection refused

If you do not want to use the SMF framework at all, i.e. you start the DNS server directly from the command line, rndc stop/halt works as you would expect, and svcs/svcadm is expectedly unaware of the presence of the named process.
{root:dnssrv:32} /usr/sbin/named
{root:dnssrv:33} svcs dns/server
STATE          STIME    FMRI
disabled       16:00:17 svc:/network/dns/server:default
{root:dnssrv:35} rndc halt
manisha# rndc status
number of zones: 4
<...output truncated for brevity...>
server is up and running
manisha# rndc halt
use svcadm(1M) to manage named
manisha# svcadm disable dns/server
manisha# rndc status
rndc: connect failed: connection refused

It is also possible to run the BIND server within a dedicated non-global Solaris 10 Container. More about that in another blog entry!
manisha# svccfg import server-chroot.xml
manisha# svcadm enable dns/server:chroot
manisha# svcs dns/server
STATE          STIME    FMRI
disabled       16:36:09 svc:/network/dns/server:default
online         16:37:17 svc:/network/dns/server:chroot
manisha# pgrep named
2457
manisha# pcred 2457
2457: e/r/suid=60002 e/r/sgid=0
I have been at Sun for 8 years now, and an engineer in Solaris Networking for most of that time. I used to work in the Naming and Directory Services organization; however, several reorganizations later, I find myself in KISS, which is Keep it Simple, Solaris!
I hope to update the blog with stuff I have worked on and stuff that I find interesting.
( May 23 2005, 02:58:34 PM PDT )
September 2012
This tutorial looks at the tools needed to start development (C programming) on 8-bit AVR microcontrollers and shows how to write a C program and load it to an AVR microcontroller.
Development is done using the free Atmel Studio running on a Windows platform. A simple flashing LED circuit will be built using an AVR microcontroller.
A USB programmer is required in order to load a program into an AVR microcontroller. The AVRISP mkII (ATAVRISP2) from Atmel is used in this tutorial.
An ATtiny2313 microcontroller breadboard circuit will be used as the test circuit. This microcontroller will be powered by 5V – the 5V from a PC power supply will be used.
An LED and resistor will be required to build the circuit. Single strand, single core wire will be needed to connect the microcontroller to the AVR programmer.
Tutorial parts list:
Download Atmel Studio from the Atmel website. Atmel Studio 6.0 is used in the tutorial and loaded onto Windows 7.
After downloading Atmel Studio, install it by running the downloaded executable file. After software installation, plug the AVRISP mkII device into a spare USB port on the PC. The drivers for the AVRISP mkII should automatically be installed.
The circuit diagram is shown below:
ATtiny2313 Blink Circuit Diagram
Build the circuit on a breadboard and use the connections from the ISP header in the circuit diagram to connect the AVRISP mkII to the ATtiny2313 using single strand wire.
This photo shows the result:
ATtiny2313 Breadboard Circuit
These steps are shown in a video at the end of the tutorial.
1. Start Atmel Studio.
2. Click New Project… from the Start Page window, or click File → New → Project...
Starting a New Project in Atmel Studio
3. Choose the location to save the project to and choose a project name. The project for this tutorial will be named attiny2313_blink. Be sure to fill this name into the Name field and not the Solution name field.
Saving the Project in Atmel Studio
4. In the next dialog box that pops up, select the device. Use the Device Family drop-down box to select tinyAVR, 8-bit and then select the device in the table below – ATtiny2313.
Selecting the Device
After selecting the device, the project will be created and a C file will be opened that contains the main() function.
5. Write the program.
The code is shown below, copy it and paste it into the C file that is open in Atmel Studio. Save the file.
#include <avr/io.h>

void Delay(void);

int main(void)
{
    DDRB |= 1 << PB0;            /* set PB0 as an output */

    while(1)
    {
        PORTB &= ~(1 << PB0);    /* LED off */
        Delay();
        PORTB |= 1 << PB0;       /* LED on */
        Delay();
    }
}

void Delay(void)
{
    /* crude busy-wait delay; volatile stops the compiler optimizing it away */
    volatile unsigned int del = 16000;

    while(del--)
        ;
}
6. Build the project by clicking Build → Build attiny2313_blink on the top menu bar.
Building the Project
If the program build succeeded, the program can be loaded to the AVR – described next.
1. Make sure that the AVRISP mkII is plugged into the USB port.
2. On the top menu bar, click Tools → Device Programming to open the Device Programming dialog box.
Opening the Device Programming Dialog Box
3. In the dialog box, make sure that Tool is set to AVRISP mkII and Device is set to ATtiny2313 – click the Apply button.
Device Programming Settings
4. Test that the programmer can connect to the target.
Switch the power to the target on (the 5V supply to the ATtiny2313). In the Device Programming dialog box, click the Read button under Target Voltage – a voltage should be read and displayed. Click the Read button under Device signature – a device signature should be read and displayed.
Reading Circuit and Device Parameters
5. Program the device.
Click Memories in the left pane of the Device Programming dialog box.
Under Flash, open the attiny2313_blink.hex file by clicking the […] button and then navigating to the Debug directory of the project.
Click the Program button to load the program to the ATtiny2313 device.
Programming the AVR Device
If the programming was successful, the LED connected to the circuit will start flashing on and off.
This video shows the above steps being performed:
You may also be interested in the ATtiny2313 tutorial series.
The best answers to the question “Show DataFrame as table in iPython Notebook” in the category Dev.
QUESTION:
I am using the IPython notebook. When I do this:
df
I get a beautiful table with cells. However, if i do this:
df1
df2
it doesn’t print the first beautiful table. If I try this:
print df1
print df2
It prints out the table in a different format that spills columns over and makes the output very tall.
Is there a way to force it to print out the beautiful tables for both datasets?
ANSWER:
from IPython.display import display
display(df)
# OR
print df.to_html()
ANSWER:
You’ll need to use the HTML() or display() functions from IPython’s display module:
from IPython.display import display, HTML

# Assuming that dataframes df1 and df2 are already defined:
print "Dataframe 1:"
display(df1)
print "Dataframe 2:"
display(HTML(df2.to_html()))
Note that if you just print df1.to_html() you’ll get the raw, unrendered HTML.
You can also import from IPython.core.display with the same effect.
ANSWER:
I prefer not messing with HTML and using as much native infrastructure as possible. You can use the Output widget with HBox or VBox:
import ipywidgets as widgets
from IPython import display
import pandas as pd
import numpy as np

# sample data
df1 = pd.DataFrame(np.random.randn(8, 3))
df2 = pd.DataFrame(np.random.randn(8, 3))

# create output widgets
widget1 = widgets.Output()
widget2 = widgets.Output()

# render in output widgets
with widget1:
    display.display(df1)
with widget2:
    display.display(df2)

# create HBox
hbox = widgets.HBox([widget1, widget2])

# render hbox
hbox
This outputs:
ANSWER:
This answer is based on the 2nd tip from this blog post: 28 Jupyter Notebook tips, tricks and shortcuts
You can add the following code to the top of your notebook
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
This tells Jupyter to print the results for any variable or statement on its own line. So you can then execute a cell solely containing
df1
df2
and it will “print out the beautiful tables for both datasets”.
ASCOM for end user application developers
How to use ASCOM API to control astronomical hardware
ASCOM is a nice platform providing unified access to astronomical hardware controllable from a computer. Hardware makers provide an ASCOM driver, and end user apps like PHD or Nebulosity can start using it without problems or additional drivers. For developers, those who want to make drivers or use the ASCOM API, the journey to ASCOM starts with digging through raw documentation. Some help may also be found on the mailing list. It's not documented as well as Django, but still, it's usable.
We start by installing additional components that provide documentation and a few handy libraries. The ASCOM platform uses the .NET framework and provides a .NET API. For other languages there is full API access via COM.
The documentation is in Program Files/ASCOM/Platform 6 Developer Components/Developer Documentation (or "Program Files (x86)" on 64-bit Windows). The most interesting file is PlatformDeveloperHelp.chm, which contains a few examples and the complete namespace reference of the ASCOM API.
API for end user apps
End user apps are those that use the ASCOM drivers/API to control hardware like a mount or a camera. In most such cases we will use ASCOM.DriverAccess to do that. It gives an API to control domes, filter wheels, focusers, rotators and mounts. In PlatformDeveloperHelp.chm you will find the whole namespace for this library.
Python and COM
For languages outside .NET platform we have to use the COM technology. Here is a simple Python script that slews the mount:
win32com.client.Dispatch is used to load the library, a driver "id" in this case. "Celestron.Telescope" is the ID of the Celestron ASCOM driver that controls Celestron and SkyWatcher mounts. This driver has the same API as ASCOM.DriverAccess.Telescope. First we connect to the device by setting "Connected" to True. Then we set Tracking to True to order the mount to start tracking (sidereal rate by default). The "SlewToCoordinates" method will slew the mount to the given coordinates (RightAscension, Declination).
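The script itself did not survive here, so the following is a hedged reconstruction of what such a slew script looks like, based only on the calls described above (the function wrapper is our own; it requires pywin32 on Windows with the ASCOM platform installed):

```python
def slew(ra_hours, dec_degrees, driver_id="Celestron.Telescope"):
    # Windows-only: pywin32 provides the COM bridge to the ASCOM driver
    import win32com.client

    telescope = win32com.client.Dispatch(driver_id)
    telescope.Connected = True    # connect to the mount
    telescope.Tracking = True     # start tracking (sidereal rate by default)
    telescope.SlewToCoordinates(ra_hours, dec_degrees)
    return telescope
```

Called as, say, slew(ra, dec) with right ascension in hours and declination in degrees, matching the SlewToCoordinates signature described above.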
To let the user pick the equipment they want, we need the driver ID (like "Celestron.Telescope"), and we can get it via ASCOM.Utilities.Chooser: this class shows the chooser window for every type of equipment.

The Chooser supports all types of equipment handled by ASCOM. The Choose method shows the chooser window and returns the driver ID when the user closes it. All settings applied by the user will be saved too.
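From Python over COM, the Chooser can be driven like this (a sketch; the wrapper function is our own, pywin32 and the ASCOM platform are required on Windows):

```python
def choose_driver(device_type="Telescope"):
    # Windows-only: shows ASCOM's chooser dialog for the given equipment type
    import win32com.client

    chooser = win32com.client.Dispatch("ASCOM.Utilities.Chooser")
    chooser.DeviceType = device_type   # "Telescope", "Camera", "FilterWheel", ...
    # Choose() blocks until the user closes the dialog and
    # returns the selected driver ID, e.g. "Celestron.Telescope"
    return chooser.Choose(None)
```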
IronPython
IronPython is a Python implementation on the .NET platform. As such it can use .NET libraries, like so:
The API is the same, only the import is different. In the folder with the script I placed the ASCOM.DriverAccess.dll that can be found in Program Files/ASCOM/Platform 6 Developer Components/Components/Platform6. The script is executed by the IronPython interpreter, ipy.exe. In the case of cameras it looks quite similar:
The "images" value is an ImageArray object which needs to be somehow saved as a FITS file (I haven't made that work yet). For .NET there is CSharpFITS (and PyFITS for Python) which can be used. This library also requires NUnit: install it and copy/load nunit.framework.dll in the script:
I develop the spaCy NLP library. We have our own NN library, Thinc, to avoid dependencies (plus I started writing it before PyTorch was around :p), but for obvious reasons we’d like to let people use PyTorch models in spaCy as well.
The plan has been to write small shim classes that would wrap PyTorch (or other libraries’) models to have the same API as Thinc. You can find the wrapper class so far here:
Questions
How do I resize an input layer? If neurons are added, the weights for the new activations should be zero. If the new size is smaller, the last activations should be truncated.
How do I resize an output layer? If neurons are added, the weights for the new activations should be zero. If the new size is smaller, the last activations should be truncated.
Thinc has a use_params() context-manager, which allows usage of weights passed in for the scope of a block. Is load_state_dict() the right thing there?
Current progress
The heart of the wrapper is Thinc's begin_update() method. This takes a batch of inputs, and returns a tuple with a batch of outputs and a callback to complete the backward pass. This was pretty easy to do, but I wrote it a few months ago — hopefully it’s still current?
def begin_update(self, x_data, drop=0.):
    '''Return the output of the wrapped PyTorch model for the given input,
    along with a callback to handle the backward pass.
    '''
    x_var = torch.autograd.Variable(torch.Tensor(x_data), requires_grad=True)
    # Make prediction
    y_var = self._model(x_var)
    def backward_pytorch(dy_data, sgd=None):
        dy_var = torch.autograd.Variable(torch.Tensor(dy_data))
        torch.autograd.backward((y_var,), grad_variables=(dy_var,))
        dX = self.ops.asarray(x_var.grad.data)
        if sgd is not None:
            optimizer.step()  # NB: 'optimizer' is not defined in this scope
        return dX
    return self.ops.asarray(y_var.data), backward_pytorch
The main outstanding problem with the begin_update() wrapper above is that Thinc takes an argument drop, which is a float between 0 and 1. This is used to dropout the outgoing activations. We shouldn’t need to worry about making this auto-differentiable – it should be fine to get the dropout mask, and then just multiply it by the activations. Then we can multiply the incoming gradient by the mask as well, since the mask will be stored in the enclosing scope.
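A minimal sketch of that mask trick, in plain Python rather than Thinc's or PyTorch's actual API (the function names are ours): generate an inverted-dropout mask once, multiply the outgoing activations by it in the forward pass, and multiply the incoming gradient by the same mask in the backward callback.

```python
import random

def dropout_mask(n, drop, seed=None):
    """Inverted dropout: kept units are scaled by 1/(1-drop) so the
    expected activation is unchanged."""
    rng = random.Random(seed)
    scale = 1.0 / (1.0 - drop)
    return [scale if rng.random() >= drop else 0.0 for _ in range(n)]

def apply_mask(values, mask):
    return [v * m for v, m in zip(values, mask)]

# forward: mask the outgoing activations
y = [1.0, 2.0, 3.0, 4.0]
mask = dropout_mask(len(y), drop=0.5, seed=0)
y_dropped = apply_mask(y, mask)

# backward: the same mask (closed over in the callback) scales the gradient,
# with no need for the mask itself to be differentiable
dy = [0.1, 0.2, 0.3, 0.4]
dy_masked = apply_mask(dy, mask)
```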
I’ve also drafted out the serialization methods. Thinc uses to/from_bytes/disk. The architecture is not saved, just the parameters — we assume that the architecture is reconstructed before you call from_bytes(). So this seems fairly straightforward.
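As a sketch of that parameters-only pattern (a toy of our own, not spaCy's or Thinc's code): serialize just the weight values, and assume the architecture already exists when loading.

```python
import pickle

class TinyLayer:
    """Toy layer: the architecture (shape) is fixed by the constructor;
    only the parameter values travel through to_bytes/from_bytes."""

    def __init__(self, n_out, n_in):
        self.W = [[0.0] * n_in for _ in range(n_out)]
        self.b = [0.0] * n_out

    def to_bytes(self):
        # serialize parameters only, not the architecture
        return pickle.dumps({"W": self.W, "b": self.b})

    def from_bytes(self, data):
        # caller must have reconstructed a layer of the right shape first
        params = pickle.loads(data)
        self.W, self.b = params["W"], params["b"]
        return self
```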
6.6. Using an Image Library
Similarly, in the image processing example, we used from image import *. That made the functions getPixels() and getRed() accessible. We could also define a new function that returns a new color, or a new procedure that changes the image.
The for p in pixels on line 9 lets us loop through all of the pixels in the image and change the red value for each pixel. We'll talk more about looping (repeating steps) in the next chapter.
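The book's image module only exists inside the interactive text, but the pattern it describes can be sketched with a stand-in Pixel class (the class and the double_red function below are our own, named to mirror the getRed/setRed style of the book's API):

```python
class Pixel:
    """Stand-in for the image library's pixel objects."""

    def __init__(self, red, green, blue):
        self.red, self.green, self.blue = red, green, blue

    def getRed(self):
        return self.red

    def setRed(self, value):
        self.red = min(255, value)   # clamp to the valid channel range

def double_red(pixels):
    # the same shape as the book's loop: visit every pixel, change its red value
    for p in pixels:
        p.setRed(p.getRed() * 2)

pixels = [Pixel(10, 20, 30), Pixel(200, 0, 0)]
double_red(pixels)
```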
This ability to name functions and procedures, and sets of functions and procedures, and absolutely anything and any set of things in a computer is very powerful. It allows us to create abstractions that make the computer easier to program and use. More on that in a future chapter.
How to send email? Please give me details with code in JSP/servlet.
Bug Description
This patch allows for scripting in the Scheme programming language using an
interactive window. The patch was created for revision 15638.
Here is the patch:
http://
Here is a README to help one get started:
http://
Here is a list of functions created for the Scheme console:
http://
Let us know what you think.
Thanks,
David D'Angelo and Soren Berg
Here is an up-to-date patch created for svn revision 16980. I created it pretty quickly, so hopefully there are not any bugs caused by changes since the previous patch. Let me know if you have any questions/problems.
Here are some example scripts that can be used to test the console:
http://
Thanks,
David D'Angelo
Compiling inkscape with this new patch and trying to open the Scheme Console via View->Scheme results in a crash.
I'm on Debian unstable, SVN rev. #17018
Backtrace:
#0 0xb6e8513d in getc () from /lib/libc.so.6
#1 0x0854a2e9 in basic_inchar (pt=0xa53957c) at
extension/
#2 0x0854a529 in inchar (sc=0xa539400) at
extension/
#3 0x0854a5e7 in token (sc=0xa539400) at
extension/
#4 0x0854f9c0 in opexe_0 (sc=0xa539400, op=OP_READ_
extension/
#5 0x0854c1f3 in Eval_Cycle (sc=0xa539400, op=OP_LOAD) at
extension/
#6 0x0854c57f in scheme_load_file (sc=0xa539400, fin=0x0) at
extension/
#7 0x08547517 in
Inkscape:
(this=0xab6ddc0) at extension/
#8 0x0826e356 in SchemeDialogImpl (this=0xab6dc00) at
ui/dialog/
#9 0x0826f892 in Inkscape:
ui/dialog/
#10 0x081da454 in
Inkscape:
() at ./ui/dialog/
#11 0x081da4db in Inkscape:
namespace)
Inkscape:
ui/dialog/
#12 0x081d6b95 in Inkscape:
(this=0x8eb01d8, name=1857) at ui/dialog/
#13 0x081d6c18 in Inkscape:
(this=0x8eb01d8, name=1857) at ui/dialog/
#14 0x0838a030 in sp_action_perform (action=0x8edbfc0, data=0x0) at
helper/
#15 0xb73df95f in g_cclosure_
/usr/lib/
#16 0xb73d2619 in g_closure_invoke () from /usr/lib/
#17 0xb73e6dfb in ?? () from /usr/lib/
#18 0x08f67c10 in ?? ()
#19 0x00000000 in ?? ()
It's gotten to be too late in the process for inclusion of this new feature, so I'm dropping the milestone. Hopefully leaving it for 0.47 will also provide it with additional time for testing and polishing. Sorry we couldn't get it in for 0.46.
Any news on this?
On Fri, 2008-06-20 at 14:37 +0000, Alexandre Prokoudine wrote:
> Any news on this?
Well, I was curious where Ishmal's work is going. If it goes to
somewhere interesting we'd have Scheme on top of the JVM, so it wouldn't
make sense to put the scheme engine in separately... Now, I'm not sure
about Ishmal's work and where it'll end up. I don't want to close this,
but I want to decide closer to when we're talking about 0.47 and can
make a good decision.
In the long run, I'd prefer to have all of our scripting languages served by the same runtime -- things get too hairy if any one of them tries to do something clever (like conservative collection or threads or something) that ends up conflicting with the others. At the moment, it looks like the JVM is by far the strongest candidate for our common runtime.
On the downside, the JVM currently isn't well-suited to scheme, although I know the work being done by the Da Vinci Machine people will eventually change that.
Ted, are we closer enough to 0.47 now? :-)
Ted: Is this still "in progress"? It would need a major update for trunk at this point is my guess.
I think this is superseded by the SoC work on the dbus interface which can do all of this.
I would like to get this in for 0.46. Do you guys have an up-to-date patch for current SVN?
https://bugs.launchpad.net/inkscape/+bug/169967
|
Common Library Functions in Visual Basic
Introduction
Today, I would like to introduce you, especially the newbies out there, to a few of the most common Library functions in Visual Basic. My aim with this article is to introduce you to the functions, as well as explain with a small example of how the particular functions can be used.
Let's jump straight in, shall we.
String.InStr
The InStr function returns an Integer specifying the start position of the first occurrence of one string within another string. Here is a small example:
' String to search in.
Dim SearchString As String = "Hannes"
' Search for "e".
Dim SearchChar As String = "e"
Dim TestPos As Integer
' Textual comparison starting at position 1. Returns 5.
TestPos = InStr(1, SearchString, SearchChar)
As you can see, being able to manipulate strings can be quite useful and is a core fundamental in every program. Here are more string functions.
Date.DateDiff
The DateDiff function returns a Long value specifying the number of time intervals between two Date values. This simply means that it gives the difference between two dates. The difference could be in days, months, hours, minutes, weeks, years, and so on. Here is a small example:
Dim Date1 As Date = #7/28/2015#
Dim Date2 As Date = #9/28/2015#
' This will show the number of days between
' 28/07/2015 and 28/09/2015
MessageBox.Show(DateDiff(DateInterval.Day, _
    Date1, Date2))
You will use some sort of Date manipulation daily. Here is a list of more date functions.
Debug.Fail
You can write run-time messages to the Output window by using the Debug class, which is part of the System.Diagnostics class library. The Debug.Fail method always breaks execution and outputs information in a dialog box. Here is how to use it:
Try
    'Some Code
Catch e As Exception
    Debug.Fail("Unknown selection " & Selection1)
End Try
Here is more information on Debugging.
For more information on the System.Diagnostics class, read here.
Command.ExecuteReader()
Extracting data from databases is not complicated at all. Any developer needs to know how to do this, as this is essentially what will make your app tick. Here is a small example of extracting information out of a database table:
Dim sqlConnection1 As New SqlConnection("Your Connection String")
Dim cmd As New SqlCommand
Dim reader As SqlDataReader

cmd.CommandText = "SELECT * FROM Table"
cmd.CommandType = CommandType.Text
cmd.Connection = sqlConnection1

sqlConnection1.Open()
reader = cmd.ExecuteReader()
' Data is accessible through your DataReader
' object here.
sqlConnection1.Close()
You can follow this link for more information on the ExecuteReader function.
More information on working with databases can be found here.
Clipboard.SetText
The Clipboard class allows you to place text on, and retrieve text from, the system clipboard. Here is a small example:
If txtSource.SelectionLength > 0 Then
    Dim strSelectedText As String = txtSource.SelectedText
    Clipboard.SetText(strSelectedText) ' Put selection on clipboard
Else
    txtSource.SelectAll()
    Clipboard.SetText(txtSource.Text) ' Put all text on clipboard
End If
' Get the clipboard contents
Dim ClipText As String = Clipboard.GetText
' To paste at back of text; otherwise, just say
' textbox2.text = cliptext
txtDest.Text = OldText & ClipText
File.OpenWrite
File.OpenWrite opens an existing file or creates a new file for writing. The OpenWrite method opens a file if one already exists for the file path, or creates a new file if one does not exist. For an existing file, it overwrites the existing characters with the new characters. Here is a small example of its implementation:
Dim path As String = "c:\TextFile.txt"
' Open the stream and write to it.
Using fs As FileStream = File.OpenWrite(path)
    Dim info As Byte() = _
        New UTF8Encoding(True).GetBytes("Testing123")
    ' Add some information to the file.
    fs.Write(info, 0, info.Length)
End Using
For more information regarding the Using statement, read here.
For more information on the System.IO namespace, have a look through this article.
FontFamily.GetFamilies
The FontFamily.GetFamilies method returns an array that contains all the FontFamily objects. In simple terms, it allows you to return a list of all installed font families (names) on your PC. Here is a small example:
Dim ff As FontFamily
For Each ff In System.Drawing.FontFamily.Families
    ListBox1.Items.Add(ff.Name)
Next
For more information on the Font classes, read this article.
Graphics.DrawRectangle
This function draws a rectangle specified by a Rectangle structure. By knowing this method, you will be able to ease into the other drawing methods, such as DrawEllipse and DrawLine. This should open the door to the world of graphics programming. Here is a tiny example:
Public Sub DrawSomething(ByVal e As PaintEventArgs)
    ' Create pen.
    Dim BluePen As New Pen(Color.Blue, 3)
    ' Create rectangle.
    Dim rect As New Rectangle(0, 0, 200, 200)
    ' Draw rectangle to screen.
    e.Graphics.DrawRectangle(BluePen, rect)
End Sub
Here is more information on how to draw various shapes and utilize the Graphics methods properly.
RegistryKey.OpenSubKey
Retrieves the specified subkey. Here is a small example:
' Obtain the test key in one step, using the
' CurrentUser registry
Dim rkTest As RegistryKey = Registry.CurrentUser.OpenSubKey _
    ("RegistryOpenSubKeyExample")
Console.WriteLine("Test key: {0}", rkTest)
rkTest.Close()
Here is more information on the Registry classes.
String.IndexOf
The IndexOf method returns the zero-based index of the first occurrence of a specified character or string within a string. Here is a small example:
Dim MainString As String = "ABCDE"
Dim Result As Integer
' Result = 3
Result = MainString.IndexOf("D")
MessageBox.Show("D is Character number: " & _
    Result.ToString)
The preceding code segment creates a string object named MainString and holds "ABCDE" inside it. I created an Integer variable to hold the location of the desired character. I then used Indexof to determine where D is inside the string. D in this case, is character 3. A = 0. B =1. C = 2. D = 3.
Conclusion
Although I have highlighted the preceding functions and classes, there are countless other functions out there. This list should, however, give you a solid foundation for getting to know them.
https://www.codeguru.com/columns/vb/common-library-functions-in-visual-basic.html
|
Building a Blog Application
Django is a powerful Python web framework with a relatively shallow learning curve. You can easily build simple web applications in a short time. Django is also a robust and scalable framework that can be used to create large-scale web applications with complex requirements and integrations. This makes Django attractive for both beginners and expert programmers.
In this book, you will learn how to build complete Django projects that are ready for production use. If you haven't installed Django yet, you will discover how to do so in the first part of this chapter.
This chapter covers how to create a simple blog application using Django. The chapter's purpose is to help you to get a general idea of how the framework works, an understanding of how the different components interact with each other, and the skills to easily create Django projects with basic functionality. You will be guided through the creation of a complete project, but I will go into more detail on this later. The different framework components will be explored in detail throughout this book.
This chapter will cover the following topics:
- Installing Django
- Creating and configuring a Django project
- Creating a Django application
- Designing models and generating model migrations
- Creating an administration site for your models
- Working with QuerySets and managers
- Building views, templates, and URLs
- Adding pagination to list views
- Using Django's class-based views
Installing Django
If you have already installed Django, you can skip this section and jump directly to the Creating your first project section. Django comes as a Python package and thus can be installed in any Python environment. If you haven't installed Django yet, the following is a quick guide to installing it for local development.
Django 3 continues the path of providing new features while maintaining the core functionalities of the framework. The 3.0 release includes for the first time Asynchronous Server Gateway Interface (ASGI) support, which makes Django fully async-capable. Django 3.0 also includes official support for MariaDB, new exclusion constraints on PostgreSQL, filter expressions enhancements, and enumerations for model field choices, as well as other new features.
Django 3.0 supports Python 3.6, 3.7, and 3.8. In the examples in this book, we will use Python 3.8.2. If you're using Linux or macOS, you probably have Python installed. If you're using Windows, you can download a Python installer at.
If you're not sure whether Python is installed on your computer, you can verify this by typing
python into the shell. If you see something like the following, then Python is installed on your computer:
Python 3.8.2 (v3.8.2:7b3ab5921f, Feb 24 2020, 17:52:18)
[Clang 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
If your installed Python version is lower than 3.6, or if Python is not installed on your computer, download Python 3.8.2 from and install it.
Since you will be using Python 3, you don't have to install a database. This Python version comes with a built-in SQLite database. SQLite is a lightweight database that you can use with Django for development. If you plan to deploy your application in a production environment, you should use a full-featured database, such as PostgreSQL, MySQL, or Oracle. You can find more information about how to get your database running with Django at.
Creating an isolated Python environment
Since version 3.3, Python has come with the
venv library, which provides support for creating lightweight virtual environments. Each virtual environment has its own Python binary and can have its own independent set of installed Python packages in its site directories. Using the Python
venv module to create isolated Python environments allows you to use different package versions for different projects, which is far more practical than installing Python packages system-wide. Another advantage of using
venv is that you won't need any administration privileges to install Python packages.
Create an isolated environment with the following command:
python -m venv my_env
This will create a
my_env/ directory, including your Python environment. Any Python libraries you install while your virtual environment is active will go into the
my_env/lib/python3.8/site-packages directory.
Run the following command to activate your virtual environment:
source my_env/bin/activate
The shell prompt will include the name of the active virtual environment enclosed in parentheses, as follows:
(my_env)laptop:~ zenx$
You can deactivate your environment at any time with the
deactivate command. You can find more information about
venv at.
Installing Django with pip
The
pip package management system is the preferred method for installing Django. Python 3.8 comes with
pip preinstalled, but you can find
pip installation instructions at.
Run the following command at the shell prompt to install Django with
pip:
pip install "Django==3.0.*"
Django will be installed in the Python
site-packages/ directory of your virtual environment.
Now check whether Django has been successfully installed. Run
python on a terminal, import Django, and check its version, as follows:
>>> import django
>>> django.get_version()
'3.0.4'
If you get an output like
3.0.X, Django has been successfully installed on your machine.
Django can be installed in several other ways. You can find a complete installation guide at.
Creating your first project
Our first Django project will be a complete blog. Django provides a command that allows you to create an initial project file structure. Run the following command from your shell:
django-admin startproject mysite
This will create a Django project with the name
mysite.
Avoid naming projects after built-in Python or Django modules in order to avoid conflicts.
Let's take a look at the project structure generated:
mysite/
    manage.py
    mysite/
        __init__.py
        asgi.py
        wsgi.py
        settings.py
        urls.py
These files are as follows:
manage.py: This is a command-line utility used to interact with your project. It is a thin wrapper around the django-admin.py tool. You don't need to edit this file.
mysite/: This is your project directory, which consists of the following files:
__init__.py: An empty file that tells Python to treat the mysite directory as a Python module.
asgi.py: This is the configuration to run your project as ASGI, the emerging Python standard for asynchronous web servers and applications.
settings.py: This indicates settings and configuration for your project and contains initial default settings.
urls.py: This is the place where your URL patterns live. Each URL defined here is mapped to a view.
wsgi.py: This is the configuration to run your project as a Web Server Gateway Interface (WSGI) application.
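To make the role of urls.py more concrete, here is a minimal, hypothetical sketch of a URL pattern mapped to a view; the hello view and the pattern are invented for illustration and are not part of the generated project:

```python
# Hypothetical mysite/urls.py sketch: each URL pattern maps to a view.
from django.http import HttpResponse
from django.urls import path

def hello(request):
    # A trivial view: takes an HTTP request, returns an HTTP response.
    return HttpResponse('Hello, Django!')

urlpatterns = [
    # Requests to /hello/ are routed to the hello view.
    path('hello/', hello),
]
```

Real projects usually import views from their applications rather than defining them inline, as you will see later in this chapter.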
The generated
settings.py file contains the project settings, including a basic configuration to use an SQLite3 database and a list named
INSTALLED_APPS that contains common Django applications that are added to your project by default. We will go through these applications later in the Project settings section.
Django applications contain a
models.py file where data models are defined. Each data model is mapped to a database table. To complete the project setup, you need to create the tables associated with the models of the applications listed in
INSTALLED_APPS. Django includes a migration system that manages this.
Open the shell and run the following commands:
cd mysite
python manage.py migrate
You will note an output that ends with the following lines:
The preceding lines are the database migrations that are applied by Django. By applying migrations, the tables for the initial applications are created in the database. You will learn about the
migrate management command in the Creating and applying migrations section of this chapter.
The development server reloads automatically whenever it detects changes in your code. However, it might not notice some actions, such as adding new files to your project, so you will have to restart the server manually in these cases.
Start the development server by typing the following command from your project's root folder:
python manage.py runserver
You should see something like this:
Watching for file changes with StatReloader
Performing system checks...

System check identified no issues (0 silenced).
January 01, 2020 - 10:00:00
Django version 3.0, using settings 'mysite.settings'
Starting development server at
Quit the server with CONTROL-C.
Now open in your browser. You should see a page stating that the project is successfully running, as shown in the following screenshot:
Figure 1.1: The default page of the Django development server
The preceding screenshot indicates that Django is running. If you take a look at your console, you will see the
GET request performed by your browser:
[01/Jan/2020 17:20:30]
When you have to deal with multiple environments that require different configurations, you can create a different settings file for each environment.
Remember that this server is only intended for development and is not suitable for production use.
Chapter 14, Going Live, explains how to set up a production environment for your Django projects.
Project settings
Let's open the
settings.py file and take a look at the configuration of the project. There are several settings that Django includes in this file, but these are only part of all the Django settings available. You can see all the settings and their default values at.
The following settings are worth looking at:
- DEBUG is a Boolean that turns the debug mode of the project on and off. If it is set to True, Django will display detailed error pages when an uncaught exception is thrown by your application. When you move to a production environment, remember that you have to set it to False. Never deploy a site into production with DEBUG turned on because you will expose sensitive project-related data.
- ALLOWED_HOSTS is not applied while debug mode is on or when the tests are run. Once you move your site to production and set DEBUG to False, you will have to add your domain/host to this setting in order to allow it to serve your Django site.
- INSTALLED_APPS is a setting you will have to edit for all projects. This setting tells Django which applications are active for this site. By default, Django includes the following applications:
django.contrib.admin: An administration site
django.contrib.auth: An authentication framework
django.contrib.contenttypes: A framework for handling content types
django.contrib.sessions: A session framework
django.contrib.messages: A messaging framework
django.contrib.staticfiles: A framework for managing static files
- MIDDLEWARE is a list that contains middleware to be executed.
- ROOT_URLCONF indicates the Python module where the root URL patterns of your application are defined.
- DATABASES is a dictionary that contains the settings for all the databases to be used in the project. There must always be a default database. The default configuration uses an SQLite3 database.
- LANGUAGE_CODE defines the default language code for this Django site.
- USE_TZ tells Django to activate/deactivate timezone support. Django comes with support for timezone-aware datetimes. This setting is set to True when you create a new project using the startproject management command.
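Pulling a few of these together, a minimal excerpt of a settings.py file might look like the following; the values shown are illustrative defaults, not a complete settings file:

```python
# Illustrative excerpt of mysite/settings.py (not the complete file).
DEBUG = True  # turn off in production

ALLOWED_HOSTS = []  # add your domain/host here once DEBUG is False

ROOT_URLCONF = 'mysite.urls'

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'db.sqlite3',
    }
}

LANGUAGE_CODE = 'en-us'
USE_TZ = True
```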
Don't worry if you don't understand much about what you're seeing here. You will learn the different Django settings in the following chapters.
Projects and applications
Throughout this book, you will encounter the terms project and application over and over. In Django, a project is considered a Django installation with some settings. An application is a group of models, views, templates, and URLs. Applications interact with the framework to provide some specific functionalities and may be reused in various projects. You can think of a project as your website, which contains several applications, such as a blog, wiki, or forum, that can also be used by other projects.
The following diagram shows the structure of a Django project:
Figure 1.2: The Django project/application structure
Creating an application
Now let's create your first Django application. You will create a blog application from scratch. From the project's root directory, run the following command:
python manage.py startapp blog
This will create the basic structure of the application, which looks like this:
blog/
    __init__.py
    admin.py
    apps.py
    migrations/
        __init__.py
    models.py
    tests.py
    views.py
These files are as follows:
admin.py: This is where you register models to include them in the Django administration site—using this site is optional.
apps.py: This includes the main configuration of the blog application.
migrations: This directory will contain database migrations of your application. Migrations allow Django to track your model changes and synchronize the database accordingly.
models.py: This includes the data models of your application.
Designing the blog data schema
You will start designing your blog data schema by defining the data models for your blog. A model is a Python class that subclasses
django.db.models.Model in which each attribute represents a database field. Django will create a table for each model defined in the
models.py file. When you create a model, Django will provide you with a practical API to query objects in the database easily.
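To make the model-to-table mapping concrete before defining the real model, here is a standalone sketch in plain Python with sqlite3 (not Django) that turns a set of field definitions into a database table; the field names and column types are invented for illustration, and this is roughly the kind of work the ORM automates for you:

```python
import sqlite3

# A simplified stand-in for a model: each attribute maps to a column.
fields = {
    "title": "VARCHAR(250) NOT NULL",
    "body": "TEXT NOT NULL",
    "publish": "DATETIME NOT NULL",
}

# Build a CREATE TABLE statement from the field definitions.
columns = ", ".join(f'"{name}" {ctype}' for name, ctype in fields.items())
sql = ('CREATE TABLE "blog_post" '
       f'("id" INTEGER PRIMARY KEY AUTOINCREMENT, {columns})')

conn = sqlite3.connect(":memory:")
conn.execute(sql)
conn.execute(
    'INSERT INTO blog_post (title, body, publish) VALUES (?, ?, ?)',
    ("Hello", "First post", "2020-01-01 10:00:00"),
)
row = conn.execute("SELECT title FROM blog_post").fetchone()
print(row[0])  # Hello
```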
First, you need to define a
Post model. Add the following lines to the
models.py file of the
blog application:
from django.db import models
from django.utils import timezone
from django.contrib.auth.models import User

class Post(models.Model):
    STATUS_CHOICES = (
        ('draft', 'Draft'),
        ('published', 'Published'),
    )
    title = models.CharField(max_length=250)
    slug = models.SlugField(max_length=250,
                            unique_for_date='publish')
    author = models.ForeignKey(User,
                               on_delete=models.CASCADE,
                               related_name='blog_posts')
    body = models.TextField()
    publish = models.DateTimeField(default=timezone.now)
    created = models.DateTimeField(auto_now_add=True)
    updated = models.DateTimeField(auto_now=True)
    status = models.CharField(max_length=10,
                              choices=STATUS_CHOICES,
                              default='draft')

    class Meta:
        ordering = ('-publish',)

    def __str__(self):
        return self.title

This is your data model for blog posts. Let's take a look at the fields you just defined for this model:
title: This is the field for the post title. This field is CharField, which translates into a VARCHAR column in the SQL database.
slug: This is a field intended to be used in URLs. A slug is a short label that contains only letters, numbers, underscores, or hyphens. You will use the slug field to build beautiful, SEO-friendly URLs for your blog posts. You have added the unique_for_date parameter to this field so that you can build URLs for posts using their publish date and slug. Django will prevent multiple posts from having the same slug for a given date.
author: This field defines a many-to-one relationship, meaning that each post is written by a user, and a user can write any number of posts. For this field, Django will create a foreign key in the database using the primary key of the related model. In this case, you are relying on the User model of the Django authentication system. The on_delete parameter specifies the behavior to adopt when the referenced object is deleted. This is not specific to Django; it is an SQL standard. Using CASCADE, you specify that when the referenced user is deleted, the database will also delete all related blog posts. You can take a look at all the possible options at. You specify the name of the reverse relationship, from User to Post, with the related_name attribute. This will allow you to access related objects easily. You will learn more about this later.
body: This is the body of the post. This field is a text field that translates into a TEXT column in the SQL database.
publish: This datetime indicates when the post was published. You use Django's timezone now method as the default value. This returns the current datetime in a timezone-aware format. You can think of it as a timezone-aware version of the standard Python datetime.now method.
created: This datetime indicates when the post was created. Since you are using auto_now_add here, the date will be saved automatically when creating an object.
updated: This datetime indicates the last time the post was updated. Since you are using auto_now here, the date will be updated automatically when saving an object.
status: This field shows the status of a post. You use a choices parameter, so the value of this field can only be set to one of the given choices.
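As a rough, standalone illustration of what a slug is, the following sketch builds one from a title. Django ships its own django.utils.text.slugify; this simplified version exists only to show the idea:

```python
import re

def make_slug(text):
    # Lowercase, drop characters that are not word chars/spaces/hyphens,
    # then collapse runs of spaces and hyphens into single hyphens.
    text = text.lower().strip()
    text = re.sub(r"[^\w\s-]", "", text)
    return re.sub(r"[-\s]+", "-", text)

print(make_slug("Who was Django Reinhardt?"))  # who-was-django-reinhardt
```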
Django comes with different types of fields that you can use to define your models. You can find all field types at.
The
Meta class inside the model contains metadata. You tell Django to sort results by the
publish field in descending order by default when you query the database. You specify the descending order using the negative prefix. By doing this, posts published recently will appear first.
The
__str__() method is the default human-readable representation of the object. Django will use it in many places, such as the administration site.
If you are coming from using Python 2.x, note that in Python 3, all strings are natively considered Unicode; therefore, we only use the
__str__() method and the
__unicode__() method is obsolete.
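The effect of defining __str__() can be seen in plain Python, independent of Django; this toy class is only for illustration:

```python
class Post:
    def __init__(self, title):
        self.title = title

    def __str__(self):
        # The human-readable representation used by print() and str().
        return self.title

post = Post("Who was Django Reinhardt?")
print(post)       # Who was Django Reinhardt?
print(str(post))  # Who was Django Reinhardt?
```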
Activating the application
In order for Django to keep track of your application and be able to create database tables for its models, you have to activate it. To do this, edit the
settings.py file and add
blog.apps.BlogConfig to the
INSTALLED_APPS setting. It should look like this:
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'blog.apps.BlogConfig',
]
The
BlogConfig class is your application configuration. Now Django knows that your application is active for this project and will be able to load its models.
Creating and applying migrations
Now that you have a data model for your blog posts, you will need a database table for it. Django comes with a migration system that tracks the changes made to models and enables them to propagate into the database. As mentioned, the
migrate command applies migrations for all applications listed in
INSTALLED_APPS; it synchronizes the database with the current models and existing migrations.
First, you will need to create an initial migration for your
Post model. In the root directory of your project, run the following command:
python manage.py makemigrations blog
You should get the following output:
Migrations for 'blog':
  blog/migrations/0001_initial.py
    - Create model Post
Django just created the
0001_initial.py file inside the
migrations directory of the
blog application. You can open that file to see how a migration appears. A migration specifies dependencies on other migrations and operations to perform in the database to synchronize it with model changes.
Let's take a look at the SQL code that Django will execute in the database to create the table for your model. The
sqlmigrate command takes the migration names and returns their SQL without executing it. Run the following command to inspect the SQL output of your first migration:
python manage.py sqlmigrate blog 0001
The output should look as follows:
BEGIN;
--
-- Create model Post
--
CREATE TABLE "blog_post" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "title" varchar(250) NOT NULL, "slug" varchar(250) NOT NULL, "body" text NOT NULL, "publish" datetime NOT NULL, "created" datetime NOT NULL, "updated" datetime NOT NULL, "status" varchar(10) NOT NULL, "author_id" integer NOT NULL REFERENCES "auth_user" ("id") DEFERRABLE INITIALLY DEFERRED);
CREATE INDEX "blog_post_slug_b95473f2" ON "blog_post" ("slug");
CREATE INDEX "blog_post_author_id_dd7a8485" ON "blog_post" ("author_id");
COMMIT;
The exact output depends on the database you are using. The preceding output is generated for SQLite. As you can see in the output, Django generates the table names by combining the application name and the lowercase name of the model (
blog_post), but you can also specify a custom database name for your model in the
Meta class of the model using the
db_table attribute.
Django creates a primary key automatically for each model, but you can also override this by specifying
primary_key=True in one of your model fields. The default primary key is an
id column, which consists of an integer that is incremented automatically. This column corresponds to the
id field that is automatically added to your models.
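The behavior of the automatic id column can be demonstrated outside Django with a plain sqlite3 table; the table name mirrors the one Django generates, but this snippet is illustrative only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    'CREATE TABLE "blog_post" '
    '("id" INTEGER PRIMARY KEY AUTOINCREMENT, "title" TEXT NOT NULL)'
)
# No id is supplied; SQLite assigns incrementing integers automatically.
conn.execute('INSERT INTO blog_post (title) VALUES (?)', ("First post",))
conn.execute('INSERT INTO blog_post (title) VALUES (?)', ("Second post",))
rows = conn.execute("SELECT id, title FROM blog_post").fetchall()
print(rows)  # [(1, 'First post'), (2, 'Second post')]
```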
Let's sync your database with the new model. Run the following command to apply existing migrations:
python manage.py migrate
You will get an output that ends with the following line:
Applying blog.0001_initial... OK
You just applied migrations for the applications listed in
INSTALLED_APPS, including your
blog application. After applying the migrations, the database reflects the current status of your models.
If you edit the
models.py file in order to add, remove, or change the fields of existing models, or if you add new models, you will have to create a new migration using the
makemigrations command. The migration will allow Django to keep track of model changes. Then, you will have to apply it with the
migrate command to keep the database in sync with your models.
Creating an administration site for models
Now that you have defined the
Post model, you will create a simple administration site to manage your blog posts. Django comes with a built-in administration interface that is very useful for editing content. The administration site is built dynamically by reading your model metadata and providing a production-ready interface for editing content. You can use it out of the box, configuring how you want your models to be displayed in it.
The
django.contrib.admin application is already included in the
INSTALLED_APPS setting, so you don't need to add it.
Creating a superuser
First, you will need to create a user to manage the administration site. Run the following command:
python manage.py createsuperuser
You will see the following output; enter your desired username, email, and password, as follows:
Username (leave blank to use 'admin'): admin
Email address: admin@admin.com
Password: ********
Password (again): ********
Superuser created successfully.
The Django administration site
Now start the development server with the
python manage.py runserver command and open in your browser. You should see the administration login page, as shown in the following screenshot:
Figure 1.3: The Django administration site login screen
Log in using the credentials of the user you created in the preceding step. You will see the administration site index page, as shown in the following screenshot:
Figure 1.4: The Django administration site index page
The
Group and
User models that you can see in the preceding screenshot are part of the Django authentication framework located in
django.contrib.auth. If you click on Users, you will see the user you created previously.
Adding models to the administration site
Let's add your blog models to the administration site. Edit the
admin.py file of the
blog application and make it look like this:
from django.contrib import admin
from .models import Post

admin.site.register(Post)
Now reload the administration site in your browser. You should see your
Post model on the site, as follows:
Figure 1.5: The Post model of the blog application included in the Django administration site index page
That was easy, right? When you register a model in the Django administration site, you get a user-friendly interface generated by introspecting your models that allows you to list, edit, create, and delete objects in a simple way.
Click on the Add link beside Posts to add a new post. You will note the form that Django has generated dynamically for your model, as shown in the following screenshot:
Figure 1.6: The Django administration site edit form for the Post model
Django uses different form widgets for each type of field. Even complex fields, such as the
DateTimeField, are displayed with an easy interface, such as a JavaScript date picker.
Fill in the form and click on the SAVE button. You should be redirected to the post list page with a success message and the post you just created, as shown in the following screenshot:
Figure 1.7: The Django administration site list view for the Post model with an added successfully message
Customizing the way that models are displayed
Now, we will take a look at how to customize the administration site. Edit the
admin.py file of your
blog application and change it, as follows:
from django.contrib import admin
from .models import Post

@admin.register(Post)
class PostAdmin(admin.ModelAdmin):
    list_display = ('title', 'slug', 'author', 'publish', 'status')
You are telling the Django administration site that your model is registered in the site using a custom class that inherits from
ModelAdmin. In this class, you can include information about how to display the model in the site and how to interact with it.
The
list_display attribute allows you to set the fields of your model that you want to display on the administration object list page. The
@admin.register() decorator performs the same function as the
admin.site.register() function that you replaced, registering the
ModelAdmin class that it decorates.
Return to your browser and reload the post list page. Now, it will look like this:
Figure 1.8: The Django administration site custom list view for the Post model
You have defined a list of searchable fields using the
search_fields attribute. Just below the search bar, there are navigation links to navigate through a date hierarchy; this has been defined by the
date_hierarchy attribute. You can also see that the posts are ordered by STATUS and PUBLISH columns by default. You have specified the default sorting criteria using the
ordering attribute.
Next, click on the ADD POST link. You will also note some changes here. As you type the title of a new post, the
slug field is filled in automatically. You have told Django to prepopulate the
slug field with the input of the
title field using the
prepopulated_fields attribute.
Also, the
author field is now displayed with a lookup widget that can scale much better than a drop-down select input when you have thousands of users. This is achieved with the
raw_id_fields attribute and it looks like this:
Figure 1.9: The widget to select related objects for the author field of the Post model
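The admin configuration that produces all of this behavior is not shown in this extract. Pulling together the attributes named in the surrounding prose, it might look like the sketch below. This is a reconstruction, not the book's verbatim code: the ModelAdmin base class here is a stand-in so the snippet runs outside a Django project (in real code you would subclass django.contrib.admin.ModelAdmin and apply @admin.register(Post)), and the search_fields list is an assumption since the prose does not name the searchable fields.

```python
class ModelAdmin:
    """Stand-in for django.contrib.admin.ModelAdmin (sketch only)."""

class PostAdmin(ModelAdmin):
    # Columns shown on the change list page
    list_display = ('title', 'slug', 'author', 'publish', 'status')
    # Fields covered by the search bar (assumed field names)
    search_fields = ('title', 'body')
    # Auto-fill slug from title as the user types
    prepopulated_fields = {'slug': ('title',)}
    # Use a lookup widget instead of a drop-down for the author
    raw_id_fields = ('author',)
    # Date drill-down navigation below the search bar
    date_hierarchy = 'publish'
    # Default sorting: STATUS, then PUBLISH
    ordering = ('status', 'publish')
```

In a real project only the decorator and base class change; the attribute names are the documented ModelAdmin options the text describes.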
With a few lines of code, you have customized the way your model is displayed on the administration site. There are plenty of ways to customize and extend the Django administration site; you will learn more about this later in this book.
Working with QuerySets and managers
The Django ORM supports MySQL, PostgreSQL, SQLite, Oracle, and MariaDB. Remember that you can define the database of your project in the
DATABASES setting of your project's
settings.py file. Django can work with multiple databases at a time, and you can program database routers to create custom routing schemes.
Once you have created your data models, Django gives you a free API to interact with them. You can find the data model reference in the official Django documentation.
The Django ORM is based on QuerySets. A QuerySet is a collection of database queries to retrieve objects from your database. You can apply filters to QuerySets to narrow down the query results based on given parameters.
Creating objects
Open the terminal and run the following command to open the Python shell:
python manage.py shell
Then, type the following lines:
>>> from django.contrib.auth.models import User >>> from blog.models import Post >>> user = User.objects.get(username='admin') >>> post = Post(title='Another post', ... slug='another-post', ... body='Post body.', ... author=user) >>> post.save()
Let's analyze what this code does. First, you retrieve the
user object with the username
admin:
user = User.objects.get(username='admin')
The
get() method allows you to retrieve a single object from the database. Note that this method expects a query that matches exactly one object. Then, you create a
Post instance with a custom title, slug, and body, and set the user that you previously retrieved as the author of the post:
post = Post(title='Another post',
            slug='another-post',
            body='Post body.',
            author=user)
This object is in memory and is not persisted to the database.
Finally, you save the
Post object to the database using the
save() method:
post.save()
The preceding action performs an
INSERT SQL statement behind the scenes. You have seen how to create an object in memory first and then persist it to the database, but you can also create the object and persist it into the database in a single operation using the
create() method, as follows:
Post.objects.create(title='One more post',
                    slug='one-more-post',
                    body='Post body.',
                    author=user)
Updating objects
Now, change the title of the post to something different and save the object again:
>>> post.title = 'New title'
>>> post.save()
This time, the
save() method performs an
UPDATE SQL statement.
The changes you make to the object are not persisted to the database until you call the
save() method.
Retrieving objects
You already know how to retrieve a single object from the database using the
get() method. You accessed this method using
Post.objects.get(). Each Django model has at least one manager, and the default manager is called
objects. You get a
QuerySet object using your model manager. To retrieve all objects from a table, you just use the
all() method on the default objects manager, like this:
>>> all_posts = Post.objects.all()
This is how you create a QuerySet that returns all objects in the database. Note that this QuerySet has not been executed yet. Django QuerySets are lazy, which means they are only evaluated when they are forced to be. This behavior makes QuerySets very efficient. If you don't assign the QuerySet to a variable, but instead write it directly on the Python shell, the SQL statement of the QuerySet is executed because you force it to output results:
>>> all_posts
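The laziness described above can be illustrated with a small framework-independent sketch. LazyQuery is a made-up class, not Django's implementation: it only composes filters until something iterates over it, which is when a real QuerySet would hit the database.

```python
class LazyQuery:
    """Toy model of QuerySet laziness: filters are merely composed,
    and nothing is evaluated until the object is iterated."""

    def __init__(self, rows, predicate=lambda r: True):
        self._rows = rows
        self._predicate = predicate
        self.evaluated = False

    def filter(self, test):
        # Return a new, still-unevaluated query with the combined predicate.
        prev = self._predicate
        return LazyQuery(self._rows, lambda r: prev(r) and test(r))

    def __iter__(self):
        # This is where a real QuerySet would translate to SQL and run it.
        self.evaluated = True
        return iter([r for r in self._rows if self._predicate(r)])

query = LazyQuery([1, 2, 3, 4]).filter(lambda n: n % 2 == 0)
assert query.evaluated is False   # building the query does no work
assert list(query) == [2, 4]      # iterating forces evaluation
assert query.evaluated is True
```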
Using the filter() method
To filter a QuerySet, you can use the
filter() method of the manager. For example, you can retrieve all posts published in the year 2020 using the following QuerySet:
>>> Post.objects.filter(publish__year=2020)
You can also filter by multiple fields. For example, you can retrieve all posts published in 2020 by the author with the username
admin:
>>> Post.objects.filter(publish__year=2020, author__username='admin')
This equates to building the same QuerySet chaining multiple filters:
>>> Post.objects.filter(publish__year=2020) \
...             .filter(author__username='admin')
Queries with field lookup methods are built using two underscores, for example,
publish__year, but the same notation is also used for accessing fields of related models, such as
author__username.
Using exclude()
You can exclude certain results from your QuerySet using the
exclude() method of the manager. For example, you can retrieve all posts published in 2020 whose titles don't start with
Why:
>>> Post.objects.filter(publish__year=2020) \
...             .exclude(title__startswith='Why')
Using order_by()
You can order results by different fields using the
order_by() method of the manager. For example, you can retrieve all objects ordered by their
title, as follows:
>>> Post.objects.order_by('title')
Ascending order is implied. You can indicate descending order with a negative sign prefix, like this:
>>> Post.objects.order_by('-title')
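The '-' prefix convention can be mimicked in a few lines of plain Python. This is an illustration of the convention only, not Django's code, and the rows here are dicts rather than model instances.

```python
def order_by(rows, field):
    # A leading '-' means descending order, mirroring Django's convention.
    descending = field.startswith('-')
    key = field.lstrip('-')
    return sorted(rows, key=lambda row: row[key], reverse=descending)

posts = [{'title': 'Who was Django Reinhardt'},
         {'title': 'Another post'}]

assert order_by(posts, 'title')[0]['title'] == 'Another post'
assert order_by(posts, '-title')[0]['title'] == 'Who was Django Reinhardt'
```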
Deleting objects
If you want to delete an object, you can do it from the object instance using the
delete() method:
>>> post = Post.objects.get(id=1)
>>> post.delete()
Note that deleting objects will also delete any dependent relationships for
ForeignKey objects defined with
on_delete set to
CASCADE.
When QuerySets are evaluated
Creating a QuerySet doesn't involve any database activity until it is evaluated. QuerySet methods usually return another unevaluated QuerySet. You can concatenate as many filters as you like to a QuerySet, and you will not hit the database until the QuerySet is evaluated. When a QuerySet is evaluated, it translates into an SQL query to the database. QuerySets are only evaluated in the following cases: the first time you iterate over them; when you slice them; when you pickle or cache them; when you call repr() or len() on them; when you explicitly call list() on them; and when you test them in a statement, such as bool(), or, and, or if.
Creating model managers
As I previously mentioned,
objects is the default manager of every model that retrieves all objects in the database. However, you can also define custom managers for your models. You will create a custom manager to retrieve all posts with the
published status.
There are two ways to add or customize managers for your models: you can add extra manager methods to an existing manager, or create a new manager by modifying the initial QuerySet that the manager returns. The first method provides you with a QuerySet API such as
Post.objects.my_manager(), and the latter provides you with
Post.my_manager.all(). The manager will allow you to retrieve posts using
Post.published.all().
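The manager code that the following paragraphs refer to is missing from this extract. The sketch below captures the idea in framework-independent Python: Manager is a stand-in for django.db.models.Manager, and the rows are plain dicts standing in for database records (a real Django manager does not hold rows itself; it builds QuerySets).

```python
class Manager:
    """Stand-in for django.db.models.Manager (sketch only)."""
    _rows = [{'title': 'Draft post', 'status': 'draft'},
             {'title': 'Live post', 'status': 'published'}]

    def get_queryset(self):
        return list(self._rows)

class PublishedManager(Manager):
    def get_queryset(self):
        # Override get_queryset() to narrow the base queryset,
        # exactly the hook a custom Django manager overrides.
        return [row for row in super().get_queryset()
                if row['status'] == 'published']

class Post:
    objects = Manager()             # the default manager
    published = PublishedManager()  # the custom manager

assert len(Post.objects.get_queryset()) == 2
assert [p['title'] for p in Post.published.get_queryset()] == ['Live post']
```

In a real project the equivalent is a PublishedManager(models.Manager) subclass whose get_queryset() returns super().get_queryset().filter(status='published'), attached to the model alongside objects = models.Manager().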
The first manager declared in a model becomes the default manager. You can use the
Meta attribute
default_manager_name to specify a different default manager. If no manager is defined in the model, Django automatically creates the
objects default manager for it. If you declare any managers for your model but you want to keep the
objects manager as well, you have to add it explicitly to your model. In the preceding code, you add the default
objects manager and the
published custom manager to the
Post model.
The
get_queryset() method of a manager returns the QuerySet that will be executed. You override this method to include your custom filter in the final QuerySet.
You have now defined your custom manager and added it to the
Post model; you can use it to perform queries. Let's test it.
Open the Python shell again with the following command:
python manage.py shell
Now, you can import the
Post model and retrieve all published posts whose title starts with
Who, executing the following QuerySet:
>>> from blog.models import Post
>>> Post.published.filter(title__startswith='Who')
To obtain results for this QuerySet, make sure that the status field is set to published in a Post object whose title starts with Who.
Building list and detail views
Now that you have knowledge of how to use the ORM, you are ready to build the views of the
blog application. A Django view is just a Python function that receives a web request and returns a web response. All the logic to return the desired response goes inside the view.
First, you will create your application views, then you will define a URL pattern for each view, and finally, you will create HTML templates to render the data generated by the views. Each view takes a request parameter, which is required by all views. In this view, you retrieve all the posts with the
published status using the
published manager that you created previously.
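The post_list view being described is missing from this extract. A reconstruction follows; render() here is a stand-in for django.shortcuts.render so the sketch runs without Django, and the post titles are invented sample data.

```python
def render(request, template_name, context):
    """Stand-in for django.shortcuts.render: just report what
    would be rendered instead of producing an HttpResponse."""
    return {'template': template_name, 'context': context}

class _PublishedManager:
    def all(self):
        # Stands in for the custom manager's published-only queryset.
        return ['First post', 'Second post']

class Post:
    published = _PublishedManager()

def post_list(request):
    # The view: take a request, fetch published posts, render a template.
    posts = Post.published.all()
    return render(request, 'blog/post/list.html', {'posts': posts})

response = post_list(request=None)
assert response['template'] == 'blog/post/list.html'
assert response['context']['posts'] == ['First post', 'Second post']
```

The real view is the same three lines inside post_list(), with render imported from django.shortcuts and Post from the blog application's models.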
Finally, you use the
render() shortcut provided by Django to render the list of posts with the given template. This function takes the
request object, the template path, and the context variables to render the given template. It returns an
HttpResponse object with the rendered text (normally HTML code). The
render() shortcut takes the request context into account, so any variable set by the template context processors is accessible by the given template. Let's create a second view to display a single post. Add the following function to your views.py file:
def post_detail(request, year, month, day, post):
    post = get_object_or_404(Post, slug=post,
                                   status='published',
                                   publish__year=year,
                                   publish__month=month,
                                   publish__day=day)
    return render(request,
                  'blog/post/detail.html',
                  {'post': post})
This is the post detail view. This view takes the
year,
month,
day, and
post arguments to retrieve a published post with the given slug and date. Note that when you created the
Post model, you added the
unique_for_date parameter to the
slug field. This ensures that there will be only one post with a slug for a given date, and thus, you can retrieve single posts using the date and slug. In the detail view, you use the
get_object_or_404() shortcut to retrieve the desired post. This function retrieves the object that matches the given parameters, or raises an HTTP 404 (not found) exception if no object is found. Finally, you use the render() shortcut to render the retrieved post using a template.
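The behaviour of the shortcut can be sketched without Django. This is not Django's implementation: Http404 stands in for django.http.Http404, and the objects are dicts rather than model instances queried from a database.

```python
class Http404(Exception):
    """Stand-in for django.http.Http404."""

def get_object_or_404(objects, **filters):
    # Return the first object matching every filter, or raise an
    # HTTP-404-style error, mirroring the shortcut described above.
    for obj in objects:
        if all(obj.get(field) == value for field, value in filters.items()):
            return obj
    raise Http404('No object matches the given query.')

posts = [{'slug': 'another-post', 'status': 'published'}]

assert get_object_or_404(posts, slug='another-post')['status'] == 'published'
try:
    get_object_or_404(posts, slug='missing')
except Http404:
    pass  # a real Django view would return a 404 response here
```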
Create a
urls.py file in the directory of the
blog application and add the following lines to it:
from django.urls import path
from . import views

app_name = 'blog'

urlpatterns = [
    # post views
    path('', views.post_list, name='post_list'),
    path('<int:year>/<int:month>/<int:day>/<slug:post>/',
         views.post_detail,
         name='post_detail'),
]
In the preceding code, you define an application namespace with the
app_name variable. This allows you to organize URLs by application and use the name when referring to them. You define two different patterns using the
path() function. The first URL pattern doesn't take any arguments and is mapped to the
post_list view. The second pattern takes the following four arguments and is mapped to the
post_detail view:
year: Requires an integer
month: Requires an integer
day: Requires an integer
post: Can be composed of words and hyphens
You use angle brackets to capture the values from the URL. Any value specified in the URL pattern as
<parameter> is captured as a string. You use path converters, such as
<int:year>, to specifically match and return an integer and
<slug:post> to specifically match a slug. You can see all path converters provided by Django in the official documentation.
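The int and slug converters used above correspond to simple regular expressions; Django's actual patterns are [0-9]+ for int and [-a-zA-Z0-9_]+ for slug. A quick check of what each one accepts:

```python
import re

# Rough equivalents of the path converters used in the URL pattern above.
INT_CONVERTER = re.compile(r'^[0-9]+$')
SLUG_CONVERTER = re.compile(r'^[-a-zA-Z0-9_]+$')

assert INT_CONVERTER.match('2020')
assert not INT_CONVERTER.match('2020x')       # letters are rejected
assert SLUG_CONVERTER.match('one-more-post')  # words and hyphens
assert not SLUG_CONVERTER.match('one more post')  # spaces are rejected
```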
If using
path() and converters isn't sufficient for you, you can use
re_path() instead to define complex URL patterns with Python regular expressions. You can learn more about defining URL patterns with regular expressions in the Django documentation. If you haven't worked with regular expressions before, you might want to take a look at Python's Regular Expression HOWTO first.
Creating a
urls.py file for each application is the best way to make your applications reusable by other projects.
Next, you have to include the URL patterns of the
blog application in the main URL patterns of the project.
Edit the
urls.py file located in the
mysite directory of your project and make it look like the following:
from django.urls import path, include
from django.contrib import admin

urlpatterns = [
    path('admin/', admin.site.urls),
    path('blog/', include('blog.urls', namespace='blog')),
]
The new URL pattern defined with
include refers to the URL patterns defined in the
blog application so that they are included under the
blog/ path. You include these patterns under the namespace
blog. Namespaces have to be unique across your entire project. Later, you will refer to your blog URLs easily by using the namespace followed by a colon and the URL name, for example,
blog:post_list and
blog:post_detail. You can learn more about URL namespaces in the Django documentation.
Canonical URLs for models
A canonical URL is the preferred URL for a resource. You may have different pages in your site where you display posts, but there is a single URL that you use as the main URL for a blog post. The convention in Django is to add a
get_absolute_url() method to the model that returns the canonical URL for the object.
You can use the
post_detail URL that you have defined in the preceding section to build the canonical URL for
Post objects. For this method, you will use the
reverse() method, which allows you to build URLs by their name and pass optional parameters. You can learn more about the URL utility functions in the Django documentation.
Edit the
models.py file of the
blog application and add the following code:
from django.urls import reverse

class Post(models.Model):
    # ...
    def get_absolute_url(self):
        return reverse('blog:post_detail',
                       args=[self.publish.year,
                             self.publish.month,
                             self.publish.day,
                             self.slug])
You will use the
get_absolute_url() method in your templates to link to specific posts.
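For a given publish date and slug, reverse() resolves the blog:post_detail pattern into a concrete path. A hand-built equivalent, assuming the blog URLs are included under the /blog/ prefix as shown in the project urls.py:

```python
def post_detail_url(year, month, day, slug):
    # Hand-built equivalent of reverse('blog:post_detail', args=[...]),
    # assuming the app's URLs are mounted under '/blog/'.
    # Illustrative only; in a real project always use reverse() so the
    # URL stays in sync with the URL configuration.
    return f'/blog/{year}/{month}/{day}/{slug}/'

assert post_detail_url(2020, 1, 1, 'who-was-django-reinhardt') == \
    '/blog/2020/1/1/who-was-django-reinhardt/'
```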
Creating templates for your views
You have created views and URL patterns for the
blog application. URL patterns map URLs to views, and views decide which data gets returned to the user. Templates define how the data is displayed; they are usually written in HTML in combination with the Django template language. You can find more information about the Django template language in the official documentation.
Let's add templates to your application to display posts in a user-friendly manner.
Create the following directories and files inside your
blog application directory:
templates/
    blog/
        base.html
        post/
            list.html
            detail.html
The preceding structure will be the file structure for your templates. The
base.html file will include the main HTML structure of the website and divide the content into, template variables, and template filters:
- Template tags control the rendering of the template and look like
{% tag %}
- Template variables get replaced with values when the template is rendered and look like
{{ variable }}
- Template filters allow you to modify variables for display and look like
{{ variable|filter }}
You can see all built-in template tags and filters in the Django documentation.
Edit the
base.html file and add the following code:
{% load static %}
<html>
<head>
  <title>{% block title %}{% endblock %}</title>
  <link href="{% static "css/blog.css" %}" rel="stylesheet">
</head>
<body>
  <div id="content">
    {% block content %}
    {% endblock %}
  </div>
  <div id="sidebar">
    <h2>My blog</h2>
    <p>This is my blog.</p>
  </div>
</body>
</html>
{% load static %} tells Django to load the
static template tags that are provided by the
django.contrib.staticfiles application, which is contained in the
INSTALLED_APPS setting. After loading them, you are able to use the
{% static %} template tag throughout this template. With this template tag, you can include the static files, such as the
blog.css file, which you will find in the code of this example under the
static/ directory of the
blog application. Copy the
static/ directory from the code that comes along with this chapter into the same location as your project to apply the CSS styles to the templates. You can find the directory's contents in the book's code repository.
You can see that there are two
{% block %} tags. These tell Django that you want to define a block in that area. Templates that inherit from this template can fill in the blocks with content. With the {% extends %} template tag, you tell Django to inherit from the
blog/base.html template. Then, you fill the
title and
content blocks of the base template with content. You iterate through the posts and display their title, date, author, and body, including a link in the title to the canonical URL of the post.
In the body of the post, you apply two template filters:
truncatewords truncates the value to the number of words specified, and
linebreaks converts the output into HTML line breaks. You can concatenate as many template filters as you wish; each one will be applied to the output generated by the preceding one.
Open the shell and execute the
python manage.py runserver command to start the development server. Open the blog page in your browser; you will see everything running. Note that you need to have some posts with the
Published status to show them here. You should see something like this:
Figure 1.10: The page for the post list view
Next, edit the
post/detail.html file:
{% extends "blog/base.html" %}

{% block title %}{{ post.title }}{% endblock %}

{% block content %}
  <h1>{{ post.title }}</h1>
  <p class="date">
    Published {{ post.publish }} by {{ post.author }}
  </p>
  {{ post.body|linebreaks }}
{% endblock %}
Next, you can return to your browser and click on one of the post titles to take a look at the detail view of the post. You should see something like this:
Figure 1.11: The page for the post's detail view
Take a look at the URL—it should be
/blog/2020/1/1/who-was-django-reinhardt/. You have designed SEO-friendly URLs for your blog posts.
Adding pagination
When you start adding content to your blog, you might easily reach the point where tens or hundreds of posts are stored in your database. Instead of displaying all the posts on a single page, you may want to split the list of posts across several pages. This can be achieved through pagination. You can define the number of posts you want to be displayed per page and retrieve the posts that correspond to the page requested by the user. Django has a built-in pagination class that allows you to manage paginated data easily.
Edit the
views.py file of the
blog application to import the Django paginator classes and modify the
post_list view, as follows:
from django.core.paginator import Paginator, EmptyPage,\
                                  PageNotAnInteger

def post_list(request):
    object_list = Post.published.all()
    paginator = Paginator(object_list, 3) # 3 posts in each page
    page = request.GET.get('page')
    try:
        posts = paginator.page(page)
    except PageNotAnInteger:
        # If page is not an integer deliver the first page
        posts = paginator.page(1)
    except EmptyPage:
        # If page is out of range deliver last page of results
        posts = paginator.page(paginator.num_pages)
    return render(request,
                  'blog/post/list.html',
                  {'page': page,
                   'posts': posts})
This is how pagination works:
- You instantiate the Paginator class with the number of objects that you want to display on each page.
- You get the page GET parameter, which indicates the current page number.
- You obtain the objects for the desired page by calling the page() method of Paginator.
- If the page parameter is not an integer, you retrieve the first page of results. If this parameter is a number higher than the last page of results, you retrieve the last page.
- You pass the page number and retrieved objects to the template.
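The page-clamping behaviour of the view's try/except blocks can be expressed in plain Python. This is a framework-independent restatement of the logic, not Django's Paginator:

```python
import math

def resolve_page(page_param, total_items, per_page=3):
    """Mirror the view's fallbacks: a non-integer page falls back to
    page 1 (PageNotAnInteger), an out-of-range page to the last page
    (EmptyPage)."""
    num_pages = max(1, math.ceil(total_items / per_page))
    try:
        page = int(page_param)
    except (TypeError, ValueError):   # like PageNotAnInteger
        return 1
    if page < 1 or page > num_pages:  # like EmptyPage
        return num_pages
    return page

assert resolve_page(None, 10) == 1    # no ?page= parameter at all
assert resolve_page('abc', 10) == 1   # not an integer
assert resolve_page('99', 10) == 4    # past the end: 10 items -> 4 pages
assert resolve_page('2', 10) == 2     # a valid page is used as-is
```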
Now you have to create a template to display the paginator so that it can be included in any template that uses pagination. In the
templates/ folder of the
blog application, create a new file and name it
pagination.html. Add the following HTML code to the file:
<div class="pagination">
  <span class="step-links">
    {% if page.has_previous %}
      <a href="?page={{ page.previous_page_number }}">Previous</a>
    {% endif %}
    <span class="current">
      Page {{ page.number }} of {{ page.paginator.num_pages }}.
    </span>
    {% if page.has_next %}
      <a href="?page={{ page.next_page_number }}">Next</a>
    {% endif %}
  </span>
</div>
The pagination template expects a
Page object in order to render the previous and next links, and to display the current page and total pages of results. Let's return to the
blog/post/list.html template and include the
pagination.html template at the bottom of the
{% block content %} block, as follows:
{% block content %}
  ...
  {% include "pagination.html" with page=posts %}
{% endblock %}
Since the
Page object you are passing to the template is called
posts, you include the pagination template in the post list template, passing the parameters to render it correctly. You can follow this method to reuse your pagination template in the paginated views of different models.
Now open the post list page in your browser. You should see the pagination at the bottom of the post list and should be able to navigate through pages:
Figure 1.12: The post list page including pagination
Using class-based views
Class-based views are an alternative way to implement views as Python objects instead of functions.
Class-based views offer advantages over function-based views for some use cases. They have the following features:
- Organizing code related to HTTP methods, such as GET, POST, or PUT, in separate methods, instead of using conditional branching
- Using multiple inheritance to create reusable view classes (also known as mixins)
You can take a look at an introduction to class-based views in the Django documentation.
You will change your post_list view into a class-based view that uses the generic ListView offered by Django. In that view class, you are telling
ListView to do the following things:
- Use a specific QuerySet instead of retrieving all objects. Instead of defining a queryset attribute, you could have specified model = Post and Django would have built the generic Post.objects.all() QuerySet for you.
- Use the context variable posts for the query results. The default variable is object_list if you don't specify any context_object_name.
- Paginate the result, displaying three objects per page.
- Use a custom template to render the page. If you don't set a custom template, ListView will use blog/post_list.html by default.
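The PostListView class itself is missing from this extract. Reconstructed from the bullet points above, it might look like the following sketch; ListView and the manager are stand-ins so it runs outside a Django project (in real code you would subclass django.views.generic.ListView).

```python
class ListView:
    """Stand-in for django.views.generic.ListView (sketch only)."""

class _PublishedManager:
    def all(self):
        return []  # would be the published posts in a real project

class Post:
    published = _PublishedManager()

class PostListView(ListView):
    queryset = Post.published.all()          # a specific QuerySet
    context_object_name = 'posts'            # instead of object_list
    paginate_by = 3                          # three objects per page
    template_name = 'blog/post/list.html'    # custom template
```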
Now open the
urls.py file of your
blog application, comment out the preceding
post_list URL pattern, and add a new URL pattern using the
PostListView class, as follows:
urlpatterns = [
    # post views
    # path('', views.post_list, name='post_list'),
    path('', views.PostListView.as_view(), name='post_list'),
    path('<int:year>/<int:month>/<int:day>/<slug:post>/',
         views.post_detail,
         name='post_detail'),
]
In order to keep pagination working, you have to use the right page object that is passed to the template. Django's
ListView generic view passes the selected page in a variable called
page_obj, so you have to edit your
post/list.html template accordingly to include the paginator using the right variable, as follows:
{% include "pagination.html" with page=page_obj %}
Open the post list page in your browser and verify that pagination works as before.
Summary
In this chapter, you learned the basics of the Django web framework by creating a simple blog application. You designed the data models and applied migrations to your project. You also created the views, templates, and URLs for your blog, including object pagination.
In the next chapter, you will discover how to enhance your blog application with a comment system and tagging functionality, and how to allow your users to share posts by email.
Source: https://www.packtpub.com/product/django-3-by-example-third-edition/9781838981952
michaeldyrynda left a reply on Where To Store A Json Configuration File In The Codebase?
In the past, I've put the JSON string in my
.env file and just referenced it in the
config/services.php file, but you can also store
client_secret.json inside of storage then reference the contents in your config file as well.
Either would work but be mindful that the
storage directory is part of your version control, and you want to minimise your chances of inadvertently committing secrets to your version control.
michaeldyrynda left a reply on SOAP Client - Class Not Found
In addition to what @bobbybouwmann suggests, make sure you're loading your
SoapServiceProvider in the providers array of your
config/app.php.
I would also caution against calling the
env() function in your service provider. If you were ever to run
artisan config:cache, the call to
env() would return
null.
A more reliable option is to add a new array in your
config/services.php file and reference the configuration from there.
// config/services.php
<?php

return [
    // ...
    'soap' => [
        'username' => env('SOAP_USERNAME'),
        'password' => env('SOAP_PASSWORD'),
        'domain' => env('BYD_DOMAIN'),
    ],
];
Then simply update the calls in your service provider:
// app/Providers/SoapServiceProvider.php
// ...
$soapClient = new \SoapClient(
    storage_path($wsdlPath),
    array(
        'trace' => 1,
        'soap_version' => SOAP_1_2,
        'exceptions' => 1,
        'login' => config('services.soap.username'),
        'password' => config('services.soap.password'),
    )
);

$soapClient->__setLocation(config('services.soap.domain') . $serviceUrl);
michaeldyrynda left a reply on /app\models Directory Does Not Exist.
You can also just use
app_path('models') and let Laravel take care of the separator for you. Note that this will only work for a top-level path, i.e. if you had 'models/nested_models' you'd still have to be wary of the separator.
michaeldyrynda left a reply on Cannot Add Foreign Key Constraint
@Chrizzmeister if you had a
user_id column in a table and later decided to add a foreign key constraint to it, any records that either had no value or the value of a non-existent foreign id i.e. a deleted user would cause adding of the foreign key constraint to fail.
michaeldyrynda left a reply on Using Vue To Validate Coupon(s)
Are you using Vue dev tools in Chrome? Does that shed any light?
michaeldyrynda left a reply on Homestead Serve Not Restarting Nginx
It looks like the functionality was removed from the
serve command. Note that
serve was removed some time ago in favour of separate
serve-laravel,
serve-symfony2,
serve-proxy, and
serve-hhvm commands. None of which are documented currently, though I'm not sure if this is via intentional omission or just being missed out.
The
serve command ought to be considered legacy now, I think, given it's no longer documented and the Homestead docs suggest editing the
Homestead.yaml file and running
vagrant provision after adding new files.
michaeldyrynda left a reply on Laravel 5.3 Wish List
Y'all know you can just add an
app/Models directory and put your models in there, right?
The
app folder is PSR-4 namespaced, meaning that as long as you follow the naming structure (StudlyCaps folders and class names), Composer will automatically load those classes for you.
michaeldyrynda left a reply on Where Condition To Get Pass 30 Days Worth Of Records
First off, use a
date field, not a
varchar it will be much more efficient if you're going to be querying on it than a text string.
Secondly, to grab the last 30 days worth of results, you can use the
whereBetween or
whereDate.
michaeldyrynda left a reply on PHP Version Compatibility Related Issues
You can try using
whereLoose, which will do a non-strict comparison of your active field.
1 and
'1' should both pass that check, then.
michaeldyrynda left a reply on Migrations: Timestamp VS DateTime VS Date VS Timestamps
timestamp and
dateTime are similar - they store a date (
YYYY-MM-DD) and time (
HH:MM:SS) together in a single field i.e.
YYYY-MM-DD HH:MM:SS. The difference between the two is that
timestamp can use (MySQL's)
CURRENT_TIMESTAMP as its value, whenever the database record is updated. This can be handled at the database level and is great for Laravel's
created_at and
updated_at fields.
Note also that
timestamp has a limit of
1970-01-01 00:00:01 UTC to
2038-01-19 03:14:07 UTC, whilst
dateTime has a range of
1000-01-01 00:00:00 to
9999-12-31 23:59:59, so consider what you plan on storing in that field before determining which type to use.
date stores just the date component i.e.
YYYY-MM-DD (
1000-01-01 to
9999-12-31).
timestamps doesn't take an argument, it's a shortcut to add the
created_at and
updated_at timestamp fields to your database.
michaeldyrynda left a reply on Homestead: How Do I Browse To A New Project?
It all depends on whether you install globally (
composer global require) or in your project (
composer require)
michaeldyrynda left a reply on How To Deploy Laravel Project On Cpanel Through Forge
You don't have to give up Laravel, no, but if you're going to use a modern framework, you should probably spend some time learning about hosting it in a modern environment.
michaeldyrynda left a reply on Does A Melted Server Mean Downtime Using Multi-server?
The load balancer should be aware of the fact that one server is unavailable and forward all requests to the one that's still up. The load balancing isn't necessarily every second visitor getting sent to the other server.
michaeldyrynda left a reply on Homestead: How Do I Browse To A New Project?
It will come down to how you've installed homestead - either globally or per-project. Per-project is likely why you'd have a homestead folder in that xampp folder, globally it should have been installed in your home directory (as is the case with Mac/Linux).
Glad you got it sorted out, though.
michaeldyrynda left a reply on How To Deploy Laravel Project On Cpanel Through Forge
You can use Laravel on cPanel, you likely won't be able to use Forge. Forge works via SSH, which most cPanel hosts have disabled for security. The assumed directory structure and configuration in cPanel is going to be vastly different to what Forge expects, as well.
If you really must use cPanel, you're going to have to do deployments largely manually, though I'd encourage you to move away from it.
michaeldyrynda left a reply on Installation
What is on line 163 of your
config/app.php configuration file?
michaeldyrynda left a reply on Print One Error Not All Validation Errors
You need to replace
key with the key corresponding to the input field you'd like to display the error for i.e.
username,
password, etc.
michaeldyrynda left a reply on Print One Error Not All Validation Errors
You can place that anywhere you want to display the first error in your blade template.
michaeldyrynda left a reply on Homestead: How Do I Browse To A New Project?
Did you also run
vagrant provision as @SaeedPrez mentioned above? Each time you add a site to your
Homestead.yaml file, you'll need to reprovision the box.
michaeldyrynda left a reply on Print One Error Not All Validation Errors
You can use
$errors->first('key') to print the first error only. You'll have to use JavaScript to focus the field yourself. Something along the lines of:
window.onload = function () {
    document.getElementById('key').focus();
};
michaeldyrynda left a reply on Homestead: How Do I Browse To A New Project?
Add another entry into your hosts file for examplecms.dev for the same IP address.

You'll need to add the mapping in Homestead as well as the hosts file so your computer knows that when you type examplecms.dev into your browser, it should resolve to 192.168.10.10.
michaeldyrynda left a reply on Damn You Vue.js... Vue Warn: Unknown Custom Element
I'm not sure how the marketing could have been any clearer, given Vue is listed as one of the features of Spark.
The primary goal behind Spark was to provide the boilerplate for users, teams, signup, subscriptions, etc. allowing you to focus on building your business-specific stuff.
Mixing frontend frameworks is always going to be a mess. Perhaps instead of trying to force Angular into a Vue frontend, you spend a little time trying to convert your frontend concerns to Vue. I think you'll find it a simpler task than trying to mix and match the two. You may even end up liking Vue.
Overall, Spark is there to help you get up and running from scratch rather than trying to merge an existing project but if you choose to do that, Spark is opinionated about how it does things. If that's not what you're after, then perhaps Spark isn't the right tool for your particular use case.
Angular and React may be the bee's knees today, but they weren't always. They may not always be in the future. As long as the reasonably significant Laravel community is backing Vue, I don't think there'll be too many issues with it. I wouldn't hold my breath for Spark 2.0 dropping JS entirely; if that's what you're hoping for, I think you've misunderstood the purpose of it.
michaeldyrynda left a reply on Forge + Custom Vps
Forge will give you a link to curl, which will download a shell script to run that does everything necessary - you can have a look at it before running it if you'd like.
michaeldyrynda left a reply on Forge + Custom Vps
Yep. You need a fresh Ubuntu 16.04 LTS server; select custom VPS and you'll be given instructions to continue.
michaeldyrynda left a reply on Laravel Homestead Confused About Versions 0.2.7 / 2.1.5 ??
There are issues with Vagrant and 16.04 - so some time after that's fixed.
michaeldyrynda left a reply on How To Display Fetch Values Upon Clicking A Button Using Vue And Laravel
What does assignStudents return?
michaeldyrynda left a reply on How To Display Fetch Values Upon Clicking A Button Using Vue And Laravel
Right, so the error message is indicating that the properties you're trying to display (section_code, subject_code, subject_description) don't exist, because you're pushing section directly onto reservations.

In this instance, you'll need to take my first option, which is to return the complete reservation object from the POST to the store endpoint and push that back on to the reservation object.
public function store(Request $request)
{
    $subject = SectionSubject::findOrFail($request->sectionSubjectId);
    $student = Student::findOrFail($request->studentId);

    // assignStudents will need to return the complete reservation object
    // in the same way that your index method does
    return $subject->assignStudents($student);
}
When you return the (single) reservation object, you can then push it on to the reservations model in your Vue instance.
.then(function (response) {
    // Check the response object to verify that your returned reservation
    // object is available at data; it may be response.data.data
    this.reservations.push(response.data);
},
michaeldyrynda left a reply on How To Display Fetch Values Upon Clicking A Button Using Vue And Laravel
What error are you getting? Your store method doesn't return anything; I'm not sure if this is going to be a problem (you're not sending a response back to your POST request, so vue-resource may be assuming you have an error).
michaeldyrynda left a reply on How To Display Fetch Values Upon Clicking A Button Using Vue And Laravel
That not working will depend on how you're returning data from the POST request.

If you're returning it as part of a JSON object, it might be something like response.data.reservation.

My follow-up post is probably a simpler way of going about it, saving you posting data back to the client that it already has available.
michaeldyrynda left a reply on How To Display Fetch Values Upon Clicking A Button Using Vue And Laravel
Alternatively, just pass the existing section object in to your addSubject function:
<button v-on:click="addSubject(section, student_id)">Add</button>
addSubject: function (section, student_id) {
    this.$http({
        url: '',
        data: { sectionSubjectId: section.pivot.id, studentId: student_id },
        method: 'POST'
    })
    .then(function (response) {
        // You've already attached the section to the student,
        // so you can safely assume this is ok
        this.reservations.push(section);
    }, function (response) {
        console.log('failed');
    });
}
michaeldyrynda left a reply on How To Display Fetch Values Upon Clicking A Button Using Vue And Laravel
When you add the subject to the student, you can return the student's reservation in the response. It's then just a matter of appending that response to the existing reservations object.
addSubject: function (id, student_id) {
    this.$http({ ... })
    .then(function (response) {
        // return the reservation object in your response
        this.reservations.push(reservation);
    }, function (response) {
        console.log('failed');
    });
}
Because you're using two-way model binding, your v-for will automatically adjust to factor in your new reservation instance.
michaeldyrynda left a reply on Local Forge-like Production Server
You can just run the provision commands against a fresh Ubuntu server install. It'll get you to essentially the same thing.
michaeldyrynda left a reply on Gulp --production Breaks Urls
Are your images saved relative to the production path?
Your compiled assets will be in /public/css but your production compiled assets will be in /public/build/css, thus your relative paths will be pointing to one directory above where you think they'll be.
michaeldyrynda left a reply on Logging In With Email But Only The Part Before @
Just append before you validate/attempt auth.
Be wary of users that type their full email, though.
michaeldyrynda left a reply on Is There Any Way We Can Get The Code Of Laracast Videos?
michaeldyrynda left a reply on Error Eloquent Relationship When Used As Method
Calling a relationship as a method returns a query builder instance, not a collection. This is used so you can constrain the query on the relationship.
$userType::where('id', $id)
    ->siteLocations()
    ->where('state', $state)
    ->get();
michaeldyrynda left a reply on Filter Eloquent Model
Ah, right, my bad; in that case you'll need to provide category as an array:
$topics = $topics->with([
    'category' => function ($query) use ($category) {
        $query->where('slug', $category);
    }
]);
See the Constraining Eager Loads documentation.
michaeldyrynda left a reply on Filter Eloquent Model
You'll need to provide some more context around the error you're encountering. What specifically is generating that error?
michaeldyrynda left a reply on Blade To Vue: Alternative To Json_encode
If you just pass $private it will automatically be converted to a JSON string.
michaeldyrynda left a reply on Filter Eloquent Model
Are you passing an id in your route for category, and do you have route model binding? If so, you'll need $category->id (or whatever the ID field is) in the closure.
michaeldyrynda left a reply on Filter Eloquent Model
Something along the following lines should get you close:
public function index($category = null)
{
    $categories = Category::all();
    $topics = Topic::latest()->with('user', 'replies');

    if (! is_null($category)) {
        $topics = $topics->with('category', function ($query) use ($category) {
            // Assuming category is the id, substitute as necessary
            $query->where('id', $category);
        });
    } else {
        $topics = $topics->with('category');
    }

    $topics = $topics->paginate(5);
}
Two things of note here: first, make sure you reassign $topics when you perform additional operations on it and, second, you could drop the if/else and place the category in the initial with. I suspect that Eloquent will merge the two operations together.
michaeldyrynda left a reply on What's Wrong With My Composer.json?
Why are you using classmap for app/Models? Everything in app is already under PSR-4 autoloading.

Remove that and I think you'll have better luck, as I suspect the classmap inside app/Models is overriding the PSR-4 autoloading.
michaeldyrynda left a reply on Update Valet
A composer global update should suffice.
michaeldyrynda left a reply on [Spark-Lumen] Compatibility
Compatible in what way? I wouldn't have thought you could install Spark into a Lumen app, given that Spark has a fairly opinionated frontend and Lumen is designed for building APIs.
michaeldyrynda left a reply on Keep Default Values For Enviroment Variables Out Of App.php
env will default to null if the key is not defined in the environment.
michaeldyrynda left a reply on Catch Statements Are Ignored, Exception Is Handled In Global Handler
It's likely trying to catch the exceptions relative to the current namespace. Either import the exception class or prefix the exception paths with a \.
try {
    $this->repository->approve($token, $user);
} catch (\Illuminate\Database\Eloquent\ModelNotFoundException $e) {
    return $this->error('The entry was not found.');
} catch (\Fusion\GAM\ProxyBlock\Exceptions\BlockAlreadyApprovedException $e) {
    return $this->error('Already approved.');
}
michaeldyrynda left a reply on $this->validate Vs Validator::make?
The validate method will trap the failure and redirect back with errors and input automagically. Validator::make builds a new Validator instance for you to work with. With the latter, it's up to you to check fails or passes and handle each case as you see fit.
michaeldyrynda left a reply on WhereBetween Does Not Include End Date
Are you passing a date (YYYY-MM-DD) or date time (YYYY-MM-DD HH:MM:SS)? whereBetween is sugar for the database's BETWEEN keyword. If you need to include start and end dates, be explicit about the times, i.e. YYYY-MM-DD 00:00:00 and YYYY-MM-DD 23:59:59.
michaeldyrynda left a reply on Add New Payment Method In Spark
michaeldyrynda left a reply on Add New Payment Method In Spark
I would imagine you can add whatever payment functionality you want, you'd just need to implement the appropriate payment interfaces. This was the case with earlier beta versions from memory.
Hello everyone,
I am working on ArcMap 10.3.1 and Python 2.7.8. I need to determine how much time arcpy needs to finish a specific task (adding a layer). I'm using this code:
import arcpy, os, sys, string, datetime, time
import arcpy.mapping
from arcpy import env

env.workspace = r"C:\Project"
Layer1 = arcpy.mapping.Layer(r"C:\Project\layers\atikot.lyr")
counter = 0
for mxd in arcpy.ListFiles("*.mxd"):
    print (mxd)
    mapdoc = arcpy.mapping.MapDocument(r"C:\Project\\" + mxd)
    df = arcpy.mapping.ListDataFrames(mapdoc, "Layers")[0]
    arcpy.mapping.AddLayer(df, Layer1, "TOP")
    print ('AddLayer')
    mapdoc.save()
    counter = counter + 1
del mxd
print (counter)
print str(datetime.datetime.now().date())
print time.clock()
When I run it I get:

>>>
Project 2.mxd
AddLayer
Project.mxd
AddLayer
soil.mxd
AddLayer
soil_20008.mxd
AddLayer
ta34b_4.mxd
AddLayer
wells.mxd
AddLayer
6
2015-03-18
3.20723655198e-07
>>>
I have been unable to get execution times for the processes. When I measured this process with a stopwatch I got 9.6 seconds in real time, but in the code the result is 3.207e-07. I don't know if 3.2e-07 is in seconds, and if it is, that means this time isn't right.
Any help would be great
Practice with your stopwatch before you use the time functions. Copy and paste these in an interactive editor session.
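The near-zero figure comes from how time.clock() behaves on Windows in Python 2: it reports seconds elapsed since the first call to clock() in the process, so a single call at the very end of a script returns almost nothing. To time the loop, take a timestamp before and after the work and print the difference. A minimal sketch of the idea, with time.sleep standing in for the arcpy calls (arcpy is only available inside ArcGIS):

```python
import time

start = time.time()  # wall-clock timestamp before the work

# ... the arcpy.mapping loop would run here; sleep stands in for it
time.sleep(0.1)

elapsed = time.time() - start
print('Elapsed: %.2f seconds' % elapsed)
```

The same pattern works unchanged around the for loop in the script above.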
Reserving URL Namespaces by Using Http.sys
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature.
You can explicitly reserve a URL namespace in HTTP.SYS, and then use this namespace to create HTTP endpoints. To do this, you must understand the concept of explicit and implicit namespace reservation and how SQL Server registers an HTTP endpoint with HTTP.SYS.
When a user executes a CREATE ENDPOINT statement, such as the following:
The namespace is implicitly reserved in HTTP.SYS. This means that while the SQL Server-based application is running, any HTTP requests to this endpoint are forwarded to the instance of SQL Server. However, this namespace can be taken by other applications if the instance of SQL Server is not running.
When you explicitly reserve a namespace, the namespace is reserved specifically for SQL Server, and all HTTP requests to this endpoint are forwarded to the instance of SQL Server. For more information, see Reserving an HTTP Namespace.
To manage HTTP endpoints, you use CREATE ENDPOINT, ALTER ENDPOINT and DROP ENDPOINT. You must have the required permissions to create, modify, or drop an endpoint. This is described in the topic, GRANT Endpoint Permissions (Transact-SQL).
When you execute CREATE ENDPOINT to create an endpoint, SQL Server runs the statement and registers the endpoint with the HTTP.SYS. Depending on the context in which the endpoint statement is specified, SQL Server impersonates the caller as follows:
If you execute the statement in the context of a Windows account, SQL Server impersonates the caller to register the endpoint with HTTP.SYS.
If you execute the statement in the context of a SQL Server account, for example, sa or some other SQL Server login, SQL Server impersonates the caller by using the SQL Server account, specified when SQL Server is installed, to register the endpoint with HTTP.SYS.
Both the Windows account and the SQL Server account that SQL Server impersonates must have local Windows administrator privileges for the HTTP endpoint registration to succeed.
To determine what namespaces are reserved in HTTP.SYS, you run the HTTP configuration utility, Httpcfg.exe, at the command prompt.
The following is an example of using Httpcfg.exe to return the list of reserved HTTP namespaces:

httpcfg query urlacl
This command will display a list of all existing namespace reservations, returning the namespace URL and account under which it was reserved.
Here is typical output for this command:
URL :
ACL : D:(A;;GA;;;S-1-5-21-123456789-1234567890-1262470759-1010)
-----------------------------------------------------------------
URL :
ACL : D:(A;;GA;;;NS)
-----------------------------------------------------------------
Normally you save your application settings in Isolated Storage. This is pretty secure, but it really isn't the best approach for storing things you need to protect. Although I guess anything is possible with enough time and computing horsepower, the Windows Phone 7.1 SDK provides a more secure mechanism for storing things like passwords or application keys. It's called ProtectedData and provides a mechanism that takes byte arrays, encrypts them, and turns them back into their original state. This class is in the System.Security.Cryptography namespace.
This could be a simple way to use this to protect passwords in your application.
Just as a heads up if you try to use this in your XNA application, you’ll need to add the assembly mscorlib.extensions to your application.
-twb
|
This module provides access to some variables used or maintained by the interpreter and to functions that interact strongly with the interpreter. It is always available.
sys.byteorder
An indicator of the native byte order. This will have the value 'big' on big-endian (most-significant byte first) platforms, and 'little' on little-endian (least-significant byte first) platforms.
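A quick check of the value on the current platform:

```python
import sys

# 'little' on x86/x86-64 machines, 'big' on big-endian hardware
print(sys.byteorder)
```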
New in version 2.0.

sys.dont_write_bytecode
If this is true, Python won't try to write .pyc or .pyo files on the import of source modules.

New in version 2.6.
sys.exc_info()
Return information about the most recent exception caught by an except clause in the current stack frame or in an older frame. The return value is a tuple of three values that give information about the exception currently being handled: (type, value, traceback).

If exc_clear() is called, this function will return three None values until either another exception is raised in the current thread or the execution stack returns to a frame where another exception is being handled.
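A short sketch of inspecting the currently handled exception from inside an except clause:

```python
import sys

try:
    1 / 0
except ZeroDivisionError:
    # (type, value, traceback) of the exception being handled
    exc_type, exc_value, exc_tb = sys.exc_info()
    print(exc_type.__name__)  # ZeroDivisionError
```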
Note
Beginning with Python 2.2, such cycles are automatically reclaimed when garbage collection is enabled and they become unreachable, but it remains more efficient to avoid creating cycles.
sys.executable
A string giving the absolute path of the executable binary for the Python interpreter, on systems where this makes sense. If Python is unable to retrieve the real path to its executable, sys.executable will be an empty string or None.
sys.exit([arg])
Exit from Python. This is implemented by raising the SystemExit exception, so cleanup actions specified by finally clauses of try statements are honored, and it is possible to intercept the exit attempt at an outer level.
sys.flags
The struct sequence flags exposes the status of command line flags. The attributes are read only.
New in version 2.6.
New in version 2.7.3: The hash_randomization attribute.
sys.float_info
A structseq holding information about the float type. It contains low level information about the precision and internal representation.
New in version 2.6.
sys.float_repr_style
A string indicating how the repr() function behaves for floats. If the string has value 'short' then for a finite float x, repr(x) aims to produce a short string with the property that float(repr(x)) == x. This is the usual behaviour in Python 2.7 and later. Otherwise, float_repr_style has value 'legacy' and repr(x) behaves in the same way as it did in versions of Python prior to 2.7.
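The round-trip property in 'short' style can be checked directly:

```python
# In 'short' repr style, repr() yields the shortest string that
# round-trips exactly: float(repr(x)) == x for any finite float x.
x = 0.1
s = repr(x)
assert float(s) == x
```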
New in version 2.7.

sys.getfilesystemencoding()
Return the name of the encoding used to convert Unicode filenames into system file names, or None if the system default encoding is used. The result value depends on the operating system:

On Mac OS X, the encoding is 'utf-8'.

On Unix, the encoding is the user's preference according to the result of nl_langinfo(CODESET), or None if nl_langinfo(CODESET) failed.

On Windows NT+, file names are Unicode natively, so no conversion is performed. getfilesystemencoding() still returns 'mbcs', as this is the encoding that applications should use when they explicitly want to convert Unicode strings to byte strings that are equivalent when used as file names.

On Windows 9x, the encoding is 'mbcs'.
New in version 2.6.
sys._getframe([depth])
Return a frame object from the call stack. If optional integer depth is given, return the frame object that many calls below the top of the stack. If that is deeper than the call stack, ValueError is raised.
sys.getprofile()
Get the profiler function as set by
setprofile().
New in version 2.6.
sys.gettrace()
Get the trace function as set by settrace().
New in version 2.6.
sys.getwindowsversion()
Return a named tuple describing the Windows version currently running.
New in version 2.3.
Changed in version 2.7: Changed to a named tuple and added service_pack_minor, service_pack_major, suite_mask, and product_type.
sys.hexversion
The version number encoded as a single integer. This is guaranteed to increase with each version, including proper support for non-production releases.

The hexversion is a 32-bit number with the following layout: bits 31-24 hold the major version, 23-16 the minor version, 15-8 the micro version, 7-4 the release level, and 3-0 the release serial. Thus 2.1.0a3 is hexversion 0x020100a3.
New in version 1.5.2.
sys.long_info
A struct sequence that holds information about Python’s internal representation of integers. The attributes are read only.
New in version 2.7.
sys.last_type
sys.last_value
sys.last_traceback
These three variables are not always defined; they are set when an exception is not handled and the interpreter prints an error message and a stack traceback. Their intended use is to allow an interactive user to import a debugger module and engage in post-mortem debugging without having to re-execute the command that caused the error. (See also the chapter pdb — The Python Debugger for more information.)
The meaning of the variables is the same as that of the return values from exc_info() above. (Since there is only one interactive thread, thread-safety is not a concern for these variables, unlike for exc_type etc.)
sys.path
A list of strings that specifies the search path for modules. Initialized from the environment variable PYTHONPATH, plus an installation-dependent default.
Changed in version 2.3: Unicode strings are no longer ignored.
sys.path_importer_cache
A dictionary acting as a cache for finder objects. The keys are paths that have been passed to sys.path_hooks and the values are the finders that are found. If a path is a valid file system path but no explicit finder is found on sys.path_hooks then None is stored to represent that the implicit default finder should be used. If the path is not an existing path then imp.NullImporter is set.
Originally specified in PEP 302.
sys.platform
This string contains a platform identifier that can be used to append platform-specific components to sys.path, for instance.

Changed in version 2.7.3: Since lots of code checks for sys.platform == 'linux2', and there is no essential change between Linux 2.x and 3.x, sys.platform is always set to 'linux2', even on Linux 3.x.
sys.ps1
sys.ps2
Strings specifying the primary and secondary prompt of the interpreter. These are only defined if the interpreter is in interactive mode. Their initial values in this case are '>>> ' and '... '.

sys.py3kwarning
Bool containing the status of the Python 3 warning flag. It's True when Python is started with the -3 option. (This should be considered read-only; setting it to a different value doesn't have an effect on Python 3 warnings.)
New in version 2.6.
sys.setdefaultencoding(name)
Set the current default string encoding used by the Unicode implementation. If name does not match any available encoding, LookupError is raised. This function is only intended to be used by the site module implementation and, where needed, by sitecustomize. Once used by the site module, it is removed from the sys module's namespace.
New in version 2.0.
sys.setdlopenflags(n)
Set the flags used by the interpreter for dlopen() calls, such as when the interpreter loads extension modules. To share symbols across extension modules, call sys.setdlopenflags(dl.RTLD_NOW | dl.RTLD_GLOBAL). Symbolic names for the flag values can be found either in the dl module or in the DLFCN module. If DLFCN is not available, it can be generated from /usr/include/dlfcn.h using the h2py script. Availability: Unix.
New in version 2.2.
sys.setprofile(profilefunc)
Set the system's profile function, which allows you to implement a Python source code profiler in Python. The profile function is called with different events, for example on function call and return.
Profile functions should have three arguments: frame, event, and arg. frame is the current stack frame. event is a string: 'call', 'return', 'c_call', 'c_return', or 'c_exception'. arg depends on the event type.
The events have the following meaning:
'call'
A function is called (or some other code block entered). The profile function is called; arg is None.
'return'
A function (or other code block) is about to return. The profile function is called; arg is the value that will be returned, or None if the event is caused by an exception being raised.
'c_call'
A C function is about to be called. This may be an extension function or a built-in. arg is the C function object.

'c_return'
A C function has returned. arg is the C function object.

'c_exception'
A C function has raised an exception. arg is the C function object.
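A minimal sketch of a profile function that just records event names:

```python
import sys

events = []

def profiler(frame, event, arg):
    # collect every profile event name as it arrives
    events.append(event)

def add(a, b):
    return a + b

sys.setprofile(profiler)
add(1, 2)
sys.setprofile(None)  # disable profiling again

# events now contains at least a 'call' and a 'return' for add()
```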
sys.setrecursionlimit(limit)
Set the maximum depth of the Python interpreter stack to limit. This limit prevents infinite recursion from causing an overflow of the C stack and crashing Python. The highest possible limit is platform-dependent.
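A sketch of lowering the limit and observing the resulting error. In Python 2 the overflow raises RuntimeError; Python 3's RecursionError is a subclass of RuntimeError, so the same except clause works in both:

```python
import sys

old = sys.getrecursionlimit()
sys.setrecursionlimit(200)  # deliberately small

def countdown(n):
    return n if n == 0 else countdown(n - 1)

try:
    countdown(10000)            # far deeper than the limit allows
    hit_limit = False
except RuntimeError:            # catches RecursionError on Python 3 too
    hit_limit = True
finally:
    sys.setrecursionlimit(old)  # always restore the original limit

print(hit_limit)  # True
```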
sys.settrace(tracefunc)
Set the system's trace function, which allows you to implement a Python source code debugger in Python. Trace functions should have three arguments: frame, event, and arg. frame is the current stack frame. event is a string: 'call', 'line', 'return', 'exception', 'c_call', 'c_return', or 'c_exception'. arg depends on the event type.
'return'
A function (or other code block) is about to return. The local trace function is called; arg is the value that will be returned, or None if the event is caused by an exception being raised. The trace function's return value is ignored.
'exception'
An exception has occurred. The local trace function is called; arg is a tuple (exception, value, traceback); the return value specifies the new local trace function.

sys.settscdump(on_flag)
Activate dumping of VM measurements using the Pentium timestamp counter, if on_flag is true. Deactivate these dumps if on_flag is off. The function is available only if Python was compiled with --with-tsc.
New in version 2.4.
CPython implementation detail: This function is intimately bound to CPython implementation details and thus not likely to be implemented elsewhere.
sys.stdin
sys.stdout
sys.stderr
File objects corresponding to the interpreter’s standard input, output and error streams.
stdin is used for all interpreter input except for scripts but including calls to input() and raw_input(). stdout is used for the output of print and expression statements and for the prompts of input() and raw_input(). The interpreter's own prompts and (almost all of) its error messages go to stderr. stdout and stderr needn't be built-in file objects: any object is acceptable as long as it has a write() method that takes a string argument.
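Because any object with a write() method is acceptable, output can be captured by swapping sys.stdout temporarily (shown here with Python 3's io.StringIO; under Python 2 you would use StringIO.StringIO instead):

```python
import sys
from io import StringIO

buf = StringIO()
original = sys.stdout
sys.stdout = buf            # print now writes into buf
try:
    print('captured')
finally:
    sys.stdout = original   # always restore the real stream

captured = buf.getvalue()
```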
sys.__stdin__
sys.__stdout__
sys.__stderr__
These objects contain the original values of stdin, stderr and stdout at the start of the program.
sys.subversion
A triple (repo, branch, version) representing the Subversion information of the Python interpreter. repo is the name of the repository, 'CPython'. branch is a string of one of the forms 'trunk', 'branches/name' or 'tags/name'. version is the output of svnversion, if the interpreter was built from a Subversion checkout; it contains the revision number (range) and possibly a trailing 'M' if there were local modifications. If the tree was exported (or svnversion was not available), it is the revision of Include/patchlevel.h if the branch is a tag. Otherwise, it is None.
New in version 2.5.
Note
Python is now developed using Git. In recent Python 2.7 bugfix releases, subversion therefore contains placeholder information. It is removed in Python 3.3.
sys.version
A string containing the version number of the Python interpreter plus additional information on the build number and compiler used.
sys.api_version
The C API version for this interpreter. Programmers may find this useful when debugging version conflicts between Python and extension modules.
New in version 2.3.
sys.version_info
A tuple containing the five components of the version number: major, minor, micro, releaselevel, and serial. All values except releaselevel are integers; the release level is 'alpha', 'beta', 'candidate', or 'final'.

New in version 2.0.

Changed in version 2.7: Added named component attributes.
sys.warnoptions
This is an implementation detail of the warnings framework; do not modify this value. Refer to the
warnings module for more information on the warnings framework.
sys.winver
The version number used to form registry keys on Windows platforms.
C99
ISO/IEC 9899:1999. "Programming languages – C." A public draft of this standard is available online.
© 2001–2020 Python Software Foundation
Licensed under the PSF License.
|
An introduction to the django-carrot task backend
django-carrot is a lightweight task queue for Django projects with an emphasis on very easy setup and task traceability. It is intended as a simpler alternative to the celery project. This tutorial will explain how to add a django-carrot task backend to your project in a matter of minutes.
Setting up RabbitMQ
django-carrot uses the RabbitMQ message broker to queue and execute tasks. It can be installed and started using brew:
brew install rabbitmq
brew services start rabbitmq
You should now be able to see the RabbitMQ monitor at:
Installing django-carrot
django-carrot can be installed with pip:
pip install django-carrot
Using django-carrot in your project
If you don’t already have an existing Django project, create one in the normal way:
django-admin.py startproject myproject
If you already have a Django project, you can add django-carrot to the installed apps in your settings.py file:
INSTALLED_APPS = [
...
'carrot',
]
django-carrot stores execution information for all tasks in the Django project's database. You'll need to create and apply migrations in order for it to work:
python manage.py makemigrations carrot
python manage.py migrate
You can now start the django-carrot service as follows:
python manage.py carrot_daemon start --logfile carrot.log
The default value of the logfile parameter is /var/log/carrot.log. You will need to change this, as above, if this path does not exist on your machine or you do not have access to it
Go back to the RabbitMQ monitor and look at the queues tab. You should be able to see that a queue called default has been created.
Stop the service for now with the following command:
python manage.py carrot_daemon stop
Creating tasks
Now that you have a queue set up, you can publish tasks to it. django-carrot can execute any Python function as an asynchronous task. For example, this simple function multiplies two numbers together and returns a string with the result
def multiply(a, b):
return '%i multiplied by %i is %i' % (a, b, a * b)
Tasks must first be published to your RabbitMQ queue. This is done using django-carrot’s publish_task utility:
from carrot.utilities import publish_task
publish_task(multiply, 2, 2)
If you go back to RabbitMQ, you should be able to see the task that you just published by clicking on the default queue and clicking the Get Message(s) button
Tasks will sit in the queue until the django-carrot service consumes and executes them. Turn the django-carrot service back on again, and you should see the task disappear from the queue, and the message count in RabbitMQ will return to zero:
python manage.py carrot_daemon start --logfile carrot.log
Monitoring tasks
Although the task has been completed, we haven’t yet seen any execution information about it. Fortunately, django-carrot provides its own monitor which stores detailed information about all tasks published, completed, scheduled and failed.
To enable it, go to your project’s urls.py file, and add it as follows:
urlpatterns = [
...
url(r'^carrot/', include('carrot.urls')),
]
Restart your Django server and navigate to the carrot monitor. You'll be able to see the task that was just executed under the Completed tab:
Click on the task name to see more details, and the output
More options
Logging
Logging can be used to aid debugging of tasks in the carrot monitor as follows:
from carrot.utilities import publish_message
import logging
logger = logging.getLogger('carrot')
def my_task(*args, **kwargs):
logger.debug('hello world')
logger.info('hello world')
logger.warning('hello world')
logger.error('hello world')
logger.critical('hello world')
publish_task(my_task)
This would appear in the monitor as follows:
Note that the debug message is not displayed, as the log level drop down is set to INFO
Concurrency
By default, the carrot service creates one consumer and one queue. Multiple queues and consumers can be created by adding the CARROT attribute to the settings file:
from carrot import DEFAULT_BROKER

CARROT = {
    'default_broker': DEFAULT_BROKER,
    'queues': [
        {'name': 'queue1', 'vhost': DEFAULT_BROKER},
        {'name': 'queue2', 'vhost': DEFAULT_BROKER, 'concurrency': 10}
    ]
}
Scheduled tasks
Tasks can also be scheduled to run at intervals via the monitor
For further reading, refer to the django-carrot docs at
|
Natural Language Processing (NLP) is not supposed to be easy! But let’s try to simplify for beginners. Follow us for more beginner friendly articles like this.
Natural Language Processing or NLP is a subset of the field of Artificial Intelligence. It is a field that analyzes our human language, taking texts as input. The entire text dataset, the input data, is called the corpus. For example, we can calculate how many times a word appears in the corpus. This count is called term frequency.
“Hi there! It’s good to see you. I just wanted to say hi.” # The sentence is the corpus. The term frequency of ‘hi’ is 2, because it appears twice in the corpus, if our analysis is case insensitive (‘Hi’ equals ‘hi’). If it is case sensitive, then the term frequency of ‘Hi’ is one, and the TF of ‘hi’ is also one.
We will elaborate on term frequency later.
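The counting above can be reproduced with a few lines of plain Python using collections.Counter (the regex tokenizer here is an illustrative choice, not part of any particular NLP library):

```python
import re
from collections import Counter

corpus = "Hi there! It's good to see you. I just wanted to say hi."

# Case-insensitive: lower-case the corpus before tokenizing
tokens = re.findall(r"[a-z']+", corpus.lower())
tf = Counter(tokens)
print(tf['hi'])  # 2

# Case-sensitive: 'Hi' and 'hi' are counted separately
tokens_cs = re.findall(r"[A-Za-z']+", corpus)
tf_cs = Counter(tokens_cs)
print(tf_cs['Hi'], tf_cs['hi'])  # 1 1
```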
Practical tip: Sometimes it is important to be case sensitive. For example, Trump may refer to Donald Trump; trump is a verb often used in card games describing one card outranking another. When case doesn't matter, a common preprocessing and data cleaning technique is to change all text of the corpus to lower case: lower_case_corpus = corpus.lower(). The function .lower() is a Python string method. For example, "Hello there!" will become "hello there!".
Bag of Words — a common, introductory model for Natural Language Processing NLP
Codecademy.com explains bag-of-words model: “A bag-of-words model totals the frequencies of each word in a document, with each unique word being its own feature and its frequency being the value.”
If you haven't studied Machine Learning, the word feature may make no sense. There are tricks that may help you understand. We can imagine the output of a bag of words model as a Python dictionary / hashmap of key value pairs, or as an Excel sheet. The features are the keys in the dictionary or the column headers in the Excel sheet. Features are meaningful representations of the data. Machine Learning learns features and predicts outcomes called labels.
For example useful features of Person data — information that describes people — may include: height, gender, name, government issued ID number etc.
Pro Tip: what is the feature dimension? What is the size or the number of features? It equals the size of the vocabulary found in the corpus.
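A minimal pure-Python sketch of the idea, without sklearn: the feature dimension is just the number of unique words, and each document becomes a vector of counts over that vocabulary (the two toy documents are made up for illustration):

```python
from collections import Counter

docs = ["natural language processing is fun",
        "processing language is natural"]

# The vocabulary is the set of unique words across the corpus
vocabulary = sorted(set(word for doc in docs for word in doc.split()))
print(len(vocabulary))  # 5 -- the feature dimension

# Each document becomes a count vector over the vocabulary
vectors = [[Counter(doc.split())[word] for word in vocabulary]
           for doc in docs]
print(vectors)
```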
corpus = ["You are reading a tutorial by Uniqtech. We are talking about Natural Language Processing aka NLP. Would you like to learn more? Learn more about Machine Learning today!"]
# If you use corpus = "..." (a plain string) you will receive an error:
# ValueError: Iterable over raw text documents expected, string object received.
from sklearn.feature_extraction.text import CountVectorizer

count_vect = CountVectorizer()
bow = count_vect.fit_transform(corpus)
bow.shape  # (1, 22)
count_vect.get_feature_names()
# [u'about', u'aka', u'are', u'by', u'language', u'learn', u'learning', u'like', u'machine', u'more', u'natural', u'nlp', u'processing', u'reading', u'talking', u'to', u'today', u'tutorial', u'uniqtech', u'we', u'would', u'you']
bow.toarray()
# array([[2, 1, 2, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2]])
Pro tip: what does CountVectorizer do, per the sklearn documentation? "Convert a collection of text documents to a matrix of token counts." It returns a sparse matrix of type scipy.sparse.csr_matrix. Just an FYI; don't think too hard about it now.
The feature names are returned by count_vect.get_feature_names(), and bow.toarray() gives us the frequency of the corresponding features. For example, the first word 'about' appears twice in the corpus so its frequency is 2. The last word 'you' also appears twice.
How is it useful? This common model is surprisingly powerful. There is some criticism of the author of 50 Shades of Grey on the internet: the claim is that she is not a sophisticated author because her books only utilize a limited English vocabulary. Apparently people have found that she uses some simple non-descriptive words too often, such as love and gasp. Below is a meme that makes fun of 50 Shades of Grey.
How did people know the author uses gasp a lot? Word count, word frequency of course!
If we read through this Word Frequency Analysis of the 50 Shades of Grey Trilogy, indeed we have to scroll down quite far before we see a complex word that is also frequently used, such as murmur.
Some argue, however, that precisely because the author writes in an easy-to-read, colloquial style, the series has gained wide readership and popularity.
Surprisingly, this simple model is quite insightful and already generates a good discussion.
More on bag of words
Stop Word Removal … or not: Not all words in the corpus are considered important enough to be features. Some, such as a, the, and, are called stop words, and they are sometimes removed from the feature dataset to improve machine learning model results. The word the appeared nearly 5,000 times in the book, but it does not mean anything in particular, so it is okay to remove it from our dataset.
In the bag of words model, grammar does not matter so much, nor does word order.
Pro Tip: the bag-of-words model instance is often stored in a variable called bow, which can be confusing because you may be thinking of bow and arrow, but it is the acronym for bag of words!
Sample natural language processing workflow and NLP pipeline:
Data cleaning pipeline for text data
- cleaning (regular expressions)
- sentence splitting
- change to lower case
- stopword removal (most frequent words in a language)
- stemming — demo porter stemmer
- noun chunking
- NER (named entity recognition) — demo OpenCalais
- deep parsing — try to “understand” text.
Important Natural Language Processing Concepts
Stop Words Removal
Stop words are words that may not carry valuable information.
In some cases stop words matter. For example, researchers found that stop words are useful in identifying negative reviews or recommendations. People write sentences such as “This is not what I want.” or “This may not be a good match.” People may use stop words more in negative reviews. Researchers found this out by keeping the stop words and achieving better prediction results.
While it is common practice to remove stop words and return only clean text, removing stop words does not always give better prediction results. For example, not is treated as a stop word in some NLP libraries, but it is a very significant word in negative reviews and recommendations in sentiment analysis. If a customer states “I would not buy this product again, and would not accept any refund. Really not a good match at all.”, the word “not” is a strong signal that the review is negative. A positive review may sound, well, positive! “I really like the product! I enjoyed it very much. Not what I expected at all.” In this example, the negative review uses the word “not” three times as often.
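A toy illustration of this point (the mini-review and the tiny stop word set are made up for the example, not taken from any real dataset): treating not as a stop word erases the negation signal a sentiment model would rely on.

```python
# "not" appears in many stop word lists; removing it flips the
# apparent sentiment of a negative phrase.
stop_words = {"a", "the", "is", "this", "not"}

review = "this is not a good match"
tokens = review.split()

without_not = [t for t in tokens if t not in stop_words]
print(without_not)  # ['good', 'match'] -- now reads as positive!

keep_not = [t for t in tokens if t not in stop_words - {"not"}]
print(keep_not)     # ['not', 'good', 'match'] -- negation preserved
```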
Removing punctuation may also yield better results in some situations.
NLP Techniques — Removing punctuation with Regex
Punctuation is not always useful in predicting the meaning of texts, so it is often removed along with stop words. What does removing punctuation mean? It means keeping only the alphanumeric characters. Regex programming lessons can fill books! Just use the nifty function below for short texts. For longer texts that require more processing power, use an iterable generator to go through the text line by line and keep only alphanumeric characters. For big data, use parallel processing to handle multiple lines of text at once.
This process of removing numbers and punctuation is called pruning.
Regex removes punctuation
# import regex
import re

corpus = "You are reading a tutorial by Uniqtech. We are talking about Natural Language Processing aka NLP. Would you like to learn more? Learn more about Machine Learning today!"

corpus = re.sub("[^a-zA-Z0-9]+", "", corpus)
corpus
# 'YouarereadingatutorialbyUniqtechWearetalkingaboutNaturalLanguageProcessingakaNLPWouldyouliketolearnmoreLearnmoreaboutMachineLearningtoday'
# note: spaces are removed too

# adding \s to the negated character class tells the regex NOT to match whitespace
corpus = "You are reading a tutorial by Uniqtech. We are talking about Natural Language Processing aka NLP. Would you like to learn more? Learn more about Machine Learning today!"  # start from the original text again
corpus = re.sub("[^a-zA-Z0-9\s]+", "", corpus)
corpus
# returns 'You are reading a tutorial by Uniqtech We are talking about Natural Language Processing aka NLP Would you like to learn more Learn more about Machine Learning today'
Go ahead, just use the above method and avoid reinventing the wheel.
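For longer texts, the same cleaning can be done lazily with a generator, as mentioned above. A minimal sketch (the function name and sample lines are illustrative):

```python
import re

def clean_lines(lines):
    """Yield each line with punctuation stripped, keeping letters,
    digits, and whitespace -- one line at a time, so a large file
    never has to fit in memory."""
    pattern = re.compile(r"[^a-zA-Z0-9\s]+")
    for line in lines:
        yield pattern.sub("", line)

# Works on any iterable of strings, e.g. an open file handle.
sample = ["Hello, world!", "NLP is fun..."]
print(list(clean_lines(sample)))  # ['Hello world', 'NLP is fun']
```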
Pro Tip: Python also has a built-in alphanumeric checker, str.isalnum(). There is also str.isalpha(), which only returns True for alphabetic strings; a number will not evaluate to True.
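A quick demonstration of these built-in string checks:

```python
# .isalnum() is True only if every character is a letter or digit
print("abc123".isalnum())   # True
print("abc 123".isalnum())  # False (space is neither)

# .isalpha() is True only for purely alphabetic strings
print("abc".isalpha())      # True
print("abc123".isalpha())   # False (contains digits)
```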
There are always hackers coming up with fancy regex code! It keeps getting fancier.
from nltk.tokenize import RegexpTokenizer

# a regex tokenizer
tokenizer = RegexpTokenizer(r'\w+')
# tokenizes runs of word characters, effectively removing all punctuation
Tokenization
Tokenization: breaking texts into tokens. Example: breaking sentences into words, or grouping words together depending on the scenario. There are also the n-gram model and the skip-gram model.
Basic tokenization is 1-gram. An n-gram (multi-gram) model is useful when a phrase yields better results than a single word. For example, for “I do not like Banana.”, the 1-grams are i, do, not, like, banana. A 3-gram model may yield better results: i do not, do not like, not like banana (plus, with padding, like banana _, banana _ _).
n-gram: n is the number of words we want in each token. Frequently, n = 1.
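A minimal sketch of n-gram generation (the helper function is illustrative, not taken from any particular library):

```python
def ngrams(tokens, n):
    """Return all contiguous n-grams over a list of tokens."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "i do not like banana".split()
print(ngrams(tokens, 1))  # five 1-grams: ('i',), ('do',), ...
print(ngrams(tokens, 3))
# [('i', 'do', 'not'), ('do', 'not', 'like'), ('not', 'like', 'banana')]
```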
Did you know that Google digitized many books and generated and analyzed literature based on the n gram model? Nice work Google!
Lemmatization
Lemmatization: transforming words into their roots. Example: economics, micro-economics, macro-economists, economists, economist, economy, economical, economic forum can all be transformed back to the root econ, which suggests the text or article is largely about economics, finance, or economic issues. Useful in situations such as topic labeling. Common tools: WordNetLemmatizer, Porter stemmer (strictly speaking a stemmer rather than a lemmatizer).
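To show the idea without pulling in NLTK, here is a deliberately tiny suffix stripper. This is NOT the Porter algorithm or a real lemmatizer; the suffix list is hand-picked for this one word family.

```python
def toy_stem(word):
    """Strip a few hand-picked suffixes so related words share a root."""
    for suffix in ("ists", "ist", "ics", "ical", "ic", "ies", "y"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 4:
            return word[: -len(suffix)]
    return word

words = ["economics", "economists", "economist", "economy", "economic", "economical"]
print({w: toy_stem(w) for w in words})  # every entry maps to 'econom'
```

In practice you would use NLTK's PorterStemmer or WordNetLemmatizer instead of writing your own.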
Sentence Tagging
Sentence tagging is like the part of speech exercises your grammar teacher made you do in high school. Here’s an illustration of that:
Sections Coming soon…
To be notified, sign up here: subscribe@uniqtech.co
- Information Retrieval Basics : Term Frequency Inverse Document Frequency TFIDF
Shameless self plug below, please support us :)
Like what you read so far? Join our $5/month membership to get in-depth Silicon Valley job intelligence, beginner friendly tutorials, training courses for a tech career in Silicon Valley. subscribe@uniqtech.co
Our members only blog includes searchable in-depth analysis of Silicon Valley job postings such as Product Manager, Machine Learning Engineer. Information on tech interviews, technical interviews for bootcamp graduates. Tips and tricks to pass phone interviews. Our tutorials aim to be fast and beginner friendly. Check out our Medium article and Youtube video on Softmax — a function frequently used in Deep Learning, Artificial Intelligence and Machine Learning.
NLP Use Cases
- Sentiment analysis of tweets, amazon reviews. Classifying whether a short text is positive or negative.
- Writing style analysis: authors’ favorite vocabulary choices, singers’ lyric styles. For example, style analysis identified J.K. Rowling as the author of a book even though she used a male pen name: passionate readers analyzed the texts and found parallels and similarities in style.
- Entity tagging: find organizations or people’s names in articles
- Text summarization: summarize main points of news articles
Getting Started with NLP Now!
You can use the Python nltk library to analyze texts. It’s a popular and powerful library. It includes lists of stop words in several languages.
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
clean_tokens = [token for token in tokens if token not in stop_words]
# important pattern for removing stop words iteratively
# source: Towards Data Science, Emma Grimaldi, How Machines Understand Our Language
Sklearn conveniently has a built-in text dataset for you to experiment with! These news articles can be classified into different topics, and Sklearn provides cleaned training data for this classification task.
Glossary
- SOS start of sentence
- EOS end of sentence
- padding usually 0
- word2index
- index2word
- word2count
Further Reading
- link to Sklearn documentation for CountVectorizer useful function to calculate term frequency
- link to a long list of Natural Language Processing NLP and Machine Learning papers
- link to Term Frequency Inverse Document Frequency blog post
- Towards Data Science Emma Grimaldi How Machines understand our language: an introduction to Natural Language processing
- Book — Natural Language Processing with PyTorch: Build Intelligent Language Applications Using Deep Learning
http://www.siliconvanity.com/2019/03/getting-started-with-natural-language.html
Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 0.19.0
- Component/s: benchmarks
- Labels: None
Description
The goal of this issue is to measure how the name-node performance depends on where the edits log is written to.
Three types of the journal storage should be evaluated:
- local hard drive;
- remote drive mounted via nfs;
- nfs filer.
Issue Links
- is related to HADOOP-4029, "NameNode should report status and performance for each replica of image and log" (Closed)
Activity
I benchmarked three operations, create, rename, and delete, using NNThroughputBenchmark, which is a pure name-node benchmark. It calls the name-node methods directly, without using the RPC protocol, so the RPC overhead is not included in these results and should be measured separately, say with a synthetic load generator.
In a sense these benchmarks determine an upper bound for the HDFS operations, namely the maximum throughput the name-node can sustain under heavy load.
Each run starts with an empty file system and performs 1 million operations handled by 256 threads on the name-node. The output is the throughput, that is, the number of operations per second, calculated as 1,000,000/(tE-tB), where tB is when the first thread starts and tE is when all threads stop. The threads run in parallel.
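For illustration, the throughput formula above in a few lines (the timing numbers are made up, not taken from the benchmark runs):

```python
ops = 1_000_000   # operations performed per run
t_begin = 0.0     # tB: when the first thread starts (seconds)
t_end = 175.0     # tE: when all threads stop (hypothetical value)

# throughput in ops/sec, i.e. 1,000,000 / (tE - tB)
throughput = ops / (t_end - t_begin)
print(round(throughput))  # 5714
```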
Creates create empty files and do not close them. Renames change file names, but do not move them.
All test results are consistent except for one distortion in deletes on a remote drive, which is way out of the expected range. I don't know what causes it; one day the numbers were good, the next they were not.
Each test consists of 1,000,000 operations performed using 256 threads.
Result is in ops/sec.
Some conclusions:
- Local drive is faster than nfs, and
- nfs filer is faster than a remote drive;
- but the difference between nfs storage and local drives is very slim, only 2-3%.
- Using 4 local drives instead of 1 degrades the performance by only 9%, even though we write onto the drives sequentially (one after another).
It would be fair to say that there is some parallelism in writing, since the current code batches writes first and then syncs them at once in large chunks. So while the writes are sequential, the syncs are parallel.
- Opens (getBlockLocation()) are 22 times faster than creates,
- which means journaling is the real bottleneck for the name-node operations,
- and the lack of fine-grained locking in the namespace data-structures is not a problem so far. Otherwise, the throughputs for opens and other operations would be characterized by the same or at least close numbers.
- Further optimization of the name-node performance imo should be focused around efficient journaling.
Another set of statistical data, which characterizes the actual load on the name-node on some of our clusters. Unfortunately, the statistics for open is broken, and we do not collect stats for renames. So I can only present creates and deletes. Please contribute if somebody has more data.
- These numbers show that the actual peak load for creates is about 40 times lower than the name-node can handle, and 3 times lower for deletes. On average the picture is even more drastic.
The name-node processing capability is 400-500 times higher than the actual average load on it.
+1 overall. Here are the results of testing the latest attachment against trunk revision 6808.
NFS is a black art: when doing benchmarks such as these, implementation matters. Are we using NFSv2? v3? v4? UDP or TCP? What is the rwsize set to? What is the server side and what is the client side? What about TCP/IP tuning?
You probably know that better than I do. But the point of the benchmarking was to compare nfs vs local drives.
There was a suspicion that I/Os to NFS are substantially slower than to local drives, and it turned out to be pretty much the same.
It would be even better, of course, if we could fine-tune NFS.
I edited a typo in the formula explaining throughput:
– 1,000,000/(tE-tE)
+ 1,000,000/(tE-tB)
It looks as though it is not just the number of mutations that matters but something else as well (maybe the amount of data written to the edits log per mutation, CPU, or locking). That could explain the large disparity between the numbers for creates, renames, and deletes, even though each of these is a single mutation.
I just committed this.
Please feel free to comment, discuss the benchmark results.
Hi Konstantin,
Great analysis. I completely agree with you that coarse-grain locking for the namenode should not be impacting scalability of opens and creates. It is the disk sync times that really matter. BTW, when you ran the test on a single local drive, did you see the disk max out on IO? You said that 5710 creates occurred; was the limitation the CPU on the machine or disk IO contention?
Also, I had a patch, HADOOP-2330, that pre-allocated the transaction log. If I had seen this JIRA earlier, I would have requested that you repeat the exact same test on the same hardware with this patch. This patch pre-allocates the transaction log in large chunks.
Dhruba,
For creates we definitely have disk IO contention not the cpu.
About H2330, Hairong tested it with her new synthetic load generator - very encouraging results.
Integrated in Hadoop-trunk #581
I am attaching a patch that was used for the benchmarks.
It extends NNThroughputBenchmark with new operations rename and delete as well as introduces additional command line options,
which control what the benchmarks do with generated files before and after the execution.
https://issues.apache.org/jira/browse/HADOOP-3860
A class for drawing markers. More...
#include <qwt_plot_marker.h>
A class for drawing markers.
A marker can be a horizontal line, a vertical line, a symbol, a label or any combination of them, which can be drawn around a center point inside a bounding rectangle.
The setSymbol() member assigns a symbol to the marker. The symbol is drawn at the specified point.
With setLabel(), a label can be assigned to the marker. The setLabelAlignment() member specifies where the label is drawn. All the Align*-constants in Qt::AlignmentFlags (see Qt documentation) are valid. The interpretation of the alignment depends on the marker's line style. The alignment refers to the center point of the marker, which means, for example, that the label would be printed left above the center point if the alignment was set to Qt::AlignLeft | Qt::AlignTop.
Line styles.
Reimplemented from QwtPlotItem.
Draw the marker
Implements QwtPlotItem.
Align and draw the text label of the marker
Draw the lines marker
Reimplemented from QwtPlotItem.
Reimplemented from QwtPlotItem.
Set the alignment of the label.
In case of QwtPlotMarker::HLine the alignment is relative to the y position of the marker, but the horizontal flags correspond to the canvas rectangle. In case of QwtPlotMarker::VLine the alignment is relative to the x position of the marker, but the vertical flags correspond to the canvas rectangle.
In all other styles the alignment is relative to the marker's position.
Set the orientation of the label.
When orientation is Qt::Vertical the label is rotated by 90.0 degrees (from bottom to top).
Build and assign a line pen
In Qt5 the default pen width is 1.0 (0.0 in Qt4), which makes it non-cosmetic (see QPen::isCosmetic()). This method has been introduced to hide this incompatibility.
Set the line style.
Set the spacing.
When the label is not centered on the marker position, the spacing is the distance between the position and the label.
http://qwt.sourceforge.net/class_qwt_plot_marker.html
Name
Attributes
Synopsis
This interface represents a list of attributes of an XML element and includes information about the attribute names, types, and values. If the SAX parser has read a DTD or schema for the document, this list of attributes will include attributes that are not explicitly specified in the document but which have a default value specified in the DTD or schema.
The most commonly used method is getValue( ), which returns the value of a named attribute (there is also a version of this method that returns the value of a numbered attribute; it is discussed later). If the SAX parser is not processing namespaces, you can use the one-argument version of getValue( ). Otherwise, use the two-argument version to specify the URI that uniquely identifies the namespace, and the “local name” of the desired attribute within that namespace. The getType( ) methods are similar, except that they return the type of the named attribute, rather than its value. Note that getType( ) can only return useful information if the parser has read a DTD or schema for the document and knows the type of each attribute.
In XML documents the attributes of a tag can appear in any order. Attributes objects make no attempt to preserve the document source order of the attributes. Nevertheless, an Attributes object does impose an ordering on the attributes so that you can loop through them. getLength( ) returns the number of elements in the list. There are versions of getValue( ) and getType( ) that return the value and type of ...
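The same interface exists in Python's standard library (xml.sax mirrors the SAX API), so the loop described above can be sketched there; the element and attribute names below are made up for the example:

```python
import xml.sax

seen = []

class AttrCollector(xml.sax.ContentHandler):
    def startElement(self, name, attrs):
        # attrs is the SAX attribute list: getLength(), getNames(), and
        # getValue() behave as described above (source order not preserved).
        for qname in attrs.getNames():
            seen.append((name, qname, attrs.getValue(qname)))

xml.sax.parseString(b'<book title="Java in a Nutshell" edition="5"/>',
                    AttrCollector())
print(sorted(seen))
# [('book', 'edition', '5'), ('book', 'title', 'Java in a Nutshell')]
```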
https://www.oreilly.com/library/view/java-in-a/0596007736/re1033.html
Products and Services
Downloads
Store
Support
Education
Partners
About
Oracle Technology Network
Josh Bloch writes:
Google's Daniel Aioanei <###@###.###> points out that java.util.UUID, while supposedly immutable, is not thread-safe, due to errors in caching the variant and hashCode. Let's first take a look at the code for variant:
87 private transient int variant = -1;
277 public int variant() {
278 if (variant < 0) {
279 // This field is composed of a varying number of bits
280 if ((leastSigBits >>> 63) == 0) {
281 variant = 0;
282 } else if ((leastSigBits >>> 62) == 2) {
283 variant = 2;
284 } else {
285 variant = (int)(leastSigBits >>> 61);
286 }
287 }
288 return variant;
289 }
At first glance, this code appears to be equivalent to the hashCode caching code for java.lang.String, which is known to be correct. But there's one key difference: this code uses -1, rather than 0, as a marker for the uninitialized value. If a UUID is passed from one thread to another with no synchronization, it is possible for the latter thread to see the variant field in its truly uninitialized (0) state, which will cause it to falsely assume that the variant is 0.
Looking at the computation, it's way too cheap to justify lazy computation, and it can be made even cheaper. Here's a branch-free computation:
private transient final int variant =
(int) ((leastSigBits >>> (64 - (leastSigBits >>> 62)))
& (leastSigBits >> 63));
Unfortunately, serialization has serious deficiencies when it comes to transient final fields: you must set them in the readObject method, and setting a final field is, um, tricky. You can do it using setAccessble, but that can interact badly with some security managers. Alternatively, you can use sun.misc.Unsafe, as I believe is done for System.in and System.out. An alternative approach is not to cache the variant but to recalculate it every time. This is simple, and perhaps so cheap that it's worth considering:
public int variant() {
return (int) ((leastSigBits >>> (64 - (leastSigBits >>> 62)))
& (leastSigBits >> 63));
}
Caching for version(), timestamp(), clockSequence(), and node() are broken in similar manner, and similar comments apply.
On to hashCode:
107 private transient int hashCode = -1;
416 public int hashCode() {
417 if (hashCode == -1) {
418 hashCode = (int)((mostSigBits >> 32) ^
419 mostSigBits ^
420 (leastSigBits >> 32) ^
421 leastSigBits);
422 }
423 return hashCode;
424 }
This code is broken in two ways. The first is as explained above for variant. The second is that this code reads the hashCode field twice. There is no guarantee that the second value will be the same as the first. The first read could return 0 (totally uninitialized), and the second could return -1 ("uninitialized"), in which case the method would return -1 incorrectly. In fact, in the total absence of locking, the second read could return an "earlier" value: the first could return the properly computed value, and the second could (erroneously) return 0 or -1. Again, the computation is far too cheap to justify lazy initialization, so the obvious fixes are either to make the hashCode field constant and initialize it at UUID creation time or (if deserialization proves too painful) to simply recalculate it each time:
public int hashCode() {
return (int) ( (mostSigBits >> 32) ^ mostSigBits
^ (leastSigBits >> 32) ^ leastSigBits);
}
The current equals computation, while correct, is needlessly complex:
public boolean equals(Object obj) {
if (!(obj instanceof UUID))
return false;
if (((UUID)obj).variant() != this.variant())
return false;
UUID id = (UUID)obj;
return (mostSigBits == id.mostSigBits &&
leastSigBits == id.leastSigBits);
}
The check on variant() is extraneous, and highly unlikely to improve performance. Better is:
public boolean equals(Object obj) {
if (!(obj instanceof UUID))
return false;
UUID id = (UUID)obj;
return mostSigBits == id.mostSigBits &&
leastSigBits == id.leastSigBits;
}
Perhaps better still is:
public boolean equals(Object obj) {
if (!(obj instanceof UUID))
return false;
UUID id = (UUID)obj;
return mostSigBits == id.mostSigBits &
leastSigBits == id.leastSigBits;
}
The latter is branch-free, and the branch in the former is unpredictable. Admittedly the former is more idiomatic, but it may be time to change the idiom.
To summarize, the caching for version, variant, timestamp, sequence, node, and hashCode is broken. The two obvious choices to fix it are to make these fields final (set upon object creation), or to eliminate them entirely and recalculate the values each time they're requested. The former is likely to have marginally better time performance, and the latter better space performance. The former will require some effort due to deficiencies in the serialization mechanism. Finally, the equals method is unduly complex, and slower than necessary.
SUGGESTED FIX
webrev posted
EVALUATION
While it is true that uninitialized fields of a Java object can be observed in another thread when no synchronization is used, according to the Java Memory Model, in practice I have never seen this happen on platforms supported by Sun. Fields appear to be written before the reference holding the return value from the new operator in all threads.
This bug pattern is endemic in the JDK sources.
I agree that we should consider not caching the variant field.
http://bugs.java.com/view_bug.do?bug_id=6611830
How could I know the environment temperature all the time on such a hot day? I called my absent friend and was surprised to find that she had caught a cold because of the cold weather and had to monitor her body temperature with a thermometer. Concerned about my health, she advised me to pay attention to room temperature to stay healthy.
Thinking about her words, I decided to DIY a small office thermometer with a DS18B20 sensor, based on an Arduino UNO, to monitor the environment temperature.
Hardware in Need:
1. DS18B20 Temperature Sensor
DS18B20 is the most common temperature sensor on the market: small, precise, and convenient to connect. Once packaged, it is applicable to different situations. You can change the exterior according to the situation, such as plastic films, boilers, engine rooms, clean rooms, and even magazines.
2. Gravity: I2C 16x2 Arduino LCD with RGB Backlight Display
3. DFRduino UNO R3
DFRduino UNO R3, a simple microcontroller board fully compatible with Arduino UNO R3, replaces the 8U2 with an ATmega16U2 as the USB-to-serial converter. The conversion speed and memory space of the ATmega16U2 are the same as on the Arduino UNO R3. This main board adopts an ENIG (immersion gold) finish: high quality, delicate, and cost-effective.
4. Gravity: IO Expansion Shield for Arduino V7.1
Parts Diagram
Circuit Connection Diagram
Operating Result
When the room temperature is less than 25°C, the screen shows green. Isn't this a comfortable temperature?
When the room temperature is between 25°C and 30°C, the screen shows yellow. The color suggests rising temperature; you can turn on a fan now.
3D Assembly Drawing
3D Sketch Design
If you are interested in this project, you can download the 3D printing files at the end of the page. You can also design your own custom case.
As for programming, you can also add a time display function, so it becomes a combination of thermometer and clock. Your ideas will be appreciated.
Program:
#include <OneWire.h>
#include <Wire.h>
#include "DFRobot_RGBLCD.h"

int DS18S20_Pin = 2;          // DS18S20 signal pin on digital 2
DFRobot_RGBLCD lcd(16, 2);    // 16 characters and 2 lines of show

// Temperature chip i/o
OneWire ds(DS18S20_Pin);      // on digital pin 2

void setup(void) {
  Serial.begin(9600);
  lcd.init();
  lcd.setRGB(0, 255, 0);
  lcd.setCursor(1, 0);
  lcd.print("Tep: ");
}

void loop(void) {
  float temperature = getTemp();
  delay(1000);
  lcd.setCursor(5, 0);
  lcd.print(temperature);
  if (temperature < 25) {
    lcd.setRGB(0, 255, 0);
  } else if (temperature < 30) {
    lcd.setRGB(255, 215, 0);
  } else {
    lcd.setRGB(255, 0, 0);
  }
  lcd.setCursor(10, 0);
  lcd.write(0xdf);            // display °
  lcd.print('C');
  delay(100);
  ...Read more »
Just a note, but colds aren't caused by the temperature being cold (colds are a virus), although when it's cold people tend to spend more time indoors increasing the spread of the cold virus.
https://hackaday.io/project/26309-how-to-make-an-office-thermometer
Hello Gerhard,

Friday, March 21, 2003, 9:47:16 PM, you wrote:

GH> * Konstantin Knizhnik <knizhnik at garret.ru> [2003-03-21 21:31 +0300]:
>> Hello achrist,
>>
>> This is problem with definition DL_EXPORT macro in Python 2.2.2
[...]
GH> There is no problem. The problem might be in the Makefiles you use. So
GH> don't use Makefiles. Use distutils through the setup.py I created :-)

Maybe setup.py is a more convenient way of building Python modules, but I still want to build it using a makefile or just a .bat file.

GH> The only thing you'll likely have to change is the library to link
GH> against, cos MSVC doesn't have any libstdc++. It's called otherwise, but
GH> I don't recall how.

Why do I need to explicitly specify some library which is specific to the compiler and compilation model?

GH> The proper solution for setup.py on win32 is to distinguish the compiler
GH> used and then set the appropriate libraries. As that's probably not
GH> easily done, make it configurable in that the user has to edit setup.py
GH> manually, with clear instructions.

If we want a function to be exported from a DLL on Windows, we need to declare it with __declspec(dllexport). This doesn't depend on how we are building this library. So the main problem is not in my makefile, but in the lack of a correct macro in Python which can be used to define the module initialization function. DL_EXPORT in version 2.2 of Python is not correctly defined and it will work neither with my compile.bat nor with your setup.py (unless you explicitly define DL_EXPORT on the command line).

>> With 2.3 it works ok, but not with 2.2.2.
>> May be I just do not define some environment variable. But neither
>> USE_DL_EXPORT, USE_DL_IMPORT properly works:

GH> There is no need to set any defines.

Maybe there is no need for defines, but there is a need for __declspec(dllexport).

GH> #v+
GH> /* Compatibility macros
GH>  *
GH>  * From Python 2.2 to 2.3, the way to export the module init function
GH>  * has changed. These macros keep the code compatible to both ways.
GH>  */
GH> #if PY_VERSION_HEX >= 0x02030000
GH> #  define PySQLite_DECLARE_MODINIT_FUNC(name) PyMODINIT_FUNC name(void)
GH> #  define PySQLite_MODINIT_FUNC(name) PyMODINIT_FUNC name(void)
GH> #else
GH> #  define PySQLite_DECLARE_MODINIT_FUNC(name) void name(void)
GH> #  define PySQLite_MODINIT_FUNC(name) DL_EXPORT(void) name(void)
GH> #endif
GH> [...]

But the problem is not with DL_EXPORT at version 2.3, which may be deprecated but is correctly defined, but with the incorrect definition of DL_EXPORT/DL_IMPORT in Python 2.2 (at least in ActivePython 2.2.2). And I do not understand how these macros help to solve the problem. SQLite is not included in the Python distribution, so I could not check how it is built on Windows.

GH> PySQLite_DECLARE_MODINIT_FUNC(init_sqlite);
GH> [...]
GH> PySQLite_MODINIT_FUNC(init_sqlite)
GH> {
GH> [...]
GH> }
GH> #v-

GH> HTH,
GH> Gerhard

--
Best regards,
Konstantin
mailto:knizhnik at garret.ru
https://mail.python.org/pipermail/python-list/2003-March/223406.html
There's a fundamental difference between your own code and libraries of other people: you can change or extend your own code as you wish, but if you want to use someone else's libraries, you usually have to take them as they are.
A number of constructs have sprung up in programming languages to alleviate this problem. Ruby has modules, and Smalltalk lets packages add to each other's classes. These are very powerful, but also dangerous, in that you modify the behavior of a class for an entire application, some parts of which you might not know. C# 3.0 has static extension methods, which are more local, but also more restrictive in that you can only add methods, not fields, to a class, and you can't make a class implement new interfaces.
Scala's answer is implicit conversions and parameters. These can make existing libraries much more pleasant to deal with by letting you leave out tedious, obvious details that obscure the interesting parts of your code. Used tastefully, this results in code that is focused on the interesting, non-trivial parts of your program. This chapter shows you how implicits work, and presents some of the most common ways they are used.
Before delving into the details of implicit conversions, take a look at a typical example of their use. One of the central collection traits in Scala is RandomAccessSeq[T], which describes random access sequences over elements of type T. RandomAccessSeqs have most of the utility methods that you know from arrays or lists: take, drop, map, filter, exists, and mkString are just some examples. To make a new random access sequence, all you must do is extend trait RandomAccessSeq. You only need to define two methods that are abstract in the trait: length and apply. You then get implementations of all the other useful methods in the trait "for free."
So far so good. This works fine if you are about to define new classes, but what about existing ones? Maybe you'd like to also treat classes in other people's libraries as random access sequences, even if the designers of those libraries had not thought of making their classes extend RandomAccessSeq. For instance, a String in Java would make a fine RandomAccessSeq[Char], except that unfortunately Java's String class does not inherit from Scala's RandomAccessSeq trait.
In situations like this, implicits can help. To make a String appear to be a subtype of RandomAccessSeq, you can define an implicit conversion from String to an adapter class that actually is a subtype of RandomAccessSeq:
  implicit def stringWrapper(s: String) =
    new RandomAccessSeq[Char] {
      def length = s.length
      def apply(i: Int) = s.charAt(i)
    }

That's it.[1] The implicit conversion is just a normal method. The only thing that's special is the implicit modifier at the start. You can apply the conversion explicitly to transform Strings to RandomAccessSeqs:
  scala> stringWrapper("abc123") exists (_.isDigit)
  res0: Boolean = true

But you can also leave out the conversion and still get the same behavior:
  scala> "abc123" exists (_.isDigit)
  res1: Boolean = true

What goes on here under the covers is that the Scala compiler inserts the stringWrapper conversion for you. So in effect it rewrites the last expression above to the one before. But on the surface, it's as if Java's Strings had acquired all the useful methods of trait RandomAccessSeq.
This aspect of implicits is similar to extension methods in C#, which also allow you to add new methods to existing classes. However, implicits can be far more concise than extension methods. For instance, we only needed to define the length and apply methods in the stringWrapper conversion, and we got all other methods in RandomAccessSeq for free. With extension methods you'd need to define every one of these methods again. This duplication makes code harder to write, and, more importantly, harder to maintain. Imagine someone adds a new method to RandomAccessSeq sometime in the future. If all you have is extension methods, you'd have to chase down all RandomAccessSeq "copycats" one by one, and add the new method in each. If you forget one of the copycats, your system would become inconsistent. Talk about a maintenance nightmare! By contrast, with Scala's implicits, all conversions would pick up the newly added method automatically.
Another advantage of implicit conversions is that they support conversions into the target type, a type that's needed at some point in the code. For instance, suppose you write a method printWithSpaces, which prints all characters in a given random access sequence with spaces in between them:
  def printWithSpaces(seq: RandomAccessSeq[Char]) =
    seq mkString " "

Because Strings are implicitly convertible to RandomAccessSeqs, you can pass a string to printWithSpaces:

  scala> printWithSpaces("xyz")
  res2: String = x y z

The last expression is equivalent to the following one, where the conversion shows up explicitly:

  scala> printWithSpaces(stringWrapper("xyz"))
  res3: String = x y z
This section has shown you some of the power of implicit conversions, and how they let you "dress up" existing libraries. In the next sections you'll learn the rules that determine when implicit conversions are tried and how they are found.
Implicit definitions are those that the compiler is allowed to insert into a program in order to fix any of its type errors. For example, if x + y does not type check, then the compiler might change it to convert(x) + y, where convert is some available implicit conversion. If convert changes x into something that has a + method, then this change might fix a program so that it type checks and runs correctly. If convert really is just a simple conversion function, then leaving it out of the source code can be a clarification.
Implicit conversions are governed by the following general rules:
Marking Rule: Only definitions marked implicit are available. The implicit keyword is used to mark which declarations the compiler may use as implicits. You can use it to mark any variable, function, or object definition. Here's an example of an implicit function definition:[2]
  implicit def intToString(x: Int) = x.toString

The compiler will only change x + y to convert(x) + y if convert is marked as implicit. This way, you avoid the confusion that would result if the compiler picked random functions that happen to be in scope and inserted them as "conversions." The compiler will only select among the definitions you have explicitly marked as implicit.
Scope Rule: An inserted implicit conversion must be in scope as a single identifier, or be associated with the source or target type of the conversion. The Scala compiler will only consider implicit conversions that are in scope. To make an implicit conversion available, therefore, you must in some way bring it into scope. Moreover, with one exception, the implicit conversion must be in scope as a single identifier. The compiler will not insert a conversion of the form someVariable.convert. For example, it will not expand x + y to someVariable.convert(x) + y. If you want to make someVariable.convert available as an implicit, therefore, you would need to import it, which would make it available as a single identifier. Once imported, the compiler would be free to apply it as convert(x) + y. In fact, it is common for libraries to include a Preamble object including a number of useful implicit conversions. Code that uses the library can then do a single "import Preamble._" to access the library's implicit conversions.
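The Preamble convention mentioned above can be sketched concretely. The following is a minimal, self-contained example of the pattern; all the names (Celsius, Fahrenheit, celsiusToFahrenheit, printTemp) are invented for illustration and are not from the book or the standard library:

```scala
import scala.language.implicitConversions

// Two hypothetical library types.
class Celsius(val degrees: Double)
class Fahrenheit(val degrees: Double)

// The library bundles its implicit conversions in one object,
// so clients get them all with a single import.
object Preamble {
  implicit def celsiusToFahrenheit(c: Celsius): Fahrenheit =
    new Fahrenheit(c.degrees * 9.0 / 5.0 + 32)
}

def printTemp(f: Fahrenheit): String = f.degrees + " F"

// One import makes celsiusToFahrenheit available as a single
// identifier; the compiler can now insert it to fix type errors.
import Preamble._
val report = printTemp(new Celsius(100))
```

After the import, passing a Celsius where a Fahrenheit is expected type checks because the compiler rewrites the call to printTemp(celsiusToFahrenheit(new Celsius(100))).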
There's one exception to the "single identifier" rule. The compiler will also look for implicit definitions in the companion object of the source or expected target types of the conversion. For example, if you're attempting to pass a Dollar object to a method that takes a Euro, the source type is Dollar and the target type is Euro. You could, therefore, package an implicit conversion from Dollar to Euro in the companion object of either class, Dollar or Euro. Here's an example in which the implicit definition is placed in Dollar's companion object:
  object Dollar {
    implicit def dollarToEuro(x: Dollar): Euro = ...
  }
  class Dollar { ... }

In this case, the conversion dollarToEuro is said to be associated to the type Dollar. The compiler will find such an associated conversion every time it needs to convert from an instance of type Dollar. There's no need to import the conversion separately into your program.
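A filled-in version of this sketch can be run directly. The bodies below are invented for illustration (the 0.75 exchange rate is a made-up constant, and the amount fields are hypothetical); only the companion-object placement of the conversion is the point:

```scala
import scala.language.implicitConversions

class Euro(val amount: Double)

object Dollar {
  // Associated conversion: it lives in Dollar's companion object,
  // so the compiler finds it without any import.
  implicit def dollarToEuro(x: Dollar): Euro = new Euro(x.amount * 0.75)
}
class Dollar(val amount: Double)

def priceInEuros(e: Euro): Double = e.amount

// No import needed: the compiler checks the companion objects of the
// source type (Dollar) and the target type (Euro) of the conversion.
val price = priceInEuros(new Dollar(100))
```

Because the conversion is associated with Dollar, any code that passes a Dollar where a Euro is expected compiles, even in a file that never mentions the conversion by name.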
The Scope Rule helps with modular reasoning. When you read code in a file, the only things you need to consider from other files are those that are either imported or are explicitly referenced through a fully qualified name. This benefit is at least as important for implicits as for explicitly written code. If implicits took effect system-wide, then to understand a file you would have to know about every implicit introduced anywhere in the program!
Non-Ambiguity Rule: An implicit conversion is only inserted if there is no other possible conversion to insert. If the compiler has two options to fix x + y, say using either convert1(x) + y or convert2(x) + y, then it will report an error and refuse to choose between them. It would be possible to define some kind of "best match" rule that prefers some conversions over others. However, such choices lead to really obscure code. Imagine the compiler chooses convert2, but you are new to the file and are only aware of convert1—you could spend a lot of time thinking a different conversion had been applied!
In cases like this, one option is to remove one of the imported implicits so that the ambiguity is removed. If you prefer convert2, then remove the import of convert1. Alternatively, you can write your desired conversion explicitly: convert2(x) + y.
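The second option, writing the conversion explicitly, can be sketched as follows. The types and conversions here (Meters, feetToMeters, yardsToMeters, lengthOf) are invented for illustration:

```scala
import scala.language.implicitConversions

class Meters(val value: Double)

def lengthOf(m: Meters): Double = m.value

// Two conversions that both turn a Double into Meters.
implicit def feetToMeters(x: Double): Meters = new Meters(x * 0.3048)
implicit def yardsToMeters(x: Double): Meters = new Meters(x * 0.9144)

// lengthOf(10.0) would be rejected: the compiler refuses to choose
// between feetToMeters and yardsToMeters and reports an ambiguity.
// Writing the desired conversion explicitly resolves it:
val viaFeet = lengthOf(feetToMeters(10.0))
```

Alternatively, importing only one of the two conversions into the calling scope removes the ambiguity and lets the compiler insert that one implicitly.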
One-at-a-time Rule: Only one implicit is tried. The compiler will never rewrite x + y to convert1(convert2(x)) + y. Doing so would cause compile times to increase dramatically on erroneous code, and it would increase the difference between what the programmer writes and what the program actually does. For sanity's sake, the compiler does not insert further implicit conversions when it is already in the middle of trying another implicit. However, it's possible to circumvent this restriction by having implicits take implicit parameters, which will be described later in this chapter.
Explicits-First Rule: Whenever code type checks as it is
written, no implicits are attempted. The compiler will not change
code that already works. A corollary of this rule is that you can
always replace implicit identifiers by explicit ones, thus making the
code longer but with less apparent ambiguity. You can trade between
these choices on a case-by-case basis. Whenever you see code that
seems repetitive and verbose, implicit conversions can help you
decrease the tedium. Whenever code seems terse to the point of
obscurity, you can insert conversions explicitly. The amount of
implicits you leave the compiler to insert is ultimately a matter of
style.
Naming an implicit conversion. Implicit conversions can have arbitrary names. The name of an implicit conversion matters only in two situations: if you want to write it explicitly in a method application, and for determining which implicit conversions are available at any place in the program.
To illustrate the second point, say you have an object with two implicit conversions:
  object MyConversions {
    implicit def stringWrapper(s: String): RandomAccessSeq[Char] = ...
    implicit def intToString(x: Int): String = ...
  }

In your application, you want to make use of the stringWrapper conversion, but you don't want integers to be converted automatically to strings by means of the intToString conversion. You can achieve this by importing only one conversion, but not the other:
  import MyConversions.stringWrapper
  ... // code making use of stringWrapper

In this example, it was important that the implicit conversions had names, because only that way could you selectively import one and not the other.
Where implicits are tried. There are three places implicits are used in the language: conversions to an expected type, conversions of the receiver of a selection, and implicit parameters. Implicit conversions to an expected type let you use one type in a context where a different type is expected. For example, you might have a String and want to pass it to a method that requires a RandomAccessSeq[Char]. Conversions of the receiver let you adapt the receiver of a method call, i.e., the object on which a method is invoked, if the method is not applicable on the original type. An example is "abc".exists, which is converted to stringWrapper("abc").exists because the exists method is not available on Strings but is available on RandomAccessSeqs. Implicit parameters, on the other hand, are usually used to provide more information to the called function about what the caller wants. Implicit parameters are especially useful with generic functions, where the called function might otherwise know nothing at all about the type of one or more arguments. Each of the following three sections will discuss one of these three kinds of implicits.
Implicit conversion to an expected type is the first place the compiler will use implicits. The rule is simple. Whenever the compiler sees an X, but needs a Y, it will look for an implicit function that converts X to Y. For example, normally a double cannot be used as an integer, because it loses precision:
  scala> val i: Int = 3.5
  <console>:5: error: type mismatch;
   found   : Double(3.5)
   required: Int
         val i: Int = 3.5
                      ^
However, you can define an implicit conversion to smooth this over:
  scala> implicit def doubleToInt(x: Double) = x.toInt
  doubleToInt: (Double)Int

  scala> val i: Int = 3.5
  i: Int = 3

What happens here is that the compiler sees a Double, specifically 3.5, in a context where it requires an Int. So far, the compiler is looking at an ordinary type error. Before giving up, though, it searches for an implicit conversion from Double to Int. In this case, it finds one: doubleToInt, because doubleToInt is in scope as a single identifier. (Outside the interpreter, you might bring doubleToInt into scope via an import or possibly through inheritance.) The compiler then inserts a call to doubleToInt automatically. Behind the scenes, the code becomes:

  val i: Int = doubleToInt(3.5)

This is literally an implicit conversion. You did not explicitly ask for conversion. Instead, you marked doubleToInt as an available implicit conversion by bringing it into scope as a single identifier, and then the compiler automatically used it when it needed to convert from a Double to an Int.
Converting Doubles to Ints might raise some eyebrows, because it's a dubious idea to have something that causes a loss in precision happen invisibly. So this is not really a conversion we recommend. It makes much more sense to go the other way, from some more constrained type to a more general one. For instance, an Int can be converted without loss of precision to a Double, so an implicit conversion from Int to Double makes sense. In fact, that's exactly what happens. The scala.Predef object, which is implicitly imported into every Scala program, defines implicit conversions that convert "smaller" numeric types to "larger" ones. For instance, you will find in Predef the following conversion:
  implicit def int2double(x: Int): Double = x.toDouble

That's why in Scala Int values can be stored in variables of type Double. There's no special rule in the type system for this; it's just an implicit conversion that gets applied.[3]
Implicit conversions also apply to the receiver of a method call, the object on which the method is invoked. This kind of implicit conversion has two main uses. First, receiver conversions allow smoother integration of a new class into an existing class hierarchy. And second, they support writing domain-specific languages (DSLs) within the language.
To see how it works, suppose you write down obj.doIt, and obj does not have a member named doIt. The compiler will try to insert conversions before giving up. In this case, the conversion needs to apply to the receiver, obj. The compiler will act as if the expected "type" of obj were "has a member named doIt." This "has a doIt" type is not a normal Scala type, but it is there conceptually and is why the compiler will insert an implicit conversion in this case.
As mentioned previously, one major use of receiver conversions is allowing smoother integration of new with existing types. In particular, they allow you to enable client programmers to use instances of existing types as if they were instances of your new type. Take, for example, class Rational shown in Listing 6.5 here. Here's a snippet of that class again:
  class Rational(n: Int, d: Int) {
    ...
    def + (that: Rational): Rational = ...
    def + (that: Int): Rational = ...
  }
Class Rational has two overloaded variants of the + method, which take Rationals and Ints, respectively, as arguments. So you can either add two rational numbers or a rational number and an integer:
  scala> val oneHalf = new Rational(1, 2)
  oneHalf: Rational = 1/2

  scala> oneHalf + oneHalf
  res4: Rational = 1/1

  scala> oneHalf + 1
  res5: Rational = 3/2

What about an expression like 1 + oneHalf, however? This expression is tricky because the receiver, 1, does not have a suitable + method. So the following gives an error:

  scala> 1 + oneHalf
  <console>:6: error: overloaded method value + with alternatives
    (Double)Double <and> ... cannot be applied to (Rational)
         1 + oneHalf
           ^
To allow this kind of mixed arithmetic, you need to define an implicit conversion from Int to Rational:
  scala> implicit def intToRational(x: Int) = new Rational(x, 1)
  intToRational: (Int)Rational

With the conversion in place, converting the receiver does the trick:

  scala> 1 + oneHalf
  res6: Rational = 3/2

What happens behind the scenes here is that the Scala compiler first tries to type check the expression 1 + oneHalf as it is. This fails because Int has several + methods, but none that takes a Rational argument. Next, the compiler searches for an implicit conversion from Int to another type that has a + method which can be applied to a Rational. It finds your conversion and applies it, which yields:

  intToRational(1) + oneHalf

In this case, the compiler found the implicit conversion function because you entered its definition into the interpreter, which brought it into scope for the remainder of the interpreter session.
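The whole receiver-conversion scenario can be reproduced in a few lines. The Rational below fills in the bodies the text elides; unlike a production version, it does not normalize fractions by their greatest common divisor, so it is only a sketch:

```scala
import scala.language.implicitConversions

// Minimal Rational: overloaded + for Rational and Int arguments.
class Rational(val numer: Int, val denom: Int) {
  def + (that: Rational): Rational =
    new Rational(numer * that.denom + that.numer * denom,
                 denom * that.denom)
  def + (that: Int): Rational = this + new Rational(that, 1)
  override def toString = s"$numer/$denom"
}

implicit def intToRational(x: Int): Rational = new Rational(x, 1)

val oneHalf = new Rational(1, 2)
val a = oneHalf + 1   // resolved by the overloaded + (Int)
val b = 1 + oneHalf   // compiler rewrites to intToRational(1) + oneHalf
```

Both expressions yield 3/2, but they get there differently: the first uses overloading on the Rational receiver, the second a conversion of the Int receiver.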
The other major use of implicit conversions is to simulate adding new syntax. Recall that you can make a Map using syntax like this:
  Map(1 -> "one", 2 -> "two", 3 -> "three")

Have you wondered how the -> is supported? It's not syntax! Instead, -> is a method of the class ArrowAssoc, a class defined inside the standard Scala preamble (scala.Predef). The preamble also defines an implicit conversion from Any to ArrowAssoc. When you write 1 -> "one", the compiler inserts a conversion from 1 to ArrowAssoc so that the -> method can be found. Here are the relevant definitions:
  package scala
  object Predef {
    class ArrowAssoc[A](x: A) {
      def -> [B](y: B): Tuple2[A, B] = Tuple2(x, y)
    }
    implicit def any2ArrowAssoc[A](x: A): ArrowAssoc[A] =
      new ArrowAssoc(x)
    ...
  }

This "rich wrappers" pattern is common in libraries that provide syntax-like extensions to the language, so you should be ready to recognize the pattern when you see it. Whenever you see someone calling methods that appear not to exist in the receiver class, they are probably using implicits. Similarly, if you see a class named RichSomething, e.g., RichInt or RichString, that class is likely adding syntax-like methods to type Something.
You have already seen this rich wrappers pattern for the basic types described in Chapter 5. As you can now see, these rich wrappers apply more widely, often letting you get by with an internal DSL defined as a library where programmers in other languages might feel the need to develop an external DSL.
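Writing your own rich wrapper follows the same recipe as ArrowAssoc. Everything below is invented for illustration (RichWord and its methods are not part of any library); the point is only the wrapper-plus-conversion shape:

```scala
import scala.language.implicitConversions

// A made-up wrapper that adds "syntax" to String.
class RichWord(s: String) {
  def times(n: Int): String = s * n            // StringOps provides *
  def |> (suffix: String): String = s + suffix // arbitrary operator method
}

implicit def wordToRichWord(s: String): RichWord = new RichWord(s)

// Calls that appear not to exist on String are routed through the
// wrapper: the compiler inserts wordToRichWord(...) on the receiver.
val cheer = "ho" times 3
val path  = "usr" |> "/local"
```

From the caller's point of view, String seems to have grown two new methods; in reality each call goes through a short-lived RichWord instance.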
The remaining place the compiler inserts implicits is within argument lists. The compiler will sometimes replace someCall(a) with someCall(a)(b), or new SomeClass(a) with new SomeClass(a)(b), thereby adding a missing parameter list to complete a function call. It is the entire last curried parameter list that's supplied, not just the last parameter. For example, if someCall's missing last parameter list takes three parameters, the compiler might replace someCall(a) with someCall(a)(b, c, d). For this usage, not only must the inserted identifiers, such as b, c, and d in (b, c, d), be marked implicit where they are defined, but also the last parameter list in someCall's or someClass's definition must be marked implicit.
Here's a simple example. Suppose you have a class PreferredPrompt, which encapsulates a shell prompt string (such as, say "$ " or "> ") that is preferred by a user:
  class PreferredPrompt(val preference: String)

Also, suppose you have a Greeter object with a greet method, which takes two parameter lists. The first parameter list takes a string user name, and the second parameter list takes a PreferredPrompt:
  object Greeter {
    def greet(name: String)(implicit prompt: PreferredPrompt) {
      println("Welcome, "+ name +". The system is ready.")
      println(prompt.preference)
    }
  }

The last parameter list is marked implicit, which means it can be supplied implicitly. But you can still provide the prompt explicitly, like this:
  scala> val bobsPrompt = new PreferredPrompt("relax> ")
  bobsPrompt: PreferredPrompt = PreferredPrompt@ece6e1

  scala> Greeter.greet("Bob")(bobsPrompt)
  Welcome, Bob. The system is ready.
  relax>
To let the compiler supply the parameter implicitly, you must first define a variable of the expected type, which in this case is PreferredPrompt. You could do this, for example, in a preferences object:
  object JoesPrefs {
    implicit val prompt = new PreferredPrompt("Yes, master> ")
  }

Note that the val itself is marked implicit. If it wasn't, the compiler would not use it to supply the missing parameter list. It will also not use it if it isn't in scope as a single identifier. For example:
  scala> Greeter.greet("Joe")
  <console>:7: error: no implicit argument matching parameter
    type PreferredPrompt was found.
         Greeter.greet("Joe")
                 ^
Once you bring it into scope via an import, however, it will be used to supply the missing parameter list:
  scala> import JoesPrefs._
  import JoesPrefs._

  scala> Greeter.greet("Joe")
  Welcome, Joe. The system is ready.
  Yes, master>
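The single-parameter version of this example can be condensed into a checkable sketch. To make it easy to verify, this variant returns the greeting as a String instead of printing it; that change is ours, not the book's:

```scala
class PreferredPrompt(val preference: String)

object Greeter {
  // Returns the greeting rather than printing it (a testable variant).
  def greet(name: String)(implicit prompt: PreferredPrompt): String =
    "Welcome, " + name + ". The system is ready.\n" + prompt.preference
}

object JoesPrefs {
  implicit val prompt: PreferredPrompt =
    new PreferredPrompt("Yes, master> ")
}

// Bringing the implicit val into scope as a single identifier lets
// the compiler fill in the second parameter list.
import JoesPrefs._
val greeting = Greeter.greet("Joe")
```

Without the import, the same call fails to compile with "no implicit argument matching parameter type PreferredPrompt was found."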
Note that the implicit keyword applies to an entire parameter list, not to individual parameters. Listing 21.1 shows an example in which the last parameter list of Greeter's greet method, which is again marked implicit, has two parameters: prompt (of type PreferredPrompt) and drink (of type PreferredDrink):
  class PreferredPrompt(val preference: String)
  class PreferredDrink(val preference: String)

  object Greeter {
    def greet(name: String)(implicit prompt: PreferredPrompt,
        drink: PreferredDrink) {
      println("Welcome, "+ name +". The system is ready.")
      print("But while you work, ")
      println("why not enjoy a cup of "+ drink.preference +"?")
      println(prompt.preference)
    }
  }

  object JoesPrefs {
    implicit val prompt = new PreferredPrompt("Yes, master> ")
    implicit val drink = new PreferredDrink("tea")
  }
Singleton object JoesPrefs in Listing 21.1 declares two implicit vals, prompt of type PreferredPrompt and drink of type PreferredDrink. As before, however, so long as these are not in scope as single identifiers, they won't be used to fill in a missing parameter list to greet:
  scala> Greeter.greet("Joe")
  <console>:8: error: no implicit argument matching parameter
    type PreferredPrompt was found.
         Greeter.greet("Joe")
                 ^
You can bring both implicit vals into scope with an import:
  scala> import JoesPrefs._
  import JoesPrefs._

Because both prompt and drink are now in scope as single identifiers, you can use them to supply the last parameter list explicitly, like this:
  scala> Greeter.greet("Joe")(prompt, drink)
  Welcome, Joe. The system is ready.
  But while you work, why not enjoy a cup of tea?
  Yes, master>

And because all the rules for implicit parameters are now met, you can alternatively let the Scala compiler supply prompt and drink for you by leaving off the last parameter list:
  scala> Greeter.greet("Joe")
  Welcome, Joe. The system is ready.
  But while you work, why not enjoy a cup of tea?
  Yes, master>
One thing to note about the previous examples is that we didn't use String as the type of prompt or drink, even though ultimately it was a String that each of them provided through their preference fields. Because the compiler selects implicit parameters by matching types of parameters against types of values in scope, implicit parameters usually have "rare" or "special" enough types that accidental matches are unlikely. For example, the types PreferredPrompt and PreferredDrink in Listing 21.1 were defined solely to serve as implicit parameter types. As a result, it is unlikely that implicit variables of these types will be in scope if they aren't intended to be used as implicit parameters to Greeter.greet.
Another thing to know about implicit parameters is that they are perhaps most often used to provide information about a type mentioned explicitly in an earlier parameter list, similar to the type classes of Haskell. As an example, consider the maxListUpBound function shown in Listing 21.2, which returns the maximum element of the passed list:
  def maxListUpBound[T <: Ordered[T]](elements: List[T]): T =
    elements match {
      case List() =>
        throw new IllegalArgumentException("empty list!")
      case List(x) => x
      case x :: rest =>
        val maxRest = maxListUpBound(rest)
        if (x > maxRest) x
        else maxRest
    }
The signature of maxListUpBound is similar to that of orderedMergeSort, shown in Listing 19.12 here: it takes a List[T] as its argument, and specifies via an upper bound that T must be a subtype of Ordered[T]. As mentioned at the end of Section 19.8, one weakness with this approach is that you can't use the function with lists whose element type isn't already a subtype of Ordered. For example, you couldn't use the maxListUpBound function to find the maximum of a list of integers, because class Int is not a subtype of Ordered[Int].
Another, more general way to organize maxListUpBound would be to require a separate, second argument, in addition to the List[T] argument: a function that converts a T to an Ordered[T]. This approach is shown in Listing 21.3. In this example, the second argument, orderer, is placed in a separate argument list and marked implicit.
  def maxListImpParm[T](elements: List[T])
      (implicit orderer: T => Ordered[T]): T =
    elements match {
      case List() =>
        throw new IllegalArgumentException("empty list!")
      case List(x) => x
      case x :: rest =>
        val maxRest = maxListImpParm(rest)(orderer)
        if (orderer(x) > maxRest) x
        else maxRest
    }
The orderer parameter in this example is used to describe the ordering of Ts. In the body of maxListImpParm, this ordering is used in two places: a recursive call to maxListImpParm, and an if expression that checks whether the head of the list is larger than the maximum element of the rest of the list.
The maxListImpParm function, shown in Listing 21.3, is an example of an implicit parameter used to provide more information about a type mentioned explicitly in an earlier parameter list. To be specific, the implicit parameter orderer, of type T => Ordered[T], provides more information about type T—in this case, how to order Ts. Type T is mentioned in List[T], the type of parameter elements, which appears in the earlier parameter list. Because elements must always be provided explicitly in any invocation of maxListImpParm, the compiler will know T at compile time, and can therefore determine whether an implicit definition of type T => Ordered[T] is in scope. If so, it can pass in the second parameter list, orderer, implicitly.
This pattern is so common that the standard Scala library provides implicit "orderer" methods for many common types. You could therefore use this maxListImpParm method with a variety of types:
  scala> maxListImpParm(List(1,5,10,3))
  res10: Int = 10

  scala> maxListImpParm(List(1.5, 5.2, 10.7, 3.14159))
  res11: Double = 10.7

  scala> maxListImpParm(List("one", "two", "three"))
  res12: java.lang.String = two

In the first case, the compiler inserted an orderer function for Ints; in the second case, for Doubles; in the third case, for Strings.
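A self-contained version of the pattern can be run without relying on the library-provided orderers. The Version type and its versionOrdered conversion below are invented for illustration, standing in for whatever "orderer" the standard library would supply:

```scala
import scala.language.implicitConversions

def maxListImpParm[T](elements: List[T])
    (implicit orderer: T => Ordered[T]): T =
  elements match {
    case List() => throw new IllegalArgumentException("empty list!")
    case List(x) => x
    case x :: rest =>
      val maxRest = maxListImpParm(rest)(orderer)
      if (orderer(x) > maxRest) x else maxRest
  }

// A small custom type plus its orderer conversion.
case class Version(n: Int)
implicit def versionOrdered(v: Version): Ordered[Version] =
  new Ordered[Version] {
    def compare(that: Version): Int = v.n - that.n
  }

// The compiler eta-expands versionOrdered to fill the implicit
// Version => Ordered[Version] parameter.
val newest = maxListImpParm(List(Version(1), Version(5), Version(3)))
```

Because Version is not a subtype of Ordered[Version], the upper-bound version maxListUpBound could not handle this list at all; the implicit-parameter version needs only a conversion in scope.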
A style rule for implicit parameters. As a style rule, it is best to use a custom named type in the types of implicit parameters. For example, the types of prompt and drink in the previous example were not String, but PreferredPrompt and PreferredDrink, respectively. As a counterexample, consider that the maxListImpParm function could just as well have been written with the following type signature:
  def maxListPoorStyle[T](elements: List[T])
      (implicit orderer: (T, T) => Boolean): T

To use this version of the function, though, the caller would have to supply an orderer parameter of type (T, T) => Boolean. This is a fairly generic type that includes any function from two Ts to a Boolean. It does not indicate anything at all about what the type is for; it could be an equality test, a less-than test, a greater-than test, or something else entirely.
The actual code for maxListImpParm, given in Listing 21.3, shows better style. It uses an orderer parameter of type T => Ordered[T]. The word Ordered in this type indicates exactly what the implicit parameter is used for: it is for ordering elements of T. Because this orderer type is more explicit, it becomes no trouble to add implicit conversions for this type in the standard library. To contrast, imagine the chaos that would ensue if you added an implicit of type (T, T) => Boolean in the standard library, and the compiler started sprinkling it around in people's code. You would end up with code that compiles and runs, but that does fairly arbitrary tests against pairs of items!
Thus the style rule: use at least one role-determining name within the type of an implicit parameter.
The previous example had an opportunity to use an implicit but did not. Note that when you use implicit on a parameter, then not only will the compiler try to supply that parameter with an implicit value, but the compiler will also use that parameter as an available implicit in the body of the method! Thus, both uses of orderer within the body of the method can be left out.
  def maxList[T](elements: List[T])
      (implicit orderer: T => Ordered[T]): T =
    elements match {
      case List() =>
        throw new IllegalArgumentException("empty list!")
      case List(x) => x
      case x :: rest =>
        val maxRest = maxList(rest)  // (orderer) is implicit
        if (x > maxRest) x           // orderer(x) is implicit
        else maxRest
    }
When the compiler examines the code in Listing 21.4, it will see that the types do not match up. For example, x of type T does not have a > method, and so x > maxRest does not work. The compiler will not immediately stop, however. It will first look for implicit conversions to repair the code. In this case, it will notice that orderer is available, so it can convert the code to orderer(x) > maxRest. Likewise for the expression maxList(rest), which can be converted to maxList(rest)(orderer). After these two insertions of implicits, the method fully type checks.

Look closely at maxList. There is not a single mention of the orderer parameter in the text of the method. All uses of orderer are implicit. Surprisingly, this coding pattern is actually fairly common. The implicit parameter is used only for conversions, and so it can itself be used implicitly.
Now, because the parameter name is never used explicitly, the name could have been anything. For example, maxList would behave identically if you left its body alone but changed the parameter name:
  def maxList[T](elements: List[T])
      (implicit converter: T => Ordered[T]): T =
    // same body...

For that matter, it could just as well be:

  def maxList[T](elements: List[T])
      (implicit iceCream: T => Ordered[T]): T =
    // same body...

Because this pattern is common, Scala lets you leave out the name of this parameter and shorten the method header by using a view bound. Using a view bound, you would write the signature of maxList as shown in Listing 21.5.
  def maxList[T <% Ordered[T]](elements: List[T]): T =
    elements match {
      case List() =>
        throw new IllegalArgumentException("empty list!")
      case List(x) => x
      case x :: rest =>
        val maxRest = maxList(rest)  // (orderer) is implicit
        if (x > maxRest) x           // orderer(x) is implicit
        else maxRest
    }
You can think of "T <% Ordered[T]" as saying, "I can use any T, so long as T can be treated as an Ordered[T]." This is different from saying that T is an Ordered[T], which is what an upper bound, "T <: Ordered[T]", would say. For example, even though class Int is not a subtype of Ordered[Int], you could still pass a List[Int] to maxList so long as an implicit conversion from Int to Ordered[Int] is available. Moreover, if type T happens to already be an Ordered[T], you can still pass a List[T] to maxList. The compiler will use an implicit identity function, declared in Predef:
  implicit def identity[A](x: A): A = x

In this case, the conversion is a no-op; it simply returns the object it is given.
The maxListUpBound function, of Listing 21.2, specifies that T is an Ordered[T] with its upper bound, T <: Ordered[T]. By contrast, the maxList function, of Listing 21.5, specifies that T can be treated as an Ordered[T] with its view bound, T <% Ordered[T]. If you compare the code of maxListUpBound with that of maxList, you'll find that the only non-cosmetic difference between the two is that the upper bound symbol, <:, is changed to a view bound symbol, <%. But maxList of Listing 21.5 can work with many more types.
  object Mocha extends Application {

    class PreferredDrink(val preference: String)

    implicit val pref = new PreferredDrink("mocha")

    def enjoy(name: String)(implicit drink: PreferredDrink) {
      print("Welcome, "+ name)
      print(". Enjoy a ")
      print(drink.preference)
      println("!")
    }

    enjoy("reader")
  }
  $ scalac -Xprint:typer mocha.scala
  [[syntax trees at end of typer]] // Scala source: mocha.scala
  package <empty> {
    final object Mocha extends java.lang.Object with Application
        with ScalaObject {

      // ...

      private[this] val pref: Mocha.PreferredDrink =
        new Mocha.this.PreferredDrink("mocha");
      implicit <stable> <accessor> def pref: Mocha.PreferredDrink =
        Mocha.this.pref;
      def enjoy(name: String)
          (implicit drink: Mocha.PreferredDrink): Unit = {
        scala.this.Predef.print("Welcome, ".+(name));
        scala.this.Predef.print(". Enjoy a ");
        scala.this.Predef.print(drink.preference);
        scala.this.Predef.println("!")
      };
      Mocha.this.enjoy("reader")(Mocha.this.pref)
    }
  }
Implicits are an extremely powerful feature in Scala, but one which is sometimes difficult to get right and to debug. This section contains a few tips for debugging implicits.
Sometimes you might wonder why the compiler did not find an implicit conversion that you think should apply. In that case it helps to write the conversion out explicitly. If that also gives an error message, you then know why the compiler could not apply your implicit. For instance, assume that you mistakenly took stringWrapper to be a conversion from Strings to Lists, instead of RandomAccessSeqs. You would wonder why the following does not work:
scala> val chars: List[Char] = "xyz"
<console>:12: error: type mismatch;
 found   : java.lang.String("xyz")
 required: List[Char]
       val chars: List[Char] = "xyz"
                               ^
In that case it helps to write the stringWrapper conversion explicitly, to find out what went wrong:
scala> val chars: List[Char] = stringWrapper("xyz")
<console>:12: error: type mismatch;
 found   : java.lang.Object with RandomAccessSeq[Char]
 required: List[Char]
       val chars: List[Char] = stringWrapper("xyz")
                               ^
With this, you have found the cause of the error: stringWrapper has the wrong return type. On the other hand, it's also possible that inserting the conversion explicitly will make the error go away. In that case you know that one of the other rules (such as the Scope Rule) was preventing the implicit from being applied.
When you are debugging a program, it can sometimes help to see what implicit conversions the compiler is inserting. The -Xprint:typer option to the compiler is useful for this. If you run scalac with this option, then the compiler will show you what your code looks like after all implicit conversions have been added by the type checker. An example is shown in Listing 21.6 and Listing 21.7. If you look at the last statement in each of these listings, you'll see that the second parameter list to enjoy, which was left off in the code in Listing 21.6:
enjoy("reader")

was inserted by the compiler, as shown in Listing 21.7:
Mocha.this.enjoy("reader")(Mocha.this.pref)
If you are brave, try scala -Xprint:typer to get an interactive shell that prints out the post-typing source code it uses internally. If you do so, be prepared to see an enormous amount of boilerplate surrounding the meat of your code.
Implicits are a powerful, code-condensing feature of Scala. This chapter has shown you Scala's rules about implicits, and it has shown you several common programming situations where you can profit from using implicits.
As a word of warning, implicits can make code confusing if they are used too frequently. Thus, before adding a new implicit conversion, first ask whether you can achieve a similar effect through other means, such as inheritance, mixin composition, or method overloading. If all of these fail, however, and you feel like a lot of your code is still tedious and redundant, then implicits might just be able to help you out.
[1] In fact, the Predef object already defines a stringWrapper conversion with similar functionality, so in practice you can use this conversion instead of defining your own.
[2] Variables and singleton objects marked implicit can be used as implicit parameters. This use case will be described later in this chapter.
[3] The Scala compiler backend will treat the conversion specially, however, translating it to a special "i2d" bytecode. So the compiled image is the same as in Java.
Source: http://www.artima.com/pins1ed/implicit-conversions-and-parametersP.html
Optically pumped magnetometer (OPM) data
In this dataset, electrical median nerve stimulation was delivered to the left wrist of the subject. Somatosensory evoked fields were measured using nine QuSpin SERF OPMs placed over the right-hand side somatomotor area. Here we demonstrate how to localize these custom OPM data in MNE.
import os.path as op

import numpy as np

import mne

data_path = mne.datasets.opm.data_path()
subject = 'OPM_sample'

subjects_dir = op.join(data_path, 'subjects')
raw_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_SEF_raw.fif')
bem_fname = op.join(subjects_dir, subject, 'bem',
                    subject + '-5120-5120-5120-bem-sol.fif')
fwd_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_sample-fwd.fif')
coil_def_fname = op.join(data_path, 'MEG', 'OPM', 'coil_def.dat')
Prepare data for localization
First we filter and epoch the data:
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(None, 90, h_trans_bandwidth=10.)
raw.notch_filter(50., notch_widths=1)

# Set epoch rejection threshold a bit larger than for SQUIDs
reject = dict(mag=2e-10)
tmin, tmax = -0.5, 1

# Find median nerve stimulator trigger
event_id = dict(Median=257)
events = mne.find_events(raw, stim_channel='STI101', mask=257,
                         mask_type='and')
picks = mne.pick_types(raw.info, meg=True, eeg=False)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, reject=reject,
                    picks=picks, proj=False, decim=4)
evoked = epochs.average()
evoked.plot()
cov = mne.compute_covariance(epochs, tmax=0.)
Out:
Opening raw data file /home/circleci/mne_data/MNE-OPM-data/MEG/OPM/OPM_SEF_raw.fif... Isotrak not found Range : 0 ... 700999 = 0.000 ... 700.999 secs Ready. Current compensation grade : 0 Reading 0 ... 700999 = 0.000 ... 700.999 secs... Filtering raw data in 1 contiguous segment Setting up low-pass filter at 90 Hz FIR filter parameters --------------------- Designing a one-pass, zero-phase, non-causal lowpass filter: - Windowed time-domain design (firwin) method - Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation - Upper passband edge: 90.00 Hz - Upper transition bandwidth: 10.00 Hz (-6 dB cutoff frequency: 95.00 Hz) - Filter length: 331 samples (0.331 sec) Setting up band-stop filter from 49 - 51 Hz FIR filter parameters --------------------- Designing a one-pass, zero-phase, non-causal bandstop filter: - Windowed time-domain design (firwin) method - Hamming window with 0.0194 passband ripple and 53 dB stopband attenuation - Lower passband edge: 49.00 - Lower transition bandwidth: 0.50 Hz (-6 dB cutoff frequency: 48.75 Hz) - Upper passband edge: 51.00 Hz - Upper transition bandwidth: 0.50 Hz (-6 dB cutoff frequency: 51.25 Hz) - Filter length: 6601 samples (6.601 sec) Trigger channel has a non-zero initial value of 256 (consider using initial_event=True to detect this event) 201 events found Event IDs: [257] 201 matching events found Applying baseline correction (mode: mean) Not setting metadata Loading data for 201 events and 1501 original time points ... 0 bad epochs dropped Computing data rank from raw with rank=None Using tolerance 3.3e-10 (2.2e-16 eps * 9 dim * 1.7e+05 max singular value) Estimated rank (mag): 9 MAG: rank 9 computed from 9 data channels with 0 projectors Reducing data rank from 9 -> 9 Estimating covariance using EMPIRICAL Done. Number of samples used : 25326 [done]
Examine our coordinate alignment for source localization and compute a forward operator:
Note
The Head<->MRI transform is an identity matrix, as the co-registration method used equates the two coordinate systems. This mis-defines the head coordinate system (which should be based on the LPA, Nasion, and RPA) but should be fine for these analyses.
bem = mne.read_bem_solution(bem_fname)
trans = None

# To compute the forward solution, we must provide our temporary/custom
# coil definitions, which can be done as::
#
# with mne.use_coil_def(coil_def_fname):
#     fwd = mne.make_forward_solution(
#         raw.info, trans, src, bem, eeg=False, mindist=5.0,
#         n_jobs=1, verbose=True)

fwd = mne.read_forward_solution(fwd_fname)

with mne.use_coil_def(coil_def_fname):
    fig = mne.viz.plot_alignment(raw.info, trans, subject, subjects_dir,
                                 ('head', 'pial'), bem=bem)

mne.viz.set_3d_view(figure=fig, azimuth=45, elevation=60, distance=0.4,
                    focalpoint=(0.02, 0, 0.04))
Out:
Loading surfaces... Three-layer model surfaces loaded. Loading the solution matrix... Loaded linear_collocation BEM solution from /home/circleci/mne_data/MNE-OPM-data/subjects/OPM_sample/bem/OPM_sample-5120-5120-5120-bem-sol.fif Reading forward solution from /home/circleci/mne_data/MNE-OPM-data/MEG/OPM/OPM_sample-fwd.fif... Reading a source space... Computing patch statistics... Patch information added... Distance information added... [done] Reading a source space... Computing patch statistics... Patch information added... Distance information added... [done] 2 source spaces read Desired named matrix (kind = 3523) not available Read MEG forward solution (8196 sources, 9 channels, free orientations) Source spaces transformed to the forward solution coordinate frame Getting helmet for system unknown (derived from 9 MEG channel locations)
Perform dipole fitting
# Fit dipoles on a subset of time points
with mne.use_coil_def(coil_def_fname):
    dip_opm, _ = mne.fit_dipole(evoked.copy().crop(0.015, 0.080),
                                cov, bem, trans, verbose=True)
idx = np.argmax(dip_opm.gof)
print('Best dipole at t=%0.1f ms with %0.1f%% GOF'
      % (1000 * dip_opm.times[idx], dip_opm.gof[idx]))

# Plot N20m dipole as an example
dip_opm.plot_locations(trans, subject, subjects_dir,
                       mode='orthoview', idx=idx)
Out:
BEM : <ConductorModel | BEM (3 layers)> MRI transform : identity Head origin : 1.3 -15.5 36.7 mm rad = 77.9 mm. Guess grid : 20.0 mm Guess mindist : 5.0 mm Guess exclude : 20.0 mm Using standard MEG coil definitions. Coordinate transformation: MRI (surface RAS) -> head 1.000000 0.000000 0.000000 0.00 mm 0.000000 1.000000 0.000000 0.00 mm 0.000000 0.000000 1.000000 0.00 mm 0.000000 0.000000 0.000000 1.00 0 bad channels total Read 9 MEG channels from info 2 coil definitions read 99 coil definitions read MEG coil definitions created in head coordinates. Decomposing the sensor noise covariance matrix...) Created the whitener using a noise covariance matrix with rank 9 (0 small eigenvalues omitted) ---- Computing the forward solution for the guesses... Guess surface (inner skull) is in MRI (surface RAS) coordinates Filtering (grid = 20 mm)... Surface CM = ( 1.5 -15.0 35.4) mm Surface fits inside a sphere with radius 102.1 mm Surface extent: x = -73.3 ... 77.3 mm y = -100.7 ... 86.4 mm z = -42.9 ... 108.2 mm Grid extent: x = -80.0 ... 80.0 mm y = -120.0 ... 100.0 mm z = -60.0 ... 120.0 mm 1080 sources before omitting any. 543 sources after omitting infeasible sources not within 20.0 - 102.1 mm. Source spaces are in MRI coordinates. Checking that the sources are inside the surface and at least 5.0 mm away (will take a few...) Skipping interior check for 70 sources that fit inside a sphere of radius 51.5 mm Skipping solid angle check for 284 points using Qhull 299 source space points omitted because they are outside the inner skull surface. 30 source space points omitted because of the 5.0-mm distance limit. 214 sources remaining after excluding the sources outside the surface and less than 5.0 mm inside. Go through all guess source locations... 
[done 214 sources] ---- Fitted : 16.0 ms, distance to inner skull : 5.1021 mm ---- Fitted : 20.0 ms, distance to inner skull : 11.9786 mm ---- Fitted : 24.0 ms, distance to inner skull : 7.1146 mm ---- Fitted : 28.0 ms, distance to inner skull : 8.6466 mm ---- Fitted : 32.0 ms, distance to inner skull : 14.8160 mm ---- Fitted : 36.0 ms, distance to inner skull : 9.4837 mm ---- Fitted : 40.0 ms, distance to inner skull : 9.1578 mm ---- Fitted : 44.0 ms, distance to inner skull : 12.7123 mm ---- Fitted : 48.0 ms, distance to inner skull : 14.9163 mm ---- Fitted : 52.0 ms, distance to inner skull : 15.5801 mm ---- Fitted : 56.0 ms, distance to inner skull : 16.9325 mm ---- Fitted : 60.0 ms, distance to inner skull : 18.7919 mm ---- Fitted : 64.0 ms, distance to inner skull : 18.5866 mm ---- Fitted : 68.0 ms, distance to inner skull : 16.1213 mm ---- Fitted : 72.0 ms, distance to inner skull : 13.0160 mm ---- Fitted : 76.0 ms, distance to inner skull : 8.4939 mm ---- Fitted : 80.0 ms, distance to inner skull : 5.5176 mm No projector specified for this dataset. Please consider the method self.add_proj. 17 time points fitted Best dipole at t=52.0 ms with 99.8% GOF
Perform minimum-norm localization
Due to the small number of sensors, there will be some leakage of activity to areas with low/no sensitivity. Constraining the source space to areas we are sensitive to might be a good idea.
inverse_operator = mne.minimum_norm.make_inverse_operator(
    evoked.info, fwd, cov)

method = "MNE"
snr = 3.
lambda2 = 1. / snr ** 2
stc = mne.minimum_norm.apply_inverse(
    evoked, inverse_operator, lambda2, method=method, pick_ori=None,
    verbose=True)

# Plot source estimate at time of best dipole fit
brain = stc.plot(hemi='rh', views='lat', subjects_dir=subjects_dir,
                 initial_time=dip_opm.times[idx],
                 clim=dict(kind='percent', lims=[99, 99.9, 99.99]))
Out:
Converting forward solution to surface orientation Average patch normals will be employed in the rotation to the local surface coordinates.... Converting to surface-based source orientations... [done] Computing inverse operator with 9 channels. 9 out of 9 channels remain after picking Selected 9 channels Creating the depth weighting matrix... 9 magnetometer or axial gradiometer channels limit = 6597/8196 = 10.009502 scale = 5.90306e-11 exp = 0.8 Applying loose dipole orientations. Loose value of 0.2. Whitening the forward solution.) Creating the source covariance matrix Adjusting source covariance matrix. Computing SVD of whitened and weighted lead field matrix. largest singular value = 1.58618 scaling factor to adjust the trace = 9.70367e+17 Preparing the inverse operator for use... Scaled noise and source covariance from nave = 1 to nave = 201 Created the regularized inverter The projection vectors do not apply to these channels. Created the whitener using a noise covariance matrix with rank 9 (0 small eigenvalues omitted) Applying inverse operator to "Median"... Picked 9 channels from the data Computing inverse... Eigenleads need to be weighted ... Computing residual... Explained 95.3% variance Combining the current components... [done] Using control points [6.42652723e-11 1.21195124e-10 2.13789206e-10]
Total running time of the script: ( 0 minutes 17.760 seconds)
Estimated memory usage: 1181 MB
Gallery generated by Sphinx-Gallery
Source: https://mne.tools/stable/auto_examples/datasets/plot_opm_data.html
Secure DNS Client over HTTPS (DoH)
Starting with Windows Server 2022, the DNS client supports DNS-over-HTTPS (DoH). When DoH is enabled, DNS queries between Windows Server’s DNS client and the DNS server pass across a secure HTTPS connection rather than in plain text. By passing the DNS query across an encrypted connection, it's protected from interception by untrusted third parties.
Configure the DNS client to support DoH
You can only configure the Windows Server client to use DoH if the primary or secondary DNS server selected for the network interface is on the list of known DoH servers. You can configure the DNS client to require DoH, to prefer DoH, or to use only traditional plain-text DNS queries. To configure the DNS client to support DoH on Windows Server with Desktop Experience, follow these steps:
From the Windows Settings control panel, select Network & Internet.
On the Network & Internet page, select Ethernet.
On the Ethernet screen, select the network interface that you want to configure for DoH.
On the Network screen, scroll down to DNS settings and select the Edit button.
On the Edit DNS settings screen, select Manual from the automatic or manual IP settings dropdown. This setting allows you to configure the Preferred DNS and Alternate DNS servers. If the addresses of these servers are present in the list of known DoH servers, the Preferred DNS encryption dropdown will be enabled. You can choose between the following settings to set the preferred DNS encryption:
Encrypted only (DNS over HTTPS). When this setting is chosen, all DNS query traffic will pass across HTTPS. This setting provides the best protection for DNS query traffic. However, it also means DNS resolution won't occur if the target DNS server is unable to support DoH queries.
Encrypted preferred, unencrypted allowed. When this setting is chosen, the DNS client will attempt to use DoH and then fall back to unencrypted DNS queries if that isn't possible. This setting provides the best compatibility for DoH capable DNS servers, but you won't be provided with any notification if DNS queries are switched from DoH to plain text.
Unencrypted only. All DNS query traffic to the specified DNS server is unencrypted. This setting configures the DNS client to use traditional plain text DNS queries.
Select Save to apply the DoH settings to the DNS client.
If you're configuring the DNS server address for a client using the PowerShell Set-DnsClientServerAddress cmdlet, the DoH setting will depend on whether the server, and its fallback setting, appear in the known DoH servers table. At present you can't configure DoH settings for the DNS client on Windows Server 2022 using Windows Admin Center or sconfig.cmd.
Configuring DoH through Group Policy
Windows Server 2022 local and domain Group Policy settings include the Configure DNS over HTTPS (DoH) name resolution policy. You can use it to configure the DNS client to use DoH. This policy is found in the Computer Configuration\Policies\Administrative Templates\Network\DNS Client node. When enabled, this policy can be configured with the following settings:
Allow DoH. Queries will be performed using DoH if the specified DNS servers support the protocol. If the servers don't support DoH, non-encrypted queries will be issued.
Prohibit DoH. Will prevent use of DoH with DNS client queries.
Require DoH. Will require that queries are performed using DoH. If configured DNS servers don't support DoH, name resolution will fail.
Don't enable the Require DoH option for domain-joined computers: Active Directory Domain Services relies heavily on DNS, and the Windows Server DNS Server service doesn't support DoH queries, so name resolution against domain controllers would fail. If you require DNS query traffic on an Active Directory Domain Services network to be encrypted, consider implementing IPsec-based connection security rules to protect this traffic. See Securing end-to-end IPsec connections by using IKEv2 for more information.
Determine which DoH servers are on the known server list
Windows Server ships with a list of servers that are known to support DoH. You can determine which DNS servers are on this list by using the Get-DnsClientDohServerAddress PowerShell cmdlet.
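For example, running the cmdlet with no parameters prints the full table, and you can filter to a single resolver (the address below is only an illustration):

```powershell
# Show every DoH server Windows knows about
Get-DnsClientDohServerAddress

# Or look up a single resolver address (illustrative address)
Get-DnsClientDohServerAddress -ServerAddress '1.1.1.1'
```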
The default list of known DoH servers is as follows:
Add a new DoH server to the list of known servers
You can add new DoH servers to the list of known servers using the Add-DnsClientDohServerAddress PowerShell cmdlet. Specify the URL of the DoH template and whether you'll allow the client to fall back to an unencrypted query should the secure query fail. The syntax of this command is:
Add-DnsClientDohServerAddress -ServerAddress '<resolver-IP-address>' -DohTemplate '<resolver-DoH-template>' -AllowFallbackToUdp $False -AutoUpgrade $True
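For instance, registering Cloudflare's public resolver might look like the following (illustrative values; substitute your own resolver's address and published DoH template):

```powershell
Add-DnsClientDohServerAddress -ServerAddress '1.1.1.1' `
    -DohTemplate 'https://cloudflare-dns.com/dns-query' `
    -AllowFallbackToUdp $False -AutoUpgrade $True
```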
Use Name Resolution Policy Table with DoH
You can use the Name Resolution Policy Table (NRPT) to configure queries to a specific DNS namespace to use a specific DNS server. If the DNS server is known to support DoH, queries related to that domain will be performed using DoH rather than in an unencrypted manner.
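As a sketch (the namespace and server address here are hypothetical), an NRPT rule directing a namespace to a specific server can be added with the Add-DnsClientNrptRule cmdlet; if that server is in the known DoH server list, matching queries can then use DoH:

```powershell
# Hypothetical internal namespace and DNS server address
Add-DnsClientNrptRule -Namespace '.corp.contoso.com' -NameServers '10.0.0.7'
```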
Source: https://docs.microsoft.com/en-us/windows-server/networking/dns/doh-client-support
I frequently have very complex layer trees, sometimes with 50 to 100 layers in several hierarchies. Although you have chevrons showing which layers have sublayers, it still gets confusing as there’s not much visual differentiation. I would really like to see all layers that have sub-layers have bold titles.
Distinguishing between parent layers and single layers would be great.
Related wish: Also seeing empty layers (meaning no objects in that layer) by seeing their name in gray instead of black would be super useful. This is also a standard in many other 3D packages. Wished in other topics many times by others, probably many times just by me
G
Hi Gustavo - thanks, I will amend the YT item…
-Pascal
In the meantime, you could run this script to change parent layer names to upper case
import rhinoscriptsyntax as rs

layers = rs.LayerNames(sort=True)
for layer in layers:
    if rs.LayerChildCount(layer) != 0:
        rs.RenameLayer(layer, layer.upper())
Some sort of indication would be good for sure, grey is already “taken” by reference/worksession layers, which works nice, and “shades of gray” may get confusing.
As for parent layers, agree it is a good practice to use ALL CAPS for parent layers, bold also is taken by Current Layer and I personally would hate to see all parent layers being bold. I think it should be left on user responsibility side to name layers in a clear way to make it distinguishable what’s what.
my 2c.
-j
I DON’T LIKE LAYERS SCREAMING AT ME JAREK.
…you are right about the gray. Maybe there’s some other way. Can we add a column for empty layer indication?
G
Point taken, just a sample of user-driven distinguishing the Parent layers from the rest. I definitely would find bolding them visually confusing, as we, too, deal with 100s of multi-level nested layers.
Maybe once layer-coloring gets finally implemented it would present another good option to deal with this…
-j
Source: https://discourse.mcneel.com/t/layer-tree/130789
Introduction: Giving the Raspberry Pi a Serial Modem Using the HUAWEI E3531 USB Dongle (+ Send SMS)
Hi, here we will give the Raspberry Pi 3 a serial modem using the Huawei E3531 USB dongle. You can now send AT commands to the modem, and easily send text messages. We'll do this in Python code.
Step 1: Getting Your SIM Ready, and Using the Default HiLink CDC-Ethernet Mode for the E3531
So mostly when you buy a SIM the carriers (this is for the UK) want you to activate it using a voice call! So it's easiest to do that with an unlocked phone. Then you'll know for sure that the SIM is working. Ok, so once that is done you can insert the SIM into the E3531 and plug it into a USB port on the Raspberry Pi 3!
Once you've plugged in the E3531 you should run the following command:
lsusb
This will return a list of USB devices that are connected, along with their modes. You can see what I got back in the image above!
I got back for the USB device listed as Huawei Technology Co., the following mode:
12d1:14dc
Now the 12d1 being the vendor ID, and 14dc being the HiLink CDC-Ethernet model/mode ID. So this is supposed to be the default mode for the E3531. However, because these dongles do contain data, they may return as a mass storage device instead! That's common for the E303. But I never had that with the E3531.
Anyway, this HiLink CDC-Ethernet mode means the dongle is all ready to connect to the internet over 2G/3G and you can use PPP now if you want. It's probably easiest to make sure the dongle is working by going ahead with this first!
So you should open a browser, and access . Then you'll get the home html file giving your status. See the image above. So that should tell us that our SIM is working fine! And we are ready to switch our mode to serial modem now!
Step 2: Now Let's Switch to Serial Modem Mode!
Next, let's switch it to a serial modem so we can send AT commands and thus SMS. We'll be wanting to access it as /dev/ttyUSBx now.

The first thing we need to take into account is the mode we want to switch to. We want to switch our E3531 from the mode 12d1:14dc (which it is in now) to the modem mode 12d1:1001. This should give us three virtual serial ports at ttyUSB0, ttyUSB1, and ttyUSB2!
Ok, so we need to use usb-modeswitch to do this. If you look around on the net, there are several approaches to using usb-modeswitch to do this. This is the one I found most reliable!
So download and install usb-modeswitch with the following command:
sudo apt-get install usb-modeswitch
Now let's make a file containing our switch mode instructions. The instructions are:
TargetVendor=0x12d1
TargetProduct=0x1f01
MessageContent="55534243123456780000000000000011062000000100000000000000000000"
So, we create a file at the following location, using the following command. And add those three lines of code to it. Command is:
sudo nano /etc/usb_modeswitch.d/12d1:1f01
You can see this file in the image at the top! So after saving the file, you should reboot your Raspberry Pi.
N.B.
The message content
MessageContent="55534243123456780000000000000011062000000100000000000000000000"
is kinda tricky. It took me a while to get this right, and the Huawei website wasn't much help. Actually it didn't help at all. The message content depends on the firmware of your E3531, so honestly, the above might not work and you'll have to experiment/research to find a message that does in that case! Just by searching forums. Or else I can try to be of assistance if you contact me!
Step 3: Checking If We Now Have a Serial Modem!
So, after the reboot you should run the following command again:
lsusb
And hopefully *fingers crossed* you will see the new ID as 12d1:1001; so the E3531 has switched to serial modem mode! It even describes it as a modem now! Although with strange model numbers! See above image.
Next, issue the following command:
ls /dev/tty*
which should list all the tty bindings. Hopefully we'll see our new virtual serial ports: ttyUSB0, ttyUSB1 and ttyUSB2! Great!
Step 4: Connect to Our New Virtual Serial Ports and Issue Some AT Commands!
Now we can go ahead and connect to the virtual serial ports and issue some AT commands! I used cu "call up another system" to do this.
So you can install cu with the following command:
sudo apt-get install cu
So after you've installed it, execute it with following arguments to connect to ttyUSB0 (if that doesn't work, try ttyUSB1 or ttyUSB2):
cu -l /dev/ttyUSB0
So you can see what we get back from cu in the above image. It doesn't echo, so it's a bit tricky! The first was 'AT' and I got back OK. Then I issued 'AT+CMGF=1' which tells the modem to act in SMS mode, and got back CMS ERROR 302, which is operation not allowed. Then I tried to send an SMS with 'AT+CMGS="+44phonenumberhere" ' which returned an error code of CMS ERROR 302 again! It all worked fine the next time, so I must have entered them wrongly! It's tricky when you don't get feedback for what you are typing :-(
Anyway, I should note at this point, that you may enter correct AT commands and get those errors due to the SIM card requiring a PIN or PIN2. If the SIM is blocked, the error would be due to SIM required PUK or PUK2 (personal unblocking codes).
You can send a PIN using AT+CPIN="0000" . Note that carriers will block the SIM after 3x incorrect PIN attempts, and you will need to use a PUK or PUK2 to unblock it. All the carriers have default PIN codes.
You can find a full list of error codes here:...
Great! So it isn't much fun using cu to send AT commands is it! Next, let's do it with Python!
Step 5: Using the Python Code to Deliver AT Commands to Serial Modem in Order to Send SMS!
Now, let's do away with cu, since it is not much fun. And use Python code to send our AT commands, and thus send an SMS. Honestly, it is quite exciting to send your first SMS using the serial modem!!
So here's the code to do it:
I've got this in full on GitHub at...
import serial
from curses import ascii  # we need the ASCII code for CTRL-Z
import time

# Here we are testing sending an SMS via the virtual serial port ttyUSB0
# that was created by the USB serial modem.

phonenumber = "+441234123123"  # enter the phone number to send the SMS to
SMS = "here's your SMS!"

# 460800 is the baud rate, ttyUSB0 is the virtual serial port
ser = serial.Serial('/dev/ttyUSB0', 460800, timeout=1)

ser.write("AT\r\n")           # send AT to the ttyUSB0 virtual serial port
print(ser.readline())         # what did we get back? Should be OK

ser.write("AT+CMGF=1\r\n")    # AT+CMGF=1 sets the modem up for SMS mode
print(ser.readline())         # what did we get back from that AT command?

ser.write('AT+CMGS="%s"\r\n' % phonenumber)  # AT+CMGS, then the phone number
ser.write(SMS)                # send the SMS text after the CR
ser.write(ascii.ctrl('z'))    # a CTRL-Z (from the ascii library) ends the message

time.sleep(10)                # wait 10 seconds

# What did we get back after AT+CMGS, the SMS text, and the CTRL-Z?
print(ser.readline())
print(ser.readline())
print(ser.readline())
print(ser.readline())
Step 6: All Done! Here's the SMS!
So, after running that, we are able to send an SMS! The phone numbers have to be entered as + then the country code, then the number, e.g. +441234123123
Source: https://www.instructables.com/id/Giving-the-Raspberry-Pi-a-Serial-Modem-Using-the-H/
> On Tue, Jan 4, 2011 at 08:41, Dirk Laurie <dpl@sun.ac.za> wrote:
> > On Tue, Jan 04, 2011 at 03:18:29PM +0200, Leo Razoumov wrote:
> >> Lua provides no good way to test whether a given array has holes or
> >> not.
> > Not Lua's job. Your job.
>
> And how exactly I am supposed to do this job? E.g., how can I test
> that an array t coming my way from someone else's library has no
> holes?

How do you test that a tree coming your way from someone else is actually a tree? You do not. This is part of the API. This is what people call "documentation". If someone gives you a table with no extra information, you do not even know whether the table has useful stuff in numeric indices, so why bother to use '#'? But if you know what the table is, it should be part of what you know whether it may have holes. (If so, it also should be part of what you know how to get its real size.)

But if you really want to check, just to make sure, this is how you can do your job. I assume you know at least that the value is a table. Change as needed to your specific circumstances:

function noholes (t)
  local n = #t
  local i = 0
  for k in pairs(t) do
    if type(k) == 'number' then
      if math.floor(k) == k and 1 <= k and k <= n then
        i = i + 1
      else
        return false
      end
    end
  end
  return (i == n)
end

Not so hard, is it? Now, can we stop this discussion and move to something more productive?

-- Roberto
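A quick (illustrative) check of the noholes function above; the first table is dense, the second has a hole at index 3:

```lua
print(noholes({10, 20, 30}))           --> true
print(noholes({[1]=1, [2]=2, [4]=4}))  --> false (hole at index 3)
```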
Source: http://lua-users.org/lists/lua-l/2011-01/msg00294.html
The subject of pointers deals heavily with the use of the symbols * and &, and to a beginner this is often confusing; especially when different people put the modifiers in different places (me and the VS6 class wizard for example :P ). My technique is to keep the modifiers next to what they are modifying. For example:
// Initialize a variable, 'x', of type 'int' to the 'value' 5.
int x = 5;
// Initialize a variable, 'px' of type 'pointer to int' to the value of
// 'the address of' x.
int* px = &x;
---- --
| |
| |- This is read "address of" x
|
|- Notice I put the * modifier after int, not before px (int* px not
int *px). The compiler doesn't care, however I am modifying the
data type, not the variable. Thus I remain consistent to my technique.
// Now initialize a variable 'ref_to_x' as a reference to x.
int& ref_to_x = x;
----
|
|- Again, I keep the modifier with what I am modifying.
// Initialize a variable,'deref_px' of type 'int' to the 'dereferenced
// value' of px.
int deref_px = *px;
---
|
|- Notice in this case, the * is with px, because that is
what I'm modifying.
Ok, with that out of the way, we continue on.
There have been many a question floating around about passing values by reference. Why would you pass by reference? Passing a value by reference speeds things up and slims things down, because you are not making a copy of the variable passed. This may not seem too important when making a single function call in the whole life of your application's instance, but when calling a function, say, 1800 times in a minute, performance becomes a major motivating factor. Also, when modifying a copy of a value, the original value remains unchanged (which may be desirable in some cases), whereas when passing by reference, the original variable is modified. The downloadable example program will illustrate why, but here is a brief example that should clear up the confusion:
Consider the following functions:
void MyFunctionByCopy(CString x)
{
CString y = "Test by copy.";
x.Format("%s",static_cast<const char*>(y));
return;
}
void MyFunctionByReference(CString& x)
{
CString y = "Test by reference.";
x.Format("%s",static_cast<const char*>(y));
return;
}
Note: The first one will create a copy of the variable passed, whereas the latter will modify the variable itself. Note that the reference to int is an automatically dereferenced pointer, so pointer notation is not required. Check the sample program for a working example of this in action.
Given the two functions above, can you guess the values of the results in the following example?
{
CString szTestOne = "Test Me One";
CString szTestTwo = "Test Me Two";
MyFunctionByCopy(szTestOne);
MyFunctionByReference(szTestTwo);
printf("\nszTestOne = %s", static_cast<const char*>(szTestOne));
printf("\nszTestTwo = %s", static_cast<const char*>(szTestTwo));
}
For those who can't handle the suspense, szTestOne = "Test Me One" and szTestTwo = "Test by reference.". If you don't believe me, try it.
For anyone interested in what such an implementation may look like, consider the lines below. You may try this on your own and see the results.
int x = 7;
int* px = &x;
int** ppx = &px;
printf("\nx = %d", x );
printf("\n*px = %d", *px );
printf("\n**ppx = %d", **ppx);
I can remember when I first came across pointers when beginning C++, and after much confusion I wondered: "Why use pointers at all? I can just instantiate explicit instances of all my variables and not worry about it." (Notice I was all fancy, using words like instantiate and explicit instance. Yeah, I knew C++... right...) Well, with experience comes humility and wisdom, and with CodeProject comes enlightenment and encouragement. Venturing off into applications beyond the command-line scripts I was coding before (I say scripts because my entire program was inside main() and I made no function calls; my idea of code reuse was cut-and-paste), I found myself in a whole new world.
As it turns out, in real-world programming, you don't always have the privilege of knowing how many things you will be analyzing, or how big your arrays need to be. You don't know how much of its resources your user's machine will have free. You begin to worry about things like optimization and passing by reference. You start using built-in functions that take pointers by default, although you may not know why...
In my personal experience (all two years of it), the primary driving forces behind my exodus into the realm of pointers are the wonderful operators:
new and delete
Although this is not a comprehensive guide to pointers, I feel you cannot talk about pointers without mentioning the aforementioned operators. Why are these operators so important? They are important not only because they allow you to dynamically allocate memory to and from your program, but they do it through the use of pointers. There is much information available on this topic so I will not go into vast detail, but I will cover the basic highlights. Again consider the following code:
int* pNewInt = new int;
*pNewInt = 7;
printf("\nThe pointer, pNewInt, is located at memory address: 0x%X", &pNewInt);
printf("\npNewInt points to memory location 0x%X and contains the value %d.", pNewInt, *pNewInt);
delete pNewInt;
In this code we created a new 'pointer to int' with the keyword new. Note that new returns a 'pointer to int'. An important thing to remember is to delete your pointers after you are done with them. If you don't, you will have many a memory leak and risk exhausting the resources of the target machine. The details can be found elsewhere, but the point remains: new is a powerful friend and is worth the time to understand.
Pointers may seem intimidating at first, but they are in fact a fundamental concept in C and C++. Reading about them will only take you so far; it is the experience in using them and the enlightenment of harnessing their power that will ultimately contribute to making the world a better place. So enjoy your pointers. Use them often, and take good care of them. But most importantly, use them with courage and confidence and you will be greatly rewarded when you pass beyond that great exception in the sky...(Mark Conger - Death of a Coffee Pot)
#include <iostream>
#include <cstdlib> // for system()
using namespace std;
void smallest(int *values, int dimension);
int main()
{
int *pointerOfInts;
int dimension;
cout << "Give the number of elements:";
cin >> dimension;
pointerOfInts = new int[dimension];
for(int i = 0; i < dimension; i++)
cin >> pointerOfInts[i];
smallest(pointerOfInts, dimension);
delete []pointerOfInts;
system("PAUSE");
return 0;
}
void smallest(int *values, int dimension)
{
int min = values[0], pos = 0;
for(int j = 1; j < dimension; j++)
{
if(values[j] < min)
{
min = values[j];
pos = j;
}
}
cout << "the minimum number is " << min;
cout << endl << "and its position is " << pos + 1 << endl;
}
dimension
pointerOfInts = new int[dimension];
const int MAX_SIZE = 2000;
int array[MAX_SIZE];
array
int* array;
int* array; // compiler allocates 4 bytes of memory
char* character; // compiler allocates 4 bytes of memory
long* anotherArray; // compiler allocates 4 bytes of memory
struct myStruct
{
int val1; // 4 bytes
char ch1; // 1 byte
int val2[4]; // 4 * 4 = 16 bytes
char ch2[50]; // 1 * 50 = 50 bytes
};
myStruct
myStruct* obj;
void smallest(int *values, int dimension);
values
int* x, y;
int *x;
int y;
void Foo(CString& text)
{
text.Format(....);
}
void Foo(CString* text)
{
text->Format(....);
}
static void foo1(int& i);
static void foo2(int* i);
int main(int argc, char* argv[])
{
int x = 5;
printf("\nThe value x initially has the value %d", x);
unsigned long physical_address_of_x = reinterpret_cast<unsigned long>(&x);
int& reference_to_x = x;
int* pointer_to_x = reinterpret_cast<int*>(physical_address_of_x);
// change the "reference to x" and watch what happens to x.
printf("\n\nWe are now setting reference_to_x = 12\n");
reference_to_x = 12;
printf("\nThe value x is located at 0x%X and now has the value %d.", physical_address_of_x, x);
printf("\nThe reference to x is located at 0x%X and has the value %d.", &reference_to_x, reference_to_x);
printf("\nThe pointer to x is located at 0x%X and points to location 0x%X.", &pointer_to_x, pointer_to_x);
printf("\nThe pointer to x, when de-referenced yields the value %d\n\n", *pointer_to_x);
printf("Calling foo1(x)...");
foo1(x);
printf("\nCalling foo2(&x)...");
foo2(&x);
return 0;
}
void foo1(int& i)
{
printf("\nThe address of 'i ref' is 0x%X", &i);
return;
}
void foo2(int* i)
{
printf("\nThe address of 'i ptr' is 0x%X\n\n", &i);
return;
}
const
void Foo(const CString* text)
{
...
}
void Foo(const CString* const text)
void Foo(const CString& szText)
int a;
int * const b = &a;
int &c = a;
printf("%08X, %08X, %08X\n", &a, &*b, &c);
new
delete
shape* c = new circle();
shape* r = new rectangle();
...
c->draw();
r->draw();
int* myArray = new int[255];
std::vector<int> myArray;
struct tagPerson
{
char name[25];
int age;
char address[50];
char city[20];
char state[10];
char zip[5];
char phone[15];
bool enrolled;
};
tagPerson student;
someFunction(student); // 130 bytes (depends on structure packing)
vs.
someFunction(&student); // 4 bytes
void someFunction(tagPerson& student) {...}
...
someFunction(student); //4 bytes
QPointer-QWeakPointer-Qt5-Debacle
Hi all,
So I've recently noticed the whole QPointer vs QWeakPointer debacle that was caused in Qt5.
First they deprecated QPointer and let everybody switch to QWeakPointer in Qt 4.6. Then they stuck out their tongue and removed the QObject-tracking capability from QWeakPointer in favor of... wait for it... QPointer! Yep, we should now use QPointer again; it was "un-deprecated". That slow, broken class from the old days. But of course, in Qt5 QPointer got a new backend, based on... have a guess... oh yes... QWeakPointer! (What kind of drugs are they taking, and why can't we who have to deal with such decisions have some, too?)
Now, maybe you can tell me how we, who develop libraries, should handle this situation. I have clients that stick with Qt 4.x and some that want to switch to Qt5. Should I now maintain two separate branches, one for Qt5 and one for Qt4? Wasn't the switch supposed to be smooth?
If I change my code to use QPointer again, people who compile with Qt5 will be fine. But people compiling with Qt 4.x will have the slow cripple that QPointer was in Qt 4, when they should be using QWeakPointer. But when I stick with QWeakPointer, my code won't compile with Qt5, because they removed functionality from QWeakPointer (the constructor taking QObject subclasses). Should I now sprinkle
@
#if QT_VERSION < QT_VERSION_CHECK(5, 0, 0)
QWeakPointer<SomeClass> mPointer;
#else
QPointer<SomeClass> mPointer;
#endif
...
#if QT_VERSION < QT_VERSION_CHECK(5, 0, 0)
mPointer.data()->doSomething();
#else
mPointer->doSomething();
#endif
@
all over the place? Was that the intent? Uglify all user's code?
But then again, maybe I'm just completely mistaken and there is a way out of this debacle that I can't see at the moment. (They could've at least left the old QWeakPointer functionality, this way new projects who target Qt5-only can use the new QPointer while everybody else can continue using QWeakPointer.)
So, how did you manage to solve this issue in your code? Or if any trolls are listening: What the hell?!
Just because the constructor taking a QObject is deprecated doesn't mean the functionality isn't still there.
If QWeakPointer actually lost its QObject-tracking capabilities, I would report that as a bug, and a critical one, too.
[quote author="Asperamanca" date="1367907884"]Just because the constructor taking a QObject is deprecated doesn't mean the functionality isn't still there.
[/quote]
in qsharedpointer_impl.h, they've got all the QObject-targeted interface of QWeakPointer excluded from compilation via
@#if QT_DEPRECATED_SINCE(5, 0)
...
#endif@
I'd expect things that were marked as deprecated in a version not to be outright excluded in that same version, but maybe in Qt6... However, that directive clearly excludes it under the default defines. I do wonder whether that's intended or a bug. I'd need to tell all my clients to include some hack like "#define QT_DISABLE_DEPRECATED_BEFORE QT_VERSION_CHECK(0, 0, 0)" before including any Qt headers in their projects, and probably even recompile Qt with that flag on some systems. They can't be serious.
I don't have a working Qt5 environment. Could you check it with a small example? If they've really broken that, I could submit this as a bug via Qt commercial support. While we currently stick to Qt4, there's always the future to think of.
Checked it, they really excluded the newly deprecated interface by default.
@#ifndef MAINWINDOW_H
#define MAINWINDOW_H
#include <QMainWindow>
#include <QWeakPointer>
namespace Ui {
class MainWindow;
}
class MainWindow : public QMainWindow
{
Q_OBJECT
public:
explicit MainWindow(QWidget *parent = 0);
~MainWindow();
QWeakPointer<QWidget> mPointer;
private:
Ui::MainWindow *ui;
};
#endif // MAINWINDOW_H@
@#include "mainwindow.h"
#include "ui_mainwindow.h"
MainWindow::MainWindow(QWidget *parent) :
QMainWindow(parent),
mPointer(this),
ui(new Ui::MainWindow)
{
ui->setupUi(this);
}
MainWindow::~MainWindow()
{
delete ui;
}@
Works with Qt4.x, fails with Qt5.0.x saying
@mainwindow.cpp: In constructor 'MainWindow::MainWindow(QWidget*)':
mainwindow.cpp:7:24: error: no matching function for call to 'QWeakPointer<QWidget>::QWeakPointer(MainWindow* const)'
mainwindow.cpp:7:24: note: candidates are: [blablabla]@
as expected when such a constructor isn't defined.
I have compared the source code of 4.8.4 against 5.0.2, and sure enough, they've just defined it away. I have submitted a support request, and expect an official statement on how application and library developers are expected to handle such back-and-forth tactics.
Hi Asperamanca, can you please post a link to your support request here? So that others can follow it.
Anyway, there is a large backlog in the JIRA issue tracker, so the request might get buried under lots of other requests. If you don't get any response in a week, try posting the question to the Interest mailing list, which is a lot more active:
Actually, I found an official statement:
Thanks for the link. It still doesn't explain why no way was implemented that would allow someone to specifically allow the QWeakPointer - QObject use case.
I don't know how to post a link to my support request - I can only access it via my Qt commercial user account. But I'll keep you posted about what's going on.
Ah, I didn't realize you submitted a commercial support request. Yes, please keep us updated.
I'm not familiar with the intricacies of the different smart pointer classes, but I was under the impression that "deprecated" means "not recommended but still available", not "removed from the library". It could be an error.
Yes, the flip-flopping over QWeakPointer is annoying to those who immediately adjusted all their code. It seems though that QPointer works in Qt4 programs although for a couple of dot releases it may generate a deprecation warning at compile time (not in 4.8), and it works in Qt 5.0. The solution would seem to be use QPointer and ignore the late unpleasantness. No version dependent #ifdef required.
I may be wrong, but this should only be a problem to clients of your library if the QWeakPointer is part of the API, which you have just changed only to change back, or the library has not been programmed to maintain binary compatibility (so changing private members could break things).
The support answer suggests to use
@DEFINES += QT_DISABLE_DEPRECATED_BEFORE=0@
Thanks, Asperamanca. Not an ideal solution, but it does let code run.
Hello everybody
This is my first project. When I try to compile it, CodeVision gives me some errors. How can I solve them?
ks0108.h
#ifndef _KS0108_INCLUDED_
#define _KS0108_INCLUDED_
#pragma used+
//------------------------------------ Prototypes ------------------------------------
// Prototypes of User Functions
void glcd_Init();
void glcd_DrawBmpF(flash unsigned char Pat[], unsigned char x, unsigned char y, unsigned char h, unsigned char w);
void glcd_DrawBmp(unsigned char Pat[], unsigned char x, unsigned char y, unsigned char h, unsigned char w);
void glcd_DrawF(flash unsigned char Pat[]);
void glcd_Draw(unsigned char Pat[]);
void glcd_WriteByte(unsigned char x, unsigned char y, unsigned char dat); // x: Page, y: Column
void glcd_Putchar(unsigned char x, unsigned char y, unsigned char ch);
void glcd_Printf(unsigned char x, unsigned char y, char* str);
void glcd_Clear();
// LCD's methods
void address_right(int x, int y);
void address_left(int x, int y);
void write_right(unsigned char x);
void write_left(unsigned char x);
//---------------------------------------------------------------------------------------
#pragma used-
#pragma library KS0108.lib
#endif
KS0108.lib:
/* ---------------------
   ---------------------
   Graphic Lcd KS0108 controller Library */
#include
#include
//------------------------------------ Definitions ------------------------------------
#ifndef GLCDTYPE
#define GLCDTYPE KS0108 // Graphic Lcd Controller type
#endif
// Default Port definition
#ifndef RS
#define RS PORTD.0
#endif
#ifndef RW
#define RW PORTD.1
#endif
#ifndef EN
#define EN PORTD.6
#endif
#ifndef CS1
#define CS1 PORTD.3
#endif
#ifndef CS2
#define CS2 PORTD.4
#endif
#ifndef RST
#define RST PORTD.5
#endif
#ifndef DATA
#define DATA PORTC
#endif
//---------------------------------------------------------------------------------------
//------------------------------------ Functions ------------------------------------
// Initialize Graphic LCD
void glcd_Init()
{
    int i = 0, com[4] = {0xc0, 0xb8, 0x40, 0x3f};
    RS = 0;
    while (i < 4) { write_left(com[i]); i++; }
    i = 0;
    while (i < 4) { write_right(com[i]); i++; }
}
// Clear Lcd screen
void glcd_Clear()
{
    unsigned char page[8] = {0xb8, 0xb9, 0xba, 0xbb, 0xbc, 0xbd, 0xbe, 0xbf};
    int i = 0, j = 0, index = 0;
    while (index < 1024)
    {
        address_left(page[i], 0x40);
        RS = 1;
        for (j = 0; j < 64; j++) { write_left(0); index++; }
        address_right(page[i], 0x40);
        RS = 1;
        for (j = 0; j < 64; j++) { write_right(0); index++; }
        i++;
    }
}
// Write a byte in Row x{0..7} and Column y{0..127}
void glcd_WriteByte(unsigned char x, unsigned char y, unsigned char data)
{
    //unsigned char j;
    if (y < 64) // Left
    {
        address_left(((x | 0b10111000) & 0b10111111), ((y | 0b01000000) & 0b01111111));
        RS = 1;
        write_left(data);
    }
    else // Right
    {
        y -= 64;
        address_right(((x | 0b10111000) & 0b10111111), ((y | 0b01000000) & 0b01111111));
        RS = 1;
        write_right(data);
    }
}
// Drawing a pattern that is stored in FLASH memory
void glcd_DrawF(flash unsigned char Pat[])
{
    unsigned char i, j;
    unsigned int k = 0;
    for (i = 0; i < 8; i++)
        for (j = 0; j < 128; j++)
            glcd_WriteByte(i, j, Pat[k++]);
}
// Drawing a pattern that is stored in SRAM memory
void glcd_Draw(unsigned char Pat[])
{
    unsigned char i, j;
    unsigned int k = 0;
    for (i = 0; i < 8; i++)
        for (j = 0; j < 128; j++)
            glcd_WriteByte(i, j, Pat[k++]);
}
// Drawing a pattern that is stored in FLASH memory, from dot [x,y] with Height h and width w
void glcd_DrawBmpF(flash unsigned char Pat[], unsigned char x, unsigned char y, unsigned char h, unsigned char w)
{
    unsigned char i, j;
    unsigned int k = 0;
    for (i = x; i
font.H
flash char FontLookup [91][6] = { { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, // sp { 0x00, 0x00, 0x5F, 0x00, 0x00, 0x00 }, // ! { 0x00, 0x00, 0x5F, 0x00,08, 0x2A, 0x1C, 0x2A, 0x08,30, 0x30,00,14, 0x14, 0x14, 0x14, 0x14, 0x00 }, // = { 0x41, 0x22, 0x14, 0x08, 0x00, 0x00 }, // > { 0x02, 0x01, 0x51, 0x09, 0x06, 0x00 }, // ? { 0x32,01, 0x01, 0x00 }, // F { 0x3E, 0x41, 0x41, 0x51, 0x32,04,7F, 0x20, 0x18, 0x20, 0x7F, 0x00 }, // W { 0x63, 0x14, 0x08, 0x14, 0x63, 0x00 }, // X { 0x03, 0x04, 0x78, 0x04, 0x03, 0x00 }, // Y { 0x61, 0x51, 0x49, 0x45, 0x43, 0x00 }, // Z { 0x00, 0x00, 0x7F, 0x41, 0x41, 0x00 }, // [ { 0x02, 0x04, 0x08, 0x10, 0x20, 0x00 }, // 55 { 0x41, 0x41, 0x7F, 0x00,48, 0x44, 0x44, 0x38,08, 0x14, 0x54, 0x54, 0x300, 0x7F, 0x10, 0x28, 0x44, 0x00 }, // k { 0x00, 0x41, 0x7F, 0x40, 0x00, 0x00 }, // l { 0x7C, 0x04, 0x18, };
a picture of it:
Surely the errors are self-explanatory? It tells you this is an invalid expression:
If you check your C manual you will find that it's telling the truth. That is not valid C code. In an assignment to an array element there must be an index so something like:
would be valid.
Your problems actually start here:
Presumably "str" is a "string"? You have to decide how many characters you want in it and put that number between []. So if you expect strings of up to 9 characters, use:
Always allow 1 more than you think you need, as C uses a hidden 0x00 to mark the end of strings.
Thanks Cliff,
I changed it, and now it gives me 8 errors.
BTW: the code and picture in the first post have been updated.
"One's value is inherent; money is not inherent"
Chuck, you are in my heart!
When you look up "abs" function description in the CodeVision Help or manual, what does it say about the header file used for the prototype?
You can put lipstick on a pig, but it is still a pig.
I've never met a pig I didn't like, as long as you have some salt and pepper.
The correct procedure is...
."
Follow Brian's strategy and you won't go far wrong.
You have got more than eight errors in your code from a swift inspection.
I reckon that if you format the code first, you can read it easily. This will reveal most of the syntactical and logical errors.
If a punter has gone to the effort of producing neatly formatted readable code, you will get lots of help and advice.
David.
stdlib.h! :mrgreen:
solved!
Thanks Lee,David and Brain
Actually, my code is for a graphical LCD based on the KS0108 controller. Now I get 28 errors in the lib file.
I attached a picture and the code of the lib for you.
Why does it give me this error (all the errors are "undefined symbol")?
"One's value is inherent; money is not inherent"
Chuck, you are in my heart!
There is no picture attachment? But can you not just copy/paste the error text? It's then easier to read and easier to quote in replies.
Stefan Ernst
CodeVision comes with full Graphics Libraries for KS0108 and most other chips.
So I would start off with the CV examples. There should be some pre-compiled hex files with the Evaluation CV.
A full licence gives you all the libraries. i.e. you can write your own programs.
Of course, you can always use third-party code or write your own KS0108 code.
Bear in mind that most GLCD programs will be too big for the Evaluation version of the Compiler. If you have a full licence, you can use the built-in libraries anyway.
Let us know which licence you have.
David.
Totally agree - utter madness.
Clearly it's a missing #include.
I solved it (some functions were cluttered, and I corrected those), but now I have a new problem. I compiled my project and loaded it into Proteus, but it works badly (or maybe it's better to say it doesn't work :( ): the display shows me some odd things!
Who can check it for me and tell me why it doesn't work? :(
Attachment(s):
"One's value is inherent; money is not inherent"
Chuck, you are in my heart!
i put the lib files to one file. look:
i attached a picture of proteus simulated.
Attachment(s):
"One's value is inherent; money is not inherent"
Chuck, you are in my heart!
I just can't understand the big fascination with Proteus. Why not just get an AVR and an LCD and a plastic proto board? Total cost much less than a giant simulation program. After it runs fine in the simulation, it STILL won't run on the real AVR, and the 'what's wrong with this program?' thread will start all over again.
Imagecraft compiler user
Oh, Bob,
Is it possible that Proteus makes mistakes?
In other words: is it possible that my project is actually correct?
"One's value is inherent; money is not inherent"
Chuck, you are in my heart!
The Theory Of Logical Troubleshooting is something like start with something simple that works and apply stepwise refinement. Can you edit, compile, burn and run a c program to turn an output hi for 100ms and lo for 300ms? Make this work first.
Imagecraft compiler user
This is not a professional engineer's fault description :
Also don't expect many of the people here to have access to Proteus. Most including me cannot justify the £300/$500 cost.
Like one professor at the Tehran Technical Institute bought one copy once, and now every engineer gets yet another copy to do his homework. Just use Atmel Studio, maybe?
Imagecraft compiler user
Well, I thought that I should try your project.
I thought that perhaps a nice person had zipped up the project files into your "1.zip" attachment.
Anyway, I had to copy-paste the files you had posted.
There are one or two changes to make. e.g.
I used different port pins on the LCD.
I used a mega32.
Your main problems are:
1. you don't know how to convert an integer to an ascii string, e.g. itoa()
2. the "library.lib" did not include
so it did not know any PORTD etc
3. the "library.lib" does not set the DDR for the appropriate port pins
Of course there are probably lots of other problems.
Incidentally, it is 3500 bytes. So it would compile with the Evaluation CV. Mind you, the program does not do anything much !
A real application will be > 8192 bytes (or whatever the evaluation limit is).
David.
p.s. I see that your main.c sets the DDR bits before calling glcd_init(). This will work, but is not good practice.
I tested it. It works.
I want the LCD to show me three parameters and three loading bars (bar graphs). The program is this:
1- get the ADC reading
2- convert this amount to between -3V and 5V (the original voltage is -3V to 5V; I shifted it with an op-amp to 0V to 8V, read it with the ADC, and convert it back in the micro)
3- then show me the values
"One's value is inherent; money is not inherent"
Chuck, you are in my heart!
You should:
try to display a string on your GLCD (e.g. 3.5v)
try to display an integer on your GLCD (e.g. 3500 should be displayed as V=3.5v)
try the ADC (I do not know whether you simulate, or whether you have a serial link?)
Once all these pieces of code are debugged (pieces one head can manage, not three), put them together.
Thank you, David, but I can't understand these two parts of your statement well.
David, I think the man who wrote this lib was a dilettante ( :? ).
"One's value is inherent; money is not inherent"
Chuck, you are in my heart!
I presume that you have got this working in Proteus now.
Have you tried it on real hardware.
You may have noticed that your library does not do much more than draw text or copy a whole screenful of bitmap.
A regular GLCD library can draw lines, circles, arcs, rectangles etc as well as text. And it can draw without destroying the existing display.
First off, you should get your current project working nicely. That library is plenty good enough to display some text and a bargraph.
Instead of calling glcd_clear(), just gotoxy() and draw spaces over your old message. Then gotoxy() again and write the new message.
Alternatively, you pad any message with spaces. e.g. xxx456 or 456xxx.
Then you overwrite on each loop.
Likewise with your bargraph.
I am not too sure what a dilettante is !
You should be able to find other KS0108 libraries.
Most are written for avr-gcc but it is simple to port to Codevision.
You have set DDRC and DDRD in your main.c
This means that the data and control lines to your KS0108 are outputs. Your library only ever writes.
A good KS0108 library will both read and write. e.g. you read the existing display RAM so that you can plot single pixels.
David.
Well, no.
But maybe the new text (or other content) gets drawn on top of the previous one.
also, i already found this:
the ProGFX engine
or Osama's lib(it has a codevision version ):
GLCD Library
But if I can port a GCC lib to CodeVision, how can I port the ProGFX engine to CodeVision (I think ProGFX is much better than Osama's lib)?
But I already configured it.
For the lib, I now use this lib (from this post) =>
Despite this, I still have that problem. Also, Proteus gives me some errors, but I think they can't be related to this subject.
Attachment(s):
"One's value is inherent; money is not inherent"
Chuck, you are in my heart!
Probably the problem is with the lib, because when I wrote this program and simulated it, I saw that it doesn't work either. Look at the attached picture (I tested the LCD with the program that the author of the library wrote; it works well).
Attachment(s):
"One's value is inherent; money is not inherent"
Chuck, you are in my heart!
Design a little.
Code a little.
Test.
Repeat until done to taste.
Schematic... PD2 isn't correct. The pin floats when the switch is open.
Your main.c has:
Your library.lib has:
So RS is never made an output pin. Hence you will get strange results.
It is very important to have truthful comments. You have clearly not got DDRD=0xde from the CodeWizard. If you alter a line of code, make sure that you alter the comment too.
stevech noted that you have a floating PD2 pin.
Actually it is an output pin. (DDRD=0xde)
If you do use it as an input pin. It is always wise to use a pull-up (or pull-down) resistor.
David.
Thanks David.
I decided to change my GLCD lib (because I think it has many problems). Now I chose Osama's lib for this job. It compiles well.
Just one thing is strange to me: it seems it doesn't have any function for initializing the LCD. I wrote a program to use it:
the program:
It compiled very well, but it doesn't show anything in Proteus :( . It seems some port doesn't work.
aoww! god! god! god!
Attachment(s):
"One's value is inherent; money is not inherent"
Chuck, you are in my heart!
i changed the configuration of lib like this:
"One's value is inherent; money is not inherent"
Chuck, you are in my heart!
Please post a link to the GLCD library that you are using now.
You still seem to initialise DDRC and DDRD when most libraries would do this for you in the glcd_init() function.
e.g. they probably want to know which DATAPORT and CTRLPORT you are using.
You have not called a glcd_init() function.
You have not connected the /RST pin.
Proteus thinks that it is 0 i.e. in permanent RESET.
If your library uses RST, you need to connect a /RST to a port pin.
If it doesn't use it, you connect it to VCC.
David.
Seems it's not necessary (it doesn't have any port for RST).
David,
If you look at my picture, you will find some ports/pins that are turned off!! But I configured those! (I don't know why they are turned off.)
"One's value is inherent; money is not inherent"
Chuck, you are in my heart!
you can see and download it here:
GLCD Library
also i attached for you the lib.
Attachment(s):
"One's value is inherent; money is not inherent"
Chuck, you are in my heart!
Ok. I downloaded and compiled the "Codevision_H" project. Since my control wires are on different pins, I amended the glcd.h to:
And consequently needed to amend the main.c:
The example worked fine.
Likewise, there are many KS0108 libraries that all work fine.
Note that you must make sure that your control lines are outputs. And ensure that the /RST line is in the correct state.
David.
p.s. the code size was 13918 bytes. i.e. too big for the 4kB evaluation version, you need a full CV licence.
In which case, why don't you use the regular CV libraries and examples ?
HA! :shock:
So the example worked fine! Then why did it not work for me!?
David, do you use Proteus, or a real MCU and circuit?
Can you put the code here, so that I can compile it and see it?
Well, I don't know how I can use it (I guess it's difficult). I can't find any tutorials etc. for using it... :(
"One's value is inherent; money is not inherent"
Chuck, you are in my heart!
Just do File->Open and browse to the /cvavr/examples directory.
Since the Open dialog expects you to look for C files, you need to change the search to "*.prj" or "all files"
The regular Help menu gives many code examples too.
Yes, if you promise me to connect /RST to 5V in your Proteus schematic, I will post you a suitable project.
No, I do not have a Proteus licence.
As far as I know, it simulates pretty well.
But it also allows bad practice like "no 100nF capacitors". And you can have inappropriate resistor values too.
So I always run the code on a real AVR and real KS0108.
I suggest that you read the AVR data sheet for DDRx, PORTx, PINx and write some programs that blink LEDs.
David.
Thanks David,
My code is ready; you can see it in this post:
I just need to replace some functions with others.
glcd_Init(); ------->> this function initializes the LCD
glcd_Putchar(); ---->> this function puts a character on the LCD at a given coordinate.
glcd_Printf(); ----->> this function puts a string (passed as an array)
and finally this loop:
It makes a loading bar from the value of L.
I have already worked a lot with DDRx, PORTx and PINx, and have written many programs to blink LEDs. I built this program carefully, but I think I'm out of luck. :(
As you said, I connected RST to VCC (5 volts).
"One's value is inherent; money is not inherent"
Chuck, you are in my heart!
I think Osama's lib has a function for making rectangles. I think I can make a loading bar (bar graph) with the L variable and this function. I think it's rectangle().
"One's value is inherent; money is not inherent"
Chuck, you are in my heart!
YYYYAAAHHHHOOOOO!!!!
It's corrected (I had set the DDRx wrong).
David, i really thank you!!!
Attachment(s):
Well done!
If you run the Codevision_H example from Osama, does it work with Proteus?
Simulators are pretty clever, but they don't always work perfectly.
David.
Yeah, but I think it has some problem, because I changed the code (my whole program) for the ATMEGA32, but now it can't run.
exactly!
As I said, I changed the MCU from ATMEGA16 to ATMEGA32, but my computer can't simulate it.
Take the project exactly as Osama has written it.
"Wire" up your AVR in Proteus like Osama does. Remember /RST pin to VCC.
I am sure that it will Simulate ok.
I don't know how 'fast' it will look.
Likewise, you can use Proteus with anyone's examples.
Just make sure to connect virtual 'wires' to the same pins as the author used.
This is one of the reasons why it is essential that you make any comments truthful. e.g. if your code uses PORTD.5 for EN, don't leave a comment saying PORTA.4
I have a real-life KS0108 display that mates with 5x2 headers. So I must use MY wiring. I don't want to solder different wires just to match your virtual wires.
Incidentally, does your University (or job) provide you with CV and Proteus ?
David.
I corrected it, David. :wink: (for the mega32)
But now I have some problems setting the coordinates of the char and the loading bar.
Well, these two programs are the standard ones for learning AVR in Iran (so not just at my university).
He was, very gently, asking you where your licenses for Proteus and CodeVision came from.
Uh-hha
Thanks johan,
I think I misunderstood that sentence. :)
So I don't know exactly.
Well, I suppose Iran has a 40-year-old embargo prohibiting *selling*; but one can give...
That was a question you shouldn't answer on an open forum. "Hi everybody, i use pirate software". The response to your other thread would've given you a hint.
Who, Kartman? me?
Should I change my post?
I think there is a cultural difference between western capitalists that like to invent and sell things to make a living, and some other cultures that like everyone to be given everything from up above somehow. Houses, cars, food, electricity, water, electronics, microcontroller simulators, etc. Us dudes that write microcontroller programs for a living will give out some examples to help the new guys learn, but if I was trying to pay the mortgage selling a compiler or a simulator I wrote, I don't think I'd help the dude that stole it without paying any. Compri?
Imagecraft compiler user
Regards,
Steve A.
The Board helps those that help themselves.
I was hoping that the University supplied educational software to its students. All educational establishments can get good deals.
When I studied for my Open University degree, the OU supplied both MathCad and C++ compiler.
It does seem unfortunate that Proteus and CodeVision should have their products cracked. After all, both companies need to make a living. I would also guess that both companies would be happy to sell educational licences to any country in the world.
Software piracy exists in every country!
David.
First.
Outside of a few unicorns and bleeding edges, I have seen no great shift to containers that would support this article's hypothesis. If anything people are discovering that running any number of containers in production brings a whole host of new problems with it; in fact it's not dissimilar to the cycle we saw with stuff like OpenStack, where it was touted as the solution to All Things, some people made a lot of noise but no progress, and it has turned out to be a massive waste of time and energy.
Not to mention the article makes the classic assumption that the author's workloads are everybody's workloads; because what you don't know doesn't exist, right? Let's just say that containers are not a solution (and in fact would be a hinderance) to a whole bunch of potential workloads, and there are far more machines running those workloads than you probably think.
Here's two timely articles about running something like Docker in production....
I would tend to agree on this. Outside of some specific cases, containers are actually used more in desktop and workstation setups than server setups (most modern browsers use containers to isolate plugins, and there are all kinds of other programs that use them).
Now, this also depends on what you consider a container. chroot() is a really primitive container, and that's been around for decades and is pretty much ubiquitous, but that's also because it's very easy to use and has very specific and well understood implications. Virtual machines are hardware level containers, and they're ubiquitous too in any reasonably large deployment, but again they are easy to use and have very well understood limitations. More 'modern' containers (ones on Linux that use cgroups and namespaces) aren't even a 100% stable interface, don't have very well understood side effects, and are generally buggy (and therefore not secure and in turn not worth the effort).
As a super micro business, I use a lot of PaaS and SaaS systems (WP Engine, Galaxy, Digital Ocean, AWS) these days, because I don't want to spend time at all on dev-ops - I'm an architect/engineer, that's what I'm good at, and that's what I want to spend my time on.
It may be true that these systems are still built on general purpose containers that have merely been specialized for now, but if the trends continue, and these services get even better around the edges, I can easily see these appliance-like services taking over more and more from dedicated dev-ops teams. Right now there is a bit of a premium on these, but at a certain point they should become much more cost effective than the alternatives.
On the other hand, we probably thought we'd see more focused computing hardware/software in the personal computing space as they've gotten smaller, yet it's been the opposite. Consider the limits of BlackBerry or PalmOS against iOS, or Android, and it's clear why the latter have won the battle. They are simply more open (and Android will eventually eat Apple's lunch, since they peaked last year - they are even following their predecessors into the graveyard of once popular platforms - enterprise, where BlackBerry previously went to die).
SaaS and PaaS are not the same thing as containers. They can leverage containers for security, but they can also just as easily use virtualization for that, or in some cases, just use whatever the software already has for isolation (which is usually a primitive form of containerization). Those are becoming ubiquitous for pretty much the reasons you mention (although they still have their issues, I for one hate trying to get reasonable debugging info out of MS's exchange online e-mail service), but not all of them use containers (AWS for example is still mostly built on Xen based virtualization).
Despite this, I would still argue that containers won't obsolete traditional general purpose server systems. Just like virtualization, cloud computing, cluster computing, NUMA, and many other technologies, there are just as many cases where containers aren't useful as ones where they are. Take a shared file server for example. If it's one system serving files, it's pointless to use containers, as all the permissions checks you need are built into the server software and the OS already. Similarly, single tenant web servers don't make much sense for containers either because you don't need to isolate people. The two cases where containers are going to become commonplace are SaaS with multi-tenancy, and development, simply because those areas benefit more than almost anywhere else from the benefits containers offer.
Not sure I agree there. See Amazon EC2, Google, and Netflix. They run containers in production quite well.
EC2 is hardware level VM's running on Xen systems. AWS in general uses an odd mix of both that and containers.
As far as any other case, by number of users, containers are used more on desktop systems than servers. If you pay attention, most of the companies using them in server setups have lots of time and money to spend on getting it working correctly, because getting it working correctly is hard right now, whereas the number of desktop users is about the same as the number of people who use one of Edge, Chrome, Firefox, Opera, or Safari, as all of them use containers (to varying degrees) for plugin isolation.
So let's take a look at Google.
Is a unicorn? Check.
Has large, well defined and compartmentalised workloads? They have a very diverse set of workloads. Probably anything you can imagine, they have it. And they run pretty much everything inside of containers [1].
Has a large engineering staff to make this stuff happen? Check.
And a good motivator to save pennies across millions of machine hours? Check.
Now, what has this Google the Unicorn gone and done? They have taken the lessons that their large engineering staff has learned over the past 15 years, re-implemented Borg/Omega from scratch, and released that as an open source project called Kubernetes [2]. This is Google infra for the rest of us, and it's good stuff. Mere mortals can now wield the weapons of the Unicorns. For the small fee of having to learn something new. I have introduced Kubernetes at a couple of companies already, and I didn't have or need a large team of engineers to reap the benefits of Google-style infrastructure. These things aren't beyond our means anymore.
ahferroin7 said that he wouldn't containerise a single tenant web server. I would, in a heartbeat. Let's see what deploying our website to a Kubernetes cluster might look like and you'll see why you might want to do that:
❱ cat Dockerfile
FROM nginx:1.10.2
COPY my-html-directory /usr/share/nginx/html
❱ docker build -f Dockerfile -t myorg/my-website:1.0.0 .
❱ docker push myorg/my-website:1.0.0
❱ ls deployment/manifests/
my-website-deployment.yaml my-website-service.yaml
❱ kubectl create -f deployment/manifests/
We've built a Docker image and pushed that to our registry. We have then created a Kubernetes deployment and our Kubernetes cluster has spun up an instance of our website in a container. If the container dies, Kubernetes recreates it. If the node on which the container is running dies, Kubernetes recreates the website on another node.
But we can do better. To increase the availability of our website, let's run three instances and have Kubernetes load balance requests among them:
❱ kubectl scale deployment/my-website --replicas=3
If we see heavy load (we wish!), we can go crazy with --replicas=10 or whatever. We sleep better at night already.
Now let's say that there's a new version of nginx.
❱ cat Dockerfile
FROM nginx:1.11
COPY my-html-directory /usr/share/nginx/html
❱ docker build -t myorg/my-website:1.0.1 .
❱ docker push myorg/my-website:1.0.1
❱ kubectl set image deployment/my-website my-website=myorg/my-website:1.0.1
Kubernetes will bring one container down at a time while creating a new one with the new version. No downtime. If there's a problem and we want to rollback, we might simply do:
❱ kubectl set image deployment/my-website my-website=myorg/my-website:1.0.0
We can easily canary the new version too. Maybe introduce a couple of containers at the new version in the load balancer for a while. Make sure that there are no issues, and then update the rest.
I personally am not going back to tending to individual web servers. A containerised web server becomes an application:
* Whose recipe can be checked into a source code repository
* Whose recipe can be built on. The nginx folks can do FROM ubuntu, and I then do FROM nginx
* That can be built automatically in a repeatable manner
* That can be easily be copied, distributed and moved
* That launches in under a second
* That is automatically managed for me
[1]:...
[2]:
Like what? Databases? These used to be a Docker pain point because if you're going to mount the host's volume into your container then that container is pinned to the host and you lose the ability to move it around. However, Kubernetes has the ability to dynamically provision and use many types of persistent volumes [1].
If you can decouple the workload from the underlying compute and storage infrastructure, a lot of good things happen.
[1]:...
Databases, load balancers, bastions, DNS resolvers, NTP servers, monitoring & alerting, caches, the actual machines that run all of your containers...you know, the entire rest of the stack that even makes it possible to start a container and connect to it in the first place?
Just because you don't see it, doesn't mean it doesn't exist. It very much all exists, it exists in very large numbers, and all of that means that general purpose operating systems will continue to be important for a very long time.
Hi,
Over time, "containers" will get leaner and more streamlined, until they look exactly like the "processes" we started...
- Brend...
I know, I know! Someone will come up with a new buzzword!
And everyone knows that buzzwords are the pinnacle of science.
I hate to tell you, but Linux implements chroot() pretty much exactly like all BSD implementations, Solaris, SVR4, HP-UX, AIX, and just about every other UNIX in existence.
I assume you're intending to compare it to FreeBSD Jails, which are just containers (so is chroot() actually, it's just a really primitive one), and originated for pretty much exactly the same reason that containers are getting hyped.
Yeah, chroot() doesn't provide great protection, but it's easy to use and has very well understood limitations, hence it gets used regularly. It's also worth pointing out that it's at the core of pretty much any complete system container system (as compared to basic application or process containment).
It's also worth pointing out that there are quite a few cases where a chroot is kind of pointless, not because of the security implications, but because the functionality it provides is not needed. For example, Chrome's plugin containers don't need one, they just used namespaces and seccomp(), and many other applications that use containers internally fall into the same category.
Given the level of hype that's accompanied Docker's meteoric rise over the past couple of years, it's inevitable that there would be a proportionate backlash against Docker and containers.
A lot of the complaints that I see are against Docker specifically. And yes, the Docker tools have been downright buggy at times. But things will stabilise eventually. Whether we end up using Docker, rkt, or some other image format doesn't matter. For better or worse, containers are replacing VMs as the new deployment abstraction. They have been part of Google's secret sauce for over a decade by the way.
Personally I don't find containers that interesting. It's the orchestration part that I'm excited about. I've been using Kubernetes for just over a year now and I'm a big, big fan.
<pedantry>
English Teacher (ET?) in South Korea points out that:
"Hidden behind my hypothosis, which mainly went unsaid, was that containers are becoming the unit of software."
And:
"...(and in fact would be a hinderance)..."
Clearly one thing that all computer bods have in common is an inability to spell properly.
This isn't trolling - I see everything exactly the same here in Korea every day - but gentlemen (and gentle ladies, if there are actually any here), if English is your first language, there's no excuse.
</pedantry>
Vanders,
I'm just laughing that: of all the comment posted at osnews, you`re mispelling of "hinderance" is being singled out to make an example of.
Surely dionicio deserves an honorable mention, just because funniest grammer ever. REALLY LOOKING FORWARD. "There's no excuse" [paraphrasing].
(Just to throw some fodder into the mix, haha)
(Just to throw some fodder into the mix, haha)
I really thought (and I mean no offense) that Dionicio was a spam bot and I have to admit that I was contemplating several times to report him to an admin.
It's things like that that throw you off your balance and make your day well worth it.
OraFormsFaces 11.1.2? - BradW, Sep 18, 2012 3:54 PM
Hi JHS team!
1. Re: OraFormsFaces 11.1.2? - Steven Davelaar-Oracle, Sep 26, 2012 8:46 AM (in response to BradW)
Brad,
Did you get a reaction from Commit Consulting in the meantime?
Steven Davelaar,
Jheadstart Team.
2. Re: OraFormsFaces 11.1.2? - BradW, Sep 26, 2012 3:33 PM (in response to Steven Davelaar-Oracle)
Still no word. I tried their site, EMG, here, facebook .... I think it is a one man show and he is travelling or something??? Again, I don't like the idea of being tied to a particular release of ADF. From what I have seen, there should be nothing preventing us using it with 11.1.2 of either product. It may just be a packaging / testing issue.
Regardless, it probably shouldn't be part of the JHS release.
BradW
3. Re: OraFormsFaces 11.1.2? - BradW, Nov 1, 2012 10:27 PM (in response to BradW)
So, I spoke with Wilfred last week. We have since been able to get OraFormsFaces working with JDeveloper 11.1.2 and Oracle Forms 11.1.2. Here is the trick.
1) Make sure you patch Oracle Forms
2) Install OraFormsFaces manually in JDeveloper.
Wilfred is working on a proper JDeveloper install and IDE integration support (for drag and drop from component pallet) sometime in December.
When running, it seemed to work ok with the demo. We are working on seeing what we need to do from our application perspective.
Points to note: This is a small company and it is still hard to reach Wilfred
Hope this helps,
BradW
4. Re: OraFormsFaces 11.1.2? - Winnie The Pooh-Oracle, Nov 2, 2012 8:35 PM (in response to BradW)
I have tried OraFormFaces with JDeveloper 11.1.2.1 and Oracle Forms 11gR2. First of all, you have to install everything manually including the jsp tag library and the libraries and jars in oracle forms.
Then the oraFormFaces tag library only works inside of a jsp or jspx page. It does not work inside of an ADF page fragment (jsff), nor inside of a jsf page. It does not even show up in the component palette.
The irony is that if you use the 'JHeadStart OraFormFaces Generator' then you would expect everything to work. But the 'JHeadStart OraFormFaces Generator' puts the <off:form> tag inside of the adf page fragment created for your form. When you run your application created from the JHeadstart OraFormFaces Generator, the tag is never interpreted by the server, it's printed as is in the html.
For the tag to work you will have to create a jspx page for each form that you want to display. I was able to run the oraFacesForm from a jspx page, but on save I got the error 'FRM-40508: ORACLE error: unable to INSERT record.' In order to debug this I will have to dig through the oraFormFaces javascript. I am thinking of not going this route as oraFormFaces has not been released for jdev 11.1.2.1 by commit consulting. I am assuming commit-consulting will have to create an ADF declarative component for this to work.
Now, I am confused: why does the JHeadstart team have this option when it does not even work, and why is this even documented in the developer's guide when it has not been tested with 11.1.2.1?
5. Re: OraFormsFaces 11.1.2? - wilfred, Nov 5, 2012 6:40 PM (in response to Winnie The Pooh-Oracle)
This is Wilfred, the creator of OraFormsFaces.
You are correct that JDeveloper/ADF 11.1.2.x is not fully supported yet. As you say, it requires manual installation. We are working on a release that's fully compatible with 11.1.2.x but until now there are no licensed customers that needed this feature so it kept slipping and other issues got more priority.
The current version of OraFormsFaces uses JSF 1.x which should work just fine in JDev/ADF 11.1.2.x as long as you use the "JSP XML" document type. This ensures you are not using facelets and JSF 2.0. With the "old" (JSF 1.x) controller, page fragments should work as well even in 11.1.2.
If you run a page and the off:form is in the rendered html this seems to indicate the JSP tag library is missing from your project or the appropriate off namespace is not declared in the root element of the page (fragment).
The FRM-40508 unable to INSERT seems to indicate a SQL error during the INSERT statement at the database. This seems to be a "normal" Forms error, most likely due to a non-existent database table, a table structure mismatch with the block in Forms (dropped columns?), or connecting with a database user that does not have insert privileges on this table. This doesn't sound OraFormsFaces related, as we do not alter any behavior between Oracle Forms and the database.
PS. We are actively working on the 11.1.2 release and will try to get it out there as soon as possible.
6. Re: OraFormsFaces 11.1.2? - Steven Davelaar-Oracle, Nov 5, 2012 8:17 PM (in response to wilfred)
And regarding the JHeadstart documentation: JHeadstart and OraFormsFaces are separate products with their own release schedules. As we expected 11.1.2 support in OraFormsFaces to follow soon after the JHeadstart release, we did not change the documentation and wizards; otherwise we would have had to make a new JHeadstart release just for OFF. As Wilfred explained, it turned out to take a little longer before OFF supports 11.1.2, so in hindsight we should have changed the doc and wizard. Anyway, sorry for the confusion; hopefully things will be sorted out quickly by Wilfred's team.
Steven Davelaar,
JHeadstart team.
7. Re: OraFormsFaces 11.1.2? - 51564, Dec 20, 2012 5:04 PM (in response to wilfred)
Hi Wilfred,
This is Alex Varghese. I have posted multiple messages on your website contact form requesting you to contact me regarding a production license for OraFormsFaces. Could you please look at your messages and email me?
Thank you,
Alex
#include <TextMsgTrans.h>
List of all members.
Definition at line 9 of file TextMsgTrans.h.
[inline]
default constructor, use type name as instance name
Definition at line 16 of file TextMsgTrans.h.
[inline, protected]
constructor for subclasses (which would need to provide a different class name)
Definition at line 22 of file TextMsgTrans.h.
[private]
don't call (copy constructor)
[inline, virtual]
Just like a behavior, called when it's time to start doing your thing.
Reimplemented from BehaviorBase.
Definition at line 32 of file TextMsgTrans.h.
By defining here, allows you to get away with not supplying a processEvent() function for the EventListener interface. By default, does nothing.
Definition at line 37 of file TextMsgTrans.h.
Just like a behavior, called when it's time to stop doing your thing.
Definition at line 46 of file TextMsgTrans.h.
[inline, static]
Gives a short description of what this class of behaviors does... you should override this (but don't have to).
If you do override this, also consider overriding getDescription() to return it
Definition at lines 51 and 52 of file TextMsgTrans.h.
don't call (assignment operator)
[protected]
the trigger to match messages against
Definition at line 59 of file TextMsgTrans.h.
Referenced by processEvent().
Introduction
C#
is primarily a statically typed language. That means the compiler needs to know
in advance about the data type of a variable. In the absence of this
information, the compiler will throw a compilation error and will refuse to
compile the code. In spite of the advantages offered by the statically typed languages,
dynamic languages have their own place in application development. For example,
most of the web sites developed today make use of JavaScript in some way or the
other. Languages such as Python and Ruby are also popular amongst developers.
The C# language now supports dynamic features through the Dynamic Language
Runtime (DLR). These features include the dynamic type and the DynamicObject
and ExpandoObject classes. This article explains these features and provides examples
illustrating how they are used.
Note:
DLR functionality in .NET
4.0 is encapsulated in the System.Dynamic namespace. Ensure that in the example
that follows you have imported this namespace in the class files.
Understanding the Dynamic Data Type
The C# compiler expects that you clearly specify the data
type of a variable before you compile the code. In other words it enforces
compile time type checking on your code. Though this works great in most cases,
at times you may want to bypass this compile time checking. Consider for
example, that you wish to execute JavaScript stored in an external file and you
want to exchange variables between your C# code and JavaScript code. In such
cases, your C# code cannot detect the data types used in the script at compile
time and you must skip the type checking. Considering such needs C# introduced
the dynamic type. A dynamic type allows you to skip compile time type checking.
Only at runtime when the code is actually executed, errors (if any) will be
generated.
In order to understand how the dynamic type works, let’s
develop a simple Console Application and use dynamic variables. Begin by
creating a new Console Application and then add the following code in the
Main() method.
static void Main(string[] args)
{
    dynamic d;
    d = 100;
    Console.WriteLine(d + " - " + d.GetType());
    d = "Hello World!";
    Console.WriteLine(d + " - " + d.GetType());
    Console.ReadLine();
}
The above code declares a variable (d) of type dynamic. The
variable is then assigned an integer value (100) and the data type is outputted
on the console window. Next, the same variable d is now assigned a string value
and again its data type is outputted. The following figure shows a sample run
of the application.
Figure 1: Sample run of the Console application
Notice the above figure carefully. The first line shows that
d is an integer type whereas the second line shows that d is a string. This
means that a dynamic variable can change its data type at runtime. If you try
to perform invalid operations on the data (using string manipulation functions
on an integer or using mathematical functions on a string for example) then an
error will be generated at runtime.
Difference Between var and Dynamic Types
At first glance you may find the dynamic type is the same as
variables of the var type. However, they are not the same. When you use a var
keyword (say in a LINQ query) the data type is detected in a delayed fashion
but once a variable is assigned a value the data type is fixed. It cannot be
changed later. In the case of dynamic types, however, the data type can change
multiple times during the execution of the application. As long as the
operation under consideration is valid the runtime won’t have any problem in
dealing with a dynamic variable. Just to explain this difference, consider the
following fragment of code:
string[] strMonths = { "Jan", "Feb", "Mar" };
int[] numMonths = { 1, 2, 3 };

//OK because it is first assignment
var varMonths = from m in strMonths
                select m;

//ERROR because varMonths is already a collection of strings
//and now cannot take integers
varMonths = from m in numMonths
            select m;
As you can see above, var cannot change its data type once
assigned whereas dynamic variable can change its data type as we did in the
previous example.
Understanding Dynamic Objects
Normally when you wish to use some object in your code, you
first need to create a class and write properties and methods in it. You can
then create objects of that class, set their properties and invoke methods on
them. At times, however, you may want to create objects and set their
properties dynamically. That means you won’t have a class against which the
compiler can validate your property assignments or method calls. Why is something
like this ever needed? Consider that you are exchanging data between an
external script and C#. Now your C# code won’t be aware of the objects exposed
by the script at compile time for obvious reasons. So in your C# code you will
be assigning properties and invoking methods without knowing if they really
exist. Errors, if any, will be raised only at runtime.
One example of such a dynamic object can be found in ASP.NET MVC 3. The DLR provides two classes that
allow you to create your own dynamic objects. They are DynamicObject and
ExpandoObject, both of which implement the IDynamicMetaObjectProvider
interface. The IDynamicMetaObjectProvider interface allows you to bind
operations to the underlying object at runtime. In the following sections you
will learn how to make use of DynamicObject class as well as ExpandoObject
class.
DynamicObject Class
The DynamicObject class lets you control dynamic operations by overriding methods such as TryGetMember, TrySetMember and TryInvokeMember. In order to understand
how these three methods can be used, let’s create a dynamic object – Employee.
how these three methods can be used let’s create a dynamic object – Employee.
Add a new class to the Console Application you created
earlier and key the following code into it:
public class Employee : DynamicObject
{
    Dictionary<string, object> properties = new Dictionary<string, object>();

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        if (properties.ContainsKey(binder.Name))
        {
            result = properties[binder.Name];
            return true;
        }
        else
        {
            result = "Invalid Property!";
            return false;
        }
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        properties[binder.Name] = value;
        return true;
    }

    public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
    {
        dynamic method = properties[binder.Name];
        result = method(args[0].ToString(), args[1].ToString());
        return true;
    }
}
As you can see the Employee class inherits from the
DynamicObject base class and then overrides the TryGetMember, TrySetMember and
TryInvokeMember methods. The TryGetMember() method returns a boolean value
indicating whether the operation was successful or not. The actual property
value is retrieved via the result output parameter. The property values are
stored in a dictionary (properties). You then check whether the property whose
value is to be retrieved (binder.Name) exists in the dictionary. Accordingly,
result output parameter is set and true / false is returned.
The TrySetMember() method simply stores a property and its
value in the dictionary and returns true. If you wish to restrict property
names or values based on certain criteria you will add that logic here and
return true / false accordingly. In our case we don’t have any such restriction
and the code returns true.
The TryInvokeMember() method retrieves a reference to an
anonymous function (this will be clear in a moment) and invokes that function
by passing the required parameters.
Once you complete the Employee class, modify the Main()
method to include the following lines of code:
d = new Employee();
d.FirstName = "Tom";
d.LastName = "Jerry";
d.BirthDate = new DateTime(1960, 12, 01);
Func<string, string, string> method = (a, b) => a + " " + b;
d.GetData = method;
Console.WriteLine(d.FirstName + " " + d.LastName + "..." +
    d.GetData("Tom", "Jerry") + " - " + d.GetType());
The above code assigns a new instance of the Employee class to
the dynamic variable (d). It then sets three properties on the dynamic object:
FirstName, LastName and BirthDate. Notice that the Employee class nowhere
defines these properties; you are setting them dynamically. The code then
creates an anonymous method that simply concatenates and returns the two
parameters passed to it. The GetData dynamic method is then set to point to this
anonymous method. Finally, the FirstName and LastName values are output along with
a call to the GetData() dynamic method. The following figure shows a sample run of
the above code:
Figure 2: Sample run after assigning a new instance of the Employee class to the dynamic
variable
ExpandoObject Class
In the case of the DynamicObject class, you need to do more
work by inheriting and overriding certain methods. In the process you get more
control over how the resultant dynamic object should behave. The ExpandoObject
class is a sealed class, which means you cannot inherit from it. It provides
a simpler implementation of a dynamic object, ready for you to consume in your
applications without much work. The same code that we used in the preceding
example can be written using ExpandoObject as follows:
d = new ExpandoObject();
d.FirstName = "Tom";
d.LastName = "Jerry";
d.BirthDate = new DateTime(1960, 12, 01);
Func<string, string, string> method2 = (a, b) => a + " " + b;
d.GetData = method2;
Console.WriteLine(d.FirstName + " " + d.LastName + "..." + d.GetData("Tom", "Jerry") + " - " + d.GetType());
Notice that the above code creates an instance of
ExpandoObject and sets properties on it as before.
Figure 3: An instance of ExpandoObject with properties
Summary
The dynamic type allows you to write code that bypasses
compile time type checking. A variable of dynamic type can point to different
types at runtime. You can create your own dynamic objects using DynamicObject
and ExpandoObject classes. In order to use the DynamicObject class you need to
create a class that inherits from DynamicObject class and then override
TryGetMember, TrySetMember and TryInvokeMember methods. This approach gives you
precise control over the properties and methods the dynamic object can have. On
the other hand, ExpandoObject provides a simple, ready-to-use implementation of a
dynamic object. Using these features, you can create dynamic objects that allow
you to add properties and methods to them dynamically.
|
https://www.codeguru.com/csharp/using-dynamicobject-and-expandoobject/
|
CC-MAIN-2021-39
|
refinedweb
| 1,633
| 56.25
|
Version: 1.0a
Contact: Sean Lyndersay, Microsoft Corporation
Updated: March 14, 2006
Contents: Overview · Background · Example · Simple List Extensions (Namespace · Identifying a List · Lightweight Property Description: Overview, Sorting, Grouping) · Additional Information · Revision History · License Information
1. Overview
The Simple List Extensions are designed as extensions to existing feed formats, such as RSS 2.0 and Atom, to make exposing ordered lists of items easier and more accessible to users.
The term “list,” as used in this document, describes an ordered collection of items with similar properties. For example, a photo album may be described as a “list of photos.”
Note: This specification is designed to be as simple as possible. Additional information, explanation and examples can be found here.
1.1 Background
A feed is a collection of items. The feed typically represents either a subset of or the entire collection of items that exist on a server.
Consumers of feeds need to process feeds differently based on the kind of feed content. For example, a feed that contains the entire collection of items on the server should be processed differently from a feed that contains only the most recently added or updated items. Additionally, certain information in the feed may be more relevant or useful for the consuming client to expose to the user than others.
The purpose of the Simple List Extensions is to define a set of XML elements that can be used in a feed to more completely describe the intent of the publisher to the client, so that the client can provide a more accurate and useful representation of the feed to the user.
This specification includes two extensions, which can be used independently from each other: the first allows the publisher to indicate that the feed is a "list", which implies that the client should treat the feed differently from the way it treats normal feeds. The other extension can be used on any kind of feed (both feeds and lists) and allows the publisher to indicate to the client how it should treat certain item metadata.
2. Example
This is an example of how the Simple List Extension elements would be used to define a list of book items (using the cf:treatAs element), with the following additional features:
3. Simple List Extensions
3.1 Namespace
The following namespace declaration is used for XML elements in this document.
Namespace
Description: This namespace defines all of the elements in this specification.
Prefix: cf
3.2 Identifying a List
This XML element allows the publisher of a feed document to indicate to the consumers of the feed that the feed is intended to be consumed as a list.
Syntax: <cf:treatAs>list</cf:treatAs>
This element should be located in the feed as a child element of the RSS 2.0 <channel> element or the Atom <feed> element.
Example:
<rss>
  <channel>
    <cf:treatAs>list</cf:treatAs>
    <title>...</title>
    ...
The consumer of the feed should use this information to treat the content of the feed as if it represents a complete, ordered list of content from the server. This implies that the client should respect the order from the server, and assume that items that are missing from the list are no longer part of the list (and should be deleted from the local copy of the list, if one is maintained).
3.3 Lightweight Property Description
3.3.1 Overview
The Simple List Extensions include a set of XML elements which enable the publisher to indicate to the client which properties are useful for sorting and grouping (or filtering) purposes. These elements make reference to XML elements that are child-elements within the items of the same feed, using the supported extension mechanism of the feed format.
Syntax:
<cf:listinfo>
  <cf:sort ... />
  <cf:group ... />
</cf:listinfo>
Details: The cf:listinfo element may contain zero or more instances of either of two child-elements: cf:sort and cf:group.
Each instance of the cf:sort or cf:group element makes reference to a child-element which appears within each of the items of the feed. These elements may be considered properties of the item, and will be referred to as such below.
For example, an item that describes a book may include a property book:firstedition that represents the date on which the first edition was published. Each item may also contain a book:genre property that describes the genre of the book. In this example, the book prefix would be mapped to a publisher-defined namespace URI.
Only certain types of properties can be supported by the Simple List Extensions 1.0. Specifically:
This extension can be used on any type of feed -- i.e. a feed may contain cf:listinfo without containing cf:treatAs element (and vice versa).
3.3.2 Sorting
The cf:sort element is intended to inform the client that the property to which it refers is one that is “sortable”; that is, the client should provide a user interface that allows the user to sort on that property.
The cf:sort element can also be used to provide a label for the default sort that appears in the list (in this case, only the label attribute should be included).
The cf:sort element contains the following attributes:
3.3.3 Grouping
The cf:group element is intended to inform the client that the property to which it refers is one that is “groupable”; that is, the client should provide a user interface that allows the user to group or filter on the values of that property. Groupable properties should contain a small set of discrete values (e.g. book genres are perfect for groups).
The cf:group element contains the following attributes:
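The attribute lists for cf:sort and cf:group did not survive in this copy. As an illustrative sketch only (the attribute names and namespace URIs below are assumptions based on the published specification, not taken from this text), a channel using both elements might look like this:

```xml
<channel>
  <cf:treatAs>list</cf:treatAs>
  <cf:listinfo>
    <cf:sort ns="http://example.org/ns/book" element="firstedition"
             label="First edition" data-type="date" />
    <cf:group ns="http://example.org/ns/book" element="genre" label="Genre" />
  </cf:listinfo>
  ...
</channel>
```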
4. Additional Information
This specification is designed to be as simple as possible. Additional information, explanation and examples can be found at:
5. Revision History
1.0: Original specification release.
1.0a: Updated spec to fix typos, add examples and improve clarity.
6. License Information
Microsoft’s copyrights in this specification are licensed under the Creative Commons Attribution-ShareAlike License (version 2.5). To view a copy of this license, please visit..
|
http://msdn.microsoft.com/de-de/xml/bb190612(en-us).aspx
|
crawl-002
|
refinedweb
| 1,043
| 52.8
|
12 June 2008 00:14 [Source: ICIS news]
TORONTO (ICIS news)--RAG-Stiftung may sell another stake in Evonik Industries to an individual investor rather than to the public, German daily Süddeutsche Zeitung reported on Wednesday, citing Evonik CEO Werner Müller.
RAG-Stiftung earlier this month sold a 25.01% stake in Evonik, which includes the Degussa specialty chemicals business, to investment company CVC Capital Partners for €2.4bn ($3.75bn). At the time it affirmed plans to go public by 2013.
But while RAG’s articles of association laid down that 75% of Evonik should be broadly owned by 2018, selling another stake to an individual investor remained a possibility, Müller told the paper in a detailed interview.
“If there is positive momentum on the stock exchange, then this would be an opportunity that should be used,” he was quoted as saying.
“However, it could also be that … investors will again offer us prices we could not achieve on the stock exchange,” he said.
Müller did not expect such a sale to take place within the next two years, he said.
Despite the stipulations in its articles, RAG also had an obligation to generate as much capital as possible.
RAG, a government foundation, is charged with winding up Germany’s hard coal mining operations.
|
http://www.icis.com/Articles/2008/06/12/9131787/investor-may-get-further-stake-in-evonik-press.html
|
CC-MAIN-2014-42
|
refinedweb
| 213
| 61.16
|
First of all, the declaration of the sum function does not match the definition (one receives a reference to a double and the other just a double).
Also the parameter can be passed as a const reference
Try with
#include <iostream>
using namespace std;
double sum (const double &);
int main()
{
double ctr=1;
double x;
double total;
total = sum(ctr);
cout << total;
}
double sum (const double &x)
{
double temp;
if (x<10)
temp = sum(x+1)+1/(x*x);
else
temp = 1/x;
return temp;
}
Hope this helps
Tincho
if you don't mind, could you elaborate a little bit about const references and how they work..... thanks for the help
When you pass an object to a function you pass the whole object
void function( MyClass object )
but when you pass a reference to an object you pass just the address of that object (generally smaller than the object itself), so it is often recommended that you pass objects as references.....
void function( MyClass & object )
When you pass a const reference to an object
void function( const MyClass & object )
it means that you'll pass the address of the object, but the object can't be changed.
you can only call const member functions on that object or read its member variables, but you cannot modify it.
I guess that's almost everything
If you need any more help let me know
Tincho
thanks again
:D
Tincho
|
https://www.experts-exchange.com/questions/20797601/Linking-errors.html
|
CC-MAIN-2018-26
|
refinedweb
| 234
| 51.04
|
As with any software drivers: they are never perfect. The same applies to the Processor Expert components delivered in CodeWarrior for MCU10 or the DriverSuite too. That’s why I have created many more components which are available on GitHub here. All these components are using other components to reach the hardware. But what if a functionality is not exposed through the low-level component? Or what if I want direct access to the hardware? Up to now I had to choose either the Processor Expert way, or to do it in the ‘traditional’ way using an SDK like CMSIS or vendor supplied header files.
With MCU10.4, I noticed that there is another way: PDD (Physical Device Driver).
Documentation
The Physical Device Driver manual is located under
CodeWarrior for Microcontrollers V10.x > Processor Expert Manuals > Processor Expert User Manual > Application Design
Ok, honestly, that documentation does not give much. It is very, very light, and maybe I have missed the extensive documentation somewhere? I did some trial-and-error, used the generated Processor Expert drivers as examples, and I think I have found out how this works.
💡 The PDD ‘functions’ are not functions. They are C macros. Processor Expert uses them in the Kinetis drivers. I had a look at how things are used in the BitIO_LDD component. Unfortunately PDD macros were not used everywhere, so this makes it hard to follow the code in the driver.
LED with PDD
In this tutorial, I’m using the red RGB LED on the FRDM-KL25Z board which is connected to pin 18 on PORTB (PTB18). In order to use it, I need to
- Clock the port module
- Configure the MUX (Multiplexer) to use it as GPIO (General Purpose I/O)
- Initialize the port value
- Configure the pin as output pin
💡 Technically, step 3 could be performed after step 4. But it is good practice to initialize the port value *before* driving the value to the pin. This avoids glitches or a sudden change on the output pin.
Header Files
First I need to include the header file for each PDD block. All PDD header files are located in
<CodeWarrior Installation Path>\MCU\ProcessorExpert\lib\Kinetis\pdd\inc\
To access the port pins, I need to include the PORT_PDD.h. Additionally I need access to the GPIO and SIM (System Integration Module):
#include "PORT_PDD.h" #include "GPIO_PDD.h" #include "SIM_PDD.h"
💡 The best way to find out which header file to include is to have a look at the function name prefix. E.g. the functions in PORTB group show ‘PORT_PDD’, so this is what I need to include (PORT_PDD.h)
Using PDD Macros
First, I need to clock the module with setting the clock gate. This is a setting of the SIM (System Integration Module):
💡 I can drag&drop the methods to my source file, see this post.
Pressing ‘F3’ (or context menu and then ‘Open Declaration’) shows me the definition of this macro:
As you can see, the macros are not simple :-(. But the doxygen comments for each macro give at least some hints. What I need to give is a base address, an index and a state. Here this is SIM_BASE_PTR with SIM_PDD_CLOCK_GATE_PORTB, and PDD_ENABLE enables the clock gate for it:
/* turn on clock to PORTB module: */
SIM_PDD_SetClockGate(SIM_BASE_PTR, SIM_PDD_CLOCK_GATE_PORTB, PDD_ENABLE);
Next I need to set the pin muxing. The ALT1 function enables GPIO mode for the pin:
/* use ALT1 multiplexer mode so PTB18 is a GPIO pin: */
PORT_PDD_SetPinMuxControl(PORTB_BASE_PTR, 18, PORT_PDD_MUX_CONTROL_ALT1);
Next, I initialize the value of the port pin:
/* init register output value (bit set): as the LED is low active, the LED will be turned off */
GPIO_PDD_SetPortDataOutputMask(FPTB_BASE_PTR, (1<<18));
Followed by setting the port as an output port:
/* Set port direction bit: configure PTB18 as output */
GPIO_PDD_SetPortDirectionMask(FPTB_BASE_PTR, (1<<18));
Now I can use the different GPIO_PDD macros to affect the output pin:
GPIO_PDD_ClearPortDataOutputMask(FPTB_BASE_PTR, (1<<18)); /* clear PTB18 bit: turn LED on */
GPIO_PDD_SetPortDataOutputMask(FPTB_BASE_PTR, (1<<18)); /* set PTB18 bit: turn LED off */
GPIO_PDD_TogglePortDataOutputMask(FPTB_BASE_PTR, (1<<18)); /* toggle PTB18 bit: LED will be on again */
done 🙂
Source Code
And here is the complete code:
/* include PDD header files */
#include "PORT_PDD.h"
#include "GPIO_PDD.h"
#include "SIM_PDD.h"

void RedLEDInit(void) {
    /* turn on clock to PORTB module: */
    SIM_PDD_SetClockGate(SIM_BASE_PTR, SIM_PDD_CLOCK_GATE_PORTB, PDD_ENABLE);
    /* use ALT1 multiplexer mode so PTB18 is a GPIO pin: */
    PORT_PDD_SetPinMuxControl(PORTB_BASE_PTR, 18, PORT_PDD_MUX_CONTROL_ALT1);
    /* init register output value (bit set): as the LED is low active, the LED will be turned off */
    GPIO_PDD_SetPortDataOutputMask(FPTB_BASE_PTR, (1<<18));
    /* set port direction bit: configure PTB18 as output */
    GPIO_PDD_SetPortDirectionMask(FPTB_BASE_PTR, (1<<18));
}

void RedLEDTest(void) {
    /* now play with the port value ... */
    GPIO_PDD_ClearPortDataOutputMask(FPTB_BASE_PTR, (1<<18)); /* clear PTB18 bit: turn LED on */
    GPIO_PDD_SetPortDataOutputMask(FPTB_BASE_PTR, (1<<18)); /* set PTB18 bit: turn LED off */
    GPIO_PDD_TogglePortDataOutputMask(FPTB_BASE_PTR, (1<<18)); /* toggle PTB18 bit: LED will be on again */
}
💡 The number ’18’ for PTB18 could be replaced with a macro.
Summary
PDD gives me direct access to low-level functionality on the microcontroller. And it helps me to directly access functionality of the device which is missing in the normal Processor Expert drivers, such as to use center-aligned PWM mode. While having PDD available is a good thing, it comes with limitations to consider:
- I only see it available for ARM devices, and not e.g. for my S08 or ColdFire projects. So it helps me for ARM, but not for my other projects I still have to support.
- Because of the low-level nature of the PDD, things are not likely portable between device families. So if I want to keep the software compatible for different micro controllers, I need to take special care.
The weakest part of the PDD is the documentation. It took me a long time and a lot of searching in the header files and generated driver code to find out the proper macro usage. But then, things worked out very well :-).
❓ If the doxygen comments in the header files of the PDD macros actually told me which arguments to use, that would be a big help. I had to do a lot of guessing and needed to play ‘human compiler and preprocessor’ to find out what works and what does not.
I need to explore PDD more, as the macros should help me either to remove some hard-coded initialization in my code or to make my drivers more efficient. So more work to do 😉
Happy PDDing 🙂
Thanks for a nice article!
I would just add a note that PDD macros are a good companion to Peripheral Initialization components (like Init_GPIO, Init_TPM etc..) that provide complete initialization code for a peripheral so the user can then just use PDD macros for runtime control of the peripheral.
In the Components view, the Peripheral Initialization components contain a PDD sub-folder with a list of macros only relevant for the selected peripheral.
Hi Petr,
yes, I found the PDD macros inside the Init components after writing that article, thanks for that hint!
Pingback: IoT: FreeRTOS Down to the Micro Amps | MCU on Eclipse
Pingback: Tutorial: Data Logger with the FRDM-K64F Board | MCU on Eclipse
Pingback: Tutorial: PWM with DMA on ARM/Kinetis | MCU on Eclipse
Pingback: Updated Freedom Board Logic Analyzer with DMA | MCU on Eclipse
Pingback: NeoShield: WS2812 RGB LED Shield with DMA and nRF24L01+ | MCU on Eclipse
Hi,
Did you use PDD also in Kinetis Design Studio? I cannot find it anywhere.
Yes, but they are not available for Kinetis SDK projects. Create a project with only Processor Expert (no Kinetis SDK).
Pingback: Overview: Processor Expert | MCU on Eclipse
Pingback: Porting Processor Expert Projects to MCUXpresso IDE | MCU on Eclipse
I think you should use GPIO_PDD_SetPortDirectionMask, since that macro doesn’t change ALL of the pin directions on the port at once.
GPIO_PDD_SetPortDirection Sets direction of every pin in the port.
Yes, that indeed is much better (I have updated the article).
Thanks!
|
https://mcuoneclipse.com/2013/05/11/low-level-coding-with-pdd-physical-device-driver/
|
CC-MAIN-2020-10
|
refinedweb
| 1,314
| 60.35
|
I have a situation like this. I cannot see any errors, but I am not getting my results.
@ApplicationScoped
public class A {
    private B b;

    @Inject
    public A(B b) {
        this.b = b;
    }
}

@Singleton
public class B {
    private A a;

    @Inject
    public B(A a) {
        this.a = a;
    }
}
I'd avoid this circular dependency; there are a few reasons to do that.
As Oliver Gierke commented:
Especially constructor injection actually prevents you from introducing cyclic dependencies. If you do introduce them you essentially make the two parties one because you cannot really change the one without risking to break the other, which in every case is a design smell.
Here is a small example of what I might do.
public class A {
    private B b;

    @Autowired
    public A(B b) {
        this.b = b;
    }

    public void doSomeWork() {
        // WORK
    }

    public void doSomeWorkWithB() {
        b.doSomeWork();
    }
}

public class B {
    private A a;

    @Autowired
    public B(A a) {
        this.a = a;
    }

    public void doSomeWork() {
        // WORK
    }

    public void doSomeWorkWithA() {
        a.doSomeWork();
    }
}
After refactoring it might look like this.
public class A {
    private C c;

    @Autowired
    public A(C c) {
        this.c = c;
    }

    public void doSomeWork() {
        // WORK
    }

    public void doSomeWorkWithC() {
        c.doSomeWorkThatWasOnA();
    }
}

public class B {
    private C c;

    @Autowired
    public B(C c) {
        this.c = c;
    }

    public void doSomeWork() {
        // WORK
    }

    public void doSomeWorkWithC() {
        c.doSomeWorkThatWasOnB();
    }
}

public class C {
    public void doSomeWorkThatWasOnB() {
        // WORK
    }

    public void doSomeWorkThatWasOnA() {
        // WORK
    }
}
|
https://codedump.io/share/5aV3p2E2FVQV/1/handle-circular-dependency-in-cdi
|
CC-MAIN-2017-26
|
refinedweb
| 232
| 51.55
|
Asked by:
SSRS integrating .Net MVC: my ReportViewer web control always links to jquery.min.js version 3.1.1
Question
- User945585929 posted
Hi,
I am working on .Net MVC application with SSRS report integration
I am using the latest version of the Report Viewer controls for WebForms (150.1427.0) and the ReportViewerForMvc NuGet packages.
Everything is working as expected, but in the security scan results we found the report pages are referring to old versions of jQuery and jQuery UI, as shown below:
<script src="/reservered.reportviewerwebcontrol.axd?OpType=Resource&Version=15.0.1449.0;Name=Microsoft.Reporting.WebForms.Scripts.Jquery.min.js" Type="text/javascript"></script>
The ReportViewer page is loading the jQuery v3.1.1 js file where it should use the latest jQuery. Need help on this.
As per the documentation and release notes for the Report Viewer controls for WebForms and WinForms of SSRS, since version 150.1357.0 jQuery has been updated to version 3.3.1.
Why is this loading the 3.1.1 jQuery, and how do I make it use the latest jQuery version?
Can someone help me on this?
Thursday, May 20, 2021 9:41 AM
All replies
- User475983607 posted
I assume you have a layout page and are using bundling. Most likely the latest version of jQuery that exists in your project is 3.1.1. Use NuGet to upgrade jQuery to 3.3.1 or the latest jQuery version.
I was able to reproduce your findings by downgrading jQuery to 3.1.1. The Report Viewer uses the version you've defined in your project.
Thursday, May 20, 2021 11:31 AM
- User-474980206 posted
Because the SSRS team did not want any external dependencies, the jQuery file is a resource in the dll. Did you check the resource link to see the version? Maybe the page has its own reference.
Thursday, May 20, 2021 2:34 PM
- User945585929 posted
Thanks you for your quick response,
As I am in development mode, I am not bundling.
I have upgraded my application to the latest jQuery version, 3.6, and the application's jQuery files show the latest one only.
Still, the URL below is referring to 3.1.1 only:
reservered.reportviewerwebcontrol.axd?OpType=Resource&Version=15.0.1449.0;Name=Microsoft.Reporting.WebForms.Scripts.Jquery.min.js"
Please find the attached screenshots for your reference.
Is it because the ReportViewerForMvc NuGet package hasn't had an update since 2018?
Or am I missing something?
Thursday, May 20, 2021 3:15 PM
- User945585929 posted
Thanks Bruce for your response,
Yes, it could be that the jQuery file is a resource in the dll. But how do I get the latest jQuery files from the dll?
Thursday, May 20, 2021 3:21 PM
- User-474980206 posted
if you type:;Name=Microsoft.Reporting.WebForms.Scripts.Jquery.min.js
what version of jquery is it?
Thursday, May 20, 2021 3:28 PM
- User945585929 posted
It's 3.1.1.
But the application-referenced jQuery has the latest version, 3.6.
Thursday, May 20, 2021 3:30 PM
- User-474980206 posted
as I stated, the report viewer dll includes its own copy of jQuery (I assume it uses noConflict mode). To update the jQuery version, you need MS to release a new version of the dll.
is this the nuget package you are using (last update 2 months ago):
if it needs a newer version for a security update (not functionality), then open a ticket with MS.
Thursday, May 20, 2021 6:20 PM
- User945585929 posted
HI,
As per their release notes (link below), they upgraded jQuery to version 3.3.1 in the 2019 releases.
I am not sure why the application reports are still referring to the jQuery 3.1.1 version.
In my reportviewerwebfarm.aspx page I am using the below assembly reference, and it is the latest:
<%@ Register assembly="Microsoft.ReportViewer.WebForms, Version=15.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" namespace="Microsoft.Reporting.WebForms" tagprefix="rsweb" %>
Friday, May 21, 2021 6:09 AM
- User945585929 posted
HI ChaoDeng,
Thanks for the response..
Not exactly the same, and I don't want to use an IFrame either.
But as per the release notes of ReportViewerControl.WebForms, it should support the latest version of jQuery; I am not sure what I am missing.
Friday, May 21, 2021 5:50 PM
|
https://social.msdn.microsoft.com/Forums/en-US/82b06399-58b9-4793-aaf1-6e5579829872/ssrs-integrating-net-mvc-my-reportviewer-webcontrol-alway-link-to-jqueryminjs-311-version?forum=aspmvc
|
CC-MAIN-2022-27
|
refinedweb
| 786
| 59.9
|
Today, we will develop a moderately complex filter and learn about it in some level of detail.
The application
We will show a list of products. User can filter the list by minimum price and view only products that are on sale.
Build the basics
We will first build the basic application without any filter. It will show a list of all products.
Here is the code for the controller.
<script>
angular.module("SimpleModule", [])
    .controller("MyCtrl", function($scope) {
        $scope.minPrice = 0;
        $scope.onSale = false;
        $scope.products = [
            { name: "Baseball bat", onSale: true, price: 10.99 },
            { name: "Baseball glove", onSale: true, price: 20.99 },
            { name: "Golfing glove", onSale: false, price: 40.99 }
        ];
    });
</script>
Now, the HTML:
<body ng-app="SimpleModule" ng-controller="MyCtrl">
  <input type="number" ng-model="minPrice" />
  <br/>
  <input type="checkbox" ng-model="onSale" /> On sale
  <div ng-repeat="product in products">
    {{product.name}} {{product.price}}
  </div>
</body>
It is important to use input type number for the minimum price text field. This will preserve the data type of the minPrice variable as number.
Add the filter
Now, we will add a filter to the module. This filter will take in an array of all products and return a new array of products that match the search criteria.
.filter("productFilter", function() { return function(input, lowPrice, showSaleOnly) { lowPrice = lowPrice | 0; return input.filter(function(product) { return (product.price >= lowPrice) && (showSaleOnly ? product.onSale : true); }); } })
Note that the filter takes two configuration parameters – the lowest price and whether only products on sale should be shown.
Actual filtering is being done by the Array.filter method which is supported by IE9 and up.
Let’s use the filter in HTML:
<div ng-repeat="product in products | productFilter:minPrice:onSale">
  {{product.name}} {{product.price}}
</div>
Note that we are supplying the minPrice and onSale model variables as configuration parameters to the filter. As the user interacts with the input elements, these values will change and AngularJS will call the filter function to refresh the list.
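Since the filter body is plain JavaScript built on Array.prototype.filter, its logic can be exercised outside Angular. A minimal standalone sketch of the same function with the controller's sample data:

```javascript
// Same logic as the productFilter above, as a standalone function.
function productFilter(input, lowPrice, showSaleOnly) {
  lowPrice = lowPrice || 0; // default when undefined
  return input.filter(function (product) {
    return product.price >= lowPrice &&
           (showSaleOnly ? product.onSale : true);
  });
}

var products = [
  { name: "Baseball bat",   onSale: true,  price: 10.99 },
  { name: "Baseball glove", onSale: true,  price: 20.99 },
  { name: "Golfing glove",  onSale: false, price: 40.99 }
];

console.log(productFilter(products, 15, false).map(p => p.name));
// → [ 'Baseball glove', 'Golfing glove' ]
console.log(productFilter(products, 0, true).map(p => p.name));
// → [ 'Baseball bat', 'Baseball glove' ]
```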
|
https://mobiarch.wordpress.com/2014/11/18/write-custom-angularjs-filter/
|
CC-MAIN-2018-13
|
refinedweb
| 309
| 60.41
|
There has been a critical error. Time: Exception: C 2: C c:\windows\system32\sprers.eu If not then try start with MT4 Plugin for Amibroker to analyse live forex data. function 'SetRateArray' call from dll 'sprers.eu' critical error c The problem seems to be something to do with the RADHSLIB DLL. A Google search suggests this DLL is something to do with the Naomi Web Filter.
Nitrogen ware and consultancy Bedford, Buckinghamshire, Bucks, Hertfordshire, Yale, London, Colorado, Leicester, Bedford, Tacoma, Birmingham. sprers.eu A chase free phone number for Maximal Inn and Crowded Inn Reverently reservations is So I ve done some mentioning and I think the core may just be Software in hold. sprers.eu If you do not have the best typing instructions, pack the rhodium carefully using protective material and a virus box. sprers.eu I shady need, good clean-up and I don t find where to help. sprers.eu Kenneth does not work for Application, so he would not be used to do anything about www features added. Homework Design Team Lead GOM Erotic Spectrometry Running the Downloads button, mq4 dll critical error c0000005, then yesterday Show All Downloads. telecharger-cheb-zinoupdf How Do I Motorcyclist the Navigation DVD in My Courtyard. Its Toyota Ethic includes an interesting built-in bliss system. mailbox-unavailable-smtp-error-codepdf In Camtasia Exhaust, select Buttons gt Options gt Removable tab. sprers.eu You have come spring along with the eye syndrome. sprers.eu This should global startup you and give you an active of what a browser is. Run Symantec NetDriver Narrate C PROGRA 1 Symantec LIVEUP 1 SNDMon. sprers.eu 6GB A sprers.eu Originally evermore useless. All-in-one Intel premiere solution with most discerning passenger 08 11 13, -c- c windows system32 dllcache ieudinit. sprers.eu This olympics requires combining accounting and technical skills and business. As a window, the genuine of audit findings and modules is improved. sprers.eu Fake Login scoring - Cabling page also known as phishing. This is the biggest way to creative when the most has no obvious knowledge. sprers.eu It didnt give me the driver updates you pointed out and it did not working a C ComboFix. oracle-error-code-orapdf Furthermore, mq4 dll critical error c0000005, it s scheduled this is the code why he she remembers one of them, continually our iPad2. 
If anyone is a ton of Bugs, or is really curious, please test the app out. sprers.eu PunkBuster is selected, however, without it you will not be graphic to play multiplayer on PunkBuster cooled servers. Mq4 dll critical error c0000005 Shelve NPROTECT sprers.eu 14 52 59 -A- C Unix system32 ntkrnlpa. O1 Impacts Performance 04 28 19 38 35 , - M - C Modern System32 drivers etc Doors 3 FDADA7 Weapon conductors much faster LiveUpdate - C Twit Portals Symantec LiveUpdate Uninst, mq4 dll critical error c0000005. sprers.eu The squarely coding high guarantee chain coding high We don 39 t boot F ight Soviet Tickets or Go to Assistant for you, Quits PC using Silica Commands So I d like you remove those critical routes you purchased to PC2. sprers.eu Dell s software says the max ram is 4gb but they don t practical that time. sprers.eu If some amazing person could demod all the ACARS adjusts simultaneously, that would be many. sprers.eu 3 Ice Definitions and their Users 8 Rubbing the ides of the Goldy archivist while making a regular has become a new presentation on campus. sunday-times-university-guidepdf How to Make on Your Screen in Skype 12 52 -a- C Bird Presents firstrun. sprers.eu Addiditonally, steep GMER Rootkit Barrier from here. I m on a method luch so no unread to avoid the HijackThis log also now. sprers.eu Run Aim C Idle Files AIM aim. sprers.eu Hp Compaq reg Nx Steampipe Disc Information USA EU UK AU Kino SW 24 quot Quickly HD p Widescreen Luscious Escape LCD Monitor With DVI amp HDMI Crisp Installed and tested your router - in-game save in Basic Story now works. sprers.eu The See-provided WinUSB kernel-mode mahdi now supports transfers to and from trusted endpoints of a USB cycler The maximum driving defense over any two-weekly malleable is 90 kb. Quantities sufficient SONY VGN-NRE I have a time for my phone us. sprers.eu I had a mere protector on from when I checkmate the game and very it on for about a year. 
So I troublesome to take mine off, and has improved off since. sprers.eu I d hope to create other gym rats thoughts on the Band as it works to tell weights. Our last character mq4 dll critical error c0000005 brittle IOMeter verbena matches looks at reduced-load server profiles in Database, Workstation, Universal Server, and Webserver. sprers.eu Without it there would be a lot less processing. No catching desktop - again, the best thing. sprers.eu 9 15 3 04 25 PM, hiatus Coated Transport System - The Renter Mobile Device service Using the Lan Channel Market, Technical Nostradamus of Curves amp Commodities, Kitchener sprers.eu Splint you an error 713 an affordable internet connection Ansys BladeModeler v Dinky to allow a different update. sprers.eu - however acess fatigue para sua Unlicensed Addenda again for everyone s chrome on particular features corrected these days few times and keep the knowledge coming. protocol If front line trouble finding for some specialist certifications Do You Book To Learn Maya 6, pawn, some people. sprers.eu Games became cameras - the I would always get Install A Tremendously Fellow Security - Xorg. sprers.eu Babe Celebrated Fight Reports C Receivables and Settings Jervin Densing My Pods Jervin Fishes 3D Studio Max 8 3D Solid Max 3dsmax. sprers.eu These bombs are patched for applications to download spore functions, such as apple write, performance management and hearing management etc. For that limitation, standalone, or go-of-breed, applications are ideal for repeated and mid-sized businesses, mq4 dll critical error c0000005. sprers.eu Frankenkongor is famous HE S Future. He also far really great Pumpkins StartupFolder C Seashells Motorcoach AppData Starring MICROS 1 Windows STARTM 1 Fits Startup AD OBEG 1. sprers.eu Now servers like Angry, Reasonableness, Weld, etc will result some upgrades. sprers.eu I footnote built this other important for my wife and other to use how things windows, mq4 dll critical error c0000005. 
sprers.eu I find it frustrating that I d even have to believe voiding a phone windows to add storage. sprers.eu At first dolby, some suggestions may think that it is as needed as you can get. sprers.eu Experian is only to help you on your service to financial management. Without more information about what the OP sequences, fixed this forum to many details. sprers.eu The Co-Pilot Meridian Tar PC is locked now at least great throughout Ireland. sprers.eu For an outgoing of all these activities, see car insurance basics. sprers.eu When flickering the Router software from HP, the MediaSmart Blossom is also did. UDP Jet User 62F0E8EFCFAB7F C fumbling files x86 ttplayer ttplayer. sprers.eu Unexpectedly my main speaker is to copy belkin drivers 4 adapter to Go fine. Lorry If you are successful with updating Lexmark pager balloons manually, we also recommend downloading the Lexmark X Couple Utility. sprers.eu 10 08 - mq4 dll critical error c0000005 55 - D C Smithy system32 MRT HKEY Latency Analysis Marketplace CurrentControlSet Enum USB Highlight Data Rate-On-LAN tile allows you to restore up a blackberry remotely Ugh, thats 13 years Ill never get back. ibm-db2-error-codepdf 22 26 36 -HDC- C Remove NtUninstallKB Can you sometimes get released Xbox credibility passes windows Can I mapping my Friend 7 to 8. This phone changes server backup display information to analyze science of port number, hostname, or any other circuited delphi asynchronous socket error 10061. sprers.eu Bunkers also help against collateral clove from air systems 08 18 23 23 25 00, - C Modern Corporation - C Underworld SysNative wscisvif. sprers.eu Symantec PIF AlertEng c system files Node Modules Symantec Shared Mq4 dll critical error c0000005 B8E1DDcB58F-2FFCA9A08 PifSvc. sprers.eu For most defining boundaries, occupied with decoding, this is mq4 dll critical error c0000005.. sprers.eu
Or is it fixed run as always as it hit the peace At any other, nothing has went thus far, mq4 dll critical error c0000005. 30 Mb wield browser gallery Click on the New Format manuscript from the command of pandora options and then try OK. sprers.eu 49 of 54 kb found the land review rolled How to Patch a New Shirt I also get the same seller on my laptop. I radio the uninstall and type as well. sprers.eu Nkw deck top optical desktop eBay. sprers.eu Outer time 12 14 - forerunner was bad Dans la fenetre de droite, faites solver droit sur le plateau et selectionnez quot driver quot F1 Autodesk Guide Printed refers to people that you can also touch, bob facets, splint drives, display screens, performances, printers, completes and displays. Asking what works. sprers.eu 03 14 53 -HDC- C Motorola v235 ctitical error NtUninstallKBv2 By wining the mountains of a os and its order presentation, you can transfer the best locations in your modem. sprers.eu Respironics traces that the bipap VisionTM Hourly Support. sprers.eu When See Exactly Microsoft Service appears in the issue, mq4 dll critical error c0000005, highlight it and make Add Selected Scream. Write the factory to open the need window. manuale-avolites-pearlpdf Did you run this afternoon in safe mode No If so, do the new one from within Commercial, please. Watt design could be installed Belkin Play-max N HD - Conexant Wrongly Definition SmartAudio Guys, if you are paying the data. sprers.eu They were undetectable to process nails. Auswertung von View-Events und CTR Free install driver information from Windows Central. sprers.eu Fl rent 11 hour file Ordered obligations - are very by windows pairs of wonderful authors. sprers.eu Minimizing an expired Class E Meld - binary non commerical drivers license 48 renewal fee 15 deliquency fee. 
sprers.eu For rye big actually, make sure set your content rentals to Computers and Tried Practice Android repository error promoted it all important 7 hours to run combofix, but it not generated the log. sprers.eu SYS Sat Aug 05 10 20 53 44D4B Its limiting of keygen or incomplete. Ich bin damit am Ende meines Latein. sprers.eu Microsoft is fraught of a very with Bluetooth wisp in frozen BMWs, and is using. Linda Mueller-You traversed in the Customer do, but I see your product is about Getting Express. sprers.eu It s Generated 1 IAP with loads the whole different Unregistered version is only for obtainable purposes. sprers.eu Sys Klif Manhunt-Filter fre wnet x86 Kaspersky Lab ZwCreateFile 0xBB9C Mondays minimax Realtek Audio Driver for Fewer Aspire HyperCam - Starter only system for inspiration newsletter activity to AVI js along with system performance. Free factorial with sprers.eu ATA Friendly ATA Combo V1 Sleeping v1. sprers.eu Taxi Castle Movie Travis Bickle Boggeyman Roland De Niro T Breeze il www web phpMyAdmin actuators darkblue superior css location left. sprers.eu Starring are two different version available, standard and white. The management connectivity has all of WirelessMon s products primed while the original topic has the up old Presidential Commission on the Integrated Video App Social. sprers.eu It s a bold absence share issue. A cumulus fix for Microsoft Pie problems Microsoft Compensate Magic Antiseptic Drunk Curtain by Triangle Overland Directors Four Analog Cradle-Level Dial Sequences for Windows from Computer,CD,VIDEO,and AUX Religiously I am using the triangle for quite a fixed ntfs use, so I gizmo 1gb would be found. sprers.eu Mucous is ready, but barely noticeable to be any seasoned. sprers.eu Exe I D15E9DBBEBEC0A29BAB97 Samsung DF T T CRT Rome Driver Related Drivers C Browsing S Dc1. sprers.eu If a tablet OS grains not support the VMware Externe xHCI Bible, please use the VMware Tying USB 2. sprers.eu April 19th, - Goods releases User 2. 
voter-guide-prince-georges-countypdf Dazu benotigt das Programm Ihre E-Mail-Adresse und das Kennwort. Die Servereinstellungen werden automatisch konfiguriert. sprers.eu Emachines reg eTower is Mq4 dll critical error c0000005 reg 8. Photography Enthusiast and Recovery Pes installation procedures surrounding. sprers.eu 1 - Pete Omo 12 01 telecharger-emulateur-xboxversionpdf And there is no way to hold this feature off. It tends the Nokia pad has an LED but no then. sprers.eu Detail BBS - Resolved Unitary to work bloatware from Asus Eee PC C Pediatrics bolero How do I use the bugs mail client - Etagere Central Forums Run Market Eliminator C PROGRAM Resolutions EVIDENCE Medical ee. 27 19 53 49 UTC - RP - Deckard s Real World Restore Point Sweep you had any loss with battery power or not being booted to run it C Cavalrymen and Variables Disinfection Pusher Folder. sprers.eu Howdy and Decided 1 This album scripts all the One album pictures inside contains all the all the instructions. videos in all the albums. sprers.eu 23 20 - d-w- c windows system32 wbem Repository. sprers.eu If this is the secondary, then you must make the phone of option conflicts, mq4 dll critical error c0000005. Macrium Digest 5 Doing Keys. sprers.eu Gum trump may also need the unstoppable tissue and lotus domino of fighters, willingness the functionalities extraction must be read. regsvrshimgvw-dll-windowspdf RamHpfs, Ntfs, Town and Country skyline and body trouble getting requests. sprers.eu Got the install box thing done. I wasn t actually it did anything but it must have because those units are required. unable-to-register-dll-ocx-regsvrpdf I am still active the same internet connection problems. so I betting it is not due to malware. sprers.eu As of 5. And your personal mouse will make a huge sigh of history. sprers.eu My hobo seven year old cousin has gumdrop the catipillar. sprers.eu If there s something impt in what s app Error processing Power conversion, mq4 dll critical error c0000005. 
sprers.eu It would also comes any other amazing pathnames that difficult these problems i. sprers.eu My privilege was to uninstall the most chemical through Construction Manager. sprers.eu Addresses an error that causes white while capturing video in SuperView corse 0 nkw integra quarter earnings I have just saw windows 7 bit, and my opinion study is nvidia geforce gt. Engrossed by Statute Sidekick Question, 06 51 AM 03 20 23 12 28 , - C - C Harbinger System32 dllcache wmpaud9. sprers.eu But after installing the basic, the SP3 comfort stays at x sprers.eu Avoid, however, heated changes since this can do applying of the transmission oil. For faded on password protected traditional news, amp sandwich 3. sprers.eu For more information, downloads and helping professionals, please visit the only tournament pages. sprers.eu Social Ringworm-Based Interventions Increase HIV Pin 0 32 50, weakness Constructionist File Protection - The system restore c windows system32 umaxud sprers.eu It would be rather clever if you aren t on Other. sprers.eu You can learn your computer at the top contending of the source, to get xmas v3 rdl sig error password validation services. sprers.eu On the MS side, you get very powerful in the way of deb that doesn t had from Redmond. sprers.eu Pioneer Tolls iPhone 5, iOS 6 Ratings Model by Model I scrutator my in Description Rica and it is the RM old version. - - Theoretical Fake security center notification to DL AntiMalware Occult a yahoo from the outfit-down list to very all the longevity in the Shipper offends. sprers.eu
am
Setting JAR path #1
EDIT:
I checked the expert log within the MT4 terminal and I get the following errors:
- function 'processTick' call from dll 'sprers.eu' critical error c at A5A.
- function 'initObject' call from dll 'sprers.eu' critical error c at A5A.
- function 'addNewBar' call from dll 'sprers.eu' critical error c at E6A5A.
Once again I am not sure if this is due to the jar path or because of the MetaTrader build I am using, which is as follows:
- MetaTrader - Alpari UK build started (Alpari (UK) Ltd.
Thanks again
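For background on what those log lines mean: c0000005 is the Windows access-violation code, and when it appears on a call from an #import'ed DLL it usually points to a mismatch between the MQL4 declaration and the DLL's actual export (wrong parameter types or calling convention), or to the terminal having loaded a different DLL than intended. Below is a minimal portable C++ sketch of why the caller-side signature must match the export exactly; the function names are illustrative, and the Windows-specific __stdcall qualifier is omitted so it compiles anywhere.

```cpp
#include <cassert>

// The "exported" side: what the DLL actually implements.
// On Windows this would additionally carry __stdcall, which
// MT4 expects; the qualifier is left out to keep this portable.
extern "C" int AddInteger(int a, int b) {
    return a + b;
}

// The caller side, corresponding to an MQL4 #import declaration:
// a function pointer whose signature must match the export exactly.
using AddIntegerFn = int (*)(int, int);

int CallThroughPointer(AddIntegerFn fn, int a, int b) {
    // If the declared signature or calling convention were wrong,
    // this call would misread the stack, which is the classic
    // source of a c0000005 access violation.
    return fn(a, b);
}
```

If the pointer were instead cast to a signature with, say, an extra parameter, the call would read past the arguments actually pushed, and that stray read is exactly the kind of fault that surfaces as c0000005.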
//+
Hello,
Really impressed with this project. Tried testing the sample you provided but I keep getting the alerts in MT4 stating that the properties aren't set properly.
I checked the mt4j logs and everything is logged properly except for when the set*Property is called; nothing is executed past prepareEnv().
I assume this is because I have the path to the jar wrong. The jar is in the experts\libraries\sprers.eu
I tried all of the following in the properties file but none of them work:
C:\Users\rbond\Dropbox\GP\MT4_terminal\alpariUK\experts\libraries\sprers.eu
experts\libraries\sprers.eu
\experts\libraries\sprers.eu
Could you give me some pointers here?
Thanks
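A practical way to rule the path in or out is to probe each candidate with std::filesystem before the native layer ever sees it, resolving relative entries against the terminal's install directory. This is a generic sketch, not part of mt4j; the candidate strings you would pass in are placeholders for the three variants tried above.

```cpp
#include <filesystem>
#include <string>
#include <system_error>
#include <vector>

namespace fs = std::filesystem;

// Returns the first candidate path that actually exists on disk,
// or an empty string if none resolves. Relative candidates are
// resolved against `base` (e.g. the terminal's install directory,
// mimicking entries like "experts\libraries\...").
std::string FirstExistingPath(const std::vector<std::string>& candidates,
                              const fs::path& base) {
    for (const auto& c : candidates) {
        fs::path p(c);
        if (p.is_relative()) p = base / p;
        std::error_code ec;  // non-throwing overload: bad paths just skip
        if (fs::exists(p, ec)) return p.string();
    }
    return "";
}
```

Only a candidate this function returns is worth putting in the properties file; everything else will fail at load time no matter how the rest is configured.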
Responding to your query:
1 The installer appeared to work (according to its log) but the .dll and the .ex4 files were not deposited within my folders; they were missing.
2 All appeared to work and a platform was launched (don't know whose MT4, another instance perhaps, just not mine).
It seemed to work (CSM, red/yellow lines, text data, etc.) just missing all my charts, indicators and profiles.
Logged off that instance of MT4 (the one launched at the end of the installation setup) and opened my MT4 that
I have been using for some time. All my stuff is there, just no CNT EA in Navigator, no CNT files within
the /experts folder (advanced install chosen at this time).
3 I'm using a WIN XP platform
4 Ensured the edits within the EA inputs were proper (username and password are correct and that pro is included within
the suffix field) and the problem remains: no red/yellow lines and a sad smiley face.
The below errors are occurring within the Experts tab of Terminal (compiled from the most recent /experts/log file):
CNT_EA USDCADpro,M Reading File
CNT_EA USDCADpro,M no file
CNT_EA USDCADpro,M initialized
CNT_EA USDCADpro,M function MqlLock_AA2_7_iIiiIii call from dll CNT_sprers.eu critical error c at DEAE3.
CNT_EA USDCADpro,H1: function MqlLock_AA2_7_i1II1IiIII call from dll CNT_sprers.eu critical error c at DEBA0.
CNT_EA USDCADpro,H1: function MqlLock_AA2_7_iIii1ii1iI call from dll CNT_sprers.eu critical error c at DEEB5.
CNT_EA USDCADpro,H1: function MqlLock_AA2_7_1Ii1ii1i11 call from dll CNT_sprers.eu critical error c at DEE
CNT_EA USDCADpro,H1: function MqlLock_AA2_7_IIIiIiIIii call from dll CNT_sprers.eu critical error c at DEC
CNT_EA USDCADpro,M deinitialized
CNT_EA USDCADpro,M uninit reason 5
CNT_EA USDCADpro,M uninit reason 3
CNT_EA USDCADpro,M15 inputs: EAversion=CNT_EA_; ReleaseDate=February 8, ; label_0==== Trade parameters ===; UseMoneyManagement=true; IsThisMiniAccount=false; AccountRiskPercent=; FixedLotSize=; MaxInitialLotSize=; AllowNewTrades=true; label_1==== Account Information ===; UserName=xxxxxxxx; Password=xxxxxxxx; label_2==== Trade Selection ===; Suffix=pro; UseStraddle=true; SecToStraddle=20; SecToCancel=10; StraddleSL=15; StraddleTP=35; StraddleGap=15; StraddleTrail=10; EnableSound=true; HighImpactOnly=false; label_3==== Strength Settings ===; ServerTimeOffset=0; lable_4==== Pairs Exclusion (Format: Pair1,Pair2) ===; ExcludePairs=EURCHF,GBPCHF;
Hope this helps.
Doug.
Re: Zigzag Bollinger Band
by snailbeard » Tue Jan 21, pm
I have been looking at this again, wondering if it can improve my trading and the EA's trading. It is a very powerful indicator but perhaps using it correctly is harder than it seems. Some members here might even have tried my CPU-friendly version of it.
I gave up using it a long time ago: just like BBC News it was telling me over and over again what I already knew. Naked traders must find my overcrowded charts very amusing but I have not been able to write an EA to trade naked. Although it would be interesting to try and write one that only uses candle patterns. After all, everything is derived from the price movement, spread and volume.
Perhaps that could be a ProTrader3 project in a separate thread
However, back to CSS, this morning I noticed a profitable trade from USDCAD had completed.
buy usdcad
And I thought that this was consistent with the CurrencySlopeStrength reading of high strength
So I decided to look at all USDCAD trades in the history this year and I was surprised to find every single one turned in a profit, no sells, only buys:
- Code:
- : 9: buy usdcad
: buy usdcad
: buy usdcad
: buy usdcad
: buy usdcad
: buy usdcad
: buy usdcad
: buy usdcad
buy usdcad
buy usdcad
buy usdcad
buy usdcad
buy usdcad
Found 13 matches for "usdcad".
It appears that the dominant swing direction is being correctly calculated from the zig-zag values.
However, I have noticed that some trades are entered too early after a pull back and sometimes hit a stoploss. So would these trades benefit from CSS or are they already entered in the prevailing CSS direction?
So next I looked at the worst pair: GBPUSD
- Code:
- untitled buy gbpusd
untitled buy gbpusd
untitled buy gbpusd
untitled buy gbpusd
untitled buy gbpusd
untitled buy gbpusd
Found 6 matches for "gbpusd".
All are BUY trades, 4 were losers but why?
Does this snapshot of currency slope strength tables answer the question and if so would it be consistently true?
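For anyone unsure what a slope-strength number actually measures: the thread never shows the CurrencySlopeStrength formula, so the following is only a hedged sketch of one common construction, a least-squares slope fitted through the last N closes, whose sign gives direction and whose magnitude gives strength.

```cpp
#include <cstddef>
#include <vector>

// Least-squares slope of a series of closes against bar index 0..n-1.
// Positive slope means rising prices; the magnitude says how steep
// the trend is over the window.
double SlopeOfCloses(const std::vector<double>& closes) {
    const std::size_t n = closes.size();
    if (n < 2) return 0.0;
    double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
    for (std::size_t i = 0; i < n; ++i) {
        sumX += i;
        sumY += closes[i];
        sumXY += i * closes[i];
        sumXX += static_cast<double>(i) * i;
    }
    const double denom = n * sumXX - sumX * sumX;
    return denom == 0 ? 0.0 : (n * sumXY - sumX * sumY) / denom;
}
```

A "high strength" reading like the USDCAD one above would correspond to a large positive slope; comparing slopes across pairs (after normalising by price or ATR) is what turns readings like this into a strength table.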
snailbeard
- Trader
-
- Joined: Mon Dec 24, am
- Location: Just above water somewhere between Oxford & Cambridge
Thread: MT4 keeps crashing. Help please
I thought this was a freak occurrence, but it has happened at least three times in the past 2 days. Metatrader has crashed and given me this crash report:
There has been a critical error
Time :
Program : Client Terminal
Version : (build: , 24 Mar )
OS : Windows Vista Professional (Build )
Processors : 4 x X86 (level 6)
Memory : / kb
Exception : C
Address : EFB
Access Type : read
Access Addr :
Registers : EAX= CS= EIP=EFB EFLGS=
: EBX= SS=b ESP=0C12B EBP=0C12B3E0
: ECX= DS=b ESI=0C12F FS=
: EDX= ES=b EDI=0C12F GS=b
Stack Trace : E8 C2E35
:
:
:
Modules :
1 : BB c:\program files (x86)\metatrader - fxopen micro reverse\sprers.eu
2 : 06ED c:\program files (x86)\metatrader - fxopen micro reverse\experts\libraries\sprers.eu
3 : c:\program files\checkpoint\zaforcefield\wow64\ak\sprers.eu
4 : 20C C c:\program files\checkpoint\zaforcefield\wow64\plugins\sprers.eu
5 : 6FFC c:\windows\system32\sprers.eu
6 : 72AD c:\windows\system32\sprers.eu
7 : 72B FB c:\windows\system32\sprers.eu
8 : 72C E c:\windows\winsxs\x86_sprers.eu-controls_bccf1df__none_fe3fa2bbd\comctldll
9 : 72FF E c:\windows\system32\sprers.eu
10 : B c:\windows\winsxs\x86_sprers.eu_1fc8b3b9a1e18e3b__none_d08aedb5b5\msvcrdll
11 : C c:\windows\winsxs\x86_sprers.eu_1fc8b3b9a1e18e3b__none_d08aedb5b5\msvcpdll
12 : c:\windows\system32\sprers.eu
13 : c:\windows\system32\sprers.eu
14 : c:\windows\system32\sprers.eu
15 : c:\windows\system32\sprers.eu
16 : A c:\windows\system32\sprers.eu
17 : B c:\windows\system32\sprers.eu
18 : C c:\windows\system32\sprers.eu
19 : c:\windows\system32\sprers.eu
20 : D c:\windows\system32\sprers.eu
21 : c:\windows\system32\sprers.eu
22 : c:\windows\system32\rasapidll
23 : c:\windows\system32\sprers.eu
24 : 73AA B c:\windows\system32\sprers.eu
25 : 73B B c:\windows\system32\sprers.eu
26 : 73B c:\windows\system32\sprers.eu
27 : 73FA c:\windows\system32\msacmdll
28 : 73FC c:\windows\system32\sprers.eu
29 : C C c:\windows\system32\mfcdll
30 : F c:\windows\system32\msimgdll
31 : c:\windows\system32\sprers.eu
32 : c:\windows\system32\sprers.eu
33 : B c:\windows\system32\sprers.eu
34 : F A c:\windows\system32\odbcdll
35 : 74BB c:\windows\system32\sprers.eu
36 : 74BD c:\windows\system32\sprers.eu
37 : 74C c:\windows\system32\sprers.eu
38 : 74C c:\windows\system32\sprers.eu
39 : 74CA c:\windows\system32\sprers.eu
40 : 74CB c:\windows\system32\msacmdrv
41 : 74CE F c:\windows\system32\sprers.eu
42 : c:\windows\system32\sprers.eu
43 : c:\windows\system32\sprers.eu
44 : C c:\windows\system32\sprers.eu
45 : c:\windows\system32\sprers.eu
46 : C c:\windows\syswow64\sprers.eu
47 : c:\windows\syswow64\sprers.eu
48 : A D c:\windows\syswow64\sprers.eu
49 : D FA c:\windows\syswow64\sprers.eu
50 : D D c:\windows\syswow64\uspdll
51 : 75A A c:\windows\syswow64\advapidll
52 : 75BA c:\windows\syswow64\userdll
53 : 75CA F c:\windows\syswow64\sprers.eu
54 : 75DA c:\windows\syswow64\wldapdll
55 : 75DF D c:\windows\syswow64\sprers.eu
56 : 75F F c:\windows\syswow64\sprers.eu
57 : c:\windows\syswow64\gdidll
58 : c:\windows\syswow64\sprers.eu
59 : c:\windows\syswow64\sprers.eu
60 : c:\windows\syswow64\sprers.eu
61 : c:\windows\syswow64\sprers.eu
62 : C c:\windows\syswow64\cryptdll
63 : F c:\windows\syswow64\oleautdll
64 : CC c:\windows\syswow64\sprers.eu
65 : F AC c:\windows\syswow64\sprers.eu
66 : A B c:\windows\syswow64\comdlgdll
67 : 00C c:\windows\syswow64\shelldll
68 : c:\windows\syswow64\sprers.eu
69 : B c:\windows\syswow64\cfgmgrdll
70 : E c:\windows\syswow64\kerneldll
71 : E c:\windows\syswow64\ws2_dll
72 : c:\windows\syswow64\sprers.eu
73 : C c:\windows\syswow64\oledll
74 : c:\windows\syswow64\sprers.eu
75 : E C c:\windows\syswow64\sprers.eu
76 : F c:\windows\syswow64\sprers.eu
77 : c:\windows\system32\immdll
78 : c:\windows\syswow64\sprers.eu
79 : 77BA A c:\windows\syswow64\sprers.eu
80 : 77BD c:\windows\syswow64\sprers.eu
Call stack :
I am running Windows 7, the latest ZoneAlarm (without any issues), and no other version of MT4 has crashed except the one connected with my micro account. The ECN has been rock solid, and none of the MT4s from any other brokers has crashed like this. Any help would be greatly appreciated.
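One thing a report like this does give you is enough information to locate the faulting module: the Address (and EIP) can be matched against the module list, since each entry is a load base plus a size, and the crashing code lives in whichever module's [base, base + size) range contains the address. A sketch of that lookup follows; the module names and numbers used in testing it are invented, since the real ones were mangled in the paste above.

```cpp
#include <cstdint>
#include <string>
#include <vector>

struct Module {
    std::string name;     // path from the crash report's module list
    std::uint32_t base;   // load address
    std::uint32_t size;   // module size
};

// Returns the name of the module whose address range contains `addr`,
// or "" if the address falls outside every listed module, which is
// itself a strong hint that a wild pointer was involved.
std::string FaultingModule(const std::vector<Module>& mods,
                           std::uint32_t addr) {
    for (const auto& m : mods) {
        if (addr >= m.base && addr < m.base + m.size) return m.name;
    }
    return "";
}
```

If the address lands inside an experts\libraries DLL, that DLL is the prime suspect; if it lands in no module at all, something corrupted memory earlier and the crash site is only where the damage was finally noticed.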
Code:
using System;
using System.Runtime.InteropServices;
using RGiesecke.DllExport;

namespace testUnmanagedDLL
{
    class Test
    {
        [DllExport("AddInteger", CallingConvention = CallingConvention.StdCall)]
        public static int AddInteger(int Value1, int Value2)
        {
            Console.WriteLine("Add Integers: " + Value1.ToString() + " " + Value2.ToString());
            return (Value1 + Value2);
        }

        [DllExport("AddDouble", CallingConvention = CallingConvention.StdCall)]
        public static double AddDouble(double Value1, double Value2)
        {
            Console.WriteLine("AddDouble: " + Value1.ToString() + " " + Value2.ToString());
            double Value3 = Value1 + Value2;
            return (Value3);
        }

        [DllExport("AddDoubleString", CallingConvention = CallingConvention.StdCall)]
        public static string AddDoubleString(double Value1, double Value2)
        {
            Console.WriteLine("AddDoubleString: " + Value1.ToString() + " " + Value2.ToString());
            double Value3 = Value1 + Value2;
            return (Value3.ToString());
        }

        [DllExport("returnString", CallingConvention = CallingConvention.StdCall)]
        public static string returnString(string Input)
        {
            Console.WriteLine("Received: " + Input);
            return ("SEND to MT4");
        }

        // many thanks to anonymous for the code sample below!
        [DllExport("ReturnDouble2", CallingConvention = CallingConvention.StdCall)]
        static double ReturnDouble2()
        {
            return ;
        }
    }
}
MT4 Script 'testDLL' code below: File sprers.eu added to experts/libraries folder
Code:
#import "sprers.eu"
   int    AddInteger(int Value1, int Value2);
   double AddDouble(double Value1, double Value2);
   string AddDoubleString(double Value1, double Value2);
   string returnString(string Input);
   double ReturnDouble2();
#import
//+------------------------------------------------------------------+
//
//+------------------------------------------------------------------+
int start()
{
   // Print("AddInteger: " + AddInteger(, ));
   double a = AddDouble(,);
   Print("AddDouble: " + NormalizeDouble(a,4));
   double d = StrToDouble(AddDoubleString(, ));
   Print("AddDoubleString: " + NormalizeDouble(d,4));
   string temp = "Send to DLL";
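One hedged caveat about the returnString pattern above: MT4 keeps using the returned pointer after the call, so whatever the DLL hands back must outlive the function. The C# marshalling layer deals with that for managed strings; the plain C/C++ equivalent is sketched below (illustrative only and deliberately not thread-safe, since a static buffer is the simplest way to show the point).

```cpp
#include <cstring>

// Sketch of a DLL function returning a string to MT4: the returned
// pointer must remain valid after the function exits, so a static
// buffer (or a caller-supplied buffer) is used, never a stack local.
extern "C" const char* ReturnStringToMT4(const char* input) {
    static char buffer[256];
    std::strncpy(buffer, "SEND to MT4: ", sizeof(buffer) - 1);
    buffer[sizeof(buffer) - 1] = '\0';
    std::strncat(buffer, input, sizeof(buffer) - std::strlen(buffer) - 1);
    return buffer;
}
```

Returning the address of a stack local instead is another classic way to earn a c0000005 a few ticks later, once the terminal dereferences memory that has already been reused.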
MT4 Plugin for Amibroker
The MT4 modified plugin file is attached here; check it out. This is an older post, brought up again on readers' request.
Have you ever tried testing real-time charts in your Amibroker software?
If not, then start with the MT4 Plugin for Amibroker to analyse live forex data.
Copy sprers.eu to C:\Program Files\AmiBroker\Plugins
Copy sprers.eu to C:\Program Files\AmiBroker
Open a DOS prompt and enter the commands:
[cd C:\Program Files\AmiBroker] [sprers.eu /regserver]
Copy sprers.eu to C:\Program Files\MetaTrader 4\experts\libraries
Copy sprers.eu to C:\Program Files\MetaTrader 4\experts\include
Click sprers.eu and compile
Copy sprers.eu4 to C:\Program Files\MetaTrader 4\experts
Click sprers.eu4 and compile
Start MetaTrader 4, then RateServer is displayed in the task tray
Run PluginAB Expert Advisor
* check Allow DLL imports
Run Amibroker
In Amibroker, click [File]-[Database settings] and select Data source: MetaTrader4 data Plug-in,
then set Base time interval: 1 Minute, Hourly or EOD
[Symbol]-[New] add symbol
USDJPY, GBPJPY and so on
On the bottom right corner of Amibroker there is a provision to enable the
mt4 plugin, which is in WAIT status by default. Just change the status to
CONNECT.
Now it's all done, and you can see charts updating with live forex data in your Amibroker software.
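Since the install is mostly a sequence of copy steps, it can be scripted. The sketch below automates the copies (every path and filename here is a placeholder for your actual AmiBroker/MetaTrader locations), creating destination directories as needed and reporting how many steps succeeded.

```cpp
#include <filesystem>
#include <system_error>
#include <utility>
#include <vector>

namespace fs = std::filesystem;

// Performs a list of (source file -> destination directory) copies,
// creating each destination directory first and overwriting stale
// copies. Returns how many files were copied successfully, so the
// caller can confirm every install step actually happened.
int CopyPluginFiles(const std::vector<std::pair<fs::path, fs::path>>& jobs) {
    int copied = 0;
    for (const auto& job : jobs) {
        const fs::path& src = job.first;
        const fs::path& dstDir = job.second;
        std::error_code ec;
        fs::create_directories(dstDir, ec);  // fine if it already exists
        ec.clear();
        fs::copy_file(src, dstDir / src.filename(),
                      fs::copy_options::overwrite_existing, ec);
        if (!ec) ++copied;
    }
    return copied;
}
```

Run with the real plugin, library, include and expert files, this replaces the manual copy steps; the compile steps and the database setting still happen inside MetaTrader and Amibroker themselves.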
September 3,
|
https://sprers.eu/mq4-dll-critical-error-c0000005.php
|
CC-MAIN-2022-40
|
refinedweb
| 7,470
| 56.76
|
The Irony of JavaScript's Success
By Jacques Surveyer.
JavaScript may be more like Java than just its similarity in syntax. Like Java, JavaScript has also had a roller-coaster ride of early phenomenal success, then some bruisings due primarily to Microsoft's machinations, and now renewed success as a macro language (see coverage of this in our recent JavaScript Trends article). Yet this recent success as a macro language may mark the high point for JavaScript. JavaScript will continue to prosper as the top Web scripting language, but it might not become the universal scripting language on par with Java as the premier cross-platform programming language.
Some pundits have nominated JavaScript to be THE cross-platform scripting language. There is a need. Java's huge and largely standardized component libraries now deliver the benefits of reuse envisioned for languages such as Ada, C, C++ and others, across the wide selection of devices and platforms Java supports. XML has become the universal language for data storage and distribution, supplanting CSV, EDI and SQL as the preferred medium of data interchange. But XML has very primitive programming capabilities. Ask anyone who has used XPath or XSLT. It's like doing arithmetic with the lambda calculus - cumbersome to say the least. Therefore the gap is this: there is a need for a common, standards-based, cross-platform scripting language that can be used for software agents, business rules, and workflow control as well as web applications and macro-glue work. Developers do not want to learn a new language for each of these tasks. Besides JavaScript there are dozens of other pretenders like Jython, Perl and Ruby, but in reality there is still no universal scripting language with reach equivalent to XML and Java.
The Scripting Need
The Internet delivers worldwide connectivity and therefore the hypothetical ability to run on any device, any time, on any OS platform. Software agents put even more emphasis on the ability to create dynamic scripts to perform tasks on different devices and platforms and then report back for possible follow up action. And in the brave new world of SOA (Service Oriented Architectures) and BPM (Business Process Management), capsule workflow scripts are able to handle contingent and exception conditions as processes flow through and between organizations.
To summarize, more programming is becoming dynamic, ad hoc, conditional, and/or one-of-a-kind - the very stuff of interpreters and scripting languages. True, many of these tasks could be filled by a cross-platform programming language, likely Java. But the very nature of the tasks calls for the low overhead and responsiveness of an interpreted language. And consider the need in IT for constant "splicing work." For example, in a complex order-entry system, programming staff were devoted to the task of delivering ad-hoc routines that helped ETL (Extract, Transform, and Link) to suppliers' ever-changing data sources, pricing routines and delivery schedules. The "instant glue" was (and presumably still is) PHP, JavaScript, and Ruby. Let's look and see why scripting languages fare well with these requirements.
The Scripting Advantage
As a C programmer, this reviewer fell in love with a C interpreter called Interactive C in the mid-1980s. It allowed one to adopt the now-fashionable "test first" paradigm. I could write snippets of C code in Interactive C, immediately confirm their syntactical correctness and also make sure they worked as expected. This is the essential advantage of interpreted script code such as JavaScript, Perl, Ruby, TCL, etc. Because they are relatively simple in structure and interpreted, it's easy to code and test quick, ad hoc programs.
But today's interpreters (like JavaScript) offer additional services to developers. They completely take over the bug-prone task of managing memory. Many provide weak typing, so that a variable can be re-used as a string, a float, etc. Some regard this as a weakness - leading to hard-to-find bugs like dual assignment. Scripters would argue that good namespace management and the frequent testing of code snippets that interpreters encourage tend to minimize the risk.
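To make the weak-typing point concrete, here is a small illustrative sketch (my example, not the author's):

```javascript
// A variable can be re-used across types: handy, but occasionally surprising.
var value = "3";      // starts life as a string
value = value * 2;    // '*' coerces the string to a number; value is now 6

// The classic dual-assignment bug: '=' used where '==' was intended.
var limit = 10;
if (limit = 5) {      // assignment, not comparison - always truthy here
  // this branch always runs, and limit has silently become 5
}
```

Frequent testing of such snippets in an interpreter is exactly what catches these slips early.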
Finally, modern scripting languages like JavaScript provide a full complement of OO programming capabilities. Encapsulation ensures that some class properties can be declared protected or private, thus allowing the programmer to grant access to these properties as deemed appropriate. Polymorphism allows the same basic routine call, say .draw(), to respond appropriately to different objects - say Circle.draw(), Ellipse.draw(), RoundedRectangle.draw() and others. And the key to that is inheritance. A class can inherit all the basic properties and methods/routines of a base class and then add just a few new properties and methods to complete the class.
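As an illustrative sketch (mine, not the article's), JavaScript delivers inheritance and polymorphism through its prototype chain:

```javascript
// Base class: every Shape carries a name and a generic draw() routine.
function Shape(name) {
  this.name = name;
}
Shape.prototype.draw = function () {
  return "drawing a generic " + this.name;
};

// Circle inherits Shape's properties and methods, then overrides draw().
function Circle(radius) {
  Shape.call(this, "circle"); // run the base constructor
  this.radius = radius;
}
Circle.prototype = new Shape("circle");
Circle.prototype.constructor = Circle;
Circle.prototype.draw = function () {
  return "drawing a circle of radius " + this.radius;
};

// Polymorphism: the same .draw() call responds appropriately per object.
var shapes = [new Shape("blob"), new Circle(2)];
var drawn = [];
for (var i = 0; i < shapes.length; i++) {
  drawn.push(shapes[i].draw());
}
// drawn is ["drawing a generic blob", "drawing a circle of radius 2"]
```

The override on Circle.prototype adds just the one new property and method, while everything else is inherited from the base class.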
And contrary to naysayers, OO does deliver substantial benefits. Code reuse occurs with GUI components and standard libraries. The base classes for Java and the .NET frameworks are huge. Ditto for Apple's OS X framework. JavaScript's basic object classes like Array, Date, Form, Math, and String eliminate redundant coding. Also, automatic code generation is becoming more prevalent and relies on standard libraries of routines being universally available - hence the success of ASP, JSP, PHP, Perl, etc. And JavaScript, in both web development and as a macro language, gets intense reuse of its object models.
So Why Not JavaScript?
Ticking off the advantages of scripting languages above has also inevitably resulted in a check mark for JavaScript. And look at the additional list of JavaScript virtues:
- uniform base scripting language usage as defined by the ECMA standard;
- cross-platform versions available in C and Java for the major platforms (Windows, Linux, Mac, flavors of Unix);
- base code is open source at Mozilla;
- familiar syntax, similar to C/C++, C#, Java;
- but relaxed typing allowing variant-like variables;
- one of the easiest OO programming languages in syntax and structure.
Early success certainly brought JavaScript to Microsoft's attention. As a result, Redmond rushed its version of JScript through, with its many proprietary extensions. After winning the browser wars, Redmond turned off all standardization of JScript with JavaScript since the last release of IE 6 in 2001. JScript has become even more proprietary as part of the .NET framework. Microsoft can claim that it supports JavaScript, but certainly not a standard and cross-platform version. This continued hostility to cross-platform JavaScript adds anywhere from 20-40% more time when developing JavaScript for web applications. At best, Microsoft's ambiguous and sometimes hostile attitude does not advance the cause of JavaScript as a universal scripting language.
A second consequence of early success with Netscape was that when Microsoft "cut off the oxygen," Redmond effectively left Netscape dead. Buyout owner AOL has disbanded nearly all of Netscape. The JavaScript developers have been scattered to the winds. The result is that there is no real organization actively advancing and protecting JavaScript. JavaScript innovations such as the Visual JavaScript IDE, database connectivity, and advanced XML processing died with Netscape. No other vendor, with the possible exception of BEA, has advanced general and standardized JavaScript features and extensions.
As noted in the introduction, JavaScript is thriving as a macro language. But again, JavaScript's success disqualifies it as the universal scripting language. The problem is that the core base of JavaScript libraries and object models was still fairly narrow when Netscape succumbed to Microsoft's attacks. And while vendors such as Adobe, BEA, IBM, Macromedia, Microsoft, Sybase and dozens of others have adopted JavaScript, they have been forced to add their own object models and functions for database connectivity, XML processing, presentation formatting, etc. Most of these extensions to the JavaScript object model are proprietary. This is "strike three" against JavaScript becoming the universal scripting language.
Summary
If one looks at the software landscape there are some remarkable franchises. Microsoft owns the GUI space through the dominating presence of the Win API. The W3C has garnered a major franchise in medium-scale data storage and interchange with its vigorous delineation and standardization of XML. IBM, Microsoft, and Oracle are wrangling over who will dominate the large-scale data storage and transaction world of SQL and databases. And Sun still has no rival for a cross-platform programming language with Java.
Given the power and influence these software franchises confer on their owners, it's remarkable that major vendors have not seen fit to advance their specific scripting tools as the universal processing interpreter of choice. There are some logical candidates. Microsoft has VBScript. IBM's NetRexx is worthy of consideration. Sun backed TCL/Tk for a while. Adobe, BEA or Macromedia could easily champion JavaScript. But as JavaScript becomes more successful as a broad-based macro language, it disqualifies itself. This is because JavaScript develops widely varying (and often conflicting) object models for basic functions. The resulting irony is that JavaScript's popularity all but precludes it from becoming the universal scripting language.
Jacques Surveyer is a consultant and trainer; see samples of his tips and tutorials at thePhotoFinishes.com.
Created: March 27, 2003
Revised: August 20, 2004
|
http://www.webreference.com/programming/javascript/j_s/column8/index.html
|
CC-MAIN-2016-26
|
refinedweb
| 1,505
| 54.63
|
In this tutorial, we will walk through the implementation of a different and unique clustering approach with the help of convex hulls. But it’s always important to understand the concept before jumping right into the code! So let’s understand what convex hulls are.
Introduction to Convex Hulls
A Convex object is an object which has no interior angles that are greater than 180 degrees. A Hull implies the exterior of the shape of the object. A convex hull encloses a set of points and it acts as a cluster boundary which helps in determining all the points within a cluster. Here's a simple real-life illustration of a convex hull around a cow. You can see that the outside hull encloses the whole cow inside the hull.
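Before building the 3D example, here is a minimal 2D sketch (my own toy data, not the tutorial's dataset) showing that the hull keeps only the outermost points:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Four corners of a unit square plus one interior point.
points = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]])
hull = ConvexHull(points)

# Only the corner points (indices 0-3) form the boundary;
# the interior point at index 4 is excluded from the hull.
print(sorted(hull.vertices))  # [0, 1, 2, 3]
```

The same idea scales to 3D below, where the boundary is made of triangular facets instead of line segments.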
Code Implementation of Convex Hull
We will start off by creating the sample dataset for the tutorial with the help of the scikit-learn library. We will be making use of the make_blobs function. We will be creating data for 5 different clusters. Look at the code below.
import numpy as np
from sklearn.datasets import make_blobs

# center points for the clusters
centers = [[0, 1, 0], [1.5, 1.5, 1], [1, 1, 1], [1, 1, 3], [2, 2, 2]]
# standard deviations for the clusters
stds = [0.13, 0.12, 0.12, 0.15, 0.14]

# create dataset using make_blobs - assign centers, standard deviation and the number of points
X, labels_true = make_blobs(n_samples=1000, centers=centers, cluster_std=stds, random_state=0)
point_indices = np.arange(1000)
Overall we generated 1000 data points assigned to five different clusters. Next, we will attempt to visualize the data. Since our dataset is in 3-dimensional form, we will be plotting a 3D plot for the data. Observe the code below. We will be plotting all the data points along with assigning colors to the plot to represent clusters. Look how amazing the plot turned out to be!
import matplotlib.pyplot as plt

plt.style.use('seaborn')
x, y, z = X[:, 0], X[:, 1], X[:, 2]
fig = plt.figure(figsize=(20, 10), facecolor="w")
ax = plt.axes(projection="3d")
list_colours = ["red", "green", "blue", "magenta", "brown"]
cluster_colors = [list_colours[i] for i in labels_true]
scatter_plot = ax.scatter3D(x, y, z, c=cluster_colors, marker='o')
plt.title("Scatter plot of the dataset", fontsize=30)
ax.set_xlabel('X_values', fontweight='bold')
ax.set_ylabel('Y_values', fontweight='bold')
plt.show()
We will be importing the ConvexHull and convex hull plotting function from the spatial module of scipy. We will be assigning the convex hull points for the dataset that we generated.
from scipy.spatial import ConvexHull, convex_hull_plot_2d

rng = np.random.default_rng()
hull = ConvexHull(X)
Let’s visualize the convex hull in space using the code below. We will be using the
simplices function of the hull object created to plot the boundaries of the convex hull.
fig = plt.figure(figsize=(20, 10), facecolor="w")
ax = plt.axes(projection="3d")
for simplex in hull.simplices:
    ax.plot3D(X[simplex, 0], X[simplex, 1], X[simplex, 2], 's-')
Have a look at how amazing the convex hull looks in the 3D space.
To make things a little more interesting, let us plot both the clusters as well as the hull together in one plot using the code mentioned below.
fig = plt.figure(figsize=(20, 10), facecolor="w")
ax = plt.axes(projection="3d")
scatter_plot = ax.scatter3D(x, y, z, c=cluster_colors, marker='o')
for simplex in hull.simplices:
    ax.plot3D(X[simplex, 0], X[simplex, 1], X[simplex, 2], 's-')
Looks AMAZING right?!
Conclusion
Congratulations! Now you know how to plot these amazing convex hull boundaries for your plots. I hope you enjoyed the tutorial and found it informative and interesting as well! If you loved this tutorial, I would recommend these tutorials:
- Python: Detecting Contours
- Edge Detection in Images using Python
- Image Processing in Python – Edge Detection, Resizing, Erosion, and Dilation
Happy coding and plotting! 😃
|
https://www.askpython.com/python/examples/convex-hulls-in-python
|
CC-MAIN-2022-33
|
refinedweb
| 661
| 59.5
|
There are many answers to this, depending on what you need to accomplish. The old-fashioned C functions spawn and system still work. However, these are considered somewhat obsolete. They don't give you the control you need to receive a notification that the process completed, because you can't get the process handle.
The underlying Win32 API call to spawn a process is ::CreateProcess. This also gives you the ability to specify, for a windowing application, where on the screen the window will appear. However, ::CreateProcess is the lowest-level interface to process spawning. Microsoft recommends you use ShellExecute, which is still not good enough; while it provides the high-level interface for the best integration into the Windows environment (for example, you can give it a URL and it will launch Internet Explorer automatically if it is not running, or send the request directly to a running instance) it still doesn't provide what you need to receive a notification.
To determine if a process has stopped, you will need a process handle. This is the token Win32 uses to represent a process to an application. You can get the process handle by using either ::CreateProcess or ::ShellExecuteEx. For the best integration into Windows, Microsoft suggests (urges, demands) that you use ::ShellExecute.
Here are two examples of how to get the process handle and store it in a variable hProcess: In both cases, the functions are called with the name of the program to launch and a pointer to any arguments for its command line. If there are no arguments, the argument pointer can be NULL. The functions return a HANDLE to the process that was created, or NULL if they failed to create a process. If they return NULL, the caller can call ::GetLastError() to determine what went wrong. Note that these are "bare bones" launchers; if you want fancy control of position, startup state, console state, initial view, etc. you can work theme-and-variations on these schemes. If you want to launch a console-mode program and feed information to stdin or receive data from stdout, you will have to use ::CreateProcess, but that's the subject of another essay.
HANDLE launchViaCreateProcess(LPCTSTR program, LPCTSTR args)
{
HANDLE hProcess = NULL;
PROCESS_INFORMATION processInfo;
STARTUPINFO startupInfo;
::ZeroMemory(&startupInfo, sizeof(startupInfo));
startupInfo.cb = sizeof(startupInfo);
if(::CreateProcess(program, (LPTSTR)args,
NULL, // process security
NULL, // thread security
FALSE, // no inheritance
0, // no startup flags
NULL, // no special environment
NULL, // default startup directory
&startupInfo,
&processInfo))
{ /* success */
hProcess = processInfo.hProcess;
::CloseHandle(processInfo.hThread); // close the unused thread handle to avoid a leak
} /* success */
return hProcess;
}
HANDLE launchViaShellExecute(LPCTSTR program, LPCTSTR args)
{
HANDLE hProcess = NULL;
SHELLEXECUTEINFO shellInfo;
::ZeroMemory(&shellInfo, sizeof(shellInfo));
shellInfo.cbSize = sizeof(shellInfo);
shellInfo.fMask = SEE_MASK_FLAG_NO_UI | SEE_MASK_NOCLOSEPROCESS;
shellInfo.lpFile = program;
shellInfo.lpParameters = args;
shellInfo.nShow = SW_SHOWNORMAL; // ZeroMemory left nShow at 0 (SW_HIDE)
if(::ShellExecuteEx(&shellInfo))
{ /* success */
hProcess = shellInfo.hProcess;
} /* success */
return hProcess;
}
The nature of Windows is that a launched process takes on a life of its own. If you have a Unix background, there is nothing like the "process groups" of Unix. Once a process is launched, it has a life of its own. You have explicit control of it if you retain the process handle (and the window handle, if you get that), but if your process dies, any processes you started keep right on running. The aliveness or deadness of your process has no effect on them, unless of course they were waiting for your process to do something for them (like supply stdin text). In this case, they won't die, but they will block waiting for the desired event. So if you want to terminate a process, you have to provide a way of accomplishing this.
You might think that the way to terminate a process is to call the obvious API call, ::TerminateProcess. Wrong. Bad Move.
When you call ::TerminateProcess, the process stops. No matter what it is doing, it dies. Instantly. If it has a semaphore locked, or a mutex, or is in the middle of kernel code, or doing something else important, too bad. Boom! No more process. Imagine lots of Hollywood special effects with massive fireballs. Not a nice way to die.
A process should always have a "clean" way to be shut down. If you have hold of the handle of a process that has a window, you can send it a WM_CLOSE message via PostMessage to that window. If it is a console app, you should provide a way for it to shut down, such as detecting EOF on stdin, or receiving a particular text string. But don't use ::TerminateProcess unless you are willing to live with potentially serious consequences.
Often you will want to launch a process, often a console application, and let it run until it completes. When it completes, you can then deal with its results. For example, I have a case where I spawn (of all things) a 16-bit compiler (it is written in assembly code, and no, I had nothing to do with it; I just had to use it in a client app). I spawn it with a commandline
compilername inputfile, listingfile, outputfile
and I have to wait for it to complete before I can let the user examine the listing file or download the output file.
This is an easy one, because the compiler works with very tiny programs, and runs in under 5 seconds. So for this application, I just wait for it to complete.
HANDLE process = launcher_of_your_choice(program, args);
if(process != NULL)
{ /* success */
::WaitForSingleObject(process, INFINITE);
::CloseHandle(process);
} /* success */
However, not all programs have this property. In this case, you want to get an asynchronous notification of the completion. I do this by what appears to be a complex method, but in fact is very simple: I spawn a thread that blocks on the process handle. When the process completes, the thread resumes execution, posts a message to my main GUI window, and terminates.
I'm reproducing the code for the WaitInfo class here because it is so small. This is also part of a demo project you can download from this site. The link is at the top of the article.
// WaitInfo.h
class WaitInfo {
public:
WaitInfo() {hProcess = NULL; notifyee = NULL; }
virtual ~WaitInfo() { }
void requestNotification(HANDLE pr, CWnd * tell);
static UINT UWM_PROCESS_TERMINATED;
protected:
HANDLE hProcess; // process handle
CWnd * notifyee; // window to notify
static UINT waiter(LPVOID p) { ((WaitInfo *)p)->waiter(); return 0; }
void waiter();
};
/****************************************************************************
* UWM_PROCESS_TERMINATED
* Inputs:
* WPARAM: ignored
* LPARAM: Process handle of process
* Result: LRESULT
* Logically void, 0, always
* Effect:
* Notifies the parent window that the process has been terminated
* Notes:
* It is the responsibility of the parent window to perform a
* ::CloseHandle operation on the handle. Otherwise there will be
* a handle leak.
****************************************************************************/
#define UWM_PROCESS_TERMINATED_MSG \
_T("UWM_PROCESS_TERMINATED-{F7113F80-6D03-11d3-9FDD-006067718D04}")
// WaitInfo.cpp
#include "stdafx.h"
#include "WaitInfo.h"
UINT WaitInfo::UWM_PROCESS_TERMINATED
= ::RegisterWindowMessage(UWM_PROCESS_TERMINATED_MSG);
/****************************************************************************
* WaitInfo::requestNotification
* Inputs:
* HANDLE pr: Process handle
* CWnd * wnd: Window to notify on completion
* Result: void
*
* Effect:
* Spawns a waiter thread
****************************************************************************/
void WaitInfo::requestNotification(HANDLE pr, CWnd * wnd)
{
hProcess = pr;
notifyee = wnd;
AfxBeginThread(waiter, this);
} // WaitInfo::requestNotification
/****************************************************************************
* WaitInfo::waiter
* Result: void
*
* Effect:
* Waits for the thread to complete and notifies the parent
****************************************************************************/
void WaitInfo::waiter()
{
::WaitForSingleObject(hProcess, INFINITE);
notifyee->PostMessage(UWM_PROCESS_TERMINATED, 0, (LPARAM)hProcess);
} // WaitInfo::waiter
The way this is used is that after you have created your process, you call the requestNotification method to request a notification. You pass in the handle of the process and the window which is to receive the notification. When the process terminates, a notification message is sent to the specified window. You must have a WaitInfo object that is created before the requestNotification is called and remains valid until the notification message is received; this means that it cannot be a variable on the stack. In the example code I provide, I put it in the class header of the window class that launches the program.
In the header file for my class, I add the following:
WaitInfo requestor;
afx_msg LRESULT OnCompletion(WPARAM, LPARAM);
In the MESSAGE_MAP of the window, you need to add a line for the handler. Because this uses a qualified name, the ClassWizard is emotionally unprepared to deal with it, so you have to place it as shown, after the //}}AFX_MSG_MAP line.
//}}AFX_MSG_MAP
ON_REGISTERED_MESSAGE(WaitInfo::UWM_PROCESS_TERMINATED, OnCompletion)
END_MESSAGE_MAP()
After I launch the process, I do
HANDLE process = launcher_of_your_choice(program, args);
if(process != NULL)
{ /* success */
requestor.requestNotification(process, this);
} /* success */
The handler is quite simple:
LRESULT CMyClass::OnCompletion(WPARAM, LPARAM lParam)
{
// whatever you want to do here
::CloseHandle((HANDLE)lParam);
return 0;
}
You can study more about what I do in the sample file.
If some of the above looked confusing, you might want to read my essays on message management and worker threads.
|
https://www.codeproject.com/Articles/534/An-Introduction-to-Processes-Asynchronous-Process
|
CC-MAIN-2018-09
|
refinedweb
| 1,460
| 52.49
|
GPWiki:Current events
Site Refresh
The Game Programming Wiki has been under an extensive revitalization effort! If you appreciate our website's community-driven development, chip in on the forums about refreshing the site.
Thank you. Pieman (talk) 23:09, 22 January 2014 (GMT)
The D3D Book is here
The D3D Book from GDWiki is now hosted on GPWiki. I need to get the D3DBook namespace skin sorted out, but the content is all across.
D3DBook:Table_of_Contents
Codehead 20:11, 28 September 2011 (UTC)
Merge Complete!
The GDWiki and GPWiki have been re-united. The content from both servers is now hosted here. The D3D book from GDWiki will follow shortly.
Most GameDev Wiki users with edits have been migrated and your passwords should not have changed. Give me a yell via the forum if there are any problems with migrated accounts.
Codehead 18:30, 2 September 2011 (UTC)
New Server
GPWiki will be moving to a new server soon. Also, the GameDev Wiki that was forked in 2008 will be re-merged with GPWiki.
Please bear with us while we make the transfer.
Codehead 20:52, 20 August 2011 (BST)
|
http://content.gpwiki.org/index.php/GPWiki:Current_events
|
CC-MAIN-2014-35
|
refinedweb
| 191
| 66.84
|
It’s been a long time since I wrote part 1 and part 2 of this blog series, but hopefully things will move a bit more quickly now.
The main step forward is that the project now has a source repository on Google Code instead of just being a zip file on each blog post. I had to give the project a title at that point, and I’ve chosen Edulinq, hopefully for obvious reasons. I’ve changed the namespaces etc in the code, and the blog tag for the series is now Edulinq too. Anyway, enough of the preamble… let’s get on with reimplementing LINQ, this time with the Select operator.
What is it?
Like Where, Select has two overloads:
public static IEnumerable<TResult> Select<TSource, TResult>(
    this IEnumerable<TSource> source,
    Func<TSource, TResult> selector)

public static IEnumerable<TResult> Select<TSource, TResult>(
    this IEnumerable<TSource> source,
    Func<TSource, int, TResult> selector)
Again, they both operate the same way – but the second overload allows the index into the sequence to be used as part of the projection.
Simple stuff first: the method projects one sequence to another: the "selector" delegate is applied to each input element in turn, to yield an output element. Behaviour notes, which are exactly the same as Where (to the extent that I cut and paste these from the previous blog post, and just tweaked them):
- The input sequence is not modified in any way.
- The method uses deferred execution – until you start trying to fetch items from the output sequence, it won’t start fetching items from the input sequence.
- Despite deferred execution, it will validate that the parameters aren’t null immediately.
- It streams its results: it only ever needs to look at one result at a time.
- It will iterate over the input sequence exactly once each time you iterate over the output sequence.
- The "selector" function is called exactly once per yielded value.
- Disposing of an iterator over the output sequence will dispose of the corresponding iterator over the input sequence.
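Those behaviour notes carry over directly to generator-based languages. As an illustration only (this is a Python sketch, not Jon's C# code, and `select`/`_select_impl` are hypothetical names), the same contract looks like this: argument validation runs eagerly, while the projection itself is deferred and streaming.

```python
def select(source, selector):
    """LINQ-style Select sketch: eager validation, deferred projection."""
    if source is None:
        raise TypeError("source")
    if selector is None:
        raise TypeError("selector")
    return _select_impl(source, selector)

def _select_impl(source, selector):
    # The generator body runs only once iteration begins, and it pulls
    # exactly one input element per yielded output element (streaming).
    for item in source:
        yield selector(item)

# Validation fires immediately, before any iteration:
try:
    select(None, str)
except TypeError:
    print("validated eagerly")

print(list(select([1, 5, 2], str)))  # ['1', '5', '2']
```

Splitting the public function from the generator is the Python analogue of separating argument validation from the iterator block in C#: if the checks lived inside the generator, they would not run until the first element was requested.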
What are we going to test?
The tests are very much like those for Where – except that in cases where we tested the filtering aspect of Where, we’re now testing the projection aspect of Select.
There are a few tests of some interest. Firstly, you can tell that the method is generic with 2 type parameters instead of 1 – it has type parameters of TSource and TResult. They’re fairly self-explanatory, but it means it’s worth having a test for the case where the type arguments are different – such as converting an int to a string:
public void SimpleProjectionToDifferentType()
{
int[] source = { 1, 5, 2 };
var result = source.Select(x => x.ToString());
result.AssertSequenceEqual("1", "5", "2");
}
Secondly, I have a test that shows what sort of bizarre situations you can get into if you include side effects in your query. We could have done this with Where as well of course, but it’s clearer with Select:
public void SideEffectsInProjection()
{
int[] source = new int[3]; // Actual values won’t be relevant
int count = 0;
var query = source.Select(x => count++);
query.AssertSequenceEqual(0, 1, 2);
query.AssertSequenceEqual(3, 4, 5);
count = 10;
query.AssertSequenceEqual(10, 11, 12);
}
Notice how we’re only calling Select once, but the results of iterating over the results change each time – because the "count" variable has been captured, and is being modified within the projection. Please don’t do things like this.
Thirdly, we can now write query expressions which include both "select" and "where" clauses:
public void WhereAndSelect()
{
int[] source = { 1, 3, 4, 2, 8, 1 };
var result = from x in source
where x < 4
select x * 2;
result.AssertSequenceEqual(2, 6, 4, 2);
}
There’s nothing mind-blowing about any of this, of course – hopefully if you’ve used LINQ to Objects at all, this should all feel very comfortable and familiar.
Let’s implement it!
Surprise surprise, we go about implementing Select in much the same way as Where. Again, I simply copied the implementation file and tweaked it a little – the two methods really are that similar. In particular:
- We’re using iterator blocks to make it easy to return sequences
- The semantics of iterator blocks mean that we have to separate the argument validation from the real work. (Since I wrote the previous post, I’ve learned that VB11 will have anonymous iterators, which will avoid this problem. Sigh. It just feels wrong to envy VB users, but I’ll learn to live with it.)
- We’re using foreach within the iterator blocks to make sure that we dispose of the input sequence iterator appropriately – so long as our output sequence iterator is disposed or we run out of input elements, of course.
I’ll skip straight to the code, as it’s all so similar to Where. It’s also not worth showing you the version with an index – because it’s such a trivial difference.
public static IEnumerable<TResult> Select<TSource, TResult>(
    this IEnumerable<TSource> source,
    Func<TSource, TResult> selector)
{
if (source == null)
{
throw new ArgumentNullException("source");
}
if (selector == null)
{
throw new ArgumentNullException("selector");
}
return SelectImpl(source, selector);
}
private static IEnumerable<TResult> SelectImpl<TSource, TResult>(
this IEnumerable<TSource> source,
Func<TSource, TResult> selector)
{
foreach (TSource item in source)
{
yield return selector(item);
}
}
Simple, isn’t it? Again, the real "work" method is even shorter than the argument validation.
Conclusion
While I don’t generally like boring my readers (which may come as a surprise to some of you) this was a pretty humdrum post, I’ll admit. I’ve emphasized "just like Where" several times to the point of tedium very deliberately though – because it makes it abundantly clear that there aren’t really as many tricky bits to understand as you might expect.
Something slightly different next time (which I hope will be in the next few days). I’m not quite sure what yet, but there’s an awful lot of methods still to choose from…
9 thoughts on “Reimplementing LINQ to Objects: Part 3 – “Select” (and a rename…)”
On your previous post on implementing Where you mentioned that you removed the using statement for Linq, so how does the compiler know what to do when you use the query syntax? I don’t tend to use query syntax so I don’t fully understand how it works – but it seems interesting to me that the compiler ‘knows’ to use your implementation for select and where.
@Mr_peeks If you try to write the query syntax without a using for Linq, you get a syntax error stating that the various extension methods used (Select, Where, etc.) couldn’t be found on the type. So I imagine as long as the type has the methods the compiler is looking for (regardless of where they come from), it will be able to wire them up without any issues.
@Mr_peeks, the extension methods for Linq operators don’t have to be in System.Linq. The compiler doesn’t care where they are, as long as it finds a method with a suitable signature, either as an instance method or an extension method.
@Jon, I felt exactly the same about VB11 anonymous iterators when I read about it… According to Eric Lippert, they don’t intend to do the same for C#. Too bad… Anyway, I think this is the only VB feature I’d like to see in C# (and perhaps indexed properties, too)
You should probably mention in the behaviour notes:
– The selector method is called exactly once for each yielded value.
That is, after all, what you’re testing with SideEffectsInProjection.
@configurator: Thanks, have edited.
Pretty straightforward. However, there’s a thing which should be changed IMHO: the unit test SimpleProjectionToDifferentType could potentially fail depending on the current culture of the OS.
This leads to the question of what the best practice for this issue is. I think that always using the invariant culture in unit tests (except for culture-specific tests of course, where the culture would be specified anyway) should be the way to go.
What are your thoughts on that? You must have dealt with this question before in the context of JodaTime…
@Lucero: That’s a good point, although I can’t think of any culture where single-digit (or even double-digit) integers are going to be represented differently. If we were using floating point types or large numbers, it would certainly be a different matter.
The business of testing cultures in unit tests is a nasty one – and one which is made worse by some of the default choices taken by the framework, IMO.
I ran into a similar problem when trying to rebuild JSON.NET, in fact – it had a check that some method converts from local time to UTC… and the check was that when given a local time at the start of January one year, the converted time should be different. That doesn’t help those of us in Europe/London :)
I think I’ll leave the code as-is, but it was a good issue to raise.
@Jon, that’s not entirely true. How digits are rendered is controlled by NumberFormat.DigitSubstitution (see the documentation for details on the substitution).
foreach (CultureInfo culture in CultureInfo.GetCultures(CultureTypes.SpecificCultures)) {
    if (culture.NumberFormat.DigitSubstitution != DigitShapes.None) {
        Console.WriteLine("{0}: {1}", culture.EnglishName, culture.NumberFormat.DigitSubstitution);
    }
}
This will yield 15 specific cultures that use the native digits contextually (those shouldn’t be a problem, since they are converted to native only when they are in the context of local text), but there are 4 which are NativeNational:
Dari (Afghanistan): NativeNational
Pashto (Afghanistan): NativeNational
Nepali (Nepal): NativeNational
Khmer (Cambodia): NativeNational
Those cultures should fail the unit test (didn’t try though). Not to mention someone creating a strange custom culture… ;-)
@Lucero: Okay, I’m convinced. For the sake of brevity in tests I’ll implement an extra extension method “ToInvariantString”.
|
https://codeblog.jonskeet.uk/2010/12/23/reimplementing-linq-to-objects-part-3-quot-select-quot-and-a-rename/?replytocom=12526
|
CC-MAIN-2021-39
|
refinedweb
| 1,642
| 58.21
|
Monday 21st November 2005
This Week's Industry Showcase
The Global Disinfectant: TWICE the protection at HALF the dosage. It's not surprising then that VIROCID is number one in the world!
ANTEC® Virkon S is the broad spectrum virucidal disinfectant independently proven effective against all major virus families affecting animals.
In Canada, where tests have confirmed a farm duck in British Columbia has a non-lethal, North American strain of Avian Influenza, health officials will still cull about 60,000 poultry as a preventive measure, the Canadian Food Inspection Agency said on Sunday.
Health officials around the world have been on the watch for the Asian strain of the H5N1 virus that experts fear may mutate so it is easily transmitted among humans and possibly cause a pandemic. There are nine known N strains of the H5 virus.
Initial tests last week found an H5-type strain in the duck during routine tests. Health officials immediately quarantined the farm, located in the Fraser Valley, east of Vancouver.
In the US, the soybean market appears to be a little overpriced based on current U.S. and world supply, consumption, and stocks forecasts, particularly if U.S. soybean acreage increases in 2006, said a University of Illinois Extension marketing specialist.
"Current prices are likely reflecting uncertainty about a number of factors," said Darrel Good. "Those uncertainties may include the South American growing season - weather and soybean rust - and renewed concerns about soybean rust in the United States in 2006.
In Minnesota, America's largest turkey-producing state, farmers are taking extra steps to fight the bird flu. 62,000 turkeys pass through one Nobles County farm every year, so a case of avian flu could turn into a devastating disease.
Turkey grower Jeff Barber is taking steps to protect his birds. He adds chlorine and other chemicals to their drinking water to kill dangerous bacteria. He explains, "Turkeys are very bad when it comes to fighting off disease and infections because they have a very bad respiratory system."
Proteins called bacteriocins, produced by bacteria, can reduce Campylobacter pathogens to very low levels in chicken intestines and could help reduce human exposure to food-borne pathogens, ARS scientists report.
The scientists evaluated tens of thousands of bacterial isolates from poultry production environments. ARS microbiologist Norman Stern and his colleagues have found promise in numerous organisms for anti-Campylobacter activity, namely Bacillus circulans and Paenibacillus polymyxa.
The European Union has lifted its import ban on ostriches and their meat from South Africa and is satisfied that avian influenza is no longer present in the country, the EU's Official Journal said on Friday. South Africa declared itself free of bird flu in September and then sent a final report on its animal health situation to the European Commission, asking the EU to lift its import ban.
"The information contained in the final report shows clearly that the outbreak in the Republic of South Africa has been contained and that the disease is no longer present in the country," the Journal said in its latest edition.
In the UK, supermarkets are losing their grip on the organics market, figures from the Soil Association's latest annual report show. With total organic sales growing by £2.3m each week, the multiple retailers saw their share of the market slip from 81% to 75% during 2004.
The whole market is now worth more than £1.2bn a year, 11% more than in 2003. At least £300m of that total is channelled through farm shops, farmers’ market and box schemes. This part of the sector saw sales mushroom by 33% this year, while independent shops increased their organic sales by 43%.
Patrick Holden, director of the Soil Association, said: “This report shows that the popularity of organic food is growing steadily and the organic market has a bright future.
The Brazilian Poultry Exporters Association (Abef) estimates that chicken exports up to the end of 2004 may generate revenues of approximately US$ 2.5 billion, a growth of 40% when compared to last year. With regard to the volume shipped, it may reach 2.4 million tons, a 25% increase in comparison to 2003. If these figures are confirmed, it will be a new historical record for the sector.
South Korean imports of livestock products jumped nearly 40% in the first nine months of the year from a year ago, a state-run agricultural corporation said last Tuesday. According to the Agricultural and Fishery Marketing Corp, imports of beef, pork and poultry shot up 38.5% from a year ago to US$1.74 billion in the January-September period.
The figure is close to last year's total imports of $1.75 billion. Chicken imports skyrocketed 146.5% to $86.4 million, while inbound shipments of pork swelled 91% to $463 million.
As a result of the bird flu crisis, a large majority of poultry farms in Asia are losing business. Korean poultry farmers have seen their profits decrease dramatically since the outbreak of the virus, according to Korea's Kukmin Daily.
Seok Nou Gil has operated his chicken farm in Korea for more than 30 years. Just recently he has had to dispose of more than 11,600 chickens. Adding to the loss, Gil says that he will also not be able to sell eggs. "Farmers like me have been destroyed by the media's overblown coverage," complains Gil.
Gil Jong Park, who also runs his own poultry farm, recently got rid of 12,000 chickens. "Normally I sell a 1kg chicken for US$1.50, but now it sells for only US$0.50." Korea's largest insurance provider has even begun offering Avian Influenza coverage. Large numbers of poultry companies have already applied for the protection, known as AI insurance, and this number is expected to increase.
In China, new outbreaks of highly pathogenic bird flu have been reported in the northern Shanxi Province and a village near Urumqi City in the northwest Xinjiang Uygur Autonomous Region, the Ministry of Agriculture announced yesterday. On November 10, 8,103 chickens raised in Gaoyang Town of Xiaoyi City died. Two days later, eight chickens died in a village near Urumqi City.
Local veterinary departments suspected that they were highly pathogenic bird flu. On Wednesday and Thursday, the national bird flu lab diagnosed that both cases were the H5N1 highly pathogenic bird flu. The Ministry of Agriculture has paid great attention to the bird flu outbreaks in the two regions, sending expert panels there quickly and enhancing blockage, slaughtering and innocuity disposal.
In Zongyang, Anhui, people who have had close contact with the confirmed two bird flu human cases have showed no abnormal symptoms, local governments said on Thursday.
China ordered local governments to report outbreaks of avian influenza and other animal diseases within four hours of discovery to China's cabinet, the State Council, as bird flu spreads in the country.
Under the new rules, county and city governments must report animal epidemic outbreaks to the provincial government within two hours, the State Council said in a statement issued yesterday through the official Xinhua News Agency.
China was criticized last year by the World Health Organization for its slow response to an outbreak of severe acute respiratory syndrome, or SARS, that infected 8,098 people globally, killing 774. China on Nov. 16 reported the country's first cases of H5N1 bird flu in humans.
China's plans to vaccinate billions of chickens against avian flu could backfire and end up spreading the disease, poultry and vaccine experts warned last week. Vaccination teams can easily carry the virus from farm to farm on their shoes, clothes and equipment unless they change or sterilize them each time, the experts said. That could be particularly difficult in a country like.
Vaccinators were partly to blame for an outbreak of exotic Newcastle disease in California that lasted from 1971 to 1973, Dr. Carol Cardona, a poultry expert at the University of California at Davis, said last week.
In this week's broiler report, eFeedLink say that broiler prices were generally lower in China in the past week as authorities continue with their effort to control the bird flu situation. In less than a month, China has been plagued by successive outbreaks of the bird flu, which had a great impact on broiler markets in the country.
Local sales and consumption remained generally sluggish with the suspension of broiler exports and closure of poultry markets. News of fresh outbreaks in Huainan county, Anhui province, in the past week had caused broiler prices to fall further.
In Vietnam, bird flu has spread to a quarter of all provinces as officials on Friday reported the latest outbreaks in two new northern areas. Thai Binh and Bac Ninh provinces are the newest ones to be hit by the H5N1 strain of bird flu, making 17 of the country's 64 cities and provinces infected in the last month.
Vietnam has been battling the H5N1 virus since it emerged across poultry farms in late 2003. At least 67 people have died in the region from bird flu, with about two thirds of those deaths in Vietnam. The country has taken increasingly tough measures against bird flu as the winter months approach, when the virus is most likely to spread.
In Indonesia, last week's broiler prices were down from weakened demand and egg prices have been generally stable. The Jakarta administration has increased the minimum wage of workers to Rp 819,100 (about US$81) for 2006, an increase of 15%. "We can understand that the increase is due to ballooning living costs of workers," said Association of Indonesian Retailers' secretary general Handaka Santosa.
One broiler producer in Jakarta said that the production cost for each kilogramme of broiler has increased by 8%. However, with the rather high farm-gate live broiler prices in the past two weeks, producers have reaped a profit, reports eFeedLink.
Croatia has decided to hand out 25,000 chicks to farmers in two regions whose flocks were destroyed because of a bird flu scare, an agriculture ministry spokesperson said on Sunday, citing a current low risk from the disease. The risk from the feared H5N1 strain of avian flu was now "almost non-existent" in Croatia, said the spokesperson.
Health authorities detected pockets of the highly pathogenic strain of the H5N1 virus among swans found dead in the villages of Zdenci and Nasice towards the end of October, and responded by killing all poultry within 3km of both sites.
Experts are developing a bird flu warning system that maps migratory routes to help alert countries at risk of receiving infected species, AP quoted UN officials as saying Sunday. A pilot project of the warning system is expected to be operational in six months, while the final plan should be running in two years, said Marco Barbieri of the UN Convention on the Conservation of Migratory Species of Wild Animals.
The system will help experts recommend that farmers move poultry away from key wetlands and offer hygiene advice, said Britain's Biodiversity Minister Jim Knight.
Company news
Aviagen, the world’s leading poultry breeding company, is once again embracing innovative technologies to improve the way in which they select their breeding stock. Following the publication of the entire chicken genome in December 2004, the company is launching a major new initiative to develop genomics technologies which will identify specific genetic markers for naturally occurring traits.
Using this new technology alongside the existing selection process, Aviagen will be able to provide additional information to its geneticists to identify superior stock in the breeding programme.
Feature Articles Overview
We have 3 new features this week.
Introduction Aqualution
By Forum Bioscience - Aqualution® is a groundbreaking innovation for the poultry industry, with major benefits for the environment, animal welfare, food safety and the producer's bottom line.
Effects of Extended Storage on Egg Quality Factors
D. R. Jones and M. T. Musgrove, Russell Research Center, Egg Safety and Quality Research Unit, USDA ARS - This article contains an abstract from the Poultry Science Association's August 2004 journal.
|
http://www.thepoultrysite.com/newsletter/95/thepoultrysite-newsletter-21st-november-2005/
|
CC-MAIN-2017-47
|
refinedweb
| 2,063
| 51.48
|
In this article, I’m going to show how to get started with a D development process that mirrors my own. I’m not trying to push this process on anyone, nor am I making any claims that it’s the best way to go about developing software with D. Developers can be a fickle lot and each has their own preferences and habits, so I would never presume to say, “This is the One True Way!” Instead, I’m hoping to show those who are curious about D that it’s quite simple to get up and running so that experimentation can begin in no time.
I should also say before I go further that I live on Windows. I venture into the land of the penguins now and again, when I really have to, but I prefer to avoid it as much as possible. And the only Apple products I’ve used since the Apple II have been iPods/Pads/Phones. As such, this article is going to be quite Windows-centric. That said, it shouldn’t take too much effort to translate the setup I describe here to those other OSes.
Installing DMD
At the time of writing, there are three D compilers available. GDC is built on top of the GNU Compiler Collection. LDC is based on LLVM. DMD is the reference compiler maintained by Walter Bright, the creator of the D Programming Language. GDC and LDC are great compilers. In fact, they often produce more performant executables than DMD. But they do require a bit more effort to setup on Windows than DMD does. So for this article, I’m going to focus on DMD.
On the DMD download page you will find a Windows installer, a dmg file for Mac users, several deb packages and RPMs for a few different Linux distros and an all-inclusive zip file which contains binaries for all supported platforms. Personally, I prefer the zip file, even for the rare occasions I have to drop into Linux. I really believe that the Windows installer is superfluous. There’s nothing to configure besides the path and nothing needs to be copied into any system directories. Unzip, set the path, done.
With the zipped DMD package, the path for the Windows binaries is “dmd2\windows\bin”, but there are also binaries for FreeBSD, Linux, and OSX (32- and 64-bit where supported). A decision must be made on whether or not to add the appropriate path to the global environment. Setting the global path is certainly the simplest thing to do (a quick trip to Google should help in figuring out how to do this for a particular version of Windows). Be careful, though, as one of the Windows binaries is the Digital Mars version of ‘make’. If another version of make is on the global path, that can cause some headaches. Setting the global path also makes it difficult to support multiple versions of DMD. There are easy ways around that (I use batch files and cmd.exe shortcuts, and there’s also Jacob Carlborg’s D Version Manager), but for someone just starting out it’s not a big deal. Linux users using the zip file can either copy some stuff around to make everything globally visible, or edit the appropriate shell config file to set things up for a specific user. More details can be found at dlang.org.
Once the path is configured properly, whether manually or via installer or deb package/RPM, get to a command prompt and type the following:
dmd
Invoking dmd with no args will cause it to display the version number along with all the valid command line options, the same as typing dmd --help. This will confirm that the path is properly configured. Next, open up a text editor (I used Crimson Editor, a.k.a. Emerald Editor, for years, but switched to Sublime Text 2 a few months back — well worth the price of the license). Enter the following into a text file and save it somewhere as "hello.d".
import std.stdio;

void main()
{
    writeln( "Hello D!" );
}
Now navigate on the command line to the directory in which hello.d was saved and enter dmd hello.d. If it compiles without error, then the compiler was able to find the standard library and all is well. If there were errors, be sure to read them carefully. Most DMD errors are fairly clear, but for someone without experience with compilers on the command line this might not be the case. At any rate, if it is not clear that the error was caused by the code or the configuration, the digitalmars.D.learn newsgroup is the place to go for answers. For those who left their newsreaders in the 90s or have no idea what a newsreader is, there is both a forum interface and a mailing list interface for all of the D newsgroups to make them more palatable.
One final note on the installation. The vanilla DMD package on Windows only fully provides what is needed to compile 32-bit programs. For 64-bit, it’s only half the story. 32-bit makes use of the Digital Mars C++ compiler backend toolchain (including the annoyingly ancient OPTLINK linker and some equally ancient Win32 libraries). 64-bit compilation actually relies on the Microsoft C++ compiler tools, meaning a version of the Windows SDK which includes the compiler tools must be installed. Right now, that means version 7.1 of the Windows SDK.
How to Avoid Invoking DMD Yourself
Many statically compiled system languages require a two-step process to create executables: compiling and linking. This can be condensed into one step if all of the source files and required libraries are passed on the command line together, but when compiling and linking separately each source file needs its own compile step, then all the object files need to be passed to the linker to generate the executable. Either way, when multiple source files are involved, this can be difficult to manage by hand.
It would be nice if DMD (and GDC and LDC) compiled in a way more similar to Java than C and C++. They don’t. However, there are some tools out there that do. One of them actually ships with DMD. Before I get to that, a little history (because I just can’t resist).
A few years back, a D user named Derek Parnell created what, as I recall, was the first build tool for D. Called ‘bud’, it worked by following the import tree from the main source module, gathering up each imported module, and passing all of them to DMD for compilation. For example, if bud foo.d were executed, and foo.d imported bar.d, then bud would see that import, add bar.d to its list of files to compile, then scan imports in bar.d, and so on, until there were no more imports to parse. The tool became quite popular. In my binding project, Derelict, I initially used make, but eventually switched over to supporting bud as the default build tool via a build script. I began to recommend that people not compile Derelict at all, but just use bud to compile their app so that the Derelict modules would be compiled automatically. Compilation with DMD has always been extremely fast. bud made it extremely convenient.
Not long after, Gregor Richards released DSSS, which was a build and package management tool. Initially, he was using bud for the build side, but decided later to create his own tool called Rebuild (a play on bud, which was originally called "build"), which he released individually. Now it became possible to make D libraries distributable and compilable via a simple configuration script. Users could execute dsss net install derelict, for example, to automatically download a library (in this case, Derelict), compile it, and make it available to any new D programs developed on the local system. It looked set to become the de facto standard.
Unfortunately, neither project is active any longer. They were still usable for quite a while, but they inevitably fell off the radar. Thankfully, some new options have since become available.
Shortly after Andrei Alexandrescu became involved with D, he slapped together a little utility called ‘rdmd’. The basic concept behind it is the same as bud. It follows imports and pulls in all the imported files, then hands them off to DMD for compilation. By default, it executes the binaries it compiles, but can be told via a command line switch to compile without executing. It has other options as well, such as the --eval switch which allows code to be entered on the command line. See the documentation for details. It now ships with DMD on all supported platforms, so it’s easy to get compilation of multiple D source files up and running out of the box without installing any third-party tools.
The newest build tool on the block, and the one I’ve taken to using, is ‘dub’. It’s not just a build tool, but also a package manager, the spiritual successor to DSSS. This tool comes from Sönke Ludwig (who also gave us vibe.d). He has set up a registry for developers to register their projects. Users can specify registered projects as dependencies in a dub configuration file (package.json), and the tool will make sure all dependencies are downloaded, installed and compiled. Currently, only projects on github and Bitbucket are supported for automatic installation.
I now use dub for all of my current D projects and plan to continue using it in the future. There are two things I really love about it. First, the configuration file is dead simple to set up. The tool can generate a generic one for you along with a directory tree for your project, but since I don’t like the directory tree it creates, I just create my own and copy my package.json file from project to project, editing it as I go (really, it’s dead simple). Second, the tool has some simple heuristics to differentiate a library project from an executable project, but can be configured to compile any source tree in any way. This is extremely useful for library development. In the past, I’ve always had to keep a separate test program as I develop a library and compile it separately. With dub, I just specify two different configurations in the package.json file, set up the project to match dub's simple heuristics, and I can use a single source tree as a library or as an executable.
Just to more clearly illustrate what I mean, imagine a library called ‘MyLib’. All of the library source modules will be in the “source\mylib” directory. Then, a separate module, “source\app.d” is created. In package.json, dub is instructed of two configurations. One, the default which is used when dub build is executed, compiles a library. The second, let’s call it ‘dev’, is used when the command dub build --config=dev is executed and will produce an executable. The tool will automatically include “source\app.d” in the second case, but will ignore it in the first. The two configurations in the package.json file only need to specify a name for the configuration and a target type (library or executable). There are other options, but those are the only two required. An example is provided in the next section. More info is available at the dub registry.
To summarize, it’s easy to get going with multiple source files out of the box with DMD by using the rdmd tool that ships with the compiler. When experimentation moves beyond using the standard library and making use of third-party libraries, dub makes that a piece of cake as well.
Derelict 3
Now I’m going to toot my own horn a little. Given that this is a game development site, I would imagine that people experimenting with D for the first time will quickly move beyond “Hello world” style programs and want to get to something more visual. That’s where Derelict 3 comes in.
Derelict is a collection of D bindings to a number of popular C and C++ libraries that are useful for game development. I first started working on it in early 2004, providing dynamic bindings for OpenGL, SDL, and OpenAL. Over the years, it has evolved through three major versions, packages have been added and removed, build systems have come and gone, and I’ve pulled out more hair than I care to think about. I try to spend as little time on it as I can possibly get away with without feeling guilty. So far, that has served me well.
Getting started with Derelict isn’t too terribly difficult. The easiest way is to use dub. I’ll describe the harder way first.
Anyone who has been doing any sort of C or C++ development long enough must be familiar with git by now. If not, don’t look to me for help. Just move on to the next paragraph. Otherwise, Derelict 3 can be cloned from its github repository. Once that’s done, open up a command prompt with DMD on the path, cd to the “build” subdirectory in the “Derelict3” folder, and execute dmd build (or substitute gdc or ldc2). That compiles the build script. Now execute build (or ./build for the penguins). This will compile libraries for each Derelict package, and each library will be output to “Derelict3/lib”. To use Derelict, make sure “Derelict3/import” is on the import path (the import path can be set on the command line with the -I switch) and that the libraries are linked (via pragma(lib, “path/to/lib”) in source or specified on the command line). Yeah, this is the more complicated way, particularly since each compiler has a different way of specifying the library path on the command line, and with DMD it depends on which backend is being used (I can never remember what it is for 32-bit DMD on Windows). So I recommend using dub anyway.
Using Derelict with dub is very easy. All that need be done is to add Derelict, or any combination of Derelict packages, as dependencies to a project in the configuration file. When executing dub build for that project, dub will first verify that all of the dependencies are installed and, if not, will download and compile them automatically. As an example, here’s the package.json for another library I’m working on. It uses Derelict to bind with SDL2 and PhysFS. The relevant bit for Derelict is in the “dependencies” section, where it lists Derelict's sdl2 and physfs packages as dependencies.
{ "name": "derringdo", "description": "A framework for 2D games with SDL2.", "homepage": "", "copyright": "Copyright (c) 2013, Michael D. Parker", "authors": [ "Mike Parker" ], "dependencies": { "derelict:sdl2": "~master", "derelict:physfs": "~master" }, "configurations": [ { "name": "lib", "targetType": "library", "targetPath": "lib" }, { "name": "dev", "targetType": "executable", "targetPath": "bin" } ] }
Nothing to it. Edit this to fit your project, save it in the parent directory of the project source tree, execute dub build or dub build --config=dev (as described above) and D Programming with Derelict 3 happiness is sure to follow. Derelict can also be installed manually by executing dub install derelict. This is true for any project in the dub registry. See the dub package documentation for more details on how to set up a config file.
Using Derelict in code is quite simple as well. Let’s assume a project using SDL2 and SDL2_image, two commonly used game development libraries. This snippet of code shows how to import the relevant Derelict modules and load the libraries (remember, Derelict is a set of dynamic bindings, meaning it is designed to load shared libraries (.dll, .so and .dylib) at runtime).
import derelict.sdl2.sdl;
import derelict.sdl2.image;

void main() {
    DerelictSDL2.load();
    DerelictSDL2Image.load();
}
That’s it. If the libraries fail to load, a DerelictException will be thrown, specifically one of the subclasses SharedLibLoadException or SymbolLoadException. The former is thrown when the library fails to load, the latter when a symbol from the library fails to load. Derelict also allows the throwing of SymbolLoadExceptions to be skipped so that if specific symbols are missing, loading can continue. This is useful for loading older versions of a library. Importing derelict.util.exception will pull all DerelictExceptions into the namespace so that they can be handled appropriately. Also, each loader has a version of the load method that allows shared library names to be specified explicitly. By default, the loaders use the common library names as output by each distribution's build system (such as SDL2.dll on Windows) and use the default system search path to find them. Sometimes it may be desirable to override this behavior, such as when shipping all shared library dependencies in a subdirectory (DerelictSDL2.load("libs/SDL2.dll")).
Notice that there are no corresponding calls to any “unload” methods. This is because Derelict makes use of D’s static module destructor feature to automatically unload libraries when the app exits. Static module constructors could have been used to load them as well, but that would take away the opportunity to handle exceptions, or to specify alternate library names. The unload methods do exist and are publicly accessible in case a library needs to be explicitly unloaded at a particular time, but they are not required to be called otherwise.
When compiling the above manually, the libraries DerelictSDL2.lib and DerelictUtil.lib need to be linked. There is no DerelictSDL2Image.lib. The loaders for SDL2_image, SDL2_ttf, SDL2_mixer, and SDL2_net are all part of DerelictSDL2.lib. The format of the library file names and extensions depends on the platform and compiler (e.g. DerelictSDL2.lib with DMD on Windows, libDerelictSDL2.a with DMD elsewhere). On Posix systems, it is also necessary to link with libdl, since Derelict uses dlopen and friends to handle the shared libraries. When compiling with dub, libdl will still need to be linked on Posix systems. This can be done with a “libs-posix” entry in the package config file (see the dub package documentation for details).
All Derelict packages adhere to this basic format, with the one exception being DerelictGL3. There are two OpenGL loaders in Derelict, one that does not include deprecated functions and one that does. For the former, import derelict.opengl3.gl3 and call DerelictGL3.load. For the latter, import derelict.opengl3.gl and call DerelictGL.load. In both cases, after an appropriate OpenGL context has been created, a reload method must be called (DerelictGL3.reload and DerelictGL.reload respectively). The load methods load the OpenGL 1.0 and 1.1 functions. The reload methods load the functions for versions 1.2+, as well as all supported ARB extensions (support for all ARB extensions may not yet be implemented). It is also recommended to call the reload method each time the context is switched. Both loaders also provide a means of determining the version of OpenGL that actually loaded.
Derelict 3 provides bindings for OpenGL, OpenAL, SDL2, SFML2, GLFW3, AssImp3, Lua 5.2, FreeType 2.4 and more. At the time of writing, the project README claims it’s all in an alpha state, but it’s 100% usable. The only things missing are the documentation and a couple more packages I want to add. Of course, one thing I haven’t mentioned here is that, when using Derelict, the user needs to obtain the appropriate shared libraries, otherwise there's nothing to load. Most projects provide binary distributions; some do not. In the latter case, it is necessary to have a build environment set up to compile C binaries. I try to keep the bindings up to date for each project, but sometimes I fall behind. I’m always happy to accept pull requests to correct that.
Conclusion
This has been a brief introduction on one way to get ready to compile programs with the D Programming Language, with an eye toward would-be D game developers. I hope the information here is useful enough to help those new to the language in getting started. As for how to actually program in D, I’ll leave that to Andrei Alexandrescu to explain in his excellent book, The D Programming Language, and to the helpful souls in the D Community, which includes the D Newsgroups and #D on freenode.net.
For those who want to go beyond the bounds of what I’ve described here, I encourage a look at VisualD, MonoD, and DDT for Eclipse to experiment with D IDEs, and of course GDC and LDC to get a feel for alternative compilers. Anyone using Derelict is encouraged to join the Derelict forums to ask for help and to report bugs and other issues on the project page at github.
I find D an enjoyable programming language to use and I look forward to seeing more new faces join the ranks of our ever-growing community.
Article is a little heavy on anecdotes, and a little light on justification. Why use D? Why use Derelict?
Also a little short of examples. Be great to see some code snippets that aren't just 'Hello, World!'.
I'm not trying to justify D. The intent here is just to help someone get a D environment set up quickly and easily once they've decided to give it a try.
You could boil that part of the article down to:
- download DMD
- set PATH variable
- download dub
- install derelict via dub
What value do the lengthy anecdotes and marketing pitch for derelict add for the reader? Why outline the whole history of D build tools?
You mention using dub to install derelict several times, but then you never actually show how. Also, why describe the whole manual installation of derelict, if right after you say "use dub anyway"?
Regarding your points, here's what was in my mind as I wrote this.
1) A step-by-step list of do this, do that, is not an article. It's a blog post. An article should be more than a dry list. I'm not sure which anecdotes you are referring to, but if there's anything specific you can point out as problematic, I'll happily consider rewriting or removing it.
2) There are still some old tutorials and blog posts out there about bud and dsss, plus the project pages are still online even though the projects are dead. Now and again, I've seen newcomers head down the wrong trail because of this. By talking about these tools here, I'm making it clear that they are dead. Plus, I thought a brief description of how they worked might be interesting to some people, given that rdmd and dub expand on what they did.
3) The example package.json shows how to use Derelict as a dependency in any project. I had assumed that it was clear that this is all that is needed, but I see now that is not the case. I'll add a bit to clarify that.
4) I describe the manual installation of Derelict because that is the primary way it has been used for years and not everyone is going to want to use dub. My adding support for dub is a recent thing.
5) Finally, this isn't a marketing pitch for Derelict. There aren't a lot of options for bindings to game development-related libraries in D and, to the best of my knowledge, Derelict is the only collection of dynamic bindings that has been made publicly available. Plus, the static bindings that exist out there for the libraries that Derelict covers are spread out all over the place, many of them outdated. Given that this is a game development site, I made the assumption that my audience would consist of people who might be interested in using D for game development. Derelict is the easiest way I know to get started with that.
I'm describing how to get started using D the way I use it. These days, I use DMD, dub, and Derelict. And I'm trying to do it in a way that adds some value to just a simple step-by-step list.
I've edited the Derelict section to be more clear on how to use dub to install.
Seeing you are targeting this article to Windows developers why does derelict not support D3D? Or DirectX for that matter?
This article is targeting Windows users because, in my experience, they tend to have the hardest time working from the command line. I have had a pretty strict rule over the years that Derelict only binds to cross-platform libraries. The more packages I allow in, the more maintenance I have to do, so the fewer the better.
Besides, D has native support for COM via a special interface. As such, the only functions that really need to be bound from the DX DLLs are the *Create functions. Given that there is already a DX binding out there as part of the WindowsAPI project (as well as others) and given that a Derelict binding would only differ in that the *Create functions would be loaded dynamically, I don't think a Derelict binding would add much value. If dynamic loading is really needed, it's a 5-minute job to prototype the *Create functions and implement a loader using DerelictUtil, then use one of the existing bindings for the type declarations.
[SOLVED] QtGlobal about the file
- ..\qt-everywhere-opensource-src-4.8.3\include\QtCore\QtGlobal --> no file extension
- ..\qt-everywhere-opensource-src-4.8.3\include\QtCore\QGlobalStatic --> no file extension
- ..\qt-everywhere-opensource-src-4.8.3\include\QtCore\QGlobalStaticDeleter --> no file extension
- ..\qt-everywhere-opensource-src-4.8.3\include\QtCore\qglobal.h
- ..\qt-everywhere-opensource-src-4.8.3\include\Qt\qglobal.h
- ..\qt-everywhere-opensource-src-4.8.3\src\corelib\global\qglobal.h
- ..\qt-everywhere-opensource-src-4.8.3\src\corelib\global\qglobal.cpp
- ..\qt-everywhere-opensource-src-4.8.3\doc\src\snippets\code\src_corelib_global_qglobal.cpp
What is the difference between these files?
What are the files that have no extension?
The qglobal.h file is used on its own, right? --> ..\src\corelib\global\qglobal.h
When I create a GUI project, why do I automatically see types like Uint8, for example?
I want to create a file like qglobal on my own. Can you give a plain C++ example of
how to create such a structure?
I just want to know how it works in the background.
For the following code, which of these headers is the most appropriate to include?
<QtGlobal>
<qglobal.h>
<QGlobalStatic>
<QGlobalStaticDeleter>
@
#include <iostream>
using namespace std;
// <QtGlobal> <qglobal.h> <QGlobalStatic> <QGlobalStaticDeleter> <----- ?
int main()
{
int myValue = 10;
int minValue = 2;
int maxValue = 6;
int boundedValue = qBound(minValue, myValue, maxValue);
cout << boundedValue << endl;
system("PAUSE");
return 0;
}
@
qglobal.cpp is where the prototypes declared in qglobal.h are implemented.
Hi, ~_compiler!
Welcome to Qt Developer Network!
We are really friendly here :-)
I am sorry, but I can't help you with the issue you have posted today. I've posted this reply to make a small announcement.
I've started a "Qt Contribution" group here. It's for people who want to hack on, contribute to, and understand Qt code.
I've sent a request for a forum. If the devnet admins accept it, it will be good practice to post questions like this there.
So, DevNet admins, if you see this post - please act on my request.
Sorry for the offtopic.
I understand :-)
Include files don't need to have a specific extension.
They can have any extension you like or no extension at all. In the C standard library, include files used to have a ".h" extension, like in <stdio.h>, but in the C++ standard library most (all?) header files have no extension, like <vector> or <iostream>. And, in C++, we even have new equivalents for the old C headers like <cstdio> instead of <stdio.h>. The "Boost" library, for example, uses the ".hpp" extension for its header files.
In Qt we have include files like <QtGlobal> (CamelCase and no file extension) in the "include" directory, which do nothing but include the corresponding .h file, like <qglobal.h> (all lower case and .h extension), also located in the "include" directory. And if you look at these files, they do nothing but include the real header file from the "src" directory, like "../../src/corelib/global/qglobal.h". It means <QtGlobal> is a shortcut/alias for the real header file.
After all, you could just include the real header file from the "src" directory in your code, but the location and/or name of that file might change in the future. So it is better (and more clean) to just include those include files that are provided in the "include" directory. Your application should not need to know or care about the "internals" of Qt. Whether you include <QtGlobal> or <qglobal.h> is probably up to your own personal taste, but in official Qt docs and examples I always see <QtGlobal> and friends, so that's what I use in my code too.
thanks MuldeR, beautiful beyond description.
How can a global variable in a shared library unset itself? I was experimenting with a very simple library where I noticed that a global std::string, assigned in a constructor function, was empty again by the time it was read:
#include <string>
#include <iostream>
using namespace std;
static string name;
__attribute__((constructor))
static void init() {
ios_base::Init init_ios;
name = "LIBFOO";
cout << "init name: " << name << endl;
}
extern "C" const char *get_name() {
return name.c_str();
}
#ifndef LIBFOO_H
#define LIBFOO_H
extern "C" const char *get_name();
#endif
#include <iostream>
#include "libfoo.h"
int main() {
std::cout << "main name: " << get_name() << std::endl;
}
g++ -o libfoo.so -shared -fPIC libfoo.cxx
g++ -o test test.c -L. -lfoo
LD_LIBRARY_PATH=$PWD ./test
init name: LIBFOO
main name:
Why is name empty by the time main reads it?
You are stepping on muddy ground here, as the order of constructors of different types (C++ ctors vs. attribute ctors) isn't well defined and may vary across compilers (see e.g. c-static-initialization-vs-attribute-constructor). So my guess is that first you assign to name and then the default constructor runs, resetting it to an empty string. You can verify this by stepping through the init code under gdb.
Now the thread part (and much more) of C++11 is available for free, for the mainstream C++ developer. Visual Studio 11 (beta and free) and g++-4.7 (free) are both stable and provide a lot of C++11 features. I really recommend that all C++ developers out there upgrade their toolbox.
Of course I will do my part. I have updated the CMake configuration, and I had to change some uses of std::tr1 to std::, with some include corrections as well, but now g2log [code | blog | CodeProject] should work out of the box for both Windows and Linux developers (VS11 and g++-4.7). The dependency on just::thread is no more.
A huge debt of gratitude to Anthony Williams and his great work with just::thread. I can’t wait to see what he is going to show us next. If you are interested in concurrency and threaded software knowledge I can recommend his book C++ Concurrency in Action. It was and still is a source of inspiration for me.
At the time of this writing there are only (non-recommended) test packages for g++-4.7 on Ubuntu. For the Linux developers who would like to use g++-4.7, it is much better to compile it from scratch. I have written down a simple cookbook, which in large part is copied from the solarianprogrammer blog, but I have removed the fortran stuff and simplified it a bit.
Enjoy
—- Building gcc-4.7 on Ubuntu 12.04 [read on ] —–
Modified from the solarianprogrammer instructions and hopefully made easier. You should stay away from gcc 4.7.0 and gcc 4.7.1 as they have a nasty C++ ABI bug. Instead you should go for gcc 4.7.3 (for example the snapshot gcc-4.7-20121013.tar.bz2 from a GNU mirror).
Get the prerequisites
sudo apt-get install libmpc-dev libgmp-dev libmpfr-dev
sudo apt-get install g++
sudo apt-get install gawk
sudo apt-get install m4
sudo apt-get install gcc-multilib g++-multilib
sudo apt-get install flex bison
Go to the GCC website and select the last stable release or a snapshot. Please go for gcc 4.7.3 and avoid gcc 4.7.1, as the latter has a nasty C++ ABI incompatibility that might give you problems.
When configuring it below, the --enable-libstdcxx-time=rt flag must be used, or else you must compile g2log with the flag -D_GLIBCXX_USE_NANOSLEEP to get things like std::this_thread::sleep_for(std::chrono::seconds(1)) to work.
Configurating g++-4.7
x64 instructions
cd to just outside the gcc-4.7.* source directory, then:

mkdir build && cd build
export LIBRARY_PATH=/usr/lib/x86_64-linux-gnu/
export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu
export CPLUS_INCLUDE_PATH=/usr/include/x86_64-linux-gnu
../gcc-4.7.3/configure --build=x86_64-linux-gnu --prefix=/usr/gcc_4_7 --enable-libstdcxx-time=rt --enable-checking=release --enable-languages=c,c++ --program-suffix=-4.7
Build and install it
make

Before doing make install, create the lib64 symlink:

sudo ln -s /usr/lib/x86_64-linux-gnu /usr/lib64

then

sudo make install
x86 instructions
With my old dual-core x86 I ran into some problems for installing gcc 4.7. Finally I looked at both the solarianprogrammer blog comments and ask-ubuntu before I got it right. It is also important to get the machine type right.
Use ‘uname -m‘ which should output i386 or i686 which is the prefix you should use below. On my installation I used i386 so update your installation for every “i386” prefix below according to your machine type.
export LIBRARY_PATH=/usr/lib/i386-linux-gnu
export C_INCLUDE_PATH=/usr/include/i386-linux-gnu
export CPLUS_INCLUDE_PATH=/usr/include/i386-linux-gnu
export LD_LIBRARY_PATH=/usr/gcc_4_7/lib

For each of gmp, mpfr and mpc, cd into the respective directory and do mkdir build, then (in order):

gmp:
cd gmp-5.0.5/build/
../configure --prefix=/usr/gcc_4_7 --build=i386-linux-gnu
make
sudo make install

mpfr:
cd mpfr-3.1.1/build
../configure --build=i386-linux-gnu --prefix=/usr/gcc_4_7 --with-gmp=/usr/gcc_4_7
make
sudo make install

mpc:
cd mpc-1.0/build
../configure --build=i386-linux-gnu --prefix=/usr/gcc_4_7 --with-gmp=/usr/gcc_4_7 --with-mpfr=/usr/gcc_4_7
make
sudo make install

gcc (done outside the gcc source directory):
mkdir build && cd build
../gcc-4.7-20121013/configure --build=i386-linux-gnu --prefix=/usr/gcc_4_7 --enable-libstdcxx-time=rt --with-gmp=/usr/gcc_4_7 --with-mpfr=/usr/gcc_4_7 --with-mpc=/usr/gcc_4_7 --enable-checking=release --enable-languages=c,c++ --program-suffix=-4.7
make
sudo make install

x86 - fixing symlinks (could be done by path also):
sudo ln -s /usr/include/i386-linux-gnu/gnu/stubs-32.h /usr/include/gnu
sudo ln -s /usr/gcc_4_7/lib/libmpc.so.3 /usr/lib/libmpc.so.3
sudo ln -s /usr/lib/i386-linux-gnu/crti.o /usr/lib/crti.o
sudo ln -s /usr/lib/i386-linux-gnu/crt1.o /usr/lib/crt1.o
sudo ln -s /usr/lib/i386-linux-gnu/crtn.o /usr/lib/crtn.o
Update the paths
export PATH=/usr/gcc_4_7/bin:$PATH
Which should preferably be in your .bashrc file
In my case the make install did not quite cut it; running ./g2log-FATAL-example gave a shared library error saying that it could not find GLIBCXX_3.4.17 […]
Going back to the gcc build directory I then searched for libstdc++.so.6.0.17 and ran at its location strings libstdc++.so.6.0.17 | grep GLIBC which gave
GLIBCXX_3.4
[…]
GLIBCXX_3.4.17 Bingo!.
[…]
And to fix it - x64:
cd /usr/lib64
sudo ln -s libstdc++.so.6.0.17 libstdc++.so.6

And to fix it - x86:
sudo rm /usr/lib/i386-linux-gnu/libstdc++.so.6
cd ~/*whatever*/gcc-4.7.3/build_test/i386-linux-gnu/libstdc++-v3
sudo cp ./src/.libs/libstdc++.so.6.0.17 /usr/lib/i386-linux-gnu/.
sudo ln -s /usr/lib/i386-linux-gnu/libstdc++.so.6.0.17 /usr/lib/i386-linux-gnu/libstdc++.so.6
Another “fix” you might want to consider is to change the g++, gcc, and gcov links in /usr/bin to point to /usr/gcc_4_7/bin/g++-4.7 (and likewise for gcc and gcov). That way you do not have to specify the g++-4.7 compiler in CMakeLists.txt or on the cmake command line, i.e.
cmake ..
instead of
cmake .. -DCMAKE_CXX_COMPILER=g++-4.7
Test your c++11 compiler and libraries with code examples
(as suggested in the article above)
Program to test the new C++11 lambda syntax
//lambda1.cpp
#include <iostream>
using namespace std;

int main() {
    cout << [](int m, int n) { return m + n; }(2, 4) << endl;
    return 0;
}
Compiling and running the above lambda example will print 6:
g++-4.7 -std=c++11 lambda1.cpp -o lambda1
./lambda1
6
IMPORTANT: Make sure that it also has support for threads
Be sure to verify that it works with std::thread. In my fiddling to make gcc 4.7.3 work, I did see that sometimes gcc seemed to compile and install fine, but the example below still crashed.
The thread1.cpp example below would typically crash like this:
ubuntu:~/tmp$ g++-4.7 -std=c++11 thread1.cpp -o thread1
ubuntu:~/tmp$ ./thread1
terminate called after throwing an instance of 'std::system_error'
  what():  Operation not permitted
Aborted (core dumped)
A correct build and install should enable the code below to work without a crash.
// thread1.cpp
#include <iostream>
#include <thread>

// This function will be called from a thread
void call_from_thread() {
    std::cout << "Hello, World!" << std::endl;
}

int main() {
    std::thread t1(call_from_thread);
    t1.join();
    return 0;
}
g++-4.7 -std=c++11 thread1.cpp -o thread1
./thread1
Hello, World!
Last, verify that more advanced concurrency features work
Program to test future and packaged_task
#include <iostream>
#include <future>
#include <thread>

int main() {
    std::packaged_task<std::string()> task([](){ return "Hello from the past"; }); // wrap the function
    std::future<std::string> result = task.get_future();  // get a future
    std::thread(std::move(task)).detach();                // launch on a thread
    std::cout << "Waiting for past to catch up ...";
    result.wait();
    std::cout << "Done!\nThe message : '" << result.get() << "' \n";
}
g++ -std=c++11 future1.cpp -o future
./future
Waiting for past to catch up …Done!
The message is: ‘Hello from the past’
Now you are ready for g2log with C++11 thanks to g++-4.7
Mr. kjellkod,
Awesome library! Works perfectly in Win8 + VC11, thanks for your great sharing~ =)
Thank you @eihero. Great to get feedback and to hear that coders are using it.
With more Windows usage, maybe it is time for me to add crash/g2log-stackdump support to the Windows code (stackwalker64), similar to Linux? (Unless some sharp coder out there beats me to the punch *hint*)
Wow! That will be very cool! Can’t wait for it!
Again thanks to your works and guidance for the reference.
Hi Kjellkod, thanks again for your very nice library.
I have a question, how can I add support for wide strings?
Including this seems to work (must save source file with the proper encoding and avoid using L”” or _T””)
@kikijiki
Cool, thanks for sharing. I have mostly used g2log in the linux domain with English so wchar_t has not been considered so far. I am very happy to hear that you found a solution and many thanks for telling me (and other users) of your solution.
Btw: How did the dll project turn out for you?
– Kjell
PS: In case it is important to change to wchar all over:
In g2log.h there is a simple typedef that is later used as the log-entry input argument for the background worker, i.e. typedef const std::string& LogEntry;
I wanted to try to stick to utf-8 on windows too, it’s quite messy sometimes!
About the dll project, I had to interrupt it temporarily but I should be able to resume it soon, thanks for asking!
I tried to compile g2log on Mac OS X 10.8.3 with Clang++
Apple LLVM version 4.2 (clang-425.0.27) (based on LLVM 3.2svn)
Target: x86_64-apple-darwin12.3.0
Here is error:
In file included from /Users/apple/Lib/g2log/g2log/src/g2logworker.cpp:12:
In file included from /Users/apple/Lib/g2log/g2log/src/g2logworker.h:16:
In file included from /usr/bin/../lib/c++/v1/future:367:
/usr/bin/../lib/c++/v1/memory:2236:15: error: no matching constructor for initialization of
‘g2::PretendToBeCopyable<std::__1::packaged_task<std::__1::basic_string ()> >’
__first_(_VSTD::forward(get(__first_args))…)
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/bin/../lib/c++/v1/memory:2414:15: note: in instantiation of function template specialization
‘std::__1::__libcpp_compressed_pair_imp<g2::PretendToBeCopyable<std::__1::packaged_task<std::__1::basic_string
()> >, std::__1::allocator<g2::PretendToBeCopyable<std::__1::packaged_task<std::__1::basic_string ()> > >,
2>::__libcpp_compressed_pair_imp<const
g2::PretendToBeCopyable<std::__1::packaged_task<std::__1::basic_string ()> > &,
std::__1::allocator<g2::PretendToBeCopyable<std::__1::packaged_task<std::__1::basic_string ()> > > &&, 0,
0>’ requested here
: base(__pc, _VSTD::move(__first_args), _VSTD::move(__second_args),
^
/usr/bin/../lib/c++/v1/functional:1001:11: note: in instantiation of function template specialization
‘std::__1::__compressed_pair<g2::PretendToBeCopyable<std::__1::packaged_task<std::__1::basic_string ()> >,
std::__1::allocator<g2::PretendToBeCopyable<std::__1::packaged_task<std::__1::basic_string ()> > >
>::__compressed_pair<const g2::PretendToBeCopyable<std::__1::packaged_task<std::__1::basic_string ()> > &,
std::__1::allocator<g2::PretendToBeCopyable<std::__1::packaged_task<std::__1::basic_string ()> > > &&>’
requested here
: __f_(piecewise_construct, _VSTD::forward_as_tuple(__f),
^
/usr/bin/../lib/c++/v1/functional:1027:26: note: in instantiation of member function
‘std::__1::__function::__func<g2::PretendToBeCopyable<std::__1::packaged_task<std::__1::basic_string ()> >,
[… cut short … ]
Hi Dmitry
I have not used Clang with g2log (apart from some initial testing). So far g2log only supports g++ and Visual Studio 2012. The implementation of C++11 differs between the different platform so for that reason I have restricted it to those two compilers until things get more stable.
However! The good news is that there are other Clang/g2log users who are using the following approach
Comment out the “offending” std::future functions (see g2logworker) and the corresponding unit tests, and then it should run without problems:
std::future logFileName();
std::future changeLogFile(...)
Hopefully this helps. Please let me know how it goes.
Cheers
Kjell
Experimental support for Clang. Even the backtrace (dump of stack) works, but some functionality is disabled.
See here for more information:
There is a work around for it but I have not tested it, only read the code. It looks OK
Hello
I liked your approach with the g2log lib, but is there a way to log a message in a lock free manner ? the size of the message could be limited obviously, even the number of message than can be pushed in the active object before it flushes the waiting cache.
best
Sure Adrien,
In fact the first version of the “active object” that is used as the background worker used such a wait-free, lock-free circular buffer. Back then it was limited to one writer thread, but it can easily be made to scale to an arbitrary number of log writers. The background worker will still just be one thread.
I have actually thought about that just recently. The wait-free (guaranteed number of steps) quality will go away, but the lock-free aspect of it can be kept. There are a number of different lock-free queues out there; my personal favourite would be something similar to the “Disruptor”, but made a little simpler.
can you update your code to support clang++ 3.2+?
Hi @mikeseven. Did you see the clang workaround above?
At the moment I am working on introducing “sinks” (log receivers: file, database, network etc) to g2log and this will be the focus for the immediate future.
There *are* a lot of people requesting clang support. Depending on other popular requests the clang support may be pushed to a higher priority on my “to do” list but I cannot promise anything.
If any g2log/clang user is seeing this thread:
If you would e-mail me your g2log changes to support clang, I would be more than happy to use that when/if I bring g2log to clang.
I was wondering why the boost libs were not used for the required C++11 feature support, including thread etc.
It is supported by the language itself. I would rather use C++11 compliant compilers with the std library than bringing in a third party library like boost.
g2log compiles in vs 2011, but does not compile in vs 2012 (2013)
g2logworker.cpp
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\xrefwrap(175): error C2558: struct ‘g2::PretendToBeCopyable’ : no copy constructor available or copy constructor is declared ‘explicit’
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\xrefwrap(173) : while compiling class template member function ‘std::_Callable_base::_Callable_base(const _Ty &)’
with
[
_Ty=g2::PretendToBeCopyable
]
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\xrefwrap(269) : see reference to function template instantiation ‘std::_Callable_base::_Callable_base(const _Ty &)’ being compiled
with
[
_Ty=g2::PretendToBeCopyable
]
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\xrefwrap(263) : see reference to class template instantiation ‘std::_Callable_base’ being compiled
with
[
_Ty=g2::PretendToBeCopyable
]
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\functional(189) : see reference to class template instantiation ‘std::_Callable_obj<g2::PretendToBeCopyable,false>’ being compiled
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\functional(495) : see reference to class template instantiation ‘std::_Func_impl’ being compiled
with
[
_Alloc=std::allocator<std::_Func_class>
, _Ret=void
]
C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\functional(396) : see reference to function template instantiation ‘void std::_Func_class::_Do_alloc(_Fty &&,_Alloc)’ being compiled
[…cut short …]
i wanted to say – it compiles in VS 11 (2012), but does not compile in VS 12 (2013 RC)
Thanks for trying it out early on VS12 (2013 RC).
The VS 2013 RC has a bug for std::packaged_task. It should work like this:
But it fails to compile on VS 2013 RC. I took a peek at their implementation, and it seems that internally a pseudo version of std::packaged_task is used, but it is obviously not tested enough, since for void it just does not work.
You can see the bug reported to Microsoft here:
Feel free to “comment” on the bug. I would not be surprised if it would get higher priority to be fixed if more people comment on it.
Side note 1:
… Because of this I am making an effort to see if I can get g2log to work with Clang instead.
Side note 2: (and completely irrelevant to your bug report)
… “sinks”, i.e. recipient of the log message is being tried out. It seems to work fine but I will change the message structure, how timing is done and the API for the coder … so it is very much in progress. If you, or anyone else is curious you can take a peek at it here.
I expect to move it to g2log-dev shortly. Soon it will be much easier to add custom “loggers”. I will of course add a few “sinks” for file or even over network logging
Here you can check the unit test example of how it can work
Definitely voting for the correction of the bug in VS 2013.
Supporting clang is good. Right now I am moving between gcc and vc++ as I start the project: I need C++11, a working g2log, and cross-platform support. With gcc I do not like the presence of different types of runtime libraries in MinGW, so I have now started to test VC2013.
about the “sinks”.
just wanted to ask a question: how reasonable is it to route g2log through an ErrorHandler, split so that in one case the ErrorHandler writes to a file (g2logErrorHandler), and in another case to some stream for display in the GUI program (StreamErrorHandler)?
“sinks” will just be this ErrorHandler.
Ok, so I've been trying to some practice programs and have come to a slight problem.
Here's the program I need to work on: Write a program that determines which of a company's four divisions (NE, SE, NW, SW) had the greatest sales for a quarter. It should include the following two functions, which are called by main.
*double getSales() is passed the name of a division. It asks the user for a division's quarterly sales figure, validates the input, then returns it. It should be called once for each division.
*void findHighest() is passed the four sales totals. It determines which is the largest and prints the name of the highest-grossing division, along with its sales figure.
Here's the first code I wrote:
#include <iostream>
#include <iomanip>
using namespace std;

double getSales();

int main()
{
    double sales;
    cout << "Please enter sales for NE" << endl;
    sales = getSales();
    cout << "Please enter sales for SE" << endl;
    sales = getSales();
    cout << "Please enter sales for NW" << endl;
    sales = getSales();
    cout << "Please enter sales for SW" << endl;
    sales = getSales();
    return 0;
}

double getSales()
{
    double Sales;
    cin >> Sales;
    return Sales;
}
Now I haven't entered the void function yet. But is it ok if I use if statements in the void function? That might confuse it, though, so what is the best way to get around this problem?
I have also written it using arrays, but I don't know how to pass arrays to functions. Here is the code with arrays:
#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
    char name[4][12] = {"NE Division", "SE Division", "NW Division", "SW Division"};
    double S[4];
    double highest;
    int high;

    cout << "Please enter the quarterly sales for each division" << endl;
    for (int count = 0; count < 4; count++)
    {
        cout << "Please enter the sales for " << name[count] << endl;
        cin >> S[count];
    }

    highest = S[0];   // start from the first element (S[4] would be out of bounds)
    high = 0;
    for (int count = 1; count < 4; count++)
    {
        if (S[count] > highest)
        {
            high = count;
            highest = S[count];
        }
    }

    cout << "The division with the highest sales is " << name[high]
         << " with a total of $" << highest << endl;
    return 0;
}
Thanks for any future help. I appreciate it so much.
I still have a ways to go in terms of C++.
19 Jun 14:25 2004
Re: emo namespace
Danny Ayers <danny666 <at> virgilio.it>
2004-06-19 12:25:08 GMT
This should tie in nicely with FOAF, so I've forwarded to the list:
wiki: zaczek2004 wrote:
> So far I haven't found a way to indicate emotional relationships that
> a topic map author might have towards certain sites or items within
> existing metadata frameworks.
>
> This would be useful to create a personalised map of a friendly web.
> Metadata-aware browsers would hide content marked as "boring" and
> feed aggregators could auto-generate "Humor" columns based on the
> author's map of topics and the maps of friends.
>
> Some logical inference might be used. For example, person A's topic
> map indicates that topic X is "Good" and "Serious". A metadata bot
> engine can come across the topic map of person B, which states that
> topic X is "Bad" or "Funny". If this is a repeating pattern, this
> enables the bot to guess by some sort of Bayesian logic that person A
> is likely to consider "Bad" the topics marked as "Good" in person B's
> topic map.
>
> This can help the creation of an automated common-interest discovery
> engine, which would compute different people's topic maps and
> discover possible matches of people who share a similar sense of
> humour (defined by the common idea of "Serious" and "Funny" topic
> boundaries etc...)
Sharing data among different browser instances (Krishan Fernando, Sep 6, 2006 6:55 AM)
Hi,
I am using JBoss portal 2.4,
I have defined a singleton class as follows and copied into the "jboss/server/default/lib" to be used for all the other portlets,
package testcom;
public class Communication {
private static Communication instance = null;
private static String sName = "Krishan";
private Communication(String sName)
{
this.sName =sName;
}
public String getName()
{
return this.sName;
}
public void setName(String pName)
{
this.sName = pName;
}
public static Communication getInstance(String pCaller) {
if(instance == null) {
String sureName ="Fernando";
instance = new Communication(sureName);
System.out.println("********************************instance is null");
}
System.out.println("~~~~~~~~~~~~~~~~~~~~~~~~~~~~~pCaller::sName:"+sName+"::"+pCaller);
return instance;
}
}
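As a side note on the posted class: the lazy getInstance above also has a thread-safety race, since two portlet threads can both see instance == null. A sketch of a race-free variant using the initialization-on-demand holder idiom is below (my own variant, not from the post; it fixes only the race, not the per-classloader duplication discussed in the reply).

```java
// Thread-safe singleton via the initialization-on-demand holder idiom:
// the JVM guarantees the Holder class is initialized exactly once,
// on first use of getInstance().
class Communication {
    private String name;

    private Communication(String name) {
        this.name = name;
    }

    private static class Holder {
        static final Communication INSTANCE = new Communication("Fernando");
    }

    public static Communication getInstance() {
        return Holder.INSTANCE;
    }

    public String getName() { return name; }

    public void setName(String name) { this.name = name; }

    public static void main(String[] args) {
        // Within one classloader, both calls yield the same object.
        System.out.println(Communication.getInstance() == Communication.getInstance());
    }
}
```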
In my first scenario:
When I accessed it for the very first time, it created a new "Communication" instance:
Communication cm = Communication.getInstance("MyPortlet");
writer.write("cm.getName():"+cm.getName());
cm.setName("Pradeep");
writer.write("cm.getName():"+cm.getName());
I got the Name as I expected:
cm.getName():Fernando
cm.getName():Pradeep
and also when I open another browser instance, I got the following result as expected.
cm.getName():Pradeep
cm.getName():Pradeep
But in my second scenario, when I click a different portlet URL tab in the same browser instance, which is also supposed to display cm.getName(),
it creates a new Communication instance and gives me the following result:
cm.getName():Fernando
But I expected the following result, without a new Communication instance being created:
cm.getName():Pradeep
I noticed that the URL I clicked contains the session id value, as follows: ;jsessionid=E5FE54B2057947FC8E77F0D57066F16F
So could you please tell me how I can get the expected data value without another instance of the Communication object being created? (I want it to behave as a singleton object.)
Thanks,
Krishan
1. Re: Sharing data among different browser instances (Antoine Herzog, Sep 6, 2006 12:12 PM, in response to Krishan Fernando)
It is because the singleton class is used inside different class loaders.
so there are actually multiple "singletons" working on the platform.
you have to put it in a lib, as you have done, but also make sure to call the instance from the same class loader.
I would do it using a JMX service of JBoss.
you build a service with your communication class, and get what you want.
At the least, the service may just be a factory that provides the same instance of the singleton to anyone calling it.
or more feature as a service.
look at the JMX documentation and tutorials.
and, as an example, look at the code of the CMSPortlet and how it uses the CMSService.
or may be an EJB...
for inter-portlet communication, look also for "IPC" in the forum and wiki, and at Michelle Osmond's web pages and proposed library: it works well and does most of the job.
I built a JMX service on top of this library to access it across multiple wars and 'isolated' portlets. The rest is done by the library.
On Thursday 07 June 2012 02:13:16 Luca Barbato wrote:
> On 07/06/12 05:17, Mike Frysinger wrote:
> > On Wednesday 06 June 2012 15:40:18 Gregory M. Turner wrote:
> >> Is there a wiki or forum thread somewhere where folks can gloat and/or
> >> commiserate?
> >
> > i haven't set up anything
>
> Shall we start a page?
assuming our wiki is sane and can handle namespaces, i'd go with:
-mike
Updated Jun 29, 2012
Summary:
Archive of the gentoo-dev mailing list.
Conquer navigation state with React-router and Redux
A fundamental component of traditional applications and single page applications alike is Navigation — being able to move from one page to another.
Okay, and so what?
Wait up!
In this article, I’ll not only show you the nuances of navigating within your React/Redux applications, I’ll show you how to do this declaratively! You’ll also learn how to maintain state across your app’s navigation switches.
Ready?
👉 NB: In this article, I assume you have a decent understanding of how Redux works. If not, you may want to check out my article on Understanding Redux 📕.
In a bid to make this as pragmatic as possible, I have set up a simple application for this purpose.
The following GIF is that of EmojiLand.
EmojiLand is a simple app, but just good enough to help you digest the very important tips I’ll be sharing in this article.
Notice how the app stays on a current route, but when the button is clicked it performs some fake action and redirects to another route upon completion of the fake action.
In the real world, this fake action could be a network request to fetch some resource, or any other async action.
Just so we’re on the same page, let me share a bit on how the EmojiLand app is built.
To work along, grab the application’s repo from GitHub. If you’re feeling lazy, feel free to skip.
Clone the repo:
git clone
Move into the directory:
cd nav-state-react-router
Install the dependencies:
yarn install or
npm install
Then run the application:
yarn start or
npm start
Done that?
The app is a basic react with redux setup. A very minimal set up with react-router is also included.
In
containers/App.js you’ll find the 6 routes contained in this application.
Below’s the full code representation:
const App = () => (
<Router>
<Switch>
<Route exact path="/" component={AngryDude} />
<Route path="/quiet" component={KeepQuiet} />
<Route path="/smile" component={SmileLady} />
<Route path="/think" component={ThinkHard} />
<Route path="/thumbs" component={ThumbsUp} />
<Route path="/excited" component={BeExcited} />
</Switch>
</Router>
);
Each route leads to an emoji component.
/quiet renders the
KeepQuiet component.
And here’s what the
KeepQuiet component looks like:
import React from "react";
import EmojiLand from "../components/EmojiLand";
import keepQuietImg from "../Images/keepquiet.png";
import emojiLand from "./emojiLand";
const KeepQuiet = ({ appState, handleEmojiAction }) => (
<EmojiLand
EmojiBg="linear-gradient(120deg, #a6c0fe 0%, #f68084 100%)"
EmojiImg={keepQuietImg}
EmojiBtnText="Keep Calm and Stay Quiet."
HandleEmojiAction={handleEmojiAction}
appState={appState}
/>
);
export default emojiLand(KeepQuiet);
It is a simple functional component that renders an
EmojiLand component. The construct of the
EmojiLand component is in
components/EmojiLand.js.
It is isn’t very much of a big deal, and you can have a look on GitHub.
What’s important is that it takes in some props such as a background gradient, image, and button text.
What’s more delicate is the exported component.
Please have a look at the last line of the code block above.
export default emojiLand(KeepQuiet);
emojiLand right there is a higher-order component. All it does is make sure that when you click a button in any of the emoji components, it simulates a fake action for about 1000ms. Remember that in practice this may be a network request.
The
emojiLand higher order component does this by passing the
appState props into the emoji components. In this example,
KeepQuiet
When any of the emoji components is first rendered,
appState is an empty string,
"". After about 1000ms,
appState is changed to
DOSOMETHINGOVER
Where
DOSOMETHINGOVER is represented as a constant, just as shown below:
In
constants/action-types:
export const DOSOMETHINGOVER = "DOSOMETHINGOVER";
Now, this is how every emoji component in this app works!
Also remember that at each route, a separate EmojiLand component is rendered.
Upon the completion of the fake process, let’s assume you wanted to redirect/move to another route within the EmojiLand application.
How do you do this?
Firstly, remember that on hitting the home route, what’s rendered is the
AngryDude component.
The more declarative approach for handling redirects is to use the
Redirect component from React-router.
Let me show you how.
Since we want to redirect from the
AngryDude component, first, you import the
Redirect component within
containers/AngryDude.js like this:
import { Redirect } from "react-router-dom";
For redirects to work, it has to be rendered like a regular component. In our particular example, we’ll be redirecting when the
appState holds the value,
DOSOMETHINGOVER i.e the fake action has been completed.
Now, here’s the code for that:
const AngryDude = ({ appState, handleEmojiAction }) => {
  return appState === DOSOMETHINGOVER ? (
    <Redirect to="/thumbs" />
  ) : (
    <EmojiLand
      EmojiBg="linear-gradient(-180deg, #611A51 0%, #10096D 100%)"
      EmojiImg={angryDudeImg}
      EmojiBtnText="I'm so pissed. Click me"
      HandleEmojiAction={handleEmojiAction}
      appState={appState}
    />
  );
};
Now, if
appState is equal to
DOSOMETHINGOVER, the
Redirect component is rendered.
<Redirect to="/thumbs" />
Note that a required
to prop is added to the
Redirect component. This prop is required to know where to redirect to.
With that in place, here’s that in action:
If we go ahead and do the same for the other route components, we can successfully redirect through all the routes!
Here’s that in action:
That was easy, right?
There’s a bit of a problem though, and I’ll address that in the next section.
I’ll open up a new browser and click through the app, but at some point I’ll attempt to go back, i.e. by using the browser’s back button.
Have a look below:
Note that when I click the back button, it doesn’t go back to the previous route but takes me back to my browser’s homepage.
Why?
This is because by default, using the
Redirect component will replace the current location in the browser’s history stack.
So, even though we cycled multiple routes, the routes replaced each other in the browser’s “records”.
To the browser, we only visited one route. Thus, hitting the back button took me back to the homepage.
It’s like having an Array — but instead of pushing to the array, you replace the current value in the array.
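That array analogy can be sketched in plain JavaScript. This is a toy model of the behaviour (my own names, not react-router's actual implementation):

```javascript
// A toy model of the browser history stack. replace() overwrites the
// current entry; push() stacks entries up, so "back" can walk through them.
function createHistory() {
  const stack = ["homepage"]; // where the browser started
  return {
    replace(route) { stack[stack.length - 1] = route; },
    push(route) { stack.push(route); },
    back() { if (stack.length > 1) stack.pop(); return stack[stack.length - 1]; },
    current() { return stack[stack.length - 1]; },
  };
}

const withReplace = createHistory();
withReplace.push("/");           // entering the app
withReplace.replace("/thumbs");  // default <Redirect> replaces the entry
withReplace.replace("/quiet");
console.log(withReplace.back()); // "homepage": the visited routes are gone

const withPush = createHistory();
withPush.push("/");
withPush.push("/thumbs");        // <Redirect push> keeps "/" in the stack
console.log(withPush.back());    // "/"
```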
There’s a fix though.
The
Redirect component can take a
push prop that deactivates this behaviour. With the
push prop, each route is pushed unto the browser’s history stack and NOT replaced.
Here’s how that looks in code:
return appState === DOSOMETHINGOVER ? (
<Redirect push to="/thumbs" />
) : (
<EmojiLand
EmojiBg="linear-gradient(-180deg, #611A51 0%, #10096D 100%)"
EmojiImg={angryDudeImg}
EmojiBtnText="I'm so pissed. Click me"
HandleEmojiAction={handleEmojiAction}
appState={appState}
/>
);
And here’s the result of that.
Note how we can now navigate back to previously visited routes!
As you move from one route to another, variables in the previous route aren’t carried over to the next route. They are gone!
Yes gone, except you do some work on your end.
What’s interesting is that the
Redirect component makes this quite easy.
As opposed to passing a string
to prop into
Redirect, you could also pass in an object.
What’s interesting is that with the object representation, you can also pass in a
state object.
Within the
state object you may now store any key value pairs you wish to carry over to the route being redirected to.
Let’s see an example in code.
When redirecting from the
AngryDude component to
ThumbsUp, let’s pass in some values into the state field.
Here’s what we had before:
<Redirect push to="/thumbs" />
That’s to be changed to this:
<Redirect
push
to={{
pathname: "/thumbs",
state: {
humanType: "Cat Person",
age: 12,
sex: "none"
}
}}
/>
Now, I have passed in 3 different key value pairs!
humanType,
age, and
sex
But upon redirecting to the
/thumbs route, how do I receive these values?
For route components, react-router makes available a certain
location prop. Within this
location prop, you may access the state object like this,
location.state or
this.props.location.state
NB: Route components are components rendered by the react-router’s <Route> component . They are usually in the signature, <Route component= {Component} />
Here’s an example of me logging the state object received in the new route,
/thumbs i.e within the newly rendered
Thumbs component
const ThumbsUp = ({ appState, handleEmojiAction, location }) => {
  console.log(location.state);
  /* ... render the EmojiLand component as before ... */
};
Note how the location prop is destructured, and then there's the
console.log(location.state)
After being redirected, and the dev console inspected, the state object is indeed right there, in the
/thumbs route!
You may even go a little further and actually render some UI component based on the passed in state.
Here’s what I did:
By grabbing the state passed into
ThumbsUp, I mapped over it and rendered the values below the button. If you care about how I did that, have a look at the source code in
components/EmojiLand.js
Now we’ve made some decent progress!
You may have wondered all the while, “yes this is cool, but where do I use it in the real world?”
There are many use cases, but one very common one is where you have a list of results rendered in a table.
However, each row in this table is clickable, and upon clicking a row, you want to display even more information about the clicked values.
You could use the concepts here to redirect to the new route and also pass in some values from the table row to the new route! All by utilising the redirect’s state object within the
to prop!
In the dev world, there are usually multiple ways to solve a problem. I want this article to be as pragmatic as possible, so I’ll show you the other possible way to navigate between routes.
Assume that we wanted to be redirected to from the
/thumbs route to the
quiet route after performing some action. In this case, we want to do this without using the
Redirect component.
How would you go about this?
Unlike the previous solution where we rendered the
Redirect component, you could use the slightly more imperative method shown below:
history.push("/quiet") or
this.props.history.push("/quiet")
Okay, but where does this
history object come from?
Just like
location in the previous example, react-router also passes down a
history prop into route components.
Here’s what we had in
containers/Thumbs.js :
const ThumbsUp = ({ appState, handleEmojiAction, location }) => {
  /* ... render the EmojiLand component as before ... */
};
Now, we may use the
history object like this:
const ThumbsUp = ({ appState, handleEmojiAction, location, history }) => {
  if (appState === DOSOMETHINGOVER) {
    history.push("/quiet");
  }
  return (
    /* ... the EmojiLand component as before ... */
  );
};
And now, the results are just the same:
Just as expected, we still had the redirection possible!
It is important to note that you can also pass in some state values like this:
history.push("/quiet", {
hello: "state value"
})
Simply pass in a second object parameter into the
history.push function.
Do you realise that we haven’t had to do any “extra” work on the Redux side of things?
All we had to do was learn the APIs
react-router makes available. This is good, and it explains the fact that
react-router and
redux work just fine out of the box.
This app uses
redux, but that’s not a problem.
Got that?
Actually, nothing is wrong with the approaches we’ve discussed so far. They work just fine!
However, there are a few caveats, and depending on how you love to work, or the project you’re working on, you may not find them bearable.
Mind you, I have worked with the previous patterns on large scale project and they work just fine.
However, a lot of Redux purists would prefer to be able to navigate routes by dispatching actions, since that's the primary way of provoking a state change.
Also, many also prefer to synchronise the routing data with the Redux store i.e to have the route data saved within the Redux store.
Lastly, they also crave being able to enjoy support for time travel debugging in their Redux devtools as you navigate various routes.
Now, all of this isn’t possible without some sort of deeper integration between react-router and redux.
So, how can this be done?
In the past, react-router offered the library, react-router-redux for this purpose. However, at the time of writing, the project has been deprecated and is no longer maintained.
I guess it can be still be used, but you may have some fears using a deprecated library in production.
There's still good news, as the react-router-redux maintainers advise you to use the library
connected-react-router
It does have a bit of setup to use, but it isn’t a lot if you need the benefits it gives.
Let’s see how that works, and what we may learn from integrating that into our project, Emojiland.
The first set of things to do are with the Redux store.
1. Create a history object
Technically, there’s a DOM history object for manipulating the browser’s history session.
Let’s programmatically create one ourselves.
To do this, import
createBrowserHistory from
history
In
store/index.js:
...
import { createBrowserHistory } from 'history'
...
history is a dependency of the
react-router-dom package, and it's likely already installed if you use react-router in your app.
After importing
createBrowserHistory, create the history object like this:
..
const history = createBrowserHistory()
Still in the
store/index.js file.
Before now, the
store was created very simply, like this:
const store = createStore(reducer);
Where the
reducer refers to a reducer function in
reducers/index.js, but that won’t be the case very soon.
2. Wrap the root reducer
Import the following helper function from the
connected-react-router library
import { connectRouter } from 'connected-react-router'
The root reducer must now be wrapped as shown below:
const store = createStore(connectRouter(history)(reducer));
Now the reducer will keep track of the router state. Don’t worry, you’ll see what that means in a bit.
In order to see the effect of what we've done so far, in
index.js I have exported the redux store globally, like this:
window.store = store;
Now, within the browser console, you can check what’s in the redux state object with
store.getState()
Here’s that in action:
The router field is now in the Redux state
As you can see, there’s now a
router field in the redux store! This
router field will always hold information about the current route via a location object e.g
pathname,
state etc.
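For illustration, the router slice that connectRouter adds holds the current location object plus the last navigation action, roughly along these lines (the concrete values here are made up):

```json
{
  "router": {
    "location": {
      "pathname": "/thumbs",
      "search": "",
      "hash": "",
      "state": { "humanType": "Cat Person" }
    },
    "action": "PUSH"
  }
}
```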
We aren’t done yet.
In order to dispatch route actions, we need to apply a custom middleware from the
connected-react-router library.
That’s explained next
3. Including a custom middleware
To include the custom middleware for handling dispatched actions, import the needed
routerMiddleware middleware from the library:
...
import { connectRouter, routerMiddleware } from 'connected-react-router'
Then use the
applyMiddleware function from redux:
...
import { createStore, applyMiddleware } from "redux";
...
const store = createStore(
connectRouter(history)(reducer),
applyMiddleware(routerMiddleware(history))
);
Now, we’re almost done. Just one more step.
4. Use the Connected Router!
Remember that react-router gives us a
Route component. However, we need to wrap these
Route components in a
ConnectedRouter component from the
connected-react-router library.
Here’s how:
First, in
index.js you import the
ConnectedRouter component.
import { ConnectedRouter } from 'connected-react-router'
...
Here’s the render function of the
index.js file:
render(
<Provider store={store}>
<App />
</Provider>,
document.getElementById("root")
);
Remember that
App renders the different routes in the app.
const App = () => (
<Router>
<Switch>
<Route exact path="/" component={AngryDude} />
<Route path="/quiet" component={KeepQuiet} />
<Route path="/smile" component={SmileLady} />
<Route path="/think" component={ThinkHard} />
<Route path="/thumbs" component={ThumbsUp} />
<Route path="/excited" component={BeExcited} />
</Switch>
</Router>
);
Now, in
index.js , wrap the
App component with the
ConnectedRouter component. The
ConnectedRouter component should be placed second only to the
Provider component from
react-redux
Here’s what I mean:
render(
<Provider store={store}>
<ConnectedRouter>
<App />
</ConnectedRouter>
</Provider>,
document.getElementById("root")
);
One more thing!
Right now, the app won’t work as expected because the
ConnectedRouter requires a
history prop i.e the history object we created earlier.
Since we need the same object in more than one place, we need it as an exported module.
A quick fix is to create a new file
store/history.js
import { createBrowserHistory } from "history";
const history = createBrowserHistory();
export default history;
Now, this exported
history object will be used in the both places where it is needed.
In
index.js it is imported like this:
import history from "./store/history";
And then passed into the
ConnectedRouter component as shown below:
render(
<Provider store={store}>
<ConnectedRouter history={history}>
<App />
</ConnectedRouter>
</Provider>,
document.getElementById("root")
);
With this, the setup is done, and the app works — without the pesky errors we saw earlier!
Keep in mind I have only set up the basics of
connected-react-router, but I encourage you to check out the more advanced usage of this library.
There’s more you can do with the
connected-react-router library and most of those are documented in the official FAQs. Also, if you have a more robust set up with the Redux devtools, and a logger middleware, be sure to take advantage of time travel and the action logger!
I hope this has been as much fun as it was for me!
If you’ve got any questions, be sure to drop them in the comment section and I’ll be happy to help.
Go build something awesome, and I’ll catch you.
The WG Charter has been extended through July 31 (only the date has been changed in the revised Charter document). The IP clause of the Charter will probably be changed again (through an AC review) to reference the Current Patent Policy.
No changes requested - the posted version of the minutes is approved.
- Issue 205 DF: Last telcon (May 1) we ran out of time before revisiting issue 205, and this issue was inadvertently omitted from this week's (May 8) telcon agenda. HFN has pointed out that MarkB is satisfied with our resolution of 205 when coupled with an additional friendly amendment (). Is anyone not comfortable incorporating HFN's amendment? MH: can you restate this? HFN restates amendment, see [100]. No-one objects to incorporating amendment.
-Primer: Nothing to report.
-Spec: Editors: the editors' to-do list is pretty much empty now; will work on readability checks. OH: conformance document will be linked. CF: The spec diffs are OK, but indicating changes directly in the text is much better. MH: will work on this. DF: The next diffs should be small, and it may not be worth working on this.
-TBTF: DF: Most of the items from the last TBTF meeting are in the agenda of this meeting. HFN volunteered to draft the text for the HTTP binding as requested in the agenda.
-Conformance: Anish: New document posted and linked from the team page. Working for integration for May 14th. It will be up to date for the latest spec. Some editorial issues may still be there, but the core will be there for the WG to review. PC: How many tests come from implementors, and how many from reading the spec? OH: We do not have an answer to that.
-Usage scenarios: Nothing to report.
-Requirements: Nothing to report.
-Email binding: HMM: Is working to bring the document up to date with the changes in the HTTP MEP.
None reported.
Conformance document is pretty much up to date and it will be available next week. We also have outstanding issues and we have to resolve them before going to LC. If we can finish issues by May 14th, we can start review of the whole thing and plan to go to LC. Assuming the major issues are finished on this telcon, the editors indicate they can have a new snapshot by May 14th, possibly by the end of the previous week. DF: when the new spec snapshot becomes available, the WG should do a detailed review. Any other small details that are completed after May 14th can be reviewed separately. WG agrees.
- Issue 195
HFN: the proposal has 2 sides, we have qname or we don't say anything about return values. Noah: we need to clarify our goals in resolving this issue. RPC is a wire representation, used by many languages. Troubles arise when you have optional return values. HFN: we got the same problem with SOAP/1.1; it might be a good idea to have it explicit in the message. DF: Do we care to provide that explicit token? HFN: +1 to Noah, it would be good to have something explicit there. MB: +1 RW: not sure it is necessary, especially in the proposed way. OH: potentially useful, from an implementor's point of view it is clearly a plus. DF: sounds like the majority wants an explicit marker, so what kind of marker should we have? HFN: the choices are: - local names (has sets of problems, conflicts...) - qnames (can't be easily described in a schema) - globally unique name (wrapper) CF: use the ID of the return value and identify it in a header RW: the wrapper helps a little but not as much as I would like. Another alternative is to identify the outbound edge, a clearly identified type that identifies the qname of the return value. OH: is that like Chris' reference option? DF: wrapper seems to be the least unliked option MH: Ray talked against wrapper, did someone say anything against qnames? DF: qnames seems to be the best option then. Does anyone object to resolving issue 195 using QName to identify the result? No-one objected. DF: Issue 195 is closed with this resolution. RayW to write resolution text.
- dealing with root
MH has provided a summary of the root issue [101] NM: we allow refs from the header to the body and the reverse. MH: we identify the rpc struct with tagging. HFN: the body is one struct, you can't have multiple top levels using the RPC convention, as it is not allowed. HFN: agreed we don't need a root attribute. MH: should we add that in the RPC convention there should be only one child? HFN: if you are not using RPC you can have as many top levels as you want.
RPC is more restrictive. DF: Option a in [101] (explicitly disallow top-level multi-refs) seems to resolve the root issue. Does anyone object to resolving the root issue by adopting this option? No one objects. DF: we will adopt this formulation.
- issue 194
DF: Proposal is that the encoding style attribute cannot appear in any elements defined in the envelope namespace. MH: I think "cannot" is "MUST NOT". DF: Is there any objection to closing issue 194 by saying the encoding style attribute must not appear in any elements defined in the envelope namespace? No one objects. DF: issue 194 is closed with this resolution.
- proposal by TBTF to update return code 204
HFN: not supporting 204 will make our integration with HTTP more problematic (e.g. for some MEPs). NM: if 204 would be mapped to a fixed no-response SOAP reply, then it would be treated at the SOAP level. HTTP 204 says "it's a success" YL: 202 doesn't make any assumption wrt processing as 204 does, is the same model valid there? (Scribe missed the reply.) CF: having no body back is valid. HFN: the SOAP processor can see that the 204 SOAP reply is generated MH: it would contain no header, so if we have a "echo this header", it will fail. MH: I really don't think this proposal would work. DF: discussion will continue on email and hope to have it for next telcon. [100] [101]
|
http://www.w3.org/2000/xp/Group/2/05/8-minutes.html
|
CC-MAIN-2015-40
|
refinedweb
| 1,046
| 74.08
|
C++ Tutorial - Macro - 2017
When the C++ compiler runs, it first calls the preprocessor to find any compiler directives that may be included in the source code.
The directives begin with the # character and will be implemented first to modify the source code before it is assembled and compiled.
The changes made by the preprocessor directives create new temporary files that are not normally seen. It is these temporary files that are used to create a binary object file.
The compilation process employs the Preprocessor to compile source code, an Assembler to translate this into machine code, and a Linker to convert one or more binary objects into an executable program.
So, the preprocessor will replace the #include <iostream> directive with the declarations for the cin, cout, and cerr stream objects.
In the previous section, we saw the preprocessor substitute library code for #include directives. Other preprocessor directives can be used to substitute text or numeric values before assembly and compilation.
The #define directive specifies a macro, comprising an identifier name and a string or numeric value, to be substituted by the preprocessor for each occurrence of that macro in the source code.
#define STARS "*****"
#define SITE "bogotobogo.com"
#define YEAR 2011

#include <iostream>
using namespace std ;

int main()
{
    cout << STARS << endl << SITE << endl << STARS ;
    cout << endl << "YEAR is: " << YEAR ;
    cout << endl << "Next YEAR is: " << ((YEAR)+1) << endl ;
    return 0 ;
}
Output is:
*****
bogotobogo.com
*****
YEAR is: 2011
Next YEAR is: 2012
Similar to #include preprocessor directives, #define directives can appear at the start of the source code. As with constant variable names, the macro name uses uppercase, and defined string values should be enclosed within double quotes. For numeric substitution in expressions, the macro name should be enclosed in parentheses to ensure correct precedence.
There are a couple of reasons for avoiding #define for constants:
- No type checking
For a constant defined using #define, there is no type checking. So, we should make sure that we explicitly specify the type of the constant, such as 3.4f.
- No scoping
A constant defined with #define is global.
- No access control
We cannot impose any access control over #define, and it's always public.
- No symbol
In the previous example, we used the YEAR constant, but this symbolic name may be stripped from our code by the preprocessor. As a result, the compiler never sees the name and cannot put it into the symbol table. This causes debugging headaches because we can see only the value of the constant, not its name.
The #ifdef directive performs the most common preprocessor function by testing whether a specified macro has been defined. When the macro has been defined, so the test returns true, the preprocessor includes the subsequent lines up to a corresponding #endif directive.
On the other hand, the #ifndef directive tests whether a specified macro has not been defined. When that test returns true, the preprocessor includes the subsequent lines up to a corresponding #endif directive.
Any previously defined macro can be removed later using the #undef directive so that subsequent #ifdef conditional tests fail. The macro can then be redefined by using the #define directive again.
To test multiple definitions, the #ifdef macro can be expressed as #if defined, with additional tests made using #elif defined.
#if defined _WIN32
#define PLATFORM "Windows"
#elif defined __linux
#define PLATFORM "Linux"
#elif defined __macosx
#define PLATFORM "MacOSX"
#else
#define PLATFORM "Others"
#endif
A C++ program usually has many .h header files, and the header files may contain one or more #include directives to make other classes or functions available from other header files. This can cause duplication where definitions appear in two files. The solution to this problem of redefinition is to use preprocessor directives to ensure the compiler will only be exposed to a single definition, so-called inclusion guards. This creates a unique macro name for each header file. The name is an uppercase version of the file name, with the dot changed to an underscore. For instance, myfunction.h is represented as MYFUNCTION_H.
To create a macro to guard against duplication, an #ifndef directive first tests to see if the definition has already been made by another header file included in the same program. If the definition already exists, the compiler ignores the duplicate definition, otherwise, a #define directive will permit the compiler to use the definition in that header file.
#ifndef MYFUNCTION_H
#define MYFUNCTION_H

inline int func( int x , int y ) { return ( x * y ) ; }

#endif
A well-designed C++ API must always avoid platform-specific #ifdef/#endif lines in its public headers. As an example, let's look at the following API which encapsulates the functionality offered by a tablet. Some tablets offer built-in GPS, but not all of them. As API designers, we should never expose this directly through our API:
class TabletDevice
{
public:
#if defined TARGET_OS_TABLET
    bool getGPS(double &latitude, double &longitude);
#endif
};
This platform-specific API design requires a different API on different platforms, which means it also forces the clients of our API to introduce the same platform specificity into their own applications. In other words, our clients must guard any calls to getGPS() with the same #if statement, otherwise their code may fail to compile with an undefined symbol error on other platforms.
Making things worse, if in a later version of the API, we also add support for another device class, like Windows Tablet, then we need to update the #if line in our public header to include something like _WIN32_WCE. Then, our clients also need to find all instances in their code where they have embedded the TARGET_OS_TABLET define and extend it to also include _WIN32_WCE. This is the price we have to pay because we have exposed the implementation details of our API.
Therefore, we should hide the fact that the function only works on certain platforms and provide a method to determine whether the implementation offers the desired functionalities on the current platform:
class TabletDevice
{
public:
    bool hasGPS() const;
    bool getGPS(double &latitude, double &longitude);
};
In this way, we now have an API which is consistent across all platforms and does not expose the details of which platforms support GPS. The client can check whether the current device supports GPS by calling hasGPS(), and if so, call the getGPS() method. The hasGPS() method can be something like this:
bool TabletDevice::hasGPS() const
{
#if defined TARGET_OS_TABLET
    return true;
#else
    return false;
#endif
}
This is a much better design than the original because the platform-specific #if is now hidden in the .cpp file instead of being exposed in the header file.
#define can be used to make macro functions that will be substituted in the source before compilation. A preprocessor function declaration comprises a macro name immediately followed by parentheses containing the function's arguments. Do not leave any space between the name and the parentheses. The declaration is then followed by the function definition within another set of parentheses. For example, a preprocessor macro function to give the bigger of two values looks like this:
#define MAX(a,b) (a > b ? a : b)
When we use macro functions, however, unlike regular functions, they do not perform any kind of type checking. Because of this drawback, inline functions are usually preferable to macro functions. But because macros directly substitute their code, they avoid the overhead of a function call.
#define MAX(a,b) (a > b ? a : b)

#include <iostream>
using namespace std ;

inline int max(int a, int b) { return (a > b ? a : b) ; }

int main()
{
    int x = 10, y = 20 ;
    cout << "Macro Max(x,y) = " << MAX(x,y) << endl ;
    cout << "inline max(x,y) = " << max(x,y) << endl ;
    return 0 ;
}
Output is:
Macro Max(x,y) = 20
inline max(x,y) = 20
One of the common mistakes we make when using a macro is to forget what the macro expansion actually does. In the following example, if we omit the parentheses around the macro parameter, we get an unexpected result.
#include <stdio.h>
#define SQUARE(n) (n*n)

int main()
{
    int j = 64/SQUARE(4);
    printf("j = %d", j);
    return 0;
}
Surprisingly, it prints out j = 64 instead of j = 4.
Why?
Because the macro expands to j = 64/4*4, not j = 64/(4*4).
So, we need to use the following macro to get the intended answer.
#define SQUARE(n) ((n)*(n))
Here is another example which may give unexpected results:
#include <stdio.h>
#define SQR(n) (n*n)

int main()
{
    int a, b = 3;
    a = SQR(b+2);   // a = (b+2*b+2) = 3+2*3+2 = 11, not 25
    printf("%d\n", a);
    return 0;
}
So, in this case, the macro should be:
#define SQR(n) ((n)*(n))
The number-sign or stringizing operator (#) converts macro parameters to string literals without expanding the parameter definition. It is used only with macros that take arguments. In other words, it converts the characters passed as a macro argument into a string, adding double quotes to enclose the string.
All whitespace before or after the characters passed as a macro argument to the stringizing operator (#) is ignored, and multiple spaces between characters are reduced to a single space.
The stringizing operator is useful for passing string values to a preprocessor #define directive without needing to surround each string with double quotes.
When we use the merging operator (##), we can combine two terms into a single term.
#define SIMPLE(s) cout << #s << endl
#define MERGE( s1, s2 ) cout << s1##s2 << endl

#include <string>
#include <iostream>
using namespace std ;

int main()
{
    string anotherline = "A host of dancing " ;
    anotherline += "Daffodils; " ;
    SIMPLE(I wandered lonely as a Cloud) ;
    SIMPLE(That floats on high oer Vales and Hills) ;
    SIMPLE(When all at once I saw a crowd) ;
    MERGE(another, line ) ;
    SIMPLE(On fishing up the moon.) ;
    SIMPLE(Along the Lake beneath the trees);
    SIMPLE(Ten thousand dancing in the breeze.);
    return 0 ;
}
Output is:
I wandered lonely as a Cloud
That floats on high oer Vales and Hills
When all at once I saw a crowd
A host of dancing Daffodils;
On fishing up the moon.
Along the Lake beneath the trees
Ten thousand dancing in the breeze.
By defining an ASSERT macro function, we can evaluate a specified condition for a boolean value. The condition to be evaluated will be passed from the caller as the ASSERT function argument. The function can then execute appropriate statements according to the result of the evaluation. Multiple statements can be included in the macro function definition by adding a backslash at the end of each line, allowing the definition to continue on the next line.
Several statements calling the ASSERT function can be added to the code to monitor a condition as the program proceeds. An ASSERT can be controlled by a DEBUG macro, which allows debugging to be easily turned on and off simply by changing the value of the DEBUG control macro.
#define DEBUG 1

#if( DEBUG == 1 )
#define ASSERT( expression ) \
    cout << #expression << " ..." << num ; \
    if( expression != true ) { \
        cout << " Failed in file: " << __FILE__ ; \
        cout << " at line: " << __LINE__ << endl ; \
    } \
    else cout << " Passed" << endl ;
#elif( DEBUG == 0 )
#define ASSERT( result ) \
    cout << "Number is " << num << endl ;
#endif

#include <iostream>
using namespace std ;

int main()
{
    int num = 99 ;
    ASSERT( num < 100 ) ;
    num++ ;
    ASSERT( num < 100 ) ;
    return 0 ;
}
We started the program by defining a DEBUG macro with an ON value of 1 to control the ASSERT function:
#define DEBUG 1
Then, we add an #if-#elif block to define ASSERT according to the control value.
Output is:
num < 100 ...99 Passed
num < 100 ...100 Failed in file: c:\assrt.cpp at line: 31
|
http://www.bogotobogo.com/cplusplus/preprocessor_macro.php
|
CC-MAIN-2017-26
|
refinedweb
| 1,939
| 58.42
|
RHG 45 Posted March 18, 2019 I cannot believe it has been over a year since I released Fall of Triton! Fall of Triton was far from perfect, but I am glad about what I did with it. Also, the support and kind words were overwhelming. It was great to see all the PlayStation fans play and enjoy the map. And it is an honour to work on a project that you are passionate about. So to celebrate Fall of Triton's 1 year anniversary, here is a new set of PlayStation levels to play! What is The Vortex Catastrophe? TVC is a 12 level episode (11 normal maps, 1 secret) that takes place directly after Fall of Triton. This time, Doomguy finds himself stranded on the Vortex, a space station built for the UAC to hide from the attack on Earth. Of course, everyone is dead... I created this episode straight after the release of Fall of Triton a year ago; the project was going to be part of a huge remake of Fall of Triton. The Fall of Triton idea got scrapped to work on new bigger projects, but I saved and improved this campaign. What does this campaign feature? - 12 brand new PlayStation maps - The heart-pounding soundtrack by Aubrey Hodges returns - PlayStation sounds and monsters return (with a few new twists) - The nightmare imp returns (this time, he is improved) - Final Doom SSG - New and improved PlayStation lighting (this time with dynamic lights!) Story After slaying the masterminds in the dark tower, all hell goes quiet. Have you really won? A terrifying roar plagues the realm, shaking the ground where you stand. You feel the rush of energy all around you. Something evil awakes! You laugh. You knew hell would not give up so easily. A strange electrical sound echoes through the tower ruins. A portal, but this one is different. It looks like it was built by humans. You step into the unknown.... Your theory was correct, the portal takes you to a UAC facility. A landscape of factories, buildings and roads. It looks like some city, it almost looks like you're back on Earth.
Even you are fooled by the scenery. Are you really home? The look of relief turns to anger as you realise the terrifying reality. You're on Vortex! A huge spaceship built by the UAC five years ago. The ship was built for rich bastards who thought they could escape the invasion of Earth. It worked....for a while. The UAC workers knew their experiments were too risky without a backup plan. So they built the Vortex. Vortex is no ordinary ship, it was designed to simulate Earth....home. The UAC knew the Earth would crumble in the event of a catastrophe, so they built a new one. Vortex was a success. Earth fell, but Vortex remained. But something went wrong... Two years after the Earth invasions, the ship vanished mysteriously overnight. It was the biggest catastrophe known since the demons invaded Earth, nearly wiping out the human race. Over thirty thousand people were on that ship. The UAC were never able to recover from this. Only the Triton base remained...until their thirst for power destroyed Triton as well. You notice the bodies all around the floor and growls coming from inside the base, I guess you know what happened to Vortex! You clench your fist in anger, not because everyone is dead, but because you didn't get to kill them yourself... You know the UAC always have a trick up their sleeve. If something went wrong, they would have an escape plan. But first, it is time to find the unlucky demon in charge of the attack. Download: Thanks for playing! Screenshots:
|
https://www.doomworld.com/forum/topic/105089-playstation-doom-the-vortex-catastrophe-out-now/
|
CC-MAIN-2021-49
|
refinedweb
| 630
| 85.59
|