Oleg Nesterov [oleg@redhat.com] wrote:
| On 11/15, Sukadev Bhattiprolu wrote:
| >
| > Subject: [PATCH] Define/use siginfo_from_ancestor_ns()
|
| Imho, the main problem with this patch is that it tries to do many
| different things at once, and each part is suboptimal/incomplete.
|
| This needs several patches. Not only because this is easier to review,
| but also because each part needs the good changelog.

I agree. I sent this as an RFC to show the overall changes. I do plan to include the following two patches, which should address the issue of ->nsproxy being NULL.

| Also. I don't think we should try do solve the "whole" problem right
| now. For example, if we add/use siginfo_from_ancestor_ns(), we should
| also change sys_sigaction(SIG_IGN). As I said, imho we should start
| with:
|
| - cinit can't be killed from its namespace
|
| - the parent ns can always SIGKILL cinit
|
| This solves most of problems, and this is very simple.

Yes, I agree and am trying to solve only those two :-) I moved out changes to __do_notify() and others to separate patches, but maybe we can simplify this patch further.

| As for .si_pid mangling, this again needs a separate patch.

I thought we were going to use SIG_FROM_USER to decide if the siginfo does in fact have a ->si_pid (so we don't need the switch statement we had in an earlier patch).

| Sukadev, I don't have a time today, I'll return tomorrow with more
| comments on this...

No problem. Thanks for the comments so far.

| > +static int sig_ignored(struct task_struct *t, int sig, int same_ns)
| > {
| > 	void __user *handler;
| >
| > @@ -68,6 +68,14 @@ static int sig_ignored(struct task_struct *t, int sig)
| > 	handler = sig_handler(t, sig);
| > 	if (!sig_handler_ignored(handler, sig))
| > 		return 0;
| > +	/*
| > +	 * ignores SIGSTOP/SIGKILL signals to init from same namespace.
| > +	 *
| > +	 * TODO: Ignore unblocked SIG_DFL signals also here or drop them
| > +	 * in get_signal_to_deliver() ?
| > +	 */
| > +	if (is_container_init(t) && same_ns && sig_kernel_only(sig))
| > +		return 1;
|
| No, no. is_container_init() is slow and unneeded, same_ns is bogus,
| the usage of sig_kernel_only() is suboptimal. The comment is not
| right too...

Maybe in a separate patch, but same_ns is needed to ensure container-init does not ignore signals from ancestor namespace - no? I was undecided between the above sig_kernel_only() check and 'handler == SIG_DFL' (hence the TODO).

| As I already said, this problem is not namespace-specific, we need
| some changes for the global init too.

Right. I used is_container_init() since it includes global init. Again, maybe it could have been separate patches for just global init first. But I see from your patch that we could use SIGNAL_UNKILLABLE instead of is_container_init(). That is more efficient.

| Actually, I already did the patch, I'll send it soon.

Ok. I will review that.

| > static int send_signal(int sig, struct siginfo *info, struct task_struct *t,
| > 			int group)
| > {
| > 	struct sigpending *pending;
| > 	struct sigqueue *q;
| > +	int from_ancestor_ns;
| > +
| > +	from_ancestor_ns = 0;
| > +	if (siginfo_from_user(info)) {
| > +		/* if t can't see us we are from parent ns */
| > +		if (task_pid_nr_ns(current, task_active_pid_ns(t)) == 0)
| 		                             ^^^^^^^^^^^^^^^^^^
|
| ->nsproxy may be NULL, but we can use task_pid(t)->numbers[-1].ns

Eric's patch of generalizing task_active_pid_ns() should fix this. It was reviewed several times, so I did not send it, but yes, I should have mentioned it.

| > @@ -864,6 +902,9 @@ static int send_signal(int sig, struct siginfo *info, struct task_struct *t,
| > 	 * and sent by user using something other than kill().
| > 	 */
| > 		return -EAGAIN;
| > +
| > +	if (from_ancestor_ns)
| > +		return -ENOMEM;
|
| This change alone needs a fat comment in changelog. But I don't think
| we need it now. Until we change the dequeue path to check "from_ancestor_ns".

Ok.

| > +static inline int siginfo_from_ancestor_ns(siginfo_t *info)
| > +{
| > +	return SI_FROMUSER(info) && (info->si_pid == 0);
| > +}
|
| Yes, this is problem... I doubt we can rely on !si_pid here.
| More on this later.

| > @@ -2296,10 +2347,25 @@ sys_rt_sigqueueinfo(pid_t pid, int sig, siginfo_t __user *uinfo)
| > 	   Nor can they impersonate a kill(), which adds source info. */
| > 	if (info.si_code >= 0)
| > 		return -EPERM;
| > -	info.si_signo = sig;
| > +	info.si_signo = sig | SIG_FROM_USER;
| >
| > 	/* POSIX.1b doesn't mention process groups. */
| > -	return kill_proc_info(sig, &info, pid);
| > +	rcu_read_lock();
| > +	spid = find_vpid(pid);
| > +	/*
| > +	 * A container-init (cinit) ignores/drops fatal signals unless sender
| > +	 * is in an ancestor namespace. Cinit uses 'si_pid == 0' to check if
| > +	 * sender is an ancestor. See siginfo_from_ancestor_ns().
| > +	 *
| > +	 * If signalling a descendant cinit, set si_pid to 0 so it does not
| > +	 * get ignored/dropped.
| > +	 */
| > +	if (!pid_nr_ns(spid, task_active_pid_ns(current)))
| > +		info.si_pid = 0;
| > +	error = kill_pid_info(sig, &info, spid);
|
| Can't understand. We set SIG_FROM_USER. If signalling a descendant task
| (not only cinit), send_signal() will clear .si_pid anyway?

Good point. We had gone back and forth on this and I thought one of the emails mentioned this check. Maybe I misread that. But yes, it's not needed since send_signal() does it.

Source: https://lkml.org/lkml/2008/11/18/304
Background
Looking at a document site built with create-react-doc, I found that the page source delivered to the browser was bare (see the picture below). This is a common problem of single-page application (SPA) sites, and it is not friendly to search engines (SEO).
Does that mean SPA sites simply cannot do SEO? Then what about frameworks such as Gatsby and Nuxt, which many bloggers pick first when building a blog: what techniques do these frameworks use to enable SEO? Driven by curiosity, I started my journey of bringing SEO to create-react-doc. There is a comprehensive list of 17 best practices, and 33 practices that should be avoided.
Practical case of SEO in SPA site
Since this is a lightweight document site, we do not consider the SSR scheme for the time being.
After investigating the SEO schemes of document sites on the market, the author summarizes the following four categories:
- Static template rendering scheme
- 404 redirection scheme
- SSG plan
- Pre-rendering scheme
Static template rendering scheme
hexo is the most typical example of the static template rendering scheme. Such frameworks require a specific template language (such as pug) for theme development, so that the web content can be output directly.
404 Redirection Scheme
The principle of the 404 redirect solution is mainly to use the 404 mechanism of GitHub Pages for redirection. Typical cases are spa-github-pages and sghpa.

Unfortunately, Google adjusted its crawler algorithm in 2019, so this kind of redirection scheme no longer helps SEO. The author of spa-github-pages also stated that if SEO is required, you should use an SSG scheme or the paid Netlify plan.
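The trick rests on encoding: the 404 page rewrites the unknown path into a query string that index.html decodes back into a client-side route after the redirect. Below is a simplified sketch of that encoding step; the function name and parameters are illustrative assumptions, and the real spa-github-pages script differs in detail.

```javascript
// Encode a 404'd path into the query-string form that index.html can
// later decode. `segmentCount` is how many leading path segments to keep
// in the base URL (typically 1 for a project page, i.e. the repo name).
function encodeRedirect(pathname, search, segmentCount) {
  const segments = pathname.split('/').filter(Boolean);
  const base = '/' + segments.slice(0, segmentCount).join('/') + '/';
  const rest = segments.slice(segmentCount).join('/');
  // The original query string is appended with '&' separators escaped.
  const query = search ? '&' + search.slice(1).replace(/&/g, '~and~') : '';
  return base + '?/' + rest + query;
}

console.log(encodeRedirect('/repo/docs/page', '', 1)); // '/repo/?/docs/page'
```

The decoding side in index.html reverses the transformation and hands the restored path to the SPA router before it boots.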
SSG plan
The full name of the SSG scheme is static site generator. In the community, the SEO-enabling techniques of frameworks such as Nuxt and Gatsby can, without exception, be classified as SSG schemes.

Taking the Nuxt framework as an example: based on its conventional (file-based) routing, it converts vue files into static web pages by executing the nuxt generate command.
example:
```
-| pages/
--------| about.vue
--------| index.vue
```
After being static, it becomes:
```
-| dist/
--------| about/
----------| index.html
--------| index.html
```
After the routing is static, the document directory structure at this time can be hosted by any static site service provider.
Pre-rendering scheme
After the above analysis of the SSG scheme, the key to optimizing an SPA site is already clear: static routing. Compared with frameworks such as Nuxt and Gatsby, which are constrained by conventional routing, create-react-doc organizes its directory structure flexibly and freely. Its site-building concept is File is Site, and migrating existing markdown documents into it is also very convenient.
Take blog project structure as an example, the document structure is as follows:
```
-| BasicSkill/
--------| basic/
----------| DOM.md
----------| HTML5.md
```
It should become:
```
-| BasicSkill/
--------| basic/
----------| DOM
------------| index.html
----------| HTML5
------------| index.html
```
After investigation, the idea and the prerender-spa-plugin pre-rendering solution hit it off. The principle of the pre-rendering scheme can be seen in the following figure:
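For reference, wiring prerender-spa-plugin into a webpack build typically looks roughly like the fragment below. This is a hypothetical sketch: the staticDir and the route list are placeholders, not create-react-doc's actual configuration.

```javascript
// webpack.config.js (fragment) -- illustrative only
const path = require('path');
const PrerenderSPAPlugin = require('prerender-spa-plugin');

module.exports = {
  // ...entry, output, loaders...
  plugins: [
    new PrerenderSPAPlugin({
      // Directory where the built files live; pre-rendered pages are
      // written back into it.
      staticDir: path.join(__dirname, 'dist'),
      // One entry per route that should become a static index.html.
      routes: ['/', '/BasicSkill/basic/DOM', '/BasicSkill/basic/HTML5'],
    }),
  ],
};
```

The plugin boots a headless browser against the built bundle, visits each route, and saves the rendered HTML to disk, which is exactly the static-routing output the SSG analysis above calls for.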
So far the technology selection is settled: use the pre-rendering scheme to achieve SSG.
Pre-rendering program practice
A brief overview of the steps of create-react-doc's practice in the pre-rendering solution is as follows (for complete changes, see mr):
- Transform hash routing to history routing. Because the history routing structure naturally matches the document static directory structure.
```
export default function RouterRoot() {
  return (
-   <HashRouter>
+   <BrowserRouter>
      <RoutersContainer />
-   </HashRouter>
+   </BrowserRouter>
  )
}
```
- Added a pre-rendering environment on top of the development and production environments, and matched the routing to the environment. This mainly solves the correspondence between resource files and sub-paths under the main domain name. The process was tortuous; interested friends can read the issue.
```
  const ifProd = env === 'prod'
+ const ifPrerender = window.__PRERENDER_INJECTED && window.__PRERENDER_INJECTED.prerender
+ const ifAddPrefix = ifProd && !ifPrerender

  <Route
    key={item.path}
    exact
-   path={item.path}
+   path={ifAddPrefix ? `/${repo}${item.path}` : item.path}
    render={() => { ... }}
  />
```
- Made prerender-spa-plugin compatible with webpack 5.

The official version does not currently support webpack 5 (see the issue for details), and I also need to execute a callback after pre-rendering finishes. Therefore I currently maintain a fork that solves both problems.
After the practice of the above steps, static routing is finally implemented in the SPA site.
SEO optimization comes with an extra buff: the site opens in seconds?
With the SEO work so far, let's look at how metrics such as FP, FCP, and LCP changed before and after the site optimization.
Taking the blog site as an example, the index data before and after optimization is as follows:
Before optimization: before the pre-rendering scheme was applied, first paint (FP, FCP) happened at about 8s, and LCP at about 17s.

After optimization: with the pre-rendering scheme applied, first paint starts within 1s, and LCP is within 1.5s.
Comparing before and after: first-screen paint speed improved by about 8 times, and largest contentful paint by about 11 times. I set out to optimize SEO and got site performance optimization as a bonus.
Generate the Sitemap
After finishing pre-rendering and making the site's routes static, we are one step closer to the SEO goal. Putting aside the finer SEO details for now, let's go straight to the core of SEO: the sitemap.
The format of Sitemap and the meaning of each field are briefly explained as follows:
```xml
<?xml version="1.0" encoding="utf-8"?>
<urlset>
  <!-- Required: each piece of data must be wrapped in <url> and </url> -->
  <url>
    <!-- Required: the URL of the page; must not exceed 256 bytes -->
    <loc></loc>
    <!-- Optional: the last update time of the link -->
    <lastmod>2021-03-06</lastmod>
    <!-- Optional: how frequently the link is updated -->
    <changefreq>daily</changefreq>
    <!-- Optional: the priority of this link relative to others, between 0.0 and 1.0 -->
    <priority>0.8</priority>
  </url>
</urlset>
```
In the above sitemap, the lastmod, changefreq, and priority fields are not that important for SEO; see how-to-create-a-sitemap.
Following the above structure, I developed the sitemap generation package crd-generator-sitemap; its logic is to splice the pre-rendered route paths into the above format.
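The splicing itself can be sketched in a few lines. This is an illustrative reduction, not the actual crd-generator-sitemap source, and the domain and route values are made up:

```javascript
// Build a sitemap string from a list of pre-rendered routes.
function buildSitemap(domain, routes, lastmod) {
  const entries = routes.map(route => [
    '  <url>',
    '    <loc>' + domain + route + '</loc>',
    '    <lastmod>' + lastmod + '</lastmod>',
    '    <changefreq>daily</changefreq>',
    '    <priority>0.8</priority>',
    '  </url>',
  ].join('\n')).join('\n');
  return '<?xml version="1.0" encoding="utf-8"?>\n<urlset>\n' + entries + '\n</urlset>';
}

const xml = buildSitemap('https://example.com', ['/BasicSkill/basic/DOM'], '2021-03-06');
```

In the real package the route list comes straight from the pre-rendering step, so the sitemap always matches what was actually generated.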
The user only needs to add the following parameters to config.yml in the site root directory to generate a sitemap automatically during the release process:

```yaml
seo:
  google: true
```
Submit the generated sitemap to Google Search Console to try it out.
Finally, compare the Google search results for the site before and after the optimization.
Before optimization: Only one piece of data is found.
After optimization: searching finds the URL entries declared in the sitemap.
So far, the complete process of using SSG to make an SPA site SEO-friendly has been implemented. The follow-up work is to go through the Search Engine Optimization (SEO) Beginner's Guide, polish some SEO details, and support more search engines.
Summary
This article starts from implementing SEO for an SPA site, introduces the basic principles of SEO and four practical SEO schemes for SPA sites, and then walks through a complete SEO practice with the create-react-doc SPA framework.
If this article is helpful to you, stars and feedback are welcome.
Hi all,
the extension I’m developing needs an external python library.
I can install it by using pip_install in the python shell but, of course I need to embed this into the installation process.
How can I do this?
Thanks a lot.
Paolo
Hi Paolo,
You can use the following structure to do this right in the module. The first time the module is loaded, the library will be installed automatically; every subsequent time, the import in the try block will succeed and the except branch will not be executed.
```python
try:
    import library_name
except:
    slicer.util.pip_install('library_name')
    import library_name
```
Of course you can use other forms of the Python import lines as well, like
from library_name import module_name
Thanks @markasselin!
Yes, this is a possible solution, but I was looking for a way to do this in the real installation step (maybe you install the extension and the first time you run the extension you have no internet access).
Thanks again
Hi Paulo -
Since the user needs internet to download the extension you are probably safe with the approach Mark suggested. If you want to be one step safer, you could include that code in a method called when the startupCompleted signal is triggered. If you set this up in the module class it will be triggered every time Slicer starts up with your extension installed. Since they need to restart after installing the extension, it's pretty likely to happen while they are on the internet.
Something like this:
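A minimal sketch of that pattern; the helper name is made up for illustration, and inside Slicer you would pass slicer.util.pip_install as the installer:

```python
# Illustrative sketch: `ensure_package` is a hypothetical helper name.
import importlib

def ensure_package(module_name, install):
    """Import module_name, running install(module_name) first if it is missing."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        install(module_name)
        return importlib.import_module(module_name)

# Inside the module class you could then connect it to startup, e.g.:
#   slicer.app.connect("startupCompleted()",
#                      lambda: ensure_package("library_name", slicer.util.pip_install))
```

Because the installer is passed in as a callback, the same helper works both inside Slicer and in plain Python.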
The installation steps that @markasselin and @pieper described are correct.
You cannot install required Python packages during extension installation time, because extensions are shared between all Slicer instances of a specific version, while extension packages are not shared (but installed separately for each Slicer instance).
Tagging for when I forget how to do this.
Thanks a lot!
I used the strategy proposed by @markasselin, it is much more confortable!
Thanks again!
Paolo
Ciao @PaoloZaffino, if your extension is in the extension manager, you can actually do it directly in the cmake (so they will be packed at compilation time on the kitware factory machines):
add python requirements in the superbuild:
ensure the packing directly in Slicer build:
(this will not pollute the Slicer build. They will be installed only when you install the extension)
I tested this recently and it works. However, I tested only on linux.
P.S.: If you don’t have a SuperBuild, I advise to add it if you want to use this approach. | https://discourse.slicer.org/t/install-python-library-with-extension/10110 | CC-MAIN-2022-40 | refinedweb | 465 | 62.07 |
```c
#include <cafe/mem.h>

u32 MEMResizeForMBlockFrmHeap(
        MEMHeapHandle  heap,
        void*          memBlock,
        u32            newSize );
```
If the function succeeds, returns the size in bytes of the changed memory block. Otherwise, zero is returned.
This function changes the size of an allocated memory block in the frame heap. However, this function can only be applied to memory blocks that meet the following condition.
To allocate a memory block from the beginning of an available heap region, specify a positive alignment value when using either the MEMAllocFromFrmHeap or MEMAllocFromFrmHeapEx function.
When increasing the current size of the memory block, there must be enough free space after the memory block to increase the size. If there is insufficient available space, the function fails and returns zero. If the size increase of the memory block is successful, the size increase may be greater than the specified size.
Decreasing the size of the memory block may be impossible when the decrease is only a few bytes. In this case, the current size of the memory block is returned.
This function does not check whether the value specified in memBlock indicates a pointer to a memory block that satisfies the conditions of this function. If a specified value does not satisfy the conditions, the behavior is unknown.
MEMAllocFromFrmHeap
MEMAllocFromFrmHeapEx
MEMCreateFrmHeap
MEMCreateFrmHeapEx
2013/05/08 Automated cleanup pass.
2010/11/01 Initial version.
CONFIDENTIAL | http://anus.trade/wiiu/personalshit/wiiusdkdocs/fuckyoudontguessmylinks/actuallykillyourself/AA3395599559ASDLG/os/Mem/FrmHeap/MEMResizeForMBlockFrmHeap.html | CC-MAIN-2018-05 | refinedweb | 221 | 55.84 |
Documents: Integers are getting corrupted
Bug Description
Hi,
when saving a javascript object with an integer, the integer is not the same after retrieved from the db.
Saving {test_id:2911298} will result in {test_id:2911300}. I'm using 0.1.4bzr89quantal0
Example:
```qml
import QtQuick 2.0
import U1db 1.0 as U1db

Item {
    id: u1dbBug

    U1db.Database {
        id: testDatabase
        path: "test-database"
    }

    function testFunction() {
        var docs = testDatabase.
        for (var x = 0; x < docs.
            var doc = testDatabase.
        }
    }
}
```
This is an upstream bug hitting us in QJsonDocument:
See https:/
Thanks!
So it will be fixed in Qt5.1, any prediction when 5.1 will land?
Current focus in small fixes is backporting them to 5.0.2. The patch from https:/
I've pushed a package to qt5-beta2 PPA: 5.0.2+dfsg1-
This is available for saucy in qt5-beta2 PPA (https:/
The test procedure is:
1. Have the latest saucy image installed on your device. Log in, and do apt-get update && apt-get dist-upgrade on the device.
2. apt-add-repository ppa:canonical-
After testing has been done, the updated version can be uploaded to saucy from lp:~kubuntu-packagers/kubuntu-packaging/qtbase-opensource-src_5.0.2 (orig tarball already in saucy archives)
Updated at qt5-beta2 (5.0.2+
Works fine on the device with qt5-beta2.
Thanks for testing, I'm waiting for a test result for bug #1163687 as well, but it's good to already know it generally seems to work with these two bug patches added.
This bug was fixed in the package qtbase-
---------------
qtbase-
* debian/
- Cherry-pick from upstream (LP: #1181359)
-- Timo Jyrinki <email address hidden> Thu, 06 Jun 2013 15:15:57 +0300
Will this fix also come to raring?
Looks like 2911298 is stored in Sqlite as 2.9113e+06 and rounded when it comes out. | https://bugs.launchpad.net/u1db-qt/+bug/1181359 | CC-MAIN-2020-16 | refinedweb | 359 | 60.21 |
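That storage format alone explains the corruption: six significant digits cannot represent 2911298. A quick Python sketch of the same failure mode (Python is used only for illustration; the actual bug was in Qt's JSON serialization):

```python
# A %g-style conversion keeps only ~6 significant digits, which is
# effectively what the buggy serializer did before writing to SQLite.
doc = {"test_id": 2911298}

serialized = "%g" % doc["test_id"]   # '2.9113e+06'
restored = int(float(serialized))    # 2911300, not 2911298

print(serialized, restored)
```

Serializing doubles with enough precision to round-trip (17 significant digits for IEEE 754 doubles) is exactly what the upstream Qt fix addressed.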
I am using ReactJS with Babel and Webpack and using ES6 as well as the proposed class fields for arrow functions. I understand that arrow functions make things more efficient by not recreating the functions each render similar to how binding in the constructor works. However, I am not 100% sure if I am using them correctly. The following is a simplified section of my code in three different files.
My code:
Main.js
prevItem = () => { console.log("Div is clicked") } render(){ return ( <SecondClass prevItem={this.prevItem} /> ) }
SecondClass.js
<ThirdClass type="prev" onClick={()=>this.props.prevItem()} />
ThirdClass.js
<div onClick={()=>{this.props.onClick()}}>Previous</div>
Question:
Is my code above using the arrow functions correctly? I noticed that for SecondClass.js I could have also used:
<ThirdClass type="prev" onClick={this.props.prevItem} />
Is there a difference between one method or the other since I used an ES6 arrow function in my original function definition? Or should I be using the arrow syntax all the way through until my last div?
Solution 1
I understand that arrow functions make things more efficient by not recreating the functions each render similar to how binding in the constructor works.
This is not true. It depends on where exactly you are using the arrow function. If arrow functions are used in the render method, then they create a new instance every time render is called, just like bind would. Consider this example:
<div onClick={()=>{this.onClick()}}>Previous</div>
Here, each time render is called, an anonymous function is created; that function, when called, calls this.onClick.
However consider the case below
onClick = () => { console.log("Div is clicked") }
In the above case, the arrow function is not recreated every time; instead it binds the context to the React component, because an arrow function does not have its own this: the this value of the enclosing execution context is used, fixed once when the class is instantiated. This is similar to how binding in the constructor works. Note that this is part of the proposed class fields syntax, not an ES6 feature.
To understand what you wish to ask, you must know that a function gets its context from where it is called. Check
this question for more understanding.
In your case, you have used an arrow function to define prevItem, and hence it gets the context of the enclosing React component.
prevItem = () => { console.log("Div is clicked") } render(){ return ( <SecondClass prevItem={this.prevItem} /> ) }
Now in the child, even if you call prevItem with a custom context using bind or an arrow function, prevItem, being defined in the parent (Main.js), will keep the context of its enclosing React component. And since you just wish to execute the prevItem function and do not want to pass any data to it from the child, writing
<ThirdClass type="prev" onClick={()=>this.props.prevItem()} />
and
<div onClick={()=>{this.props.onClick()}}>Previous</div>
is simply useless and will only add to performance implication since new functions are created in
SecondClass and
ThirdClass everytime. You simply don't need to have these functions defined as arrow function and could just write
<ThirdClass type="prev" onClick={this.props.prevItem} />
and
<div onClick={this.props.onClick}>Previous</div>
since it is already bound in the parent.
Now, even if you have to pass some additional data to these functions from ThirdClass or SecondClass, you shouldn't directly use an arrow function or bind in render. Have a look at this answer on how to avoid binding in the render method.
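The identity difference is easy to verify outside React with a plain class; renderWithWrapper stands in for an inline arrow in render, and the class and method names here are illustrative:

```javascript
class Parent {
  prevItem = () => 'clicked';        // one instance, bound at construction

  renderWithWrapper() {
    return () => this.prevItem();    // a NEW wrapper function every "render"
  }

  renderWithReference() {
    return this.prevItem;            // the SAME function every time
  }
}

const p = new Parent();
console.log(p.renderWithWrapper() === p.renderWithWrapper());     // false
console.log(p.renderWithReference() === p.renderWithReference()); // true
```

The second pattern is why passing this.props.prevItem straight down avoids needless re-creation (and needless re-renders in children that compare props by reference).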
Solution 2
I understand that arrow functions make things more efficient by not recreating the functions each time they are referred to
Arrow functions handle the this context in a lexical way, whereas "normal" functions do it dynamically. I wrote about the this keyword in depth if you need more info about it.
In both of your examples of the inline arrow function, you are creating a new function instance on each render.
This will create and pass a new instance on each render
onClick={() => {}}
On the 3rd example you only have one instance.
This only pass a reference to an already existing instance
onClick={this.myHandler}
As for the benefits of arrow functions as class fields (there is a small downside, which I will post at the bottom of the answer): if you have a normal function handler that needs to access the current instance of the class via this:
myHandler(){ // this.setState(...) }
You will need to explicitly bind it to the class.
The most common approach is to do it in the constructor, because the constructor runs only once:
If you use an arrow function as the handler, though, you don't need to bind it to the class, because as mentioned above the arrow function uses a lexical context for this:
myHandler = () => { // this.setState(...) }
With both approaches you will use the handler like this:
<div onClick={this.myHandler}></div>
The main reason for taking this approach:
<div onClick={() => this.myHandler(someParameter)}></div>
is if you want to pass parameters to the handler besides the native event that gets passed, meaning you want to pass a parameter upwards. As mentioned, this will create a new function instance on each render. (There is a better approach for this; keep reading.)
Running example for such use case:

```javascript
class App extends React.Component {
  state = {
    items: [
      { name: 'item 1', active: false },
      { name: 'item 2', active: false },
    ],
  };

  toggleITem = (itemName) => {
    this.setState(prevState => ({
      items: prevState.items.map(item =>
        item.name === itemName ? { ...item, active: !item.active } : item
      ),
    }));
  };

  render() {
    return (
      <div>
        {this.state.items.map(item => {
          const style = { color: item.active ? 'green' : 'red' };
          return (
            <div onClick={() => this.toggleITem(item.name)} style={style}>
              {item.name}
            </div>
          );
        })}
      </div>
    );
  }
}

ReactDOM.render(<App />, document.getElementById('root'));
```

```html
<script src=""></script>
<script src=""></script>
<div id="root"></div>
```
A better approach would be to create component composition.
You can create a child component that wraps the relevant markup, has its own handler, and gets both the data and the handler as props from the parent. The child component will then invoke the handler it got from the parent and pass the data as a parameter.
Running example with child component:
```javascript
class Item extends React.Component {
  onClick = () => {
    const { onClick, name } = this.props;
    onClick(name);
  };

  render() {
    const { name, active } = this.props;
    const style = { color: active ? 'green' : 'red' };
    return (
      <div style={style} onClick={this.onClick}>{name}</div>
    );
  }
}

class App extends React.Component {
  state = {
    items: [
      { name: 'item 1', active: false },
      { name: 'item 2', active: false },
    ],
  };

  toggleITem = (itemName) => {
    this.setState(prevState => ({
      items: prevState.items.map(item =>
        item.name === itemName ? { ...item, active: !item.active } : item
      ),
    }));
  };

  render() {
    return (
      <div>
        {this.state.items.map(item => {
          return <Item {...item} onClick={this.toggleITem} />;
        })}
      </div>
    );
  }
}

ReactDOM.render(<App />, document.getElementById('root'));
```

```html
<script src=""></script>
<script src=""></script>
<div id="root"></div>
```
Class fields, the downside:

As I mentioned, there is a small downside to class fields. The difference between a class method and a class field is that a class field is attached to each instance of the class (the constructor function), whereas class methods are attached to the prototype. Hence, if you create a ridiculously large number of instances of this class, you may see a performance hit.
Given this code block:
```javascript
class MyClass {
  myMethod() {}
  myOtherMethod = () => {};
}
```
babel will transpile it to this:

```javascript
var MyClass = function () {
  function MyClass() {
    _classCallCheck(this, MyClass);

    this.myOtherMethod = function () {};
  }

  _createClass(MyClass, [{
    key: "myMethod",
    value: function myMethod() {}
  }]);

  return MyClass;
}();
```
Solution 3
So your first approach
<ThirdClass type="prev" onClick={()=>this.props.prevItem()} />
With this, you can pass any arguments that are available in ThirdClass to the prevItem function. It's a good way of calling parent functions with arguments, like this:
Your second approach is
<ThirdClass type="prev" onClick={this.props.prevItem} />
This approach disallows you to pass any ThirdClass specific arguments.
Both approaches are right; it just depends on your use case. Both use ES6 arrow functions and are correct in the respective scenarios mentioned above.
Solution 4
Using a curried JavaScript function declaration can be another way to do it, different from the other answers; pay attention to the following code:
clickHandler = someData => e => this.setState({ stateKey: someData });
Now in
JSX, you can write:
<div onClick={this.clickHandler('someData')} />
The clickHandler call with someData returns a function that takes an e argument, but e is not used inside the returned function, so it works well.
To write it more completely:
clickHandler = someData => () => this.setState({ stateKey: someData });
There is no need for e here, so there is no reason to write it.
Solution 5
Using arrows in your original function definition allows you not to bind the function in your constructor.
If you didn't use an arrow...
prevItem(){ console.log("Div is clicked") }
Then you would have to create a constructor a bind it there...
class MyComponent extends Component { constructor(props) { super(props) this.prevItem = this.prevItem.bind(this) } prevItem() { ... } }
Using the arrow is easier when you are starting out, because it just works and you don't have to understand what a constructor is or delve into the complexities of this in JavaScript.

However, performance-wise it is better to bind in the constructor: the bind-in-constructor approach creates a single instance of the function and reuses it, even if the render method is called multiple times.
Adding Outgoing Webhooks Integration to a Slack Channel
In the previous tutorial you saw how to configure a Slack channel to receive messages from external sources, such as an ESP8266 device. This is very handy for notifications triggered by various sensors, which we showed with the example of a motion sensor. Now we want to enable communication in the opposite direction, from Slack to the ESP8266. This will give us plenty of opportunities and new ideas for development. I will show you one simple project I developed where I control home appliances with commands from Slack.
Same as before, you have to add an integration to the Slack channel you want to use for this purpose. If you don't know how to do this, check the previous tutorial. This time we need an integration that will allow us to send Slack messages to external addresses, such as our app in Bluemix. Search for the integration called Outgoing Webhooks.
Outgoing Webhooks integration
Choose a channel, a trigger word (optional), and one or more URLs. If you omit the trigger word, every message sent to the specified channel will be posted to the specified URL(s). If you set a trigger word, only messages that start with it will be posted. The URL has to start with the name of your Node-RED application, followed by an arbitrary path, for example: <appName>.mybluemix.net/slack.
Integration settings
In the above example, only messages sent to channel #sensors that start with the word @ESP8266 will be sent to the URL.
On the Bluemix side, we will use an HTTP node that will listen to incoming messages.
There is also a generated token that can be used for authentication. You can fill a few optional fields and click save settings. Your channel is now ready and you should see a notification about added integration.
Integration added
Node-RED Flow
Now we need to make an app that will accept messages from Slack and forward them to the ESP8266. Same as before, we will make it using Node-RED and it will be hosted on Bluemix (explained in the 3rd part).
The flow consists of an HTTP input node that listens for incoming messages from Slack, a function node that extracts the command string from the whole message, and an IoT out node that sends that command to the ESP8266.
Flow
In the HTTP node you have to put the same path you chose while adding the webhook integration to your channel (/slack in my case).
HTTP in node
A payload with various parameters is sent from Slack; the parameter text contains the actual message content. The function node extracts the text value and removes the trigger word, since we want to transfer only the command text to the ESP8266. I have chosen the following format for all messages I send from Slack: @ESP8266_<commandText>.
Function code:
var text = msg.payload.text;
// strip the 9-character trigger prefix "@ESP8266_"
msg.payload = text.substring(9, text.length);
return msg;
Configure IoT out node:
IBM out node
Upload the subscription code to the ESP8266. Make sure the IoT node configuration corresponds to the subscribe topic (command type 'test' and format String).
That's it. You should see the commands you send from Slack in the Serial Monitor. There are plenty of options for extending this system: you can attach motors, relays, lights, etc., control them from Slack, and build a 'smart' home quickly and cost-effectively.
Control Home Appliances with IR Signals
I decided to connect an Arduino board with an infrared diode to my ESP8266 device and control any home appliance that accepts IR signals. The ESP8266 receives commands from Slack and sends them to the Arduino board over the serial port (UART), which then sends the appropriate signals to the IR diode. To work with the IR protocol, you will have to add a great library by Ken Shirriff, called IRremote. Go to the library manager and search for it, or add the zip file. On the GitHub page you can see the supported boards. I used an Arduino Mega board and attached the IR diode to pin 9 (you can't use just any pin for this purpose; check the GitHub page for more details). You have to put a current-limiting resistor in series to avoid damaging the board. You shouldn't use too large a resistor, because it will reduce the range significantly (I used 100 ohms and the range is around 4 m). If you need a longer range, consider driving the IR diode with a transistor. When placing the diode, be sure that the shorter lead is connected to ground and the longer lead to pin 9 through the resistor. The whole circuit with the Arduino board is quite simple. The yellow unconnected wire in the image is used for the connection with the ESP8266. To transfer data from the ESP8266 to the Arduino over the UART port, the Tx pin of the ESP8266 has to be connected to the Rx pin of the Arduino. Also, don't forget to connect the ground of the ESP8266 to the ground of the Arduino.
Schematics for sending IR signals
The frequency of the emitted IR waves should match the frequency of the IR receiver in the appliance you want to control, but even if there is a small difference, communication should still work properly.
The next thing you need to do is reverse engineer the remote control of the appliance you want to control. You need to know which IR protocol the device uses and which data corresponds to each button of the remote control. To keep it simple, we will just use the on and off commands. To capture this information, you will need to hook an IR receiver to the Arduino and upload a program that does the work for you. I used a TSOP 32128 receiver from this series. It has three pins: ground, power and data. Upload the IRrecvDumpV2 program from the examples to the Arduino and connect the data pin of the receiver to the pin named recvPin in the code.
Schematics for receiving IR signals
When you point the remote control and press any button, you should see readings on the serial monitor. Based on these, you know which protocol is used and which data is transferred when a certain button is pressed. The readings tell you which commands and which data to use when sending commands from the Arduino. See the IRsendDemo program for an example of how data is sent. Your IR remote control may use an unknown protocol, which was the case with my air conditioner.
Serial monitor
Fortunately, this program prints the whole data in raw format, which you can then send with the sendRaw command, without needing to care about the specific protocol.
Raw data
I copied this raw data for on and off commands and everything worked perfectly. This is the code I uploaded to the Arduino:
#include <IRremote.h>

IRsend irsend;

// put the data you obtained for your remote control
// unsigned int irSignalOn[179] = { ...
// unsigned int irSignalOff[179] = { ...

void setup() {
  Serial.begin(115200);
}

void loop() {
  String command = "";
  if (Serial.available()) {
    delay(100);
    while (Serial.available() > 0) {
      command += char(Serial.read());
    }
    if (command == "on") {
      irsend.sendRaw(irSignalOn, 179, 38);
      Serial.println(command);
    } else if (command == "off") {
      irsend.sendRaw(irSignalOff, 179, 38);
      Serial.println(command);
    }
  }
}
It reads the commands that come from the ESP8266 and decides, based on them, which signal to send to the IR diode.
In the ESP8266 code, remove all serial print commands except the one in the callback function, to avoid unnecessary data transfers. The second argument of the sendRaw function is the data length and the third is the frequency of the signal in kHz.
The purpose of this part wasn't to teach you about IR communication, but to show you an interesting application of the system we previously built. If you want to learn more about IR communication, protocols and the library, there is plenty of documentation available online.
That’s the end of this part. You have learnt how to transfer messages from Slack to the ESP8266. Also, you have seen an interesting application of this system where you can remotely control home appliances from Slack.
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <math.h>
#include <assert.h>
// CUDA runtime
#include <cuda_runtime.h>
// helper functions and utilities to work with CUDA
#include <helper_cuda.h>
#include <helper_functions.h>
__global__ void kernel_big()
{
const int MEMSIZE = 15360;
__shared__ unsigned char data[MEMSIZE];
if (threadIdx.x == 0 && threadIdx.y == 0)
{
for (int i = 0; i < MEMSIZE; i++)
{
unsigned char tmp = data[i];
}
for (int i = 0; i < MEMSIZE; i++)
{
data[i] = 42;
}
}
__syncthreads();
}
////////////////////////////////////////////////////////////////////////////////
//! Entry point for Cuda functionality on host side
////////////////////////////////////////////////////////////////////////////////
extern "C" void
runTest(const int argc, const char **argv)
{
// use command-line specified CUDA device, otherwise use device with highest Gflops/s
findCudaDevice(argc, (const char **)argv);
kernel_big<<<dim3(24, 24, 1), dim3(16, 16, 1)>>>();
// check if kernel execution generated an error
getLastCudaError("Kernel execution failed");
}
Thanks for confirming the problem Harry. Hopefully this will get on the list of bugs to fix, because it makes debugging CUDA code more difficult.
Windows 7
Cuda 8.0.61
Driver 376.51
NSight 5.2.0.16321
I've recently had an unusual problem debugging simple GPU code in Nsight. A few months ago I wrote a GPU algorithm which ran perfectly well on my 980 Ti. My co-worker recently built the algorithm on his machine with a 1060, but was getting bounds-check errors in Cuda Debug mode. I went back and ran it again on my machine under the same conditions with the 980 Ti and everything was fine. I then tried 960, 1060, and 1080 cards on my machine and ALL showed the same bounds-check errors. Cuda-memcheck reported NO errors for all of the video cards. I've created a very simple function that reproduces the weird behavior. You can see that all the code does is set up a block of shared memory, then iterate through it. All of the cards I've tried list 48 KB as their max shared memory size, so 15360 bytes shouldn't be a problem. The problem doesn't appear on the 980 Ti, but does on a 960, 1060, and 1080. Thanks
An example error that gets reported is:
Summary of access violations:
c:\programdata\nvidia corporation\cuda samples\v8.0\0_simple\cppintegration\cppintegration.cu(47): error MemoryChecker: #misaligned=1 #invalidAddress=0
================================================================================
Memory Checker detected 1 access violations.
error = misaligned store (global memory)
gridid = 5
blockIdx = {17,6,0}
threadIdx = {0,0,0}
address = 0x10001000523
accessSize = 1
Hi, I don't see this issue on nsight 5.3, could you please check the latest version?
Before you start
About this tutorial
Prerequisites
You must have a JDK 5.0 development environment available to you in order to use generics. You can download JDK 5.0 for free from the Sun Microsystems Web site.
Introduction to generics
What are generics?
Generic types, or generics for short, are an extension to the Java language's type system to support the creation of classes that can be parameterized by types. You can think of a type parameter as a placeholder for a type to be specified at the time the parameterized type is used, just as a formal method argument is a placeholder for a value that is passed at runtime.
The motivation for generics can be seen in the Collections framework. For example, the Map class allows you to add entries of any class to a Map, even though it is a very common use case to store objects of only a certain type, such as String, in a given map.
Because Map.get() is defined to return Object, you typically have to cast the result of Map.get() to the expected type, as in the following code:
Map m = new HashMap();
m.put("key", "blarg");
String s = (String) m.get("key");
To make the program compile, you have to cast the result of get() to String, and hope that the result really is a String. But it is possible that someone has stored something other than a String in this map, in which case the code above would throw a ClassCastException.
Ideally, you would like to capture the concept that m is a Map that maps String keys to String values. This would let you eliminate the casts from your code and, at the same time, gain an additional layer of type checking that would prevent someone from storing keys or values of the wrong type in a collection. This is what generics do for you.
Benefits of generics
The addition of generics to the Java language is a major enhancement. Not only were there major changes to the language, type system, and compiler to support generic types, but the class libraries were overhauled so that many important classes, such as the Collections framework, have been made generic. This enables a number of benefits:
Type safety. The primary goal of generics is to increase the type safety of Java programs. By knowing the type bounds of a variable that is defined using a generic type, the compiler can verify type assumptions to a much greater degree. Without generics, these assumptions exist only in the programmer's head (or, if you are lucky, in a code comment).
A popular technique in Java programs is to define collections whose elements or keys are of a common type, such as "list of String" or "map from String to String." By capturing that additional type information in a variable's declaration, generics enable the compiler to enforce those additional type constraints. Type errors can now be caught at compile time, rather than showing up as ClassCastExceptions at runtime. Moving type checking from runtime to compile time helps you find errors more easily and improves your programs' reliability.
Elimination of casts. A side benefit of generics is that you can eliminate many type casts from your source code. This makes code more readable and reduces the chance of error.
Although the reduced need for casting reduces the verbosity of code that uses generic classes, declaring variables of generic types involves a corresponding increase in verbosity. Compare the following two code examples.
This code does not use generics:
List li = new ArrayList();
li.add(new Integer(3));
Integer i = (Integer) li.get(0);
This code uses generics:
List<Integer> li = new ArrayList<Integer>();
li.add(new Integer(3));
Integer i = li.get(0);
Using a variable of generic type only once in a simple program does not result in a net savings in verbosity. But the savings start to add up for larger programs that use a variable of generic type many times.
Potential performance gains. Generics create the possibility for greater optimization. In the initial implementation of generics, the compiler inserts the same casts into the generated bytecode that the programmer would have specified without generics. But the fact that more type information is available to the compiler allows for the possibility of optimizations in future versions of the JVM.
Because of the way generics are implemented, (almost) no JVM or classfile changes were required for the support of generic types. All of the work is done in the compiler, which generates code similar to what you would write without generics (complete with casts), only with greater confidence in its type safety.
Example of generic usage
Many of the best examples of generic types come from the Collections framework, because generics let you specify type constraints on the elements stored in collections. Consider this example of using the Map class, which involves a certain degree of optimism that the result returned by Map.get() really will be a String:
Map m = new HashMap();
m.put("key", "blarg");
String s = (String) m.get("key");
The above code will throw a ClassCastException in the event someone has placed something other than a String in the map. Generics allow you to express the type constraint that m is a Map that maps String keys to String values. This lets you eliminate the casts from your code and, at the same time, gain an additional layer of type checking that would prevent someone from storing keys or values of the wrong type in a collection.
The following code sample shows a portion of the definition of the Map interface from the Collections framework in JDK 5.0:
public interface Map<K, V> {
    public V put(K key, V value);
    public V get(K key);
}
Note two additions to the interface:
- The specification of type parameters K and V at the class level, representing placeholders for types that will be specified when a variable of type Map is declared
- The use of K and V in the method signatures for get(), put(), and other methods
To gain the benefit of using generics, you must supply concrete values for K and V when defining or instantiating variables of type Map. You do this in a relatively straightforward way:
Map<String, String> m = new HashMap<String, String>();
m.put("key", "blarg");
String s = m.get("key");
When you use the generic version of Map, you no longer need to cast the result of Map.get() to String, because the compiler knows that get() will return a String.
You don't save any keystrokes in the version that uses generics; in fact, it requires more typing than the version that uses the cast. The savings come in the form of the additional type safety you get by using generic types. Because the compiler knows more about the types of keys and values that will be put into a Map, type checking moves from execution time to compile time, improving reliability and speeding development.
Backward compatibility
An important goal for the addition of generics to the Java language was to maintain backward compatibility. Although many classes in the standard class libraries in JDK 5.0 have been generified, such as the Collections framework, existing code that uses Collections classes such as HashMap and ArrayList will continue to work unmodified in JDK 5.0. Of course, existing code that does not take advantage of generics will not gain the additional type-safety benefits of generics.
Basics of generic types
Type parameters
When you define a generic class, or declare a variable of a generic class, you use angle brackets to specify formal type parameters. The relationship between formal and actual type parameters is similar to the relationship between formal and actual method parameters, except that type parameters represent types, not values.
Type parameters in a generic class can be used almost anywhere a class name can be used. For example, here is an excerpt from the definition of the java.util.Map interface:
public interface Map<K, V> {
    public V put(K key, V value);
    public V get(K key);
}
The Map interface is parameterized by two types -- the key type K and the value type V. Methods that would (without generics) accept or return Object now use K or V in their signatures instead, indicating additional typing constraints underlying the specification of Map.
When declaring or instantiating objects of a generic type, you must specify the values of the type parameters:
Map<String, String> map = new HashMap<String, String>();
Note that in this example, you have to specify the type parameters twice -- once in declaring the type of the variable map, and a second time in selecting the parameterization of the HashMap class so you can instantiate an instance of the correct type.
When the compiler encounters a variable of type Map<String, String>, it knows that K and V are now bound to String, and so it knows that the result of Map.get() on such a variable will have type String.
Any class, except an exception type, an enumeration, or an anonymous inner class, can have type parameters.
Naming type parameters
The recommended naming convention is to use uppercase, single-letter names for type parameters. This differs from the C++ convention (see Appendix A: Comparison to C++ templates), and reflects the assumption that most generic classes will have a small number of type parameters. For common generic patterns, the recommended names are:
- K - A key, such as the key to a map
- V - A value, such as the contents of a List, Set, or the values in a Map
- E - An exception class
- T - A generic type
Generic types are not covariant
A common source of confusion with generic types is to assume that, like arrays, they are covariant. They are not. This is a fancy way of saying that List<Object> is not a supertype of List<String>.
If A extends B, then an array of A is also an array of B, and you can freely supply an A[] where a B[] is expected:
Integer[] intArray = new Integer[10];
Number[] numberArray = intArray;
The code above is valid because an Integer is a Number, and an Integer array is a Number array. However, the same is not true with generics. The following code is invalid:
List<Integer> intList = new ArrayList<Integer>();
List<Number> numberList = intList; // invalid
At first, most Java programmers find this lack of covariance annoying, or even "broken," but there is a good reason for it. If you could assign a List<Integer> to a List<Number>, the following code would violate the type safety that generics are supposed to provide:
List<Integer> intList = new ArrayList<Integer>();
List<Number> numberList = intList; // invalid
numberList.add(new Float(3.1415));
Because intList and numberList are aliased, the above code, if allowed, would let you put things other than Integers into intList. However, there is a way to write flexible methods that can accept a family of generic types, as you'll see in the next panel.
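To see why arrays behave differently, the analogous aliasing with arrays compiles but fails only at runtime. Here is a minimal sketch (the class name is my own, for illustration):

```java
public class CovarianceDemo {
    public static void main(String[] args) {
        Integer[] intArray = new Integer[10];
        Number[] numberArray = intArray; // legal: arrays are covariant

        try {
            // Compiles, but the array remembers its real element type...
            numberArray[0] = new Float(3.14f);
        } catch (ArrayStoreException e) {
            // ...so the bad store is caught only when the program runs.
            System.out.println("caught ArrayStoreException");
        }
    }
}
```

Generics move this check to compile time: the invalid List<Number> alias is rejected before the program ever runs.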
Type wildcards
Suppose you have this method:
void printList(List l) {
    for (Object o : l)
        System.out.println(o);
}
The code above compiles on JDK 5.0, but if you try to call it with a List<Integer>, you'll get a warning. The warning occurs because you're passing a generic type (List<Integer>) to a method that only promises to treat it as a List (a so-called raw type), which could undermine the type safety of using generics.
What if you try writing the method like this:
void printList(List<Object> l) {
    for (Object o : l)
        System.out.println(o);
}
It still won't compile, because a List<Integer> is not a List<Object> (as you learned in the previous section, Generic types are not covariant). That's really annoying -- now your generic version is less useful than the original, nongeneric version!
The solution is to use a type wildcard:
void printList(List<?> l) {
    for (Object o : l)
        System.out.println(o);
}
The question mark in the above code is a type wildcard. It is pronounced "unknown" (as in "list of unknown"). List<?> is a supertype of any generic List, so you can freely pass List<Object>, List<Integer>, or List<List<List<Flutzpah>>> to printList().
Type wildcards in action
The previous section, Type wildcards, introduced the type wildcard, which lets you declare variables of type List<?>. What can you do with such a List? Quite conveniently, you can retrieve elements from it, but not add elements to it. The reason for this is not that the compiler knows which methods modify the list and which do not. It is that (most of) the mutative methods happen to require more type information than nonmutative methods. The following code works just fine:
List<Integer> li = new ArrayList<Integer>();
li.add(new Integer(42));
List<?> lu = li;
System.out.println(lu.get(0));
Why does this work? The compiler has no clue as to the value of the type parameter of List for lu. However, the compiler is smart enough to do some type inference. In this case, it infers that the unknown type parameter must extend Object. (This particular inference is no great leap, but the compiler can make some pretty impressive type inferences, as you will see later in The gory details.) So it lets you call List.get() and infers the return type to be Object.
On the other hand, the following code does not work:
List<Integer> li = new ArrayList<Integer>();
li.add(new Integer(42));
List<?> lu = li;
lu.add(new Integer(43)); // error
In this case, the compiler cannot make a strong enough inference about the type parameter of List for lu to be certain that passing an Integer to List.add() is type-safe. So the compiler will not allow you to do this.
Lest you still think the compiler has some notion of which methods change the contents of the list and which don't, note that the following code will work, because it doesn't depend on the compiler having to know anything about the type parameter of lu:
List<Integer> li = new ArrayList<Integer>();
li.add(new Integer(42));
List<?> lu = li;
lu.clear();
Generic methods
You have seen (in Type parameters) that a class can be made generic by adding a list of formal type arguments to its definition. Methods can also be made generic, whether or not the class in which they are defined is generic.
Generic classes enforce type constraints across multiple method signatures. In List<V>, the type parameter V appears in the signatures for get(), add(), contains(), etc. When you create a variable of type Map<K, V>, you are asserting a type constraint across methods: the values you pass to put() will be the same type as those returned by get().
Similarly, when you declare a generic method, you generally do so because you want to assert a type constraint across multiple arguments to the method. For example, depending on the boolean value of the first argument to the ifThenElse() method in the following code, it will return either the second or the third argument:
public <T> T ifThenElse(boolean b, T first, T second) {
    return b ? first : second;
}
Note that you can call ifThenElse() without explicitly telling the compiler what value of T you want. The compiler doesn't need to be told explicitly what value T will have; it only knows that the arguments must all have the same type. The compiler allows you to call the following code, because it can use type inference to infer that substituting String for T satisfies all type constraints:
String s = ifThenElse(b, "a", "b");
Similarly, you can call:
Integer i = ifThenElse(b, new Integer(1), new Integer(2));
However, the compiler doesn't allow the following code, because no type will satisfy the required type constraints:
String s = ifThenElse(b, "pi", new Float(3.14));
Why would you choose to use a generic method, instead of adding the type T to the class definition? There are (at least) two cases in which this makes sense:
- When the generic method is static, in which case class type parameters cannot be used.
- When the type constraints on T really are local to the method, which means that there is no constraint that the same type T be used in another method signature of the same class. By making the type parameter for a generic method local to the method, you simplify the signature of the enclosing class.
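As a sketch of the first case, here is a static generic utility method (the class and method names are my own illustration, not from the class libraries):

```java
import java.util.Collections;
import java.util.List;

public class ListUtils {
    // A static method cannot use a class-level type parameter,
    // so it declares its own: <T> is local to this method.
    public static <T> T firstOrDefault(List<T> list, T defaultValue) {
        return list.isEmpty() ? defaultValue : list.get(0);
    }

    public static void main(String[] args) {
        List<String> names = Collections.singletonList("alpha");
        // T is inferred as String from the arguments; no explicit type needed
        System.out.println(firstOrDefault(names, "none"));
        System.out.println(firstOrDefault(Collections.<String>emptyList(), "none"));
    }
}
```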
Bounded types
In the example in the previous section, Generic methods, the type parameter T was an unconstrained, or unbounded, type. Sometimes you need to specify additional constraints on a type parameter, while still not specifying it completely. Consider the example Matrix class, which uses a type parameter V that is bounded by the Number class:
public class Matrix<V extends Number> { ... }
The compiler would allow you to create a variable of type Matrix<Integer> or Matrix<Float>, but would issue an error if you tried to define a variable of type Matrix<String>. The type parameter V is said to be bounded by Number. In the absence of a type bound, a type parameter is assumed to be bounded by Object. This is why the example in the previous section, Generic methods, allows List.get() to return an Object when called on a List<?>, even though the compiler doesn't know the type of the type parameter V.
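To see what the bound buys you, here is a fleshed-out sketch of such a Matrix class. The sum() method and the List-based constructor are my own illustration; the tutorial only shows the class declaration:

```java
import java.util.Arrays;
import java.util.List;

public class Matrix<V extends Number> {
    private final List<V> values;

    public Matrix(List<V> values) {
        this.values = values;
    }

    // Because V is bounded by Number, the compiler lets us call
    // Number methods such as doubleValue() on every element.
    public double sum() {
        double total = 0.0;
        for (V v : values)
            total += v.doubleValue();
        return total;
    }

    public static void main(String[] args) {
        Matrix<Integer> m = new Matrix<Integer>(Arrays.asList(1, 2, 3));
        System.out.println(m.sum());
        // Matrix<String> s = ...;  // would not compile: String is not a Number
    }
}
```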
A simple generic class
Writing a basic container class
At this point you're ready to write a simple generic class. By far, the most common use cases for generic classes are container classes, such as the Collections framework, or value-holder classes, such as WeakReference or ThreadLocal. Let's write a class, similar to List, that acts as a container, using generics to express the constraint that all elements of the Lhist will have the same type. For simplicity of implementation, Lhist uses a fixed-size array to store values and does not accept null values.
The Lhist class will have one type parameter, V, which is the type of values in the Lhist, and will have the following methods:
public class Lhist<V> {
    public Lhist(int capacity) { ... }
    public int size() { ... }
    public void add(V value) { ... }
    public void remove(V value) { ... }
    public V get(int index) { ... }
}
To instantiate a Lhist, you simply specify the type parameter and the desired capacity when declaring one:
Lhist<String> stringList = new Lhist<String>(10);
Implementing the constructor
The first stumbling block you will run into when implementing the Lhist class is the constructor. You'd like to implement it like this:
public class Lhist<V> {
    private V[] array;
    public Lhist(int capacity) {
        array = new V[capacity]; // illegal
    }
}
This seems a natural way to allocate the backing array, but unfortunately you can't do this. The reasons why are complicated; you'll understand them later when you get to the topic of erasure in The gory details. The way to do what you want is ugly and counterintuitive. One possible implementation of the constructor is this (which uses the approach taken by the Collections classes):
public class Lhist<V> {
    private V[] array;
    public Lhist(int capacity) {
        array = (V[]) new Object[capacity];
    }
}
Alternatively, you could use reflection to instantiate the array. But doing so would require passing an additional argument to the constructor -- a class literal, such as Foo.class. Class literals will be discussed later as well, in the section on Class<T>.
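A sketch of that reflective variant (the two-argument constructor and the class name here are my own, for illustration):

```java
import java.lang.reflect.Array;

public class ReflectiveLhist<V> {
    private final V[] array;

    // The class literal lets reflection create a genuine V[] at runtime,
    // at the cost of an extra constructor argument.
    @SuppressWarnings("unchecked")
    public ReflectiveLhist(Class<V> elementType, int capacity) {
        array = (V[]) Array.newInstance(elementType, capacity);
    }

    public int capacity() {
        return array.length;
    }
}
```

A caller would write new ReflectiveLhist<String>(String.class, 10), and the backing array really is a String[] rather than an Object[].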
Implementing the methods
Implementing the rest of Lhist's methods is a lot easier. Here's the full implementation of the Lhist class:
public class Lhist<V> {
    private V[] array;
    private int size;

    public Lhist(int capacity) {
        array = (V[]) new Object[capacity];
    }

    public void add(V value) {
        if (size == array.length)
            throw new IndexOutOfBoundsException(Integer.toString(size));
        else if (value == null)
            throw new NullPointerException();
        array[size++] = value;
    }

    public void remove(V value) {
        int removalCount = 0;
        for (int i = 0; i < size; i++) {
            if (array[i].equals(value))
                ++removalCount;
            else if (removalCount > 0) {
                array[i - removalCount] = array[i];
                array[i] = null;
            }
        }
        size -= removalCount;
    }

    public int size() {
        return size;
    }

    public V get(int i) {
        if (i >= size)
            throw new IndexOutOfBoundsException(Integer.toString(i));
        return array[i];
    }
}
Note that you use the formal type parameter V in methods that will accept or return a V, but you do not have any idea what methods or fields V has, because that is not known to the generic code.
Using the Lhist class
Using the Lhist class is easy. To define a Lhist of integers, you simply supply the actual value for the type parameter in the declaration and the constructor:
Lhist<Integer> li = new Lhist<Integer>(30);
The compiler knows that any value returned by li.get() will be of type Integer, and it will enforce that anything passed to li.add() or li.remove() is an Integer. With the exception of the weird way the constructor was implemented, you didn't need to do anything terribly special to make Lhist a generic class.
Generics in the Java class libraries
Collections classes
By far, the biggest consumer of generics support in the Java class libraries is the Collections framework. Just as container classes were the primary motivation for templates in C++ (see Appendix A: Comparison to C++ templates), although templates have subsequently been used for much more, improving the type safety of the Collections classes was the primary motivation for generics in the Java language. The Collections classes also serve as a model of how generics can be used, because they demonstrate almost all the standard tricks and idioms of generic types.
All of the standard collection interfaces are generified -- Collection<V>, List<V>, Set<V>, and Map<K,V>. Similarly, the implementations of the collection interfaces are generified with the same type arguments, so HashMap<K,V> implements Map<K,V>, etc.
The Collections classes also use many of the "tricks" and idioms of generics, such as upper- and lower-bounded wildcards. For example, in the interface Collection<V>, the addAll() method is defined as follows:
interface Collection<V> {
    boolean addAll(Collection<? extends V> c);
}
This definition, which combines a wildcard type parameter with a bounded type parameter, allows you to add the contents of a Collection<Integer> to a Collection<Number>.
If the class libraries defined addAll() to take a Collection<V>, you would not be able to add the contents of a Collection<Integer> to a Collection<Number>. Rather than restricting the parameter of addAll() to a collection containing exactly the same element type as the collection you are adding to, it is possible instead to make the more reasonable constraint that the elements of the collection being passed to addAll() be suitable for addition to your collection. Bounded types let you do that, and the use of bounded wildcards frees you from having to make up another placeholder name that will not be used anywhere else.
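A quick sketch of what the bounded wildcard permits:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;

public class AddAllDemo {
    public static void main(String[] args) {
        Collection<Number> numbers = new ArrayList<Number>();
        Collection<Integer> ints = Arrays.asList(1, 2, 3);

        // Allowed because addAll() takes Collection<? extends Number>,
        // and Integer is a subtype of Number.
        numbers.addAll(ints);
        System.out.println(numbers.size()); // 3
    }
}
```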
As a subtle example of how generifying a class can change its semantics (if you're not careful), notice that the type of the argument to Collection.removeAll() is Collection<?>, not Collection<? extends V>. This is because it is acceptable to pass a collection of mixed type to removeAll(), and defining removeAll() more restrictively would have altered the semantics and usefulness of the method. This illustrates how generifying an existing class is a lot harder than defining a new generic class, because you must be careful not to change the semantics of the class or break existing nongeneric code.
Other container classes
In addition to the Collections classes, several other classes in the Java class library act as containers for values. These classes include WeakReference, SoftReference, and ThreadLocal. They have all been generified over the type of value they contain, so WeakReference<T> is a weak reference to an object of type T, and ThreadLocal<T> is a handle to a thread-local variable of type T.
Generics are not just for containers
The most common, and most straightforward, use for generic types is container classes, such as the Collections classes or the references classes (such as
WeakReference<T>.) The meaning of the type parameter in
Collection<V> is intuitively obvious -- "a collection of values all of which are of type V." Similarly,
ThreadLocal<T> has an obvious interpretation -- "a thread-local variable whose type is T." However, nothing in the specification of generics has anything to do with containment.
The meaning of the type parameter in classes such as
Comparable<T> or
Class<T> is more subtle. Sometimes, as in the case of
Class<T>, the type variable is there mostly to help the compiler with type inference. Sometimes, as in the case of the cryptic
Enum<E extends Enum<E>>, it is there to place a constraint on the class hierarchy's structure.
Comparable<T>
The
Comparable interface has been generified so that an object that implements
Comparable declares what type it can be compared with. (Usually, this is the type of the object itself, but sometimes might be a superclass.)
public interface Comparable<T> { public boolean compareTo(T other); }
So the
Comparable interface includes a type parameter
T, which is the type of object a class implementing
Comparable can be compared to. This means that if you are defining a class that implements
Comparable, such as
String, you must declare not only that the class supports comparison, but also what it is comparable to, which is usually itself:
public class String implements Comparable<String> { ... }
Now consider an implementation of a binary
max() method. You want to take two arguments of the same type, both to be
Comparable, and to be
Comparable to each other. Fortunately, that is relatively straightforward if you use a generic method and a bounded type parameter:
public static <T extends Comparable<T>> T max(T t1, T t2) { if (t1.compareTo(t2) > 0) return t1; else return t2; }
In this case, you define a generic method, generified over a type
T, that you constrain to extend (implement)
Comparable<T>. Both of the arguments must be of type
T, which means they are the same type, support comparison, and are comparable to each other. Easy!
Even better, the compiler will use type inference to decide what value of
T was meant when calling
max(). So the following invocation works without having to specify
T at all:
String s = max("moo", "bark");
The compiler will figure out that the intended value of
T is
String, and it will compile and type-check accordingly. But if you tried to call
max() with arguments of class
X that didn't implement
Comparable<X>, the compiler wouldn't permit it.
Class<T>
The class
Class has been generified, but in a way that many people find confusing at first. What is the meaning of the type parameter
T in
Class<T>? It turns out that it is the class instance being referenced. How can that be? Isn't that circular? And even if not, why would it be defined that way?
In prior JDKs, the definition of the
Class.newInstance() method returned
Object, which you would then likely cast to another type:
class Class { Object newInstance(); }
However, using generics, you define the
Class.newInstance() method with a more specific return type:
class Class<T> { T newInstance(); }
How do you create an instance of type
Class<T>? Just as with nongeneric code, you have two ways: calling the method
Class.forName() or using the class literal
X.class.
Class.forName() is defined to return
Class<?>. On the other hand, the class literal
X.class is defined to have type
Class<X>, so
String.class is of type
Class<String>.
What is the benefit of having
Foo.class be of type
Class<Foo>? The big benefit is that it can improve the type safety of code that uses reflection, through the magic of type inference. Also, you don't need to cast
Foo.class.newInstance() to
Foo.
Consider a method that retrieves a set of objects from a database and returns a collection of JavaBeans objects. You instantiate and initialize the created objects via reflection, but that doesn't mean type safety has to go totally out the window. Consider this method:
public static<T> List<T> getRecords(Class<T> c, Selector s) { // Use Selector to select rows List<T> list = new ArrayList<T>(); for (/* iterate over results */) { T row = c.newInstance(); // use reflection to set fields from result list.add(row); } return list; }
You can call this method simply like this:
List<FooRecord> l = getRecords(FooRecord.class, fooSelector);
The compiler will infer the return type of
getRecords() from the fact that
FooRecord.class is of type
Class<FooRecord>. You use the class literal both to construct the new instance and to provide type information to the compiler for it to use in type checking.
Replacing T[] with Class<T>
The
Collection interface includes a method for copying the contents of a collection into an array of a caller-specified type:
public Object[] toArray(Object[] prototypeArray) { ... }
The semantics of
toArray(Object[]) are that if the passed array is big enough, it should be used to store the results; otherwise a new array of the same type is allocated, using reflection. In general, passing an array as a parameter solely to provide the desired return type is kind of a cheap trick, but prior to the addition of generics, it was the most convenient way to communicate type information to a method.
With generics, you have a more straightforward way to do this. Instead of defining
toArray() as above, a generic
toArray() might look like this:
public<T> T[] toArray(Class<T> returnType)
Invoking such a
toArray() method is simple:
FooBar[] fba = something.toArray(FooBar.class);
The
Collection interface has not been changed to use this technique, because that would break many existing collection implementations. But if
Collection were being rebuilt with generics from the ground up, it would almost certainly use this idiom for specifying which type it wants the return value to be.
Enum<E>
One of the other additions to the Java language in JDK 5.0 is enumerations. When you declare an enumeration with the
enum keyword, the compiler internally generates a class for you that extends
Enum and declares static instances for each value of the enumeration. So if you say:
public enum Suit {HEART, DIAMOND, CLUB, SPADE};
the compiler will internally generate a class called
Suit, which extends
java.lang.Enum<Suit> and has constant (
public static final) members called
HEART,
DIAMOND,
CLUB, and
SPADE, each of which is of the
Suit class.
Like
Class,
Enum is a generic class. But unlike
Class, its signature is a little more complicated:
class Enum<E extends Enum<E>> { . . . }
What on earth does that mean? Doesn't that lead to an infinite recursion?
Let's take it in steps. The type parameter
E is used in various methods of
Enum, such as
compareTo() or
getDeclaringClass(). In order for these to be type-safe, the
Enum class must be generified over the class of the enumeration.
So what about the
extends Enum<E> part? That has two parts too. The first part says that classes that are type arguments to
Enum must themselves be subtypes of
Enum, so you can't declare a class X to extend
Enum<Integer>. The second part says that any class that extends
Enum must pass itself as the type parameter. You cannot declare X to extend
Enum<Y>, even if Y extends
Enum.
In summary,
Enum is a parameterized type that can only be instantiated for its subtypes, and those subtypes will then inherit methods that depend on the subtype. Whew! Fortunately, in the case of
Enum, the compiler does the work for you, and the right thing happens.
Interoperating with nongeneric code
Millions of lines of existing code use classes from the Java class library that have been generified, such as the Collections framework,
Class, and
ThreadLocal. It is important that the improvements in JDK 5.0 not break all that code, so the compiler allows you to use generic classes without specifying their type parameters.
Of course, doing things "the old way" is less safe than the new way, because you are bypassing the type safety that the compiler is ready to offer you. If you try to pass a
List<String> to a method that accepts a
List, it will work, but the compiler will emit a warning that type safety might be lost (a so-called "unchecked conversion" warning.)
A generic type without type parameters, such as a variable declared to be of type
List instead of
List<Something>, is referred to as a raw type. A raw type is assignment compatible with any instantiation of the parameterized type, but such an assignment will generate an unchecked-conversion warning.
To eliminate some of the unchecked-conversion warnings, assuming you are not ready to generify all your code, you can use a wildcard type parameter instead. Use
List<?> instead of
List.
List is a raw type;
List<?> is a generic type with an unknown type parameter. The compiler will treat them differently and likely emit fewer warnings.
In any case, the compiler will generate casts when it generates bytecode, so in no case will the generated bytecode be any less safe than it would be without generics. If you manage to subvert type safety by using raw types or playing games with class files, you will get the same
ClassCastExceptions or
ArrayStoreExceptions you would have gotten without generics.
Checked collections
As an aid to migrating from raw collection types to generic collection types, the Collections framework adds some new collection wrappers to provide early warnings for some type-safety bugs. Just as the
Collections.unmodifiableSet() factory method wraps an existing
Set with a
Set that does not permit any modification, the
Collections.checkedSet() (also
checkedList() and
checkedMap()) factory methods create a wrapper, or view, class that prevents you from placing variables of the wrong type in a collection.
The
checkedXxx() methods all take a class literal as an argument, so they can check (at runtime) that modifications are allowable. A typical implementation would look like this:
public class Collections { public static <E> Collection<E> checkedCollection(Collection<E> c, Class<E> type ) { return new CheckedCollection<E>(c, type); } private static class CheckedCollection<E> implements Collection<E> { private final Collection<E> c; private final Class<E> type; CheckedCollection(Collection<E> c, Class<E> type) { this.c = c; this.type = type; } public boolean add(E o) { if (!type.isInstance(o)) throw new ClassCastException(); else return c.add(o); } } }
The gory details
Erasure
Perhaps the most challenging aspect of generic types is erasure, which is the technique underlying the implementation of generics in the Java language. Erasure means that the compiler basically throws away much of the type information of a parameterized class when generating the class file. The compiler generates code with casts in it, just as programmers did by hand before generics. The difference is that the compiler has first validated a number of type-safety constraints that it could not have validated without generic types.
The implications of implementing generics through erasure are considerable and, at first, confusing. Although you cannot assign a
List<Integer> to a
List<Number> because they are different types, variables of type
List<Integer> and
List<Number> are of the same class! To see this, try evaluating this expression:
new List<Number>().getClass() == new List<Integer>().getClass()
The compiler generates only one class for
List. By the time the bytecode for
List is generated, little trace of its type parameter remains.
When generating bytecode for a generic class, the compiler replaces type parameters with their erasure. For an unbounded type parameter (
<V>), its erasure is
Object. For an upper-bounded type parameter (
<K extends Comparable<K>>), its erasure is the erasure of its upper bound (in this case,
Comparable). For type parameters with multiple bounds, the erasure of its leftmost bound is used.
If you inspected the generated bytecode, you would not be able to tell the difference between code that came from
List<Integer> and
List<String>. The type bound
T is replaced in the bytecode with
T's upper bound, which is usually
Object.
Implications of erasure
Erasure has a number of implications that might seem odd at first. For example, because a class can implement an interface only once, you cannot define a class like this:
// invalid definition class DecimalString implements Comparable<String>, Comparable<Integer> { ... }
In light of erasure, the above declaration simply does not make sense. The two instantiations of
Comparable are the same interface, and they specify the same
compareTo() method. You cannot implement a method or an interface twice.
Another, much more annoying implication of erasure is that you cannot instantiate an object or an array using a type parameter. This means you can't use
new T() or
new T[10] in a generic class with a type parameter
T. The compiler simply does not know what bytecode to generate.
There are some workarounds for this issue, generally involving reflection and the use of class literals (
Foo.class), but they are annoying. The constructor in the
Lhist
example class displayed one such technique for working around the problem (see Implementing the constructor), and the discussion of
toArray()
(in Replacing T[] with Class<T>) offered another.
Another implication of erasure is that it makes no sense to use
instanceof to test if a reference is an instance of a parameterized type. The runtime simply cannot tell a
List<String> from a
List<Number>, so testing for
(x instanceof List<String>) doesn't make any sense.
Similarly, the following method won't increase the type safety of your programs:
public <T> T naiveCast(T t, Object o) { return (T) o; }
The compiler will simply emit an unchecked warning, because it has no idea whether the cast is safe or not.
Types versus classes
The addition of generic types has made the type system in the Java language more complicated. Previously, the language had two kinds of types -- reference types and primitive types. For reference types, the concepts of type and class were basically interchangeable, as were the terms subtype and subclass.
With the addition of generics, the relationship between type and class has become more complex.
List<Integer> and
List<Object> are distinct types, but they are of the same class. Even though
Integer extends
Object, a
List<Integer> is not a
List<Object>, and it cannot be assigned or even cast to
List<Object>.
On the other hand, now there is a new weird type called
List<?>, which is a supertype of both
List<Integer> and
List<Object>. And there is the even weirder
List<? extends Number>. The structure and shape of the type hierarchy got a lot more complicated. Types and classes are no longer mostly the same thing.
Covariance
As you learned earlier (see Generic types are not covariant), generic types, unlike arrays, are not covariant. An
Integer is a
Number, and an array of
Integer is an array of
Number. Therefore, you can freely assign an
Integer[] reference to a variable of type
Number[]. But a
List<Integer> is not a
List<Number>, and for good reason -- the ability to assign a
List<Integer> to a
List<Number> could subvert the type checking that generics are supposed to provide.
This means that if you have a method argument that is a generic type, such as
Collection<V>, you cannot pass a collection of a subclass of
V to that method. If you want to give yourself the freedom to do so, you must use bounded type parameters, such as
Collection<T extends V> (or
Collection<? extends V>.)
Arrays
You can use generic types in most situations where you could use a nongeneric type, but there are some restrictions. For example, you cannot declare an array of a generic type (except if the type arguments are unbounded wildcards). The following code is illegal:
List<String>[] listArray = new List<String>[10]; // illegal
Permitting such a construction could create problems, because arrays in Java language are covariant, but parameterized types are not. Because any array type is type-compatible with
Object[] (a
Foo[]is an
Object[]), the following code would compile without warning, but it would fail at runtime, which would undermine the goal of having any program that compiles without unchecked warnings be type-safe:
List<String>[] listArray = new List<String>[10]; // illegal Object[] oa = listArray; oa[0] = new List<Integer>(); String s = lsa[0].get(0); // ClassCastException
If, on the other hand,
listArray were of type
List<?>, an explicit cast would be required in the last line. Although it would still generate a runtime error, it would not undermine the type-safety guarantees offered by generics (because the error would be in the explicit cast). So arrays of
List<?> are permitted.
New meanings for
extends
Before the introduction of generics in the Java language, the
extends keyword always meant that a new class or interface was being created that inherited from another class or interface.
With the introduction of generics, the
extends keyword has another meaning. You use
extends in the definition of a type parameter (
Collection<T extends Number>) or a wildcard type parameter (
Collection<? extends Number>).
When you use
extends to denote a type parameter bound, you are not requiring a subclass-superclass relationship, but merely a subtype-supertype relationship. It is also important to remember that the bounded type does not need to be a strict subtype of the bound; it could be the bound as well. In other words, for a
Collection<? extends Number>, you could assign a
Collection<Number> (although
Number is not a strict subtype of
Number) as well as a
Collection<Integer>,
Collection<Long>,
Collection<Float>, and so on.
In any of these meanings, the type on the right-hand side of
extends can be a parameterized type (
Set<V> extends Collection<V>).
Bounded types
So far, you've seen one kind of type bound -- the upper bound. Specifying an upper bound constrains a type parameter to be a supertype of (or equal to) a given type bound, as in
Collection<? extends Number>. It is also possible, though less common, to specify a lower bound, which you write as
Collection<? super Foo>. Only wildcards can have lower bounds.
In addition to specifying a type constraint on the type parameter, specifying a bound has another significant effect. If a type
T is known to extend
Number, then the methods and fields of
Number can be accessed through a variable of type
T. It might not be known at compile time what the value of
T is, but it is known at least to be a
Number.
There are some restrictions on which classes can act as type bounds. Primitive types and array types cannot be used as type bounds (but array types can be used as wildcard bounds). Any reference type (including parameterized types) can be used as a type bound.
class C <T extends int> // illegal class C <T extends Foo[]> // illegal class C <T extends Foo> //legal class C <T extends Foo<? extends Moo<T>>> //legal class C <T, V extends T> // legal
One place where you might use a lower bound is in a method that selects elements from one collection and puts them in another. For example:
class Bunch<V> { public void add(V value) { ... } public void copyTo(Collection<? super V>) { ... } ... }
The
copyTo() method copies all the values from the
Bunch into a specified collection. Rather than specify that it must be a
Collection<V>, you can specify that it be a
Collection<? super V>, which means
copyTo() can copy the contents of a
Bunch<String> to a
Collection<Object> or a
Collection<String>, rather than just a
Collection<String>.
The other common case for lower bounds is with the
Comparable interface. Rather than specifying:
public static <T extends Comparable<T>> T max(Collection<T> c) { ... }
You can be more flexible in what types you accept:
public static <T extends Comparable<? super T>> T max(Collection<T> c) { ... }
This way, you can pass a type that is comparable to its supertype, in addition to a type that is comparable to itself, for some additional flexibility. This becomes valuable for classes that extend classes that are already
Comparable:
public class Base implements Comparable<Base> { ... } public class Child extends Base { }
Because
Child already implements
Comparable<Base> (which it inherits from the superclass
Base), you can pass it to the second example of
max() above, but not the first.
Multiple bounds
A type parameter can have more than one bound. This is useful when you want to constrain a type parameter to be, say, both
Comparable and
Serializable. The syntax for multiple bounds is to separate the bounds with an ampersand:
class C<T extends Comparable<? super T> & Serializable>
A wildcard type can have a single bound -- either an upper or a lower bound. A named type parameter can have one or more upper bounds. A type parameter with multiple bounds can be used to access the methods and fields of each of its bounds.
Type parameters and type arguments
In the definition of a parameterized class, the placeholder names (such as
V in
Collection<V>) are referred to as type parameters. They have a similar role to that of formal arguments in a method definition. In a declaration of a variable of a parameterized class, the type values specified in the declaration are referred to as type arguments. These have a role similar to actual arguments in a method call. So given the definition:
interface Collection<V> { ... }
and the declaration:
Collection<String> cs = new HashSet<String>();
the name V (which can be used throughout the body of the
Collection interface) is called a type parameter. In the declaration of
cs, both usages of
String are type arguments (one for
Collection<V> and the other for
HashSet<V>.)
There are some restrictions on when you can use type parameters. Most of the time, you can use them anyplace you can use an actual type definition. But there are exceptions. You cannot use them to create objects or arrays, and you cannot use them in a static context or in the context of handling an exception. You also cannot use them as supertypes (
class Foo<T> extends T), in
instanceof expressions, or as class literals.
Similarly, there are some restrictions on which types you can use as type arguments. They must be reference types (not primitive types), wildcards, type parameters, or instantiations of other parameterized types. So you can define a
List<String> (reference type), a
List<?> (wildcard), or a
List<List<?>> (instantiation of other parameterized types). Inside the definition of a parameterized type with type parameter T, you could also declare a
List<T> (type parameter.)
Wrap-up
Summary
The addition of generic types is a major change to both the Java language and the Java class libraries. Generic types (generics) can improve the type safety, maintainability, and reliability of Java applications, but at the cost of some additional complexity.
Great care was taken to ensure that existing classes will continue to work with the generified class libraries in JDK 5.0, so you can get started with generics as quickly or as slowly as you like.
Appendix
Appendix A: Comparison to C++ templates
The syntax for generic classes bears a superficial similarity to the template facility in C++. However, there are substantial differences between the two. For example, a generic type in Java language cannot take a primitive type as a type parameter -- only a reference type. This means that you can define a
List<Integer>, but not a
List<int>. (However, autoboxing can help make a
List<Integer> behave like a
List of int.)
C++ templates are effectively macros; when you use a C++ template, the compiler expands the template using the provided type parameters. The C++ code generated for
List<A> differs from the code generated for
List<B>, because A and B might have different operator overloading or inlined methods. And in C++,
List<A> and
List<B> are actually two different classes.
Generic Java classes are implemented quite differently. Objects of type
ArrayList<Integer> and
ArrayList<String> share the same class, and only one
ArrayList class exists. The compiler enforces type constraints, and the runtime has no information about the type parameters of a generic type. This is implemented through erasure
, explained in The gory details
.
Resources
Learn
- Gilad Bracha, the principal architect of generic type support in the Java language, has written a tutorial on generics.
- Angelika Langer has put together a wonderful FAQ on generics.
- The specification for generics, including the changes to the Java Language Specification, was developed through the Java Community Process under JSR 14.
- Eric Allen explores generics in the four-part article, Java generics without the pain (developerWorks, February-May 2003) in his Diagnosing Java code column series.
- Read the complete Taming Tiger series by John Zukowski for tips about the features of Java 5.0.
- For a more comprehensive look at some of the new features in Java 5.0, see the following articles by Brett McLaughlin:
- "Annotations in Tiger, Part 1" (developerWorks, August 2004)
- "Annotations in Tiger, Part 2" (developerWorks, August 2004)
- "Getting started with enumerated types" (developerWorks, November 2004)
- "Enhanced looping in Java 5.0 with for/in" (developerWorks, December 2004)
- You'll find articles about every aspect of Java programming in the developerWorks Java technology zone.
- Also see the Java technology zone tutorials page for a complete listing of free Java-focused tutorials from developerWorks.
Get products and technologies
Discuss
- Participate. | http://www.ibm.com/developerworks/java/tutorials/j-generics/j-generics.html | CC-MAIN-2015-11 | refinedweb | 8,295 | 52.7 |
Details
Description
We should provide some reusable classes such as JoinPE and a MapEvent. The MapEvent is a schema-less reusable event class.
Activity
- All
- Work Log
- History
- Activity
- Transitions
A generic stream (default stream) class to carry this MapEvent may be desirable in some scenarios.
A user can just create PEs (which uses MapEvent)
Also how about giving some generic PEs ....like PEs which can do math/string/date operations...
Makes sense to have a generic stream, I think.
We could have a factory method for the generic stream that only takes these arguments:
- stream name
- key finder class
orkey finder string
- target PE
(finder string is in the todo list, we already have it in 0.4. the string is parsed to generate the finder class. It's an option for developers who dont' want to write key finder classes.)
Another thought is, should the Event base class be the generic event class? If the map is not created the overhead would be limited to one reference field. The benefit is that all events will be augmentable without having to subclass.
Why would we do this? because it provides an easy way to add meta-data or annotations to events. For example, for 2-way communication we need to send the origin address in the event over several hops. In NLP, we may want to add annotations to the event as it traverses several PEs. The annotator chain may change often, making it impractical to use dedicated event classes. The Map in base Event will make it possible to append these type of info using a simple pattern. (Of course this is more costly than using event classes but requires less boilerplate and fewer classes.) This design provide flexibility to use classes for static typing and speed or maps for flexibility or a combination of both.
We can now dynamically add attributes to Event in a type safe manner. Please review the code:
Here is test case:
Please review.
This was committed in e228a8aeb423feba5d58a24fb160c63c6ac8efca , merged into piper and it is working fine
What do you think of the following API for MapEvent:
package org.apache.s4.base;
public class MapEvent extends Event {
public <T> void put(Class<T> type, String name, T value) {
}
public <T> T get(String name){ T x = null; // get object. // throw exception if type doesn't match. return x; }
} | https://issues.apache.org/jira/browse/S4-21 | CC-MAIN-2016-36 | refinedweb | 395 | 64.51 |
Generating Java Best Seller Book Titles
A slews of books are written for each tech buzz, and their titles are so similar to the point you can reliably predict it. Just for fun, I've written a simple Java class to generate these titles.
import java.util.Arrays;
public class BookTitlesGen {
public static final String[] technologies = new String[]{
"AJAX", "EJB3", "JSF", "Web Services"
};
public static final String[] templates = new String[] {
"XXX in Action", "Professional XXX",
"Effective XXX", "Master XXX",
"XXX Definitive Guide", "Core XXX",
"XXX Cookbook","XXX Bible",
"Head First XXX", "XXX Best Practice",
"A Developer's Guide to XXX",
"XXX Unleashed", "XXX for Dummies",
};
public static void main(String[] args) {
genBookTitles();
}
private static void genBookTitles() {
for(String technology : technologies) {
String banner = "Best Seller Books for " + technology;
char[] line = new char[banner.length()];
Arrays.fill(line, '=');
System.out.println(banner);
System.out.println(line);
for(String template : templates) {
String title = template.replaceAll("XXX", technology);
System.out.println(title);
}
System.out.println("");
}
}
}
They are roughly in the order of popularity. To save space, I omitted the output for JSF and Web Services. Can't we have better names?They are roughly in the order of popularity. To save space, I omitted the output for JSF and Web Services. Can't we have better names?
Best Seller Books for AJAX
==========================
AJAX in Action
Professional AJAX
Effective AJAX
Master AJAX
AJAX Definitive Guide
Core AJAX
AJAX Cookbook
AJAX Bible
Head First AJAX
AJAX Best Practice
A Developer's Guide to AJAX
AJAX Unleashed
AJAX for Dummies
Best Seller Books for EJB3
==========================
EJB3 in Action
Professional EJB3
Effective EJB3
Master EJB3
EJB3 Definitive Guide
Core EJB3
EJB3 Cookbook
EJB3 Bible
Head First EJB3
EJB3 Best Practice
A Developer's Guide to EJB3
EJB3 Unleashed
EJB3 for Dummies
There's definitely a lot to know about this issue. I really like all the points you made.
UI Development Training in Bangalore | https://javahowto.blogspot.com/2006/05/generating-java-best-seller-book.html | CC-MAIN-2020-24 | refinedweb | 316 | 50.77 |
Can I create Ruby classes within functions bodies ?
I seem to be getting error which tells me its not allowed but I think it should be as classes are too objects here.
class A
def method
class B
end
end
end
class A def method self.class.const_set :B, Class.new { def foo 'bar' end } end end A.new.method A::B.new.foo # => 'bar'
However, why do you want to assign a constant inside a method? That doesn't make sense: constants are constant, you can only assign to them once which means you can only run your method once. Then, why do you write a method at all, if it is only ever going to be run once, anyway? | https://codedump.io/share/7rrGDEhcLmKP/1/why-cant-there-be-classes-inside-methods-in-ruby | CC-MAIN-2017-17 | refinedweb | 121 | 82.65 |
I just noticed that on the Power Dock the Omega's LED blinks 10 times and then becomes stable. Is this some kind of signal that something is wrong?
Posts made by Shamyl Mansoor
- RE: Power Dock Issues
- Main Dock still not shipped!
Hi guys,
As part of my pledge I was supposed to get one Main Dock, which still hasn't been shipped. Can I get a date on its shipment?
Shamyl
- RE: Oled Screen
Hurry up with the library, guys! I'm thinking of a very, very cool project with the OLED screen and don't want to waste my time figuring out the code to run it! I just need simple function calls to make it work!
- RE: Oled Screen
Since I only have the mini-dock, I was wondering where I can find the wiring diagram for the OLED expansion if I want to interface it to the Omega using a breadboard? I'm assuming it would be using I2C for communication?
- RE: Oled Screen
I have mine as well. Will probably explore how to interface it over the weekend and post here about it if I do.
- Sending an email using CLI and Python
I received my Onion Omega today and, it being a weekend, wanted to start off with something simple. I've programmed in python before, and while browsing the FAQ and eventually the OpenWRT pages about installing python I saw the python-email library. So I decided to write a python script to send an email from the Onion Omega to my Gmail address. The idea being that once I connect some hardware like a switch or my home door bell to the Onion, it can email me if there is new activity.
The first thing I searched for was how to send email using the CLI. I found a useful link and decided to use mailsend to try and send an email. Here are the steps that I followed:
Step 1: Install mailsend
opkg install mailsend
Step 2: Use mailsend to send an email to my account. I had to experiment with different options but eventually the following worked
mailsend -to recipient@gmail.com -from youruserid@yourhost.com -ssl -port 465 -auth-login -smtp host236.hostmonster.com -sub test +cc +bc -v -user youruserid@yourhost.com -pass "yourpassword" -M "Your message here"
Somehow I’m unable to send an email from my Gmail account so I used another host that my company uses.
Sending emails via Python Script
Step 1: Install Python
Install python on the Onion. I tried installing only python-email, but the library had dependencies on many other libraries, and after fixing those dependencies I still had more. So I decided to install the complete python package instead of python-light.
opkg update
opkg install python
Step 2: Write the script to send an email
#!/usr/bin/python
import smtplib

sender = 'youruserid@yourhost.com'
toaddrs = 'recipient@gmail.com'

message = """From: Onion Omega <Onion@onionomega.com>
To: Recipient <recipient@gmail.com>
Subject: SMTP e-mail test

This is a test e-mail message.
"""

# Credentials
password = 'yourpasswordhere'

# The actual mail send
server = smtplib.SMTP_SSL('your.smtp.server.here:465')  # ('Host:Port')
server.login(sender, password)
server.sendmail(sender, toaddrs, message)
server.quit()
print "done"
Success!
Links:
Another example to send email using SSL via python
- RE: littel bit confused
Had the same issue and was confused. So I guess I have to wait for another shipment!
17 June 2009 15:05 [Source: ICIS news]
TORONTO (ICIS news)--The administrator of insolvent Germany-based polyester firm Trevira has hired its former CEO to help steer the company and find a buyer, he said on Wednesday.
Werner Schneider, who was appointed administrator by a court in
Wohner left the company in early May. Shortly afterwards, Trevira parent Reliance Industries appointed Elke Bauerle, a lawyer and insolvency expert, as managing director.
Trevira, which employs a staff of about 1,800 in Europe and has five production sites in four European countries –
Sales were currently running at €1m ($1.4m)/day, he said, adding that customers and banks kept supporting the company.
“First signals” in finding investors or buyers had been positive. Talks with interested parties were due to get underway this week, Schneider said.
He also said that, contrary to earlier statements,
($1 = €0.72)
GETPEEREID(2) BSD Programmer's Manual GETPEEREID(2)
NAME
     getpeereid - get effective user and group identification of
     locally-connected peer

SYNOPSIS
     #include <sys/types.h>
     #include <sys/socket.h>

     int getpeereid(int s, uid_t *euid, gid_t *egid);

DESCRIPTION
     getpeereid() returns the effective user ID and group ID of the peer
     connected to a UNIX domain socket (see unix(4)). The argument s must be
     of type SOCK_STREAM. One common use is for UNIX domain servers to
     determine the credentials of clients that have connected to it.

     getpeereid() takes three parameters:

     •   s contains the file descriptor of the socket whose peer credentials
         should be looked up.

     •   euid points to a uid_t variable into which the effective user ID
         for the connected peer will be stored.

     •   egid points to a gid_t variable into which the effective group ID
         for the connected peer will be stored.

RETURN VALUES
     If the call succeeds, a 0 is returned and euid and egid are set to the
     effective user ID and group ID of the connected peer. Otherwise, errno
     is set and a value of -1 is returned.

ERRORS
     On failure, errno is set to one of the following:

     [EBADF]       The argument s is not a valid descriptor.

     [ENOTSOCK]    The argument s is a file, not a socket.

     [EOPNOTSUPP]  The socket is not in the UNIX domain.

     [ENOTCONN]    The socket is not connected.

     [ENOBUFS]     Insufficient resources were available in the system to
                   perform the operation.

     [EFAULT]      The euid or egid parameters point to memory not in a
                   valid part of the process address space.

SEE ALSO
     accept(2), bind(2), getpeername(2), getsockname(2), socket(2), unix(4)

HISTORY
     The getpeereid() function call appeared in OpenBSD 3.0.

MirOS BSD #10-current                                               June 26,
Getting to Know Mono
July 1st, 2003 by Julio David Quintana
The first step in taking Mono out for a test spin is to visit the project web site and download the latest source tarballs or platform binaries. Currently, Mono has been ported only to Linux and Windows, but work is being done on Mac OS X, FreeBSD and other platforms. Binaries are available for a variety of Linux distributions including Debian, Red Hat, SuSE and Mandrake. If you use Ximian Red Carpet, the files also are available in the Mono Channel. For this article, we are using Mono version 0.20. You'll notice that in addition to the Mono packages providing the runtime, C# compiler and class libraries, there are a few other goodies to play with such as the Mono debugger, XSP web server and Monodoc documentation browser.
If you have trouble installing Mono, check out the tutorials offered on the web site.
Mono currently comes packaged with the following components:
C# and Basic language compilers.
VES consisting of a JIT compiler and associated garbage collector, security system, class loader, verifier and threading system. An interpreter is also included.
A set of class libraries written in C# that implements the classes defined in the CLI standard, classes that are part of the .NET FCL, and other Mono-specific classes.
Various utilities.
The Mono C# language compiler is mcs. In an interesting programming feat, mcs is written in C#. Since Mono 0.10, mcs even has been able to compile itself. If you are interested in the details of the command-line options, which are compatible with the command-line options provided by Microsoft's C# compiler, a thorough man page is available.
The compiler for Mono's equivalent of Visual Basic.NET, MonoBasic, is mbas. Although not as far in development as the C# compiler, mbas provides enough functionality to experiment a little in Basic.
Two execution environments are included with Mono, mono and mint. mono is a JIT compiler compatible with the CLI's definition of the VES. mint, on the other hand, is an interpreter. It is provided as an easy-to-port alternative to mono, which currently runs only on the x86 platform. For the greatest code execution speed, use mono.
A couple of interesting utilities also provided with Mono are monodis and pedump. monodis is used to disassemble a compiled assembly and output the corresponding CIL code. It was used to display the sample CIL code for Listing 1. If you are curious to see more of what CIL looks like or to take a peek at what makes up a portable executable, play around with these.
Now that we are familiar with the components of Mono, it is time to try them out. To experiment with the language interaction of Mono, we write a simple class with a single method in C# and call it from a MonoBasic program.
Listing 2 shows the C# library ljlib.cs, and Listing 3 shows the MonoBasic program hello.vb.
The first step is to compile the ljlib.cs into a library. Compiled libraries have the .dll extension, and compiled executables have the .exe extension. To compile to a library, use the -target:library switch in mcs:
[jdq@newton]$ mcs -target:library ljlib.cs
Compilation succeeded
This creates the ljlib.dll file, which contains the LJlib namespace and Output class. Now we need to compile the hello.vb program. In order to use the ljlib.dll file we just created, we need to tell the MonoBasic compiler to use it as a reference. We do that with the -r switch:
[jdq@newton]$ mbas -r ./ljlib.dll hello.vb
Compilation succeeded
The output of mbas is the PE hello.exe. It can be executed with mono:
[jdq@newton]$ mono hello.exe
Hello Linux Journal!
And there you have it—two languages, C# and MonoBasic, executing on the same runtime and working together. This is a trivial example; however, it does demonstrate the language independence and interoperability of the CLI and hints at the power of Mono as a development platform.
In Maximum size subarray sum equals k, we are given an array of integers and a value k. You have to find the length of the longest subarray whose sum is equal to k. If no such subarray exists, return 0.
One approach is to use a hash table of running sums: whenever the current running sum, or the difference between two running sums, equals k, record the length of the corresponding subarray, and return the longest such length.
Example:
Input:
A[] = {10, 5, 2, 7, 1, 9};
k = 15;
Output:
4
Explanation:
There are multiple subarrays whose sum is 15, like {10, 5} and {5, 2, 7, 1}, but the longest subarray with sum 15 is {5, 2, 7, 1}, whose length is 4.
Algorithm
- Take two variables, say summation and lengths, and set both to zero, i.e., summation = 0, lengths = 0.
- Declare a key-value structure (map or hash table).
- Iterate with index from 0 to n - 1 and proceed as follows:
- Add arr[index] to summation.
- If summation is equal to k, set lengths = index + 1.
- Check whether summation is already present as a key; if not, add (summation, index) as a key-value pair.
- If (summation - k) is present in the hash table, get its stored index.
- In that case, update lengths: if lengths < index - storeVal[summation - k], set lengths = index - storeVal[summation - k].
Explanation For Maximum size subarray sum equals k
This is an effective way to implement the algorithm with good time complexity. The main idea is to use a HashMap together with a few variables that store intermediate values.
The code uses a for loop that runs over the length of the array. First, it adds arr[index] to summation, keeping summation updated. If summation equals k, it updates lengths to index + 1.
The next if block checks whether storeVal (a HashMap variable) already contains summation as a key; if it does not, it makes an entry in the map, putting the pair (summation, index).
The following if block checks whether storeVal contains the key (summation - k); if so, it compares lengths with (index - storeVal.get(summation - k)), and if lengths is smaller it sets lengths = index - storeVal.get(summation - k). This goes on until we find the maximum length of a subarray with sum equal to k, and at last lengths is returned.
Suppose the given array is:
arr = {10, 5, 2, 7, 1, 9}, k = 15
index = 0: summation = 10 // summation += arr[index]
summation == k // check whether summation equals k — not true here
if (!storeVal.containsKey(summation))
storeVal.put(summation, index) /* at every step the pair (summation, index) is stored in the map this way */
if (storeVal.containsKey(summation - k)) // not present here
The loop continues like this: at index = 1, summation = 15 == k, so lengths becomes 2; at index = 4, summation = 25 and summation - k = 10 is present in the map at index 0, so lengths = 4 - 0 = 4, which is the correct answer.
Implementation in C++ for Maximum size subarray sum equals k
#include <iostream>
#include <bits/stdc++.h>
using namespace std;

int lenOfSubArray(int arr[], int n, int k)
{
    int summation = 0, lengths = 0;
    map<int, int> storeVal;
    for (int index = 0; index < n; index++) {
        // sums up the array
        summation += arr[index];
        // check if summation is equal to k
        if (summation == k) {
            lengths = index + 1;
        }
        if (storeVal.find(summation) == storeVal.end()) {
            // store the values as a key-value pair in the map
            storeVal[summation] = index;
        }
        if (storeVal.find(summation - k) != storeVal.end()) {
            // updation of lengths
            if (lengths < (index - storeVal[summation - k])) {
                lengths = index - storeVal[summation - k];
            }
        }
    }
    return lengths;
}

int main()
{
    int arr[] = { 2, 5, 7, 3, 7, 9, 1 };
    int k = 17;
    int n = sizeof(arr) / sizeof(arr[0]);
    cout << "Length of Longest Substring is:" << lenOfSubArray(arr, n, k);
    return 0;
}
Length of Longest Substring is: 4
Implementation in Java for Maximum size subarray sum equals k
import java.util.*;
import java.io.*;

class findMaxSubArray {
    public static int lenOfSubArray(int arr[], int n, int k) {
        int summation = 0, lengths = 0;
        HashMap<Integer, Integer> storeVal = new HashMap<>();
        for (int index = 0; index < n; index++) {
            // sums up the array
            summation += arr[index];
            // check if summation is equal to k
            if (summation == k) {
                lengths = index + 1;
            }
            if (!storeVal.containsKey(summation)) {
                // store the values as a key-value pair in the map
                storeVal.put(summation, index);
            }
            if (storeVal.containsKey(summation - k)) {
                // updation of lengths
                if (lengths < (index - storeVal.get(summation - k))) {
                    lengths = index - storeVal.get(summation - k);
                }
            }
        }
        return lengths;
    }

    // Driver code
    public static void main(String args[]) {
        int arr[] = { 2, 5, 7, 3, 7, 9, 1 };
        int k = 17;
        int n = arr.length;
        System.out.println("Length of Longest Substring is:" + lenOfSubArray(arr, n, k));
    }
}
Length of Longest Substring is:4
Complexity Analysis
Time complexity
O(n) where n is the size of the array.
Auxiliary Space
O(n) where n is the size of the array.
Introduction
In this second part of our chat application tutorial, we will start serving the application frontend files from the ESP32. We will do this by storing the files on the ESP32 SPIFFS file system and adding endpoints to the HTTP server to serve them.
Although in the previous tutorial we had just a single file with all the HTML and JavaScript of our application, we will now split it into two files: one containing the HTML and another containing the JavaScript. This process was already covered in greater detail here.
If we take a look at the original frontend code, we can clearly see that the HTML part is related with the content of the interface and the JavaScript part is related with its behavior. So, having a separation of the code in two different files makes our application cleaner and the development process easier.
Furthermore, if we were working on a more complex application, it is likely that we would have multiple HTML pages that could reuse some parts of the JavaScript code. If that happens, replicating the same JavaScript code in all the HTML pages is definitely not a good idea for maintenance.
With this in mind, we can simply split the JS and the HTML code and import the JS file from the HTML, like we are going to do below. Although we are not adding any CSS (style and appearance) yet, please keep in mind that if we did, it would also be a good idea to have it on a separated file. We will do that later in part 3.
You can consult the following tutorials for more detailed explanations on how to serve files from the ESP32:
- HTML code
Our HTML code will be pretty much the same as part 1, except that we are no longer having the JavaScript code directly here.
Like we have seen in the previous tutorial, the script tag can be used to embed JavaScript code, which was where we defined all the behavior of our application (websocket connection, enable and disable of elements, etc..). This time, instead of having all the JS code inside the script tag, we are going to import it from an external JavaScript file, using the src attribute.
Note that we should assign the URL of the file containing the JS code to this src attribute. This URL can either be absolute (pointing to another host where the JavaScript file is being server) or relative (pointing to the same host that is serving the HTML file). In our case, we are going with a relative URL, since the ESP32 will not only serve the HTML file but also the JavaScript file.
We will assume that the file is called chat.js and it will be served in a route with the same exact name. Consequently, the script tag should look like the following:
<head>
<script src="chat.js"></script>
</head>
Naturally, we will need to ensure that the ESP32 will serve the JS file on the “/chat.js” route, so it can be imported in this HTML file.
The rest of the HTML code should be exactly like the one we wrote in part 1. As a quick reminder, it has 4 divs:
- One for the connection elements (input text for the name and connect button).
- One that starts empty but will be where the messages of the chat will be added.
- One with the elements to send a new message (input text for the message and send button).
- One with the disconnect button.
The full code, which includes these 4 divs in the body, can be seen below.
<!DOCTYPE html>
<html>
<head>
<script src="chat.js"></script>
</head>
<body>
<div>
<input type = "text" id = "nameText"></input>
<button onclick = "OpenWebsocket()" id = "connectButton">CONNECT</button>
</div>
<div id = "chatDiv">
</div>
<div>
<input type = "text" id = "inputText" disabled></input>
<button onclick = "SendData()" id = "sendButton" disabled>SEND</button>
</div>
<div>
<button onclick = "CloseWebsocket()" id = "disconnectButton" disabled>DISCONNECT</button>
</div>
</body>
</html>
We will assume that this HTML file will be called “chat.html“.
The JavaScript code
Since we removed the JS code from the HTML, we need to place it in another file, so it can be imported. As already mentioned, we are assuming that the file is called “chat.js“.
Like for the HTML, this file will contain pretty much the same JavaScript code from the previous tutorial, with a small exception. On that tutorial, we were running the code from a computer, meaning that we had to know beforehand what was the IP address of the ESP32 in the network (recall that we had it hardcoded in the websocket URL).
Now, we are going to be serving the JS and the HTML file from the same ESP32 that is hosting the websocket endpoint. So, we can simply use the hostname property from the location object, which should allow us to access the host that served the file. In our case, since we are not using any domain name, this host will simply be the IP address of the ESP32.
Using this approach, we no longer have to hardcode the IP address of the ESP32 on the JS code. The portion of the code that is new is shown below (it executes inside the OpenWebsocket function).
const url = `ws://${location.hostname}/chat`;
ws = new WebSocket(url);
The full JavaScript code, already with this modification, is shown below.
var ws = null;
var name = null;

function OpenWebsocket() {
    const nameTextElement = document.getElementById("nameText");
    name = nameTextElement.value;
    nameTextElement.value = '';

    const url = `ws://${location.hostname}/chat`;
    ws = new WebSocket(url);

    ws.onopen = function() {
        document.getElementById("inputText").disabled = false;
        document.getElementById("sendButton").disabled = false;
        document.getElementById("disconnectButton").disabled = false;
        document.getElementById("connectButton").disabled = true;
        document.getElementById("nameText").disabled = true;
    };

    ws.onclose = function() {
        document.getElementById("inputText").disabled = true;
        document.getElementById("sendButton").disabled = true;
        document.getElementById("disconnectButton").disabled = true;
        document.getElementById("connectButton").disabled = false;
        document.getElementById("nameText").disabled = false;
        document.getElementById("chatDiv").innerHTML = '';
    };

    ws.onmessage = function(event) {
        const receivedObj = JSON.parse(event.data);
        const textToDisplay = `${receivedObj.name}: ${receivedObj.msg}`;

        const newChatEntryElement = document.createElement('p');
        newChatEntryElement.textContent = textToDisplay;

        const chatDiv = document.getElementById("chatDiv");
        chatDiv.appendChild(newChatEntryElement);
    };
}

function CloseWebsocket() {
    ws.close();
}

function SendData() {
    const inputTextElement = document.getElementById("inputText");
    const msg = inputTextElement.value;
    inputTextElement.value = '';

    const objToSend = {
        msg: msg,
        name: name
    };

    const serializedObj = JSON.stringify(objToSend);
    ws.send(serializedObj);
}
Uploading the code to the ESP32 file system
In order for the ESP32 to be able to access the previous two files (“chat.html” and “chat.js“), we need to upload them to the file system of the device. The easiest way to do so is using the Arduino IDE file system upload plugin. The usage of this plugin was detailed in this previous tutorial.
In short, we need to have an Arduino sketch opened in the Arduino IDE and access its location in the computer file system. Once we are on the sketch folder, we need to create a folder called “data“. Inside, we place the files we want to upload (in our case, the “chat.html” and the “chat.js” files, like shown below in figure 1).
After having the “data” folder ready, we simply need to go back to the Arduino IDE and, under the “Tools” menu, select “ESP32 Sketch Data Upload”. This should initiate the process to upload the file to the SPIFFS file system of the ESP32.
Note that the files will keep the same name in the ESP32 file system and, since they were located in the data folder, they will be on the root of the file system. Consequently, they are located on the “/chat.html” and “/chat.js” paths.
The ESP32 code
The ESP32 code will also not differ much from what we have covered in the previous tutorial. In this case, we will need to add two new endpoints to serve our HTML and JS files, and interact with the file system to obtain them.
Since we will need to interact with the file system of the ESP32 (SPIFFS), we will need to include the SPIFFS.h library, in addition to the includes we already had.
#include <SPIFFS.h>
Moving on to the Arduino setup function, we need to initialize the SPIFFS file system. Only after that we will be able to access its files. To perform the initialization, we simply need to call the begin method on the SPIFFS extern variable that gets available from the previously mentioned include.
Note that this method call will return a Boolean indicating if the initialization was correctly done (true) or not (false). We will use this value for error checking, since if it fails we cannot proceed with the execution of the rest of the code, as it wouldn’t work.
if(!SPIFFS.begin()){
    Serial.println("An Error has occurred while mounting SPIFFS");
    return;
}
After the initialization of SPIFFS, we should now be able to access files in the file system. So, we will first configure a route that will serve the HTML of our application. This route will be called “/chat” and answer to HTTP GET requests.
server.on("/chat", HTTP_GET, [](AsyncWebServerRequest *request){
    // function implementation
});
On its implementation, the route handling function will access the “chat.html” file and serve it to the client. We can do so by calling the send method on the AsyncWebServerRequest object to which we receive a pointer as input of the route handling function. We pass as first input the SPIFFS extern variable, as second the path to the file in the SPIFFS file system (it should be “/chat.html“, since we uploaded it to the root of the file system), and as third the content-type (which should be “text/html“, so the client knows how to interpret its content).
request->send(SPIFFS, "/chat.html", "text/html");
The full code for the route configuration is shown below.
server.on("/chat", HTTP_GET, [](AsyncWebServerRequest *request){
    request->send(SPIFFS, "/chat.html", "text/html");
});
We will also need to configure a route to serve the “chat.js” file. From the HTML code, we can recall that it expects this file to be served on the “/chat.js” endpoint. The implementation of the route handling function will be similar to the previous one, except that we are going to access the “/chat.js” file in the SPIFFS file system, and the content-type is “text/javascript“.
server.on("/chat.js", HTTP_GET, [](AsyncWebServerRequest *request){
    request->send(SPIFFS, "/chat.js", "text/javascript");
});
The complete code with these changes is shown below.
#include <WiFi.h>
#include <ESPAsyncWebServer.h>
#include <SPIFFS.h>

const char* ssid = "yourNetworkName";
const char* password = "yourNetworkPassword";

AsyncWebServer server(80);
AsyncWebSocket ws("/chat");

void onWsEvent(AsyncWebSocket * server, AsyncWebSocketClient * client, AwsEventType type, void * arg, uint8_t *data, size_t len){

    if(type == WS_EVT_CONNECT){
        Serial.println("Websocket client connection received");
    } else if(type == WS_EVT_DATA){
        ws.textAll(data, len);

        Serial.print("Data received: ");
        for(int i=0; i < len; i++) {
            Serial.print((char) data[i]);
        }
        Serial.println();
    }
}

void setup(){
    Serial.begin(115200);

    if(!SPIFFS.begin()){
        Serial.println("An Error has occurred while mounting SPIFFS");
        return;
    }

    WiFi.begin(ssid, password);

    while (WiFi.status() != WL_CONNECTED) {
        delay(1000);
        Serial.println("Connecting to WiFi..");
    }

    Serial.println(WiFi.localIP());

    ws.onEvent(onWsEvent);
    server.addHandler(&ws);

    server.on("/chat", HTTP_GET, [](AsyncWebServerRequest *request){
        request->send(SPIFFS, "/chat.html", "text/html");
    });

    server.on("/chat.js", HTTP_GET, [](AsyncWebServerRequest *request){
        request->send(SPIFFS, "/chat.js", "text/javascript");
    });

    server.begin();
}

void loop(){}
Testing the code
To test the system, simply compile and upload the Arduino code from the previous section to the ESP32. Once the procedure is finished, open a serial monitor tool of your choice, to obtain the outputs from the ESP32.
Once the connection to the WiFi network is finished, you should get the IP address assigned to the ESP32 on that network. Copy that address. Then, open a web browser of your choice in a computer connected to the same network and type the following, replacing #yourEsp32Ip# with the IP address you have just copied:

http://#yourEsp32Ip#/chat
As shown in figure 2, you should get exactly the same page from the previous part.
Then, to test everything is working correctly, simply open more tabs to the same URL and test the chat application like we have done before. If you have more devices connected to the network, you can also test that the chatting app works the same (ex: using a smartphone to talk with the computer).
I'm trying to use Slapper.AutoMapper alongside Dapper to accomplish something like this: How do I write one to many query in Dapper.Net?
My POCO is like this:
public class MyEntity
{
    public string Name { get; set; }
    public string Description { get; set; }
    public int Level { get; set; }
    public IList<int> Types { get; set; }
}
And my DB rows returned are like this:
So one entity can have many Types. This is how I map the stuff:
dynamic test = conn.Query<dynamic>(sql);
Slapper.AutoMapper.Configuration.AddIdentifier(typeof(MyEntity), "Name");
var testContact = Slapper.AutoMapper.MapDynamic<MyEntity>(test);
However, in all my result objects the Types property is null. How can I map all the Type values into the IList Types?
Check the last commit on Slapper.AutoMapper repository...
Subject: Re: [boost] [RFC] unique_val (handles, IDs, descriptors...)
From: Miguel Ojeda (miguel.ojeda.sandonis_at_[hidden])
Date: 2018-02-15 11:18:43
On Thu, Feb 15, 2018 at 11:18 AM, Andrzej Krzemienski via Boost
<boost_at_[hidden]> wrote:
> 2018-02-14 22:18 GMT+01:00 Miguel Ojeda via Boost <boost_at_[hidden]>:
>
>>.
>>
>
> So, it is my understanding that the difference between an `int` and an
> `unique_val<int>` is that the latter is not copyable (and whoever keeps it
> as member is not copyable), and moving from it guarantees that a certain
> default value will be assigned to it. Right?
Right. Especially important is the fact that it makes the types that keep
it as a member movable (if they default it -- given the current rules)
without effort, i.e. no need to implement move constructor/assignment
(which for some people is non-trivial, or they have not yet learned
them).
>
> If I got that right, I have two remarks.
>
> 1. In your example:
>
> ```
> class Foo
> {
> unique_val<FooHandle> id_;
>
> public:
> Foo() : id_(CreateFoo(/* ... */))
> {
> if (not id_)
> throw std::runtime_error("CreateFoo() failed");
> }
>
> ~Foo()
> {
> if (id_)
> DestroyFoo(id_.get());
> }
>
> Foo(Foo&&) = default;
> Foo& operator=(Foo&&) = default;
> };
> ```
>
> `id_` is private (should be of no interest to the users). `Foo` is indeed
> non-copyable, but it has not been explicitly declared to the users: it has
> only been inferred from the private member (implementation detail). I would
> still expect that the class has copy operations explicitly deleted so that
> I know it is by design rather than by the current choice of private members.
>
> Your type indeed prevents accidental copying. But at least the example
> encourages a bad practice of not declaring the interface of your class
> explicitly.
You are right! The example tries to be as minimal as possible and
tries to make clear that we get the copy operations deleted by
default, but probably a comment would be nice to say so explicitly
(like the comment inside Widget), especially since it is meant for
beginners.
I partially agree with you, for some classes I typically
delete/default/implement all the special functions (basically the RAII
ones), but for the general case, it is a matter of taste: some people
prefer to be as succinct as possible (and if you are using the Rule of
Zero, that is the point).
>
> 2. Next thing I have to do after defining my `unique_value<>` type is to
> write a RAII wrapper for the resource. So, as others mentioned` maybe I
> would prefer a tool for doing the latter rather than only a movable ID.
> There is a proposal for adding this into the Standard Library:
>
>
> And also a reference implementation:
>
>
Very nice! Indeed, this is something that is missing from the standard
library and would be very useful.
I just took a cursory look at the proposal/code; some comments w.r.t.
unique_val (please correct me if I am wrong!)
- it seems unique_resource is not without overhead since it has to
keep a Deleter somehow and/or know whether it has to delete or not on
the destructor. In the reference implementation a unique_resource<int>
is the size of 2 ints when using an empty Deleter (i.e. one int plus 1
byte plus the padding). unique_val can be used to implement RAII
classes without any overhead [*]
- it does not provide for a default deleter, since it is not meant
to just hold a movable value (fine though, since it is not meant to --
same discussion with unique_handle).
Cheers,
Miguel
[*]
#include <iostream>
#include "scope_exit.h"
#include "unique_resource.h"
struct Deleter
{
const Deleter & operator()(int) const { return *this; }
};
int main()
{
std::experimental::unique_resource<int, Deleter> foo(1, Deleter());
std::cout << sizeof(foo) << '\n' << sizeof(int) << '\n';
return 0;
}
>
> Regards,
> &rzej;
>
> _______________________________________________
> Unsubscribe & other changes:
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
Subject: Re: [boost] [review][Fit] Review of Fit starts today : September 8 - September 17
From: Edward Diener (eldiener_at_[hidden])
Date: 2017-09-16 01:46:10
On 9/8/2017 7:02 AM, Matt Calabrese via Boost wrote:
> A formal review of the Fit library developed by Paul Fultz II starts
> today, September 8, and runs through September 17.
>
> A branch has been made for the review. You can find Fit for review at
> these links:
>
> Source:
> Docs:
I have some questions about Fit which I can not understand by reading
the docs:
1) Does Fit functionality work only with function objects as opposed to
callables in general ?
2) Is the purpose of BOOST_FIT_STATIC_FUNCTION only to create a function
object at global or namespace level ?
3) Does BOOST_FIT_STATIC_FUNCTION only accept a function object or does
it accept any callable ?
4) The documentation says that "BOOST_FIT_STATIC_LAMBDA_FUNCTION can be
used to the declare the lambda as a function". I am not sure what this
actually means since a lambda function is a function..
In general I am confused by what the requirements of Fit adaptors are.
This may be because the terms "function objects" and "functions" are
used in a way which does not follow standard C++ terminology as far as I
know it. I think I complained about this confusion in my comments
regarding Fit in the earlier review, but while the docs seem more
extensive than earlier, my confusion still remains.
>
>.
>
> ====================
>
>!
>
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
always I’ll give you an example.
What is a network socket
You might already be familiar with the concept of a socket, but let's revise it. A socket is an endpoint used for communication – usually a network one. So when you open your browser, it uses sockets to read a web page; the same applies to servers. In the Unix world everything is a file, or to be a bit more precise, the OS represents every device as a file. When you run a server, its sockets are "files". Let's spawn a listening server with netcat by running
nc -l 127.0.0.1 8080 and list its files with
lsof:
➜  ~ lsof -c nc
COMMAND   PID    USER FD   TYPE             DEVICE SIZE/OFF                NODE NAME
nc      13437 gonczor cwd   DIR                1,6      864            20758717 /Users/gonczor
nc      13437 gonczor txt   REG                1,6   203632 1152921500312810925 /usr/bin/nc
nc      13437 gonczor txt   REG                1,6  2160672 1152921500312811906 /usr/lib/dyld
nc      13437 gonczor 0u    CHR               16,0  0t22384                1159 /dev/ttys000
nc      13437 gonczor 1u    CHR               16,0  0t22384                1159 /dev/ttys000
nc      13437 gonczor 2u    CHR               16,0  0t22384                1159 /dev/ttys000
nc      13437 gonczor 3u   IPv4 0x19bdae2b0a7886e3      0t0                 TCP localhost:http-alt (LISTEN)
In the last line the file is of type IPv4 (notice that directories are also represented as files), with its node and a few other things that are not relevant now. What does writing to such a socket look like? Let's analyze a simple program in C that I've mostly copied from here.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(int argc, char *argv[])
{
    int sockfd = 0;
    const char* message = "Hello, world!\n";
    struct sockaddr_in serv_addr;

    if(argc != 3)
    {
        printf("\n Usage: %s <ip of server> <port>\n", argv[0]);
        return 1;
    }

    if((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
    {
        printf("\n Error : Could not create socket \n");
        return 1;
    }

    memset(&serv_addr, 0, sizeof(serv_addr));
    serv_addr.sin_family = AF_INET;
    serv_addr.sin_port = htons(atoi(argv[2]));

    if(inet_pton(AF_INET, argv[1], &serv_addr.sin_addr) <= 0)
    {
        printf("\n inet_pton error occured\n");
        return 1;
    }

    if(connect(sockfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0)
    {
        printf("\n Error : Connect Failed \n");
        return 1;
    }

    write(sockfd, message, strlen(message));
    close(sockfd);
    return 0;
}
At the end of the post you'll find a link to a repository with all the source code. Anyway, what is important here is the
write() call at the bottom. We can use other functions like
send(), but I wanted to show that you can use the same function for writing to files and to the socket. The server should react in the following way, receiving the data and displaying it:
➜  IPC git:(master) ✗ nc -l 127.0.0.1 8080
Hello, world!
But what if you wanted to communicate 2 processes running in your memory on the same machine? Can it be done better?
What are Unix sockets
I’m just going to shamelessly copy an answer from serverfault giving a good overall idea of what Unix sockets are:
A UNIX socket, AKA Unix Domain. So if you plan to communicate with processes on the same host, this is a better option than IP sockets.
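A quick way to see a Unix-domain socket in action, without any setup at all, is Python's socketpair(), which returns two already-connected AF_UNIX sockets in a single process (a small sketch, not from the original post):

```python
import socket

# socketpair() returns two connected AF_UNIX sockets (Unix systems only)
parent, child = socket.socketpair()

parent.sendall(b"Hello over a Unix socket")
print(child.recv(1024).decode())  # Hello over a Unix socket

parent.close()
child.close()
```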
An example use case? I remember working for a company that had on-premise servers that would run web applications written in Python. For a few reasons we wanted to use an Nginx server in front of them – better scaling when spawning new processes, better static file handling and so on. What we did was to spawn Nginx as one process and make it a reverse proxy pointing to sockets. We would then make our Python application listen on the same sockets. We can configure the server to do this by adding the following line:
listen unix:/var/run/nginx.sock;
Implementation
Let’s take a look at the client implementation:
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(int argc, char *argv[])
{
    int sockfd = 0, len = 0, socket_path_len = 0;
    const char* message = "Hello, world!\n";
    const char* socket_path = "local.sock";
    struct sockaddr_un serv_addr;

    if((sockfd = socket(AF_UNIX, SOCK_STREAM, 0)) < 0)
    {
        perror("\nCould not create socket \n");
        return 1;
    }

    serv_addr.sun_family = AF_UNIX;
    socket_path_len = strlen(socket_path);
    strncpy(serv_addr.sun_path, socket_path, socket_path_len+1);
    len = socket_path_len + 1 + sizeof(serv_addr.sun_family);
    printf("%s: %d\n", socket_path, socket_path_len);

    if (connect(sockfd, (struct sockaddr *)&serv_addr, len) == -1)
    {
        perror("\nConnection failed \n");
        return 1;
    }

    if (write(sockfd, message, strlen(message)) == -1)
    {
        perror("\nWriting data unsuccessful\n");
        return 1;
    }

    close(sockfd);
    return 0;
}
There are a few differences worth discussing. First we used
AF_UNIX instead of
AF_INET to create socket:
socket(AF_UNIX, SOCK_STREAM, 0). Second, the binding process is a bit different, as we don't need to choose both an address and a port – only an "address" in the form of a path on disk. Moreover, there is no need to translate the address string into an address structure:
The inet_pton() function converts a presentation format address (that is, printable form as held in a character string) to network format (usually a struct in_addr or some other
internal binary representation, in network byte order).
After the initialization part is done, sending data with
write() function is exactly the same. One catch I came across is correctly counting the buffer sizes – hence the
+1 in
strncpy() and
len counting.
I've also written a simple Python server that displays data received over the network:
#!/usr/bin/env python3
import socket
import sys
import os

ADDRESS = "./local.sock"

if os.path.exists(ADDRESS):
    os.unlink(ADDRESS)

print("# Creating server...")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    server.bind(ADDRESS)
    server.listen(1)
    print("# Server created")
    while True:
        connection, client = server.accept()
        if data := connection.recv(0x1_000):
            print(f"Received: {data.decode('utf-8')}")
        else:
            print("Error")
            break
finally:
    server.close()
The process here is even simpler. Again I needed to use
AF_UNIX instead of
AF_INET and point to a file instead of an address and port tuple. We can now run the server and the client:
Summary
That's all for today. As always, you can find working examples on my GitLab (remember that those will only work on Unix systems – macOS/Linux).
WinServ2008_Ch4
The flashcards below were created by user swh136 on FreezingBlue Flashcards.
ACL
Access Control List: A list of all security descriptors that have been set up for a particular object, such as for a shared folder or a shared printer.
Bridgehead Server
A domain controller at each AD site with access to a site network link, which is designated as the DC to exchange replication information. There is only one bridgehead server per site.
Container
An AD object that houses other objects, such as a tree that houses domains or a domain that houses organizational units.
Contiguous Namespace
A namespace in which every child object has a portion of its name fom its parent object.
Directory Service
A large container (dbs) of network data and resources, such as computers, printer, useraccounts, and user groups, that enables management and fast access to those resources.
Disjointed Namespace
A namespace in which the child object name does not resemble the parent object name
Distribution Group
A list of users that enables one e-mail message to be sent to all users on the list. A distribution group is not used for security and thus cannot appear in an access control list (ACL)
DC
Domain Controller: A Windows Server 03 or 08 server that contains a full copy of the AD information, is used to add a new object to AD, and replicates all changes made to it so the changes are updated on every DC in the same domain.
Domain Function Level
Refers to the Windows Server operating systems on domain controllers and the domain-specific functions they support. Depending on the functional level, one, two, or all of the following operating systems are supported: Windows 2000, 03, & 08 Servers.
Domain Local Security Group
A group that is used to manage resources--shared folders and printers, for example--in its home domain, and that is primarily used to give global groups access to those resources.
Forest
A grouping of AD trees that each have contiguous namespaces within their own domain structure, but that have disjointed namespaces between trees. The trees and their domains use the same schema and global catalog.
Forest Functional Level
A forest-wide setting that refers to the types of domain controllers in a forest, which can be any combination of Win 00, 03, 08 Servers. The level also reflects the types of AD services and functions supported.
Global Catalog
A repository for all objects and the most frequently used attributes for each object in all domains. Each forest has a single global catalog that can be replicated onto multiple servers.
Global Security Group
A group that typically contains user accounts from its home domain, and that is a member of domain local groups in the same or other domains, so as to give that global group's member accounts access to the resources defined to the domain local groups.
GUID
Globally Unique Identifier: a unique number, up to 16 characters long, that is associated with an AD object.
Kerberos Transitive Trust Relationship
A set of two-way trusts between two or more domains (or forests in a forest trust) in which Kerberos security is used.
Local Security Group
A group of user accounts that is used to manage resources on a standalone computer
Local User Profile
A desktop setup that is associated with one or more accounts to determine what startup programs are used, additional desktop icons, and other customizations. A user profile is local to the computer in which it is stored.
Mandatory User Profile
A user profile set up by the server administrator that is loaded from the server to the client each time the user logs on; changes that the user makes to the profile are not saved.
Member Server
A server on an AD managed network that is not installed to have AD.
Multimaster Replication
Win Server 03 and 08 networks can have multiple servers called DC's that store AD information and replicate it to each other. Because each DC acts as a master, replication does not stop when one DC is down and updates to AD continue, for example creating a new account.
Name Resolution
A process used to translate a computer's logical or host name into a network address, such as to a dotted decimal address associated with a computer--and vice versa.
Namespace
A logical area on a network that contains directory services and named objects, and that has the ability to perform name resolution.
Object
A network resource, such as a server or a user account, that has distinct attributes or properties, is defined in a domain, and exists in AD.
OU
Organizational Unit: A grouping of objects within a domain that provides a means to establish specific policies for governing those objects, and that enables object management to be delegated.
RODC
Read-Only Domain Controller: A domain controller that houses AD information, but cannot be updated, such as to create a new account. This specialized domain controller receives updates from regular DC's, but does not replicate to any DCs because it is read-only by design.
Roaming Profile
Desktop settings that are associated with an account so that the same settings are employed no matter which computer is used to access the account (the profile is downloaded to the client from a server).
Schema
Elements used in the definition of each object contained in AD, including the object class and its attributes.
Scope
Scope of Influence: the reach of a type of group, such as access to resources in a single domain or access to all resources in all domains in a forest. (another meaning for the term scope is the beginning through ending IP addresses defined in a DHCP server for use by DHCP clients.
Security Group
Used to assign a group of users permission to access network resources.
Site
An option in AD to interconnect IP subnets so that the server can determine the fastest route to connect clients for authentication and to connect DCs for replication of AD. Site information also enables AD to create redundant routes for DC replication.
Transitive Trust
A trust relationship between two or more domains in a tree, in which each domain has access to objects in the others.
Tree
Related domains that use a contiguous namespace, share the same schema, and have two-way transitive trust relationship.
Two-Way Trust
A domain relationship in which both domains are trusted and trusting, enabling one to have access to objects in the other.
Universal Security Group
A group that is used to provide access to resources in any domain within a forest. A common implementation is to make global groups that contain accounts members of a universal group that has access to resources.
Author: swh136
ID: 73649
Card Set: WinServ2008_Ch4
Updated: 2011-03-18 07:15:45
Windows Server Chapter
Description: Windows Server 2008 Ch 4, Palmer
Introduction to For Loop in Java
Looping is a concept in Java in which a certain set of statements is executed repeatedly as long as a condition is true. Java provides three ways of executing loops. They are
- For Loop
- While Loop
- Do while Loop
In this article, we are going to see the benefits, usage, and syntax of the for loop. A for loop in Java works through five steps, mentioned below.
- Initializing Condition – In the initialization phase we introduce the variables to be used in the program. Generally, the variables are initialized to zero or one.
- Testing Condition – In the test condition, the counter variable is checked, for example whether it is greater than or less than a certain quantity.
- Statement Execution – In this phase, the statements inside the for loop body are executed, generating the output. A print statement is often used within this phase.
- Incrementing/Decrementing Condition – In this phase, the loop control variable (the counter variable) is generally incremented by 1 to move the loop forward. It can also be decremented by 1 if the condition of the program demands so.
- Terminating the Loop – When the condition is no longer satisfied in the testing phase, the loop terminates and doesn't execute anymore.
The for loop is an entry-controlled loop, as the condition is checked prior to the execution of the statements.
The syntax of a for loop in a Java program is as follows.
Syntax
for (initialization condition; testing condition;
increment/decrement)
{
statement(s) or print statement
}
Flowchart: (diagram of the for loop's control flow omitted)
Examples of For Loop in Java
Example #1
In the first example, we are going to generate the first 10 numbers in a Java program using for loop. The sample code is given below as well as the output.
The name of the class is forLoopDemo. There are three phases in the loop statement. It runs from 1 to 10 generating all the natural numbers in between.
class forLoopDemo
{
    public static void main(String args[])
    {
        // for loop begins when x=1
        // and runs till x <= 10
        System.out.println("OUTPUT OF THE FIRST 10 NATURAL NUMBERS");
        for (int x = 1; x <= 10; x++)
            System.out.println(x);
    }
}
Output:
Example #2
After the first example, we move on to the second example where we introduce an array and print certain elements in the array. The syntax for printing the elements in the array is as follows.
Syntax
for (T element:Collection obj/array)
{
statement(s)
}
This form is also known as the enhanced for loop. The sample code, as well as the output, is shown below; the equivalent simple for loop is included as a comment in the code.
// Java program to illustrate enhanced for loop
public class EnhancedForLoop
{
    public static void main(String args[])
    {
        String array[] = {"Ron", "Harry", "Hermoine"};
        // enhanced for loop
        for (String x : array)
        {
            System.out.println(x);
        }
        /* equivalent simple for loop
        for (int i = 0; i < array.length; i++)
        {
            System.out.println(array[i]);
        }
        */
    }
}
Output:
Example #3
In example 3, we are going to look at an infinite for loop. An infinite loop is one that runs without stopping, and it is one of the pitfalls of using loops. An infinite loop can be created deliberately, but in most cases it is created by mistake. In the code below, one infinite loop is created because the test condition can never become false, and another because the update statement is not provided. The sample code, as well as the output, is shown below.
//Java program to illustrate various pitfalls.
public class LooppitfallsDemo
{
public static void main(String[] args)
{
// infinite loop because condition is not apt
// condition should have been i>0.
for (int i = 5; i != 0; i -= 2)
{
System.out.println(i);
}
int x = 5;
// infinite loop because update statement
// is not provided.
while (x == 5)
{
System.out.println("In the loop");
}
}
}
Output:
The sample output is shown above as well as the running of the Java virtual machine. The Java virtual machine runs indefinitely and it doesn’t stop. The JVM can be stopped by right-clicking on the JVM icon as shown and then stopping it. Also, the shortcut is shown which is Control + Shift + R.
Example #4
In example 4, we are going to see another application of the for loop, which is the nested for loop. A nested for loop means a for loop within a for loop, i.e. two for loops, one inside the other. They are generally used to print complex patterns on the Java platform. An example of a nested for loop is shown below.
Here the class name is PyramidExample. Then main() is declared. After that, two loop control variables are declared: "i" and "j". A "*" is printed in the inner loop, and a new line is printed after each row so that the pyramid structure is maintained. In this code, the outer loop runs 5 times; by increasing the upper bound of the "i" loop control variable we can make the pyramid bigger.
public class PyramidExample {
public static void main(String[] args) {
for(int i=1;i<=5;i++){
for(int j=1;j<=i;j++){
System.out.print("* ");
}
System.out.println();//new line
}
}
}
Output:
Example #5
In this example, we are going to see how a for loop goes through each and every element of an array and prints them.
In the code below the class name is GFG. The package java.io.* is imported here. Also, throws IOException is declared on main(), which propagates any I/O exception arising in the code. ar.length returns the length of the array. The variable x stores the element at the "i"th position, which is then printed. This code is one of the easiest ways of showing how to access array elements using a for loop.
// Java program to iterate over an array
// using for loop
import java.io.*;
class GFG {
public static void main(String args[]) throws IOException
{
int ar[] = { 1, 2, 3, 4, 5, 6, 7, 8 };
int i, x;
// iterating over an array
for (i = 0; i < ar.length; i++) {
// accessing each element of array
x = ar[i];
System.out.print(x + " ");
}
}
}
Output:
Example #6
In this example, we are going to see whether a number is a palindrome or not. In this also, a for loop is used. A palindrome number is one which when reversed represents the same number.
Examples are 121, 1331, 4334, etc.
The code and output are given below:
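A minimal version of such a check using a for loop (the class and variable names here are illustrative, not from the original listing):

```java
public class PalindromeCheck {
    public static void main(String[] args) {
        int number = 121;   // number to test
        int reversed = 0;
        // build the reversed number digit by digit
        for (int n = number; n != 0; n /= 10) {
            reversed = reversed * 10 + n % 10;
        }
        if (number == reversed) {
            System.out.println(number + " is a palindrome");
        } else {
            System.out.println(number + " is not a palindrome");
        }
    }
}
```

Running it with number = 121 prints "121 is a palindrome"; changing number to 123 prints the negative case.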
Conclusion – For Loop in Java
In this article, we see how a for loop is used in many cases. The condition is checked at the beginning of the loop and then if the condition is satisfied then it is used in the remaining part of the loop. It is very similar to a while loop which is also an entry- controlled loop. It is in contrast to the do-while loop in which the condition is checked at the exit of the loop.
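This difference can be seen in a small sketch (the class name is illustrative): a for loop whose condition is false from the start never executes its body, while a do-while executes its body once before checking.

```java
public class EntryVsExit {
    public static void main(String[] args) {
        // entry-controlled: the condition is checked first, so the body never runs
        for (int i = 0; i < 0; i++) {
            System.out.println("for body");
        }

        // exit-controlled: the body runs once before the condition is checked
        int j = 0;
        do {
            System.out.println("do-while body");
        } while (j < 0);
    }
}
```

This prints only "do-while body".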
For loops are not only used in Java; they are also used in C, C++, Python, and many other programming languages. They are commonly used to print patterns, in menu-driven programs, to check the behavior of a number, and much more.
Recommended Articles
This is a guide to For Loop in Java. Here we discuss the introduction to the for loop in Java and its steps (initializing condition, testing condition, and statement execution) along with some sample code.
Hello everyone! In today’s article, we’ll be taking a look at the gzip module in Python.
This module gives us an easy way to deal with gzip files (
.gz). This works very similarly to the Linux utility commands
gzip and
gunzip.
Let’s look at how we can use this module effectively, using some illustrative examples!
Using the gzip Module in Python
This module provides us with high-level functions such as
open(),
compress() and
decompress(), for quickly dealing with these file extensions.
Essentially, this will be simply opening a file!
To import this module, you need the below statement:
import gzip
There is no need to pip install this module since it is a part of the standard library! Let’s get started with dealing with some gzip files.
Writing to a compressed file
We can use the
gzip.open() method to directly open
.gz files and write to these compressed files!
import gzip
import os
import io

name = 'sample.txt.gz'

with gzip.open(name, 'wb') as output:
    # We cannot directly write Python objects like strings!
    # We must first encode them to bytes, here via io.TextIOWrapper()
    with io.TextIOWrapper(output, encoding='utf-8') as encode:
        encode.write('This is a sample text')

# Let's print the updated file stats now
print(f"The file {name} now contains {os.stat(name).st_size} bytes")
Here, note that we cannot directly write Python objects like strings!
We must first convert them into a bytes format using
io.TextIOWrapper() and then write it using this wrapper function. That’s why we open the file in binary-write mode (
wb).
If you run the program, you’ll get the below output.
Output
The file sample.txt.gz now contains 57 bytes
Also, you will observe that the file
sample.txt.gz has been created in your current directory. Alright, so we've successfully written to this compressed file.
Let's now try to decompress it and read its contents.
Reading Compressed Data from the gzip file
Now, similar to how we wrote through the wrapper, we can also
read() through the same kind of wrapper.
import gzip
import os
import io

name = 'sample.txt.gz'

with gzip.open(name, 'rb') as ip:
    with io.TextIOWrapper(ip, encoding='utf-8') as decoder:
        # Let's read the content using read()
        content = decoder.read()
        print(content)
Output
This is a sample text
Indeed, we were able to get back the same text we wrote initially!
Compressing Data
Another useful feature of this module is that we can effectively compress data using
gzip.
If we have a lot of byte content as input, we can use the
gzip.compress() function to compress it.
import gzip

ip = b"This is a large wall of text. This is also from AskPython"
out = gzip.compress(ip)
In this case, the binary string will be compressed using
gzip.compress.
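Compression with gzip is lossless, which we can verify with a small sketch that round-trips the bytes through gzip.decompress():

```python
import gzip

ip = b"This is a large wall of text. This is also from AskPython"

# compress and then restore the original bytes
compressed = gzip.compress(ip)
restored = gzip.decompress(compressed)

print(restored == ip)  # True
```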
Conclusion
In this article, we learned how we can use the gzip module in Python to read and write
.gz files.
References
- Python gzip module Documentation
- JournalDev article on Python gzip module | https://www.askpython.com/python-modules/gzip-module-in-python | CC-MAIN-2021-10 | refinedweb | 518 | 68.67 |
Hi ..I am Sakthi.. - Java Beginners
Hi ..I am Sakthi.. can u tell me Some of the packages n Sub... that is available in java and also starts with javax. package
HEMAL RAJYAGURU Hi friend,
package javacode;
import javax.swing.*;
import
hi - Java Beginners
hi hi sir, i am entering the 2 dates in the jtable,i want to difference between that dates,plz provide the suitable example sir
Hi.../beginners/DateDifferent.shtml I need some help I've got my java code and am having difficulty to spot what errors there are is someone able to help
import java.util.Scanner;
public class Post {
public static void main(String[] args) {
Scanner sc
hi - Java Beginners
information.
Thanks.
Hi...
For read more information Hi.... let me know the difference between object and instance
Hi... - Java Beginners
Hi... Hi friends,
I hv two jsp page one is aa.jsp & bb.jsp
I want to display bb.jsp within aa.jsp
i am write this here but this not working
Upload Record
please tell me Hi Friend... - Java Beginners
Hi Friend... import java.util.*; // PACKAGES
WHAT DOES IT MEAN...=(IpImage)obj; // CREATION OF OBJECT ?
IpImage ip2=(IpImage)vvv.elementAt(i); // WHAT is successfully then i face problem in adding two value please write the code
HI - Java Beginners
case & also i do subtraction & search dialognal element in this. Hi...HI how i make a program if i take 2 dimensional array in same...];
for (int i=0; i
hi - Java Beginners
hi hi sir,i am almost complete my project,i want to create a batch file for my project,how to create a batch file for my project,plz help me or plz provide me with some examples
ThanQ
hi - Java Beginners
hi hi sir sir, i am using jtable to add the records to the database by using submit button ( i am using submit 4 add records 2 database sir) ,if i want... my new record,when i am try 2 enter the record into my jtable,how to resolve
hi - Java Beginners
to be declared final
i want to be use this variable in another class,what i am do sir,plz tell me.if i am declare a variable is a final,then there is no way to change this,plz tell me for my problem Hi Friend,
Declare
hi - Java Beginners
Sorting String Looking for an example to sort string in Java. ...;)); writer.write("Before shorting :\n"); for(int i=0; i < 5 ; i++) { writer.write(words[i] + " "); } java.util.Arrays.sort(words
,good afternoon,
i want to add a row in jtable when i am pressing the enter key,and that row is available to insert the data plz give the program sir,urgent
Thank u Hi Friend,
Try Friends,
I want to write code in java for change...,con_pwd
Thanks Hi Friend,
Create a database table login[id...=con.createStatement();
int i=st.executeUpdate("update login set password want to display two calendar in my form... is successfully but date of birth is not why...
Hi Friend,
Try... vCalHeader;
var vCalData;
var vCalTime;
var i;
var j;
var SelectStr
hi hi sir,when i am enter a one value in jtextfield the related... phone no sir Hi Friend,
Try the following code:
import... = true;
}else if(code==KeyEvent.VK_RIGHT) {
for(int i=0
hi - Java Beginners
hi hi sir,when i am add a jtable record to the database by using the submit button,then nullpointer exception is arised,plz see this program...){
countoftable=table.getRowCount();
for(int i
hi - Java Beginners
) with column names,if i am using this table type table(rowno,colno) how i am set.../components/table.html
Best Regards. Hi Friend,
Try the following... st[]={"Id","Name","Address","ContactNo","Email"};
for(int i=0;i
hi - Java Beginners
(){// Final Method
System.out.println("I am Final Method");
}// End Of Final Method
,plzzzzzzzzzzzzzzzzzzzzzzzzzzzz,
i am declare the final for jtextfields
Hi...hi
i am declare the final for jtextfields
package Welcome;
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
import
Hi...doubt on Packages - Java Beginners
Hi...doubt on Packages Does import.javax.mail.* is already Existing Package in java..
I have downloaded one program on Password Authentication... ..Explain me. Hi friend,
Package javax.mail
The Java Mail API allows
Hi Friend..IS IT POSSIBLE? - Java Beginners
Hi Friend..IS IT POSSIBLE? Hi Friend...Thank u for ur valuable response..
IS IT POSSIBLE FOR US?
I need to display the total Number od days by Each month...
WAIT I WILL SHOW THE OUTPUT
Hi ..doubt on DATE - Java Beginners
Hi ..doubt on DATE Hi Friend...Thank u for ur valuable response..
IS IT POSSIBLE FOR US?
I need to display the total Number od days by Each month...
WAIT I WILL SHOW THE OUTPUT:
---------------------------------
ENTER
Hi..Date Program - Java Beginners
Hi..Date Program Hi Friend...thank for ur Valuable information...
Its displaying the total number of days..
but i WANT TO DISPLAY THE NUMBER OF DAYS BY EACH MONTH...
Hi friend,
Code to solve the problem
Hi .Again me.. - Java Beginners
://
Thanks. I am sending running code...Hi .Again me.. Hi Friend......
can u pls send me some code......
REsponse me.. Hi friend,
import java.io.*;
import java.awt.
Hi..Again Doubt .. - Java Beginners
Hi..Again Doubt .. Thank u for ur Very Good Response...Really great..
i have completed that..
If i click the RadioButton,,ActionListenr should get... WORRY WE'ILL ROCK ");
}
}
}
Hi
Can u send your MainMenu.java
Hi
Hi Hi this is really good example to beginners who is learning struts2.0
thanks
I wonder - Java Beginners
I wonder Write two separate Java?s class definition where the first one is a class Health Convertor which has at least four main members:
i... the defined methods
Hi Friend,
Try the following code:
import
Hi..How to Display Days by month - Java Beginners
Hi..How to Display Days by month Hi Friend....
I have a doubt... feb 28 days
..
Like that ..
Can u plz Guide me..
Monday i have teach to students....
..Thank u ..
Hi Prabhu g
I am sending
Hi Friend ..Doubt on Exceptions - Java Beginners
Hi Friend ..Doubt on Exceptions Hi Friend...
Can u please send some Example program for Exceptions..
I want program for ArrayIndexOutOfbounds
OverFlow Exception..
Thanks...
Sakthi Hi friend,
Code
hi roseindia - Java Beginners
hi roseindia what is java? Java is a platform independent.... Object Oriented Programming structre(OOPS) concepts are followed in JAVA as similar to C++. But JAVA has additional feature of Database connectivity, Applets
hi, - Java Beginners
hi, what is the need for going to java.how java differ form .net.What is the advantage over .net.Is there any disadvantage in java Hi Friend,
Difference between java and .net:
1)Java is developed by Sun
Hi Check this.... - Java Beginners
Hi Check this.... Hi Sakthi here..
Run This Code..
Hi sakthi
Your code is not visible here, can u send again please..
Thanks... between two dates..
I need to display the number of days by Each Month Separately...
Reply me ....
Thanku
Hi friend,
Code to solve
hi roseindia - Java Beginners
hi roseindia what is class? Hi Deepthi,
Whatever we can see in this world all the things are a object. And all the objects... information about class with example at:
hi - Java Beginners
; Hi anjali,
class ploy {
public void ployTest(int x...);
}
}
-------------------------------------------------
Read for more information.
Thanks
Java I/O - Java Beginners
Creating Directory Java I/O Hi, I wanted to know how to create a directory in Java I/O? Hi, Creating directory with the help of Java Program is not that difficult, go through the given link for Java Example Codehttp
Hi da SAKTHI ..check thiz - Java Beginners
/
Thanks
Hi friend,
I saw your code and modified. What you want...Hi da SAKTHI ..check thiz package bio;//LEAVE IT
import java.lang.... ");
}
}
}
Hi friend,
Plz give details of "MainMenu.java
Hi ...CHECK - Java Beginners
Hi ...CHECK Hi Da..sakthi Here
RUN THIS CODE
-----
package bio;
import java.lang.*;
import java.awt.*;
import java.awt.Color;
import... ");
Screen.showMessage(" DON WORRY WE'ILL ROCK ");
}
}
}
Hi
Hi - Java Beginners what are access modifiers available in java
logic for prime number Logic for prime number in Java. HTML Tutorials develop a online bit by bit examination process as part of my project in this i am stuck at how to store multiple choice questions options and correct option for the question.this is the first project i am doing
hi
hi how i get answer of my question which is asked by me for few minutes ago.....rply
hi
hi what are the steps mandatory to develop a simple java program?
what is default value for int type of local variable?
what are the keywords available in simple HelloWorld program?
Class is a blueprint of similiar objects(True
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/84245 | CC-MAIN-2015-22 | refinedweb | 1,496 | 69.07 |
"Golden.
Recursive
The first obvious idea is to use recursion and calculate all paths from top to down. When we down to one level, then all below available cells are the new sub-triangle and we can start our function one more time for the new triangle. And so on until we reach the bottom. Simple and obviously.
def golden_pyramid(triangle, row=0, column=0, total=0): global count count += 1 if row == len(triangle) - 1: return total + triangle[row][column] return max(golden_pyramid(triangle, row + 1, column, total + triangle[row][column]), golden_pyramid(triangle, row + 1, column + 1, total + triangle[row][column]))
But as we can see for the first level we run our function 2 times, then 4, 8, 16.... So as result we will get 2N complexity and for the hundred-storied pyramid we need ≈ 1030 function calls. Hm...
Dynamic Programming
But what if we will use dynamic programming method and break our problem to small pieces, which can be merged then. For simplicity look at the triangle upside down. Now look at the second (from new top) level. For each cell we can choose what is the best possible for this small three element triangle. Choose the best from the first level (early bottom), summarize with current cell value and write it. Now we have the new shorter triangle and can repeat this operation again and again. As result we have (N-1)+(N-2)+...2+1 operations and this is N2 complexity.
def golden_pyramid_d(triangle): tr = [row[:] for row in triangle] # copy for i in range(len(tr) - 2, -1, -1): for j in range(i + 1): tr[i][j] += max(tr[i + 1][j], tr[i + 1][j + 1]) return tr[0][0]
Checkio Player Solutions
@gyahun_dash made the interesting realisation of dynamic programming method in "DP" solution. He used "reduce" to work at rows by pairs with accumulating and "map" to process each level.
from functools import reduce def sum_triangle(top, left, right): return top + max(left, right) def integrate(lowerline, upperline): return list(map(sum_triangle, upperline, lowerline, lowerline[1:])) def count_gold(pyramid): return reduce(integrate, reversed(pyramid)).pop()
@evoynov used binary numbers to define all possible paths as combinations of 1 and 0 in "Binaries" solution. But this solution has the complexity as recursive method that was described early.
def count_gold(p): path = 1 << len(p) res = 0 while bin(path).count("1") != len(p) + 1: s = ind = 0 for row in range(len(p)): ind += 1 if row > 0 and bin(path)[3:][row] == "1" else 0 s += p[row][ind] res = max(res, s) path += 1 return res
And just for final little brain breaking puzzle (don't worry it's not too hard) with @nickie's in "Functional DP" one-liner which is only formally two-liners. Of course this is solution from "Creative" category and don't think that @nickie writes this for production. Just for fun.
count_gold=lambda p:__import__("functools").reduce(lambda D,r:[x+max(D[j],D[j+1]) for j,x in enumerate(r)],p[-2::-1],list(p[-1]))[0]
That's all folks. Propose your ideas for the next articles. | https://py.checkio.org/blog/golden-pyramid/ | CC-MAIN-2018-09 | refinedweb | 528 | 53.61 |
You’ve heard about Stimulus in the DHH press tour. You’ve read the Stimulus Handbook. How about a more complicated example than what you’ve seen? Here’s a tutorial on using Stimulus to drag and drop items around in a list.
Let’s start with our simple ordered list in html. Notice that each item has a id associated with it, so that we’ll be able to keep track of which item needs to be moved. We also need to make each item draggable, so that the correct javascript events are sent to our controller.
<ol> <li draggable="true" data-Take out trash</li> <li draggable="true" data-Check Email</li> <li draggable="true" data-Subscribe to mailing list</li> <li draggable="true" data-Research Stimulus</li> </ol>
Now, we’ll need to create the stimulus controller that is just going to handle all the events. We’ll name it drag-item, and if we are using Rails and webpacker, it would go in
app/javascript/controllers/drag_item_controller.js:
import { Controller } from "stimulus" export default class extends Controller { }
Go ahead and hook up the controller to your html:
<ol data-
We must now listen for a couple drag events, so add those to the html. Notice how we can add multiple actions just by separating each one with a space.
<ol data-
Let’s go back to our controller and add those actions. First, we’ll keep track of the
item-todo-id when we start dragging so that we know which todo to move when we end dragging:
dragstart(event) { event.dataTransfer.setData("application/drag-key", event.target.getAttribute("data-todo-id")) event.dataTransfer.effectAllowed = "move" }
We’ll prevent the default action when dragging an item. This makes sure the drag operation isn’t handled by the browser itself:
dragover(event) { event.preventDefault() return true } dragenter(event) { event.preventDefault() }
On the drop event, we get the element that we were dragging based on it’s
data-todo-id, and then in order for the drop to visually make sense, we see where the dragged element compares to where it was dropped, and then insert it before or after the drop target depending on the result.
drop(event) { var data = event.dataTransfer.getData("application/drag-key") const dropTarget = event.target const draggedItem = this.element.querySelector(`[data-todo-id='${data}']`); const positionComparison = dropTarget.compareDocumentPosition(draggedItem) if ( positionComparison & 4) { event.target.insertAdjacentElement('beforebegin', draggedItem); } else if ( positionComparison & 2) { event.target.insertAdjacentElement('afterend', draggedItem); } event.preventDefault() }
This is where we might want to post our action to the server so it can properly handle the movement, but there is no need to do anything at this moment, so we’ll leave it blank.
dragend(event) { }
Now you have can drag and drop items in a list, using Stimulus as the javascript library.
Want To Learn More?
Try out some more of my Stimulus.js Tutorials. | https://johnbeatty.co/2018/03/09/stimulus-js-tutorial-how-do-i-drag-and-drop-items-in-a-list/ | CC-MAIN-2019-35 | refinedweb | 485 | 56.66 |
Hey,
Recently, we (
concourse) were having trouble coming up with an easy way to
answer the question of whether
runc was taking too long to perform a
particular action.
Given that each action took form of executing the
runc binary, it seemed to
make sense to time those executions somehow.
For instance, it'd be great to get information like
TIME(s) ARGV 0.111638 /var/gdn/assets/linux/bin/runc --root /run/runc --debug --... 50.039328 /var/gdn/assets/linux/bin/runc --root /run/runc events d7d... 0.209312 /var/gdn/assets/linux/bin/runc --root /run/runc --debug --... 0.206789 /var/gdn/assets/linux/bin/runc --root /run/runc --debug --... 0.195814 /var/gdn/assets/linux/bin/runc --root /run/runc --debug --... 265.008811 /var/gdn/assets/linux/bin/runc --root /run/runc events 984... 75.007661 /var/gdn/assets/linux/bin/runc --root /run/runc events 6df... 95.007827 /var/gdn/assets/linux/bin/runc --root /run/runc events e26... 185.009876 /var/gdn/assets/linux/bin/runc --root /run/runc events d2a... 150.009166 /var/gdn/assets/linux/bin/runc --root /run/runc events 6ad...
While we could do that by patching the code that performs those process
executions (
gdn), that'd mean having to create a separate version of it,
deploying across the fleet, and then collecting the data aftwards.
Definitely doable as we have done something similar when trying to understand
where
gdn spent most of its time during container creation (see container
creation time dissected for more):
But, it turns out that there's a way of doing that without having to rebuild and
redeploy
concourse.
observing process creation time
Knowing that with
ebpf we can hook into
ftrace‘s instrumentation and capture
details about code that's being executed in the kernel (by being there with our
very small specifically crafted piece of code), we let a userspace program know
the details of a program that's being started (via
execve(2), and then when
it ends (via
_exit(2), either voluntarily or not), know the exit status and
how long it took.
For instance, consider the following “hello world” program:
If we trace syscalls that leads to the execution of our binary, we can see that
at the start,
execve(2) takes the responsability of bringing to the kernel
the information that tells what file is supposed to be loaded, with which
arguments and environmnt variables.
At the end of its execution
$ strace -f ./hello execve("./hello", ["./hello"], 0x7ffc2316f938 /* 19 vars */) = 0 brk(NULL) = 0x220a000 brk(0x220b1c0) = 0x220b1c0 arch_prctl(ARCH_SET_FS, 0x220a880) = 0 uname({sysname="Linux", nodename="disco", ...}) = 0 readlink("/proc/self/exe", "/home/ubuntu/hello/hello", 4096) = 24 brk(0x222c1c0) = 0x222c1c0 brk(0x222d000) = 0x222d000 fstat(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 2), ...}) = 0 write(1, "hello\n", 6hello ) = 6 exit_group(0) = ? +++ exited with 0 +++
Naturally, there's no
exit_group(2) in our code - that's because the
implementation of the C standard (
glibc in my case here) is doing that for us
(see the end of the article to see where that happens in
glibc).
Regardless, what matters is that if we can time how long it took between the
execve call, and the
exit_group for a given
pid, we're able to measure how
long a given program took to run1.
execve(...) ts=100-. | | time=400 | exit() ts=500-*
It turns out that with
bcc, we can do that with a small mix of C and Python.
1: yeah, there are edge cases, e.g., creating a process from the
previous binary and not
execveing, etc.
tracing executions with bcc
The idea of tracing executions is not new - in the set of tools that
bcc has by default,
execsnoop does pretty much that: for every
execve,
it reports who did that (and some other info).
In our case, we wanted to go a little bit further: wait until an
exit happens,
and then time that.
The approach looked like the following:
instrument execve when triggered: submit data to userspace, telling who did it, etc instrument exit when triggered: submit data to userspace, bringing exit code, etc
That means that while we capture the data (e.g.,
pid,
ppid,
exit code,
etc) from within the Kernel, we let our code in userspace deal with the logic of
deciding what to do with that information:
- should I care about this
execve?
- is this an
exitfrom a process that I was already tracing?
As the events come from the kernel first, let's see how that instrumentation looks like.
in the kernel
Leveraging
kretprobes, when instrumenting these functions we're able to capture
not only all of those arguments used to call it, but also value returned by that
specific call.
my_cool_func+start <--kprobe => our_method(args ...) | ... | my_cool_func+end <--kretprobe => our_method(retval + args ...)
For instance, we can ignore those
execves that failed, while still moving on
otherwise.
// instrument the `execve` syscall (when it returns) // int kr__sys_execve(struct pt_regs* ctx, const char __user* filename, const char __user* const __user* __argv, const char __user* const __user* __envp) { int ret = PT_REGS_RC(ctx); if (ret != 0) { // did the execve fail? return 0; } __submit_start(ctx); return 0; }
For
_exit, using a
kprobe suffice as all that we need is the
code being
passed to it.
// instrument `_exit` syscall // int k__do_exit(struct pt_regs* ctx, long code) { __submit_finish(ctx, code); return 0; }
Then, during
__submit_finish and
__submit_start, we can then do the whole
job of filling the datastructure to be sent to userspace with the right fields,
and then actually doing so (submitting to the perf buffer).
For instance,
__submit_start:
static inline void __submit_start(struct pt_regs* ctx) { struct data_t data = {}; data.type = EVENT_START; __submit(ctx, &data); } static inline void __submit_finish(struct pt_regs* ctx, int code) { struct data_t data = {}; data.type = EVENT_FINISH; data.exitcode = code; __submit(ctx, &data); }
And then, finally, on
__submit, adding the final common fields (like,
timestamp when this was called), and then submitting to the perf buffer:
static inline void __submit(struct pt_regs* ctx, struct data_t* data) { struct task_struct* task; task = (struct task_struct*)bpf_get_current_task(); data->ts = bpf_ktime_get_ns(); data->pid = task->tgid; data->ppid = task->real_parent->tgid; bpf_get_current_comm(&data->comm, sizeof(data->comm)); events.perf_submit(ctx, data, sizeof(*data)); }
in userspace
Once everything has been instrumented, we can receive the events from the kernel by waiting for them:
program["events"].open_perf_buffer(handle_events) while 1: try: program.perf_buffer_poll() except KeyboardInterrupt: exit()
Which then, on
handle_events, can handle each event being submitted:
With that all set up (see
trace.py), we can then get that information:
sudo ./trace.py PID PPID CODE TIME(s) ARGV 8168 2921 0 0.002005 ls --color=auto 8169 2921 0 1.001772 sleep 1
tracing process execution in a cgroup
As tracing the whole system can generate quite a bit of noise, and most of what we wanted to keep track of was contained in a non-root cgroup, it made sense to allow tracing process executed within a given cgroup.
Each task in the system has a reference-counted pointer to a css_set.
(extra) where is glibc calling exit_group?
If you're wondering how one goes from a
return 1 in
int main(argc,argv) to
an
exit_group() using that
1 returned, here's a very quick walk through
glibc.
STATIC int LIBC_START_MAIN(int (*main)(int, char**, char** MAIN_AUXVEC_DECL), int argc, char** argv, __typeof(main) init, void (*fini)(void), void (*rtld_fini)(void), void* stack_end) { /* Result of the 'main' function. */ int result; /* Nothing fancy, just call the function. */ result = main(argc, argv, __environ MAIN_AUXVEC_PARAM); exit(result); }
void exit (int status) { __run_exit_handlers (status, &__exit_funcs, true, true); }
/* Call all functions registered with `atexit' and `on_exit', in the reverse of the order in which they were registered perform stdio cleanup, and terminate program execution with STATUS. */ void attribute_hidden __run_exit_handlers(int status, struct exit_function_list** listp, bool run_list_atexit, bool run_dtors) { // ... run each handler, then, finally _exit(status); }
void _exit(int status) { while (1) { #ifdef __NR_exit_group INLINE_SYSCALL(exit_group, 1, status); #endif INLINE_SYSCALL(exit, 1, status); #ifdef ABORT_INSTRUCTION ABORT_INSTRUCTION; #endif } } | https://ops.tips/notes/linux-system-wide-process-execution-time/ | CC-MAIN-2020-05 | refinedweb | 1,329 | 51.58 |
11 November 2011 11:42 [Source: ICIS news]
BUCHAREST (ICIS)--Rompetrol’s petrochemicals business reported a net loss of $2.53m (€1.87m) for the first nine months of 2011, compared with a net loss of $454,460 in the same period last year, the company said on Friday.
Rompetrol Petrochemicals attributed its results to operating lower capacity rates and “to the bad economic context following the military conflict in ?xml:namespace>
The company’s turnover increased by 36% year on year to $292m.
In the first nine months, Rompetrol Petrochemicals processed 57% more ethylene year on year. It processed 4% more propylene compared with the corresponding period last year, the company said.
The company is investing $18m to raise the capacity of its high density polyethylene (HDPE) plant at Navodari in eastern
The company also has a 60,000 tonne/year low density polyethylene (LDPE) plant at Navodari.
Rompetrol Petrochemicals is part of oil refiner Rompetrol Group, which is owned by Kazakhstan's KazMunaiGaz (KM | http://www.icis.com/Articles/2011/11/11/9507380/romanias-rompetrol-petrochemicals-reports-net-loss-of-2.53m.html | CC-MAIN-2014-49 | refinedweb | 166 | 51.78 |
Recently I started learning React for my new job. I basically went from being a coding ninja to being a coding newbie. But hey everything can be learnt so here is what I have learnt so far.
Entering JavaScript Land
Use
{}to enter JavaScript land. In Vue we use double these so I just need to remember that from now on it's ony one and if I see double
{{}}it means JavaScript land followed by an object.
<h2 style={{ color: 'black' }}>hi</h2>
This is easier to understand when there is more than one value in our object.
<h2 style={{ color: 'black', padding: '10px' }}>hi</h2>
React components are just functions that return something. Normally some html.
export default function Button(){ return <button>click me</button> }
Functions with Capital Letters
These functions need to be named with a capital letter. Which makes sense as then it is easy for the browser to distinguish between what is html and what is a React component.
<button> // html <Button /> // React Component
Using JSX
React components use JSX which basically means we can add dynamic stuff to our html. And of course you can pass props into the component
const name = 'Debbie' export default function Name(name){ return <h1> Hi {name}</h1> }
Return Statements
One thing to watch out for is the return statement. We can return things on one line but if we have more things to return and need more than one line then we must use brackets. And important to remember is that we cannot use a semicolon at the end of the line inside these brackets. If we do it will break.
export default function Hi(){ return ( <div> <h1>Hi</h1> <p>welcome to our world</p> </div> ) }
A Single Root element
Just like in Vue 2 you can't render multiple items such as the example above without wrapping it in the
<div>. However React gives you an element called a fragment so that you don't have to render a
<div>. A fragment is just like empty syntax and won't get rendered.
export default function Hi(){ return ( <> <h1>Hi</h1> <p>welcome to our world</p> </> ) }
CamelCase for JSX
With JSX we need to use camelCase and not kebab-case. That means we need to use
backgroundColor instead of
background-color. Class is also a reserved name in JavaScript so we can't use it meaning we can't say use
class for our styling and have to use
classNameinstead.
<h1 className="heading">hi</h1>
Using Dynamic Values in JSX
In vue to add a dynamic value to an image we would bind the
srcattribute by adding a colon before it
:srcbut in React we don't bind the attribute but instead the value. We do this by removing the quotes and replacing them with curly brackets
src={}. If you think about it, it actually makes sense as using curly brackets means we are entering JavaScript land.
const
As we are entering JavaScript land by using curly brackets it also means we can concatenate things.
const imgUrl = " <img src={imgURL + '/102'} /> // OR <img src={`${imgURL}/102`} />
Printing out Lists
Unlike in Vue there is no
v-forfor rendering lists. We need to do this just like we would do in any JavaScript function by using
map() or
filter().
const people = ['Debbie', 'Alex', 'Nat'] export default function List(){ const peopleList = people.map(person => <li key={person}>{person}</li> ) return <ul>{peopleList}</ul> }
Filtering out Items. Say we wanted to filter out which people were working with which framework. We first need to filter over our array of people to get a new array of only those with 'React' in this case and then map over that new array of 'ReactPeople' to print out each person's name.
const people = [{ name: 'Debbie', framework: 'React', }, { name: 'Alex', framework: 'Vue', }, { name: 'Nat', framework: 'React', }]; const reactPeople = people.filter(person => person.framework === 'React' ); export default function List(){ const peopleList = reactPeople.map(person => <li key={person.name}>{person.name}</li> ) return <ul>{peopleList}</ul> }
Remember when working with arrow functions there is no return however if you add
{}then you must add a return statement. Using
{}also allows you to return more than just one line of code.
Just like in Vue adding a
keyto lists is important. The only difference is that instead of
:key="name"in React it is
key={name}. Make sure to keep your keys unique. If you don't set a key React will use the index but if the position changes due to reordering then you will run into some issues.
If you need to pass in anything to a Fragment such as a key value then you need to actually write
<Fragment key={name}> instead of just using
<div> if you prefer.
<>. You can also use a
Exporting Components
There are 2 ways to export a component. Using named exports or default exports. With default export you can only have one function to export but with named exports you can have more than one. The thing to remember is how you export it affects how you import it. With named exports you must use the same name when importing it. With default exports you can use any name you like.
Named Exports
export function Button(){ return <button>click me</button> }
import { Button } from './button'
Default Exports
export default function Button(){ return <button>click me</button> }
import Button from './button' // or import MyButton from './button'
You can mix having a default export and a named export in the one file but in general this can get confusing so it is best to only stick to either 2 named exports or 2 files with default exports.
Working with Props
Props are used so parent components can pass information to child components. The props you can pass to html elements are predefined to conform with HTML standards, for example passing an
src prop to an
<img>element. But you can pass any props you like to your own components.
When passing in props remember that
({ color, height }) is just using destructuring:
function MyImage({ color, height }) { return //... }
By destructuring you don't have to do something like this:
function MyImage(props) { let color = props.color; let height = props.height; return //.... }
You can specify a default property for a prop using destructuring by adding an
=.
function MyImage({ color, height = 50 }) { return //... }
Passing lots of props can get very repetitive and this is where the spread operator comes in
{ ...props }
function Card({ color, height, width, isActive }) { return ( div <Avatar color={color} height={height} width={width} isActive={isActive} /> </div> ); }
Here we just pass in all the props and then use the spread operator to spread them so that the
Avatar component has access to all the props without having to write them out individually.
function Card(props) { return ( <div className="card"> <Avatar {...props} /> </div> ); }
React's Children
This is similar to what we call slots in Vue. We pass in the children prop to the function and then use it so that anything rendered inside the Card component will be rendered.
function Card({ children }) { return ( <div className="card"> {children} </div> ); }
We can then easily use this component to render different components inside.
Cruises:
export default function Cruises() { return ( <Card> <Cruises /> </Card> ); }
Flights:
export default function Cruises() { return ( <Card> <Flights /> </Card> ); }
Conclusion
Understanding how React works is really helpful as well as being able to compare the differences to Vue. Overall React is really just JavaScript and Vue just adds some magic to make things much easier. The more you work with React the easier it gets. Still so much more to learn though. This is just the beginning.
Disclaimer: I am still only learning so if you see anything wrong here then just let me know :)
Courses I am taking to enhance my React Journey:
- Kent C Dodds React a Beginners Guide
- Brian Holt Complete React v5
- Kent C Dodds Epic React
Discussion (8)
Btw in newer versions of react, using class instead of className will not throw an error and will work just the same, although it is not recommended and will throw a warning in the console. Maybe in a future version we'll be able to use it legally!
good to know. yeah it would be cool to just use it as keeps things simpler
Interesting article! With you coming from a Vue background: what do you prefer?
at the moment I prefer Vue because I can build complete applications and with React I am not there yet. When I have fully learnt it who knows. It's kinda like skiing and snowboarding. I like both, they are very different but achieve the same thing however I am a much better skier so I prefer skiing simply because I can do more. If I had the time maybe I would learn to board better and then perhaps prefer boarding or just like both the same, who knows!!
this is a great overview Debbie. Thanks !
Great but you writing a blog for React seems a bit sad haha
well always good to learn a bit of everything, you never know when you might need it
Yup was just pulling your leg 😄 anyways I am more interested in Svelte though.TC | https://practicaldev-herokuapp-com.global.ssl.fastly.net/debs_obrien/learning-react-279j | CC-MAIN-2022-21 | refinedweb | 1,540 | 70.23 |
Prerequisites.
To follow this tutorial, you should be familiar with XML and related technologies.
About the example used in this tutorial
Overview
In this tutorial, you will learn XPath by writing the presentation layer of an auction site application. You will specify the XPath expressions inside an XSLT stylesheet that's used to present XML documents containing auction items. All the files used in this tutorial are in the zip file, x-xpathTutorial.zip (see Resources for a link), including:
- XPath/AuctionItemList.xsd - an XML Schema document that defines the data format of the auction items.
- XPath/AuctionItemList.xml - an XML file that contains a list of auction items; it is the data for the example.
- XPath/AuctionItemSummary-Base.xsl - an XSLT stylesheet that defines what a Web browser will display when it loads AuctionItemList.xml; it contains the data's presentation rules.
- XPath/AuctionItemSummary-Section5.xsl - the solution in Location paths.
- XPath/AuctionItemSummary-Section6.xsl - the solution in Expressions.
- XPath/AuctionItemSummary-Section7.xsl - the solution in Function library.
AuctionItemList.xsd
AuctionItemList.xsd contains the business rules for the auction item and auction items list data, described in XML Schema language:
- An auction item list has only one root element called list, of type auctionItemList.
- An auctionItemList is composed of zero or more item elements of type auctionItem.
- An auctionItem is composed of five elements (bidIncrement, currentPrice of type price, endOfAuction, description, and sellerId) and one attribute group of type itemAttributes.
- A price is a positive decimal value with two decimal places and must have a currency attribute of type customCurrency associated with it.
- A customCurrency must be one of USD, GBP, or EUR.
- An itemAttributes group must contain one string attribute type, one string attribute id, and one boolean attribute private, which is false by default.
- A type attribute must be one of the following: Unknown, Traditional, BidOnly, FixedPrice, or IndividualOffer.
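To make these rules concrete, here is a rough sketch of how some of them might be expressed in XML Schema. This is an illustrative fragment only -- the element and type names follow the description above, but the actual AuctionItemList.xsd may organize its declarations differently, and the auctionItem details are omitted:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:

  <!-- The single root element of an auction item list -->
  <xs:element name="list" type="auctionItemList"/>

  <!-- Zero or more item elements of type auctionItem -->
  <xs:complexType name="auctionItemList">
    <xs:sequence>
      <xs:element name="item" type="auctionItem"
                  minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
  </xs:complexType>

  <!-- A decimal value with a required currency attribute -->
  <xs:complexType name="price">
    <xs:simpleContent>
      <xs:extension base="xs:decimal">
        <xs:attribute name="currency" type="customCurrency" use="required"/>
      </xs:extension>
    </xs:simpleContent>
  </xs:complexType>

  <!-- One of the three allowed currency codes -->
  <xs:simpleType name="customCurrency">
    <xs:restriction base="xs:string">
      <xs:enumeration value="USD"/>
      <xs:enumeration value="GBP"/>
      <xs:enumeration value="EUR"/>
    </xs:restriction>
  </xs:simpleType>
</xs:schema>
```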
If you want to learn more about XML Schema, see Resources for more developerWorks articles and tutorials.
AuctionItemList.xml
AuctionItemList.xml conforms to the XML schema defined in AuctionItemList.xsd and contains a list of type auctionItemList. This list contains seven items. The xsi:schemaLocation attribute of the list root element indicates that this XML document must conform to the AuctionItemList.xsd schema.
That takes care of the data format, but what about the presentation? How do you specify which XSLT stylesheet to use to display this XML document in a Web browser? This is defined in the second line of the XML document:
<?xml-stylesheet type="text/xsl" href="AuctionItemSummary-Base.xsl"?>
Here, I state that the AuctionItemSummary-Base.xsl stylesheet should be used. The data itself has been chosen so that the use of XPath can be demonstrated to show specific data properties. When no XML stylesheet document is linked to AuctionItemList.xml, then a Web browser simply shows the XML contents and it looks like the following:
Figure 1. AuctionItemList.xml
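For orientation, the skeleton of such a document looks roughly like the following. The element names and attributes come from the schema description above, but the concrete values shown here are invented for illustration and will differ from the real file; I also use xsi:noNamespaceSchemaLocation here since no target namespace is shown, so the actual file's schema reference may be written differently:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="AuctionItemSummary-Base.xsl"?>
<list xmlns:
      xsi:
  <item type="Traditional" id="1" private="false">
    <bidIncrement currency="USD">10.00</bidIncrement>
    <currentPrice currency="USD">150.00</currentPrice>
    <endOfAuction>2004-06-30T12:00:00</endOfAuction>
    <description>Antique wooden chair</description>
    <sellerId>seller42</sellerId>
  </item>
  <!-- ... six more item elements ... -->
</list>
```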
AuctionItemSummary-Base.xsl
AuctionItemSummary-Base.xsl is the XSLT stylesheet that defines the rules used by an XSLT processor to display AuctionItemList XML documents. It uses XPath expressions to find information in the XML document and display it in an HTML table. I will focus in more detail on the use of XPath in XSLT in XPath overview. Here, I describe briefly the contents of AuctionItemSummary-Base.xsl. It defines templates that are activated when XML documents are processed. Which template is activated depends on the XPath expression documented in the match attribute of the template element. For example, the following snippets, taken from AuctionItemSummary-Base.xsl, are XPath expressions:
"/"
"list"
"item"
The information that displays when a template is activated is defined by its value-of elements' select attributes. These attributes' values are also XPath expressions. For example:
"description"
"currentPrice"
"@type"
"@private"
In each section (Location paths, Expressions, and Function library), you will modify AuctionItemSummary-Base.xsl to display the information in a different way.
By this point, you should have looked at the files in a text/XML editor. Now you can open AuctionItemList.xml in your favorite Web browser to see the output generated by an XSLT processor based on the stylesheet. You should see something similar to the following:
Figure 2. The base auction item table
XPath overview
What is XPath?
The XML Path Language (XPath) is a set of syntax and semantics for referring to portions of XML documents. XPath is intended to be used by other specifications such as XSL Transformations (XSLT) and the XML Pointer Language (XPointer). To help you understand what XPath is, I will start by showing examples related to AuctionItemList.xml.
XPath expressions identify a set of nodes in an XML document. This set can contain zero or more nodes. For example, the XPath expression /list, if applied to AuctionItemList.xml, identifies one single node -- the list root element. The XPath expression /list/item identifies all the item elements.
XPath uses a notation with forward slashes (/) similar to the UNIX shell's. This is so that XPath can be used in Uniform Resource Identifiers (URIs), such as URLs. This is actually where XPath's name comes from: the use of a path notation, as in URLs.
Legal XPath expressions can include predicates. Predicates contain boolean expressions, which are tested for each node in the context node-set. If true, the node is kept in the set of nodes identified; otherwise, the node is discarded. Predicates are useful in reducing the result set. For example, the following XPath expression identifies the second item only:
/list/item[currentPrice>1000.0]
XPath expressions can refer to attributes as well as elements in an XML document. When referring to an attribute, the @ character is used. For example, the following XPath expression identifies currentPrice elements whose currency attributes contain the value EUR:
/list/item/currentPrice[@currency="EUR"]
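If you want to experiment outside a browser, Python's standard-library xml.etree.ElementTree module evaluates a useful subset of these path expressions (no arithmetic comparisons or XPath functions, but child steps and attribute predicates work). The miniature document below is invented for the sketch; it only mimics the shape of AuctionItemList.xml:

```python
import xml.etree.ElementTree as ET

# A miniature stand-in for AuctionItemList.xml: invented data, just
# enough structure to try the path expressions from this section.
doc = """\
<list>
  <item id="itemId0001" type="IndividualOffer" private="no">
    <description>Miles Smiles album, CD</description>
    <currentPrice currency="USD">12.5</currentPrice>
  </item>
  <item id="itemId0002" type="BidOnly" private="yes">
    <description>Blue Train LP record</description>
    <currentPrice currency="EUR">1500.0</currentPrice>
  </item>
</list>"""

root = ET.fromstring(doc)  # root is the list element

# ElementTree evaluates paths relative to the element you call them on,
# so './item' from the list element plays the role of /list/item.
items = root.findall('./item')
print(len(items))  # -> 2

# Attribute predicate: currentPrice elements whose currency is EUR.
eur = root.findall('./item/currentPrice[@currency="EUR"]')
print([p.text for p in eur])  # -> ['1500.0']
```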
XPath also provides functions, which can come in very handy. I'll show you these in more detail in Function library, but here is a taste of it. The XPath expression below identifies the description element of the item whose type attribute is "IndividualOffer" (and has the value 2MP digital camera):
/list/item[starts-with(@type,"Ind")]/description
Test the above XPath expressions in your favorite XML editor: Open AuctionItemList.xml and enter the expressions in the XPath evaluator to see which nodes are selected.
That's it -- you've now been introduced to XPath! So far, you've learned that XPath is a language for identifying parts of XML documents. You've seen what an XPath expression looks like and how it refers to elements and attributes inside XML documents. I've also shown you how XPath provides functions for manipulating data. However, this is just a quick overview; I will discuss all these points in more detail -- as well as more aspects of XPath -- in the remaining sections. For example, I'll examine XPath namespaces and special characters (such as // and *) and show you that not all XPath expressions have the form shown in the examples above (called abbreviated location paths).
XSLT, XLink, and XPointer
XSLT, XLink, and XPointer are all W3C standards. XSLT and XPath, along with XSL Formatting Objects (XSL-FO), form the W3C eXtensible Stylesheet Language (XSL) family of specifications. (See Resources if you want to look at these specifications.)
Presenting: XSLT uses XPath extensively for matching -- that is, testing whether a node matches a given pattern. XSLT specifies the context used by XPath. You should understand XPath if you want to work with XSLT. In About the example used in this tutorial, you saw that the AuctionItemSummary-Base.xsl stylesheet contains XPath expressions. These XPath expressions are used by XSLT to find elements that match criteria in the source document, and also to display information in the result document. XSLT also makes use of XPath functions to perform arithmetic or string manipulation operations.
Linking: XLink provides a generalization of the HTML hyperlink concept in XML. XLink defines a syntax for elements to be inserted into XML documents in order to link resources together and to describe their relationship. These links can be unidirectional, like HTML's hyperlinks, or more complex. XLink uses XPointer to locate resources.
Pointing: XPointer is an extension to XPath that provides addressing into XML documents and their internals. XPointer generalizes the notion of XPath nodes with the concept of XPointer locations, points, and ranges. XPointer also specifies the context used during XPath evaluation and provides extra functions that aren't available in XPath.
XPath is essential to the specifications mentioned above. In fact, the XPath specification explicitly states that XPath is designed to be used by XSLT and XPointer.
Here is a recap of the XML technologies I have talked about so far:
- XML: basis for other technologies (data)
- XML Schema: data format rules
- XSLT: data presentation/matching
- XLink: linking
- XPointer and XPath: addressing
XPath terminology
What is an XPath node?
XPath sees an XML document as a tree of nodes. Nodes can be of different types, such as element nodes or attribute nodes. Some types of nodes have names that consist of a nullable XML namespace URI and a local part. For example, the figure below shows the XPath representation of AuctionItemList.xml as a tree of nodes:
Figure 3. XPath's view of AuctionItemList.xml
One special type of node is the root node. An XML document contains only one such node and it is the root of the tree, containing the whole of the XML document. Note that the root node contains the root element as well as any processing, declaration, or comment nodes that appear before or after the root element. In the example, the children of the root node are:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="AuctionItemSummary-Base.xsl"?>
and
<list ...> <item ...> ... </item> <item ...> ... </item> ... </list>
No node type exists for XML declarations (such as <?xml version="1.0" encoding="UTF-8"?>) or Document Type Definitions (DTDs). It is therefore not possible to refer to such entities in XPath.
Element nodes represent every element in an XML document. Attribute nodes belong to element nodes and represent attributes in the XML document. However, attributes that start with xmlns: are represented in XPath with namespace nodes. Other types of nodes include text nodes, processing instruction nodes, and comment nodes.
Out of context?
XPath evaluates expressions relative to a context. This context is usually specified by the technologies that extend XPath, such as XSLT and XPointer. An XPath context includes a context node, context size, context position, and other context data. The context node is of most interest here. When the context node is the root node, list/item refers to the seven item elements in AuctionItemList.xml. When the context node is another node -- for example, the first item element -- list/item does not refer to anything in the XML document.
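This context-sensitivity is easy to reproduce: the same relative path returns different node-sets depending on the node it is evaluated from. A small Python sketch (the standard xml.etree.ElementTree module again, with an invented two-item document) illustrates the point:

```python
import xml.etree.ElementTree as ET

# Invented two-item document; only the shape matters here.
root = ET.fromstring(
    '<list><item id="itemId0001"/><item id="itemId0002"/></list>')

# Relative path 'item', evaluated with the list element as context node:
print(len(root.findall('item')))   # -> 2

# The same relative path, evaluated with the first item as context node:
first = root.find('item')
print(len(first.findall('item')))  # -> 0 (an item has no item children)
```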
In XSLT, the values of select attributes are XPath expressions. For example, in AuctionItemSummary-Base.xsl, the xsl:apply-templates and xsl:value-of elements have select attributes whose values (XPath expressions) are, among others, list, item, or itemId. In XSLT, the context node is the current node being evaluated. XSLT templates can be activated several times for an XML document and produce different results. With AuctionItemList.xml, the first and second templates (match="/" and match="list", respectively) are activated once, and the third template (match="item") is activated seven times (once for each item element). The first time the "item" template is activated, the context node is the first item element in the XML document ("Miles Smiles album, CD") and, for example, the value of <xsl:value-of select="@id"/> is itemId0001. The second time the template is activated, the context node is the second item element (the "Coltrane Blue Train" CD) and the value of <xsl:value-of select="@id"/> is itemId0002. Note that had I used /list/item/@id as opposed to @id for the select attribute, the value of the xsl:value-of element would have been null.
Location paths
What is a location path?
Location paths are the most useful and widely used feature of XPath. A location path is a specialization of an XPath expression (described in Expressions ). A location path identifies a set of XPath nodes relative to its context. XPath defines two syntaxes: the abbreviated syntax and the unabbreviated syntax.
In this tutorial, I only talk about the abbreviated syntax because it is the most widely used; the unabbreviated syntax is also more complex. If you are interested in the unabbreviated syntax, have a look at the XPath 1.0 specification (see Resources).
The two types of location paths are relative and absolute.
A relative location path is a sequence of location steps separated by /. For example:
list/item[currentPrice<20.0]
This location path consists of two location steps: the first, list, selects a set of nodes relative to the context node; the second, item[currentPrice<20.0], selects a set of nodes in the subset identified by the first step; and so on, if there are more steps.
An absolute location path consists of a /, optionally followed by a relative location path, with / referring to the root node. An absolute location path is basically a relative location path evaluated in the context of the root node, for example:
/list/item[currentPrice<20.0]
With absolute location paths (location paths that start with /), the context node isn't meaningful because the path is always evaluated from the root node.
Useful syntax
The abbreviated syntax provides a set of useful characters (most of which you saw in XPath overview ). I will now enumerate the most commonly used characters and give examples relative to the root node of AuctionItemList.xml -- that is, with the context node being the root node of AuctionItemList.xml.
@ is used to refer to attributes. For example, the location path @currency identifies the currency attribute. list/item[@private] identifies the item elements with a private attribute -- meaning, all the item elements in AuctionItemList.xml.
[] can also be used to refer to specific elements in an ordered sequence. For example, list/item[2] refers to the second item element. This is actually a predicate (described next in Predicates).
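One detail worth internalizing is that XPath positions are 1-based. The following Python sketch (ElementTree once more, with an invented three-item document) shows item[2] and last() in action:

```python
import xml.etree.ElementTree as ET

# Invented three-item document.
root = ET.fromstring(
    '<list>'
    '<item><description>first</description></item>'
    '<item><description>second</description></item>'
    '<item><description>third</description></item>'
    '</list>')

# XPath positions are 1-based: item[2] is the second item, not the third.
print(root.find('./item[2]/description').text)       # -> second

# [last()] selects the final node in the sequence.
print(root.find('./item[last()]/description').text)  # -> third
```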
// is used to refer to all descendants of the context node. For example, //item refers to all item elements in the document, and //list/item refers to all item elements that have a list parent (that is, all the item elements in the example).
. is used to refer to the context node itself. For example, . selects the context node, and .//item refers to all the item elements that are descendants of the context node.
.. is used to refer to the parent of the context node. For example, ../item would refer to the first item element in the context of the first bidIncrement element.
Predicates
Predicates are boolean expressions enclosed in square brackets, []. Have a look at the following location path:
list/item/currentPrice[@currency="EUR"]
During evaluation, all currentPrice elements in AuctionItemList.xml are in the selected node-set. Then, the @currency="EUR" predicate is evaluated, and the currentPrice elements whose currencies do not contain the EUR value are rejected.
Predicates can also use the relational operators >, <, >=, <=, and !=. They can also use boolean operators, as you'll see in Expressions.
Lab: Location paths
Now that I have explained what location paths are, your task is to modify AuctionItemSummary-Base.xsl to produce the following output -- specifically, a table containing only the items with currency listed in U.S. dollars:
Figure 4. Table containing auction items in U.S. dollars
Note: You need to replace the value of the select attribute inside the list template with the correct location path. Use single quotation marks (') inside a string enclosed in double quotation marks ("). I will talk more about this in Expressions.
A solution to this is AuctionItemSummary-Section5.xsl. Change the second line of AuctionItemList.xml to refer to AuctionItemSummary-Section5.xsl, and open AuctionItemList.xml in your Web browser.
Location paths are a subset of the more general XPath expressions. An expression refers not only to a set of nodes (location paths), but can also return a boolean, a number, or a string.
Expressions
Booleans
A boolean expression can have one of two values: true or false.
XPath defines the and and or operators. With and, the left operand is evaluated first: if it is false, the expression returns false; otherwise, the right operand is evaluated and determines the result of the expression. With or, the left operand is evaluated and if true, the expression returns true; otherwise, the right operand is evaluated and determines the value of the expression.
As an example, the boolean expression @type="BidOnly" evaluates to true in the context of the second item element of AuctionItemList.xml.
XPath defines the following relational operators:
- = means "is equal to"
- != means "is not equal to"
- < means "is less than"
- <= means "is less than or equal to"
- > means "is greater than"
- >= means "is greater than or equal to"
For example, the boolean expression bidIncrement != 10 returns true in the context of the first item element in AuctionItemList.xml and false in the context of the second item element.
The = operator, when applied to nodes, tests whether two nodes have the same value, not whether they are the same node. This can be used to compare attribute values. For example, item[@type = @private] selects items whose type and private attributes have the same value.
When an XPath expression is contained in an XML document, the well-formedness rules of XML 1.0 need to be followed, and any < or <= operators must be escaped as &lt; and &lt;=, respectively. For example, the XPath expression bidIncrement < 5 is valid in XPointer but needs to be written as bidIncrement &lt; 5 if it is to be contained in an XSLT document.
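This escaping is ordinary XML character escaping, so any XML library can do it for you. For example, in Python:

```python
from xml.sax.saxutils import escape

# escape() rewrites the characters XML 1.0 forbids in content:
# & becomes &amp; and < becomes &lt;, which is exactly what an
# XSLT stylesheet needs before embedding the expression.
expr = "bidIncrement < 5"
print(escape(expr))  # -> bidIncrement &lt; 5
```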
Conversions happen when operands of a boolean expression are not of the same type (node-set, numbers, strings). Refer to the XPath 1.0 specification for details.
Numbers
An XPath number is a double-precision 64-bit floating-point number. XPath numbers include the "Not-a-Number" NaN value, positive and negative infinity, and positive and negative zero.
Numeric operators provided by XPath are: + (addition), - (subtraction), * (multiplication), div (division), and mod (remainder from truncating division).
Numeric operators convert operands to numbers if needed, as if they were using the number function (described in Function library).
Note: The subtraction (-) operator has to be preceded by whitespace because XML allows "-" characters in names.
Here are a few examples of XPath numeric expressions:
- 7 + 3 returns 10
- 7 - 3 returns 4
- 7 * 3 returns 21
- 7 div 3 returns 2.3333333333333335
- 7 mod 3 returns 1
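Because XPath numbers are IEEE 754 doubles, any language with double-precision floats reproduces these results. The Python sketch below checks them, and also flags one trap: XPath's mod truncates toward zero, while Python's % follows the sign of the divisor:

```python
import math

# XPath numbers are IEEE 754 doubles, the same model as Python floats,
# so Python reproduces the tutorial's results exactly.
print(7 + 3)   # -> 10
print(7 / 3)   # -> 2.3333333333333335  (XPath: 7 div 3)
print(7 % 3)   # -> 1                   (XPath: 7 mod 3)

# Trap: XPath mod truncates toward zero, but Python's % follows the
# sign of the divisor, so the two disagree for negative operands.
print(-7 % 3)            # -> 2    (Python)
print(math.fmod(-7, 3))  # -> -1.0 (matches XPath -7 mod 3)
```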
Note: An asterisk (*) can be interpreted as the wildcard character or as the multiplication character. XPath defines lexical rules to eliminate ambiguities (see the XPath 1.0 specification for details). However, a new operator, div, was introduced for division because the forward slash (/) is used to separate location steps.
Strings
XPath strings are a sequence of valid XML 1.0 (Unicode)
characters -- for example,
Miles Smiles album, CD.
Strings in XPath are enclosed in quotation marks (
' or
"). When an XPath string is contained in an XML document and
contains quotation marks, you have to use one of the two following
options:
- Quote them using &apos; or &quot;, respectively. For example, description = "New 256m &quot;USB&quot; MP3 player".
- Use single quotation marks (') if the expression is enclosed in double quotation marks ("), and vice versa. For example, 'New 256m "USB" MP3 player'.
XPath provides useful functions for dealing with strings, as described in Function library .
Lab: Expressions
Now that I have explained XPath expressions, your task is to modify AuctionItemSummary-Base.xsl to produce the following output -- a table containing all the items whose auction finishes within the hour:
Figure 5. Table containing auctions finishing in the hour
Note: endOfAuction is the time remaining until the end of the auction, in minutes. You need to modify the same select attribute as in Location paths.
A solution to this is AuctionItemSummary-Section6.xsl. Change the second line of AuctionItemList.xml to refer to AuctionItemSummary-Section6.xsl, and open AuctionItemList.xml in your Web browser.
Function library
Function Library
XPath defines a set of functions called the core function library. Each function is defined by three artifacts:
- A function name
- A return type (mandatory, no void)
- The type of the arguments (zero or more, mandatory, or optional)
You may find functions useful inside predicates and expressions. Other specifications, such as XSLT, extend this function set. Functions are divided into four groups (node-set, string, boolean, and number functions), which I discuss in the rest of this section:
Node-set functions
Node-set functions provide information on a set of nodes (one or more nodes). Useful node-set functions include:
- last() - Returns a number called the context size, which is the number of nodes in the given context.
- count() - Returns the number of nodes in the argument node-set. For example, in the context of the AuctionItemList.xml document, count(//item) returns the number of item elements, which is 7.
- id() - Returns a node-set: the result of selecting elements by their unique ID, declared as type ID in a DTD. Because I don't use a DTD in AuctionItemList.xml, the result node-set is always empty for this example; id("ItemId0001") returns an empty node-set.
XPath also defines three other functions related to node names and namespaces:
local-name()
namespace-uri()
name()
Refer to section 4.1 of the XPath 1.0 specification for details.
String functions
With string functions, you can manipulate strings. Useful string functions include:
- string() - Converts the argument object or the context node to a string. Valid arguments are a node-set, a number, a boolean, or any other type -- but in the last case, the conversion result is less predictable. It is recommended to use XSLT's format-number function to convert numbers to strings, and XSLT's xsl:number element to present numbers to users.
- concat() - Takes two or more strings as arguments and returns their concatenation. For example, concat("Original ","recording ","Blue Train LP record") returns "Original recording Blue Train LP record".
- starts-with() - Returns true if the first argument string starts with the second argument string; false otherwise. For example, starts-with("Miles Smiles album, CD", "Miles") returns true.
- contains() - Returns true if the first argument string contains the second argument string; false otherwise. For example, contains("Miles Smiles album, CD", "album") returns true.
Other XPath string functions are substring(), substring-before(), substring-after(), string-length(), normalize-space(), and translate(). Refer to section 4.2 of the XPath 1.0 specification for details.
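As a rough mental model (not the XPath specification itself), the functions above map directly onto familiar Python string operations:

```python
# Rough Python equivalents of the XPath string functions above.
print("Original " + "recording " + "Blue Train LP record")
# -> Original recording Blue Train LP record       (concat)

print("Miles Smiles album, CD".startswith("Miles"))  # -> True (starts-with)
print("album" in "Miles Smiles album, CD")           # -> True (contains)
```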
Boolean functions
Boolean functions are used to convert an object or a string to either true or false, or to get the true or false values directly. The boolean functions are:
- boolean() - Returns the conversion to boolean of the object passed as an argument, according to the following rules: a number is true if it is different from zero and from NaN; a node-set or a string is true if it is not empty. Other types of objects are converted in a less predictable way.
- not() - Returns true if the boolean passed as argument is false; false otherwise.
- true() and false() - Return true and false, respectively. These functions are useful because true and false are seen as normal strings in XPath, not as the boolean true and false values.
- lang() - Returns true if the language of the context node is the same as, or a sub-language of, the language specified by the string argument; false otherwise. The language of the context node is defined by the value of the xml:lang attribute. For example, lang("en") returns false on any node of the tree for AuctionItemList.xml because the xml:lang attribute is not specified.
Number functions
With number functions, you can perform numeric computations. The number functions are:
- number() - Converts its argument to a number. For example, number("250") returns 250 and number("miles1965") returns NaN.
- sum() - Returns the sum of all nodes in the node-set argument, after the number() function has been applied to them.
- floor() - Returns the largest integer that is not greater than the number argument. For example, floor(2.75) returns 2.
- ceiling() - Returns the smallest integer that is not less than the number argument. For example, ceiling(2.75) returns 3.
- round() - Returns the integer that is closest to the number argument. For example, round(2.75) returns 3.
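One subtlety: when the argument is exactly halfway between two integers, XPath's round() picks the integer closer to positive infinity. In Python, floor(x + 0.5) models that rule for ordinary magnitudes, whereas the built-in round() uses banker's rounding and disagrees on halves:

```python
import math

# XPath round(): the closest integer, halves rounded toward positive
# infinity. floor(x + 0.5) models that rule for ordinary magnitudes.
def xpath_round(x):
    return math.floor(x + 0.5)

print(xpath_round(2.75))  # -> 3 (same as the tutorial's example)
print(xpath_round(2.5))   # -> 3 (XPath behaviour on halves)
print(round(2.5))         # -> 2 (Python's banker's rounding differs)
```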
Lab: Function library
Now that I have explained XPath functions, your task is to modify AuctionItemSummary-Base.xsl to produce the following output -- a table containing only the new auction items:
Figure 6. New auction items only
Note: Such items will contain the string New or NEW in their description. You need to modify the same select attribute as in Location paths and Expressions.
A solution to this is AuctionItemSummary-Section7.xsl. Change the second line of AuctionItemList.xml to refer to AuctionItemSummary-Section7.xsl and open AuctionItemList.xml in your Web browser.
Tutorial wrap-up
XPath 2.0
XPath 2.0 is a superset of XPath 1.0 and currently a W3C Working Draft. Two W3C working groups are working on version 2.0 of XPath: the XML Query Working Group and the XSL Working Group. XPath 2.0 has more power and is more robust because it supports a broader set of data types. This is because XPath 2.0 values use XML Schema types rather than simple strings, numbers, or booleans. XPath 2.0 is backward-compatible so that 1.0 expressions behave the same in 2.0, with exceptions listed in the specification.
See Resources for more information on XPath 2.0.
Summary
In this tutorial, you learned that XPath is a language that's used to address parts of XML documents. You saw how XPath relates to other XML technologies, such as XSLT and XPointer. You saw what XPath expressions are, including the special case of expressions called location paths. You also had an overview of the XPath function library and the new features of the upcoming XPath 2.0.
Resources
Learn
- Read the XML Path Language (XPath) version 1.0 Recommendation at the W3C site.
- Take a look at the XPath 2.0 specification (draft).
- Find out about new features in the upcoming XPath 2.0 in the developerWorks article "XML for Data: What's new in XPath 2.0?" (September 2002).
- Review your XML basics with Doug Tidwell's tutorial "Introduction to XML" (developerWorks, August 2002).
- Learn more about transforming XML documents -- read the XSL Transformations (XSLT) 1.0 specification.
- Try the XML Linking Language (XLink) to insert elements into XML documents in order to create and describe links between resources.
- Explore the XML Pointer Language (XPointer), an extension to XPath that provides addressing into XML documents and their internals.
- Find more XML resources on the developerWorks XML zone.
- Find out how you can become an IBM Certified Developer in XML and related technologies.
- Stay current with developerWorks technical events and Webcasts.
Get products and technologies
- Build your next development project with IBM trial software, available for download directly from developerWorks.
Hi guys,
I'm creating a top down game and I have a weird control problem.
Every time I press a movement key (W,A,S,D) or release a movement key, the player character moves slightly upwards away from the ground (+ve Y direction). He just floats away bit by bit.
I've added a Player object with a CharacterController and no rigid body.
I've added the following movement script:
public class PlayerMovement : MonoBehaviour
{
    public float Speed = 6.0f;
    private Vector3 _moveDirection = Vector3.zero;

    // Use this for initialization
    void Start () {
    }

    // Update is called once per frame
    void Update () {
        CharacterController controller = GetComponent<CharacterController>();
        _moveDirection = new Vector3(-1 * Input.GetAxis("Vertical"), 0, Input.GetAxis("Horizontal"));
        _moveDirection *= Speed;
        controller.Move(_moveDirection * Time.deltaTime);
    }
}
I'm doing camera relative movement, not character relative.
This script works fine except the unexpected Y movement on each key down and release. I'm sure I'm making a dumb noob mistake! :)
If I use transform.Translate() this problem does not occur, but then collisions don't work properly.
Answer by Berenger · Jun 11, 2012 at 04:00 AM
I suppose your CC is bumping against the ground, making it go higher step by step. You should use the other function of CC, SimpleMove, which does exactly what you need: 2D displacement, with the y axis handled for you (SimpleMove applies gravity automatically).
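A minimal rewrite of the question's script along these lines might look like the following. This is an untested sketch: SimpleMove expects a speed in units per second (so Time.deltaTime goes away), applies gravity itself, and ignores the y component of the vector you pass:

```csharp
using UnityEngine;

public class PlayerMovement : MonoBehaviour
{
    public float Speed = 6.0f;
    private CharacterController _controller;

    void Start()
    {
        // Cache the component once instead of fetching it every frame.
        _controller = GetComponent<CharacterController>();
    }

    void Update()
    {
        // Camera-relative mapping kept from the original script.
        Vector3 moveDirection = new Vector3(
            -1 * Input.GetAxis("Vertical"), 0, Input.GetAxis("Horizontal"));

        // SimpleMove takes units per second, applies gravity itself,
        // and ignores the y component; no Time.deltaTime needed.
        _controller.SimpleMove(moveDirection * Speed);
    }
}
```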
This example does the following:
Imports monthly sales figures for all products from the dbtoolboxdemo data source into the MATLAB® workspace.
Computes total sales for each month.
Exports the totals to a new table.
This example assumes that you are connecting to a Microsoft® Access™ database that contains tables named salesVolume and yearlySales. The table salesVolume contains the column names for each month. The table yearlySales contains the column named salesTotal.
To access the code for this example, see matlab\toolbox\database\dbdemos\dbinsert2demo.m.
Create a database connection conn to the Microsoft Access database using the JDBC/ODBC bridge. Here, this code assumes that you are connecting to a data source named dbtoolboxdemo with blank user name and password.
conn = database('dbtoolboxdemo','','');
Alternatively, you can use the native ODBC interface for an ODBC connection. For details, see database.
Ensure that the database is writable using conn.
a = isreadonly(conn)
a = 0
When the isreadonly function returns 0, the database is writable.
Specify preferences for the retrieved data. Set the data return format to numeric. Specify that NULL values read from the database are converted to 0 in the MATLAB workspace.
setdbprefs({'NullNumberRead';'DataReturnFormat'},{'0';'numeric'})
When you specify DataReturnFormat as numeric, the value for NullNumberRead must be numeric.
Execute the SQL query sqlquery using conn to import all data from the salesVolume table. The cursor object curs contains the executed query. Import the data from the executed query using the fetch function.
sqlquery = 'select * from salesVolume';
curs = exec(conn,sqlquery);
curs = fetch(curs);
Display the names of the columns in the fetched data set.
columnnames(curs)
ans = 'StockNumber', 'January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'
Display the data for January. January data is in the second column of the fetched data set.
curs.Data(:,2)
ans = 1400 2400 1800 3000 4300 5000 1200 3000 3000 0
Assign the dimensions of the matrix containing the fetched data set to m and n.
[m,n] = size(curs.Data)
m = 10 n = 13
Calculate monthly totals using m and n. The variable tmp is the sales volume for all products in a given month c. The variable monthly is the total sales volume of all products for that month. For example, if c is 2, row 1 of monthly is the total of all rows in column 2 of curs.Data, where column 2 is the sales volume for January.
for c = 2:n
    tmp = curs.Data(:,c);
    monthly(c-1,1) = sum(tmp(:));
end
Display the monthly totals.
monthly
ans = 25100 15621 14606 11944 9965 8643 6525 5899 8632 13170 48345 172000
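As an aside, the per-month loop can be collapsed into a single vectorized call that produces the same 12-by-1 column vector (a sketch, assuming curs.Data holds the matrix shown above):

```matlab
% Sum each month column (columns 2 through end) down the rows,
% then transpose the 1-by-12 row of totals into a column vector.
monthly = sum(curs.Data(:,2:end),1)';
```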
Create a cell array colnames containing the column name for inserting the data.
colnames{1,1} = 'salesTotal';
Insert the data into the yearlySales table using conn, colnames, and the monthly totals monthly.
datainsert(conn,'yearlySales',colnames,monthly)
To verify the data import in Microsoft Access, view the yearlySales table from the tutorial database.
After you finish working with the cursor object, close it. Close the database connection.
close(curs)
close(conn)
See also: database | datainsert | exec | fetch | isreadonly | setdbprefs
I've gotten this applet to do some of what it should but I'm having trouble getting it to do the rest. It's a multiplication applet that asks the user a question such as "What is 6 times 7?", has a JTextField for the user's answer, and a JButton to check the answer. After the button is clicked, if the answer is wrong, it should draw the string "No. Please try again." and allow the user to try as many times as it takes for them to get it correct. If the answer is correct, it should draw the string "Very good!!!" AND reset the question with a new set of random numbers. So far it will recognize if the answer is right or wrong but if, for example, the user first puts in a wrong answer, it draws the appropriate string but when the user gets it right, it draws the appropriate string on top of the other string instead of replacing it AND it doesn't reset the question with new numbers. Can someone PLEASE help me figure out these two issues?
Here's my code:
import java.awt.*; import java.awt.Graphics; import java.lang.Object; import java.awt.event.*; import javax.swing.*; import java.util.*; public class NewJApplet extends JApplet implements ActionListener { public Graphics brush ; actionPerformed(ActionEvent e) { int ans = Integer.parseInt(answer.getText()); if(ans == number1 * number2) { answer.setText(""); Random rand = new Random(); int number1 = rand.nextInt(9) + 1; int number2 = rand.nextInt(9) + 1; brush = getGraphics(); brush.setFont(font2); brush.drawString(right, 20, 120); validate(); } else { answer.setText(""); brush = getGraphics(); brush.setFont(font2); brush.drawString(wrong, 20, 120); validate(); } } } ); } @Override public void actionPerformed(ActionEvent e) { answer.setText(""); Random rand = new Random(); int number1 = rand.nextInt(10); int number2 = rand.nextInt(10); } } | https://www.daniweb.com/programming/software-development/threads/282293/please-help-resetting-drawstring-and-random-numbers | CC-MAIN-2017-43 | refinedweb | 300 | 60.01 |
Walkthrough: Data Binding Web Pages with a Visual Studio Data Component
Many Web applications are built by using multiple tiers, with one or more components in the middle tier that combine data access together with business logic. This walkthrough shows you how to build a data-access component in a Web site and bind a Web server control (a GridView control) to the data that is managed by the component. The data component interacts with a Microsoft SQL Server database, and can both read and write data.
Tasks illustrated in this walkthrough include the following:
Creating a component that can read and write data.
Referencing the data component as a data source on a Web page.
Binding a control to the data that is returned by the data component.
Reading and writing data using the data component.
In order to complete this walkthrough, you will need the following:
Access to the SQL Server Northwind database. For information about downloading and installing the SQL Server sample Northwind database, see Installing Sample Databases on the Microsoft SQL Server Web site.
In the Location text box, enter the name of the folder where you want to keep the pages of the Web site.
For example, type the folder name C:\WebSites\ComponentSample.
In the Language list, click the programming language that you prefer to work in.
Click OK.
Visual Web Developer creates the folder and a new page named Default.aspx.
In this walkthrough, you will use a wizard to generate a component that reads data from and writes data to the Northwind database. The component includes a schema file (.xsd) describing the data that you want and the methods that will be used to read and write data. You will not have to write any code. At run time, the .xsd file is compiled into an assembly that performs the tasks that you specify in the wizard.
To create a data-access component
If the Web site does not already have an App_Code folder, do the following:
In Solution Explorer, right-click the name of the Web site.
Point to Add Folder, and then click App_Code Folder.
Right-click the App_Code folder, and then click Add New Item.
The Add New Item dialog box appears.
Under Visual Studio installed templates, click DataSet.
In the Name box, type EmployeesObject, and then click Add.
The TableAdapter Configuration wizard appears.
Click New Connection.
If the Choose Data Source dialog box appears, click Microsoft SQL Server and then click Continue.
In the Server name box, enter the name of the computer that is running SQL Server.
For the logon credentials, select the option that is appropriate for accessing the SQL Server database (integrated security or specific ID and password) and if it is required, enter a user name and password.
If you specify explicit credentials, select the Save my password check box.
Click Select or enter a database name, and then enter Northwind.
Click Test connection, and when you are sure that the connection works, click OK.
The TableAdapter Configuration wizard appears with the connection information filled in.
Click Next.
A page where you can choose to store the connection string in the configuration file appears.
Select the Yes, save this connection as check box, and then click Next.
You can leave the default connection string name.
A page where you can choose to use SQL statements or stored procedures appears.
Click Use SQL statements, and then click Next.
Using stored procedures has some advantages, including performance and security. However, for simplicity in this walkthrough, you will use an SQL statement.
A page where you can define the SQL statement appears.
Under What data should be loaded into the table, type the following SQL statement:
Click Next.
A page where you can define the methods that the component will expose appears.
Click to clear the Fill a DataTable check box, and then select the Return a DataTable and Create methods to send updates directly to the database check boxes.
You do not need a method to fill a data table for this walkthrough. However, you will need a method that returns the data and you also want the component to contain methods that update the database.
In the Method Name box, type GetEmployees.
You are naming the method that will be used later to obtain data.
Click Finish.
The wizard configures the component and displays it in the component designer, displaying the data that the component manages and the methods that the component exposes.
Save the data component, and then close the component designer.
On the Build menu, click Build Web Site to make sure that the component compiles correctly.
You can now use the data component as a data source in an ASP.NET Web page. To access the data component, you will use an ObjectDataSource control, configuring it to call the data-access methods that are exposed by the data component. You can then add controls to the page and bind them to the data source control.
To add a data source control to the page
Open the Default.aspx page and switch to Design view.
From the Data group in the Toolbox, drag an ObjectDataSource control onto the page.
If the ObjectDataSource Tasks shortcut menu does not appear, right-click the ObjectDataSource control, and then click Show Smart Tag.
On the ObjectDataSource Tasks shortcut menu, click Configure Data Source.
The Configure Data Source wizard appears.
In the Choose your business object list, click EmployeesObjectTableAdapters.EmployeesTableAdapter.
This is the type name (namespace and class name) of the component that you created in the preceding section.
Click Next.
On the Select tab, in the Choose a method list, click GetEmployees(), returns EmployeesDataTable.
The GetEmployees method is a method that was defined in the component that you created in the preceding section. It returns the results of the SQL statement, available in a DataTable object that data controls can bind to.
Click Finish.
You can now add data controls to the page and bind them to the ObjectDataSource control. In this walkthrough, you will work with the GridView control.
To add a GridView control to the page and bind it to the data
From the Data group in the Toolbox, drag a GridView control onto the page.
If the GridView Tasks shortcut menu does not appear, right-click the GridView control, and then click Show Smart Tag.
On the GridView Tasks shortcut menu, in the Choose Data Source list, click ObjectDataSource1, which is the ID of the control that you configured in the preceding section.
The GridView control reappears with a column for each data column that is returned by the SQL statement.
In Properties, verify that the DataKeyNames is set to EmployeeID.
Now that all controls that you need are on the page, you can test the page.
To test the page
Press CTRL+F5 to run the page.
Confirm that the EmployeeID, LastName, FirstName, and HireDate columns from the Employees table are displayed in the grid.
Close the browser.
The GridView control requests data from the ObjectDataSource control. In turn, the ObjectDataSource control creates an instance of the data component and calls GetEmployees method for the data component. The GetEmployees method returns a DataTable object, which the ObjectDataSource control returns to the GridView control.
The data component that you created includes SQL statements to update the database (update, insert, and delete records). The update facilities of the data component are exposed by methods that were generated automatically when the wizard created the component. The GridView control and ObjectDataSource control can interact to automatically start the update methods.
To enable updates and deletes
Right-click the GridView control, and then click Show Smart Tag.
Select the Enable Editing check box.
Select the Enable Deleting check box.
Testing Updates
You can now test to make sure that you can use the component to update the database.
To test updates
Press CTRL+F5 to run the page.
This time, the GridView control displays Edit and Delete links in each row.
Click the Edit link that is next to a row.
Make a change to the row, and then click Update.
The grid is redisplayed with the updated date.
Click the Delete link that is in a row.
The row is permanently deleted from the database. The grid is redisplayed without that row.
Close the browser.
This walkthrough illustrates how to work with a data component. You might want to experiment with additional features of navigation. For example, you might want to do the following:
Create a custom data component instead of using the wizard.
For an example, see Walkthrough: Data Binding to a Custom Business Object.
Restrict which users can change the data. A typical method is to add membership and roles to the Web site, and then establish rules that the business component can check before allowing changes to data.
For detailed information, see Walkthrough: Creating a Web Site with Membership and User Login (Visual Studio) and Walkthrough: Managing Web Site Users with Roles. | https://msdn.microsoft.com/EN-US/library/3h7eexxe(v=vs.80)?cs-save-lang=1&cs-lang=cpp | CC-MAIN-2016-50 | refinedweb | 1,494 | 65.52 |
In this article we will investigate the Status Bars found in the .NET Framework, to read this article and get the most of it you must have a good working knowledge with C#, OOP and WF.NET.
Introduction to Status bars:Most of you (who program on the win32 platform using former Microsoft programming languages like Microsoft Visual C++) are familiar with status bars. Most of the modern Windows applications display some useful information for the users in many different ways (tool tips, Help Menu, Status Bar) and Status Bar is a primary way to display keys and application status and maybe provide information related to cursor position or maybe information related to the current performed task.
Status Bars display information as textual information or as images that will explain the current task. Famous example of that is Microsoft Word's status bar which display many information for you like page number, the total number of pages, the current line number, columns, and some keys status like the status of the insert key, and some other information is graphical representation (images) and an example of that while you are typing you will find an open book and a pencil as you write and when you stop writing it will change to an open book and a red check mark over it also when you print you will find a printer icon printing pages while you are printing all this useful information help you track many processes in your applications and it will improve user interaction for better and faster application use.
In other words (technically), the status bar control display information related to objects and other controls on the form, for example if you have a menu on your form you can write code that will display a textual description about each menu item in the menu. In this article we will learn about the members of the status bar control and how we can use them to create powerful status bar in our applications.
The Status bar Class:
The Status Bar class is used to create a Status bar control on a form. This control is used to display textual information or graphical information related to the application status or the performed task. The Status Bar class is part of the System.Windows.Forms namespace and it's inherits from the Control class.
Each form can have its own status bar but it's not make sense so in real 'applications there is only one Status Bar control for a given application (but in some situation maybe you have to have more than one Status Bar for your application). When you have only one Status Bar for your application its contents can be changed depending on the active form or the current task and actually I think It's the power of the Status Bar control.
The Status Bar control can display 2 types of information: a normal text as a description for other controls on the form like Menus and Toolbars buttons and we call it flyby text because it will change each time the mouse moves over a control also we can display text in the Status Bar control while our application is performing some operation (like opening a file or access the internet). Another type of information that we can display is application and keys state, and I will talk about it later when we learn about panels.
Status bar Class membersBecause the Status Bar control inherits from the control class I will not discuss all the inherited members and I will go directly to the Status bars Members and some of the control class members.
Status Bar properties Properties Description ContextMenu Use this property to get or set the context menu associated with the control. Controls Gets a collection of the controls contained by this control. Created Gets a value indicating whether the status bar has been created Dock Gets or sets the dock setting for the status bar and by default the value is DockStyle.Bottom so you will find the control docked to the bottom of the form when you create it. HasChildren Gets a value indicating whether the control contains one or more child controls. Panels Gets the statusBarPanelCollection class containing the set of statusBarPanel objects managed by this status bar. Parent Gets or sets the parent container of the control. ShowPanels This property related to whether you want to show the panels or a simple text in the Status Bar control. If you set it to true then it will show the panels and hide the text and if false then it will show the text and hide the panels. SizingGrip Gets or Sets a value indicating whether a sizing grip should be displayed in the lower-right corner of the Status Bar. You can use this grip to resize the form. The Default setting is true. TabStop Gets or Sets Whether the control is a tab stop on the form. The default value for status bar is false so you will not be able to tab to the status bar. Text Gets or Sets the text for the status bar. This is displayed on the status bar only if the ShowPanels is set to false. Status Bar Methods Methods Description BringToFront Bring the control to the front of the z-order. Contains Retrieves a value indicating whether the specified control is a child of the control. For example you may need to know if the status bar control contains a panel object called PrinterPanel so you can use this method to know that. FindForm Retrieve the form that the status bar created on. GetType Get the type of the current instance Hide Hide the control from the user. You may need to hide the status bar in some situations. 
Invalidate Overloaded. Invalidates a specific region of the control and causes a paint message to be sent to the control. ResetText Resets the Text property to its default value. The default value of the text property is "Statusbar1". SendToBack Sends the control to the back of the z-order. Show Displays the control to the user. Status Bar events events Description PanelClick Occurs when a panel on the status bar is clicked DrawItem Occurs when an owner-drawn status bar panel must be redrawn Resize Occurs when the control is resized. TextChanged Occurs when the value of the text property changed I think these are the most common members you will deal with them, let's go and add a status bar control to a Windows Application.
Adding Status bar Control to a Windows ApplicationCreate a new windows application and name it statusBarTest and double click on the button named "StatusBar" in the toolbox window then you will find that a status Bar control named "statusBar1" added to your form (Form1). Figure 1 displays Form1 and a Status Bar control created on it.
Figure 1.1
Note that the status bar instance is docked to the bottom of the form because the Dock property is set to DockStyle.Bottom by default. Also note that the text displayed on the status bar is the value of the text property and here the default value is "statusBar1".
Let's look at the changes made to the InitializeComponent() method in the Visual Studio.NET auto generated code after adding the StatusBar control. #region Windows Form Designer generated code/// <summary>/// Required method for Designer support - do not modify/// the contents of this method with the code editor./// </summary>private void InitializeComponent(){this.statusBar1 = new System.Windows.Forms.StatusBar();this.SuspendLayout();//// statusBar1//this.statusBar1.Location = new System.Drawing.Point(0,251);this.statusBar1.Name = "statusBar1";this.statusBar1.Size = new System.Drawing.Size(292, 22);this.statusBar1.TabIndex = 0;this.statusBar1.Text = "statusBar1";//// Form1//this.AutoScaleBaseSize = new System.Drawing.Size(5, 13);this.ClientSize = new System.Drawing.Size(292, 273);this.Controls.AddRange(new System.Windows.Forms.Control[] {this.statusBar1});this.Name = "Form1";this.Text = "Form1";this.ResumeLayout(false);}#endregionBefore we discuss this code notice that the form contains only this StatusBar and there are no other controls over it.
The first line of code in the InitializeComponent() method create a new object (new instance) of the StatusBar type. The next 5 lines set the properties of the StatusBar control (location, name, size, tabindex and text). Normally you will not tab into a status bar control though VS.NET will set the tabindex property and that's to ensure that all controls on the form have a unique index (a mechanism to avoid bugs). There is a property called TabStop which if you set it to false you will not be able to tab into the control and that's exactly what statusbar control does, it will set the TabStop property to false so the control will not support tabbing.
Most important thing you must note about the above code is placing the statusbar on the form (the z-order). Look at the following line of code (from the above code):this.Controls.AddRange(new System.Windows.Forms.Control[]{this.statusBar1}
Add the button control so the form will be like this:
Now let's look again at the previous line:this.Controls.AddRange(new System.Windows.Forms.Control[] {this.button1,this.statusBar1});
Although we added the statusbar instance first the button control is added to the form first and that's because of the z-order (if you don't know about z-order wait for my next article about it).
Important Note:You may notice that I used the words "statusbar object" in place of "statusbar control" and many of you (who don't know much about OOP) will get confused. In fact a control is a class with a visual representation. For example, the textbox control is just a textbox class with a visual representation (a rectangular where we can type characters). So when I say statusbar control I mean the statusbar class. Almost all of the windows form controls that shipped with the .NET Framework are classes with a visual representation (but there are many controls that don't have a visual representation).
Working with StatusBar control
As I said in the beginning of the article that statusbar control can display information as a simple text also the statusbar control can display information in panels. A panel (as we will see in later section) is an area of information on the statusbar. For now let's create a windows application that contains a statusbar control and a File menu. In this application we will display simple text information in the status bar when the user selects a menu item from the file Menu.
Create a windows application and name it "StatusBarText". Then add a statusbar control and a MainMenu object and you will get a form like this:
Create a submenu items for the File menu. Create the standard New, Open and Close menu items in the file menu. So the form will look like the following:
Before we write the code that will display some information when selecting a menu item we need to set the text of the statusbar control to empty string in the event handler of the form load event so when the form is loaded the statusbar contains empty string. To do this double click empty space on the form and VS. NET will create the event handler for you like following:private void Form1_Load(object sender, System.EventArgs e){}Now write the following line of code (inside the event handler) to set the text property of the statusbar control to an empty string.
this
Run the application and you will not find the text "statusBar1" written in the statusbar control.
Now let's begin writing the code for displaying simple information when the user selects a menu item from the File menu. To perform this operation we need to handle the event select of the menu items in the File menu so when the user select a menu item we will display some helpful information is the statusbar control. Let's do just this:
As you might now VS. NET will help you create the event handler for the select event just select the first menu item in the file menu (New) and then right click to get a pop-up menu then select properties to get to the properties window of the New menu item. Select the events icon to view the available events for the menu item. You will get the next window:
From the events window now we know that there are only five events for the menu item. In our application we need to implement an event handler to the select event. Double click on the select in the events window and VS. NET will generate an event handler for the New menu item named menuItem2_Select and VS. NET will take you to the code window and specially to that handler which look like:
private
Pretty simple we just used the text property of the statusbar control to set the text to "Use New to Create a new application".
Let's do it with the other menu items. I will not mention the steps because I mentioned it in the new menu item. The next is the code for the event handlers for the select event of the (Open and Close menu items).
In this application I will not implement an event handler for the MenuComplete event but I will override the method OnMenuComplete which is Microsoft recommendation.
Important Note:
There are 2 ways to handle events:
1. You can override the On* methods (like our method OnMenuComplete) which is created inside many of the .NET Windows Forms classes like the Form class and this method will be called whenever the related event happen but you can only use it if you are inheriting a class, I mean I can use On* methods if I'm inheriting from a class support them so I can override them. Seems logical right?2. You can create an event handler for the event which will be called whenever the event happen.
In fact Microsoft recommend that you override the On* methods in the derived classes instead of creating an event handlers for the events and that's for better performance and more organizing for our code. Also don't forget to call the base method to do its part of the job, I mean when you override a base class method don't forget to call the base class method in the derived method. For example if you going to override OnMenuComplete method in our form Form1 don't forget to call the Form class's method OnMenuComplete inside your method.
Open the code window and then write the following method:
protected
Now let's move to more advanced programming techniques in statusbar control.
StatusBar PanelsStatusbar panels are merely separate areas in the statusbar that can display text or graphical information. Statusbar panels make our control look better and it provides you with a way to display more than one piece and kind of information. Some of you may call it panes. Each StatusBar Panel is an instance of the StatusBarPanel class and the StatusBar control has a property called panels which is a collection of StatusBarPanel objects. Next is the table of most important members of the StatusBarPanel class. StatusBarPanelproperties Properties Description MinWidth Gets or sets the minimum width for the panel Parent This property will get the StatusBar object that contains this panel. Alignment Gets or sets the HorizontalAlignment for the panel's text AutoSize Gets or sets how the panel is sized within the status bar. And it takes its values from StatusBarPanelAutoSize enumeration which we will discuss in the last of the article. BorderStyle Gets or sets the type of border to display for the panel. And it takes its values from StatusBarPanelBorderStyle enumeration which we will discuss in the last of the article. Style Gets or sets the style used to draw the panel. And it takes its values from the StatusBarPanelStyle enumeration which we will discuss in the last of the article. Text Gets or sets the text for the panel ToolTipText Gets or sets the tool tip for the panel Icon Gets or sets the icon to display within the status bar panel
Adding StatusBar Panels to our StatusBar controlLet's add 2 panels to our statusbar control using Visual Studio.NET. To add panels to our statusbar control I will use the Panels property in the property window. Go to the design window and right click on the statusbar control to get the pop-up window then choose properties. Now navigate to the Panels property and click on the entry named (Collection) which will bring to you a … button then click on that button and you will get the StatusbarPanelCollection Editor which we will use to add panels to our statusbar. This is the StatusbarPanelCollection Editor:
Let's add our first Panel; click on the Add button in the editor and your editor will look like the next figure:
By default Visual Studio.NET added a statusBarPanel named statusBarPanel1 and a default values for its properties which we going to change it. Let's change some properties.
Change the statusBarPanel1 object's properties to the following:
BorderStyle = RaisedWidth = 200
And then click the ok button to get back to the design window. Note that our statusBarPanel still invisible because the property ShowPanels has a value of false so you need to change it to true from the properties window. Because the statusbar text property is useless now we need to change our code that uses statusBar1.Text property to statusBarPanel1.Text property and the best way to do this is via the Edit ïƒ Find and Replace ïƒ Replace menu item. Now open the code window and then select the Find and Replace from the Edit menu and select Replace from the submenu then type this.statusBar1.Text in the Find What: entry and type this.statusBarPanel1.Text in the Replace With: entry and click on Replace All button. Run the application and you will note that our statusbar will perform as before except it has one statusBarPanel to display information instead of its text property. Let's look at the changes made to the Visual Studio.NET auto generated code.
The only thing that you must know about is the next few lines:
((System.ComponentModel.ISupportInitialize)(this.statusBarPanel1)).BeginInit();
((System.ComponentModel.ISupportInitialize)(this.statusBarPanel1)).EndInit();
After VS.NET will set the properties for the Panel it will add it to the statusBarPanel collection in the following line:this.statusBar1.Panels.AddRange(new System.Windows.Forms.StatusBarPanel[] {this.statusBarPanel1});
Let's create our second statusBarPanel. From the StatusbarPanelCollection Editor create a second panel and leave the default properties values.
This Panel will tell us if the Insert button has been pressed or not. Let's write the code step-by-step, first we need 2 strings which will be displayed depending on the status of the Insert Button ("Insert Pressed", "Insert Not pressed"). When the application is first loaded we can display the string "Insert Not Pressed" in the second Panel and it makes sense because the application just loaded and the user didn't press the Insert key yet. So we can set the text property of this Panel to the string "Insert Not Pressed" in the event handler of the load event of the form, to do this double click on any empty space on the form to get to the event handler and write the following code:
So the event handler will look like:
I initialized its value to false because when the application first start the Insert Key is not pressed (is not active) so false is the right value. Then we need to override the OnKeyDown method of the Form1 class to test for the pressed key. Now write the following method in the Form1 class:
I think by now you know how much important having a StatusBar in your application to minimize users problems and improve program interaction by providing helpful information like what we did here. When you add statusBarPanels using the editor you find 3 properties that takes a values from enumerations let's talk about that.
StatusBar and Enumerations
The statusBarPanel class has 3 properties that take values from enumerations these are (AutoSize property which takes its values from StatusBarPanelAutoSize enum, BorderStyle property which takes its values from StatusBarPanelBorderStyle property and the Style property which takes its values from the StatusBarPanelStyle enum).
I think by now you know much about Status Bars but still another important subject we must touch, in our next part of that article, it's owner-drawn status bar's panels.
StatusBars in Real Applications
Directory Picker Pro in C#
hi!
my question is that when i resize the windows Form, StatusBar can be Resize by using Anchor and Dock properties but Staus Bar panel change location. | http://www.c-sharpcorner.com/UploadFile/myoussef/StutusBarsInRealApps12012005020504AM/StutusBarsInRealApps.aspx | crawl-003 | refinedweb | 3,503 | 58.52 |
JSONObject
Latest revision as of 20:25, 30 June 2015
Author
Matt Schoen of Defective Studios
Download
Use the github repository or get it on the AssetStore
Intro
I came across the need to send structured data to and from a server on one of my projects, and figured it would be worth my while to use JSON. When I looked into the issue, I tried a few of the C# implementations listed on json.org, but found them to be too complicated to work with and expand upon. So, I've written a very simple JSONObject class, which can be generically used to encode/decode data into a simple container. This page assumes that you know what JSON is, and how it works. It's rather simple, just go to json.org for a visual description of the encoding format.
As an aside, this class is pretty central to the AssetCloud content management system, from Defective Studios.
Update: The code has been updated to version 1.4 to incorporate user-submitted patches and bug reports. This fixes issues dealing with whitespace in the format, as well as empty arrays and objects, and escaped quotes within strings.
Usage
Users should not have to modify the JSONObject class themselves, and must follow the very simple procedures outlined below:
Sample data (in JSON format):
{
	"TestObject": {
		"SomeText": "Blah",
		"SomeObject": {
			"SomeNumber": 42,
			"SomeBool": true,
			"SomeNull": null
		},
		"SomeEmptyObject": { },
		"SomeEmptyArray": [ ],
		"EmbeddedObject": "{\"field\":\"Value with \\\"escaped quotes\\\"\"}"
	}
}
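A quick way to sanity-check the sample above is to parse it and poke at the root. This is a minimal sketch; `jsonString` is assumed to be a string variable holding the sample text, and the `[string]` indexer is the random-access feature described in the feature list:

```csharp
//Parse the sample data; the constructor returns the root object with all of its children
JSONObject j = new JSONObject(jsonString);
Debug.Log(j.type);                          //Type.OBJECT
Debug.Log(j["TestObject"]["SomeText"].str); //Blah
```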
Features
- Decode JSON-formatted strings into a usable data structure
- Encode structured data into a JSON-formatted string
- Interoperable with Dictionary and WWWForm
- Optimized parse/stringify functions -- minimal (unavoidable) garbage creation
- Asynchronous stringify function for serializing lots of data without frame drops
- MaxDepth parsing will skip over nested data that you don't need
- Special (non-compliant) "Baked" object type can store stringified data within parsed objects
- Copy to new JSONObject
- Merge with another JSONObject (experimental)
- Random access (with [int] or [string])
- ToString() returns JSON data with optional "pretty" flag to include newlines and tabs
- Switch between double and float for numeric storage depending on level of precision needed (and to ensure that numbers are parsed/stringified correctly)
- Supports Infinity and NaN values
- JSONTemplates static class provides serialization functions for common classes like Vector3, Matrix4x4
- Object pool implementation (experimental)
- Handy JSONChecker window to test parsing on sample data
It should be pretty obvious what this parser can and cannot do. If anyone reading this is a JSON buff (is there such a thing?) please feel free to expand and modify the parser to be more compliant. Currently I am using the .NET System.Convert namespace functions for parsing the data itself. It parses strings and numbers, which was all that I needed of it, but unless the formatting is supported by System.Convert, it may not incorporate all proper JSON strings. Also, having never written a JSON parser before, I don't doubt that I could improve the efficiency or correctness of the parser. It serves my purpose, and hopefully will help you with your project! Let me know if you make any improvements :)
Also, you JSON buffs (really, who would admit to being a JSON buff...) might also notice from my feature list that this thing isn't exactly to specifications. Here is where it differs:
- "a string" is considered valid JSON. There is an optional "strict" parameter to the parser which will bomb out on such input, in case that matters to you.
- The "Baked" mode is totally made up.
- The "MaxDepth" parsing is totally made up.
- NaN and Infinity aren't officially supported by JSON (read more about this issue... I lol'd @ the first comment on the first answer)
- I have no idea about edge cases in my parsing strategy. I have been using this code for about 3 years now and have only had to modify the parser because other people's use cases (still valid JSON) didn't parse correctly. In my experience, anything that this code generates is parsed just fine.
Encoding
Encoding is something of a hard-coded process. This is because I have no idea what your data is! It would be great if this were some sort of interface for taking an entire class and encoding its number/string fields, but it's not. I've come up with a few clever ways of using loops and/or recursive methods to cut down on the amount of code I have to write when I use this tool, but they're pretty project-specific.
Note: This section used to be WRONG! And now it's OLD! Will update later... this will all still work, but there are now a couple of ways to skin this cat.
//Note: your data can only be numbers and strings. This is not a solution for object serialization or anything like that. JSONObject j = new JSONObject(JSONObject.Type.OBJECT); //number j.AddField("field1", 0.5); //string j.AddField("field2", "sampletext"); //array JSONObject arr = new JSONObject(JSONObject.Type.ARRAY); j.AddField("field3", arr); arr.Add(1); arr.Add(2); arr.Add(3); string encodedString = j.print();
NEW! The constructor, Add, and AddField functions now support a nested delegate structure. This is useful if you need to create a nested JSONObject in a single line. For example:
DoRequest(URL, new JSONObject(delegate(JSONObject request) { request.AddField("sort", delegate(JSONObject sort) { sort.AddField("_timestamp", "desc"); }); request.AddField("query", new JSONObject(delegate(JSONObject query) { query.AddField("match_all", JSONObject.obj); })); request.AddField("fields", delegate(JSONObject fields) { fields.Add("_timestamp"); }); }).ToString());
[edit] Decoding
Decoding is much simpler on the input end, and again, what you do with the JSONObject will vary on a per-project basis. One of the more complicated way to extract the data is with a recursive function, as drafted below. Calling the constructor with a properly formatted JSON string will return the root object (or array) containing all of its children, in one neat reference! The data is in a public ArrayList called list, with a matching key list (called keys!) if the root is an Object. If that's confusing, take a glance over the following code and the print() method in the JSONOBject class. If there is an error in the JSON formatting (or if there's an error with my code!) the debug console will read "improper JSON formatting".
string encodedString = "{\"field1\": 0.5,\"field2\": \"sampletext\",\"field3\": [1,2,3]}"; JSONObject j = new JSONObject(encodedString); accessData(j); //access data (and print it) void accessData(JSONObject obj){ switch(obj.type){ case JSONObject.Type.OBJECT: for(int i = 0; i < obj.list.Count; i++){ string key = (string)obj.keys[i]; JSONObject j = (JSONObject)obj.list[i]; Debug.Log(key); accessData(j); } break; case JSONObject.Type.ARRAY: foreach(JSONObject j in obj.list){ accessData(j); } break; case JSONObject.Type.STRING: Debug.Log(obj.str); break; case JSONObject.Type.NUMBER: Debug.Log(obj.n); break; case JSONObject.Type.BOOL: Debug.Log(obj.b); break; case JSONObject.Type.NULL: Debug.Log("NULL"); break; } }
NEW! Decoding now also supports a delegate format which will automatically check if a field exists before processing the data, providing an optional parameter for an OnFieldNotFound response. For example:
new JSONObject(data); list.GetField("hits", delegate(JSONObject hits) { hits.GetField("hits", delegate(JSONObject hits2) { foreach (JSONObject gameSession in hits2.list) { Debug.Log(gameSession); } }); }, delegate(string name) { //"name" will be equal to the name of the missing field. In this case, "hits" Debug.LogWarning("no game sessions"); });
[edit] Not So New! (O(n)) Random access!
I've added a string and int [] index to the class, so you can now retrieve data as such (from above):
JSONObject arr = obj["field3"]; Debug.log(arr[2].n); //Should ouptut "3"
[edit] The Code
The code is now available on github.
[edit] Change Log
[edit] v1.4
Big update!
- Better GC performance. Enough of that garbage!
- Remaining culprits are internal garbage from StringBuilder.Append/AppendFormat, String.Substring, List.Add/GrowIfNeeded, Single.ToString
- Added asynchronous Stringify function for serializing large amounts of data at runtime without frame drops
- Added Baked type
- Added MaxDepth to parsing function
- Various cleanup refactors recommended by ReSharper
[edit] v1.3.2
- Added support for NaN
- Added strict mode to fail on purpose for improper formatting. Right now this just means that if the parse string doesn't start with [ or {, it will print a warning and return a null JO.
- Changed infinity and NaN implementation to use float and double instead of Mathf
- Handles empty objects/arrays better
- Added a flag to print and ToString to turn on/off pretty print. The define on top is now an override to system-wide disable
[edit] Earlier Versions
I'll fill these in later... | http://wiki.unity3d.com/index.php?title=JSONObject&diff=cur&oldid=17176 | CC-MAIN-2020-29 | refinedweb | 1,453 | 57.47 |
/* sched_set.c Usage: sched_set policy priority pid... Sets the policy and priority of all process specified by the 'pid' arguments. See also sched_view.c. The distribution version of this code is slightly different from the code shown in the book in order to better fix a bug that was present in the code as originally shown in the book. See the erratum for page 743 (). */ #include <sched.h> #include "tlpi_hdr.h"
int main(int argc, char *argv[]) { int j, pol; struct sched_param sp; if (argc < 3 || strchr("rfo" #ifdef SCHED_BATCH /* Linux-specific */ "b" #endif #ifdef SCHED_IDLE /* Linux-specific */ "i" #endif , argv[1][0]) == NULL) usageErr("%s policy priority [pid...]\n" " policy is 'r' (RR), 'f' (FIFO), " #ifdef SCHED_BATCH /* Linux-specific */ "'b' (BATCH), " #endif #ifdef SCHED_IDLE /* Linux-specific */ "'i' (IDLE), " #endif "or 'o' (OTHER)\n", argv[0]); pol = (argv[1][0] == 'r') ? SCHED_RR : (argv[1][0] == 'f') ? SCHED_FIFO : #ifdef SCHED_BATCH /* Linux-specific, since kernel 2.6.16 */ (argv[1][0] == 'b') ? SCHED_BATCH : #endif #ifdef SCHED_IDLE /* Linux-specific, since kernel 2.6.23 */ (argv[1][0] == 'i') ? SCHED_IDLE : #endif SCHED_OTHER; sp.sched_priority = getInt(argv[2], 0, "priority"); for (j = 3; j < argc; j++) if (sched_setscheduler(getLong(argv[j], 0, "pid"), pol, &sp) == -1) errExit("sched_setscheduler"); exit(EXIT_SUCCESS); }
Download procpri/sched_set. | http://man7.org/tlpi/code/online/dist/procpri/sched_set.c.html | CC-MAIN-2019-26 | refinedweb | 208 | 60.01 |
Type: Posts; User: happyme
OK fixed it:
SELECT * FROM
(table1 LEFT OUTER JOIN (SELECT * FROM table2 WHERE (colb = 8)) xx
ON
table1.cola = xx.cola)
Thanks for the tip VictorN, I didn't know you could use IIF in access
I'm having some trouble with VB.NET Database Explorer Query Designer and wondered if any documentation exists on this? I've googled and found nothing.
Any pointers on where to look?
ta
H
I need to get a set of rows with null values if the join conditions aren't met, which is why I've selected a left outer join.
in respoinse to VictorN's question,
Is it a xx.colb or xx.cola? ...
If I use an outer join
'on table1.cola = xx.cola AND
xx.colb = 8'
then I get a row for every row in table1 that satisfies the where clause. if there are rows where table2.colb = 8 I get those...
Hi All,
I am using VB.NET 2010 Express to query an access mdb database.
I need to join a table like this:
SELECT * FROM
table1 LEFT OUTER JOIN table2 xx ON
table1.cola = xx.cola AND...
I have a datatable called dt which includes a column called type
I have a string array called typestringarray with about 15 members
I want to create a new datatable from the rows where type is a...
OK, I know using GOTO is ugly old programming, but what is a better solution?
I give my user an OpenFileDialog and allow them to choose a file.
If I don't like the file they have chosen I give...
I am using .ToString("#,##0.00") to format a number.
I'm sure I have used this before but today it doesn't work.
I want to format a System.Double value from the datatable like this 12345.6789...
Thank you so much, problem solved
I have a datatable which has a number of rows containing data
Then I need to add some empty datacolumns which I do like this:
Dim OT_Hours As New DataColumn("OT_Hours")...
solved, using streamreader and Split:
Public Function Read_CSV(ByVal FileName As String, ByVal FilePath As String) As DataTable
'----------------------------------------------------
...
I have written some code to get the data from a number of text boxes all named in the style txb_[column name]
I want to write this data to a datatable.
The text boxes were initially filled...
I want to load a csv file (A comma delimited text file) into a datatable. I found the OdbcDataAdapter Class which looks perfect, but it says it is supported only in version 1.1 of the .NET...
Hi,
I have written a short piece of code to validate a UK NI number.
The format is 2 upper case letters, followed by 6 digits followed by A,B, C or D
eg AB123456A is OK
This code does...
I've found a work around solution:
I disable the print button until a successful print preview (where everything fits of the page) has been completed
I'm printing a datatable and once the user has pressed the print button, the QueryPageSettings event handler checks the width of the document and adjusts the paper to landscape if needed.
That bit...
Assuming your datagrids displays the contents of datatable1 and datatable2, could you use:
datatable2.ImportRow(datatable1.Rows(i))
where i the the selected row in the datagrid
then...
have a look at this excellent post by aniskhan:
he sets out how to print a data table. You can use loops to print pages in blocks of 5...
Thank you aniskhan, your code creates the correct column headings for me, but no data. Do I need to add the data row by row too?
I've got a couple of listboxes which the user can move items from one to the other. When they're done moving the items, which are column names from a datatable, I want to show only the columns which...
I would look for a low tech solution: print a line down the edges of your page with a known margin, then measure the actual location with a ruler, and compensate accordingly.
Obviously this...
Thank you aniskhan for coming to my aid again. Unfortunately I've falled at the first hurdle as I don't have the System.Data.Odbc namespace which I suspect I need. I'm using .net version 1.0.
I...
I'm planning to print the contents of a csv text file
The data is in a number of rows and columns and I've put the whole lot into a 2 dimensional array, and found out which are the widest columns....
Thank you very much for your time.
this bit:
Imports System.Drawing.Printing
Imports System.IO
I'm trying to print a few lines of text using PrintDocument
I'm using VB.NET version 1
the following line:
Dim prn As New PrintDocument
causes this error : | http://forums.codeguru.com/search.php?s=6d1deb10baa58f71042bb4f9fd3a7dd2&searchid=8424453 | CC-MAIN-2016-07 | refinedweb | 825 | 73.88 |
How to Parallelize Your Application - Part 2 Threads v Tasks
Join James Brundage, Tester from the Windows PowerShell team, and Bruce Kyle for a quick introduction to how embed PowerShell within your C# application.
See how you can easily reference the PowerShell assembly and start embedding PowerShell cmdlets inside of a C# application with PowerShell V2.
Sample Code:
using System.Management.Automation;
using System.Management.Automation.Runspaces;
/* Calls the Get-Process PowerShell command and returns the output as a string */
foreach (string str in PowerShell.Create().
AddScript("Get-Process").
AddCommand("Out-String").Invoke<string>())
{
Console.WriteLine(str);
}
NOTE: You'll need to add a reference to the version of PowerShell you have installed in your GAC to the Visual Studio project file. To do so, you'll open the project file as a text file and add the following line into the <ItemGroup> section:
<Reference Include="System.Management.Automation" />
For more news, tips, and links to developer training, see the US ISV Community blog.
One thing I'd be really intreasted in is any guidance on how to make my apps manageble by powershell, how to handle communication from the cmdlet to the app etc. At the moment I have a WCF service I call from the powershell command but am intreasted on how others do it.
Cheers,
Stephen.
Why didn't you guys use a tripod? The tiny screen in the video is very hard to read and the camera shake makes it even harder. Fortunatelly, the code was posted to the right of the video (and the subtitles helped, too), because it was nearly impossible to make it out on the screen. In the end, I think this entire exercise would have been better as a post to a blog.
How to handle the scenario of calling Exchange Server 2010 Commandlets and powershell commandlets in C# ??
Exchange Commandlets require remote runspace to be created for them to run. By doing so, we will not able to run powershell commandlets like "Where-Object" in the same runspace.
Creating a local runspace would enable us to execute powershell commandlets and not Exchange 2010 commandlets.
We can maintain separate runspaces, one local runspace to run powershell commandlets and one remote runspace to run Exchange 2010 COmmandlets.
But, How to execute a pipeline of Exchange 2010 commandlets and powershell commandlets like this: "Get-ExchangeServer | Select-Object Name" ?? | https://channel9.msdn.com/Blogs/bruceky/How-to-Embedding-PowerShell-Within-a-C-Application?format=auto | CC-MAIN-2018-13 | refinedweb | 395 | 52.7 |
Inheritance (C# Programming Guide)
Inheritance, together with encapsulation and polymorphism, is one of the three primary characteristics (or pillars) of object-oriented programming. Inheritance enables you to create new classes that reuse, extend, and modify the behavior that is defined in other classes..
Conceptually, a derived class is a specialization of the base class. For example, if you have a base class Animal, you might have one derived class that is named Mammal and another derived class that is named Reptile. A Mammal is an Animal, and a Reptile is an Animal, but each derived class represents different specializations of the base class..
The following illustration shows a class WorkItem that represents an item of work in some business process. Like all classes, it derives from System.Object and inherits all its methods. WorkItem adds five members of its own. These include a constructor, because constructors are not inherited. Class ChangeRequest inherits from WorkItem and represents a particular kind of work item. ChangeRequest adds two more members to the members that it inherits from WorkItem and from Object. It must add its own constructor, and it also adds originalItemID. Property originalItemID enables the ChangeRequest instance to be associated with the original WorkItem to which the change request applies.
The following example shows how the class relationships demonstrated in the previous illustration are expressed in C#. The example also shows how WorkItem overrides the virtual method Object.ToString, and how the ChangeRequest class inherits the WorkItem implementation of the method.
// WorkItem implicitly inherits from the Object class. public class WorkItem { // Static field currentID stores the job ID of the last WorkItem that // has been created. private static int currentID; //Properties. protected int ID { get; set; } protected string Title { get; set; } protected string Description { get; set; } protected TimeSpan jobLength { get; set; } // Default constructor. If a derived class does not invoke a base- // class constructor explicitly, the default constructor is called // implicitly. public WorkItem() { ID = 0; Title = "Default title"; Description = "Default description."; jobLength = new TimeSpan(); } // Instance constructor that has three parameters. public WorkItem(string title, string desc, TimeSpan joblen) { this.ID = GetNextID(); this.Title = title; this.Description = desc; this.jobLength = joblen; } // Static constructor to initialize the static member, currentID. This // constructor is called one time, automatically, before any instance // of WorkItem or ChangeRequest is created, or currentID is referenced. static WorkItem() { currentID = 0; } protected int GetNextID() { // currentID is a static field. It is incremented each time a new // instance of WorkItem is created. return ++currentID; } // Method Update enables you to update the title and job length of an // existing WorkItem object. public void Update(string title, TimeSpan joblen) { this.Title = title; this.jobLength = joblen; } // Virtual method override of the ToString method that is inherited // from System.Object. public override string ToString() { return String.Format("{0} - {1}", this.ID, this.Title); } } // ChangeRequest derives from WorkItem and adds a property (originalItemID) // and two constructors. public class ChangeRequest : WorkItem { protected int originalItemID { get; set; } // Constructors. Because neither constructor calls a base-class // constructor explicitly, the default constructor in the base class // is called implicitly. 
The base class must contain a default // constructor. // Default constructor for the derived class. public ChangeRequest() { } // Instance constructor that has four parameters. public ChangeRequest(string title, string desc, TimeSpan jobLen, int originalID) { // The following properties and the GetNexID method are inherited // from WorkItem. this.ID = GetNextID(); this.Title = title; this.Description = desc; this.jobLength = jobLen; // Property originalItemId is a member of ChangeRequest, but not // of WorkItem. this.originalItemID = originalID; } } class Program { static void Main() { // Create an instance of WorkItem by using the constructor in the // base class that takes three arguments. WorkItem item = new WorkItem("Fix Bugs", "Fix all bugs in my code branch", new TimeSpan(3, 4, 0, 0)); // Create an instance of ChangeRequest by using the constructor in // the derived class that takes four arguments. ChangeRequest change = new ChangeRequest("Change Base Class Design", "Add members to the class", new TimeSpan(4, 0, 0), 1); // Use the ToString method defined in WorkItem. Console.WriteLine(item.ToString()); // Use the inherited Update method to change the title of the // ChangeRequest object. change.Update("Change the Design of the Base Class", new TimeSpan(4, 0, 0)); // ChangeRequest inherits WorkItem's override of ToString. Console.WriteLine(change.ToString()); // Keep the console open in debug mode. Console.WriteLine("Press any key to exit."); Console.ReadKey(); } } /* Output: 1 - Fix Bugs 2 - Change the Design of the Base Class */ (C# Programming Guide). Interfaces (C# Programming Guide).
A derived class has access to the public, protected, internal, and protected internal members of a base class. Even though a derived class inherits the private members of a base class, it cannot access those members. However, all those private members are still present in the derived class and can do the same work they would do in the base class itself. For example, suppose that a protected base class method accesses a private field. That field has to be present in the derived class for the inherited base class method to work correctly.
A class can prevent other classes from inheriting from it, or from any of its members, by declaring itself or the member as sealed. For more information, see Abstract and Sealed Classes and Class Members (C# Programming Guide).
A derived class can hide base class members by declaring members with the same name and signature. The new modifier can be used to explicitly indicate that the member is not intended to be an override of the base member. The use of new is not required, but a compiler warning will be generated if new is not used. For more information, see Versioning with the Override and New Keywords (C# Programming Guide) and Knowing When to Use Override and New Keywords (C# Programming Guide). | https://msdn.microsoft.com/en-US/library/ms173149(v=vs.110).aspx | CC-MAIN-2015-11 | refinedweb | 955 | 57.98 |
Getting Song Lyrics in Node.js
If you're developing a music app, you probably want access to song lyrics at some point. In this post I'll show you how to get song lyrics using Node.js.
#Using the Genius API
Genius is a site that grew from providing annotated rap lyrics to annotating the whole web. They provide an easy-to-use API that unfortunately lacks some often needed functionality.
What we can do with it:
- Search their database for song "meta data"
- Get the songs for a given artist ID
- Get annotations (referents) for a song (or any web page).
What we cannot do is:
- Get an artist ID directly through some API entry point
- Get the lyrics for a song.
The second point has probably to do with licensing issues on their part. You can get parts of the lyrics through their referents/annotations endpoint. If you want the full lyrics however, you need to:
- Search for the artist, iterate through songs and get the artist id
- Get songs meta data for the artist id and extract the genius url
- Parse the genius url for the full lyrics.
Note: This post is for informational and learning purposes only. Do not violate genius' terms of service.
#Using the Genius API in Node.js
We will use the great NPM package genius-api that provides a simple interface for using the Genius API. As we don't do query for any user-specific data, the authentication part works by registering your client and getting the client access token directly from their dashboard.
To initialize
genius-api you call it with the client access token:
import Genius from 'genius-api' const accessToken = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890' const genius = new Genius(accessToken)
#Getting the artist id
We will search for the artist, iterate through the results (songs), and grab the artist id from there once there is a match:
// genius API does not have an artist entrypoint. // Instead, search for the artist => get a song by that artist => get API info on that song => get artist id Genius.prototype.getArtistIdByName = function getArtistIdByName(artistName) { const normalizeName = name => name.replace(/\./g, '').toLowerCase() // regex removes dots const artistNameNormalized = normalizeName(artistName) return this.search(artistName) .then((response) => { for (let i = 0; i < response.hits.length; i += 1) { const hit = response.hits[i] if (hit.type === 'song' && normalizeName(hit.result.primary_artist.name) === artistNameNormalized) { return hit.result } } throw new Error(`Did not find any songs whose artist is "${artistNameNormalized}".`) }) .then(songInfo => songInfo.primary_artist.id) } const genius = new Genius(accessToken) genius.getArtistIdByName('Drake') .then(artistId => { /* ... */ }) .catch(err => console.error(err))
#Getting the songs from artist id
Given the artist id, a request to
/artists/:id/songs is enough to get the songs:
genius.songsByArtist(artistId, { per_page: 50, sort: 'popularity', }) .then(songs => songs.map(song => song.url)) // has more song info like 'id', 'title', ...
#Parse the song url for the full lyrics
We will download the song URLs using node-fetch and parse the HTML with the awesome cheerio.
Genius.prototype.getSongLyrics = function getSongLyrics(geniusUrl) { return fetch(geniusUrl, { method: 'GET', }) .then(response => { if (response.ok) return response.text() throw new Error('Could not get song url ...') }) .then(parseSongHTML) } // parse.js import cheerio from 'cheerio' function parseSongHTML(htmlText) { const $ = cheerio.load(htmlText) const lyrics = $('.lyrics').text() const releaseDate = $('release-date .song_info-info').text() return { lyrics, releaseDate, } }
The lyrics live in an HTML tag called
lyrics and are really convenient to parse, same for the release data.
Now you got a way to get the lyrics for artists and can build something interesting on top of it. | http://cmichel.io/song-lyrics-in-nodejs/ | CC-MAIN-2017-47 | refinedweb | 594 | 57.87 |
In this second part of K8s-Series, I’ll explain the concept of Namespaces and when to use multiple namespaces.
Prerequisites: Watch and Read “How to Build Highly Available Kubernetes Cluster on AWS”
Topics to covered in today’s article
Before we jump into Kubernetes world. Let’s talk in general about namespace.
In a computing world, a namespace is a set of signs (names) that are used to identify and refer to objects of various kinds. …
Today, I would like to show you, how you can enable the git branches in the bash prompt in less than 4 mins.
We will find the .bashrc file in our home directory.
Q. What is .bashrc?
Ans:-
.bashrc is a Bash shell script. It initializes an interactive shell session. You can put any command in that file that you could type at the command prompt. You put commands inside the .bashrc to set up the shell for use in your particular environment, or to customize things to your preferences. A common thing to put inside the .bashrc …
In today’s blog, I am going to show you how to troubleshoot in Jenkins when you forget your username or password.
So, before we move on the issue, I would like to share this with newbie users
What is Jenkins? And, what it is used for?.
You can watch the…
In this K8-Series, I’ll be making different Kubernetes #k8s topics such as
*new topics add-ons as we go farther in this journey
This interactive article assumes you have some basic knowledge of Containerization, how to operate with AWS as Cloud Providers (or other cloud providers such as GCP, Azure etc.), how to launch your nodes/servers and have some basic understanding of Linux commands.
Support the author: Mohit Sharma
After a long delay, I come back again with another article called “Run a serverless code using AWS lambda.”
In this article, you will learn the basics of running code on AWS Lambda without provisioning or managing servers. Let’s start with an intro.
AWS Lambda is an event-driven, serverless computing platform provided by Amazon as a part of Amazon Web Services. It is a computing service that runs code in response to events and automatically manages the computing resources required by that code. …
Hello friends, today I am going to tell you the way by just seeing the Dataset how would you know which model I have to choose.
So, let’s get started ….!
What is Data-set?
A data set (or data-set) is a collection of data. Most commonly a data set corresponds to the contents of a single database table, or a single statistical data matrix, where every column of the table represents a particular variable, and each row corresponds to a given member of the data set in question.
Let’s see some data-set which is in the form of a .csv file…
In this article,. …
I am back with the seaborn tutorial. Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics.
To… | https://imoisharma.medium.com/?source=post_internal_links---------0---------------------------- | CC-MAIN-2021-31 | refinedweb | 522 | 72.87 |
The XPath 2.0 Data Model
February 2, 2005
In everything I've written in this column so far about XSLT 2.0, I've been cherry-picking—taking fun new features and plugging them into what were otherwise XSLT 1.0 stylesheets in order to demonstrate these features. As XSLT 2.0 and its companion specification XQuery 1.0 approach Recommendation status, it's time to step back and look at a more fundamental difference between 2.0 and 1.0: the underlying data models. A better understanding of the differences gives you a better understanding of what you can get out of XSLT 2.0 besides a wider selection of function calls.
The current XSLT 2.0 Working Draft's short Data Model section opens by saying, "The data model used by XSLT is the XPath 2.0 and XQuery 1.0 data model, as defined in [the W3C TR XQuery 1.0 and XPath 2.0 Data Model document]. XSLT operates on source, result, and stylesheet documents using the same data model." The XSLT 2.0 Data Model section goes on to describe a few details about issues such as white space and attribute type handling, but these concern XSLT processor developers more than stylesheet developers. The "XQuery 1.0 and XPath 2.0 Data Model" document that it mentions is what the majority of us really want to look at.
Before looking more closely at this document, however, let's back up for some historical context. Many people felt that the original XML 1.0 Recommendation released in February of 1998 failed to describe a rigorous data model. This raised the possibility of different applications interpreting document information differently, increasing the danger of incompatibilities. To remedy this, the W3C released the XML Information Set Recommendation (also known as "the infoset") in October of 2001. It described the exact information to expect from an XML document more formally and less ambiguously than the original 1998 XML Recommendation did.
Like the XSLT 2.0 spec, the XSLT 1.0 Recommendation includes a Data Model section that describes its basic dependence on the XPath data model (in this case, XPath 1.0) before going on to describe a few new details. The XPath 1.0 Recommendation's Data Model section, which is about six pages when printed out, provides a subsection for each possible node type in the XPath representation of an XML tree: root nodes, element nodes, text nodes, attribute nodes, namespace nodes, processing instruction nodes, and comment nodes. (Remember that there are different ways to model an XML document as a tree—for example, a DOM tree, an entity structure tree, an element tree, and an XPath tree—so certain basic tree structure ideas such as "root" will vary from one model to another.) This spec also includes a brief, non-normative appendix that describes how you can create an XPath 1.0 tree from an infoset, so that no guesswork is needed regarding the relationship of the XPath 1.0 data model to the official model describing the information to find in an XML document.
The Data Model sections of these W3C technical reports are all fairly short. When I printed the latest draft of the new XQuery 1.0 and XPath 2.0 Data Model document, it added up to 90 pages, although nearly half consist of appendices that restate earlier information in more mechanical or tabular form. The document has a reputation for being complex, but once you have a good overview of its structure, the complexity appears more manageable.
The Data Model document's Introduction tells us that it's based on the infoset with two additions: "support for [W3C] XML Schema types" and "representation of collections of documents and of complex values."
This talk of "complex values" makes the second addition sound more complicated, but it's actually simpler, so I'll cover it first. This part of the data model has actually simplified a messier aspect of XPath 1.0, and we've already seen the payoff in an earlier column on using temporary trees in XSLT 2.0. Here are the key points, including some direct quotes from the document:
- "Every instance of the data model is a sequence." (One class of "pervasive changes" from XSLT 1.0 to 2.0 is "support for sequences as a replacement for the node-sets of XPath 1.0.")
- "A sequence is an ordered collection of zero or more items."
- All items are either atomic values (like the number 14 or "this string") or a node of one of the types listed above: element node, attribute node, text node, and so forth. The choice of seven types now lists "document node" instead of "root node," presumably because temporary trees can have root nodes that lack the special properties found in the root node of an actual document tree.
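A minimal sketch of what these rules mean in practice (the values here are my own illustration, not from the spec):

```xslt
<!-- A sequence can mix atomic values; a single item is a one-item sequence. -->
<xsl:variable name="s" select="(1, 'two', 3.0)"/>

<!-- Because one sequence can't contain another, nested parentheses
     flatten out: (1, (2, 3), 4) is the same sequence as (1, 2, 3, 4). -->
<xsl:value-of select="count((1, (2, 3), 4))"/>  <!-- outputs 4, not 3 -->
```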
Sequences
There's nothing special that must happen to a node for it to be considered part of a sequence; a single item is treated as a sequence with one item in it. One sequence can't contain another sequence, which simplifies things: a sequence's items are the node items and atomic value items in that sequence, period. Because an XPath expression describes a set of nodes, the value of that XPath expression is a sequence.
The idea of a "sequence constructor" comes up often in XSLT 2.0—the spec uses the term about 300 times. The "Sequence Constructors section defines one as "a sequence of zero or more sibling nodes in the stylesheet that can be evaluated to return a sequence of nodes and atomic values." In 2.0, a template rule contains a sequence constructor that gets evaluated to return a sequence for use in your result tree, a temporary tree, or wherever you like. Variables, stylesheet-defined functions, and elements such as xsl:for-each, xsl:if, and xsl:element are all defined in the specification as having sequence constructors as their contents.
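As a sketch of the idea (the variable and element names are illustrative, not from the spec), the content of the `xsl:variable` below is a sequence constructor; evaluating it yields a sequence containing two atomic values and an element node:

```xslt
<xsl:variable name="mixed" as="item()*">
  <!-- Each child instruction contributes items to the result sequence. -->
  <xsl:sequence select="1, 'two'"/>
  <xsl:element name="three"/>
</xsl:variable>
```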
A node can have several properties, such as a node name, a parent, and children. Not all node types have the same properties; for example, a document node has no parent property. The data model document talks a lot about retrieving the values of node properties with "accessors," which are abstract versions of functions that represent ways to get the values of node properties. For example, the dm:parent accessor returns the node that is the parent of the node whose parent property you want to know about. (The data model document uses the "dm:" prefix on all accessor functions without declaring a namespace URI for it, because these aren't real functions. If you want real functions, see the XQuery 1.0 and XPath 2.0 Functions and Operators document.)
A tree consists of a node and all the nodes that you can reach from it using the dm:children, dm:attributes, and dm:namespaces accessors. If a tree's root node is a document node, the tree is an XML document. If it's not, the tree is considered to be a fragment. This idea of tree fragments is an improvement over the XSLT 1.0 data model, with its concept of Result Tree Fragments, because the operations allowed on Result Tree Fragments were an often-frustrating subset of those permitted on node-sets. The ability to perform the same operations on a 2.0 source tree, a temporary result tree, a subtree of either, or a temporary tree created on the fly or stored in a variable gives you a lot more flexibility, because you can apply template rules to the nodes of any of them.
The new data model offers more ways to address a collection of documents than the XPath 1.0 model did. XSLT 2.0 offers several ways to create a sequence. One simple XPath 2.0 way is to put a comma-delimited list of sequence items inside of parentheses, as I demonstrated in an earlier column on writing your own functions in XSLT 2.0. The list's items can even be entire documents, as shown in the following xsl:value-of instruction:
<xsl:value-of
The outer parentheses belong to the reverse function, which expects a sequence as an argument, and the next parentheses in from those create a sequence of documents, with each document being pulled in with a call to the document function. (I used the reverse call to test whether my XSLT 2.0 processor would treat the list of documents as a proper sequence.)
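A sketch of the shape the instruction just described takes; the three document URIs here are hypothetical placeholders, not the ones the column actually used:

```xml
<!-- a.xml, b.xml, and c.xml are placeholder URIs for illustration. -->
<xsl:value-of select="reverse((document('a.xml'),
                               document('b.xml'),
                               document('c.xml')))"/>
```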
The new collection function returns a collection of documents. The argument to pass to it (for example, the name of a directory containing documents, or a document listing document URIs) depends on the implementation.
XSLT and W3C Schema Types
The Data Model document's Types section tells us that "the data model supports strongly typed languages such as [XPath 2.0] and [XQuery] that have a type system based on [Schema Part 1]." Note that it's not based on "Schema Part 2," the Datatypes part of the W3C Schema specification but on XML Schema Part 1: Structures, which includes Part 2. Part 2 defines built-in types such as xs:integer, xs:dateTime, and xs:boolean. It also lets you define new simple types by restricting the existing ones (for example, an employeeAge type defined as an integer between 15 and 70), and Part 1 builds on that by letting you define complex types that have structure: content models (for example, an article type consisting of a title followed by one or more para elements) and attributes.
An XSLT 2.0 stylesheet can use the typing information provided by a schema to ensure the correctness of a source document, of a result document, and even of temporary trees and expressions in the stylesheet itself. You can declare stylesheet functions to return values of a certain type, and you can declare parameters and values to be of specific types so that attempts to create invalid values for them will result in an error. These features help you find data and stylesheet errors earlier and with more helpful error messages.
Another nice feature of type-aware XSLT 2.0 processors is their ability to process source nodes by matching against their type. For example, a stylesheet can include a template rule that processes all elements and attributes of type xs:date.
I mentioned above how the XPath 1.0 spec includes an appendix that describes the relationship of its data model to the infoset's data model. The XQuery 1.0 and XPath 2.0 Data Model document includes detailed descriptions of how to map its data model to an infoset and how to create each component of the data model from an infoset and from a PSVI. The latter is a huge job, accounting for a good portion of the new data model spec's length, because a Post Schema Validation Infoset can hold more information than a pre-validation infoset. A W3C schema can include information about typing, defaults, and more, so validation against that schema can turn an element such as <length>2</length> into <length unit="in">2.0</length> along with the associated information that the "2.0" represents a decimal number. In XPath 2.0, the type-name node property stores the name of the data type declared for the value in the schema, if available from a validation stage of the processing, and "xdt:untyped" otherwise. The rules for determining the value of the type-name property from an associated W3C schema are laid out in the Mapping PSVI Additions to Type Names section of the Data Model document, but be warned—only a big fan of W3C schemas could enjoy reading it.
Things are a bit simpler when the types that may come up in your XSLT processing are limited to the Schema Part 2 types. Without having your stylesheet refer to content models and other aspects of complex types, it can be handy to identify some values as integers, some as booleans, and some as URIs. The Data Model document adds five new types to the 19 primitive types defined in the Part 2 Recommendation: the xdt:untyped one mentioned above and the xdt:untypedAtomic "type," which also serves as more of an annotation about typing status than as the name of an actual type; xdt:anyAtomicType, an abstract type that plugs a newly-discovered architectural hole; and xdt:dayTimeDuration and xdt:yearMonthDuration, two types that offer totally ordered ways to measure elapsed time. That is, a sequence of values of either of these types can be properly sorted, which wasn't always the case with the duration types offered in the Schema Part 2 specification—comparing "one month" with "30 days" wouldn't always give you a clear answer.
For now, remember that if you completely ignore type-aware XSLT 2.0 processing—which will be a perfectly valid approach for much XSLT 2.0 development—the big difference between the XSLT 1.0 and 2.0 data models is that a single node, a complete document, a subtree representing a fragment of a document, and any other set of nodes described by an XPath expression are all sequences, and that much of the processing is now described in terms of these sequences. From a practical standpoint, an XSLT 1.0 stylesheet with the version attribute of its xsl:stylesheet element set to "2.0" and a few new functions called here and there will still work, as we've seen in my earlier columns on XSLT 2.0. Also remember that the new Data Model document lays the groundwork for not only XPath 2.0 (and therefore XSLT 2.0), but also XQuery, so an appreciation of sequences will help you learn XQuery more quickly.
In a future column, I'll demonstrate what type-aware XSLT processing adds to your stylesheet and its handling of source and result documents.
UTMP(5) File Formats UTMP(5)
NAME
     utmp, wtmp, lastlog - login records

SYNOPSIS
     #include <utmp.h>

DESCRIPTION
     A time change is recorded in the wtmp file as a pair of entries: the
     character | indicates the time prior to the change, and the character {
     indicates the new time.
FILES
     /var/run/utmp     The utmp file.
     /var/log/wtmp     The wtmp file.
     /var/log/lastlog  The lastlog file.
BUGS
     This man page is not quite right for the GNO implementation (the
     structures are incorrect, as is the utmp path).
SEE ALSO
     last(1), login(1), who(1), ac(8), init(8)
HISTORY
     A utmp and wtmp file format appeared in Version 6 AT&T UNIX. The
     lastlog file format appeared in 3.0BSD.

GNO                           17 September 1997                        UTMP(5)
std::ostrstream::~ostrstream
Destroys a std::ostrstream object, which also destroys the member std::strstreambuf, which may call the deallocation function if the underlying buffer was dynamically-allocated and not frozen.
Parameters
(none)
Notes
If str() was called on a dynamic ostrstream and freeze(false) was not called after that, this destructor leaks memory.
Example
#include <strstream>
#include <iostream>

int main()
{
    {
        std::ostrstream s; // dynamic buffer
        s << 1.23;
        std::cout << s.str() << '\n';
        s.freeze(false);
    } // destructor called, buffer deallocated

    {
        std::ostrstream s;
        s << 1.23;
        std::cout << s.str() << '\n';
        // buf.freeze(false);
    } // destructor called, memory leaked
}
Output:
1.23
1.23
Pier Fumagalli updated COCOON-1175:
-----------------------------------
Reverting status to "NEEDFORINFO" as this was not imported from BugZilla.
> xslt extension functions with bsf broken due to xalan-2.6 / bsf-2.3 conflict
> ----------------------------------------------------------------------------
>
> Key: COCOON-1175
> URL:
> Project: Cocoon
> Type: Bug
> Components: core
> Versions: 2.1.5
> Environment: Operating System: All
> Platform: All
> Reporter: Bruce Robertson
> Assignee: Cocoon Developers Team
>
> xsl comprising extension functions implemented through BSF will throw an error
> indicating that the required com.ibm.bsf class cannot be found. In fact, the
> namespace of bsf has changed from com.ibm.bsf to org.apache.bsf in the switch
> from bsf-2.2.jar, which was in cocoon-2.1.4, and bsf-2.3.jar, which is provided
> in the bsf block of cocoon-2.1.5 It seems that xalan-2.6, a core component of
> cocoon-2.1.5, was compiled against the com.ibm.bsf version, and this causes the
> error.
> Temporary solution is to add a bsf-2.2.jar file to build/webapp/WEB-INF/lib. In
> the future, perhaps testing should be done for extension functions in xslt.
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
-
- For more information on JIRA, see:
Closed Bug 1297867 Opened 6 years ago Closed 6 years ago
Measure the amount of scrolling in "session fragment"
Categories
(Core :: Graphics, defect, P3)
Tracking
()
mozilla53
People
(Reporter: Harald, Assigned: rhunt)
References
(Blocks 1 open bug)
Details
(Whiteboard: [gfx-noted])
Attachments
(1 file, 8 obsolete files)
This is about implementing tracking of the amount scrolled in a session fragment. Still requires agreement on which unit to track.
Tracked in
:kats for input on unit and implementation
Flags: needinfo?(bugmail)
What data is needed from this probe? Should this e.g. record the scroll amount after each continuous scroll interaction ends into an exponential histogram? Or is this just looking for the overall scroll amount per subsession?
> Should this e.g. record the scroll amount after each continuous scroll interaction ends into an exponential histogram?

Overall scroll amount. The per-scroll amount will be too granular and might more reflect how users consume content (scrolling while reading or looking at products).

Waiting for :kats if the measure should be relative to window size or some pixel density independent unit.
I would lean towards an absolute measure in CSS pixels. Window sizes can vary wildly based on screen size and user preferences, whereas the CSS pixels scrolled on a page is relatively more stable since it's a function of the website. Over time as displays get larger we might see relative measures shrink even though engagement remains the same. I guess actually the same could be said of webpages, so maybe we should measure the scroll amount as a percentage of page length. Also we should clarify which scrolls are important here: just scrolling on the root scrollable of a page, or do we also care about scrolling iframes, scrollable divs, textareas, etc.? Do we want to cover all the different scroll methods (wheel, keyboard, scrollbar dragging, touch, etc) or restrict to specific input methods? We have a probe which measures the number of scrolls using different input methods (see SCROLL_INPUT_METHODS) but those are just a count of how many times the scroll was invoked using that method, and ignores how many pixels were scrolled each time.
Flags: needinfo?(bugmail)
>.

> Also we should clarify which scrolls are important here: just scrolling on the root scrollable of a page, or do we also care about scrolling iframes, scrollable divs, textareas, etc.?

For the sake of including all content that behaves "scrollable", I would include iframes and overflow-scroll elements but exclude inputs.

> Do we want to cover all the different scroll methods (wheel, keyboard, scrollbar dragging, touch, etc) or restrict to specific input methods?

I don't see a reason to restrict any methods. They might add different amounts of scroll, but when a user wants to get somewhere on a page he will need to scroll the same distance.
Flags: needinfo?(bugmail)
(In reply to Harald Kirschner :digitarald from comment #6)
> >.

vh is effectively the same as window size, so what I said in comment 5 still applies to that. You're right though that we don't know the page height during loading. We could just record absolute CSS pixels scrolled then?

> For the sake of including all content that behaves "scrollable", I would
> include iframes and overflow-scroll elements but exclude inputs.

Sounds reasonable.

> I don't see a reason to restrict any methods. They might add different
> amount of scroll, but when a user wants to get somewhere on a page he will
> need to scroll the same distance.

Also sounds reasonable.
Flags: needinfo?(bugmail)
> We could just record absolute CSS pixels scrolled then?

For simplification this makes the most sense. We can later analyze longitudinally to test the idea of screen size skewing the data.
Flags: needinfo?(bugmail)
Ok. Do you have somebody in mind to work on this already? Or should I try to find some cycles for it?
Flags: needinfo?(bugmail) → needinfo?(hkirschner)
> Do you have somebody in mind to work on this already? Or should I try to find some cycles for it?

You got recommended by a bunch of people. It would be invaluable to have you look into this.
Flags: needinfo?(hkirschner) → needinfo?(bugmail)
:kats, while it seems to be a rather straightforward metric from the outside, how do you estimate the complexity and effort involved?
There are two broad options for how to do this:

1) Instrument a codepath like ScrollFrameHelper::ScrollToWithOrigin, which all scrolling goes through, and then exclude scrolls from JS and for form inputs and so on.

2) Instrument the different places that receive user-driven scrolling, somewhat like we do for the SCROLL_INPUT_METHODS probe, and again ensure that we exclude form inputs and don't double-count anything or miss anything.

Both of these options have some inherent complexity in terms of making sure we include and exclude the right things. I would lean towards option (1) even though that also involves adding additional code in various places to distinguish between JS and non-JS scrolling. I'd estimate that it would probably take about a week to get done.
Flags: needinfo?(bugmail)
:milan, could you prioritize this? We are planning to use this as a core KPI for performance work, but it will require validation beforehand to baseline, which might take some iterations.
Flags: needinfo?(milan)
I'll take it into trello and get back to you.
Flags: needinfo?(milan)
r? for data collection process
Flags: needinfo?(rweiss)
Would you recommend measuring this on the platform or the frontend side?
Flags: needinfo?(bugmail)
I would lean towards platform.
Flags: needinfo?(bugmail)
First pass: Total number of CSS pixels scrolled in the session of Firefox. Histogram, probably exponential, not normalized for the length of the session.
Assignee: nobody → rhunt
Ryan, let's see if we can do this in 52, but 53 for sure.
Priority: P3 → P2
Should this bug live in Core::Graphics?
Flags: needinfo?(milan)
Whiteboard: [measurement:client]
I'll move it, and if Harald needs it elsewhere, he can move it back.
Component: Telemetry → Graphics
Flags: needinfo?(milan)
Priority: P2 → P3
Product: Toolkit → Core
Whiteboard: [gfx-noted]
Taking it out of the engagement bucket for now as it tracks performance related interaction.
Flags: needinfo?(rweiss)
(In reply to Harald Kirschner :digitarald from comment #22)
> Taking it out of the engagement bucket for now as it tracks performance
> related interaction.

Harald, what is the "engagement bucket"? Is scrolling distance still part of the Quantum release criteria?
Flags: needinfo?(hkirschner)
Engagement metrics were in the telemetry component and their data review process is currently being reworked.
Flags: needinfo?(hkirschner)
I've been working on a prototype of this and I think it's ready. The general approach that I took was to instrument all the different input event handlers that we care about: option 2 of the ones :kats listed. I looked into option 1 and I think it is doable, but a lot of call sites would have to be updated and I'm not sure if I want to put a requirement on most of Gecko to know whether the scroll they're requesting is input related or not.

The basic types of input tracked, and where they're tracked:

1. Synchronous mouse scrolling (EventStateManager)
2. Asynchronous mouse/pan/fling scrolling (APZCallbackHelper)
3. Page Up, Page Down, Left, Right, Up, Down, Home, End keys (nsPresShell)
4. Dragging the scrollbar (nsSliderFrame)
5. Clicking the scrollbar track (shift click and normal) (nsSliderFrame)
6. Clicking the scrollbar arrow (nsScrollbarButtonFrame)

I've done simple testing with and without APZ enabled and with and without scroll-behavior: smooth. I went to wikipedia and measured the page in CSS pixels, then did variations of all input methods to get to the bottom and then checked to see if the numbers added up. I also tested starting a scroll in JS to make sure that it doesn't count.
This is plumbing to allow us to track whether an APZ callback was originating from input or from something else like a javascript smooth scroll or a scroll snap.
Attachment #8811934 - Flags: review?(bugmail)
Just a helper function for converting units
Attachment #8811935 - Flags: review?(bugmail)
Data policy review.
The code to do the actual scroll tracking. All the different locations for the probes and explanations are in the comment before the patches.
Attachment #8811940 - Flags: review?(bugmail)
Comment on attachment 8811939 [details] [diff] [review] Part 3: Add a telemetry histogram for the total css pixels scrolled Additionally: when should this metric expire in? I put in never only as a placeholder.
Attachment #8811939 - Flags: feedback?(hkirschner)
Comment on attachment 8811935 [details] [diff] [review]
Part 2: Add a function for converting scalar app units to CSS pixels

Review of attachment 8811935 [details] [diff] [review]:
-----------------------------------------------------------------

r=me with the change below

::: layout/base/Units.h
@@ +193,5 @@
> struct CSSPixel {
>
>   // Conversions from app units
>
> + static float FromAppUnits(const float aScalar) {

s/float aScalar/nscoord aScalar/

App units are by definition not floating point, so it doesn't make sense to have a floating point app unit. nscoord is the type used for uni-dimensional app units. It's also the argument type for NSAppUnitsToFloatPixels.

@@ +221,5 @@
>   NSAppUnitsToFloatPixels(aMargin.bottom, float(AppUnitsPerCSSPixel())),
>   NSAppUnitsToFloatPixels(aMargin.left, float(AppUnitsPerCSSPixel())));
> }
>
> + static int32_t FromAppUnitsRounded(const float aScalar) {

Ditto here, aScalar should be an nscoord.
Attachment #8811935 - Flags: review?(bugmail) → review+
Comment on attachment 8811939 [details] [diff] [review] Part 3: Add a telemetry histogram for the total css pixels scrolled Review of attachment 8811939 [details] [diff] [review]: ----------------------------------------------------------------- > Additionally: when should this metric expire in? I put in never only as a placeholder. Never is expected as we want to use this as fine grained engagement metric.
Attachment #8811939 - Flags: feedback?(hkirschner) → feedback+
Comment on attachment 8811940 [details] [diff] [review]
Part 4: Track the amount of CSS pixels scrolled by user input in a subsession

Review of attachment 8811940 [details] [diff] [review]:
-----------------------------------------------------------------

So overall this looks ok. However, given the amount of code duplication and having to handle early-exit conditions (particularly in nsSliderFrame), I'd suggest creating an RAII class (you can put it in gfx/layers/apz/util) to handle this. Initialize it with the nsIScrollableFrame pointer before the scroll happens. In the constructor, you can read the initial scroll position and in the destructor you can read the final scroll position and do the telemetry update. Then you don't have to worry about the early exit conditions as they will get handled automatically. You can add some extra braces as needed to limit the scope of the RAII instance and make sure it doesn't end up including unrelated scroll changes (although that seems unlikely anyway). The code added to APZCCallbackHelper doesn't neatly fit into the RAII pattern but all the rest should.

Also, did you verify that form input scrolling doesn't get counted by these?

::: dom/events/EventStateManager.cpp
@@ +2656,5 @@
>   nsIScrollableFrame::ScrollMomentum momentum =
>     aEvent->mIsMomentum ? nsIScrollableFrame::SYNTHESIZED_MOMENTUM_EVENT
>                         : nsIScrollableFrame::NOT_MOMENTUM;
>
> + float scrollBefore = aScrollableFrame->LastScrollDestination().y;

s/float/nscoord/, here and throughout this file, as well as the other files. If you extract to RAII then it just needs to be done there.

::: layout/xul/nsSliderFrame.cpp
@@ +565,5 @@
> {
>   SetCurrentThumbPosition(scrollbar, mThumbStart, false, false);
> + }
> + else
> + {

nit: cuddle the else with braces on a single line. This file seems to have all sorts of formatting so might as well stick to the standard Mozilla convention in new code. But really, with the RAII pattern I think you should use, this problem goes away.
Attachment #8811940 - Flags: review?(bugmail) → review-
Also you could probably just inline the changes from part 2 into the RAII class, I'm not sure if it's worth adding new wrapper methods for NSAppUnitsToFloatPixels that are just called once.
Updated with the review comments. This patch is all of the previous patches rolled into one. As for form scrolling, it depends on what you mean. If you move the cursor or expand a text selection outside of view and it's scrolled into view, that won't count. If you scroll a scrollable textarea though, that does count. Tabbing around the page doesn't count either, whether it's form elements or buttons or anything else. I hadn't thought about tabbing around much before because I usually use it with forms, but I guess you could use that for general navigation.
Attachment #8811934 - Attachment is obsolete: true
Attachment #8811935 - Attachment is obsolete: true
Attachment #8811939 - Attachment is obsolete: true
Attachment #8811940 - Attachment is obsolete: true
Attachment #8812671 - Flags: review?(bugmail)
Comment on attachment 8812671 [details] [diff] [review]
scroll-tracking.patch

Review of attachment 8812671 [details] [diff] [review]:
-----------------------------------------------------------------

r=me with comments addressed. I'll let Harald decide if the scrolling of textareas is acceptable - per comment 6 it sounded like he didn't want that.

::: dom/events/EventStateManager.cpp
@@ +2657,5 @@
>   aEvent->mIsMomentum ? nsIScrollableFrame::SYNTHESIZED_MOMENTUM_EVENT
>                       : nsIScrollableFrame::NOT_MOMENTUM;
>
>   nsIntPoint overflow;
> + {

Add a comment like so:

{ // scope the TelemetryScrollTracker

::: gfx/layers/apz/util/TelemetryScrollTracker.h
@@ +18,5 @@
> +/**
> + * A RAII class used to collect the amount a nsIScrollableFrame scrolls
> + * and reports it to Telemetry.
> + */
> +class TelemetryScrollTracker

Add MOZ_RAII between "class" and "TelemetryScrollTracker". Also #include "mozilla/Attributes.h" to get that define.

::: gfx/layers/moz.build
@@ +7,5 @@
> with Files('apz/**'):
>     BUG_COMPONENT = ('Core', 'Panning and Zooming')
>
> EXPORTS += [
> +    'apz/util/TelemetryScrollTracker.h',

put this in a new section called EXPORTS.mozilla, and include it using mozilla/TelemetryScrollTracker.h. (I try to stay away from top-level includes when possible. If it were up to me all these other files in EXPORTS would also be moved to EXPORTS.mozilla)

::: layout/xul/nsSliderFrame.cpp
@@ +566,2 @@
> {
> +  TelemetryScrollTracker tracker(scrollableFrame);

I don't think you need the extra brace here, since the block ends after the second call to SetCurrentThumbPosition anyway. If you want to keep it for extra clarity that's fine but add a comment on the opening brace as I suggested above.

@@ +609,5 @@
>
>   mozilla::Telemetry::Accumulate(mozilla::Telemetry::SCROLL_INPUT_METHODS,
>       (uint32_t) ScrollInputMethod::MainThreadScrollbarTrackClick);
>
> + {

Add comment

@@ +1317,5 @@
> +
> + {
> +   TelemetryScrollTracker tracker(scrollableFrame);
> +   PageScroll(change);
> + }

As before, don't need this block - but add a comment if you keep it.
Attachment #8812671 - Flags: review?(bugmail) → review+
Harald, do we want to include scrolling a textarea and/or tabbing between elements?
Flags: needinfo?(hkirschner)
> Harald, do we want to include scrolling a textarea and/or tabbing between elements?

No, let's limit this probe to the scrolling we know is most intentional by users.
Flags: needinfo?(hkirschner)
Updated for review comments. This patch also excludes scrolling in form inputs. The ones I could find from research and testing are:

1. <textarea>'s
2. <select>'s
   a. with default popup list
   b. with 'multiple' attribute
3. autocomplete popup list
Attachment #8812671 - Attachment is obsolete: true
Attachment #8813458 - Flags: review?(bugmail)
Comment on attachment 8813458 [details] [diff] [review] scroll-tracking.patch Review of attachment 8813458 [details] [diff] [review]: ----------------------------------------------------------------- LGTM, thanks!
Attachment #8813458 - Flags: review?(bugmail) → review+
You still need data collection review for the telemetry probe - see
Comment on attachment 8813458 [details] [diff] [review] scroll-tracking.patch Feedback for data collection review.
Attachment #8813458 - Flags: feedback?(benjamin)
Comment on attachment 8813458 [details] [diff] [review] scroll-tracking.patch The histogram description here needs to be much more precise. All of the details covered in this bug about scrolls on the toplevel page only and scrolls on the toplevel scrolling context only need to be included in the histogram description. As implemented this is an opt-in metric, which is fine with me but eventually I expect that this will need to be opt-out to answer the questions. I think that before we make this expires: never, we should prove it and make sure that it provides useful data. So I will suggest making this expires in 58 and after we've tested/correlated the data and proved that it's useful we can discuss making this permanent. Does this metric include data from private windows? I believe that per our normal policies it should not.
Flags: needinfo?(rhunt)
Attachment #8813458 - Flags: feedback?(benjamin) → feedback-. Bug 1312881 tracks just the toplevel scroll frame. But the histogram description could definitely use this explanation, so I'll add that. I'll also update the expires to be 58. And yes this does include data from private windows, so that sounds like it will need to change.
Flags: needinfo?(rhunt)
>.

Sorry for not catching this earlier; in my mind the previous discussion applied to both bugs. Could we fix this probe to behave like bug 1312881 and only track the top level frame? Otherwise we can not compare the numbers. This would also simplify the code I assume.
Flags: needinfo?(rhunt)
Yes this will probably simplify the code greatly. But just to make sure that I absolutely have this down: For both bugs, we want to only measure the scrolling that occurs on the root frame. For this bug, we want to measure the absolute amount of scrolling over the duration of a session fragment. By absolute I mean if you scroll down 100px, then up 100px, we record 200px. Whereas with bug 1312881, we want to measure the maximum distance that a page is scrolled down to, on a per page basis. So scrolling down 100px, then up 100px, we record 100px.
Flags: needinfo?(rhunt)
Thank you for summarizing it again, you got it down!
Flags: needinfo?(rhunt)
Ryan, after thinking through the private browsing issues last week I don't think we need to gate this on private browsing. We're not recording the fact that you are using private browsing, which is the bar we really care about. I hope that reduces the amount of work you have to do.
Talking with Harald on irc we decided that JS scrolls are probably not a very significant source of scrolling, and because excluding them complicates the implementation significantly we decided to not worry about it right now.
Flags: needinfo?(rhunt)
This patch adds to the browser-content.js frame script to track the amount of scrolling in a page. There's been a bit of back and forth on what that exactly is in the comments, you can read the exact definition in the histogram definition. This telemetry probe is closely related to the probe in bug 1312881, and both can be implemented in basically the same way. So I made this patch to be the first part of a two part patch, the second part will be in bug 1312881 and add a few lines onto this. I'm not sure if there is a better way to organize this besides adding another bug that contains both of these.
Attachment #8813458 - Attachment is obsolete: true
Attachment #8818037 - Flags: review?(mconley)
Comment on attachment 8818037 [details] [diff] [review]
track-scrolling1.patch

Review of attachment 8818037 [details] [diff] [review]:
-----------------------------------------------------------------

Tested that the sum is accumulated properly, and that we don't collect counts on subframes and inputs. Seems to behave as advertised. Thanks!
Attachment #8818037 - Flags: review?(mconley) → review+
Same as before, but with excluding 'about:' pages moved to apply to both probes.
Attachment #8818037 - Attachment is obsolete: true
Comment on attachment 8818626 [details] [diff] [review] scroll-tracking1.patch feedback? for data review
Attachment #8818626 - Flags: feedback?(benjamin)
Comment on attachment 8818626 [details] [diff] [review] scroll-tracking1.patch This isn't marked as opt-out, which is what I imagine Harald needs. I'm fine with this being opt-out if that's a business requirement. data-r=me either way but please confirm with Harald.
Attachment #8818626 - Flags: feedback?(benjamin) → feedback+
Harald, are we going to have both of these scroll tracking probes be opt-out?
Flags: needinfo?(hkirschner)
> Harald, are we going to have both of these scroll tracking probes be opt-out?

I am ok to keep this opt-in while we work to understand this probe as a measure for performance improvements and how much data we need to draw conclusions.
Flags: needinfo?(hkirschner)
Pushed by rhunt@eqrion.net: Add a telemetry probe to track the amount of scrolling in a page. r=mconley data-review=bsmedberg
Status: NEW → RESOLVED
Closed: 6 years ago
status-firefox53: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla53
Didin't try a FF Nightly yet but in SeaMonkey I now see an error in the log: > Timestamp: 12/23/2016 12:22:37 PM > Error: ReferenceError: event is not defined > Source File: chrome://global/content/browser-content.js > Line: 1772 Are you sure the code is ok?
Shouldn't this be: > addEventListener("DOMWindowCreated", function(aEvent) { > if (aEvent.target !== content.document...
(In reply to Frank-Rainer Grahl from comment #60) > Shouldn't this be: > > > addEventListener("DOMWindowCreated", function(aEvent) { > > if (aEvent.target !== content.document... Oh my, you're right. Something must have gone wrong when I spliced the patches. I'll have a fix as soon as possible. Sorry!
Pushed by rhunt@eqrion.net: Follow up fix for missing parameter. rs=kats
Backed out for mass test failures: Push with failures: Failure log (example): [task 2016-12-23T17:23:08.615021Z] 17:23:08 INFO - TEST-FAIL | docshell/test/navigation/test_sessionhistory.html | The author of the test has indicated that flaky timeouts are expected. Reason: untriaged [task 2016-12-23T17:23:08.618167Z] 17:23:08 INFO - Buffered messages finished [task 2016-12-23T17:23:08.620079Z] 17:23:08 INFO - TEST-UNEXPECTED-FAIL | docshell/test/navigation/test_sessionhistory.html | Should have persisted session history entry. - got false, expected true [task 2016-12-23T17:23:08.621595Z] 17:23:08 INFO - SimpleTest.is@SimpleTest/SimpleTest.js:271:5 [task 2016-12-23T17:23:08.623566Z] 17:23:08 INFO - test@docshell/test/navigation/file_scrollRestoration.html:47:13 [task 2016-12-23T17:23:08.625508Z] 17:23:08 INFO - setTimeout handler*@docshell/test/navigation/file_scrollRestoration.html:127:13 [task 2016-12-23T17:23:08.627405Z] 17:23:08 INFO - EventListener.handleEvent*@docshell/test/navigation/file_scrollRestoration.html:125:7 [task 2016-12-23T17:23:08.629491Z] 17:23:08 INFO - Not taking screenshot here: see the one that was previously logged
Flags: needinfo?(rhunt)
I backed out the original patch for causing bug 1325645, and for while I investigate the mass test failures.
Status: RESOLVED → REOPENED
Flags: needinfo?(rhunt)
Resolution: FIXED → ---
There are at least two problems with the original patch. 1. We do content.addEventListener("unload") to track the end of viewing a page. Per [1] this disables the bfcache. This caused many of the test failures. 2. The presence of event listeners directly on the page window object seems to causes test failures in devtools. The test failures were not caught beforehand because of me forgetting that I had not done a try run. My apologies. Reading [1] it seems like 'pagehide' is a better event than 'unload' for this. Additionally we need to track 'pageshow', because on navigation back from a page 'DOMWindowCreated' is not fired. [1]
This patch corrects the issues of the previous patch from the comment above. I'll add a reviewer when it's not a holiday anymore. Try run:
Attachment #8818626 - Attachment is obsolete: true
Comment on attachment 8821788 [details] [diff] [review] track-scroll-amount.patch Review of attachment 8821788 [details] [diff] [review]: ----------------------------------------------------------------- ::: toolkit/content/browser-content.js @@ +1808,5 @@ > + }, > + > + shouldIgnorePage: function() { > + return (content.location == "" || > + content.location.protocol === "about:"); no need for parens here.
Attachment #8821788 - Flags: review+
Pushed by rhunt@eqrion.net: Add a telemetry probe to track the amount of scrolling in a page. r=jaws data-review=bsmedberg
Status: REOPENED → RESOLVED
Closed: 6 years ago → 6 years ago
status-firefox53: --- → fixed
Resolution: --- → FIXED
Comment on attachment 8821788 [details] [diff] [review] track-scroll-amount.patch Approval Request Comment [Feature/Bug causing the regression]: Adding a new probe for measurement engagement with content. [User impact if declined]: None on user. Missing opportunity for us to test this probe on Beta during e10s rollout. [Is the change risky?]: Code has been on Nightly for a while and now on Aurora. [Why is the change risky/not risky?]: Telemetry probe does not touch any code but just increments a scroll counter.
Attachment #8821788 - Flags: approval-mozilla-beta?
Comment on attachment 8821788 [details] [diff] [review] track-scroll-amount.patch add telemetry probe for vertical scrolling, beta52+
Attachment #8821788 - Flags: approval-mozilla-beta? → approval-mozilla-beta+
Please don't add JS implemented telemetry probes to (reasonable) hot code paths.
(In reply to Olli Pettay [:smaug] from comment #73) > Please don't add JS implemented telemetry probes to (reasonable) hot code > paths. Hey smaug, Sorry I let this one through. :/ I thought since this event listener was "passive", its impact would be minimal.
I missed also that scrollY is accessed, and that flushes layout, which can be super slow operation. This should probably be backed out.
ni'ing rhunt for what is probably going to need to be a backout of this patch from Nightly to Beta.
Flags: needinfo?(rhunt)
Actually, clearing ni, as it looks like this is being handled in bug 1338891.
Clearing the ni, as it is being handled in bug 1338891 :)
Flags: needinfo?(rhunt) | https://bugzilla.mozilla.org/show_bug.cgi?id=1297867 | CC-MAIN-2022-27 | refinedweb | 4,164 | 57.27 |
IRC log of i18n on 2013-01-09
Timestamps are in UTC.
15:46:02 [RRSAgent]
RRSAgent has joined #i18n
15:46:02 [RRSAgent]
logging to
15:46:05 [Zakim]
Zakim has joined #i18n
15:46:17 [aphillip]
trackbot, prepare teleconference
15:46:19 [trackbot]
RRSAgent, make logs world
15:46:21 [trackbot]
Zakim, this will be 4186
15:46:21 [Zakim]
ok, trackbot; I see I18N_CoreWG()11:00AM scheduled to start in 14 minutes
15:46:22 [trackbot]
Meeting: Internationalization Working Group Teleconference
15:46:22 [trackbot]
Date: 09 January 2013
15:49:01 [aphillip]
aphillip has joined #i18n
15:49:10 [RRSAgent]
I have made the request to generate
aphillip
15:49:25 [aphillip]
rrsagent, set logs world
15:49:40 [aphillip]
Agenda:
15:49:49 [aphillip]
Chair: Addison Phillips
15:49:49 [aphillip]
Scribe: Addison Phillips
15:49:51 [aphillip]
ScribeNick: aphillip
15:49:59 [aphillip]
rrsagent, draft minutes
15:49:59 [RRSAgent]
I have made the request to generate
aphillip
15:50:44 [aphillip]
rrsagent, draft minutes
15:50:44 [RRSAgent]
I have made the request to generate
aphillip
15:51:03 [aphillip]
agenda+ KLReq
16:00:23 [Zakim]
I18N_CoreWG()11:00AM has now started
16:00:30 [Zakim]
+Addison_Phillips
16:01:15 [matial]
matial has joined #i18n
16:01:16 [r12a]
zakim, dial richard
16:01:16 [Zakim]
ok, r12a; the call is being made
16:01:17 [Zakim]
+Richard
16:02:10 [Zakim]
+[IPcaller]
16:02:15 [koji]
zakim, [ipcaller] is me
16:02:15 [Zakim]
+koji; got it
16:02:23 [RRSAgent]
I'm logging. Sorry, nothing found for 'who is here'
16:02:29 [Zakim]
+??P21
16:02:42 [matial]
zakim, ??p21 is me
16:02:42 [Zakim]
+matial; got it
16:02:55 [Norbert]
Norbert has joined #i18n
16:03:25 [Zakim]
+ +1.415.885.aaaa
16:03:44 [Norbert]
zakim, +1.415 is me
16:03:44 [Zakim]
+Norbert; got it
16:04:09 [r12a]
16:04:22 [aphillip]
Topic: Agenda and Minutes
16:04:47 [aphillip]
Topic: Action Items
16:07:06 [aphillip]
close ACTION-164
16:07:06 [trackbot]
Closed ACTION-164 Create blog entry showing our case for css caselessness.
16:08:19 [aphillip]
close ACTION-168
16:08:19 [trackbot]
Closed ACTION-168 Remind everyone to read ITS 2.0 for the 9th.
16:08:26 [aphillip]
Topic: Info Share
16:08:59 [aphillip]
addison: KLReq document received
16:09:16 [aphillip]
... temporary link to follow
16:09:19 [r12a]
16:10:43 [aphillip]
Topic: Pending reviews received over break
16:10:56 [aphillip]
16:11:10 [aphillip]
CSS3: Text-decoration
16:11:41 [Zakim]
+[IPcaller]
16:11:58 [r12a]
zakim, who's here?
16:11:58 [Zakim]
On the phone I see aphillip, Richard, koji, matial (muted), Norbert, fsasaki
16:12:01 [Zakim]
On IRC I see Norbert, matial, aphillip, Zakim, RRSAgent, trackbot, koji, fsasaki, r12a
16:12:02 [aphillip]
due by the 31st
16:12:26 [aphillip]
ACTION: addison: remind everyone to comment on css3-text-decoration by next week
16:12:27 [trackbot]
Created ACTION-169 - Remind everyone to comment on css3-text-decoration by next week [on Addison Phillips - due 2013-01-16].
16:12:59 [aphillip]
Topic: KLReq
16:13:09 [aphillip]
16:13:18 [aphillip]
addison: above is temporary link
16:13:48 [aphillip]
richard: scanned it
16:13:55 [aphillip]
... looks pretty good
16:14:04 [aphillip]
... not as detailed as the Japanese one
16:14:08 [aphillip]
... good for a first version
16:14:23 [aphillip]
... send for review and then get feedback incorporated
16:14:29 [aphillip]
... think it might need some details
16:14:46 [aphillip]
... such as wrapping rules (when, what context)... just example
16:14:50 [aphillip]
... needs English editing
16:15:01 [aphillip]
addison: also need to publish Korean version
16:15:18 [aphillip]
richard: need formal sign-off on pub through w3c
16:15:24 [aphillip]
... talk to domain lead
16:15:32 [aphillip]
... should join IG as IE
16:16:02 [aphillip]
... create IG-task-force
16:16:15 [aphillip]
... and use public-cjk list as their feedback list
16:17:09 [RRSAgent]
I have made the request to generate
aphillip
16:19:59 [David]
David has joined #i18n
16:20:43 [aphillip]
richard: suggest we convert to polyglot
16:21:30 [aphillip]
ACTION: addison: coordinate with richard regarding KLReq
16:21:31 [trackbot]
Created ACTION-170 - Coordinate with richard regarding KLReq [on Addison Phillips - due 2013-01-16].
16:21:38 [aphillip]
Topic: ITS v2 review
16:21:46 [aphillip]
16:23:07 [aphillip]
chair: IMPORTANT to read by next week
16:23:19 [aphillip]
... for call of the 16th
16:23:29 [aphillip]
felix: send comments before next week
16:24:19 [aphillip]
addison: will create item in tracker
16:24:30 [aphillip]
... please follow WG process for creating comments
16:24:38 [r12a]
16:24:49 [aphillip]
Topic: Case Sensitivity in CSS
16:25:10 [aphillip]
16:25:18 [aphillip]
16:26:10 [aphillip]
16:26:28 [aphillip]
16:26:41 [RRSAgent]
I have made the request to generate
aphillip
16:31:06 [aphillip]
16:31:15 [aphillip]
16:31:19 [r12a]
16:31:30 [r12a]
16:39:47 [Zakim]
-fsasaki
16:56:23 [aphillip]
case sensitivity: we're okay with that
16:56:36 [aphillip]
ACI for ASCII-restricted namespaces: okay
16:56:56 [aphillip]
for user identiifers that are case *IN*sensitive: only Unicode CF is acceptable
16:57:12 [aphillip]
for user identifiers that are case *SENSITIVE*, we're okay
16:57:34 [aphillip]
ACI is a special case of UniCF
16:58:14 [aphillip]
we're not okay with any "case insensitive" comparison that can contain non-ASCII characters
16:58:42 [aphillip]
... that is using ACI
17:00:01 [r12a]
s/any "case insensitive" comparison/any ACI comparison/
17:00:23 [aphillip]
if "g" == "G" then "ü" == "Ü"
17:01:56 [aphillip]
we would recommend that CSS use case sensitive comparison (universally)
17:02:16 [aphillip]
if there is a feature that is case insensitive, then it should be UniCF
17:05:29 [aphillip]
restate: we would recommend that CSS adopt case sensitive comparison going forward for all identifiers and language elements, except where legacy considerations apply
17:11:30 [aphillip]
ACI "doesn't really exist" (it's a shadow case of UniCF on an ASCII namespace)
17:14:08 [aphillip].
17:16:09 [aphillip]
rrsagent, draft minutes
17:16:09 [RRSAgent]
I have made the request to generate
aphillip
17:20:30 [aphillip]
possible exception of list-style-type
17:23:28 [aphillip]
resolved:
17:23:55 [aphillip]
- case sensitive
17:24:12 [aphillip]
- except legacy ascii-only items
17:24:22 [aphillip]
- if a new feature case insensitive, then unicf
17:24:42 [aphillip]
rrsagent, make minutes
17:24:42 [RRSAgent]
I have made the request to generate
aphillip
17:24:48 [aphillip]
ACTION; addison: communicate case sensitivity to CSS
17:24:58 [aphillip]
i18n recommends case sensitive
17:25:25 [aphillip]
spec writing guidelines
17:25:35 [aphillip]
Topic: AOB?
17:26:28 [Zakim]
-aphillip
17:26:29 [r12a]
zakim, who's here?
17:26:30 [Zakim]
On the phone I see Richard, koji, matial (muted), Norbert
17:26:30 [Zakim]
On IRC I see Norbert, matial, aphillip, Zakim, RRSAgent, trackbot, koji, r12a
17:26:31 [Zakim]
-koji
17:26:31 [Zakim]
-Norbert
17:26:33 [Zakim]
-matial
17:26:41 [r12a]
zakim, drop richard
17:26:41 [Zakim]
Richard is being disconnected
17:26:43 [Zakim]
I18N_CoreWG()11:00AM has ended
17:26:43 [Zakim]
Attendees were aphillip, Richard, koji, matial, +1.415.885.aaaa, Norbert, [IPcaller], fsasaki
17:26:50 [aphillip]
Present: Addison, Richard, Mati, Koji, Felix
17:26:57 [aphillip]
rrsagent, draft minutes
17:26:57 [RRSAgent]
I have made the request to generate
aphillip
17:27:58 [aphillip]
zakim, bye
17:27:58 [Zakim]
Zakim has left #i18n
17:28:06 [aphillip]
rrsagent, bye
17:28:06 [RRSAgent]
I see 2 open action items saved in
:
17:28:06 [RRSAgent]
ACTION: addison: remind everyone to comment on css3-text-decoration by next week [1]
17:28:06 [RRSAgent]
recorded in
17:28:06 [RRSAgent]
ACTION: addison: coordinate with richard regarding KLReq [2]
17:28:06 [RRSAgent]
recorded in | http://www.w3.org/2013/01/09-i18n-irc | CC-MAIN-2014-42 | refinedweb | 1,382 | 56.73 |
0
hi i have a very large array that i want to populate but i don want to run the for loops every time so i thought i would right a program to output text to fill the array and was wondering if this would work.
the array is
int array[100][12][31];
#include <iostream> #include <stdlib.h> #include <fstream> #include <time.h> using namespace std; #define getrandom( min, max ) ((rand() % (int)(((max) + 1) - (min))) + (min)) int main() { int temp, counter; srand(time(NULL)); ofstream fout("MyFile.txt"); fout << "int array[100][12][31] =" << "\n" << "{"; for (int i = 0; i < 100; i++) { for (int j = 0; j < 12; j++) { for (int k = 0; k < 31; k++) { if ((counter % 15 == 0) && (counter != 0)) fout << "\n"; // after 15 numbers go to a new line in the file temp = getrandom(1000, 9999); // used to get a number for the array fout << temp << ", "; // seperate each element with a "," and a space counter++; } } } fout << "};"; // ends the array with "};" fout.close(); return 0; }
i know this is kinda rough but i was hoping it would help, im not anywhere near being done with this program and i may need to increase the array by a factor of 100 before the end so im not sure how much time it will take to populate the array every time. i do realize that i will have to go back into the .txt to delete the last", " or the compiler will flag an error. if anyone has any ideas or sugestions it would be very much aperciated | https://www.daniweb.com/programming/software-development/threads/190130/question-about-programing-pratice-with-a-large-array | CC-MAIN-2017-26 | refinedweb | 259 | 72.09 |
Re arranging or Re ordering the row of dataframe in pandas python can be done by using reindex function. Let’s see how to
- Re arrange or re order the row of dataframe in pandas python with example
First let’s create a dataframe
import pandas as pd import numpy as np #Create a DataFrame df1 = { 'Name':['George','Andrea','micheal','maggie','Ravi', 'Xien','Jalpa'], 'Gender':["M","F","M","F","M","M","F"], 'Score':[62.7,47.7,55.6,74.6,31.5,77.3,85.4], 'Rounded_score':[63,48,56,75,32,77,85] } df1 = pd.DataFrame(df1,columns=['Name','Gender','Score','Rounded_score']) print(df1)
df1 will be
Re order the row of dataframe in pandas python
Re ordering or re arranging the row of dataframe in pandas python can be done by using reindex function and stored as new dataframe
df2= df1.reindex([2,1,0,5,6,4,3]) print(df2)
row index of the dataframe is re ordered as shown below
| http://www.datasciencemadesimple.com/re-arrange-or-re-order-the-row-of-dataframe-in-pandas-python-2/ | CC-MAIN-2020-16 | refinedweb | 165 | 57.2 |
For my first post, I figured that I would leverage a prototype that I worked on last week. I was investigating some of the issues that one of the Fab40 templates has after you upgrade the server from 2007 to 2010. I learned that the Knowledge Base template has a novel way of solving one of the problems that lookup fields have when using a document library.
Before we dig into the solution to the problem, I should first spend some time describing the problem.
When you create a lookup field, you need to choose a field from the “other” list to be used as the display value. When you are editing an item from “this” list, you will be shown a drop-down list box that allows you to choose an item from the “other” list. When each of the items from the other list are added to the drop-down list, some text value from the other item needs to be displayed here that uniquely identifies the item. Some fields work better than other fields.
For example, let’s say that you are adding a lookup field that is going to be used to select someone from a list of contacts. Using the Zip Code as the field that is displayed in the drop-down list isn’t likely to be very helpful. It would be extremely common for you to have multiple contacts that are in the same zip code. You would then see the same zip code listed more than once in the drop-down list and you would have no idea which entry represents which contact. Choose a contact would be a pretty frustrating experience. Using a field like Name would be a lot better. Granted, the Name field could also contain duplicates. But it would still be a whole lot better than a field like Zip Code which is likely to contain many duplicates.
The problem we have is when the “other” list is a document library. Which field should you choose for the lookup field to display? You could use the Title field but it doesn’t always get populated. Upload some text files into a document library and they will all wind up with an empty value for the Title field. Even if a Title value is specified for each document, this field allows duplicates which, as described above, is not good. So, is there anything about the documents in the document library that is normally going to be unique? Well, the file name. Duh! However, try creating a lookup field to a document library and you’ll notice that there is no file name field to choose. Doh!
The file name is exposed through the object model as a field named FileLeafRef. So why isn't this field available to use in lookup fields? I’m still trying to find someone that can explain that to me. If I figure it out I’ll update this post.
Ok, so now we can (finally) get back to the Knowledge Base site template that I was looking into and how it solved this problem.
One of the features that the Knowledge Base site template has is that articles written in the wiki library can each have one or more related articles. This was accomplished using a multi-value lookup field. Now, multi-valued lookup fields don’t exactly have the greatest UI experience. But they do allow the KB article author to accomplish the scenario without the KB site template developer having to invest in custom (read: expensive) list item selection UI.
But if we're using a multi-value lookup field to choose a document from a document library, what was the field that was used as the lookup field? Well, what the KB site template did was to add a text field to the document library called “Name” in conjunction with an event handler that would ensure that the value of this field always contained the actual name of the file. As wiki articles were created or modified, the event handler would always ensure that the Name field contained the right value.
Now, the way that the KB site template did this was specific to this site template and isn’t really reusable in other site templates. The concept is reusable but the specific implementation in that site template isn’t. So I created a sandboxed solution that contains a reusable implementation of this concept.
Using this solution is pretty easy:
- Upload the solution - Upload the solution to the Solution Gallery in your site collection and activate it.
- Activate the site feature - In each of the sites where you want to use this File Name field, activate the site feature named “BillGr's File Name Field”.
- Add the site column - In each document library where you want to use this field, add the site column named File Name to either the document library or the content types that are used in the document library.
Solution Implementation
In this solution, you’ll find three things:
- A site column that defines the File Name field.
- An assembly that contains a sandboxed event handler.
- A feature (which can be activated/deactivated) with two element manifests:
- One that defines the File Name field.
- One that wires-up the event handler to the ItemAdding and ItemUpdating events.
“File Name” Site Column Definition
One thing to note about the site column is that it has been defined with a very unique internal name (the Name attribute). This is to deal with field name collisions. What if a user has already defined a File Name column? What if they already have other data in that column? What if they already have business logic that is driven by the value of that field. My event handler can’t simply look for the first field named “File Name” that it finds and blindly overwrite its value with the current file name. This is one of the reasons why a field definition has both an internal name and a display name. As an aside, the other main reason for internal names is so that compiled code doesn’t need to be changed when the name of a field is localized into another language. But that’s a subject for a different post.
- <?xml version="1.0" encoding="utf-8"?>
- <Elements xmlns="">
-
- <!-- NOTE: The Name attribute here must match the value of the FileNameFieldInternalName property
- of the ListItemEvents class that is in ListItemEvents\ListItemEvents.cs -->
- <Field ID="{1511BF28-A787-4061-B2E1-71F64CC93FD5}"
- Name="BillGrFileNameField_FileName"
- DisplayName="File Name"
- Type="Text"
- Required="FALSE"
-
- </Field>
-
- </Elements>
In this case, we’ve given our field an internal name that uses the solution name as a prefix. It is highly unlikely that another solution will define another field with this same internal name. Later, our event handler will use the same internal name when accessing the field value from the list item.
Event Handler Wire-Up
When a feature definition includes an event handler, usually the event handler has a ListTemplateId or ListUrl attribute. These attributes are normally used to narrow down the scope of the event handler to a specific list type. Since I haven’t included any of these attributes, my event handler will be globally registered. It will be called whenever any item in a list or document library is added or changed – anywhere in the site.
- <?xml version="1.0" encoding="utf-8"?>
- <Elements xmlns="">
- <Receivers>
- <Receiver>
- <Name>ListItemEventsItemAdding</Name>
- <Type>ItemAdding</Type>
- <Assembly>$SharePoint.Project.AssemblyFullName$</Assembly>
- <Class>BillGr.Samples.FileNameField.ListItemEvents</Class>
- <SequenceNumber>10000</SequenceNumber>
- </Receiver>
- <Receiver>
- <Name>ListItemEventsItemUpdating</Name>
- <Type>ItemUpdating</Type>
- <Assembly>$SharePoint.Project.AssemblyFullName$</Assembly>
- <Class>BillGr.Samples.FileNameField.ListItemEvents</Class>
- <SequenceNumber>10000</SequenceNumber>
- </Receiver>
- </Receivers>
- </Elements>
Event Handler Implementation
The event handler simply ensures that the value of the File Name field is correct given the current name of the document. The code relatively simple. Whenever we’ve been notified that an item has changed, we first have to confirm that the item is in a document library. If it is a “regular” list item, then it won't have a file so there isn’t any file name to stay in sync with. Now, please, no arguments here about Attachments <grin />. SharePoint treats the primary file stream of a document library item very differently from an attachment to a list item.
In addition to confirming that the item is in a document library, the event handler also needs to confirm that the item has a “File Name” field. If it doesn’t have the field then, again, there isn’t anything that we need to do.
Once we’ve ruled out these other scenarios, all that remains is to extract the file name from the URL and then ensure that the value of the “File Name” field contains this same file name.
- using System;
- using System.IO;
- using System.Security.Permissions;
- using Microsoft.SharePoint;
- using Microsoft.SharePoint.Security;
- using Microsoft.SharePoint.Utilities;
- using Microsoft.SharePoint.Workflow;
-
- namespace BillGr.Samples.FileNameField
- {
- /// <summary>
- /// List Item Events
- /// </summary>
- public class ListItemEvents : SPItemEventReceiver
- {
-
- /// <summary>
- /// Gets a string containing the internal name of the "File Name" field.
- /// </summary>
- public static string FileNameFieldInternalName
- {
- get
- {
- // NOTE: The value used here has to match the value of the Name attribute in the field
- // definition that is in FileNameSharedField\Elements.xml.
- return "BillGrFileNameField_FileName"; // Ensure that this is globally unique by using a prefix that is not likely to be used by someone else.
- }
- }
-
- /// <summary>
- /// An item is being added.
- /// </summary>
- /// <param name="properties">An SPItemEventProperties object that represents properties of the event.</param>
- public override void ItemAdding(SPItemEventProperties properties)
- {
- ListItemEvents.UpdateFileNameField(properties);
- properties.Status = SPEventReceiverStatus.Continue;
- }
-
- /// <summary>
- /// An item is being updated.
- /// </summary>
- /// <param name="properties">An SPItemEventProperties object that represents properties of the event.</param>
- public override void ItemUpdating(SPItemEventProperties properties)
- {
- ListItemEvents.UpdateFileNameField(properties);
- properties.Status = SPEventReceiverStatus.Continue;
- }
-
- /// <summary>
- /// Helper method that ensures that the "File Name" field contains the correct value for the specified list item.
- /// </summary>
- /// <param name="properties">An SPItemEventProperties object that represents properties of the event.</param>
- private static void UpdateFileNameField(SPItemEventProperties properties)
- {
- // We can only do this if the item is in some form of a doc-lib and the content type of
- // the item includes the "File Name" field.
- if ((properties.ListItem == null) ||
- (properties.ListItem.File == null) ||
- (!properties.ListItem.Fields.ContainsField(ListItemEvents.FileNameFieldInternalName)))
- {
- return;
- }
-
- // Extract just the file name from the server-relative URL. Note: URLs with forward slashes
- // are compatible with the System.IO.Path class. We are using that here so that we don't
- // have to do any URL parsing. Any time that we try and do URL parsing on our own we
- // eventually get into trouble beause of some corner case that wasn't accounted for in
- // the parsing code. We'll avoid that problem if we leverage the Path class which has
- // already been extensively tested.
- string fileNameWithoutExtension = Path.GetFileNameWithoutExtension(properties.AfterUrl);
-
- // Ensure that the "File Name" field contains the current file name.
- properties.AfterProperties[ListItemEvents.FileNameFieldInternalName] = fileNameWithoutExtension;
- }
-
- } // class ListItemEvents
-
- }
Downloading The Sample
There are two downloads available from the MSDN Code Gallery for this sample:
- SharePoint Solution - This is a working .WSP that you can upload to the Solution Gallery for your site collection and activate it. Once you do this, the “BillGr's File Name Field” feature will be available to activate in all sites of your site collection. After activating the feature, you can add the site column named “File Name” to a document library to start getting updated file names. You'll find this site column in the “Custom Columns” group.
- Visual Studio Project - This is the VS 2010 SharePoint solution that was used to generate the solution above.
As with any of my other samples, you are free to use the solution in your site or take the source code and leverage it in some other solution that you are working on. Just keep in mind that this is a sample and, as such, has no warranty, no guarantee that it will work in all environments, and no promise that a future version of SharePoint won't break it.
Is this WSP for SharePoint 2007 or SharePoint 2010?
The concept works on either SharePoint 2007 or SharePoint 2010. The sample that you can download is a sandboxed solution for use with SharePoint 2010.
can you get the filename to display in a column for a task list and not a document library? Ex. A list item has an attachment and you want a column to display the name of the attachment? (working with sharepoint 2007)? | https://blogs.msdn.microsoft.com/billgr/2010/10/15/enabling-file-names-in-lookup-fields/ | CC-MAIN-2019-39 | refinedweb | 2,097 | 62.68 |
My biggest concern with the Heimdal in experimental, is glob() in libroken. To the best of my knowledge, it isn't required because libc6 glob() does everything required. I am concerned, because of the potential of the symbols conflicting with the function in libc6. The Heimdal configure script correctly detects that glob() is present in libc6, but appears to build glob.c anyway, and it also installs glob.h. A similar situation appears to exist with fnmatch, but I haven't investigated in detail. Unfortunately, solving this is pushing my automake knowledge to its limits. lib/roken/Makefile.am has: if have_glob_h glob_h = else glob_h = glob.h endif Some how that is being set in lib/roken/Makefile, despite the fact have_glob_h is also set (this is confusing me!!!) I simply cannot see where lib/roken/Makefile.am references glob.c. However, lib/roken/Makefile references glob$U.lo and glob$U.o. I asked upstream Heimdal and got no response. Any help would be much appreciated. Thanks. PS. Source code is Heimdal in Debian experimental. -- Brian May <bam@debian.org> | https://lists.debian.org/debian-devel/2005/11/msg01582.html | CC-MAIN-2015-32 | refinedweb | 181 | 53.47 |
Closed Bug 151407 Opened 19 years ago Closed 8 years ago
BiDi: Setting document.dir has no effect on <html dir="rtl">
Categories
(Core :: Layout: Text and Fonts, defect)
Tracking
()
mozilla23
People
(Reporter: kyae-young.kim, Assigned: smontagu)
References
Details
(Keywords: dev-doc-needed, rtl, Whiteboard: [oracle-nls])
Attachments
(4 files)
Env.: Mozilla 1.0, Windows NT/2000
Desc.: Flipping the LTR direction of windows is not supported.
Reproducible steps:
1) Load test case in IE 5.5
2) Click the Window Direction button
3) Window will be flipped to LTR direction
4) This is not supported in Mozilla 1.0
CONFIRMED with moz 1.0 on win98
Status: UNCONFIRMED → NEW
Ever confirmed: true
Adding regression keyword. I remember seeing something just like this working in the past.
The page that I was thinking of was, which still works. The significant difference seems to be that that page uses document.dir while the attachment here uses top.document.dir
Whiteboard: [oracle-nls]
Changing summary to be more informative; removing "regression" keyword per comment #4.
Summary: Bidi : The LTR direction of windows does not support → BiDi: Setting top.document.dir has no effect
(In reply to comment #4) > The page that I was thinking of was, > which still works. The significant difference seems to be that that page uses > document.dir while the attachment here uses top.document.dir That was wrong. The significant difference is that the attachment here has <HTML dir="rtl"> and the page at just has <HTML>.
Summary: BiDi: Setting top.document.dir has no effect → BiDi: Setting document.dir has no effect on <html dir="rtl">
The same is true in reverse: <html dir="rtl"> does not set document.dir to rtl
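Both directions of the problem can be demonstrated with a minimal testcase along these lines (a hypothetical sketch written for illustration, not one of the attachments on this bug):

```html
<!DOCTYPE html>
<html dir="rtl">
  <head><title>document.dir testcase</title></head>
  <body>
    <p>Watch which edge this paragraph aligns to.</p>
    <script>
      // With this bug: reading document.dir does not return "rtl",
      // even though the root element has dir="rtl".
      alert(document.dir);
      // With this bug: setting document.dir has no visible effect;
      // the page should flip to left-to-right.
      document.dir = "ltr";
    </script>
  </body>
</html>
```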
*** Bug 256470 has been marked as a duplicate of this bug. ***
OS: Windows NT → All
Hardware: PC → All
Mass-assigning the new rtl keyword to RTL-related (see bug 349193).
Component: Layout: BiDi Hebrew & Arabic → Layout: Text
QA Contact: zach → layout.fonts-and-text
Assignee: mozilla → nobody
This bug is now over a decade old. Confirming error on OS X in Firefox. The behaviour has become so well-known that it has now reached the documentation, which says: The dir IDL attribute on Document objects must reflect the dir content attribute of the html element, if any, limited to only known values. If there is no such element, then the attribute must return the empty string and do nothing on setting.
A workaround is to get or set: document.body.parentNode.dir instead of: document.dir
(In reply to David Baron [:dbaron] (don't cc:, use needinfo? instead) from comment #11)
> html#dom-document-dir says:
> The dir IDL attribute on Document objects must reflect the dir content attribute of the html element, if any, limited to only known values. If there is no such element, then the attribute must return the empty string and do nothing on setting.

Hmm, given that, this is a valid bug, right? The problem is that nsIDocument::SetDir just doesn't set the attribute on the root element; it just uses SetDirectionality on it.
Component: Layout: Text → DOM: Core & HTML
I believe this makes our behaviour compatible with the HTML5 spec. The patch removes the directionality property on nsIDocument, which is now redundant, and makes the dir IDL attribute on documents reflect the dir attribute of the html element, rather than the root element (this makes an existing test fail; see the following patch). It also stops using the bidi.direction config option to determine the default direction, in response to various comments. I'll remove other uses of the option in a follow-up bug. Note that document.dir can now return four possible values: "auto", "ltr", "rtl", or the empty string. Previously it would always return either "ltr" or "rtl".
Assignee: nobody → smontagu
Attachment #740091 - Flags: review?(ehsan)
Comment on attachment 740091 [details] [diff] [review] Patch Review of attachment 740091 [details] [diff] [review]: ----------------------------------------------------------------- Nice!
Attachment #740091 - Flags: review?(ehsan) → review+
Comment on attachment 740092 [details] [diff] [review] Changes to existing tests Review of attachment 740092 [details] [diff] [review]: ----------------------------------------------------------------- lgtm
Attachment #740092 - Flags: review?(Ms2ger) → review+
Comment on attachment 740091 [details] [diff] [review] Patch Review of attachment 740091 [details] [diff] [review]: ----------------------------------------------------------------- ::: content/base/src/nsDocument.cpp @@ +6253,5 @@ > return NS_OK; > } > > void > +nsIDocument::GetDir(nsAString& aDirection) I'd prefer making GetHtmlElement() const. @@ +6283,5 @@ > return rv.ErrorCode(); > } > > void > nsIDocument::SetDir(const nsAString& aDirection, ErrorResult& rv) Drop the ErrorResult here and the [SetterThrows] from Document.webidl
Component: DOM: Core & HTML → Layout: Text
Flags: in-testsuite+
Was there a reason to not just call GetDir on the root element instead of doing a GetAttr and then comparing to a copy of the dir enum table?
only that Element doesn't have GetDir
would this have been better?
Attachment #740656 - Flags: review?(bzbarsky)
(In reply to comment #22) > Created attachment 740656 [details] [diff] [review] > --> > GetDir on root elelement > > would this have been better? Do we know that the root element is always going to be an HTML element?
We know that it's an <html> element. Is that equivalent?
(In reply to :Ehsan Akhgari (needinfo? me!) from comment #23) > (In reply to comment #22) > > Created attachment 740656 [details] [diff] [review] > > --> > > GetDir on root elelement > > > > would this have been better? > > Do we know that the root element is always going to be an HTML element? We know that it's an html element in the HTML namespace, so, yes. I guess we could make GetHtmlElement() return something more specific, but I don't know if all callers want to include nsGenericHTMLElement.h.
Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla23
Comment on attachment 740656 [details] [diff] [review] GetDir on root elelement Yes, perfect. Thank you! And yes, GetHtmlElement always returns an <html:html> or null, so this is safe.
Attachment #740656 - Flags: review?(bzbarsky) → review+
needs documentation at least here and probably elsewhere
Keywords: dev-doc-needed | https://bugzilla.mozilla.org/show_bug.cgi?id=151407 | CC-MAIN-2021-04 | refinedweb | 988 | 57.87 |
I'm trying to remove some words from a string with a regex in the program below. It removes them properly, but the match is only case-sensitive. How can I make it case-insensitive? I put (?i) in the replaceAll pattern:
package com.test.java;
public class RemoveWords {
public static void main(String args[])
{
// assign some words to string
String sample ="what Is the latest news today in Europe? is there any thing special or everything is common.";
System.out.print(sample.replaceAll("( is | the |in | any )(?i)"," "));
}
}
what Is latest news today Europe? there thing special or everything common.
You need to place the (?i) before the part of the pattern that you want to make case insensitive:
System.out.print(sample.replaceAll("(?i)\\b(?:is|the|in|any)\\b"," ")); ^^^^
I've replaced the spaces around the keywords to be removed with word boundaries (\\b). The problem comes because there may be two keywords one after another separated by just one space.
If you want to delete the keywords only if they are surrounded by space, then you can use positive lookahead and lookbehind as:
(?i)(?<= )(is|the|in|any)(?= ) | https://codedump.io/share/O7wKwdg4L18h/1/java-regex-case-insensitivity-not-working | CC-MAIN-2017-43 | refinedweb | 184 | 68.26 |
PowerShell is freely available through the Microsoft Windows Update Service packaged as an optional update for Windows XP SP2, Windows Vista and Windows Server 2003. It is also included with Windows Server 2008 as an optional component. Once installed, it can be started from the Start menu or simply by running "powershell.exe". Basic things you need to know:
Command-Line editing in Powershell: Command-line Editing works just like it does in cmd.exe: use the arrow keys to go up and down, the insert and delete keys to insert and delete characters and so on.
PowerShell parses text in one of two modes: command mode, where quotes are not required around a string, and expression mode, where strings must be quoted. The parsing mode is determined by what's at the beginning of the statement. If it's a command, then the statement is parsed in command mode. If it's not a command, then the statement is parsed in expression mode, as shown:
PS (1) > echo 2+2 Hi there              # command mode - starts with the "echo" command
2+2 Hi there
PS (2) > 2+2; "Hi there"                # expression mode - starts with 2
4
Hi there
PS (3) > echo (2+2) Hi (echo there)     # mixing and matching modes with brackets
4 Hi there
Commands: There are 4 categories of commands in PowerShell:
Pipelines: As with any shell, pipelines are central to the operation of PowerShell. However, instead of returning strings from external processes, PowerShell pipelines are composed of collections of commands. These commands process pipeline objects one at a time, passing each object from pipeline element to pipeline element. Elements can be processed based on properties like Name and Length instead of having to extract substrings from the objects.
PowerShell Literals: PowerShell has the usual set of literal values found in dynamic languages: strings, numbers, arrays and hashtables.
Numbers: PowerShell supports all of the signed .NET number formats. Hex numbers are entered as they are in C and C# with a leading "0x", as in 0xF80e. Floating point includes Single and Double precision and Decimal. Banker's rounding is used when rounding values. Expressions are widened as needed. A unique feature in PowerShell is the multiplier suffixes, which make it convenient to enter larger values easily:
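Banker's rounding (round-half-to-even) sends values exactly halfway between two integers to the nearest even integer. As an illustration only: Python 3's built-in round() happens to follow the same rule, so it can demonstrate the behaviour outside PowerShell:

```python
# Round-half-to-even ("banker's rounding"): ties go to the even neighbour.
# Python 3's round() uses the same rule as PowerShell's numeric rounding.
for value in (0.5, 1.5, 2.5, 3.5):
    print(value, "->", round(value))
# 0.5 -> 0, 1.5 -> 2, 2.5 -> 2, 3.5 -> 4
```

Note that 1.5 and 2.5 both round to 2; ordinary round-half-up would give 2 and 3.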
Strings: PowerShell uses .NET strings. Single and Double quoted strings are supported. Variable substitution and escape sequence processing is done in double-quoted strings but not in single quoted ones as shown:
PS (1) > $x = "Hi"
PS (2) > "$x bob`nHow are you?"
Hi bob
How are you?
PS (3) > '$x bob`nHow are you?'
$x bob`nHow are you?
The escape character is backtick instead of backslash so that file paths can be written with either forward slash or backslash.
Variables: In PowerShell, variables are organized into namespaces. Variables are identified in a script by prefixing their names with a "$" sign, as in $x = 3. Variable names can be unqualified like $a, or they can be namespace-qualified like $variable:a or $env:path. In the latter case, $env:path is the environment variable path. PowerShell also allows you to access functions through the function namespace ($function:prompt) and command aliases through the alias namespace ($alias:dir).
Arrays: Arrays are constructed using the comma "," operator. Unless otherwise specified, arrays are of type Object[]. Indexing is done with square brackets. The "+" operator will concatenate two arrays.
PS (1) > $a = 1, 2, 3 PS (2) > $a[1] 2 PS (3) > $a.length 3 PS (4) > [string] ($a + 4, 5) 1 2 3 4 5
Because PowerShell is a dynamic language, sometimes you don't know if a command will return an array or a scalar. PowerShell solves this problem with the @( ) notation. An expression evaluated this way will always be an array. If the expression is already an array, it will simply be returned. If it wasn't an array, a new single-element array will be constructed to hold this value.
HashTables: The PowerShell hashtable literal produces an instance of the .NET type System.Collections.Hashtable. The hashtable keys may be unquoted strings or expressions; individual key/value pairs are separated by either newlines or semicolons as shown:
PS (1) > $h = @{a=1; b=2+2
>> ("the" + "date") = get-date}
>>
PS (2) > $h

Name       Value
----       -----
thedate    10/24/2006 9:46:13 PM
a          1
b          4

PS (3) > $h["thedate"]
Tuesday, October 24, 2006 9:46:13 PM
PS (4) > $h.thedate
Tuesday, October 24, 2006 9:46:13 PM
Type Conversions: For the most part, traditional shells only deal with strings; individual tools have to interpret (parse) these strings themselves. In PowerShell, we have a much richer set of objects to work with. However, we still wanted to preserve the ease of use that strings provide. We do this through the PowerShell type conversion subsystem. This facility will automatically convert object types on demand in a transparent way. The type converter is careful to try not to lose information when doing a conversion. It will also only do one conversion step at a time. The user may also specify explicit conversions and, in fact, compose those conversions. Conversions are typically applied to values, but they may also be attached to variables, in which case anything assigned to that variable will automatically be converted.
Here is an example where a set of type constraints is applied to a variable. We want anything assigned to this variable to first be converted into a string, then into an array of characters and finally into the code points associated with those characters.
PS (1) > [int[]][char[]][string]$v = @()   # define variable
PS (2) > $v = "Hello"                      # assign a string
PS (3) > [string] $v                       # display the code points
72 101 108 108 111
PS (4) > $v = 2+2                          # assign a number
PS (5) > $v                                # display the code points
52
PS (6) > [char] 52                         # cast it back to char
4
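For comparison, the same conversion chain can be mimicked in plain Python (the codepoints helper name is made up for this sketch; it is not a PowerShell or Python built-in):

```python
def codepoints(value):
    # value -> string -> characters -> code points, mirroring the
    # [int[]][char[]][string] chain in the PowerShell example above.
    return [ord(ch) for ch in str(value)]

print(codepoints("Hello"))  # [72, 101, 108, 108, 111]
print(codepoints(2 + 2))    # [52]  (the string "4" is code point 52)
```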
Flow-control Statements: PowerShell has the usual collection of looping and branching statements. One interesting difference is that in many places, a pipeline can be used instead of a simple expression.
if ($a -eq 13) { "A is 13" } else { "A is not 13" }
The condition part of an if statement may also be a pipeline.
if (dir | where {$_.length -gt 10kb}) { "There were files longer than 10kb" }
$a=1; while ($a -lt 10) { $a }
$a=10; do { $a } while (--$a)
for ($i=0; $i -lt 10; $i++) { "5 * $i is $(5 * $i)" }
foreach ($i in 1..10) { "`$i is $i" }
foreach ($file in dir -recurse -filter *.cs | sort length) { $_.Filename }
foreach Cmdlet: This cmdlet can be used to iterate over collections of operators (similar to the map() operation found in many other languages like Perl.) There is a short alias for this command "%". Note that the $_ variable is used to access the current pipeline object in the foreach and where cmdlets.
1..10 | foreach { $_ * $_ }
$t = 0; dir | foreach { $t += $_ } ; $t
1..10 | %{ "*" * $_ }
where Cmdlet: This cmdlet selects a subset of objects from a stream based on the evaluation of a condition. The short alias for this command is "?".
1..10 | where {$_ -gt 2 -and $_ -lt 10}
get-process | where {$_.handlecount -gt 100 }
switch Statement: The PowerShell switch statement combines both branching and looping. It can be used to process collections of objects in the condition part of the statement or it can be used to scan files using the "file option.
PowerShell has a very rich set of operators for working with numbers, strings, collections and objects. These operators are shown in the following tables.
Arithmetic operators: The arithmetic operators work on numbers. The "+" and "*" operators also work on collections. The "+" operator concatenates strings and collections or arrays. The "*" operator will duplicate a collection the specified number of times.
Assignment Operators: PowerShell has the set of assignment operators commonly found in C-derived languages. The semantics correspond to the binary forms of the operator.
Comparison Operators: Most of the PowerShell operators are the same as are usually found in C-derived languages. The comparison operators, however, are not. To allow the ">" and "<" operators to be used for redirection, a different set of characters had to be chosen so PowerShell operators match those found in the Bourne shell style shell languages. (Note: when applying a PowerShell operator against collection, the elements of the collection that compare appropriately will be returned instead of a simple Boolean.)
Pattern Matching Operators: PowerShell supports two sets of pattern-matching operators. The first set uses regular expressions and the second uses wildcard patterns (sometimes called globbing patterns).
Copy console input into a file:
[console]::In.ReadToEnd() > foo.txt
Setting the Shell Prompt:
function prompt { "$PWD [" + $count++ + "]" }
Setting the Title Bar Text:
$host.UI.RawUI.WindowTitle = "PATH: $PWD"
Regular Expression Patterns: PowerShell regular expressions are implemented using the .NET regular expressions.
PowerShell Functions: Functions can be defined with the function keyword. Since PowerShell is a shell, every statement in a PowerShell function may return a value. Use redirection to $null to discard unnecessary output. The following diagram shows a simple function definition.
Advanced Functions: Functions can also be defined like cmdlets, with begin, process and end clauses for handling processing in each stage of the pipeline.
In general, the easiest way to get things done in PowerShell is with cmdlets. Basic file operations are carried out with the "core" cmdlets. These cmdlets work on any namespace. This means that you can use them to manipulate files and directories but can also use them to list the defined variables by doing
dir variable:
or remove a function called "junk" by doing:
del function:/junk
Searching Through Text: The fastest way to search through text and files is to use the select-string cmdlet as shown:
select-string Username *.txt -case                 # case-sensitive search for Username
dir -rec -filter *.txt | select-string Username    # case-insensitive search through a set of files
dir -rec -filter *.cs | select-string -list Main   # only list the first match
The Select-String cmdlet is commonly aliased to "grep" by UNIX users.
Formatting and Output: by default the output of any expression that isn"t redirected will be displayed by PowerShell. The default display mode can be overridden using the formatting cmdlets:
Output is also handled by a set of cmdlets that send the output to different locations.
Getting and Setting Text Colors:
Unlike most scripting languages, the basic object model for PowerShell is .NET, which means that instead of a few simple built-in types, PowerShell has full access to all of the types in the .NET framework. Since certain common types are used more often than others, PowerShell includes shortcuts, or type accelerators, for those types. The set of accelerators is a superset of the type shortcuts in C#. (Note: a type literal in PowerShell is specified by using the type name enclosed in square brackets, like [int] or [string].)
"Hi there".length "Hi there".SubString(2,5)
The dot operator can also be used with an argument on the right hand side:
"Hi there".("len" + "th")
Methods can also be invoked indirectly:
$m = "Hi there".substring $m.Invoke(2,3)
Static methods are invoked using the "::" operator with an expression that evaluates to a type on the left-hand side and a member on the right hand side
[math]::sqrt(33)
$m = [math]
$m::pow(2,8)
Working With Collections: Foreach-Object, Where-Object
[void][reflection.assembly]::LoadWithPartialName ("System.Windows.Forms")
$form = new-object Windows.Forms.Form
$form.Text = "My First Form"
$button = new-object Windows.Forms.Button
$button.text = "Push Me!"
$button.Dock = "fill"
$button.add_click({$form.close()})
$form.controls.add($button)
$form.Add_Shown({$form.Activate()})
$form.ShowDialog()
Working With Date and Time
Use the Get-Date cmdlet to get the current date.
$now = get-date; $now
Do the same thing using the use static method on System.DateTime
$now = [datetime]::now ; $now
Get the DateTime object representing the beginning of this year using a cast.
$thisYear = [datetime]"2006/01/01"
Get the day of the week for today
$now.DayOfWeek
Get the total number of days since the beginning of the year.
($now-$thisyear).TotalDays
Get the total number of hours since the beginning of the year.
($now-$thisyear).TotalHours
Get the number of days between now and December 25th for this year.
(([datetime] "12/25/2006")-$now).TotalDays
Get the day of the week it occurs on:
([datetime] "12/25/2006").DayOfWeek
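For readers who know Python better than PowerShell, the same date arithmetic looks like this with the standard datetime module (the 2006 dates are hard-coded to match the examples above):

```python
from datetime import datetime

now = datetime.now()
this_year = datetime(2006, 1, 1)
christmas = datetime(2006, 12, 25)

print(now.strftime("%A"))        # day of the week for today
print((now - this_year).days)    # whole days since the start of 2006
print((christmas - now).days)    # whole days until December 25th, 2006
print(christmas.strftime("%A"))  # day of the week it occurs on: Monday
```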
Along with .NET, PowerShell also lets you work with COM object. This is most commonly used as the Windows automation mechanism. The following example shows how the Microsoft Word automation model can be used from PowerShell:
Listing: Get-Spelling Script - this script uses Word to spell check a document
if ($args.count -gt 0)
{
@"
Usage for Get-Spelling:

Copy some text into the clipboard, then run this script. It will
display the Word spellcheck tool that will let you correct the
spelling on the text you've selected. When you're done it will put
the text back into the clipboard so you can paste it back into the
original document.
"@
    exit 0
}

$shell = new-object -com wscript.shell
$word = new-object -com word.application
$word.Visible = $false
$doc = $word.Documents.Add()
$word.Selection.Paste()
if ($word.ActiveDocument.SpellingErrors.Count -gt 0)
{
    $word.ActiveDocument.CheckSpelling()
    $word.Visible = $false
    $word.Selection.WholeStory()
    $word.Selection.Copy()
    $shell.PopUp("The spell check is complete, " +
                 "the clipboard holds the corrected text.")
}
else
{
    [void] $shell.Popup("No Spelling Errors were detected.")
}
$x = [ref] 0
$word.ActiveDocument.Close($x)
$word.Quit()
Tokenizing a Stream Using Regular Expressions:
The -match operator will only retrieve the first match from a string. Using the [regex] class, it's possible to iterate through all of the matches. The following example will parse simple arithmetic expressions into a collection of tokens:
$pat = [regex] "[0-9]+|\+|\-|\*|/| +"
$m = $pat.match("11+2 * 35 -4")
while ($m.Success) {
    $m.value
    $m = $m.NextMatch()
}
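The same iterate-all-matches pattern in Python uses re.finditer with an equivalent token grammar (different regex engine, same idea):

```python
import re

# Same tokenizer as the PowerShell example: numbers, the four
# arithmetic operators, or runs of spaces.
pat = re.compile(r"[0-9]+|[+\-*/]| +")
tokens = [m.group(0) for m in pat.finditer("11+2 * 35 -4")]
print(tokens)
# ['11', '+', '2', ' ', '*', ' ', '35', ' ', '-', '4']
```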
The other major object model used in PowerShell is WMI, Windows Management Instrumentation. This is Microsoft's implementation of the Common Information Model (CIM). CIM is an industry standard created by Microsoft, HP, IBM and many other computer companies with the intent of coming up with a common set of management abstractions. WMI is accessed in PowerShell through the Get-WMIObject cmdlet and through the [WMI] and [WMISearcher] type accelerators. For example, to get information about the BIOS on your computer, you could do:
PS (1) > (Get-WmiObject win32_bios).Name v3.20
Support for active directory is accomplished through type accelerators. A string can be cast into an ADSI (LDAP) query and then used to manipulate the directory as shown:
$domain = [ADSI] `
>> "LDAP://localhost:389/dc=NA,dc=fabrikam,dc=com"
PS (2) > $newOU = $domain.Create("OrganizationalUnit", "ou=HR")
PS (3) > $newOU.SetInfo()
PS (5) > $ou = [ADSI] `
>> "LDAP://localhost:389/ou=HR,dc=NA,dc=fabrikam,dc=com"
>>
PS (7) > $newUser.Put("title", "HR Consultant")
PS (8) > $newUser.Put("employeeID", 1)
PS (9) > $newUser.Put("description", "Dog")
PS (10) > $newUser.SetInfo()
PS (12) > $user = [ADSI] ("LDAP://localhost:389/" +
>> "cn=Dogbert,ou=HR,dc=NA,dc=fabrikam,dc=com")
>>
PowerShell has no language support for creating new types. Instead this is done through a series of commands that allow you to add members (properties, fields and methods) to existing object. Here"s an example:
PS (1) > $a = 5 # assign $a the integer 5 PS (2) > $a.square() Method invocation failed because [System.Int32] doesn"t contain a method named "square". At line:1 char:10 + $a.square( <<<< ) PS (3) > $a = 5 add-member -pass scriptmethod square {$this * $this} PS (4) > $a 5 PS (5) > $a.square() 25 PS (6) > $a.gettype().Fullname System.Int32|
Working With XML Data: PowerShell directly supports XML. XML documents can be created with a simple cast and document elements can be accessed as though they were properties.
PS (1) > $d = [xml] "<a><b>1</b><c>2</c></a>" PS (2) > $d.a.b 1 PS (3) > $d.a.c 2
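For comparison, the same document walk in Python uses the standard library's ElementTree; child elements are reached with find() rather than property access:

```python
import xml.etree.ElementTree as ET

# Parse the same tiny document and read the two child elements.
d = ET.fromstring("<a><b>1</b><c>2</c></a>")
print(d.find("b").text)  # 1
print(d.find("c").text)  # 2
```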
Errors and Debugging: The success or failure status of the last command can be determined by checking $?. A command may also have set a numeric code in the $LASTEXITCODE variables. (This is typically done by external applications.)
PS (11) > "exit 25" > invoke-exit.ps1 PS (12) > ./invoke-exit PS (13) > $LASTEXITCODE
The default behavior when an error occurs can be controlled globally with the $ErrorActionPreference variable or, for a single command, with the -ErrorAction Parameter.
The trap Statement: will catch any exceptions thrown in a block. The behavior of the trap statement can be altered with the break and continue statements.
The throw Statement: along with the trap statement, there is a throw statement. This statement may be used with no arguments in which case a default exception will be constructed. Alternatively, an arbitrary value may be thrown that will be automatically wrapped in a PowerShell runtime exception.
The Format Operator
The PowerShell format operator is a wrapper around the .NET String.Format method. It allows you to do very precise formatting:

"0x{0:X} {1:hh} {2,5}|{3,-5}|{4,5}" -f 255, (get-date), "a","b","c"

Jim Katz replied on Tue, 2009/10/27 - 11:01am
Greg Palmer replied on Mon, 2010/02/15 - 11:39pm
in response to:
Jim Katz
Powershell 2 behaviour is different with the echo behaviour with a new line per argument. However, the examples work as shown if Write-Host is substituted for echo.
On 06/01/2013 at 09:57, xxxxxxxx wrote:
Hello dear Community!
I'm very proud to tell you about the initial release of the c4dtools library. It is a Python package containing classes and functions that make it easier to write plugins for Cinema 4D.
Significant features:
► Easily access symbols from your c4d_symbols.h file.
► Easily load strings from your c4d_strings.str file.
► Fast: Symbol-loading will be cached by default.
► Easily load libraries from your plugin's lib directory using a self-contained importer.
► Convenient wrapper for the c4d.plugins.CommandData class.
► Thoroughly documented code!
► and more..
Here's some example code:
import c4d
import c4dtools
res, importer = c4dtools.prepare(__file__)
# Import libraries from the `lib` folder relative to the plugins
# directory, 100% self-contained and independent from `sys.path`.
mylib = importer.import_('mylib')
mylib.do_stuff()
class MyDialog(c4d.gui.GeDialog):

    def CreateLayout(self):
        # Access symbols from the `res/c4d_symbols.h` file via
        # the global `res` variable returned by `c4dtools.prepare()`.
        return self.LoadDialogResource(res.DLG_MYDIALOG)

    def InitValues(self):
        # Load strings from the `res/strings_xx/c4d_strings.str` file
        # via `res.string`.
        string_1 = res.string.IDC_MYSTRING_1()
        string_2 = res.string.IDC_MYSTRING_2("Peter")
        self.SetString(res.EDT_STRING1, string_1)
        self.SetString(res.EDT_STRING2, string_2)
        return True
# As of the current release, the only wrapped plugin class is
# `c4d.plugins.CommandData`. The plugin is registered automatically
# in `c4dtools.plugins.main()`; the information for registration
# is defined on class-level.
class MyCommand(c4dtools.plugins.Command):

    PLUGIN_ID = 100008  # !! Must be obtained from the plugincafe !!
    PLUGIN_NAME = res.string.IDC_MYCOMMAND()
    PLUGIN_HELP = res.string.IDC_MYCOMMAND_HELP()

    def Execute(self, doc):
        dlg = MyDialog()
        return dlg.Open(c4d.DLG_TYPE_MODAL)

# Invoke the registration of `MyCommand` via `c4dtools.plugins.main()`
# on the main-run of the python-plugin.
if __name__ == '__main__':
    c4dtools.plugins.main()
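Outside of Cinema 4D, the idea behind the self-contained importer — loading a module from an explicit path without permanently touching sys.path — can be sketched with the standard importlib machinery (import_from is a made-up helper name for this sketch, not part of c4dtools):

```python
import importlib.util
import os
import tempfile

def import_from(path, name):
    # Build a module object from an explicit file path;
    # sys.path is never modified.
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Demo: write a tiny library to disk and import it by path.
with tempfile.TemporaryDirectory() as tmp:
    lib_path = os.path.join(tmp, "mylib.py")
    with open(lib_path, "w") as fh:
        fh.write("def do_stuff():\n    return 'did stuff'\n")
    mylib = import_from(lib_path, "mylib")
    print(mylib.do_stuff())  # did stuff
```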
You can download the c4dtools library from github here.
I will continue developing this library and wouldn't mind any forkers.
Some feedback would be nice as well, thanks!
Best,
Niklas
UPDATE: 1.1.0
The c4dtools library has been improved and is now licensed under the Simplified BSD License,
allowing it to be used in commercial projects! The documentation has been updated as well and
is included in the repository.
Grab it from github!
On 06/01/2013 at 11:07, xxxxxxxx wrote:
hey niklas,

i think that it is a great idea to create some sort of c4d related python libs. any form of symbol management is really welcome i think; it kind of sucks that you currently always have to do the work twice. however, i am not sure if i really want to wrap the plugin registration. you lose pretty much control for just 2-3 lines here.
On 07/01/2013 at 07:08, xxxxxxxx wrote:
Hello Niklas.
Thank you for lib!
On 07/01/2013 at 08:24, xxxxxxxx wrote:
Hello Ferdinand,
Thanks for your feedback. I've been experimenting very much until I found this way of loading
and using the symbols. It's quite usable IMHO now.
If you don't want to loose this control, you do not have to use the plugin class. I was tired
of writing the registration code over and over again, (including loading the bitmap, etc.). You could
also overwrite the c4dtools.plugins.Command.register() function.
You wouldn't have needed to remove the code you have posted. I've already put the
console-flushing and container-printing function into the module, thanks for the ideas. They'll
be included in the next commit, but in the development branch. The master-branch will be updated
as major changes come up.
On 07/01/2013 at 09:25, xxxxxxxx wrote:
hey,
i am too lazy to write a wall of text, so just a list:
1. return the results of the RegisterXyzData() methods in some form.
2. add a member which lets you print some text to the console when RegisterXyzData() is True.
3. provide a reference to the actual instance of the registered plugin class.
Symbols :
1. it would be nice if the text length required to call a member of res would be shorter.
i am currently using a module which is just called con, where all my constants are sitting.
so it would be cool if it would be possible just to type res.IDC_MYELEMENTID.
2. i really like the feature that you can add manually constants. at least i am reading
string_2 = res.string.IDC_MYSTRING_2("Peter") in that way (or is it a dict search value) ?
3. another cool feature would be an automatic attribute initialization (InitAttr()) for resource-based GUIs by parsing the res file and choosing the correct attribute type.
On 07/01/2013 at 10:18, xxxxxxxx wrote:
Hi Ferdinand,
@1: Don't get it.
@2: You can overwrite the register() method for this.
class MyCommand(c4dtools.plugins.Command):
    # ...
    def register(self):
        if super(MyCommand, self).register():
            print "Plugin Successfully registered."
            return True
        return False
@3: You can prevent the class from being registered by c4dtools.plugins.main() and register it yourself. This information is contained in the source-code inline documentation.
class MyCommand(c4dtools.plugins.Command):
    # ...
    autoregister = False

instance = MyCommand()
instance.register()
Symbols
@1: This is the way to access the symbols. You can of course call the variable whatever you like, con as well.
con, importer = c4dtools.prepare(__file__)
print con.IDC_MYSYMBOL
@2: The res.string slot points to a c4dtools.resource.StringLoader instance. Requesting an attribute will return a callable object wrapping c4d.plugins.GeLoadString with the symbol-id already set. Therefore, the returned callable accepts three parameters: the four that GeLoadString takes, minus one, because the id is already set.
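A rough, c4d-free sketch of that mechanism (this StringLoader is a stand-in that uses str.format in place of c4d.plugins.GeLoadString; the real class lives in c4dtools.resource and may differ):

```python
class StringLoader(object):
    """Minimal stand-in: attribute access returns a callable with the
    symbol already bound, mimicking res.string.IDC_...(...) usage."""

    def __init__(self, strings):
        self._strings = strings  # symbol name -> format string

    def __getattr__(self, name):
        try:
            template = self._strings[name]
        except KeyError:
            raise AttributeError(name)

        def loader(*args):
            # The real implementation calls GeLoadString(id, *args).
            return template.format(*args)
        return loader

string = StringLoader({
    "IDC_MYSTRING_1": "Hello!",
    "IDC_MYSTRING_2": "Hello, {0}!",
})
print(string.IDC_MYSTRING_1())         # Hello!
print(string.IDC_MYSTRING_2("Peter"))  # Hello, Peter!
```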
@3: Don't get it. Do you mean calling a function for each symbol and instead of taking the symbols
id, taking the functions return value as value for the symbol?
-Nik
On 07/01/2013 at 15:40, xxxxxxxx wrote:
Originally posted by xxxxxxxx
@3: Don't get it. Do you mean calling a function for each symbol and instead of taking the symbols
id, taking the functions return value as value for the symbol?
Originally posted by xxxxxxxx
It would be new functionality, completely unrelated to the dialog resource loading features.
res file :
CONTAINER mynode
{
[...]
DATETIME STAGE_BUDGET { ANIM OFF; TIME_CONTROL; OPEN;}
[...]
}
now:
class mynode(plugins.ObjectData):
    def Init(self, node):
        [...]
        self.InitAttr(node, c4d.DateTimeData, c4d.STAGE_BUDGET)
        [...]
what would be cool :
class mynode(plugins.ObjectData):
    def Init(self, node):
        # does the same as the example above
        res.InitAttributes(self, node, relatedRessourcFile)
On 08/01/2013 at 03:56, xxxxxxxx wrote:
I did not realize you were referring not to dialogs, but to descriptions. The current implementation does not parse the description symbols, as they are automatically loaded into the c4d module. Although it would not be a problem to parse the descriptions' symbol files (*.h), it would be a lot of work to parse the *.res files, which is required to get the information on how to initialize the attributes in the node.
PS: I know about the symbolcache issue. Still, not a reason for me to include the description symbols in the res parser.
On 08/01/2013 at 08:25, xxxxxxxx wrote:
I admit that parsing the description symbols as well might be useful sometimes, but I don't
want it to be the default behavior. One can now optionally parse the description symbols. The
code has been committed to the development branch. See.
The argument parse_descriptions must be set to True on c4dtools.prepare() for this.
Thanks for your feedback Ferdinand
-Niklas
On 14/01/2013 at 06:56, xxxxxxxx wrote:
Hey Niklas, fantastic stuff as always. I don't have the time to look into it right now, but if it does what you say it should be a very handy timesaver in the future.
Thank you for sharing!
On 30/01/2013 at 04:08, xxxxxxxx wrote:
I like the initiative!
Do you have an overview / documentation of all functions in this library?
On 01/02/2013 at 09:48, xxxxxxxx wrote:
Hi pgroof,
I've added a Sphinx documentation to the repository. You can find it under docs/build/html.
I have also merged the development branch into the master branch, c4dtools is now on Version 1.0.1.
On 14/02/2013 at 06:48, xxxxxxxx wrote:
The c4dtools library has been improved and is now licensed under the Simplified BSD License,
allowing it to be used in commercial projects! The documentation has been updated as well and
is included in the repository.
Grab it from github!
On 22/03/2013 at 11:12, xxxxxxxx wrote:
Hi!
The c4dtools library has been updated to 1.2.8. The new version includes some extremely cool new
features. The documentation has also been updated.
Some of the new features include:
c4dtools.misc.dialog
Here's a small code-snippet that demonstrates using the c4dtools.misc.dialog module:
class MainDialog(c4d.gui.GeDialog) :
# Must be a unique plugincafe identifier!
ID_DATA = 1001204
def __init__(self) :
super(MainDialog, self).__init__()
self.params = c4dtools.misc.dialog)
def InitValues(self) :
# Load saved values.
bc = c4d.plugins.GetWorldPluginData(self.ID_DATA)
self.params.load_container(self, bc)
return True
def AskClose(self) :
# Store the dialog values.
params = getattr(self, 'params', None)
if params:
bc = params.to_container(self)
c4d.plugins.SetWorldPluginData(self.ID_DATA, bc, False)
# Close the dialog.
return False
c4dtools.resource.menuparser
Since this module requires the scan module, it is not imported implcitly and the c4dtools library can safely be used without this dependency installed!
Here's a small code-snippet on how to use MENU resources:
res, imp = c4dtools.prepare(__file__, __res__)
class MyDialog(c4d.gui.GeDialog) :
MENU_FILE = res.file('menu', 'my_menu.menu')
RECENTS_START = 1000000
def CreateLayout(self) :
menu = c4dtools.resource.menuparser.parse_file(self.MENU_FILE)
recents = menu.find_node(res.MENU_FILE_RECENTS)
item_id = self.RECENTS_START
for fn in get_recent_files() : # arbitrary function
node = c4dtools.resource.menuparser.MenuItem(item_id, str(fn))
recents.add(node)
# Render the menu on the dialog, passing the dialog itself
# and the c4dtools resource.
self.MenuFlushAll()
menu.render(self, res)
self.MenuFinished()
# ...
return True
The corresponding MENU resource looks like this:
# Write comments like in Python.
MENU MENU_FILE {
MENU_FILE_OPEN;
MENU_FILE_SAVE;
--------------; # Adds a separator.
COMMAND COMMAND_ID; # Uses GeDialog.MenuAddCommand().
COMMAND 5159; # Same here.
# Create a sub-menu.
MENU_FILE_RECENTS {
# Will be filled programatically.
}
}
# More menus may follow ...
The symbols used in the menu resource must be defined in the res variable passed to the render() method.
Online Documentation
The c4dtools library now has an index in the Python Package Index and the 1.2.8 documentation is hosted online at.
Edit: corrected example code | https://plugincafe.maxon.net/topic/6848/7638_proud-to-announce-the-c4dtools-library/1 | CC-MAIN-2021-43 | refinedweb | 1,814 | 52.66 |
Enter my little App Engine HTTP module. It provides a simple interface for making arbitrary HTTP requests and will print the full request and response to the terminal (though you can turn the noisy printing off if you want). Download, copy the
http.pyfile to you working directory and try it out in your Python interpreter.
For our first demonstration, let's try to visit the Google search page.
import httpYou should see your request and the server's response (with the HTML for the Google Search page) in your terminal window. This should work with just about any website out there.
client = http.Client()
resp = client.request('GET', '')
Other HTTP debugging tools can show you the request and response like this, but I find that this kind of simple Python client can be useful in writing end-to-end or integration tests which contact your App Engine app remotely.
Along those lines, one of the things which standard HTTP debugging tools do not provide, is a way to sign in to an App Engine app with a Google Account so that the App Engine Users API can identify the current user. I wrote an extremely simple app which illustrates the Users API, try it out here:
After signing in, the page should simply say, "Hello yourusername (yourusername@yourdomain.com)" You'll notice that during the sign in process, you signed in on were asked to approve access to the app. This kind of interaction works great in a browser, but can be tricky when you are using a command line, browserless, client.
It is possible however, to sign in to an App Engine app without using a browser. You can use the same technique used in
appcfg, use ClientLogin and use the authorization token to obtain an app specific cookie which indicates the current user. This simple HTTP library can do this for you and all subsequent requests will use this cookie to tell the App Engine app who the current user is. Try it out by making the request to the simple user app that you visited earlier:
import httpYou should see the following text displayed in the terminal:
client = http.Client()
client.appengine_login('jscudtest')
resp = client.request('GET',
'')
print resp.body
Hello, yourusername (yourusername@yourdomain.com)You can use the
appengine_loginmethod with your own app, just change the argument to the App ID of the app you want to access.
Along with simplifying access to apps which use Google Accounts, I wanted this library to simplify the process of using another feature used by many web apps: HTML form posts. Now I'm certain you've used HTML forms before, here's a simple example:
The above app uses both the Users API and a simple form. As an alternative to visiting this page in the web browser, you can post your shout-out using the following:
import httpIf you've even wondered what gets sent across the wire to post on a form like this, look back in your terminal to see the request from your computer and the response from the server (this is of course just the HTTP layer, wireshark will show you traffic on the IP and Ethernet layer as well).
client = http.Client()
client.appengine_login('shoutout')
client.request('POST', '',
form_data={'who': raw_input('From: '),
'message': raw_input('Message: ')})
That's really all there is to it. I designed this as just a simple script to use on the command line and I wrote it in less time than it's taken me to write this blog post about it (I borrowed atom.http_core from the gdata-python-client as a foundation). With some tweaks to remove the interactive (getpass and raw_input) calls and replace them with parameters, I could see this module as a utility layer in a larger, more complex, App Engine client application. If you're creating on I'd love to hear about it ;-)
For more information on how the
appengine_loginmethod works behind the scenes, see this presentation I gave a few months ago:
Many thanks to Trevor Johns and Nick Johnson for helping me to understand how this ClientLogin-to-cookie exchange works.
I'm sure that App Engine's Java runtime users would appreciate a port of this simple library to Java, if you feel so inclined.
3 comments:
Interesting little library.
In the category of similar tools, I often find myself using:
* FireBug/LiveHTTPHeaders Firefox extensions - easy to view network traffic, markup, and execute arbitrary javascript
* WebScarab - personal web proxy written in Java, tons of features including playback, scripting, and fuzzy testing
* Selenium - automated functional testing scripts written in HTML or in your favorite language
Ah yes, thanks Ben those are great resources. FireBug in particular is one that I'm using all of the time. Especially handy when you have some Ajax components to your application.
I haven't used Selenium yet but I'm sure it won't be long.
This is awesome, Jeff! I was actually wondering if this was possible, and this was a perfect answer! | https://blog.jeffscudder.com/2009/08/test-client-for-app-engine.html | CC-MAIN-2022-40 | refinedweb | 842 | 58.21 |
Our program must create a guessing game that asks the user what the range they want to game to be and how many guesses they want. Once that is done it askes the user to make a guess and based on the guess its supposed to say Hot if it is within 10% of the answer, Warm if it is withing 25%, and Cold if it is outside 25%.
Here is what I have so far,
public class GuessingGame1 { /** * @param args */ public static void main(String[] args) { // TODO Auto-generated method stub Scanner keyboard = new Scanner (System.in); Random generator = new Random(); int guesses; int guessNum = 0; double difference = 0; int answer; double guess = 0; String another = "y"; boolean flag = false; boolean anotherFlag = true; System.out.print("Please enter a number to indicate the range of the game (10,20,30)"); guessNum = keyboard.nextInt(); System.out.println("How many guesses should I allow?"); guesses = keyboard.nextInt(); System.out.println("Let's Play. I've chosen my number."); while(anotherFlag){ answer = generator.nextInt(guessNum) + 1; System.out.println("It's a whole number between 1 and " + guessNum + (": ")); flag = false; guess = keyboard.nextInt(); if(guess > answer) { difference = guess - answer; } else if (guess < answer) { difference = answer - guess; while(!flag) { if(guess == answer) { System.out.println("You guessed correctly! Good Job"); flag = true; } else if(difference / guessNum <= .10) { System.out.println("Hot! Try Again: "); } else if(difference / guessNum > .10 || guessNum <= .25) { System.out.println("Warm! Try Again: "); } else { System.out.println("Cold! Try Again "); } guess = keyboard.nextInt(); } System.out.println(); System.out.println("Would you like to play again? (y/n)"); another = keyboard.next(); if(another.equalsIgnoreCase("y") == true) { anotherFlag = true; } else { anotherFlag = false; } } } } }
I cant seem to get the hot warm and cold thing to work, and how do i impliment the number of guesses the user gets? | https://www.daniweb.com/programming/software-development/threads/463992/guessing-game-in-java | CC-MAIN-2021-25 | refinedweb | 305 | 50.23 |
From reading multiple posts I want to see if you guys can clarify some things.
We are currently in the final stages of moving away from Novell and going AD. Currently I use a local admin account on all systems that SW uses to scan each system.
When we make the transition to AD I would like not to have to add local admin account to every system but my security admin are a pain in my a** and would be down with setting up a Domian Admin account for SW.
Is it possible to setup a AD account that isn't a domain admin but can scan each system without needing to be added to the local admin group on users systems.
Its been awhile since I've played around in AD GPO since we have lovely Novell so forgive me if this is a remedial question.:/
4 Replies:/
Apr 3, 2012 at 11:31 UTC
If I were to use a domain admin account would I still need to add it to the local users on the systems being scanned or no?
Apr 3, 2012 at 12:01 UTC
Pittsburgh Computer Solutions is an IT service provider.
If I were to use a domain admin account would I still need to add it to the local users on the systems being scanned or no?
You would have to set Domain Admins group to be local Admins group. The same way as above just using Domain Admins.
Jan 28, 2016 at 11:53 UTC
Power Users, LLC is an IT service provider.
This excludes your domain controllers, since there is no local admins group on DCs. Here is the correct process that give absolute least privilege needed for spiceworks to work:
-.
a. Right click WMI Control > Properties > Security Tab > Security Button > Advanced > Add
b. Select a principal > scanner.spiceworks
c. Type: Allow
d. Applies to: This namespace and subnamespaces
e.. | https://community.spiceworks.com/topic/212117-non-domain-admin-scanning | CC-MAIN-2016-50 | refinedweb | 321 | 70.63 |
Customize the home page
With tagging in place, we can enhance the application to allow users to create their
own home page. The aim is to allow users to specify the tags they are interested in,
so any content with these tags will be displayed on their home page. This will allow
us to break the home page up into two sections:
- A Most Recent section, containing the last five file uploads and messages
- A Your Data section, containing all the files and messages that are tagged
according to the user’s preferences
Introducing templates
Taking this approach means that files and messages will be displayed in many
different places on the site, instead of just the home page. By the end of this chapter,
messages and files will be rendered in the context of:
- A Most Recent section
- A Your Data section
In the future, we will probably render messages and files in the following contexts
as well:
- Show all files and messages
- Show files and messages by tags
- Show files and messages by search results
Ideally we want to encapsulate the rendering of a file and a message so they look the
same all over the site, and we don’t need to duplicate our presentation logic. Grails
provides a mechanism to handle this, through GSP, called templates.
A template is a GSP file, just the same as our view GSP files, but is differentiated
from a view by prefixing the file name with an underscore. We are going to create
two templates—one template for messages, which will be called _message.gsp and
the other for files, which will be called _file.gsp.
The templates will be responsible for rendering a single message and a single file.
Templates can be created anywhere under the views folder. The location that they
are created in affects the way they are executed. To execute a template we use the
grails render tag. Assume that we create our message template under the views/
message folder. To render this template from a view in the same folder, we would
call the following:
<g:render
However, if we need to render a message from another controller view, say the home
page, which exists under views/home, we would need to call it like so:
<g:render
Passing data to a template
The two examples of executing a template above would only be capable of rendering
static information. We have not supplied any data to the template to render. There
are three ways of passing data into a template:
- Send a map of the data into the template to be rendered
- Provide an object for the template to render
- Provide a collection of objects for the template to render
Render a map
This mechanism is the same as when a controller provides a model for a view to
render. The keys of the map will be the variable names that the values of the map are
bound to within the template. Calling the render tag given below:
<g:render
would bind the myMessage object into a message variable in the template scope and
the template could perform the following:
<div class="messagetitle"> <g:message </div>
Render an object
A single object can be rendered by using the bean attribute:
<g:render
The bean is bound into the template scope with the default variable named it:
<div class="messagetitle"> <g:message </div>
Render a collection
A collection of objects can be rendered by using the collection and var attributes:
<g:render
When using a collection, the render tag will iterate over the items in the collection
and execute the template for each item, binding the current item into the variable
name supplied by the var attribute.
<div class="messagetitle"> <g:message </div≫
Be careful to pass in the actual collection by using ${}. If just the name of the variable
is passed through, then the characters in the collection variable name provided
will be iterated over, rather than the items in the collection. For example, if we use
the following code, the messages collection will be iterated over:
<g:render
However, if we forget to reference the messages object and just pass through the
name of the object, we will end up iterating over the string “messages”:
<g:render
Template namespace
Grails 1.1 has introduced a template namespace to make rendering of templates even
easier. This option only works if the GSP file that renders the template is in the same
folder as the template itself. Consider the first example we saw when rendering a
template and passing a Map of parameters to be rendered:
<g:render
Using the template namespace, this code would be simplified as follows:
<tmpl:message
As we can see, this is a much simpler syntax. Do remember though that this option is
only available when the GSP is in the same folder as the template.
Create the message and file templates
Now, we must extract the presentation logic on the home page, views/home/index.
gsp, to a message and file template. This will make the home page much simpler and
allow us to easily create other views that can render messages and files.
Create two new template files:
- /views/message/_message.gsp
- /views/file/_file.gsp
Taking the code from the index page, we can fill in _message.gsp as follows:
<div class="amessage"> <div class="messagetitle"> <g:message </div> <div class="tagcontainer"> <g:message </div> <div class="messagetitlesupplimentary"> <g:message </div> <div class="messagebody"> <g:message </div> </div>
Likewise, the <div> that contains a file panel should be moved over to the new _
file.gsp. This means the main content of our home page (views/home/index.gsp)
becomes much simpler:
<div class="panel"> <h2>Messages</h2> <g:render </div> <div class="panel"> <h2>Files</h2> <g:render </div>
User tags
The next step is to allow users to register their interest in tags. Once we have
captured this information then we can start to personalize the home page. This is
going to be surprisingly simple, although it sounds like a lot! We just need to:
- Create a relationship between Users and Tags
- Create a controller to handle user profiles
- Create a form that will allow users to specify the tags in which they
are interested
User to tag relationship
Creating a relationship between users and tags is very simple. Users will select a
number of tags that they want to watch, but users themselves are not ‘tagged’, so the
User class cannot extend the Taggable class. Otherwise users would be returned
when performing a polymorphic query on Taggable for all objects with a certain tag.
Besides allowing a user to have a number of tags, it is also necessary to be able to add
tags to a user by specifying a space delimited string. We must also be able to return
the list of tags as a space delimited string.
The updates to the user class are:
package app import tagging.Tagger class User { def tagService static hasMany = [watchedTags: Tagger] … def overrideTags( String tags ) { watchedTags?.each { tag -> tag.delete() } watchedTags = [] watchedTags.addAll( tagService.createTagRelationships( tags )) } def getTagsAsString() { return ((watchedTags)?:[]).join(' ') } }
User ProfileController
The ProfileController is responsible for loading the current user for the My
Tags form, and then saving the tags that have been entered about the user. Create
a new controller class called ProfileController.groovy under the grails-app/
controller/app folder, and add the following code to it:
package app class ProfileController { def userService def myTags = { return ['user': userService.getAuthenticatedUser() ] } def saveTags = { User.get(params.id)?.overrideTags( params.tags ) redirect( controller:'home' ) } }
The myTags action uses userService to retrieve the details of the user making the
request and returns this to the myTags view. Remember, if no view is specified,
Grails will default to the view with the same name of the action.
The saveTags action overrides the existing user tags with the newly submitted tags.
The myTags form
The last step is to create the form view that will allow users to specify the tags
they would like to watch. We will create a GSP view to match the myTags action in
ProfileController. Create the folder grails-app/views/profile and then create
a new file myTags.gsp and give it the following markup:
<%@ page <meta name="layout" content="main"/> <title>My Tags</title> </head> <body> <g:form <g:hiddenField <fieldset> <dl> <dt>My Tags</dt> <dd><g:textField</dd> </dl> </fieldset> <g:submitButton | <g:linkCancel</g:link> </g:form> </body> </html>
This view will be rendered by the myTags action on the ProfileController and is
provided with a User instance. The form submits the tags to the saveTags action on
the ProfileController. The user id is put in a hidden field so we know which user
to add the tags to when the form is submitted, and any existing tags for the user are
rendered in the text field via the tagsAsString property.
Add a link to the myTags action in the header navigation from our layout in main.gsp:
<div id="header"> <jsec:isLoggedIn> <div id="profileActions"> <span class="signout"> <g:linkMy Tags</g:link> | <g:linkSign out</g:link> </span> </div> </jsec:isLoggedIn> <h1><g:linkTeamwork</g:link></h1> </div>
Now restart the application, log in as the default user and you will be able to specify
which tags you are interested in.
Speak Your Mind | http://www.javabeat.net/grails-1-1-web-application-development/3/ | CC-MAIN-2014-42 | refinedweb | 1,566 | 55.58 |
export default class App extends Component { doSomething = () => { console.log('Hi'); } render() { return ( <Container> <Button onClick={this.doSomething} <Button onClick={() => this.doSomething()} </Container> ); } }
What is the difference between both button click in the given React Component?
I think there is no difference.
1st way is ideal.
2nd way is just long way of doing the samething. You are calling a function to return doSomething function.
As explainmy shimphilip above and i would like to add, for second way you could pass parameters also since you’re using function.
I asked the question on stackoverflow. There is also performance difference in both button clicks apart from passing parameters. When the component re-renders, memory is allocated for the second one every time, while the first one just uses memory reference.
React documentation also recommend to bind function in constructor rather than using arrow function to avoid performance issues.
True for that about constructor. I read it before in this article. It explained very well all subtle differences. | https://www.freecodecamp.org/forum/t/what-is-the-difference-between-both-button-click-in-the-given-react-component/218114 | CC-MAIN-2018-47 | refinedweb | 165 | 52.97 |
>> 'go.
Stop worrying: the Closure Compiler (which the "goog" approach uses)
is specifically designed to handle stuff like those "deep" namespaces.
It checks dependencies, so it knows what it can and cannot rename, and
it checks for collisions when doing the renaming. The result of all
this is that you can during development use namespaces however deep
you like - "this.is.a.really.deep.namespace.dont.you.think - but if
the Closure Compiler figures it can get away with it, it will
ruthlessly refactor that to: "a".
EdB
--
Ix Multimedia Software
Jan Luykenstraat 27
3521 VB Utrecht
T. 06-51952295
I. | http://mail-archives.apache.org/mod_mbox/incubator-flex-dev/201212.mbox/%3CCAJs+wW2N7vuhp=er4upQy8wfUn6=2922i-J880itj2Nx9E84Jg@mail.gmail.com%3E | CC-MAIN-2016-26 | refinedweb | 101 | 52.8 |
Back to list
|
Post reply
[BMSA 2009-04] Remote DoS in Internet Explorer
Apr 11 2009 07:15AM
Nam Nguyen (namn bluemoon com vn)
BLUE MOON SECURITY ADVISORY 2009-04
===================================
:Title: Remote Denial of Service in Internet Explorer
:Severity: Moderate
:Reporter: Blue Moon Consulting
:Products: Internet Explorer 7 and 8
:Fixed in: --
Description
-----------
We could not find out the definitive description for Internet Explorer from Microsoft website. This is our own understanding of the application: Internet Explorer is a web browser.
We have discovered a remote DoS vulnerability in Internet Explorer 7 and 8. When visit a malicious page, the browser may freeze indefinitely and killing it in Task Manager is required. With IE8's default settings, killing the tab process simply launches another process and goes to the same malicious page, hence repeating the cycle. The root cause is unknown to us. We suspect that it is related to the display of unprintable characters on Windows XP, and Vista. The same problem does not occur in Windows 7.
Microsoft has classified this vulnerability as a stability (not security) issue and will be addressing it in the next version of the application.
Workaround
----------
There is no workaround.
Fix
---
This problem is to be fixed in the next version of Internet Explorer.
Disclosure
----------
Blue Moon Consulting adapts `RFPolicy v2.0 <>`_ in notifying vendors.
:Initial vendor contact:
March 19, 2009: Initial contact sent to secure (at) microsoft (dot) com. [email concealed]
:Vendor response:
March 19, 2009: Tony replied stating the preference for PGP communication.
:Further communication:
March 20, 2009: Technical details and PoC code were sent to Tony, in PGP MIME format.
March 20, 2009: Tony replied with a new case identifier MSRC 9011jr and informed us of a new case manager, Jack.
March 21, 2009: We further reported that IE 8 was affected by the same bug, in PGP MIME format.
March 30, 2009: We asked if Microsoft had received our PoC.
March 31, 2009: Jack confirmed the receipt, and replied that Microsoft could not reproduce the behavior of this bug.
April 01, 2009: We clarified that we tested with IE 7, and IE 8 on Vista Business. Sent in PGP MIME format.
April 01, 2009: Jack said the email was stripped out and asked us to resend.
April 02, 2009: We resent the last email in plain text.
April 03, 2009: Jack told us Microsoft only experienced temporary DoS and in no case did Internet Explorer hang indefinitely.
April 06, 2009: We sent Jack a video clip, in PGP MIME format.
April 06, 2009: Jack asked us to resend because the email was stripped again.
April 07, 2009: We resent the clip in plain text to Jack.
April 09, 2009: Jack acknowledged the receipt and let us know the bug would be fixed in the next version of Internet Explorer.
April 09, 2009: We asked for a confirmation of bug classification.
April 09, 2009: Jack confirmed this bug was classified as stability, instead of a security issue. We therefore decided to release this advisory to the public.
:Public disclosure: April 11, 2009
:Exploit code: The following CGI script causes IE to hang indefinitely.
::
#!C:/python25/python
import sys
import random
CHAR_SET = [chr(x) for x in range(0x20)]
CHAR_SET += [chr(x) for x in range(128, 256)]
def send_file():
l = 800000 + 4096
print "Content-Type: text/plain"
print "Content-Length: %d" % l
print "Cache-Control: no-cache, no-store, must-revalidate"
# this is not standardized, but use it anyway
print "Pragma: no-cache"
print ""
# bypass IE download dialog
sys.stdout.write("a" * 4096)
# print junks
for i in xrange(l):
sys.stdout.write(random.choice(CHAR_SET))
sys.exit()
send_file()
Disclaimer
----------
The information provided in this advisory is provided "as is" without warranty of any kind. Blue Moon Consulting Co., Ltd disclaims all warranties, either express or implied, including the warranties of merchantability and fitness for a particular purpose. Your use of the information on the advisory or materials linked from the advisory is at your own risk. Blue Moon Consulting Co., Ltd reserves the right to change or update this notice at any time.
Cheers
--
Nam Nguyen
Blue Moon Consulting Co., Ltd
[ reply ]
Privacy Statement | http://www.securityfocus.com/archive/1/archive/1/502617/100/0/threaded | CC-MAIN-2013-20 | refinedweb | 696 | 64.2 |
Details
Description
Attempting to compile Hive with Hadoop 0.20.0 fails:
aaron@jargon:~/src/ext/svn/hive-0.3.0$ ant -Dhadoop.version=0.20.0 package
(several lines elided)
compile:
[echo] Compiling: hive
[javac] Compiling 261 source files to /home/aaron/src/ext/svn/hive-0.3.0/build/ql/classes
[javac] /home/aaron/src/ext/svn/hive-0.3.0/build/ql/java/org/apache/hadoop/hive/ql/exec/ExecDriver.java:94: cannot find symbol
[javac] symbol : method getCommandLineConfig()
[javac] location: class org.apache.hadoop.mapred.JobClient
[javac] Configuration commandConf = JobClient.getCommandLineConfig();
[javac] ^
[javac] /home/aaron/src/ext/svn/hive-0.3.0/build/ql/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java:241: cannot find symbol
[javac] symbol : method validateInput(org.apache.hadoop.mapred.JobConf)
[javac] location: interface org.apache.hadoop.mapred.InputFormat
[javac] inputFormat.validateInput(newjob);
[javac] ^
[javac] 2 errors
BUILD FAILED
/home/aaron/src/ext/svn/hive-0.3.0/build.xml:145: The following error occurred while executing this line:
/home/aaron/src/ext/svn/hive-0.3.0/ql/build.xml:135: Compile failed; see the compiler error output for details.
Activity
Actually it appears that those test failures are /not/ related to my changes (I checked out vanilla trunk and built and tested it, and those two tests still failed).
Assigned.
Thanks for your contribution.
Please do a submit patch when you attach a patch so that we get it in the patch submitted queue.
Hi Justin, Why have all the @Overrides have been taken out from the code.
The @Override annotations were removed because they were causing runtime exceptions to be generated in the latest sun Java VM. This is because those annotations were used on methods that were not correctly overriding anything in the respective superclass.
which version of JAVA VM are you using?
The below doesn't throw such errors but some configurations of Eclipse does throw such errors.
pchakka@dev111 ~ > java -version
java version "1.6.0_07"
Java(TM) SE Runtime Environment (build 1.6.0_07-b06)
Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode)
Output of my java -version reads:
java version "1.6.0_13"
Java(TM) SE Runtime Environment (build 1.6.0_13-b03)
Java HotSpot(TM) Server VM (build 11.3-b02, mixed mode)
Looking for more differences or reasons why this might happen. Double checking some things.
This patch removes all extraneous @override removals (potentially not neccessary, should've been done in another bug). Passes all unit tests on trunk (rev. 778559).
This patch does not attempt to re-implement the hadoop version specific functionality that was removed. This could potentially cause a bug in running trunk hive on hadoop 0.17.x as indicated by the comment on the removed code. Is this acceptable or should an attempt be made at reimplementation of intention using non-depreciated/removed interfaces?
Actually this has to compile with hadoop 0.17.x otherwise we will not be able to deploy this internally at FB. We are still on hadoop 0.17 and we have already lauched trunk into production.
Actually this has to compile with hadoop 0.17.x, otherwise we will not be able to deploy this internally at FB. We are still on hadoop 0.17 and we have already launched trunk into production.
Understood, I'll see what I can do. However, it appears that the API is starting to pull away with what would easily be reverse compatible. A 0.20.0+ branch might be warranted.
I'm +1 on an 0.20 branch. Cloudera's Distribution for Hadoop will be moving toward an 0.20 base and we would like to offer a compatible edition of Hive.
Another +1 for the 0.20 branch.
I agree that at some point we would drop support for older branches (<=17). But it's still worth looking at whether we can avoid doing this right now. For example for the ExecDriver change:
- // workaround for hadoop-17 - jobclient only looks at commandlineconfig
- Configuration commandConf = JobClient.getCommandLineConfig();
- if (commandConf != null) {
- commandConf.set("tmpfiles", realFiles);
we can do this using reflection - (grep -i declaredmethod HiveInputFormat.java).
are the mapredtask.java changes necessary?
this version uses reflection in a couple of places so that ql continues to compile with older versions.
i haven't had a chance to look at the hwi code and make that portable (it doesn't compile with 19 currently). If Edward could take a look - that would be awesome.
I gave a quick try and was getting hunk errors. Are these patches cumulative? Should I apply them in order
HIVE-487.patch, HIVE-487-2.patch, hive-487.3.patch?
I looked at jetty the HWI server in the patch. The changes are cosmetic. We might be able to make that portable with reflection as well if that is what we want.
regenerated.
the HWI stuff does not compile against 0.19 and prior because of the change to using new jetty apis. one option is to bundle the new jetty jar with hive. hadoop seems to have moved to using ivy and i am wondering if we should do the same.
have to use reflection. putting new jetty jars in hive does not matter since hadoop's jars take precedence at runtime (since we launch everything via hadoop)
can take a shot - looks simple ..
i don't think we can keep single version of Hive for all active versions of Hadoop. Why don't we release a branch for 20 and periodically merge from trunk to 20?
branching will get fairly complicated. where will 0.4.0 be branched off? sure there will be a time to deprecate support for older hadoop ersions - just not convinced this is it.
Note that the dependency in this case is particularly frivolous. We could ship Hive with a single version of jetty that Hive components require - but instead we are depending on Hadoop to provide it. This seems more like a setup problem on our side.
One simple option (to not use reflection) is to add the right version of jetty into the runtime classpath (if it's not there already). the compile time works fine already. (since we control the classpath from build xmls)
so i tried a custom classloader to force the new jetty jars to be used preferentially for loading classes.
alas - it does not work. it seems that some classes from the hadoop jetty jars are already loaded by the time control is transferred to hive/hwi. trying to load remaining classes from the new jetty jar causes a 'sealing violation exception'. (this is with hadoop-19)
only reasonable alternative i can think of now is to run the hwiserver by spawning a new jvm (with a modified classpath that omits hadoop's jetty jars)
Joydeep,
We do depend on Hadoop to provide Jetty. The rational was not to require extra or external libraries for the user. At the time Hive had just become its own project from being a Hadoop contrib project so it made sense to depend on Hadoop Jetty. We have a few other options. We can use a completely different Web Server.. Now we have no conflicts. Or we can just build a war with no embedded type options. Even if we switch to tjws we still might end up using reflection since the API could change over time although we would chose when to upgrade the servlet engine, not hadoop. For now I will make a version that uses reflection to start up the server, since these changes are mostly cosmetic.
Ok half way there. I added an abstraction to use reflection to completely kick up the HWI Jetty Server. Right now I only added the code for jetty5 0.19.0. 20 soon. Is this what everyone had in mind?
This sounds reasonable to me. Will go over the patch in more detail. Are you planning to upload another one soon or should I just review this one?
This is going a little slow. The reflection aspect is pretty painful coding. I think I am 98% percent complete. New test cases added. Hopefully I can have a final take in a day or two, sometimes its is hard to decide what exceptions to throw etc since there are very few design patters based around reflecting entire applications
This patch passes all unit tests. Also deployed and tested to a 0.19.0 cluster and a 0.20.0 cluster.
Changes to hive-default.conf are to correct
hive-hwi.war
to
hive_hwi.war
My changes to HWIServer.java clobbered other changes in the previous 0.20 patch since HWIServer.java delegates most duty to ServerWrapper.java
Overall, it looks good - but can you do a simple cosmetic changes.
1. Have 1 patch instead of different patches for jetty and 487
2. Add more comments:
/**
29 Hadoop 17-19 used Jetty5. Hadoop 20 uses jetty6. Hive still should compile and
...run with all versions.
30 Java is strongly typed Class based language. The Reflection API is required to
...circumvent the strong
31 typing. We have used the reflection API to deal with the known versions of Jetty
...we must work with.
32 CS students: If you are ever in a debate about classless VS classful programming
...be sure to
33 reference this code.
34 */
is very good:
Can you repeat a subset of this in HadoopVersion also ?
Hive still should compile and
...run with all versions. (17-20)
3. usesJobShell: can you add more comments here – it is true for version 20 but not for 20 etc.
4. This may be outside the scope of this - but should some unit tests run for hadoop17, and some for hadoop 20, as part of ant test.
Currently, all of them use the default 19. As I mentioned before, this can be done in a follow-up also.
4. is not needed - we can enable 20 from hudson
+1
Can you add some more comments, and then I can commit it
A couple thoughts:
- Does the same compiled jar truly work in all versions of Hadoop between 0.17 and 0.19? That is to say, can we consider an option in which we use some build.xml rules to, depending on the value of a hadoop.version variable, swap between two implementations of the same .java file (one compatible with Jetty 5, one with Jetty 6)? Then in the build product we could simply include two jars and have the wrapper scripts swap between them based on version. If size is a concern, the variant classes could be put in their own jar that would only be a few KB.
- The reflection code in this patch is pretty messy. I mocked up an idea for a slightly cleaner way to do it, and will attach it as a tarball momentarily. The idea is to define our own interfaces which have the same methods as we need to use in Jetty, and use a dynamic proxy to forward those invocations through to the actual implementation class. Dynamically choosing between the two interfaces is simple at runtime by simply checking that the method signatures correspond. This is still dirty (and a bad role model for CS students
) but it should reduce the number of Class.forName and .getMethod calls in the wrapper class
Here's a tarball showing the technique mentioned in the comment above. The script "run.sh" will compile and run the example once with "v1" on the classpath, and a second time with "v2" on the classpath. I'm not certain that this will cover all the cases that are needed for Jetty, but I figured I would throw it out there.
@Todd - Where were you a few weeks ago?
Then in the build product we could simply include two jars and have the wrapper scripts swap between them based on version
The jars are upstream in Hadoop core. I did not look into this closely but the talk about 'Sealing exceptions' above led me to believe I should not try this.
I have wrapped my head around most of your Dynamic Proxy idea. My only concern is will the ant process cooperate? Will eclipse think the HWI classes are broken? Can we translate your run.sh into something ant/eclipse can deal with?
public class WebServer { public void someMethod(String arg) { System.out.println("Webserver v1: " + arg); } }
I really don't want to have one 'someMethod' per each Jetty method. Just start(), stop(), init(). I like your implementation, but this is such a 'hacky' thing, I wonder is it worth thinking that hard? Hopefully the Jetty crew will be happy with their API for the next few years. Hopefully, we will not be supporting Hadoop 0.17.0 indefinitely. Honestly all that reflection has me 'burnt out'.
If you/we can tackle the ant/eclipse issues I would be happy to use the 'Dynamic Proxy', but maybe we tackle it in a different Jira because this is a pretty big blocker and I am sure many people want to see this in the trunk.
@Todd - Where were you a few weeks ago?
Chillin' over on the HADOOP jira
We're gearing up for release of our distribution that includes Hadoop 0.20.0, so just started watching this one more carefully.
The jars are upstream in Hadoop core. I did not look into this closely but the talk about 'Sealing exceptions' above led me to believe I should not try this.
Sorry, what I meant here is that the hive tarball would include lib/hive-0.4.0.jar, lib/jetty-shims/hive-jetty-shim-v6.jar and lib/jetty-shims/hive-jetty-shim-v5.jar
In those jars we'd have two different implementations of the shim. The hive wrapper script would then do something like:
HADOOP_JAR=$HADOOP_HOME/hadoop*core*jar if [[ $HADOOP_JAR =~ 0.1[789] ]]; then JETTY_SHIM=lib/jetty-shims/jetty-shim-v5.jar else JETTY_SHIM=lib/jetty-shims/jetty-shim-v6.jar fi CLASSPATH=$CLASSPATH:$JETTY_SHIM
To generate the shim jars at compile time, we'd compile two different JettyShim.java files - one against the v5 API, and one against the v6 API.
As for eclipse properly completing/warning for the right versions for the right files, I haven't the foggiest idea. But I am pretty sure it's not going to warn if your reflective calls are broken either
My only concern is will the ant process cooperate?
I don't see why not - my example build here is just to show how it works in a self contained way. The stuff inside v1-classes and v2-classes in the example are the equivalent of the two jetty jar versions - we don't have to compile them. The only code that has to compile is DynamicProxy.java which is completely normal code.
If you/we can tackle the ant/eclipse issues I would be happy to use the 'Dynamic Proxy', but maybe we tackle it in a different Jira because this is a pretty big blocker and I am sure many people want to see this in the trunk.
As for committing now and not worrying, that sounds pretty reasonable, as long as there's some kind of deprecation timeline set out. (e.g "in Hive 0.5.0 we will drop support for versions of Hadoop that use Jetty v5" or whatever). As someone who isn't a major Hive contributor, I'll defer to you guys completely – I just wanted to throw the idea up on the JIRA.
need to add the shell script hack to switch the -libjars option as well based on the jar version.
Here's a patch which adds a project called "shims" with separate source directories for 0.17, 0.18, 0.19, and 0.20. Inside each there is an implementation of JettyShims and HadoopShims which encapsulate all of the version-dependent code. The build.xml is set in such a way that ${hadoop.version}
determines which one gets compiled.
This probably needs a bit more javadoc before it's commitable, but I think it's worth considering this approach over reflection.
Also, it seems like hadoop.version may be 0.18.0, 0.18.1, 0.18.2, etc. As long as it's kosher by Apache SVN standards, we should put a symlink for each of those versions in the shims/src/ directory pointing to 0.18, and same for the other minor releases. If symlinks aren't kosher, we need some way of parsing out the major version from within ant.
Not being a regular contributor, I don't have a good test environment set up, but I've verified that this at least builds in all of the above versions.
I had added a GetVersionPref.java some time back to ant extensions in hive. It was later not used because we decided not to use preprocessing for 0.19 changes to validateInput and instead decided to rely on reflection. That can easily be resurrected.
Let me look at this version as well. Also I am going to change this to a blocker as many people are waiting for this.
changing to a blocker.
Looks like a clean implementation. However, I do think that this will need some changes to the eclipse-templates to make it work with eclipse. We would want to conditionally add the src directory in shims corresponding to the proper version of hadoop to the eclipse launch templates in hive/eclipse-templates. Will try this out.
Seems to not compile with 0.17.0
ant -Dhadoop.version=0.17.0 clean package ....
[ivy:retrieve] 1 artifacts copied, 0 already retrieved (14101kB/79ms)
install-hadoopcore-internal:
[untar] Expanding: /data/users/athusoo/commits/hive_trunk_ws9/.ptest_0/build/hadoopcore/hadoop-0.17.0.tar.gz into /data/users/athusoo/commits/hive_trunk_ws9/.ptest_0/build/hadoopcore
[touch] Creating /data/users/athusoo/commits/hive_trunk_ws9/.ptest_0/build/hadoopcore/hadoop-0.17.0.installed
compile:
[echo] Compiling: shims
[javac] Compiling 2 source files to /data/users/athusoo/commits/hive_trunk_ws9/.ptest_0/build/shims/classes
[javac] /data/users/athusoo/commits/hive_trunk_ws9/.ptest_0/shims/src/0.17.0/java/org/apache/hadoop/hive/shims/HadoopShims.java:48: cannot find symbol
[javac] symbol : variable JobClient
[javac] location: class org.apache.hadoop.hive.shims.HadoopShims
[javac] Configuration conf = JobClient.getCommandLineConfig();
[javac] ^
[javac] 1 error
Woops, sorry about that. Simply add an import for o.a.h.mapred.JobClient and it compiles. New patch in a second
Fixes the missing import. Now compiles with hadoop.version=0.17.0
Modified Todd's patch so that it compiles cleanly with 0.20 and 0.17 as well. I have also added support in this patch to generate the proper eclipse files and have verified that this works with eclipse. Additionally the directory names within in shims have been renamed to 0.17, 0.18, ... 0.20 instead of 0.17.0, .. 0.20.0
Please take a look at this. Want to get this in as soon as possible so that we can move ahead with the branching.
Joy was mentioning that an additional change to the cli shell script needs to be made for -libjars support. Joy, can you elaborate on that?
Patch looks good for me (just inspected it visually over here)
One question: once we use these shims, is it possible that we could have just a single hive distribution which works for all versions of Hadoop? I think we may be able to accomplish this by making the shim jar output be libs/shims/hive_shims-{$hadoop.version.prefix}.jar. Then either through ClassLoader magic or shell wrapper magic, we put the right one on the classpath at runtime based on which hadoop version is on the classpath.
Is this possible? Having different tarballs of hive for different versions of hadoop makes our lives slightly difficult for packaging.
It would be ideal if we can make a single jar work with different hadoop versions through classloader magic. There are also some things that are needed in the hive cli script which would have to be abstracted away through a configuration/envrionment variable. Lets try to do that for the long term, but get this in for 0.4.0. Does that sound reasonable?
Hi Ashish,
That does sound reasonable, though I will likely take it on in the short term, as we will be distributing packages for hadoop-0.18 and hadoop-0.20 until the majority of the community and our customers have transitioned over. During that time period we'd like to have a single "hive" package which will function with either. We can apply my work on top of the 0.4.0 release for our distribution, so it shouldn't block it, but I do think it would be nice if this feature were "upstream" in the Apache release.
I've got some time blocked off to work on this - if I get something working this week do you think it might be able to go into 0.4.0?
-Todd
sure. We can get it into 0.4.0. We can wait for your checkin before freezing on 0.4.0 but we would like to at least branch 0.4.0 this week (will have a vote out for it soon). All that means is that if we branch before your checkin, you will have to provide patches for trunk and 0.4.0. Is that ok?
Yep, that's fine. I'm a git user, branches don't faze me
attached the previous version with modifications for cli.sh. these modifications are required (even though they don't fix the problem entirely - see below).
the reason i am not able to fix the problem entirely is because -libjars is no longer processed automatically by RunJar. We have to convert CliDriver to implement 'Tool' interface for this to happen. this is easy - but i would rather not hold up things for that.
I would suggest incorporating the patch as such - open a new jira for auxlibs/auxpath not working in 0.4/trunk and fix it there.
on second thoughts - there's some code in execdriver that i need to call from CliDriver and things should work. will upload another one soon.
one more:
- --auxpath works in both 19 and 20 now. auxlib should also work - i haven't tested it separately
- removed -libjars for hadoop versions 20 and above from cli shell script. changes to CliDriver to add aux jars to classpath at runtime
note that hive server and hwi don't work with auxpath/lib in 20 and above (since that would also require non trivial changes to HWIServer and HiveServer). we can fix this as a followon (in case someone is using the two in combination - which seems doubtful).
please review changes to bin/ext/cli.sh and CliDriver
[junit] Begin query: alter2.q
[junit] diff -a -I (file:)\|(/tmp/.*) /data/users/njain/hive_commit1/hive_commit1/build/ql/test/logs/clientpositive/alter2.q.out /data/users/njain/hive_commit1/hive_commit1/ql/src/test/results/clientpositive/alter2.q.out
[junit] Done query: alter2.q
[junit] Begin query: alter3.q
[junit] plan = /tmp/plan60193.xml
[junit] java.lang.NoClassDefFoundError: org/apache/hadoop/hive/shims/HadoopShims
[junit] at org.apache.hadoop.hive.ql.exec.ExecDriver.initializeFiles(ExecDriver.java:95)
[junit] at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:358)
[junit] at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:571)
org.apache.hadoop.util.RunJar.main(RunJar.java:165)
[junit] at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
[junit] at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
[junit] at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
[junit] at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)
[junit] Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.shims.HadoopShims
[junit] at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
[junit] at java.security.AccessController.doPrivileged(Native Method)
[junit] at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
[junit] at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
[junit] at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
[junit] at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319)
[junit] ... 12 more
Most of the tests are failing
Taking a look at these failing tests now... any chance someone could hop on IRC in ##hive on freenode? I'm happy to do the work, but would appreciate having someone to hit with quick questions since I'm not too familiar with the code base.
The issue turned out to be that the shim classes weren't getting built into hive_exec.jar, which seems to include the built classes of many other of the components. I'm not entirely sure why this is designed like this (why not just have hive_exec.jar add the other jars to its own classloader at startup?) but including build/shims/classes in there fixed the tests. Attaching new patch momentarily
hive_exec is the one that's submitted to hadoop to execute the map-reduce jobs. so we bundle all the required classes in it up front.
it could be done differently (using libjars) - but was the path of least resistance at the start.
Attaching a new patch which makes the shim behavior happen at runtime. Here's the general idea:
- the shims/build.xml now uses Ivy to download tarballs for Hadoop 17, 18, 19, and 20. It builds each of the shim sources (from src/0.XX/), which have now been renamed so that each classname is unique (eg Hadoop20Shims.class).
- The results of all of these builds end up in a single hive_shims.jar
- Instead of being classes with all static methods, the shim classes are now non-static and are instantiated using ShimLoader.class, in a new shims/src/common/ directory
- ShimLoader simply uses o.a.h.util.VersionInfo to determine the current version info, and reflection to instantiate the proper shims for the current version.
I've tested this against pseudodistributed 18 and 20 clusters and it seemed to work. Unit tests also appear to work, though I haven't had a chance to let them run all the way through. I have not tested HWI at all as of yet.
Still TODO:
- I may have broken eclipse integration somewhat. I'm hoping someone who uses Eclipse can twiddle the necessary stuff there.
- I would appreciate a review of the javadocs for the HadoopShims interface. I don't know the specifics of some of the 17 behavior, so my docs are lame and vague.
- I think build.xml needs to be modified just a bit more so that the output directory/tarball no longer includes $ {hadoop.version}
in it. Additionally there are one or two ant conditionals based on hadoop version - I haven't had a chance to investigate them, but they should probably be removed
- I think we should have a policy that hadoop.version defaults to the most recently released apache trunk - right now it defaults to 0.19.
- To compile the shims we're downloading the entire release tarballs off the apache mirror. Would be nicer if we could just download the specific jars we need to compile against, but that might be a pipe dream.
hey - how do i apply this git diff using patch?
Normal patch -p0 ought to work:
todd@todd-laptop:~/cloudera/cdh/repos/hive$ patch -p0 < /tmp/hive-487-runtime.patch
patching file ant/build.xml
patching file bin/ext/cli.sh
patching file build-common.xml
patching file build.xml
etc...
(from a clean trunk checkout)
my bad .. code looks pretty clean.
one concern is the stuff that u mentioned already - that all hadoop versions need to be downloaded. in particular - sometime back i had made some fixes to allow hive to compile against a specific hadoop tree (see). but this would be reverting that i imagine.
Committed. Thanks Todd
When I try to start Hive with hadoop built by myself, I saw an exception:
java.lang.RuntimeException: Illegal Hadoop Version: Unknown (expected A.B.* format)
at org.apache.hadoop.hive.shims.ShimLoader.getMajorVersion(ShimLoader.java:101)
at org.apache.hadoop.hive.shims.ShimLoader.loadShims(ShimLoader.java:80)
at org.apache.hadoop.hive.shims.ShimLoader.getHadoopShims(ShimLoader.java:62)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver:166)
at org.apache.hadoop.mapred.JobShell.run(JobShell.java:194)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.mapred.JobShell.main(JobShell.java:220)
I guess I need to specify some Hadoop version information when compiling hadoop?
Hi Zheng,
There is some kind of bug I've seen before in Hadoop's build process where the version info doesn't get generated on your first compile. It's silly, but try running 'ant package' a second time in your Hadoop build tree? Running "hadoop version" should let you know whether the version info got compiled in.
-Todd
By doing "ant ... package package" I am able to generate the hadoop distribution with version information, and Hive runs fine with it now! Thanks, Todd!
This patch (based on trunk) will allow a compile and passes all but two unit tests (see attached junit test report). This is my first time contributing so I'm going to need a bit of help with the SQL result differences. | https://issues.apache.org/jira/browse/HIVE-487?focusedCommentId=12735773&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-11 | refinedweb | 4,912 | 68.26 |
73486/how-to-create-a-database-engine-using-sqlalchemy-in-python
Hi Guys,
I am trying to create one Database Engine in my python code. Can anyone tell me how can I do that?
Hi@akhtar,
You can use the Sqlalchemy module in your python code. It helps you to run the SQL command in your python code. You can use the below code in your script.
from sqlalchemy import create_engine
# Create the db engine
engine = create_engine('sqlite:///:memory:')
I hope this will help you.
I think you should try:
I used %matplotlib inline in ...READ MORE
connect mysql database with python
import MySQLdb
db = ...READ MORE
HDF5 works fine for concurrent read only ...READ MORE
David here, from the Zapier Platform team. ...READ MORE
You can also use the random library's ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
Enumerate() method adds a counter to an ...READ MORE
Hi@akhtar,
You can use the Sqlalchemy module to ...READ MORE
Hi@akhtar,
I don't know it will help you ...READ MORE
Hi@akhtar,
You can do this task using cv2 ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/73486/how-to-create-a-database-engine-using-sqlalchemy-in-python?show=73488 | CC-MAIN-2022-21 | refinedweb | 212 | 77.64 |
Connecting to Eduroam
I have been trying to figure out how to connect to Eduroam on my nano-gateway and am still having some trouble. I have used all the examples in the WLAN docs, but I believe I need to change things in main.py as well as config.py. I was hoping there was something as straightforward as:

```python
WIFI_SSID = 'eduroam'
WIFI_AUTH = 'username'
WIFI_PASS = 'pass'
```

but that doesn't seem to be the case.
I've been searching for Python examples but have only found guides for connecting a Linux system. Eduroam is basically everywhere, so I'm sure this answer will be well used.
Cheers,
Dylan
@thomand1000 Unfortunately I didn't get it entirely fixed. The best theory is that Eduroam has a UDP timeout feature. I did find a workaround that has been working consistently; it gives a maximum of about 30 seconds of downtime every 31 minutes.
The few things I changed are in the nanogateway.py file for the gateway example:
After 1900 seconds the message "Failed to pull downlink packets from server" would occur, so I put a machine.reset() in:
```python
def _pull_data(self):
    token = uos.urandom(2)
    packet = bytes([PROTOCOL_VERSION]) + token + bytes([PULL_DATA]) + ubinascii.unhexlify(self.id)
    with self.udp_lock:
        try:
            self.sock.sendto(packet, self.server_ip)
        except Exception as ex:
            machine.reset()
            self._log('Failed to pull downlink packets from server: {}', ex)
```
And on every 4th or 5th restart it would just not connect, so I added a simple timeout reset to the Wi-Fi connection:
```python
def _connect_to_wifi(self):
    count = 0
    self.wlan.connect(ssid='eduroam',
                      auth=(WLAN.WPA2_ENT, 'Username', 'Password'),
                      identity='Username', timeout=30)
    while not self.wlan.isconnected():
        utime.sleep(1)
        count = count + 1
        print(count)
        if count >= 30:
            machine.reset()
    self._log('WiFi connected to: {}', self.ssid)
```
Hopefully one day I'll be able to get the UDP timeout thing properly fixed, but they don't change things like this in a hurry for a student haha.
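The bounded-wait pattern in that connect loop can be tried out off-device. Below is a plain-Python sketch (no `machine` or `network` modules needed); `ResetRequired` stands in for `machine.reset()`, and `wait_until` is a hypothetical helper written for this illustration, not something in nanogateway.py:

```python
import time

class ResetRequired(Exception):
    """Stands in for machine.reset() so the pattern runs on plain Python."""

def wait_until(predicate, timeout_s=30, poll_s=1, sleep=time.sleep):
    """Poll `predicate` once per `poll_s` seconds; give up after `timeout_s`.

    Mirrors the workaround above: instead of letting the isconnected()
    loop spin forever on the occasional bad boot, count the seconds and
    bail out (reset, on the board) once the count reaches the timeout.
    """
    waited = 0
    while not predicate():
        sleep(poll_s)
        waited += poll_s
        if waited >= timeout_s:
            raise ResetRequired('not connected after %ds' % waited)
    return waited
```

On the LoPy the predicate would be `self.wlan.isconnected` and the exception handler would call `machine.reset()`.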
- thomand1000 last edited by
It seems as some of you got it to work. Can you please provide a short guide with some code that worked?
Thanks
@livius I try to connect with the following command:
```python
wlan.connect(ssid="eduroam", auth=(WLAN.WPA2_ENT, EDUROAM_USER, EDUROAM_PWD), identity=EDUROAM_IDENTITY, timeout=5000)
```
Where:
- EDUROAM_USER = my email address
- EDUROAM_PWD = my eduroam passwd
- and EDUROAM_IDENTITY = my email address too.
And yes, as my university's eduroam configuration document says, the authentication mechanism is EAP-MSCHAP-V2. (A screenshot of the web page, unfortunately in Spanish, is attached.)
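One easy-to-miss detail with WPA2-Enterprise calls like this: `auth` must be a single 3-tuple, i.e. `auth=(WLAN.WPA2_ENT, user, password)` with its own parentheses, not three loose arguments. A tiny plain-Python shape check (illustrative only; `looks_like_ent_auth` is not a Pycom API):

```python
def looks_like_ent_auth(auth):
    """True if `auth` has the (sec_type, username, password) shape
    expected for a WPA2-Enterprise connect() call."""
    return (isinstance(auth, tuple) and len(auth) == 3
            and all(isinstance(part, str) and part for part in auth[1:]))
```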
@franman
What does your connection look like?
I suppose you do not provide any cert, as it is not needed on the client side for EAP-MSCHAP-V2.
And are you sure that it is not PEAP-EAP-MSCHAPv2?
@jmarcelino
I have followed the section "Connecting with EAP-PEAP or EAP-TTLS" of the documentation, and your comments @jmarcelino, but I still can't connect my LoPy to Eduroam. Is it possible that the authentication method matters? At my university the authentication method is EAP-MSCHAP-V2, and my question is: is this method supported by the LoPy?
@jmarcelino I found where I made my mistake earlier: the password is case-sensitive, haha. It's connected and working well.
Thanks for the help
- jmarcelino last edited by jmarcelino
@dylan
You need to change nanogateway.py; changing config.py won't do anything.
For a quick test, just put the values directly in the self.wlan.connect(...) call; if it works, we'll change it to pick them up from config.py.
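Once the hard-coded values work, that follow-up step of lifting them into config.py could look roughly like this. The attribute names (`WIFI_USER`, `WIFI_PASS`, etc.) are illustrative, not what nanogateway.py actually reads, and the `'WPA2_ENT'` placeholder string would be the real `WLAN.WPA2_ENT` constant on the board:

```python
def wifi_args_from_config(config):
    """Build keyword arguments for wlan.connect() from a config module.

    For eduroam the EAP identity is normally the same as the username,
    so it is reused here unless the config overrides it.
    """
    user = getattr(config, 'WIFI_USER')
    return {
        'ssid': getattr(config, 'WIFI_SSID', 'eduroam'),
        'auth': ('WPA2_ENT', user, getattr(config, 'WIFI_PASS')),
        'identity': getattr(config, 'WIFI_IDENTITY', user),
        'timeout': getattr(config, 'WIFI_TIMEOUT', 30),
    }

# On-device this would be called as:
#   self.wlan.connect(**wifi_args_from_config(config))
```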
@jmarcelino thanks for the reply. I suspected it had something to do with the WPA2 Enterprise network, just had no idea how to implement it; cheers for the help. I am still having trouble with what placeholders to put in. I put the WPA2 Enterprise stuff (wlan.connect etc.) into the config file, but I believe I need to change things in main.py too. Sorry for the lack of explanation; gotta go out for a bit and wanted to get this post out.
- jmarcelino last edited by jmarcelino
@dylan
Eduroam is a WPA2 Enterprise network. You need to change the WLAN initialisation accordingly; see the "Connecting with EAP-PEAP or EAP-TTLS" example:
I think for Eduroam your username and the identity are your e-mail address. You shouldn't need to have the ca_certs option.
As a quick test, change line 207 in nanogateway.py (in the _connect_to_wifi function) from

```python
self.wlan.connect(self.ssid, auth=(None, self.password))
```

to:

```python
self.wlan.connect(ssid='eduroam', auth=(WLAN.WPA2_ENT, 'youremail', 'password'), identity='youremail')
```
Of course, replace the placeholder fields with your e-mail address and password respectively.
I'm not sure this will work, as I've not used eduroam in a few years, so any feedback would be appreciated.
Reminder Using Tkinter
Discussion in 'Python' started by RiNo, Dec 13, 2012.
- Similar Threads
Send a ReminderMariame, Oct 14, 2004, in forum: ASP .Net
- Replies:
- 3
- Views:
- 489
- Ryan Walberg, MCSD for .NET
- Oct 14, 2004
Email reminder from ASP.Net C# website.Peter Rilling, May 31, 2005, in forum: ASP .Net
- Replies:
- 1
- Views:
- 5,767
- Adhik Kadam
- May 31, 2005
Reminder - Call for Papers VM 2004Alex Walker, Aug 1, 2003, in forum: Java
- Replies:
- 0
- Views:
- 598
- Alex Walker
- Aug 1, 2003
Reminder: research on roles of variablesPetri Mikael Gerdt, Jan 27, 2005, in forum: Java
- Replies:
- 0
- Views:
- 929
- Petri Mikael Gerdt
- Jan 27, 2005
Application Java for GSM : Birthday Reminderdmoniac75, Mar 4, 2005, in forum: Java
- Replies:
- 0
- Views:
- 1,350
- dmoniac75
- Mar 4, 2005
from Tkinter import *,win = Tk() "from Tkinter import *"Pierre Dagenais, Aug 3, 2008, in forum: Python
- Replies:
- 0
- Views:
- 590
- Pierre Dagenais
- Aug 3, 2008
What is the differences between tkinter in windows and Tkinter in theother platform?Hidekazu IWAKI, Dec 14, 2009, in forum: Python
- Replies:
- 1
- Views:
- 662
- Peter Otten
- Dec 14, 2009
Re: Using property() to extend Tkinter classes but Tkinter classesare old-style classes?Terry Reedy, Nov 28, 2010, in forum: Python
- Replies:
- 5
- Views:
- 850
- Robert Kern
- Nov 30, 2010 | https://www.thecodingforums.com/threads/reminder-using-tkinter.955462/ | CC-MAIN-2017-34 | refinedweb | 214 | 65.05 |
I need to define a variable that contains all possible natural numbers, or at least all numbers from 1 to a million.
I don't want to use the
range
range
X
X
var
In Python 3.2 and higher, representing a container with all integers from 1 to a million is correctly done with
range:
>>> positive_nums_to_1M = range(1, 1000001) >>> 1 in positive_nums_to_1M True >>> 1000000 in positive_nums_to_1M True >>> 0 in positive_nums_to_1M False
It's extremely efficient; the numbers in the range aren't actually generated, instead the membership (or lack thereof) is computed mathematically.
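To see that membership really is computed rather than looked up, note that the range object's memory footprint stays tiny no matter how wide the range is — a quick Python 3 sketch (mine, not part of the original answer):

```python
import sys

span = range(1, 1000001)

# Membership is answered arithmetically from (start, stop, step),
# so it is O(1) regardless of how wide the range is.
print(999999 in span)   # True
print(1000001 in span)  # False (stop is exclusive)

# The range object stores only start/stop/step -- it never
# materializes a million integers, unlike an explicit list would.
print(sys.getsizeof(span) < 100)              # True
print(sys.getsizeof(list(span)) > 1_000_000)  # True
```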
If you need some equivalent object that supports any positive integer, or need it in Python 2.x, you'll have to write your own class, but it's not hard:
from operator import index

class natural_num_range(object):
    def __init__(self, maxnum=None):
        if maxnum is not None:
            maxnum = index(maxnum)  # Ensure a true native int (or long)
        self.maxnum = maxnum

    def __contains__(self, x):
        try:
            x = index(x)
        except TypeError:
            return False
        return x >= 1 and (self.maxnum is None or x < self.maxnum)
That does something similar to range, but without supporting a start or step, and not requiring a stop, so you can do constant-time membership tests:

>>> natural = natural_num_range()
>>> all(i in natural for i in range(1, 10000000, 10000))
True
>>> any(i in natural for i in range(-100000000, 0, 10000))
False
The Galaxy FileSystem Toolkit (GFT) allows developers to create user-level file-system extensions that are viewable through Windows Explorer. While the toolkit itself is constructed using C++ COM/ATL code, extension developers can program in higher-level strongly-typed, garbage-collected languages such as C# and Java. Thus, the Toolkit eases the development of user-level file-systems on the Windows platform. The GFT has been used with a Java file-system extension to create a Windows Explorer interface to NFS version 2 fileservers. Other examples of possible extensions include:
The Toolkit is a Windows Shell NameSpace Extension (NSE) which communicates to your custom code (the extension) via a proxy and stub pair. For a view of this, see Figure 1. The Toolkit manages a part of the Shell namespace, e.g., the Windows path "C:\Custom", and converts all Windows Explorer operations on this namespace into the appropriate method invocations on your file-system extension. For example, when a file is dragged-n-dropped onto the folder "C:\Custom", the Toolkit will first invoke CreateFile and then a series of Write calls on your extension.
The rest of this document describes the steps necessary to install the Toolkit and to create a simple extension. Section 2 covers the steps needed to install the Toolkit, while the bulk of the document, Section 3, will guide you through the creation of a sample extension in C#. Known issues and limitations are listed in Section 4, and Acknowledgements are given in Section 5.
To begin, download either the Toolkit source code or the binary installer package from the website. C# programmers will install from the source, since the source zip file contains the necessary interfaces and sample code. Java developers will typically install the GFT using the binary installer, since this procedure does not require Visual Studio.
The GFT can be installed from a Visual Studio solution file and the source code, using the following steps:
Installing from the binary involves the following steps:
At this point, you should have the Galaxy namespace extension (NSE) installed, and a sample C# Mirroring FileSystem extension running.
To explore this Mirror FileSystem, start a Windows Explorer process (Windows Key + E) and navigate to "My Computer". You should notice a new system folder called "Galaxy". Browsing to this folder should reveal two files: COPYING.txt and Subfolder. You can open the text file (the GNU GPL) and also open the Subfolder folder, as well as open the JPG file contained in the Subfolder folder. (These files are mirrors of the files contained in the folder, Test, which was distributed with the source or installed along with the binary, depending on your method of installation.)
You can try copying files to and from the Galaxy file-system, with the caveat that multiple selections are not yet supported. (In other words, you can copy entire folders, but you cannot highlight and copy two or more files at the same time.) Other caveats are listed in Section 4.
To create your own C# file-system extension, you will need to perform two steps:
In this tutorial, we will create a dummy read-only file-system extension called "RandomFileSystem", which will display random text files which are each filled with random text.
First, startup Visual Studio (or your personal IDE) and open the CSharpFileSystem solution file. Now, create a new class in the CSharpFileSystem namespace, which inherits from CSharpFileSystem.FileSystem. We will call this new class "RandomFileSystem". Give the default constructor a single integer argument which specifies how many files to display per folder.
Your code should look like the following:
using System;
using System.Diagnostics;
using System.IO;

namespace CSharpFileSystem
{
    /// <summary>
    /// Summary description for RandomFileSystem.
    /// </summary>
    public class RandomFileSystem : FileSystem
    {
        int myNumFiles;

        public RandomFileSystem(int i)
        {
            myNumFiles = i;
        }
    }
}
Now, we need to implement the nine methods of the FileSystem abstract class. In each of the following sections, we will implement a method and discuss the implementation details. In the methods where we do not provide an implementation, we still return a success code of true; Galaxy’s error handling is not yet robust, and thus a false return value is likely to cause a failed assertion.
This command is used for extensibility, and it allows the namespace extension to pass an arbitrary command string to your file-system. Currently, the only command that is passed is "trace", which gets passed when the user selects "Trace" from the context menu (by right clicking). We can safely ignore this, and so we give a dummy implementation:
public override bool Command(string command_string) {
    if (command_string.ToLower().Equals("trace")) {
        Debug.WriteLine("We should log our trace files to a log file here.");
        return true;
    } else {
        return false;
    }
}
This method is used for a fast-path copy so that the Galaxy namespace extension can pass filenames (instead of file contents) when it wants to copy a file to your file system. Since this is an optimization, we will ignore this method and provide the following debugging implementation (Note: You must fully-qualify the FileInfo class since Galaxy defines a FileInfo class as well):
public override bool CopyFile(string windows_src, string galaxy_dst) {
    System.IO.FileInfo fi = new System.IO.FileInfo(windows_src);
    Debug.WriteLine("The Galaxy NSE wants to copy a file of length "
        + fi.Length + " to our filesystem");
    return true;
}
Since we are implementing a random read-only file system, the CreateDirectory call is a null operation in our implementation.
Use the following code to implement this method:
public override bool CreateDirectory(string path)
{
    Debug.WriteLine("We would normally create a directory with the path "
        + path + " at this point");
    return true;
}
The same goes for the create file command, so use the following code here:
public override bool CreateFile(string path)
{
    Debug.WriteLine("We should create a 0-length file with the name "
        + path + " here");
    return true;
}
Again, since our RandomFileSystem is read only, use the following code for delete:
public override bool Delete(string path) {
    Debug.WriteLine("We are being asked to delete the file " + path);
    return true;
}
Here, we need to create a random set of files and return them. We will make use of the myNumFiles field to help us choose how many files to return. We choose each odd numbered file to be a text file, while each even-numbered file returned is a folder (i.e., a directory). Each text file is given a random FileSize from 0 to 1023 bytes. This size will be displayed in Windows Explorer, and also used by Galaxy: whenever the file is opened or copied, only the first FileSize bytes will be read.
Folders are given a size of 0, and whenever a folder is opened in Windows Explorer, this ListFiles method will be called on that path name. (Note: While we fill in all of the fields, only the FileName, FileSize, and FolderFlag are currently used by Galaxy.)
public override FileInfo[] ListFiles(string path) {
    Random rnd = new Random();
    int num_files_to_return = rnd.Next(myNumFiles);
    FileInfo[] ret = new FileInfo[num_files_to_return];
    for (int i = 0; i < num_files_to_return; i++) {
        ret[i] = new FileInfo();
        ret[i].CreateTime = DateTime.Now;       // not currently used
        ret[i].LastAccessTime = DateTime.Now;   // not currently used
        ret[i].LastModifiedTime = DateTime.Now; // not currently used
        ret[i].FilePath = "Empty path field";   // not currently used
        ret[i].FolderFlag = (i % 2 == 0);       // even numbered files are folders
        if (ret[i].FolderFlag) {
            ret[i].FileName = "" + i;
            ret[i].Size = 0;
        } else {
            ret[i].Size = rnd.Next(1024);       // files can be up to 1k in size
            ret[i].FileName = i + ".txt";       // non-folders are text files
        }
    }
    return ret;
}
The Read method is called whenever a file in Galaxy is copied to Windows, or when a Galaxy file is opened. Since we are implementing a simple read-only random file system, we will return a random string of bytes for the read call (taking care to return only the specified number of bytes). Note that we do not keep the file size consistent across calls, and simply create a new file size in this call. Also, we make sure not to return more bytes than Windows Explorer is asking for (count), and we only return data when the offset into the file is equal to 0 (i.e., normally, the first read call).
public override byte[] Read(string path, int offset, int count) {
    Random rnd = new Random();
    int max_file_size;
    if (offset > 0) { // only return data the first time we are asked
        max_file_size = 0;
    } else {
        max_file_size = rnd.Next(1024); // files can be up to 1k in size
    }
    // only return up to 'count' bytes
    int file_size = Math.Min(max_file_size, count);
    byte[] ret = new byte[file_size];
    for (int i = 0; i < ret.Length; i++) {
        ret[i] = (byte)('a' + rnd.Next(26));
    }
    return (ret);
}
The Stat method is called by Galaxy to get file information whenever files are copied from Galaxy to Windows. We will parse the path name to determine if we are being asked for a file with a name containing the string ".txt". If it does, then we know that the path refers to a file. Otherwise, the path refers to a folder. (Since we control all file names, we can be sure that folders do not contain the string ".txt")
Note that we do not take special care to preserve consistent file sizes: the file size returned from this method is random, and is probably different from the file size returned from the ListFiles method.
public override FileInfo Stat(string path)
{
    FileInfo fi = new FileInfo();
    fi.CreateTime = DateTime.Now;       // not currently used
    fi.LastAccessTime = DateTime.Now;   // not currently used
    fi.LastModifiedTime = DateTime.Now; // not currently used
    Random rnd = new Random();

    // parse the path into file and directory parts
    int index = path.LastIndexOf("/");
    string filename = path.Substring(index + 1);
    string filepath = path.Substring(0, index);

    if (filename.IndexOf(".txt") != (-1))
    {
        // this must be a file
        fi.FolderFlag = false;
    }
    else
    {
        // this must be a folder
        fi.FolderFlag = true;
    }

    fi.FileName = filename; // keep the same filename
    fi.FilePath = filepath; // not currently used

    if (fi.FolderFlag)
    {
        fi.Size = 0;
    }
    else
    {
        fi.Size = rnd.Next(1024); // files can be up to 1k in size
    }
    return (fi);
}
Since we are implementing a read-only file system, we ignore this method and provide a simple implementation:
public override int Write(string path, int offset, int count, byte[] buffer) {
    Debug.WriteLine("Explorer wants to write " +
        count + " bytes into a file called " +
        path + " at byte offset " + offset);
    return count; // pretend that we wrote all of the bytes
}
A complete listing of the random file system is available with the source Zip file.
Now, we will modify the driver file Driver.cs to create our new file-system extension. Change the line (roughly line 50):
FileServer s = new FileServer(int.Parse(args[0]),
new FileSystems.Mirror.MirrorFileSystem(args[1]) );
to
FileServer s = new FileServer(int.Parse(args[0]),
new RandomFileSystem(int.Parse(args[1])));
Note that we are passing the second command line argument to the RandomFileSystem constructor as an integer. So, let’s now change this command line parameter to a proper value. Select the CSharpFileSystem project, then select Properties. Change the command line params under "Configuration Properties, Debugging, Start Options, Command Line Arguments" from "8052 ../../Test" to "8052 10". This should allow up to 10 files per folder in our new random file-system. Apply the change, and click "OK". Now, build the code and start our new random file-system by hitting "F5".
Navigate to the Galaxy file system using Windows Explorer, and verify that the random file system works. You may get what seems like strange behavior (zero files, the number of files keep changing, the content of each file changes) but this is how the RandomFileSystem works! You can copy files from the RandomFileSystem, but take care when copying folders: you have a very good chance of exceeding the maximum path length of 260 characters since the depth of any folder is a random variable with unbounded expectation.
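That last remark can be checked with a few lines of arithmetic (my own back-of-envelope sketch, not part of the article). With the argument set to 10, ListFiles returns rnd.Next(10) entries — 0 through 9, uniformly — and the even-indexed entries are folders, so a folder with n children contains ceil(n/2) subfolders:

```python
import math

max_entries = 10  # the "8052 10" command-line setting

# Expected number of subfolders spawned by any one folder.
expected_subfolders = sum(math.ceil(n / 2) for n in range(max_entries)) / max_entries
print(expected_subfolders)  # 2.5
```

Since each folder produces 2.5 subfolders on average, the folder tree is a supercritical branching process, so its expected total size really is infinite — hence the warning about recursive copies.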
As a next step, you can read through the MirrorFileSystem file system contained in FileSystems.MirrorFileSystem.cs in the source code bundle. This is a more complete file system example which mirrors a specified folder of the local Windows file system. Be sure to change the command line arguments of the CSharpFileSystem project back to their original form.
Since, for the MirrorFileSystem, the second command line argument specifies which folder to mirror, you are free to change this to any folder that you choose. For example, you can change the command line arguments to "8052 C:\Windows" if you would like. (The first argument, 8052, specifies which port the File Server will listen to and must match the port used by the Galaxy namespace extension.)
While basic file operations are implemented (e.g., cut-and-paste, drag-n-drop), there are some limitations in the NSE which should be fixed in future releases. These limitations for Galaxy v1.0 are listed below:
The task of learning COM and the NSE interfaces was made easier by the contribution of several whom we would like to acknowledge here. First, the articles published by Pascal Hurni, Nemanja Trifunovic, Henk Devos, and Michael Dunn on the CodeProject.com website were invaluable in bringing us up to speed on namespace extensions and COM programming in the Windows environment.
The GPL’d code from Pascal Hurni on the CodeProject website and the GPL’d Amiga Disk File namespace extension by Bjarke Viksoe formed the basis for our GFT toolkit. Finally, all of the users who frequent the newsgroup microsoft.public.platformsdk.shell and, in particular, Jim Barry were helpful in finding subtle bugs in our implementation.
CHFLAGS(2) OpenBSD Programmer's Manual CHFLAGS(2)
NAME
chflags, fchflags - set file flags
SYNOPSIS
#include <sys/stat.h>
#include <unistd.h>
int
chflags(const char *path, unsigned int flags);
int
fchflags(int fd, unsigned int flags);
DESCRIPTION
SF_IMMUTABLE The file may not be changed.
SF_APPEND The file may only be appended to.
The ``UF_IMMUTABLE'' and ``UF_APPEND'' flags may be set or unset by ei-
ther the owner of a file or the super-user.
The ``SF_IMMUTABLE'' and ``SF_APPEND'' flags may only be set or unset by
the super-user. They may be set at any time, but normally may only be
unset when the system is in single-user mode. (See init(8) for details.)
ERRORS
[EOPNOTSUPP] The named file resides on a file system that does not sup-
port file flags.
[EINVAL] Only the super-user can change flags on block and character
devices.
[EINVAL] The flags value is invalid.
[EPERM] The effective user ID does not match the owner of the file
and the effective user ID is not the super-user.
[EOPNOTSUPP] The named file resides on a file system that does not sup-
port file flags.
[EROFS] The file resides on a read-only file system.
[EIO] An I/O error occurred while reading from or writing to the
file system.
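As an aside that is not part of the manual page: on BSD-derived systems the same call is exposed to Python as os.chflags, and the flag bits above are defined in the stat module on every platform. A hedged sketch (the helper name is my own):

```python
import os
import stat

# The flag constants mirror the C values even on platforms that
# cannot actually set them (e.g. Linux):
print(hex(stat.UF_APPEND))     # 0x4
print(hex(stat.SF_IMMUTABLE))  # 0x20000

def make_user_append_only(path):
    """Set UF_APPEND on path, on platforms that support file flags."""
    if not hasattr(os, "chflags"):  # absent where the syscall is missing
        raise OSError("file flags are not supported on this platform")
    st = os.stat(path)
    os.chflags(path, st.st_flags | stat.UF_APPEND)
```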
SEE ALSO
chflags(1), init(8)
HISTORY
The chflags() and fchflags functions first appeared in 4.4BSD.
OpenBSD 2.6 June 9, 1993 2 | http://www.rocketaware.com/man/man2/chflags.2.htm | crawl-001 | refinedweb | 218 | 75.71 |
All file systems follow the same general naming conventions for an individual file: a base file name and an optional
extension, separated by a period. However, each file system, such as NTFS and FAT, can have specific and differing rules about the formation of the individual components in a
directory or file name. Character count limitations can also be different and can vary depending on the path name prefix format used.
The following fundamental rules enable applications to create and process valid names for files and directories, regardless
of the file system:
Use any character in the current code page for a name, except the following reserved characters: < > : " / \ | ? *
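As an illustrative aside (the helper below is my own, not part of the original article), a name can be made safe by replacing the reserved and control characters and trimming the trailing spaces and periods that Windows also rejects:

```python
import re

def sanitize_filename(name, replacement="_"):
    # Replace the reserved characters < > : " / \ | ? *
    cleaned = re.sub(r'[<>:"/\\|?*]', replacement, name)
    # Control characters (integer values 0 through 31) are disallowed too.
    cleaned = "".join(ch if ord(ch) > 31 else replacement for ch in cleaned)
    # Windows rejects names that end with a space or a period.
    return cleaned.rstrip(" .")

print(sanitize_filename('report: "Q1/Q2"?.txt'))  # report_ _Q1_Q2__.txt
```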
Any discussion of path names needs to include the concept of a namespace in Windows. There are two main categories of namespace conventions used in the Win32 APIs, commonly referred to as the NT namespace and the Win32 namespace. The NT namespace was designed to be the lowest level namespace on which other subsystems and namespaces could exist, including the Win32 subsystem and, by extension, the Win32 file and device namespaces; the NT namespace is maintained in current versions of Windows for backward compatibility.
To sort out some of this, the following items are different examples of Win32 namespace prefixing and conventions, and summarizes how they are used.
The "\\?\" prefix tells the Win32 APIs to disable all string parsing and to send this string straight to the file system. For example, if the file system supports large paths and file names, you can exceed the MAX_PATH limits that are otherwise enforced by the Win32 APIs. This also allows you to turn off automatic expansion of ".." and "." in the path names. Many but not all file APIs support "\\?\"; you should look at the reference topic for each API to be sure.
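Purely as an illustration (the helper below is my own, not from this article), here is one way to apply the prefix in code. Note that the prefix is only meaningful on an absolute path, and that UNC paths use the special "\\?\UNC\server\share" spelling described in the same Windows path documentation:

```python
def to_extended_length(path):
    """Return path in "\\?\" extended-length form (assumes an absolute path)."""
    if path.startswith("\\\\?\\"):
        return path  # already in extended-length form
    if path.startswith("\\\\"):
        # UNC path: \\server\share\... becomes \\?\UNC\server\share\...
        return "\\\\?\\UNC\\" + path[2:]
    return "\\\\?\\" + path

print(to_extended_length("C:\\very\\long\\path\\file.txt"))  # \\?\C:\very\long\path\file.txt
print(to_extended_length("\\\\server\\share\\file.txt"))     # \\?\UNC\server\share\file.txt
```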
The "\\.\" prefix will access the device namespace instead of the file namespace. This is how you access physical disks and volumes directly, without going through the file system, if the API supports this type of access. You can access many other devices this way (using the CreateFile and DefineDosDevice functions, for example).
Most APIs won't support "\\.\", only those that are designed to work with the device namespace.
For example, if you want to open the system's serial communications port 1, you can use either "\\.\COM1" or "COM1" in the call to the CreateFile function. This works because COM1-COM9 are part of the reserved names in the NT file namespace as previously mentioned.
But if you have a 100 port serial expansion board and want to open COM56, you need to open it using "\\.\COM56". This works because "\\.\" goes to the device namespace, and there is no predefined NT namespace for COM56.
Another example of this is using the CreateFile function on "\\.\PhysicalDiskX" or "\\.\CdRom1" allow you to access those devices, bypassing the file system.
It just happens that the device driver that implements the name "C:\" has its own namespace that is the file system. APIs that go through the CreateFile function should work because CreateFile is the same API to open files and devices.
If you're working with Win32 functions, you should use only "\\.\" to access devices and not files. rooted at "\". The subdirectory "Global??" is where the Win32 namespace resides.
Named device objects reside in the NT namespace within the "Device" subdirectory. Here you may also find Serial0 and Serial1, the device objects representing the two COM ports if present on your system. The device object representing a volume would be something like "HarddiskVolume1", although the numeric suffix may vary. The name "DR0" under subdirectory "Harddisk0" would be the device object representing a disk, and so on.
To make these device objects accessible by Win32 applications, the device drivers create a symbolic link (symlink) in the Win32 namespace to their respective device objects; opening such a link through the Win32 APIs returns a Win32 handle.
For functions that manipulate files, the file names can be relative to the current directory. A file name is relative to the current directory if it does not begin with one of the following:
- A UNC name of any format, which always starts with two backslash characters ("\\").
- A disk designator with a backslash, for example "C:\" or "d:\".
- A single backslash, for example, "\directory" or "\file.txt" (also referred to as an absolute path).
Typically, Windows stores the long file names on disk as special directory entries, which can be disabled systemwide for
performance reasons depending on the particular file system. When you create a long file name, Windows may also create a short MS-DOS (8.3) form of the
name, called the 8.3 alias, and store it on disk. Starting with Windows 7 and Windows Server 2008 R2, this behavior can be disabled on specified volumes. If you need to work with the 8.3 file names, long file names, or the full path of a file from the system, consider the following options:
- To get the 8.3 form of a long file name, use the GetShortPathName function.
- To get the long file name version of a short name, use the GetLongPathName function.
- To get the full path to a file, use the GetFullPathName function.
Build date: 7/9/2009
Although it is not recommended to name a file using reserved words like CON, sometimes they end up on disk through any number of ways. To remove them, try this method from a command prompt with appropriate privileges. Let's say you have this offending file in the root of the C: drive. Here are the two commands to try:
C:\> rename \\.\C:\CON. deleteme
C:\> del deleteme
The Unicode versions of several functions permit a maximum path length of
approximately 32,000 characters composed of components up to 255 characters in
length. | http://msdn.microsoft.com/en-us/aa365247(VS.85).aspx | crawl-002 | refinedweb | 839 | 63.49 |
I’m relatively newish to Django, working on a little project to practise what I’m learning.
I have a table with a ManyToManyField that specifies a “through table” so that I can add a bit more information to the many-to-many relationship. All works fine and I’ve got some test data, no problems for the most part.
I’m stuck now though trying to work out how to read a field from the link in a view function along the following lines – edited for brevity and clarity…
def display_my_thing(request, ref_ident):
    qryset = Catalogue.objects.filter(
        Q(something...) | Q(more...) | Q(otherthing)
    ).select_related().distinct()
    my_obj = qryset.first()  # For simplicity's sake here.
    caption_html = my_obj.caption       # Ordinary field is fine.
    related_title = my_obj.entry.title  # Regular related field is fine.
    extra_info = my_obj.manytomanyTable.extra_field_name  # <-- No idea how to do this.
    # I'll give an example below based on the Django docs to make things easier.
    pass  # Etc....
So that we can talk about a concrete example without pasting a tonne of code, let’s just use the example in the Django docs here – Models | Django documentation | Django (The Django docs are fabulous btw. A huge resource!)
Let’s suppose I get a Group object like this:
g = Group.objects.filter(...something...).select_related().distinct().first()
Now, I can inspect g.name, or g.members__person (or something like that, I forget), but what I want to do in my view function is to pull out the “invite_reason”. I’m trying things like the following, but none of them work:
why_invited = g.membership__invite_reason
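For reference, taking the docs' Group/Membership example the post is built on: the extra field lives on the through model itself, so it has to be reached via a Membership object rather than via a lookup string on g. An untested sketch, assuming the models from the docs (some_person is a placeholder):

```python
# 1. Fetch the through-table row directly:
why_invited = Membership.objects.get(group=g, person=some_person).invite_reason

# 2. Or walk the reverse relation the through model gives Group:
for membership in g.membership_set.all():
    print(membership.person, membership.invite_reason)

# 3. Or pull it out of a queryset with a reverse lookup:
reasons = Group.objects.filter(pk=g.pk).values_list(
    "membership__invite_reason", flat=True
)
```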
I can inspect a lot of things with a “manage.py shell” but that’s not solved it for me here so far!
I’m hoping I’ve included a decent enough description to answer what is probably a simple question once you know
Thanks in advance,
James. | https://forum.djangoproject.com/t/reading-extra-fields-in-a-manytomany-through-table/10676 | CC-MAIN-2022-21 | refinedweb | 308 | 57.77 |
OK – following on from here and a later comment from Tim (and in fact more this rant from Jonas Maurus);
How about somebody sends me some good pro-PHP rants?
So here’s some pro-PHP ranting…
It’s the execution model
Focused on PHP as an Apache module the two big things are it works and it’s scalable. More to the point no one really has an execution model to compare with it, except perhaps Microsoft with ASP 3.0, which they’ve since abandoned. Before you fly off the handle, think about this one.
Tried to explain the basics a long time ago here – the important thing to take from that (compared to mod_perl / mod_python / mod_* or even “X” application server.) is the interpreter returning to a fresh state after every request (no globals hanging around or otherwise). PHP really is shared nothing. You want scaling? Try here..
Being a little more specific, the execution model PHP gets most testing under is mod_php and CGI, where there’s no “long running scripts” and no need for threads. PHP is optimized to that environment. By contrast Perl, Python and Ruby are general purpose languages and optimized to different requirements. The web is just another platform they support, compared to PHP where the web is the primary platform. The can be expressed in terms of configuration settings like max_input_time and post_max_size – with PHP these problems have had someone thinking about them.
Excellent database support
Lets start with the usual nag here – that PHP doesn’t have a common API for database abstraction. Well that’s always been bogus anyway – SQL itself is rarely portable and writing an application against a specific database requires specific work, so that you’re API happens to be the same as that for some other database is largely irrelevant. That said a common API does make the learning curve easier if you start another project with another DB but PHP’s first priority is to expose “vendor specific” features, like pg_send_query or mysql_unbuffered_query. Put another way, most of PHP’s DB APIs have a one to one relationship with the vendors client API and the benefit there is you don’t lose features or have to fight for them.
That’s not to say you can’t have your DB abstraction cake in PHP – the one I trust most is ADOdb – a native PHP implementation (side note – Python devs may be interested to know John also maintains a Python version of ADOdb). There’s also PDO which is getting there and (I’d argue) something that parallels Perl’s DBI. Should also point out PEAR::DB, PEAR::MDB2 and Creole. And don’t get me started on ORM…
What doesn’t get said is PHP may now have the best (as in most stable and feature complete) across the board db support of any of Perl, Python and Ruby, which, seen from one point of view, makes sense given the number of PHP users. That PHP’s DB support is better than Python’s or Ruby’s that’s probably no news but compared to Perl, was recently surprised to discover that DBI lacks support for Oracle collections, which PHP has. Perhaps that’s just a freak but perhaps not – anyone with experiences to share there?
PHP Arrays
Now the computer scientists typically hate PHP’s arrays, which are both indexed and hashed. Reality is they are not only easy for beginners to get in to, they’re very handy for the web problem (e.g. good fit to ad-hoc XML) and for simple iteration, it’s nice that hashes and indexed arrays unified by common syntax and behaviour. Sure they don’t support everything a computer scientist might want, such as (Python);
points = {} points[(1,2)] = 2 points[(2,4)] = 4
…but on the web, who cares? You’d hardly ever need something like that and where you do, JPGraph makes life easy enough. Meanwhile performance turns out to be not bad by comparison.
BTW, if you want computer science and PHP, take a look at (current favorite bizarre PHP project) J4P5 (a Javascript intepreter written in PHP). Can come up with endless projects of that nature if you’re interested.
More generally, there’s a virtual web ring (how 90’s) of PHP hate which starts from here. 90% of the criticism you’ll find there is simply irrelevant (one day I’m going to do this in detail, if I can be bothered) – these are either not problems for web development or disappear with PHP’s (rollercoaster) learning curve (e.g. the endless functions in global namespace is irrelevant if you’re writing classes). One day I may also do my own take on why PHP sucks (in the sense of Bjarne Stroustrup and: “There are only two kinds of languages: the ones people complain about and the ones nobody uses”) which I’d expect to be a very different story. Today you’ll have to excuse me for focusing only on the good things.
The SPL Extension
Following on from arrays, this is a big reason to use PHP5 if you’re a programmer. It makes a number of classes of web related problem ridiculously easy to solve and gives you syntactic joy should you want it. Some further reading here, here, here, here and here.
PHP 5(.1) XML Support
One for Tim – XML support in PHP 5(.1) is excellent, thanks to libxml2. And it’s not just that the core parser is good, it’s that you’ve got multiple APIs in PHP to it, namely SAX and it’s faster “invert” XML pull reader, SimpleXML and DOM, as well as the supporting cast of XSLT etc. Of course Python, Perl, Ruby etc. also have libxml2 wrappers but this is still a PHP strength, plus libxml2 has become the core of PHP’s XML capabilities, rather than another alternative.
The stuff that says it works… works
Programmers have obsessions with building better cages for themselves, for the sake of a particular paradigm they believe in. Nothing wrong with that but the road to get there is littered with stuff that was broken or partially complete. For the stuff you need for the web, PHP is already there. That’s not to say there isn’t a bunch of unstable code in PHP – just the stuff you’re typically going to use works.
Put another way, PHP has become predictable, at least for me and anyone willing to travel the learning curve. That means if I need to give projects estimates, for example, I know when I can keep to them. I’ve also found it easy to teach to others – takes about a week to get a programmer able to churn out useful PHP if I can work directly with them. I’d rather trade a (slightly more) verbose syntax and delivering on time against 15 minutes syntactic glory, followed by 10 hours bug hunting and another 15 hours workaround. Luckily not everyone thinks this way or progress would have halted.
Unicode and ICU
Having talking about stuff PHP already has, it’s worth being aware that PHP6 is using ICU as part of it’s “core” – see here, here and here. PHP6 already “exists” under CVS and if it’s resticted to the Unicode changes only, it’s less of a step up than PHP5 and may be here sooner rather than later. You might then be able to argue PHP has better Unicode support than Perl and Python.
Stuck in Little boxes
This is bordering on holy war, which I don’t want. Recently language bickering has been back “in”, and if you take it seriously you might believe Perl, Python and PHP are all dead and there is only Ruby. To me that’s simply Ruby (rightly) jostling for a space alongside the others and it’s best put in perspective here.
Being happy in Perl, Python and PHP these days, watching people cry “This rox, everything else sux” borders between amusing and frustrating. Syntax is only one part of the story. Libraries is, to me, the bigger part – what use is great syntax if you can’t do anything with it or need to spend time re-inventing wheels?
For example Perl is like comfortable slippers on the command line – take me away from Getopt::Long and POD::Usage and get screaming. Little things which matter. And have found Perl capable of pushing big blocks of data around a DB at speeds comparable to SQL*Loader.
Meanwhile have used wxPython in earnest once (book due soon!) – none of the other dynamic languages can compare to it – yes there’s wxPerl, wxRuby (**cough** wxHaskell) and even wxPHP but they’re not mature by comparison – check out Activegrid’s IDE download under “/src/python/wxPythonDemos/ide” – these days we can all write our own IDEs… And Python has the dirty secret of excellent Windows support. If I had time to burn I’d love to take this shell namespace extension demo and the Python libssh2 wrapper and do a GMail drive for scp.
The point is people leap on a language then claim, because it does one thing well, it’s the only tool for everything. It’s really a shame time is being spent on wxRuby / wxPerl or otherwise, given that wxPython is already there. It’s far more of a shame that we’ve invented endless template engines for the web – not just the wasted development time – also the wasted user time in identifying and learning to use them (this stuff drives them to Microsoft). And it’s not just the re-invention – it’s that we’re not even really sharing ideas or seeing them spread – Jeff nailed templates a long time ago but today you’ll find Ruby developers discussing the pros and cons of curly brackets vs. attributes for templates.
Specific to PHP, if you accept there are some things it actually does well, in particular the execution model, it then becomes more a question of how to get the best out of all worlds. In my view (based on what I’m happy with) that’s Perl at your backend for sysadmin, moving data around etc., Python helping users publish their MS Office stuff and PHP serving it to the world.
Along those lines I’ve been looking at FUSE and its Python and Perl bindings. Have a very specific interest right now which I guess will tell me how stable this stuff is but, all being well, think there’s potential here for hooking up PHP with Perl and Python, via the filesystem, perhaps in conjunction with libxml2.
In short: PHP’s Schwarz is bigger!
XDR_ADMIN(3N) XDR_ADMIN(3N)
NAME
xdr_getpos, xdr_inline, xdrrec_endofrecord, xdrrec_eof, xdrrec_readbytes,
xdrrec_skiprecord, xdr_setpos - library routines for management of the
XDR stream
DESCRIPTION
XDR library routines allow C programmers to describe arbitrary data
structures in a machine-independent fashion. Protocols such as remote
procedure calls (RPC) use these routines to describe the format of the
data.
These routines deal specifically with the management of the XDR stream.
Routines
The XDR data structure is defined in the RPC/XDR Library Definitions
section of xdr(3N).
#include <rpc/xdr.h>
u_int xdr_getpos(xdrs)
XDR *xdrs;
Invoke the get-position routine associated with the XDR stream,
xdrs. The routine returns an unsigned integer, which indicates
the position of the XDR byte stream.
long * xdr_inline(xdrs, len)
XDR *xdrs;
int len;
Invoke the in-line routine associated with the XDR stream, xdrs.
The routine returns a pointer to a contiguous piece of the
stream's buffer; len is the byte length of the desired buffer.
Note: A pointer is cast to long *.
Warning: xdr_inline() may return NULL if it cannot allocate a
contiguous piece of a buffer. Therefore the behavior may vary
among stream instances; it exists for the sake of efficiency.
bool_t xdrrec_endofrecord(xdrs, sendnow)
XDR *xdrs;
int sendnow;
This routine can be invoked only on streams created by
xdrrec_create() (see xdr_create(3N)). The data in the output
buffer is marked as a completed record, and the output buffer is
optionally written out if sendnow is non-zero. This routine
returns TRUE if it succeeds, FALSE otherwise.
bool_t xdrrec_eof(xdrs)
XDR *xdrs;
int empty;
This routine can be invoked only on streams created by
xdrrec_create() (see xdr_create(3N)). After consuming the rest of
the current record in the stream, this routine returns TRUE if
the stream has no more input, FALSE otherwise.
int xdrrec_readbytes(xdrs, addr, nbytes)
XDR *xdrs;
caddr_t addr;
u_int nbytes;
This routine can be invoked only on streams created by
xdrrec_create() (see xdr_create(3N)). It attempts to read nbytes
bytes from the XDR stream into the buffer pointed to by addr.
On success it returns the number of bytes read. Returns -1 on
failure. A return value of 0 indicates an end of record.
bool_t xdrrec_skiprecord(xdrs)
XDR *xdrs;
This routine can be invoked only on streams created by
xdrrec_create() (see xdr_create(3N)). It tells the XDR
implementation that the rest of the current record in the stream's input
buffer should be discarded. This routine returns TRUE if it
succeeds, FALSE otherwise.
bool_t xdr_setpos(xdrs, pos)
XDR *xdrs;
u_int pos;
Invoke the set position routine associated with the XDR stream
xdrs. The parameter pos is a position value obtained from
xdr_getpos(). This routine returns 1 if the XDR stream could be
repositioned, and 0 otherwise.
Warning: It is difficult to reposition some types of XDR
streams, so this routine may fail with one type of stream and
succeed with another.
SEE ALSO
xdr(3N), xdr_complex(3N), xdr_create(3N), xdr_simple(3N)
20 January 1990 XDR_ADMIN(3N) | http://modman.unixdev.net/?sektion=3&page=xdr_admin&manpath=SunOS-4.1.3 | CC-MAIN-2017-17 | refinedweb | 474 | 63.7 |
Red Hat Bugzilla – Bug 678855
Review Request: python-rpyc - A Transparent, Symmetrical Python Library for Distributed-Computing
Last modified: 2013-10-19 10:42:52 EDT
Spec URL:
SRPM URL:
Description:
RPyC, or Remote Python Call, is a transparent and symmetrical python library
for remote procedure calls, clustering and distributed-computing.
RPyC makes use of object-proxying, a technique that employs python's dynamic
nature, to overcome the physical boundaries between processes and computers,
so that remote objects can be manipulated as if they were local.
This is my first package, and I need a sponsor.
A couple of issues with the spec file:
(1) The version in the changelog should be '3.0.7-1', not 'RPM created'
(2) rpmlint complains about a number of non-executable-script errors; (see 'rpmlint -I non-executable-script' for how to fix that)
Fixed, the corrected package+spec are available under the same links.
APPROVED
Please follow and
import the package. Close this bug as RAWHIDE once it's been successfully
imported and built.
New Package SCM Request
=======================
Package Name: python-rpyc
Short Description: A Transparent, Symmetrical Python Library for Distributed-Computing
Owners: erez
Branches: f15 el6
InitialCC:
We are unable to set fedora-cvs?. Should we ask for fedora-review+ first?
Yep! When approved, the review flag must be set to "+".
What must we do to set the fedora‑cvs flag (we still can't)? Could you please set it for us?
Not sure why you can't set it yourself, I went ahead and set it fedora-cvs? upon request on IRC.
The requested package name does not match the package name in the ticket
summary. Please fix whichever is incorrect and re-raise the fedora-cvs flag.
Git done (by process-git-requests).
python-rpyc-3.0.7-1.el6 has been submitted as an update for Fedora EPEL 6.
python-rpyc-3.0.7-1.fc15 has been submitted as an update for Fedora 15.
python-rpyc-3.0.7-1.el6 has been pushed to the Fedora EPEL 6 testing repository.
python-rpyc-3.0.7-1.fc15 has been pushed to the Fedora 15 stable repository.
python-rpyc-3.0.7-1.el6 has been pushed to the Fedora EPEL 6 stable repository. | https://bugzilla.redhat.com/show_bug.cgi?id=678855 | CC-MAIN-2017-30 | refinedweb | 377 | 56.86 |
The use of JavaScript has exploded over time. Now it is practically unheard of for a website not to utilize JavaScript to some extent. As a web developer who has concentrated on back-end coding in C# and front-end look and feel via HTML and CSS, my skills in JavaScript evolved over time instead of by conscious effort. While this is not uncommon, it can allow for some bad habits to be formed. This set of best practices is my way of taking a step back and addressing JavaScript as a first-class language, with both good parts and bad parts. My concentration will be on just JavaScript, regardless of where it is run. However, you will see references in here to the browser and to Visual Studio. This is simply because that is where I live, not because either are necessary for these best practices to apply. And so, without further ado, let's jump right in and see just how far down this rabbit hole goes.
When testing equality, a lot of languages with syntax similar to JavaScript use the double equals operator (==). However, in JavaScript you should always use triple equals (===). The difference is in how equality is determined. A triple equals operator evaluates the two items based upon their type and value. It makes no interpretations. Let's look at a couple examples:
if(1 === '1') //Returns false
if(1 == '1') //Returns true
if(0 === '') //Returns false
if(0 == '') //Returns true
The first line would equal false because the number one does not equal the character 1. That is what we would expect. The double equals operator, on the other hand, will attempt to coerce the two values to be the same type before comparing equality. This can lead to unexpected results. In the second grouping, the result using the double equals would be true. That probably isn't what we were expecting.
Just to be clear here, the same rule applies to the inequality operator as well. Looking at our above tests, we can see that both types of inequality operators work the same way as their counterparts:
if(1 !== '1') //Returns true
if(1 != '1') //Returns false
if(0 !== '') //Returns true
if(0 != '') //Returns false
The bottom line here is that we should always use the triple equals operator (or !== for not equal) rather than the double equals. The results are far more expected and predictable. The only exception would be once you are positive you understand what is happening and you absolutely need the coercion before comparison.
Most developers won't intentionally fail to put semicolons in the proper places. However, you need to be aware that the browser will usually put in semicolons where it thinks they are necessary. This can enable a bad habit that will come back to bite you down the road. In some instances, the compiler might assume that a semicolon is not needed, which will introduce tricky, hard-to-find bugs into your code. Avoid this by always adding the proper semicolons. A good tool to help you check your JavaScript for forgotten semicolons is JSLint.
As I mentioned above, JSLint is a great tool for helping you identify common problems in your JavaScript code. You can paste your code into the website listed above, you can use a different site like JavaScriptLint.com, or you can use one of the many downloadable JSLint tools. For instance, Visual Studio has an add-in for JSLint that will allow you to check for errors at compile-time (or manually).
Whatever you choose to do, the key point here is to run a tool like JSLint on your code. It will pick up bad code that is being masked by the browser. This will make your code cleaner and it will help to prevent those pesky bugs from showing up in production.
When you first start using JavaScript, the temptation is to just declare everything and use it as needed. This places all of your functions and variables into the global scope. The problem with this, besides it being sloppy, is that it makes your code extremely vulnerable to being affected by other code. For instance, consider the following example:
var cost = 5;
//...time goes by...
console.log(cost);
Imagine your surprise when the console logs "expensive" instead of 5. When you trace it down, you might find that a different piece of JavaScript somewhere else used a variable called cost to store text about cost for a different section of your application.
The solution to this is namespacing. To create a namespace, you simply declare a variable and then attach the properties and methods you want to it. The above code would be improved to look like this:
var MyNamespace = {};
MyNamespace.cost = 5;
//...time goes by...
console.log(MyNamespace.cost);
The resulting value would be 5, as expected. Now you only have one variable directly attached to the global context. The only way you should have a problem with naming conflicts now is if another application uses the same namespace as you. This problem will be much easier to diagnose, since none of your code will work (all of your methods and properties will be wiped out).
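One common refinement of this pattern, sketched here with illustrative names (MyApp, costWithTax, and taxRate are not from the article), is to build the namespace inside an immediately-invoked function so that helper data stays private:

```javascript
// Build the namespace inside an immediately-invoked function
// so internal helpers never touch the global scope.
var MyApp = (function () {
    var taxRate = 0.08; // private: invisible outside this function

    return {
        cost: 5,
        costWithTax: function () {
            // Round to two decimals to sidestep floating-point drift.
            return Math.round(this.cost * (1 + taxRate) * 100) / 100;
        }
    };
})();

console.log(MyApp.costWithTax()); // 5.4
```

Only MyApp reaches the global scope; taxRate cannot be read or clobbered by other scripts.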
The Eval function allows us to pass a string to the JavaScript compiler and have it execute as JavaScript. In simple terms, anything you pass in at runtime gets executed as if it were added at design time. Here is an example of what that might look like:
eval("alert('Hi');");
This would pop up an alert box with the message "Hi" in it. The text inside the eval could have been passed in by the user or it could have been pulled from a database or other location.
There are a couple reasons why the eval function should be avoided. First, it is significantly slower than design time code. Second, it is a security risk. When code is acquired and executed at runtime, it opens a potential threat vector for malicious programmers to exploit. Bottom line here is that this function should be avoided at all costs.
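Most legitimate uses of eval boil down to two needs the language already covers safely. This sketch (the settings object and key names are illustrative) shows bracket notation for computed property access and JSON.parse for turning a JSON string into data:

```javascript
var settings = { theme: "dark", fontSize: 14 };

// Instead of eval("settings." + key), use bracket notation:
var key = "theme";          // could come from user input
var value = settings[key];  // no code execution involved

// Instead of eval-ing a JSON string, use the built-in parser:
var parsed = JSON.parse('{"cost": 5}');

console.log(value, parsed.cost); // dark 5
```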
When is 0.1 + 0.2 not equal to 0.3? When you do the calculation in JavaScript. The actual value of 0.1 + 0.2 comes out to be something like 0.30000000000000004. The reason for this (nope, not a bug) is that JavaScript uses binary floating-point numbers. To get around this issue, you can multiply your numbers to remove the decimal portion. For instance, if you were to be adding up the cost of two items, you could multiply each price by 100 and then divide the sum by 100. Here is an example:
var hamburger = 8.20;
var fries = 2.10;
var total = hamburger + fries;
console.log(total); //Outputs 10.299999999999999
hamburger = hamburger * 100;
fries = fries * 100;
total = hamburger + fries;
total = total / 100;
console.log(total); //Outputs 10.3
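The scaling trick can be wrapped in a small helper; this is a sketch with an illustrative name, where the Math.round calls guard against the multiplication itself producing values like 819.9999999999999:

```javascript
// Hypothetical helper: add two prices in whole cents, then scale back.
function addPrices(a, b) {
    return (Math.round(a * 100) + Math.round(b * 100)) / 100;
}

console.log(addPrices(8.20, 2.10)); // 10.3
console.log(addPrices(0.1, 0.2));   // 0.3
```

This works for currency because two decimal places are assumed; for other precisions the scale factor would change.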
Most developers that write software in other C-family programming languages use the Allman style of formatting for code blocks. This places the opening curly brace on its own line. This pattern would look like this in JavaScript:
if(myState === 'testing')
{
console.log('You are in testing');
}
else
{
console.log('You are in production');
}
This will work most of the time. However, JavaScript is designed in such a way that following the K&R style of formatting for blocks is a better idea. This format starts the opening curly brace on the same line as the preceding line of code. It looks like this:
if(myState === 'testing') {
console.log('You are in testing');
} else {
console.log('You are in production');
}
While this may only seem like a stylistic difference, there can be times when there is an impact on your code if you use the Allman style. Earlier we talked about the browser inserting semicolons where it felt they were needed. One fairly serious issue with this is on return statements. Let's look at an example:
return
{
age: 15,
name: 'Jon'
}
You would assume that the object would be returned but instead the return value will be undefined. The reason for this is because the browser has inserted a semicolon after the word return, assuming that one is missing. While return is probably the most common place where you will experience this issue, it isn't the only place. Browsers will add semi-colons after a number of keywords, including var and throw.
It is because of this type of issue that it is considered best practice to always use the K&R style for blocks to ensure that your code always works as expected.
There are a number of shortcuts and one-liners that can be used in lieu of their explicit counterparts. In most cases, these shortcuts actually encourage errors in the future. For instance, this is acceptable notation:
if (i > 3)
doSomething();
The problem with this is what could happen in the future. Say, for instance, a programmer were told to reset the value of i once the doSomething() function was executed. The programmer might modify the above code like so:
if (i > 3)
doSomething();
i = 0;
In this instance, i will be reset to zero even if the if statement evaluates to false. The problem might not be apparent at first and this issue doesn't really jump off the page when you are reading over the code in a code review.
Instead of using the shortcut, take the time necessary to turn this into the full notation. Doing so will protect you in the future. The final notation would look like this:
if (i > 3) {
doSomething();
}
Now when anyone goes in to add additional logic, it becomes readily apparent where to put the code and what will happen when you do.
Most languages that conform to the C-family style will not put an item into memory until the program execution hits the line where the item is initialized.
JavaScript is not like most other languages. It utilizes function-level scoping of variables and functions. When a variable is declared, the declaration statement gets hoisted to the top of the function. The same is true for functions. For example, this is permissible (if horrible) format:
function simpleExample(){
i = 7;
console.log(i);
var i;
}
What happens behind the scenes is that the var i; line declaration gets hoisted to the top of the simpleExample function. To make matters more complicated, not only the declaration of a variable gets hoisted but the entire function declaration gets hoisted. Let's look at an example to make this clearer:
function complexExample() {
i = 7;
console.log(i); //The message says 7
console.log(testOne()); //This gives a type error saying testOne is not a function
console.log(testTwo()); //The message says "Hi from test two"
var testOne = function(){ return 'Hi from test one'; }
function testTwo(){ return 'Hi from test two'; }
var i = 2;
}
Let's rewrite this function the way JavaScript sees it once it has hoisted the variable declarations and functions:
function complexExample() {
var testOne;
function testTwo(){ return 'Hi from test two'; }
var i;
i = 7;
console.log(i); //The message says 7
console.log(testOne()); //This gives a type error saying testOne is not a function
console.log(testTwo()); //The message says "Hi from test two"
testOne = function(){ return 'Hi from test one'; }
i = 2;
}
See the difference? The function testOne didn't get hoisted because it was a variable declaration (the variable is named testOne and the declaration is the anonymous function). The variable i gets its declaration hoisted and the initialization actually becomes an assignment down below.
In order to minimize mistakes and reduce the chances of introducing hard to find bugs in your code, always declare your variables at the top of your function and declare your functions next, before you need to use them. This reduces the chances of a misunderstanding about what is going on in your code.
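Putting that advice into practice, a function laid out in the recommended order might look like this sketch (the names are illustrative):

```javascript
// Declarations first, in the order JavaScript will hoist them anyway,
// so the code reads the same way the engine sees it.
function tidyExample() {
    var i = 7;
    var greet = function () { return "Hi from greet"; };

    return greet() + " with i = " + i;
}

console.log(tidyExample()); // Hi from greet with i = 7
```

Because every declaration precedes its first use, hoisting can no longer reorder anything behind your back.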
It is possible to shorten a long namespace using the with statement. For instance, this is technically correct syntax:
with (myNamespace.parent.child.person) {
firstName = 'Jon';
lastName = 'Smyth';
}
That is equivalent of typing the following:
myNamespace.parent.child.person.firstName = 'Jon';
myNamespace.parent.child.person.lastName = 'Smyth';
The problem is that there are times when this goes badly wrong. Like many of the other common pitfalls of JavaScript, this will work fine in most circumstances. The better method of handling this issue is to assign the object to a variable and then reference the variable like so:
var p = myNamespace.parent.child.person;
p.firstName = 'Jon';
p.lastName = 'Smyth';
This method works every time, which is what we want out of a coding practice.
Again, the edge cases here will bite you if you aren't careful. Normally, typeof returns the string representation of the value type ('number', 'string', etc.). The problem comes in when evaluating NaN ('number'), null ('object'), and other odd cases. For example, here are a couple of comparisons that might be unexpected:
var i = 10;
i = i - 'taxi'; //Here i becomes NaN
if (typeof(i) === 'number') {
console.log('i is a number');
} else {
console.log('You subtracted a bad value from i');
}
The resulting message would be "i is a number", even though clearly it is NaN (or "Not a Number"). If you were attempting to ensure the passed in value (here it is represented by 'taxi') subtracted from i was a valid number, you would get unexpected results.
While there are times when it is necessary to try to determine the type of a particular value, be sure to understand these (and other) peculiarities about typeof that could lead to undesirable results.
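One way to harden the earlier check, sketched here with an illustrative helper name, is to test for NaN explicitly alongside typeof:

```javascript
// typeof alone accepts NaN, so test for it separately; once we know the
// value is a number, isNaN tells us whether it is the NaN value.
function isUsableNumber(x) {
    return typeof x === "number" && !isNaN(x);
}

console.log(isUsableNumber(10));          // true
console.log(isUsableNumber(10 - "taxi")); // false (NaN)
console.log(isUsableNumber("10"));        // false (a string)
```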
Just like typeof, the parseInt function has quirks that need to be understood before it is used. There are two major areas that lead to unexpected results. First, if the first character is a number, parseInt will return all of the number characters it finds until it hits a non-numeric character. Here is an example:
parseInt("56"); //Returns 56
parseInt("Joe"); //Returns NaN
parseInt("Joe56"); //Returns NaN
parseInt("56Joe"); //Returns 56
parseInt("21.95"); //Returns 21
Note that last example I threw in there to trip you up. The decimal point is not a valid character in an integer, so just like any other character, parseInt stops evaluating on it. Thus, we get 21 when evaluating 21.95 and no rounding is attempted.
The second pitfall is in the interpretation of the number. It used to be that a string with a leading zero was determined to be a number in octal format. ECMAScript 5 (JavaScript is an implementation of ECMAScript) removed this functionality. Now most numbers will default to base 10 (the most common numbering format). The one exception is a string that starts with "0x". This type of string will be assumed to be a hexadecimal number (base 16) and it will be converted to a base 10 number on output. To specify a number's format, thus ensuring it is properly evaluated, you can include the optional parameter called a radix. Here are some more examples to illustrate these possibilities:
parseInt("08"); //Returns 8 - used to return 0 (base 8)
parseInt("0x12"); //Returns 18 - assumes hexadecimal
parseInt("12", 16); //Returns 18, since base 16 is specified
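In practice that means always passing the radix explicitly, and reaching for parseFloat when the fractional part matters. A few illustrative calls:

```javascript
// Always state the radix; use parseFloat when decimals matter.
console.log(parseInt("08", 10));   // 8
console.log(parseInt("12", 16));   // 18
console.log(parseFloat("21.95"));  // 21.95
console.log(parseInt("0x12", 10)); // 0 -- base 10 parsing stops at the "x"
```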
When you execute a switch statement, each case statement should be concluded by a break statement like so:
switch(i) {
case 1:
console.log('One');
break;
case 2:
console.log('Two');
break;
case 3:
console.log('Three');
break;
default:
console.log('Unknown');
break;
}
If you were to assign the value of 2 to the variable i, this switch statement would fire an alert that says "Two". The language does permit you to allow fall through by omitting the break statement(s) like so:
switch(i) {
case 1:
console.log('One');
break;
case 2:
console.log('Two');
case 3:
console.log('Three');
break;
default:
console.log('Unknown');
break;
}
Now if you passed in a value of 2, you would get two alerts, the first one saying "Two" and the second one saying "Three". This can seem to be a desirable solution in certain circumstances. The problem is that this can create false expectations. If you do not see that a break statement is missing, you may add logic that gets fired accidentally. Conversely, you may notice later that a break statement is missing and you might assume this is a bug. The bottom line is that fall through should not be used intentionally in order to keep your logic clean and clear.
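The one pattern generally considered acceptable is stacking empty case labels so several values share a single body; the intent is explicit and nothing executes twice by accident. A sketch (the function name is illustrative):

```javascript
// Stacked case labels: 2 and 3 share one body, and every body
// still ends in its own return/break.
function describe(i) {
    switch (i) {
        case 2:
        case 3:
            return "Two or three";
        case 1:
            return "One";
        default:
            return "Unknown";
    }
}

console.log(describe(2)); // Two or three
```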
The For...In loop works as it is intended to work, but how it works surprises people. The basic overview is that it loops through the attached, enumeration-visible members on an object. It does not simply walk down the index list like a basic for loop does. The following two examples are NOT equivalent:
// The standard for loop
for(var i = 0; i < arr.length; i++) {}
// The for...in loop
for(var i in arr) {}
In some cases, the output will act the same in the above two cases. That does not mean they work the same way. There are three major ways that for...in is different than a standard for loop. These are:
1. It loops over every enumerable property attached to the object, including members inherited through the prototype chain, not just the numeric indexes.
2. The order in which the properties are visited is not guaranteed.
3. It is noticeably slower than a standard for loop.
If you fully understand for...in and know that it is the right choice for your specific situation, it can be a good solution. However, for the other 99% of situations, you should use a standard for loop instead. It will be quicker, easier to understand, and less likely to cause weird bugs that are hard to diagnose.
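For the rare case where for...in really is the right tool, such as walking the keys of a plain object, a hasOwnProperty guard keeps inherited members out of the loop. A minimal sketch (the variable names are illustrative):

```javascript
// Guard each key so properties inherited through the prototype
// chain are skipped.
var scores = { alice: 3, bob: 5 };
var total = 0;

for (var name in scores) {
    if (scores.hasOwnProperty(name)) {
        total += scores[name];
    }
}

console.log(total); // 8
```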
When declaring a variable, always use the var keyword unless you are specifically attaching the variable to an object. Failure to do so attaches your new variable to the global scope (window if you are in a browser). Here is an example to illustrate how this works:
function carDemo() {
var carMake = 'Dodge';
carModel = 'Charger';
}
carDemo();
console.log(carModel); //Charger, since this variable has been implicitly attached to window
console.log(carMake); //Throws a ReferenceError, since carMake only exists inside the carDemo function scope
The declaration of the carModel variable is the equivalent of saying window.carModel = 'Charger';. This clogs up the global scope and endangers your other JavaScript code blocks, since you might inadvertently change the value of a variable somewhere else.
JavaScript is rather flexible with what it allows you to do. This isn't always a good thing. For instance, when you create a function, you can specify that one of the parameters be named arguments. This will overwrite the arguments object that every function is given by inheritance. This is an example of a special word that isn't truly reserved. Here is an example of how it would work:
// This function correctly accesses the inherited
// arguments parameter
function CorrectWay() {
for(var i = 0; i < arguments.length; i++) {
console.log(arguments[i]);
}
}
// You should never name a parameter after
// a reserved or special word like "arguments"
function WrongWay(arguments) {
for(var i = 0; i < arguments.length; i++) {
console.log(arguments[i]);
}
}
// Outputs 'hello' and 'hi'
CorrectWay('hello', 'hi');
// Outputs 'h', 'e', 'l', 'l', and 'o'
WrongWay('hello', 'hi');
There are also reserved words that will cause you issues when you attempt to run your application. A complete listing of these words can be found at the Mozilla Developer Network. While there are work-arounds to use some of these words, avoid doing so if at all possible. Instead, use key words that won't conflict with current or potential future reserved or special words.
When I originally developed my list of best practices, this one was so obvious I overlooked it. Fortunately Daniele Rota Nodari pointed it out to me. Keeping a consistent standard is important to writing easily understandable code. Matching the coding style of the application you are working in should become second nature, even if that means changing your personal style for the duration of the project. When you get the opportunity to start a project, make sure that you have already established a personal coding style that you can apply in a consistent manner.
While being inconsistent with how you write your code won’t necessarily add bugs into your application, it does make your code harder to read and understand. The harder code is to read and understand, the more likely it is that someone will make a mistake. A good post on JavaScript coding styles and consistency in applying them can be found here. The bottom line here is that you need to write consistent code. If you bring snippets into your application, format them to match your existing style. If you are working in someone else’s application, match your code to the existing style.
As with any software development language, reading the code of other developers will help you improve your own skills. Find a popular open source library and peruse the code. Figure out what they are doing and then identify why they chose to do things that way. If you can't figure it out, ask someone. Push yourself to learn new ways to attack common problems.
JavaScript is not C#. It is not Java or Java-lite. It is its own language. If you treat it as such, you will have a much easier time navigating its particular peculiarities. As you may have noticed, the common theme throughout many of these best practices is that there are hidden pitfalls that can be avoided by simply modifying how you approach certain problems. Little formatting and layout techniques can make a big difference in the success of your project.
Before I finish, I wanted to point out that there are a number of best practices related to JavaScript in HTML that I have not mentioned. For instance, a simple one is to include your JavaScript files at the bottom of your body tag. This omission is intentional. While I've made passing references to both the browser and Visual Studio, the above tips are purely about JavaScript. I will be writing a separate article that covers JavaScript in the browser.
Thus concludes my attempt at compiling a list of JavaScript best practices. Please be sure to let me know if you have others that you think should be added to the list.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Template talk:Infobox Play
From Wikipedia, the free encyclopedia
[edit] Start
This info box for plays was modelled after the Film Infobox. After working on the Category:Ancient Greek plays, I decided an info box would be best for things like the list of characters. I'm open to any suggestions for additions or changes. We could add things like the date written, the list of full text links, etc. I'm only going to add this box to the ancient Greek plays for now. However, anyone would like to use this for other plays, or make this into a larger project, feel free. -Ravenous 05:14, 13 January 2006 (UTC)
[edit] Additions
I think there should be a section for date of premiere, country or origin, language of origin, series, subject and genre. What do others think? Remember 20:46, 1 December 2006 (UTC)
- Okay I've added it. Feel free to revise it. Remember 20:24, 8 December 2006 (UTC)
Why are there two settings? and what's mute? just wondering --Goodface87 17:08, 13 December 2006 (UTC)
- There isn't two subjects (i removed the extra one in the list above). Also, mute is for non speaking characters. That's usually what they were listed under in the books of plays I've got. - Ravenous 20:39, 14 December 2006 (UTC)
Can there be a "basis" field for plays that are based on source material?--Cassmus 09:06, 2 April 2007 (UTC) I am currently testing this Infobox for a prototype theatre stub on a musical, and I ran into scale problems, viz. insufficient room to properly spell out the Author (theatre), Soundtrack Composer, and Scorer under "Written By." Recommend adjusting the fields for widths similar to those in Template:Infobox musical artist and similar music-related Infoboxes. - B.C.Schmerker 03:45, 1 May 2007 (UTC)
Can we also add a production section like in the musical box for new productions on Broadway of new plays along with an awards section? Thanks, guys. LiamFitzpatrick (talk) 07:33, 8 May 2008 (UTC)
[edit] Problems
- The functions of "caption", "series" and "subject" seem not to be working for the new article I'm working on - A Very Merry Unauthorized Children's Scientology Pageant ... Smee 06:10, 23 February 2007 (UTC).
[edit] Image size
I was having problems with an image that was less than 200px wide and was stretching to fit, resulting in a blurred image. I went back to the Infobox film template and lemmed that solution. Note that the parameter name is "image_size", the underscore must be used between the two words.—Chidom talk 09:30, 26 March 2007 (UTC)
[edit] Are Info Boxes mandatory?
(this question is being brought over from the Village Pump - for that discussion see [1])—Preceding unsigned comment added by Smatprt (talk • contribs)
See also discussions at "Hamlet" here. AndyJones 18:11, 27 April 2007 (UTC)
I see that info boxes can be helpful in many situations. However, in some cases, I find them redundant and too much like "lists". I am concerned that we are turning some articles from important encyclopedia entries into USA TODAY stories with these little boxes that make it easier to avoid actually learning about the subject, as opposed to simply getting a few quick facts. In the case of Shakespeare's plays, for example, I think they can create more problems than they solve. As to the regular information fields - we don't know when Shakespeare's plays were written, where they were first performed, what exactly the sources were, who was in them or what the original critical response was. We even argue now over what was a comedy and what was a tragedy. Also, listing all or many of the characters opens further debates, and attempting to list every setting (which can be quite a few and many are not clear) is impossible. I also feel that the overuse of these boxes gives a feeling of dumbing down of an article - sort of like using cliff notes to write a report instead of reading the whole article. In fact, for the most part, all the information in these boxes is typically found in the first paragraph or two of the article itself. Isn't this redundant? Are not these just more lists that duplicate the information in the articles? Are these boxes mandatory for all plays?
For example - For Hamlet - all we really can say would be: Hamlet, written in England...in English. Set in Denmark (all of which is in the first paragraph of the article). Then we can have a list of characters that already has its own section very close by (and listed in the table of contents for an easy jump right to the full cast). Isn't this redundancy at its worst? Smatprt 03:42, 27 April 2007 (UTC)
- I also think we could add when the play was written or when it premiered and where, when it stopped playing (if it did), and who originally directed or produced it (for later plays) and perhaps a link to the actual text of the play. While this information would be available to people elsewhere in the article, I think the infobox provides a nice format for the primary bits of information. The infobox can also be used to incorporate metadata easily. Remember 19:54, 27 April 2007 (UTC)
Your questions illustrate my point nicely - regarding Hamlet - we don't know when the play was written; we don't know when it premiered; we don't know when it stopped playing, and we don't know who originally directed or produced it. The same applies to each and every play by Shakespeare. Do you see the problem now? Smatprt 05:06, 28 April 2007 (UTC)
- I don't know if the box really needs to be mandatory for every play, but it's been useful for some. I created it for the ancient Greek plays, and it seems to be a good fit with those. Also, the parameters for it are merely options. Like the box itself, the parameters are there if you want them, if not - you don't need to include all of them. I believe I only used a few, as like you said, a lot are unknown for these old plays.
- Whether information is displayed like "usa today" or an "important encyclopedia" is really just a value judgement on what - class? aesthetics? From the articles I've worked on, often people were putting lists of characters within the text itself. From a design perspective, I believe a simple list is better served in something like an infobox than interrupting the flow of text within the article. If it's a matter of class, I think usability trumps class when it comes to relaying information. And I agree with Remember that infoboxes make the article more usable by putting the metadata in an easy to find location. Often, users are only there for that particular info and would prefer not to spend time searching it out within the text.
- I disagree with the premise that boxes "make it easier to avoid actually learning about the subject". One could use a similar argument for an encyclopedia article itself. Such as, why read the play when you could just read the summary on wikipedia? Everyone has their own limits of how far they are going to delve into a topic, it all depends on their time and inclination. If all they want to know is the most basic facts about something, why not make it a little easier for them? Does it really get in the way of the users who are there to read the whole article? - Ravenous 06:34, 28 April 2007 (UTC)
- I think that the inclusion of an infobox should be up to the author of the article - I've written many many articles (not plays, but in general) which simply don't bear the kinds of information which are necessary to fill out an infobox. Some of the plays I may be writing about in the near future are ones for which no script remains, and some of which were even shut down by gov't censorship before ever being performed. Point is, sometimes it simply doesn't serve to try to fill out an infobox, if the questions the box is asking aren't relevant to what you have available, or to the main thrust of the importance/interest of the play.
- On the other hand, to respond to the comment that infoboxes duplicate information in the text. Yes, they do. But more often than not, I find that my prose sounds a bit strange, perhaps a bit choppy and forced as I try to incorporate all that information into the introduction. Laying it out in the box can be quite helpful, and allows you to get all that basic data out of the way, so the main text can focus on the meat of the issue. (Plot summaries, performance history, whatever). LordAmeth 09:19, 28 April 2007 (UTC)
[edit] Voting section
This section is not meant to be binding but instead is designed to better gauge people's opinions on the template.
[edit] Against infobox plays
I am against all infoboxes for play articles.
[edit] For infoboxes on all play articles
I am for infoboxes on each play article.
[edit] Allow each article to adopt a consensus on its own talk page
I am for each article making a decision about whether to add an infobox or not.
[edit] Keep open and free editing
Sorry to create a new section, but my opinion really doesn't fall into any of your three categories there. I do not support the idea that consensus should be created among a certain cabal of editors on each separate play article page, as (a) this goes against the open, free nature of the Wiki, where editors can come and go and make changes as they wish, and (b) it encourages a lack of standards and consistency. I think that every major play (Shakespeare, Broadway classics, Gilbert & Sullivan, Chikamatsu, the Greek classics, etc.), those which are most well-known, most influential, most expansive in their coverage, should have infoboxes, as it completes out the article, and supplies to-the-point information which may be buried in larger, longer articles. On the other hand, I am not voting "for infoboxes on all play articles" as this ignores the situations which can come up - each individual editor (not by cabal consensus, but individually) as they work on an article, or more especially when they create a new article, should be allowed to decide for themselves if an infobox is a good idea. I don't believe that this should be left up to the personal whims of the editors - some commented above that they just don't like the way it looks or whatever - but if an infobox would be genuinely inappropriate, irrelevant in a given situation, editors should feel free to leave it out.
Outside of play articles, there have been countless times that I've found that for whatever subject I'm working on, the associated infobox just doesn't apply to what I'm doing, and in those cases, omission of an infobox is more than justified. Sometimes, there simply isn't enough information in a given editor's sources (or known to scholarship in general) to satisfactorily fill out an infobox. In these cases, and more or less only in these situations, do I believe that omission of an infobox is called for. Sorry for the long message. Thanks for reading. LordAmeth 15:01, 2 May 2007 (UTC)
- Support. After reading LordAmeth's comments above, I agree with his general comments (except the Shakespeare reference where, I believe scholars don't know enough factual details to make the box useful). In general, there are far too many attempts to control the articles on Wikipedia. Cabal consensus is a huge problem on the William Shakespeare page (and related pages) in particular. Check out the talk sections there for what may be a perfect example about what LordAmeth describes. Smatprt 19:42, 2 May 2007 (UTC)
- Support. I support letting the authors of the article decide, however I do think the infobox should be useful on plenty of play articles. Even if it only includes something as simple as a picture, writer, date (estimated if need be), and some principle characters. For that reason, I wouldn't be against using them on Shakespeare plays if I were editing those. There is even less known about the Ancient Greek plays than those of Shakespeare, and I believe I put the box on most all of them. - Ravenous 20:47, 2 May 2007 (UTC)
This is all well and good, but it doesn't solve anything. We're right back to where we were before the whole debate started. Free editing is great, but sometimes, people just don't agree. Sometimes a consensus can't be reached. I don't particularly know how to solve this at all, but, right now, I feel like we're just running around in circles. I agree with the free editing thing, but, again, it is kind of redundant and solves nothing. Basically, some people want an infobox on shakespeare pages, and others don't. Both sides have strong feelings, and there is no consensus. There is no way to verify the statements made by either side, and thus both sides are based mostly on opinion. There isn't a "right" answer. Both sides just need to settle and give a little, recognizing that they are based on their own feelings. (In this little blurb here, I'm referring mostly to the discussions on the Hamlet page.) The answer, I guess, then, is free editing, as well as compromise between editors. Wrad 03:10, 9 May 2007 (UTC)
[edit] Fixed errors
There were a number of errors in this template which I think I have corrected, including:
- Excess lines above the template (not quite sure HOW I fixed that one, might be to do with either of the following)
- "Series" and "Subject" should now work properly - I changed something in "Caption" as well (removing a dash and correcting a | usage), although I don't know what the actual effect of that change was.
- Line break should not appear if there is no web page, playbill, etc
I'm mentioning this here in case anyone was previously put off from using the template because of these errors. GDallimore (Talk) 11:58, 23 May 2007 (UTC)
- PS I came across this template when creating Four Nights in Knaresborough which could do with a second person going over it. Thanks! GDallimore (Talk) 12:05, 23 May 2007 (UTC)
[edit] some sections still don't appear
The list of characters doesn't show up in the finished infobox. Help? Fixed it -- someone had altered the code in the template.... Aristophanes68 (talk) 02:00, 31 March 2008 (UTC)
[edit] yikes
Aside from whether info boxes should be used, why have "country of Origin" when place of (posthumous, in the case of Woyzeck) premiere is meant? Date and place of writing are missing, as is duration, a consideration when looking to fill double bills... Sparafucil (talk) 23:04, 31 March 2008 (UTC)
- Agree: especially with the need to distinguish date of writing from date of premiere, at least when the two are substantially different.... Aristophanes68 (talk) 14:25, 2 April 2008 (UTC)
[edit] Error
Please see the page for Alibi (play). The words "insertformulahere" are appearing in the info box between "written" and "by". Any ideas why?--Jtomlin1uk (talk) 14:18, 28 April 2008 (UTC)
- In addition the words"Headline text" are appearing as a section heading, even when there is no section heading!--Jtomlin1uk (talk) 14:53, 28 April 2008 (UTC)
[edit] iobdb - lortel Internet off-broadway database
{{editprotected}}
requesting inclusion of a direct link in infobox to the lortel archives for off-broadway shows using the below coding:
|data15 = {{#if:{{{iobdb_id<includeonly>|</includeonly>}}} | [{{{iobdb_id}}} IOBDB profile] }}
--emerson7 16:01, 21 August 2008 (UTC)
[edit] Add hCalendar microformat
{{editprotected}} Please also add the hCalendar microformat, as on {{Infobox Film}}. This will involve adding class="vevent" to the whole template, (I can't see where as the two templates are not the same); plus class="summary" around the name; and class="description" around the "Written by" table-row. See the edits to the film template for details. Andy Mabbett | Talk to Andy Mabbett 21:41, 22 August 2008 (UTC)
Not done: please be more specific about what needs to be changed. Happy‑melon 14:00, 23 August 2008 (UTC)
- The exact same changes as in the film template edit (nbsp notwithstanding). Andy Mabbett | Talk to Andy Mabbett 14:05, 23 August 2008 (UTC)
- Except that this template uses a meta-template, there is no parameter called |director= and the |name= field is formatted, which IIRC you've said somewhere else is a problem. Please have a play around in a sandbox and read the documentation for {{Infobox}}, work out what code will achieve the effect you want (given that I have no idea how this microcard system works) and come back here with the exact change you want me or someone else to make. We all have our areas of expertise: while template code is one of mine, microcards are not, and it appears to be one of yours :D. On that note, many thanks for the helpful requests you've made all over the template namespace today; it looks like you've improved the accessibility of thousands of articles by applying your knowledge where it counts. Happy‑melon 14:14, 23 August 2008 (UTC)
[outdent]
OK, I see what you mean, sorry.
Firstly,
class="vevent" needs to apply to the whole table. I don't know how that would be achieved.
Then, if that has been done, change:
|above = ''{{{name|{{PAGENAME}}}}}''
to:
|above = ''<span class="summary">{{{name|{{PAGENAME}}}}}</span>''
lastly, the table row:
|label1 = Written by |data1 = {{{writer<includeonly>|</includeonly>}}}
Can have
class="description"; again, I don't know how to achieve that in this type of template, but it's only an optional property.
Thank you for your kind words; you can find out more about microformats (not "microcards" - though that's a neat portmanteau!), including hCard (an HTML representation of vCard), on those pages; and at the microformats project page. Andy Mabbett | Talk to Andy Mabbett 14:36, 23 August 2008 (UTC)
- Nearly there, but
class="description"seems to be wrapped around the right-hand TD, not the TR. Andy Mabbett | Talk to Andy Mabbett 16:31, 23 August 2008 (UTC)
- No; you can't run a span across multiple table cells in valid HTML. I've already raised the issue on the relevant talk page. Thanks, anyway. Andy Mabbett | Talk to Andy Mabbett 14:44, 24 August 2008 (UTC)
[edit] IBDB
{{editprotected}}
The URL listed for linking to the IBDB (The Internet Broadway Database) for the Arsenic and Old Lace (Play) article is incorrect and takes the reader to an incorrect location. To correct this link, I request that the string "show" in the link be replaced by the string "production"
Alternatively, the URL may be changed to "1692" as the last four digits to link to the IBDB entry for both the original production and the 1986 revival.
Thank you.
- Wikipedia User jfduncan John F. Duncan, theatre director and educator
Not done:I have fixed the link at Arsenic and Old Lace (play), as changing the template would break other instances of the template. Regards,--Aervanath talks like a mover, but not a shaker 18:52, 11 January 2009 (UTC)
[edit] Adding a 'movement' section
Hello, I would like to request that a section be added for 'Movement', underneath 'Genre'. This is because many plays cannot easily be classified as specific genres, but are allied with particular movements in theatre. Examples are Waiting for Godot, which is unrelated to conventional genres but is part of the Theatre of the Absurd movement. If someone could make this change it would enhance the infobox's value. Downstage right (talk) 15:20, 1 January 2009 (UTC)
- <Rattles bars> Downstage right (talk) 23:01, 5 January 2009 (UTC)
[edit] setting
the play takes place in the 1990s, not 1969. —Preceding unsigned comment added by 72.225.144.11 (talk) 20:43, 4 January 2009 (UTC)
- What are you talking about? Downstage right (talk) 01:52, 5 January 2009 (UTC)
[edit] Sidebar Setting Error: 1997, not 1969
The setting is incorrectly listed as 1969. It is actually set in 1997 according to the published play text, written by August Wilson. —Preceding unsigned comment added by 72.22.28.4 (talk) 02:03, 9 January 2009 (UTC)
- This is the talk page for all infoboxes on all plays, not just the one you're bothered about. Nobody knows what play you're referring to. Go to the article you're worried about, click the box marked 'edit this page' at the top, and make the change. Downstage right (talk) 23:01, 9 January 2009 (UTC)
[edit] Proposed removal of "view • talk • edit" links
{{editprotected}}
I propose that we remove the "view", "talk", and "edit" links from the bottom of this template. I've never seen this on an infobox before, and it seems rather silly to include them, as this template doesn't directly relate to the articles that it's used in. (You can see this problem in the two sections above this, in which editors mistakenly thought this template applied only to the specific article it appeared in). Mr. Absurd (talk) 23:53, 17 January 2009 (UTC)
- As you can see from the further mistakes below, this really needs to be done. Downstage right (talk) 12:03, 5 March 2009 (UTC)
- I just added an {{editprotected}}, as my earlier comments seem to not have been noticed by any admins. Hopefully this issue can be resolved soon. Mr. Absurd (talk) 02:35, 6 March 2009 (UTC)
- I was trying to make the edit but for the life of me I can't figure out what in the template is causing the display. Maybe it's the "body class"? Well anyway, You'll have to wait for someone better at template coding than I unless you can spoonfeed me what to change.--Fuhghettaboutit (talk) 05:14, 6 March 2009 (UTC)
- It comes from Template:Infobox, which this uses. That template adds the view/talk/edit links if you pass a "name". I did add a "noedit" flag so that the edit link is gone but the talk link remains. If this is an issue, maybe take it up at Template talk:Infobox? Oren0 (talk) 07:45, 6 March 2009 (UTC)
- Weird. When I went to Category:Infobox templates and clicked on a bunch at random, not one had the links so I discounted that. Thanks for the information.--Fuhghettaboutit (talk) 13:10, 6 March 2009 (UTC)
[edit] Misspelling
{{editprotected}} Could someone please change the spelling of 'Seppember' to 'September' as it's irritating and makes the page seem less reliable. Sissy Marshmallow (talk) 13:19, 7 February 2009 (UTC)
Not done Is it possible you stumbled upon an article in which when someone used this template they filled in a parameter with the misspelling? There are no months in this template; the misspelling is not here. When users use the template they have to fill in the information parameters manually to populate the template and can misspell words. Go to the article where you found the error, click edit this page and search for "seppember"; it's probably in the text of the page, but it does not appear to be an error in the template itself.--Fuhghettaboutit (talk) 14:57, 7 February 2009 (UTC)
{{edit protected}} It would be nice if you could add "Fulganzio", "The Little Monk", or "Fulganzio, the little monk" under characters- he is a rather major character who shapes the play who shouldn't be left out of any character list. Thanks!—Preceding unsigned comment added by 71.81.69.75 (talk • contribs)
Not done Like the user directly above, you have apparently confused the text on a particular article that employs this template, with the template itself. This is a generic template appearing in hundreds of different articles. When it is used, a person fills in the blank parameters, manually, thus populating the template for that particular page. Go to the article at issue and make the edit to this filled out template there. If the article happens to be protected, then add your edit protected request to the article's talk page.--Fuhghettaboutit (talk) 23:54, 19 February 2009 (UTC)
[edit] General syntax cleanup
{{editprotected}} Requesting sync with the new sandbox for various bits of syntax cleanup including the removal of the "view/edit" links as requested in the section above. Other than that one change there shouldn't be any visible impact. Chris Cunningham (not at work) - talk 12:41, 22 April 2009 (UTC) | http://ornacle.com/wiki/Template_talk:Infobox_Play | crawl-002 | refinedweb | 4,191 | 67.08 |
Scott Seely
Microsoft Developer Network
November 2001
The following article is an excerpt of Chapter 2 from Scott Seely's book, SOAP: Cross Platform Web Service Development Using XML, Prentice Hall-PTR, © 2002.
Summary: This article explains what you need to know about XML in order to understand SOAP. You will learn the basics about Uniform Resource Identifers, XML, XML Schema, and XML Namespaces. (23 printed pages)
When looking for a way to express the SOAP payload, the authors of the specification had a number of ways they could have gone. They could have invented their own protocol, declared that CORBA or DCOM will now be known as SOAP, or invented something new by combining existing technologies. In the end, they chose to minimize the amount of required invention by combining existing technologies. To express the content of a SOAP message, the authors chose the eXtensible Markup Language, XML.
XML contains a large number of features—far more than SOAP uses or needs. For example, the SOAP specification states, "A SOAP message MUST NOT contain a Document Type Declaration. A SOAP message MUST NOT contain Processing Instructions" (SOAP 1.1 Specification, section 3, "Relation to XML"). For each XML feature that SOAP has adopted, the specification spells out how that feature will be used. You will see this in chapter 3 when looking at SOAP serialization. As we will see later, this decision makes it fairly easy to implement solutions using SOAP because developers do not need to have a full-fledged XML parser in order to use SOAP. In order to understand SOAP, we need to understand the following items first:

- Uniform Resource Identifiers (URIs)
- XML basics
- XML Schema
- XML Namespaces
In order to access a unique item over the Internet, you need to know how to identify that one object amongst everything else out there. Uniform Resource Identifiers, or URIs, provide a way of uniquely identifying those many different items. RFC1630 describes URIs in detail, spelling out the rules for using many different protocols within the URI framework. A URI has the form:
<scheme>:<scheme-specific-part>
When the scheme-specific-part contains slashes ('/'), those slashes indicate some hierarchical structure within the path.
The best-known type of URI is the Uniform Resource Locator, or URL. Like all URIs, a URL follows the <scheme>:<scheme-specific-part> method of addressing. Table 2.1 identifies the schemes named by RFC1738 and RFC1808. (You can obtain these and other RFCs from the IETF.) Using these schemes, we can connect to various places on the Web using nothing but a URL translator such as Internet Explorer or Netscape Navigator. URLs define this layout for the scheme-specific-part:
//<user>:<password>@<host>:<port>/<url-path>
If you are at all familiar with URLs, you know that a good number of the items in the above layout are optional. More often than not, you type in URLs such as http://www.example.com (a web site) or ftp://ftp.example.com (an FTP site).
The various parts of the scheme syntax identify:

user: the user name, for schemes that allow a login
password: the password that goes with the user name
host: the fully qualified domain name or IP address of the network host
port: the port number to connect to on the host
url-path: the location of the resource on the host
Table 2.1. Currently available URL Schemes

ftp: File Transfer Protocol
http: Hypertext Transfer Protocol
gopher: The Gopher protocol
mailto: Electronic mail address
news: USENET news
nntp: USENET news accessed via NNTP
telnet: Interactive Telnet sessions
wais: Wide Area Information Servers
file: Host-specific file names
prospero: Prospero Directory Service
Uniform Resource Names (URNs) are much less familiar to the average Web user than the ubiquitous URL. Unlike a URL, a URN does not resolve to a unique, physical location. URNs serve as persistent resource identifiers. They allow other collections of identifiers from one namespace to be mapped into URN-space. Because of this requirement, the URN syntax provides the ability to pass and encode character data using existing protocols. RFC2141 defines how to create and use a URN. The production for a URN follows the general rules for a URI. In general, it looks like this:
<URN> ::= "urn:" <NID> ":" <NSS>
A URN uses the string "urn:" to identify the scheme. NID specifies the Namespace ID and NSS specifies the Namespace Specific String. When interpreting URNs we look to the NID to tell us how to interpret the NSS. When reading or creating a URN, the initial construct "urn:" <NID> is case-insensitive.
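As a rough sketch (not part of the original article), the production above can be checked mechanically. The regular expression below is a simplified reading of the RFC 2141 grammar, not a complete implementation:

```python
import re

# Simplified check of <URN> ::= "urn:" <NID> ":" <NSS>.
# Per RFC 2141, the leading "urn:" and the NID are case-insensitive.
URN_RE = re.compile(
    r"^urn:"
    r"(?P<nid>[a-z0-9][a-z0-9-]{0,31}):"       # Namespace ID
    r"(?P<nss>[a-z0-9()+,\-.:=@;$_!*'%/]+)$",  # Namespace Specific String
    re.IGNORECASE)

def parse_urn(urn):
    """Return (NID, NSS) for a syntactically valid URN, else None."""
    m = URN_RE.match(urn)
    return (m.group("nid"), m.group("nss")) if m else None

print(parse_urn("urn:isbn:0130907634"))  # ('isbn', '0130907634')
print(parse_urn("not-a-urn"))            # None
```

Note that a real implementation would also handle %-escaping in the NSS; this sketch only checks the overall shape.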
URLs and URNs represent two common uses for a URI. In the next section we will see yet another use of URIs: XML Namespaces.
When XML first hit and the trade press began reviewing it in the 1996-1997 timeframe, I dug around looking for examples of what XML looked like. I was surprised at how many industry wonks were saying that it was the next big thing but then would not (or could not) show what this markup language looked like. Given the hype and lack of examples, I imagined it must be a fairly complex, ornery beast. After a few months of hype, developers began writing articles on the topic giving out the details I wanted to know. Some of these articles described it as a descendent of SGML, only better suited for development. How was it made to work better in the program development area? SGML offers you extraordinary levels of flexibility but makes it very difficult to implement a full-featured SGML parser. XML more or less defines a concrete set of rules that readers and writers of XML data must follow. Because the language definition for XML is more rigid, it is easier to create conforming documents and parsers. Do not get the wrong idea—XML is a subset of SGML. Anyhow, after the digerati calmed down and the developers got their chance to speak up I got really excited. Why? Well, the first thing was that I finally saw some practical applications of XML. It works as a data language that both machines and people can easily understand. If you have ever read or written HTML, you will find XML fairly easy to understand and use. Like HTML, it contains begin tags and end tags. Unlike HTML, every begin tag must have a matching end tag. End tags look like their matching begin tag with a leading "/". Let's jump in and take a look at what XML can look like.
The following XML shows one way of encoding the contents of a library:
<?xml version="1.0"?>
<Library>
<Book>
<Title>Green Eggs and Ham</Title>
<Author>Dr. Seuss</Author>
</Book>
<Book>
<Title>Windows Shell Programming</Title>
<Author>Scott Seely</Author>
</Book>
<Picture>
<Title>American Gothic</Title>
<Artist>Grant Wood</Artist>
</Picture>
</Library>
Even if you have never read XML in your life, the above makes a fair amount of sense. The document demonstrates a number of the rules found in an XML document. The first line in the above sample is a processing instruction declaring the version of XML used by the document. Documents do not have to include this element, but normally you should include it. All XML documents must have one enclosing element (the version information does not count as an enclosing element). The Library element wraps the entire document above. It contains three sub-elements: two books and a picture. As you may guess, not one word in the above XML document is an XML keyword. If you want to be a freewheeling XML author, all you need to do is watch the spelling in your tag names and make sure that every begin tag has an end tag. Writing XML documents this way can cause problems. For example, you could accidentally write this:
<Library>
<Book>
<Title>Green Eggs and Ham</Title>
<Author>Dr. Seuss</Author>
</Book>
<Bokk>
<Title>Windows Shell Programming</Title>
<Author>Scott Seely</Author>
</Bokk>
<Picture>
<Title>American Gothic</Title>
<Artist>Grant Wood</Artist>
</Picture>
</Library>
As a human reader, you recognize that the author of the document misspelled "Book" for the book, Windows Shell Programming. Likewise, the parser will accept the document but it will not realize that you have two books in the library list. Instead, it will think you have one Book, one Bokk, and one Picture. If you want the XML parser to do some checking for you and only read valid constructs, you can use something called a Document Type Declaration (DTD) or an XML Schema. DTDs are not covered in this book because section 3 of the SOAP specification specifies that a SOAP message "MUST NOT contain a Document Type Declaration." If you really must know how to use DTDs, see the recommended reading list at the end of the chapter. With a few exceptions (i.e. publishing, document management, etc.), you should always use XSD to describe data.
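The point that a merely well-formed parser happily accepts the misspelled element can be demonstrated with any XML parser. This short sketch (not from the original article) uses Python's ElementTree:

```python
import xml.etree.ElementTree as ET

# The misspelled library document from the text above.
doc = """<Library>
<Book><Title>Green Eggs and Ham</Title><Author>Dr. Seuss</Author></Book>
<Bokk><Title>Windows Shell Programming</Title><Author>Scott Seely</Author></Bokk>
<Picture><Title>American Gothic</Title><Artist>Grant Wood</Artist></Picture>
</Library>"""

root = ET.fromstring(doc)             # parses without complaint
print([child.tag for child in root])  # ['Book', 'Bokk', 'Picture']
print(len(root.findall("Book")))      # 1 -- the misspelled book is invisible
```

The document is well-formed, so the parse succeeds; only validation against a DTD or schema would flag the Bokk element as wrong.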
An XML Schema provides a superset of the capabilities found in a DTD. They both provide a method for specifying the structure of an XML element. Whereas both schemas and DTDs allow for element definitions, only schemas allow you to specify type information. All XML data is character-based. It will specify a 4 as the character 4, rarely as the binary representation 0100. (XML does allow for encoding binary data within the message. This method allows us to send things such as image data inside of an XML message.) We can enhance the library example to demonstrate the benefits of schemas over DTDs. We will add copyright date to the book information.
A simple DTD will have elements that contain other elements and/or character data. The simplest element declaration would declare the element name and the contents as character data:
<!ELEMENT element-name (#PCDATA)>
An element may also consist of other elements. If an element contains exactly one instance of a given element, we would have the following DTD:
<!ELEMENT parentElement (childElement)>
<!ELEMENT childElement (#PCDATA)>
Alternatively, the parentElement might contain zero or more childElements. We indicate this using an asterisk, *.
<!ELEMENT parentElement (childElement*)>
<!ELEMENT childElement (#PCDATA)>
Finally, you can also indicate composition of elements in a DTD. For example, parentElement might contain two different pieces of data.
<!ELEMENT parentElement (childElem1, childElem2)>
<!ELEMENT childElem1 (#PCDATA)>
<!ELEMENT childElem2 (#PCDATA)>
If we wanted to generate a DTD for a library of books, it might look like this:
<!ELEMENT Library (Book*)>
<!ELEMENT Book ( Title, Author*, Copyright )>
<!ELEMENT Title (#PCDATA)>
<!ELEMENT Author (#PCDATA)>
<!ELEMENT Copyright (#PCDATA)>
The Library consists of zero or more elements of type Book. Each Book has a Title, zero or more elements of type Author, and a Copyright. The Title, Author, and Copyright elements all contain character data. Rewriting the library example to use the DTD, we have the following XML document:
<?xml version="1.0" ?>
<!DOCTYPE Library PUBLIC "." "Library.dtd" >
<Library>
<Book>
<Title>Green Eggs and Ham</Title>
<Author>Dr. Seuss</Author>
</Book>
<Book>
<Title>Windows Shell Programming</Title>
<Author>Scott Seely</Author>
</Book>
</Library>
A validating parser will load Library.dtd and use it to validate the contents of the document. This is all well and good, but wouldn't it be nice if we could specify more information than "this element contains character data"? You see, DTDs come from SGML. SGML primarily concerned itself with document publishing. As such, the print industry has been using it for years. Because they deal with print issues all the time, SGML provided ways to reproduce the same document in lots of different forms. Now that computing has embraced XML, the programmer types (i.e. you and me), wanted a way to express the characteristics of the data. A DTD can specify the number of instances of a piece of data and what a particular structure looks like. By extending a SGML dialect, I could even specify the characteristics of the data. The problem here is that every developer may come up with a different naming system. I also have a gripe with DTDs—they do not look like XML. For these and other reasons, the W3C eventually published the XML Schema recommendation. Here is the Library DTD defined as a schema:
Library.dtd
<schema xmlns:xsd=
""
targetNamespace=
""
xmlns:
<complexType name="Book">
<element type="Title"></element>
<element type="Author"></element>
<element type="Copyright"></element>
</complexType>
<simpleType name="Title" xsi:
</simpleType>
<simpleType name="Author" xsi:
</simpleType>
<simpleType name="Copyright" xsi:
</simpleType>
</schema>
You would save the above as an XML file. To use the schema, simply reference the targetNamespace in your document like so:
<myLibrary:Library xmlns:
<myLibrary:Book>
<myLibrary:Title>Green Eggs and Ham
</myLibrary:Title>
<myLibrary:Author>Dr. Seuss
</myLibrary:Author>
<myLibrary:Copyright>1957
</myLibrary:Copyright>
</myLibrary:Book>
<myLibrary:Book>
<myLibrary:Title>Windows Shell Programming
</myLibrary:Title>
<myLibrary:Author>Scott Seely
</myLibrary:Author>
<myLibrary:Copyright>2000
</myLibrary:Copyright>
</myLibrary:Book>
</myLibrary:Library>
Both the schema and the document use the text, xmlns. This string tells the parser to use the set of names specified by the namespace identified by the indicated URI. This means that both the reader and writer of the XML document must agree on what the particular XML Namespace means. Without this agreement, the XML Schema will lose any potential value. All elements inside the tag using the xmlns declaration are part of the enclosing namespace unless otherwise specified.
To aid with the definition and validation of data, XML Schema uses facets to define characteristics of a specific datatype. A facet defines an aspect of a value space. A "value space" is the set of all valid values for a given datatype. You use a facet to distinguish what makes one datatype different from another. The XML Schema document specifies two types of facets: fundamental and non-fundamental facets.
A fundamental facet is an abstract property that characterizes the values of a value space. These include the following facets:
(a, b)
a
b
a=b
a!=b
(a, b)
a=b
a!=b
a=a
b=a
(a, b, c)
b=c
a=c
The non-fundamental or constraining facets are optional properties that you can apply to a datatype to constrain its value space. The following facets do this for you:
Using all of these facets you can constrain existing datatypes. This helps perform tasks such as data validation and verifying the overall "correctness" of an XML document.
Combined with facets, the XML Schema datatypes can help you give meaning to the items contained by your schema. For a comprehensive listing of all available data types, look at.
We already saw these in use in the last section, XML Schemas. Simply put, namespaces define a set of unique names within a given context. A namespace can use any URN as long as that URN is unique. For example, the preceding schema defined the namespace myLibrary. The schema contained in the file LibrarySchema.xml is in the same directory as the source page and uniquely identifies the namespace.
myLibrary
LibrarySchema.xml
What does a namespace do for us? It allows us to create multiple elements with the same name (such as postOffice:address and memory:address). Putting these similar structures into unique namespaces helps prevent the concepts from clashing with each other and allows the computer to unequivocally determine which structure is being referenced. This same practice exists in C++, Java, C#, and a number of other languages. A number of arguments exist that are both for and against namespaces. Many of the arguments against namespaces boil down to the idea that namespaces are a solution in search of a problem. The arguments for them state that developers are better off when they do not have to re-architect an application because someone else used a function with the same name. With regards to the C language and pre-standardized C++, people avoided collisions with things such as standard library functions all the time. For better or worse, these same people also had to avoid collisions with names of functions supplied by various vendors. Often, the code supplied by these vendors would clash with functions written by the developer. In Java, the location of the package often defines the namespace. For example, you can create two classes named Foo and differentiate them by putting them in different packages (com.scottseely.foo is different from com.prenticehall.foo). This is a bit off the topic, but here is an example of how namespaces work in C++.
postOffice:address
memory:address
Foo
com.scottseely.foo
com.prenticehall.foo
#include "someVendorHeader.h"
void someFunc()
{
// Code to do something
}
Inside of the vendor's header file, they have a function with the same signature as someFunc(), which means that the code will not compile. To fix this, the programmer can write this:
someFunc()
#include "someVendorHeader.h"
namespace myFuncs
{
void someFunc()
{
// Code to do something, even call the
// vendor's function!-- :: says to use the
// function in the global namespace.
::someFunc();
}
}
Problem solved! With the various DTDs and schemas being created, the creators of XML namespaces figured that they could learn from others and include similar functionality. Let's look back at the schema example and the lines that define the namespace:
<myLibrary:Library xmlns:
Regardless of the name of the schema in LibrarySchema.xml, the enclosing namespace is named myLibrary. The namespace could have easily been called x, Bob, or yth443. By using a namespace, we make it possible to use many different schemas that define Book. Imagine that you are an online bookseller. All of your vendors ship you their catalogs via XML. Each vendor defines the Book element slightly differently. Because there is no standardization, you have to read in the various catalogs and normalize the data for your database. Namespaces can help you do this by putting each Book definition into a uniquely identified namespace. (Other examples abound: You could use namespaces to aggregate job databases, stock market data, or cooking recipes.)
x
Bob
yth443
Namespaces also come in handy for creating self-documenting XML. If you are using schema from many different sources, using namespaces will help the human reader know where the various bits of data came from. Within an XML document, a namespace remains active for the element declaring it and all elements contained by the declarer. Likewise, if an inner element declares a different namespace then all of its inner elements use the new namespace. To see this, consider the following example. Elements in the outer namespace are displayed using regular characters and the inner namespace is in italics.
<outer:library xmlns:
<book>
<title>The XML Handbook</title>
<author>Charles F. Goldfarb</author>
<author>Paul Prescod</author>
</book>
<inner:book
xmlns:
<title>Windows Shell Programming</title>
<writer>Scott Seely</writer>
</inner:book>
</outer:library>
These scoping rules help out by reducing the verbosity of the XML document. An XML document may also mix and match namespaces within a single element. The scoping rules outlined above still apply—they just seem to get a bit more complex. Consider this example that matches up a book with some Library of Congress information.
<lib:library xmlns:lib=
""
xmlns:LOC=
">
<book>
<title>The XML Handbook</title>
<author>Charles F. Goldfarb</author>
<author>Paul Prescod</author>
<LOC:ISBN>0-13-014714-1</LOC:ISBN>
</book>
</lib:library>
In the above example, the elements book, author, and library are all part of the lib namespace. ISBN exists as a part of the LOC namespace.
book
author
library
lib
ISBN
LOC
Namespaces combined with schemas provide some great opportunities for document validation and ease of readability. The XML Namespace may reference a targetNamespace that is part of a XML Schema known to both the reader and writer of the XML document. Fortunately, this is not the only possibility. A namespace can be used to simply make the named element unique within the XML document.
All of the XML documents presented in this chapter have used elements to present data. XML also supports attributes. We saw these used as facets within the description of XML Schema. As stated earlier, elements require begin and end tags. Attributes do not. Instead, they are contained by the begin tag of an element. A given element can have one or more elements of the same type. It can only have one attribute of any given type. The following XML is legal:
<Library>
<Book title="Windows Shell Programming">
<Author>Scott Seely</Author>
</Book>
</Library>
The Book element has an attribute, title, which gives the title of the book. The XML expresses Author as a sub-element. This could have easily been expressed as another attribute and been 100% valid.
title
Author
<Library>
<Book title="Windows Shell Programming"
author="Scott Seely" />
</Library>
How would you express a book with more than one author? You could try this:
<Library>
<Book title="The XML Handbook"
author="Charles F. Goldfarb"
author="Paul Prescod">
</Book>
</Library>
As I mentioned already, the above fragment is invalid. You cannot have two attributes with the same name. You could achieve a similar effect by writing this:
<Library>
<Book title="The XML Handbook">
<author name="Charles F. Goldfarb" />
<author name="Paul Prescod" />
</Book>
</Library>
The author element uses a name attribute to contain the names of any writers associated with the Book. Because these are empty elements, the fragment uses the empty element notation: "/>". Attributes can be declared in three different ways:
name
/>
The above examples use option 1. This works well for learning but poorly for production environments. As mentioned earlier, SOAP forbids the use of DTDs, so we will not investigate that option. This leaves us with Option 3, schema. When creating attributes for an XML Schema, you use the attribute keyword. This word only has meaning within the schema namespace. You use attribute to define characteristics of the type. Attribute includes the item within an element type definition. To create the book example using attributes, the schema would look something like this:
attribute
Attribute
<Schema xmlns:xsd
""
targetNamespace=
xmlns:
<attribute name="title"
xsi:
<attribute name="name" xsi:
<complexType name="Author" content="empty">
<attribute type="name" />
</complexType>
<complexType name="Book" content="eltOnly">
<attribute type="title" />
<element type="Author" />
</complexType>
</Schema>
Looking at the title attribute definition, we see that it specifies the datatype (string) and the name of the element. Fairly easy, right? The full syntax for an attribute is:
string
<attribute
default="default value"
fixed = "fixed value"
form = "{qualified | unqualified}"
id = "ID"
name="NCName"
ref = "QName"
xsi:
For an example using all the fields, let's add a new attribute to the myBook schema, format.
myBook
format
<Schema xmlns:xsd=
""
xmlns:
<attribute name="title"
xsixsd:
<attribute name="name" xsi:
<attribute name="format"
default="soft-cover" use="optional">
<enumeration value="soft-cover" />
<enumeration value="hard-cover" />
</attribute>
<complexType name="Author">
<attribute type="name" />
</complexType>
<complexType name="Book">
<attribute type="title" />
<attribute type="format" />
<element type="Author" />
</complexType>
</Schema>
Using this schema for one title, we would have the following XML:
<lib:Book xmlns:
<author name="Charles F. Goldfarb" />
<author name="Paul Prescod" />
</lib:Book>
If a program requested the format attribute from the Book element, it should get back the value soft-cover. Viewing this XML document in Internet Explorer 5.5 yields the data shown in Figure 1.
soft-cover
Figure 1. Using Microsoft Internet Explorer to view XML documents
Internet Explorer will not flag invalid data, but it will flag properly (and improperly) structured data. For example, you could set the format attribute to "stone-tablet" and Internet Explorer would still display the document.
"stone-tablet"
This chapter presented just enough information to make SOAP accessible to you. Many Internet technologies use URIs to express locations and other concepts. You must understand how these are formed and what they mean in order to appreciate there usefulness when use by other markup languages and protocols. After discussing the basics, we took a quick look at XML. Since this language came onto the scene in late 1997, many new ideas have been layered on top of it. Besides XML Schemas and Namespaces we have also seen other technologies layered on top of XML. Among the proposals winding their way through the W3C approval process are:
At the request of my reviewers, I have to point out that the above synopses are very limited descriptions of all you can do with the various W3C recommendations and their related implementations. Many of the specifications are fairly long. I would recommend visiting to read the current overviews of the various technologies if one looks interesting to you.
Of course, there are many other ideas related to XML winding their way through the standards process such as SOAP. While working with SOAP, you will find it handy to have XML reference material handy. I went through a lot of effort to make sure this book stands on its own. Still, it is hard to cover XML in a chapter. Fortunately, a lot of good books exist. The best all-around book on the market that I have found is The XML Handbook by Charles F. Goldfarb and Paul Prescod. Mr. Goldfarb has been involved with SGML (and consequently XML) since its inception. If you have good financial resources you should also purchase the XML Developers Toolkit. The Toolkit contains three books at a reduced price. Even though Prentice Hall publishes these books, I do not recommend them just to keep my publisher happy. These truly are the best books I own regarding XML and I went through a lot of books before I found these.
At this point you should understand enough about XML to make the SOAP specification readable. Let's get moving and cover the specification! | http://msdn.microsoft.com/en-us/library/ms996539.aspx | crawl-002 | refinedweb | 4,181 | 56.05 |
20111112¶
user-specific language selection¶
More details to do for the support of user-specific language selection.
lino.apps.dsbe.models.CefLevel overrides display_text,
and this method wasn’t yet adapted to yesterday’s changes.
There were still some places where Lino translates too early:
- the headers of tabs
- labels of buttons and menus
Worked on the French translations which are far from being perfect but start to get usable.
The gender (m/f) of a person is called "Gender" in English, not "Sex". For now I have at least changed the field label. When there is an opportunity, the internal field name should be changed as well (although that will be more work).
There is one conceptual detail which needed a design decision: how to mark hotkey letters in menu items. Until now we had the following temporary solution:
def prepare_label(mi):
    label = unicode(mi.label)  # trigger translation
    n = label.find(mi.HOTKEY_MARKER)
    if n != -1:
        label = label.replace(mi.HOTKEY_MARKER, '')
    return label
This method doesn’t work any longer with lazy translation. But anyway it wasn’t used since lino.ui.extjs3 simply removed the markers.
Listings¶
Optimizations when printing a
Listing:
- It didn’t use the DavLink applet when lino.Lino.use_davlink was True. Fixed.
- It no longer uses a hard-coded site-wide template “Listing.odt” but a Default.odt in the model’s config directory.
Check-in 20111112b
Miscellaneous Bugfixes¶
- Candidates of a Course: this print action was erroneously moved from Course to CourseOffer. Fixed.
- The Labels of menu entries for Listings were strange: twice the listing title, separated by a space. Fixed.
- There is now also a command to view the existing records for each Listing type.
- Fixed the “Irritating scrollbar” bug: The welcome page gets a vertical scrollbar when it has more than a screenful of information to display. That’s normal, but it was irritating that this scrollbar “didn’t disappear” when another window is opened. And even worse, it disturbed the layout of some windows (e.g. detail of a course) because their layout manager isn’t obviously aware of that scrollbar when creating the canvas, and thus each window now also has a horizontal scrollbar.
- html_lines inserts the welcome.html now only if not on_ready.
- Quick Links now also use a http: instead of a javascript: link.
Check-in 20111112c
Sunday¶
Started a new experimental application for internal use.
Some general optimizations:
- Added new configuration setting languages.
- e.g. PersonMixin replaced by Person. Note that in lino.modlib.contacts.models, Person and Company are abstract and are not automatically subclasses of Contact. There may be applications that want "persons" that are not a Contact.
- Field “sex” renamed to “gender” (this will need data migration)
Added new module lino.modlib.tickets.
Check-in 20111113b | http://luc.lino-framework.org/blog/2011/1112.html | CC-MAIN-2019-13 | refinedweb | 451 | 51.24 |
Versions: [Basic Rules] [Basic + Intermediate Rules] [All Rules]
Use the Google Java style guide for any topics not covered in this document.
Legend: basic rule | intermediate rule | advanced rule
Names representing packages should be in all lower case.
com.company.application.ui
For school projects, the root name of the package should be your group name or project name followed by logical group names.
e.g. todobuddy.ui, todobuddy.file etc.
Rationale: Your code is not officially ‘produced by NUS’, therefore do not use edu.nus.comp.* or anything similar.
Class/enum names must be nouns and written in PascalCase.
Line, AudioSystem
Variable names must be in camelCase.
line, audioSystem
Constant names must be all uppercase using underscore to separate words.
MAX_ITERATIONS, COLOR_RED
Names representing methods must be verbs and written in camelCase.
getName(), computeTotalWidth()
Underscores may be used in test method names using the following three part format
featureUnderTest_testScenario_expectedBehavior()
e.g.
sortList_emptyList_exceptionThrown()
getMember_memberNotFound_nullReturned()
The third part, or both the second and third parts, can be omitted depending on what's covered in the test.
For example, the test method sortList_emptyList() will test the sortList() method for all variations of the 'empty list' scenario, and the test method sortList() will test the sortList() method for all scenarios.
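As an illustration, a test class following this naming convention might look like the sketch below. The sortList method and its behavior (sorting an empty list is treated as an error) are hypothetical, included only to make the example self-contained.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SortListTest {

    // Hypothetical method under test, shown only to make the example runnable.
    static List<Integer> sortList(List<Integer> items) {
        if (items.isEmpty()) {
            throw new IllegalArgumentException("empty list");
        }
        Collections.sort(items);
        return items;
    }

    // featureUnderTest_testScenario_expectedBehavior
    static void sortList_emptyList_exceptionThrown() {
        try {
            sortList(new ArrayList<>());
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException e) {
            // expected: the 'empty list' scenario produces the expected behavior
        }
    }

    public static void main(String[] args) {
        sortList_emptyList_exceptionThrown();
        System.out.println("test passed");
    }
}
```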
Abbreviations and acronyms should not be uppercase when used as a part of a name.
Good
exportHtmlSource();
openDvdPlayer();
Bad
exportHTMLSource();
openDVDPlayer();
All names should be written in English.
Rationale: The code is meant for an international audience.
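A minimal contrast might look like the sketch below; both method names are invented for illustration, with the second showing a non-English (German) name to avoid.

```java
public class NamingLanguageExample {

    // Good: English name that any reader can understand.
    static int computeTotalWidth(int left, int right) {
        return left + right;
    }

    // Bad (avoid): the same method with a German name.
    static int berechneGesamtbreite(int links, int rechts) {
        return links + rechts;
    }

    public static void main(String[] args) {
        System.out.println(computeTotalWidth(3, 4)); // prints 7
    }
}
```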
Variables with a large scope should have long names, variables with a small scope can have short names.
Scratch variables used for temporary storage or indices can be kept short. A programmer reading such variables should be able to assume that their values are not used outside a few lines of code. Common scratch variables for integers are i, j, k, m, n and for characters c and d.
Rationale: When the scope is small, the reader does not have to remember it for long.
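A sketch of this rule in practice (the class, field, and values are invented for illustration): the class-level field gets a long descriptive name, while the loop index and accumulator inside the short method keep short names.

```java
import java.util.List;

public class ScopeExample {

    // Large scope (class level): long, descriptive name.
    private static int totalRegisteredStudents;

    // Small scope: short names like i and sum are fine inside the method.
    static int sumOf(List<Integer> values) {
        int sum = 0;
        for (int i = 0; i < values.size(); i++) {
            sum += values.get(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        totalRegisteredStudents = sumOf(List.of(10, 20, 30));
        System.out.println(totalRegisteredStudents); // prints 60
    }
}
```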
Boolean variables/methods should be named to sound like booleans
// variables
isSet, isVisible, isFinished, isFound, isOpen, hasData, wasOpen

// methods
boolean hasLicense();
boolean canEvaluate();
boolean shouldAbort = false;
As much as possible, use a prefix such as is, has, was, etc. for boolean variable/method names so that linters can automatically verify that this style rule is being followed.
Setter methods for boolean variables must be of the form:
void setFound(boolean isFound);
Rationale: This is the naming convention for boolean methods and variables used by Java core packages. It also makes the code read like normal English e.g. if (isOpen) ...
Plural form should be used on names representing a collection of objects.
Collection<Point> points;
int[] values;
Rationale: Enhances readability since the name gives the user an immediate clue of the type of the variable and the operations that can be performed on its elements. One space character after the variable type is enough to obtain clarity.
Iterator variables can be called i, j, k etc.
Variables named j, k etc. should be used for nested loops only.
for (Iterator i = points.iterator(); i.hasNext(); ) {
    ...
}

for (int i = 0; i < nTables; i++) {
    ...
}
Rationale: The notation is taken from mathematics where it is an established convention for indicating iterators.
Associated constants should have a common prefix.
static final int COLOR_RED = 1;
static final int COLOR_GREEN = 2;
static final int COLOR_BLUE = 3;
Rationale: This indicates that they belong together, and make them appear together when sorted alphabetically.
Basic indentation should be 4 spaces (not tabs).
for (i = 0; i < nElements; i++) {
    a[i] = 0;
}
Rationale: Just follow it.
Line length should be no longer than 120 chars.
Try to keep line length shorter than 110 characters (soft limit). But it is OK to exceed the limit slightly (hard limit: 120 chars). If the line exceeds the limit, use line wrapping at appropriate places of the line.
Indentation for wrapped lines should be 8 spaces (i.e. twice the normal indentation of 4 spaces) more than the parent line.
setText("Long line split"
        + "into two parts.");

if (isReady) {
    setText("Long line split"
            + "into two parts.");
}
Place line break to improve readability
When wrapping lines, the main objective is to improve readability. Do not always accept the auto-formatting suggested by the IDE.
In general:
Break after a comma.
Break before an operator. This also applies to the following "operator-like" symbols: the dot separator ., the ampersand in type bounds <T extends Foo & Bar>, and the pipe in catch blocks catch (FooException | BarException e).
totalSum = a + b + c
        + d + e;
setText("Long line split"
        + "into two parts.");
method(param1, object.method()
        .method2(), param3);
A method or constructor name stays attached to the open parenthesis ( that follows it.
Good
someMethodWithVeryVeryVeryVeryVeryVeryVeryVeryVeryVeryVeryLongName(
        int anArg, Object anotherArg);
Bad
someMethodWithVeryVeryVeryVeryVeryVeryVeryVeryVeryVeryVeryLongName
        (int anArg, Object anotherArg);
Good
longName1 = longName2 * (longName3 + longName4 - longName5)
        + 4 * longname6;
Bad
longName1 = longName2 * (longName3 + longName4 - longName5) +
        4 * longname6;
alpha = (aLongBooleanExpression) ? beta
        : gamma;

alpha = (aLongBooleanExpression)
        ? beta
        : gamma;
Use K&R style brackets (aka Egyptian style).
Good
while (!done) {
    doSomething();
    done = moreToDo();
}
Bad
while (!done)
{
    doSomething();
    done = moreToDo();
}
Rationale: Just follow it.
Method definitions should have the following form:
public void someMethod() throws SomeException {
    ...
}
The if-else class of statements should have the following form:
if (condition) {
    statements;
}

if (condition) {
    statements;
} else {
    statements;
}

if (condition) {
    statements;
} else if (condition) {
    statements;
} else {
    statements;
}
The for statement should have the following form:
for (initialization; condition; update) {
    statements;
}
The while and the do-while statements should have the following form:
while (condition) {
    statements;
}

do {
    statements;
} while (condition);
The switch statement should have the following form. Note there is no indentation for case clauses.
Configure your IDE to follow this style instead.
switch (condition) {
case ABC:
    statements;
    // Fallthrough
case DEF:
    statements;
    break;
case XYZ:
    statements;
    break;
default:
    statements;
    break;
}
The explicit //Fallthrough comment should be included whenever there is a case statement without a break statement.
Rationale: Leaving out the break is a common error, and it must be made clear that it is intentional when it is not there.
A try-catch statement should have the following form:
try {
    statements;
} catch (Exception exception) {
    statements;
}
try {
    statements;
} catch (Exception exception) {
    statements;
} finally {
    statements;
}
White space within a statement
It is difficult to give a complete list of the suggested use of whitespace in Java code. The examples below however should give a general idea of the intentions.
Rationale: Makes the individual components of the statements stand out and enhances readability.
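The original table of examples is not reproduced here, but a sketch of the commonly intended conventions (spaces around binary operators, after commas and semicolons, and after keywords such as for) might look like this; the method and values are invented for illustration.

```java
public class WhitespaceExample {

    static int demo(int a, int b) {        // space after each comma in parameter lists
        int c = (a + b) * 2;               // binary operators surrounded by spaces
        for (int i = 0; i < 3; i++) {      // space after semicolons and after 'for'
            c = c + i;
        }
        return c;
    }

    public static void main(String[] args) {
        System.out.println(demo(1, 2));    // prints 9
    }
}
```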
Logical units within a block should be separated by one blank line.
// Create a new identity matrix
Matrix4x4 matrix = new Matrix4x4();

// Precompute angles for efficiency
double cosAngle = Math.cos(angle);
double sinAngle = Math.sin(angle);

// Specify matrix as a rotation transformation
matrix.setElement(1, 1, cosAngle);
matrix.setElement(1, 2, sinAngle);
matrix.setElement(2, 1, -sinAngle);
matrix.setElement(2, 2, cosAngle);

// Apply rotation
transformation.multiply(matrix);
Rationale: Enhances readability by introducing white space between logical units. Each block is often introduced by a comment as indicated in the example above.
Put every class in a package.
Every class should be part of some package.
Rationale: It will help you and other developers easily understand the code base when all the classes have been grouped in packages.
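As a sketch, reusing the hypothetical todobuddy project from the naming section, a class in the todobuddy.ui package starts with a package statement and lives in a matching directory.

```java
// File: todobuddy/ui/MainWindow.java
package todobuddy.ui;

public class MainWindow {

    public static String describe() {
        // The fully qualified name includes the package.
        return MainWindow.class.getName();
    }

    public static void main(String[] args) {
        System.out.println(describe()); // prints todobuddy.ui.MainWindow
    }
}
```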
Put related classes in a single package.
Package together the classes that are related. For example in Java, the classes related to file writing are grouped in the package java.io and the classes which handle lists, maps etc. are grouped in the java.util package.
The ordering of import statements must be consistent.
Rationale: A consistent ordering of import statements makes it easier to browse the list and determine the dependencies when there are many imports. Example:
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.io.File;
import java.io.IOException;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;

import org.loadui.testfx.GuiTest;
import org.testfx.api.FxToolkit;

import com.google.common.io.Files;

import javafx.geometry.Bounds;
import javafx.geometry.Point2D;
import junit.framework.AssertionFailedError;
IDEs have support for auto-ordering import statements. However, note that the default orderings of different IDEs are not always the same. It is recommended that you and your team use the same IDE and stick to a consistent ordering.
Imported classes should always be listed explicitly.
Good
import java.util.List;
import java.util.ArrayList;
import java.util.HashSet;
Bad
import java.util.*;
Rationale: Importing classes explicitly gives an excellent documentation value for the class at hand and makes the class easier to comprehend and maintain. Appropriate tools should be used in order to always keep the import list minimal and up to date. IDEs can be configured to do this easily.
Class and Interface declarations should be organized in the following manner:
1. Class/Interface documentation comment
2. class or interface statement
3. Class (static) variables in the order public, protected, private
4. Instance variables in the order public, protected, private
5. Constructors
6. Methods (no specific order)
Rationale: Make code easy to navigate by making the location of each class element predictable.
Method modifiers should be given in the following order:
<access> static abstract synchronized <unusual> final native
<access> = public | protected | private
<unusual> = volatile | transient
The <access> modifier (if present) must be the first modifier.
Good
public static double square(double a);
Bad
static public double square(double a);
Rationale: The most important point here is to keep the access modifier as the first modifier. The order is less important for the other modifiers, but it makes sense to have a fixed convention.
Array specifiers must be attached to the type not the variable.
Good
int[] a = new int[20];
Bad
int a[] = new int[20];
Rationale: The arrayness is a feature of the base type, not the variable. Java allows both forms however.
Variables should be initialized where they are declared and they should be declared in the smallest scope possible.
Good
int sum = 0;
for (int i = 0; i < 10; i++) {
    for (int j = 0; j < 10; j++) {
        sum += i * j;
    }
}
Bad
int i, j, sum;
sum = 0;
for (i = 0; i < 10; i++) {
    for (j = 0; j < 10; j++) {
        sum += i * j;
    }
}
Rationale: This ensures that variables are valid at any time. Sometimes it is impossible to initialize a variable to a valid value where it is declared. In these cases it should be left uninitialized rather than initialized to some phony value.
Class variables should never be declared public unless the class is a data class with no behavior. This rule does not apply to constants.
Bad
public class Foo {
    public int bar;
}
Rationale: The concept of Java information hiding and encapsulation is violated by public variables. Use non-public variables and access functions instead.
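For contrast, a sketch of the encapsulated alternative is shown below; the Account class and its methods are invented for illustration. The field stays private and all access goes through methods, which lets the class validate changes.

```java
public class Account {

    // Field is hidden; callers must go through the access methods.
    private int balance;

    public int getBalance() {
        return balance;
    }

    public void deposit(int amount) {
        // Encapsulation lets the class enforce invariants like this one.
        if (amount < 0) {
            throw new IllegalArgumentException("amount must be non-negative");
        }
        balance += amount;
    }

    public static void main(String[] args) {
        Account account = new Account();
        account.deposit(100);
        System.out.println(account.getBalance()); // prints 100
    }
}
```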
Avoid unnecessary use of this with fields.
Use the this keyword only when a field is shadowed by a method or constructor parameter.
Good
public User(String name) {
    this.name = name;
    ...
}
Bad
public User(String name) {
    // 'id' is not shadowed by any method parameters
    this.id = User.getNewId();
    ...
}
Rationale: to reduce unnecessary noise.
The loop body should be wrapped by curly brackets irrespective of how many lines there are in the body.
Good
for (i = 0; i < 100; i++) {
    sum += value[i];
}
Bad
for (i = 0, sum = 0; i < 100; i++)
    sum += value[i];
Rationale: When there is only one statement in the loop body, Java allows it to be written without wrapping it between { }. However, that is error prone and strongly discouraged.
The conditional should be put on a separate line.
Good
if (isDone) {
    doCleanup();
}
Bad
if (isDone) doCleanup();
Rationale: This helps when debugging using an IDE debugger. When writing on a single line, it is not apparent whether the condition is really true or not.
Single statement conditionals should still be wrapped by curly brackets.
Good
InputStream stream = File.open(fileName, "w");
if (stream != null) {
    readFile(stream);
}
Bad
InputStream stream = File.open(fileName, "w");
if (stream != null)
    readFile(stream);
The body of the conditional should be wrapped by curly brackets irrespective of how many statements.
Rationale: Omitting braces can lead to subtle bugs.
All comments should be written in English.
Furthermore, use American spelling and avoid local slang.
Rationale: The code is meant for an international audience.
Write descriptive header comments for all public classes/methods.
You MUST write header comments for all classes, public methods except for getters/setters.
Rationale: public methods are meant to be used by others, and the users should not be forced to read the code of the method to understand its exact behavior. The code, even if it is self-explanatory, can only tell the reader HOW the code works, not WHAT the code is supposed to do.
All non-trivial private methods should carry header comments.
Rationale: Writing header comments will help novice programmers to self-detect abstraction problems. e.g. If it is hard to describe the method succinctly, there is something wrong with the method abstraction.
Javadoc comments should have the following form:
/** * Returns lateral location of the specified position. * If the position is unset, NaN is returned. * * @param x X coordinate of position. * @param y Y coordinate of position. * @param zone Zone of position. * @return Lateral location. * @throws IllegalArgumentException If zone is <= 0. */ public double computeLocation(double x, double y, int zone) throws IllegalArgumentException { //... }
Note in particular:
/**on a separate line
Returns ...,
Sends ...,
Adds ...(not
Returnor
Returnningetc.)
*is aligned with the first one
*
Javadoc of class members can be specified on a single line as follows:
/** Number of connections to this database */ private int connectionCount;
Comments should be indented relative to their position in the code.
Good
while (true) { // Do something something(); }
Bad
while (true) { // Do something something(); }
Bad
while (true) { // Do something something(); }
Rationale: This is to avoid the comments from breaking the logical structure of the program. | https://se-education.org/guides/conventions/java/index.html | CC-MAIN-2022-33 | refinedweb | 2,244 | 58.08 |
C++ Programming
Reverse number program by C++
Reverse number program will reverse the position of all the digits of the given number. In this guide we will see C++ program to reverse a number. We will take the number from user and reverse it. If the user gives a number 753 then after reversing the number will be 357. Then we will print the result.
In C programming we have discussed several C program to reverse a number. Here, we will do same thing with C++ language. Let’s see the program bellow to reverse a number using C++.
C++ program to reverse a number
At first we will take the number and store it to main_num variable. Then we will use while loop to continue until the number is not equal to zero. Inside the while loop we have implemented our logic to reverse the number.
// c++ program to reverse a number #include <iostream> using namespace std; int main(){ int main_num, reverse_num = 0, reminder; cout << "Enter the integer to reverse it : "; cin >> main_num; while(main_num != 0){ reminder = main_num % 10; reverse_num = reverse_num * 10 + reminder; main_num /= 10; } cout << "\nNumber after reversing is = " << reverse_num << endl; return 0; }
Output of reverse number program
| https://worldtechjournal.com/cpp-programming/reverse-number-program-cpp/ | CC-MAIN-2022-40 | refinedweb | 200 | 72.97 |
Inject Bower packages into your source code with Grunt.Grunt is great.gruntplugin html grunt bower package wiredep dependency component postinstall
Speed up your AngularJS app by automatically minifying, combining, and automatically caching your HTML templates with $templateCache.Global namespace for Angular.gruntplugin angular template templates concat angularjs angular-component
A grunt task for removing unused CSS from your projects with UnCSS. Issues with the output should be reported on the UnCSS issue tracker.gruntplugin uncss css
Concatenate files. Run this task with the grunt concat command.gruntplugin:keepalive.gruntplugin server connect http
This plugin was designed to work with Grunt 0.4.x. If you're still using grunt v0.3.x it's strongly recommended that you upgrade, but in case you can't please use v0.3.2. Run this task with the grunt copy command.gruntplugin
Issues with the output should be reported on the clean-css issue tracker. Run this task with the grunt cssmin command.gruntplugin cssmin css style styles stylesheet minify compress
Select.gruntplugin compress gif image img jpeg jpg minify png svg
Run this task with the grunt jshint command. Task targets, files and options may be specified according to the grunt Configuring tasks guide.gruntplugin
This plugin was designed to work with Grunt 0.4.x. If you're still using grunt v0.3.x it's strongly recommended that you upgrade, but in case you can't please use v0.3.2. Run this task with the grunt less command.gruntplugin
Run this task with the grunt uglify command. Task targets, files and options may be specified according to the grunt Configuring tasks guide.gruntplugin
Run this task with the grunt watch command. This defines what file patterns this task will watch. It can be a string or an array of files and/or minimatch patterns.gruntplugin watch livereload
Merge SVGs from a folder. I am looking for a maintainer who would like to take over this plugin.svg icon sprite gruntplugin
Autop
grunt-perfbudget is a Grunt.js task for enforcing a performance budget (more on performance budgets). It uses the wonderful webpagetest.org and the WebPagetest API Wrapper for NodeJS created by Marcel Dur.gruntplugin
We have large collection of open source products. Follow the tags from
Tag Cloud >>
Open source products are scattered around the web. Please provide information
about the open source projects you own / you use.
Add Projects. | https://www.findbestopensource.com/tagged/gruntplugin?fq=MIT | CC-MAIN-2021-25 | refinedweb | 403 | 61.02 |
0
Hey again I need a little help with this assignment. What I'm supposed to do is have a user enter a positive integer and then the program should tell the user all the prime numbers before the number the user entered. However, with my program when I enter 22 for example it will give me numbers like 25, 26, and 28, or something along those lines.
Here's my code:
#include <iostream> #include <string> using namespace std; int main () { int choice; cout<<"What would you like to do: "<<endl; cout<<"Make your choice by entering 1, 2, 3, or 4 "<<endl; cout<<"+-------------------------------+"<<endl; cout<<"1. Enter a positive integer, then print out all the prime numbers that are less than or equal to the number you entered"<<endl; cout<<"2. Enter a positive integer and then print out the number with its digits reversed"<<endl; cout<<"3. Enter two positive integers and then print out the greatest common factor"<<endl; cout<<"4. Nothing"<<endl; cout<<"+-------------------------------+"<<endl; cin>>choice; if(choice==1) { int n=0; cout<<"Please enter a positive integer "<<endl; cin>>n; while( n > 1) { if( n % 2 != 0, n % 3 != 0) { cout<<"The prime numbers are: "<<n<<endl; n++; if(n % 6 == 0) { n++; } } } }
Thanks again for the help | https://www.daniweb.com/programming/software-development/threads/69435/c-finding-prime-numbers | CC-MAIN-2017-17 | refinedweb | 213 | 69.11 |
Starting July 1st, I am going to have the opportunity to step outside by development comfort zone and begin working on the Cloud Foundry logging system team. Why is it outside my comfort zone? The biggest reason is that is it being written in Go. The second is that it is going to have performance requirements that I’m not used to being worried about. As a Web/Ruby/Rails/Javascript developer, I’ve been using a different skill set and I’m excited to get the opportunity to step outside of my comfort zone and try some new things.
I’ve been ramping up on Go in my spare time and as I learn more, I am going to share my experiences and some tips here. Let’s start off with how to write a simply Go program that runs from the command line and take arguments.
First you need to install Go. The Go team has done a great job outlining the steps so check out this page for details. Part of instructions will be writing a basic Go program to make sure everything works.
package main import "fmt" func main() { fmt.Printf("hello, world\n") }
Line 1 of the program creates a package called “main”. You use packages in Go in similar ways you’d use modules in Ruby except everything must live in a package. All Go programs need a “main” package and it tells Go where the program starts. The official entry point of a Go program is the “main” function:
func main() { }
See the Go language specification on Program Execution for more details.
The last part of that test program is the import statement:
import "fmt"
Think of them like a Ruby “require” statement that has a little more functionality. This specific import statement brings the “fmt” package into scope and allows you to print things to the screen.
That’s all for the initial post in this series. Please check back often for new posts. If you have any questions, please leave them in a comment and I’ll do my best to answer them. Also, if you have anything you’d like to see in this column, please let me know.
Sounds like fun. Really looking forward to more posts in this series!
June 23, 2013 at 6:19 pm
I am also learning GO after being happy with ruby for a long time. The package system is intriguing to me and it forms a central feature (of many) that makes GO easier to work in than C, for instance. When writing GO code, one benefits from adopting the GO workspace file structure–set $GOPATH to the directory you want to keep your GO code in (perhaps ~/Dropbox/go or ~/Google Drive/go), within the $GOPATH place a src directory, which will contain all the packages you create. Within src, you place a separate directory for each package you write. A package is the composed of all of the files in this directory. The directory name and the package files inside do not have to match each other–but you will find it most sensible to make them the match (i.e. package strings lives in the src/strings directory). All of the files in a directory should be part of the same package. You don’t mix ‘package main’ file with ‘package foo’ files. If you mix files in this way, GO will complain when you try to go install the file. The import statement, creates accessibility to exported (capitalized) functions and constants. The accessibility is file-limited. For instance, if you have a file bar.go and you ‘import strings’ into that file, strings.Replace (as an example) is now available within that file, but not from the rest of the package to which bar.go belongs. From elsewhere in the package (from some separate file), the strings functions won’t be visible. If you look at the files composing the standard package strings, you find that it is composed of:
/usr/local/go/src/pkg/strings
├── reader.go
├── replace.go
├── search.go
├── strings.go
and some test files that we will ignore. reader.go imports [errors, io, and unicode/utf8]; replace imports [io]; search.go does not import anything, and strings.go imports [unicode, unicode/utf8]. You now see why different files of this package must import the same external package.
That brings us to the realization that GO must keep some sort of registry of the visible code for associated with each file. When looking at a file, it is easy to track down where functions are coming from–‘import strings’ gives access to strings.Functions. If you don’t understand what strings. Somethings does, you can easily ‘go doc strings’ to find out. All along the way, GO seems to be designed to save you time in the short run–through great tools, and in the long run–by encouraging the developer to adopt well thought-out approaches to organization and design.
Looking in the source code is very educational. To aid that, I added the following function to my .zshrc file:
function letsGO()
{
#uses the go directory to find the standard go package and opens the package files in sublime text 2
#usage:
#% letsGO ‘package unicode$’
grep -rl “${1}” /usr/local/go/src/pkg | xargs subl
#find /usr/local/go -name “${1}” | xargs subl
}
Happy GO-ing!
August 24, 2013 at 3:44 pm | http://pivotallabs.com/a-rubyist-learning-go-a-basic-go-program/ | CC-MAIN-2015-32 | refinedweb | 908 | 72.36 |
The next trait from the top in the collections hierarchy is
Iterable. All methods in this trait are defined in terms of an an abstract method,
iterator, which yields the collection’s elements one by one. The
foreach method from trait
Traversable is implemented in
Iterable in terms of
iterator. Here is the actual implementation:
def foreach[U](f: Elem => U): Unit = { val it = iterator while (it.hasNext) f(it.next()) }
Quite a few subclasses of
Iterable override this standard implementation of foreach in
Iterable, because they can provide a more efficient implementation. Remember that
foreach is the basis of the implementation of all operations in
Traversable, so its performance matters.
Two more methods exist in
Iterable that return iterators:
grouped and
sliding. These iterators, however, do not return single elements but whole subsequences of elements of the original collection. The maximal size of these subsequences is given as an argument to these methods. The
grouped method returns its elements in “chunked” increments, where
sliding yields a sliding “window” over the elements. The difference between the two should become clear by looking at the following REPL interaction:
scala> val xs = List(1, 2, 3, 4, 5) xs: List[Int] = List(1, 2, 3, 4, 5) scala> val git = xs grouped 3 git: Iterator[List[Int]] = non-empty iterator scala> git.next() res3: List[Int] = List(1, 2, 3) scala> git.next() res4: List[Int] = List(4, 5) scala> val sit = xs sliding 3 sit: Iterator[List[Int]] = non-empty iterator scala> sit.next() res5: List[Int] = List(1, 2, 3) scala> sit.next() res6: List[Int] = List(2, 3, 4) scala> sit.next() res7: List[Int] = List(3, 4, 5)
Trait
Iterable also adds some other methods to
Traversable that can be implemented efficiently only if an iterator is available. They are summarized in the following table.
In the inheritance hierarchy below Iterable you find three traits: Seq, Set, and Map. A common aspect of these three traits is that they all implement the PartialFunction trait with its
apply and
isDefinedAt methods. However, the way each trait implements PartialFunction differs.
For sequences,
apply is positional indexing, where elements are always numbered from
0. That is,
Seq(1, 2, 3)(1) gives
2. For sets,
apply is a membership test. For instance,
Set('a', 'b', 'c')('b') gives
true whereas
Set()('a') gives
false. Finally for maps,
apply is a selection. For instance,
Map('a' -> 1, 'b' -> 10, 'c' -> 100)('b') gives
10.
In the following, we will explain each of the three kinds of collections in more detail.blog comments powered by Disqus
Contents | http://docs.scala-lang.org/overviews/collections/trait-iterable.html | CC-MAIN-2016-36 | refinedweb | 434 | 57.16 |
This is my fifth assignment where we try to build a working small project with a servo. The servo that I will use is made by Tower Pro. The name of the servo is Micro Serve 9g SG90 The range is 0 – 180 degrees where maximum speed is 60 degrees/s. I will also use the component from my previous assignment where the sensor was HC-SR04 (picture below).
When I started this assignment, the first problem I encountered was that the electricity that is generated from one pin in arduino is not sufficient enough for both the sensor and the servo so I had to divide them in their own pins because when I tried it on the breadboard it did not work properly. The wire colors in the servo are brown (Ground/GND/-), red (5v/+) and orange (For the signal). To use the servo, you must import the Servo.h library to the arduino software.
#include <Servo.h>
Before I start, please take time to take a look at my older assignment so you will understand what is happening in this assignment because I will use recycled code from my older posts.
Below I have posted the first phase of my testing with this servo, where the servo moves(degrees) based on the value it gets from the Ultrasonic sensor where the max value is 181 and lowest is 0 degrees. This test is for the later project where I will code the values for the robot walker/car. The code can include maneuvers in algorithmic order for example “if the object is too close –> STOP –> move slightly back –> STOP –> move right –> etc “. The logic how the robot will move is based on the components you will be using to change the direction of the robot. For example if you were to use wheels instead of legs, the code would be different. You can also load your robot with sensor so it would be safer if it was one of the bigger projects where the robot would be the size of industrial bots. Anyway, here is the code. Note that if you are new to programming languages, the compiler will ignore everything in one line after the // comment. This way you can take notes for yourself in the compiler itself. The same is with Java and C++.
#include <Servo.h> // Library
#define trigPin 13
#define echoPin 12
Servo servo; // The making of servo variable
void setup() {
Serial.begin (9600);
pinMode(trigPin, OUTPUT);
pinMode(echoPin, INPUT);
servo.attach(11); // Attach variable to pin 11
}
void loop() {
int duration, distance;
digitalWrite(trigPin, HIGH);
delayMicroseconds(1000);
digitalWrite(trigPin, LOW);
duration = pulseIn(echoPin, HIGH);
distance = (duration/2) / 29.1;
if (distance >= 181 || distance <= 0){
Serial.println(“Out of range”);
}
else {
Serial.print(distance);
Serial.println(” cm”);
servo.write(distance); // move the servo with distance
}
delay(2000); // delay 2 seconds
}
Here is some reference for the codes where attach() means
Attach the Servo variable to a pin. Note that in Arduino 0016 and earlier, the Servo library supports only servos on only two pins: 9 and 10.
Writes a value to the servo, controlling the shaft accordingly. On a standard servo, this will set the angle of the shaft (in degrees), moving the shaft to that orientation. On a continuous rotation servo, this will set the speed of the servo (with 0 being full-speed in one direction, 180 being full speed in the other, and a value near 90 being no movement).
If you are new to arduino, then I suggest you to look a closer look at the syntax example at their own website with broader information. Below is the picture of the components that I used for the above code.
Here is the video
With the components of the above picture/video, I modified the code to make this servo to mimic a walking robot (You would of course need two servos). The code is below. You can see that the servo will stop completely when an object comes closer than 5 cm in front of it. We can make the servo to do anything for example in the STOP section we could have created a reference to another method where if the object comes too close, turn and run. There the only limit that you have is the hardware and the programming language.
void loop() {
int duration, distance;
digitalWrite(trigPin, HIGH);
delayMicroseconds(1000);
digitalWrite(trigPin, LOW);
duration = pulseIn(echoPin, HIGH);
distance = (duration/2) / 29.1;
if (distance <= 5){
Serial.println(“STOP”);
}
else {
Serial.print(distance);
Serial.println(” cm”);
walk();
}
delay(500);
}
void walk()
{
servo.write(0);
delay(1000);
servo.write(180);
}
By the way, if you are completely new to arduino, I suggest you to take a look at the arduino example code library from the File –> Examples to get the basic idea of the logic behind every added component. You can also monitor the values and your own writings to the console from Serial monitor in the tools –> Serial monitor. I must also add that although this is not a programming class, it’s a good idea to always manage your code in smart way by not duplicating your code when you can create a method and reference that one method that has ten lines of code every time when you need the same functionality. The code in the assignment example has a method walk() and the way it works is explained below
void walk()
{ // Opening brackets
// All of the codes, must be inserted here
} // Closing brackets
and whenever you want to use this method you can reference in in other methods or loops by just adding
walk();
The method can have some functionality that does something, like in our case or to do some function with the values it receives inside the void walk(value){ }.Although I do not always follow my own advice’s they are known set of good programming standards to learn. You might also use operators on methods for example instead of using one method four times where the bot will walk few meters in every time a walk(); method is activated you can multiply the method by one activation but I am going to go in depth in the arithmetic functionality in another post. I can tell you this that the programming language in Arduino software is similiar to C with few exceptions.
I will update this assignment gradually.
Source:
Lectures on Tero Karvinen
Make: Arduino Bots and Gadgets by Kimmo and Tero Karvinen | https://kuroshfarsimadan.wordpress.com/2013/02/14/bus4tn007-3-tehtava-5-assignment-5/ | CC-MAIN-2017-26 | refinedweb | 1,082 | 59.33 |
)
Esc still won't get me out of visual mode?
Does this mean window/text command names need to be unique to avoid clashes?
Ah, true...
Shoot
Hello,
Another strange thing, seems it is installing itself as 2130 version (it shows this version in the about box).
Is it right ?
Thanks a lot =)
with the latest build (2130), there is a issue with the search file in project (broken)
When.
I'm getting the same offset flash of the tab name when saving.
Loading a project with many open files is definitely way faster on OS X, thank you!
Loving the instant start on Windows. Great stuff.
This is a 'feature' so that you can do multiline find/replace. Very useful.
I have an issue with the new dispatching sytem.
I wrote this plugin to automatically populate the 'Goto Anything' with selected word (a kind of open file under):
class GotoSelectionCommand(sublime_plugin.TextCommand):
def run(self, edit):
selection = self.view.sel()
if selection and selection[0]:
self.view.window().run_command("show_overlay", {"overlay": "goto", "text": self.view.substr(selection[0]).strip()})
But it doesn't fully work.The overlay panel briefly open but automatically close by selecting the first item in the list.If the file is already open, it work fine.So probably something related to the opening of transient file because if I put a '@' as first character, it works as expected.
startup, version: 2128 windows x64 channel: dev
Haven't noticed much faster startup times. OS: Ubuntu 11.04.Keep up great work! Thx.
I packaged up the wrong files for the 2128 OS X build, and had to redo it, hence the larger version number there. I didn't change the builds for the other platforms though, as there were no changes there. | https://forum.sublimetext.com/t/dev-build-2128/2754/10 | CC-MAIN-2016-40 | refinedweb | 296 | 68.06 |
i have a program at the moment that is asking the user to input thier name, age and course. the details are stored in a stucture and is the passed to the function by the caller. the function collects the data and then on the return to the caller the stucture contains the useres details. then main() should output the contents of the structure.
at the moment i have this but for some reason when you put your name in it skips Age, and Course. also i am unsure if it is doing what i have explained corectly. this is what i have so far
any help will be much appreciatedany help will be much appreciatedCode:#include <iostream> using namespace std; struct myStruct { char Name; int age; char course; }; void myFunc1(myStruct m) // by value { cout<<"Enter you name"; cin >>m.Name; } void myFunc2(myStruct &m) // by reference { cout<<"Age: "; cin>>m.age; } void myFunc3(myStruct &m) { cout<<"Course: "; cin >>m.course; } int main() { myStruct m; m.Name; m.age; m.course; myFunc1(m); myFunc2(m); myFunc3(m); system("pause"); } | http://cboard.cprogramming.com/cplusplus-programming/84767-structures-help.html | CC-MAIN-2014-10 | refinedweb | 180 | 73.47 |
For the part where your main program waits for the worker thread to end, you can simplify it by using WaitForSingleObject(workerThreadHandle, INFINITE). It basically do the same thing for your except that you don't need to implement the loop.
Thanks for your good and fast answer!
I tried your hint and failed: The WaitForSingleObject needs 0 ms for the job and in debug mode I see memory leaks...
//I define globally:
CWinThread* pThread;
//On Init:
pThread = AfxBeginThread(ServerProc, GetSafeHwnd(), THREAD_PRIORITY_NORMAL);
//When closing:
WaitForSingleObject(pThread, INFINITE);
Where is my fault?
Thanks!
Marc
Write:
WaitForSingleObject(pThread->m_hThread, INFINITE);
Regards,
Marco Era
Latest post on my blog: Back to the Amiga's times
Originally posted by Marc from D
in debug mode I see memory leaks...
Originally posted by Andreas Masur in many, many threads
Detecting Memory LeaksDetecting Memory Leaks in MFC
Thought for the day/week/month/year:
Windows System Error 4006:
Replication with a nonconfigured partner is not allowed.
The probably best way to close a thread is to use ExitThread(0) from within the thread. This way everything is cleaned up well. So, you can have something like:
if (yourCloseFlag)
ExitThread(0);
Originally posted by Radu
The probably best way to close a thread is to use ExitThread(0) from within the thread. This way everything is cleaned up well. So, you can have something like:
if (yourCloseFlag)
ExitThread(0);
Whoa! "Everything" is not cleaned up, and it is NOT the best way to close a thread...
Richter, "Programming Applications for Microsoft Windows":
You can force your thread to terminate by having it call ExitThread:
Code:
VOID ExitThread(DWORD dwExitCode);
This function terminates the thread and causes the operating system to clean up all of the operating system resources that were used by the thread. However, your C/C++ resources (such as C++ class objects) will not be destroyed. For this reason, it is much better to simply return from your thread function instead of calling ExitThread yourself.
The recommended way to have a thread terminate is by having its thread function simply return (as described in the previous section). However, if you use the method described in this section, be aware that the ExitThread function is the Windows function that kills a thread. If you are writing C/C++ code, you should never call ExitThread. Instead, you should use the Visual C++ run-time library function _endthreadex.
Note that calling ExitProcess or ExitThread causes a process or thread to die while inside a function. As far the operating system is concerned, this is fine and all of the process's or thread's operating system resources will be cleaned up perfectly. However, a C/C++ application should avoid calling these functions because the C/C++ run time might not be able to clean up properly
Making explicit calls to ExitProcess and ExitThread is a common problem that causes an application to not clean itself up properly. In the case of ExitThread, the process continues to run but can leak memory or other resources.
VOID ExitThread(DWORD dwExitCode);
Hi, Marc!
Take a look at this article:Using Worker Threads
Originally posted by puzzolino
Write:
WaitForSingleObject(pThread->m_hThread, INFINITE);
I think WaitForSingleObject is the best approach
But it is better if instead of INFINITE one mention a reasonable timeout like 10 sec or so
pThread->PostThreadMessage(KILL_YOURSELF,0,0)//where KILL_YOURSELF calls AfxEndThread(1,true)
WaitForSingleObject(*pThread,10000);
It is little bit better . But it is what i think
As this thread here became larger than expected, I just want to clarify:
The question was perfectly answered by the hint "WaitForSingleObject". I didn't have a problem in terminating the thread, the only problem I had was that I used an unlogical way to figure out when the thread really stopped. As I'm doing a worker-thread for serial communication, there are neither GUI-objects used nor are there really objects at all.
Thanks to you all for your hints!
Marc
Originally posted by Marc from D
I didn't have a problem in terminating the thread, the only problem I had was that I used an unlogical way to figure out when the thread really stopped.
To me, "How to end threads good" implies the former rather than the latter...
I've always used the following system...
In the GUI thread, set a flag to tell worker threads they should end themselves at their own earliest possible convenience.
What you do next in the gui thread depends....
1) If the GUI thread doesn't need any results from the worker threads, just let the GUI thread end normally, while the worker threads are still running/shutting down. The process/application will properly terminate when all of it's threads are closed.
Note that the user will still see the process active in the taskmanager for some time after he/she has closed the 'application window'. This rarely is an issue anyway.
2) If the GUI thread must wait for worker threads to finish because it needs to save obtained data or thread progressstatus, needs to do cleanup which can't happen until all threads are closed, you will need to wait for the threads to finish.
The seemingly obvious (and easy) way to wait for a thread to finish is to use WaitForSingleObject(), but there are a couple catches to this...
- your program will be unresponsive while it's waiting, which can be a problem if the wait can take a long time.
- It's somewhat less suited if you have multiple worker threads. Although it's not necessarily bad doing it this way. At the least, you'll be sure the threads have ended, even though the thread you decided to wait on first, may end up being the one that finished last.
If ending a thread can take a long time, or if you have multiple threads, then I have (and this has worked nicely so far) used the following system...
When creating a thread, a counter gets incremended, this counter will always contain the number of active threads (excluding the GUI thread). Use InterlockedIncrement().
When the GUI wants to end, it sets a 'threads_end_yourself' flag. When this flag is set, all (harmfull) menu-items are disabled.
Threads check this flag in their working loop, and shut themselves down at their own earliest possible convenience.
At thread end, the thread decrements the flag, and when it hits zero, it posts a user-message telling the GUI all threads are now closed. Each workerthread ends with the code:
Code:
if (!InterlockedDecrement(&lNumThreads))
PostMessage(WM_ALL_THREADS_ENDED, 0,0);
Or if you use MFC, you can create your own overridden CWinThread, and stuff that code in ExitThread().
When the GUI receives the WM_ALL_THREADS_ENDED, it does it's cleanup, and terminates.
It's easy enough to display a "shutting down, please wait" type dialog using this method.
While it may take a little more work this way. It does provide a clean, safe way to end your app that will work in all cases, provided you follow the three rules below.
- It won't use ANY significant CPU time, which any form of polling loop (nomatter how smart you can make it) WILL.
- Your app stays responsive (user can move/size window, there's a proper message telling the app is busy...) while it's shutting down.
3 rules if you want reliable multithreading code...
1) Don't use ExitThread()
2) If think you need to use ExitThread(), then reread rule 1.
3) If you think you have found a legitimate use of ExitThread(), then reread rule 1.
if (!InterlockedDecrement(&lNumThreads))
PostMessage(WM_ALL_THREADS_ENDED, 0,0);
Forum Rules | http://forums.codeguru.com/showthread.php?272728-How-to-end-threads-good&p=850609 | CC-MAIN-2015-27 | refinedweb | 1,277 | 60.75 |
Java HashMap/Hashtable, LinkedHashMap and TreeMap
Introduction
The basic idea of a map is that it maintains key-value associations (pairs) so you can look up a value using a key. In this tutorial, we will discuss Java HashMap/Hashtable, LinkedHashMap, and TreeMap.
HashMap/Hashtable
HashMap is implemented on top of a hash table. (Use this class instead of Hashtable, which is a legacy class.) A HashMap gives you an unsorted, unordered Map. When you need a Map and you don't care about the order in which entries are returned when you iterate through it, HashMap is the right choice. The keys of a HashMap behave like a Set: duplicates are not allowed and the keys are unordered. The values, on the other hand, can be any objects; null values and duplicate values are both allowed. HashMap is very similar to Hashtable; the main difference is that all of Hashtable's methods are synchronized for thread safety, while HashMap's methods are not synchronized, which gives better performance in single-threaded use.
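As a quick illustration of these differences, the sketch below (the class name is ours) shows that a HashMap accepts a null key and null values, while a Hashtable rejects both with a NullPointerException:

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullDemo {
    public static void main(String[] args) {
        Map<String, Integer> hashMap = new HashMap<String, Integer>();
        hashMap.put(null, 1);       // HashMap allows one null key
        hashMap.put("a", null);     // ...and any number of null values
        System.out.println(hashMap);

        Map<String, Integer> hashtable = new Hashtable<String, Integer>();
        try {
            hashtable.put(null, 1); // Hashtable forbids null keys and values
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejected the null key");
        }
    }
}
```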
We can visualize HashMap as below diagram where we have keys as per hash-code and corresponding values.
HashMap provides constant-time performance for inserting and locating pairs. Performance can be adjusted via constructors that allow you to set the capacity and load factor of the hash table.
HashMap Constructors
HashMap( )
Default HashMap Constructor (with default capacity of 16 and load factor 0.75)
HashMap(Map<? extends KeyObject, ? extends ValueObject> m)
This is used to create HashMap based on existing map implementation m.
HashMap(int capacity)
This is used to initialize HashMap with capacity and default load factor.
HashMap(int capacity, float loadFactor)
This is used to initialize HashMap with capacity and custom load factor.
The basic operations of HashMap (put, get, containsKey, containsValue, size, and isEmpty) behave exactly like their counterparts in Hashtable. HashMap overrides the toString() method to print the key-value pairs easily. The following program illustrates HashMap. It maps names to salaries. Notice how a set-view is obtained and used.
Java Code:
import java.util.*;

public class EmployeeSalaryStoring {
    public static void main(String[] args) {
        // Below line will create HashMap with initial capacity 10 and 0.5 load factor
        Map<String, Integer> empSal = new HashMap<String, Integer>(10, 0.5f);
        // Adding employee name and salary to map
        empSal.put("Ramesh", 10000);
        empSal.put("Suresh", 20000);
        empSal.put("Mahesh", 30000);
        empSal.put("Naresh", 1000);
        empSal.put("Nainesh", 15000);
        empSal.put("Rakesh", 10000); // Duplicate value also allowed, but keys should not be duplicate
        empSal.put("Nilesh", null);  // Value can be null as well
        System.out.println("Original Map: " + empSal); // Printing full map
        // Adding a new employee to the map to see whether the ordering of objects changes
        empSal.put("Rohit", 23000);
        // Removing one key-value pair
        empSal.remove("Nilesh");
        System.out.println("Updated Map: " + empSal); // Printing full map
        // Printing all keys
        System.out.println(empSal.keySet());
        // Printing all values
        System.out.println(empSal.values());
    }
}
Output:
Java LinkedHashMap
LinkedHashMap extends HashMap. It maintains a linked list of the entries in the map, in the order in which they were inserted. This allows insertion-order iteration over the map. That is, when iterating through a collection-view of a LinkedHashMap, the elements will be returned in the order in which they were inserted. Also, if one inserts a key again into the LinkedHashMap, the original order is retained.
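The re-insertion behaviour described above is easy to verify; a minimal sketch (the class name and keys are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ReinsertOrderDemo {
    // Builds a LinkedHashMap, overwrites an existing key, and returns the
    // iteration order as a string so the effect can be inspected.
    static String orderAfterReinsert() {
        Map<String, Integer> m = new LinkedHashMap<String, Integer>();
        m.put("a", 1);
        m.put("b", 2);
        m.put("c", 3);
        m.put("a", 99); // value replaced, but "a" keeps its original position
        StringBuilder sb = new StringBuilder();
        for (String k : m.keySet()) {
            sb.append(k);
        }
        return sb.toString(); // "abc", not "bca"
    }

    public static void main(String[] args) {
        System.out.println(orderAfterReinsert()); // prints abc
    }
}
```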
Constructors
LinkedHashMap( )
This constructor constructs an empty insertion-ordered LinkedHashMap instance with the default initial capacity (16) and load factor (0.75).
LinkedHashMap(int capacity)
This constructor constructs an empty LinkedHashMap with the specified initial capacity.
LinkedHashMap(int capacity, float fillRatio)
This constructor constructs an empty LinkedHashMapwith the specified initial capacity and load factor.
LinkedHashMap(Map m)
This constructor constructs an insertion-ordered Linked HashMap with the same mappings as the specified Map.
LinkedHashMap(int capacity, float fillRatio, boolean Order)
This constructor constructs an empty LinkedHashMap instance with the specified initial capacity, load factor and ordering mode.
Important methods supported by LinkedHashMap
clear( )
Removes all mappings from the map.
containsValue(Object value)
Returns true if this map maps one or more keys to the specified value.
get(Object key)
Returns the value to which the specified key is mapped, or null if this map contains no mapping for the key.
removeEldestEntry(Map.Entry eldest)
Returns true if this map should remove its eldest entry.
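The removeEldestEntry hook, combined with the access-order constructor listed above, is the classic way to build a small LRU cache. A sketch (the class name and capacity policy are illustrative, not part of this tutorial's examples):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: passing true as the third constructor argument switches
// the map from insertion order to access order, and removeEldestEntry evicts
// the least recently used entry once the capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // true = access-order iteration
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }

    public static void main(String[] args) {
        LruCache<Integer, String> cache = new LruCache<Integer, String>(2);
        cache.put(1, "one");
        cache.put(2, "two");
        cache.get(1);          // touch 1 so it becomes most recently used
        cache.put(3, "three"); // evicts 2, the least recently used entry
        System.out.println(cache.keySet()); // prints [1, 3]
    }
}
```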
Java program demonstrating the use of LinkedHashMap:
Java Code:
package linkedhashmap;

import java.util.LinkedHashMap;
import java.util.Map;

public class LinkedHashMapDemo {
    public static void main(String args[]) {
        // Here insertion order is maintained
        Map<Integer, String> lmap = new LinkedHashMap<Integer, String>();
        lmap.put(12, "Mahesh");
        lmap.put(5, "Naresh");
        lmap.put(23, "Suresh");
        lmap.put(9, "Sachin");
        System.out.println("LinkedHashMap before modification: " + lmap);
        System.out.println("Does Employee ID 12 exist: " + lmap.containsKey(12));
        System.out.println("Does Employee name Amit exist: " + lmap.containsValue("Amit"));
        System.out.println("Total number of employees: " + lmap.size());
        System.out.println("Removing Employee with ID 5: " + lmap.remove(5));
        System.out.println("Removing Employee with ID 3 (which does not exist): " + lmap.remove(3));
        System.out.println("LinkedHashMap after modification: " + lmap);
    }
}
Output:
Java TreeMap
A TreeMap is a Map that maintains its entries in ascending key order, sorted according to the keys' natural ordering or according to a Comparator provided when the TreeMap is constructed. The TreeMap class is efficient for traversing the keys in sorted order. The keys can be sorted using the Comparable interface or the Comparator interface. SortedMap is a subinterface of Map which guarantees that the entries in the map are sorted.
TreeMap Constructors
TreeMap( )
Default TreeMap Constructor
TreeMap(Map m)
This is used to create TreeMap based on existing map implementation m.
TreeMap(SortedMap m)
This is used to create a TreeMap with the same mappings and the same ordering as the sorted map m.
TreeMap(Comparator comp)
This is used to create TreeMap with ordering based on comparator output.
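To show the Comparator-based constructor in action, here is a sketch that sorts some of the fruit names from the example program below in descending order (the class name is illustrative):

```java
import java.util.Comparator;
import java.util.TreeMap;

public class ComparatorTreeMapDemo {
    // Returns the key order of a TreeMap built with a reverse-order comparator.
    static String reverseOrderKeys() {
        TreeMap<String, Integer> t =
            new TreeMap<String, Integer>(new Comparator<String>() {
                public int compare(String a, String b) {
                    return b.compareTo(a); // descending natural order
                }
            });
        t.put("Apple", 25);
        t.put("Mango", 45);
        t.put("Banana", 4);
        return t.keySet().toString(); // "[Mango, Banana, Apple]"
    }

    public static void main(String[] args) {
        System.out.println(reverseOrderKeys());
    }
}
```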
Java Program which explains some important methods of the tree map.
Java Code:
import java.util.Map;
import java.util.TreeMap;

public class TreeMapDemo {
    public static void main(String[] args) {
        // Creating map of fruit and price of it
        Map<String, Integer> tMap = new TreeMap<String, Integer>();
        tMap.put("Orange", 12);
        tMap.put("Apple", 25);
        tMap.put("Mango", 45);
        tMap.put("Chicku", 10);
        tMap.put("Banana", 4);
        tMap.put("Strawberry", 90);
        System.out.println("Sorted Fruit by Name: " + tMap);
        tMap.put("Pinapple", 87);
        tMap.remove("Chicku");
        System.out.println("Updated Sorted Fruit by Name: " + tMap);
    }
}
Output:
Java Code:
import java.util.*;

public class CountOccurrenceOfWords {
    public static void main(String[] args) {
        // Set text in a string
        String text = "Good morning class. Have a good learning class. Enjoy learning with fun!";
        // Create a TreeMap to hold words as keys and counts as values
        TreeMap<String, Integer> map = new TreeMap<String, Integer>();
        String[] words = text.split(" "); // Splitting the sentence on spaces
        for (int i = 0; i < words.length; i++) {
            String key = words[i].toLowerCase();
            if (key.length() > 0) {
                if (map.get(key) == null) {
                    map.put(key, 1);
                } else {
                    int value = map.get(key).intValue();
                    value++;
                    map.put(key, value);
                }
            }
        }
        System.out.println(map);
    }
}
Output:
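On Java 8 and later, the null-check-then-put dance in the loop above can be collapsed into a single Map.merge call; a sketch (the class name is illustrative):

```java
import java.util.Map;
import java.util.TreeMap;

public class WordCountMerge {
    // Counts words using Map.merge: insert 1 for a new word,
    // or add 1 to the existing count.
    static Map<String, Integer> count(String text) {
        Map<String, Integer> map = new TreeMap<String, Integer>();
        for (String w : text.toLowerCase().split("\\s+")) {
            if (!w.isEmpty()) {
                map.merge(w, 1, Integer::sum);
            }
        }
        return map;
    }

    public static void main(String[] args) {
        System.out.println(count("good morning class good"));
        // prints {class=1, good=2, morning=1}
    }
}
```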
Summary:
- A Map is a collection of key-value pairs (associations).
- HashMap allows one null key but Hashtable does not allow any null keys.
- Values in HashMap can be null or duplicate but keys have to be unique.
- Iteration order is not constant in the case of HashMap.
- When we need to maintain insertion order while iterating we should use LinkedHashMap.
- LinkedHashMap provides the same methods as HashMap.
- LinkedHashMap is not thread safe.
- TreeMap keeps its entries sorted, so its basic operations are O(log n), slower than HashMap's constant time; its advantage is ordered traversal of the keys.
- TreeMap is a sorted collection, ordered either by the keys' natural ordering or by a custom ordering supplied as a Comparator.
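The null-key bullet points above can be verified directly; a sketch (the class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullKeyDemo {
    // Returns true: HashMap stores a single null key without complaint.
    static boolean hashMapAllowsNullKey() {
        Map<String, Integer> m = new HashMap<String, Integer>();
        m.put(null, 1); // accepted
        m.put(null, 2); // still only one null key; value replaced
        return m.size() == 1 && m.get(null) == 2;
    }

    // Returns true: Hashtable rejects null keys with a NullPointerException.
    static boolean hashtableRejectsNullKey() {
        try {
            new Hashtable<String, Integer>().put(null, 1);
            return false;
        } catch (NullPointerException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(hashMapAllowsNullKey());    // prints true
        System.out.println(hashtableRejectsNullKey()); // prints true
    }
}
```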
Hi all. I can't be bothered blogging so thought I would share this with anyone who is interested. It is the source for a rotating picture cube, similar to the Telerik control - although I haven't turned it into a control (this shouldn't be hard) or handled clicks yet (this might be). The images are stored in an images folder in the root directory. I used photos from the photoviewer example and cropped them square. The pictures array contains the image names.
This is my first real experimentation with IronPython and I must tell you that it is really very nice and succinct to develop with compared to C#. Also much faster, since you don't need to compile. I have just been developing from Notepad.
The brains behind the code actually comes from the following flash tutorial so full props to them. I encourage other users to get busy converting interesting flash examples across to Silverlight so the community can learn:
Anyway I hope someone finds this useful
Cheers
Mark
XAML CODE
<Canvas xmlns="" xmlns:
  <x:Code
  <Canvas Loaded="Loaded" />
  <Canvas.Resources>
    <Storyboard x:
      <DoubleAnimation Duration="00:00:0.02" />
    </Storyboard>
  </Canvas.Resources>
  <Canvas x:
  <Rectangle x:
</Canvas>
Python Code
import clr
import System
from System.Windows.Controls import *
from System.Windows.Media import *
from System.Windows import *
from System.Windows.Shapes import *
from System.Windows.Media.Animation import *
from System.Math import *

pictures = ['sqjaguar', 'sqgorilla', 'sqgyr', 'sqheron', 'sqeagle', 'sqtamarin']
rotations = {'x': 0, 'y': 0, 'z': 0}
boxPoints = [{'x': -50, 'y': -50, 'z': -50},
             {'x': 50, 'y': 50, 'z': -50},
             {'x': -50, 'y': 50, 'z': -50},
             {'x': -50, 'y': -50, 'z': 50},
             {'x': 50, 'y': -50, 'z': 50},
             {'x': 50, 'y': 50, 'z': 50}]
curX = 0
curY = 0

def pointsTransform(tripoints, rotations):
    v10 = Sin(rotations['x'])
    v12 = Cos(rotations['x'])
    v8 = Sin(rotations['y'])
    v11 = Cos(rotations['y'])
    v7 = Sin(rotations['z'])
    v9 = Cos(rotations['z'])
    v1 = len(tripoints) - 1
    v17 = [{'x': 0, 'y': 0}, {'x': 0, 'y': 0}, {'x': 0, 'y': 0},
           {'x': 0, 'y': 0}, {'x': 0, 'y': 0}, {'x': 0, 'y': 0}]
    while (v1 >= 0):
        v16 = tripoints[v1]['x']
        v15 = tripoints[v1]['y']
        v3 = tripoints[v1]['z']
        v5 = v12 * v15 - v10 * v3
        v4 = v10 * v15 + v12 * v3
        v18 = v11 * v4 - v8 * v16
        v6 = v8 * v4 + v11 * v16
        v14 = v9 * v6 - v7 * v5
        v13 = v7 * v6 + v9 * v5
        v17[v1] = {'x': v14, 'y': v13}
        v1 -= 1
    return v17

def canvasPointTransform(i, a, b, c):
    photo = Root.FindName("Photo%i" % i)
    photo.Opacity = pointsIsVisible(a, b, c)
    if (photo.Opacity == 0):
        return
    mt = Root.FindName("Matrix%i" % i)
    matrix = mt.Matrix
    matrix.OffsetX = b['x']
    matrix.OffsetY = b['y']
    matrix.M11 = (a['x'] - b['x']) / photo.Width
    matrix.M12 = (a['y'] - b['y']) / photo.Width
    matrix.M21 = (c['x'] - b['x']) / photo.Height
    matrix.M22 = (c['y'] - b['y']) / photo.Height
    mt.Matrix = matrix

def pointsIsVisible(a, b, c):
    v5 = b['x'] - a['x']
    if (v5 == 0):
        return (a['y'] > b['y']) == (c['x'] > a['x'])
    v4 = c['x'] - a['x']
    if (v4 == 0):
        return (a['y'] > c['y']) == (b['x'] < a['x'])
    return (((b['y'] - a['y']) / v5) < ((c['y'] - a['y']) / v4)) != ((a['x'] < b['x']) == (a['x'] > c['x']))

def CreatePhoto(sequence, x, y, path):
    # NOTE: the XAML string below was truncated by the forum software;
    # several attribute values (names, positions, image source) are missing.
    t = XamlReader.Load("""
<Canvas xmlns:
    <Image Width='111' Height='111' Canvas.
    <Canvas.RenderTransform>
        <MatrixTransform x:
    </Canvas.RenderTransform>
</Canvas>""" % (sequence, x, y, path, sequence))
    Inner.Children.Add(t)

def timer_Completed(sender, e):
    rotations['x'] -= curY / 2000
    rotations['y'] += curX / 2000
    try:
        v2 = pointsTransform(boxPoints, rotations)
        canvasPointTransform(0, v2[2], v2[0], v2[3])
        canvasPointTransform(1, v2[5], v2[1], v2[2])
        canvasPointTransform(2, v2[0], v2[2], v2[1])
        canvasPointTransform(3, v2[4], v2[3], v2[0])
        canvasPointTransform(4, v2[3], v2[4], v2[5])
        canvasPointTransform(5, v2[1], v2[5], v2[4])
    except System.Exception, e:
        debug.TextWrapping = System.Windows.TextWrapping.Wrap
        debug.Text = e.ToString()
    timer.Begin()

def mouseMove(sender, e):
    global curX
    global curY
    pt = e.GetPosition(Inner)
    curX = pt.X
    curY = pt.Y

def Loaded(sender, e):
    global pictures
    global boxPoints
    global rotations
    # NOTE: this XAML string was also truncated by the forum software.
    elt = System.Windows.XamlReader.Load("""
<Canvas xmlns:
</Canvas>""")
    Root.Children.Add(elt)
    sequence = 0
    while sequence < len(pictures):
        CreatePhoto(sequence, 0, 0, "images/%s.jpg" % pictures[sequence])
        sequence += 1
    Inner.MouseMove += mouseMove
    timer.Completed += timer_Completed
    timer.Begin()
do you have a site where this is working so we can see it in action?
-th
If this answered your question, please be sure to click the 'mark as answered' feature; otherwise please feel free to post follow-up questions that are related.
No I'm afraid not. If someone else wants to host it I can email the files.
yeah, send me the files, i'll put it up...
posted here:
Hey Mark,
great example of what's possible with Silverlight.
I did find that anything you draw on the side of the cube is drawn backwards. I'm still trying to decipher the code. Have to think back to high school and matrix calculations.
I found that if you update that 'canvasPointTransform' method, and replace :
matrix.OffsetX = b['x'] matrix.OffsetY = b['y'] matrix.M11 = (a['x'] - b['x']) / photo.Width matrix.M12 = (a['y'] - b['y']) / photo.Width
WITH
matrix.OffsetX = a['x'] matrix.OffsetY = a['y'] matrix.M11 = (b['x'] - a['x']) / photo.Width matrix.M12 = (b['y'] - a['y']) / photo.Width
you will get the canvas of each photo drawn the corrent way.
Thanks for sharing this.
Vadim
Cheers,
Vadim Tabakman
Solutions Consultant
OBS
Nintex
My Blog - VTonMS
Good to know thanks Vadim. It would be quite interesting to extend the example and put things like buttons on the different canvases
Also it would be good to figure out the math for a click so you could determine which photo was clicked on.
I've been playing around with the cube here and there, but given that I'm not a Python guru, I decided to convert it to C#.
It's not the most elegant of code, but hey, I'm just playing. I've got a control now on each side of the cube and when clicked on, it plays a video.
Here is link to the example.
and here's the link to the source zip.
Thanks for sharing your python code. This has been quite fun actually.
I also talk a little about it in my blog - VTonMS. | http://silverlight.net/forums/p/2475/6596.aspx | crawl-002 | refinedweb | 1,049 | 60.21 |
Shader Graph Custom Node API: Using the Code Function Node
Now you are ready to try making Nodes in Shader Graph using Code Function Node! But this is, of course, just the beginning. There is much more you can do in Shader Graph to customize the system.
Stay tuned to this blog and talk to us on the Forums!
20onney Shih, April 23, 2018 at 1:04 am
Instead of all this boilerplate you could've made a hybrid solution and let us write HLSL directly in the ShaderGraph, or at least provide some interop between SG and .shader files.
This is way overcsharping things!
Alan Mattano, March 30, 2018 at 2:05 am
WHERE IS THE BEST PLACE TO PUT THIS SCRIPT?
Brandon Rivera-Melo, April 2, 2018 at 5:25 am
Looks like this uses the UnityEditor namespace (using UnityEditor.ShaderGraphs;), so I've put it in "Assets>[ProjectName]>Shaders>Editor". Regardless of your overall hierarchy, I believe the namespace part requires it to live in a folder titled "Editor" so this code can be left out from builds.
Isaac Surfraz, March 28, 2018 at 12:56 pm
Has anyone moaning about the string part here actually even bothered to look at the examples posted by Andy Touch and similar postings?
I really think you are expecting it to be far more unusable than it is.
Also you are defining the bindings using a string, not the entire shader. Calm down.
Andrew, April 10, 2018 at 6:19 pm
It has little to do with how "usable" it is and everything to do with how this is an alpha/proof-of-concept approach being passed off as a production-ready system. If you *ever* have the developer writing anything script-related as a plain string (and embedded in another script file, no less), then someone, somewhere, royally screwed up. That is because writing script as a string within another script is highly-coupled, virtually-untestable, and difficult to maintain — the three most common signs of bad code design.
And it’s not like this is the only way to do this, either. Myself and others have posted numerous alternative approaches in other comments. This system is nothing short of lazy and sloppy, and it will be addressed within a month (if not a week) of this feature’s official release by an editor add-on on the Asset Store. And while the problem will be effectively solved at that point, the issue is that the Unity Dev Team created the problem in the first place seemingly without even recognizing that it *is* a problem.
YesOk, March 28, 2018 at 9:54 am
Is it still possible to write shaders the old boring way? I don't see how nodes really help with writing anything requiring custom algebra. I don't mind the node system, as long as it doesn't get in the way of my boring old HLSL approach. I'm not too fond of creating code in a huge string like I have brain damage or something.
Peter Bay Bastian, March 28, 2018 at 11:59 am
Yes, absolutely. Shader Graph is a system on top of the existing shader system in Unity. You won’t be able to use your shaders written the old way in Shader Graph, but it also won’t prevent you from doing what you’ve always been doing.
YesOk, March 28, 2018 at 2:41 pm
Awesome
Andre, March 28, 2018 at 9:40 am
Interesting! But indeed seems like the wrong way around — you should concentrate on the hlsl first — have an hlsl file full of functions and nodes are created from that — if u need a little C# to create the nodes then ok but can you not parse the hlsl to get function names, input/output to a degree?
Andy Baker, March 28, 2018 at 9:32 am
It’s not so much that you released an API based on writing HLSL in a string literal that bothers me. We’ve all done worse things when necessary.
It’s the fact that the announcement of this aspect of the API wasn’t wrapped in shameful apologies and embarrassed justifications. It concerns me — not that you’ve done it like this, but that you don’t seem to think it’s anything to be bothered about.
Matt, March 28, 2018 at 9:27 am
Could you expand more on the "why" of this approach? I echo the other comments here. I feel this is long-winded and prone to error, writing the HLSL as a string... beyond simple stuff like in this blog, in production (using proprietary engines that have shader graphs with custom nodes), these custom nodes can get quite elaborate; for example, it might be a lengthy "Draw Object Outline" node...
So — hearing the why (and how you arrived at this approach compared to all these others suggested/attempted) would be great as it would help put some context to it all. Cheers!
Tai, March 28, 2018 at 3:17 am
This is the worst workflow I’ve ever seen in Unity.
CorruptScanline, March 27, 2018 at 8:55 pm
Why not just define the inputs and outputs as class members and have a virtual GenerateCode function? In any case the awkward GetFunctionToConvert reflection stuff could be hidden by having a ShaderFunction attribute that you put on whatever function you want.
Vito, March 27, 2018 at 7:10 pm
So, I am liking it. Strings are not a problem for me. I'm used to writing shaders without autocompletion :DD
I've been eagerly awaiting the 2018 release.
push-pop, March 27, 2018 at 5:49 pm
writing shader function body in a string ? seriously ? this is what you released ?
did you ever think what this might look like exposed for the user when you were designing shader graph ?
that’s laughable
Michal Piatek, March 27, 2018 at 5:35 pm
It’s cool and simple but I dislike the fact that I will be forced to either use C# code highlighting&autocompletion or HLSL. Unless I missed something and there are options to do that, in let’s just say, VS Code.
Andrew, March 27, 2018 at 5:32 pm
Binding the method via a reflection reference? Returning the entire shader function code as a string? This feels super clunky, not gonna lie. Why can’t you pass a reference to the method itself instead of reflecting it, for example? Why can’t you just supply a shader file instead of going through the hassle of converting it into a string literal?
I’m sure there must be technical reasons why it was implemented this way, but for the life of me I can’t imagine what they could be other than the Shader Graph team having designed themselves into a corner on this.
Matt Dean, March 27, 2018 at 7:42 pm
We could do this via a function reference, but it would just tidy the user-facing API a bit. We'll look into it but it's not a huge priority for us. We use reflection to access the function arguments and convert them to ports; all this would do is hide that.
As for the string literals, we do this for simplicity. We currently ship ~150 nodes, with the vast majority using this abstraction. Splitting the HLSL into separate files isn't optimal for us. However, message received. You can still do what you want to do via this API, though. Something along the lines of
return File.Read("otherfile.hlsl"). This just isn't mentioned in this blog post (maybe it should have been).
Arthur Brussee, March 28, 2018 at 12:17 am
I feel like the string is mostly fine — it’s tightly coupled with the names and such of the surrounding function, so it’s easier to have it inline. However, I feel there’s 2 paths forward:
-> Most ambitious: Generate HLSL from a C# subset, Xenko style. It seems Unity is doing _some_ of that in their new pipelines already, and the new math library even makes more syntax match. The idea of being able to build shaders out of reusable C# snippets would be an incredible dev experience.
-> Little more reasonable: Work with the UnityVS team to recognize these snippets as HLSL, and use the HLSL coloring / autocomplete on them.
Andrew Ackerman, March 28, 2018 at 2:01 am
I don’t really dispute that the existence of this class may or may not be necessary (the mapping needs to be done somewhere) but this strikes me as a very odd way of going about it. Using this approach, the C# class is almost entirely arbitrary boilerplate while the embedded HLSL code is represented as a string literal, which means no syntax coloring, compile-time type safety, Intellisense, or any of that other goodness.
IMHO, the C# code should be entirely abstracted away, auto-generated for any but the people who have specific reasons to do it manually (whatever those reasons might be). Maybe when the developer chooses the option to create a custom shader node, there is a properties window for the file that displays the ins and outs similarly to how properties for shaders are currently handled, and that is where you can specify what properties the node will have. Or, when they get defined in the shader file, the C# file gets updated with the properties when the shader is «compiled».
As far as the HLSL code itself, while it might technically be the most «optimal» to have the shader code and the C# code in a single script file, that doesn’t make it the best option. This approach is extremely unfriendly to anyone who doesn’t know HLSL by heart and can code out a shader file in their sleep, and even for those people, the ability to use regular HLSL development tools and not having to essentially write their shader using vanilla Notepad is still a plus. (And yes, while you could write the code in an .hlsl file and load the string with a
File.Read, that is an extremely arbitrary step that is just avoiding the problem rather than actually solving it.) | https://blogs.unity3d.com/ru/2018/03/27/shader-graph-custom-node-api-using-the-code-function-node/ | CC-MAIN-2018-30 | refinedweb | 1,688 | 67.49 |
My WCF Webservice uses the new System.AddIn namespace to support addins.
I want to expose the class returned by every add-in to my webservice. I believe lots of people would want to do this!
There are some obstacles:
- The class returned to the host is derived from the abstract view class.
- The webservice contract needs a concrete class; an abstract class or interface as a parameter or return type is not allowed.
- And the class in question is the host view adapter, which should not be referenced (to keep the independent versioning intact)
- There will be thousands of objects in an hierarchical array so creating a new object of each would impact performance
Red Hat Bugzilla – Bug 136120
g++ generates incorrect code for aliased pointer assignments in c++
Last modified: 2007-11-30 17:10:52 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.3) Gecko/20040929
Description of problem:
The following program should produce no output.
When compiled:
g++ -O1 -o bug bug.cc
it behaves correctly.
When compiled:
g++ -O1 -fstrict-aliasing -fschedule-insns -o bug bug.cc
or
g++ -O2 -o bug bug.cc
it produces output.
See the comments in the text of the program for more detail.
$ cat bug.cc
#include <iostream>
struct Key
{
long key;
long ordering;
Key() : key(0), ordering(0) {}
};
int
main(int argc, const char* argv[])
{
long* ordering = new long[100];
for (int i = 0; i < 100; ++i)
{
ordering[i] = i;
}
Key* keys = new Key[100];
// We *reuse* the ordering array as an array of key pointers.
Key** perm = reinterpret_cast<Key**>(ordering);
int base = 0;
for (int k = 0; k < 1; ++k)
{
int n = &ordering[100] - &ordering[0];
for (int i = 0; i < 100; ++i)
{
// XXX When compiled with -O2 or
// -O1 -fstrict-aliasing -fschedule-insns
// these two statements appear to be reversed!
// copy the ordering value into the key
keys[i + base].ordering = ordering[i];
// copy the address of the key over the ordering value
perm[i] = &keys[i + base];
}
base += n;
}
// Check the ordering values.
// If the compiler gets it right, there should be no output.
for (int i = 0; i < 100; ++i)
{
if (keys[i].ordering != i)
{
std::cerr << "keys[" << i << "].ordering == " <<
keys[i].ordering
<< std::endl;
}
}
delete [] keys;
delete [] ordering;
return 0;
}
Version-Release number of selected component (if applicable):
gcc-3.4.2-2
How reproducible:
Always
Steps to Reproduce:
see above.
Additional info:
this bug, like all code generator bugs, is worrying because it can
be hard to know if you're triggering it or not. Your program *might*
work. Fortunately it seems you can work around it by adding
-fno-strict-aliasing to the compiler arguments, though this does
result in the loss of some legitimate optimization of course.
The reinterpret_cast is exactly what you shouldn't be doing, see
info gcc on -fstrict-aliasing, at least IMHO.
ISO C++, 3.10#15 says:
If a program attempts to access the stored value of an object through an
lvalue of other than one of the following types the behaviour is undefined:
- the dynamic type of the object,
- a cv-qualified version of the dynamic type of the object,
- a type similar (as defined in 4.4) to the dynamic type of the object,
- a type that is the signed or unsigned type corresponding to the dynamic type of the object,
- a type that is the signed or unsigned type corresponding to a cv-qualified version of the dynamic type of the object,
- an aggregate or union type that includes one of the aforementioned types among its members (including, recursively, a member of a subaggregate or contained union),
- a type that is a (possibly cv-qualified) base class type of the dynamic type of the object,
- a char or unsigned char type.
The dynamic type of the object keys[i] is Key, and long certainly is not
one of the types mentioned above, so IMHO what you are seeing is perfectly
fine behaviour. | https://bugzilla.redhat.com/show_bug.cgi?id=136120 | CC-MAIN-2017-17 | refinedweb | 433 | 63.9 |
Round 1. Fight. (Score:4, Insightful)
We have the last Java 7 preview (GPL).
Fork the darn thing and see who lives.
Re: (Score:2)
+1 for this!
Re: (Score:2)
It's working for LibreOffice so why not.
Re: (Score:2)
Is it? I haven't heard much from LibreOffice since they finished merging in the pre-existing patches that Sun weren't willing to accept for OpenOffice.org. Have they actually done much more since then?
Re:Round 1. Fight. (Score:4, Informative)
They released version 3.4.2 three days ago [documentfoundation.org]. As I understand it they're mostly working on bug fixes for now--lord knows they need it--and removing as much Java dependence as possible.
Re: (Score:2)
Re: (Score:3)
That wont work because Oracle will still sue you for patent infringement (See Oracle v. Android).
Re: (Score:2)
Only if you claim its not Java...
Re: (Score:3)
Only if you claim its not Java...
Just turn it upside down and it becomes "enef", which could be pronounced "enough". That would fit the situation quite well I think.
Re: (Score:2, Insightful)
Fork the darn thing and see who lives.
With their war chest of patents, they could litigate any serious competitor into the ground.
Now whether they have any reason to do so is another question.
Personally I'd start transitioning away from Java at this point if possible/practical. It's a shame because it worked really well in a lot of situations
:(
Re: (Score:2)
Now whether they have any reason to do so is another question.
They are Oracle and they own the patents & trademarks. Those are the only reasons they need (and frankly, the first one is probably enough for them).
Re: (Score:3)
IBM hasn't been sued yet, and they have their own JVM too. Or there's OpenJDK too. The current litigation with Google is about creating an incompatible Java. (For compatible forks, patents are granted.)
It's not the Java developers who are fucked, but the Dalvik developers.
Re: (Score:3)
IBM also has the Nazgul. And a patent war chest that would make your eyes bug out.
Also in the news (Score:5, Funny)
It seems as if Oracle would like nothing better than to stomp Apache and its open source Java efforts clean out of existence.
Also in the news. It seems that water makes things wet.
Re: (Score:2, Insightful)
MySQL, you're next!
Re:Also in the news (Score:4, Informative)
Re: (Score:2)
Oracle damaging the open-source community! GASP! (Score:2)
They're Oracle, that's their business model, it's what they do. Convert the goodness of open source communities into money, like a software Gargamel.
What's the next article going to be? Facebook eroding society's expectations of privacy? BP moving fossil carbon into the biosphere?
Re: (Score:3)
Except the post is wrong, the article isn't about Oracle damaging the OSS community, it's about them damaging Java.
Releasing a JVM with a serious bug doesn't damage the OSS community. In fact it's an excellent way to give it more influence. Issues like these provide plenty incentive to fork.
The worst case for Oracle would be it goes the way it happened with XFree86: every distribution ships the Apache version, and everybody stops caring about the original project's existence.
Re: (Score:2)
The worst case for Oracle would be it goes the way it happened with XFree86: every distribution ships the Apache version, and everybody stops caring about the original project's existence.
That's all good and well, if they can guarantee all existing Java applications will work with it. I'm not sure how it functionally compares with OpenJDK, but lots of existing Java applications simply won't work with it. If they can manage to do what OpenJDK can't, then they have a chance. Otherwise everyone is still stuck using Oracle's version, especially Enterprise users (which I'd imagine accounts for most of Java's use).
Re: (Score:3)
I have pretty positive experience with OpenJDK. I guess you won't get to run into any trouble unless you use video streaming features. (codec licensing problems) For J2EE fat client or webapps you're pretty safe.
Re: (Score:2)
No kidding (Score:2)
No kidding
.. look at what java has done to my dreams!! [youtube.com]
Maybe I'm just being an idiot... (Score:2)
...but why is it Ironic that the Apache foundation were the first to warn the community? From reading the summary, it seems highly appropriate that Apache were the first ones to warn the community, not Ironic at all. Unless, of course, I'm missing something (which I suspect I am).
Re: (Score:3)
Unless, of course, I'm missing something (which I suspect I am).
Unless I'm missing something also, it's probably the fact that a large majority of the population doesn't actually understand what the word irony actually means.
Re: (Score:2)
Unless I'm missing something also, it's probably the fact that a large majority of the population doesn't actually understand what the the word irony actually means.
You mean, like you?
"Oh, the irony" is an ironic statement. Claiming irony when there is no irony, is an ironic statement.
Re: (Score:2)
Once again, why does that make it Ironic? Apache had good reason to warn everyone, they got shafted by Oracle. In that instance, it seems highly appropriate that Apache warned the community.
Irony, as defined by [reference.com]
an outcome of events contrary to what was, or might have been, expected.
Why would it be unexpected of Apache to warn the community after they resigned from the Java Community Process committee? Surely that's the exact opposite, surely it's expected that Apache would warn the community since they resigned for a reason.
Java and .NET falling by the wayside? (Score:4, Interesting)
It seems strange that Oracle would push people away from Java, especially since Sun spent a great deal of time getting people to adopt it. Now Microsoft seems to have gone soft on .NET, which was the technology meant to compete with Java. Did Oracle somehow make a backroom deal with Microsoft? As I recall the Sun/Microsoft suit prohibited Microsoft from having their own Java implementation; is Microsoft now going to license Java from Oracle as the .NET replacement? This is all speculation, but Oracle hasn't done anything good for the things they received in the Sun acquisition: Solaris, Java and SPARC. I realize that Oracle is a big company that likes lots of revenue, but it seems to me that Sun's market share was on the decline and now Oracle is just shutting the door on what remaining customers they had.
Re: (Score:2)
Java would not be a suitable replacement for .NET. The purpose of .NET is to keep people on Windows, not give them a migration path away from it.
Re: (Score:2)
Given how well the "write once run anywhere" marketing aspect of Java has basically failed, it's no more a migration path than .Net is these days.
Re: (Score:2)
Given how well the "write once run anywhere" marketing aspect of Java has basically failed, it's no more a migration path than .Net is these days.
What things won't Java run on? We routinely run the same Java code on Windows and Linux.
Re: (Score:2)
And MacOS, AS/400, z/OS.
Re: (Score:2)
Java runs on lots of things, but that's not my point. Will every Java application run on all JVMs? No.
Take Azureus for example - built in Java, but separate downloads for OSX and Windows. And that's all too common in the Java world...
Java is only a migration path away from Windows if all your applications run seamlessly on the other platforms, and that only happens if you are actually careful during development.
Re: (Score:3)
Take Azureus for example - built in Java, but separate downloads for OSX and Windows.
That's because Azureus isn't 100% Pure Java. They decided to use SWT instead of Swing for the UI. SWT uses a lot of "native code". Of course you end up needing separate installers.
The software company I work for targets Linux, Windows, AS/400, HP-UX, Solaris, and AIX. In our non-trivial experience, Java is shockingly and impressively portable.
Re: (Score:3)
What Java doesn't have is a good external installer for native libraries. That's the only reason for multi-platform installers. Even 3D games like [wurmonline.com] don't have multiple platform installation options; they run through Webstart and install automatically.
Re: (Score:2)
Re: (Score:2)
Which features of Java as a language or Java programs do you commonly see failing to work across different platforms?
Re: (Score:2)
Re: (Score:2)
For Microsoft it wouldn't be. For Oracle, it would.
:-D
Re: (Score:2)
It wouldn't surprise me in the least if Oracle bought Sun just for their IP.. so they could sue the shit outa Google.
Java and soon MySQL are just collateral damage.
Re: (Score:3)
As I recall the Sun/Microsoft suit prohibited Microsoft from having their own Java implementation,
Wrong. It prohibited them from having an incompatible implementation and calling it Java, very similar to the current case of Oracle vs. Google.
In the process against MS it was about the name; in the process against Google it's about the patents. However, the core of both is the same: work for the platform and fall under the platform's special regulations, or not.
Re: (Score:2)
It's not really analogous for the Java/Android story... If you wanted to reach for an analogy, it'd be Oracle suing Microsoft over .Net.
Re: (Score:2)
Except Microsoft licensed Java VM patents for .Net. Oracle can't sue Microsoft for infringement because they've already got a licensing agreement in place.
So the situation's the same, just the Microsoft-Sun (now Oracle) deal would've been the path had Google licensed the patents as well. One licensed the stuff, the other didn't.
Re:Java and .NET falling by the wayside? (Score:4, Insightful)
It's amazing how far a single article of FUD goes these days - Microsoft is not "going soft" on .Net; they just weren't willing to discuss it during a talk about something else entirely, while in Windows 8, .Net is still there and stronger than ever.
As I recall the Sun/Microsoft suit prohibited Microsoft from having their own Java implementation, is Microsoft now going to license Java from Oracle as the .NET replacement?
Microsoft already have a licensing deal with Sun/Oracle in place for .Net - it was pursued years ago, at the very birth of .Net. And besides, what would Microsoft gain from going to Java? Functionality-wise, .Net is better featured, so what would Microsoft gain from switching ecosystems? Not a whole lot.
Microsoft don't want Java, they already made their version of it and are quite happy with it.
Re: (Score:3)
Actually, they've done pretty good with one and only ONE item they got... VirtualBox. I'm kind of waiting for the other shoe to drop on that one as well, though.
Re: (Score:2)
I really wish I knew what you meant by "go soft on .Net". It's the premier development platform for the most widely distributed desktop and server OS on the planet. And their new phones use it.
Yeah. I don't know what you mean.
Re: (Score:2)
I was referring to this: Slashdot [slashdot.org]
Re: (Score:2)
It's the premier development platform for the most widely distributed desktop and server OS on the planet.
I thought Linux had at least half, and maybe even as high as 2/3, the server market as compared to all other operating systems.
Re: (Score:2)
Re: (Score:2)
SPARC is the only thing Oracle is protecting. All those " sucks, move to a much faster Oracle SPARC server" ads, the early retirement of Oracle for Itanium. And they're actively spreading the message that their SPARC box is optimized for Oracle, so it's worth more.
Sounds more like a witchhunt (Score:3)
I'm no Oracle fan (actually, I'm a hater), but this seems more like a witch hunt. I mean, the title "Oracle's Java Policies Are Destroying the Community" sounds a little harsh considering you only said that Oracle released a buggy version of Java and they were not the first to report it.
...not that I'm against an Oracle witch hunt. ;)
non issue again. (Score:2)
Most production work will remain at Java 6 for a while, until everyone makes their versions of Java 7 available, Apple and IBM in particular. RHEL doesn't ship with openjdk-1.7.0 yet. It's just not available in enough places to be worth developing against yet. Oracle knows that Apache is one of the major reasons that Java is as popular as it is. They did give the Apache Foundation all of OpenOffice, you know. Some idiot made a bad call and told management that the error was just a corner case, and ma
Re: (Score:2)
Alternatives? (Score:2)
Re: (Score:2)
Re: (Score:2)
I thought that on x86 at least, most Java is JIT-compiled to high-performance native code.
Just-in-time compilation [wikipedia.org]
HotSpot [wikipedia.org]
Re: (Score:3)
I've never understood why a virtual machine is, in any way, better than an intermediate language that can be compiled to native code for a particular platform.
Garbage collection. GC makes people angry for some reason, but I'm personally happy not having to malloc memory all the time. Also hardware and OS independence. It's nice to just open a file and read and write from it, and not really care what the OS is, or what filesystem it's using, and so forth. Same with inputs and outputs, memory management, thread handling, etc. You could add all of these things to your hypothetical intermediate language, but in the end you'd just be recreating the JVM.
A fair number
How about a JVM language? (Score:2)
Scala, Scheme, Python, etc. all run on the JVM.
If you don't like Oracle's JVM, use the IBM one or the Apache one instead.
Oracle is NOT going to destroy Java; IBM and Apache will not allow it.
Re: (Score:2)
I'm not a Microsoft user or programmer, but I've worked with both. C# is very, very nearly Java at first blush, and if you are comfortable in Java you will be comfortable in C#. Go with Mono and you've got your cross-platform. There aren't as many libraries as there are for Java, but it seems that the core is a bit stronger.
Your mileage may vary, and I've not even seen C# doing GUI work, but it has to be better than Swing in both ease of use and looks.
Major f*ckup, Oracle (Score:3)
A couple of factors motivating users to seek open solutions are: The proprietary vendor screws a product up and then doesn't fix it[1]. The vendor starts withholding necessary documentation or other support from the software community[2]. When will my product become competition for the vendor and I too will get buggered?
I can't think of a faster way for developers to jump ship to an open version of Java. And perhaps begin to fear other Oracle products as well.
[1] Heck, enough screw-ups and I'll start looking for a competent alternative. Never mind timely patches.
[2] It's called 'cutting off their air supply' and was made famous by a little outfit in Redmond.
Thank You Oracle (Score:3).
Re: (Score:2)
Re: (Score:2)
Die, Java, die!
It is German for The, Java, the. And as we all know, nobody who speaks German can be evil.
Re: (Score:2)
Re: (Score:2)
Java is not dead. Maybe it's not the hip language anymore, but it definitely is not dead.
Re: (Score:3).
Re: (Score:2).
Re:Java, truley an American icon (Score:5, Insightful).
My sarcasm detector needs calibration, but, in the meantime, those who spent money in college learning a language and not the concepts behind the language got ripped off. Give fish vs teach fishing and all that jazz.
Re: (Score:2)
Someone Mod this guy up...
- Dan.
Re: (Score:3)
Yeah!
Jazz Fish ROCKS!
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
> but so many legacy systems are built on it [Java] that it's basically guaranteed to live for quite a while longer
I imagine that nobody is writing new applications in COBOL. New applications are written in Java every day.
Java may not be the hip new thing anymore, but it's still heavily developed for.
It took COBOL over 30 years to reach this point. Perhaps Java will reach the same point, but I'll bet it takes decades
... for now, it's alive and well. (Perhaps there's been a buggy new release, but all
COBOL Forever. (Score:3)
I imagine that nobody is writing new applications in COBOL.
You could be wrong, you know
Fujitsu announced late Friday that it is shipping four middleware products designed to work with Microsoft's Windows Azure public cloud development platform
"The new line of products delivers runtime environments for Java and Cobol, two application programming languages that are commonly employed in building mission-critical systems, in addition to providing functionality enabling central monitoring between on-premise systems and the Windows Azure Platform."
Fujitsu Teams with Microsoft on Azure Middleware [datamation.com].
The case for COBOL [computerworlduk.com]
Re: (Score:2)
Re: (Score:2)
Just like COBOL is not dead. Sure, it's not the hip language, but so many legacy systems are built on it that it's basically guaranteed to live for quite a while longer. I suspect Java will have the same fate.
Java is not remotely in the same situation as COBOL. Java jobs are plentiful, as is the development scene, which covers everything from Android all the way up to big iron. There really isn't much to challenge the language at present, though given Oracle's pathetic stewardship perhaps there should be.
Re:Java, truley an American icon (Score:4, Interesting)
As PyPy matures and begins to rival Java in performance, I strongly suspect Python will begin to offset Java in the enterprise. Most studies clearly indicate Java is not a desired language by most programmers. Rather, most programmers program in Java because the enterprise dictated it. With Java/Oracle beginning to lose face, IMHO, it opens the door for languages programmers actually want to use. This means languages like Python, which have extremely rich libraries, easily integrate with other languages, and continue to grow in appeal.
Ruby, of course, is not in the running, as it's positioned itself as the anti-culture (anti-enterprise) hipster language.
Re: (Score:2)
No, it's just a bloated corpse ready to pop and make a big mess.
I never really understood the appeal of Java. Yeah, OK, I get its "benefits", but I also get that that usually means I am going to have to install some annoying shit VM that nags me to update every 3 days just so I can run some kiddy script with a bad UI, barely qualifying as a functional program, that runs 9x slower than it really should.
I have yet to see a quality Java program the entire time it's been out.
This isn't a news article. (Score:5, Insightful)
Really, who didn't see this coming?
This isn't a news article. This is an article about two previous [slashdot.org] news articles [slashdot.org]. There's nothing to see coming. Submitted by the author of an article about the two previous stories. Slow news day, I hope; this is just a group-think trajectory thing.
Re: (Score:2)
You know, I do agree with you. But I also think this is being blown a little bit out of proportion. No enterprises go and install the newest version of Java the day it comes out on their production apps. It will be nice if we get to start using Java 7 in a year. It's definitely not going to happen next month, and by then they will probably have a release out with this fixed.
I agree that Oracle should have treated this as the show-stopper it is, but this is just another sign of what we already knew: Oracle's
Re: (Score:3)
That entirely depends on how well the GPLv2 protects you from their patents.
Oh, and you can't use the name Java because Sun has it trademarked.
Oh, and no clue what'll happen related to trademarks if you continue to use the word "java" in the various namespaces in the language.
Re: (Score:3)
Slashdot loves to rake on java.. but I always liked it. I don't work with it much any more, but I have fond memories.
Specifically I liked developing with it. Using it is an entirely different matter.. swing based UIs are still generally terrible. From the code side it was nice.
Re: (Score:3)
Biggest issue is the amount of boilerplate crap. Things like anonymous classes where proper closures would make the code a lot cleaner. Eclipse takes care of a lot of refactoring and c
Re: (Score:3)
Re: (Score:2)
There are many, many, many web applications written on WebLogic, Tomcat, and JBoss web servers. They're all Java webapps. Not to mention Eclipse is written in Java as well. I don't think you have a clue as to what all is out there.
Re:Watch me falling asleep over Javatalk (Score:5, Informative)
And LibreOffice is working on reimplementing many of those features without Java.
Re: (Score:2)
Java Police, arrest this man
he talks in NET
He buzzes like C
He's like a detuned VM
This is what you get when you mess with us
Re: (Score:2)
O ne
R ich
A sshole
C alled
L arry
E llison
When you find out, let me know. (Score:2)
When you find out, let me know too. I think we're riding the same ship.
Re: (Score:3)
First of all, you should take the rumors of .NET's demise with a grain of salt. Even more so this goes for Java - last I checked, Oracle is pretty enthusiastic about their ability to get $$$ from the stack, so it would be surprising if they ditch it anytime soon.
Regardless of the above, learning C++ is a good idea. Even if you stick to .NET or Java development, in large projects there are always bits and pieces which need to be written in C++, or at least require a good understanding of how it works - either
Re: (Score:2)
3. an outcome of events contrary to what was, or might have been, expected.
How does that not fit in this case? Did you even read the next line in the summary?
Re: (Score:2)
According to the Oxford English Dictionary, also: "the incongruity created when the (tragic) significance of a character's speech or actions is revealed to the audience but unknown to the character concerned; the literary device so used, orig. in Greek tragedy; also transf."
We (the audience) saw this coming, but Oracle don't seem to have. So that's irony in this, one of the earliest senses.
Re: (Score:2)
And yet nothing of value will be lost.
Re: (Score:3)
Why the heck did they buy Sun then? It's not like Sun was standing in Oracle's way. I'll never understand this kind of corporate merger shenanigans:
1. Buy undervalued tech outfit for billions $$$
2. Scrap technology within said outfit
3. ???
4. Profit!
Copy-Item
Updated: April 21, 2010
Applies To: Windows PowerShell 2.0
Copies an item from one location to another within a namespace.
Syntax
Copy-Item [-LiteralPath] <string[]> [[-Destination] <string>] [-Container] [-Credential <PSCredential>] [-Exclude <string[]>] [-Filter <string>] [-Force] [-Include <string[]>] [-PassThru] [-Recurse] [-Confirm] [-WhatIf] [-UseTransaction] [<CommonParameters>]
Detailed Description
The Copy-Item cmdlet copies an item from one location to another within a namespace. It does not delete the items being copied. The particular items that the cmdlet can copy depend on the Windows PowerShell providers available. For example, when used with the FileSystem provider, it can copy files and directories, and when used with the Registry provider, it can copy registry keys and entries.
Copy-Item is like the 'cp' or 'copy' commands in other shells.
The Copy-Item cmdlet is designed to work with the data exposed by any provider. To list the providers available in your session, type "Get-PSProvider". For more information, see about_Providers.
Example 1
Description
-----------
This command will copy the file mar1604.log.txt to the C:\Presentation directory. The command does not delete the original file.
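The command line itself is missing from this copy of the page. Based on the description, it would have been along these lines (the source directory is an assumption):

```powershell
# Copy a single log file into C:\Presentation; the original file is left in place.
Copy-Item C:\Logfiles\mar1604.log.txt -Destination C:\Presentation
```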
Example 2
Description
-----------
This command copies the entire contents of the Logfiles directory into the Drawings directory. If the source directory contains files in subdirectories, those subdirectories will be copied with their file trees intact. The Container parameter is set to true by default. This preserves the directory structure.
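Again the command was dropped from this copy; a plausible reconstruction matching the description (directory names assumed from the text):

```powershell
# Copy a whole directory tree; -Recurse brings subdirectories along, and
# -Container (true by default) preserves the directory structure.
Copy-Item C:\Logfiles -Destination C:\Drawings -Recurse
```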
Example 3
Description
-----------
This command copies the contents of the C:\Logfiles directory to the C:\Drawings\Logs directory. It will create the subdirectory \Logs if it does not already exist. | http://technet.microsoft.com/en-us/library/dd347638(d=printer).aspx | CC-MAIN-2013-48 | refinedweb | 242 | 59.19 |
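The missing command probably looked something like this (a sketch based on the paths named in the description):

```powershell
# Copy the contents of C:\Logfiles into C:\Drawings\Logs, creating \Logs if it
# does not already exist.
Copy-Item C:\Logfiles -Destination C:\Drawings\Logs -Recurse
```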
I’m working on an app that to do some facial recognition from a webcam stream. I get base64 encoded data uri’s of the canvas and want to use it to do something like this:
cv2.imshow('image',img)
The data URI looks something like this:
<img src="data:image/jpeg;base64,...">
The official doc says that imread accepts a file path as the argument. From this SO answer, if I do something like:
import base64

imgdata = base64.b64decode(imgstring)  # I use imgdata as this variable itself in references below
filename = 'some_image.jpg'
with open(filename, 'wb') as f:
    f.write(imgdata)
The above code snippet works and the image file gets generated properly. However, I don't think so many file I/O operations are feasible considering I'd be doing this for every frame of the stream. I want to be able to read the image into memory directly, creating the img object.
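For what it's worth, the parsing half of this needs nothing beyond the standard library: a data URI is just a header, a comma, and a base64 payload. A minimal sketch (the tiny PNG-signature payload here is a made-up stand-in for the real canvas data):

```python
import base64

def data_uri_to_bytes(uri):
    """Split a data URI on its first comma and decode the base64 payload."""
    header, _, payload = uri.partition(',')
    return base64.b64decode(payload)

# Stand-in payload -- in the real app this comes from the canvas stream.
uri = "data:image/png;base64," + base64.b64encode(b"\x89PNG\r\n\x1a\n").decode()
raw = data_uri_to_bytes(uri)
print(raw[:4])  # b'\x89PNG'
```

These raw bytes are exactly what the answers below feed into a numpy array for cv2.imdecode.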
I have tried two solutions that seem to be working for some people.
pilImage = Image.open(StringIO(imgdata))
npImage = np.array(pilImage)
matImage = cv.fromarray(npImage)
I get "cv not defined", as I have OpenCV 3 installed, which is available to me as the cv2 module. I tried img = cv2.imdecode(npImage, 0), but this returns nothing.
Getting the bytes from the decoded string and converting them into a numpy array of sorts:

file_bytes = numpy.asarray(bytearray(imgdata), dtype=numpy.uint8)
img = cv2.imdecode(file_bytes, 0)  # Here as well I get returned nothing
The documentation doesn't really mention what the imdecode function returns. However, from the errors that I encountered, I guess it is expecting a numpy array or a scalar as the first argument. How do I get a handle on that image in memory so that I can do cv2.imshow('image', img) and all kinds of cool stuff thereafter?
I hope I was able to make myself clear.
This worked for me on python 2, and doesn’t require PIL/pillow or any other dependencies (except cv2):
Edit: for python3, use base64.b64decode(encoded_data) to decode instead.
import cv2
import numpy as np

def data_uri_to_cv2_img(uri):
    encoded_data = uri.split(',')[1]
    nparr = np.fromstring(encoded_data.decode('base64'), np.uint8)
    img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
    return img

data_uri = "data:image/jpeg;base64,..."  # the full URI was truncated in the original
img = data_uri_to_cv2_img(data_uri)
cv2.imshow('image', img)
This is my solution for python 3.7 and without using PIL
import base64
import cv2  # missing from the original snippet
import numpy as np  # missing from the original snippet

def readb64(uri):
    encoded_data = uri.split(',')[1]
    # np.fromstring is deprecated; np.frombuffer is the non-deprecated equivalent
    nparr = np.fromstring(base64.b64decode(encoded_data), np.uint8)
    img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
    return img
I hope this solution works for everyone.
You can just use both cv2 and pillow like this:
import base64
from PIL import Image
import cv2
from StringIO import StringIO
import numpy as np

def readb64(base64_string):
    sbuf = StringIO()
    sbuf.write(base64.b64decode(base64_string))
    pimg = Image.open(sbuf)
    return cv2.cvtColor(np.array(pimg), cv2.COLOR_RGB2BGR)

cvimg = readb64('...')  # the base64 payload was truncated in the original
cv2.imshow('image', cvimg)
I found this simple solution.
import cv2
import numpy as np
import base64

image = ""  # raw data with base64 encoding (payload omitted in the original)
decoded_data = base64.b64decode(image)
np_data = np.fromstring(decoded_data, np.uint8)
img = cv2.imdecode(np_data, cv2.IMREAD_UNCHANGED)
cv2.imshow("test", img)
cv2.waitKey(0)
When working on applications, you'll often need to execute certain tasks in the background at predefined intervals of time. Scheduling jobs in applications is a challenge, and you can choose from many available frameworks, such as Quartz, Hangfire, etc.
Quartz.Net has been in use for a long time and provides better support for working with Cron expressions. Hangfire is yet another job scheduler framework that takes advantage of the request processing pipeline of ASP.Net for processing and executing jobs.
Quartz.Net is a .Net port of the popular Java job scheduling framework Quartz. As the official website puts it, Quartz.Net is "a full-featured, open source job scheduling system that can be used from smallest apps to large scale enterprise systems."
Getting started
You can install Quartz.Net from the downloads section of the Quartz official website. You can also install Quartz.Net through the Package Manager Window in your Visual Studio IDE.
The three primary components in Quartz are jobs, triggers and schedulers. A job denotes the task that needs to be executed, while a trigger specifies how the job will be executed. The scheduler is the component that schedules the jobs. Note that you should register your jobs and triggers with the scheduler.
Programming Quartz.Net in C#
To create a job, you should create a class that implements the IJob interface. Incidentally, this interface declares the Execute method -- you should implement this method in your custom job class. The following code snippet illustrates how you can implement the IJob interface to design a custom job class using the Quartz.Net library.
public class IDGJob : IJob
{
public void Execute(IJobExecutionContext context)
{
//Sample code that denotes the job to be performed
}
}
Here's a simple implementation of the Execute method of the IDGJob class -- I'll leave it to you to design your custom job class to suit your application's needs. The code snippet given below writes the current DateTime value as a text to a file. Note that this implementation is not thread safe; it's just for illustration purposes only.
public void Execute(IJobExecutionContext context)
{
using (StreamWriter streamWriter = new StreamWriter(@"D:\IDGLog.txt", true))
{
streamWriter.WriteLine(DateTime.Now.ToString());
}
}
Now that you have already defined the job class, you'll need to create your own job scheduler class and define the trigger for your job. The trigger will contain the job's metadata, such as a cron expression. You can visit this link to generate cron expressions.
Now, how is that the jobs are scheduled? Well, there is a component called job scheduler that is responsible for scheduling your jobs. In essence, you can take advantage of job schedulers to schedule your jobs for execution. The following code listing illustrates how we can define a trigger for our job and then register job and the trigger with the job scheduler.
public class JobScheduler
{
public static void Start()
{
IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
scheduler.Start();
IJobDetail job = JobBuilder.Create<IDGJob>().Build();
ITrigger trigger = TriggerBuilder.Create()
.WithIdentity("IDGJob", "IDG")
.WithCronSchedule("0 0/1 * 1/1 * ? *")
.StartAt(DateTime.UtcNow)
.WithPriority(1)
.Build();
scheduler.ScheduleJob(job, trigger);
}
}
Refer to the code listing given above. Note how the name and group of the trigger have been specified when creating the trigger instance. Once the trigger for the job is defined and configured with the needed cron expression, it is registered with the job scheduler.
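For reference, the cron expression used above fires once every minute. Read field by field (this breakdown follows the standard Quartz cron format; it is not from the original article):

```csharp
// "0 0/1 * 1/1 * ? *" -- one field per position, left to right:
//   0    seconds       -> at second 0
//   0/1  minutes       -> every 1 minute, starting at minute 0
//   *    hours         -> every hour
//   1/1  day-of-month  -> every 1 day, starting on day 1
//   *    month         -> every month
//   ?    day-of-week   -> no specific value (day-of-month governs)
//   *    year          -> every year
```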
You can as well build a trigger that is fired every second and repeats it indefinitely. Here's a code snippet that illustrates how you can build a trigger like this.
ITrigger trigger = TriggerBuilder.Create()
.WithIdentity("IDGJob", "IDG")
.StartNow()
.WithSimpleSchedule(s => s
.WithIntervalInSeconds(10)
.RepeatForever())
.Build();
You do not always need a windows service to start your scheduler. If you are using an ASP.Net web application, you can take advantage of the Application_Start event of the Global.asax file and then make a call to JobScheduler.Start() method as shown in the code snippet below.
public class Global : HttpApplication
{
void Application_Start(object sender, EventArgs e)
{
// Code that runs on application startup
JobScheduler.Start();
}
}
Note that JobScheduler is the name of the custom class we designed earlier. Note that you can also leverage Quartz.Net to store your jobs to persistent storages, i.e., you can persist your jobs in database as well. You can know the list of all the supported job stores from here.
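One detail the article doesn't cover: a scheduler started in Application_Start should also be shut down when the application ends, or in-flight jobs may be killed mid-run when the app domain unloads. A minimal sketch (the Stop method and its body are assumptions, not part of the article's JobScheduler class):

```csharp
public class Global : HttpApplication
{
    void Application_End(object sender, EventArgs e)
    {
        // Hypothetical companion to Start(); see the assumed Stop() below.
        JobScheduler.Stop();
    }
}

// Assumed addition inside the JobScheduler class:
// public static void Stop()
// {
//     // Passing true lets running jobs finish before the scheduler shuts down.
//     StdSchedulerFactory.GetDefaultScheduler().Shutdown(true);
// }
```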
This article is published as part of the IDG Contributor Network. Want to Join? | http://www.infoworld.com/article/3078781/application-development/how-to-work-with-quartz-net-in-c.html | CC-MAIN-2016-50 | refinedweb | 788 | 58.89 |