Bulk AMA with Stone DeFi
Hello guys, today we would like to welcome Vincent Khoo — Stone DeFi Marketing Lead.
Hi Vincent! It's great to have you with us; we're looking forward to the talk.
Hello Everyone.
Could you tell us about yourself and give us your team’s introduction? What is the idea behind Stone Defi?
Hi. My name is Vincent, Marketing Lead for Stone; everyone can call me VK. Before I get into the topic of Stone, please allow me to give a brief introduction. I was in the traditional finance space before crypto. In late 2016 and early 2017 I started my crypto journey, and from 2017 until 2019 I was with a crypto fund named Chain Capital. During my time with the fund, I studied and performed due diligence on many kinds of projects. We invested in a number of good projects, and in some poorly performing ones too. I was also involved in project incubation, from the very beginning through exchange listing and post-investment management.
After a few years at the crypto fund, I stepped down from my position as a business officer and started my own fintech advisory startup in Singapore and Malaysia, as well as a fund management business for the secondary market.
In the summer of 2020, we found that DeFi was something fresh and revolutionary. We started putting funds into DeFi protocols and enjoyed the first-mover rewards. Soon we discovered significant problems in the market, such as high APY volatility, security issues, and gas fees. That's when my team and I came up with the idea of Stone, which focuses on bringing "Rock Solid Yield" to DeFi users. Stone is also looking to provide more innovative products based on a wide range of yield-bearing assets to users across multiple blockchains.
Your slogan is Rock Solid Yield. Could you explain what it means?
Our logo is a hollow S on a stripped stone pattern. We propose the following SOLID principles for Stone:
S for Stable returns: Manage risks and rewards to achieve stable returns. DeFi is complex, and looking only at the indicative APY creates more tears than happiness. The paradigm shift in yield philosophy is to consider both risk and return (and the sustainability of that return), a principle widely used in the traditional financial industry.
O for Open collaboration: Work with as many community members as possible to source the best ideas. Ensure the right incentive model is in place to reward contributors from day one. Stone is flexible, so any project can connect with it as well. The Stone protocol (including strategies) will be open-sourced for transparency. This also allows communities and partners to contribute to protocol development easily.
L for Long term development: Establish commitment and an inclusive culture to get more contributors along the way with the right incentive system.
I for Incremental deployment: Make incremental improvements with extensive testing, constantly learning from other projects. DeFi is a nascent industry requiring a great deal of trial and error.
D for DAO driven: Provide a clear roadmap towards a DAO-governed protocol. We acknowledge that at the beginning a small, committed committee is more practical during bootstrap, and a fully decentralized organization takes time. Stone shall engage the community to discuss a plan from day one and ensure sufficient funding (tokens) is reserved for the DAO to manage in the future.
You recently completed your integration with Polygon/Matic. What opportunities did it open up for Matic users to farm?
Well, this is the highlight of tonight. We are glad to integrate with Polygon (formerly Matic Network), a Layer-2 scaling solution offering payment and lending solutions, atomic swaps, and improved dApp and DEX performance.
As Polygon is more open and robust, primarily in terms of the types of architecture it can support, our newly launched product on Polygon lets Stone users benefit from its yield income opportunities. Because Polygon is built on Ethereum, it can incorporate any scaling or infrastructure solution from the Ethereum ecosystem, and it fully adopts the Ethereum ethos of open innovation, having been designed with the same goals in mind.
From the above, we can see that our new integration with Polygon addresses common pain points in the DeFi world: high gas fees, scalability, and the user experience as a whole. Stone aims to be the best yield aggregation platform, one that allows all PoS assets to flow and transact between chains, as we truly believe there will be a multi-chain landscape in the long run, where communication between chains has low or even zero friction.
Stone also wishes to provide the best user experience to all our users. For example, the experience CeFi offers today could one day be achieved in the DeFi world, and this is very significant in creating "Rock Solid Yield" for everyone in the DeFi ecosystem. To kickstart this vision, Matic is just another new journey for us, and we will continue to expand our public-chain coverage and deliver the best product for our users.
DeFi is evolving very fast, and the risk of things breaking is very high. What do you think about this, and can you be sure that Stone's products will be in demand in the long term? What is Stone planning to contribute to DeFi's growth?
What we have observed in the DeFi space is that many projects utilize unsustainable yields to attract TVL deposits into their protocols. Unfortunately, this has always resulted in wildly fluctuating token prices for holders and eventually depresses the value of the protocol tokens. STONE focuses on creating long-term sustainable yield strategies that are reliable and allow our token holders to sleep well at night, knowing that the STONE protocol is powering higher investment alpha with properly balanced risk/reward outcomes. Hence the promise of Rock Solid Yield.
Next, as a key differentiator, STONE will be launching innovative and unique yield strategies, allowing for decentralized fund creation and asset deployment. Currently, yield aggregators available in the market rely primarily on lending and liquidity provision to generate yield. While Stone will have strategies in this space, our strategies will also address two major markets untouched to date: liquid staking strategies and data yield strategies. In particular, the staking market cap is well over US$120B, a massive global market.
Many projects have slowed their development due to the market situation. Are there any delays in your roadmap?
Due to market conditions, we can see that the majority of projects have slowed down a lot; trading volume on exchanges has dropped at least 50%, and people are losing interest or confidence in the crypto market.
Although market interest is low, our team is still working as usual, and we continue with PR events like AMAs and other activities. Internally, we are focusing most on the product and tech side. This is the best period for us to prepare and improve the product. Everything on the roadmap is still on track and will be completed.
How do you keep your customers' assets safe from hackers? How would you handle a cyber attack on your platform that infringes on user privacy? Is StoneDefi protected and ready to deal with this issue?
For Stone, the audit is one part, and we are also very careful about the strategies. We are glad to have passed the audit by Peckshield, and in addition we used our own funds to test before going public. We fully agree with the idea of using smart contracts to control fund flow and to set authority clearly.
Besides, we understand there have been many exploits even after audits, so we opted for a more careful approach to the product release. We have been releasing more stable versions, like the alpha test, and we continuously engage with third-party security services and other developers to enhance product safety. To emphasize again: we are not pushing all functions at once, but making sure we get each release right.
We do not blindly trust code audits. Our approach is to launch features one by one, and have checkpoints to test out things in a real environment. that’s what the alpha version is for. for future features, we will do additional audits for security.
As TVL grows and we get more feedback from the community, our team is also gaining experience and improving the process. We have hired more external experts to stress-test our platform. All of this is to ensure security, and we hope the community and users will participate as part of us and go far with us.
As you are going to launch products on substrate, are you also planning to apply for a parachain slot somewhere in the future? Or integrate with solutions like Bifrost?
First of all, we all know that parachain slots on Polkadot are limited, so we need to bid. But we also know that a slot does not simply go to whoever locks the most DOT on-chain; that's why we need to talk about strategy. For a project, the significance of the slot auction is obtaining the right to use limited resources. Stone does not need to participate in the slot auction at this stage. If we really need to, we can also use a parachain on a leased basis.
But in the future, we will help more high-quality projects in the Polkadot ecosystem participate in slot auctions, because we hold a lot of KSM (5 nodes) and DOT (about 4.5 million votes).
Therefore, our strategy is to first support other projects that need it more. When Stone needs an auction slot at some point, we hope to get the support of other projects and ticket holders too.
What is StoneDefi's revenue model? In which ways do you generate revenue or profit? Many projects just like to speak about the "long-term vision and mission," but what are your short-term objectives? What are you focusing on right now?
Our ultimate focus is to ensure user funds are safe and yields are as rock-solid as possible. We are looking at Layer 2 and are also building on Substrate to leverage cross-chain capabilities and low fees in the future.
We know that few things matter more in the DeFi space than technical advancements that can lower gas fees. Meanwhile, Stone aggregates capital deployment in tranches and will monitor gas fees. We also plan to compensate part of users' gas costs out of Stone's business income in the future. Therefore, as TVL on Stone gets bigger, gas fees will effectively get lower for every user.
We are also a platform for liquid-staked assets. We will work together with more chains in the future, creating more use cases and providing more yield for their staked assets. That is also part of the income that sustains the project.
What is the competitive advantage of StoneDefi? What do you have over your competitors? Regarding features such as security, scalability, and community development, do you think you've finished, or do you need to continue developing?
Well, STONE won’t encourage users to go into new protocols because the APY is high, but we will assess its credibility and how sustainable that APY would be.
In STONE, we introduce an index to hedge the risks of single assets, and STONE will be able to deploy the underlying assets to generate additional yield for the index holder.
In simple terms, the Sharpe ratio is the tool for risk-adjusted allocations. It doesn’t simply focus on APY but overall risk and rewards assessment. This is the way we provide the “Rock Solid Yield”.
In our Litepaper, there are mathematical explanations of Sharpe Ratio and Portfolio Rebalancing. This enables investors to examine the overall risk-adjusted return of a portfolio or an asset. In fact, it has been widely used in the traditional financial markets.
Therefore, we strongly believe that investors will tend to prefer stable and solid returns over a high-volatility APY. Of course, there are people who would like to take risks, but we all know that's not a long-term and relaxing game.
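As a rough illustration of the risk-adjusted idea above (the numbers and the per-period risk-free rate are hypothetical, and this is not Stone's actual strategy code), the Sharpe ratio divides the average excess return by its volatility, so a steady low yield can out-score a larger but erratic one:

```java
import java.util.Arrays;

public class SharpeSketch {
    // Sharpe ratio = (mean return - risk-free rate) / standard deviation of returns.
    static double sharpe(double[] returns, double riskFree) {
        double mean = Arrays.stream(returns).average().orElse(0.0);
        double variance = Arrays.stream(returns)
                .map(r -> (r - mean) * (r - mean))
                .sum() / (returns.length - 1); // sample variance
        return (mean - riskFree) / Math.sqrt(variance);
    }

    public static void main(String[] args) {
        double rf = 0.001; // hypothetical per-period risk-free rate
        // A steady ~1% yield vs. a volatile strategy with a higher average APY.
        double[] steady = {0.010, 0.011, 0.009, 0.010, 0.012};
        double[] risky  = {0.050, -0.030, 0.080, -0.040, 0.060};
        System.out.printf("steady Sharpe: %.2f%n", sharpe(steady, rf)); // higher
        System.out.printf("risky  Sharpe: %.2f%n", sharpe(risky, rf));  // lower
    }
}
```

Even though the volatile series has more than double the average return, its Sharpe ratio is far lower, which is the sense in which a portfolio can prefer the "boring" yield.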
Another key differentiator is our cross-chain yield strategies. Typically, yield aggregators only reside on a single chain. STONE’s yield strategies allow for cross-chain asset deployment that can help users maximize their returns on a global, multi-chain portfolio level.
Lastly, Stone aims to build the most open and collaborative community culture in the space. The reason we are launching community development before the product launch and token issuance is that we want community members to make an impact from day one and be part of Stone's growth journey. We will launch a committee first to make collective decisions, providing options for our community to choose from. We will subsequently decentralize into a DAO model and pass full control to the community. We will soon publish an explanation of Stone's tokenomics; the general idea is that besides tokens for yield farming, the largest reserve is for the DAO and for contributors to the product. Stone, at its core, will be a project for the community.
Currently, most projects and platforms operate in English. How will you reach non-English local communities? Do you have any plan to help them better understand your project?
Stone's current expansion focuses on markets like Korea and China, as they cover a large share of players in the crypto space. However, due to regulation in these two countries, we need to work smart and plan well in order to penetrate them. We are also looking for ambassadors for non-English-speaking communities to help us expand.
Thank you, Vincent, I’m out of questions now. It’s been great talking to you!
Source: https://crypto-bulk-intl.medium.com/bulk-ama-with-stone-defi-747f4d2c2515?readmore=1&source=user_profile---------2----------------------------
Friday Apr 10, 2009
Creating OpenSolaris installation USB sticks on Windows
By glagasse on Apr 10, 2009
It was great to read my mail this morning and see this:
A nice GUI tool that will allow Windows users to put a copy of the OpenSolaris media on a USB stick, which they can use to boot machines without a CD/DVD-ROM (and with one, if you'd rather save a CD), instead of needing to have Solaris/OpenSolaris already installed somewhere and using the usbcopy script.
Now, if we could get something similar written for Linux, Mac OS X and OpenSolaris the path to world domination would be even closer :-)
I haven't tried this yet, so I can't speak to how well it does or does not work. A USB image that can be written to a USB stick can be found at:
This is the latest snapshot of what will be the 2009.06 OpenSolaris release (which is shaping up quite nicely).
Friday Apr 25, 2008
Release Candidate for OpenSolaris 2008.05 based on Project Indiana now available
By glagasse on Apr 25, 2008
Monday Apr 02, 2007
Dwarf Caiman or rather, this isn't your father's Solaris Installer
By glagasse on Apr 02, 2007
Greetings, Blogosphere. Long time, no post. That's mostly due to my working on things that weren't necessarily interesting outside of Sun (i.e., Gatekeeping/Release Engineering). That said, I've had a change in responsibilities and so am more likely to start posting here again. Stay tuned.
Now to the topic at hand. I was involved with the consolidation that handles the Solaris Installer technology for almost 2 years. As such, I grew quite an affinity for the work going on to bring the Solaris Installer into the 21st century, as it were, with respect to offerings from Apple, Windows, and Linux distributions (those distributions that offer more than a text-based install, that is).
The work required is part of a larger initiative called Caiman which calls for the re-architecture of the Solaris Installer. Dave Miner is the architect for this overall strategy and posted an architecture document on OpenSolaris. You can find the document here.
The work outlined in that document is being undertaken in the Installation and Packaging Community.
The first effort to come out of this re-architecture is the Dwarf Caiman project
Today the team working on Dwarf Caiman released a demo of the gui application which will become the Solaris Installer. The announcement with the details can be found on the install-discuss forum/mailing list.
That said, downloading the demo package and installing it should pose no great risk. It doesn't do an actual install and thus doesn't modify anything on disk. It merely walks through what the install will look like. This is, however, pre-pre-pre alpha, and the usual caveats for such software apply. There are lots of bugs, to be sure, and certainly a lot more work to be done to turn it from a demo into a functional piece of software.
I was utterly blown away after installing the demo and running it. I know the people working on this project, and they're very smart, so it should really come as no surprise how good this looks. Comparing the demo installer to what we currently employ is, well, farther apart than night and day, in my opinion. The Dwarf installer rivals anything that Windows, Mac OS, or Linux put out, as far as I'm concerned. It truly looks sophisticated and functional. The team has managed to put together something that looks more 21st century than 20th. And this is just window dressing, as it were. I cannot wait until this is available to install Solaris with.
I highly suggest that anyone who is curious about where we are taking the Solaris installation experience to go ahead and download the demo and take a look. Sure, this is just window dressing but it's going to allow us to do some really cool things underneath and it's only the beginning of a much larger strategy.
As the saying goes, the future looks bright indeed!
Wednesday Jun 22, 2005
Retro Gaming - Solaris Style
By glagasse on Jun 22, 2005
Being a long-time avid gamer I enjoy reminiscing and running old favorites on today's hardware and software when possible.
Dan pointed out to me at lunch today that an old favorite of ours called star control 2 was available as open source. He thought it would be quite cool to get it running on Solaris x86 (which I agreed with).
So, after downloading the source and the content files it was a simple matter of adding some libs from blastwave and compiling it up. Sure enough, it runs just beautifully (1980's style) on my Solaris x86 box.
This was done on an x86 box running the equivalent of Solaris Express 06/05. You'll need the following:
gcc (/usr/sfw/bin/gcc)
gnu make (/usr/sfw/bin/gmake)
You'll also need to install the following packages from blastwave (or you could roll your own):
/opt/csw/bin/pkg-get -i libsdl libogg libvorbis sdlimage
Then set some environment variables:
export PATH=/opt/csw/bin:/usr/sfw/bin:$PATH
export CPPFLAGS="-I/opt/csw/include -I/usr/sfw/include"
export LDFLAGS="-L/opt/csw/lib -L/usr/sfw/lib -R/opt/csw/lib -R/usr/sfw/lib"
Then you're on to compiling:
./build.sh uqm config
./build.sh uqm depend
./build.sh uqm install
Finally:
/usr/local/games/bin/uqm
Tuesday Jun 14, 2005
Solaris Standards Conformance
By glagasse on Jun 14, 2005
Today with the launch of OpenSolaris I thought I would talk about something that doesn't seem to get a lot of notoriety (ie. standards work inside Sun).
One of the things I worked on during Solaris 10's development cycle was the Solaris AGR/UNIX200x/SUSv3 Standards Conformance project. The project was tasked with delivering the ON (OS/Networking) portions of a certifiable/brandable Solaris Product as defined by the work performed by the AGR (Austin Group Revision) Standards Working Group.
This work is also sometimes identified as UNIX200x and SUSv3. The formal standard is known by the following:
- ISO/IEC 9945-[1234]:2002
- IEEE Std 1003.1-2001
- The Open Group's (TOG) Single UNIX Specification, version 3 (SUSv3)
This standard requires conformance to the ISO/IEC 9899:1999 C programming Language standard.
My contribution to this project was specifically in the networking area. It involved making changes to various header files so that Solaris conforms to the above listed specifications. While I didn't add a lot of new exciting code as part of this project, I did get to add a completely new system call (see Eric Schrock's treatise on how to do this properly). Prior to Solaris 10, there was no sockatmark() system call. Since the Open Group Base Specification requires this, I needed to add it to Solaris.
The function itself is trivial, it is used to determine whether a socket is at the out-of-band mark. On Solaris, before this was implemented, a developer would just issue an ioctl call using the SIOCATMARK request. This is essentially what sockatmark() does. I made it a wrapper system call that calls ioctl with the proper request.
/*
 * Determine whether the socket is at the out-of-band
 * data mark.
 */
int
_sockatmark(int sock)
{
    int val;

    if (ioctl(sock, SIOCATMARK, &val) == -1)
        return (-1);
    return (val);
}
Not ground breaking or sexy but part of a bigger picture nonetheless.
There were many contributors to this project (ON is rather large in terms of the sheer amount of code contained therein) and it was a great team to work on and with. We managed to complete all of the required work and Solaris 10 is branded for UNIX03 on all architectures that Solaris 10 supports (x86,sparc,opteron). Of course, since the next version of Solaris starts out from where Solaris 10 finished, the next version should be branded as well once it ships.
Technorati Tag: OpenSolaris
Technorati Tag: Solaris
Tuesday May 31, 2005
printf("Hello World!\n");
By glagasse on May 31, 2005
Allow myself to introduce... myself.. I've been at Sun for 6 years now (in fact today is my 6 year anniversary).
I came to Sun (in reverse chronological order) by way of Progress Software, FTP Software, and finally Tranti Systems. I was hired into Sun by Doug Chmura to work in the Networking Technical Support Group. Providing technical support to customers on a variety of issues. Pretty much all of the core networking functionality in Solaris (tcp/ip, sendmail, dns, nis, nis+, etc.). That was a fun group to work in. And boy, let me tell you, I learned more about Unix administration and troubleshooting in my first 6 months on the job than I had my entire career at the time. Which of course I loved for various reasons. Mostly because I love to learn new things (I think you have to if your going to be successful in high tech). The work was really stressful, you've never quite had a phone call as one from a harried system administrator who is screaming at you because his mission critical server isn't doing it's job.
Having worked in the Technical Support field for more years than I cared to remember, I decided it was time for a change (more to the point I decided I was tired of picking up a telephone every 5 minutes) and moved over into Software Test Development in the SNT (Solaris Networking Technologies) group. Though it's not called SNT anymore, I still work with the same guys on the same things. Essentially, instead of supporting the core networking functionality in Solaris, I helped write automated (and some not so automated) tests and test suites for said functionality (and new technology as it was integrated).
And now comes a new chapter to my professional career. Instead of writing tests for features and functionality in Solaris, I'll be developing those features and functionality. I'll be helping the Install area of Solaris (fixing bugs for starters).
It should be a fun ride.
Source: https://blogs.oracle.com/glagasse/category/General
PCNT_TCC_TypeDef Struct Reference
TCC initialization structure.
#include <em_pcnt.h>
TCC initialization structure.
Field Documentation
◆ mode
Mode to operate in.
◆ prescaler
Prescaler value for LFACLK in LFA mode.
◆ compare
Choose the event that will trigger a clear.
◆ tccPRS
PRS input to TCC module, either for gating the PCNT clock, triggering the TCC comparison, or both.
◆ prsPolarity
TCC PRS input polarity.
False = Rising edge for comparison trigger, and PCNT clock gated when PRS signal is high.
True = Falling edge for comparison trigger, and PCNT clock gated when PRS signal is low.
◆ prsGateEnable
Enable gating PCNT input clock through TCC PRS signal.
Polarity selection is done through prsPolarity.
Source: https://docs.silabs.com/gecko-platform/latest/emlib/api/efm32gg11/struct-p-c-n-t-t-c-c-type-def
U.S. Loses Top-Notch Credit Rating. Is Gold Still AAA?
Based on the August 5th, 2011 Premium Update. Visit our archives for more gold & silver analysis.
Nearly two weeks ago, in our essay on gold and debt ceiling, we wrote the following:
The [debt ceiling] stalemate may cost America its AAA rating, adding $100 billion a year to government costs while dragging down economic growth.
As a matter of fact, on Friday the S&P rating agency downgraded the U.S. credit rating from AAA to AA+. This is an unprecedented event, and because of that its effects on the stock market are unclear. In the long term they are unclear because, on one hand, the credit downgrade will obviously make U.S. securities riskier and thus less attractive to foreign investors; on the other hand, we know that when Canada lost its AAA rating in April 1993, Canadian stocks rallied more than 15% in the subsequent year, and Japanese stocks moved over 25% higher in the 12 months after Moody's downgraded Japan in November 1998.
In the short term the situation is complicated because some of this information might have already been factored in in previous price levels and we could have the "buy the rumor, sell the fact" type of event, which in this case would mean "sell the rumor, buy the fact".
Speaking of facts, let's start with them. The U.S. has been downgraded from AAA to AA+.
From the S&P website we get the following definitions:
'AAA' -- Extremely strong capacity to meet financial commitments. Highest rating.
'AA' -- Very strong capacity to meet financial commitments.
Note: Ratings from 'AA' to 'CCC' may be modified by the addition of a plus (+) or minus (-) sign to show relative standing within the major rating categories.
Additional facts are:
- Moody's and Fitch did not change their top credit rating for the U.S.
- Credit ratings are used for calculating the required rate of return (lower rating -> bigger risk -> bigger payoff required for taking this additional risk, called the risk premium), which means they are directly related to U.S. debt securities and indirectly to other U.S. securities as well.
So, the U.S. has not been downgraded to "junk" status (like Greece), it's been downgraded from extremely strong to very strong. This will have a small impact on the risk premiums - perhaps 0.38% (compare country risk premium between Aaa and Aa1 countries on this website). So, the logical approach suggests that not much should change - after all, this is a slight change of view on the U.S. credit, and a change of view expressed by only one rating agency.
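To put that roughly 0.38% premium in perspective, here is a back-of-the-envelope sketch. The debt figure is a hypothetical, 2011-era round number chosen for illustration, not a quote from the article, and it assumes the whole debt stock repriced at once:

```java
public class RiskPremiumSketch {
    public static void main(String[] args) {
        // Hypothetical figures for illustration only.
        double outstandingDebt = 14.3e12; // assumed 2011-era U.S. federal debt, USD
        double extraPremium = 0.0038;     // the ~0.38% Aaa -> Aa1 premium cited above

        // Extra interest cost per year if all of the debt repriced at the new premium.
        double extraAnnualCost = outstandingDebt * extraPremium;
        System.out.printf("Extra annual interest: about $%.0f billion%n",
                extraAnnualCost / 1e9);
    }
}
```

In practice only newly issued or rolled-over debt reprices, so any realized cost would phase in over years rather than hit all at once.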
On the other hand, it's the world's biggest economic superpower that's no longer top notch and it seems that this action will make many investors sell their "riskless Treasuries" and buy other countries' notes/bonds or precious metals instead. There's a lot of fear in the marketplace as the traditional safe bet (Treasuries) doesn't appear as safe as it used to. This creates a potentially positive environment for gold.
To determine whether the outlook for metals is in fact positive, let's move on to the technical part of today's essay. We will start with the medium-term S&P 500 Index chart (charts courtesy by).
Declines seen on Thursday and Friday were followed by a huge move down on Monday triggered by the U.S. debt downgrade. These observations lead us to the obvious question of whether the decline will last longer. At this point, the situation is very unclear, however based on Tuesday's strong rebound after stocks touched the 38.2% Fibonacci retracement level visible on the above chart, 50-week moving average and other factors, it seems that at least a local bottom has been formed.
This doesn't paint an overly bullish picture for gold for the following weeks, as it has been negatively correlated with the main stock indices. In other words, gold's rally can be to a large extent explained by the increased fear among stock investors who dumped their holdings to buy gold. The US downgrade has increased the tension.
With stocks perhaps at a local bottom, it seems that gold may form a short-term top soon.
This becomes extremely important when you take into account the above long-term chart and realize that right now gold is on the brink of $1,800. Yes, we were bullish on gold just a few days ago, but that was also many tens of dollars ago. With this volatility things can change very quickly.
Once the first shock is over, we may see markets come to their senses and accept the fact that an AA+ rating for U.S. debt is far from bad. Once they do, gold is likely to move lower, even though the long-term situation (low interest rates at least until mid-2013) has just become even more favorable.
Summing up, the U.S. rating downgrade resulted in declines that took the general stock market indices much lower. However, an AA+ rating is not the end of the world, and investors may soon realize that they have overreacted. Was the final bottom reached? That is still unclear; however, at least a short-term move higher appears likely. Meanwhile, fueled by fear, gold might move just a little higher, but as soon as things calm down, the yellow metal is likely to decline, likely after topping close to $1,800.
Source: http://www.safehaven.com/article/22114/us-loses-top-notch-credit-rating-is-gold-still-aaa
Online auctions with ascending price and time limit
Project description
englishauction
Online auctions with ascending price and time limit
Contents
Source Code and Documentation
- Source Code:
- Documentation:
About
An English auction is the most common form of auction. When an auction opens, the price starts low and increases as buyers bid for the item. Live auctions usually end when there is no new highest bid for a period of time. For online auctions, an end time is usually set. To prevent items selling for a loss, sometimes the seller will place a reserve. A reserve is the least amount to sell the item for, although the auction may start at a lower price. Another common feature of online auctions is the ability to pay a set price to win and end the auction.
This package aims to provide functionality of online English auctions.
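As an illustration of those mechanics (a generic sketch, not this package's API), an English auction needs little more than a highest bid, a reserve check, and a buy-now shortcut:

```java
public class EnglishAuctionSketch {
    private double highestBid;
    private final double reserve; // least amount the seller will accept
    private final double buyNow;  // pay this much to win immediately
    private boolean closed = false;

    public EnglishAuctionSketch(double startPrice, double reserve, double buyNow) {
        this.highestBid = startPrice;
        this.reserve = reserve;
        this.buyNow = buyNow;
    }

    // A bid is accepted only while the auction is open and only if it
    // strictly raises the current highest bid (ascending price).
    public boolean bid(double amount) {
        if (closed || amount <= highestBid) {
            return false;
        }
        highestBid = amount;
        if (amount >= buyNow) {
            closed = true; // hitting the buy-now price ends the auction at once
        }
        return true;
    }

    // At the end time, the item sells only if the reserve was met.
    public boolean sold() {
        return highestBid >= reserve;
    }

    public double highestBid() {
        return highestBid;
    }
}
```

A real implementation would also track the end time and bidder identities, which is the kind of bookkeeping this package aims to handle.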
Requirements
- Requires.
Usage
from mindpowered_englishauction import *

ea = EnglishAuction()
ea.GetOpenAuctions(0, 10, "start", True)
Support
We are here to support using this package. If it doesn't do what you're looking for, isn't working, or you just need help, please Contact us.
There is also a public Issue Tracker available for this package.
Licensing
This package is released under the MIT License.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/mindpowered-englishauction/
Reading Text from Images Using Java
We take a look at some code that can help you to read text from an image with your Java application. Possible uses? Making sure your Captcha is doing its job.
This post will help read texts from your images. It makes use of the Tesseract library.
You can also use the module below to check if the Captcha on your site is strong enough and cannot be easily broken.
References
Language Used
Java
Git Location
POM Dependency
<dependency>
    <groupId>net.sourceforge.tess4j</groupId>
    <artifactId>tess4j</artifactId>
    <version>3.2.1</version>
</dependency>
Prerequisites
Let's assume you are running this program from c:\myprogram. Now, you can follow either of two methods, based on your requirements.
Space saving method: Only download the language data you need. That only requires 30MB for an English dataset.
Create a folder named tessdata inside c:\myprogram\
Navigate to
Download eng.traineddata for breaking Captchas with English (trained data is available for other languages as well).
Place the eng.traineddata inside the tessdata folder.
Finally, your folder structure should look like: c:\myprogram\tessdata\eng.traineddata
Time-saving method: Download the full trained-data package (larger than 1GB), which covers several languages.
You can also skip Step 2 to Step 5 and simply download the tessdata-master folder from
Unzip the content of tessdata-master.zip file in your main project folder (for example, here, it is c:\myprogram\)
Rename tessdata-master to tessdata
Finally, your folder structure should look like c:\myprogram\tessdata\<Trained data from several languages>.
Program
ImageCracker Class, crackImage Method
// Reconstructed from the "How It Works" steps below; the error string is illustrative.
public static String crackImage(String filePath) {
    File imageFile = new File(filePath);
    ITesseract instance = new Tesseract();
    try {
        return instance.doOCR(imageFile);
    } catch (TesseractException e) {
        System.err.println(e.getMessage());
        return "Error while reading image";
    }
}
How It Works
First, crackImage takes the image that needs to be read.
We point a file object to that image.
We make a Tesseract object named instance.
We call the predefined method doOCR of the Tesseract library, passing the file object from step 2.
The doOCR method returns the text read from the image and returns the same.
In case of failure, it prints the error message and returns an error string.
Driver Class, Main Method
public static void main(String[] args) {
    System.out.println(ImageCracker.crackImage("testImage.PNG"));
}
How It Works
We call the crackImage method, passing the image to be read from.
We print the text read from the method on the console.
Input Image (testImage.PNG)
Output
Create a Youtube metadata crawler using Java.
Full Program
ImageCracker Class
package com.cooltrickshome;

import java.io.File;
import net.sourceforge.tess4j.*;

// Reconstructed: the method body was truncated in the original listing.
public class ImageCracker {
    public static String crackImage(String filePath) {
        File imageFile = new File(filePath);
        ITesseract instance = new Tesseract();
        try {
            return instance.doOCR(imageFile);
        } catch (TesseractException e) {
            System.err.println(e.getMessage());
            return "Error while reading image";
        }
    }
}
Driver Class
package com.cooltrickshome;

public class Driver {
    /**
     * @param args
     */
    public static void main(String[] args) {
        System.out.println(ImageCracker.crackImage("testImage.PNG"));
    }
}
Hope it helps!
Published at DZone with permission of Anurag Jain, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/reading-text-from-images-using-java-1
On Wed, 27 Aug 1997, Martin Kraemer wrote:
> Here's a list of unrelated patches which you might want to comment
> on... (Or is my compiler just too picky? Only by applying these
> patches could I make it most of apache's modules _without_ warnings)
>
> Martin
>
> **** In SVR4, many machines have B_ERROR defined. This gives me lots
> **** of warnings when it gets redefined:
>
> Index: main/buff.h
> ===================================================================
> RCS file: /home/cvs/apachen/src/main/buff.h,v
> retrieving revision 1.3.0.1
> diff -u -r1.3.0.1 buff.h
> --- buff.h 1997/08/21 14:35:55 1.3.0.1
> +++ buff.h 1997/08/27 09:28:38
> @@ -69,6 +69,9 @@
> #define B_RDERR (16)
> /* A write error has occurred */
> #define B_WRERR (32)
> +#ifdef B_ERROR /* in SVR4: sometimes defined in /usr/include/sys/buf.h */
> +#undef B_ERROR
> +#endif
> #define B_ERROR (48)
> /* Use chunked writing */
> #define B_CHUNK (64)
I'd actually favour renaming all of these to BUFF_* ... but the above
is fine with me as an interim.
> **** On my SVR4 platform (and I assume on many others as well),
> **** the size argument is size_t, not the (apache default of) int:
> Index: main/conf.h
> ===================================================================
> RCS file: /home/cvs/apachen/src/main/conf.h,v
> retrieving revision 1.5
> diff -u -r1.5 conf.h
> --- conf.h 1997/08/25 12:13:19 1.5
> +++ conf.h 1997/08/27 09:28:38
> @@ -368,6 +368,9 @@
> #define JMP_BUF sigjmp_buf
> /* A lot of SVR4 systems need this */
> #define USE_FCNTL_SERIALIZED_ACCEPT
> +#ifdef SNI /* SINIX/ReliantUNIX, probably other SVR4's as well */
> +#define NET_SIZE_T size_t
> +#endif /*SNI*/
>
> #elif defined(UW)
> #define NO_LINGCLOSE
+1 on #define NET_SIZE_T size_t for all svr4s.
I've learned a little more about this particular issue. POSIX.something
dictated that the network functions (like accept) that pass back a length
through a pointer would use size_t for that length. Sane people pointed
out that size_t is usually 64 bits on a 64-bit box, and int is only
32-bit, so this new POSIX requirement would break any program written in
the BSD-style (where the length is an int). Subsequently a type socklen_t
was created, and it is used for the length pointers ... so we might want
to change the define from NET_SIZE_T to SOCKLEN_T.
>
> **** another attempt to resolve the size_t vs. int conflict:
> **** (BTW: what about the portability of the in_addr_t type?
> **** In the proxy module, test for (inet_addr() == -1) give me
> **** some warnings, too.)
> Index: main/util_script.c
> ===================================================================
> RCS file: /home/cvs/apachen/src/main/util_script.c,v
> retrieving revision 1.3.0.1
> diff -u -r1.3.0.1 util_script.c
> --- util_script.c 1997/08/21 14:36:01 1.3.0.1
> +++ util_script.c 1997/08/27 09:28:39
> @@ -439,7 +439,7 @@
> API_EXPORT(void) send_size(size_t size, request_rec *r) {
> char ss[20];
>
> - if(size == -1)
> + if(size == (size_t)-1)
> strcpy(ss, " -");
> else if(!size)
> strcpy(ss, " 0k");
I doubt that in_addr_t is portable.
I can't tell where send_size is called with a negative size. It doesn't
ever seem to be ...
>
> **** Another unused variable which could be eliminated:
> Index: modules/proxy/proxy_util.c
> ===================================================================
> RCS file: /home/cvs/apachen/src/modules/proxy/proxy_util.c,v
> retrieving revision 1.4
> diff -u -r1.4 proxy_util.c
> --- proxy_util.c 1997/08/25 08:26:50 1.4
> +++ proxy_util.c 1997/08/27 09:28:40
> @@ -110,7 +110,7 @@
> char *
> proxy_canonenc(pool *p, const char *x, int len, enum enctype t, int isenc)
> {
> - int i, j, ispath, ch;
> + int i, j, ch;
> char *y;
> const char *allowed; /* characters which should not be encoded */
> const char *reserved; /* characters which much not be en/de-coded */
> @@ -133,7 +133,6 @@
> else reserved = "";
>
> y = palloc(p, 3*len+1);
> - ispath = (t == enc_path);
>
> for (i=0, j=0; i < len; i++, j++)
> {
>
+1
Dean
http://mail-archives.apache.org/mod_mbox/httpd-dev/199709.mbox/%3CPine.LNX.3.95dg3.970901230320.31652J-100000@twinlark.arctic.org%3E
EPiServer 7 comes with a new user interface. It’s built as a client side application using JavaScript and is based on the client side framework Dojo. This affects editing of objects, for instance pages, and in EPiServer 7 we introduce a new way to create custom editors. Since the entire application is based on the Dojo framework, editing of content is done using the Dijit widget system which is Dojos widget system layered on top of the Dojo core.
It’s worth mentioning that we have a legacy editor that wraps any custom editors written for EPiServer 5-6 inside an iframe in a pop-up so any custom editors you have written will still work, although they require an additional click to edit.
General goals with the new editing system
Before going into the details of how to create a custom editor I would like to state few of the goals when developing the new object editing system:
In the EPiServer CMS 5-6 property system you connected a property editor by creating an instance of PropertyDataControl in your property. This instance was then responsible for creating controls both for displaying content on the site as well as creating an editor for the property.
In EPiServer 7 we have strived to separate the user interface logic and the rendering of content from the actual data types. To register an editor for a type you need to connect an editor descriptor to your property. There are several ways to do this. Either you annotate your property with your desired editor class in Dojo:
[ClientEditor(ClientEditingClass = "app.editors.EmailTextbox")]
public virtual string EmailAddress { get; set; }
You can also connect the property to an editor descriptor that is responsible for defining the editor class as well as additional settings for the editor:
[EditorDescriptor(EditorDescriptorType = typeof(ImageUrlEditorDescriptor))]
public virtual string Logotype2 { get; set; }
The two examples above will have effect on the property only. You can also register global editor descriptors and connect them to one or several types, effectively decoupling the UI-logic from the model:
[EditorDescriptorRegistration(TargetType = typeof(Email))]
public class PageListEditorDescriptor : EditorDescriptor
{
    public PageListEditorDescriptor()
    {
        this.ClientEditingClass = "app.editors.EmailEditor";
    }
}
Note: In the example above I connect the editor to my class Email which is just a regular .NET class used as my value type for my property. In the EPiServer 7 preview you need to connect your editor descriptor to your property type, for instance PropertyEmail and not the value type Email.
The following example shows how an editor with e-mail address validation can look like.
define([
    "dojo",
    "dojo/_base/declare",
    "dijit/_Widget",
    "dijit/_TemplatedMixin"
], function (
    dojo,
    declare,
    _Widget,
    _TemplatedMixin) {

    declare("app.editors.EmailTextbox", [_Widget, _TemplatedMixin], {

        // templateString: [protected] String
        //      A string that represents the default widget template.
        // NOTE: the data-dojo-attach-point/attach-event attributes were lost
        // in the original listing and are reconstructed here from the code
        // below, which references this.email and this._onChange.
        templateString: '<div> \
            <input type="email" data-dojo-attach-point="email" data-dojo-attach-event="onchange:_onChange" /> \
        </div>',

        postCreate: function () {
            // summary:
            //      Set the value to the textbox after the DOM fragment is created.
            // tags:
            //      protected

            this.set('value', this.value);

            if (this.intermediateChanges) {
                this.connect(this.email, 'onkeydown', this._onIntermediateChange);
                this.connect(this.email, 'onkeyup', this._onIntermediateChange);
            }
        },

        focus: function () {
            // summary:
            //      Put focus on this widget.
            // tags:
            //      public

            dijit.focus(this.email);
        },

        isValid: function () {
            // summary:
            //      Indicates whether the current value is valid.
            // tags:
            //      public

            var emailRegex = '[a-zA-Z0-9_.-]+@[a-zA-Z0-9-]+.[a-zA-Z0-9-.]+';
            if (!this.required) {
                emailRegex = '(' + emailRegex + ')?';
            }
            var regex = new RegExp('^' + emailRegex + '$');
            return regex.test(this.value);
        },

        onChange: function (value) {
            // summary:
            //      Called when the value in the widget changes.
            // tags:
            //      public callback
        },

        _onIntermediateChange: function (event) {
            // summary:
            //      Handles the textbox key press events and populates this to the onChange method.
            // tags:
            //      private

            if (this.intermediateChanges) {
                this._set('value', event.target.value);
                this.onChange(this.value);
            }
        },

        _onChange: function (event) {
            // summary:
            //      Handles the textbox change event and populates this to the onChange method.
            // tags:
            //      private

            this._set('value', event.target.value);
            this.onChange(this.value);
        },

        _setValueAttr: function (value) {
            // summary:
            //      Sets the value of the widget to "value" and updates the value displayed in the textbox.
            // tags:
            //      private

            this._set('value', value);
            this.email.value = this.value || '';
        }
    });
});
EPiServer configures a default namespace in Dojo called “app” which is mapped to the folder “ClientResources\Scripts”. In this case we have placed the editor in the “ClientResources\Scripts\Editors” folder in the site so that we can load the script file whenever the class "app.editors.EmailTextbox" is required. The result will look like this:
It’s also worth mentioning that the easiest way to accomplish email validation is to add a RegularExpression validaton attribute to your property which removes the need for a custom editor.
While the SDK documentation is great, this is just what I needed.
Thanks, Linus!
thanks for this
been massive help on a current project
Hello Linus, I added the simplest possible editor and added it to the page type like this:
[ClientEditor(ClientEditingClass = "app.editors.FooBar")]
[Display(Name = "Name of foobar", GroupName = SystemTabNames.Content)]
public virtual string FooBar { get; set; }
When editing the page the editor is loaded almost as expected; instead of displaying "Name of foobar" on the left hand column, the following is shown: "[object HTMLLabelElement]".
I'm a little bit confused about where you set the "Email Address" label in your example code. What am I missing?
John: I just tried your code on the latest build and it works fine. If you have access to the CTP-forum, please test with the latest build there; otherwise there should be a public build out pretty soon that you can use.
Hi Linus,
We have a lot of custom properties written in the earlier versions, some of them I managed to convert to the DOJO based properties.
But some of them have complex design and takes long time to learn and migrate to DOJO. Is it possible to hook the legacy editor for these properties?
I have seen in one of your blog that, we can use IFrame to keep the legacy editors, but I am not sure whether we can use this for the Custom properties.
If we can use legacy editors how can we do that?
Hi Kiran!
I understand your concern and sure, you can use the legacy system for your properties that you don't want/have time to convert. The legacy system is used automatically whenever there is no registered editor descriptor for a given type/uiHint-combination. If you are using a well known type, like string, int etc., you need to add an uiHint to make sure that you get a type/uiHint-combination without any editor descriptors. For instance
[UIHint("somethingunique")]
public virtual string MyCustomProperty
Thanks Linus
I have a simple requirement which I'm not sure if I need (if its possible) to create a new editor using Dojo.
The requirement is to have a property of type "PageReference" which should be set by only pages of a specific page type (i.e. AuthorPageType).
I need to retain the Episerver's nice default PageRefrence editor, but just filter the pages in tree view based on specific ones, so the user only select the right ones.
Is this requirement something somewhere in the SDK which I cannot see or I need to do extra work?
Thanks
Kayvan
@Kayvan. Sorry for the late answer. There is no support for disabling certain items when selecting pages in the tree. If you want to use a tree I would try to add a custom validation attribute that you put on the property in the model. Check this blog post:
The other option you have is to show possible selections in a drop down. If there are only a few options I would go for Joel Abrahamssons solution:
And if you potentially have a lot of options check this blog post:
Hi Linus,
I'm having some real trouble getting this working. See here: . Any help appreciated.
Thanks,
Greg
https://world.episerver.com/Blogs/Linus-Ekstrom/Dates/2012/7/Creating-a-custom-editor-for-a-property/
ssl_get_fd(3ssl) [bsd man page]
SSL_get_fd(3SSL)                     OpenSSL                    SSL_get_fd(3SSL)

NAME
       SSL_get_fd - get file descriptor linked to an SSL object

SYNOPSIS
       #include <openssl/ssl.h>

       int SSL_get_fd(const SSL *ssl);
       int SSL_get_rfd(const SSL *ssl);
       int SSL_get_wfd(const SSL *ssl);

DESCRIPTION
       SSL_get_fd() returns the file descriptor which is linked to ssl.
       SSL_get_rfd() and SSL_get_wfd() return the file descriptors for the
       read or the write channel, which can be different. If the read and the
       write channel are different, SSL_get_fd() will return the file
       descriptor of the read channel.

RETURN VALUES
       The following return values can occur:

       -1  The operation failed, because the underlying BIO is not of the
           correct type (suitable for file descriptors).

       >=0 The file descriptor linked to ssl.

SEE ALSO
       SSL_set_fd(3), ssl(3), bio(3)

1.0.1e                            2013-02-11                   SSL_get_fd(3SSL)
SSL_set_fd(3SSL)                     OpenSSL                    SSL_set_fd(3SSL)

RETURN VALUES
       The following return values can occur:

       0   The operation failed. Check the error stack to find out why.

       1   The operation succeeded.

SEE ALSO
       SSL_get_fd(3), SSL_set_bio(3), SSL_connect(3), SSL_accept(3),
       SSL_shutdown(3), ssl(3), bio(3)

1.0.1e                            2013-02-11                   SSL_set_fd(3SSL)
https://www.unix.com/man-page/bsd/3ssl/ssl_get_fd
Disclaimer: I am no developer; I have very little idea what I am doing, so use these suggestions at your own risk .. and I do welcome suggestions for improvements.
This being said, here we go:
- in Net/src, add the following into DNS.cpp and IpAddress.cpp:
#define _WIN32_WINNT 0x501
#include <ws2tcpip.h>
- in Data/MySQL/src, add the following into all files:
#define _WIN32_WINNT 0x0501
#include <winsock2.h>
- in Net/src/NetworkInterface.cpp, fix the two bugs from and add the following:
#define _WIN32_WINNT 0x501
#include <winsock2.h>
#include <windef.h>
#include <iptypes.h>
#include <iphlpapi.h>
- start MSYS, go to the source folder and run:
./configure --no-tests --no-samples
make -s
- it will fail eventually, because strip is looking for some <name>.exe.exe, instead of <name>.exe - find the <name>.exe from the log, make a copy called <name>.exe.exe, possibly strip <name>.exe and then run make -s again.
- repeat the above until it builds everything.
- run make install to get the files into <MSYS-root>/local
- in file <MSYS-root>/local/include/FPEnvironment.h, change
#include "Poco/FPEnvironment_WIN32.h"
on line 55 (after #elif defined(POCO_OS_FAMILY_WINDOWS)) into
#include "Poco/FPEnvironment_DUMMY.h"
I am not sure whether this can do some harm, but neither FPEnvironment_WIN32.h nor FPEnvironment_C99.h worked for me.
With this setup, I managed to build and install Poco and build my own code that uses the MySQL interface.
However, I did not manage to build the whole test suite: when I try to do this, the first couple of test suites builds OK (except for the .exe.exe issue), but the Net test suite fails with "undefined reference to `Poco::Error::getMessage(unsigned long)' from NetworkInterface.cpp" ...
Good luck
Michal
http://pocoproject.org/forum/viewtopic.php?f=12&t=5844
Minimally Sufficient Pandas
In this article, I will offer an opinionated perspective on how to best use the Pandas library for data analysis. My objective is to argue that only a small subset of the library is sufficient to complete nearly all of the data analysis tasks that one will encounter. This minimally sufficient subset of the library will benefit both beginners and professionals using Pandas. Not everyone will agree with the suggestions I lay forward, but they are how I teach and how I use the library myself. If you disagree or have any of your own suggestions, please leave them in the comments below.
By the end of this article you will:
- Know why limiting Pandas to a small subset will keep your focus on the actual data analysis and not on the syntax
- Have specific guidelines for taking a single approach to completing a variety of common data analysis tasks with Pandas
Keep up with all my material
- Watch my YouTube videos
- Follow me on Twitter
- Take a class with me at Dunder Data
Pandas is Powerful but Difficult to use
Pandas is the most popular Python library for doing data analysis. While it does offer quite a lot of functionality, it is also regarded as a fairly difficult library to learn well. Some reasons for this include:
- There are often multiple ways to complete common tasks
- There are over 240 DataFrame attributes and methods
- There are several methods that are aliases (reference the same exact underlying code) of each other
- There are several methods that have nearly identical functionality
- There are many tutorials written by different people that show different ways to do the same thing
- There is no official document with guidelines on how to idiomatically complete common tasks
- The official documentation, itself contains non-idiomatic code
What is Minimally Sufficient Pandas?
The whole point of a data analysis library should be to provide you with the tools so that you can focus on the data analysis. While Pandas does provide you with the right tools, it doesn’t do so in a way that allows you to focus on the analysis. Instead, users are forced to tread through the complex and overabundant syntax.
I endorse the following as my definition for Minimally Sufficient Pandas.
- It is a small subset of the library that is sufficient to accomplish nearly everything that it has to offer.
- It allows you to focus on doing data analysis and not the syntax
With this minimally sufficient subset of Pandas:
- Your code will be simple, explicit, straightforward, and boring
- You will choose one obvious way to accomplish a task
- You will use this obvious way every single time
- You won’t have to retain as many commands in working memory
- Your code will be easier to understand by others and by you
Standardizing common tasks
Pandas often gives its users multiple approaches to complete the same task. This means that your approach may use different syntax than someone else’s. This can occur even with the most rudimentary tasks such as selecting a single column of data. Using multiple different syntaxes might not lead to many issues during a single analysis done by a single person. However, it can cause havoc when a team of people are working through a long analysis using all different approaches to Pandas.
By not having a standard approach to common tasks, a larger cognitive load is placed on the developer, who must remember all the slight differences to each approach. Having more than a single way to complete each common task is asking to introduce errors and inefficiencies.
Avalanche of Stack Overflow Answers
It is not uncommon to search for Pandas answers on Stack Overflow only to be met with several competing and varied results for common tasks. This particular question about renaming columns in a DataFrame has 28 answers. Treading through this deluge of information makes it difficult for those wanting to know the one idiomatic way to complete a task that they can commit to memory.
No Tricks
Eliminating much of the library will come with some (good) limitations. Knowing many obscure Pandas tricks might impress your friends, but it doesn’t usually lead to good code. It can lead to long lines of code that are difficult to understand and may be harder to debug.
Specific Pandas Examples
We will now cover a series of specific examples within Pandas where multiple approaches exist to complete a task. I will compare and contrast the different approaches and give guidance on which one I prefer. Listed below are the topics I cover.
- Selecting a single column of data
- The deprecated ix indexer
- Selection with at and iat
- read_csv vs read_table duplication
- isna vs isnull and notna vs notnull
- Arithmetic and Comparison Operators and their Corresponding Methods
- Builtin Python functions vs Pandas methods with the same name
- Standardizing groupby aggregation
- Handling a MultiIndex
- The similarity between groupby, pivot_table and crosstab
- pivot vs pivot_table
- The similarity between melt and stack
- The similarity between pivot and unstack
Minimally Sufficient Guiding Principle
The concrete examples were all derived by the following principle:
If a method does not provide any additional functionality over another method (i.e. its functionality is a subset of another) then it shouldn’t be used. Methods should only be considered if they have some additional, unique functionality.
Selecting a Single Column of Data
Selecting a single column of data from a Pandas DataFrame is just about the simplest task you can do and unfortunately, it is here where we first encounter the multiple-choice option that Pandas presents to its users.
You may select a single column as a Series with either the brackets or dot notation. Let’s read in a small, trivial DataFrame and select a column using both methods.
>>> import pandas as pd
>>> df = pd.read_csv('data/sample_data.csv', index_col=0)
>>> df
Selection with the brackets
Placing a column name in the brackets appended to a DataFrame selects a single column of a DataFrame as a Series.
>>> df['state']
name
Jane NY
Niko TX
Aaron FL
Penelope AL
Dean AK
Christina TX
Cornelia TX
Name: state, dtype: object
Selection with dot notation
Alternatively, you may select a single column with dot notation. Simply, place the name of the column after the dot operator. The output is the exact same as above.
>>> df.state
Issues with the dot notation
There are three issues with using dot notation. It doesn’t work in the following situations:
- When there are spaces in the column name
- When the column name is the same as a DataFrame method
- When the column name is a variable
There are spaces in the column name
If the desired column name has spaces in it, you won’t be able to select it with the dot notation. Python uses spaces to separate names and operators and hence will not treat a column name with a space as correct syntax. Let’s create this error.
df.favorite food
You can only use the brackets to select columns with spaces.
df['favorite food']
The column name is the same as a DataFrame method
When a column name and a DataFrame method collide, Pandas will always reference the method and not the column name. For instance, the column name count is a method and will be referenced when using dot notation. This actually doesn’t produce an error as Python allows you to reference methods without calling them. Let’s reference this method now.
df.count
The output is going to be very confusing if you haven’t encountered it before. Notice at the top it states ‘bound method DataFrame.count of’. Python is telling us that this is a method of some DataFrame object. Instead of using the method name, it outputs its official string representation. Many people believe that they’ve produced some kind of analysis with this result. This isn’t true and almost nothing has happened. A reference to the method that outputs the object’s representation has been produced. That is all.
Regardless, it’s clear that using dot notation did not select a single column of the DataFrame as a Series. Again, you must use the brackets when selecting a column with the same name as a DataFrame method.
df['count']
The column name is a variable
Let’s say you are using a variable to hold a reference to the column name you would like to select. In this case, the only possibility again is to use the brackets. Below is a simple example where we assign the value of a column name to a variable and then pass this variable to the brackets.
>>> col = 'state'
>>> df[col]
The brackets are a superset of dot notation
The brackets are a strict superset of the dot notation in terms of functionality for selecting a single column. There are three cases which are not handled by the dot notation.
Lots of Pandas is written with the dot notation. Why?
Many tutorials make use of the dot notation to select a single column of data. Why is this done when the brackets seem to be clearly superior? It might be because the official documentation contains plenty of examples that use it. It also uses three fewer characters which entice the very laziest amongst us.
Guidance: Use the brackets for selecting a column of data
The dot notation provides no additional functionality over the brackets and does not work in all situations. Therefore, I never use it. Its single advantage is three fewer keystrokes.
I suggest using only the brackets for selecting a single column of data. Having just a single approach to this very common task will make your Pandas code much more consistent.
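As a quick sketch (using a small made-up DataFrame, not the article's data), the brackets handle all three cases that break dot notation:

```python
import pandas as pd

# Hypothetical DataFrame illustrating the three problem cases above
df = pd.DataFrame({'favorite food': ['pizza', 'sushi'],
                   'count': [3, 5]})

spaced = df['favorite food']   # column name contains a space
clashing = df['count']         # name collides with the DataFrame.count method
col = 'count'
from_var = df[col]             # column name held in a variable
```

All three selections return a Series, which `df.favorite food`, `df.count`, and `df.col` cannot do.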
The deprecated ix indexer - never use it
Pandas allows you to select rows by either label or integer location. This flexible dual selection capability is a great cause of confusion for beginners. The ix indexer was created in the early days of Pandas to select rows and columns by both label and integer location. This turned out to be quite ambiguous as Pandas row and column names can be both integers and strings.
To make selections explicit, the loc and iloc indexers were made available. The loc indexer selects only by label while the iloc indexer selects only by integer location. Although the ix indexer was versatile, it has been deprecated in favor of the loc and iloc indexers.
Guidance: Every trace of ix should be removed and replaced with loc or iloc
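A minimal sketch of that migration, with a made-up DataFrame (the names and values are illustrative only):

```python
import pandas as pd

df = pd.DataFrame({'state': ['NY', 'TX', 'FL']},
                  index=['Jane', 'Niko', 'Aaron'])

# label-based selection replaces df.ix['Niko', 'state']
by_label = df.loc['Niko', 'state']

# integer-location selection replaces df.ix[1, 0]
by_position = df.iloc[1, 0]
```

Both selections return the same cell; the only difference is that the intent (label vs position) is now explicit.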
Selection with at and iat
Two additional indexers, at and iat, exist that select a single cell of a DataFrame. These provide a slight performance advantage over their analogous loc and iloc indexers. But, they introduce the additional burden of having to remember what they do. Also, for most data analyses, the increase in performance isn’t useful unless it’s being done at scale. And if performance truly is an issue, then taking your data out of a DataFrame and into a NumPy array will give you a large performance gain.
Performance comparison: iloc vs iat vs NumPy
Let’s compare the performance of selecting a single cell with iloc, iat and a NumPy array. Here we create a NumPy array with 100k rows and 5 columns containing random data. We then create a DataFrame out of it and make the selections.
>>> import numpy as np
>>> a = np.random.rand(10 ** 5, 5)
>>> df1 = pd.DataFrame(a)
>>> row = 50000
>>> col = 3
>>> %timeit df1.iloc[row, col]
13.8 µs ± 3.36 µs per loop
>>> %timeit df1.iat[row, col]
7.36 µs ± 927 ns per loop
>>> %timeit a[row, col]
232 ns ± 8.72 ns per loop
While iat is a little less than twice as fast as iloc, selection with a NumPy array is about 60x as fast. So, if you really had an application that had performance requirements, you should be using NumPy directly and not Pandas.
Guidance: Use NumPy arrays, not at or iat, if your application relies on performance for selecting a single cell of data.
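A sketch of that guidance with illustrative data: pull the underlying array out of the DataFrame once (via the .values attribute), then index the array directly inside any hot loop.

```python
import numpy as np
import pandas as pd

# Illustrative data: 5 rows, 3 columns of known values
df = pd.DataFrame(np.arange(15).reshape(5, 3))

# Extract the underlying NumPy array once...
a = df.values

# ...then index it directly instead of calling iloc/iat repeatedly
cell = a[4, 2]
```

The one-time extraction cost is quickly repaid once the same array is indexed many times.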
Method Duplication
There are multiple methods in Pandas that do the exact same thing. Whenever two methods share the same exact underlying functionality, we say that they are aliases of each other. Having duplication in a library is completely unnecessary, pollutes the namespace and forces analysts to remember one more bit of information about a library.
This next section covers several instances of duplication along with other instances of methods that are very similar to one another.
read_csv vs read_table duplication
One example of duplication is with the read_csv and read_table functions. They both do the same exact thing, read in data from a text file. The only difference is that read_csv defaults the delimiter to a comma, while read_table uses tab as its default.
Let’s verify that read_csv and read_table are capable of producing the same results. Here we use a sample of the public College Scoreboard dataset. The equals method verifies whether two DataFrames have the exact same values.
>>> college = pd.read_csv('data/college.csv')
>>> college.head()
>>> college2 = pd.read_table('data/college.csv', delimiter=',')
>>> college.equals(college2)
True
read_table is getting deprecated
I made a post in the Pandas GitHub repo suggesting a few functions and methods that I'd like to see deprecated. The read_table function is being deprecated and should never be used.
Guidance: Only use read_csv to read in delimited text files.
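As a quick sketch of how read_csv covers the tab-delimited case that read_table handled (the sample data here is invented), pass sep='\t':

```python
import pandas as pd
from io import StringIO

# A tiny tab-delimited sample built in memory for illustration
data = "name\tscore\nAlice\t90\nBob\t85\n"

# read_csv with sep='\t' does everything read_table did
df = pd.read_csv(StringIO(data), sep='\t')

assert list(df.columns) == ['name', 'score']
assert df.shape == (2, 2)
```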
isna vs isnull and notna vs notnull
The isna and isnull methods both determine whether each value in the DataFrame is missing. The result will always be a DataFrame (or Series) of all boolean values.
These methods are exactly the same; we say that one is an alias of the other. There is no need for both of them in the library. The isna method was added more recently because the characters na are found in other missing-value methods such as dropna and fillna. Confusingly, Pandas uses NaN, None, and NaT as missing value representations and not NA.
notna and notnull are aliases of each other as well and simply return the opposite of isna. There's no need for both of them.
Let's verify that isna and isnull are aliases.
>>> college_isna = college.isna()
>>> college_isnull = college.isnull()
>>> college_isna.equals(college_isnull)
True
I only use isna and notna
I use the methods that end in na to match the names of the other missing-value methods, dropna and fillna.
You can also avoid ever using notna, since Pandas provides the inversion operator, ~, to invert boolean DataFrames.
Guidance: Use only isna and notna.
Arithmetic and Comparison Operators and their Corresponding Methods
All arithmetic operators have corresponding methods that function similarly.
- + : add
- - : sub and subtract
- * : mul and multiply
- / : div, divide and truediv
- ** : pow
- // : floordiv
- % : mod
All the comparison operators also have corresponding methods.
- > : gt
- < : lt
- >= : ge
- <= : le
- == : eq
- != : ne
Let's select the undergraduate population column, ugds, as a Series, add 100 to it, and verify that both the plus operator and its corresponding method, add, give the same result.
>>> ugds = college['ugds']
>>> ugds_operator = ugds + 100
>>> ugds_method = ugds.add(100)
>>> ugds_operator.equals(ugds_method)
True
Calculating the z-scores of each school
Let's do a slightly more complex example. Below, we set the index to be the institution name and then select both of the SAT columns. We remove schools that do not provide these scores with dropna.
>>> college_idx = college.set_index('instnm')
>>> sats = college_idx[['satmtmid', 'satvrmid']].dropna()
>>> sats.head()
Let’s say we are interested in finding the z-score for each college’s SAT score. To calculate this, we would need to subtract the mean and divide by the standard deviation. Let’s first calculate the mean and standard deviation of each column.
>>> mean = sats.mean()
>>> mean
satmtmid 530.958615
satvrmid 522.775338
dtype: float64
>>> std = sats.std()
>>> std
satmtmid 73.645153
satvrmid 68.591051
dtype: float64
Let’s now use the arithmetic operators to complete the calculation.
>>> zscore_operator = (sats - mean) / std
>>> zscore_operator.head()
Let’s repeat this with their corresponding methods and verify equality.
>>> zscore_methods = sats.sub(mean).div(std)
>>> zscore_operator.equals(zscore_methods)
True
An actual need for the method
So far we haven't seen an explicit need for the methods over the operators. Let's see an example where we absolutely need the method to complete the task. The college dataset contains 9 consecutive columns holding the relative frequency of the undergraduate population by race. The first column is ugds_white and the last is ugds_unkn. Let's select these columns now into their own DataFrame.
>>> college_race = college_idx.loc[:, 'ugds_white':'ugds_unkn']
>>> college_race.head()
Let's say we are interested in the raw count of the student population by race per school. We need to multiply the total undergraduate population by each column. Let's select the ugds column as a Series.
>>> ugds = college_idx['ugds']
>>> ugds.head()
instnm
Alabama A & M University 4206.0
University of Alabama at Birmingham 11383.0
Amridge University 291.0
University of Alabama in Huntsville 5451.0
Alabama State University 4811.0
Name: ugds, dtype: float64
We then multiply the college_race DataFrame by this Series. Intuitively, this seems like it should work, but it does not. Instead, it returns an enormous DataFrame with 7,544 columns.
>>> df_attempt = college_race * ugds
>>> df_attempt.head()
>>> df_attempt.shape
(7535, 7544)
Automatic alignment on the index and/or columns
Whenever an operation happens between two Pandas objects, an alignment always takes place between the index and/or columns of the two objects. In the above operation, we multiplied the college_race DataFrame and the ugds Series together. Pandas automatically (implicitly) aligned the columns of college_race to the index values of ugds.
None of the college_race columns match the index values of ugds. Pandas does the alignment by performing an outer join, keeping all values that match as well as those that do not. This returns a ridiculous-looking DataFrame of all missing values. Scroll all the way to the right to view the original column names of the college_race DataFrame.
Change the direction of the alignment with a method
All operators only work in a single way. We cannot change how the multiplication operator, *, works. Methods, on the other hand, have parameters that we can use to control how the operation takes place.
Use the axis parameter of the mul method
All the methods that correspond to the operators listed above have an axis parameter that allows us to change the direction of the alignment. Instead of aligning the columns of a DataFrame to the index of a Series, we can align the index of a DataFrame to the index of a Series. Let's do that now so that we can find the answer to our problem from above.
>>> df_correct = college_race.mul(ugds, axis='index').round(0)
>>> df_correct.head()
By default, the axis parameter is set to 'columns'. We changed it to 'index' so that a proper alignment took place.
Guidance: Only use the arithmetic and comparison methods when absolutely necessary, otherwise use the operators
The arithmetic and comparison operators are more common and should be attempted first. If you come across a case where the operator does not complete the task, then use the method.
Builtin Python functions vs Pandas methods with the same name
There are a few DataFrame/Series methods that return the same result as a builtin Python function with the same name. They are:
sum
min
max
abs
Let's verify that they give the same result by testing them out on a single column of data. We begin by selecting the non-missing values of the undergraduate student population column, ugds.
>>> ugds = college['ugds'].dropna()
>>> ugds.head()
0 4206.0
1 11383.0
2 291.0
3 5451.0
4 4811.0
Name: ugds, dtype: float64
Verifying sum
>>> sum(ugds)
16200904.0
>>> ugds.sum()
16200904.0
Verifying max
>>> max(ugds)
151558.0
>>> ugds.max()
151558.0
Verifying min
>>> min(ugds)
0.0
>>> ugds.min()
0.0
Verifying abs
>>> abs(ugds).head()
0 4206.0
1 11383.0
2 291.0
3 5451.0
4 4811.0
Name: ugds, dtype: float64
>>> ugds.abs().head()
0 4206.0
1 11383.0
2 291.0
3 5451.0
4 4811.0
Name: ugds, dtype: float64
Time the performance of each
Let’s see if there is a performance difference between each method.
sum performance
>>> %timeit sum(ugds)
644 µs ± 80.3 µs per loop
>>> %timeit -n 5 ugds.sum()
164 µs ± 81 µs per loop
max performance
>>> %timeit -n 5 max(ugds)
717 µs ± 46.5 µs per loop
>>> %timeit -n 5 ugds.max()
172 µs ± 81.9 µs per loop
min performance
>>> %timeit -n 5 min(ugds)
705 µs ± 33.6 µs per loop
>>> %timeit -n 5 ugds.min()
151 µs ± 64 µs per loop
abs performance
>>> %timeit -n 5 abs(ugds)
138 µs ± 32.6 µs per loop
>>> %timeit -n 5 ugds.abs()
128 µs ± 12.2 µs per loop
Performance discrepancy for sum, max, and min
There are clear performance discrepancies for sum, max, and min. Completely different code is executed when these builtin Python functions are used as opposed to when the Pandas method is called. Calling sum(ugds) essentially creates a Python for loop to iterate through each value one at a time. On the other hand, calling ugds.sum() executes the internal Pandas sum method, which is written in C and much faster than iterating with a Python for loop.
There is a lot of overhead in Pandas, which is why the difference is not greater. If we instead create a NumPy array and redo the timings, we can see an enormous difference, with the NumPy array sum outperforming the Python sum function by a factor of 200 on an array of 10,000 floats.
No performance difference for abs
Notice that there is no performance difference when calling the abs function versus the abs Pandas method. This is because the exact same underlying code is being called, due to how Python chose to design the abs function: it allows developers to provide a custom method to be executed whenever abs is called. Thus, when you write abs(ugds), you are really calling ugds.abs(). They are literally the same.
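This hook is the __abs__ special method. A small sketch with an invented class shows the mechanism, and the same check on a Series confirms that abs(s) and s.abs() agree:

```python
import pandas as pd

# Any class can hook the builtin abs() by defining __abs__;
# pd.Series does exactly this, so abs(s) runs the same code as s.abs().
class Temperature:
    def __init__(self, degrees):
        self.degrees = degrees

    def __abs__(self):                  # called by the builtin abs()
        return Temperature(abs(self.degrees))

t = Temperature(-40)
assert abs(t).degrees == 40

s = pd.Series([-1, 2, -3])
assert abs(s).equals(s.abs())
```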
Guidance: Use the Pandas method over any built-in Python function with the same name.
Standardizing groupby Aggregation
There are a number of syntaxes that get used for the groupby method when performing an aggregation. I suggest choosing a single syntax so that all of your code looks the same.
The three components of groupby aggregation
Typically, when calling the groupby method, you will be performing an aggregation. This is by far the most common scenario. When you are performing an aggregation during a groupby, there will always be three components.
- Grouping column — Unique values form independent groups
- Aggregating column — Column whose values will get aggregated. Usually numeric
- Aggregating function — How the values will get aggregated (sum, min, max, mean, median, etc…)
My syntax of choice for groupby
There are a few different syntaxes that Pandas allows for performing a groupby aggregation. The following is the one I use.
df.groupby('grouping column').agg({'aggregating column': 'aggregating function'})
A buffet of groupby syntaxes for finding the maximum math SAT score per state
Below, we will cover several different syntaxes that return the same (or similar) result for finding the maximum SAT score per state. Let's look at the data we will be using first.
>>> college[['stabbr', 'satmtmid', 'satvrmid', 'ugds']].head()
Method 1: Here is my preferred way of doing the groupby aggregation. It handles complex cases.
>>> college.groupby('stabbr').agg({'satmtmid': 'max'}).head()
Method 2a: The aggregating column can be selected within brackets following the call to groupby. Notice that a Series is returned here, not a DataFrame.
>>> college.groupby('stabbr')['satmtmid'].agg('max').head()
stabbr
AK 503.0
AL 590.0
AR 600.0
AS NaN
AZ 580.0
Name: satmtmid, dtype: float64
Method 2b: The aggregate method is an alias for agg and can also be used. This returns the same Series as above.
>>> college.groupby('stabbr')['satmtmid'].aggregate('max').head()
Method 3: You can call the aggregating method directly without calling agg. This returns the same Series as above.
>>> college.groupby('stabbr')['satmtmid'].max().head()
Major benefits of preferred syntax
The reason I choose this syntax is that it can handle more complex grouping problems. For instance, if we wanted to find the max and min of the math and verbal SAT scores along with the average undergrad population per state, we would do the following.
>>> college.groupby('stabbr').agg({'satmtmid': ['min', 'max'],
                                   'satvrmid': ['min', 'max'],
                                   'ugds': 'mean'}).round(0).head(10)
This problem isn't solvable using the other syntaxes.
Guidance — Use df.groupby('grouping column').agg({'aggregating column': 'aggregating function'}) as your primary syntax of choice
Handling a MultiIndex
A MultiIndex or multi-level index is a cumbersome addition to a Pandas DataFrame that occasionally makes data easier to view, but often makes it more difficult to manipulate. You usually encounter a MultiIndex after a call to groupby when using multiple grouping columns or multiple aggregating columns.
Let’s create a result similar to the last groupby from above, except this time group by both state and religious affiliation.
>>> agg_dict = {'satmtmid': ['min', 'max'],
'satvrmid': ['min', 'max'],
'ugds': 'mean'}
>>> df = college.groupby(['stabbr', 'relaffil']).agg(agg_dict)
>>> df.head(10).round(0)
A MultiIndex in both the index and columns
Both the rows and columns have a MultiIndex with two levels.
Selection and further processing is difficult with a MultiIndex
There is little extra functionality that a MultiIndex adds to your DataFrame. They have different syntax for making subset selections and are more difficult to use with other methods. If you are an expert Pandas user, you can get some performance gains when making subset selections, though I typically do not like the added complexity that they come with. I suggest working with DataFrames that have a simpler, single-level index.
Convert to a single level index — Rename the columns and reset the index
We can convert this DataFrame so that only single-level indexes remain. There is no direct way to rename columns of a DataFrame during a groupby (yes, something so simple is impossible with pandas), so we must overwrite them manually. Let’s do that now.
>>> df.columns = ['min satmtmid', 'max satmtmid', 'min satvrmid',
'max satvrmid', 'mean ugds']
>>> df.head()
From here, we can use the reset_index method to make each index level an actual column.
>>> df.reset_index().head()
Guidance: Avoid using a MultiIndex. Flatten it after a call to groupby by renaming columns and resetting the index.
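When there are many aggregated columns, typing every name by hand gets tedious. A sketch of a more general flattening (the small DataFrame here is a made-up stand-in for the college data) joins each (column, aggfunc) pair programmatically:

```python
import pandas as pd

# Small stand-in for the college data (invented numbers)
df = pd.DataFrame({'stabbr': ['AL', 'AL', 'AK', 'AK'],
                   'relaffil': [0, 1, 0, 1],
                   'satmtmid': [500., 520., 510., 530.]})

res = df.groupby(['stabbr', 'relaffil']).agg({'satmtmid': ['min', 'max']})

# Each column label is a tuple like ('satmtmid', 'min');
# join the pieces instead of typing every new name manually
res.columns = ['_'.join(col) for col in res.columns]
res = res.reset_index()

assert list(res.columns) == ['stabbr', 'relaffil',
                             'satmtmid_min', 'satmtmid_max']
```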
The similarity between groupby, pivot_table, and crosstab
Some users might be surprised to find that a groupby (when aggregating), pivot_table, and pd.crosstab are essentially identical. However, there are specific use cases for each, so all still meet the threshold for being included in a minimally sufficient subset of Pandas.
The equivalency of groupby aggregation and pivot_table
Performing an aggregation with groupby is essentially equivalent to using the pivot_table method. Both methods return the exact same data, but in a different shape. Let's see a simple example that proves this is the case. We will use a new dataset containing employee demographic information from the city of Houston.
>>> emp = pd.read_csv('data/employee.csv')
>>> emp.head()
Let's use a groupby to find the average salary for each department by gender.
>>> emp.groupby(['dept', 'gender']).agg({'salary':'mean'}).round(-3)
We can duplicate this data by using a pivot_table.
>>> emp.pivot_table(index='dept', columns='gender',
values='salary', aggfunc='mean').round(-3)
Notice that the values are exactly the same. The only difference is that the gender column has been pivoted so its unique values are now the column names. The same three components of a groupby are found in a pivot_table: the grouping column(s) are passed to the index and columns parameters, the aggregating column is passed to the values parameter, and the aggregating function is passed to the aggfunc parameter.
It's actually possible to get an exact duplication of both the data and the shape by passing both grouping columns as a list to the index parameter.
>>> emp.pivot_table(index=['dept','gender'],
values='salary', aggfunc='mean').round(-3)
Typically, pivot_table is used with two grouping columns, one as the index and the other as the columns. But it can also be used with a single grouping column. The following produces an exact duplication of a single-grouping-column groupby.
>>> df1 = emp.groupby('dept').agg({'salary':'mean'}).round(0)
>>> df2 = emp.pivot_table(index='dept', values='salary',
aggfunc='mean').round(0)
>>> df1.equals(df2)
True
Guidance: use pivot_table when comparing groups
I really like to use pivot tables to compare values across groups, and a groupby when I want to continue an analysis. From above, it is easier to compare male to female salaries when using the output of pivot_table. The result is easier to digest as a human and is the type of data you will see in an article or blog post. I view pivot tables as a finished product.
The result of a groupby is going to be in tidy form, which lends itself to easier subsequent analysis, but isn't as interpretable.
The equivalency of pivot_table and pd.crosstab
The pivot_table method and the crosstab function can both produce the exact same results with the same shape. They share the parameters index, columns, values, and aggfunc. The major difference on the surface is that crosstab is a function and not a DataFrame method. This forces you to use columns as Series and not string names for the parameters. Let's see an example taking the average salary by gender and race.
>>> emp.pivot_table(index='gender', columns='race',
values='salary', aggfunc='mean').round(-3)
The crosstab function produces the exact same result with the following syntax.
>>> pd.crosstab(index=emp['gender'], columns=emp['race'],
values=emp['salary'], aggfunc='mean').round(-3)
crosstab was built for counting
A crosstabulation (also known as a contingency table) shows the frequency between two variables. This is the default functionality for crosstab if given two columns. Let's show this by counting the frequency of all race and gender combinations. Notice that there is no need to provide an aggfunc.
>>> pd.crosstab(index=emp['gender'], columns=emp['race'])
The pivot_table method can duplicate this, but you must use the size aggregation function.
>>> emp.pivot_table(index='gender', columns='race', aggfunc='size')
Relative frequency — the unique functionality with crosstab
At this point, it appears that the crosstab function is just a subset of pivot_table. But there is a single unique piece of functionality it possesses that makes it potentially worthwhile to add to your minimally sufficient subset: the ability to calculate relative frequencies across groups with the normalize parameter. For instance, if we wanted the percentage breakdown by gender across each race, we can set the normalize parameter to 'columns'.
>>> pd.crosstab(index=emp['gender'], columns=emp['race'],
normalize='columns').round(2)
You also have the option of normalizing over the rows using the string ‘index’ or over the entire DataFrame with the string ‘all’ as seen below.
>>> pd.crosstab(index=emp['gender'], columns=emp['race'],
normalize='all').round(3)
Guidance: Only use crosstab when finding relative frequency
All other situations where the crosstab function may be used can be handled with pivot_table. It is possible to manually calculate the relative frequencies after running pivot_table, so crosstab isn't all that necessary. But it does do this calculation in a single readable line of code, so I will continue to use it.
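A sketch of that manual calculation on a toy dataset (invented values standing in for the employee data), confirming it matches crosstab with normalize='columns':

```python
import pandas as pd

# Toy data standing in for the employee dataset
df = pd.DataFrame({'gender': ['F', 'F', 'M', 'M', 'M'],
                   'race': ['A', 'B', 'A', 'A', 'B']})

ct = pd.crosstab(index=df['gender'], columns=df['race'],
                 normalize='columns')

# Manual pivot_table equivalent: count, then divide by the column totals
counts = df.pivot_table(index='gender', columns='race', aggfunc='size')
manual = counts / counts.sum()

assert (ct - manual).abs().max().max() < 1e-12
```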
pivot vs pivot_table
There exists a pivot method that is nearly useless and can basically be ignored. It functions similarly to pivot_table but does not do any aggregation. It has only three parameters, index, columns, and values, all of which are present in pivot_table. Let's see an example with a new, simple dataset.
>>> df = pd.read_csv('data/state_fruit.csv')
>>> df
Let's use the pivot method to reshape this data so that the fruit names become the columns and the weight becomes the values.
>>> df.pivot(index='state', columns='fruit', values='weight')
Using the pivot method reshapes the data without aggregating or doing anything else to it. pivot_table, on the other hand, requires that you do an aggregation. In this case, there is only one value per intersection of state and fruit, so many aggregation functions will return the same value. Let's recreate this exact same table with the max aggregation function.
>>> df.pivot_table(index='state', columns='fruit',
values='weight', aggfunc='max')
Issues with pivot
There are a couple of major issues with the pivot method. First, it can only handle the case when both index and columns are set to a single column. If you want to keep multiple columns in the index, then you cannot use pivot. Also, if any combination of index and columns appears more than once, you will get an error, as pivot does not perform an aggregation. Let's produce this particular error with a dataset that is similar to the above but adds two additional rows.
>>> df2 = pd.read_csv('data/state_fruit2.csv')
>>> df2
Attempting to pivot this will not work, as both Texas and Florida now have multiple rows for Oranges.
>>> df2.pivot(index='state', columns='fruit', values='weight')
ValueError: Index contains duplicate entries, cannot reshape
If you would like to reshape this data, you will need to decide on how you would like to aggregate the values.
Guidance — Consider using only pivot_table and not pivot
pivot_table can accomplish all of what pivot can do. In the case that you do not need to perform an aggregation, you must still provide an aggregation function.
The similarity between melt and stack
The melt and stack methods reshape data in the same exact manner. The major difference is that the melt method does not work with data in the index, while stack does. It's easier to describe how they work with an example. Let's begin by reading in a small dataset of arrival delays of airlines at a few airports.
>>> ad = pd.read_csv('data/airline_delay.csv')
>>> ad
Let's reshape this data so that we have three columns: the airline, the airport, and the arrival delay. We will begin with the melt method, which has two main parameters: id_vars, the column names that are to remain vertical (and not be reshaped), and value_vars, the column names to be reshaped into a single column.
>>> ad.melt(id_vars='airline', value_vars=['ATL', 'DEN', 'DFW'])
The stack method can produce nearly identical data, but it places the reshaped column in the index. It also preserves the current index. To recreate the data above, we first need to set the index to the column(s) that will not be reshaped. Let's do that now.
>>> ad_idx = ad.set_index('airline')
>>> ad_idx
Now, we can use stack without setting any parameters to get nearly the same result as melt.
>>> ad_idx.stack()
airline
AA ATL 4
DEN 9
DFW 5
AS ATL 6
DEN -3
DFW -5
B6 ATL 2
DEN 12
DFW 4
DL ATL 0
DEN -3
DFW 10
dtype: int64
This returns a Series with a MultiIndex of two levels. The data values are the same, but in a different order. Calling reset_index will get us back to a single-index DataFrame.
>>> ad_idx.stack().reset_index()
Renaming columns with melt
I prefer melt, as you can rename columns directly and avoid dealing with a MultiIndex. The var_name and value_name parameters are provided to melt to rename the reshaped columns. It's also unnecessary to list out all of the columns you are melting, because all the columns not found in id_vars will be reshaped.
>>> ad.melt(id_vars='airline', var_name='airport',
value_name='arrival delay')
Guidance — Use melt over stack because it allows you to rename columns and it avoids a MultiIndex
The Similarity between pivot and unstack
We've already seen how the pivot method works. unstack is its analog that works with values in the index. Let's look at the simple DataFrame that we used with pivot.
>>> df = pd.read_csv('data/state_fruit.csv')
>>> df
The unstack method pivots values in the index. We must set the index to contain the columns that we would have used as the index and columns parameters in the pivot method. Let's do that now.
>>> df_idx = df.set_index(['state', 'fruit'])
>>> df_idx
Now we can use unstack without any parameters, which will pivot the index level closest to the actual data (the fruit column) so that its unique values become the new column names.
>>> df_idx.unstack()
The result is nearly identical to what was returned with the pivot method, except now we have a MultiIndex for the columns.
Guidance — Use pivot_table over unstack or pivot
Both pivot and unstack work similarly, but from above, pivot_table can handle all cases that pivot can, so I suggest using it over both of the others.
End of Specific Examples
The above specific examples cover many of the most common tasks within Pandas where there are multiple different approaches you can take. For each example, I argued for using a single approach. This is the approach that I use when doing a data analysis with Pandas and the approach I teach to my students.
The Zen of Python
Minimally Sufficient Python was inspired by the Zen of Python, a list of 19 aphorisms giving guidance for language usage by Tim Peters. The aphorism in particular worth noting is the following:
There should be one-- and preferably only one --obvious way to do it.
I find that the Pandas library disobeys this guidance more than any other library I have encountered. Minimally Sufficient Pandas is an attempt to steer users so that this principle is upheld.
Pandas Style Guide
While the specific examples above provide guidance for many tasks, it is not an exhaustive list that covers all corners of the library. You may also disagree with some of the guidance.
To help you use the library I recommend creating a “Pandas style guide”. This isn’t much different than coding style guides that are often created so that codebases look similar. This is something that greatly benefits teams of analysts that all use Pandas. Enforcing a Pandas style guide can help by:
- Having all common data analysis tasks use the same syntax
- Making it easier to put Pandas code in production
- Reducing the chance of landing on a Pandas bug. There are thousands of open issues. Using a smaller subset of the library will help avoid these.
Best of the API
The Pandas DataFrame API is enormous. There are dozens of methods that have little to no use or are aliases. Below is my list of all the DataFrame attributes and methods that I consider sufficient to complete nearly any task.
Attributes
- columns
- dtypes
- index
- shape
- T
- values
Aggregation Methods
- all
- any
- count
- describe
- idxmax
- idxmin
- max
- mean
- median
- min
- mode
- nunique
- sum
- std
- var
Non-Aggregation Statistical Methods
- abs
- clip
- corr
- cov
- cummax
- cummin
- cumprod
- cumsum
- diff
- nlargest
- nsmallest
- pct_change
- prod
- quantile
- rank
- round
Subset Selection
- head
- iloc
- loc
- tail
Missing Value Handling
- dropna
- fillna
- interpolate
- isna
- notna
Grouping
- expanding
- groupby
- pivot_table
- resample
- rolling
Joining Data
- append
- merge
Other
- asfreq
- astype
- copy
- drop
- drop_duplicates
- equals
- isin
- melt
- plot
- rename
- replace
- reset_index
- sample
- select_dtypes
- shift
- sort_index
- sort_values
- to_csv
- to_json
- to_sql
Functions
- pd.concat
- pd.crosstab
- pd.cut
- pd.qcut
- pd.read_csv
- pd.read_json
- pd.read_sql
- pd.to_datetime
- pd.to_timedelta
Conclusion
I feel strongly that Minimally Sufficient Pandas is a useful guide for those wanting to increase their effectiveness at data analysis without getting lost in the syntax.
Featured Replies in this Discussion
Well, from the basics: C is one of the most flexible languages, with almost every person in this forum looking at it almost every day. Well, some of us. Now, I wouldn't say C is based on commands, but more on what you tell it to do using variables, integers, floating points, etc. The #1 instruction you will give it is #include <the library you choose>, which is most likely stdio.h or stdlib.h. Abbreviations, of course. Lazy programmers! stdio means standard input/output, stdlib means standard library. There are a lot of libraries I can't get into at this moment. Also, with C, everything needs a function: printf(), main(), etc. You will learn more of that in class or books. Here's a basic setup.
#include <stdio.h>

int main(void)
{
    printf("hello sexy"); /* this is a comment. These make a program easier to
       read when it needs edits. The compiler ignores everything between the
       comment markers, so it won't be read as code. Every statement has to end
       with a terminating character; in C that is the semicolon. Otherwise the
       compiler would keep going and produce a syntax error, because it is a
       computer and doesn't know what to do. lol =) */
    return 0;
}
2009-09-16 14:08:40 8 Comments
I know that global variables in C sometimes have the extern keyword. What is an extern variable? What is the declaration like? What is its scope?
This is related to sharing variables across source files, but how does that work precisely? Where do I use extern?
@muusbolla 2019-06-24 02:23:37
A very short solution I use to allow a header file to contain either the extern reference or the actual implementation of an object. The file that actually contains the object just does #define GLOBAL_FOO_IMPLEMENTATION. Then when I add a new object to that file, it shows up there too without me having to copy and paste the definition.
I use this pattern across multiple files. So, in order to keep things as self-contained as possible, I just reuse the single GLOBAL macro in each header. My header looks like this:
@Ciro Santilli 新疆改造中心法轮功六四事件 2015-05-29 07:34:58
GCC ELF Linux implementation
main.c:
Compile and decompile:
Output contains:
The System V ABI Update ELF spec "Symbol Table" chapter explains:
which is basically the behavior the C standard gives to extern variables.
From now on, it is the job of the linker to make the final program, but the extern information has already been extracted from the source code into the object file.
Tested on GCC 4.8.
C++17 inline variables
In C++17, you might want to use inline variables instead of extern ones, as they are simple to use (can be defined just once, in a header) and more powerful (support constexpr). See: What does 'const static' mean in C and C++?
@Jonathan Leffler 2015-08-30 14:57:58
It's not my down-vote, so I don't know. However, I'll proffer an opinion. Although looking at the output of readelf or nm can be helpful, you've not explained the fundamentals of how to make use of extern, nor completed the first program with the actual definition. Your code doesn't even use notExtern. There's a nomenclature problem, too: although notExtern is defined here rather than declared with extern, it is an external variable that could be accessed by other source files if those translation units contained a suitable declaration (which would need extern int notExtern;!).
@Ciro Santilli 新疆改造中心法轮功六四事件 2015-09-02 14:52:12
@JonathanLeffler thanks for the feedback! The standard behavior and usage recommendations have already been covered in other answers, so I decided to show the implementation a bit, as that really helped me grasp what is going on. Not using notExtern was ugly; fixed it. About nomenclature, let me know if you have a better name. Of course that would not be a good name for an actual program, but I think it fits the didactic role well here.
@Jonathan Leffler 2015-09-02 14:56:29
As to names, what about global_def for the variable defined here, and extern_ref for the variable defined in some other module? Would they have suitably clear symmetry? You still end up with int extern_ref = 57; or something like that in the file where it is defined, so the name isn't quite ideal, but within the context of the single source file, it is a reasonable choice. Having extern int global_def; in a header isn't as much of a problem, it seems to me. Entirely up to you, of course.
@Lucian Nut 2019-01-09 20:50:02
Declaration won't allocate memory (the variable must be defined for memory allocation) but the definition will. This is just another simple view on the extern keyword since the other answers are really great.
@user50619 2018-10-09 10:01:40
With xc8 you have to be careful about declaring a variable as the same type in each file, as you could erroneously declare something an int in one file and a char, say, in another. This could lead to corruption of variables.
This problem was elegantly solved in a Microchip forum post some 15 years ago ("forum/all/showflat.php/Cat/0/Number/18766/an/0/page/0#18766"), but that link seems to no longer work. So I'll quickly try to explain it: make a file called global.h.
In it declare the following
In it declare the following
Now in the file main.c
This means in main.c the variable will be declared as an
unsigned char.
Now in other files simply including global.h will have it declared as an extern for that file.
But it will be correctly declared as an
unsigned char.
The old forum post probably explained this a bit more clearly. But this is a real potential gotcha when using a compiler that allows you to declare a variable in one file and then declare it extern as a different type in another. The problem is that if you, say, declared testing_mode as an int in another file, the compiler would think it was a 16-bit variable and overwrite some other part of RAM, potentially corrupting another variable. Difficult to debug!
@Jonathan Leffler 2009-09-16 14:37:14
Using extern is only of relevance when the program you're building consists of multiple source files linked together, where some of the variables defined, for example, in source file file1.c need to be referenced in other source files, such as file2.c.
It is important to understand the difference between defining a variable and declaring a variable: a variable is declared when the compiler is informed that it exists (and of its type), but no storage is allocated for it at that point; a variable is defined when the compiler allocates the storage for it. The clean, reliable way to declare and define global variables is to use a header file (file3.h below) to contain an extern declaration of the variable. The header is included by the one source file that defines the variable (file1.c) and by all the source files that reference it (file2.c).
file3.h
file1.c
file2.c
You'll notice I use the keyword extern in front of function declarations in headers for consistency — to match the extern in front of variable declarations in headers. Many people prefer not to use extern in front of function declarations; the compiler doesn't care — and ultimately, neither do I as long as you're consistent, at least within a source file.
prog1.h
prog1.c
prog1 uses prog1.c, file1.c, file2.c, file3.h and prog1.h.
The file prog1.mk is a makefile for prog1 only. It will work with most versions of make produced since about the turn of the millennium. It is not tied specifically to GNU Make.
prog1.mk
Guidelines
Rules to be broken by experts only, and only with good reason:
- Headers only contain extern declarations of variables — never static or unqualified variable definitions.
- For any given variable, only one header declares it.
- A source file never contains extern declarations of variables — source files always include the (sole) header that declares them.
- For any given variable, exactly one source file defines it, preferably initializing it too; the defining source file also includes the header, so the compiler can check that the definition and the declaration are consistent.

With some (indeed, many) C compilers you can also get away with what's called a 'common' definition of a variable. 'Common', here, refers to a technique used in Fortran for sharing variables between source files, named (possibly) a COMMON block: each of a number of files provides a tentative definition of the variable, and as long as no more than one file provides an initialized definition, the various files end up sharing a single definition of it:
file11.c
file12.c
This technique does not conform to the letter of the C standard and the 'one definition rule' — it is officially undefined behaviour:
However, the C standard also lists it in informative Annex J as one of the Common extensions.
Because this technique is not always supported, it is best to avoid using it, especially if your code needs to be portable. Using this technique, you can also end up with unintentional type punning. If one of the files declared i as a double instead of as an int, C's type-unsafe linkers probably would not spot the mismatch. If you're on a machine with 64-bit int and double, you'd not even get a warning; on a machine with 32-bit int and 64-bit double, you'd probably get a warning about the different sizes — the linker would use the largest size, exactly as a Fortran program would take the largest size of any common blocks.
The next two files complete the source for
prog2:
prog2.h
prog2.c
prog2 uses prog2.c, file10.c, file11.c, file12.c and prog2.h.
Warning
As noted in comments here, and as stated in my answer to a similar question, a header should declare global variables but not define them; the notes below spell out what goes wrong when a header defines a variable.
Note 1: if the header defines the variable without the extern keyword, then each file that includes the header creates a tentative definition of the variable. As noted previously, this will often work, but the C standard does not guarantee that it will work.
broken_header.h
Note 2: if the header defines and initializes the variable, then only one source file in a given program can use the header. Since headers are primarily for sharing information, it is a bit silly to create one that can only be used once.
seldom_correct.h
Note 3: if the header defines a static variable (with or without initialization), then each source file ends up with its own private version of the 'global' variable.
file1a.c
file2a.c
The next two files complete the source for
prog3:
prog3.h
prog3.c
Reversed the contents of the #if and #else blocks, fixing a bug identified by Denis Kniazhev.
file1b.c
file2b.c
Clearly, the code for the oddball structure is not what you'd normally write, but it illustrates the point. The first argument to the second invocation of INITIALIZER is { 41 and the remaining argument (singular in this example) is 43 }. Without C99 or similar support for variable argument lists for macros, initializers that need to contain commas are very problematic.
Correct header
file3b.h included (instead of fileba.h) per Denis Kniazhev
The next two files complete the source for
prog4:
prog4.h
prog4.c:
The header might be included twice indirectly. For example, if file4b.h includes file3b.h for a type definition that isn't shown, and file1b.c needs to use both file4b.h and file3b.h, you run into trouble: the defining source file defines DEFINE_VARIABLES before including file3b.h to generate the definitions, but the normal header guards on file3b.h would prevent the header being reincluded.
So, you need to be able to include the body of file3b.h at most once for declarations and at most once for definitions. The files file5c.c and file6c.c directly include the header file2c.h several times, but that is the simplest way to show that the mechanism works. It means that if the header was indirectly included twice, it would also be safe.
The restrictions for this to work are:
external.h
file1c.h
file2c.h
file3c.c
file4c.c
file5c.c
file6c.c
The next source file completes the source (provides a main program) for prog5, prog6 and prog7:
prog5.c
The next variant folds the defining mechanism into file2d.h:
file2d.h
The issue becomes 'should the header include
#undef DEFINE_VARIABLES?' If you omit that from the header and wrap any defining invocation with
#defineand
#undef:
in the source code (so the headers never alter the value of
DEFINE_VARIABLES), then you should be clean. It is just a nuisance to have to remember to write the extra line. An alternative might be:
externdef.h
This is getting a tad convoluted, but seems to be secure (using the externdef.h-based file2d.h, with no #undef DEFINE_VARIABLES in file2d.h).
file7c.c
file8c.h
file8c.c
The next two files complete the source for prog8 and prog9:
prog8.c
file9c.h and file9c.c
(The main() function is repeated because prog8.c is the name of one of the headers that are included. It would be possible to reorganize the code so that the main() function was not repeated, but it would conceal more than it revealed.)
@Johannes Schaub - litb 2009-09-16 15:03:34
Are you sure that having tentative definitions spread across multiple translation units is blessed by C? The C99 TC3 draft says " If a translation unit contains one or more tentative definitions for an identifier, and the translation unit contains no external definition for that identifier, then the behavior is exactly as if the translation unit contains a file scope declaration of that identifier, with the composite type as of the end of the translation unit, with an initializer equal to 0."
@Johannes Schaub - litb 2009-09-16 15:05:48
That seems to mean that each such translation unit contains an external definition for it, and violates "somewhere in the entire program there shall be exactly one external definition for the identifier; otherwise [if the identifier isn't used], there shall be no more than one." As I understood the COMMON blocks, they are non-standard extensions.
@Jonathan Leffler 2009-09-16 15:19:12
@litb: see Annex J.5.11 for the common definition - it is a common extension.
@Jonathan Leffler 2009-09-16 15:20:39
@litb: and I agree it should be avoided - that's why it is in the section on 'Not so good way to define global variables'.
@Johannes Schaub - litb 2009-09-16 15:30:02
Indeed it's a common extension, but it's undefined behavior for a program to rely on it. I just wasn't clear whether you were saying that this is allowed by C's own rules. Now I see you are saying it's just a common extension and to avoid it if you need your code to be portable. So I can upvote you without doubts. Really great answer IMHO :)
@Zak 2013-01-16 00:59:52
In your example of file3a.h, should the extern keyword come on the #if instead of the #else?
@Jonathan Leffler 2013-01-16 02:14:23
@Zak: No. The conditional code in file3a.h is #ifdef DEFINE_VARIABLES / #define EXTERN / #else / #define EXTERN extern / #endif, removing comments and using slashes to mark the ends of lines. If DEFINE_VARIABLES is specified, then the variables should not have the extern prefix, which would mark them as declarations instead of definitions. That, in turn, means that the compiler will allocate space for the variables, rather than simply recording their existence.
@Denis Kniazhev 2014-05-20 14:40:42
I've learnt a lot, thanks. In file2b.c, shouldn't it be "file3b.h" instead of "fileba.h"? Also, if I just put files external.h, file1c.h, file2c.h and file3c.c (with main inside) into an xcode project then I get a linker error (I use clang). Using file5c.c instead of file3c.c works (Defining variables then defining)
@Jonathan Leffler 2014-05-22 05:56:21
@DenisKniazhev: Thank you for spotting the typo in file2b.c. The problem with the other code was that I managed to get the bodies of the #if and #else clauses reversed in external.h (as I wipe the egg off my face; it makes a horrid mess in a beard!). The answer is now generated from a template file containing the text and references to the source files, and it also includes extra test programs and headers. I have it all under version control and there's a makefile that ensures that everything builds cleanly. See my profile to contact me by email for the tar file of the material here.
@Jonathan Leffler 2014-05-22 05:57:27
The substance of the answer, I should add, is unchanged, regardless of how big the diffs look.
@Jonathan Leffler 2014-08-05 03:28:12
If you stop at the top, it keeps simple things simple. As you read further down, it deals with more nuances, complications and details. I've just added two 'early stopping points' for less experienced C programmers — or C programmers who already know the subject. There's no need to read it all if you already know the answer (but let me know if you find a technical fault).
@supercat 2014-09-18 18:23:09
Is there any reasonable pattern for a scenario in which code in numerous modules needs to make use of the same initialized array and know its size? At least some compilers will regard extern int foo[] = {1,2,3}; as equivalent to extern int foo[3];, so it's possible to use some preprocessor logic to selectively omit the "extern" when the file is included from within its main source file. I haven't figured out any way to make that not look really ugly, though.
@Jonathan Leffler 2014-09-19 00:28:09
@supercat: I would create a file foo.c to contain the definition of the array and a variable to hold its size: #include "foo.h" plus int foo[] = { 1, 2, 3 }; size_t foo_size = sizeof(foo) / sizeof(foo[0]);, and a header foo.h which would contain #include <stddef.h> (to get the definition of size_t) plus extern int foo[]; extern size_t foo_size;. You'd then put foo.o into a suitable library, and foo.h in a suitable directory of headers, and compile and link against the header and library. You can add header guards, and maybe use the 'avoid repetition' ideas from the main answer.
@Jonathan Leffler 2014-09-19 00:39:11
@supercat: I'd add const to the size_t 'variable' since the size of the array doesn't change. The major downside of this is that you don't have an integer constant (as opposed to a constant integer) which you can use in contexts where an integer constant is needed. Realistically, it is unlikely to be a problem.
@supercat 2014-09-19 02:27:34
@JonathanLeffler: For embedded systems, the differences between constants and never-written variables can be significant. It's really too bad that C never defined a means of defining link-time constants other than addresses, since I think linker systems even when C was designed could support such a concept, at least for things that weren't larger than an int.
@supercat 2014-09-19 02:45:34
@JonathanLeffler: BTW, on many embedded systems, there can also be a substantial speed and code size penalty for separating parts of a program into different compilation units. For example, on a typical ARM, void setVariables(int a, int b, int c) { x=a; y=b; z=c; } could be 14 bytes if x, y and z are in the same compilation unit as the method, but would require 26 bytes if they're in another compilation unit. Some people may despise the idea of using #include to join together C files that could also run as separate compilation units, but there can be some major efficiency advantages to doing so.
@Jonathan Leffler 2014-09-19 08:37:55
@supercat: It occurs to me that you can use C99 compound literals to get an enumeration value for the array size, exemplified by (foo.h): #define FOO_INITIALIZER { 1, 2, 3, 4, 5 } to define the initializer for the array, enum { FOO_SIZE = sizeof((int [])FOO_INITIALIZER) / sizeof(((int [])FOO_INITIALIZER)[0]) }; to get the size of the array, and extern int foo[]; to declare the array. Clearly, the definition should be just int foo[FOO_SIZE] = FOO_INITIALIZER;, though the size doesn't really have to be included in the definition. This gets you an integer constant, FOO_SIZE.
@supercat 2014-09-20 18:27:22
+1 for the idea of using an enum, though I'm not sure how compilers would handle the aforementioned syntax. I've used great big monster macros for a variety of purposes in a style similar to this, but I think using [extern] int foo[] = {..data..} is a bit cleaner since it uses the size of the actual created array. BTW, one really nasty hack which works on some embedded compilers would be to have foo.c contain the array and then int[] FOO_SIZE_AS_ADDR @(sizeof(foo)), and then have the .h file #define foo_size ((int)(FOO_SIZE_AS_ADDR)). The thing the hack is doing...
@supercat 2014-09-20 18:28:34
...(allowing an int value to be used as a link-time constant) would be the cleanest way to share the array content and size among different compilation units [note that FOO_SIZE_AS_ADDR doesn't point to anything meaningful, but on some platforms its address can be used as a link-time constant]. Of course, many platforms don't provide an @ syntax for forcing addresses, so the only way to declare such addresses would be in an assembly-language file. Further, on many platforms there would be no guarantee that all int values will be accepted by the linker as addresses.
@Jonathan Leffler 2014-09-20 18:31:33
@supercat: You need to be careful to decide when you give up on using what the standard promises will work. It is a legitimate, but in my experience fraught, decision to make. The fraughtness comes because people don't realize they're going outside the standard and the information is not documented in the code, and when you move it to a new environment or a new version of the compiler or whatever, the behaviour changes because it was not standard behaviour. Your judgement call. My judgement errs on the side of caution and following the standard for maximum portability.
@supercat 2014-09-20 18:53:05
@JonathanLeffler: I would not use the aformentioned hack with array sizes, because there is a standard-compliant means of handling the concept that, while clunky, is workable. I have used that sort of hack for some other purposes, however--typically with constants that are defined in an assembly-language file. For example, in one case an assembly-language method needed to be passed an array of a certain size, but the assembly code could be adapted to change that size (the code would need to be modified according to the array size, but such modification would not be difficult). In that case...
@supercat 2014-09-20 18:54:33
@JonathanLeffler: I figured that having the .h file contain a hacky expression to convert a pointer to an integer was safer than having it specify a number. If the code needed to be ported to a platform where that wouldn't work, the assembly file would almost certainly need to be changed for other reasons anyhow. PS — I wonder how much "portable" code would work on a platform where int was 64 bits? I would expect a lot of code which is thought to be portable would end up working most of the time but end up with obscure little bugs because of C's unfortunate type-promotion rules.
@uchuugaka 2016-02-20 14:20:50
This is a good answer to return to as you learn C/Objective-C/C++, but header guards should be noted much earlier, because if you do anything with any *nix they are unavoidably standard practice.
@daniel 2017-04-04 00:45:55
Why do you use extern when declaring functions?
@Jonathan Leffler 2017-04-04 00:50:01
Because I only ever write such declarations in headers, and any variables declared in headers must be prefixed with
extern, so for symmetry, I also declare functions prefixed with
extern. Other people don't do it — it's a point of difference in style.
@thegreatcoder 2018-06-18 22:56:30
I am a complete amateur to this. How do I write a makefile for the very first section before the guidelines? Can you provide a sample makefile? Thanks. Edit: I realized that we can use gcc file1.c file2.c prog1.c -o final_working to make it work. But if I were to write a formal make file, how do I do it for this case? I wrote one, but it doesn't seem to work. Your response will be quite helpful to check against mine.
@Jonathan Leffler 2018-06-19 01:19:29
@Shubashree: Hmmm – interesting. OK; I've added prog1.mk to show you the 'minimal' makefile. It isn't completely minimal, of course. That allows me to tune the build if necessary, but covers most of the bases, leaving a very stringent set of compilation options. Note that some of the xFLAGS names used have other uses in standard Make (notably GFLAGS — related to SCCS, but you probably don't use SCCS, so it probably doesn't matter). Beware!
@Hefaz 2019-04-29 20:30:00
It's not working; it says undefined reference to use_it.
@Jonathan Leffler 2019-04-29 20:32:37
@Hefaz — please be more precise. What is not working? There are quite a number of programs listed; most of them have use_it() as a function. All of them require linking multiple object files to build a single program — how are you compiling the code that gives you the undefined reference?
@Hefaz 2019-04-29 22:35:36
@JonathanLeffler I am compiling prog1.c. I created all the files as in the answer here and added the related code; now when I try to compile it says undefined reference to use_it().
@Jonathan Leffler 2019-04-29 22:39:00
@Hefaz — What command line are you using to compile it? As noted in my answer, you can download the files from GitHub, including prog1.mk, a makefile that builds prog1. You are almost certainly simply not linking all the object files. You need to link prog1.o, file1.o, and file2.o together (having compiled them all from the source files), or you need to compile prog1.c, file1.c and file2.c together.
@Hefaz 2019-04-29 22:41:28
I am running it in Windows, Code Blocks IDE; do I need the .mk file as well?
@Jonathan Leffler 2019-04-29 22:45:15
@Hefaz — I have never used Code Blocks IDE (so I don't know how to drive it), but you need to compile 3 source files and link them together to build the program. The whole question is interworking between source files; you have to know how to set up your build environment to build separate object files and link them together to build programs. In my build environment — a Unix command-line shell — the makefile is the easiest way to do that. Your mileage will vary, depending on how hard your IDE makes it for you to do what comes naturally on Unix systems. (The URL at GitHub is in a comment.)
@Hefaz 2019-04-29 22:46:52
Ok, Thank You, I will try and let you know if there was a related problem.
@Geremia 2016-01-27 19:47:31
extern simply means a variable is defined elsewhere (e.g., in another file).
@Johannes Weiss 2009-09-16 14:12:24
An extern variable is a declaration (thanks to sbi for the correction) of a variable which is defined in another translation unit. That means the storage for the variable is allocated in another file.
Say you have two .c files, test1.c and test2.c. If you define a global variable int test1_var; in test1.c and you'd like to access this variable in test2.c, you have to use extern int test1_var; in test2.c.
Complete sample:
@sbi 2009-09-16 14:18:10
There's no "pseudo-definitions". It's a declaration.
@radiohead 2018-03-24 03:15:14
In the above example, if I change the extern int test1_var; to int test1_var;, the linker (gcc 5.4.0) still passes. So, is extern really needed in this case?
@Jonathan Leffler 2018-06-16 19:44:41
@radiohead: In my answer, you will find the information that dropping the extern is a common extension that often works — and specifically works with GCC (but GCC is far from being the only compiler that supports it; it is prevalent on Unix systems). You can look for "J.5.11" or the section "Not so good way" in my answer (I know — it is long) and the text near that explains it (or tries to do so).
@shoham 2014-09-01 07:35:20
extern is used so one first.c file can have full access to a global parameter in another second.c file.
The extern can be declared in the first.c file or in any of the header files first.c includes.
@Jonathan Leffler 2015-09-02 15:09:14
Note that the extern declaration should be in a header, not in first.c, so that if the type changes, the declaration will change too. Also, the header that declares the variable should be included by second.c to ensure that the definition is consistent with the declaration. The declaration in the header is the glue that holds it all together; it allows the files to be compiled separately but ensures they have a consistent view of the type of the global variable.
@user1270846 2012-08-09 09:21:11
First off, the extern keyword is not used for defining a variable; rather, it is used for declaring a variable. I would say extern is a storage class, not a data type.
extern is used to let other C files or external components know this variable is already defined somewhere. Example: if you are building a library, there is no need to define the global variable somewhere in the library itself. The library will compile directly, but while linking the file, the linker checks for the definition.
@loganaayahee 2012-10-03 04:58:14
extern allows one module of your program to access a global variable or function declared in another module of your program. You usually have extern variables declared in header files.
If you don't want a program to access your variables or functions, you use static, which tells the compiler that this variable or function cannot be used outside of this module.
@Phoenix225 2012-07-02 09:11:11
In C, a variable declared inside a file, say example.c, has file scope. The compiler expects the variable to have its definition inside the same file example.c, and when it does not find one, it throws an error. A function, on the other hand, has global scope by default, so you do not have to explicitly tell the compiler "look, you might find the definition of this function here"; including the file which contains its declaration (the file you actually call a header file) is enough. For example, consider the following two files:
example.c
example1.c
Now compile the two files together using the following commands:
step 1) cc -o ex example.c example1.c
step 2) ./ex
You get the following output: The value of a is <5>
@Anup 2012-08-20 10:19:51
The extern keyword is used with a variable to identify it as a global variable.
@Alex Lockwood 2012-06-20 23:43:15
The correct interpretation of extern is that you tell something to the compiler. You tell the compiler that, despite not being present right now, the variable declared will somehow be found by the linker (typically in another object (file)). The linker will then be the lucky guy to find everything and put it together, whether you had some extern declarations or not.
@Buggieboy 2009-09-16 14:50:53
I like to think of an extern variable as a promise that you make to the compiler.
When encountering an extern, the compiler can only find out its type, not where it "lives", so it can't resolve the reference.
You are telling it, "Trust me. At link time this reference will be resolvable."
@Lie Ryan 2010-11-30 02:16:18
More generally, a declaration is a promise that the name will be resolvable to exactly one definition at link time. An extern declares a variable without defining it.
@Arkaitz Jimenez 2009-09-16 14:11:25
@mjv 2009-09-16 14:19:40
In other words, the translation unit where extern is used knows about this variable, its type, etc., and hence allows the source code in the underlying logic to use it, but it does not allocate the variable; another translation unit will do that. If both translation units were to declare the variable normally, there would effectively be two physical locations for the variable, with the associated "wrong" references within the compiled code, and with the resulting ambiguity for the linker.
@BenB 2009-09-16 14:18:57
extern tells the compiler to trust you that the memory for this variable is declared elsewhere, so it doesn't try to allocate or check memory.
Therefore, you can compile a file that has a reference to an extern, but you cannot link if that memory is not declared somewhere.
Useful for global variables and libraries, but dangerous because the linker does not type check.
@sbi 2009-09-16 14:37:20
The memory isn't declared. See the answers to this question: stackoverflow.com/questions/1410563 for more details.
@sbi 2009-09-16 14:16:24
Adding an extern turns a variable definition into a variable declaration. See this thread as to what's the difference between a declaration and a definition.
@user1150105 2012-11-08 19:07:39
What is the difference between int foo and extern int foo (file scope)? Both are declarations, aren't they?
@sbi 2012-11-09 09:12:39
@user14284: They are both declarations only in the sense that every definition is a declaration, too. But I linked to an explanation of this. ("See this thread as to what's the difference between a declaration and a definition.") Why don't you simply follow the link and read?
I see
Okay thank you, I will try that
"How would you describe the condition you want to test in the if statement in English?"
If the input is larger than 2000 and the bonus is larger than 400, then show "Unacceptable ..."
Because...
It says ". expected instead of this token" while highlighting two commas and a } sign
Alright, thanks anyway
"What is the error message for the }?"
It's clumped together in one error message, along with the other commas.
I'm reading the link right now
"if (deposit>20000, bonus>400) "
and
"else if (deposit<=2000, bonus<=400) "
oh and the "}" in the "} // CollegeSavings class " is reported as error as well
I wanted to make two if statement...
import hsa.*;
// The "CollegeSavings" class.
public class CollegeSavings
{
public static void main (String[] args)
{double deposit, bonus, output;
Stdout.println("Please input...
Here's the assignment I was given:
The government introduces a new program of educational savings accounts. It adds a 20% bonus on whatever you contributed to the account, up to a maximum of $400...
MVC Architecture
Large client side applications have always been hard to write, hard to organize and hard to maintain. They tend to quickly grow out of control as you add more functionality and developers to a project. Ext JS 4 comes with a new application architecture that not only organizes your code but reduces the amount you have to write.
Our application architecture follows an MVC-like pattern, with Models and Controllers being introduced for the first time. There are many MVC architectures, most of which are slightly different from one another. Here's how we define ours:
- Model: a collection of fields and their data (for example, a User model with username and password fields); models know how to persist themselves through the data package.
- View: any type of component that renders data; grids, trees and panels are all views.
- Controller: a place to put all of the code that makes your app work, such as rendering views, instantiating models, and the rest of the app logic.
In this guide we'll be creating a very simple application that manages User data. By the end you will know how to put simple applications together using the new Ext JS 4 application architecture.
The application architecture is as much about providing structure and consistency as it is about actual classes and framework code. Following the conventions unlocks a number of important benefits:
- Every application works the same way so you only have to learn it once
- It's easy to share code between apps because they all work the same way
- You can use our build tools to create optimized versions of your applications for production use
File Structure
Ext JS 4 applications follow a unified directory structure that is the
same for every app. Please check out the Getting Started
guide for a detailed explanation on the
basic file structure of an application. In MVC layout, all classes are
placed into the
app/ folder, which in turn contains sub-folders to
namespace your models, views, controllers and stores. Here is how the
folder structure for the simple example app will look when we're done:
In this example, we are encapsulating the whole application inside one
folder called '
account_manager'. Essential files from the Ext JS 4
SDK are wrapped inside
ext-4/
folder. Hence the content of our
index.html looks like this:
<html> <head> <title>Account Manager</title> <link rel="stylesheet" type="text/css" href="ext-4/resources/css/ext-all.css"> <script type="text/javascript" src="ext-4/ext-debug.js"></script> <script type="text/javascript" src="app.js"></script> </head> <body></body> </html>
Creating the application in app.js
Every Ext JS 4 application starts with an instance of the Application class. The Application contains global settings for your application (such as the app's name) and maintains references to all of the models, views and controllers used by the app. An Application also contains a launch function, which is run automatically when everything is loaded.
Let's create a simple Account Manager app that will help us manage User accounts. First we need to pick a global namespace for this application. All Ext JS 4 applications should only use a single global variable, with all of the application's classes nested inside it. Usually we want a short global variable so in this case we're going to use "AM":
```js
Ext.application({
    requires: ['Ext.container.Viewport'],
    name: 'AM',

    appFolder: 'app',

    launch: function() {
        Ext.create('Ext.container.Viewport', {
            layout: 'fit',
            items: [
                {
                    xtype: 'panel',
                    title: 'Users',
                    html : 'List of users will go here'
                }
            ]
        });
    }
});
```
There are a few things going on here. First we invoked
Ext.application to create a new instance of Application class, to
which we passed the name
'AM'. This automatically sets up a global
variable
AM for us, and registers the namespace to
Ext.Loader,
with the corresponding path of '
app' set via the
appFolder config
option. We also provided a simple launch function that just creates a
Viewport which contains a single
Panel that will fill the screen.
Defining a Controller
Controllers are the glue that binds an application together. All they
really do is listen for events (usually from views) and take some
actions. Continuing our Account Manager application, lets create a
controller. Create a file called
app/controller/Users.js and add
the following code:
```js
Ext.define('AM.controller.Users', {
    extend: 'Ext.app.Controller',

    init: function() {
        console.log('Initialized Users! This happens before the Application launch function is called');
    }
});
```
Now lets add our newly created Users controller to the application config in app.js:
```js
Ext.application({
    ...

    controllers: [
        'Users'
    ],

    ...
});
```
When we load our application by visiting
index.html inside a
browser, the
Users controller is automatically loaded (because we
specified it in the Application definition above), and its
init
function is called just before the Application's
launch function.

The init function is a great place to set up how your controller interacts with the view, and is usually used in conjunction with another Controller function - control. The control function makes it easy to listen to events on your view classes and take some action with a handler function. Let's update our Users controller to tell us when the panel is rendered:

```js
Ext.define('AM.controller.Users', {
    extend: 'Ext.app.Controller',

    init: function() {
        this.control({
            'viewport > panel': {
                render: this.onPanelRendered
            }
        });
    },

    onPanelRendered: function() {
        console.log('The panel was rendered');
    }
});
```
We've updated the
init function to use
this.control to set up
listeners on views in our application. The
control function uses the
new ComponentQuery engine to quickly and easily get references to
components on the page. If you are not familiar with ComponentQuery
yet, be sure to check out the ComponentQuery
documentation for a full explanation.
When we run our application now we see the following:
Not exactly the most exciting application ever, but it shows how easy it is to get started with organized code. Let's flesh the app out a little now by adding a grid.
Defining a View
Until now our application has only been a few lines long and only
inhabits two files -
app.js and
app/controller/Users.js. Now that
we want to add a grid showing all of the users in our system, it's
time to organize our logic a little better and start using views.
A View is nothing more than a Component, usually defined as a subclass
of an Ext JS component. We're going to create our Users grid now by
creating a new file called
app/view/user/List.js and putting the
following into it:
```js
Ext.define('AM.view.user.List' ,{
    extend: 'Ext.grid.Panel',
    alias: 'widget.userlist',

    title: 'All Users',

    initComponent: function() {
        this.store = {
            fields: ['name', 'email'],
            data  : [
                {name: 'Ed',    email: 'ed@sencha.com'},
                {name: 'Tommy', email: 'tommy@sencha.com'}
            ]
        };

        this.columns = [
            {header: 'Name',  dataIndex: 'name',  flex: 1},
            {header: 'Email', dataIndex: 'email', flex: 1}
        ];

        this.callParent(arguments);
    }
});
```
Our View class is nothing more than a normal class. In this case we happen to extend the Grid Component and set up an alias so that we can use it as an xtype (more on that in a moment). We also passed in the store configuration and the columns that the grid should render.
Next we need to add this view to our
Users controller. Because we
set an alias using the special
'widget.' format, we can use
'userlist' as an xtype now, just like we had used
'panel'
previously.
```js
Ext.define('AM.controller.Users', {
    extend: 'Ext.app.Controller',

    views: [
        'user.List'
    ],

    init: ...

    onPanelRendered: ...
});
```
And then render it inside the main viewport by modifying the launch
method in
app.js to:
```js
Ext.application({
    ...

    launch: function() {
        Ext.create('Ext.container.Viewport', {
            layout: 'fit',
            items: {
                xtype: 'userlist'
            }
        });
    }
});
```
The only other thing to note here is that we specified
'user.List'
inside the views array. This tells the application to load that file
automatically so that we can use it when we launch. The application
uses Ext JS 4's new dynamic loading system to automatically pull this
file from the server. Here's what we see when we refresh the page now:
Controlling the grid
Note that our
onPanelRendered function is still being called. This
is because our grid class still matches the
'viewport > panel'
selector. The reason for this is that our class extends Grid, which in
turn extends Panel.
At the moment, the listeners we add to this selector will actually be called for every Panel or Panel subclass that is a direct child of the viewport, so let's tighten that up a bit using our new xtype. While we're at it, let's instead listen for double clicks on rows in the grid so that we can later edit that User:
```js
Ext.define('AM.controller.Users', {
    extend: 'Ext.app.Controller',

    views: [
        'user.List'
    ],

    init: function() {
        this.control({
            'userlist': {
                itemdblclick: this.editUser
            }
        });
    },

    editUser: function(grid, record) {
        console.log('Double clicked on ' + record.get('name'));
    }
});
```
Note that we changed the ComponentQuery selector (to simply
'userlist'), the event name (to
'itemdblclick') and the handler
function name (to
'editUser'). For now we're just logging out the
name of the User we double clicked:
Logging to the console is all well and good but we really want to edit
our Users. Let's do that now, starting with a new view in
app/view/user/Edit.js:
```js
Ext.define('AM.view.user.Edit', {
    extend: 'Ext.window.Window',
    alias: 'widget.useredit',

    title: 'Edit User',
    layout: 'fit',
    autoShow: true,

    initComponent: function() {
        this.items = [
            {
                xtype: 'form',
                items: [
                    {
                        xtype: 'textfield',
                        name : 'name',
                        fieldLabel: 'Name'
                    },
                    {
                        xtype: 'textfield',
                        name : 'email',
                        fieldLabel: 'Email'
                    }
                ]
            }
        ];

        this.buttons = [
            {
                text: 'Save',
                action: 'save'
            },
            {
                text: 'Cancel',
                scope: this,
                handler: this.close
            }
        ];

        this.callParent(arguments);
    }
});
```
Again we're just defining a subclass of an existing component - this
time
Ext.window.Window. Once more we used
initComponent to specify
the complex objects
items and
buttons. We used a
'fit' layout
and a form as the single item, which contains fields to edit the name
and the email address. Finally we created two buttons, one which just
closes the window, and the other that will be used to save our
changes.
All we have to do now is add the view to the controller, render it and load the User into it:
```js
Ext.define('AM.controller.Users', {
    extend: 'Ext.app.Controller',

    views: [
        'user.List',
        'user.Edit'
    ],

    init: ...

    editUser: function(grid, record) {
        var view = Ext.widget('useredit');

        view.down('form').loadRecord(record);
    }
});
```
First we created the view using the convenient method
Ext.widget,
which is equivalent to
Ext.create('widget.useredit'). Then we
leveraged ComponentQuery once more to quickly get a reference to the
edit window's form. Every component in Ext JS 4 has a
down function,
which accepts a ComponentQuery selector to quickly find any child
component.
Double clicking a row in our grid now yields something like this:
Creating a Model and a Store
Now that we have our edit form it's almost time to start editing our users and saving those changes. Before we do that though, we should refactor our code a little.
At the moment the
AM.view.user.List component creates a Store
inline. This works well but we'd like to be able to reference that
Store elsewhere in the application so that we can update the data in
it. We'll start by breaking the Store out into its own file -
app/store/Users.js:
```js
Ext.define('AM.store.Users', {
    extend: 'Ext.data.Store',

    fields: ['name', 'email'],

    data: [
        {name: 'Ed',    email: 'ed@sencha.com'},
        {name: 'Tommy', email: 'tommy@sencha.com'}
    ]
});
```
Now we'll just make 2 small changes - first we'll ask our
Users
controller to include this Store when it loads:
```js
Ext.define('AM.controller.Users', {
    extend: 'Ext.app.Controller',

    stores: [
        'Users'
    ],

    ...
});
```
then we'll update
app/view/user/List.js to simply reference the
Store by id:
```js
Ext.define('AM.view.user.List' ,{
    extend: 'Ext.grid.Panel',
    alias: 'widget.userlist',

    title: 'All Users',

    // we no longer define the Users store in the `initComponent` method
    store: 'Users',

    initComponent: function() {
        this.columns = [
    ...
});
```
By including the stores that our
Users controller cares about in its
definition they are automatically loaded onto the page and given a
storeId, which makes them really easy
to reference in our views (by simply configuring
store: 'Users' in
this case).
At the moment we've just defined our fields (
'name' and
'email')
inline on the store. This works well enough but in Ext JS 4 we have a
powerful
Ext.data.Model class that we'd like to take advantage of
when it comes to editing our Users. We'll finish this section by
refactoring our Store to use a Model, which we'll put in
app/model/User.js:
```js
Ext.define('AM.model.User', {
    extend: 'Ext.data.Model',
    fields: ['name', 'email']
});
```
That's all we need to do to define our Model. Now we'll just update our Store to reference the Model name instead of providing fields inline...
```js
Ext.define('AM.store.Users', {
    extend: 'Ext.data.Store',
    model: 'AM.model.User',

    data: [
        {name: 'Ed',    email: 'ed@sencha.com'},
        {name: 'Tommy', email: 'tommy@sencha.com'}
    ]
});
```
And we'll ask the
Users controller to get a reference to the
User
model too:
```js
Ext.define('AM.controller.Users', {
    extend: 'Ext.app.Controller',

    stores: ['Users'],
    models: ['User'],

    ...
});
```
Our refactoring will make the next section easier but should not have affected the application's current behavior. If we reload the page now and double click on a row we see that the edit User window still appears as expected. Now it's time to finish the editing functionality:
Saving data with the Model
Now that we have our users grid loading data and opening an edit window when we double click each row, we'd like to save the changes that the user makes. The Edit User window we defined above contains a form (with fields for name and email) and a Save button. First let's update our controller's init function to listen for clicks on that Save button:
```js
Ext.define('AM.controller.Users', {
    ...

    init: function() {
        this.control({
            'viewport > userlist': {
                itemdblclick: this.editUser
            },
            'useredit button[action=save]': {
                click: this.updateUser
            }
        });
    },

    ...

    updateUser: function(button) {
        console.log('clicked the Save button');
    }

    ...
});
```
We added a second ComponentQuery selector to our
this.control call -
this time
'useredit button[action=save]'. This works the same way as
the first selector - it uses the
'useredit' xtype that we defined
above to focus in on our edit user window, and then looks for any
buttons with the
'save' action inside that window. When we defined
our edit user window we passed
{action: 'save'} to the save button,
which gives us an easy way to target that button.
We can satisfy ourselves that the
updateUser function is called when
we click the Save button:
Now that we've seen our handler is correctly attached to the Save
button's click event, let's fill in the real logic for the
updateUser function. In this function we need to get the data out of
the form, update our User with it and then save that back to the Users
store we created above. Let's see how we might do that:
```js
updateUser: function(button) {
    var win    = button.up('window'),
        form   = win.down('form'),
        record = form.getRecord(),
        values = form.getValues();

    record.set(values);
    win.close();
}
```
Let's break down what's going on here. Our click event gave us a
reference to the button that the user clicked on, but what we really
want is access to the form that contains the data and the window
itself. To get things working quickly we'll just use ComponentQuery
again here, first using
button.up('window') to get a reference to
the Edit User window, then
win.down('form') to get the form.
After that we simply fetch the record that's currently loaded into the
form and update it with whatever the user has typed into the
form. Finally we close the window to bring attention back to the
grid. Here's what we see when we run our app again, change the name
field to
'Ed Spencer' and click save:
Saving to the server
Easy enough. Let's finish this up now by making it interact with our server side. At the moment we are hard coding the two User records into the Users Store, so let's start by reading those over AJAX instead:
```js
Ext.define('AM.store.Users', {
    extend: 'Ext.data.Store',
    model: 'AM.model.User',
    autoLoad: true,

    proxy: {
        type: 'ajax',
        url: 'data/users.json',
        reader: {
            type: 'json',
            root: 'users',
            successProperty: 'success'
        }
    }
});
```
Here we removed the
'data' property and replaced it with a
Proxy. Proxies are the way to load and
save data from a Store or a Model in Ext JS 4. There are proxies for
AJAX, JSON-P and HTML5 localStorage among others. Here we've used a
simple AJAX proxy, which we've told to load data from the url
'data/users.json'.
We also attached a Reader to the
Proxy. The reader is responsible for decoding the server response into
a format the Store can understand. This time we used a JSON Reader, and specified the root and
successProperty configurations. Finally we'll create our
data/users.json file and paste our previous data into it:
{ "success": true, "users": [ {"id": 1, "name": 'Ed', "email": "ed@sencha.com"}, {"id": 2, "name": 'Tommy', "email": "tommy@sencha.com"} ] }
The only other change we made to the Store was to set
autoLoad to
true, which means the Store will ask its Proxy to load that data
immediately. If we refresh the page now we'll see the same outcome as
before, except that we're now no longer hard coding the data into our
application.
The last thing we want to do here is send our changes back to the server. For this example we're just using static JSON files on the server side so we won't see any database changes but we can at least verify that everything is plugged together correctly. First we'll make a small change to our new proxy to tell it to send updates back to a different url:
```js
proxy: {
    type: 'ajax',
    api: {
        read: 'data/users.json',
        update: 'data/updateUsers.json'
    },
    reader: {
        type: 'json',
        root: 'users',
        successProperty: 'success'
    }
}
```
We're still reading the data from
users.json but any updates will be
sent to
updateUsers.json. This is just so we know things are working
without overwriting our test data. After updating a record, the
updateUsers.json file just contains
{"success": true}. Since it is
updated through an HTTP POST command, you may have to create an empty
file to avoid receiving a 404 error.
The only other change we need to make is to tell our Store to synchronize itself after editing, which we do by adding one more line inside the updateUser function, which now looks like this:
```js
updateUser: function(button) {
    var win    = button.up('window'),
        form   = win.down('form'),
        record = form.getRecord(),
        values = form.getValues();

    record.set(values);
    win.close();

    // synchronize the store after editing the record
    this.getUsersStore().sync();
}
```
Now we can run through our full example and make sure that everything works. We'll edit a row, hit the Save button and see that the request is correctly sent to updateUsers.json.
Deployment
The newly introduced Sencha SDK Tools (download here) make deployment of any Ext JS 4 application easier than ever. The tools allow you to generate a manifest of all dependencies in the form of a JSB3 (JSBuilder file format) file, and to create a minimal custom build of just what your application needs within minutes.
Please refer to the Getting Started guide for detailed instructions.
Next Steps
We've created a very simple application that manages User data and sends any updates back to the server. We started out simple and gradually refactored our code to make it cleaner and more organized. At this point it's easy to add more functionality to our application without creating spaghetti code. The full source code for this application can be found in the Ext JS 4 SDK download, inside the examples/app/simple folder.
Brad Boyer wrote:

> Hans Reiser wrote:
>
> > I remember that I used to be a sysadmin with some NetApp boxes that have
> > a .snapshot directory that is invisible, and has special qualities.
> >
> > It worked. There were no namespace collision problems. None.
> >
> > These things can be survived by users. ;-)
>
> Yes, these things can be survived, but speaking as someone who currently
> has a job involving multiple NetApp boxes, I can say that the .snapshot
> directory has some seriously annoying properties that break tar and
> other programs that expect things to look normal. The snapshots have
> saved my ass a few times, but they're still a pain to work with due
> to a few little quirks. In particular, the files in the snapshot keep
> the same inode number as the actual file. Just remember that clever
> solutions that almost fit the traditional model can have strange
> results over time.
>
> Brad Boyer
> flar@allandria.com

Can you detail the problem?

Hans
State of React and CSS
TLDR: You can think of this post as the CSS version of the JS Fatigue article. This is based on some research I did recently.
Recently, I started to work on a project that helps us to develop UI components isolated from the main app. Initially, I assumed everyone would write CSS in JavaScript with React.
After a while I realized that it’s not fair to assume that everyone would write CSS in JavaScript. CSS is a land of choices.
In this article, I will share what I found from the research. Let’s get started.
: Wait. Can I have a look at the tool you mentioned?
It’s React Storybook. It’s still pre-release software and I am planning to do a release in a few days.
How do we use CSS?
The first question we should ask is how we are using CSS. The answer is subjective, based on the role we play in the team. Let’s explore.
Frontend Developer
The role of the frontend developer is to implement business functionalities. This involves many tasks, including creating the User Interfaces.
Frontend developers don’t usually worry too much about CSS. They mostly care about the application’s functionality and the layout of the app.
UI Designer
The role of the UI designer is to build great User Interfaces that customers love.
Basically, UI designers need to make sure that the design they create in Photoshop is what is actually implemented.
They usually care a lot about CSS and work with the components created by the Frontend Developer.
It’s really hard to make a clear boundary between the Frontend Developer and the UI Designer these days. Usually a team of a few people will take care of both these roles.
Core developer
Core developers build some core JavaScript tools/libraries and focus on non-UI stuff. CSS is something they don’t need to worry about.
Architect
There are plenty of CSS frameworks.
There are different CSS pre-processors.
With React, now we have CSS in JavaScript.
So, who’s going to decide what we should use in our project?
The architect is the person who is going to make that decision.
When making the decision, the architect needs to be very careful. Bad CSS makes it harder to maintain the app in the long run.
In the rest of this article, I will talk about how the community use CSS with React and help the architect to make a wise decision.
Approaches to use CSS with React
There are many ways we can use CSS with React. Let’s discuss some of these approaches.
Using Existing CSS Frameworks
There are a bunch of pretty good CSS frameworks, like Bootstrap and Semantic UI, and you can easily use them with your React project.
If you go with this approach, there’s not much new stuff to learn.
But, there are much better ways to write CSS with React.
Using CSS Directly (including CSS preprocessors)
This is something you can do when you are working with an existing CSS framework or trying to style components with CSS directly. You can also use CSS preprocessors like SCSS or LESS.
Your build system may have support for these. With Webpack, you can configure it to import CSS files just like any JS file.
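As a sketch of that Webpack setup (the loader names are an assumption based on the common style-loader/css-loader pairing of the webpack 1.x era, not something taken from this article):

```javascript
// webpack.config.js - make `import './styles.css'` work like a JS import.
// css-loader resolves url()/@import; style-loader injects a <style> tag.
const config = {
  entry: './src/index.js',
  output: { path: './dist', filename: 'bundle.js' },
  module: {
    loaders: [
      { test: /\.css$/, loader: 'style-loader!css-loader' }
    ]
  }
};

module.exports = config;
```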
If you use Meteor, it will import CSS automatically for you.
Using React-Based UI Frameworks
There are UI frameworks written specifically for React. For example, take a look at Material UI and Rebass. With these frameworks, you can simply import any UI component and use it immediately.

For example, here’s how you create an inline form with Rebass.
: Wow. This is so cool.
Yes. It is.
With this approach you don’t need to deal with CSS directly. If you need to customize the look and feel, there are some ways to do it easily in JavaScript (at the runtime).
Even though this is a pretty cool way to style your app, there are not many choices. Material UI is the only one that is popular and stable.
CSS Modules
CSS modules give you a way to write modular CSS. By default, every CSS rule you write is available inside a local namespace, so it won’t pollute the global namespace. You can write a CSS module using plain old CSS like this:
Then you can simply import the CSS className and use it in your React component.
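The mechanics can be sketched in plain JavaScript (the file name and the generated class name below are hypothetical; the real mapping comes from your build tool):

```javascript
// A CSS module is compiled by the build tool into a mapping from the local
// class names you wrote to generated, collision-free ones. For example,
// a List.css containing `.title { color: tomato; }` might compile to:
const styles = { title: 'List__title--1a2b3' };

// The component references the mapping rather than a raw string, so two
// files can both define `.title` without clashing in the global namespace.
function Title(text) {
  return { tag: 'h1', className: styles.title, children: text };
}

const el = Title('All Users');
```

Because the component only ever sees `styles.title`, renaming or re-scoping the class is entirely the build tool's concern.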
For CSS modules you need support from your build tool, and there is support for the most popular tools, like Webpack and Meteor.
Using Inline CSS with JavaScript
React has changed how we write CSS by allowing us to write inline CSS via JavaScript. With this approach we can build UI components independently and we can easily distribute components (since CSS lives inside the container).
But the default React implementation doesn’t have support for some CSS features like media queries. Projects like Radium will help to get those missing features.
Take a look at this presentation to learn more about CSS in JS.
With this approach, you don’t need any special support from the build tool for CSS. That’s because it’s just JavaScript.
On the other hand, this is a fairly new thing and there are some challenges we need to address as a community.
: Okay. That’s a lot of choices. What should I choose?
Well, that’s a hell of a question.
What to Choose
React is all about rethinking how we build our apps. But, it raises the question of whether to follow the same rule for CSS.
At Kadira, we believe in writing CSS in JS.
At the same time, we don’t really like to build each and every UI component from scratch. That’s why we’ve picked both Rebass and Radium.
We get the default set of components from Rebass. We also build some components from scratch with Radium.
So here’s my personal suggestion on this topic.
1.) If you are starting a new project, try to write CSS in JavaScript. If you find you like it, go with it.
2.) If this is an existing project/team and you’ve got existing stylesheets, then it’s a wise idea to go with your traditional approach.
: Sounds good. So, what’s the approach you selected for your devtool?
It’s a React devtool. We can’t really force developers to write CSS in a specific way, so I simply had to add support for all the options.
So, what’s your opinion on this? How do you write CSS in your React app?
How to solve the firmware update failures
Hello,
The upgrade problems are caused by a bug in Espressif's IDF that is causing the flash memory of the module to be write protected. Espressif is working to provide a permanent solution for this as soon as possible. In the meantime they have given us a temporal way to solve it. The procedure is the following:
Download this tool which needs to be executed as:
python flash_debug.py /dev/ttyUSB0 write 0x0 2
(/dev/ttyUSB0 will have to be replaced by the actual serial port on your system)
Running the tool requires a serial connection to the module, Python 2 and Pyserial installed on your computer. The easiest way is via the expansion board. If an expansion board is not available then any USB-to-serial converter will work. The connection must be made on pins P0 (WiPy/LoPy RXD) and P1 (WiPy/LoPy TXD). Before running the tool, connect a cable jumper between GND and P2 (G23 on the expansion board), then reset the module and run the command. The total steps are:
- Connect the WiPy/LoPy via the serial port to your computer.
- Connect a cable jumper between GND and P2 (G23 on the expansion board).
- Reset the board.
- Run the command outlined above (python flash_debug.py /dev/ttyUSB0 write 0x0 2) and remember to pass the actual serial port on your system.
- Remove the cable jumper.
- Power cycle the WiPy/LoPy.
- Run the firmware update tool.
We apologize for this issue, but it is completely out of our control. This is happening with all ESP32-based boards. I expect a solution from Espressif within a few days. Once that's done we will release a special firmware upgrade that will also install a new bootloader.
For the firmware release next Tuesday we will integrate this unlock process in the firmware update to avoid any more manual steps.
This thread is being closed as the feedback and information contained here are now covered by the firmware upgrade tools. Please check the downloads section of our website to get the last versions of the tools.
For further support on the tools please feel free to open a new thread.
@Xykon Thank you! Finally I was able to update... your post should be pinned to the top of this thread. I connected my LoPy for the first time today and, as written in the getting started guide, I began with the upgrade process. Thought it would be more straightforward :P Anyway, I understand bugs can happen at this early stage.
@Xykon Many thanks - I tried to use the standard Pycom Windows updater/patcher for 0.9.2.b1, which failed using the official expansion board. It immediately returned with a "fail" every time I tried, although REPL via PuTTY and Pymakr still worked fine. Having read this thread, I wondered if the high baud rate specified in line 12 of updater.py might be the problem. Tried again using the Windows GUI but still no joy. Finally, tried again using a command window and managed to update both my LoPys first time. I'd spent a couple of weeks unable to update them over WiFi and had write-protect problems - hopefully all will be smoother now!
Many thanks to all - especially Xykon - Great work - (another upvote for you!)
@livius said in How to solve the firmware update failures:
I used the official expansion board and without your changes it did not work - your fix works for me.
OK thanks for testing that.
I usually have the module plugged into my PSoC5LP development board's breadboard so I can (among other things like testing I2C and SPI) do remote controlled resets in normal, safeboot and bootloader mode without having to connect any wires. Since that dev board has an RS232 converter on it I connected it to P0/P1 and used an FTDI RS232 cable to get access to the LoPy's UART0. I had a chat with abilio about it and tried again using the expansion board and this time it worked at the higher speed.
Either way I guess the firmware updater should have an optional "high speed upload" switch to choose between 115200 and 921600 bps.
I used the official expansion board and without your changes it did not work - your fix works for me.
It turns out that the problem with the 921600bps firmware upload not working for me was my own test setup... if you use the official expansion board, just using the command line on Windows without making changes to updater.py should work as well.
If you happen to use RS232 for UART0 then reducing the firmware upload speed to 115200bps seems to be necessary.
OK I got the Windows updater to work as well (though not the GUI).
First make the same modifications to
"C:\Program Files (x86)\Pycom\Pycom Firmware Update\Upgrader\bin\updater.py"
Line12:
BAUD_RATE = 115200
Line23:
#self.esp.change_baud(baudrate)
Now open a command prompt and run:

```
cd "C:\Program Files (x86)\Pycom\Pycom Firmware Update\Upgrader"
..\Python27\python.exe bin\updater.py --port COM4 --file firmware\lopy_0.9.2.b1_868.bin
```
Make sure that you choose the correct COM port and firmware file for your module/region.
So far I managed to get the Linux updater to work.

In ./bin/updater I set BAUD_RATE = 115200 and commented out #self.esp.change_baud(baudrate).
I tried the same thing in the Windows version but it still doesn't work.
Interesting that the UART works OK - with a COM port monitor I retrieve data without any problem; only Pymakr cannot connect.

Thank you for the help, but it does not change anything - Pymakr still cannot connect :(

I see that the update script only erases some blocks (3.5MB), so I skipped that and cleared the blocks myself.

If someone has more ideas :)
```python
import os
from machine import UART
uart = UART(0, 115200)
os.dupterm(uart)
```
@livius There are some extra steps in the updater to preserve your flash memory (MAC address, files on /flash). Since you skipped these steps it's likely your boot.py was overwritten.
Upload a new boot.py file with the following code:
```python
import os
from machine import UART
uart = UART(0, 115200)
os.dupterm(uart)
```
P.S. Does anyone know where to find the formatting guide for this forum? I've seen people post source codes with a nice black background and syntax highlighting. How do you do that? Is there a help button somewhere for this?
I do a manual update with:

```
esptool.py --chip esp32 --port COM8 --baud 115200 write_flash 0x210000 wipy_0.9.2.b1.bin -fs 4MB -z
esptool.py --chip esp32 --port COM8 --baud 115200 write_flash 0x204000 partitions.bin -fs 4MB -z
esptool.py --chip esp32 --port COM8 --baud 115200 write_flash 0x201000 bootloader.bin -fs 4MB -z
```
I can connect by FTP and by telnet. Over telnet I get:

```
>>> import os
>>> os.uname()
(sysname='WiPy', nodename='WiPy', release='0.9.2.b1', version='333b92c on 2016-10-26', machine='WiPy with ESP32')
```
But when I start the WiPy 2.0 with the Expansion Board 2.0, I cannot connect with Pymakr - I do not know why it is not "listening" on UART. However, I can do the above operations again - and as I understand it, those are also done over UART?
Thanks for pointing this out. It must be an issue with the windows build of the updater, since the mac version works. We'll try to figure it out and get back to you guys as soon as we can.
The correct 0.9.2.b1 is in the 'firmware' folder. I tried copying into the main 'upgrader' folder, but, the Upgrader still fails. Thought it might be the version of python (I was running 3.5) but, 2.7.12 fails also.
Maybe because it is still wipy_0.9.1.b1.bin, not wipy_0.9.2.b1.bin - I looked into the folder and there is 1, not 2.
Convolutional layers
A convolutional layer (sometimes referred to in the literature as a "filter") is a particular type of neural network layer that manipulates the image to highlight certain features. Before we get into the details, let's introduce a convolutional filter using some code and some examples. This will make the intuition simpler and will make understanding the theory easier. To do this we can use the keras datasets, which make it easy to load the data.
We will import
numpy, then the
mnist dataset, and
matplotlib to show the data:
import numpy from keras.datasets import mnist import matplotlib.pyplot as plt import matplotlib.cm as cm
Let's define our main function that takes in an integer, corresponding to the image in the
mnist dataset, and ...
https://www.oreilly.com/library/view/python-deep-learning/9781786464453/ch05s03.html
|
Right, this problem only arises with import cycles, and that's why we resisted making eager submodule resolution work *at all* for so long (issue 992389 was filed way back in 2004).
We only conceded the point (with issue 17636 being implemented for 3.5) specifically to address a barrier to adoption for explicit relative imports, as it turned out that "from . import bar" could fail in cases where "import foo.bar" previously worked.
The best explanation I could find for that rationale in the related python-dev thread is PJE's post here:
What Victor's python-ideas thread pointed out is that there are actually *3* flavours of import where this particular circular reference problem can come up:
# Has worked as long as Python has had packages,
# as long as you only lazily resolve foo.bar in
# function and method implementations
import foo.bar
# Has worked since 3.5 due to the IMPORT_FROM
# change that falls back to a sys.modules lookup
from foo import bar
# Still gives AttributeError since it
# eagerly resolves the attribute lookup
import foo.bar as bar
While I think the architectural case for allowing this kind of circular dependency between different top level namespaces is *much* weaker than that for allowing it within packages, I do think there's a reasonable consistency argument to be made in favour of ensuring that `from foo import bar` and `import foo.bar as bar` are functionally equivalent when `bar` is a submodule of `foo`, especially since the latter form makes it clearer to the reader that `bar` *is* a submodule, rather than any arbitrary attribute.
I don't think it's a big problem in practice (so I wouldn't spend any time on implementing it myself), but the notion of an IMPORT_ATTR opcode for the "import x.y.z as m" case that parallels IMPORT_FROM seems architecturally clean to me in a way that the proposed resolutions to issue 992389 weren't.
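The third flavour is easy to reproduce in a few lines (the foo/bar package names are hypothetical, written to a temporary directory). Note that on Python 3.7 and later this form now succeeds too, because the sys.modules fallback described above was eventually extended to the `import a.b as c` case:

```python
# Sketch (hypothetical package names): a submodule binds itself via
# "import foo.bar as m" while it is still being initialized. Before 3.7
# this raised AttributeError; on 3.7+ the binding falls back to sys.modules.
import importlib
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "foo")
os.makedirs(pkg)

# foo/__init__.py eagerly imports its own submodule...
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("import foo.bar\n")

# ...and foo/bar.py re-imports itself by dotted name while foo is still
# mid-initialization, so the "bar" attribute is not yet set on foo.
with open(os.path.join(pkg, "bar.py"), "w") as f:
    f.write("import foo.bar as m\nVALUE = 42\n")

sys.path.insert(0, tmp)
importlib.invalidate_caches()

import foo.bar as bar  # resolved via sys.modules, not getattr(foo, "bar")
print(bar.VALUE)  # → 42
```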
https://bugs.python.org/msg291416
|
Tech Tips archive
February 15, 2000
This issue presents tips, techniques, and sample code for the
following topics:
JTree
Runtime.exec
This issue of the JDC Tech Tips is written by Glen McCluskey.
These tips were developed using Java 2 SDK, Standard Edition, v 1.2.2, and are not guaranteed to work with other versions.
JTree is a Swing component used to manipulate hierarchical data
such as directory/file trees. If you've worked with a file browser
of any type you've probably used a tree component. You can collapse
and expand the various nodes in the hierarchy. This tip will
cover some basics in using JTree.
A tree component consists of a root node and a set of child nodes.
Each node contains a user object (like a string) and zero or more
child nodes. For example, you might have a tree structure like
this:
testing
one
1.1
two
2.1
three
3.1
3.2
3.3
The root node, testing, has three children. These child nodes have
children as well. A node with no children, such as 3.2, is a leaf
node.
Nodes are represented by the DefaultMutableTreeNode class, which
implements the interfaces TreeNode and MutableTreeNode.
Mutable means that the node can change, by adding or deleting
children, or by changing the user object.
Here is a simple example of using JTree:
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.event.*;
import javax.swing.tree.*;
import java.util.Vector;
public class JTreeDemo {
public static void main(String args[]) {
JFrame frame = new JFrame("JTree Demo");
// handle window close
frame.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent e) {
System.exit(0);
}
});
JPanel panel1 = new JPanel();
// set up tree root and nodes
DefaultMutableTreeNode root =
new DefaultMutableTreeNode("testing");
DefaultMutableTreeNode one =
new DefaultMutableTreeNode("one");
one.add(new DefaultMutableTreeNode("1.1"));
one.add(new DefaultMutableTreeNode("1.2"));
DefaultMutableTreeNode two =
new DefaultMutableTreeNode("two");
two.add(new DefaultMutableTreeNode("2.1"));
two.add(new DefaultMutableTreeNode("2.2"));
DefaultMutableTreeNode three =
new DefaultMutableTreeNode("three");
Vector vec = new Vector();
for (int i = 1; i <= 25; i++)
vec.addElement("3." + i);
JTree.DynamicUtilTreeNode.createChildren(three, vec);
root.add(one);
root.add(two);
root.add(three);
// set up tree and scroller for it
// also set text selection color to red
JTree jt = new JTree(root);
DefaultTreeCellRenderer tcr =
(DefaultTreeCellRenderer)jt.getCellRenderer();
tcr.setTextSelectionColor(Color.red);
JScrollPane jsp = new JScrollPane(jt);
jsp.setPreferredSize(new Dimension(200, 300));
// set text field for echoing selections
JPanel panel2 = new JPanel();
final JTextField tf = new JTextField(25);
panel2.add(tf);
// handle selections in the tree
TreeSelectionListener listen;
listen = new TreeSelectionListener() {
public void valueChanged(TreeSelectionEvent e) {
// get selected path
TreePath path = e.getPath();
int cnt = path.getPathCount();
StringBuffer sb = new StringBuffer();
// pick out the path components
for (int i = 0; i < cnt; i++) {
String s =
path.getPathComponent(i).toString();
sb.append(s);
if (i + 1 != cnt)
sb.append("#");
}
tf.setText(sb.toString());
}
};
jt.addTreeSelectionListener(listen);
panel1.add(jsp);
frame.getContentPane().add("North", panel1);
frame.getContentPane().add("South", panel2);
frame.setLocation(100, 100);
frame.pack();
frame.setVisible(true);
}
}
The node structure for the tree is constructed in a straightforward
way. As nodes are created:
DefaultMutableTreeNode one =
new DefaultMutableTreeNode("one");
one.add(new DefaultMutableTreeNode("1.1"));
one.add(new DefaultMutableTreeNode("1.2"));
they are added to the parent node:
root.add(one);
Creating nodes one at a time, however, is tedious for large
trees. So the demo illustrates an alternative. This approach
uses the createChildren method of the JTree.DynamicUtilTreeNode
class to create a series of nodes from a Vector object.
DefaultMutableTreeNode three =
new DefaultMutableTreeNode("three");
Vector vec = new Vector();
for (int i = 1; i <= 25; i++)
vec.addElement("3." + i);
JTree.DynamicUtilTreeNode.createChildren(three, vec);
In this case, it adds 25 children to the "three" node.
Once a tree is set up and displayed, how do you handle node
selection in the tree? The demo shows how to set up a tree
selection listener, and get and display a path. The path is
a sequence of nodes from the root to the currently-selected node
in the tree. A path might be:
testing#three#3.7
Notice that when you select a node in the tree the path is
displayed in the lower text box.
There are many other aspects of JTree. For example, the class
DefaultTreeCellRenderer allows you to control the way nodes are
displayed. The demo above uses cell renderers, in a basic way;
it specifies that the current selection should be displayed
in red. But there's more that you can do with this class. For
example, you can use it to specify an icon to be displayed
when nodes are drawn.
The December 14, 1999 issue of the Tech Tips discussed how RMI (Remote Method Invocation) can be used to
communicate between programs. Another technique for communication
is the Runtime.exec method, which you can use to run other programs
from within a Java application. For example, consider this simple C program:
#include <stdio.h>
int main() {
printf("testing\n");
return 0;
}
This application writes a string "testing" to standard output, and
then terminates with an exit status of 0.
To execute this simple program within a Java application, compile
the C application:
$ cc test.c -o test
(your C compiler might require different parameters) and then
invoke it from your Java application, reading and echoing the
program's output.
After all the output has been read, waitFor is called to
wait on the program to terminate, and then exitValue is called to
get the exit value of the program. If you've done much systems
programming, for example with UNIX system calls, this approach
will be a familiar one. (This example assumes that the current
directory is in your shell search path; more on this subject
below).
If you're on a UNIX system, you can replace:
runCommand("test");
with:
runCommand("ls -l");
to get a full (long) listing of files in the current directory.
But getting a listing in this way highlights a fundamental
weakness of using Runtime.exec -- the programs you invoke aren't
necessarily portable. That is, Runtime.exec is portable, and
exists across different Java implementations, but the invoked
programs are not. There's no program named "ls" on Windows
systems.
Suppose that you're running Windows NT and you decide to remedy
this problem by saying:
runCommand("dir");
where "dir" is the equivalent command to "ls". This doesn't work,
because "dir" is not an executable program. Instead it is
built into the shell (command interpreter) CMD.EXE. So you need
to say:
runCommand("cmd /c dir");
where "cmd /c command" says "invoke a shell and execute the single
specified command and then exit." Similarly, for a UNIX shell like
the Korn shell, you might say:
runCommand("ksh -c alias");
where "alias" is a command built into the shell. The output in this
case is a list of all your shell aliases.
In the example above of obtaining a directory listing, you can use
portable Java facilities to achieve the same result. For example,
saying:
import java.io.File;
public class DumpFiles {
public static void main(String args[]) {
String list[] = new File(".").list();
for (int i = 0; i < list.length; i++)
System.out.println(list[i]);
}
}
gives you a list of all files and directories in the
current directory. So using ls/dir probably doesn't make sense in
most cases.
A situation where it makes sense to use Runtime.exec is one
in which you allow the user to specify an editor or word processor
(like Emacs or Vi or Word) to edit files. This is a common feature
in large applications. The application would have a configuration
file with the local path of the editor, and Runtime.exec would be
called with this path.
One tricky aspect of Runtime.exec is how it finds files. For
example, if you say:
Runtime.getRuntime().exec("ls");
how is the "ls" program found? Experiments with JDK 1.2.2 indicate
that the PATH environment variable is searched. This is just like
what happens when you execute commands with a shell. But the
documentation doesn't address this point, so it pays to be careful.
You can't assume that a search path has been set. It might make
more sense to use Runtime.exec in a limited way as discussed above,
with absolute paths specified.
There's also a variant of Runtime.exec that allows you to specify
environment strings.
http://java.sun.com/developer/TechTips/2000/tt0209.html
|
Several months ago, I got very interested in Linux processes, socket programming, and IPC. While I was very familiar with the concepts themselves, I had never actually written programs that utilized them. In this article, I will look at the simpler of the topics, using the Linux fork() function to generate a new process.
fork() is surprisingly easy to use. Simply make a call to fork(), and you have a new process. In the parent, the return value from fork() is the process ID of the child; in the child process, the returned value is 0. This makes storing the returned value from fork() necessary to determine which process is currently executing. For example, if I have this function call:
pid = fork();
pid in the parent process will be assigned the value of the child process ID, and inside the child process, pid will be assigned 0. I really thought there would be more to it, but in a nutshell, that is basically it. Below is an example program. The program will fork a process. The parent will count to 100 by 2, starting at 0, and the child will count to 101 starting at 1. Both will display to the same console, so the output will need to identify itself as the child or parent process. The example was compiled with GCC under Cygwin with no problems.
//Two C headers needed for the process ID type and the system calls to fork processes
#include <sys/types.h>
#include <unistd.h>
//Needed for exit()
#include <cstdlib>
//I prefer the C++ style IO over the C style
#include <iostream>
//Needed to call IOSTREAM in the standard namespace without prefix
using namespace std;
int main()
{
//variable to store the forked process ID
pid_t process_id;
//counter to be used in the loops
int counter;
//Fork the process ID, and if there is an error, exit the program with an error
process_id = fork();
if (process_id < 0)
{
cerr << "Process failed to create!" << endl;
exit(1);
}
//If the process ID is not 0, it is the parent process. The parent process will counter from 0 to 100 and increment
//by two each time. The child process will have a process_id of 0, and increment by 2 starting from 1.
if (process_id)
{
//Let the user know what the parent ID is and the generated child ID
cout << "I am the parent ID. My PID is: " << getpid() << " and my Childs process ID is " << process_id << endl;
for (counter = 0; counter <= 100; counter += 2)
{
cout << "Parent (" << getpid() << "): " << counter << endl;
}
}
else
{
//Tell the user what the current process ID is using getpid to verify it is the child process, then show
//that process_id is 0
cout << "I am the Child ID " << getpid() << " and I think the assigned process ID is " << process_id << endl;
for (counter = 1; counter <= 101; counter += 2)
{
cout << "Child(" << getpid() << "): " << counter << endl;
}
}
return 0;
}
Being able to spawn separate processes is incredibly useful for handling multiple connections in a server program. Sharing variables between multiple processes gets really interesting. This is where the various IPC methods come into play, but that's for another article.
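For comparison, the same two-branch pattern can be sketched with Python's POSIX-only os.fork (this sketch is not part of the original article; it assumes a Unix-like system):

```python
# Sketch only: the fork() pattern described above, via Python's os.fork.
# As in C, fork() returns the child's PID in the parent and 0 in the child.
import os

pid = os.fork()

if pid == 0:
    # Child branch: fork() returned 0 here.
    print("child %d: fork() returned 0" % os.getpid())
    os._exit(7)  # terminate the child immediately with exit status 7
else:
    # Parent branch: fork() returned the child's PID.
    _, status = os.waitpid(pid, 0)  # block until the child terminates
    print("parent %d: child %d exited with status %d"
          % (os.getpid(), pid, os.WEXITSTATUS(status)))
```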
http://digiassn.blogspot.com/2005/12/fork.html
|
Apr 20, 2010 08:13 AM | digsy
I need to make a dropdownlist webusercontrol that shows a list of years.
The control needs to be flexible enough so that I can tell it show X number of years before and after this year. So if I said show 2 years before after this year the dropdownlist would show 2008, 2009, 2010, 2011 & 2012.
In the past I've stored the "X years" for later use by exposing a property and then storing the value in a hidden field. EG:
public string NumberYears
{
    set { NumberYearsHiddenField.Value = value; }
    get { return NumberYearsHiddenField.Value; }
}
I want to store the "X years" values that I send to the control the first time - so not a value that I send in a postback. Is there a better way than using a hidden field ?
Apr 20, 2010 08:25 AM | Menno van den Heuvel
Put it in the viewstate:
public string NumberYears
{
    set { ViewState["NumberYears"] = value; }
    get { return (string)(ViewState["NumberYears"] ?? string.Empty); }
}
The viewstate of course also exists as a hidden field in the form, but it's a bit more convenient, and you can have it encrypted if security is important.
Menno
https://forums.asp.net/t/1549236.aspx?Retaining+a+value+in+a+webusercontrol
|
#include <sys/ddi.h>
#include <sys/sunddi.h>
void ddi_io_rep_put8(ddi_acc_handle_t handle, uint8_t *host_addr, uint8_t *dev_addr, size_t repcount);
void ddi_io_rep_put16(ddi_acc_handle_t handle, uint16_t *host_addr, uint16_t *dev_addr, size_t repcount);
void ddi_io_rep_put32(ddi_acc_handle_t handle, uint32_t *host_addr, uint32_t *dev_addr, size_t repcount);
Solaris DDI specific (Solaris DDI).
handle Data access handle returned from setup calls, such as ddi_regs_map_setup(9F).
host_addr Base host address.
dev_addr Base device address.
repcount Number of data accesses to perform.
These routines generate multiple writes to the device address, dev_addr, in I/O space. repcount data is copied from the host address, host_addr, to the device address, dev_addr. For each input datum, the ddi_io_rep_put8(), ddi_io_rep_put16(), and ddi_io_rep_put32() functions write 8 bits, 16 bits, and 32 bits of data, respectively, to the device address.
http://backdrift.org/man/SunOS-5.10/man9f/ddi_io_rep_putw.9f.html
|
direct.showbase.CountedResource
from direct.showbase.CountedResource import CountedResource
class CountedResource
This class is an attempt to combine the RAII idiom with reference counting semantics in order to model shared resources. RAII stands for "Resource Acquisition Is Initialization" (see 'Effective C++' for a more in-depth explanation).
When a resource is needed, create an appropriate CountedResource object. If the resource is already available (meaning another CountedResource object of the same type already exists), no action is taken. Otherwise, acquire() is invoked, and the resource is allocated. The resource will remain valid until all matching CountedResource objects have been deleted. When no objects of a particular CountedResource type exist, the release() function for that type is invoked and the managed resource is cleaned up.
Usage
Define a subclass of CountedResource that defines the @classmethods acquire() and release(). In these two functions, define your resource allocation and cleanup code.
Important
If you define your own __init__ and __del__ methods, you MUST be sure to call down to the ones defined in CountedResource.
Notes
Until we figure out a way to wrangle a bit more functionality out of Python, you MUST NOT inherit from any class that has CountedResource as its base class. In debug mode, this will raise a runtime assertion during the invalid class’s call to __init__(). If you have more than one resource that you want to manage/access with a single object, you should subclass CountedResource again. See the example code at the bottom of this file to see how to accomplish this (This is useful for dependent resources).
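The counting behaviour described above can be sketched in plain Python. This is an illustrative re-implementation only, not Panda3D's actual class; the DatabaseHandle subclass and the events log are invented for the example:

```python
# Illustrative re-implementation of the counting idea -- NOT the real
# direct.showbase.CountedResource class from Panda3D.
class CountedResourceSketch:
    _count = 0  # per-subclass instance count (set on the subclass below)

    @classmethod
    def acquire(cls):
        raise NotImplementedError  # subclasses allocate the resource here

    @classmethod
    def release(cls):
        raise NotImplementedError  # subclasses clean up the resource here

    def __init__(self):
        cls = type(self)
        if cls._count == 0:
            cls.acquire()       # first object of this type: allocate
        cls._count += 1         # note: sets the attribute on the subclass

    def __del__(self):
        cls = type(self)
        cls._count -= 1
        if cls._count == 0:
            cls.release()       # last object gone: clean up


events = []  # hypothetical log so we can observe acquire/release

class DatabaseHandle(CountedResourceSketch):
    @classmethod
    def acquire(cls):
        events.append("acquired")

    @classmethod
    def release(cls):
        events.append("released")


a = DatabaseHandle()   # triggers acquire()
b = DatabaseHandle()   # resource already held: no extra acquire()
del a                  # still one holder left
del b                  # last holder gone: triggers release()
print(events)          # → ['acquired', 'released']
```

The explicit `del` calls rely on CPython's deterministic reference counting; the real class adds assertions to guard against the inheritance pitfalls mentioned in the notes.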
https://docs.panda3d.org/1.10/python/reference/direct.showbase.CountedResource
|
I've spent the last month or so on a project that, due to factors beyond my control, must be in C++ and not D.
A little of my background first: I used C/C++ as my primary language in the late 90's and early 2000's. Liked it at the time, but then got fed up certain aspects and went looking for alternatives. Long story short, I found D, fell in love with it, and have been using it extensively for years.
Now that I'm back in C++-land (hey, it beats the hell out of writing a whole game entirely in a dynamic service/server-compiled toy from the same folks who brought us the wonderful Flash), I've built up a list of the top things I miss from D when using C++. There's plenty of other great stuff in D that I miss, but these are the ones...so far...that are proving to be the most painful to live without in the particular project I'm working on (an iOS/Android puzzle-ish game). Other projects would likely have very different lists, of course.
Incidentally, this also serves as a deeper explanation for anyone who didn't understand the vague heckling of C++ in my previous post.
Top D Features I Miss In C++
Proper module system: One of the main things that drove me away from C++ in the first place. The header-file/text-inclusion hack is a colossal, anachronistic pain in the ass. There are so many problems with this one issue alone, I'm not sure they're even worth listing. But I will anyway because they deserve public ridicule:
C++'s "Module" System Sucks
#ifndef _FILENAME_H_: All the reason you need to know C++'s module system sucks. Retarded hack.
Identifiers prefixed with pseudo-module vomit: Let's see... CIwFVec4... CIwGameActorImage... WTF? Can I haz FVec4 and ActorImage, plz? Yea, I know there's namespaces, but not everyone seems to use them. Somehow I have a feeling there's a good reason for that - besides just easier interfacing with D.
DRY? What's DRY? Derp derp durrr....
Scrambled modules! Let's chop up our module and play "What goes where?!?" Implementation files: non-inlined function bodies, non-member variables (which aren't evil globals in a language with proper modules), and private preprocessor directives. Header files: member variables, inlineable function bodies, public preprocessor directives, member accessibility (public/private), non-DRY duplication of non-inlined function signatures, plus includes and forward declarations that should be private but must be leaked out due to something else in the header needing to use them. Whee!
Header files are interface documentation? LIES! Seriously, look at your header files. You're really going to try to tell me that only contains the public interface? Then what's that private: section doing in there? And that forward declaration for an external class? If you want automatic documentation, run a damn documentation generator. They exist.
Headers affected by whatever headers were included before it: Hygiene? What's hygiene? And why is this compiling so slow? Durrr...I wonder why?
Fuck precompiled headers: Talk about "solving the wrong problem".
No RDMD (automatic dependency finding): Every...fucking...source file must be manually specified. I remember to do this about half the time.
And the #1 reason C++'s "module" system sucks: What is this, 1970?
Actual properties:
foo->setX(foo->getX() + 1); // Suck my nutsack, C++
Metaprogramming that doesn't involve a text preprocessor or gobs of excessive template instantiations.
Sane type-name syntax: In C++, how do you spell int*[]* delegate()[]* (Ie, a pointer to an array of delegates which take no params and return a pointer to an array of int pointers)? Wrong! The correct answer is "Fuck you, I'm not attempting that shit in C++!"
Type deduction: Unfortunately, Marmalade doesn't yet support C++11, so I don't even get auto.
Closures, lambdas, nested functions: There's a good reason Qt in C++ requires a special preprocessor. (Yea, I'll care about C++11's features when I can rely on them actually existing in my compiler.)
No forward declarations: Was I wrong before? Maybe this is still 1970?
Actual reference types: And no, I don't mean Foo&.
Non-schizo virtual: C++'s member funcs are non-virtual by default...except the ones that are virtual by default and can't be sealed as final. WTF? WTF indeed. (Yes, I do understand how it works, but it works stupidly.)
Default initialization: Granted, I'd rather be told at compile-time about possibly using non-inited vars (ie, C#-style)...But garbage-initialization? Screw that. Especially for pointers, I mean, fuck, really?! Pain in the damn ass.
Fast compilation: Hooray for the world's slowest-compiling language! Whee!!
Polysemous literals: C++ can barely figure out that 2 can be a float. Gotta babysit it with 2.0f. Great.
Ranges: Fuck iterator pairs.
Named immutables/constants that actually exist in the real code, not just the preprocessor.
Designed by someone who actually fucking understands proper engineering.
Things I'm surprised I don't miss yet (but may still miss later and would undoubtedly miss if my project were for a different domain or maybe even just using different engine):
Slices and string/array manipulation: Fuck null-terminated strings, fuck pointer/length pairs, and fuck everything in string.h.
Associative arrays: Just works. Any key type. No syntactic bullshit.
Scope guards: Also known as "RAII that isn't a contrived syntactical kludge".
GC: You can pry manual memory management from my cold dead hands, but that doesn't mean I always wanna fuck with it.
- Foreach: I'm still convinced foreach(i; 0..blah) is essential for any modern language, but my prior experience with C/C++ seems to have left for(int i=0; i<blah; i++) irreparably burned into my brain.
And ok, to be perfectly fair to C++, let's turn things around a bit...
Things from C++ I miss when I go back to D:
- .......
- Ummm.....
- Not a fucking thing.
UPDATE (2012-09-12): Added "RDMD" to reasons C++'s module system sucks. Can't believe I forgot that before, that's a big one.
1 comment for "Top D Features I Miss In C++"
Well. I can see you love D. I like it the most together with Rust maybe, but both after C++, for the following reasons:
1. Top reason: it is really, really practical: lots of libraries, good performance, very usable if you know learn good style (C++11/14).
2. GC: Well, when I don't want it, it is annoying.
3. Type system: non-uniform -> polymorphic vs structs. Two worlds, bad for generic programming. For me these 2. and 3. are the big mistakes. Even if 2. allows nice things, yes, but when you don't want it (embedded), it hurts. I know about @nogc, but it is still designed for a GC in general. 3. is directly a bad choice, in my opinion, because the type system should go more in the direction of type-erasure a la boost.any.
Best D points: immutability, a reasonable level of compatibility with C and C++, threading looks good, pure looks nice, and CTFE is really great. I wish C++ had a general solution for CTFE; I even went to whine to the std proposals for C++, but I think it is not practical.
https://semitwist.com/articles/article/view/top-d-features-i-miss-in-c
|
So you’ve decided to build a Single Page App with React, and everything seems to be going dandy. You’ve got yourself some wireframes, a HTML file and a few components, and then you decide to add some routes. Easy, right?
Well, that's what you thought until you started reading the internet. But now you're worrying about isomorphism and the HTML5 history API and even how to pass props to your view components again. And if you thought learning all this was painful, imagine rewriting your application when the routing library's API breaks in a few weeks.
Routing doesn’t have to be complicated, so why stress yourself out with libraries when a hand-rolled router can take less than 20 lines? Especially seeing that if you’d have just kept following this guide, you would have had something working in only two minutes…
Hash-based routing in two minutes
Routing means doing something in response to a change in the browser’s current URL. There are two ways you can accomplish this:
- pushState routing, using the HTML5 History API
- hash-based routing, using the portion of the page’s URL starting with
#, i.e. the hash.
Hash-based routing is by far the simpler of the two alternatives, and with the exception of a few specific cases, it’ll usually do the job. So let’s go with this.
Implementing hash-based routing with React is simple; just choose what to render based on the string stored in
window.location.hash. We’ll do this once on page load, and again each time the browser emits the
hashchange event:
// Handle the initial route
navigated()

// Handle browser navigation events
window.addEventListener('hashchange', navigated, false);
Given the above two lines, all we need to do to finish our router is implement the
navigated function. And since you won’t learn anything without putting it into practice, let’s do this as an exercise.
Exercise 1: Create a hash-based router
The specification for
navigated is simple; it calls
ReactDOM.render, with the passed in component depending on the value of
window.location.hash.
Your task is to implement the
navigated function, handling the following hashes:
- For #/, use a component containing the text I'm amazing! I've made a Raw React Router!
- Otherwise, use a component containing the text Not Found
If you need a HTML file to test your script with, use the file from part 1‘s Exercise 1.
Once you have tested your work by entering in various URLs, compare your solution to mine by touching or hovering your mouse over this box:
function navigated() {
  // Choose which component to render based on browser URL
  var component = window.location.hash == "#/"
    ? React.createElement('div', {}, "Index Page")
    : React.createElement('div', {}, "Not Found")

  // Render the new component to the page's #react-app element
  ReactDOM.render(
    component,
    document.getElementById('react-app')
  );
}
Congratulations, you now know how to build a working router! Given enough time, you could use what you’ve learned to build a full-featured routing system.
Of course, while this incredibly basic router works, trying to integrate it with our contact list app from parts one and two is not going to scale. So let’s learn how to apply these fundamentals to a real app.
This is part three of my series on Raw React. If you’re new to React.js, start from part one. Otherwise, you can get your bearings at part two’s GitHub repository.
Managing the current location
In the example above, we’ve directly referenced
window.location.hash when choosing what to render:
var component = window.location.hash == "#/"
  ? React.createElement('div', {}, "Index Page")
  : React.createElement('div', {}, "Not Found")
As we learned in part one, React apps don’t re-render themselves. Because of this, when
window.location.hash changes value, we need to manually call
ReactDOM.render to update the DOM. We had no trouble doing this in the first exercise, but how would we go about applying this to our contact list application?
Review: The Story So Far
In our contact list application from parts one and two,
ReactDOM.render is never called manually. Instead, it is called from within our
setState function. But what does the
setState function have to do with rendering?
The
setState function – as you may expect – is used to update the current application state. This state is stored in a global
state variable – but crucially – we make sure to never update this variable directly. And because all updates to
state happen through the
setState function, we also know that the only time the app must be re-rendered is within each call to
setState. That is – as long as the application’s state is completely stored in
state.
One more thing. As
setState is a global, we could call it from anywhere. But we’ve decided to only call
setState from functions which directly handle user input. We call these functions actions, and place them all in a single location within the source code, passing them via
props to where they are required.
Location as state
But now that we’ve remembered how our app fits together, it seems we have a problem: our simple hash-based router requires that we call
ReactDOM.render manually, but our app requires that
ReactDOM.render is called from within
setState.
Can you think of a way to reconcile this? Have a think about it, then check your intuition by touching or hovering your mouse over this box:
Rather than reading window.location.hash each time we render, we can instead store it inside our state object by calling setState within the hash change handler. In other words, we can turn our navigated function into an action.
Exercise 2: Adding routing to your contact list
Lets continue from where we left off in part 2, and give our contact list app some routes! To start, we’ll keep it simple by providing only two routes:
#/contacts, which displays the existing contact list
- a default route, which displays a “Not Found” message and a link to
#/contacts
Your task is to implement the following changes:
- Add a navigated function which stores the current hash in state.location
- Call navigated on page load, and on subsequent hash change events
- Modify setState to render the correct content for the current value of state.location
Once you’ve got it working (or gotten stuck), compare your answer with mine:
Of course, a single page app shouldn’t literally be a single page. So let’s add an edit form!
Extracting route parameters
We’d like to be able to specify the contact we want to edit by adding its
id to the hash, following this pattern:
#/contacts/<id>/
The
<id> part of the above hash is called a route parameter. Note how the parameter is delineated by
/ characters; you might be familiar with this if you've used a server-side tool like Express or Ruby on Rails.
Actually, it isn’t just route parameters that we delineate with
/ – our route names are also sandwiched between slashes. Given this is the case, let's make our job easier by storing an array of parts in
state.location, as opposed to the hash itself:
['contacts', '<id>']
Did you notice how I didn’t write
['#', 'contacts', '<id>'] or
['', 'contacts', '<id>', '']? While these are perfectly valid ways of storing your current route, the information we care about is all located between the first and final
/ characters. So let’s cut the crusts off:
// Removes the `#`, and any leading/final `/` characters
window.location.hash.replace(/^#\/?|\/$/g, '').split('/');
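To see this one-liner in action, here is a small pure helper (the name parseLocation is mine) applied to a couple of sample hashes:

```javascript
// Turn a raw hash into an array of route parts (helper name assumed)
function parseLocation(hash) {
  // Removes the leading `#`, and any leading/final `/` characters
  return hash.replace(/^#\/?|\/$/g, '').split('/');
}

console.log(parseLocation('#/contacts/3/')); // ['contacts', '3']
console.log(parseLocation('#/contacts'));    // ['contacts']
```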
But James, you ask – won’t routing based on URLs get me into trouble in the long term? If my URL structure changes, refactoring it will be a nightmare! Actually, you’re spot on.
URL parsing tools
While this article is about the fundamentals of routing with React, for any real project you’ll probably want to use a tool to name your routes and route parameters, and to lookup/generate URIs using these names.
The tool I use for this purpose is called uniloc, and is part of Unicorn Standard – my collection of tools for JavaScript-based Single Page Applications. And on the odd chance you’d like to learn how to build a real react app, you can get a version of the contact list project extended to use uniloc just by signing up to hear about my latest articles and tools! But I digress.
Get the uniloc/Raw React example project
Exercise 3: Selecting component props by hash
Now that you know how to extract the route parameters, let's actually set up a (read-only) contact form. Here's the view component we'll use:
var ContactView = React.createClass({
  propTypes: {
    contacts: React.PropTypes.array.isRequired,
    id: React.PropTypes.string.isRequired,
  },

  render: function() {
    var key = this.props.id;
    var contactForm = this.props.contacts.filter(function(contact) {
      return contact.key == key;
    })[0];

    return (
      !contactForm
      ? React.createElement('h1', {}, "Not Found")
      : React.createElement('div', {className: 'ContactView'},
          React.createElement('h1', {className: 'ContactView-title'}, "Edit Contact"),
          React.createElement(ContactForm, {
            value: contactForm,
            onChange: function(){},
            onSubmit: function(){},
          })
        )
    );
  },
});
Note the difference in props: while the contact objects from part two specify a key value and no id, this view's props specify an id but no key. This is because key is a special prop which is consumed by React. See the React documentation for more details.
Your task is to display the
ContactView under the
/contacts/<id>/ route, and add links to this view to the contact list.
Once you’ve got this working, compare your answer with mine:
Your app is finally starting to take shape! You can add contacts, navigate between pages, and you can even use the browser forward/backward buttons (don’t laugh, a lot of web apps fall to pieces when the user touches them).
But while it’s great that you’ve managed to get this far, having a giant switch statement smack bang in the middle of
setState obviously isn’t going to scale. So let’s fix this with an
Application component.
The Application component
The
Application component is the component we’ll pass to
ReactDOM.render. It takes the
state global as its
props, and returns the rest of the application:
ReactDOM.render( React.createElement(Application, state), document.getElementById('react-app') );
This seems pretty simple, but it’s actually a really big deal. Why?
One of React’s biggest strengths is that the stateless components typically used with it encourage you to design apps which are easy to reason about. By specifying your entire user interface’s state with a single
state object, you cleanly separate your app into two parts:
- A model, which manages your application state
- A view, which defines how to render that state
Your
Application component is the interface between model and view. Because of this, a well written
Application component will at a glance show you how your entire app fits together.
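As an illustration only (not the answer to the upcoming exercise), the route switch inside such a component might boil down to a pure function like this; the function and view names are all assumptions:

```javascript
// Decide which view name corresponds to the current location array
// (chooseView and the view names are illustrative, not the article's code)
function chooseView(location) {
  switch (location[0]) {
    case 'contacts':
      // A second part means a specific contact was requested
      return location[1] ? 'ContactView' : 'ContactsView';
    default:
      return 'NotFoundView';
  }
}

console.log(chooseView(['contacts']));      // ContactsView
console.log(chooseView(['contacts', '3'])); // ContactView
console.log(chooseView(['bogus']));         // NotFoundView
```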
Actions, Callbacks &
Application
As part of wiring your application together, your
Application component is also responsible for passing the correct actions to the views which it renders:
React.createElement(ContactsView, Object.assign({}, this.props, { onChangeContact: updateNewContact, onSubmitContact: submitNewContact, }));
Haven't seen Object.assign before? Read about it at Mozilla Developer Network.
But as our app currently stands, all actions are global functions. So why not just use these action functions directly from the view components?
When asking this question, it helps to remember the reason we’re using React in the first place – we want to be able to compose our application from reusable components. Using global actions within view components ties them to the specific application, while at the same time hiding the component’s dependencies from anyone trying to grok the app through its
Application component.
Exercise 4: Implement your
Application component
Implementing your
Application component is simple: all you need to do is shift the functionality which is currently inside your
setState function to a new
Application component.
Also, since we’ll eventually want to handle user input in our edit form, let’s take this exercise as an opportunity to add empty
updateContactForm and
submitContactForm actions. These actions should be passed from
Application into the
ContactView component, which will in turn pass them to
ContactForm.
Your task is to implement the
Application component, with empty
updateContactForm and
submitContactForm handlers.
When you’ve finished, compare your implementation with mine. It should be nearly identical:
And with that simple change, your application’s guts are now all clean and tidy! But your users won’t care if the “Save” button doesn’t work. So let’s fix it.
Navigating programmatically
Action functions like the one called by “Save” often need to send the user to a different page. But while
<a href="#/..."> tags provide a simple way to let users navigate, they won't help us when we want to change the location programmatically. So what are we to do?
Our first thought might be to simply change our “Save” button to an
<a> tag styled like a button. But this will not allow us to call an action function; the user will be able to navigate, but won’t be able to save the form’s contents. So instead, let’s mimic the functionality of
<a> from within our action by using the
window.location.replace function:
// Navigate to `#/contacts`
window.location.replace(
  window.location.pathname + window.location.search + '#/contacts'
);
Exercise 5: Navigate within an action
Even if the user clicks the “Save” button before the form input has changed, they will still expect to be taken back to the contacts list.
Your task is to implement the
submitContactForm action such that it sends the user to the contact list.
When you’ve finished, compare your implementation with mine by touching or hovering over this box. It should be nearly identical:
function submitContactForm() {
  window.location.replace(
    window.location.pathname + window.location.search + '#/contacts'
  );
}
Wonderful, the user will no longer be confused when they press the save button and nothing happens! But, given the app immediately displays “Not Found” when it is loaded, there’s a good chance they’ll never even see the save button. Let’s do something about that.
Redirecting
Redirecting is just a fancy way of saying “sending the user to location A when they request location B”. And now that we know how to navigate programmatically, implementing redirection is simple: just update
window.location as we’re handling navigation events:
function navigated() {
  var normalizedHash = window.location.hash.replace(/^#\/?|\/$/g, '');

  if (normalizedHash == 'some-route') {
    window.location.replace(
      window.location.pathname + window.location.search + '#/another-route'
    );
  } else {
    setState({ location: normalizedHash.split('/') });
  }
}
The only trick here is to make sure you don’t update
setState until you’ve reached the final destination. This is because updating
window.location will cause
navigated to be called once the browser has updated its URL. If you update
setState on each step you take, you’ll end up rendering each intermediate route. This will cause performance issues; more importantly though, it will look terrible.
Why redirects are necessary
Within a single page app (or the web in general), you generally want to ensure that what is visible in the address bar:
Corresponds to what is actually visible within the page:
James, you say – I already know that. I’ve just read through an entire article on routing for crying out loud – and besides – this still doesn’t explain why redirects are necessary. But the reason I brought it up is that sometimes you have a URL which corresponds to more than one page.
The textbook example of this is the root location, i.e.
#/. Do you know why the content underneath
#/ may vary? Have a quick think about it, then check your answer by touching or hovering your mouse over this box:
The root location is the first location a user will see when they open an app. However, what a user wants to see initially will depend on what they’ve seen before.
Is the user a logged-in customer? They’ll probably want to see their contacts. Are they someone completely new? We’ll want to show them the registration form.
In order to ensure that the app’s URL and content match, you’ll need to redirect the user from the root location to the hash which matches their desired view.
Exercise 6: Handle the root location
Now that you know why you need a root redirect and how to implement it, why not give it a shot?
And while you're at it, since the app now navigates programmatically in multiple places, refactor your code by replacing your existing call to
window.location.replace with a new
startNavigating(newLocation) action.
Your task is to implement a redirect from the root location to
#/contacts, using your new
startNavigating function.
Once you’ve finished, compare your implementation with mine by touching or hovering over this box. It should be nearly identical:
function submitContactForm() {
  startNavigating('/contacts');
}

function navigated() {
  // Strip the leading `#`, and any leading/trailing '/' characters
  var normalizedHash = window.location.hash.replace(/^#\/?|\/$/g, '');

  if (normalizedHash === '') {
    // Redirect for default route
    startNavigating('/contacts');
  } else {
    // Otherwise update our application state
    setState({location: normalizedHash.split('/')});
  }
}

function startNavigating(hash) {
  window.location.replace(
    window.location.pathname + window.location.search + '#' + hash
  );
}
Great work! At this point, there is only one thing left: making the edit form work. And, given you’ve completed part 2, you should already be capable of completing this yourself! But before you do, there is one more thing you should know about:
Transitioning between locations
If we were to draw a timeline of the transition between two routes, it would look something like this:
When the current location changes primarily because the user wants to see something else, this makes total sense. The user’s intent is to navigate, so the only property under
state which needs to change is
location.
But what happens when navigation happens for some other reason, like submitting changes to a contact? Let’s have a look! Assuming we implement our
submitContactForm action with something like the following:
startNavigating('/contacts'); setState({contacts: updateContacts});
Our flow will look like this:
There is a problem here. Do you see it? Once you think you do, check your understanding by touching or hovering your mouse over this box:
ReactDOM.render first renders the new view component without the updated contacts, then re-renders it with the updated data. This slows things down, but more importantly will also result in old data briefly flashing on screen.
To eliminate this problem, we can add a new
transitioning property to our
state object. When navigation starts,
state.transitioning will be set to
true, and when complete, it will change back to
false. By ensuring
setState only renders when
state.transitioning is
false, we eliminate the double render.
By checking state.transitioning in our Application component's render method instead, it would also be possible to apply CSS transitions between views.
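A minimal, self-contained sketch of the transitioning guard follows; the render counter is added purely to demonstrate that only one render happens, and the exact shapes of state and setState are assumptions:

```javascript
var renderCount = 0;
var state = { location: ['contacts'], contacts: [], transitioning: false };

// Stand-in for ReactDOM.render, counting how often it runs
function render() {
  renderCount++;
}

// Only render once the transition has completed
function setState(changes) {
  Object.assign(state, changes);
  if (!state.transitioning) render();
}

// Simulate a "save" that navigates and updates data:
setState({ transitioning: true });                           // navigation starts - no render
setState({ contacts: [{ key: 1 }] });                        // data update - still no render
setState({ location: ['contacts'], transitioning: false });  // arrival - exactly one render

console.log(renderCount); // 1
```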
Exercise 7: Complete implementation of the edit form
With the contact list app being almost complete, there is only one thing left to do:
Your task is to make the edit form work.
This will involve:
- Adding a new object to state which stores the current values of the various contacts' edit forms
- Updating your ContactView component to show the edited data
- Completing the updateContactForm and submitContactForm actions
Be careful to make sure that the entered data is not saved until the user actually presses “Save”. Also, ensure that validation is performed correctly and any errors are displayed properly.
Finally, don’t forget to implement
state.transitioning! While it may not feel necessary with an app of this size, ensuring that
ReactDOM.render is not called multiple times will become increasingly important as your app starts to grow.
Once you’re happy with your implementation, compare it to this guide’s GitHub repository. Make sure your features match the specification, but don’t get too hung up on differences in implementation.
And there you have it, you now know the fundamentals of routing with React! Pat yourself on the back for a job well done!
Your next steps
Now that you know how to build a router, the next step is to go out and do it, right?
Hold on a minute. The next step actually depends on what you're building. Are you creating a tiny app with only three or four routes? In that case, it's time to get started! But what if you're building something a little bigger?
In order to avoid 1000-line
Application components it will become increasingly important to focus on how your routes are defined as your app starts to scale. And while nothing is stopping you from implementing this yourself, not everything is best solved with Raw React.
Don’t reinvent the wheel
The last thing I want anyone to take away from this guide is that tools are bad. Tools are incredibly important. But if all you have is a hammer, everything looks like a nail. Without the knowledge of how a router works, all you really have is react-router et al.
Now don't get me wrong: react-router is a great tool for a number of use cases. But now that you know how routing works, you should be able to pick the best tool for your use case. And on the odd chance that you decide a Raw React, hash-based router is what you need, you'll probably want a tool to help map URLs to routes. And as it happens, I made uniloc just for this purpose. And to give you a head start, I've put together a small example project for my subscribers which marries uniloc with your contact list app.
In return for your e-mail address, I'll send you the next episodes in the series as they're released. As a bonus, you'll also immediately receive the uniloc-enhanced project source!
- Learn Raw React Part 1
- Learn Raw React Part 2: Ridiculously Simple Forms
- Push-state vs hash-based routing
- Interacting with the DOM in React
contribute:
// with #/product/id
var normalizedHash = window.location
.hash.slice(2);
// reproduce product/id
window.location
.hash.slice(1);
// /product/id
the state:
window.location
.hash.slice(1).split('/')
// out: ["","product","id"]
window.location
.hash.slice(1).split('/').splice(1,2);
// out: ["product",'id']
the next issue: approach routing more “expressive”,
and use RegEXP `*, + ? /:`?
`/product/:id`
`/product/category/:id*`
`/product/:number(\\d+)`
`/product/?search`
test case:
'/product/:id'.match(/^\/product\/([^\/]+?)\/?$/i)
// out: path:'product', catch:':id'
trying1:
route: ‘/product/:120’
path: /product/id
RegEXP: /^\/product\/([^\\/]+?)(?:\/(?=$))?$/i
Keys: 120,
path:id=120
by the way.. nice post thanks., james.. : )
I’ve found the final implementation for submitContactForm extremely complex. One of the things I like the most in React applications is the explicitness and easiness to reason about the data flow, avoiding exactly the kind of complex algorithms that make you stop and run in your mind each step to be able to grasp what that local state is doing there. I’ll try to work on a simplification and create a PR! 🙂
Also, is it a good idea to access the state directly from the actions — and even worse, update it, like in –?
It is not uncommon to encounter a situation when the System Under Test (SUT) is dependent on a number of collaborators. Let's imagine that we are developing code that emulates an Automatic Teller Machine (ATM). An ATM can be thought of as a box that hosts various components such as the keyboard, the display, the cash dispenser and so on, as well as a controller that orchestrates the work of its constituent parts. In addition, it should have a communication link to the bank it belongs to, in order to check the user's details and the state of her account.
There are a lot of parts that may not yet be produced by the time we write the ATM class, and even if they were, it could be slow to use real parts in our tests. Moreover, an attempt to communicate with the bank could lead to unpredictable and unrepeatable results. Furthermore, it is easier to emulate communication failures with some code than with hardware.
To solve the problem of testing a system which is dependent on other components, test doubles are used, a term coined by Gerard Meszaros in his book «xUnit Test Patterns». The author uses an allusion to stunt doubles, replacements for real stars, who are specially trained to do some tricks but cannot act themselves. The idea with test doubles is that a real collaborator is replaced with some object with the same API, but the object exposes only the behavior necessary for a single test.
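To see what a test double looks like without any framework at all, here is a hand-rolled stub in plain Java. It mirrors the article's BankService collaborator, but the FixedBalanceService class is my own illustrative name, not part of the article's ATM code:

```java
import java.math.BigDecimal;

// Mirrors the article's BankService collaborator (simplified)
interface BankService {
    BigDecimal getBalance();
}

// A hand-rolled stub: always returns the balance it was configured with.
// (The class name FixedBalanceService is illustrative.)
class FixedBalanceService implements BankService {
    private final BigDecimal balance;

    FixedBalanceService(BigDecimal balance) {
        this.balance = balance;
    }

    public BigDecimal getBalance() {
        return balance;
    }
}

public class StubDemo {
    // Same check as the article's isTransactionPossible
    static boolean isTransactionPossible(BigDecimal amount, BigDecimal balance) {
        return balance.compareTo(amount) >= 0;
    }

    public static void main(String[] args) {
        BankService bank = new FixedBalanceService(new BigDecimal("5000"));
        System.out.println(isTransactionPossible(new BigDecimal("1000"), bank.getBalance())); // true
    }
}
```

Frameworks like Mockito automate exactly this kind of boilerplate, which is what the rest of the article demonstrates.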
There are several frameworks that simplify test double creation, but their review is out of the scope of this piece. Instead, we'll discuss how to do it with Mockito, a popular framework for testing. For our example we will use JUnit in addition. The code of the example is available here. First, let's take a look at a method where there is no place for test doubles.
Boolean isTransactionPossible(final BigDecimal amount, final BigDecimal balance) { return balance.compareTo(amount) >= 0; }
It's a simple method which checks if the user's balance is greater than the amount to be withdrawn, with no collaboration with any parts of the ATM. Such methods are tested as usual, that is, some values are passed in and then the result is checked. Now let's take a look at the method to withdraw some cash.
public void withdrawCash() throws BankServiceException {
    //Show user options to withdraw cash
    screen.displayMenuOfWithdrowalAmounts();
    //Read amount from the keypad
    BigDecimal amount = keypad.getAmount();
    //Connect to the bank and check balance
    BigDecimal balance = bankService.getBalance();
    //Check if there is enough money on the account
    if (isTransactionPossible(amount, balance)) {
        //Check if there is enough cash in the dispenser
        if (!cashDispenser.isThereEnoughMoney(amount)) {
            //Display error message
            screen.displayError(INSUFFICIENT_CASH_AVAILABLE);
        } else {
            //Debit amount from user's account
            bankService.debit(amount);
            //Dispense cash
            cashDispenser.dispenseCash(amount);
            //Bid goodbye
            screen.displayMessage(GOODBYE);
        }
    } else {
        //Display error message
        screen.displayError(AMOUNT_EXCEEDS_BALANCE);
    }
}
Firstly, the method displays the menu to input the sum to withdraw. Secondly, it reads the sum from the keypad. Thirdly, the bank service is accessed to obtain the balance of the user's account. After that there is a check whether there is enough money on the account. If it passes, the cash dispenser is checked to have enough notes. If everything is OK, then the account is debited, cash is dispensed and finally, a message about the transaction is shown on the screen. If there are some problems with the amount entered, error messages are displayed on the screen. Bank service error handling is left out; it will be discussed later. Now let's try to craft a test to check the aforementioned code. It could look like this.
@RunWith(MockitoJUnitRunner.class)
public class ATMTest {

    private ATM sut;

    @Mock
    private BankService bankService;
    @Mock
    private CashDispenser cashDispenser;
    @Mock
    private DepositSlot depositSlot;
    @Mock
    private Keypad keypad;
    @Mock
    private Screen screen;

    @Before
    public void setUp() {
        sut = new ATM(bankService, cashDispenser, depositSlot, keypad, screen);
    }

    @Test
    public void testWithdrawCashSuccess() throws BankServiceException {
        //given
        BigDecimal amount = new BigDecimal("1000");
        BigDecimal balance = new BigDecimal("5000");
        Mockito.when(keypad.getAmount()).thenReturn(amount);
        Mockito.when(bankService.getBalance()).thenReturn(balance);
        Mockito.when(cashDispenser.isThereEnoughMoney(amount)).thenReturn(true);

        //when
        sut.withdrawCash();

        //then
        Mockito.verify(bankService, Mockito.times(1)).debit(amount);
        Mockito.verify(cashDispenser).dispenseCash(amount);
    }
}
Parts not pertaining to our discussion are omitted for brevity. To create test doubles using Mockito, one should use the @Mock annotation, which creates a double and initializes the variable it belongs to. In order for this annotation to work, the class should be marked with the annotation @RunWith(MockitoJUnitRunner.class). It is worth mentioning that it is not necessary for collaborators to have implementations; in our example all collaborators are interfaces, though concrete classes can be used with the @Mock annotation as well. The default behavior of the annotation is that it overrides the behavior of all methods which return a value so that they return null, an empty collection or some default value such as 0 or false, although there is a way to change this behavior, and another way of creating test doubles which entails original method calls as well.
As can be seen from the snippet above, first of all we have to create an instance of our ATM. There are five values to be passed to the constructor, not all of which are used to test the method. Some are passed only to fill the place. Such test doubles are called dummies; for example, depositSlot is a dummy, as it is never used in the method we test, but since there is a corresponding constructor parameter, some value has to be passed.
Imagine that we would like to test a happy path, that is, when there is enough money on the account and in the cash dispenser. To do so we instruct our test doubles to return quantities that pass all checks. For instance, the line Mockito.when(keypad.getAmount()).thenReturn(amount); instructs the keypad to return 1000 when the method getAmount() is called. Static imports could be used in our code, but fully qualified calls are kept to underscore which methods belong to Mockito.
In the aforementioned case the test double is a stub: the object is instructed to return some value when a particular method is called, and it provides some input to our system under test. There are similar actions for the bankService and cashDispenser methods, but there is an important difference as well. There are some checks involving these doubles after the method we test has been called. In particular, we check that the account was debited exactly once and that it was debited with the correct amount. The same thing is done for the cash dispenser, but the check that the method was called a single time is omitted, as it is the default.
The gist of the previous paragraph is that we instructed our test doubles to return desired values in response to method calls; in other words, we stubbed them. After that, the doubles recorded all interactions and we analyzed the interactions afterwards. Actually, we checked the indirect outputs of our SUT and made sure that it behaves correctly, that is, all operations took place the desired number of times and involved the desired amount to be withdrawn. A test double with such behavior is called a mock and, as can be seen, it includes the functionality of a stub in it. Now let's take a step back to our stubs and see how communication problems with our bank can be emulated in a test environment. A snippet below shows a method to display the balance.
public void showBalance() {
    try {
        BigDecimal balance = bankService.getBalance();
        screen.displayBalance(balance);
    } catch (BankServiceException e) {
        screen.displayError(BANK_SERVICE_ERROR);
    }
}
If there is a problem connecting to the bank, an error message is shown to the user. To test that the correct message is shown in the case of an error the method below can be used.
@Test
public void testShowBalance() throws BankServiceException {
    //given
    Mockito.when(bankService.getBalance())
        .thenThrow(new BankServiceException());

    //when
    sut.showBalance();

    //then
    Mockito.verify(screen).displayError(ATM.BANK_SERVICE_ERROR);
}
First, the double is instructed to throw an exception when the method getBalance() is called. Then, after invoking showBalance() on our SUT, we check that the appropriate error message was shown. This trick allows one to check the behavior of the system under test in a situation when one of its collaborators fails. To conclude our discussion of test doubles and Mockito, let's consider another sad path, when there is not enough money on the account. The expected result of the test would be that no cash is dispensed, that the cash dispenser is engaged in no operations, and that the account must not be debited. In other words, we check that our system behaves in the desired way in certain circumstances. The snippet of code is shown below.
@Test
public void testWithdrawCashNotEnoughMoneyOnAccount() throws BankServiceException {
    //given
    BigDecimal amount = new BigDecimal("1000");
    BigDecimal balance = new BigDecimal("500");
    Mockito.when(keypad.getAmount()).thenReturn(amount);
    Mockito.when(bankService.getBalance()).thenReturn(balance);
    Mockito.when(cashDispenser.isThereEnoughMoney(amount)).thenReturn(true);

    //when
    sut.withdrawCash();

    //then
    Mockito.verifyZeroInteractions(cashDispenser);
    Mockito.verify(bankService, Mockito.times(0))
        .debit(Mockito.any(BigDecimal.class));
}
It shares the same traits as the tests above, but after the execution of the ATM's method we check that there were no calls to the methods of cashDispenser, and that the debit method was not called. The latter is done in a different way than for cashDispenser, because there are interactions in which bankService is involved, that is, when we ask for the balance. So, we check that the method was called zero times with any possible value of the argument, not only the one we asked to withdraw.
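These recorded-interaction checks can also be understood by hand-rolling a recording double. The sketch below is mine, not the article's code; the class name RecordingBankService is an assumption:

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// A hand-rolled recording double ("mock"): it remembers every debit call
// so a test can inspect the interactions afterwards.
class RecordingBankService {
    final List<BigDecimal> debits = new ArrayList<BigDecimal>();

    void debit(BigDecimal amount) {
        debits.add(amount);
    }
}

public class MockDemo {
    public static void main(String[] args) {
        RecordingBankService bank = new RecordingBankService();
        bank.debit(new BigDecimal("1000"));

        // Rough equivalent of Mockito.verify(bankService, times(1)).debit(amount):
        System.out.println(bank.debits.size());   // 1
        System.out.println(bank.debits.get(0));   // 1000
    }
}
```

Mockito generates this recording machinery for you, which is why verify calls can be written after the fact without any setup in the double itself.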
To sum up, test doubles play an important role in testing a system with collaborators, as they allow us to isolate the system under test, improve test performance and simulate various conditions. The Mockito framework alleviates a lot of pain in dealing with such constructs. However, it should be noted that the types of test doubles are not limited to those discussed. There are actually two additional ones, the test spy and the fake, which are beyond the scope of this piece.
Please take note that there is more to test in the ATM example: the deposit slot was never used and the receipt printer was not even mentioned. Furthermore, there are other systems consisting of multiple parts, such as a coffee machine or a smartphone, which could be used to try test doubles and Mockito or some other framework. And please don't forget that on the bank's side there is some code that interacts with an ATM via the Internet and uses some DAO to access the database; the latter can also be mocked.
References
- Mocks Aren't Stubs by Martin Fowler
Weekend Hack: Building an Unsplash bot for Telegram with Python
The goal of this post is to build, step by step, our first Python Telegram bot, which will serve hi-res images from Unsplash.
This post is the beginning of the Weekend Hack series, a series of short development stories focused on the exploration of new concepts. Step-by-step instructions and repository links will be provided for a more fundamental understanding.
The article is divided into three main sections: technology choices, architecture overview, and step-by-step setup.
After hearing about chatbots and getting hyped, I decided to try the poison myself by building a simple bot that integrates with a third party API while documenting the process.
Technology choices
Before we go knee deep into development, there are some points I would like to cover:
Why a Telegram bot?
These days you can build bots with many platforms and tools: Facebook, Amazon Lex, Clever Bot, Botsify, Mobile Monkey. AI-powered bots, conversational bots, you name it.
That said, I am a big fan of incremental learning, so instead of choosing a platform like AWS and diving deep into their specific bot API, I have decided to start small and build a barebone chatbot for Telegram. Telegram has a vibrant bot community and friendly API with tons of examples, perfect for starting.
Why use Python?
While you can write Telegram bots in pretty much any language, I have decided to give Python a try. It is not my day-to-day language, but this seemed like an excellent opportunity to learn more about it.
Why integrate it with Unsplash?
Being my first bot and all, I wanted to integrate with a third-party API. Since Unsplash provides a Source API that allows for simple image querying, I said, why not? Everybody likes hi-res images anyway.
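As a taste of how simple the Source API is, here is a sketch of building a request URL. The endpoint shape (source.unsplash.com/<width>x<height>/?<query>) is an assumption based on the public Source API pattern, and the helper name is mine:

```python
# Sketch: build an Unsplash Source request URL (endpoint shape assumed)
def unsplash_source_url(width, height, query=None):
    url = "https://source.unsplash.com/{}x{}".format(width, height)
    if query:
        # An optional search term narrows the random image down
        url += "/?" + query
    return url

print(unsplash_source_url(1600, 900, "nature"))
# https://source.unsplash.com/1600x900/?nature
```

Requesting such a URL simply returns a random hi-res image matching the query, with no API key required.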
Architecture overview
Alright, now that we have the goal and tools that we want to use defined, let’s talk about the internals of a Telegram bot. I won’t go in extreme detail, but rather give some general idea of how it works.
The general concept is that a chat user sends a message to the bot, the message is processed by the Telegram infrastructure, and forwarded to your bot. Then, the bot will execute logic and perhaps send something back to the user. Depending on the bot use case, audio, pictures, text, or other media may be sent to the user.
There are multiple ways to connect the bot with the Telegram infrastructure, WebHooks being one of the most used ones. WebHooks are user-defined HTTP-callbacks that enable the implementation of push subscription models, rather than pull (read more about long-polling pull subscription model).
In a nutshell, when the Telegram infrastructure receives a message sent by a user to your bot, it will call the bot's WebHook, which has a set of callbacks assigned. This way, the bot is only active when a new event comes from the Telegram servers, rather than performing constant requests to the Telegram infrastructure.
In our case, the user will message the bot with a set of predefined actions (referred to as commands in the Telegram API), and the bot will interact with the Unsplash API.
Step-by-step setup
Let’s start with the fun part!
1. Setup the bot cloud-infrastructure
First, we need to choose an infrastructure provider to host our bot. There are multiple options when it comes to online server hosting. For this project, we will use Glitch, since it provides out-of-the-box domains, WebHook compatibility, easy code sharing, and best of all, it is free!
Let’s create an account in Glitch and create a hello-webpage project from the right-side drop-down menu.
Some files need to be created for this project:
- .env → Contains the secret keys that our bot needs for connecting the WebHook to the Telegram infrastructure. Modify the project name to yours (seen on the top left of the Glitch project). Mine is weekend-hack-unsplash-bot.
# .env
TELEGRAM_TOKEN=
PROJECT_NAME=weekend-hack-unsplash-bot
- server.py → The code connecting the bot-specific logic with the WebHook calls.
# server.py

# import the required libraries
import flask
import telebot

# import the bot.py file
from bot import bot

# set up the Flask web server
app = flask.Flask(__name__)

# define the WebHook path using the bot token
WEBHOOK_URL_PATH = "/{}".format(bot.token)

# web server webhook route
@app.route(WEBHOOK_URL_PATH, methods=['POST'])
def webhook():
    if flask.request.headers.get('content-type') == 'application/json':
        json_string = flask.request.get_data().decode('utf-8')
        update = telebot.types.Update.de_json(json_string)
        bot.process_new_updates([update])
        return ''
    else:
        flask.abort(403)

# start the app
if __name__ == "__main__":
    app.run()
- bot.py → The bot logic, triggered by server.py, that interacts with the Unsplash API.
# bot.py

# import required libraries
import telebot
import requests
from os import environ

# set up bot with Telegram token from .env
bot = telebot.TeleBot(environ['TELEGRAM_TOKEN'])

# Handler triggered with the /start command
@bot.message_handler(commands=['start'])
def send_welcome(message):
    bot.reply_to(message, 'hi there')

# configure the webhook for the bot, pointing Telegram at the Glitch project's URL
bot.set_webhook("https://{}.glitch.me/{}".format(environ['PROJECT_NAME'],
                                                environ['TELEGRAM_TOKEN']))
- glitch.json → Makes Glitch set your app up as a custom app, allowing you to install scripts. In our case, we will install Python 3 and spin up a process that re-runs whenever changes are made to the *.py source files.
// glitch.json
{
"install": "pip3 install --user -r requirements.txt",
"start": "PYTHONUNBUFFERED=true python3 server.py",
"watch": {
"ignore": [
"\\.pyc$"
],
"install": {
"include": [
"^requirements\\.txt$",
"^\\.env$"
]
},
"restart": {
"include": [
"\\.py$",
"^start\\.sh"
]
},
"throttle": 5000
}
}
- requirements.txt → Auxiliary file containing the names of packages that need to be installed.
Flask
PyTelegramBotAPI
2. Register the bot in Telegram
Now that we have our bot set up in Glitch, we need to register it with Telegram so we can subscribe using WebHooks.
Search in Telegram for the BotFather, Telegram’s bot for creating bots (crazy, right?). Type /newbot and follow the steps, giving a name and username to the bot. The BotFather will then give you a token, that you need to put in the .env file, as:
TELEGRAM_TOKEN=XXX-XXX-XXX
3. Test the bot
Search for the bot in Telegram by the name you gave it (weekend-hack-unsplash-bot in my case) and start a conversation.
According to the code in bot.py, our bot will reply “hi there” to the /start command.
4. Add more commands
For our example bot, I want to add three commands. Paste them on bot.py. Note that for the third party integration in those commands, we use the Unsplash Source API.
- /random → returns a random picture from Unsplash.
# send random unsplash picture
@bot.message_handler(commands=['random'])
def send_random_pic(message):
    response = requests.get('https://source.unsplash.com/random')
    bot.send_photo(message.chat.id, response.content)
- /4k → returns a random 4K-resolution picture from Unsplash, plus a document with the original file. The document is needed because Telegram compresses the images.
# send random 4k unsplash picture
@bot.message_handler(commands=['4k'])
def send_random_4k_pic(message):
    response = requests.get('https://source.unsplash.com/random/3840x2160')
    bot.send_photo(message.chat.id, response.content)
    bot.send_document(message.chat.id, response.content, caption='rename_to_jpeg')
- /topic → returns photos from a specific topic. Topics are given as comma-separated words.
# send picture from topic
@bot.message_handler(commands=['topic'])
def handle_text(message):
    cid = message.chat.id
    msgTopics = bot.send_message(cid, 'Type the topic(s), comma separated:')
    bot.register_next_step_handler(msgTopics, step_set_topics)

def step_set_topics(message):
    topics = message.text
    response = requests.get("https://source.unsplash.com/featured/?" + topics)
    bot.send_photo(message.chat.id, response.content)
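Since the topics arrive as free-form text, it may be worth normalizing the input before building the query string. The helper below is hypothetical (it is not part of the original bot):

```python
def normalize_topics(raw: str) -> str:
    """Turn free-form input like 'nature, Water ,  sky' into 'nature,water,sky',
    suitable for a comma-separated query string."""
    topics = [t.strip().lower() for t in raw.split(",")]
    return ",".join(t for t in topics if t)

print(normalize_topics("nature, Water ,  sky"))  # -> nature,water,sky
```

It could be called on message.text before concatenating the topics into the Unsplash URL.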
The running project and code can be found on Glitch. Feel free to remix it!
Some snapshots of the bot working on mobile Telegram client:
Summary
Telegram provides an easy API for getting started with bots, while Glitch enables rapid prototyping before moving the bot to a fully-fledged hosting solution. That is enough for building simple bots; more advanced platforms (e.g. Amazon Lex) are worth looking into depending on the requirements for the bot.
Resources
Summary of useful reading resources, tools, and code repositories mentioned in the article.
- Telegram Bot API →
- Python Bot API →
- Source code →
- Unsplash Source API →
- Unsplash Source API with examples →
This bug should use the infrastructure implemented in bug 585196 and the timing API in bug 576006 to measure basic timings for HTTP page loads.
We have a list here (collected also under bug 650143):
I decided to collect basic telemetry like this:
- the whole page load time
- info from channel timing API for the default request
- info from channel timing API for subrequests
- subrequest count for a page
- subrequest first byte latency, etc.
I currently have a patch that measures most or all of these.
It would be nice to get this into Firefox 6, i.e. get this reviewed and landed by Monday midnight, so that we have numbers to compare against after we have major improvements like pipelining, preconnections, and DNS caching.
> I currently have a patch that measures most or all of these.
> It would be nice to get this to Firefox 6, i.e. get this reviewed and landed
Sure--where's the patch? :)
Created attachment 534444 [details] [diff] [review]
v1
I collect timings only for channels that implement nsICacheInfoChannel, currently only HTTP channels. IMO this is the simplest way to filter out all the others we are not interested in.
I group requests using load groups. It is not the best approach, but it is the simplest. We do not add iframes' load groups into the parent load group, so my grouping is not 100% precise.
To explain the numbers:
* HTTP::PageLoadTime = EndPageLoad - default request creation time (~= DoURILoad)
* HTTP::Subrequest.AsyncOpenLatencySincePageLoadStart = subrequest AsyncOpen - default request creation time
* HTTP::Subrequest.FirstByteLatencySincePageLoadStart = subrequest request start - default request creation time
* HTTP::RequestsPerPage = overall number of requests ever added to a load group
* HTTP::RequestsPerPageFromCache = page requests cache load percentage
Separately for the default request (main page) and subrequests; just those that are not clear at first sight:
* DomainLookupStartLatency = Domain Lookup Start - AsyncOpen
* RequestStartLatency = Request Start - AsyncOpen
* RTT = Response End - Request Start
* FirstByteSinceAsyncOpen = Response Start - AsyncOpen
* CacheReadStartLatency = Cache Read Start - AsyncOpen
* PositiveCacheValidation = Request End - Request Start (actually the RTT) but reported when we loaded from cache, i.e. RTT here represents "GET If-"/"304 Not Modified" time
* OverallTime = Cache Read End / Response End - AsyncOpen
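To make the arithmetic behind these values concrete, here is a sketch in Python with made-up millisecond timestamps; the patch itself computes the same deltas in C++ from TimeStamp values:

```python
# Hypothetical timestamps (ms, relative to channel creation) for one request.
t = {
    "async_open": 0,
    "domain_lookup_start": 2,
    "domain_lookup_end": 20,
    "request_start": 35,
    "response_start": 90,
    "response_end": 120,
}

metrics = {
    "DomainLookupStartLatency": t["domain_lookup_start"] - t["async_open"],
    "DomainLookup": t["domain_lookup_end"] - t["domain_lookup_start"],
    "RequestStartLatency": t["request_start"] - t["async_open"],
    "FirstByteSinceAsyncOpen": t["response_start"] - t["async_open"],
    # Note: "RTT" here is request start -> response end, i.e. latency
    # plus data-transfer time, not a true network round trip.
    "RTT": t["response_end"] - t["request_start"],
    "OverallTime": t["response_end"] - t["async_open"],
}

print(metrics["RTT"])  # 85
```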
(In reply to comment #2)
> Created attachment 534444 [details] [diff] [review] [review]
> * RTT = Response End - Request Start
one favor - can we call this something other than RTT please? One very well known definition of RTT is the latency of moving the minimum amount of data from one host to the other and back again. It is being used here to describe latency + data-transfer time.
Comment on attachment 534444 [details] [diff] [review]
v1
+r. I've got some renaming suggestions (all very bike-shed-y: anyone else feel free to chime in).
>diff --git a/docshell/base/nsDocShell.cpp b/docshell/base/nsDocShell.cpp
>
>@@ -6034,16 +6038,30 @@ nsDocShell::EndPageLoad(nsIWebProgress *
>+ UMA_HISTOGRAM_TIMES("HTTP::PageLoadTime (ms)",
>+ base::TimeDelta::FromMilliseconds(interval));
let's change "HTTP::PageLoadTime" to "HTTP: Total page load time"
>diff --git a/netwerk/base/src/nsLoadGroup.cpp b/netwerk/base/src/nsLoadGroup.cpp
>--- a/netwerk/base/src/nsLoadGroup.cpp
>+#include "base/histogram.h"
>+#include "base/logging.h"
>
>
>+#ifdef LOG
>+#undef LOG
>+#endif
The chromium LOG stuff can be a mess to work around--see bug 545995--but it seems that we don't have FORCE_PR_LOG on for nsLoadGroup, which makes things easier (unless it's a bug that we don't have FORCE_PR_LOG on for load groups).
>@@ -335,16 +349,18 @@ nsLoadGroup::Cancel(nsresult status)
>
> // Remember the first failure and return it...
> if (NS_FAILED(rv) && NS_SUCCEEDED(firstError))
> firstError = rv;
>
> NS_RELEASE(request);
> }
>
>+ TelemetryReportSubrequests();
Do we have useful data to report when a loadGroup is canceled? Would it be better to not report anything?
>@@ -654,16 +689,54 @@ nsLoadGroup::RemoveRequest(nsIRequest *r
>+
>+ rv = timedChannel->GetAsyncOpen(&timeStamp);
>+ if (NS_SUCCEEDED(rv) && !timeStamp.IsNull()) {
>+ UMA_HISTOGRAM_MEDIUM_TIMES("HTTP::Subrequest.AsyncOpenLatencySincePageLoadStart (ms)",
>+ HISTOGRAM_TIME_DELTA(mDefaultRequestCreationTime, timeStamp));
Change to "HTTP subitem: Page start -> subitem open() (ms)"
>+ }
>+
>+ rv = timedChannel->GetResponseStart(&timeStamp);
>+ if (NS_SUCCEEDED(rv) && !timeStamp.IsNull()) {
>+ UMA_HISTOGRAM_MEDIUM_TIMES("HTTP::Subrequest.FirstByteLatencySincePageLoadStart (ms)",
>+ HISTOGRAM_TIME_DELTA(mDefaultRequestCreationTime, timeStamp));
>+ }
Change to "HTTP subitem: Page start -> first byte received for subitem reply "
>@@ -797,16 +870,170 @@ nsLoadGroup::AdjustPriority(PRInt32 aDel
>+void
>+nsLoadGroup::TelemetryReportSubrequests()
This function actually really reports on the Default channel! Rename to TelemetryReportDefaultLoad?
>+{
>+ if (mCachedContentLoad) {
>+ UMA_HISTOGRAM_COUNTS("HTTP::RequestsPerPage (count)",
>+ mTotalCachingRequestsCount);
"HTTP: requests per page"
>+ if (mTotalCachingRequestsCount) {
>+ UMA_HISTOGRAM_ENUMERATION("HTTP::RequestsPerPageFromCache (%)",
>+ mTotalFromCacheRequestsCount * 100 / mTotalCachingRequestsCount,
>+ 101);
"HTTP: Requests from cache (%)" (there's nothing "per page" about this stat, is there? And we're counting the page request itself, too AFAICT)
>+ mozilla::TimeStamp asyncOpen;
>+ rv = aTimedChannel->GetAsyncOpen(&asyncOpen);
>+
>+ mozilla::TimeStamp startTime;
>+ if (NS_SUCCEEDED(rv) && !asyncOpen.IsNull())
>+ startTime = asyncOpen;
>+ else
>+ startTime = channelCreation;
Why are you worried about asyncOpen time being null? Won't that always be set?
>+#define _UMA_HTTP_REQUEST_HISTOGRAMS_(prefix) \
>+ if (!domainLookupStart.IsNull()) { \
>+ UMA_HISTOGRAM_TIMES( \
I prefer to have long macros put the '\' in the 80th column for each line. Makes it easier to read if they're all in the same column, and if they're at 80, no need to reformat them if one line gets longer than previous max.
>+ prefix "DomainLookupStartLatency (ms)", \
>+ HISTOGRAM_TIME_DELTA(startTime, domainLookupStart)); \
"open() -> DNS request issued (ms)"
>+ \
>+ UMA_HISTOGRAM_TIMES( \
>+ prefix "DomainLookup (ms)", \
>+ HISTOGRAM_TIME_DELTA(domainLookupStart, domainLookupEnd)); \
"DNS lookup time (ms)"
>+ \
>+ if (!connectStart.IsNull()) { \
>+ UMA_HISTOGRAM_TIMES( \
>+ prefix "Connect (ms)", \
>+ HISTOGRAM_TIME_DELTA(connectStart, connectEnd)); \
"TCP connection time (ms)"
>+ \
>+ \
>+ if (!requestStart.IsNull()) { \
>+ UMA_HISTOGRAM_TIMES( \
>+ prefix "RequestStartLatency (ms)", \
>+ HISTOGRAM_TIME_DELTA(startTime, requestStart)); \
"Open -> first byte of request sent (ms)"
>+ UMA_HISTOGRAM_TIMES( \
>+ prefix "RTT (ms)", \
>+ HISTOGRAM_TIME_DELTA(requestStart, responseEnd)); \
"First byte of request sent -> last byte of response received"
>+ if (cacheReadStart.IsNull()) { \
>+ UMA_HISTOGRAM_TIMES( \
>+ prefix "FirstByteSinceAsyncOpen (ms)", \
>+ HISTOGRAM_TIME_DELTA(startTime, responseStart)); \
"Open -> first byte of reply received (ms)"
>+ } \
>+ \
>+ if (!cacheReadStart.IsNull()) { \
>+ UMA_HISTOGRAM_TIMES( \
>+ prefix "CacheReadStartLatency (ms)", \
>+ HISTOGRAM_TIME_DELTA(startTime, cacheReadStart)); \
"Open -> cache entry opened (ms)"
>+ UMA_HISTOGRAM_TIMES( \
>+ prefix "CacheRead (ms)", \
>+ HISTOGRAM_TIME_DELTA(cacheReadStart, cacheReadEnd)); \
"Cache read time (ms)"
>+ \
>+ if (!requestStart.IsNull()) { \
>+ UMA_HISTOGRAM_TIMES( \
>+ prefix "PositiveCacheValidation (ms)", \
>+ HISTOGRAM_TIME_DELTA(requestStart, responseEnd)); \
So requestStart is when we first send data out to the net (the request). How do we know from this that we're doing a cache validation request? Shouldn't that require a check for cacheReadEnd != null?
>+ } \
>+ UMA_HISTOGRAM_TIMES( \
>+ prefix "OverallTime (ms)", \
>+ HISTOGRAM_TIME_DELTA(startTime, (cacheReadEnd.IsNull() ? \
>+ responseEnd : cacheReadEnd)));
"Overall load time (ms)"
>+ if (aDefaultRequest) {
>+ _UMA_HTTP_REQUEST_HISTOGRAMS_("HTTP::DefaultRequest.")
>+ } else {
>+ _UMA_HTTP_REQUEST_HISTOGRAMS_("HTTP::Subrequest.")
how about
"HTTP page: " and "HTTP item: " for the prefixes? I could also do "subitem". But short is good.
>diff --git a/netwerk/base/src/nsLoadGroup.h b/netwerk/base/src/nsLoadGroup.h
>+
>+ /* Telemetry */
>+ mozilla::TimeStamp mPageLoadStartTime;
>+ mozilla::TimeStamp mDefaultRequestCreationTime;
>+ bool mCachedContentLoad;
>+ PRUint32 mTotalCachingRequestsCount;
>+ PRUint32 mTotalFromCacheRequestsCount;
Rename and add comments:
// Number of cacheable (HTTP) requests
s/mTotalCachingRequestsCount/mCacheableRequests/
// Number of requests actually served from cache
s/mTotalFromCacheRequestsCount/mCachedRequests/
s/mCachedContentLoad/mDefaultLoadIsCacheable/ or mIsTimedChannel?
Comment on attachment 534444 [details] [diff] [review]
v1
This patch passes try just fine, btw:
I'm tempted to put numbers at the start of the histogram descriptions, so we can present them in a more sensible order (e.g. show DNS before connect times, etc.). But that's possibly a bad idea, given that we'll wind up adding lots more timings, and keeping a strict numerical ordering would result in changing the names a lot.
But let's at least make sure that 'page' stats show up before 'item': rename to 'subitem' if needed...
Created attachment 534657 [details] [diff] [review]
v1 -> v2 idiff
- renames (but I modified a few of them myself a bit more)
- ignore requests that fail to load (for a severe reason, not because of a 404 or so)
- some comments unaddressed ; discussed with Jason on IRC
- including only requests that implement nsITimedChannel (as well discussed on IRC)
Created attachment 534658 [details] [diff] [review]
v2
Created attachment 534671 [details] [diff] [review]
interdiff with changes after honza went to sleep (discussed with biesi)
Made a few changes:
In AddRequest, turn on timing even if !mDefaultLoadIsTimed, or we won't set timing for channels added before SetDefaultLoad is called.
In RemoveChannel, 'aStatus' arg is enough to determine if channel failed or not
Some line breaks and whitespace removal.
Made 3 total time reports: cache hits, from network, and all (combined).
Created attachment 534674 [details] [diff] [review]
v3 (ready for checkin)
A couple more changes:
- "HTTP: Requests per page from cache ratio (%)",
+ "HTTP: Requests serviced from cache (%)",
-nsLoadGroup::TelemetryReportRequestTimings
+nsLoadGroup::TelemetryReportChannel
Changed macro to put backslashes on column 80.
Checked to make sure this doesn't break fennec (where timings not yet available). about:telemetry is not supported yet, so we're ok.
Ran into merge conflicts with biesi patch for JS web timings, and ran out of time to land for FF 6. Turns out telemetry isn't on yet anyway, so we've got time to polish the bike shed on this (which probably isn't a bad idea anyway).
I will add a few tweaks to the patch soon.
I also discovered, we do not collect image requests. I already have a patch for imgRequestProxy implementing nsITimedChannel. I will open a new bug for it.
It'd be nice to land this, so that we can all pile in and add our favorite necko stats as followup patches. Honza, any ETA on your updated patch? Should we just land this and you can do a followup?
Comment on attachment 534674 [details] [diff] [review]
v3 (ready for checkin)
Review of attachment 534674 [details] [diff] [review]:
-----------------------------------------------------------------
I wanted to add/change one measurement, but later I decided not to. I just have a local patch to optimize the condition in RemoveRequest a bit (comments below).
This doesn't include image requests. I have already a patch for that. We should land it separately, it is a bit more complicated.
:::" ?
> This doesn't include image requests. I have already a patch for that. We
> should land it separately, it is a bit more complicated.
OK. You've got my +r for the changes below, assuming they pass try.
> :::.
Nice catch.
> @@ .
So, for example, a loadgroup has 12 requests, all HTTP (so mCacheableRequests=12), and say 8 of them are loaded from cache (mCachedRequests = 8). I was just figuring the most important statistic of interest is the % of cache hits (total, throughout the browser). I mean, that's the same as the "% of cache hits per page" (in the aggregate: we'll have more, smaller samples, with larger variance, doing it this way than if we kept some global counts of requests/cache-hits somewhere and periodically reported them). Why do we care about it per-page?
> :::" ?
I support renaming mCacheableRequests to something like mTimedRequests. mCachedRequests is still about cached requests.
RE: per-page - yes, you are right :) now I understand, and agree.
I'll do the rest of the changes, I think I'll get to this today.
Taras:
We're trying to do a histogram for "% of network requests from cache" (i.e. % cache hits). Right now we have this instrumented so that we call UMA_HISTOGRAM_COUNTS once per LoadGroup (~= once per webpage), with the % of loads from that page that came from cache.
I'm thinking that's going to result in messy stats, where we get a lot of variance (and a lot of entries with 0%, for pages that had nothing from cache). It might make more sense to keep global counters of requests/cache_hits, and then at some point upload them to telemetry. The logical place for this is shutdown, but I'm wondering how late one can call UMA_HISTOGRAM_COUNTS and still have it work. (Alternatively I guess we could call it every 1000 requests or something like that).
Thoughts?
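As an illustration of the concern, here is a toy example with synthetic numbers (not real telemetry) showing how per-page percentages and a global ratio can tell different stories:

```python
# Each tuple: (requests on a page, requests served from cache). Synthetic data.
pages = [(10, 0), (10, 0), (10, 10), (2, 1)]

# Per-page percentages: many 0% samples and high variance across samples.
per_page = [100 * cached // total for total, cached in pages]
print(per_page)  # [0, 0, 100, 50]

# Global counters: a single aggregate cache-hit ratio.
total = sum(t for t, _ in pages)
cached = sum(c for _, c in pages)
print(100 * cached // total)  # 34
```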
I would expose a global counter to JS. Then add a single field to .simpleMeasurements part of the request.
Using a histogram doesn't make sense to me for this.
Taras,
Where do the simpleMeasurements show up? Right now it grabs ("main", "firstPaint", "sessionRestored"), but I don't see any of those on
Honza: I assume we could just keep a pair of global variables (requests, reqsFromCache) in gHttpHandler, and increment one or both in OnStartReq for every (successful?) channel.
I haven't updated the log to include those values, but they will show up same as the other values.
Optimistically telemetry will be live in a week, and we'll be able to provide consumers with live stats.
Created attachment 538237 [details] [diff] [review]
v4 [Check in/back out comment 22/24]
Passes try, includes the changes we agreed in comment 15.
Comment on attachment 538237 [details] [diff] [review]
v4 [Check in/back out comment 22/24]
(In reply to comment #19)
> Honza: I assume we could just keep a pair of global variables (requests,
> reqsFromCache) in gHttpHandler, and increment one or both in OnStartReq for
> every (successful?) channel.
I believe both are interesting. I left the "per page" measure in. I think it is interesting to see how many requests a page gets satisfied from cache. If we find out it is messy we can remove it.
The overall cache/network ratio is also interesting, but I think this should be done a different way than this patch does, and be part of the cache telemetry patch(es).
Even though try was green with this patch, bug 662555 may cause the leak test to assert and crash, [1].
Backed out as due to: ###!!! ASSERTION: Cannot compute with a null value: '!IsNull()', file ../../dist/include/mozilla/TimeStamp.h, line 209
[1]
Created attachment 542265 [details] [diff] [review]
v5
This is updated to the new telemetry code. It is a large enough change that I would rather get it re-reviewed by Taras.
Comment on attachment 542265 [details] [diff] [review]
v5
I cringe at the heavy use of preprocessor here :)
Also feel free to
using namespace mozilla;
to shorten some of the code.
Backed out for apparently causing reftest assertion failures on Windows debug:
I can see we sometimes don't get nsISocketTransport::STATUS_CONNECTED_TO notification where connectEnd is stamped. I'll report a new bug on that.
To fix this one, I will simply check for non-null on both times I subtract.
Created attachment 543113 [details] [diff] [review]
v5.1
Additional check for non-null added. Please just check that I didn't mess anything up.
I didn't manage to push to try :(
Created attachment 543274 [details] [diff] [review]
minor-fixups
honza--looks good. Just a few minor bitrot and indenting fixups.
One question: It doesn't seem like we need both calls to TelemetryReport(). We call it in Cancel, after we've removed all the channels, but in RemoveRequest we also check if we're the last request to be removed and call it there. Seems like that means we'll call it twice when cancelled, for no good reason (it's not a problem--it behaves correctly if called twice). Can we remove the call in Cancel()?
(In reply to comment #32)
> Can we remove the call in Cancel()?
Yes we can! Good catch.
Created attachment 543491 [details] [diff] [review]
v5.2 as landed [Check in comment 34]
Elixir v1.4.2 Task
Conveniences for spawning and awaiting tasks.
See async and await for more detailed usage, or consider starting the task under a Task.Supervisor using async_nolink.
Supervised tasks
It is also possible to spawn a task under a supervisor:
import Supervisor.Spec

children = [
  worker(Task, [fn -> IO.puts "ok" end])
]
Internally the supervisor will invoke Task.start_link/1. Since these tasks are supervised and not directly linked to the caller, they cannot be awaited on. Note that start_link/1, unlike async/1, returns {:ok, pid} (which is the result expected by supervision trees).

By default, most supervision strategies will try to restart a worker after it exits, regardless of the reason. If you design the task to terminate normally (as in the example with IO.puts/2 above), consider passing restart: :transient in the options to Supervisor.Spec.worker/3.
Dynamically supervised tasks:
import Supervisor.Spec

children = [
  supervisor(Task.Supervisor, [[name: MyApp.TaskSupervisor]])
]
Distributed tasks.
Summary

Functions

- The Task struct
- Starts a task that must be awaited on (async/1, async/3)
- Returns a stream that runs the given function concurrently on each item in an enumerable (async_stream/3, async_stream/5)
- Starts a task as part of a supervision tree (start_link/1, start_link/3)
- Temporarily blocks the current process waiting for a task reply (yield/2)
- Yields to multiple tasks in the given time interval (yield_many/2)

Types

Functions
The Task struct.

It contains these fields:

- :pid - the PID of the task process; nil if the task does not use a task process
- :ref - the task monitor reference
- :owner - the PID of the process that started the task
Starts a task that must be awaited on.
This function spawns a process that is linked to and monitored
by the caller process. A
Task struct is returned containing
the relevant information.
Read the
Task module documentation for more info on general
usage of
async/1 and
async/3.
Linking: the task process is linked to and monitored by the caller, so the task will also die if the parent process dies.
Message format
The reply sent by the task will be in the format
{ref, result},
where
ref is the monitor reference held by the task struct
and
result is the return value of the task function.
async_stream(Enumerable.t, (term -> term), Keyword.t) :: Enumerable.t
Returns a stream that runs the given
function concurrently on each
item in
enumerable.
Each
enumerable item is passed as argument to the
function and
processed by its own task. The tasks will be linked to the current
process, similar to
async/1.
See
async_stream/5 for discussion and examples.
async_stream(Enumerable.t, module, atom, [term], Keyword.t) :: Enumerable.t

Returns a stream that runs the given module, function, and args concurrently on each item in enumerable. Each task will emit {:ok, val} upon successful completion, or {:exit, val} if the caller is trapping exits. Results are emitted in the same order as the original enumerable.

The level of concurrency can be controlled via the :max_concurrency option and defaults to System.schedulers_online/0. The timeout can also be given as an option and defaults to 5000.
Options
:max_concurrency- sets the maximum number of tasks to run at the same time. Defaults to
System.schedulers_online/0.
:timeout- the maximum amount of time to wait without receiving a task reply (across all running tasks). Defaults to
5000.
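For readers more familiar with Python, the bounded-concurrency, order-preserving behaviour is roughly analogous to concurrent.futures (an analogy only; this is not how the BEAM schedules tasks):

```python
from concurrent.futures import ThreadPoolExecutor

def expensive(n):
    # Stand-in for a per-item unit of work.
    return n * n

items = [1, 2, 3, 4]

# max_workers plays the role of :max_concurrency; executor.map, like
# Task.async_stream, yields results in the order of the input enumerable.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(expensive, items, timeout=5))

print(results)  # [1, 4, 9, 16]
```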
Example
Awaits a task reply and returns it.
A timeout, in milliseconds, can be given with default value
of
5000. In case the task process dies, this function will
exit with the same reason as the task.
If the timeout is exceeded,
await will exit; however,
the task will continue to run. When the calling process exits, its
exit signal will terminate the task if it is not trapping exits.
Compatibility with OTP behaviours
It is not recommended to
await a long-running task inside an OTP
behaviour such as
GenServer. Instead, you should match on the message
coming from a task inside your
GenServer.handle_info/2 callback.
Examples
iex> task = Task.async(fn -> 1 + 1 end)
iex> Task.await(task)
2
Unlinks and shuts down the task, and then checks for a reply.
Returns
{:ok, reply} if the reply is received while shutting down the task,
{:exit, reason} if the task died, otherwise
nil.
The shutdown method.
Starts a task.
This is only used when the task is used for side-effects (i.e. no interest in the returned result) and it should not be linked to the current process.
Starts a task.
This is only used when the task is used for side-effects (i.e. no interest in the returned result) and it should not be linked to the current process.
Starts a task as part of a supervision tree.
- the task process exited with the reason
:normal
- it isn’t linked to the caller
- the caller is trapping exits.
Example: in the example below, we create tasks that sleep and return the number of seconds they slept.
If you execute the code all at once, you should see 1 up to 5
printed, as those were the tasks that have replied in the
given time. All other tasks will have been shut down using
the
Task.shutdown/2 call.
Hello everyone. I am here with yet another problem, hoping for your help. I've recently started using Eclipse for my Processing project because the number of classes started exceeding the available tabs in the Processing app.
So I've got a question about classes. Let's say I have an object of another class created inside my main class, also there is a boolean variable:
import processing.core.PApplet;

public class MainClass extends PApplet {
    MyClass obj;
    boolean variable;

    public static void main(String[] args) {
        PApplet.main("MainClass");
    }

    public void settings() {
        size(1280, 720);
    }

    public void setup() {
        obj = new MyClass(this);
    }

    public void draw() {
    }
}
Then there is another class which, if it were created in the Processing app, would see both "obj" and "variable", but in Eclipse it doesn't recognise them:
import processing.core.PApplet;

public class AnotherClass {
    PApplet parent;

    public AnotherClass(PApplet p) {
        parent = p;
    }

    void update() {
        variable = true;
        obj.doSomething();
    }
}
How can I, in a not very complicated way, make "AnotherClass" recognise both "obj" and "variable" in a similar manner to how it happens in the Processing app? Thanks in advance!
Answers
If the classes are in the same package then this should work
This solution does make sense to me, but it doesn't work with my setup for some reason. Here is what I get:
console:
Code:
.operator:
@GoToLoop, using the dot operator is fine as long as it works. I am having a problem with it though, please, check my post above.
@GoToLoop is right: they are members of MainClass, not PApplet, so my solution is not a solution at all.
Correct me if I am wrong, but for a class to be nested in another class, it needs to be in the same class file. It's a bit of a problem for me because I would like to have multiple class files.
Declare the field as your sketch class (MainClass) instead of PApplet. :ar!
It did work when instead of referencing PApplet I referenced MainClass. Yay! Thank you for your help!
I make tools to help people create wonderful things.
Most users are familiar with word navigation, for moving the caret to the next or previous word: Control-Left/Control-Right on Windows and Linux, and Opt-Left/Opt-Right on Mac. A previous post discussed the different modes for word breaking.
A less well-known feature is "subword" navigation, which is very similar, except that it breaks in more places, such as camelCase boundaries within words and underscores within identifiers. This is really useful for modifying parts of identifiers precisely. The keybinding is Alt-Left/Alt-Right on Windows and Linux, and Control-Left/Control-Right on Mac. As with word navigation, these can be combined with the Shift modifier to modify the current selection range.
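As a rough illustration of where subword breaks occur, here is a sketch in Python (illustrative only; this is not MonoDevelop's actual implementation):

```python
import re

def subword_boundaries(identifier):
    """Split an identifier at camelCase transitions, digit runs, and
    underscores, roughly where subword navigation would stop."""
    return re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+", identifier)

print(subword_boundaries("parseHTTPResponse_v2"))
# -> ['parse', 'HTTP', 'Response', 'v', '2']
```

Note how an all-caps run like "HTTP" is kept together, while "Response" starts a new subword, matching the caret stops you would expect when editing such an identifier.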
If you know the name of a type or file and want to go straight to it without having to dig through the solution pad and file contents, the Navigate To... command is your friend. This command can be activated with Ctrl-, on Windows or Linux, and Ctrl-. on Mac, or from the Search->Navigate To... menu. It opens a window that shows a list of all the files, types and members in the solution, and you can filter and search these items using the same substring matching that the completion list uses. When you find the one you want, hit Enter and you will be taken straight to it.
MonoDevelop also has Go To File and Go to Type commands, which behave the same way but are restricted to only showing files or types respectively. These predate the Navigate To command, and although its functionality is a superset of both of the older commands combined, they have been kept around because they're noticeably faster for extremely large projects.
There are various places where the MonoDevelop text editor needs to understand where words begin and end, for example, when you use control-left/right to move the caret (alt-left/right on Mac). We refer to this as "word breaking". Unfortunately, word breaking behaviour differs between OSes, and word breaking is often intended for text, not code. In addition, people become used to particular kinds of word breaking. For these reasons, we allow users to change MonoDevelop's word breaking mode in Preferences->Text Editor->Behavior.
In addition, the vi input mode has its own word breaking mode that mimics the behaviour of vim.
When using code completion to explore a new API, it's often useful to know where in the type hierarchy members are defined. For example, when looking for things you can do with a button, the members on the button are more interesting than the members on its superclasses. MonoDevelop makes this easier with a feature called categorized mode. The completion list can be toggled into categorized mode using Ctrl-Space, and will stay in this mode until it is toggled off. While in this mode, items may be grouped by the completion engine into categories, depending on context. For example, when listing members of types, they will be grouped by the class on which they're defined. Other groupings may be added in future.
When navigating the list with arrow keys, you can jump directly between groups using Shift-Up and Shift-Down. If the list is not in categorized mode, these combinations will toggle it on.
Categorized mode is not the default behaviour because it makes the ordering and filtering of the list less straightforward.
The MonoDevelop workspace consists of a central document surrounded by pads containing complementary information, tools and navigation aids. Pads can be accessed from the View->Pads and View->Debug Windows menus, and closed when they are not needed. They may be assigned keybindings, which will open the pad if necessary then bring keyboard focus to it. Pads may also be opened automatically by various commands, such as the "Find in Files" command, which opens a pad of search results.
You can drag pads around to arrange them however is most useful to your workflow. Pads can be docked on any side of the document editor, or adjacent to any other pad. If a pad is docked in the same position as another pad, tabs will be added to enable you to switch which of the two pads is visible.
You can even undock pads and move them to float beside MD or on another monitor. Pads that you use less frequently but still wish to be easily accessible can be "auto-hidden" using the "-" button at the top right of the pad. Auto-hidden pads are shown as a little indicator at the side of where the pad was previously docked, and when you hover over this indicator, the pad will be shown again. When the mouse and keyboard focus leaves it, it will hide again.
Which pads are useful generally depends on the current context. For example, when debugging, it is useful to have the debugger pads for viewing the stack, locals, etc. When using the visual designer, the toolbox and property grid pads are very important. For this reason, the state of the open pads is represented by a layout, and you can switch between layouts to suit your current needs.
Layouts are very simple. There is always one active layout, and any changes you make to the pads change only the active layout. The current active layout can be changed using the list in the View menu, or the Layouts combo box in the toolbar. A new layout can be created using View->New Layout.
There are several built-in layouts that MonoDevelop switches between automatically based on the current context. The Default layout is shown when MD opens, and when a single file is loaded. The Solution layout is activated while a solution is open. The Debug layout is activated while debugging. There is also a GUI Designer layout that can be activated while using the GTK# designer, but this is optional, and can be enabled in Preferences->Visual Style->GTK# Designer.
The documents list is sorted by which have been most recently used, and when the dialog is opened, the first document it selects is the item after the current active document, i.e. the document that was focussed before it, since it's assumed that you don't want to switch to the current document. This also makes it very easy to switch between a few documents with minimal keystrokes.
The default mode of the code completion list is to complete the symbol that's being typed. Whenever the completion engine can determine that you are typing an existing symbol (such as a type name, variable name or member name), it automatically triggers the completion list and populates it with all the values that are valid at that point. As you type while the list is open, the list's selection updates to match what best fits what you're typing, and you can manually change the selection using the up/down arrow keys. When you press space, enter, tab, or any punctuation, the completion list "commits" the selection into the document, so you don't have to type the rest of the word manually. This is incredibly useful when you get used to it.
Sometimes the completion engine cannot provide a complete list of valid values, for example when you are defining a lambda at the point that you pass it to a method. In such cases, when you need to type a value that's not in the list, it would be very irritating for the list to commit its best match and overwrite what you're typing. Instead, the completion list goes into suggestion mode.
In suggestion mode, the selection highlight in the list is a rectangle around the selection, not a solid block. When the list is in suggestion mode, it will only commit on tab or enter, so you won't commit accidentally while typing a word. If you use arrow keys to change the selection, the list will go back into completion mode and the highlight will become solid.
Some users like to write code out of order, for example using symbols that don't yet exist and defining those symbols later, or writing code that does not parse correctly and fixing it up. Completion mode makes that style of coding hard to do. The answer is a command that toggles the list into suggestion mode. You can access it via the Edit->Toggle Completion Suggestion Mode menu item, or the Alt-Shift-Space key binding. Once the list is toggled into suggestion mode, it will stay that way until you toggle it back. This is useful because you can switch back and forth as it suits you.
Sometimes it's useful to be able to focus only on your code without the distractions of the pads and the rest of your desktop. MonoDevelop has two ways to make this easier.
The Maximized View can be toggled by double-clicking on the document tab, or using the context menu on the document tab and selecting Switch maximize/normal view. When in maximized view, all open pads are auto-hidden at the sides of the MonoDevelop window, and all toolbars are hidden (everything in the toolbars is also accessible from the menus).
The Fullscreen View can be activated using the View->Fullscreen menu command. This makes the MonoDevelop window take up the entire screen, hiding the taskbar and the window border.
Both view modes can be used together to maximize the document area as much as possible.
One of my favourite features that we added to MonoDevelop 2.4 is the "import Type" command. It is accessed using the keybinding Ctrl-Alt-Space, and shows a list of all types in all namespaces in all referenced assemblies:
You can use our completion list filtering to find the type you're looking for; then, when you commit the selection from the list, MonoDevelop automatically adds the "using" statement to the file. For example, using StringBuilder is as easy as typing its name and committing it, and MonoDevelop adds using System.Text; at the top of the file.
So, for the 2nd (actually third, but the JIRB one doesn't really count) installment, I was thinking that we should take a look at a few different subjects. The red thread will be a simple web based Blog application built with Camping. Camping, a microframework for web development by whytheluckystiff, is insanely small and incredibly powerful. It uses another library called Markaby, which generates markup based on pure Ruby code. I will show the application in smaller parts, with explanations, and at the end include a link to the complete source file.
First of all we have to have a working JRuby built from the latest version of trunk. After JRuby is working we need to install a few gems. Follow these commands and you'll be good:
jruby %JRUBY_HOME%\bin\gem install camping --no-ri --no-rdoc --include-dependencies
jruby %JRUBY_HOME%\bin\gem install ActiveRecord-JDBC --no-ri --no-rdoc
This installs Camping, ActiveRecord, ActiveSupport, Markaby, Builder, Metaid and ActiveRecord-JDBC. We don't generate RDoc or RI for these gems, since that's one part of JRuby that still is pretty slow.
The Blog application
The first thing the blog application needs is a database. I will use MySQL; other databases may work through the JDBC-adapter, but there is still some work to be done in this area. I will have my MySQL server on localhost, so change the configuration if you do not have this setup. You'll need a database for the blog application. I've been conservative and named the database "blog" and the user "blog" with the password "blog". Easy to remember, but not so good for production.
Update: As azzoti mentioned, you have to set your classpath to include the MySQL JDBC-driver, which can be downloaded from.
Now, open up a new Ruby file and call it blog.rb. The name of the file is important; it has to have the same name as the Camping application. Now, first of all we include the dependencies:
These statements first name our application with the Camping.goes statement. This includes some fairly heavy magic, including reopening the file and rewriting the source to include more references to the Camping framework. But this line is all that is needed. The next line establishes our connection to the database, and it follows standard JDBC naming of the parameters. Of course, these should be in a YAML file somewhere, but for now we make it easy. The last part makes sure we have Session support in our Blog application.
Now we need to define our model, and this is easily done since Camping uses ActiveRecord:
module Blog::Models
  def self.schema(&block)
    @@schema = block if block_given?
    @@schema
  end

  class Post < Base; belongs_to :user; end
  class Comment < Base; belongs_to :user; end
  class User < Base; end
end
The first part of this code defines a helper method that either sets the schema to the block given, or returns an earlier defined schema. The second part defines our model, which includes Post, Comment and User, and their relationships.
The schema is also part of the application, and we'll later see that Camping can automatically create it if it doesn't exist (that's why we didn't need to create any tables ourselves, just the database).
This defines the three tables needed by our blog system. Note that the names of the tables include the name of the application as a prefix. This is because Camping expects more than one application to be deployed in the same container, using the same database.
When we have defined the schema, it's time to define our controller actions. In Camping, each action is a class, and each action class defines a method for get, one for post, etc. These classes will be defined inside the module Blog::Controllers. The first action we create will be the Index action. It looks like this:
class Index < R '/'
  def get
    @posts = Post.find :all
    render :index
  end
end
This defines a class that inherits from an anonymous class defined by the R method. What it really does, is bind the Index action to the /-path. It uses ActiveRecord to get all posts and then renders the view with the name index.
The Add-action adds a new Post, but only if there is a user in the @state-variable, which acts as a session. If something is posted to it, it creates a new Post from the information and redirects to the View-action:
class Add
  def get
    unless @state.user_id.blank?
      @user = User.find @state.user_id
      @post = Post.new
    end
    render :add
  end

  def post
    post = Post.create :title => input.post_title, :body => input.post_body,
                       :user_id => @state.user_id
    redirect View, post
  end
end
As you can see, there's not much to it. Instance variables in the controller will be available to the view later on. Note that this action doesn't inherit from any class at all. This means it will only be available via a URL with its name in it.
We need a few more controllers. View and Edit are for handling Posts. Comment adds new comments to an existing Post. Login and Logout are pretty self explanatory. And Style returns a stylesheet for all pages. Note that Style doesn't render anything, it just sets @body to a string with the contents to return.
Also note how easy it is to define routing rules with the help of regular expressions to the R method.
Next up, we have to create our views. Since Camping uses Markaby, we do it in Ruby, in the same file. Views are methods in the module Blog::Views with the same name as referenced inside the controllers call to render. There is a special view called layout which get called for all views, if you don't specify otherwise in the call to render. It looks like this:
def layout
  html do
    head do
      title 'Blog'
      link :rel => 'stylesheet', :type => 'text/css',
           :href => '/styles.css', :media => 'screen'
    end
    body do
      h1.header { a 'Blog', :href => R(Index) }
      div.content do
        self << yield
      end
    end
  end
end
As you can see, standard HTML tags are defined by calling a method by that name. The contents of the tag is created inside the block sent to that method, and if it makes sense to give it content as an argument, this works too. Title, for example. If you give a block to it, it will evaluate this and add the result as the title, but in this case it's easier to just provide a string. Note how a link is created, by the method R (another method R this time, since this is in the Views module). Finally, the contents of the layout gets added by appending the result of a yield to self.
The index view is the first we'll see when visiting the application, and it looks like this:
def index
  if @posts.empty?
    p 'No posts found.'
  else
    for post in @posts
      _post(post)
    end
  end
  p { a 'Add', :href => R(Add) }
  p "Current time in millis is #{System.currentTimeMillis}."
end
I've also added a call that writes out the current time in milliseconds, from Java's System class, to show that we're actually in Java land now, and potentially could base much of our application on data from Java. We check if there are any posts, and if so iterate over them and write them out with a partial called _post. We also create a link to add more posts. The rest of the views look like this:
In my opinion, this code is actually much easier to read than HTML and most of it is fairly straight forward. One interesting part is the add and edit methods, which checks if a user is logged in, otherwise uses the _login-partial instead of showing the real content.
Finally, we will define a create-method for Camping, which is responsible for creating the tables for our model:
def Blog.create
  Camping::Models::Session.create_schema
  unless Blog::Models::Post.table_exists?
    ActiveRecord::Schema.define(&Blog::Models.schema)
    Blog::Models::User.create(:username => 'admin', :password => 'camping')
  end
end
This first creates a table for the session information, and then checks if the Post table exists; if not, all the tables in the schema defined earlier are created.
Now you have seen the complete application. If you have no interest in writing this by hand, the complete code can be found here.
Running the application
To run a Camping application, you need to run the camping executable that has been installed into your %JRUBY_HOME%\bin on your application file. In my case I run it like this:
jruby %JRUBY_HOME%\bin\camping blog.rb
in the directory where my blog.rb exists, and very soon I have a nice application running which works wonderfully. Startup is a little bit slow, but as soon as WEBrick has started listening, the application is very snappy. You can try changing your blog.rb file too; Camping will automatically update your application without having to restart the server. As I said above, I included a call to System.currentTimeMillis to show that we are actually using Java in this blog application. If that isn't apparent from the call to System, remember that we are actually using JDBC to talk to our database, and very soon you will be able to use the ActiveRecord-JDBC adapter to connect to any databases Java can talk to. That's a bright future.
Apr 24, 2007
Chris Rimmer says: There seem to be some typos in the code here. In a few places a '>' is replaced by &gt;
The link to the complete file is also missing, so it is a little difficult to check if I've spotted the errors.
Apr 24, 2007
Chris Rimmer says: ...actually it is '<' replaced by &lt; but you get the idea...
import math

def chi2P(chi, df):
    """Return prob(chisq >= chi, with df degrees of freedom).

    df must be even.
    """
    assert df & 1 == 0
    # XXX If chi is very large, exp(-m) will underflow to 0.
    m = chi / 2.0
    sum = term = math.exp(-m)
    for i in range(1, df//2):
        term *= m / i
        sum += term
    # With small chi and large df, accumulated roundoff error, plus error in
    # the platform exp(), can cause this to spill a few ULP above 1.0. For
    # example, chi2P(100, 300) on my box has sum == 1.0 + 2.0**-52 at this
    # point. Returning a value even a teensy bit over 1.0 is no good.
    return min(sum, 1.0)
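A quick sanity check of the routine (restated below so the snippet runs on its own): for df = 2 the chi-square survival function reduces analytically to exp(-chi/2), which the loop reproduces exactly.

```python
import math

def chi2P(chi, df):
    """Return prob(chisq >= chi, with df degrees of freedom); df must be even."""
    assert df & 1 == 0
    m = chi / 2.0
    total = term = math.exp(-m)
    for i in range(1, df // 2):
        term *= m / i
        total += term
    return min(total, 1.0)

# df == 2: survival function is exactly exp(-chi/2)
print(abs(chi2P(2.0, 2) - math.exp(-1.0)) < 1e-12)  # True
# chi == 0: all probability mass lies to the right, so the result is 1.0
print(chi2P(0.0, 4))  # 1.0
```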
A do while loop is a type of loop which repeats code while a certain condition evaluates to true. However, unlike a while loop which tests the condition first, a do while loop tests the condition after running instructions inside the loop. This means that the code inside the loop will always run at least once even if the condition evaluates to false. This is an example of post-test repetition.
Sample code
Take a look at the sample code below. The counter is set to 99 and the condition being tested is counter < 10. However, the code inside the loop runs the first time even though the condition evaluates to false because a do while loop runs the code inside the loop before testing the condition (meaning that the instructions inside the loop will always run at least once).
using System;

namespace MyCSharpProject
{
    class Program
    {
        static void Main(string[] args)
        {
            int counter = 99;
            do
            {
                Console.WriteLine("Counter is: " + counter);
                counter++;
            } while (counter < 10);
            Console.ReadLine();
        }
    }
}
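As a side note, languages without a built-in do while construct emulate the same post-test behaviour with an infinite loop and a break after the body; a minimal sketch in Python mirroring the C# sample above:

```python
# Mirror of the C# sample: the body must run once even though counter < 10
# is false from the very start.
counter = 99
iterations = 0

while True:
    iterations += 1            # loop body always executes at least once
    print("Counter is:", counter)
    counter += 1
    if not (counter < 10):     # post-test: checked only after the body ran
        break

print(iterations)  # 1
```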
Nowadays, dual core PCs have become more and more affordable, and have gradually become the standard. Quad core PCs are also getting closer, and of course, PCs with a greater number of CPUs/cores are also available. Since there are huge amounts of computational tasks which are quite time consuming, parallel/distributed computing of these tasks is very critical. So, as users and developers receive PCs with several CPUs/cores, there is an obvious wish to use all the computing power of these PCs and load all the cores by paralleling time consuming computations.
This article is going to discuss the topic of paralleling computations in C#, distributing them effectively through all cores available in the system. We will take a very brief look at what is provided by Microsoft’s parallel computation library, but the main aim of the article is to discuss how to implement parallelism using only standard facilities of the .NET framework and how to make it easy to use, so minimal changes should be done to the existing code in order to support parallelism.
As is known, Microsoft provides an extension for .NET framework 3.5 which allows parallel computations to be distributed across all the cores available in a system. A dedicated blog is also available, where news and information about the library are published.
Microsoft’s library is quite powerful, easy to use, and provides a lot of different features, which allow solving the different tasks of parallel computations. For example, let’s take a look at how to parallel the code below, which does the multiplication of two square matrices:
void MatrixMultiplication( double[,] a, double[,] b, double[,] c )
{
    int s = a.GetLength( 0 );

    for ( int i = 0; i < s; i++ )
    {
        for ( int j = 0; j < s; j++ )
        {
            double v = 0;

            for ( int k = 0; k < s; k++ )
            {
                v += a[i, k] * b[k, j];
            }
            c[i, j] = v;
        }
    }
}
The paralleled version of the code will look like this:
void ParalleledMatrixMultiplicationMS( double[,] a, double[,] b, double[,] c )
{
    int s = a.GetLength( 0 );

    System.Threading.Parallel.For( 0, s, delegate( int i )
    {
        for ( int j = 0; j < s; j++ )
        {
            double v = 0;

            for ( int k = 0; k < s; k++ )
            {
                v += a[i, k] * b[k, j];
            }
            c[i, j] = v;
        }
    } );
}
Microsoft’s solution works really well, and their library provides much more than just a single Parallel.For(). However, there are some issues which may make this library less preferable for your application: it requires .NET framework 3.5, and it is not available for the Mono platform.
I am not sure if all the above points are important for you or not. Personally, I am interested in supporting the Mono project, and also, I am not yet ready to switch all my projects into the .NET 3.5 version.
So, in this article, we are going to discuss a custom Parallel.For() implementation, which can be used with early .NET versions as well as with Mono, and which may be incorporated easily into any project, since it represents just a tiny DLL assembly. By doing our own implementation, we can maintain the simplicity of its usage and a good performance which is not going to be worse than that provided by Microsoft’s parallel extension. Note: we are not going to implement a complete analogue of the parallel extensions library provided by Microsoft, but just Parallel.For(), which covers most paralleling tasks.
In case you are not interested in anything else except Microsoft’s solutions, the article still may be interesting for those who are willing to learn how to implement an easy to use paralleling approach, using just plain C#.
We’ll start by describing how to use our custom implementation of parallelism routines, and leave implementation details for the next section. As stated above, our aim is to make something similar to what Microsoft provides in their parallel extensions library and make it easy to use. So, let’s just implement a variant of the Parallel.For() provided by Microsoft. The only difference of our variant is that we’ll have just a single definition of the method, which accepts start and stop indexes of the for-loop and the loop’s body as a delegate. Below is our square matrix multiplication code, but paralleled using our Parallel.For() implementation.
void ParalleledMatrixMultiplicationAForge( double[,] a, double[,] b, double[,] c )
{
    int s = a.GetLength( 0 );

    AForge.Parallel.For( 0, s, delegate( int i )
    {
        for ( int j = 0; j < s; j++ )
        {
            double v = 0;

            for ( int k = 0; k < s; k++ )
            {
                v += a[i, k] * b[k, j];
            }
            c[i, j] = v;
        }
    } );
}
So, as we can see from the code above, using our custom parallelism implementation is as simple as using Microsoft’s solution, and the only difference is just the namespace name where the Parallel class is defined:
// Microsoft's solution
System.Threading.Parallel.For( 0, s, delegate( int i )
...
// our implementation
AForge.Parallel.For( 0, s, delegate( int i )
...
Yes, as I have already mentioned, Microsoft provides additional Parallel.For() definitions, which may work not only with delegates, but also with lambda expressions, for example. But this is just an extra flexibility feature which comes at a cost. As for me, delegates are enough, and they don’t require .NET framework 3.5.
The complete code of our Parallel.For() implementation may be found in the attachments to the article, but here, we’ll just discuss the main idea, which is concentrated in three main routines – initialization, job scheduling, and execution.
Our implementation of parallelism is going to be based on regular classes from the System.Threading namespace, which are available in any version of .NET – Thread, AutoResetEvent, and ManualResetEvent. To parallel for-loops, we need to create a certain amount of threads, which will be used to execute iterations of the for-loop body, but the events are required to signal about thread availability and job availability. By default, we create the amount of threads which is equal to the amount of cores in the system, but this value may be configured by the user, so more (or less) threads will be used to parallel loops.
// Initialize Parallel class's instance creating required number of threads
// and synchronization objects
private void Initialize( )
{
    // array of events, which signal about available job
    jobAvailable = new AutoResetEvent[threadsCount];
    // array of events, which signal about available thread
    threadIdle = new ManualResetEvent[threadsCount];
    // array of threads
    threads = new Thread[threadsCount];

    for ( int i = 0; i < threadsCount; i++ )
    {
        jobAvailable[i] = new AutoResetEvent( false );
        threadIdle[i] = new ManualResetEvent( true );

        threads[i] = new Thread( new ParameterizedThreadStart( WorkerThread ) );
        threads[i].IsBackground = true;
        threads[i].Start( i );
    }
}
What is the idea behind the two events which are created for each thread? The threadIdle events are required to signal whether a thread is idle doing nothing (available for some job) or busy performing some calculations. The jobAvailable events are required to signal to a particular thread that it needs to wake up and do some work. So, after the above initialization is done, all threadIdle events are set into signaled state, which means that they are all in idle state and available to do something, but all jobAvailable events are set into non-signaled state, which means there is no work for the threads yet. All threads wait on these jobAvailable events, and right after they are turned into signaled state, the threads will wake up and start their job. We'll see a worker thread's function a bit later, so it will become clearer how the threads get their job.
Now, it is time to see how the jobs are scheduled ...
public static void For( int start, int stop, ForLoopBody loopBody )
{
    lock ( sync )
    {
        // get instance of parallel computation manager
        Parallel instance = Instance;

        instance.currentIndex = start - 1;
        instance.stopIndex = stop;
        instance.loopBody = loopBody;

        // signal about available job for all threads and mark them busy
        for ( int i = 0; i < threadsCount; i++ )
        {
            instance.threadIdle[i].Reset( );
            instance.jobAvailable[i].Set( );
        }

        // wait until all threads become idle
        for ( int i = 0; i < threadsCount; i++ )
        {
            instance.threadIdle[i].WaitOne( );
        }
    }
}
From the code above, it can be seen that job scheduling is very simple: save the loop attributes, mark all threads as busy (threadIdle[i].Reset( )), signal to all threads that there is a job for them (jobAvailable[i].Set( )), and then just wait until the threads become idle again.
The last step is to see how the worker threads are actually working ...
// Worker thread performing parallel computations in loop
private void WorkerThread( object index )
{
    int threadIndex = (int) index;
    int localIndex = 0;

    while ( true )
    {
        // wait until there is job to do
        jobAvailable[threadIndex].WaitOne( );

        // exit on null body
        if ( loopBody == null )
            break;

        while ( true )
        {
            // get local index incrementing global loop's current index
            localIndex = Interlocked.Increment( ref currentIndex );

            if ( localIndex >= stopIndex )
                break;

            // run loop's body
            loopBody( localIndex );
        }

        // signal about thread availability
        threadIdle[threadIndex].Set( );
    }
}
So, as we can see from the above code, all the worker threads just sit doing nothing and wait until there is something to do, which is signaled by the jobAvailable events. Once the events are received, the threads start their work: each safely receives the loop index it needs to work on using an interlocked increment, and then just executes the loop's body with the required index, until the entire loop is calculated.
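The same hand-out-the-next-index scheme translates to a few lines of Python (a hypothetical parallel_for sketch; a lock stands in for Interlocked.Increment, and plain threads play the role of the pre-created worker pool):

```python
import threading

def parallel_for(start, stop, loop_body, threads_count=2):
    """Run loop_body(i) once for every i in [start, stop), spread over threads."""
    lock = threading.Lock()
    current = start

    def next_index():
        # hand out the next loop index atomically, like Interlocked.Increment
        nonlocal current
        with lock:
            i = current
            current += 1
            return i

    def worker():
        while True:
            i = next_index()
            if i >= stop:
                return              # loop fully consumed, thread goes idle
            loop_body(i)

    threads = [threading.Thread(target=worker) for _ in range(threads_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# each index is claimed exactly once, so writes to distinct cells are safe
squares = [0] * 10
parallel_for(0, 10, lambda i: squares.__setitem__(i, i * i))
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Unlike the C# class above, this sketch spins up fresh threads on every call instead of keeping a pool waiting on events between loops, which is exactly the per-call overhead the two-event design avoids.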
The entire implementation is very simple, and may be done using any version of the .NET framework, which is what we wanted from the beginning. Now, it is time to test it and see its performance compared to Microsoft's solution.
How are we going to test the performance? Well, we'll use a simple technique: just run our routines many times in a loop, checking how much time we spend on them. The test code looks something like this:
// run specified number of tests
for ( int test = 0; test < tests; test++ )
{
    // test 1
    DateTime start = DateTime.Now;

    for ( int run = 0; run < runs; run++ )
    {
        MatrixMultiplication( a, b, c1 );
    }

    DateTime end = DateTime.Now;
    TimeSpan span = end - start;

    Console.Write( span.TotalMilliseconds.ToString( "F3" ) + "\t | " );
    test1time += span.TotalMilliseconds;

    // other tests
    ...
}
Note that we are going to run several iterations of our tests, so in the end, we'll also get the average performance:
// provide average performance
test1time /= tests;
test2time /= tests;
test3time /= tests;

Console.WriteLine( "------------------- AVG -------------------" );
Console.WriteLine( test1time.ToString( "F3" ) + "\t | " +
                   test2time.ToString( "F3" ) + "\t | " +
                   test3time.ToString( "F3" ) + "\t | " );
So, let's run it and see the results of our tests (the tests below were done on an Intel Core 2 Duo CPU - 2.2 GHz):
Matrix size: 50, runs: 200
Starting test with 2 threads
Clear C# | AForge | MS |
156.250 | 109.375 | 218.750 |
171.875 | 93.750 | 125.000 |
156.250 | 109.375 | 109.375 |
171.875 | 93.750 | 125.000 |
156.250 | 93.750 | 125.000 |
------------------- AVG --------
162.500 | 100.000 | 140.625 |
Matrix size: 100, runs: 100
Starting test with 2 threads
Clear C# | AForge | MS |
687.500 | 390.625 | 515.625 |
718.750 | 390.625 | 406.250 |
703.125 | 390.625 | 406.250 |
687.500 | 390.625 | 406.250 |
734.375 | 390.625 | 406.250 |
------------------- AVG ----------
706.250 | 390.625 | 428.125 |
Matrix size: 250, runs: 40
Starting test with 2 threads
Clear C# | AForge | MS |
4453.125 | 2484.375 | 2593.750 |
4609.375 | 2500.000 | 2500.000 |
4515.625 | 2484.375 | 2500.000 |
4546.875 | 2484.375 | 2500.000 |
4671.875 | 2500.000 | 2500.000 |
------------------- AVG --------------
4559.375 | 2490.625 | 2518.750 |
Matrix size: 1000, runs: 10
Starting test with 2 threads
Clear C# | AForge | MS |
133078.125 | 72406.250 | 72531.250 |
134875.000 | 72718.750 | 72406.250 |
135296.875 | 72578.125 | 72375.000 |
135484.375 | 72531.250 | 75062.500 |
136500.000 | 72515.625 | 72343.750 |
------------------- AVG -------------------
135046.875 | 72550.000 | 72943.750 |
From the provided results, we can see that our implementation does not appear to be worse than Microsoft’s solution. Yes, we don’t have all the features and flexibility, but we’ve met our requirements to support early .NET versions and Mono too. From the above results, it looks like our implementation even performs a bit better.
Analyzing our results a bit further, we can see that Microsoft’s solution requires a bit more time on the very first run, which means that they perform a more complex initialization of worker threads, consuming more time for it.
Also, from our results, we can see that as the matrix size decreases, the performance gain becomes less evident, since the amount of work becomes too small for parallelism to pay off. But we'll discuss this in the next section.
It is a common and good practice to do some sort of profiling and performance test before you start optimizing some code and right after. Performance tests will show how much you got from optimization, and if you got anything at all. In many cases, you may believe that you are optimizing your code, making it run faster, but in actuality, you may get a performance decrease. Parallelism is not a panacea, and in some cases, your paralleled code may perform slower. This may happen due to the fact that the amount of paralleled work is very small and the amount of time spent on threads synchronization is much greater. To demonstrate this effect, let's take a look at the result of paralleling the same matrix multiplication, but let's take small matrices this time:
Matrix size: 10, runs: 1000
Starting test with 2 threads
Clear C# | AForge | MS |
0.000 | 46.875 | 156.250 |
15.625 | 15.625 | 46.875 |
0.000 | 15.625 | 46.875 |
0.000 | 15.625 | 46.875 |
0.000 | 15.625 | 31.250 |
------------------- AVG -------------------
3.125 | 21.875 | 65.625 |
As we can see from the above results, it is much faster to multiply small matrices without any parallelism at all. Parallelizing such computations just means that all the additional thread-synchronization routines take much more time than the actual useful work. So, measure performance before deciding which code to use. Since the Parallel.For() syntax does not differ much from a regular for-loop statement, changing a few lines of code to run a different test should not be an issue.
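The measure-first advice can be sketched quickly. The article's harness is C#; below is a rough Java analogue (the class and method names `MeasureFirst` and `multiplyParallel` are my own, not part of any library), which times the same matrix multiplication sequentially and with a parallel stream so both variants can be compared before committing to either:

```java
import java.util.stream.IntStream;

public class MeasureFirst {
    // Multiply two n-by-n matrices sequentially.
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)
                for (int j = 0; j < n; j++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    // Same computation, but rows are distributed across worker threads.
    // Each thread writes to its own rows of c, so no locking is needed.
    static double[][] multiplyParallel(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        IntStream.range(0, n).parallel().forEach(i -> {
            for (int k = 0; k < n; k++)
                for (int j = 0; j < n; j++)
                    c[i][j] += a[i][k] * b[k][j];
        });
        return c;
    }

    // Wall-clock time of a task, in milliseconds.
    static long time(Runnable r) {
        long start = System.nanoTime();
        r.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        for (int n : new int[] { 10, 300 }) {
            double[][] a = new double[n][n], b = new double[n][n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) { a[i][j] = i + j; b[i][j] = i - j; }
            long seq = time(() -> multiply(a, b));
            long par = time(() -> multiplyParallel(a, b));
            System.out.println("n=" + n + " sequential=" + seq + "ms parallel=" + par + "ms");
        }
    }
}
```

Running it for n = 10 and n = 300 typically reproduces the effect described above: the parallel version only wins once the matrices are large enough to amortize the synchronization overhead.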
So, we've managed to make our own small but easy-to-use Parallel.For() implementation, which performs quite well and is not much worse than the parallel extensions from Microsoft. Of course, Microsoft provides more flexibility and features, but many parallelization tasks can be solved just by parallelizing for-loops, which was our aim, and we achieved it.
We discussed only matrix multiplication in this article, so more samples could be investigated to provide clearer results. Personally, I am already using this AForge.Parallel.For() implementation in another project, which deals with image processing among other things. Image processing, with its quite time-consuming computations, is one of the many areas that can utilize parallel computations successfully, and preliminary tests of that project have already shown the boost from parallelism.
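To illustrate why image processing parallelizes so naturally, here is a minimal sketch (in Java rather than the article's C#; `ParallelBrightness` is a made-up name, not part of AForge.NET). Each image row is independent, so rows can be handed to separate threads without any locking on the pixel data:

```java
import java.util.stream.IntStream;

public class ParallelBrightness {
    // Increase the brightness of an 8-bit grayscale image, clamping to [0, 255].
    // Rows are processed in parallel; no two threads ever touch the same row,
    // so no synchronization on the pixel array is required.
    static void brighten(int[][] pixels, int delta) {
        IntStream.range(0, pixels.length).parallel().forEach(y -> {
            int[] row = pixels[y];
            for (int x = 0; x < row.length; x++)
                row[x] = Math.min(255, Math.max(0, row[x] + delta));
        });
    }
}
```

The same row-wise (or tile-wise) decomposition applies to most point and neighborhood filters, which is why image-processing loops are such good candidates for a Parallel.For-style helper.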
Some tuning of the existing implementation may be done by investigating the optimal number of background threads to use. By default, the current implementation creates as many threads as there are cores in the system. This can be changed through the AForge.Parallel.ThreadsCount property, which lets the user specify exactly how many background threads to create. For comparison, Microsoft's solution creates more than two threads on a dual-core system: inspecting the Task Manager on my machine, I found that around 10 additional threads were created when their Parallel.For() was invoked.
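To make the idea of a configurable thread count concrete, here is a minimal sketch of a Parallel.For-style helper in Java (`SimpleParallel` and its `setThreadsCount` method are my own illustration of the concept, not the AForge API; note also that the real AForge implementation reuses its worker threads between calls, which this sketch does not):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.IntConsumer;

public class SimpleParallel {
    // Defaults to the number of cores, but can be overridden,
    // similar in spirit to a ThreadsCount property.
    private static int threadsCount = Runtime.getRuntime().availableProcessors();

    public static void setThreadsCount(int n) { threadsCount = Math.max(1, n); }
    public static int getThreadsCount() { return threadsCount; }

    // Run body(i) for i in [start, stop) across threadsCount workers.
    public static void forLoop(int start, int stop, IntConsumer body) {
        final int n = threadsCount;
        ExecutorService pool = Executors.newFixedThreadPool(n);
        CountDownLatch done = new CountDownLatch(n);
        for (int t = 0; t < n; t++) {
            final int offset = t;
            pool.execute(() -> {
                try {
                    // Interleaved partitioning: worker t handles
                    // start+t, start+t+n, start+t+2n, ...
                    for (int i = start + offset; i < stop; i += n)
                        body.accept(i);
                } finally {
                    done.countDown();
                }
            });
        }
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        pool.shutdown();
    }
}
```

Partitioning here is interleaved, so worker t handles indices t, t + N, t + 2N, and so on, which keeps the load roughly even when each loop iteration does a similar amount of work.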
As another direction of increasing performance of parallel computational tasks, the utilization of GPUs may be considered. For example, NVidia provides the CUDA library, which may be used to utilize their GPUs for general purpose computing. Taking a look at the CUDA website, we can find that it was successfully applied to many different applications.
I would like to thank Israel Lot for his valuable comments on the AForge.Parallel implementation and his contribution.
Although the code and demo application are provided with this article, AForge.Parallel is going to become one of the new features of the AForge.NET framework. It will be released in version 2.0, where the class will be used to parallelize other classes of the framework.
This article, along with any associated source code and files, is licensed under The GNU General Public License (GPLv3)
public:
    int a;

    void Paralelo()
    {
        AForge::Parallel::For(0, 1000, gcnew Action<int>(this, &Treads::Form1::LoopParalelo));
    }

    void LoopParalelo(int i)
    {
        a = i * i;
    }
cannot convert the 3rd parameter from 'System::Action ^' to 'AForge::Parallel::ForLoopBody ^'
Could you help me?
Thanks, Juan.
Sorry for my bad English.
My mail is: juanmanuel_faus@yahoo.com.ar
int i = 0;

// Variables
PointF Or = new PointF();
Or.X = (float)(this.Width / 2.0 + _orT.X);
Or.Y = (float)(this.Height / 2.0 + _orT.Y);

// Transformation matrix
double[,] Matrix = new double[2, 3];
Matrix[0, 0] = scale * Math.Sin(AngleRads);
Matrix[0, 1] = scale * Math.Cos(AngleRads);
Matrix[0, 2] = Or.X;
Matrix[1, 0] = -scale * Math.Cos(AngleRads);
Matrix[1, 1] = scale * Math.Sin(AngleRads);
Matrix[1, 2] = Or.Y;

var random = new Random();

// Data array creation: polar coordinates
float[,] datos = new float[36000, 2];
for (i = 0; i < 36000; i++)
{
    datos[i, 0] = i;                          // Angle
    datos[i, 1] = (float)random.NextDouble(); // Range
}

// Plain C# method
float X, Y, xt, yt;
NTGS.Class_CronometroMilisegundos Crono = new NTGS.Class_CronometroMilisegundos();
Crono.Start();
for (i = 0; i < 36000; i++)
{
    // Compute components
    xt = (float)(datos[i, 1] * Math.Cos(datos[i, 0] * 2 * Math.PI / (100 * 360)));
    yt = (float)(datos[i, 1] * Math.Sin(datos[i, 0] * 2 * Math.PI / (100 * 360)));
    // Compute positions
    X = (int)(Matrix[0, 0] * xt + Matrix[0, 1] * yt + Matrix[0, 2]);
    Y = (int)(Matrix[1, 0] * xt + Matrix[1, 1] * yt + Matrix[1, 2]);
}
Crono.Stop();
double t1 = Crono.EllapsedMilliseconds;

// AForge Parallel method
Crono.Start();
AForge.Parallel.For(0, 36000, k =>
{
    // Compute components
    xt = (float)(datos[k, 1] * Math.Cos(datos[k, 0] * 2 * Math.PI / (100 * 360)));
    yt = (float)(datos[k, 1] * Math.Sin(datos[k, 0] * 2 * Math.PI / (100 * 360)));
    // Compute positions
    datos[k, 0] = (int)(Matrix[0, 0] * xt + Matrix[0, 1] * yt + Matrix[0, 2]);
    datos[k, 1] = (int)(Matrix[1, 0] * xt + Matrix[1, 1] * yt + Matrix[1, 2]);
});
Crono.Stop();
double t2 = Crono.EllapsedMilliseconds;
size = 36000

              ms plain C# | ms Parallel
iteration 1 :   10 ms     |   43 ms
iteration 2 :   9.8 ms    |   10 ms
iteration 3 :   10.5 ms   |   10.5 ms
iteration 4 :   9.6 ms    |   10.5 ms
iteration 5 :   9.6 ms    |   10.5 ms

size = 20,000,000

iteration 1 :   5500 ms   |   6000 ms
iteration 2 :   5690 ms   |   5710 ms
iteration 3 :   5627 ms   |   5754 ms
English Pages 237 [268] Year 2004
Table of Contents:
Better, Faster, Lighter Java
by Bruce A. Tate and Justin Gehtland
Printed in the United States of America.
Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.
Editor: Mike Loukides
Production Editor: Colleen Gorman
Cover Designer: Ellie Volckhausen
Interior Designer: Melanie Wang
Printing History: June 2004: First Edition.
ix -Preface
1 - 1. The Inevitable Bloat
1 -- Bloat Drivers
9 -- Options
11 -- Five Principles for Fighting the Bloat
15 -- Summary
17 - 2. Keep It Simple
17 -- The Value of Simplicity
21 -- Process and Simplicity
26 -- Your Safety Net
35 -- Summary
36 - 3. Do One Thing, and Do It Well
37 -- Understanding the Problem
41 -- Distilling the Problem
46 -- Layering your Architecture
51 -- Refactoring to Reduce Coupling
60 -- Summary
61 - 4. Strive for Transparency
61 -- Benefits of Transparency
62 -- Who's in Control?
64 -- Alternatives to Transparency
70 -- Reflection
77 -- Injecting Code
79 -- Generating Code
82 -- Advanced Topics
85 -- Summary
87 - 5. You Are What You Eat
88 -- Golden Hammers
98 -- Understanding the Big Picture
102 -- Considering Technical Requirements
106 -- Summary
107 - 6. Allow for Extension
107 -- The Basics of Extension
112 -- Tools for Extension
123 -- Plug-In Models
126 -- Who Is the Customer?
128 -- Summary
129 - 7. Hibernate
129 -- The Lie
130 -- What is Hibernate?
141 -- Using Your Persistent Model
145 -- Evaluating Hibernate
150 -- Summary
151 - 8. Spring
151 -- What is Spring?
154 -- Pet Store: A Counter-Example
159 -- The Domain Model
161 -- Adding Persistence
170 -- Presentation
175 -- Summary
177 - 9. Simple Spider
178 -- What Is the Spider?
179 -- Examining the Requirements
182 -- Planning for Development
182 -- The Design
183 -- The Configuration Service
187 -- The Crawler/Indexer Service
193 -- The Search Service
196 -- The Console Interface
199 -- The Web Service Interface
203 -- Extending the Spider
204 - 10. Extending jPetStore
204 -- A Brief Look at the Existing Search Feature
207 -- Replacing the Controller
211 -- The User Interface (JSP)
214 -- Setting Up the Indexer
216 -- Making Use of the Configuration Service
218 -- Adding Hibernate
224 -- Summary
226 - 11. Where Do We Go From Here?
226 -- Technology
231 -- Process
232 -- Challenges
232 -- Conclusion
234 - Bibliography
237 - Index
Better, Faster, Lighter Java™
by Bruce A. Tate and Justin Gehtland

O'Reilly
Beijing • Cambridge • Farnham • Köln • Paris • Sebastopol • Taipei • Tokyo

Copyright © 2004. Institutional sales department: (800) 998-9938 or [email protected]
Nutshell Handbook, the Nutshell Handbook logo, and the O'Reilly logo are registered trademarks of O'Reilly Media, Inc. The Java Series, Better, Faster, Lighter Java, the image of a hummingbird, and related trade dress are trademarks of O'Reilly Media, Inc. Java™ and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc.
This book uses RepKover™, a durable and flexible lay-flat binding.
ISBN: 0-596-00676-4 [M]
Preface
In 2001, I was with Steve Daniel, a respected kayaker. We were at Bull Creek after torrential rains, staring at the rapid that we later named Bores. The left side of the rapid had water, but we wanted no part of it. We were here to run the V, a violent six-foot drop with undercut ledges on the right, a potential keeper hydraulic on the left, and a boiling tower of foam seven feet high in the middle. I didn't see a clean route. Steve favored staying right and cranking hard to the left after the drop to avoid the undercut ledge. I was leaning left, where I'd have a tricky setup, and where it would be tough to identify my line, but I felt that I could find it and jump over the hydraulic after making a dicey move at the top. We both dismissed the line in the middle. Neither of us thought we could keep our boats upright after running the drop and hitting the tower, which we called a haystack because of its shape. Neither of us was happy with our intended line, so we stood there and stared.

Then a funny thing happened. A little boy, maybe 11 years old, came over with a $10 inflatable raft. He shoved it into the main current, and without paddle, life jacket, helmet, or any skill whatsoever, he jumped right in. He showed absolutely no fear. The stream predictably took him where most of the water was going, right into the "tower of power." The horizontal force of the water shot him through before the tower could budge him an inch. We both laughed hysterically. He should have been dead, but he made it, using an approach that more experienced kayakers would never have considered. We had our line.

In 2004, I went with 60 kids to Mexico to build houses for the poor. I'd done light construction of this kind before, and we'd always used portable cement mixers to do the foundation work. This group preferred another method. They'd pour all of the ingredients on the ground: cement, gravel, and sand. We'd mix up the piles with shovels, shape it like a volcano, and then pour water in the middle. The water would soak in, and we'd stir it up some more, and then shovel the fresh cement where we wanted it. The work was utterly exhausting. I later told the project director that he needed cement mixers; they would have saved a lot of backbreaking effort.
He asked me how to maintain the mixers. I didn't know. He asked where he might store them. I couldn't tell him. He then asked how he might transport them to the sites, because most groups tended to bring vans and not pickup trucks. I finally got the picture. He didn't use cement mixers because they were not the right tool for the job for remote sites in Mexico. They might save a half a day of construction effort, but they added just as much or more work to spare us that effort. The tradeoff, once fully understood, not only failed on a pure cost basis, but wouldn't work at all given the available resources.

In 2003, I worked with an IT department to simplify their design. They used a multilayered EJB architecture because they believed that it would give them better scalability and protect their database integrity through sophisticated transactions. After much deliberation, we went from five logical tiers to two, completely removed the EJB session and entity beans, and deployed on Tomcat rather than WebLogic or JBoss. The new architecture was simpler, faster, and much more reliable.

It never ceases to amaze me how often the simplest answer turns out to be the best one. If you're like the average J2EE developer, you probably think you could use a little dose of simplicity about now. Java complexity is growing far beyond our capability to comprehend. XML is becoming much more sophisticated, and being pressed into service where simple parsed text would easily suffice. The EJB architecture is everywhere, whether it's warranted or not. Web services have grown from a simple idea and three major APIs to a mass of complex, overdone standards. I fear that they may also be forced into the mainstream. I call this tendency "the bloat." Further, so many of us are trained to look for solutions that match our predetermined complicated notions that we don't recognize simple solutions unless they hit us in the face.

As we stare down into the creek at the simple database problem, it becomes a blob of EJB. The interfaces become web services. This transformation happens to different developers at different times, but most enterprise developers eventually succumb. The solutions you see match the techniques you've learned, even if they're inappropriate; you've been trained to look beyond the simple solutions that are staring you in the face.

Java is in a dangerous place right now, because the real drivers, big vendors like Sun, BEA, Oracle, and IBM, are all motivated to build layer upon layer of sophisticated abstractions, to keep raising the bar and stay one step ahead of the competition. It's not enough to sell a plain servlet container anymore. Tomcat is already filling that niche. Many fear that JBoss will fill a similar role as a J2EE application server killer. So, the big boys innovate and build more complex, feature-rich servers. That's good, if the servers also deliver value that we, the customers, can leverage. More and more, though, customers can't keep up. The new stuff is too hard. It forces us to know too much. A typical J2EE developer has to understand relational databases, the Java programming language, EJB abstractions, JNDI for services, JTA for transactions, JCA and data sources for connection management, XML for data
representation, Struts for abstracting user interface MVC designs, and so on. Then, she's got to learn a whole set of design patterns to work around holes in the J2EE specification. To make things worse, she needs to keep an eye on the future and at least keep tabs on emerging technologies like JavaServer Faces and web services that could explode at any moment. To top it off, it appears that we are approaching an event horizon of sorts, where programmers are going to spend more time writing code to support their chosen frameworks than to solve their actual problems. It's just like with the cement mixers in Mexico: is it worth it to save yourself from spending time writing database transactions if you have to spend 50% of your time writing code supporting CMP?

Development processes as we know them are also growing out of control. No human with a traditional application budget can concentrate on delivering beautiful object interaction diagrams, class diagrams, and sophisticated use cases and still have enough time to create working code. We spend as much or more time on a project on artifacts that will never affect the program's performance, reliability, or stability. As requirements inevitably change due to increasing competitive pressures, these artifacts must also change, and we find that rather than aiding us, these artifacts turn into a ball, tied to a rope, with the other end forming an ever-tightening noose around our necks.

There's a better way. A few independent developers are trying to rethink enterprise development, and building tools that are more appropriate for the job. Gavin King, creator of Hibernate, is building a persistence framework that does its job with a minimal API and gets out of the way. Rod Johnson, creator of Spring, is building a container that's not invasive or heavy or complicated. They are not attempting to build on the increasingly precarious J2EE stack. They're digging through the muck to find a more solid foundation.
In short, I'm not trying to start a revolution. It's already started. That's the subject of this book. I recommend that we re-imagine what J2EE could and should be, and move back down to a base where we can apply real understanding and basic principles to build simpler applications. If you're staring at the rapids, looking at solutions you've been taught will work, but you still don't quite see how to get from point A to point B without real pain, it's time to rethink what you're doing. It's time to get beyond the orthodox approaches to software development and focus on making complex tasks simple. If you embrace the fundamental philosophies in this book, you'll spend more time on what's important. You'll build simpler solutions. When you're done, you'll find that your Java is better, faster, and lighter.
Who Should Read This Book?

This book isn't for uber-programmers who already have all the answers. If you think that J2EE does everything that you need it to do and you can make it sing, this book is not for you. Believe me, there are already enough books out there for you.
If you've already cracked the code for simplicity and flexibility, I'm probably not going to teach you too much that's new. The frameworks I hold up as examples have been around for years, although, incredibly, people are only now starting to write about them. The techniques I show will probably seem like common sense to you. I'll take your money, but you'll probably be left wanting when you're done. This book is for the frustrated masses. It's intended for those intermediate-to-advanced developers with some real experience with Java who are looking for answers to the spiraling complexity. I'll introduce you to some ideas with power and bite. I know that you won't read a phone book. You haven't got time, so I'll keep it short. I'll try to show you techniques with real examples that will help you do things better than you did before.
Organization

[…] services that are necessary for most applications but have little to do with the actual problem domain. This chapter examines the methods for providing these kinds of services without unnecessarily affecting the code that solves your business problem, that is, how to solve them transparently. The two main methods we examine are reflection and code generation.
Chapter 5, You Are What You Eat
Every choice of technology or vendor you make is an embodiment of risk. When you choose to use Java, or log4j, or JBoss, or Struts, you are hitching yourself to their wagon. This chapter examines some of the reasons we choose certain technologies for our projects, some traditional choices that the marketplace has made (and why they may have been poor choices), and some strategies for making […] extension after its release to the world. This chapter examines the techniques for providing […] construction […] Spider […]
Conventions

[…] everything […] be replaced with an actual value in your program.

Constant width bold
Used for user input in text and in examples showing both input and output. Also used for emphasis in code, and in order to indicate a block of text included in an annotated call […]

[…] (international/local)
(707) 829-0104 (fax)

There is a web page for this book, which lists errata, examples, or any additional information. You can access this page at: […]

To comment or ask technical questions about this book, send email to: [email protected]

For information about books, conferences, Resource Centers, and the O'Reilly Network, see the O'Reilly web site at: […]
Acknowledgments

This book has been a real pleasure to write and I hope that translates to something that's a joy for you to read. The names on the cover are necessarily only a small part of the total team effort that it took to produce this book. It would be impossible to thank every person that contributed, but I feel the obligation to try.

Both Bruce and Justin would like to thank Michael Loukides for his gentle encouragement, expert touch, and steady hand. At times, it may have seemed like this book would write itself, but don't underestimate your impact on it. Thanks for giving us the freedom to do something unique, and the gentle guidance and leadership when the book required it. We also greatly appreciate our outstanding technical reviewers, including Stuart Holloway, Andy Hunt, Dave Thomas, and Glenn Vanderburg. We respect each of you deeply. It's truly an honor to have such a combined brain-trust review our book. Special thanks go to Rod Johnson for his quick response and thorough attention while editing the Spring chapter. I'm astounded by what he's accomplished. Many heartfelt thanks also go to the production and marketing teams at O'Reilly, including David Chu for doing whatever it takes to speed the project along, Robert Romano for his work on the graphics, Daniel H. Steinberg for keeping us in front of his community, Colleen Gorman for her experienced, delicate editing, and Kyle Hart for her tireless promotion.

This book is about lighter, faster technologies and it relies heavily on the opinions and work of some pioneers. Thanks to the folks at IntelliJ, for use of a fantastic IDE. We used it to create many of the examples in this book. Thanks to Ted Neward, for his help in understanding JSR 175, and for his unique perspective. Ted, you scare me, only in a good way (sometimes). For his work on Spring, we thank again Rod Johnson. Thanks also to those who contributed to the open source JPetstore examples, including Clinton Begin for his original JPetstore, which formed the foundation for Spring's version, and Juergen Hoeller's work to port that example to Spring. Gavin King and crew we thank for a fantastic persistence framework. Your remarkable accomplishments are rewriting Java history in the area of transparent persistence. We also would like to thank Doug Cutting and the entire Lucene maintenance team for their work on that excellent product. Dave Thomas and Mike Clark are Java leaders in the areas of test-driven development and decoupled designs. Thanks to both for providing credible examples for this book.
Bruce A. Tate

I would like to personally thank Jay Zimmerman for giving me a soapbox for this critical message. As a mentor, you've taught me how to run a small business, you've trusted me with your customers, and you've been a jovial friend on the road. Thanks go to Maciej for helping to get the ball rolling and for help outlining this book. Thanks
also go to Mike Clark for your ideas on unit testing, and your friendship. Most importantly, I thank my family. You are all the reason that I write. Thanks to Kayla and Julia for your smiles, kisses, and hugs when I am down; to my greatest love Maggie, for your inspiration and understanding; and most of all Connie, for 32 years of loving those who have been the closest to me. Connie, this book is for you.
Justin Gehtland

I would like to personally thank Stuart Halloway for being preternaturally busy all the time. I'd also like to say thanks to Ted Neward, Kevin Jones, and Erik Hatcher for forming a gravitational well pulling me towards Java. Mostly, I'd like to thank my wife Lisa and daughter Zoe, who prove to me constantly that work isn't everything. Someday, perhaps, I'll write a book you'd both like to read.
CHAPTER 1
The Inevitable Bloat
Java development is in crisis. Though Java's market share has been steadily growing, all is not well. I've seen enterprise Java development efforts fail with increasing regularity. Even more alarming is that fewer and fewer people are surprised when things do go wrong. Development is getting so cumbersome and complex that it's threatening to collapse under its own weight. Typical applications use too many design patterns, too much XML, and too many Enterprise JavaBeans. And too many beans leads to what I'll call the bloat.
Bloat Drivers

I'll illustrate the bloat by comparing it with the famous Lewis and Clark expedition. They started with a huge, heavily loaded 55-foot keel boat. Keel boats were well designed for traversing massive rivers like the Missouri and the Mississippi, but quickly bogged down when the expedition needed to navigate and portage the tighter, trickier rivers out West. Lewis and Clark adapted their strategy; they moved from the keel boats to canoes, and eventually to horseback. To thrive, we all must do the same. Java has not always been hard, and it doesn't have to be today. You must once again discover the lighter, nimbler vessels that can get you where you need to go. If the massive, unwieldy frameworks hinder you, then don't be afraid to beach them. To use the right boat, you've got to quit driving the bloat.

Over time, most successful frameworks, languages, and libraries eventually succumb to bloat. Expansion does not happen randomly; powerful forces compel evolution. You don't have to accept my premise blindly. I've got plenty of anecdotal evidence. In this chapter, I'll show you many examples of the bloat in applications, languages, libraries, frameworks, middleware, and even in the operating system itself.
Enterprise Mega-Frameworks

Java developers live with a painful reality: huge enterprise frameworks are en vogue. That might be good news to you if you're among the 10% of Java developers who are working on the hardest problems, and your applications happen to fit those enterprise frameworks perfectly. The rest of us are stuck with excruciating complexity for little or no benefit. Successful J2EE vendors listen to the market:

• Vendors can charge mega-dollars for mega-frameworks. Selling software means presenting the illusion of value. Big companies have deep pockets, so vendors build products that they can sell to the big boys.
• It's hard to compete with other mega-frameworks if you don't support the same features. Face it. Software buyers respond to marketing tally sheets like Pavlov's dogs responded to the dinner bell.
• Collaboration can increase bloat. Whenever you get multiple agendas driving a software vision, you get software that supports multiple agendas, often with unintended consequences. That's why we have two dramatically different types of EJB. The process satisfied two dramatically different agendas.

You can almost watch each new enterprise framework succumb to the bloat, like chickens being fattened for market. In its first incarnation, XML was slightly tedious, but it provided tremendous power. In truth, XML in its first iteration did almost everything that most developers needed it to. With the additions of XML Schema and the increased use of namespaces, XML is dramatically more cumbersome than ever before. True, Schema and namespaces make it easier to manage and merge massive types. Unfortunately, once-simple web services are taking a similar path. But none of those frameworks approach the reputation that Enterprise JavaBeans (EJB) has achieved for bloat.
EJB container-managed persistence (CMP) is the poster child for tight coupling, obscure development models, integrated concerns, and sheer weight that are all characteristic of the bloat (Figure 1-1).

Figure 1-1. In theory, EJB's beans simplify enterprise programming (the figure shows beans plugging into a container that provides transactions, persistence, and distribution)
Chapter 1: The Inevitable Bloat
Figure 1-1 shows the EJB container-based architecture. Beans plug into a container that provides services. The premise is sound: you'd like to use a set of system services like persistence, distribution, security, and transactional integrity. The EJB is a bean that snaps into the container, which implements the set of services that the bean will use. Within the bean, the developer is free to focus on business concerns.

My favorite childhood story was The Cat in the Hat by Dr. Seuss, who should have been a programmer. I loved the game called "Up, up, with the fish," in which the Cat tries to keep too many things in the air at once. As an EJB programmer, it's not quite as funny, because you're the one doing the juggling. Consider the very simple example in Example 1-1. I want a simple counter, and I want it to be persistent. Now, I'll play the Cat, and climb up on the ball to lob the first toy into the air.

Example 1-1. Counter example: implementation

    package com.betterjava.ejbcounter;

    import javax.ejb.*;
    import java.rmi.*;

    /**
     * CMP bean that counts
     */
    public abstract class Counter implements EntityBean {

        private EntityContext context = null;

        public abstract Long getID();
        public abstract void setID(Long id);
        public abstract int getCount();
        public abstract void setCount(int count);

        public Object ejbCreate(Long id, int count)
                throws CreateException {
            setID(id);
            setCount(count);
            return null;
        }

        public void ejbPostCreate(Long id, int count)
                throws CreateException { }

        public void setEntityContext(EntityContext c) {
            context = c;
        }

        public void unsetEntityContext() {
            context = null;
        }

        public void ejbRemove() throws RemoveException { }
        public void ejbActivate() { }
        public void ejbPassivate() { }
        public void ejbStore() { }
        public void ejbLoad() { }

        public void increment() {
            int i = getCount();
            i++;
            setCount(i);
        }

        public void clear() {
            setCount(0);
        }
    }
The first file, called the bean, handles the implementation. Note that this class has the only business logic that you will find in the whole counter application. It accesses two member variables through getters and setters, the counter value and ID, which will both be persistent. It's also got two other methods, called clear and increment, that reset and increment the counter, respectively. For such a simple class, we've got an amazing amount of clutter. You can see the invasive nature of EJB right from the start:

• This class implements the EntityBean interface, so you've got to use it in the context of an EJB container. In fact, you can use it only within an EJB container; you cannot run the code with other types of containers.

• You see several lifecycle methods that have nothing to do with our business function of counting: ejbActivate, ejbPassivate, ejbStore, ejbLoad, ejbRemove, setEntityContext, and unsetEntityContext.

• Unfortunately, I've had to tuck all of the application logic away into a corner. If a reader of this application did not know EJB, he'd be hard-pressed to understand exactly what this class was designed to do.
I'm not going to talk about the limitations of container-managed persistence. If you're still typing along, you've got four classes to go. As the Cat said, "But that is not all, no that is not all." Example 1-2 shows the next piece of our EJB counter: the local interface.
Example 1-2. Local interface

    package com.betterjava.ejbcounter;

    import javax.ejb.*;

    /**
     * Local interface to the Counter EJB.
     */
    public interface CounterLocal extends EJBLocalObject {
        public abstract Long getID();
        public abstract void setID(Long id);
        public abstract int getCount();
        public abstract void setCount(int count);
    }

This is the interface, and it is used as a template for code generation. Things started badly, and they're deteriorating. You're tightly coupling the interface to EJBLocalObject. You are also dealing with increasing repetition. Notice that I've had to repeat all of my implementation's accessors, verbatim, in the interface class. This example shows just one instance of the mind-boggling repetition that plagues EJB. To effectively use EJB, you simply must use a tool or framework that shields you from the repetition, like XDoclet, which generates code from documentation comments in the code. If you're a pure command-line programmer, that's invasive. But, "'Have no fear,' said the Cat." Let's push onward to Example 1-3.

Example 1-3. LocalHome interface

    package com.betterjava.ejbcounter;

    import javax.ejb.*;
    import java.rmi.*;
    import java.util.*;

    /**
     * Home interface to the local Counter EJB.
     */
    public interface CounterLocalHome extends EJBLocalHome {
        public Collection findAll() throws FinderException;
        public CounterLocal findByPrimaryKey(Long id) throws FinderException;
        public CounterLocal create(Long id, int count) throws CreateException;
    }
In Example 1-3, you find the methods that support the container's management of our persistent object. Keep in mind that this class is a generic, standalone persistent class, with no special requirements for construction, destruction, or specialized queries. Though you aren't building any specialized behavior at all, you must still create a default local home interface that builds finder methods and templates for the lifecycle of the bean, like creation and destruction. At this point, I'm going to trust that you've gotten the message. I'll omit the painful deployment descriptor that has configuration and mapping details and the primary key object. I'm also not going to include a data transfer object (DTO), though for well-documented reasons, you're not likely to get acceptable performance without one. Dr. Seuss sums it up nicely: "And this mess is so big and so deep and so tall, we cannot pick it up. There is no way at all." You'd be hard-pressed to find a persistence framework with a more invasive footprint. Keep in mind that every persistent class requires the same handful of support interfaces, deployment descriptors, and classes. With all of this cumbersome, awkward goo, things get dicey. Some Cats have enough dexterity to keep all of those toys in the air. Most don't.
Progress

Developers do not want their programming languages to stay still. They want them to be enhanced and improved over time; so, we must continually add. Yet language vendors and standards boards can't simply remove older interfaces. In order to be successful, languages must maintain backwards compatibility. As a result, additions are not usually balanced with subtractions (Figure 1-2). That's a foolproof recipe for bloat.

Figure 1-2. Backwards compatibility with progress leads to bloat (the figure shows the current platform squeezed between pressure to add and pressure to remove)
If you'd like to see an example of this principle in action, look no further than the deprecated classes and methods in Java. Deprecated literally means "to disapprove of strongly," or "to desire the removal of." In Java, Sun warns against the use of deprecated classes and methods, because they may be removed in some future release. I assume that they are defining either remove or future very loosely, because deprecated methods never disappear. In fact, if you look at the AWT presentation library for Java, you'll find many methods that have been deprecated since Version 1.1, over half a decade ago.

You can also look at the other side of the equation. The next few versions of Java are literally packed with new features. If you're wondering about the impact of these changes on the overall size of the Java runtimes, then you're asking the right questions. Let's take a very basic metric: how big was the Zip file for the Windows version of the standard edition SDK? Table 1-1 shows the story. In Version 1.1, you would have to download just under 3.7 megabytes. That number has grown to 38 megabytes for JDK 1.4!

Table 1-1. Zip file size for the standard edition Java developer kit, Version 1.1 through Version 1.4

    JDK version, for Windows    Zip file size
    JDK 1.1                     3.7 MB
    J2SE 1.2                    20.3 MB
    J2SE 1.3                    33.2 MB
    J2SE 1.4                    38.0 MB
You may ask, so what? Computers are getting faster, and Java is doing more for me than ever before. It may seem like you've got a free ride, but the ever-growing framework will cost you, and others:

• Some of the growth is occurring in the standard libraries. If the bloat were purely in add-on libraries, then you could perhaps avoid it by choosing not to install the additional libraries. But you can't dodge the standard libraries. That means that your resource requirements will increase.

• Java is harder to learn. Early versions of Java allowed most programmers to pick up a few books, or go to class for a week. Today, the learning curve is steeper for all but the most basic tasks. While the steep curve may not directly affect you, it does affect your project teams and the cost of developers.

• It's harder to find what you need. Since the libraries continue to grow, you need to wade through much more data to find the classes and methods that you need to do your job.

• You need to make more decisions. As alternatives appear in the basic Java toolkits (and often in open source projects), you've got to make more decisions between many tools that can do similar jobs. You must also learn alternatives to deprecated classes and methods.

• You can't fully ignore old features: people still use deprecated methods. How many Vectors have you seen in the past couple of years?

Platforms are not immune to the bloat. That's a fact of life that's beyond your control. My point is not to add needless anxiety to your life, but to point out the extent of the problems caused by the bloat.
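The way old features linger is easy to see in code. The sketch below is my own illustration, not from the text: a helper built around the legacy Vector class, marked @Deprecated, next to the List-based replacement that developers must also learn. Both remain callable indefinitely, which is exactly why deprecated features never really disappear.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Vector;

public class DeprecationDemo {
    // Legacy API built on Vector; marked deprecated, but it still compiles and runs.
    @Deprecated
    static int legacySum(Vector<Integer> values) {
        int sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    // The modern alternative, programmed against the List interface.
    static int sum(List<Integer> values) {
        int sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        Vector<Integer> old = new Vector<>();
        old.add(1);
        old.add(2);
        System.out.println(legacySum(old));            // prints 3
        System.out.println(sum(new ArrayList<>(old))); // prints 3
    }
}
```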
Economic Forces

To be more specific, success drives bloat. The marketplace dictates behavior. Microsoft does not upgrade their operating systems to please us, or to solve our problems. They do so to make money. In the same way, commercial drivers will continue to exert pressure on Java to expand, so you'll buy Java products and align yourself with their vision. Beyond license fees, Sun does not make money directly from Java, but it's far from a purely altruistic venture. The Java brand improves Sun's credibility, so they sell more hardware, software, and services.

Market leaders in the software industry cannot stay still. They must prompt users to upgrade, and attract new customers. Most vendors respond to these challenges by adding to their feature set. For just one example, try installing Microsoft Office. Check out the size of the Word application. Though most users do little more than compose memos and email, Word has grown to near-Biblical proportions. Word has its own simple spreadsheet, a graphics program, and even web publishing built in. Most Word users have noticed few substantive changes over the years. To me, the last life-changing enhancements in Word were the real-time spelling checker and change tracking. Upgrade revenue and the needs of the few are definitely driving Word development today. Keep in mind that I'm an author, and spend way too much time in that application. Of course, we can't blame Microsoft. They're trying to milk a cash cow, just like everyone else. Yet, like many customers, I would be much happier with a cheaper word processor that started faster, responded faster, and crashed less.

Within the Java industry, BEA is an interesting illustration of this phenomenon. To this point, BEA has built a strong reputation by delivering an outstanding application server. From 2001 to the present, BEA and IBM have been fighting a fierce battle to be the market-leading J2EE application server.
IBM increased their WebSphere brand to include everything from their traditional middleware (the layer of software between applications and the operating system) to extensions used to build turnkey e-commerce sites and portals. Two minor competing products, JBoss and Oracle9iAS, were starting to eat away at BEA's low-end market share. Both of these products were inexpensive. Oracle priced their product aggressively for users of their database, and JBoss was an open source project, so BEA was under tremendous pressure to build more value into their product and stay competitive. They responded by extending their server to enterprise solutions for building portal software, messaging middleware, and business integration. They also started a number of other initiatives in the areas of data (Liquid Data), user interface development (NetUI), and simplified application development (WorkBench). Building a great J2EE application server is simply not enough for BEA any more. They, too, must expand, and extend the inevitable bloat.
Misuse

Nothing drives bloat more than misuse. If you go to Daddy's toolkit and borrow his cool pipe wrench when you need to drive a nail, something's going to go awry. The book Antipatterns, by William J. Brown, et al. (Wiley & Sons), refers to this problem as the golden hammer. When you've got a golden hammer, everything starts to look like a nail. Misuse comes in many forms:
Framework overkill
    I've seen a departmental calendar built with Enterprise JavaBeans. I've also seen tiny programs use XML for a two-line configuration file.

Design patterns
    These days, it's almost too easy to use a design pattern. When you trade power for simplicity too many times, you get bloat.

Sloppy reuse
    If you try to stuff a round peg in a square hole, you'll have to adapt the hole or the peg. Too many adaptations will often lead to bloat. Cut-and-paste programming also leads to bloat.

Poor process
    Like fungus in a college refrigerator, bloat grows best in dark, isolated places. Isolated code with no reviews and one owner lets bloat thrive unchecked.

Many developers wear golden hammers as a badge of honor. Reaching for the wrong tool for the job is nearly a rite of passage in some of the places that I've worked. It's a practice that may save a few minutes in the short term, but it will cost you in the end.
Options

There are many possible solutions for dealing with the bloat in Java. Head-on is but one possibility. It takes courage and energy to take on the bloat, and you may not wish to fight this battle. You've got alternatives, each with a strong historical precedent:

Change nothing; hope that Java will change. This strategy means letting your productivity and code quality slide. Initially, this is the option that most developers inevitably choose, but they're just delaying the inevitable. At some point, things will get too hard, and current software development as we know it will not be sustainable. It's happened before, and it's happening now. The COBOL development model is no longer sufficient, but that doesn't keep people from slogging ahead with it. Here, I'm talking about the development model, not the development language. Java development is just now surpassing COBOL as the most-used language in the world, begging the question, "Do you want to be the COBOL developer of the 21st century?"
Buy a highly integrated family of tools, frameworks, or applications, and let a vendor shield you from the bloat. In this approach, you try to use bloat to your best advantage. You may put your trust in code generation tools or frameworks that rely on code generation, like EJB, Struts, or Model Driven Architecture (MDA). You're betting that it can reduce your pain to a tolerable threshold, and shield you from lower-level issues. The idea has some promise, but it's dangerous. You've got to have an incredible amount of foresight and luck to make this approach succeed. If you previously bet big on CORBA or DCE, then you know exactly what I mean.

Quit Java for another object-oriented language. Languages may have a long shelf life, but they're still limited. For many, the decision to switch languages is too emotional. For others, like author Stuart Halloway, the decision is purely pragmatic. The long-time CTO of the respected training company DevelopMentor and tireless promoter of their Java practice recently decided to choose Objective-C for an important project because Java was not efficient enough for his needs. Alternatives are out there. C# has some features that Java developers have long craved, like delegation, and C# hasn't been around long enough to suffer the bloat that Java has. Ruby is surprisingly simple and productive, and works very well for GUI prototyping and development.

Quit object-oriented languages for another paradigm. Every 15 to 20 years, the current programming model runs out of gas. The old paradigms simply cannot support the increasing sophistication of developers. We've seen programming languages with increasingly rich programming models: machine language, assembly languages, high-level languages, structured programming languages, object-oriented languages. In fact, today you're probably noticing increased activity around a new programming model called aspect-oriented programming (see Chapter 11). Early adopters were using object technology 15 years before it hit the mainstream.
Unfortunately, new programming paradigms traditionally have been very difficult to time. Guess too early and you'll get burned.

Spend time and effort becoming a master craftsman. An inordinate amount of bloated code comes not from people who know too much about writing software, but from people who know too little. The temptation when faced with a problem that you don't fully understand is to put everything and the kitchen sink into the solution, thus guarding against every unknown. The problem is that you can't guard against unknowns very effectively; frankly, all the extra complexity is likely to generate side effects that will kill the application. Thoroughly understanding not just your problem domain but the craft of software development as well leads to better, smaller, more focused designs that are easier to implement and maintain.

Each of these techniques has a time and a place. Research teams and academics need to explore new programming models, so they will naturally be interested in other
programming paradigms. Many serious, complex problems require sophisticated enterprise software, and the developers working on these problems will look to complex frameworks that can hopefully shield them from the bloat. Small, isolated development projects often have fewer integration requirements, so they make effective use of other programming languages, or paradigms. But for most day-to-day Java applications, the alternatives are too risky. My choice is to actively fight the bloat.
Five Principles for Fighting the Bloat

You can't fight the bloat by being simple-minded. You can't simply fill your programs with simple cut-and-paste code, full of bubble sorts and hardwiring. You cannot forget everything you've learned to date. It's an interesting paradox, but you're going to need your creativity and guile to create simple but flexible systems. You've got to attack the bloat in intelligent ways.

The bloat happened because the extended Java community compromised on core principles. Many of these compromises were for good reasons, but when core principles slide often enough, bad things happen. To truly fight the bloat, you've got to drive a new stake in the ground, and build a new foundation based on basic principles. You've got to be intentional and aggressive. In this book, I'll introduce five basic principles. Together, they form a foundation for better, faster, lighter Java.
1. Keep It Simple

Good programmers value simplicity. You've probably noticed a resurgence of interest in this core value, driven by newer, Agile development methods like eXtreme Programming (XP). Simple code is easier to write, read, and maintain. When you free yourself with this principle, you can get most of your code out of the way in a hurry, and save time for those nasty, interesting bits that require more energy and more attention. And simple code has some more subtle benefits as well. It can:

• Give you freedom to fail. If your simple solution doesn't work, you can throw it away with a clear conscience: you don't have much invested in the solution anyway.

• Make testing easier. Testability makes your applications easier to build and more reliable for your users.

• Protect you from the effects of time and uncertainty. As time passes and people on a project change, complex code is nearly impossible to enhance or maintain.

• Increase the flexibility of your team. If code is simple, it's easier to hand it from one developer to the next.

• Self-document your code, and lessen the burden of technical writing that accompanies any complex application.
More than any core principle, simplicity is the cornerstone of good applications, and the hallmark of good programmers. Conversely, complexity is often a warning sign of an incomplete grasp of the problem. This doesn't mean that you need to build applications with simple behavior. You can easily use simple constructs, like recursion, and simple classes, like nodes, to get some complex structures and behaviors. Figure 1-3 shows one simple node class consisting of a collection and a string. That's a simple structure, but I use it to represent a family tree, with many complex relationships. I've captured the complex relationships in concept, including children, spouses, parents, grandparents, uncles, and nieces.
Figure 1-3. A simple node class, a string, and a collection form the foundation of a family tree (the figure shows a node for Cheryl Tate, with fields Name: String and Children: Nodes)
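The node structure behind Figure 1-3 can be sketched in a few lines. This is my own minimal rendering of the idea, not code from the text, and the child names are invented for illustration. A node is nothing but a string and a collection of nodes, yet simple recursion over it yields arbitrarily deep family trees:

```java
import java.util.ArrayList;
import java.util.List;

public class Node {
    private final String name;
    private final List<Node> children = new ArrayList<>();

    public Node(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void addChild(Node child) {
        children.add(child);
    }

    // A simple recursive traversal: complex tree behavior from a trivial structure.
    public int descendantCount() {
        int count = 0;
        for (Node child : children) {
            count += 1 + child.descendantCount();
        }
        return count;
    }

    public static void main(String[] args) {
        Node root = new Node("Cheryl Tate");
        Node child = new Node("First Child");     // hypothetical names
        root.addChild(child);
        root.addChild(new Node("Second Child"));
        child.addChild(new Node("Grandchild"));
        System.out.println(root.descendantCount()); // prints 3
    }
}
```

The same two fields could just as easily carry references for spouses or parents; the structure stays simple while the relationships grow rich.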
I'm not advocating simplicity across the board, above all else. I'm merely suggesting that you value simplicity as a fundamental foundation of good code. You don't have to over-simplify everything, but you'll be much better off if you pick the simplest approach that will work.
2. Do One Thing, and Do It Well

Focus is the second principle, and it builds upon simplicity. This basic premise has two underlying concepts: concentrate on one idea per piece, and decouple your building blocks. Object-oriented programming languages give you the power to encapsulate single ideas. If you don't take advantage of this capability, you're not getting the full benefits of object orientation.

Focus is the premise behind perhaps the most popular design pattern ever, model-view-controller (MVC), shown in Figure 1-4. Each component of this design pattern elegantly separates the concerns of one particular aspect of the problem. The view encapsulates the user interface, the model encapsulates the underlying business logic, and the controller marshals data between them.
Figure 1-4. Each rectangle encapsulates a single aspect of an application (the figure shows the model and the view as separate rectangles)
These ideas seem simple, but they carry incredible power:

• Building blocks, designed with a single purpose, are simple. By maintaining focus, it's easier to maintain simplicity. The converse is also true. If you muddy the waters by dividing your focus, you'll be amazed at how quickly you get bogged down in complex, tedious detail.

• Encapsulated functionality is easier to replace, modify, and extend. When you insulate your building blocks, you protect yourself from future changes. Don't underestimate the power of decoupled building blocks. I'm not just talking about saving a few hours over a weekend; I'm talking about a principle that can change your process. When you decouple, you have freedom to fail that comes from your freedom to refactor.

• You can easily test a single-purpose building block. Most developers find that testing drives better designs, so it should not come as a surprise that decoupled designs are easier to test.
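The MVC separation can be shown in miniature. This sketch is mine, not from the text, and the class names are invented: the model knows nothing about rendering, the view knows nothing about state, and the controller wires the two together.

```java
// Model: encapsulates business state; knows nothing about presentation.
class CounterModel {
    private int count;
    public void increment() { count++; }
    public int getCount() { return count; }
}

// View: encapsulates rendering; knows nothing about business rules.
class CounterView {
    public String render(int count) { return "Count: " + count; }
}

// Controller: marshals data between the model and the view.
class CounterController {
    private final CounterModel model;
    private final CounterView view;

    public CounterController(CounterModel model, CounterView view) {
        this.model = model;
        this.view = view;
    }

    public String handleIncrement() {
        model.increment();
        return view.render(model.getCount());
    }
}

public class MvcDemo {
    public static void main(String[] args) {
        CounterController controller =
            new CounterController(new CounterModel(), new CounterView());
        System.out.println(controller.handleIncrement()); // prints Count: 1
        System.out.println(controller.handleIncrement()); // prints Count: 2
    }
}
```

Because each piece has a single purpose, you could swap the string-based view for a Swing panel, or test the model in isolation, without touching the other two classes.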
3. Strive for Transparency

The third principle is transparency. When you can separate the primary purpose of a block of code from other issues, you're building transparent code. A transparent persistence framework lets you save most any Java object without worrying about persistence details. A transparent container will accept any Java object without requiring invasive code changes. The EJB counter in Example 1-1 is a framework that is not transparent. Look at the alternative counter, in Hibernate or JDO, shown in Example 1-4.

Example 1-4. Transparent counter

    package com.betterjava.ejbcounter;

    public class Counter {
        private String name;
        private int count;

        public void setName(String newName) {
            name = newName;
        }

        public String getName() {
            return name;
        }

        public int getCount() {
            return count;
        }

        public void clear() {
            count = 0;
        }

        public void increment() {
            count += 1;
        }
    }

That's it. The code is transparent, it's simple, and it encapsulates one concept: counting. Transparency, simplicity, and focus are all related concepts. In fact, in this example, we used transparency to achieve focus, leading to simplicity.
4. Allow for Extension

Simple applications usually come in two forms: extensible and dead-end. If you want your code to last, you've got to allow for extension. It's not an easy problem to solve. You probably want your frameworks to be easy to use, even when you're solving hard problems. OO design principles use layered software (which we call abstractions) to solve this problem. Instead of trying to organize millions of records of data on a filesystem, you'd probably rather use a relational database. Rather than use native networking protocols like TCP/IP, you'd probably rather use some kind of remote procedure call, like Java's remote method invocation (RMI). Layered software can make complex problems much easier to solve, and can dramatically improve reuse and even testability.

When you build a new abstraction, you've got to engage in a delicate balancing act between power and simplicity. If you oversimplify, your users won't be able to do enough to get the job done. If you undersimplify, your users will gain little from your new abstraction level. Fortunately, you've got a third choice. You can build a very simple abstraction layer and allow the user to access the layer below yours. Think of them as convenient trap doors that let your users have access to the floors below.
For example, you might want to build a utility to write a message. You might decide to provide facilities to write named serialized messages. Most users may be satisfied with this paradigm. You might also let your users have full access to the JMS connec tion, so they can write directly to the queue if the need arises.
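Here is a sketch of that trapdoor idea. It is my own illustration, not code from the text: an in-memory queue stands in for a real JMS provider, and the class and method names are invented. The facade offers the simple named-message call most users want, while `underlyingQueue` exposes the layer below for the few who need it.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class MessageWriter {
    // Stand-in for the real transport; a JMS-backed version would hold a connection here.
    private final Deque<String> queue = new ArrayDeque<>();

    // The simple abstraction: write a named, serialized message.
    public void write(String name, Object payload) {
        queue.add(name + ":" + payload);
    }

    // The trapdoor: full access to the layer below, for users who need it.
    public Deque<String> underlyingQueue() {
        return queue;
    }

    public static void main(String[] args) {
        MessageWriter writer = new MessageWriter();
        writer.write("greeting", "hello");
        // Most callers stop at write(); power users drop down a level:
        System.out.println(writer.underlyingQueue().peek()); // prints greeting:hello
    }
}
```

The design choice is the point: the abstraction stays tiny, and the trapdoor means you never have to anticipate every advanced use up front.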
5. You Are What You Eat

My mother always told me that I am what I eat. For once, she was right. Applications build upon a foundation. Too many developers let external forces easily dictate that foundation. Vendors, religion, and hype can lead you to ruin. You've got to learn to listen to your own instincts and build consensus within your team. Be careful of the concepts you internalize.

Look at it this way: a little heresy goes a long way. You can find a whole lot of advice in the Java community, and not all of it is good. Even commonly accepted practices come up short. If you've been around for 10 years or more, you've probably been told that inheritance is the secret to reuse (it's not) or that client-server systems are cheaper (they're not) or that you want to pool objects for efficiency (you don't). The most powerful ideas around the whole high-tech industry bucked some kind of a trend:

• Java lured C++ developers away with an interpreted, garbage-collected language. C++ developers typically demand very high performance. Most conventional wisdom suggested that customers would be much more interested in client-side Java than server-side Java due to performance limitations. So far, the opposite has been true.

• Many Java experts said that reflection was far too slow to be practical. Bucking the trend, many new innovative frameworks like Hibernate and Spring use reflection as a cornerstone.

• Whole consulting practices were built around EJB. We're only now beginning to understand how ugly and invasive that technology is, from top to bottom.

Java development without a little heresy would be a dull place, and a dangerous one. You've got to challenge conventional thinking. When you don't, bloat happens.
Summary

In this book, I'm going to take my own medicine. I'll keep it simple and short. At this point, you're probably wondering how five simple principles can change anything at all. Please indulge me. In the pages to come, I'll lay out the five simple principles. I'll then show you the ideas in practice. You'll see how two successful and influential frameworks used these principles, and how to build applications with these
frameworks. You'll see an example of a persistent domain model, an enterprise web application, a sophisticated service, and extension using these core concepts. My plan is simple. I'll show you a handful of basic principles. I'll show you how to succeed with the same ideas to build better, faster, lighter Java. If you tend to value a book by the weight of its pages, go find another one. If you'd rather weigh the ideas, then welcome aboard. It all begins and ends with simplicity. And that's the subject of Chapter 2.
CHAPTER 2
Keep It Simple
Simplicity should be a core value for all Java programmers, but it's not. Most developers have yet to establish simplicity as a core value. I'll never forget when one of my friends asked for a code review and handed me a nine-page, hideously complex blob with seemingly random Java tokens. All kinds of thoughts swarmed through my mind in a period of seconds. At first, I thought it was a joke, but he kept staring expectantly. My next thought was that he hated me; I couldn't think of anything I'd done to deserve it. Finally, I began to read. After three pages of pure torture, I glanced up. He was grinning from ear to ear. My slackened jaw fell open, and I finally realized that he was proud of this code. It's a cult. If you've coded for any length of time, you've run across someone from this warped brotherhood. Their creed: if you can write complicated code, you must be good.
The Value of Simplicity

Simplicity may be the core value. You can write simple code faster, test it more thoroughly with less effort, and depend on it once it's done. If you make mistakes, you can throw it away without reservation. When requirements change, you can refactor with impunity. If you've never thought about simplicity in software development before, let's first talk about what simplicity is not:

• Simple does not mean simple-minded. You'll still think just as hard, but you'll spend your energy on simplicity, elegance, and the interactions between simple components. e=mc² is a remarkably simple formula that forms the theory of relativity, one of the most revolutionary ideas ever.

• Simple code does not necessarily indicate simple behavior. Recursion, multithreading, and composition can let you build applications out of simple building blocks with amazingly complex behavior.

• Writing simple code does not mean taking the easy way out. Cutting and pasting is often the fastest way to write a new method, but it's not always the simplest solution, and rarely the best solution. Simple code is clean, with little replication.

• A simple process is not an undisciplined process. Extreme programming is a process that embraces simplicity, and it's quite rigorous in many ways. You must code all of your test cases before writing your code; you must integrate every day; and you must make hard decisions on project scope in order to keep to your schedule.

Simple code is clean and beautiful. Learn to seek simplicity, and you'll step over the line from engineer to artist. Consider the evolution of a typical guitar player. Beginners aspire to play just about anything that they can master. Intermediate players learn to cram more notes and complex rhythms into ever-decreasing spaces. If you've ever heard one of the great blues players, you know that those players have mastered one more skill: they learn what not to play. Bo Diddley embraces silence and simplicity with every fiber of his being. He strips his music to the bare essence of what's required. Then, when he does add the extra, unexpected notes, they have much more power and soul.

Coding simply accrues benefits throughout the development process. Take a look at the typical object-oriented development iteration in Figure 2-1. Here, I'm trying to show the typical steps of an object-oriented cycle. Notice that you can see the tangible impact of simplicity in every phase of each iteration. I should also point out that you can have a dramatic impact outside of the typical development iterations, and into the production part of an application's lifecycle, because your code will be easier to fix and maintain.
Figure 2-1. Each iteration in an object-oriented project has steps for designing, coding, testing, and reacting to the results of those tests
Here are some reasons to write simple code. They correspond to the numbers in Figure 2-1:
1. Easier to design: given simple tools, design takes less time and is less prone to error.
2. Easier to write.
3. Usually easier to test.
4. Usually more reliable in production.
5. Easier to refactor before deployment.
6. Easier to refactor to fix production problems.
7. Easier to maintain.

You're probably wishing I would get right to the point and talk about new design patterns that help create simpler code. Here's the bad news: you can't address simplicity that way. You've got to pay attention to the process you're using to build code, the foundation you're building on, and the basic building blocks you're using in your everyday programming life before you can truly embrace simplicity.
Choosing the Foundations

If you want to build simple applications, you're going to have to build on simple frameworks. You need processes, tools, frameworks, and patterns that support the concepts in this book. Face it: if you build on top of an unintelligible, amorphous blob, you're probably going to be writing code that looks like sticky, tangled masses of goo. That goes for foundations you code, technologies you buy, and design patterns you reuse.
Technology you buy

Two criteria should govern every layer that you add to your system: value and simplicity. When it comes to value, remember that there are no free rides. Each layer must pay its own way. When I say pay, I'm generally not talking about the software sales price. Over your development cycle, most of your costs (like the time and effort to develop, deploy, and maintain your code) will dwarf the sales price of any given component. You'll want to answer some pointed questions for each and every new piece of software:
How does it improve your life?
Many a project has used XML for every message, configuration file, or even document. If two elements of a system are necessarily tightly coupled, XML only adds cost and complexity. Often, pure text with hash tables works fine. Likewise, even if the two elements are loosely coupled but the data is simple enough (key/value pairs, or a simple rectangular table), then XML is probably still overkill.

What is the cost?
If a technology marginally improves your life, you should be willing to pay only a marginal cost. Too often, developers compromise on major values for minimal gain. Adopting EJB CMP for a project because it comes free with an application server often seems wise, until the true, invasive complexity of the beast shows itself.
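The "pure text with hash tables" alternative is easy to demonstrate. Here is a minimal sketch using java.util.Properties; the property names and values are invented for illustration:

```java
import java.io.StringReader;
import java.util.Properties;

public class SimpleConfig {
    // Parse key/value configuration from plain text.
    // No schemas, no parsers beyond the standard library.
    public static Properties load(String text) throws Exception {
        Properties props = new Properties();
        props.load(new StringReader(text));
        return props;
    }

    public static void main(String[] args) throws Exception {
        // A hypothetical configuration fragment.
        Properties p = load("jdbc.url=jdbc:hsqldb:mem:test\nmaxLinks=500");
        System.out.println(p.getProperty("maxLinks")); // prints 500
    }
}
```

If the data ever outgrows key/value pairs, you can revisit the decision; until then, the simple format pays its own way.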
Is it easy to integrate and extend?
Many technologies work well within their own domain, but make assumptions that make even basic extensions difficult. Be especially careful with frameworks for distributed communication, persistence, and user interfaces.

Will it cause you to compromise your core principles?
If you're striving for simplicity and independence, you should not consider ultra-invasive technologies. If you need portability at any cost, then you shouldn't use a tool that forces you to adopt nonstandard SQL.

Can you maintain it and manage it in production?
Client-server technologies often broke down because they were too expensive to deploy. Web developers live with the limitations of the user interface because the deployment advantages on the client are so significant.

Is it a fad technology that will leave you hanging when it falls from fashion?
Look across the pond at developers moving from Microsoft's ASP to ASP.NET. While ASP was the platform, VBScript was the language of choice for many developers. Sure, it was nonstandard (the standard is JavaScript, or ECMAScript, depending on who you ask), but it looked just like VB and was comfortable. With the advent of ASP.NET, guess which language is still supported? Hint: it isn't VBScript. Now there is a lot of rewriting going on that need never have happened.

"Buy over build" is a great motto, but you've got to watch what you buy. It's really just a cost comparison. How much would it cost you and your team to develop the equivalent functionality, with equivalent stability, but more targeted to your specific needs? When you look at it this way, everything is a "buy." Your own development shop is just one more vendor.
Design patterns

Treat design patterns like a framework that you purchase. Each one has a cost and a benefit. Like a purchased framework, each design pattern must pay its own way. If you want to embrace simplicity, you can't build in each and every design pattern from the famous Gang of Four book, Design Patterns, by Erich Gamma, Richard Helm, et al. (Addison-Wesley). True, many design patterns allow for contingencies. That's good. Many Java gurus get in trouble when they try to predict what the future might hold. That's bad.

The best rule of thumb is to use design patterns when you've physically established a need, today. You need expertise on your team that can recognize when a given situation is crying out for a particular pattern. Too often, developers buy the Gang of Four book, or one like it, crack it open to a random page, and apply a pattern where no problem exists. Instead, it's better to find a difficult problem, and then apply the right pattern in response. You need experts on a team to apply any technology. Design patterns are no exception. In other words, don't impose design patterns. Let them emerge.
Your own code

Of course, much of your foundation will be code that you or your peers write. It goes without saying that the simplicity of each layer affects the simplicity of the layers above. You may find that you're forced to use a particularly ugly foundation that looks only slightly better than a random string of characters. Further, you may find that it's impossible to junk it and start from scratch with a simpler foundation. When this happens, you can do what moms and pet owners do when they need to feed their charge a bitter pill: they hide it in peanut butter or cheese. I call this technique rebasing. When you rebase, your overriding concern is the interface. Your goal is to give your clients a better interface and usage model than the code below you. An example of rebasing is providing a data access object layer, which hides the details of a data store, over EJB entities. You can then keep that skeleton deep in the closet, or clean it out at your leisure. Your clients will be protected, and you'll be able to provide a much cleaner interface.
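The rebasing idea can be sketched in a handful of lines. The legacy class below is hypothetical, standing in for whatever ugly foundation you're stuck with; the point is that clients only ever see the clean ProductDao interface:

```java
import java.util.HashMap;
import java.util.Map;

// The clean interface: this is all clients ever see.
interface ProductDao {
    String findName(String productId);
}

// A hypothetical ugly foundation we cannot throw away yet.
class LegacyStore {
    private final Map<String, String> rows = new HashMap<>();
    void RAW_PUT(String k, String v) { rows.put(k, v); }
    String RAW_GET(String k) { return rows.get(k); }
}

// The rebasing layer: hides the legacy API behind the clean interface.
public class LegacyProductDao implements ProductDao {
    private final LegacyStore store;
    public LegacyProductDao(LegacyStore store) { this.store = store; }
    public String findName(String productId) {
        return store.RAW_GET(productId);
    }
}
```

Later, you can replace LegacyStore with something saner without touching a single client, because every client depends only on ProductDao.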
Process and Simplicity

A heavyweight process can isolate programmers from customers, programmers from testers, and code from the warning, healing light of day. Effective development processes do none of these things. The best development processes are flexible; tailor them to your needs. Teams vary in size, skill, preference, and prejudice. If you don't like class diagrams or object interaction diagrams, don't use them. If pair programming doesn't suit your team, don't force it.
The Best of Agile

Programming methods like XP and SCRUM advocate simplicity, and make it easier to achieve. Many of the authors of these methods are part of the Agile Alliance, which defines Agile software development principles. These ideas are rapidly shaping the way modern teams build software. The methods run contrary to many of the other methods that you may use. These rules in particular cut against the grain:
Code rules
While other methods like RUP require you to build many different types of diagrams as artifacts, Agile methods encourage you to focus on working code as the primary artifact. Everything else is secondary.

Embrace change
Other methods try to limit change; Agile methods encourage it. Developers refactor whenever they think it's necessary or helpful. Safety measures like continuous integration and automated unit testing make constant change safe.
When you use these practices together, you multiply their benefit. All of the principles build upon simplicity, a core value, but simplicity is difficult to maintain through successive iterations without refactoring. Automated unit tests and continuous integration build in a safety net to protect the code base from errors injected through refactoring. JUnit is rapidly becoming one of the most critical Java tools in my toolbox and the toolboxes of the best developers that I know. Other ideas can help you to tailor your process, too. Agile methods are less a rigid methodology than a philosophy, from the inside out, based on simplicity.
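As a sketch of that safety net, here is a framework-free test in the JUnit spirit: pin down observable behavior with assertions, then refactor freely. The countWords method is an invented example, not code from this book:

```java
// A tiny, framework-free illustration of the unit-test safety net:
// pin down current behavior, then refactor with confidence.
public class WordCountTest {

    // The code under test (deliberately simple).
    static int countWords(String s) {
        String trimmed = s.trim();
        if (trimmed.isEmpty()) return 0;
        return trimmed.split("\\s+").length;
    }

    // Each "test" is one assertion on observable behavior.
    public static void main(String[] args) {
        assertEquals(0, countWords("   "));
        assertEquals(3, countWords("keep it simple"));
        System.out.println("all tests passed");
    }

    static void assertEquals(int expected, int actual) {
        if (expected != actual)
            throw new AssertionError("expected " + expected + " but was " + actual);
    }
}
```

With JUnit you would write the same assertions as test methods and let the runner report failures; the discipline is identical either way.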
(Figure 2-2 diagram: a simple loop of steps, including "Pick a problem" and "Refactor and test.")
Figure 2-2. Your goal is to keep as many decisions as possible to the left of the diagram
The chart says to try something simple. How simple? Use your best judgment.

Algorithms
When you hear about simplicity, it's usually in the context of algorithms. I'm not saying that you should always reach for that bubble sort, but I am saying that you
should leave all of the tiny, ugly optimizations out until you measure the need for change. Take the example of object allocation. Which code is easier to read, this one:

    String middle = "very, ";
    String prefix = "This code is ";
    String suffix = "ugly.";
    String result = "";
    StringBuffer buffer = new StringBuffer();
    buffer.append(prefix);
    for (int i = 0; i < 5; i++) {
        buffer.append(middle);
    }
    buffer.append(suffix);
    result = buffer.toString();

or this one:

    String result = "This code is ";
    for (int i = 0; i < 5; i++) {
        result += "very, ";
    }
    result += "ugly.";
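Before optimizing, it's worth verifying that the simple version behaves identically. A quick equivalence check of the two approaches, with the literals inlined (a sketch, assuming five repetitions as in the snippets above):

```java
public class Concat {

    // The "optimized" version: explicit buffer management.
    static String withBuffer() {
        StringBuffer buffer = new StringBuffer();
        buffer.append("This code is ");
        for (int i = 0; i < 5; i++) buffer.append("very, ");
        buffer.append("ugly.");
        return buffer.toString();
    }

    // The simple version: plain string concatenation.
    static String withPlainConcat() {
        String result = "This code is ";
        for (int i = 0; i < 5; i++) result += "very, ";
        result += "ugly.";
        return result;
    }

    public static void main(String[] args) {
        // The two approaches are observably identical.
        System.out.println(withBuffer().equals(withPlainConcat())); // prints true
    }
}
```

Only if a profiler shows this loop actually matters should the buffer version win; until then, the readable version is the right one.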
    </result-map>

    select PRODUCTID, NAME, DESCN, CATEGORY
    from PRODUCT
    where PRODUCTID = #value#

    select PRODUCTID, NAME, DESCN, CATEGORY
    from PRODUCT
    where CATEGORY = #value#

    select PRODUCTID, NAME, DESCN, CATEGORY
    from PRODUCT
    where lower(name) like #keywordList[]#
       OR lower(category) like #keywordList[]#
       OR lower(descn) like #keywordList[]#
Adding Persistence
Example 8-9. dataAccessContext-local.xml (continued)

    <property name="driverClassName"><value>${jdbc.driverClassName}</value></property>
    <property name="url"><value>${jdbc.url}</value></property>
    <property name="username"><value>${jdbc.username}</value></property>
    <property name="password"><value>${jdbc.password}</value></property>

    <property name="configLocation"><value>classpath:/sql-map-config.xml</value></property>
Here's what the annotations mean:
1. This bean handles the JDBC configuration. The JDBC configuration properties are in a standard JDBC configuration file, making them easier to maintain and read. Spring provides a configuring class that makes it easy to read property files without converting them to XML.
2. Here you see the data source. It's a standard J2EE data source. Many J2EE applications and frameworks hard-wire an application or framework to a given data source. Configuring them instead makes it easy to choose your own source (and thus your pooling strategy).
3. The applicationContext.xml configuration sets the transaction policy. This configuration specifies the implementation. This application uses the data source transaction manager, which delegates transaction management to the database via JDBC (using commit and rollback).
4. The iBATIS SQL Map utility for building the DAO must be configured. It's done here.

With that, you've configured the persistence layer, separated the transaction policy from the implementation, and isolated the data source. Take a look at the broader benefits that have been gained beyond configuration.
The Benefits

That's all of the persistence code for the Product. The code for the rest of jPetStore is similar. The application effectively isolates the entire domain model within a single layer. The domain has no dependencies on any services, including the data layer. You've also encapsulated all data access into a clean and concise DAO layer, which is independent of the data store. Notice what you don't see:

Data source configuration
Handled by the Spring framework. You don't have to manage a whole bunch of singletons for session management, data sources, and the like. You can also delay key decisions, such as the type of data source, until deployment time.

Connection processing
The Spring framework manages all of the connection processing. One of the most common JDBC errors is a connection leak. If you're not very careful about closing your connections, especially within exception conditions, your application can easily lose stability and crash.
Specialized exceptions
Many frameworks pass SQL exceptions to the top. They frequently have SQL codes built in that may be specialized to your own RDBMS, making it difficult to code portable applications. Spring has its own exception hierarchy, which insulates you from these issues. Further, should you change approaches to Hibernate or JDO, you won't need to change any of your exception processing.

The end result of what we've done so far is pretty cool. We have a clean, transparent domain model and a low-maintenance service layer that's independent of our database. Each layer is neatly encapsulated. Now that we have looked at the backend logic, it's time to put a user interface on this application.
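The exception translation described under "Specialized exceptions" can be imitated in miniature. DataAccessException below is our own illustrative class, not Spring's, and the failing query is simulated; the shape of the idea is wrapping the checked, vendor-flavored SQLException in an unchecked exception so callers stay portable:

```java
import java.sql.SQLException;

// An illustrative unchecked hierarchy, in the spirit of Spring's
// (this is NOT Spring's actual class).
class DataAccessException extends RuntimeException {
    DataAccessException(String msg, Throwable cause) { super(msg, cause); }
}

public class ExceptionTranslation {
    // Translate the checked SQLException into our unchecked exception,
    // so callers never handle vendor-specific SQL codes directly.
    static String loadName(boolean failVendorSpecific) {
        try {
            if (failVendorSpecific) {
                // Simulated vendor-specific failure.
                throw new SQLException("ORA-00942: table or view does not exist");
            }
            return "Angelfish";
        } catch (SQLException e) {
            throw new DataAccessException("could not load product name", e);
        }
    }
}
```

Callers catch (or ignore) one portable runtime exception, and the RDBMS-specific detail survives in the cause chain for diagnostics.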
Presentation

Spring's MVC Web framework has several advantages over the alternatives:
• MVC Web is based on interfaces rather than inheritance. As we discussed in Chapter 3, interfaces often give better flexibility and looser coupling than inheritance-based designs.
• MVC Web does not dictate your choice of view. Other frameworks tend to provide better support for favored view technologies, such as Velocity (proprietary) and Struts (JSP). For example, Struts exposes the model via request attributes. As a result, you need to build a bridge servlet to use a technology such as Velocity that doesn't understand the Servlet API. Spring exposes the model through a generic map, so it can work with any view technology.
• MVC Web provides consistent configuration across all aspects of a Spring application. It uses the same inversion-of-control paradigm that the other frameworks use.
• The controller accepts user input, passes it to a business validation routine (created and configured by the programmer), and sends either the associated error view or success view back to the user, based on the results.
(Figure 8-4 diagram: on the client, an HTML input form posts to the server's dispatcher servlet, which routes to a POJO controller with a servlet-like API; the controller consults the model and validator beans, then returns either a success view or a failure view as the HTML output view.)
Figure 8-4. The MVC Web framework works much like Struts
Configuration

Consider the form that lets users search for products based on keywords. The configuration file adds two controllers to the application context file. Each entry specifies a controller and the model object, as in Example 8-10.

Example 8-10. Excerpt from web.xml
Recall that all access to our data layer goes through the façade. As you'd expect, these bean ID entries specify the façade, called petstore. Each form in the application works in the same way. Let's drill down further and look at the controller for searchProducts.
Controllers

For MVC Web, each form generally shares a single instance of a controller, which routes all requests related to a given form. It also marshals the form to the correct validation logic and returns the appropriate view to the user. Example 8-11 shows the controller for the searchProducts view.

Example 8-11. SearchProductsController.java

    public class SearchProductsController implements Controller {

        private PetStoreFacade petStore;                                    // (1)

        public void setPetStore(PetStoreFacade petStore) {
            this.petStore = petStore;
        }

        public ModelAndView handleRequest(HttpServletRequest request,      // (2)
                HttpServletResponse response) throws Exception {
            if (request.getParameter("search") != null) {
                String keyword = request.getParameter("keyword");          // (3)
                if (keyword == null || keyword.length() == 0) {
                    return new ModelAndView("Error", "message",
                        "Please enter a keyword to search for, " +
                        "then press the search button.");
                }
                else {
                    PagedListHolder productList = new PagedListHolder(
                        this.petStore.searchProductList(                   // (4)
                            keyword.toLowerCase()));
                    productList.setPageSize(4);
                    request.getSession().setAttribute(
                        "SearchProductsController_productList", productList);
                    return new ModelAndView("SearchProducts",              // (5)
                        "productList", productList);
                }
            }
            else {
                String page = request.getParameter("page");                // (6)
                PagedListHolder productList = (PagedListHolder)
                    request.getSession().getAttribute(
                        "SearchProductsController_productList");
                if ("next".equals(page)) {
                    productList.nextPage();
                }
                else if ("previous".equals(page)) {
                    productList.previousPage();
                }
                return new ModelAndView("SearchProducts",
                    "productList", productList);
            }
        }
    }
Here's what the annotations mean:
1. Each controller has access to the appropriate domain model. In this case, it's natural for the view to access the model through our façade.
2. A controller has an interface like a servlet, but isn't actually a servlet. User requests instead come in through a single dispatcher servlet, which routes them to the appropriate controller, populating the request parameters. The controller merely responds to the appropriate request, invoking business logic and routing control to the appropriate page.
3. In this case, the request is to "search." The controller must parse out the appropriate keywords.
4. The controller invokes the business logic with the keywords provided by the user.
5. The controller routes the appropriate view back to the user (with the appropriate model).
6. In this case, the request is "page." Our user interface supports more products than might fit on a single page.
Forms

Example 8-12 shows the form object that backs user accounts.

Example 8-12. AccountForm.java

    public class AccountForm {

        private Account account;
        private boolean newAccount;
        private String repeatedPassword;

        public AccountForm(Account account) {
            this.account = account;
            this.newAccount = false;
        }

        public AccountForm() {
            this.account = new Account();
            this.newAccount = true;
        }

        public Account getAccount() {
            return account;
        }

        public boolean isNewAccount() {
Example 8-12. AccountForm.java (continued)

            return newAccount;
        }

        public void setRepeatedPassword(String repeatedPassword) {
            this.repeatedPassword = repeatedPassword;
        }
    }

Notice that the form wraps the Account directly: you can map the form to a domain object or value object.
Validation

You may have noticed validation logic within the original applicationContext.xml. These beans are generally considered business logic, but they've got a tight relationship with the forms they validate. Example 8-13 shows the account validator.

Example 8-13. AccountValidator.java

    public class AccountValidator implements Validator {

        public boolean supports(Class clazz) {
            return Account.class.isAssignableFrom(clazz);
        }

        public void validate(Object obj, Errors errors) {
            ValidationUtils.rejectIfEmpty(errors, "firstName",
                "FIRST_NAME_REQUIRED", "First name is required.");
            ValidationUtils.rejectIfEmpty(errors, "lastName",
                "LAST_NAME_REQUIRED", "Last name is required.");
            ValidationUtils.rejectIfEmpty(errors, "email",
                "EMAIL_REQUIRED", "Email address is required.");
            ValidationUtils.rejectIfEmpty(errors, "phone",
                "PHONE_REQUIRED", "Phone number is required.");
            ValidationUtils.rejectIfEmpty(errors, "address1",
                "ADDRESS_REQUIRED", "Address (1) is required.");
Example 8-13. AccountValidator.java (continued)

            ValidationUtils.rejectIfEmpty(errors, "city",
                "CITY_REQUIRED", "City is required.");
            ValidationUtils.rejectIfEmpty(errors, "state",
                "STATE_REQUIRED", "State is required.");
            ValidationUtils.rejectIfEmpty(errors, "zip",
                "ZIP_REQUIRED", "ZIP is required.");
            ValidationUtils.rejectIfEmpty(errors, "country",
                "COUNTRY_REQUIRED", "Country is required.");
        }
    }
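The rejectIfEmpty pattern is simple enough to sketch without Spring. The Errors class below is a bare stand-in for Spring's Errors interface, and the form is just a map; only the shape of the check mirrors ValidationUtils:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// A minimal stand-in for Spring's Errors abstraction (illustrative only).
class Errors {
    final List<String> codes = new ArrayList<>();
    void reject(String code) { codes.add(code); }
}

public class MiniValidator {
    // Mirrors the spirit of ValidationUtils.rejectIfEmpty: record an
    // error code when a required field is missing or blank.
    static void rejectIfEmpty(Errors errors, Map<String, String> form,
                              String field, String code) {
        String value = form.get(field);
        if (value == null || value.trim().isEmpty()) {
            errors.reject(code);
        }
    }
}
```

A validator then becomes a flat list of such calls, and the error codes double as keys for localized messages.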
Summary

I've chosen the jPetStore application for a variety of reasons. The biggest is that you can quickly see the difference between a simple, fast, light application and the alternative built on a full EJB container. Ours is easy to understand, whereas the J2EE counterpart was buried under the complexity of EJB best practices.

I haven't always been a believer. In fact, I didn't know who Rod Johnson was before we were introduced in Boston at a conference. I've since come to appreciate this simple framework as elegant and important. If you're new to Spring, you've seen only a single application. I hope that through it, you can see how it embraces the principles in this book:

Keep it simple
Spring's easy to use and understand. In a single chapter, our example covers an application with transactions, persistence, a full web frontend, and a completely modular configuration engine.

Do one thing, and do it well
Spring's framework has many different aspects and subframeworks. However, it separates each concept nicely. The fundamental value of Spring is the bean
factory and configuration service, which let you manage dependencies without coupling your code. Each additional layer of Spring is cleanly decoupled and independent. Later chapters look at other aspects of Spring:
• Integration with Hibernate and JDO
• AOP concepts
• Transactions
CHAPTER 9
Simple Spider
I once had the pleasure of building a house with a carpenter who was from Mexico; I learned a great deal in the process. Despite my broken Spanish (I once twisted the language so badly that I told him that I was going home to marry my sister), he was able to impart an incredible amount of wisdom. My observations were, shall we say... less wise. I told him that I was proud of my hammer. It had a sleek and heavy head, and a composite graphite handle that made it easy to swing. The head was waffled so it would grip the nails. He simply chuckled. At the end of the day, he had used his 10-year-old, plain-Jane hammer to drive in four times the nails as I did, with more accuracy and precision. He told me, "No esta el martillo": it's not the hammer.

Luckily, I build code better than I build houses. In this chapter, I continue to put the five basic principles into action. I create a simple service with a light user interface to drive it. The service is an open source web indexer, primarily used to provide site search behavior for a single web site. It is called Simple Spider and is available at.

I'll give you an insider's view of the real client requirements that spawned the application in the first place. The requirements were minimal and straightforward, but there was still a lot of work to do to understand the problem space. Notice large functionality areas that could be built for this application or reused from other tools; I'll walk you through the decision-making process that led us to use or discard each one. You'll also see the ways our desire for simplicity and our selection of the right tools led to the first iteration of the Spider. In the next chapter, I extend jPetStore to use the Spider. I do this while focusing on the use of transparency to enable extensibility. Throughout both these chapters, I constantly apply the principle of focus: do one thing, and do it well.
You want to build a simple hammer, one that fits like a glove in the hand of a skilled carpenter. That desire affects everything, from the requirements to the design to the individual lines of code you write. Along the way, I aim a spear at a variety of sacred cows, from over-burdensome frameworks and over-expensive data formats to notions about how technologies like JUnit and HTTPUnit should be used. In the end, you'll have an open source application and a greater appreciation for how good programming in Java can be if you just keep your wits about you.
What Is the Spider?

One of the most valuable features of any web site is the ability to search for what you need. Companies with web sites are constantly looking for the right tool to provide those features; they can write their own or purchase something from one of the big vendors. The problem with writing your own is mastering the tools. The problem with purchasing is usually vast expense. Google, the world's leading search provider, sells a boxed solution at $18,000 per unit, not including the yearly license.

Customized search engines are often built around the act of querying the database that sits behind a web site. Programmers immediately jump to this solution because tools and libraries make querying a database simple. However, these customized search solutions often miss entire sections of a web site; no matter how stringently a company tries to build an all-dynamic, data-driven web site, they almost always end up with a few static HTML files mixed in. A data-driven query won't discover those pages.

Crawling a web site is usually the answer, but don't attack it naively. Let's look at what crawling means. When you crawl a web site, you start at some initial page. After cataloging the text of the page, you parse it, looking for and following any hyperlinks to other endpoints, where you repeat the process. If you aren't careful, crawling a web site invites the most ancient of programming errors: the infinite loop. Take a look at Figure 9-1. The web site is only four pages, but no simple crawler will survive it. Given Page1 as a starting point, the crawler finds a link to Page2. After indexing Page1, the crawler moves on to Page2. There, it finds links to Page3 and Page4. Page4 is a nice little cul-de-sac on the site, and closes down one avenue of exploration. Page3 is the killer. Not only does it have a reference back to Page1, starting the whole cycle again, but it also has an off-site link (to Amazon.com).
Anyone who wants a crawler to navigate this beast has more processor cycles than brain cells.

(Figure 9-1 diagram: Page1 links to Page2; Page2 links to Page3 and Page4; Page3 links back to Page1 and off-site to Amazon.com.)
Figure 9-1 . A simple, four-page web site that breaks any naive crawler
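The standard defense against the cycle in Figure 9-1 is to remember where you have been. In this sketch, HTTP fetching and HTML parsing are stubbed out by a map that stands in for the four-page site; the visited set is what makes the crawl terminate:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class CycleSafeCrawler {

    // Crawl from a start page, following links breadth-first and
    // skipping pages already visited, so cycles always terminate.
    static List<String> crawl(String start, Map<String, List<String>> links) {
        Set<String> visited = new LinkedHashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(start);
        while (!queue.isEmpty()) {
            String page = queue.remove();
            if (!visited.add(page)) continue;   // already indexed: skip
            for (String out : links.getOrDefault(page, List.of())) {
                if (!visited.contains(out)) queue.add(out);
            }
        }
        return new ArrayList<>(visited);
    }

    public static void main(String[] args) {
        // The four-page site from Figure 9-1, with the back-link cycle.
        Map<String, List<String>> site = Map.of(
            "Page1", List.of("Page2"),
            "Page2", List.of("Page3", "Page4"),
            "Page3", List.of("Page1", "http://amazon.com"),
            "Page4", List.of());
        System.out.println(crawl("Page1", site).size()); // prints 5: no infinite loop
    }
}
```

The real Spider additionally has to decide which of those links to follow at all, which is the filtering problem the requirements address next.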
I had a client who couldn't afford the $18,000 expense to buy search capabilities and didn't want to sit down and write something custom that might cost them the same amount in development dollars. They came to me and provided a set of straightforward requirements for an application that would enable them to search on their web site. Here's what they asked me to do:
1. Provide a service for crawling a web site, following all links from a provided starting point.
   a. The crawling service must ignore links to image files.
   b. The crawler must be configurable to only follow a maximum number of links.
2. Provide a service for indexing the resulting set of web pages. The indexing service should be schedulable; initially, it should run every night at midnight.
3. Each result of a search of the index should return a filename and a rank indicating the relative merit of each result.
4. Create two interfaces for accessing the spider:
   a. A console interface for local searches and testing.
   b. A web service that returns an XML document representing the results of all the searches.

My solution was to write an open source web site indexing and search engine. The goal was to have an application that could be pointed at any arbitrary web site, crawl it to create the domain of searchable pages, and allow a simple search language for querying the index. The crawler would be configurable to either allow or deny specific kinds of links, based on the link prefix (for example, ONLY follow links starting with http://, or NEVER follow links with a given prefix). The indexer would operate on the results of the crawler and the search engine would query the index. Here are the advantages this engine would provide:
• No $18,000 to Google.
• No $18,000 to the IT department.
• General enough to work with any web site.
• A layered architecture that would allow it to easily be used in a variety of UI environments.
Examining the Requirements

The requirements for the Simple Spider leave a wide variety of design decisions open. Possible solutions might be based on hosted EJB solutions with XML-configurable indexing schedules, SOAP-encrusted web services with pass-through security, and any number of other combinations of buzzwords, golden hammers, and time-wasting complexities. The first step in designing the Spider was to eliminate complexity
and focus on the problem at hand. In this section, we will go through the decision-making steps together. The mantra for this part of the process: ignore what you think you need and examine what you know you need.
Breaking It Down

The first two services described by the requirements are the crawler and the indexer. They are listed as separate services in the requirements, but in examining the overall picture, we see no current need to separate them. There are no other services that rely on the crawler absent the indexer, and it doesn't make sense to run the indexer unless the crawler has provided a fresh look at the search domain. Therefore, in the name of simplicity, let's simplify the requirements to specify a single service that both crawls and indexes a web site.

The requirements next state that the crawler needs to ignore links to image files, since it would be meaningless to index them for textual search and doing so would take up valuable resources. This is a good place to apply the Inventor's Paradox. Think for a second about the Web: there are more kinds of links to ignore than just image files and, over time, the list is likely to grow. Let's allow for a configuration file that specifies what types of links to ignore.

After the link-type requirement comes a requirement for configuring the maximum number of links to follow. Since we have just decided to include a configuration option of some kind, this requirement fits our needs and we can leave it as-is.

Next, we have a requirement for making the indexer schedulable. Creating a scheduling service involves implementing a long-running process that sits dormant most of the time, waking up at specified intervals to fire up the indexing service. Writing such a process is not overly complex, but it is redundant and well outside the primary problem domain. In the spirit of choosing the right tools and doing one thing well, we can eliminate this entire requirement by relying on the deployment platform's own scheduling services. On Linux and Unix we have cron, and on Windows we have at.
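The configurable link filtering described above reduces to prefix and suffix checks plus a counter. This sketch is an assumption about how the eventual Spider configuration might behave, not its actual API:

```java
import java.util.List;
import java.util.Locale;

public class LinkFilter {

    private final List<String> allowPrefixes;  // e.g., "http://"
    private final List<String> denySuffixes;   // e.g., ".gif", ".jpg"
    private final int maxLinks;
    private int followed = 0;

    LinkFilter(List<String> allowPrefixes, List<String> denySuffixes, int maxLinks) {
        this.allowPrefixes = allowPrefixes;
        this.denySuffixes = denySuffixes;
        this.maxLinks = maxLinks;
    }

    // Follow a link only if it matches an allowed prefix, is not an
    // ignored type (such as an image), and the link budget isn't spent.
    boolean shouldFollow(String link) {
        if (followed >= maxLinks) return false;
        String lower = link.toLowerCase(Locale.ROOT);
        for (String suffix : denySuffixes) {
            if (lower.endsWith(suffix)) return false;
        }
        for (String prefix : allowPrefixes) {
            if (lower.startsWith(prefix)) {
                followed++;
                return true;
            }
        }
        return false;
    }
}
```

Both lists would come from the configuration file the requirements call for, so new link types to ignore are a one-line config change rather than a code change.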
In order to hook to these system services, we need only provide an entry point to the Spider that can be used to fire off the indexing service. System administrators can then configure their schedulers to perform the task at whatever intervals are required.

The final service requirement is the search service. Even though the requirements don't specify it as an individual service, it must be invoked independently of the indexer (we wouldn't want to re-run the indexer every time we wanted to search for something): it is obvious that it needs to be a separate service within the application. Unfortunately, the search service must be somewhat coupled to the indexing service, as the search service must be coupled to the format of the indexing service's data source. No global standard API currently exists for text index file formats. If and when such a standard comes into being, we'll upgrade the Spider to take advantage
of the new standard and make the searching and indexing services completely decoupled from one another.

As for the user interfaces, a console interface is a fairly straightforward choice. However, the mere mention of web services often sends people into paroxysms of standards exuberance. Because of the voluminous and increasingly complex web services standards stack, actually implementing a web service is becoming more and more difficult. Looking at our requirements, however, we see that we can cut through most of the extraneous standards. Our service only needs to launch a search and return an XML result set. The default implementation of an Axis web service can provide those capabilities without us messing around with either socket-level programming or high-level standards implementation.
Refining the Requirements

We can greatly improve on the initial requirements. Using the Inventor's Paradox, common sense, and available tools, we can eliminate a few others. Given this analysis, our new requirements are:
1. Provide a service to crawl and index a web site.
   a. Allow the user to pass a starting point for the search domain.
   b. Let the user configure the service to ignore certain types of links.
   c. Let the user configure the service to only follow a maximum number of links.
   d. Expose an invoke method to both an existing scheduler and humans.
2. Provide a search service over the results of the crawler/indexer.
   a. The search should collect a search word or phrase.
   b. Search results should include a full path to the file containing the search term.
   c. Search results should contain a relative rank for each result. The actual algorithm for determining the rank is unimportant.
3. Provide a console-based interface for invoking the indexer/crawler and search service.
4. Provide a web service interface for invoking the indexer/crawler and the search service. The web service interface does not need to explicitly provide authentication or authorization.

These requirements represent a cleaner design that allows future extensibility and focuses development on tasks that are essential to the problem domain. This is exactly what we need from requirements. They should provide a clear roadmap to success. If you get lost, take a deep breath. It's okay to ask for directions and clarify requirements with a customer.
Planning for Development

Once the requirements are clearly understood, the next step is to plan for development. Java is going to be our implementation technology because it easily provides both interfaces in our requirements (console and web service), has robust networking capabilities, and allows access to a variety of open source tools that might be useful for our project.

The principles of simplicity and sanity mandate that we provide thorough unit testing of the entire application. For this, we need JUnit. Since we are also talking about providing a web service frontend and making a lot of network calls, it behooves us to get a hold of HTTPUnit and the Jakarta Cactus tool as well. HTTPUnit is a tool that allows our unit tests to act like a browser, performing web requests and examining web responses. They model the end user's view of a web page or other HTTP endpoint. Cactus is a little different. It also exercises server code, but instead of examining it from the client's viewpoint, it does so from the container's viewpoint. If we write a servlet, Cactus can operate as the container for that servlet, and test its interaction with the container directly.

In addition to the unit-testing apparatus, we need a build tool. Ant is, of course, the answer. There really is no other choice when it comes to providing robust build support.
The Design

Our application is beginning to take shape. Figure 9-2 shows the entire design of the Simple Spider. It has layers that present a model, the service API, and two public interfaces. There is not yet a controller layer to separate the interfaces and logic. We'll integrate a controller in the next chapter.

Figure 9-2. The Simple Spider design (interfaces: web service, console; services: crawler/indexer, site search, configuration; model: IndexLinks, IndexLink, QueryBean, HitBean, ConfigBean, IndexPathBean)
We need to provide a configuration service to our application. I prefer to encapsulate the configuration into its own service to decouple the rest of the application from its details. This way, the application can switch configuration systems easily later without much editing of the code. For this version of the application, the Configuration service will consist of two classes, ConfigBean and IndexPathBean, which will encapsulate returning configuration settings for the application as a whole (ConfigBean) and for getting the current path to the index files (IndexPathBean). The two are separate classes, as finding the path to the index is a more complex task than simply reading a configuration file (see the implementation details below). The configuration settings we will use are property files, accessed through java.util.Properties.

The crawler/indexer service is based on two classes: IndexLinks, which controls the configuration of the service in addition to managing the individual pages in the document domain, and IndexLink, a class modeling a single page in the search domain and allowing us to parse it looking for more links to other pages. We will use Lucene as our indexer (and searcher) because it is fast, open source, and widely adopted in the industry today. The search service is provided through two more classes, QueryBean and HitBean. The former models the search input/output mechanisms, while the latter represents a single result from a larger result set. Sitting over top of the collection of services are the two specified user interfaces, the console version (ConsoleSearch) and a web service (SearchImpl and its WSDL file).
The Configuration Service

Let's start by looking at our configuration service, since it is used by all of the other services and interfaces. We need to provide a generic interface for retrieving our configuration options, separating the rest of the application from the details of how those values are stored or retrieved. We are going to use property files via java.util.Properties for the initial implementation. Here is the class definition for ConfigBean:

    package com.relevance.ss.config;

    import java.util.Properties;

    public class ConfigBean {

        Properties props = new Properties();
        int maxLinks;
        String[] allowedExtensions;
        String skippedLinksFile;

        public ConfigBean() {
            try {
                props.load(getClass().getResourceAsStream("/com.relevance.ss.properties"));
                maxLinks = Integer.parseInt(props.getProperty("maxlinks"));
                allowedExtensions = props.getProperty("allowed.extensions").split(",");
                skippedLinksFile = props.getProperty("skipped.links.file");
            }
            catch(Exception ex) {
                // log the errors and populate with reasonable defaults, if necessary
            }
        }

        public String getSkippedLinksFile() { return skippedLinksFile; }
        public int getMaxLinks() { return maxLinks; }
        public String[] getAllowedExtensions() { return allowedExtensions; }
    }
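To make the parse-and-fall-back behavior concrete, here is a minimal, self-contained sketch of the same logic; the class name ConfigDemo and the specific fallback values are assumptions for illustration, not part of the Spider:

```java
import java.io.StringReader;
import java.util.Properties;

// Hypothetical stand-in for ConfigBean: the same three properties, parsed
// from an in-memory string instead of a classpath resource.
public class ConfigDemo {
    static int maxLinks;
    static String[] allowedExtensions;
    static String skippedLinksFile;

    static void load(String text) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(text));
            maxLinks = Integer.parseInt(props.getProperty("maxlinks"));
            allowedExtensions = props.getProperty("allowed.extensions").split(",");
            skippedLinksFile = props.getProperty("skipped.links.file");
        } catch (Exception ex) {
            // the "reasonable defaults" the catch block above alludes to (values assumed)
            maxLinks = 100;
            allowedExtensions = new String[] { ".html" };
            skippedLinksFile = "skipped.log";
        }
    }

    public static void main(String[] args) {
        load("maxlinks=250\nallowed.extensions=.html,.txt\nskipped.links.file=skip.log");
        System.out.println(maxLinks);  // prints 250
    }
}
```

A malformed value (say, maxlinks=oops) throws inside the try and lands on the defaults, which is exactly the failure mode the real constructor's catch block is there to absorb.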
The class provides for the retrieval of three properties by name: MaxLinks, AllowedExtensions, and SkippedLinksFile. MaxLinks determines the maximum size of the searchable domain, AllowedExtensions is the file types the crawler should attempt to index, and the SkippedLinksFile is a logfile for keeping track of all the links skipped by a given indexing event.

Originally, I thought about adding an additional method to allow for future extension of the list of properties:

    public String getPropertyByName(String propName) {
        return props.getProperty(propName);
    }

However, adding this method would be confusing and redundant. If the list of properties ever changes, we will have to make changes to the source code for whatever services use the new property; we might as well, then, also update ConfigBean at the same time to expose the new property explicitly. For the sake of simplicity, we'll leave this method out.

Getting the path to the index is not as simple as just reading the path from a file. If we were talking about only a console-based interface to the application, it would be okay. But since we are also going to expose a web service, we have to protect against multiple concurrent uses of the index. Specifically, we need to prevent a user from performing a search on the index while the indexer is updating it.
To ensure this, we implement a simple kind of shadow copy. The configuration file for the index path contains a root path (index.fullpath) and a property for a special extension to the index root path (index.next). index.next has a value at all times of either 0 or 1. Any attempt to use the index for a search should use the current value of index.fullpath + index.next. Any attempt to create a new index should use the alternate value of index.next, write the new index there, and update the value in the property file so future searches will use the new index. Below is the implementation of IndexPathBean that allows for these behaviors:

    package com.relevance.ss.config;

    import java.io.IOException;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.util.Properties;

    public class IndexPathBean {

        private final String propFilePath = "index.properties";
        private String nextIndexPath;
        private String curIndexPath;
        private String nextIndex;
        private Properties props;

        private void getPaths() throws IOException {
            File f = new File(propFilePath);
            if (!f.exists()) {
                throw new IOException("properties path " + propFilePath + " does not exist");
            }
            props = new Properties();
            props.load(new FileInputStream(propFilePath));
            String indexRelativePath = props.getProperty("index.next");
            if (indexRelativePath == null) {
                throw new IllegalArgumentException("indexRelativePath not set in " + propFilePath);
            }
            nextIndex = Integer.toString(1 - Integer.parseInt(indexRelativePath));
            curIndexPath = props.getProperty("index.fullpath") + indexRelativePath;
            nextIndexPath = props.getProperty("index.fullpath") + nextIndex;
        }

        public String getFlippedIndexPath() throws IOException {
            getPaths();
            return nextIndexPath;
        }

        public String getIndexPath() throws IOException {
            getPaths();
            return curIndexPath;
        }

        public void flipIndexPath() throws IOException {
            getPaths();
            props.setProperty("index.next", nextIndex);
            props.store(new FileOutputStream(propFilePath), "");
        }
    }
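For reference, a minimal index.properties consistent with the code above might look like this (the path itself is an assumption):

```properties
# root path shared by the two index copies
index.fullpath=/var/spider/index
# which copy (0 or 1) searches should currently use
index.next=0
```

With these values, searches read /var/spider/index0 while the indexer is free to rebuild /var/spider/index1.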
The class exposes three public methods: getters for the current index path and next index path, and a method that flips them. Any class that needs to merely use the index can call getIndexPath() to get the current version. Any class that needs to modify the index can call getFlippedIndexPath() to get the version that isn't currently in use, and after modifying it, can call flipIndexPath() to reset the properties file to the new version. All three public methods rely on a private utility method called getPaths(), which reads the current values from the property file.

From a simplicity standpoint (and, to a certain extent, transparency as well), we should probably expose the index path methods from ConfigBean, providing a single entry point into the application's configuration settings for the rest of the services. We'll leave the actual functionality separated for ease of maintenance and replacement (in case we have to modify the way the index path is stored over time). To do that, we add the following lines of code to ConfigBean:

    IndexPathBean indexPathBean = new IndexPathBean();

    public String getCurIndexPath() {
        String indexPath = "";
        try {
            indexPath = indexPathBean.getIndexPath();
        } catch(Exception ex) { }
        return indexPath;
    }

    public String getNextIndexPath() {
        String indexPath = "";
        try {
            indexPath = indexPathBean.getFlippedIndexPath();
        } catch(Exception ex) { }
        return indexPath;
    }

    public void flipIndexPath() {
        try {
            indexPathBean.flipIndexPath();
        } catch(Exception ex) { }
    }
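The flip itself is easy to verify in isolation. This sketch (class name and paths assumed) reproduces the 0/1 toggle with an in-memory Properties object rather than the index.properties file:

```java
import java.util.Properties;

// Demonstrates the shadow-copy flip from IndexPathBean without touching disk.
public class FlipDemo {
    static Properties props = new Properties();

    // path searches should use right now
    static String currentPath() {
        return props.getProperty("index.fullpath") + props.getProperty("index.next");
    }

    // path the indexer should write the next index into
    static String flippedPath() {
        int next = 1 - Integer.parseInt(props.getProperty("index.next"));
        return props.getProperty("index.fullpath") + next;
    }

    // after the new index is written, point future searches at it
    static void flip() {
        int next = 1 - Integer.parseInt(props.getProperty("index.next"));
        props.setProperty("index.next", Integer.toString(next));
    }

    public static void main(String[] args) {
        props.setProperty("index.fullpath", "/var/spider/index");
        props.setProperty("index.next", "0");
        System.out.println(currentPath());  // prints /var/spider/index0
        flip();
        System.out.println(currentPath());  // prints /var/spider/index1
    }
}
```

Two flips return you to the original path, which is why the scheme needs only two copies of the index no matter how often it is rebuilt.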
Principles in Action

• Keep it simple: use existing Properties tools, not XML
• Choose the right tools: java.util.Properties
• Do one thing, and do it well: separate configuration details into a separate service; keep simple properties and index path in separate classes
• Strive for transparency: one entry point for configuration settings, even though there are two implementations
• Allow for extension: expandable list of allowable link types
The Crawler/Indexer Service

The application needs a way to dynamically follow the links from a given URL and the links from those pages, ad infinitum, in order to create the full domain of searchable pages. Just thinking about writing all of the web-related code to do that work gives me the screaming heebie-jeebies. We would have to write methods to post web requests, listen for responses, parse those responses looking for links, and so on. In light of the "keep it simple" chapter, it seems we are immediately faced with a buy-it-or-build-it question. This functionality must exist already; the question is, where?

It turns out we already have a library at our disposal that contains everything we need: HTTPUnit. Because HTTPUnit's purpose in life is to imitate a browser, it can be used to make HTTP requests, examine the HTML results, and follow the links contained therein. Using HTTPUnit to do the work for us is a fairly nonstandard approach. HTTPUnit is considered a testing framework, not an application development framework. However, since it accomplishes exactly what we need to do with regard to navigating web sites, it would be a waste of effort and resources to attempt to recreate that functionality on our own.

Our main entry point to the crawler/indexer service is IndexLinks. This class establishes the entry point for the indexable domain and all of the configuration settings for controlling the overall result set. The constructor for the class should accept as much of the configuration information as possible:
    public IndexLinks(String indexPath, int maxLinks, String skippedLinksOutputFileName) {
        this.maxLinks = maxLinks;
        this.linksNotFollowedOutputFileName = skippedLinksOutputFileName;
        writer = new IndexWriter(indexPath, new StandardAnalyzer(), true);
    }
The writer is an instance of org.apache.lucene.index.IndexWriter, which is initialized to point to the path where a new index should be created. Our instance requires a series of collections to manage our links. Those collections are:

    Set linksAlreadyFollowed = new HashSet();
    Set linksNotFollowed = new HashSet();
    Set linkPrefixesToFollow = new HashSet();
    HashSet linkPrefixesToAvoid = new HashSet();
The first two are used to store the links as we discover and categorize them. The next two are configuration settings used to determine if we should follow the link based on its prefix. These settings allow us to eliminate subsites or certain external sites from the search set, thus giving us the ability to prevent the crawler from running all over the Internet, indexing everything.

The other object we need is a com.meterware.httpunit.WebConversation. HTTPUnit uses this class to model a browser-server session. It provides methods for making requests to web servers, retrieving responses, and manipulating the HTTP messages that result. We'll use it to retrieve our indexable pages.

    WebConversation conversation = new WebConversation();
We must provide setter methods so the users of the indexer/crawler can add prefixes to these two collections:

    public void setFollowPrefixes(String[] prefixesToFollow) throws MalformedURLException {
        for (int i = 0; i < prefixesToFollow.length; i++) {
            String s = prefixesToFollow[i];
            linkPrefixesToFollow.add(new URL(s));
        }
    }

    public void setAvoidPrefixes(String[] prefixesToAvoid) throws MalformedURLException {
        for (int i = 0; i < prefixesToAvoid.length; i++) {
            String s = prefixesToAvoid[i];
            linkPrefixesToAvoid.add(new URL(s));
        }
    }
In order to allow users of the application maximum flexibility, we also provide a way to store lists of common prefixes that they want to allow or avoid:

    public void initFollowPrefixesFromSystemProperties() throws MalformedURLException {
        String followPrefixes = System.getProperty("com.relevance.ss.FollowLinks");
        if (followPrefixes == null || followPrefixes.length() == 0) return;
        String[] prefixes = followPrefixes.split(" ");
        if (prefixes != null && prefixes.length != 0) {
            setFollowPrefixes(prefixes);
        }
    }

    public void initAvoidPrefixesFromSystemProperties() throws MalformedURLException {
        String avoidPrefixes = System.getProperty("com.relevance.ss.AvoidLinks");
        if (avoidPrefixes == null || avoidPrefixes.length() == 0) return;
        String[] prefixes = avoidPrefixes.split(" ");
        if (prefixes != null && prefixes.length != 0) {
            setAvoidPrefixes(prefixes);
        }
    }
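The parsing rule these two methods share (a space-separated system property, with null and empty values ignored) can be pinned down in a standalone sketch; the class and method names here are assumptions for the demo:

```java
// Mirrors initFollowPrefixesFromSystemProperties: split a space-separated
// property value into individual prefixes, treating null/empty as "none".
public class PrefixListDemo {
    static String[] parsePrefixes(String value) {
        if (value == null || value.length() == 0) return new String[0];
        return value.split(" ");
    }

    public static void main(String[] args) {
        System.setProperty("com.relevance.ss.FollowLinks",
                "http://example.com/docs http://example.com/faq");
        String[] prefixes = parsePrefixes(System.getProperty("com.relevance.ss.FollowLinks"));
        System.out.println(prefixes.length);  // prints 2
    }
}
```

Returning an empty array for the unset case (instead of the early return in the real method) keeps callers from having to null-check the result.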
As links are considered for inclusion in the index, we'll be executing the same code against each to determine its worth to the index. We need a few helper methods to make those determinations:

    boolean shouldFollowLink(URL newLink) {
        for (Iterator iterator = linkPrefixesToFollow.iterator(); iterator.hasNext();) {
            URL u = (URL) iterator.next();
            if (matchesDownToPathPrefix(u, newLink)) {
                return true;
            }
        }
        return false;
    }

    boolean shouldNotFollowLink(URL newLink) {
        for (Iterator iterator = linkPrefixesToAvoid.iterator(); iterator.hasNext();) {
            URL u = (URL) iterator.next();
            if (matchesDownToPathPrefix(u, newLink)) {
                return true;
            }
        }
        return false;
    }

    private boolean matchesDownToPathPrefix(URL matchBase, URL newLink) {
        return matchBase.getHost().equals(newLink.getHost())
            && matchBase.getPort() == newLink.getPort()
            && matchBase.getProtocol().equals(newLink.getProtocol())
            && newLink.getPath().startsWith(matchBase.getPath());
    }

The first two methods, shouldFollowLink and shouldNotFollowLink, compare the URL to the collections for each. The third, matchesDownToPathPrefix, compares the link to one from the collection, making sure the host, port, and protocol are all the same.
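To see what matchesDownToPathPrefix accepts and rejects, here is a self-contained version that compares two URL strings directly; the wrapper class and its catch-and-return-false behavior are assumptions added for the demo:

```java
import java.net.URL;

// Standalone version of matchesDownToPathPrefix: same host, port, and
// protocol, and the candidate's path must start with the base path.
public class PrefixMatchDemo {
    static boolean matches(String base, String candidate) {
        try {
            URL matchBase = new URL(base);
            URL newLink = new URL(candidate);
            return matchBase.getHost().equals(newLink.getHost())
                && matchBase.getPort() == newLink.getPort()
                && matchBase.getProtocol().equals(newLink.getProtocol())
                && newLink.getPath().startsWith(matchBase.getPath());
        } catch (Exception ex) {
            return false;  // a malformed URL can never match
        }
    }

    public static void main(String[] args) {
        System.out.println(matches("http://example.com/docs/",
                                   "http://example.com/docs/a.html"));   // true
        System.out.println(matches("http://example.com/docs/",
                                   "http://example.com/shop/a.html"));   // false: path
        System.out.println(matches("http://example.com/docs/",
                                   "https://example.com/docs/a.html"));  // false: protocol
    }
}
```

Note that a URL with no explicit port reports -1 from getPort(), so two default-port URLs still compare equal on that clause.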
The service needs a way to consider a link for inclusion in the index. It must accept the new link to consider and the page that contained the link (for record-keeping):

    void considerNewLink(String linkFrom, WebLink newLink) throws MalformedURLException {
        URL url = null;
        url = newLink.getRequest().getURL();
        if (shouldFollowLink(url)) {
            if (linksAlreadyFollowed.add(url.toExternalForm())) {
                if (linksAlreadyFollowed.size() > maxLinks) {
                    linksAlreadyFollowed.remove(url.toExternalForm());
                    throw new Error("Max links exceeded " + maxLinks);
                }
                if (shouldNotFollowLink(url)) {
                    IndexLink.log.info("Not following " + url.toExternalForm() + " from " + linkFrom);
                } else {
                    IndexLink.log.info("Following " + url.toExternalForm() + " from " + linkFrom);
                    addLink(new IndexLink(url.toString(), conversation, this));
                }
            }
        } else {
            ignoreLink(url, linkFrom);
        }
    }
newLink is an instance of com.meterware.httpunit.WebLink, which represents a single page in a web conversation. This method starts by determining whether the new URL is in our list of approved prefixes; if it isn't, considerNewLink calls the helper method ignoreLink (which we'll see in a minute). If it is approved, we test to see if we have already followed this link; if we have, we just move on to the next link. Note that we verify whether the link has already been followed by attempting to add it to the linksAlreadyFollowed set. If the value already exists in the set, the set returns false. Otherwise, the set returns true and the value is added to the set. We also determine if the addition of the link has caused the linksAlreadyFollowed set to grow past our configured maximum number of links. If it has, we remove the last link and throw an error. Finally, the method checks to make sure the current URL is not in the collection of proscribed prefixes. If it isn't, we call the helper method addLink in order to add the link to the index:

    private void ignoreLink(URL url, String linkFrom) {
        String status = "Ignoring " + url.toExternalForm() + " from " + linkFrom;
        linksNotFollowed.add(status);
        IndexLink.log.fine(status);
    }

    public void addLink(IndexLink link) {
        try {
            link.checkLink();
        }
        catch(Exception ex) {
            // handle error...
        }
    }

Finally, we need an entry point to kick off the whole process. This method should take the root page of our site to index and begin processing URLs based on our configuration criteria:

    public void setInitialLink(String initialLink) throws MalformedURLException {
        if ((initialLink == null) || (initialLink.length() == 0)) {
            throw new Error("Must specify a non-null initial link");
        }
        linkPrefixesToFollow.add(new URL(initialLink));
        this.initialLink = initialLink;
        addLink(new IndexLink(initialLink, conversation, this));
    }
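The Set.add() idiom used in considerNewLink (one call that both tests membership and inserts) plus the max-links rollback can be sketched on their own; the class name and the cap value here are assumptions:

```java
import java.util.HashSet;
import java.util.Set;

// Demonstrates the dedupe-and-cap logic from considerNewLink: add() returns
// false for a duplicate, and an add that breaches the cap is rolled back.
public class DedupeDemo {
    static Set followed = new HashSet();
    static int maxLinks = 2;

    static boolean tryFollow(String url) {
        if (!followed.add(url)) return false;  // already followed this link
        if (followed.size() > maxLinks) {
            followed.remove(url);              // roll back: over the cap
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(tryFollow("http://example.com/a"));  // true
        System.out.println(tryFollow("http://example.com/a"));  // false (duplicate)
        System.out.println(tryFollow("http://example.com/b"));  // true
        System.out.println(tryFollow("http://example.com/c"));  // false (cap of 2 hit)
    }
}
```

Returning false here stands in for the Error the real method throws; the membership trick is the same either way, and it avoids a separate contains() check followed by add().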
Next, we define a class to model the links themselves and allow us access to their textual representations for inclusion in the index. That class is the IndexLink class. IndexLink needs three declarations:

    private WebConversation conversation;
    private IndexLinks suite;
    private String name;

The WebConversation instance again provides us the HTTPUnit framework's implementation of a browser-server session. The IndexLinks suite is the parent instance of IndexLinks that is managing this indexing session. The name variable stores the current link's full URL as a String. Creating an instance of the IndexLink class should provide values for all three of these variables:

    public IndexLink(String name, WebConversation conversation, IndexLinks suite) {
        if ((name == null) || (conversation == null) || (suite == null)) {
            throw new IllegalArgumentException("LinkTest constructor requires non-null args");
        }
        this.name = name;
        this.conversation = conversation;
        this.suite = suite;
    }
Each IndexLink exposes a method that navigates to the endpoint specified by the URL and checks to see if the result is an HTML page or other indexable text. If the page is indexable, it is added to the parent suite's index. Finally, we examine the current results to see if they contain links to other pages. For each such link, the process must start over:

    public void checkLink() throws Exception {
        WebResponse response = null;
        try {
            response = conversation.getResponse(this.name);
        } catch (HttpNotFoundException hnfe) {
            // handle error
        }
        if (!isIndexable(response)) {
            return;
        }
        addToIndex(response);
        WebLink[] links = response.getLinks();
        for (int i = 0; i < links.length; i++) {
            WebLink link = links[i];
            suite.considerNewLink(this.name, link);
        }
    }

The isIndexable method simply verifies the content type of the returned result:

    private boolean isIndexable(WebResponse response) {
        return response.getContentType().equals("text/html") ||
               response.getContentType().equals("text/ascii");
    }

whereas the addToIndex method actually retrieves the full textual result from the URL and adds it to the suite's index:

    private void addToIndex(WebResponse response) throws SAXException, IOException,
            InterruptedException {
        Document d = new Document();
        HTMLParser parser = new HTMLParser(response.getInputStream());
        d.add(Field.UnIndexed("url", response.getURL().toExternalForm()));
        d.add(Field.UnIndexed("summary", parser.getSummary()));
        d.add(Field.Text("title", parser.getTitle()));
        d.add(Field.Text("contents", parser.getReader()));
        suite.addToIndex(d);
    }
The parser is an instance of org.apache.lucene.demo.html.HTMLParser, a freely available component from the Lucene team that takes an HTML document and supplies a collection-based interface to its constituent components. Note the final call to suite.addToIndex, a method on our IndexLinks class that takes the Document and adds it to the central index:

    // note: method of IndexLinks
    public void addToIndex(Document d) {
        try {
            writer.addDocument(d);
        } catch (Exception ex) { }
    }
That's it. Together, these two classes provide a single entry point for starting a crawling/indexing session. They ignore the concept of scheduling an indexing event; that task is left to the user interface layers. We only have two classes, making the model extremely simple to maintain. And we chose to take advantage of an unusual library (HTTPUnit) to keep us from writing code outside our problem domain (namely, web request/response processing).
Principles in Action

• Keep it simple: choose HTTPUnit for web navigation code, minimum performance enhancements (maximumLinks, linksToAvoid collection)
• Choose the right tools: JUnit, HTTPUnit, Cactus, Lucene
• Do one thing, and do it well: interface-free model, single entry point to service, reliance on platform's scheduler; we also ignored this principle in deference to simplicity by combining the crawler and indexer
• Strive for transparency: none
• Allow for extension: configuration settings for links to ignore
The Search Service

The search service uses the same collected object pattern as the crawler/indexer. Our two classes this time are the QueryBean, which is the main entry point into the search service, and the HitBean, a representation of a single result from the result set. In order to perform a search, we need to know the location of the index to search, the search query itself, and which field of the indexed documents to search:

    private String query;
    private String index;
    private String field;

We also need an extensible collection to store our search results:

    private List results = new ArrayList();

We must provide a constructor for the class, which will take three values:

    public QueryBean(String index, String query, String field) {
        this.field = field;
        this.index = index;
        this.query = query;
    }
• Unit tests elided for conciseness. Download the full version to see the tests.
The field variable contains the name of the field of an indexable document we want to search. We want this to be configurable so future versions might allow searching on any field in the document; for our first version, the only important field is "contents". We provide an overload of the constructor that only takes index and query and uses "contents" as the default for field:

    public QueryBean(String index, String query) {
        this(index, query, "contents");
    }
The search feature itself is fairly straightforward:

    public void execute() throws IOException, ParseException {
        results.clear();
        if (query == null) return;
        if (field == null) throw new IllegalArgumentException("field cannot be null");
        if (index == null) throw new IllegalArgumentException("index cannot be null");
        IndexSearcher indexSearcher = new IndexSearcher(index);
        try {
            Analyzer analyzer = new StandardAnalyzer();
            Query q = QueryParser.parse(query, field, analyzer);
            Hits hits = indexSearcher.search(q);
            for (int n = 0; n
        <soap:body
      <input
    </operation>
    <soap:address
    </port>
</definitions>
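Only the tail of the WSDL survives in the listing above. For orientation, a document of this kind is laid out roughly as follows; everything here other than the searchContents and doIndex names is a generic WSDL 1.1 skeleton, not the Spider's actual file:

```xml
<definitions ...>
  <types>...</types>        <!-- schema for the exchanged datatypes -->
  <message .../>            <!-- one per input or output message -->
  <portType .../>           <!-- maps messages onto searchContents and doIndex -->
  <binding .../>            <!-- SOAP encoding details such as <soap:body> -->
  <service>
    <port>
      <soap:address .../>   <!-- the endpoint URL -->
    </port>
  </service>
</definitions>
```

The fragments that remain (soap:body, input, operation, soap:address, port) all belong to the binding and service sections at the bottom of this structure.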
The types section defines any datatypes that need to be exchanged by clients and servers; QueryType wraps the two inputs into a search query (search term and threshold for limiting results based on relative rank), and ResponseType defines the sequence of individual results of a search operation. After the datatypes, the individual messages are defined. Messages represent inputs to and outputs from individual web service endpoints. Three are defined here: the input message and output results of the search service, and the input message to a return-less index service access point. After all these definitions, the binding and port definitions map the individual messages and datatypes to the methods of the implementation class. Note that the mapping of doIndex includes an input type but no output message.

The implementation is even simpler; it only defines methods that match the WSDL (one for searchContents and one for doIndex):

    public ResponseType[] searchContents(QueryType request) throws RemoteException {
        try {
            ConfigBean config = new ConfigBean();
            ServletContext context = getServletContext();
            if (context == null) {
                throw new Error("null servlet context");
            }
            QueryBean query = new QueryBean(config.getCurIndexPath(), request.getSearchString());
            query.execute();
            HitBean[] fullResults = query.getResults();
            ArrayList result = new ArrayList();
            for (int n = 0; n < fullResults.length; n++) {
                HitBean hit = fullResults[n];
                if (hit.getScore() >= request.getThreshold()) {
                    ResponseType rt = new ResponseType();
                    rt.setScore(hit.getScore());
                    rt.setUrl(new URI(hit.getUrl()));
                    result.add(rt);
                }
            }
            return (ResponseType[]) result.toArray(new ResponseType[result.size()]);
        } catch (Exception e) {
            getServletContext().log(e, "fail");
            throw new AxisFault(e.getMessage());
        }
    }

    public void doIndex(String indexUrl) {
        try {
            ConfigBean config = new ConfigBean();
            String nextIndex;
            try {
                nextIndex = config.getNextIndexPath();
            } catch (Exception ex) {
                return;
            }
            IndexLinks lts = new IndexLinks(nextIndex, config.getMaxLinks(),
                    config.getSkippedLinksFile());
            lts.initFollowPrefixesFromSystemProperties();
            lts.initAvoidPrefixesFromSystemProperties();
            lts.setInitialLink(indexUrl);
            config.flipIndexPath();
        } catch (Exception e) {
            // System.out.print(e.getStackTrace());
        }
    }
These methods are similar to the methods defined in the console application, with minor differences in the types of exceptions thrown, as well as the creation of the return value for searchContents.
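The score-threshold filtering inside searchContents is easy to check on its own. In this sketch, Hit is a hypothetical stand-in for HitBean, and the pre-generics collection style matches the surrounding code:

```java
import java.util.ArrayList;

// Isolates the filtering loop from searchContents: keep only hits whose
// relevance score meets the caller-supplied threshold.
public class ThresholdDemo {
    static class Hit {
        double score;
        Hit(double score) { this.score = score; }
    }

    static Hit[] filter(Hit[] hits, double threshold) {
        ArrayList result = new ArrayList();
        for (int n = 0; n < hits.length; n++) {
            if (hits[n].score >= threshold) {
                result.add(hits[n]);
            }
        }
        // same toArray idiom the real method uses to return a typed array
        return (Hit[]) result.toArray(new Hit[result.size()]);
    }

    public static void main(String[] args) {
        Hit[] hits = { new Hit(0.9), new Hit(0.4), new Hit(0.7) };
        System.out.println(filter(hits, 0.5).length);  // prints 2
    }
}
```

A threshold of 0.0 passes everything through, so a client that doesn't care about relevance can simply send zero.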
Principles in Action

• Keep it simple: ignore the greater part of the web services stack; if you can't read it, don't automate it (the WSDL for this service was written by hand)
• Choose the right tools: Axis, JUnit, HTTPUnit
• Do one thing, and do it well: just invoke search and return response
• Strive for transparency: web service is the ultimate transparent layer to end users
• Allow for extension: none
Extending the Spider

So far, the Spider meets the needs of the original client. We have provided all of the necessary functionality in a simple, efficient package. The user interfaces are nicely decoupled from the business logic, meaning we can extend the application into multiple other interface areas. Since we have designed the application with the idea of extensibility through transparency, we ought to be able to add other services fairly easily.

In the next chapter, we're going to see how easy it is to repurpose the Spider for use in a different context. We'll replace the existing search functionality in the jPetStore sample application with the Simple Spider. This process demonstrates how following the principles laid out in this book makes it easy to reuse your code and make it work in new contexts. We'll layer our standalone application into a Spring framework with minimal changes to the original code.
CHAPTER 10
Extending jPetStore
The previous chapter introduced the workhorse Simple Spider service with its console-based user interface and web service endpoint. In this chapter, we see how easy it is to add the Spider to an existing application, jPetStore. Some might argue the jPetStore already has a search tool; but that tool only searches the database of animals in the pet store, not all the pages on the site. Our customer needs to search the entire site; jPetStore has at least one page in the current version that isn't searchable at all (the Help page) and text describing the different animals that doesn't show up in a query.

We'll add the Spider to the jPetStore, paying careful attention to what we need to change in the code in order to enable the integration. In addition, we will replace the existing persistence layer with Hibernate. By carefully adhering to our core principles, our code will be reusable, and since the jPetStore is based on a lightweight framework (Spring), it doesn't make unreasonable demands on our code in order to incorporate the search capability or the new persistence layer. Coming and going, the inclusion will be simple and almost completely transparent.
A Brief Look at the Existing Search Feature

The search feature that comes with jPetStore takes one or more keywords separated by spaces and returns a list of animals with a name or category that includes the term. A search for "dog" turns up six results, while a search for "snake" nets one. However, a search for "venomless" gets no results, even though animal EST-11 is called the Venomless Rattlesnake. Even worse, none of the other pages (such as the Help page) shows up in the search at all; neither will any other pages you might add, unless they're an animal entry in the database.

The search feature has the following architecture (shown in Figure 10-1):

1. Any page of the jPetStore application may contain a search entry box with a Search button.
2. Clicking the button fires a request (for /shop/searchProducts.do) passing the keywords along as part of the request.
3. petstore-servlet.xml, the configuration file for the MVC portion of the jPetStore Spring application, has the following definition:
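The listing itself did not survive extraction. Based on the description in the surrounding steps, a Spring MVC bean definition of roughly this shape would produce that mapping; the class package and bean names here are assumptions, not the file's verbatim contents:

```xml
<bean name="/shop/searchProducts.do"
      class="org.springframework.samples.jpetstore.web.spring.SearchProductsController">
  <property name="petStore"><ref bean="petStore"/></property>
</bean>
```

In Spring's BeanNameUrlHandlerMapping convention, a bean whose name is a URL path becomes the handler for requests to that path, and the property element injects the petStore dependency described in the next step.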
This creates a handler for the "/shop/searchProducts.do" request and maps it to an instance of SearchProductsController, passing along an instance of PetStoreImpl called petStore.
4. SearchProductsController instantiates an instance of a class that implements the ProductDao interface, asking it to search the database for the specified keywords.
5. ProductDao queries the database and creates an instance of Product for each returned row.
6. ProductDao passes a HashMap containing all of the Product instances back to SearchProductsController.
7. SearchProductsController creates a new ModelAndView instance, passing in the name of the JSP page to display the results (SearchProducts) and the HashMap of values. The JSP page then renders the results using the PagedListHolder control (a list/table with built-in paging functionality).
[Figure 10-1. The original jPetStore search architecture: a search request from any jPetStore page is routed to SearchProductsController, which calls ProductDao (via PetStoreImpl) to query the database; the resulting HashMap of Product instances is rendered by SearchProducts.jsp.]
Only the ProductsDao knows how to interact with the underlying data. Product is a straightforward class with information about each product, and the view (SearchProducts.jsp) simply iterates through the returned results to create the output page.
Deciding on the Spider

We've identified how the current search feature works and its limitations: the search feature only searches products in the database, not the site as a whole, and even then it doesn't search all available data about the products. The results it returns are extremely limited, though well-formatted. The Simple Spider is a crawler-based search feature instead of a database-focused one: it searches everywhere on the site, not just the products table, and it treats any textual information visible to users as part of the search domain. The Spider does have a major limitation: since it is based on a web crawler, it can only catalog pages linked to other pages on the site. If a page is only accessible via some server-side logic (for instance, selecting a product from a drop-down list and submitting the form to the server, which returns a client-side or server-side redirect), the crawler never reaches that page and it won't be part of the search. With a problem like this, in which a feature of the application is too limited to be of much service to our users, we have to decide between refining the existing service or replacing it entirely. The limitation of the jPetStore search is partly due to the fundamental nature of the service (it searches the database, not the site). Refining it to accomplish the full-site search would be horribly inefficient. The Spider is the obvious solution, but we must consider what we are already dealing with (remember, you are what you eat). If jPetStore uses a lot of server-side logic to handle navigation, the Spider simply won't be able to provide a complete catalog. In this case, though, all the navigation on the site is handled client-side, so the Spider is a perfect fit for solving our problem and coexisting with our current application.
Extending jPetStore

We have decided that an existing service layer of the application is unsuited to our current needs. Additionally, we have decided that replacing the service with a new one is the appropriate solution. This situation is a perfect test of extension: how easy will it be to replace this service? Will it involve new code? Changes to existing code? Or just changes to our configuration services? In order to replace the existing functionality with the Simple Spider, we need to change the output formatting a little (our returns will display full URLs instead of product instances), write a new controller that knows to launch the Simple Spider instead of the ProductsDao object, and change our mapping layer to point to the new controller. Finally, we'll use Spider's configuration service so Spider works better with the new web site. Looking at these requirements, we can already see we'll need to write fewer than 100 lines of code and make only minor configuration changes in order to get this to work. It's a reasonable price to pay for the end result we want. Because jPetStore
and the Simple Spider were designed to allow for extension in the first place, they fit together well with minimal work. Conversely, we could write much less code and in fact do almost no work at all if we chose to connect to the Spider through the existing web service interface rather than integrating it directly with the jPetStore. Since the web service interface already exists, it might be construed as a violation of the "do one thing, and do it well" principle to add another, seemingly useless interface. In this instance, though, the added cost of sending a bloated message (XML/SOAP) over a slow transport mechanism (HTTP) is too heavy, especially given the minimal amount of work it will take to get a faster, more efficient integration.
Replacing the Controller

First, let's replace the SearchProductsController. Here's the main method of that class:

    public ModelAndView handleRequest(HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        if (request.getParameter("search") != null) {
            String keyword = request.getParameter("keyword");
            if (keyword == null || keyword.length() == 0) {
                return new ModelAndView("Error", "message",
                    "Please enter a keyword to search for, then press the search button.");
            } else {
                PagedListHolder productList = new PagedListHolder(
                    this.petStore.searchProductList(keyword.toLowerCase()));
                productList.setPageSize(4);
                request.getSession().setAttribute(
                    "SearchProductsController_productList", productList);
                return new ModelAndView("SearchProducts", "productList", productList);
            }
        } else {
            String page = request.getParameter("page");
            PagedListHolder productList = (PagedListHolder) request.getSession()
                .getAttribute("SearchProductsController_productList");
            if ("next".equals(page)) {
                productList.nextPage();
            } else if ("previous".equals(page)) {
                productList.previousPage();
            }
            return new ModelAndView("SearchProducts", "productList", productList);
        }
    }
The method returns a new instance of ModelAndView, and Spring uses it to determine which JSP to load and how to wire data up to it. The method takes an HttpServletRequest and HttpServletResponse in order to interact directly with the HTTP messages. The first thing the method does is make sure the user entered a search term. If not, it displays an error to the user; if so, it creates a PagedListHolder called productList with a maximum page size (number of rows per page) set to four. Finally, it calls the petStore instance's searchProductList method, which calls to ProductsDao and finally returns the HashMap of Product instances. The second clause is for when the user clicks the Next Page or Previous Page buttons on the paged list.
Rewrite or Replace?

The next question a conscientious programmer should ask is, does it make more sense to rewrite this class to make use of the Spider, or to write an entirely new controller? In order to answer that question, we need to consider three more-specific questions first:

1. Do we have access to the original source? Now that we have the jPetStore application, do we control the source, or is it all binary? If we don't control the source, we can short-circuit the rest of the decision. We can only replace the class; we can't rewrite it.
2. Will we ever need to use the original service again? Assuming we have the source and can rewrite the class, can we foresee ever needing to revert to or make use of the database-search functionality? For the sake of flexibility, we usually want to retain old functionality unchanged, which means we want to replace, not rewrite. However...
3. Does the current class implement an easily reused interface? If we are going to replace the class, how much work will we have to do to get the application to recognize and accept our new class? Think of this as an organ transplant; how much work and medication has to go into the host body to keep it from rejecting the new organ? Will our changes be localized around the new class or more systemic?

Here's the answer to these questions: yes, we have the source code; yes, we'll want to retain the potential for using the old service; and yes, the controller implements a very simple interface. The controller only needs to implement a single method, handleRequest, which takes an HttpServletRequest and an HttpServletResponse and returns a ModelAndView. This means the jPetStore application doesn't need any systemic changes in order to use our new controller, as long as we support that interface.
Implementing the Interface

To replace this class, we're going to write our own controller class called SearchPagesController. It must implement the Controller interface, which defines our handleRequest method:

    public class SearchPagesController implements Controller {
    }
Here's our controller's handleRequest method:

    public ModelAndView handleRequest(HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        if (request.getParameter("search") != null) {
            String keyword = request.getParameter("keyword");
            if (keyword == null || keyword.length() == 0) {
                return new ModelAndView("Error", "message",
                    "Please enter a keyword to search for, then press the search button.");
            } else {
                ConfigBean cfg = new ConfigBean();
                String indexpath = "";
                try {
                    indexpath = cfg.getCurIndexPath();
                } catch (Exception ex) {
                    return new ModelAndView("Error", "message",
                        "Could not find current index path.");
                }
                QueryBean qb = new QueryBean(indexpath, keyword, "contents");
                qb.execute();
                HashMap hits = new HashMap(qb.getResults().length);
                for (int i = 0; i < qb.getResults().length; i++) {
                    // ... (the remainder of this listing is cut off in the excerpt)
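The configuration snippet the following paragraph discusses was also dropped from this excerpt. In the same hedged spirit as before, it presumably resembles this sketch (the class package is an assumption):

```xml
<!-- Sketch only; swaps the search URL over to the new Spider-backed
     controller and hands it the existing petStore bean -->
<bean name="/shop/searchProducts.do" class="com.relevance.ss.web.SearchPagesController">
  <property name="petStore"><ref bean="petStore"/></property>
</bean>
```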
We're telling the application to map requests for "/shop/searchProducts.do" to a new instance of SearchPagesController. At the same time, we tell it to provide the SearchPagesController with the current instance of petStore (in a property called petStore).
Principles in action
• Keep it simple: the controller logic is a simple invocation of Spider; the controller interface is very simple (one method)
• Choose the right tools: Spring and the Spider
• Do one thing and do it well: since the Spider is so well-encapsulated, it's easy to add to an existing service; the controller deals with invoking the Spider, and the JSP only needs to display the results (the MVC pattern, well demonstrated)
• Strive for transparency: the site doesn't care how it is indexed; it can easily switch between data-driven and HTML-driven search technologies
• Allow for extension: we quickly expanded our search capabilities by adding a new tool with minimal code; the configuration abilities of jPetStore allow for no-code recognition of the new service
The User Interface (JSP)

The user interface is fairly straightforward. Instead of just dumping our results to the console or creating an XML document of the results (as in the web service implementation from Chapter 9), this time we need to write a JSP that iterates over the results and displays them as hyperlinks in a table. The original jPetStore search feature used a PagedListHolder for its results because it displayed the image associated with each returned product in the table. Since the images were arbitrary in size, jPetStore didn't want to display too many entries on a given page, since it might result in a lot of vertical scrolling. Our results consist of a hyperlink to the returned URL and the relative rank of the given result; therefore, we'll use a simple table to display our results. Again, we are faced with the rewrite-or-replace question. Just like last time, we have three questions to consider:

1. Do we have access to the original source? We must, since JSPs are just text files in both development and deployment mode.
2. Will we ever want to reuse the existing service? We do, but in this case, a JSP is so easy to recreate that it won't make much difference.
3. Does the current version implement some standard interface? Not as such, since JSPs are just mixes of static HTML and dynamic content.

Because of the rather trivial nature of the changes, and because JSPs are easily edited in place (no compilation necessary), we'll just repurpose the existing SearchProducts.jsp file. This strategy saves us from having to change any more configuration settings:
I've been wrestling with the float vector sink for a couple of days now. We're trying to give our Python script access to the values being output by our narrow filter, but no matter what happens, the vector sink block doesn't seem to be giving out anything.
Our code is as follows:
from gnuradio import gr, usrp

class carrier_sense:
    def __init__(self):
        self.fg = gr.flow_graph()
        self.u = usrp.source_c()
        self.complex_mag = gr.complex_to_mag_squared()
        self.iir_filt = gr.single_pole_iir_filter_ff(0.5)
        self.dest = gr.vector_sink_f()
        self.u.tune(0, self.u.db[0][0], 2.4e9)
        # Connect all of the blocks together
        self.fg.connect(self.u, self.complex_mag, self.iir_filt, self.dest)

def main():
    testgraph = carrier_sense()
    testgraph.fg.start()
    print testgraph.dest.data()

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        pass
Trying to run this code gives us this printed output: ()
Our logic was that since the output of the single_pole_iir_filter is a float, we could simply attach the vector_sink_f block to the end of it. We get the same results when attaching the sink to the complex_mag block, or a complex vector sink to the USRP board.
|
https://www.ruby-forum.com/t/possible-misuse-of-the-vector-sink/69180
|
CC-MAIN-2022-33
|
refinedweb
| 191
| 51.68
|
Add data to Gatsby's GraphQL layer using sourceNodes
In this post I'm going to demonstrate how to source data from a NASA API and inject the response into Gatsby's GraphQL layer without the use of a source plugin.
All Gatsby source plugins use the same approach as outlined below, and if you find yourself in a situation where there's no suitable source plugin available to install, you could use the following approach to roll your own solution.
If you'd prefer to jump ahead here's a demo repo:
... and a live demo can be seen here:
sourceNodes

To source data from a remote source and add it to Gatsby's data layer you can use the sourceNodes extension point from within gatsby-node.js. sourceNodes has been designed to run at an appropriate time during the build process to allow you to inject your own data.
Here's a brief list of the Gatsby's build steps, the full list can be seen here: Understanding Gatsby build | build steps
NB: There are some subtle differences between the build steps when running gatsby develop vs gatsby build
    success open and validate gatsby-configs - 0.062 s
    success load plugins - 0.915 s
    success onPreInit - 0.021 s
    success delete html and css files from previous builds - 0.030 s
    success initialize cache - 0.034 s
    success copy gatsby files - 0.099 s
    success onPreBootstrap - 0.034 s
    success source and transform nodes - 0.121 s
    success Add explicit types - 0.025 s
    success Add inferred types - 0.144 s
You'll see near the bottom of the snippet: success source and transform nodes. It's here where you can source your own data and make it available to query via GraphQL by using createNode, more on that in a moment.
It's also worth noting that next to each build step is a timestamp in seconds. You'll see next to success source and transform nodes it says 0.121 s; naturally this varies slightly depending on which version of Node you're running, and I've heard tell that Windows runs Node slower than Mac. 🤷‍♂️
But... the most important thing I'd like to make clear is this: when you source your own data during this build step, the response time of the API you're requesting data from and the amount of data you're sourcing can both have an impact on this time.
If you're attempting to download a million 4k videos from a remote server on the moon this build step will likely take much longer to complete. You've probably seen comments on Twitter regarding slow Gatsby build times, these comments seldom mention how much data is being sourced, and from where.
Source Plugins
As great as source plugins are, you might find yourself experiencing some of these slow build-time issues, and because you're using a source plugin they can be hard to resolve, since you don't have access to Gatsby's underlying methods.
My motivation for writing this post is for precisely this reason. You might not need a plugin and by rolling your own solution it's quite likely you can source a smaller data payload which could help bring your build times back up to speed.
Pre-Flight Checks

To use the NASA API you'll need an API key, which you can get from NASA's API site:

You'll also need a gatsby-node.js at the root of your project:

    ...
    ├── src
    ├── gatsby-node.js
    └── package.json
And finally, since I'll be requesting data on the server rather than in the browser, I'll be using axios:
yarn add axios # npm install axios --save
The Code

Ok, with all of the above in place, add the below to your gatsby-node.js file. You'll need to add your own API key to the request string.

    // gatsby-node.js
    const axios = require('axios')

    exports.sourceNodes = async ({ actions, createNodeId, createContentDigest }) => {
      const { data } = await axios.get(``)

      actions.createNode({
        ...data,
        id: createNodeId(data.date),
        internal: {
          type: 'apod',
          contentDigest: createContentDigest(data),
        },
      })
    }
Starting at the top, I define and export sourceNodes. sourceNodes can be an async function and accepts a number of parameters, including but not limited to the following:

- actions
- createNodeId
- createContentDigest
actions

I've referred to Gatsby's data layer a number of times and, at the time of writing this post, this is actually Redux. Actions are the equivalent of actions bound with bindActionCreators in Redux. One of the parameters:
createContentDigest
This again is a helper function provided by Gatsby that allows for the creation of a content digest. createContentDigest is used to determine if data has changed or has remained the same since the last build.
actions.createNode

To create a node there are a few things Gatsby requires, and below is the absolute minimum set of parameters you'll need. The full list of accepted parameters can be seen in the docs.

    // gatsby-node.js (snippet from above)
    actions.createNode({
      ...data,
      id: createNodeId(data.date),
      internal: {
        type: 'apod',
        contentDigest: createContentDigest(data),
      },
    })
data

The ...data is the data returned by the NASA API. It's a single object rather than an array of objects. I spread this straight into my new node; you can of course abstract the response and only inject the data you need.
id

Every node needs an id. I'm not 100% clear on why, but ids are usually required to ensure data is uniquely identifiable.
internal.type

This is where you can define a type. In the above I've defined this as apod. APOD is the NASA API endpoint I'm using and stands for Astronomy Picture of the Day. This internal type is what you'll use later when querying the data using GraphQL.
internal.contentDigest

As above, each node requires a contentDigest to enable stale node detection.
Run develop

At this point you should be able to run gatsby develop; if there are no errors, you're in a good place.
GraphiQL

With the node created, you should be able to see the apod type in the GraphiQL explorer. Visit to investigate. If you've used the APOD API as I've done, the accepted query types are as follows. You'll notice I'm using the singular apod query name. Gatsby will create two queries for you: the singular, as seen below, but also a plural prefixed by all, e.g. allApod. As mentioned above, the data returned by the NASA API is an object rather than an array of objects.
    {
      apod {
        id
        date
        explanation
        media_type
        service_version
        title
        url
      }
    }
Which should give you a response similar to the below
    {
      "data": {
        "apod": {
          "id": "bbfeddbe-d2d7-5ce9-8962-35a779b7acb1",
          "date": "2021-07-01",
          "explanation": "...",
          "media_type": "image",
          "service_version": "v1",
          "title": "Perseverance Selfie with Ingenuity",
          "url": ""
        }
      }
    }
This confirms that the node was created successfully and can be queried by GraphQL in your React component or page.
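For reference, the auto-generated plural query mentioned earlier would be shaped roughly like this (field names assumed from the node above):

```graphql
{
  allApod {
    nodes {
      id
      title
    }
  }
}
```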
Jsx

My preferred method for querying non-page queries in Gatsby is to use useStaticQuery. Here's a query that I've used in index.js, and I return the result using some simple un-styled HTML.

    // index.js
    import React from 'react'
    import { useStaticQuery, graphql } from 'gatsby'

    const IndexPage = () => {
      const {
        apod: { id, date, explanation, media_type, service_version, title, url },
      } = useStaticQuery(graphql`
        query {
          apod {
            id
            date
            explanation
            media_type
            service_version
            title
            url
          }
        }
      `)

      return (
        <main>
          <p>{date}</p>
          <h1>{title}</h1>
          <p>{explanation}</p>
          <img alt={title} src={url} />
          <p>{`id: ${id}`}</p>
          <p>{`media_type: ${media_type}`}</p>
          <p>{`service_version: ${service_version}`}</p>
        </main>
      )
    }

    export default IndexPage
... and there you have it, data sourcing without plugins! I've used this approach many times in various projects and covered it quite conclusively with Benedicte Raae on our pokey internet show Gatsby Deep Dives with Queen Raae and the Nattermobs Pirates.

Stay tuned for the next post, where I'll explain how to convert the image url GraphQL type from "string" to "file" using createSchemaCustomization so it can be used with the new and improved gatsby-plugin-image.
Question:
When implementing a factory or simple factory, what would go against using a Type instead of an Enum to specify the class to instantiate?
For example
    public class SimpleFactory
    {
        public static ITest Create(Type type)
        {
            if (type == typeof(ConcreteTest1)) return new ConcreteTest1();
            if (type == typeof(ConcreteTest2)) return new ConcreteTest2();
            throw new Exception("Invalid type");
        }
    }
Solution:1
Using an enum is more restrictive, which means that it is less likely that the user will try to use your factory with an unsupported type.
I find that it's good to do everything possible when defining an API to discourage usage patterns that will cause exceptions to be thrown. Allowing "Type" in this case opens up millions of ways to call your function that will result in:
throw new Exception("Invalid type");
Using an enum would eliminate this. The only way an enum would throw would be if the user did something noticeably wrong.
Solution:2
Factories are only useful if they perform configuration or initialization on your objects to put them in a valid state. I wouldn't bother with a factory if all it does is new up and return objects.
I would create a factory for each class hierarchy. For example:
    public abstract class Vehicle {}
    public class Car : Vehicle {}
    public class Truck : Vehicle {}

    public class VehicleFactory
    {
        public Vehicle CreateVehicle<T>() where T : Vehicle
        {
            // Get the type of T and delegate creation to private methods
        }
    }
Solution:3
If you want a foolproof factory, you must create one concrete factory for each concrete type. This class doesn't follow the open-closed principle: each time you get a new concrete type, you have to re-edit this class.
IMHO a better approach is using inheritance, one concrete factory class for each concrete type.
Solution:4
I would prefer to use a generic constraint, because having an enum just to specify what kind of object you want seems redundant to me, and by using a type as you've described you violate the Open/Closed principle. What I would do differently from what you have done there is constrain your type so that only allowable types can be passed in.
I'll give an example in C# using generics.
    public class SimpleFactory
    {
        public static ITest Create<T>() where T : ITest, new()
        {
            return new T();
        }
    }
Then you would implement ITest with both ConcreteTest1 and ConcreteTest2, and you could use your factory like this:
ConcreteTest1 test1 = SimpleFactory.Create<ConcreteTest1>();
Solution:5
If you want to create by type, you could just use Activator.CreateInstance(Type t). Wrap it in a template method to limit it to your interface, something like Create<T>() where T : ITest.
Solution:6
I think the biggest concern that I would have is that the purpose of the factory is to allow client code to create a derived instance of an object without knowing the details of the type being created (more specifically, the details of how to create the instance, but if done correctly, the caller should not need to know any of the finer details beyond what is provided by the base class).
Using type information extracted from the derived type still requires the caller to have some intimate knowledge about which type he wants to instantiate, which makes it difficult to update and maintain. By substituting an Enum type (or string, int, etc.), you can update the factory without having to update the calling code to be aware of the new derived types.
I suppose one might argue that the type name could be read in as a string from a config file, database, etc., and the type information determined using Reflections (in .NET) or RTTI (in C++), but I think this is a better case for simply using the type string as your identifier since it will effectively serve the same purpose.
|
http://www.toontricks.com/2018/05/tutorial-factory-pattern-enums-or-types.html
|
CC-MAIN-2018-34
|
refinedweb
| 659
| 54.46
|
lautorite.qc.ca — Choosing Investments
One of three brochures to help consumers manage their personal finances. The second in the series, this brochure will help you choose the investments that suit you. The other brochures are: Reviewing Your Personal Finances and Choosing an Investment Dealer or Representative. Available on the AMF's website.
Legal deposit - Bibliothèque et Archives nationales du Québec, 2011. Legal deposit - Library and Archives Canada, 2011. ISBN (Printed version); ISBN (On-line version).
Table of contents

The three main characteristics of investments ... 4
Types of investment income ... 4
Five steps to choosing suitable investments:
1. Know yourself
2. Understand the best types of investments for you
3. Understand certain tax plans
4. Decide how to allocate your assets
5. Read and understand the information required for decision-making ... 16
Main types of investments ... 18

Were you pleased with your latest investment?

Think back to the first time you made an investment, for example, by contributing to your RRSP. Then answer the following questions:

1. Which type of investment did you choose (e.g. stocks, bonds)?
2. Are the capital and the return guaranteed? If so, who is offering the guarantee?
3. What are the conditions for cashing in your investment?
4. What is the expected return?
5. How much tax will you have to pay on the gains?
6. Are there other more suitable investments for you?

Many investors can't answer these questions. Yet this is YOUR money. So, before investing, investigate. This brochure sets out five steps to choosing investments that will help you achieve your financial goals. Before taking these steps, we recommend you read the brochure Reviewing Your Personal Finances. Another brochure you may find useful is the third one in this series, called Choosing an Investment Dealer or Representative. To learn more about investments, we also suggest you look at the AMF's Short Investment Glossary.
Before following the steps, you should know that investments have different characteristics.

The three main characteristics of investments:

- Expected return: the gain you expect to obtain on your investment. For some types of investments, the return you actually earn may not be the same as what you expected. Fees and taxes will reduce the return.
- Liquidity: the ability to easily convert an investment to cash.
- Risk: the possibility that you will earn a lower return on your investment than expected, or lose some or all of it.

Investments differ by the type of income they can generate:

- Interest: the amount a borrower pays an investor to use his/her money. Example: a $1,000 investment in a GIC earning 2% per year will generate $20 in interest after one year.
- Dividends: the portion of the profits that a corporation distributes to its shareholders. Example: Company ABC pays its shareholders a quarterly dividend of $0.30 on each common share they own.
- Capital gain (or loss): the difference between the sale and purchase price of an investment. Example: you sell a share for which you paid $12 for $20; you have a capital gain of $8. However, if you sell the same share for $9, you have a $3 loss.

Five steps to choosing suitable investments

You might want to consult with a representative on these steps.

Autorité des marchés financiers
1. Know yourself

Before investing, you must determine your goals and establish your investor profile.

Define your investment objectives

Why are you investing? How long will you hold the investment? What kind of return are you expecting? Is the risk worth it? If you're looking for a 4% or 5% return, you should consider low-risk investment vehicles. For instance, the return on a long-term investment is probably higher than on a short-term one. This type of investment can fluctuate depending on interest rates, but if you hold onto it until maturity, you'll get the promised return (unless the issuer runs into financial difficulties).

Establish your investor profile

Do you prefer investments with stable but possibly lower returns, or riskier ones that could earn more? Some people lose sleep when the value of their investments drops. So before you choose a risky vehicle, make sure you understand all the possible consequences, especially a worst-case scenario. Knowing your investor profile will be useful when you meet with a representative. Don't be afraid to share your concerns, because he/she has an obligation to know you before suggesting investments. The more information available to the representative, the better he/she can help you achieve your financial goals. In this regard, we recommend reading the third brochure of this series, called Choosing an Investment Dealer or Representative.
The following brief questionnaire will give you a good idea of your investor profile. We define three broad categories of investors: conservative, moderate and aggressive. There are different types of investor profile questionnaires. For more accurate results, speak to an authorized representative.

1. Personal financial situation. If the value of your investments fell during the year, would you have to put a project aside?
A. Yes, I would not be able to carry out my projects
B. Somewhat. I would have to postpone some projects
C. Not at all

2. Investment knowledge. Do you know the differences between the various types of investments?
A. Not at all
B. Somewhat
C. Very well

3. Investor experience. What types of investments have you already made?
A. Very few, perhaps only investments with guaranteed principal and return, e.g. GICs, T-bills and government bonds
B. In different types of investments, e.g. GICs, bonds or mutual funds, and riskier investments
C. In different types of investments, mainly riskier investments such as derivatives

4. Investment horizon - liquidity. When will you need your investment money?
A. In less than 3 years
B. 3 to 10 years
C. 10+ years
5. Return/Risk. The following three investments show a best and worst range of possible returns. Which one would you be most likely to hold?
A. Investment A
B. Investment B
C. Investment C

[Chart: Three possible investments with their worst and best possible returns - Investment A: 1% to 3%; Investment B: -5% to 10%; Investment C: -15% to 30%]

6. The emotional factor. If your investments fell by 10%, how would you feel?
A. It would bother me. I may lose sleep over it and maybe sell off.
B. It wouldn't bother me, because I believe that in the long run I'll get the return I expect.
C. I would invest more money in the vehicle.

SCORING

Give yourself 1 point for every A answer, 2 points for every B and 3 points for every C. There is no best answer. The number of points will simply help you understand the kind of investor you are. Answer as honestly as possible.
Interpretation of results

0-9 points: CONSERVATIVE INVESTOR
You prefer safer investments and are not comfortable with fluctuations. Perhaps you can't afford to lose money due to your financial and personal circumstances (e.g. age, family situation). Make sure your investments reflect this reality by holding mostly GICs, T-bills, savings bonds and similar vehicles. Because you're likely to obtain modest returns with these types of investments, it's important to start saving early to compensate for this fact. Remember to shop around before investing: Not all financial institutions offer the same returns for GICs.

10-15 points: MODERATE INVESTOR
You're prepared to take calculated risks to obtain higher returns but aren't very comfortable with significant fluctuations, perhaps because your financial and personal circumstances don't allow you to lose a significant portion of your investment. Good diversification is important to you. If you hold a combination of guaranteed investments, bonds and stocks, you'll benefit from the generally higher returns provided by the riskier securities over the long term but limit the risk exposure of the overall portfolio.
16+ points: AGGRESSIVE INVESTOR
You're not afraid of risk and are comfortable with sharp fluctuations. Your financial and personal situation is such that you can withstand a potentially heavy loss. You believe that in the long run, you'll be rewarded because riskier vehicles tend to outperform guaranteed investments.

CAREFUL! Many people only think that they're aggressive investors but as soon as their investments lose value, they panic and sell. This profile is for people who have a good grasp of stock market cycles and don't let emotions guide their investment decisions.
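The scoring rule above is simple arithmetic and can be sketched in a few lines of Python. This is an illustrative sketch only, not part of the brochure; the function name is my own, and the 10-15 point "moderate" bracket is inferred from the 0-9 and 16+ brackets given in the text.

```python
# Illustrative sketch of the questionnaire scoring rule: 1 point per "A",
# 2 per "B", 3 per "C". Brackets: 0-9 conservative, 16+ aggressive; the
# 10-15 "moderate" bracket is inferred from the surrounding text.
POINTS = {"A": 1, "B": 2, "C": 3}

def investor_profile(answers):
    """Score six A/B/C answers and map the total to a profile label."""
    score = sum(POINTS[a] for a in answers)
    if score <= 9:
        profile = "conservative"
    elif score <= 15:
        profile = "moderate"
    else:
        profile = "aggressive"
    return score, profile

print(investor_profile(["A", "A", "B", "A", "A", "B"]))  # (8, 'conservative')
print(investor_profile(["C"] * 6))                       # (18, 'aggressive')
```

With six questions the total always lands between 6 and 18, so every honest set of answers maps to exactly one of the three profiles.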
2. Understand the best types of investments for you

Even if you invest with the help of a representative, you should understand the vehicle you're investing in. After all, as we said earlier, it's your money. More specifically, you should know:
- How will you make money with this investment? Does it earn dividends, interest or capital gains?
- Will the investment fluctuate? If so, what kind of events will make it gain or lose value? Is there a chance that these events will occur?
- Is there a cost to acquire this investment? Are there fees attached, e.g. annual management fees? Will you make money despite all these costs? If so, is the risk worth it? Are there risk-free investments on the markets that would earn you a comparable return?
- Can you access your money if you need it?

If necessary, refer to the second part of this brochure, which describes the main types of investments.
3. Understand certain tax plans

Registered Retirement Savings Plan (RRSP)
An RRSP is a plan in which your investments grow tax free. These investments can be stocks, bonds, mutual funds, GICs and others. An RRSP is mainly used to save for retirement. The contributions are tax deductible, thus lowering your taxable income, possibly resulting in a tax refund. You can always cash in your RRSP but you will only get back a portion of the money because the amount you withdraw will be added to your income for the year and you will be taxed accordingly. If you wait until retirement to make a withdrawal, you'll probably pay less tax because your income will be lower at that time. For more information about RRSPs, read the brochure Mieux investir pour accumuler davantage en vue de la retraite (available in French only).

Home Buyers' Plan (HBP)
The HBP is a program that lets you withdraw up to $25,000 from your RRSP tax free to buy or build your first home. If you buy the home with your spouse, you can each withdraw up to $25,000. However, you must make an annual repayment to your RRSP equal to 1/15th of the total amount withdrawn until the amount is fully repaid. There is no tax deduction for the amounts repaid. A number of conditions and exceptions apply to this plan. Refer to our brochure titled Benefit from the HBP while staying on course for retirement, available on the AMF's website, or visit the Canada Revenue Agency website for more information.
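The 1/15th repayment rule above is easy to work out for any withdrawal. A minimal sketch, assuming a straight division with no early or extra repayments (the function name is hypothetical):

```python
def hbp_annual_repayment(amount_withdrawn):
    """Minimum annual RRSP repayment under the HBP: 1/15 of the total
    withdrawal, repaid each year until the full amount is back in the plan."""
    return amount_withdrawn / 15

print(hbp_annual_repayment(15_000))  # 1000.0
print(hbp_annual_repayment(25_000))  # roughly $1,666.67 a year for 15 years
```

Repaying more than the minimum in a given year shortens the schedule; this sketch only shows the floor the rule sets.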
CAREFUL! Unlike the TFSA, amounts withdrawn from your RRSP cannot be recontributed.

Tax-Free Savings Account (TFSA)
A TFSA is a plan in which your investments (stocks, bonds, GICs, etc.) grow tax free. With a TFSA, you can save for whatever reason you want (e.g. to buy a home, a car). Unlike an RRSP, contributions to a TFSA are not tax deductible. On the other hand, you pay no taxes when the money is withdrawn. In other words, the interest, dividends or capital gains on your investments are not taxable.

TFSA vs. RRSP

When can I start contributing to the plan?
TFSA: As of 2009, provided you are 18 years or older.
RRSP: At any age. You're entitled to contribute as soon as you declare income.

How much can I contribute?
TFSA: You must be 18 and over to contribute to a TFSA. Unused contribution room can be used in subsequent years. See the table on page 14 for the maximum allowable contributions from year to year.
RRSP: 18% of income (up to a maximum of $21,000 in 2009 and $22,000 in 2010), less the pension adjustment amount appearing on your T4 slip from the Canada Revenue Agency. Unused contribution room can be carried forward to subsequent years.

What happens if I exceed the maximum allowable amount?
You will be charged a 1% penalty on the over-contribution for each month that an excess exists in the account (the maximum over-contribution for an RRSP without penalty is $2,000).

What types of investments are allowed?
Generally, the types of investments allowed in a TFSA are the same as for an RRSP, including GICs, stocks, bonds and mutual funds. Investments in labour-sponsored investment funds are not eligible as TFSA contributions.
Can I contribute to a spousal plan?
TFSA: No, but you can give your spouse the money to contribute to his/her TFSA.
RRSP: Yes; however you, and not your spouse, will benefit from the tax deduction.

How are the contributions taxed?
TFSA: The income generated by the deposits is tax free. Although the contribution is not tax deductible, the amounts you withdraw are not taxable.
RRSP: The amount deposited grows tax free, and your contributions reduce your taxable income. However, when you withdraw money from your RRSP, it is added to your income.

What are the tax implications of a withdrawal?
TFSA: You can withdraw any amount at any time with no tax consequences because there is no tax on the amounts accumulated.
RRSP: You can withdraw any amount at any time; however, these amounts are taxable.

If I withdraw money from the plan, can I replace it later?
TFSA: You can replace the amounts in your TFSA if you wish but only the following year.
RRSP: You cannot replace money withdrawn from your RRSP unless it was taken out under the Home Buyers' Plan (HBP), the Lifelong Learning Plan (LLP) or similar plans.

What happens if I don't contribute the maximum?
TFSA: Your contribution room is carried forward, i.e. you can contribute the amount later on.
RRSP: Since 1991, if you have unused contribution room, it accumulates, i.e. you can contribute later on.

Where can I make the contributions?
In most financial institutions, e.g. banks, credit unions, trust companies, life insurance companies or investment firms. Ask your representative.

Can I open more than one TFSA/RRSP?
Yes. In such a case, the contribution limit applies to all the institutions combined. For example, if your limit is $5,000, you can put in a total of $5,000 and not $5,000 per institution.

Can I borrow to contribute to the plan?
Yes. However, the interest on your loan will not be tax deductible. Also, in the case of the TFSA, there is no tax refund to help you repay the loan.
When does the plan have to be wound up?
TFSA: Upon death.
RRSP: On December 31 of the year you turn 71, at which point you can transfer the funds to a Registered Retirement Income Fund (RRIF), purchase an annuity, or cash in the RRSP.

Table of Maximum TFSA Contributions

Year / Maximum annual contribution / Maximum cumulative contributions
2009: $5,000 / $5,000
2010: $5,000 / $10,000
2011: $5,000 / $15,000
2012: $5,000 / $20,000
2013: $5,500 / $25,500
2014: $5,500 / $31,000
2015: $10,000 / $41,000
2016 and subsequent years: $10,000

Registered Education Savings Plan (RESP)
An RESP is a plan that allows you to save money tax free in order to finance part or all of your child's post-secondary education. Although the contributions are not tax deductible, they do entitle the plan holder to the Canada Education Savings Grant (CESG), subject to certain conditions. Québec residents are also entitled to the Québec Education Savings Incentive (QESI), subject to certain conditions. Moreover, some low-income families may qualify for the Canada Learning Bond (CLB).
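The comparison table above mentions a 1% charge on over-contributions for each month the excess stays in the account. That calculation can be sketched as follows (an illustration only; the function name is my own, and real assessments follow the tax agency's rules):

```python
def overcontribution_penalty(excess, months):
    """Penalty of 1% of the excess contribution for each month the excess
    remains in the account, as described in the TFSA/RRSP comparison."""
    return excess * 0.01 * months

# Being $2,000 over the limit for 3 months costs $60 in penalties.
print(overcontribution_penalty(2_000, 3))
```

Even a modest excess adds up quickly, which is why tracking your remaining contribution room matters.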
4. Decide how to allocate your assets

Asset allocation is the proportion of each type of investment you wish to have in your portfolio, in other words, how much of your portfolio will be made up of stocks, bonds, exchange-traded funds, etc. You can have more than one portfolio, each one with a different asset allocation. For instance, you can have an RRSP portfolio, a TFSA portfolio and a portfolio of non-registered investments (non-RRSP and non-TFSA, for example). To choose the most suitable asset allocation for you, consider such things as your investment objectives, risk tolerance and investment horizon. Your representative can help you with this.

Diversify your investments
The idea is not to put all your eggs in one basket. To this end:
- Place your money in more than one institution;
- Have different types of investments (e.g. real estate, stocks, bonds);
- For investments with terms, diversify the maturities, for instance, have 1-, 2-, and 5-year GICs or bonds;
- If you own stocks, invest in different sectors such as financial, resources and technology.
5. Read and understand the information required for decision-making

Make sure to receive complete written documentation on the investments you are contemplating. When you invest, with or without a representative, learn about the investments by consulting the following sources:
- The prospectus (explains the type of investment and, most importantly, the risk factors) or other official documents;
- The website sedar.com (System for Electronic Document Analysis and Retrieval): All public companies and investment funds (e.g. mutual funds) must file certain documents and information;
- The website of the financial institution selling the investment;
- The financial statements of the company in which you are thinking of investing;
- The company's MD&A (management's discussion and analysis): This is an information document that accompanies the financial statements and helps investors understand the company's performance and financial position. It gives management's point of view on the current financial position, recent performance and outlook for the future;
- The Annual Information Form: This document describes the company, its activities, subsidiaries, projects and risks to which it is exposed;
- The company's investor relations department: This is where you'll find answers to questions about dividends, dates of shareholders' meetings and other matters of interest to shareholders. Most large organizations offer this type of service or have a public relations department;
- The person offering the investment.

Get your information from more than one trustworthy source.

You should now be able to make the right investment decision for you. Once you have placed the money:
- Understand and hold onto any documents that you have signed;
- Read and keep any documents you receive about your investment;
- Stay on top of your investments;
- Balance your portfolio regularly by allocating and diversifying the assets;
- Review your investor profile periodically.
Main types of investment

There are many types of investments with different characteristics such as liquidity, return and risk. Because there is no one vehicle that balances all three characteristics perfectly, you must choose the ones that best meet your needs. For example, you may have to sacrifice return to have an investment that does not exceed your risk tolerance. Investment risk can be classified as follows:

Low: Investments for which the amount invested AND the return are guaranteed by deposit insurance, a government or a financial institution. With this type of investment, you know exactly what you'll get at maturity. However, you may not get the expected return and you may even lose money if you withdraw the funds before maturity.

Medium: Investments for which the amount invested is guaranteed by a well-established company. The value of the investment may fluctuate until maturity, at which point you will get the principal back. If you liquidate the investment before its maturity, you could end up with no return and even lose some of the money you put in.

High: Investments for which neither the amount invested nor the return is guaranteed. The value of the investment may fluctuate heavily over time, and you could lose some or all of the money you put in.

Sometimes an investment can fall into more than one risk category. For instance, guaranteed investment certificates (GICs) are typically guaranteed by the issuer and therefore fall into the low risk category. However, there are also indexed GICs for which the return is not guaranteed, only the principal. These therefore fall into the medium risk category. This is why you must find out the characteristics of the investment you are contemplating.
Debt securities

Debt securities are securities through which a borrower, which may be a government or a corporation, acknowledges having a debt toward the security holder. When you invest in a debt security, you are lending your money in exchange for interest. You do not become an owner in the company but a creditor. For example, if you invest $1,000 in a GIC, you are lending your money to a financial institution. The latter is therefore the borrower and you are the creditor.

Guaranteed Investment Certificates (GICs) and term deposits
These are debt securities issued by a financial institution in exchange for the money you lend them.

CONVENTIONAL GIC
- Term: 30 days to 10 years.
- Return: Fixed rate of interest at maturity.
- Liquidity: Most GICs must be held until maturity, but some may allow early redemption. A fee may apply in such a case.
- Risk: Generally guaranteed by the issuer. The principal may be insured by a deposit insurance agency in the event the issuer goes bankrupt (some restrictions apply).(1)

INDEX-LINKED GIC
- Return: Varies based on the performance of an index such as a stock market index.
- Risk: The amount invested may be guaranteed by the issuer.

There are other savings products on the market that are very similar to conventional GICs, but with interest that increases each year. For instance, you would earn 0.5% in year 1, 1% in year 2 and 5% in year 3. These vehicles have different names such as step-up bonds and climbing-rate term deposits.

(1) For example, non-redeemable GICs with a term of more than five years are not covered by deposit insurance. See the AMF website for the brochure entitled Your deposits are protected. That's a guarantee!
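The climbing-rate products described above compound a different rate each year. A minimal sketch using the 0.5%/1%/5% example from the text (the function name is hypothetical, and annual compounding is assumed):

```python
def stepup_value(principal, yearly_rates):
    """Value of a climbing-rate deposit that compounds once a year,
    applying each year's rate in turn."""
    value = principal
    for rate in yearly_rates:
        value *= 1 + rate
    return value

# $1,000 earning 0.5%, then 1%, then 5% grows to about $1,065.80.
print(round(stepup_value(1_000, [0.005, 0.01, 0.05]), 2))
```

Note that the average of the three advertised rates overstates nothing here: most of the growth arrives in the final year, which is why these products reward holding to maturity.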
Guaranteed investments

Before placing your money in a guaranteed investment, ask yourself the following questions:
- Who is the guarantor? The Autorité des marchés financiers (deposit insurance), the Canada Deposit Insurance Corporation or a financial institution? An investment is never guaranteed by the individual who sells it but by an institution.
- What exactly is the guarantee? Is it the return, the principal or both?
- What are the conditions of the guarantee? Must the investment be held for 10 years for the guarantee to apply? Or is it 20 years?
- Are there exclusions to this guarantee?
- How much does this guarantee cost?
Bonds and debentures

Bonds and debentures are debt securities issued by governments and corporations in exchange for the money you lend them. Debentures are similar to bonds, but are not backed by specific assets (land, buildings, machinery, etc.). The issuer typically promises to pay a fixed interest rate to the buyer at certain intervals and pay back a predetermined amount at maturity, usually the face value, which is often a multiple of $1,000.

- Term: Typically from one year to 30 years.
- Return: Takes the form of a capital gain (loss) when sold, or interest. Depends on interest rates and the issuer's creditworthiness.(1) If the bond or debenture is held until maturity, the buyer will receive the return stipulated at the time of purchase.
- Liquidity: Available through dealers. Liquidity can decrease if interest rates rise or the issuer experiences financial difficulties.
- Risk: The longer the maturity, the greater the risk of fluctuation in the bond's value due to real or anticipated changes in interest rates or the issuer's financial position. Holders have a right to a portion of the company's remaining assets if it is dissolved (they rank ahead of shareholders).

(1) Independent agencies rate the quality of certain debt securities.
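The promise described above, fixed interest at intervals plus the face value back at maturity, can be sketched as a list of cash flows. This is an illustration with assumed names, and it assumes one coupon per year:

```python
def bond_cash_flows(face_value, coupon_rate, years):
    """Annual coupon payments, with the face value repaid alongside the
    final coupon at maturity."""
    flows = [face_value * coupon_rate for _ in range(years)]
    flows[-1] += face_value
    return flows

# A $1,000 bond paying 4% a year for 3 years: $40 coupons, then $1,040.
print(bond_cash_flows(1_000, 0.04, 3))
```

Laying the payments out this way makes it clear why bond prices move with interest rates: the cash flows are fixed, so only the price paid for them can adjust.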
Principal-protected notes (PPNs)

Debt securities issued by financial institutions in exchange for the money you lend them. This type of investment does not necessarily have a fixed interest rate. The return is usually variable and is tied to a benchmark portfolio.*

- Term: Typically between 5 and 10 years.
- Return: Takes the form of a capital gain (loss) if sold before maturity, or interest. The return may be tied to the performance of a benchmark portfolio. Some PPNs guarantee a rate of return for certain years, e.g. the first year only. In some cases, the issuer may limit the return on a note or redeem it before maturity.
- Liquidity: There is no real secondary market for PPNs; therefore, it may be impossible to resell them to other investors.
- Risk: The principal is usually guaranteed by a financial institution. However, the guarantee does not apply if the note is redeemed before maturity; therefore, investors may not get all their money back. Since the return is tied to a benchmark, there is a risk that the interest paid will be less than expected or that there will be no interest payments at all.

* Benchmark portfolio: a portfolio made up of stocks, stock market indexes or currencies whose fluctuations are used to determine the value and return of the PPN. Refer to the Short Investment Glossary on the AMF's website.
Equity securities

When you buy equity securities (stocks), you become a part owner of the business.

Common stock
Shares issued by corporations. The investor has an ownership interest in the issuing company, which usually comes with the right to vote on certain decisions. Refer to the AMF's brochure Shareholders' meetings, it's your business!, available on the AMF's website.

- Term: None.
- Return: Takes the form of dividends and capital gains (losses).
- Liquidity: Normally traded on a stock exchange or in over-the-counter markets (where unlisted stocks are traded between dealers).
- Risk: Share price may increase or decrease substantially. If the company is dissolved, shareholders have the right to a portion of the remaining assets if any are left after all the creditors have been repaid, including governments and holders of debt securities such as bonds and debentures. Holders of preferred shares also rank ahead of common shareholders.
Preferred stock
Shares issued by corporations. The investor has an ownership interest in the issuing company. The corporation must pay dividends to preferred shareholders before doing so to common shareholders.

- Term: Most carry no term, but some are redeemable at the issuer's discretion.
- Return: Takes the form of dividends (cumulative or not) and capital gains (losses). Some corporations offer dividends that are adjusted periodically based on interest rates, for example every five years. Share value depends on the interest rate demanded by other investors on similar shares. The value will decrease if rumours surface that dividends will be less than expected. Because they have different characteristics, the value of preferred shares will be affected differently. Read the issuer's prospectus to understand details about these shares and ask a representative for help if necessary.
- Liquidity: Typically traded on stock exchanges or in over-the-counter markets.
- Risk: The Board of Directors may decide to suspend dividends for certain periods in the event, for example, of financial difficulties. In such a case, not only would investors not receive dividends, but also the value of their stock may fall sharply. Rising interest rates can also push down the value of preferred stock. If the company is dissolved, preferred shareholders rank ahead of common shareholders but only if money is available after all the creditors have been repaid, including governments and holders of debt securities such as bonds and debentures.
Investment fund securities

These securities give the investor ownership in a fund.

Mutual funds
A mutual fund is made up of money pooled by a number of investors and managed on their behalf by a manager, who selects different types of securities based on the fund's objectives. There are many types of mutual funds, including money market, fixed income, balanced, equities and international.

- Term: None.
- Return: Takes the form of dividends, interest and capital gains (losses) realized by the fund or when investors sell their units.
- Liquidity: Can usually be sold back to the mutual fund. Some funds have a redemption fee, which is usually charged when investors sell their units during the first five to seven years.
- Risk: Depends on the mutual fund's investments (e.g. bonds, stocks). These securities are not guaranteed.
Exchange-traded funds (ETFs)
Exchange-traded funds are traded like shares on a stock exchange. They usually track a benchmark index. Unlike a mutual fund, an ETF portfolio manager does not seek to maximize the fund's return but only to follow an index; this explains their typically lower management fees. The strategy of some ETFs is to use a leverage effect, meaning that their return will be, for example, twice the positive or negative daily return of a stock index. See the following page for more details.

NOTE: ETF benchmarks are not necessarily linked to a stock portfolio; they can also be linked to a bond, derivative or other portfolio.

- Term: None.
- Return: Takes the form of dividends, interest and capital gains (losses) realized when investors sell their units.
- Liquidity: Traded on stock exchanges.
- Risk: Depends on the volatility of the index tracked by the fund. For example, an emerging market ETF could be riskier than an ETF that tracks the index of the largest companies listed on the stock exchange of an industrialized country.
Leveraged ETFs (suitable for sophisticated investors)
These ETFs magnify upward and downward stock market movements. Some aim to deliver a multiple of the benchmark's daily return. For instance, they can offer double or triple the daily return of an index (be it a gain or a loss).

Example: Josie invests $10,000 in a leveraged ETF that offers double the return of a stock index. Let's assume that one of the fund's stocks costs $25. Josie buys 400 shares. At the end of the day, the index goes up 1%. Josie's stock therefore goes up 2% and is now worth $25.50. Since she owns 400 shares, Josie makes a profit of $200.

CAREFUL! These funds do not reproduce the return multiple over the long term, but just the daily benchmark return.
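Josie's example can be checked with a line of arithmetic. A sketch of the one-day profit on a leveraged ETF (illustrative only; the function name is my own, and it ignores fees and the long-term compounding caveat noted above):

```python
def leveraged_daily_profit(investment, index_daily_return, multiple=2):
    """One day's profit on an ETF delivering `multiple` x the index's
    daily move. Fees and long-term compounding effects are ignored."""
    return investment * index_daily_return * multiple

print(leveraged_daily_profit(10_000, 0.01))   # 200.0 -- Josie's $200 gain
print(leveraged_daily_profit(10_000, -0.01))  # -200.0 -- the same move down
```

The symmetry is the point of the CAREFUL note: the same leverage that doubles a 1% gain also doubles a 1% loss, and daily resets mean the multiple does not hold over longer periods.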
Segregated funds
Segregated funds are issued by insurance companies. They are similar to mutual funds, but typically include a death benefit and a maturity guarantee. The fund's assets are held by an insurer separately from its other assets, hence the term "segregated" funds.

- Term: To benefit from the maturity guarantee, the investment must often be held for 10 or 20 years. If the investor is prepared to forego the guarantee, the investment is redeemable at any time.
- Return: Takes the form of dividends, interest and capital gains (losses) realized by the fund or when investors sell their holdings.
- Liquidity: Can generally be sold back at any time to the fund. Some funds charge a redemption fee, usually if the redemption takes place in the first five to seven years. However, as a general rule, the investment must be held for 10 years to benefit from the maturity guarantee.
- Risk: Depends on the fund's investments. Individual segregated fund contracts offer a guarantee that protects, at maturity (often 10 years), at least 75% of the amount invested. Insurers also typically offer a death benefit guarantee.

What is a death benefit guarantee? If you die before the contract matures and the value of your funds is less than the initial investment, the difference will be reimbursed. For amounts invested after a certain age, for example 80, the percentage of the principal guarantee may be less.
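The 75% maturity guarantee described above acts as a floor on what the holder receives at maturity. A minimal sketch, assuming the contract is held to maturity so the guarantee applies in full (the function name is my own):

```python
def maturity_payout(amount_invested, market_value, guarantee=0.75):
    """At maturity the holder receives the fund's market value, but never
    less than `guarantee` times the amount originally invested."""
    return max(market_value, amount_invested * guarantee)

print(maturity_payout(10_000, 6_000))   # 7500.0 -- the guarantee kicks in
print(maturity_payout(10_000, 12_000))  # 12000.0 -- market value prevails
```

The floor only matters when the fund has lost more than a quarter of its value, which is part of why these contracts cost more than comparable mutual funds.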
For more information about investments, refer to the Short Investment Glossary, available on the AMF's website.

Labour-sponsored investment funds and other similar funds
Shares issued by a labour organization or a financial institution. Investors buy shares of the fund, which may entitle them to tax benefits. One of the goals of such funds is to create and maintain jobs.

- Term: None.
- Return: Mainly in the form of capital gains (losses). Depends on the performance of the assets in the fund. Investors may also benefit from tax advantages that increase their return.
- Liquidity: Labour-sponsored fund shares are redeemable only at retirement or early retirement as of age 55, subject to certain conditions. They may also be redeemed in exceptional circumstances such as the purchase of a home, return to school, loss of employment, business start-up, disability or terminal illness. A regional development fund is also available on the market whose shares are redeemable after seven years or earlier in the event of death, disability or terminal illness. The redemption criteria vary from one fund to the next.
- Risk: These funds invest a proportion of their assets in start-ups or SMEs, which may increase the risk.
Socially-responsible investing

Socially-responsible investing means taking factors such as the environment, human rights and other social or moral factors into account when selecting investments. It is a strategy that can take various forms. For example, you may decide to avoid the bonds and stocks of companies whose products and services you believe adversely affect society (for instance, military equipment or tobacco). You may also decide to only invest in what you believe to be socially-responsible companies such as those that develop renewable energy. Many shareholders urge companies to improve their environmental or social performance by tabling proposals at shareholder meetings. There are many socially-responsible and ethical mutual funds on the market. Read their prospectuses to find out whether their criteria reflect your values.
Elsewhere: You can also visit the AMF website at lautorite.qc.ca and the youth website tesaffaires.com.
Customer Investment Profile
Customer Name: Account Number: Contact Number: The purpose of this investment profile form is for us to better understand your financial means, investment experience, investment objectives and general
Understanding (and choosing between)
Tangerine guides to personal finance Understanding (and choosing between) RSPs & TFSAs A guide for Canadians While RSPs have been around for a long time, TFSAs are relatively new and not always well understood
TAX-FREE SAVINGS ACCOUNT (TFSA)
TAX-FREE SAVINGS ACCOUNT (TFSA) DO YOURSELF A FAVOUR! Like paying less tax? Don t we all! And just think of what you could do with the extra money: Travel the world Take time off work Retire early Spoil
Saving for Retirement. Your guide to getting on track.
Saving for Retirement Your guide to getting on track. 2 It s great that you re looking ahead and thinking about retirement now. A sound plan can make all the difference in reaching your future goals. This
Newcomer Finances Toolkit. Investments. Worksheets
Newcomer Finances Toolkit Investments
Determining your investment mix
Determining your investment mix Ten minutes from now, you could know your investment mix. And if your goal is to choose investment options that you can be comfortable with, this is an important step. The
Investments GUIDE TO FUND RISKS
Investments GUIDE TO FUND RISKS CONTENTS Making sense of risk 3 General risks 5 Fund specific risks 6 Useful definitions 9 2 MAKING SENSE OF RISK Understanding all the risks involved when selecting an
INVESTING FOR LIFE S GOALS
TIAA-CREF LIFE GOALS SERIES INVESTING FOR LIFE S GOALS SAVING FOR MAJOR PURCHASES AND OBJECTIVES TIAA-CREF: FINANCIAL SERVICES FOR THE GREATER GOOD OUR COMMITMENT TIAA-CREF is dedicated to serving the
Investing in Bonds - An Introduction
Investing in Bonds - An Introduction By: Scott A. Bishop, CPA, CFP, and Director of Financial Planning What are bonds? Bonds, sometimes called debt instruments or fixed-income securities, are essentially
INVESTING IN MORTGAGE FUNDS?
INVESTING IN MORTGAGE FUNDS? Independent guide for investors about unlisted mortgage funds Mortgage funds can also be called mortgage trusts or mortgage schemes. About ASIC The Australian Securities and
INVESTMENT TERM GLOSSARY
A Accrued Interest - Interest that has been earned but not yet credited to a bond or other fixed-income investment, such as a certificate of deposit. Active Management The use of professional?...
ANNUITIES VARIABLE. MetLife Retirement Perspectives. asset allocation questionnaire
LINE BAN ANNUITIES VARIABLE MetLife Retirement Perspectives asset allocation questionnaire Asset Allocation Questionnaire The following questions will enable you to determine your time horizon and risk
INVESTING IN DEBENTURES?
INVESTING IN DEBENTURES? Independent guide for investors reading a prospectus for unlisted debentures This guide is for you, whether you re an experienced investor or just starting out. About AS
Guide to mutual fund investing. Start with the basics
Guide to mutual fund investing Start with the basics Pursue your financial goals Why do you invest? For a rainy day? A secure retirement? Funding a college tuition? Having a specific goal in mind will
New Client Package Prepared for
Table of Contents Personal Information Questionnaire What Do You Want To Do Your Investment Profile Planning Strategies Checklist New Client Package Prepared for Notes To File Fact Finder Date Completed
Slide 2. What is Investing?
Slide 1 Investments Investment choices can be overwhelming if you don t do your homework. There s the potential for significant gain, but also the potential for significant loss. In this module, you ll
EMPIRE CLASS SEGREGATED FUNDS INFORMATION FOLDER AND POLICY PROVISIONS THE EMPIRE LIFE INSURANCE COMPANY
THE EMPIRE LIFE INSURANCE COMPANY EMPIRE CLASS SEGREGATED FUNDS INFORMATION FOLDER AND POLICY PROVISIONS This document contains the information folder and the contract provisions for the Empire Class Segregated
Determining your investment mix.
Determining your investment mix. Ten minutes from now, you could know your investment mix: And if your goal is to choose investment options that you can be comfortable with, this is an important step.
Investments. Introduction. Learning Objectives
Investments Introduction Investments Learning Objectives Lesson 1 Investment Alternatives: Making it on the Street Wall Street! Compare and contrast investment alternatives, such as stocks, bonds, mutual
Simplified Prospectus
Simplified Prospectus April 3, 2014 BMO Security Funds BMO Money Market Fund (series A, F, I, Advisor Series and Premium Series) BMO Income Funds BMO Bond Fund (series A, F, D, I, NBA, NBF and Advisor
HSBC Mutual Funds. Simplified Prospectus June 15, 2016
HSBC Mutual Funds Simplified Prospectus June 15, 2016 Offering Investor Series, Advisor Series, Premium Series, Manager Series and Institutional Series units of the following Funds: Cash and Money Basics and Your Retirement
Christian Financial Credit Union Roberto Rizza, CRPC Financial Advisor CUSO Financial Services, LP 18441 Utica Road Roseville, MI 48066 586-445-3651 rrizza@cfcumail.org Investing
Investment Policy Questionnaire
Investment Policy Questionnaire Name: Date: Ferguson Investment Services, PLLC Investment Policy Questionnaire Introduction: The information you provide on this questionnaire will remain confidential.
Investing in unlisted property schemes?
Investing in unlisted property schemes? Independent guide for investors about unlisted property schemes This guide is for you, whether you re an experienced investor or just starting out. Key tips from
Mutual Funds Basic Information About Mutual Funds
Saskatchewan Securities Commission Mutual Funds Basic Information About Mutual Funds The Saskatchewan Securities Commission regulates how securities are sold. Securities are investments such as shares,
401(k) Plans Life Advice
401(k) Plans Life Advice A retirement tool provided through your employer Many Americans today are living longer, healthier lives, which could mean your finances may need to accommodate extra years of
plaintalk about life insurance
plaintalk about life insurance The right life insurance protection can have an enormous effect on your life and the lives of those you love. It can mean the difference between leaving your loved ones well
FINANCIAL SERVICES BOARD COLLECTIVE INVESTMENT SCHEMES
FINANCIAL SERVICES BOARD COLLECTIVE INVESTMENT SCHEMES INTRODUCTION This booklet will provide you with information on the importance of understanding ways in which Collective Investment Schemes ( CIS )
plain talk about life insurance
plain talk about life insurance The right life insurance can have an enormous effect on your life and the lives of those you love. It can mean the difference between leaving your loved ones well positioned
EXPLORE. Investment Planning Planning for Financial Security SAVING : INVESTING : PLANNING
EXPLORE Investment Planning Planning for Financial Security SAVING : INVESTING : PLANNING About this seminar Presentation > Provides comprehensive education > Includes action steps > Provides opportunity
INVESTING EFFECTIVELY TO HELP MEET YOUR GOALS. MUTUAL FUNDS
{ } INVESTING EFFECTIVELY TO HELP MEET YOUR GOALS. MUTUAL FUNDS 1 MUTUAL FUNDS: STRENGTH IN NUMBERS You like to think about retirement; that time when you will be able to relax and enjoy life the way
Conservative Investment Strategies and Financial Instruments Last update: May 14, 2014
Summary Conservative Investment Strategies and Financial Instruments Last update: May 14, 2014 Most retirees should hold a significant portion in many cases, 100% of their savings in conservative financial
|
https://docplayer.net/6271116-Lautorite-qc-ca-comment-choisir-choosing-investments-avec-qui-investir.html
|
CC-MAIN-2020-16
|
refinedweb
| 7,919
| 55.44
|
Object Identity. Every object has an identity, which can be compared with the is operator; the == operator compares values instead. For example:
# Compare two objects
def compare(a, b):
    print 'The identity of a is', id(a)
    print 'The identity of b is', id(b)
    if a is b:
        print 'a and b are the same object'
    if a == b:
        print 'a and b have the same value'
    if type(a) is type(b):
        print 'a and b have the same type'
The type of an object is itself an object. This type object is the same for all instances of a given type, so it can be compared using the is operator. For example:
if type(s) is list:
    print 'Is a list'
if type(f) is file:
    print 'Is a file'
However, some type names are only available in the types module. For example:
import types

if type(s) is types.NoneType:
    print "is None"
Because types can be specialized by defining classes, a better way to check types is to use the built-in isinstance(object, type) function. For example:
if isinstance(s, list):
    print 'Is a list'
if isinstance(f, file):
    print 'Is a file'
if isinstance(n, types.NoneType):
    print "is None"
The isinstance() function also works with user-defined classes. Therefore, it is a generic, and preferred, way to check the type of any Python object.
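As a quick illustration of why isinstance() is preferred with user-defined classes (a hypothetical Animal/Dog hierarchy, written in Python 3 syntax rather than the article's Python 2):

```python
# isinstance() respects inheritance; an exact type() comparison does not.
class Animal:
    pass

class Dog(Animal):
    pass

d = Dog()
print(isinstance(d, Dog))     # True
print(isinstance(d, Animal))  # True: Dog is a subclass of Animal
print(type(d) is Animal)      # False: type() only matches the exact class
```

So code that checks type(d) is Animal would wrongly reject every subclass instance, while isinstance(d, Animal) accepts them.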
|
http://www.informit.com/articles/article.aspx?p=447207&seqNum=2
|
CC-MAIN-2018-39
|
refinedweb
| 194
| 63.12
|
Life Cycle of Threads
Life Cycle of A Thread
When you are programming with threads, understanding the life cycle of a thread is very valuable. While a thread is alive, it is always in one of several states: New, Runnable, Running, Blocked/Waiting, or Dead.
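These life-cycle stages can be observed in any threading API. As a minimal sketch (shown here with Python's standard threading module rather than Java's Thread class):

```python
import threading
import time

# A thread object is created (New), started (Runnable/Running),
# and finally joined (Dead).
t = threading.Thread(target=time.sleep, args=(0.2,))
print(t.is_alive())  # False: created but not yet started (New)
t.start()
print(t.is_alive())  # True: alive (Runnable/Running)
t.join()             # block until the thread dies
print(t.is_alive())  # False: the thread has terminated (Dead)
```

A thread that has terminated cannot be restarted; a fresh thread object must be created for each run.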
Related questions and tutorials linked from this page:
- Coding for life cycle in threads
- Life cycle of Servlet (the servlet life cycle can be categorized into four parts)
- JSP life cycle (7 phases; after translation a JSP works like a servlet)
- Bean life cycle in Spring (how to initialize a bean and retrieve its values)
- Thread in Java (a thread is a lightweight process that exists within a program and is executed to perform a special task)
- EJB life cycle methods (each type of enterprise bean has a different life cycle, including message-driven beans)
- Life Cycle of a JSP Page (jspInit and related methods)
- System Development Life Cycle (SDLC), including the Waterfall / Linear Sequential model
- Session Bean Life Cycle (stateful and stateless session beans)
- Five disciplines in the Iterative Life Cycle (Requirements, Analysis and Design, and others)
- Threads in Java (multitasking; stopping or suspending a specific thread)
- JSF Life Cycle (the JSF life cycle runs for every request and response)
- Java Thread (thread priorities and a pictorial representation of the thread life cycle, starting from the New state)
- Writing a Java program to demonstrate the complete life cycle of a servlet
- CMP Entity Bean Life Cycle (life starts when the container creates the instance using newInstance)
- Product Life Cycle Diagram (the four marketing stages of a product)
- SCJP Module-8 Question-4 (a Test1 extends Thread code sample)
- Invalidation cycle (Flex imposes deferred validation on the Flash API; implement IInvalidating or extend UIComponent)
- Thread Priorities (the thread scheduler uses priorities; a new thread inherits its priority from the creating thread)
- Runnable interface vs. extending the Thread class
- Count Active Thread in Java (Thread.activeCount() counts threads in the current thread group)
- Java Thread Context (setting a thread's classloader with Thread.setContextClassLoader())
- Thread getPriority() / setPriority() examples
- Inter-thread communication (wait() and notify() for coordinating threads around a critical section)
- Why threads block on I/O (a suspended thread lets other threads execute)
- WAS thread hanging (transaction lifetime vs. thread lifetime)
- Daemon threads (service providers for other threads, such as the garbage-collection thread)
- Java Thread methods (activeCount(), run(), isAlive(), join(), sleep())
- throw and throws keywords
- Thread scheduling (Java uses fixed-priority scheduling to decide which runnable thread executes)
- Creating a multithreaded application (an AccountManager example transferring between accounts with two threads)
- Main thread and child threads (the main thread is created automatically when the program runs; child threads are created by it)
- Disadvantages of threads (not all Java libraries are thread-safe)
- Reading a thread dump (locked/waiting states)
- Synchronized threads and Thread Synchronization in Java (synchronized ensures only one thread is in a critical section at a time)
- Shutting down threads cleanly (why stopping a thread with stop() is dangerous)
- Creation of multiple threads (multiple threads created alongside the "main" thread)
- Green threads (simulated threads within the VM, used before the native OS threading model arrived in Java 1.2)
- Exception in thread (running a mail-sending class that uses mail.jar)
- Thread questions (which areas of memory separate threads share)
|
http://www.roseindia.net/tutorialhelp/comment/43225
|
CC-MAIN-2013-20
|
refinedweb
| 2,659
| 65.83
|
ToGroup
From PyMOLWiki
Overview
toGroup will convert a multistate object into a group of single-state objects. Be warned, by default it deletes your original object (since it's extracting a copy).
PyMOL does a great job at handling multistate objects and grouping them together. One thing that I found myself doing over and over again was
- loading a multistate object (say a PDBQT file with 100 ligand poses)
- splitting that object into all 100 states, with some given prefix
- then grouping them into their own group
- and then finally removing the original.
This became tedious, so I automated that with this script.
Examples
# A multistate object (20 NMR states)
fetch 1nmr

# Create the group called "nmrEnsemble" from '1nmr'
# and name all the new states state1, state2, state3, etc.
toGroup nmrEnsemble, 1nmr, prefix=state
The Code
import pymol
from pymol import cmd

def toGroup(groupName, sel, prefix="", delOrig=True):
    """
    DESCRIPTION
        toGroup will take a multistate object and extract it to a group
        with N objects all in state #1. It essentially performs the following:
            split_states myObj, prefix=somePrefix
            group newGroup, somePrefix*
            delete myObj

    PARAMETERS
        groupName (string)
            The name of the group to create
        sel (string)
            The name of the selection/object from which to make the group
        prefix (string)
            The prefix of the names of each of the split states. For example,
            if your prefix is 'obj' and the object is in states 1 through 100,
            then the states will be labeled obj1, obj2, obj3, ..., obj100.
        delOrig (string/boolean)
            If true then delete the original selection, otherwise not.

    RETURN
        Nothing, it makes a new group.
    """
    if prefix == "":
        prefix = "grouped"
    cmd.split_states(sel, prefix=prefix)
    cmd.group(groupName, prefix + "*")
    if delOrig:
        cmd.delete(sel)

cmd.extend("toGroup", toGroup)
See Also
group, saveGroup, select, split_states, delete, extend.
|
https://pymolwiki.org/index.php/ToGroup
|
CC-MAIN-2017-04
|
refinedweb
| 293
| 61.56
|
I am not sure how to deal with sdc file.
Can I export into file geodatabase?
I think the bottleneck I have is about transforming the sdc file at the moment.
What program do you intend to use as a PostGIS client to work with the data?
A shapefile may or may not be necessary.
I believe it would go faster if you import to a personal or file geodatabase first.
Also, when you are selecting and exporting pause the drawing.
The length of time required is directly related to the size of the data file and the power of your computer.
Close all programs on your computer and have ONLY the program you need running.
I would personally try to use the Split tool - use the 40 million as the input and your states polygon as the split feature. From memory, this outputs a new feature class (or shapefile) for each split field (in your case the state name). I think this is an ArcInfo level tool though. You can then use each shapefile import into PostGIS.
Alternatively I would use python to iterate through each state feature, select all those that intersect and then write the output to a shapefile.
The idea of using python to iterate is really intriguing. But what would the code look like?
Currently I am manually paste into the python window the code of arcpy.selectionmanagement...and then use arcpy to export the states, but I haven't try a loop or anything that I can use to get the data.
Would you like to give me a sense what the code will be like?
thanks!!!!!
Maybe take your logic to do the selection and export that you've used in the python window and put it into a search cursor. For example, the following would iterate through each state in a feature class:

import arcpy

fc = r"C:\temp.gdb\states"
with arcpy.da.SearchCursor(fc, "State") as cursor:
    for row in cursor:
        # your code here
        pass
See for more information on the search cursor.
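The split-by-attribute logic the answers describe (iterate records, group them by state, write one output per group) can be sketched without ArcGIS. A plain-Python analogy on CSV data, with hypothetical records and field names standing in for the feature class:

```python
import csv
import io
from collections import defaultdict

# Hypothetical attribute rows exported from the large feature class
rows = [
    {"State": "WA", "id": "1"},
    {"State": "OR", "id": "2"},
    {"State": "WA", "id": "3"},
]

# Group the records by the State field (the Split tool / search-cursor idea)
by_state = defaultdict(list)
for row in rows:
    by_state[row["State"]].append(row)

# Write one CSV per state; in ArcGIS this would be one shapefile per state
outputs = {}
for state, feats in by_state.items():
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["State", "id"])
    writer.writeheader()
    writer.writerows(feats)
    outputs[state] = buf.getvalue()

print(sorted(outputs))  # ['OR', 'WA']
```

Each per-state output could then be imported into PostGIS independently, which keeps any single import small.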
|
https://community.esri.com/t5/geoprocessing-questions/how-to-export-the-40-million-records-in-a-sdc-file/td-p/626730
|
CC-MAIN-2022-33
|
refinedweb
| 365
| 74.39
|
Thank U for providing this content
Applet questions and tutorials:
- Applet — Write a Java applet that draws a line between two points. The coordinates of the two points should be passed as parameters from the HTML file, and the color of the line should be red.
- applet problem — How can I create a file on the client side from a Java applet? It will surely need a signed applet, but how can a signed applet create a file on the client side?
- applet servlet communication — calling an applet from an HTML file; creating a project in Eclipse and writing the applet code in a Java file under src so changes are reflected without copying into WebRoot.
- problem with applet plugin — using the Eclipse IDE, creating an applet (com.ezsoft.applets.Upload.class, saved under src) and plugging it into a web page.
- Applet issue — Can there be any problem in writing a file to the temp directory from a URL using an applet? If there are security problems, please post them.
- Creating a log in a text file — an applet that supports 4 different languages and must also be runnable as an application, so it is added onto a frame; a text-file log needs to be created.
- java — a browse button on an applet form that opens a file-selection box and displays the chosen file path on the form.
- unable to see the output of applet — a reader following the applet tutorial could not see the output.
- Applet Tag Parameters (Applet Tag in HTML) — parameters are to applets what command-line arguments are to applications; they are stored in PARAM tags and retrieved within the applet to control its operation.
- The Java Applet Viewer — appletviewer is a command-line program to run Java applets; to view an applet in a browser, the browser must be Java-enabled.
- Applet — Passing Parameters in Java Applets — a Java applet can read parameters from its HTML page, displaying "Hello! Java Applet" if no parameter is supplied.
- Re: applet — how to run an applet using AspectJ or AspectWerkz by adding the file to the archive tag; this covers compile-time weaving, but how can the applet be run with load-time weaving?
- java applet run time error
java applet run time error Hi,
Im new to java applet.please help me. i have create a MPEG movie player in applet. when i run that program...
{
Player player = null;
/*String location="";
MediaLocator
problem of writing to a local file ( JApplet ) - Applet
file into the applet code it is not working, means when i click the Submit button...problem of writing to a local file ( JApplet ) Dear All,
I want to program a guestbook using java applets but now I have
problem of writing
Applet
Applet Write an applet to display a string in an applet. String should be passed as a parameter to an applet
Problem in show card in applet.
Problem in show card in applet. The following link contained the card demo with applet.... On Run as Java Applet then only show the Applet, not show any one card,hence any
core java - Applet
the applet with html file:
Java Applet Demo
Put your html file...core java how can draw a single line with mouse in applet.
please help me Hi Friend,
Create an applet 'SimpleDrawApplet.java
Applet
Applet Give the class hierarchy of an Applet class
applet
applet Explain different stages in the lifecycle of an applet with figure.
Stages of Applet:
Life cycle of an Applet:
init(): This method is called to initialize an applet
start(): This method is called after
Applet
Applet how to run an applet on a web browser
applet
applet What is the immediate superclass of the Applet class
Applet in Eclipse - Running Applet In Eclipse
in
Eclipse 3.0. An applet is a little Java program that runs inside a Web...->New->Project... from the menu bar to begin creating your Java applet... from the menu bar.
Step 7: Create java class file under
Applet
Applet Write a Java applet that sets blue color foreground and yellow color background at the start of an applet
servlet code - Applet
with the html file.
Java Applet Demo
Thanks...servlet code how to communicate between applet and servlet ... from the servlet to applet.
Here is the code of 'ServletExample.java
Applet
Applet Explain the start() and stop() methods of applet life cycle.
start() and stop() methods of the Applet Life Cycle
start() method: The start method of an applet is called after the initialization method init
core java - Applet
core java Namaste sir , how can draw a line in Applet. I want when...(MouseEvent evt) { }
}
Then call this applet with html file 'applet.html'.
Draw...; Hi Friend,
Create an applet 'SimpleDrawLine.java':
import java.awt.
Applet
Applet Write a short note on applet life cycle
file handling through java - Pushpendra Singh Bais, January 16, 2012 at 12:53 PM
Thank U for providing this content
http://www.roseindia.net/discussion/20612-Java---Read-file-Applet.html
Inheritance Problem - nmgo, Jan 13, 2009 1:31 PM
Hi,
I'm currently evaluating Envers and I'm getting a strange problem ...
I have an example with a very simple domain:
@Entity
@Inheritance(strategy=InheritanceType.SINGLE_TABLE)
@Audited
public abstract class Test {
    @Id @GeneratedValue
    private long id;
    private Integer a;

    public Test() { }

    public void setId(long id) { this.id = id; }
    public long getId() { return id; }
    public void setA(Integer a) { this.a = a; }
    public Integer getA() { return a; }
}

@Entity
@Audited
public class TestExtend extends Test {
    private String b;

    public TestExtend() { }

    public void setB(String b) { this.b = b; }
    public String getB() { return b; }
}
When I use Hibernate 3.3.1 with envers 1.1.0 for Hibernate 3.3 it all goes well.
If I upgrade envers to 3.4 preview version, all tables are correctly created but when I create a new TestExtend object, it's persisted but not versioned, including other non related entities.
Does anyone have an idea on what might be the problem?
Thanks!!
1. Re: Inheritance Problem - adamw, Jan 14, 2009 4:18 PM (in response to nmgo)
Hello,
do you also use Hibernate-3.4.0-snapshot? Or Hibernate 3.3?
--
Adam
2. Re: Inheritance Problem - nmgo, Jan 15, 2009 4:29 AM (in response to nmgo)
Hello,
I'm using Hibernate 3.3.
I've read in your blog that it still works.
Thanks,
Nuno Ochoa
3. Re: Inheritance Problem - adamw, Jan 15, 2009 12:19 PM (in response to nmgo)
Yes, but I think I could have been wrong on that - sorry. There is a small bug fixed in Hibernate which affects Envers. Please try Hibernate 3.4-snapshot or Envers 1.1.0.ga.
--
Adam
4. Re: Inheritance Problem - nmgo, Jan 16, 2009 10:49 AM (in response to nmgo)
Hi,
Ok, I've upgraded to Hibernate 3.4-snapshot and it worked ...
Thanks for your help,
Nuno Ochoa
5. Re: Inheritance Problem - awhitford, Jan 26, 2009 1:51 AM (in response to nmgo)
How stable is hibernate-core-3.4-SNAPSHOT for production use? Any chance that a hib-core will be released anytime soon?
6. Re: Inheritance Problem - adamw, Jan 28, 2009 3:31 AM (in response to nmgo)
Hello,
I don't think there are any significant changes in hibernate-core-3.4-SNAPSHOT, but I'm not 100% sure. I'll ask on IRC today about this.
--
Adam
7. Re: Inheritance Problem - adamw, Jan 28, 2009 12:29 PM (in response to nmgo)
Hello,
I've created a branch to make a Hibernate-3.3 compatible Envers release. I hope I'll be able to make it in a few days.
--
Adam
https://developer.jboss.org/message/4100
Developing Python applications in Qt Creator
- Shilvz Sam
I haven't been able to figure out how to create a Qt application in Python with it so far, and online documentation about it appears to be scarce. How do I set up such a project in Qt Creator? Ideally I'm looking for a simple "Hello World" project that I can open in Qt Creator and use as a starting point to build something.
Welcome to DevNet,
As far as I know, you can't use Python with Qt Creator. However, you can use other powerful IDEs like PyCharm, or powerful text editors like emacs or vim. (In this example I'm using vim.)
Programming Qt4 ( or Qt5) with Python feels basically the same as if you were writing it in C++, as in you can use Qt's documentation and tutorials. The exception being the language and the way you organize your app.
My example doesn't show much but should give you an idea. If you use linux, do install the following packages.
sudo apt-get install python-qt4 python-qt4-dev
(I use Qt4 in this out of laziness.)
The code I use
from PyQt4 import QtGui
import sys
# you can also write it as
# from PyQt4.QtGui import QLabel

app = QtGui.QApplication(sys.argv)
window = QtGui.QMainWindow()
window.setWindowTitle("Qt Rocks!")
window.setFixedWidth(500)
window.setFixedHeight(500)
widget = QtGui.QWidget()
label = QtGui.QLabel("Hello world", window)
layout = QtGui.QVBoxLayout()
layout.addWidget(label)
widget.setLayout(layout)
window.setCentralWidget(widget)
window.show()
sys.exit(app.exec_())
I use vim with several plugins, my recommendation is to use PyCharm instead (mileage may vary)
I hope this helps you, happy coding.
- Shilvz Sam
Thank You :) But what about using PyCharm on a Raspberry Pi?
Since we are talking about python scripts you can simply write a script that:
- Pushes your scripts through SCP (SSH)
- or Use Git to pull new changes
You can just use your desktop/laptop and leave deployment/testing on the Raspberry Pi for later, as you don't need to write code on the device. In theory your code should run anywhere that Python and PyQt4/Qt4 support.
So wrapping it up: don't use your Raspberry Pi to develop, use your desktop/laptop, it should be fine.
https://forum.qt.io/topic/62620/developing-python-applications-in-qt-creator
Hi,
Using Spark, how can I join 3 pair-RDDs?
I'm able to:
So, to get a RDD joining the 3 files, I have to perform 2 joins.
Thanks :)
Greg.
How about using cogroup?
Spark's cogroup can work on 3 RDDs at once.
Below is the Scala cogroup signature I checked; it can combine this RDD with two others, other1 and other2, at the same time.
def cogroup[W1, W2](other1: RDD[(K, W1)], other2: RDD[(K, W2)]): RDD[(K, (Seq[V], Seq[W1], Seq[W2]))]
For each key k in this or other1 or other2, return a resulting RDD that contains a tuple with the list of values for that key in this, other1 and other2.
I cannot try this on Spark as I do not have a setup at the office, otherwise I would love to try it.
After cogroup, you can apply mapValues and merge the three sequences
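To make that concrete, here is a plain-Python sketch of the cogroup-then-mapValues idea (not PySpark; the cogroup3 helper and the sample datasets a, b, c are made up for illustration, with plain lists of pairs standing in for RDDs):

```python
from collections import defaultdict

def cogroup3(rdd1, rdd2, rdd3):
    """Model of Spark's three-way cogroup: for each key, collect the
    lists of values seen in each of the three datasets."""
    grouped = defaultdict(lambda: ([], [], []))
    for slot, pairs in enumerate((rdd1, rdd2, rdd3)):
        for key, value in pairs:
            grouped[key][slot].append(value)
    return dict(grouped)

# Three keyed datasets, as (key, value) pairs.
a = [("k1", 1), ("k2", 2)]
b = [("k1", "x")]
c = [("k1", True), ("k3", False)]

groups = cogroup3(a, b, c)
# The mapValues step: merge the three sequences per key.
merged = {k: vs1 + vs2 + vs3 for k, (vs1, vs2, vs3) in groups.items()}
```

Note that a key missing from one dataset simply gets an empty sequence in that slot, which is what makes outer-join-style merges possible after a single cogroup.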
Thank You.
Hello,
Thanks for your reply, this is a very interesting functionality you have pointed out!
I will have a look at this and check if it also works for complex joins (like outer joins).
Greg.
https://community.cloudera.com/t5/Support-Questions/Joining-3-pair-RDDs/td-p/30809
Sending data to Java backend - asagohan, Aug 18, 2009 10:27 PM
What is the best way to send a flex object to a java backend and then operate on it on the server? I have an array that holds a custom class. When the user clicks submit, I want to send the array to Java and then get the contents of the array manipulate the contents and write it to a file.
I have a remote Java Object that I would like to send the data to, in order to manipulate it. I don't think I need a remote object to represent the flex object though because that would mean that whatever changes I make to the object in flex continually gets sent to the backend right? I just want to send it all in one go after the user has finished editing it on the front end.
I have found XML serializers that people have made which recursively go through the object and create XML. Then it has to be deserialized on the Java side. I thought that there must be a simpler way. Is there a library or something for this?
1. Re: Sending data to Java backend - Ratsnackbar, Aug 18, 2009 10:56 PM (in response to asagohan)
Hi:
A RemoteObject would still work. Remote Method calls do not need to have continuous updates. You would simply need to trigger sending the Object to your server based on an event or button click. I do it all the time with ColdFusion and Java is not that much different.
I would suggest RemoteObjects over XML as the AMF3 protocol is much much faster.
-Joe
2. Re: Sending data to Java backend - asagohan, Aug 20, 2009 4:35 AM (in response to asagohan)
Thank you for your reply. I am still not sure if that is what I want, though, because I will have an array which holds a bunch of canvas objects, which hold a bunch of text objects.
I am not sure how I would go about getting that to the server. Do I have to create a remote object for the array of canvases, remote objects to represent each canvas and remote objects for each piece of text? Potentially creating hundreds of remote objects. When I create the array remote object, I create multiple canvas remote objects and put them in the array, then add each of my text remote objects to each of the canvases? Is that how it works?
I tried this but I keep getting an error:
(mx.rpc::Fault)#0
content = (null)
errorID = 0
faultCode = "Server.ResourceUnavailable"
faultDetail = "The expected argument types are (example.MyClass) but the supplied types were (flex.messaging.io.amf.ASObject) and converted to (null)."
faultString = "Cannot invoke method 'addItem'."
message = "faultCode:Server.ResourceUnavailable faultString:'Cannot invoke method 'addItem'.' faultDetail:'The expected argument types are (example.MyClass) but the supplied types were (flex.messaging.io.amf.ASObject) and converted to (null).'"
This is the java code:
HelloWorld.java
package example;

import java.util.ArrayList;

public class HelloWorld {
    ArrayList<MyClass> x = new ArrayList();

    public HelloWorld() { }

    public void addItem(MyClass myClass) {
        x.add(myClass);
    }

    public String getHelloWorld() {
        String";
        return concatenation;
    }
}
MyClass.java
package example;

public class MyClass { }
Flex code:
helloworld.mxml
<?xml version="1.0" encoding="utf-8"?> <mx:Application xmlns: <mx:Script> <![CDATA[ import mx.rpc.remoting.RemoteObject;) ); } private function makeABunchOfObjects():void { var myClass:RemoteObject = new RemoteObject(); myClass. <mx:Text <mx:Spacer <mx:Button </mx:Panel> </mx:Application>
3. Re: Sending data to Java backend - Ratsnackbar, Aug 20, 2009 4:46 AM (in response to asagohan)
Perhaps I am not understanding the reason for your implementation so do not take this personally. But why would you want to send a Canvas object which is part of the view to the server? If you are only wanting to send the data contained within the text objects to the server for processing what you would want to do is to create a ValueObject (TransferObject to some) and send that to the server along with perhaps any instructions you would like to send.
On the other hand (and this is just a blind guess at what you are trying) if you are wanting to send the objects so that you can implement some sort of server side compilation, then you would want to send the raw text object for the canvas as one of the parameters contained within your ValueObject. Either that or you could use Adobe AIR and zip the entire object set and send the file to the server where it can be decompressed and compiled.
Here is a blog article on transfering data between Java and Flex using RemoteClass. ata-transfer-objects-from-java-to-flex/
Hope this helps. If not then perhaps a little more information on what you are attempting to accomplish might help.
-Joe
4. Re: Sending data to Java backend - asagohan, Aug 22, 2009 2:30 AM (in response to Ratsnackbar)
Hi Joe,
Sorry, I did not explain it properly. I just need a DTO to be sent to Java, which I think I have working now. However, there is still one thing that I don't quite understand.
I have the following calls in flex my remote object:
var myClass:MyClass = new MyClass();
var myClass2:MyClass = new);
myClass.x = "blah";
helloWorldRO.addItem(myClass);
myClass2.x = "2BLAH";
helloWorldRO.addItem(myClass2);
helloWorldRO.getHelloWorld();
However, it seems like
helloWorldRO.getHelloWorld();
is getting executed before
helloWorldRO.addItem(myClass);
and
helloWorldRO.addItem(myClass);
how can this be??
Here is the Java:
package example;

import java.util.ArrayList;

public class HelloWorld {
    ArrayList<MyClass> x = new ArrayList<MyClass>();

    public HelloWorld() { }

    public void addItem(MyClass myClass) {
        x.add(myClass);
    }

    public String getHelloWorld() {
        String";
        return concatenation;
    }
}
The result I get is
"<>".
Then I get "(null)"
and the n I get "(null)"
Instead of:
"(null)"
"(null)"
"<>"
5. Re: Sending data to Java backend - Ansury, Aug 22, 2009 2:45 AM (in response to asagohan)
I saw XML mentioned and have to add: Don't even think of using XML for data transfer. I'd sooner switch careers and become a mall security guard.
Also, RemoteObject doesn't do anything magic-- it doesn't automagically sync Java objects up with what's on the Flex side. You have to call the remote methods from Flex, usually in what's called a "Delegate".
https://forums.adobe.com/thread/479431
Genetic Optimization
Does anyone know how to make the "reward" into number of profitable trades? So basically instead of like total profit, or sharpe ratio or consistency, I want to make it so that it a) is profitable (so if you lose money over the whole backtest it doesn't count), and b) has a good ratio of profitable trades to losing trades.
I'm basically working on a project for research where I make a trading system which just sort of makes a little profit here and there, rather than trading constantly.
@d416 How can I go about changing the performance measure? I'm going to start digging for it, but just thought I'd ask to maybe save some time :)
- hghhgghdf dfdf last edited by
@Wayne-Filkins-0 pass whatever you want to optimize to the maximize function. So for your case, you could multiply a binary value (profitable/unprofitable) by the percentage of profitable trades and pass that to the gen-opt function
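Following that suggestion, a minimal sketch of such a reward in plain Python (the reward helper and its list of per-trade P&Ls are hypothetical, not Backtrader API; in a real run you would derive the numbers from an analyzer such as TradeAnalyzer):

```python
def reward(trade_pnls):
    """Score a backtest by its win rate, zeroed out entirely if the
    strategy lost money overall (illustrative helper, not Backtrader API)."""
    if not trade_pnls:
        return 0.0
    wins = sum(1 for pnl in trade_pnls if pnl > 0)
    win_rate = wins / len(trade_pnls)
    # binary profitable/unprofitable flag, multiplied by the win rate
    profitable = 1 if sum(trade_pnls) > 0 else 0
    return profitable * win_rate

print(reward([10, -2, 5, -1]))  # profitable overall, 2 of 4 trades won -> 0.5
print(reward([-10, 2]))         # unprofitable overall -> 0.0
```

The value this returns is what you would pass to the maximize function in place of the account value.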
@Wayne-Filkins-0 Agree with @hghhgghdf-dfdf
Key part of the above code would be here:
return cerebro.broker.getvalue()
This is a super simple method of using cash in the account as a measure of performance, but the true BT way would be to use Analyzers
-D
@d416 In your optunity script, if you want to change the parameters to decimal numbers like 0.1 - 2.6 or something, do you just type them as decimal range and it just knows to search all decimals? Or do you have to do something else?
- rajanprabu last edited by
something like this:
import numpy as np
param = np.arange(0.1, 0.9, 0.05)
https://community.backtrader.com/topic/186/genetic-optimization/26
Resolves part of
Is there a decoding conflict which requires the use of brackets here? Something from the TODO list?
An unaligned offset is suspicious looking but technically not wrong
I'm not sure what this question means. The encoding is always printed in brackets?
I thought we codegened these already? Is this missing a codegen change to use the offsets?
Printed, yes. But disasm tests do not use brackets. These are needed in a stream of bytes when the disassembler cannot determine instruction boundaries itself, and that is usually an indication of a disassembly conflict between subtargets.
I assume we are only interested in how instructions are getting encoded here, so no need for them to look very realistic?
I too see no bracketed input bytes here, only printed ones. Could it be that you just see them on a separate line because of the larger length of the instruction?
In D125700#3517167, @arsenm wrote:
I thought we codegened these already? Is this missing a codegen change to use the offsets?
This intentionally misses any codegen changes, yes.
Would love some opinions on how important we think it would be to support that. If I'm not missing anything, addressing this would mean either further customising AMDGPUDisassembler::getInstruction() or splitting the TableGen definitions for GFX8 and GFX9 -- and not just the SMEM ones. Which seems to be a significant amount of changes.
Oh, yes. This is formatting, not really brackets on input.
Isn't it the final address that has to be aligned, not the partial offsets? I.e. this should be fine even if we wanted to enforce something in the asm.
Yes, my understanding is that it's only the resulting address that has to be aligned.
Overall looks good.
// TODO: Ignore soffset_en when disassembling GFX8 instructions.
Are there cases when an illegal GFX8 code may be decoded with soffset_en=1? The _SGPR_IMM_vi case should not work for GFX8, should it?
This addressing mode is semantically equivalent to the one where soffset is encoded in the offset field. I'm not sure we really need it. Note that sp3 has no syntactic sugar to enforce this encoding. It may be useful for the decoder, but I doubt it's worth the trouble.
Ditto.
It is a minor issue, but I noted that all your new tests use soffset=s0. It may be a good idea to avoid operands which are encoded as 0 (or add tests with other operands as well).
Yes, the final address is all that matters. But practically speaking that always means a well aligned offset (still not sure why they switched this to a byte offset)
Not sure I read the question right, but I understand, e.g., 0xc0024141 0x00012345 (with imm=1 and soffset_en=1) should decode to s_load_dword s5, s[2:3], s0 offset:0x12345 in GFX9 and to s_load_dword s5, s[2:3], 0x12345 in GFX8. With this patch in place we fail to do the latter because our GFX8 and GFX9 definitions share the same decoding namespace; the isGFX9Only predicate for the _SGPR_IMM case below makes it non-instruction in GFX8.
Yes, these are not needed for codegen needs and may only be useful for disassembling. Should we leave the TODOs be until we know for sure what to do with this?
Agreed, and so do the other tests here. I don't mind updating them all, if that's the suggestion.
So after this change disassembler will be unable to decode some legal GFX8 code, correct? I think this should be avoided. Would it be difficult to amend this patch with disassembler changes to avoid this breakage?
Let us see what other people here think. I'd have replaced TODOs with a description of limitations of the current design. But this is a matter of taste.
The file has tests with s101, m0, etc for soffset so the coverage is sufficient. I suggest to correct one new test to use e.g. s1 instead of s0.
As there is no soffset_en in GFX8, all codes with that bit raised are not what I guess you call legal GFX8 codes. I think we would never normally produce such codes for GFX8 from codegen or assembly, but as of the moment I'm not aware of any reasons to think that such codes are actually illegal or invalid. That is basically a disassembling issue again.
Updated a test case to use a register that doesn't encode to 0.
Tagging @foad @arsenm @rampitec for visibility.
Re replacing TODOs with descriptions: my only concern here would be to avoid masking what might be considered a real problem.
Done.
Note that there are still lots of other MC tests and test cases where only s0 is used at a register position. And then on a more general note, I admit I struggle a bit to see the point in testing all possible combinations of what encoding-, diagnostics- and implementation-wise seems to be completely unrelated, such as immediate/register operands and glc modifiers, if that's the right example. Feels like removing unnecessary repetitions would make uncovered cases more visible, allow more combinations that we are truly interested in and maybe somewhat reduce testing times.
You are right, this is a corner case which does not look that important. However there are third-party tools which may produce such codes, you never know. We should be able to disassemble such codes unless this requires a lot of additional work.
The tests in this file have been generated by a script with the purpose of black box testing. The script did not generate all combinations of operands and modifiers, it attempted to provide at least one test for each operand kind and each modifier value. And yes, these tests are not perfect.
When working on a feature you do not have to mimic generated tests. Add a minimal set of tests which you feel would be sufficient for good coverage.
Do we want to see it done as part of this patch? AFAIS, all the other notes are addressed.
Probably a disassembler patch may be committed separately though I'd have preferred it as a part of this change.
LGTM.
I don't understand why these cases are not supported by the disassembler with the current patch. What happens if you try to disassemble them?
It's being treated as non-instruction. Here's what I get for the example above:
; llvm-mc -filetype=obj -triple=amdgcn--amdpal -mcpu=tonga -show-encoding x.s | llvm-objdump -d --mcpu=tonga -
.text
.long 0xc0024141, 0x00012345
0000000000000000 <.text>:
.long 0xc0024141 // 000000000000: C0024141
v_cndmask_b32_e32 v0, v69, v145, vcc // 000000000004: 00012345
I'm going to look into how subtarget predicates work for decoding facilities first and then as plan B maybe try something like opcode canonicalisation.
Updated to support decoding GFX8 loads matching GFX9 encodings.
From what I see in how the TableGen's decoder backend works, it's fine to have predicated patterns that are special cases of other more generic patterns, even if the latter are themselves differently predicated. So as long as we keep our isGFX9Only instructions to be special cases of isGFX8GFX9 ones, which I think we can expect being possible, disassembling should work for GFX8 as expected. For example, replacing the soffset_en expression with let Inst{14} = !if(!and(ps.has_offset, ps.has_soffset), 1, ?); resolves the decoding issue for the GFX8 instruction mentioned above.
Will update to support the GFX9 encodings for the _SGPR loads and add tests.
Updated to support the alternative GFX9 encodings for the SGPR variants.
Done. Please take a look.
...which means SP3 sees what was previously mentioned as the second alternative SGPR encoding as the usual SGPR_IMM case, so no special handling is needed here.
Clearing the approval as this needs another look.
Now that we have IsGFX9Specific, could not this expression be replaced with '?'
Ingenious!
Cleaned up the soffset field expression.
Nice catch. Done.
The NonParsable bit, that's been borrowed from the Hexagon backend, so all the credit goes there!
LGTM, thanks!
https://reviews.llvm.org/D125700?id=431309
On Tue, Feb 27, 2007 at 10:39:50PM -0600, Sebastian P. Luque wrote:
> I choose "store in database" and rekall crashes immediately after choosing
> the type of database, no matter what it is (mysql, postgresql, the two I
> use in my system). I get these message at the terminal (narrowed to what
> I think are the relevant lines):

Here's an actual backtrace for this crash, reproducible on both i386 and amd64:

#0  0x00002aed87dd9a6d in _el_newvar (name=0x7129b0 "print") at syn.cpp:630
#1  0x00002aed87dde434 in el_yyparse () at el.y:397
#2  0x00002aed87dd9754 in el_compile (srce=<value optimized out>, dest=0x0, ifd=0x0,
    sstr=0x7112f0 "global print ; public f (page) { \n local dbtype = page.ctrl(\"dbType\") ;\nprint (dbtype.value() + \"\\n\") ;\n if (dbtype.value() == \"xbase\") return \"xbase\" ; \n return (dbtype.attr(\"fla"...,
    eout=<value optimized out>) at compile.cpp:94
#3  0x00002aed87c21135 in KBWizardPage::compile (this=<value optimized out>, name=@0x7fff23623ba0) at kb_wizardbits.cpp:978
#4  0x00002aed87c21221 in KBWizardPage::nextPage (this=0x6a4300) at kb_wizardbits.cpp:1110
#5  0x00002aed87c2d2b8 in KBWizard::clickNext (this=0x7fff23624cd0) at kb_wizard.cpp:598
#6  0x00002aed87c1f458 in KBWizard::qt_invoke (this=0x7fff23624cd0, _id=52, _o=0x7fff23623d20) at kb_wizard.moc:399
#7  0x00002aed8a472c26 in QObject::activate_signal () from /usr/lib/libqt-mt.so.3
#8  0x00002aed8a4737b6 in QObject::activate_signal () from /usr/lib/libqt-mt.so.3
#9  0x00002aed8a7e82bf in QButton::clicked () from /usr/lib/libqt-mt.so.3
#10 0x00002aed8a50cbd7 in QButton::mouseReleaseEvent () from /usr/lib/libqt-mt.so.3
[...]

Line 630 is:

630         if ((nptr = lookup (name, cblk->val.block.vars)) == NULL)

This is a dereference of a null pointer, cblk. cblk is a global variable which is only initialized in the function _el_newblk(), and cleared in el_syn_clean().
There appear to be two bugs here: first, that _el_newblk() hasn't been called; second, that this results in a NULL deref instead of catching the problem sanely. I'm not even sure what the scripting language being used here is, though, so I have no idea what causes the first bug.

As this package is currently orphaned, I'd suggest dropping it from the release given that this most basic operation doesn't work and a fix seems unlikely.

--
Steve Langasek                   Give me a lever long enough and a Free OS
Debian Developer                   to set it on, and I can move the world.
vorlon@debian.org
https://lists.debian.org/debian-qa-packages/2007/03/msg00011.html
This post is about how to write a .NET application to move work items from another source (e.g. JIRA, Excel etc.) into Azure Boards in Azure DevOps, and a NuGet package I’ve built to hopefully make it a bit easier for anyone else doing this as well.
So here’s a problem…
Let’s say you’ve convinced your boss to move your projects to Azure DevOps – great! You’re happy, and your team are happy, but before you can really start, there’s still some work to be done – migration of all the historical project data from your existing company systems….
Maybe your company has its own custom story/issue/bug tracking system (maybe it’s JIRA, maybe it’s Mantis, or something else), and you don’t want to lose or archive all that valuable content. You want to load all that content in your project’s Azure Board as well – how do you do that?
Use .NET with Azure Boards to solve this problem
I had exactly this problem recently – my project’s history was exported into one big CSV file, and I needed to get it into Azure Boards. I had loads of fields which I needed to keep and I don’t want to lose all this…
…so I ‘.NET’ted my way out of trouble.
A bit of searching on the internet also leads me to the option of bulk loading using Excel and the TFS Standalone Office Integration pack, but I’m a programmer and I prefer the flexibility of using code. Though, y’know, YMMV.
First I created a .NET Framework console application, and added a couple of NuGet packages for Azure DevOps:
Install-Package Microsoft.TeamFoundationServer.Client Install-Package Microsoft.VisualStudio.Services.Client
These are both projects that target .NET Framework, so I can’t use .NET Core for this yet.
With these included in my application, I now have access to objects which allow me to connect to Azure DevOps through .NET, and also connect to a work item client that allows me to perform create/read/update/delete operations on work items in my project’s board.
It’s pretty easy to load up my project history CSV into a list in a .NET application, so I knew I had all the puzzle pieces to solve this problem, I just needed to put them together.
In order to connect to Azure DevOps and add items using .NET, I used:
- The name of the project I want to add work items to – my project codename is “Corvette“
- The Url of my Azure DevOps instance –
- My personal access token.
If you’ve not generated a personal access token in Azure DevOps before, check this link out for details on how to do it – it’s really straightforward from the Azure DevOps portal:
I can now use the code below to connect to AzureDevOps and create a work item client.
var uri = new Uri("");
var personalAccessToken = "[***my access token***]";
var projectName = "Corvette";

var credentials = new VssBasicCredential("", personalAccessToken);
var connection = new VssConnection(uri, credentials);
var workItemTrackingHttpClient = connection.GetClient<WorkItemTrackingHttpClient>();
Next, I need to create what is basically a list of name and value pairs which describes the name of the work item field (e.g. title, description etc), and the value that I want to put in that field.
This link below describes the fields you can access through code:
It’s a little more complex than a normal dictionary or other key-value pair object in .NET, but not that difficult. The work item client uses custom objects called JsonPatchDocuments and JsonPatchOperations. The names of the fields aren’t intuitive out of the box either, but given all that, I can still create a work item in .NET using the code below:
var bug = new JsonPatchDocument
{
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/System.Title",
        Value = "Spelling mistake on the home page"
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.TCM.ReproSteps",
        Value = "Log in, look at the home page - there is a spelling mistake."
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.Common.Priority",
        Value = "1"
    },
    new JsonPatchOperation()
    {
        Operation = Operation.Add,
        Path = "/fields/Microsoft.VSTS.Common.Severity",
        Value = "2 - High"
    }
};
Then I can add the bug to my Board with the code below:
workItemTrackingHttpClient.CreateWorkItemAsync(bug, ProjectName, "Bug").Result;
Now this works and is very flexible, but I think my code could be made more readable and easy to use. So I refactored the code, moved most of it into library, and uploaded it to NuGet here. My refactoring is pretty simple – I’m not going to go into lots of detail on how I did it, but if you’re interested the code is up on GitHub here.
If you’d like to get this package, you can use the command below:

Install-Package AzureDevOpsBoardsCustomWorkItemObjects -pre
This package depends on the two NuGet packages I referred to earlier in this post, so they’ll be added automatically if you install my NuGet package.
This allows us to instantiate a bug object in a way that looks much more like the creation of a normal POCO, as shown below:

var bug = new AzureDevOpsBug
{
    Title = "Spelling mistake on the home page",
    ReproSteps = "Log in, look at the home page - there is a spelling mistake."
};
And to push this bug to my Azure Board, I can use the code below which is a little simpler than what I wrote previously.
using AzureDevOpsCustomObjects;
using AzureDevOpsCustomObjects.Enumerations;
using AzureDevOpsCustomObjects.WorkItems;

namespace ConsoleApp
{
    internal static class Program
    {
        private static void Main(string[] args)
        {
            const string uri = "";
            const string personalAccessToken = "[[***my personal access token***]]";
            const string projectName = "Corvette";

            var workItemCreator = new WorkItemCreator(uri, personalAccessToken, projectName);

            var bug = new AzureDevOpsBug
            {
                Title = "Spelling mistake on the home page",
                ReproSteps = "Log in, look at the home page - there is a spelling mistake."
            };

            var createdBug = workItemCreator.Create(bug);
        }
    }
}
I’ve chosen to instantiate the bug with hard-coded text in the example above for clarity – but obviously you can instantiate the POCO any way you like, for example from a database, or perhaps parsing data out of a CSV file.
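The CSV-to-work-item step is language-agnostic; as a rough sketch (in Python rather than the post's C#, and with made-up column names), the parsing stage might look like this:

```python
import csv
import io

# Hypothetical CSV export of a legacy project-history spreadsheet.
SAMPLE = """title,repro_steps,priority,severity
Spelling mistake on the home page,"Log in, look at the home page",1,2 - High
Crash on save,"Open a record and press Save",1,1 - Critical
"""

def load_work_items(csv_text):
    """Parse CSV rows into dicts shaped like the fields a work item needs."""
    reader = csv.DictReader(io.StringIO(csv_text))
    items = []
    for row in reader:
        items.append({
            "Title": row["title"],
            "ReproSteps": row["repro_steps"],
            "Priority": row["priority"],
            "Severity": row["severity"],
        })
    return items

items = load_work_items(SAMPLE)
# Each dict can then be pushed to the board, one create call per item.
```

The same shape applies whatever the data source is: parse into a list of field/value records, then loop over them creating work items.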
Anyway, the image below shows the bug added to my Azure Board.
Of course, Bugs are not the only type of work item – let’s say I want to add Product Backlog Items also. And there are many, many different fields used in Azure Boards, and I haven’t coded for all of them in my NuGet package. So:
- I’ve also added a Product Backlog object into my NuGet package,
- I’ve made the creation method generic, so it can detect the object type and work out what kind of work item is being added to the Board, and
- I’ve made the work item objects extensible, so users can add any fields which I haven’t coded for yet.
For example, the code below shows how to add a product backlog item and include a comment in the System.History field:
private static void Main(string[] args)
{
    const string uri = "";
    const string personalAccessToken = "[[***my personal access token***]]";
    const string projectName = "Corvette";

    var workItemCreator = new WorkItemCreator(uri, personalAccessToken, projectName);

    var productBacklogItem = new AzureDevOpsProductBacklogItem
    {
        Title = "Add reports for how many users log in each day",
        Description = "Need a new report with log in statistics.",
        Priority = AzureDevOpsWorkItemPriority.Low,
        Severity = AzureDevOpsWorkItemSeverity.Low,
        AssignedTo = "Jeremy Lindsay",
        Activity = "Development",
        AcceptanceCriteria = "This is the acceptance criteria",
        SystemInformation = "This is the system information",
        Effort = 13,
        Tag = "Reporting; Users"
    };

    productBacklogItem.Add(
        new JsonPatchOperation
        {
            Path = "/fields/System.History",
            Value = "Comment from product owner."
        }
    );

    var createdBacklogItem = workItemCreator.Create(productBacklogItem);
}
Obviously I can change the code to allow addition of comments through a property in the AzureDevOpsProductBacklogItem POCO, but this is just an example to demonstrate how it can be done by adding a JsonPatchOperation.
The image below shows the product backlog item successfully added to my Azure Board.
Wrapping up
The Boards component of Azure DevOps is a useful and effective way to track your team’s work items. And if you want to populate a new Board with a list of existing bugs or backlog items, you can do this with .NET. A lot of these functions aren’t new – many were available in VSTS – but it’s still nice to see these powerful functions and libraries continue to be supported. Hopefully the NuGet package I’ve created to assist in the process will be useful to some of you who are working through the same migration challenges that I am. Obviously this NuGet package can still be improved a lot – it just covers Backlog Items and Bugs right now, and it’d be better if it flagged which fields are read-only – but it’s good enough to meet minimum viable standards for me right now, and maybe it’ll be helpful for you too.
About me: I regularly post about Microsoft technologies like Azure and .NET – if you’re interested, please follow me on Twitter, or have a look at my previous posts here. Thanks!
Assaf Arkin wrote:
> On Mon, Jul 28, 2008 at 11:23 PM, Ittay Dror <ittay.dror@gmail.com> wrote:
>
>> I merged the other email (ordering) and comments. My comments inline
>>
>> Assaf Arkin wrote:
>>
>>> On Mon, Jul 28, 2008 at 2:42 AM, Ittay Dror <ittay.dror@gmail.com> wrote:
>>>
>>>
>>>> Hi,
>>>>
>>>> I'm working on adding C++ support to buildr. I already have a prototype
>>>> that
>>>> builds libraries and executables in Linux. I'd like to share some of the
>>>> difficulties I had and request changes to buildr to accommodate C++ more
>>>> easily. (Right now, I've created parallel route to that of building
>>>> Java-like code)
>>>>
>>>> compile
>>>> ========
>>>> overview
>>>> --------------------
>>>> the compile method in project returns a CompileTask that is generic and
>>>> uses
>>>> a Compiler instance to do the actual compilation. In C++, compilation is
>>>> also dependency based (.o => .cpp, sometimes precompiling headers). Also,
>>>> the same code can produce several results (static and shared libraries,
>>>> oj
>>>> files with debug, profiling, preprocessor defines turned on and off). [1]
>>>>
>>>> there is the 'build' task, which is used as a stub to attach dependencies
>>>> to.
>>>>
>>>> suggestion
>>>> ---------------------
>>>> * there should be an array of compile tasks (as in packages)
>>>> * #compile should delegate the call to a factory method which returns a
>>>> task
>>>> (again, as in packages)
>>>>
>>>>
>>> Yes. And I know a few people just waiting for the chance to compile
>>> multiple things in the same project, so here's another reason for
>>> adding this feature.
>>>
>>> But I have to warn you, it's not as simple as it looks. I took a stab
>>> at it before and decided to downscale support to one compiler per
>>> project. It's worth doing because a lot of languages would benefit
>>> from it, but that's also what makes it tricky. I think it would be
>>> easier to get C support working without it first, and separately work
>>> on this feature and then improve C support using it.
>>>
>>>
>> How about this: classify compile commands with symbolic names. like
>> compile('java') or compile('c++:shared') ? on bootstrap, the different
>> extensions can create compile tasks based on directory structure (so the
>> Java extension can see that the directory [:source, :main, :java] exists and
>> create compile('java') with some default values.
>>
>> All compile tasks are prerequisites of 'build'
>>
>> Then 'package :jar' can create a package that depends on compile('java'),
>> compile('groovy') or whatever makes sense to put in a jar, as long as the
>> compile task exists of course (not to create them if they don't) (BTW, I have
>> some issues with the lack of command-query separation, normally when using a
>> query method, I wouldn't want a task to be created if it doesn't exist)
>>
>
> Rake::Task.task_defined? will tell you if a task is defined without
> creating it. Rake::Task[] (same as calling task) would find you the
> task, creating it if necessary by looking at the rules, existing files
> or creating a generic task.
>
> I want to avoid discussing the issues with
> compile('java')/compile('groovy') here. It's a big issue that belongs
> in its own thread and affects more than just C/C++. I'm just pointing
> out that it looks as easy as adding a language flag to compile, but
> when you get down to look at all the details involved, it's a pretty
> damn big change.
>
> And separately, see comments below, it will not replace the generic
> compile task but add more tasks for compile to orchestrate.
>
>
>>>
>>>> * generic pre-requisites (like 'resources') should either be tacked on
>>>> 'build' (relying on order of prerequisites), or the compile task can be
>>>> defined to be a composite (that is, from the outside it is a single task,
>>>> but it can use other tasks to accomplish its job).
>>>>
>>>>
>>> compile already is: resources is a prerequisite for compile, some
>>> other tasks (e.g. byte code enhancing) are tacked on to compile by
>>> enhancing it.
>>>
>>>
>>>
>> yes, but the compilation of the java family of languages is one task
>> (calling javac), while compiling c++ is several tasks: task per obj file and
>> task per link. so there's a chain of tasks already. having a generic method
>> receive a task from the factory method and make it depend on 'resources'
>> won't do, since the lower level tasks should be the ones that depend.
>>
>
> I don't see why the existing compile task can't orchestrate all the
> smaller compile tasks. It already orchestrates several tasks,
> compiling a project will compile all its sub-projects, dependencies,
> resources, etc. Think of it as the compile stage of the build, more
> than just running the compiler. In fact all top-level projects have a
> compile task, but many don't have anything to compile, just use it to
> orchestrate compilation of all their child projects.
>
> If you let compile orchestrate smaller tasks, you can get the Rake
> dependency mechanism working for you to handle individual object
> files, compiling only that which is necessary, but also get the Buildr
> dependency mechanism orchestrating the different steps of the build
> and dependencies between projects.
>
can you give an example of how a task can orchestrate other tasks? also,
as far as i could tell, the 'compile' method always creates a
CompileTask. i can't use it as is because it expects some compiler which
i can't give it because i want to use tasks and also, i can't add
dependencies to it because it depends directly on tasks like 'resources'
which the prerequisites should depend on.
At the risk of spending a lot of time on the obvious (i have a feeling
we're talking about different things):
say a project has 2 cpp files A.cpp and B.cpp, with matching headers,
and no other headers, which compile to shared and static libraries. my
dependency tree is:
compile:cpp ----- libsomething.so --- A.o --- A.cpp
\ \ / \ A.h
\ X
\ / \ B.o --- B.cpp
\- libsomething.a/-----/ \ B.h
these should be rake tasks for two reasons: timestamp checking and the
fact that two artifacts rely on the same set of objects. also linking
and compiling are two different commands and finally, if i call the
compiler twice, it will do the work twice (that is, it doesn't have any
internal mechanism that tells it there's no need to recreate the obj
files or libraries).
note that all of this tree needs to rely on the 'resources' task, since
some headers may be generated. so 'resources' need to run before all the
timestamp checking and compilation is done.
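The timestamp checking being relied on here is the same rule make and rake file tasks apply; a minimal sketch of the decision (in Python as a neutral stand-in, not buildr code):

```python
import os

def needs_rebuild(target, prerequisites):
    """Make/rake-style check: rebuild when the target is missing
    or older than any of its prerequisites."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(p) > target_mtime for p in prerequisites)
```

Because 'resources' may generate headers, it has to run before any of these mtime comparisons are made — which is exactly the ordering concern described above.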
>
>
>> of course the factory method can create just one task that does all the rest
>> in its action (compile obj files and link), but i do want to use tasks for
>> the following reasons:
>> 1. it makes the logic more like make, which will assist acceptance
>> 2. it can use mechanisms in unix compilers to help make. specifically, most
>> (if not all) unix compilers have an option to spit out dependencies of the
>> source files on headers.
>> 3. it reuses timestamp checking code in rake (and if ever rake implements
>> checksum based recompilation)
>> 4. if rake will implement a job execution engine (like -j in make), then
>> structuring compilation by tasks will allow it to parallelize the execution.
>>
>> but, i think the solution is easy: similar to the 'build' "pseudo task", i
>> can create a 'compile:prepare' pseudo task that depends on 'resources' etc.
>> then, the factory method needs only to depend on 'compile:prepare' (the
>> logic is that another extension can then add other things to do before
>> compile without needing to change the compile extensions)
>>
>
> We had compile:prepare in the past which invokes resources and ...
> well, that's about it. It turns out that just having compile and
> doing everything else as prerequisite is good enough.
>
>
>>>
>>>> package & artifacts
>>>> =========
>>>> overview
>>>> ---------------
>>>> buildr has a cool concept that all dependencies (in 'compile.with') are
>>>> converted to tasks that are then simple rake dependencies. However, the
>>>> conversion is not generic enough. to compile C++ code against a
>>>> dependency
>>>> one needs 2 paths: a folder containing headers and another containing
>>>> libraries. To put this in a repository, these need to be packaged into
>>>> one
>>>> file. To use after pulling from the repository, one needs to unpack. So a
>>>> task representing a repository artifact is in fact an unzip task, that
>>>> depends on the 'Artifact' task to pull the package from a remote
>>>> repository.
>>>>
>>>>
>>> Let's take Java for example, let's say we have a task that depends on
>>> the contents of another WAR. Specifically the classes (in
>>> WEB-INF/classes) and libraries (WEB-INF/lib). A generic unzipping
>>> artifact won't help much, you'll get the root path which is useless.
>>> You need the classes path for one, and each file in the lib (pointing
>>> to the directory itself does nothing interesting). It won't work with
>>> EAR either, when you unzip those, you end up with a WAR which you need
>>> to unzip again.
>>>
>>> But this hypothetical task that uses WAR could be smarter. It
>>> understands the semantics of the packages it uses, and all these
>>> packages follow a common convention, so it only needs to unpack the
>>> portions of the WAR it cares about, it knows how to construct the
>>> relevant paths, one to class and one to every JAR inside the lib
>>> directory.
>>>
>>> I think the same analogy applies to C packages. If by convention you
>>> always use include and lib, you can unpack only the portion of the
>>> package you need, find the relevant paths and use them appropriately.
>>>
>>>
>> (note: not sure i'm following you here. )
>>
>
> Artifacts by themselves are a generic mechanism for getting packages
> into the local repository. Their only responsibility is the artifact
> and its metadata, so a task representing a repository artifact would
> only know how to download it.
>
> You can have a separate task that knows how to extract an artifact
> task and use it instead, that way you get the unpacking you need, but
> not all downloaded artifacts have to be unpacked.
>
yes, this is what i'm currently doing, as i explained below.
but what i want is for me to be able to do that by integrating with the
existing 'artifacts' task. right now it will only return Artifact
objects. I'd like to have a more elegant solution than just to run over
them and create my own objects, which i think will be more tricky with
transitive dependencies (where transitivity may come from my artifacts,
e.g. the project's artifacts)
>
>
>> my current implementation creates classes that have methods to retrieve the
>> include paths, the library paths and the library names. I don't use the task
>> name, since it is useless (as you mentioned). so I have an
>> ExtractedRepoArtifact FileTask class that implements these methods by
>> relying on the structure of the package ('include' and 'lib' directories),
>> it depends on the Artifact class and its action is to extract the artifact.
>>
>> When given a project dependency, i return the build task which implements
>> the artifact methods mentioned above by returning the
>> [:source,:main,:include] and [:target, Platform.id, :lib] paths. It also
>> allows the user to add include paths (e.g., for generated files) which are
>> then both used for compilation and returned by the artifact methods.
>>
>>>
>>>> furthermore, when building against another project, there is no need to
>>>> pack
>>>> and unpack in the repository. one can simply use the artifacts produced
>>>> in
>>>> the 'build' phase of the other project.
>>>>
>>>>
>>> Yes. Right now it points to the package, which gets invoked and so
>>> packs everything, whether you need the packing or not. You don't,
>>> however, have to unpack it, if you know the packaging type you can be
>>> smarter and go directly to the source.
>>>
>>>
>> but i don't want to pack if there's no use for it. speed is critical in this
>> project, since there's no eclipse to constantly compile code for you, so
>> developers need to run the build after each change. having it pack
>> unnecessarily wastes time.
>>
>
> One step at a time. I would worry if we can't do that at all, but if
> it's just optimization, we can get to the more problematic issues
> first.
>
>
>
>>>
>>>> finally, in C++ in many cases you rely on a system library.
>>>>
>>>> in all cases the resulting dependency is two-fold: on a include dir paths
>>>> and on a library paths. note that these do not necessarily reside under a
>>>> shared folder. for example, a dependency on another project may depend on
>>>> two include folders: one just a folder in the source tree, the other of
>>>> generated files in the target directory
>>>>
>>>> suggestion
>>>> -------------------
>>>> While usage of Buildr.artifacts is only as a utility method, so one can
>>>> easily write his own implementation and use that, I think it will be nice
>>>> to
>>>> be able to get some reuse.
>>>>
>>>> * when given a project, use it as is (not 'spec.packages'), or allow it
>>>> to
>>>> return its artifacts ('spec.artifacts').
>>>>
>>>>
>>> Yes. Except we're missing that whole dependency later (that's
>>> something 1.4 will add). Ideally the project would have dependency
>>> lists it can populates (at least compile and runtime), and other
>>> projects can get these dependency lists and pick what they want. So
>>> the compile dependency list would be the place to put headers and
>>> libraries, without having to package them. We don't have that right
>>> now.
>>>
>>>
>> this is the purpose for the 'spec.artifacts' suggestion (that is, an
>> 'artifacts' method in Project). maybe need to classify them similarly to my
>> suggestion for 'compile', so the Buildr.artifacts method receives a
>> 'classifier' argument, whose value can be, for example, 'java' and calls
>> 'spec.artifacts(classifier)'. are we on the same page here?
>>
>
> I'm looking at each of your use cases and trying to identify in my mind:
> a) What you can do right now to make it happen.
> b) What, if we added another feature, we should accommodate for.
> c) What new feature we would need for this.
>
> I'm starting with a) because you can get it working right now, it may
> not be elegant and not work as fast, but we can get that out of the
> way so we can focus about doing the rest. There are some things we're
> planning on changing anyway, so I'm also trying to see if future
> changes would address the elegant/fast use cases, I can tell you what
> I have in mind, but no code yet to make it happen. And then identify
> anything not addressed by current plans and decide how to support that
> directly.
>
i got it working now. but i'm doing several code paths in parallel. i
have a 'make' method instead of 'compile'. the reason are both because i
need to create several tasks, not a 'compiler' object (and i want to
create them before rake's execution starts) , and because i need to
create different implementations per platform.
>
> Right now, project.packages is good enough for what you need. It's an
> array of tasks, you can throw any task you want in there and the
> dependent project would pick on it. You don't have to throw ZIP files
> in there, you can add a header file or a directory of header files, or
> a task that knows it's a directory of header files.
>
> It's inelegant because project.packages is intended to be the list of
> things that get installed and released, so it's an "off the label" use
> for that part of the API. But, it will work, and if you just add
> things to the end of project.packages, they won't get installed or
> released. So project.packages is the same as project.artifacts, just
> with a different name.
>
or i can implement my own 'artifacts' method, which is what i did
because i need different artifact objects than what Buildr.artifacts
returns.
> Separately, we need (and planning and working on) a smarter dependency
> management, which you can populate and anything referencing the
> project can access. It won't be called artifacts but dependencies, it
> will do a lot more, and it will be more elegant and documented for
> specific use cases like this.
>
>
>
>>>
>>>> * if a symbol, recursively call on the spec from the namespace
>>>> * if a struct, recursively call
>>>> * otherwise, classify the artifact and call a factory method to create
>>>> it.
>>>> classification can be by packaging (e.g. jar). but actually, i don't have
>>>> a
>>>> very good idea here. note that for c++, there need to be a way of
>>>> defining
>>>> an artifact to look in the system for include files and libraries (maybe
>>>> something like 'openssl:system'? - version and group ids are
>>>> meaningless).
>>>> * the factory method can create different artifacts. for c++ there would
>>>> be
>>>> RepositoryArtifact (downloads and unpacks), ProjectArtifact (short
>>>> circuit
>>>> to the project's target and source directories) and SystemArtifact.
>>>>
>>>> I think that the use of artifact namespaces can help here as it allows to
>>>> create a more verbose syntax for declaring artifacts, while still
>>>> allowing
>>>> the user to create shorter names for them. (as an example in C++ it will
>>>> allow me to add to the artifact the list of flags to use when
>>>> compiling/linking with it, assuming they're not inherent to the artifact,
>>>> e.g. turn debug on). The factory method receives the artifact definition
>>>> (which can actually be defined by each plugin) and decides what to do
>>>> with
>>>> it.
>>>>
>>>>
>>> 1.4 will have a better dependency mechanism, and one thing I looked at
>>> is associating meta-data with each dependency. So perhaps that would
>>> address things like compiling/linking flags.
>>>
>>>
>>>> ordering
>>>> =========
>>>> overview
>>>> -------------------
>>>> to support jni, one needs to first compile java classes, then run javah
>>>> to
>>>> generate headers and then compile c code that implements these headers.
>>>> so
>>>> the javah task should be able to specify it depends on the java compile
>>>> task. this can't be by depending on all compile tasks of course or on
>>>> 'build'.
>>>>
>>> Alternatively:
>>>
>>> compile do |task|
>>> javah task.target
>>> end
>>>
>>> This will run javah each time the compiler runs.
>>>
>>>
>>>
>>>
>> but running each time is what i want to avoid. not only do i want to avoid
>> the invocation of 'javah', but when invoked it will change the timestamp of
>> the generated headers and so many source files will get recompiled.
>>
>
> Rake separates invocation from execution. Invoking a task tells it to
> invoke its prerequisites, then use those to decide if it needs
> executing, and if so execute. Whether you put javah at the end of
> compile, or a prerequisite to build, it will get invoked and it should
> be smart enough to decide whether there's any work to be done.
>
i think i'm missing something here. in the code snippet above, didn't
you add an action to 'compile' and in that action call the javah
command? to me it looks like at the end of compile javah is run.
> But there is a significant difference between the two. If you add it
> to compile, it gets invoked during compilation -- and compilation
> implies there's a change to the source code which might lead to change
> in the header files -- and that happens as often as is necessary. If
> you put is as prerequisite to build, it only happens when the build
> task runs. If you run rake task, which doesn't run the build task,
> you may end up testing the wrong header files.
>
there should be a rule to the effect of:
jni_headers_dir => [classes] do |task|
  javah classes  # with whatever flags to put the generated headers in jni_headers_dir
  touch jni_headers_dir
end
so if the classes are newer than the directory (and only then) javah
runs. if i run it every time it will generate headers, changing the
timestamp, which will cause all dependent cpp classes to recompile which
will take a lot of time.
>
>
>> note that compiling a C/C++ source file is a much slower process than
>> compiling java.
>>
>>>> suggestion
>>>> -------------------
>>>> when creating a compile task (whose name can be, as in the case of c++,
>>>> the
>>>> result library name - to allow for dependency checking), also create a
>>>> "for
>>>> ordering only" task with a symbolic name (e.g., 'java:compile') which
>>>> depends on the actual task. then other tasks can depend on that task
>>>>
>>>>
>>> And yes, you'll still need that if you want to run the C compiler
>>> after the Java compiler, so I think the right thing to do would have
>>> separate compile tasks.
>>>
>>>> I hope all this makes sense, and I'm looking forward to comments. I
>>>> intend
>>>> to share the code once I'm finished.
>>>>
>>>>
>>> Unfortunately, the last time I wrote C code was over ten years ago,
>>> so my rustiness is showing. I'm sure I missed some points because of
>>> that.
>>>
>>>
>> I hope I cleared things. I think it is worth investing in C/C++ as it is a
>> space where there's still no solutions (that i know of) that handle module
>> dependency.
>>
>
> Definitely.
>
>
>> To make sure it is clear, I'm not asking for the buildr team to implement
>> C/C++ building, I intend to do that, and have already made a demo of it
>> working, but I do want to ask for the infrastructure in buildr to make it
>> easier, since currently it looks like a "stepson".
>>
>
> In addition, two things we should look at.
>
> First, find out a good intersection between C/C++ and other languages.
> There may be some changes that are only necessary for C/C++, but
> hopefully most of these can be shared across languages, that way we
> get better features all around.
>
> Second, make sure we exhausted all our options before making a change.
> If there's another way of doing something, even stop-gap measure
> while we cook up a better feature all around, then we have less
> changes to worry about.
>
> It's an exercise we did before with Groovy and Scala (earlier versions
> were married to Java) and it worked out pretty well. We started by
> not making any changes in Buildr to accommodate it, instead using a
> separate task specifically for compiling Scala code that relied on
> some hacks and inelegant code to actually work. Then took the time to
> build multi-lingual support out of that.
>
i'm already past that. i have ~20 modules compiling, with transitive
dependencies on other modules and on third party modules.
so i'm now at a stage where i want better integration with buildr.
> Assaf
>
>
>> Ittay
>>
>>> Assaf
>>>
>>>
>>>
>>>
>>>> Thank you,
>>>> Ittay
>>>>
>>>>
>>>> Notes:
>>>> [1] I don't consider linking a library as packaging. First, the obj files
>>>> are not used by themselves as in other languages. Second, packaging is
>>>> required to manage dependencies, because in order for project P to be
>>>> built
>>>> against dependency D, D needs to contain both headers and libraries -
>>>> this
>>>> is the package.
>>>>
>>>> --
>>>> --
>>>> Ittay Dror <ittay.dror@gmail.com>
>>>>
>>>>
>>>>
>>>>
>>>>
>> --
>> --
>> Ittay Dror <ittay.dror@gmail.com>
>>
>>
>>
>>
--
--
Ittay Dror <ittay.dror@gmail.com>
Hi,
I've had similar problem, and solved it by using jaxb methods that accept a classloader as
argument.
Ex : JAXBContext.newInstance(somepackage, myclassloader) will work, provided myclassloader
is able to load the generated class.
But if you call JAXBContext.newInstance(somepackage) it uses the Thread.getContextClassLoader
method and it won't work in OSGi.
You don't have to use a wild-carded dynamic import (and it does not even work).
However, I have not tested with Felix, only with Oscar. I also had to provide the package
explicitly because the getPackage method would return null with classes loaded by the Oscar
class loader.
Regards,
Anne
-----Original Message-----
From: Rob Walker [mailto:robw@ascert.com]
Sent: Wednesday, 21 March 2007 10:09
To: felix-dev@incubator.apache.org
Subject: Re: Has anyone used JAXB within Felix?
I can answer part 2
> - If OSGi is using custom class loaders and JAXB is using different
> class loaders, will JAXB ever work in OSGi?
>
Yes it will and it does - we did it some time ago and it worked fine (as per my earlier post)
In fact, I seem now to recall that JAX-B only actually has a very small set of runtime classes
- from memory it's mostly a build-time generation tool. The Java classes it creates are mostly
standalone, depending largely on things like XML parsers. I seem to remember some namespace
handling and RI classes, and that was about it. So it's mostly about bundling your generated
classes correctly, with all the necessary imports and exports.
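For example, a bundle manifest for the generated classes might look roughly like this (a sketch — the package names are placeholders, and the exact imports depend on what the generated code references):

```text
Manifest-Version: 1.0
Bundle-SymbolicName: com.example.jaxb.generated
Export-Package: com.example.schema.generated
Import-Package: javax.xml.bind, javax.xml.bind.annotation, org.xml.sax
```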
-- Rob
> Thanks,
> Tim
>
> Felix Meschberger wrote:
>> Hi,
>>
>> On 3/19/07, Tim Moloney <t.moloney@verizon.net> wrote:
>>> I've not worked with class loaders before. How do I know which
>>> class loaders are being used? Where can I read more about them?
>>
>> Well, you "work with class loaders" all the time, but you don't know.
>> In fact, the whole OSGi Module spec is centered around Java
>> ClassLoading. :-)
>>
>> You will find numerous documentation on class loading on the net, e.g.
>>
>>
>>
>> Point here is, that two Class instances loaded by different
>> ClassLoader instances are not the same even thought their byte code
>> might be exactly the same. And this is one of the tricky things
>> regarding class loaders because this situation is somewhat difficult
>> to trace.
>>
>>> I'm not sure that the source for JAXB is available but I'll look.
>>>
>>> Thanks for the suggestions. :)
>>
>> You are welcome.
>>
>> Regards
>> Felix
>>
>
--
Ascert - Taking systems to the Edge
robw@ascert.com
+44 (0)20 7488 3470
The mobile communication device market is growing exponentially. A survey by Gartner Dataquest predicted that worldwide mobile phone sales will have totaled 412.7 million units in 2000, a 45.5 percent increase from 1999. (See Resources for a link to the survey.) This growth, coupled with the Internet's evolution into a Web of services, has fueled demand for Internet-enabled services that wireless devices -- particularly cell phones -- can readily access. Many technologies can be harnessed to provide value-added services over a cell phone, including Wireless Application Protocol (WAP) and Short Message Service (SMS).
What is SMS?.
Here are a few reasons why SMS, not WAP, should be used for value-added services in the short- to medium-term:
- Large number of legacy, i.e., non-WAP, phones
- WAP's uncertain future
- Lack of widespread WAP content (as of yet)
- SMS is suitable for meeting major market needs
- SMS can be used as a kind of push technology, meaning the user doesn't have to request delivery of information -- it can be automatically sent to her/him
Based on the projected demand for wireless services, wireless messaging and access to financial services appear to be predominant, dwarfing the need for browsing (WAP's core strength).
To harness SMS to deliver information to a cell phone, we propose using HTML forms available from major cell phone service providers to send messages to their subscribers. This helps us keep the solution simple and practical.
Web form scraping
A Web form is an HTML Webpage that submits information (such as user name, password, and so forth) to the server. Web form scraping is the act of interacting with an HTML Web form via a computer program. Most cell phone providers have HTML forms on their Websites, from which SMS messages can be sent to subscribers. Our solution will use this feature to deliver wireless content such as stock quotes without actually being connected to the wireless network.
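To make the idea concrete, here is a small sketch of what "posting to a provider's Web-to-SMS form" involves. This is a Python illustration, not the article's Java code, and the form URL and field names (`phone`, `message`) are hypothetical; a real handler must copy the field names from the provider's actual HTML form.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical Web-to-SMS form URL, for illustration only.
FORM_URL = "https://provider.example.com/send-sms"

def build_sms_request(phone_number, message):
    """Build the same POST a browser would submit from the provider's form."""
    fields = {"phone": phone_number, "message": message}
    data = urlencode(fields).encode()   # application/x-www-form-urlencoded body
    return Request(FORM_URL, data=data)  # passing data= makes this a POST

req = build_sms_request("15551234567", "MSFT 75.20 +0.5")
# urllib.request.urlopen(req) would actually submit the form
```

The point is simply that form scraping is nothing more than reproducing, from a program, the field names and values the provider's own HTML form submits.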
Practical Java solution
We will demonstrate an application that you can use to push Web content to users' cell phones via SMS. Some common application scenarios are:
- Stock trade confirmations/stock quotes
- News
- User-notification service
Our two-part solution is composed of a Java servlet and an application that delivers stock quotes to the user's cell phone using SMS. You can also use it to easily deliver other services. The application fetches the stock information from well-known Websites and sends it to the user's cell phone. The user must first create a profile that details the stock symbols, frequency of updates, and other relevant information.
The application works as follows:
Using an HTML form, a Java servlet retrieves and stores the user's profile information. That information includes the following:
- Unique ID -- email address, in this case
- Cell phone provider
- Cell phone number
- Frequency of updates
- Content preference -- stock symbols, in this case
- A local text file stores the profile.
- A console application, which might run in a separate JVM, takes care of the timing and dispatch. It retrieves the stock quotes from Yahoo Quotes and determines which users should be sent messages, based on their frequency preferences.
- For every eligible user, the application passes the stock quotes to a handler, based on the user's cell phone provider.
- The handler posts the data to an HTML form on the provider's Website, which offers a Web-to-SMS interface.
- The application sleeps for an hour and returns to step 3.
The figure below illustrates the high-level flow. First, the user enters the profile via a desktop-based browser that communicates with the CellQuotes servlet. That servlet stores the profile in a simple flat file, which is then read by the TimeWorker application. Based on the profile settings, the TimeWorker application retrieves the stock quotes from a datastream provider (e.g., Yahoo Quotes) and posts to the appropriate cell provider's HTML form. Then the data forwards to the appropriate cell phone via SMS.
Solution architecture
Our solution uses the following classes:
(Note: you can download the entire source code from Resources.)
- Profile.java encapsulates user-specific information
- ProfileReader.java reads profiles from the flat file as Profile objects
- ProfileWriter.java writes Profile objects to the flat file
- Constants.java contains constants such as the profile filename
- CellQuotes.java is the servlet that provides the Web interface to our solution
- TimeWorker.java is the behind-the-scenes application that takes care of actual dispatch
- CellProvider.java is the interface that must be extended in order to add cellular service providers
- CellProviderSelector.java is the factory that returns the appropriate CellProvider implementation
The other classes are merely helpers, so we'll concentrate on the CellQuotes servlet and the TimeWorker application, which share a common resource: the transaction file. The servlet writes profiles to the transaction file; the application reads new profiles from the file and adds them to the main store. Therefore, the transaction file needs to be synchronized. In a simplistic approach, we have designated a token lockFile. If the servlet finds that the lockFile exists, then it assumes the application is reading from the transaction file and waits its turn. If the application finds that the lockFile exists, then it assumes the servlet is writing to the transaction file and waits its turn. We have also provided a time-out functionality to avoid resource starvation and deadlock.
The code for the CellQuotes servlet is as follows:
public class CellQuotes extends HttpServlet {

    public void doGet(...) throws ServletException {
        // Return the HTML form to the user. We do this by simply reading
        // a local file and dumping it to the response stream.
    }

    public void doPost(...) throws ServletException {
        // Extract the profile information from the POST;
        // assume it has already been validated by JavaScript on the client.
        // Save the parameters as a new Profile object.
        ...
        // Write the profile to disk
        writeProfile(profile);
        ...
        // Clean up
        ...
    }

    private void writeProfile(Profile p) {
        // Synchronize on some static object, so that multiple servlet
        // instances don't corrupt the profiles file.
        synchronized (obj) {
            // Check for the existence of the lock file
            if (Constants.lockFile.exists()) {
                // Waiting and timeout code goes here
            }
            // Since access is available, create the lock file
            // and write to the transaction file.
            ProfileWriter pw = new ProfileWriter(Constants.transactionFile);
            pw.writeProfile(p);
            ...
        }
    }
}
You might wonder why we completed synchronization in such a roundabout manner, using
lockFiles, timeouts, and whatnot. Why not just synchronize on a common static object? Unfortunately, it's not that simple. Our approach accounts for the fact that the servlet and application might run in different JVMs (which is almost a given); that means a common object cannot be established. (If you know of a better way to complete the synchronization, please let us know.)
The CellQuotes servlet stores the user's profile in a flat file. The TimeWorker application adds that profile to its main profile store and also acts on the stored profiles.
The TimeWorker application must wake every hour to determine which users need to receive stock quotes, and to update its main profile store. It might have to send quotes not only to users with hourly frequencies, but also to those with 3-hour frequencies, 6-hour frequencies, and so on. How does the TimeWorker decide which users to send to?
We have a counter representing the number of hours in a day (24). If the current hour is exactly divisible by the user's set frequency, then that user must receive the quotes. Here's an example.
Suppose we start the application at 00:00 hours and we have three users:
- A: requests that stocks be sent once every hour
- B: requests that stocks be sent once every 3 hours
- C: requests that stocks be sent once every 6 hours
The logic used by the
TimeWorker application to select eligible users every hour is illustrated below:
- At 00:00 hours, the counter equals 0. Zero is divisible by all three frequencies, so TimeWorker sends all users their quotes.
- At 01:00 hours, the counter equals 1. Only user A receives quotes.
- At 02:00 hours, the counter equals 2. Again, only user A gets quotes.
- At 03:00 hours, the counter equals 3. Users A and B receive quotes.
- At 06:00 hours, the counter equals 6. All three users receive quotes.
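The divisibility rule above can be sketched in a few lines. This is a language-neutral Python sketch for illustration; the article's actual implementation is in Java.

```python
def eligible_users(frequencies, hour):
    """Return the users whose update frequency evenly divides the current hour.

    `frequencies` maps a user name to an update interval in hours. Hour 0
    (startup) is divisible by every frequency, so everyone is eligible then.
    """
    return [user for user, freq in frequencies.items() if hour % freq == 0]

frequencies = {"A": 1, "B": 3, "C": 6}
eligible_users(frequencies, 0)  # ['A', 'B', 'C']
eligible_users(frequencies, 2)  # ['A']
eligible_users(frequencies, 3)  # ['A', 'B']
eligible_users(frequencies, 6)  # ['A', 'B', 'C']
```

Note that the counter wraps at 24, so a frequency of 5 hours, say, would drift across days; the frequencies offered here (1, 3, 6, 12, 24) all divide 24 evenly.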
Since each provider has its own Web interface for sending SMS messages, TimeWorker must next send the quotes to each user's cellular service provider. We accomplish that by using a Factory-style design pattern. For every cellular provider, there is an implementation of the CellProvider interface. The list of those implementing classes is maintained in a text file. The CellProviderSelector class has a getProvider(String ID) method that determines the appropriate handler, based on the ID. It does so by using the Reflection API. Thus, adding new cellular providers to our solution is as easy as:
- Writing the implementation of the CellProvider interface and adding it to the classpath
- Adding the Classname to the CellProviderRegistry text file
- Hoping the whole contraption actually works
The last point is just a jibe at ourselves; of course it'll work!
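The registry lookup can be sketched as follows. This Python illustration uses importlib in place of the Java Reflection API, and a plain dict in place of the CellProviderRegistry text file; a standard-library class stands in for a real CellProvider implementation.

```python
import importlib

def get_provider(provider_id, registry):
    """Instantiate the handler class registered for a provider ID.

    `registry` maps provider IDs to fully qualified class names, playing
    the role of the CellProviderRegistry text file.
    """
    qualified_name = registry[provider_id]
    module_name, _, class_name = qualified_name.rpartition(".")
    module = importlib.import_module(module_name)  # reflection-style lookup
    return getattr(module, class_name)()           # instantiate the handler

# Demo: "acme-mobile" is a hypothetical provider ID.
registry = {"acme-mobile": "collections.Counter"}
handler = get_provider("acme-mobile", registry)
type(handler).__name__  # 'Counter'
```

The design choice is the same in either language: the dispatcher never names concrete provider classes, so adding a provider is a registry edit rather than a code change.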
The following code shows the internals of the TimeWorker application:
http://www.javaworld.com/article/2076048/mobile-java/deliver-cellular-messages-with-sms.html
Changelog History
Page 9
v1.7.3 Changes
- 👌 Improvement: mousewheel event is handled with target and fired also from objects. #3612
- 👌 Improvement: Pattern loads for canvas background and overlay, corrected svg pattern export #3601
- 🛠 Fix: Wait for pattern loading before calling callback #3598
- 🛠 Fix: add 2 extra pixels to cache canvases to avoid aliasing cut #3596
- 🛠 Fix: Rerender when deselect an itext editing object #3594
- 🛠 Fix: save new state of dimensionProperties at every cache clear #3595
- 👌 Improvement: Better error management in loadFromJSON #3586
- 👌 Improvement: do not reload backgroundImage as an image if is different type #3550
- 👌 Improvement: if a child element is set dirty, set the parent dirty as well. #3564
v1.7.2 Changes
- 🛠 Fix: Textbox do not use stylemap for line wrapping #3546
- 🛠 Fix: Fix for firing object:modified in macOS sierra #3539
- 🛠 Fix: Itext with object caching was not refreshing selection correctly. #3538
- 🛠 Fix: stateful now works again with activeGroup and dynamic swap between stateful false/true. #3537
- 🛠 Fix: includeDefaultValues was not applied to child objects of groups and path-groups. #3497
- 🛠 Fix: Itext style is cloned on paste action now, allow copy of styles to be independent. #3502
- 🛠 Fix: Add subclasses properties to cacheProperties. #3490
- ➕ Add: Shift and Alt key used for transformations are now dynamic. #3479
- 🛠 Fix: fix to polygon and cache. Added cacheProperties for all classes #3490
v1.6.7 Changes
- ➕ Add: Snap rotation added to objects. two parameter introduced, snapAngle and snapTreshold. #3383
- 🛠 Fix: Pass target to right click event. #3381
- 🛠 Fix: Correct rendering of bg color for styled text and correct clearing of itext area. #3388
- ➕ Add: Fire mouse:over on the canvas when we enter the canvas from outside the element. #3388
- 🛠 Fix: Fix calculation of words width with spaces and justify. #3408
- 🛠 Fix: Do not export defaults properties for bg and overlay if requested. #3415
- 🛠 Fix: Change export toObect to always delete default properties if requested. #3416
v1.6.6 Changes
v1.6.5 Changes
- 🛠 Fix: charspacing, do not get subzero with charwidth.
- 👌 Improvement: add callback support to all object cloning. #3212
- 👌 Improvement: add backgroundColor to all class #3248
- 🛠 Fix: add custom properties to backgroundImage and overlayImage #3250
- 🛠 Fix: Object intersection is calculated on boundingBox and boundingRect, intersection is fired if objects are overlapping #3252
- 🔄 Change: Restored previous selection behaviour, added key to selection active object under overlaid target #3254
- 👌 Improvement: hasStateChanged let you find state changes of complex properties. #3262
- 🛠 Fix: IText/Textbox shift click selection backward. #3270
- ⏪ Revert: font family quoting was a bad idea. node-canvas stills use it. #3276
- 🛠 Fix: fire mouse:over event for activeObject and activeGroup when using findTarget shortcuts #3285
- 🛠 Fix: clear method clear all properties of canvas #3305
- 🛠 Fix: text area position method takes in account canvas offset #3306
- 👌 Improvement: Added event on right click and possibility to hide the context menu with a flag #3308
- 🛠 Fix: remove canvas reference from object when object gets removed from canvas #3307
- 👌 Improvement: use native stroke dash if available #3309
- 🛠 Fix: Export correct src when exporting to svg #3310
- 🛠 Fix: Stop text from going to zero dimensions #3312
- 🛠 Fix: Error in dataURL with multiplier was outputting very big canvas with retina #3314
- 🛠 Fix: Error in style map was not respecting style if textbox started with space #3315
v1.6.4 Changes
- 👌 Improvement: Ignore svg: namespace during svg import. #3081
- 👌 Improvement: Better fix for lineHeight of iText/Text #3094
- 👌 Improvement: Support for gradient with 'Infinity' coordinates #3082
- 👌 Improvement: Generally "improved" logic of targeting #3111
- 🛠 Fix: Selection of active group with transparency and preserveObjectStacking true or false #3109
- 🛠 Fix: pattern brush now create the same pattern seen while drawing #3112
- 🛠 Fix: Allow css merge during svg import #3114
- 👌 Improvement: added numeric origins handling from 0 to 1. #3121
- 🛠 Fix: Fix a defect with shadow of objects in a scaled group. #3134
- 👌 Improvement: Do not fire unnecessary selection:changed events. #3119
- 🛠 Fix: Attached hiddenTextarea to body fixes IE, thanks to @plainview. #3137
- 🛠 Fix: Shift unselect activegroup on transformed canvas. #3144
- ➕ Added: ColorMatrix filter #3139
- 🛠 Fix: Fix condition in which restoring from Object could cause object overwriting #3146
- 🔄 Change: cloneAsImage for Object and toDataUrl for object are not retina enabled by default. Added option to enable. #3147
- 👌 Improvement: Added textSpacing support for text/itext/textbox #3097
- 🛠 Fix: Quote font family when setting the context fontstyle #3191
- 🛠 Fix: use getSrc during image export, make subclassing easier, return eventually the .src property if nothing else is available #3189
- 🛠 Fix: Inverted the meaning of border scale factor #3154
- 👌 Improvement: Added support for RGBA in HEX notation. #3202
- 👌 Improvement: Added object deselected event. #3195
- 🛠 Fix: loadFromJson callback now gets fired after filter are applied #3210
v1.6.3 Changes
- 👌 Improvement: Use reviver callback for background and overlay image when doing svg export. #2975
- 👌 Improvement: Added object property excludeFromExport to avoid exporting the object to JSON or to SVG. #2976
- 👌 Improvement: Correct the calculation of text boundingbox. Improves svg import #2992
- ➕ Added: Export id property to SVG #2993
- 👌 Improvement: Call the callback on loadSvgFromURL on failed xml load with null argument #2994
- 👌 Improvement: Clear only the Itext area on contextTop during cursor animation #2996
- ➕ Added: Char widths cache has been moved to fabric level and not iText level. Added fabric.util.clearFabricCharWidthsCache(fontName) #2995
- 🛠 Fix: do not set background or overlay image if the url load fails. #3003
- 🛠 Fix: iText mousemove event removal, clear the correct area for Itext, stopped redrawing selection if not necessary #3016
- 🛠 Fix: background image and overlay image scale and move with canvas viewportTransform, parameter available #3019
- ➕ Added: support sub targeting in groups in events #2997
- 🛠 Fix: Select transparent object on mouse up because of _maybeGroupObject #2997
- 🛠 Fix: Remove reference to lastRenderedObject on canvas.remove #3023
- 🛠 Fix: Wait for all objects to be loaded before deleting the properties and setting options. #3029
- 🛠 Fix: Object Padding is unaffected by object transform. #3057
- 🛠 Fix: Restore lastRenderedObject usage. Introduced Canvas.lastRenderedKey to retrieve the lastRendered object from down the stack #3057
- 🛠 Fix: _calcTextareaPosition correctly calculate the position considering the viewportTransform. #3057
- 🛠 Fix: Fixed selectionBackgroundColor with viewport transform. #3057
- 👌 Improvement: Correctly render the cursor with viewport scaling, improved the cursor centering. #3057
- 🛠 Fix: Use canvas zoom and pan when using is target transparent. #2980
v1.6.2 Changes
- 🛠 Fix: restore canvas properties on loadFromJSON with includeProperties. #2921
- 🛠 Fix: Allow hoverCursor on non selectable objects, moveCursor does not appear if the object is not moveable. ➕ Added object.moveCursor to specify a cursor for moving per object. #2924
- 🛠 Fix: Add missing stroke.live translation, allow gradientTransform for dashed line. #2926
- 👌 Improvement: Allow customization of keys that interact with mouse actions ( multiselect key, free transform key, alternative action key, centered transform key ) #2925
- ➕ Added: Make iText fires object:modified on text change on exit editing #2927
- ➕ Added: [control customization part 1] cornerDashArray, borderDashArray. Now borderScaleFactor influences both border and controls, changed default corner size to 13 #2932
- 🛠 Fix: createSVGFontFacesMarkup was failing to retrieve fonts in style #2935
- 🛠 Fix: shadow not scaled with dataUrl to multiplier #2940
- ➕ Added: [control customization part 2] cornerStrokeColor. Now is possible to specify separate stroke and fill color for the controls #2933
- 🛠 Fix: Itext width calculation with caching false was returning NaN. #2943
- ➕ Added: [control customization part 3] Rounded corners. It is possible to specify cornerStyle for the object. 'rect' or 'circle' #2942
- ➕ Added: [control customization part 4] Selection background. It is possible to specify selectionBackgroundColor for the object. #2950
- 🛠 Fix: Behaviour of image with filters with resize effects and Object to/from json #2954
- 🛠 Fix: Svg export should not output color notation in rgba format #2955
- 🛠 Fix: minScaleLimit rounding bug #2964
- 🛠 Fix: Itext spacing in justify mode bug #2971
- 🛠 Fix: Object.toDataUrl export when some window.devicePixelRatio is present (retina or browser zoom) #2972
https://js.libhunt.com/fabric-js-changelog
As in the image, my response data contains an array like this: 300, 300, 300, 305, 310, 310, 310, 315, 320.
I need to verify whether each value is greater than or equal to 299.
But I'm getting this error.
Is there any way to resolve this?
Solved! Go to Solution.
What is "it"? I can't understand the meaning of "it" in the context of Groovy scripting.
@aaronpliu wrote:
int count = 0
price.each {
    if (it > 299) count++
}
if (count > 0) assert false, "Not all price is greater than 299"
Groovy is powerful yet simple to understand.
The requirement can be achieved easily using the statement below.
// Define the list of numbers to be checked
def listToBeChecked = [300, 300, 300, 305, 310, 310, 310, 315, 320]
// The following will check every element against 299 and show an error otherwise
assert listToBeChecked.every { element -> element > 299 }, 'check failed'
The closure is iterated over the list, and "it" is Groovy's implicit closure parameter: it refers to the current element of the list during iteration.
Hi @chathurad,
As @nmrao said, the Groovy language is simple and groovy, and it lets you take a better way to deal with your requirements. Groovy has many extension methods built on top of Java classes and methods.
As for "it": it is the implicit parameter of a closure. You can also name the parameter explicitly, like { param -> ... }
(Example)
for list, often using each / collect / any / every / find / findAll / eachWithIndex...etc
def alist = [100, 200, 300, 400, 500]
alist.each {
    println(it) // output: 100, 200, 300, 400, 500
}
alist.each { element ->
    println(element) // output: 100, 200, 300, 400, 500
}
def blist = alist.collect {
    if (it > 300) it
}
println(blist.findAll { it != null }) // output: [400, 500]
def clist = alist.find { it > 300 }
println(clist) // output: 400
def dlist = alist.findAll { it > 300 }
println(dlist) // output: [400, 500]
def t = alist.every { it > 99 }
println(t) // output: true
def t2 = alist.any { it > 400 }
println(t2) // output: true
alist.eachWithIndex { element, index ->
    if (element > 300) println("$index: $element") // output: 3: 400 / 4: 500
}
You can visit the official Groovy website to understand more, and check the API docs for more usage based on the language.
Thanks,
/Aaron
https://community.smartbear.com/t5/SoapUI-Pro/Assert-int-value-in-an-array/td-p/176889
- first one has some utility in being able to set breakpoints....
note: If you don't like the classic ternary operation, how about "??" and "?." [I believe they all have their place]
Admin
If only there were methods for asking if a list was empty…
Admin
You know that pattern where basic string operations like atoi get reinvented in the form of a switch statement? Yeah, "fix" it by replacing every conditional with ?:. Bonus points for automating this process!
Admin
The first one definitely has value - if you need to change the boolean expression on the test you only have to do it in one place, rather than 46...
Admin
Sorry, but not enough info to check the WTF level of this. For example what if this is a method in a class that wraps the license object.
Admin
Are you actually kidding? The
expression will return a boolean, so you don't need a ternary. I can't think of a single language in which this is not the case.
Admin
The problem isn't the existenc of hasFeatures(). It's that the whole thing could be rewritten in one line.
For clarity write:
boolean hasFeatures() { return (license.getFeatures().size() > 0); }
For brevity:
boolean hasFeatures() { return (license.getFeatures().size()); }
Admin
Or preferably
return license.getFeatures().isEmpty();
Admin
Or if you want it to be correct
return ! license.getFeatures().isEmpty();
Admin
I proudly abuse the hell out of ternaries. I admit they're not the most readable things, but they're still better than an entire eight line if-else construction for simple things.
Admin
No, you fools! You can't use ternaries here! It's wrong.
boolean hasFeatures() { Boolean returnValue = new Boolean(); try { if (license.getFeatures().isEmpty().equals ((new Boolean (Boolean.FALSE)).getValue())) { returnValue = Boolean.TRUE; endif; if (! license.getFeatures().isEmpty().equals ((new Boolean (Boolean.FALSE)).getValue())) { returnValue = Boolean.FALSE; endif; } catch {Exception e) { returnValue = null; } return returnValue; }
Brillant!
Admin
The boolean stuff I see at times boggles my mind...
If bInProduction = False Then ''do stuff Else ''do other stuff End If
Or my all time favourite I saw recently:
If dto.Order.IsNotNothing() And dto.OrderId.IsNotNothing() And dto.Order > 0 And dto.OrderId > 0 Then ''do stuff End If
Note that 'Order' and 'OrderId' are of type Integer IsNotNothing is a .net extension method that wraps around the single line of code: Return obj IsNot Nothing They literally created a function so as to not need to put a space between IsNot and Nothing... ??? And the ultimate best part, Order will only ever be the values 0 or 1!
Admin
The first one has plenty of value - in fact, it can be more WTF to not wrap complex conditionals in a method (or property, or extension method depending on your language).
I think situations for #1 OR #2 alone are enough to warrant a function representing conditional logic.
The only thing WTF about this code is the way they are writing their conditional logic and returning the result. Which is pretty damn WTF tbf.
Admin
Exactly. This is how coding should be done. It's also much more self commenting to do if(user.isActive())) than if(user.getExpireDate() < time())
The fact that it's simple today shouldn't be the point.
Addendum 2016-06-01 09:56: On second thought though I think they just mean the
if(boolean){ return true;} else {return(false);}
thing
as opposed to just
return boolean;
Admin
There are 10 kinds of people in the world: those who understand how boolean expressions work, and those who don't.
Also, the bonus of abusing ternary expressions is that they can have surprising order of precedence issues!
And there should be a special circle of hell for people who always put parens in their return statements, even constants like return(0). IT'S NOT A FUNCTION YOU BLOODY MORONS STOP WRITING IT LIKE ONE.
Admin
I guess someday I'm going to that special circle in hell. Either that or "I dunno LOL ¯(°_o)/¯" is going to have an aneurysm and nobody will care about it again.
Admin
It's about time that we've introduced affirmative action for code. Anyone have a list of the least used commands? I think we should use every command equally.
Admin
Sumireko, are you advocating wider use of goto? Bringing that one back should be fun.
Admin
Just to be extra-nerdy, if you install R and install the 'sos' package, there's a ??? operator
Admin
Could be a bit worse. A few years back someone wrote a ray-tracer for the IOCC. With no Ifs, or for or while loops, just ternaries and recursion. Urp.
Admin
History lesson: The ? : operator(s) comes from Algol 60, which one could use 'if' then' 'else' in the same way (inside an expression. One could say: a := if b < c then d else e; While needless use of the ?: operator(s) is a crime in and of itself, it has some uses, and might even be a bit easier to read than the Algol-60 version.
Life goes on.
Admin
I will care. But every language is different, so a blanket statement can't really be made about them that'd apply to all languages.
And while everyone thinks they have "their own style", I tend to side with Douglas Crockford on this issue. There is ultimately a correct way of writing a method with white-space/parenthesis/etc. in a given language based on the syntax of that language. Deviations from the "correct" spacing/parenthesis/etc. may not have an adverse effect on the programs performance or its ability to compile, but those variation can (and often do) detract from the readability of the program by other developers (or perhaps by you in 6mo?).
These are the least-important types of errors if you ask just about anyone in the industry, but given enough entropy and layered-on bug fixes, it can become a massive problem. This isn't to say that parenthesis in a return statement are the end of the world, but they're a symptom of a larger problem that is often ignored or not recognized as a problem until it's too late.
Admin
I think you missed the point. It has nothing to do with performing a single Boolean operation in a function; it has everything to with with not just returning the result of the conditional. There is no need to do the return true/false. Just return the result of the test.
Admin
This seems to work in Ruby (hoping I get line breaks in...):
Admin
Style is an argument that is pointless to make, because there will never be one camp. Are you the type that would also argue over whether line indents should be multiples of 3 or of 4 characters?
I'm also the guy who places parenthesis around every macro #define. Because the first time you debug a problem that turned out to be because of macro expansion in an expression causing an "precedence of operators" should be the last time you ever fall victim to that problem.
Admin
var i = { think = function(object) { return { "is": ((function isSimple(what) { return (Boolean(what.keys) === false || what.keys().length === 0) ? true: false; })(object).valueOf() != 0 ? false : true) ? "mischievous" :"evil"}; } }; i.think(this).is;
Admin
Which just goes to show that any feature can be abused. Though in this case I think the nested functions contribute much more confusion than the ternaries.
Admin
And in C++, return (x) can produce different behaviour to return x as it uses a different way of looking up x. So no possibility of disastrous confusion there.
Admin
if (condition) return true; else return false; allows you to set a breakpoint on one of the two return statements, or add a logging statement, if one of the two return values would be unusual.
Admin
I'm currently dealing with the same sort of insanity on one of my projects. Right now, I am having to convince a coworker that a function (literally) named:
is poorly named... WTF.
Admin
Funny, I feel the same way about putting parentheses around the expression in an if or while statement. Sure those are because a language designer got tired of saving keystrokes after defining = to be assignment and == to be comparison. Just because the language designer forgot to make redundant parentheses mandatory in a return statement is no excuse for programmers to be inconsistent in coding style.
(Even though I'm inconsistent in coding style since I don't put redundant parentheses in return statements, that doesn't mean I have an excuse.)
Admin
Should be, but won't be. You see, some Linux driver developer decided he/she didn't need parentheses around a few macro #defines, but got lucky because the macros weren't actually used anywhere. I posted it to the linux-scsi mailing list a few years ago, and no one cared. No matter how many times you or I debug problems caused by precedence of operators in macro expansions, we should not expect any of them to be the last time.
Admin
OK, what hacker leaked the source code of the Edge browser? Or is it from Cortana? Either way, we're going to sue you.
Admin
It could be worse. He could have put in a completely useless else statement.
Admin
The real WTF is of course
which I encounter all over some code which I have inherited...
Admin
@Jolyon Perhaps true is not a keyword, and it is defined in a macro or something.........
I'm a little surprised that no one has mentioned the hidden tl;dr code that always returns true.
Admin
So who is a lone developer supposed to do a code review with?
Admin
That said in Javascript I find myself writing this sort of code
const flag = functionThatReturnsSomthingTruthyOrUndefined() ? true : false;
Becasue I've been bitten in the past assigning the result of the function to a flag and having somebody else in another part of the program use flag === true and the code fail (or JSON encoding an object and seeing a string instead of the bool they expected)
Admin
I don't know if there's bad stuff with using it in Javascript, but !! is a good way in C to ensure that you actually have nothing but a 0 or 1 value, and not some other "truthy" value.
Admin
What possible reason is there not to write the first one as:
boolean someFunction() { return someBooleanExpression; }
?
Admin
A rubber duck.
Admin
TRWTF is failure to use the correct operator.
return somebooleanexpression ? true : false : filenotfound;
Admin
Nothing WTF about the first example. It's perhaps silly, but not a WTF. As others have pointed out, it's more readable (to some) and easier to debug. The second is clearly a WTF, but the general concept is not.
However, the REAL WTF here is the complete and utter lack of null checking when doing boolean expressions like in the second example with nested object/function dereferencing. It make the result NOT boolean, but rather true, false or NPE.
Admin
I agree with GWO on this, I don't see a problem with the function existing ( hasFeatures() ), this approach is used a lot in game programming (isAlive(), isEnemy(), isVisible(), etc). The issue is that the function could have been written in one line/simplified greatly. The use of a ternary operator is also nice.
Admin
Actually you still need to negate the boolean value to return false if the list is empty, given the method hasFeatures is asking if the list is NOT empty ergo. return !license.getFeatures.isEmpty ();
Admin
I've seen similar code where the true and false of the ternary operator were swapped (effectively negating the value). Kept me on my toes rewriting that code, I can tell you.
https://thedailywtf.com/articles/comments/returnary/?parent=465497
rresvport(), rresvport_af()
Obtain a socket with a privileged address
Synopsis:
#include <unistd.h> int rresvport( int * port ); int rresvport_af( int * port, int af );
Since:
BlackBerry 10.0.0
Arguments:
- port
- An address in the privileged port space. Privileged Internet ports are those in the range 0 to 1023. Only the superuser may bind this type of address to a socket.
- af
- (rresvport_af() only) The address family; see <sys/socket.h>.
Errors:
The error code EAGAIN is overloaded to mean "All network ports in use."
Last modified: 2014-06-24
https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/r/rresvport.html
I want to POST data to a target URL after insert, update, and delete operations on a model in Flask, like rest-hooks in Django. So far I have only found the signal events of Flask-SQLAlchemy, like below:
@event.listens_for(MyModel, 'after_insert')
def do_stuff(mapper, connection, target):
I am assuming the REST endpoint you want to POST to is in the same Flask application. It is good practice to separate out the business logic from your REST endpoints and share the code across your whole application.
In my case, I usually create separate py files (called them services) and move the business logic there:
# inventory_services.py

def delete_item(id, data):
    pass  # ... business logic here
Then call this method from your REST end-point where you POST data to:
from inventory_services import delete_item

@api.route('/inventory/delete-item', methods=['POST'])
def delete_item_api():
    posted_data = request.get_json()
    delete_item(posted_data['id'], posted_data['data'])
Use the same service methods in your SQL Alchemy hook methods:
from inventory_services import delete_item

@event.listens_for(MyModel, 'after_insert')
def do_stuff(mapper, connection, target):
    delete_item(target.id, target.data)
The basic idea here is to move the code in your api methods to other plain functions so that they will be accessible across your whole application.
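To make the pattern concrete, here is a runnable sketch of the shared-service idea using plain functions, so it works without Flask or SQLAlchemy installed; all names here are illustrative:

```python
# inventory_services.py equivalent: the business logic lives in one place.
deleted_items = []

def delete_item(item_id, data):
    """Shared service: in the real app this is where you would also POST
    to the target URL."""
    deleted_items.append((item_id, data))
    return {"status": "deleted", "id": item_id}

# The REST endpoint (a Flask view in the real app) calls the service...
def delete_item_api(posted_json):
    return delete_item(posted_json["id"], posted_json["data"])

# ...and the SQLAlchemy 'after_delete' listener calls the very same service,
# so both entry points trigger the same outgoing notification.
def after_delete_listener(mapper, connection, target):
    delete_item(target["id"], target["data"])
```

Both paths funnel through `delete_item`, which is the whole point of the answer.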
https://codedump.io/share/tfKwSk6nFKwj/1/flask-post-data-after-insert-delete-and-update-in-database
SignedData.Sign method
[The Sign method is available for use in the operating systems specified in the Requirements section. Instead, use the SignedCms Class in the System.Security.Cryptography.Pkcs namespace.]
The Sign method creates a digital signature on the content to be signed. A digital signature consists of a hash of the content to be signed that is encrypted by using the private key of the signer. This method can only be used after the SignedData.Content property has been initialized. If the Sign method is called on an object that already has a signature, the old signature is replaced. The signature is created by using the SHA1 signing algorithm.
Syntax
Parameters
- Signer [in, optional]
A reference to the Signer object of the signer of the data. The Signer object must have access to the private key of the certificate used to sign. This parameter can be NULL; for more information, see Remarks.
- bDetached [in, optional]
If True, the data to be signed is detached; that is, the content that is signed is not included as part of the signed object. To verify the signature on detached content, an application must have a copy of the original content. Detached content is often used to decrease the size of a signed object to be sent across the web, if the recipient of the signed message has an original copy of the signed data. The default value is False.
Important: When this method is called from a web script, the script needs to use your private key to create a digital signature. Allowing untrusted websites to use your private key is a security risk. A dialog box that asks whether the website can use your private key appears when this method is first called. If you allow the script to use your private key to create a digital signature and select "Do not show this dialog box again," the dialog box will no longer appear for any script within that domain that uses your private key to create a digital signature. However, scripts outside that domain that attempt to use your private key to create a digital signature will still cause this dialog box to appear. If you do not allow the script to use your private key and select "Do not show this dialog box again," scripts within that domain will automatically be refused the ability to use your private key to create digital signatures.
Because creating a digital signature requires the use of a private key, web-based applications that attempt to use this method will require user interface prompts that allow the user to approve the use of the private key, for security reasons.
The following results apply to the Signer parameter value:
- If the Signer parameter is not NULL, this method uses the private key pointed to by the associated certificate to encrypt the signature. If the private key pointed to by the certificate is not available, the method fails.
- If the Signer parameter is NULL and there is exactly one certificate in the CURRENT_USER MY store that has access to a private key, that certificate is used to create the signature.
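The attached/detached distinction can be sketched without CAPICOM. In the toy model below (Python, with an HMAC standing in for the real "hash of the content encrypted with the signer's private key"; all names are mine), a detached signed object carries only the signature, so verification needs the original content:

```python
import hashlib
import hmac

# HMAC stands in for the public-key signature; illustrative key only.
KEY = b"signer-private-key"

def sign(content: bytes) -> bytes:
    return hmac.new(KEY, content, hashlib.sha1).digest()

def make_signed_object(content: bytes, detached: bool):
    sig = sign(content)
    if detached:
        # Recipient must already hold a copy of the content to verify.
        return {"signature": sig}
    return {"content": content, "signature": sig}

def verify(obj, original=None):
    content = obj.get("content", original)
    return hmac.compare_digest(sign(content), obj["signature"])
```

This mirrors the bDetached parameter: the detached form is smaller to send, but verifying it without the original content is impossible.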
http://msdn.microsoft.com/en-us/library/windows/desktop/aa387726(v=vs.85).aspx
\ Implementation:

\ explicit scoping

: scope ( compilation -- scope ; run-time -- ) \ gforth
    cs-push-part scopestart ; immediate

: endscope ( compilation scope -- ; run-time -- ) \ gforth
    scope?
    drop
    locals-list @ common-list
    dup list-size adjust-locals-size
    locals-list ! ; immediate

\ adapt the hooks

: locals-:-hook ( sys -- sys addr xt n )
    \ addr is the nfa of the defined word, xt its xt
    DEFERS :-hook
    last @ lastcfa @
    clear-leave-stack
    0 locals-size !
    locals-buffer locals-dp !
    0 locals-list !
    dead-code off
    defstart ;

: locals-;-hook ( sys addr xt sys -- sys )
    def?
    0 TO locals-wordlist
    0 adjust-locals-size ( not every def ends with an exit )
    lastcfa ! last !
    DEFERS ;-hook ;

\ ...

: (local) ( addr u -- ) \ local paren-local-paren
    \ a little space-inefficient, but well deserved ;-)
    \ In exchange, there are no restrictions whatsoever on using (local)
    \ as long as you use it in a definition
    dup
    if
        nextname POSTPONE { [ also locals-types ] W: } [ previous ]
    else
        2drop
    endif ;

: >definer ( xt -- definer )
    \ this gives a unique identifier for the way the xt was defined
    \ words defined with different does>-codes have different definers
    \ the definer can be used for comparison and in definer!
    dup >code-address [ ' spaces >code-address ] Literal =
    \ ...
    swap [ 1 invert ] literal and does-code!
    else
        code-address!
    then ;

:noname
    ' dup >definer [ ' locals-wordlist >definer ] literal =
    if
        >body !
    else
        -&32 throw
    endif ;
:noname
    0 0 0. 0.0e0 { c: clocal w: wlocal d: dlocal f: flocal }
    comp' drop dup >definer
    case
        [ ' locals-wordlist >definer ] literal \ value
        OF >body POSTPONE Aliteral POSTPONE ! ENDOF
        [ comp' clocal drop >definer ] literal
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE c! ENDOF
        [ comp' wlocal drop >definer ] literal
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE ! ENDOF
        [ comp' dlocal drop >definer ] literal
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE 2! ENDOF
        [ comp' flocal drop >definer ] literal
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE f! ENDOF
        -&32 throw
    endcase ;
interpret/compile: TO ( c|w|d|r "name" -- ) \ core-ext,local

: locals|
    \ don't use 'locals|'! use '{'! A portable and free '{'
    \ implementation is compat/anslocals.fs
    BEGIN
        name 2dup s" |" compare 0<>
    WHILE
        (local)
    REPEAT
    drop 0 (local) ; immediate restrict
https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/glocals.fs?annotate=1.28;f=h;only_with_tag=MAIN;ln=1
The first thing that we have to do when we start writing either unit or integration tests is to configure our test cases.
If we want to write clean tests, we must configure our test cases in a clean and simple way. This seems obvious, right?
Sadly, some developers choose to ignore this approach in favor of the don’t repeat yourself (DRY) principle.
This is a mistake.
This blog posts identifies the problems of the DRY principle and describes a better way of configuring our test cases.
The Problem
Let’s assume that we have to write “unit tests” for Spring MVC controllers by using the Spring MVC Test framework. The first controller which we are going to test is called TodoController, but we have to write “unit tests” for the other controllers of our application as well.
As developers, we know that duplicate code is a bad thing. When we write code, we follow the Don’t repeat yourself (DRY) principle which states that:
Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.
I suspect that this is one reason why developers often use inheritance in their test suite. They see inheritance as a cheap and easy way to reuse code and configuration. That is why they put all common code and configuration to the base class (or classes) of the actual test classes.
Let’s see how we can configure our “unit tests” by using the approach.
First, we have to create an abstract base class which configures the Spring MVC Test framework and ensures that its subclasses can provide additional configuration by implementing the setUpTest(MockMvc mockMvc) method.
The source code of the AbstractControllerTest class looks as follows:
import org.junit.Before;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.context.WebApplicationContext;

abstract class AbstractControllerTest {

    private MockMvc mockMvc;

    @Autowired
    private WebApplicationContext webAppContext;

    @Before
    public void setUp() {
        mockMvc = MockMvcBuilders.webAppContextSetup(webAppContext).build();
        setUpTest(mockMvc);
    }

    protected abstract void setUpTest(MockMvc mockMvc);
}
Second, we have to implement the actual test class which creates the required mocks and a new controller object. The source code of the TodoControllerTest class looks as follows:
import org.mockito.Mockito;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.web.servlet.MockMvc;

public class TodoControllerTest extends AbstractControllerTest {

    private MockMvc mockMvc;

    @Autowired
    private TodoService serviceMock;

    @Override
    protected void setUpTest(MockMvc mockMvc) {
        Mockito.reset(serviceMock);
        this.mockMvc = mockMvc;
    }

    //Add test methods here
}
This test class looks pretty clean but it has one major flaw:
If we want to find out how our test cases are configured, we have to read the source code of the TodoControllerTest and AbstractControllerTest classes.
This might seem like a minor issue but it means that we have to shift our attention from the test cases to the base class (or classes). This requires a mental context switch, and context switching is VERY expensive.
You might of course argue that the mental price of using inheritance in this case is pretty low because the configuration is pretty simple. That is true, but it is good to remember that this isn't always the case in real life applications.
The real cost of context switching depends on the depth of the test class hierarchy and the complexity of our configuration.
The Solution
We can improve the readability of our configuration by configuring all test cases in the test class. This means that we have to:
- Add the required annotations (such as @RunWith) to the test class.
- Add the setup and teardown methods to the test class.
If we modify our example test class by following these rules, its source code looks as follows:

public class TodoControllerTest {

    private MockMvc mockMvc;

    @Autowired
    private TodoService serviceMock;

    @Autowired
    private WebApplicationContext webAppContext;

    @Before
    public void setUp() {
        Mockito.reset(serviceMock);
        mockMvc = MockMvcBuilders.webAppContextSetup(webAppContext).build();
    }

    //Add test methods here
}
In my opinion, the new configuration of our test cases looks a lot simpler and cleaner than the old configuration which was divided into TodoControllerTest and AbstractControllerTest classes.
Unfortunately, nothing is free.
This is a Trade-Off
Every software design decision is a trade-off which has both pros and cons. This is not an exception to that rule.
Configuring our test cases in the test class has the following benefits:
- We can understand the configuration of our test cases without reading all superclasses of the test class. This saves a lot of time because we don’t have to shift our attention from one class to another. In other words, we don’t have to pay the price of context switching.
- It saves time when a test fails. If we used inheritance because we wanted to avoid duplicate code or configuration, the odds are that our base classes would contain components which are relevant to some but not all test cases. In other words, we would have to figure out which components are relevant to the failed test case, and this might not be an easy task. When we configure our test cases in the test class, we know that every component is relevant to the failing test case.
On the other hand, the cons of this approach are:
- We have to write duplicate code. This takes longer than putting the required configuration to the base class (or classes).
- If any of the used libraries change in a way that forces us to modify the configuration of our tests, we have to make the required changes to every test class. This is obviously a lot slower than making these changes only to the base class (or classes).
If our only goal is to write our tests as fast as possible, it is clear that we should eliminate duplicate code and configuration.
However, that is not my only goal.
There are two reasons why I think that the benefits of this approach outweigh its drawbacks:
- Inheritance is not the right tool for reusing code or configuration.
- If a test case fails, we must find and solve the problem as soon as possible, and a clean configuration will help us to achieve that goal.
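The first point can be illustrated with composition: shared setup moves into a plain helper object that each test class declares as a field (JUnit 4's @Rule mechanism formalizes this idea). A minimal plain-Java sketch with illustrative names, not code from the original post:

```java
// Shared test setup as a helper object instead of a base class.
// The test class composes the helper, so the whole configuration is
// visible without opening any superclass.
class MockMvcHelper {
    private boolean built;

    // Stands in for MockMvcBuilders.webAppContextSetup(...).build()
    void build() {
        built = true;
    }

    boolean isBuilt() {
        return built;
    }
}

class TodoControllerTestSketch {
    // Composition: the helper is declared right here in the test class.
    final MockMvcHelper helper = new MockMvcHelper();

    void setUp() {
        helper.build();
    }
}
```

The helper is reusable across test classes, yet every test class still shows its full configuration at a glance.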
My stand in this matter is crystal clear. However, there is still one very important question left:
Will you make a different trade-off?
8 comments
>>we know that every component is relevant to the failing test case
I don’t think this is true in a real life project. The configuration is too big to write it for every test new. Most programmers just copy it from a other unit test and change something. After some time there are many different version from the configuration and nobody knows why there are different.
There is some truth in this. I configure my test cases in the following way:
In other words, if I write integration tests, the configuration of my test cases might contain non-relevant components as well. The reality is that it isn’t very practical to create a new application context configuration for each test class.
On the other hand, if I write pure unit tests, the configuration of my test cases is so small that it is practical to put it to the test class. If the configuration is any bigger than that, it indicates that the tested code is doing too much and it should be refactored / rewritten.
Copy and paste programming happens only if you allow it. We use a process where each commit must be reviewed before it can be added to the “main” branch of our project. At the moment we use a tool called Gerrit for this purpose. This is great way to share information to other team members and ensure that shitty code doesn’t end up to our “main” branch.
I guess TodoControllerTest should extend AbstractControllerTest in your example. Now it doesn't in the example code. Also, adding the @Override annotation to the implementation of the abstract method "setUpTest" would make it more apparent that we're overriding a method from the superclass. ;)
Good points! I will update the sample code. Thanks for pointing these mistakes out!
I got some errors when I override the SimpleJpaRepository to add my customized method. I just defined the method: public T saveWithoutFlush(T entity);
and then the errors like these:
Caused by: org.springframework.data.mapping.PropertyReferenceException: No property save found for type User!
Update: I removed the unnecessary information from the stacktrace. – Petri
The problem is that the User entity doesn't have a property called save. It is kind of hard to say what causes this without seeing the source code. Can you add the source code of your repository class to Pastebin and leave a new comment which contains the link to the source code of your repository?
As for me, the cons outweigh the pros by an order of magnitude at least.
That is fine. You should always use the method that makes sense to you.
http://www.petrikainulainen.net/programming/testing/writing-clean-tests-it-starts-from-the-configuration/
Hi guys, can't sleep!
Need help to deploy an app that uses Spring MVC with hibernate on Bluemix using an external relational DB (MySql).
First, I set up a working environment with a local service, by creating a dataSource bean that reads the default relational DB for the app and uses the correct driver. (This actually works.) Please note that this won't work if you have two different relational DB services bound to the app.
@Configuration
@Profile("cloud")
public class CloudFoundryDataSourceConfiguration extends AbstractCloudConfig {

    @Bean
    public DataSource dataSource() {
        return connectionFactory().dataSource();
    }
}
The tricky part is when trying to substitute the local service with an external one by binding a user-provided service, and selecting it explicitly within the bean:
@Bean
public DataSource dataSource() {
    return connectionFactory().dataSource("external-service");
}
And then by creating the actual service
cf create-user-provided-service external-service -p "host, port, dbname, username, password"
host> ec2-mysql-service-somewhere-in-the-cloud.com
port> 3306
user> root
dbname> myschema
username> root
password> secret
Finally by deploying the app specifying the path and attributes as follows:
applications:
- name: appName
  memory: 1G
  instances: 1
  path: target/appName-1.0.0.war
  services:
  - external-service
This leads to an app that starts and completes its staging, but doesn't find the appropriate dataSource, nor does it use the user-provided service.
As an alternate solution I tried binding a service defined only by a URI
cf cups external-service -p '{"uri":"mysql://root:secret@dbserver.example.com:3306/mydatabase"}'
as recommended on CloudFoundry with Spring. Unfortunately this stops the Liberty runtime from completing its staging and shows:
-----> Downloaded app package (21M)
-----> Downloaded app buildpack cache (616K)
OK

FAILED
Server error, status code: 400, error code: 170001, message: Staging error: cannot get instances since staging failed
If I bind a different CUPS (without a URI), Liberty again starts and shows the app as "Started", but this won't solve my issue of connecting to an external DB.
Please help. Any thoughts?
Are you seeing that "Server error, status code: 400" error consistently after binding the custom service with just the uri? I was trying to replicate this and so far I'm unable to.
Hi Jarek, thank you so much for your help. Every time I bind a service with just a URI I experience same issue as described. Will post a link to a sample repo replicating the issue.
Answer by Ryan J Baxter (2041) | Aug 18, 2014 at 09:35 AM
I would think your second attempt at defining a URI would work. I have looked into the logic that Spring Cloud uses before, and there are a few services where it relies on the protocol to figure out the service type (I can't remember off the top of my head if mysql is one of them though). A couple of suggestions...
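When debugging this, it can help to check that the uri in the user-provided service parses the way a connector would read it (scheme selects the service type, path names the database). A stand-alone sketch with java.net.URI; the class name is mine:

```java
import java.net.URI;

// Parses a Cloud Foundry style user-provided service URI the way a
// connector would: scheme selects the service type, path names the db.
public class CupsUriCheck {
    static String[] parse(String cups) {
        URI uri = URI.create(cups);
        return new String[] {
            uri.getScheme(),                 // e.g. "mysql", used for type detection
            uri.getUserInfo(),               // "user:password"
            uri.getHost(),
            Integer.toString(uri.getPort()),
            uri.getPath().substring(1)       // strip leading '/' -> database name
        };
    }

    public static void main(String[] args) {
        for (String part : parse("mysql://root:secret@dbserver.example.com:3306/mydatabase"))
            System.out.println(part);
    }
}
```

If any part comes back null or -1 here, the staging failure may simply be a malformed uri value in the CUPS.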
Hi Ryan, thnx for your help.
I followed your recommendations: first I used the Tomcat buildpack and it unfortunately didn't work, with the same error. Then, using Tomcat, I tried to work it out by specifying a basic DataSource as defined in the CUP service, and the same thing happened.
-> Still trying to extend a service from Spring Cloud. Will post my findings shortly.
Which other java frameworks would you recommend to use within Bluemix?
Will post a link to a sample repo shortly.
Let me know when the repo is posted and I will take a look.
Bump! Hi Marco, Have you made any progress with this problem?
https://developer.ibm.com/answers/questions/23420/$%7BeditUrl%7D/
src/diffusion.h
Time-implicit discretisation of reaction–diffusion equations
We want to discretise implicitly the reaction–diffusion equation
$$\theta\partial_t f = \nabla\cdot(D\nabla f) + \beta f + r$$
where $\beta f + r$ is a reactive term, $D$ is the diffusion coefficient and $\theta$ can be a density term.
Using a time-implicit backward Euler discretisation, this can be written
$$\theta\frac{f^{n+1} - f^{n}}{dt} = \nabla\cdot(D\nabla f^{n+1}) + \beta f^{n+1} + r$$
Rearranging the terms we get
$$\nabla\cdot(D\nabla f^{n+1}) + \left(\beta - \frac{\theta}{dt}\right) f^{n+1} = -\frac{\theta}{dt} f^{n} - r$$
This is a Poisson–Helmholtz problem which can be solved with a multigrid solver.
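As a stand-alone illustration of the backward Euler step (a sketch, not Basilisk code: 1D, constant $D$, $\theta = 1$, $\beta = r = 0$, zero ghost values at both ends), a single implicit step amounts to one tridiagonal solve:

```c
#define N 7

/* One backward Euler diffusion step on a 1D grid: solve
   -a f[i-1] + (1 + 2a) f[i] - a f[i+1] = fold[i],  a = dt*D/dx^2,
   with f = 0 beyond both ends, using the Thomas (tridiagonal) algorithm.
   On entry f holds f^n; on exit it holds f^(n+1). */
void diffuse_step(double f[N], double dt, double D, double dx)
{
    double a = dt*D/(dx*dx);
    double b = 1. + 2.*a;       /* diagonal coefficient */
    double c[N], d[N];          /* modified super-diagonal and r.h.s. */

    /* forward elimination */
    c[0] = -a/b;
    d[0] = f[0]/b;
    for (int i = 1; i < N; i++) {
        double m = b + a*c[i-1];
        c[i] = -a/m;
        d[i] = (f[i] + a*d[i-1])/m;
    }
    /* back substitution */
    f[N-1] = d[N-1];
    for (int i = N - 2; i >= 0; i--)
        f[i] = d[i] - c[i]*f[i+1];
}
```

The implicit step is unconditionally stable and positivity-preserving: a spike spreads while its peak decreases, whatever the timestep.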
#include "poisson.h"
The parameters of the diffusion() function are a scalar field f, scalar fields r and $\beta$ defining the reactive term, the timestep dt and a face vector field containing the diffusion coefficient D. If D or $\theta$ are omitted they are set to one. If $\beta$ is omitted it is set to zero. Both D and $\beta$ may be constant fields.
Note that the r, $\beta$ and $\theta$ fields will be modified by the solver.
The function returns the statistics of the Poisson solver.
struct Diffusion {
  // mandatory
  scalar f;
  double dt;
  // optional
  face vector D;  // default 1
  scalar r, β;    // default 0
  scalar θ;       // default 1
};

trace
mgstats diffusion (struct Diffusion p)
{
If dt is zero we don’t do anything.
  if (p.dt == 0.) {
    mgstats s = {0};
    return s;
  }
We define f and r for convenience.
scalar f = p.f, r = automatic (p.r);
We define a (possibly constant) field equal to $-\theta/dt$.
  const scalar idt[] = - 1./p.dt;
  (const) scalar theta_idt = p.θ.i ? p.θ : idt;
  if (p.θ.i) {
    scalar theta_idt = p.θ;
    foreach()
      theta_idt[] *= idt[];
  }
We use
r to store the r.h.s. of the Poisson–Helmholtz solver.
  if (p.r.i)
    foreach()
      r[] = theta_idt[]*f[] - r[];
  else // r was not passed by the user
    foreach()
      r[] = theta_idt[]*f[];
If $\beta$ is provided, we use it to store the diagonal term $\beta - \theta/dt$.
  scalar λ = theta_idt;
  if (p.β.i) {
    scalar β = p.β;
    foreach()
      β[] += theta_idt[];
    λ = β;
    boundary ({λ});
  }
Finally we solve the system.
  return poisson (f, r, p.D, λ);
}
http://basilisk.fr/src/diffusion.h
Eric Evans writes in his book Domain-Driven Design:
Many objects have no conceptual identity. These objects describe some characteristic of a thing.
A value object is an object that describes some characteristic or attribute but carries no concept of identity.
There are many samples where the introduction of a value object is useful. One of the most used value objects in DDD is certainly the Money value object. There is even a pattern called after this value object (the money pattern).
A large proportion of the computers in this world manipulate money. But money isn't a first-class data type in the .NET Framework. The lack of a type causes problems, the most obvious surrounding currencies. If all your calculations are done in a single currency, this isn't a huge problem, but once you involve multiple currencies you want to avoid adding an amount expressed in dollars to an amount expressed in euros without taking the currency differences into account. Also rounding is a problem. Monetary calculations are often rounded to the smallest currency unit (pennies for the dollar, cents for the euro).
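To make the money pattern concrete, here is an illustrative sketch (written in Java since the idea is language-neutral; the class and method names are mine, not from any quoted source): an immutable amount-plus-currency pair that stores the amount in the smallest currency unit and refuses to add mismatched currencies.

```java
import java.util.Objects;

// Minimal money value object: immutable, currency-aware addition,
// amounts kept in the smallest unit (cents) to sidestep rounding drift.
final class Money {
    private final long cents;
    private final String currency;

    Money(long cents, String currency) {
        this.cents = cents;
        this.currency = currency;
    }

    Money add(Money other) {
        if (!currency.equals(other.currency))
            throw new IllegalArgumentException("currency mismatch");
        return new Money(cents + other.cents, currency);
    }

    long cents() { return cents; }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Money)) return false;
        Money m = (Money) o;
        return cents == m.cents && currency.equals(m.currency);
    }

    @Override public int hashCode() { return Objects.hash(cents, currency); }
}
```

Like every value object, two Money instances with the same amount and currency are interchangeable; identity plays no role.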
Another typical example of a value object is an Address. The address is even a very interesting beast, since it contains a reference to Country which in turn often is treated like an entity.
Yet another example is a geographical coordinate. It consists of the two values longitude and latitude.
Often we see people introduce a value object for the names of a person. Such a name value object could e.g. consist of the three members first name, middle name and last name where the first and the last are mandatory and the middle name is optional.
Finally another well known value object is Color. Color is a structure which normally consist of four values (red, green, blue and alpha)
Note that the alpha channel is an indicator for the transparency of a colored shape. It goes from completely transparent (0) to opaque (255).
As told earlier a value object should always be immutable. The consequence of this is that once its properties are set they cannot be changed. The best way to guarantee this is that all properties are read-only (they have private or no setter methods). A new instance of a value object is completely defined through its constructor. Let's take as a sample the Name value object.
public class Name
{
public string FirstName { get; private set; }
public string MiddleName { get; private set; }
public string LastName { get; private set; }
public Name(string firstName, string middleName, string lastName)
{
FirstName = firstName;
MiddleName = middleName;
LastName = lastName;
}
}
Since a value object is immutable it makes sense that each value object must always be in a valid state. The validation happens in the constructor of the value object. Again let's look at the Name value object as an example. Below I present the constructor of the value object. This time with the validation logic.
public Name(string firstName, string middleName, string lastName)
{
    // Validation logic
    if(string.IsNullOrEmpty(firstName))
        throw new ArgumentException("First name cannot be undefined.");
    if(string.IsNullOrEmpty(lastName))
        throw new ArgumentException("Last name cannot be undefined.");
    if(firstName.Length > 50)
        throw new ArgumentException("Length of first name cannot exceed 50 characters.");
    if(middleName != null && middleName.Length > 30)
        throw new ArgumentException("Length of middle name cannot exceed 30 characters.");
    if(lastName.Length > 50)
        throw new ArgumentException("Length of last name cannot exceed 50 characters.");

    FirstName = firstName;
    MiddleName = middleName;
    LastName = lastName;
}
It is very important that two instances of a value object can be compared, that is, that we can determine whether they contain the same values in their constituent properties. To achieve this we have to at least override the two sister methods Equals and GetHashCode which are inherited from the base class System.Object.
But we should also implement the generic interface IEquatable<T> in our value object to provide a type safe method for comparing two instances.
public class Name : IEquatable<Name>
{
    // omitted code for brevity...

    public override int GetHashCode()
    {
        return string.Format("{0}|{1}|{2}", FirstName, MiddleName, LastName).GetHashCode();
    }

    public override bool Equals(object obj)
    {
        return Equals(obj as Name);
    }

    public bool Equals(Name other)
    {
        if(other == null) return false;
        return FirstName.Equals(other.FirstName) &&
               ((MiddleName == null && other.MiddleName == null) ||
                (MiddleName != null && MiddleName.Equals(other.MiddleName))) &&
               LastName.Equals(other.LastName);
    }
}
Note that my implementation of GetHashCode is certainly not the unique or best implementation. But it is easy and works for me.
In the Equals method I compare the two instances of the value object property by property. Since the middle name is optional it can be null and thus needs a special treatment.
For convenience we can also override the operators == and != as follows
public static bool operator ==(Name left, Name right)
{
    return Equals(left, right);
}

public static bool operator !=(Name left, Name right)
{
    return !Equals(left, right);
}
this allows me to use such constructs as
if(name1 == name2) {...} or
if(name1 != name2) {...}.
Creating an instance of a value object can be error prone when using the constructor. The code is not very readable. How should I know whether the following code fragment is correct?
address = new Address("Paradise Street 12", "P.O.Box 233", "Neverland", "82344", unitedStates);
Could it possibly be that the postal code and the city are confused? How should I know. Just by reading I have no idea since the code is not self describing. So, is this the correct version?
address = new Address("Paradise Street 12", "P.O.Box 233", "82344", "Neverland", unitedStates);
Note that both versions compile, but only the first one is correct. To eliminate this weakness people often implement object builders for complex value objects like an address. Often a builder implements some kind of fluent interface to make the code very self explaining and compact (free of syntactic noise!).
address = new AddressBuilder()
    .AddressLine1("Paradise Street 12")
    .AddressLine2("P.O.Box 233")
    .PostalCode("82344")
    .City("Neverland")
    .Country(unitedStates);
The above code snippet is very self expressing, isn't it?
How is such a builder implemented? Let's have a look at a possible solution
public class AddressBuilder
{
    internal string addressLine1;
    internal string addressLine2;
    internal string city;
    internal string postalCode;
    internal Country country;

    [DebuggerStepThrough]
    public AddressBuilder AddressLine1(string line)
    {
        addressLine1 = line;
        return this;
    }

    public AddressBuilder AddressLine2(string line)
    {
        addressLine2 = line;
        return this;
    }

    public AddressBuilder PostalCode(string code)
    {
        postalCode = code;
        return this;
    }

    public AddressBuilder City(string city)
    {
        this.city = city;
        return this;
    }

    public AddressBuilder Country(Country country)
    {
        this.country = country;
        return this;
    }

    public static implicit operator Address(AddressBuilder builder)
    {
        return new Address(builder.addressLine1, builder.addressLine2, builder.city, builder.postalCode, builder.country);
    }
}
Especially have a look at the implementation of the implicit operator!
Note that the DebuggerStepThrough attribute is used to avoid debugging through the builder code since the code can be assumed to be error free (it is trivial).
When following TDD (that is: write the test first and only then implement the code to satisfy the test...) we often need some sample data. In the case of a value object we can directly create such an instance in the test method. But this is not DRY since we will have a lot of code duplication. One possible solution is the introduction of a so called object mother. This is a class with static methods which delivers us prefabricated (valid) value objects.
An Object Mother is another name for a factory for test objects. It can be implemented as static class with appropriate methods, e.g.
public static class ObjectMother
{
    private static readonly Country unitedStates = new Country("USA", "United States of America");
    private static readonly Country switzerland = new Country("CH", "Switzerland");

    public static Address GetAddress()
    {
        return new AddressBuilder()
            .AddressLine1("Paradise Street 12")
            .AddressLine2("P.O.Box 233")
            .City("Neverland")
            .Country(unitedStates);
    }

    public static Address GetSwissAddress()
    {
        return new AddressBuilder()
            .AddressLine1("In der Matte 8")
            .City("Bern")
            .Country(switzerland);
    }
}
In the above sample I use the address builder introduced above to create sample value objects of type address.
Depending on the needs we can have one or several methods for any object type we need (or even several overloads of a method if we want some configurability of the created objects...).
A special variant of a value object is one whose base is an enum type. Let me give some samples:
Let's assume we have a task entity. A task object has a state which can have any of the following values
public enum TaskStatusEnum
{
    Undefined = 0,
    Pending,
    InProgress,
    Done
}
But the direct usage of an enum type is unhandy in a domain model. Thus I never use an enum type directly as a value object but rather encapsulate it in a class. A possible implementation for this would be
public class TaskStatus
{
    public TaskStatusEnum Status { get; private set; }
    public string Description { get { return Status.ToString(); } }

    public TaskStatus(TaskStatusEnum status)
    {
        Status = status;
    }
}
Instances of the TaskStatus class are value objects. Only the property Status is mapped to the database. The Description property is only for the visual representation of a task status (on a view). Of course I would have to implement the IEquatable<T> interface in the above class as well as override the Equals and GetHashCode methods. But I have omitted this for brevity.
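For completeness, a possible sketch of those omitted equality members (my reconstruction, not the author's original code):

```csharp
using System;

public class TaskStatus : IEquatable<TaskStatus>
{
    public TaskStatusEnum Status { get; private set; }
    public string Description { get { return Status.ToString(); } }

    public TaskStatus(TaskStatusEnum status)
    {
        Status = status;
    }

    public bool Equals(TaskStatus other)
    {
        // Value objects compare by value: two statuses are equal
        // when their underlying enum values match.
        return other != null && Status == other.Status;
    }

    public override bool Equals(object obj)
    {
        return Equals(obj as TaskStatus);
    }

    public override int GetHashCode()
    {
        return Status.GetHashCode();
    }
}
```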
For convenience I normally implement also a static property get for any of the possible values of the enum in the above class, that is
public static TaskStatus Undefined { get { return new TaskStatus(TaskStatusEnum.Undefined); } }
public static TaskStatus Pending { get { return new TaskStatus(TaskStatusEnum.Pending); } }
public static TaskStatus InProgress { get { return new TaskStatus(TaskStatusEnum.InProgress); } }
public static TaskStatus Done { get { return new TaskStatus(TaskStatusEnum.Done); } }
As you can see it's a little bit more overhead than using an enum directly, but it's definitely worth the effort. You gain all the advantages a value object offers you.
When we deal with NHibernate a value object is represented by a Component. A value object is not stored in a separate table but rather embedded in the table related to the containing entity. That is, if I have an Account entity which contains a property Balance which in turn is a value object (of type Money) then I only have a table Account in the database (but no Money table) and the fields of the Balance value object are part of the Account table.
How are value objects mapped in NHibernate? I want to describe three possible ways how we can achieve the desired result. Let's take a (simplified) entity Account as an example
public class Account
{
    public Guid Id { get; set; }
    public string AccountNo { get; set; }
    public Money Balance { get; set; }

    // additional properties and logic
    // omitted for brevity...
}
The most common way to describe the mapping between a domain model and the underlying database is by using XML mapping files.
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <class name="Account">
    <id name="Id">
      <generator class="guidcomb"/>
    </id>
    <property name="AccountNo" not-null="true"/>
    <component name="Balance">
      <property name="Value"/>
      <property name="CurrencyCode"/>
    </component>
  </class>
</hibernate-mapping>
Note that the Money value object is mapped via the <component> tag in the mapping file.
If you are using Castle Active Record for your mapping then you just have to decorate the Balance property of the Account with the attribute [Nested].
[ActiveRecord]
public class Account
{
    [PrimaryKey]
    public Guid Id { get; set; }

    [Property]
    public string AccountNo { get; set; }

    [Nested("Balance")]
    public Money Balance { get; set; }
}
You can provide as a parameter to the attribute the column prefix that will be used when mapped to the underlying database table. In the above example the fields in the Account table would be BalanceValue and BalanceCurrencyCode.
With the new Fluent NHibernate framework, which I described here, one can define the mapping as follows
public class AccountMap : ClassMap<Account>
{
    public AccountMap()
    {
        Id(x => x.Id);
        Map(x => x.AccountNo)
            .CanNotBeNull()
            .WithLengthOf(20);
        Component<Money>(
            x => x.Balance, m =>
            {
                m.Map(x => x.Value, "BalanceValue");
                m.Map(x => x.CurrencyCode, "BalanceCurrencyCode");
            });
    }
}
Note that the benefit of the fluent interface is not the brevity of the code but rather the robustness, testability of the mapping as well as the ability to include the mapping in any refactoring.
I have introduced you to the value object, which is a fundamental piece of DDD. Not only have I presented the theory behind a value object but also shown you some possible implementations for immutability, validation and mapping of value objects. I have also shown how one can handle value objects which are based on a .Net enum. Further, I introduced the concept of builders (for value objects) which help you make the code more readable (and thus maintainable). Last but not least I discussed the usage of the Object Mother pattern in the context of test driven development (TDD).
Enjoy
Slackware :: Bash-completion-1.3-noarch-1 Causes Xorg Failure? Feb 16, 2011
/etc/bash_completion.d/slapt has a syntax error that causes x to fail to load. I had to move the file to be able to run x. Here is the offending file:
[URL]
I've spent some time searching for answers to this and I haven't found much at all. Please feel free to post pointers to other threads that discuss this particular problem, if you find any. The problem is that in bash I want to mount an iso file to inspect the contents with the command:

Code:
sudo mount myCD.iso CDMount -o loop

The command works fine, but pressing tab to complete either the iso filename or the CDMount directory does not work. The completion suggestions I get are existing mountpoints, but they should include the files and directories in the current directory. This worked with Ubuntu 10.04 but not with 11.04.
'-noarch-' is a substring in the name of some slack packages. E.g., bittorrent-4.4.0-noarch-2.tgz. What's the meaning?
New Fedora 12 install - I installed the bash_completion package, but it's not filling in known_hosts for ssh. It worked fine in Fedora 11. Is there something I've done wrong, or is this missing in F12?
I wondered if there's a way to do rotational-style completion in bash, similar to the behavior on cmd.exe. I've found it speeds me up in regard to entering commands.
Just installed 10.04 64bit and in gnome-terminal I have no bash completion when sudo is used. For example: apt- gives apt-get, but sudo apt- does nothing; I get no suggestions from the terminal.
I have disabled root ssh logins for security. When I am logged in as a normal user over SSH and do a su into root, the tab-completion stops working with apt. It still works when doing normal file-browsing as the original user, and it has worked previously when logged directly into root. How do I make it autocomplete again? I am using Debian Stable on a headless homeserver.
Can Fedora do bash completion of package names in yum?
Out of the box, Bash in 10.04 is configured such that it won't expand/complete parameters when there's a single match for a parameter with a leading wildcard. For example, if I have the following files in a directory:
Code:
ABC.bin
DEF.bin
GHI.bin
...and I type cp *E*, I expect to be able to press TAB and have Bash expand *E* to DEF.bin, since that's the only file in the directory with a capital E in its name.
(Note: if I actually submit the command with the wildcards in place, the correct file will be used then, but I don't get to see it beforehand.) I imagine there's something in /etc/bash_completion that's preventing this from working properly. Does anyone know what it is?
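One way to check whether the shell side of the expansion works at all is compgen, which expands a glob the same way completion should (a diagnostic sketch; if this prints DEF.bin, the culprit is likely a completion rule in /etc/bash_completion rather than bash itself):

```shell
# Reproduce the poster's directory and expand the wildcard pattern.
cd "$(mktemp -d)"
touch ABC.bin DEF.bin GHI.bin
compgen -G "*E*"    # -> DEF.bin
```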
[URL]
Code:
#if [ -f /etc/bash_completion ]; then
# . /etc/bash_completion
#fi
What versions of Ubuntu have this commented out? Every version I have used always has this uncommented.
I am writing completion function for one PHP framework called symfony. It has command line interface with syntax:
Code:
symfony [options] [namespace:]action
I want to make action be autocompletable. The function is simplest so far:
Code:
function _symfony_commands()
{
[ -r "cache/completion/.sf" ] && cat cache/completion/.sf
}
[code]....
But, if there is : symbol which separate namespace from action problems coming:
symfony doct[TAB]
will be completed to
symfony doctrine:
But nothing happens if you want to complete after the : symbol. I've found out that for readline there are three words, because it splits the line with $COMP_WORDBREAKS
Code:
$ echo $COMP_WORDBREAKS
"'><=;|&(:
I played with the $COMP_WORDS array and tried every idea I had to make it work, but failed.
What should I do to escape the colon and make readline consider it one word? Or is there perhaps a way to work around it?
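One common workaround is to keep the colon in the candidate list but strip the already-typed namespace prefix from the replies. The sketch below uses an invented command list and does the trimming by hand (bash-completion versions that ship `__ltrim_colon_completions` can use that helper instead):

```shell
# Sketch of colon-safe completion for "symfony namespace:action".
_symfony() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    # Readline treats ':' as a word break (it is in COMP_WORDBREAKS),
    # so everything up to the last colon is already on the line and
    # must be stripped from the candidates we hand back.
    local prefix=""
    [[ $cur == *:* ]] && prefix="${cur%:*}:"
    local -a all=(doctrine:build doctrine:migrate cache:clear)
    COMPREPLY=()
    local c
    for c in "${all[@]}"; do
        [[ $c == "$cur"* ]] && COMPREPLY+=("${c#"$prefix"}")
    done
}
```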
According to the Bash man pages, the HOSTFILE variable names a file in /etc/hosts format that is read when the shell needs to complete a hostname.
Here's the line in my .bashrc where HOSTFILE is set:
Code:
export HOSTFILE="~/.hosts"
I opened a new bash session, created ~/.hosts, filled it with the names of servers that I wanted to expand using tab completion. then typed
Code:
ssh p<tab><tab>
expecting to get a lists of all of the hosts in ~/.hosts starting with 'p'. Bash simply beeped at me twice.
I tried
Code:
$ shopt hostcomplete
hostcomplete on
Code:
ssh bchittenden@p<tab><tab>
same results.
[code].....
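One likely cause, worth checking before anything else: in the .bashrc line above the tilde sits inside double quotes, and bash does not perform tilde expansion inside quotes, so HOSTFILE ends up pointing at a file literally named "~/.hosts" that never exists. A small demonstration:

```shell
# Tilde expansion only happens when the ~ is unquoted.
quoted="~/.hosts"       # stays the literal two characters "~/" plus ".hosts"
unquoted=~/.hosts       # expands to $HOME/.hosts
printf '%s\n' "$quoted"
printf '%s\n' "$unquoted"
```

If this is the problem, `export HOSTFILE=~/.hosts` (no quotes around the tilde) should fix it.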
Is there a way to get colored output when using tab completion in a terminal? My colors are fine everywhere else, so I know that I've enabled a color terminal successfully. Using bash in Ubuntu (10.10).
Tab-completion indexes system folders (like /usr/bin, /usr/local/bin)! So say I'm in a folder that has two files, 'text' and 'myprog'; I type in an 'm' then tab, and I get hundreds of results including 'mysql', 'mysqlconfig', and others, as I'm sure you can imagine. Is there a way to set it to default or something else that will only make it index the current folder?
I tried changing my PATH variable so I could execute programs in the current directory without './' - what I added to PATH was ':.' at the end (apparently this is not the way to do it... :S). I tried resetting PATH various times ('unset PATH', 'PATH=$whatever...') but this has not fixed the problem. Using 'unset PATH', of course, removes everything from PATH, which meant that programs (like 'ls') in /usr/bin and /usr/local/bin can't be found. Obviously I want those to be found, but I would rather not tab through them!
I have a bash script that checks for contents in a folder every 15 seconds and then acts on its contents. This works great for the average-size file, however on very large files it starts acting on the file before it's completely written. Is there a facility in the bash shell to get a file-complete signal or such? Here is the trigger that launches a larger script.
Code:
#!/bin/sh
while true
do
$HOME/bin/hpgl.sh >/dev/null 2>&1 &
[code].....
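There is no built-in "file complete" signal in bash itself, but a simple heuristic is to treat a file as complete once its size stops changing between two polls (a sketch; it assumes the writer streams continuously, uses GNU `stat -c %s`, and an `lsof` check on the file would be a stronger alternative):

```shell
# Return success once the file's size is stable across two polls.
is_complete() {
    local f=$1 s1 s2
    s1=$(stat -c %s "$f" 2>/dev/null) || return 1
    sleep 1
    s2=$(stat -c %s "$f") || return 1
    [ "$s1" -eq "$s2" ]
}
```

The 15-second loop could then call `is_complete` on each candidate file and skip any that are still growing.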
After loading all the apps I use last night with a working FC14 x86_64, I powered up this morning and nothing happened after starting atd. I tried an alternate kernel, same effect. Start in init 1: OK. Go to init 3: OK. Ran startx: failed. Looked for xorg.conf: no file. Now, if I was still running Mandriva, I would have typed mcc and brought up the control panel so I could have sorted out the Xorg problem, or tried. There really needs to be something like this. I found that gdm was not installed, so I installed it. It will now boot up to the login screen, but won't accept my password. I restarted in init 3, logged in as root, added another user and set the password for that user. I also reset my password. Boot up again and get authentication failure on both users, and I can't log in as root as someone removed that feature.
Also I can't use Ctrl-Alt-Backspace to kill X so I have access to the CLI, where I can log in and run startx. I suspect several GNOME packages have been removed/lost, as I use left-hand mouse settings and most settings are back to default. All I have is a live CD to gain access to the forum, and I only have one computer. There really needs to be a fallback to one of the small DMs on Fedora so in situations like this you aren't forced to use the CLI, that's if a new user would even know how to access it. As there is no bootsplash, you have to already know to hit return and wait for the bootsplash.
IMO this is a serious flaw in what otherwise is a good OS. In the meantime I'll try to install XFCE to get X running, but I will need some help getting GNOME back. I'll load gnome-panel and see if yum will pick up the rest as dependencies. I hope this isn't a bug, as losing the DM after the first machine power cycle is fairly drastic.

---------- Post added at 08:52 AM GMT ---------- Previous post was at 08:18 AM GMT ----------

I resorted to yum install gnome-* which loaded 255 packages, including gnome-desktop, gnome-common and all the gnome applets. This wasn't just one or two packages corrupted, it was removal of the DM altogether. Similar effect to running "rm -rf gnome-*".

I can normally adjust the backlight settings of my laptop by pressing Fn and the up and down arrows. I am using the default xorg server without using xorgconfig in Slackware, and it works perfectly so far despite this slight problem. When I do that key combo (in KDE) it does display the meter of brightness, but I can't move the settings. I believe this may be an X issue, but I'm not sure.
I have searched and searched and maybe I don't know how to articulate this issue with out just posting the problem I'm having. Every time I bring up a terminal window I get the following "Header"
declare -x COLORTERM="gnome-terminal"
declare -x CPLUS_INCLUDE_PATH="/usr/lib64/qt/include"
declare -x DBUS_SESSION_BUS_ADDRESS="unix:abstract=/tmp/dbus-xSFd6zqrYQ,guid=dc5e07974559da016842742900000090"
declare -x DISPLAY=":0.0"
[Code]...
To be honest I cheated and used the .bashrc / .profile files from Ubuntu, and all was working fine for a while now; it seems something changed to cause this... any ideas on why I am getting this? I checked my .bashrc and my /etc/profile and it doesn't look like anything is amiss.
I'm writing a script for asterisk to monitor trunk failure. I do a loop for every trunk it has, and would like to name variables like server1=, server2=, numbering the server variable according to the trunk that is up. Here is the script:
[Code]....
What I would like to do is name the server, username and status variables with the count variable, like server$COUNT, to have server1 when on trunk one. But as soon as I add $COUNT after server, it seems to try to make it a command; it says:
Code:
./test.sh: line 45: server1=74.63.41.218: command not found

I generated a noarch rpm using alien for the rapidsvn tar.gz file. After generating and installing the rapidsvn rpm it isn't working for CUI or GUI. When I checked the rpm installation status using 'rpm -qa' it shows that the rpm is installed. But there is no output from the installation.
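For the server$COUNT problem above: bash parses `server$COUNT=value` as a command name, not an assignment. The usual fix is to create the variable with `declare` and read it back with indirect expansion, as in this sketch (addresses invented):

```shell
# Build per-trunk variables named server1, server2, ... dynamically.
COUNT=1
declare "server$COUNT=74.63.41.218"
COUNT=2
declare "server$COUNT=10.0.0.7"

# Read a dynamically named variable back via ${!name}.
name="server$COUNT"
echo "${!name}"    # -> 10.0.0.7
```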
I just did a clean install of 13.1 on one of my laptops and the scroll doesn't work on the synaptics touchpad. I've seen some comments about adding a file to the /etc/X11/xorg.conf.d/ directory. I don't have this directory. Is it OK to add this, or did I screw something up during the install? I did another 13.1 install about a week ago and it is also missing this directory. Is this just something in current and not in 13.1?
I have freshly installed Slackware 13 64-bit.
After messing around last time trying to compile fglrx for my ATi card, I now understand it uses a built in radeon driver.
My question is this: how do I get X to recognise my custom xorg.conf file? I dropped it in /etc/X11 but to no avail.
HAL does its job great - but I need my xorg.conf as I have dual monitors which HAL doesn't configure correctly (displays mirrored, not stretched).
I am new to Slackware/Linux and completed the install (Slackware 13 64-bit x86_64). Everything is running correctly. During the install I was asked if I wanted to install X-Windows and I declined. Now I want to install it. I downloaded xorg-server-1.6.3-x86_64-1.txz and installed it using slackpkg. It seemed to me to install very quickly. I see it in the list of installed packages now. What are the steps to install a fresh X-Windows on Slackware if I did not pick it during install?
I bought a very beautiful pink tv-monitor 26", model LG26LED6500, for my daughter and I also intend to enjoy it, setting it up in my Slackware 13.37. I use a VGA cable and a 6600GT nvidia card, but now I'm having a doubt because there aren't vertical and horizontal frequency specs in the manual. Only:
I did an xorg.conf using the vesa driver and the KDE screen output was 1024x768 at 61Hz.
If I try to use the nvidia driver and the xorg.conf piece above, X breaks down. If I use any manual frequency parameters X breaks down too.
1) What's the difference between a CRT config and a tv-monitor xorg.conf?
2) Do I have to use Modeline strings in the Monitor section and Modes in the Screen section in this situation?
I am using Slackware 13.1 and recently the Xorg server caused a segmentation fault under KDE 4.4.3 twice. Here are listings from the Xorg.log files
Code:
Backtrace:
0: X (xorg_backtrace+0x3b) [0x80a1e6b]
[code]....
I have a server running Fedora 8 which I installed via the DVD that came with the book I am using as a learning aid. Anyway, I downloaded (via bit torrent) the Fedora 10 DVD and everything reported success, so I burned the DVD and booted the machine on which I want to install 10 (not my server).
Things were moving along fine until it started doing the actual install (partitioning etc. was complete). I had selected all three repositories. Somewhere in the middle of doing the install an error window opened with the following message: "The Automake16-1.6.3-14.noarch.rpm cannot be opened. This is due to a missing file, a corrupt package or corrupt media. Verify your installation source. If you exit, your system will be left in an inconsistent state that will likely require reinstallation."
It then gave me the choices of rebooting or ejecting. I took reboot, which left me in text mode with a limited version of GRUB. So, thinking it might be a problem with the added repos, I tried again, this time taking just the default repo. Then I got the same error, but having a problem with authconfig-gtk-5.4.4-1.fc10.i386.rpm. I again took the reboot option and it restarted from the DVD to take me to the regular start of installation.
So that leaves me with a nicely partitioned system, but I don't know how much has been installed or what to do to get it to install. I didn't have the startup test the disk per the online instructions. So after all this I did, and errors were found. I had errors on two different disks, so now I'm going to try using CDs instead.
This post was originally placed in the src2pkg thread, just below.
But to avoid it being overlooked I am placing it in a new thread here.
I am using src2pkg v.2.0 with Slack 13.0 running kernel 2.6.29.6
In the past I used trackinstall to run 'make install' after configuration and compilation (using a makefile). Here's an example
of what happens when I use the current trackinstall that comes bundled
with src2pkg code...
C++ : Modern C++ : RAII
Modern C++ : Going Beyond "C with Classes"
- Preface
- std::vector
- RAII
- Containers
- Iterators
- Algorithms
- Functors
- Binders
- Storing Functors
- References
- Glossary
- Appendices
Contents
Introduction
One common complaint about C and C++ is that you need to manage your own memory. A huge number of C programs end up leaking memory. Admittedly, if you're coding C-style in C++, it is quite difficult to always match your news and new[]s to deletes and delete[]s.
The Concept
RAII is an idiom that takes advantage of templates, destructors, and C++'s absence of GC (Garbage Collection) to provide an elegant, consistent method for handling all resources. GC may be convenient for memory, but I've yet to see one that manages filehandles, mutex locks, or sockets, for example.
RAII is really quite a simple idea. Basically, all resources should be owned by an instance of a class, and that class should release them in its destructor. If that instance is a local variable in a function, the resource will be released when the function returns. If that instance is a member of a class, the resource will be freed once the class is freed, even without a custom destructor.
RAII is an acronym that stands for Resource Acquisition Is Initialisation. You'll probably never actually hear someone use that name though, as it's somewhat misleading. The most important part of RAII is that destructors release the resources, not that they're acquired in constructors.
Examples
fstream
std::fstreams do have a close() member function, but it's rarely needed since, being a RAII class, it closes the file when it goes out of scope.
It also has a constructor that takes the name and modes for the file to open, which acquires the resource during initialisation.
Containers
std::vector was the very first thing I covered in this series, and it's a RAII class, as are all containers. They manage the memory they use so you don't have to.
Smart Pointers
The current standard only includes one smart pointer, std::auto_ptr. It's a conceptually simple class that "owns" a pointer and deletes it when it goes out of scope.
The complication with std::auto_ptr comes from its ownership transferring semantics, which will be discussed below.
Scoped Locks
The Boost Thread library uses a nice RAII class called scoped_lock.
Instead of letting users of the library call lock() and unlock() functions on mutexes, they instead create instances of scoped_locks that lock the mutex on construction and release it when destructed. This means that mutexes cannot accidentally be left locked, and it means that they're automatically released in reverse order of locking, thanks to the construction and destruction order guarantees for automatic local variables.
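A minimal sketch of the idea follows. The Mutex here is a dummy stand-in just to show the pattern, not a real synchronization primitive; Boost's real scoped_lock wraps an actual mutex:

```cpp
#include <cassert>

// Dummy mutex used only to demonstrate the RAII pattern; a real
// implementation would wrap an OS primitive such as pthread_mutex_t.
class Mutex {
    bool locked_;
public:
    Mutex() : locked_(false) {}
    void lock()   { locked_ = true; }
    void unlock() { locked_ = false; }
    bool locked() const { return locked_; }
};

// Locks on construction, unlocks on destruction - on every exit path,
// including exceptions. Copying is disallowed (declared, not defined),
// matching the pre-C++11 idiom used elsewhere in this article.
class ScopedLock {
    Mutex &m_;
    ScopedLock(ScopedLock const &);
    ScopedLock &operator=(ScopedLock const &);
public:
    explicit ScopedLock(Mutex &m) : m_(m) { m_.lock(); }
    ~ScopedLock() { m_.unlock(); }
};
```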
Design Considerations
A basic smart pointer is one of the clearest, most obviously useful situations for RAII, so let's try writing one, starting from the naïve version:
template <typename T>
class naive_ptr {
    T *ptr;
public:
    naive_ptr() : ptr(0) {}
    explicit naive_ptr(T *p) : ptr(p) {}
    ~naive_ptr() { delete ptr; }

    T *get() const { return ptr; }
    void reset(T *p = 0) { delete ptr; ptr = p; }
    T *release() { T *p = ptr; ptr = 0; return p; }

    // And we need to make it act like a pointer too
    T *operator->() const { return ptr; }
    T &operator*() const { return *ptr; }
};

// The comparisons take their arguments by const reference; taking them by
// value would copy the naive_ptr and trigger the double-delete discussed below.
template <typename T>
bool operator==(naive_ptr<T> const &lhs, naive_ptr<T> const &rhs) { return lhs.get() == rhs.get(); }

template <typename T>
bool operator!=(naive_ptr<T> const &lhs, naive_ptr<T> const &rhs) { return !(lhs == rhs); }
The functions included are fairly simple and obvious:
- get
- To get the value of the contained pointer, if we need it, since &*myptr is ugly.
- reset
- To safely change the contained pointer
- release
- To release the pointer from the control of the naive_ptr, in case we want to keep track of it some other way. ( For example, putting it into a different type of smart pointer or into a ptr_* from the Boost Ptr Container library. )
- operator* and operator->
- So that it can be dereferenced like a normal pointer.
The only thing here that might be surprising is that so many of the functions are const. The thing to remember here is that a T * const is a very different thing from a T const *—even if the pointer is const, the pointee can still be modified.
Much more interesting are the functions that are not included.
- operator[]
- This implementation of naive_ptr uses delete—not delete[]—to release the memory associated with the pointer. This means that storing a pointer allocated with new[] in one is quite unsafe, so if we prevent it from looking like an array it'll be harder for people to make this mistake. Similarly, there is no arithmetic provided.
- operator T*
Experience has shown that an implicit cast to the pointer type is not a good idea. It ends up allowing the use of subscripting and arithmetic, which, as above, is undesirable. It also makes it legal syntax to call delete with a naive_ptr as the argument, which is clearly bad. You might not think that it would happen, but for people unclear on the idea, or when changing old code to use smart pointers, it's quite possible.
- operator=(T*)
Giving a smart pointer a pointer to manage is something that should be quite explicit. Once a smart pointer owns a pointer, it'll take care of it. With an implicit operator= from plain pointers it's far too easy for a pointer to become owned by multiple smart pointers. myptr = &*myptr; is quite safe (if pointless) with a regular pointer, but would be fatal on a smart pointer, as it would delete the pointer. Plain pointers can also be repointed to and fro many times without releasing memory, but that's not so with our naive_ptr (unless you religiously use release, but that's not a good plan as it rather defeats the purpose of using a smart pointer in the first place). Plain pointers can also point to stack objects that are not to be deleted, which is also very dangerous with naive_ptr.
- operator<
- Relational operators on pointers are technically only defined when both pointers point into the same array, which should never be happening with naive_ptr. We could use std::less<T*> instead, as it defines a total ordering for pointers, but that ordering is useful mainly for use as keys in associative containers and, as I'll explain later, it's illegal to store naive_ptrs in containers.
Right now we have something that looks fairly useful. In fact, if you test it out, you might find that it seems to work fine:
#include <iostream>
#include "naive_ptr.hpp"

int main() {
    naive_ptr<int> p( new int(13) );
    std::cout << "*p = " << *p << std::endl;
    p.reset( new int(42) );
    std::cout << "*p = " << *p << std::endl;
}
The example above gives the results one would expect and doesn't leak any memory.
So what's the problem? Copies.
It's trivial to make an example that fails miserably:
#include <iostream>
#include "naive_ptr.hpp"

int main() {
    naive_ptr<int> p1( new int(13) );
    std::cout << "*p1 = " << *p1 << std::endl;
    naive_ptr<int> p2 = p1;
    std::cout << "*p2 = " << *p2 << std::endl;
}
The output will be fine, but it will (hopefully) crash while it's exiting.
The problem is the classic "Rule of Three" violation. naive_ptr has a pointer member that gets shallow copied when the object is copy constructed or assigned, which in this case results in the same pointer value being deleted twice, resulting in undefined behaviour.
There are 3 basic ways of dealing with this problem:
- Don't allow copies
This is the method chosen for streams (such as fstream) in the std::lib. It's certainly easy to implement and is fine in most situations. If you disallow copying of naive_ptrs (by declaring and not implementing a private copy constructor and private assignment operator) and remove the mutating operations (reset and release), you end up with something quite similar to boost::scoped_ptr from the Boost Smart Ptr library.
- Do a deep copy.
- This is the method used by containers. Copying a container means making a copy of each element. The intuitive deep copy, ptr ? new T(*ptr) : 0, will fail on polymorphic types, however. (If T were an abstract base class, for example.)
- Transfer ownership
This is the method used by std::auto_ptr. As evidenced by the existence of std::auto_ptr_ref (an auxiliary class used so that the transfer semantics work properly) and the number of revisions the relevant section of the standard went through, it's not simple to implement, but it can be incredibly useful. The original owner releases its pointer and the copy assumes ownership. It's particularly nice as it has no runtime overhead compared to normal pointer copies. Thanks to this, std::auto_ptr is a particularly elegant way of returning pointers to heap data from functions.
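The transfer can be observed directly. The sketch below uses std::unique_ptr with std::move, the C++11 successor to std::auto_ptr (which has since been deprecated and removed from the standard); with auto_ptr the plain copy p2 = p1 performed the same transfer implicitly:

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Ownership transfer: after the move, the source no longer owns the
// pointer, so the pointee is deleted exactly once on scope exit.
std::unique_ptr<int> make_value() {
    return std::unique_ptr<int>(new int(13));
}
```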
In Closing
RAII is a great help in writing elegant, safe code. Thanks to this idea, C++ has no need for—and doesn't have—the finally construction found in many GCed languages, such as Java. It's also safer, as it doesn't require the programmer to explicitly call the cleanup code at the end of each path through a function.
What's Next
RAII in the form of std::vector does a great job of managing resources when we previously would have needed to new[] and delete[] manually, but there are other ways of storing objects. Luckily, std::vector isn't the only nice data structure that the std::lib provides. There are lots of other Containers as well, for different situations.
Recently a friend asked me how you might create a Windows Forms application that only allows a single instance per computer. A print driver might make use of this functionality, for example, to launch a print job management dialog whenever a document prints. Never having needed this sort of functionality before, my initial answer wasn't very helpful. But being both curious and disinclined to back down from a technical challenge, I just had to figure this one out.
As I was looking for an inter-process synchronization mechanism, I came across the Semaphore and Mutex in the System.Threading namespace. A semaphore can be used to manage a pooled resource (memory buffer, thread pool, connection pool, etc.) by tracking the number of available resources. You instantiate a semaphore with an invariant maximum corresponding to the quantity of pooled resource entities (e.g., the number of database connections). Whenever a thread wants to use the resource, it calls .WaitOne() on the appropriate semaphore and blocks until the semaphore count is greater than zero. If the count is positive when the call is made, the thread will not need to block at all.
When a thread enters a semaphore, the semaphore decrements its count by one. The calling thread is responsible for calling the semaphore's .Release() method when it has completed using the resource. This allows the semaphore to increment its count--and as a result, gives another thread access to the pooled resource. You must call Release() inside of a finally block as soon as feasible in your codepath; my testing shows that the CLR will not release an unreleased semaphore when the semaphore reference goes out of scope, or even when the thread of execution terminates.
What we need, then, is a semaphore that has a maximum count of one (since we want only one instance of our application to be running) and will increment its count no matter how the thread that entered it terminates. Happily, this would be a pretty good one-sentence description of the Mutex class. The thread that calls WaitOne on a mutex ("mutual exclusion") gains ownership when it enters the mutex. It can release the mutex at any time by calling the ReleaseMutex method, but should it fail to do so (whether by logic error or run-time failure) before it terminates, the CLR will release the mutex on its behalf. As a result, the mutex should prove more reliable for our purposes.
Here is the singleton code in a nutshell:
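A minimal sketch of what such singleton startup code typically looks like (the mutex name and the MainForm class below are assumptions for illustration, not the author's original identifiers):

```csharp
using System;
using System.Threading;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // A named mutex is visible across processes on the same machine.
        using (Mutex m = new Mutex(false, "MyAppSingletonMutex"))
        {
            // Try to acquire ownership, waiting at most 1 ms.
            if (m.WaitOne(1, false))
            {
                try
                {
                    Application.Run(new MainForm());
                }
                finally
                {
                    // Best practice, even though the CLR would release
                    // an abandoned mutex when the thread terminates.
                    m.ReleaseMutex();
                }
            }
            else
            {
                // Another instance already owns the mutex.
                MessageBox.Show("The application is already running.");
            }
        }
    }
}
```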
One of three things can happen when the Main() thread calls m.WaitOne(1, false):
If the Main thread owns the mutex, it runs the application, using a try/finally block in order to guarantee a call to ReleaseMutex. While strictly speaking it is not necessary to call ReleaseMutex (the CLR will do so on the thread's behalf when it terminates), I have written the code this way in order to maintain a best practice. In some other situation, the thread that enters the mutex might continue performing more work (or enter a suspended state) after finishing its use of the resource and might not call ReleaseMutex, so it's a good idea to write the code in this fashion.
What do you think? Is this code helpful? Do you have any suggestions for improvement? Leave a comment!
http://geekswithblogs.net/chrisfalter/archive/2008/06/06/how-to-create-a-windows-form-singleton.aspx
16 February 2011 16:03 [Source: ICIS news]
By William Lemos
HOUSTON (ICIS)--The
Unlike the
At the end of each month, the four main producers would announce their price initiatives for the following month, and the market would embrace the lowest proposed price, no questions asked.
But the camaraderie, which lasted for years and survived the worst of times in 2008, began to crack in September 2010, when a producer broke ranks and split the market for the first time in more than 10 years.
Prior to last September, BD had settled at different levels only in July 2000, when the market was split between settlements of 25 cents/lb ($551/tonne, €408/tonne) and 26 cents/lb.
A previous split settlement occurred in December 1994, when some contracts settled at 23 cents/lb and others at 24 cents/lb.
While September 2010 was an atypical month, because nominations were separated by a wide 6-cent/lb price gap, the disagreement proved to be more than a bump on the road.
Two other split settlements followed in December and January, as the same producer that split September again refused to match lower prices proposed by its rival suppliers.
February was no different, but this time the roles were reversed.
Market sources said the supplier that had previously split the market three times came out with the lowest nomination, which the three other producers decided to ignore.
BD in February rose by 5 cents/lb and 8 cents/lb from January after the three BD producers settled at 99 cents/lb, while the fourth supplier settled at 97 cents/lb.
The recent string of split settlements could point to a new era in the US BD contract process, whereby each supplier may begin to individually negotiate prices with its consumers.
“That would be a reasonable conclusion,” a settlement participant said.
Another source called the split settlements “strange”, but warned that this could be the new reality for BD, predicting that at least one US producer might continue to ignore the rest of the market.
The uptrend in the US BD market is also likely to continue, market sources said, pointing to firm demand, tight supply and higher spot prices since the turn of the year.
BD spot prices were assessed at $1.10-1.15/lb in the first week of February, rising on average by around 20% from 90-95 cents/lb four weeks earlier.
The surge in January was fuelled by higher BD prices in other regions, particularly
US BD demand is estimated at around 320m lb/month, but monthly domestic production runs at about 250m lb, forcing buyers to look for imports to close the 20% gap.
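The cited 20% figure can be checked from the numbers quoted — a back-of-the-envelope calculation of my own, not a figure from the article:

```python
# US butadiene supply/demand figures quoted above, million lb/month.
demand = 320
production = 250

# Share of demand that must be covered by imports.
gap = (demand - production) / demand
print(f"import gap: {gap:.1%}")  # prints: import gap: 21.9%
```

So the shortfall is a bit under 22% of demand, consistent with the article's rounded "20% gap".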
The constraint on US BD supply stems mostly from restricted crude C4 availability, because of the widespread use of ethane at US crackers.
Ethane, which yields almost no C4s, accounted for around 60% of the feedstock volumes used in the US.
BD output in Europe in 2010, measured as a proportion of ethylene production, averaged 12%, or about twice the ratio for the US.
The outlook for BD in the
Market participants said another contract increase for BD was likely in March, provided that operating rates at downstream plants were not significantly affected by recent inclement weather in Texas.
Petrochemical production in Texas was disrupted by the inclement weather. At least two plants that use BD as a feedstock were affected, according to filings with state and federal regulators.
BD is not a big market, and even small disruptions can tilt the balance of the market, a source said, adding that the price direction for next month would only be clear at the end of February.
An increase in BD contract prices in March would be the fourth in as many months.
($1 = €0.74)
http://www.icis.com/Articles/2011/02/16/9435632/insight-us-butadiene-contract-restructure-takes-hold.html
I’m deprecating the use of “tips” pages. As the open discussion pages caught on, tips dropped off. For a month or two now I’ve not had a “tips” page put up, and nobody seems to have noticed. All the old ones remain for historical reference:
“Tips Pages”
What’s Going On?
Hurricane Season
Dorian is a Cat 4 last I looked. Trump is staying home and skipping his Poland trip. Guess that is a “stick in the eye” of the Democrats’ plans to call him heartless for abandoning the country during the “Shock and HORRORS!!! GLOBAL WARMING DISASTER!!!” of an absolutely typical hurricane. It skipped Puerto Rico so they were denied that “regurgitation” opportunity. Now Trump is going to be here and they can’t denigrate him for running off. Wonder what Plan C Bleating will be?
Out near the Bahamas at the moment, the NOAA graph says arrival in Florida about Sunday, but… Seems to have a wide range of times over the last few days and the place is a bit up for grabs too. One graph shows a Monday arrival and another has it north of the Bahamas (all on the same weather station – Weather4us / Roku – and NOAA-sourced graphs), but “earliest arrival” vs “present wind field” vs “wind speed probabilities” vs an un-named “swath of winds” graph.
My guess is that they are guessing…
Italy Gets A Government – AGAIN
From a “Far Right” coalition to a “Far Left” coalition, Salvini rides again. Still more or less on the “one a year” plan for governments… /sarc; (sort of…)
So no election right now, and the Southern Flank of the EU continues to be a PITA for the German / French core of control
What will come of it? Who knows. What’s in the news so far is that Italy wants to do yet more deficit spending to get their economy going and the EU is still saying “No!”. With The Left on board, my guess is that Italy will do it anyway, and get spanked by the EU, and that’s going to accelerate the implosion of the EU. If they DON’T spank Italy, then Greece will be screaming about differential treatment and Spain will immediately follow suit. If they DO spank Italy, then the risk of Italy following Britain out of the EU grows by a big jump.
Is more deficit spending really the solution? Probably not. Reduce regulatory burden, cut tax take, shrink government in general. Shown to work again and again and again. Lot of “regulation”, high taxes, big government: shown to result in economic stagnation, massive debt, and eventual economic and political collapse. Again and again and again…
It is a very bad parasite that kills the host, but Socialism isn’t a very good parasite, nor are Big Government Rent Seekers.
But for a while at least, Italy will be a fun show.. Chickens, roosts, and all that.
Deep State Skate
Comey gets a pass. “I’m shocked, shocked I say to find gambling is going on…” – Here’s your winnings…
Deep State still batting 100% on Top Cover.
USA Space Force
The USA now has a Space Force Command. Order signed. We’ve now got land, sea, Marines, cyber, air and now Space commands. (Oh, and centcom to try to coordinate them).
So gee, we have a Military Space Force, but are still hitching rides to space from the Russians… Yes, those “horrible” Russians are our ride to space, and work well with our guys. Somehow I don’t see Russia as being all that horrible to us…
Yes, Real Soon Now we’ll have our own way to get to space. Both SpaceX and Boeing are working on it. Any month now we will be able to field the capability we had in the 1970s…
Still waiting for that Single Stage To Orbit space plane we could have made in the ’60s if we’d wanted to. The X-15 went to space. A bit bigger and with more fuel, it could have been turned into a “Space Plane”. But we didn’t. Oh Well.
It’s not about technology, it is about a decision to do it. We’ve regularly scrubbed programs right when they were ready to take that step. From the aerospike engine to over-sizing the shuttle so it needed SRBs to get off the ground. Not sure why…
Maybe with this move the Space Command will want a taxi to get guys on orbit. Or maybe they will just manage satellites and some anti-sat launchers on the ground. My guess is that it’s a Space Command that will not put anyone in space for at least several decades.
Ebola Watch
There’s been a new case in Uganda:
JUNE 27, 2019
Ebola in Uganda, and the dynamics of a new and different outbreak
by Steven Hatch, The Conversation
By Rick Gladstone
June 13, 2019
Uganda: cases of the Ebola virus expanded significantly, from eight to at least 27.
So have they got it under control in Uganda since June 13?
Uganda has confirmed a new case of Ebola in the country’s Kasese district. According to the Ministry of Health, the case is of a nine-year-old girl of Congolese origin.
A statement signed by Minister of State Joyce Moriku Kaducu said: the patient traveled with her mother from the Democratic Republic of Congo, DRC, and entered Ugandan territory on August 28, 2019.
So that would be a “no” as folks with Ebola continue to move around the continent…
Socialist Policies On Parade
Venezuela has a nice news story about folks in the country with THE most oil resources on the planet cutting down their forests to cook dinner. Welcome to the world of the future (and the past) where women walk miles every day with bundles of sticks so they can cook dinner and men chop down the forests and kill anything that moves for food.
Venezuela’s trees suffer as firewood replaces scarce cooking gas
Posted on August 29, 2019 by EnviroLink Editor
MARACAY, Venezuela (Reuters) – Endy Perez for years started her day by turning on the stove of her small house in the Venezuelan city of Maracay. These days, her breakfast routine begins with a search for firewood in a national park just behind her home.
Chronic shortages of natural gas in the country with the world’s largest oil reserves now mean that cooking fuel is increasingly coming from trees.
“I have no other option, I have two children … I have to cook,” said Perez, 39, a homemaker, standing next to an improvised wood stove on her porch at the edge of the 108,000 hectare (267,000 acre) Henri Pittier National Park.
The growing use of firewood has triggered alarm among activists who say discussions of environmental problems are often eclipsed by diatribes about runaway inflation, economic collapse and a protracted political stalemate.
Fires and home construction in the last 40 years have deforested about 10% of Henri Pittier Park, said Enrique Garcia, director of the ecological group Let’s Plant.
In addition, he said, the collection of firewood in urban areas can cause respiratory problems from smoke, rising temperatures in cities and increased risk of landslides in poor communities where houses are often built on unsteady terrain.
Wood stoves are now a common sight across Venezuela because of the shortage of gas. Tanks used to store and transport propane are in disrepair for lack of maintenance.
In some cases, people burn trash next to a tree to dry it out so the tree can be cut down and used for cooking fuel. Authorities are broadly ignoring legislation that prohibits cutting down trees without permits.
Welcome to the Socialist Future! Gather your “Sustainable Wood” for cooking before it gets shipped to Britain to make electricity… Forests? Who needs forests… /sarc;
Best practice those Rocket Stove Skills now…
In Argentina, they are “rescheduling” their debt. In other words, “Sure I said I’d pay you today, how about next year instead?”. Argentina is having a bit of an oscillator between Socialist and “right wing” governments. In some ways this can be worse as there is a constant whipsaw between directions. Even after pitching out the Socialists, the “Right Wing” have to deal with the debt problems and doing so can cause all sorts of follow-on problems, that then become justification for a return to Socialism that then… What is needed is an attention span of at least a decade as debt has decadal periods. Good luck with that in an era when folks don’t remember yesterday’s news…
Argentina: markets still worried after demand for rescheduling
8/29/2019, 4:26:04 PM
Buenos Aires (AFP)
The markets reacted negatively Thursday in Argentina after the government requested a rescheduling of its debt to the International Monetary Fund (IMF).
Investors, who had already shown signs of nervousness in recent days faced with the specter of default, seemed worried despite the announcement of the Minister of Finance, Hernan Lacunza.
At the close of the Buenos Aires Stock Exchange, the Merval index lost 5.79% to 23,984.23 points. Argentine shares listed in New York and Europe also fell, according to several specialized sites.
The intervention of the Central Bank, which injected $ 200 million into the foreign exchange market, however, helped to contain the decline in the Argentine currency.
After depreciating 3.5% at the opening, the peso finally moderated its losses. At the end of the session, it lost 0.61%, trading at 60.54 pesos to the dollar.
In the morning, center-right president Mauricio Macri called for calm among investors. “It is up to us to contribute to peace without causing fear or confusion,” the Head of State urged.
On Wednesday, Argentina asked the IMF to reschedule its $ 57 billion debt contracted in exchange for fiscal austerity. According to the agreement signed in 2018, the first repayments must occur in 2021.
What is in our future in the USA as both the Republicans and the Democrats are unwilling to shrink government or government spending. Adding debt at the rate of a $Trillion / year is not “sustainable” (nor is burning our forests…)
But those who gain from putting the people of the world in chains of debt love to profit by it…
FUD Watch
“Fear, Uncertainty & Doubt”.
Great Barrier Reef: In the news (again and again and again…) we have an Authority claiming the Reef is DOOMED! due to climate change, yet again. The present claim includes saying the World Heritage Site folks may derank it (clearly the UN is on board with the FUD Factor Game).
Greenland: Another talking head saying it is Melting!!!, but an added claim that Camp Century is going to melt out any day now releasing tons of poo and maybe even radioactive crap. Um, not really. It’s still well under ice.
Brazil Burning: Record ever? Um, no. More or less the usual. You don’t think this might all just be a Hit Piece on a conservative (called “Right Wing”) candidate who is a Brazilian Nationalist, do you? (Is there any way it could NOT be? – “Conservative Man Bad”…)
Then we had The Greta arrive in NYC to not much fanfare. Seems our media isn’t interested in anything but “Orange Man Bad”. Sorry Greta. Besides, you’re white. Wrong race for a poster child story here… Nice Antifa shirt though:
Greta & parents in Antifa Shirts
Hong Kong
Mainland continues to boil the frog. Hong Kongers continue to swim around in the streets. Slowly it warms…
Brexit
Boris and Her Majesty have prorogued parliament. Given all the traditional and already scheduled days not in session, this adds all of about 3 or 4 days of “out of session”. Of course, the Remainers are filing court actions and conducting Street Theatre claiming this is the End Of Democracy! A COUP! And more. Not like ignoring the vote of the people for 3 years is a stick in the eye of “Democracy”…
IF Boris has Her Majesty on side, I think he’s in the stronger seat.
Hopefully the EU Masters Of The Universe are busy crapping their pants about now…
Speaking Of The Democrat Primary Race
They continue their Central Authority manipulation of who gets a voice, sidelining Tulsi Gabbard from the debates.
By Monica Showalter.
You Go Dems! Keep on building trust with the American People by publicly manipulating systems, shooting down the candidates that have the most appeal to We The People, and promoting your internally selected Best Suck Up Loyalists. After all, you are the George Soros Bought And Paid For Shills… (by your actions it looks that way to me).
I like Tulsi OK, but I am glad she won’t be running against Trump because she might have a better chance. Even though she is the least evil of the bunch, she still has some pretty far left planks. I’m hoping for Biden or even Fauxahontus.
Well, yeah, for Trump’s Sake I’d like any of their Near-Socialist Losers…
But with Tulsi being the best, and the history of Clinton, The Fix, and The Loss; I’m just really surprised they want to block their strongest chance. Oh Well. Party Agenda over People’s Choice and another loss, I think…
It would be fun if Tulsi bought some commercials DURING the debate to give her debate answers ;-) Talk about stick in the craw ;-) “And now, Tulsi Gabbard giving her positions, paid for by Tulsi Gabbard as the DNC rigged the debate to keep her out”…
Hey E.M.! “We’ve regularly scrubbed programs right when they were ready to take that step. From the aerospike engine to over-sizing the shuttle so it needed SRBs to get off the ground. Not sure why…”
Been a while, but my memory of the Shuttle size increase is this. NASA wanted a smaller shuttle, but the US Air Force said, “No. We want a shuttle that can get a single payload with a mass of foo and dimensions of foo cubed into orbit all in one chunk.” NASA said, “Why do you need that large?” Air Force said, “We want what we want.” and that was the end of the smaller shuttle.
Re “Even after pitching out the Socialists, the “Right Wing” have to deal with the debt problems” It’s best to get the facts the right way around EM
Back in late 2014 when Macri was elected Argentina’s foreign debt was about $5 billion – mostly to China. Under Macri it’s ballooned to well over $60 billion US. And anyone with US dollars is hoarding them for the default.
And Macri is toast.. Burnt to a cinder…With yet another leftish government inheriting the Macri mess.
Yes it’s whipsaw. But the current right wing Macri government is the mob who got Argentina in this mess.
Do I think that the incoming leftish mob can resurrect things ? very doubtful.
@Bill in Oz:
It is also best to not accuse me of stupidity.
The Left sets up massive entitlement programs that can not be stopped with a new election.
The Right, to get into power, must grease some palms with government gifts.
The end result is that the big bill tends to roll in just after the conservative got elected and his choices are to not pay it, and crash the economy, or pay it and have lethal tax rates, or accept the debt while putting in place policies to make the economy work well and reduce future debt several years out. That is WHY it ends up as an oscillator.
Same thing happening now under Trump.
Same thing happened under Reagan.
And others.
The incoming conservative needs about a decade for a gradual ramp down of social program spending, entitlements, and pay-to-play. They will NEVER get it.
They must stimulate the economy to avoid a black recession (thanks to economy killing policies of their predecessor), get some goodies for their funders (or they can kiss off support), and then start making the changes that can fix things about a decade later.
Usually, they get 4 to 8 years. Things are getting a little better, but folks want more free stuff now, so vote back in the Tax And Spender who proceeds to undo what the conservative did, spend anything he gained, then put in place more irreversible entitlements to pay off his supporters. Usually on a “future payment” basis. These kick in after passing and setting up operations, just about the last year or so of his term. Landing the debt on the next guy.
You are making the same mistake as everyone else. NOT looking at lag time between creation of programs and bill coming due.
Do I have sympathy for the guy facing this mess? Not much. That’s the job.
I can criticize them for things like going ahead and buying military goods or improving security forces; but what is the alternative? “Granny off the cliff” ads for ending the latest socialist giveaway program? NOT being able to field a defense force?
The simple fact is that Progressives/Socialists put in place generational spending programs they can not fund, and huge progressive tax rates (ignoring the Laffer Curve) that shrink both economies and tax revenues. This starts to bind, folks pitch them out and put a conservative in, just in time for the debt to roll in, and he gets 4 to 8 years to do a ten year fix AND must bridge the problems with debt for a couple of years or destroy the country.
Rinse and repeat.
Shorter form:
I, as an elected leader, can commit a country to a $Trillion spending plan in, say, my 2nd year, with the bill due in the 5th, all in one signature.
A new leader, arriving in the year the bill comes in, can NOT reverse that. Not even in 2 years. The Conservative is trying to set up conditions for organic growth of the economy that may start raising revenues by a few $Hundred Billion in 4 or more years. EVENTUALLY, that can fix things.
But NOT if every 4th or 8th year someone comes in who reverses them AND adds another $Trillion program with the bill arriving in 3 more years…
How much could we save by cutting the bloated military/national security complex? We could do without a lot of FBI,CIA,NSA buttsniffers, and our “volunteer” military looks more and more like a Dem jobs program every day. Those people ain’t gonna fight. They’re too busy scheming to collect disability payments for paper cuts…..Go back to a draft with real protections for those who fight – how much could be saved?
Put some of those funds in a wall + more border security, then get this government to start paying its bills.
One small spending request? We need large supplies of soap on hand for the next government employee who even mentions weakening encryption. They get to gargle with soap – or leave.
Re: Trump and Hurricane – The dems have already figured out the angle. Trump did not cancel plans when a TS sideswiped PR, but did when a Cat4 threatened lily white Florida. So he is a racist. Forget one was a mere TS, and the other a major hurricane – they have the same name. Forget that there are more brown people in Fla than in PR. You are not supposed to know the facts (Truth over facts).
The draft would not work today, the technology in use is out of the mental reach of many of the folks coming out of school today. The idiots who can’t figure out how many genders there are have absolutely no hope of operating a front line main battle tank, or an antiballistic missile system of today let alone maintain 5th generation fighters. It would take their entire 2 year draft enlistment just to get them smart enough not to blow themselves up. Not to mention train pilots or submariners etc.
The draft is only useful for filling cannon fodder roles or quick-start roles where they take college grads and give them LT bars and a few weeks of training on how to march through a jungle – we are well past that now, and the dumbing down of America is making it a fatal flaw in our defense systems. It now takes 6+ years to train the high tech soldiers.
E.M., I said it’s best to get the facts right re Argentina. That does not mean you are stupid. It means you do not know the facts. I know Argentina; I lived there a while. And history is my strong suit:
1 : There were Leftist governments in office from around 2000 till late 2014. Those leftist governments were in power because of the stuff ups that the previous right wing mobs did. That led to the Argentine default in late 2001. That led to mass unemployment, bank savings being seized by government and resulting political ‘instability’. Millions on the streets.
2: The leftist governments in the period 2001 -2014 mostly rebuilt the economy with a controlled & managed exchange rate. And because it was in default with the IMF, World Bank, etc it could not make foreign currency loans..The rebuild took a long time.. And yes the government did institute special programs for the poor. Over time it also became corrupt.
3: Since late 2014, Macri’s government has pursued a policy of opening up the economy completely to foreign loans, investment and a free exchange rate. And now Argentina is again on the verge of sovereign risk default.
Macri’s government needed to open things up economically and financially. But it has had an ideologically driven bull-at-a-gate approach. Opening up a closed national economy needs a pragmatic long term approach. (Think China from 1979 -2015.)
One more reason for a draft to not work very well:
By the way I strongly support the idea of a draft in the context of giving everyone some skin in the game but it would have to be completely re-engineered for today’s military. Perhaps a military auxiliary (think farm club) that gives some rudimentary training in military subjects and does some physical conditioning while engaging in less rigorous beneficial service to the country. If you get good enough marks you can use it as a stepping stone into the military on completion of the draft; if not you can do things like trash pickup, brush clearing (fire risk management on public lands etc.)
The money might be better spent in funding phys ed classes in junior high school though.
Take a memo, stay away from the drunken bar scene in major cities. This happened right here in Denver a few days ago.
@EM – re “Real Soon Now we’ll have our own way to get to space.”
Is anyone running a book on how long before the first commercial spacecraft is knocked out of the sky by having a bit of space junk whack through it at a few ‘000 mph? :-|
Looking at that photo of the Thunbergs makes me wonder what would sales be of an
Anti-Greta T-shirt?
Personlly I think she needs the one I saw a couple of days ago:
Do Not Disturb
Already Disturbed
Need another reason to sell out, pack it up, and leave California?
Rent control! Coming soon to a landlord near you.
Note: Can read free, but must turn off ad blocker
@Graeme No.3 – I could go for one of those t-shirts myself ;o)
Re, Greta: I’m not so much anti-Greta as I am angry about the cynical use of her by her parents and the GEBs funding her circus act. There have been arguments as to whether or not she was maneuvered into her current role or if she was offered the role and, just as cynically as everyone else, jumped at the chance. Regardless, she’s just a ‘dumb kid’ like I was at her age that doesn’t know how little she knows.
It just annoys the snot out of me to have her or anyone yammering on about “The Science! The Science!” when they demonstrably have no clue as to the current state of our understanding of the drivers of climate that cause large scale global changes. (Hint: We don’t know much yet because little money is being spent to find out. It’s mostly going to ‘CO2 bad’ and very little to ‘What’s going on?’)
Oh, and I’m still waiting for a list of any regions where the Koppen classification has changed within the past century. I’m not aware of any at the moment, though there could be one or two that have changed.
Who woulda thunk T-storms in the Midwest saved Florida yesterday? We shall see as there is still more to go.
A public service announcment, before I cut the yard. ;-)
1. Don’t drink and drive!…8315947008
2. Don’t mess with the Rhino!…6695819264
Awe man, the link appears to have broken on my phone. Try cut and paste….
[The link is missing some elided bits and will never work. Any link with lots of … in it will break. E.M.Smith]
Interesting gun legislation for Florida residents evacuating for Hurricane Dorian.
@Bill in Oz:
You might start your fact quest by noting that the 2001 to 2014 debt reduction was largely accomplished by defaulting on $100 BILLION of debt…
It is very easy to get out of debt by blowing it off.
Odd, 2nd attempt.
Latest Pointman
EM Yes you’re right. Argentina’s default back in 2001 was huge…And probably more than $100 billion…
The IMF & World Bank this time have loaned $57 billion but if we add in all the other loans made in the past 5 years…I suspect that it will be much more than that..
Now that makes me wonder about why international financial institutions such as the IMF, etc do not learn from past stuff ups.. And exercise caution when a new national government suddenly changes major economic policies. The emphasis being on “Suddenly”..
If you owe a banker $10,000 he owns you. If you owe him $10,000,000 you own him, just add a few “zeros” for a government/bank relationship. Conservatives fix the fiscal problems, repair the lines of credit. The people get tired of the Conservatives saying NO and elect Liberals that promise Yes, and we will be responsible this time.
Just like Obamacare. After it is passed with NO Republican input and proves a wreck, the democrats insist it is the Republicans’ responsibility to fix it. It is all the Republicans’ fault that it doesn’t work as advertised. Then the Democrats point out that it must be greatly expanded to make it work out! Even more astonishing, there are Republicans that agree! We must fix it!!
We don’t need them…pg
On Greta… several of the posted pictures and quotes from her showed, in my interpretation, a teenager having the adventure of a lifetime. Even some spontaneity and smiles. Now, with parents once more… not so much.
An illustration of precisely how poisonous things are getting over Brexit, which turned up today on Breitbart as just the sort of thing you want to be reading on a Sunday morning:
“Mainstream media talking head Terry Christian has suggested Brexit supporters should be deprived of food and medicine in the event of a No Deal Brexit, and said he is hoping for a “good virulent strain” of flu to strike down pensioners who voted Leave.” There is more poisonous bile to follow.
Terry Christian? Can there ever have been such an outstandingly inappropriate name?
@Steven Fraser re Greta: Interesting observation. I’ve seen her in the two different modes but never checked who else was around at the time.
Your observation would argue against those who say she is cynical in her crusade for the ‘Science’, in it for the money and attention. I’ll have to keep an eye out for that in further photos and videos.
Meanwhile, all I’m willing to say is she doesn’t know squat about climate or science and is a poor source on which to base policy decisions.
This does not bode well for the N Bahamas. The pros in the background are calling for this to peak later today at 185 mph.
@Steve C:
It is amazing to me just how much the Progressives and Globalists are prone to hate and violent speech. While those on the conservative side tend to just do quiet observation…
@Ossqss:
I’m having the spouse watch this one closely and reminding here that this is the normal fall in Florida. Watching and waiting…
The storm has slowed and the eye has filled with low clouds as expected over the Bahama Island. Now it hopefully weakens via upwelling and we soon see the Western flank start flattening out on Sat imagery. That would indicate the beginning of the turn sequence. So far, the flow over Florida has not changed yet, but should this afternoon, hopefully!
This loop may take a bit of time to load as the traffic is quite heavy.
I am sure by now most of you have heard about the shooting in Odessa/Midland Texas.
They have finally identified the shooter but still little info about what went down other than a shooting spree triggered by a benign traffic stop for failure to signal for a left turn.
8 dead including the shooter and 22 others injured.
The Rachel MADdow virus ….
h/t to Lubos …
A blast from the past (and I’m not out of Scotch ;-)
Hurricane on Windy is showing 24-25 foot waves, with 9 ft swells near the eye; next high tide in Palm Beach, Fl. will be 11:38 tomorrow morning. Storm is currently about 130 miles off the Florida coast just south east of Grand Bahama.
H.R.: “she’s just a ‘dumb kid’ like I was at her age that doesn’t know how little she knows.”
Greta doesn’t know the difference between ‘know’ and ‘believe’, and it is not just Greta.
This Dilbert strip “No One Is Taking Advice” is great (age versus experience)
One person says “dude wall” and poof — history disappears. Some want to erase history, and I’m sure some would like to rewrite history, to bring it up to modern standards. Call it “living history”, flexible, adaptable, always PC.
I would think some would rather keep the dude walls up and the keep the history visible in order to keep racism and sexism on the front burner. The movies do this with racism in the South to make sure that we don’t forget how bad it was.
“The past is a foreign country; they do things differently there.”
(L.P. Hartley, “The Go-Between”)
For example, take this totally awesome, incredible documentary movie “Apollo 11 (2019)”
If you look carefully you will see a few non-male non-white participants (which must prove something, in those days before diversity quotas) but mostly there is almost zero diversity even among the white males. Same white shirt, same tie. Nobody looks like a geek, or even has the mad-scientist look. It was a different time. For everybody. I’m not going to judge the past, and certainly not by modern standards.
Except to say, that was amazing, incredible, what they did. Who were all those thousands of engineers, scientists, technicians? Where did they come from? Whatever happened to them?
Can you imagine a project so big, with a goal so far beyond what had already been done, ever being accomplished today? It couldn’t even get out of congress, much less off the ground.
Were those “guys” (all untold thousands of them) amazing, or what?
Kennedy wanted to achieve a “first” but he did say he wanted it “done right”,
Trying is easy, achieving is hard, but “done right” is a whole ‘nother level.
NASA isn’t what it used to be. There are countries where the current people live among the relics of their ancestors. Pyramids, temples, aqueducts, all sorts of things that are beyond what they could do now, not even the experts. The expertise was lost. Progressives imagine the future will be better; it doesn’t always work out that way.
Bahamas got hammered
@YMMV re the Dilbert cartoon: Yup. That describes most teenagers to a tee. I certainly fit the description when I was 15 – 16 -17.
Odd, but I wised up early. It might be because I learned a lot of DIY skills from my mom and dad; gardening, carpentry, mechanics and other practical stuff. They knew what they were doing and in helping or taking direction, I realized I had no clue and had better at least start off doing it their way.
“Respect your elders” used to be a thing, along with “Wisdom comes with age.” Sure, like all kids I only half-believed that, but at least those were common cultural teachings not all that many years ago.
Now it’s “Sue your own parents and teachers if you don’t get your way.” That’s a sure way to reach the bottom of the emotional and intellectual pits in a hurry.
A stationary hurricane should burn itself out at some point. It is heat pump, pumping energy into space and thus cooling the ocean around it. It is also a shield, keeping the Sun from re-heating the water.
If it dies down to a 3 or lower, it could give people a false sense of security. For when it moves, it can move over warming waters and rebuild strength.
Unfortunately it is sitting right over the gulf stream which will keep bringing warm water in to feed it.
Now sitting centered 100 miles east of Palm Beach about over Freetown and Freeport Grand Bahama.
Outer bands are starting to sweep down the coast near Ft. Lauderdale and Miami which both have coastal flooding advisories out.
Looks like it is just going to sit there and grind on Grand Bahama for a while, it is sitting over a huge pool of 86 deg F water with a steady flow of new warm water between it and the Florida coast.
Looks like a large high pressure is hanging out over the east central states (ie Tennessee and north) which may be blocking its move and turn to the north.
I really enjoyed our cruise with a stop in the Bahamas… I hope the place doesn’t take a reset. A lot of the old quarter stuff was very nice…
The ‘cane went through an ERC (eyewall replacement cycle) earlier and has been upwelling the water for a loooong time from so little movement. Subsequent weakening was inevitable and it is now 941 mb on its way higher. That is a good thing, not so much for the islands. It appears the modeling is taking it out further to sea also, once it starts moving later tonight or early morning.
Here is live recon if you are interested.
Model tools also.
An outstanding example of pure ignorance and fundamental stupidity. And she is in a leadership position. Oh the pain!
Well it appears the post has been disappeared to protect AOC’s ignorance. She was boasting on electric cars and commenting on how people would not be able to get gas if the power was out. Not realizing the electric car was more dependent on electricity than the IC cars. True ignorance of reality.
Yes according to Ft. Lauderdale radar (West Palm Beach radar appears to be down) the storm is a lot less well organized than it was a while ago. In fact it looks like it may have backed up to the east slightly, so hopefully it will turn and finally drift off to the north east.
It is shielded from direct input of warm water from the gulf stream now by the Grand Bahama Island so that might help it back out of the coast area and head out to sea.
Looking at the North American Model for the next few hours it is showing (at 700 mb level) that exact movement
In about 24 hours it will start moving to the north north west, almost parallel to the Florida coast. Around 40 – 48 hours from now it will accelerate and move to the north, then around 60 – 72 hours it will begin approaching the Carolinas coast, come ashore somewhere near Kitty Hawk or the southern tip of the Cape Charles area, and then run across south east Maryland. Hopefully by then it will just be a tropical storm.
This morning on Grand Bahama
@YMMV – Scott Adams has been poking fun at the left a lot lately. But the left is not bright enough to understand what he is saying. He has gotten very political lately, but only those who understand what is going on get it.
@EM – I loved those roasts! Best comedy hours ever
Additional weakening is likely due to the more unfavorable oceanic conditions now being created by Dorian’s stall. The hurricane’s winds are mixing to the surface cooler waters from below that are limiting the hurricane’s heat and moisture supply. Dorian’s lack of motion means that it cannot move itself over to a new area of warm water to feed off of. The combined effect of the ERC and the reduced heat energy being supplied to Dorian could reduce the hurricane to a Category 4 storm with 130 – 140 mph winds by Tuesday afternoon.
In Spanish (yes, I know I’m a sick puppy… music videos in Russian one minute, safety advisories in Spanish the next, native speaker of English but best 2nd language is French , and others…)
But it is rare I learn something in another language….This guy taught me that in lightning country, it matters where you put your feet… obvious in retrospect, but…
So even if you don’t speak Spanish, the illustrations carry the meaning. It is all V=IR at base, but applied…
Basically, don’t put your feet apart on the ground in the current path. If you must run away, do it in leaps and bounds.
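To put rough numbers on that advice, here is a small back-of-envelope sketch (my own illustration, not from the video) that assumes the strike current spreads hemispherically through soil of uniform resistivity. All the parameter values are made up for illustration only.

```python
import math

def step_voltage(rho, current, r, stride):
    """Approximate step voltage (V) between two feet on the ground.

    Assumes lightning current spreads hemispherically from the strike
    point through soil of uniform resistivity rho (ohm-m), so the
    surface potential at distance d is V(d) = rho * I / (2 * pi * d).
    The step voltage is the potential difference between the two feet.
    """
    potential = lambda d: rho * current / (2 * math.pi * d)
    return potential(r) - potential(r + stride)

# Illustrative (made-up) numbers: 30 kA strike, 100 ohm-m soil, 10 m away.
wide = step_voltage(100, 30e3, 10, 0.6)   # feet 0.6 m apart (a normal stride)
close = step_voltage(100, 30e3, 10, 0.1)  # feet nearly together
```

With those numbers the wide stance sees several times the voltage of the feet-together stance, which is the whole point of shuffling or hopping instead of striding away.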
Hmmmm this is curious – funny it takes a hurricane to bring this bit of news to the top of the pile.
I can think of several reasons they might create a port on an island with nothing to use the goods from the port, but none of them are good.
I don’t know if the spanish guy said, but a metal pole like that will protect an area described by a 45 degree cone, apex at the top of the pole. Of course there will be ground currents, but if you crouch at the edge of the cone, but not too close to the edge, with your feet close together, you may be OK depending on how high the pole.
So now the real story is starting to come out about the Odessa Tx shooting.
It was likely a work place violence based rampage interrupted by a traffic stop.
A deeply troubled man in a death spiral and finally snapped.
On lightning. Don’t be there, and if you are, reduce your potential. I really couldn’t make it through the whole video but I’m sure that’s what he was talking about :-)
I would say, start here.
I think we have a tactic for combating Antifa
drone – meet UV iridescent chalk powder.
El Foro de Puerto Rico
@elforopr
Follow @elforopr
Así quedó la isla Abaco en las Bahamas, luego del paso del huracán Dorian.
[ This is how Abaco Island was in the Bahamas, after the passage of Hurricane Dorian.]
[REPLY: NOTE THIS IS NSFW! The title has an F-bomb in it. -E.M.Smith]
For comment
“The Quickening”
Predictive ability history?
Re lightning, every time I look out of the window here I see evidence that our intelligence is de-evolving. About eighteen months ago, the landlord of the adjoining property had a couple of lightning conductors installed on the chimneys front and back. (Don’t know why, AFAIK it hasn’t become a legal requirement or anything.)
The one outside my window is about two feet away from, and a foot or so lower than, the television aerial for the same property, mounted on the same chimney. Ho hum.
^^ Hmm. My own intelligence too, it seems.
Feet …
A look at browsers and other things
“A Quick Privacy Quest Update”
Hurricane update
Lightning. If the lightning rods are well grounded.
wow, me and wordpress really screwed that up! sorry
Lightning.
Anyone else notice the sudden and profound absence of “Russia, RUSSIA, RUSSIA!!!!” In the news after the NYT decided to swap to “racist, Racist, RACIST!!!”?
It’s enough to make a fella think it’s all made up crap…
@EM, watch for climate change making storms stall to backfill. Coming soon to a theater near you.
What I noticed was London mayor Kahn and Serioso both pop up with British screeds that savage Trump about his control over Boris and the weak minded British electorate that is pushing this very bad Brexit Idea. Could it be that the GEBs are sending out their troops under orders.
Just a tech FYI:
I left the XU4 running overnight so that it could properly do the log file rotation and as a test of what leaving it up all the time would do. Unlike prior times when I’ve “let it run” to do something like download a file scrape or do a database upload; I left the browser open and with a bunch of pages open.
Infowars.com is a very “busy” or high “page weight” page. Lots of images. Dynamic things that keep changing the image. I usually do not leave that kind of page open (and in fact rarely spend time on that kind of page at all, not liking “dancing Java craplettes”… ) But this time “just to see” I left it open, though not the top page.
Well, when I came back to the machine (somewhere around 30-ish hours later, maybe more), not only was the ethernet light blinking a lot (where the cable plugs into the board) indicating it was still moving bits, but “swap space” had almost 2 GB in it.
Realize this board has 2 GB of memory and usually runs about 1/2 GB used. Maybe up to 1 GB used in a heavy session. It rarely does much of anything with swap. I tend to configure every system with at least 2 x memory as swap space (on real USB disk to limit u_SD card wear), and this on has 4 GB of swap on one disk (and another 8 available but usually not active on another disk).
So here I was looking at about 1/2 of swap full, all of memory full, and ethernet activity. All from a system that was nominally “idle” for a day, but with the browser open.
I closed the tab with InfoWars open in it. The ethernet light blinked one or two times more and then stopped blinking. InfoWars was the “chatty Cathy”.
I clicked the “close box” on the browser…. and waited… and waited…
After a while the ethernet stopped blinking. A while later there was a notice that a web page was slowing down the browser… (this is FireFox). Eventually the FireFox window closed, and then in HTOP I got to watch as oh so slowly it emptied a couple of GB out of swap…
Moral of Story:
Don’t leave your browser open overnight. While I normally don’t do that just for security and “sanitation” reasons (i.e. it’s good practice to exit any program you are not using); it looks like browsers can churn a whole lot of memory and network bandwidth even when “inactive”…
On u_SD card based systems, that use of real disk swap space can cut out a LOT of wear on the card from high page weight chatty Cathy web sites.
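As an aside, a quick way to watch that kind of swap growth without leaving htop open is to read /proc/meminfo. This is just a hypothetical helper sketch (Linux-specific, the function names are my own invention), not anything built into the system above.

```python
import pathlib

def parse_meminfo(text):
    """Parse /proc/meminfo-style text into {field: value_in_kB}.

    Lines look like 'SwapTotal:       4194300 kB'; we keep only
    fields whose first value token is a plain integer.
    """
    out = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts and parts[0].isdigit():
            out[key.strip()] = int(parts[0])
    return out

def swap_used_kb(info):
    """Swap currently in use, in kB, from a parsed meminfo dict."""
    return info.get("SwapTotal", 0) - info.get("SwapFree", 0)

# Hypothetical usage on a Linux box (guarded so it is a no-op elsewhere):
meminfo = pathlib.Path("/proc/meminfo")
if meminfo.exists():
    live = parse_meminfo(meminfo.read_text())
    print("swap in use, MB:", swap_used_kb(live) // 1024)
```

Run from cron or a loop, that one print line is enough to catch a browser quietly pushing gigabytes into swap overnight.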
Never leave an InfoWars tab open. Close it when done looking at it. Otherwise it is chewing up network and machine resources for nothing. I have a habit of leaving a dozen or two tabs in my browser so I can just click a tab to check a site. ( I know, it is what bookmarks are for…) Some sites are not suited to this use, as they are heavy pages and active. This is one of them.
Every so often, shutdown and restart your browser….
FWIW, with Brave on the Tablet: I can have all the tabs open I want and nothing slows down or fills up. It looks like Brave is very tidy about how it handles many tabs. Some other browsers (cough, FFox) are prone to ever more bogging down behaviours as open tabs become very large in number. It might be worth repeating this experiment on other browsers and boards.
It appears that Climate Change/Disruption/Emergency has a long history.
john cooknell
September 2, 2019 8:08 pm
Paul,
Parliament did declare a climate emergency during the reign of Charles 2nd just before the hot summers you describe.
The first time the UK Parliament declared such a thing was in 1661..
1. Samuel Pepys 21st.
I won’t stress the parallels between fast & vegan, or listening to a sermon nor the fears of plague, death etc.
@James Glendinning:
Very nice Ted Talk. Does an effective job of explaining “why the gun”.
My only “complaint” about it (and it is a minor one) is that it glosses over the issue of abuse of the “Government Monopoly On Violence”. Ignoring that far more people have been killed by Government than ever were killed by individuals. Ignoring that “the legitimate use of violence” also rests with We The People. Essentially ignoring the real reason for the 2nd amendment: To assure our government respects the rest of our rights. It isn’t about murdering Bambi, it is about the availability of a “countervailing force”, the domestic Militia, that both deters foreign aggressors (even aggressor nations), private gangs and “armies”, and yes, even our own political class, from abusing their access to violence. Even the “State monopoly on violence”.
Here in the USA the State does NOT have such a monopoly. It tries. The Left (who want nothing more than an unarmed populace to yoke under the communist doctrinal plough) constantly push for that power, that monopoly over the legitimate use of force. Slowly they have advanced over that path (where once you could, in fact, own your own cannon, we now have “destructive device” laws, and where once you were expected to have a “military style” gun, now The Left vilifies it – unless, of course, under their control…). Here we still have the final resting place of the “legitimate use of force” in the hands of We The People. For self defense. For restraint on our government (i.e. politicians wishing to usurp the State).
Ever wonder why the USA never had a coup?
Ever wonder why the USA has not had a war on our soil in over 100 years?
Ever wonder why the USA is not invaded?
Ever wonder why the USA has not descended into tyranny?
I would assert that a big part of “why” is that the potential tyrant, invader, coup plotter, and war monger all realize there is not a “monopoly on the legitimate use of violence”. So while we DO have a strong and vibrant Federal Military, it is backed up by 50 State Guards, who are also in turn supported by We The People in the peoples Militia (defined as all males over the age of 18, btw) with our own means of violence.
And it works. Those States and Cities which erode this right of We The People the most, have the most rise of gangs, the most corrupt governments, the highest murder rates and chaos.
Eventually they will succeed at putting The Gun beyond the reach of We The People. Succeed at making it a State “Monopoly on the legitimate use of violence”. Shortly thereafter we will either turn into a tyranny, or be invaded (or both, who chooses to die to defend the Tyrant?)
But I understand that the presenter is European, so has the European perspective. He makes a very good case other than that one flaw. Unfortunately, while the period since W.W.II has been relatively peaceful in Europe, the European history is not one to lend comfort that the “Monopoly on the use of violence” has a good outcome. Many many times, the State there has used that monopoly to impress the population into uniform and apply them to avarice towards their neighbor. It seems a particularly European thing to do… Kings, Emperors, Kaisers & More revel in their “Monopoly on the use of violence”… Which is part of why our Constitution prohibits them.
So as long as Europe has an effective control of their Tyrant Wannabes by way of democratic processes controlled by The People, they will do well. But, at any time, should that control falter, and a Tyrant rise: Just who will remove them? We in the USA have done that job twice now. If we decide it’s not our problem? Or worse, if we are also talked into the notion of a State Monopoly on the gun and grow our own Tyrant… who just doesn’t care about Europe?
So I would urge a little skepticism about that one (very European) idea of a Monopoly Of The State as a “good thing”. Central Authority is ALWAYS a risk to freedom, peace and prosperity. It must always fear “We The People” and be kept as small as possible. If not, it will raise a Tyrant and descend into oppression and empire building.
@P.G.:
Yeah, there’s definitely a Central Authority that pushes the Talking Points Du Jour, which then diffuse through the influence tree.
Can’t really say which is an “Assigned Agent” and which is a just a “useful idiot” recruit “just following orders”, but the pattern is what it is. Central Authority -> Agents of Influence -> Useful Idiots. Classical Marxists structure at work.
In comparison, the Right Side has a whole lot of richness in ideas and beliefs. No one Central Authority choosing the one to push today (anyone remember when Black lives Matter was the one thing of the day?) and it suddenly shows up in the same talking head tag lines on all the controlled media… and then parroted by the U.I. clan. Instead it is a lot of different ideas offered and folks get to sort and chose them…
It is an incredibly easy thing to spot, once you are sensitized to look for it. Like the instant evaporation of the Russia, RUSSIA, RUSSIA!!!! crap once the switch was thrown to the next lie. Now, as soon as I hear the same line (especially on the same day) from 2 or 3 “different” sources, the “suspect” flag goes up. “Who benefits” gets asked.
@Graeme No.3:
Now that’s interesting…. Mid 1600s. 400 years ago, or two of the 208 year solar cycles. Just before the Maunder Minimum. I wonder if hot spikes are normal / expected just before Grand Solar Minima? Would explain a lot.
Speaking of talking points, notice how the outrage over the Odessa / Midland Texas shooter has suddenly gone away? Well it turns out the shooter has a long history of mental problems and failed a background check a few years ago due to mental issues on his record.
He has been making incoherent phone calls to both the local police and FBI for years, including on the day of the shooting.
The media is now mentioning that he “may have” purchased the gun he used private party (and exploited a loop hole in the background check system). The interesting point here is by now they should know exactly who he bought the gun from and when but since they leave that info out I suspect there is a clinker in that coal bin too – like he bought it a long time ago or he bought it from someone who is identifiable as being a Democrat or some such.
Texas Department of Public Safety said the shooter had a rifle that may have been a .223 AR-15 style weapon.
“may have been??” give me a break they know exactly what caliber type and model he used – why the obfuscation?
Maybe the gun is traceable to the Fast and Furious gun sales or maybe to MS13 gangs since he may have purchased it black market?
Still mighty odd what info is being very slow to come out, info that in other shootings was available within 24 hours (like name of gun shop and owner of shop where the purchase was made, exact make and model / caliber etc.)
Larry re Odessa shooter: You are hitting on the key point; it’s not what is said, but what is not said.
Larry wrote: “The interesting point here is by now they should know exactly who he bought the gun from […]”
Yup, serial number and all that. I think who sold him the gun and where that gun has been is probably the best guess as to why the silence.
Also, if the serial number had been filed off, just that fact alone means that no law anyone could pass would stop the shooter from illegally getting an illegal gun. The YSM does not report anything that doesn’t advance the Regressive narrative.
I can see that debate now.
“What went wrong so we can prevent something like this from ever happening again?”
“Well, it seems that no gun store would ever sell to this guy because it would be illegal to do so. So he found some guy that knew some guy that would sell him a gun, cash, no questions asked. It turns out that the gun was reported stolen 6 months ago, so he was illegally buying an illegal gun from a fence.
Therefore I propose we ban all firearms and confiscate those already in the hands of citizens”
Let the politicians and YSM try selling that to the public as the reason for banning guns. Yeah… sure… uh-huh… lead balloon.
Every police car, marked and unmarked, has guns in it. Wonder how many of these are lost and stolen every year?
Buying and selling guns is a known sideline of officers because they can easily buy at police discount and then sell as private citizens.
@P.G.:
Typically the car gun (and often the personal weapon) is department issue not officer’s personal property. HOWEVER, most officers I’ve known well had a “throw down piece” taken off some street scum at some point. Don’t know if modern forensics makes that unworkable now, though.
40 years back, if a cop shot a perp then found there was no gun he could just put the throwdown piece in hand… and it would trace back to bad guys if anyone.
Think many folks pushing street drugs would register a complaint if a cop pats them down and just says “I’ll just keep these pills and the gun. Don’t do it again. Get going.”? Cameras will make this harder, but not impossible.
“TELL ME MORE: How do you communicate when the government censors the internet? With a peer-to-peer mesh broadcasting network that doesn’t use the internet.”
A famous remark regarding privately owned firearms and opposition to Nazis
Let’s try this again in the proper thread:
Interesting read from General Mattis regarding President Obama and his failure to act responsibly to provocations.
Anyone with a brain knew this would be the outcome in Hong Kong from the day control was surrendered by the British. The people of Hong Kong will now pay a terrible price to be subsumed into the Communist Chinese state.
“ENSO predictions based on solar activity”
In other news: . Memories come flooding back … we *have* seen this movie before (those of us old enough to have lived through it … young ‘uns only know what they’re told about things that happened before their time). Add to that the selective narratives. Leftists good, lovers of individual liberty bad.
There is this one: , too.
fun with primitive technology – with a bit of modern enhancements -Expedient tent heater
enhanced Finnish torch stove
That is a highly specialized, restricted-sale scope, not just any random hunter’s rifle scope, so other than the digital precedent (they have done this sort of thing for decades to narrow the universe of suspects for all sorts of crimes), it has little relevance to average shooters.
Here is an adapter for regular rifle scopes. I don’t know if that list included this:
SOLOMARK Rifle Scope Adapter Smartphone Mounting System- Smart Shoot Scope Mount Adapter – Display and Record The Discovery
Some light reading
“THE MAN IN TAN BLUES Part 1”
And now just a bit of musical joy.
Fun with the gas law and global warming
Kind of demonstrates the quality of science being done by major “Climate ” scientists. Or at least those on the government payroll. Nice to see that Ned is getting traction. At least a few understand as much about gasses as a refrigeration mechanic does…pg
This is just to note the passing of another Nobel Peace Prize laureate — Robert Mugabe.
(find the sarc)
The British colony of Rhodesia, now Zimbabwe, doesn’t seem to have become a proud recipient of any British heritage. The independence story sounds familiar. But different, thankfully.
Rhodesia threatened to take sovereignty without British consent. Talks failed. The Rhodesian Front was unwilling to accept what were regarded as unacceptably drastic terms and the British would settle for nothing less – it was a formula doomed to failure. (Wikipedia)
Yup. Failure and then the nightmare..
LL – Jennifer Rubin has such a severe case of TDS, she needs to call an ambulance right now!
The Dimowits whine on just about anything that has made the US a great country, from oil to beef. We need to come up with a whine index to categorize their angst.
Here is a particularly whiny story about NC and hurricanes. It’s as if these NYT idiots woke up yesterday and learned hurricanes hit the East Coast. And of course, they blame climate change as the root of evil here. They don’t mention the folly of building flimsy houses near the ocean as a problem – idiots all around from NC coastal residents to the “journalists”. From the article:
As Dorian chugged northeast away from the Carolinas, many said they were happy to replace blow-away roof shingles and ruined furniture and carry on until the Next One. Others, storm-shaken and weary, wondered, How many more times would they have to pack up their pets and children and race for shelter at the closest middle school? How could they rebuild their homes to withstand hurricanes made wetter and more destructive by climate change? How many more times could they bear to reboot their lives?
Across eastern North Carolina, some residents who had been displaced by devastating inland flooding during Florence ended up riding out Dorian in the borrowed bedrooms and temporary apartments where they have stayed while waiting for slow-moving disaster aid and other help with rebuilding their homes.
@Larry L:
One wonders how much of that Chinese port survived the hurricane ;-)
But it is still “one stone” on the Go Board placed so as to build an eye… We either remove that stone, or watch as they complete the process of entrenchment and become un-removable.
@CDQuarles:
Good one! My but this sure sounds familiar… wonder if they were just rerunning the Nixon Script on Trump? (And thus were really astounded when it didn’t work?)
Hmmm…. Democrats, highly corrupt and working coups since at least the ’70s?
@Another Ian:
Yeah, Obama The Pure is not going to sell to a world with a memory…
@P.G.:
OH! NOW I understand why I couldn’t buy into the Global Warming Crap! I learned to do AC maintenance decades ago and I’m an AC Mechanic at heart! Hard to let go of that skill once you have it…
I’ve got one of my cars’ AC running on a 20% propane 80% butane mix as that’s a “drop in replacement” for R12 and I’ve just not had the time to do the flush & oil change for the R134A conversion… Works really well, BTW. Also, California now has a $10 DEPOSIT on each can of 134a you buy AND has their own special valve. They think they will save the planet from “greenhouse gasses” by recovering unused coolant from when folks fill their AC. I make sure to drain mine ENTIRELY whenever I buy any here… Then again, I also buy a case of it whenever I’m out of State so my last buy was a couple of years ago… So I’m not in a hurry to convert to the California Mafia Can…
For anyone worried about “flammable gas” in the AC: It’s all of about 16 ounces. This in a car with 16 GALLONS of more flammable gasoline in it. Almost all of that gas is in the receiver / drier far away from the passenger compartment. In case of an interior leak, the gas is “odorized” and I’d smell it. I don’t smoke and don’t let anyone else smoke in my car anyway. I’ve run such a mix on and off since they first banned R12 back in the ’80s in everything from a Honda, to an International Harvester Scout, to a couple of Mercedes. All with zero problems. It just isn’t a real risk, even in a wreck. Notice nobody gets excited about carrying camp stoves and fuel in their cars? Well, this is safer than that.
@YMMV:
“Use the sarc, Luke!” ;-)
@Jim2:
Oh Boy, a “tool” to assure an echo chamber and spiral descent into confirmation bias…
As a person who seeks out change, I find such tools a big bother. Pretty quickly I’m bored and move to a different site for “some new news”…
Though I have found that swapping browsers / systems / history can also work. Part of why I run over a dozen browsers / systems … in addition to the privacy & security advantages… (IT’s the BORG! Rotate the shield frequencies!…)
Per Hurricanes:
Wonder how many folks in Rhode Island remember they got whacked in about ’53? Or folks in NYC know about their history of Hurricanes?
ANYWHERE on the East Coast or Gulf Coast can get whacked, and will. That’s why all the old big cities were either well inland or were only there to support the ports. Look at Atlanta then just run north along the freeway…
jim2: “We need to come up with a whine index […]”
Oh yes! A whine index is definitely needed. What would it be?
Lessee…
Level 1 – Crickets and frogs go silent
Level 2 – Dogs start to howl
Level 3 – Elephants begin stampeding
Level 4 – Drowns out the sound of a tornado
Level 5 – Start searching for an ice pick to puncture your eardrums
Now that doctors have been tasked to sniff out crazies who MIGHT go shoot or blow up something, the doctor-patient relationship we have come to know is gone. It’s best to stick to mundane topics and symptoms with them now.
@H.R.:
Having had my eardrums punctured, I’d recommend against #5.
How about “Run screaming from the room” instead?
@E.M. When their whining gets to that level, it’s a viable option ;o)
Run screaming from the room is about a Level 3.5.
Anyhow, jim2 is right, we need a Whine Index. I’ve been busy in the basement putting up wall-mounted wire shelving and couldn’t quickly think of a numerical index. That first go at an index was fast and cutsey, but subjective and hard to remember.
Maybe just Sonic Booms? Oh! How about Mach numbers?
“That thar was a Mach 4 whine iffin I ever heerd one.”
I rather like the Mach numbers.
How’s this for headlines?
Via
@Another Ian
“breaking news” — more like broken news.
If you haven’t seen this video clip yet…
@Jim2, if only that was something new, today. Sadly, that kind of thing dates to the 80s and 90s, thanks, in part, to EMTALA and HIPAA. Now with fully accessible electronic medical records …., and yes, the AMA, like its British colleagues before them, has betrayed its Everyday Joe physician and the public they serve. Can’t say folk were not warned. Even locally, when the Governor supported “Certificate of Need” boards back in the 60s, he was warned that the boards would become co-opted to serve the needs of the incumbents of the day and limit diffusion of cost and life-saving technologies.
Now here’s a thought
Hmmm interesting background info on Israel, Iran, Syria and North Korea’s relationships.
Things are not always as they seem.
Seems like some smoke coming from this direction too
“Flynn Lawyer Response to Threats and Targeted Harassment by Chairman Adam Schiff…”
And more further down
“Keynesian economics is guaranteed to make an economy’s problems worse”
@ Another Ian,
I agree, but will add this from the Austrian School, which I consider adds to classical theory the idea (and fact) that money is also an economic good with supply and demand. That being so, governments monkeying with money add to the uncertainty when times are bad, which makes the internal rate of return (or interest, the time discount of money) increase. Again, as the article says, what should be done is the opposite of Keynes. Bank rates should go up, not down. Savers should be rewarded, since they then supply the capital, and badly mismanaged businesses should be bankrupted sooner, so that the cycle is shortened.
IOW, governments should get out of the way, other than enforce contracts and adjudicate bankruptcies. Also not to be forgotten is that values exist in minds. They are not physical objects, so they are subject to change; and do, rapidly, when people panic. I should also note that wide area malinvestment is something governments encourage when they intervene in economies where they shouldn’t.
Re cdquarles says:
“I should also note that wide area malinvestment is something governments encourage when they intervene in economies where they shouldn’t.”
Sounds like a description of “renewable electricity”?
And any other “government enthusiasm” I might add
I can’t find a reference now, and that makes me wonder why that is, but at any rate, there was one US recession during which interest rates were raised rather than lowered. It was a short recession. Of course the Keynesian apologists spun many tales why that didn’t matter. Does anyone know which recession that was?
Never mind, found it.
Sorry, here it is:
It wasn’t interest rates, but government spending was slashed and taxes cut. The Fed apparently stayed out of it, so there wasn’t the usual increase in money supply. It’s an interesting case and one worth knowing about.
Here’s a heart-warming video of ISIS-on-an-island being schooled in the particulars of the Kinetic Theory of Gasses.
Almost looks like an Arc Light mission in Vietnam
A big player in the oil industry is gone from the scene –
Hong Kong protesters halt protest activity on 09/11 to note the anniversary of the 9/11 attack on America.
waste oil burner
He is running this first one stupid hot!
wood stove converted to burn waste oil with electric blower
Better discussion of the basics for a low tech garage heater etc.
simple oil burner with complete instructions for assembly
Prototype design uses only the draft to provide the air.
simple proof of concept draft only air supply
I actually like this better than the blower design, a little effort to induce swirl in the air supply would make a big difference.
@Jim2,
If I am remembering correctly, Ludwig von Mises wrote about the differences between Coolidge/Harding’s response to a panic in a compare and contrast with Hoover/Roosevelt, back in the 30s or 40s. Also if I am remembering correctly, the Fed was rather new at the time and Keynesian economics wasn’t the thing it became later.
I am not sure that the Mises.org site has it; but I do remember a book sold by Laissez Faire Books that did. I once had the quite extensive personal library … sadly lost after multiple moves.
I think the lesson from that short lived down turn, is that the market crash / contraction is the cure not the problem – sort of like a fever is a necessary short term fix for an infection, and the efforts to soften the down turn is actually the problem and prolongs the downturn.
During a strong contraction the hidden hand of everyone’s best self interests quickly nulls out the problems if given a chance to work. Instead of trying to fix the economy let it correct itself and limit intervention to things like soup kitchens that soften the blow on those hurt by the contraction but without meddling in the economy itself.
The problem is everyone is fixated on the “Do something” bias instead of “first do no harm”.
No programmatic solution will be as effective as millions of individuals taking local corrective actions to fix their personal problems.
@LL,
Exactly. The stock and bond markets are a bit of a trailing indicator. The post-WWI ‘bust’ lasted about a year in the US. It lasted longer in war ravaged Europe. Their response, in a large measure, was inflation. They went off gold. The US didn’t. The gold fled Europe, exacerbating their problems … allowing Hitler his opening …. leading to WWII, which meant even more gold leaving Europe to America, for “safe’ keeping.
When your currency becomes a ‘reserve’ currency, people outside your borders demand it. You, then, have to supply it. Your money leaves your country …. so, when the money flows out too fast …. two sides of the same coin. Thankfully, President Trump is here to change the relationships. I’d like to see the USD lose ‘reserve’ status. I’d like to see gold come back. It’s not like we don’t have any gold already and not that we can’t mine some more, when needed.
@cdquarles: “It’s not like we don’t have any gold already and not that we can’t mine some more, when needed.”
You piqued my interest in how much it would cost to get more gold. I found what I believe is a reasonably credible source, American Bullion. Yup, they are in the business of selling gold with its attendant hype, but they are buying on the market and then retailing it. Since the market price is not directly tied to production cost, but rather supply, demand, and speculation, I think their analysis of mining cost is reasonably uncolored.
It turns out the cost is highly variable, depending on the mining region.
So we can mine more. I’m still uncertain about the rate of production if we had to increase the supply of gold in a hurry to meet accelerated demand.
I also wonder how much is laying around in forgotten safe deposit boxes, jewelry boxes, buried in backyards, entombed with the deceased, hidden behind the drywall, and other stashes not part of the day-to-day trading or industrial use. There is probably quite a bit that could be teased out without mining. I’ve run across estimates, but I just take to be WAGs. I don’t see how anyone could really know.
@CDQuarles:
Be careful what you ask for…
India and Russia have cut a deal (thinking about a posting on it..) to conduct their mutual trade in local currencies and dump the $US. China and Iran have worked up a deal to trade Oil for Stuff on similar terms. EU and Russia / China sporadically discuss shifting to € for trade.
Basically there’s a modest speed exit of the $US from the currency of choice in trade… Not enough yet to create currency problems, but given time…
@H.R.:
Any given mine has good deposits, bad deposits, and great deposits. A smart miner works the bad deposits when prices are crazy high and shifts to the good deposits when prices are crazy low (so as to stay in business long term). So a single mine can well have deposits ranging from $200 / ounce to $3000 / ounce to produce.
PRICE determines SUPPLY. With higher prices, all that highly diffused $3000 / ounce gold becomes “economic reserves”. If prices drop to $40 / ounce, there’s almost none that can be mined at a profit and “We are RUNNING OUT!!!!” panic stories get pushed. Higher prices pay for the fancier tech to get gold out of extremely dilute source rocks. There’s exponentially more in diffused sources than there is in concentrated deposits, so a modest price rise yields a massive increase in reserves…
In the limit case we use ion exchange mats to get it from sea water for giga tons… at a price…
So to answer “how much” and “how fast” you must answer “at what price”…
Now a completely separate but important question is: How much does it really benefit the people / society to spend all that money to extract a metal that’s pretty, but only used for a few other things, really; and would some other use be far far better? For example, I’d rather have $2000 of copper than $2000 of gold. (And I’d really rather have a copper / tin alloy…) So for my money, give me bronze first, gold not so much…
Last I looked into the cost of production a couple years ago, most of the major mines had true cost of production around $1200 / oz so that should put a floor on gold prices near that.
This gives a good idea of where the gold market boundaries are.
There are several chemical properties that make gold great as a physical monetary unit. One is that only one other chemical element that I know of is denser, and one other is nearly so. The denser one is not nearly as familiar to most people as gold is. The other is, or was, such that it was used successfully, for a time, to scam people. Another thing is that people will readily trade gold for other things precisely because it’s pretty, doesn’t rust or tarnish easily, and nearly all of it ever mined is still around.
Oh, I almost forgot. The US Dollar is still defined as a specified weight and fineness. That dollar doesn’t circulate. The Fed still holds ca 260 million ounces (Troy) of the stuff, and buys and sells a few million ounces a year for minting, I guess, these days.
Old news on digital privacy but a good refresher:
This was written before Dissenter browser came out, which I also rank up there with brave (actually I prefer it over brave)
By the way side note Dissenter came out with an update if you use it do a re-install to make sure you are running the current version.
LL – I like your recession-fever analogy.
@E.M. – I was curious about the cost to mine in light of a change in policy that would switch us to gold.
A: How much gold is already available?
B: How much more would we need to make the switch?
C = B – A and is what we’d have to mine (and yes, I know some plays are gravy and some barely produce, same as coal, oil, etc. What’s the average cost to mine? It seems to be about $1,200/oz.)
What is cost of ‘C’? $500 Billion? $6 Trillion? I have no clue.
If we make the switch rapidly, is there enough money and time to buy and make enough mining equipment to get ‘C’? If ‘C’ is large, regardless of price, is it possible to produce the mining equipment in the allotted time? Will the mining companies be borrowing in paper dollars and paying it back in gold? Who the heck would want their loan paid back in paper?
Anyhow, the mechanics of such a switch just piqued my interest. Things could get really *ahem* interesting if we need a whole bunch of gold in a short time.
Does everyone have their gold pan ready to go? ;o)
Wait up…. If gold spikes that much, everybody will abandon the gold machinery production lines, grab a pan, and go try to make more money panning gold than they are being paid on a production line. We saw that movie before in the 1800s.
Perhaps one of those ‘return to the gold standard, NOW’ folks has thought it through completely and has run all of the necessary numbers. I might have a look for that online if I can catch a break from working on the basement.
There are secondary gold sources than traditional mining.
For example sand and gravel companies who process river sand and gravel often run an inline gold recovery system (sluice box) which covers part of the cost of processing the sand and gravel deposit.
Electronics recycling is recovering precious metals as part of their business model.
Last – when Space-x or someone similar opens up harvesting asteroids for metals, it could flood the world market with several important metals and drastically change the demand cost curve (at least for a while)
You don’t have to go to 100% metals based currency all at once. Right now there is 0% backing of hard currency for nominal currency, The government could set up a gradual adoption of a currency backing where every year an additional 1% of currency must be backed by precious metals, (also no reason the hard metal backing could not be a market basket of several precious metals)
At that rate it would take 100 years for us to return to a gold standard.
Nixon dropped gold backed currency in 1971, or 48 years ago. If you went to 2% coverage each year you would go back on a gold basis in 50 years.
On other topics – attention Beto
Are you sure you can finance that buy back you are talking about?
At just $100 a gun that would work out to about 40 billion dollars, or roughly 0.2% of our entire GDP.
“Art of the deal”
Straw packs and sharpie markers
Sounds a useful form
One Texas Congressman offered Francis his AR … sort of … :
Texas Rep. Briscoe Cain posted a tweet after Thursday night’s Democratic presidential debate in Houston in response to presidential candidate Beto O’Rourke’s call for assault-style weapons to be confiscated, touching off a Twitter debate and possibly an FBI investigation.
Cain, a Republican from Deer Park, tweeted, “My AR is ready for you Robert Francis,” using the former El Paso congressman’s legal name. O’Rourke has said he would institute a national gun registry and a buyback program for “weapons of war.”
Hmmmm major fire at a Saudi oil terminal started after 2 drones hit the facility!
Small commodity drones or nation state military drones?
Confirmed by Euro news, but still very little info.
Chatter on arabic threads is that it was likely an Iranian attack on Saudi Arabia, possibly by Iranian troops in Iraq.
Ooops link
And Reuters
Old study on fats and heart disease counters assumptions of present dietary guide lines.
E.M.
For your wine appreciation
Don’t miss the comments
@Larry L:
Looks like the Yemen / Saudi war is widening to a Sunni / Shia war…
The article says “10 drones” so I doubt they are the very large sized ones like the USA uses.
I’d suspect smaller DIY on commercial public use drones, or slightly larger purpose built in Iran and smuggled in. As an incendiary, all you need is something that can carry a couple of pounds. BUT, if you need to breach a tank wall, make it about 10 lbs.
Heavy crude doesn’t burn worth a damn, but Saudi Light? Rather like dirty K1 with light ends left in. I doubt they store it in open topped tanks, but they might have a thin polymer roof. If so, then a couple of pounds of incendiary / explosive on the top would be enough. I doubt their air defense is set up to detect a 2 or 3 foot diameter drone.
Well, I guess we’ve started into the Robot War era with autonomous robots doing the attacks…
On the heart disease thing:
It is nice to see the studies confirming that chasing fats is a dead end. IMHO the studies posted in a prior article are The Answer. The two key bits that make it “proof” for me are:
1) Gorillas on low Vit C diets get heart disease just like Humans. Leave them on their normal high Vit C diets, no heart disease. Pretty much proves it was our move from lots of Vit C fresh leaves and fruits to low Vit-C grains that was causal.
2) The mouse where they did the knock-out of its Vit-C making gene (so it depends on Vit-C in diet like humans, great apes, and Guinea pigs – or was it hamsters…) then ADDED the human gene for the lipoprotein that patches over arterial leaks from low Vit-C (lipoprotein(a)?) and when they were given a low Vit-C diet, got heart disease. High Vit-C, no heart disease.
That’s about as definitive as you can possibly get. Created, and removed, on demand.
For Great Apes (Gorillas) they get about 6 gm Vit C / day. Adjusted to human body weight, that’s about 1 gm to 2 gm / day. ( You would be the one gram, me being about 2 x as heavy would be the 2 grams…). Very few people on our “modern” diet get anywhere near that level.
Furthermore: I’m pretty sure those fat feeding studies didn’t bother measuring Vit C levels in the subjects or the diets. Why would you measure how much orange juice was drunk by whom if you only really care about who had bacon vs cereal and who had butter vs margarine?
When various studies of “the same thing” get semi-random results (some pro, some con) it is highly likely you are looking at the wrong “same thing” and need to find the random variation that is actually causal. For coronary artery disease, it looks like that is Vitamin C intake and that the “dietary guidelines” are too low. Enough to prevent scurvy, not enough to prevent coronary artery disease over a lifetime.
@Another Ian:
Good one!
I’d love to get a case, but I suspect they will be immediately sold out after the WUWT article…
Heck, Heartland Institute ought to order a dozen cases just for their events!
I’ll be asking about it at BevMo, but I’m pretty sure I’ll be too late to get any. I suppose there is always next year…
Since I know you like music videos, and this one is of the “classical liberal” variety, I thought you would enjoy this. What is most interesting is the femi-nazi response to it in its commentary.
It’s pretty easy to make thermite. Drop a burning half-pound of that on just about any tank and it will burn through.
@Gary Smith:
Yeah, it takes constant effort to keep the professional troll brigades of The Left under control. Any straying from The Narrative is attacked and if you let that succeed, you just get more of it. Insisting on polite conversation seems to keep it of limited scope.
@Jim2:
Oh, yeah, thermite…. I’m also fond of a chunk of magnesium. It will burn in water… Then again, sodium floats on water and self ignites…
In fact… a modestly evil thought…. One could make a molded block of magnesium with a sodium plug in it. Cover the plug with a water proof seal. To deploy, pull off seal and expose to water or wet air… Sodium ought to spontaneously ignite and then light the magnesium. For a good time, wrap the magnesium in a thermite bucket…
Properly done this could be made to look like any number of innocuous things, then to start it, just pull the patch and piss on it…
Would give a whole new meaning to “pissed off” ;-)
Then again, a simple road flare is a lot easier to get and works about as well…
Don’t tell Beto, though, or he will demand we ban fire…
Just some prepper food for thought
The last few days I have been doing an inventory of holes in my emergency preparedness planning.
As we all know, safe water is probably the single most important need aside from enough warmth to avoid hypothermia. Hypothermia can kill in minutes to hours, where lack of water kills in about 4 days depending on temperature and exertion levels. Lack of food on the other hand kills in about 40 days (30% body weight loss is usually fatal and unrecoverable).
It is not hard to make a simple sand filter (assuming you have ready access to clean sand), and you can easily buy very high quality water filters for camping – some that go down to less than 0.5 microns and a few that achieve less than 0.1 microns.
But the best way to treat water is to set up a cascade of filters that start at very rough (ie simple strainer) and progress to about 5 microns before you pass the water through your bacteria safe camping filter that can get down to 0.5 to 0.1 microns.
So that left me looking at the hole in that process and I decided all things considered there are relatively cheap over the counter household water filters that you can easily set up as a gravity feed filter system. No need to cobble together a rough sand filter when over the counter you can get better performance for less effort and about the same money for parts.
As a first cut I got these pieces today as a bare minimum to prefilter good enough to avoid clogging the bacteria safe 0.5 micron filters for final treatment.
Just toss these pieces in an old duffel bag and you can have a reliable safe pre-filter stashed in the closet for any situation that requires you to use non-potable water for an emergency.
I am planning on picking up a couple more of these housings over the next few months to set up a 3 or 4 filter cascade that should be good for over 3 months of normal water usage and perhaps 6 months to a year of restricted water use.
Rough water filter (15 to 30 microns medium sediment )
Parts list (minimum)
Brita whole house water filter model WHS-201 $19.97
2 pack of spare filters Brita part number WHF-103 $ 9.98
2 each pvc threaded 90 deg elbows (street EL ) 3/4” $ 2.28
2 each pvc riser threaded poly pipe 24” $ 4.76
. . . . . . . . . . . . . . . . . . . Total cost minimum assembly . . $ 36.99
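As a quick sanity check, the line items above do sum to the quoted minimum-assembly total; a throwaway Python calculation (prices as listed at time of writing, obviously subject to change):

```python
# Parts list for the gravity-feed pre-filter, prices as quoted above.
parts = {
    'Brita WHS-201 housing': 19.97,
    'Brita WHF-103 filters, 2-pack': 9.98,
    'PVC threaded 90-deg street elbows, 2x': 2.28,
    'PVC threaded 24-inch risers, 2x': 4.76,
}

# Sum and round to cents to sidestep float representation noise.
total = round(sum(parts.values()), 2)
print(total)  # 36.99 -- matches the quoted minimum-assembly total
```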
Brita filters compatible with this housing
WHF-101 large sediment particles from 30 to 50 microns (rated 15,000 gallons or 3 months)
WHF-102 (string wound) large sediment particles (Particulate Class IV) @ 5 GPM, 15 to 30 microns
WHF-103 medium sediment particles from 15 to 30 microns
WHF-104 medium sediment particles from 5 to 15 microns – Performance Level 2 Carbon Wrapped
This appears to be a low cost compatible 5 micron filter
In many uses you don’t really need perfect potable water, such as for bathing, dish washing, or water you are going to boil anyway for cooking.
Add a few minor accessories like a couple ball valves in line to close off the in and out of the filter system, a step up on the inlet side to 2-3″ PVC, a couple-quart or gallon holding tank to feed the system, and ideally a barb-type fitting for the inlet-side tubing on one of the 0.1 micron camping filters like this:
To use this disaster water filter in a long running disaster you would definitely want the advantages of ball valves on the output line and some sort of reservoir on the top to pour the raw unfiltered water into. I am currently looking around for the best option for that inlet reservoir, something like a bar sink or even a poly bucket might be the solution of choice. A stainless steel bar sink would run about $50 and the bucket around $5-$8 but would not be as durable.
Choices choices.
Oddly, I have a couple of (about 1/2 gallon sized) plastic filter bodies and filter cartridges in the garage supplies stash ;-)
For the upstream tank:
Old used food barrels are cheap and have plumbing friendly bung fittings. I have 4 of them in the back yard. In a pinch, a simple siphon pipe is enough from an upright barrel…
Were I building something:
Look at the 4 inch or 6 inch PVC pipes (or even larger from construction companies). Make any size tank you want out of them. You can glue in a valve via drilling a hole in the low point side / cap… Some pontoon boat folks look to be using 1 foot or even larger sizes.
Put on an angle, it’s not that hard to make a modest storage “tank” that drains effectively.
Also, look at what the Hydroponic guys use for tanks / pumps. In many cases it’s as plastic tub (mine was about $10) for about 7 gallons (though there are larger). Simple plastic tub, drill hole for valve or use siphon.
BTW, the urgency of a “need” is inversely proportional to the density of it. You die without air in a couple of minutes. Make sure a gas mask or at least a respirator is in the kit… Food? That’s months away. Most folks can easily go a month or two on vitamin pills and water. At my lowest weight (after 3.5 months of “all I wanted to eat” in an isolation study that was mostly just laying on a bed…) I was 156 lbs. I’m now about 220. That’s about 65 lbs of “excess me”. I figure a pound a day, I’ve got 2 months ;-) Though admittedly I was pretty slim at that lower bound. It was about my 8th grade weight…
Oddly, we could have all we wanted to eat, but I just wasn’t interested. No activity and free feeding leading to weight loss? Who knew….
Yes that is the sort of thing I am looking for, just a matter of walking some stores to see what is available in my area. If possible make it so I can store the bits all together in a small duffel bag or similar, putting the filter elements inside the reservoir until you need to assemble it.
Water is a bit of a challenge for me, for a couple reasons:
1. I live in an apartment so cannot pre-build a disaster setup.
2. Annual precipitation is about 15 inches so not nearly as easy to collect adequate rain water as those of you who live in more coastal semi-tropical climates
3. very low summer humidity, so high evaporation losses.
4. I am within walking distance of some perennial water sources but back packing water a half mile or so with a 100′ to 200′ vertical elevation gain is not high on my fun list.
I need to check out the local farm and ranch supply places too, this is a good possibility for a simple dump-in reservoir which already has fittings you could improvise to 3/4″ pipe fittings.
I don’t want a “tank” per se, but just a decent-sized reservoir that you could dump a few quarts of water into easily, dipping out of a rain barrel etc.
You can also get rather good sized emergency storage water bladders that would be good to have on hand if you can set them up and fill them in anticipation of an issue (like a Hurricane )
I have two of these stored under the sink in the bathroom, but you need a bath tub sized object to set them in to keep them corralled.
Of course the big problem with the large bladders and poly food type drums is once filled they are almost impossible to move, so I much prefer water storage in the smaller 7 gallon water cubes.
These weigh about 58 pounds full so are about the limit for one person carry for storage water containers.
There are also Jerry can sized water containers that hold 5 – 6 gallons which are a bit easier to handle.
For full on disaster situation my plan is to tap into the rain gutters that surround the apartment building I live in and divert a bit of that water to a rain barrel. All I need is sufficient stored water to get by until that method starts to produce a steady flow of water.
Good news is, I have a 75 gallon water heater so that alone is good for about 150 days of essential use water only.
In the winter time I could literally shovel up water in the form of snow, so dry summer season and dry cold winter seasons would be the problem periods to work around.
You have a car. Consider the virtue of a large plastic container in the back… Initially you would have some gas, and driving 1/2 mile (1 mile round trip) you ought to be able to get through about a year pretty easy…
These folks seem to make / sell every possible tank:
Around here we see a fair number of plastic cubes with a wire / metal cage around them, on trucks. Used for water, wine, whatever. I suspect a trip to Tractor Supply would be enlightening..
They list 79 choices…
Yeah those farm and ranch supply stores carry a bunch of nice solutions if you have the room and ability to transport. If I had a house like I am looking for I would do that in a heart beat.
One of those 1500 gallon water tanks earth bermed so it is mostly protected against freezing would provide year-round water for one or two people.
For expedient transport a much cheaper solution than that, a card board box and two heavy duty plastic bags will work just fine as a water bladder holding around 30 gallons of water, depending on how big the box is. I’ve transported about 400 gallons like that in the bed of a pickup truck about 120 miles about 40 years ago.
But I am, for planning purposes, assuming no cars are running or allowed on the roads, and going back to the very basics. Could also easily ferry about 7 – 14 gallons by using those water cubes and a bicycle like the Vietnamese did on the Ho Chi Minh trail. A bike can carry 200 – 400 pounds of cargo if you can figure out how to load it, but you have to walk it instead of ride it.
Also almost impossible to pick it back up if you ever lose control of it and drop it on its side.
The North Vietnamese lashed a bamboo pole to the frame in front of the bike seat to make it easier to push and to lift it if dropped on its side.
Maybe you ought to just get one of those folding wagon / grocery cart things and use it… One trip of a mile / week, ought to do it. And it would almost be enough exercise to not be inert… 1/7 th mile / day average is pretty meager…
If I was on fairly level terrain it would not be too bad but, pushing a cart of water (or a bike) up a 1/2 mile long 6% – 7% grade is not high on the fun list, much better to collect it here where I live if I can.
I basically have two choices, 2.5 miles each way on pavement with 253 ft elevation gain/loss each way.
Or 0.87 miles each way with 164 ft elevation gain/loss each way over dirt trails with lots of rocks, and random rattlesnakes and thorns over narrow foot paths.
If I did choose to hoof it I would either backpack a few liters over the shorter route or use a 2 wheeler with one or possibly 2 of the 7 gallon cubes over the long route. Both would be a pretty good work out.
Long route would take about 2.5 hours and short route about an hour or so (would also involve cutting a fence for access to what is a municipal water supply reservoir, which they might not take kindly to).
A tap on the rain gutter would probably deliver a few gallons from any significant thunder storm or light soaking rain. Harvesting snow in trash bags would just involve running up and down 3 flights of stairs for each bag of snow. (or harvested icicles).
We have this amazing and fancy graph displaying utility called Graphite running on chromeos-stats. It's beautiful. You all should use it. This doc is about how to get data into the system so that you can view it in Graphite.
There are two different ways to get data into the system:
The first is to write data to the raw backend of Graphite, which is called carbon. It accepts data in the format of <name> <value> <time>, and one can find a basic interface to sending data to carbon in site_utils.graphite.carbon.
The second is to write data to a service which will calculate statistics over the data you're sending, and then forward it onto carbon. This service is called statsd. It provides better information, as it will calculate min/mean/max, deviation, and provides a more intelligible interface. It also allows for better horizontal scaling in case we ever start logging a truly hilarious amount of stats. (Which we should!)
I would highly recommend using statsd over carbon unless you have a specific reason to be sending data directly to carbon.
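For reference, carbon's plaintext protocol really is just <name> <value> <time> lines over TCP. Here is a minimal illustrative sketch assuming carbon's conventional plaintext port 2003; in real code you should use the site_utils.graphite.carbon wrapper instead, so treat this only as a picture of what ends up on the wire:

```python
import socket
import time

def format_carbon_line(name, value, timestamp=None):
    """Build one line of carbon's plaintext protocol: "<name> <value> <time>"."""
    if timestamp is None:
        timestamp = int(time.time())
    return '%s %s %d\n' % (name, value, timestamp)

def send_to_carbon(line, host='localhost', port=2003):
    """Ship a formatted line to carbon over TCP (2003 is the conventional plaintext port)."""
    sock = socket.create_connection((host, port))
    try:
        sock.sendall(line.encode('ascii'))
    finally:
        sock.close()

print(format_carbon_line('testing.raw_stat', 42, 1234567890))
```

The host and metric name here are placeholders; your shadow_config/server setup determines the real ones.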
We have in site-packages a library named statsd. This has been wrapped for our purposes in a library located in autotest_lib.site_utils.graphite.stats, which does some connection caching, prepending of autotest server name, and a little other magic. The interface exposed is exactly the same as the one exposed by statsd, and therefore this doc should work as a guide for both. (But you should use the site_utils one!)
This guide serves to be copy-paste-able, so you should be able to take any snippet out of this doc and run it. Therefore, here's the import boilerplate you'll need when messing around with this code from within autotest:
import time
import common
from autotest_lib.site_utils.graphite import stats
If you prefer, you can find all the code listed in this doc (as of when this was published) in CL 45286.
As you go through and add some stats, or mess with the code shown here, at some point you're going to want to see how the data is shown on Graphite. Navigate to chromeos-stats. Drill down into stats->[stat type]->[your hostname]->[stat name]. Main thing to note here is that statsd dumps all of the stats under stats/, so if you go looking at the root level for [your hostname]->[stat name], you won't find anything. :P
[your hostname] here means "whatever value you have for [SERVER] hostname = in your shadow_config.ini".
The first stat to examine is how to log how long a function takes to run. The easiest target for this is the scheduler tick. Let's define a fake scheduler tick function:
def tick():
    time.sleep(10)  # Sleeping is a very expensive computation
And now we have a few different ways that we can get the runtime of this function.
We can manually create a timer, and call start() and stop() at the beginning and end of the function:
def tick_manual():
timer = stats.Timer('testing.tick_manual')
timer.start()
time.sleep(3)
timer.stop()
tick_manual()
# You should now see a point at 3000(ms) in stats/timers/<hostname>/testing/tick_manual
We can also take advantage of the decorator that is attached to the Timer object:
timer = stats.Timer('testing')
@timer.decorate
def tick_decorator():
time.sleep(5)
tick_decorator()
# You should now see a point at 5000(ms) in stats/timers/<hostname>/testing/tick_decorator
Statsd timers report their value in milliseconds, so if you report a value by hand using send(), you should probably report the time in milliseconds also.
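As a sketch of that convention (plain Python, not the autotest stats library), you can measure a duration yourself and convert it to milliseconds before reporting it by hand:

```python
import time

def elapsed_ms(func):
    """Run func and return the elapsed wall-clock time in milliseconds,
    the unit that statsd timers expect."""
    start = time.time()
    func()
    return (time.time() - start) * 1000  # seconds -> milliseconds

# Example: a 50 ms sleep should measure roughly 50 (ms)
duration = elapsed_ms(lambda: time.sleep(0.05))
```

A value produced this way will line up on the same scale as the timings recorded by the Timer object's start()/stop() and decorator forms.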
If you're looking to keep track of how frequently something occurs, a counter is a good choice. Statsd receives the counter stat, tallies it over time, and flushes the value of events per second to carbon and resets the counter to zero once every ten seconds. With counters, there are no extra statistics that statsd can compute. The normal ones of min, max, std_dev, etc. make no sense in the context of counters.
# We can increment a counter every time we get an rpc request.
def create_job():
stats.Counter('testing.rpc.create_job').increment(delta=1)
# .increment() defaults to delta=1, so it could have been omitted
for _ in range(0, 10):
create_job()
# You should now see 1 at stats/<hostname>/testing/rpc/create_job
# 1 == 10 events / 10 seconds
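The tally-and-flush behaviour described above can be sketched in plain Python. This is an illustration of the semantics, not statsd's actual implementation; the 10-second flush interval matches the default mentioned above:

```python
FLUSH_INTERVAL_SECONDS = 10  # statsd's default flush interval in this setup

class CounterSketch:
    """Illustration of statsd counter semantics: tally increments,
    emit events-per-second at flush time, then reset to zero."""
    def __init__(self):
        self.count = 0

    def increment(self, delta=1):
        self.count += delta

    def flush(self):
        rate = self.count / FLUSH_INTERVAL_SECONDS
        self.count = 0  # the counter resets after every flush
        return rate

counter = CounterSketch()
for _ in range(10):
    counter.increment()
print(counter.flush())  # 10 events / 10 seconds -> 1.0
```

This is why the graph shows 1 rather than 10: the value flushed to carbon is a rate, not the raw tally.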
There also exists a decrement() on the counter object, but I'm not really sure when one would use it. If you're trying to keep a running tally, you should instead use a gauge.
If you're looking to be able to send in a number, or if your stat doesn't really make sense as a timer or counter, then you should probably use a gauge. A gauge allows you to just report a number. The benefit of using a gauge over just sending raw data is that statsd will still compute the statistics about the stats you're sending like it normally does.
def running_jobs():
stats.Gauge('scheduler').send('running_jobs', 300)
running_jobs()
# You should now see 300 at stats/gauges/<hostname>/scheduler/running_jobs
Values submitted to an average are automatically averaged with the other values in the same bucket at the end of the flush interval. The only use case I can think of for this is if you're trying to measure something in a gauge that's very flaky, which is messing up all of the statistics that are being calculated. However, I can't even think of an example to use in our codebase, so I'm just mentioning this for completeness.
If all else fails, and you don't want any fancy statsd features, you can get statsd to send your data to graphite "pretty much unchanged". Note that the prefixing of your hostname still does happen (assuming you didn't turn it off).
One could use this to log the fact that something happened. Logging something so that there's an obvious spike when you're overlaying graphs doesn't need any sort of statistics calculated about it.
# statsd automatically adds the current time to the data
def scheduler_initialized():
stats.Raw('scheduler.init').send('', 100)
scheduler_initialized()
# 100 will now show up at the current time under stats/<hostname>/scheduler/init
If you have shell access to the Graphite server, you can also dump a stat's raw data points with whisper-fetch.py:
whisper-fetch.py --pretty /opt/graphite/storage/whisper/stats/timers/cautotest/verify_time/lumpy/mean.wsp
#include <Caching_Strategies_T.h>
An attribute is attached to each item, and it is incremented whenever the item is bound or looked up in the cache; it thus denotes the item's frequency of use. Based on the value of this attribute, the item is removed from the CONTAINER, i.e., the cache.
The <container> is the map in which the entries reside. The timer attribute is initialized to zero in this constructor. The <purge_percent> field denotes the percentage of the entries in the cache which can be purged automagically, and by default is set to 10%.
Access the attributes.
Get the percentage of entries to purge.
Set the percentage of entries to purge.
This method acts as a notification about the CONTAINER's bind method call.
Lookup notification.
This method acts as a notification about the CONTAINER's unbind method call.
This method acts as a notification about the CONTAINER's trybind method call.
This method acts as a notification about the CONTAINER's rebind method call.
Purge the cache.
Dumps the state of the object.
The level about which the purging will happen automagically.
This is the helper class which will decide and expunge entries from the cache.
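The least-frequently-used strategy described above can be sketched in a few lines of Python. This is purely illustrative and not the ACE implementation; the names `bind`, `find`, `purge`, and `purge_percent` mirror the documented interface:

```python
class LFUCacheSketch:
    """Illustrative sketch of the LFU caching strategy described above:
    each entry carries a frequency attribute that is incremented on bind
    and lookup, and purge() removes the least-frequently-used
    purge_percent of the entries."""
    def __init__(self, purge_percent=10):
        self.purge_percent = purge_percent  # % of entries removed per purge
        self.entries = {}   # key -> value (the CONTAINER)
        self.freq = {}      # key -> use count (the per-item attribute)

    def bind(self, key, value):
        self.entries[key] = value
        self.freq[key] = self.freq.get(key, 0) + 1

    def find(self, key):
        if key in self.entries:
            self.freq[key] += 1  # lookups also bump the frequency
            return self.entries[key]
        return None

    def purge(self):
        # Remove the purge_percent least-frequently-used entries (at least one).
        n = max(1, len(self.entries) * self.purge_percent // 100)
        victims = sorted(self.freq, key=self.freq.get)[:n]
        for key in victims:
            del self.entries[key]
            del self.freq[key]
        return victims

cache = LFUCacheSketch(purge_percent=10)
for i in range(10):
    cache.bind(i, str(i))
for _ in range(5):
    cache.find(3)           # make key 3 "hot" so it survives the purge
removed = cache.purge()     # removes 10% of the 10 entries, i.e. one cold key
```

The frequently-used key 3 survives, while one of the entries that was only ever bound once is expunged.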
Pandas (which is a portmanteau of "panel data") is one of the most important packages to grasp when you’re starting to learn Python.
The package is known for a very useful data structure called the pandas DataFrame. Pandas also allows Python developers to easily deal with tabular data (like spreadsheets) within a Python script.
This tutorial will teach you the fundamentals of pandas that you can use to build data-driven Python applications today.
Table of Contents
You can skip to a specific section of this pandas tutorial using the table of contents below:
- Introduction to Pandas
- Pandas Series
- Pandas DataFrames
- How to Deal With Missing Data in Pandas DataFrames
- The Pandas groupby Method
- What is the Pandas groupby Feature?
- The Pandas concat Method
- The Pandas merge Method
- The Pandas join Method
- Other Common Operations in Pandas
- Local Data Input and Output (I/O) in Pandas
- Remote Data Input and Output (I/O) in Pandas
- Final Thoughts & Special Offer
Introduction to Pandas
Pandas is a widely-used Python library built on top of NumPy. Much of the rest of this course will be dedicated to learning about pandas and how it is used in the world of finance.
What is Pandas?
Pandas is a Python library created by Wes McKinney, who built it to help him work with datasets in Python for his job in finance.
According to the library’s website, pandas is “a fast, powerful, flexible and easy to use open source data analysis and manipulation tool, built on top of the Python programming language.”
Pandas stands for ‘panel data’. Note that pandas is typically stylized as an all-lowercase word, although it is considered a best practice to capitalize its first letter at the beginning of sentences.
Pandas is an open source library, which means that anyone can view its source code and make suggestions using pull requests. If you are curious about this, visit the pandas source code repository on GitHub.
The Main Benefit of Pandas
Pandas was designed to work with two-dimensional data (similar to Excel spreadsheets). Just as the NumPy library had a built-in data structure called an
array with special attributes and methods, the pandas library has a built-in two-dimensional data structure called a
DataFrame.
What We Will Learn About Pandas
As we mentioned earlier in this course, advanced Python practitioners will spend much more time working with pandas than they spend working with NumPy.
Over the next several sections, we will cover the following information about the pandas library:
- Pandas Series
- Pandas DataFrames
- How To Deal With Missing Data in Pandas
- How To Merge DataFrames in Pandas
- How To Join DataFrames in Pandas
- How To Concatenate DataFrames in Pandas
- Common Operations in Pandas
- Data Input and Output in Pandas
- How To Save Pandas DataFrames as Excel Files for External Users
Pandas Series
In this section, we’ll be exploring pandas Series, which are a core component of the pandas library for Python programming.
What Are Pandas Series?
Series are a special type of data structure available in the pandas Python library. Pandas Series are similar to NumPy arrays, except that we can give them a named or datetime index instead of just a numerical index.
The Imports You’ll Require To Work With Pandas Series
To work with pandas Series, you’ll need to import both NumPy and pandas, as follows:
import numpy as np
import pandas as pd
For the rest of this section, I will assume that both of those imports have been executed before running any code blocks.
How To Create a Pandas Series
There are a number of different ways to create a pandas Series. We will explore all of them in this section.
First, let’s create a few starter variables - specifically, we’ll create two lists, a NumPy array, and a dictionary.
labels = ['a', 'b', 'c']
my_list = [10, 20, 30]
arr = np.array([10, 20, 30])
d = {'a':10, 'b':20, 'c':30}
The easiest way to create a pandas Series is by passing a vanilla Python list into the
pd.Series() method. We do this with the
my_list variable below:
pd.Series(my_list)
If you run this in your Jupyter Notebook, you will notice that the output is quite different than it is for a normal Python list:
0    10
1    20
2    30
dtype: int64
The output shown above is clearly designed to present as two columns. The second column is the data from
my_list. What is the first column?
One of the key advantages of using pandas Series over NumPy arrays is that they allow for labeling. As you might have guessed, that first column is a column of labels.
We can add labels to a pandas Series using the
index argument like this:
pd.Series(my_list, index=labels) #Remember - we created the 'labels' list earlier in this section
The output of this code is below:

a    10
b    20
c    30
dtype: int64

Pandas DataFrames
What Is A Pandas DataFrame?
A pandas DataFrame is a two-dimensional data structure that has labels for both its rows and columns. For those familiar with Microsoft Excel, Google Sheets, or other spreadsheet software, DataFrames are very similar.
Here is an example of a pandas DataFrame being displayed within a Jupyter Notebook.
We will now go through the process of recreating this DataFrame step-by-step.
First, you’ll need to import both the NumPy and pandas libraries. We have done this before, but in case you’re unsure, here’s another example of how to do that:
import numpy as np
import pandas as pd
We’ll also need to create lists for the row and column names. We can do this using vanilla Python lists:
rows = ['X','Y','Z']
cols = ['A', 'B', 'C', 'D', 'E']
Next, we will need to create a NumPy array that holds the data contained within the cells of the DataFrame. I used NumPy’s
np.random.randn method for this. I also wrapped that method in the
np.round method (with a second argument of
2), which rounds each data point to 2 decimal places and makes the data structure much easier to read.
Here’s the final function that generated the data.
data = np.round(np.random.randn(3,5),2)
Once this is done, you can wrap all of the constituent variables in the
pd.DataFrame method to create your first DataFrame!
pd.DataFrame(data, rows, cols)
There is a lot to unpack here, so let’s discuss this example in a bit more detail.
First, it is not necessary to create each variable outside of the DataFrame itself. You could have created this DataFrame in one line like this:
pd.DataFrame(np.round(np.random.randn(3,5),2), ['X','Y','Z'], ['A', 'B', 'C', 'D', 'E'])
With that said, declaring each variable separately makes the code much easier to read.
Second, you might be wondering if it is necessary to put rows into the
DataFrame method before columns. It is indeed necessary. If you tried running
pd.DataFrame(data, cols, rows), your Jupyter Notebook would generate the following error message:
ValueError: Shape of passed values is (3, 5), indices imply (5, 3)
Next, we will explore the relationship between pandas Series and pandas DataFrames.
The Relationship Between Pandas Series and Pandas DataFrame
Let’s take another look at the pandas DataFrame that we just created:
If you had to verbally describe a pandas DataFrame, one way to do so might be "a set of labeled columns of data where each column shares the same row index."
Interestingly enough, each of these columns is actually a pandas Series! So we can modify our definition of the pandas DataFrame to match its formal definition:
“A set of pandas Series that shares the same index.”
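We can check this definition directly in code. The following sketch builds a DataFrame the same way as above and confirms that each column is a Series sharing the DataFrame's index:

```python
import numpy as np
import pandas as pd

rows = ['X', 'Y', 'Z']
cols = ['A', 'B', 'C', 'D', 'E']
data = np.round(np.random.randn(3, 5), 2)
df = pd.DataFrame(data, rows, cols)

# Each column of the DataFrame is itself a pandas Series...
print(type(df['A']))                   # <class 'pandas.core.series.Series'>
# ...and every column shares the same index as the DataFrame.
print(df['A'].index.equals(df.index))  # True
```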
Indexing and Assignment in Pandas DataFrames
We can actually call a specific Series from a pandas DataFrame using square brackets, just like how we access an element from a list. A few examples are below:
df = pd.DataFrame(data, rows, cols)

df['A']
"""
Returns:
X   -0.66
Y   -0.08
Z    0.64
Name: A, dtype: float64
"""

df['E']
"""
Returns:
X   -1.46
Y    1.71
Z   -0.20
Name: E, dtype: float64
"""
What if you wanted to select multiple columns from a pandas DataFrame? You can pass in a list of columns, either directly in the square brackets - such as
df[['A', 'E']] - or by declaring the variable outside of the square brackets like this:
columnsIWant = ['A', 'E']
df[columnsIWant] #Returns the DataFrame, but only with columns A and E
You can also select a specific element using chained square brackets. For example, if you wanted the element in column A at row X (which is the top left cell of the DataFrame), you could access it with df['A']['X'].
A few other examples are below.
df['B']['Z'] #Returns 1.34 df['D']['Y'] #Returns -0.64
How To Create and Remove Columns in a Pandas DataFrame
You can create a new column in a pandas DataFrame by specifying the column as though it already exists, and then assigning it a new pandas Series.
As an example, in the following code block we create a new column called ‘A + B’ which is the sum of columns A and B:
df['A + B'] = df['A'] + df['B']
df #The last line prints out the new DataFrame
Here’s the output of that code block:
To remove this column from the pandas DataFrame, we need to use the
pd.DataFrame.drop method.
Note that this method defaults to dropping rows, not columns. To switch the method to operate on columns, we must pass in the
axis=1 argument.
df.drop('A + B', axis = 1)
It is very important to note that this
drop method does not actually modify the DataFrame itself. For evidence of this, print out the
df variable again, and notice how it still has the
A + B column:
df
The reason that
drop (and many other DataFrame methods!) does not modify the data structure by default is to prevent you from accidentally deleting data.
There are two ways to make pandas automatically overwrite the current DataFrame.
The first is by passing in the argument
inplace=True, like this:
df.drop('A + B', axis=1, inplace=True)
The second is by using an assignment operator that manually overwrites the existing variable, like this:
df = df.drop('A + B', axis=1)
Both options are valid but I find myself using the second option more frequently because it is easier to remember.
The
drop method can also be used to drop rows. For example, we can remove the row
Z as follows:
df.drop('Z')
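A self-contained sketch of the copy-versus-overwrite behaviour discussed above, using a small hand-made DataFrame:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]}, index=['X', 'Z'])

dropped = df.drop('Z')       # returns a new DataFrame...
print('Z' in df.index)       # True  -- the original still contains row Z
print('Z' in dropped.index)  # False

df = df.drop('Z')            # reassignment overwrites the original
print('Z' in df.index)       # False
```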
How To Select A Row From A Pandas DataFrame
We have already seen that we can access a specific column of a pandas DataFrame using square brackets. We will now see how to access a specific row of a pandas DataFrame, with the similar goal of generating a pandas Series from the larger data structure.
DataFrame rows can be accessed by their row label using the
loc attribute along with square brackets. An example is below.
df.loc['X']
Here is the output of that code:
A   -0.66
B   -1.43
C   -0.88
D    1.60
E   -1.46
Name: X, dtype: float64
DataFrame rows can be accessed by their numerical index using the
iloc attribute along with square brackets. An example is below.
df.iloc[0]
As you would expect, this code has the same output as our last example:
A   -0.66
B   -1.43
C   -0.88
D    1.60
E   -1.46
Name: X, dtype: float64
How To Determine The Number Of Rows and Columns in a Pandas DataFrame
There are many cases where you’ll want to know the shape of a pandas DataFrame. By shape, I am referring to the number of columns and rows in the data structure.
Pandas has a built-in attribute called
shape that allows us to easily access this:
df.shape #Returns (3, 5)
Slicing Pandas DataFrames
We have already seen how to select rows, columns, and elements from a pandas DataFrame. In this section, we will explore how to select a subset of a DataFrame. Specifically, let’s select the elements from columns
A and
B and rows
X and
Y.
We can actually approach this in a step-by-step fashion. First, let’s select columns
A and
B:
df[['A', 'B']]
Then, let’s select rows
X and
Y:
df[['A', 'B']].loc[['X', 'Y']]
And we’re done!
Conditional Selection Using Pandas DataFrame
If you recall from our discussion of NumPy arrays, we were able to select certain elements of the array using conditional operators. For example, if we had a NumPy array called
arr and we only wanted the values of the array that were larger than 4, we could use the command
arr[arr > 4].
Pandas DataFrames follow a similar syntax. For example, if we wanted to know where our DataFrame has values that were greater than 0.5, we could type
df > 0.5 to get the following output:
We can also generate a new pandas DataFrame that contains the normal values where the statement is
True, and
NaN - which stands for Not a Number - values where the statement is false. We do this by passing the statement into the DataFrame using square brackets, like this:
df[df > 0.5]
Here is the output of that code:
You can also use conditional selection to return a subset of the DataFrame where a specific condition is satisfied in a specified column.
To be more specific, let’s say that you wanted the subset of the DataFrame where the value in column
C was less than 1. This is only true for row
X.
You can get an array of the boolean values associated with this statement like this:
df['C'] < 1
Here’s the output:
X     True
Y    False
Z    False
Name: C, dtype: bool
You can also get the DataFrame’s actual values relative to this conditional selection command by typing
df[df['C'] < 1], which outputs just the first row of the DataFrame (since this is the only row where the statement is true for column
C):
You can also chain together multiple conditions while using conditional selection. We do this using pandas’
& operator. You cannot use Python’s normal
and operator, because in this case we are not comparing two boolean values. Instead, we are comparing two pandas Series that contain boolean values, which is why the
& character is used instead.
As an example of multiple conditional selection, you can return the DataFrame subset that satisfies
df['C'] > 0 and
df['A']> 0 with the following code:
df[(df['C'] > 0) & (df['A']> 0)]
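A self-contained sketch of this pattern with a small hand-made DataFrame (the values are arbitrary, chosen so that only row X satisfies both conditions):

```python
import pandas as pd

df = pd.DataFrame({'A': [1.0, -0.5, 2.0],
                   'C': [0.3, 1.2, -0.7]},
                  index=['X', 'Y', 'Z'])

# Keep only the rows where BOTH conditions hold. Note the parentheses --
# & binds more tightly than the comparisons, so they are required.
subset = df[(df['C'] > 0) & (df['A'] > 0)]
print(subset.index.tolist())  # ['X']
```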
How To Modify The Index of a Pandas DataFrame
There are a number of ways that you can modify the index of a pandas DataFrame.
The most basic is to reset the index to its default numerical values. We do this using the
reset_index method:
df.reset_index()
Note that this creates a new column in the DataFrame called
index that contains the previous index labels:
Note that like the other DataFrame operations that we have explored,
reset_index does not modify the original DataFrame unless you either (1) force it to using the
= assignment operator or (2) specify
inplace=True.
You can also set an existing column as the index of the DataFrame using the
set_index method. We can set column
A as the index of the DataFrame using the following code:
df.set_index('A')
The values of
A are now in the index of the DataFrame:
There are three things worth noting here:
- set_index does not modify the original DataFrame unless you either (1) force it to using the = assignment operator or (2) specify inplace=True.
- Unless you run reset_index first, performing a set_index operation with inplace=True or a forced = assignment operator will permanently overwrite your current index values.
- If you want to rename your index to labels that are not currently contained in a column, you can do so by (1) creating a NumPy array with those values, (2) adding those values as a new row of the pandas DataFrame, and (3) running the set_index operation.
How To Rename Columns in a Pandas DataFrame
The last DataFrame operation we’ll discuss is how to rename their columns.
Columns are an attribute of a pandas DataFrame, which means we can call them and modify them using a simple dot operator. For example:
df.columns #Returns Index(['A', 'B', 'C', 'D', 'E'], dtype='object')
The assignment operator is the best way to modify this attribute:
df.columns = [1, 2, 3, 4, 5]
df
The Pandas dropna Method
Pandas has a built-in method called
dropna. When applied against a DataFrame, the
dropna method will remove any rows that contain a NaN value.
Let’s apply the
dropna method to our
df DataFrame as an example:
df.dropna()
Note that like the other DataFrame operations that we have explored,
dropna does not modify the original DataFrame unless you either (1) force it to using the
= assignment operator or (2) specify
inplace=True.
We can also drop any columns that have missing values by passing in the
axis=1 argument to the
dropna method, like this:
df.dropna(axis=1)
The Pandas fillna Method
In many cases, you will want to replace missing values in a pandas DataFrame instead of dropping it completely. The
fillna method is designed for this.
As an example, let's fill every missing value in our DataFrame with a question mark:
df.fillna('?')
Obviously, there is basically no situation where we would want to replace missing data with a question mark. This was simply an amusing example.
Instead, more commonly we will replace a missing value with either:
- The average value of the entire DataFrame
- The average value of that row of the DataFrame
We will demonstrate both below.
To fill missing values with the average value across the entire DataFrame, use the following code:
df.fillna(df.mean())
To fill the missing values within a particular column with the average value from that column, use the following code (this is for column
A):
df['A'].fillna(df['A'].mean())
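A worked sketch of the column-mean fill on a small DataFrame with one missing value (the data here is made up for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1.0, np.nan, 3.0],
                   'B': [4.0, 5.0, 6.0]})

# Fill the hole in column A with that column's own mean: (1 + 3) / 2 = 2
filled = df['A'].fillna(df['A'].mean())
print(filled.tolist())  # [1.0, 2.0, 3.0]
```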
The Pandas merge Method

In this section, you will learn how to merge pandas DataFrames.

The DataFrames We Will Be Using In This Section

We will be using the following two pandas DataFrames:
import pandas as pd

leftDataFrame = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
                              'A': ['A0', 'A1', 'A2', 'A3'],
                              'B': ['B0', 'B1', 'B2', 'B3']})

rightDataFrame = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
                               'C': ['C0', 'C1', 'C2', 'C3'],
                               'D': ['D0', 'D1', 'D2', 'D3']})
The columns
A,
B,
C, and
D have real data in them, while the column
key has a key that is common among both DataFrames. To
merge two DataFrames means to connect them along one column that they both have in common.
How To Merge Pandas DataFrames
You can merge two pandas DataFrames along a common column using the
merge method. For anyone that is familiar with the SQL programming language, this is very similar to performing an
inner join in SQL.
Do not worry if you are unfamiliar with SQL, because
merge syntax is actually very straightforward. It looks like this:
pd.merge(leftDataFrame, rightDataFrame, how='inner', on='key')
Let’s break down the four arguments we passed into the
merge method:
leftDataFrame: This is the DataFrame that we’d like to merge on the left.
rightDataFrame: This is the DataFrame that we’d like to merge on the right.
how=inner: This is the type of merge that the operation is performing. There are multiple types of merges, but we will only be covering inner merges in this course.
on='key': This is the column that you’d like to perform the merge on. Since
keywas the only column in common between the two DataFrames, it was the only option that we could use to perform the merge.
The Pandas join Method
In this section, you will learn how to join pandas DataFrames.
The DataFrames We Will Be Using In This Section
We will be using the following two DataFrames in this section:
leftDataFrame = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
                              'B': ['B0', 'B1', 'B2', 'B3']},
                             index=['K0', 'K1', 'K2', 'K3'])

rightDataFrame = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],
                               'D': ['D0', 'D1', 'D2', 'D3']},
                              index=['K0', 'K1', 'K2', 'K3'])
If these look familiar, it's because they are! These are nearly the same DataFrames as we used when learning how to merge pandas DataFrames. A key difference is that instead of the
key column being its own column, it is now the index of the DataFrame. You can think of these DataFrames as being those from the last section after executing
.set_index('key').
How To Join Pandas DataFrames
Joining pandas DataFrames is very similar to merging pandas DataFrames except that the keys on which you’d like to combine are in the index instead of contained within a column.
To join these two DataFrames, we can use the following code:
leftDataFrame.join(rightDataFrame)
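Putting the pieces above together as one runnable snippet (the same two DataFrames, joined on their shared index):

```python
import pandas as pd

leftDataFrame = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
                              'B': ['B0', 'B1', 'B2', 'B3']},
                             index=['K0', 'K1', 'K2', 'K3'])
rightDataFrame = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],
                               'D': ['D0', 'D1', 'D2', 'D3']},
                              index=['K0', 'K1', 'K2', 'K3'])

# join aligns the two DataFrames on their index labels
joined = leftDataFrame.join(rightDataFrame)
print(list(joined.columns))  # ['A', 'B', 'C', 'D']
print(list(joined.index))    # ['K0', 'K1', 'K2', 'K3']
```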
Other Common Operations in Pandas

I will be using the following DataFrame in this section:
df = pd.DataFrame({'col1':['A','B','C','D'], 'col2':[2,7,3,7], 'col3':['fgh','rty','asd','qwe']})
How To Find Unique Values in a Pandas Series
Pandas has an excellent method called
unique that can be used to find unique values within a pandas Series. Note that this method only works on Series and not on DataFrames. If you try to apply this method to a DataFrame, you will encounter an error:
df.unique() #Returns AttributeError: 'DataFrame' object has no attribute 'unique'
However, since the columns of a pandas DataFrame are each a Series, we can apply the
unique method to a specific column, like this:
df['col2'].unique() #Returns array([2, 7, 3])
Pandas also has a separate
nunique method that counts the number of unique values in a Series and returns that value as an integer. For example:
df['col2'].nunique() #Returns 3
Interestingly, the
nunique method is exactly the same as
len(unique()) but it is a common enough operation that the pandas community decided to create a specific method for this use case.
How To Count The Occurrence of Each Value In A Pandas Series
Pandas has a function called
value_counts that allows you to easily count the number of times each observation occurs. An example is below:
df['col2'].value_counts()
"""
Returns:
7    2
2    1
3    1
Name: col2, dtype: int64
"""
How To Use The Pandas apply Method
The
apply method is one of the most powerful methods available in the pandas library. It allows you to apply a custom function to every element of a pandas Series.
As an example, imagine that we had the following function
exponentify that takes in an integer and raises it to the power of itself:
def exponentify(x): return x**x
The
apply method allows you to easily apply the
exponentify function to each element of the Series:
df['col2'].apply(exponentify)
"""
Returns:
0         4
1    823543
2        27
3    823543
Name: col2, dtype: int64
"""
The
apply method can also be used with built-in functions like
len (although it is definitely more powerful when used with custom functions). An example of the
len function being used in conjunction with
apply is below:
df['col3'].apply(len)
"""
Returns:
0    3
1    3
2    3
3    3
Name: col3, dtype: int64
"""
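apply also accepts anonymous lambda functions, which are convenient for one-off transformations. A small self-contained sketch:

```python
import pandas as pd

df = pd.DataFrame({'col2': [2, 7, 3, 7]})

# A one-off transformation written inline as a lambda
doubled = df['col2'].apply(lambda x: x * 2)
print(doubled.tolist())  # [4, 14, 6, 14]
```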
How To Sort A Pandas DataFrame
You can sort a pandas DataFrame by the values of a particular column using the
sort_values method. As an example, if you wanted to sort by
col2 in our DataFrame
df, you would run the following command:
df.sort_values('col2')
The output of this command is below:
There are two things to note from this output:
- As you can see, each row preserves its index, which means the index is now out of order.
- As with the other DataFrame methods, this does not actually modify the original DataFrame unless you force it to using the
= assignment operator or by passing in
inplace=True.

Local Data Input and Output (I/O) in Pandas

The examples in this section use a file called stock_prices.csv. The easiest way to follow along is to download the GitHub repository, and then open your Jupyter Notebook in the
stock_prices folder of the repository.
How To Import .csv Files Using Pandas
We can import
.csv files into a pandas DataFrame using the
read_csv method, like this:
import pandas as pd

pd.read_csv('stock_prices.csv')
As you’ll see, this creates (and displays) a new pandas DataFrame containing the data from the
.csv file.
You can also assign this new DataFrame to a variable to be referenced later using the normal
= assignment operator:
new_data_frame = pd.read_csv('stock_prices.csv')
There are a number of
read methods included with the pandas programming library. If you are trying to import data from an external document, then it is likely that pandas has a built-in method for this.
A few examples of different
read methods are below:
pd.read_json() pd.read_html() pd.read_excel()
We will explore some of these methods later in this section.
If we wanted to import a
.csv file that was not directly in our working directory, we need to modify the syntax of the
read_csv method slightly.
If the file is in a folder deeper than what you’re working in now, you need to specify the full path of the file in the
read_csv method argument. As an example, if the
stock_prices.csv file was contained in a folder called
new_folder, then we could import it like this:
new_data_frame = pd.read_csv('./new_folder/stock_prices.csv')
For those unfamiliar with working with directory notation, the
. at the start of the filepath indicates the current directory. Similarly, a
.. indicates one directory above the current directory; to go up two directories, you chain them together, as in ../..
This syntax (using periods) is exactly how we reference (and import) files that are above our current working directory. As an example, open a Jupyter Notebook inside the
new_folder folder, and place
stock_prices.csv in the parent folder. With this file layout, you could import the
stock_prices.csv file using the following command:
new_data_frame = pd.read_csv('../stock_prices.csv')
Note that this directory syntax is the same for all types of file imports, so we will not be revisiting how to import files from different directories when we explore different import methods later in this course.
How To Export .csv Files Using Pandas
To demonstrate how to save a new
.csv file, let’s first create a new DataFrame. Specifically, let’s fill a DataFrame with 3 columns and 50 rows with random data using the
np.random.randn method:
import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randn(50,3))
Now that we have a DataFrame, we can save it using the
to_csv method. This method takes in the name of the new file as its argument.
df.to_csv('my_new_csv.csv')
You will notice that if you run the code above, the new
.csv file will begin with an unlabeled column that contains the index of the DataFrame. An example is below (after opening the
.csv in Microsoft Excel):
In many cases, this is undesirable. To remove the blank index column, pass in
index=False as a second argument to the
to_csv method, like this:
df.to_csv('my_new_csv.csv', index=False)
The new
.csv file does not have the unlabelled index column:
The
read_csv and
to_csv methods make it very easy to import and export data from
.csv files using pandas. We will see later in this section that for every
read method that allows us to import data, there is usually a corresponding
to function that allows us to save that data!
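That read/to symmetry can be exercised end-to-end with a round trip through a temporary file (the file name here is arbitrary):

```python
import os
import tempfile

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(5, 3), columns=['A', 'B', 'C'])

# to_csv and read_csv form one of pandas' read/to pairs
path = os.path.join(tempfile.mkdtemp(), 'roundtrip.csv')
df.to_csv(path, index=False)   # index=False keeps the unlabeled index column out
restored = pd.read_csv(path)

print(restored.shape)  # (5, 3)
```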
How To Import .json Files Using Pandas
If you are not experienced in working with large datasets, then you may not be familiar with the JSON file type.
JSON stands for JavaScript Object Notation. JSON files are very similar to Python Dictionaries.
JSON files are one of the most commonly-used data types among software developers because they can be manipulated using basically every programming language.
Pandas has a method called
read_json that makes it very easy to import JSON files as a pandas DataFrame. An example is below.
json_data_frame = pd.read_json('stock_prices.json')
We’ll learn how to export JSON files next.
How To Export .json Files Using Pandas
As I mentioned earlier, there is generally a
to method for every
read method. This means that we can save a DataFrame to a JSON file using the
to_json method.
As an example, let’s take the randomly-generated DataFrame
df from earlier in this section and save it as a JSON file in our local directory:
df.to_json('my_new_json.json')
We’ll learn how to work with Excel files - which have the file extension
.xlsx - next.
How To Import
.xlsx Files Using Pandas
Pandas’
read_excel method makes it very easy to import data from an Excel document into a pandas DataFrame:
new_data_frame = pd.read_excel('stock_prices.xlsx')
Unlike the
read_csv and
read_json methods that we explored earlier in this section, the
read_excel method can accept a second argument. The reason why
read_excel accepts multiple arguments is that Excel spreadsheets can contain multiple sheets. The second argument specifies which sheet you are trying to import and is called
sheet_name.
As an example, if our
stock_prices had a second sheet called
Sheet2, you would import that sheet to a pandas DataFrame like this:
new_data_frame = pd.read_excel('stock_prices.xlsx', sheet_name='Sheet2')
If you do not specify any value for
sheet_name, then
read_excel will import the first sheet of the Excel spreadsheet by default.
While importing Excel documents, it is very important to note that pandas only imports data. It cannot import other Excel capabilities like formatting, formulas, or macros. Trying to import data from an Excel document that has these features may cause pandas to crash.
How To Export
.xlsx Files Using Pandas
Exporting Excel files is very similar to importing Excel files, except we use
to_excel instead of
read_excel. An example is below using our randomly-generated
df DataFrame:
df.to_excel('my_new_excel_file.xlsx')
Like
read_excel,
to_excel accepts a second argument called
sheet_name that allows you to specify the name of the sheet that you’re saving. For example, we could have named the sheet of the new
.xlsx file
My New Sheet! by passing it into the
to_excel method like this:
df.to_excel('my_new_excel_file.xlsx', sheet_name='My New Sheet!')
If you do not specify a value for
sheet_name, then the sheet will be named
Sheet1 by default (just like when you create a new Excel document using the actual application).
How To Import Remote
.csv Files
If a .csv file is hosted remotely (for example on GitHub), grab its raw URL. You can pass this URL into the
read_csv method to import the dataset into a pandas DataFrame without saving the dataset to your computer first:
pd.read_csv('')
How To Import Remote
.json Files
We can import remote
.json files in a similar fashion to
.csv files.
First, grab the raw URL from GitHub. It will look like this:
Next, pass this URL into the
read_json method like this:
pd.read_json('')
How To Import Remote
.xlsx Files
First, grab the raw URL from GitHub. Then, pass this URL into the
read_excel method, like this:
pd.read_excel('')
https://www.freecodecamp.org/news/the-ultimate-guide-to-the-pandas-library-for-data-science-in-python/
Getting Started With Parcel
Next Generation Web App Bundler
This post has been published first on CodingTheSmartWay.com.
If you’re a web developer you have most certainly had some experience with bundlers like Browserify or Webpack. These web application bundlers help you pack the assets of your web application (code, images, packages, etc.) into bundles so that the application can be served easily.
Furthermore, most bundlers are able to perform many more tasks when building a web application, such as post-processing code or structuring your application to support lazy loading.
However, today most developers struggle with the complicated configuration of bundlers like Webpack. Parcel, a web application bundler released a few weeks ago, is here to solve that problem. The promise is that Parcel is a lot faster than Webpack or Browserify while at the same time requiring no configuration (in most cases).
Let’s take a closer look at Parcel and see how you can use Parcel in your next web development project.
Installation
The project’s website can be found at:
From the start page of the website you can get an overview of the most important features of Parcel:
- Blazing fast bundle times
Parcel uses multiple worker processes to ensure that the compilation process is executed in parallel on multiple cores. Furthermore Parcel uses a caching mechanism for the file system.
- Bundle all your assets
Parcel offers out of the box support for common project assets like JS, CSS, HTML. You do not need to install any plugins to make sure that Parcel is adding those assets to the bundles.
- Automatic transforms
By default Parcel is performing code transformations using Babel, PostCSS and PostHTML.
- Zero configuration code splitting
Parcel is making sure that the project code is split across multiple bundles if not all assets are needed initially. By using this code splitting approach not all assets needs to load at once and the user of the web application will experience a faster load. Code splitting is done by default, no extra configuration is needed.
- Hot module replacement
Parcel is watching for code changes and replaces modules automatically in the browser if needed.
- Friendly error logging
To install Parcel you need to perform the following steps.
First install Parcel globally on your system by using NPM or Yarn.
Using Yarn you need to execute the following command:
$ yarn global add parcel-bundler
Using NPM, you instead need to execute the following command:
$ npm install -g parcel-bundler
Having installed Parcel on your system successfully we can now make use of Parcel in a new web project.
Initiating A New Project With Parcel
To initiate a new project let’s create a new empty project folder, change into that folder and execute one of the following commands:
With Yarn:
$ yarn init -y
With NPM:
$ npm init -y
This creates a new package.json file in your project directory.
To add an entry point for our application let’s add a new file index.html in the project directory and insert the following HTML code into that file:
<!DOCTYPE html>
<html lang="en">
<head>
<title>Parcel Demo 01</title>
</head>
<body>
<div id="message"></div>
<script src="./app.js"></script>
</body>
</html>
This is just a simple HTML structure. In the body section you can find two elements: a div element with id message and a script element to include the JavaScript file app.js.
The file app.js does not exist yet, so let’s create this new file in the project directory and insert the following line of code:
document.getElementById('message').innerText = "Hello World!";
This line of JS code is inserting the text “Hello World” into the div element with id message, so that the text should become visible in the browser.
Now let’s start up the development web server by using the following command:
$ parcel index.html
The web server starts up on port 1234, and if you open up the URL in the browser you should be able to see the following output:
You can now update any part of the code (e.g. change the message text) and you’ll see the output in the browser being updated automatically.
If you now take a look into the project folder you’ll notice that two new folders have been created by Parcel:
The .cache folder contains the cache content used by Parcel. The dist folder contains Parcel's output, and the content of that folder is served by the web server.
Inside the dist folder you can see that one JS bundle has been created for our application.
Adding A Module To The Project
Let’s extend the sample project by creating a new JS module. Parcel supports both CommonJS and ES6 module syntax. In the following example the ES6 syntax is used. If you want to use the CommonJS syntax instead, you can do so without needing to change any configuration.
Add a new empty file lib.js to the project and insert the following JavaScript code:
export function square(x) {
return x * x;
}
This is exporting the function square from the lib module, so that you can add the corresponding import statement in app.js:
import { square } from './lib';
We can now make use of that function in app.js:
document.getElementById('message').innerText = "The Square of 2 is " + square(2);
The output now changes to the following:
The new module has been added to the bundle that was created by Parcel automatically.
If you need to add dependencies to your project, that’s also no problem and Parcel will add the needed dependencies to the bundles as well.
E.g. add the jQuery library to the sample project by executing the following command in the project directory:
$ npm install jquery --save
You can now include jQuery by using the corresponding import statement and extend the example in file app.js with the following code:
import { square } from './lib';
import $ from 'jquery';

let i = 2;

function setMessageText(msg) {
  $('#message').text(msg);
}

setMessageText("The Square of " + i + " is " + square(i));

$('#message').click(() => {
  i++;
  setMessageText("The Square of " + i + " is " + square(i));
})
Now the user is able to click on the message text and increment the input value by one. The square of that value is updated as well.
jQuery has been added to the Parcel bundle without any manual steps.
Adding CSS and SCSS Assets
Of course, Parcel also takes care of your CSS and SCSS assets. These assets are automatically recognized and added to the bundle which is created for your project. You need to import CSS assets in a JavaScript or HTML file.
E.g. add a new file styles.css to your project directory and insert the following CSS code:
body {
    background-color: powderblue;
}

#message {
    color: blue;
    font-size: 3em;
    text-align: center;
}
and make sure that styles.css is included in index.html by adding the following link element to the head section:
<link rel="stylesheet" href="styles.css">
The output in the browser should then change to what you can see in the following:
If you want to use SCSS code instead, you first need to add the node-sass package:
$ npm install node-sass
Then add a new file styles.scss to the project and insert, for example, the following SCSS code:
$messagecolor: blue;
$bgcolor: powderblue;

body {
    background-color: $bgcolor;
}

#message {
    color: $messagecolor;
    font-size: 3em;
    text-align: center;
}
And import that file in any JS file, e.g. in app.js with the corresponding import statement:
import './styles.scss';
The result in the browser should be the same as seen before.
Applying Transformations
Like many other bundlers, Parcel is able to apply transformations to assets when building. Out of the box, Parcel already has support for many common transforms and transpilers built in. Here are some examples:
- Transform JavaScript using Babel
- Transform CSS using PostCSS
- Transform HTML using PostHTML
Parcel automatically runs those types of transformations when the corresponding module is installed and a small configuration file for the transformation (e.g. .babelrc) is available.
In the following example we’ll discover how to use Babel to transform JavaScript and JSX code by setting up a React project with Parcel.
Example: Setting Up A React Project With Parcel
Getting started with React has always been a difficult task with Webpack. Initiating a new React project required adding a lot of Webpack configuration to the project first. To make things easier, it has been possible to use Create React App to initiate a new React project and generate the needed Webpack configuration automatically. However, the disadvantage of the Create React App approach is that it hides the complexity of the build configuration. This only works for small applications. If your application grows and you have further requirements for the build process, you need to deal with the complex configuration anyway.
Using Parcel makes setting up a React project much easier because there is nearly no configuration required. First, create and initiate a new project:
$ mkdir react-parcel
$ cd react-parcel
$ npm init -y
Next add the following dependencies to the project:
$ npm install --save react
$ npm install --save react-dom
$ npm install --save-dev babel-preset-react
$ npm install --save-dev babel-preset-env
In order to tell Parcel that we’re using ES6 and JSX syntax in our project we need to add a new file .babelrc and include the following minimal configuration for Babel:
{
"presets": ["env", "react"]
}
Next let’s create a simple React app by adding two new files to the project: index.html and app.js.
First insert the following code in index.html:
<!DOCTYPE html>
<html>
<head>
<title>React Parcel Demo</title>
</head>
<body>
<div id="app"></div>
<script src="app.js"></script>
</body>
</html>
Next add the implementation of the React component in app.js:
import React from "react";
import ReactDOM from "react-dom";

class HelloMessage extends React.Component {
    render() {
        return <div>Hello {this.props.name}</div>;
    }
}

var mount = document.getElementById("app");
ReactDOM.render(<HelloMessage name="Sebastian" />, mount);
Now we’re ready to start up the React application by simply typing in the following command:
$ parcel index.html
and you should be able to see the application running in the browser:
Building For Production
If you want to build your project for production you can use the following command:
$ parcel build index.html
This command creates a dist subfolder where the generated output is stored. You can then use the content of this folder to deploy the application.
You can also specify the folder which should be used for the production build output by using the command option -d in the following way:
$ parcel build index.html -d build/output
The production build process uses minification in order to decrease the bundle size.
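In practice, the development and production commands are often wrapped in npm scripts so that nobody has to remember the CLI flags. A minimal sketch (the script names and the -d output path are arbitrary choices, not part of the original post):

```json
{
  "name": "parcel-demo",
  "scripts": {
    "dev": "parcel index.html",
    "build": "parcel build index.html -d build/output"
  }
}
```

You can then start the dev server with npm run dev and create a production build with npm run build.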
https://medium.com/codingthesmartway-com-blog/getting-started-with-parcel-197eb85a2c8c
Use a circular queue to solve this problem.
Alternatively, if you are able to use the STL, this problem can be categorized as an easy one.
I have included a series of input-output cases, so it will be easy for you to debug your code if any problem occurs.
I think it's a good idea to learn the STL documentation, because it's very helpful and provides various features to eliminate hazards.
#include <list>
using namespace std;
list <int> var; // declare a list variable named “var”
var.clear (); // clear the list
var.push_front (x); // insert element x at the front
var.size (); // returns the size of the list
var.front (); // access the top element of list, but not removed it
var.pop_front (); // wipe out the top element of the list
var.push_back (x); // insert an element at the back of the list
#include <stdio.h>
#include <list>
using namespace std;

int main () {
    list<int> v;
    int n, i, flag, temp;
    while (scanf("%d", &n)) {
        if (n == 0)
            return 0;
        v.clear();
        for (i = n; i >= 1; i--)
            v.push_front(i);
        flag = 0;
        printf("Discarded cards:");
        while (v.size() > 1) {
            if (flag == 0) {
                temp = v.front();        // discard the top card
                printf(" %d", temp);
                v.pop_front();
                flag = 1;
                temp = v.front();        // move the next card to the bottom
                v.push_back(temp);
                v.pop_front();
            } else {
                temp = v.front();
                printf(", %d", temp);
                v.pop_front();
                temp = v.front();
                v.push_back(temp);
                v.pop_front();
            }
            if (v.size() == 1)
                printf("\n");
        }
        if (flag == 0)
            printf("\n");
        printf("Remaining card: %d\n", v.front());
    }
    return 0;
}
Critical input:
1
2
3
4
5
6
8
9
50
Critical output:
Discarded cards:
Remaining card: 1
Discarded cards: 1
Remaining card: 2
Discarded cards: 1, 3
Remaining card: 2
Discarded cards: 1, 3, 2
Remaining card: 4
Discarded cards: 1, 3, 5, 4
Remaining card: 2
Discarded cards: 1, 3, 5, 2, 6
Remaining card: 4
Discarded cards: 1, 3, 5, 7, 2, 6, 4
Remaining card: 8
Discarded cards: 1, 3, 5, 7, 9, 4, 8, 6
Remaining card: 2
Discarded cards: 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 43, 45, 47, 49, 2, 6, 10, 14, 18, 22, 26, 30, 34, 38, 42, 46, 50, 8, 16, 24, 32, 40, 48, 12, 28, 44, 20, 4
Remaining card: 36
https://tausiq.wordpress.com/2009/03/25/acm-uva-10935/
It would be great to use the API call response not only for getting the values but also the "Display Name" of that value to have a dynamic display name.
@jan.rachwalik If you request, for example, metrics from Azure which have more than one dimension, Microsoft does not guarantee that they always come in the same order. When I request, for example, the state of the PODs in a Kubernetes Namespace (management.azure.com/[many characters]/metricnames=kube_pod_status_phase&$filter=namespace eq 'orion' and phase eq '*'&aggregation=Average), the order of the list is different for each request. Therefore I want the "Display Name" of the metric to be dynamic. A workaround is to create a separate API request for each metric, but I do not think that is very elegant.
OK. Thanks for quick answer and the details. The request makes sense.
https://thwack.solarwinds.com/t5/SAM-Feature-Requests/Using-API-response-for-display-name-of-an-value/idi-p/603975
Hi
I have a function that receives a DWORD value, and it must be an address code that starts at 0x00000001 and goes up to 0xFFFFFFFF. How can I create a DWORD variable and increment it only with hex values in a way that it will start with 0x00000001 and finish with 0xFFFFFFFF?
For example, the output at each loop should be:
0x00000001
0x00000002
0x00000003
0x00000004
0x00000005
0x00000006
0x00000007
0x00000008
0x00000009
0x0000000A
0x0000000B
0x0000000C
0x0000000D
0x0000000E
0x0000000F
0x00000010
0x00000011
0x00000012
etc...
Thanks
Rob
You are right, but I need this to be stored in a DWORD variable for each iteration of the loop. Why? Because I have to pass it to an API under Windows whose typedef is DWORD.
I believe I could do your loop with a sprintf() and "0x%08x" or equivalent, but how do I store it properly in a DWORD? If it's not a DWORD, the API that I'm calling will fail...
This DWORD is kind of new to me; it appears to be an MS-world typedef ( … c_id=80721)
Thanks
DWORD stands for double word, meaning a 32 bits integer.
A word was used for a short integer of 16 bits. So just treat
it as a regular int or unsigned int and you should be fine.
It's not some magic HEX string value based variable or anything
crazy like that, it's a regular int with a stupid typedef.
Edit: It seems the link you posted explains it quite well...
Last edited by i3839 (2010-10-23 01:26 PM)
Yeah, what i3839 said... So, all you need is to define a variable of type "DWORD" (or you can use "unsigned int" if you prefer; they're just different names for the same thing), and then just increment that normally as you would any int... There's no magic involved... And, there's nothing special needed just because the function wants to treat the value as hex instead of decimal or whatever... The raw int value remains the same regardless... (After all, in reality, everything is stored in binary, anyway...)
Hi
Thanks for your help.
I did a loop with an unsigned int and it works fine.
#include <stdio.h>

int main() {
    unsigned int loop;
    for (loop = 0x00000001; loop <= 0xFFFFFFFF; loop++) {
        printf("Value is 0x%08x\r\n", loop);
        // LSPFunc(loop, SZ_READ, RNOW);
    }
    return 0;
}
However, when I uncomment LSPFunc it never appears to send the right values. I mean, in the printf it's printed correctly because we are using the right format specifier (0x%08x). Is there a way to cast it as hex to pass it as the first parameter to the function LSPFunc()?
Thank you
That's an infinite loop you've got there, too, since you do <=... Once you increment 0xffffffff, you'll get 0, which is still less than, so you'll keep going forever...
But, again, it makes no sense to talk of "casting to hex", since hex is merely an interpretation of the raw binary value, which is what you're actually passing... The function will interpret it as hex if that's what it wants...
So, what makes you think it's NOT getting the correct values? What values do you think it IS getting instead? Is this LSPFunc() something of yours, or something you have the source to at least? If so, can we see it?
Also, why not simply declare the type of your local loop variable as "DWORD", if that's what type the function prototype uses? You can still increment it just like that...
Hi RobSeace,
Makes sense. So, how do I do this loop if I can't do the comparison with 0xFFFFFFFF?
Hmm, how can I printf the values of a DWORD in binary? If possible, with a space after every 8 bits. Is there a way to do it?
What makes me believe it's wrong is that I always get a return error that says "wrong code".
It's a 3rd-party library; I don't have the source code, just the prototype and a brief list of return values.
Thanks, I did the loop with DWORD.
So, how do I do this loop if I can't do the comparison with 0xFFFFFFFF?
Well, since you don't seem to need to try 0, you could just change the loop condition to "loop != 0" (or more simply, just "loop")... Or, if you didn't want to rely on the effects of integer overflow, you could get rid of the loop condition entirely, and do the test inside at the bottom of the loop: "if (loop == 0xffffffff) break;"... Or, in the same style, convert it to a do{}while() loop, where the test is naturally on the bottom...
Hmm, how can I printf the values of a DWORD in binary? If possible, with a space after every 8 bits. Is there a way to do it?
There's no C standard printf() format for outputing binary, unfortunately... You can do it fairly easily manually, though:
void print_binary (unsigned int val)
{
    int i;

    for (i = 32; i > 0; i--)
        printf ("%s%s", (i % 8) ? "" : " ", (val & (1u << (i - 1))) ? "1" : "0");
    printf ("\n");
}
But, why bother? Hex is much easier to read, and is trivial to convert in one's head directly to binary... (Well, trivial for ME, at least... I would've assumed it would be so for any programmer, as well, though...)
What make me believe it's wrong is that I always get return error that says "wrong code".
Well, maybe it doesn't like those values you're giving it? Is there some reason to believe it should support every single value between 1 and 0xffffffff? I'm guessing you're simply calling it wrong...
http://developerweb.net/viewtopic.php?id=7199
A beginners’ tutorial on recursion in Java: Video Lecture 19
This article discusses the basics of recursion in Java. We will discuss the theory of recursion, how we can write a recursive method in Java, and how to trace a recursive program.
We are already familiar with how to use methods in Java. A method is a code snippet that you can call an arbitrary number of times from another method. This article explains how we can use a recursive Java method to solve a problem that we could solve using a loop.
In mathematics and computer science, recursion refers to a special technique to solve problems that are difficult to program using loops. The main property of recursion is that a recursive function defines itself, or at least a part of itself. In a recursive method, you will see that the method is calling itself at some point.
Recursion helps solve many complex problems that are difficult to solve using a loop but where a small portion of the problem is easy to tackle at a time.
Contents
- 1 Properties of recursion
- 2 A video lecture on recursion in Java
- 3 A simple problem to explain recursion
- 4 Another recursive problem
- 5 When is recursion used?
- 6 Exercise questions
- 7 Concluding remarks
Properties of recursion
A recursive method has two properties.
- A base case, or a stopping condition: Of course, we do not want our program to run forever. The recursion — that is the calling of itself — must stop at some point. The condition at which the recursion should stop is called the base case. The base case is generally an easy-to-solve subproblem of the main overarching problem.
- A recursive call: In every call, the method must solve a part of the main problem. When the method calls itself, the call must receive a smaller problem than the problem the current method is handling. Structurally, the smaller problem should have the same properties as the current problem. Only the size of the problem should be smaller in the next call. In particular, we have to make sure that each call is moving the granulated subproblem toward the base case, which is the terminating condition.
A video lecture on recursion in Java
The following video lecture complements the content of this page. The video lecture explains the theory of recursion, covers the simple examples on this page, and provides a tracing of a recursive function to clarify the concepts.
A simple problem to explain recursion
Let us say that we have the following problem in hand.
Write a recursive method that prints all the integer numbers between n and 1. That is, if n=5, we have to print
5 4 3 2 1
If n=3, then we have to print
3 2 1.
A solution to the problem using a loop
We can easily solve this problem without any recursions. We can directly use a loop. Let us solve the problem with a loop first.
import java.util.Scanner;

class MyPrinter{
    public static void main(String[] args){
        Scanner myScanner = new Scanner(System.in);
        System.out.print("What is the value of n: ");
        int n = myScanner.nextInt();
        int i;
        for (i = n; i >= 1; i = i - 1){
            System.out.print(i + " ");
        }
        System.out.println();
    }
}
In the program above, we ask the user for the value of
n.
The loop variable i starts from n and decreases the value by 1 in each iteration. The execution keeps going inside as long as the value of the loop-variable i is greater than or equal to 1. That is, the loop terminates when the value of the loop-variable becomes smaller than 1.
Inside the loop, we have to print the content of the loop variable i, separated by spaces.
After the loop ends, we write an empty System.out.println statement to make sure that the command prompt appears in the next line instead of in the same line, after the program outputs get printed.
Save the code in a file named
MyPrinter.java, compile it using javac, and run it using java.
My Computer$ javac MyPrinter.java
My Computer$ java MyPrinter
What is the value of n: 5
5 4 3 2 1
My Computer$ java MyPrinter
What is the value of n: 10
10 9 8 7 6 5 4 3 2 1
My Computer$ java MyPrinter
What is the value of n: 3
3 2 1
My Computer$
The output demonstrates that the program asked for the value of n. The user entered 5. The program printed.
5 4 3 2 1
After running the program again — the program asked for n, the user entered 10, the program printed all the integer numbers from 10 down to 1.
The output shows another execution of the same program, where the user entered 3 as the value of
n. The program printed
3 2 1
Therefore, the program worked perfectly.
We will solve the same problem, which is printing n integers from n down to 1, using recursion, without using any loop.
A solution to the problem using recursion
We will not use any loop at all. Rather, we will use a recursive method. Let us say that the name of the method is myRecurrence. The method myRecurrence will accept one parameter only, which is n, the number that the user provided.
Here is the code that uses recursion and no loop.
import java.util.Scanner;

class MyPrinter{
    public static void main(String[] args){
        Scanner myScanner = new Scanner(System.in);
        System.out.print("What is the value of n: ");
        int n = myScanner.nextInt();
        myRecurrence(n);
        System.out.println();
    }

    static void myRecurrence(int n){
        if (n < 1){
            return;
        }
        System.out.print(n + " ");
        myRecurrence(n - 1);
    }
}
The header of the recursive method,
myRecurrence
The return type of
myRecurrence is
void because it does not need to return anything to the caller. The method has only one parameter. The name of the parameter is
n. This parameter n, for the first call from the main method, will hold a copy of the n that the user enters.
Recall from the previous video lecture that the header must start with the word
static because we will call the method from the main method, which is a
static one.
The base case in
myRecurrence
As said earlier, in recursion, there must be two items — a base case and a recursive call.
A base case is a terminating condition. In this case, our terminating condition is — when n is less than 1. That means, when n is less than 1, we should do nothing but return to the caller because there is nothing else to do. Remember that we have to print up to 1. We do not print anything smaller than 1.
We write the base case using an if statement. If we find that n is lesser than 1, we say return.
Inside myRecurrence, the base case is written as:
if (n < 1){
    return;
}
A return statement with an immediate semicolon, in the above base case, indicates a return of execution without any value. Given that the method has a void return type, all we can return is the execution without any content. We have a base case now.
Other recursive properties in
myRecurrence
In a recursive method, a part of the whole work will be solved here within the method. The rest of the work, which is smaller in size, will be solved in a different call of the same method.
In this particular body of the method, we solve the problem of printing one number only, which is whatever value came via the parameter
n.
We simply print n and then put a space. The space is to make sure that there is a gap between the current number printed and the number we will print next. In the method, we wrote:
System.out.print(n+" ");
Now, the task left is — printing the rest of the numbers. Let us assume we don’t know how to print the rest of the numbers. Again, we have printed n, but we do not know how to print the rest of the numbers, which are
n-1,
n-2,
n-3, so and so forth.
If n is equal to 5, we have printed 5. The rest of the numbers that we need to print are 4, 3, 2, and 1. We do not know how to print them, but we have a method, which is the same method myRecurrence that can print a sequence of numbers down to 1.
What we do is, we call myRecurrence with
n-1 as the parameter. That is, if n is 5 in the current method, we are sending 4 as the parameter for the next call.
We write the following line to print all the numbers starting from n-1:
myRecurrence(n-1);
Note again – we have a base case. We solve the problem partially within the method. We send the rest of the problem, that is the smaller subproblem to the next call.
The output of the recursive program
Let us save the program, compile it, and execute it.
Like when we used a loop, the program asks for the value of n. The user enters 5. The program prints:
5 4 3 2 1
The same outputs are observed with
n=10 and
n=3.
Tracing the program
The video above provides tracing of the code with n=3. It is important to trace a code using pen and paper for two main reasons: (1) tracing slowly helps understand the problem well, and (2) tracing helps in finding logical errors in the code. I suggest that the reader watches the video above to enjoy the tracing of the recursive method we have discussed in this article.
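To complement a pen-and-paper trace, here is a small variant of my own (not part of the lecture code) that adds a depth parameter and logs each call with indentation, so the printed log mirrors the hand trace of myRecurrence(3):

```java
// Illustrative only: logs every call of the recursion so the output
// can be compared against a pen-and-paper trace of myRecurrence(3).
class TraceDemo {
    static StringBuilder log = new StringBuilder();

    static void myRecurrence(int n, int depth) {
        String pad = "  ".repeat(depth);            // indent by call depth
        log.append(pad).append("enter myRecurrence(").append(n).append(")\n");
        if (n < 1) {                                // base case
            log.append(pad).append("base case: return\n");
            return;
        }
        log.append(pad).append("print ").append(n).append("\n");
        myRecurrence(n - 1, depth + 1);             // recursive call on a smaller problem
    }

    public static void main(String[] args) {
        myRecurrence(3, 0);
        System.out.print(log);
    }
}
```

Reading the log top to bottom shows the calls nesting deeper until the base case at n = 0 stops the recursion.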
Another recursive problem
As a quick bonus teaser program, let us discuss the solution to another problem, which is relevant to the one that we have discussed above.
Using recursion, write all the integer numbers between 1 and n. That is, if
n=5, we have to print
1 2 3 4 5
If n=3, then we have to print
1 2 3
Notice that in the previous program, we wrote all the numbers in reverse order. When
n was 5 we printed 5 4 3 2 1. Now, we are printing 1 2 3 4 5.
How can we change the previous recursive program to print everything in ascending order?
In the previous problem, we first printed n, and then using another recursion, we printed all the numbers from n-1 down to 1.
Now, when we go inside the recursive method, we cannot print anything before printing the rest of the numbers. That is, when n is 5, we have to wait till all the other numbers 1, 2, 3, and 4 are printed. When n is 4, we have to wait till all the numbers 1, 2, and 3 are printed. When n is 3, we have to wait till the numbers 1 and 2 are printed. And so forth.
We can accomplish this by switching these two lines of the previous myRecurrence method.
System.out.print(n+" ");
myRecurrence(n-1);
That is, we can print all the numbers by flipping the above two lines to the following:
myRecurrence(n-1);
System.out.print(n+" ");
The solution
Here is the complete code that prints the numbers in ascending order (1 to n):

import java.util.Scanner;

public class MyPrinter {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        System.out.print("What is the value of n: ");
        int n = input.nextInt();
        myRecurrence(n);
        System.out.println();
    }

    public static void myRecurrence(int n) {
        if (n < 1) { // base case: nothing left to print
            return;
        }
        myRecurrence(n - 1);
        System.out.print(n + " ");
    }
}
After the base case, in the modified myRecurrence method, we go into recursion without printing n. Numbers smaller than n are processed first. The System.out.print statement prints n after all the smaller numbers are processed.
If we save this file, compile it, and run the program, we will see that the program is printing all the numbers between 1 to n in ascending order.
My Computer$ javac MyPrinter.java
My Computer$ java MyPrinter
What is the value of n: 5
1 2 3 4 5
My Computer$ java MyPrinter
What is the value of n: 10
1 2 3 4 5 6 7 8 9 10
My Computer$ java MyPrinter
What is the value of n: 3
1 2 3
My Computer$
Tracing the program
I suggest that you trace this modified program, to print all numbers in increasing order, using paper and pencil. Again, tracing your code using paper and pencil helps develop an understanding of the logical flow of a program.
When is recursion used?
You might think – why on earth have we solved this simple problem using recursion? The answer is — we have used this simple problem to explain how recursion works. In practice, recursion is used for problems that are hard to code using loops.
There are problems for which it is easier to write a recursive solution than writing a loop-based solution. In games, where it is required to keep track of the status of a game board or in artificial intelligence, where it is required to foresee possible changes, recursion is used because it automatically saves the local variables in each call.
Exercise questions
Question 1: Write a recursive method that takes n as its parameter and then returns 1+2+3+ … … … +n for any n>0.
Question 2: Write a recursive method that takes n as its parameter and then returns the factorial of n for any n>=0. (The factorial of n is referred to as n! and n!=n*(n-1)*(n-2) … … 2*1.) As an example, 5!=5*4*3*2*1=120. The factorial of zero is considered 1, that is, 0!=1.
Concluding remarks
If you have not yet subscribed to Computing4All.com or to our YouTube channel, please do so to receive notifications on our new articles and videos.
https://computing4all.com/recursion-in-java/
Raphael Pieroni is leading an effort to make Archetype a seamless and end-to-end tool for prototyping Maven projects. The work has been carried out over at the Mojo project () but is approaching a state where it can be brought back to Apache.
This code has been imported into the maven-sandbox at apache.
We currently have a system where the creation of Archetypes is dead simple. Using default information you simply run "mvn archetype:create-from-project" and it will generate your Archetype for you. Very simple. Next we are going to create a few simple additions that will allow this Archetype to be installed or deployed. We are also going to make some tools to automatically ensure the Archetype created is intact. So the very first goal is to allow someone to generate Archetypes, verify them and deploy them. We are rapidly approaching the reality where sample projects can be continuously integrated and periodically deployed in Archetype form. This is what Archetype was meant to be, and Raphael has made this happen. Once the process is seamless we will align all the package names with the Apache namespace and bring the code back to the Maven project. We are close. Many thanks to Raphael!
http://docs.codehaus.org/display/MAVEN/ArchetypeNG
Since we have setup the environment for Java programming let's introduce Java application programming facilitating a disciplined approach to program design. Most programs you will come across generally do three primary things – get the input, process the information and display the result. Here in this article we shall discuss with examples that demonstrate how your program can display messages and how they can obtain information from the user for processing.
First Program
Let us consider a simple application that displays a line of text
package mypackage;

public class MyClass
{
    public static void main(String[] args) // args refers to arguments accepted by main function
    {
        System.out.println("My first Java program");
    }//end of main method
}//end of class MyClass
Let us now consider each line of the program in order.
There are two ways comments are inserted in Java code: one is the single line comment, represented by //, which terminates at the end of the line. The other is the multiple line comment, which begins with the delimiter /* and ends with */. There is also a third type, the javadoc comment, delimited by /** and */. Javadoc comments are used to embed program documentation directly in the program and are the preferred Java commenting format in the industry, but the basic idea is the same for all comment formats.
Package
The declaration package mypackage indicates a user defined package. Placing a package declaration at the beginning of a Java source file indicates that the class declared in the file is a part of the specified package.
User defined class
The declaration of class MyClass is the user defined class defined by you. Every program in Java consists of at least one such class declaration. The class is a keyword which introduces class declaration in Java and is immediately followed by the class name (here, MyClass). Class name is an identifier consisting of letters, digits, underscores(_), dollar signs($) but cannot begin with digit and does not contain spaces.
Note: Keywords are nothing but reserved words used by Java. Java is case sensitive i.e. b2 and B2 are different. By convention, in Java all class names begin with capital letter and capitalize the first letter of each word they include. Also observe that each statement is delimited by a semicolon ';' to signify the end of a statement in Java, failure to indicate one is a compile time error.
Braces
A left brace, {, begins the body of a class, function, loops, conditional statement etc. And the corresponding ,}, end the body. There should always be a matching pair of braces.
Method
The parenthesis, (), after the identifier main indicate that it is a program building block called a method. A Java class declaration normally contains one or more methods. For a Java application there must be one main method, otherwise the JVM will not start the execution. The main method is the starting point of a Java application. Methods are like verbs in a sentence: they take some information, process it and return information when they complete their task.
Standard object and method
System.out is known as standard output object. Method System.out.println displays a line of text in the console window. When it completes its task, it positions the output cursor to the beginning of the next line in the console. There are several variations of this method.
System.out.print("My first Java program");
This will print the text and the cursor will be at the end of the line.
System.out.println("My first Java program");
This will print the text and the cursor will be at the beginning of the next line
System.out.print("My first Java program\n");
This will print the text and the cursor will be at the beginning of the next line due to '\n'
System.out.printf("My first Java program");
print the text and the cursor will be at the end of the line, a formatted print function. We shall see more of this function later down the line.
Second Program
Let's modify our first program and try to print same text with similar formatted output with different print function
package mypackage;

public class MyClass
{
    public static void main(String[] args)
    {
        //with println function
        System.out.println("Using println function");
        System.out.println("Hello World");
        System.out.println("Welcome to the world of Java");

        // with print function
        System.out.print("Using print function\n");
        System.out.print("Hello World\n");
        System.out.print("Welcome to the world of Java\n");

        // with printf function
        System.out.print("Using printf function\n");
        System.out.printf("Hello World\n");
        System.out.printf("Welcome to the world of Java\n");
    }//end of main method
}//end of class MyClass
Escape sequence
The \n used in the above programs is called an escape sequence. There are several escape sequences in Java as follows
- \n Newline. Position the screen cursor at the beginning of the nextline.
- \t Horizontal tab. Move the screen cursor to the next tab stop.
- \r Carriage return. Position the screen cursor at the beginning of the current line and do not advance to the next line. Any character written after the carriage return overwrites the character previously output on that line.
- \\ Backslash, used to print backslash '\' character (imagine, how will you print \ in the output without double backslash, \\)
- \” Double quote, used to print double quote. Imagine how to print exactly this line – Teacher says, “Very Good”.
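The sequences above can be tried out with a tiny class like this one (the class name and sample strings are my own):

```java
public class EscapeDemo {
    public static void main(String[] args) {
        System.out.println("Name\tAge");                    // \t inserts a tab
        System.out.println("First line\nSecond line");      // \n starts a new line
        System.out.println("C:\\java\\programs");           // \\ prints one backslash
        System.out.println("Teacher says, \"Very Good\"");  // \" prints a double quote
    }
}
```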
Third Program
Let's write a program to input two numbers from the user, add them up and show the result.
package mymath;

import java.util.Scanner;

public class SimpleMath
{
    public static void main(String[] args)
    {
        Scanner input = new Scanner(System.in);
        int a;
        int b;
        int sum;
        System.out.print("Enter first number: ");
        a = input.nextInt();
        System.out.print("Enter second number: ");
        b = input.nextInt();
        sum = a + b;
        System.out.printf("Sum is %d\n", sum);
    }
}
Here,
import java.util.Scanner
is an import declaration that helps the compiler locate a class that is used in this program. A great strength of Java is its rich set of predefined classes that programmers can reuse rather than invent themselves. These classes are grouped into packages – named collection of classes. These packages are collectively referred to as Java class library or the Java API (Application Program Interface). Programmers use import declarations to identify the predefined classes used in a Java program. In the above program import declaration indicates that this example uses Java's predefined Scanner class from package java.util.
Scanner input = new Scanner(System.in);
is a variable declaration statement that specifies the type and name of a variable (input) that is used in this program. A variable is a location in the computer's memory where values can be stored for use later in the program. All variables must be declared with a name and a type before they are used. A Scanner is a built in Java class that enables a program to read data for use in a program and initialize the input variable with equal sign (=). The expression new Scanner(System.in) creates a Scanner object that reads data typed by the user at the keyboard. System.in is a standard input object (Similarly, System.out is a standard output object) that enables Java application to read information typed by the user.
int a; int b; int sum;
these variables will hold integer values and are of the primitive data type int. Other such primitive data types are – boolean, byte, short, char, long, float and double. We shall see their usages in other programs down the line. The primitive data types start with a lower case letter, unlike all other classes in Java, which start with a capital one (like Scanner).
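As a quick illustration of those primitive types, here is a small sketch (the class name and values are mine); note the suffixes required for long and float literals:

```java
public class PrimitivesDemo {
    public static void main(String[] args) {
        boolean flag = true;
        byte b = 100;
        short s = 30000;
        char c = 'A';
        int i = 42;
        long l = 10000000000L; // L suffix: literal is too big for int
        float f = 3.14f;       // f suffix: decimal literals default to double
        double d = 2.71828;
        System.out.println(flag + " " + b + " " + s + " " + c + " "
                + i + " " + l + " " + f + " " + d);
    }
}
```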
sum = a + b;
Here values contained in a and in b are added through + operator and assigned to the variable sum.
Arithmetic operators
Relational Operators
System.out.printf("Sum is %d\n", sum);
The format specifier %d is a place holder for an int value - the letter d indicates decimal integer
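Besides %d, a few other common format specifiers can be sketched like this (the values here are arbitrary examples):

```java
public class PrintfDemo {
    public static void main(String[] args) {
        System.out.printf("Sum is %d\n", 12);        // %d  - decimal integer
        System.out.printf("Average is %.2f\n", 7.5); // %.2f - floating point, 2 decimals
        System.out.printf("Name is %s\n", "Java");   // %s  - string
    }
}
```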
Quizzes
1. Which one is a standard input object
(1) System.out
(2) println
(3) System.in
(4) printf
2. Which one is a standard output object
(1) System.out
(2) println
(3) System.in
(4) printf
3. The method from which Java program starts is called
(1) class
(2) main
(3) printf
(4) None of the above
4. which of the following is a primitive data type
(1) int
(2) float
(3) char
(4) All of the above
5. Which package contains Scanner object
(1) java.util
(2) java.io
(3) java.net
(4) java.sql
6. Which of the following is a valid comment
(1) /* ...*/
(2) /**... */
(3) //...
(4) All of the above
7. Which of the following is not a valid variable declaration
(1) int 67val
(2) double char99
(3) char b23$
(4) boolean _11;
8. Java considers variables number and NuMbEr to be same
(1) True
(2) False
(3) Most of the time true
(4) Sometimes false
9. Every statement ends with
(1) =
(2) }
(3) )
(4) ;
10. ________ are reserved for use by Java
(1) methods
(2) Packages
(3) Keywords
(4) All of the above
11. Parenthesis and braces always need to balance in Java code
(1) True
(2) False
12. The predefined classes of Java are called
(1) Packages
(2) Methods
(3) Java API
(4) User defined classes
Tasks: Try yourself
- Write a application that asks the user to enter two integers, obtain them from the user and prints their sum, product, difference and quotient.
- Write an application that inputs three integers from the user and displays the average.
Previous: Introduction to Java | Next: Control statements | Back to Table of content
Edited by mdebnath, 08 March 2013 - 01:27 AM.
http://forum.codecall.net/topic/74428-java-application-first-program-in-java/
plone.browserlayer 2.1.3
Browser layer management for Zope 2 applications
Introduction
This package aims to make it easier to register visual components (e.g. views and viewlets) so that they only show up in a Plone site where they have been explicitly installed.
Basic usage
To use this feature, you should:
declare plone.browserlayer as a dependency, e.g. in setup.py:
install_requires=[ 'plone.browserlayer', ],
ensure that its ZCML is loaded, e.g. with an include from your own package:
<include package="plone.browserlayer" />
create a layer marker interface unique to your product:
from zope.interface import Interface class IMyProductLayer(Interface): """A layer specific to my product """
register this with GenericSetup, in a browserlayer.xml file:
<layers> <layer name="my.product" interface="my.product.interfaces.IMyProductLayer" /> </layers>
register visual components in ZCML for this layer, e.g.:
<browser:page
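A complete registration might look like the following sketch (the view name, class and permission here are illustrative placeholders, not taken from the package's documentation):

```xml
<browser:page
    name="my-view"
    for="*"
    layer="my.product.interfaces.IMyProductLayer"
    class=".views.MyView"
    permission="zope2.View"
    />
```

Because the registration names the layer interface, the view is only found on Plone sites where this product's browserlayer.xml import step has actually been run.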
Changelog
2.1.3 (2014-02-25)
- Fix tests with diazo. [davisagli]
2.1.2 (2012-10-03)
- Add support for calling many times remove in export (ie:even when no corresponding layer is registred, remove option should not throw exception). [toutpt]
2.1.1 (2011-11-24)
- Added uninstall support to browserlayer.xml with the ‘remove’ option. [maurits]
- GS export xml is now repeatable. Before two consecutive exports could yield differently ordered results. [do3cc]
2.1 - 2011-05-12
- Update import of BeforeTraverseEvent to come from zope.traversing instead of zope.app.publication. [davisagli]
- Add MANIFEST.in [WouterVH]
2.0.1 - 2010-09-21
- Make sure the layers don’t get applied twice if the site is traversed more than once (such as in a vhosting URL). [davisagli]
2.0 - 2010-07-18
- Update license to GPL version 2 only. [hannosch]
- Package metadata cleanup, require Zope2 distribution. [hannosch]
1.0.1 - 2009-09-09
- Be more robust against broken layer registrations. These can occur when packages with registered layers are removed. [wichert]
- Clarified license and copyright. [hannosch]
- Register ourselves for the more generic ISiteRoot from CMFCore and not IPloneSiteRoot. [hannosch]
- Declare test dependencies in an extra. [hannosch]
- Specify package dependencies. [hannosch]
1.0.0 - 2008-04-20
- Unchanged from 1.0rc4
1.0rc4 - 2008-04-13
- Register the GenericSetup import and export steps using zcml. This means you will no longer need to install this package manually. [wichert]
1.0rc3 - 2008-03-09
- Include README.txt and HISTORY.txt in the package’s long description. [wichert]
- Add metadata.xml to the GenericSetup profile. This fixes a deprecation warning for Plone 3.1 and later. [wichert]
1.0b1 - 2007-09-23
- Initial package structure. [zopeskel]
- Author: Plone Foundation
- Keywords: plone browser layer
- License: GPL version 2
- Categories
- Package Index Owner: optilude, wichert, hannosch, esteele, davisagli, evilbungle, timo, plone
- DOAP record: plone.browserlayer-2.1.3.xml
https://pypi.python.org/pypi/plone.browserlayer/2.1.3
Recently I tried the AutoComplete extender (AJAX Toolkit) in one of my ongoing web applications. What I wanted was this: the user should get an auto-suggest list of available products from the database while typing a product name into a TextBox. When the user types some letters in the TextBox, a popup panel comes into action and displays the related words, so that the user can choose the exact word from the popup panel. BTW, the AJAX Extensions and the AJAX Toolkit should already be installed before implementing any AJAX extenders. Here are the steps to accomplish an AutoComplete TextBox from the database:
We start by creating a new website. Create a new website by selecting "ASP.NET AJAX-Enabled Web Site" from the installed templates in the "New Web Site" window.

A "ScriptManager" will already be there in your webpage (Default.aspx), as we have selected an AJAX-Enabled Website.

Now drag and drop a TextBox from your Toolbox and an AutoCompleteExtender onto your webpage.

Then add a web service to your project as WebService.asmx.
First of all, you need to import the "System.Web.Script.Services" namespace and add the "ScriptService" attribute to the web service.

We just need to write a simple webmethod 'GetProducts' to fetch the data from the Product table, which will return a string array with the product names.

We will pass the "prefixText" to this webmethod; I mean the characters that are typed by the user in the textbox, to get the matching AutoComplete product list from the database that starts with the characters typed by the user.

Here is the complete webmethod for that:
using System;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using System.Web;
using System.Collections;
using System.Web.Services;
using System.Web.Services.Protocols;
using System.Web.Script.Services;

/// <summary>
/// Summary description for WebService
/// </summary>
[ScriptService]
[WebService(Namespace = "")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
public class WebService : System.Web.Services.WebService
{
    public WebService()
    {
        //Uncomment the following line if using designed components
        //InitializeComponent();
    }

    [WebMethod]
    public string[] GetProducts(string prefixText)
    {
        // NOTE: the original data-access lines were garbled; the LIKE query
        // below is a reconstruction that filters on the typed prefix.
        string sql = "Select * from product where name like @prefixText";
        SqlDataAdapter da = new SqlDataAdapter(sql, ConfigurationManager.AppSettings["DBConn"]);
        da.SelectCommand.Parameters.Add("@prefixText", SqlDbType.NVarChar, 50).Value = prefixText + "%";
        DataTable dt = new DataTable();
        da.Fill(dt);

        string[] items = new string[dt.Rows.Count];
        int i = 0;
        foreach (DataRow dr in dt.Rows)
        {
            items.SetValue(dr["name"].ToString(), i);
            i++;
        }
        return items;
    }
}
You can see that we have passed prefixText as an argument in the above webmethod, which is used in the query to fetch only the related words that start with the prefixText value. It then returns the result as an array of strings.

In your webpage, set the AutoCompleteExtender's TargetControlID property to the TextBox ID. You also need to set the ServicePath property to WebService.asmx, ServiceMethod to GetProducts and MinimumPrefixLength to 1.

So, your web page design code will be something like:
<form id="form1" runat="server"> <asp:ScriptManager <div> <asp:TextBox</asp:TextBox> <cc1:AutoCompleteExtender </cc1:AutoCompleteExtender> </div> </form>
Thats it! Quite simple and a useful feature that users would like to have. Find the sample code available for download in the attachment.
Great post! I'm still having trouble with my connection string. Do you mind posting your web.config so I can see your <AppSettings>. Or could you tell me what my connection string will be. I have a database named tblNames.mdf in the App_Data folder. So is this correct?
<appSettings>
<add key="DBConn" value="App_Data\tblNames.mdf" />
</appSettings>
Your attachment AJAXEnabledWebSite1.zip wont unzip.
What does your <appSettings> look like in the web.config?
Pingback from AutoComplete TextBox (Using AJAX AutoCompleteExtender) from Database « KaushaL.NET
I did exactly same thing but my website is not working although the xml file is getting generated.
I am using Microsoft Visual Studio 2005.
Is the problem due to the fact that the web site in which I ma using the code is not ASP/AJAX enabled website ?
Any suggestions please
Thanks friend, my code were wrong cause my visibility modificator was static and can't access the method in the web services, thanks again!!!!
Great Post.
You can make WebMethod like that
[WebMethod]
public string[] GetProducts(string prefixText)
{
string sql = "Select Name from Contacts);
if (dt.Rows.Count != 0)
{
string[] items = new string[dt.Rows.Count];
int i = 0;
foreach (DataRow dr in dt.Rows)
{
items.SetValue(dr["Name"].ToString(), i);
i++;
}
return items;
}
else
{
string[] items = new string[1];
items.SetValue("No Name Match", 0);
return items;
}
}
To show if the name you write dosen't match any name inside DB
I have done the auto complete extender as give above but from the text box no output is coming. Please suggest on the same. Here is my code:
APSX CODE:
<asp:ScriptManager
<Services>
<asp:ServiceReference
</Services>
</asp:ScriptManager>
<asp:TextBox</asp:TextBox>
<cc1:AutoCompleteExtender
</cc1:AutoCompleteExtender>
WEB SERVICE CODE:
[WebService(Namespace = "")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[System.Web.Script.Services.ScriptService]
public class AutoCompleateWebService : System.Web.Services.WebService {
public AutoCompleateWebService () {
//Uncomment the following line if using designed components
//InitializeComponent();
[WebMethod]
public string[] GetClientScearchDetails(string perfixText)
try
Utility objUtil = new Utility();
DataSet dsresult = new DataSet();
dsresult = objUtil.getClientDetailsForAutoCompleate(perfixText);
int count = dsresult.Tables[0].Rows.Count;
if (count == 0)
count = 10;
List<string> LsClientName = new List<string>(count);
if (dsresult.Tables[0].Rows.Count > 0)
foreach (DataRow drrow in dsresult.Tables[0].Rows)
{
string strClient = drrow[0].ToString();
LsClientName.Add(strClient);
}
return LsClientName.ToArray();
else
LsClientName.Add("No Such Client Exist");
catch (Exception ex)
throw ex;
I have tried to implement the auto complete extender but it is not calling the perfectly working web service. I already implemented the same it was working now i don't know why it is not working please help me out.
ASP CODE:
<asp:ScriptManager <Services> <asp:ServiceReference </Services> </asp:ScriptManager>
<Services>
The WEB SERVICE Code:
Pingback from Autocomplete extender is not working | The Largest Forum Archive
JAI MATADI ..
hello sir ..
this is great tutorial ..
and i did everything which u told in this but just made a slightly change in my code ..
i put a full path at
SqlDataAdapter da = new SqlDataAdapter(sql, ConfigurationManager.AppSettings["DBConn"]);
inplace of "ConfigurationManager etc..."
i put my full database path like this
@"Data Source=.\SQLEXPRESS;AttachDbFilename=C:\Documents and Settings\Chandrik Tools\My Documents\Visual Studio 2010\WebSites\ajax toolkit websites\App_Data\Database.mdf;Integrated Security=True;User Instance=True"
as i dont see any "APPSETTINGS" etc .. names in my WRB.CONFIG ..
but m not getting any result when i m running my page ..
my code generating very well without any errors .. but the name suggestion results are not coming ..
please tell me sir what wrong i m doing .. i have done everything .. yet not able to get the result .. :(
I have the autocomplete extender working but would like to pass a different value from what I am displaying. I am displaying a lastname, firstname, date of birth.I would like to pass an id value. Can you give me some direction on how to accomplish this. Any help would be appreciated.
All material is copyrighted by its respective authors. Site design and layout
is copyrighted by DotNetSlackers.
http://dotnetslackers.com/Community/blogs/kaushalparik/archive/2008/06/06/autocomplete-textbox-using-ajax-autocompleteextender-from-database.aspx
im getting the following warning on a for loop, what does this mean? Code:
warning C4552: '<' : operator has no effect; expected operator with side-effect
Can you show the code that causes this error?
well here it is, Andy helped me figure out the reading part of the file but now im trying to get the average of the numbers in the array. so here is what i got so far.
Code:
#include <iostream.h>
#include <fstream>
#include <stdlib.h>
using namespace std;
const int FILENAMELEN = 30, ARRAYLEN = 4;
// The class array
class array {
public: // These functions are avaiable for use outside the class
inline int getsize() {return(size);}
int readarray();
float findaverage();
array();
private:
//Private - available only to member functions
int x[ARRAYLEN];
int size;
int j;
int numbers[20];
int count;
float average;
float sum;
char filename[FILENAMELEN];
char inchar;
};
//initialization constructor
array::array(void)
{
size = 0;
}
int array::readarray()
{
ifstream infile;
char filename[FILENAMELEN];
char inchar;
cout<<"Enter filename->" <<'\t';
cin>> filename;
//openfile
infile.open(filename);
//if you cant open the file, print error message
if (!infile)
{
cerr<< "Cannot open " <<filename << endl;
exit(1);
}
//Read the file character
while (!infile.eof())
{
inchar = infile.get(); //this reads in even whitespaces
cout << inchar; //display it on screen
}
infile.close();
return(0);
}
float array::findaverage()
{
int j;
float average;
sum =0.0;
for (j<0; j<count; j++)
{
sum+=numbers[j];
}
average = sum/j;
cout << "The average is "<< average <<endl;
return 0;
}
int main()
{
array myarray;
myarray.readarray();
myarray.findaverage();
return(0);
}
for (j<0; j<count; j++)
ah yes, jesus im blind
ok i corrected the for loop but now the result is ugly, no errors, no warnings, but an ugly wrong result
any ideas? Code:
-1.#IND
This means the result was not a number/indeterminate. You get this result when the operation is not mathematically defined. Code:
-1.#IND
You are getting this result because you are performing maths on an array of numbers that have not been initialised.
wait im confused, the numbers contained in the array have not been initialized? I am able to read the array and display the values that is holding, why cant i operate on those values since im able to read them? any ideas how i can go about it then so i can work on those numbers?
>>I am able to read the array and display the values that is holding
Review readarray() again. I see where it reads the material in the file into a single char and displays that char on the screen. But...I don't see where it stores the material read in anywhere, let alone read in an int that could be stored in numbers[] such that the values in numbers[] could then be used in findaverage().
Read the file contents into an int and store the value of the int in numbers[], or better, in my opinion, use a loop to store the value read from the file directly into numbers[], and get rid of !infile.eof() as the conditional of the loop, which is likely to get you into trouble (likely to read in one more value than you had planned on).
i searched on the forums can i use atoi something for this? to convert the char into int? someone give me a few hints please
Once you read your file into an array, you can use atoi to convert it into a number. Here is an example prog to demonstrate this:
Code:
#include <iostream>
#include <ctype.h>
#include <cstdlib>
using namespace std;
int main() {
char myString[5] = {'1', ' ', '2', ' ', '3'};
int myInt[5] = {0};
int count = 0, sum = 0;
for(int i = 0; i < 5; i++){
if(isdigit(myString[i])){
myInt[count++] = atoi(&myString[i]);
}
}
for(int i = 0; i < count; i++){
sum += myInt[i];
}
cout << endl << sum << endl;
return 0;
}
ok ill work on that. thanks man
for the love of god i still cant figure this out, can someone take a look at my code please the reading is the only thing holding me back, i created another function similiar to findaverage() but if i cant read the numbers then i cant do any work on that either.
Ok here is your while loop where you are reading from the file:
Notice you are not storing your information anywhere. Somewhere in that while loop you are going to need to store the char in inchar into an array or some form of container. Code:
//Read the file character
while(!infile.eof())
{
inchar = infile.get(); //this reads in even whitespaces
cout << inchar; //display it on screen
}
infile.close();
So ask yourself, Do I know how large I need to make the array?(aka how many characters are going to be read from the file)
If you do simply create a char array as one of your private members and then add inchar to it like so:
Now if you don't know the size you can either size the array dynamically with the new operator, or the easiest way of handling this in C++ is to use a container object such as a vector. Code:
//Defined in class
char fileInput[fileSize];
//In your read function
int currentChar = 0;
//Read the file character
while(!infile.eof())
{
fileInput[currentChar] = infile.get(); //this reads in even whitespaces
cout << fileInput[currentChar]; //display it on screen
currentChar++;
}
infile.close();
I have a simple prog that loads a file into a vector. Let me know if you want to see it, I am assuming that you want to figure this out for yourself though. :)
https://cboard.cprogramming.com/cplusplus-programming/61753-what-does-mean-printable-thread.html
Working with URLs
As soon as you write a plugin that provides a new view to the user (or if you want to contribute to pretix itself), you need to understand how URLs work in pretix as it differs slightly from the standard Django system.
The reason for the complicated URL handling is that pretix supports custom subdomains for single organizers. In this example we will use an event organizer with the slug bigorg that manages an awesome conference with the slug awesomecon. If pretix is installed on pretix.eu, this event is available by default at and the admin panel is available at.

If the organizer now configures a custom domain like tickets.bigorg.com, his event will from now on be available on. The former URL at pretix.eu will redirect there. However, the admin panel will still only be available on pretix.eu for convenience and security reasons.
URL routing
The hard part about implementing this URL routing in Django is that contains two parameters of nearly arbitrary content and contains only one. The only robust way to do this is by having separate URL configurations for those two cases. In pretix, we call the former our maindomain config and the latter our subdomain config. For pretix's core modules we do some magic to avoid duplicate configuration, but for a fairly simple plugin with only a handful of routes, we recommend just configuring the two URL sets separately.
The file urls.py inside your plugin package will be loaded and scanned for URL configuration automatically and should be provided by any plugin that provides any view.
from django.conf.urls import url

from . import views

urlpatterns = [
    url(r'^control/event/(?P<organizer>[^/]+)/(?P<event>[^/]+)/mypluginname/',
        views.AdminView.as_view(), name='backend'),
]

event_patterns = [
    url(r'^mypluginname/', views.FrontendView.as_view(), name='frontend'),
]
Note
As you can see, the view in the frontend is not included in the standard Django
urlpatterns
setting but in a separate list with the name
event_patterns. This will automatically prepend
the appropriate parameters to the regex (e.g. the event or the event and the organizer, depending
on the called domain).
If you only provide URLs in the admin area, you do not need to provide an
event_patterns attribute.
URL reversal
pretix uses Django’s URL namespacing feature. The URLs of pretix’s core are available in the
control
and
presale namespaces, there are only very few URLs in the root namespace. Your plugin’s URLs will
be available in the
plugins:<applabel> namespace, e.g. the form of the email sending plugin is
available as
plugins:sendmail:send.
Generating a URL for the frontend is a complicated task, because you need to know whether the event’s
organizer uses a custom URL or not and then generate the URL with a different domain and different
arguments based on this information. pretix provides some helpers to make this easier. The first helper
is a python method that emulates a behavior similar to
reverse:
pretix.multidomain.urlreverse.eventreverse(obj, name, kwargs=None)
Works similarly to
django.core.urlresolvers.reverse, but takes into account that some organizers or events might have their own (sub)domain instead of a subpath.
Non-keyword arguments are not supported as we want to discourage using them for better readability.
- Parameters
obj – An Event or Organizer object
name (str) – The name of the URL route
kwargs – A dictionary of additional keyword arguments that should be used. You do not need to provide the organizer or event slug here, it will be added automatically as needed.
- Returns
An absolute URL (including scheme and host) as a string
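Conceptually, the choice eventreverse has to make can be sketched like this (an illustrative sketch only, not pretix's real implementation; the function and parameter names are mine):

```python
# Main-domain form carries both slugs in the path; the custom-domain
# form only the event slug, since the domain already identifies the organizer.
def event_url(organizer_slug, event_slug, custom_domain=None):
    if custom_domain:
        # The organizer configured their own (sub)domain.
        return "https://%s/%s/" % (custom_domain, event_slug)
    # Default: the event lives under the main pretix domain.
    return "https://pretix.eu/%s/%s/" % (organizer_slug, event_slug)
```

For the example organizer above, `event_url('bigorg', 'awesomecon')` yields the pretix.eu form, while passing `custom_domain='tickets.bigorg.com'` yields the short custom-domain form.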
In addition, there is a template tag that works similar to
url but takes an event or organizer object
as its first argument and can be used like this:
{% load eventurl %}
<a href="{% eventurl request.event "presale:event.checkout" step="payment" %}">Pay</a>
https://docs.pretix.eu/en/latest/development/implementation/urlconfig.html
Interesting Things
- Profilers: If you want a good Ruby profiler, check out the ruby-prof gem. It even creates HTML call graphs.
- Gotcha: Rake and Capistrano tasks are global, regardless of their namespace. In addition, if a Rake and Capistrano task have the same name, they will collide: which one wins, nobody knows! We’ve had success encapsulating Rake and Cap tasks in Ruby classes and delegating to them as soon as possible.
- Though we’re big fans of Fast-JSON, the new JSON library is faster and appears to fix several of the bugs that FJSON was created to fix.
- Hackety Hack is a little tool to help beginners learn to program in Ruby, especially kids. Why is it cool? Because of Why! Why? Yes, Why!
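The Rake/Capistrano encapsulation tip above can be sketched like this (class and method names are mine, not Pivotal's): keep the real work in a plain Ruby class so the tasks become thin, collision-proof delegators.

```ruby
# The logic lives in an ordinary, testable Ruby class...
class DataMigration
  def run(dry_run: false)
    dry_run ? "would migrate" : "migrated"
  end
end

# ...and the Rakefile only delegates to it:
#   task(:migrate) { puts DataMigration.new.run }
```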
Ask for Help
- “We have some really slow tests that test external dependencies, such as Amazon’s S3 Service… should we create a ‘slow suite’?” We are planning on doing this, and many of us have created ‘slow suites’ that run only in continuous integration… with mixed success. Sometimes people just ignore the slow suite’s errors.
http://pivotallabs.com/community/page/269/
CGI::Lingua - Create a multilingual web page
Version 0.48
No longer does your website need to be in English only. CGI::Lingua provides a simple basis to determine which language to display a website. The website tells CGI::Lingua which languages it supports. Based on that list CGI::Lingua tells the application which language the user would like to use.
use CGI::Lingua;
# ...
my $l = CGI::Lingua->new(supported => ['en', 'fr', 'en-gb', 'en-us']);
my $language = $l->language();
if ($language eq 'English') {
    print '<P>Hello</P>';
} elsif($language eq 'French') {
    print '<P>Bonjour</P>';
} else {   # $language eq 'Unknown'
    my $rl = $l->requested_language();
    print "<P>Sorry for now this page is not available in $rl.</P>";
}

my $c = $l->country();
if ($c eq 'us') {
    # print contact details in the US
} elsif ($c eq 'ca') {
    # print contact details in Canada
} else {
    # print worldwide contact details
}

# ...
use CHI;
use CGI::Lingua;
# ...
my $cache = CHI->new(driver => 'File', root_dir => '/tmp/cache',
    namespace => 'CGI::Lingua-countries');
my $l = CGI::Lingua->new(supported => ['en', 'fr'], cache => $cache);
Creates a CGI::Lingua object.
Takes one mandatory parameter: a list of languages, in RFC-1766 format, that the website supports. Language codes are of the form primary-code [ - country-code ] e.g. 'en', 'en-gb' for English and British English respectively.
For a list of primary-codes refer to ISO-639 (e.g. 'en' for English). For a list of country-codes refer to ISO-3166 (e.g. 'gb' for United Kingdom).
# We support English, French, British and American English, in that order
my $l = CGI::Lingua->new(supported => ['en', 'fr', 'en-gb', 'en-us']);
Takes optional parameter cache, an object which is used to cache country lookups. This cache object is an object that understands get() and set() messages, such as an CHI object.
Takes an optional boolean parameter syslog, to log messages to Sys::Syslog.
Takes optional parameter logger, an object which is used for warnings. This logger object is an object that understands warn() message, such as a Log::Log4perl object.
Since emitting warnings from a CGI class can result in messages being lost (you may forget to look in your server's log), or appearing to the client in amongst HTML causing invalid HTML, it is recommended that either syslog or logger (or both) are set. If neither is given, Carp will be used.
Takes an optional parameter dont_use_ip. By default, if none of the requested languages are supported, CGI::Lingua->language() looks in the IP address for the language to use. This may be not what you want, so use this option to disable the feature.
Tells the CGI application what language to display its messages in. The language is the natural name e.g. 'English' or 'Japanese'.
Sublanguages are handled sensibly, so that if a client requests U.S. English on a site that only serves British English, language() will return 'English'.
If none of the requested languages is included within the supported lists, language() returns 'Unknown'.
use CGI::Lingua;
# Site supports English and British English
my $l = CGI::Lingua->new(supported => ['en', 'fr', 'en-gb']);
# If the browser requests 'en-us', then language will be 'English' and
# sublanguage will be undefined because we weren't able to satisfy the
# request

# Site supports British English only
my $l = CGI::Lingua->new({supported => ['fr', 'en-gb']});
# If the browser requests 'en-us', then language will be 'English' and
# sublanguage will also be undefined, which may seem strange, but it
# ensures that sites behave sensibly.
Synonym for language, for compatibility with Locale::Object::Language
Tells the CGI what variant to use e.g. 'United Kingdom', or 'Unknown' if it can't be determined.
Gives the two character representation of the supported language, e.g. 'en' when you've asked for en-gb.
If none of the requested languages is included within the supported lists, language_code_alpha2() returns undef.
Synonym for language_code_alpha2, kept for historical reasons.
Gives the two character representation of the supported language, e.g. 'gb' when you've asked for en-gb, or undef.
Gives a human readable rendition of what language the user asked for whether or not it is supported.
Returns the two character country code of the remote end.
If Geo::IP is installed, CGI::Lingua will make use of that, otherwise it will do a Whois lookup. If you do not have Geo::IP installed, I recommend you make use of the caching capability of CGI::Lingua.
HTTP doesn't have a way of transmitting a browser's localisation information which would be useful for default currency, date formatting etc.
This method attempts to detect the information, but it is a best guess and is not 100% reliable. But it's better than nothing ;-)
Returns a Locale::Object::Country object.
To be clear, if you're in the US and request the language in Spanish, and the site supports it, language() will return 'Spanish', and locale() will try to return the Locale::Object::Country for the US.
Nigel Horne,
<njh at bandsman.co.uk>
If HTTP_ACCEPT_LANGUAGE is 3 characters, e.g., es-419, sublanguage() returns undef.
Please report any bugs or feature requests to
bug-cgi-lingua at rt.cpan.org, or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.
Locale::Object::Country, HTTP::BrowserDetect
You can find documentation for this module with the perldoc command.
perldoc CGI::Lingua
You can also look for information at:
This program is released under the following licence: GPL
http://search.cpan.org/~nhorne/CGI-Lingua-0.48/lib/CGI/Lingua.pm
Yesterday, a JDE client wrote in and asked if our virtual classes were "live".
It occurred to me that there could be some confusion about what most people mean when they talk about virtual training.
To us, and I think the majority of training providers, virtual classes always mean they are being held "live", with a living, breathing instructor.
The "virtual" part of these classes is the fact that the students and the instructor are not physically in the same location--everyone is communicating from home or their respective offices via the internet.
JDEtips University features a hosted software solution, where the JD Edwards standalone demo software is hosted by a third party which also provides a full virtual classroom solution. Features that I especially like include:
One of the biggest advantages of virtual training, is that we can schedule them in four hour chunks of time, instead of the usual eight hours for on-premise classes. This allows students to keep up with their critical job functions while attending training. They can also practice what they've learned before the next day's session.
Many of our clients are selecting virtual training because their teams are geographically spread out. In that case, virtual training is the most cost effective option.
Back to the original question we started with--if the sessions weren't 'live' what would they be? I think the client who asked that question wanted to know if we were talking about live classes or eLearning.
eLearning is recorded training--typically you can see the instructor's desktop as he works through various exercises, and you'll hear the instructor as he explains what he is doing.
Interactive eLearning allows students to run through practice exercises and get automatic feedback.
eLearning definitely has a place in the panoply of educational techniques that JDE clients need to have available. Presently most eLearning in the JD Edwards space is UPK created end user material.
Does JDEtips have any plans to develop eLearning materials to help clients learn the JDE configuration level skills they need to set up the system? Not at the present time, but it is something we evaluate periodically.
However, our training manuals are for sale, and we feel that these are a great resource for your learning and reference needs when you don’t have the time or budget to attend training.
Note: Click here to view our schedule of public virtual and on-premise JD Edwards classes. We cover the core modules in Financials, Distribution, Manufacturing, and Development (Tools & ERW). Every public class is Guaranteed to Run.
http://it.toolbox.com/blogs/jdedwards/virtual-jd-edwards-training-is-it-live-or-not-52618
State of the union
Last updated Jan 1, 2003.
Unions are one of the C relics that C++ has retained. On the one hand, they are an example of intrusive, highly implementation-dependent programming style that is the anathema of object-oriented programming. Yet, even in C++ programs, they can have certain useful applications as I will show you in the following passages.
What's in a union?
In the olden days, when memory was scarce and static type-checking was often overlooked ("We're serious programmers and we know what we are doing!"), programming languages such as FORTRAN, PL/1 and C offered a means of storing multiple objects on the same chunk of memory. Of course, one could only use a single object at a time, but this technique could save memory because the decision regarding which object was needed was often delayed to runtime, whereas the objects themselves needed to be declared at compile time.
Think, for example, of a database query that retrieves an employee's record. The record in question can be retrieved using various criteria: the employee's name, his or her ID, telephone number, and so on. Obviously, you don't need all these keys at once, but you can't decide at compile time which one the user will decide to use when querying the database. To solve this problem, it was customary to pack all the keys within a single data structure called a union:
union Key
{
    int ID;
    char * name;
    char phone[8];
};
The size of a union is sufficient to contain the largest of its data members. In the case of Key, it's typically 8 bytes -- the size of phone. By contrast, a struct containing the same data members occupies the cumulative size of its members, i.e.,
sizeof(ID) + sizeof(name) + sizeof (phone)
which is 16 bytes on most 32-bit systems (with the possible addition of padding bytes). These savings might not impress you, but in those days, when a system's RAM consisted of a few kilobytes, every byte counted, especially when a program used arrays of unions. A typical program for accessing a database would determine the actual key at runtime using a type-encoding enumeration:
/*C style example of using type-coding enum + union*/
enum KeyType { by_id, by_name, by_phone };
Accessing a union's member is similar to accessing a member of a struct or a class. The crucial difference is that while objects and structs store each data member on a distinct memory address, all members of a union are stored on the same address. Therefore, the programmer must be careful to access the correct member:
Employee * retrieve(union Key * thekey, enum KeyType type)
{
    switch (type)
    {
    case by_id:
        access_by_id(thekey->ID);
        break;
    case by_name:
        access_by_name(thekey->name);
        break;
    //..
    }
}
This programming style has gone out of favor with the advent of object-oriented programming. Not only does it rely heavily on implementation details, it's also error-prone. If the user accesses the wrong data member of the union, the results will be meaningless, just like accessing a random piece of memory. Yet this dangerous characteristic was also an advantage in some systems that didn't support typecasting.
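The size arithmetic described above is easy to check mechanically. A small sketch (the type names KeyU/KeyS are mine, to avoid clashing with the article's Key; the check relies only on guarantees that hold on any platform):

```cpp
union KeyU
{
    int ID;
    char * name;
    char phone[8];
};

struct KeyS // the same members side by side, for comparison
{
    int ID;
    char * name;
    char phone[8];
};

// The union must be at least as large as its largest member (phone),
// and can never be larger than a struct holding all members at once.
bool unionSavesSpace()
{
    return sizeof(KeyU) >= sizeof(char[8]) && sizeof(KeyU) <= sizeof(KeyS);
}
```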
union-based Typecasting
In C++, operator reinterpret_cast performs low-level typecasting between pointers and references that preserves the original binary layout of the source data. For example, in order to examine the bytes of an int, you could do something like this:
int num=2000;
unsigned char * p = reinterpret_cast<unsigned char *> (&num);
for (int i=0; i<sizeof (num); i++)
    //display the decimal value of every byte of num
    cout<<"byte "<<i<<": " << (int) p[i] <<endl;
Before the days of reinterpret_cast, programmers would use a union to achieve the same effect (union initialization rules are explained in this article):
union Cast
{
    int n;
    char str[sizeof (n)];
};
Cast c = {2000}; //braces initialize the first member, n
for (int i=0; i<sizeof (int); i++)
    printf("%d\n", c.str[i]);
Anonymous unions
C++ introduced a special union type called an anonymous union. Unlike an ordinary union, it has neither a tag name nor a named instance. As such, it's mostly used as a data member of a class. For example:
class Employee
{
private:
    union //anonymous
    {
        int key_ID;
        char * key_name;
        char key_phone[8];
    };
    double salary;
    string name;
    int rank;
    //...
public:
    Employee();
};
The advantage of using an anonymous union is that you access its members directly, as if they were ordinary data members of the class:
Employee::Employee() : key_ID(0),//member of an anon. union salary(0.0), rank(0) //ordinary data members {}
Anonymous unions aren't confined to classes; you can declare an anonymous union in a file scope or a namespace scope. In these cases, however, it must be declared static and its members have internal linkage:
static union //declared globally, has internal linkage
{
    int x;
    void *y;
};
namespace NS
{
    static union //a namespace's scope, has internal linkage
    {
        int w;
        char s[4];
    };
}
int main()
{
    x=0;
    NS::s[0]='a';
}
A union Facelift
C++ introduced another enhancement, namely the ability to declare member functions in a union, including constructors, destructors etc. Note, however, that virtual member functions (including a virtual destructor) are not allowed:
union Key
{
private:
    int ID;
    char * name;
    char phone[8];
public:
    //ctor, dtor, copy ctor and assignment op
    Key();
    ~Key();
    Key(const Key & ref);
    Key& operator=(const Key & ref);
    //ordinary member functions are also allowed
    void Assign(int n) { ID=n; }
};
That said, a union shall not be a base class nor can it be derived from another class. Note also that only ordinary unions may contain member functions; anonymous unions can't have member functions of any kind, nor can they contain static, private and protected data members.
Summary
In high-level applications, unions have limited usage nowadays, if any. Yet it's important to know how to use them because they are still widely used in legacy code and in low-level APIs. The C++ creators attempted to upgrade unions into an object-oriented entity by adding the ability to declare member functions and private and protected data members in a union. An anonymous union is a special type of a union that has no tag name or instance name.
https://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=178
Bad AssertionError "Cannot mix positive and negative values in BarSeries"
Unfortunately this assert does not seem to work as expected and causes errors (in dev mode)
Code:
public class BarSeries<M> extends MultipleColorSeries<M> {
    private void calculatePaths() {
        ...
        double value = yFields.get(j).getValue(store.get(i)).doubleValue();
        assert value * minY >= 0 : "Cannot mix positive and negative values in BarSeries.";
        ...
    }
}
In my data I see values ranging between 0 and 5587930998, could it also be because I sometimes have a chart full of 0's ?
Success! Looks like we've fixed this one. According to our records the fix was applied for a bug in our system in a recent build.
http://www.sencha.com/forum/showthread.php?231621-Bad-AsssertionError-quot-Cannot-mix-positive-and-negative-values-in-BarSeries-quot&p=858239&viewfull=1
Talk:Proposed features/Tag:man_made=septic_tank
Feedback on the mailing lists suggests we change "amenity" to "man_made". This sounds like a good idea.
John Eldredge points out "It would also make sense to map the drain field location as well as the location of the septic tank, if such a location is known, to reduce the risk of subsequent excavations running into the drain field plumbing. In the case of a septic system that has been in place for years, there may no longer be records, or visible evidence, of the exact location of the drain field."
That would require an extra tag. landuse? drainfield=yes? suggestions welcome.
-- User:Batje 25 October
collect similar man-made things
septic tank is one of the many objects forming part of a sewage system. I suppose it would be best to select or introduce a tag for sewage objects. so a septic tank would be tag:sewage=septic tank, and a sewage tube tag:sewage=pipe, or something like this. so "don't crowd a generic namespace" but group together things that have to do with each other. I didn't check how those basins for waste water management are marked, but I think that a septic tank should fall in the same category as they.
I see a waste water treatment plant in Utrecht () having its sedimentation tanks marked as "tag:natural=water", which is probably the closest thing one could find, but I'm sure sedimentation tanks have very little to do with "nature"!
Cesspits
Could cesspits somehow be included in this proposal? There are places where it's still common to find and use cesspits. --naoliv (talk) 11:02, 8 December 2016 (UTC)
https://wiki.openstreetmap.org/wiki/Talk:Tag:amenity%3Dseptic_tank
The QHelpEngineCore class provides the core functionality of the help system. More...
#include <QHelpEngineCore>
Inherited by: QHelpEngine.
This class was introduced in Qt 4.4.
Constructs a new core help engine with a parent. The help engine uses the information stored in the collectionFile to provide help. If the collection file does not exist yet, it'll be created.
Destructs the help engine.
Adds the new custom filter filterName. The filter attributes are specified by attributes. If the filter already exists, its attribute set is replaced. The function returns true if the operation succeeded, otherwise it returns false.
See also customFilters() and removeCustomFilter().
Creates the file fileName and copies all contents from the current collection file into the newly created file, and returns true if successful; otherwise returns false.
The copying process makes sure that file references to Qt Collection files (.qch) files are updated accordingly.
This signal is emitted when the current filter is changed to newFilter.
Returns a list of custom filters.
See also addCustomFilter() and removeCustomFilter().
Returns the value assigned to the key. If the requested key does not exist, the specified defaultValue is returned.
See also setCustomValue() and removeCustomValue().
Returns the absolute file name of the Qt compressed help file (.qch) identified by the namespaceName. If there is no Qt compressed help file with the specified namespace registered, an empty string is returned.
See also namespaceName().
Returns a description of the last error that occurred.
Returns the data of the file specified by url. If the file does not exist, an empty QByteArray is returned.
Returns a list of files contained in the Qt compressed help file namespaceName. The files can be filtered by filterAttributes as well as by their extension extensionFilter (e.g. 'html').
Returns a list of filter attributes for the different filter sections defined in the Qt compressed help file with the given namespace namespaceName.
Returns a list of all defined filter attributes.
Returns a list of filter attributes used by the custom filter filterName.
Returns an invalid URL if the file url cannot be found. If the file exists, either the same url is returned or a different url if the file is located in a different namespace which is merged via a common virtual folder.
Returns a map of hits found for the id. A hit contains the title of the document and the url where the keyword is located. The result depends on the current filter, meaning only the keywords registered for the current filter will be returned.
Returns the namespace name defined for the Qt compressed help file (.qch) specified by its documentationFileName. If the file is not valid, an empty string is returned.
See also documentationFileName().
Registers the Qt compressed help file (.qch) contained in the file documentationFileName. One compressed help file, uniquely identified by its namespace can only be registered once. True is returned if the registration was successful, otherwise false.
See also unregisterDocumentation() and error().
Returns a list of all registered Qt compressed help files of the current collection file. The returned names are the namespaces of the registered Qt compressed help files (.qch).
Returns true if the filter filterName was removed successfully, otherwise false.
See also addCustomFilter() and customFilters().
Removes the key from the settings section in the collection file. Returns true if the value was removed successfully, otherwise false.
See also customValue() and setCustomValue().
Save the value under the key. If the key already exist, the value will be overwritten. Returns true if the value was saved successfully, otherwise false.
See also customValue() and removeCustomValue().
This signal is emitted when the setup is complete.
This signal is emitted when setup is started.
Unregisters the Qt compressed help file (.qch) identified by its namespaceName from the help collection. Returns true on success, otherwise false.
See also registerDocumentation() and error().
This signal is emitted when a non critical error occurs. The warning message is stored in msg.
http://doc.qt.nokia.com/main-snapshot/qhelpenginecore.html#addCustomFilter
A million issues getting started
- AustinBeau last edited by Have been following this guide.
I have a WiPy 2.0 and the expansion board.
Starting out: I had pymate on my phone, connected to the device and uploaded firmware. This seems to break the wipy... I was no longer able to connect to it over pymate... fine... so I bought the expansion board.
Following the guide, I get the firmware upgrader tool, follow the steps including setting 23 to ground. Says firmware upload is complete and to remove the cable and power off. I do this and power up again, still not showing up on WiFi for the pymate app on android.
I tried installing Pymakr for Atom, but that won't work as it gives an error saying Pymakr is broken because it was built for another version of Atom.
I'm starting to pull out hair and I don't know where to turn.
@AustinBeau
yes i wrote this line from memory - fixed in post, thanks :)
IP is 192.168.4.1 and the rest default,
and then user micro, password python,
like here
- AustinBeau last edited by
@livius I got WiFi to work again! But just to correct for anyone else reading, it's
from network import WLAN, not machine.
And to ftp to it, I set my machine's WiFi network to the ssid I set, and ftp at 192.168.1.1 with user micro and password python?
Hi @AustinBeau,
To fix your Atom issue, you'll see a little red bug at the bottom of your Atom window. Click this and it will give you an option to rebuild the Pymakr Plugin. This should hopefully fix it for you!
I will investigate the Pymate issues you're having but for the time being, I suggest that you put your device into 'safe boot mode' () and then when the REPL is available, running the following commands:
import os os.mkfs('flash')
This will clear any of the files that were previously written to the device by the Pymate app.
Thanks!
Alex
@AustinBeau
You probably are affected by the current "issue" with wifi where you must set
ssid in the AP init;
look here for details:
connect to your board by UART
use e.g. putty or Arduino IDE (there is a com port monitor) - set baud rate to 115200 and change the linebreak
try simple command like
os.uname()
if this worked try this lines:
from network import WLAN wlan = WLAN(mode=WLAN.AP, ssid='wipy-test')
after this you can connect to it by ftp e.g.
filezilla
and put above lines in your
boot.py
after reset your wipy will be avaiable as
wipy-test
and you can simply connect to it
about other issues you have i can not help, i do not use atom plugin
https://forum.pycom.io/topic/1440/a-million-issues-getting-started/5?lang=en-US
Buy Time with the Braintree v.zero SDK
This article was sponsored by Braintree. Thank you for supporting the sponsors who make SitePoint possible!
Braintree touts itself as offering “Simple, powerful payments.” We’ve been using Braintree at my company, KYCK, for ages now and I can attest to how easy it makes accepting payments. In December of 2013, Braintree was acquired by PayPal. Not much changed due to the acquisition until recently. As the first major change, Braintree has released a new client SDK aimed at making things easier and adding some new features.
The new SDK is called “v.zero,” and it offers:
- A new Drop-In UI that allows you to start accepting payments with “as little as ten lines of code.”
- The ability to easily accept PayPal as a payment method. This is the big, new feature of the SDK, and, no doubt, a direct result of the PayPal acquisition.
- Soon, the ability to accept payments via Bitcoin, through a partnership with Coinbase.
- Powerful, customizable payment flow to fit your needs when the Drop-In UI doesn’t work.
- Future changes to the SDK that will be “easy.” In other words, Braintree aims to keep the SDK current with constantly changing technology without forcing developers to overhaul their payment flow.
- One Touch™ mobile payments with PayPal and Venmo. One Touch makes accepting payments via your mobile app as seamless as possible. This article announcing One Touch has a great video explaining the service. If you are a mobile developer, this is a big deal.
In this post, I’ll walk through some of the features of the v.zero SDK. Most of the focus will be on the Drop-In UI and accepting PayPal, so you can get up and running fast.
The Application
In order to accept payments, there has to be a product. For our app, I want a compelling product. Something that everyone wants or needs, something that they’ll flock to the site to buy. It hit me like a lightning bolt in the middle of the night: Time. Everyone wants more time. So, that’s what we’re going to sell. Our users will be able to buy extra hours to spend as they see fit in their busy lives. We are going to be bazillionaires in no time.
The app is a vanilla Rails app with Devise for authentication. I am not going to walk through setting up the Rails app. Instead, our starting point is a Rails app with a User model that uses Devise’s password authentication. I’ve also added Zurb Foundation for some easy styling. There are roughly 1.2 million tutorials on how to setup Rails and Devise, and I have tagged our starting point (“starting_point”) in the repository.
Our users will have a very simple purchasing flow. Once logged in, the user can choose a payment method (credit card or PayPal) and pay $10 for 1 hour. To start, the application has an
OrdersController with a
new action and view.
How the SDK Works
The Braintree payment flow consists of five steps:
- Create a Braintree Account.
- Add the Braintree Ruby Library to the application.
- Provide a client token to your client (meaning, the browser, in our case)
- Receive a payment method nonce from the client (browser) after a payment is authorized (Note: Before this step, if you had asked me what a nonce is, I would’ve said, “an often rabid, small, furry animal that attacks without provocation.” There’s a lot I don’t know.)
- Create a transaction on Braintree using the payment method nonce.
Create a Braintree Account
In order to use Braintree to accept payments, you have to sign up for a Braintree account. It’s recommended to start in the Braintree “Sandbox,” which allows you to test payments without money actually exchanging hands. The Sandbox is a godsend, allowing developers to perfect the payment flow in an environment that mirrors the real thing.
Head over to the Get Started page and sign up for an account.
Braintree will send a confirmation email, so get confirmed and we’re ready to move forward.
The first login drops you on the Sandbox Dashboard, which looks like this:
The important bits on this page are: Merchant ID, Public Key, and Private Key. These values will be used to configure the Braintree SDKs in our Rails app. In fact, if you look on that same page, there is an example Ruby configuration that can be copied and pasted into your app.
Get the Braintree Ruby Library
In Rails, this kind of configuration is handled in an initializer. However, we don’t have a Braintree class in our codebase yet. Luckily, Braintree has a Rubygem for us to utilize. Add gem "braintree" to the Gemfile and bundle install.
Create a config/initializers/braintree.rb with the following:
Braintree::Configuration.environment = ENV['BRAINTREE_ENV'] || :sandbox
Braintree::Configuration.merchant_id = ENV['BRAINTREE_MERCHANT_ID'] || 'your merchant id'
Braintree::Configuration.public_key = ENV['BRAINTREE_PUBLIC_KEY'] || 'your public key'
Braintree::Configuration.private_key = ENV['BRAINTREE_PRIVATE_KEY'] || 'your private key'
Generate a Client Token
When our users show up to buy more time, Braintree has to know who we are in order to get us our millions. Basically, the users will select a payment method and authorize a payment by submitting a form from our app to Braintree’s servers. The client token will be provided alongside the user’s information, and it tells Braintree who we are by identifying our merchant account on their side. Braintree returns a payment method nonce that represents the authorized payment to our application, which we’ll discuss in a moment.
Drop-In UI
Here is where the new Drop-In UI in the v.zero SDK comes into play. Create a partial called app/views/payment/_form.html.erb:
<form id="checkout" method="post" action="/checkout">
  <div id="dropin"></div>
  <input type="submit" value="Pay $10">
</form>
<script type="text/javascript">
  function setupBT() {
    braintree.setup("<%= @client_token %>", 'dropin', {
      container: 'dropin'
    });
  }
  if (window.addEventListener) window.addEventListener("load", setupBT, false);
  else if (window.attachEvent) window.attachEvent("onload", setupBT);
  else window.onload = setupBT;
</script>
This form is pulled directly from the Braintree docs. The script block is added here to handle the generation of the client token. I wanted to keep it all in one file to make it clearer for this tutorial. The token is created in the OrdersController#new method:
def new
  @client_token = Braintree::ClientToken.generate
end
Those same Braintree docs show the need to add the Braintree javascript file, so let’s do that now. Download the braintree.js file into the vendor/assets/javascripts directory and add it to our app/assets/javascripts/application.js:
//= require braintree (ADD THIS LINE)
//= require_tree . (This line already exists)
Now, the braintree javascript variable in our setupBT function will exist.
If you run the server (and sign up for an account in the app), the Drop-In UI renders and looks pretty good:
It looks like we have the ability to accept PayPal and credit cards out of the box. But that form looks a bit odd without a CVV field. How can we add CVV?
We can add CVV to our form by configuring it in the Braintree Sandbox. Once logged in, choose “Processing” from the “Settings” menu:
This page has a metric ton of configuration options, including:
- Duplicate Transaction Checking, which stops a transaction from being created if it matches one within the last 30 seconds.
- Accept Venmo.
- Basic Fraud Protection, including CVV.
- Custom Fields.
- Much, much more.
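As a rough illustration of the first option above, duplicate transaction checking boils down to remembering recent attempts and rejecting a repeat inside the window. The sketch below is invented for illustration; it is not Braintree's implementation.

```ruby
# Hypothetical sketch of the idea behind Duplicate Transaction Checking:
# reject a charge if an identical one was seen within the last 30 seconds.
class DuplicateChecker
  WINDOW = 30 # seconds

  def initialize
    @seen = {} # fingerprint => time of the last attempt
  end

  # Returns true if this (amount, payment method) pair was already
  # attempted inside the window. Also records this attempt.
  def duplicate?(amount:, payment_token:, now: Time.now)
    key  = [amount, payment_token]
    last = @seen[key]
    @seen[key] = now
    !last.nil? && (now - last) < WINDOW
  end
end

checker = DuplicateChecker.new
t0 = Time.now
checker.duplicate?(amount: "10.00", payment_token: "abc", now: t0)      # => false
checker.duplicate?(amount: "10.00", payment_token: "abc", now: t0 + 5)  # => true
checker.duplicate?(amount: "10.00", payment_token: "abc", now: t0 + 40) # => false (35s since the last attempt)
```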
To get CVV added to the form, we’ll need to configure the rules. Click the “Edit” button under “CVV” and add your rules. Here’s mine:
With the CVV rules configured, the form now has the CVV field:
That is pretty cool.
Payment Method Nonce
On to the last step in our basic process: getting a payment method nonce from Braintree that we will provide to our Rails server and then back to Braintree to add a payment.
I want this to be as simple as possible for our first payment. As such, we’ll change the action on our form to post to /orders and we’ll render out the @params. Change the form partial:
<%= form_tag orders_path, method: "post" do %>
  <div id="dropin"></div>
  <input type="submit" value="Pay $10">
<% end %>
<%= @params %> <!-- We'll remove this later, just testing now -->
The form is now Rails-ed up a bit so that there won’t be any authenticity token errors. You can grab fake credit card numbers from this page on the PayPal site. (They don’t work for real stuff… not that I tried them or anything.)
Fill out the form and see what happens.
I immediately noticed the following coolness:
- It won’t let me type any garbage into the Credit Card or other fields. Some basic, but solid, input validation comes with the form for free. Nice.
- The text labels are helpful and intuitive. It’s a nice experience, in fact, and better than a form that took KYCK ages to design and implement.
Submitting the form posted to our create method and rendered the params in the view:
{"utf8"=>"✓", "authenticity_token"=>"Yxt5NzsrKB4u/rEjmR3A7pIwVbcpGCL/lEBTMx7H8x0=", "payment_method_nonce"=>"1e6dfd62-f92e-4703-8807-b3f6b9b28c84", "action"=>"create", "controller"=>"orders"}
There it is… the payment method nonce. You can read about nonces all you want, but seeing one in the wild is truly a breathtaking experience.
Create a Braintree Transaction
Well, this is pretty exciting. We are already on to the last step in our test run of accepting payments. At this point, it’s simply a matter of creating a transaction on Braintree. This is pretty easy, as it turns out. Change OrdersController#create like so:
def create
  nonce = params[:payment_method_nonce]
  render action: :new and return unless nonce

  result = Braintree::Transaction.sale(
    amount: "10.00",
    payment_method_nonce: nonce
  )

  flash[:notice] = "Sale successful. Head to Sizzler" if result.success?
  flash[:alert] = "Something is amiss. #{result.transaction.processor_response_text}" unless result.success?
  redirect_to action: :new
end
Go back and fill in your form with that fake credit card number and BOOM! We can accept payments.
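One thing the create action doesn't do yet is actually grant the purchased hour. A minimal sketch of that fulfillment step might look like the following, using hypothetical User and Order stand-ins rather than the app's real models:

```ruby
# Hypothetical sketch: what "selling time" might look like once the
# Braintree sale succeeds. User and Order are plain stand-ins here,
# not the app's real ActiveRecord models.
User = Struct.new(:email, :hours) do
  def credit_hours(n)
    self.hours += n
  end
end

Order = Struct.new(:user, :amount, :hours, :braintree_transaction_id)

HOURS_PER_ORDER = 1
PRICE = "10.00"

# Would be called only when result.success? is true in OrdersController#create.
def fulfill_order(user, transaction_id)
  order = Order.new(user, PRICE, HOURS_PER_ORDER, transaction_id)
  user.credit_hours(order.hours)
  order
end

buyer = User.new("buyer@example.com", 0)
fulfill_order(buyer, "txn_123")
buyer.hours # => 1
```

Keeping the Braintree transaction id on the order record also makes later refunds and support questions much easier to trace.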
Enter PayPal
Let’s see if using PayPal is as easy as using a fake credit card number. Go back to the app and, instead of filling out the form, click that big, blue PayPal button. You should see a popup asking you to sign in:
Notice it places an overlay on the main form, which is sassy. Once logged in, you’re told exactly what the vendor is asking:
Click ‘Agree’, and you’re returned to the form. It changes to reflect that you’re using PayPal:
Click “Pay $10” and watch PayPal being accepted. MMMM…that is some good payment.
Counting Our Money
If you head over to the Braintree Sandbox dashboard, you can see that we are movin’ on up!
Man, today was HUGE for us!
Next Steps
This article scratches the surface of what can be done with the Braintree v.zero SDK. If we wanted to take our time-buying application to the next level, we might:
- Store Braintree customer IDs on our local Users, allowing these users to reuse payment methods. Braintree offers the Vault that will store tokens for each of the payment methods a user adds. The customer can then choose one of these payment methods when returning to buy more time.
- Control the transaction life cycle for our application’s transactions. The Braintree transaction processing flow is involved and you need to know it if you’re using Braintree. You can settle, release, refund funds, among other actions. Learn it, live it, love it.
- Offer subscriptions to our customers. Maybe they can get 10 hours a month for $90 or something. Braintree offers recurring billing that is surprisingly easy to handle. This is where the real cheese lives.
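To build intuition for the transaction life cycle mentioned in the second bullet, the statuses can be modeled as a small state machine. The states and allowed moves below (void before settlement, refund only after) are a simplified sketch for intuition, not the gem's API:

```ruby
# Toy model of a simplified Braintree-style transaction life cycle:
# authorized -> submitted_for_settlement -> settled,
# with void possible before settlement and refund only after.
class ToyTransaction
  attr_reader :status

  def initialize
    @status = :authorized
  end

  def submit_for_settlement
    transition(from: [:authorized], to: :submitted_for_settlement)
  end

  def settle
    transition(from: [:submitted_for_settlement], to: :settled)
  end

  def void
    transition(from: [:authorized, :submitted_for_settlement], to: :voided)
  end

  def refund
    transition(from: [:settled], to: :refunded)
  end

  private

  # Move to the new state only if the current state allows it.
  def transition(from:, to:)
    raise "cannot move from #{@status} to #{to}" unless from.include?(@status)
    @status = to
  end
end

t = ToyTransaction.new
t.submit_for_settlement
t.settle
t.refund
t.status # => :refunded
```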
Maybe you have other suggestions for where we can take our time hawking? Let me know what we should do next, and maybe I’ll pen an article for the most requested item.
Remember, the source for this article is in this repository.
In the meantime, watch out for those nonces. They can spring at any time.
/*
 * Created Oct 6, 2005
 * @author mbatchel
 */
package org.pentaho.core.system;

import org.pentaho.core.session.IPentahoSession;

public class GlobalObjectInitializer implements IPentahoSystemListener {

    public boolean startup(IPentahoSession session) {
        //
        // This is not ideal at all. Should have loop here to loop through the
        // objects listed in the pentaho.xml. Need to fix this at some point,
        // but right now, the PentahoSystem object instances this class, calls
        // this method which simply calls back into PentahoSystem to do the
        // work. I don't like it.
        //
        // MB
        return PentahoSystem.initGlobalObjects(session);
    }

    public void shutdown() {

    }
}
[2003-06-18] David Abrahams wrote:

> Moving this to the C++-sig as it's a more appropriate forum...
>
> "dalwan01" <dalwan01 at student.umu.se> writes:
>
>>> Daniel Wallin <dalwan01 at student.umu.se> writes:
>>>
>>> > namespace_("foo")
>>> > [
>>> >     def(..),
>>> >     def(..)
>>> > ];
>>>
>>> I considered this syntax but I am not convinced it is an advantage.
>>> It seems to have quite a few downsides and no upsides. Am I
>>> missing something?

*. I must be atypical. I make heavy, nested, use of namespaces in my C++ code. So having an easy way to represent that would be nice.

*. It's not global state. Unlike Python, Lua can handle multiple "instances" of an interpreter by keeping all the interpreter state in one object. So having a single global var for that is not an option. It needs to get passed around explicitly or implicitly. I imagine Lua is not the only interpreter that does this. So it's something to consider carefully as we'll run into it again (in fact, if I remember correctly, Java JNI does the same thing).

*. It's a somewhat different audience that uses Lua. The kind of audience that looks at the assembly generated to make sure it's efficient. People like game developers, embedded developers, etc. So having a choice between compile time and runtime they, and I, would choose compile time. But perhaps the important thing about this is to consider how to support both models.

--
grafik - Don't Assume Anything
-- rrivera (at) acm.org - grafik (at) redshift-software.com
-- 102708583 (at) icq
Offering your surveys internationally, properly localized, could be what's standing between you and millions in potential revenue. After all, the bigger your sample size, the more effective your surveys — and that is why internationalization (aka i18n) is critical.
i18n — where 18 stands for the number of letters between the first i and the last n in the word internationalization — is the process of designing surveys from the ground up to support different countries and regions, making sure they arrive ready for an international launch, without needing patches or jury-rigged features.
SurveyJS is a free and open-source (under the MIT license) JavaScript library that lets you do just that — design dynamic, data-driven, multi-language surveys using a variety of front-end technologies. It ships with a lean, scalable solution for internationalization + localization with no need for separate i18n libraries, or bespoke code.
Let's build a multi-language survey of our own to see just how easy this is.
The Game Plan
SurveyJS streamlines internationalization and localization for us with a two-step process:
- The first is the automatic translation of UI elements based on the locale of choice. This is simply a variable that can be pulled from the respondent's system locale (and is not the same as geolocation), or be set manually in your app's UI. The translated strings ship as dictionary files, are community-sourced, support 30+ languages, and can be overridden if needed.
You can manually set the locale in JavaScript like so (and this is how you'd do it if you're designing your app to support changing languages via the UI).
survey.locale = "fr";
Or in the JSON schema, as a default value.
"locale": "fr"
- The second stage is a manual translation of survey content (anything that isn't part of the stock UI; in other words, your actual survey) — questions, choices, titles, descriptions, etc. — simplified due to the data-driven approach of SurveyJS. Surveys are defined as data models (schemas) written in JSON, which sit on a separate layer from the JavaScript and CSS of the rest of the app.
Within this JSON schema, you use nested keys to indicate the namespace/locale of each string, with each value being the actual translated string for the question/choice/title/description. This is a common pattern used by many JavaScript localization libraries (eg: i18next) — only, here you don't need anything more than SurveyJS itself.
const surveyJson = {
  "elements": [{
    "type": "text",
    "name": "firstname",
    "title": {
      "default": "Enter your first name",
      "de": "Geben Sie Ihren Vornamen ein",
      "fr": "Entrez votre prénom"
    }
  }]
};
Using this design pattern, your dev team can write all of the strings in their native language as the default, and then send the JSON over to your localization team who send back the same JSON survey schema, edited to include translated strings of the other languages.
The Code
We'll use React for this example, but SurveyJS ships as component libraries for React, Angular, Vue.js, Knockout, and of course, jQuery, so you're pretty much covered.
First up is the JSON schema.
Here in the survey model, you see the pattern we just talked about. Question and page titles/descriptions are an object containing key-value pairs for each locale, and so are all possible choices — but you have a singular return value for each that is only in the native language your dev team uses, and it is this value that gets used programmatically.
{
  value: "male",
  text: {
    "default": "Male",
    "de": "Männlich",
    "nl": "Man"
  },
}
A possible choice can have different strings, one for each locale, but only a singular value that gets passed back and used in your app's logic.
You'll also notice that you don't have to provide key-value pairs for every possible locale — the default value will be used for anything that's not an explicitly defined locale string.
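That fallback behaviour is easy to demonstrate with a small helper of our own. Note that resolveText below is not part of SurveyJS; it is a hypothetical function that just mirrors the resolution rule described above:

```javascript
// Hypothetical helper showing the fallback behaviour described above:
// a localized field may be a plain string or an object of per-locale
// strings, and any missing locale falls back to the "default" entry.
function resolveText(field, locale) {
  if (typeof field === "string") return field; // not localized at all
  return field[locale] !== undefined ? field[locale] : field.default;
}

const title = {
  default: "Enter your first name",
  de: "Geben Sie Ihren Vornamen ein",
  fr: "Entrez votre prénom",
};

console.log(resolveText(title, "fr")); // "Entrez votre prénom"
console.log(resolveText(title, "nl")); // falls back: "Enter your first name"
```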
Next is the JavaScript/React code.
First of all, make sure to import the SurveyJS localization module, either as a <script> in the <head> tag of your HTML, or as an import in the component that renders the survey.

<script src=""></script>

import "survey-core/survey.i18n";
We'll only be supporting 3 languages in this example, passed down as props and selectable via a dropdown. Dropdown is a fairly simple React component.
The only gotcha you have to be wary of is that you do not want to set the locale as a state variable in the component that renders your survey (App, here). That will force a re-render of the entire survey any time a new language is chosen via the dropdown, resetting it and losing all progress.
Have it in the Dropdown component instead, because you do want that one to re-render with the current choice whenever we use it to change the language.
As for App, you could just have a function to change the locale (this function sets it with survey.locale = newLocale), pass it down to Dropdown, which will then call it on its own onChange event. A win-win scenario.
Finally, CSS is outside the scope of this article, but if you'd like some inspiration, here it is.
Check here for a list of SurveyJS CSS classes and properties you can override.
All that and FOSS, too!
Using SurveyJS, we needed no bespoke JavaScript code to build and properly internationalize/localize this survey, nor did we have to use any specialized i18n/l10n libraries. In fact, the only library we used at all — SurveyJS itself — was free, open-source, and self-hostable, lending itself well to the distribution of multiple discrete localized surveys on any platform at all.
The more respondents that can read questions in their native language, the greater your response rate. And the more control you have over your product, the more flexible you are when dealing with different cultures, currencies, and regulatory/tax regimes. So using SurveyJS to streamline both the creation and maintenance of your survey code makes all the sense in the world in an industry where extremely rapid implementation and turnaround times are critical.
Using this pattern, your dev team can focus on app code/business logic in only one language, passing over a boilerplate to your localization team, who only need to edit in translated strings and send the JSON schema back, ready to be used.