It's a surprise to me that ArcGIS Pro changes the python code in my script tool when I pack the project and upload the ppkx to ArcGIS Online.
Here are the first few lines of code from one script tool in the unpacked ppkx; please pay attention to the code between "Esri start of added imports" and "Esri end of added variables":
# Esri start of added imports
import sys, os, arcpy
# Esri end of added imports
# Esri start of added variables
g_ESRI_variable_1 = "EventID = '{}'"
g_ESRI_variable_2 = os.path.join(arcpy.env.scriptWorkspace,'..\\..\\..\\Users\\simoxu\\AppData\\Local\\Esri\\ArcGISPro\\Staging\\SharingProcesses\\0038\\RDA Tools for ArcGIS Pro\\p20\\current_checkout_attachments')
# Esri end of added variables
#-------------------------------------------------------------------------------
# Name: rda_checkout
# Purpose: checkout the assessment for a specific event and map it
#
# Author: simoxu
# Created: 27/07/2018
#-------------------------------------------------------------------------------
from arcpy import da
import os,sys
import shutil
In one place in my original code, I have the following line:
But it was replaced with the following code, which is causing a fatal error for the tool itself.
It's quite strange that the ArcGIS Pro packaging tool will change my code without informing me. I only found out about this when I shared the project package with others and was then told the tools could not run properly.
Is it only me? Any advice?
I am using ArcGIS Pro 2.2.1, by the way.
Thanks.
Package Project—Data Management toolbox | ArcGIS Desktop
It seems some of your paths point to your own user folder, and even if you are sharing internally, I doubt that anyone else has access to it. You should consolidate all the data, scripts, etc. into the folder structure that you are packaging.
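A common way to reduce this class of problem is to build paths relative to the script's own location instead of hard-coding an absolute per-user path, so the packager has nothing machine-specific to rewrite. This is a minimal sketch of that general idea, not documented Esri behavior; the folder name simply echoes the one in the question:

```python
import os

# Resolve the attachments folder relative to this script file rather than
# an absolute path under a specific user's profile.
script_dir = os.path.dirname(os.path.abspath(__file__))
attachments_dir = os.path.join(script_dir, "current_checkout_attachments")
print(attachments_dir)
```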
Hi, how can I create a function like that but defining the function in python?
I already did something like that:
from pyspark.sql.types import IntegerType

def relative_month(input_date):
    if input_date is not None:
        return ((input_date.month + 2) % 6) + 1
    else:
        return None

_ = spark.udf.register("relative_month", relative_month, IntegerType())
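The function body itself can be sanity-checked outside Spark; this standalone sketch repeats the same month arithmetic in plain Python (the example date is mine, not from the post):

```python
import datetime

# Plain-Python version of the UDF body (no Spark session needed), useful
# for checking the month arithmetic before registering it as a UDF.
def relative_month(input_date):
    if input_date is not None:
        return ((input_date.month + 2) % 6) + 1
    return None

print(relative_month(datetime.date(2020, 1, 15)))  # ((1 + 2) % 6) + 1 = 4
```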
But this UDF only works for the notebook that runs this piece of code.
I want to do the same thing using SQL syntax to register the function, because I will have some users accessing Databricks through SQL clients and they will need the functions too.
The Databricks docs say that I can define a resource:
: (JAR|FILE|ARCHIVE) file_uri
Do I need to create a .py file and put it somewhere?
- Thanks
I am new to C++ programming, but I am trying to modify the following code so that I can calculate compound interest using only integers. The only clues I have are to treat all monetary amounts as integral numbers of pennies, then "break" the result into its dollar portion and cents portion by using the division and modulus operations, and insert a period. If anyone could help me along in the right direction or help me find resources that explain this, I would greatly appreciate it. Thanks.
#include <iostream>
using std::cout;
using std::endl;
using std::ios;
#include <iomanip>
using std::setw;
using std::setiosflags;
using std::setprecision;
#include <cmath>
int main()
{
    double amount,              // amount on deposit
           principal = 1000.0,  // starting principal
           rate = .05;          // interest rate

    cout << "Year" << setw( 21 )
         << "Amount on deposit" << endl;

    // set the floating-point number format
    cout << setiosflags( ios::fixed | ios::showpoint )
         << setprecision( 2 );

    for ( int year = 1; year <= 10; year++ ) {
        amount = principal * pow( 1.0 + rate, year );
        cout << setw( 4 ) << year << setw( 21 ) << amount << endl;
    }

    return 0;
}
Silverlight for Windows Phone
"Silverlight for Windows Phone is a specific version of Silverlight for developing applications that run on Windows Phone. The WP version contains additional support for touch input, and does not contain support for the HTML DOM bridge."
- Microsoft Silverlight is a .NET-based framework for building applications that can run on the desktop, as a browser plugin, or on a Windows Phone. Silverlight for Windows Phone is a specific version of Silverlight for developing applications that run on Windows Phone. The Windows Phone version contains additional support for touch input, and does not contain support for the HTML DOM bridge.
- Silverlight started as a browser plugin (like Adobe Flash), but then gained support for desktop applications (aka out-of-browser applications). Silverlight's most recent usage is in the development of Windows Phone Applications. As such the trend now is to use the term "XAML based applications" instead of Silverlight.
- Additional Silverlight tools are in the "Silverlight Toolkit" (aka "Windows Phone Toolkit") available from Microsoft's open source website (Codeplex). The Toolkit is on a shorter release cycle than the SDKs. Datepicker and other advanced controls are only in the Toolkit.
- The main usage of Silverlight is to produce User Interfaces (in XAML) as a separate component of the application; keeping the UI separate from the application logic. XAML is a Microsoft variant of XML. XAML elements always represent .NET elements. So anything you can do in XAML, you can also code in C# or VB.
- In Silverlight UIs, the XAML and the code-behind file (eg. C#) are tightly-coupled. Each is defined as a partial class and one can not exist without the other.
- There are a few XAML files that do not contain UI code, such as App.xaml, which is used to host the application-level style resources and the application's lifecycle events.
- The Silverlight manifest file (AppManifest.xaml) describes your application and its resources (images, etc.) to the Silverlight runtime engine. The manifest file is automatically generated and rarely needs to be edited by hand.
Silverlight Controls
"Many Silverlight for Windows Phone controls have been modified from the standard PC version to support the Windows Phone touch interface. These controls have been made larger, and their margins larger, to accommodate the user's touch. They also support the touch events."
- The Windows Phone controls are in the class library Microsoft.Phone.Controls or the Windows Phone Toolkit.
- Every UI element in Silverlight has the ability to be visible or hidden by use of the Visibility property which has the values:
- Visible - (default) is visible.
- Collapsed - is hidden, will not take up any space on page.
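As a small sketch (the element name and text are placeholders), an element hidden at startup looks like this in XAML:

```xaml
<!-- Hidden at startup; takes up no space on the page while Collapsed -->
<TextBlock x:Name="StatusText" Text="Saving..." Visibility="Collapsed" />
```

In the C# code-behind, `StatusText.Visibility = Visibility.Visible;` shows it again.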
- There are three ways to specify the size of controls:
- Absolute - specify the size in a fixed number of pixels. Absolute sizing takes precedence over the other two sizing options and is always the first one to be calculated. An absolute size will impose a constrained view of the control (will not automatically resize).
- * - star sizing, distribute the available space equally based on amount of available space (eg. *, 2*, 3*) The numbers are used to specify a ratio for dividing the available space.
- AUTO - makes the size just big enough to hold contents. If there is no contents, the size is zero.
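The three sizing options can be combined in a single Grid; this is a hypothetical layout illustrating the list above, not from any particular sample:

```xaml
<Grid>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="100" />  <!-- Absolute: exactly 100 pixels -->
        <ColumnDefinition Width="Auto" /> <!-- Auto: just big enough for its content -->
        <ColumnDefinition Width="2*" />   <!-- Star: 2/3 of the remaining space -->
        <ColumnDefinition Width="*" />    <!-- Star: 1/3 of the remaining space -->
    </Grid.ColumnDefinitions>
</Grid>
```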
User Controls
- User Controls for Windows Phone
- TextBlock - display (only) of text. A lightweight control for small amounts of text.
- TextBox - text input (single line or multi-line).
- ListBox - contains an open list of selectable items (Properties: SelectedIndex, SelectedItem. Events: SelectionChanged).
- Image - displays an image.
- Maps - To add Bing Maps to a Windows Phone 7 or Windows Phone 8 app, you can use the Bing Maps AJAX Control, Version 7.0, or the Windows Phone 8 Maps API. The Windows Phone 8 Maps API does not use Bing Maps.
- WebBrowser - displays web (HTML) content in a page.
- ProgressBar - indicates the progress of an operation.
- ScrollViewer - creates a scrollable area that can contain other visible elements (text, images, etc.).
- Buttons
- All buttons have three states that can be used to animate the button's look. For example, when you hover over a button its color changes to blue. The three states are:
- normal
- hover
- pressed
- The ClickMode property sets when the Click event should occur:
- Hover - The Click event occurs every time you hover over the button.
- Press - The Click event occurs as soon as you click on the button.
- Release - (default mode) The Click event occurs as soon as the mouse button is released.
- Button types:
- Button - represents a Windows button control, which reacts to the ButtonBase.Click event.
- CheckBox - represents a control that a user can select and clear.
- RadioButton - represents a Button that can be selected, but not cleared, by a user
- HyperlinkButton - a button control that displays a hyperlink
- ToggleButton - base class for controls that can switch states, such as CheckBox and RadioButton
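A sketch combining the button types above (the event-handler name and NavigateUri are placeholders):

```xaml
<StackPanel>
    <!-- Click fires when the finger is released (the default ClickMode) -->
    <Button Content="Save" ClickMode="Release" Click="SaveButton_Click" />
    <CheckBox Content="Remember me" IsChecked="True" />
    <RadioButton GroupName="size" Content="Small" />
    <RadioButton GroupName="size" Content="Large" IsChecked="True" />
    <HyperlinkButton Content="Help" NavigateUri="http://example.com/help" />
</StackPanel>
```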
Buttons Example (Click on Them)
- Custom Controls
- A custom control is essentially a combination of existing controls that can be reused throughout the application. To create a customer control:
- Right-click on project, Add, New-Item, "Silverlight User Control Template".
- Change the design height and width to actual specs (d:DesignHeight="300" => Height="300").
- Create a StackPanel with the desired combination of controls inside.
- Code the events for the individual controls.
- In the page you want to use the control, add a reference to the user control namespace
xmlns:local="clr-namespace:MyUserControl"
- Go to the control insert point and start typing the alias (<local:) and intellisense should have picked up the namespace with the control.
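Putting the last two steps together, a hosting page might look like this; the page class and the user-control name (PhotoViewer) are illustrative, only the clr-namespace alias comes from the steps above:

```xaml
<phone:PhoneApplicationPage
    x:Class="MyApp.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
    xmlns:local="clr-namespace:MyUserControl">

    <Grid>
        <!-- Intellisense picks up the control once the namespace alias is declared -->
        <local:PhotoViewer x:Name="Viewer" Height="300" Width="300" />
    </Grid>
</phone:PhoneApplicationPage>
```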
ScrollViewer with Image and Text
Listbox Photo Viewer Custom Control
Listbox Photo Viewer, Own Work
- Also available in the Windows Phone Toolkit are:
- ListPicker.
- DatePicker
- TimePicker
- ContextMenu
- AutoCompleteBox
- ToggleSwitch
- WrapPanel.
Silverlight, XNA, DirectX, and HTML5
"Microsoft is migrating from XNA to DirectX in Windows Phone OS 8.0. XNA apps can only be built and compiled up through Windows Phone OS 7.1."
- Silverlight for Windows Phone is used for creating data-driven or event-driven applications.
- XNA and DirectX frameworks are used for graphical game development. XNA uses a game loop and obtains user input by polling (as opposed to being event-driven). The XNA game loop uses only two methods: Update and Draw.
- You can reliably use some HTML5 features (audio, video, geolocation, and canvas graphics) in the Windows Phone WebBrowser control. Media that uses HTML5 audio and video play back more efficiently and smoothly than media played through a browser plug-in (Silverlight, Flash), and helps to conserve battery power.
Reference Articles
- Silverlight Show - Community portal dedicated entirely to Microsoft Silverlight and Windows Phone 7 technologies.
- Silverlight Videos and Tutorials - Silverlight Developer Center.
- Silverlight - Microsoft Developer Network.
- Microsoft Silverlight - Microsoft Silverlight
- Windows Phone Toolkit - CodePlex.
- Creating Custom Controls - Microsoft Developer Network.
- Quickstart: Adding controls and handling events for Windows Phone - Windows Phone Dev Center. | http://www.kcshadow.net/mobile/?q=silverlight | CC-MAIN-2018-22 | refinedweb | 1,137 | 56.55 |
Why Microsoft Is Being Nicer To Open Source
itwbennett writes "Is open source's growth in emerging markets what is driving Microsoft to say 'we love open source' with an attempt at a straight face? 'The emerging markets (like the BRIC nations) are a huge potential market for Microsoft,' says Brian Proffitt."
MS OSS Strategy is UpSide Down. (Score:5, Interesting)
Mod Parent Up (Score:2, Insightful)
Re:MS OSS Strategy is UpSide Down. (Score:5, Insightful)
The slide in your editorial demonstrates Microsoft's vision of OSS during initial announcement a couple years ago. They were all for OSS as long as it fit their definition of it. They were working quite hard to get enterprise businesses to embrace their vision of OSS. If they had business following their vision then the vision of true open source would be blurred and out of sight.
What was identified by the OSS community regarding their definition of OSS those couple years ago was exactly what you have identified here. They showed that Microsoft's definition of OSS was only OSS if it was done for Windows. Of course, that's not what true OSS is nor how it was defined some 17 years ago.
Their definition of OSS was released not too long after several Microsoft employees spoke out about how Microsoft was going to kill Linux. One of them went so far as to predict that that year was the start of the death of Linux.
Their definition is nothing less than embrace, EXTEND, extinguish. By getting business to embrace their view they can reduce the reach of OSS into business because they believe Microsoft's version is the only true OSS. That in effect will cease adoption of OSS by business and hence the death of Linux.
I must admit that Linux adoption seems to have slowed and the amount of press has considerably declined. Certainly some areas have continued to expand.
Re: (Score:2)
They showed that Microsoft's definition of OSS was only OSS if it was done for Windows
Understandable... they don't care what software you write, as long as you buy their stuff to do it with. It'd be an interesting software ecosystem (even on Windows only) if they were the only software company allowed to sell software!
I wonder how they'd react if something they sold lots of started to be replaced with an OSS equivalent? A SharePoint -> Drupal converter, for example :)
Re: (Score:2)
Nobody will fall for MS OSS strategy...
Because many of these government entities are simply waiting for Microsoft to offer them deep discounts. Sad but true.
Re: (Score:3, Interesting)
When your systems are *already* running OSS then Microsoft can't discount themselves into them, because they would have to give away all their software and licences for free just to match what you're already paying. This is why MS's western-world strategy can not work in BRIC economies.
In the west MS's software is already in business and government systems, and the costs and training requirements (or FUD-driven perceived costs, at least) to migrate _away_ from MS _to_ OSS is what MS has traditionally relied on.
Re: (Score:2)
Embrace <- You are here
Extend
Extinguish
Yes, something is up (Score:3, Interesting)
I get MSDN magazine and the latest issue has a seriously good article on sqlight. They said it works really well on cell phones, etc., where it was almost impossible to install a database server and/or could not always have access to a server to connect back to a database.
transporter_ii
Re: (Score:3, Insightful)
You mean SQLite [sqlite.org] ?
Re: (Score:3, Interesting)
Yes. I wish Slashdot had an edit feature. Crap just doesn't show up until you hit submit...
Re: (Score:2)
Yeah, but SQLite isn't even open source -- it's straight up public domain software. Hardly a threat to Microsoft or its business model.
Re: (Score:2)
Yeah, but SQLite isn't even open source -- it's straight up public domain software.
It sounds like you're confusing "open source" with "copyleft".
Embrace, extend, eliminate (Score:2, Interesting)
This is Microsoft's old M.O.
Nothing to see here folks
...
When the cheese moves you follow it (Score:4, Interesting)
Microsoft is always going to be concerned with maximizing their profits (their legal fiduciary responsibility to their shareholders). If they see ways to do that by working with or using open source, then they will.
Microsoft is in a position similar to IBM, where they can provide solutions and support them. If part of that solution is open source, MS still gets all the support dollars. A lot of companies use some open source stuff now, but the last thing you want to tell your PHB is that your support comes from some usenet forum.
Re:When the cheese moves you follow it (Score:5, Interesting)
If you recall, the original "Anti-GPL" stance that Microsoft had, went something along the lines of "Contaminating the software ecosystem."
This was at a time when Microsoft was a quasi-dominant force in the server market, when their IIS server platform actually had a reasonable install base in production environments, and Windows was totally unchallenged by Linux and pals..
As such, their "Cherished" "Software ecosystem" has had no choice but to accept the new competition, which if you re-read their old FUD campaigns, is exactly what they were saying was wrong with GPL software; It is a disruptive license that destroys the status quo, and threatens for-profit development (as it was practiced at the time.)
In the face of their major competitors (like Apple) who have at least partially embraced FOSS software (OS X is based on BSD, IIRC... could be mistaken; that's why Darwin is FOSS) and are leveraging it like a catalyst to gain more and more market penetration and market share, Microsoft can no longer afford to try and play the status quo card. That's why the whole "software ecosystem" rhetoric has dried up. Now they are playing damage control, and trying to butter up to the same projects and people that they snubbed just a decade ago, hoping that small-time developers have as short a memory as do MBAs. (Or, even more disturbing, that they can bamboozle new, young and fresh talent in the FOSS community into drinking the Kool-Aid.)
I would trust Microsoft to "actually like" FOSS about as much as I would trust Darl McBride to make a Linux kernel patch.
Like you pointed out in your post above, Just about the only thing you can predict that Microsoft will do is do whatever is necessary to increase its bottom line; including redact its own policy statements. Likewise, you should expect that Microsoft will do the same thing concerning FOSS policies and licenses, should it cease being profitable for MS to continue such licensing tactics.
This is a very important situation to quietly think to yourself "Caveat Emptor" about, because when you buy into their new policies, you need to be fully aware that Microsoft, can, and likely will, pull the rug out later. Their ONLY loyalty is to their stockholders, and to the all mighty dollar. They don't even have loyalty to their own rules; it would be absurd to expect that they have somehow had a change of heart in a deep way, or to behave ethically if money is involved.
Personally, I find that as a company, they are overburdened in a faulted development and managerial model that wont fare well in the current market environment. Microsoft is slowly but surely being left behind by smaller, or more agile players, much like IBM was neutered by the end of the 90s. As such, I personally would approach this whole issue with a more forward thinking eye.
As much as I DESPISE apple and Mr Jobs, I feel that he is a much more savvy CEO than Ballmer ever was, or ever could be, and this is probably the main reason why there are rumors of his imminent replacement. As such, I would predict Apple's market share to continue to grow in handheld electronic devices, and through that, leverage more into the personal computer market, though Apple seems to be taking the stance that the macintosh market is now a secondary priority.
About the only thing Microsoft has going for it right now is market momentum, and the upgrade inertia of other corporations. (The exact same reason why IE6 refuses to die.)
So, personally I would focus more on other platforms than the microsoft offerings. Microsoft has the smell of death about it.
Re:When the cheese moves you follow it (Score:5, Insightful).
They were just ahead of their time. Today the Rush Limbaughs and Glenn Becks of the world call anything they don't like communist/socialist and people just accept it without question.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Personally, I find that as a company, they are overburdened in a faulted development and managerial model that wont fare well in the current market environment.
Absolutely! This seems to be their biggest problem these days; I certainly agree with most of what you said regarding their support of policies that drive profits. I don't particularly like MS, or Apple even, but they do provide products that work for most people.
Microsoft has the smell of death about it.
Really? [linuxjournal.com] Windows 7 seems to be doing very well.
This is a very important situation to quietly think to yourself "Caveat Emptor" about
Why do people use that term? No one speaks Latin, and in fact it's one letter shorter to write "buyer beware".
Re: (Score:3, Interesting)
Re: (Score:2, Insightful)
As Microsoft has said in the past, open source does have a tendency to spread
Re: (Score:3, Insightful)
If Microsoft suggests using an open-source program instead of a commercial one, any smart client will notice and begin wondering what else they can get without having to pay licensing costs.
By your logic, the latter would happen for any free product that Microsoft offers, not necessarily FOSS (since the client is presumably mainly concerned about saving $$$). Which does not stop MS from releasing stuff for free or very cheap (e.g.: SQL and VS Express, DreamSpark, BizSpark).
Why? Because sometimes, when you drop the price, or even give something away for free, it boosts sales for the rest of your stuff. For example: free Windows development tools -> more Windows applications -> higher Wind
Re: (Score:2)
I don't think that maximizing their profit is what animates MS. Sure, they'd like that, but they are more interested in control. This guarantees their survival and jobs over the long haul. There's also a pirate quality about the company. They seem to believe that for them to succeed implies someone else must fail. And in a way, they are correct. They wait until a market has developed and then jump in. That sort of strategy will enforce their belief that for them to succeed, they must cause someone else to fail.
Re:When the cheese moves you follow it (Score:5, Interesting)
Aside from Microsoft making somewhat nice with the F/OSS community, which is in their own self-interest given that large firms are not monolithically MS, I've noticed that getting technical support for a hybrid set of systems does not automatically get a response that places the blame on the non-MS pieces of your IT setup. If I had to guess, MS may be eyeing the market niche that IBM pretty much dominates (IMNSHO) while still making hardware and creating software: services that mix and match across whatever a customer has in place and make it work. I've seen the first steps in this direction with their various systems management tools, especially for virtualization. The Office cash cow won't last forever and I think they are getting that. Finally.
Does this portend a kinder, gentler Microsoft? Not on your life. They are just continuing with embrace and extend while looking like a 'nice' Microsoft. Yeah, right.
GUIs GUIs GUIs (Score:5, Insightful)
That's where I see MS cutting a nice niche for itself without having to dominate OSes. Their GUIs are usually more intuitive than OSS's, I have to say. No, they are not perfect, but so far MS does GUIs better than OSS.
I suspect MS spends more time road-testing their GUIs with actual users than OSS projects do. It's not that they are smarter; they just log the GUI tester hours that most OSS projects don't or can't. "Basement" coders simply cannot afford such testing sessions, and must rely on email etc. Think about it.
I'm just the messenger, don't shoot/mod me, please.
Re: (Score:2)
Their GUI's are usually more intuitive than OSS
Let me respectfully disagree. Their GUIs are more polished.
IMHO, their interface has become less intuitive since Win9x/Win2000.
Win9x/Win2000, GNOME, OS X have a coherent UI. That is less the case with WinXP, and Vista, Win7 are a mess from my point of view. And I probably lack experience with KDE to comment fairly, but I think that while it has coherent UI, it is too cluttered.
Re: (Score:2)
We recently had to upgrade from Office 2003 to 2007 at work. Everyone agrees that the new interface is far less intuitive and provides no benefit. The only benefit of the new interface is that it's so foreign to everyone that someone who learned on Office 2007 would have a harder time switching to Open Office. I really don't think that GUI design change was motivated by any customer input.
Re: (Score:2)
As for their attitude
"Could?" (Score:2)
What's this "'could react in a protectionist manner and start giving Microsoft the stink-eye'" shit? Isn't that the normal reaction?
Re: (Score:2)
Since you seem knowledgeable on the subject...
What the hell is a "stink-eye"?
Have never heard this turn of phrase before.
Re: (Score:2)
= dirty look
Re: (Score:2)
Simply posting a link to goatse in this case would have been on topic then?
Noticed something (Score:2)
From the article...
since business-types and engineering-types don't often communicate to each other very well.
Oh boy...did he ever hit the bullseye with this one.
The Eternal Spin Zone: Microsoft (Score:4, Funny)
Few years ago, right here on /., someone compared Microsoft and Open Source to being a dinosaur spinning in circles within a tar pit and several animals barking and chattering around it, watching and waiting as the pathetic creature was sucked in completely by the tar.
Could it be the dinosaur's head is slightly above the tar's surface and a fat, greasy, yet tiny rodent-like clawed hand is reaching out with a large slice of bacon and waving it around for every animal surrounding it to see, with a pathetic grin and swan song expressing a last mournful love interest in the solidarity of its foes?
Do not fall for the melody of the monster, nor the pit which welcomes him and his own kind.
Geeks Know Better (Score:3, Insightful)
Emperor: Now witness the firepower of this fully armed and operational battle station! *click* Fire at will, commander!
Crewfish: Sir, we have star destroyers!
Admiral Ackbar: It's a trap!
Zoe: So. Trap?
....
Mal: Trap.
Wash: Wait...how do you...
Mal: You were listenin' I take it?
Everyone:
Mal: Did'ja hear us fight?
Zoe: No?
Mal: Trap.
not true (Score:5, Interesting)
Re:not true (Score:5, Interesting)
The guys that make the WUBI product are from India.
I know India is heavily into math. It really would make sense to have more in India using Linux because more people would have examples to learn by, especially complex code such as the OS kernel.
If India is a lot like their nearby neighbors in Asia most people would be pirating Windows.
Re: (Score:3, Interesting)
I know India is heavily into math.
My daughter's Sunday school in the temple has about 180 kids, almost all of them in the top 5-10% of their schools. That would be considered a stunning statistic. By the law of averages no more than 20 of them should be in the top 10% of their school. But if you randomly pick 180 kids of all ethnicities in America from families with two college-educated parents, with a median family income of 55K, you would find they too are almost always in the top 5-10% of their school. This is known as sample bias.
Most Ind
Re:not true (Score:4, Insightful)
I'd heard the educational system in India emphasized math. To what degree I guess I don't know. I was under the understanding that it was the primary emphasis of the educational system in India.
They stress arithmetic in the lower elementary school a lot. Rote memorization of multiplication tables and very fast arithmetic work is emphasized. I can still rattle off my multiplication table up to 16 times 16. I also memorized fractional multiplication tables: one "arai" times three "kaal" is three "araikaal" and such things. The Indian languages have named fractions for 0.5 (arai), 0.25 (kaal), 0.125 (araikaal) and 0.0625 (maakaani). English has names only for 0.5 and 0.25. These were tough. But my arithmetic peaked in my entrance exam years. I knew by heart the logarithms of 2, 3, pi, and the square roots of 2, 3 and 5!
But when it comes to higher mathematics like Algebra and Trigonometry the Indian system is not much better than the American system. The American system places less emphasis on arithmetic and rote memorization and stresses understanding basic math concepts. By the time Calculus comes around, you will see the superiority of the American education system.
But the vast majority of the students in both the USA and India do not get to do much higher mathematics. So the enormous investment America has made in emphasizing math concepts is wasted and frittered away. Indians appear to be so much stronger in math. But remember, arithmetic is just one subset of mathematics. In fact it is a small subset of higher mathematics.
Re: (Score:2)
Dude! It's time to stop practicing and give it a go for real!
Re:not true (Score:4, Interesting)
I agree. Here in Sri Lanka most people have never heard of Linux, are terrified of trying anything new, and only ever use Linux because it is free of cost.
It is gaining some traction, but it still has a tiny desktop share (it is fairly widely used ons servers though).
It has also had a significant impact on MS's revenues. Corporates have successfully used the "we will switch to Linux" threat when MS has tried to make them actually pay for software (AFAIK the only software ANYONE here actually pays for is either very specialist stuff, Lotus Notes, or some Adobe stuff - the first because they need the support, the others because it is more expensive to switch platforms than pay up).
Re: (Score:2)
Nice Doggy (Score:2)
Microsoft's attitude to OSS is the art of saying 'Nice doggie' until they can find a rock. [quotationspage.com]
Re: (Score:2)
I like to think of them as being a snake at the dinner table with that astonished look in their eye after being accused of swallowing the whole turkey while a suspicious lump is sliding down their body.
Standing joke (Score:2, Insightful)
Along with Beowulf clusters and Russia doing stuff in reverse, we now have the equally tiresome joke that Microsoft is being nicer to open source. Why do these articles keep getting posted?
MS may 'love' open source ... (Score:2)
... but they still won't give it a reach-around.
Simple, really. (Score:3, Insightful)
Developers, developers, developers, developers.
Open Source projects for Windows mean more functionality, interoperability, and convenience for Windows users, and Microsoft doesn't have to do a damn thing to get it. Open Source and Linux are two different things, and Microsoft now realizes this.
Re: (Score:2)
If MS have their way, the only open source being written will be dependent on proprietary windows functionality, making it very difficult to port it to any other platform.
Microsoft hates interoperability and has spent years trying to make it as difficult as possible to use anything else in any environment where you will encounter Windows users... They will only ever tolerate any form of interoperability when it goes one way, so Outlook will support standards like IMAP/POP3/SMTP but it never works very well
Keyword there (Score:2)
Is bad press to be the big guy bullying the small one. But that don't mean that the big guy loves him, or that "pay" a slightly smaller guy (i.e. Oracle?) to do the dirty job.
So basically they're making themselves irrelevant? (Score:2)
If you support or recommend open-source software, people will use it. If they use it, they aren't paying you.
Thus, your business becomes built on a foundation of others' OSS software, and at that point, you're selling something people can get elsewhere for free.
Same thing has been tried, and unless you're IBM and you're aggressively selling to big business/enterprise, you don't make a whole lot of money, and you're likely to fold in a few years.
Oracle makes microsoft look nice by comparison (Score:3, Interesting)
Oracle is already killing off opensolaris, suing google over android, and who knows what will happen to mysql
or openoffice down the road.
Microsoft paranoia has blinded us to the enemy in our midst. Bill Gates never did as as much damage to open source
as Larry Ellison is doing.
They play nice only while it benefits them (Score:2)
*rant warning*
any attack on open source would be seen as a foreign company attacking local software projects
I bet they considered this in the beginning, but just didn't give a damn because they only thought of themselves, and not of the betterment of the software community.
Re:Wrong (Score:4, Informative)
The article didn't say or even imply that Microsoft hasn't slammed open source, the whole point was that they're not doing it any more.
Re:Wrong (Score:5, Insightful)
The article didn't say or even imply that Microsoft hasn't slammed open source, the whole point was that they're not doing it any more.
Yeah, that's usually called "pandering".
Like the summary explains, they're doing this out of a concern that anything else might alienate potential customers in various markets. That is not a change of heart. It's the same old self-serving Microsoft we've always known. They'd say that Jeffrey Dahmer was a really great guy if they thought it would boost sales. Microsoft hasn't changed. What will and won't alienate potential customers is the only thing that has changed here.
I'll put it very bluntly: anyone who believes otherwise is a naive fool who doesn't understand the first thing about this company or its history.
Re: (Score:2)
So let's see. Microsoft will do anything that it thinks will boost sales.
Those bastards! Next thing you know they will have the audacity to start fixing bugs that people complain about, or implement features that are requested, or even make products that they think people will buy! Oh Noes! The horror. The horror!
Re:Wrong (Score:5, Interesting)
You accurately summarized my paragraph...
The point, my eager-to-resort-to-mockery friend, is that appearing to appreciate Open Source is what Microsoft believes is in its interests today. It was not in Microsoft's interests yesterday (not literally 24 hours ago but figuratively speaking) and may not be in their interests tomorrow. Microsoft is doing this because they hope it will appeal to people who care about Open Source. The people who believe it are likely to find that Microsoft will continue this act for just long enough to lock them into using its software. At that point Microsoft will feel that the ruse has served its purpose and will revert to openly regarding Open Source as an enemy.
Now that you know what my point was, or now that it's more difficult for you to deny knowing what my point was (whichever may be the case), you can see plainly that it has absolutely nothing to do with fixing bugs, adding features, or introducing new products. If you weren't deliberately trolling, you provided a good example of what emotional knee-jerk reactions lead to.
Re: (Score:2)
Many people see no virtue beyond expedience. Argue with them for a lifetime and they'll never understand your point.
Re: (Score:2)
Or perhaps, being technical types, you and clodney are overestimating the importance of technical quality. End-user sales are increased through marketing, not quality products.
Re: (Score:2)
Or perhaps, being technical types, you and clodney are overestimating the importance of technical quality. End-user sales are increased through marketing, not quality products.
Hey, again, as a matter of nuance: I never said that this is a valid point (i.e. never said that quality is the only factor that drives the sales). I only said that "MS will not quite do everything to boost the sales" is a point.
As for my opinion on the validity of this point: of course "playing nice" (or pretending, thereof) costs a heap less than "fix the crap". This is not to say that MS doesn't fix the bugs or doesn't implement requested features (because they eventually do it, otherwise no need for Wi
Re: (Score:2)
Im not sure if English is the first language of most nit-pickers here, but most of the time saying "entity X will do anything to accomplish Y" is not to be taken as absolute truth, but as a general position. Arguing over the finer points of what entails "anything" is indeed to miss any point the speaker is trying to make, and just being argumentative for its own sake.
But continuing on that diversion, for example fixing bugs in the short term is usually either,
1. Part of a contract obligation - which were te
Re: (Score:2)
Arguing over the finer points of what entails "anything" is indeed to miss any point the speaker is trying to make, and just being argumentative for its own sake.
Or just going on a tangent and (pleasant as it would be) waste some more time on
/.? (relax, cool down, unwind, start seeing colors where only black-and-white used to be)
Re: (Score:2)
Re: (Score:2)
Look at the relative 'failure' of sub-notebooks with Linux preinstalled. Most people expected to run Windows apps on them and I'll bet a very large number were returned to the store for this reason (otherwise why would they not be offered anymore?).
Assume for a moment that most people do want to run Windows for whatever reason (familiarity, MS office, etc...).
Then they start getting into open source software on Windows and seeing all that is out there like games, word processors, ad nauseum. At some point
Re: (Score:2)
It's up to us to keep their past FUD tactics as public knowledge. We mustn't give Microsoft the chance to fake a new image to those who are unfamiliar with their past wrong doings!
Re: (Score:2, Funny)
We need a sign!
Safety first: it has been [15] days since Microsoft last attacked.
Re: (Score:2)
Here's your sign?
:p
Re: (Score:2)
Hahaha nice try, but you'll have to step up your game if you want to goatse any Slashdotters.
Re:Wrong (Score:4, Insightful)
Re: (Score:2)
My reading skills aren't the problem here. Perhaps some focus on your own skillset might be in order?
They're on the ropes (Score:3, Insightful)
They see they've missed the transition to mobile, they feel their empire slipping away. Deliberate incompatibility isn't working any more, so this is the change-up. Don't be confused though - as an entity Microsoft still sees open source as "open sores" - a cancer, in Steve Ballmer's words. They just realize that in some markets they have to be more diplomatic now.
In others? Well I'll just quote the first comment from the fine article:" by Anonymous (not verified) on 8/30/10 at 4:43 pm
I get these invitations from Microsoft too. Everybody in tech does
Re: (Score:2)
Dude, you posted that 40 minutes after my comment. I guess time-travel should be part of my skill set?
Moreover, your revised point is the same as the article: MS is changing its tune (even if merely opportunistically), and yet you claim that the article gets its history wrong
...
Re: (Score:2)
I could link to a dozen articles, at least, discussing just this here at Slashdot.
How many of these articles are in Portuguese? The public mass consciousness has no memory, only a fickle perception of the present.
Re: (Score:2)
Re:Wrong (Score:4, Interesting)
That's a fair point - but really - while that might work, my point is that we've got an editorial that doesn't really make the point you are trying to make. Microsoft is saying good things about open source in ALL OF ITS markets. For now. Changing what they've done in the past.
It seemed apparent to me that the point he was trying to make is not what you are responding to there. In fact I was about to make this point my own way until I saw that he had already raised it.
The point is that the general public seems to have an awfully short memory. Otherwise they'd be rightly skeptical of this move. They'd understand that a model of 100% open source software from operating systems to applications is antithetical to Microsoft's business model (for one, that sure would make it hard to implement vendorlock). That alone renders this move suspect. Then there's the long history of viewing Open Source as an enemy, both in the form of action and in the form of things like the Halloween documents.
If Microsoft is saying good things about Open Source in "all of its markets" it's only because of the ease with which the Internet would expose any attempt to say good things in Location A and bad things in Location B. That would just make them look stupid and would be counterproductive to their goal of pandering to the BRIC nations. They're ruthless bastards in my opinion but no one who takes a hard look at their use of long-term strategy would conclude that they are stupid.
GP was not denying that Microsoft is currently acting warm and fuzzy towards Open Source. I have no idea why you reiterate the editorial and must conclude you didn't correctly comprehend the GP. The grandparent is saying that Microsoft's new stance is not genuine and that a cursory understanding of the way this company does business would strongly affirm that position. If documentation of their history in Portuguese can promote such an understanding it could remedy the public's short memory.
The public sees that now Microsoft is being kinder to Open Source. Many seem to forget what the last 10-15 years of the Microsoft monopoly was like. And all it took was a change of PR strategy. They definitely got their dollar's worth from the marketing department this time.
You see this kind of short memory in politics all of the time. Why would it be a surprise when the same tendency is shown regarding business? In either case it doesn't survive contact with the facts so that's where a constructive remedy can be applied.
Not entirely wrong. (Score:2)
Perhaps Microsoft shows one face to the nations in question ("we lurve FOSS"), but their usual face to the rest of the planet ("lunix suX0rz!").
It's not like a corporation that big can't present opposing personalities, each suited to the markets they're trying to take on.
Re: (Score:2)
Re:Not entirely wrong. (Score:4, Insightful)
Corporations are not people. They hate when you antropomorphize them.
In all seriousness, it doesn't have to be an all-or-nothing stance. Microsoft is a business; it exists to earn money. When and where supporting FOSS one way or another is beneficial to the bottom line, directly (more sales) or indirectly (good PR -> more sales), of course it will be supported! This doesn't mean that it'll be supported all the way - and while we're at it, go ask Google for the source code for PageRank...
Re: (Score:2)
No one complains about Microsoft because it is a business or because, as you put it, it exists to earn money. The main reason why Microsoft earned such a profoundly negative reputation is because that corporation has a long history of intentionally deceiving, defrauding and undermining competing projects and businesses.
There is absolutely no reason to dislike anyone just because he intends to run a business. On the other hand, there is a terribly long list of reasons to dislike someone if that person is s
Re:Wrong (Score:5, Interesting).
Re: (Score:2)
Patent battles are going on like crazy today. It probably isn't a good thing to get open source involved in that if at all possible.
Did you miss Apple's recent patent lawsuit against Google over Android (which, need I remind, is very much FOSS)?
And, Microsoft's seemingly over night change of heart can be changed over night again. There's no historical evidence that they should be trusted.
You can still deal with people whom you don't trust - you just assume the worse case scenario, you'll get as much from the deal as is legally entitled to you, and not a bit more. From there, trust may (or may not) eventually enter the picture.
Re: (Score:3, Insightful)
Your comments show a total misunderstanding of open source on your part.
Your point seems to be that we need to *trust* a person or a company before we *let* them join open source. And the trust should be perpetual. That is a darn big barrier. I doubt anyone is actually qualified.
I think Linus Torvalds once said it very well: "People don't need to trust me because of the GPL" (or sth to that effect). The GPL protects the copyrights of the contributors and makes sure it stays in the public domain forever. The
Right (Score:2)
My comments have little to do with trust, except in regards to trusting their commitment to open source, and their willingness to adhere to the definition of open source.
Tainting the waters is pretty self explanatory. Many people didn't want to look at some "leaked" Microsoft code for the possibility that Microsoft could claim Linux was tainted by the release. Think SCO, in how they claimed that Linux was copying huge chunks of code.
SCO's code contributions seemed clearly in favor of Linux and open source
Re: (Score:2).
Except it hasn't been overnight... if you follow some of the Microsoft guys on Twitter you'll see that they are actively trying to change Microsoft's way of thinking.
As a side note, personally I don't think there is an ulterior motive to Microsoft's change of heart with Open Source. Microsoft's found a happy medium between closed source and open source. Notice that software it sells (to end users) remains closed source, while software (or more accurately, libraries) available to developers are being ope
Re: (Score:2)
In terms of time frames, in the real scheme of things, comparatively, over the past 3 decades, this is an over night change. And, even if it works for Microsoft it might not work for open source. Just as easily as they allege change in support of open source that can also change over night.
Re: (Score:3, Informative)
You are living proof that embrace, extend, extinguish works.
Open source was defined many years ago in an effort to ensure that it would not be subjugated and perverted, and that has done it's job for the past 17 years. Microsoft's posted open source license directly conflicts with that definition. Hence, it isn't the real thing.
Re: (Score:2)
Microsoft have made several contributions to the Linux kernel...
ORLY? I'm genuinely curious what they have contributed to the kernel.
Re:Wrong (Score:5, Informative)
Hyper-V kernel extensions
Re: (Score:3, Informative)
Re: (Score:2)
That's not very good evidence of a change of heart.
Re:Wrong (Score:5, Informative)
ah yes, and hyper v was contributed why again? let's not act like it was out of the goodness of their hearts. It was contributed because it violated the GPL license. [networkworld.com]
It should be noted on this actually, that this speaks volumes about the politeness of open source developers, because they absolutely could have pushed for a lot more to resolve the violation.
Re:Wrong (Score:4, Informative)
Re:Wrong (Score:4, Interesting)
Re: (Score:2)
Releasing the project under a permissive license means they can let IronPython and IronRuby gradually fade away without taking responsibility for killing them off.
Re:Wrong (Score:5, Insightful)
Wow! They contributed Linux kernel extensions to let Linux run on their Hyper-V platform! Amazing! Will wonders never cease?
Re: (Score:2)
Their contributions to the linux kernel were only open sourced under pressure, are poorly maintained and only exist to promote their own hypervisor system...
Their other contributions have pretty much all been windows specific, so continuing the trend of trying to lock people in.
Re: (Score:2)
I disagree. I would think that stabbing someone in the back could also be done just by getting the target into a position where the killer can make him feel good with a hug. A pat on the back, some support, a...SHARP STABBING PAIN OF DEFEAT!
Re: (Score:3, Interesting)
So get off MSFT as the exclusive enemy of "Open Source"
Oh shush you. You big drama-queen. Firstly, Steve Ballmer isn't reading our criticisms and sobbing himself to sleep every night, so don't feel like you have to come to his defence. And no-one's saying they are the only enemy of Free/Open Source software. The reason people have been hopping all over them lately is that for the past 10 years they've been painting the GPL and FOSS as worse problems than AIDS and Cancer combined. They have engaged in some despicable, underhanded and, at times illegal, practice
Re: (Score:2)
I see this is the new party line? This particular bit of revisionism has been especially virulent since the Oracle/Google brouhaha started.
Here's the deal how it really went down: Microsoft killed Java on the Windows platform. They did it by licensing Java from Sun, and then putting Windows extensions in the public namespace, violating their license. And since the license was (among other things) for
Re: (Score:3, Insightful)
Making a profit by providing a valuable service or product is one thing...
Actively harming your customers and those around them by getting them locked in to your proprietary and often inferior platform is quite another.
Also, proprietary software having to compete with open source is simply part of the market, if someone else can produce a cheaper and superior product than you, then your business model is failing and you will have to resort to underhanded tactics to prop it up.
At the end of the day, thats wh
All available evidence says otherwise (Score:2)
Microsoft is the greedy evil company we think they are, and then some.
Patent bullying, funding the scox scam, astro-turfing, fake TCO studies, fake benchmarking studies, outright lying to the US congress about difficulty of removing msie from windows, outright lying to the EU about difficulty of removing media player from windows, the OOXML scam, having Washington taxpayers pay for $11 million bridge on MS campus. Firing thousands of US workers, and hiring h1bs to replace the US workers, and all the while c | http://news.slashdot.org/story/10/09/01/0019238/Why-Microsoft-Is-Being-Nicer-To-Open-Source | CC-MAIN-2015-48 | refinedweb | 6,808 | 61.77 |
Flushing an index to disk is just an IndexWriter.commit(), there's nothing
really special about that...
About running your code continuously, you have several options:
1> schedule a recurring job to do this. On *nix systems, this is a cron job,
on Windows systems there's a job scheduler.
2> Just start it up in an infinite loop. That is, your main is just a
while(1){}.
You'll probably want to throttle it a bit, that is: run, sleep for some
interval, and start again.
3> You can get really fancy and try to put some filesystem hooks in that
notify you when anything changes in a directory, but I really wouldn't go
there.
Note that you'll have to keep some kind of timestamp (probably in a separate
file or configuration somewhere) that you can compare against to figure out
whether you've already indexed the current version of the file.
The other thing you'll have to worry about is deletions. That is, how do you
*remove* a file from your index if it has been deleted on disk? You may have
to ask your index for all the file paths.
You want to think about storing the file path NOT analyzed (perhaps with
KeywordTokenizer). That way you'll be able to know which files to remove
if they are no longer in your directory, as well as which files to update
when they've changed.
HTH
Erick
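A minimal sketch of option 2, including the timestamp bookkeeping mentioned above, might look like this (the class and method names are made up for illustration, and the actual indexing call is left as a stub where a real application would invoke its IndexWriter):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class IndexPoller {

    // A file needs reindexing if it changed after the last recorded pass.
    static boolean needsReindex(long lastIndexedMillis, long fileModifiedMillis) {
        return fileModifiedMillis > lastIndexedMillis;
    }

    // Recursively collect files under dataDir (matching suffix) that changed
    // since lastIndexedMillis.
    static List<File> changedFiles(File dataDir, String suffix, long lastIndexedMillis) {
        List<File> changed = new ArrayList<File>();
        File[] files = dataDir.listFiles();
        if (files == null) {
            return changed;
        }
        for (File f : files) {
            if (f.isDirectory()) {
                changed.addAll(changedFiles(f, suffix, lastIndexedMillis));
            } else if (f.getName().endsWith(suffix)
                    && needsReindex(lastIndexedMillis, f.lastModified())) {
                changed.add(f);
            }
        }
        return changed;
    }

    // One polling pass; a real application would persist the returned timestamp
    // (e.g. in a properties file) and wrap this in while (true) { ... sleep ... }.
    static long runOnce(File dataDir, long lastIndexedMillis) {
        for (File f : changedFiles(dataDir, ".txt", lastIndexedMillis)) {
            System.out.println("Would reindex: " + f.getPath());
            // here the real code would call indexWriter.addDocument(...) and
            // indexWriter.commit(), and delete index entries for vanished files
        }
        return System.currentTimeMillis();
    }
}
```

Running this from a cron job, or wrapping runOnce in an infinite loop with a Thread.sleep() between passes, covers options 1 and 2. Deletions still need the reverse check: ask the index for its stored paths and remove the ones that no longer exist on disk.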
On Tue, Sep 28, 2010 at 2:18 AM, Yakob <jacobian@opensuse-id.org> wrote:
> On 9/27/10, Uwe Schindler <uwe@thetaphi.de> wrote:
> >
> >
> > Yes. You must close before, else the addIndexes call will do nothing, as
> the
> > index looks empty for the addIndexes() call (because no committed
> segments
> > are available in the ramDir).
> >
> > I don't understand what you mean with flushing? If you are working on
> Lucene
> > 2.9 or 3.0, the ramWriter is flushed to the RAMDir on close. The
> addIndexes
> > call will add the index to the on-disk writer. To flush that fsWriter
> (flush
> > is the wrong thing, you probably mean commit), simply call
> fsWriter.commit()
> > so the newly added segments are written to disk and IndexReaders opened
> in
> > parallel "see" the new segments.
> >
> > Btw: If you are working on Lucene 3.0, the addIndexes call does not need
> the
> > new Directory[] {}, as the method is Java 5 varargs now.
> >
> > Uwe
> >
> >
>
> I mean I need to flush the index periodically. That means the
> index will be regularly updated as documents are added. What do
> you reckon is the solution for this? I need some sample source code to
> be able to flush an index.
>
> ok just like this source code below.
>
> public class SimpleFileIndexer {
>
> public static void main(String[] args) throws Exception {
>
> File indexDir = new
> File("C:/Users/Raden/Documents/lucene/LuceneHibernate/adi");
> File dataDir = new
> File("C:/Users/Raden/Documents/lucene/LuceneHibernate/adi");
> String suffix = "txt";
>
> SimpleFileIndexer indexer = new SimpleFileIndexer();
>
> int numIndex = indexer.index(indexDir, dataDir, suffix);
>
> System.out.println("Total files indexed " + numIndex);
>
> }
>
> 	private void indexDirectory(IndexWriter indexWriter, File dataDir,
> 			String suffix) throws IOException {
> File[] files = dataDir.listFiles();
> for (int i = 0; i < files.length; i++) {
> File f = files[i];
> if (f.isDirectory()) {
> indexDirectory(indexWriter, f, suffix);
> }
> else {
> indexFileWithIndexWriter(indexWriter, f,
> suffix);
> }
> }
> }
>
> private void indexFileWithIndexWriter(IndexWriter indexWriter, File
> f, String suffix) throws IOException {
> if (f.isHidden() || f.isDirectory() || !f.canRead() ||
> !f.exists()) {
> return;
> }
> if (suffix!=null && !f.getName().endsWith(suffix)) {
> return;
> }
> System.out.println("Indexing file " + f.getCanonicalPath());
>
> Document doc = new Document();
> doc.add(new Field("contents", new FileReader(f)));
> doc.add(new Field("filename", f.getCanonicalPath(),
> Field.Store.YES,
> Field.Index.ANALYZED));
>
> indexWriter.addDocument(doc);
> }
>
> }
>
>
> the above source code can index documents when given the directory of
> text files. Now what I am asking is: how can I make the code run
> continuously? What class should I use, so that every time new
> documents are added to that directory, Lucene will index those
> documents automatically? Can you help me out on this one? I really
> need to know what the best solution is.
>
> thanks
> --
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> For additional commands, e-mail: java-user-help@lucene.apache.org
>
> | http://mail-archives.apache.org/mod_mbox/lucene-java-user/201009.mbox/%3CAANLkTimpjB-SNwx9ZU+Ow=uDf+eCpUuG+2KE-RzbR0op@mail.gmail.com%3E | CC-MAIN-2016-22 | refinedweb | 701 | 59.6 |
Creates a new IPv4 TCP socket.
Syntax
#include <prio.h>

PRFileDesc* PR_NewTCPSocket(void);
Returns
The function returns one of the following values:
- Upon successful completion, a pointer to the
PRFileDescobject created for the newly opened IPv4 TCP socket.
- If the creation of a new TCP socket failed,
NULL.
Description
TCP (Transmission Control Protocol) is a connection-oriented, reliable byte-stream protocol of the TCP/IP protocol suite.
PR_NewTCPSocket creates a new IPv4 TCP socket.
See Also
PR_NewTCPSocket is deprecated because it is hardcoded to create an IPv4 TCP socket. New code should use
PR_OpenTCPSocket instead, which allows the address family (IPv4 or IPv6) of the new TCP socket to be specified. | https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR/Reference/PR_NewTCPSocket | CC-MAIN-2020-45 | refinedweb | 111 | 57.98 |
Importing hierarchical entitlement data in OIA - part 2
By user12582982 on Feb 16, 2012
In my last blog entry I talked about importing hierarchical entitlement data into Oracle Identity Analytics (OIA). Today I want to discuss another example regarding Microsoft Windows shared file and folder permissions and show how easily these data can be transformed and imported into OIA for attestation and/or auditing purposes.
All of the data is represented in two input files. One file containing an AD users export and the other one the file and folder permissions:
File #1 containing AD users adusers.csv is as follows:
(metadata: DN|CN|memberOf|sAMAccountName|displayName|sn|givenName):
File #2 containing files and folders permissions shares.txt is as follows:
(metadata: share;group;permission):
This time I have used the tool 'Talend Open Studio for Data Integration' to join these two input datasets and transform the data into the right XML format for importing it into OIA. In Talend you design a Job which is made out of several components and a flow related to the data going through these various components. The Job I designed for these particular datasets is rather straightforward and easy to understand as can be seen in the screenshot below (by right-clicking on the image you should be able to examine it in the original size).
Within Talend Open Studio I start with two tFileInputDelimited components, each reading one of the two files. The 1st file adusers.csv has a memberOf attribute which is a multivalued attribute. It can contain a list of groups each separated by a ';'. Therefore the next step after reading this file is normalizing the data for the memberOf column using the tNormalize component. Next thing we need to do is joining both datasets. For this I have used the tMap component.
As you can see it is pretty straightforward to connect the input stream / attributes to the output stream / attributes and do the join based on a simple expression (just as a trivial example - group in input file shares.txt needs to be in memberOf in file adusers.csv).
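For readers who prefer code to screenshots, the normalize-and-join step itself is small. The following is only a rough sketch in plain Java, not the Talend-generated code; the class name, method names and sample values are made up, and the column layout follows the two metadata lines above (sAMAccountName and memberOf from the users file; share, group and permission from the shares file):

```java
import java.util.ArrayList;
import java.util.List;

public class EntitlementJoin {

    // One joined output row: user account, share path, permission.
    public static class Row {
        public final String user;
        public final String share;
        public final String permission;

        Row(String user, String share, String permission) {
            this.user = user;
            this.share = share;
            this.permission = permission;
        }
    }

    // users:  { sAMAccountName, memberOf } where memberOf may hold several
    //         groups separated by ';' (the multivalued attribute)
    // shares: { share, group, permission }
    public static List<Row> join(List<String[]> users, List<String[]> shares) {
        List<Row> out = new ArrayList<Row>();
        for (String[] u : users) {
            // normalize: one (user, group) pair per value in memberOf
            for (String group : u[1].split(";")) {
                for (String[] s : shares) {
                    // join condition: the share's group is in the user's memberOf
                    if (s[1].equals(group.trim())) {
                        out.add(new Row(u[0], s[0], s[2]));
                    }
                }
            }
        }
        return out;
    }
}
```

From here, writing the joined rows out as XML and pushing them through the XSLT is the same as in the Talend job itself.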
Now that both sets are joined we can transform the data and write it out as XML. For that I have used the tAdvancedFileOutputXML component, which writes the output to an intermediate XML file (in this case: out.xml). Again, it is pretty straightforward to define the structure and the looping and grouping of elements, as you can see in the picture below. The schema is still rather arbitrary, but I will use XSLT to transform this into the right schema for OIA in the next step.
For the last step in the ETL transformation process I use the tXSLT component and an appropriate XSL transformation file (in this case: AD_01_accounts.xsl). It picks up the file that was written in the previous step, transforms it according to the transformation defined in the XSL file, and finally writes the output to our final AD_01_accounts.xml file.
If this whole process ends succesfully there is one more step that I have added. This is using the tXSDValidator component in Talend to check the result against a predefined schema. In this case I obviously use the accounts.xsd schema file as shipped with OIA (in this case version 11.1.1.5.0). As you can see in the first picture this validation process also ends successfully and ends with outputting '[job AD] File is Valid'.
Now we are ready to import this XML file into OIA - of course we have to configure a namespace for this particular resource first. This whole exercise took me less than 20 minutes to setup and finish!
All the files mentioned above can also be downloaded in this single package: data.zip. If you open the files individually in a browser by clicking on one of the links above, be sure to look at the source or save the file and open in an XML or other editor... otherwise the browser might just show you some blank page or a page with little information.
Have fun, René!
PS. I have formatted the final AD_01_accounts.xml document using XmlPad so it is easier to read than the default output which is not using any formatting at all - this is obviously just a visual thing for this blog and not needed for importing. | https://blogs.oracle.com/renek/date/20120216 | CC-MAIN-2015-22 | refinedweb | 721 | 60.35 |
Subject: Re: [geometry] Douglas Peucker on 3D lines
From: Stephan Menzel (stephan.menzel_at_[hidden])
Date: 2012-07-16 07:37:55
Hi Barend,
On Sun, Jul 15, 2012 at 2:43 PM, Barend Gehrels <barend_at_[hidden]> wrote:
>>?
It may well be compiler specific. The build environment I got this in
is VC9 (Visual Studio 2008). It is quite a large project and there is
also extensive use of plenty of other boost libraries such as asio,
spirit, phoenix and ublas. At this point I cannot completely rule out
some sort of namespace or template lookup issue, even though I usually
take much care to avoid these. From a distance, it sounds like it may
be something like that.
Problem is, this is a customer's project and I don't have access to
either build environment or code right now as it is finished now. All
I can say is that I did pretty much the same as you in your sample and
it only worked when I did like the second part of your example. The
first part threw this assertion. In any case, as soon as I have access
to this project again I will give it another shot and send you the
complete output.
At home here (Linux, gcc-4.6, boost 1.49) I don't get the assertion
and the code compiles fine.
> The other questions were already answered or fixed now (previous mail).
Thanks a bunch!
I will post here once I have the information.
Cheers,
Stephan
Geometry list run by mateusz at loskot.net | https://lists.boost.org/geometry/2012/07/2008.php | CC-MAIN-2022-21 | refinedweb | 259 | 83.05 |
John Shooter (998 Points)
How do I define Square as the inner class besides the : Polygon and : base?
I've tried a couple of different ways but I still can't figure out how to define the subclass. I don't know if I am supposed to create another Square class in a new file; if so, how do I do that?
Steven Parker (177,495 Points)
This challenge has only one file (one tab), so your new class will be created in the same file and within the same namespace.
Your title seems to indicate you have the right idea ("
: Polygon") but at this point you should make your best effort to create the code and then show your code here if you need more help with it. | https://teamtreehouse.com/community/how-do-i-define-square-as-the-inner-class-besides-the-polygon-and-base | CC-MAIN-2019-51 | refinedweb | 129 | 79.94 |
Excel has a very cool feature where you can declare that a range of cells is a table. It is a feature that allows you to use Excel very much like a database. You can add new rows as necessary, sort the table by columns, do some simple filtering, calculate the sum of columns, and more. Each table has a unique table name, and each column has a column name.. The code and sample documents are attached to this post.
(Updated July 21, 2010 – Fixed a bug where the code would return the wrong value for cells in the table if the columns had been moved.)
Note: this code is presented as an example – a proof-of-concept. This code could be further optimized, so that it performs better (although it performs quite well as is). And it may be interesting in the future to modify the code to use a strongly-typed approach – as the code is currently implemented, if you misspell a table or column name, the code throws an exception. However, this code is useful as is for doing ad-hoc queries of Excel tables. I certainly will be using it! 🙂
This code uses the Open XML SDK, either V1, or the CTP of V2. You can download V1 of the SDK here. You can download CTP1 of V2 of the SDK here.
Thanks to Brian Jones who suggested this project.
Following is a screen clipping of an Excel spreadsheet that contains a table:
You can see the four columns of this table: Item, Qty, Price, and Extension. In addition, in the Design tab of the ribbon, in the far left box, you can see that this table has a table name of “Inventory”. Using the code presented in this post, you can query this table as follows:
var query =
from i in spreadsheet.Table(“Inventory”).TableRows()
where (int)i[“Qty”] > 2
select i;
foreach (var r in query)
{
Console.WriteLine(r[“Item”]);
Console.WriteLine(r[“Qty”]);
Console.WriteLine(r[“Price”]);
Console.WriteLine(r[“Extension”]);
Console.WriteLine();
}
When you run this code, it produces:
Book
44
2
88
Phone
4
10
40
As you can see from the above code, to access a particular column from a table row, you can use a default indexed property, passing the name of the column:
Console.WriteLine(r[“Item”]);
Console.WriteLine(r[“Qty”]);
Console.WriteLine(r[“Price”]);
Console.WriteLine(r[“Extension”]);
This allows us to write code that is easy to read.
The table class (returned by the Table method) has a TableColumns method that iterates the columns in the table:
// list all of the columns in the Inventory table
Console.WriteLine(“Table: Inventory”);
foreach (var c in spreadsheet.Table(“Inventory”).TableColumns())
Console.WriteLine(” {0}”, c.Name);
When you run this code, you see:
Table: Inventory
Item
Qty
Price
Extension
The LtxOpenXml Namespace
Some time ago, I wrote some code that enabled querying Open XML spreadsheets using LINQ to XML, presented in the blog post ‘Open XML SDK and LINQ to XML’. I’ve added the code to query tables to the code presented in that post. The extension methods that enable querying tables make use of that code. The enhanced LtxOpenXml namespace now contains code for:
- Querying word processing documents
- Querying spreadsheets
- Querying tables contained in spreadsheets
The code for querying word processing documents and spreadsheets is unmodified. Refer to the above mentioned blog post for details on using those extension methods.
The code that enables querying of spreadsheet tables is, of course, written in the pure functional style. No state is maintained, and all methods to query the document are lazy.
If you have questions about how to write functional code (like the code that implements the extension methods and classes associated with this post), go through this Functional Programming Tutorial.
I’ve provided a summary of the types and extension methods included in the LtxOpenXml namespace at the end of this post.
Use of Data Types
Here’s another example of a table that contains a few more columns with more data types:
Each row returned by the TableRows method is a collection of TableCell objects. I’ve defined explicit conversions between TableCell and some of the most common .NET types, so that you can simply cast a TableCell to your desired type. Here’s a query to list all vehicles in the table:
// list all vehicles
var q = from c in spreadsheet.Table(“Vehicles”).TableRows()
select new VehicleRecord()
{
Vehicle = (string)c[“Vehicle”],
Color = (string)c[“Color”],
Year = (int)c[“Year”],
HorsePower = (int)c[“HorsePower”],
Cost = (decimal)c[“Cost”],
AcquisitionDate = (DateTime)c[“AcquisitionDate”],
ExecutiveUseOnly = (bool)c[“ExecutiveUseOnly”]
};
Console.WriteLine(“List of all vehicles”);
PrintVehicles(q);
Console.WriteLine();
I’ve written a PrintVehicles method:
public static void PrintVehicles(IEnumerable<VehicleRecord> list)
{
int[] tabs = new[] { 12, 10, 6, 6, 10, 14, 10 };
foreach (var z in list)
Console.WriteLine(“{0}{1}{2}{3}{4}{5}{6}”,
z.Vehicle.PadRight(tabs[0]),
z.Color.PadRight(tabs[1]),
z.Year.ToString().PadRight(tabs[2]),
z.HorsePower.ToString().PadRight(tabs[3]),
z.Cost.ToString().PadRight(tabs[4]),
((DateTime)z.AcquisitionDate).ToShortDateString()
.PadRight(tabs[5]),
((bool)z.ExecutiveUseOnly).ToString()
.PadRight(tabs[6]));
}
When you run the above query, you see:
List of all vehicles
Pickup White 2002 165 23000 2/22/2002 False
Pickup Red 2004 185 32000 10/21/2004 False
Sports Car Red 2003 165 23000 1/1/2004 True
Sedan Blue 2005 200 21000 2/25/2005 False
Limo Black 2008 440 72000 4/1/2008 True
You can query for all executive vehicles, like this:
// list all executive vehicles
q = from c in spreadsheet.Table(“Vehicles”).TableRows()
where (bool)c[“ExecutiveUseOnly”] == true
select new VehicleRecord()
{
Vehicle = (string)c[“Vehicle”],
Color = (string)c[“Color”],
Year = (int)c[“Year”],
HorsePower = (int)c[“HorsePower”],
Cost = (decimal)c[“Cost”],
AcquisitionDate = (DateTime)c[“AcquisitionDate”],
ExecutiveUseOnly = (bool)c[“ExecutiveUseOnly”]
};
You can write queries that select on data types such as DateTime:
// list all vehicles acquired after 2004
q = from c in spreadsheet.Table(“Vehicles”).TableRows()
where (DateTime)c[“AcquisitionDate”] >= new DateTime(2004, 1, 1)
select new VehicleRecord()
{
Vehicle = (string)c[“Vehicle”],
Color = (string)c[“Color”],
Year = (int)c[“Year”],
HorsePower = (int)c[“HorsePower”],
Cost = (decimal)c[“Cost”],
AcquisitionDate = (DateTime)c[“AcquisitionDate”],
ExecutiveUseOnly = (bool)c[“ExecutiveUseOnly”]
};
And of course, you can use all of the grouping, ordering, and filtering capabilities of LINQ queries:
// vehicles grouped by user
var groups = from v in spreadsheet.Table(“Vehicles”).TableRows()
group v by v[“ExecutiveUseOnly”];
foreach (var g in groups)
{
Console.WriteLine(“Executive Use: {0}”, (bool)g.Key);
foreach (var v in g)
Console.WriteLine(” Vehicle:{0} Year:{1}”,
v[“Vehicle”], v[“Year”]);
Console.WriteLine();
}
I’ve imported the Customers and Orders from the Northwind database into a spreadsheet, where the Customers table is in one sheet, and the Orders table is in another sheet within the worksheet. Here is the Customers table:
And here is the Orders table:
We can now write a query that joins the customers and orders tables:
using (SpreadsheetDocument spreadsheet =
SpreadsheetDocument.Open(filename, false))
{
// list all of the columns in the Customer table
Console.WriteLine(“Table: Customer”);
foreach (var c in spreadsheet.Table(“Customer”).TableColumns())
Console.WriteLine(” {0}”, c.Name);
Console.WriteLine();
// list all of the columns in the Order table
Console.WriteLine(“Table: Order”);
foreach (var o in spreadsheet.Table(“Order”).TableColumns())
Console.WriteLine(” {0}”, o.Name);
Console.WriteLine();
// query for all customers with city == London,
// then select all orders for that customer
var q = from c in spreadsheet.Table(“Customer”).TableRows()
where (string)c[“City”] == “London”
select new
{
CustomerID = c[“CustomerID”],
CompanyName = c[“CompanyName”],
ContactName = c[“ContactName”],
Orders = from o in spreadsheet.Table(“Order”).TableRows()
where (string)o[“CustomerID”] ==
(string)c[“CustomerID”]
select new
{
CustomerID = o[“CustomerID”],
OrderID = o[“OrderID”]
}
};
// print the results of the query
int[] tabs = new[] { 20, 25, 30 };
Console.WriteLine(“{0}{1}{2}”,
“CustomerID”.PadRight(tabs[0]),
“CompanyName”.PadRight(tabs[1]),
“ContactName”.PadRight(tabs[2]));
Console.WriteLine(“{0} {1} {2} “, new string(‘-‘, tabs[0] – 1),
new string(‘-‘, tabs[1] – 1), new string(‘-‘, tabs[2] – 1));
foreach (var v in q)
{
Console.WriteLine(“{0}{1}{2}”,
v.CustomerID.Value.PadRight(tabs[0]),
v.CompanyName.Value.PadRight(tabs[1]),
v.ContactName.Value.PadRight(tabs[2]));
foreach (var v2 in v.Orders)
Console.WriteLine(” CustomerID:{0} OrderID:{1}”,
v2.CustomerID, v2.OrderID);
Console.WriteLine();
}
}
This code produces the following output:
Table: Customer
CustomerID
CompanyName
ContactName
ContactTitle
Address
City
Region
Country
Phone
Table: Order
OrderID
CustomerID
EmployeeID
OrderDate
RequiredDate
ShipVia
Freight
ShipName
ShipAddress
ShipCity
ShipRegion
ShipPostalCode
ShipCountry
CustomerID CompanyName ContactName
——————- ———————— —————————–
AROUT Around the Horn Thomas Hardy
CustomerID:AROUT OrderID:10355
CustomerID:AROUT OrderID:10383
CustomerID:AROUT OrderID:10453
CustomerID:AROUT OrderID:10558
CustomerID:AROUT OrderID:10707
CustomerID:AROUT OrderID:10741
CustomerID:AROUT OrderID:10743
CustomerID:AROUT OrderID:10768
CustomerID:AROUT OrderID:10793
CustomerID:AROUT OrderID:10864
CustomerID:AROUT OrderID:10920
CustomerID:AROUT OrderID:10953
CustomerID:AROUT OrderID:11016
BSBEV B’s Beverages Victoria Ashworth
CustomerID:BSBEV OrderID:10289
CustomerID:BSBEV OrderID:10471
CustomerID:BSBEV OrderID:10484
CustomerID:BSBEV OrderID:10538
CustomerID:BSBEV OrderID:10539
CustomerID:BSBEV OrderID:10578
CustomerID:BSBEV OrderID:10599
CustomerID:BSBEV OrderID:10943
CustomerID:BSBEV OrderID:10947
CustomerID:BSBEV OrderID:11023
CONSH Consolidated Holdings Elizabeth Brown
CustomerID:CONSH OrderID:10435
CustomerID:CONSH OrderID:10462
CustomerID:CONSH OrderID:10848
EASTC Eastern Connection Ann Devon
CustomerID:EASTC OrderID:10364
CustomerID:EASTC OrderID:10400
CustomerID:EASTC OrderID:10532
CustomerID:EASTC OrderID:10726
CustomerID:EASTC OrderID:10987
CustomerID:EASTC OrderID:11024
CustomerID:EASTC OrderID:11047
CustomerID:EASTC OrderID:11056
NORTS North/South Simon Crowther
CustomerID:NORTS OrderID:10517
CustomerID:NORTS OrderID:10752
CustomerID:NORTS OrderID:11057
SEVES Seven Seas Imports Hari Kumar
CustomerID:SEVES OrderID:10359
CustomerID:SEVES OrderID:10377
CustomerID:SEVES OrderID:10388
CustomerID:SEVES OrderID:10472
CustomerID:SEVES OrderID:10523
CustomerID:SEVES OrderID:10547
CustomerID:SEVES OrderID:10800
CustomerID:SEVES OrderID:10804
CustomerID:SEVES OrderID:10869
Summary of the LtxOpenXml Namespace
This section summarizes the LtxOpenXml extension methods and types that make it easy to work with Open XML SpreadsheetML tables.
For details on the extension methods and types for word processing documents and spreadsheets (other than Tables within spreadsheets), see the post, Open XML SDK and LINQ to XML.
Tables Extension Method
This method returns a collection of all tables in the spreadsheet. Its signature:
public static IEnumerable<Table> Tables(this SpreadsheetDocument spreadsheet)
Table Extension Method
This method returns the Table object with the specified table name. Its signature:
public static Table Table(this SpreadsheetDocument spreadsheet,
string tableName)
Table Class
This method represents an Excel Table. Its definition:
public class Table
{
public int Id { get; set; }
public string TableName { get; set; }
public string DisplayName { get; set; }
public string Ref { get; set; }
public int? HeaderRowCount { get; set; }
public int? TotalsRowCount { get; set; }
public string TableType { get; set; }
public TableDefinitionPart TableDefinitionPart { get; set; }
public WorksheetPart Parent { get; set; }
public Table(WorksheetPart parent) { Parent = parent; }
public IEnumerable<TableColumn> TableColumns()
{
…
}
public IEnumerable<TableRow> TableRows()
{
…
}
}
This class contains a number of properties about the table. In addition, it contains two methods, TableColumns, which returns a collection of TableColumn objects (the columns of the table), and TableRows, which returns a collection of TableRow objects (the rows of the table).
TableColumn Class
This class represents a column of a table. Its definition:
public class TableColumn
{
public int Id { get; set; }
public string Name { get; set; }
public int? FormatId { get; set; } // dataDxfId
public int? QueryTableFieldId { get; set; }
public string UniqueName { get; set; }
public Table Parent { get; set; }
public TableColumn(Table parent) { Parent = parent; }
}
The most important property of this class is the Name property.
TableRow Class
This class represents a row of a table. Its definition:
public class TableRow
{
public Row Row { get; set; }
public Table Parent { get; set; }
public TableRow(Table parent) { Parent = parent; }
public TableCell this[string columnName]
{
get
{
…
}
}
}
The most important feature of this class is the default indexed property that takes a column name and returns a TableCell object. This is what allows us to write code like this:
Console.WriteLine(r[“Item”]);
Console.WriteLine(r[“Qty”]);
Console.WriteLine(r[“Price”]);
Console.WriteLine(r[“Extension”]);
TableCell Class
This class represents a cell of a row of a table. It implements IEquatable<T> so that you can do a value compare of two cells. It also implements a number of explicit conversions to other data types so that it’s easy to deal with columns of various types. Its definition:
public class TableCell : IEquatable<TableCell>
{
public string Value { get; set; }
public TableCell(string v)
{
Value = v;
}
public override string ToString()
{
return Value;
}
public override bool Equals(object obj)
{
return this.Value == ((TableCell)obj).Value;
}
bool IEquatable<TableCell>.Equals(TableCell other)
{
return this.Value == other.Value;
}
public override int GetHashCode()
{
return this.Value.GetHashCode();
}
public static bool operator ==(TableCell left, TableCell right)
{
if ((object)left != (object)right) return false;
return left.Value == right.Value;
}
public static bool operator !=(TableCell left, TableCell right)
{
if ((object)left != (object)right) return false;
return left.Value != right.Value;
}
public static explicit operator string(TableCell cell)
{
if (cell == null) return null;
return cell.Value;
}
public static explicit operator bool(TableCell cell)
{
if (cell == null) throw new ArgumentNullException(“TableCell”);
return cell.Value == “1”;
}
public static explicit operator bool?(TableCell cell)
{
if (cell == null) return null;
return cell.Value == “1”;
}
public static explicit operator int(TableCell cell)
{
if (cell == null) throw new ArgumentNullException(“TableCell”);
return Int32.Parse(cell.Value);
}
public static explicit operator int?(TableCell cell)
{
if (cell == null) return null;
return Int32.Parse(cell.Value);
}
public static explicit operator uint(TableCell cell)
{
if (cell == null) throw new ArgumentNullException(“TableCell”);
return UInt32.Parse(cell.Value);
}
public static explicit operator uint?(TableCell cell)
{
if (cell == null) return null;
return UInt32.Parse(cell.Value);
}
public static explicit operator long(TableCell cell)
{
if (cell == null) throw new ArgumentNullException(“TableCell”);
return Int64.Parse(cell.Value);
}
public static explicit operator long?(TableCell cell)
{
if (cell == null) return null;
return Int64.Parse(cell.Value);
}
public static explicit operator ulong(TableCell cell)
{
if (cell == null) throw new ArgumentNullException(“TableCell”);
return UInt64.Parse(cell.Value);
}
public static explicit operator ulong?(TableCell cell)
{
if (cell == null) return null;
return UInt64.Parse(cell.Value);
}
public static explicit operator float(TableCell cell)
{
if (cell == null) throw new ArgumentNullException(“TableCell”);
return Single.Parse(cell.Value);
}
public static explicit operator float?(TableCell cell)
{
if (cell == null) return null;
return Single.Parse(cell.Value);
}
public static explicit operator double(TableCell cell)
{
if (cell == null) throw new ArgumentNullException(“TableCell”);
return Double.Parse(cell.Value);
}
public static explicit operator double?(TableCell cell)
{
if (cell == null) return null;
return Double.Parse(cell.Value);
}
public static explicit operator decimal(TableCell cell)
{
if (cell == null) throw new ArgumentNullException(“TableCell”);
return Decimal.Parse(cell.Value);
}
public static explicit operator decimal?(TableCell cell)
{
if (cell == null) return null;
return Decimal.Parse(cell.Value);
}
public static implicit operator DateTime(TableCell cell)
{
if (cell == null) throw new ArgumentNullException(“TableCell”);
return new DateTime(1900, 1, 1).AddDays(Int32.Parse(cell.Value) – 2);
}
public static implicit operator DateTime?(TableCell cell)
{
if (cell == null) return null;
return new DateTime(1900, 1, 1).AddDays(Int32.Parse(cell.Value) – 2);
}
}
I hope you all have been enjoying Zeyad’s articles showing some of the powerful solutions you can build
Is there anyway to use this with Excel Services?
Hi Carlos,
If you are writing a feature for SharePoint, you could use this approach to extract information from spreadsheets. Also, if you have spreadsheets in a document library, you could write a web service to retrieve the spreadsheets and use this code to query tables within the spreadsheets. Does this answer your questions?
Thanks, Eric
Hi Eric,
I am building a B/S software to run a management work with VS2008Sp1 and SQL 2005. I use Linq in my project. My problem is, could I use linq to update the Datum in Excel. if so, I can use LinQ to SQL to retrive data, and Linq to Xml to write that data to excel.
Thanks in advance.
Hi Richard,
Yes, this is certainly possible. The easiest way is to have a ‘template’ spreadsheet that you copy and modify, inserting the results of your LINQ to SQL query. In short, you want to modify the worksheet part, and replace the x:sheetData element and its child x:Row elements with new elements that you construct from your query.
-Eric
Hi Eric,
I am trying my best to find out a fast solution for the reporting part of my project. I have to export the data in to Excel. And I have realize that with automation Excel in the serverside(not a good solution), javascrip and gridview to out put the dataset from the Celint side. And now, I find that open XML maybe a better way to do that. Of course, I have write data to excel with openXml sdk. But I do not know how to manipulate the excel style(eg.the column with, border, and etc). Could you please provide some resource (eg. blog, article or website) for me to learn about that?
Do you have any suggestion for me?
What I need is a Stable and fast system to export data from SQL 2005 to excel. Thanks a lot!
And does it the linq that makes my exporting system slow?
Dernier post de cette série sur la suppresion des commentaires dans les documents PowerPoint 2007 (PresentationML).
Hi Richard,
Exporting the data into Excel is certainly doable. I have a screen-cast that I need to record that shows how to do this, but basically, the gist is to find the sheetData element in the worksheet, and replace that element with a new one that contains appropriate child row elements. Take a look at this post: , and in general, look at the other posts by Zeyad Rajabi on Brian Jones’s blog.
One approach for doing formatting – it is easiest to set up a spreadsheet with your desired formatting, and then modify the spreadsheet rather than generating the spreadsheet with formatting from scratch.
Also, keep OpenXmlDiff in mind – this has the capacity to teach you how to format – save a spreadsheet, change formatting slightly, save it again, and see the differences. This shows you the markup necessary to change formatting.
Regarding speed, the portion to write out the Open XML, or read the Open XML using either LINQ to XML or Open XML SDK V2 will, in general, be very fast.
-Eric
Hi Eric,
I really appreciate your help. I will learn more throug the resources which you and your firend provided.
Thanks again.
Comme à l’accoutumé, voici une brochettes de liens de la semaine sur Open XML. Posts techniques en vrac
By combining LINQ to SQL with LINQ to Objects in this fashion, you can write some pretty simple C# code that uses LINQ to join data across two separate, disparate data sources.
Is there any chance in the future, of providing additional functionality to the given example for Tables so that after manipulating the table contents in memory you can actualy save the results back? Your post was a great help for me, thanks alot!!
Hi Constantinos, I do have plans to show some code to update tables. This would be valuable, I think.
-Eric
Nice post Eric. Have you tried out the open source Linq to Excel project (<a href=""></a>) to use linq queries against Excel? It uses OleDb and takes care of everything in the background. All you have to do is declare a simple class with properties that map to the column names in the excel sheet. Here’s an example:
IExcelRepository<Customer> repo = new ExcelRepository<Customer>();
var londonCustomers = from c in repo.Worksheet
where c.City == "London"
select c;
foreach (Customer customer in londonCustomers)
Console.WriteLine(customer.ToString());
Here is a list on links that I want to share with you. LINQ for Office Developers Some Office solutions
Hi Eric,
You post are good and book even better, but it has been a struggle for me most of the time because i have to painfully reconstruct everything from c# to VB, could you be kind enough to give VB option to all your code?
Thanks a lot
Deepesh
Hi Deepesh,
I agree, having VB samples would be great. I’ll do this when possible. 🙂
-Eric
Hi Eric,
This helped me ALOT and I am very grateful for all your high-quality posts 🙂
Thanks for this post – I’ve found it very useful.
One problem I’ve run into is addressing table columns where columns have been inserted after the rest of the table has been created. The ids are no longer in order and the TableRow[columnName] method no longer works.
You get this in table1.xml:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<table xmlns="" id="1" name="Table1" displayName="Table1" ref="A1:D2" insertRow="1" totalsRowShown="0">
<autoFilter ref="A1:D2">
<filterColumn colId="1" />
</autoFilter>
<tableColumns count="4">
<tableColumn id="1" name="First" />
<tableColumn id="4" name="Inserted" />
<tableColumn id="2" name="Second" />
<tableColumn id="3" name="Third" />
</tableColumns>
<tableStyleInfo name="TableStyleMedium2" showFirstColumn="0" showLastColumn="0" showRowStripes="1" showColumnStripes="0" />
</table>
Because the column ids are out of order, the line in the TableRow class that gets the cell gets the wrong result:
string columnAddress = (startRefs[0].ColumnAddressToIndex() + tc.Id – 1).IndexToColumnAddress();
The solution I’ve used for the problem above is to add a COlumnIndex property to the TableColumn class:
public int ColumnIndex { get; set; }
Set it using the ElementsBeforeSelf method:
new TableColumn(this)
{
Id = (int)c.Attribute("id"),
Name = (string)c.Attribute("name"),
FormatId = (int?)c.Attribute("dataDxfId"),
QueryTableFieldId = (int?)c.Attribute("queryTableFieldId"),
UniqueName = (string)c.Attribute("uniqueName"),
ColumnIndex = c.ElementsBeforeSelf().Count()
}
And then fix the cell reference in the TableRow[columnName] method:
string columnAddress = (startRefs[0].ColumnAddressToIndex() + tc.ColumnIndex).IndexToColumnAddress();
Hope that’s useful for someone.
Hi,
I just use Open XML Lib to query data from Excel file. However, I have one problem with this approach. The problem is that I can not handle the black cell. For ex: I have a table with 14 columns. The first row is the header, it contains column’s name. The remaining rows are the data. In the range of data, there’s some cell are blank. When I query this data, instead of getting 14 columns for row, I only get 13 columns and the data is incorrect due to lack of order.
Onw work around for me right now is replace the blank cell with the null pattern defined by myself, and later I replace the pattern with space. But I find it is tricky.
Do you know a better solution?
Regards,
Minh.
hi what my requirement is client will be able to uplad the multiple CSV uaually 5 to 6 which is user define and he wants to generate another csv file with a use of query builder. As per generated query a new file is generated. Can you please help me to fire a query with multiple tables in csv file. is it possible with above given solution by you.
how to convert following string in to linq query as in my project its comming dynamically.
"from i in spreadsheet.Table("Inventory").TableRows()
where (int)i["Qty"] > 2
select i;"
Hi Gopal,
Take a look at this post:
-Eric
I have two spreadsheets: one is a list of job numbers and descriptions and the other is a time sheet. Any ideas on how I could get the job numbers as a drop -down list in the time sheet file?
Thanks.
Hi Samir, I’m not a word-automation expert. This is a good question. I’ll ask one of the experts in my office.
-Eric
Hi Samir, I have a few questions. Are you looking for a solution in Visual Studio? Do you want the combo box on a toolbar? Which version of Excel are you using? Feel free to contact me directly via the email button above, and I’ll be happy to help get you the answers you need.
Thanks, Eric
Hi Samir, I have perhaps a solution for you. I have a shared Add-in project that populates a combo box on the Ribbon in one Excel workbook with a range of data from another workbook. If you’d like to contact me directly via the EMAIL button on my blog, I’d be happy to send it to you (and anyone else who wants it).
-Eric
Hi Eric,
First let me say thanks!!
I think I’ve un-covered a bug. The issue I was having was the with TableRow.GetCell(). It was incorrectly calculating the ColumnAddress. The correct column was E but it calcuated H.
I suspect the problem arised because I had moved some columns around in my table. To get around my problem I copied and pasted the values into a new table and my issue went away (off course it took 30 minutes of debugging to figure this out).
I can send you the Excel file if you so desire.
Regards,
Mike D.
Hi Mike,
You are right. I’ll update this code next week.
-Eric
I had the same problem as spottedmahn and fixed it (by adding a row index to the TableCell). I can send you the updated source code if you tell me where I can send it.
Hi Eric,
Excellent examples, thank you very much!
I think I may have discovered a bug in the "Table.TableRows()" return query. Shouldn't the return query skip the headerRowsCount and totalRowsCount AFTER the Where clause executes, not before? On a sheet with multiple tables I found that the query was skipping the first headerRowsCount + totalRowsCount number of rows in the sheet, thus I was still getting the header row and totals row returned when the table is not located on the first row. By moving the Where clause BEFORE the skips resolved the issue.
Thanks,
Craig | https://blogs.msdn.microsoft.com/ericwhite/2008/11/14/using-linq-to-query-excel-tables/ | CC-MAIN-2016-36 | refinedweb | 4,312 | 55.34 |
Action mappings is a feature that will benefit greatly from a rewrite and inclusion of other capabilities. By ActionMappings, I mean code that extracts from a URI the following:
- The namespace (xwork) or module prefix (struts)
- The action name
- Any extra parameters
While the first two are obvious, the last isn't necessarily. This is a feature that is currently in development for the next version of WebWork2. For WW2, ActionMapper is an interface which allows implementations to extract the above information. One particular implementation, CoolActionMapper, provides greater ReST-style support by allowing parameters to be embedded into the URL, so the framework could support the pattern:
to allow URLs like:
There are a couple of reasons why I'd suggest copying the code rather than using it unmodified:
- It is tied to the servlet API and I've been trying hard to not require the core to know about servlets
- It assumes one servlet mapping, and I believe we need to support multiple. As of this writing, the mappings can be retrieved from the WebContext
- Related to the above, ours would need to know about the mappings in order to recreate a URI from an action, namespace, and servlet mapping
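To make the embedded-parameter idea concrete, here is a minimal sketch of the kind of URI parsing a ReST-style mapper such as CoolActionMapper performs — turning a URI like /person/1 into an action name plus an extracted id parameter. The class and method names here are hypothetical illustrations, not the actual WebWork API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a ReST-style mapper that pulls a parameter
// out of the URL path instead of requiring "/person.action?id=1".
public class RestStyleMapper {

    /** Holds the pieces extracted from a URI. */
    public static class ActionMapping {
        public final String namespace;
        public final String name;
        public final Map<String, String> params;

        public ActionMapping(String namespace, String name,
                             Map<String, String> params) {
            this.namespace = namespace;
            this.name = name;
            this.params = params;
        }
    }

    /** Maps "/person/1" to action "person" with params {id=1}. */
    public static ActionMapping map(String uri) {
        String[] parts = uri.split("/");
        // parts[0] is empty because the URI starts with '/'
        Map<String, String> params = new HashMap<String, String>();
        if (parts.length > 2) {
            params.put("id", parts[2]); // parameter embedded in the path
        }
        return new ActionMapping("/", parts[1], params);
    }

    public static void main(String[] args) {
        ActionMapping m = map("/person/1");
        System.out.println(m.name + " id=" + m.params.get("id")); // prints: person id=1
    }
}
```

A real mapper would of course be driven by configurable patterns rather than a hard-coded path shape.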
Implementation points:
- Do we follow WW2 and use an ActionMapping interface or just embed the information directly into the web context?
- If we create an interface, we'd have to add a "mapping" property to store the optional servlet mapping
- I'd assume we'd pass in a WebContext rather than a servlet request
- How do we integrate any extracted parameters into the WebContext? The request parameters map is read-only, yet we want smooth integration of the parameters into the rest of the app, and it seems cumbersome to require every piece of code to check in two places for parameters.
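One possible answer to the read-only-map problem is a small composite view that consults the mapper-extracted parameters first and falls back to the request parameters, so calling code only ever checks one place. This is a sketch under assumed names, not actual framework code:

```java
import java.util.Map;

// Illustrative sketch: a single lookup over two parameter sources,
// so code never has to know whether a parameter came from the URL
// pattern or from the servlet request.
public class CompositeParameterMap {
    private final Map<String, String> extracted; // from the ActionMapper
    private final Map<String, String> request;   // from the servlet request

    public CompositeParameterMap(Map<String, String> extracted,
                                 Map<String, String> request) {
        this.extracted = extracted;
        this.request = request;
    }

    /** Extracted parameters win; otherwise fall back to the request. */
    public String get(String name) {
        String value = extracted.get(name);
        return value != null ? value : request.get(name);
    }
}
```

The same idea extends to exposing a full read-only Map view if the rest of the framework expects one.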
Design goals:
- Again, shouldn't require servlet, although implementations certainly will
- Support for multiple servlet mappings
Comment by rich on Thu Jul 7 22:59:42 2005
Having this capability sounds great to me.
It does raise a general question: should we try to influence/contribute to other projects in order to fulfill our needs? e.g., is this something that could be abstracted and pushed into xwork?
Some questions/comments:
- Why do we need to support multiple servlet mappings?
- I like the idea of wrapping up action information into an ActionMapping, since it seems like there could end up being a lot there that could get lost in the shuffle of the general context. Also, maybe there will be different types of ActionMappings? It would still hang off the context (getActionInfo()?).
- Assuming we have an extension of WebContext, can't we have this combine the underlying parameter map with the params from the ActionMapper?
Comment by mrdon on Fri Jul 8 09:17:27 2005
Good point, and to that end, I've invited Jason and Patrick from WebWork/XWork to join the discussion. They are very interested in working together to develop solutions that meet both our needs. While our projects will remain separate, I'm hoping we can use common infrastructure and ideas wherever possible.
- One use case that I've run across is trying to have both an HTML and a ReST interface for the same app. The HTML interface is fine with *.do, but the ReST is cleaner with /rest/*. If we allowed multiple instances of Ti, I could just use two servlets, but combining them into one lets me share data more easily and takes less memory.
- Sure, what different types do you have in mind?
- Yes, and this is what WebWork2 does when it creates the ActionContext. Of course, by extension you mean a whole other class, as we couldn't directly extend WebContext without extending all possible subclasses (servlet, faces, portlet, etc.).
Comment by rich on Fri Jul 8 13:58:28 2005
That's excellent. The more of an integration project this is, the better, as far as I'm concerned.
- I see. Mainly to support both path-mapping and extension-mapping. I know that from a tooling point of view, it would be much easier if we defined one extension and one path prefix. Is there a compelling reason to offer more flexibility than that?
- In Beehive there are two types of action mappings: simple actions and method-based actions. Simple actions either map directly to a path, or go through a script expression evaluator to go through a set of conditions/results. Basically, the shape of metadata is different for the two types of actions. Just making sure -- the ActionMapping is still the way you'd access an action's metadata at runtime?
- Oh, right. I was talking about extending WebContext and wrapping the underlying one, but that's not pretty since ServletWebContext, etc. expose properties you'd be interested in. I assume we should extend ActionContext and expose WebContext as a property, along with our other context properties. Sound correct? An alternative would be to leave WebContext out of the ActionContext, and ensure that our single context had everything you need... but this would cause it to reimplement much of what Chain did.
Comment by mrdon on Sun Jul 10 19:14:47 2005
- I don't mind if we suggest one prefix/extension mapping, but I'd like the internal framework to be capable of more. The code I just committed allows multiple servlet mappings - source:src/java/org/apache/ti/config/mapper/ServletActionMapper.java
- Well, ActionMapping in this context is different than in Struts 1.x. In this case, all an ActionMapping does is capture an action name and namespace. XWork's ActionConfig is like the ActionConfig of Struts 1.x. As for two types of action mappings, I was kinda hoping to just go to one type, i.e. Ruby on Rails. If we can keep annotation/tag overhead low, even simple actions could be a controller method.
- I was thinking about some sort of ControllerContext which would use ActionContext's threadlocal instance as storage, so it'd provide getters and then try to pull them out of ActionContext's map.
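A minimal sketch of that ControllerContext idea — typed getters backed by a thread-local map, the way XWork's ActionContext stores its per-request state. The storage and key names here are stand-ins; the real ActionContext API differs:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: typed accessors over a thread-local map, so each
// request thread sees its own action name and namespace without the
// framework passing them through every call.
public class ControllerContext {
    private static final ThreadLocal<Map<String, Object>> STORAGE =
        ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, Object value) {
        STORAGE.get().put(key, value);
    }

    // Typed getters pull well-known keys out of the underlying map.
    public static String getActionName() {
        return (String) STORAGE.get().get("actionName");
    }

    public static String getNamespace() {
        return (String) STORAGE.get().get("namespace");
    }
}
```

The appeal is that existing ActionContext storage keeps working while callers get a cleaner, typed facade.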
Comment by plightbo on Sun Jul 10 20:15:43 2005
Hey guys -- Patrick Lightbody here. Just joining in on the conversation. One thing to keep in mind is weighing between tool support and supporting flexible URLs. At this point, our plans in WebWork are up in the air. Pasted below is a discussion I had with Don over email about this:
Speaking of the ActionMapper -- I'm not done with it by any means and very open to feedback. One of the challenges I have is not mapping from "request" -> action, but rather action -> "request" (url).
For example, suppose you have a form that lets you edit a person. In a RESTful design, that would be a PUT to /people/1. Or, with the "cool" mapper, it would be a POST to /person/1`, where 1 is the person ID. But suppose the form has a drop down form element that is a selection of "persons" you can edit (say, it is a combobox and the textfield becomes the name of the person). Now the <form> element needs to submit to a dynamic location (/person/1, /person/2, etc).
Even when the URL is not dynamic, mapping a form (in WW it is <ww:form>) is not so easy. Using a *.action extension mapping, this:
<ww:form <ww:hidden <ww:textfield ... </ww:form>
produces this HTML (or close to it):
<form name="updatePerson" action="updatePerson.action"> <table> <input type="hidden" name="id" value="123"/> <tr> <td>Email address</td> <td><input name="email" value="plightbo@gmail.com"/> </tr> ... </table> </form>
So in this situation, mapping the action name ("updatePerson") to the URL ("updatePerson.action") is very easy. But in the cool or restful way, the ActionMapper would have to know the parameters of the form ("id" -> "123") and construct a proper HTML output:
<form name="updatePerson" action="/person/123"> <table> <tr> <td>Email address</td> <td><input name="email" value="plightbo@gmail.com"/> </tr> ... </table> </form>
Comment by rich on Mon Jul 11 09:41:59 2005
Response to Don:
- Sounds good -- it's easy for the runtime to be aware of multiple mappings.
- Having one config is OK, but I don't think we should sacrifice wholly-annotation-based actions. I'm all for low overhead, but we did go through a round of every-action-is-a-method in Beehive, and it's just a pain. Even if there were no annotations on an action method, it's still just cleaner to do this:
@ti.simpleAction(name="begin")
- Definitely -- sounds good.
Comment by rich on Mon Jul 11 09:49:46 2005
Patrick/Don, I've got a basic question: what's the main case for generating HTML that accesses actions RESTfully? I can imagine exposing actions/flow in this way for a back-channel XmlHttpRequest (which doesn't have some of the form issues Patrick is raising, and which is what I'd imagine as the main use initially), but I don't see how it would be helpful on the other side. | https://wiki.apache.org/struts/StrutsTi/ActionMappings?highlight=WebWork2 | CC-MAIN-2017-51 | refinedweb | 1,522 | 60.45 |
Problem
The issue is that SQL Server ignores trailing spaces when comparing strings but the .NET string functions (which EF uses) does not. On SQL Server “ABCD” and “ABCD “ are considered equal, but in .NET (and EF) they are not.
The implications of this are best illustrated with an example. Consider the following code that uses a Group and User model with a string based foreign key relationship between them. The code creates a database and populates it with one Group and a couple of Users. The primary key of the Group has trailing spaces but the foreign key values in the User instances do not.
using System; using System.Collections.Generic; using System.ComponentModel.DataAnnotations; using System.Data.Entity; using System.Linq; namespace FixedLengthDemo { class Program { static void Main(string[] args) { using (var db = new UserContext()) { if (!db.Database.Exists()) { db.Database.Create(); db.Groups.Add(new Group { GroupId = "CoolKids ", Name = "The cool kids" }); db.Users.Add(new User { UserId = "JohnC", Name = "John Citizen", GroupId = "CoolKids" }); db.Users.Add(new User { UserId = "JaneC", Name = "Jane Citizen", GroupId = "CoolKids" }); db.SaveChanges(); } } using (var db = new UserContext()) { var groups = db.Groups.Include(g => g.Users).ToList(); Console.WriteLine("Groups in memory: " + db.Groups.Local.Count); Console.WriteLine("Users in memory: " + db.Users.Local.Count); Console.WriteLine("Users in navigation property: " + groups[0].Users.Count); } } } public class UserContext : DbContext { public DbSet<User> Users { get; set; } public DbSet<Group> Groups { get; set; } } public class User { public string UserId { get; set; } public string Name { get; set; } public string GroupId { get; set; } public Group Group { get; set; } } public class Group { public Group() { this.Users = new List<User>(); } public string GroupId { get; set; } public string Name { get; set; } public List<User> Users { get; set; } } }
When we run the console application we see that all Groups and Users are retrieved from the database (since SQL Server will ignore the trailing spaces and successfully join data from the two tables). However, the Users navigation property on Group is not populated because relationship fixup will not match the values due to trailing spaces.
Groups in memory: 1 Users in memory: 2 Users in navigation property: 0
Intrusive Fixes
There are of course some intrusive fixes you could make, and these are things we have recommended in the past:
- Use a view (or other intermediary database structure) to ‘fix’ the data by removing trailing blanks.
- Fix the data in the database by removing trailing blanks.
- Compensate in application code for this limitation of EF.
A Better Fix
In EF6.1 we can use query interceptors and publically constructible query trees to resolve this issue without having to compensate in our database or product code.
The Interceptor
Here is the code for an interceptor that detects any string columns being accessed and applies a trim function to have white space removed in the query.); } } } }
Registering the Interceptor()); } } }
The Result
When we run the app we’ll see that relationship fixup now occurs and the navigation properties are populated.
Groups in memory: 1 Users in memory: 2 Users in navigation property: 2
If we inspect the SQL that is generated we see the trimming is done in the database. Here is the query fragment that selects a property:
LTRIM(RTRIM([Extent1].[GroupId])) AS [GroupId]
We’ll also see that the JOIN operators still work on the untrimmed strings, since SQL will already compensate for the trailing spaces. Here is the query fragment for the join:
LEFT OUTER JOIN [dbo].[Users] AS [Extent2] ON [Extent1].[GroupId] = [Extent2].[GroupId] | https://romiller.com/2014/10/20/ef6-1workaround-trailing-blanks-issue-in-string-joins/ | CC-MAIN-2019-09 | refinedweb | 585 | 56.15 |
Other Aliasxgetaline, xfseek, xfopen, getaline, fassert
SYNOPSIS
#include <files.h>
FILE *xfopen(const char *filename, const char *mode);
void xfclose(FILE *fp);
void xfseek(FILE *fp, long offset, int origin);
char *getaline(FILE *fp);
char *xgetaline(FILE *fp);
void fassert(FILE *fp);
DESCRIPTIONThese functions are useful for file manipulation. The functions that begin with x work like the functions without the letter, except if there is an error, they print an error message and kill the program.
getaline reads a line from the given file. It allocates the memory for the line with malloc(3), and returns a pointer to the beginning of the line. If there is an error, it returns NULL. If the returned value is not NULL, the caller is responsible for freeing the memory. The newline is removed from the end of the line.
fassert checks that the argument is not NULL, and that (for a non-NULL argument) the file does not have its error indicator flag set. If either condition is true, it prints an error message and termiantes the program. If neither condition is true, it does nothing. This can be used to add checks that the I/O in a program is going well; however, it is mostly useful only for small programs, because more serious programs need to handle the errors more gracefully.
AUTHORLars Wirzenius ([email protected]) | http://manpages.org/xfclose/3 | CC-MAIN-2021-49 | refinedweb | 226 | 63.09 |
Agile Software Development: Principles, Patterns,and Practices -- The Adapter Pattern
In my last column we talked about the
Abstract Server
pattern in the context of the
Button and
Light example. At the end of that column, I promised that we'd study the pattern that we might use if we could not modify
Light.
Decoupling
from a Member You Can't Modify
Consider a
Button class that uses a
Light class as follows:
public class Button {
private Light light;
public Button(Light light) {
this.light = light;
}
public void press() {
light.turnOn();
}
}
How can be break the dependency between
Button and
Light, and thus conform to the SOLID principles of OOD, if we can't modify the
Light class?
Why can't we modify the
Light class? Perhaps we don't own the source code. Or perhaps there are so many users of
Light that forcing them all to rebuild and redeploy would be very expensive. Whatever the reason might be, we have decided that
Light should not be modified.
The Adapter Pattern
Remember that a design pattern is a named solution to a problem in a context. In our case, the problem is decoupling
Button from
Light. The context is that
Light cannot be modified. The most common solution for this problem/context pair is the Adapter pattern.
As shown in the diagram below, this pattern adds an extra object to the Abstract Server solution that we discussed in the last column. This extra object, called
LightAdapter, implements the
Switchable interface and delegates messages received by that interface to the associated
Light object.
The code that implements the Adapter is trivial.
public interface Switchable {
void turnOn();
}
public class LightAdapter implements Switchable {
private Light light;
public LightAdapter(Light light) {
this.light = light;
}
public void turnOn() {
light.turnOn();
}
}
This solves the problem nicely. The
Button class no longer knows about the
Light, and the
Light has not been modified. However, there are some costs. The unit test below shows how the
LightAdapter has to be wired to the
Light. This extra step in binding the
Button to the
Light adds complexity to the application. Moreover, the
LightAdapter object itself requires memory and consumes CPU cycles. These costs may be small, but they are enough to discourage speculative use of the Adapter.
public class AdapterTest extends TestCase {
public void testButtonControlsLight() throws Exception {
Light l = new Light();
LightAdapter la = new LightAdapter(l);
Button b = new Button(la);
b.press();
assertTrue(l.isOn());
}
}
The Class Form of Adapter
The pattern as shown above is known as the object form of the Adapter. The class form of the Adapter addresses some of the costs, while incurring others. It is shown in the diagram and code below.
public class LightClassAdapter extends Light implements Switchable {
}
Isn't it wonderful that this class has no body? And yet, it completely fulfills its role as an Adapter. The unit test below shows how easy this form of the Adapter is to use.
public void testButtonControlsLightThroughClassAdapter() throws Exception {
LightClassAdapter lca = new LightClassAdapter();
Button b = new Button(lca);
b.press();
assertTrue(lca.isOn());
}
Like the Abstract Server pattern, the class form of the Adapter pattern does not add a new object to the application. Moreover, there is no extra wiring, nor significant extra storage, nor any extra CPU cycles involved. It is as fast and small as the Abstract Server, and yet it breaks the dependency between
Button and
Light.
On the other hand, it lacks flexibility. The object form of the Adapter allows you to swap different instances of
Light into and out of the
LightAdapter object. The class form of the Adapter is the
Light, and can never be used to hold a different instance of
Light. Moreover, everyone who creates the
Light must know to create a
LightClassAdapter instead. Thus, the creators of
Light are coupled to the solution in a way that the object form of the Adapter avoids.
One last cost of the class form of the Adapter is that it uses up the one and only slot for inheritance. I am consistently annoyed that Java does not allow true multiple inheritance. There are times when a class-form Adapter would be nice to use, and yet, the cost of using that slot is too great to pay.
Now I'll rant. Java should have multiple inheritance. It's ridiculous that a 21st-century language hobbles its users by not allowing something as simple as inheriting from multiple base classes. Yes, I know that the diamond problem leads to some nasty ambiguities; but a reasonable solution to this is to disallow diamonds, or force all diamonds to converge on
Object. Multiple inheritance is useful, damnit! I might not use it every day; but when I want it I want it. OK, I'll stop ranting now.
Adapter Implemented with Anonymous Inner Class
The anonymous inner class feature of Java can be a wonderful way to create object-form Adapters. Consider the following code:
public class AdapterTest extends TestCase {
private Light l = new Light();
public void testAnonymousInnerClassAdapter() throws Exception {
Switchable s = new Switchable() {
public void turnOn() {
l.turnOn();
}
};
Button b = new Button(s);
b.press();
assertTrue(l.isOn());
}
}
An anonymous inner class is used to delegate to the
Light field of the
AdapterTest object. This is nice, because we don't have to create a whole new class and try to come up with some hokey name like
LightAdapter.
This technique is commonly used with listeners in Java. For example, if we have two
JButtons on a dialog box, we can use anonymous inner adapters to create listeners for them.
public class SimpleDialog extends JFrame {
private JButton okButton = new JButton("OK");
private JButton cancelButton = new JButton("CANCEL");
private void ok() {/* called when OK is pressed */}
private void cancel() {/* called when CANCEL is pressed */}
public SimpleDialog() throws HeadlessException {
okButton.addActionListener(
new ActionListener() {
public void actionPerformed(ActionEvent e) {
ok();
}
}
);
cancelButton.addActionListener(
new ActionListener() {
public void actionPerformed(ActionEvent e) {
cancel();
}
}
);
}
}
These cute little anonymous Adapters are easy to create, and they nicely bind each
JButton to the appropriate methods in the
SimpleDialog class. We might show this in UML as follows:
Adapt a Sender's Protocol
Another use of the Adapter pattern is to adapt the protocol of a sender to a receiver. For example, let's say that we have a class that looks like this:
public class ThreeWayLight {
private int brightness = 0;
public void lo() {brightness = 1;}
public void medium() {brightness = 2;}
public void high() {brightness = 3;}
public void off() {brightness = 0;}
public int getBrightness() {
return brightness;
}
}
The
Button was never designed to control a three-way light. Moreover, it looks as though the
ThreeWayLight class was not designed to take input from a
Button. How can we adapt the
Button class to the
ThreeWayLight ? Consider the following unit test:
public void testThreeWayLight() throws Exception {
ThreeWayLight twl= new ThreeWayLight();
ThreeWayAdapter twa = new ThreeWayAdapter(twl);
Button b = new Button(twa);
assertEquals(0, twl.getBrightness());
b.press();
assertEquals(1, twl.getBrightness());
b.press();
assertEquals(2, twl.getBrightness());
b.press();
assertEquals(3, twl.getBrightness());
b.press();
assertEquals(0, twl.getBrightness());
}
Clearly our intent is that the
ThreeWayAdapter should ratchet the
ThreeWayLight through its states every time the
Button is pressed. We can easily implement this adapter as follows:
public class ThreeWayAdapter implements Switchable {
private ThreeWayLight twl;
public ThreeWayAdapter(ThreeWayLight twl) {
this.twl = twl;
}
public void turnOn() {
switch (twl.getBrightness()) {
case 0:
twl.lo();
break;
case 1:
twl.medium();
break;
case 2:
twl.high();
break;
case 3:
twl.off();
break;
}
}
}
This shows how an Adapter can be used to adjust the protocol of the sender (i.e., the
Button) to the protocol of the receiver (i.e., the
ThreeWayLight). The two classes were never designed to work together, and yet we can easily adapt them without having to change them.
We'll see the Adapter pattern again as we explore yet other patterns and design principles in the months to follow. Adapter is a simple, yet very useful, way to decouple classes from each other, especially when the target class should not be changed. Adapter is also a useful way to adapt the protocols of two classes together without directly affecting them.
- Login or register to post comments
- Printer-friendly version
- 5599 reads | https://today.java.net/pub/a/today/2004/08/17/patterns.html | CC-MAIN-2015-14 | refinedweb | 1,370 | 64.2 |
James Stroud schrieb: >. > > It doesn't evaluate it until you ask it to, which is the right > behavior. However, when evaluated, it evaluates "i" also, which is the > last value to which "i" was assigned, namely the integer 1. I'm going > to get flamed pretty hard for this, but it doesn't seem to be the > intuitive behavior to me either. However, in a purely functional > language, you wouldn't be able to construct a list of generators in > this way. > > With python, you have to remember to adopt a purely functional design > and then pray for best results. You can store generators in a list, > but they need to be constructed properly. I can't perfectly > transmogrify your code into functional code because I don't think > making the particular anonymous generator you want is possible in > python. However this is how I would make a close approximation: > > > from itertools import * > > def make_gen(i): > for x in count(): > yield x + (i * 10) > > itlist = [make_gen(i) for i in xrange(2)] > > print "what's in the bags:" > print list(islice(itlist[0], 5)) > print list(islice(itlist[1], 5)) > > > James > You could just as well use the original expression in make_gen, too: from itertools import * def make_gen(i): return (x + (i*10) for x in count()) itlist = [make_gen(i) for i in xrange(2)] print "what's in the bags:" print list(islice(itlist[0], 5)) print list(islice(itlist[1], 5)) what's in the bags: [0, 1, 2, 3, 4] [10, 11, 12, 13, 14] | https://mail.python.org/pipermail/python-list/2009-January/520345.html | CC-MAIN-2014-15 | refinedweb | 258 | 62.41 |
Sending
Non-blocking
C | FORTRAN-legacy | FORTRAN-2008
MPI_Issend
Definition
MPI_Issend is the synchronous non-blocking send (the capital ’I’ standing for immediate return). Unlike its blocking counterpart MPI_Ssend, MPI_Issend will not block until the recipient has received the message. In other words, when MPI_Issend returns, the buffer passed may not have been sent yet, and it must be considered unsafe to reuse the buffer passed. The user must therefore check for completion with MPI_Wait or MPI_Test before safely reusing the buffer passed. Note that MPI_Issend may be implicitly invoked by the standard non-blocking send (MPI_Isend). Other non-blocking sends are MPI_Isend, MPI_Ibsend and MPI_Irsend. Refer to MPI_Ssend to see the blocking counterpart of MPI_Issend.
Copy
Feedback
int MPI_Issend synchronous send.
- MPI_SUCCESS
- The routine successfully completed.
Example
Copy
Feedback
#include <stdio.h> #include <stdlib.h> #include <mpi.h> /** * @brief Illustrates how to send a message in a non-blocking; MPI_Request request; printf("MPI process %d sends value %d.\n", my_rank, buffer_sent); MPI_Issend(&buffer_sent, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request); // Do other things while the MPI_Issend completes // <...> // Let's wait for the MPI_Issend to complete before progressing further. MPI_Status status; MPI_Wait(&request, &status); break; } case RECEIVER: { int received; MPI_Recv(&received, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE); printf("MPI process %d received value: %d.\n", my_rank, received); break; } } MPI_Finalize(); return EXIT_SUCCESS; } | https://www.rookiehpc.com/mpi/docs/mpi_issend.php | CC-MAIN-2019-43 | refinedweb | 220 | 50.94 |
Hi On Wed, Mar 21, 2007 at 09:59:13PM +0100, Ian Braithwaite wrote: > Hi, > > > Here's the second iteration of my fixed point COOK decoder. > > Many thanks to Michael and Benjamin for the code reviews. > I've updated the patch accordingly, here are some comments to the comments. [...] > Michael Niedermayer writes: > >> diff -upN -x 'ff*' coef/cook_fixp_mdct.h fixpoint/cook_fixp_mdct.h > >> --- coef/cook_fixp_mdct.h 1970-01-01 01:00:00.000000000 +0100 > >> +++ fixpoint/cook_fixp_mdct.h 2007-03-08 11:10:13.000000000 +0100 > > > > adding a generic fixpoint mdct to lavc MUST be a seperate patch&mail&commit > > (this part also doesnt fall under benjamins maintainership but mine) > > and Benjamin Larsson writes: > > The fixedpoint math > > operattions should be moved to some generic fixedpoint.h file for the > > possible use in other codecs. The fixedpoint mdct should be moved to > > mdct.h. > > I don't see it as a generic fixpoint MDCT, at least not yet! > I made some choices when implementing the decoder, for example to use > 32 bit signed integers for variables and 16 bit unsigneds for coefficients. > I borrowed the MDCT from Tremor and adapted it for this. > I also removed some coefficient interpolation used for very large transforms, > that COOK doesn't need. > Turned out to be OK here, but I don't know how suitable it would be for > anything else. > > If you still want the fixpoint math + MDCT submitted as generics, I > would really appreciate some tips or pointers as to how it should be done. use the math ops from mathops.h, and if anything you wrote is faster replace the specific one in mathops.h also dont hesitate to suggest additions to mathops.h [...] > Michael Niedermayer writes: > >> +/** > >> + * Fixed point multiply by fraction. 
> >> + * > >> + * @param a fix point value > >> + * @param b fix point fraction, 0 <= b < 1 > >> + */ > >> +static inline FIXP fixp_mult_su(FIXP a, FIXPU b) > >> +{ > >> + int32_t hb = (a >> 16) * b; > >> + uint32_t lb = (a & 0xffff) * b; > >> + > >> + return hb + (lb >> 16) + ((lb & 0x8000) >> 15); > > > > return ((int64_t)a * b + 0x8000) >> 16; > > > > (its the compiers job to optimize this and if it fails such code > > should be in ASM under appropriate #ifdef, the above may be optimal > > for ARM but certainly not for every CPU) > > > > also its probably worth trying to do this without correct rounding ... > > I've adopted your rounding method, it's much better. > > I don't understand why you say you would rewrite something in C, that > does what you want, into assembler. > One of C's great strengths (in my opinion) is that often it lets you > express low level details _without_ resorting to assembler. its not about expressing details but about speed, hand written asm simply tends to be faster then what gcc generates, you also can surely tweak the c code until gcc generates equally optimal code but as soon as you upgrade to the next version of gcc you might have to redo this. OTOH with asm gcc cant mess up, it has no choice but to output the code which it should the mess gcc generates out of some multiply constructs below is a good example, next version of gcc could as well behave completely differently > > This code isn't aimed specifically at ARM, but it was written with > a 32 bit CPU in mind, I'll admit. That's why it uses two 16x16->32bit > multiplications, instead of a 64bit one. > > > Since you mentioned the optimizer, and suggested an alternative, > I spent the time to investigate a bit, do some timings and look at > the assembler produced on x86 and ARM. > > Timings: decode a 273s sample with ffmpeg, > > On x86 - 64bit: 3.8s, modified64bit: 3.3s, 32bit: 2.3s. > On ARM - 64bit: 63.4s, 32bit: 34.0s. 
> > Looking at the assembler listing, > On x86: > The 64bit version uses a single 64x64->64 multiply. > With a little help (voodoo!) gcc can be encouraged to see that > the 64bit version only needs a 32x32->64 multiply. > The original uses two 16x16->32 multiplies. > > On ARM: > The 64bit version calls _muldi3(), a 64x64->64 multiply (?). > The original uses two 32x32->32 multiplies. > > > For the moment I've left all three versions in the code, in case anyone > else wants to look at this. [...] > Michael Niedermayer writes: > >> +/** > >> + * Final converion from floating point values to > >> + * signed, 16 bit sound samples. Round and clip. > >> + * > >> + * @param q pointer to the COOKContext > >> + * @param out pointer to the output buffer > >> + * @param chan 0: left or single channel, 1: right channel > >> + */ > >> +static inline void output_math(COOKContext *q, int16_t *out, int chan) > >> +{ > >> + int j; > >> + > >> + for (j = 0; j < q->samples_per_channel; j++) { > >> + out[chan + q->nb_channels * j] > + > > av_clip(fixp_pow2(q->mono_mdct_output[j], -11), -32768, 32767); > > > > for(j = 0; j < q->samples_per_channel; j++) { > > int v= q->mono_mdct_output[j] >> 11; > > if(v+32768U > 65535U) v= (v>>31) ^ 0x7FFF; > > *out= v; > > out += q->nb_channels; > > } > > Done. > (Dreadfully unreadable code though - maybe it should be wrapped up > in something like av_clip_s16()?) no objection, seperate patch to add av_clip_s16() is welcome [...] > + /* Variables for fixed/float arithmetic routines */ > + realvars_t math; comment isnt doxygen compatible [...] > +/** > + * Additional variables in COOKContext > + * for fixed point routines > + */ > +typedef struct { > + /* generatable tables */ > + /** > + * Sine/cosine table. > + * x_i = 2^16 sin(i 2pi/8192), 2^16 cos(i 2pi/8192); i=0..1024 > + */ > + FIXPU sincos_lookup[2050]; shouldnt this table be static instead of duplicated in the context? [...] 
> + > +#define STIN static inline ugly > + > +typedef int32_t ogg_int32_t; ogg_* is completely unacceptable unless this would stay extreemly similar to the code from tremor so that future versions could easily be droped in but that also would make it impossible for us to optimize the code so considering A. ugly code easy drop in no optimizations and big changes possible B. clean code drop in for future tremor imdct hard optimizations and big changes possible now i dont belive that there will be much improvments coming from xiph in the future so B seems like the more logical choice > + > +#define DATA_TYPE ogg_int32_t > +#define REG_TYPE register ogg_int32_t > +#define LOOKUP_T const uint16_t > + > +static inline ogg_int32_t MULT32(ogg_int32_t x, ogg_int32_t y) { > + return fixp_mult_pow2(x, y, -1); > +} such wrapers are ugly, even more so with wrong indention > + > +static inline ogg_int32_t MULT31(ogg_int32_t x, ogg_int32_t y) { > + return fixp_mult(x, y); > +} > + > +/* > + * This should be used as a memory barrier, forcing all cached values in > + * registers to wr writen back to memory. Might or might not be beneficial > + * depending on the architecture and compiler. > + */ > +#define MB() useless nop > + > +/* > + * The XPROD functions are meant to optimize the cross products found all > + * over the place in mdct.c by forcing memory operation ordering to avoid > + * unnecessary register reloads as soon as memory is being written to. > + * However this is only beneficial on CPUs with a sane number of general > + * purpose registers which exclude the Intel x86. On Intel, better let the > + * compiler actually reload registers directly from original memory by using > + * macros. 
> + */ > + > +#ifdef __i386__ > + > +#define XPROD32(_a, _b, _t, _v, _x, _y) \ > + { *(_x)=MULT32(_a,_t)+MULT32(_b,_v); \ > + *(_y)=MULT32(_b,_t)-MULT32(_a,_v); } > +#define XPROD31(_a, _b, _t, _v, _x, _y) \ > + { *(_x)=MULT31(_a,_t)+MULT31(_b,_v); \ > + *(_y)=MULT31(_b,_t)-MULT31(_a,_v); } > +#define XNPROD31(_a, _b, _t, _v, _x, _y) \ > + { *(_x)=MULT31(_a,_t)-MULT31(_b,_v); \ > + *(_y)=MULT31(_b,_t)+MULT31(_a,_v); } > + > +#else tabs are forbidden in svn __i386__ is wrong theres ARCH_X86 (_32/_64) > + > +static inline void XPROD32(ogg_int32_t a, ogg_int32_t b, > + ogg_int32_t t, ogg_int32_t v, > + ogg_int32_t *x, ogg_int32_t *y) > +{ > + *x = MULT32(a, t) + MULT32(b, v); > + *y = MULT32(b, t) - MULT32(a, v); > +} > + > +static inline void X); > +} > + > +static inline void XN); > +} > + > +#endif > + > + > +/* 8 point butterfly (in place) */ > +STIN void mdct_butterfly_8(DATA_TYPE *x){ > + > + REG_TYPE r0 = x[4] + x[0]; > + REG_TYPE r1 = x[4] - x[0]; > + REG_TYPE r2 = x[5] + x[1]; > + REG_TYPE r3 = x[5] - x[1]; > + REG_TYPE r4 = x[6] + x[2]; > + REG_TYPE r5 = x[6] - x[2]; > + REG_TYPE r6 = x[7] + x[3]; > + REG_TYPE r7 = x[7] - x[3]; > + > + x[0] = r5 + r3; > + x[1] = r7 - r1; > + x[2] = r5 - r3; > + x[3] = r7 + r1; > + x[4] = r4 - r0; > + x[5] = r6 - r2; > + x[6] = r4 + r0; > + x[7] = r6 + r2; > + MB(); > +} a+b / a-b aka butterfly can be put into its own function/macro the remainder of the mdct code also likely can be simplified a lot [...] > +static inline int init_cook_math(COOKContext *q) > +{ > + FIXPU *const sincos_lookup = q->math.sincos_lookup; > + FIXP s = 0, c = 0x80000000; /* 0.0, -1.0 */ > + uint16_t a = 0xc910; /* 2^14 pi */ > + int i = 0; > + > + sincos_lookup[i++] = 0x0000; > + sincos_lookup[i++] = 0xffff; > + > + while (i < 2050) { > + FIXP s2 = s + fixp_mult_pow2(c - fixp_mult_pow2(s, a, -11), a, -10); > + FIXP c2 = c - fixp_mult_pow2(s + fixp_mult_pow2(c, a, -11), a, -10); > + > + s = s2; > + c = c2; the c2 variable isnt needed [...] 
> +static void scalar_dequant_math(COOKContext *q, int index, int quant_index, > + int* subband_coef_index, > + int* subband_coef_sign, FIXP *mlt_p) > +{ > + /* Num. half bits to right shift */ > + const int s = 33 - quant_index + av_log2(q->samples_per_channel); > + FIXP f1; > + int i; > + > + if (s >= 64) { > + memset(mlt_p, 0, sizeof(FIXP) * SUBBAND_SIZE); > + return; > + } > + > + for(i=0 ; i<SUBBAND_SIZE ; i++) { > + if (subband_coef_index[i]) { > + f1 = quant_centroid_tab[index][subband_coef_index[i]][s&1]; > + if (subband_coef_sign[i]) f1 = -f1; > + } else { > + /* noise coding if subband_coef_index[i] == 0 */ > + f1 = dither_tab[index][s&1]; > + if (av_random(&q->random_state) < 0x80000000) f1 = -f1; > + } > + mlt_p[i] = fixp_shr(f1, s/2); > + } wouldnt it be faster to tab = &quant_centroid_tab[index][0][s&1]; outside the loop and then use tab[subband_coef_index[i]][0] (this and all other optimizations can of course can be done as seperate patches, in many cases there is similar code for floats too which might benefit ...) > +} > + > + > +/** > + * the actual requantization of the timedomain samples > + * > + * @param q pointer to the COOKContext > + * @param buffer pointer to the timedomain buffer > + * @param gain_index index for the block multiplier > + * @param gain_index_next index for the next block multiplier > + */ > +static inline void interpolate_math(COOKContext *q, FIXP* buffer, > + int gain_index, int gain_index_next) > +{ > + int gain_size_factor = q->samples_per_channel/8; > + int i; > + > + if(gain_index == gain_index_next){ //static gain > + for(i = 0; i < gain_size_factor; i++) { > + buffer[i] = fixp_pow2(buffer[i], gain_index); > + } is this speed critical? if yes if(gain_index < FOOBAR) for(i = 0; i < gain_size_factor; i++) ... else for(...) ... would be faster, if its not speed critical just forget my comment [...] --: <> | http://ffmpeg.org/pipermail/ffmpeg-devel/2007-March/033190.html | CC-MAIN-2014-49 | refinedweb | 1,677 | 57.4 |
remove LOD_Decimator (c++ decimator), now replaced by bmesh decimator. also remove CTR c++ classes that are no longer used.
style cleanup
remove $Id: tags after discussion on the mailign list: markmail.org/message/fp7ozcywxum3ar7n
doxygen: intern/container tagged
remove nan-makefiles
remove config.h references, was added for automake build system rev around 124-126 but isnt used by any build systems now.
correct fsf address
Patch from GSR that a) fixes a whole bunch of GPL/BL license
blocks that were previously missed; and b) greatly increase my
ohloh stats!
Yes I did it again ;)
added the following 3 lines to everything in the intern dir:
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif
Kent
--
mein@cs.umn.edu
Initial revision | https://git.blender.org/gitweb/gitweb.cgi/blender.git/atom?f=intern/container/intern | CC-MAIN-2021-04 | refinedweb | 120 | 59.3 |
set Object direct or indirect?
nimo frey
Ranch Hand
Joined: Jun 28, 2008
Posts: 580
posted
Apr 15, 2009 03:49:36
What is better:
to provide a void method that sets the properties of the object:
public class Test
{
    public void init()
    {
        MyObject o = new MyObject();
        this.getProps(o);
    }

    public void getProps(MyObject o)
    {
        o.setX(...);
    }
}
or to have a return type and set the value explicitly:
public class Test2
{
    public void init()
    {
        MyObject o = new MyObject();
        o.setOtherObject(this.getProps(o));
    }

    public OtherObject getProps(MyObject o)
    {
        OtherObject o1 = ...;
        return o1;
    }
}
What is better? Why?
Ernest Friedman-Hill
author and iconoclast
Marshal
Joined: Jul 08, 2003
Posts: 24166
posted
Apr 15, 2009 05:19:11
Neither one makes much sense to me, especially the method "getProps()" in the first version which in fact doesn't "get"
anything.
A method named "getX" should always return X.
What's wrong with the simpler, more obvious
public void init()
{
    MyObject o = new MyObject();
    o.setX(...);
}
or even better, giving the class a constructor?
MyObject o = new MyObject (x);
[Jess in Action]
[AskingGoodQuestions]
Steve Luke
Bartender
Joined: Jan 28, 2003
Posts: 3932
posted
Apr 15, 2009 05:40:28
This isn't a direct answer to your question, I don't think, but it is important.
1) a getXXX method should return something. It is a standard naming convention, and should be followed.
2) The value that a getXXX method returns should be related to what the method name is. For example, a method called getProps() should return a value that represents the 'Properties' of the Object it is being called on (maybe a Properties object, or a Map of name-value pairs, or even a List of properties).
Your method seems to be 'setting up' or 'filling' the properties of the passed in Object, not returning them, so using getXXX is probably not a good idea. I would suggest using an alternate name, like:
public void fillProperties(MyObject o) { //... }
Or, assuming that MyObject type is used to represent the properties you are 'getting', you could have the 'getProps' method create and return the MyObject object:
public MyObject getProperties() // MyObject not passed in
{
    MyObject o = new MyObject();
    //...
    return o;
}
Again, standard get methods usually don't have parameters, but it isn't abnormal to have them. So you might do something like this:
public MyObject getProperties(MyObject o)
{
    //...
    return o;
}
The important part with the get method is that there is a return value, and that the return value has a relationship to the work the method is doing. So I wouldn't consider filling MyObject with properties and returning some other view or some single Property from inside MyObject as being good practice.
If setting up the OtherObject is complex and you feel it should be done in a different method, feel free to do so, but it would be easier to understand if it went one of these two ways:) { //... }
Or
public void init()
{
    MyObject o = getProperties();
    fillOtherObject(o.getOtherObject()); // MyObject has a reference to OtherObject, or makes one based on its data
}

public MyObject getProperties()
{
    MyObject o = new MyObject();
    //...
    return o;
}

void fillOtherObject(OtherObject o)
{
    //...
}
Steve
nimo frey
Ranch Hand
Joined: Jun 28, 2008
Posts: 580
posted
Apr 15, 2009 12:18:31
Thank you both, that helped me a lot!!
This is exactly what I was doing:) { //... }
The naming convention get/set... Now it's clear, thank you!
I agree. Here's the link:
Chapter 3. Compiling and Building
3.1. GNU Compiler Collection (GCC)
The GNU Compiler Collection includes compilers (such as gcc and g++), run-time libraries (such as libgcc, libstdc++, libgfortran, and libgomp), and miscellaneous other utilities.
3.1.1. Language Compatibility
The following is a list of known incompatibilities between the Red Hat Enterprise Linux 6 and 5 toolchains.
The following is a list of known incompatibilities between the Red Hat Enterprise Linux 5 and 4 toolchains.
3.1.2. Object Compatibility and Interoperability.
3.1.3. Running GCC.
3.1.3.1. Simple C Usage
Example 3.1. hello.c
#include <stdio.h>

int main()
{
    printf("Hello world!\n");
    return 0;
}
Procedure 3.1. Compiling a 'Hello World' C Program
- Compile Example 3.1, “hello.c” into an executable with:
  ~]$ gcc hello.c -o hello
  Ensure that the resulting binary hello is in the same directory as hello.c.
- Run the hello binary, that is, ./hello.
3.1.3.2. Simple C++ Usage.
3.1.3.3. Simple Multi-File Usage
Example 3.3. one.c
#include <stdio.h>

void hello()
{
    printf("Hello world!\n");
}
Example 3.4. two.c
extern void hello();

int main()
{
    hello();
    return 0;
}
3.1.3.4. Recommended Optimization Options
It is very important to choose the correct architecture for instruction scheduling. By default GCC produces code optimized for the most common processors, but if the CPU on which your code will run is known, the corresponding -march= and -mtune= options can be used. The compiler flag -O2 is a good general-purpose optimization level.
3.1.3.5. Using Profile Feedback to Tune Optimization Heuristics
3.1.3.6. Using 32-bit compilers on a 64-bit host.
3.1.4. GCC Documentation
Man pages are available for cpp, gcc, g++, gcj, and gfortran.
02-04-2011 08:28 AM
Hello,
I am trying to make a LabelField act like a ButtonField in that it will be clickable to perform an event. So I have created my own field that extends LabelField. I have also set the new field to change text color based upon whether the field is active or not. The field is changing colors on focus without a problem. The one problem that I am having is this:
Whenever I register the field for a change listener (field.setChangeListener) and then I go to run the code, the field is automatically being clicked repeatedly in a loop of what seems like every second. This happens without me clicking on the field. When I remove the setChangeListener(), this doesn't happen. I tried using a ButtonField and everything works fine, so it must be a problem with my custom field. Would someone mind taking a look at my code? Thanks!
public class LinkField extends LabelField
{
    private boolean active;

    public LinkField(String text, long style)
    {
        super(text, style);
        Font appFont = Font.getDefault().derive(Font.BOLD, 7, Ui.UNITS_pt);
        setFont(appFont);
    }

    protected void drawFocus(Graphics graphics, boolean on)
    {
    }

    protected void paint(Graphics graphics)
    {
        if (active)
        {
            graphics.setColor(0x0054a6);
        }
        else
        {
            graphics.setColor(Color.RED);
        }
        super.paint(graphics);
    }

    protected boolean navigationClick(int status, int time)
    {
        fieldChangeNotify(1);
        return true;
    }

    protected void onFocus(int direction)
    {
        active = true;
        invalidate();
        super.onFocus(direction);
    }

    protected void onUnfocus()
    {
        active = false;
        invalidate();
        super.onUnfocus();
    }

    public boolean keyChar(char key, int status, int time)
    {
        if (key == Characters.ENTER)
        {
            fieldChangeNotify(0);
            return true;
        }
        return false;
    }
}
02-04-2011 09:15 AM
Do you mean it's firing your fieldChangeNotify every second, or looks like it's gaining focus?
02-04-2011 09:28 AM
02-04-2011 09:30 AM - edited 02-04-2011 09:31 AM
You may try overriding fieldChangeNotify() to only be allowed to call super.fieldChangeNotify() when the correct set of circumstances are met. For instance set a flag before you call it in your navigationClick method and then check for that in fieldChangeNotify. If it exists, call super.fieldChangeNotify and reset the flag. If it doesn't, just ignore it.
02-04-2011 09:55 AM
Have you set the LabelField to be focusable?
You could try
public LinkField(String text, long style)
{
    super(text, style | LinkField.FOCUSABLE);
    Font appFont = Font.getDefault().derive(Font.BOLD, 7, Ui.UNITS_pt);
    setFont(appFont);
}
02-04-2011 10:15 AM
Remove calling super.onFocus and super.onUnfocus from your overrides - they never achieve anything and might easily mess up your field's inner works.
Even a more robust solution - make your field implement FocusChangeListener, call setFocusListener(this) in the constructor and move all your active = ... + invalidate() logic into the focusChanged method. See if the problem persists.
And yes, heed the advice of BeMor and make your Field FOCUSABLE (alternatively, override isFocusable and control its return via other methods).
02-04-2011 02:01 PM
Hi jprofitt,
I have added the following, which seems to work:
protected void fieldChangeNotify(int context)
{
    if (context == 1)
    {
        try
        {
            this.getChangeListener().fieldChanged(this, context);
        }
        catch (Exception e)
        {
        }
    }
}
So navigationClick() will set a flag of 1, and that will be recognized in the above fieldChangeNotify(). I just want to make sure that I did this properly...
Thanks!
02-04-2011 02:02 PM
I removed the super.onFocus and super.onUnfocus calls, thanks. They didn't cause the problem, but if they don't really do anything, then I guess there is no need in having them..
02-04-2011 02:04 PM - edited 02-04-2011 02:04 PM
Yep that's what I was talking about, but dont' forget to reset your flag. Hope it works out for you!
02-04-2011 02:07 PM
Thanks for your help! | https://supportforums.blackberry.com/t5/Java-Development/Trouble-with-custom-LabelField/m-p/774467 | CC-MAIN-2017-09 | refinedweb | 652 | 59.6 |
Unlike fields, properties are not classified as variables. Therefore, you cannot pass a property as a ref or out parameter.
Properties have many uses: they can validate data before allowing a change; they can transparently expose data on a class where that data is actually retrieved from some other source, such as a database; they can take an action when data is changed, such as raising an event, or changing the value of other fields.
Properties are declared in the class block by specifying the access level of the field, followed by the type of the property, followed by the name of the property, and followed by a code block that declares a
get-accessor and/or a
set accessor. For example:
public class Date
{
    private int month = 7; // Backing store

    public int Month
    {
        get { return month; }
        set
        {
            if ((value > 0) && (value < 13))
            {
                month = value;
            }
        }
    }
}
Auto-implemented properties provide simplified syntax for simple property declarations. For more information, see Auto-Implemented Properties.
The get Accessor:
class Person
{
    private string name; // the name field

    public string Name   // the Name property
    {
        get { return name; }
    }
}
When you reference the property, except as the target of an assignment, the
get accessor is invoked to read the value of the property. For example:
Person person = new Person();
//...
System.Console.Write(person.Name); // the get accessor is invoked here
private int number;

public int Number
{
    get { return number++; } // Don't do this
}
The
get accessor can be used to return the field value or to compute it and return it. For example:
class Employee
{
    private string name;

    public string Name
    {
        get { return name; }
    }
}
BBC micro:bit
Tilt Sensor
Introduction
The micro:bit has a built-in accelerometer which can be used to work out which way it is tilted, and the gestures you have access to in MicroPython give you all of the information you need for most circumstances. The tilt sensors on this page are a much simpler concept: in its simplest form, the sensor can only tell you whether or not you are tilting it along a single axis. The general principle is to have a metal ball that moves when the sensor is tilted, making or breaking a circuit, much as pressing a button completes or breaks a circuit. Older sensors used mercury; sensors you buy should state that they are free of mercury. The sensors on this page are of an on/off nature: you get no measure of how much the sensor is tilted, just the direction.
It turns out that this can be quite useful. There are odd occasions where you want the tilt of the micro:bit and the tilt of another part of the circuit to be independent, perhaps for a device that is used by two people at a time (like a game). At the basic end, you are talking a couple of quid for a sensor.
These are the two I used,
The breakout board on the left is a Sparkfun board called 'tilt-a-whirl'. It was used in a soldering kit of a Simon game and was discontinued some time ago. It's not for sale these days but you may come across one or something similar one day. It measures changes in tilt on 4 directions - forward, back, left, right. The sensor on the right is the one you tend to see these days. You connect it up like you do a button and watch the readings.
Circuit
I put both sensors on the same breadboard but wrote separate programs to test them. You can replace my pin choices with any of the GPIO.
The simple sensor is wired as we would a button, with a 10K resistor.
Programming
For the sensor at the top of the diagram, try this, look at the result and vary as you like,
from microbit import *

while True:
    buttonState = pin0.read_digital()
    if buttonState == 0:
        display.show(Image.YES)
    else:
        display.clear()
    sleep(20)
For the tilt-a-whirl sensor, we read 2 digital values. That gives 4 combinations of 1 and 0, a 2 bit value in binary. We use a bitwise left shift to set the second bit and then use a logical OR to set the rightmost or least significant bit. We can use that number to look up the orientation in a list.
from microbit import *

def ReadTilt():
    a = pin8.read_digital()
    b = pin1.read_digital()
    return (a << 1) | b

directions = ["F", "L", "R", "B"]
last = -1
while True:
    reading = ReadTilt()
    if reading != last:
        display.show(directions[reading])
        last = reading
    sleep(20)
When I was writing this, I held the breadboard with the pins of the breakout facing me. If you look at the breadboard diagram above, imagine picking it up and rotating it 90° anticlockwise; that's how I held it when I encoded the letter for each direction.
Challenges
- Do something interesting with the tilt switch or something more interesting with several of them.
- If you are lucky enough to have a go with the tilt-a-whirl breakout or can make a DIY version, you'll find that it offers pretty good performance for a 2 bit output. The example is good enough to make a usable 4-choice input for a game or circuit you are doing. | http://www.multiwingspan.co.uk/micro.php?page=tiltdig | CC-MAIN-2019-09 | refinedweb | 615 | 71.95 |
Hi all,
I'm trying to use Html.RenderAction with MVC 3.0 preview 1 and I'm getting an error. I have the following controller:
public class HelpController : Controller
{
    public ActionResult Detail(int id)
    {
        ViewModel.Title = "Help Title! " + id.ToString();
        ViewModel.Content = "Help Content!" + id.ToString();
        return View();
    }
}
and in another controller, I'm trying to do:
@Html.RenderAction("Detail", "Help", new { id = 3 });
but I keep getting:
CS1502: The best overloaded method match for 'Microsoft.WebPages.WebPageUltimateBase.Write(Microso
Hello my friends I tried to change this code to lambda but I could not pleaaaase help me:
var inforeport = from m in db.Products
                 where m.UnitPrice > info
                 select new { unit = m.UnitPrice, name = m.ProductName };
Sample the basic concepts of lambda expressions, explore their benefits, and witness how to use them to write more expressive programs.
Timothy Ng
MSDN Magazine September 2007
soapsuds (-url:args | -types:args | -is:args | -ia:args) [options]
Creates XML schemas for services in an assembly and creates assemblies from a schema. You can also reference the schema via its URL. Use SoapSuds to create client applications that communicate with .NET remoting servers. Use Wsdl.exe to create clients that communicate with .NET Web Services.
soapsuds -url: -os:app.xml
Specifies a domain name, if one is required for authentication.
Generates code (equivalent to -od:.).
Specifies an HTTP proxy name (use this when connecting through an HTTP proxy).
Specifies an HTTP proxy port (use this when connecting through an HTTP proxy).
Specifies an input assembly file from which to import types. Do not include the .exe or .dll extension.
Specifies the directory that contains .dll files.
Specifies the input schema file.
Creates a proxy that is not wrapped (the default is a wrapped proxy).
Writes output to an assembly file. SoapSuds also generates source code.
Specifies the output directory.
Writes output to an XML schema file.
Specifies a password, if one is required for authentication.
Specifies the namespace for generated proxy code. This should only be used for interop namespaces.
Specifies the URL for the WSDL's service endpoint.
Signs the generated assembly using the specified key file. See Sn.exe.
Specifies one or more input types, with optional assembly name and service endpoint.
Specifies the location of the XML schema.
Specifies a username, if one is required for authentication.
Creates a proxy that is wrapped (this is the default). | http://etutorials.org/Programming/C+in+a+nutshell+tutorial/Part+III+Language+and+Tools+Reference/Chapter+23.+C+Development+Tools/SoapSuds.exe/ | CC-MAIN-2018-13 | refinedweb | 252 | 62.95 |
02 July 2009 17:39 [Source: ICIS news]
By Joe Kamalick
WASHINGTON
That means that the Democrats can easily quash efforts by the minority Republicans to block legislation.
Well, maybe not quite so easily.
In theory, the party that holds 60 seats in the Senate can at will vote cloture, meaning they can vote to end debate on a bill and proceed to a vote.
A bill can pass in the 100-seat Senate with as few as 51 votes. But to end debate, you have to have 60 senators saying “Enough, let’s vote!”
Because of the lopsided margin needed to end debate - which resulted from an 1806 Senate rule change that left no alternative means for ending discussion - various minorities in the US upper chamber over the years have used the filibuster or endless talk to block a final vote on legislation they oppose.
Often when a filibuster is launched - or even if the minority party merely threatens a filibuster - a bill’s sponsors will pull the matter from consideration.
In the current political climate, Republicans and their allies in commerce and industry had been relying on the filibuster as a last-ditch opportunity in the Senate to block legislation - climate change, health care reform, easing of labour union election rules, etc. - that they could not otherwise defeat in straight-up votes in either the House or Senate.
Both major
Nor is the 60-vote super majority necessarily a sure thing for Democrats in the Senate now.
First, there are only 58 Democrats in the Senate, not 60. The two senators who style themselves as “independent” and not affiliated with either party are Joe Lieberman of
Lieberman is a lifelong Democrat and only changed to “independent” status because he lost his state Democrat Party nomination for the Senate race in 2006 and ran successfully as an independent to hang on to his seat.
Although a long-time independent, Sanders also sides chiefly with Democrats and is part of the Democrat caucus, and he is counted as a Democrat for purposes of committee assignments.
So, de facto if not de jure, Democrats do have their 60-vote majority in the Senate.
But, not so fast.
Two key Democrats, Ted Kennedy of Massachusetts and Robert Byrd of West Virginia, are both in ill health and only rarely have participated in votes or other Senate business in the last six months or so.
In addition, there are a handful of Democrats who are considered, on some issues, moderate or even conservative.
Senator Mary Landrieu of
Senator Kirsten Gillibrand of New York, newly appointed to the Senate to serve out former Senator Hillary Clinton’s term when the latter was named secretary of state, also displays some decidedly conservative philosophies.
Gillibrand supports extending President George Bush’s tax cuts - which other Democrats have decried as “tax cuts for the rich” - on grounds that the economy needs a lower tax burden.
She also is a strong supporter of US Second Amendment gun ownership rights, another position typical among Republicans and conservatives but rare among mainstream liberal Democrats.
Neither is Senator Mark Begich of
Begich supports drilling for oil and gas in the Arctic National Wildlife Refuge (ANWR), a declaration that would make most Democrats and environmentalists faint dead away. He also is a strong backer of gun rights.
Those are just a few Senate Democrats who cannot be considered to be in lock-step with every issue the Democrat leadership or the White House might espouse. Depending on the issue at hand, there are other Senate Democrats who might bolt the party on any given vote on any given day.
So, while on paper, Democrats do indeed command the much-sought Senate super-majority, making it work in terms of real politics can be another story altogether.
We may get a first look at how the 60-vote Democrat Senate will work when a climate bill comes up for consideration. There is enough in that controversial, 1,200-page monster bill to alienate perhaps dozens of senators, Republican and Democrat alike.
Major pieces of legislation are by nature controversial, often offending as many as they please, so putting together a 60-vote majority on any big issue is going to be tough for the Senate Democrat leaders.
Indeed, the Founding Fathers intended that it be difficult to get legislation passed in Congress, especially in the | http://www.icis.com/Articles/2009/07/02/9229338/insight-democrats-super-majority-is-no-sure-thing.html | CC-MAIN-2015-22 | refinedweb | 734 | 55.78 |
21.8 Retrieving Data from Multiple Tables with LINQ
In the two previous examples, we used data bindings to display data extracted using LINQ to SQL. In this section, we concentrate on LINQ to SQL features that simplify querying and combining data from multiple tables. You’ve already seen the SQL INNER JOIN operator in Section 21.4.4—LINQ to SQL provides similar capabilities and allows more complex operations as well. Figure 21.27 uses LINQ to SQL to combine and organize data from multiple tables.
Fig. 21.27 Using LINQ to perform a join and aggregate data across tables.
 1   // Fig. 21.27: JoiningTest.cs
 2   // Using LINQ to perform a join and aggregate data across tables.
 3   using System;
 4   using System.Linq;
 5
 6   namespace JoiningWithLINQ
 7   {
 8      public class JoiningTest
 9      {
10         public static void Main( string[] args )
11         {
12            // create database connection
13            BooksDataContext database = new BooksDataContext();
14
15            // get authors and ISBNs of each book they co-authored
16            var authorsAndISBNs =
17               from author in database.Authors
18               join book in database.AuthorISBNs
19                  on author.AuthorID equals book.AuthorID
20               orderby author.LastName, author.FirstName
21               select new { author.FirstName, author.LastName, book.ISBN };
22
23            Console.WriteLine( "Authors and ISBNs:" ); // display header
24
25            // display authors and ISBNs in tabular format
26            foreach ( var element in authorsAndISBNs )
27            {
28               Console.WriteLine( "\t{0,-10} {1,-10} {2,-10}",
29                  element.FirstName, element.LastName, element.ISBN );
30            } // end foreach
31
32            // get authors and titles of each book they co-authored
33            var authorsAndTitles =
34               from title in database.Titles
35               from book in title.AuthorISBNs
36               let author = book.Author
37               orderby author.LastName, author.FirstName, title.BookTitle
38               select new { author.FirstName, author.LastName,
39                  title.BookTitle };
40
41            Console.WriteLine( "\nAuthors and titles:" ); // header
42
43            // display authors and titles in tabular format
44            foreach ( var element in authorsAndTitles )
45            {
46               Console.WriteLine( "\t{0,-10} {1,-10} {2}",
47                  element.FirstName, element.LastName, element.BookTitle );
48            } // end foreach
49
50            // get authors and titles of each book
51            // they co-authored; group by author
52            var titlesByAuthor =
53               from author in database.Authors
54               orderby author.LastName, author.FirstName
55               let name = author.FirstName + " " + author.LastName
56               let titles =
57                  from book in author.AuthorISBNs
58                  orderby book.Title.BookTitle
59                  select book.Title.BookTitle
60               select new { Name = name, Titles = titles };
61
62            Console.WriteLine( "\nTitles grouped by author:" ); // header
63
64            // display titles written by each author, grouped by author
65            foreach ( var author in titlesByAuthor )
66            {
67               // display author's name
68               Console.WriteLine( "\t" + author.Name + ":" );
69
70               // display titles written by that author
71               foreach ( var title in author.Titles )
72               {
73                  Console.WriteLine( "\t\t" + title );
74               } // end inner foreach
75            } // end outer foreach
76         } // end Main
77      } // end class JoiningTest
78   } // end namespace JoiningWithLINQ
The code combines data from the three tables in the Books database and displays the relationships between the book titles and authors in three different ways. The LINQ to SQL classes used in this example were created using the steps described in Section 21.6.1. As in previous examples, the BooksDataContext object (declared in line 13) is needed to be able to query the database.
The first query in the example (lines 17–21) returns results identical to those in Fig. 21.19. It uses LINQ’s join clause, which functions like SQL’s INNER JOIN operator—the generated SQL is nearly identical to the SQL given earlier in Section 21.4.4. As in the SQL example, only rows with the same AuthorID are joined together. Like the from clause, the join clause introduces a range variable—unlike the from clause, it specifies a criterion for joining. The join clause uses equals instead of the == comparison operator because the join criterion is not an arbitrary Boolean expression—you may only join based on equality. Like nested repetition statements, join clauses cause multiple range variables to be in scope—other clauses can access both range variables to combine data from multiple tables (lines 20–21).
The second query (lines 34–39) gives similar output, but it does not use the join query operator. Operations that would require a join in SQL often do not need one in LINQ to SQL, because it automatically creates properties based on foreign-key relationships. These properties enable you to easily access related rows in other tables. Line 35 uses the generated AuthorISBNs property of the Title class to query only the rows in the AuthorISBN table that link to that row of the Titles table. It does this by using multiple from clauses in the same query. In this example the inner from clause iterates over data related to the outer range variable, but the sequences iterated over may be completely unrelated. As with a join clause, both range variables may be used in later clauses. The author variable created in the let clause (line 36) refers to book.Author, demonstrating the automatically generated link between the AuthorISBN and Authors tables based on the foreign-key relationship between them.
Lines 53–60 contain the final query in the example. Instead of returning a flat result set, with data laid out in relational-style rows and columns, the results from this query are hierarchical. Each element in the results contains the name of an Author and a list of Titles that the author wrote. The LINQ query does this by using a nested query in the second let clause (lines 56–59). The outer query iterates over the authors in the database. The inner query (lines 57–59) takes a specific author and retrieves all titles that the author worked on. It does this by navigating the properties created by the foreign-key relationships in the database. The book range variable represents each pair of AuthorID and ISBN in the AuthorISBN table belonging to the author range variable of the outer query. It accesses the Title property of book to retrieve the row in the Titles table with that ISBN and then uses the BookTitle property to include the title of the book in the results. This list of titles is placed into the Titles property of the anonymous type created in the select clause, which also has a Name property that contains the author’s full name. These results are then displayed using nested foreach statements (lines 65–75).
Relational databases cannot return this kind of hierarchical result set, so, unlike the previous two queries, it would be impossible to write a query like this in SQL. Before LINQ, you’d have had to retrieve the results in a flat table like the other two queries, then transform them into the desired format. LINQ does this work for you, allowing you to ignore the relational storage model and concentrate on the object structure that fits your application. | https://www.informit.com/articles/article.aspx?p=1251169&seqNum=8 | CC-MAIN-2020-50 | refinedweb | 1,156 | 55.74 |
This is a page for planning Doctrine integration.
Doctrine 1
Questions
- What namespace should we use?
Todo
- Create a Doctrine1 paginator adapter
- Create a Doctrine1 auth adapter
- Create Zend_Tool providers for Doctrine1
Doctrine 2
Questions
- What namespace should we use?
- Would the doctrine2 classes use 5.3?
- Replication support? Probably need to ask the Doctrine team about this.
Todo
- Update the main Zend_Loader_Autoloader to support both 5.2 and 5.3 style class loading.
- Update the Zend_Loader_Autoloader_Resource to support namespaces
- Create a Doctrine2 paginator adapter
- Create a Doctrine2 auth adapter
- Create a zend server cache adapter for the \Doctrine\Common\Cache (submit to Doctrine codebase) or support Zend_Cache?
- Zend Profiler support?
- Look at the ZF directory structure, do we need to add any folders etc for metadata for instance?
- Create Zend_Tool providers for Doctrine2
#include <stdio.h>

int main(int argc, const char *const *argv)
{
   if ( argc > 1 )
   {
      FILE *file = fopen(argv[1], "r"); /* Get filename from command line. */
      if ( file )
      {
         int ch, prev = '\n' /* so empty files have no lines */, lines = 0;
         while ( (ch = fgetc(file)) != EOF ) /* Read all chars in the file. */
         {
            if ( ch == '\n' )
            {
               ++lines; /* Bump the counter for every newline. */
            }
            prev = ch; /* Keep a copy to later test whether... */
         }
         fclose(file);
         if ( prev != '\n' ) /* ...the last line did not end in a newline. */
         {
            ++lines; /* If so, add one more to the total. */
         }
         printf("lines = %d\n", lines);
      }
      else
      {
         perror(argv[1]);
      }
   }
   return 0;
}
Alan G Isaac wrote:
On Thu, 25 May 2006, Robert Kern apparently wrote:
What continuity? This is floating-point arithmetic.
Sure, but a continuity argument suggests (in the absence of specific floating point reasons to doubt it) that a better approximation at one point will mean better approximations nearby. E.g.,
epsilon = 0.00001
sin(100*pi+epsilon)
9.999999976550551e-006
sin((100*pi+epsilon)%(2*pi))
9.9999999887966145e-006
Compare to the bc result of 9.9999999998333333e-006
bc 1.05
Copyright 1991, 1992, 1993, 1994, 1997, 1998 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
scale = 50
epsilon = 0.00001
s(100*pi + epsilon)
.00000999999999983333333333416666666666468253968254
You aren't using bc correctly.
bc 1.06
Copyright 1991-1994, 1997, 1998, 2000 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.
100*pi
0
If you know that you are epsilon from n*2*π (the real number, not the floating point one), you should just be calculating sin(epsilon). Usually, you do not know this, and % (2*pi) will not tell you this. (100*pi + epsilon) is not the same thing as (100*π + epsilon).
FWIW, for the calculation that you did in bc, numpy.sin() gives the same results (up to the last digit):
from numpy import *
sin(0.00001)
9.9999999998333335e-06
You wanted to know if there is something exploitable to improve the accuracy of numpy.sin(). In general, there is not. However, if you know the difference between your value and an integer multiple of the real number 2*π, then you can do your floating-point calculation on that difference. But you will not in general get an improvement by using % (2*pi) to calculate that difference.
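The point is easy to see in plain Python (using the standard math module; numpy.sin behaves the same way): reducing with `% (2*pi)` removes multiples of the floating-point constant `2*pi`, not of the real number 2π, so the error inherited from the inexact constant survives.

```python
import math

eps = 1e-5
x = 100 * math.pi + eps        # uses the floating-point constant pi, not the real number

naive = math.sin(x)                    # ~9.99999997655e-06 on IEEE doubles
reduced = math.sin(x % (2 * math.pi))  # ~9.99999998880e-06: still off by ~1e-14
exact = math.sin(eps)                  # ~9.99999999983e-06, matching the bc value

# Reduction by the floating-point 2*pi cannot recover the ~1.2e-14 error
# that comes from 100*math.pi differing from the real 100*pi.
print(naive, reduced, exact)
```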
16 July 2012 07:20 [Source: ICIS news]
By Heng Hui
SINGAPORE
However, biodiesel makers in
Indonesian biodiesel makers can sell their PME at lower prices than Malaysian producers, since their feedstock cost is lower, said Malaysian PME converters.
In 2011, Indonesian PME refiners exported around 1.4m tonnes out of their total production of 2.2m tonnes to Europe, mainly Spain and Italy, according to industry sources. The producers’ market share in
In contrast,
Only some of the 29 PME factories in
“It will be good if we could reform the tax structure to something similar to Indonesia’s, or increase the export tax for CPO to a sufficient level such that the cost of PME manufacture in Malaysia becomes competitive with Indonesian PME,” a Malaysian biodiesel maker said.
With such an export tax in place, Malaysian biodiesel makers will enjoy lower-priced feedstock and be able to compete on the same level, he explained.
However, a biodiesel trader said it is unlikely that any action will be taken before the Malaysian election, since the move will benefit only a few biodiesel corporations and anger many small palm oil farmers, driving away votes.
“It’s a political issue. It’s unlikely the government would introduce any tax in the near term because this would greatly depress CPO prices. The government would become greatly unpopular since the country’s stock exchange has also just listed one of the world’s biggest IPO [initial public offering] of palm oil giant Felda,” the trader said.
A biodiesel maker in
However, a market participant in Malaysia pointed out Indonesian’s CPO refining capacity is rapidly expanding at a rate far beyond the actual CPO production, and demand for CPO raw material in Indonesia will increase, boosting the prices of CPO and hence PME in Indonesia.
However, it remains unlikely that Indonesia CPO prices will increase to the extent that the costs of PME production become on par with
Southeast Asian PME producers include Wilmar and Musim | http://www.icis.com/Articles/2012/07/16/9578354/malaysia-pme-producers-seek-cpo-export-tax-change.html | CC-MAIN-2014-42 | refinedweb | 335 | 54.66 |
Update 8:47 PM: after looking at it with a logic analyzer, it appears the first byte of the read buffer is firing off immediately... is there no way to control how quickly the read buffer is exposed? I'm running I2C at 100 kHz and I find it odd that I can't get in front of the byte.
Note: the CyDelay(1000) (line 25) occurs after the 0xFF in the above capture; I don't think it's possible to disable the interrupts fast enough.
Original Post:
I am using the PSoC 5LP as a slave, and when exposing the read buffer to the master, index #0 always holds the leftover value from the previous transaction. I see there are other users having the same issue. I now believe there is either a flaw in the read buffer or something is not documented correctly for the I2C component. I have attached the project to this post, along with an image of the bridge control showing the output: it first writes a command to slave 60 and reads back at 61. The first byte shows 0xFF while it should be 0x0A. The delay of 1 second demonstrates that there is no timing issue with the first byte being read faster than the succeeding bytes.
Reference link to another user having the same issue: PSoCDeveloper • Incorrect first byte of data during I2C Reads
#include <project.h> #include <stdio.h> #define I2CBUFFERSIZE 10 uint8 i2cReadBuffer[I2CBUFFERSIZE]; uint8 i2cWriteBuffer[I2CBUFFERSIZE]; int main() { CyGlobalIntEnable; /* Enable global interrupts. */ /* Start I2C slave (SCB mode) */ I2C_1_Start(); I2C_1_SlaveInitReadBuf(i2cReadBuffer, I2CBUFFERSIZE); I2C_1_SlaveInitWriteBuf(i2cWriteBuffer, I2CBUFFERSIZE); for(;;) { CyWdtClear(); /* Write complete: parse command packet */ if (0u != (I2C_1_SlaveStatus() & I2C_1_SSTAT_WR_CMPLT)) { CyGlobalIntDisable; CyDelay(1000); for(int i = 0; i < I2CBUFFERSIZE; i++){ i2cReadBuffer[i] = i+10; // Expected output should be 0x0A, 0x0B, 0x0C..... } CyGlobalIntEnable; /* Clear slave write buffer and status */ I2C_1_SlaveClearWriteBuf(); (void) I2C_1_SlaveClearWriteStatus(); } /* Read complete: expose buffer to master */ if (0u != (I2C_1_SlaveStatus() & I2C_1_SSTAT_RD_CMPLT)) { /* Clear slave read buffer and status */ I2C_1_SlaveClearReadBuf(); (void) I2C_1_SlaveClearReadStatus(); for(int i = 0; i < I2CBUFFERSIZE; i++){ i2cReadBuffer[i] = 0xFF; } } } } /* [] END OF FILE */ | https://community.cypress.com/thread/32827 | CC-MAIN-2019-04 | refinedweb | 341 | 62.58 |
Your Account
by Uche Ogbuji
The.
v).
time.
amara.saxtools.normalize_text_filter.
What real-world conditions would you like to see represented in respectable Python/XML benchmarks?
You're benchmarking subsecond operations by timing the entire Python process? Priceless.
You definitely want to use time.time() or something like that *inside* the program to avoid measuring the Python startup and shutdown time, which is hardly relevant, unless you build web applications using CGI scripts. :)
That said, it is good idea to communicate our benchmarking strategy. What I did when I was curious what Fredrik used was just mail him and ask him. I've been using the same strategy as a result. I get numbers slightly different from his, though not drastically. It's likely due to platform/compiler differences (I'm on Linux, he's on Windows). See my weblog for some numbers:
(fwiw, the virtual debunking team currently suspects that Uche has done most or all of the following mistakes: included Python startup and shutdown times in his figures, included module load times in his figures (cET 0.9 can parse OT.XML nine times in the time it takes Python to load Amara's bindtools component), sent output to a terminal instead of a file or /dev/null, used non-idiomatic solutions for the SAX and cET samples (for cET, Uche's code is 40% slower than the most obvious solution), and, quite possibly, used an unreleased version of the underlying cDomlette library, which is reportedly 3-4 times faster than the current release. And yes, the pystone figures don't seem to match his hardware description, either. This article should be archived in the "the whole bloody breakfast on my face" category, and replaced with an apology.)
Pystone(1.1) time for 50000 passes = 1.3
This machine benchmarks at 38461.5 pystones/second
perhaps some CPU scaling was going on on Uche's so it wasn't running at the full 1700 mhz?
but of course, running the Amara tests at a higher clockspeed, and the pystone/sax/cet tests at a lower speed, might also explain the 3X slowdown.
Sure, I can accept that writing a proper test harness within Python is a better way to time it. And how useful is it that we can even have this discussion, since I didn't make a mystery of my benchmarking technique. And any adjustments are easy, since I didn't make a mystery of my code.
The ensuing discussion is somewhat along the lines I suggested in the article (and it has such color and character to go with it). But crucially, the color interferes with any understanding that there is a lot more to test (I gave examples), and many more ways to test it before we have MIPS-wars quality benchmarks.
All the effbot bluster in the world does not change the fact that benchmarking requires transparency, which has been completely missing from the Python/XML gorilla match until today. And it doesn't change the fact that his benchmarks are useless, essentially measuring conditions completely alien to anyone's actual use.
So effbot has useless benchmarks, and argues that I also now have useless benchmarks. Nowhere to go from there but up.
I always so like the breath of fresh air Uche brings to most topics. His benchmark examples are nicely down to earth (I would point out that I always do almost exactly the same thing--including full code--when I benchmark tools in my articles).
Anyway, with no real a priori sense of how it would come out, I decided to try gnosis.xml.objectify in the mix. I like my API best and all :-).
First, the script used:
$ cat time_xo.py
from gnosis.xml.objectify import make_instance, walk_xo, tagname
ot = make_instance('ot/ot.xml')
for node in walk_xo(ot):
    if tagname(node) == 'v' and 'begat' in node.PCDATA:
        print node.PCDATA
I don't use the gnosis.xml.objectify.utils.XPath() function here, though I could. That's because I don't really believe XPath is entirely Pythonic.
The timings are quite consistent between five runs:
$ time python2.3 time_xo.py > verses
real 0m7.200s
user 0m5.790s
sys 0m0.350s
real 0m7.200s
user 0m5.790s
sys 0m0.350s
Oh... I run on a quite different architecture than Uche, but the Pystone on my Powerbook is just about the same as Uche's:
$ uname -a
Darwin gnosis-powerbook.local 7.7.0 Darwin Kernel Version 7.7.0: Sun Nov 7 16:06:51 PST 2004; root:xnu/xnu-517.9.5.obj~1/RELEASE_PPC Power Macintosh powerpc
$ python /sw/lib/python2.3/test/pystone.py
Pystone(1.1) time for 50000 passes = 3.04
This machine benchmarks at 16447.4 pystones/second
At the very least, you should break out the statistics into startup time, module import time, and actual run time of whatever function represents the program's functionality. Avoiding console output would be a good idea too.
These things are pretty basic to benchmarking any tool for any language -- and I don't have any axe to grind about these tools; I actively try to avoid XML as much as possible anyway. :)
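The separation described above is straightforward with an in-process timer. A minimal sketch (the stdlib ElementTree parser is only a stand-in for whichever library is under test; the point is reporting parse and query time separately, with no interpreter startup or console output inside the measured region):

```python
import time
from xml.etree.ElementTree import fromstring  # stand-in for the parser under test

# Build a small synthetic document so the sketch is self-contained.
doc = "<root>" + "<v>and he begat sons</v><v>other text</v>" * 500 + "</root>"

t0 = time.perf_counter()
tree = fromstring(doc)                                    # parse time only
t1 = time.perf_counter()
hits = [v.text for v in tree.iter("v") if "begat" in v.text]
t2 = time.perf_counter()

# Report the phases separately instead of timing the whole process.
print("parse: %.6fs  query: %.6fs  hits: %d" % (t1 - t0, t2 - t1, len(hits)))
```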
I like my (more Pythonic) API better than that in ElementTree, but if I get speed, why not take advantage of /F's underlying work? Of course, there're a zillion things I want to get around to, so it's not quite a promise.
That means we can now be far less concerned with parsing overhead. Since the structure is already Python-style, the overhead of ElementTree API calls can then be minimal, as is shown by the fast performance of the find operation in ElementTree. Non-C ElementTree find() sometimes can even beat libxml2 XPath, which is implemented in C.
lxml.etree can do a parse very quickly too, using the underlying libxml2 library. Unfortunately it isn't "done" yet: if you want to use the ElementTree API, there are Python proxies to be produced while the user accesses the XML. This has been made fairly fast by now, but it still lags behind ElementTree. For libxml2's native XPath this proxy overhead is far less, and you can get down to business right away.
If you want to know how I know all this, see my blog for a lot of benchmarking over the last couple of weeks. I didn't have a 'begat' test yet, but I did test a simple //v test, as Uche did in an earlier article.
What kind of excuse is that?
You're the one that brought up the whole thing, yet it seems that you have done a worse job at benchmarking than others. Very ironic.
I think your benchmarking method is very ad-hoc and you'd be better served if you fixed the glaring errors and posted an updated version of your findings.
I'm getting incomparably better results with cElementTree (running the same program as you do, but I'm benchmarking it with timeit, around 0.25 seconds/run) on a similar laptop. Could not test your framework since your FTP system is down.
Great non-point, Istvan Albert.
There are two points I believe Uche made that have not been addressed despite all of Fredrik Lundh's (effbot) blustering here, on his blog, and on his pythonware daily site. One is that Fredrik's benchmark is pretty useless because it just loads an XML file into a data structure but does nothing significant with it. Two is that Fredrik's useless benchmarks give the misleading impression then that some other XML tools are much horribly slower than they really are, when really most of the XML tools are quite comparable to one another speed-wise, and some of them are even better when you consider other issues like how easy they are to use. And really, since this is Python, ease of use is of primary importance. celementtree may or may not be the fastest, but I don't believe it is the easiest to use or install.
(as for your so-called arguments, some hints: for three processes that run in sequence, the total time is A+B+C, not max(A, B, C). if you set A to zero, the total will drop. second, how hard is it to "click on installer" or type "python setup.py install". thousands of people have already done it. I'm sure you can do it too, if you try. feel free to mail me if you need help.)
and of course you should take advantage of the stuff I'm doing. it may not save you that much, since you still have to create all the objects over at the python side of things, but it's worth trying. drop me a line if you have questions.
First of all, he insinuates that I ran my tests on Amara with my CPU clock speed set higher than when I ran my tests with cElementTree. That was bad (and stupid) enough. Now he implies that I logged in here and anonymously posted a note supporting my point.
This is not the sort of gross libel that I shall dignify with any response other than that I'll have no dealings with Fredrik anymore, directly or indirectly until he apologizes for his crude and infantile insinuations. And it's probably best if I don't ever run into him in person again...
I thought he was a feckless crybaby when he wanted me to apologize for my original post, but nothing in that post rises to the level of libel, and I think that Frederik has now revealed more of his own character than I think he might have wanted the world to know.
Fred.
Let me say this first: I have no investment in whose XML tool is the fastest or easiest to use or more compliant or whatever other standard you choose to apply. Like Phillip, I avoid XML where possible.
I also want to say that I am incredibly disappointed in the extreme lack of maturity some of you are displaying. It's as if you're looking to get offended by the other guys. This is not how adult communities behave.
No, I take that back: this is how adult communities behave all too often. But it's not how they should behave, and not how I've come to expect the Python community to behave.
Now, this article has, I think, a few valid points. First, I think that benchmark results should, wherever possible, come with the code and data that generated them, especially when they are part of the announcement of the package being benchmarked.
Second, I think a number of different kinds of benchmarks, performed with a variety of packages, is an important thing to have. I don't think that benchmark results that don't have this kind of breadth are worthless, though.
Third, Uche correctly points out that benchmark numbers are not the only factor to consider in choosing a tool. However, no one, not even Fredrik, is disputing this.
The primary area where this article misses the mark is the claim that Fredrik was being deceptive. Fredrik did not post a deceptive benchmark. He posted an incomplete set of benchmark results: one that did not include the actual code he used to derive his numbers although he documented his procedure elsewhere. I would encourage Fredrik to post the benchmark code the next time he advertises cElementTree with benchmark numbers. Not only does it encourage confidence in the numbers, but it will also serve as a useful tool for others. Apparently, there are any number of people who don't know how to properly benchmark Python code and fewer (certainly not I before this incident) who know why the standard solution, timeit.py, is inadequate for these tools. People can see what the current "best practices" are for ElementTree for the operations timed and contribute benchmarks for the tools that were not included.
The article also makes the incorrect claim that what Fredrik benchmarks is useless. It's not. The parse time and memory used are important components to the whole XML-wielding program and should be measured. What's more, these are factors that are shared by pretty much every program; I may not need to find text or construct a tree or extract certain tags in my program, but I certainly need to read in the data. Now these aren't the only quantities that should be measured, but the measurement is not useless just because it's the only one offered there. And Fredrik certainly wasn't hiding the fact that that was all that he was timing.
The article also comes to the conclusion that the benchmarks offered by Fredrik were "deceptive" based on the evidence of Uche's own benchmarks which yielded different numbers than Fredrik's. Following the article's logic, the goal was to measure something important that Fredrik's benchmarks didn't measure: find all tags with a certain text string. The benchmarks were done, the numbers were rather different than Fredrik's, and so he was being deceptive in posting his numbers. If the article's benchmarks were adequate measurements, this line of argument would make some sense. However, the article's benchmarking strategy does not accurately measure comparable times.
The fact that the article's benchmarks are open, with full code and documented timing strategy, does not change the fact that they are wrong. Furthermore, it does not change the fact that concluding that someone else is being deceptive because their results (accurately obtained) don't match up with your results (not accurately obtained) is wrong. Posting an article to O'Reilly falsely accusing someone else of deception instead of hashing it out in private or a semi-private forum like the XML-SIG is also wrong.
But you don't have to take my word for it. I redid the benchmarks from the article with a proper timing harness. The results from 5 runs of each package are given, in seconds, in the file timings.csv. The information about my system are given in comments. I couldn't run the saxtools version; I get an exception as documented. I also tried the Gnosis code that David Mertz posted, but Gnosis_Utils-1.1.1 doesn't seem to define one of the functions needed. I didn't implement the lxml version because I didn't feel like building it.
The results are broadly along the lines of what Fredrik posted. Say what you like about his attitude and his "bluster," the man doesn't lie with his benchmarks...
My intent was always to provide code and discussion towards a useful set of benchmarks for the Python community, but clearly this has proved an area where no sensible conversation is possible. My code is still available in the article, and if anyone is interested, they can do with it what they will. I personally have too much real work on my hands to continue with a matter whose contentiousness so far outweighs its importance.
You may not realize this, but this is rather offensive to myself (and probably others). In my mind at least I've been engaged in entirely sensible conversation about your benchmarks. I certainly believe sensible conversation is possible. If you believe my comments and suggestions are insensible, please point it out. You just declared them thus, after all.
Perhaps you just read my comments as part of a Fredrik-driven attack, or something, but I have been benchmarking XML the whole month now and I'm genuinely interested in improving the way we do benchmarks. I'm also curious about what went wrong with your particular attempt, and how to do it better next time.
I've asked a number of questions about this article, the benchmarks proposed in them and the benchmark results you get. There are some discrepancies which I'd like explained so we can avoid them in the future. I also think that the approach you take was not entirely correct (measuring Python startup time too, possibly printing to terminal), so I've been pointing that out too.
Instead of answering my comments and those of others, you've been focusing on Fredrik, whose benchmarks are of course what prompted you to write the whole article in the first place. It's not a surprise Fredrik feels attacked. But a more civil response from him would've been more productive.
This leads to the question whether you yourself are at all interested in calmly improving the Python XML benchmarking story. You've certainly been ignoring any civil attempts on my side to help doing so.
It would be unfortunate if I have to go home with the conclusion that this article was only written because you felt threatened by Fredrik's benchmarks, instead of what I'd prefer to believe: that you want to improve the way we benchmark XML libraries in Python.
I must admit the pystone numbers do sound a bit off...
If u guys send me your (non-viral) scripts, I'm happy to run it on my XP1600 and post the impartial results (though I must admit I own a copy of the OReilly book by Mr Lundh).
Easy solution, no?
Now to the middle east.....
You must have not read what I wrote at all. Did you see the first sentence in my paragraph? "There are two points I believe Uche made that have not been addressed..."
And then you complain that I was repeating Uche's arguments??? I was repeating Uche's arguments because....I was repeating Uche's arguments! Which you still have not addressed. Are you for real?
I really would like to see both parties calming down, admitting mistakes (not only those of technical nature) and reconcile. You are eminently respected in the Python community and usually great contributors and your reputation can only grow if you could bring yourself to do this.
Concerning the factual issue, what do you think of the following benchmark? Anything wrong with it?
"Just."
Secondly, it's great that it's available in identical form in both Python and C versions. Not everyone can install C modules on their server, but the C version offers a massive performance boost for those that can.
Lastly, the performance overhead of reading in and parsing the file is a significant, if not the only, benchmark. It is one constant that everyone needs to be able to do, but you cannot tell what the user plans to do from that point onwards.
I had written a similar C module for PHP and was planning to port it to Python before discovering elementtree. I'll now abandon that idea as Fredrik's module is perfect for what I need. Well done Fredrik.
Phillip.
© 2017, O’Reilly Media, Inc.
How do I configure service discovery with AWS Cloud Map through the AWS CLI?
Last updated: 2020-06-24
How do I create a hosted zone using AWS Cloud Map through the AWS Command Line Interface (AWS CLI)?
Short description
AWS Cloud Map automates DNS configuration and simplifies the provision of instances for services such as Amazon Elastic Container Service (Amazon ECS), AWS Fargate, and Amazon Elastic Kubernetes Service (Amazon EKS).
To create a hosted zone with AWS Cloud Map using the AWS SDK or the AWS CLI:
1. Create a DNS namespace (for which a hosted zone is automatically created) to define your service naming scheme.
2. Create your service.
3. Register an instance to your service.
Resolution
Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.
Create your DNS namespace
1. Create the namespace using the AWS CLI, replacing example.com with the domain name you want to use.
Note: You must choose between creating a public or a private namespace. Public namespaces are visible on the internet as long as the domain name is registered. Private namespaces are visible only within the virtual private cloud (VPC). You must specify the VPC ID when you create a private namespace.
To create a public namespace:
$ aws servicediscovery create-public-dns-namespace --name example.com
To create a private namespace:
$ aws servicediscovery create-private-dns-namespace --name example.com --vpc vpc-0c92f38bf7db24a05
2. Note the value of OperationId in the output.
For example:
{ "OperationId": "igbkufld72o4vbsbwejfi6eyinfprhc3-jkwmz00b" }
3. Find more details about the operation using the get-operation command. Be sure to replace <OperationId value> with the OperationId value you found in the previous step.
aws servicediscovery get-operation --operation-id <OperationId value>
4. In the output, verify that the Status value is SUCCESS. Make note of the NAMESPACE value, which is the namespace ID used to create the service and register the instance.
For example:
{
    "Operation": {
        "Status": "SUCCESS",
        "CreateDate": 1534428266.699,
        "Id": "igbkufld72o4vbsbwejfi6eyinfprhc3-jkwmz00b",
        "UpdateDate": 1534428267.113,
        "Type": "CREATE_NAMESPACE",
        "Targets": {
            "NAMESPACE": "ns-f2wjnv2p7pqtz5f2"
        }
    }
}
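In automation you typically poll that status rather than checking once. A small sketch of such a poller (a hypothetical helper; in real use `client` would be `boto3.client("servicediscovery")`, whose `get_operation` mirrors the CLI command above, and injecting the client keeps the sketch testable without AWS credentials):

```python
import time

def wait_for_operation(client, operation_id, delay=2, max_attempts=30):
    """Poll an AWS Cloud Map operation until it reaches a terminal status.

    `client` is anything exposing get_operation(OperationId=...), e.g. a
    boto3 servicediscovery client. Returns the Targets dict on SUCCESS.
    """
    for _ in range(max_attempts):
        op = client.get_operation(OperationId=operation_id)["Operation"]
        if op["Status"] == "SUCCESS":
            return op.get("Targets", {})
        if op["Status"] == "FAIL":
            raise RuntimeError(op.get("ErrorMessage", "operation failed"))
        time.sleep(delay)  # SUBMITTED/PENDING: keep waiting
    raise TimeoutError("operation %s did not finish" % operation_id)
```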
Note: When you create the namespace, Route 53 automatically creates a hosted zone for the domain. The hosted zone's Domain name value is the same domain name as your namespace. The Comment value is Created by Route 53 Auto Naming. To verify the hosted zone:
1. Open the Route 53 console.
2. On the navigation pane, choose Hosted zones.
3. Find your hosted zone in the list of hosted zones in the content pane.
Create your service
1. Create the service using the servicediscovery create-service command in shorthand syntax as follows. Be sure to replace workers with your preferred service name. Route 53 uses this service name when creating records.
$aws servicediscovery create-service --name workers --dns-config 'NamespaceId="ns-f2wjnv2p7pqtz5f2",RoutingPolicy="WEIGHTED",DnsRecords=[{Type="A",TTL="300"}]'
The default routing policy is "MULTIVALUE". Supported routing policies are "MULTIVALUE" and "WEIGHTED".
2. Note the output. The Id value is the ID of the service you just created.
Note: The CreatorRequestId is the ID of the request. If the API call fails, use the CreatorRequestId to repeat the operation.
For example:
{
    "Service": {
        "DnsConfig": {
            "NamespaceId": "ns-f2wjnv2p7pqtz5f2",
            "DnsRecords": [
                {
                    "Type": "A",
                    "TTL": 300
                }
            ]
        },
        "CreatorRequestId": "93e0a17a-230b-4c58-b986-f03f3217869f",
        "Id": "srv-iy3d7hhlf5cjciph",
        "Arn": "arn:aws:servicediscovery:eu-west-1:356906700443:service/srv-iy3d7hhlf5cjciph",
        "Name": "workers"
    }
}
Register your instance
1. Register your instance using the servicediscovery register-instance command. Be sure to replace the <value> placeholders with your corresponding values. Note that you can run only one request to register an instance with the same service-id and instance-id options at a time.
Important: The API call fails if you don't provide the service-id, instance-id, and attributes parameters. For more details, see "Options" on the register-instance page.
$ aws servicediscovery register-instance --service-id srv-iy3d7hhlf5cjciph --instance-id i-039444aa1e2932ca3 --attributes=AWS_INSTANCE_IPV4=172.1.1.1
2. Review the output, which includes the OperationId. For example:
{ "OperationId": "z7dfqgeadkvwwid2wa2n5ckicrxs255x-jkwr1x9f" }
3. Open the Route 53 console.
4. On the navigation pane, choose Hosted zones.
5. Select the hosted zone you created earlier.
6. Choose Go to Record Sets, and then verify that the record sets are created for the hosted zone.
Note: When you register the instance, Route 53 automatically creates a record with the service name and domain name.
If you're using Amazon ECS and Route 53 service discovery, you can use the Route 53 namespace and service name to configure your services. Route 53 then automatically creates, deletes, or updates records in your hosted zone according to your Amazon ECS container settings.
Related information
Service discovery (Amazon ECS)
:)
>> What is even better yet is that you can port all of your Windows 8 applications into the phone in a few swift and easy steps.
You obviously have never tried this, there is very little compatibility between the two platforms, unfortunately
Hi ErikEJ, that's not true at all, let me explain to you.
Windows 8 and Windows Phone 8 share the same runtime: not all of it, but a significant subset of the Windows Runtime is built natively into Windows Phone 8. This gives you the ability to use the same APIs for common tasks like networking, sensors, location data, in-app purchase, proximity, touch, threading, etc.
Using this common Windows Runtime API you increase the shared code between WP8 and W8 Store apps, which saves time and improves maintainability.
So it depends on the app: you can reuse up to 90% of your code, or 50%, or 30%, depending on the app.
My recommendations:
1) Use the MVVM pattern, and use a Portable Class Library targeting both W8 and WP8; put all your models and viewmodels inside it.
2) Be sure which APIs are 100% reusable before using them in the app; the Windows Phone 8 Runtime is a subset of the W8 Runtime, not a different one.
3) XAML/design reuse. The set of controls used on Windows 8 is in Windows.UI.Xaml.Controls, while the Windows Phone 8 one is in System.Windows.Controls. Although these are different namespaces and the types are different, there's a lot of similarity in the controls that are supported (the names are the same).
4) Shared classes: as you probably know, Portable Class Libraries don't support the Windows Runtime API, so in this case you can put your portable code in a shared class and link that class from both the W8 and WP8 projects.
It's true that there is not much information about how to migrate apps from W8 to WP8 and from WP8 to W8, but it will come very soon, so please stay tuned!
perlquestion by LanX

Hi

Introduction

I'm doing a case study to simulate [ -Meta-Operators] in Perl5 and I'm stuck with a special case when restricting myself to pure FP.

See for instance [ Cross operator X]

    <a b> X 1,2 X <x y>
    #produces ('a', 1, 'x'), ('a', 1, 'y'), ('a', 2, 'x'), ('a', 2, 'y'),
    #         ('b', 1, 'x'), ('b', 1, 'y'), ('b', 2, 'x'), ('b', 2, 'y')

can be simulated with

    pp X {"a".."d"};

    pp X {"a".."b"} X {1..2} X {'x','y'};

    my $iter = X {"a","b"} X {1..2} X {'x','y'};
    while ( my ($aref)= $iter->() ) {
        pp $aref;
    }

    __DATA__
    (["a"], ["b"], ["c"], ["d"])
    (
      ["a", 1, "x"],
      ["a", 1, "y"],
      ["a", 2, "x"],
      ["a", 2, "y"],
      ["b", 1, "x"],
      ["b", 1, "y"],
      ["b", 2, "x"],
      ["b", 2, "y"],
    )
    ["a", 1, "x"]
    ["a", 1, "y"]
    ["a", 2, "x"]
    ["a", 2, "y"]
    ["b", 1, "x"]
    ["b", 1, "y"]
    ["b", 2, "x"]
    ["b", 2, "y"]

Where X is a function with signature (&;$) which accepts an iterator as second argument and returns an iterator.

So in LIST context it produces the whole array; in SCALAR context it just returns a lazy iterator (i.e. allowing infinite lists).

These functions (and similar others) can be nested and produce nice syntactic sugar; for instance the above has the following precedence

    X {"a","b"} ( X {1..2} ( X {"x","y"} ) )

Here is my purely functional implementation so far.

(Please note that this is a case study for the API and not optimized for speed. A well designed and tested set of such operators could be optimized and/or reimplemented in XS later, w/o messing with P5's parser while staying as close as possible to P6 semantics.)

    use strict;
    use warnings;
    use feature qw/say/;
    use Data::Dump qw/pp/;

    sub gen_listiter {
        my @init = @_;
        my @list = @init;
        sub {
            if (@list) {
                return shift @list;
            } else {
                @list = @init;
                return ();
            }
        }
    }

    sub get_array {
        my $iter = shift;
        my @res;
        while ( my ($res) = $iter->() ) {
            push @res, $res;
        }
        return @res;
    }

    my %operations = (
        '+' => sub { defined $_[1] ? $_[0] + $_[1]->[0] : $_[0] },
    );

    sub X (&;$) {
        my ($cr, $tail_itr) = @_;
        # my $op = $operations{$tail_itr} if defined $tail_itr;
        # undef $tail_itr if $op;
        $tail_itr //= gen_listiter([]);
        my $head_itr = gen_listiter($cr->());
        my $state = 'INIT';
        my $head;
        my $tail;
        my $op //= sub { [ $_[0], @{$_[1]} ] };
        my $cross_itr = sub {
            goto $state;
          INIT:
            $

Cheers Rolf

( addicted to the Perl Programming Language)

update

PS: If you are wondering about the title ... I was in a hurry and my brain entered a German word, so s/mit/with/. :)
In the previous post Load Balancing Azure AD FS Services we looked at using Azure RM to deploy and load balance AD FS services. This is the follow-up post to deploy the Web Application Proxy (WAP) servers and its associated load balancer into the DMZ.
In this post we will focus upon the highlighted area in the below diagram. The additional components were previously deployed, for details please see Load Balancing AD FS Services In Azure RM.
Note that this post assumes all of the underlying network resources have been created at this point. This will include:
- Network Security Groups10
- Virtual networks
- IP Subnets
- Availability Sets
Install WAP
Please review the setup steps in the following series of posts. Post #2 covers installing WAP, though you must complete the AD FS install before configuring WAP since the WAP configuration is stored within AD FS.
As noted in the previous Azure RM post, ensure that you take the time to design the Azure environment so that there are no unexpected issues with the configuration. Pay particular attention to the networking design and layout.
The WAP servers should be assigned to their own Availability Set so that they may be added the external load balancer. The AD FS servers are in a separate Availability Set which is load balanced by the internal Load Balancer.
Once the WAP servers are up and running, we can then proceed to deploy the public Azure Load Balancer.
Load Balance WAP
In order to create, configure and use the Azure load balancer we need to:
Create the Azure public load balancer
Add back end pool
Add Windows Firewall Rule
Add probes
Add load balancing rules
Test WAP Servers
Update DNS
Create the Azure Public Load Balancer
At this point the AD FS and WAP server VMs have been created, and the AD FS and WAP services installed and configured. The first step to load balance the WAP portion is to create a new public load balancer in the Azure RM portal. Note that it is assumed that you have already created the network resources and have deployed the AD FS farm and the WAP servers. This post focuses only on the WAP load balancing deployment. See the image above for an example of how Network Security Groups (NSG) can be used to segregate traffic between AD FS and WAP servers.
In the Azure RM portal, navigate to the Load Balancer section, and create a new Azure Load Balancer. Ensure that the type is set to public, and enter a descriptive name that is aligned with your Azure resource naming policy. You will also have to select which public IP address should be used.
In this case the external WAP load balancer will be called Tail-CA-EXT-WAP, and a new external IP address will be assigned to it. The external IP address resource is highlighted on the right
We then need to decide which Resource Group these resources will be stored in. In this case they will be stored in a Resource Group which is used to hold all of the WAP resources. Hence the existing Resource Group was selected from the drop down menu. The below image shows the completed settings.
Click create to launch the validation process, and if successful the public load balancer and external IP resources will be created in the assigned Resource Group.
Wait for the notification that the operation has completed.
Add Backend Pool
You may need to click the refresh button to see the newly created public Azure Load Balancer. Once it is visible, navigate to the Backend Pools property sheet, and click the Add button to add in the relevant Backend infrastructure.
In this environment we will create the Backend pool called Tail-CA-EXT-WAP-BEPool.
We need to add the VMs – note that some of the the VMs are greyed out, as only VMs belonging to the selected Availability Set may be chosen.
Select the required VMs, in this case only the two WAP servers were selected.
Click OK to save the configuration.
Once the changes have been saved, we can then move on to adding in the probes. But first we need to ensure that the local Windows Firewall will allow the probe traffic.
Add Windows Firewall Rule.
The below shows the rules on the AD FS server:
Note that the above highlighted rule does not exist by default on the Windows 2012 R2 WAP server, there is a rule to allow inbound TCP 443.
We can use the New-NetFirewallRule cmdlet to create a new rule. The below is an example you can use or customise, so change the name etc. as you see fit.
New-NetFirewallRule -DisplayName "AD FS HTTP Azure LB Rule (TCP-IN 80)" -Direction Inbound -Action Allow -Protocol TCP -LocalPort 80 -Profile ANY -Group "AD FS" -Description "Manually created rule to allow inbound TCP 80 to WAP for HTTP monitoring"
Add Probes
Once the Windows firewall has been updated on all the WAP servers, proceed to add the HTTP probe.
Select the probes section on the load balancer, and then click add. Fill in the necessary details. In the example below we will probe the AD FS servers on TCP 80 and query for the /adfs/probe endpoint. In this case the probe is saved as Tail-CA-EXT-LB-HTTP-Probe.
Add Load Balancing Rules
To complete the configuration on the Azure Load Balancer, we need to state how the traffic will be balanced. A new Load Balancing rule called Tail-CA-WAP-LB-HTTPS-Rule will be created to hold the configuration.
Note that the relevant Backend pools are selected, along with the Health probes and that TCP 443 is used for the frontend and backend port.
Click OK to save the configuration.
Test WAP Load Balanced VIP
We should now be able to verify that the WAP sign-on page is available externally.
If the external DNS record does not yet exist, or points to another farm use a hosts entry to resolve to the IP of the public load balancer. Please remember to remove or REM out the hosts entry when you are finished testing.
From an external test system you can then navigate to the AD FS namespace. This will be in the format of:
https://<AD FS name>.tailspintoys.ca/adfs/ls/idpinitiatedsignon.htm
In this case the AD FS namespace is adfs.tailspintoys.ca so the test URL is:
Test individual WAP Servers
If you also want the ability to also test the individual WAP servers, you can add inbound NAT rules the the external load balancer. This way you can target the individual server, though you will need to use hosts file entries. Again be sure to remove or REM out the entry after you have finished testing so that it does not trip you up in the future.
Review to this post to for details. The post discusses RDP access, but the same applies for HTTPS.
Update DNS
Ensure that the external DNS zone has an entry for the AD FS namespace which resolves to the public IP of the load balancer. This may have been done previously, and if so this section can be ignored.
It is possible to add or edit a DNS name label associated to the Public IP resource, but this is not the name that should be used for AD FS. It should be the namespace that was selected as part of the design and built out of the AD FS infrastructure.
Cheers,
Rhoderick | https://blogs.technet.microsoft.com/rmilne/2017/02/10/load-balancing-wap-in-azure-rm/ | CC-MAIN-2019-04 | refinedweb | 1,259 | 68.91 |
What are some good exercises for an experienced programmer new to C++? I have found lists and suggestions on the web, but they are always simple or I have already completed them in other programming languages like C or Basic. Something that will help with learning modern C++ is best.
First tell me what do you know about C++? And check out my sig.
Most C++ books that you buy will have end-of-chapter tests or reviews that have a lot of problems for you to solve. You can get some books from your library or find some e-books and try the problems they give at the end of their chapters.
This is my solution to your signature problem. Does it tell you what I know about C++?
#include <algorithm> #include <iostream> #include <sstream> #include <string> #include <utility> #include <vector> using namespace std; string toMathEquation(string const& expr); bool is_operator(string const& term, string& val); bool is_degree(string const& term, string& val); bool is_literal(string const& term, string& val); bool is_match(pair<string, string> items[], size_t sz, string const& term, string& val); string to_string(size_t val); int main() { cout<< toMathEquation("x squared plus nine x minus seven") <<endl; cout<< toMathEquation("two plus six x cubed minus x squared") <<endl; cout<< toMathEquation("negative x cubed divided two x squared") <<endl; cout<< toMathEquation("twenty-five times negative six x quartic") <<endl; } string toMathEquation(string const& expr) { stringstream reader(expr); string term, equation, val; while (reader>>term) { if (is_operator(term, val) || is_degree(term, val) || is_literal(term, val)) equation += val; else equation += term; } return equation; } bool is_operator(string const& term, string& val) { pair<string, string> ops[] = { make_pair("plus", " + "), make_pair("minus", " - "), make_pair("times", " * "), make_pair("divided", " / "), make_pair("negative", "-") }; return is_match(ops, sizeof ops / sizeof ops[0], term, val); } bool is_degree(string const& term, string& val) { pair<string, string> deg[] = { make_pair("squared", "^2"), make_pair("cubed", "^3"), make_pair("quartic", "^4") }; return is_match(deg, sizeof deg / sizeof deg[0], term, val); } bool is_literal(string const& term, string& val) { string words[] = { "one","two","three","four","five", "six","seven","eight","nine", "ten","eleven","twelve","thirteen","fourteen","fifteen", "sixteen","seventeen","eighteen","nineteen", "twenty","thirty","forty","fifty", "sixty","seventy","eighty","ninety" }; vector<pair<string, string>> literals; for (size_t x = 1, k = 0; x < 100; /* see body */) { if (x < 20) literals.push_back(make_pair(words[k++], to_string(x++))); else 
if (x % 10 == 0) { literals.push_back(make_pair(words[k], to_string(x++))); for (size_t y = 0; y < 9; ++y) literals.push_back(make_pair(words[k] + "-" + words[y], to_string(x++))); ++k; } } literals.push_back(make_pair("one-hundred", "100")); return is_match(&literals[0], literals.size(), term, val); } bool is_match(pair<string, string> items[], size_t sz, string const& term, string& val) { pair<string, string>* match = find_if(items, &items[sz], [term](pair<string, string> p) { return p.first == term; }); bool is_match = match != &items[sz]; if (is_match) val = match->second; return is_match; } string to_string(size_t val) { stringstream ss; ss<< val; return ss.str(); }
One common c++ exercise is: Write a recursive function that gathers a list of all the files and folders on the hard drive, starting with a folder specified by the user. You will need to maintain either a std::vector or std::list (your choice) of all the files, and the file names must contain the complete path. When finished, short the array or list alphabetially by calling std::sort and supplying it your own comparison function. Display al the files after the vector or list has been created. Note: Don't copy/paste code you might find on the net -- that's considered cheating.
Try to write a serialization library like this one. That should keep you busy for a while, and help you learn advanced OOP concepts of C++ that may challenge you a bit. Also see these tutorials, they will show many nice ways of mixing template meta-programming and generic programming, that you don't find as much in other languages.
But again, it's hard to tell how "experienced" you are, but you sure solved firstPerson's problem quickly.
>>Does it tell you what I know about C++?
Not really. Although you *solved the problem fast, it could be better. How far of C++
do you know? Any knowledge about meta-programming? Usually, the best way to get better at programming is to design project. That way you get a chance to practice algorithms and design patterns. But if you just want some problems to solve, then
google online. There are a lot of sites dedicated to practice problems.
-----
*solved: haven't really tested the program, just taking your word.
Edited by firstPerson: n/a
Although you *solved the problem fast, it could be better.
I would like to know how to make it better, but that is a question for another thread.
How far of C++ do you know? Any knowledge about meta-programming?
The C subset, classes and basic OO. I know about template for simple genericity and some STL, but no meta-programming.
Do you know how to use polymorphism? Virtual inheritance? Pure virtual functions?
Problem with inline virtual functions? Try projecteuler.net. They have challenging
math problems for practice. Also research design patterns. There are plenty of stuff to
learn. Here read this. That should keep you busy.
Also how learning to implement different data structures? List, trees, TRI ... | https://www.daniweb.com/programming/software-development/threads/299243/intermediate-c-exercises | CC-MAIN-2017-09 | refinedweb | 879 | 64.71 |
Flex 3 DataGrid Footers.
I've implemented the code you provided to show some total rows on my data grids. I ran into a couple of issues that were overcome by a couple small changes to the code as follows:
listContent.setActualSize(listContent.width, listContent.height - (footerHeight+15));
footer.setActualSize(listContent.width, footerHeight);
footer.move(listContent.x, listContent.y + listContent.heightExcludingOffsets + 15);
Posted by: Jeremiah | March 14, 2008 8:10 AM
This component is fantastic, Very Good
But
if i use delete
paddingTop="0" paddingBottom="0" verticalAlign="middle"
and include
width="100%" height="100%"
in
The Footer of datagrid not display correct
Please test your Datagrid with 138 records
------
if i use this
And update in file
FooterDataGrid.as
in line 19
protected var footerHeight:int = 22;
to
protected var footerHeight:int = 24;
Functional is correct !
----------------
How should I proceed?
---------------------------
Alex responds:
This code is just a prototype and is unsupported. If you have it working, great.
Posted by: MArcio | March 18, 2008 9:35 AM
If i resize collum datagrid and in sequence i use slider to filter datagrid the error ocurred
-------------
TypeError: Error #1009: Cannot access a property or method of a null object reference.
at DataGridFooter/updateDisplayList()[C:\inetpub\wwwroot\webserver\rarus_admin_flex_2\src\DataGridFooter.as:103]
at mx.core::UIComponent/validateDisplayList()[E:\dev\3.0.x\frameworks\projects\framework\src\mx\core\UIComponent.as:6214]
at mx.managers::LayoutManager/validateDisplayList()[E:\dev\3.0.x\frameworks\projects\framework\src\mx\managers\LayoutManager.as:602]
at mx.managers::LayoutManager/doPhasedInstantiation()[E:\dev\3.0.x\frameworks\projects\framework\src\mx\managers\LayoutManager.as:675]
at Function/
at mx.core::UIComponent/callLaterDispatcher2()[E:\dev\3.0.x\frameworks\projects\framework\src\mx\core\UIComponent.as:8460]
at mx.core::UIComponent/callLaterDispatcher()[E:\dev\3.0.x\frameworks\projects\framework\src\mx\core\UIComponent.as:8403]
-------------
Please Help me, how to resolve this ?
----------------------
Alex responds:
in DataGridFooter.as in updateDisplayList, check to see if col is null and break out of the while loop
Posted by: MArcio | March 18, 2008 10:12 AM
if i use itemRenderer error ocurred
TypeError: Error #1009: Cannot access a property or method of a null object reference.
at DataGridFooter/updateDisplayList
-------------
---------------
----------------------------
Alex responds:
Your custon renderer must implement IDropInListItemRenderer
Posted by: MArcio | March 18, 2008 12:31 PM
thanks for that just was asking can’t we using any thing and replacing it on this code
footer.setActualSize(listContent.width, footerHeight);
?
thanks
---------------------------
Alex responds:
Not sure I understand the question. I guess you can make the footer some other size if you want to.
Posted by: KLadofoRA | April 5, 2008 3:26 AM
Hi Alex,
You talked about a version for AdvancedDataGrid developed by an other team. Do you have a link for showing that ? thanks
-----------------
Alex responds:
ADG has SummaryRows. It is developed by another team. I don't know if they've done footer support or not. Try Sameer's blog:
Posted by: romain | May 15, 2008 11:47 AM
useful post.thanks for ur sharing!
Posted by: ggfou | May 18, 2008 2:37 AM
how to enhance debugging in flex :(
---------------
Alex responds:
Debugging works fine for me. If you want specific features that other debuggers have, please file bugs/enhancement requests at bugs.adobe.com/jira
Posted by: akshay | May 19, 2008 11:08 PM
this blog is the AWESOME!
Posted by: labs | June 6, 2008 8:36 PM
Hi, I added some functionality to these footers - horizontal scrolling, column resizing, and locked columns. I also implemented for the Advanced Data Grid. Post here. Thanks for the great post and great starting point.
Posted by: Doug Marttila | June 16, 2008 4:01 PM
Hi Alex, this is a great component. But I have problems with it when I'm using a custom ItemRenderer. In this case the updateDisplayList() function is called permanently and it cause an infinite loop. When the addChild(DisplayObject(renderer)); is removed, it works fine, but the Renderer is not shown (of course). How can I change it? Do you have any ideeas? I need it for my current project.
Thanks, Artur
--------------------------
override invalidateDisplayList() and maybe invalidateSize() and see why it gets called. Usually you're doing something that changes the potential size of the renderer like adding children to it in updateDisplayLIst. You shouldn't do that, but if you must, then block the call to invalidateDisplayList somehow (have it not call super.invalidateDisplayList in those conditions)
Posted by: Artur | July 16, 2008 2:10 AM
thanks for the code again
your posts helps me allot
thanks
Posted by: jbr | July 18, 2008 4:37 PM
If you know some site that supply tutorials about flex, please tell me! and thanks a lot!
-----------------
Alex responds:
Search the web. FlexExamples.com and Lynda.com have stuff. Our documentation folks are sad to hear that their writings aren't sufficient for you.
Posted by: Rick | July 25, 2008 6:28 PM
Hi Alex! I want to use the footer as a toolbar, but I haven't found a way to add a button or a label to it.
Do you have any ideeas how to put a button on this footer?
Help pls. Thanks!
----------------
Alex responds:
You could make a button the renderer for one of the columns. You can also subclass the border and add a toolbar there.
Posted by: Bera Florin | July 29, 2008 4:59 AM
Hi,
thanks for the example.
i´ve copied&pasted the example, but looks like Flex Builder doesn´t recognize the namespaces??
But i got this error in Flex Builder:
"
Could not resolve to a component implementation. loginTest1/src dg.mxml Unknown 1219766177385 1574
"
----------------
Alex responds:
You have to setup xmlns:local in the Application tag. See how I used it in my example
Posted by: Andrew | August 26, 2008 8:59 AM
I'm intending to integrate this solution into my current project which has its own extension of DataGridColumn.
Can you explain why you extend DataGridColumn, **and** then place an instance of the base class as a public property? It's a pattern I haven't seen before, and can't understand why it's necessary to extend the base column class, and then use an instance of the base.
btw, the project I'm working on will be adding the column in actionscript (if this makes any difference)
Thanks.
------------------
Alex responds:
I just did that so you could specify different styles and labelFunctions for the footer. You could just duplicate stuff on the column subclass, but there'd be style name collisions
Posted by: Mark S | September 15, 2008 6:27 AM
Hello there,
thank you for great example.
May i use to project this example?
and what kind of license this example has?
regards
hbell
--------------------------
Alex responds:
There is no license. You can use it however you wish as long as there is no liability back to me.
Posted by: hbell | December 24, 2008 6:01 PM
Does this applicable to editable data grid,where user can change the existing value,correspondingly Ave value need to be updated,Currently when i make the grid editable Ave is not getting calculated.
------------------------
Alex responds:
Yes, there are other events where you'll need to update the footer. ITEM_EDIT_END for editable datagrids and probably on some collection change events as well.
Posted by: Ravi | February 11, 2009 10:10 AM
How do I set the visible property of Grid column and footer?
If I say visible=false at FooterDataGridColumn level the grid column is not visible but the footer column is visible and
visible=false at DataGridColumn level has no effect
Thanks
---------------
Alex responds:
You'll probably have to modify the example to handle hidden columns.
Posted by: Vimal | February 12, 2009 9:36 AM
Hi Alex,
The vertical rows of ADG are not display correctly.At the bottom they are incresed by few pixels.
So the last line of listContent is appearing overlapped by footer. When i make distance between listContent and footer i saw listContent is creating a 1/3row at last.
Posted by: Sachin Dev Tripathi | May 12, 2009 7:22 AM
Hi alex
what is masking?? you have told about this in second para of this page?
is it related to my problem(posted yesterday)
please advice
-sachindevtripathi@gmail.com
---------------------
Alex responds:
See the documentation for flash.display.DisplayObject.mask
Posted by: Sachin Dev Tripathi | May 12, 2009 11:08 PM | http://blogs.adobe.com/aharui/2008/03/flex_3_datagrid_footers.html | crawl-002 | refinedweb | 1,404 | 56.76 |
In this article we will learn how to install IBM Cloud Pak for Integration (CP4I) 2019.4 on Red Hat OpenShift Container Platform (OCP) 4.2.
Prerequisites
Below are the prerequisites for installing IBM Cloud Pak for Integration 2019.4:
System Requirements (click link to open)
- Red Hat OpenShift Container Platform 4.2 on Linux® 64-bit
- CP4I common services and the different integration capabilities have certain file system and storage requirements. File storage with 'RWO + RWX' access modes and Block storage with RWO mode are required. OpenShift Container Storage (OCS), which is backed by Ceph, can be deployed to provide both of these types of storage. You can follow the article below to deploy OCS on OpenShift 4.2. This recipe assumes that both types of storage, i.e. File (RWO + RWX) and Block (RWO), are available and the respective storage classes have been configured on OCP.
Deploying your storage backend using OpenShift Container Storage 4
- In addition to the OCP master and worker nodes, an infra node has been provisioned with a public IP address; it has access to the OCP cluster nodes and is allowed to access the deployed services from outside. We will use this node as a jump box. You should have root-level access on the jump box.
- Determine the size of your cluster keeping in mind:
– The workload size you expect to run
– The integration capabilities that you expect to run in High Availability or Single instance mode
– The Common Services, Asset Repository and Operations Dashboard requirements
– Scalability requirements
Note: this recipe is only to provide guidance for deploying CP4I 2019.4 on OCP 4.2. It does not cover the aspects for deploying the platform in production environment.
Step – by – step
Validate prerequisites and OCP cluster
Login to the infra node (or Boot node as the case may be) and check if the oc tool is installed. If the oc tool is not installed, follow the steps below:
In the OCP console, click on ‘Command line tools’
Click on ‘Download oc’
After downloading the file ‘oc.tar.gz’, extract it using the command below, give the appropriate permission and move to /usr/bin directory
tar xzvf oc.tar.gz
chmod 755 oc
mv oc /usr/bin
Now login to the OCP cluster using the oc tool.
oc login --server=<OCP api server> -u <ocp admin user> -p <password>
For example: oc login --server=<OCP api server> -u admin -p admin
You may also login by getting the login command with a generated token. To get the login command, login to the OCP console and click on ‘Copy login command’
Click on ‘Display Token’
Copy the login command with token
Login to OCP using this login command
By default, the OpenShift Container Platform registry is secured during cluster installation so that it serves traffic through TLS. Unlike previous versions of the OpenShift Container Platform, the registry is not exposed outside of the cluster at the time of installation.
Instead of logging in to the OpenShift Container Platform registry from within the cluster, you can gain external access to it by exposing it with a route. This allows you to log in to the registry from outside the cluster using the route address, and to tag and push images using the route host.
Run the command below in a single line to expose the OCP registry:
oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
Use the command below to get the OCP registry route:
oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}'
Use the command below to check if File and Block storage classes are available to use:
oc get sc
Run the command below to verify that all OCP nodes are in ‘Ready’ state
oc get nodes
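The Ready check can also be scripted. The sketch below runs the filter against sample oc get nodes output (the hostnames are hypothetical); against a live cluster you would pipe oc get nodes --no-headers into the same awk filter instead:

```shell
# Sketch: flag any node whose STATUS column is not "Ready".
# Sample text stands in for real `oc get nodes --no-headers` output.
sample='master0.prod3.os.fyre.ibm.com   Ready      master   30d   v1.14.6
worker3.prod3.os.fyre.ibm.com   Ready      worker   30d   v1.14.6
worker4.prod3.os.fyre.ibm.com   NotReady   worker   30d   v1.14.6'

not_ready=$(printf '%s\n' "$sample" | awk '$2 != "Ready" {print $1}')
if [ -n "$not_ready" ]; then
  echo "Nodes not Ready: $not_ready"
else
  echo "All nodes Ready"
fi
```

Any node listed as not Ready should be investigated before starting the installation.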
Install docker on jump box and configure access to OCP registry
You need a version of Docker that is supported by OpenShift installed on your jump box / boot node. All versions of Docker that are supported by OpenShift are supported for the boot node. Only Docker is currently supported.
Run the steps below to install docker:
yum install docker -y
systemctl start docker
systemctl enable docker
Check the docker status
systemctl status docker
If your OCP registry is using self-signed certificates, then you would not be able to access to perform ‘docker login’ unless you add the certificate. Note that these steps are not required for installing CP4I however if you are planning to pull/push images to/from OCP registry that uses a self signed certificate from outside the OCP cluster, follow the steps below to configure the certificate on the client machine.
Navigate to /etc/docker/certs.d and create a folder with the same name as the external url of the registry. If ‘certs.d’ folder doesn’t exist, then create it. The name of the external url of the registry can be found using the command below;
oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}'
Create the directory inside /etc/docker/certs.d
mkdir default-route-openshift-image-registry.apps.prod3.os.fyre.ibm.com
Navigate inside this directory and run the command below in a single line to pull the certificate
ex +'/BEGIN CERTIFICATE/,/END CERTIFICATE/p' <(echo | openssl s_client -showcerts -connect <external url for OCP registry>:443) -scq > ca.crt
For example:
ex +'/BEGIN CERTIFICATE/,/END CERTIFICATE/p' <(echo | openssl s_client -showcerts -connect default-route-openshift-image-registry.apps.prod3.os.fyre.ibm.com:443) -scq > ca.crt
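Before restarting Docker, you can sanity-check that ca.crt parses as a certificate with openssl x509. The sketch below demonstrates the check against a throwaway self-signed certificate, since the registry host above is environment-specific:

```shell
# Sketch: generate a throwaway self-signed cert (a stand-in for the pulled
# registry cert), then verify it parses and inspect its subject.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/reg-key.pem \
  -out /tmp/reg-ca.crt -days 1 \
  -subj "/CN=default-route-openshift-image-registry.example.com"
openssl x509 -in /tmp/reg-ca.crt -noout -subject
```

If the x509 command errors out, the extraction step above did not produce a valid certificate and should be repeated.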
Restart the docker service (for example, with systemctl restart docker) so that it picks up the new certificate.
Now validate that you are able to login to the OCP registry using the command below:
docker login <OCP registry url> -u $(oc whoami) -p $(oc whoami -t)
For example:
docker login default-route-openshift-image-registry.apps.prod3.os.fyre.ibm.com -u $(oc whoami) -p $(oc whoami -t)
Download the CP4I installable
The base product installation creates an instance of the Platform Navigator, along with the common services. All of the other components are optional and immediately available to install through the Platform Navigator. The entire product and all components run within a required Red Hat OpenShift Container Platform environment.
You have the following choices for installing IBM Cloud Pak for Integration. All downloads are available from IBM Passport Advantage.
- Download the base product and all component packages. This method can be used in air-gapped environments.
- Download the base product only. All other component packages reside in the online IBM Entitled Registry. Execute the installation procedures to install on a Red Hat OpenShift Container Platform. This method requires internet access but saves time.
Configure Cluster configuration file
Change to the installer_files/cluster/ directory. Place the cluster configuration file (admin.kubeconfig) in the installer_files/cluster/ directory and rename it kubeconfig. This file may reside in the setup directory used to create the cluster. If it is not available, you can log into the cluster as admin using oc login and then issue the following command.
oc config view --minify=true --flatten=true > kubeconfig
View the kubeconfig file. If your cluster uses self-signed certificates, TLS verification may fail during installation. You can update this file as in the example below to skip TLS verification.
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server:
  name: api-prod3-os-fyre-ibm-com:6443
contexts:
- context:
    cluster: api-prod3-os-fyre-ibm-com:6443
    namespace: default
    user: admin/api-prod3-os-fyre-ibm-com:6443
  name: default/api-prod3-os-fyre-ibm-com:6443/admin
current-context: default/api-prod3-os-fyre-ibm-com:6443/admin
kind: Config
preferences: {}
users:
- name: admin/api-prod3-os-fyre-ibm-com:6443
  user:
    token: klI928FXCt-0Va8lI2h7VFLN_mwCbyIuaQa_lJ_mM8M
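Hand-editing works, but the change can also be scripted. The sed-based sketch below inserts insecure-skip-tls-verify under each cluster entry and removes any embedded CA data line; it is shown against a small sample kubeconfig, assumes GNU sed, and backs the file up first:

```shell
# Sketch: toggle TLS verification off in a kubeconfig (GNU sed assumed).
# A sample file stands in for the real kubeconfig.
cat > /tmp/kubeconfig.sample <<'EOF'
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTg==
    server: https://api.example.com:6443
  name: example
kind: Config
EOF

cp /tmp/kubeconfig.sample /tmp/kubeconfig.sample.bak
sed -i 's/^- cluster:/- cluster:\n    insecure-skip-tls-verify: true/' /tmp/kubeconfig.sample
sed -i '/certificate-authority-data:/d' /tmp/kubeconfig.sample
cat /tmp/kubeconfig.sample
```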
Configure installation environment
Extract the contents of the archive with a command similar to the following.
tar xzvf ibm-cp-int-2019.4.x-offline.tar.gz
Load the images into Docker. Extracting the images might take a few minutes.
tar xvf installer_files/cluster/images/common-services-armonk-x86_64.tar.gz -O|docker load
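If your offline package ships several image archives, the same load command can be looped over every file in the images directory. The sketch below uses empty sample files and echoes each command instead of executing it, so it can be tried without Docker; drop the echo to run it for real:

```shell
# Sketch: enumerate image archives and print the load command for each.
# Empty sample files stand in for the real archives; `echo` keeps this
# runnable without Docker installed.
mkdir -p /tmp/installer_files/cluster/images
touch /tmp/installer_files/cluster/images/common-services-armonk-x86_64.tar.gz
for f in /tmp/installer_files/cluster/images/*.tar.gz; do
  echo "tar xvf $f -O | docker load"
done
```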
Configure your cluster
You need to configure your cluster by modifying the installer_files/cluster/config.yaml file. You can use your OpenShift master and infrastructure nodes here, or install these components to dedicated OpenShift compute nodes. You can specify more than one node for each type to build a high availability cluster. After using oc login, use the command oc get nodes to obtain these values. Note that you would likely want to use a worker node.
Open the config.yaml in an editor.
vi config.yaml
Update the below sections in config.yaml. Below is an example:
cluster_nodes:
  master:
  - worker3.prod3.os.fyre.ibm.com
  proxy:
  - worker4.prod3.os.fyre.ibm.com
  management:
  - worker4.prod3.os.fyre.ibm.com
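Typos in these hostnames are a common source of failed installs, so it is worth cross-checking each name in config.yaml against oc get nodes. The sketch below runs the comparison on sample data; on the jump box you would feed it the real node list from the cluster:

```shell
# Sketch: check that every node named in config.yaml exists in the cluster.
# Sample lists stand in for the parsed config.yaml entries and for
# real `oc get nodes` output.
config_nodes='worker3.prod3.os.fyre.ibm.com
worker4.prod3.os.fyre.ibm.com'
cluster_nodes='worker3.prod3.os.fyre.ibm.com
worker4.prod3.os.fyre.ibm.com
worker5.prod3.os.fyre.ibm.com'

missing=0
for n in $config_nodes; do
  if printf '%s\n' "$cluster_nodes" | grep -qx "$n"; then
    echo "$n: found"
  else
    echo "$n: MISSING"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "All config.yaml nodes exist in the cluster"
```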
Specify the Storage Class. You can specify separate storage class for storing log data. Below is an example:
# This storage class is used to store persistent data for the common services
# components
storage_class: rook-ceph-cephfs-internal

## You can set a different storage class for storing log data.
## By default it will use the value of storage_class.
# elasticsearch_storage_class:
Specify the password for the admin user, and the password rules it must satisfy, e.g.
default_admin_password: admin password_rules: # - '^([a-zA-Z0-9\-]{32,})$' - '(.*)'
Leave rest of the file unchanged unless you want to change the namespaces for respective integration capabilities. Save the file.
The value of the master, proxy, and management parameters is an array and can have multiple nodes. Due to a limitation from OpenShift, if you want to deploy on any master or infrastructure node, you must label the node as an OpenShift compute node with the following command:
oc label node <master node host name/infrastructure node host name> node-role.kubernetes.io/compute=true
This only needs to be done if you want the OpenShift master node and Kubernetes master node to be the same.
Install CP4I
Once preparation completes, run the installation command from the same directory containing the config.yaml file. You can use the command docker images | grep inception to see the value used to install.
Run the command below in single line
sudo docker run -t --net=host -e LICENSE=accept -v $(pwd):/installer/cluster:z -v /var/run:/var/run:z -v /etc/docker:/etc/docker:z --security-opt label:disable ibmcom/icp-inception-amd64:3.2.2 addon
If you are deploying as root user, you would run this command without ‘sudo’.
This process transfers the product packages from the boot node to the cluster registry. This can take several hours to complete.
Once installation is complete, Platform navigator will be available at the below endpoint:.<openshift apps domain>/
You can use the command below to get the OCP apps domain:
oc -n openshift-console get route console -o jsonpath='{.spec.host}'| cut -f 2- -d "."
You can navigate to the Openshift console and Cloud Pak foundation by clicking on the hamburger menu
Note that if you want to use the ‘Operations Dashboard’ for your integration components, you should first provision the ‘Operations Dashboard’ instance so that you can refer to it while creating an instance of an integration capability.
Conclusion
In this article we have learnt the installation steps for IBM Cloud Pak for Integration on OCP 4.2.
Excellent article, very easy to follow. However, my installation (CP4I 2020.1.1) fails at the Configuring cloudctl after 5 attempts. Please do share any suggestions debugging/resolving the issue. My vSphere OCP 4.3 consists of 3 masters, 3 workers and I am using vSphere storage (instead of Ceph).
Thank you,
FAILED – RETRYING: Configuring cloudctl (1 retries left).
Result was: changed=true
attempts: 5
cmd: bash /tmp/config-cloudctl-script
delta: ‘0:00:00.979458’
end: ‘2020-05-29 17:06:44.202876’
invocation:
module_args:
_raw_params: bash /tmp/config-cloudctl-script
_uses_shell: true
argv: null
chdir: null
creates: null
executable: /bin/bash
removes: null
stdin: null
warn: false
msg: non-zero return code
rc: 1
retries: 6
start: ‘2020-05-29 17:06:43.223418’
stderr: ”
stderr_lines:
stdout: |-
Authenticating…
Get: EOF
FAILED
Set ‘CLOUDCTL_TRACE=true’ for details
stdout_lines:
Can we install CP4I on HP X86 Blade servers.
Hi Deepak,
CP4I only depends on OCP as underlying infrastructure is abstracted from it. CP4I can be installed on wherever OCP can be installed.
CP4I 2020.1.1 requires OCP 4.3 or OCP 4.2
CP4I 2019.4 required OCP 4.2
Thanks. | https://developer.ibm.com/integration/blog/2020/02/20/deploying-ibm-cloud-pak-for-integration-2019-4-on-ocp-4-2/ | CC-MAIN-2020-40 | refinedweb | 2,053 | 55.95 |
Introduction to DataWeave
Introduction to DataWeave
DataWeave is a MuleSoft tool for transforming data. When you're first starting to use it, there's a lot to learn about, especially in terms of how the graphical UI interface works.
Join the DZone community and get the full member experience.Join For Free
Weaving is a method of textile production in which two distinct sets of yarns or threads are interlaced at right angles to form a fabric or cloth. DataWeave is the tool by MuleSoft using which we can weave one or more different types of data together to create another data format.
When you are trying to integrate disparate systems, it is possible that your source system understands data in one format, say JSON and target system understands data in another format, say XML. In this scenario, it is necessary to use Data Transformation for converting data from JSON-to-XML.
DataWeave is a MuleSoft tool for transforming data. The Transform Message component carries out the transformation of Mule message that follows a transform script. You can write your own transform script in DataWeave code or you can use UI to build the script.
It attempts to bring together features of XSLT (mapping), SQL (joinBy, splitBy, orderBy, groupBy, distinctBy operators), Streaming, Functional Programming (use of functions in DataWeave code) to make it a power-packed data transformer.
DataWeave supports DataSense i.e., metadata from connectors, schemas, and sample documents is availbale to more easily design transformations. It also provides content assist while you are coding and auto generates lines of code from actions performed on UI.
e.g. If we drag and drop element from input section onto another element in output section, then corresponding code is automatically generated.
If you add the Transform Message component in your Mule flow, its Properties view looks like this:
The properties view is divided into different components as shown in the following screenshot:
The Graphical UI
The Graphical UI exposes the known input and output structures. You can easily click and drag one field onto another to map these structures. You can do following through the UI:
Drag an element from the input structure over to another on the output structure. This casts a line that joins them and also generates necessary lines of DataWeave code that describes this mapping.
Drag a high-level object that contains inner fields inside it onto another in the output.
Double-click on an output field to add it into the DataWeave code with a static value. This adds an Fx icon next to it as well as a line to the DataWeave code that assigns a default null value to the field. You can change this value in the code as per your need.
Select an element to have its corresponding line in the DataWeave code highlighted. This helps when dealing with large transforms.
If an input field is mapped to two or more output fields, you can right-click it and then select which of the multiple outputs you want to highlight in the DataWeave code.
Filter the views displayed in the input and output structures by typing a name in the search boxes at the top. Only those fields that match your search are then displayed. This is particularly useful when you are dealing with large data structures with many nested elements.
DataWeave Text Editor
This is the section where you write the actual DataWeave code that carries out the transform.
DataWeave Header
The header of a DataWeave script defines directives that indicate the input, the output, and constant values or functions that can be referenced anywhere in the code. These directives can be manually set on the code or automatically defined based on the metadata of your flow or what you map in the UI. The structure of the header is a sequence of lines, each with its own directives. The header section is separated from body by a separator ---.
Through the use of the following directives, key aspects of the transformation are defined:
DataWeave version, i.e., %dw 1.0
Output type, i.e., %output application/java
Input type, i.e., %input payload application/xml
Namespaces to import into your transform, i.e., %namespace ns
Constants that can be referenced throughout the body, i.e., %var discount=0.05
Functions that can be called throughout the body, i.e., %varnewUrl(name) url ++ "?title=’" ++ name ++ "'"
You can also use %function for defining functions.
All directives are declared in the header section of your DataWeave document and act on the entire scope of it.
DataWeave Body
The body contains expressions that generate the output structure. In dataweave body, we define an expression that generates ouptut consisting of three data types:
Simple values: String, boolean, number, date, regex
Arrays: Represented as comma separate values
Objects: Represented as collection of key-value pairs
In the body, it is possible to write expressions which are composed of other expressions. These expressions can be nested inside Arrays or Objects or we can use operators for creating them.
Let us take an example of an API which returns information about books in JSON and our backend only accepts XML input. The XML contains total no. of books. Books are sorted by the year in which they were published. Price for a particular book is displayed only if it is greater than 30.00 dollars. In this case, we can use DataWeave for easy transformation.
Our input JSON is in the following format:
[{ "title" : "Only Time Will Tell", "author" : "Jeffrey Archer", "year" : "2011", "price" : "30.00" }, { "title" : "The Hostage", "author" : "James Patterson", "year" : "2016", "price" : "49.99" }, { "title" : "Harry Potter", "author" : "J K. Rowling", "year" : "2005", "price" : "29.99" }, { "title" : "Twilight", "author" : "Stephenie Meyer", "year" : "2007", "price" : "39.95" } ]
Output XML has following>2016</year> <price>49.99</price> <author>James Patterson</author> </bk:book> </bks:bookstore>
Then, our DataWeave Header will look like below:
%dw 1.0 %output application/xml %input payload application/json %namespace bks %namespace bk %var url="" %function newUrl(name) url ++ "?title='" ++ name ++ "'"
DataWeave Body section will look like:
bks # bookstore: { bks # totalBooks: sizeOf payload, (payload orderBy $.year map { bk # book @ (title: newUrl($.title)): { year: $.year, (price: $.price)when $.price > 30.00, author: $.author } }) }
Note: DataWeave is a new feature of the Mule 3.7 runtime that replaces the DataMapper. }} | https://dzone.com/articles/introduction-to-dataweave?fromrel=true | CC-MAIN-2019-13 | refinedweb | 1,055 | 63.39 |
Week 2
Regression
For this week the lecture slides are available here.
YouTube Video
There is a YouTube video available of me giving this material at the Gaussian Process Road Show in Uganda.
You will need to watch this in HD to make the maths clearer.
Lab Class
Linear regression with numpy and Python.
The notebook for the lab class can be downloaded from here.
To obtain the lab class as an IPython notebook, first open the IPython notebook, then paste the following code into a cell:
import urllib
urllib.urlretrieve('', 'MLAI_lab2.ipynb')
You should now be able to find the lab class by clicking File->Open on the IPython notebook menu.
Reading
- Reading (Regression)
- Sections 1.1-1.3 of Rogers and Girolami.
- Section 1.2.5 of Bishop up to Eq 1.65.
- Section 1.1 of Bishop.
- Reading (Matrix and Vector Review)
- Section 1.3 of Rogers and Girolami.
- Reading (Basis Functions)
- Chapter 1, pg 1-6 of Bishop.
- Section 1.4 of Rogers and Girolami.
- Chapter 3, Section 3.1 of Bishop up to pg 143.
Twice a month, we revisit some of our readers' favorite posts from throughout the history of Nettuts+. This tutorial was first published in January, 2010.
Give me an hour of your time, and I'll take you on a flyby of the Ruby on Rails framework. We'll create controllers, models, views, add admin logins, and deploy using Heroku's service in under an hour! In this article we'll create a simple bookshelf application where you can add books and write thoughts about them. Then we'll deploy the application in just a few minutes. So buckle up because this article moves fast!
This article assumes that you may know what Ruby on Rails is, but not exactly how it works. This article doesn't describe in-depth how each step works, but it does describe what we need to do, then the code to do that.
Zero
Ruby on Rails is a full stack MVC web application framework. Full stack means you get everything: a simple web server you can use to test your apps, a database layer, testing framework, and an MVC based design. MVC stands for Model-View-Controller.
Model
A model stores information. Models are stored in the database. Rails supports MySQL, PostgreSQL, or SQLite. Each model has its own class and table. Say we want to model a "game." A game has things like number of players, a start time, end time, teams playing, and a winner. These attributes become columns in the "games" table. Table names are lowercase, underscored, and pluralized. The model's class name is Game. In Rails you create models through migrations and generators. A migration describes how to add/remove columns and tables from the database.
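To make that naming convention concrete, here is a toy sketch in plain Ruby. This is just an illustration; ActiveRecord's real inflector handles plenty of irregular cases (person becomes people, and so on) that this naive version does not:

```ruby
# Toy sketch of Rails' model-to-table naming convention: a CamelCase class
# name becomes an underscored, pluralized table name. ActiveRecord's real
# Inflector is far more thorough -- this naive version only appends "s".
def table_name_for(class_name)
  underscored = class_name.gsub(/([a-z0-9])([A-Z])/, '\1_\2').downcase
  underscored + "s"
end

puts table_name_for("Game")       # => "games"
puts table_name_for("BookReview") # => "book_reviews"
```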
Controller
A controller is the manager. It takes information and does some logic like CRUD, or maybe imports some stuff from a file, or adds/removes permissions--you name it, a controller can do it. Controllers are the part of your app that does the work. How do we call controllers? Rails uses routes. A route is a formatted url that is tied to an action with a set of parameters. Going back to the Game model, we need a controller for its functionality. At some point we'll need to list all the games in the system. A basic REST url for this route looks like "/games". How does Rails know what controller to look for and what action to call? It looks at your routes.rb file. You may have a route that looks like "GET /games" mapped to {:controller => 'games', :action => 'index'}. This translates to GamesController and its index method. Just like models, class names are CamelCase and file names are underscored. So our GamesController would be stored in /app/controllers/games_controller.rb. After the logic, the controller renders a view.
View
A view is the easiest part to understand. It's what you see. It's the HTML you generate to show the user what they need. Views are ERB templates. ERB stands for Embedded Ruby. You use ERB similar to how you embed php into a document. If you want to insert an instance variable @game.time into some html, write <%= @game.time %>.
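ERB isn't Rails magic; it ships with Ruby's standard library, so you can try the idea in a standalone script. The variable names below are made up for illustration:

```ruby
require 'erb'

# <%= %> inserts a value into the rendered output; <% %> runs code silently.
# Rails evaluates view templates the same way, against the controller's data.
game_time = "7:30 PM"
template  = ERB.new("The game starts at <%= game_time %>.")
puts template.result(binding) # => The game starts at 7:30 PM.
```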
Ten
First install Rails. Installing Rails is very easy depending on your platform. If you are on Linux or OS X, it's no problem. Windows is more complicated and I have no experience with it. This section will give you a brief overview of installing Rails through RubyGems, the Ruby package manager. A gem is a bundle of ruby code in a package that can be used in your programs. For UNIX based systems, install RubyGems, then install the Rails gem. This process will go something like this:
# ubuntu
sudo apt-get install rubygems
# fedora
sudo yum install rubygems
# gentoo
sudo emerge rubygems
# OSX
sudo port install rubygems

# after you have rubygems installed
sudo gem install gemcutter # ruby gem hosting service
sudo gem tumble
sudo gem install rails
Here are some links to help you through the setup process
- Instant Rails, like instant LAMP but for Rails
- Ubuntu Community Docs about Ruby on Rails
- Getting Rails running on Windows
- Snow Leopard on Rails by a guy I know
- The mandatory google link
Once you can run the "rails" command you're ready for the next step.
Fifteen
Now it's time to install database support before we get started. Rails has support for all the popular databases, but for this example we'll use SQLite because it's lightweight. Depending on your platform (again), installing sqlite support can be easy or painful. It can be a pain since the gem has to be built against C extensions, which means the sqlite3 package has to be installed on your system. Again, the process will go something like this:
# ubuntu
sudo apt-get install sqlite3 libsqlite3-dev
# fedora
sudo yum install sqlite3 sqlite3-devel
# OSX
sudo port install sqlite3

# then once you have the sqlite3 package installed
sudo gem install sqlite3-ruby
Read the previous links if you're having problems with these steps. They describe installing sqlite as well.
Twenty
Time to generate our app. The rails command creates a base application structure. All we need to do is be in a directory and run it like so:
$ cd ~/projects
$ rails bookshelf # this will create a new directory named bookshelf that holds our app
$ cd bookshelf
It's important to note that the Rails default is an SQLite based app. You may be thinking, what if I don't want that? The rails command is a generator. All it does is copy stored files into a new directory. By default it will create sqlite3 databases in /bookshelf/db/development.sqlite3, /bookshelf/db/production.sqlite3, and /bookshelf/db/test.sqlite3. Database connection information is stored in /bookshelf/config/database.yml. You don't need to edit this file since it contains default information for an sqlite setup. It should look like this:
# SQLite version 3.x
#   gem install sqlite3-ruby (not necessary on OS X Leopard)
development:
  adapter: sqlite3
  database: db/development.sqlite3
  pool: 5
  timeout: 5000

test:
  adapter: sqlite3
  database: db/test.sqlite3
  pool: 5
  timeout: 5000

production:
  adapter: sqlite3
  database: db/production.sqlite3
  pool: 5
  timeout: 5000
Notice there are different environments assigned. Rails has three modes: development, test, and production. Each has different settings and databases. Development is the default environment. At this point we can start our app to make sure it's working. You can see there's a directory called /script. This directory contains ruby scripts for interacting with our application. Some important ones are /script/console and /script/server. We will use the /script/server command to start a simple server for our application.
bookshelf $ ./script/server
# Rails will start a different server depending on your platform, but it should look something like this:
=> Booting Mongrel
=> Rails 2.3.5 application starting on
=> Call with -d to detach
=> Ctrl-C to shutdown server
Time to visit the application. Point your browser to "" and you should see this splash page:
You're riding on Rails. Now that the code is working on a basic level, it's time to delete the splash page and get started with some code.
bookshelf $ rm public/index.html
Twenty Five
Our application needs data. Remember what this means? It means models. Great, but how do we generate a model? Rails comes with generators for common tasks. The generator is the file /script/generate. The generator will create our model's .rb file along with a migration to add the table to the database. A migration file contains code to add/drop tables, or alter/add/remove columns from tables. Migrations are executed in sequence to create the tables. Run migrations (and various other commands) with "rake". Rake is a ruby code runner. Before we get any further, let's start by defining some basic information for the books. A book has these attributes:
- Title : String
- Thoughts : Text
That's enough to start the application. Start by generating a model with these fields using the model generator:
bookshelf $ ./script/generate model Book title:string thoughts:text
# notice how the attributes/types are passed to the generator. This will automatically create a migration for these attributes.
# These are optional. If you leave them out, the generator will create an empty migration.
      exists  app/models/
      exists  test/unit/
      exists  test/fixtures/
      create  app/models/book.rb
      create  test/unit/book_test.rb
      create  test/fixtures/books.yml
      exists  db/migrate
      create  db/migrate/20091202052507_create_books.rb

# The generator created all the files we need to get our model up and running. We need to pay the most attention to these files:
# app/models/book.rb -- where our code resides
# db/migrate/20091202052507_create_books.rb -- code to create our books table
Open up the migration file:
class CreateBooks < ActiveRecord::Migration
  def self.up
    create_table :books do |t|
      t.string :title
      t.text :thoughts

      t.timestamps
    end
  end

  def self.down
    drop_table :books
  end
end
Notice the create_table :books block. This is where columns are created. An id primary key is created automatically. t.timestamps adds columns for created_at and updated_at. Now, run the migration using the rake task db:migrate. db:migrate applies pending migrations:
bookshelf $ rake db:migrate
== CreateBooks: migrating ====================================================
-- create_table(:books)
   -> 0.0037s
== CreateBooks: migrated (0.0038s) ===========================================
Cool, now we have a table, let's create a dummy book just for kicks in the console. The Rails console uses IRB (interactive ruby) and loads all classes for your project. IE you can access to all your models. Open the console like this:
bookshelf $ ./script/console
# let's create a new model. You can specify a hash of assignments in the constructor to assign values like this:
>> book = Book.new :title => 'Rails is awesome!', :thoughts => 'Some sentence from a super long paragraph'
=> #<Book id: nil, title: "Rails is awesome!", thoughts: "Some sentence from a super long paragraph", created_at: nil, updated_at: nil>
# and ruby will display it back
>> book.save
=> true
# now our book is saved in the database. We can query it like this:
>> Book.all # find all books and return them in an array
=> [#<Book id: 1, title: "Rails is awesome!", thoughts: "Some sentence from a super long paragraph", created_at: "2009-12-02 06:05:38", updated_at: "2009-12-02 06:05:38">]
# now that our model is saved, let's exit the console
>> exit
Now that we can create books, we need some way to show them to the user.
Thirty
Remember controllers? We need a controller to display all the books in the system. This scenario corresponds to the index action in our BooksController (books_controller.rb) which we don't have yet. Just like generating models, use a generator to create the controller:
bookshelf $ ./script/generate controller Books
      exists  app/controllers/
      exists  app/helpers/
      create  app/views/books
      exists  test/functional/
      create  test/unit/helpers/
      create  app/controllers/books_controller.rb
      create  test/functional/books_controller_test.rb
      create  app/helpers/books_helper.rb
      create  test/unit/helpers/books_helper_test.rb

# notice Rails created the file app/controllers/books_controller.rb? This is where we are going to define our actions (methods) for the BooksController class.
We need to define an action that finds and displays all books. How did we find all the books? Earlier we used Book.all. Our strategy is to use Book.all and assign it to an instance variable. Why an instance variable? We assign instance variables because views are rendered with the controller's binding. You're probably thinking: bindings and instance variables... what's going on? Views have access to variables defined in actions, but only instance variables. Why? Because instance variables are scoped to the object and not the action. Let's see some code:
class BooksController < ApplicationController
  # notice we've defined a method called index for a BooksController instance. We tie this together with routes
  def index
    # instance variables are prefixed with an @. If we said books = Book.all, we wouldn't be able to access books in the template
    @books = Book.all
  end
end
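Here is a hypothetical, database-free sketch of that hand-off. It is not Rails' real implementation, but it shows the mechanism: the template is evaluated against an object that received copies of the controller's instance variables, so the action's local variables never make the trip:

```ruby
require 'erb'

# Toy sketch: Rails copies the controller's @ivars onto a separate view
# context, then evaluates the ERB template against that context's binding.
class ToyController
  def index
    books  = ["a local variable -- the view will never see this"]
    @books = ["Rails is awesome!"]
  end
end

controller = ToyController.new
controller.index

view_context = Object.new
controller.instance_variables.each do |name|
  view_context.instance_variable_set(name, controller.instance_variable_get(name))
end

template = ERB.new("First book: <%= @books.first %>")
puts template.result(view_context.instance_eval { binding })
# => First book: Rails is awesome!
```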
Now the controller can find all the books. But how do we tie this to a url? We have to create some routes. Rails comes with some handy functions for generating RESTful routes (another Rails design principle). This will generate urls like /books and /books/1 combined with HTTP verbs to determine what method to call in our controller. Use map.resources to create RESTful routes. Open up /config/routes.rb and change it to this:
ActionController::Routing::Routes.draw do |map|
  map.resources :books
end
routes.rb can look arcane to new users. Luckily there is a way to decipher this mess. There is a routes rake task to display all your routing information. Run that now and take a peek inside:
bookshelf $ rake routes
       books GET    /books(.:format)          {:controller=>"books", :action=>"index"}
             POST   /books(.:format)          {:controller=>"books", :action=>"create"}
    new_book GET    /books/new(.:format)      {:controller=>"books", :action=>"new"}
   edit_book GET    /books/:id/edit(.:format) {:controller=>"books", :action=>"edit"}
        book GET    /books/:id(.:format)      {:controller=>"books", :action=>"show"}
             PUT    /books/:id(.:format)      {:controller=>"books", :action=>"update"}
             DELETE /books/:id(.:format)      {:controller=>"books", :action=>"destroy"}

# as you can see this command can display a lot of information. On the left column we have a helper to generate a url, then the HTTP verb associated with the url, then the url, and finally the controller and action to call.
# for example GET /books will call BooksController#index or
# GET /books/1 will call BooksController#show
# the url helpers are very important but we'll get to them later. For now know that we are going to create a /books page to list all books
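Those helpers in the left column are just methods that build url strings from your routes. Here is a hypothetical, hand-written stand-in (Rails generates the real ones for you from routes.rb, and the FakeBook struct stands in for a real model):

```ruby
# Illustrative only: Rails' named route helpers boil down to string builders.
# book_path accepts anything that responds to #id, just like the real helper
# can take a model instance.
FakeBook = Struct.new(:id)

def books_path
  "/books"
end

def book_path(book)
  "/books/#{book.id}"
end

puts books_path                 # => /books
puts book_path(FakeBook.new(1)) # => /books/1
```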
Now we need to create a template to display all our books. Create a new file called /app/views/books/index.html.erb and paste this:
<% for book in @books do %>
  <h2><%=h book.title %></h2>
  <p><%= book.thoughts %></p>
<% end %>
This simple view loops over all @books and displays some HTML for each book. Notice a subtle difference. <%= is used when we need to output some text. <% is used when we aren't. If you don't follow this rule, you'll get an exception. Also notice the h before book.title. h is a method that escapes HTML entities. If you're not familiar with ruby, you can leave off ()'s on method calls if they're not needed. h text translates to: h(text).
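Why bother with h? Without escaping, a title that contains markup would be rendered as live HTML, which opens the door to injection. Outside Rails, the standard library's CGI.escapeHTML performs the same transformation h does:

```ruby
require 'cgi'

# h protects against HTML injection by turning markup characters into entities.
title = '<script>alert("xss")</script> & more'
puts CGI.escapeHTML(title)
# => &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt; &amp; more
```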
Time to run the server and see what we've got. Start the server, then go to.
bookshelf $ ./script/server
If all goes according to plan you should see some basic HTML.
Thirty Five
We have one book in our system, but we need some more books to play with. There are a few ways to go about doing this. I like the forgery gem. Forgery can create random strings like names, or lorem ipsum stuff. We are going to set a gem dependency in our app, install the gem, then use it to create a rake task to populate our data. Step 1: open up /config/environment.rb and add this line:
config.gem 'forgery'
# now let's tell Rails to install all gem dependencies for this project
# gem install gemcutter # if you haven't already
# gem tumble # again, if you haven't already
bookshelf $ sudo rake gems:install
Now we're going to use the Forgery classes to create some fake data. The Forgery documentation is here. We'll use the LoremIpsumForgery to create some basic data. We can define our own rake tasks by creating a .rake file in /lib/tasks. So create a new file /lib/tasks/populate.rake:
begin
  namespace :db do
    desc "Populate the development database with some fake data"
    task :populate => :environment do
      5.times do
        Book.create! :title => Forgery::LoremIpsum.sentence,
          :thoughts => Forgery::LoremIpsum.paragraphs(3)
      end
    end
  end
rescue LoadError
  puts "Please run: sudo gem install forgery"
end
This rake task will create five fake books. Notice I added a begin/rescue. When you run a rake task, rake looks at all possible tasks during initialization. If you were to run any rake task before you installed the gem, rake would blow up. Wrapping it in a begin/rescue stops rake from aborting. Run the task to populate our db:
bookshelf $ rake db:populate
bookshelf $ ./script/console # let's enter the console to verify it all worked
>> Book.all
=> [#<Book id: 1, title: "Rails is awesome!", thoughts: "Some sentence from a super long paragraph", created_at: "2009-12-02 06:05:38", updated_at: "2009-12-02 06:05:38">, #<Book id: 2, ...>, ...]
>> exit
Start the server again and head back to the /books page. You should see:
Now we have a listing of more than one book. What if we have a lot of books? We need to paginate the results. There's another gem for this. The gem is 'will_paginate.' Following the same steps as before, let's add a gem dependency for 'will_paginate' and rake gems:install:
# in environment.rb
config.gem 'will_paginate'

# from terminal
bookshelf $ sudo rake gems:install

# then let's add more books to our db
bookshelf $ rake db:populate # run this a few times to get a large sample, or change the number in the rake file
Head back to your /books page and you should be bombarded by books at this point. It's time to add pagination. Pagination works at two levels. The controller decides which books should be in @books, and the view should display the pagination links. The will_paginate helper makes this very easy. We'll use the .paginate method and the will_paginate view helper to render page links. All it takes is two lines of code.
# books_controller.rb, change the previous line to:
@books = Book.paginate :page => params[:page], :per_page => 10

# index.html.erb, add this line after the loop:
<%= will_paginate @books %>
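Under the hood, pagination boils down to a LIMIT/OFFSET calculation against the database. Here is a plain-Ruby sketch of the arithmetic; the method name is made up and is not part of will_paginate's API:

```ruby
# Page numbers start at 1; the offset into the record set is (page - 1) * per_page.
# will_paginate generates the equivalent SQL: LIMIT per_page OFFSET offset.
def page_slice(records, page, per_page)
  offset = (page - 1) * per_page
  records[offset, per_page] || []
end

books = (1..25).to_a
puts page_slice(books, 1, 10).inspect # the first ten records
puts page_slice(books, 3, 10).inspect # => [21, 22, 23, 24, 25]
```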
Head back to your /books page and you should see some pagination links (given you have more than 10 books).
Sweet! We are movin' through this app. It's time to spruce up our page a little bit. One key Rails principle is DRY (Don't Repeat Yourself). We could work through the exercise of doing some basic CSS to get a page looking OK, or we could keep things DRY and use some code to do it for us. We are going to use Ryan Bates's nifty-generators gem to generate a layout for the site. A layout is a template your views can fill in. For example, we can use a layout to determine the overall structure of a page, then define where the views fill it in. Since this isn't a project dependency, we don't have to add it to environment.rb. We can just install it regularly.
# console
$ sudo gem install nifty-generators
Run the generator to create a layout file and stylesheets.
$ ./script/generate nifty_layout
      exists  app/views/layouts
      exists  public/stylesheets
      exists  app/helpers
      create  app/views/layouts/application.html.erb  # this is our layout file
      create  public/stylesheets/application.css      # css file that styles the layout
      create  app/helpers/layout_helper.rb            # view helpers needed in the generator
Take a look at the application.html.erb file and see what's inside:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
  <head>
    <title><%= h(yield(:title) || "Untitled") %></title>
    <%= stylesheet_link_tag 'application' %>
    <%= yield(:head) %>
  </head>
  <body>
    <div id="container">
      <h1><%= yield(:title) || "Untitled" %></h1>

      <%- flash.each do |name, msg| -%>
        <%= content_tag :div, msg, :id => "flash_#{name}" %>
      <%- end -%>

      <%= yield %>
    </div>
  </body>
</html>
See those yields? That's where a view fills in the layout. The last yield has no argument. Default content goes there. Yields with an argument must have content defined using content_for. Fix up index.html.erb view to go with the new layout:
<% title 'My Books' %>

<% for book in @books do %>
  <h2><%=h book.title %></h2>
  <p><%= book.thoughts %></p>
<% end %>

<%= will_paginate @books %>
All we did was add the title method which sets the title for a page. The title helper calls content_for :title and sets it to the argument. Our view fills in the last yield. Check out the results!
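If you want a feel for the content_for/yield contract without Rails in the picture, here is a toy model of it. This is an illustration only, not Rails' actual implementation:

```ruby
require 'erb'

# The view stashes named content; the layout pulls it back out. Rails' real
# content_for/yield pair works on the same principle.
@content = Hash.new { |hash, key| hash[key] = "" }

def content_for(name, markup)
  @content[name] << markup
end

def content(name = :layout)
  @content[name]
end

# the "view" renders first and registers its pieces...
content_for(:title, "My Books")
content_for(:layout, "<h2>Rails is awesome!</h2>")

# ...then the "layout" is rendered around them.
layout = ERB.new("<title><%= content(:title) %></title><body><%= content %></body>")
puts layout.result(binding)
# => <title>My Books</title><body><h2>Rails is awesome!</h2></body>
```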
Forty
Now that our application is looking better, let's add some interaction. In typical Web 2.0 style we're going to allow users to comment on our content, but we aren't going to require the user to register. We need to create a new model called Comment. A comment is going to have some text, an author, and an associated Book. How do we link these two models together? Associations. Rails provides these associations: belongs_to, has_many, has_one, and has_and_belongs_to_many. It should be easy to see that a book has many comments, and a comment belongs_to a book. So we'll use a generator to create the comment model and migration:
$ ./script/generate model Comment text:text author:string
      exists  app/models/
      exists  test/unit/
      exists  test/fixtures/
      create  app/models/comment.rb
      create  test/unit/comment_test.rb
      create  test/fixtures/comments.yml
      exists  db/migrate
      create  db/migrate/20091202081421_create_comments.rb
Astute readers will notice that this migration is lacking the foreign key column. We'll have to add that ourselves. Open up your create_comments.rb migration:
class CreateComments < ActiveRecord::Migration
  def self.up
    create_table :comments do |t|
      t.text :text
      t.string :author
      t.belongs_to :book # creates a new integer column named book_id

      t.timestamps
    end
  end

  def self.down
    drop_table :comments
  end
end
# now migrate your database
$ rake db:migrate
(in /Users/adam/Code/bookshelf)
== CreateComments: migrating =================================================
-- create_table(:comments)
   -> 0.0021s
== CreateComments: migrated (0.0022s) ========================================
Now it's time to associate our models using the Rails associations. We'll call the method inside the model's class body. Rails uses metaprogramming to generate the methods needed to make our association work. We'll edit our comment.rb and book.rb files:
# book.rb
class Book < ActiveRecord::Base
  has_many :comments
end

# comment.rb
class Comment < ActiveRecord::Base
  belongs_to :book
end
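That single has_many line is metaprogramming at work: calling the macro in the class body defines brand new instance methods. Here is a hypothetical, database-free toy version of the idea (not ActiveRecord's implementation):

```ruby
# toy_has_many defines a reader method that memoizes an array -- a stand-in
# for the collection methods ActiveRecord generates for a real association.
module ToyAssociations
  def toy_has_many(name)
    define_method(name) do
      ivar = "@#{name}"
      instance_variable_get(ivar) || instance_variable_set(ivar, [])
    end
  end
end

class ToyBook
  extend ToyAssociations
  toy_has_many :comments # defines ToyBook#comments
end

book = ToyBook.new
book.comments << "This is a comment"
puts book.comments.inspect # => ["This is a comment"]
```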
Now Book instances have a method .comments which returns an array of all its comments. Comment instances have a method called .book that returns the associated book. Use the << operator to add objects to arrays. Let's see how it works in the console:
$ ./script/console >> book = Book.find(1) => #<Book id: 1, title: "Rails is awesome!", thoughts: "Some sentence from a super long paragraph", created_at: "2009-12-02 06:05:38", updated_at: "2009-12-02 06:05:38"> >> comment = Comment.new :text => "This is an comment", :author => "Adam" => #<Comment id: nil, text: "This is an comment", author: "Adam", book_id: nil, created_at: nil, updated_at: nil> >> book.comments << comment => [#<Comment id: 1, text: "This is an comment", author: "Adam", book_id: 1, created_at: "2009-12-02 08:25:47", updated_at: "2009-12-02 08:25:47">] >> book.save => true >> book.comments => [#<Comment id: 1, text: "This is an comment", author: "Adam", book_id: 1, created_at: "2009-12-02 08:25:47", updated_at: "2009-12-02 08:25:47">] >> comment.book => #<Book id: 1, title: "Rails is awesome!", thoughts: "Some sentence from a super long paragraph", created_at: "2009-12-02 06:05:38", updated_at: "2009-12-02 06:05:38"> >> exit
In the console session I found one of the existing books, then created a new comment. Next I added it to the book.comments. Then I save book. The book must be saved for the association to be stored. What's next? We need to create a page where the user can view a book and all it comments. That page should also have a form where the user can add their comment. Create a new action in the books controller to show the details for a specified book. The book is found by id. Pop into books_controller.rb and add this:
def show @book = Book.find params[:id] end
Make a new template for this action at /app/views/books/show.html.erb and paste this:
<% title @book.title %> <h2><%=link_to(h(@book.title), book_path(@book)) %></h2> <p><%= @book.thoughts %></p>
Now let's add some links for the index actions to link the show action:
# update index.html.erb contents to: <% title 'My Books' %> <% for book in @books do %> <h2><%=link_to(h(book.title), book_path(book)) %></h2> <p><%= book.thoughts %></p> <% end %> <%= will_paginate @books %>
Remember our url helpers from rake routes? We're using book_path to generate a url to the book controller's show actions. If you don't remember check rake routes again. link_to is a helper to generate a link tag. Now let's fire up our server and click through the app. Now you should have some ugly blue links. Click on your book title and it should go to /books/:id aka BooksController#show:
Time to display some comments. Remember that console session we did a little bit back? One of our books has some comments. let's update our controller to find the comments and our show.html.erb to display them.
# books_controller.rb def show @book = Book.find(params[:id]) @comments = @book.comments end # show.html.erb <% title @book.title %> <h2><%=link_to(h(@book.title), book_path(@book)) %></h2> <p><%= @book.thoughts %></p> <% if @comments %> <h3>Comments</h3> <% for comment in @comments do %> <p><strong><%=h(comment.author) %></strong>: <%=h comment.text %> <% end %> <% end %>
So we assign @comments in the controller to be all the book's comments, then do a loop in the view to display them. Head over to /books/1 (1 came from Book.find(1) in the console session). Check this out:
Now we need the form to create a new comment. We need two things. 1, A comments controller to save the comment, and 2 a route to that action. let's tackle #1 first.
bookshelf $ ./script/generate controller Comments
Create action that instantiates a new comment, sets its attributes (text/author) from the submitted form data, and saves it.
class CommentsController < ApplicationController def create book = Book.find params[:book_id] comment = book.comments.new params[:comment] comment.save flash[:notice] = 'Comment saved' redirect_to book_path(book) end end
First the code finds the book, then creates a new comment form the form data, saves it, sets a message, then redirects back to that book's page. params holds a hash of all GET/POST data with a request. Now we need to create a route to the controller's action. Open up routes.rb:
ActionController::Routing::Routes.draw do |map| map.resources :books do |book| book.resources :comments, :only => :create end end
bookshelf $ rake routes # We needed to add a route to create a new comment for a book. We need to know what book we are creating a comment for, so we need a book_id in the route. Look at the book_comment line. # book_comment is tied to our CommentsController#create book_comments POST /books/:book_id/comments(.:format) {:controller=>"comments", :action=>"create"}"} # every time you modify routes.rb you'll need to restart the server # kill the server process you have running with ^c (ctrl + c) and start it again
Head back to the /books page and make sure nothing has blown up. Everything should be fine and dandy. Now for constructing the form. We need a form that submits POST data to /book/:book_id/comments. Luckily Rails has the perfect helper for this: form_for. form_for takes some models and generates a route for them. We pass form_for a block to create form inputs. Go ahead and paste this into the bottom of your show.html.erb:
<h3>Post Your Comment</h3> <% form_for([@book, Comment.new]) do |form| %> <p><%= form.label :author %></p> <p><%= form.text_field :author %></p> <p><%= form.label :text, 'Comment' %></p> <p><%= form.text_area :text %></p> <%= form.submit 'Save' %> <% end %>
We call form_for to create a new form for the book's comment, then use the text_field/text_area to create inputs for attributes. At this point we can go ahead and make a comment. Fill in the form and viola you now have comments!
See that green thing? That's the flash. The flash is a way to store messages between actions. It's perfect for storing little messages like this. But what do we do if a book has too many comments? We paginate them just like did before. So let's make some changes to the controller and view:
# books_controller.rb def show @book = Book.find(params[:id]) @comments = @book.comments.paginate :page => params[:page], :per_page => 10, :order => 'created_at ASC' end
# show.html.erb <% title @book.title %> <h2><%=link_to(h(@book.title), book_path(@book)) %></h2> %>
Start commenting on your books and you should see some pagination.
Now people can comment, and everything is paginated but we're missing something. We have no web interface for creating books. We need to create a form for that. Also we are the admin so only I should be allowed to create books. This means we need to create a user, login, and check to see if they can do an action.
Fifty
Now we're going to implement CRUD functionality for admins. First we'll implement actions to create, edit, and delete books. Then we'll create an admin login. Finally we'll make sure only admins can do those actions.
Creating a new books requires two new actions. One action that renders a form for a new book. This action is named 'new'. The second is named 'create.' This action takes the form parameters and saves them in the database. Open up your books_controller.rb and add these actions:
def new @book = Book.new end def create @book = Book.new params[:book] if @book.save flash[:notice] = "#{@book.title} saved." redirect_to @book else render :new end end
We also need a new view that shows a form. Create a new file /apps/views/books/new.html.erb and paste this:
<% form_for(@book) do |form| %> <p> <%= form.label :title %><br/> <%= form.text_field :title %> </p> <p> <%= form.label :thoughts %><br/> <%= form.text_area :thoughts %> </p> <%= form.submit %> <% end %>
Now we're ready to create a new book. Point your browser to /books/new and you should see this form. Go a head and create a new book. After you fill in your form you should see your new book.
Get rid of the double header in /app/views/books/show.html.erb and add some links to actions an admin can do on that book. Open up that file and set it's contents to:
<% title @book.title %> %> <p> Admin Actions: <%= link_to 'Edit', edit_book_path(@book) %> | <%= link_to 'Delete', book_path(@book), :method => :delete, :confirm => "Are you sure?" %> </p>
Head over to a book's page and you should see:
Now that we have some links to edit and delete, you can implement them. Editing a book works just about the same as creating a new one. We need an action that shows an edit form, and one to save the changes. Delete is just one action that deletes the record from the database. Open up books_controller.rb and add these actions:
def edit @book = Book.find params[:id] end def update @book = Book.find params[:id] if @book.update_attributes(params[:book]) flash[:notice] = "#{@book.title} saved." redirect_to @book else render :edit end end def destroy book = Book.find params[:id] book.destroy flash[:notice] = "#{book.title} deleted." redirect_to books_path end
The edit action finds the requested book from the id in the url. The update action finds the book from the id and uses the update_attributes method to set the new values from the form. Delete finds the book by id and deletes it. Then it redirects you back to the books listing.
Next we have to create an edit form. This form is exactly the same as the create form. We can just about duplicate the show.html.erb to edit.html.erb. All we are going to do is change the title. Create a new file in /app/views/books/edit.html.erb and paste this:
<% title "Editing #{@book.title}" %> <% form_for(@book) do |form| %> <p> <%= form.label :title %><br/> <%= form.text_field :title %> </p> <p> <%= form.label :thoughts %><br/> <%= form.text_area :thoughts %> </p> <%= form.submit %> <% end %>
Now from one of the book's pages, click the edit link. You should see a familiar form:
Notice how Rails filled in the inputs with the saved values? Nice huh. Go ahead and save some changes to a book. When you're done you should see this:
Now delete that book. You should get a confirmation dialog then be redirected back to /books.
Add a link to create a new book on the index page. Open up /app/views/books/index.html.erb and add this to the bottom:
<p> Admin actions: <%= link_to 'New Book', new_book_path %> </p>
Now that we have CRUD functionality. We need to create our admin user.
Fifty Five
Maintaing user logins is a solved problem in Rails. You rarely have to write your own authentication system. We're going to use the authlogic gem. Authlogic provides simple mechanics to authenticate users and store sessions. This is prefect for our app. We need an admin to login so he can create/edit/delete books. First let's start by installing the authlogic gem.
# add config.gem 'authlogic' in environment.rb bookshelf $ sudo rake gems:install
Create a new model to hold the admins. Since our users are only admins, we'll name the model Admin. For now the model only needs a login attribute. Generate the model using script/generate model:
bookshelf $ ./script/generate model Admin login:string exists app/models/ exists test/unit/ exists test/fixtures/ create app/models/admin.rb create test/unit/admin_test.rb create test/fixtures/admins.yml exists db/migrate create db/migrate/20091204202129_create_admins.rb
Now add authlogic specific columns to our admin model. Open up the migration you just created and paste this into it:
class CreateAdmins < ActiveRecord::Migration def self.up create_table :admins do |t| t.string :login t.string :crypted_password, :null => false t.string :password_salt, :null => false t.string :persistence_token, :null => false t.timestamps end end def self.down drop_table :admins end end
Now migrate your database.
bookshelf $ rake db:migrate == CreateAdmins: migrating =================================================== -- create_table(:admins) -> 0.0025s == CreateAdmins: migrated (0.0026s) ==========================================
Now the admin model is created. Next we need to create an authlogic session for that admin. Authlogic includes a generator for this:
bookshelf $ ./script/generate session admin_session exists app/models/ create app/models/admin_session.rb
Next we need to create some routes for logging in and out. Open up routes.rb and add this line:
map.resource :admin_session
Now we need a controller to handle the logging in and out. Generate this controller using the generator:
bookshelf $ ./script/generate controller AdminSessions exists app/controllers/ exists app/helpers/ create app/views/admin_sessions exists test/functional/ exists test/unit/helpers/ create app/controllers/admin_sessions_controller.rb create test/functional/admin_sessions_controller_test.rb create app/helpers/admin_sessions_helper.rb create test/unit/helpers/admin_sessions_helper_test.rb
Now open up /app/controllers/admin_sessions_controller.rb and paste this into it:
class AdminSessionsController < ApplicationController def new @admin_session = AdminSession.new end def create @admin_session = AdminSession.new(params[:admin_session]) if @admin_session.save flash[:notice] = "Login successful!" redirect_to books_path else render :action => :new end end def destroy current_admin_session.destroy flash[:notice] = "Logout successful!" redirect_to books_path end end
Wow! It seems like we just did a lot, but we haven't. We've just created 2 new models. One model to hold our admins, and the other to hold admin session information. Finally we created a controller to handle the logging in and out. Now we need a view to show a login form. Create a new file at /app/views/admin_sessions/new.html.erb and paste this into it:
<% title 'Login' %> <% form_for @admin_session, :url => admin_session_path do |f| %> <%= f.error_messages %> <p> <%= f.label :login %><br /> <%= f.text_field :login %> </p> <p> <%= f.label :password %><br /> <%= f.password_field :password %> </p> <%= f.submit "Login" %> <% end %>
We're almost done. We still need to tell our Admin model that it uses authlogic and add some logic to our application controller to maintain session information. All controller inherit from application_controller, so it's a good way to share methods between controllers. Open up /app/controllers/application_controller.rb and paste this:
class ApplicationController < ActionController::Base helper :all # include all helpers, all the time protect_from_forgery # See ActionController::RequestForgeryProtection for details # Scrub sensitive parameters from your log # filter_parameter_logging :password filter_parameter_logging :password, :password_confirmation helper_method :current_admin_session, :current_admin private def current_admin_session return @current_admin_session if defined?(@current_admin_session) @current_admin_session = AdminSession.find end def current_admin return @current_admin if defined?(@current_admin) @current_admin = current_admin_session && current_admin_session.user end end
Now in /app/models/admin.rb add this line inside the class:
# /app/models/admin.rb acts_as_authentic
We're finally ready to do some logging in and out. All of the stuff we did was almost purely from the authlogic documentation examples. This is a standard setup for many applications. If you want to find out more about how authlogic works you can here. Here's a run down of what we did.
- Install the authlogic gem
- Create an Admin model to hold the basic information like login/password
- Add authlogic specific columns to the Admin table
- Generated an authlogic admin session
- Created routes for logging in and out
- Generated an AdminSession controller to do all the work
- Created a view that shows a login form
- Added methods to ApplicationController for persisting sessions
- Told the Admin model that it uses authlogic
It's time to create the admin account. Our application is simple and only has one admin. Since we only have one admin, we can easily use the console. Since we'll need to recreate that user later when we deploy, it doesn't make sense to do the same thing twice. Rails now has a functionality for seeding the database. This is perfect for creating the initial records. There is a file /db/seeds.rb where you can write ruby code to create your initial models. Then you can run this file through rake db:seed. In order to create our admin model we'll need a login, password, and password confirmation. Open up /db/seeds.rb and paste this. Fill in the login with the name you want.
Admin.create! :login => 'Adam', :password => 'nettuts', :password_confirmation => 'nettuts'
We use the create! method because it will throw an exception if the record can't be saved. Go ahead and run the rake task to seed the database:
bookshelf $ rake db:seed
Now we should be able to login. Restart the server to get the new routes. Head to /admin_session/new. You should see:
Go ahead and fill it in and now you should be logged in!
Now that admins can login, we can give them access to the new/edit/delete functionality. Rails has these awesome things called filters. Filters are things you can do at points in the request lifecycle. The most popular filter is a before_filter. This filter gets executed before an action in the controller. We can create a before filter in the books controller that checks to see if we have a logged in admin. The filter will redirect users who aren't logged in, therefore preventing unauthorized access. Open up books_controller.rb and add these lines:
# first line inside the class: before_filter :login_required, :except => [:index, :show] # after all the actions private def login_required unless current_admin flash[:error] = 'Only logged in admins an access this page.' redirect_to books_path end end
Now we need to update our views to show the admin links only if there's an admin logged in. That's easy enough. All we need to do is wrap it in an if.
# show.html.erb <% if current_admin %> <p> Admin Actions: <%= link_to 'Edit', edit_book_path(@book) %> | <%= link_to 'Delete', book_path(@book), :method => :delete, :confirm => "Are you sure?" %> </p> <% end %> # index.html.erb <% if current_admin %> <p> Admin actions: <%= link_to 'New Book', new_book_path %> </p> <% end %>
We still need to add a login/logout link. That should go on every page. An easy way to put something on every page is add it to the layout.
# /app/views/layouts/application.erb < %> <% if current_admin %> <p><%= link_to 'Logout', admin_session_path(current_admin_session), :method => :delete %></p> <% else %> <p><%= link_to 'Login', new_admin_session_path %></p> <% end %> </div> </body> </html>
Now you should have login/logout links on pages depending if your logged in and logged out. Go ahead and click the through the app. Try access the new book page after you've logged out. You should see an error message.
Click through the app. You should be able to login and out, and edit/create/delete books. Time for the final step. Let's add some formatting to your thoughts and user comments. Rails has a helper method that will change new lines to line breaks and that sorta thing. Add that show.html.erb:
# <p><%= @book.thoughts %></p> becomes <%= simple_format @book.thoughts %> # do the same thing for comments # <p><strong><%=h(comment.author) %></strong>: <%=h comment.text %> becomes <p><strong><%=h(comment.author) %></strong>:</p> <%= simple_format comment.text %>
It doesn't make sense to put the thoughts in the index, so let's replace that with a preview instead of the entire text.
# index.html.erb # <p><%= book.thoughts %></p> becomes <%= simple_format(truncate(book.thoughts, 100)) %>
Now our final index page should look like this:
Finally we need to set up a route for our root page. Open up routes.rb and add this line:
map.root :controller => 'books', :action => 'index'
Now when you go to / you'll see the book listing.
Sixty
Now we are going to deploy this app in a few steps. You don't need your own server or anything like that. All you need is an account on Heroku. Heroku is a cloud Rails hosting service. If you have a small app, you can use their service for free. Once you've signed up for an account, install the heroku gem:
$ sudo gem install heroku
Heroku works with git. Git is a distributed source control management system. In order to deploy to heroku all you need to do is create your app then push your code to it's server. If you haven't already install git, instructions can be found here. Once you have heroku and git installed you are ready to deploy. First thing we need to do is create a new git repo out of your project:
bookshelf $ git init Initialized empty Git repository in /Users/adam/Code/bookshelf/.git/
It's time to do some preparation for heroku deployment. In order to get your application's gems installed, create a .gems file in the root project directory. Each line has the name of the gem on it. When you push your code to heroku it will read the .gems file and install the gems for you. So create a .gems file and paste this into it:
forgery will_paginate authlogic
There is a problem with authlogic on heroku, so we need to create an initializer to require the gem for us. Create a new file in /config/initializers/authlogic.rb and put this line in there:
require 'authlogic'
Now we should be ready to deploy. First thing you're going to do is run heroku create. This will create a new heroku app for you. If you're a first time user, it will guide you through the setup process.
bookshelf $ heroku create Git remote heroku added
No we are ready to deploy. Here are the steps
- Add all files in the project to a commit
- Commit the files
- Push are code to heroku
- Migrate the database on heroku
- Seed the database on heroku
- Restart the heroku server
- Open your running application
bookshelf $ git add -A bookshelf $ git commit -m 'Initial commit' bookshelf $ git push heroku master bookshelf $ heroku rake db:migrate bookshelf $ heroku rake db:seed bookshelf $ heroku restart bookshelf $ heroku open
Here is the finally app running on the world wide web:
Hit the Brakes
We've covered a lot of ground in this article, so where do we go from here? There are few things we didn't do in this app. We didn't add any validations to the models. We didn't use partials. We didn't do any administration for the comments. These are things you should look into next. Here are some links to help you with the next steps.
- Completed Source Code
- Confused about the form parts? Read this
- Confused about routes? Read this
- Confused about heroku? Read this
- Confused about associations? Read this
- Confused about authlogic? Read this
Links to gems used in this project.
- Follow us on Twitter, or subscribe to the Nettuts+ RSS Feed for the best web development tutorials on the web. Ready
Ready to take your skills to the next level, and start profiting from your scripts and components? Check out our sister marketplace, CodeCanyon._20<<
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
| https://code.tutsplus.com/tutorials/zero-to-sixty-creating-and-deploying-a-rails-app-in-under-an-hour--net-8252 | CC-MAIN-2017-51 | refinedweb | 7,343 | 69.48 |
This page describes and provides access to a table which shows the relationships between steps in recon-all, and files that are read, created, written or deleted at each step.
The main innovation is that this info has been derived from an actual run of Freesurfer, not just from documentation which can so easily be out of sync with the real product.
Applies to: Freesurfer version 3.0.5 2007-02-07
Download Excel version
Though an image of the full chart appears at the bottom of this page, it's more tractable to inspect (or print) your own copy of the Excel version, which you can download from this link:
gw_FS_FileIO_20070618.xls
Data notes
1. C = Create; I = In (read); O = Out (write); D = Delete.
2. You'll see that the most conspicuous feature of the table is a more-or-less diagonal line of "C"s (indicating "create"), as successive steps create new files based on Input files produced in previous steps. I listed the steps in execution order from top to bottom, and I have deliberately organized the files (ie: columns) in order of creation-step, so as to create this visually clarifying diagonal "cascade" effect.
3. To avoid clutter, I have omitted columns for most log files, and all "touch" files.
4. Unlike my previous effort of this type, in this chart I've retained steps and files from right hemi as well as left hemi processing. I did this to more clearly see steps that involve both left and right. Inspecting the chart, you'll see that each side starts with a "tessellate" operation, and the "Right/Left" column helps to keep you oriented.
Method
In brief, this chart builds on directory timestamp data gathered during a run of recon-all on the "bert" sample subject. Gathering this data entails two parts:
- A "directory survey" script (FWIW in python) recursively navigates the Freesurfer directories, capturing read, modify and status-change date-time information for each file.
- Commands added to the recon-all script to run the directory survey script before and after each existing recon-all command. More precisely, recon-all's existing code builds a command string at each step in variable "cmd", and then runs it by invoking "$cmd". I manually searched for each such location, and inserted my own command to invoke "before" and "after" versions of the directory survey script.
Once the run is complete, the directory-survey data is slurped into a database, cleaned up, and then a cross-tab query organizes the data in Files-vs-steps form.
Wrinkles
- The main wrinkle is that using the file system's read, modify and status-change date-time feature is a bit iffy. First, this only works on some systems -- on others not all the data-time values work, or they may have granularity too coarse to be useful (despite being reported in seconds). I found that this did work OK on Centos 4 (RHEL 4, or Fedora Core something) with local files. It did not work on Centos 4 with the files on Mac OS X server accessed via NFS. Don't know whether the problems lies with Mac or NFS.
- Even on a system where directory timestamps work acceptably well, it's obviously necessary to make sure the successive steps in the recon-all process do not occur within one second of each other, otherwise the available timestamp resolution will be unable to report a difference in times. For that reason, the directory-survey script incorporates a delay of at least two seconds.
- Backup jobs running in the middle of the proceedings can log extra reads that are nothing to do with recon-all :-).
Image of the Files vs Steps table
(As noted above, you're probably better off with the Excel version -- so consider this image mainly an advertisement :-).
Example python script to capture directory timestamps
import sys import os import time # Arg1: label for this dir snapshot # Arg2: dir to be ls'ed # Use > or >> to write or append to file label = sys.argv[1] label = '"' + label + '"' startdir = sys.argv[2] # outfilepath = sys.argv[3] # atime; /* Time of last access */ # mtime; /* Time of last data modification */ # ctime; /* Time of last file status change */ # Supposedly... # ctime includes things like chmod, chown, etc. which do not change the # contents of the file, but impact the file itself. #----------------------- def secs2str(secs): tt = time.localtime(secs) tstr = time.strftime('%Y-%m-%d %H:%M:%S', tt) return tstr #----------------------- # print "=====================" # print sys.argv[0] + ' ' + label + ' ' + startdir + ' ' + outfilepath # print sys.argv runstr = 'run_' + time.strftime('%Y-%m-%d_%H:%M:%S') # timestr = time.strftime('%Y-%m-%d %H:%M:%S') allfiles = [] for (dirname, dirshere, fileshere) in os.walk(startdir): for filename in fileshere: pathname = os.path.join(dirname, filename) statinfo = os.stat(pathname) statstr = ( 'gwdirdump.py ' + label + ' ' + runstr + ' "' + secs2str(statinfo.st_atime) + '"' + ' "' + secs2str(statinfo.st_mtime) + '"' + ' "' + secs2str(statinfo.st_ctime) + '" ' + filename + ' ' + dirname) print statstr # Sleep to prevent more file changes before time ticks over at least a second! time.sleep(2)
Document Author(s)
Graham Wideman (GrahamWideman) | http://surfer.nmr.mgh.harvard.edu/fswiki/ReconAllFilesVsSteps | CC-MAIN-2015-14 | refinedweb | 841 | 63.29 |
This code had been working for about 6 months:
EASession session = new EASession(accessory, "com.bluebamboo.p25i");

session.OutputStream.Delegate = this;
session.OutputStream.Schedule(NSRunLoop.Current, NSRunLoop.NSDefaultRunLoopMode);
session.OutputStream.Open();

session.InputStream.Delegate = this;
session.InputStream.Schedule(NSRunLoop.Current, NSRunLoop.NSDefaultRunLoopMode);
session.InputStream.Open();

byte[] temp = new byte[data.Length + 5];
Array.Copy(data, 0, temp, 5, data.Length);
temp[0] = 0x55;
temp[1] = 0x66;
temp[2] = 0x77;
temp[3] = 0x88;
temp[4] = 0x44;

session.OutputStream.Write(temp, (uint) temp.Length);

Thread.Sleep(1000);

session.OutputStream.Close();
session.OutputStream.Unschedule(NSRunLoop.Current, "kCFRunLoopDefaultMode"); //NSRunLoop.NSDefaultRunLoopMode);
session.OutputStream.Delegate = null;

session.InputStream.Close();
session.InputStream.Unschedule(NSRunLoop.Current, "kCFRunLoopDefaultMode"); //NSRunLoop.NSDefaultRunLoopMode);
session.InputStream.Delegate = null;

session.Dispose();
accessory.Dispose();
But now, it outputs an error stating the output stream has no space available. Checking HasSpaceAvailable at any time in code shows it's false, and obviously nothing will print to the Bluetooth printer any more, and I have no idea what needs to be done to "make space".
Is this a Xamarin bug, or an iOS bug? It might only be latter versions of iOS, but I only have 6.1.2 and 6.1.3 available to me, and both have this same problem.
Is anything written to the iOS Device Log when this happens?
I assume you're also already listing the external accessories you're using in your Info.plist?
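For reference, that declaration lives under the `UISupportedExternalAccessoryProtocols` key. A minimal sketch, using the protocol string from the code above, might look like:

```xml
<!-- Info.plist fragment (sketch). The protocol string must match the one
     passed to the EASession constructor and the one the accessory declares. -->
<key>UISupportedExternalAccessoryProtocols</key>
<array>
    <string>com.bluebamboo.p25i</string>
</array>
```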
I'm getting the same issue when trying to connect to a Star Micronics TSP650II Bluetooth printer.
A StackOverflow comment on my question there directed me here. I could work around the issue by making a binding for the SDK Star provides for iOS, but I'd much rather not increase the package size by 700K for that.
Were you able to solve this problem without using the SDK?
I am working with an iMZ320 Zebra Bluetooth printer and have the same issue. In fact, I got the same error about no space available in both cases: using the bound SDK, and using the EASession/EAAccessoryManager classes directly.
Thanks
Any update on this?
Thanks
This is still not working.
And - yes, the protocol is listed in Info.plist
And it seems it's a Xamarin bug, because I looked at the source code of Star Micronics' printer SDK and it seems to be doing the same thing as I'm doing through Xamarin. Using the SDK through a binding works, but using the EASession directly doesn't.
I put this code after the last Open call for the session streams:
And then my write code is this:
I know this thread is a bit old, but I was wondering if anyone can explain to me how to use `.OutputStream.Delegate = this;`?
I am not sure where in the code above @Dino used the delegate.
Also, I can't quite understand the use of `BaseViewController`; I would appreciate it if someone could refer me to a link.
Thank you in advance
Sorry - the class that contains the first block of code inherits from NSStreamDelegate, thus it can be the stream delegate for both input and output. And BaseViewController inherits from UIViewController and merely contains helper functions (some static, like IsiOS7OrBetter).
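For readers trying to picture the missing piece: here is a minimal sketch of what such a delegate class can look like in Xamarin.iOS. The class name and the comments are illustrative assumptions, not Dino's actual code.

```csharp
// Illustrative sketch only -- not the actual class from this thread.
// An instance of a class deriving from NSStreamDelegate can be assigned
// to both session.InputStream.Delegate and session.OutputStream.Delegate.
public class PrinterStreamDelegate : NSStreamDelegate
{
    // Invoked on the run loop the stream was scheduled on.
    public override void HandleEvent (NSStream theStream, NSStreamEvent streamEvent)
    {
        switch (streamEvent)
        {
        case NSStreamEvent.HasSpaceAvailable:
            // Safe to write to the output stream here.
            break;
        case NSStreamEvent.HasBytesAvailable:
            // Read the printer's response from the input stream here.
            break;
        case NSStreamEvent.ErrorOccurred:
            // Inspect theStream.Error for details.
            break;
        }
    }
}
```

Note that the events only fire if the stream is scheduled on a run loop that is actually running, which is the point made later in this thread.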
Thanks @Dino for the quick answer.
I understood those lines were inherited from NSStreamDelegate, but I couldn't work out how the delegate is working in this case. Where does it implement the stream handleEvent delegate method, if at all?
My problem is that I have very similar code to yours when it comes to establishing the connection. I can open both streams (In/Out) and even verify it using `NSStreamStatus outstatus = outStream.Status;`, and yet `HasSpaceAvailable();` is always zero, and if I try to write using `OutputStream.Write(Cmd, (uint)cmd_num_bytes)` it will also always return zero (which is expected given the above).
I was also baffled by:
Doesn't WriteBytes belong to Android?
Thanks again for your help
I only posted a snip of the entire class. The delegate is working because it has code I didn't paste in here. Likewise the WriteBytes is a function I wrote that I didn't paste in here. It's nothing to do with Android.
quick question, I have implemented this code and runs without a problem, but seems like the output is getting queued in the printer and not getting printed, is there a command I need to send so the printing actually happens?
I verified by running a different application that also prints and the text i send before get printed on top and then it prints whatever was sent by the second application.
Hi guys,
I recently had the same problem like stated in the first post from @DeanCleaver. The problem with run loops is that only the run loop for the main thread has been started automatically by the app itself. And because every thread has its own run loop, you have to start run loops, which belong to other threads than the main thread, by yourself.
With
NSRunLoop.Currentyou will always get the run loop for the current thread! So if you are not on the main thread, you have to either start the run loop by
NSRunLoop.Current.Run();or by
NSRunLoop.Current.RunUntil(NSDateTime.Now.AddSeconds(0.1));like @DeanCleaver did. I encountered that both statements were blocking the running thread.
NSRunLoop.Current.Run();indefinitely until someone calls
NSRunLoop.Current.Stop();or kills the running process and
NSRunLoop.Current.RunUntil(NSDateTime.Now.AddSeconds(0.1));for at least 0.1 seconds.
So why not using
NSRunLoop.Mainas scheduler for your input and output streams? It solves any problems and all you have to do in the above code sample from @DeanCleaver is, replace
NSRunLoop.Currentwith
NSRunLoop.Main.
I'm trying to use the "polling" method to write out data on the out stream but HasBytesAvailable() is always false. You don't need a run loop when polling, do you?
@JeffGonzales, What do you mean with "polling" method? I think I have really similar problems.
I only got it working with a timeout (about 250ms) between cyclic send and receive calls. But that's not acceptable!
If you are transferring about 500 parameters from a device and you have to wait additionally 500 * 250ms...
@Matze When I refer to polling I mean the option that doesn't use run-loop scheduling:
I cannot write anything as hasSpaceAvailable is always false unless I register a delegate that implements HandleEvent().
@JeffGonzales
I don't even got it working if i assign a delegate which implements "HandleEvent()". Only if i schedule it in a run-loop, the "HandleEvent()" method in my delegate is receiving events. But this is not what I want. I need the polling mode. This is my code for opening the streams:
mEASession = new EASession(connectedEAAccessory, EAAccessoryProtocol); mNSInputStream = mEASession.InputStream; mNSInputStream.Delegate = new NSStreamDelegate(); //"NSStreamDelegate" is inherited from "NSObject" and implements "HandleEvent()" mNSInputStream.Open(); mNSOutputStream = mEASession.OutputStream; mNSOutputStream.Delegate = new NSStreamDelegate(); //"NSStreamDelegate" is inherited from "NSObject" and implements "HandleEvent()" mNSOutputStream.Open();
In addition:
public class NSStreamDelegate : NSObject, INSStreamDelegate { /// <summary> /// The event handler which is triggered when a stream (input/output) receives an event. /// </summary> /// <param name="stream"></param> /// <param name="streamEvent"></param> [Export("stream:handleEvent:")] public void HandleEvent(NSStream stream, NSStreamEvent streamEvent) { if (streamEvent == NSStreamEvent.HasBytesAvailable) { //OnBytesAvailable(); } } }
@FranjoStipanovic.2218
I finally got it working with a little "tweak". I still don't use the polling mode because I didn't got it working but I have a solution which is quite adequate for me. I adjusted my read and write method like so (Only write method as example here):
Everything else is set up like in my other posts. But I am not reacting on any events of the stream. I am listening until I receive any bytes and I am waiting until I have space to write to the stream.
I've had the same bluetooth "library" in my app for over two years and suddenly with the 11.2.6 iOS update it started having the "HasSpaceAvailable is always false" issue. Changing from InputStream.Schedule(NSRunLoop.Current, NSRunLoop.NSDefaultRunLoopMode); to InputStream.Schedule(NSRunLoop.Main, NSRunLoop.NSDefaultRunLoopMode); and things work again.
Hey guys..
Thanks to everyone for this discussion, NSRunLoop.Current.RunUntil(NSDateTime.Now.AddSeconds(0.1)); it resolves my issue. | https://forums.xamarin.com/discussion/2551/easession-outputstream-hasspaceavailable-always-false | CC-MAIN-2019-35 | refinedweb | 1,377 | 58.18 |
This post will show you how to write your first C# program. Please make sure that you have installed Visual Studio on your machine. If you have not yet installed VS, download and install Visual Studio first.
Downloads | IDE, Code, & Team Foundation Server | Visual Studio
Download Visual Studio Community, Professional, and Enterprise. Try Visual Studio Code or Team Foundation Server for free today.
Creating first C# program
- Open Visual Studio and create a Console Application.
- Modify the main() method as shown below.
using System; //Adding .NET namespaces namespace LearnCSharp //Namespace of the class { class Program //The class { static void Main(string[] args) //Main method { Console.WriteLine("Hello World"); Console.ReadKey(); //This keeps the console alive } } }
Output
Hello world
About the program
- using System - The using keyword is used to include namespace (collection of classes) to the program.
- namespace LearnCSharp - Namespace of the current class is declared using the namespace keyword.
- class Program - A class named Program is declared. A class is declared using the class keyword.
- static void Main(string[] args) - Here, we define the main() method. It is the entry point of a console application.
- Console.WriteLine() - WriteLine() is a method of the Console class. These classes and methods are defined in the System namespace. The WriteLine() method is used to display text on the console.
- Console.ReadKey() - This code makes the console application to wait for a key-press, before closing the console.
Subscribe
Join the newsletter to get the latest updates. | https://www.geekinsta.com/first-c-program/ | CC-MAIN-2021-31 | refinedweb | 244 | 68.57 |
Synopsis edit
-
- tailcall command ?arg...?
See Also edit
- Tail call optimization
- TIP#327
- wrapping commands
- wrap commands by using interp invokehidden together with tailcall
Description edittailcall interprets its arguments as a command and executes the command,replacing the execution frame of the command that invoked tailcall. Unlike uplevel, it does not evaluate its arguments as a script, so double substitution does not occur.Unlike some other languages, tailcall is not limited to executing only its caller, but can execute any command. The command to be executed is resolved in the current context before tailcall replaces the context.tailcall is made possible by NRE. It first became available as ::tcl::unsupported::tailcall in the release of Tcl8.6a2.Contrast the following two commands:
tailcall foo [bar] $var return [uplevel 1 [list foo [bar] $var1]]There are a couple of differences:
- foo is resolved in the current context, not in the caller's
- the stack frame is really gone, not just virtually. This has positive effects on memory, and a possibly confusing effect on stack traces.
tailcall try $script
WHD: Let me see if I understand what this does.
proc fred {} { george } proc george {} { tailcall harry }If I call fred, it's almost as though fred called harry directly, instead of george. Not so?MS: yup - all traces of george are gone from the program stack when harry is called. Now, if harry resolves to a different command in george's current namespace than it would under fred's, the harry that is called is george's and not fred's (no diff if the commands are FQ, of course).I think this does pretty much what delegation is supposed to do, right?
jima 2009-10-15: Perhaps this has been asked before or somewhere else...Is this an optimization that takes place at bytecode generation time?I mean, once fred knows that has to call harry directly the bytecodes generated would be the ones equivalent to have said:
proc fred {} { harry }I reckon I am not familiar with all the internals of Tcl but I find this would be an interesting thing. Wouldn't this be a new way to have some sort of macros?MS: Currently, tailcall is not bytecompiled. Everything happens at runtime. That extremely simple example could indeed be bytecoded in a minute, but things get more involved as soon as fred has a bit more structure to it: arguments, local variables, namespace issues both for variable and command lookup, multiple exit points with different (or no) tailcall in them, etc.jima: Thanks a lot Miguel for the answer. I see the point. I guess this is the same with uplevel 1, isn't it?
proc fred {} { uplevel 1 { #code here } }Would it be interesting to define a case (like a contract) saying if your proc is simple enough then it gets bytecompiled and you get some benefits?MS: you do not mean "bytecompiled" but rather "inlined into the caller", as all proc bodies get bytecompiled. There are quite a few other issues with that, especially to accomodate Tcl's dynamic nature. Changing one inlined proc would cause a spoiling of all bytecodes and recompilation of the world, at least with the current approach to bytecode lifetime management.
AMG: Sounds a lot like exec in Unix shells. See execline for more information on a noninteractive Unix shell where everything is done with exec/tailcall.
PYK 2015-12-06: Combine tailcall with an identity command to emulate return:
proc p1 {} { tailcall lindex {Hello from p1} }
Interaction with try edit
% proc foo {} {puts {I'm foo}} % proc bar {} {puts {I'm bar}; try {tailcall foo} finally {puts exiting}} % foo I'm foo % bar I'm bar exiting I'm foo31-03-2015 HE I'm sure ;-) that I don't understood what happend there. Why "exiting" is printed before "I'm foo" when I call bar? If I change bar to
proc bar {} {puts {I'm bar}; try {puts tryBody; tailcall foo} finally {puts exiting}; puts afterwards}and call it, I get:
I'm bar tryBody exiting I'm fooWhat I see is that tailcall replace the rest of proc even inside the body of try. But then, why is the finally clause executed? And even, if we assume the finally clause has to be executed because it is documented always to be executed, then there would be the question, why before the execution of the tailcall command?AMG: [foo] is invoked by replacing [bar] which implies the intervening [try] block must exit before [foo] can start.
wdb: Apparently, the tailcall closes one of the last gaps in Tcl: Tail recursion as known in Scheme.
Example: Cause Caller to Return edit
proc one {} { two return 8 } proc two {} { tailcall return 5 } one ;# -> 5one returns 5, not 8, because by invoking two, which, through tailcall, is replaced by return.
Example: Factorial editNEM: As a test/demo of how to use this facility, here is a simple benchmark using the factorial function:
package require Tcl 8.6a1 namespace import ::tcl::mathop::* interp alias {} tailcall {} tcl::unsupported::tailcall # Naive recursive factorial function proc fac n { if {$n <= 1} { return 1 } else { * $n [fac [- $n 1]] } } # Tail-recursive factorial proc fac-tr {n {k 1}} { if {$n <= 1} { return $k } else { tailcall fac-tr [- $n 1] [* $n $k] } } # Iterative factorial proc fac-i n { for {set k 1} {$n > 1} {incr n -1} { set k [expr {$n*$k}] } return $k } proc test {} { set fmt {%-10s ..%-12.12s %s} puts [format $fmt Implementation Result Time] foreach n {1 5 10 100 500 1000 2500 5000 10000} { puts "\nfac $n:" foreach impl {fac fac-i fac-tr} { if {[catch {$impl $n} result]} { set result n/a set time n/a } else { set time [time [list $impl $n] 10] } puts [format $fmt $impl $result $time] } } } testPutting this in a table, we get (timings taken on Linux box, 2.66GHz, 1GB RAM):As we can see, the tail-recursive version is slightly slower than the iterative version, and unlike the naive version, manages to not blow the stack.
Using Tailcall for Callbacks edit[Napier / Dash Automation] 2015-12-28:Someone (I forget who now) gave me this little snippet which I love and handles many cases for me. I use it with the -command switches throughout my scripts:
proc callback {args} {tailcall namespace code $args} namespace eval foo { proc myProc var {puts $var} proc myCall {} { after 5000 [callback myProc $::myVar] } } set myVar "Cool!" foo::myCall
Emulating tailcall editLars H 2010-05-09: As of late, when writing an uplevel, I've sometimes found myself thinking "That would be slicker with tailcall, but I can't rely on 8.6 features in this project". Today it occurred to me that one can however use a proc to emulate the properties of tailcall that would be needed in these cases, and thus provide a route for forward compatibility.The main situation I've encountered is that of delegating to another command which may make use of upvar or uplevel. That's basically taken care of by
proc utailcall args {uplevel 2 $args}although it's safer to make it
proc utailcall args {return -code return [uplevel 2 $args]}in case the "terminate proc early" aspect of tailcall is relied upon; this is easy to do without thinking much about it.Another aspect of tailcall is the name resolution of the called command. This can be done as follows
proc ntailcall {cmd args} { return -code return [ [uplevel 1 [list ::namespace which $cmd]] {*}$args ] }but it's almost as easy to do both at the same time
proc untailcall {cmd args} { return -code return [ uplevel 2 [list [uplevel 1 [list ::namespace which $cmd]]] $args ] }A word of warning here is that this will produce a very confusing error message if the command is undefined, as namespace which returns an empty string in that case.A third aspect is that of preserving return levels.
proc rtailcall args { catch $args result options dict incr options -level 2 return -options $options $result }This leaves some extra material in the errorInfo, but one can probably live with that. Combining the "r" and "u" aspects is straightforward, but will leave even more:
proc rutailcall args { catch {uplevel 2 $args} result options dict incr options -level 2 return -options $options $result }To complete the set, one might just as well write down the combination of the "r" and "n" aspects
proc rntailcall {cmd args} { catch { [uplevel 1 [list ::namespace which $cmd]] {*}$args } result options dict incr options -level 2 return -options $options $result }and of all three
proc rnutailcall {cmd args} { catch { uplevel 2 [list [uplevel 1 [list ::namespace which $cmd]]] $args } result options dict incr options -level 2 return -options $options $result }But note: all of the above will fail if used for tail recursion, as soon as the loops get long enough.
Replacement for uplevel editAMG: uplevel has limitations with respect to bytecode compilation and interpretation of return. If uplevel's level count is 1, and if it's the last thing being done in the proc, these limitations can be avoided by using tailcall instead. Note that uplevel takes a script whereas tailcall takes a command. If you want to pass a script to tailcall, make it be the sole argument to try.See Possible uplevel deficiencies
When to apply tailcall optimization editHaO 2012-12-14: Is it a good idea to replace any code:
proc proc1 {arg1 arg2} { # do something here which finds arg3 and arg4 return [proc2 $arg3 $arg4] }by
proc proc1 {arg1 arg2} { # do something here which finds arg3 and arg4 tailcall proc2 $arg3 $arg4 }If proc2 is for sure found in the caller namespace?Is this an intelligent optimization?I came to this idea, as the TI C compiler calls this "tailcall optimization".AMG: Yes, except in a highly unlikely situation where proc2 needs proc1 to be visible in the stack. Procedures really ought not to care who called them, but Tcl makes all sorts of things possible, including stupid things. | https://wiki.tcl.tk/14011 | CC-MAIN-2017-47 | refinedweb | 1,670 | 54.46 |
I am trying to find a way to power these off should I wish to conserve battery power.
First thing I did was stick a switch in between the LCD shield GND pin and the GND pin of the arduino.
However, when the switch is turned off, there is a bit of leakage current going such the the LCD screen remains dimly lit.
Next thing I tried was adding powerOn() and powerOff() functions to the class Adafruit_TFTLCD. However there is still leakage current happening such the the screen remains dimly lit with the switch off.
Any suggestions on where else the current might be leaking through?
Has anyone attempted powering down these screens when not in use?
void Adafruit_TFTLCD::powerOff() { #ifdef USE_ADAFRUIT_SHIELD_PINOUT pinMode(A4, INPUT_PULLUP); pinMode(A3, INPUT_PULLUP); pinMode(A2, INPUT_PULLUP); pinMode(A1, INPUT_PULLUP); pinMode(A0, INPUT_PULLUP); pinMode( 2, INPUT_PULLUP); pinMode( 3, INPUT_PULLUP); pinMode( 4, INPUT_PULLUP); pinMode( 5, INPUT_PULLUP); pinMode( 6, INPUT_PULLUP); pinMode( 7, INPUT_PULLUP); pinMode( 8, INPUT_PULLUP); pinMode( 9, INPUT_PULLUP); pinMode(cs, INPUT_PULLUP); pinMode(cd, INPUT_PULLUP); pinMode(wr, INPUT_PULLUP); pinMode(rd, INPUT_PULLUP); pinMode(reset, INPUT_PULLUP); #endif } void Adafruit_TFTLCD::powerOn() { init(); }
| https://forum.arduino.cc/t/powering-off-adafruit_tftlcd/562117/12 | CC-MAIN-2021-39 | refinedweb | 181 | 58.52 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
override a fields.related
Hi all,
I have a problem trying to override a fields.related. It's originally defined in product module, product_supplierinfo class:
_columns = { ... 'product_uom': fields.related('product_id', 'uom_po_id', type='many2one', relation='product.uom', string="Supplier Unit of Measure", readonly="1", help="This comes from the product form."), ... }
I create a module, that depends on product, to put a many2one instead:
class product_supplierinfo(Model): _inherit = 'product.supplierinfo' _columns = { 'product_uom': fields.many2one('product.uom', "Supplier Unit of Measure", required=True, help="This comes from the product form."), }
I add a function to populate the field and call it at installation
def _init_seller_uom(self, cr, uid, ids=None, context=None): psi_ids = self.search(cr, SUPERUSER_ID, [], context=context) for psi in self.browse(cr, SUPERUSER_ID, psi_ids, context=context): uom_id = psi.product_id.uom_po_id.id self.write(cr, SUPERUSER_ID, psi.id, {'product_uom': uom_id}, context=context) return psi_ids
At this point, my column is correctly set and I can modify the values as I'd like. Perfect!
Now, the problem comes when I restart my server with --update=all. It seems the system first parses the original class in product module, sees that the field is a not-stored related field, and thus drops the column. I can indeed see that this query is executed:
ALTER TABLE "product_supplierinfo" DROP COLUMN "product_uom" CASCADE
Later in the update, it parses my module and execute this query:
ALTER TABLE "product_supplierinfo" ADD COLUMN "product_uom" int4
At the end of the update, my column is still updatable, but all values are lost!
Is there a solution? Thanks.
The working solution is finally:
class product_supplierinfo(Model): _inherit = 'product.supplierinfo' def _get_product_uom(self, cr, uid, ids, field_name, arg, context): res = {} for psi in self.browse(cr, uid, ids, context=context): res[psi.id] = psi.product_uom_stored.id return res def _set_product_uom(self, cr, uid, ids, field_name, field_value, arg, context): psi = self.browse(cr, uid, ids, context=context) psi.write({'product_uom_stored': field_value}) return ids _columns = { 'product_uom': fields.function(_get_product_uom, fnct_inv=_set_product_uom, type="many2one", relation="product.uom", help="The supplier UoM for this product."), 'product_uom_stored': fields.many2one('product.uom', "Supplier Unit of Measure", required=True, help="This comes from the product form."), }
Short answer: that's the expected behavior, and you're doing something that is considered a bad practice.
Why does this happen: For many technical and logical reasons, during installations/upgrades the framework needs to handle each module and its dependencies as an isolated set of modules. For instance the
product module will be upgraded before loading the modules that depend on
product. This is necessary to maintain the modularity of OpenERP and let each module deal with the database and registry state that it should know about.
Why is it a bad practice: Overriding a field to change its type (or make it stored when it wasn't) is one of the few things that will break the encapsulation principle, by seriously changing the original behavior of the parent module rather than simply extending it. The extent of that change means the original module and other modules based on it may suddenly have surprises that could break their logic completely. The OpenERP API contains enough entry points for extending the behavior of other modules that you can always do what you want in a different manner, without breaking this rule. Here you could probably add a new m2o field and override some methods to use it as you need. It will also force you to review the cases where the original field is used and perhaps help you detect cases where you wrongly assumed you could replace the related field by an arbitrary value. For example what happens if the UOM you pick is not compatible with the product UOM, and what if some code elsewhere directly takes the Purchase UOM on the product because it does not expect the one of the
supplierinfo to be different?
Some similarly bad practice:
- Storing a field that is normally not stored (this is what you're doing)
- Changing the required flag of a stored field from a parent module (at model level, not view level)
- Overriding methods without calling super()
- Monkey-patching code from parent modules
You can inherit a model and re-declare some of its fields, but in that case you should not alter anything that will significantly change the model. You can change view-level attributes (such as
states,
readonly,
string, etc.). You can even change its type if the new type is 100% compatible at model level, e.g. replace a
char field by a stored function field of type
char, with a proper setter and getter that guarantee it will at least keep its old behavior. Anything else is usually a bad idea.
There have been a few cases were official OpenERP modules used to do this (changing a field type), because it often looks easier at first sight, but it was a bad decision in every case, and most of them have been removed now.
The framework gives you a lot of freedom, including the option to shoot yourself in the foot if you really ask for it, but the cases where you'll need that are not very frequent, and you should be ready to deal with the consequences if you're asking for that.
That may not be the answer you expected but I hope it helps a bit... ;-)
Thanks for the explanations.
So in my exemple, I should:
create a new fields.many2one named product_uom_stored.
in place of the existing product_uom field, put a function field, not stored, with a proper setter and getter, that would return the value stored in product_uom_stored.
Would that be the correct way to do?
@Weste: yes that solution would probably work without the problems you were facing :)
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/override-a-fields-related-47475 | CC-MAIN-2017-17 | refinedweb | 1,014 | 53.61 |
The patches have been updated to address the issues pointed
out below (thanks Bojan).
New contents at the old links:
I had a question though (inline below)...
Bojan Smojver wrote:
> On Wed, 2007-11-14 at 20:02 -0500, Paul J. Reder wrote:
>
>> The APR portion of the patch:
>>
>>
>
> Just a few comments (without going into what the patch does):
>
> - you probably want to rename apr_ldap_init_xref_lock() to
> apr_ldap_xref_init() instead
done
> - LDAP_rebindproc() should probably be static
done
> - what happens when apr_ldap_xref_init() gets called again (e.g. by
> another Apache module wanting to initialise this subsystem)?
addressed (check if it's already been initialized)
> - is it possible to make the API so that xref entry is returned instead
> of being stored for the process?
Question: To what end? The xref entry isn't of any use to anyone except
in terms of a node in a list that the rebind callback can scan through.
The problem is that the rebind callback function is called from the ldap
library function and has no context related to the original request info
other than the ldap handle passed to it. The xrefs and the list they are
kept in needs to be protected to avoid concurrency issues, and having the
xref pointer doesn't help the rebind callback function since the xref
pointer can't be passed to it. Is there some reason the xref entry would
be of use to the caller that I've not thought of?
> - LDAP_xref_entry_t live is a namespace violation, because it's in a
> public header file; should probably be called apr_ldap_xref_entry_t
done
--
Paul J. Reder
-----------------------------------------------------------
"The strength of the Constitution lies entirely in the determination of each
citizen to defend it. Only if every single citizen feels duty bound to do
his share in this defense are the constitutional rights secure."
-- Albert Einstein | http://mail-archives.apache.org/mod_mbox/apr-dev/200711.mbox/%3C473CBF8B.3040903@remulak.net%3E | CC-MAIN-2018-34 | refinedweb | 305 | 66.98 |
# What's new in AngouriMath 1.2?
Hi. For the last 7 months I have been working on the biggest release of [AngouriMath](https://github.com/asc-community/AngouriMath) so far. There is something I want to tell you about it.
### Briefly about the project
In November of 2019 I realized that it would be nice to have a symbolic algebra library for .NET, which could simplify expressions, solve equations, build latex code, and many more. That is how I decided to create one.
But this already exists... Whenever I tell people what I am working on, they suggest alternatives: rewrite SymPy, make a .NET wrapper over SageMath, pirate Wolfram|Alpha, or use the primitive mathnet.symbolics (which its own authors describe as such).
All of those have limitations or difficulties. In contrast, what I am working on is a very lightweight library, made and optimized for .NET specifically. It is open-source (under MIT).
### Release 1.2
In August one of the main contributors, [@HappyPig375](https://github.com/Happypig375), helped to rewrite a significant part of the library into a type hierarchy. Every operator/function has its own node (type). Thanks to it, the library now has a more obvious API, improved performance and security. Now, let us go over what has been done within these 7 months.
#### An expression is a record
For example, this is what `Sumf` looks like:
```
public sealed partial record Sumf(Entity Augend, Entity Addend) : NumericNode
```
Thanks to this, we can now apply pattern matching:
```
internal static Entity CommonRules(Entity x) => x switch
{
// (a * f(x)) * g(x) = a * (f(x) * g(x))
Mulf(Mulf(Number const1, Function func1), Function func2) => func1 * func2 * const1,
// (a/b) * (c/d) = (a*c)/(b*d)
Mulf(Divf(var any1, var any2), Divf(var any3, var any4)) => any1 * any3 / (any2 * any4),
// a / (b / c) = a * c / b
Divf(var any1, Divf(var any2, var any3)) => any1 * any3 / any2,
```
(these are a few examples of the patterns used in the simplification algorithm)
### Math
Here I am going to go over features related to the math itself.
#### New functions
`Secant`, `cosecant`, `arcsecant`, `arccosecant` were added as separate nodes.
12 hyperbolic functions were added. They do not have their own nodes and return their symbolic expression (`sinh(x)` as `(e.Pow(x) - e.Pow(-x)) / 2`).
`Abs` and `Signum` were added. The syntax of `Abs` is the following: `(|x|)`. It looks similar to how we write it on paper, while at the same time avoiding ambiguity (since `|` is symmetric, a bare `|x|` could be parsed ambiguously).
Euler's totient function was added.
```
WriteLine(@"phi(8)".EvalNumerical());
WriteLine(@"(|-3 + 4i|)".EvalNumerical());
WriteLine(@"sinh(3)".Simplify());
WriteLine(@"sec(0.5)".Simplify());
```
Prints
```
4
5
(e ^ 3 - 1 / e ^ 3) / 2
sec(1/2)
```
#### Domains
They allow limiting every node's range. If a node falls out of its domain, it turns into a `NaN`, which makes the entire expression undefined. `SpecialSet`s are used as the values of domains.
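As a sketch of how domains look in code (I am recalling the `WithCodomain` method, the `Domain` enum and the `Evaled` property from the 1.2 API; verify the exact names against the docs):

```
using System;
using AngouriMath;
using AngouriMath.Core;

Entity expr = "sqrt(x)";
// With the default (complex) codomain, sqrt(-1) evaluates to the imaginary unit
Console.WriteLine(expr.Substitute("x", -1).Evaled);
// Restricted to the reals, the same substitution turns into NaN,
// per the domain rule described above
Console.WriteLine(expr.WithCodomain(Domain.Real).Substitute("x", -1).Evaled);
```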
#### Booleans and logic
Logic operators were finally added. At the beginning, there was an idea to make those similar to what we use in programming; instead, it was decided to make them literal. That is why the syntax is: `not`, `or`, `xor`, `and`, `implies`.
To check whether an expression is evaluable into a `Boolean`, you need to use `EvaluableBoolean`. In the same way, we use `EvaluableNumerical` to check whether an expression is evaluable into a `Number`.
```
WriteLine(@"(true or b) implies c".Simplify());
```
(prints `c`)
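The evaluability checks mentioned above can be sketched like this (I am assuming `EvaluableBoolean`/`EvaluableNumerical` are properties and that an `EvalBoolean()` method exists alongside `EvalNumerical()`):

```
using System;
using AngouriMath;

Entity b = "true and (false or true)";
// Check before evaluating to avoid an exception
if (b.EvaluableBoolean)
    Console.WriteLine(b.EvalBoolean());

Entity n = "x + 1";
// False here: the value depends on the free variable x
Console.WriteLine(n.EvaluableNumerical);
```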
#### Equality and inequality signs
They carry the `Boolean` type if their operands are computed. Of course, the following were added: `=`, `<`, `>`, `<=`, `>=`.

You can also combine inequalities. For example, `a > b > c` is interpreted as `a > b and b > c`.
```
WriteLine(@"a < b >= c".Simplify());
```
(Prints `a < b and b >= c`)
#### Sets
Sets were added. They are parsable and have their own nodes.
The simplest type of set is `FiniteSet`. Its syntax: `{ 1, 2, 3 }`.
`Intervals` also have familiar syntax: `[1; 2]` for both points included, `(1; 2)` for both excluded, `[1; 2)` for the right one excluded, and `(1; 2]` for the left one excluded. Complex intervals were removed because of their clumsiness.
`SpecialSet`s are preset sets. Now we have `CC`, `RR`, `QQ`, `ZZ`, `BB` for complex, real, rational, integer and boolean values.
`ConditionalSet` is written in [set-builder notation](https://en.wikipedia.org/wiki/Set-builder_notation), for example: `{ x : x > 0 and x^2 = y }` (any `x` such that `x` is greater than 0 and `x^2` equals `y`).
```
WriteLine(@"({ 1, 2 } \/ { 5 }) /\ { x : x in [2; 3] and x > 0 } ".Simplify());
```
(Prints `{ 2 }`)
#### Limits improved
Transformation rules for the first and second remarkable limits were added, as well as l'Hopital's rule.
```
WriteLine("tan(a x) / (b x)".Limit("x", 0));
WriteLine("(sin(t) - t) / t^3".Limit("t", 0));
```
(Prints `a / b` and `-1/6` respectively)
#### "Provided" node
It allows setting constraints on an expression. For example, the square root over real numbers can be defined as `sqrt(x) provided x >= 0`. It turns into `NaN` if you substitute a negative `x`.
If an expression that turns into `NaN` is an element of a finite set, that element is excluded from the set instead of turning the whole expression into `NaN`. This is the only exception to how `NaN` propagates.
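A short sketch of this behavior (the `Evaled` property is my recollection of the 1.2 API; the exact member name may differ):

```
using System;
using AngouriMath;

Entity sqrtReal = "sqrt(x) provided x >= 0";
// The predicate holds, so this evaluates to the square root itself
Console.WriteLine(sqrtReal.Substitute("x", 4).Evaled);
// The predicate fails, so the whole expression becomes NaN
Console.WriteLine(sqrtReal.Substitute("x", -4).Evaled);

// Inside a finite set, the NaN element is dropped instead of
// poisoning the set, per the exception described above
Entity set = "{ sqrt(x) provided x >= 0, 1 }";
Console.WriteLine(set.Substitute("x", -4).Evaled);
```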
#### Piecewise
`Piecewise` is a sequence of `Provided`s. Unlike the classical piecewise-defined function, here the order of elements matters: when a `Piecewise` is computed, the first `Provided` whose predicate is true is returned.
Here is an example of how we can define the absolute value function for real numbers via `Piecewise`:
```
Entity abs = "piecewise(x provided x > 0, -x provided x <= 0)";
WriteLine(abs.Substitute("x", 3).EvalNumerical());
WriteLine(abs.Substitute("x", -3).EvalNumerical());
```
(Prints 3 in both cases)
### Ease and convenience of use
This part of the article is about features that make AngouriMath more comfortable to use.
#### Exceptions were reconsidered and refactored
Now all exceptions thrown by the library inherit from `AngouriMathBaseException`. Because there is no `p/invoke` or IO, you can be assured that any exception assignable to `AngouriMathBaseException` does not bring any direct harm to the system or the user's data. In other words, you can now catch this exception as the last `catch` clause.
#### Performance improved
To be precise, the performance has been fluctuating throughout all this time. Nonetheless, it is still by far better than that of 1.1.0.5. [Here](https://github.com/asc-community/AngouriMath/blob/master/Sources/AngouriMath/Docs/WhatsNew/version_performance_control.md) you can find a performance report for every key commit.
#### F# is now supported
With this release, the AngouriMath API is now supported natively in the wonderful F# language. It might not be as complete as the main API, so in especially complicated cases you can still call methods from the core library itself.
#### Interactive
I wrote an article some time ago about [AngouriMath in Jupyter](https://habr.com/ru/post/528816/). AngouriMath.Interactive itself simply converts `ILatexiseable` into LaTeX code and renders it with MathJax (read more about it in the mentioned article).
A simple example of using AngouriMath.Interactive in Jupyter

#### Multithreading
All computations happen strictly in one thread. The library is not responsible for your threading problems. Moreover, all methods are thread-safe. The settings are local for every thread (`[ThreadStatic]`).
The main feature of this update is that you can interrupt computations of `Solve` or `Simplify`.
#### New settings
This is the third version of the settings mechanism. Even though I am not a big fan of the new solution, it is the best I could achieve. To set a new value of a setting, we write
```
using var _ = MaxExpansionTermCount.Set(10);
// some code
```
Then this setting will be automatically rolled back to the previous value once the end of the scope is reached (the return type of `Set` is `IDisposable`).
### That is about it
Thank you for your attention. I will be pleased to answer your questions.
Anybody willing to contribute is welcome (contact me or fork and make your changes directly).
### References
1. [GitHub](https://github.com/asc-community/AngouriMath) page of the project.
2. [Website](https://am.angouri.org) of the project.
3. Detailed [What's new](https://am.angouri.org/#whatsnew).
4. [Plans](https://github.com/asc-community/AngouriMath/milestones) for the future updates.
5. My GitHub [profile](https://github.com/WhiteBlackGoose).
6. [SymPy](https://www.sympy.org) - those who inspire me. | https://habr.com/ru/post/545436/ | null | null | 1,425 | 58.38 |
-
Language Bindings for the C++ API: Python partially working
I've just uploaded some files onto They are minimal implementations (window with a button) for Perl and Python. The Perl one works fine, but I'm still having issues with the Python.
Here are some of the issues with Python:
Apparently by convention packages start with a lower-case letter. The bindings are currently in the 'Haiku' package. This would be a trivial change; it depends on how important it is to Haiku's Python user community.
PyModules and PyTypes (= classes) cannot share the same name, as far as I can tell. Thus, in order to have constants and plain functions available from (for example) Haiku.Application, the Application PyType must be in a separate namespace, currently Haiku.Application.Application. Depending on what the community wants, I could move all the constants and plain functions into the ApplicationKit PyModule, and have the PyType be Haiku.Application.
The really big problem, however, is passing objects. BWindow::MessageReceived gets a BMessage object. In order to pass this to Python, I need to have a PyType. But the PyType in question is defined in the ApplicationKit, which is a separate package. So I don't have access to it.
In Perl, this was not a problem; I simply used a string containing the Perl class name, and as long as the user had loaded the relevant package, Perl took care of it. But Python wants the PyType, not just a string. I'm still looking into a way to get around this problem. I'm trying to do it by eval'ing some Python code, but so far I have been unsuccessful. If I can't do it any other way, I could export the PyType from the other package, but then Python would be loading the .so and I would be loading it a second time, in addition to the extra overhead of exporting and importing. It just seems like a waste of resources.
In any case, the Python test script displays the window, and calls event hooks on the Application object (ArgvReceived, ReadyToRun, QuitRequested), but when you click the button and it tries to call the Window's MessageReceived event, it dies.
- jalopeura's blog
- Login to post comments
Re: Language Bindings for the C++ API: Python partially working
While there is a convention for package and module names to be lower case (to avoid file system trouble), this is often ignored. On the other hand, class names are almost always in CamelCase (except built-in types).
As for namespaces, you unfortunately seem to have misunderstood the Python concept, which is very different from the one used by Perl.
In the test program, it would look like this:
or:
Edit: The following paragraph required correction:
Also, as you noticed, dependencies across Python packages are a problem inside C/C++ extensions. Extensions should be self-contained. This is not possible with the Haiku API, if you split it up.
Re: Language Bindings for the C++ API: Python partially working
As for namespaces, you unfortunately seem to have misunderstood the Python concept, which is very different from the one used by Perl.
If by "misunderstood the concept" you mean "were not aware of the conventions", then yes, I agree. Otherwise, I don't know what you mean. I have become all too familiar with the "everything is an object" mentality of Python..
Yes, I'm already putting multiple classes into a single module; for example, Window (the base object) and CustomWindow (the one you can subclass to respond to events)* live in the same module. The difficult part is that Window (the object) and Window (the namespace/module) cannot have the same name. This leads to rather long class names.
*Before you ask, the reason I have two versions of Window is to avoid overhead. Every time an object responds to an event, the extension has to determine the Python object, translate the arguments into Python objects, and look up the Python method. If there's no subclass, it's just going to find the extension-defined base method, which will translate the arguments back into C++ data types and call the base class version. So if the user isn't subclassing it, it makes no sense to bother with all that.
In the test program, it would look like this:
or:
Thank you. Now I have an organizing principle to follow, although the long class names (Package.Module.Class, rather than simply Package.Class) really stick out, in my opinion. Of course, they were already that long the way I'm currently handling it; it just seems odd to me (as an outsider) that Python forces this on users. But I'm willing to follow Python conventions and give the users what they want.
However, I still have the problem of constants. For example, there are constants, enums, and in some cases plain functions (i.e., not object or class methods) defined in the header files for the various objects. They logically belong with that object. But Python won't let me place, for example, B_NO_SPECIFIER into the same namespace as the Message object, because that namespace holds a class.
So which follows Python convention better: putting them into the package (Haiku), putting them into the kit module (Haiku.ApplicationKit), or putting them into a separate module (either Haiku.MessageConstants or Haiku.ApplicationKit.MessageConstants)?
Also, as you noticed, dependencies across Python packages are a problem inside C/C++ extensions. Extensions should be self contained. This is not possible with the Haiku API, if you split it up.
I was trying to avoid putting all the kits into the same extension. For example, why should a user be required to load the Storage kit if he's not going to use it? But the Storage kit uses elements from the Application and Interface kits, so it needs access to them.
And yes, I know a Python user can from-import and only get the desired modules/classes, but Python still has to load the entire .so into memory.
Re: Language Bindings for the C++ API: Python partially working
Okay, we seem to have a problem with the definition of "name". ;-) Let's look at this snippet:
Here, "Haiku.Window.Window" is not a class name. The class name is "Window". It lives in the namespace of the module "Window" who exists in the package global namespace of the package "Haiku". So, "Haiku.Window.Window" is more like a path, if you will, a bit like nested namespaces in C++ or class paths in Java. In Python, of course, it's actually an object reference hierarchy.
Long story short, if the path is Haiku.Window.Window, the class and module do have the same name. Because C has no real concept of namespaces (and in C++, they work quite differently), the names within C/C++ extensions have to be the full path (with dots replaces by underscores) rather than the actual Python names.
Oh, and you don't have to use the full path within Python code every time. It's just the default. Every name may explicitly be imported into the current namespace (from <module> import <name>), or you may import the complete namespace of a module (from <module> import *), though the latter is usually considered to be bad style.
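To make the path-versus-name distinction concrete, here is a small stand-in sketch in plain Python (the real Haiku bindings are not available here, so the package and module are mocked with `types.ModuleType`; real code would use `import` statements instead of attribute assignment):

```python
import types

Haiku = types.ModuleType("Haiku")                # stand-in for the package
Haiku.Window = types.ModuleType("Haiku.Window")  # stand-in for the module

class Window:                                    # the class name is just "Window"
    pass

Haiku.Window.Window = Window

w = Haiku.Window.Window()   # full path: package.module.class
W = Haiku.Window.Window     # like "from Haiku.Window import Window"
w2 = W()

print(type(w).__name__)     # -> Window
```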
As for the problems with constants, I don't really understand. Why do you think there is a problem with putting constants and classes in the same namespace? There are many Python extensions and modules who do exactly that.
Finally, I doubt that the resulting .so is so big it should be split. It is your choice, of course. However, you should consider, that loading multiple small .so may actually require more resources, than loading a single big one, particularly if multiple programs are using said .so at the same time. If you really want to do it, don't use "eval". Use "PyImport_ImportModule". | https://www.haiku-os.org/blog/jalopeura/2011-06-24_language_bindings_c_api_python_partially_working | CC-MAIN-2017-04 | refinedweb | 1,323 | 72.76 |
Public please.

>> Autotest, or via a spy in Autoconf, to real check what matters: shfn
>> support, and not "shfn support amongst shells supporting LINENO". If,
>> out of chance, there happens to be a shell that supports LINENO but
>> not shfn, then we have to go through another full cycle, and this
>> LONG... :(

> What about putting a spy in AS_INIT that checks there happens to be a
> shell that supports LINENO but not shfn, and to mail address@hidden
> if so? See the attached diff; it is intended to keep things as
> low-tech as possible in the shell detection script. (I remember the
> "Present but Cannot Be Compiled" e-mail bursts, but these messages
> will hopefully be less frequent).

As I already said, if it ever happens that such a shell exists (supports
LINENO but not shfn), then we lose one round. Sounds useless a risk to
take. Apply the same idea with an embedded M4sh script containing a
simple AS_INIT_WITH_SHELL_FUNCTIONS and I'm happy.

>> I admit I understood your patch as the first step towards using shfn
>> in Autoconf, but I'm wrong, sorry.

> It is, but is more long term than you thought (and sorry if I gave
> that impression). For example, all the patches I committed are the
> result of inspecting the source for the m4 list (AS_FOREACH) patch,
> which is something that I'd like to have in, surely after I test it on
> real life examples, but also before shell functions.

> Paolo

> Index: m4sh.m4
> ===================================================================
> RCS file: /cvsroot/autoconf/autoconf/lib/m4sugar/m4sh.m4,v
> retrieving revision 1.107
> diff -u -r1.107 m4sh.m4
> --- m4sh.m4	24 Nov 2003 10:44:52 -0000	1.107
> +++ m4sh.m4	24 Nov 2003 14:58:02 -0000
> @@ -195,6 +195,46 @@
>  # Name of the executable.
>  as_me=`AS_BASENAME("$[0]")`
> +$SHELL <<\EOF
> +func_return () {
> +  (exit [$]1)
> +}
> +
> +func_success () {
> +  func_return 0
> +}
> +
> +func_failure () {
> +  func_return 1
> +}
> +
> +func_ret_success () {
> +  return 0
> +}
> +
> +func_ret_failure () {
> +  return 1
> +}
> +
> +if func_success; then
> +  if func_failure; then
> +    echo 'Your system does not have working shell functions.'
> +    echo 'Please write to address@hidden (func_failure succeeded).'
> +  fi
> +else
> +  echo 'Your system does not have working shell functions.'
> +  echo 'Please write to address@hidden (func_success failed).'
> +fi
> +if func_ret_success; then
> +  if func_ret_failure; then
> +    echo 'Your system does not have working shell functions.'
> +    echo 'Please write to address@hidden (func_ret_failure succeeded).'
> +  fi
> +else
> +  echo 'Your system does not have working shell functions.'
> +  echo 'Please write to address@hidden (func_ret_success failed).'
> +fi
> +EOF
> ])

Looks good!
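Stripped of the m4 quoting (`[$]1` becomes `$1`), the probe embedded in that heredoc can be run directly in any POSIX shell; this standalone condensation of the patch (my own rearrangement) prints `shell functions: yes` on a shell with working functions:

```shell
func_return () {
  (exit $1)
}
func_success () {
  func_return 0
}
func_failure () {
  func_return 1
}
func_ret_success () {
  return 0
}
func_ret_failure () {
  return 1
}

ok=yes
# func_success must succeed and func_failure must fail,
# both for exit-status-based and return-based functions.
if func_success; then
  func_failure && ok=no
else
  ok=no
fi
if func_ret_success; then
  func_ret_failure && ok=no
else
  ok=no
fi
echo "shell functions: $ok"
```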
Threading Tutorials & Articles
Deep C# - avoiding race conditionsby Mike James
Mike James explores the perils of multi-threading and explores ways of staying safe in a multi-core environment.
Automate web application UI testing with Seleniumby Sing Li
Testing web applications is a problem, but Sing Li thinks the solution might be easier than you think with Selenium.
WPF Custom Controlsby George Shepherd
WPF completely overturns the classic approach to developing Windows applications and adds user interface flexibility and pizzazz unavailable to Windows developers up to now. George looks at one aspect of this - implementing controls.
Parallel Extensions to the .NET Frameworkby Daniel Moth
Taking full advantage of multiple-core CPU architectures is becoming an essential step for new applications. How do you automate the process?
Delegates in VB.NETby John Spano
You use them everyday, but might not know it. In this article, we will take a look at what a delegate is and how it will help you to develop better software..
.NET Delegates: A C# Bedtime Storyby Chris Sells
An introduction to delegates, listeners, events and asynchronous notification.
.NET Threading Part IIby Randy Charles Morin
This is the second article of two parts on .NET threading. In this second part, I will discuss further the synchronization objects in the System.Threading .NET namespace, thread local storage, COM interoperability and thread states.
C# Threading in .NETby Randy Charles Morin
The first in a two part series on C# threads, introducing how to create and manipulate threads with the .NET framework, including creating a thread, thread pools, syncronization, race conditions and timers.
Socket Programming in C# - Part 2by Ashish Dhar
The second part in this series, revealing more practical alternatives to the basic blocking methods in .NET
Learn OpenGL and C#by Johnny
An introduction to using OpenGL with CsGL - an open source library for using OpenGL in .NET.
Events and Delegatesby Faisal Jawaid
An introduction to event driven programming in C#, through the use of Events and Delegates.
When Session Variables Go Badby Bruce Johnson
Bruce Johnson takes a look at the pros and cons of using Session variables to maintain state on a web site, and the problems you might hit when using them.
Create your own Web Server using C#by Imtiaz Alam
This article explains how to write a simple web server application using C#.
WinChat For .NETby Patrick Lam
WinChat For .NET is a simple peer-to-peer chatting program that functions very similarly to the WinChat program provided by Windows 2000.
Worker Threadsby Joseph M. Newcomer
This describes techniques for proper use of worker threads. It is based on several years' experience in programming multithreaded applications.
Visual Studio Next Generation: Language Enhancementsby Microsoft
Find out about all the great enhancements in Visual Studio.NET | https://www.developerfusion.com/t/threading/tutorials/ | CC-MAIN-2019-04 | refinedweb | 460 | 55.13 |
Today we’ll start looking at a branch of math called Decision Theory. It uses the types of things in probability and statistics that we’ve been looking at to make rational decisions. In fact, in the social sciences when bias/rationality experiments are done, seeing how closely people make decisions to these optimal decisions is the base line definition of rationality.
Today’s post will just take the easiest possible scenarios to explain the terms. I think most of this stuff is really intuitive, but all the textbooks and notes I’ve looked at make this way more complicated and confusing. This basically comes from doing too much too fast and not working basic examples.
Let’s go back to our original problem which is probably getting old by now. We have a fair coin. It gets flipped. I have to bet on either heads or tails. If I guess wrong, then I lose the money I bet. If I guess right, then I double my money. The coin will be flipped 100 times. How should I bet?
Let’s work a few things out. A decision function is a function from the space of random variables
(technically we can let
be any probability space) to the set of possible actions. Let’s call
our set of actions where
corresponds to choosing tails and
corresponds to heads. Our decision function is a function that assigns to each flip a choice of picking heads or tails,
. Note that in this example
is also just a discrete space corresponding to the 100 flips of the coin.
We now define a loss function,
. To make things easy, suppose we bet 1 cent every time. Then our loss is
cent every time we guess wrong and
cents if we guess right. Because of the awkwardness of thinking in terms of loss (i.e. a negative loss is a gain) we will just invert it and use a utility function in this case which measures gains. Thus
when we guess wrong and
when we guess right. Notationally, suppose
is the function that tells us the outcome of each flip. Explicitly,
The last thing we need is the risk involved. The risk is just the expected value of the loss function (or the negative of the expected value of the utility). Suppose our decision function is to pick
every time. Then our expected utility is just
. This makes sense, because half the time we expect to lose and half we expect to win. But we double our money on a win, so we expect a net gain. Thus our risk is
, i.e. there is no risk involved in playing this way!
This is a weird example, because in the real world we have to make our risk function up and it does not usually have negative expected value, i.e. there is almost always real risk in a decision. Also, our typical risk will still be a function. It is only because everything is discrete that some concepts have been combined which will need to be pulled apart later.
The other reason this is weird is that even though there are $2^{100}$ different decision functions, they all have the same risk because of the symmetry and independence of everything. In general, each decision function will give a different risk, and they are ordered by this risk. Any minimum risk decision function is called "admissible" and it corresponds to making a rational decision.
I want to point out that if you have the most rudimentary programming skills, then you don't have to know anything about probability, statistics, or expected values to figure these things out in these simple toy examples. Let's write a program to check our answer (note that you could write a much simpler program, only about 5 lines with no functions, to do this):
import random
import numpy as np
import pylab

def flip():
    return random.randint(0, 1)

def simulate(money, bet, choice, length):
    for i in range(length):
        tmp = flip()
        if choice == tmp:
            money += 2*bet
        else:
            money -= bet
    return money

results = []
for i in range(1000):
    results.append(simulate(10, 1, 0, 100))

pylab.plot(results)
pylab.title('Coin Experiment Results')
pylab.xlabel('Trial Number')
pylab.ylabel('Money at the End of the Trial')
pylab.show()
print(np.mean(results))
This Python program runs the given scenario 1000 times. You start with 10 cents. You play the betting game with 100 flips. We expect to end with 60 cents (we start with 10 and have an expected gain of 50). The plot shows that sometimes we end with way more, and sometimes we end with way less (in these 1000 we never end with less than we started with, but note that is a real possibility, just highly unlikely):
It clearly hovers around 60. The program then spits out the average after 1000 simulations and we get 60.465. If we run the program a bunch of times we get the same type of thing over and over, so we can be reasonably certain that our above analysis was correct (supposing a frequentist view of probability it is by definition correct).
Eventually we will want to jump this up to continuous variables. This means doing an integral to get the expected value. We will also want to base our decision on data we observe, i.e. inform our decisions instead of just deciding on what to do ahead of time and then plugging our ears, closing our eyes, and yelling, “La, la, la, I can’t see what’s happening.” When we update our decision as the actions happen it will just update our probability distributions and turn it into a Bayesian decision theory problem.
So you have that to look forward to. Plus some fun programming/pictures should be in the future where we actually do the experiment to see if it agrees with our analysis.
In this Quick Tip we'll take a look at how to embed and display a 3D model in Flash, using Papervision3D.
Final Result Preview
Let's take a look at the final result we will be working towards:
Introduction
To use this tutorial you will need to have a 3D model, exported as a .dae file, and its texture, as an image file.
I'm going to be using this low-poly mountain bike model from 3DOcean, created by OneManBand (who also created this neat 3D Object Viewer in AIR).
You will need to download a copy of Papervision3D (you can also find a copy in the source files)
Step 1: Creating the Flash File
Create a new ActionScript 3 document with dimensions of 550x200px and set the frame rate to 30fps. Also, set the document class to "EmbeddingDAE".
Create a rectangle that covers the whole stage, and fill it with a radial gradient of #FFFFFF to #D9D9D9. Adjust the gradient with the Gradient Transform Tool, so it looks like this:
Step 2: Setting up the Document Class
Create a new ActionScript 3 file and name it "EmbeddingDAE". This class will extend a class from Papervision that has all the basic functionality set up.
As we're going to be embedding the 3D model in the SWF, we need to make sure the file has been fully loaded before trying to make use of it.
Here is the code for that:
package {
    import flash.events.Event;
    import org.papervision3d.view.BasicView;

    public class EmbeddingDAE extends BasicView {
        public function EmbeddingDAE() {
            this.loaderInfo.addEventListener(Event.COMPLETE, onFullyLoaded);
        }

        private function onFullyLoaded(e:Event):void {
        }
    }
}
Step 3: Embedding the Resources
Instead of hosting our resources on a webserver and loading them from there, we're simply going to embed them in the SWF. We do this by using the Flex SDK Embed tag. If you don't have the Flex SDK, or are having trouble with the pre-installed version, you can download it here
Flash knows how to deal with certain types of files, like my .png texture file, but it doesn't know the .dae file format. Therefore we have to set a secondary parameter, the MIME type, to application/octet-stream - this means the file will be transformed into a ByteArray.
When using the Embed tag, we need to refer to the relative (or full) path of the file, and assign a class to it. Later we can create an instance of the embedded file using this class.
Here you can see the code:
public class EmbeddingDAE extends BasicView {
    [Embed(source="mountain_bike.dae", mimeType="application/octet-stream")]
    private var bikeModelClass:Class;

    [Embed(source="bike_texture.png")]
    private var bikeTextureClass:Class;

    public function EmbeddingDAE()
You will need to replace the paths so they match your own files.
Step 4: Handling the Texture
To use our texture with our model in Papervision3D, we need to do three things:
- Create an instance of the texture as a Bitmap - so we can access its bitmapData.
- Create a Material with this bitmapData -- this will function like a texture.
- Create a MaterialsList, which will link our material to our model. It will need the name of the material used for the model. If you only have one texture file (which is most common) you do not need to worry about this, just use "all".
Here is the code doing this (added to
onFullyLoaded()):
var bitmap:Bitmap = new bikeTextureClass();
var bitmapMaterial:BitmapMaterial = new BitmapMaterial(bitmap.bitmapData);
var materialsList:MaterialsList = new MaterialsList();
materialsList.addMaterial(bitmapMaterial, "all");
Remember to import:
import flash.display.Bitmap; import org.papervision3d.materials.BitmapMaterial; import org.papervision3d.materials.utils.MaterialsList;
Step 5: Load the Model
To load our model, we need to do four things:
- Create a variable for our model - think of this as an empty shell.
- Create an instance of the ByteArray containing our model.
- Create an instance of the variable for our model - creating the shell.
- Load our model by passing the ByteArray and the MaterialsList to our empty shell.
First create the variable:
private var bikeModelDAE:DAE;
Then do the rest (adding to onFullyLoaded()):
var byteArray:ByteArray = new bikeModelClass();
bikeModelDAE = new DAE();
bikeModelDAE.load(byteArray, materialsList);
Remember to import:
import flash.utils.ByteArray;
import org.papervision3d.objects.parsers.DAE;
Step 6: Displaying the Model
Now all we are missing is being able to see the model, which is a piece of cake. I'm also adjusting the position of the camera so we can get a good look at this model. Then I'm telling Papervision3D to re-render every frame.
Here's the code (again adding to onFullyLoaded()):
this.scene.addChild(bikeModelDAE);
this.camera.z = 500;
this.startRendering();
This is what it will look like:
Step 7: Adding Rotation
Now we can see the model, but only from one point of view. That is a little dull isn't it? Lets add some rotation! Here we're going to override a function that is being called every frame by the Papervision3D engine.
override protected function onRenderTick(event:Event = null):void {
    super.onRenderTick(event);
    bikeModelDAE.yaw(1);
}
Here it is once again:
Conclusion
Now you know how to add 3D models to your Flash projects, and it is actually quite simple. I hope you enjoyed reading and found it useful.
Thanks for reading!
| https://code.tutsplus.com/tutorials/quick-tip-displaying-a-3d-model-with-papervision3d--active-7557 | CC-MAIN-2018-17 | refinedweb | 899 | 57.06 |
Quoting Marian Marinov (mm@1h.com):
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> On 06/03/2014 08:54 PM,.
> >
> > The permission check change would probably only need to be:
> >
> > @@ -2180,6 +2245,10 @@ static int do_new_mount(struct path *path, const char *fstype, int flags,
> > 		return -ENODEV;
> >
> > 	if (user_ns != &init_user_ns) {
> > +		if (!(type->fs_flags & FS_UNPRIV_MOUNT) && !capable(CAP_SYS_ADMIN)) {
> > +			put_filesystem(type);
> > +			return -EPERM;
> > +		}
> > 		if (!(type->fs_flags & FS_USERNS_MOUNT)) {
> > 			put_filesystem(type);
> > 			return -EPERM;
> >
> > There are also a few funnies with capturing the user namespace of the
> > filesystem when we perform the mount (in the superblock?), and not
> > allowing a mount of that same filesystem in a different user namespace.
> >
> > But as long as the kuid conversions don't measurably slow down the
> > filesystem when mounted in the initial mount and user namespaces I
> > don't see how this would be a problem for anyone, and is very little
> > code.
>
> This may solve one of the problems, but it does not solve the issue
> with UID/GID maps that overlap in different user namespaces.
> In our cases, this means breaking container migration mechanisms.
>
> Will this at all be addressed or I'm the only one here that has this
> sort of requirement?

You're not. The openstack scenario has the same problem. So we have a
single base rootfs in a qcow2 or raw file which we want to mount into
multiple containers, each of which has a distinct set of uid mappings.
We'd like some way to identify uid mappings at mount time, without having
to walk the whole rootfs to chown every file.

(Of course safety would demand that the shared qcow2 use a set of high
subuids, NOT host uids - i.e. if we end up allowing a container to
own files owned by 0 on the host - even in a usually unmapped qcow2 -
there's danger we'd rather avoid, see again Andy's suggestions of
accidentally auto-mounted filesystem images which happen to share a
UUID with host's / or /etc.

So we'd want to map uids 100000-106536 in the qcow2 to uids 0-65536 in
the container, which in turn map to uids 200000-206536 on the host.)

-serge
We ship with FedEx delivery, traceable and insured.
Including an AIG gemological report. As you can see in the photos, the diamond girdle (rondist) is not perfect, but it can be covered when you set it on a piece of jewelry. The cost includes an external customs broker agent to import loose diamonds to France.
Check our other items for sale by clicking on our seller name above (DiamondsExpress)
If you have any question please contact us.
- Number of Stones
- 1
- Stone
- Diamond
- Total carat weight
- 0.40
- Shape/ cut
- Round
- Natural fancy colour
- fancy yellow green
- Clarity
- SI2 (No Reserve Price!)
- Treatment
- Natural (untreated)
- Certification
- AIG (IL)
- Sealed
- Yes
- Laser Engraved
- No | https://www.catawiki.com/l/28557729-1-pcs-diamond-0-40-ct-round-fancy-yellow-green-si2-no-reserve-price | CC-MAIN-2020-40 | refinedweb | 112 | 65.83 |
I was trying to learn yield:
def test
  puts "You are in the method"
  yield
  puts "You are again back to the method"
  yield
end
test {puts "You are in the block"}
test {..}
You are in the method
You are in the block
You are again back to the method
You are in the block
When you wrote test {puts "You are in the block"}, that was you calling the function. You were calling test, and passing one argument, a block.
Each method can take one block argument implicitly. When you call yield inside a function, that's you saying "Invoke the block argument." So when you called yield twice, you invoked the block argument twice, in between the other puts statements.
Files:
The first step is to create the basic QML items in your application.
To begin with, we create our Same Game application with a main screen like this:
This is defined by the main application file, samegame.qml, which looks like this:
import Qt 4.7
The Button item in the code above is defined in a separate component file named Button.qml. To create a functional button, we use the QML elements Text and MouseArea inside a Rectangle. Here is the Button.qml code:
import Qt 4.7

Rectangle {
    id: container

    property string text: "Button"

    signal clicked

    width: buttonLabel.width + 20; height: buttonLabel.height + 5
    border { width: 1; color: Qt.darker(activePalette.button) }
    smooth: true
    radius: 8

    // color the button with a gradient
You should be familiar with the code so far. We have just created some basic elements to get started. Next, we will populate the game canvas with some blocks.
[Previous: QML Advanced Tutorial] [Next: QML Advanced Tutorial 2 - Populating the Game Canvas] | https://doc.qt.io/archives/qt-4.7/declarative-tutorials-samegame-samegame1.html | CC-MAIN-2021-17 | refinedweb | 167 | 60.82 |
Portable readers/writer lock class.
More...
#include <Inventor/threads/SbThreadRWMutex.h>
This class provides read/write blocking. It is implemented using the pthreads API on UNIX/Linux and the Win32 API on Microsoft Windows.
A readers/writer lock works like this:
Any number of threads can hold a read lock on this object at the same time. While any thread holds a read lock on this object, all requests for a write lock will block. Conversely, only one thread can hold a write lock on this object at any time. While a thread holds a write lock on this object, all requests for a read lock by other threads will block. As a convenience, the thread holding the write lock may obtain any number of read locks, as long as all read locks are released before the write lock is released.
A readers/writer lock is appropriate for a resource that is frequently "read" (its value accessed) and is not often modified. Particularly if the "read" access must be held for a significant amount of time. (If all accesses, both read and write, are quite short then it may be more efficient to use the SbThreadMutex class.) For example, the SoDB::readlock and SoDB::writelock methods use an SbThreadRWMutex to control access to the scene graph. All Open Inventor actions automatically call SoDB::readlock to gain read access before beginning a traversal. This allows, for example, multiple render threads to safely traverse ("read") the scene graph in parallel.
Generally, acquiring a read or write lock on an SbThreadRWMutex is more expensive (takes more time) than acquiring a lock on an SbThreadMutex. As with any synchronization object, failure to release a lock will usually result in a "deadlock" situation that prevents the application from running. SbThreadAutoReadLock and SbThreadAutoWriteLock provide a "safe" way to acquire a lock that will be automatically released when the object goes out of scope.Mutex
Create a read/write mutex.
Destructor.
Request a read lock (non-exclusive access) for this mutex.
Returns zero if successful.
Release a read lock.
Returns zero if successful.
Request a write lock (exclusive access) for this mutex.
Returns zero if successful.
Release a write lock.
Returns zero if successful. | https://developer.openinventor.com/refmans/latest/RefManCpp/class_sb_thread_r_w_mutex.html | CC-MAIN-2021-04 | refinedweb | 367 | 57.67 |
NAME
bb-services - Configuration of TCP network services
SYNOPSIS
$BBHOME/etc/bb-services
DESCRIPTION
bb-services contains definitions of how bbtest-net(1) should test a TCP-based network service (i.e. all common network services except HTTP and DNS). For each service, a simple dialogue can be defined to check that the service is functioning normally, and optional flags determine if the service has e.g. a banner or requires SSL- or telnet-style handshaking to be tested.
FILE FORMAT
bb-services is a text file. A simple service definition for the SMTP service would be this: [smtp] send "mail\r\nquit\r\n" expect "220" options banner This defines a service called "smtp". When the connection is first established, bbtest-net will send the string "mail\r\nquit\r\n" to the service. It will then expect a response beginning with "220". Any data returned by the service (a so-called "banner") will be recorded and included in the status message. The full set of commands available for the bb-services file are: [NAME] Define the name of the TCP service, which will also be the column-name in the resulting display on the test status. If multiple tests share a common definition (e.g. ssh, ssh1 and ssh2 are tested identically), you may list these in a single "[ssh|ssh1|ssh2]" definition, separating each service-name with a pipe-sign. send STRING expect STRING Defines the strings to send to the service after a connection is established, and the response that is expected. Either of these may be omitted, in which case bbtest-net(1) will simply not send any data, or match a response against anything. The send- and expect-strings use standard escaping for non- printable characters. "\r" represents a carriage-return (ASCII 13), "\n" represents a line-feed (ASCII 10), "\t" represents a TAB (ASCII 8). Binary data is input as "\xNN" with NN being the hexadecimal value of the byte. port NUMBER Define the default TCP port-number for this service. If no portnumber is defined, bbtest-net(1) will attempt to lookup the portnumber in the standard /etc/services file. options option1[,option2][,option3] Defines test options. The possible options are banner - include received data in the status message ssl - service uses SSL so perform an SSL handshake telnet - service is telnet, so exchange telnet options
FILES
$BBHOME/etc/bb-services
SEE ALSO
bbtest-net(1) | http://manpages.ubuntu.com/manpages/maverick/en/man5/bb-services.5.html | CC-MAIN-2015-06 | refinedweb | 404 | 54.12 |
26 October 2010 23:16 [Source: ICIS news]
(adds paragraphs 1, 5-7)
HOUSTON (ICIS)--Air Products on Tuesday dismissed a call by the Airgas board of directors for Air Products to raise its $5.5bn (€4.0bn) bid for the ?xml:namespace>
Airgas said its board was willing to authorise talks on Air Products' unsolicited takeover bid if the offer price was raised to more than $70/share.
“Each of our 10 directors is of the view that the current Air Products offer of $65.50/share is grossly inadequate,” Airgas said in a letter to Air Products CEO John McGlade.
Airgas pointed to strong year-on-year profit growth in its most recent quarter, which ended on 30 September.
Air Products replied: "There is nothing in the Airgas earnings or letter that changes our view of value. It is time for the Airgas board either to negotiate with us or terminate the company's poison pill and let Airgas shareholders decide for themselves." | http://www.icis.com/Articles/2010/10/26/9404759/air-products-balks-at-talk-of-raising-airgas-bid.html | CC-MAIN-2014-35 | refinedweb | 165 | 69.01 |
#include <threadsafety.h>
#include <util/macros.h>
#include <condition_variable>
#include <mutex>
#include <string>
#include <thread>
Go to the source code of this file.
Definition at line 233 of file sync.h.
Run code while locking a mutex.
Examples:
WITH_LOCK(cs, shared_val = shared_val + 1);
int val = WITH_LOCK(cs, return shared_val);
Note:
Since the return type deduction follows that of decltype(auto), while the deduced type of:
WITH_LOCK(cs, return {int i = 1; return i;});
is int, the deduced type of:
WITH_LOCK(cs, return {int j = 1; return (j);});
is &int, a reference to a local variable
The above is detectable at compile-time with the -Wreturn-local-addr flag in gcc and the -Wreturn-stack-address flag in clang, both enabled by default.
Definition at line 276 of file sync.h. | https://doxygen.bitcoincore.org/sync_8h.html | CC-MAIN-2021-04 | refinedweb | 130 | 60.45 |
rtcSetGeometryTimeRange.3embree3 - Man Page
NAME
rtcSetGeometryTimeRange - sets the time range for a motion blur geometry
SYNOPSIS
#include <embree3/rtcore.h> void rtcSetGeometryTimeRange( RTCGeometry geometry, float startTime, float endTime );
DESCRIPTION
The
rtcSetGeometryTimeRange function sets a time range which defines the start (and end time) of the first (and last) time step of a motion blur geometry. The time range is defined relative to the camera shutter interval [0,1] but it can be arbitrary. Thus the startTime can be smaller, equal, or larger 0, indicating a geometry whose animation definition start before, at, or after the camera shutter opens. Similar the endTime can be smaller, equal, or larger than 1, indicating a geometry whose animation definition ends after, at, or before the camera shutter closes. The startTime has to be smaller or equal to the endTime.
The default time range when this function is not called is the entire camera shutter [0,1]. For best performance at most one time segment of the piece wise linear definition of the motion should fall outside the shutter window to the left and to the right. Thus do not set the startTime or endTime too far outside the [0,1] interval for best performance.
This time range feature will also allow geometries to appear and disappear during the camera shutter time if the specified time range is a sub range of [0,1].
Please also have a look at the
rtcSetGeometryTimeStepCount function to see how to define the time steps for the specified time range.
EXIT STATUS
On failure an error code is set that can be queried using
rtcGetDeviceError.
SEE ALSO
[rtcSetGeometryTimeStepCount] | https://www.mankier.com/3/rtcSetGeometryTimeRange.3embree3 | CC-MAIN-2022-33 | refinedweb | 269 | 50.97 |
This is the first in a series of patches to add a simple, generalized updater to MemorySSA.
For MemorySSA, every def is may-def, instead of the normal must-def.
(the best way to think of memoryssa is "everything is really one variable, with different versions of that variable at different points in the program).
This means when updating, we end up having to do a bunch of work to touch defs below and above us.
In order to support this quickly, i have ilist'd all the defs for each block. ilist supports tags, so this is quite easy. the only slightly messy part is that you can't have two iplists for the same type that differ only whether they have the ownership part enabled or not, because the traits are for the value type.
The verifiers have been updated to test that the def order is correct..
Thanks for the patch! The overall approach here seems solid to me.
I dunno how cluttered llvm:: is with random pass/analysis-specific things, but it may be nice to scope these somehow (e.g. in a MSSAHelpers struct or namespace or ...)
"since iplist's own if you don't change the traits" sounds strange to me
Less newlines, please
Can this just be a map of const BasicBlock * to unique_ptr<struct { AccessList; DefsList; }> (excuse the not-actual-c++)? Looks like we look both up together a lot, and their lifetimes are sorta-related.
(If it ends up being a big refactor, I'm fine if we do this as a follow-up)
Nit: if we're not going to "cache" our old value, I think this can can be sunk to the if.
InsertIntoDef = true
It looks like we have this pattern:
void foo(MemoryAccess *MA) {
getOrCreateAccessList(MA->getBlock())->add_thing(MA);
if (!isa<MemoryUse>(MA)) {
getOrCreateDefsList(MA->getBlock())->add_thing(MA);
}
}
// Where add_thing is one of {insert, push_back, push_front}.
a lot. Can we maybe factor that out into a insertMemoryAccessBefore/insertMemoryAccessAfter(AccessList::iterator, MemoryAccess *) thing that handles the DefList bits for us?
Looks like we put Phis in this list, too. Is this supposed to push Defs in front of phis?
Please delete this newline (and below)
Is Defs->end() correct here? What if I insert a store before the MemoryUse in
define void @foo() {
%1 = alloca i8
; MemoryUse(liveOnEntry)
%2 = load i8, i8* %1
; 1 = MemoryDef(liveOnEntry)
store i8 0, i8* %1
}
?
Please remove the extra parens
Same "is Defs->end() correct?" comment
Is this from clang-format?
I'm assuming this is temporary
Fixed :)
I tried to rewrite this into something more understandable.
So, i'm not strongly opposed, but note:
We currently guarantee two invariants:
If we put them both together, then obviously we can still maintain invariant 1, but we will break invariant 2.
Breaking that invariant means things will now have to check that the defs list is not null everywhere.
Personally, i don't think it's worth it, but happy to hear another opinion :)
(and obviously, happy to document these invariants)
Fixed. We now cache the old value, so we don't do a lookup for every def.
You can't use the AccessList iterator, but we can at least create two helpers
No, fixed.
No, it is not correct.
We have to find the nearest def, above or below, and do before/after.
Writing the helper functions made me realize this as well.
We'll do it the slow way first, then we can make it faster if necessary (we have the block numbering, we could make it note which are defs, and then find the nearest def to the access passed here).
Since this is an update API, i doubt it'll be that slow in practice.
Not sure how it got there
Yes, it's going away completely as part of the updater update.
Note that insertIntoListsBefore requires the block because the iterator may point at the end.
I have no idea what happened here, i'll fix (but clang format sorted it into this order)
In D29046#653835, @dberlin wrote:.
This isn't of much consequence, but I wanted to point out that your example already violates the pre-conditions to spliceMemoryAccessAbove:
// \brief Splice \p What to just before \p Where.
//
// In order to be efficient, the following conditions must be met:
// - \p Where dominates \p What,
// - All memory accesses in [\p Where, \p What) are no-alias with \p What.
//
// TODO: relax the MemoryDef requirement on Where.
Still, a generalized updater is way more useful than a tightly-scoped splicing API.
LGTM after 3 more small nits. Thanks again!
Much better, thanks! (Looks like clang-format broke the comment, though. :P)
Then I'm happy with our current approach. Documenting the invariants would be nice, too. :)
insertIntoListsForBlock?
I just used createMemoryPhi instead
WFM | https://reviews.llvm.org/D29046 | CC-MAIN-2021-21 | refinedweb | 805 | 64 |
Forum Index
The following code segfaults, when the outer hashmap rehashes,
and I am not yet sure why.
module tests.refcounted_hashmap_test;
import containers;
import std.typecons;
struct Map
{
HashMap!(int, int) theMap;
}
unittest
{
HashMap!(int, RefCounted!Map) mapOfMaps;
foreach (i; 0 .. 1000)
{
RefCounted!Map val;
if (i == 0) val.theMap.getOrAdd(i, i);
mapOfMaps.getOrAdd(i, val);
}
}
For your convenience:
I am using Refcounted() because HashMap itself is not copy-able. Is there another way to have HashMaps as values of HashMaps?
On Sunday, 2 May 2021 at 07:38:03 UTC, Tobias Pankrath wrote:
HashMap!(int, RefCounted!Map) mapOfMaps = HashMap!(int, RefCounted!Map)(1024);
On Sunday, 2 May 2021 at 08:59:15 UTC, Imperatorn wrote:
Is HashMap.init a broken state? That makes using them hard as struct members :/
That only fixes it, because you are avoiding the rehash with the
initial size.
I've figured it out and filed an PR. Would be great if we can get a new version out with this. | https://forum.dlang.org/thread/smcnthknxcxuqetpuspj@forum.dlang.org | CC-MAIN-2021-25 | refinedweb | 167 | 70.8 |
The Taft Independent
FREE Weekly
August 5 - 11, 2011 • Volume 6 Issue 6
Publisher@Taftindependent.com
"Serving the West Kern County Communities of Taft, South Taft, Ford City, Maricopa, Fellows, McKittrick, Derby Acres, Dustin Acres, and the Cuyama Valley"
Meet Your Local Public Servant - Debra Elliott
City of Taft Passes Budget for 2011-2012
G St.’s Second Hand Ready For Picking
Senator Jean Fuller Visits The Westside By Jane McCabe
A Hint of Class
Name Brand Clothing • Shoes • Jewelry • Accessories & More
Step Back Into School In Style!
New Arrivals of Backpacks & Shoes
Inside The Historic Fort • 915 N. 10th Street, Suite 34 • (661) 623-1783
The Best Beer Selection on Tap in Taft!
Black Gold Open Monday’s for Lunch and Dinner Cafe and Deli
508 Center Street • 765-6550
Try Our Meatball Sandwich $6.25
Our homemade meatballs and marinara sauce with parmesan cheese and garlic butter all baked to perfection on a soft toasted French roll
Shop Taft
Get Ready For Back To School with Next Step!
ITEM OF THE WEEK: Pre-Workout Mix by N'SANE
Come in for our Back To School Membership Specials & Savings! $25/mo
Open 24 Hours!
506 Center Street
(661) 205-5579
Anderson Business Services Bookkeeping • Income Tax • Notary
WE COME TO YOU!
Schedule of Services
Acknowledgments & Jurats: $10 per signature
Mortgage Documents: $150 per set
Mobile Notary Service: $1 per mile
Sandy Anderson
Notary Public Certified Signing Agent
Passion For Nails
Nail Services: Sea Shell • Glitter Acrylic • Metallic Flakes • Rock Star
(we also treat ingrown nails)
1014 6th Street • Taft (In the Save A Lot Shopping Center) • Monday to Friday
(661)745-4913
Personal Style
Back - To - School Savings!
Newly expanded Juniors & Contemporary Fashions
Greg Anderson
Call For Appointment 765-7665 Sandy 577-6790 • Greg 577-6032
Miller’s Beauty Supply
SAN JOAQUIN Automotive Center
Get Your Car Ready For Summer!
763-5445
EVERYDAY OIL CHANGES GAS ENGINE $35.99 OR LESS
Next to the Fox Theater and Black Gold Cafe & Deli
Introducing
Back To School Daze... August 1st-31st
Storewide Discounts
10% off professional shampoo and hair color lines featuring Paul Mitchell, RedKen, Matrix, Joico, L’oreal, Wella and Itely 20% - 50% off jewelry, hair accessories, feather earrings, feather extensions, handbags, nail polish and picture frames
And much much more!
15% off for licensed stylists
Hours: Tuesday 8am-8pm • Wednesday Closed Thursday 8am-5pm • Friday 8am-8pm Saturday 9am-3pm
Come on out and see us every Thursday at the Farmers Market
Shop and Save Downtown Taft!
(up to 7 qts oil, filter, tax, haz waste fee)
DIESEL ENGINE $71.99 OR LESS
(up to 3.75 gal oil, filter, tax, haz waste fee)
Small town, family owned, low overhead. We can save you money on quality jewelry!
Fine Jewelry • Candles • Gifts • Watch Batteries
Acme Jewelry will be closed on Tuesdays when Ray is in L.A. Please call first: 763-5451. Thank You
Store Hours: Tuesday to Friday 9:30am - 5:00pm • Saturday 10:00am - 2:00pm • Closed Sunday and Monday
426 Center Street
(661)763-5451
J & D Recycling 1277 Kern Street
Sacred Thread, Broomstick Skirts, Jackets, Blouses AND MUCH MORE
421 & 423 Center (661)763-3527
510 Center Street Taft, CA
New Arrivals!
Women • Contemporary • Junior • Toddler • Infant • Men • And More!
Mobile Notary Public Certified Signing Agent
acme jewelry co.
Fine Jewelry • Gifts • 14K Gold • Sterling Silver • Black Hills Gold • Jewelry Repair • Watch Batteries • Candies • Candles • And More!
Men & Seniors & Diabetics Welcome
10am-7pm and Saturday 9am-6pm
SUMMER SPECIAL!
$199.99
Cooling System Flush
(includes up to 2 gal coolant, flush kit, conditioner)
A/C Service
(includes up to 3oz of Freon, 2oz dye)
We Can Haul Away Most Large Items. Call Us Today • (661)765-6752. Recycling is OUR Business
Ten Percent Firearms
Overall Vehicle Inspection (visual inspection of all external components) (tax, haz waste fee included)
Billy Messenger Voted Best Mechanic for 2009 and 2010
531 Center Street • 763-1123
Precision Bodyworks & Towing We take the DENTS out of ACCIDENTS Phone (661)763-4420 FAX (661)763-1389 Cell (661)577-6785
317 Main Street • Taft
1277 Kern Street (661)765-6899
Ben’s Books The Largest, Cheapest and Only Used Book Store in Taft Fiction • Non Fiction • Paperback Hard Covers • SciFi • Biography • Religion Childrens • Cookbooks and More!
Come stop by, have some coffee and leisurly browse our selection of books!
810 Center Street • (661)805-9813
Inside
Community Events.........3
News Briefs.......................3
Westside Watcher............4
West Side Recreation Report..........5
Westside News.................6
Westside News.................7
Community Voices..........8
Obituary............................9
Classified Ads.................10
Westside News...............11
Negocios Hispanos........11

Community Events / News Briefs

Taft California
"Home of the Taft Oilworkers Monument" • "Gateway to the Carrizo Plain National Monument"

VFW Bingo Every Tuesday Night at 5:30pm
The VFW will hold Bingo Night every Tuesday at 6:30pm at 600 Hazelton Street in Maricopa. Doors open at 5:30pm, buy-in is $5 a pack, and food will be served. Come on out, bring a friend and support our vets!

End of Times Gallery Summer Art Classes
The End of Times Gallery, 428 Center Street, is offering the following summer classes:
DRAWING - Mondays, 1-3 p.m.
CHILDREN'S ART LESSONS - Wednesdays, 1-3 p.m.
WATERCOLOR - Thursdays, 1-3 p.m.
ACRYLIC PAINTING - Thursdays, 6-8 p.m.
All classes are $10 per session, $40 per month. For more information please call 765.4790.
The End of Times Gallery is taking artists' work on CONSIGNMENT for $5 per item. The gallery earns a 30% commission on work sold. If you would like to have your work considered for representation, please call for an appointment - 765.4790.
The Taft Independent
508 Center Street • P.O. Box 268 • Taft, California 93268
(661) 765-6550 • Fax (661) 765-6556
Email: Publisher@taftindependent.com • Website:
Locally and Independently owned since 2006. The Independent is available free of charge, limited to one copy per reader. Additional copies are $1 each. The contents of the Taft Independent are copyrighted by the Taft Independent, and may not be reproduced without specific written permission from the publisher. We welcome contributions and suggestions. Our purpose is to present news and issues of importance to our readers.
SUBSCRIPTIONS. Subscription home or business delivery of the Taft Independent is available for $6.50 per month or $78.00 per year. To subscribe, please call 765-6550.
LETTERS-TO-THE-EDITOR. Send us your letter to the Taft Independent at the above address. Limit it to 300 words and include your name, address, and phone number. Fax: (661) 765-6556. Email your letter to: Editor@taftindependent.com.
ADVERTISING. Display Ads: Rates and special discounts are available. Contact our advertising representative at (661) 765-6550, or email Advertising@taftindependent.com. Classifieds: Call 765-6550 or fax us at (661) 765-6556. Phone orders are taken. Visa and MasterCard accepted.
Publisher and Editor-in-Chief: Michael J. Long, taftindypublisher@bak.rr.com
Managing Editor / Advertising: Jessica Skidgel
Layout & Design: Jessica Skidgel
Contributing Writers: Jessica Miller, Kent Miller, Wesley Morris, Nicole Frost
Columnists: Randy Miller, Wendy Soto, Mimi Collins, Jane McCabe, Dr. Harold Pease
Member California Newspaper Publishers Association
Printed in California
Pancake/Waffle Breakfast • Saturday, August 6th
There will be a pancake/waffle breakfast this Saturday, August 6th at the Veterans Hall in Taft, located on the corner of Cedar and Taylor Street in Ford City, from 7am to 11am. Topping of the month is fresh peaches! The breakfast will be hosted by the Maricopa Chamber of Commerce; 25% of the proceeds raised will benefit the Taft Union High School All Star Band that will be traveling to Washington D.C. in December.

Back-2-School Connection School Supply Drive Donations • Monday, August 8th
Donations are needed for the "Back-2-School Connection": backpacks, school supplies and new or gently used clothing. Donations will be accepted until Monday, August 8th at the following locations: West Side Community Resource Center, 915 N. 10th Street, Suite #20. West Side Urgent Care, 100 E. North Street. First Assembly of God, 314 Asher Ave. West Side Furniture, 617 Center Street. For more information call 765-7281 or 769-8061. Help our children have a successful year!

Perseid Meteor Shower Viewing • Friday, August 12th
Come join us at A Street Park to view the Perseid Meteor Shower on Friday, August 12th at 9:00 pm. This will be a night of close to peak activity, with nearly 100 meteors per hour expected. Bring a blanket and/or chairs and be prepared to stay awake, as the best viewing times are after 10:00 p.m. This event is FREE to all and s'mores will be provided for the kids. Brought to you by Imagination Laboratories, Inc. and West Side Recreation & Park District.

Roll In The Good Times ARC Bunco • Saturday, August 13th
Roll in the good times at Taft's ARC Annual Bunco on Saturday, August 13th at 5pm at the ARC, located at 204 Van Buren Street. Cost is $20 per person and includes dinner. Dinner is served at 5pm with Bunco to follow. Pre-sell tickets only; deadline is August 10th. For tickets call 763-1532 ext 1. There will be cash prizes and raffle drawings. Bring your friends!

Fill the Bus • Saturday, August 13th
Help support our schools by helping to provide needed supplies for our local children to go to school this season. Come fill the bus on Saturday, August 13th at King's Nursery, 8am-12pm. Sponsored by the Taft Midway Sunset Lions Club. Please bring pencils, sharpeners, copy and lined paper, punches, construction paper, backpacks, glue sticks, colored tissue paper, air dry clay, scissors, tape, index cards, colored pencils, composition books, dry erase markers, crayons, pencil pouches, post-its, headphones, Kleenex, hand sanitizer, socks, sweat pants and jackets. All donations will go to support our local schools by offering needed supplies to classes, teachers, and students.

Chamber of Commerce Mixer At Taft's Community Garden • Tuesday, August 16th
The Taft District Chamber of Commerce is bringing back community mixers. Hosted by a local business, each mixer is geared towards bringing the community together to learn about the business. Events include a business card drawing, 50/50 raffle, snacks, and fun.
The next mixer will be held at the Taft Community Garden on Tuesday, August 16th from 5pm to 7pm. Learn about how to rent a garden bed and grow your own fruits and vegetables.
City of Taft Passes Budget for 2011-2012
By Kent Miller

The City of Taft has a budget, and it's lower than the income/spending plan for the previous fiscal year.

Tuesday evening the council voted 4-to-0, with Councilmember Dave Noerr absent, to adopt both the final budget for the 2010-11 fiscal year and the proposed budget for the 2011-12 fiscal year (July 1 through June 30).

The 2011-12 proposed budget is about $121,000 less than the 2010-11 final budget of nearly $6.6 million. But the new budget includes a couple of major projects, could be expanded if funding is received for a pair of hoped-for projects, and faces an as-yet unresolved issue of the Taft prison.

"It's a good document ... A tight document," said Councilmember Paul Linder. Linder and Noerr sit on the city's Finance Committee, which hammered the new budget into shape. "The city is healthy," Linder said, though "not as healthy as we would like it to be."

Mayor Randy Miller noted that the budget was 32 days late, "but it is here and approved. It's one of the earliest budgets that we have approved."

Meeting as the Taft Community Development Agency during an adjournment of the council meeting, the council/agency members approved the final and proposed budgets. They also adopted the planning and administrative expenditures for the Low and Moderate Income Housing Fund for fiscal years 2010-11 and 2011-12.

Projects and pain

The two major projects planned for 2011-12 are the rehabilitations of 6th Street and of 10th Street. The projects' total cost is put at $720,000.

Two hoped-for projects that would only become reality with outside funding are the continuation of Rails to Trails between Hillard and A streets, with Kern County money; and the construction of a $1 million park-and-ride facility in the downtown area, if approved for funding by Kern Council of Governments.

Meanwhile, the Taft California Correctional Institute facility could be a major ...
(Continued on Page 7)
Taft Farmers Market Thursdays 5pm - 8pm Rain or Shine
5th Street Plaza
Over 15 vendors and we are still Growing!
Fruits, Vegetables, Fish, Plants, Herbs, Arts, Crafts and more! For more information please contact the Taft Chamber of Commerce at 765-2165.

Kern County Animal Control Low-Cost Rabies Vaccination Clinic • Saturday, August 27th
Come on out Saturday, August 27th to the Kern County Animal Control low-cost rabies vaccination and licensing clinic at Ford City Park, located on the corner of Cedar and Taylor Street, from 8am to 12 noon.

Taft Bike Fest • Labor Day Weekend, Friday-Sunday, September 2nd-4th
2 Wheel Production presents The First Annual Taft Bike Fest, which will take place this Labor Day Weekend, Friday (12pm-9pm), Saturday (9am-9pm) and Sunday (9am-2pm), September 2nd-4th at the Rails to Trails located at 6th St. and Main St. This three-day event will feature concerts, a beer garden, multiple vendors, a motorcycle bike show and contest, a tattoo contest, and a motorcycle stunt show. There will be dry tent camping and RV/trailer parking on site. Vendor space is available for food, crafters, commercial and business vendors. To apply for vendor space, stop by the Taft Chamber of Commerce for an event form and more information, or contact Shannon with the Chamber at shannon.taftchamber@gmail.com or 765-2165. For more information on the Taft Bike Fest, email them at taftbikefest@yahoo.com or check them out on Facebook.
Letters to the Editor
Asian Experience
Asian Food and Pizza
Lunch and Dinner: Tuesday - Friday 11 am - 2 pm, 4 pm - 9 pm • Saturday 4 pm - 9 pm
215 Center Street, Taft • 763-1815
Sagebrush Annie’s Restaurant and Wine Tasting Paik’s Ranch House Tasting Sat. & Sun. 11:30-5 pm Where Everybody Meets Dinner by Reservation Breakfast, Lunch and Dinner 4211 Highway 33, Ventucopa Open 7 Days (661) 766-2319 Mon. Tues. Thur. 6 am-8:30 pm Sun. Wed. Fri. & Sat. 6 am - 9 pm Taft Crude Coffee House 765-6915 200 Kern St. Taft Coffee House and Deli Monday – Friday 7 am to 4pm. Sagebrush Annie’s Saturday 7 am to 2 pm Restaurant and Wine Tasting Sundays 7:30 am to 10 am Tasting Sat. & Sun. 11:30-5 pm 1010 6th Street, Taft Dinner by Reservation 763-5156 4211 Highway 33, Ventucopa (661) 766-2319 Black Gold Cafe & Deli Pastas - Sandwiches Espresso - Beer - Wine Open Monday to Saturday Your Busines Listed Lunch served 9am-1pm HERE Dinner served 5pm - 8pm Call 765-6550 Wine Tasting on First Thursdays 508 Center Street 765-6550
Westside Entertainment
Get Your Events in the Westside Entertainment Guide. Call 765-6550 or fax 765-6556
Your Restaurant Listed Here! Call 765-6550! Starting as low as $12 per week!

Sagebrush Annie's
Wine Tasting • Dinner by Reservation
Award Winning Wines • Live Music Saturday Nights
4211 Highway 33, Ventucopa • 766-2319

Asian Experience
Asian Food and Pizza • Lunch and Dinner
Tuesday - Friday 11 a.m. - 2 p.m., 4 p.m. - 9 p.m. • Saturday 4 p.m. - 9 p.m.
215 Center Street • 763-1815

Always Fresh! Dine In or We Deliver • 765-4143 • 700 Kern Street
Mon. - Fri. 10am - 2:30pm • Taft, CA
Your Restaurant Listed Here! Call 765-6550! Starting as low as $12 per week!
The Maricopa Promise (Editorial)
Westside Watcher
Staged Water District Robbery Strays from Planned Safety Exercise
District Employee Distraught After Mock Heist
By Michael Long, Publisher

In a planned safety exercise gone awry, a West Kern Water District employee donned a mask and robbed a customer service clerk at the district's front offices Friday morning.

According to an employee who wished to be unnamed, and later confirmed by district manager Harry Starkey, a male employee, as part of an emergency readiness exercise, put on a mask and entered the district front office on Friday morning and handed the clerk a note claiming he had a gun, and an empty bag with instructions demanding money.

Four female employees were reportedly present at the time of the robbery. Three of the women fled when the robber entered the office, but a fourth employee, Kathy Lee, remained behind and handed over an undisclosed amount of cash to the robber, who then left. Lee, unaware that the robbery had been staged by the district as a test of emergency procedures, was reportedly distraught by the incident.

According to Starkey, the exercise strayed from the original plan to have a recognizable employee walk into the office and state that a robbery was in progress to see how employees would react to training. However, prior to the test, the plan was changed at the last moment to have an unidentified employee wear a mask and hand the clerk a threatening note demanding money. Starkey said that the test was originally planned to test employee readiness for emergency situations.

In a written statement, Starkey explained how the mock robbery was planned and changed by staff prior to the exercise. "Staff had been working for several months to develop training and exercises for the front counter staff that would prepare them for a possible robbery," Starkey wrote in a statement to the Independent.
"The plan was to perform a mock robbery with the 'burglar' being an employee that was clearly recognizable by the front counter staff. In other words, the exercise would be seen as an act and staff would then have the opportunity to practice the information learned in training. The exercise as performed on Friday was redesigned without the consent of management to involve a mask and a reference to a gun. The exercise as conducted was understandably upsetting to those involved. Their welfare is District's foremost concern and we are working through the process with them."
Taft Petroleum Club Every Friday is Ribeye Steak & Chicken Dinner Night
Have Your Next Event At The Club! The club is available for Weddings, Birthdays, and Anniversaries. Our hall holds up to 200 people and the bar can hold 70. Book Today! 450 Petroleum Club Road - 763-3268. Open Tuesday - Friday, 3:30pm to Close
I’m Bob Archibald; my wife Lori and I own the Shell Foodmart in Maricopa. I was very happy to read the article that was on the front page of the Bakersfield Californian last Sunday about Maricopa Police Chief Derick Merrit’s promise of change in their traffic policies. As a show of good faith, I have towed home the trailers with the signs on them.

Let us all hope that the Maricopa Police can uphold the law without going overboard as in the past. We all need our police to protect our property, the good people of Maricopa, as well as the travelers that pass through our great little town. Knowing that actions speak louder than words, time will tell if the promise is being kept. The opinion of whether the promise is being kept will not come from me. It will come from the customers of our store. Customers from our town and the neighboring towns and the ‘valley to the coast’ travelers will sound off loud and clear if the promise is broken. If it’s not kept, the trailers with the signs will have to come back. I wish the Chief and his crew good luck and success, and I sincerely hope we can move on and deal with our other problems that are at hand.

Thank You,
Bob Archibald
WANTED
Taft Independent Subscription-Circulation Manager
The Taft Independent is looking for a part-time individual to solicit subscriptions and make weekly home and business deliveries. CDL and insurance required. Experience preferred. Incentive-based compensation. Call 765-6550
Paik’s
Subscribe for home delivery of the Taft Independent today! Delivered weekly to your home or business only $6.50 per month!
Name_________________________________
Address________________________________
Start Date____________End Date__________
Please complete and mail with your check to: The Taft Independent, P.O. Box 268, Taft, CA 93268. Please make checks out to Taft Independent.

Ranch House Restaurant
“Where Everybody Meets” Breakfast, Lunch & Dinner
Mon, Tues, Thurs - 6 a.m. to 8:30 p.m.; Wed, Fri, Sat and Sun - 6 a.m. to 9:00 p.m.
Open 7 Days
200 Kern Street, Taft, CA
765-6915
TAFT INDEPENDENT
West Side Recreation Report Check us out online! Need more information on programs, classes or facilities? Visit us on the web: steph@wsrpd.com
by Stephanie House
OPEN SWIM
Monday-Friday 1:30-5:00 p.m.
Admission: $2 per person
Last day of Open Swim: Friday, August 19
The William M. Thomas Aquatic Center at the Walter Glenn Natatorium is open for summer! All ages are welcome to stop by to enjoy the slides, brand new spray park and other amenities that have been added to the updated and renovated facility. Children ages 6 and younger must be accompanied by an adult during Open Swim sessions.

SATURDAY SWIM
Saturdays 11:00 a.m. to 1:30 p.m.
Admission: $2 per person
Last day of Saturday Swim: August 20

NIGHT SWIM
Monday and Thursday Evenings 7:30-8:45 p.m.
Admission: $1 per person
Last night of Night Swim: August 18
Don’t get a chance to swim during the day? Monday and Thursday nights are just for adults and families! Ages 17 and younger must attend with an adult family member.

ICE CREAM & MOVIE SOCIAL
Tuesday, August 9, 2:00-3:45 p.m.
Community Center Assembly Room, 500 Cascade Place, Taft
Grades K-8, $3 per person
Escape from your couch and bring a friend to eat ice cream and watch the movie “Megamind.” There will be ice cream and a variety of toppings available for you to make the perfect sundae.

WONDERFUL WAFFLE
Thursday, August 11, 2:00-3:00 p.m.
Community Center Auditorium, 500 Cascade Place, Taft
Ages 6 and up, $3 per person
**Registration deadline: August 10
Come learn how to make waffles from scratch! We will make them and then eat them. Yum! You will go home with a full tummy and the recipe to try at home. For dessert? Chocolate brownie waffles! Pre-registration is required and space is limited.

PRESCHOOL REGISTRATION UNDERWAY
The West Side Recreation & Park District’s Preschool program is now enrolling students for the upcoming 2011/2012 school year. Preschool Coordinator is Rene Adamo and teachers are Stefany Ginn and Stacey Wooley. Classes begin the week of September 6. The program is for children ages 3-5. Fees vary per class. As of now, there are still a few spaces available in the Monday/Wednesday class.
For more information, please phone 763-4246 or send an email to steph@wsrpd.com.
Monday/Tuesday/Wednesday/Thursday Class: 8:30-11:00 a.m.
Monday/Wednesday Class: 11:30 a.m. - 1:30 p.m.
Tuesday/Thursday Class: 11:30 a.m. - 1:30 p.m.

FAMILY FUN FRIDAY
Friday, August 12, 6:30-9:00 p.m.
Natatorium Swimming Pool, 821 4th Street, Taft
All Ages, $3 per person or $12 per family (max 6 people)
Cool down with a dip in the pool and do-it-yourself ice cream sundaes! Bring the whole crew down to the Natatorium for this special swim just for families. The fee includes admission and ice cream. The snack bar will also be open. Ages 17 and younger must be accompanied by an adult.
WEST SIDE RECREATION AND PARK DISTRICT 500 Cascade Place, Taft, CA 93268 (661) 763-4246 info@wsrpd.com
PERSEID METEOR SHOWER VIEWING
Friday, August 12, 9:00 p.m.
‘A’ Street Park, All Ages, FREE!
Come join us at ‘A’ Street Park to view the Perseid Meteor Shower. This will be a night of close to peak activity, with nearly 100 meteors per hour expected. Just bring a blanket and/or chairs and be prepared to stay awake, as the best viewing times are after 10:00 p.m. The event is FREE to all and s’mores will be provided for kids. This fun family event is brought to you by Imagination Laboratories, Inc. and West Side Recreation & Park District.

GYMNASTICS
Who: Grades K and older
When: Monday Evenings, 5:30-6:30 p.m.
Sessions: September 19 - October 24 and November 7 - December 12
Where: Community Center Auditorium
Fee: $40 per session ($30 for each additional family member)
Instructor: Suzanne Hale

DANCE CLASSES
Who: Ages 3 and up
When: Mondays or Wednesdays; classes begin the week of September 12
Where: Community Center Assembly Room
Fee: $20 per month
Instructor: Vicky Waugh
Participants will learn the basics of tap, jazz and hip-hop. Classes take place one day per week, either on Monday or Wednesday. A full class listing is available in the District Office or on our website. Class enrollment is limited, so register now!

WSYSL - WEST SIDE YOUTH SOCCER LEAGUE
Who: Boys and Girls, Ages 4-13
Divisions: U6 (4-5 yrs), U8 (6-7 yrs), U10 (8-9 yrs), U12 (10-11 yrs) and U14 (12-13 yrs)
When: August 27 - November 12
Where: TUHS Soccer Fields
Fee: $50 per player (plus $5 late fee)
Final Registration Deadline: Thursday, August 25
The goal of the WSYSL is to create a soccer environment that is fun and conducive to learning for all ages and ability levels. Please note: shin guards are mandatory. Late registrations are still being accepted. Partial financial (STOP) scholarships are available. Ask us for more details.
INSTRUCTIONAL SOCCER
Who: Ages 3-5
When: Practices on Mondays, Games on Saturdays
Session: September 12 - October 8
Where: ‘A’ Street Park
Fee: $25 per child
Registration deadline: September 8
Kids will learn basic soccer skills with emphasis on fun and socialization with others their age.

YOUTH FLAG FOOTBALL
Who: Ages 6-10
When: Practices on Tuesdays, Games on Saturdays
Session: September 5 - October 15
Where: ‘A’ Street Park
Fee: $35 per child
Registration deadline: September 1
This program is for boys and girls ages 6-10 who want to learn the basic fundamentals of football. The program provides young players a fun and exciting opportunity to engage in non-contact, continuous action while learning lessons in teamwork.

SOUTH VALLEY X COMPETITION CHEER SPAGHETTI DINNER SHOWCASE FUNDRAISER
Thursday, August 11, 6:00-8:00 p.m.
Community Center Auditorium, 500 Cascade Place, Taft
$5.00 donation per person
Help the District’s competition cheer squad raise money for uniforms and competition costs! Stop by for a spaghetti dinner and also have the chance to win some cash in a 50/50 raffle and/or purchase an SVX shirt to support the cause.

BOWLING PARTY RENTALS
Make your reservation now! Reservations are now being accepted for party rentals at the bowling alley in the new Recreation Center. Parties may take place on Friday evenings, Saturday or Sunday beginning September 16. Rental fees start at $100 for 2-lane rentals. Rental prices include shoes, balls and use of the party room. The Center and bowling alley are slated to open in early September. Call 763-4246 for more information or to make a reservation.

S.T.O.P. PROGRAM SCHOLARSHIPS (Strive To Optimize Participation)
Did you know that the District has a youth scholarship program? Children in low income, single parent or multiple participant households are eligible! For more information, or to find out how your child can take advantage of reduced program fees, give us a call in the District Office at 763-4246.
Westside News & Business Briefs
Taft Has A Friend In Senator Jean Fuller By Jane McCabe
Jean Fuller, California State Senator of the 18th District.

It helps to have a lawmaker from your own community who understands your concerns. Taft has such a lawmaker in California State Senator Jean Fuller, who was born in Kern County, received her PhD at UCLA, studied at Harvard and at Exeter in England, taught for 17 years, and for the past seven years has represented Kern County at the California State Legislature. In spite of her impressive education, Jean is at heart a down-home country girl from a Kern County farm family, and, as such, she’s interested in helping the businesses, especially the farming businesses, that operate in Kern County thrive.

According to Michael Long, president of the Taft Chamber of Commerce and owner/editor of the Taft Independent newspaper, Jean is “Taft’s best friend in the state senate.” On Wednesday, August 3, 2011, Jean met with the members of the Taft Chamber of Commerce at OT Cookhouse for breakfast to give a talk on what’s happening in the California state legislature. “I love coming to Taft,” Senator Fuller says. “It feels like home to me, the city with a heart.”

The good news, according to Senator Fuller, is that the state legislature passed a budget. When threatened with the loss of their paychecks if no budget was reached before the deadline, legislators passed one a day and a half before the deadline. To Senator Fuller this proves that the people’s voice can be heard! Through the initiative process things can change.

Deficit funding muddies the waters: when the California State Legislature passed an $85 billion budget ($15 billion LESS than last year’s), at the last minute $6 million in expenditures were added. If the phantom funds do not materialize, then automatic “trigger cuts” go into effect. A reduced budget calls for a 25% reduction in expenditures, which can only be secured from the areas which are the biggest recipients: health & welfare, prisons, and schools. Everyone wants schools to have what they need, but as soon as the state budget shrinks, like it or not, trigger cuts will automatically go into effect.

Despite the necessity of instating austerity measures, Senator Fuller thinks the state legislature has a liberal agenda. Bills she consistently fought against are now being passed. There are outcries from all sectors.
OT Cookhouse & Saloon
205 N. 10th St. (661)763-1819
Specializing in Steak & Seafood
Lunch: Tuesday - Friday, 11 a.m. - 2 p.m.
Dinner: Tuesday - Thursday, 4 p.m. - 9 p.m.; Friday & Saturday, 4 p.m. - 10 p.m.
CLOSED SUNDAY/MONDAY
OT Cookhouse Daily Specials (For the week of 8-9-11 thru 8-13-11)
Tues. 8-9-11 Lunch: Pit Beef Sandwich, $8.95
Tues. 8-9-11 Dinner: BBQ Beef Ribs, $10.95
Wed. 8-10-11 Lunch: Beef Tips with Noodles, $8.95
Wed. 8-10-11 Dinner: Veal Liver with Bacon & Onions, $10.95
Thurs. 8-11-11 Lunch: Beef Stroganoff, $8.95
Thurs. 8-11-11 Dinner: BBQ Pork Ribs, $13.95 Half / $15.95 Full
Fri. 8-12-11 Lunch: Stuffed Chicken Sandwich, $9.95
Fri. 8-12-11 Dinner: Prime Rib, $15.95 Small / $18.95 Large
Sat. 8-13-11 Dinner: Charbroiled 1/2 Chicken with Sauteed Veggies, $10.95
CLOSED SUNDAY AND MONDAY
205 N. 10th Street . (661)763-1819
Jean Fuller, third from left, with Taft Chamber of Commerce staff, Jessica G. Miller, Dr. Kathy Orrin and Shannon Jones.

Senator Fuller referred to ancient Rome as an example of a government in which conditions were created to help trade flourish (the building of roads and aqueducts, an efficient and timely postal system, and such) as opposed to excessive regulations that strangle business. “People are going to be in a more visual place to make the difference next year,” she says.

With the more liberal agenda that now exists in the state government, “cause” issues are rampant, as, for example, a bill making the killing of sharks for their fins (for soup) illegal. Senator Fuller feels that some of the bills proposed take up an unnecessary amount of time when we need to create jobs and get water and road work done. She says newly re-elected Governor Jerry Brown will not raise taxes, as he promised not to, unless the people of the state of California vote for it. She thinks the number of bills put before the legislature should be limited and those bills should be prioritized, because, as it stands, far more bills are proposed than can be enacted.

When asked about instituting landlord protection laws to shield landlords from the costs they endure from deadbeat tenants, who often can live in homes rent-free for long periods of time while landlords are expected to pay all kinds of costs, Senator Fuller was not very reassuring. Injustices change only when suffering citizens make their voices heard.

If you would like to voice a concern or opinion to Senator Fuller, she can be reached at: Capitol Office, State Capitol, Room 3063, Sacramento, CA 95814, Phone: (916) 651-4018, Fax: (916) 322-3304; or at the District Office, 5701 Truxtun Avenue, Suite 150, Bakersfield, CA 93309, Phone: (661) 323-0443, Fax: (661) 323-0446.
Get To Know Your Westside Public Servants
Debra Elliott, City of Taft, Administrative Assistant to the City Manager and Deputy City Clerk By Nicole Frost
Debra Elliott

If you have ever been to Taft City Hall, there is a good chance that you’ve seen Debra Elliott, the administrative assistant to the City Manager and the Deputy City Clerk.

“I do quite a bit on the job,” said Elliott. “As the City Manager’s administrative assistant, I assist the City Manager with meetings, I perform clerical duties, keep his calendar straight, and I assist him on projects. As the Deputy City Clerk, my job is quite different. I assist with the agenda for city meetings by putting it together, making sure it’s out on time, making sure it’s correct and posting it for the public. Also, I fill in during meetings sometimes to take the minutes, and I take minutes for various committees such as the public works, traffic, personnel and sometimes finance committees.”

Being busy is one of the reasons why Elliott enjoys her job so much. “Because my workday is so varied, I’m never stuck doing the same thing,” said Elliott, with a smile. “I like to work on upcoming projects for the city, especially if they’re exciting, and watching them progress from beginning to end. Plus, everyone gets along here at City Hall. There isn’t really anything about my job that I don’t like.”

One of Elliott’s favorite days on the job involved working on the committee for the veterans’ fundraiser earlier this spring. “Working on the fundraiser was probably my most memorable experience, as well as my favorite,” said Elliott. “I saw the project through from beginning to end and, not only was it fun, but it was for a great cause.”

Elliott is a wife of 17 years to her husband Michael and she’s a mother to two children, a junior and a senior at Frontier High School. When she’s not organizing the City Manager or assisting with the agenda for a city meeting, she enjoys playing tennis and cooking. Elliott is a Bakersfield native but she has quickly adapted to the Taft community. “I started working at City Hall in January 2010,” said Elliott.
“Not having lived in Taft, I was very impressed with the supportiveness of the community.” There isn’t much more that Elliott can fit into her schedule, but she still has some project plans for the future. “I hope for the opportunity to be more involved in upcoming projects,” said Elliott. “Taft is such a friendly and giving community. It’s a great place to be.”
Westside News & Business Briefs
G St. Second Hand Thrift Open For Taft Pickers
Kyle Goss, Tracy Streeter with shop dog Bella Tu and Gary Goss.

G St.’s Second Hand Anything Thrift Store is now open at 523 Center Street and ready for all Taft pickers. Owners Tracy Streeter and Gary Goss, both longtime Taft and Maricopa residents, opened the store a little over two weeks ago after their hobby of picking and collecting left them with an abundance of merchandise. Tracy and Gary frequent storage unit auctions and estate sales for their great treasures. “We were running out of room; we either needed to open a storage bin or a store front,” said Tracy. G St.’s merchandise ranges from collectables, antiques, furniture and clothing to much more, with something different hitting the shelves every day. Stop by Tuesday to Saturday from 10am-5pm.
The Place 4014 Highway 33
Beautiful Downtown Ventucopa (661)766-2660
AUGUST EVENTS Every Wednesday - Oak BBQ Steak Sandwich 12:00 PM - Close
Saturday, August 13th $10.00 ALL YOU CAN EAT BUFFET
School Refunds Bond, Earns Low Interest Rate
Taft City Elementary School District has refunded an outstanding general obligation bond, a move administrators said would save the district’s property owners nearly half a million dollars in taxes. The refunded bonds, totaling $7.445 million, were authorized by more than two-thirds of voters in the June 2001 election and were used to acquire, construct and modernize elementary school facilities throughout the District.

The interest rates on the outstanding bonds from the 2001 issuance ranged from 4.125% to 5.000%. The average interest rate cost for the new bonds issued in July was 3.23%, a difference that will save property owners $454,000. “We felt as stewards of tax dollars, it was the right thing to do,” said School Superintendent Ron Bryant. “The passage of time and a lower interest rate environment provided the opportunity to refund the old bonds.”

The refinancing of the bonds was authorized by the Taft City Elementary School District Board at a November 2010 meeting. “If you have an opportunity to save local taxpayers money, especially in this economy, you do it,” said Michael McCormick, school board president. The District also has outstanding bonds issued in 2005 and 2006 from the 2001 bond authorization. These bonds may be eligible to be refunded in the future, which would further benefit local taxpayers.
BBQ Chicken, green salad, watermelon, Corn on the Cob & Bread. $1.50 Domestic Drafts 5:00pm to Close
No To Go’s Starts at 5:00pm until gone!
Live Music By: Cutthroat Reef (Pirate Band)
City Budget
Continued from Page 3

...source of pain for the city and the prison’s employees after the state cancelled its contract with Taft. The prison issue could mean a decline of $700,000 to $900,000 in administrative fees and the laying off of 50 employees. A solution would be an agreement between Taft and one or more of the state’s counties to house county prisoners. Police Chief Ken McMinn has reported to the council that he is working on such an agreement. He is already talking with counties about housing their inmates at the Taft facility, McMinn said.

2010-11 revenue
The city’s General Fund draft summary of revenue resources for the 2010-11 fiscal year was $6,586,780, which as required balances with the total expenditures for the period. The budget included the transfer of $500,000 from the city’s reserve fund to obtain balance. Income of $225,000 in interest from loans to the Taft Community Development Agency was removed from projected income because the notes have been rolled over for another year. According to the agenda summary statement, prepared by Finance Director Teresa Binkley, the budget maintained the conservative fiscal policies of the city council. The budget was reviewed by the Finance Committee in July and recommended to be presented to the council for adoption, Binkley reported.

The major revenue sources were:
* Other city taxes of $1.6 million, with nearly $1.2 million of that coming from sales and use tax;
* Taft Police Department, $1.1 million;
* Operating transfers-in, $963,534;
* From other agencies, $725,385, with $703,552 of that coming from property tax in-lieu;
* Property taxes, $618,280;
* Streets/highways/drains, $526,419;
* Use of money and property, $475,010, with $402,736 of that from interest income;
* Other current charges, $439,333.

Planning Department revenue is projected at $56,401 and licenses and permits at $54,983, with $54,555 of that from business licenses.
2010-11 expenditures
The major expense areas were:
* Police Department, nearly $2.3 million;
* Public Works Department, $954,532;
* General government, $917,176, with $318,167 going to personnel/risk management;
* Kern County Fire Department contract services, $888,199;
* Community development, $661,077, with $575,418 for planning and development;
* Financial services, $472,944;
* Capital purchases, $408,348, with $270,824 for the Public Works Department.
Other items

Among other items on the council’s small agenda for Tuesday’s meeting were a HOME grant application and determining how the city should benefit from the energy services agreement with Conergy/Enfinity.

On July 5, the council authorized Geary Coats to apply for approval of a family housing project on the Sunset Rails property, and on July 12, the Taft Planning Commission approved a conditional use permit, site plan and parking variance for the development. Tuesday evening, Geary Coats and Willow Partners asked and the council authorized submitting a HOME grant application for funding not to exceed $5 million for the 40-unit affordable family housing apartment project. The city is the official applicant for the funding request to the California Department of Housing and Community Development. If approved, funds would come from the Home Investment Partnership Program.

In the second item, the city and Conergy/Enfinity have an agreement for the installation of a solar energy system on city properties. But it hasn’t been decided which of two ways the city should benefit financially from the agreement. Both options offer a projected savings on
A Hint of Class
Name Brand Clothing • Shoes • Jewelry • Accessories & More
Step Back Into School In Style!
New Arrivals of Backpacks & Shoes
Inside The Historic Fort, 915 N. 10th Street Suite 34 (661)623-1783
Tire & Automotive Service Center
FREE Tire Rotation & Brake Check
Plus we will check all fluids & tire pressure
Oil & Filter Special $29.95* Plus Tax
*Most Cars & Light Trucks, Up to 5 Qts. $3.50 Oil Disposal Fee
Exp. Sept. 30, 2011. Must Present Coupon at Time of Purchase
523 Finley Drive • 765-7147
Mon-Fri 8am-5pm, Sat 8am-1pm
Continued on Page 8
Community Voices
Bush and Obama Usurp the Powers of Congress by Using Signing Statements
By Dr. Harold Pease

City Budget
Continued from Page 7

...the city’s utility bill with Pacific Gas and Electric Co. The difference is the projected amount of savings and when it would happen, said Paul M. Gorte, city redevelopment manager. One plan, “Rebate to City,” pays the rebate directly to the city, front-loading the financial return but reducing the estimated long-term cumulative return, Gorte said. The other plan, “Rebate in PPA,” invests the rebate in the power purchase agreement, reducing the near-term annual return (the first five years) but yielding a greater long-term return to the city, he said.

Under the second option, the city would see a reduced savings for the first five years of the program, but beginning in year six, the annual savings would exceed the projected annual savings in the first option, Gorte said in his summary statement for the council. Using an estimated PG&E rate increase of 5 percent a year, the difference to the city over 20 years would be about $250,000. Estimated 20-year savings to the city under plan one is $1,850,000 and under plan two is $2,100,000. For the first five years, the estimated savings to the city under plan one is $380,000 and under plan two it is $170,000.

Projected cumulative savings to the city’s PG&E bill:
Over 5 years: Plan one $380,000, Plan two $170,000
Over 10 years: Plan one $600,000, Plan two $525,000
Over 15 years: Plan one $1,065,000, Plan two $1,100,000
Over 20 years: Plan one $1,850,000, Plan two $2,100,000

In discussion, Councilmembers Orchel Kreir and Ron Waldrop indicated preference for plan one, while Mayor Miller and Councilmember Linder showed preference for plan two. The measure was tabled until the Aug. 16 meeting, when all five councilmembers are expected to be in attendance and the council will have had more time to study the options.
Do You Know Me?
Looking for a male resident of Taft who is currently in his 60’s. He went to Citrus Heights, CA, around 1987 looking to meet Cornelia (Nellie Jo) Ruth; he was turned away at the door by a relative. We believe his father might be James Nelson Ruth, who worked in the Texaco oil fields in and around Taft from 1920-1961. It was noted the Taft resident has a strong resemblance to the pictured individual. If you have any information on this please contact Amy at 801-201-6771 or email at amyanastasion@msn.com.
BLM Plans Hunting Enforcement Checkpoint on Carrizo

Native Americans came to hunt the abundant game and their many encampments dotted the plain. The checkpoint at Carrizo Plain National Monument comes as statewide poaching violations are on the rise. A study conducted in the mid-1990s estimated approximately $100 million worth of California’s native wildlife is being poached annually, making poaching second only to the illegal drug trade in black market profitability. California’s fish and wildlife usually is poached from remote areas and transported to major cities for sale and export. “Poachers devastate nature by breaking ... they must have
Weekly Gas Price Update Average retail gasoline prices in California have risen 0.5 cents per gallon in the past week, averaging $3.80/g as of Monday, August 1st. This compares with the national average that has increased 1.1 cents per gallon in the last week to $3.70/g, according to gasoline price website CaliforniaGasPrices.com. Including the change in gas prices in California during the past week, prices Monday, August 1st were 69.7 cents per gallon higher than the same day one year ago and are 3.6 cents per gallon higher than a month ago. The national average has increased 13.2 cents per gallon during the last month and stands 95.5 cents per gallon higher than this day one year ago.
Subscribe for home delivery of the Taft Independent today! Delivered weekly to your home or business only $6.50 per month! Name_________________________________________ Address________________________________________ Start Date____________End Date___________________ Please complete and mail with your check to: The Taft Independent, P.O. Box 268, Taft, CA 93268. Please make checks out to Taft Independent
Obituary
Taft Crude Coffee House
Ronald Lynn Davis
The Only Mortuary On The West Side Where All Arrangements And Funerals Are Personally Directed By
Age 47 - Passed Away July 19, 2011

A previous resident of Taft, Ron moved to Bakersfield in the 1990s. Ron was born December 10, 1963. He is survived by his wife Shelly Darnell Davis; his parents Edward and Donna Davis of Taft; sister and brother-in-law Bonita and Terry Bullard of Maricopa; brothers Larry Davis of Bakersfield and Edward and Monica Davis of San Diego; sister Susan Davis of Taft; stepdaughter Crystal and granddaughter Reily; many nieces and nephews; in-laws Ben and Ann Darnell, Darrell and Sheila Darnell; and three sisters-in-law, Sheila, Sherry and Cheryl, and their husbands. Ron was preceded in death by his brother Timothy Kevin, grandmother Maude Waldron and grandfather Roy Waldron.

Ron graduated from Taft Union High School in 1982. He married Shelly Darnell on August 28, 1988. Ron and Shelly were blessed with the apple of his eye, Reily, in 2006. Ron passed away at his home due to a long-term illness. By his side were his wife Shelly, sister Bonita, brother Larry and niece Christina.

Anyone that knew Ron knew his love of sports. Growing up, Ron played Little League baseball and Babe Ruth, and in high school was a member of both the football and baseball varsity teams.

A memorial in celebration of his life will be held at the First Southern Baptist Church at 120 Pico St. in Taft, August 6, 2011 at 1:00 pm. The family would like to thank Bakersfield Dialysis Center for all your years of caring for Ron. We would also like to say a special thank you to Dr. Harold Baer.
MARICOPA QUILT COMPANY FABRIC • NOTIONS • GIFTS WED.-FRI. 10:00-5:30 SAT. 10:00-2:00
New Summer Hours! Wed-Fri 10am-5:30pm Sat 10am-2pm 370 CALIFORNIA • 769-8580
Licensed Funeral Directors
501 Lucard St., Taft • 765-4111

Ice Blended Mocha - Fat Free and Sugar Free Available in Most Flavors
Open 7 Days - 763-5156 • 1010 6th Street • Taft
YOUR CHURCH AD HERE! CALL TODAY! 765-6550 NEW LIFE COMMUNITY CHURCH Sunday Services 10am UTURN Youth Service Sunday 6pm 1000 6th St. Weekly Classes Mon - Thurs Please call 765-7472 for info
For a ride to church call 765-7472 before 9am on Sunday Pastors Shannon N. and Shannon L. Kelly or nlctaft@bak.rr.com
FD756 FDR50 FDR595 FDR618
YOUR CHURCH AD HERE! CALL TODAY! 765-6550 Gateway Temple
Community Christian Fellowship 631 North Street Sunday School 9:30 a.m. Morning Worship 10:30 a.m.
St. Andrew’s Episcopal Church
604 Main Street • P.O. Box 578 Maricopa, CA 93252 • (661)769-9599
Sunday Morning Worship 9:45 Sunday Evening Worship 5:00 Monday Evening Mens Prayer 7:00 Wednesday Evening Worship 6:30
Sunday Service - 10 a.m.
For a ride: Call Dorine Horn 487-2416 Pastors Charle (Tommy) and Mary A. McWhorter
703 5th Street - Taft (661) 765-2378
Rev. Linda Huggard
New Hope Temple Trinity Southern Baptist Church
“Connecting Lives” 308 Harrison Street 765-4572
400 Finley Drive
Sunday Morning Worship Service 10 a.m. Sunday Evening Worship Service 6 p.m Bible Classes All Ages Wednesday 7 p.m.
We invite you to join us each week as we worship
Sunday Bible Study 9:45 am Sunday Morning Worship 11:00 am Sunday Evening Worship 6:00 pm Wednesday Prayer & Bible Study 6:00 pm
Peace Lutheran Church- LCMS
TAFT UNITED METHODIST CHURCH
Taft - A caring community under Christ. We welcome you to worship with us at Peace Lutheran Church, 26 Emmons Park Drive (across from the College). Worship service begins at 10:00 a.m. Communion will be offered 1st and 3rd Sundays
630 North St. 765-5557
“Open Hearts, Open Minds, Open Doors”
Sunday School for all ages at 9:00 a.m. The Pregnancy crisis center is now open and available for support and assistance. For information, call 763-4791 If you have a prayer request please call (661)765-2488. Leave a message if the pastor or secretary is not available Angel Food Program Tues. 9am - 12pm Thurs. 3pm - 6pm
Pastor Cindy Brettschneider Sunday Morning Worship 10:00 AM Adult Bible Study and Sunday School 11 AM Adult Bible Study Monday 6:00 PM Wednesday Night Service 6:00 PM Praise Team meets on Thursday at 6:00 PM
WANTED: BULKY WASTE PICKUP
Double Gold Medal Winner and Best Cabernet Sauvignon of Show at the San Francisco International Wine Competition
Tasting Sat. & Sun. 11:30 to 4:30 pm.
Now Celebrating Our 22nd Year
8 miles south of HWY 166 on HWY 33 in Ventucopa, Cuyama Valley, 4211 HWY 33. (661) 766-2319
See our new Website!
Advertise With The Taft Independent Call Today 765-6550
Ford City Tuesday
South Taft & Taft Heights Friday
City of Taft Wednesday
• REFRIGERATORS • MATTRESSES • WATER HEATERS • STOVES • WASHERS & DRYERS • SOFAS If Missed… Call Office at 763-5135
All green waste must be bagged. Tree Limbs cut in 6’ length, and bundled. ITEMS NOT ACCEPTED Construction/Demolition Waste/Used Oil/ Hazardous Waste/Tires Westside Waste Management Co., Inc.
10
Classifieds
Classified Ads areare $3.00 per issue for upPhone, to threefax, lines, $5 per Classified Ads $2.00 per line. mail or issue off for up to 5 andTaft $7 per issue for up to 10 lines. Yard drop your adlines, to the Independent.
Sale ads are free. Phone, fax, mail or drop off your ad to the Taft your Independent. Ad photograph for $5. Ad your company logo for
TAFT INDEPENDENT
Affordable Rents We’ve Got em!
$5. Boxed ads are $3 additional. E-mail us (or bring to Boxed\outlined\bolded classified ads start at $12.00 for 8 our office) a photo of$20 your car,$25 truck lines, $16 for 12 lines, forhome, 15 lines, for or 20 motorcycle lines. and we’ll do the rest. Photo Ads. Car, truck or house for sale ads are $5 per week, Yard are $2Email for 3us lines, additional or $10Sale withads a photo. (or bring to our lines office)$2a each. photo of your home, car, truck or motorcycle and we’ll do the rest.
Classified ad deadline is Wednesday at 12 p.m. (noon) Classified ads deadline is now Wednesdays at 2 p.m.
Phone: 765-6550
Preserving for the Future
Phone: 765-6550
Fax: 765-6556 765-6556 Fax: E-mail:Taftindypublisher@bak.rr.com Taftindypublisher@bak.rr.com Email: Payment can byby cash, check, or credit card. card. Payment canbebemade made cash, check, or credit Taft Independent 6thCenter St., Taft, CATaft, 93268. Taft Independent210 508 St., CA 93268
Community YARD SALES Advertise your yard sale ad. 3 lines for $2, additional lines after that $2 each. Fax your ad to 765-6556 or call and leave message at 765-6550 by 12 p.m. Wednesday.
SEEKING INFORMATION
ANNOUNCEMENTS COUNSELING & SUPPORT
Business Services Friday, Saturday, Sunday 6am-? 204 D Street. Porcelain dolls. Yard sale, three families 324 East Woodrow 8/5 - 8/6 8am to ? childrens clothes, household Items and misc. 707 Harrison St. Apt. B. Sat.and Sun. 8 a.m.-? Tons of everything!! Saturday 7am- noon 106 Woodlawn Ave. Lots of misc. 506 Sierra Saturday 8/6 7am Tons of nice teen girl & plus size womens clothes, elec dryer, port DW and more! Saturday Aug 6th only Church wide Yard Sale Calvary Baptist Church in Valley Acres, off hwy 119, 8am-2pm. Yard Sale Friday and Saturday and Sunday 616 Taylor St., baby stuff and household misc. 7am-noon. Saturday 533 E Street 7am- noon. 50 years of collectables, antiques, appliances, books and much more! 506 Church Street Saturday 7am-? Clothes, shoes, furniture, misc. 610 Phillippine Street 8am-3pm Friday and Saturday. Tables, mini fridge, clothes, dvd. and more. Yard Sale 403 Shasta. Saturday. Lots of stuff!
HANDYMAN SERVICES Handyman: Coolers, Landscaping, Homes. Call 765-2947 or 6231529
293-0359 or 661-7656497. We will pick up!
For Sale FOR SALE Pickers Buy & Sell 428 Center Street. Tools, Furniture, Household, Collectables.
MOTORCYCLES AUTOMOBILES
Pets & Livestock FOUND PETS
ALTERATIONS HOUSE CLEANING COMPUTER SERVICES
Taft P.C. Services
PETS Shihtzu puppies 2M/1F reg. vet checked 12 wks $250 ea. 661-763-3222 or 747-0638
LIVESTOCK LOST PETS
Real Estate Computer Repairs 661-623-5188
Employment HELP WANTED BUSINESS OPPORTUNITY
Wanted WANTED Junk Cars! Cash Paid (661) 805-0552 Old Appliances, In ANY Condition. Car Batteries & Motorparts. Cash Paid $1 - $20 Call David 661-
PROPERTY MANAGEMENT
Taft Property Management 1,2,3 and 4 Bedrooms now available in good areas. CRIME FREE HOUSING Brokers Licence 01417057 661-577-7136
PROPERTY FOR RENT BUSINESS FOR SALE
FOR SALE Established local Taft business. Taft Crude Coffee House and Deli. Excellent location, near Taft College. In business for 6 years. $25,000. Room to expand product offerings. Good family business. Call 661-623-4296.
HOMES FOR SALE Real Estate eBroker Inc. 325 Kern Street Karri Christensen LIC# 01522411 & #01333971 661-332-6597 Real Estate Sales & Purchase 2bd. 1 ba. $9,000. on leased land. New carpet and paint. Negotiable. 623-6718. 114 Franklin $40K (Contingent) 417 Tyler 3bd 2bath $60K 106 Lee St 3bed 2 bath $129,500 9057 Ellis Street 4bed 2 bath 10 acres $140K Commercial Building $169K Restaurant/ Dry Goods Store $195K 160Acres in Maricopa $295K Wondering how buying a house works? Set an appointment with Karri to watch a FREE video on the process. Call 661-332-6597 for a current list or drop by the office. ____________________ 4 Homes in Taft 1 House in Maricopa. $26,000 to $85,000. Serious Inquiries only. $9,500 down. Owner carry. 661-343-0507.
MOBILE HOMES HOMES FOR RENT 3 bd rm 1 ba. home on 902 Williams Way. Huge backyard and newer detached 2 car garage. Original garage has been converted to large 4th bedroom or office. $1,250 mo plus $1,250 deposit. Ref. Req. Credit check. No smoking only. 623-4296 West Valley Real Estate
(661) 763-1500. Lic # 01525550 www. BuySellManage.com. FOR RENT 507 Tyler 3/2 512 D St 3/2 223 Eastern 3/2 410 Buchanan 3/1 119 1/2 Madison 1/1 502 Lucard 1/1 FOR SALE Why rent when you can buy for almost half the cost?! Contact us for details and a complete list of homes for Sale! Super clean 1 bed room house with kitchen appliances, plus washer dryer hook ups. Water, garbage, pest control and gardener furnished. No pets. $800 plus $600 deposit. Call 765-4786 between 7a.m and 7 p.m. 707 Filmore 3 bd/1ba $750 mo. + dep. 707 1/2 1 bd/1ba $420 mo. + dep. 661-343-0507.
APART. FOR RENT Room/Studio $350mo + dep. 661-577-4549 alintaft@yahoo.com Newly redecorated 2bd upstairs Apt. Kitchen appliances and washer dryer furnished. All util. paid No. pets. $700 mo. plus $500 deposit. Call 765-4786 between 7 a.m. and 7 p.m. MCKITTRICK. 3/2 Apt. Newly furn.$650 mo. Taft Property Mgt. 661 745-4892. Brokers Licence 01417057 Creekside Apartments. 1 BD and 2 BD. Pool, AC & Appl. 661.7657674. 420 Finley Dr. Courtyard Terrace Apts. 1 and 2 bdrm’s Pool,lndry rm.,1210 4th St. Apt. 1. Sec. 8 OK. (661) 763-1333.
Well-Being BEAUTY PERSONAL TRAINER
August 5 - 11, 2011
Business Services
Ken Shugarts
Air Conditioning & Heating
Cleaning Services My Fair Ladies Cleaning Services Comm. and Residential Serving the Westside 661.477.3455 Lic. No. 007657 Rite Away Carpet Cleaning Carpet & Upholstery Cleaning\General Cleaning Owner Operated Visa\Master Card 765-4191
Plumbing • Septic • Roto-Rooter Framing • Electrical • Concrete We Do All Phases of Construction Kitchen and Bathroom Specialists Ken Shugarts (661) 343-0507 30 Plus Years in Construction License No. 927634
Personals WHEREABOUTS Danny Daniels If anyone knows his whereabouts please tell him it is very urgent for him to contact Betty Lopez at 805-350-0392 or 406-889-3755 or Vicky Valencia at 805245-8130.
Get It Rented!!
Real Estate eBroker Inc. 325 Kern Street
Karri Christensen
LIC# 01522411 & #01333971
661-332-6597
Place Your Ad for $2 Per Line! Call Today (661)765-6550
Get a Real Estate Sales & Purchase
Lot for a Little
Marketing is important to your business. The Taft Independent has marketing opportunities for every budget, large or small. By advertising in the Taft Independent, you will reach over 7,500 potential customers every week. To make a small budget go a long way, call us today at 765-6550
ADS STARTING AT
$
10
PER WEEK
508 Center Street or email taftindypublisher@bak.rr.com
ROGER MILLER INSURANCE a division of DiBuduo & DeFendis Insurance Group
Rich Miller
License # 0707137 • (661) 765-7131 531 Kern Street - P.O. Box 985 (661) 765-4798 FAX Taft, CA 93268 • (661) 203-6694 Cell.
August 5 - 11, 2011
TAFT INDEPENDENT
Derby Acres Tumbleweed Festival
Last Saturday, July 30th, Orchel Krier, owner of The Tumbleweed Bar & Restaurant, hosted the 3rd Annual Derby Acres Tumbleweed Festival. The event was highly attended with activities catering to everyone’s interests, including water slides, a horseshoe tournament, vendors and much more!
Edward J. Herrera Insurance
Negocios Hispanos Negocios de venta
Servicios The Cell Fone Store Móviles y Accesorios y alimentos y más 510 Finley Drive 661-765-2500
G and F Footwear
Athletic and Tennis Shoes Vans - Nike - Levis Adio and More! T-Shirts and Pants 405 Finley Street In the Pilot Plaza Phone 340-8609
Rosy’s Closet
Hombres y Mujeres Ropa y Zapatos 401 Center Street Mar. - Sáb. 10am-8pm Dom. 11am-8pm Cerrado los Lunes
Su anuncio aquí! Las bajas tasas! Llame hoy mismo! 765-6550
Sponsored by Edward J. Herrera Insurance
Mercado de Agricutores de Taft Cada Jueves 5:00pm - 8:00pm Quinta Calle Plaza (5a Calle entre Main y Calle Center)
Auto - Home - Health - Business - Notary Public We are an Independent Agency With Many Pre-Eminent Insurance Companies To Best Suit Your Needs
We Represent You To Give You The Best Service
WE Offer You Low Discounted Rates Our Friendly Staff
Edward J. Herrera Insurance
Vienen a comprar los productos!! Comprar Fresco y Local!! Frutas, verduras, hierbas, productos horneados, mermeladas, joyas, ropa de cama, artesanias y mucho mas Interesado en convertirse en un productor, proveedor o artista (musicos, cantantes, comedios) Ponganse en contacto con Shannon en 661-765-2165 o shannontaftchamber@gmail.com
420 Center Street Taft, Ca 93268 (661)745-4920 Lic. # 0277365
Auto - Casa - Salud - Negocio - Notary Public Somos una Agencia Independiente Con Varias Aseguradoras Prominentes Para Darle El Mejor Servicio
Lo Representamos A Usted Para Darle Un Excelente Servicio Como Usted Se Lo Merece
Le Ofrecemos Los Mejores Precios Nuestro Personal Amable
420 Center Street Taft, Ca 93268 (661)745-4920 Lic. # 0277365
12
August 5 - 11, 2011
Devon’s Body Shop
Used to be Paul’s
HARRISON STREET AUTOMOTIVE
209 Harrison Street • Taft (661)765-2505 or (661)763-1887 fax Ask about $500.00 Free Smog Repair Restrictions Apply
$39.75 * for Smog Check ‘96 or Newer plus certificate
TAFT INDEPENDENT
Bike Shop
We Have Moved! Come see us at 608 Center Street
745-4919
* must present ad at time of service
ANNOUNCEMENTS SERVICES
408 Main Street • (661)765-4337
Qik Smog & Tune
MORTIMER
FIREARM TRAINING
CCW Classes & BSIS Classes
(661)763-4445
Taft Independent Publisher@taftindependent.com
Ray Mortimer, Instructor 661-747-6965
No Appointment Needed for Smog Check! Certified C.A.P. Station General Automotive Repairs
Pre-register at Ten Percent Firearms 661-765-6899
• 661-763-4445 • 500 S. 10th Street
Larry Heptinstall, 661-342-4033 (CCW) Jay Thomas 661-809-1772 (BSIS)
Randy’s Trucking Cart-Away Concrete Mix Trailer • Hydraulic Rotation and Tilt for Mixing and Dumping • Mixes Concrete While Traveling • • Large Internal Blades • • Rear Operator Control Panel •
western shop & PET SUPPLY Now Carrying Wrangler FR Shaw’s Pet Wash Work Pants & Shirts 1st Dog at reg. price August Wash & 2nd dog at 1/2 price! Special Small dog up to 30lbs $14.00 $56.99 $65.99 Dogs 30 lbs & over $17.00
FR spray available for your at home washing and caring needs.
Wrangler Cowboy Cut Jeans
(661) 763-4773 1050 Wood Street
t Augusal Speci
13 MWZ
Includes: Shampoo, conditioner, brushes, nail clippers, dryers, and an air conditioned room.
Kennels are available for additional dogs
$31. 99 $29.99
Nails clipped and filed $12
Each additional dog or cat $9
Saturday, August 27th Dog Rabbies Clinic Ford City Park • 8am-12pm
Monday-Friday 9-5:30, Saturday 9-3
419 Harrison St. Taft, CA 93268 (661) 765-2987
The Tumbleweed Bar and Restaurant Located in the Heart of Oil Country On the Petroleum Highway
We Cater Your Place or Ours Full Bar Available For You Special Breakfast - Lunch - Dinner - Full Bar - Catering - RV Parking Available Event 24870 Highway 33 in Derby Acres • (661) 768-4655
Open 7 Days a Week
Owner Orchel Krier Welcomes You and Your Family - Dinner Reservations
Senator Jean Fuller Visits, Taft and The Westside
Published on Aug 5, 2011
Senator Jean Fuller Visits, Taft and The Westside | https://issuu.com/taftindependent/docs/taft_indy_8-5-11_all | CC-MAIN-2018-22 | refinedweb | 10,799 | 63.7 |
GameFromScratch.com
This tutorial is a quick guide on getting MonoGame up and running on Windows OS. The process is incredibly straight forward, with two options on how to proceed.
First you are going to need a development environment, of which two options are available. Xamarin’s Xamarin Studio or Microsoft’s Visual Studio. If you wish to go the Xamarin Studio route, be sure to check the MacOS guide, as the process will be virtually identical. The rest of this tutorial assumes that you chose to install Visual Studio, the industry standard IDE for Windows development which is now thankfully available for free.
So first things first, download and install Visual Studio 2013 Community. Be sure that you select Community and not one of the 90 trial editions. The community install is a complete and full functioning version of Visual Studio, but with some limitations on the size of your company.
As of time of writing, this is the version you want.
If you want to talk a walk on the wild side, the release candidate of Visual Studio 2015 will also work. Of course, it’s a release candidate… so buyer beware.
Installing either with the minimal recommendations or better will get you all that you need installed.
By far the easiest option, simply download and run the installer available here. Be sure to shut down Visual Studio before installing.
Click Next, then agree to the EULA… after you read it and submit it to your lawyer for approval of course…
Next you will be prompted for the features you want installed. The defaults are pretty solid, click Install:
Visual Studio integrates a package manager called NuGet. This offers a few (potential) benefits over using the library’s standalone installer.
Actually, that’s about it. Basically if you want to be kept up to date on updates, this is the route to go. The install process is certainly more complicated though, at least initially.
First of course you need the NuGet package manager installed. It’s getting more and more common in use, so you will probably have it installed or need it installed shortly. It is available as a Visual Studio extension or command line utility.
To install with NuGet, Launch Visual Studio, on first run you may have to make some configuration choices, don’t worry, most of these can be revisited later on. Once configured, select the menu Tools->NuGet Package Manager->Package Manager Console:
Now you simply install the packages you want. Unlike the installer you download the various packages independently. The list of packages are available here. Assuming you are going to develop initially on Windows, you probably want to start with the DirectX Windows version. To install that in the Package Manager Console type:
Install-Package MonoGame.Framework.WindowsDX
This will download and install all the required files and dependencies. For more details on the MonoGame NuGet packages please read this post on StackOverflow by the maintainer.
Now that you’ve got MonoGame installed let’s create our first project. If not already done, load up Visual Studio.
Select File->New Project
In the resulting dialog, on the left hand side under installed templates, select Visual C#->MonoGame, then on the right hand side select MonoGame Windows Project, pick a location, a project name, if you want source control, then click OK.
As you can see, MonoGame ships out of the box with templates for a number of different targets and a few above may require a bit of an explanation. The MonoGame Windows Project targets Windows desktop using Direct X for the backend. The OpenGL project is another Windows target, but instead using OpenGL as the backend. As DirectX is Windows, XBox and WinPhone only, you may find using the GL backend the most consistent if targeting Mac, Linux, Android and/or iOS, as those all use OpenGL on the back end. A Windows 8.1 Universal project is an application that supports both Win 8 desktop and mobile targets with one code base, and if I am honest, with the announcement of Windows 10, is probably a complete dead end.
Ideally you will write most of your code as libraries, with only the platform specific portions in their own corresponding project file. We will look at this process closer down the road. For now we will KISS (Keep It Simple Stupid) and just target the platform we are developing on.
Once you click OK, the following simple project will be created:
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
namespace Game1
{
/// <summary>
/// This is the main type for your game.
/// </summary>
public class Game1 : Game
{
GraphicsDeviceManager graphics;
SpriteBatch spriteBatch;);
}
}
}
Don’t concern yourself overly with the code, we will explain it all shortly. We just want to make sure that your MonoGame install is up and running. To run your game hit F5 or press:
Assuming all went well, you should see a window drawn in CornFlower blue:
Congratulations, you’ve just created your first ever MonoGame application!
Let’s talk briefly about the Content Pipeline. This is the way you get content into your MonoGame application. If you look in the Solution Explorer, you will see a file named Content.mgcb.
Double click it, and the MonoGame Content Pipeline tool will open:
Let’s quickly add a texture to our project. Select Edit->Add->Existing Item, navigate to an existing image file somewhere on your computer.
Next you will be prompted for how you want the file to be added, copied or linked, I chose copy to break the connection with the source image. This means updates to the original image will not be propagated, nor will deleting it have any effect.
The image will be added to your content bundle, as as you can see with it selected, there are a number of properties exposed for how the image should be processed when building content:
There are several Import processors available for various different file types:
Each one exposes different parameters you can modify. For images some of the key properties exposed are the option to automatically resize the image to a power of 2 dimension, to change the texture compression settings (TextureFormat) or setting the color key value used for transparencies.
Now select the Content root node and you will see the property details have changed:
The key setting here is Platform. Each different platform can have slightly different requirements and the Content Pipeline takes care of that for you. In this case we already have Windows set for the pipeline, so there is nothing that needs to be changed.
Now Build your content using the Buld->Build menu option, or by hitting F6.
Now back in Visual Studio, confirm that the build action on your Content.mgcb file is set correctly. Right click the file and select Properties:
Make sure that Build Action is set to MonoGameContentReference.
This will enable you to use the content as if it was installed locally, making switching between platform versions trivial.
Now actually using content in code is as simple as:
protected override void LoadContent()
{
spriteBatch = new SpriteBatch(GraphicsDevice);
var myImage = this.Content.Load<Texture2D>("SBO6h0Gk");
}
Don't worry, we will cover this process in more detail later. Just note that the file extension is not used.
Behind the scenes, you may notice that the Content Pipeline tool created the following xnb file in the /bin/Windows subdirectory of Content:
Programming
MonoGame, C#, Tutorial | http://www.gamefromscratch.com/post/2015/06/10/Getting-Started-with-MonoGame-on-Windows.aspx | CC-MAIN-2017-17 | refinedweb | 1,250 | 62.88 |
Introduction:
In this article I will explain how to create app in twitter and implement twitter login authentication for website in asp.net.
Description:
In previous post I explained article how to integrate facebook login authentication for website in asp.net. Now I will explain how to allow users to login with twitter accounts in website using asp.net.
In previous post I explained article how to integrate facebook login authentication for website in asp.net. Now I will explain how to allow users to login with twitter accounts in website using asp.net.
Before implement twitter login authentication we need to get consumerKey and consumerSecret key from twitter for that we need to create application in twitter by using this link once open that will display window like this
Once app page opened enter Application Name, Description, website (Ex:) and callback url details and click create new application button and here one more thing we need to remember is twitter won’t support for localhost sites (ex:) because of that we need to give hosted domain site url.
If you want to test this with your local application no worries check this post how host website in IIS with custom URL .Once our app created in twitter that would be like as shown below image here we can change logo of our application
Now create new application using visual studio and write following code aspx page
Now in code behind add following namespaces
C# Code
C# Code
After completion of adding namespaces write following code in code behind
If you observe above code I used oAuthTwitter class file you can get this class file from downloadable code. Now get consumerKey and consumerSecret key from twitter and add it in web.config file like this
Demo
Download sample code attached
44 comments :
INDEED YOUR THE BEST...
Thank you very much,but have can I get the user mail address?
hi,
excellent blog .
thanks ,
Ajay. The response contains the user id, screen name etc. but not the email ID.
Is it possible at all to retrieve the email ID of the user?
best one yarrr thank u....
thank u but how to get followers/following users images using asp.net
hello sureshbhai.. I follow your article for login using facebook in website..now i want that get user login data like email ,paassword for storing in our database..and i also want how to get this data at server side means how to get username,password etc on server side to store it on our database
Hi...But how to get other details like firstname,lastname,email,city,state etc....
32
Hi sureshbhai i got this error on return url,
The remote server returned an error: (404) Not Found.
Once i click on Authrize app im getting a window with pin number and it ask me to type it in my website
Hey Suresh good example, I did the same thing and am getting a 404 exception. Not sure whats wrong though..
Again thanks sir...
Yeah ,it doesn't work anymore I guess something changed by twitter.
url = "";
xml = oAuth.oAuthWebRequest(oAuthTwitter.Method.GET, url, String.Empty);
after this line I get 404 error.
Change the foolwing url with
it worked
Really nice article
I seriouslу love уour blog.. Very nice сolors &
thеme. Did you develοp thіѕ
amazing site yoursеlf? Рlеase reply
back as I'm looking to create my own personal website and want to learn where you got this from or what the theme is named. Thanks!
Also visit my weblog -
my web page:
Pls reflesh the source code links
it does.nt work
url = "";
xml = oAuth.oAuthWebRequest(oAuthTwitter.Method.GET, url, String.Empty);
after this it gives error
The remote server returned an error: (400) Bad Request.
Hello
Sir,
Really a very nice blog...but i am fresher dev
So i have task to do the twitter integration
I have follow your code but how i get the
<add key="consumerKey"
<add key="consumerSecret"
of the individual user ...
please suggest what i suppose to do ...
you can get <add key="consumerKey"
<add key="consumerSecret"
on this link
It does not work for me.
I got the following Error,
-----------------------------------------------
Server Error in '/ThanthiT:
Line 294: finally
Line 295: {
Line 296: webRequest.GetResponse().GetResponseStream().Close();
Line 297: responseReader.Close();
Line 298: responseReader = null;
Source File: c:\kalai\website\Thanthi\App_Code\oAuthTwitter.cs Line: 296
Any Solution ?
Following Code will work,XML ends on 2012.Here Use JSON
-----------------------------------------------
if (oAuth.TokenSecret.Length > 0)
{
//We now have the credentials, so make a call to the Twitter API.
url = "";
xml = oAuth.oAuthWebRequest(oAuthTwitter.Method.GET, url, String.Empty);
JObject o = JObject.Parse(xml);
name = Convert.ToString(o["name"]);
username = Convert.ToString(o["screen_name"]);
profileImage = Convert.ToString(o["profile_image_url"]);
followersCount = Convert.ToString(o["followers_count"]);
noOfTweets = Convert.ToString(o["statuses_count"]);
recentTweet = Convert.ToString(o["status"]["text"]);
}
Hello,
It doesn't work it gives error 401 ... Any solutions,Please?
Regards
I solved the problems :)
after long time trying I figured out those changes:
1- Open oAuthTwitter.cs file.
2- Change the links as follow:
public const string REQUEST_TOKEN = "";
public const string AUTHORIZE = "";
public const string ACCESS_TOKEN = "";
private string _callBackUrl = ""; // your app link or any other link...
This problem occure because Twitter has changed the links from(http to https )
this works for me .
Thanks, with modified code in some user comment in your blog. It works
hey,It gives me error :
The remote server returned an error: (401) Unauthorized.
Nice Blog Thanks !
how to get user email from api?
I got the following error
The remote server returned an error: (407) Proxy Authentication Required.
what is the problem ???
It gives error
remote server returned error:(401)Unauthorized)
It gives error
remote server returned error:(401)Unauthorized)
Follow "R koyee" comment above. It worked for me. http to https. Nice work Suresh.
Hi,
Without login in twitter,posible to get tweet information using email addredd in asp.net.
My Requirement is below:
with the use of email address i want to show user profile details & user tweet in my web application.
I create application in twitter for getting api key.
but every time need to login with application.it is not good for me.
So please any idea about how to get those information without login.
this code is not working.its gives me error 401
can u help me??what is the procedure to remove the error
public string WebResponseGet(HttpWebRequest webRequest)
{
StreamReader responseReader = null;
string responseData = "";
try
{
responseReader = new StreamReader(webRequest.GetResponse().GetResponseStream());
responseData = responseReader.ReadToEnd();
}
catch
{
throw;//ih this line error is showed error 401.
}
finally
{
webRequest.GetResponse().GetResponseStream().Close();
responseReader.Close();
responseReader = null;
}
return responseData;
}
Hi,
Code is working fine for me and I am getting data from twitter. But if I refresh the page, its goes to error 401. Any idea?
i get 401 unauthorized too
and i used https as suggested by R koyee
but it still does not works... | http://www.aspdotnet-suresh.com/2012/05/add-twitter-login-authentication-to.html?showComment=1355720539097 | CC-MAIN-2013-20 | refinedweb | 1,159 | 58.99 |
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.
Decisions trees are frequently used to represent workflows or algorithms. They also form a method for non-parametric supervised learning. A tree mapping observations to target values is learnt use this method to find the features the most influent on the price of Boston houses. We will use a classic dataset containing a range of diverse indicators about the houses' neighborhoud.
import numpy as np import sklearn as sk import sklearn.datasets as skd import sklearn.ensemble as ske import matplotlib.pyplot as plt import matplotlib as mpl %matplotlib inline mpl.rcParams['figure.dpi'] = mpl.rcParams['savefig.dpi'] = 300
data = skd.load_boston()
The details of this dataset can be found in
data['DESCR']. Here is the description of all features:
The target value is
MEDV.
RandomForestRegressormodel.
reg = ske.RandomForestRegressor()
X = data['data'] y = data['target']
reg.fit(X, y);
reg.feature_importances_. We sort them by decreasing order of importance.
fet_ind = np.argsort(reg.feature_importances_)[::-1] fet_imp = reg.feature_importances_[fet_ind]
fig = plt.figure(figsize=(8,4)); ax = plt.subplot(111); plt.bar(np.arange(len(fet_imp)), fet_imp, width=1, lw=2); plt.grid(False); ax.set_xticks(np.arange(len(fet_imp))+.5); ax.set_xticklabels(data['feature_names'][fet_ind]); plt.xlim(0, len(fet_imp));
We find that LSTAT (proportion of lower status of the population) and RM (number of rooms per dwelling) are the most important features determining the price of a house. As an illustration, here is a scatter plot of the price as a function of LSTAT:
plt.scatter(X[:,-1], y); plt.xlabel('LSTAT indicator'); plt.ylabel('Value of houses (k$)');
You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).
IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages). | http://nbviewer.jupyter.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter08_ml/06_random_forest.ipynb | CC-MAIN-2018-13 | refinedweb | 314 | 52.76 |
I am trying to to pull bounding box and area data from segmented masks. For the mask shown below, instead of correcting proposing 5 bounding boxes, regionprops proposes 2394 bounding boxes. The image/mask is a binary numpy array with values of either 0 or 1. What can I do so that skimage.measure.regionprops proposes the correct number of bounding boxes? Most of the bounding boxes generated have an area of 1 pixel.
Hey @jameschartouni,
would you mind sharing the code you executed and an example image so that we can have a look what might go wrong?
Thanks!
Cheers,
Robert
The image is the ‘Mask.’
from skimage.io import imread import matplotlib.pyplot as plt from skimage.segmentation import mark_boundaries from skimage.measure import label, regionprops, find_contours import cv2 import os import numpy as np from PIL import Image import matplotlib.pyplot as plt img = Image.open("Mask_Data/10_10_predicted_mask.jpg") mask_labels = label(np.asarray(img)) props = regionprops(mask_labels) img_copy = np.asarray(img) for prop in props: if prop.bbox_area > 0: cv2.rectangle(img_copy, (prop.bbox[1], prop.bbox[0]), (prop.bbox[3], prop.bbox[2]), (145, 0, 0), 2) fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize = (15, 5)) ax1.imshow(img) ax1.set_title('Image') ax2.set_title('Mask') ax3.set_title('Image with derived bounding box') ax2.imshow(img, cmap='gray') ax3.imshow(img_copy) plt.show()
Hey @jameschartouni,
would you mind sharing the example image “Mask_Data/10_10_predicted_mask.jpg” so that we can have a look what might go wrong?
Thanks!
Cheers,
Robert
Hi @jameschartouni,
I think the problem is that you use a mask that was saved in a jpg format. jpg is compressing information and sort of resampling your binary mask, thus creating an image that is not a mask and has weird features (like that box, typical of jpg compression). I made a small example using the imagej blobs picture. I created a mask and saved it once as jpg and once as tif. Then I labelled both with skimage. This just shows the regions to identify:
(65.2 KB)
This shows the labelled image based on a jpg image:
And finally this shows the labelled image based on a tif image:
So I think you can solve your problem by simply saving your masks as tif files and not jpg.
Cheers,
Guillaume
Guilluame,
That seems to have fixed it! Thank you so much! Would have never figured that out. | https://forum.image.sc/t/regionprop-proposes-too-many-bounding-boxes/29898 | CC-MAIN-2020-40 | refinedweb | 404 | 68.97 |
from Sensor Gallery project: a custom TextLabel element that works on both platforms (note that some parts of the code are omitted):
import QtQuick 1.0
Rectangle {
property alias text: labelText.text
...
gradient: Gradient {
GradientStop { position: 0.0; color: "#555555" }
GradientStop { position: 1.0; color: "#222222" }
}
// Inner rectangle to make borders
Rectangle {
...
}
Text {
id: labelText
...
}
} interface classes and dynamic binding usually solves the issue:
...
#include <QObject>
#include "myinterface.h"
class MyClass : public QObject
{
Q_OBJECT
public:
explicit MyClass(QObject *parent = 0);
...
private:
MyInterface *mMyImplementation;
};
...);
#else
}
...
In the source file (.cpp) construct a different instance depending on the target platform.
On how to define Harmattan specific code scope see here.
Step 2: Project file and application deployment
Add the Harmattan configurations into the project file. The following block contains the configurations in the project file the main.cpp so that the correct QML file is loaded when application is launched (note that this is only required if the QML files are deployed onto the device instead of putting them into resources):
#if defined(Q_OS_SYMBIAN) || defined(Q_WS_SIMULATOR)
// Symbian and Simulator
view.setSource(QUrl::fromLocalFile("qml/symbian/main.qml"));
#else
// Harmattan
view.setSource(QUrl::fromLocalFile("qml/harmattan/main.qml"));
#endif
QML and Javascript files can be compiled into the binary using the Qt resource system. By using the resource system one does not have to deploy (i.e. copy) the QML files onto the device. However, due to a bug in Qt Quick 1.0 on Symbian QML files using the component icons cannot be placed in resources. The bug is fixed in Qt Quick 1.1 release for Symbian.
If the main QML file is loaded from resources and you have created separate resource files for both platforms, no changes are required:
view.setSource(QUrl("qrc:/<path>/main.qml"));
Build the application using Harmattan target and fix any errors found. Build errors help you to find the platform specific code that needs to be rewritten for Harmattan. The #ifdef approach usually works.
Step 4: First run
If your Symbian Qt Quick application uses Qt Quick components 1.0 and components such as PageStack, StatusBar or ToolBar, you probably need to modify at least your main.qml of the Harmattan build. The following snippets from Sensor Gallery example show the possible differences in main.qml between Symbian and Harmattan:
import QtQuick 1.0
import com.nokia.symbian 1.0 // Symbian Qt Quick components
import "." Symbian components will also be equipped with PageStackWindow element.
Now, run/debug your application with emulator or on a device and locate the possible errors from the debug log. Fix also possible scaling issues since the resolution and when your application looks and behaves the way you want it to, don't forget to test that your original Symbian version still works.
If your application is using Qt Quick components, note that the component set does not match perfectly. For example, Qt Quick components 1.0 on Symbian lack the PageStackWindow element and Harmattan components version 1.0 lack the ListItem element which you need to implement yourself when porting an application (see the snippets below)., hosted in Projects, that are ported using the approach covered in this section:
- RentBook
- Tic-Tac-Toe over Sockets (ported original from Windows Phone to Qt Quick). | http://developer.nokia.com/community/wiki/index.php?title=Porting_Symbian_Qt_Apps_to_Nokia_N9&oldid=113726 | CC-MAIN-2014-15 | refinedweb | 543 | 51.24 |
I'm trying to modify a homework problem. The first problem was to create an array and compare the user input to the array. Now we're supposed to compare the user input to data stored in a file by writing a method to open the file. I tried writing the method, but I'm terribly confused about calling it in main.
In the first program I just parsed the user string input as an int. Now in main I don't know how to pass the array to the isValid method or if I'm opening the file properly.
Sorry for the long explanation. Trying to explain everything I've done. I'm new to Java and am having trouble connecting the dots. My textbook is not good. Thanks!
package chargeaccountmod; import javax.swing.JOptionPane; import java.io.*; import java.util.Scanner; public class ChargeAccountMod { //validates whether user entered account numbers are valid compared to file private final int SIZE = 5; private int[] validNums = new int[SIZE]; //method to open file public void openFile() throws IOException { int index = 0; File file = new File("C:/Documents/ValidAcctNums.txt"); Scanner inputFile = new Scanner(file); while (inputFile.hasNext() && index < validNums.length) { validNums[index] = inputFile.nextInt(); index++; } inputFile.close(); } //method to check whether numbers are valid public boolean isValid(int number) { boolean found = false; int index = 0; while(!found && index < validNums.length) { if (validNums[index] == number) found = true; else index++; } return found; } public static void main(String[] args) throws IOException { // test int[] acctNum; ChargeAccountMod valid = new ChargeAccountMod(); valid.openFile(); if (valid.isValid(acctNum)) JOptionPane.showMessageDialog(null, "That's a valid account number."); else JOptionPane.showMessageDialog(null, "That's an invalid account number."); System.exit(0); } } | http://www.dreamincode.net/forums/topic/152151-open-file-method-to-validate-account-numbers/ | CC-MAIN-2017-09 | refinedweb | 282 | 52.26 |
The fseek() function is defined in <cstdio> header file.
fseek() prototype
int fseek(FILE* stream, long offset, int origin);
If the file is opened in binary mode, the new position of the file pointer is exactly offset bytes from the origin.
If the file is opened in text mode, the supported values for offset are:
- Zero: It works with any value of origin i.e. SEEK_SET, SEEK_CUR and SEEK_END.
- Value returned by a call to ftell(stream): It only works with origin of SEEK_SET.
If the stream is wide-oriented, the restrictions of both text and binary streams are applied i.e. the result of ftell is allowed with SEEK_SET and zero offset is allowed from SEEK_SET and SEEK_CUR, but not SEEK_END.
The fseek function also undoes the effects of ungetc and clears the end-of-file status, if applicable.
If a read or write error occurs, ferror is set and the file position is unaffected.
fseek() Parameters
- stream: The file stream to modify.
- offset: The number of characters to displace from the origin.
- origin: Position used as reference to add to offset. It can have following values:
fseek() Return value
- On success the fseek() function returns zero, nonzero otherwise.
Example: How fseek() function works?
#include <cstdio> int main() { FILE* fp = fopen("example.txt","w+"); char ch; fputs("Erica 25 Berlin", fp); rewind(fp); printf("Name: "); while((ch=fgetc(fp))!=' ') putchar(ch); putchar('\n'); printf("Age: "); fseek(fp,10,SEEK_SET); while((ch=fgetc(fp))!=' ') putchar(ch); putchar('\n'); printf("City: "); fseek(fp,15,SEEK_SET); while((ch=fgetc(fp))!=EOF) putchar(ch); putchar('\n'); fclose(fp); return 0; }
When you run the program, the output will be:
Name: Erica Age: 25 City: Berlin | https://www.programiz.com/cpp-programming/library-function/cstdio/fseek | CC-MAIN-2020-16 | refinedweb | 282 | 66.94 |
Xperf is a powerful (and free) Windows profiler that provides amazing insights into what is constraining the performance of Windows programs. Xperf includes a sampling profiler and a very capable viewer of the sampled call stacks (see below for UIforETW instructions).
Good news! As of summer 2016 WPA includes the ability to display flame graphs natively! This is far more flexible and faster than the cumbersome export/generate process discussed here. See this article for details, and stop reading this article.
However the numerical and text-based nature of the xperf stack viewer is sometimes not ideal. Sometimes a graphical summary would be better. In this post I show how to create Flame Graphs (like the one to the right) that graphically visualize xperf sampling data.
The normal way to explore the sampling data in an xperf capture is to use a summary table with the Stack column to the left of the orange grouping bar. This makes it easy to drill down along the hottest stack.
(See my article “Xperf for Excess CPU Consumption: WPA edition” for details.)
In the screenshot below, showing Visual Studio Single Step performance problems, we can see the call stacks which account for the majority of the CPU time:
Unfortunately, while this does nicely highlight the stack which is the largest contributor to CPU consumption, the main highlighted stack only contains 27 samples out of a total of 637 on this thread. The siblings contribute another 23 samples, but it’s still hard to see the big picture. Such is the nature of displaying data primarily through text and numbers.
Flame Graphs are an alternate way of displaying call stacks, popularized by Brendan Gregg on his blog, in posts such as this one.
Any type of weighted call stack can be converted to a Flame Graph so I decided to write a Python script that would convert ETW traces to collapsed stack files that could be fed to Brendan’s flamegraph.pl script. A Flame Graph of the sampling data above gives an alternate way of visualizing what is going on:
The vertical axis is the call stack, and the horizontal axis represents how much time was spent along each call stack.
Creating flame graphs from ETW data is easier than ever before with UIforETW. Starting with v1.19 the UIforETW releases contain basic flame graph support – just select a trace and select Scripts-> Create flame graph from the context menu. Okay, don’t forget to make sure that python and perl are installed and Brendan’s flamegraph.pl script is in the UIforETW bin directory. Additional features (selecting what to flame graph) may come later.
It is immediately viscerally obvious that the majority of the time is spent along two call stacks – the largest one rooted in XapiWorkerThread::ProcessSyncTask and a smaller one rooted in XapiWorkerThread::ProcessASyncTask. The full name of the second one isn’t visible in the screen shot above, but when you view the VisualStudio.svg file then when you hover over an entry it shows the full name. Firefox, Chrome, and IE 10 can all display the .svg files.
How does it work?
The collapsed stack format that is used as an input to flamegraph.pl is easy enough to understand. Each line represents a call stack with individual entries separated by semicolons, and each line ends with a space and a number representing the count or weight of the line. This text file can even be used with grep to do some basic exploring of the call stacks, although the length of the lines usually makes this unwieldy.
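For illustration, two hypothetical lines in this format might look like the following, and parsing them is trivial (the function names are invented):

```python
# Two invented collapsed-stack lines; real ones come from the conversion script.
lines = [
    "ntdll.dll!RtlUserThreadStart;app.exe!main;app.exe!DoWork;app.exe!Render 523",
    "ntdll.dll!RtlUserThreadStart;app.exe!main;app.exe!DoWork;app.exe!Simulate 114",
]

for line in lines:
    stack, count = line.rsplit(" ", 1)  # the weight follows the last space
    frames = stack.split(";")           # frames are semicolon separated
    print(len(frames), "frames,", count, "samples, leaf =", frames[-1])
# -> 4 frames, 523 samples, leaf = app.exe!Render
# -> 4 frames, 114 samples, leaf = app.exe!Simulate
```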
The script to convert xperf traces to collapsed stack files has been rewritten to use wpaexporter which makes the process much simpler and cleaner. The export process is slightly faster, and the processing of the exported data is far faster as there is an order of magnitude less data to parse.
The next section of this post is now obsolete and is only retained for historical interest. The new script is clean enough that, with the wpaexporter post linked above, it should be easy to understand.
To convert xperf files to the collapsed stack format I made use of a few ‘lightly documented’ and unsupported features of xperf.
You can get some hints about how they work by typing xperf -help processing. None of the specific processing commands gave me what I wanted so I used the generic dumper command. The syntax for this, expressed in Python, is:
‘xperf -i %s -symbols -o %s -a dumper -stacktimeshifting -range %d %d’
The arguments are the input ETL file, the output text file, and the beginning and ending of the time range in microseconds.
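As a sketch, that command line can be assembled and launched from Python roughly as follows; the trace name, output name, and time range are placeholders, and this assumes xperf.exe from the Windows Performance Toolkit is on the PATH:

```python
import subprocess

def dumper_cmd(trace, output, begin_us, end_us):
    # Mirrors the command line quoted above; -stacktimeshifting merges the
    # split kernel/user stack fragments (see the Kernel callbacks section).
    return ["xperf", "-i", trace, "-symbols", "-o", output,
            "-a", "dumper", "-stacktimeshifting",
            "-range", str(begin_us), str(end_us)]

cmd = dumper_cmd("trace.etl", "trace_dump.txt", 0, 10_000_000)
# On a machine with xperf installed:
# subprocess.check_call(cmd)
```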
The format of the output is only documented through text lines at the top of the output file which list the names of the fields for each of the types of output data. The first type of data that we care about is SampledProfile:
SampledProfile, TimeStamp, Process Name (PID), ThreadID, PrgrmCtr, CPU, ThreadStartImage!Function, Image!Function, Count, SampledProfile type
And the other important data type is Stack:
Stack, TimeStamp, ThreadID, No., Address, Image!Function
Unfortunately these blocks of data are disjoint and there is no documentation on how to associate them. It appears that the only way to do this is to use the common information which is the TimeStamp and ThreadID. So, my Python parser scans the xperf results looking for SampledProfile and Stack data. When it sees a SampledProfile event it says “Hey, this TimeStamp and ThreadID is associated with this Process Name(PID). Watch for it”. When a blob of stack data shows up the script uses the TimeStamp and ThreadID to associate the data with a process. It’s crude, and doesn’t produce identical results to the WPA UI, but it appears to be close enough to be useful.
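That matching heuristic can be sketched roughly like this; the event fields are simplified, and this is an illustration of the idea rather than the actual script:

```python
from collections import defaultdict

pending = {}                          # (timestamp, thread_id) -> process name
stacks_by_process = defaultdict(list) # process name -> list of stacks

def on_sampled_profile(timestamp, thread_id, process):
    # "Hey, this TimeStamp and ThreadID is associated with this process --
    # watch for it."
    pending[(timestamp, thread_id)] = process

def on_stack(timestamp, thread_id, frames):
    # A stack blob is matched back to a process via TimeStamp + ThreadID.
    process = pending.get((timestamp, thread_id))
    if process is not None:
        stacks_by_process[process].append(frames)

on_sampled_profile(171203, 4711, "game.exe (1234)")
on_stack(171203, 4711, ["ntdll.dll!RtlUserThreadStart", "game.exe!main"])
```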
The rest of the script is just data manipulation – dictionary lookups, sorting, spindling, and other boring details.
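Those boring details mostly amount to counting identical stacks and writing them out in the collapsed format: roughly this, again a sketch rather than the actual script:

```python
from collections import Counter

def write_collapsed(stacks, path):
    # Merge identical stacks, then emit one "frame;frame;... count" line per
    # unique stack -- exactly the input that flamegraph.pl expects.
    counts = Counter(";".join(frames) for frames in stacks)
    with open(path, "w") as f:
        for stack, count in counts.most_common():
            f.write("%s %d\n" % (stack, count))
```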
Typical usage looks like this:
C:> xperf_to_collapsedstacks.py trace.etl
> wpaexporter.EXE trace.etl -profile ExportCPUUsageSampled.wpaProfile -symbols
Elapsed time for wpaexporter: 58.033 s
Found 10456 samples from 98 threads.
Writing 8320 samples to temporary file collapsed_stacks_0.txt
> perl flamegraph.pl collapsed_stacks_0.txt
Results are in POWERPNT.EXE_5100_3308.svg – they should be auto-opened in the default SVG viewer.
I currently have my Python script configured to summarize just the busiest thread as collapsed call stack files, but that can easily be configured, eventually through UIforETW.
Summary tables versus Flame Graphs
When all of the CPU consumption is along a single stack then drilling down into the WPA summary table is straightforward. However if the samples are spread out over a couple of call stacks then it can take a while to drill down and find the significant parts of both call stacks. If there are a half-dozen significant call stacks then summary tables are quite unwieldy.
In the example below, from Outlook when my entire computer was having a bad day, there were four thousand samples on the main thread, almost all of them in ClearSendMessages. However they are on many different call stacks. In the screen shot below we can see two of the dozen or so call stacks, accounting for fewer than a thousand of the four thousand samples. It took several minutes of exploring to realize that this one function was accounting for almost all of the CPU time:
The Flame Graph makes it obvious. The many irrelevant functions are squeezed horizontally and are easy to ignore. The expensive functions are wide horizontal lines and you can instantly look at the top of the widest columns and realize that the same function is always at the top:
The three wide areas at the top are all ClearSendMessages, and so is the one at the left, and this is plainly obvious, instead of being hidden amongst a sea of text. You can see more details in the original Outlook.svg file.
Kernel callbacks
Initial versions of this graph had severely truncated call stacks due to a rarely seen detail of how call stack collection happens during kernel callbacks. Stack walking cannot cross the kernel boundary (they use different stacks, and may even have different pointer sizes) so the collected stacks are cut off at the kernel boundaries. The xperf trace viewers hide this by merging in the other portions of the stack, and in order to get the dumper action to do this you have to specify the -stacktimeshifting option (thanks to Kyle in the comments). Since WindowProcs are kernel callbacks it means that this call stack coalescing is needed more frequently than you might think.
(See my article on When Even Crashing Doesn’t Work for other implications of crossing the kernel divide in WindowProcs)
It appears that stack collection on Linux using perf behaves identically in the kernel, with [VDSO] entries showing up as a separate stack root.
Summary
Flame Graphs are clearly a useful alternate way of visualizing stack data. If this sort of graphical summary was built in to WPA it would make some performance issues easier to understand. Currently if you want to use Flame Graphs you have to find an area of interest where CPU consumption is the issue and then run the scripts, passing the time range along to them.
The most likely use cases I can see for this is doing reports – visual summaries of performance problems found – or helping in finding regressions. I can certainly imagine a nightly performance test which creates Flame Graphs for key parts of a process so that the causes of regressions can be seen visually.
Other things that would need to be done to make these truly useful would include:
- Create scripts for dumping idle time stacks, perhaps in the same graph as the CPU consumption but in a different color
- Create scripts for summarizing memory allocations
- See if any of the trace parsing C# code that Vance Morrison blogs about would provide better ways of parsing traces
- Integrate Flame Graphs into WPA to make it easy to get a graph of the current time range being explored
In most cases the ability to interactively explore the xperf trace data in WPA – zooming in and out and seeing many types of data on the timeline – will outweigh the benefits of generating Flame Graphs outside of WPA. However I anticipate using them on some investigations in order to get a better high-level overview of what is happening, and to more easily share findings with those who aren’t as accustomed to reading xperf’s call stacks.
I hope that some day Microsoft integrates this type of visualization, either directly or through allowing plugins to the WPA UI. Done!
The script that converts xperf traces to collapsed call stacks can be found in the UIforETW repo on github at xperf_to_collapsedstacks.py and is available from the Traces context menu. You’ll need to download flamegraph.pl separately and put it in the same directory. The two sample Flame Graphs are VisualStudio.svg and Outlook.svg.
Alois Kraus wrote a managed tool that uses the TraceEvent Library to convert ETL files to flame graphs – you can find more details here.
If you want to connect the kernel and user mode stacks, use
‘xperf -i “%s” -symbols -o “%s” -a dumper -stacktimeshifting -range %d %d’
Woah. Dude. That’s awesome. The help on that option (“Time shift stacks right after trigger event.”) would never in a million years have told me that I should try it, but it works like a charm.
I now need to update the post. The Flame Graphs just got more useful.
Further notes on the stacks:
If you specify “-target machine” like this:
xperf -i %s -target machine -o %s.csv -symbols -a dumper -stacktimeshifting
…then you get each stack printed on a single line, comma-separated. Much easier to process. (There may be issues if stack frames contain commas, not sure what the behavior would be in that case.)
Second, you can assume that stacks are printed right after the event they’re logged for. XPerf restores or provides as much associativity as it can, deterministically, because ETW doesn’t associate these stacks with the events any more than through them having the same timestamp and CPU ID. XPerf virtualizes the thread ID through context switch events, I believe.
Third, the reason why stacks are logged in separate blocks (and, by default, printed in separate blocks) is because you’re looking at stacks on events logged from elevated IRQL, in which case the stackwalker cannot traverse pageable stacks so ETW queues a DPC/APC to finish the job. It’s more efficient to log these in separate events and it also buys you some ad-hoc stack compaction: if multiple kernel events are logged before a thread returns to user-mode ETW will neither walk nor log identical user stacks.
Fourth, interleaving of 32-bit and 64-bit frames is a touchy subject but it isn’t the reason for the splitting of things above. Note that xperf will not be able to properly interleave 32-bit and 64-bit stacks; it’ll print the entire 64-bit stack first, then the 32-bit stack. This generally works out just fine for 32-bit apps unless you’re looking at kernel upcalls with 32-bit code calling 64-bit code which calls back to 32-bit code. The stacks just end up looking odd and you have to sort it out by hand.
Finally, the reason why xperf.exe’s dumper defaults to non-“timeshifting” stack dumping is that in order to perform said “timeshifting” xperf needs to store all the stacks of the trace before xperf can dump it and that can be very expensive, memory-wise. The default configuration of the trace dumper is designed to operate with a minimal working set that doesn’t have to store much of anything about the trace so that xperf can still dump many-GB traces to text. (It still needs to take two passes to facilitate process/thread/image name and symbol resolution but the amount of data that is stored for that is small.)
Hi Robin, thanks a lot for all of the information.
1) I’ll try -target machine. Some call stacks definitely contain commas such as “CRefCounted1<IRefCounted,CRefCountServiceBase<1,CRefMT> >::Release” so I’ll have to test to see whether it handles them rationally.
2) I’m not sure what you mean by “stacks are printed right after the event they’re logged for”. At the very least I have often seen twelve SampledProfile events (one per hardware thread) followed by twelve stacks. I thought there was other interleaving going on as well. Then again, I haven’t revisited this with -stacktimeshifting enabled — I just added that last night. I do agree that the stacks aren’t delayed for arbitrarily long, but it also wasn’t obvious how long the delays could be.
3) That makes sense.
4) Isn’t 32-bit code calling 64-bit code which calls back to 32-bit code quite common? Most applications are still 32-bit and most Windows installs are 64-bit so it seems like every WindowProc hits this situation. The Outlook trace is the first one I’ve examined that has this situation and since I don’t have symbols for Outlook.exe I can’t tell for sure how well -stacktimeshifting fixed it. Luckily in games we don’t tend to do a lot of work in WindowProcs so it probably won’t affect us much.
5) It would be nice if the processing commands were better documented. Having to manually specify -stacktimeshifting is fine, but it remains frustrating that this option, like so much of xperf’s awesomeness, is essentially undocumented. I would never have guessed that it was helpful if Kyle hadn’t mentioned it. Searching for “xperf summary table documentation” is amusing, and “xperf stacktimeshifting” is worse.
This is great work! Thanks for the post and suggestions. I want Flame Graphs whenever I’m on Windows now (would be great if this was in WPA).
Something that’s clear to me from this post, is that this can also help a novice get to know the internals of software quickly. I’m not familiar with the target software here, but I have everything on one screen (mental picture), and can explore with the mouse (SVG) to follow the functions, understand their interactions, and see where the majority of work is done for this workload. It’s CPU samples only, but it’s a big start.
Using these for non-regression testing should help identify CPU regressions quickly. I’ve done this by having SVGs in different browser tabs and flicking between them, and looking for any growing mountains. Robert Mustacchi was working on a way to diff Flame Graphs, which could be useful for such testing, and which he will blog about soon (not there yet, but his blog does have other Flame Graph examples in the meantime). Zoomable Flame Graphs is another feature that might be worth developing.
Having Flame Graphs show memory allocations should be straightforward, and useful: instead of stacks and sample counts, it could be stacks and byte counts, and the Flame Graph code should handle it. Can also use “-title Flame Graph: Memory Bytes” to set the title (feature recently added).
Zoomable Flame Graphs would be awesome. seems like the right tool for that.
I think what is really needed is being able to zoom in on the profile data in WPA (changing the time range and thread of interest) and having a Flame Graph for the new area of interest display immediately. Zooming in on the Flame Graph itself is less critical, but would still be nice, especially when looking at memory reports.
Dave Pacheco was coding this up in his node.js version of Flame Graphs. He doesn’t have a working final example online, but his development code was here.
Good to see you here Brendan. I’ll try the -title option.
The main extra addition I’d like to see to flamegraph.pl is a way to have two or more sets of data with different color themes. I’d like to show CPU consumption in red, and CPU idle time in blue. I’m sure I can extract stacks for both and I think putting them on the same graph would work better if they were visibly separated.
Yes, including idle time (blocked time) produces a graph that is complete – all thread time, not just on-CPU. I prototyped this a while ago:
A “Hot Cold Graph”. Blue shows the idle (blocked) time.
A problem may be apparent: for multi-threaded applications, I find there is usually so much “cold” off-CPU time that the “hot” on-CPU time gets horizontally compressed in the visualization.
Solutions? One could be zoomable Flame Graphs, although once you start zooming you can lose track of the big picture (including being able to visually compare frame lengths with higher level functions). Perhaps there is a solution to that problem (dynamic visual legend). Another could be to show hot and cold graphs separately in the same SVG, both taking the full width, and providing details (a visual legend) to show the relative spans of each. I need to hack at it and find something that works.
I did this Hot Cold Graph using two mechanisms, both in DTrace: sampling stacks for the on-CPU time (coarse, but low overhead); and tracing kernel scheduler block events, with timing details and stack trace when the thread runs again. Tracing gives millisecond totals; sampling was converted to milliseconds by multiplying by the rate (rough). I mentioned this method in a recent talk as the “Stack Profile Method”, which differs from traditional profiling in that we capture both on-CPU and off-CPU stacks.
My solution to the multi-threaded question is apparent in my xperf Flame Graphs — each one covers just a single thread. That felt ‘right’, and matches how I typically explore xperf sampling and idle-time data — I almost always group by both process and thread. Showing just a single thread means that idle plus execution equals elapsed.
Is a version of flamegraph.pl that supports the two different colors available? I could try hacking your Perl script to add support but Perl and I are not good friends.
I put my development version code here (might not work – haven’t run it in a long time).
This includes the Hot Cold Graph, and code I was using for per-Thread Hot Cold Graphs (which I had created for the same reason you suggested – it lets you focus on the threads of interest).
There is another version of Flame Graphs written in JavaScript/node.js by Dave Pacheco, if that language is easier to hack on.
I would render those graphs using logarithmic time. It will compress the relative differences, but you can still visualize them. Might be hard to dynamically pick a “good” power factor for the data, but it should be possible. Or make that a custom user option, or just do this –
Pingback: Brendan's blog » Flame Graphs
I do really like Flame Graphs. I do like them so much that I have written my own stack compressor without xperf. I have employed the TraceEvent library from Vance Morrison to create flame graphs without xperf. It works quite well. See
Looks great — thanks for sharing.
BTW, you talk about identifying which thread readied another thread using heuristics — but usually this should not be necessary. You can record the ReadyThread which, in the case of SetEvent, LeaveCriticalSection, and other readying events will give definitive evidence as to which thread readied another thread.
For using context switches to calculate how long a thread is running I believe that the trick is that the context switch events record when a thread starts running. You then have to watch to see when *another* thread starts running on the same CPU in order to know when the first thread stopped running. To be really accurate you should also watch for time lost to interrupts and DPCs.
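As a rough sketch of that bookkeeping (ignoring time lost to interrupts and DPCs, and with an event-tuple layout invented for illustration):

```python
def cpu_time_per_thread(cswitch_events):
    # cswitch_events: (timestamp_us, cpu, new_thread_id) tuples in time order,
    # one per context switch. A thread's run ends when another thread starts
    # running on the same CPU.
    running = {}   # cpu -> (thread_id, start_timestamp)
    totals = {}    # thread_id -> accumulated microseconds
    for ts, cpu, new_tid in cswitch_events:
        if cpu in running:
            old_tid, start = running[cpu]
            totals[old_tid] = totals.get(old_tid, 0) + (ts - start)
        running[cpu] = (new_tid, ts)
    return totals

# Thread 1 runs 0-100 us, thread 2 runs 100-150 us, thread 1 again 150-300 us.
print(cpu_time_per_thread([(0, 0, 1), (100, 0, 2), (150, 0, 1), (300, 0, 3)]))
# -> {1: 250, 2: 50}
```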
Yes I definitely want to use the ReadyThread event. But I was seeing that for longer periods of time threads did ready my thread from processes that definitely had nothing to do with my current work. I assume that an arbitrary thread can ready me because when I am not giving up my quantum by e.g. waiting I can run until my time slice quantum is used up. Later when I am scheduled again the last active thread did ready me.
When I do use context switch events it is also easy to visualize the wait time but as I said it is perhaps not terribly useful. For the graph I have found that reversing the notation gives much better results by printing the method name first and then the class+namespace and module later.
CompanyDll!Company.Infrastructure.Algo.Class.Method(int arg1, IInterface *pInt) would become
Method(int arg1, IInterface *pInt) Company.Infrastructure.Algo!CompanyDll
That way you can immediately see which methods are involved which is the most interesting thing anyway. Another useful thing is to throw away from the call stack everything that does not belong to my code. That gives much better insight what my code is causing to the system.
I believe that you are correct that you can be ‘readied’ by any thread if you were swapped out rather than waiting on an event. I think the same thing can happen when you resume after a call to Sleep(). It would be nice if it was better documented.
Throwing away data worries me — I have seen many problems where the issue was in somebody else’s code, and leaving gaps in the call stack makes them less logical.
I actually only use flame graphs occasionally. They are handy for visually summarizing how code is behaving, but in 95% of cases I use WPA alone to explore traces. Because I am looking at custom events, GPU usage, which window is active, disk I/O, how much idle CPU time there is, what my process is doing, and where my process is waiting, anything less than the full WPA interface means that I might be missing the true cause.
A huge part of the trace analysis process for me is finding the right region to look at, based on the data in many graphs, and until I do that I can’t possibly create a flame graph. And once I have located the region of interest I often don’t need to create a flame graph. I really want flame graphs to be displayed instantly, within the WPA interface, along with the other rich data.
That is, I only create a flame graph of CPU consumption when I already know that CPU consumption is the issue, and when I need a different view of the CPU consumption data. With flame graphs for idle time I would only create them when I already knew that CPU idle time was the issue, and only if I needed a different view.
But, perhaps your code will make creating them enough easier that I will create them more often.
Flame graphs are gorgeous, couldn’t be prettier, but forgive me, I’m a contrarian. My question is, if there’s a lucrative speedup lurking in the code, can it hide from the flame graph? If it can hide from the flame, does it matter? I’m afraid the answer is Yes, and Yes.
I’m the curmudgeon on StackOverflow talking about how easy it is for speedups to hide from tools like these, why you can’t afford that, and what works instead. Some posts:
and a little math I assume you can handle:
I would not advocate using flame graphs as the only way to explore performance data. My usual tool is ETW, including the excellent Windows Performance Analyzer. It lets me see expandable/explorable call stacks of sampled data. I can also find a suspicious function and get a view of all call stacks that that function shows up in, for functions that are expensive because they are called from many places.
The data can also be viewed without call stacks which is an alternate way to find expensive functions that are called from many places. You can have a flat list of functions, you can group them by module, thread, process, etc., and you can even see what addresses within a function the samples hit at.
And, because I record all context switches and because WPA has robust viewing of this data I can also see where all threads of all processes go idle, including the stacks they are waiting on, who is readying them, and what functions frequently end up waiting (perhaps on different call stacks).
And, unlike the technique of breaking in to the debugger (which is fine, but too low of a sampling rate for many purposes), I can ask customers to record ETW traces and then analyze them on my machine, and see cross-process interactions of bugs that I can’t even reproduce. I have fixed *dozens* of bugs that only occur on customer machines.
Flame graphs are not the ultimate solution, but they provide a useful visualization that can sometimes make a problem apparent. That’s enough to justify their existence. However I tend to spend more time using WPA’s other visualizations as they are more dynamic and avoid many of the limitations that you mention.
ETW looks great and is full of features. I can’t tell from the doc if it 1) allows one to take samples mainly during subjective slowness, 2) allows samples on random wall-clock time, not CPU time, 3) retains line-level or instruction-level resolution, not just function, at every level on a stack sample, 4) lets you relate each point on the stack back to its source code, so you can tell exactly why that moment in time was spent, 5) lets you actually see individual stack samples, so you can bypass all the summaries such as stack counting, sorting, and hot-path extraction that speedups can easily hide from.
I’m sure you’ve seen all my explanations of why high sampling rates are not good – namely that you pay for measurement precision by missing speedups. Also that software can contain several problems, and if you miss finding even one of them you pay a steep price – getting 4 out of 5 is not good enough. (Some people, I’m sure, are happy to be told they have no problems, but if they have competition they can’t be so careless.)
You have a good point about finding remote problems. I have worked mainly in terms of software development. These ideas could be extended to situations like that, but it’s not my day job 🙂
ETW is designed such that you can leave it running – tracing to circular memory buffers – 24×7. I have done that for months at a time. Then, where there is some slowness (subjective, detected by software, whatever) I can save the buffers, typically covering the last 20-60 seconds. The samples are done at a fixed rate. I usually set it to ~1.1 KHz to avoid aliasing with 1 KHz behaviors. You can crank it up to 8 KHz.
The exact address of every sample is recorded. Typically a call stack is recorded. The viewer is extremely flexible and you can always export the raw data so yes you can see individual samples/stacks. The default viewer doesn’t take you back to source code – just to functions and modules and addresses – but you could write an address to source mapper easily enough.
I don’t understand why high sampling rates aren’t good. Not a complete solution, sure, but still pretty useful. With the ETW sampler running at 8 KHz the interference is still very low (< 5% overhead) and I have enough samples that if a single frame of a video game runs long (say, 30 ms) then I have enough samples to figure out why.
And if the problem is not CPU consumption but CPU idle, having call stacks for every context switch is pretty amazing.
It is useful to understand that ETW is a system for recording lots of data. This includes but is not limited to CPU samples, context switches, disk I/O, memory allocations, file I/O, virtualAlloc, registry accesses, custom data, and more, and call stacks can optionally be recorded on any of these.
Windows Performance Analyzer is a very flexible tool for analyzing ETW traces. It can answer more questions than most other tools I have seen because it is very configurable. And, you can export the raw data to do custom visualizers, so you aren't constrained to using WPA.
It is unfortunate that the whole toolset is so poorly documented and hard to learn, but I've had fun trying to fix that. Well worth trying if you work on Windows. It doesn't do everything that perf does, but it does a lot of things better.
8khz x .030s = 240 samples. Too many to look at, you gotta summarize them, and that’s how speedups hide from you. The only way you know you didn’t find it is to try another approach and find it. If you’ve got one taking, say, 30% of time, the number of samples you need to examine, to see it twice, is 2/0.3 = 6.67 samples on average. Examining samples finds any speedup that summaries find, plus speedups that they don’t. So if you find yourself actually examining particular samples, drawn at random from those 240, then I think you’re being most effective.
OK, I’ve said enough now. Cheers.
We’ll just have to disagree. I think that looking at too few samples will fail in many cases as it may give you an incorrect sense of where time is spent. This is particularly true when the CPU bottleneck varies over time.
Another handy feature of WPA is that when you highlight a sample (address, function, module, call stack, whatever) it highlights all the time ranges where matching samples occur. This makes it trivial to see patterns that manual inspection will miss (Is this location constantly hit? Is it hit every second? Every frame?).
I think you find random breaking to be better than the alternatives because you haven’t tried great alternatives. A great profiler should let you visualize the data in many different ways, and a great programmer will use the right view for the particular problem they are facing.
Pingback: ETW Central | Random ASCII
Pingback: Exporting Arbitrary Data from xperf ETL files | Random ASCII
Are there any requirements for flamegraph.pl? I placed it in the /bin/, the root of the unpacked directory, and just one below.
> perl -e ‘print $^V;’
v5.32.1
> python –version
Python 3.8.5
But all I get is:
“`
Creating CPU Usage (Sampled) flame graph of busiest process in C:\Users\user\OneDrive\Documenten\etwtraces\2021-05-21_14-59-38_user.etl (requires python, perl and flamegraph.pl). UIforETW will hang while this is calculated…
File “C:\Users\user\Downloads\etwpackage1.54\etwpackage\bin\xperf_to_collapsedstacks.py”, line 70
print “Couldn’t find \”%s\”. Download it from” % flameGraphPath
^
SyntaxError: Missing parentheses in call to ‘print’. Did you mean print(“Couldn’t find \”%s\”. Download it from” % flameGraphPath)?
Process exit code was 00000001 (1)
“`
It looks like xperf_to_collapsedstacks.py is not Python 3 compatible, and you are running Python 3. I see the same error in my setup. You could try running the script under Python 2, or you could wait for a fix for Python 3 compatibility.
Also, WPA has built-in flame-graph support now which makes these scripts less important.
Python 3 support has been added – grab the latest version from github. The updated script will be in the next release of UIforETW, whenever that ships. | https://randomascii.wordpress.com/2013/03/26/summarizing-xperf-cpu-usage-with-flame-graphs/ | CC-MAIN-2021-43 | refinedweb | 5,703 | 69.52 |
D.2.8 --.
Hey Alex!
What if I have two functions with same name, but 2 and 3 parameters with the third parameter having a default value, like this:
When I write foo(2, 4) which function is called? Is it legal, or is it another ambiguous situation?
If legal, then how do I write different statements to call the two functions?
Illegal, because you could mean either function.
Shouldn't "matching char* would require an implicit conversion" be "explicit conversion" ?
No, 0 will match char* implicitly, since 0 can represent a null pointer in this context, and the compiler will be happy to do the conversion for you.
ah thanks, i understood it back words as if we want the (char*) be the function to be called so while there is an exact match so i thought we need a conversion to call that version of the function!
hello, I have to next code:
#include <iostream>
using std::cout;
void f(float) { cout << "f(float)"; }
void f(long double) {cout << "f(long double)"; }
int main() {
f(2.0);
}
has one error, but if I change long double for double already work like this:
void f(float) { cout << "f(float)"; }
void f(double) {cout << "f(long double)"; }
int main() {
f(2.0);
}
why?
and other question,there is one compilation error or linker error?.
how identificate compilation error or linker error?
Hi jp!
The first version doesn't work, because 2.0 is a double, none of the versions of @f takes a double, but both versions could take a double by implicitly casting 2.0 to either a float or a long double. You can pass 2.0f or 2.0l to call one or the other function.
@main is missing a return statement.
It's a compilation error.
> how identificate compilation error or linker error?
Depends on your compiler. A linker error will usually have link, linker, ld or similar near the message.
thanks nascardriver.
I am not able to understand this section-> Matching for functions with multiple arguments
Could you give some example programs?
I added an example. Does that help?
Regarding: "Note that the function’s return type is NOT considered when overloading functions."
Here is an example of what looks like an arbitrary rule, and therefore yet another arbitrary rule for the student to memorize. Perhaps it would be better to explain that the compiler would have a hard time choosing the required function from the return type (at least in many instances). With such an explanation, it isn't just an arbitrary rule, but something that actually makes sense.
Good point. I added a little context about why this choice was intentionally made, for readers who care. Is the explanation clear or confusing?
Regarding: "(Note for advanced readers: This was an intentional choice, as it ensures the behavior of a function call or subexpression can be determined independently from the rest of the expression, making understanding complex expressions much simpler)."
Here is an alternative: The reason for this choice is that the return type of a function call is not a syntactic element when the function is called. As an example of the problem, suppose we create two similar functions with no parameters. One function returns int and the other long. One of those functions might be used in an expression that converts the integer value to double, and there is no general syntactic element to indicate which function is being requested.
If you want to create two functions with the same name and same parameters (if any) but which return different types, then you can create functions with a void return type and pass the address that the return value is to be stored in as a parameter. This new parameter will be a pointer to the required type and therefore able to disambiguate the two functions, e.g.
void createRandomFraction( double *result );
void createRandomFraction( float *result );
Trevor
Good point. I added a note about this into the lesson, with some discussion of the downsides of doing this, and why I don't recommend it.
For resolving ambiguous matches using option 2, as well as type casting, if one or more arguments are literals, then suffixing the literals to match one of the function’s parameters would also work (and I would say be more readable). For example, instead of
just use
The table in section 2.8 is quite useful for this.
True, and agreed. I've incorporated this into the lesson. Thanks for the suggestion!
Hi Alex,
In 1.7 you said "However, there)."
then why having
would cause the compiler to throw an error?
Thanks!
Because you can't have a function prototype that only differs by return value. If you were to call getRandomValue(), the compiler wouldn't be able to disambiguate which one you wanted to call.
You can have:
It's redundant, but syntactically legal.
Isn't it much more efficient for the program to have unique function names than having to go through all of the overloaded functions to find the right match?
It's more efficient for the compiler (the compiler is the one resolving all the function calls) -- but this isn't what you should be optimizing for. You should be optimizing for ease of use and maintainability.
So, that means we should maintain readability of the program by avoiding function overloading?
No, rather the opposite. You should use function overloading whenever it makes sense. If you have two functions that add numbers, one which works on integers and the other on floats, it's much better to name them both add than name one addint and the other addfloat.
Okay, makes sense.
Thank you.
Hello, Alex
1) I hope, you are doing well. Thanks for the great tutorial!
-Below program is Standard conversion or Numeric conversion ? add(2.99,8.91)-> float add(float x, float y);
2.99 and 8.91 are doubles converted to float.
float add(float x, float y)
{
return x + y;
}
int main()
{
std::cout<<add(2.99,8.91)<<'n';//Standard conversion or Numeric conversion?
return 0;
}
2) In in this "4.4-implicit-type-conversion-coercion" topic you have quoted as below:
Numeric conversions
When we convert a value from a larger type to a similar smaller type, or between different types, this is called a numeric conversion. For example:
double d = 3; // convert integer 3 to a double
short s = 2; // convert integer 2 to a short
or
int i = 10;
float f = i;
and
Standard conversions include:
Any numeric type will match any other numeric type, including unsigned (eg. int to float)
-But,you haven't talked about Numeric conversion in function overloading.Is there any use of Numeric conversion?If there is a use, please could you briefly tell us how to differentiate between numeric and standard conversions.
Thanks in Advance!
In this case, you're providing arguments that are double literals, but the function is expecting float parameters. Therefore, it will do a floating point numeric narrowing conversion.
A numeric conversion is a type of standard conversion.
"Which version of add() gets called depends on the arguments used in the call -- if we provide two ints, C++ will know we mean to call add(int, int). If we provide two floating point numbers, C++ will know we mean to call add(double, double) "
Could you write down an example program for this statement,asking the user to input a value ,based on the user's input (whether the user passed an int value or double vale) and calling the right function ? I tried working around with this,but could not exactly figure it out.As in function overloading several functions have the same name but just differ in data types used in params ,however they might have same variable,if that's the case how to figure out to call the right function ?
There's no easy way to ask the user for an input and vary the data type based on what the user entered. Here's a program showing function add called with different parameter types
There's no way to pass the same value or variable and have it determine which function to call on the fly -- which function to call is determined at compile time based on the types involved.
Extremely small typo: "Function overloading can lower a programs complexity " should be program's.
I love this guide and have been learning a lot in the past week. Thank you for sharing this!
Typo fixed. Thanks for visiting!
Under "Typedefs are not distinct", you wrote:
"Since declaring a typedef does not introduce a new type, the following two declarations of Print() are considered identical:
typedef char *string;
void print(string value);
void print(char *value);"
I was a little confused about this. I don't remember much about what you wrote regarding typedef in an earlier section, and I had trouble finding it. How are you able to define "string" as a synonym for "char *"? I would think that the "string" name would invalid since it's already taken. Is it not considered a reserved identifier? Is this typedef example only possible when the string needs to be qualified with it's full name(std::string)?
Also, if "string" is stolen as a synonym for "char *", does this mean that we can no longer define variables of type string(actual)?
typedef char *string; defines "string" as a synonym for char*. I cover typedefs in lesson 4.6.
string is only a taken name in the context of namespace std. string is not a keyword or reserved word.
You will run into problems here if you do "using namespace std" because then std::string will conflict with this string. But I highly do not recommend using namespace std!
I think that the typedef section is confusing because it's pretty obvious already
so this will still work, right?
also, why does posting a comment take so long? sometimes I even get an error and when I go back to this page the comment is there but I can't edit or delete it
My internet is pretty decent
I just got one of those errors and can't edit the comment
I'm not really sure. Some days the comments go through quickly, other days they take a while, and occasionally they error out. I haven't been able to figure out why. It's an issue on my end, not yours.
Yup, if lol is an int and blob is a float, then the functions will be considered distinct.
Ohhh sounds risky and confusing to debug. i hope i never have to use function overloading.
Really, the most important thing here is just understanding that function overloading exists and how to us it. All of the conversion stuff only comes into play if you provide arguments of the wrong type for the function, which you probably shouldn't be doing anyway. :)
If you understand the add() example at the top of the lesson, that's good enough to move on.
Thanks for the resource Alex. It's been years since I wrote C++ and I'm finding this a great way to get up to speed with changes in the languages.
A question. Given the following:
If (lazily) I invoke [pre]Foo[/pre] as follows:
Where the intention is to invoke the (float, float ,float) constructor, I actually get the (int, int, float) constructor. I understand that this is because, all other things being equal, converting int to float 3 times is a worse match than converting one int to float.
Can I use the 'explicit' keyword to prevent this conversion or is this a misuse/inappropriate approach? E.g. is this reasonable:
Yes, you can use the explicit keyword to prevent C++ from use a given constructor as a conversion constructor. That's precisely what it's meant for.
Typo:
> X value; // declare a variable named cValue (value) of type class X
Strange wording:
> If no promotion is found, C++ tries to find a match... (If no match is found by promotion could be a better standard :) )
Finally we're into OOP, right? Why isn't this exciting news mentioned anywhere, Alex? :D
Lesson updated. Not quite into OOP yet -- we get there in chapter 8, and all of these things come in handy. :)
Of course, I can't wait to get into Chapter 8! Well function overloading is one type of OOP's polymorphism, so we already get a peek in, right?
And I got the email notification about your reply. Thanks for implementing that. :)
re: "Since declaring a typedef does not introduce a new type, the following the two declarations of Print() are considered identical":
Being nitpicky remove the 'the' between "following the two declarations". Should read : 'the following two declarations of Print()...
Hope this is helpful.
Thanks, fixed.
If you could add tiny quizzes at the end of each lesson, that would greatly help with memory retention in my opinion. I'm not talking writing code, but I mean something like... asking the reader to get some paper and write the terms defined within that particular lesson from memory to see if they fully understand it. Yeah, people will cheat... but this is all self taught anyways. The only person losing in that case is them.
I can understand if you don't have time, but with tiny quizzes like this you could pretty much pass it off to anyone, since the only effort involved is writing a few sentences at the end of each, asking the reader what ambiguous matches are, or asking what an enum will be promoted to (for example).
This is something I'm working on as I rewrite each of the lessons.
I encountered a really strange thing! ~ Look at the following code:
When compiling this program compiler give me an error:
error: call of overloaded 'add(double, double)' is ambiguous
and when I changed 'float' to 'double' everything went OK!
Except both 'float' and 'double' aren't floating point number?
so why we have an compiler error here?
Your answer exists in this lesson:
remember that all literal floating point values are doubles unless they have the ‘f’ suffix
Yes, converting doubles to a float is considered a standard conversion, and (perhaps counter-intuitively) isn't considered a better match than converting doubles to integers.
i was expecting name mangling here
Name mangling is an interesting topic, but it's really an internal compiler detail, and one that doesn't need to be understood to use C++ effectively.
Hi Alex..
I have heard that "Avoid overloading function where one takes integral type argument and other takes pointer"
What could be the reason?
Thanx
I presume the reason is because the literal 0 could match as an integer or a null pointer value. In practice, I can't ever recall seeing this cause a real problem -- at most, it would trigger an ambiguous match that could be resolved via an explicit cast..
It's a pointer to a character. It could point to a single character, or to a build-in array of characters..
Giving related functions a different name is almost never going to be the best option. The benefits from consistency outweigh the few problems that overloading might cause.
As other posters have also noted, there are many cases where overloading is the only option (cases in which you can't just rename the function). Two of the most common are overloaded operators (defining your own operator+, for example) and class constructors, which must take the name of the class. We'll cover those in the next few chapters. This one just introduces the general concept.
"If there are multiple arguments, C++ applies the matching rules to each argument in turn. "
if this is true, then why does the compiler report an ambigous call for the following code:
This case is ambiguous because neither function is better than the other: each function has one matching parameter and one parameter that will only match via a standard conversion.
Can functions be overloaded on the basis of one of the arguments being a pointer?
e.g
The compiler doesnt throw an error , so I presume that it is valid, but what I dont understand is that essentially both the arguments are of type int.(even if one contains an address). Kindly explain
Yes, you can have overloaded functions where one takes an int and the other a pointer to an int. An int and a pointer to an int are distinct types, and thus the compiler can resolve which function to call.
This is a excellent side to get answer clear neat.......
Hello Alex -
Another small typo:
"Consequently, an ambiguous match needs to be disambiguated before your program will compiler."
Also, it would help if there were more quiz questions and exercises to reinforce the material - there is a lot here to remember..
i was just thinking how my learnig degraded since im a "learn by doing" type.
Because there aren't as many quiz questions in the later lessons? I'm working to fix that but I haven't gotten this far yet. :(
i lack any practice. i will soon forget how to write a helloworld.
i understand it does not happen in a snap and just letting you know how usefull(or not) your site is(the further i read. im at 7.13 atm).
lol you forget how to write a hello world program
no but seriously, you should be worried
I agree with Tom and ccplx, but it is great knowing you have this in mind! I will stay alert for future updates.
Good work!
PS: I also agree this material would make a successful book, specially given that the website is quite well known by now!
"In this case, because there is no Print(char), the char 'a'. :)
this is very good site for c++.
I agree
Me2
Me 3!
4 me too!
i agree.
i have to learn cpp in 10 days and this site has been really helpful
agreed
Me 5 (Agreed)
Name (required)
Website | https://www.learncpp.com/cpp-tutorial/76-function-overloading/comment-page-1/ | CC-MAIN-2019-13 | refinedweb | 3,033 | 63.7 |
I am trying to tackle a (probably fundamentally easy :) problem.
How do I fetch data from one function in class1 to another in class2?
I have my declarations.h file in which I declare my stuff:
class Player { private: int PosY; int PosX; public: Player() { PosY = 8; PosX = 13; }; int GetCoords(int i); Player* player; }; class DrawMap { private: public: DrawMap() { }; int showMap(); DrawMap* myMap; };In my functions.cpp file I include the declarations.h file, and define my class functions:
#include "declarations.h" int Player::GetCoords(int i) { if(i == 1) { return PosX; } else { return PosY; } } int DrawMap::showMap() { int PosX = player->GetCoords(1); <-- This is wrong, but why and how? return 0; }
The compiler says
error: ‘player’ was not declared in this scopewhich I totally understand. DrawMap:: doesn't know what player is. But how do I get it to know that? Shouldn't Player* player be public and accessible?
TIA,
A | http://forum.codecall.net/topic/63156-sharing-data-between-classes/ | crawl-003 | refinedweb | 153 | 67.04 |
Results 1 to 4 of 4
Thread: Empty function script
- Join Date
- Mar 2012
- 14
- Thanks
- 1
- Thanked 0 Times in 0 Posts
Empty function script
(I did indeed read the rules of this forum and while I'm not a "seasoned" javascript coder I thought that this script was simple enough to not matter. If that is unacceptable I apologize and you can feel free to delete this post)
Here is a function that allows you to use a PHP like 'empty()' function. It helped me a lot because it combines several other JS functions into one neat package.
Code:
function empty(obj) { if (typeof obj == 'undefined' || obj === null || obj === '') return true; if (typeof obj == 'number' && isNaN(obj)) return true; if (obj instanceof Date && isNaN(Number(obj))) return true; return false; }
You can be less repetitive like this:
Code:
function empty(obj){ return (typeof obj == 'undefined' || obj === null || obj === '') || (typeof obj == 'number' && isNaN(obj)) || (obj instanceof Date && isNaN(Number(obj))); }
Create, Share, and Debug HTML pages and snippets with a cool new web app I helped create: pagedemos.com
- Join Date
- Jun 2007
- Location
- Urbana
- 4,650
- Thanks
- 11
- Thanked 626 Times in 605 Posts
- Join Date
- May 2002
- Location
- Hayward, CA
- 1,486
- Thanks
- 1
- Thanked 24 Times in 22 Posts
I actually prefer Nile's version here, because it's more readable."The first step to confirming there is a bug in someone else's work is confirming there are no bugs in your own."
June 30, 2001
author, Verbosio prototype XML Editor
author, JavaScript Developer's Dictionary | http://www.codingforums.com/post-a-javascript/255186-empty-function-script.html?s=be1b48aa22ce50f5e0079eda54a9509b | CC-MAIN-2017-09 | refinedweb | 261 | 52.97 |
The Silverlight examples you've seen so far can be used in a basic, stand-alone website or in an ASP.NET web application. If you want to use them in an ASP.NET website, you simply need to add the Silverlight files to your website folder or web project. You copy the same files that you copy when deploying a Silverlight application_everything except the source code files.
If you want to use them in an ASP.NET website, you simply need to add the Silverlight files to your website folder or web project. You copy the same files that you copy when deploying a Silverlight application: everything except the source code files. Unfortunately, the ASP.NET development process and the Silverlight development process aren't yet integrated in Visual Studio. As a result, you'll need to compile your Silverlight project separately and copy the compiled assembly by hand. (You can't simply add a reference to the compiled assembly, because Visual Studio will place the referenced assembly in the Bin folder, so it's accessible to your ASP.NET server-side code, which isn't what you want. Instead, you need to place it in the ClientBin folder, which is where your HTML entry page expects to find it.)
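The hand-copy step described above can be sketched as a short script. The folder names here are hypothetical, and Unix-style commands are used purely for illustration; the point is simply that the compiled assembly lands in ClientBin (where the HTML entry page looks for it), not in the server-side Bin folder.

```shell
# Hypothetical layout: a separately compiled Silverlight project and an ASP.NET site.
SRC=./SilverlightProject      # output folder of the compiled Silverlight project
DEST=./AspNetSite             # root folder of the ASP.NET website

# Stand-ins for the real build output (for illustration only):
mkdir -p "$SRC/ClientBin" "$DEST"
printf '<Canvas/>' > "$SRC/Page.xaml"
printf 'assembly'  > "$SRC/ClientBin/SilverlightProject.dll"

# Copy the XAML page and the compiled assembly by hand.
# The assembly goes in ClientBin, NOT in Bin (Bin is reserved for
# server-side ASP.NET assemblies).
cp "$SRC/Page.xaml" "$DEST/"
mkdir -p "$DEST/ClientBin"
cp "$SRC/ClientBin/SilverlightProject.dll" "$DEST/ClientBin/"
```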
This approach allows you to place Silverlight and ASP.NET pages side-by-side on the same website, but they aren't integrated in any way. You can navigate from one page to another (for example, use a link to send a user from an ASP.NET web form to a Silverlight entry page), but there's no interaction between the server-side and client-side code. In many situations, this design is completely reasonable, because the Silverlight application represents a distinct "applet" that's available in your website. In other scenarios, you might want to share part of your data model, or integrate server-side processing and client-side processing as part of a single task.
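For example, an ordinary link is all it takes to send the user from a web form into the Silverlight part of the site. (The entry page name TestPage.html is hypothetical; use whatever HTML entry page your Silverlight project generates.)

```aspx
<asp:HyperLink ID="lnkApplet" runat="server"
    NavigateUrl="~/TestPage.html"
    Text="Launch the Silverlight applet" />
```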
ASP.NET Futures
The ASP.NET Futures release includes two ASP.NET web controls that render Silverlight content: Xaml and Media (which are described in the following sections). Both of these controls are placed in an assembly named Microsoft.Web.Preview.dll, which you can find in a directory with a name like
c:\Program Files\Microsoft ASP.NET\ASP.NET Futures July 2007\v1.2.61025\3.5.
In order to use the Xaml and Media controls, you need a reference to the Microsoft.Web.Preview.dll assembly. You also need to register a control tag prefix for the Microsoft.Web.Preview.UI.Controls namespace (which is where the Xaml control is located). Here's the Register directive that you can add to a web page (just after the Page directive) to use the familiar asp tag prefix with the new ASP.NET Futures controls:
<%@ Register Assembly="Microsoft.Web.Preview" Namespace="Microsoft.Web.Preview.UI.Controls" TagPrefix="asp" %>
Alternatively, you can register the control prefix in your web.config file so that it automatically applies to all pages:
<?xml version="1.0"?>
<configuration>
  ...
  <system.web>
    <pages>
      <controls>
        <add tagPrefix="asp" namespace="Microsoft.Web.Preview.UI.Controls"
             assembly="Microsoft.Web.Preview" />
        ...
      </controls>
    </pages>
    ...
  </system.web>
  ...
</configuration>
Rather than adding the assembly reference and editing the web.config file by hand, you can use a Visual Studio website template. Choose File -> New -> Web Site and select ASP.NET Futures Web Site.
When you take this approach, you'll end up with many more new settings in the web.config file, which are added to enable other ASP.NET Futures features that aren't related to Silverlight. Once you've finished these configuration steps, you're ready to place the Xaml and Media controls in a web page. You'll need to type the markup for these controls by hand, as they won't appear in the Toolbox. (You could add them to the Toolbox, but it's probably not worth the effort considering that there are likely to be newer builds of ASP.NET Futures in the near future.)
The Xaml Control
As you learned earlier, the HTML entry page creates a Silverlight content region using a <div> placeholder and a small snippet of JavaScript code. There's no reason you can't duplicate the same approach to place a Silverlight content region in an ASP.NET web form. However, there's a shortcut that you can use. Rather than creating the <div> tag and adding the JavaScript code by hand, you can use the Xaml control.
The Xaml control uses essentially the same technique as the HTML entry page you saw earlier. It renders a <div> tag and adds the JavaScript (using an instance of the ScriptManager control). The advantage is that you specify the XAML page you want to use (and configure a few additional details) using properties on the server side. That gives you a slightly simpler model to work with, and an easy way to vary these details dynamically (for example, choose a different XAML page based on server-side information, like the identity of the current user).
Here's the ASP.NET markup you'd use to show a XAML file named Page.xaml:
<form id="form1" runat="server">
    <asp:ScriptManager ... />
    <asp:Xaml ...></asp:Xaml>
</form>
You can set a number of properties on the Xaml control to configure how the Silverlight content region will be created, including Height, Width, MinimumSilverlightVersion, SilverlightBackColor, and EnableHtmlAccess. You can also attach the Xaml control to two JavaScript functions. Set OnClientXamlError with the name of a JavaScript function that will be triggered if the Silverlight XAML can't be loaded, and set OnClientXamlLoaded with the name of the JavaScript function that will be triggered if the Silverlight content region is created successfully.
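Putting those properties together, the tag might look something like this (a sketch only — the ID, XamlUrl value, and JavaScript function names here are assumptions for illustration; OnClientXamlError and OnClientXamlLoaded are the properties described above):

```
<asp:Xaml ID="Xaml1" runat="server"
    XamlUrl="~/Page.xaml"
    Height="300" Width="400"
    OnClientXamlError="xamlError"
    OnClientXamlLoaded="xamlLoaded" />
```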
You also need to add the XAML page to your website. Unfortunately, the current build of ASP.NET Futures doesn't include a XAML template for Silverlight 1.1 content. Instead, it includes a XAML template for Silverlight 1.0 content, complete with a JavaScript code-behind file. (This choice was made for compatibility with Silverlight 1.0, which doesn't support client-side C# and the scaled-down CLR.) The easiest way to use Silverlight 1.1 content with the Xaml control is to create your XAML pages in a dedicated Silverlight project. You can then copy the XAML files and the ClientBin folder to your ASP.NET website. This extra work isn't the result of a technical limitation -- it's simply a limitation of pre-release software. | http://www.drdobbs.com/web-development/silverlight-and-aspnet/206105457 | CC-MAIN-2014-35 | refinedweb | 1,045 | 57.37 |
Odoo Help
Custom module Onchange function is not working in version 7 and not shows warning message
from openerp.osv import fields, osv
from openerp.tools.translate import _


class hr_analytic_timesheet(osv.osv):
    _inherit = 'hr.analytic.timesheet'
    _columns = {
        'worked_hours': fields.float('Worked Hours'),
        'overtime_hours': fields.float('Over Time'),
        'total': fields.float('Total'),
    }

    def onchange_worked_hours(self, cr, uid, ids, worked_hours, context=None):
        for i in self.read(cr, uid, ids, ['worked_hours'], context=context):
            if i['worked_hours'] > 8.00:
                print("########################## inside worked hours")
                raise osv.except_osv(_('Warning'), _('Worked Hours Cannot be greater than 8'))
        return True
====================================
<record id="timesheet_tree_inherited_view" model="ir.ui.view">
    <field name="name">timesheet.tree.inherited</field>
    <field name="model">hr.analytic.timesheet</field>
    <field name="inherit_id" ref="hr_timesheet.hr_timesheet_line_tree"/>
    <field name="arch" type="xml">
        <xpath expr="//tree/field[@name='unit_amount']" position="before">
            <field name="worked_hours" on_change="onchange_worked_hours(worked_hours)"/>
            <field name="total"/>
        </xpath>
    </field>
</record>
@LIBU, are you trying to validate the new value of worked_hours or the saved value? Currently you are checking against the saved ['worked_hours'], if the record has been saved before (in the case of you editing the record). As @Ludo has mentioned, the record have not been saved if you expect this to be triggered when you create a new record.
Also, wouldn't it be better if you use the 'warning' return value key instead of raising exception?
Yes @Ivan, I want to validate the value when I create a new record; it is not saved data yet.
So you should be using worked_hours variable directly as @Ludo had explained. if worked_hours >8.00 .... Also, please consider using 'warning' return as opposed to raising errors.
Are you sure ids is not empty? Since you are on the on_change method, the object you are working on does not have to be saved yet.
Also, if you supply the worked hours (which can be a list of ids of the working_hours object), then you don't need a read on self, but rather on the object the worked_hours field is referencing.
Print/debug the values at the beginning of your method to be sure everything is supplied in the first place, and check which types they are.
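Putting the advice from the answers together, the onchange could look like this (a sketch; the warning text is assumed, and it is written as a plain function here so it stands alone — in the real module it would be a method of hr_analytic_timesheet):

```python
# Validate the value the client just typed (the `worked_hours` argument)
# instead of re-reading a record that may not be saved yet, and return a
# 'warning' dict rather than raising an exception.
def onchange_worked_hours(self, cr, uid, ids, worked_hours, context=None):
    if worked_hours > 8.00:
        return {
            'warning': {
                'title': 'Warning',
                'message': 'Worked Hours cannot be greater than 8',
            },
        }
    return {}
```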
Today we're launching Nhost CDN to make Nhost Storage blazing fast™.
Nhost CDN can serve files up to 104x faster than before so you can deliver an amazing experience for your users.
To achieve this incredible speed, we're using a global network of edge computers on a tier 1 transit network with only solid-state drive (SSD) powered servers. Nhost CDN is live for all projects on Nhost, starting today.
In this blog post we'll go through:
How to start using Nhost CDN?
What is a CDN?
How we built Nhost CDN and what challenges we faced.
Benefits you will notice with Nhost CDN.
Before we kick off, Nhost Storage is built on Hasura Storage which is impressively fast already. With today's launch of Nhost CDN, it's even faster!
Upgrade to the latest Nhost JavaScript SDK version:
npm install @nhost/nhost-js@latest
# or with yarn
yarn add @nhost/nhost-js@latest
If you're using any of the React, Next.js or Vue SDKs, make sure you update them to the latest version instead.
Then, initialize the Nhost Client using subdomain and region instead of backendUrl.
import { NhostClient } from '@nhost/nhost-js';

const nhost = new NhostClient({
  subdomain: '<your-subdomain>',
  region: '<your-region>',
});
You find the subdomain and region of your Nhost project in the Nhost dashboard.
Locally, you use subdomain: 'localhost'. Like this:
import { NhostClient } from '@nhost/nhost-js';

const nhost = new NhostClient({
  subdomain: 'localhost',
});
That's it. Everything else works as before. You can now enjoy extreme speed with the Nhost CDN serving your files.
Keep reading to learn what a CDN is, what technical challenges we faced, and the incredible performance improvements Nhost CDN brings to your users.
Before we start diving into technical details and fancy numbers let's briefly talk about what CDNs are and why they are important.
CDN stands for "Content Delivery Network", roughly speaking they are highly distributed caches with lots of bandwidth and located very close to where users live. They can help online services and applications serve content to users by storing copies of it where they are most needed so users don't need to reach the origin. For instance, if your origin is in Frankfurt but users are coming from India or Singapore, the CDN can store copies of your content in caches in those locations and save users the trouble of having to reach Frankfurt for that content. If done properly this has many benefits both for users and for the people responsible for the online services:
From a user perspective: Users will experience less latency because they don't need to reach all the way to Frankfurt to get the content. Instead, they can fetch the content from the local cache in their region. This is even more important in regions where connectivity may not be as good and where packet losses or bottlenecks between service providers are common.
From an application developer perspective: Each request served from a cache is a request that didn't need to reach your origin. This will lower your infrastructure costs as you have to serve fewer requests and, thus, lower your CPU, RAM, and network usage.
Before dropping this topic let's see a quick example, imagine the following scenario:
In the example above, Pratim and Nestor are clients while Nuno is the CDN. In a faraway land, we have our origin, Johan.
When Pratim first asks Nuno about the meaning of life he doesn't know it so he asks Johan about it. When Johan responds Nuno stores a copy of the response and sends it to Pratim.
Later, when Nestor asks Nuno about the meaning of life he already has a copy of the response so Nuno can send it to Nestor right away, reducing latency and saving Johan the trouble of having to respond to the same query again.
This is great but it comes with some challenges. As a continuation, we will talk about some of those, how we are taking care of them for you in our integration with Nhost Storage, and some performance metrics you may see thanks to this integration.
As we mentioned previously, CDNs will store copies of your origin responses and serve them directly to users when available. However, things change, so you may need to tell the CDN that the copy of a response is no longer up to date and they need to remove it from their caches. This process is called “cache invalidation” or “purging”.
In the case of Nhost Storage cache-invalidation is handled automatically for you. Every time a file is deleted or changed we instruct the CDN to invalidate the cache for that particular object.
However, this isn't as easy as it sounds as Nhost Storage not only serves static files, it can also manipulate images (i.e. generate thumbnails from an image) and/or generate presigned-urls. This means that for a given file in Nhost Storage there may be multiple versions of the same object that are cached in the CDN. If you don't invalidate them all you may still serve files that were deleted or, worse, the wrong version of an object.
To solve this issue we attach to each response a special header, Surrogate-Key, with the fileID of the object being served. This means that it doesn't matter if you are serving the original image, a thumbnail, or a presigned URL of it: they all share the same Surrogate-Key. When Nhost Storage needs to invalidate a file, all it needs to do is instruct the CDN to invalidate all copies of responses with that Surrogate-Key.
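The bookkeeping behind purge-by-key can be sketched with a toy in-memory cache (illustrative only; the real work happens inside the CDN, not in your app):

```javascript
// Toy cache illustrating purge-by-surrogate-key.
class SurrogateKeyCache {
  constructor() {
    this.entries = new Map(); // url -> { key, body }
    this.byKey = new Map();   // surrogate key -> Set of urls
  }

  store(url, key, body) {
    this.entries.set(url, { key, body });
    if (!this.byKey.has(key)) this.byKey.set(key, new Set());
    this.byKey.get(key).add(url);
  }

  get(url) {
    const entry = this.entries.get(url);
    return entry ? entry.body : null;
  }

  purgeKey(key) {
    // One purge call invalidates every variant (original, thumbnail,
    // presigned URL) that shares the file's surrogate key.
    for (const url of this.byKey.get(key) || []) this.entries.delete(url);
    this.byKey.delete(key);
  }
}

const fileId = "1ff8ef8d-3240-4cf3-805f-fc3d61d190b2";
const cache = new SurrogateKeyCache();
cache.store(`/v1/files/${fileId}`, fileId, "original image");
cache.store(`/v1/files/${fileId}?w=100`, fileId, "thumbnail");
cache.purgeKey(fileId);
```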
At this point, you may be considering the security implications of this. What happens if a file is private? Does this mean the CDN will serve the stored copy of it to anyone that requests it or does it mean this is only useful for public files? Well, I am glad you asked. The short answer is that you don't have to worry, you can still benefit from the CDN while keeping your files private.
The longer answer is as follows:
In the CDN we flag cached content that required some form of authorization header.
When a user requests content that was flagged as private we perform a conditional request from the CDN to the origin. The conditional request will authenticate the request and return a 304 if it succeeds.
The CDN will only serve the cached object to the user if the conditional request succeeded.
Even though you still need a round trip to the origin to perform the authentication of the user, you can benefit from the CDN as your request to the origin is very lightweight (just a few bytes with headers going back and forth), and the file will still be served from the CDN cache. You can see below an example of two users requesting the same file:
The cache is empty, CDN requests the file and stores it, total request time from the origin perspective is 5.15s:
time="2022-06-16T12:16:28Z" level=info client_ip=10.128.78.244 errors="[]" latency_time=5.157454279s method=GET status_code=206 url=/v1/files/1ff8ef8d-3240-4cf3-805f-fc3d61d190b2
Cache has the object already cached but flagged as private so it makes a conditional request to authenticate the user. Total request time from the origin perspective is 218.28ms (after the 304 the actual file is served directly from the CDN without origin interaction):
time="2022-06-16T12:16:41Z" level=info client_ip=10.128.78.244 errors="[]" latency_time=218.283899ms method=GET status_code=304 url=/v1/files/1ff8ef8d-3240-4cf3-805f-fc3d61d190b2
Serving large files poses two interesting challenges:
How do you cache large files efficiently?
How do you cache partial content if a connection drops?
These two challenges are related and have a common solution. For instance, imagine you have a 1GB file in your storage and a user starts downloading it, however, the connection drops when the user has downloaded 750MB. What happens when the next user arrives? Do you have to start over? If the file is downloaded fully, do you keep the entire file in the cache?
To support these use cases Nhost Storage supports the Range header. This header allows you to tell the origin you want to retrieve only a chunk of the file. For instance, by setting the header Range: bytes=0-1023 you'd be instructing Nhost Storage to send you only the first 1024 bytes of a file.
In the CDN we leverage this feature to download large files in chunks of 10MB. This way if a connection drops we can store these chunks and serve them later on when a user requests the same file.
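The chunking idea is easy to sketch. The helper below (illustrative only, not Nhost's actual code) computes the Range headers needed to fetch an object of a given size in fixed-size chunks:

```javascript
// Compute the Range headers needed to fetch `totalBytes` in fixed-size
// chunks (the CDN uses 10 MB chunks; tiny numbers are used here for
// readability). Range ends are inclusive.
function chunkRanges(totalBytes, chunkSize) {
  const ranges = [];
  for (let start = 0; start < totalBytes; start += chunkSize) {
    const end = Math.min(start + chunkSize, totalBytes) - 1;
    ranges.push(`bytes=${start}-${end}`);
  }
  return ranges;
}

console.log(chunkRanges(25, 10)); // [ 'bytes=0-9', 'bytes=10-19', 'bytes=20-24' ]
```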
Another optimization we can do in the CDN platform is to tweak the TCP parameters. For instance, we can increase the congestion window, which is particularly useful when the latency is high. Thanks to this we can improve download times even when the file isn't cached already.
We mentioned that caches are located close to users, which means that the cache that a user in Cape Town would utilize isn't the same as a user in Paris would. A direct implication of this is that a user in Paris can't benefit from content cached in another location.
This is true up to a certain extent. We utilize a technique called “shielding” which allows us to use a location close to the origin as a sort of “global” cache. With shielding, a cache that doesn't have a copy of the file that is needed will query the shield location instead of the origin. This way you can still reduce the load of your origin and improve your users' experience.
To showcase our CDN integration we are going to perform three simple tests:
We are going to download a public image (~150kB)
We are going to download a private image (~150kB)
We are going to download a private large file (45MB)
To make things more interesting we are going to deploy a Nhost app in Singapore while the client is going to be located in Stockholm, Sweden, adding to a latency of ~200ms.
As you can see in the graph below, even when the content isn't cached (a miss), we experience a significant improvement in download times: downloading the images is done in less than half the time, and downloading the large file takes 30% less time. This is thanks to the TCP tweaks we can apply to the CDN platform.
Improvements are more dramatic when the object is already cached, then we see we can get the public image in just 21ms compared to the 2.19s that took to get the file directly from Nhost Storage. Downloading the private image goes down from 2.07s to 403ms, which makes sense as the latency is ~200ms and we need to go back and forth to the origin to ask it to authenticate the user and get back the response before we can serve the object.
No, we didn't build our own CDN. We are leveraging Fastly's expertise for that, so you get to benefit from their large infrastructure while we get to enjoy their high degree of flexibility to tailor the service to your needs.
Integrating a CDN with a service like Nhost Storage isn't an easy task but by doing so we have increased all metrics allowing you to serve content faster and giving your users a better experience when using your services no matter where your users are.
Learning Elm syntax through TDD
I’ve been wanting to get a good grasp at Elm for quite some time now. Elm is a functional language that compiles to JavaScript. It gained a lot of attraction lately as functional (reactive) programming is increasingly used in front-end development. If you’re a JavaScript developer, I’m quite confident you’ve already heard about one, some or all of these libraries: React, Redux, Cycle.js, RxJS, MobX, Ramda, Immutable.js… All of them embrace functional programming patterns and principles.
Elm is said to be easy to learn and use. Well, this is what we’re going to see by implementing String Calculator Kata with Test Driven Development. Rules are described here:. For this exercise, we’ll only focus on the language syntax and patterns.
Getting the environment up and running
Installing Elm
I’ve followed the official guide and chose the npm package to get Elm on my machine.
npm install -g elm will give you access to the following command line tools (of course, Node and npm are required to use this method):
elm-repl: play with Elm expression
elm-reactor: get a project going quickly
elm-make: compile Elm code directly
elm-package: download packages
It’s no more complicated than that. Only
elm-repl and
elm-package will be used in the context of this kata exercise.
Testing Elm code
As we’ll explore Elm with TDD, we need to find how to quickly run unit tests to get immediate feedback. Bonus point if they can automatically re-execute themselves as we write and update code (watch mode).
After a quick search, elm-test appears to be a solid candidate. It provides an API (
describe,
test,
Expect,
fuzz) and the necessary tooling to run tests locally in a terminal. Install it with the following command:
npm install -g elm-test.
Init the project
Initiating a project is really easy. All we need to do is create an empty directory,
cd into it and run the following command:
elm-test init. Here’s what we will get:
elm-package.json describes the project's information and lists the required dependencies. These are downloaded in the
elm-stuff directory.
Example.elm shows how to create a test suite. First test is not implemented and if we try to run the suite with
elm-test command, terminal output message will be quite informative.
Implementing String Calculator Kata
A code kata is an exercise in programming which helps a programmer hone their skills through practice and repetition. The term was probably first coined by Dave Thomas, co-author of the book The Pragmatic Programmer,[1] in a bow to the Japanese concept of kata in the martial arts. — Wikipedia
In Test Driven Development, we move forward with baby steps. First step is to write a failing test. Then, make this test pass with a small amount of production code and then refactor. Finally, repeat the loop until all requirements are met. Shall we get started?
First thought: syntax is weird
In the String Calculator Kata, first step is to create a simple
add method that takes 0, 1, or 2 numbers (as a string) and returns their sum. For example,
add("") should return
0,
add("1")
1 and
add("1,2")
3.
Let’s write the first failing test:
elm-test --watch is the way to run this test suite in watch mode. It means every time we write or update code, a test run will automatically be triggered. Pretty neat!
First things first. Ok,
import statements are quite familiar, but, what is this lack of parentheses and curly brackets? It appears Elm uses whitespace and indentations instead of those (although parentheses are necessary in specific cases).
Here,
test is a function that takes a first argument as a string and a second argument as a function that evaluates a single
Expectation (actually,
test is a function that returns a function. Indeed, functions are curried by default with Elm).
Next, what the heck is this:
<|\() ->? Well,
<| is used to reduce parentheses usage. For example,
leftAligned (monospace (fromString "code")) can also be written that way:
leftAligned <| monospace <| fromString "code". Isn’t it better (I guess…)?
Finally,
\() -> is used to implement anonymous function. The empty parentheses are what’s called a unit type. A type that can only ever have a single value (which is
()).
\_ -> can also be used where
_ is a placeholder for any value.
Now, let’s write the code to make this test pass:
Done. Baby step! Let’s go a bit further, shall we? No need to refactor yet, here’s a second failing test where we will try to get the sum of 2 numbers:
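Such a test might read (sketch):

```elm
test "two comma-separated numbers return their sum" <|
    \() ->
        Expect.equal (StringCalc.add "1,2") 3
```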
Using List and Result
I didn’t get the solution for this test right away. Checking if the string has a
, and then split is quite obvious but I had to understand how
List works and how to convert
String to
Int with Elm.
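A version of add matching the explanation that follows might look like this (a sketch; Elm 0.18, where String.toInt returns a Result):

```elm
add : String -> Int
add numbers =
    if String.contains "," numbers then
        String.split "," numbers
            |> List.map (\number -> Result.withDefault 0 (String.toInt number))
            |> List.sum
    else
        Result.withDefault 0 (String.toInt numbers)
```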
Here, we use the
map method to convert our
String number to an
Int.
String has a built-in function (
toInt) to do this task, but the thing is that it returns an object of type
Result, because this kind of task can fail. And Elm treats errors as data (no runtime errors, looking at you JavaScript!). So,
Result is an object that’s either
Ok (when task succeeded) or
Err (when task failed). Then, all we need is to return the actual value when
Ok or a default value (
0) when
Err. Exactly what
Result.withDefault function does.
Finally, we use the
List.sum method to get the sum of our
Int numbers. Notice the use of
|> to reduce parentheses usages. The output of the left part is the input of the right part. I personally find it very convenient and logical after a bit of practice.
Refactoring
What we can do here to refactor is create a function
splitWithSeparator whose responsibility is to check if a string has a given separator and then split this string into a list.
In the case where the string of numbers doesn’t contain any separator (it means there’s only one number), we just append the number to a list, so that in both cases,
splitWithSeparator will return a
List (in Elm, a function has to return the same type in every branch of code).
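Sketched, the extracted function could be:

```elm
splitWithSeparator : String -> String -> List String
splitWithSeparator separator numbers =
    if String.contains separator numbers then
        String.split separator numbers
    else
        [ numbers ]
```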
Handling new lines between numbers
Next step of the kata is to allow
add method to handle new line character
\n between numbers, so that for example,
add("1\n2,3") will return
6.
Let’s add another new failing test for this case:
My first attempt to make this test pass was to split the string with
\n as separator and then split again each chunk of string with
, as separator, which gives us a list of lists of strings. That’s not very elegant (actually not at all) but it works, thanks to
List.concatMap that maps its given function onto a list and flatten the resulting lists.
Regex to the rescue
Although the solution with
concatMap works, we can do better. How about using a simple regex to split our string of numbers wherever there’s either
\n or
, as separator? Let’s refactor the
splitWithSeparator method:
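Refactored with a regex, it could shrink to a one-liner (sketch; requires import Regex):

```elm
splitWithSeparator : String -> List String
splitWithSeparator numbers =
    Regex.split Regex.All (Regex.regex "\\n|,") numbers
```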
Regex.split splits a string into a list using a given
Regex.regex separator. It also needs to know how many matches we want to make. Here we need to find all the characters that match the following pattern:
\\n|,. So, we use the
Regex.HowMany data structure value:
Regex.All. Our method is greatly simplified, isn’t it?
Handling custom separator
I think this step is the hardest but hang in there! So now we need to handle custom separator in our
add method. A custom separator should be specified at the beginning of the string of numbers, the following way:
"//[separator]\n[numbers]". For example:
add("//;\n1;2;3") should return
6. Of course, this should be optional and all previous scenarios should still be supported. Let’s repeat the loop and add the failing test:
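A sketch of that failing test:

```elm
test "a custom separator can be used" <|
    \() ->
        Expect.equal (StringCalc.add "//;\n1;2;3") 6
```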
Here, what comes to mind first is to check if the string contains a custom separator, extract it if present and then repeat the split and convert steps. How does that sound? Ok, get ready, we’re about to get our hands dirty!
What do we have here. 3 new functions:
startsWithSeparator,
extractSeparator and
removeSeparatorPattern. Also, there’s this
let ... in thing we’ll explain a bit further.
startsWithSeparator is quite easy, we’ll use
Regex again but this time with the
contains method:
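Something like (sketch; at this stage the separator pattern is still passed in as an argument):

```elm
startsWithSeparator : String -> String -> Bool
startsWithSeparator pattern numbers =
    Regex.contains (Regex.regex pattern) numbers
```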
extractSeparator is a bit touchy. Notice the new regex we’re using:
^//(.+)\\n (see above). Parentheses are here to capture any character(s) that will be between
// and
\n. This is our custom separator.
Regex.AtMost 1 means we’re looking for 1 match only.
Regex.find returns a list of
Match objects. Each
Match contains a
submatches field, which is a list of subpattern(s), the pattern(s) surrounded by parentheses. It can be empty, as not all regexes have subpatterns. By using
List.concatMap, the lists of submatches are flattened into a unique list. Then, we can extract our custom separator string with
List.head, which gives us the first item (in our case, there will always be one item).
The
unwrap function is used to extract the current value of a
Maybe wrapper. Here, it’s kind of unsafe because if there’s no value (
Nothing), the program will crash. However, in our case, it should never happen. We need to apply this util function twice because our custom separator is actually wrapped twice in a
Maybe object. Indeed,
submatches is a list of strings enclosed in
Maybe objects, and
List.head returns the head of the list as a
Maybe object as well.
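Putting the pieces described above together, the two functions might look like this (sketch; Debug.crash is the Elm 0.18 way to fail hard on an impossible case):

```elm
unwrap : Maybe a -> a
unwrap maybe =
    case maybe of
        Just value ->
            value

        Nothing ->
            Debug.crash "Tried to unwrap Nothing"


extractSeparator : String -> String -> String
extractSeparator pattern numbers =
    Regex.find (Regex.AtMost 1) (Regex.regex pattern) numbers
        |> List.concatMap .submatches
        |> List.head
        |> unwrap
        |> unwrap
```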
Let’s take a step back to
let ... in. If you’re familiar with JavaScript (ES6+), you know
let can be used to declare variables. Well, it’s the same with Elm except that the declared variables have to be used within a
in block. It might seem a bit weird at first but it’s a good way to enclose variables, as they’re not available outside their block.
Finally,
removeSeparatorPattern will return a new string, without the custom separator at the beginning. It’s quite straight forward:
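For instance (sketch):

```elm
removeSeparatorPattern : String -> String -> String
removeSeparatorPattern pattern numbers =
    Regex.replace (Regex.AtMost 1) (Regex.regex pattern) (\_ -> "") numbers
```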
Refactoring, again
Now (if you’re still here…), all of our tests should pass but the
add method got a little bit complicated. What we could do is create a function to extract the list of string numbers (with or without custom separator) and let the
add method only calculate the sum of these numbers.
As you may notice, we remove the second argument of
startsWithSeparator function. It’s actually not necessary as we’re always looking for the same
customSeparator pattern.
extractSeparator is renamed with
extractCustomSeparator for more clarity, and its second argument removed as well (we use the hard coded variable
customSeparator within the function). Also, we add a
splitWithString function as we don’t need regex to split a string of numbers with a given string separator.
As a result, here’s the simplified
add method:
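Along these lines (a sketch; the helper name follows the extractNumberList mentioned later in the article):

```elm
add : String -> Int
add numbers =
    extractNumberList numbers
        |> List.map (\number -> Result.withDefault 0 (String.toInt number))
        |> List.sum
```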
Negatives are not allowed
This is the last rule for this kata. Calling
add with a negative number should throw the following exception: “Negatives are not allowed”, followed by the negative number (a list if there are multiple negatives). Actually, there are 4 more steps we could add, but that's plenty enough for this article…
Before writing the corresponding failing test, let’s look at how to handle errors / exceptions with Elm. By searching through the official documentation, we can find an interesting object:
Result.
Result is either Ok, meaning the computation succeeded, or it is an Err, meaning that there was some failure.
By running this unit test, that obviously fails, we get the following error message:
I like how informative it is. Indeed, the
add method currently returns an
Int object whereas it should be a
Result (wrapping a string when
Err and an
Int when
Ok).
First, we need to refactor all of our previous tests so that the assertions look like this:
Expect.equal (StringCalc.add "1,2") (Result.Ok 3). Now, all tests should fail, but that’s ok. All we need to do to make them pass is change the return type and value of the
add method:
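For example (sketch), wrapping the happy path in Ok is enough at this stage:

```elm
add : String -> Result String Int
add numbers =
    extractNumberList numbers
        |> List.map (\number -> Result.withDefault 0 (String.toInt number))
        |> List.sum
        |> Ok
```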
All right, we’re back on track! Only the last one with the negative number should now fail.
Let’s write a function to check if there’s a negative number in the list. It’s quite simple thanks to
List API (one line’s enough):
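One plausible one-liner (sketch):

```elm
containsNegative : List Int -> Bool
containsNegative numbers =
    List.any (\number -> number < 0) numbers
```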
Now, let’s update the
add method with this new piece of code:
As you may have noticed,
containsNegative expects a list of
Int numbers. Thus, we update
extractNumbers to return a list of
Int (instead of
String) numbers .
Test should be green! Ok, it will obviously fail if the negative number is different than -1 but, remember, baby steps!
Refactoring again² (lightly)
By updating
extractNumberList we duplicated some code:
|> List.map toInt. Indeed, it is necessary in both branches (in case numbers list start with custom separator or not). One simple way to remove this code smell is to extract it into a function that we can use within the
add method:
Handling negatives list
Here’s the last failing test we’ll add for this kata:
2 steps are required to make it pass: extract the negative numbers and display them in the error message. Let’s repeat the loop and implement these functions:
Then,
add method should look like this:
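A sketch of the final shape (helper names assumed from the article):

```elm
add : String -> Result String Int
add numbers =
    let
        numberList =
            extractNumberList numbers
    in
        if containsNegative numberList then
            Err ("Negatives are not allowed " ++ toString (List.filter (\n -> n < 0) numberList))
        else
            Ok (List.sum numberList)
```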
Finally, all our tests should pass. Isn’t it satisfying?! 😎
All the code is available on my ⭐️ Github ⭐️. I tried to separate each step with a different commit, which should give you a good overview of progress.
Conclusion
I think TDD (if you’re experienced enough with it) is a good way to learn a new language. Although, it might be a good idea to start with a code kata you’re familiar with. I didn’t know String Calculator and I struggled quite a bit at first, having to understand Elm stuffs and figuring out how to fulfil the kata requirements at the same time.
Coming from JavaScript, Elm syntax seems a bit weird and the way to do things is quite disturbing. But that feeling doesn’t last long once you start playing with the core libraries and their API. Documentation is really nice and detailed with lots of examples. Also, error messages are very informative. They’ll show you just what you need to fix your problem.
Please, feel free to add feedback, I’m sure there are better ways (more Elm-ish) to handle this coding exercise. I’d love to hear more about it from Elm developers. | https://medium.com/@npayot/learning-elm-syntax-through-tdd-89a523240fe1 | CC-MAIN-2018-34 | refinedweb | 2,487 | 72.97 |
Author: Don Porter <[email protected]> State: Draft Type: Project Tcl-Version: 9.0 Vote: Pending Created: 10-May-2019 Keywords: Tcl, traces Post-History:
Abstract
This TIP proposes the elimination of the TCL_INTERP_DESTROYED flag.
History
Variable traces arrived no later than release Tcl 5.0 (1991). Early on, no later than release Tcl 6.1 (1991), a flag value TCL_INTERP_DESTROYED was defined to pass to each trace-handling Tcl_VarTraceProc routine to signal that deletion of the interpreter was underway. This was a sign that script evaluation should not be attempted with the interpreter.
The routine Tcl_InterpDeleted() arrived in release Tcl 7.5 (1996). It is a supported, public mechanism to determine whether the deletion of any interpreter has begun, exactly the same status the TCL_INTERP_DESTROYED flag is intended to signal. At that time, there was no longer any need for the flag to achieve the needed function. Any Tcl_VarTraceProc routine may call Tcl_InterpDeleted() to test any interp to see if script evaluation should not be attempted.
Namespaces were added in release Tcl 8.0 (1997). With their addition, the setting of TCL_INTERP_DESTROYED became buggy during namespace destructions, sometimes set when it should not be, sometimes cleared when it should not be. A segfault arising from this was reported in Tk ticket 605121 (2002), leading to a bug fix.
A new facility for tracing operations on commands with Tcl_TraceCommand() and friends arrived in release Tcl 8.4 (2002). These routines were documented to also use the TCL_INTERP_DESTROYED flag. That documentation was false. The false claim was noted in ticket 2039178 (2008) and corrected in 2014, replacing claims about TCL_INTERP_DESTROYED with advice on the need to make calls to Tcl_InterpDeleted().
A fix for Tcl tickets 1337229 and 1338280 (2005) included a refactoring of namespace variable destruction into a new internal routine TclDeleteNamespaceVars(). The new routine was created improperly, failing to pass TCL_INTERP_DESTROYED when it should. This defect was eventually noticed as a memory leak reported in ticket 1706140 (2007).
At that point Tcl internals were all converted to stop making use of the TCL_INTERP_DESTROYED flag, and an RFE note to TIP its elimination was recorded in ticket 1714505. Here is that TIP.
Rationale
The support of the TCL_INTERP_DESTROYED flag was buggy over longer periods of time than it was correct. It is unnecessary. We are better off without it, converting all users to use of Tcl_InterpDeleted().
Specification
In Tcl 9, remove the flag value TCL_INTERP_DESTROYED from code and documentation.
In Tcl 8.7, mark the use of the flag as deprecated in code and documentation.
Compatibility
Existing Tcl_VarTraceProc that use the TCL_INTERP_DESTROYED flag will need to be converted to use Tcl_InterpDeleted() to work with Tcl 9. Tk 8.7 has now been so converted.
Reference Implementation
See branches tip-543 and tip-543-9.
This document has been placed in the public domain. | https://core.tcl-lang.org/tips/doc/trunk/tip/543.md | CC-MAIN-2019-35 | refinedweb | 471 | 57.06 |
16 February 2012 10:10 [Source: ICIS news]
SINGAPORE (ICIS)--
Earlier in January, the company said that it plans to restart the plant by the middle of this year as it will have enough feedstock to run the facility that had been idled since 2004.
Methanex has signed a 10-year deal with New Zealand-based Todd Energy for the supply of feedstock natural gas, Methanex said in a statement on 17 January.
The plant has an actual nameplate capacity of 850,000 tonnes/year but can only produce 650,000 tonnes/year by the time it restarts.
Out of the three methanol facilities that Methanex | http://www.icis.com/Articles/2012/02/16/9532711/canadas-methanex-to-restart-second-new-zealand-plant-by-1.html | CC-MAIN-2014-15 | refinedweb | 108 | 68.3 |
scipy.optimize.newton
scipy.optimize.newton(func, x0, fprime=None, args=(), tol=1.48e-08, maxiter=50, fprime2=None, x1=None, rtol=0.0, full_output=False, disp=True)[source]
Find a zero of a real or complex function using the Newton-Raphson (or secant or Halley's) method. Find a zero of the function func given a nearby starting point x0. The Newton-Raphson method is used if the derivative fprime of func is provided, otherwise the secant method is used. If the second order derivative fprime2 of func is also provided, then Halley's method is used.
If x0 is a sequence, then newton returns an array, and func must be vectorized and return a sequence or array of the same shape as its first argument.
Notes
The convergence rate of the Newton-Raphson method is quadratic, the Halley method is cubic, and the secant method is sub-quadratic. This means that if the function is well behaved, the actual error in the estimated zero after the n-th iteration is approximately the square (cube for Halley) of the error after the (n-1)-th step.
When newton is used with arrays, it is best suited for the following types of problems:
- The initial guesses, x0, are all relatively the same distance from the roots.
- Some or all of the extra arguments, args, are also arrays so that a class of similar problems can be solved together.
- The size of the initial guesses, x0, is larger than O(100) elements. Otherwise, a naive loop may perform as well or better than a vector.
Examples
>>> from scipy import optimize >>> import matplotlib.pyplot as plt
>>> def f(x): ... return (x**3 - 1) # only one real root at x = 1
fprime is not provided, use the secant method:
>>> root = optimize.newton(f, 1.5) >>> root 1.0000000000000016 >>> root = optimize.newton(f, 1.5, fprime2=lambda x: 6 * x) >>> root 1.0000000000000016
Only fprime is provided, use the Newton-Raphson method:
>>> root = optimize.newton(f, 1.5, fprime=lambda x: 3 * x**2) >>> root 1.0
Both fprime2 and fprime are provided, use Halley's method:
>>> root = optimize.newton(f, 1.5, fprime=lambda x: 3 * x**2, ... fprime2=lambda x: 6 * x) >>> root 1.0
When we want to find zeros for a set of related starting values and/or function parameters, we can provide both of those as an array of inputs:
>>> f = lambda x, a: x**3 - a >>> fder = lambda x, a: 3 * x**2 >>> x = np.random.randn(100) >>> a = np.arange(-50, 50) >>> vec_res = optimize.newton(f, x, fprime=fder, args=(a, ))
The above is the equivalent of solving for each value in (x, a) separately in a for-loop, just faster:
>>> loop_res = [optimize.newton(f, x0, fprime=fder, args=(a0,)) ... for x0, a0 in zip(x, a)] >>> np.allclose(vec_res, loop_res) True
Plot the results found for all values of a:
>>> analytical_result = np.sign(a) * np.abs(a)**(1/3) >>> fig = plt.figure() >>> ax = fig.add_subplot(111) >>> ax.plot(a, analytical_result, 'o') >>> ax.plot(a, vec_res, '.') >>> ax.set_xlabel('$a$') >>> ax.set_ylabel('$x$ where $f(x, a)=0$') >>> plt.show() | https://docs.scipy.org/doc/scipy-1.2.0/reference/generated/scipy.optimize.newton.html | CC-MAIN-2022-21 | refinedweb | 472 | 67.86 |
Intent:
To cache data objects (be they value objects or entity beans) that are frequently read, yet represent mutable data (read-mostly). This problem is not so difficult when only one server (JVM) is in use, but is much more complicated when applied to a cluster of servers. The Active Clustered Expiry Cache can solve this problem.
This pattern is somewhat similar to the Seppuku pattern published by Dimitri Rakitine. I'd consider Seppuku to be a more specific (and very cool) variation of the ACE Cache pattern, involving Read-Only Entity Beans and Weblogic Server.
This pattern is J2EE compliant and vendor neutral (although the Seppuku pattern is a neat one for Weblogic users).
Motivation:
Typically, in a data-driven application, some data is read more frequently than other data. Although most DBMSs will cache queries for such data in memory, enabling fast retrieval, it is often desired to have something even faster: a cache in memory in the server (app or even web server). This is easy enough in the case of a single node server (no cluster). The application is designed so that reading this data goes through a singleton cache interface.
This is also easy to accomplish in a cluster, if the data is truly read-only. Then each node will have an instance (singleton) of the cache, populating it as necessary. No expiration (a term used throughout this text in lieu of "invalidation") is necessary, since the data never changes. In addition, a cache such as this can have a "timeout", so that items will only be stale for a maximum time.
Where this becomes difficult is when a clustered environment is necessary (high load, failover) and the data is mutable. In a nutshell, changes to data must be reflected in ALL caches, so that stale reads do not occur. Ensuring that all nodes are notified synchronously also has performance problems, both with network traffic and with contention between notifiers (publishers) and caches being notified (subscribers). However, it is still very desirable to have asynchronous expiration across the cluster, so that all caches will be synced in a "timely" manner. Hence the "smart" cache; it is aware of its peers, and keeps in sync.
This restriction means that the cached data must be considered read-only. Because the caches are expired asynchronously, there is a small interval of time when the data is stale. This is fine for data that is only to be read for output; after all, the request for such data could have come a split second earlier. But, if cached data is read, then the application decides (using a cached read which is (slightly) stale) to change the data, then we are violating ACIDity. The goal here is NOT to build an ACID, cluster-wide, in-memory data store; the DB and application server vendors are counted on to provide that kind of functionality.
Applicability:
Use the ACE Cache when
1) Data is "read-mostly"
2) Application server tier is clustered.
3) Data is read by many simultaneous requests
4) Data is not usually changed (at runtime) through other means (e.g. direct SQL by an admin, other kinds of applications)
Participants:
DataObject: The data object itself
This could be an entity bean or separate value object.
DataObjectKey: A key object, satisfying equals() and hashCode(), to uniquely retrieve the DataObject
This could be an EJB Primary Key, or just any key class
Cache: Used to store the data objects, mapping DataObjectKey to DataObjects. Best performance if a singleton, and must be synchronized appropriately.
This could be backed by a Map, or possibly an application server's entity bean cache (e.g. WL Read-Only beans).
For some stripped down interfaces, see the end of the text. I might expose more implementation code later on, but it uses many of my libraries of utilities, and dragging all of that in here would make this post quite a novel!
Behavior:
The Cache has a reference to the DAO (Data Access Object) logic, whether embedded within an EJB or not. If the cache is queried and the DataObject does not exist, then the DataObject is created (and cached). This means that different instances of the cache (in different processes) will be populated differently (this can be exploited, especially when dealing with user-specific data).
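The miss-then-populate read path described here can be sketched as follows. Everything in this sketch (the class name, the DataLoader hook standing in for the DAO call) is illustrative rather than part of the pattern's required interfaces; the real contract is the ICache interface listed at the end of the pattern.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal cache-aside sketch: on a miss, load through the DAO and cache the result.
class SimpleSmartCache {
    private final Map<Object, Object> entries = new HashMap<Object, Object>();
    private final DataLoader loader;

    public SimpleSmartCache(DataLoader loader) {
        this.loader = loader;
    }

    // Synchronized so concurrent requests see a consistent map; a production
    // version would use finer-grained locking, as discussed in the thread below.
    public synchronized Object get(Object key) {
        Object value = entries.get(key);
        if (value == null) {            // miss: populate from the DAO
            value = loader.load(key);
            entries.put(key, value);
        }
        return value;
    }

    // Called locally on update, and by the expiry listener for remote expiries.
    public synchronized void expire(Object key) {
        entries.remove(key);
    }

    // Stand-in for whatever DAO/JDBC lookup the application actually uses.
    public interface DataLoader {
        Object load(Object key);
    }
}
```

A second `get` for the same key returns the cached instance without touching the loader until `expire` is called for that key.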
These DataObjects are read-only, so if they are entity beans, they should be read-only beans. If they are value objects, they need to have a flag set so that they cannot be "saved". What is more, since these DataObject instances are shared between all callers of the Cache, the DataObjects need to be *immutable*, either by only implementing a read-only interface, or by throwing exceptions (usually RuntimeExceptions) when mutating methods are called.
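One way to enforce the second option (refusing mutators at runtime) is to have the value object check a flag that the cache sets when the instance is added. A minimal sketch, using a made-up EventData class rather than anything from the pattern's interfaces:

```java
// Sketch of a value object that becomes immutable once it is placed in the cache.
class EventData {
    private String name;
    private boolean immutable;

    public EventData(String name) {
        this.name = name;
    }

    // Called by the cache when the instance is added and becomes shared.
    public void flagImmutable() {
        immutable = true;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        if (immutable) {
            // Shared cached instances must never be mutated in place.
            throw new IllegalStateException("cached instance is read-only");
        }
        this.name = name;
    }
}
```

Callers that need to modify the data would clone the cached instance first, mutate the copy, and "save" it, which in turn triggers the expiry flow.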
The expiration logic is also tied to the Cache object. The Cache subscribes to a JMS Topic (or is referenced by a MessageDrivenBean). Then when a value object is "saved" or an entity bean's ejbStore() method is called, the Cache is expired for a particular DataObjectKey. The cache then publishes (to all caches but itself) the DataObjectKey. The listeners (onMessage()) then expire that key (and value) from that cache. So, asynchronously, all Caches across the cluster are brought into sync.
If using WL read-only entity beans, the link between the Cache and the expiry logic already exists (see Seppuku). If using Value Objects, one way to integrate the expiration logic is to have each Value Object keep a reference to the Cache. Then, when the Value Object is "saved", the (local) Cache is expired, and then the remote Caches are expired asynchronously (and actively).
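The active expiry flow can be simulated in a single JVM by standing in a plain listener list for the JMS Topic. This sketch only demonstrates the ordering (expire locally, publish to the peers, each peer expires without re-publishing); the class names are invented for illustration, and a real deployment would replace ExpiryTopic with an actual JMS Topic and onExpiryMessage with an onMessage() handler.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// In-JVM stand-in for the JMS topic: each node's cache subscribes, and an
// update on one node publishes the expired key to every *other* subscriber.
class ExpiryTopic {
    private final List<NodeCache> subscribers = new ArrayList<NodeCache>();

    public void subscribe(NodeCache cache) {
        subscribers.add(cache);
    }

    public void publish(NodeCache source, Object key) {
        for (NodeCache c : subscribers) {
            if (c != source) {          // a real setup would use a NoLocal subscription
                c.onExpiryMessage(key);
            }
        }
    }
}

class NodeCache {
    private final Map<Object, Object> entries = new HashMap<Object, Object>();
    private final ExpiryTopic topic;

    public NodeCache(ExpiryTopic topic) {
        this.topic = topic;
        topic.subscribe(this);
    }

    public Object get(Object key) { return entries.get(key); }
    public void put(Object key, Object value) { entries.put(key, value); }

    // Local update path: expire here, then tell the rest of the "cluster".
    public void update(Object key) {
        entries.remove(key);
        topic.publish(this, key);
    }

    // Remote expiry path (the onMessage() of the pattern): expire only, no re-publish.
    public void onExpiryMessage(Object key) {
        entries.remove(key);
    }
}
```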
Consequences:
1. The DB will be queried much less often by read requests.
2. There will be much less object creation in the application servers.
3. There may be small latencies in read-only data propagating across the cluster.
4. The cache may take up significant heap space in the application server.
Implementation Issues Beyond The Scope of the Pattern (I can comment on these separately):
1. Managing cache size (e.g. LRU scheme, different caches for different DataObjects or not)
2. Trading off granular caching with coarse data retrieval (hard or soft references between value objects?). Some REALLY cool stuff here.
3. Proper synchronization of the Cache and the DAO within
4. Strategies/frameworks for making DataObjects immutable when needed, and for integrating this pattern with data access control (permissions)
Some interfaces:
*******************************************************************
import java.util.*;
/**
* Cache interface, to hide our many implementations
*/
public interface ICache
    extends java.io.Serializable
{
    // Hook to tell cache what to do if it does not contain requested item
    public Object miss(Object aKey)
        throws CacheException;

    public void flush();
    public void expire(Object aKey);
    public void hit(Object aKey);

    public Object get(Object aKey)
        throws CacheException;

    public void add(Object aKey, Object aValue);
    public void addAll(Map aMap);
    public boolean contains(Object aKey);
}
*******************************************************************
import java.util.*;
/**
* extends the Cache interface to provide method for bulk access, hitting, missing, and expiry.
*/
public interface IBulkCache
    extends ICache
{
    public Map getAll(Collection aColl)
        throws CacheException;
    public Map missAll(Collection aColl)
        throws CacheException;

    public void hitAll(Collection aColl);
    public void expireAll(Collection aColl);
}
*******************************************************************
/** Just a tag interface, implementations might expose ids, perhaps.
* equals() and hashCode(),and also compareTo() implementations important
*/
public interface IValueObjectKey
extends java.io.Serializable,
Cloneable,
Comparable
{
}
*******************************************************************
//this one has dependencies on lots of other stuff, so don't try to compile it
public interface IValueObject
    extends java.io.Serializable,
            Cloneable
{
    public void flagReadOnly();
    public boolean isReadOnly();

    public void flagImmutable()
        throws ValueObjectException;
    public boolean isImmutable();
    public boolean isImmutableCapable();

    public boolean isCacheable();
    public boolean isValueObjectCloneable();

    public boolean isSanitized()
        throws SanityCheckException;
    public void sanitize()
        throws SanityCheckException;

    public IValueObjectKey getValueObjectKey();

    public Object clone();
    public IValueObject cloneDeep()
        throws CloneNotSupportedException;

    public void save()
        throws SaveException;

    //this other part deals with the Value Object graph, important for getting expiry right
    public boolean isValueObjectReferencesEnabled();
    public Collection getValueObjectReferences()
        throws ValueObjectReferencesNotEnabledException; //value object references and value object reference lists
    public Collection getReferencedValueObjects()
        throws ValueObjectReferencesNotEnabledException; //flat collection of the value objects
    public Collection getValueObjectsDeep()
        throws ValueObjectReferencesNotEnabledException; // all contained value objects, including this one
    public Map getValueObjectMapDeep()
        throws ValueObjectReferencesNotEnabledException; // all contained value objects, including this one
}
Discussions
A.C.E. Smart Cache: Speeding Up Data Access (47 messages)
- Posted by: Lawrence Bruhmuller
- Posted on: December 05 2001 20:26 EST
A.C.E. Smart Cache: Speeding Up Data Access
Hi Lawrence,
- Posted by: ranjith kumar
- Posted on: December 06 2001 00:31 EST
- in response to Lawrence Bruhmuller
I read your article. Its informative and very well written, even a fresh guy (new to design patterns) like me was able to understand most of it.
But I am not able to grasp concept behind the terms "value objects" and "read-only entity beans". If you can just give a brief on these concepts, it will be helpful.
And also I was thinking of the need for so many methods in IValueObject?
Regards
Ranjith
A.C.E. Smart Cache: Speeding Up Data Access
Hi Ranjith,
- Posted by: Lawrence Bruhmuller
- Posted on: December 06 2001 01:34 EST
- in response to ranjith kumar
I'm glad you found the article worthwhile. Here is just a bit of background on the two terms you mentioned:
Read-Only Entity Beans: Some application servers, like Weblogic, support labeling an EB as read-only, by which is meant that ejbLoad() will only be called once (or only once every specified time interval) and that ejbStore() is never called. Check out the WL 6.1 docs.
Value Objects: A J2EE design pattern where requests are not handed references to EBs, but rather data container objects (which could implement the same business interface as the EB) in order to achieve more efficient data retrieval. Extensions of this pattern allow for these containers to be modified and then "saved", which can map back down to EB calls.
As for the number of methods in IValueObject: This could be whittled down quite a bit to get an implementation up and running. I just decided to include some of the methods I use to keep track of a graph of DataObjects that is retrieved (and maybe modified and saved), and also to govern immutability, cloneability, and cacheability.
Hope this helps,
Lawrence
A.C.E. Smart Cache: Speeding Up Data Access
Hi Lawrence,
- Posted by: ranjith kumar
- Posted on: December 06 2001 02:08 EST
- in response to Lawrence Bruhmuller
Thank you very much for the answers.
It will be greatly appreciated if you can give some advice for beginners like me on what it takes to become masters in this field.
Regards
Ranjith
A.C.E. Smart Cache: Speeding Up Data Access
Hi Lawrence,
- Posted by: Benedict Chng
- Posted on: December 06 2001 04:17 EST
- in response to ranjith kumar
I've tried to build the same kind of cluster-wide/aware cache system using exactly the same approach you have described in your pattern. I'm not sure whether this problem is specific to WebLogic 6.1, but the JMS Topic in WL6.1 is not capable of failing over, hence a single point of failure here.
A JMS topic is hosted on an instance of a JMS server running on one instance of WL in one of the clustered machines. If that server happens to go down, none of the other servers in the cluster are able to publish any updates to the other still-surviving servers.
A.C.E. Smart Cache: Speeding Up Data Access
I think Lawrence wanted to avoid implementation specifics to make his pattern portable. WebLogic 6.x JMS implementation supports multicast (or you can use JavaGroups instead), so there is no single point of failure.
- Posted by: Dimitri Rakitine
- Posted on: December 06 2001 04:30 EST
- in response to Benedict Chng
Anyway, WebLogic 6.1 supports this kind of non-transactional distributed caching already - see readMostlyImproved example at Seppuku description page.
--
Dimitri
A.C.E. Smart Cache: Speeding Up Data Access
Hi Benedict,
- Posted by: Lawrence Bruhmuller
- Posted on: December 06 2001 12:28 EST
- in response to Benedict Chng
You pose an interesting question. Dimitri is correct, I wanted to try and keep implementation specifics such as application server vendor out of the pattern. However, the problem you mention is still a problem.
One solution is this: You can set up polling threads in each server that check the JMS server for a heartbeat, and then flush the entire cache if the server is thought to be down. Of course, this is going against the J2EE spec, so pick your poison.
One comment on multicast, which is that I don't think multicast JMS and JavaGroup messages are considered to be "guaranteed delivery". These cache notification messages MUST be delivered or else. But maybe these mechanisms are "reliable enough" for it to be a better solution.
One last thing about the JMS server. I ran the code from which this pattern evolved on WL, and I found better JMS performance when running JMS on a standalone server, as opposed to having one EJB server handle the JMS (could be on the same box, just different process).
Lawrence
A.C.E. Smart Cache: Speeding Up Data Access
Great pattern, best thing I've seen here in a while :)
- Posted by: Gal Binyamini
- Posted on: December 08 2001 21:37 EST
- in response to Lawrence Bruhmuller
I've used something similar in a non-EJB environment for light-weight web-server clusters running with plain JSP/Servlets and got excellent performance. Anyway, the point is I didn't rely on JMS (too heavy) so I did multicast myself, and I'd like to comment about the level of delivery assurance.
The reason the delivery is not assured is that UDP doesn't guarantee packet delivery. However, if your entire cluster is hosted within the same LAN (usually it's in the same building, and even in the same room) the number of hops is small and the delivery is almost completely guaranteed. If you really need assurance you can send a packet two or three times (with a retry counter in it). These patterns are so light-weight, I didn't see any performance difference. Jini discovery protocols use this technique, if you want a source code sample.
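As a sketch of the "retry counter in the packet" idea, the following encodes an expiry key plus a counter into a byte payload that could be carried in a UDP datagram, so the same logical message can be sent a few times and duplicates recognized on the receiving side. The wire format here is invented for illustration, and the actual MulticastSocket send/receive is deliberately left out.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Encode/decode for a best-effort UDP expiry datagram: a retry counter plus
// the cache key. A receiver can ignore a (key, send-id) pair it has already
// processed; this format is made up for the sketch.
class ExpiryPacket {
    public final int retry;     // 0, 1, 2 ... for the repeated sends
    public final String key;    // the DataObjectKey, serialized as a string here

    public ExpiryPacket(int retry, String key) {
        this.retry = retry;
        this.key = key;
    }

    public byte[] encode() throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeInt(retry);
        out.writeUTF(key);
        out.flush();
        return bytes.toByteArray();
    }

    public static ExpiryPacket decode(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        return new ExpiryPacket(in.readInt(), in.readUTF());
    }
}
```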
Another point worth noting is that in some ways it's better to do the invalidation only when the updating transaction commits. This can be done using a JTA Synchronization interface. In a web-server you don't have a transaction manager and I wrote a simple one myself, but in the app-server scenario it shouldn't be a big problem.
Gal
A.C.E. Smart Cache: Speeding Up Data Access
Gal,
- Posted by: Lawrence Bruhmuller
- Posted on: December 10 2001 19:18 EST
- in response to Gal Binyamini
You make some very interesting points.
About multicast delivery, that is a very good point that UDP over the LAN should be almost 100% reliable. I might try that out and see how much more load can be supported before the cache "falls behind" sending expiry notifications, as Gene pointed out above.
We tried the JavaGroup implementation at one point, had some problems with it, and then never came back to it. Might be good to revisit that.
About expiring upon commit, I agree that it is a better policy (otherwise, rolled-back TXs will expire needlessly, although this would have a variable impact depending on the application).
I actually accomplished that behavior using a technique that is implementation-specific.
The application in which I used this pattern used BMP entity beans for all updates, and stand-alone DAO-JDBC for pure reads. I embedded the expiration logic in the EB such that only when ejbStore() was called did the expiration happen. Furthermore, delay-updates-until-end-of-txn was enabled, so that this was only called upon commit.
I thought about doing Bean Managed Transactions and tying in to JTA, but never got around to it. Can you elaborate on what was involved with your implementation?
- Lawrence
A.C.E. Smart Cache: Speeding Up Data Access
Hi Lawrence.
- Posted by: Gal Binyamini
- Posted on: December 10 2001 21:47 EST
- in response to Lawrence Bruhmuller
I am not familiar with JavaGroup. Does it implement JMS on top of UDP multicast? If so, could you please direct me to some information?
About the txn point:
You can't get the notifications you need using JTA interfaces portably with EJB (up to and including 2.0). EJB only gives you access to UserTransaction, while what you actually need is Transaction. Once you get a hold of a Transaction, your cache can register a Synchronization with it to be notified of commit. You can get a Transaction with vendor specific interfaces. I know for sure WebLogic provides one (although I can't remember the exact method... should be easy to find in the docs). As I mentioned in my post, I didn't work in an EJB environment anyway, so getting the Transaction didn't place any portability constraints on my code.
However, if I were to implement the same kind of functionality with EJB, I would probably use a different solution which is more portable. Session beans (stateful) can receive notifications of transaction events by implementing SessionSynchronization. If you use a facade session bean to wrap your beans, you can make it receive the txn events and pass them on to the cache. The downside of this is that the facade will have to be stateful.
If you don't want to make the facade stateful there is one more alternative I can think of. If you know for sure that your facade isn't going to receive a client transaction context (usually the case), you can make it invoke EJBContext.getRollbackOnly at the end of each method. Then it can react appropriately based on the return value. However, here there is a tricky part: if your facade throws a system exception (that is, the exception is thrown somewhere along the thread, not including calls to other EBs) the transaction will be rolled back, but only after your method finishes (i.e, when the container gets the exception). So you have to pay special attention to that, and catch every possible RuntimeException and RemoteException... that can get quite cumbersome, I imagine.
As a final note, this pattern is very efficient and recommended, but it is not portable as per EJB spec (1.1, 2.0). It isn't portable because of many problems, but the main problem, which I think is unsolvable, is that you can't make different clients use the same entity bean instance without reloading in between. You just can't, and the vendor can't either because there are no "read-only" transactions in current EJB. I can highlight some specific sections in the spec if that's useful to someone. However, this is one of the few cases where I would go ahead and completely break the spec, because the performance advantages you get with special-purpose read-mostly caches are just too big to let go (IMHO). Also, most vendors will in fact provide these read-only transactions soon (many already do), for exactly the same reason. There is also no "legal" way to hold a singleton, so even if you completely give up entity beans and use DAO and value objects, you still run into portability problems... However, the chance that these problems will actually break something in your code is very small (IMO).
Gal
More comments
Gal,
- Posted by: Lawrence Bruhmuller
- Posted on: December 12 2001 13:28 EST
- in response to Gal Binyamini
Thanks for the comments about the possible TX pitfalls. Very interesting stuff. You've obviously used this pattern in an application that involved enough rollbacks; my application rarely encounters these, hence my easy but <100% solution ... :)
As a more general comment, I think the complexity of implementing this pattern (as we have shown in this thread) shows that it shouldn't really be a developer-pattern after all, but a vendor-pattern. I wonder how the rumored transactional cache in WL 7.0 works (how it is implemented and how it performs).
I totally agree with you on the benefits of a read-mostly cache, especially when most of the time these objects are just rendered in a JSP and that's it. We got big, big performance improvements. Of course, as I noted in the pattern, frequency of cache hit and by what kind of request can make this invaluable or just a small bonus, depending.
One comment about the read-only/read-write interface problem. The way I implemented it was to keep the value object read-only interface as the return type of the method. Writing callers can cast if they want to. Of course, this isn't ideal, but I think it is better than implementing more accessors (with different names than the read-only methods) to return the read-write interfaces.
This will lead to more RuntimeExceptions (including ClassCastExceptions due to programmer error), but those kind of exceptions are OK in development I think.
Check out JavaGroups at SourceForge. It is based on IP Multicast. We got it up and running with test cases, but had trouble sending messages with our own classes (NoClassDefFoundError, although you'd think it would just treat the message as a byte payload).
I guess I am responding to all 3 of your posts at once! Too bad TSS doesn't offer a Thread.join() ;) Floyd?
Lawrence
A.C.E. Smart Cache: Speeding Up Data Access
Hi, I've been trying to come up to speed on caching within an EJB app server and I must say it is quite frustrating, even in the case of caching 'mostly-read' data within a single app server.
- Posted by: Mark Pollack
- Posted on: February 12 2002 13:37 EST
- in response to Gal Binyamini
I hope someone can reconcile these two statements:
From the description of the pattern: "This is easy enough in the case of a single node server (no cluster). The application is designed so that reading this data goes through a singleton cache interface ..."
and from the upcoming book "EJB Design Patterns" (Oct 3rd - EJB Strategies, Tips and Idioms), under the section
"Using Java Singletons is ok - if used correctly"
"There is nothing wrong with using a Singleton class, as long as developers DO NOT use it in read-write fashion, in which case EJB threads calling in may need to be blocked. It is this type of behaviour that the spec is trying to protect against. Using a singleton for read-only behaviour, or any type of service that can allow EJB’s to access it independently of one another is fine."
Since you would occasionally write to the cache to update it that goes against the advice of how to use singletons. Probably in practice updating with the singleton works out just fine, since you are getting such an important functionality and shouldn't worry about sticking to the spec 100%.
A.C.E. Smart Cache: Speeding Up Data Access
Hi Mark,
- Posted by: Cameron Purdy
- Posted on: February 12 2002 19:17 EST
- in response to Mark Pollack
Mark: "Hi, I've been trying to come up to speed on caching within an EJB app server and I must say it is quite frustrating, even in the case of caching 'mostly-read' data within a single app server."
That is the simple case, and it isn't too bad as long as you stick with a good pattern and don't try to get too fancy. An MRU cache extending Hashtable (etc.) isn't too hard to put together in an afternoon. The real question is using it in conjunction with things like EJBs, particularly when there are transactions involved. Pardon me for advertising, but that's why we're adding JTA support into our caching products.
Mark: "I hope someone can reconcile these two statements: ... 'This is easy enough in the case of a single node server (no cluster). The application is designed so that reading this data goes through a singleton cache interface ...' and ... 'There is nothing wrong with using a Singleton class, as long as developers DO NOT use it in read-write fashion, in which case EJB threads calling in may need to be blocked.' ... Since you would occasionally write to the cache to update it that goes against the advice of how to use singletons."
It is not an issue. Some of the warnings and proscriptions in the EJB spec are a bit moribund (?) or minimally anal. IMHO That's because the EJB spec comes from the world of "the container knows best and the developer should be a moron". (Not that IBM had anything to do with it ;-)
If you look at it the positive way, you could say "The container manages the threads and all shared objects for you so you should not have to worry about synchronizing." That's probably a better way to look at it, until you cheat and use a singleton, in which case the limitations in the spec _have to_ go out the window.
Our local hashed caching implementation uses minimal synchronization (no sync required on reads for example) and has notifications and automatic cached entry expiry. I've pasted in the JavaDoc below to give you some ideas. One of the things that I strongly believe in is using existing interfaces when (a) they are accepted and (b) they are applicable. As a result, we use java.util.Map as the basis for all of our caching implementations.
Our clustered caching implementation can use our local hashed caching implementation (or any other java.util.Map) in our upcoming 1.1 release, so the API remains unchanged (java.util.Map), the notifications remain the same, but it works transparently whether local or clustered.
One other thing to look at is the caching JSR from Oracle. IMHO it is hopelessly complex but it's just my opinion and no one seems to agree with me on this one ;-) ... here's the link:.
Peace,
Cameron Purdy
Tangosol, Inc.
--
A generic cache manager.
The implementation is thread safe and uses a combination of Most Recently Used (MRU) and Most Frequently Used (MFU) caching strategies.
The cache is size-limited, which means that once it reaches its maximum size ("high-water mark") it prunes itself (to its "low-water mark"). The cache high- and low-water marks are measured in terms of "units", and each cached item by default uses one unit.
All of the cache constructors, except for the default constructor, require the maximum number of units to be passed in. To change the number of units that each cache entry uses, either set the Units property of the cache entry, or extend the Cache implementation so that the inner Entry class calculates its own unit size.
To determine the current, high-water and low-water sizes of the cache, use the cache object's Units, HighUnits and LowUnits properties. The HighUnits and LowUnits properties can be changed, even after the cache is in use. To specify the LowUnits value as a percentage when constructing the cache, use the extended constructor taking the percentage-prune-level.
Each cached entry expires after one hour by default. To alter this behavior, use a constructor that takes the expiry-millis; for example, an expiry-millis value of 10000 will expire entries after 10 seconds. The ExpiryDelay property can also be set once the cache is in use, but it will not affect the expiry of previously cached items.
The cache can optionally be flushed on a periodic basis by setting the FlushDelay property or scheduling a specific flush time by setting the FlushTime property.
Cache hit statistics can be obtained from the CacheHits, CacheMisses and HitProbability read-only properties. The statistics can be reset by invoking resetHitStatistics. The statistics are automatically reset when the cache is cleared (the clear method).
The Cache implements the ObservableMap interface, meaning it provides event notifications to any interested listener for each insert, update and delete, including those that occur when the cache is pruned or entries are automatically expired.
This implementation is designed to support extension through inheritance. When overriding the inner Entry class, the Cache.instantiateEntry factory method must be overridden to instantiate the correct Entry sub-class. To override the one-unit-per-entry default behavior, extend the inner Entry class and override the calculateUnits method.
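The size-limited pruning described above can be approximated in a few lines. This is not Tangosol's implementation (which combines MRU and MFU and tracks configurable units); it is just a minimal LRU-only sketch using java.util.LinkedHashMap's access-order mode.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal size-limited LRU cache: once the map grows past highUnits
// entries, the eldest (least recently touched) entry is pruned.
class LruCache extends LinkedHashMap<Object, Object> {
    private final int highUnits;

    LruCache(int highUnits) {
        super(16, 0.75f, true);   // accessOrder = true gives LRU iteration order
        this.highUnits = highUnits;
    }

    protected boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
        return size() > highUnits;
    }
}
```

A real product would prune down to a separate low-water mark and weigh entries in units; here each entry is one unit and pruning evicts exactly one entry per insert.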
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Hello Mark,
- Posted by: Lawrence Bruhmuller
- Posted on: February 19 2002 03:01 EST
- in response to Mark Pollack
I mostly agree with Cameron's sentiment regarding these techniques and the spec. I think the point should also be made, however, that this is one reason why this kind of functionality needs to be at a lower level (i.e. container), so as not to violate any spec.
- Lawrence
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Lawrence,
- Posted by: Cameron Purdy
- Posted on: March 04 2002 23:00 EST
- in response to Lawrence Bruhmuller
FYI - we released our 1.1 version today with the size-limited cache features:
Lawrence: "Is an LRU or LFU limited cache the best way to go for your usage pattern?"
We went with a combination ... both are generally "good" but if you have to choose one or the other you will occasionally end up with very non-optimal caches. OTOH if you combine both, you end up with very few cases that are "less good", and very few "poor" cases, except when the cache is just too darned small. ;-)
Lawrence: "As far as access control, integrating an access control check with the API is not really that interesting, but I found it to be convenient. The only issue arises if permissions are cached with a data object, which makes it not sharable across many reading clients."
That's a nice idea. Did you base it on a pattern that you found in a core Java class?
Peace,
Cameron Purdy
Tangosol, Inc.
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Hi Cameron,
- Posted by: Lawrence Bruhmuller
- Posted on: March 28 2002 14:17 EST
- in response to Cameron Purdy
No, it was a home-brewed idea. Basically, you supplied a request object to the API, which not only contained the right key(s) for the data, but also your security principal, essentially. It also contained all the access parameters, like read-only/read-write, whether you wanted it to be immutable (and therefore didn't have to clone it), etc. etc.
Internally we'd do the permission checks (we had our own data-driven thing, cool, but very complicated) and then, if possible, return the shared immutable instance. But if you were getting your own instance to mutate, then your associated permissions were stored in that instance, and the framework would stop you from doing things you weren't allowed to do at the business method level, without waiting for a "save".
So we did *all* of our data-related permission checks at this layer. Worked pretty well to keep code clean.
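A hypothetical sketch of that request-object idea follows; all the names are invented, and the real framework's data-driven permission system was far richer than this boolean check.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical: the caller bundles the key, its principal and the access
// mode into one request; the manager checks permission and, for read-only
// access, hands back the shared immutable instance.
class DataRequest {
    final Object key;
    final String principal;   // who is asking
    final boolean readWrite;  // read-only callers can all share one instance

    DataRequest(Object key, String principal, boolean readWrite) {
        this.key = key;
        this.principal = principal;
        this.readWrite = readWrite;
    }
}

class DataManager {
    private final Map shared  = new HashMap();  // key -> shared immutable instance
    private final Map writers = new HashMap();  // principal -> write permission

    void store(Object key, Object value) { shared.put(key, value); }
    void grantWrite(String principal)    { writers.put(principal, Boolean.TRUE); }

    Object fetch(DataRequest req) {
        if (req.readWrite && !Boolean.TRUE.equals(writers.get(req.principal)))
            throw new SecurityException(req.principal + " may not write " + req.key);
        return shared.get(req.key);   // all permission checks live in this layer
    }
}
```

The point of the pattern is the single choke point: every data access carries its principal, so no business method needs its own permission check.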
- Lawrence
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Hi Benedict
- Posted by: Hrishikesh Rane
- Posted on: December 18 2001 12:27 EST
- in response to Benedict Chng
WLS 6.1 does not support failover of a JMS server. The JMS server is a non-clusterable object. The MDBs can be clustered and pinned to the one JMS server in the cluster hosting the required destination.
In WLS 6.0 there was a tweak in which you could target the JMS server to the cluster, but it was bound to only one of the instances, and in the event of that instance failing another JMS instance came up on one of the WLS servers in the cluster.
So it didn't provide load balancing then either, but at least we could force some failover support.
But now in WLS 6.1 there is no load balancing and no failover.
The "load balancing" described using JMS connection pools is a misnomer; at best it gives cluster-wide accessibility to the JMS server through transparent re-routing of the client call.
I hope this explains the problem.
Hrishi
ReadOnly Exceptions[ Go to top ]
Hi, I have to ask, is it the proper approach to throw an exception when some code tries to modify the read-only data?
- Posted by: mike danese
- Posted on: December 10 2001 10:27 EST
- in response to Lawrence Bruhmuller
I think that the approach would be simply to block such code's execution and return an error message. Please comment as this addresses basic exception handling.
ReadOnly Exceptions[ Go to top ]
Hi Mike,
- Posted by: Lawrence Bruhmuller
- Posted on: December 10 2001 19:34 EST
- in response to mike danese
The best way to implement these immutable objects is to not have any public methods that change the state of the object (i.e. mutate or reassign any member variables), or return a reference to any mutable member variable.
But if you have a business object interface that has both accessing and mutating methods, and business logic that deals with this interface, then you are kind of stuck. The only way to get by without rewriting all of the client code is to throw RuntimeExceptions (hopefully some child class of RuntimeException) out of the mutating methods. After all, this is what RuntimeExceptions are for: cases where no recovery at all is possible, because it is *programmer error* (trying to write to a read-only object). Of course, the caveat to all of this is that you only catch these errors at runtime, even if it is development runtime.
If you are starting from scratch or are willing to change some client code, then I recommend the following approach:
Divide the business interface into a read-only interface and a read-write interface (extending the read-only interface). Have your implementation only return objects that implement the interface required.
If the implementation of the object is too tied up to separate, then you can always have a dynamic proxy front the implementation class, exposing either the read-write interface, or the read-only interface, depending on what you want. This way, even if a client programmer tries to downcast to the read-write interface, it won't work.
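A minimal sketch of the dynamic-proxy approach (the interface and class names here are illustrative, not from the pattern):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface AccountView { int getBalance(); }                   // read-only view
interface AccountEdit extends AccountView { void setBalance(int b); }

class AccountImpl implements AccountEdit {
    private int balance;
    public int getBalance()       { return balance; }
    public void setBalance(int b) { balance = b; }
}

class ReadOnlyProxy {
    // Wrap impl so that only AccountView is exposed; the proxy class never
    // implements AccountEdit, so even an explicit downcast fails.
    static AccountView readOnly(final AccountEdit impl) {
        return (AccountView) Proxy.newProxyInstance(
            AccountView.class.getClassLoader(),
            new Class<?>[] { AccountView.class },
            new InvocationHandler() {
                public Object invoke(Object p, Method m, Object[] args) throws Throwable {
                    return m.invoke(impl, args);   // delegate reads to the real object
                }
            });
    }
}
```

Because the generated proxy class implements only the read-only interface, a client that tries `(AccountEdit) view` gets a ClassCastException at runtime rather than silently mutating shared state.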
Hope this helps,
Lawrence
ReadOnly Exceptions[ Go to top ]
Hi again.
- Posted by: Gal Binyamini
- Posted on: December 10 2001 22:04 EST
- in response to Lawrence Bruhmuller
Just read another one of Lawrence's post...
I have a note about the strategy of using a read-only
interface and extending it into a read/write interface. A problem arises when you need to return another value object from one of the getter methods. In the read-only interface, you should return the read-only interface of the object you are returning. But in the read/write interface, you would like to return the read/write version. This makes perfect sense as far as OO concepts go, because you are actually returning a sub-type of the read interface.
However, current versions of the Java language do not allow this kind of variant return type. See bug id 4144488 in Sun's Bug Database (actually this is an RFE, not a bug).
So currently you can either declare that you return the read interface, and then cast it to the read/write one (bad practice, cumbersome) or try some other approaches. I can describe my own approach if anyone is interested. It is partially driven by my own cache design, which I think is somewhat different from Lawrence's.
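For illustration, the RFE cited above (4144488) was eventually implemented in J2SE 5.0 as covariant return types, which makes the clean version of this split compile. The names below are my own invention:

```java
interface AddressView { String getCity(); }                    // read-only value object
interface AddressEdit extends AddressView { void setCity(String c); }

interface CustomerView {
    AddressView getAddress();          // read side returns the read-only type
}
interface CustomerEdit extends CustomerView {
    AddressEdit getAddress();          // covariant override: narrower return type
}

class Address implements AddressEdit {
    private String city;
    public String getCity()       { return city; }
    public void setCity(String c) { city = c; }
}

class Customer implements CustomerEdit {
    private final Address addr = new Address();
    public AddressEdit getAddress() { return addr; }  // satisfies both interfaces
}
```

On the Java versions current when this thread was written, the `CustomerEdit.getAddress()` override would not compile, which is exactly the problem described above.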
Gal
ReadOnly Exceptions[ Go to top ]
Heh... Sorry about the fragmentation...
- Posted by: Gal Binyamini
- Posted on: December 10 2001 22:14 EST
- in response to Gal Binyamini
One last note:
"Furthermore, delay-updates-until-end-of-txn was enabled, so that this was only called upon commit."
This is not accurate. The container potentially calls ejbStore() on many different EBs before committing. One of the subsequent ejbStore() calls may fail (for instance, if the DB gives some error, even unexpectedly, like "out of segment space" or "can't serialize transaction"). Such an error will abort the transaction.
Another, perhaps bigger problem occurs when the invalidation messages get to the target very quickly. With UDP multicast it is nearly real-time, so messages get around almost instantly. If you notify the other caches before you commit, they may refresh their cached copy before you committed and see old values. If your data is truly read-mostly, the caches may not get a chance to refresh themselves for a long while.
Gal
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Hey Lawrence,
- Posted by: Gene Chuang
- Posted on: December 06 2001 12:54 EST
- in response to Lawrence Bruhmuller
Great pattern, but does it work in practice and production??? :-)
Actually this pattern was deployed on Kiko 4? months ago. If you look at the architectural diagram, page 1:
This cache spans both the web and app cluster (used by both) and drastically eliminates hits to the database tier. It's this very pattern that gave Kiko a huge performance boost 4 months ago.
Dimitri, Lawrence and I were well aware of your Seppuku ReadMostly pattern and in fact studied it as a reference. However we wanted a more generic cache that's independent of Weblogic, hence the motivation for the ACE Cache.
Of course, there are some limitations to this cache that can be improved upon. One is partial failover and full failover detection and handling. Another is load handling under extreme volume, which of course is dependent on the messaging provider. But I see a lot of immediate benefits to this pattern, and its annoyances can be worked out in the long run.
Gene
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Hello there guys,
- Posted by: Georgiy Gorshkov
- Posted on: December 07 2001 15:42 EST
- in response to Lawrence Bruhmuller
Sorry to bug you here.
Please help me. I am implementing Ed Roman's Mastering
EJB( the first book ) on-line business system on
WebLogic6.1. All his beans are working fine ( entity
CMP, stateful wrappers, stateless ones ) through
clients ( simple java test classes ), but as soon as I
start using exactly the same code in Servlets which are deployed on the same WebLogic server I always get UnexpectedException failing to invoke on entity or
stateful wrapper beans' methods.
//-------------------------------------------
Example:
double price = product.getBasePrice(); // works fine
quote.addProduct( product );
price = product.getBasePrice();        // UnexpectedException: failed to invoke
                                       // method getBasePrice (unknown source)
//--------------------------------------------
It looks like "product", which is the CMP entity bean's
remote interface, gets detached from its bean's instance immediately after I send it to "quote", which is a stateful wrapper bean holding a Vector of those "products" to operate on later.
Please explain why it happens, if you can.
Thanking you in advance,
Georgiy Gorshkov.
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Hi Lawrence,
- Posted by: Rajesh Desai
- Posted on: December 21 2001 02:52 EST
- in response to Lawrence Bruhmuller
Very neat pattern.
I have a user object that is frequently accessed and is read/write. I need to use a clustered environment and would like to take advantage of this pattern. Is there any way I can ensure that the user will see consistency of the updates?
Thanks
Rajesh Desai
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Hi Rajesh,
- Posted by: Lawrence Bruhmuller
- Posted on: December 28 2001 13:58 EST
- in response to Rajesh Desai
The only way to be absolutely sure of a consistent view is to do all the expiration synchronously (the A in ACE cache stands for Asynchronous).
If you can tolerate possibly slightly stale data for *reads* to the client, then ACE is for you. You can use tricks to make sure that requests from the same client go to the same node in the cluster, to minimize the chance that a client sees some stale, some current data.
However, if this is not acceptable, then something more along the lines of Cameron's product (which I've yet to test but hear good things about) is more up your alley. Or else rely on the DB for now.
Lawrence
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Hi All
- Posted by: Hrishikesh Rane
- Posted on: December 21 2001 18:18 EST
- in response to Lawrence Bruhmuller
Indeed a very useful and practical design.
We have implemented the same pattern but with a slight tweak. Following are the reasons
1) For occasional writes, using JMS for cache invalidation is a heavyweight solution. Using some kind of multicast mechanism would be
a simpler, more portable and app-server-independent solution.
2) Since this pattern focuses on achieving an app-server-independent solution to the problem of "read-mostly" data, I think we should reconsider the usability of JMS for cache expiry.
This is for the following reasons
1) Heavy weight
2) Fault Tolerance
----------------------
Even in WebLogic 6.1 JMS is not clusterable. The MDBs and the JMS connections are clusterable, but the JMS server hosting the destinations is not.
I have tried to explain this particular problem in this same thread some time back.
So in the event of that particular WLS instance going down, the complete caching logic goes for a toss.
3) Portability of the Code
-------------------------------
WebSphere 4.0.x does not support JMS and does not have MDBs; it has to be integrated with MQSeries or something else.
To make up for the missing MDBs we had to implement session beans which did the same job, but the deployment had to specify which beans
did this job, etc. So essentially we had two sets of classes, each app-server specific.
4) We have achieved the implementation of this pattern using multicasting for cache expiry. In the event of ejbStore() being called on the
database object, the Broadcaster sends out a message to all the listeners (the cache value object implements the Listener interface).
On receiving the message the listener, i.e. the cache, invalidates that particular record or object.
5) We have written this expiry solution from the ground up and not relied on JavaGroups. Some level of reliability has been added on top of UDP.
6) The solution is portable, lightweight, and Fault Tolerant.
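An in-process sketch of the broadcaster/listener shape described in point 4 (names invented; in the real implementation the broadcast travels over UDP multicast rather than a local method call):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Listener interface the cache implements; in the design described above
// the notification arrives over UDP multicast, here it is delivered in-process.
interface InvalidationListener { void invalidate(Object key); }

class Broadcaster {
    private final List listeners = new ArrayList();

    void addListener(InvalidationListener l) { listeners.add(l); }

    // Called from ejbStore(): tell every cache node to drop the record.
    void broadcast(Object key) {
        for (int i = 0; i < listeners.size(); i++)
            ((InvalidationListener) listeners.get(i)).invalidate(key);
    }
}

class NodeCache implements InvalidationListener {
    private final Map data = new HashMap();
    void put(Object key, Object value) { data.put(key, value); }
    Object get(Object key)             { return data.get(key); }
    public void invalidate(Object key) { data.remove(key); }
}
```

Each node's next read misses and refreshes from the database, which is the whole invalidation-not-replication trade-off of this family of patterns.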
If you have any comments on this Please open up
Regards
Hrishi
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Yes, we were aware of the heavyweight-ness of JMS and the non-clusterable JMS of WL 6.1. Hence, with a suggestion from Dimitri, we looked at JavaGroups (sourceforge.net) and I wrote a MessageFrameworkAdapter that allows any client (including this SmartCache) to plug-and-play between JMS, JavaGroups and JSDT. Once a vendor gets something right, we simply plug in the new implementation without change to the client code.
- Posted by: Gene Chuang
- Posted on: December 21 2001 18:59 EST
- in response to Hrishikesh Rane
Gene
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
BTW, In the last few days I had a chance to play with the latest product by Cameron's company - and it looks very promising. It is a distributed cache implementation with some very cool features. (On the coolness scale, I think Tangosol code-morphing product is still the coolest though ;-)).
- Posted by: Dimitri Rakitine
- Posted on: December 22 2001 03:34 EST
- in response to Gene Chuang
--
Dimitri
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Hi,
- Posted by: Peter Annaert
- Posted on: December 28 2001 10:14 EST
- in response to Dimitri Rakitine
I have the same question about how to keep a stateful EJB synchronized in a cluster
in general.
This is not only interesting for clustered caches.
Hasn't someone submitted a pattern yet for keeping things synchronous in a cluster?
I think for example of the 'pattern' where you can register flags (you give them a name and
a value of 'true' or 'false') in a stateful session bean, and other EJBs access
this bean (fast, through a local interface) to know if they have to do
something exceptional like refreshing.
Cluster-Replications of Stateful SBs?[ Go to top ]
Hi Peter,
- Posted by: Lawrence Bruhmuller
- Posted on: December 28 2001 13:52 EST
- in response to Peter Annaert
First, consider that you might not need this at all. The whole point to a Stateful SB is that there is one instance of it for a given client, and your client stub can always find it (or its backup). So why do you need to manually replicate data within it elsewhere?
But, you could be accessing this stateful EJB within the app-tier, and want to have the optimization of a local call to get this stateful SB.
In this case, you could use the general ACE strategy for Stateful SBs as long as they have a method of sending messages to each other, be it JMS or something more low-level like JavaGroups or a custom IP multicast implementation. Note: you can't use local interfaces for this communication, since the calls must go from node to node in the cluster!
You will also need to have on-the-fly topic registration or message filtering to cut down on noise if you have many sets of SBs communicating.
The discussion as to whether this kind of optimization should be transparently provided by the vendor (optionally or enforced by the spec) is a totally different discussion. Stay tuned for a post on that from me either here, or on BEA's developer site (there is a thread of that nature right now).
Lawrence
Tangosol Cache[ Go to top ]
Hi all,
- Posted by: Lawrence Bruhmuller
- Posted on: December 28 2001 13:32 EST
- in response to Dimitri Rakitine
Dimitri (or Cameron, if you have been paying attention to this thread), maybe you could share with us a little more detail about how this Tangosol worked in your evaluation.
BTW, is this Cameron's replicated cache, or the distributed cache (by which I took to mean cached objects not being replicated on all nodes, but sent over the wire sometimes)?
Obviously a replicated cache has the best raw performance, since objects are on the heap already. But a distributed cache could be more scalable, since more data can be cached overall with the same memory footprint in each node.
Also about synchronous vs asynchronous caching:
From the brief description, it seems like Tangosol is synchronous, and with transactional support to boot (VERY cool).
Gene and I (and another one of our old coworkers) always talked about the cluster-wide transactional cache being the "Holy Grail" of enterprise application architecture.
So I am curious as to how far along this path a product like Cameron's goes ... from one end of the spectrum (my ACE Cache pattern) to the other (in-memory DB). And I can't wait to hear about how well the synchronous expiry performs.
Lawrence
Tangosol Cache[ Go to top ]
Hi Lawrence,
- Posted by: Cameron Purdy
- Posted on: December 29 2001 01:11 EST
- in response to Lawrence Bruhmuller
Lawrence: "is this Cameron's replicated cache, or the distributed cache (by which I took to mean cached objects not being replicated on all nodes, but sent over the wire sometimes)?"
The Coherence product is a replicated cache. Constellation is the distributed cache, but it will not be available until late Q1/2002. (Distributing a transactional cache is VERY hard. It will tie your mind into knots! ;-)
Lawrence: "Obviously a replicated cache has the best raw performance, since objects are on the heap already. But a distributed cache could be more scalable, since more data can be cached overall with the same memory footprint in each node."
Exactly! With Constellation, we should be able to manage literally terabytes of data without ever hitting the disk. (Our TCMP (Tangosol Cluster Management Protocol) infrastructure can theoretically support thousands of servers in a cluster, although we don't have that much hardware to test with!)
Lawrence: "Also about synchronous vs asynchronous caching: From the brief description, it seems like Tangosol is synchronous, and with transactional support to boot (VERY cool)."
It is actually both. If you do a "dirty" read, it is async. If you do a locked read, it could be synchronous: the issuer for the particular resource must issue the lock, and that could require a sync'd network request.
Lawrence: "Gene and I (and another one of our old coworkers) always talked about the cluster-wide transactional cache being the "Holy Grail" of enterprise application architecture."
You took the words right out of my mouth. To be able to semi-linearly scale up a transactional architecture and provide data integrity and failover to boot is just the coolest thing! That's exactly where we are headed.
Lawrence: )."
Exactly. In Constellation, the cached objects are XML. XML is already supported in Coherence ... the doc states that objects must be Serializable, but we also support our own XmlSerializable and XmlElement interfaces (see our online doc) which we can expose as DOM objects. (We think that XML is a much better way to go for object state / persistence than JDBC, and once you have a good XML schema, it's relatively obvious how to put a JDBC access layer on top of it.)
Coherence doesn't support a cache-limit and automatic expiry (yet?), but if you send me your email address I will send you the source for our in-process (non-clustered) cache that has these features (doc'd online at).
The Coherence 1.0 download (requires registration), overview and FAQ pages are at:
BTW The latest Coherence build (build 22) is fully self-configuring (but still fully manually configurable using an XML config file) and has a built-in command line test so you can actually see it working without integrating it into your app or app server.
Peace,
Cameron Purdy
Tangosol, Inc.
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Hi Lawrence
- Posted by: Raghuram Krishnaswamy
- Posted on: January 10 2002 14:50 EST
- in response to Lawrence Bruhmuller
I liked your article very much and found it very helpful! I am new to JMS and like its concept. So if I am trying to, say, cache someone's mailbox or a bunch of messages from a mail server and want to obviate the need to access the server each time to retrieve mails, what would a good approach be for designing a cache based on your caching model? Should I construct an MDB and cache a message each time there is an update on the mailbox? Messages that do not change can expire from the cache after a certain period; only those that change should be updated in the cache, and new ones should be added. What would you suggest? I want to implement the pattern you are suggesting. Or is there already an implementation I can get? Thanks
Krishna
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Hi,
- Posted by: Vincent Harcq
- Posted on: January 15 2002 04:00 EST
- in response to Lawrence Bruhmuller
I would love some feedback on the following.
It is more linked to Seppuku pattern than ACE but Seppuku did not have its own (highly merited) column in the patterns...
My concerns:
- work in any container that supports read-only entity beans (and no more)
- firm believer in Tyler's "true power of entity beans": read-mostly entities are a matter of fact and raw JDBC sucks.
- (!!) synchronized invalidation of read-only beans when a read-write bean changes the data
In other words,
- I don't have a cluster of servers to tell to refresh its state.
- I would prefer talking to the read-only bean after the transaction of the update is finished so that I am sure the db has been touched and that my read-only bean will see the latest data.
- I don't want to use JMS or any messaging/asynchronous feature but want to make a direct/synchronous call. I do not want the situation where a client will update the data (in one Tx) then read the data (with no Tx or in another Tx) and still see the old one because JMS/... was not quick enough.
Proposal (Review of Gal's proposal)
In every setter method of the entity bean (just after the line "dirty = true;" ;-) ) I create/call a stateful session bean and give it the home of the read-only entity bean and the pk.
This stateful session bean implements javax.ejb.SessionSynchronization.
In its afterCompletion() method, it calls the read-only bean's findByPrimaryKey() and invalidate() methods. It also throws a RuntimeException to seppuku itself.
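A container-free sketch of that ordering follows; javax.ejb.SessionSynchronization is replaced by a hypothetical stand-in interface so the idea can be shown outside a container, and all the class names are my own.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for javax.ejb.SessionSynchronization: the container calls
// afterCompletion(true) only once the transaction has actually committed.
interface TxSynchronization { void afterCompletion(boolean committed); }

class ReadOnlyCache {
    private final Map data = new HashMap();
    void put(Object pk, Object v) { data.put(pk, v); }
    Object get(Object pk)         { return data.get(pk); }
    void invalidate(Object pk)    { data.remove(pk); }
}

// The stateful bean of the proposal: registered when a setter runs, it
// touches the read-only cache only after commit, so readers never refresh
// from a database that hasn't been written yet.
class InvalidatorBean implements TxSynchronization {
    private final ReadOnlyCache cache;
    private final Object pk;

    InvalidatorBean(ReadOnlyCache cache, Object pk) {
        this.cache = cache;
        this.pk = pk;
    }

    public void afterCompletion(boolean committed) {
        if (committed) cache.invalidate(pk);   // on rollback the cached copy is still valid
    }
}
```

This is exactly the synchronous, post-commit ordering the proposal asks for: no JMS race between the update transaction and the next read.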
Thanks.
A.C.E. Smart Cache: Speeding Up Data Access -- JCS[ Go to top ]
If you access data via a local data manager that either goes directly to the database or to an apserver, then you can implement a local/remote caching system. The JCS, in jakarta-turbine-stratum, is a flexible distributed caching system that is useful in this pattern.
- Posted by: Aaron Smuts
- Posted on: January 23 2002 10:02 EST
- in response to Lawrence Bruhmuller
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Lawrence,
- Posted by: Joseph Sheinis
- Posted on: February 12 2002 20:07 EST
- in response to Lawrence Bruhmuller
I really appreciate your pattern and agree that it should be provided by app server vendors. But in the meantime we mortals need to address the issue. A couple of questions for you:
1. You recommend that cache should be "synchronized appropriately". Could you share the implementation? I'd think the ideal would be to allow simultaneous access for objects with different keys, and use double-check synchronization idiom for clients attempting to read the same object. We may also need to synchronize on the Cache singleton itself when inserting new keys. What do you think?
2. You've mentioned a few issues that are beyond the scope of the pattern. I am specifically interested in the topics "different caches for different DataObjects or not" and "integrating this pattern with data access control (permissions)".
I'd appreciate any details.
Joseph.
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Hello Joseph.
- Posted by: Lawrence Bruhmuller
- Posted on: February 19 2002 02:57 EST
- in response to Joseph Sheinis
In response to your interesting questions:
1. Synchronization of the cache is a tricky issue. What I ended up doing was synchronizing on the key (or actually the key's singleton "lock" object) regardless whether doing a read or write. Of course, if a cached read was the result, the lock was held for a very short period of time.
Now, the reason I did this (lock on read as well as write) was API driven. I wanted this to be a transparent cache, where clients that needed data simply tried to read from the cache, and the cache would either have it and return it, or get it, cache it, and return it. Otherwise, the client has to worry about checking to see if the cache has it, and if it does not, then getting it, and populating the cache, all hoping not to race against another client thread. I judged the overhead of obtaining a monitor and additional contention to be worth this simplification.
To what are you referring by the double-check synchronization idiom? If you are referring to lazily synchronizing by prechecking a condition to see if a synchronized operation needs to be done, be aware that this approach has problems when implemented in Java. If you read this article or many other similar ones out there (search Google for "double-checked locking"), it might be clearer as to why I always had to synchronize.
You are correct in that the only way to allow for new keys is to briefly synchronize the entire cache, since the keys must have matching singleton lock objects.
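A sketch of the per-key locking described above (hypothetical names; the abstract load method stands in for the database fetch):

```java
import java.util.HashMap;
import java.util.Map;

// Read-through cache with one lock object per key: the lock table is held
// only briefly to find or create a key's lock, and the (possibly slow)
// load holds only that key's lock, so misses on different keys do not
// block each other.
abstract class ReadThroughCache {
    private final Map values = new HashMap();
    private final Map locks  = new HashMap();

    protected abstract Object load(Object key);   // e.g. hit the database

    private Object lockFor(Object key) {
        synchronized (locks) {                    // brief, cache-wide lock
            Object lock = locks.get(key);
            if (lock == null) { lock = new Object(); locks.put(key, lock); }
            return lock;
        }
    }

    Object get(Object key) {
        synchronized (lockFor(key)) {             // always lock, even for reads
            Object v;
            synchronized (values) { v = values.get(key); }
            if (v == null) {
                v = load(key);                    // at most one thread loads this key
                synchronized (values) { values.put(key, v); }
            }
            return v;
        }
    }
}
```

Holding the key's lock across both the check and the load is what keeps two threads from racing to load the same key, without resorting to the broken double-checked-locking shortcut.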
I'd be interested to hear others' ideas on how to synchronize this kind of read-through cache.
2. I don't want to dive into deep discussions about these other issues here, but in brief, this is all about caching strategy. Is an LRU or LFU limited cache the best way to go for your usage pattern? Is usage/caching priority similar across all your data objects, or different?
As far as access control, integrating an access control check with the API is not really that interesting, but I found it to be convenient. The only issue arises if permissions are cached with a data object, which makes it not sharable across many reading clients.
Hope this helps.
- Lawrence
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Hi Lawrence,
- Posted by: Khalil Ahamed Munavary
- Posted on: March 13 2002 21:09 EST
- in response to Lawrence Bruhmuller
First of all I appreciate you for your wonderful work towards developer community. I felt bit hard to follow the patterns and this is because I'm fresh to EJB. Now I'm working on a project in which we have to develop session beans (stateful) which will query the database. The database has 40 million records. The user will qive his search criteria and the session bean has to pull the data from database and show to the user (on his browser) pagewise. Then the user may click NEXT / PREV or PAGENOs (like google) to view particular page. Now my question is, how to make the data available immediately to the bean. If suppose, the bean access database for each & every request (i.e when user clicks NEXT / PREV buttons) it will be time consuming. Instead is there any method by which we can place all the selected records in memory and the bean can look for the data on the memory (not in the database). It like simple way of caching. How to achieve this? (NB: if EJB supports thread we can do it by pulling data as a seperate process. But EJB doesn't recommand to use threads).
Expecting valuable hints from you,
Khalil Ahamed Munavary (khahmed at apis dot dhl dot com)
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Well, you definitely wouldn't want to put all the records in memory. But like you say, you wouldn't want to go get 1 each time, as that would hurt performance.
- Posted by: Lawrence Bruhmuller
- Posted on: March 28 2002 14:09 EST
- in response to Khalil Ahamed Munavary
I suggest a Page By Page Iterator design (see Sun's J2EE Design Patterns), where you retrieve the right number of rows at a time (probably using a JDBC query).
Remember that caching is only useful if many (like >10)requests can use the cached data before the data needs to be refreshed. So only if many clients are going to browse these records would you want to even try to cache the record sets (a "page" in the page by page iterator) in your application.
- Lawrence
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Folks,
- Posted by: Patrick Linskey
- Posted on: April 08 2002 11:51 EDT
- in response to Lawrence Bruhmuller
This pattern ties in very well with Sun's recent Java Data Objects specification. The JDO specification allows for transparent persistence in a backend-neutral manner. See Sun's JDO page or JDOCentral.com for more info about JDO.
By using JDO and either a session bean or entity bean facade pattern, you can structure your application such that all database reads bypass the application server container altogether without dealing with JDBC. (I'm sure the [insert appserver vendor name here] folks will love this concept...) This brings you the alleged transparency of using CMP entity beans without the cost or the bulky requirements of the specification.
Further, if your JDO vendor is clever (see SolarMetric's Kodo JDO for example), then you might be able to get high-performance caching to boot. Kodo JDO has a distributed cache that was designed to make exactly this type of pattern happen behind-the-scenes. In fact, Kodo JDO's implementation allows you to even bypass the app server for some writes, so you can minimize your app server cluster to just enough machines to run your system-critical code in robust fashion, and use cheap non-container machines for most of your scalability needs.
-Patrick
--
Patrick Linskey
SolarMetric Inc.
pcl@solarmetric.com
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Hi Patrick,
- Posted by: Cameron Purdy
- Posted on: April 08 2002 16:32 EDT
- in response to Patrick Linskey
Sounds pretty nice, but please check on the FAQ URL () ... I couldn't access it.
Peace,
Cameron Purdy
Tangosol, Inc.
ACE and JDO (as well as EJB)[ Go to top ]
Thanks for your post, Patrick. A couple comments:
- Posted by: Lawrence Bruhmuller
- Posted on: April 08 2002 20:03 EDT
- in response to Patrick Linskey
ACE is a pattern, not an implementation. As such, it can be tied to any persistence mechanism, be it JDO or EJB entity beans, or straight JDBC for that matter.
Of course, one of my many points is that this pattern should be provided by vendors, "under the hood". I think we can all see that JDO can take advantage of this pattern just as much as an EJB container.
Your idea of bypassing the container for writes to save $ is an interesting one, can't say that I've heard it before. But of course, we *are* paying for the JDO implementation, right?
- Lawrence
A.C.E. Smart Cache[ Go to top ]
Hi Lawrence,
- Posted by: mohamed zafer
- Posted on: April 28 2002 09:43 EDT
- in response to Lawrence Bruhmuller
I have implemented a similar cache framework, using javagroups for broadcasting. I came across this pattern only today,had I been here a few weeks before, I could have saved considerable amount of time.
Please comment on this,
My cache consists of read only and read-write objects.
1. In case of read-write objects, the objects have to be expired after a time interval. I do this by setting the creation time in CacheKey [IValueObjectKey ]. A cleaner thread polls the keys periodically and checks if the object has live more than the expiry time, if so then removes it. Any better way of doing this.
2. In case of read-only objects that objects can be changed only by the administrator. These objects store configuration details and are referenced by almost all the components[EJB's] in the server. Now coming to the actual problem, For each session, which spawns multiple requests, I need the read-only objects to be consistent, i.e., even if the read-only objects are changed by the administrator, the reference held by the session in the first request should give me the same details even in the 2nd or 3rd request , till the session expires.
My system flow is like this,
1. The user enters his username and password and submits the jsp [request 1].
2. My authentication components accept the request, store any session info by creating a read-write object, gets the Config [read-only object] from the cache.
3. Component does the necessary processing, and assuming that the authentication fails, prompts the user to retry entering the username and password.
4. User enters the username and password again and submits the jsp[request 2].
5. Authentication components accepts the request, retrieves any session info from the read-write objects, gets the config again form the Cache.
6. Does the necessary processing.
Now what if the administrator changes the config in between the request 1 and 2. The config got from the cache in step.2 and step.4 is different.
One solution to this is for each session to create a new copy of config and store it in the read-write object along with the session info. So in step.4 the component gets the config from the read-write object rather than the cache. But by making multiple copies i'll be wasting memory. How can I do this.
In your pattern, what is hit(Ojbect key) and miss(Ojbect key) used for. Also can you please eloborate on Value Object graph methods in your IValueObject interface.
Thanks,
Mohamed Zafer
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Following on from this discussion has anyone ever considered the applicability of JavaSpaces? I have a similar problem in replicating data across a clustered environment but the objects I deal with need to be read/write.
- Posted by: ian greaves
- Posted on: June 07 2002 10:13 EDT
- in response to Lawrence Bruhmuller
I have only thought about using JavaSpaces but have applied them to a similar problem implementing a 'black board' architecture for parallel processing across a shared set of objects - works really neatly!
I will be looking into JavaSpaces and developing a proof-of-concept so will keep you informed as to the findings. But in the meantime any ideas comments sugesstions....
Ian
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Ian,
- Posted by: Cameron Purdy
- Posted on: June 07 2002 16:04 EDT
- in response to ian greaves
"I have a similar problem in replicating data across a clustered environment but the objects I deal with need to be read/write."
If you get a few extra cycles, could you do a quick comparison (performance and ease of implementation) between Javaspaces and Coherence. I haven't done much work with Javaspaces yet, but it looks promising. Drop me an email (cpurdy at tangosol dot com) and I can provide a full development license etc.
Peace,
Cameron Purdy
Tangosol, Inc.
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
"3. There may be small latencies in read-only data propagating across the cluster. "
- Posted by: KwangHan TAN
- Posted on: July 08 2002 12:32 EDT
- in response to Lawrence Bruhmuller
Is there anyway to compensate for the above deficiency using some sort of a feedback loop ?
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
"Is there anyway to compensate for the above deficiency [latencies in read-only data propagating] using some sort of a feedback loop ?"
- Posted by: Axel Wienberg
- Posted on: July 24 2002 12:52 EDT
- in response to KwangHan TAN
Yes, by using optimistic locking (aka "version number pattern"): keep a counter that distinguishes each written state of your object, and verify the counters of all relevant objects using direct database access right before comitting. I think Bea 7 does that now, so the vendors have started to relieve us of some of the systems programming.
A.C.E. Smart Cache: Speeding Up Data Access[ Go to top ]
Hi Lawrence
- Posted by: Sunita Kuna
- Posted on: March 04 2003 14:26 EST
- in response to Lawrence Bruhmuller
This is a very interesting pattern. We need to build a caching mechanism which would contain mutable objects and that is when i hit upon this article.
I have read your article and would like to try it out. The problem i am facing is that i am not able to understand the functionality of some of the API's mentioned in it. Eg miss(), hitAll(), missAll() and few more in the IValueObject interface. Would it be possible for you to shared the cache framework in more depth.
Can you also direct me to some more reading material on Cache design, LRU algorithms etc?
Thanks in advance
sunita
skuna@savi.com | http://www.theserverside.com/discussions/thread.tss?thread_id=10610 | CC-MAIN-2014-52 | refinedweb | 12,209 | 60.35 |
Source:NetHack 3.4.3/include/eshk.h
From NetHackWiki
Below is the full text to include/eshk.h from NetHack 3.4.3. To link to a particular line, write {{sourcecode|eshk.h|123}}, for example.
/* SCCS Id: @(#)eshk.h 3.4 1997/05/01 */
/* ESHK_H
#define ESHK_H
#define REPAIR_DELAY 5 /* minimum delay between shop damage & repair */
#define BILLSZ 200
Each shopkeeper has a fixed-length array to track what you owe them. If your bill exceeds this length, the shopkeeper will not charge you. (shk.c, line 2093, shk.c, line 2210)
struct bill_x {
unsigned bo_id;
boolean useup;
long price; /* price per unit */
long bquan; /* amount used up */
};
struct eshk {
long robbed; /* amount stolen by most recent customer */
long credit; /* amount credited to customer */
long debit; /* amount of debt for using unpaid items */
long loan; /* shop-gold picked (part of debit) */
int shoptype; /* the value of rooms[shoproom].rtype */
schar shoproom; /* index in rooms; set by inshop() */
schar unused; /* to force alignment for stupid compilers */
boolean following; /* following customer since he owes us sth */
boolean surcharge; /* angry shk inflates prices */
coord shk; /* usual position shopkeeper */
coord shd; /* position shop door */
d_level shoplevel; /* level (& dungeon) of his shop */
int billct; /* no. of entries of bill[] in use */
struct bill_x bill[BILLSZ];
struct bill_x *bill_p;
int visitct; /* nr of visits by most recent customer */
char customer[PL_NSIZ]; /* most recent customer */
char shknam[PL_NSIZ];
};
#define ESHK(mon) ((struct eshk *)&(mon)->mextra[0])
#define NOTANGRY(mon) ((mon)->mpeaceful)
#define ANGRY(mon) (!NOTANGRY(mon))
#endif /* ESHK_H */ | https://nethackwiki.com/wiki/Eshk.h | CC-MAIN-2017-34 | refinedweb | 253 | 61.36 |
In this article I am going to show you how simple it is to deploy a WCF service on Windows Azure. When I did it for the first time, I had only about 45 minutes to deploy my first service. Remember it is completely free of any cost. I highly recommend to try it, if you haven’t before. It will give you some basic understanding of what you need to do to deploy a service to the cloud using the Microsoft technology stack.
Prerequisites
To use this tutorial, you need to have Visual Studio 2012 installed. I used the Ultimate edition to create this tutorial, but I am sure it works the same with the express edition available here.
Creating a Windows Azure account
If you don’t already have a Windows Azure account it is now the time to create one. To do so, go to WindowsAzure.com and use the free trial link on the upper right corner of the website. There is currently a special offer which allows you to try some things for free, which you normally would get charged for. But for this tutorial we will only use configurations which are free of any costs.
While creating your account, you’ll be asked for some credit card information. If you don’t use any paid services, your credit card won’t be charged. Microsoft requires this information only for verification purposes. But if you start using some paid services, your credit card will be charged, so be sure to provide correct information.
Download and install the Windows Azure SDK
The next step after creating a Windows Azure account is to configure your development environment. This includes downloading and installing the Windows Azure SDK. This kit is required to get the project templates and the dlls required to develop cloud services for Windows Azure.
Create a new Windows Azure Cloud Service project
Now you have all things prepared to create your first Windows Azure Cloud Service project. To do so, start Visual Studio and open the new project window via the File – New – Project… menu.
Select the cloud tab and you will see the following screen:
As you can see there is only a single template available. Select that template and fill in a project name and click on the OK button.
If you are used to creating projects for Windows applications there was nothing special till here. But after clicking on the OK button you’ll see a new dialog which looks like this:
Within this dialog you can choose between different services types. We will use the WCF Service Web Role for our project.
Writing the service
Now that we have a new project opened, we can start coding our service. For testing purposes I recommend to write a very simple service like producing random numbers or consuming a name as an input and returning a greeting message. I’ll provide you with such an easy implementation so you can just copy and paste, if you’re primarily interested in the technology and don’t care about the implementation details at this time.
[ServiceContract] public interface IRandomService { [OperationContract] int GetRandomNumber(); [OperationContract] string SayHello(string name); } public class RandomService : IRandomService { public int GetRandomNumber() { Random number = new Random(); return number.Next(1, 1000); } public string SayHello(string name) { return string.Format("Hello Mr./Mrs. {0}", name); } }
Take the time to double-check that the names of the files correspond to the class and interface names. This is not required, but helps to find or avoid boring errors.
Creating a package
The next step is a preparation before we can actually upload the WCF service to Windows Azure. Go to the solution explorer and right click on the service project. Now you have the option Package….
Just click on the Package button to start the process. After that, the folder which contains the created files is being opened automatically. It is the \bin\Release\app.publish folder within the path of your current solution/project.
You will see a .cscfg file which contains the configuration of the package and a .cspkg file which is the package itself. My package was about 7.6 MB and the configuration was only about 1 KB.
We will need those files in a few minutes when we deploy the service to the cloud.
Creating a Window Azure Cloud service
The next thing we need to do is to setup the environment on Windows Azure. I assume that you already have created an account as mentioned at the beginning of this article. If you haven’t you should do it right now.
Log in to your account and open the Windows Azure portal. You can now click on cloud service and create a new service as you can see on the following screenshot:
Deploy your service to production
When you have successfully created your cloud service, you are now able to install and deploy your WCF service to production. To do so click on “upload a new production deployment”.
Select the files from your local hard drive and upload them to Windows Azure. Be sure to check the checkbox on the bottom of the dialog.
It will now take two to five minutes to create your service. But once it is done, it should be directly available online. Further updates will be faster. I guess this is because the infrastructure is already initialized and configured. But for now, please check if your service is available with the following link:. You should now see a default page explaining how to create a client for the service.
Congratulations! You have deployed your first WCF service on Windows Azure!
Testing the WCF service
Yes I know, you don’t trust me and it’s okay! Let us test the service we have just created with a tool called WcfTestClient. This tool comes with Visual Studio. If you don’t have it installed, you can get it online.
You find the WcfTestClient.exe in the following path: C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\WcfTestClient.exe
Open the WcftestClient.exe and right click on My Service Projects and add the url to your service. You should now see the following screen:
As you can see on the right side of the window, the service has been executed by pressing on the Invoke button and the service has resulted a string containing the sentence we expect.
Conclusion
We have successfully setup our first WCF service on Windows Azure within a very short time. If you haven’t done it before I really recommend to try it, just to get a feeling for how simple it is and which possibilities it offers.
I currently use a PHP service for one of my projects, because I was able to run the script on my web hoster for a low price. Since I am now able to publish a WCF service written in C# in such a short time and using the debugging capabilities provided by Visual Studio 2012, I definitely consider upgrading my service and change from a PHP implementation to a C# WCF service implementation. | http://www.claudiobernasconi.ch/2013/08/03/deploying-a-wcf-service-on-windows-azure/ | CC-MAIN-2017-30 | refinedweb | 1,192 | 64.1 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
how to read fealds in new API
Learning Python and trying to understand the new odoo API has been a challenge to me.
how to I convert this method to new API? this will help me a lot in my attempt to grasp new API concept
def name_get(self, cr, uid, ids, context={}):
if not len(ids):
return []
reads = self.read(cr, uid, ids, ['name', 'lastname'], context)
res = []
for record in reads:
name = record['name']
if record['lastname']:
name = record['lastname'] + ', '+name
res.append((record['id'], name))
return res
the method read name and lastname from res.partner
and append Id value to the name as return.
To read a field you can learn from this example:
var =1
obj = self.pool.get('res.partner')
obj_ids = obj.search(cr, uid, [('id', '=', var)])
res = obj.read(cr, uid, obj_ids, ['name','id'], context)
Or You can use Environment.
The Environment stores various contextual data used by the ORM: the database cursor (for database queries), the current user (for access rights checking) and the current context (storing arbitrary metadata). The environment also stores caches.
Friend, Take a look to the official documentation of ODOO.
Regards
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/how-to-read-fealds-in-new-api-89600 | CC-MAIN-2017-34 | refinedweb | 247 | 66.74 |
When attaching a USB hub (anything from a generic DLink to a keyboard
with integrated hub) while a userspace libusb query is going on, the
kernel panics (trap 12, fault virtual address = 0xdeadc0e6) at
usbd_device_fillinfo:1350; p->device is 0xdeadc0de.
I have experimented several times, I think the cause is during hub
attachment, there is a tsleep when waiting for power to settle
(uhub.c:288). In this time, libusb's usb_find_devices happens to request
an ioctl for a device exploration. At this point, the port structures of
the hub are not yet initialized.
I have a temporary fix that just initializes p->device to NULL before the sleep, but this doesn't solve a similar problem exists during hub detachment (which I haven't been able to narrow it down much further).
Fix:
Proposed patch (only a hackish fix for the attachment problem):
@@ -284,6 +284,15 @@
goto bad;
}
+ // Fixes crash on hub attachment
+ // Need to init device to NULL before delay sleep;
+ // otherwise exploration could hit an uninit'd port
+ for (p = 0; p < nports; p++) {
+ struct usbd_port *up = &hub->ports[p];
+ up->device = NULL;
+ }
+ // end changes
+
/* Wait with power off for a while. */
usbd_delay_ms(dev, USB_POWER_DOWN_TIME);
How-To-Repeat: Run a program that continuously polls for USB devices using libusb's usb_find_devices(), while attaching a USB hub. This won't cause it to crash everytime, but it is likely that out of 20 attachments, there will be at least one panic.
A piece of code along the lines of
while(1){
DPRINTF(("before usb_init\n"));
usb_init();
DPRINTF(("before usb_find_busses\n"));
usb_find_busses();
DPRINTF(("before usb_find_devices\n"));
usb_find_devices();
}
should do the trick of producing something like the log.
Would you happen to know if this happens on FreeBSD -current?
Warner
Thanks for the update Victor.
I can't get it to happen in current. Maybe I'm doing something
differently than you. Do you have an easy recipe for causing the
panic?
Warner
Here is the code I just used to panic it just now (requires libusb):
/////////// usbtest.c:
#include "/usr/local/include/usb.h"
int main(int argc, char **argv){
while(1){
usb_init ();
usb_find_busses ();
usb_find_devices ();
}
return 0;
}
//////////// Makefile:
CC=gcc
LD=ld
CFLAGS = -g -c -Wall
all: usbtest.o
$(CC) -o usbtest usbtest.o /usr/local/lib/libusb.a
clean:
rm -f usbtest usbtest.tgz *.o *~
It took me about 15 plug-in/unplug cycles to get it to crash
(sometimes as many as 30 during previous testing when I'm unlucky).
There's about a 200ms window during which the panic will occur, and
it's hard to say exactly when that is. You can probably add a printf
to given an indication of where in the loop you're plugging in the
hub, so you can change the timing around a bit.
-victor
For bugs matching the following criteria:
Status: In Progress Changed: (is less than) 2014-06-01
Reset to default assignee and clear in-progress tags.
Mail being skipped | https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=114310 | CC-MAIN-2020-10 | refinedweb | 495 | 61.26 |
Get involved in the development
74 Comments
Dean Thompson
I am eagerly waiting for this plug-in to get to the point where I feel confident doing production development with it. Last time I tried a version of the plug-in, it kept my Java code from compiling, and I had to back out. That was a couple of months ago, but now I am very confused about the status of this plug-in development. Nothing new appears in the version history any more, and this overview page isn't updated, but new plug-in versions do seem to appear. It would be tremendously helpful if someone could go through and update the version history, the roadmap, and perhaps some of the other summary information, to give us all a sense of where this development stands.
Dean Thompson
The following forum thread contains an answer to some of my questions:
The key information is in a post from Ilya Sergey, who seems to be the primary maintainer of this page. Ilya Sergey writes:
Dean Thompson
I am delighted to see that the version history is being regularly updated again to show lots of great progress on the plugin. My thanks to everyone involved in developing and supporting it!
Dean Thompson
I have been using the plug-in very successfully for the past couple of weeks. My thanks again to the plug-in development team!
Anonymous
Here's my Scala class:
___________________________________
class GisPoint(x1: Int, y1: Int) {
  val x = x1
  val y = y1
  def +(p: GisPoint) = new GisPoint(x + p.x, y + p.y)
  override def toString = "x:" + x + ", y:" + y
}
___________________________________
From a Java class, even though it compiles and works, I get a syntax error with "new GisPoint(10,10)". IDEA underlines the arguments and says:
"GisPoint() in GisPoint cannot be applied to (int, int)"
Any idea? I use 8.1.3 with scala plugin 0.2.27245
Thanks
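In case it helps narrow it down: calling the same constructor from Scala resolves fine for me, which makes me think it's only the Java-side inspection. A minimal, self-contained sketch (GisPointCheck is just a name I made up for the test):

```scala
// Copy of the class from above, so this snippet compiles on its own.
class GisPoint(x1: Int, y1: Int) {
  val x = x1
  val y = y1
  def +(p: GisPoint) = new GisPoint(x + p.x, y + p.y)
  override def toString = "x:" + x + ", y:" + y
}

// Exercises the same primary constructor that IDEA flags from Java.
object GisPointCheck {
  def main(args: Array[String]) {
    val p = new GisPoint(10, 10)    // the call underlined on the Java side
    println(p + new GisPoint(1, 2)) // prints "x:11, y:12"
  }
}
```

If that compiles and runs for you too, it's presumably just the plugin's Java resolver, not scalac.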
Taras Tielkes
Although part of the source quoted above got eaten by confluence, I think similar cases work for me using the latest IDEA 9.0 EAP and the latest Scala-for-IDEA9 plugin release.
Anonymous
Running Scala code in IDEA on my MacBook doesn't work very nicely, it seems!?! The "Edit Configuration" wizard opens up the Scala console, so I end up with a ">scala " prompt instead of the code being executed. Is this a Mac-related bug?
Alexander Podkhalyuzin
Try to run your Scala code directly from the editor, not through "Edit Configuration".
Anonymous
Thanks for replying Alexander,
it seems that there is no option for this on macbook!
The normal view on my Windows PC has a bigger context menu! On the Mac there are only these 2 options:
- Compile 'Hello.scala'
- Run Scala Console
!!!
Alexander Podkhalyuzin
Please create bug report to. Also attach some screenshots. We will help you soon.
Anonymous
Is this supposed to work with the Community Edition? I have #IC-90.96, and I am told that the plugin is "not compatible".
Anonymous
same version, same problems, suggestions?
Ilya Sergey
Please download the latest IntelliJ version.
Anonymous
RE: Running scala code on IDEA on my Macbook ... I'm using the latest production release of IDEA (8.x) on the Mac! There are only the following options:
- Compile 'Hello.scala'
- Run Scala Console
Do you mean version 9??
Anonymous
Looks like the Scala plugin is not compatible with the latest Community Edition (v 90.193).
Alexander Podkhalyuzin
It's not the latest, you can download EAP version 92.24.
Best regards,
Alexander Podkhalyuzin
Anonymous
The Scala plugin is not available in the EAP 92.24 version; it is not shown in the available plugins list...
Anonymous
Are there any download links for EAP 90.96, as I am unable to use the Scala plugin with the recent EAP releases?
Anonymous
the same problem here.
Regards,
Roman Sotnikov
Anonymous
I have the same problem. I just downloaded 92.24, and although the Clojure plugin is now compatible (it was incompatible with 90.193), the Scala plugin is incompatible, and worse yet, the list of plugins won't even display. I get an error msg that says "List of plugins was not loaded: Content is not allowed in prolog" when I click on the plugins tab and it tries to download the list of available plugins.
Anonymous
nevermind, I got the latest plugin from and it works with 92.24
Anonymous
I'm having the same problem...
greedy genius
Anonymous
I tested several versions of Scala IDEs + plugins recently, and IDEA is the best so far.
The only two things that upset me (and I suppose they both have the same reason) are:
object Test {
  def a(s: String) = Some(s)
  def main(args: Array[String]) {
    for (b <- a("")) println(b)
  }
}
You guys are doing really great work, but could you tell us what your plans are on the things I noted above?
Thanks in advance.
Anonymous
Well, maybe it's not type inference; maybe it's just the for comprehension or something else. But if I type 'a("").' I get Option methods in the autocomplete list, though I can replace 'b <- a' with 'b <- a(z)' (note that z is not defined anywhere) and I get no errors from IDEA.
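To illustrate what I mean: my (possibly wrong) understanding is that the compiler rewrites a for comprehension into plain map/flatMap calls before type-checking, so the plugin would have to resolve that rewrite to notice an undefined name like z. A small sketch:

```scala
object ForDemo {
  def a(s: String) = Some(s)

  def main(args: Array[String]) {
    // for (b <- a("x")) yield b.length
    // is rewritten by the compiler into roughly:
    val r = a("x").map(b => b.length)
    println(r) // prints "Some(1)"
  }
}
```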
Anonymous
Indeed, it would be nice if the autocompletion could be improved.
Anonymous
And also... I looked through the list of tickets in the issue tracker, and I'm not sure if I should make a new ticket or whether it is a known problem. Some issues look similar to my problems, but not exactly the same.
Anonymous
Hi All,
I am using IDEA 9.0 Community Edition. The comparison page () states Scala is available for users of the community edition as a plugin.
When I open the plugin window and search for Scala, nothing appears, but when I do the same in IDEA 9.0 Ultimate (trial), I can see that the plugin appears.
Can anyone tell me how I should install the plugin in the community edition?
Thanks,
-A
Ilya Sergey
Hi.
Please make sure that the build of IntelliJ IDEA 9.0 CE you use is from the 93.* branch. Due to some internal API changes, newer branches are not supported so far.
With best regards,
Ilya
Anonymous
Hi,
I just tried out the scala plugin for IDEA (93.54) and I am not sure if this is a bug or a missing feature:
1) Auto-compilation does not show me all errors. For example, if I call a method with the wrong number of parameters, this will of course result in a compilation error. But the error is not shown immediately in the editor, like it would be with Java or Groovy.
2) On manual project compilation, the errors are not linked in the editor. I only see them in the compilation output.
Regards,
Andi
Alexander Podkhalyuzin
The editor shows syntax errors and some resolve errors. The IDE is not integrated with the compiler, so compilation can't update red code in the editor.
The Scala compiler has a complex type system, which is not implemented in the Scala plugin yet. Until this is implemented, you can't see all compiler errors on the fly (like in Java or Groovy).
Best regards,
Alexander Podkhalyuzin.
Ittay Dror
Is this on the roadmap?
Anonymous
How can I rename a file? In the project structure I see only the classes inside the file.
Alexander Podkhalyuzin
You can rename the file from its editor tab, not from the project view. Or you can enable file-node mode in the Scala plugin settings; then you will see the file with its children in the project view.
Anonymous
Is the plugin only compatible with Scala 2.8? When I set the Scala version to 2.7, I get a ClassNotFound exception.
OTOH, if I set the compiler to 2.8, I cannot build the Lift project.
Any suggestions?
Thanks,
– Sasha
Alexander Podkhalyuzin
Please configure the Scala facet page for development with Scala 2.7.
Anonymous
Hey, it appears that the Scala compiler is really slow compared to the Java compiler. Is there any work being done to improve this?
Anonymous
Hello
Thank you for a great plugin. I look forward to each time you make an "official" release.
However, I have a problem with attaching docs and source code to the Scala libraries. It works fine for other libraries such as ScalaTest.
I am using the latest EAP and the release from April 2nd; however, it has been a problem for a while.
Alexander Podkhalyuzin
Please try the latest Nightly Build, which has your problem fixed:
Best regards,
Alexander Podkhalyuzin.
Anonymous
Hi, plugin is cool fo sure, thanks for good work.
One thing I noticed: when editing a file with 2 or more classes while the "structure" subwindow is open, I get short editor hangs when changing text; the hanging disappears when I close the "structure" window.
Anonymous
When I set "Use fsc" to compile, I got these errors:
Scalac internal error:
class java.lang.reflect.InvocationTargetException )]
[scala.tools.nsc.CompileSocket.fatal(CompileSocket.scala:50),
scala.tools.nsc.CompileSocket.startNewServer(CompileSocket.scala:89),
scala.tools.nsc.CompileSocket.getPort(CompileSocket.scala:113),
scala.tools.nsc.CompileSocket.getsock$1(CompileSocket.scala:152),
scala.tools.nsc.CompileSocket.getOrCreateSocket(CompileSocket.scala:170),
scala.tools.nsc.StandardCompileClient.main0(CompileClient.scala:85),
scala.tools.nsc.CompileClient.main0(CompileClient.scala),)]
Anonymous
Can someone delete the above comment? It's just spam. The person has posted the same comment on every Scala forum and blog he can find.
Anonymous
Hmm..
Do you have scalatest library in your module dependencies? Also please check that scala library and compiler jars in your facet configuration are the same as in your module dependencies.
Also, I'd highly recommend the latest IDEA EAP () and the Scala plugin nightly build ()
Vinay
I have already posted the question. I am trying to begin my journey and hitting roadblocks on the way. Is there any help to at least get started?
With the Community Edition, the plugin seems to be behaving very erratically.
Alexander Podkhalyuzin
4. This is a bug in IDEA - CE () I hope it will be fixed before IDEA 11.1 release.
6. Getting started guide should be rewritten anyway (I think we will do it after improving creation project wizard), so it's possible that we will add some specific info about Mac, however usually it's almost the same as for any other operating system.
7. You can buy an IDEA license and it will work for all EAP releases (if I remember right, the license is upgradable for 1 year). Also you can just update IDEA using EAP versions. Every new version will have a 30-day trial, and usually the distance between EAPs is less than 30 days. However, it's possible that the distance between the IDEA 11.1 release and the IDEA 12 EAP will be much bigger than 30 days, so with the plugin installation problems fixed on 11.1 CE you can use the Community Edition.
Best regards,
Alexander Podkhalyuzin.
Henning Hoefer
The "nightly builds" link near the top right of this page still points to the IDEA X version.
It probably should point to by now...
Pavel Fatin
That's true. Thank you!
Tomer Gabel
Actually, at this point you should also include the Cardea nightlies:
Alexander Podkhalyuzin
That's true again. Thank you!
Jon Steelman
Since IntelliJ IDEA 14 EAP is out with a new Scala plugin build, are you going to add a link for Nightly builds (Cassiopeia) in the Download section upper right?
Thanks,
Jon
Pavel Fatin
Hi Jon, thank you for the reminder. We're definitely going to provide the nightly builds for IDEA 14 plugin, yet it will probably take a week or so, and there are no new commits in the corresponding plugin branch anyway (besides compatibility-related).
Jeffrey Aguilera
Installed 0.41 last night. Now when build.sbt is updated, all sources and javadocs are removed from the project.
Jeffrey Aguilera
0.41.2 still discards all sources and javadocs.
Alexander Podkhalyuzin
Try File -> Import Project again. But this time check "Download sources and docs" in "Import SBT" dialog.
Jeffrey Aguilera
Thank you. That works perfectly now.
I did notice that my vcs.xml "doubled up" a mapping after doing an import:
Not sure if that is related to the 0.41.2 update.
Alexander Podkhalyuzin
It's IntelliJ IDEA platform problem. I reported something similar:
Best regards,
Alexander Podkhalyuzin.
Michael Hamrah
I'm confused about the relationship between SBT and Intellij. It seems that even though an sbt project is imported, sbt is not used to build and run tests, as sbt compile configuration options are ignored.
Specifically, I've set
excludeFilter in (Compile, unmanagedSources) := HiddenFileFilter || "*_test.scala"
excludeFilter in (Test, unmanagedSources) := HiddenFileFilter
resourceDirectory in Compile := baseDirectory.value / "resources"
resourceDirectory in Test := baseDirectory.value / "resources"
in my build.sbt file, but these settings do not appear to be honored when running Build => Make.
I'm using Scala plugin 1.1.382.2.RC1. I also tried with the SBT plugin, but this seems old and I removed it.
Nikolay Obedin
The relationship between SBT and IDEA is partial: we use SBT to extract the project structure - modules, dependencies, options and misc - but we compile and run tests on our own. It causes some problems, like project structure restrictions (the famous "shared roots" problem), incorrect handling of options, etc. We have plans to use sbt-remote-control; in theory it should solve most of our current inconsistencies, but sbt-remote-control is quite far from a mature state right now, so don't expect it to be integrated into the Scala/SBT plugin in the near future.
As for your specific problem: it will be great if you submit an issue on Youtrack and attach an example project if it is possible. I'll take a look and try to find a solution.
Michael Hamrah
Will do, thank you!
Alexandre Russel
Is there a way to modify the args for the 'make' command, especially when it is used before starting a test? I can see that it starts:
I've tried all possible configurations for sbt and scala, but I can't manage to find the setting so that it uses -Xmx2048M.
Nikolay Obedin
Hi. Is it Play2 project? If so, try adding "-Xmx2048M" in "Settings / Languages & Frameworks / Play2" dialog
Christian Schlichtherle
Thanks, this solved my problem, too.
@IntelliJ: This is completely counter-intuitive and more or less undocumented.
Alexandre Russel
It is a Play project, but I've imported it as an sbt project. When I try to add Play framework support, it doesn't do anything (no facets are added). Adding the -Xmx... to the Play2 conf doesn't change anything. I'm using 14.1.4.
Christian Schlichtherle
Apparently the SBT setting "Maximum heap size, MB" is ignored. It's set to 1024 by default, but whatever I put in there, it's not used, as JVisualVM tells me.
Christian Schlichtherle
Are there any release notes for this plugin? I would like to check what has changed in the last version.
Mikhail Mutcianko
Release notes are posted on the download page: Release fixes
Same goes for EAP and nightly releases.
Christian Schlichtherle
Thanks for the swift response. I've checked there, but it doesn't contain any notes for release 1.5.3, which is used in the latest (non EAP) IDEA Ultimate. The notes start with version 1.6.0.
Mikhail Mutcianko
Unfortunately we only started using a release notes generator with 1.6; however, you can still read about new features in older releases on the Scala plugin blog: 1.5 EAP features
VP
Why is IntelliJ + the Scala plugin so slow? IntelliJ is quite snappy when developing Java projects. In Scala projects, autocomplete, syntax verification and most commands are delayed, sometimes by more than 3-5 seconds. Restarts or using the Ultimate version of IntelliJ don't help.
Nikolay Obedin
You could help us make it faster by creating an issue with detailed description of certain performance problem.
VP
I hope the answer won't again be to use Ultimate.
Alexander Podkhalyuzin
IntelliJ IDEA Ultimate is slower than the Community version because of the bigger number of plugins enabled by default, so it will not solve your problem.
There are a few things to say about performance of the Scala plugin. First of all, it has been constantly improving over the years. It's sad that it's still slow, but we are still working on it (probably not enough). One of the goals is to improve memory usage; we are working on that right now (). You can try to increase Xmx for IDEA (1.5G should be enough), and probably everything will become snappier. The second possibility is implicits: if you use implicits heavily, you can improve the performance of the IDE (and compilation) by reducing the use of type inference in implicit declarations. It can significantly improve overall performance. But I hope we will find ways to improve the algorithms for implicits.
Best regards,
Alexander Podkhalyuzin.
Ahti Kitsik
3-4 seconds is nothing compared to the Scala plugin's Play Framework support, where you have to wait 10+ seconds every time you want to run or test your app.
I guess it's probably the SBT runtime integration that is weak, not specifically the Play Framework.
VP
Could you please send your feedback here:
Maybe you have time to upload a heap dump too. | https://confluence.jetbrains.com/display/SCA/Scala+Plugin+for+IntelliJ+IDEA?focusedCommentId=69338408 | CC-MAIN-2019-47 | refinedweb | 2,848 | 66.64 |
A Mongoose OS app is a firmware that does something specific. It could be built and flashed on a microcontroller. For example, a blynk app is a firmware that makes a device controllable by the Blynk mobile app.
Another example is.
An app can use any number of libs. A lib is a reusable library. It cannot be built directly into a working firmware, because it only provides an API but does not actually use that API. An app can include a lib by listing it in the libs: section of the mos.yml file.
The mos build command generates code that calls library initialisation functions. Libraries are initialised in the order in which they are referenced.
By default, the mos build command that builds an app's firmware uses a so-called remote build: it packs the app's sources and sends them over to the Mongoose OS build machine. This is the default behavior because it does not require a Docker installation on the workstation.
However, if Docker is installed, then it is possible to build locally. This is done by adding an extra --local flag (see below). In this case, everything is done on the local machine. This is a preferable option for automated builds, and for those who do not want their sources leaving their workstations. Summary:
The mos.yml file drives the way Mongoose apps are built. Below is a description of the sections (keys) in this file. Libraries also have mos.yml files; the only difference from apps is that they have a type: lib key and they cannot be built into a firmware. So the following applies to both apps and libraries.
A string,
FirstName SecondName <Email> of the author, example:
author: Joe Bloggs <joe@bloggs.net>
List of Makefile variables that are passed to the architecture-specific
Makefile when an app is getting built. See next section for a build process
deep-dive. An example of arch-specific Makefile is:
platforms/esp32/Makefile.build.
The others are in the respective directories:
fw/platforms/*/Makefile.build.
The example below changes ESP32 SDK configuration by disabling brownout detection:
build_vars:
  ESP_IDF_SDKCONFIG_OPTS: "${build_vars.ESP_IDF_SDKCONFIG_OPTS} CONFIG_BROWNOUT_DET="
Another example is the dns-sd library that enables DNS-SD:
build_vars:
  MGOS_ENABLE_MDNS: 1
A list of .a libs or directories with those. Do not put trailing slashes on directory names:
binary_libs:
  - mylib/mylib.a
Additional preprocessor flags to pass to the compiler, example:
cdefs:
  FOO: BAR
That gets converted into the -DFOO=BAR compilation option, for both C and C++ sources.
Modify compilation flags for C (cflags) and C++ (cxxflags). For example, by default warnings are treated as errors. This setting ignores warnings when compiling C code:
cflags:
  - "-Wno-error"
If what you're after is defining preprocessor variables, cdefs makes it easier. This snippet:
cdefs:
  FOO: BAR
is the same as:
cflags:
  - "-DFOO=BAR"
cxxflags:
  - "-DFOO=BAR"
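For illustration, here is a minimal, self-contained C++ sketch of how such a define is typically consumed in source code. The MOS_STR stringify helpers and the foo_setting function are invented for this example and are not part of the Mongoose OS API:

```cpp
#include <string>

// Stringify helpers, so a macro's replacement text can be inspected at run time.
#define MOS_STR2(x) #x
#define MOS_STR(x) MOS_STR2(x)

// With `cdefs: FOO: BAR` the compiler is invoked with -DFOO=BAR,
// so FOO is visible to every C/C++ translation unit in the app.
std::string foo_setting() {
#ifdef FOO
  return MOS_STR(FOO);  // "BAR" when built with -DFOO=BAR
#else
  return "unset";       // plain build, without the define
#endif
}
```

Compiling the same file with and without -DFOO=BAR changes what foo_setting() returns, without touching the source.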
This can define a new configuration section for the device, and also override previously defined configuration entries defined elsewhere. For example, the following snippet defines a new section foo and overrides the default value of mqtt.server set by the mqtt library:
config_schema:
  - ["foo", "o", {title: "my app settings"}]
  - ["foo.enable", "b", true, {title: "Enable foo"}]
  - ["mqtt.server", "1.2.3.4:1883"]
A string, one-line short description, example:
description: Send BME280 temperature sensor readings via MQTT
A list of files or directories with files to be copied to the device's filesystem, example:
filesystem:
  - fs
  - other_dir_with_files
  - foo/somepage.html
A list of directories with C/C++ include files. Do not put a trailing slash on the directory name. Example:
includes:
  - my_stuff/include
Library dependencies. Each library should have an origin and optionally can have a name and version.
origin is a GitHub URL, like (note: it must be a repo with
mos.yml in the repo root!).
Name is used to generate the code which calls the library initialization function: e.g. if the lib name is mylib, it should have the function bool mgos_mylib_init(void). Also, for local builds, the name is used as a directory name under deps/; that's where mos clones libraries.
version is a git tag name, or branch name, or SHA of the library's repository. If omitted, it defaults to the libs_version in mos.yml, which, in turn, defaults to the mos tool version. So e.g. if the mos tool version is 1.21, then by default it will try to use libs with the tag 1.21. The latest mos will use the master branch.
Example:
libs:
  # Use aws lib on the default version
  - origin:
  # Use aws lib on the version 1.20
  - origin:
    version: 1.20
  # Use the lib "mylib" located at
  - origin:
    name: mylib
Override app or lib name. By default, the name is set equal to the directory name.
name: my_cool_app
A list of C/C++ source files or directories with those. Do not put trailing slashes on directory names:
sources:
  - src
  - foo/bar.c
A list of free-form string tags, used for Web UI search. Some tags are predefined; they place the app or library in a certain category. Those predefined tags are: cloud (cloud integrations), hardware (hardware peripherals or API), remote_management (remote management), core (core functionality). Example:
tags:
  - cloud
  - JavaScript
  - AWS
When the mos build [FLAGS] command is executed in the app directory, the following happens:
mos scans the libs: section of the mos.yml file and imports all libraries into the libs directory (~/.mos/libs, which can be overridden by the --libs-dir ANOTHER_DIR flag).
Each library also has a mos.yml file, and a library can have a libs: section as well - this way a library can depend on other libraries. mos imports all dependent libraries too, recursively.
When all required libraries are imported, mos executes git pull in each of them, in order to update them. That can be switched off with the --no-libs-update flag.
At this point, all required libraries are imported and updated.
mos combines the app's mos.yml file together with the mos.yml files of all dependent libraries, merging them into one file. The order of merging is this: if my-app depends on library lib1, and library lib1 depends on library lib2, then result_yml = lib2/mos.yml + lib1/mos.yml + my-app/mos.yml. Meaning, the application's mos.yml has the highest priority.
If the --local --verbose --repo PATH/TO/MONGOOSE_OS_REPO flags are specified, then mos starts a local build by invoking the docker.cesanta.com/ARCH-build Docker image. That image encapsulates a native SDK for the given architecture together with the Mongoose OS sources.
The mos tool invokes make -f fw/platforms/ARCH/Makefile.build for the given platform. The result of this Docker invocation is a build/ directory with build artifacts and a build/fw.zip firmware zip file, which can be flashed to the device with the mos flash command.
If the --local flag is not specified, mos packs the source and filesystem files and sends them to the Mongoose OS cloud build backend at, which performs an actual build as described in the previous step, and sends back a build/ directory with the built build/fw.zip and artifacts.
Generated artifacts in the build/ directory are as follows:
build/fw.zip - the built firmware
build/fs - a filesystem directory that is put in the firmware
build/gen - generated header and source files
The best way to develop a new library is as part of an app development. In your app, do a local build, which creates a deps/ directory. That is the directory where you should place your new library.
Clone an empty library, which is a skeleton for the new library, into the deps/mylib directory (change mylib to whatever name you want):
git clone deps/mylib
Create include/mgos_mylib.h and src/mgos_mylib.c files in your library:
#include "mgos_mylib.h"

// NOTE: library init function must be called mgos_LIBNAME_init()
bool mgos_mylib_init(void) {
  return true;
}
#include "mgos.h"
You can add your library-specific API to mgos_mylib.h and the implementation in mgos_mylib.c.
In your app's
mos.yml file, add a reference to the new library:
libs:
  - name: mylib
Click the build button to build the app, and the flash button to flash it.
Edit the library source files in mylib/src, and build myapp until the test app works as intended.
Put the .c files into the mylib/src directory, and the .h files into the include/ directory.
Create an mjs/api_mylib.js file with the FFI JS wrappers.
Build myapp until it works.
If you would like to share your project with a community and publish it under the Apache 2.0 license, please follow these steps:
In mos.yml, set the author field as Your Name <your@email.address>.
Write a README.md file.
Create an mjs_fs/api_<name>.js file if your library has a JavaScript API.
For Arduino compatibility, use the arduino-compat library in the mos.yml file; see the arduino-adafruit-ssd1306 lib for an example.
New contribution: ..., show a link to your code on GitHub / Bitbucket / whatever, or attach a zip file with the app sources. | https://mongoose-os.com/docs/mongoose-os/userguide/build.md | CC-MAIN-2022-05 | refinedweb | 1,467 | 67.35 |
I'm supposed to create a 10x10 board, using a 2-dimensional array, with 35 random blocked squares ('b'). The goal of the program is, using recursion, to try to find a path from the bottom-left square of the board ([9][0]) to any of the squares at the top of the board ([0][0-9]), leaving a trail of 'x's throughout my movements. Once I find a path all the way to the top, if there is an empty space in one of the top squares, then I mark the square as 'x' and return 1 (true) as a result. Else, if there is a 'b' char already in the square, then I return 0 (false) as a result.
At the start of the path-finding game, I'm supposed to check whether the starting square ([9][0]) has a 'b' or not. If it has a 'b' char, then the method automatically returns 0 and terminates. But if it has an empty space, I mark an 'x' and continue on. Using recursion, I am supposed to look for a path UP, DOWN, LEFT, and RIGHT. If a square is already blocked ('b'), then I can't go that way. If it has an empty space, then I mark an 'x' and continue trying to find a path towards the top.
I am supposed to display the board before I try to find a path; then, after I attempt to find a path up the board, I have to display the result I got and what the board looks like after I found my result, with the path of 'x's included throughout the board up to the last spot where I was terminated.
Here's my Class code...
Code :
import java.util.Random;
import java.util.Scanner;

public class Board {
    private int Row;    // Row of the board game
    private int Column; // Column of the board game
    private char[][] boardInterface;
    private boolean checkedStartAlready;
    Scanner input = new Scanner(System.in);

    // Make a constructor with no arguments that sets the board
    public Board() {
        Row = 10;
        Column = 10;
        checkedStartAlready = false;
        boardInterface = new char[Row][Column]; // Initialize board
        getBoard(); // Generate board with 35 random 'b's
    }

    // Make a method to get the board
    public void getBoard() {
        int placedACharB = 0;
        // Make generator for random numbers
        Random random = new Random();
        int generator;
        // Create a loop that sets the character b 35 times in random places
        for (Row = 0; Row < 10; Row++) {
            for (Column = 0; Column < 10; Column++) {
                generator = random.nextInt(3);
                if ((generator == 0) && (placedACharB < 35)) {
                    boardInterface[Row][Column] = 'b';
                    placedACharB++;
                } else
                    boardInterface[Row][Column] = ' ';
            }
        }
    }

    public int findPath(int currentRow, int currentColumn) {
        // check starting point
        while (checkedStartAlready == false) {
            if (checkStartPoint(currentRow, currentColumn) == false)
                return 0;
            else
                checkedStartAlready = true;
        }
        // check to see if row and column are out of bounds
        if ((currentRow < 0) || (currentRow > 9))
            return 0;
        else if ((currentColumn < 0) || (currentColumn > 9))
            return 0;
        // check to see if made it to the top
        while (currentRow == 0) {
            if (boardInterface[currentRow][currentColumn] == 'b') {
                return 0;
            } else {
                boardInterface[currentRow][currentColumn] = 'x';
                return 1;
            }
        }
        // check to see if space has b or already has x -- if not then mark an x
        // and continue
        if (boardInterface[currentRow][currentColumn] == 'b')
            return 0;
        else if (boardInterface[currentRow][currentColumn] == 'x')
            return 0;
        else {
            // Place new 'x' in empty space of array and continue path
            boardInterface[currentRow][currentColumn] = 'x';
            // Find next open path -- Look UP, DOWN, LEFT, RIGHT
            return findPath(currentRow--, currentColumn)
                 + findPath(currentRow++, currentColumn)
                 + findPath(currentRow, currentColumn--)
                 + findPath(currentRow, currentColumn++);
        }
    }

    // Method to check the starting point
    public boolean checkStartPoint(int startRow, int startColumn) {
        if (boardInterface[startRow][startColumn] == 'b')
            return false;
        else {
            boardInterface[startRow][startColumn] = 'x';
            return true;
        }
    }

    // Method to display the board
    public void displayBoard() {
        for (Row = 0; Row < 10; Row++) {
            for (Column = 0; Column < 10; Column++) {
                // print out character
                System.out.print(boardInterface[Row][Column]);
            }
            // if column reaches the end, go to next row
            System.out.println();
        }
        System.out.println();
    }
}
Here's my Main code...
Code :
public class BoardTester {
    public static void main(String[] args) {
        Board myBoard = new Board();
        int startRow = 9;
        int startColumn = 0;
        int pathResult;
        // Display the board
        myBoard.displayBoard();
        // Find path in the board and return result (True == 1, False == 0)
        pathResult = myBoard.findPath(startRow, startColumn);
        if (pathResult == 0)
            System.out.println("False");
        else
            System.out.println("True");
        // Display board after path results with marked x's INCLUDED
        myBoard.displayBoard();
    }
}
Here's an example of what my output SHOULD look like...
Code :
bb b b b b bb b b bbb b bbb b bbbb bbbb b b b b b b b bb false bb b b b b bb xb b xbbb b bbb xb bbbbxxx xbbbbxxxxb xxxxxbxbxx xbxbxbxxbx xxxxxxxxxx xxxxxbbxxx
But here's the type of output that I keep getting...
Code :
b b b b b b b bbbb b bb b b b bb bb b b b b b b bb bb b bb false b b b b b b b bbbb b bb b b b bb bb b b b b b b bb bb x b bb
Can somebody explain to me what I'm doing wrong? Also, why aren't my 'x's being stored along the way? I'm somewhat of a beginner when it comes to recursion, so please let me know if the recursion could be the problem. Thank you.
Eclipse Community Forums
scoping: both uri and classpath based
Vlad Dumitrescu (2012-10-05)
Hi! My language needs to use both a URI-based and a classpath-based global scope. What I am trying is my own implementation of a global scope provider that delegates the work to an ImportURI* and a Default* global scope provider, merging the results. For this, I added both fragments to my mwe2 file and overrode the binding in my runtime module. Would that work? I mean, are there other places where settings can be overwritten by the second fragment? At the moment I have other issues that don't let me verify how this works in a real setting, so I thought I'd ask in case it's a dead end, so I wouldn't have to discover it the hard way. Thanks in advance! Best regards, Vlad
Re: scoping: both uri and classpath based
Vlad Dumitrescu (2012-10-05)
Of course, I just realized that I also need a kind of namespace-aware global scope... Hopefully it works to merge the results from these three different providers. Regards, Vlad
do-while loop in C++ is similar to the
while loop. Just as in a
while loop, a
do-while loop also has a condition which determines when the loop will break.
The only difference between a
do-while and a
while loop is that in the former the condition is evaluated once the code in the loop body has executed and in the latter, the condition is evaluated before the code in the loop body is executed.
In a
do-while loop, the
do keyword is followed by curly braces
{ } containing the code statements. Then the condition is specified for the
while loop.
do {
  // code statement(s)
} while (condition);
Note: Do not forget the ; after the while condition at the end.
Let’s have a look at the
do-while loop syntax in C++, using an example.
#include <iostream>
using namespace std;

int main() {
  int x = 10;
  do {
    cout << "X = " << x << endl;
    x++;
  } while (x < 20);
  return 0;
}
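To see the behavioural difference, here is a small sketch (the helper function names are mine, introduced just for this example) where the condition x < 20 is false from the very start: the while loop skips its body entirely, while the do-while loop still executes it once before checking.

```cpp
// while: the condition is checked first, so the body never runs here.
int while_iterations() {
  int x = 20, count = 0;
  while (x < 20) {
    count++;
    x++;
  }
  return count;  // 0: the body was skipped, the condition was false on the first check
}

// do-while: the body runs once before the condition is first checked.
int do_while_iterations() {
  int x = 20, count = 0;
  do {
    count++;
    x++;
  } while (x < 20);
  return count;  // 1: the body executed once despite the false condition
}
```

Calling both functions shows while_iterations() returning 0 and do_while_iterations() returning 1, which is exactly the one-iteration guarantee that distinguishes do-while.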
Can't add local project references between class libraries in .NET Core 1.1
I have 3 projects (.NET Core 1.1) in my solution.
Project Solution
--> Console application (.net core 1.1)
--> Class library 1(.net core 1.1)
--> Class library 2 (.net core 1.1)
The console application has 2 project references [I can add both references without any issues]:
-->Class library 1
--> Class library 2
But
"Class library 1" has dependency with "Class library 2" and I can add reference but still "class library 1 class's" could not identify the class library2 class.
showing error message as "The type and namespace assembly class library2 could not find" so kindly help to solve
Note: 1) All project versions are .NET Core 1.1, and I also checked the access modifiers; no issues.
2) I added the reference this way: [Project --> right click --> Add Reference --> Class library 2]
Kindly suggest any other way to add a reference in .NET Core.
I encountered a startup issue on Aerospike CE 3.9.1. I have a four-node cluster and use SSDs to store the data (no data in memory). I use less than 70% of disk and 70% of memory, but the "available" (contig-free) is only 10%, so I planned to add some storage files to Aerospike and cold-restart the cluster, also expecting the defrag to help me save some resources.
After I changed the config file, I restarted asd, but I fell into the defrag endless loop:
waiting for defrag: namespace devices percent 0 waiting for 10
It seems the server hangs.
according to
I lowered the "high-water-memory-pct" and restarted Aerospike, but I met another issue:
cold-start found no records eligible for eviction hwm breached but no records to evict
and it seems the file-loading process hangs; the percentage stops increasing and the node cannot start up.
do you have any advice? | https://discuss.aerospike.com/t/defrag-endless-loop-and-hwm-breath-issue-when-cold-restart/4277 | CC-MAIN-2018-30 | refinedweb | 157 | 63.02 |
In my article Extending the Journal Entry Voucher Upload (Excel), I wrote about Custom Codes (Fine Tuning Activity Account Assignment Types).
Account Assignments do allow additional reporting capabilities, but they come with the limitation that they are not available in periodic runs (GR/IR run, WIP clearing, revenue recognition). However, there is a way to get this functionality into Fixed Assets.
When depreciation is calculated, or automated or manual entries are made for Fixed Assets, a Journal Entry Voucher is created. The Fixed Asset itself does not support the Custom Code fields, but the Journal Entry does. So, assuming there are enough criteria to derive the Custom Code on the Fixed Asset, you can implement the AfterModify Action of Business Object AccountingEntry.
Snippet:
import ABSL;
this.CustomCode1 = "DE";
Dear Thomas,
Do you know if these limitations (GR/IR run, WIP clearing, revenue recognition and fixed assets) still exist in the 1711 version?
We see the possibility to use it on the purchasing side, but we didn't succeed in using it on the sales side.
Do you know if it is possible to use the Custom Codes in Sales Orders and Customer Invoices?
Thank you.
Best Regards,
Benjamin TRISTAN
Hello Benjamin,
as far as I know, the situation has not changed. In sales, account assignments will break in revenue recognition, but they should work – my colleague wrote an example on how to use these assignments when invoicing projects.
Best regards,
Thomas