When writing a script, you don't always get it right the first time. That was a worthwhile lesson our engineering team recently learned. Every month when Microsoft releases security updates, our engineering team must automate their installation. Generally, this is a simple task of running the patch executable with switches for silent remote deployment through HP's ProLiant Essentials Rapid Deployment Package (RDP), our deployment standard for Windows 2000 servers. However, there are times when a security update isn't a simple install. That was the case with the security update commonly known as the graphics device interface (GDI) update, associated with Microsoft Security Bulletin MS04-028, "Buffer Overrun in JPEG Processing (GDI+) Could Allow Code Execution." The GDI security update affected various versions and editions of Windows and its components, including the Microsoft .NET Framework 1.1 and Framework 1.0 Service Pack 2 (SP2).
In the side-by-side execution model, Framework 1.1 and 1.0 coexist on a system, which lets developers use either or both versions in their applications. However, in most cases, it's simpler to upgrade to version 1.1 and not support side-by-side execution, provided the applications compiled for version 1.0 continue to function as expected. (In most cases, applications compiled for version 1.0 will continue to work properly on version 1.1. For more information about the versions, see "Versioning, Compatibility, and Side-by-Side Execution in the .NET Framework.")
Because the GDI security update affected both Framework versions and we support side-by-side execution, it was essential for us to detect each server's Framework version and service pack level to ensure that the correct update would be applied. Moreover, because the Framework is an optional install in our Windows 2000 build, we also had to ensure that servers with no Framework installed didn't receive a GDI update. Thus, we decided to create a script that queries a server for its installed products and identifies the installed Framework versions and service pack levels if found. Although it took a few attempts to get it right, the result was worth it.
The First Attempt
There are multiple ways to confirm that a product is installed on a system. For example, you can read the registry, read an .ini file, or look for a specific file. Because both Framework versions were installed with Windows Installer (i.e., .msi packages), we found it easiest to use Windows Management Instrumentation's (WMI's) Win32_Product class and its properties. Calling WMI's ExecQuery method against the Win32_Product class returns instances of that class, one for each product installed with Windows Installer. We then enumerated and compared the names of the products installed on a system with the display names of Framework 1.1 and Framework 1.0.
As Listing 1 shows, the script begins by declaring the computer name on which the script is to run. The strComputer variable stores this value. Because the script is to connect to the local computer, strComputer is set to a period. Next, the script calls the GetObject method to connect to WMI's root\cimv2 namespace, which contains the Win32_Product class.
Callout A in Listing 1 shows the heart of the script. This code stores the Windows Query Language (WQL) query in the wqlQuery variable, then calls WMI's ExecQuery method to run that query. Running the query returns a collection of objects, which the code assigns to the colProducts variable. Using a For Each...Next statement, the script iterates through the collection. The Select Case statement in the For Each...Next loop compares the name of the product (exposed by the Win32_Product class's Name property) with Microsoft .NET Framework 1.1 and Microsoft .NET Framework (English) to confirm if any or both Framework products are installed on the server.
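Listing 1 itself isn't reproduced in this excerpt; a minimal sketch of the code just described (the variable names come from the text, the rest is hypothetical) might look like this:

```vb
strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")

' Return every product installed with Windows Installer.
wqlQuery = "SELECT * FROM Win32_Product"
Set colProducts = objWMIService.ExecQuery(wqlQuery)

For Each objProduct In colProducts
    Select Case objProduct.Name
        Case "Microsoft .NET Framework 1.1"
            WScript.Echo "Framework 1.1 is installed"
        Case "Microsoft .NET Framework (English)"
            WScript.Echo "Framework 1.0 is installed"
    End Select
Next
```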
The Second Attempt
The WQL query in callout A in Listing 1 is a general query that will return a collection of all installed .msi packages. Thus, the For Each...Next loop must iterate through the entire collection. Because we were interested in only querying for the Framework, we decided to optimize the query by modifying it to include a Where clause that limited the search to only the installed Framework products. Listing 2 shows the script with the optimized query.
We also optimized the ExecQuery call by including the wbemFlagReturnImmediately and wbemFlagForwardOnly flags, as callout A in Listing 2 shows. The wbemFlagReturnImmediately flag ensures that the WMI call returns immediately rather than waiting for the query to complete. The wbemFlagForwardOnly flag ensures that forward-only enumerators are returned; forward-only enumerators are generally faster than bidirectional enumerators. (To learn more about the ExecQuery method and its parameters and flags, see the WMI documentation.)
The Third Attempt
Easier is not always better, as we learned after running the script in Listing 2. Although it was easiest to write a script using the Win32_Product class, the class took up to 20 seconds to return the results, even with the optimized query. Because we needed to run this script on more than 200 servers, the long wait wasn't acceptable. So we decided to try a different approach: check the registry to confirm whether the Framework was installed on each server. The result is the script that Listing 3 shows.
Like the scripts in Listing 1 and Listing 2, the script in Listing 3 begins by setting the strComputerName variable to the local computer, but this is where the similarity ends. The differences begin with the WMI moniker statement in the GetObject call, which callout A in Listing 3 shows. In the other two scripts, the WMI moniker included the root\cimv2 namespace. For registry management, WMI provides the StdRegProv class. All WMI versions include and register the StdRegProv class, so WMI places this class in the root\default namespace by default. Thus, you have to connect to the root\default namespace (and not the root\cimv2 namespace) to use the StdRegProv provider.
StdRegProv exposes the GetStringValue method, which you can use to read the data from a registry entry whose value is of type REG_SZ. When you use the GetStringValue method, you must include four parameters in the following order:
Key tree root. The key tree root parameter specifies the target hive in the registry. Web Table 1 shows the UInt32 values (i.e., numeric constants) that represent the hives. The default hive is HKEY_LOCAL_MACHINE, which has a value of &H80000002.
Subkey. The subkey parameter specifies the registry path (not including the hive) to the registry entry that contains the value you want to retrieve.
Entry. The entry parameter specifies the name of the entry from which you're retrieving the value.
Out variable. The GetStringValue method reads the specified entry's value into a variable. You use the out variable parameter to specify the name of that variable.
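Putting the four parameters together, a hedged sketch of a GetStringValue call (the subkey is one of the three the article names; variable names are illustrative, not necessarily those in Listing 3):

```vb
Const HKEY_LOCAL_MACHINE = &H80000002

Set objReg = GetObject("winmgmts:\\.\root\default:StdRegProv")

strSubKey = "SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\" & _
            "{CB2F7EDD-9D1F-43C1-90FC-4F52EAE172A1}"
intResult = objReg.GetStringValue(HKEY_LOCAL_MACHINE, strSubKey, _
                                  "DisplayName", strDisplayName)

' A return value of 0 means the entry was read successfully.
If intResult = 0 Then
    WScript.Echo strDisplayName
End If
```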
All applications that can be uninstalled create a subkey under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall key. Thus, the Uninstall key is the base key under which the script looks for one of three subkeys: Microsoft .NET Framework (English), Microsoft .NET Framework Full v1.0.3705 (1033), or {CB2F7EDD-9D1F-43C1-90FC-4F52EAE172A1}. Microsoft .NET Framework (English) and Microsoft .NET Framework Full v1.0.3705 (1033) both represent Framework 1.0. When I first downloaded Framework 1.0 a while back, the subkey created was named Microsoft .NET Framework (English). If you download and install Framework 1.0 now, the subkey name is Microsoft .NET Framework Full v1.0.3705 (1033). (Microsoft sometimes changes names to make them more meaningful.) The {CB2F7EDD-9D1F-43C1-90FC-4F52EAE172A1} subkey represents Framework 1.1.
As callout B in Listing 3 shows, the script uses a constant and two variables to provide the registry information. First, the script sets the HKEY_LOCAL_MACHINE constant to &H80000002. That constant is used for the GetStringValue method's key tree root parameter. Because the script is checking three subkeys under the Uninstall key, the script uses two variables—strKey and arrKeysToCheck—to specify the subkey parameter. The strKey variable holds the string SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall. The arrKeysToCheck variable contains an array that includes three elements: the strings Microsoft .NET Framework (English), Microsoft .NET Framework Full v1.0.3705 (1033), and {CB2F7EDD-9D1F-43C1-90FC-4F52EAE172A1}.
In Listing 3, the GetStringValue method's entry parameter is DisplayName. Remember that the Win32_Product class's Name property exposes the name of the products installed by Windows Installer. In the registry, the DisplayName entry stores the Name property's value. GetStringValue's last parameter is the strDisplayName variable.
When the script runs, it iterates through the arrKeysToCheck array and calls the StdRegProv's GetStringValue method for each element in the array, as callout C in Listing 3 shows. After GetStringValue passes the DisplayName entry's value to the strDisplayName variable, the script displays the value.
On completion, the GetStringValue method returns a 0 if successful or some other value if an error occurred. Because the Framework was an optional install in our environment, there are instances in which none of the three subkeys exist in the registry. Such instances could cause the GetStringValue method to fail and throw an error, which in turn could cause the script to fail. To avoid this problem, the On Error Resume Next statement appears before calling the For Each...Next loop to ensure that the script continues to run, even if GetStringValue throws an error.
Success!
The script in Listing 3 provided our engineering team with a viable, fast solution that determined whether Framework 1.1, Framework 1.0, or both were installed on a machine. However, we also needed to determine the Framework service pack levels. So, we decided to build on Listing 3 and add code to check for service pack information.
After some investigation, we discovered that Microsoft doesn't provide a direct means to detect Framework service pack levels. However, there are indirect means. For Framework 1.1, you can check the SP registry entry under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322 subkey. A value of 1 means that Framework 1.1 SP1 is installed.
Checking the registry won't work for Framework 1.0. Instead, Microsoft suggests that you check the version of the mscorcfg.dll file. Web Table 2 shows the correlation between the file versions and the service pack levels.
With the service pack information in hand, we started writing the DetectFramework.vbs script that Listing 4 shows. The first part of DetectFramework.vbs should look familiar. It uses the StdRegProv class's GetStringValue method to read the DisplayName entry's value into the strDisplayName variable. However, rather than display strDisplayName's value, the script starts to go through a series of embedded If...Then...Else and Select Case statements. If the GetStringValue method returns a value of 0 (i.e., the method was successful and thus a Framework version is installed), the script proceeds to a Select Case statement that determines whether strDisplayName's value is the string Microsoft .NET Framework 1.1 or the string Microsoft .NET Framework (English).
Microsoft .NET Framework 1.1. When strDisplayName contains the string Microsoft .NET Framework 1.1, the script checks the SP registry entry for the value of 1. To do so, it uses StdRegProv's GetDWORDValue method, which reads data from a registry entry whose value is of type REG_DWORD. Like GetStringValue, GetDWORDValue requires four parameters specifying the hive, the subkey name, the entry name, and the name of the variable that will store the entry's value.
After GetDWORDValue reads the SP entry's value, the Select Case statement at callout A in Listing 4 compares that value against the value of 1. When a match occurs, the script displays the message Microsoft .NET Framework 1.1 SP1 found. When a match doesn't occur, the script displays the message Microsoft .NET Framework 1.1 found. In other words, although Framework 1.1 was installed, SP1 wasn't.
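A hedged sketch of that service pack check (the subkey and SP entry are the ones the article names; the variable names and messages are illustrative):

```vb
Const HKEY_LOCAL_MACHINE = &H80000002

Set objReg = GetObject("winmgmts:\\.\root\default:StdRegProv")
strSubKey = "SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322"
objReg.GetDWORDValue HKEY_LOCAL_MACHINE, strSubKey, "SP", dwSP

Select Case dwSP
    Case 1
        WScript.Echo "Microsoft .NET Framework 1.1 SP1 found"
    Case Else
        WScript.Echo "Microsoft .NET Framework 1.1 found"
End Select
```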
Microsoft .NET Framework (English). When strDisplayName contains the string Microsoft .NET Framework (English), the script checks the version of the mscorcfg.dll file. The Microsoft Scripting Runtime Library's FileSystemObject object provides the GetFileVersion method, which returns the version of a given file. This method requires only one parameter: the path to the target file.
Before using GetFileVersion, though, the script checks for the file's existence. Although mscorcfg.dll must exist on the system if Framework 1.0 is installed, it's good scripting practice to check for a file's existence before attempting to get its version number. This practice ensures that the script won't stop with a File Not Found error at runtime.
To check for the mscorcfg.dll file's existence, the script uses the FileSystemObject object's FileExists method, as the code at callout B in Listing 4 shows. Like the GetFileVersion method, the FileExists method's only parameter is the path to the target file. If the file exists, the script calls the GetFileVersion method.
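A sketch of that existence-then-version check (the mscorcfg.dll path is an assumption for illustration; Listing 4 presumably builds it from the server's actual Framework 1.0 install directory):

```vb
Set objFSO = CreateObject("Scripting.FileSystemObject")

' Assumed Framework 1.0 path on a Windows 2000 server.
strFile = "C:\WINNT\Microsoft.NET\Framework\v1.0.3705\mscorcfg.dll"
If objFSO.FileExists(strFile) Then
    WScript.Echo objFSO.GetFileVersion(strFile)
End If
```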
The Select Case statement at callout C in Listing 4 compares the version number that GetFileVersion returns with the four possible version numbers shown in Web Table 2. When a match occurs, the script displays the corresponding message that identifies the service pack level.
To confirm that DetectFramework.vbs correctly detects the installed Framework version and service pack level, we tested the script by running it on a server installed with Framework 1.0 only, a server installed with Framework 1.1 only, a server installed with both versions, and a server that didn't have Framework installed. Further, after installing the latest service pack for each Framework version, we tested the script to ensure that we received the expected results. And expected results are what we received.
Challenging But Worth It
Although it took a lot of trial and error to create DetectFramework.vbs, it was worth the effort. And, as we state in engineering, a world without challenges would be a very boring world.
https://www.itprotoday.com/devops-and-software-development/don-t-expect-get-it-right-first-time
15. re: PL/SQL notification / DBMS_AQ.REGISTER (253424, Oct 2, 2002 12:14 PM, in response to 253424)
Hallo Kamal,
in this moment the notification works on our sun instances.
the solution looks crazy.
We compare everyting on winXP and sun/solaris servers oracle's settings and the diference was in table space of created ques. Excuse my english please. we drop old ques, change default table spaces for users to system table space and create new ques.
single consumer and multi consumer ques with propagation to another instances works perfectly.
we turn back the changes and restore table spaces and que notifications still works.
It looks crazy, but it is true.
My test users on oracle have got dba rights and in this moment I will be removing this access rights. I hope, that never bad will meet me.
Can you tell me what are you thinking about this?
Regrads, Zdenek.
16. re: PL/SQL notification / DBMS_AQ.REGISTER (Kamal Kishore, Oct 3, 2002 2:45 AM, in response to 253424)
it looks strange to me that something like this would actually make it working!!!
Did Oracle support tell you to do this or you found out on your own?
On a side note, The DBMS_AQ.LISTEN procedure does not use any resources while it is
waiting for messages to arrive.
Even when you use the AQ callback, you will write a PL/SQL procedure which will be called
automatically by Oracle when a message arrives. In order to use the LISTEN procedure
(instead of the callback), all you would have to do is put a call to DBMS_AQ.LISTEN on top of
your pl/sql procedure. As soon as a message arrives, the procedure will wake up and start
running. While it is waiting for message, no resources are being wasted.
But the fact remains, that AQ callback is a much better and cleaner solution for notification.
Now that is working for you, you may not need to look at DBMS_AQ.LISTEN (but i thought that
I would mention it anyway, for informational purposes).
Cheers,
17. re: PL/SQL notification / DBMS_AQ.REGISTER (253424, Oct 4, 2002 4:04 PM, in response to Kamal Kishore)
Hi,
I understand to lister features very well. I am using it, but for back waiting for answer in anothers ques. It work very well.
So,the notifications still runs wery well. No changes we did, only the table spaces. I thing so, taht it is very crazy. This solutions we did by our own. No support was needed.
So, have nice weekend,
By, Zdenek.
18. Re: PL/SQL notification / DBMS_AQ.REGISTER (user13376864, Nov 11, 2010 6:50 AM, in response to Kamal Kishore)
Hi,
I am also running into same issue. Can you please help me?
I have posted a new thread stating my issue at Dequeue using AQ_AGENT
Thanks,
Ritika
19. Re: PL/SQL notification / DBMS_AQ.REGISTER (spur230, Feb 28, 2013 9:20 PM, in response to Kamal Kishore)
Thank you Kamal Kishore.
I was able to resolve my problem with the example you provided.
While registering I was using DBMS_AQ.NAMESPACE_ANONYMOUS instead of DBMS_AQ.NAMESPACE_AQ.
select * from USER_SUBSCR_REGISTRATIONS showed me the namespace.
https://community.oracle.com/message/10882473
The kids at coding club have decided that we should write an implementation of pong in python. I took a look at some options, and decided tkinter was the way to go. Thus, I present a pong game broken up into stages which are hopefully understandable to an 11 year old: Operation Terrible Pong.
Category: Coding_club
More coding club
This is the second post about the coding club at my kid’s school. I was away for four weeks travelling for work and then getting sick, so I am still getting back up to speed with what the kids have been up to while I’ve been away. This post is an attempt to gather some resources that I hope will be useful during the session today — it remains to be seen how this maps to what the kids actually did while I was away.
Coding club day one: a simple number guessing game in python
I’ve recently become involved in a new computer programming club at my kids’ school. The club runs on Friday afternoons after school and is still very new so we’re still working through exactly what it will look like long term. These are my thoughts on the content from this first session. The point of this first lesson was to approach a programming problem where every child stood a reasonable chance of finishing in the allotted 90 minutes. Many of the children had never programmed before, so the program had to be kept deliberately small. Additionally, this was a chance to demonstrate how literal computers are about the instructions they’re given — there is no room for intuition on the part of the machine here, it does exactly what you ask of it.
The task: write a python program which picks a random number between zero and ten. Ask the user to guess the number the program has picked, with the program telling the user if they are high, low, or right.
We then brainstormed the things we’d need to know how to do to make this program work. We came up with:
- How do we get a random number?
- What is a variable?
- What are data types?
- What is an integer? Why does that matter?
- How do we get user input?
- How do we do comparisons? What is a conditional?
- What are the possible states for the game?
- What is an exception? Why did I get one? How do I read it?
With that done, we were ready to start programming. This was done with a series of steps that we walked through as a group — let’s all print hello world. Now let’s generate a random number and print it. Ok, cool, now let’s do input from a user. Now how do we compare that with the random number? Finally, how do we do a loop which keeps prompting until the user guesses the random number?
For each of these a code snippet was written on the whiteboard and explained. It was up to the students to put them together into a program which actually works.
Due to limitations in the school’s operating environment (no local python installation and repl.it not working due to firewalling) we used codeskulptor.org for this exercise. The code that the kids ended up with looks like this:
import random

# Pick a random number
number = random.randint(0, 10)

# Now ask for guesses until the correct guess is made
done = False
while not done:
    guess = int(raw_input('What is your guess?'))
    print 'You guessed: %d' % guess
    if guess < number:
        print 'Higher!'
    elif guess > number:
        print 'Lower!'
    else:
        print 'Right!'
        done = True
The plan for next session (tomorrow, in the first week of term two) is to recap what we did at the end of last term and explore this program to make sure everyone understands how it works.
https://www.madebymikal.com/category/coding_club/
Setup
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Introduction

Padding sequence data
Masking
Now that all samples have a uniform length, the model must be informed that some part of the data is actually padding and should be ignored. That mechanism is masking.
There are three ways to introduce input masks in Keras models:
- Add a keras.layers.Masking layer.
- Configure a keras.layers.Embedding layer with mask_zero=True.
- Pass a mask argument manually when calling layers that support this argument (e.g. RNN layers).
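The Keras code samples for this section were lost in extraction, so here is a plain-Python sketch of the underlying idea (no TensorFlow required): padding fills short sequences with 0, and the mask marks exactly the non-pad timesteps, which is what Embedding(mask_zero=True) or a Masking layer produces for you. The example sequences are illustrative.

```python
def pad_sequences(seqs, value=0):
    # Pad ragged sequences to the length of the longest one.
    maxlen = max(len(s) for s in seqs)
    return [s + [value] * (maxlen - len(s)) for s in seqs]

def compute_mask(padded, value=0):
    # True wherever the timestep holds real data, False where it is padding.
    return [[tok != value for tok in row] for row in padded]

seqs = [[711, 632, 71],
        [73, 8, 3215, 55, 927],
        [83, 91, 1, 645, 1253, 927]]
padded = pad_sequences(seqs)
mask = compute_mask(padded)
print(padded[0])  # [711, 632, 71, 0, 0, 0]
print(mask[0])    # [True, True, True, False, False, False]
```

Downstream layers can then skip the False timesteps instead of treating the pad value as data.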
Mask-generating layers: Embedding and Masking
Mask propagation in the Functional API and Sequential API
Passing mask tensors directly to layers

Supporting masking in your custom layers

Opting-in to mask propagation on compatible layers

Writing layers that need mask information
https://www.tensorflow.org/guide/keras/masking_and_padding
Hello,
For an internship, I need to program the Arduino function SPI.transfer(data) (...) in Atmel Studio.
I use the microprocessor SAM3X8E (..., page 676-707 for the SPI part).
I did this function :
static inline uint8_t spi_transmit(uint8_t data)
{
/* Transmission of the data */
SPI0->SPI_TDR = data;
/* wait the end of the transmission*/
while(SPI0->SPI_SR & SPI_SR_RDRF == 0);
/* return the data receive */
return (SPI0->SPI_RDR & SPI_RDR_RD_Msk);
}
(Note: SPI_RDR_RD_Msk is a mask that selects the 16 bits of the RDR register that interest me.)
But this function doesn't do the transmission I think...
Can you see anything bad in this function ?
Thank you !
Have you correctly configured the SPI (including clocks, pins, etc) before calling this function?
What do you mean, "I think" ?
Use an oscilloscope or logic analyser to know for sure
Please see Tip #1 in my signature, below, for how to properly post source code:
Yeah, the SPI is correctly configured.
I say "I think" because I need this function to drive a screen, this is the only function left to change to make the screen work, and it still doesn't work.
I don't know what to measure with an oscilloscope to see if the transmission is made or not...
Then get your supervisor/mentor to show you
This is not purely a software exercise - so being able to use hardware debug tools is an essential part of the process.
It is unreasonable to ask you to do this task if you don't know how to use these tools.
He is not here today, and tomorrow I will not be here... But OK, I will see with him what I need to measure!
If someone sees a software error in my function, don't be shy!
Have you looked at the Application Notes on the Product Page?......
Yeah, it's included in the datasheet.
I measured with my oscilloscope the difference between a working code with the screen on and mine, and there are 2 differences:
The pin SPI0_MOSI is at 1 in the working code (and in mine is 0), and another pin whose purpose I don't really understand (Screen_GPIO3) is at 0 in the working code (and in mine is 1).
There will be more information in the Application Note.
You should test your code without the screen attached first ...
Can you post traces from your scope?
(screen capture is far better than photos)
Tip #1 also covers posting pictures.
Without the screen my code works perfectly. This is why I am trying to plug in a screen now. No, I can't post traces...
https://www.avrfreaks.net/comment/3145381
Why is it important to constantly monitor fixed assets
Why is it important to constantly monitor all of your fixed assets? What role do these assets play in your organization's financial well-being?
Why is it important to constantly monitor all of your fixed assets? What role do these assets play in your organization's financial well-being?
How do managers go about making segment or product line elimination decisions?
Equipment that cost $66,000 and has accumulated depreciation of $30,000 is exchanged for equipment with a fair value of $48,000 and $12,000 cash is received. The exchange lacked commercial substance. The gain to be recognized from the exchange is a. $4,800 gain. b. $6,000 gain. c. $18,000 gain. d. $24,000 gain. The new
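A worked check of the exchange problem above, sketched in Python (not part of the original posting): when an exchange lacks commercial substance and cash is received, only the portion of the gain proportional to the cash received is recognized.

```python
cost = 66_000
accumulated_depreciation = 30_000
book_value = cost - accumulated_depreciation               # 36,000

fair_value_received = 48_000
cash_received = 12_000
total_consideration = fair_value_received + cash_received  # 60,000

total_gain = total_consideration - book_value              # 24,000
# Lacking commercial substance, recognize only the cash portion of the gain.
recognized_gain = total_gain * cash_received / total_consideration
print(recognized_gain)  # 4800.0, i.e. answer (a)
```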
Capitalization vs. Expense. Rentals R Us incurs the following expenditures on an apartment building it owns: Item Amount Replace the roof $25,000 Repaint the exterior 7,000 Install new locks
I:6-33 For or From AGI Deductions. Roberta is an accountant employed by a local firm. During the year, Roberta incurs the following unreimbursed expenses: Item Amount Travel to client locations $750 Subscriptions to professional journals
1. Wateredge Corporation has budgeted a total of $361,800 in costs and expenses for the upcoming quarter. Of this amount, $45,000 represents depreciation expense and $7,300 represents the expiration of prepayments. Wateredge's current payables balance is $265,000 at the beginning of the quarter. Budgeted payments on current paya
During the fiscal year ended September 30, 2011, Worrell, Inc., had a 2-for-1 stock split and a 5% stock dividend. In its annual report for 2011, the company reported earnings per share for the year ended September 30, 2010, on a restated basis, of $0.60. Calculate the originally reported earnings per share for the year ende
Joe owes willy $5,000 from an old gambling debt. Discuss the tax treatment for the $5,000 to both Joe and Willy.
The instructions accompanying the Federal income tax return (Form 1040) provide examples of items that must be included in gross income, but do not list all possible types of income subject to tax. Would these instructions be improved if they included a complete listing of taxable sources of income? Indicate the source of your o
Question 1: What are some examples of how alternative income statements are used in decision making? What are the two types of approaches in using alternative income statements? What is cost behavior and how does it impact a financial analysis? What processes does your company or a previous employer use to analyze cost behavior
During the year ended December 31, 2011, Gluco, Inc., split its stock on a 3-for-1 basis. In its annual report for 2010, the firm reported net income of $3,703,920 for 2010, with an average 268,400 shares of common stock outstanding for that year. There was no preferred stock. Required: (a) What amount of net income for 20
** Please see the attached file for the complete problem details ** For each of the items, calculate the cash sources or cash uses that should be recognized on the statement of cash flows for the year ended December 31, 2010.
Ringemup, Inc., had net income of $473,400 for its fiscal year ended October 31, 2010. During the year the company had outstanding 38,000 shares of $4.50, $50 par value preferred stock, and 105,000 shares of common stock. Required: Calculate the basic earnings per share of common stock for fiscal 2010. (Round your answer t
A. Calculate the gross profit ratio for each of the past three years. b. Assume that Intel's net revenues for the first four months of 2009 totaled $12.6 billion. Using the 2008 gross profit ratio from above, calculate an estimated cost of goods sold and gross profit for the four months. See attachment for data and detail
See attached file. Please put in Solver Excel - show steps for Solver.
Calculating Gross Profit from Installment Sales Charter Corporation, which began business in 2011, appropriately uses the installment sales method of accounting for its installment sales. The following data was obtained for sales made during 2011 and 2012: 2011 2012 Installment sales 362,000 345,000
What is the likelihood that the market and book values of an asset or liability will differ? Why is this important? Why is net income an unreliable indicator of a health care organization's cash position?
Troopers Medical Labs, Inc. began operations 5 years ago producing stetrics, a new type of instrument it hoped to sell to doctors, dentists, and hospitals. The demand for stetrics far exceeded initial expectations, and the company was unable to produce enough to meet demand. The company was manufacturing its product on equip
1- Controllable margin is used as a refined measure of strategic business unit reporting that is best described as: A. Margins reported to strategic business unit managers related to revenues and costs specifically within the managers' control and responsibility. B. Contribution margin net of controllable fixed costs (those co add effe
In the United States, about one in every four companies uses variable costing for internal reporting purposes. These companies must make adjustments to these reports for external-reporting purposes. Explain what these adjustments are and why they need to be made.
Concerns over the recognition, administrative, and operational lags as well as the concern that discretionary fiscal policy is subject to political biases have caused some economists to believe Congress and the president should do nothing in the face of a recession. Even if they are correct, is it realistic to expect the public
On January 1, 2010, Metco, Inc., had 264,000 shares of $2 par value common stock issued and outstanding. On March 15, 2010, Metco, Inc., purchased for its treasury 3,100 shares of its common stock at a price of $39.00 per share. On August 10, 2010, 640 of these treasury shares were sold for $45.00 per share. Metco's directors de
Source: https://brainmass.com/business/accounting/pg90
docopt 0.6.1-b.6
docopt implementation in D
docopt creates beautiful command-line interfaces

To use this package, run the following command in your project's root directory::

    dub add docopt
.. code:: d
import std.stdio;
import docopt;
int main(string[] args) {
    auto doc = "Naval Fate.

Usage:
  naval_fate ship new <name>...
  naval_fate ship <name> move <x> <y> [--speed=<kn>]
  naval_fate ship shoot <x> <y>
  naval_fate mine (set|remove) <x> <y> [--moored|--drifting]
  naval_fate -h | --help
  naval_fate --version

Options:
  -h --help     Show this screen.
  --version     Show version.
  --speed=<kn>  Speed in knots [default: 10].
  --moored      Moored (anchored) mine.
  --drifting    Drifting mine.
";

    auto arguments = docopt.docopt(doc, args[1..$], true, "Naval Fate 2.0");
    writeln(arguments);
    return 0;
}
D port details
This is a port of `docopt.py <>`_, which supports all features of the latest version.
Installation
Use
dub <>_.
.. code:: json
{ "dependencies": { "docopt": ">=0.6.1" } }
docopt is tested with D 2.070.
Testing
You can run unit tests using the command:
make test
API
.. code:: d
import docopt;
.. code:: d
public ArgValue[string] docopt(string doc, string[] argv,
bool help = false, string vers = null, bool optionsFirst = false)
docopt takes 2 required and 3 optional arguments:
``doc`` is a string that contains a help message that will be parsed to create the option parser. The simple rules of how to write such a help message are given in the next sections. Here is a quick example of such a string:
.. code:: d
    string doc = "Usage: my_program [-hso FILE] [--quiet | --verbose] [INPUT ...]

    -h --help    show this
    -s --sorted  sorted output
    -o FILE      specify output file [default: ./test.txt]
    --quiet      print less text
    --verbose    print more text
    ";
``argv`` is a command-line argument array, most likely taken from the ``string[] args`` passed to your ``main``. Since the program name is in ``args[0]``, you should pass ``args[1..$]``. Alternatively you can supply an array built some other way.

``help``, by default ``false``, specifies whether the parser should automatically print the help message (supplied as ``doc``) and terminate when it encounters ``-h`` or ``--help``.

``vers`` is a string, e.g. ``"2.1.0rc1"``; by default ``null``. It is an optional argument that specifies the version of your program. If supplied, then (assuming the ``--version`` option is mentioned in the usage pattern) when the parser encounters the ``--version`` option, it will print the supplied version and terminate.
Note: when ``docopt`` is set to automatically handle the ``-h``, ``--help`` and ``--version`` options, you still need to mention them in the usage pattern for this to work, and so that your users know about them.
``optionsFirst`` is ``false`` by default. If set to ``true``, it disallows mixing options and positional arguments: after the first positional argument, all arguments are interpreted as positional even if they look like options. This can be used for strict compatibility with POSIX, or if you want to dispatch your arguments to other programs.
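The ``optionsFirst`` behavior is easy to picture with a small standalone sketch (Python here, purely illustrative; this is not the library's parser): once the first positional argument is seen, everything after it is treated as positional, even if it looks like an option.

```python
def split_options_first(argv):
    """Partition argv the way optionsFirst=true does: everything from
    the first positional argument onward is positional, even if it
    starts with '-'. Hypothetical helper, not part of docopt."""
    options, positionals = [], []
    for i, arg in enumerate(argv):
        if arg.startswith("-") and arg != "-":
            options.append(arg)
        else:
            # First positional argument: the rest is positional too,
            # which is what lets you dispatch to sub-programs.
            positionals.extend(argv[i:])
            break
    return options, positionals

print(split_options_first(["--verbose", "run", "--force", "file.txt"]))
# (['--verbose'], ['run', '--force', 'file.txt'])
```

This is also why ``optionsFirst`` gives you strict POSIX-style behavior: options meant for a dispatched sub-command pass through untouched.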
The return value is a simple associative array with options, arguments and commands as keys, spelled exactly like in your help message. Long versions of options are given priority. For example, if you invoke the top example as::
naval_fate ship Guardian move 100 150 --speed=15
the return dictionary will be:
.. code:: s
['--drifting': false, 'mine': false, '--help': false, 'move': true, '--moored': false, 'new': false, '--speed': 15, 'remove': false, '--version': false, 'set': false, '<name>': ['Guardian'], 'ship': true, '<x>': 100, 'shoot': false, '<y>': 150]
Help message format
Help message consists of 2 parts:

- Usage pattern, e.g.::

    Usage: my_program [-hso FILE] [--quiet | --verbose] [INPUT ...]

- Option descriptions, e.g.::

    -h --help    show this
    -o FILE      specify output file [default: ./test.txt]

Their format is described below; other text is ignored. A minimal example:
.. code:: d
auto doc = "Usage: my_program ";
The first word after
usage: is interpreted as your program's name.
You can specify your program's name several times to signify several
exclusive patterns:
.. code:: d
auto doc = " Usage: my_program FILE my_program COUNT FILE ";
Each pattern can consist of the following elements:
- <arguments>, ARGUMENTS. Arguments are specified as either upper-case words, e.g. ``my_program CONTENT-PATH``, or words surrounded by angular brackets: ``my_program <content-path>``.
- --options. Options are words started with dash (``-``), e.g. ``--output``, ``-o``. You can "stack" several one-letter options, e.g. ``-oiv``, which is the same as ``-o -i -v``. Options can have arguments, e.g. ``--input=FILE`` or ``-i FILE``.
- commands are words that do not follow the conventions above of ``--options`` or ``<arguments>`` or ``ARGUMENTS``, plus two special commands: dash "``-``" and double dash "``--``" (see below).
Use the following constructs to specify patterns:
- [ ] (brackets) optional elements. e.g.:
my_program [-hvqo FILE]
- ( ) (parens) required elements. All elements that are not put in [ ] are also required, e.g.:
``my_program --path=<path> <file>...`` is the same as
``my_program (--path=<path> <file>...)``. (Note: "required options" might not be a good idea for your users.)
- | (pipe) mutually exclusive elements. Group them using ( ) if one of the mutually exclusive elements is required:
my_program (--clockwise | --counter-clockwise) TIME. Group them using [ ] if none of the mutually-exclusive elements are required:
my_program [--left | --right].
- ... (ellipsis) one or more elements. To specify that an arbitrary number of repeating elements could be accepted, use ellipsis (``...``), e.g. ``my_program FILE ...`` means one or more ``FILE``-s are accepted. If you want to accept zero or more elements, use brackets, e.g.: ``my_program [FILE ...]``.
- [-] (single dash), by convention, signifies that ``stdin`` is used instead of a file. To support this add "``[-]``" to your usage patterns. "``-``" acts as a normal command.
If your pattern allows to match argument-less option (a flag) several times::

    Usage: my_program [-v | -vv | -vvv]

then the number of occurrences of the option will be counted. The same works for commands.

If your usage pattern allows to match a same-named option with argument, or a positional argument, several times, the matched arguments will be collected into a list::

    Usage: my_program <file> <file> --path=<path>...

I.e. invoked with several files and several ``--path`` values, both ``<file>`` and ``--path`` will hold a list of values. (A default value in an option description can likewise be split into a list on whitespace.)

If you want to disallow mixing options and positional arguments, use the ``optionsFirst`` parameter (described in the API section above). To get you started quickly we implemented a subset of the git command-line interface as an example:
examples/git
<>_
Changelog
docopt follows
semantic versioning <>_. The
first release with stable API will be 1.0.0 (soon). Until then, you
are encouraged to specify explicitly the version in your dependency
tools, e.g.::
- 0.6.1-b.1 Initial release in D.
- 0.6.1-b.2 Updates for D 2.067
- 0.6.1-b.3 Updates for D 2.067.1, gdc and ldc2
- 0.6.1-b.5 Updates for D 2.070
- 0.6.1-b.6 Updates for D 2.072
- Registered by Bob Tolbert
- 0.6.1-b.6 released 2 years ago
- docopt/docopt.d
- github.com/rwtolbert/docopt.d
- MIT
- Authors:
-
- Dependencies:
- none
Source: https://code.dlang.org/packages/docopt
Largest Sum Contiguous Subarray
Write an efficient program to find the sum of the contiguous subarray, within a one-dimensional array of numbers, which has the largest sum (Kadane's algorithm):
#include<stdio.h>

int maxSubArraySum(int a[], int size)
{
    int max_so_far = 0, max_ending_here = 0;
    int i;
    for (i = 0; i < size; i++)
    {
        max_ending_here = max_ending_here + a[i];
        if (max_ending_here < 0)
            max_ending_here = 0;
        if (max_so_far < max_ending_here)
            max_so_far = max_ending_here;
    }
    return max_so_far;
}

/* Driver program to test maxSubArraySum */
int main()
{
    int a[] = {-2, -3, 4, -1, -2, 1, 5, -3};
    int n = sizeof(a)/sizeof(a[0]);
    int max_sum = maxSubArraySum(a, n);
    printf("Maximum contiguous sum is %d\n", max_sum);
    getchar();
    return 0;
}
Notes:
The algorithm doesn't work when all numbers are negative: it simply returns 0. To handle this we can add an extra phase before the actual implementation that checks whether all numbers are negative and, if so, returns the maximum of them (the smallest in terms of absolute value). There may be other ways to handle it though.

The above program can be optimized further if we compare max_so_far with max_ending_here only when max_ending_here is greater than 0.
int maxSubArraySum(int a[], int size)
{
    int max_so_far = 0, max_ending_here = 0;
    int i;
    for (i = 0; i < size; i++)
    {
        max_ending_here = max_ending_here + a[i];
        if (max_ending_here < 0)
            max_ending_here = 0;
        /* Do not compare for all elements. Compare only
           when max_ending_here > 0 */
        else if (max_so_far < max_ending_here)
            max_so_far = max_ending_here;
    }
    return max_so_far;
}
Time Complexity: O(n)
Algorithmic Paradigm: Dynamic Programming
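The note above about all-negative arrays can be folded into the scan itself by seeding both accumulators from the first element instead of 0. A sketch (Python, illustrative; the C code above uses the 0-seeded variant):

```python
def max_subarray_sum(a):
    """Kadane's algorithm, initialized from a[0] so that an array of
    all negative numbers returns its largest element instead of 0."""
    max_so_far = max_ending_here = a[0]
    for x in a[1:]:
        # Either extend the previous best-ending-here run or restart at x.
        max_ending_here = max(max_ending_here + x, x)
        max_so_far = max(max_so_far, max_ending_here)
    return max_so_far

print(max_subarray_sum([-2, -3, 4, -1, -2, 1, 5, -3]))     # 7
print(max_subarray_sum([-2, -3, -4, -1, -2, -1, -5, -3]))  # -1
```

This keeps the O(n) time bound stated above, with no extra pass over the array.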
Now try below question
Given an array of integers (possibly some of the elements negative), write a C program to find out the *maximum product* possible by multiplying 'n' consecutive integers in the array, n <= ARRAY_SIZE. Also give where in the array this sequence of n integers starts.
References:
If an array consists of all negative numbers, then what should the answer be: 0, or the smallest (in absolute value) negative number in the array?
If we use memoization, then we do not need to consider the extra case of all numbers being negative.
#include <algorithm>
#include <iostream>
#include <limits>
using namespace std;
int maxSubsequenceSum(int arr[], int n)
{
if(n==1)
return arr[0];
int sum[n];
sum[0] = arr[0];
int maxSum = numeric_limits<int>::min();
for (int i = 1; i < n; ++i)
{
sum[i] = max(sum[i-1]+arr[i], arr[i]);
if(sum[i] > maxSum)
maxSum = sum[i];
}
return maxSum;
}
int main()
{
int arr[] = {-2,-3,-4,-1,-2,-1,-5,-3};
int n = sizeof(arr)/sizeof(arr[0]);
cout << maxSubsequenceSum(arr, n) << endl;
return 0;
}
For taking care of cases where number are not positive... do this..
If a represents current max. sum, and b represents max. sum so far
then at each element arr[i] do following :
while (i < n)
{
    a = max(a + arr[i], arr[i]);
    b = max(a, b);
    i++;
}
initially a=b=arr[0];
at end b will give you max. contiguous sum.
A much better and shorter approach:
Also the same can easily be modified to keep track of the sub-array indices.
The above one can be used for any type of input.
+ve, -ve and a mix.
Also to handle the edge cases the following can be the start of the method :
public int kadane (int[] array) {
if (array == null || array.length == 0) return Integer.MIN_VALUE;
if (array.length == 1) return array[0];
..
}
Very nice and short one. thanks...
@geeksforgeeks: solution by mohitk looks elegant, simple, covers all cases (as far as I can see) and seems to be providing optimal sol. for all cases. My request to you is to mention the above solution in your main article (as an alternate sol. or otherwise),if possible, so that other will also believe on the correctness of this sol. If you think otherwise about the correctness of this sol. then please let me your comments.
@Mohitk: Thanks buddy for sharing this sol.
@Pavi,
thanks.
Yups, submitted a request for the same to the moderators.
Lets see whether they also feel that my solution is correct.
Nope, this is actually wrong. For the present case, your code will give the answer 9, which is very clear. For reference, just see:
Hope it helps!
@code123
For the link u pasted and the instance for which its solved, I see 9 as the only max solution for input {-2,-3,4,-1,-2,5,3}.
How is it wrong? and what max do u expect it to give ?
I think you got it wrong. The input considered here at geeksforgeeks is this: {-2, -3, 4, -1, -2, 1, 5, -3}
and its diff from the one at ideone.
Hope it helps now
A little bit modification on Kadane's algo will work
// A perfect code, it also take into count if all numbers are negative
#include <iostream>
#include <cstdio>
#include <climits>
using namespace std;
int main()
{
int n;
printf("Enter size of array : ");
scanf("%d",&n);
int a[n],i;
printf("Enter elements of array : ");
for(i=0;i<n;i++)
scanf("%d",a+i);
int start,end,sum,maxsum,ind;
sum=start=ind=0;
maxsum=INT_MIN;
for(i=0;i<n;i++)
{
    sum+=a[i];
    if(a[i]>sum)
{
sum=a[i];
ind=i;
}
if(sum>maxsum)
{
start=ind;
maxsum=sum;
end=i;
}
}
printf("sum=%d, start=%d, end=%d\n",maxsum,start+1,end+1);
return 0;
}
int max_add(int ary[])
{
int i,cur_max=0,max_value=0,c=0,min=INT_MIN;
for(i=0;i<5;i++)
{
cur_max+=ary[i];
if(ary[i]<0)
{
c++;
if(min<ary[i])
min=ary[i];
}
if(cur_max<0)
cur_max=0;
else
if(max_value<cur_max)
max_value=cur_max;
if(c==5)
max_value=min;
}
return max_value;
}
@All: shouldn't it be that when all numbers are negative, the maximum of them is the largest subarray - that is, a subarray of length 1 - or am I missing something?
Solution for max product. The idea is to precompute the number of negative numbers after a given index. If there is 0 is between, we restart the count.
I think it does not work if one of the elements in array is 0
sorry yaar,it works
This doesn't work for input 2, -1, 3, 4,-5,-6,0,1,3,-2,-1.
It gives 120 as output. But the solution is 360
infact it is giving the correct answer because we have to find "maxProduct subArray" not the maximum product of the array..
I don't think the code works for following example,
{-2, 1, 1, -5, -15, 5}
The solution should be -5*-15 = 75, but the output is 10
class MaxSubarray {
public static void main(String[] args) {
int[] intArr = {-2, -3, 4, -1, -2, 1, 5, -3};
//int[] intArr = {3,1,2,-4,5,9,-11,6,7};
//int[] intArr = {-1,-2,-3,-4,-5,-99};
//int[] intArr = {-1,0,0,0,0,0};
int maxSubArraySum = printMaxSumArr(intArr,intArr.length);
System.out.println("maxSubArraySum->"+maxSubArraySum);
}
public static int printMaxSumArr(int[] a, int n) {
int maxSum=0;
int minNeg=0;
boolean allNegative=true;
int[] saveSum = new int[n];
saveSum[0]=a[0];
if(a[0]>0){
allNegative =false;
}
else {
minNeg=a[0];
}
for(int i = 1; i < n; i++) {
    if(a[i] >= 0){
allNegative=false;
}
else {
minNeg=(minNeg>a[i]?minNeg:a[i]);
}
saveSum[i]=max(saveSum[i-1]+a[i],a[i]);
if(saveSum[i]>maxSum) {
maxSum=saveSum[i];
}
}
if(allNegative){
return minNeg;
}
return maxSum;
}
private static int max(int i, int j) {
return (i>j?i:j);
}
}
This is the correct one..
No cases required.
What if I want to print the elements from which the Largest Sum is formed...
For ex: int a[]={2,4,-4,-1,2,8,-30,3,8};
should print
2,4,-4,-1,2,8
AND
3,8
its working for all the cases... as I have checked... if any mistake then plse let me know..
hey nice, this looks the most optimized version
I think 1 correction should be there to update endindex=i when updating startindex=i;
consider last index a[n-1]>sum . then startindex=n-1; while endindex <n-1
or to remove unwanted extra condition that may be needed when printing sol.
/* Paste your code here (You may delete these lines if not writing code) */
void findMaxSubArray(int *ar,int n,int &maxSum,int & startIndex, int & endIndex)
{
maxSum = INT_MIN; // minimum integer value, use #include<climits> (maxSum is a reference parameter, so no redeclaration)
int sum = ar[0];
for(int i = 1;i<n;i++)
{
sum+=ar[i];
if( ar[i] > sum)
{ startIndex = i;
sum = ar[i];
endIndex = i;
}
if(sum>maxSum)
{
maxSum = sum;
endIndex = i; }
}
}
your code will give incorrect answer.
test case : 4 5 -15 1 2
nice and amazing
@geeksforgeeks
If all elements are negative the function will return zero ,
interchanging the if and elseif should work.
Correct me if i am wrong
if (max_so_far < max_ending_here)
max_so_far = max_ending_here;
else if(max_ending_here < 0)
max_ending_here = 0;
It's not if/else-if, rather if/if.
if(condition)
..
if(condition)
..
and I don't think interchanging them will affect the working of program.
Code Can Work for Negative Numbers by placing a simple flag.
I guess there is better way to handle all negative numbers.
The following algo will take in account negative as well as positive numbers.
Initialize:
max_so_far = a[0]
max_ending_here = 0
Loop for each element of the array
(a) max_ending_here = max_ending_here + a[i]
(b) if(max_ending_here < a[i])
max_ending_here = a[i]
(c) if(max_so_far < max_ending_here)
max_so_far = max_ending_here
return max_so_far
And so subsequently the code would change automatically.. What say??
yupp,...perfect
Agree.
Is it possible to get the starting and end index of the subarray?
Look at the code in one of my comments below. It returns a MaxSubArrayVO which also includes the sub-array-range along with the sum.
Edited code with some comments:
It should run for both positive and negative inputs.
Sorry,I forgot to remove comments as comments mentioned in post is irrelevant.
This code is working for negative numbers also. The array containing all negative numbers.
How to handle this case.. i want the starting and end position both....
-1 -2 -3 0 0 0 -4 -5 0 0 0 -9
a little modification to find start and end index of the subarray:
int max_sum(int *a,int n)
{
int max_so_far=0;
int max_ending_here=0;
int s,e;
s=e=-1;
for(int i=0;i<=n-1;i++)
{
max_ending_here=max_ending_here+a[i];
if(max_ending_here<0)
{
max_ending_here=0;
s=i+1;
}
else if(max_so_far<max_ending_here)
{
max_so_far=max_ending_here;
e=i;
}
}
if(max_so_far==0)
{
s=-1;
e=-1;
}
return max_so_far;
}
This is not working for {1, 2}, start will be -1.
This solution will be ok:
int max_sum(int[] a, int n) {
int max_so_far = 0;
int max_ending_here = 0;
int s, e, largest;
s = e = largest = 0;
for (int i = 0; i < n; i++) {
    if (a[i] > a[largest]) largest = i;
max_ending_here = max_ending_here + a[i];
if (max_ending_here < 0) {
max_ending_here = 0;
s = i + 1;
} else if (max_so_far < max_ending_here) {
max_so_far = max_ending_here;
e = i;
}
}
return (a[largest] > 0) ? max_so_far : a[largest];
}
DP approach to above problem:
The program is not working for {1,2,-1,3,4}
its returning 9 which is the sum of entire array. the expected result is 7
@Hill. 9 is the correct answer for your input array. The array itself is considered one of the subarrays of the array.
Input:
5 -2 8 3 -11 13 2 1 -1 0 3 8 -30 31 -4 3 -12
Max Ending Here:
5 3 11 14 3 16 18 19 18 18 21 29 -1(0) 31 27 30 18
Max_SoFar
--------------------------------29-------30 26 29 17
So is the answer 31 or 30 or 29?
It is 31.
This should work for all negatives case,without iterating the array more than once,so please verify it.
sorry,the first comment line should be removed.
Can somebody explain to me how this algo is DYNAMIC PROGRAMMING? This is nothing except smart thinking. Also, can somebody give a dynamic programming solution for the problem? Or even a recursive relation would do.
@ravikant:
Refer to the below lines @
"
The below code is an example of top-down algorithm using recursion. It returns a value object that encapsulates (max-sum, lowerIndexOfSubArray, upperIndexOfSubArray).
omg, the above code was in java and the formatting has gone for a toss
I tried to edit it later but didn't find this feature. Apologies to the folks...
why doesnt input a[] = {-2, -3, 4, -4, 50, 5, -3};
work in kadane's algorithm
it gives me output as 2293724
Check you array size parameter.
I added your test array and got 55.
thanks..
What if all elements in the array are negative. Will it not return 0 in this case which is wrong?
Correct me if I am missing something
Possible solution for an array input with all negative numbers:
// initialize max_so_far to the first element of the array
max_so_far = a[0];
{ -1, -20, -4, -5 } returns -1
{ -10, -5, -3, -2 } returns -2
Initializing max_so_far to the first element of the array might not work as we also have below condition in the code.
You are correct. So along with the initialization code. I modified the following code:
The test cases above do not cover all cases, but it looks promising.
Following code works for all cases. I think checking for max_present<0, is needed here.
#include <iostream>
using namespace std;
int main() {
int arr[]={-2,-3,4,-1,-2,-7,-5,-3};
int max_present,max_all,i;
max_present=0;
max_all=arr[0];
for(i=0;i<8;i++)
{
max_present+=arr[i];
if(max_all<max_present){
max_all=max_present;
}
if(max_present<0){
max_present=0;
}
}
cout<<"Max sum is "<<max_all;
return 0;
}
In this example {-2, -3, 4, -1, -2, 1, 5, -3}..shouldn't 6 be the answer (1 and 5)...we're getting 4 which i feel isn't correct. Please correct me if I'm wrong.
@Asit: The program doesn't return 4, it returns 7 (4 + -1 + -2 + 1 + 5). In function maxSubArraySum(), we return max_so_far, not max_ending_here (whose final value is 4).
Sorry, my mistake.I considered max_ending_here instead of max_so_far
uh... I think there's a typo in the example? The critical 6th index reads:
for i=6, a[6] = 1
... etc ...
The element at a[6] is 5, not 1. The rest of the statement is correct, but that assignment error might be confusing to a non-programmer trying to understand the explanation of the algorithm. I've been programming for a while, and I still did a triple take until I found the typo.
Or, is this a test to see if the readers can debug pseudo-code?
@ab: Thanks for pointing this out. We have corrected the typo.
@Hary: Your solution works fine.
One thing to add - When count of negative numbers is odd, we need to check product of left and right for both first and last negative elements. So we need to get max of these four products.
Thanks for writing the solution. Keep writing to us!
Here is the code as discussed by you and Hary.
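The code mentioned here did not survive extraction. Below is a sketch (Python, with a hypothetical function name) of the scheme under discussion: count the negatives; with an even count take the whole array, otherwise drop either the prefix ending at the first negative or the suffix starting at the last negative, keeping the larger product. It assumes a zero-free integer array; zeros, raised elsewhere in this thread, would require splitting the array first.

```python
from functools import reduce
from operator import mul

def max_product_subarray(a):
    """Max product over contiguous subarrays of a zero-free int array.
    Even number of negatives: the whole array wins. Odd: drop either
    the prefix through the first negative or the suffix from the last
    negative, whichever leaves the larger product."""
    product = lambda xs: reduce(mul, xs, 1)
    negatives = [i for i, x in enumerate(a) if x < 0]
    if len(negatives) % 2 == 0:
        return product(a)
    first, last = negatives[0], negatives[-1]
    candidates = [c for c in (a[first + 1:], a[:last]) if c]
    if not candidates:          # a single, negative element
        return a[0]
    return max(product(c) for c in candidates)

print(max_product_subarray([-2, -3, 4, -1, 2]))  # 24
print(max_product_subarray([2, 3, -2, 4]))       # 6
```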
what if 0 is one of the element
Ok, as far as all-negative and all-positive arrays are concerned, you don't have to worry: in case of all positive, just add the entire array to get the result; in case of all negative, just find the smallest number in terms of magnitude.
The problem becomes sensible only when you have a blend of negative and positive nos.
As far as the "product variant" of this ques is listed above:
I would say just find out the no of negative elements. If they are even, find the product of all the nos in the array , If the no of negative elements is odd then scan the array to find the last negative element. Find out the product of all the elements on one side of it - call it product left - and similarly for all the elements on the right of it - call them product right.
Yield which ever is greater .
Please correct me if i am missing something
Hi Raj, as mentioned in the notes, the algorithm doesn't work for all negative numbers.
For handling this we can add an extra phase before actual implementation. The phase will look if all numbers are negative, if they are it will return maximum of them (or smallest in terms of absolute value). There may be other ways to handle it though.
Thanks for the comment, keep visiting us.
what if all the input values are negative.
EX: {-2, -3, -4, -1, -2, -1, -5, -3}
Possible solution for an array input with all negative numbers:
max_so_far = a[0];
// initialize to the first element of the array
{ -1, -20, -4, -5 } returns -1
{ -10, -5, -3, -2 } returns -2
Source: http://www.geeksforgeeks.org/largest-sum-contiguous-subarray/
RTTI considered harmful?
Posted on March 1st, 2001
Various designs in this chapter attempt to remove RTTI, which might give you the impression that it’s “considered harmful” (the condemnation used for poor, ill-fated goto, which was thus never put into Java). This isn’t true; it is the misuse of RTTI that is the problem. The reason our designs removed RTTI is because the misapplication of that feature prevented extensibility, while the stated goal was to be able to add a new type to the system with as little impact on surrounding code as possible. Since RTTI is often misused by having it look for every single type in your system, it causes code to be non-extensible: when you add a new type, you have to go hunting for all the code in which RTTI is used, and if you miss any you won’t get help from the compiler.
However, RTTI doesn’t automatically create non-extensible code. Let’s revisit the trash recycler once more. This time, a new tool will be introduced, which I call a TypeMap. It contains a Hashtable that holds Vectors, but the interface is simple: you can add( ) a new object, and you can get( ) a Vector containing all the objects of a particular type. The keys for the contained Hashtable are the types in the associated Vector. The beauty of this design (suggested by Larry O’Brien) is that the TypeMap dynamically adds a new pair whenever it encounters a new type, so whenever you add a new type to the system (even if you add the new type at run-time), it adapts.
Our example will again build on the structure of the Trash types in package c16.Trash (and the Trash.dat file used there can be used here without change):
//: DynaTrash.java // Using a Hashtable of Vectors and RTTI // to automatically sort trash into // vectors. This solution, despite the // use of RTTI, is extensible. package c16.dynatrash; import c16.trash.*; import java.util.*; // Generic TypeMap works in any situation: class TypeMap { private Hashtable t = new Hashtable(); public void add(Object o) { Class type = o.getClass(); if(t.containsKey(type)) ((Vector)t.get(type)).addElement(o); else { Vector v = new Vector(); v.addElement(o); t.put(type,v); } } public Vector get(Class type) { return (Vector)t.get(type); } public Enumeration keys() { return t.keys(); } // Returns handle to adapter class to allow // callbacks from ParseTrash.fillBin(): public Fillable filler() { // Anonymous inner class: return new Fillable() { public void addTrash(Trash t) { add(t); } }; } } public class DynaTrash { public static void main(String[] args) { TypeMap bin = new TypeMap(); ParseTrash.fillBin("Trash.dat",bin.filler()); Enumeration keys = bin.keys(); while(keys.hasMoreElements()) Trash.sumValue( bin.get((Class)keys.nextElement())); } } ///:~
Although powerful, the definition for TypeMap is simple. It contains a Hashtable, and the add( ) method does most of the work. When you add( ) a new object, the handle for the Class object for that type is extracted. This is used as a key to determine whether a Vector holding objects of that type is already present in the Hashtable. If so, that Vector is extracted and the object is added to the Vector. If not, the Class object and a new Vector are added as a key-value pair.
You can get an Enumeration of all the Class objects from keys( ), and use each Class object to fetch the corresponding Vector with get( ). And that’s all there is to it.
The filler( ) method is interesting because it takes advantage of the design of ParseTrash.fillBin( ), which doesn’t just try to fill a Vector but instead anything that implements the Fillable interface with its addTrash( ) method. All filler( ) needs to do is to return a handle to an interface that implements Fillable, and then this handle can be used as an argument to fillBin( ) like this:
ParseTrash.fillBin("Trash.dat", bin.filler());
To produce this handle, an anonymous inner class (described in Chapter 7) is used. You never need a named class to implement Fillable, you just need a handle to an object of that class, thus this is an appropriate use of anonymous inner classes.
An interesting thing about this design is that even though it wasn’t created to handle the sorting, fillBin( ) is performing a sort every time it inserts a Trash object into bin.
Much of class DynaTrash should be familiar from the previous examples. This time, instead of placing the new Trash objects into a bin of type Vector, the bin is of type TypeMap, so when the trash is thrown into bin it’s immediately sorted by TypeMap’s internal sorting mechanism. Stepping through the TypeMap and operating on each individual Vector becomes a simple matter:
Enumeration keys = bin.keys(); while(keys.hasMoreElements()) Trash.sumValue( bin.get((Class)keys.nextElement()));
As you can see, adding a new type to the system won’t affect this code at all, nor the code in TypeMap. This is certainly the smallest solution to the problem, and arguably the most elegant as well. It does rely heavily on RTTI, but notice that each key-value pair in the Hashtable is looking for only one type. In addition, there’s no way you can “forget” to add the proper code to this system when you add a new type, since there isn’t any code you need to add.
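The TypeMap idea - a map from runtime type to a collection of instances that grows a new bucket whenever an unseen type arrives - is not Java-specific. A minimal sketch in Python (the class names are stand-ins for the Trash hierarchy, not the book's code):

```python
from collections import defaultdict

class TypeMap:
    """Map each object's runtime type to a list of instances.
    Unseen types get a bucket automatically, so adding a new type to
    the system requires no changes here - the point of the Java version."""
    def __init__(self):
        self._buckets = defaultdict(list)

    def add(self, obj):
        self._buckets[type(obj)].append(obj)

    def get(self, cls):
        return self._buckets.get(cls, [])

    def keys(self):
        return list(self._buckets)

class Aluminum: pass   # stand-in for an Aluminum Trash subclass
class Paper: pass      # stand-in for a Paper Trash subclass

bin_ = TypeMap()
for item in (Aluminum(), Paper(), Aluminum()):
    bin_.add(item)     # each add() is also a sort, as with fillBin()

print(len(bin_.get(Aluminum)), len(bin_.get(Paper)))  # 2 1
```

As in the Java version, the RTTI lookup (`type(obj)`) only ever asks for one type per bucket, so the structure stays extensible.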
Source: http://www.codeguru.com/java/tij/tij0181.shtml
|
What’s new in UCMA 4.0?Posted: August 14th, 2012 | Author: Michael | Filed under: UCMA 4.0 | Tags: Lync 2013, Task Parallel Library | No Comments »
With the preview of Lync Server 2013 now available, you may be wondering what’s new in the good old Unified Communications Managed API. You may even have located the UCMA 4.0 SDK and downloaded it to check it out yourself. Since the final documentation is not yet available for UCMA 4.0, it may not be immediately clear what’s new and what isn’t.
Functionally, there’s not much new to report in the new version. As far as I’m aware, there are no new public classes in the API that were not present in UCMA 3.0. This may come as a disappointment for anyone who was hoping for support for new media types, new ways of handling signaling, or other things of that nature. I definitely sympathize with this feeling, given all the fun new capabilities we got in UCMA 3.0, but there are a couple of things to keep in mind to understand the context.
First of all, back in Lync 2010, UCMA got a lot of attention, and UCMA 3.0 had plenty of new features. Although no one would argue that it meets everyone’s needs perfectly, it is certainly a much more mature API than it was in the UCMA 1.0 days. There are also lots of things you can do with it, with a bit of tinkering, that are not totally obvious on the surface.
Second, it’s important to realize that a lot of work has been going into other parts of the Lync development platform. This time, with Lync 2013, there is an entirely new API, the Unified Communications Web API. Also, if you go to the download page for the UCMA 4.0 SDK, and scroll down to the “System requirements” section, you will see one of the new developments that I am particularly excited about: UCMA now works with (and, in fact, requires) .NET 4.5!
In case you’re not as excited as I am about this, and maybe even a bit baffled as to why I would care, I’ll try to explain. The 4.0 version of the .NET Framework introduced the Task Parallel Library (TPL), which is a new and, in many ways, easier method for asynchronous development. In .NET 4.5, there are some improvements on the TPL, including the new async and await keywords which make the TPL a more integral part of the language, and make it quite easy to write code that reads like synchronous code but runs in a parallel fashion. My recent Lync Developer Roundtable session was partly about how this can help with UCMA development. There is a whole host of other new capabilities to support parallel programming in .NET 4.5, and we can now use all of these (or any that are relevant, anyway) with UCMA 4.0. They aren’t applicable to every single UCMA application or piece of UCMA code, but they will definitely be useful.
Another change to be aware of is that the UCMA Workflow SDK is no longer a part of UCMA in the new version. There is a short note about this in the article “Comparing Unified Communications APIs” on MSDN, in the last paragraph of the section “Unified Communications Managed API 3.0 Workflow SDK.” Workflow applications written using UCMA 3.0 will continue to work just fine against Lync Server 2010, but if you are developing a new application and are considering using the UCMA Workflow SDK, you may want to think seriously about using UCMA Core instead. I’ll write more about this in future posts.
If you have any questions or interesting discoveries about UCMA 4.0, feel free to post them here in the comments!
Source: http://blog.greenl.ee/2012/08/14/ucma-4/
|
21 Feb 2013 07:54
Re: speed of runmed on SimpleRleList
Hervé Pagès <hpages@...>
2013-02-21 06:54:11 GMT
Hi Janet,

On 02/19/2013 06:10 PM, Janet Young wrote:

> Hi Herve (and Valerie),
>
> Interesting - thanks for looking into this. Funny that the ends make
> such a large difference, but I think I understand.

It's because we are re-using the stats::smoothEnds implementation, which contains things like 'for (i in 3:k) {...}', and is therefore inefficient when 'k' is not very small.

> Hmm - I guess that issue would go away if I use "drop" - that should be
> fine, as I don't care about medians in the partial windows at the start
> and end of each chromosome (although I do want to make sure I get the
> windows lined up correctly, but I can probably just add a windowsize/2
> constant to each position).

If you don't care about the truncated windows at the start and end of each chromosome, but do want to keep the windows lined up correctly, you can use endrule="keep" or endrule="constant". I don't know what they do exactly with the truncated windows (it doesn't matter), but, unlike endrule="drop", they preserve the length of the input. And they don't have the cost of endrule="median".

> I found today that the timing issues are actually made less relevant for
> me because of memory issues - I'm finding that runmean and runmed give
> me way more information than I need, and they create objects that are
> huge in memory, because they're reporting at 1bp resolution.

An operation like runmean() almost systematically returns an Rle that is bigger in memory than its input, because it increases the number of runs. One important number to keep an eye on is the mean of the run lengths (i.e. mean(runLength(x))). If it falls below the threshold of 2, then the integer-Rle representation is not worth it: an ordinary integer vector will use less space in memory. For a numeric-Rle, the threshold is 3/2.
Unfortunately, runmean() tends to make that number drop badly:

x <- Rle(sample(10L, 150, replace=TRUE), sample(50L, 150, replace=TRUE))
> mean(runLength(x))
[1] 27.58462
> mean(runLength(runmean(x, k=5)))
[1] 5.843393
> mean(runLength(runmean(x, k=15)))
[1] 2.228322
> mean(runLength(runmean(x, k=25)))
[1] 1.537333
> mean(runLength(runmean(x, k=35)))
[1] 1.28976

With k=35, keeping the result in a numeric-Rle is no longer worth it.

> For my analysis, I don't need windows spaced by 1bp - e.g. I can look
> at 5kb windows sliding along the genome in 500bp increments.
>
> I've cooked up some code that seems to work (using Views and aggregate),
> but I wonder whether in the long run there could be some kind of
> built-in solution in IRanges. I know it would probably have to return
> different object types, to be able to keep track of window
> positions/sizes, but it seems like something a lot of folks will want to
> do (e.g. look at coverage over the genome, smoothing in some kind of
> running window, but not necessarily every 1bp). Does that sound doable?
> (maybe I'm missing some existing solution?)

I don't think we have a built-in solution for this at the moment, but it would certainly be doable. There are probably many ways to go, but here is one that I find appealing. The starting point would be to have an easy way to switch between the 2 following representations of a numeric function F defined along a genome (e.g. coverage):

(1) The RleList representation.

(2) A GRanges made of fixed-width bins (that cover the entire genome), with a metadata column that contains the average value of F for each bin.

Because of the averaging, going from (1) to (2) would of course lose some information but would potentially reduce memory usage a lot. More precisely, (2) offers the advantage of using an amount of memory that is under control, because it only depends on the width of the bins.
For example, for a function F defined along the Human genome, if the bins are of width 500 (like in your case), the GRanges representation would have about 6 million ranges and occupy less than 100Mb. With the RleList representation, it can vary a lot: from a few hundred bytes (in the best case, i.e. when coverage is constant), to 24Gb if all the runs are of length 1 (worst case).

Once you have (2), you could do all the operations you want (e.g. runmean, runmed, etc.) by operating directly on the metadata col. Unlike with the RleList, the size of the GRanges would not grow while you do this. Then, when you are done, you could always come back to (1) if you really miss the RleList representation. This RleList would actually be quite small: at most 72Mb in your use case (i.e. bins of width 500 on the Human genome).

Of course this approach is only suitable if the user accepts the loss of information introduced by the first step, i.e. by the conversion from (1) to (2). The only things we would need to add to support this approach are the functions to switch between (1) and (2).

Would that be of any interest?

Cheers,
H.
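Representation (2) — fixed-width bins carrying the average of F — is easy to prototype outside of IRanges. Below is a rough, stdlib-only Python sketch of the (1) → (2) conversion for a single chromosome; the function name and shapes are made up for illustration, not a Bioconductor API:

```python
def bin_averages(values, bin_width):
    """Average a per-base vector into fixed-width bins (last bin may be short)."""
    bins = []
    for start in range(0, len(values), bin_width):
        chunk = values[start:start + bin_width]
        bins.append(sum(chunk) / len(chunk))
    return bins

# Per-base coverage for a toy 10 bp "chromosome", binned at width 5:
print(bin_averages([0, 0, 1, 1, 1, 2, 2, 2, 1, 0], 5))   # [0.6, 1.4]
```

Whatever running-window operation is applied afterwards then works on one value per bin, so memory stays proportional to genome length divided by bin width, as described above.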
>
> thanks,
>
> Janet
>
> On Feb 19, 2013, at 3:51 PM, Hervé Pagès wrote:
>
>> Hi Janet,
>>
>> The culprit seems to be the call to smoothEnds() at the end of the
>> "runmed" method for Rle objects:
>>
>> > selectMethod("runmed", "Rle")
>> Method Definition:
>>
>> function (x, k, endrule = c("median", "keep", "drop", "constant"),
>>     algorithm = NULL, print.level = 0)
>> {
>>     if (!all(is.finite(as.vector(x))))
>>         stop("NA/NaN/Inf not supported in runmed,Rle-method")
>>     endrule <- match.arg(endrule)
>>     n <- length(x)
>>     k <- normargRunK(k = k, n = n, endrule = endrule)
>>     i <- (k + 1L)%/%2L
>>     ans <- runq(x, k = k, i = i)
>>     if (endrule == "constant") {
>>         runLength(ans)[1L] <- runLength(ans)[1L] + (i - 1L)
>>         runLength(ans)[nrun(ans)] <- runLength(ans)[nrun(ans)] + (i - 1L)
>>     }
>>     else if (endrule != "drop") {
>>         ans <- c(head(x, i - 1L), ans, tail(x, i - 1L))
>>         if (endrule == "median") {
>>             ans <- smoothEnds(ans, k = k)
>>         }
>>     }
>>     ans
>> }
>>
>> Not too surprisingly, the complexity of smoothEnds() doesn't depend on
>> the length of its first arg:
>>
>> > system.time(smoothEnds(Rle(0L, 10), k=5))
>>    user  system elapsed
>>   0.028   0.000   0.026
>> > system.time(smoothEnds(Rle(0L, 10000), k=5))
>>    user  system elapsed
>>   0.028   0.000   0.026
>>
>> but is linear w.r.t. the value of its 2nd arg:
>>
>> > system.time(smoothEnds(Rle(0L, 10000), k=51))
>>    user  system elapsed
>>   0.280   0.000   0.282
>> > system.time(smoothEnds(Rle(0L, 10000), k=511))
>>    user  system elapsed
>>   2.836   0.000   2.841
>>
>> In other words, runmed() will be pretty fast on a full genome
>> coverage, but you will pay an additional fixed price for each
>> chromosome in the genome, and this fixed price only depends
>> on the length of the median window.
>>
>> Note that the "smoothEnds" method for Rle objects is just
>> re-using the exact same code as stats::smoothEnds(), which is not
>> very efficient on Rle's. So we'll look into ways to improve this.
>>
>> Cheers,
>> H.
>>
>> On 02/15/2013 12:27 PM, Janet Young wrote:
>>> Hi there,
>>>
>>> I've been using runmean on some coverage objects - nice. Runmed
>>> also seems useful, but also seems very slow (oddly so) - I'm
>>> wondering whether there's some easy improvement could be made there?
>>> Same issue with devel version and an older version. All should be
>>> explained in the code below (I hope).
>>>
>>> thanks very much,
>>>
>>> Janet
>>>
>>> -------------------------------------------------------------------
>>> Dr. Janet Young
>>> Malik lab
>>> -------------------------------------------------------------------
>>>
>>> library(GenomicRanges)
>>>
>>> ### a small GRanges object (example from ?GRanges)
>>> seqinfo <- Seqinfo(paste0("chr", 1:3), c(1000, 2000, 1500), NA, "mock1")
>>> gr2 <),
>>> seqinfo=seqinfo)
>>> gr2
>>>
>>> cov <- coverage(gr2)
>>>
>>> ### runmed is slow! (for you Hutchies: this is on a rhino machine)
>>> ### I'm really trying to run this on some much bigger objects (whole
>>> genome coverage), where the slowness is more of an issue.
>>> system.time(runmed(cov, 51))
>>> #  user  system elapsed
>>> # 1.120   0.016   1.518
>>>
>>> ### runmean is fast
>>> system.time(runmean(cov, 51))
>>> #  user  system elapsed
>>> # 0.008   0.000   0.005
>>>
>>> sessionInfo()
>>>
>>> R Under development (unstable) (2012-10-03 r608] GenomicRanges_1.11.29 IRanges_1.17.32 BiocGenerics_0.5.6
>>>
>>> loaded via a namespace (and not attached):
>>> [1] stats4_2.16.0
>>>
>>> ## see a similar issue on my Mac, using older R
>>>
>>>] GenomicRanges_1.10.1 IRanges_1.16.2 BiocGenerics_0.4.0
>>>
>>> loaded via a namespace (and not attached):
>>> [1] parallel_2.15.1 stats4_2.15.1
http://permalink.gmane.org/gmane.science.biology.informatics.conductor/46494
Detect faces more accurately in ~180 degree fisheye camera image by correcting first
With a fisheye lens that's about 180 degrees, the goal is to help ensure the best accuracy in detecting the presence of faces (especially near the edges) by first undistorting the image. I understand it's unrealistic to expect to be able to do it super close to the edges since it's 180 degrees, but I'd like to at least get relatively close.
In OpenCV (v3.1 if that matters), I tried to use calibrate.py to do this. While it can detect the checkerboard pattern in a lot of the calibration images I took, it has trouble with ones where the distortion is more extreme. The end result was that it provided a camera matrix and distortion coefficients that correct the center fairly well, but don't do too much about the edges.
Is there some straightforward way to manually tweak the matrices to get the desired correction or is there another good approach to this? I am hoping for an intuitive way to tweak the numbers to do it qualitatively without having to understand it much from a math standpoint.
What I got from calibrate.py is a camera matrix of
[[537.04, 0, 961.19], [0, 536.42 , 506.01], [0, 0, 1]]
and distortion coefficients of
[-2.897e-01, 7.527e-02, 0, 0, 0]
This takes an image like this (shown here at 25% size):
And corrects it to something like
This is certainly better than nothing, but not quite the desired level of correction.
In case it's helpful, here's the code to undistort the image (OpenCV 3.1 + Python 2.7). NOTE: this expects the images to be sized at 1920x1080 (i.e. the native resolution of the camera)
import cv2
import numpy as np

cammatrix = np.array([[537.04, 0, 961.19], [0, 536.42, 506.01], [0, 0, 1]])
distcoeffs = np.array([-2.897e-01, 7.527e-02, 0, 0, 0])

image = cv2.imread("PATH_TO_IMAGE.jpg")
h, w = image.shape[:2]
newcam, roi = cv2.getOptimalNewCameraMatrix(cammatrix, distcoeffs, (w, h), 1)
newimage = cv2.undistort(image, cammatrix, distcoeffs, None, newcam)

cv2.imshow("Orig Image", image)
cv2.imshow("Image", newimage)
cv2.waitKey(0)
imho, you have to use the corresponding cv2.fisheye functions to calibrate / undistort.
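The reason the standard model falls apart near the edges can be seen from the projection math alone: the pinhole model behind cv2.calibrateCamera maps an incidence angle θ to an image radius proportional to tan(θ), which diverges as θ approaches 90°, while fisheye lenses are usually close to the equidistant model r ≈ f·θ. A quick stdlib-only sketch (f = 537 is borrowed from the camera matrix above purely for illustration):

```python
import math

def pinhole_radius(theta, f=537.0):
    """Ideal pinhole projection: r = f * tan(theta). Diverges as theta -> 90 deg."""
    return f * math.tan(theta)

def equidistant_radius(theta, f=537.0):
    """Equidistant fisheye model (roughly what a ~180 deg lens does): r = f * theta."""
    return f * theta

for deg in (10, 45, 80, 89):
    t = math.radians(deg)
    print(deg, round(pinhole_radius(t), 1), round(equidistant_radius(t), 1))
```

At 89° the pinhole radius is tens of thousands of pixels while the fisheye radius stays under a thousand, which is why no polynomial distortion term can patch the mismatch at the edges and why the cv2.fisheye calibration functions are the right tool here.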
https://answers.opencv.org/question/99719/detect-faces-more-accurately-in-180-degree-fisheye-camera-image-by-correcting-first/
You are working at a lower league football stadium and you've been asked to automate the scoreboard.
The referee will shout out the score, you have already set up the voice recognition module which turns the ref’s voice into a string, but the spoken score needs to be converted into a pair for the scoreboard!
e.g. “The score is four nil” should return [4,0]
Either team's score has a range of 0-9, and the ref won't say the same string every time, e.g.
“new score: two three”
“two two”
“Arsenal just conceded another goal, two nil”
Create a function which will return the array of scores.
def scoreboard(string):
    score = ['nil', 'one', 'two', 'three', 'four',
             'five', 'six', 'seven', 'eight', 'nine']
    score_arr = []
    string_list = string.split(' ')
    for word in string_list:
        if word in score:
            score_arr.append(score.index(word))
    return score_arr
These are the steps to create the score array.
- Split the string into a list.
- Iterate through the list.
- Find the scores and append them to the array then return.
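The steps above can be exercised against the sample phrases from the prompt; here is a self-contained run (re-declaring the same function in condensed form):

```python
def scoreboard(string):
    score = ['nil', 'one', 'two', 'three', 'four',
             'five', 'six', 'seven', 'eight', 'nine']
    # Keep only the words that name a digit, in the order they were spoken.
    return [score.index(word) for word in string.split(' ') if word in score]

print(scoreboard("The score is four nil"))                        # [4, 0]
print(scoreboard("new score: two three"))                         # [2, 3]
print(scoreboard("Arsenal just conceded another goal, two nil"))  # [2, 0]
```

Note that splitting on a single space means a number word with punctuation glued to it (e.g. "nil,") would be missed; the sample phrases keep the number words clean, so the simple split suffices here.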
https://kibiwebgeek.com/turn-string-into-the-score/
I am trying to split up a four-digit integer so I can perform math on each digit to "encrypt" it. This is what I have so far.
import javax.swing.JOptionPane;

public class Project5e {
    public static void main(String[] args) {
        String input;
        int number1;
        int number2;
        int number3;
        int number4;

        input = JOptionPane.showInputDialog("What is your four digit number?");
        number1 = (int)(input.charAt(0));
        number2 = (int)(input.charAt(1));
        number3 = (int)(input.charAt(2));
        number4 = (int)(input.charAt(3));
        System.out.print(number1);
    }
}
I know I only have number1 in the output — that is just to test. When I run that and put 1234 for the input, I get the number 49 for number1. Any help is appreciated. I don't know if my current code is just wrong or if I am approaching the entire thing totally wrong.
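The 49 comes from the character code: casting a char to int yields its code point, and '1' is 49 in ASCII, so the usual fix in Java is `input.charAt(0) - '0'`. The same effect can be illustrated in Python with ord():

```python
# ord() exposes the character code, which is what the Java cast (int) input.charAt(0) returns.
digit_char = "1234"[0]
print(ord(digit_char))               # 49, the code point of '1'

# Subtracting the code of '0' recovers the numeric value -- the same trick works in Java.
print(ord(digit_char) - ord("0"))    # 1
```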
http://www.javaprogrammingforums.com/whats-wrong-my-code/11948-using-char-split-up-interger.html
15 April 2011 12:34 [Source: ICIS news]
(Adds CEO comments and details on regional sales.)
LONDON (ICIS)--Syngenta’s first-quarter sales increased 14% year on year to $4.02bn (€2.77bn) as a result of strong performance in the European, African and the Middle East markets, the Switzerland-based agrichemicals firm said on Friday.
The three regions recorded a sales increase of 20%, the company said in a statement.
Syngenta added that a favourable environment encouraged early investment by wheat growers across the regions.
“Our first-quarter sales performance demonstrates our ability to achieve significant growth across a business that is unrivalled in its breadth and reach,” said Syngenta CEO Mike Mack.
“At the same time, we have made rapid progress in the implementation of our new commercial strategy, which is building on the combined strength of our crop protection and seeds businesses to develop a fully integrated offer on a global crop basis,” Mack said.
The company said its sales in eastern Europe rebounded from difficult conditions in the second half of 2010, with increased demand for premium crop protection products.
Total first-quarter sales in Latin America grew 16% year on year, driven in particular by fungicides, insecticides and corn seed.
Sales in the Asia-Pacific region increased by 6%, as continuing expansion of crop protection usage in the emerging markets offset a decline in Japan, Syngenta said.
Syngenta said its crop protection business contributed $2.79bn in sales, accounting for 69% of total revenue, while its seeds business had $1.24bn in turnover.
Sales from crop protection during the first quarter increased 11% year on year, while seeds sales were up 20%, it said.
Overall sales volumes were up 14%, Syngenta added.
Additional reporting by Pearl Bantillo
($1 = €0.69)
For more on Syngenta
http://www.icis.com/Articles/2011/04/15/9452854/switzerlands-syngenta-first-quarter-sales-jump-14-to-4.02bn.html
# MVCC in PostgreSQL-4. Snapshots
After having discussed [isolation](https://habr.com/ru/company/postgrespro/blog/467437/) problems and having made a digression regarding the [low-level data structure](https://habr.com/ru/company/postgrespro/blog/469087/), last time we explored [row versions](https://habr.com/ru/company/postgrespro/blog/477648/) and observed how different operations changed tuple header fields.
Now we will look at how consistent data snapshots are obtained from tuples.
What is a data snapshot?
========================
Data pages can physically contain several versions of the same row. But each transaction must see only one (or none) version of each row, so that all of them make up a consistent picture of the data (in the sense of ACID) as of a certain point in time.
Isolation in PosgreSQL is based on snapshots: each transaction works with its own data snapshot, which «contains» data that were committed before the moment the snapshot was created and does not «contain» data that were not committed by that moment yet. We've [already seen](https://habr.com/ru/company/postgrespro/blog/467437/) that although the resulting isolation appears stricter than required by the standard, it still has anomalies.
At the Read Committed isolation level, a snapshot is created at the beginning of each transaction statement. This snapshot is active while the statement is being performed. In the figure, the moment the snapshot was created (which, as we recall, is determined by the transaction ID) is shown in blue.

At the Repeatable Read and Serializable levels, the snapshot is created once, at the beginning of the first transaction statement. Such a snapshot remains active up to the end of the transaction.

Visibility of tuples in a snapshot
==================================
Visibility rules
----------------
A snapshot is certainly not a physical copy of all the necessary tuples. A snapshot is actually specified by several numbers, and the visibility of tuples in a snapshot is determined by rules.
Whether a tuple will be visible or not in a snapshot depends on two fields in the header, namely, `xmin` and `xmax`, that is, the IDs of the transactions that created and deleted the tuple. These two fields define the interval of snapshots in which the version is visible; for different versions of the same row such intervals do not overlap, and therefore not more than one version represents a row in each snapshot.
The exact visibility rules are pretty complicated and take into account a lot of different cases and extremes.
> You can easily make sure of that by looking into src/backend/utils/time/tqual.c (in version 12, the check moved to src/backend/access/heap/heapam\_visibility.c).
>
>
To simplify, we can say that a tuple is visible when in the snapshot, changes made by the `xmin` transaction are visible, while those made by the `xmax` transaction are not (in other words, it is already clear that the tuple was created, but it is not yet clear whether it was deleted).
Regarding a transaction, its changes are visible in the snapshot either if it is that very transaction that created the snapshot (it does see its own not yet committed changes) or the transaction was committed before the snapshot was created.
We can graphically represent transactions by segments (from the start time to the commit time):

Here:
* Changes of the transaction 2 will be visible since it was completed before the snapshot was created.
* Changes of the transaction 1 will not be visible since it was active at the moment the snapshot was created.
* Changes of the transaction 3 will not be visible since it started after the snapshot was created (regardless of whether it was completed or not).
Unfortunately, the system is unaware of the commit time of transactions. Only its start time is known (which is determined by the transaction ID and marked with a dashed line in the figures above), but the event of completion is not written anywhere.
All we can do is to find out the *current* status of transactions at the snapshot creation. This information is available in the shared memory of the server, in the ProcArray structure, which contains the list of all active sessions and their transactions.
But we will be unable to figure out post factum whether or not a certain transaction was active at the moment the snapshot was created. Therefore, a snapshot has to store a list of all the current active transactions.
From the above it follows that in PostgreSQL, it is not possible to create a snapshot that shows consistent data as of a certain time in the past, *even if* all the necessary tuples are available in table pages. A question often arises why PostgreSQL lacks retrospective (also called temporal, or flashback, as Oracle calls them) queries — and this is one of the reasons.
> Kind of funny is that this functionality was first available, but then deleted from the DBMS. You can read about this in the [article by Joseph M. Hellerstein](https://arxiv.org/pdf/1901.01973.pdf).
>
>
So, the snapshot is determined by several parameters:
* The moment the snapshot was created, more exactly, the ID of the next transaction, yet unavailable in the system (`snapshot.xmax`).
* The list of active (in progress) transactions at the moment the snapshot was created (`snapshot.xip`).
For convenience and optimization, the ID of the earliest active transaction is also stored (`snapshot.xmin`). This value plays an important role, which will be discussed below.
The snapshot also stores a few more parameters, which are unimportant to us, however.

Example
-------
To understand how the snapshot determines the visibility, let's reproduce the above example with three transactions. The table will have three rows, where:
* The first was added by a transaction that started prior to the snapshot creation but completed after it.
* The second was added by a transaction that started and completed prior to the snapshot creation.
* The third was added after the snapshot creation.
```
=> TRUNCATE TABLE accounts;
```
The first transaction (not completed yet):
```
=> BEGIN;
=> INSERT INTO accounts VALUES (1, '1001', 'alice', 1000.00);
=> SELECT txid_current();
```
```
=> SELECT txid_current();
txid_current
--------------
3695
(1 row)
```
The second transaction (completed before the snapshot was created):
```
| => BEGIN;
| => INSERT INTO accounts VALUES (2, '2001', 'bob', 100.00);
| => SELECT txid_current();
```
```
| txid_current
| --------------
| 3696
| (1 row)
```
```
| => COMMIT;
```
Creating a snapshot in a transaction in another session.
```
|| => BEGIN ISOLATION LEVEL REPEATABLE READ;
|| => SELECT xmin, xmax, * FROM accounts;
```
```
|| xmin | xmax | id | number | client | amount
|| ------+------+----+--------+--------+--------
|| 3696 | 0 | 2 | 2001 | bob | 100.00
|| (1 row)
```
Committing the first transaction after the snapshot was created:
```
=> COMMIT;
```
And the third transaction (appeared after the snapshot was created):
```
| => BEGIN;
| => INSERT INTO accounts VALUES (3, '2002', 'bob', 900.00);
| => SELECT txid_current();
```
```
| txid_current
| --------------
| 3697
| (1 row)
```
```
| => COMMIT;
```
Evidently, only one row is still visible in our snapshot:
```
|| => SELECT xmin, xmax, * FROM accounts;
```
```
|| xmin | xmax | id | number | client | amount
|| ------+------+----+--------+--------+--------
|| 3696 | 0 | 2 | 2001 | bob | 100.00
|| (1 row)
```
The question is how Postgres understands this.
All is determined by the snapshot. Let's look at it:
```
|| => SELECT txid_current_snapshot();
```
```
|| txid_current_snapshot
|| -----------------------
|| 3695:3697:3695
|| (1 row)
```
Here `snapshot.xmin`, `snapshot.xmax` and `snapshot.xip` are listed, delimited by a colon (`snapshot.xip` is one number in this case, but in general it's a list).
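The textual format returned by `txid_current_snapshot()` is easy to pick apart; a minimal sketch, assuming only the `xmin:xmax:xip_list` layout described above:

```python
def parse_snapshot(text):
    """Parse the 'xmin:xmax:xip_list' string returned by txid_current_snapshot()."""
    xmin, xmax, xip = text.split(":", 2)
    # xip is a comma-separated list of in-progress transaction IDs; it may be empty.
    xip_list = [int(x) for x in xip.split(",")] if xip else []
    return int(xmin), int(xmax), xip_list

print(parse_snapshot("3695:3697:3695"))   # (3695, 3697, [3695])
```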
According to the above rules, in the snapshot, those changes must be visible that were made by transactions with IDs `xid` such that `snapshot.xmin <= xid < snapshot.xmax` except those that are on the `snapshot.xip` list. Let's look at all table rows (in the new snapshot):
```
=> SELECT xmin, xmax, * FROM accounts ORDER BY id;
```
```
xmin | xmax | id | number | client | amount
------+------+----+--------+--------+---------
3695 | 0 | 1 | 1001 | alice | 1000.00
3696 | 0 | 2 | 2001 | bob | 100.00
3697 | 0 | 3 | 2002 | bob | 900.00
(3 rows)
```
The first row is not visible: it was created by a transaction that is on the list of active transactions (`xip`).
The second row is visible: it was created by a transaction that is in the snapshot range.
The third row is not visible: it was created by a transaction that is out of the snapshot range.
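The rule applied to these three rows can be sketched as a small predicate (this deliberately ignores subtleties such as subtransactions, aborted transactions and hint bits):

```python
def xid_visible(xid, snap_xmin, snap_xmax, snap_xip):
    """True if the changes of transaction `xid` are visible in the snapshot."""
    if xid >= snap_xmax:      # started after the snapshot was created
        return False
    if xid in snap_xip:       # was still in progress at snapshot creation
        return False
    return True               # committed before the snapshot was created

# The three rows from the example, created by xmin 3695, 3696 and 3697,
# checked against the snapshot 3695:3697:3695:
for xmin in (3695, 3696, 3697):
    print(xmin, xid_visible(xmin, 3695, 3697, [3695]))   # False, True, False
```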
```
|| => COMMIT;
```
Transaction's own changes
=========================
Determining the visibility of the transaction's own changes somewhat complicates the situation. In this case, it may be needed to see only part of such changes. For example: at any isolation level, a cursor opened at a certain point in time must not see changes done later.
To this end, a tuple header has a special field (represented in the `cmin` and `cmax` pseudo-columns), which shows the order number inside the transaction. `cmin` is the number for insertion, and `cmax` — for deletion, but to save space in the tuple header, this is actually one field rather than two different ones. It is assumed that a transaction infrequently inserts and deletes the same row.
But if this does happen, a special combo command id (`combocid`) is inserted in the same field, and the backend process remembers the actual `cmin` and `cmax` for this `combocid`. But this is entirely exotic.
Here is a simple example. Let's start a transaction and add a row to the table:
```
=> BEGIN;
=> SELECT txid_current();
```
```
txid_current
--------------
3698
(1 row)
```
```
=> INSERT INTO accounts(id, number, client, amount) VALUES (4, 3001, 'charlie', 100.00);
```
Let's output the contents of the table, along with the `cmin` field (but only for rows added by the transaction — for others it is meaningless):
```
=> SELECT xmin, CASE WHEN xmin = 3698 THEN cmin END cmin, * FROM accounts;
```
```
xmin | cmin | id | number | client | amount
------+------+----+--------+---------+---------
3695 | | 1 | 1001 | alice | 1000.00
3696 | | 2 | 2001 | bob | 100.00
3697 | | 3 | 2002 | bob | 900.00
3698 | 0 | 4 | 3001 | charlie | 100.00
(4 rows)
```
Now we open a cursor for a query that returns the number of rows in the table.
```
=> DECLARE c CURSOR FOR SELECT count(*) FROM accounts;
```
And after that we add another row:
```
=> INSERT INTO accounts(id, number, client, amount) VALUES (5, 3002, 'charlie', 200.00);
```
The query returns 4 — the row added after opening the cursor does not get into the data snapshot:
```
=> FETCH c;
```
```
count
-------
4
(1 row)
```
Why? Because the snapshot takes into account only tuples with `cmin < 1`.
```
=> SELECT xmin, CASE WHEN xmin = 3698 THEN cmin END cmin, * FROM accounts;
```
```
xmin | cmin | id | number | client | amount
------+------+----+--------+---------+---------
3695 | | 1 | 1001 | alice | 1000.00
3696 | | 2 | 2001 | bob | 100.00
3697 | | 3 | 2002 | bob | 900.00
3698 | 0 | 4 | 3001 | charlie | 100.00
3698 | 1 | 5 | 3002 | charlie | 200.00
(5 rows)
```
```
=> ROLLBACK;
```
Event horizon
=============
The ID of the earliest active transaction (`snapshot.xmin`) has an important meaning: it defines the «event horizon» of the transaction. That is, beyond its horizon the transaction always sees only up-to-date row versions.

Indeed, an outdated (dead) row version only needs to remain visible when the up-to-date one was created by a not-yet-completed transaction and is therefore not visible yet. But all transactions «beyond the horizon» are certainly completed.

You can see the transaction horizon in the system catalog:
```
=> BEGIN;
=> SELECT backend_xmin FROM pg_stat_activity WHERE pid = pg_backend_pid();
```
```
backend_xmin
--------------
3699
(1 row)
```
We can also define the horizon at the database level. To do this, we need to take all active snapshots and find the oldest `xmin` among them. And it will define the horizon, beyond which dead tuples in the database will never be visible to any transaction. *Such tuples can be vacuumed away* — and this is exactly why the concept of horizon is so important from a practical standpoint.
If a certain transaction holds a snapshot for a long time, it thereby also holds the database horizon. Moreover, the mere existence of an uncompleted transaction holds the horizon back, even if the transaction itself does not hold a snapshot.

And this means that dead tuples in the database cannot be vacuumed away. A «long-running» transaction may not even touch the same data as other transactions, but that does not really matter, since all of them share one database horizon.
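The database-wide horizon described above is simply the minimum `xmin` over all active snapshots and transactions; a toy sketch:

```python
def database_horizon(backend_xmins):
    """Dead tuples created/deleted before this xid can be vacuumed away."""
    return min(backend_xmins)

# Three backends; one long-running transaction (xmin 3699) holds the horizon back
# for everyone, even though the other two have moved on:
print(database_horizon([3699, 3805, 3810]))   # 3699
```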
If we now make a segment represent snapshots (from `snapshot.xmin` to `snapshot.xmax`) rather than transactions, we can visualize the situation as follows:

In this figure, the lowest snapshot pertains to an uncompleted transaction, and in the other snapshots, `snapshot.xmin` cannot be greater than the transaction ID.
In our example, the transaction was started with the Read Committed isolation level. Even though it does not have any active data snapshot, it continues to hold the horizon:
```
| => BEGIN;
| => UPDATE accounts SET amount = amount + 1.00;
| => COMMIT;
```
```
=> SELECT backend_xmin FROM pg_stat_activity WHERE pid = pg_backend_pid();
```
```
backend_xmin
--------------
3699
(1 row)
```
And only after completion of the transaction, the horizon moves forward, which enables vacuuming dead tuples away:
```
=> COMMIT;
=> SELECT backend_xmin FROM pg_stat_activity WHERE pid = pg_backend_pid();
```
```
backend_xmin
--------------
3700
(1 row)
```
In the case the described situation really causes issues and there is no way to work it around at the application level, two parameters are available starting with version 9.6:
* *`old_snapshot_threshold`* determines the maximum lifetime of a snapshot. When this time elapses, the server becomes eligible to vacuum dead tuples away, and if a «long-running» transaction still needs them, it will get a «snapshot too old» error.
* *`idle_in_transaction_session_timeout`* determines the maximum lifetime of an idle transaction. When this time elapses, the transaction is aborted.
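Both are ordinary server settings; an illustrative `postgresql.conf` fragment (the values here are examples, not recommendations):

```
old_snapshot_threshold = 60min                 # snapshots older than this may get "snapshot too old"
idle_in_transaction_session_timeout = 10min    # abort sessions sitting idle inside a transaction
```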
Snapshot export
===============
Sometimes situations arise where several concurrent transactions must be guaranteed to see the same data. An example is a `pg_dump` utility, which can work in a parallel mode: all worker processes must see the database in the same state for the backup copy to be consistent.
Of course, we cannot rely on the belief that the transactions will see the same data just because they were started «simultaneously». To this end, export and import of a snapshot are available.
The `pg_export_snapshot` function returns the snapshot ID, which can be passed to another transaction (using tools outside the DBMS).
```
=> BEGIN ISOLATION LEVEL REPEATABLE READ;
=> SELECT count(*) FROM accounts; -- any query
```
```
count
-------
3
(1 row)
```
```
=> SELECT pg_export_snapshot();
```
```
pg_export_snapshot
---------------------
00000004-00000E7B-1
(1 row)
```
The other transaction can import the snapshot using the SET TRANSACTION SNAPSHOT command before performing its first query. The Repeatable Read or Serializable isolation level must also be specified beforehand, since at the Read Committed level, statements would use their own snapshots.
```
| => DELETE FROM accounts;
| => BEGIN ISOLATION LEVEL REPEATABLE READ;
| => SET TRANSACTION SNAPSHOT '00000004-00000E7B-1';
```
The second transaction will now work with the snapshot of the first one and, therefore, see three rows (rather than zero):
```
| => SELECT count(*) FROM accounts;
```
```
| count
| -------
| 3
| (1 row)
```
The lifetime of an exported snapshot is the same as the lifetime of the exporting transaction.
```
| => COMMIT;
=> COMMIT;
```
[Read on](https://habr.com/ru/company/postgrespro/blog/483768/).
https://habr.com/ru/post/479512/
unsung hero
Part of Speech: n
Definition: a person who makes a substantive yet unrecognized contribution
We have 10 full SharePoint Conference 2011 (SPC) passes (valued at $1,199.00!!! each) to give to deserving individuals who would like to attend SPC 2011 and who are prepared to help the conference in a small way in return (some hours helping attendees in the Hands on Lab room).
We want you to nominate someone you know who has made a substantive yet unrecognized contribution and been going the extra mile for the SharePoint community. This should be someone thinking of others always, someone keen to go the extra distance and someone you feel never asks for anything in return.
We want to thank your unsung hero!
Here is how you enter:
- Email spc@microsoft.com with the subject “My Unsung Hero Nomination” by 11:59 p.m. on the 31st of July 2011.
- Describe why the unsung hero deserves this prize in 100 words or less.
- Your Unsung hero’s Name
- Your Unsung hero’s Email address
We will judge the entries and pick 10 individuals who we will ask if they would like a full conference pass to SPC11. In return we will ask them for 2 hours of time per day at the event to help attendees with hands on labs.
We want dedicated and passionate individuals who love to help others. We want to reward them with the chance to attend SPC in person and see for themselves the spectacle that is the SharePoint community in full swing.
You can Nominate up to 5 of your unsung heroes! (separate email entries for each please)
Here are the Official Contest Rules
Thanks & have fun nominating those who you think help our community and don’t get rewarded for their efforts.
-CJ.
https://blogs.msdn.microsoft.com/cjohnson/2011/07/01/send-an-unsung-hero-to-spc/
#include <openssl/rand.h>
const char *RAND_file_name(char *buf, size_t num);
int RAND_load_file(const char *filename, long max_bytes);
int RAND_write_file(const char *filename);
RAND_file_name() generates a default path for the random seed file. buf points to a buffer of size num in which to store the filename. The seed file is $RANDFILE if that environment variable is set, $HOME/.rnd otherwise. If $HOME is not set either, or num is too small for the path name, an error occurs.
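The lookup order of RAND_file_name() — $RANDFILE first, then $HOME/.rnd, else an error — can be mirrored in a few lines; a hedged Python sketch of the same logic (not a binding to the OpenSSL function):

```python
def rand_file_name(environ):
    """Mimic RAND_file_name(): $RANDFILE wins, else $HOME/.rnd, else None (error)."""
    if "RANDFILE" in environ:
        return environ["RANDFILE"]
    if "HOME" in environ:
        return environ["HOME"] + "/.rnd"
    return None  # RAND_file_name() reports an error in this case

print(rand_file_name({"HOME": "/home/alice"}))   # /home/alice/.rnd
```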
RAND_load_file() reads a number of bytes from file filename and adds them to the PRNG. If max_bytes is non-negative, up to max_bytes bytes are read; starting with OpenSSL 0.9.5, if max_bytes is -1, the complete file is read.
RAND_write_file() writes a number of random bytes (currently 1024) to file filename which can be used to initialize the PRNG by calling RAND_load_file() in a later session.

RAND_load_file(), RAND_write_file() and RAND_file_name() are available in all versions of SSLeay and OpenSSL.
0.9.8d 2001-03-21 RAND_load_file(3)
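The seed-file round trip described above (write entropy out at shutdown, mix it back in at startup) is a general pattern, not specific to OpenSSL. The Python sketch below imitates it with os.urandom and random.seed rather than OpenSSL's PRNG; the file name, helper names, and byte counts are illustrative only:

```python
import os
import random
import tempfile

SEED_BYTES = 1024  # RAND_write_file() currently writes 1024 bytes


def write_seed_file(path, num_bytes=SEED_BYTES):
    """Persist fresh entropy to a seed file, like RAND_write_file()."""
    data = os.urandom(num_bytes)
    with open(path, "wb") as f:
        f.write(data)
    return num_bytes


def load_seed_file(path, max_bytes=-1):
    """Mix a seed file back into the PRNG, like RAND_load_file().

    As in the man page, max_bytes == -1 means read the complete file;
    otherwise at most max_bytes bytes are read.
    """
    with open(path, "rb") as f:
        data = f.read() if max_bytes == -1 else f.read(max_bytes)
    random.seed(data)  # stand-in for feeding OpenSSL's PRNG
    return len(data)


seed_path = os.path.join(tempfile.gettempdir(), "demo.rnd")
written = write_seed_file(seed_path)
loaded_all = load_seed_file(seed_path)       # -1: whole file
loaded_part = load_seed_file(seed_path, 16)  # capped read
```

As with the real functions, the caller is responsible for checking the returned byte counts.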
Source: http://www.syzdek.net/~syzdek/docs/man/.shtml/man3/RAND_load_file.3.html
Even so, it is a good idea to hide these headers, or use them to provide misleading information.
In an MVC application, there are generally 3 headers you are going to want to target.
The first one is the server header. This one is IIS-specific. Unfortunately, MS has not provided an easy way to change this header. The two options you have are to use a WAF that will mask it for you, or to change it in code. The code option involves creating an HTTP module and adding it to the pipeline in the appropriate place.
public class CustomServerHeaderModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.PreSendRequestHeaders += OnPreSendRequestHeaders;
    }

    private void OnPreSendRequestHeaders(object sender, EventArgs e)
    {
        // Overwrite the IIS "Server" header with a misleading value
        HttpContext.Current.Response.Headers.Set("Server", "Jetty(6.0.x)");
    }

    public void Dispose()
    {
    }
}
In the above case, I am setting the server header similar to that of Jetty. Why? Well, why not?
The second one you will want to target is the X-Powered-By header. I mean, who cares what powers your site? Oh wait, an attacker does. In any event, this is a custom header that is set in one of the .config files that IIS reads. You can override it by adding a clear tag.
<httpProtocol>
  <customHeaders>
    <clear />
  </customHeaders>
</httpProtocol>
The last one is the X-AspNetMvc-Version. Once again, I'm not sure why you would want to advertise this to anyone. Luckily this one can easily be disabled by code. In the application start of your global.asax simply add the following line.
MvcHandler.DisableMvcResponseHeader = true;
Trying to minimize the amount of information leaked by your application is always a good thing.
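One way to verify that the fingerprint headers are actually gone is simply to request the site and inspect the response headers. The Python sketch below spins up a throwaway local HTTP server purely so the check is self-contained; the masked "Jetty" banner mirrors the module above, and in practice you would point the client half at your own site. All names here are illustrative:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class MaskedHandler(BaseHTTPRequestHandler):
    # Masked banner, analogous to what the IHttpModule does for IIS.
    server_version = "Jetty(6.0.x)"
    sys_version = ""  # suppress the default "Python/x.y" suffix

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet


# Serve on an OS-assigned loopback port in a background thread.
server = HTTPServer(("127.0.0.1", 0), MaskedHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

resp = urlopen("http://127.0.0.1:%d/" % server.server_address[1])
body = resp.read()
headers = dict(resp.headers)
server.shutdown()
```

If the hardening worked, the Server value is the decoy and the X-Powered-By / X-AspNetMvc-Version headers are simply absent.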
Source: https://www.shamirc.com/2013/11/mvc4-remove-unnecessary-headers.html
I have an amazing tool called 7edit for editing and sending HL7 v2 messages. I've searched hours for a similar tool for HL7v3 without luck. They're all web based, crappy, broken links, etc.
Hi Scott,
One way would be to use an Ensemble production in a separate namespace to take your example HL7v3 messages and send them into the required service of your HealthShare production. See the Ensemble HL7 Version 3 Development Guide, which provides an introduction to Ensemble HL7v3 capabilities in Chapter 1 with an example production described in Chapter 2. You can copy the example production from the ENSDEMO database and run it with the example messages that are provided in Chapter 3.
Hope that helps.
Jan Jachnik
Source: https://community.intersystems.com/post/how-can-i-send-test-message-my-healthshare-instance%C2%A0
imshow shows black screen
I did create a project where everything was running as intended. Then I reinstalled my system, copied the project and I get black screen when showing any frame of proccessed video. My simple code:
#include <cstdio>
#include "opencv2/opencv.hpp"
#include <iostream>
#include <sstream>

using namespace cv;
using namespace std;

int main(int, char**)
{
    VideoCapture cap("videofile.avi");
    if (!cap.isOpened()) // check if we succeeded
        return -1;

    namedWindow("Frame", 1);
    for (;;)
    {
        Mat frame;
        if (!cap.read(frame))
        {
            cerr << "Unable to read next frame." << endl;
            cerr << "Exiting..." << endl;
            exit(EXIT_FAILURE);
        }
        imshow("Frame", frame);
        if (waitKey(30) >= 0)
            break;
    }
    return 0;
}
The output I am getting:
I have installed all possible codecs, even FFMPEG, which I didn't need the last time I was running the project.
There's also this log in console which has never been there before
***** VIDEOINPUT LIBRARY - 0.1995 - TFW07 *****
please ask your question here .
I have edited the question
things you could try:
cerr << frame.size() << endl;
cerr << frame(Rect(60,60,20,20)) << endl;
If your saved image is black, then it has something to do with your capture interface.
Source: http://answers.opencv.org/question/86993/imshow-shows-black-screen/
Talk:Authoring a Logical Entity
Contents
- 1 c stands for server ?
- 2 Testing the entity
- 3 What's with the C++ tutorial?
- 4 Forgetting something?
- 5 // memdbgon must be the last include file in a .cpp file!!!
- 6 Good tutorial!
- 7 Extra Things
- 8 Added a tutorial on using the logical entity in Hammer
- 9 CLogicalEntity
- 10 HL2:DM and Entities
- 11 Tried Tutorial twice! Grenade isn't igniting.
- 12 it does not work?
c stands for server ?
why prefixing server-side with stuff with the c letter ? i fail to see the subtility with the "c_" for "client side". why not "s" or "sr" ? Meithal 10:54, 2 Jun 2008 (PDT)
- It has been my understanding that C in the beginning of a class name means just that, "class". This as part of the Hungarian notation. But reading this made me wonder. For some reason it doesn't sound too unfamiliar however. Think I might have heard or seen this myself before. --Kodak 06:37, 30 January 2009 (UTC)
Testing the entity
So, maybe this is silly, but maybe this tutorial should also include instructions on testing this entity we just made on a map so that we know it actually did something. From what is here I have code that compiles but no idea if it actually did anything other than faith in the person who wrote the tutorial. How does one trigger the entity so it prints to the screen or console (or wherever it prints) when they put the entity into a map and run it? ent_fire? what? Zarbon9696
- You'd tie a map output to OnThreshold, and use either an input or ent_fire to prod it. You're right though, something needs to be added. --TomEdwards 03:03, 2 Jul 2008 (PDT)
Hi, I found a good tutorial that continues on this one and teaches you how to use your logical entity: Fabian Mejia: Valve's Source - Part 3 You'll work with the editor Hammer that you can find when opening Steam -> Tools -> Source SDK (where you also create your mod), and it's very easy to follow! Have fun!--------- Now, the question is how to do that by modding the source code itself?? --Kweiko 00:39, 15 Jun 2009
What's with the C++ tutorial?
This is a VDC article on how to mod in Source; why aren't we assuming that readers have a working knowledge of C++, and letting the unenlightened go to other, more in-depth websites actually intended to explain programming fundamentals? The extra stuff is making me have to sift through the article to find actual information. --TheRatcheteer 21:33, 7 Jul 2008 (PDT)
- It's the first in the "Your First Entity" series, geared towards beginners. I'd argue that there is some use for the information, seeing as some modders might be very new to programming and they're learning to code by hacking through the SDK (as misguided as that might be).. but I see your point too. --Campaignjunkie (talk) 23:41, 7 Jul 2008 (PDT)
- It comes down to the balance of requirements. There's little argument that the number of people already familiar with C++ trying to use this article isn't far smaller than the number who have never written a line of it before. (Plus if you already understand C++ you can just read the commented source code, surely?) --TomEdwards 02:07, 8 Jul 2008 (PDT)
Forgetting something?
// memdbgon must be the last include file in a .cpp file!!!
#include "tier0/memdbgon.h"
- This is to avoid memory leaks right? Idk i'm not a pro coder but seems fairly self explanatory :P
- --Jenkins08 11:07, 7 April 2009 (UTC)
- It enables a special memory allocator that is compatible with some sort of debugging tool. I'm not sure if modders have that tool or not - but either way, it's not necessary to do the #include. --TomEdwards 17:05, 17 August 2009 (UTC)
Good tutorial!
I did this tutorial yesterday, after fixing my code for VS 2008. It's pretty easy to follow, it worked after the first attempt. I tried to add some more functionality like in math_counter, but inputdata isn't being recognised by the compiler. Any idea why? Solokiller 12:54, 19 November 2009 (UTC)
- I figured out what caused this: the reference name had a capital D, which caused it to ignore inputdata with a lower case d. Solokiller 13:17, 19 November 2009 (UTC)
- You can use autocomplete (Alt+Right arrow) to avoid that in future. And thanks. :-) --TomEdwards 18:41, 19 November 2009 (UTC)
Extra Things
While this type of tutorial is very good (it certainly helped me in the general basics), I'm really struggling with the more Valve-specific and really very very basic stuff. With the creation code for a particle effect, for example:
DispatchParticleEffect( "BeanPickup_Red", ?, Vector(0,0,0), this );
I know that the first parameter is the name of the particle effect I want to create. I know that the second parameter is the origin of the particle effect but have no idea how to put this to any use, since it involves something to do with either a GetLocalOrigin or GetAbsOrigin function. All I know about these is that they return the origin in one format or another of an entity, but I don't know which one of the two to use or how to use it, since these then involve pointers and pointers involve something called "casting", and I need to have a pointer to the entity I'm in the code for where I'm trying to actually create this particle effect and have no idea how to do that either since it involves another utility function to return the entity to pass to the pointer, and there's nothing on this type of thing on the Wiki. I can only find code that does stuff with pointers relating to pOwner and that's no use to me from where I'm standing.
It's these small things that completely stop me from coding anything outside a tutorial. If we could have more beginner-targeted tutorials like this but for the useful little things then that would help me out a hell of a lot. --X6herbius 15:35, 26 April 2010 (UTC)
- This page can't cover everything. Check Accessing other entities to learn about pointers (it's on the main category page...) and GetAbsOrigin() for your other problem. --TomEdwards 14:46, 27 April 2010 (UTC)
Added a tutorial on using the logical entity in Hammer
An imageless, shorter re-telling of with some differences. I'm just going through the tutorial myself and thought this would be relevant. --Kiwibonga 04:59, 27 July 2010 (UTC)
CLogicalEntity
This tutorial is clearly broken. When i do the second step i get loads of errors about CLogicalEntity is not a class or stuctname
HL2:DM and Entities
I thought I would post this here because I was pulling out my hair. I created a multiplayer mod with the intent to modify hl2:dm. However, whatever code I was writing wasn't getting picked up. Forget Entities, I wasn't even able to set Console variables using ConVar!
This mistake I was using was I was using AppID 320 in my gameinfo.txt, apparently this led my mod to fallback to the hl2:dm dlls (wild speculation on my end). Whatever the reason, it just wasn't working, I couldn't write any code to affect the code base. The solution was to use AppID 215 (source 2006 base) and everything works so far!
Good luck all!
--Lazypenguin 12:59, 3 July 2012 (PDT)
Tried Tutorial twice! Grenade isn't igniting.
Hey guys, just tried to rebuild the tutorial twice. Added the cpp but divided into a .h and a .cpp. Compiled correctly, even checked with the given full source code. Seemed alright. When adding the entity into my map using Hammer everything can be added like the tutorials tells one to do. Created 3 Ammo-Packages and a grenade, and of course my entity. I put the Outputs of the Ammo to Tick "MyCounter" and set the Threshold on my counter to 3. The output of such is set to OnThreshold and Ignite "MyGrenade". I even tried with different entities. Tried ent_fire to start. Changed the limit threshold to 2 and picked up only two packages and even replaced the ammo packages with weapons (crossbow and shotgun) but nothing happens at all. I can pickup whatever I want it's not igniting. I actually double-double-checked the code but it seems pretty accurate. My system: Win 8 x64 VS 2012 (Mod is starting correctly from within this. DLL's are copied each time I recompile) Source Base 2007 (App-ID: 218) The Map is starting from within the Hammer-Editor as well, applying my changes. I can't see my mistake. Any help would be appreciated.
Thanks Max
--Mvmnt-max 19:55, 25 March 2013 (PDT)
Edit: Figured it out myself: Before firstly imported into Visual Studio i copied some dll's from source base 1007 sdk to basically make the game run in the first place. wanted to try my map before. I forgot to delete those to get my self-created client.dll and server.dll into my bin-folder. Just erased those a few secs ago and rebuild my projects. The dll's have been copied correctly and the grenade is exploding as wanted!
So be sure to delete all self-copied dlls and let visual studio create those for you since those include all the self-written code.
Cheers Max
it does not work?
it gives an error as soon as i declare the first class
Source: https://developer.valvesoftware.com/wiki/Talk:Authoring_a_Logical_Entity
/***************************************************/
/*! \class UdpSocket
    \brief STK UDP socket server/client class.

    This class provides a uniform cross-platform UDP socket
    server/client interface.  Methods are provided for reading or
    writing data buffers.  The constructor creates a UDP socket and
    binds it to the specified port.  Note that only one socket can
    be bound to a given port on the same machine.

    UDP sockets provide unreliable, connection-less service.
    Messages can be lost, duplicated, or received out of order.
    That said, data transmission tends to be faster than with TCP
    connections and datagrams are not potentially combined by the
    underlying system.

    The user is responsible for checking the values returned by the
    read/write methods.  Values less than or equal to zero indicate
    the occurrence of an error.

    by Perry R. Cook and Gary P. Scavone, 1995 - 2005.
*/
/***************************************************/

#ifndef STK_UDPSOCKET_H
#define STK_UDPSOCKET_H

#include "Socket.h"

class UdpSocket : public Socket
{
 public:
  //! Default constructor creates a local UDP socket on port 2006 (or the specified port number).
  /*!
    An StkError will be thrown if a socket error occurs during instantiation.
  */
  UdpSocket( int port = 2006 );

  //! The class destructor closes the socket instance.
  ~UdpSocket();

  //! Set the address for subsequent outgoing data sent via the \e writeBuffer() function.
  /*!
    An StkError will be thrown if the host is unknown.
  */
  void setDestination( int port = 2006, std::string hostname = "localhost" );

  //! Send a buffer to the address specified with the \e setDestination() function.  Returns the number of bytes written or -1 if an error occurs.
  /*!
    This function will fail if the default address (set with \e setDestination()) is invalid or has not been specified.
  */
  int writeBuffer(const void *buffer, long bufferSize, int flags = 0);

  //! Read an input buffer, up to length \e bufferSize.  Returns the number of bytes read or -1 if an error occurs.
  int readBuffer(void *buffer, long bufferSize, int flags = 0);

  //! Write a buffer to the specified socket.  Returns the number of bytes written or -1 if an error occurs.
  int writeBufferTo(const void *buffer, long bufferSize, int port, std::string hostname = "localhost", int flags = 0 );

 protected:
  //! A protected function for use in writing a socket address structure.
  /*!
    An StkError will be thrown if the host is unknown.
  */
  void setAddress( struct sockaddr_in *address, int port = 2006, std::string hostname = "localhost" );

  struct sockaddr_in address_;
  bool validAddress_;
};

#endif // defined(STK_UDPSOCKET_H)
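The semantics documented in this header (connection-less datagrams, a fixed default destination, caller-checked return values) are easy to demonstrate in any socket API. The Python sketch below mirrors the setDestination() / writeBuffer() / readBuffer() flow against a loopback socket; the port and payload are arbitrary demo values, and this is not the STK API itself:

```python
import socket

# A bound "server" socket; port 0 asks the OS for any free port
# (the real UdpSocket constructor defaults to port 2006).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server.settimeout(5)
port = server.getsockname()[1]

# A "client" socket; for UDP, connect() merely records the default
# destination, much like setDestination() -- no handshake occurs.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.connect(("127.0.0.1", port))

sent = client.send(b"hello")        # cf. writeBuffer(): returns bytes written
data, addr = server.recvfrom(1024)  # cf. readBuffer(): returns one datagram

client.close()
server.close()
```

As the class documentation stresses, the return values (bytes sent/received) must be checked by the caller, since UDP gives no delivery guarantee.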
Source: http://csound.sourcearchive.com/documentation/1:5.08.0.dfsg2-1ubuntu3/UdpSocket_8h-source.html
The new NVM Express 2.0 (NVMe 2.0) Protocol has been released. While the thrust of the protocol focuses on flash storage and networking, the latest additions include full-blown support for hard disk drives (HDDs).
The addition of hard drive support is one of the biggest changes coming in NVMe 2.0, and something most people will be surprised to see, as current 7,200-RPM hard drives cannot fully saturate current SATA 3.0 connections. However, like other forms of tech, hard drives are evolving, which might eventually require a bandwidth upgrade beyond SATA 3.0 speeds. For instance, Seagate announced two weeks ago that its Mach.2 hard drives can reach up to 524MB/s, a speed previously achievable only with SSDs.
Drives like the Mach.2 could become very popular over the next few years, as hard drive capacity escalates beyond 20TB to fulfill the needs of the enterprise and data center world. Drives like these will need significantly higher bandwidth to ensure that accessing more than 20TB of data won't take long.
However, simplifying the ecosystem down to one storage connection seems to be the main impetus for adding hard drive support, particularly as the NVMe spec continues to evolve its NVMe-oF (NVMe over Fabrics) functionality, which allows drives to be networked without additional abstraction layers.
NVMe 2.0 hard drive support could also signal the beginning of a decline of the SATA protocol as a whole since the protocol has not been updated in over 12 years. Getting rid of SATA and migrating all hard drives to NVMe could free up some space on motherboards and simplify storage connections (at least in the consumer space) to just NVMe, but don't expect that change to happen any time soon – the latest word in the storage industry is that it will take a few more years before we see NVMe HDDs ship in high volumes.
Here's the rest of the NVMe 2.0 feature set. Overall, these features aim to reduce NVMe's overhead and give your PC more control over your SSD.
- Zoned Namespaces (ZNS) is coming to NVMe 2.0 and will allow the SSD and host to collaborate data placement within the drive itself. ZNS will permit data to be aligned to the physical media of the SSD which will improve overall system performance and give your PC more of the SSD's storage capacity.
- NVMe Key-Value Command Set will provide access to data on an NVMe SSD namespace using a key instead of a logical block address. Switching from logical blocks to keys will reduce overhead by not having the SSD maintain a translation table for the logical block address.
- Rotational media support is the name for Hard Disk Drive support for NVM Express. This will include new updates to features and enhancements required for HDD support.
- NVMe Endurance Group Management will enable media to be configured into Endurance Groups and NVM Sets. This exposes granularity of access to the SSD and improved control.
- NVMe 2.0 will be backward compatible with previous NVMe architecture generations, allowing future NVMe 2.0 SSDs to work with current M.2-capable motherboards and M.2 cards.
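The Key-Value point in the list above can be illustrated abstractly: with block addressing, the host must maintain a key-to-logical-block translation table, whereas a key-value command set lets it hand the key straight to the drive. The Python sketch below models the two schemes with plain dictionaries; every name in it is illustrative, and none of it is a real NVMe API:

```python
BLOCK_SIZE = 4096

# --- Block-addressed model: host maintains a translation table ---
translation_table = {}  # key -> logical block address (LBA)
disk_blocks = {}        # LBA -> fixed-size block contents


def block_put(key, value):
    lba = len(disk_blocks)  # naive sequential allocator
    translation_table[key] = lba
    disk_blocks[lba] = value.ljust(BLOCK_SIZE, b"\x00")  # pad to block size


def block_get(key):
    lba = translation_table[key]  # extra indirection on every lookup
    return disk_blocks[lba].rstrip(b"\x00")


# --- Key-value model: the drive indexes by key directly ---
kv_store = {}


def kv_put(key, value):
    kv_store[key] = value  # no host-side translation table to maintain


def kv_get(key):
    return kv_store[key]


block_put(b"user:42", b"alice")
kv_put(b"user:42", b"alice")
```

The payoff the spec describes is visible even in this toy model: the key-value path has no translation table and no per-block padding overhead.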
What connector will be used to attach hard drives to pcie via the nvme protocol?
Does this mean we may finally see a samsung evo 870 successor that eschews sata and uses pcie instead, and runs much faster than 550MB/sec?
What's the theoretical maximum speed of a pcie hard drive?
Are we potentially talking pcie5.0 bandwidth in a 2.5 inch hard drive form factor?
That would be the 970.
No faster than what the fastest hard drive on the market is. Hard drives haven't even saturated SATA 3 yet.
Maybe available to it, but no hard drive is going to reach those speeds.
Mechanical drives are still mechanical.
They can't even saturate a SATA II connection.
Even SATA3 is becoming inadequate for today's fastest HDDs.
These will be for servers. There will be PCIe connectors inside NAS enclosures, large-scale SAN devices or storage arrays inside giant server racks.
Consumer grade stuff will continue to be solid state drives in NVMe slots.
Source: https://www.tomshardware.com/uk/news/nvme-2-0-supports-hard-disk-drives
About
The names of the ships and factions are from our own Stars Reach universe. Use the retail prices listed to add to various pledge levels. To save on production cost no flight stands are included with any of the pledge levels or fleet packs. We plan to have the miniatures available for regular purchase after the kickstarter at
Koreth Imperium Fleet Box
- Senator Thalmus class destroyers x4
- Council’s Sword class light cruisers x2
- Empire’s Hand class heavy cruisers x2
- Peoples Justice class battle cruiser x1
- Emperor’s Might class dreadnought x1
The Trazari Colonies Fleet Box
- Gremlin class destroyers x4
- Hammer class light cruisers x2
- Predator class heavy cruisers x2
- Head Hunter class battle cruiser x1
- Boxer class dreadnought x1
The Human Commonwealth Fleet Box
- Madrid class destroyers x4
- Gaugamela class light cruisers x2
- Agincourt class heavy cruisers x2
- Alexander class carrier x1
- Republic class dreadnought x1
Order a total of $100 or more and get FREE shipping in the USA and half price shipping world wide!
Once we reach our $1,500.00 goal, we will work toward unlocking the following Stretch Goals that will boost your fleets!
We have big plans to expand this line and we need your help! There are several directions we can go to add more ships to the miniature line. We want you, the customer, to tell us what you want to see next and would buy! Below are seven short options we can expand into.
They are lettered A-G.
We would like you to number them 1-7 in order of what options you would be most interested. One being your first choice to seven being your last choice.
If there is an option you don’t think you would want, just put an X by it. If there is more than one option you would really be interested in, put another one beside it. When you send us your pledge and order info, just include the letters and a corresponding number or X.
It’s that quick and simple. IF you would like to see something we left out feel free to include a quick note about it! If enough folks mention it we will do it!
A: I would like to see new factions with different ship styles added to the line.
B: I would like to see the existing factions “fleshed out” with the obvious missing ship classes like carriers for the two factions, and the BC for the CW added.
C: I would like to see battleships, one size up from the dreadnoughts, for each of the factions.
D: I would like to see more ship designs of the same class added and some specialty ships, for example more than one type of CA or DN and special ships like missile or assault cruisers, light and fleet carriers, etc.
E: I would like to see some civilian ships like freighters and bases added to help with scenarios and campaigns.
F: Even though information on the Stars Reach universe will be touched on in various rule books I would like to see a general back ground book printed giving more info on the universe I could use for ideas or even RPG such as maps, history time lines, major wars and battles, more info on the background culture of each faction, etc.
G: I would like to see this line in a bigger scale.

In closing, we would like to thank all of you who participate in this kickstarter in advance. You are part of making a labor of love a reality and adding something more to the hobby (group hug!).
We hope you get lots of enjoyment from the miniatures for many, many games to come! Congratulations on being a part of expanding the hobby and helping to add something new to it! We at Twilight wish everyone the best of luck and happy gaming!
Travis Melton, Owner Twilight Game Designs TJ
Risks and challenges
While Twilight Game Designs TJ is a new company, our collaborators have been doing this for a long time. This project is being managed by Bryan K Borgman, who has over half a dozen crowdfunding projects under his belt. Acheson Creations will be handling all the mold making (completed) and metal casting. Acheson Creations has been in the business of making high-quality tabletop terrain and miniatures since May 2000. Together, Bryan and Acheson Creations consider themselves experienced at successfully running projects such as this and have tried to account for the labor time involved in our estimated shipping dates.
Source: https://www.kickstarter.com/projects/twilightgamedesigns/stars-reach-space-ship-miniatures
Difference Between C# Array vs List
This is an interesting comparison without one definitive answer. From my perspective, C# arrays and lists are where the abstraction and implementation sides of computing meet. An array is very much tied to the hardware notion of continuous, contiguous memory, with every element identical in size (although often these elements are addresses, and so refer to non-identically-sized referents). A list is a concept (from mathematics, to an extent) where elements are ordered and where there is (normally) a beginning and an end, and thus where indexing is possible. These two ideas line up quite well. However, once we consider a list as an abstract data type, an approach to accessing and manipulating data, we are able to break a number of those rules.
What is an Array?
An array is a sequential collection of similar data that can be accessed by index. It is the simplest kind of data structure, in which the elements are stored in contiguous memory locations.

In an array, the index starts at zero, so to access the first element of an array numarray, you write numarray[0].

An array is a consecutive section of memory that occupies n*size(type) bytes, where n is the length of the array and size(type) is the size in memory needed to store the data type you are going to use in the array. This means that if you want to create an array of one hundred ints, and every int occupies four bytes, you need an unused memory section of at least four hundred bytes (100*4). This also means that arrays are pretty cheap to create, release and use, because they are plain chunks of memory.
Array Options:-
- The data is stored in a continuous memory allocation; each element directly follows the previous one in memory, with no randomness in allocation.
- Arrays give random access, e.g. arr[0], arr[6].
- Memory allocation is static, which may lead to wasted memory.
- Every cell of an array holds a single data type.
- Insertion and deletion are more time-consuming.
What is a List?
An ArrayList is a collection of objects of the same or different types. The size of an ArrayList is dynamically increased or decreased as needed. It works like an array, but unlike an array, items in an ArrayList can be dynamically allocated or deallocated, i.e. you can add, remove, index, or search for data in the collection.

A linked list, however, is a completely different structure. Most linked-list implementations are a chain of nodes that each store: 1. one value, and 2. one or more pointers that keep the nodes connected. This means that you do not need one big chunk of available memory large enough to hold all of your data, because the nodes can be scattered through memory.
List options:-
- The data is stored at random locations; each node is connected to the next via a pointer (and to the previous node, in the case of a doubly linked list).
- Nodes must be accessed sequentially, because each node depends on the one before it.
- Memory is allocated dynamically: each node gets memory when the program requests it, so there is no memory wastage.
- A single node is divided into several parts, each potentially holding a different data type, but the last part must be the pointer to the next node.
- Insertion and deletion are much easier and faster; searching is easier too.
Head To Head Comparison Between C# Array vs List
Below are the top 5 differences between C# Array and List.
Key Difference Between C# Array vs List
As you can see, there are many differences between C# Array and List performance. Let's look at the top comparisons between C# Array and List below:
- An array stores data of the same type, whereas an ArrayList stores data in the form of objects, which may be of different types.
- The size of an ArrayList grows dynamically, whereas an array's size remains static throughout the program.
- Insertion and deletion operations in an ArrayList are slower than in an array.
- Arrays are strongly typed, whereas ArrayLists are not strongly typed.
- Arrays belong to System.Array, whereas ArrayList belongs to the System.Collections namespace.
- When choosing between Array and ArrayList, choose on the basis of the features you need to implement.
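The differences listed above are not unique to C#. A rough analogue of the same distinction (homogeneous, fixed-type storage versus a heterogeneous, dynamically growing collection) exists between Python's array.array and list types; the sketch below is illustrative only and is not C# code:

```python
from array import array

# array.array: homogeneous, machine-typed storage (cf. a C# int[])
nums = array("i", [1, 2, 3])  # "i" = signed int elements only
nums.append(4)                # grows, but only with ints

try:
    nums.append("five")       # a wrong-typed element is rejected
    rejected = False
except TypeError:
    rejected = True

# list: heterogeneous, dynamically growing (cf. ArrayList of objects)
mixed = [1, "two", 3.0]
mixed.append(None)            # any type is accepted
```

The typed array refuses non-int elements at insertion time, much as a strongly typed C# array would at compile time, while the list accepts anything.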
C# Array vs List Comparison Table
Below is the topmost comparison between C# Array and List.
Conclusion – C# Array vs List
We saw a comparison of C# Array vs. List performance and memory usage in the C# language. For speed, it is often worth preferring regular arrays; the performance benefit can be significant.

Lists are used much more often in C# than arrays are, but there are some instances where arrays can (or should) be used, including when your data is unlikely to grow significantly or when you are dealing with a relatively large amount of data that will need to be indexed into often.

Let me offer two examples of lists that break the rules of an array. In a linked list, each element points to the next element, so I can easily place a new element between two existing elements, or remove one and reconnect the two remaining (the previous and the next); while I can access elements via an index, I can only do so by moving from one element to the next and counting, so it is not truly indexed. Another example is the queue, where I can only add at the end and remove from the start; if I need to access elements via an index, it is possible, but I am clearly not using the right abstract data type, even if the implementation would allow it.
Recommended Article
This has been a guide to the top differences between C# Array and List. Here we also discuss the key differences with infographics and a comparison table. You may also have a look at the following articles.
Source: https://www.educba.com/c-sharp-array-vs-list/
First set of code.
def main():
    # Create a bool variable to use as a flag.
    found = False

    # Get the search value.
    search = raw_input('Enter a number to search for: ')

    # Open the numbers.txt file.
    number_file = open('numbers.txt', 'r')

    # Read the first record.
    line = number_file.readline()

    # Search record by record until the end of the file.
    while line != '':
        # Determine whether this record matches the search value.
        if line.rstrip('\n') == search:
            # Display the record.
            print 'Number:', line
            # Set the found flag to True.
            found = True
        # Read the next line.
        line = number_file.readline()

    # Close the file.
    number_file.close()

    # If the search value was not found in the file,
    # display a message.
    if not found:
        print 'The number was not found.'

# Call the main function.
main()
Second set of code.
def main():
    # Open the file for reading.
    infile = open('numbers.txt', 'r')

    # Read the contents of the file into a list.
    numbers = infile.readlines()

    # Close the file.
    infile.close()

    # Convert each element into an int.
    numbers = [int(num) for num in numbers]

    # Get the search value (as an int, so it can match list elements).
    search = int(raw_input('Enter a number to search for: '))

    # Determine whether the search value is in the list.
    if search in numbers:
        print 'Number found.', search
    else:
        print 'The number was not found.'

# Call the main function.
main()
If someone could point me in the right direction, it would be very much appreciated.
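For reference, a corrected sketch of the first snippet's loop-based approach (assuming numbers.txt holds one number per line). The problems in the snippets above include a mismatched file-handle name (account_file is opened but number_file is read) and the fact that only the first line is ever compared:

```python
def search_number(filename, search):
    """Return True if any line of the file equals the search string."""
    found = False
    number_file = open(filename, 'r')
    line = number_file.readline()
    while line != '':
        # Strip the trailing newline before comparing.
        if line.strip() == search:
            found = True
            break
        line = number_file.readline()
    number_file.close()
    return found
```

Usage would mirror the original: pass the value from raw_input() and print a message based on the returned flag.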
|
http://www.dreamincode.net/forums/topic/232611-search-a-text-file-in-python/
|
CC-MAIN-2017-04
|
refinedweb
| 219
| 79.36
|
.
I think it's funny to use the term "constant variable". It's just a constant, right? You're basically saying "jumbo shrimp". Global constants are absolutely OK, especially with the trick shown previously where they won't be copied into every file that includes them.
Really, it's just: global variables are bad, global constants are good.
Coming from Java I have to read this stuff 10 times, static and extern and global vs local are all different
Java does:
public final static double PI = 3.1415;
C++11:
extern static const float PI(3.1415);
right?
Maybe I'm too literal, but I don't get the joke. Would someone please explain it.
// is the start of a comment in C++ -- starting your variable with this prefix would turn your global variable into a comment, rendering it harmless. :)
alex bro i have a question , what do you mean by saying that " In the above example, you might find g_mode is referenced 442 times. " ??......
and one more thing can you please add something like 'notify me by email(whenever someone posts a new comment)' in comment box because one has to check again and again that he has a reply or not..by adding above stuff we can simply get a notification..Please consider adding it.
I've updated the wording to try and be a little clearer. It's not uncommon to have hundreds of references to a global variable in a given program, and if you find your global variable has the wrong value, you may have to check hundreds of different places to try to identify which one is at fault. It sucks.
I'm looking into an email notifier function.
Hey Alex, I just thought I'd mention that I've found three typos:
"Although they may seem harmless in a small academic programs..."; should say "in small academic programs"
Here's the second one:
"... to reduce the chance of naming collisions and raise aware that a variable is global."
It should say "raise awareness".
The last one is a comment in your last example:
"// pass in getGravity() for parameter gravity() if you want to use the global gravity", should just say "gravity" -- without the parenthesis after it.
Btw, I love the humor mentioned in:
cout << "Launching nuclear missile...", and "the rest of main() doesn’t work like the programmer expects (and the world is obliterated)". That one cracked me up!
Also I like the C++ joke mentioned at the end. :)
Thanks for the notes! All fixed.
Hi Alex,
In the second point of protecting yourself from global destruction, beside the get_gravity function you commented this line
//this function can be exported to other files to access the global outside of this file
How can the global be accessed in other files, the g_gravity is static ?
This gives undefined reference to g_gravity as expected since g_gravity can't be accessed on any other file as it is static, on commenting the extern line, it gives not declared in scope error.
So coming back to your commented line in the function how can this function be exported to other files to access the global outside of this file ?
Another doubt can you explain the difference between non constant global variable -> initialized vs non-initialized?
I think you've mistaken the intent of the example. You correctly note that because g_gravity is static, it can't be exported. But in this case, that's by design. It's the getGravity() function that we could export, making it available to be called from other files. This doesn't impact g_gravity in any way. e.g.
file1.cpp:
main.cpp
> Another doubt can you explain the difference between non constant global variable -> initialized vs non-initialized?
Non-const globals can be initialized when defined, or not. What are you confused about?
In the last chapter you said "Function forward declarations don’t need the extern keyword.". So in the above code (main.cpp) isn't the use of the extern keyword redundant? I just tried without it, and it worked fine. Still I'm asking this to make sure that I've got the concept right. Also, just to satisfy the compiler, shouldn't int main() return a value? Here too, but surprisingly, my compiler didn't complain about the absence of the return statement. Why?
The last two chapters about the 'global evil' have been real tricky, so I've decided to use global variables only if my life depended on it. Far better than remembering all these stuff.
And that joke was brilliant! I never expected a joke on something like C++, do add some more in the upcoming chapters. :D
Yes, you are correct that the extern keyword was redundant. I've edited the previous comment and removed it (as well as fixed the inconsistent function naming).
Good programming jokes are hard to come by. This thread should be worth at least a chuckle.
Okay, thanks for the reply, but you didn't answer my second question: why does it compile without a return statement at the end of main() ?
Thanks for the link to the Stack Overflow thread, made my day! :)
Oops. Some compilers (e.g. Visual Studio) will assume you meant return 0 if you omit the return statement at the end of main. This is non-standard behavior and shouldn't be relied upon.
Okay. Meanwhile I happened to stumble upon this article () which lists a few differences b/w C and C++. The author (whose name is Alex too) says:
"In C++, you are free to leave off the statement 'return 0;' at the end of main; it will be provided automatically, but in C, you must manually add it."
He seems to consider this a standard, contrary to your answer. But I think you are correct, omitting the return statement does feel pretty non-standard. I'm gonna stick with your suggestion and put 'return 0;' at the end of every program I write.
It appears that I have been incorrect. The C++ standard says:
"If control reaches the end of main without encountering a return statement, the effect is that of executing return 0;"
I'll update the tutorial articles accordingly.
Good thing, Alex! Glad that I could contribute something to the improvement of this awesome site. :)
I'm gonna put the return statement anyway, feels more comfortable with it than without.
Hi Alex,
first of all, sorry for my bad English and if I bother you with all the questions I ask.
1. I don't understand the "So what are very good reasons to use non-const global variables?" section. I'm confused by the example, because it seems we can use a const global variable instead. Can you make it clearer, please, or add a simpler code example?
2. I'm also confused by the second advice in the "Protecting yourself from global destruction" section. What is the advantage of using a function which returns the global variable? I tested it, and found the function's result cannot be assigned to. Is that the advantage of this function, that its value can never be changed?
3. The last thing I don't understand is this quote .". Can you make it clearer or add more examples?
I'm so mad and confused with this chapter haha..
thank you.
1) I can't really provide any good examples at this point because they all rely on more advanced concepts. For now, assume global variables are bad, and when you encounter a situation that you can't figure out how to efficiently deal with without using a global variable, you'll have discovered one of the rare use cases yourself.
2&3) The advantage of using an encapsulating function is manifold: First, the function can do any kind of input checking. For example, let's say our global variable is storing the user's name. If direct access to the global variable was provided, any function in the program could set the name to "" (which clearly isn't a valid name). However, if we had a function named setName(), this function could ensure the user had entered a valid name before changing the value of the global variable. Second, let's say we implemented our name global variable as an array of characters. Later on, we want to change to std::string. We can do that, and we just need to modify the setName() function to work properly. Third, if our program isn't working properly, it's much easier to breakpoint the setName() function than it is to try and figure out where everyone who is accessing the global variable directly is.
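The setName() validation idea above can be sketched like this (Python used for brevity; the names and the validation rule are hypothetical, not from the lesson):

```python
_name = ""   # "global" state, kept private to this module


def set_name(name):
    # Input checking happens in one place, before the global is touched.
    if not name:
        raise ValueError("name must not be empty")
    global _name
    _name = name


def get_name():
    # Read access also goes through one well-defined point.
    return _name
```

Because every write goes through set_name(), a single breakpoint there catches every change, and the internal representation of _name can later change without affecting callers.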
For now, it's not too important to understand this in detail. We'll cover encapsulation again in chapter 8 when we talk about object-oriented programming.
That C++ joke at the end made my day, keep up the good work Alex.
ok. but once again good work Alex. Thanks
Hey Alex, these are very nice tutorials. I want to ask you for a little help. Can you suggest me some good book or some other source for c++ which has lots of questions in it(I dont care for answers) which i can use as my practice book with your tutorials.
I don't have a good off-hand reference for a practice book, unfortunately. :(
Hi Alex,
Can u put some code as an example for the last section "protecting yourself from global destruction" for some better understanding.
Done, added some simply examples. Let me know if they're helpful or whether they open up additional questions.
Unfortunately I must have learned this seemingly sensible no-globals rule early, probably from reading one of those structured-programming books. When I stepped through the C++ programs in a debugger that would show the assembly language, I learned that every time a function was called, more instructions were executed doing the setup and cleanup than what the functions did. So programs were spending more time with useless rigmarole protocol than doing anything material. It is a sobering experience to watch routines step through 10 levels of calls just to accomplish setting one word in memory.

Putting variables in global space, with judicious naming, worked miracles to cut down on the crap. Basically, if you organize yourself, you can do better than what local variables do to contain the effects of sloppy disorganization. Of course, if you are a professional programmer, you cannot afford to care if the program ends up 20 times the size of what it might be, and runs at a tenth or a hundredth of the speed it might, because structure is more important than anything.
if (g_mode == 1)
cout << "No threat detected." << endl;
else
cout << "Launching nuclear missiles..." << endl;
Well that escalated quickly...
One reason for globals: access to them is faster than making a copy of a variable on the frame stack. In the embedded world this plays a role.
I had problem with understanding the prior and this chapter. I am totally confused please help.
What else can I do to understand this better.
Ask questions about what you didn't understand.
Last chapter - A word of caution about (non-const) global variables
This chapter - Why (non-const) global variables are evil
NECRO!!! (dancer) Not chapter names, specific concepts that you didn't understand.
Typos.
"Although they may seem harmless in a small academic programs" (remove 'a' if you want to use the plural 'programs')
"what it’s (its) valid values are, and what it’s (its) overall function is" ("it's" is only used for "it is". Oddly enough, the possessive form is just "its")
"(e.g. do input validation, range checking, etc..)." (remove one of the two periods directly after 'etc')
Also, great timing for some humor! Last lesson was fairly confusing, so the comic relief is appreciated!
Updated.
Yeah, this stuff is difficult. Fortunately, you don't need to remember any of it if you don't use global variables! :P
For a lot of the good uses of global variables, they need to be read from all over the program, but written to by only one or two functions. While access functions are one way to enforce this (particularly if only the "read" access functions are made external, and the few functions that need to be able to write to the variable are put in the same file), it seems to me that a more performant approach (though one that loses the ability to include range checking and implementation changeability) would be to use an extern const reference. So it might be:
Then only the file in which this is contained can change the variable (thereby ensuring that it can't be changed unexpectedly), but any file that declares globalVarAccess can read it, and do so as easily as if it were being accessed directly.
Correction to the above code: That second line would have to be "extern const int". Fortunately, that would be caught by the compiler when you try to use it in another file...
Hey! First time commenting, really enjoying this course. Quick question, in regard to game development mainly (though the concept could be applied to anything really):
Say for a game, I wanted to have a variable represent something that has to change throughout the game, but still be accessible in multiple scenarios, like the amount of an item in your inventory. I might have many variables to do something like that (or possibly an array)... is it bad practice to use a global variable for something like that? And is there a better, more efficient way to store information like that? I'm sure there is, just curious as to what you'd recommend. I've only worked with Javascript before and used copious amounts of global variables... might be hard to kick the habit!
Good question, and one that's a little hard to answer right now since we haven't covered the primary concept I'd use to solve for this (classes). But I'll answer anyway.
An inventory is a discrete set of items, consisting of an item type and a quantity (and maybe a maximum size). An inventory also needs functions to manage it (add an item, consume an item, check if an item exists, etc...). I'd definitely create an Inventory class to encapsulate the inventory details and management.
But what does an inventory mean by itself? Typically an inventory is owned by someone. Most likely this is the player (but I suppose it could be a monster, or a chest). For now, we'll assume it's the player. The player also has other attributes worth keeping track of, like name, and possibly level, class, health, etc... I'd create a Player class to manage the player attributes. The Player class would also contain an instance of the Inventory class.
Because there's only one Player, and it really does need to be accessed everywhere, I'd consider making Player global. At least in this case, you only have one global object to manage instead of lots of separate but related global variables, and they're all encapsulated to minimize misuse.
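A rough sketch of that arrangement (Python used for brevity; every name here is hypothetical):

```python
class Inventory:
    def __init__(self):
        self._items = {}          # item name -> quantity

    def add(self, item, qty=1):
        self._items[item] = self._items.get(item, 0) + qty

    def consume(self, item):
        # The management function can validate before mutating state.
        if self._items.get(item, 0) == 0:
            raise ValueError("no %s in inventory" % item)
        self._items[item] -= 1

    def count(self, item):
        return self._items.get(item, 0)


class Player:
    def __init__(self, name):
        self.name = name
        self.inventory = Inventory()   # the inventory is owned by the player


# One encapsulated global object instead of many scattered globals:
player = Player("Hero")
```

The single global aggregate keeps all related state together and funnels every change through its methods.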
I will give a concrete example of when global variables are necessary. This is not to imply that they should not be kept to a minimum --- that is certainly true.
Suppose you are a lattice gauge theorist writing a library intended to solve for quark propagators on GPUs (example, QUDA). It is going to be used by hundreds of people who are integrating it with other codes that they are already using. You want to present users of your library with as simple an interface as possible. That would just be a small set of functions to call, with no fancy abstract types, and perhaps a structure or two to contain parameters. The basic user will be solving a linear algebra problem, MX=B, where M is a large sparse matrix, B is a given "source" vector, and X will be the solution vector. For practical reasons this is done in two steps:
void load_gauge(void *gauge_array, Params *par);
void solve(void *solution, void *source);
load_gauge is done once, but solve is typically done many times with different sources. void pointers are used because different precisions of floating point number are in use, depending on a setting in par. That is for performance reasons. The array gauge_array is loaded to the GPU and a pointer (not gauge_array) to that memory on the GPU must be passed somehow to solve() so that its own functions that it calls to carry out the operation on the GPU can use this array for defining the matrix M. To keep the interface simple, and to hide all of these implementation details from users, the pointer to the GPU memory is passed between the functions using a global variable, say gpu_gauge_array. Note that it is not even in the same memory space as gauge_array.
The alternative is to force users of the library to adapt to a much more involved interface where they would call functions that are members of classes so that the pointer could be a member variable. This is far from optimal since half the community is actually using C code, not C++ code. By using the simple function interface given above, and a struct for the Params type, it can interface with either type of user. Of course everything in the interface code has to be declared with extern "C", but that is not a big deal.
The point of this example is that in the professional setting you may not be writing code just for yourself, or for one company, but you may be writing for a world-wide community that needs simple, flexible code to link to and use. Having a simple interface will often force some data into global scope.
cout << "Launching nuclear missiles..." << endl;
haha :)
if (g_nMode == 1)
cout << "No threat detected." << endl;
else
cout << "Launching nuclear missiles..." << endl;
"oh shit... my bad"
Okay. So, a very good reason for a global variable might be in a game - a player's name and inventory status, etc. Right?
If you have only one player, you could consider making all of the player-specific information global. But then what happens when you want to add a multiplayer mode?
For that reason, I'd probably pass it around.
I don't think global variables are dangerous/evil if they are declared 'const', since the value cannot be changed.
They are much less dangerous/evil if declared const, to the point where their use is generally acceptable. I've updated the lesson to indicate this.
Do note that global const variables still pollute the global namespace and are susceptible to naming collisions. Putting them in a namespace can help resolve this.
Why global variables are evil ....
One advantage of using global variables is that the same variable doesn't mean twenty different things in twenty different places. It can get confusing in scientific programming when "energy" means different things in different places.
"launching nuclear missiles"was bit scary...! but awesome tutorials ...long live alex..!!
I do not understand why main() outputs "Launching nuclear missiles..." to the console.
Isn't g_nMode = 2; only local to void doSomething(), and destroyed at the end of this function?
Also, the global scope operator (::) is not used, so g_nMode = 2; is not executed globally, right?
Thank you.
In the code above, every time g_nMode is called, it is calling the global g_nMode. There isn't a local g_nMode in the program, because it is never declared locally. However, by slightly editing doSomething(), the expected "No threat detected." will be the output:
(Since the local g_nMode wasn't global, the g prefix should have been dropped, but I kept it in an attempt to stay close to the code that you had in question. I hope that didn't confuse you more.)
I do agree with the most part of what you are saying, as long as you are working entirely in C++.
I think there is still a great use for file scope global variables in C and there are some definite bonuses to using C and C++ combined.
For example... most cases where you might create a singleton would require that one instance is created for the entirety of the program. If this is the case why not just create C file which contains the desired functions and variables. You can use static as a file scope controller, therefore eliminating many of the frowned upon side features to a singleton implementation. If it is a requirement that the singleton is created and destroyed multiple times throughout the program then C++ is probably a better idea considering the allocation of global/static data happens at program start up.
The fact is that the main problem we are trying to avoid is breaking encapsulation, as the overhead from C++ is in polymorphism/inheritance. An additional overhead is invoked with passing arguments through functions, and there is a greater overhead (and pain in the ass) setting up dependencies for class instances.
It's all about encapsulation, and I'm still playing with ideas, but please let me know what you think.
Global variables are Evil without a doubt. With a capital E!
static file scoped variables are still global in a sense.
Unless you have those variables wrapped by a mutex of
some sort, your code is non-threadable.
Please try to write thread safe code.
Global variables are Evil. Do *not* ever use them.
Agreed! Global variables == not thread safe!
Is there really a function that launches nuclear missiles from the computer?
Now to code my new game: "Pacman XTreme"
Lol just joking.
Now to code my new game: “Pacman XTreme”
A better name for your game would be "Global Thermonuclear War".
How's that for an '80s reference?
War Games at IMDB
Someone already did it.
This really cleared up what I wanted to know about global variables.
I used them in a code for my C++ class, and was told never to use them by my professor.
I asked him why, and his explanation made it seem (to me!) that they are a good option as far as the flexibility of the program goes.
Now that I know I could accidentally blow up the world, I will avoid them ;)
"Launching nuclear missiles", LOL. Evil globals; don't trust 'em.
Anyway, I just wanted to say that these tutorials are really good. Thanks. ;D
How do I know if I have a really good reason? Can you give us an example from your experience where you decided you should use a global variable, please?
I honestly can't think of the last time I used a global variable.
Typically, people use global variables for one of three reasons:
1) Because they don't understand C++ variable passing mechanics, or they're being lazy.
2) To hold data that needs to be used by the entire program (eg. configuration settings).
3) To pass data between code that doesn't have a caller/callee relationship (eg. multi-threaded programs)
Obviously #1 isn't a good reason at all. Once you get into C++ classes, there are better ways to do #2 (eg. static classes). That pretty much leaves #3, and maybe a few other cases I'm not aware of.
Basically, the only time you should use a global variable is if there is no practical way to do what you want using local variables and variable passing mechanics. In my opinion, use of globals should always be a last resort -- so in a way, it's a "you'll know it when you run into it" situation, as there simply won't be any other reasonable way to proceed.
I use only one global variable in my 1100 line program (simulating disease spread in a population), which is a random number generator from the GNU scientific library. Different functions need to be able to use a generator, and I don't want to create a new one each time for two reasons:
a) The generator needs to be seeded from the system time. If it was local to a function, then I'd risk it being seeded by the same value each time. I need the independence throughout my program. I suppose I could declare it static, but why have 7 different RNG generators, when I would be better with just 1?
b) It's a random number generator. I expect it to change in between function calls, and don't need or want consistency.
For these reasons, it has to be declared global.
Meanwhile, I have a whole pile of variables which will not change throughout the entire program, e.g. n (the size of the lattice of sites), or the birth/death/migration/infection rates, which are constants I set when I start the program. They're currently all local variables, but I've considered making them global variables as it would make my program easier to understand, as in your reason (2).
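For what it's worth, Python's standard library takes the same approach for case (a): the random module exposes module-level functions that all share one hidden, seeded-once generator, so separate functions draw from a single stream without a generator being passed around:

```python
import random


def roll_die():
    # Uses the module-level (effectively global) generator;
    # no generator object is threaded through the call chain.
    return random.randint(1, 6)


def coin_flip():
    return random.choice(["heads", "tails"])


random.seed(42)   # seed the shared generator once (e.g. from system time)
rolls = [roll_die() for _ in range(3)]
flip = coin_flip()
```

Reseeding with the same value reproduces the same sequence, which also makes simulations repeatable when desired.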
|
https://www.learncpp.com/cpp-tutorial/why-global-variables-are-evil/comment-page-1/
|
CC-MAIN-2021-04
|
refinedweb
| 4,188
| 63.8
|
I have an ArcGIS Pro 2.3 install, and I'm following the instruction of. In Jupyter Notebook dashboard, after typing:
from arcgis.gis import GIS
my_gis = GIS()
my_gis.map()
I see MapView(layout=Layout(height='400px', width='100%')) but no map is displayed. What am I missing? One potential issue I see is ArcGIS Pro connects to our Portal but not ArcGIS Online. Is the latter connection required? If not, how do I switch to using Portal as the "map engine"? Thanks!
BTW, I modified my_gis = GIS() to my_gis = GIS("","<user>","<pass>"), but it still shows MapView(layout=Layout(height='400px', width='100%')) and no map is displayed.
|
https://community.esri.com/thread/231882-arcgis-pro-23-and-jupyter-notebook
|
CC-MAIN-2020-45
|
refinedweb
| 109
| 78.96
|
to move values of one radio button field
2013-01-30T12:45:24Z |
- SystemAdmin 110000D4XK24948 Posts
Re: Help-External PERL script to move values of one radio button field 2013-01-30T13:57:59Z
This is the accepted answer. It sounds like you are saying that for every single record that has the value of C you want to change it to D, then skip the perl script.
Create a query to get the records that have a value of C
then either do a mass Modify to change all C to D for those records
Or (while no one is using CQ)
export the ID and field in question for those records
bring up the resulting text file in notepad
do a find and replace to change C to D
import the updated records
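If you'd rather not do the find-and-replace by hand in Notepad, that step can be scripted; here is a sketch (the file names and the "ID,field" export layout are assumptions — adjust them to whatever your ClearQuest export actually produces):

```python
def replace_field(in_path, out_path, old='C', new='D'):
    """Rewrite an exported "ID,field" file, changing field value old to new."""
    with open(in_path) as src, open(out_path, 'w') as out:
        for line in src:
            # Split only on the first comma, keeping the record ID intact.
            record_id, _, value = line.rstrip('\n').partition(',')
            if value == old:
                value = new
            out.write('%s,%s\n' % (record_id, value))
```

The rewritten file can then be imported back exactly as in the manual procedure.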
|
https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014935112
|
CC-MAIN-2017-34
|
refinedweb
| 139
| 64.07
|
In my previous article we got to know how to create two methods that handle the log maintenance for exceptions. After that, just build the application and we will get the "dll" of the project. The link of my previous article is in this article I will describe how to use the "ErrorHandling" dll when an exception happens and how to register the exception in a text file. The text file will be created inside the "LogError" folder in the C drive, so before attaching the dll you have to create the folder in your C drive.

Now here is the program.

Step 1: First create a web application named "Errorhandlingappln".

Step 2: After that, in the reference folder add a reference for "Errorhandling.dll". In this project I will give you the dll.

Step 3: After attaching the dll it will appear in your reference folder, and it will appear like the following figure:

Step 4: Now in the Default.aspx.cs first register the dll.

Step 5: Now see the following code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using ErrorHandiling;
namespace ErrorhandlingAppln
{
    public partial class _Default : System.Web.UI.Page
    {
        int i = 10, d = 0;
        ErrorHandling objError = new ErrorHandling();

        protected void Page_Load(object sender, EventArgs e)
        {
            try
            {
                int result = (i / d);
            }
            catch (Exception ee)
            {
                string errorMessage = ErrorHandiling.Class1.CreateErrorMessage(ee);
                ErrorHandiling.Class1.LogFileWrite(errorMessage);
            }
        }
    }
}

Here I simply divide the integer i by 0, which means a DivideByZeroException will occur. So in the catch block:
First we create the exception error message, with the inner exception, by passing the Exception object. ErrorHandling is our dll class name.
Secondly we are passing the "errorMessage" to the method name " LogFileWrite" to create the txtfile.
Step 6: Now run the program. Obviously the exception will occur. Now go to the "LogError" folder and you will see the log text file named "ProgramLog-20110711", just like the following figure.

Step 7: Now open the text file; you will see the exception details that have been generated by our program's exception, like the following figure.
Conclusion: So in this article we have seen how to use the "ErrorHandling" dll when an exception happens and how to register the exception in a text file.
©2014
C# Corner. All contents are copyright of their authors.
|
http://www.c-sharpcorner.com/UploadFile/b19d5a/exception-error-handling-log-maintenance-in-a-text-file-part-2/
|
CC-MAIN-2014-42
|
refinedweb
| 388
| 57.98
|
Source
JythonBook / chapter18.rst
Chapter 18: Testing and Continuous Integration - Chapter Draft
Nowadays, automated testing is a fundamental activity in software development. In this chapter you will see a survey of the tools available for Jython in this field, from common tools used in the Python world to aid with unit testing, to more complex tools available in the Java world which can be extended or driven using Jython.
Python Testing Tools
UnitTest
First we will take a look at the most classic test tool available in Python: unittest. It follows the conventions of most "xUnit" incarnations (like JUnit): you subclass from the TestCase class, write test methods (which must have a name starting with "test"), and optionally override the methods setUp() and tearDown(), which are executed around the test methods. And you can use the multiple assert*() methods provided by TestCase. Here is a very simple test case for some functions of the built-in math module:
import math
import unittest

class MathTestCase(unittest.TestCase):

    def testFloor(self):
        self.assertEqual(1.0, math.floor(1.01))

    def testCeil(self):
        self.assertEqual(2.0, math.ceil(1.01))

if __name__ == '__main__':
    unittest.main()
There are many other assertion methods besides assertEqual(), of course. Here is a list of some of them:
- assertAlmostEqual(a, b): Checks that the two arguments are approximately equal, which is useful for the comparison of floating point numbers.
- assertNotAlmostEqual(a, b): The opposite of assertAlmostEqual()
- assert_(x): Accepts a boolean argument expecting it to be True. You can use it to write other checks like "greater than", or to check boolean functions/attributes (The trailing underscore is needed because assert is a keyword).
- assertFalse(x). The opposite of assert_().
- assertRaises(exception, callable). Used to assert that an exception passed as the first argument is thrown when invoking the callable specified as the second argument. The rest of arguments passed to assertRaises is passed on to the callable.
As an example, let's extend our test of mathematical functions using some of these other assertion functions:

import math
import unittest

class MathTestCase(unittest.TestCase):

    def testFloor(self):
        self.assertEqual(1.0, math.floor(1.01))

    def testCeil(self):
        self.assertEqual(2.0, math.ceil(1.01))

    def testMultiplication(self):
        self.assertAlmostEqual(0.3, 0.1 * 3)

    def testSqrtOfNegativeNumber(self):
        self.assertRaises(ValueError, math.sqrt, -1)

if __name__ == '__main__':
    unittest.main()

Save it as test_math.py, then run:
$ jython test_math.py
And you will see this output:
....
----------------------------------------------------------------------
Ran 4 tests in 0.005s

OK
Each dot above the dashed line represents a successfully run test. Let's see what happens if we add a test that fails. Change the invocation of the assertAlmostEqual() method in testMultiplication() to use assertEqual() instead. If you run the module again, you will see the following output:
..F.
======================================================================
FAIL: testMultiplication (__main__.MathTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_math.py", line 12, in testMultiplication
    self.assertEqual(0.3, 0.1 * 3)
AssertionError: 0.3 != 0.30000000000000004

----------------------------------------------------------------------
Ran 4 tests in 0.005s

FAILED (failures=1)

The output points to the test that failed, telling us that 0.3 and 0.30000000000000004 are not equal. The last line also shows the grand total of 1 failure.
By the way, now you can imagine why using assertEqual(x, y) is better than assert_(x == y): if the test fails, assertEqual() provides helpful information, which assert_() can't possibly provide by itself. To see this in action, let's change testMultiplication() to use assert_(). If you run the module again, all we get is the traceback and the "AssertionError" message. No extra information is provided to help us diagnose the failure, as was the case when we used assertEqual(). That's why all the specialized assert*() methods are so helpful. Actually, with the exception of assertRaises(), all the assertion methods accept an extra parameter meant to be the debugging message which will be shown in case the test fails. That also lets you write helper assertion methods in one Python module, for maintainability reasons.
Let's create a new module, named test_lists.py, with list tests whose shared fixture is built in a setUp() method, which allows us to avoid repeating the same initialization code in each test*() method.
And, restoring our math tests to a good state, test_math.py will again use assertAlmostEqual() in testMultiplication(). Now that our tests live in two modules, we want to run them together, which calls for a test suite. A test suite is simply a collection of test cases (and/or other test suites) which, when run, will run all the test cases (and/or test suites) contained by it. Note that a new test case instance is built for each test method, so suites have already been built under the hood every time you have run a test module. Our work, then, is to "paste" the suites together.
Let's build suites using the interactive interpreter!
First, import the involved modules:
>>> import unittest, test_math, test_lists
Then, we will obtain the test suites for each one of our test modules (which were implicitly created when running them using the unittest.main() shortcut), using the unittest.TestLoader class:
>>> loader = unittest.TestLoader()
>>> math_suite = loader.loadTestsFromModule(test_math)
>>> lists_suite = loader.loadTestsFromModule(test_lists)
Now we build a new suite which combines both suites, and run it. As you can see, you can easily write a script to run the tests of any project. Obviously, the details of the script will vary from project to project depending on the way in which you decide to organize your tests.
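The combination step might look like the following, sketched here as a standalone script with stand-in test classes (since test_math and test_lists are not shown in full, the class and test names below are illustrative):

```python
import unittest

# Stand-ins for the test cases living in test_math and test_lists:
class MathTest(unittest.TestCase):
    def testAddition(self):
        self.assertEqual(4, 2 + 2)

class ListsTest(unittest.TestCase):
    def testLen(self):
        self.assertEqual(3, len([1, 2, 3]))

loader = unittest.TestLoader()
math_suite = loader.loadTestsFromTestCase(MathTest)
lists_suite = loader.loadTestsFromTestCase(ListsTest)

# A TestSuite can contain test cases and/or other suites:
global_suite = unittest.TestSuite([math_suite, lists_suite])
result = unittest.TextTestRunner(verbosity=1).run(global_suite)
```

Running the combined suite executes every test from both modules in one shot.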
On the other hand, typically you won't write custom scripts to run all your tests. Using test tools which do automatic test discovery is a much more convenient approach. We will look at one of them shortly. But first, I must show you another testing tool very popular in the Python world: doctests.
Doctests
Doctests are a very ingenious combination of, well, documentation and tests. A doctest is, in essence, no more than a snapshot of an interactive interpreter session, mixed with paragraphs of documentation, typically inside a docstring. Here is a simple example:

def is_even(number):
    """Check if a number is even.

    >>> is_even(2)
    True
    >>> is_even(3)
    False

    Floats are accepted, as long as its value is an integer:

    >>> is_even(4.0)
    True

    Other floats are rejected:

    >>> is_even(4.5)
    Traceback (most recent call last):
        ...
    ValueError: 4.500000 isn't an integer
    """
    remainder = number % 2
    if 0 < remainder < 1:
        raise ValueError("%f isn't an integer" % number)
    return remainder == 0
Note that, if we weren't talking about testing, we might have thought that the docstring of is_even() is just normal documentation, in which the convention of using the interpreter prompt to mark example expressions and their outputs was adopted (also note that the irrelevant stack trace has been stripped out of the exception example). After all, in many cases we use examples as part of the documentation; take a look at Java's SimpleDateFormat documentation for a well-known example. What the doctest module does is encourage the inclusion of these examples by doubling them as tests. Let's save our example code as even.py and add the following snippet at the end:
if __name__ == "__main__":
    import doctest
    doctest.testmod()
Then, run it:
$ jython even.py
And well, doctests are a bit shy and don't show any output on success. But to convince yourself that it is indeed testing our code, run it with the verbose flag: jython even.py -v. One nice property of doctests is that the interactive examples can be directly copy-pasted from the interactive shell, transforming manual testing into documentation examples and automated tests in one shot.
You don't really need to include doctests as part of the documentation of the feature they test. Nothing stops you from writing the following code in, say, a test_math_using_doctest.py module:
"""()
One thing to note about the last test in the previous example is that in some cases doctests are not the cleanest way to express a test. Also note that if that test fails you will not get useful information from the failure: it will tell you that the output was False when True was expected, without the extra details that assertAlmostEqual() would give you. The moral of the story is that doctest is just another tool in the toolbox, which can fit very well in some cases and not so well in others.
Warning
Speaking of doctest gotchas: the use of dictionary outputs in doctests is a very common error that breaks the portability of your doctests across Python implementations (e.g. between CPython and Jython), since dictionary ordering is not guaranteed. Another detail not shown by the examples in this section is the way to write expressions that span more than one line. As you may expect, you have to follow the same convention used by the interactive interpreter: start the continuation lines with an ellipsis ("..."). For example:
Let's now work through a bigger example: a program to check solutions of the eight queens puzzle. The figure :ref:`fig-eightqueens` shows one of the solutions of the puzzle.
Eight queens solution
I like to use doctests to check the contract of the program with the outside, and unittest for what we could see as the internal tests. Each type of test has its strengths and weaknesses, and you may find some cases in which you prefer the readability and simplicity of doctests and only use them in your project, or you may favor the granularity and isolation of unit tests and only use those instead. As with many things in life, it's a trade-off.
We'll develop this program in a test-driven development fashion: tests will be written first, as a sort of specification for our program, and code will be written later to fulfill the tests' requirements.
Let's start by specifying the public interface of our puzzle checker, which will live in the eightqueens package. This is the start of the main module, eightqueens.checker:
Its docstring doubles as a doctest which can be used to verify our solution to the problem.
Now we will specify the "internal" interface which shows how we can solve the problem of writing the solution checker. It's a common practice to write the unit tests in a separate module, so here is the code for eightqueens.test_checker. The unit tests propose a way to solve the problem, decomposing it into two big tasks (input validation and the actual verification of solutions), and each task is decomposed into smaller portions meant to be implemented by a function. In a way, they are an executable design of the solution.
So we have a mix of doctests and unit tests. How do we run all of them in one shot? Previously I showed you how to manually compose a test suite for unit tests belonging to different modules, so that may be an answer. And indeed, there is a way to add doctests to test suites: doctest.DocTestSuite(module_with_doctests). But since tools can do this work for us, we will use one of them: nose, which automatically discovers and runs tests (such as those in eightqueens.test_checker).
We will use setuptools to install nose. Refer to Appendix A for instructions on how to install setuptools if you haven't installed it yet.
Once you have setuptools installed, run:
$ easy_install nose
Note
I'm assuming that the bin directory of the Jython installation is on your PATH. If it's not, you will have to prefix each command such as jython or easy_install with that path (i.e., you will need to type something like /path/to/jython/bin/easy_install instead of just easy_install).
Once nose is installed, an executable named nosetests will appear in the bin/ directory of your Jython installation. Let's try it, locating ourselves in the parent directory of eightqueens and running:
$ nosetests --with-doctest
By default nose does not run doctests, so we have to explicitly enable the doctest plugin that comes built in with nose.
Back to our example, here is the shortened output after running nose:
FEEEEEE
[Snipped output]
----------------------------------------------------------------------
Ran 8 tests in 1.133s

FAILED (errors=7, failures=1)
Of course, all of our tests (seven unit tests and one doctest) failed. It's time to fix that. But first, let's run nose again without the doctests, since we will follow the unit tests to construct the solution, and we know that as long as our unit tests fail, the doctest will also likely fail. Once all unit tests pass, we can check our whole program against the high level doctest and see if we missed something or did it right. Here is the nose output for the unit tests:
First we implement the functions needed to pass ValidationTest (that is, the _validate_shape(), _validate_contents() and validate_queens() functions) in the eightqueens.checker module. Then we implement _rows_ok(), _cols_ok() and _diagonals_ok() to pass PartialSolutionTest:
$ nosetests
...E...
======================================================================
----------------------------------------------------------------------
Ran 7 tests in 0.938s

FAILED (errors=1)
Finally, we have to assemble the pieces together to pass the test for is_solution(). You can also apply what you learned in this chapter to test code written in Java. Continuous integration tools take automated testing one step further, running your tests on every change; one tool that has been growing an important user base is Hudson. Among its prominent features are the ease of installation and configuration, and the ease of deploying it in a distributed, master/slave environment for cross-platform testing.
But, in my opinion, Hudson's main strength is its highly modular, plugin-based architecture, which has resulted in the creation of plugins to support most of the version control, build and reporting tools, and many languages. One of them is the Jython plugin, which allows you to use the Python language to drive your builds.
You can find more details about the Hudson project on its homepage. I will go straight to the point and show how to test Jython applications using it.
Getting Hudson
Grab the latest version of Hudson. You can deploy it to any servlet container like Tomcat or Glassfish, but one of the cool features of Hudson is that you can test it by simply running:
$ java -jar hudson.war
After a few seconds, you will see some logging output on the console, and Hudson will be up and running. If you visit http://localhost:8080/ you will get a welcome page inviting you to start using Hudson by creating new jobs.

Warning
Be careful: the default mode of operation of Hudson fully trusts its users, letting them execute any command they want on the server, with the privileges of the user running Hudson. You can set stricter access control policies in the "Configure System" section of the "Manage Hudson" page.
Installing the Jython Plugin
Before creating jobs, we will install the Jython plugin. Click on the "Manage Hudson" link on the left side menu. Then click "Manage Plugins". Now go to the "Available" tab. You will see a very long list of plugins (I told you this was the greatest Hudson strength!). Find the "Jython Plugin", click on the checkbox at its left, as shown on the figure :ref:`fig-hudson-selectingjythonplugin` then scroll to the end of the page and click the "Install" button.
Selecting the Jython Plugin.
You will see a bar showing the progress of the download and installation, and after a little while you will be presented with a screen like the one shown in the figure :ref:`fig-hudson-jythonplugininstalled`, notifying you that the process finished. Press the "Restart" button, wait a little bit, and you will see the welcome screen again. Congratulations, you now have a Jython-powered Hudson!
Jython Plugin Successfully Installed
Creating a Hudson Job for a Jython Project
Let's now create a new job, following the link on the welcome page (equivalent to the "New Job" entry on the left side menu). You will be asked for a name and type for the job. We will use the eightqueens project built in the previous section, so name the project "eightqueens", select the "Build a free-style software project" option and press the "OK" button.
In the next screen, we need to set up an option in the "Source Code Management" section. You may want to experiment with your own repositories here (by default only CVS and Subversion are supported, but there are plugins for all the other VCSs used out there). For our example, I've hosted the code on a Subversion repository, so select "Subversion" and enter its address as the "Repository URL".
Note
Using the public repository will be enough to get a feeling for Hudson and its support of Jython. However, I encourage you to experiment with your own repositories too. In the "Build" section of the job configuration we can now take advantage of the Jython plugin, which adds the "Execute Jython script" build step.
So click on "Add Build Step" and then select "Execute Jython script". Using our knowledge of test suites gained in the unittest section, the following script will be enough to run our tests:
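The script itself was not preserved in this text; here is a hedged sketch of the idea (using a self-contained stand-in module instead of the eightqueens packages), combining the unit tests and the doctests into one suite as discussed earlier:

```python
"""A build-script sketch: run unit tests and doctests in one suite.

>>> 2 + 2
4
"""
import doctest
import sys
import unittest

# Stand-in for the tests living in eightqueens.test_checker:
class SmokeTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(4, 2 + 2)

suite = unittest.TestSuite()
suite.addTests(unittest.TestLoader().loadTestsFromTestCase(SmokeTest))
# DocTestSuite turns the docstrings of a module into test cases; in the
# real build script the module would be eightqueens.checker.
suite.addTests(doctest.DocTestSuite(sys.modules[__name__]))
result = unittest.TextTestRunner(verbosity=2).run(suite)
```

Hudson runs this script as the build step and uses its exit status to decide whether the build passed.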
The figure :ref:`fig-hudson-jobconfig` shows how the page looks so far for the "Source Code Management", "Build Triggers" and "Build" sections.
Hudson Job Configuration
The next section, titled "Post-build Actions", lets you specify actions to carry out once the build has finished, ranging from collecting results from reports generated by static-analysis tools or test runners, to sending emails notifying someone of build breakage. We will leave these options blank for now. Click the "Save" button at the bottom of the page.
At this point Hudson will show the job's main page. But it won't contain anything useful yet, since Hudson is waiting for the hourly trigger to poll the repository and kick off the build. We don't need to wait if we don't want to: just click the "Build Now" link on the left-side menu. Shortly, a new entry will be shown in the "Build History" box (also on the left side, below the menu), as shown in the figure :ref:`fig-hudson-buildhistory`.
The First Build of our First Job.
If you click on the link that just appeared there you will be directed to the page for the build we just made. If you click on the "Console Output" link on the left side menu you will see what's shown in the figure :ref:`fig-hudson-buildresult`.
Console Output for the Build
As you would expect, it shows that our eight tests (remember that we had seven unit tests and the module doctest) all passed.
Using Nose on Hudson
You may be wondering why we crafted a custom build script instead of using nose, since I stated that using nose was much better than manually creating suites.
The problem is that the Jython runtime provided by the Jython Hudson plugin does not include nose. To fix this, go back to the job configuration via the "Configure" entry on the left side menu, go to the "Build" section of the configuration and change the Jython script for our job to:
The first part of the script bootstraps setuptools (via ez_setup) and sets up an environment in which it will work. Then, we check for the availability of nose, and if it's not present, we install it using setuptools.
The interesting part is the last line:
nose.run(argv=['nosetests', '-v', '--with-doctest', '--with-xunit'])
Here we are invoking nose from Python code, but using the command line syntax. Note the usage of the --with-xunit option: it generates JUnit-compatible XML reports for our tests, which can be read by Hudson to generate very useful test reports. By default, nose will generate a file called nosetests.xml in the current directory.
To let Hudson know where the report can be found, scroll to the "Post-build Actions" section in the configuration, check "Publish JUnit test result report" and enter "nosetests.xml" in the "Test Report XMLs" input box. Press "Save". If Hudson warns you that nosetests.xml "doesn't match anything", don't worry and just press "Save" again: of course it doesn't match anything yet, since we haven't run the build again.
Trigger the build again and, after the build is finished, click on the link for it (in the "Build History" box, or by going to the job page and following the "Last build [...]" permalink). The figure :ref:`fig-hudson-consolewithnose` shows what you see if you look at the "Console Output", and the figure :ref:`fig-hudson-testresults` what you see on the "Test Results" page.
Nose's Output on Hudson
Hudson's Test Reports
Navigation of your test results is a very powerful feature of Hudson, but it shines when you have failures or tons of tests, which is not the case in this example. Since I wanted to show it in action anyway, I fabricated some failures in the code to produce some screenshots. Look at the figure :ref:`fig-hudson-testresultswithfailures` and the figure :ref:`fig-hudson-testresultsgraph` to get an idea of what you get from Hudson.
Test Report Showing Failures
Graph of Test Results Over Time.
Conclusion
Testing is a fertile ground for Jython usage, since you can exploit the flexibility of Python to write concise tests for Java APIs, tests which also tend to be more readable than those written with JUnit. Doctests, in particular, don't have a parallel in the Java world and can be a powerful way to introduce the practice of automated testing to people who want it to be simple and easy.
Integration with continuous integration tools, Hudson in particular, lets you get the maximum from your tests, preventing test breakages from going unnoticed and providing a live history of your project's health and evolution.
https://bitbucket.org/javajuneau/jythonbook/src/28b0486ae6c1/chapter18.rst?at=default
05 October 2012 16:53 [Source: ICIS news]
HOUSTON (ICIS)--Brazil-based Petrobras is looking to gain a foothold in the US aromatics market by selling excess benzene from its Brazilian plants, a source with the company said.
Petrobras recently revamped its Refinaria Presidente Bernardes Cubatao (RPBC) complex in Sao Paulo state, a source said. The revamp has allowed Petrobras to export the benzene.
The company is expected to start offering some benzene next week to the US market and could have an additional 3,000-4,000 tonnes/month of benzene to offer.
However, the company source said it would only export the cargoes if the price makes sense. Otherwise Petrobras would keep the cargoes in the Brazilian domestic market.
The US typically imports about 100,000 tonnes of benzene monthly. Some trade sources have already said a small amount of benzene from Brazil will not mean much.
“I don't really see that volume making much of a difference,” said one US-based aromatics participant.
An extra 3,000 tonnes should not have any significant effect on prices, another US trade source said. “However, having another producer in the US Gulf will certainly change the dynamic of the marketplace, giving more leverage to consumers and traders. In addition, it should add more liquidity to the market.”
Despite the limited volume expected to be offered next week, Petrobras may be able to ease some of the current tightness in US benzene supply.
Trade sources have attributed the increase and tightness to a shipping delay in some US benzene imports. US benzene imports from Asia are also expected to drop to about 55,000-60,000 tonnes in October because of a lack of cargoes and unfavourable price spreads.
US spot prices on Thursday have been discussed as high as $4.58-4.75/gal ($1,371-1,422/tonne, €1,056-1,095/tonne) FOB (free on board), well above spot levels of $4.31-4.43/gal FOB at the start of the week.
Other trade sources have noted that imports from Brazil could grow in the coming years, giving Petrobras a more significant role in the US market.
Petrobras announced plans in late August to modernise and expand its refinery operations by investing $71.6bn by 2016 to add 400,000 bbl/day of capacity. The additional capacity would also include aromatics such as benzene, which the company would also look to export.
Earlier this summer, Petrobras said it expects that construction of the first of two refineries at the Complexo Petroquimico do Rio de Janeiro (Comperj) will be completed in April 2015.
Derivatives demand, currently at 2.1m bbl/day, is expected to reach 3.4m bbl/day by 2020, Petrobras said.
Additional reporting by Al Green
http://www.icis.com/Articles/2012/10/05/9601249/petrobras-seeks-to-offer-additional-benzene-into-the-us.html
Odoo Help
Where I can find the code for float fields in 6.04 gtk client?
I need to force a '.' char in decimal fields to ',' (our decimal separator). I reported this bug many months ago and it is solved in 6.1 & 7.0, but we still use 6.0.4.
All users have problems when they type float values and press the decimal key [.] on the numeric keypad. They type 100.00 and the amount 10000.00 is shown.
I think that if I can find the code where the float field's widget is implemented, I can "force" the change of the decimal separator from '.' to ',' and solve this problem.
Adding something like this:
float_value = float(string_data.replace('.',','))
But I can't find where the code is.
Note: I found some widgets in:
./client/bin/widget/view/form_gtk/
but none appears to be the float field widget.
PD Sorry for my spelling
PD2: on 05/28/2013 I found it in spinbutton.py:
def format_input(self, spin, new_value_pointer):
    text = spin.get_text()
    ######## Added to "force" '.' as ',' for numeric dot conversion
    if not (',' in text) and ('.' in text):
        text = text.replace('.', ',')
    ######## End
    if text:
        value = user_locale_format.str2float(text)
        value_location = ctypes.c_double.from_address(hash(new_value_pointer))
        value_location.value = float(value)
        return True
    return False
It works! But the real solution needs to use LOCALE_CACHE.get('thousands_sep') & LOCALE_CACHE.get('decimal_point').
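A hedged sketch of that locale-aware variant follows (LOCALE_CACHE is stubbed with a plain dict here; in the real GTK client it is populated from the locale settings, and the function name is made up):

```python
# Stand-in for the GTK client's LOCALE_CACHE:
LOCALE_CACHE = {'decimal_point': ',', 'thousands_sep': '.'}

def normalize_decimal_key(text):
    """Map a '.' typed on the numeric keypad to the locale separator."""
    decimal_point = LOCALE_CACHE.get('decimal_point')
    if decimal_point != '.' and decimal_point not in text and '.' in text:
        return text.replace('.', decimal_point)
    return text

print(normalize_decimal_key('100.00'))  # -> 100,00
```

Checking that the text does not already contain the locale separator avoids corrupting values like '1.000,50' where '.' is a thousands separator.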
https://www.odoo.com/forum/help-1/question/where-i-can-find-the-code-for-float-fields-in-6-04-gtk-client-17914
news.digitalmars.com - c++
Dec 15 2006 Performance compare to VC8? (4)
Dec 14 2006 cd834.zip will not download (2)
Dec 14 2006 Bug trying to get CPU ussage (1)
Dec 11 2006 enabled exception handling and generated code (1)
Dec 10 2006 Optimization bug (2)
Dec 10 2006 [bug] namespaces and anonymous namespaces (1)
Dec 10 2006 [bug] elaborated type specifiers and namespaces (1)
Dec 10 2006 [bug] template type deductions (1)
Dec 05 2006 [bug] Appears that template syntax is checked before macro expansion! (1)
Dec 04 2006 [bug] Failure to prefer more-specific (non-template) function (2)
Dec 04 2006 A complete Newbie (3)
Nov 27 2006 I need help (3)
Nov 24 2006 Keeping the path constant when looking for files (2)
Nov 21 2006 PreDefines (2)
Nov 21 2006 "_handle" or "__handle" as class member name (6)
Nov 13 2006 Is it possible to "preprocess only"? (16)
Nov 07 2006 significantly reduced execution speed after CPU upgrade (1)
Nov 03 2006 Zortech C++ (5)
Nov 01 2006 Error Code Help Requested (8)
Oct 31 2006 Here's a blast from the past (2)
Oct 31 2006 win64 support in C? Large file support fseeko and fetllo? (1)
Oct 25 2006 Boost libraries build problem (1)
Oct 24 2006 Accessibility of a static constant within a class (2)
Oct 16 2006 Question about DMC Win64 bit plans (1)
Oct 09 2006 Is (++i)++ legal? (9)
Oct 06 2006 delete[ ] (2)
Oct 06 2006 Why doesn't drand48() and srand48() work? (1)
Oct 06 2006 Why doesn't drand48() and srand48() work. (4)
Oct 04 2006 Bug related to template constructor (1)
Oct 04 2006 #include <iostream.h> versus #include <iostream> (6)
Oct 04 2006 Linkage failure for generic stack template (1)
Oct 03 2006 Declaring a struct pointer inside a member function. (2)
Oct 01 2006 floating-point optimization (4)
Sep 30 2006 templated friend problem? (1)
Sep 28 2006 Problem with a template (6)
Sep 27 2006 Data Files with Commas (1)
Sep 23 2006 Very old bug with redefining new/delete (1)
Sep 22 2006 Private Accessibility (1)
Sep 16 2006 fdopen bug (1)
Sep 16 2006 Help for building a dll (for gsl 1.8) (1)
Sep 07 2006 Re:c++ (1)
Sep 05 2006 problems with local classes (8)
Aug 31 2006 Re:c++ (1)
Aug 10 2006 Out Of Memory Error/Can't link (2)
Jul 22 2006 newsgroup moved to new server... (14)
Jul 22 2006 Test (1)
Jul 18 2006 Redundant typedefs allowed? (2)
Jul 16 2006 Bug related to operators and namespaces (1)
Jul 13 2006 operator return value - d.cpp (6)
Jul 08 2006 Problems getting started (2)
Jul 07 2006 Help with installing program from CD? (2)
Jun 29 2006 function can't accept class type as paramater?!! (2)
Jun 29 2006 DWORD32, DWORD64, (6)
Jun 25 2006 Using DMC with Ogre3D or Crystal Space (1)
Jun 23 2006 Suggestin for STLSOFT makefile problems (4)
Jun 23 2006 "Internal error: type 361" and "errorlevel -1073741819" - bug.cpp (2)
Jun 22 2006 bug with #defines in constructors (1)
Jun 21 2006 [8.49.1 bug] leading :: before namespace (11)
Jun 18 2006 How many bits in the mantissa if I declare all floats as long doubles ? (3)
Jun 16 2006 [bug?] intitialisation of array of structs (5)
Jun 02 2006 [bug?] compiler option -a4 causes abnormal program termination on (4)
May 24 2006 __inline_cos/sin (2)
May 09 2006 Argument problem - missing quote (2)
May 08 2006 A standard conformance issue regarding specialization of static data member - test.zip (2)
Apr 29 2006 Stub Libraries in Digital Mars (2)
Apr 29 2006 [bug?] Strange compiler behaviour (4)
Apr 29 2006 strchr() non-standard? (3)
Apr 28 2006 using functions defined in C header files (2)
Apr 15 2006 Problem with new (nothrow) (3)
Apr 14 2006 Problem with inline assembly interrupts (3)
Apr 11 2006 STLport vs C (2)
Apr 08 2006 License for MS redist files (5)
Apr 06 2006 Wrong warning (18) (4)
Apr 04 2006 8087 precision and _beginthread (1)
Apr 01 2006 coff2omf bug - C2OTEST.zip (5)
Mar 28 2006 WINSPOOL.LIB - ctrig.cpp (2)
Mar 25 2006 simple class question. (2)
Mar 20 2006 Compile small functions by DM C++ Compiler (7)
Mar 20 2006 compilation error (2)
Mar 18 2006 Initialization of arrays at the time of declaration not supported? (2)
Mar 18 2006 iostream error (4)
Mar 17 2006 code generation bug? (3)
Mar 17 2006 Compiler Input (4)
Mar 17 2006 How can I restrict the cursor not to go further...? (3)
Mar 16 2006 kbhit() problem... (4)
Mar 13 2006 Help; parse (1)
Mar 10 2006 Beginner: IMPLIB, COFF2OMF, COFFIMPLIB (1)
Mar 07 2006 newbie needs help with .rc files (1)
Mar 07 2006 newbie needs help with .rc files (1)
Mar 07 2006 newbie needs help with .rc files (3)
Mar 07 2006 Progresses...still a little problem (1 easy question) (4)
Mar 04 2006 Really need help configuring config.h (1)
Mar 02 2006 Is there an IRC channel for Digital Mars C++ ? (2)
Feb 20 2006 newbie needs help with multiple source files (4)
Feb 19 2006 [configuring digital mars for boost] (4)
Feb 16 2006 Linker error? - fraction.zip (3)
Feb 16 2006 Prolem including time.h, defining MSDOS (3)
Feb 14 2006 Internal error: cod2 4221 (8)
Feb 13 2006 Problem linking with dmc and gcc, please help (6)
Feb 10 2006 dll supportt (2)
Feb 09 2006 Raw speed! w00t!! (2)
Feb 08 2006 Inline Assembly and class members (4)
Feb 03 2006 Complex type bug (1)
Feb 01 2006 missiong popen functions (1)
Feb 01 2006 x86-64 x64 Win64 support (2)
Jan 31 2006 to bertel (2)
Jan 31 2006 disp.h (2)
Jan 31 2006 help (2)
Jan 31 2006 disp.h (1)
Jan 31 2006 help (3)
Jan 26 2006 DMC++ and WinXP (3)
Jan 23 2006 code coverage? (2)
Jan 18 2006 [bug?] (7)
Jan 18 2006 Is there an ANSII C++ to Visual C code converter out there please? (2)
Jan 16 2006 Optimization Error (18)
Jan 02 2006 Ambiguous reference to symbol (3)
Jan 02 2006 Private member accessibility from private class (3)
https://www.digitalmars.com/d/archives/c++/index2006.html
builder.dart
A general build tool for Dart projects, allowing for easy execution from the command line and from the Dart Editor.
It supports both a procedural and declarative style.
There is also planned support for pub transformers to allow using some standard Dart build processes, like unit testing and document generation, as part of the pub build tool.
Future Status Note: It looks like the Dart team is trying to move to using pub as the standard build tool, and is moving away from using build.dart.
Status
The tool supports both the procedural and declarative styles of builds. It provides a small set of built-in tools to help with creating a build, but more are planned.
The following features are planned:
Add more helper libraries to make creation of builds easier: dart, zip, and unzip. Pub won't be added as a library.
- Add support for testing frameworks like DumpRenderTree (used by Dart team's web-ui project)
- Update the documentation as work progresses.
- Work on publicity and integration with tools like drone.io
Adding The Builder To Your Project
Regardless of whether you go the procedural or declarative route, you first need to add the builder library to your pubspec.yaml file in the dev_dependencies section.

name: my_app
author: You
version: 1.0.0
description: My App
dev_dependencies:
  builder: ">=0.0.1"
The library belongs in the dev_dependencies section, not dependencies, because it is only used to build the project.
Because of this change to dependencies, you'll need to run pub install before running the build.
Next, you need to decide if you're going to take the procedural or declarative route, and make your build file accordingly.
Declarative
A declarative build works by declaring what build actions depend on each other, then it's up to the build library to decide what should actually be run. You lose a bit of clarity in the build (the execution plan is no longer just top to bottom), but you don't need to worry about lots of minute details.
The build structure is broken into phases, which are major groupings of events. By default, there's clean, build, assemble, and deploy (defined in builder/tool.dart). Then, each build target defines which phase it runs in, and a collection of resources it consumes and resources it generates. These two definitions give the build target an implicit ordering.
Your build will have this kind of structure:
// the build file should be in the "build" library
library build;

// The build library
import 'package:builder/builder.dart';
// The standard dart language tools
import 'package:builder/dart.dart';
// The standard package layout definitions
import 'package:builder/std.dart';

// -------------------------------------------------------------------
// Extra Directories
final DirectoryResource OUTPUT_DIR = new FileEntityResource.asDir(".work/");
final DirectoryResource TEST_SUMMARY_DIR = OUTPUT_DIR.child("test-results/");

// -------------------------------------------------------------------
// Targets
final dartAnalyzer = new DartAnalyzer("lint",
    description: "Check the Dart files for language issues",
    dartFiles: DART_FILES);

final cleanOutput = new Delete("clean-output",
    description: "Clean the generated files",
    files: OUTPUT_DIR.everything(),
    onFailure: IGNORE_FAILURE);

final unitTests = new UnitTests("test",
    description: "Run unit tests and generate summary report",
    testFiles: TEST_FILES,
    summaryDir: TEST_SUMMARY_DIR);

void main(List<String> args) {
  // Run the build
  build(args);
}
Each constructed build tool in the build library will be picked up as a target to run, with the target name being the first argument. See "Running The Build" below for details on how to run these targets.
If no target is given when the build is run, the declarative form of the build will run only the targets that have changed. Currently, the changes are detected using the Dart Editor style by passing in --changed (filename) and --removed (filename). Future updates may automatically find changes if those are not given.
Procedural
Rather than have the build system know the steps, you may instead want to just code the precise steps yourself. The builder library supports this style of procedural builds, similar to the Apache Ant project for Java projects.

Next, in accordance with the dart build file standard, create the file build.dart in the root project directory. It should look like this:
import "package:builder/make.dart";

void main(List<String> args) {
  make(Build, args);
}

class Build {
  @target.main('default project')
  void main(Project p) { ... }

  @target('full build', depends: ['clean', 'main'])
  void full(Project p) { ... }

  @target('clean the project')
  void clean(Project p) { ... }

  @target('bundle the files together into a distributable', depends: ['main'])
  void bundle(Project p) { ... }

  @target('deploy the project to the web server', depends: ['bundle'])
  void deploy(Project p) { ... }
}
Note the special @target annotation to denote a build target. This annotation takes a text description (String) as an argument, and an optional list of target names (List<String>) as the dependent targets that need to run before this one. To specify the target that the build runs by default, use the @target.main annotation.
Running The Build
The builder library is designed for use from within the Dart Editor tool, but it can also be run from the command-line.
Run the default target:
dart build.dart
Discover the available targets:
dart build.dart --help
Run the clean target:
dart build.dart --clean
Supported Tools
builder provides several tools that are usually needed as part of a normal build system. This gives a brief outline of what each tool provides. Full documentation can be found under the docs directory, for both the declarative and procedural styles.
These are common to both styles:
- DartAnalyzer - runs the dartanalyzer tool over a set of Dart files.
- UnitTests - runs the Dart unit tests in a file.
- Dart2JS - runs the Dart2js tool over a set of Dart files.
These are specific to declarative styles:
- Relationship - declares an indirect relationship between files, such as the source files that a unit test covers.
- Delete - removes files and directories.
- MkDir - creates empty directories. Usually, this isn't needed, because the builder tools will create directories as necessary. However, under some circumstances, you may need an empty directory created.
- Copy - copies files.
- Exec - executes a native OS process.
- Dart - runs a Dart file through the command line.
You can make your own tool if you need additional functionality.
License
Released under the MIT license.
The MIT License (MIT) Copyright (c) 2014 Grobocl
- builder
- builder.make - The call-in for the build system.
- builder.std
- builder.task
- builder.tool - The declarative build tool library.
- builder.transformer
- builder.unittest
|
https://www.dartdocs.org/documentation/builder/0.2.3/index.html
|
CC-MAIN-2017-26
|
refinedweb
| 1,055
| 57.27
|
We've had feedback from many sources (comments in this blog, emails, in-person meetings, partner advisory board, etc.) that we should fix the element ID problem in AX - the sooner the better. Thank you very much for this feedback. This post is intended to give a bit of background on this subject.
The element ID problem in AX causes pain on two fronts:
- Uniqueness. Element IDs must be unique on any given installation. We have a suite of tools to help generate unique IDs, detect uniqueness conflicts, etc. Learning and using these tools is painful, and resolving ID problems is even more painful, especially on live systems. Further, the uniqueness issue makes it hard to combine solutions from different sources in one installation.
- Upper limit. Each layer has a finite number of IDs. This means there is an upper limit to how many model elements you can create. Microsoft's core development team knows exactly how this pain feels.
At any rate; this is a problem we need to address - and my team has been given the task. While many solutions spring to mind (use GUIDs or drop IDs completely) this is a case of Devil in the Details.
Business Data: IDs are persisted in the business database. Any change to IDs would require upgrade scripts. To make matters even worse, IDs are also stored in unstructured blobs, making identification and migration even harder.
Business Logic: Business logic uses IDs. Both Microsoft's business logic; but certainly also the business logic in solutions authored by partners around the world. Changes to IDs would impact all this business logic.
Kernel: The kernel implementation uses IDs heavily. Changing their data type or dropping IDs entirely would require rewriting large portions of the kernel. The kernel provides the API for converting IDs to names and back again.
Application model: IDs (and Names) are stored in the Application Model. IDs are used as references between model elements and IDs bind customizations to the original element. Whatever changes we make, we cannot break the references in the model.
Getting challenges like this is what I love about my job.
Read the entire solution here: blogs.msdn.com/…/the-solution-to-the-element-id-problem.aspx
Well – the design time is now 100% name based in AX6.
Well, well, reading back is quite fun. Name based solution didn't make it to Ax6 neither 🙂
In AX4 we added Unicode support . On one hand it seems like a minor thing, it is just the storage format
Today we built the first official build of Dynamics AX ever that does not run on AOD files. Starting
ddjong: I cannot yet reveal all the details of the solution; but you are not far from the mark.
Our goal for AX6 is to provide a name based solution, so you as a developer only have to care about names.
Providing namespace support is the next logical step; but will not make it into AX6.
I hope this ID problem gets properly solved. In fact we talk about naming collision prevention. Then very often you see the question ID-based or Name-based coming up. Quite often you see ID-based being used (GUIDs), but what do you think if we in real-life are going to use GUIDs to identify people? I think internally in an application, using GUIDs is a good thing to prevent huge rename actions when an ID changes, (many objects might refer to that changed ID … But ultimately we NEED NAMES … I would like to have NAMESPACES be part of AX ….
plz let all components in AOT have a namespace and also provide tree-alike structure in AOT to organize things according namespace ….
This would be really phantastic … And whether you use internally GUIDs, i don’t mind … And if you internally generate an ID (numeric) for a GUID since kernal requires that … i don’t mind. But don’t expose this to outside world … 🙂
The summer is over, and you might have noticed I haven’t been blogging much the past few months. You
Microsoft is already using the combined ranges from SYS+GLS+SLx. This gives us a total pool of 20,000 IDs.
«Upper limit. Microsoft’s core development team knows exactly how this pain feels»
Why not change layer ID limits? If lots of classes and forms and EDTs has been consolidated on the sys layer from multiple dis-layers, why not merge sys and dis layers ID ranges?
PingBack from
|
https://blogs.msdn.microsoft.com/mfp/2008/05/21/solving-the-element-id-problem/
|
CC-MAIN-2018-26
|
refinedweb
| 752
| 72.76
|
iSequenceManager Struct ReferenceThe sequence manager. More...
#include <ivaria/sequence.h>
Detailed DescriptionThe sequence manager.
The sequence manager is a plugin that will perform sequences of operations depending on elapsed time. It is mostly useful for demo's or intros of games.
Main creators of instances implementing this interface:
- Sequence Manager plugin (crystalspace.utilities.sequence)
Main ways to get pointers to this interface:
Main users of this interface:
Definition at line 179 of file sequence.h.
Member Function Documentation
Destroy all operations with a given sequence id.
Get the delta time to add to the main time to get the real main time.
Do not use GetDeltaTime() from within the operation callback.
Get the current time for the sequence manager.
This is not directly related to the real current time. Suspending the sequence manager will also freeze this time. Note that the sequence manager updates the main time AFTER rendering frames. So if you want to get the real main time you should add the delta returned by GetDeltaTime() too. However from within operation callbacks you should just use GetMainTime() in combination with the supplied delta.
Return a unique id that you can use for identifying the sequence operations.
Return true if the sequence manager has nothing to do (i.e. the queue of sequence operations is empty).
Return true if the sequence manager is suspended.
Create a new empty sequence.
This sequence is not attached to the sequence manager in any way. After calling NewSequence() you can add operations to it and then use RunSequence() to run it.
Execute a sequence at the given time.
This will effectively put the sequence on the queue to be executed when the time has elapsed. Modifications on a sequence after it has been added have no effect. You can also remove the sequence (with DecRef()) immediately after running it.
The optional params instance will be given to all operations that are added on the main sequence. Ref counting is used to keep track of this object. So you can safely DecRef() your own reference after calling RunSequence.
- Parameters:
-
Suspend the sequence manager.
This will totally stop all actions that the sequence manager was doing. Use Resume() to resume. Calling Suspend() on an already suspended sequence manager has no effect. Note that a sequence manager is suspended by default. This is so you can set it up and add the needed operations and then call resume to start it all.
Perform a time warp.
This will effectively let the sequence manager think that the given time has passed. If the 'skip' flag is set then all sequence parts that would have been executed in the skipped time are not executed. Otherwise they will all be executed at the same time (but the delta time parameter to 'Do' and 'Condition' will contain the correct difference). 'time' is usually positive. When 'time' is negative this will have the effect of adding extra time before the first operation in the queue will be executed. i.e. we jump in the past but operations that used to be there before are already deleted and will not be executed again.
The documentation for this struct was generated from the following file:
- ivaria/sequence.h
Generated for Crystal Space 1.2.1 by doxygen 1.5.3
|
http://www.crystalspace3d.org/docs/online/api-1.2/structiSequenceManager.html
|
CC-MAIN-2016-07
|
refinedweb
| 547
| 59.5
|
Via the pod function, it is possible to include, in a POD result, sub-documents that are themselves computed as POD results from other POD templates.
Imagine you have developed some accounting software. As a POD fan, you have defined a POD template allowing to produce the PDF version of every invoice stored in your database.
Now, what about producing a global PDF file including all the invoices of a given company, for a given period ?
You could be tempted to create a new template, insert a global section in it, repeat this section for every requested invoice, and copy/paste, in that section, the content of the POD template used to render a single invoice.
This will work, but in terms of maintainability, it is not ideal: you have 2 copies of the same template.
This is where the pod function comes into play. It lets you inject, into a main POD result, the results of rendering other, so-called sub-POD templates.
Warning: LibreOffice is required to run in server mode in order to use the pod function.
In the mentioned example, a single sub-POD would be executed in a for statement, as many times as there are invoices for the selected company and period. But you could also imagine to inject, at several distinct places in the main POD, the result of applying different sub-PODs.
The following example illustrates the first case. Let's start with the sub-POD: a magnificent POD template (suppose it is stored in /home/gdy999/test/Invoice.odt) whose purpose is to render an individual invoice.
Since I make the assumption you are comfortable with object-oriented programming, I guess you can understand why I may have decided to deliver the invoice in the POD context in a variable named self. It is even a convention you could adopt in your own templates. If a template concerns a given object, it can be considered a kind of "document representation" of it. Using variable self to manipulate the object inside this representation makes sense, exactly as it makes sense when manipulating it in any of the object's methods.
In individual mode, my accounting software would call this template (for a starting company), with a context like:
class Invoice:
def __init__(self, number):
self.number = number
self = Invoice('001')
The result would be:
Let's now illustrate how this POD template could be used at a global level, as a sub-POD within a main POD. In the main POD as shown below, my objective is to get, in a single document, one page per invoice, using Invoice.odt as described above. In order to illustrate this, we first need an improved version of class Invoice, which incorporates an incredible mini-DBMS system implemented in method getAll:
from appy.pod.renderer import Renderer
class Invoice:
def __init__(self, number):
self.number = number
@classmethod
def getAll(class_):
return class_('001'), class_('002')
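To make the data flow concrete, here is a small runnable sketch. It is pure Python with no appy imports: the per-invoice dictionaries merely illustrate the `self` convention described earlier, not appy.pod's internal mechanics.

```python
class Invoice:
    def __init__(self, number):
        self.number = number

    @classmethod
    def getAll(class_):
        # Stand-in "mini-DBMS" from the example above.
        return class_('001'), class_('002')

# The section in the main template repeats once per invoice; each
# iteration hands one invoice to the sub-POD (Invoice.odt) under the
# conventional variable name "self". Building those per-iteration
# contexts looks like this:
contexts = [{'self': invoice} for invoice in Invoice.getAll()]
print([ctx['self'].number for ctx in contexts])
```

Each dictionary in `contexts` is what a single execution of the sub-POD would receive as its rendering context.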
The global template would look like this.
A section is created in the sole purpose of repeating it (with the minus sign) for every requested invoice. This is implemented by the first statement. The second statement will replace the target paragraph by the result of executing the sub-POD template Invoice.odt. Here is the result.
Who said it is unreadable? How old are you? Stop complaining: you have guessed what is written down in every page. Some explanations are requested here.
The pod function, as available in the default POD context, corresponds to method importPod defined on class Renderer.
def importPod(self, content=None, at=None, format='odt', context=None,
pageBreakBefore=False, pageBreakAfter=False,
managePageStyles='rename', resolveFields=False,
forceOoCall=False):
'''Implements the POD statement "do... from pod"'''
# Similar to importDocument, but allows to import the result of
# executing the POD template whose absolute path is specified in at
# (or, but deprecated, whose binary content is passed in content, with
# this format) and include it in the POD result.
# Renaming page styles for the sub-POD (managePageStyles being
# "rename") ensures there is no name clash between page styles (and tied
# elements such as headers and footers) coming from several sub-PODs or
# with styles defined at the master document level. This takes some
# processing, so you can set it to None if you are sure you do not need
# it.
# resolveFields has the same meaning as the homonym parameter on the
# Renderer.
# By default, if forceOoCall is True for self, a sub-renderer ran by
# a PodImporter will inherit from this attribute, excepted if
# parameter forceOoCall is different from INHERIT.
Technically, implementing the pod function is achieved by creating a "sub"-instance of the Renderer class, devoted to the rendering of the sub-POD(s). Parameters resolveFields and forceOoCall allow to fine-tune the process of transmitting parameters from the main renderer to a sub-renderer. Refer to the section describing the Renderer constructor for more information about these parameters.
The process of rendering sub-PODs and integrating them into a main POD result is a complex and resource-consuming task. The following optimisations have already been implemented.
|
https://appyframe.work/104
|
CC-MAIN-2022-27
|
refinedweb
| 845
| 53.51
|
Originally published in my blog
Hey folks, how are you doing? In this post, I will show you a quick and simple Webpack 4 setup for a React application.
I'll assume you already have node, npm and the usual suspects installed.
```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>React and Webpack4</title>
</head>
<body>
  <section id="index"></section>
  <script type="text/javascript" src="main.js"></script>
</body>
</html>
```
Now, we should create a src folder, and inside of it, the index.js file, which will be the starting point for our React application. Its structure will be the simplest possible React code.
import React from "react"; import ReactDOM from "react-dom"; const Index = () => { return <div>Hello React!</div>; }; ReactDOM.render(<Index />,/core": "^7.2.2", "@babel/preset-env": "^7.3.1", "@babel/preset-react": "^7.0.0", "babel-loader": "^8.0": ["@babel/preset-env", "@babel/preset = () => { return ( <div> <p>Hello React!</p> </div> ); }; ReactDOM.render(<Index />, leave it in the comments section.
Cheers!
Discussion (4)
thanks for this tutorial.
maybe it is just me but this only worked for me after i replaced "babel-core": "6.26.3" with "@babel/core": "7.2.2" in the package.json devdependencies
Hey JD! Thanks for your comment.
You are right, the older versions of Babel will not work with the configuration file that I created in this post. For those who install "babel-core" newer versions, there shouldn't be a problem. I have updated the
package.jsonfile to use the newer versions already.
Cheers!
Thank you :)
|
https://dev.to/felipegalvao/simple-setup-for-a-react-application-with-webpack-4--css--sass-bef
|
CC-MAIN-2021-49
|
refinedweb
| 274
| 54.49
|
Red Hat Bugzilla – Bug 483025
Review Request: imms - Adaptive playlist framework tracking and adapting to your listening patterns
Last modified: 2010-11-08 22:06:02 EST
Spec URL:
SRPM URL:
Description: IMMS is an adaptive playlist framework that tracks your listening patterns and dynamically adapts to your taste. Currently we ship only the XMMS plugin.
It does a much better job of shuffling than most players. It keeps track
of when a song was last played, and makes sure the same songs are not repeated
too often. It is even able to recognise different versions of the same song
(e.g. remixes) and treat them as one song!
* IMMS uses a variety of techniques to figure out which songs should be played
together, and which should not. It studies your listening habits, as well as
using acoustic properties of the songs themselves, such as BPM and frequency
spectrum.
* IMMS is fair. Even unfavoured songs have a (small) chance of being played.
There are some outstanding issues I have to deal with and would appreciate any help:
1) rpmlint complains about executable-stack:
>rpmlint imms-3.1.0-0.1.rc6.fc10.x86_64.rpm
imms.x86_64: W: executable-stack /usr/bin/immsd
imms.x86_64: W: executable-stack /usr/bin/immstool
imms.x86_64: W: xmms-naming-policy-not-applied /usr/lib64/xmms/General/libxmmsimms.so
The naming policy is not a problem imho, but I don't know how to get rid of the executable-stack warning.
2) Licensing issues
The project claims to be GPL, but this license is not specified in source file. Moreover it includes some (modified) third-party code (see AUTHORS) -- this is mostly ok (better than own source files, it includes the license) except of:
immscore/xidle.c
model/emd.c
immscore/normal.h
These files and those written by the imms upstream need definitely a license specified -- I'm going to contact the author and setting FE-LEGAL for now.
Torch is already built in rawhide, hence I made a scratch build here:
I also added all the missing BRs; I had to add automake + autoconf as well, because of the funny configure script provided in the source tarball:
>cat configure
#!/bin/sh
rm $0
make
$0 $*
New SPEC:
New SRPM:
Any update on the licensing issue?
Yep, I'm quoting the answer to my email:
"
Hey, Milos,
I fixed compilation with recent Audacious versions in 3.1.0-rc7.
[]
I am fairly sure all the code in question is GPL. I made sure of that
before including it.
I do not have the time to follow up with the authors of the code right
now, but I'm certainly willing to apply any patches to clarify
licenses.
Perhaps, Artur, the Debian IMMS maintainer (cc'ed) can help you there.
He already did some work on getting explicit releases from the
authors.
Have fun,
Michael.
"
It seems that all the things are GPL indeed... looking into the debian (lenny) imms packages, I found the following:
"
+Author: Michael Grigoriev <mag@luminal.org>
+
+
+With the following exceptions:
+
+ md5.{h,c} (added ability to restrict maximum size to process)
+ from GNU textutils
+ levenshtein.{h,c} (stripped down)
+ python-Levenshtein library
+ by David Necas (Yeti) <yeti@physics.muni.cz>
+ regexx.{h,cc} (stripped down)
+ Regexx - Regular Expressions C++ solution
+ by Gustavo Niemeyer <gustavo@nn.com.br>
+ xidle.{h,cc}
+ Borrows from xautolock 2.1
+ by Michel Eyckmans (MCE) <eyckmans@imec.be>
+ normal.h
+ by Agner Fog
"
I'll prepare patches and request adding a LICENSE file into the source tarball and license information to all the source files (and merging the gcc43 patch of course too). I'd also include the above information as %doc. Would it be ok then?
Regarding the review: the audacious plugin is working now, updated link are here:
New SPEC:
New SRPM:
Koji scratch build:
As there are now both xmms and audacious plugins available, I'm considering splitting the package into a main one and two subpackages (-xmms, -audacious), I'm just not sure whether this wouldn't hit the usability from the user point of view too much.
Yeah, this is sufficient for Fedora Legal. Lifting FE-Legal.
OK, upstream is very responsive, hence there is a new (pre)release with all the necessary licensing info:
New SPEC:
New SRPM:
Moreover, the GCC 4.3 patch has been merged upstream and the rpmlint issue with executable stack has been also clarified, it is caused by a objdump call (details in specfile) and has been simply fixed by removing the flag using execstack.
I've also split the plugins into a xmms-imms and audacious-plugins-imms subpackage.
So this is now definitely ready for a review.
Unfortunately it fails to build for me:
+ execstack -c build/immsd build/immstool
RPM build errors:
/var/tmp/rpm-tmp.TIGCLm: line 41: execstack: command not found
error: Bad exit status from /var/tmp/rpm-tmp.TIGCLm (%install)
Ops, forgot to BR prelink after adding the execstack call, sorry.
New SPEC:
New SRPM:
Koji scratch build:
Rebuilt, solved the ownership of audacious/General directory:
wrong link by copy&paste in the previous comment:
New Koji scratch build in case someone would review this:
* spectool -g imms-3.1.0-0.6.rc8.fc11.src/imms.spec
retrieves an index.html file instead of the source tarball.
Project hosting has moved to Google Code.
*
Seems it doesn't link libaudcore and would need an update for Audacious >= 2.1 on F-12 (for F-12 there is a koji buildroot override tag for Audacious 2.2 currently, btw).
* The spec file isn't pretty due to questionable sed substitutions. What do you do if some of the matches fail silently? Only if that results in a failing build, the sed usage is acceptable. Else you ought to prefer patch files. (or add guards in the spec file)
Examples:
1) Using sed to modify install paths => upstream release modifies the build framework => your sed subst fails => rpmbuild fails because %install fails or the buildroot contents don't match the %files section => ACCEPTABLE, because you cannot proceed without fixing your package so it will build again.
2) Using sed to modify the contents of files => upstream release modifies the files => your sed subst fails => rpmbuild succeeds because no build error is found => NOT ACCEPTABLE, because either your sed substs are obsolete OR they fail silently => in case of the latter they don't apply your fixes anymore.
In %build you use sed to adjust various CPPFLAGS/CXXFLAGS prior to running %configure. Nothing verifies that your sed substitutions have been applied actually. Bad.
In %install you use sed to adjust install paths to point them into the buildroot. Consider yourself lucky that if the sed substitutions fail, most likely the rpmbuild fails too, because %files doesn't match buildroot anymore. Generally, however, there is no guarantee. And you use wildcards in the %files section, which means you don't even require specific files to be present in the buildroot ('*' = any file that's found will match). In the combination with sed substs, this is unsafe. Safer would be to hardcode the names of important files (e.g. immsd and immstool) and effectively require them to be found.
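A guard of the kind suggested here can be a simple grep before and after the substitution, so a silent sed no-op fails the build. The following self-contained demo uses an illustrative pattern, not the actual imms spec:

```shell
# Demo of a guarded sed substitution (pattern and file are illustrative).
printf 'CXXFLAGS = -O3\n' > Makefile.in

# Guard 1: the pattern we intend to rewrite must exist.
grep -q 'CXXFLAGS = -O3' Makefile.in || { echo 'pattern missing'; exit 1; }

sed -i 's/CXXFLAGS = -O3/CXXFLAGS = $(RPM_OPT_FLAGS)/' Makefile.in

# Guard 2: the rewrite must actually have happened.
grep -q 'RPM_OPT_FLAGS' Makefile.in || { echo 'substitution failed'; exit 1; }
cat Makefile.in
```

If upstream later changes the flags line, guard 1 aborts the %build with a clear error instead of silently dropping the fix.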
* BuildRequires: sox
What about run-time? I see in the source that it tries to popen() sox at run-time. Does that result in a run-time requirement of sox? Or is it fully optional and with a clear error/warning message if sox cannot be found?
* As a hint for the %files section: where you include entire directories
recursively, such as
%{_datadir}/imms
it's more readable to append a slash like
%{_datadir}/imms/
to make explicit that it's a directory and not a single file.
Michael, thank you for looking at this, this is already quite ancient, many things changed meanwhile as you can see, will look at it.
(In reply to comment #11)
> *
Ah... I filed a bug for audacious and found out you're the maintainer... so this just bounces back, sorry:)
New build:
* nasty seds: those for compilation and linking flags have been turned into a patch, those for build root harden by removing any wildcards from the %files section.
* BuildRequires: sox
GOOD CATCH! Thanks for that! (fixed)
* As a hint for the %files section...
Done as well...anything what improves readability is a good point, always, thanks.
* Also added xmms as Requires: for the -xmms subpackage. I remember that previously it was pulled in automagically through imms-libs, but is not anymore.
Created attachment 384970 [details]
patch to fix plugin preferences dialog
Without the attached patch, the Audacious IMMS plugin's Preferences dialog cannot be placed in front of other windows (even if it has focus).
[...]
As I probably won't have time to continue with reviewing until end of next week, just some comments on a look at IMMS run-time:
$ audacious
imms client: Connection failed: Connection refused
immsd: version 3.1.0-rc8 ready...
immsd: another instance already active - exiting.
* As it successfully starts an "immsd" instance, so no idea why it warns about another instance.
* /usr/bin/analyzer currently doesn't conflict with anything in Fedora, but it is reason to be concerned about a future conflict. Why the heck isn't it called "immsanalyzer" with the same "imms" namespace as the other executables?
* Fedora's "sox" as used by /usr/bin/analyzer fails to handle .mp3, .mpc, or .wma, for example. It seems there are no ready-to-use add-on packages for sox in 3rd party repos. It's happy about .ogg, .flac, .wav however.
It's been the better part of a year since the last comment with no response. I'm thinking that at this point this should just be closed, and I'll do that soon if nothing happens.
No response; closing.
|
https://bugzilla.redhat.com/show_bug.cgi?id=483025
|
CC-MAIN-2016-44
|
refinedweb
| 1,639
| 64.51
|
With few exceptions, avoid using the C preprocessor.
Question: Where do you still find the preprocessor helpful? What are the alternatives? Leave a comment below to share your observations.
Did you find this useful? Subscribe today to get regular posts on clean iOS code. 🙂
Came here searching for a solution to that warning message? I invite you to browse around — my About page is a good place to start.
Thank you sir, kind sir..
I would say on top of that:
typedef NS_ENUM(NSUInteger, AddressRow) {
AddressRowFirstName,
AddressRowLastName,
AddressRowFirstLine,
AddressRowCity,
// etc.
};
Right. Since enums aren’t namespaced, we have to do it ourselves. Thanks for pointing that out.
A problem with that naming scheme is that you have to type most of the name to get code completion. However, if the naming went: firstNameAddressRow, lastNameAddressRow, …, code completion would be more helpful and you would still have pseudo namespacing.
Also note that common ObjC code convention is to lower case first letter for class properties, and enum and struct members.
In theory, you’re right about code completion. In practice, I haven’t noticed. That could be because I prefer to use AppCode.
Regarding case… Are you confusing ObjC with Swift? In Objective-C, enumerations type names are capitalized. And with no namespace support, the full name of the enumeration (with capital letter) is the prefix for each enumerated value..
What about static dictionaries / arrays?
Simon, the preprocessor is one way to approach simple code generation, and I sometimes use it for static collections. If the code generation gets much more complex, I’d favor scripts. (And part of the generated code would be a clear comment about DON’T EDIT THIS, INSTEAD CHANGE THE SCRIPT AND REGENERATE.)
Concerning your point 3, prefixes shouldn’t be only 2 characters long (this is the convention for Apple’s prefixes), they should be precisely 3 characters long.
Here’s Apple’s relevant documentation :
You’re quite correct, Gabriel! I’ve edited my examples to match, in points 3 and 5.
Ascending integer constants do not smell. It's not uncommon that the values of such constants will be saved to disk and later read by another program or version. It is also hard to document such constants, since you don't really know what values the compiler assigns to them. And it's even worse: if you rearrange them, their values will be different and you don't know which one was assigned to which constant. Enumerators are handy, but they smell as bad as compiler definitions!
Clearly, if it’s important for the values _not_ to change when rearranged, you specify their values explicitly. But you can do this in an enumeration — which also gives you type safety.
Addendum:
>”6. Conditional compilation: Commenting out code”
Here you make two false statements:
1. Modern IDEs (not all but many) DO colorize code for compiler conditions if the constants in the statement is not defined. One example is Visual Studio.
2. Conditional compilation (and commenting) does not smell. It is used to write platform independent code and if I’m not wrong, then it is the only way to do it.
>”8. Conditional compilation: Switching between staging and production URLs”
Sorry, but wrong and bad coding style. There is a very good reason why such things are defined globally, once and in one place. By putting these constant values directly into functions you make the code harder to read and understand. You also write unnecessary extra code to do the same thing. And you make it mandatory to change the code itself to switch between different builds, instead of just switching external compiler directives, which, for example, makes it impossible to compile the code based on settings files for the compiler.
>”9. …”
You forget about the output size. Conditional compilation eliminates code to compile and included into the final file.
Actually, point 6 to 9 are targeting one single thing and in most of them you are wrong.
Regarding #3:
I would be surprised if the C99 spec or any compiler would allow an extern const as an initializer or in a case statement. None of my compilers will (including Apple LLVM version 6.0) which makes sense because externs are fixed-up at link time, not compile time.
The value of the extern is unavailable to the compiler. How is it supposed to validate a switch() statement or generate an optimized LUT when the case values are defined in another object file?
Nice post.
I would just like to point out that #1 is not as obvious as you claim it to be. GCC documentation says:
“#import is not a well designed feature.”
More details here:
Good link. That’s really a question of portability, though, so it’s not a concern as long as you stay within the Apple ecosystem. But leave that ecosystem, then yes, you’d have to investigate the support for #import.
Still use pre-processor for xmacros!
Has so many uses for generating type and array index safe code.
That’s true, and I’ve certainly used them before. These days, I think I would lean towards creating some sort of DSL with a script that converts it into code.
Nice post!
I am new to IOS, but I want to give my two cents.
> 8. Conditional compilation: Switching between staging and production URLs
After digging into options I end using a `debug|release`.xcconfig file with all API urls, access tokens, etc. and load them through a proxy class or Info.plist file directly.
> Question: Where do you still find the preprocessor helpful?
I’ve found it helpful for logging:
#ifdef DEBUG
#define L_DEBUG(f, ...) { \
    [[Logger sharedInstance] log:[Logger sharedInstance].debugPrefix file:@(__FILE__) line:@(__LINE__) format:f, ##__VA_ARGS__]; \
}
#else
#define L_DEBUG(f, ...) {}
#endif
As someone already mention, portability seems to be the best use case for preprocessor directives.
Best!
Hi Alvaro, welcome to iOS. I like your approach to 8. I’m curious to learn more.
Also, nice example of where the preprocessor can still be helpful.
Hi Jon, thanks for the article.
It would be really nice to read as to actually why the macros are a code smell. I am not a super experienced developer and from that point of view I see them as a tool that’s as good as any other. A familiar one too – Apple uses them quite a bit, and is happy with us using them (NSLocalizedString for eg.) The IDE argument is not too strong either, at least for me – I’m using AppCode which greys out inactive code, can refactor macro names and highlights errors, if any, in the actual macro. (From what I understand it actually expands the macro in code under the hood).
So what is it that makes macros smell? What are the problems one could run into? I’m sure there are many, or some at least, but haven’t ran into any myself and was hoping someone with more experience could point these out.
Hi Alex,
When I wrote this article, I was in a codebase that LOVED macros. My basic argument is: if there’s something the language can do, you should use the language. Macros are an old, old workaround that has nothing to do with the underlying language.
“Apple uses them quite a bit.” Well, there are the XCTest assertions. They’re necessary at the outermost level because we want the __FILE__ and __LINE__ macros. But I never, never liked the way Apple wrote entire assertions in macros. It made it difficult to add custom assertions. If you look at OCMockito, I use macros as I describe in point 2: I define macros to grab the __FILE__ and __LINE__ values, and pass them to real functions.
Why is NSLocalizedString a macro? That’s a holdover from old days when we didn’t have inlined methods. It made sense to avoid an extra function call. Inlining gives us the best of both worlds.
You’re right, AppCode does a much better job at showing #if 0’d lines. That point only applies to Xcode users.
Hope this helps…
I know it’s not possible to do a typedef at runtime in C. How can I work around this issue? I have the following problem: based on a macro’s value I want to typedef my structure to either struct a or b.
if(condition)
typedef struct a mystruct;
else
typedef struct b mystruct;
Arnab, since that can’t be done in the language, I’d fall back on the preprocessor.
Jon; since you’re still replying to comments six years after you published this article, I figured I’d toss my fuel into the pyre. I did enjoy your piece, but one thing that sticks in my eye is your very first point: include guards. You do know that “import” is a GCC extension, right? And not only that, it’s been deprecated since GCC 3.1.
Pragma, continuing your logic in this argument, might be a better alternative except for the fact that it’s not well supported.
For portability’s sake the GCC docs specify simply using ifndef/endif. Does this really make code THAT unintelligible?
Tim, for portable code, you’re absolutely correct. ifndef/endif all the things! I’m directing my comments specifically towards Objective-C code.
http://qualitycoding.org/preprocessor/
I am currently enrolled at Launch School in order to learn the art of programming. During the section where we learn about recursion, the Fibonacci sequence is used to illustrate the concept.
Below is a recursive method, written in Ruby, to find the nth number in the Fibonacci sequence. I will attempt to explain how this method works using the code as well as a tree diagram as presented in the Launch School course.
In maths, the Fibonacci sequence is described as: the sequence of numbers where the first two numbers are 0 and 1, with each subsequent number being defined as the sum of the previous two numbers in the sequence.
The Fibonacci sequence looks like this:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233 and so on.
Take a look at the code shown below. Even if you do not know Ruby at all, it should make some sense to you. The mind-twisting part (for those new to programming) comes in where the code inside the fibonacci() method calls itself not once but twice (3rd last line)!
This is of course what recursion is. A method or function that calls itself until some exit condition is reached. The exit condition here is the first part of the ‘if’ statement and is met when the method variable number is smaller than 2.
# fibonacci.rb
def fibonacci(number)
if number < 2
number
else
fibonacci(number - 1) + fibonacci(number - 2)
end
end
To start with, let’s also look at the tree structure in the diagram below:
For now, only look at the leftmost three blocks. The ones that have f(2) and then under that f(1) and f(0).
This is the small tree for fibonacci(2), i.e. for finding the 2nd element in the Fibonacci sequence (we start counting at 0).
We begin by feeding the fibonacci method the value of 2, as we want to calculate fibonacci(2). You can see from the method code that we end up in the ‘else’ section of the ‘if’ statement (as number = 2, which is not smaller than 2).
The following line of code is now about to be executed:
fibonacci(number - 1) + fibonacci(number - 2)
Replace ‘number’ with the value 2 and the line of code becomes:
fibonacci(1) + fibonacci(0).
The next step is to find the values of the two terms, fibonacci(1) and fibonacci(0). The computer will need to call the fibonacci method for each of these two terms. This is where the recursion comes in.
In order to do the evaluation and make use of the fibonacci method while the program is already inside the fibonacci method, the computer will store the current state, or instance, of the fibonacci method (we can call this instance ‘fibonacci(2)’), and then evaluate fibonacci(1). It will get a result of 1 because of the two lines of code shown below, with number = 1. In Ruby the code does not have to read “return number”; it only needs to state the variable whose value is to be returned. Hence, number is returned, as can be seen in the 2nd line below (which returns 1 in this case).
if number < 2
number
Ruby will store this value as the result of fibonacci(1), and continue to evaluate fibonacci(0). The same two lines of code above will result in a value of 0 (zero) when fibonacci(0) is evaluated.
There are no more recursion operations left to do as both terms in the line of code have been resolved to actual values:
fibonacci(2) = fibonacci(1) + fibonacci(0) = 1 + 0 = 1
On the tree structure in the diagram, we have resolved f(0) = 0 and also f(1) = 1. This allows us to resolve f(2), which is f(1) + f(0) = 1.
It may help to think in terms of the time dimension and different ‘instances’ of the fibonacci method here. Each time a recursive call is made to the fibonacci method, the current state, or instance, of the method is stored in memory (the stack), and a new value is passed to the method for the next instance of the method to use. As each term is resolved to the value 0 or 1, the previous instance of the method can be resolved to an actual value. The result is that the line of code:
fibonacci(number - 1) + fibonacci(number - 2)
can now be resolved by adding the two values. This result is then returned to the previous instance of the fibonacci method in order to again help resolve that instance’s line of code to actual values. The adding of the two terms continues in this manner until all the terms in the equation are resolved to actual values, with the total then returned to the code which called the fibonacci method in the first place.
As an example, if we wanted to calculate fibonacci(3), we know from the definition of the Fibonacci sequence that:
fibonacci(3) = fibonacci(2) + fibonacci(1)
And, using the recursive method, we get to the line of code above which reflects this definition:
fibonacci(2) + fibonacci(1)
fibonacci(2) is further recursively resolved to:
(fibonacci(1) + fibonacci(0)) + fibonacci(1)
Which leads us to the end result:
fibonacci(3) = (fibonacci(1) + fibonacci(0)) + fibonacci(1)
which evaluates/resolves to:
fibonacci(3) = 1 + 0 + 1 = 2
And, as we can see in the blocks shown in the corresponding tree structure:
f(3) = f(2) + f(1) = f(1) + f(0) + f(1) = 1 + 0 + 1 = 2
The tree structure diagram and its relation to the recursive fibonacci method should make more sense now. Recursion will happen until the bottom of each branch in the tree structure is reached with the resulting value of 1 or 0. During recursion these 1’s and 0’s are added until the value of the Fibonacci number is calculated and returned to the code which called the fibonacci method in the first place.
To recap:
The recursive method (algorithm) ‘unwinds’ the number you give it until it can get an actual value (0 or 1), and then adds that to the total. The “unwinding” takes place each time the values of ‘number - 1’ and ‘number - 2’ are given to the fibonacci method when the line
fibonacci(number - 1) + fibonacci(number - 2)
is evaluated. Note that the value of ‘number - 1’ in this case becomes the value of the next instance of the fibonacci method’s variable number (the next recursive loop). The same goes for the value of ‘number - 2’.
With each recursion where the method variable number is NOT smaller than 2, the state or instance of the fibonacci method is stored in memory, and the method is called again. Each time the fibonacci method is called though, the value passed in is less than the value passed in during the previous recursive call (by either 1 or 2). This goes on until the value returned is a value smaller than 2 (either 0 or 1). The resolution of the previous instance can then be done. In one instance, 0 is returned and fibonacci(0) can be resolved to 0. In another, 1 is returned and fibonacci(1) can be resolved to 1.
These values are then summed in order to obtain the requested Fibonacci number. This summing action happens each time a 0 or 1 is returned from one instance of the fibonacci method to the previous instance of the fibonacci method, and so on.
This is equivalent to where all the 1’s and 0’s at the bottom of the tree structure are added together. The final sum (or total) of all these 0's and 1's is then the value of the Fibonacci number requested in the first place. This value is returned during the final return of the fibonacci method to where the method was called from in the first place.
It is important to note that, except for the case where we want to know what the values of fibonacci(0) or fibonacci(1) is, the final return value of the requested Fibonacci number will come from the following line of code in the method:
fibonacci(number - 1) + fibonacci(number - 2)
Also note that in this scenario, where the value of any Fibonacci number greater than 1 is to be calculated, the lines of code:
if number < 2
number
will only be used during the recursive process.
I hope my explanation did not confuse you further, but helped in your understanding of both what the Fibonacci sequence is, and how we use recursion in Ruby to calculate the numbers in the sequence.
https://medium.com/launch-school/recursive-fibonnaci-method-explained-d82215c5498e
WebStorm 2019.2 EAP #2: New Look of Code Completion Popup, Attach Project, and Completion in .gitignore
The second EAP build for WebStorm 2019.2 (build 191.4787.13) is now available. For the full list of issues fixed in this update, see the Release Notes.
Updated presentation of completion suggestions in JavaScript
One of the things we’ve been working on in WebStorm 2019.2 is the presentation of completion suggestions in JavaScript and TypeScript. Our goal is to remove some excessive information and make the list of suggestions clearer and more consistent. At the same time, our colleagues from the IntelliJ Platform team have been working on refreshing the UI of the completion popup. So, here’s what we’ve got to show you right now.
First, we’re making it clearer where each symbol comes from and where it is defined. This information was previously shown next to the item name, but in an inconsistent way.
Now, for symbols that are standard JavaScript APIs, you’ll see a built-in label, and DOM for the browser APIs. For other symbols, there will be a namespace and the file or module where the symbol is defined.
For symbols that have multiple definitions, we’ve removed the (several definitions) label and replaced it with a new icon.
The column that showed the symbol’s visibility has been removed.
Any suggestions that the IDE is less certain about (because they are not based on the exact type information) have a grey font color and icon.
This is still a work in progress, so please tell us what you think! We appreciate all feedback, be it good or bad.
Code completion for mistyped keywords and names
As you type, it often happens that you accidentally mix up some characters. For example, you’ll type funtcion or fnction instead of function. Now, code completion can understand this kind of error and will still suggest the most relevant option for you.
This works in all supported languages and for all symbols – keywords, classes, functions, components, and so on.
Completion of default exports in import statements
Using default exports is a very common pattern in many apps. Now, after the
import keyword you will see the names of symbols that are exported as default in other modules of your project. And when you select the name, the path to the module will be added automatically.
Completion and auto imports for these names are available in other parts of your code, too (and have been for a long time).
Support for PostCSS’ simple variables
If you’re using PostCSS with the postcss-simple-vars plugin, you’d be glad to know that the IDE now supports this syntax via the PostCSS plugin, which you can install in Preferences | Plugins.
Don’t forget to enable PostCSS as a dialect for your .css files in Preferences | Languages & Frameworks | Style Sheets | Dialects.
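For anyone new to the plugin, postcss-simple-vars adds Sass-like $-variables to plain CSS; a minimal sketch (the variable and selector names are made up):

```css
/* postcss-simple-vars: Sass-like variables (illustrative names) */
$brand-color: #336699;

.menu a {
  color: $brand-color;
}
```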
Open several projects in one IDE window
When you have a project opened in WebStorm and want to open another project, now you have an additional option – to attach the second project to the opened one, so that you can see both of them in the same IDE window.
If you want to close the attached project, right-click on its root in the Project view and select Remove from Project View.
This was one of the most requested features in our issue tracker. It has taken us some time to enable the Attach project action in WebStorm, even though it has been available in other JetBrains IDEs for some time already. We wanted to make sure that tools like linters or test runners worked properly, and that’s what we’ve been working on for the WebStorm 2019.2 release.
There are still some limitations that you should keep in mind when you attach a project:
- The IDE will keep using the project settings (e.g. code style or inspections profile) of the first main project.
- The run configuration from the attached project will be ignored and new configurations will be saved in the .idea folder of the main project.
- If you use TypeScript, the same version will be used in all projects.
- You can’t close the first project while keeping the attached project opened.
We do not have an estimate yet of when these limitations will be lifted, but we’ll keep you posted.
New in Version Control
Working with .gitignore
While working on WebStorm 2019.1 earlier this year, we improved our support for
.gitignore. As a result, the IDE properly handles the ignored files and folders listed in
.gitignore and highlights them in the Project view.
Now, we’re making it a bit easier to add unversioned files to
.gitignore. To do this, in the Version Control tool window, right-click on a file in the Unversioned files group and select Add to .gitignore.
Code completion is now available for file and folder names in your
.gitignore file. Cmd/Ctrl-click on the name to jump to it in the Project view.
Hide author, date or hash from Log
The Log view in the Version Control tool window shows you the latest commits made in your project. By default, it shows you the commit message, as well as the author, date, and hash of the commit. Now you can hide the columns you don’t need – click the eye icon and then select Show Columns.
View Git history for multiple folders
You can now select multiple folders in the Project view and see all the changes made in them. Choose Git – Show History in the context menu or in the VCS menu.
Abort Git merge and cherry-pick
You can now abort a Git merge. The new action is available in the Branches popup when there is an ongoing merge process.
You can also stop a cherry-pick process with a similar action.
Please report any issues on our issue tracker. And stay tuned for the next week’s update!
WebStorm Team
6 Responses to WebStorm 2019.2 EAP #2: New Look of Code Completion Popup, Attach Project, and Completion in .gitignore
Edoardo Luppi says:June 5, 2019
“For symbols that have multiple definitions, we’ve removed the (several definitions) label and replaced it with a new icon.”. Great choices, this one and the updated item presentation, I like it.
Monday I was trying out the IDEA EAP and for the “multiple definitions” one I thought something wasn’t working right (asked on YouTrack too!) hahaha. So yeah, I think you’ll need to focus a future blog post on icons alone if you keep changing (for the better)!
Ekaterina Prigara says:June 6, 2019
Thanks!
Sorry for the confusion. We’ve been working on these changes for a couple of weeks (and still are, actually) and decided to wait a bit before announcing them.
Mladen says:June 6, 2019
Great update! Especially the “presentation of completion suggestions in JavaScript” feature which is much simpler and cleaner now, especially when using TypeScript.
The version control features are also welcome ones. One note: I opened another project by attaching it to the current IDE window, then I didn’t like it and removed it (by selecting Remove from Project View), which removed the project, but the git root was left hanging. So I had to delete the git root, which wasn’t an easy task for me (I had to Google how to do it). So, the first issue is that the git root was left hanging (my main project uses git submodules, maybe that was the reason for not detaching the git root), and the second is that you could maybe consider adding an option in the VCS drop-down menu for managing git/VCS roots. Though, this happened with the previous EAP version (192.4205.48).
Also, this EAP version is currently very unstable for Angular (I’m reporting errors through ea.jetbrains.com), so I’ll wait for a more stable version.
Ekaterina Prigara says:June 6, 2019
Thank you for the feedback!
We have fixed the issue with the git root and it will land in the next EAP update.
Would appreciate it if you also report the problems with Angular on our issue tracker because we might need some additional information (e.g. a sample project) from you. Thanks!
Edoardo Luppi says:June 10, 2019
Question: let’s say I accidentally clicked on “Don’t show again” for the package.json Import pop-up. How do I enable it again? I searched every configuration file for a possible record but couldn’t find it.
Ekaterina Prigara says:June 11, 2019
Hi Edoardo, at the moment, there’s no easy way to re-enable this popup. Here’s a related issue that you can follow: We are now discussing ways to address it.
As a workaround, you can remove the block containing PackageJsonUpdateNotifier from the workspace.xml file in the project .idea folder.
https://blog.jetbrains.com/webstorm/2019/06/webstorm-2019-2-eap-2/
Version 1.57.0
Version 1.57.0
November 3rd, 2014 21:55 GMT
Updated Libraries
- Asio:
  - Explicitly marked asio::strand as deprecated. Use asio::io_service::strand instead.
- Circular Buffer:
- Container:
  - Added support for initializer_list. Contributed by Robert Matusewicz.
  - Fixed double destruction bugs in vector and backward expansion capable allocators.
  - Fixed bugs:
    - GitHub #16: Fix iterators of incomplete type containers. Thanks to Mikael Persson.
- Flyweight:
  - Added serialization support via Boost Serialization.
- Geometry:
  - Improvements:
    - Support for parameters convertible to value_type in the rtree insert(), remove() and count() functions.
  - Solved tickets
  - Bugfixes:
    - Several fixes of bugs in the buffer algorithm.
    - Bug in point_on_surface() for CCW Polygons (extreme_points()) and a numerical issue (thanks to Matt Amos).
    - Bug in disjoint() for A/A, fixed by replacing point_on_surface() with point_on_border() (thanks to Matt Amos).
    - The result of convex_hull(): duplicated Point in open output, too small a number of Points for 1- and 2-Point input.
    - Imprecision for big coordinates in centroid(), fixed by translating Points (related to ticket 10643).
    - for_each_segment() not taking into account the last segment of open Geometry.
- Interprocess:
  - Removed unique_ptr; it now forwards boost::interprocess::unique_ptr to the general purpose boost::movelib::unique_ptr class from Boost.Move. This implementation is closer to the standard std::unique_ptr implementation and is better maintained.
  - Reorganized Doxygen marks to obtain a better header reference.
- Intrusive:
  - Experimental version of node checkers, contributed by Matei David. Many thanks!
  - Implemented N3644: Null Forward Iterators from C++14.
  - Fixed bugs.
- Iterator:
  - Most components of the library were moved into the boost::iterators namespace. For backward compatibility the components are also accessible in the boost namespace.
  - Iterator operators are now conditionally defined based on the iterator category.
  - Some of the internal components of the library were made public (minimum_category, for example).
- Lexical Cast:
- Math:
- Added Hyperexponential Distribution.
- Fix some spurious overflows in the incomplete gamma functions (with thanks to Rocco Romeo).
- Fix bug in derivative of incomplete beta when a = b = 0.5 - this also affects several non-central distributions, see issue 10480.
- Fixed some corner cases in function round.
- Don't support 80-bit floats in cstdfloat.hpp if standard library support is broken.
- Move:
  - Added unique_ptr smart pointer. Thanks to Howard Hinnant for his excellent unique_ptr emulation code and testsuite.
  - Added move_if_noexcept utility. Thanks to Antony Polukhin for the implementation.
- MultiArray:
- Fixed a friend-declaration related warning for clang (thanks to Marcel Raad).
- Multiprecision:
- Changed rational to float conversions to exactly round to nearest.
- Added improved generic float to rational conversions.
- Fixed rare bug in exponent function for cpp_bin_float.
- Fixed various minor documentation issues.
- Multi-index Containers:
  - When std::tuples are available, these can be used for lookup operations in indices equipped with composite keys. boost::tuples are also supported for backwards compatibility.
- Preprocessor:
- Added is_begin_parens and remove_parens.
- Added tuple functionality to parallel all array functionality.
- Fixed VC++ problems with empty tuple data.
- Updated internal is_empty to use superior variadic version when variadic macros are supported.
- Updated clang to have same variadic support as gcc.
- Updated doc for new functionality.
- Thread:
  - Fixed bugs.
- TypeTraits:
- Added new traits is_copy_assignable and is_final.
- Units:
  - New unit system <boost/units/systems/information.hpp> with units for bit, byte, nat, hartley and shannon.
  - Added scale units for the binary IEC prefixes kibi, mebi, gibi, tebi, pebi, zebi and yobi.
  - Fixed output of NaN on msvc-14.
  - Added support for C++11 numeric_limits::max_digits10 and ::lowest.
  - Warning fixes.
- uBLAS:
- added two new types: matrix_row and matrix_column facades. With them, it is possible to access the matrices as an array of rows and an array of columns, respectively.
- added fixed_vector/fixed_matrix classes to represent small - fixed size containers. Requires c++11 because it is using std::array
- fixed the long standing banded matrix bug ().
- the interface of matrices and vectors has been extended with cbegin, cend, crbegin and crend member functions, as defined in c++11.
- removed doxygen documentation to make the distribution lighter
- removed warnings with MSVC for unused parameters
- changed the uBlas development folder structure (will not affect users of the library)
- performed a very large overhaul with respect to warnings and errors on various compilers. Apart for some hard to resolve warnings and older compiler incompatibilities, compilations with uBlas will be much cleaner now.
Compilers Tested
Boost's primary test compilers are:
- Linux:
- Clang: 3.0, 3.1, 3.2, 3.3, 3.4
- Clang, C++14: 3.5
- GCC: 4.4.7, 4.5.3, 4.6.4, 4.7.3, 4.8.1, 4.8.2
- GCC, C++98: 4.9.1
- GCC, C++11: 4.4.7, 4.8.2, 4.8.3, 4.9.1
- GCC, C++14: 4.9.1
- Intel: 13.1, 14.0
- Intel, C++11: 13.1, 14.0
- QCC: 4.4.2
- OS X:
- Apple Clang: 6.0
- Apple Clang, C++11: 6.0
- Apple Clang, C++14: 6.0
- GCC: 4.2.1, 4.9.1
- Intel: 12.0
- Windows:
- GCC, mingw: 4.4.0, 4.4.7, 4.5.4, 4.6.3, 4.7.2, 4.7.3, 4.8.0, 4.8.2, 4.9.0
- Visual C++: 8.0, 9.0, 10.0, 11.0, 12.0
- FreeBSD:
- GCC: 4.2.1
- QNX:
- QCC: 4.4.2
Boost's additional test compilers include:
- Linux:
- Clang: 3.0, 3.1, 3.2, 3.3, 3.4.2
- Clang, C++14: 3.5.0, trunk
- GCC: 4.4.7, 4.6.4, 4.7.3, 4.8.1, 4.8.2, 5.0 (experimental)
- GCC, C++11: 4.4.7, 4.8.2, 4.8.3, 4.9.1
- GCC, C++14: 4.9.1
- Intel: 11.1, 12.1, 13.0, 13.1, 14.0
- Intel, C++11: 13.1, 14.0
- OS X:
- Apple Clang: 6.0
- Apple Clang, C++11: 6.0
- Apple Clang, C++14: 6.0
- Clang: trunk
- Clang, C++11: trunk
- GCC: 4.2.1, 4.9.1
- Intel: 12.0
- Windows:
- GCC, mingw: 4.4.0, 4.4.7, 4.5.4, 4.6.3, 4.7.3, 4.8.0, 4.8.2, 4.9.0
- Visual C++: 8.0, 9.0, 10.0, 11.0, 12.0
- FreeBSD:
- GCC: 4.2.1
- QNX:
- QCC: 4.4.2
Acknowledgements
Beman Dawes, Eric Niebler, Rene Rivera, Daniel James, Vladimir Prus and Marshall Clow managed this release.
http://www.boost.org/users/history/version_1_57_0.html
recvfrom - receive a message from a socket
#include <sys/socket.h>
ssize_t recvfrom(int socket, void *restrict buffer, size_t length,
int flags, struct sockaddr *restrict address,
socklen_t *restrict address_len);
The recvfrom() function shall receive a message from a connection-mode or connectionless-mode socket. It is normally used with connectionless-mode sockets because it permits the application to retrieve the source address of received data.
The recvfrom() function shall return the length of the message written to the buffer pointed to by the buffer argument. For message-based sockets, such as SOCK_RAW, SOCK_DGRAM, and SOCK_SEQPACKET, the entire message shall be read in a single operation. If a message is too long to fit in the supplied buffer, and MSG_PEEK is not set in the flags argument, the excess bytes shall be discarded.
Not all protocols provide the source address for messages. If the address argument is not a null pointer and the protocol provides the source address of messages, the source address of the received message shall be stored in the sockaddr structure pointed to by the address argument, and the length of this address shall be stored in the object pointed to by the address_len argument.
If the actual length of the address is greater than the length of the supplied sockaddr structure, the stored address shall be truncated.
If the address argument is not a null pointer and the protocol does not provide the source address of messages, the value stored in the object pointed to by address is unspecified.
If no messages are available at the socket and O_NONBLOCK is not set on the socket's file descriptor, recvfrom() shall block until a message arrives. If no messages are available at the socket and O_NONBLOCK is set on the socket's file descriptor, recvfrom() shall fail and set errno to [EAGAIN] or [EWOULDBLOCK].
Upon successful completion, recvfrom() shall return the length of the message in bytes. If no messages are available to be received and the peer has performed an orderly shutdown, recvfrom() shall return 0. Otherwise, the function shall return -1 and set errno to indicate the error.
See also: read(), recv(), recvmsg(), select(), send(), sendmsg(), sendto(), shutdown(), socket(), write(), the Base Definitions volume of IEEE Std 1003.1-2001, <sys/socket.h>.
First released in Issue 6. Derived from the XNS, Issue 5.2 specification.
http://pubs.opengroup.org/onlinepubs/007904975/functions/recvfrom.html
I am quite confused about the last step of my project; can anyone explain it? This is the first part. The instructions haven't told me to create a drawAll() method, so I assumed it was a Graphics method, but it doesn't appear in the Oracle docs. I'm a new programmer, so help is appreciated. Thanks
After the sleep, call drawAll(g).
The drawAll method will print the following on the screen in black at position x=10 and y = 15:
Project 2 by YOURNAME.
Replace YOURNAME with your full name.
import java.awt.*;
public class Project2 {
public static final int PANEL_WIDTH = 300;
public static final int PANEL_HEIGHT = 300;
public static final int SLEEP_TIME = 50;
public static void main(String[] args) {
DrawingPanel panel = new DrawingPanel(PANEL_WIDTH, PANEL_HEIGHT);
Graphics g = panel.getGraphics( );
startGame(panel, g);
}
public static void startGame(DrawingPanel panel, Graphics g) {
for (int i = 0; i <= 10000; i++) {
panel.sleep(SLEEP_TIME);
drawAll(g);
}
}
}
public static void drawAll(Graphics g) {
    g.setColor(Color.BLACK);
    g.drawString("Project 2 by NAMEHERE", 10, 15);
}

public static void startGame(DrawingPanel panel, Graphics g) { // start startGame
    for (int i = 0; i <= 10000; i++) {
        panel.sleep(SLEEP_TIME);
        drawAll(g);
    }
} // end startGame
https://codedump.io/share/eqa9ggo6fcmF/1/can39t-seem-to-call-or-create-a-specific-method-drawall
Hi
cannot connect to my router.
Courses ESP32 Web Server – Control Outputs
my NodeMCU-32S cannot connect to my D-Link DIR-860L router.
I have entered the right SSID and password.
Hello Henrik,
- What do you see in the Arduino IDE serial monitor?
- Just dots kept being printed?
If that’s the case, it means that either your SSID/password is wrong (double-check again, please) or the ESP is too far from the router…
The Arduino IDE serial monitor displays this image,
and I am using the real SSID and password. The board is about 35 cm from the router.
If it runs for a long time, a long line of dots is printed.
But it does not work.
Hello again, according to that information it looks like you’ve either entered the wrong SSID or wrong password. I think your SSID should be just lillenet or something.
If you go the Internet access menu:
What’s your exact network name? In my case, it’s “MEO-620B4B”.
Thanks for your patience!
Hello again
Thanks for your reply
I have tried a simple program I found online, but the result is the same.
I had made a spelling error: it should be a lowercase z in GHz, and a comma between 2 and 4 (see picture).
I also found a small program that could display MAC address on my NodeMCU 32s
I have also tested it and the MAC address shown:
#include "WiFi.h" void setup(){ Serial.begin(115200); WiFi.mode(WIFI_MODE_STA); Serial.println(WiFi.macAddress()); } void loop(){}
The MAC address shown is 3C:71:BF:AA:DD:E0,
so there is a working Wi-Fi module on my NodeMCU-32S.
But it still cannot connect.
Thanks again
Hello, I still think your SSID or password are wrong. Just printing those dots, means that is not establishing a Wi-Fi connection with your router. Can you double-check the SSID that appears as I’ve shown mine?
Is it with a comma (,)? 2,4 GHz?
Hello again
Hooray now it works.
It was the wrong SSID name: one character was wrong, it was supposed to be a comma, not a dot, in 2,4 GHz.
Thanks again
Now I can get further in the exciting course.
Thanks
https://rntlab.com/question/wrapping_up-error/
Talk:Proposed features/House numbers/Karlsruhe Schema
Contents
- 1 addr:full and multi-line -values
- 2 Rendering
- 3 Experiences
- 4 postcode / postal_code / ZIP
- 5 Blocks of Flats
- 6 Liebe Karlsruher
- 7 Not topological
- 8 associatedStreet vs. street
- 9 More than one housenumber for a single node/house
- 10 One building, two addresses, two countries
- 11 Apartment complexes
- 12 Alphabet interpolation
- 13 Patch needs improvement
- 14 Extra nodes when the building outlines are available
- 15 "skip"-part
- 16 Redundant tagging a problem
- 17 example-algorithm
- 18 Apartment Blocks, Blocks of Flats, Communities, Business Parks, Campuses etc...
- 19 Proposal: Making the link of housenames to streets more robust (by User:SlowRider)
- 20 way ids instead of addr:street
- 21 addr:interpolation=alphabetic
- 22 Giving hints about the road-access
- 23 Addresses format (as used in other datasets) in British Isles
- 24 Giving hints about the road-access - why not use the highway tag for this?
- 25 House-Numbers supported in Traveling Salesman
- 26 Case: Selecting the street the series of house-numbers belongs to
- 27 Hamlets/Localities without street names
- 28 Loads of numbers in a small area
- 29 Node with house number in the way
- 30 Additional address tags needed
- 31 Why is there addr:housename when name is perfectly fine?
- 32 Recent edits to the page
- 33 member roles of relation roadAccess
- 34 Skipping house numbers in a way
- 35 associatedStreet relation: several postcodes
- 36 Errors in examples on feature page?
- 37 Address for buildings that are relations
- 38 number *or* name?
- 39 An experience
- 40 Housenumber at house door
- 41 associatedStreet
- 42 Archiving of this page
- 43 contact:housenumber / Bremen Schema / extended Karlsruhe Schema
- 44 Move to Karlsruhe Schema
- 45 Delete proposal
addr:full and multi-line -values
There is a practical issue with addr:full. JOSM as an editor does not support values with multiple lines. I'm not sure about Potlatch, but I think it too has this restriction in its UI. --MarcusWolschon 12:26, 20 April 2008 (UTC)
- Another point: what about the correct newline coding? See Wikipedia on Newline. I think it would be better either to insert some markup instead of newlines (<br/> for example), or to abstain completely from adding the full address. --Florianschmitt 16:06, 7 September 2008 (UTC)
Rendering
As for me, house numbers are rendered very small. Maybe you can make the numbers bigger, as they are not clear even at the highest zoom level. --LEAn 00:46, 7 June 2008 (UTC)
- Some considered them to be too prominent already.
- From what I've seen, the problem is that the numbers are a bit small, and I'm sure anyone with even semi-poor eyesight would find those house numbers hard to read. That being said, as it is, I find that they "detract" from the map and clutter it up. Is it possible to have a separate overlay layer that can be turned on or off?
If you add a street number for a building where the building has a name, both the name and the number render in the centre of the polygon (with the name overwriting the number). See a couple of buildings I numbered. [1] (Yes Suncorp was in the wrong place - now fixed.) --Ebenezer 9 Sep 2008
- I'm not sure how the rendering-rules work but maybe we can add a vertical offset to the location of the house-number when a name is present. (e.g. display it below the name) --MarcusWolschon 12:14, 26 September 2008 (UTC)
Experiences
I've been tagging addresses with this scheme even though I'm not in Karlsruhe but in Finland. I have manually created the relations for quite a few ways for which I have added the street numbers, for two reasons:
- I'm mostly mapping a city center, where address numbers are quite close to two roads at junctions, and
- Although not mentioned, I've been tagging the addr:interpolation=* on the way marking the road itself.
- Pro: the interpolation lines aren't drawn above and through the buildings. (Most city center buildings in one quarter share some walls and haven't been drawn separately).
- Pro: less ways. A dual carriageway road has two relations, one for each side.
- Any house number is bound to be close to some road or be in a relation with the road it belongs to; finding the interpolated location on the street is already mentioned in the simplest case.
- Con: Where the street is split into two ways, be it for a maxspeed change, I need two relations or the extra interpolation way on the same nodes for the full length of the street. Solutions:
- Allow an associatedStreet relation to have not just one but one or more consecutive ways making up the street. It could even be possible to skip the gaps when interpolating, but a tool to search for those would be needed, as they might be an error and lead to false interpolation results for a part of the way.
Any comments on how the already implemented software support (Traveling Salesman only?) for this Karlsruhe Schema copes with the "modifications" above? Alv 06:49, 13 June 2008 (UTC)
- How do you put two addr:interpolation tags onto one way (left and right side), and how do you link the interpolation to the house-numbers, or do you use this only for dual carriageways? What semantics do you propose to define the ordering of ways if you allow multiple ways in an associatedStreet relation? If there are no buildings drawn, then the line cannot be painted through them; even if there are, this is something we can fix by modifying the layering in the renderer(s). --MarcusWolschon 04:59, 15 June 2008 (UTC)
- If the numbering scheme is the common odd on the left/right and even on the other side and it's a single carriage way road, addr:interpolation=all with a total of four house numbers marked on both sides should be sufficient to define the sides for even/odd numbers unambiguously. I construct manually the relations as mentioned at "Case: Relations (easy for computers, difficult for humans)" and then the fixed points for interpolation would be as in "single house as a node next to the way" (nearest point on the way).
- If the real numbers don't follow the odd/even schema for some part of the way, the interpolation ways would be split at several places already when using distinct interpolation ways. For these cases it might not be possible to tag the interpolation type on the road itself but separate interpolation ways might be needed.
- For dual carriageway roads there are two relations. If a road changes between single and dual carriageways, likely the relation needs to be split, but house numbers on both sides at that point might suffice.
- Ordering of ways can be computed easily if the consecutive ways always share end nodes between the farthest house numbers. Even if they don't share nodes, say at some intersections with internal short ways that haven't been added to the relation, the shortest gaps will be clearly shortest on the "correctly" ordered set of ways. At least I haven't seen a road that would turn around to come almost back to itself.
- A longer interpolation line spanning several quarters crosses any streets in its path (actually it goes under them), which seems silly, especially if the numbers on both sides of the crossed road are explicitly defined. And for longer countryside roads with multiple bends, a distinct interpolation way alongside needs n extra nodes. Alv 10:11, 15 June 2008 (UTC)
postcode / postal_code / ZIP
In TALK-DE (German mailing list), Stefan Neufeind noted, it would make sense to change "addr:postcode" to "addr:postal_code".
He gave the reason that "postal_code" has already been introduced to Map_Features. --TobWen 12:17, 18 July 2008 (UTC)
Blocks of Flats
Hi all
I live in an area of London that is dominated by blocks of flats, that are either dedicated to housing, or are split level between shops (ground) and then flats above.
In some cases the units on the bottom are numbered as per the road, so Number 20, A Street, and then the top would be 20 -> 30 A Court, A Street.
I've had a look at both Building_Attributes and this current discussion, and I can't see anything that really covers basic flat numbering, though Building_Attributes does cover the split nature of buildings.
I think the approach taken in these two proposals is interesting.
One seems to be heavily biased towards the house numbers being tied directly to the road (this one), while Building_attributes ties everything directly to the building.
Would it make sense for these two proposals to be merged?
Liebe Karlsruher
Could you please translate this page into German as well? Many thanks, --Markus 20:42, 23 July 2008 (UTC)
Apparently not, they can only speak Baddisch or completely foreign languages :-) --Nikodemus 16:16, 29 August 2008 (UTC)
That is a real pity! Sven S.
Not topological
This scheme is not topological (except for the "temporary" interpolated version), so it's fine for making pretty maps with numbers, but it's basically useless for existing GPS receivers. It might be possible to transform it into a topological form, but it would take a lot of preprocessing, measuring the distance of the node to each way in the planet to find the street it belongs to, or attempting to do something clever with parsing the tags. Why not just store it in a topological form to begin with? We don't show the outlines of streets (only centerlines); why do we need to have the exact position of buildings in order to give them an address? —Preceding unsigned comment added by SiliconFiend (talk • contribs) 18:08, 25 July 2008
- I agree that some sort of connectivity is needed for the maps, however I think by using relations or using addr:street=*, we can have a properly connected map. Rorym 22:17, 26 August 2008 (UTC)
- To whoever wrote that comment: Sorry but this is nonsense. (Starting with your use of the word "topological" - as house numbers do not make up any kind of network, how should there be topology?) One of us has written a piece of software that does the interpolation while we had the workshop and it took a few hours only. If you have a GPS position and want to find out what road you are on, you have to do exactly the same computation - find out which road is nearest, and then find out the nearest point on that road - this is where the system will "plot the dot". The same mathematics have to be applied to the house number data. You do not have to transform the whole planet into something else. - That being said, if you are unable or unwilling to fill in individual addresses, then don't - it is that simple. --Frederik Ramm 14:25, 4 September 2008 (UTC)
- Sorry, I forgot to leave my signature. I thought it important to leave this note to warn people that this is not an optimal scheme. My argument is not nonsense. It's how the TIGER and many other GIS data sets are organized, and it's how all current GPS navigation systems work (locating address numbers at a relative distance along a street using interpolation as necessary, not as individual points). In the best case, this will require a huge amount of preprocessing to transform it into a usable form for searching along a street. In the worst case, it's just unusable for creating GPS maps and other consumers which already use a standard format for addresses (hint: they're expecting a left/right and odd/even indication). The address numbers are associated with a street; they don't stand alone. You access an address via a street, therefore the address and house or building at that address is connected to the street, which makes it topological. There is nothing in the single-point scheme that ties the node to another OSM entity. Adding a tag with the street name does not make it topological. Adding a tag with the way ID would get you closer, but either method is prone to typos and other errors which will be hard to detect. e.g., someone mistyped the street name on one address, so that one is inexplicably missing when someone searches for it, even though all the neighbors are present. If you insist on this scheme, it should at least use a relation to tie the node to the associated way. It will still require a lot of preprocessing to locate the closest way node, but at least we won't have to search the entire dataset for the matching street by name and distance and hope that someone didn't make a typo. --Karl Newman 20:39, 4 September 2008 (UTC)
- You access a house by one or more streets. The house can be quite a way off the street. We have even/odd interpolation but also the other schemas. There are more complex house-numbers like "11b". There are house-numbers for the third yard of a building-block (common in Berlin). You don't want to get the numbers wrong when a way is turned, split or merged. You are free to propose any of your "standard formats". There IS such a relation in the spec to be used when needed; you should read it. Full address-searching works fine here, thank you. --MarcusWolschon 04:39, 5 September 2008 (UTC)
- I have proposed an alternative, long before this scheme existed. Another major problem with using nodes for house numbers is when slicing the planet into tiles, there will be cases where an address node falls in one tile but the street it's supposed to be associated with falls in the neighboring tile. This isn't a problem if you're just drawing pictures, but it makes the address node useless for both tiles if you want to actually use the address number data. --Karl Newman 18:13, 10 September 2008 (UTC)
- I think this system would work just fine, especially in the interpolated case. It doesn't matter whether or not a way of interpolated housenumbers is directly associated with a way resembling the actual street. If someone searches for an address, they'll get a point (lat/lon). Isn't that geocoding? Why would any program then have to locate the actual street on the map, if it already has a point for the specific address? Seems to me, for routing purposes, you only need to locate the nearest point on any street to figure out how to get there. This may or may not be the same street named in the address, but that's fine, because most houses whose address is on one street but which are actually closer to another street are on a corner and readily accessed from the other street anyway. Worst case scenario, the driver has to circle the block once he sees where the driveway actually is. If there really is a need to specify access points, isn't that also addressed in this proposal? Vid the Kid 14:33, 22 April 2009 (UTC)
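The computation described in the replies above (find the nearest road, then the nearest point on that road, where the system will "plot the dot") is a standard point-to-polyline projection. A minimal sketch in Python, using plain 2D coordinates and ignoring geodesic effects; the function names are my own:

```python
def project_point_to_segment(p, a, b):
    """Return the point on segment a-b closest to p (2D tuples)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:
        return a  # degenerate segment: both endpoints coincide
    # Parameter t of the orthogonal projection, clamped to the segment
    t = ((px - ax) * dx + (py - ay) * dy) / seg_len_sq
    t = max(0.0, min(1.0, t))
    return (ax + t * dx, ay + t * dy)

def nearest_point_on_way(p, way_nodes):
    """Closest point to p on a polyline given as a list of 2D points."""
    best = None
    best_d2 = float("inf")
    for a, b in zip(way_nodes, way_nodes[1:]):
        q = project_point_to_segment(p, a, b)
        d2 = (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2
        if d2 < best_d2:
            best, best_d2 = q, d2
    return best
```

The same routine serves both directions of the argument above: snapping a GPS fix onto a road, and snapping an address node onto the street it belongs to.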
associatedStreet vs. street
I would prefer if the relation type "associatedStreet" would be merged with Relation:street. Otherwise you have to add street segments to two relations, one for saying that several ways are the same logical street and one for relating the houses to this street. This might also touch on Relation:route with route=road. --Shepard 15:07, 31 August 2008 (UTC)
- I don’t think merging these is necessarily the best option as a collection-type relation can be used on other things than streets (where a “house” role wouldn’t make sense) and because I’d expect that everything in a collection should be simply a part of the collection, i.e. be described by the collection’s tags – which wouldn’t be the case for houses associated with a street.
- In my opinion, the solution is to create an associatedStreet relation and add the street’s relation as a member with role “street”. I’d not be strictly opposed to merging either – adding segments to both ways, however, would be by far be the worst way to deal with this situation. --Tordanik 21:39, 31 August 2008 (UTC)
- As I wrote on Talk:Relations/Proposed/Site, in my eyes all relations are collections. I don't really see the additional semantic in having a pure group/collection relation type. I think the real value lies in the finer defined relations. In this case this could still be a relation for general collected ways where one role value would say that something is a segment of this way (not as in the datatype "way" but as in street, river, ...) and another would say that some feature belongs to / is attached to this way (a house, a bus stop, ...).
- Of course as you said I could use a relation as the member of the associatedStreet relation (how do I have relations as members of others in Potlatch?) but this seems to overcomplicate things. --Shepard 14:24, 1 September 2008 (UTC)
- I think a combined relation would fit for both problems and is much easier to do for mappers. --Patzi 17:05, 18 September 2008 (UTC)
- I agree a combined relation is probably better for a couple of reasons:
- 1) There seems to be a push to make 'generic' relations around the place (look at the boundary/multipolygon case for example). The very close similarities between a collected relation (all the ways making up a road) and the associatedStreet relation (all the houses that are addressed on the SAME street) really means 1 relation may serve better.
- 2) Calling it associatedStreet is very specific, in my local city I know of both foot malls and docks have addressing systems also, so making it street-specific isn't really appropriate.
- 3) Even the *house* or *addr:houselink* roles are not really right because there are more than just houses along many streets.
- Really the solution is a generic 'collection' similar to the collected way suggestions, then allow nodes in there with an 'address' role or something generic like that to link to *any* node that's addressable along that collected item. --Beldin 08:30, 25 February 2009 (who forgot to sign his entry)
- Sounds reasonable. I would like to add support for this schema when improving house-number -searching in Traveling Salesman.
- However this may fail on corners where an "associatedStreet" is explicitly given so as to avoid attaching the house-number to the side-way (that is a part of the type=street relation in this case). If one of the two ways is oneway=true, then this becomes a rare case but with a huge impact. --MarcusWolschon 08:13, 25 February 2009 (UTC)
More than one housenumber for a single node/house
How should we handle situations in which one building has more than one housenumber? For example, many big companies have an address like "Nicestreet 12-17"; how should we tag this? addr:housenumber=12-17 (maybe another divider) or addr:housenumber=12,13,14,15,16,17? I think I would prefer the shorter one, but the longer one should also be supported, so that something like addr:housenumber=34,36,37,56 is also possible. At the moment I use it like addr:housenumber=12-17. --Patzi 17:03, 18 September 2008 (UTC)
- I also have a case where I need to map multiple addresses to a single node, but only even numbers. Could we use a syntax on top of the one you proposed, similar to the one found in crontab configuration files (if it's clearer), that is, start-finish/step, like addr:housenumber=12-17/2, which would expand to addr:housenumber=12,14,16? Pov 23:29, 17 December 2008 (UTC)
- I think "1,2,4,7" is much less likely to confuse mappers who cannot consult the wiki on every tag they see. --MarcusWolschon 07:12, 18 December 2008 (UTC)
- Ok, I understand. Also, and more importantly, should we add a note or something on the article page or do we keep this feature in the talk page for now ? I'm afraid the discussions here kinda hit an agreement on most points but the article page didn't reflect any of that progress Pov 09:31, 18 December 2008 (UTC)
- I guess we can safely add the "a,b,c,..."-notation to the description of the addr:housenumber tag. (I just did so, as "addr:housenumber" seems not to be present on any page except for an old and abandoned proposal.) --MarcusWolschon 12:56, 18 December 2008 (UTC)
I ran into a similar problem: the same house has two different addresses, but only one entrance, so it's not possible to put the nodes near the corresponding entrances. Any idea on how to add two addresses? Xeen 12:49, 6 January 2009 (UTC)
I'd really like to be able to support ranges for single nodes: when tagging apartment building entrances, this is necessary for my sanity and yours. I just added a node tagged addr:housenumber=46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83. Ouch! For one thing, speedy data entry on the road with something like Osm2go needs this: I'm not typing that sort of thing out every time with a PDA keyboard! Therefore, I make the following proposal --achadwick 22:04, 7 March 2009 (UTC)
Sub-proposal: ranges of numbers for individual nodes
- Extend the syntax for addr:housenumber=* to allow hyphen-separated ranges as well as comma-separated numbers, e.g. addr:housenumber=46-83 or addr:housenumber=1,2,7-13,15,16. Currently only the comma-separated form is documented on the main page.
- Permit addr:interpolation=* on those nodes or ways which are also tagged addr:housenumber=*. Currently only interpolation on plain ways is documented.
For those implementing a parser, hyphens bind tighter than commas; this is quite a common syntax. Interpolation for nodes should only be invoked for ranges specified with hyphens.
I think this will be sufficient for most uses, and it's much faster to type. Any takers for this idea? --achadwick 22:04, 7 March 2009 (UTC)
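The "hyphens bind tighter than commas" rule in the sub-proposal above is straightforward to implement. A sketch in Python, assuming purely numeric house numbers (suffixes like "11b" and the even/odd step would need extra handling); the function name is my own:

```python
def expand_housenumbers(value):
    """Expand an addr:housenumber value like '1,2,7-13,15,16' into a list.

    Hyphens bind tighter than commas: the value is split on commas
    first, then each part is either a single number or a 'lo-hi' range.
    """
    numbers = []
    for part in value.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = (int(x) for x in part.split("-", 1))
            numbers.extend(range(lo, hi + 1))
        else:
            numbers.append(int(part))
    return numbers
```

With this, the entrance example above becomes simply addr:housenumber=46-83, and expand_housenumbers("46-83") recovers all 38 individual numbers.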
In Ireland, we keep running into this issue all the time. House numbering in this country is very weird and inconsistent, and leads to cases such as a single shop having multiple house numbers all the time. I very much like your proposal of allowing addr:interpolation=* on a single node tagged with, say, addr:housenumber=1-8. I am proposing this as the standard solution for Ireland on IRC. Hopefully, others will agree and start tagging using this system. --undo 16:02, 16 November 2009 (UTC)
- This page is getting quite full. Could we create one under Proposed_features#Proposed_Features_-_Addresses? --MarcusWolschon 08:08, 17 November 2009 (UTC)
One building, two addresses, two countries
My problem is one building with two different addresses with two different street names in two different countries! [2] [3] I used addr:housenumber=* etc. for the US address and addr_ca:housenumber=* tags for the Canadian address. I'm open to any other ideas... -- Gridlock Joe 02:41, 17 June 2009 (UTC)
- Ok, now THAT's something nobody expected to exist on this planet. You could pick a node for one of the addresses and put the other one on the building polygon itself. --MarcusWolschon 06:32, 17 June 2009 (UTC)
- Baarle-Hertog (BE) and Baarle-Nassau (NL) are another example of a town split by an international border. [4] No buildings have been added there yet, so I couldn't claim a precedent. -- Gridlock Joe 14:51, 17 June 2009 (UTC)
- I added two separate address-tags and removed the tags from the building. Non-standard-tags like addr:street_ca=* (which was on the building) are not good for POI-searches and other parsers. --Phobie 13:15, 17 June 2009 (UTC)
- Thanks, Phobie. Agreed on the use of non-standard tags; I think your solution probably is the most workable. -- Gridlock Joe 14:45, 17 June 2009 (UTC)
Apartment complexes
In the US, it is common to have multiple buildings with the same street number but with apartment numbers on the units in individual buildings. Would it be possible to build a relation of all of the buildings at a given street number associated with a road and then tag the building with the range of apartment letters or numbers it contains?
- You mean something like First house: addr:housenumber=12 addr:apartement=1 second: addr:housenumber=12 addr:apartement=2 and so on? I think addr:apartement=* would suit your needs and would be significant enough. --Patzi 06:38, 19 September 2008 (UTC)
- I like this idea, albeit spelt as apartmentnumber. Better still, what about flatnumber? It's shorter. Do we need interpolation ways for apartment ranges too? Alternatively we could state that a house is a building and an apartment/flat is an element within a building; and therefore there's a natural bilevel ordering. What about flat numbers doing the even/odd thing, though, or incrementing alphanumerically when the house addresses don't? I think we *do* need to treat flat interpolations separately. Fortunately they're quite rare :) --achadwick 20:55, 7 March 2009 (UTC)
- If the user wants to drive to a house-number but there are multiple places with that house-number, what do you suppose should be the lat+lon of the destination? What if these buildings are far apart? Does that case exist? --MarcusWolschon 06:58, 9 March 2009 (UTC)
Alphabet interpolation
While collecting data in Dresden, I came across a number of cases where a building (say housenumber 1) has several flats that are designated by letters after the house number, such as from 1 or 1a to 1z. I've tagged these with interpolation=alphabet for the moment. Is there a better way of doing that? --Bigbug21 12:36, 19 September 2008 (UTC)
- Not yet but let's keep collecting such cases. No algorithm implementing this schema will evaluate this yet except for the single houses 1, 1a and 1z. We will need to know what strange things exist and how common they are to improve the schema later. --MarcusWolschon 16:56, 21 September 2008 (UTC)
Yet another issue: there are some villages around here which have house numbers like A3 or G25 (the roads go by letters, but the actual numbers always include the letter). Example: [5]. That should be considered as well when it comes to interpolation. --Bigbug21 15:46, 5 October 2008 (UTC)
Patch needs improvement
Currently, the patch display is unable to handle four-digit and higher house numbers without making them practically illegible. Given such numbers are the rule, not the exception in many, many areas, this is a big issue that needs to be fixed in the display before house numbering is really usable in any way. Circeus 00:03, 1 October 2008 (UTC)
- The rendering has been changed in January 2009 (for the Mapnik and Osmarender layers): the numbers are now drawn without the background circle and as such show up properly even with 4 digits or more. Alv 08:17, 16 February 2009 (UTC)
- The data is still usable for routing or other software, even if shown suboptimally on the default maps. Hopefully those with the knowhow get a better rendering for longer numbers. Likely it is some Canadian numbering convention which leads to numbers of that magnitude? Everywhere else I know the numbers start at 1 or thereabouts for every street and very few streets are so long as to have over one thousand houses (or a length of 10 km if it's one number per 10 meters). So do these cities assign numbers per distance from the street's end in feet? Alv 13:15, 1 October 2008 (UTC)
- AFAICT, this is a situation found in every North American city of any decent size. It took me five minutes to find examples in Plattsburgh, NY: 3036 Rt. 22 Main Street, 1790 Main Street, 9631 Route 9. I'm not clear what the numbering schemes ARE, but I do know such numbers are common here. Circeus 17:12, 1 October 2008 (UTC)
- Just to answer this: over here in Washington state, and likely for many other states/cities, the house numbers tend to be two digits longer than the street number. For example, the part of a road between 14th Street and 15th Street would be called "the 1400 block" and houses would be numbered between 1400 and 1499 (assuming the streets weren't "moved" and the designers weren't arbitrary -- there are cities where block numbers are at a 300 or 500 offset from numbered streets, which is very disconcerting). With 10 or 16 or 20 streets per mile, street numbers over 300 are common, thus house numbers over 30000 may be present. --goldfndr 04:08, 16 February 2009 (UTC)
- It's not always a 100-per-block situation. Sometimes it's 1000-per-mile. And sometimes it's a somewhat arbitrary scale chosen so that house numbers approach a certain limit near the edge of the area (such as a county) over which the numbering scheme is relevant. But there's a big point that hasn't been mentioned yet: In the US, nearly all address systems have a common origin. (That's "origin" in the sense of Cartesian coordinates.) That means a very short street far from the city center can have a range of addresses such as 13002 to 13248. In fact, a street's housenumbers won't approach 0 unless that street intersects one of the axes of the address grid. Vid the Kid 15:12, 22 April 2009 (UTC)
Extra nodes when the building outlines are available
Sometimes we have city center houses (as per house numbers) that share a wall - most of these are likely or at least possibly drawn as a continuous single building. Then adding extra unconnected nodes for the house numbers seems reasonable. Likewise when one building in an intersection has different house numbers on both streets, extra nodes near the appropriate walls is how I've done it. But what's others' opinion, if a building is unambiguously near the one and correct street and the outline is available, should we keep the numbers tagged in the way making up the building (my opinion) or "allow" adding nodes. Alv 11:24, 23 October 2008 (UTC)
- We have a problem here. This housenumber mapping seems to be done only for visual maps. Currently the nodes are set at estimated positions that represent the middle of the building or just look good on the map.
- What we need for OSM for the blind and the pedestrian navigation tool LoroDux is very exact nodes on entrances, which are without doubt on the outline of the building and are connected!
- Please don't recommend to map house numbers as estimated positions any more.
- If the rendering of housenumbers is too close to the road to look nice on a visual map, that should be corrected in the renderer, not by wrong mapping.
- (If you wonder what a visual map is, see HaptoRender for the opposite, a haptic or tactile map.)
- --Lulu-Ann 20:46, 17 June 2009 (UTC)
- Which "this"? The extra nodes are inside the building outline, naturally. For example this entrance belongs to a building that has addresses on both nearby roads and has four distinct entrances. Alv 21:57, 17 June 2009 (UTC)
- Usually the house numbers can be assigned to entrances. Why did you call the entrances A, B, C instead of using the node for the housenumber, too? How shall a blind person find the right entrance, when the GPS position of the node is somewhere in the middle of the building instead of on the outline? --Lulu-Ann 23:02, 17 June 2009 (UTC)
- Because in Finland the entrances are considered properties of the single building; there isn't a house 4 A and a house 4 B and 4 C, but a building with the housenumber 4, and that building has four staircases with the letter identifiers. (And it could even be that several physical buildings share the number where they're part of the same housing cooperative.) Just as I wouldn't cram 100 housenumbers (4 A 1, 4 A 2, ..., 4 D 100) inside a building when that building has several staircases and 100 apartments.
- Node is inside a way with building=yes -> it's one of that building's numbers -> find a node with building=entrance on that way. Alv 23:34, 17 June 2009 (UTC)
- You did not answer my question. How shall a blind user find the right entrance, if the house number is not a node at the entrance but a node in the middle? Working algorithm, please! (Or admit that it doesn't work.) The right way to do it in your case would be to have several house numbers on one entrance node, which should be no problem and can easily be routed. --Lulu-Ann 11:15, 21 June 2009 (UTC)
- I did. If the building outlines are not available, there will be just a node and no information on the various entrances. If the address node is inside an area with building=*, check that way's nodes for entrances. So you'd add on every entrance node addr:housenumber=4 addr:staircaseletter=x addr:street=Vähänkyröntie addr:alternatestreet=Sofianlehdonkatu addr:alternatehousenumber=11. So someone searching for the building (as most do) would get four (or more) results. Searching just the addr:street+addr:housenumber combinations won't then find the address on the Sofianlehdonkatu street at all. Some houses have even three addresses on three streets. Alv 08:41, 2 July 2009 (UTC)
- "Searching just the addr:street+addr:housenumber combinations won't then find the address on the Sofianlehdonkatu street at all" - I take that as you admit it won't work. --Lulu-Ann 10:09, 2 July 2009 (UTC)
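The lookup Alv describes above ("node is inside a way with building=yes → find a node with building=entrance on that way") boils down to a point-in-polygon test plus a scan of the outline's nodes. A minimal sketch in Python; the data layout and function names are invented for illustration, not taken from any existing tool:

```python
def point_in_polygon(p, polygon):
    """Ray-casting test: is point p inside the closed polygon (list of (x, y))?"""
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges crossing the horizontal ray extending right from p
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def entrances_for_address(addr_node, buildings):
    """Return the entrance nodes of the building polygon containing addr_node.

    `buildings` is assumed to be a list of dicts like
    {"outline": [(x, y), ...], "entrances": [(x, y), ...]},
    i.e. ways tagged building=* whose outline nodes include any
    building=entrance nodes.
    """
    for b in buildings:
        if point_in_polygon(addr_node, b["outline"]):
            return b["entrances"]
    return []
```

This only answers the geometric half of the debate; which entrance letter a searcher actually wants still depends on how staircase letters are tagged.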
"skip"-part
Who removed the "House-numbers that are tagged as a single node always take precedence over interpolated ones and are skipped in interpolation" from the page? You can't just change the documentation of an existing algorithm at will. --MarcusWolschon 08:01, 12 November 2008 (UTC)
I've just run into a situation where the current "skip" part showed some problems. Here's a link to [OSMI]'s take on the problem, so I can avoid some pretty heavy ASCII-art lifting. As you can see, by skipping nodes with address information in the interpolation calculation, we end up having numbers in this order: 51....39-35-37-33-31...27. We should redefine the algorithm so that even though defined numbers are skipped, the interpolation is done on a per-segment basis, not a per-way basis, so that we don't see stupid stuff like this. Pov 21:43, 23 December 2008 (UTC)
- Just make the nodes a part of the interpolation way, or split it up if you want that behavior, and you're set. If the interpolation is not equidistant, then it was just mapped wrong. May happen. At some point all houses will be mapped individually and this issue goes away all by itself. I do not particularly like the idea of complicating the algorithms even more by requiring the estimation of a corresponding point on the interpolation way for nodes that are not a part of that way. (My connection is too slow to load your example over the cell-phone during the holidays.) --MarcusWolschon 11:39, 25 December 2008 (UTC)
- BTW: What do you propose to do instead of skipping them? Have 2 or more locations for the same address when searching? How is the user to choose between them when we are the ones to tell the user where the address (s)he is searching for is located? --MarcusWolschon 11:49, 25 December 2008 (UTC)
- Okay, I managed to load your example. The node tagged 37 should be a member of the way going through it. Trivial mistake. --MarcusWolschon 11:55, 25 December 2008 (UTC)
- Maybe my explanation wasn't clear, but the node is part of the way! And I certainly don't want dupes. What I was suggesting was that, with a node in the middle of a way, the interpolation is adjusted so as to avoid numbers going in reverse order around the node in the middle of the way because the interpolation isn't done on a per-segment basis. Pov 23:48, 25 December 2008 (UTC)
- Then that script implemented it incompletely by looking only at the first and last node of the way. Look at the [[6]] to see how this is already covered. --MarcusWolschon 08:56, 26 December 2008 (UTC)
I haven't looked at the examples, but I can only imagine two cases. First, the "defined" address is already part of the interpolation way. If that's the case, it should be used as a vertex in the interpolation process, and the process of interpolation should then find exactly that point when looking for the specified address. Or, the geocoder will notice that the specified address is already marked with a node, and can then skip the interpolation process altogether. Either way, the result will be the same.

In the other case, the "defined" address is not part of the interpolation way. Then the possibility exists that the geocoder might find two points for the same address: one defined by the node, and one found by interpolating on the way. If the geocoder is sure that the node and the way both refer to addresses on the same street (not two streets with the same name) then it should only report the node with the exact address. This doesn't have to be phrased as "skipping" the defined address, but simply as reporting the better-defined result. (All this depends on the geocoding program being certain that the address defined on the node belongs to the same street as the addresses interpolated on the way. If the node is not part of the way, there can be a lot of problems with that, unless the AssociatedStreet relation is used...)

And then there's reverse geocoding. If, given a point, a reverse geocoding program decides the point lies along an interpolation way, and there are other addresses for the same street defined on nodes that aren't part of that way, then it makes sense to talk about "skipping" the defined addresses. And if the node where the address is defined isn't very close to where that same address would be interpolated, then you can get some out-of-sequence results for reverse geocoding. The same caveats about associating these things to the same street apply.
I think the solution would be to make sure the node with the defined address is part of the interpolation way. If that's true, there shouldn't be a problem, as I pointed out at the beginning of the paragraph. Vid the Kid 15:09, 22 April 2009 (UTC)
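To make the per-segment idea above concrete, here is a minimal sketch (a hypothetical helper, not code from any OSM tool): every node on the interpolation way that carries its own addr:housenumber is treated as an anchor, and linear interpolation runs separately between consecutive anchors, so the generated numbers can never run backwards around a defined node.

```python
# Hypothetical sketch -- not from any existing OSM tool. Nodes along an
# interpolation way that carry their own addr:housenumber act as anchors,
# and interpolation runs separately between consecutive anchors, so the
# generated numbers cannot zig-zag around a defined node.

def interpolate_way(nodes):
    """nodes: ordered list of (node_id, housenumber_or_None) along the way.
    Returns {node_id: housenumber} using per-segment linear interpolation."""
    anchors = [(i, n) for i, (_, n) in enumerate(nodes) if n is not None]
    result = {}
    for (i0, n0), (i1, n1) in zip(anchors, anchors[1:]):
        span = i1 - i0
        for k in range(span + 1):
            # linearly interpolate between the two anchors of this segment
            result[nodes[i0 + k][0]] = round(n0 + (n1 - n0) * k / span)
    return result
```

With endpoints 51 and 27 and a defined 37 in the middle, the intermediate nodes come out in strictly decreasing order instead of the 39-35-37-33-31 zig-zag described above.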
Redundant tagging a problem
I think the current proposal with the following tags addr:street=*, addr:full=*, addr:postcode=*, addr:city=*, addr:country=* is a mistake. Sounds harsh; let me elaborate. As a computer engineer, when I design a database, avoiding duplication of data is a no-brainer. It may slightly complicate the storage of data, but there's a big reward that may be summarized by this golden rule: modify once, apply everywhere. The aforementioned tags only duplicate what's already stored somewhere else. Suppose the street's name changes. Rare, but possible. If there's a relation between the numbers and the street, you modify the street's name and boom! You're done. Getting the street's name of a house is easy; it's just a matter of following the other end of the relation. But if we were to use addr:street, or worse addr:full, imagine the nightmare that would ensue to keep all those data consistent. I'll let you think of the consequences of a postcode modification as an exercise :) The argument that it helps humans to access the information is moot. Seriously, who uses OSM's data by directly looking at attributes of nodes? Pov 22:29, 15 November 2008 (UTC)
- I don't like that part either. I'm a fan of polygons and relations for this. That's why I'm not using that part myself; I tag the street's name on the street, and the suburb- and city-names and the zip-code in a polygon. --MarcusWolschon 12:12, 17 November 2008 (UTC)
- We could just remove the relation-less method from the list. IMHO it's anyway easier to add a relation than to write all the address information many times for every house. --bkr 00:27, 29 November 2008 (UTC)
- Removing it from the list does not change anything about the fact that the relation-less version is prevalent in the database. And adding a relation is in fact more work, if only because JOSM has a "Paste Tags" feature, but not a "Paste Memberships" feature. Implementing the latter would help relation usability more than wiki politics. --Tordanik 20:22, 29 November 2008 (UTC)
- Ok, I see your point. The fact that the relation-less version is prevalent is not that important, it wouldn't be overly complicated to mass process most of the data to make it compliant to the new spec. Data that would fail to be processed would be problematic anyway. As far as JOSM's limitations goes, I guess the real problem is the lack of relations templates and I'm sure that house numbers are not the only feature that would benefit from adding this to JOSM. Of course this goes beyond house numbers' scope and should be discussed someplace else. Pov 23:14, 29 November 2008 (UTC)
- I agree that duplicating data is bad. I believe that they allowed this duplication in order to make house numbering easy for mappers (rather than for users), which should not be disregarded as a main goal: the map will only be there if it's easy to do also for non-DB experts. One possible improvement could be the following:
- I create a point which is a new building, and add its number
- Then I can click on the road it belongs to, and the editing tool automatically creates the required relation, without duplicating data (nor bothering me with relation details)
This approach requires patching existing tools to add specific support for numbering; still, I believe it's worth it: numbering is so useful. Of course it can easily be extended to interpolated numbers and the like. Alfredio 18:00, 1 February 2009 (GMT)
example-algorithm
We could just remove the relation-less method from the list above. IMHO it's anyway easier to add a relation than to write all the address information many times for every house. --bkr ..11:49, 22 November 2008 (UTC)
- If there is no relation (the usual case) then the algorithm needs to deal with that. If you removed that part then it would only work at all for a fraction of the data we have. If that's how people map it, then that's what an implementation gets as input. --MarcusWolschon 20:01, 23 November 2008 (UTC)
- Can't we settle on a good way of doing it (using relations) and just process the existing data to clean it according to the new version of the spec? That would considerably simplify the algorithm and avoid the problems of the current spec too. Deciding this now would also prevent useless work and stop further "bad" data being input into the system ASAP. Pov 22:29, 23 November 2008 (UTC)
- Requiring (huge numbers of inexperienced) mappers to add the relations for all house-numbers (lots and lots of them) in the future is not an option. Who is willing to write an osmosis-task to do that step as a pre-computation before using this data in an application? --MarcusWolschon 08:18, 25 February 2009 (UTC)
Apartment Blocks, Blocks of Flats, Communities, Business Parks, Campuses etc...
Something that's been mentioned a few times here and in the building discussions is how to deal with apartment blocks... I'd say this applies to many forms of address that relate to an entity other than a street, not just apartments: buildings in a business park, buildings on a campus, etc. Typically these are entities that can cover a reasonably large area...
I'd summarize the address possibilities as follows, please add if you've seen any more...
- Buildings are against a road with each having their own building number related to the road, e.g. Appt 102, 49 Hill Street, Fake Town...
- Buildings are against a road with each having their own building number related to the road, but, apartments/suites/etc reachable via specific entrances, e.g. Appt 102, Entrance 4, 49 Hill Street, Fake Town...
- Buildings are part of a larger complex with buildings named or numbered relating to that complex with the complex potentially having a name and/or number relating to the street, e.g. Appt 102, Building 3, Homely Community, 49 Hill Street, Fake Town...
- Buildings are part of a larger complex with buildings named or numbered relating to that complex, specific entrances existing that provide access to certain apartments and the complex itself having a name and/or number that relates to the street, e.g. Appt 102, Entrance 5, North Building, Homely Community, Fake Town or Appt 102, Entrance 5, North Building, Homely Community, 59 Hill Street, Fake Town.
I'm trying to work out how to deal with the last of those, where a complex/community may have one or more named or numbered, possibly gated/guarded entrances; the community will likely be an entire block and will have a name, and possibly a housenumber for the entire community; and the buildings are named or numbered and have one or more entrances providing access to the apartments within.
The use of addr:street unfortunately only applies to streets, otherwise it could have been used to link the building numbers to the complex then the complex overall node could have gained the street number and addr:street to show which street the housenumber relates to... dkt 10:02, 9 December 2008 (UTC)
If we added addr:aptnumber=*, addr:aptname=*, addr:aptblocknumber=*, addr:aptblockname=*, would that solve the problem for blocks of flats/apartment blocks? They're not traditional houses, that's for certain. --achadwick 21:33, 7 March 2009 (UTC)
Proposal: Making the link of housenames to streets more robust (by User:SlowRider)
The following proposal tries to incorporate Relations/Proposed/Collected Ways to get an unambiguous and reliable relation between housenames and the street they belong to. All segments of a street are first put into a street relation. The relation is given the name of the street. Then the housename nodes are added to the very same relation as role=addr:houselink.
This makes the address relations much more robust, but puts some extra work on the shoulders of the mappers:
<node id='1' lat='48.99772077613563' lon='8.390096880695504' />
<node id='2' lat='48.996870577321786' lon='8.388697443728406' />
<node id='3' lat='48.99797799901517' lon='8.390466092937745' />
<node id='4' lat='48.99767157650778' lon='8.390030650427239' />
<node id='5' lat='48.997682328174704' lon='8.389127510405448' />
<way id='10'>
  <tag k='highway' v='footway' />
  <nd ref='1' />
  <nd ref='2' />
</way>
<way id='11'>
  <tag k='highway' v='residential' />
  <nd ref='2' />
  <nd ref='3' />
</way>
<way id='12'>
  <tag k='highway' v='residential' />
  <tag k='bridge' v='yes' />
  <nd ref='3' />
  <nd ref='4' />
</way>
<way id='13'>
  <tag k='highway' v='tertiary' />
  <nd ref='4' />
  <nd ref='5' />
</way>
<node id='6' lat='48.99758693554065' lon='8.390053067282306'>
  <tag k='addr:housenumber' v='1' />
</node>
<node id='7' lat='48.997905619553315' lon='8.39046152143938'>
  <tag k='addr:housenumber' v='2' />
</node>
<node id='8' lat='48.99761386658398' lon='8.38915312158458'>
  <tag k='addr:housenumber' v='3' />
</node>
<node id='9' lat='48.996866530131754' lon='8.388753644441946'>
  <tag k='addr:housenumber' v='4' />
</node>
<relation id='13'>
  <tag k='type' v='street' />
  <tag k='name' v='Foostreet' />
  <member type='way' ref='10' role='member' />
  <member type='way' ref='11' role='member' />
  <member type='way' ref='12' role='member' />
  <member type='way' ref='13' role='member' />
  <member type='node' ref='6' role='addr:houselink' />
  <member type='node' ref='7' role='addr:houselink' />
  <member type='node' ref='8' role='addr:houselink' />
  <member type='node' ref='9' role='addr:houselink' />
</relation>
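For illustration, a consumer of this relation could recover each linked house's street name without any duplicated tags. A minimal sketch under a simplified in-memory data model (the dict layout is an assumption for the example, not a real OSM parser):

```python
# Hypothetical sketch with a simplified in-memory model -- not a real OSM
# parser. Given type=street relations as proposed above, derive each
# addr:houselink member's street name from the relation, so the street
# name is stored in exactly one place.

def street_names_for_houses(relations):
    """relations: dicts with 'tags' (dict) and 'members' (type, ref, role).
    Returns {node_ref: street_name} for every addr:houselink member."""
    names = {}
    for rel in relations:
        if rel["tags"].get("type") != "street":
            continue
        street = rel["tags"].get("name")
        for mtype, ref, role in rel["members"]:
            if mtype == "node" and role == "addr:houselink":
                names[ref] = street
    return names
```

Renaming the street then means editing a single relation tag; every linked house picks up the new name automatically.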
- Could you develop how it's more robust than the current relation model? Also, what's the benefit, from your point of view, of repeating the street's name in the relation (I hate cloning info BTW, see #Redundant_tagging_a_problem above).
- It looks to me like the suggestion isn't that the street's name is repeated, but that it only exists in the relation. dkt 19:23, 10 December 2008 (UTC)
- Also... It's more robust in that the housenumber nodes are linked to the generic street relation achieving a single relation for one street rather than multiple relations for numbering and naming or the alternative without relations, addr:street... dkt 19:28, 10 December 2008 (UTC)
- Isn't that very similar to Proposed features/House numbers/Karlsruhe Schema#Case: Relations (easy for computers, difficult for humans)? type=street is better than type=associatedStreet but role=street is better than role=member and role=house is better than role=addr:houselink --Phobie 22:31, 10 December 2008 (UTC)
- That's what I thought too. I think the working model is robust enough. I fear User:SlowRider thinks there must be a single relation for every house or something... —Preceding unsigned comment added by Pov (talk • contribs) 00:39, 11 December 2008
- It is very similar. But the point is that one relation would be easier (and thus more widely used) than at least 2 relations (one 'associatedStreet' and one 'street'). Since both relations try to do something similar (define what street objects belong to), it is reasonable to use one relation for both. Put all street parts into one relation as well as all housenumbers, instead of all street parts in one relation and the housenumbers in at least one other relation. I guess the role-values can still be discussed. I would prefer 'house' over 'addr:houselink', since it sounds more generic. The street parts might not even need a role, since they make up the 'street' (ways in a route don't have a role either, do they?). You already know that they are a 'member' of the relation and that they are part of the 'street', so at least those two roles would be kind of redundant. But I guess that's not the most important issue. :) --Driver2 15:12, 11 December 2008 (UTC)
- First, sorry for forgetting to sign my previous comment.
- The more I think about it, the more I think you're right: a relation with type=street is enough compared to associatedStreet. But I think that not adding roles for street segments is dangerous: first because it will create problems with interpolation-ways if one forgets to add a role, and because it kinda contradicts the logic of determining members by their role. If we think that the way being tagged accordingly is enough to determine what it is, then why bother with a role for the other members? They're nodes with an addr:housenumber key or ways with an addr:interpolation key. Isn't that enough? For the sake of consistency we should either give everything a role or nothing, but not part of it.
- Pov 16:59, 11 December 2008 (UTC)
- I didn't say we determine the role by the fact that it's a way. I meant it is determined by the fact that it is absent. "No role" = "Default member", in this case a member of "type=street", so it's part of a street. But anyway, I have no objections to adding a role.
- I would propose: type=street for the relation, role=house for a house-node/area or interpolation line, role=street for the parts of the street (ways).
- --Driver2 16:54, 12 December 2008 (UTC)
- I was about to blindly agree with all of that but then I remembered to do a search and found that there's already a Relation:street that exists. Should we merge or should we either keep the existing associatedStreet name or move to a more logical type=addr:link ? Pov 17:47, 12 December 2008 (UTC)
- Of course it already exists. That's exactly what is being proposed here: use the street-relation for housenumbers too, instead of having two types of relations (street and associatedStreet) for a very similar task. The page you mentioned is linked in the very first post of this subsection. --Driver2 18:45, 12 December 2008 (UTC)
- In a perfect world a role for the street-segments would not be needed, but in reality we will see houses and interpolation-ways without a role set. If all members need to have a role set, it is easier to find errors! --Phobie 05:21, 13 December 2008 (UTC)
way ids instead of addr:street
Wouldn't it be more consistent (robust against typing errors, better for further use of OSM data, and avoiding redundantly adding the addr:street tag again and again) to add a way id to a house? Simply:
<node ...>
  <tag k="way_id" v="84573485" />
  ...
</node>
Editors could support this and automatically add the way_id when the user sets a street name. I am planning to write an algorithm which tries to get the way id for every POI. We HAVE the ways in OSM, why not use their ids to connect the houses with the ways? --MapDeliverer 16:35, 29. December 2008 (UTC)
- Your idea is already implemented! Add the node/way to a relation and only tag the relation. --Phobie 16:15, 29 December 2008 (UTC)
addr:interpolation=alphabetic
"alphabetic" doesn't seem to render the line between the addresses at its ends, a test is here.
Also it isn't very clear from the description "7a - 7f will be: 7a, 7b , 7c, 7d, 7e" what the last housenumber should be. If the street has houses from 8a up to 8c, what addr:housenumber= values would the end nodes have? I suppose "8a" and "8c", but the page suggests "8d" (what comes after "z" then?). Balrog 14:01, 5 January 2009 (UTC)
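For what it's worth, here is a minimal sketch of how a data consumer might expand an alphabetic range, assuming single-letter lowercase suffixes on a shared numeric part and inclusive endpoints; the helper name and the inclusive-endpoint reading are assumptions, since the page itself is ambiguous on exactly this point:

```python
# Hypothetical sketch of expanding an addr:interpolation=alphabetic range.
# Assumes single-letter lowercase suffixes on a shared numeric part, with
# both endpoints included -- exactly the ambiguity raised above.

import string

def expand_alphabetic(start, end):
    """'8a', '8c' -> ['8a', '8b', '8c'] (endpoints included)."""
    num, a = start[:-1], start[-1]
    num2, b = end[:-1], end[-1]
    if num != num2:
        raise ValueError("endpoints must share the same numeric part")
    letters = string.ascii_lowercase
    i, j = letters.index(a), letters.index(b)
    return [num + letters[k] for k in range(i, j + 1)]
```

Under this reading, a street with houses 8a up to 8c would carry end nodes "8a" and "8c", and nothing ever needs to go past "z".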
Giving hints about the road-access
The proposal how to map hints about the road-access is incomprehensible:
<relation id="??"> <tag k="type" v="roadAccess" /> <member type="way" ref="11" role="accessto" /> <member type="node" ref="11" role="accessvia" /> <!-- (optionally multiple <member type="node" ref="11" role="accessvia" /> for multiple access-locations to the adressed place e.g. for convention-centers) --> <member type="way" ref="???" role="accessfrom" /> </relation>
- Where can I put the address node (or addr:housenumber and addr:street)? This information is necessary.
- I do not know which way is intended to have the role "accessto".
- I do not know which way is intended to have the role "accressfrom".
For me the following scheme seems more logical:
<relation id="??"> <tag k="type" v="roadAccess" /> <member type="node" ref="??" role="accessto" /> <member type="node" ref="??" role="accessvia" /> <!-- (optionally multiple <member type="node" ref="??" role="accessvia" /> for multiple access-locations to the adressed place e.g. for convention-centers) --> </relation>
--Hatzfeld 22:50, 17 January 2009 (CET)
- I agree. The example needs to explain these 3 points. --MarcusWolschon 06:41, 19 January 2009 (UTC)
I think it would be better to implement the ideas of Proposal: Making the link of housenames to streets more robust and the talkpage of Relation:street:
Example:
<relation id='13'>
  <tag k='type' v='street' />
  <tag k='name' v='Foostreet' />
  <member type='way' ref='10' role='member' />
  <member type='node' ref='678' role='associated' />
  <member type='node' ref='888' role='accessible' />
</relation>
<relation id='14'>
  <tag k='type' v='street' />
  <tag k='name' v='Barstreet' />
  <member type='way' ref='11' role='member' />
  <member type='node' ref='888' role='associated' />
</relation>
- 10 is part of the street Foostreet
- 678 is a house on Foostreet
- 888 is a house accessible from Foostreet while it is part of Barstreet
--Phobie 09:50, 29 January 2009 (UTC)
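To spell out the intended consumer behaviour of this scheme, here is a minimal sketch (simplified data model, hypothetical helper) that prefers the street a house is 'accessible' from over the one it is merely 'associated' with:

```python
# Hypothetical sketch -- simplified data model, not a real OSM API. Given
# type=street relations using the role scheme proposed above (associated
# vs. accessible), pick the street a router should use to reach a house.

def access_street(relations, house_ref):
    """Prefer a street where the house is 'accessible'; fall back to the
    street it is 'associated' with."""
    associated = accessible = None
    for rel in relations:
        if rel["tags"].get("type") != "street":
            continue
        for mtype, ref, role in rel["members"]:
            if mtype == "node" and ref == house_ref:
                if role == "accessible":
                    accessible = rel["tags"].get("name")
                elif role == "associated":
                    associated = rel["tags"].get("name")
    return accessible or associated
```

With the two relations above, house 888 resolves to Foostreet for routing even though it is part of Barstreet, while house 678 simply resolves to Foostreet.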
- I like this, however sometimes it might be needed to only associate a part of a road as access to a house. If a street goes around the house, you never know which part of the street allows access. Or would you neglect that? --Driver2 17:13, 31 January 2009 (UTC)
- That's the beauty of using relations: they apply to ways (unlike vague tags like addr:street, which apply to all ways having the same name); split the U-shaped way in two and only use the part that makes sense in the relation. Pov 17:36, 31 January 2009 (UTC)
- The street relation is thought to combine all ways belonging to a street. Including only a part of the street in the relation doesn't make sense. However, with the associatedStreet relation this would be possible, but my question was directed at the street-relation. --Driver2 17:57, 31 January 2009 (UTC)
- 'accessible' is meant to be a rough hint for navigation. If you want to be exact, draw an access service road or invent an "access-from-street-node" house node tag. --Phobie 12:47, 21 February 2009 (UTC)
Addresses format (as used in other datasets) in British Isles
It would be good to see full addresses added as tags on address nodes/buildings (and other addressable objects) in the UK, with elements relating to the BS7666 standard, although in this format they may not make up quite so readable a postal-style address.
That way OSM could in future interface with datasets that hold address elements in BS7666 format, or even become, if not linked to, a free rival to the National Land and Property Gazetteer.
To work out well, it would be good if the tools used to edit OSM parsed/validated the address tags against the BS7666 format. Pulling some fields such as locality and administrative area from wider polygons is something to consider, but this may cause more problems than it's worth.
More on format of NLPG:
I think it's worth looking at the specs of these other address sets. I realise we have to be careful to avoid any contamination: as with mapping data, it would have to be made clear that details entered into OSM tags would have to come from direct local knowledge, the public domain, or suitably licensed sources. The only data coming from the NLPG itself that could be used directly is the Unique Property Reference Number (UPRN). Hopefully in time these will become more widely known and used; at the moment it is only in particular professions that it is used outside local government.
There is also the Unique Street Reference Number (USRN), which is being used by all who dig up the road, and which could be added as a tag to OSM streets where known, e.g. from roadworks notices. Note there are significant differences from addresses in the format of the Royal Mail's Postcode Address File (PAF), which is widely used, including as the basis for Address Layer 2 (which also contains addresses in BS7666 format) from Ordnance Survey.
One difference: numbers with suffixes, e.g. 1A, or in ranges, e.g. 1-3, are treated like building names in PAF but as numbers in BS7666 format.
Discussion of OS AL2 vs NLPG
Ordnance Survey response:
For a list of 'post towns' see: It seems the Association of British Counties would wish to encourage a tag/field for the traditional county as distinct from the administrative area.
And when it comes to an international address standard, trying to relate fields of the various systems that have grown up in each postal system presents all sorts of challenges. See:
- Well, you are free to define a tag for this to be used in the UK, start using it and then document it in the wiki. As for international addresses; Yes they can be difficult but for a map of the world we cannot have 100+ address-tags-semantics and the current one works most of the time. As for editor-support... everyone is screaming for editor-support of his/her idea. Write a patch/plugin for at least JOSM and Potlatch and submit it. That's the way to go here. We are short on software-developers all over the place and they are not sitting around waiting for new ideas what to do with all that free time of theirs. --MarcusWolschon 07:41, 2 February 2009 (UTC)
Giving hints about the road-access - why not use the highway tag for this?
IMO the access to a house is simply a footway or sometimes a service road. IMO it's strange to use relations for this purpose. A special highway type (e.g. highway=access_footway) might be more suitable.
Advantages:
- It's easier for a mapper to draw ways than to create relations.
- Non-linear ways can be added (useful for non-convex houses).
- A renderer can draw the ways that access the houses if he wants to.
- Some mappers are already using highway=footway for this purpose now. This has the drawback that these ways cannot be omitted from rendering.
--Grungelborz 11:41, 1 March 2009 (UTC)
- That is actually an interesting idea. --MarcusWolschon 12:02, 1 March 2009 (UTC)
- I agree, interesting idea. I'd combine this with mapping entrances to buildings. In situations where buildings are directly adjacent to, for example, a pedestrian area, you'd probably not draw some footway, but only add an entrance node into the building polygon. In other cases, entrance + access footway leading to it seems to be the most natural solution. --Tordanik 13:56, 1 March 2009 (UTC)
- Sounds good to me, at least for places where individual houses are mapped. For a walkway to the house, I might suggest highway=footway, access=private. Renderers probably shouldn't render such a combination prominently anyway. If anything, a thin/faint dashed line. For a driveway to a garage/carport, there already seems to be highway=service, service=driveway. I'd think that last one should probably imply access=private. I can see this happening in rural parts of the US, where many houses are set back a half mile or more from the road. Vid the Kid 15:43, 22 April 2009 (UTC)
House-Numbers supported in Traveling Salesman
Great news: I was able to add support for searching house-numbers (nodes, house-polygons and numerical interpolation) to the Traveling Salesman navigator.
Now I expect you to add more of the missing house-numbers to the map so people can actually route house to house. ;)
- forum-post announcing the feature
- Test the address-search in Traveling Salesman via Webstart (don't forget to import a map first)
If you find anything wrong, you can open a browser window for a bug report simply via Help -> File Bug Report. --MarcusWolschon 18:36, 14 March 2009 (UTC)
Case: Selecting the street the series of house-numbers belongs to
Proposed features/House numbers/Karlsruhe Schema#Case: Selecting the street the series of house-numbers belongs to
I find it odd that the code there puts the
<tag k="addr:street" v="AStreet" /> one one of the nodes, and not on the way with
<tag k="addr:interpolation" v="even" />. Seems to me, it makes a lot more sense to associate the entire way with the street, not just one of its nodes. After all, in the relation version, it's the way that's in the relation, not one of its nodes. Did someone just typo the code? Does anyone even care about the non-relation version? (Having a relation or two for every street seems like overkill to me. Potlatch doesn't make it easy to work with relations in an area where there are already several relations, and I've heard some complaints about making a lot of relations in JOSM, too. I guess eventually relations will be everywhere, but the editors will have to be able to manipulate them in a more streamlined manner.) Vid the Kid 15:20, 22 April 2009 (UTC)
- Yes, it looked like a typo. I changed the text. IMHO one type=street relation for every street and its related objects is a good thing. In JOSM there is no problem with using many relations, and Potlatch should not be used for anything complex. --Phobie 23:01, 23 July 2009 (UTC)
Hamlets/Localities without street names
In rural areas in Germany, residential streets in hamlets or even smaller settlements often don't have a name. Instead, houses are numbered throughout the whole village. I know of regions in other countries where this is the usual way of numbering houses. Using this proposal, I would have to name all the residential streets after the village (because that's the "street" part of the addresses there) and add them all to the associatedStreet relation. As the streets aren't named, this tagging would be "wrong". Thus I am missing a way of adding a place=* node to an associatedStreet relation in this proposal, whether it has the role "street" as well or something different. --Candid Dauth 17:23, 17 May 2009 (UTC)
- Another example is the Highlands of Scotland, where most crofting townships have numbered houses in villages, without named streets.--tms13 16:41, 22 June 2009 (UTC)
- Actually you would just use no street-name at all (the nearest street=way with "highway"-tag is used and in rural areas there are not many streets) or an associatedStreet-relation (does not need a street-name). So...is there a problem? --MarcusWolschon 10:13, 23 June 2009 (UTC)
Loads of numbers in a small area
Sometimes, there are loads of numbers in a very small area. In this example, the apartments in a two-story house are individually numbered from 22 to 56 (I have used the interpolation tag, but typed it wrong; it's in the process of rerendering). It is impossible to tag every number individually if you don't want the map to be extremely cluttered with numbers, and apartments on top of each other can't possibly be shown correctly. Also, it would require going up to every entrance to check the number, and that's just a little too creepy to me... This is where interpolation has to be used, and I know what's written on the page is a utopia, but it's really impossible to accomplish even if you wanted to. /Grillo 21:22, 19 May 2009 (UTC)
- Well, then I can only suggest to use interpolation or to propose a better schema or an extension to the current schema for discussion. --MarcusWolschon 05:55, 20 May 2009 (UTC)
Node with house number in the way
Why not simply include the house number in the way itself? That way routing algorithms could figure out the number more easily. Refer to Diogownunes 21:55, 24 July 2009 (UTC)
- Which side of the road are they? How far from the road? What about different numbers on opposite sides of the road at (otherwise) exactly the same place? Alv 22:08, 24 July 2009 (UTC)
- Good points. I don't really care which side of the road they are on - as long as you can actually get routed close enough to the number you want - and I see that in most cases the house or building or entrance is very close to the street. Leaving the house numbers on the side of the road makes for beautiful maps and all, but would be next to impossible to implement in routing software in an easy way.--Diogownunes 02:06, 22 August 2009 (UTC)
- Makes little sense. The point of house-numbers is to be routed to the point where the house is. If I can only be routed to the street (which may span a dozen kilometers) I would not need house-numbers in the first place. Of course you are not forbidden to make the way itself an interpolation-way for very trivial cases, or if you have no more details, e.g. in an import. --MarcusWolschon 06:51, 27 July 2009 (UTC)
- Not what I meant at all. You would have the house numbers every so often, so that routing software could "suggest" a location close enough to where you're trying to get to. Check the link to see how the map is rendered.--Diogownunes 02:06, 22 August 2009 (UTC)
- I actually see Diogo's point of view. I have used routing software that grouped addresses in bunches of, let's say, 25 numbers, giving you sections of the road where you are close to the house, but not necessarily in front of it. This gives the same accuracy as addr:interpolation, and works perfectly fine if the numbers are parallel (i.e. 110 is between 105 and 115), but it would not work very well if the numbering is offset between the left and right hand sides of the road. It is not necessary to put in every single house when numbering (though I have almost done that, as most houses I have marked have unique names); setting address nodes at the side of the road and connecting them with addr:interpolation is a very good way of simplifying the tagging. Of course, if an import has no information on whether the numbers are left or right, then this is a temporary solution, but the area should then be resurveyed to locate the numbers left/right at an opportunity (maybe an idea for a Walking Papers mapping party?) --Skippern 03:40, 22 August 2009 (UTC)
- Also see: Proposed_features/House_numbers/Karlsruhe_Schema/Additional_tags. --Driver2 12:44, 15 December 2009 (UTC)
Why is there addr:housename when name is perfectly fine?
Why do we have addr:housename=* when name=* would be perfectly fine for this? --seav 06:01, 29 August 2009 (UTC)
- Because in some countries housename is part of the address. --Skippern 18:55, 10 September 2009 (UTC)
- And does that imply that because there's no "addr:" prefix then name=* can't become part of the address data? Is there an example where addr:housename=* and name=* have different values? --seav 13:17, 13 September 2009 (UTC)
- In theory there can be a lot of examples of that: a named house, let's call it "House", can have a restaurant in it, called "Eat Out". In Brazil almost all apartment buildings and commercial buildings have given names, and in many areas hold space for shops, restaurants, and much more on street level. This can be common in every country with named buildings. --Skippern 14:32, 13 September 2009 (UTC)
- But I'll argue that it's wrong to tag the whole building with name=Eat Out (which gives addr:housename=House) because the whole building is not the restaurant. My preferred way for that is to add an amenity=restaurant node inside the building way. This indicates that there is a restaurant inside the building even if the restaurant is the only public amenity inside the building. --seav 05:32, 14 September 2009 (UTC)
Recent edits to the page
The recent edits are an excellent example of why proposals shouldn't be modified after they have been "approved" (in this case: stabilized). To sum up the events:
- originally, the example for interpolation ways did not suggest to add addr:street to the interpolation way
- Gernot then added that tag to the example, hidden within a massive amount of edits
- one month later, people on the mailing list were pushing addr:street on addr:interpolation ways rather than house number nodes/areas, referring to the "original proposal"
- some people (I was among them) checked the history and noticed Gernot's modification to the example
- Joto then modified addr:street tags in the examples, which removed the addr:street from the addr:interpolation way, but also modified other examples; the edit was announced on the ML, but had no edit comment
- MarcusWolschon reverted Joto's edit, without an edit comment and without any discussion.
I suggest reverting the proposal page to this version, which was when the status was set to "approved". I'm aware that editing had been going on before that point, but massive modifications started only after that version. If you prefer to add street names to interpolation ways or whatever, please create another proposal or change current documentation after appropriate discussion. In my opinion, this is the only way to keep things transparent - directly editing pages like this just causes chaos. --Tordanik 13:45, 10 September 2009 (UTC)
- I asked Joto about it on his talk-page but got no answer yet. Btw, what mailing-list are you talking about? We have dozens of them. --MarcusWolschon 08:03, 11 September 2009 (UTC)
- The "addr:street" tag on the interpolation way was certainly not something envisaged by the Karlsruhe Schema team when we invented the whole thing. It does not make sense to me, and I have again removed it from the examples. I have indeed received complaints from people why the OSM inspector does not recognize that tag, and have heard from people who had actually tagged lots of addresses according to the original Karlsruhe Schema and then were told to move the address onto the interpolation ways because "it is on the Wiki". We want interpolation ways as a temporary construct to make mappers' lives easier, but once all houses are mapped individually, we want them to go away, and tagging vital information onto the ways does not make sense. OSM is a free and open project and anyone can tag what they want but if someone wants to invent their own address tagging schema then please make your own Wiki page for it and choose a different name. --Frederik Ramm 20:20, 13 September 2009 (UTC)
- Indeed it makes sense to do it this way. But please next time have some kind of note on the talk-page or at least a more meaningful comment when changing pages describing established semantics. I'll use the talk-page next time instead of a message to the editor even if that means that he/she does not get the message. --MarcusWolschon 07:44, 14 September 2009 (UTC)
member roles of relation roadAccess
To be compatible with Relation:restriction we should change the member roles of the relation roadAccess.
- accessvia -> via
- accessto -> to
- accessfrom -> from
--Vsandre 18:05, 25 September 2009 (UTC)
- One question: Why do we need to be compatible with a relation that will never be used in conjunction with this one as it does the complete opposite? --MarcusWolschon 06:03, 26 September 2009 (UTC)
- Because an OSM user will not learn a new role name for each relation if the meaning is the same. At the moment (OSMdoc 2009-08-12) it stands 10978 to 33 for the restriction relation.--Vsandre 09:00, 26 September 2009 (UTC)
- Good point. Do you want to formulate and announce a proposal? --MarcusWolschon 07:14, 27 September 2009 (UTC)
- Do we need a 'real' proposal? Because it is not widespread at the moment, I will only announce it on the mailing lists.--Vsandre 08:29, 27 September 2009 (UTC)
- Yes, we need one, because several tools would need to be reprogrammed to understand both your proposed and the current format. Including JOSM and at least 2 navigation programs. --MarcusWolschon 10:53, 28 September 2009 (UTC)
Skipping house numbers in a way
In my city it is the rule more often than the exception to number houses skipping one, or possibly two, numbers in the sequence, so that the even side of the street is numbered xx00, xx04, xx08, etc. and the odd side numbered xx01, xx05, xx09, etc. I see no way, under the current scheme, to tag this situation other than as individual nodes. Sorry, way too much work. Therefore, I would like to propose the addition of a key addr:skip=*, or addr:interpolation:skip=*, to define how many numbers should be skipped in the even, odd, or all sequence. I initially thought to use the tag increment, instead of skip, but that was too ambiguous. If you increment the even side by 4, do you increment to 00, 04, 08, ... or 00, 08, 12, ...? The former is more intuitive, but it is also prone to strange errors. For example, what happens if the even side increment is 3? This would be interpreted as 00, 03, 06, 09 ..., not all even. Thus "skip" appears to be less error prone. Comments? --Turbodog 06:04, 7 October 2009 (UTC)
- Very strange city. In how many other places does this happen? How big is "in my city"? --MarcusWolschon 08:10, 7 October 2009 (UTC)
- This usually happens when the house numbers were assigned to the land while smaller houses were planned, and now the houses built are bigger and each takes several pieces of land (and thus several house numbers). It is pretty seldom that this is done in such a regular manner that you can tell a skip step. I recommend using nodes, just like here: [7]
--Lulu-Ann 10:01, 7 October 2009 (UTC)
- In the suburbs, i.e., out of the dense downtown area, this is the rule rather than an exception, even though the houses are new. I think the city must have established something like a 40 foot +/- interval for house numbers, and the developers use more like 60-90 foot lot width (newer homes have narrower lots). But, that's just a guess. Using nodes will probably more than triple the effort (six nodes vs a way) in most neighborhoods. I'll code a couple of blocks, and leave a link. I started to add some house numbers in my local neighborhood. I gave it up when I ran into the problem, and decided to post here before going further. I'll have to take another look at your link, Lulu-Ann, but when I took a quick look I saw a lot of ways. All of those ways would have to be broken down into nodes in my area. Surely Fort Worth isn't the only city with this problem. I'll have to check with friends and relatives in other Texas city suburbs. --Turbodog 15:02, 7 October 2009 (UTC)
- Well, that wasn't TOO painful. I've mapped housenumbers to one street using Potlatch, to demonstrate the problem. See [8]. The north side of the street is mapped properly, using the current rules. The south side is mapped using my proposed rules. Of course, currently, a renderer would map twice as many addresses as are actually present on the south side of the street, since it would ignore the "addr:interpolation:skip=1" attribute.
- It took eight entries vs. 21. There are 300 houses that this impacts in this one small neighborhood, and thousands of houses throughout the city that are similarly numbered. The difference in labor to number them using current limitations is very significant, actually to the point of not really being feasible. (I went back and deleted a bunch of stuff in and after the immediate previous post, because I was able to put the demonstration up faster than I thought.) --Turbodog 20:57, 7 October 2009 (UTC)
Just wanted to chime in on this subject and affirm that the skipping phenomenon Turbodog mentions is not at all uncommon in the U.S. In my city (Chicago) house numbers are assigned based on the street grid and have nothing at all to do with the presence or absence of any actual buildings. One block will typically span 100 numbers, whether there are 100 buildings, one, or none. I haven't done any serious address tagging yet, but if I did I would want a scheme which allowed me to say, for example, "The numbers on this side of the block are even, beginning with 1200 on the east and ending with 1298 on the west." Or maybe even create an interpolation way that spans many blocks, with a node wherever it intersects with a street that identifies the number of the street. Something like that would allow reasonably accurate address information (it would get you to the right area of the block, at least) to be added very quickly. --BBMiller 16:32, 15 July 2011 (BST)
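The arithmetic behind the proposed skip is simple enough to sketch in code. Below is a minimal, hypothetical Python illustration; the function name and signature are invented for this example, and addr:interpolation:skip is the tag proposed in the discussion above:

```python
def interpolate(start, end, skip=0):
    """List the house numbers that an interpolation way with the
    proposed addr:interpolation:skip tag would imply.
    Plain odd/even interpolation steps by 2; each skipped number
    adds another 2 to the step."""
    step = 2 * (skip + 1)
    return list(range(start, end + 1, step))

# Ordinary even interpolation (skip=0): every even number.
print(interpolate(1200, 1210))           # [1200, 1202, 1204, 1206, 1208, 1210]
# With skip=1: xx00, xx04, xx08, ... as described above.
print(interpolate(1200, 1212, skip=1))   # [1200, 1204, 1208, 1212]
```

A renderer or router supporting the tag would expand each interpolation way this way before placing or matching numbers.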
associatedStreet relation: several postcodes
What to do with a street with changing postcodes? We would need a child relation associatedStreetPart! --Skyper 12:30, 5 November 2009 (UTC)
- Do you have different postcodes on both sides of the same way or can you simply split the way? --MarcusWolschon 13:46, 16 November 2009 (UTC)
- Isn't the postcode set by the boundaries of the village? So an associatedStreet relation should say nothing about the postcode. If you can split a way on a boundary, I would do it. Certainly if the numbers restart.--Sanderd17 22:15, 2 December 2010 (UTC)
Errors in examples on feature page?
It appears to me that (as of my signature date) the examples for relations are incorrect:
- The example under "Associating a building polygon with a street" is identical to "Associating a node with a street" without the node definition.
- The example for associating a series of housenumbers is actually the example for associating a building polygon with a street, and there is no example for associating a series of housenumbers with a street. --turbodog 16:25, 14 November 2009 (UTC)
Address for buildings that are relations
Today I came across a building where I wanted to add a name and address, and which is a relation (i.e. an outer way and several inner ways). Should the address be put on the outer way or on the relation? According to the current documentation, addr tags can only apply to points and ways, but the relation would probably be the more consistent choice. Sample Building: --Hanno 17:56, 30 November 2009 (UTC)
number *or* name?
The page says addr:housenumber or addr:housename. But it's okay for houses to have both, right? This is fairly common. Ojw 17:08, 21 March 2010 (UTC)
p.s. housename doesn't seem to be rendering on mapnik. I only managed it by giving a name= tag to the building, which doesn't work if semidetached houses have a name each.
- Yes, you can have both at the same time if the house does have a name. --MarcusWolschon 06:46, 22 March 2010 (UTC)
An experience
I've done some experiments with the Karlsruhe schema and want to share them, to get comments.
One is to create an associatedStreet relation which covers several highways (they have the same name, but different oneway values) and thus have two street members:
The second one is to include addr:interpolation ways as house members of the relation (same relation).
The third one is to attach a multipolygon relation for a very wide pedestrian highway as the street member:
- Relation hierarchies (relations within relations) are hard to visualize, hard to edit and should therefore be avoided. Even plain associated street relations make editing harder for beginners with no real benefit. --Tordanik 16:21, 27 September 2010 (BST)
Housenumber at house door
It is also possible to map the side of the main entrance by using an extra node for the house number of the building. Ah, you don't understand what I mean? Example: [9] -- Derstefan 18:00, 9 April 2011 (BST)
- That's generally used as the solution for mapping a building that has entrances with different house numbers. It's also possible to use that mapping style when there is only one house number for the entire building, but I don't really like it in that case, as it would lead to duplication if there are multiple entrances or POIs that would each need a copy of the address tags. By the way: Regardless of whether it carries address tags, an entrance should always be tagged as building=entrance. --Tordanik 18:14, 9 April 2011 (BST)
associatedStreet
Streets can be split up in many ways. It's not clear what parts of a street I have to add to the relation: all ways of the street, or only the one way that is the closest? --Flaimo 12:22, 25 April 2011 (BST)
- I've wondered about this one myself. The proposal says one way per relation, and JOSM gives an error if there's more than one way with the street role. I think Relations/Proposed/Street is meant to replace the associatedStreet relation, which will support multiple street segments and associating other objects than just houses. -- Joshdoe 14:21, 25 April 2011 (BST)
Archiving of this page
Hello, as documentation should not live on the proposed features subpages, I propose to move the content of this page to the respective key or tag pages. See as an example. Any objections? --Andi 00:55, 9 March 2013 (UTC)
- I found another solution: I moved the page into the general namespace with the name Karlsruhe Schema and archived this proposal page. --Andi 18:02, 6 November 2013 (UTC)
Move to Karlsruhe Schema
What is the idea behind the move to a separate page, Karlsruhe Schema? There is already current documentation outside this proposal, e.g. at Key:addr, and this proposal is therefore no longer entirely current. If anything, its visibility should be reduced, not promoted with a regular page that makes it look like the current state of the art? --Tordanik 08:31, 7 November 2013 (UTC)
- I agree and suggest to move it back to avoid broken links. --rayquaza (talk) 23:36, 2 January 2014 (UTC)
- Can you show me where there are broken links? There should be links or redirects everywhere. --Andi 15:19, 5 February 2014 (UTC)
- On the idea of the move: proposals are, in my opinion, not for documentation and shouldn't be translated. I thought we discussed this issue somewhere but couldn't find the discussion anymore. Basically a new English page about house numbers (not addresses) should be created. When that page is stable, the translations can be created on the basis of this new house number page. --Andi 15:19, 5 February 2014 (UTC)
Delete proposal
This page should be deleted, as a preparation to undo the move of its contents to Karlsruhe Schema. --Tordanik 21:16, 24 June 2014 (UTC)
- I oppose deletion. Archiving should be sufficient. -- malenki 19:20, 25 June 2014 (UTC)
- That's what I want to do. When you look at this page's history, you can see that this is not actually an archived version of the original, despite the template suggesting otherwise. I want to bring back the original with the full history to this page title (where it had been for years, including during the vote), and then archive it. But I can only do that if the fake archive page that currently occupies this page title is deleted. --Tordanik 20:47, 25 June 2014 (UTC)
- Shouldn't this be moved to Approved features then? --Skippern (talk) 21:55, 25 June 2014 (UTC)
- Approved features is now only a category, so there would be no change in the page title (unlike what was still commonplace a few years ago). But the restored page will of course be inserted into that category, I'm going to follow the guidelines for that. --Tordanik 08:27, 26 June 2014 (UTC)
- In this case I agree with the deletion. --Jgpacker (talk) 11:18, 31 October 2014 (UTC)
- I also agree. But be careful not to delete the talk page. --rayquaza (talk) 08:38, 1 November 2014 (UTC)
See also: IRC log
<josema> trackbot, start telcon
<trackbot> Date: 31 March 2010
<cygri> hi folks. is the call *now*? i can't dial in, it tells me the conference is restricted
<sandro> yeah.
<Chris> saem
<sandro> it's broken.
<Chris> *same
<John> Me too...
<Chris> bugger...
<josema> I don't even get the error message
<Chris> I can see the e-Gov IG note now - "Best Practices in using Technology.... Don't!" lol
<josema> zakim sees George but he's not on IRC, so we cannot ask him :(
<sandro> I wish.
<sandro> I'm trying to reach someone.
<josema> ok, thx, sandro
<Chris> I think I'm in...
<Chris> accepted the code
is the meeting over?
<sandro> No, the meeting is supposed to start now.
<josema> Chris, did you hear a beep?
<Chris> yup
<josema> hmmm....
<Chris> dead air atm - cause I think I'm alone - no - someone else is on - very faint
<sandro> Chris and I are on....
<sandro> seems to be working now.
<cygri> yup i'm in
<josema> scribe: edsu
josema: next meeting April 14th, need a scribe
<josema>
<sandro> I heard some voices in the distance there.
<josema>
josema: george can you report a summary on your project?
george: there's been some
discussion about conventions of use for some of the standards,
such as dcat, void, etc
... and trying to evangelize demonstrations that people already have
josema: is there any expected deliverable?
george: we haven't converged on that at this point
josema: so what you have decided to do is to act as an aggregator of different initiatives?
george: that's kind of the
default yes
... a reflection of sandro's comment about energy and availability of people involved
josema: is there a sense of doing technical work, as well as a case that could be made for decision makers?
george: yes
josema: there is a need to make a value proposition to policy makers
george: i think there is a good opportunity for that
josema: best practices for using web technologies, chris?
Chris: we tried focus groups for
publishing pdf files, and using mobile technology ; got some
response about pdf
... we'll maybe get something written about redaction
... also some conversations with adobe
... trying to identify where the interest is
josema: there is some concern about the participation in the groups, which is when we decided to let other organizers go on their own for a bit
<OwenAmbur> Lost my Skype connection
josema: hopefully focusing on very specific topics will help
Chris: yeah, we were thinking of
putting out short how-tos
... and go from there
josema: we should have clearer ideas about what we should produce, given the charter expiring in October
... don't be afraid of publishing something
... a working draft can be almost anything ; we shouldn't be afraid of putting drafts out there ; and also people should feel free to use the wiki
... people seem afraid sometimes
... daniel isn't on the call today for best practices
Chris: Brian updated the social media project homepage
josema: sandro could you tell us about fose?
sandro: i was on a panel
replacing john, with rachel and daniel and kevin
... i'll paste the link for the slides, and the video into irc
... there wasn't a lot of time for questions, and it seemed like some people were excited about it
<sandro>
sandro: might get some people in the IG as a result
josema: any other news from people on the call?
Chris: i've got a paper accepted to talk at the Metadata Conference in Canberra
<OwenAmbur> The Industry Advisory Council (IAC) will be conducting a study of best practices with respect to the management of records created in social networking services
John: i've got a paper w/ Jeni Tennison to talk about design patterns, provenance, data sets for Linked Data on the Web
<OwenAmbur> Pat Franks will be compiling a similar report under the auspices of IBM's Center for the Business of Government
josema: kevin wanted me to make folks in the US aware of open government meetings about agency plans
<OwenAmbur> It would be nice if agencies posted their open gov plans in open StratML format
<OwenAmbur> If they fail to do so, I will probably convert some, if not all of them to StratML format myself
<Chris> @Owen - keep me posted on the IAC / Pats stuff - v interested
<josema> conference site --
josema: the National Conference on EGovernment is next week, i will try to send information to the list
<sandro> scovo
... there will be an early initial paper on this at ldow workshop
<josema> that will be very useful for us, too
John: this is work that I have commissioned ; a good representation of sdmx in rdf is a key issue for Linked Data in the UK
cygri: i'm distributing my time between the dcat and sdmxrdf
josema: lets say you create that rdf vocab for sdmx, how will the relationship between the xml vocabulary and the rdf vocabulary work
cygri: sdmx started as an edi
standard ; it includes an abstract model (uml), and there is an
xml syntax
...
John: jeni gave us an xslt
demonstration of turning an sdmx xml document into rdf ... we
figure the transformation of sdmx xml is going to be pretty
straightforward
... there's also lgdx that is being used, and they have big systems that inhibit switching to something new ; so we want to be able to translate that as well
... once sdmxrdf becomes something we can use we can surface quite a bit of data quite quickly
<sandro> (nice timing)
John: we've started to use time interval uris too, which can be used to identify statistical data
josema: sounds like this deserves its own agenda item, some time in the future
<cygri> slides:
josema: i would love to see an agenda item to talk only about this some time in the future
cygri: that's a link for some slides for the work so far
josema: ok the floor is yours
cygri: please follow along in the
slides
... most of the work has been by our egov unit and the linked data research center at deri
<josema> scribe: Ed Summers
cygri: it's about enabling the interoperability of government data catalogs we've seen popping up
<josema> scribeNick: edsu
going to talk about why it's important ; what's out there now ; what the dcat vocab is ; some experiments we've been doing and where we should go next
cygri: there are more than 30 government data catalogues online
<josema> compilation of the ones we found so far and ...
cygri: these efforts are done by both public and private parties
<josema> ... or in RDF as default SPARQL query at ;)
cygri: we're interested in this
because the information is on the web and we'd like to query
across the catalogs
... so information about san francisco can be found in local, state and federal catalogs
... in the EU we can see how we might want an eu level catalog ... data.gov.eu
... it would be nice to see new user interfaces for the data found in data catalogs ; which combine the metadata with the data itself ; also to do rating and social annotation of the data sets
... most of the data catalogs do export their data in a structured form
... however each has its own specific format, the documentation for it is lacking
... we did an in depth survey of 7 catalogs: some national some local
... we looked at the metadata, looking at the datasets themselves was out of scope
... we surveyed the types of metadata available
... quite a lot of metadata fields are shared, which is good news for interoperability
... we also looked at metadata fields, how consistently it was used, for example date fields
... we also looked at direct download links
... sometimes they go directly to the data, but quite often you go to a splash page, with a click through license, and find the download link on the page
... this is bad news for automatic processing of the data
<Chris> End
<josema> thx
cygri: the dcat vocab is at ; and an overview is available at
<josema> all, queue yourself if you have questions and we'll go through them at the end, thx
cygri: we tried to keep in mind Hepp's Law: to be careful when designing a vocabulary not to make distinctions that aren't present in the data to be integrated
<josema> [that was very good advice]
cygri: we don't want to require
data cleansing before dcat can be used
... rdf allows extensibility (classes and properties) to express additional information, so we focused on stuff that's in all the catalogs
... we tried to reuse from dublincore, skos and foaf, and minimized what we created ourselves
... we introduced dcat:Catalog, dcat:Dataset, dcat:CatalogRecord, dcat:Distribution
... Distribution is used to indicate the format that the dataset is available in
... for example xml and json would be 2 Distributions
... categorization of datasets is common, and we used skos for this
... and the government agency that published the data was modeled with foaf:Organization
... we loaded some of the datasets into a standard relational database ; and then mapped to rdf with d2r
... which has a sparql interface
... and we linked some things up to geonames and dbpedia
... for example we linked the agencies to dbpedia
... we did some example sparql queries for listing datasets that were published by agencies with budgets > a certain amount
... one of the benefits of using dcat is that it could enable distributed publishing, and federation later
... could also potentially allow datasets to be downloaded in an automated fashion
... also applications that worked on one data catalog could be repurposed
... we're looking for feedback on the vocabulary, and to get more eyes on the vocab
... we're writing up a guide to using dcat
... at deri we'd like to use dcat w/ voiD for describing rdf datasets
... also w/ sdmx+rdf ; metadata about the dataset is important
... what really has to happen is that it needs to happen not only at DERI but elsewhere on the web, by the catalog publishers
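As a non-normative illustration of the model described above, a catalog record can be written down as a handful of triples. The sketch below uses plain string formatting rather than an RDF library; all `example.gov` URIs are hypothetical, and the dcat namespace URI shown is an assumption (the class and property names — dcat:Catalog, dcat:Dataset, dcat:Distribution, dcat:dataset, dcat:distribution, dct:title — are the ones discussed in the minutes):

```python
# Hypothetical namespaces and resource URIs for illustration only.
DCAT = 'http://www.w3.org/ns/dcat#'
DCT = 'http://purl.org/dc/terms/'
RDF_TYPE = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type'

def triple(s, p, o):
    """Serialize one triple as an N-Triples line.
    Objects that look like URIs are bracketed; others become literals."""
    obj = '<%s>' % o if o.startswith('http') else '"%s"' % o
    return '<%s> <%s> %s .' % (s, p, obj)

catalog = 'http://example.gov/catalog'
dataset = 'http://example.gov/dataset/crime-stats'

lines = [
    triple(catalog, RDF_TYPE, DCAT + 'Catalog'),
    triple(catalog, DCAT + 'dataset', dataset),
    triple(dataset, RDF_TYPE, DCAT + 'Dataset'),
    triple(dataset, DCT + 'title', 'Crime statistics 2009'),
    triple(dataset, DCAT + 'distribution',
           'http://example.gov/dataset/crime-stats.csv'),
]
print('\n'.join(lines))
```

In practice one would use a proper RDF toolkit, but the point of the model is visible even in this form: catalogs point at datasets, and datasets point at one distribution per available format.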
<John> This is very important work
cygri: one question is, as more
people are involved how do we organize our work in a
distributed way?
... to what extent would the egov interest group be a good place to do this?
<John> We need something like this for data.gov.uk RDF work
josema: thanks very much, we only have a few minutes for asking questions
Chris: have you done anything with the datacite consortium, and when you were looking at australia did you look at the australian data service?
cygri: no we just looked at data.gov au, would be interested in that
<sandro> PROPOSED: to extend the meeting, informally, for 15 minutes
John: i have someone looking at your work right now, would it be possible to version the document, right now it's still in draft
<josema> let's say 10, sandro ;)
cygri: we have to provide some documentation about the use of other vocabularies, we have good coverage for the new vocabulary we have introduced
John: do you need any help?
cygri: yes, always :-) perhaps we
can discuss offline a bit
... it has to be driven by working on actual data
<josema> edsu: +
<John> +1
cygri: for v0.1 this is something we have to do here, it's hard to distribute
josema: we're going to extend the meeting by 10-15 minutes
<John> +1
sandro: this is very important work, i'm hoping that you can get users in the IG to help, but also institutional support from the w3c
<John> We should try and support somehow...
<josema> [I want to *thank* edsu for fantastic scribing today, wonderful, many many thanks]
<John> +1
cygri: i'm not sure if we should use the egov mailing list, or create another one (infrastructure) ; we need an issue tracker for this
<John> This is on the nail, practical, just what we need in UK
<josema> it's something we also need for our work at CTIC, happy to help from here, preferred if within W3C
cygri: an IG can publish notes at the w3c ; for something like dcat, an IG Note would give it a lot more acceptance
<John> And we can try and implement over next few weeks
sandro: another thing is the namespace for the vocab, perhaps a w3c namespace would lend it more credibility
John: it would for us
sandro: other applications i thought of: not every local gov't would have to make their own catalog, without requiring their own IT dept
<John> Yup!
<josema> +1
sandro: also, i was thinking it could be possible to publish mappings w/ dcat
<John> This is much much needed just now
josema: we really, really need
this at my place of work
... sandro, how should we proceed?
<John> I'd love to see a group note on gov data catalogues
sandro: i'm happy to listen to what richard wants to do
cygri: i haven't followed too
closely how the IG is being run with sub-projects
... with voiD we had good experience getting together a focused group, with a weekly call, with discussion list, issue tracker, and subversion repository
... i would try to replicate what we did w/ voiD
sandro: people seemed to want to
use the egov ig discussion list for sub-projects
... maybe we could try to start on there with a [dcat] tag until someone complains
cygri: i think that could work
sandro: we could schedule a telecon, would richard be ok for chairing them?
cygri: yes, great
<josema>
josema: that's the link for the current projects
sandro: a lot of those aren't meeting so i wouldn't worry about it too much
<John> fabulous Richard!
+1
josema: next meeting April 14th
... we are adjourned
<josema> [ADJOURNED]
<josema> yup :)
Present: George, Chris, Sandro, edsu, cygri, josema, Owen, John, Vassilios
Regrets: kevin, rachel, cory
PythonEffectTutorial
Effect extensions in Inkscape are simple programs or scripts that read an SVG file from standard input, transform it somehow and print the result to standard output. Usually Inkscape sits on both ends, providing the file and some parameters as input first and finally reading the output, which is then used for further work.
We will write a simple effect extension script in Python that will create a new "Hello World!" or "Hello <value of --what option>!" string in the center of the document, inside a new layer.
Effect Extension Script
First of all, create a file hello_world.py and make it executable by the Python interpreter with the well-known shebang directive:
#!/usr/bin/env python
If you're going to put the file somewhere else than into Inkscape's installation directory, we need to add a path so that python can find the necessary modules:
import sys
sys.path.append('/usr/share/inkscape/extensions')  # or another path, as necessary
Import the inkex.py file with the Effect base class that will do most of the work for us and the simplestyle.py module with support functions for working with CSS styles. We will use just the formatStyle function from this module:
import inkex
from simplestyle import *
Declare a HelloWorldEffect class that inherits from Effect and write a constructor where the base class is initialized and script options for the option parser are defined:
class HelloWorldEffect(inkex.Effect):
    def __init__(self):
        inkex.Effect.__init__(self)
        self.OptionParser.add_option('-w', '--what', action='store',
                                     type='string', dest='what', default='World',
                                     help='What would you like to greet?')
The complete documentation for the OptionParser class can be found at docs.python.org. Here we just use the add_option method, which takes a short option name as its first argument, a long option name as its second, and then a few other arguments with this meaning:
- action - The action to perform with the option value. In this case we use the store action, which stores the option value in the self.options.<destination> attribute.
- type - Type of option value. We use string here.
- dest - Destination of the option action specified by the action argument. With the value what we say that we want to store the option value in the self.options.what attribute.
- default - Default value for this option if it is not specified.
- help - A help string that is displayed if the script is given no arguments or if an option or argument has wrong syntax.
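The option machinery described above is just Python's standard optparse module, which inkex's Effect wraps. A minimal standalone sketch (simulating the arguments Inkscape would pass) shows how the value ends up on the attribute named by dest:

```python
from optparse import OptionParser

# Recreate the option exactly as defined in the constructor above.
parser = OptionParser()
parser.add_option('-w', '--what', action='store', type='string',
                  dest='what', default='World',
                  help='What would you like to greet?')

# Inkscape passes the GUI form value on the command line; we simulate it here.
opts, args = parser.parse_args(['--what', 'Inkscape'])
print(opts.what)  # 'Inkscape' -- stored on the attribute named by `dest`

# With no arguments, the default kicks in.
opts_default, _ = parser.parse_args([])
print(opts_default.what)  # 'World'
```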
Inkscape will create a GUI form with widgets for all options specified in the .inx file for this extension (which we will write later) and prefill them with the default values.
We need to override only one Effect class method to provide the desired effect functionality:
def effect(self):
    what = self.options.what
As you can imagine we just stored the --what option value to the what variable.
Now we will finally start to do something. We will have to work with the XML representation of the SVG document that we can access via Effect's self.document attribute. It is of lxml's _ElementTree class type. Complete documentation for the lxml package can be found at lxml.de.
First we get the SVG document's svg element and its dimensions:

svg = self.document.getroot()
width = inkex.unittouu(svg.get('width'))
height = inkex.unittouu(svg.attrib['height'])
The xpath function returns a list of all matching elements so we just use the first one of them.
We now create an SVG group element ('g') and "mark" it as a layer using Inkscape's SVG extensions:
layer = inkex.etree.SubElement(svg, 'g')
layer.set(inkex.addNS('label', 'inkscape'), 'Hello %s Layer' % (what))
layer.set(inkex.addNS('groupmode', 'inkscape'), 'layer')
This creates inkscape:label and inkscape:groupmode attributes, which will only be read by Inkscape or compatible applications. To all other viewers, this new element looks just like a plain SVG group.
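For illustration, the group element created above would serialize roughly like this (the label assumes the default --what value; this is a sketch, not Inkscape's exact output):

```xml
<g inkscape:label="Hello World Layer"
   inkscape:groupmode="layer">
  <!-- the text element created in the next step gets appended here -->
</g>
```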
Now we create an SVG text element with a text value containing the "Hello World" string:
text = inkex.etree.Element(inkex.addNS('text', 'svg'))
text.text = 'Hello %s!' % (what)
Set the position of the text to the center of the SVG document:
text.set('x', str(width / 2))
text.set('y', str(height / 2))
If we want to center the text on its position we need to define the CSS style of the SVG text element. Actually we use the text-anchor SVG extension to CSS styles to do that work:
style = {'text-align': 'center', 'text-anchor': 'middle'}
text.set('style', formatStyle(style))
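formatStyle simply serializes the dictionary into an inline CSS string. A rough standalone equivalent (an illustration, not simplestyle's actual source) looks like this:

```python
def format_style(style):
    # Join a CSS property dict into a "prop:value;prop:value" inline string.
    return ';'.join('%s:%s' % (key, value) for key, value in style.items())

style = {'text-align': 'center', 'text-anchor': 'middle'}
print(format_style(style))  # text-align:center;text-anchor:middle
```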
Finally we connect all created elements together and put them into the SVG document:
layer.append(text)
We just defined a class that inherits from the Effect base class, so we have to create an instance of it and execute it in the main control flow:
effect = HelloWorldEffect()
effect.affect()
Extension Description File
To include the script in Inkscape's menu, create a hello_world.inx file describing how the script is invoked.
<?xml version="1.0" encoding="UTF-8"?>
<inkscape-extension>
  <_name>Hello World!</_name>
  <id>org.ekips.filter.hello_world</id>
  <dependency type="executable" location="extensions">hello_world.py</dependency>
  <dependency type="executable" location="extensions">inkex.py</dependency>
  <param name="what" type="string" _gui-text="What would you like to greet?">World</param>
  <effect>
    <object-type>all</object-type>
    <effects-menu>
      <submenu _name="Examples"/>
    </effects-menu>
  </effect>
  <script>
    <command reldir="extensions" interpreter="python">hello_world.py</command>
  </script>
</inkscape-extension>
Create a <param> element for every option of the script and a <dependency> element for every included module which is not from the Python standard library. Inkscape will search for these modules in the directory with the script. The <effect> element and its descendants define the name of the menu item that invokes our new "Hello World!" extension.
If the .inx file isn't well-formed or any of the dependencies isn't met, the extension won't show up in the menu. If your extension doesn't show up, take a look at extension-errors.log, which may give you a hint about why it wasn't loaded.
Installation
To install a new extension, just put the hello_world.py and hello_world.inx files, with all dependency modules, into the <path_to_inkscape>/extensions or ~/.config/inkscape/extensions directory. On Linux you will probably have to make the Python script executable first if you haven't done so yet, using the usual command (or your preferred file manager):
$ chmod a+x hello_world.py
Now start Inkscape. A new menu item Hello World! should appear in the Extensions->Examples menu.
Complete Source Code
Here is the complete commented source of the hello_world.py script file:
#!/usr/bin/env python
# These two lines are only needed if you don't put the script directly into
# the installation directory
import sys
sys.path.append('/usr/share/inkscape/extensions')

# We will use the inkex module with the predefined Effect base class.
import inkex
# The simplestyle module provides functions for style parsing.
from simplestyle import *

class HelloWorldEffect(inkex.Effect):
    """
    Example Inkscape effect extension.
    Creates a new layer with a "Hello World!" text centered in the middle of the document.
    """
    def __init__(self):
        """
        Constructor.
        Defines the "--what" option of a script.
        """
        # Call the base class constructor.
        inkex.Effect.__init__(self)

        # Define string option "--what" with "-w" shortcut and default value "World".
        self.OptionParser.add_option('-w', '--what', action='store',
                                     type='string', dest='what', default='World',
                                     help='What would you like to greet?')

    def effect(self):
        """
        Effect behaviour.
        Overrides base class' method and inserts "Hello World" text into SVG document.
        """
        # Get script's "--what" option value.
        what = self.options.what

        # Get access to main SVG document element and get its dimensions.
        svg = self.document.getroot()
        width = inkex.unittouu(svg.get('width'))
        height = inkex.unittouu(svg.attrib['height'])

        # Create a new layer.
        layer = inkex.etree.SubElement(svg, 'g')
        layer.set(inkex.addNS('label', 'inkscape'), 'Hello %s Layer' % (what))
        layer.set(inkex.addNS('groupmode', 'inkscape'), 'layer')

        # Create text element
        text = inkex.etree.Element(inkex.addNS('text', 'svg'))
        text.text = 'Hello %s!' % (what)

        # Set text position to center of document.
        text.set('x', str(width / 2))
        text.set('y', str(height / 2))

        # Center text horizontally with CSS style.
        style = {'text-align': 'center', 'text-anchor': 'middle'}
        text.set('style', formatStyle(style))

        # Connect elements together.
        layer.append(text)

# Create effect instance and apply it.
effect = HelloWorldEffect()
effect.affect()
Last edited by --Rubikcube 21:18, 7 August 2008 (UTC), based on a version by Blackhex 11:59, 26 April 2007 (UTC)
http://wiki.inkscape.org/wiki/index.php/PythonEffectTutorial
Concordance as a Measure of Model Fit
The whole idea of concordance as a success metric makes a lot more sense when you look at the definition of the word itself.
an alphabetical list of the words (especially the important ones) present in a text, usually with citations of the passages concerned.
Simply put, the Concordance Index is a measure of how well-sorted our predictions are.
How we actually arrive at this measure requires a little more digging.
A Motivating Example
I almost never copy someone's tutorial so brazenly, so let the fact that I'm about to be a testament to how helpful this Medium post was. I'm going to use the same toy dataset and distill the author's takeaways, while also adding my own and a couple of clarifying snippets of code.
Essentially, we’ve got a list of 5 people experiencing an churn in some order– for simplicity,
1, 2, 3, 4, 5. We do Data Stuff to the the inputs and arrive at predictions for when each person will churn, as follows.
import pandas as pd
from lifelines.utils import concordance_index

names = ['Alice', 'Bob', 'Carol', 'Dave', 'Eve']
events = [1, 2, 3, 4, 5]
preds = [1, 2, 3, 4, 5]

df = pd.DataFrame(data={'churn times': events, 'predictions': preds}, index=names)
df
Perfect prediction. We expect a good score. Lo and behold,
1.0 is the highest this index goes.
concordance_index(events, preds)
1.0
Ordering
However, one interesting consequence of this is that the magnitude of our predictions doesn’t matter, as long as they’re sorted correctly. Imagine instead, that the predictions were on the scale of
100s, not
1s.
events = [1, 2, 3, 4, 5]
preds = [100, 200, 300, 400, 500]
concordance_index(events, preds)
1.0
Or followed some monotonically-increasing function.
import numpy as np

events = [1, 2, 3, 4, 5]
preds = np.exp(events)
concordance_index(events, preds)
1.0
Indeed, the stated purpose of the index is to evaluate how well the two lists are sorted. Watch what happens when it gets the last two predictions wrong.
events = [1, 2, 3, 4, 5]
preds = [1, 2, 3, 5, 4]
concordance_index(events, preds)
0.9
Or swaps the first and last record
events = [1, 2, 3, 4, 5]
preds = [5, 2, 3, 4, 1]
concordance_index(events, preds)
0.3
Or gets it entirely backwards
events = [1, 2, 3, 4, 5]
preds = [5, 4, 3, 2, 1]
concordance_index(events, preds)
0.0
How is this being calculated?
Taking a peek
Essentially, the Concordance Index sorts our
names by the order of
events, and takes all before-and-after pairs. Call this set
A. Then it does the same thing when sorting by
predictions to make set
B. Then it takes the intersection of the two to make a new set
C. Finally, the Concordance Index is the ratio of the lengths of
C and
A– a perfect prediction will have generated the same set
B, making the intersection one-to-one. Similarly,
C will contain fewer records the more
B generated incorrect pairs.
For example:
events = [1, 2, 3, 4, 5]
preds = [1, 3, 2, 5, 4]
concordance_index(events, preds)
0.8
Under the hood, you can think of having a function that does the following
def generate_name_pairs(values):
    # sort (name, value) pairs by values
    pairs = sorted(list(zip(names, values)), key=lambda x: x[1])
    set_ = set()
    for idx, first_person in enumerate(pairs):
        # don't want (Alice, Alice)
        for second_person in pairs[idx+1:]:
            set_.add((first_person[0], second_person[0]))
    return set_

print(names)
print(events)
generate_name_pairs(events)
['Alice', 'Bob', 'Carol', 'Dave', 'Eve']
[1, 2, 3, 4, 5]
{('Alice', 'Bob'), ('Alice', 'Carol'), ('Alice', 'Dave'), ('Alice', 'Eve'), ('Bob', 'Carol'), ('Bob', 'Dave'), ('Bob', 'Eve'), ('Carol', 'Dave'), ('Carol', 'Eve'), ('Dave', 'Eve')}
Generating our sets as described above, we can see that
C does, indeed have a smaller length than
A and
B.
A = generate_name_pairs(events)
B = generate_name_pairs(preds)
C = A.intersection(B)

print(A, '\n')
print(B, '\n')
print(C)
len(A), len(B), len(C)
{('Carol', 'Eve'), ('Alice', 'Eve'), ('Bob', 'Dave'), ('Alice', 'Carol'), ('Bob', 'Carol'), ('Carol', 'Dave'), ('Bob', 'Eve'), ('Dave', 'Eve'), ('Alice', 'Bob'), ('Alice', 'Dave')}

{('Carol', 'Eve'), ('Alice', 'Eve'), ('Bob', 'Dave'), ('Alice', 'Carol'), ('Carol', 'Dave'), ('Eve', 'Dave'), ('Bob', 'Eve'), ('Carol', 'Bob'), ('Alice', 'Bob'), ('Alice', 'Dave')}

{('Carol', 'Eve'), ('Alice', 'Eve'), ('Bob', 'Dave'), ('Carol', 'Dave'), ('Alice', 'Carol'), ('Bob', 'Eve'), ('Alice', 'Bob'), ('Alice', 'Dave')}

(10, 10, 8)
Investigating the difference is straightforward and expected. We intentionally swapped two pairs in this example.
print(A.difference(B)) print(B.difference(A))
{('Bob', 'Carol'), ('Dave', 'Eve')} {('Eve', 'Dave'), ('Carol', 'Bob')}
And taking the ratio of the lengths, we get
0.8
len(C) / len(A)
0.8
Or we can do away with all of this set business and just use the function!
concordance_index(events, preds)
0.8
Note: This isn’t actually how it’s implemented on the backend. The pair-construction alone makes the algorithm
O(n^2). In reality,
lifelines does some clever sorting vector operations to get performance to
O(n log(n)). Use
lifelines, lol.
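The pairwise description above can be condensed into a direct O(n^2) sketch. This ignores ties and censoring and is not lifelines' actual implementation, but it reproduces the scores from the examples:

```python
def naive_concordance(events, preds):
    # Fraction of comparable before/after pairs whose ordering
    # the predictions preserve.
    num, den = 0, 0
    n = len(events)
    for i in range(n):
        for j in range(i + 1, n):
            if events[i] == events[j]:
                continue  # tied event times: skipped in this sketch
            den += 1
            if (preds[i] < preds[j]) == (events[i] < events[j]):
                num += 1
    return num / den

print(naive_concordance([1, 2, 3, 4, 5], [1, 3, 2, 5, 4]))  # 0.8
print(naive_concordance([1, 2, 3, 4, 5], [5, 4, 3, 2, 1]))  # 0.0
```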
On Censoring
One element we’ve neglected to mention until now is the way that this index interacts with censored data. Imagine that in our dataset, we never observed Churn for Carol. Now our lists look like
events = [1, 2, 3, 4, 5]
preds = [1, 3, 2, 5, 4]
obs = [True, True, False, True, True]
and using the third,
events_observed, argument in generating the index, we’ve got.
concordance_index(events, preds, obs)
0.75
This is because we repeat the exercise after throwing out every pair starting with a censored data point.
new_A = set(filter(lambda x: x[0] != 'Carol', A))
new_A
{('Alice', 'Bob'), ('Alice', 'Carol'), ('Alice', 'Dave'), ('Alice', 'Eve'), ('Bob', 'Carol'), ('Bob', 'Dave'), ('Bob', 'Eve'), ('Dave', 'Eve')}
new_C = set(filter(lambda x: x[0] != 'Carol', C))
new_C
{('Alice', 'Bob'), ('Alice', 'Carol'), ('Alice', 'Dave'), ('Alice', 'Eve'), ('Bob', 'Dave'), ('Bob', 'Eve')}
len(new_C) / len(new_A)
0.75
Note, we don’t also toss pairs ending in ‘Carol’. This should track, intuitively– just because we don’t know how long after the observation window Carol took to churn doesn’t mean that Alice churning right out of the gate didn’t happen, regardless.
On Real Data
Now, to better ground ourselves in a practical example, let's look at the built-in
lifelines dataset that investigates the duration of a country’s leadership.
from lifelines.datasets import load_dd

data = load_dd()
data.head()
Breaking out by
regime and fitting simple Kaplan-Meier curves, we can see a pattern in survival rates, relative to government type
%pylab inline
from lifelines import KaplanMeierFitter

fig, ax = plt.subplots(figsize=(15, 10))
kmf = KaplanMeierFitter()
for idx, group in data.groupby('regime'):
    kmf.fit(group['duration'])
    kmf.plot(ax=ax, label=idx)
Populating the interactive namespace from numpy and matplotlib
We’ll use this to make a simple categorical binning
regime_mapping = {
    'Monarchy': 'Monarchy',
    'Civilian Dict': 'Dict',
    'Military Dict': 'Dict',
    'Parliamentary Dem': 'Dem',
    'Presidential Dem': 'Dem',
    'Mixed Dem': 'Dem'
}
data['regime_type'] = data['regime'].map(regime_mapping)
Probably also worth looking at this by continent, it seems
fig, ax = plt.subplots(figsize=(15, 10))
kmf = KaplanMeierFitter()
for idx, group in data.groupby('un_continent_name'):
    kmf.fit(group['duration'])
    kmf.plot(ax=ax, label=idx, ci_show=True)
Finally, let’s do our favorite year-to-decade
pandas trick for good measure.
data['decade'] = data['start_year'] // 10 * 10
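The trick works because integer floor-division truncates the ones digit before the multiplication restores the scale:

```python
# Floor-divide by 10 (drops the ones digit), then multiply back up.
years = [1959, 1987, 1990, 2003]
decades = [y // 10 * 10 for y in years]
print(decades)  # [1950, 1980, 1990, 2000]
```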
We’ll lop off the other fields, dummy out the categorical features, and
trimmed = data[['un_continent_name', 'decade', 'duration', 'regime_type', 'observed']]
trimmed = pd.get_dummies(trimmed, drop_first=True)
trimmed.head()
Fitting a Cox model to it, we’ve got a concordance score of
0.64. Not awful. We barely tried, so better-than-random using like two features is good enough for me on this one.
from lifelines import CoxPHFitter

cph = CoxPHFitter(penalizer=0.001)
cph.fit(trimmed, 'duration', 'observed')
cph.print_summary()
Distribution shape
We can use the base hazard function of the Cox model to math our way to the base survival function
cph.baseline_survival_.plot();
Investigating, it looks like it sort of fits the right shape. Looks like it doesn’t drop off fast enough, though.
The actual durations, for reference:
trimmed[trimmed['observed'] == True]['duration'].hist(bins=20);
Expectations
The Expectation of the model has an important interpretation in this context– the predicted churn/death of a given record.
Taking the expectation of all of our trimmed data, we can see a spike at the
10-15 range, which matches our “not dropping off fast enough” interpretation.
cph.predict_expectation(trimmed).hist(bins=50);
But what about concordance?
Maybe our curve shape doesn’t completely align with reality, but this is the Cox Proportional Hazard model. We’re more interested in how well we’ve captured the relationship between records’ relative risk, and by extension their ordering in terms of survival.
We’ll store this as a standalone
Series for our analysis.
expectations = cph.predict_expectation(trimmed)
For example, it’s relatively straight-forward for us to look at the performance of the concordance index, by continent (with the sample size printed, for context).
for idx, group in data.groupby('un_continent_name'):
    continent = data.loc[group.index]
    preds = expectations.loc[group.index]
    ci = concordance_index(continent['duration'], preds, continent['observed'])
    print(idx.ljust(10), f'{ci:.3f}', len(group))
Africa     0.608 314
Americas   0.476 412
Asia       0.682 398
Europe     0.549 576
Oceania    0.516 108
To the degree that we’re doing a decent job at predicting death orders, it looks like our Asia and Africa results are propping up poor predictions elsewhere– worse than random (
.5) in the Americas.
Time?
Curiously, though, this isn’t to say that our model isn’t doing something right.
If we now consider that “time-to-death” is a continuous variable, we can look at a more-traditional measure of model fit and investigate the Mean Squared Error by continent (with concordance left in, for context)
from sklearn.metrics import mean_squared_error

print(' '*10, 'CI'.ljust(5), 'MSE'.ljust(6), 'Count')
for idx, group in data.groupby('un_continent_name'):
    continent = data.loc[group.index]
    preds = expectations.loc[group.index]
    ci = concordance_index(continent['duration'], preds, continent['observed'])
    mse = mean_squared_error(continent['duration'], preds)
    print(idx.ljust(10), f'{ci:.3f}', f'{mse:.3f}', len(group))
           CI    MSE    Count
Africa     0.608 74.030 314
Americas   0.476 28.611 412
Asia       0.682 53.970 398
Europe     0.549 18.923 576
Oceania    0.516 33.182 108
Hey, look at that– despite poor Concordance scores in the Americas and Europe, our MSE outperforms Asia and Africa, where we were celebrating model fit just a second ago.
Chasing down where we’re underperforming by both metrics can lead to discovery and creation of better variables. So let’s pare down our dataset to records where we have observed values and dig in.
observed = data[data['observed'] == True]
To visualize patterns in our MSE, let’s look at the raw difference between our predicted and observed deaths. This can be done in a couple of ways.
Looking at this in terms of volume, we might make a note to see if there’s a commonality between Africa and Asia that we can use to rein in the kurtosis.
fig, axes = plt.subplots(5, 1, figsize=(10, 12), sharex=True)
observed = data[data['observed'] == True]
for (idx, group), ax in zip(observed.groupby('un_continent_name'), axes.ravel()):
    ax.hist(group['duration'] - expectations.loc[group.index], label=idx)
    ax.legend()
Or with some simple box plots, we can better see the skew behavior:
- Asia and Africa have outliers on both sides
- Americas and Europe have longer-tailed errors
import seaborn as sns

fig, ax = plt.subplots(figsize=(15, 10))
sns.boxplot(y=observed['un_continent_name'], x=(data['duration'] - expectations), ax=ax);
Americas looks particularly offensive. Filtering down to just that continent and whipping up some KM curves by
region, we can see that there’s a pretty marked difference in the Caribbean.
fig, ax = plt.subplots(figsize=(15, 10))
kmf = KaplanMeierFitter()
americas = data[data['un_continent_name'] == 'Americas']
for idx, group in americas.groupby('un_region_name'):
    kmf.fit(group['duration'])
    kmf.plot(ax=ax, label=idx, ci_show=True)
Which is the result of some serious outlier behavior.
from lifelines.plotting import plot_lifetimes
from warnings import filterwarnings

fig, ax = plt.subplots(figsize=(10, 5))
filterwarnings('ignore')
plot_lifetimes(americas['duration'].reset_index(drop=True), ax=ax);
Now who could that be?
americas.loc[americas['duration'].idxmax()]
ctryname                                          Cuba
cowcode2                                            40
politycode                                          40
un_region_name                               Caribbean
un_continent_name                             Americas
ehead                                 Fidel Castro Ruz
leaderspellreg    Fidel Castro Ruz.Cuba.1959.2005.Civilian Dict
democracy                                Non-democracy
regime                                   Civilian Dict
start_year                                        1959
duration                                            47
observed                                             1
regime_type                                       Dict
decade                                            1950
Name: 375, dtype: object
Egads
from IPython.display import Image
Image('images/rawhide.jpg')
Now Wait a Minute
Of course, it’s worth noting that if we wanted a model that scored well in terms of MSE, we should have trained on MSE.
To restate a notion we had above, the Cox Proportional Hazard model is designed to optimize how we stack rank records by their hazard. Full stop.
If we were instead interested in the accuracy of survival prediction, it'd be more appropriate to use a method that treats duration as a target. Which, of course, then undervalues sorting and concordance.
It’s all about defining the right objective for your application and picking the appropriate model. The same author as the top of this notebook has a great notebook exploring this idea.
https://napsterinblue.github.io/notes/stats/survival_analysis/concordance/
self.getposition(d) not equal to self.getpositionbyname(d._name)
I appreciate that a complete code example would be useful, but I am not sure I can extract one easily, though I will try to produce a MWE. Anyway, in the meantime, is there any obvious plausible explanation for the following (executed literally line after line in next()):
self.getposition(d)

--- Position Begin
- Size: -2469643.0 - Price: 7.51399993896484 - Price orig: 0.0 - Closed: 0 - Opened: -2469643.0 - Adjbase: 7.550000190734861
--- Position End

self.getpositionbyname(d._name)

--- Position Begin
- Size: 0 - Price: 0.0 - Price orig: 0.0 - Closed: 0 - Opened: 0 - Adjbase: None
--- Position End
There are 3 data feeds in this example. We loop through them in next(). The first of them has a legitimate position. The second one no position. The third one, should have no position, but self.getposition(d) returns the position for the first one, as shown above. But then, when using self.getpositionbyname(d._name) returns a 0 position as expected.
Any thoughts appreciated, as I am pretty baffled.
- backtrader administrators last edited by
@rahul-savani said in self.getposition(d) not equal to self.getpositionbyname(d._name):
is there any obvious plausible explanation for the following
Yes. You have a bug.
@rahul-savani said in self.getposition(d) not equal to self.getpositionbyname(d._name):
I appreciate that a complete code example would be useful,
It's the only thing that matters.
@rahul-savani said in self.getposition(d) not equal to self.getpositionbyname(d._name):
but self.getposition(d) returns the position for the first one
With no code, we don't know what
`d` is holding, neither do you.
@rahul-savani said in self.getposition(d) not equal to self.getpositionbyname(d._name):
but I am not sure I can extract one easily
It's not about extracting anything. If you really think there is a bug in backtrader it will take around 20 lines to prove it. If those 20 lines produce the expected (good) result, you will have proof that you have a bug.
With no code, we don't know what d is holding, neither do you.
I understand; this should be somewhat more informative:
def next(self):
    for i, d in enumerate(self.datas):
        if self.order[d._name]:
            continue
        print(self.getposition(d))
        print(self.getpositionbyname(d._name))
If you really think there is a bug in backtrader it will take around 20 lines to prove it. If those 20 lines produce the expected (good) result, you will have proof that you have a bug.
OK, sure, I will try to create a MWE. Of course, it's far from inconceivable that I have inadvertently introduced a bug, but since I have extended
`backtrader.Strategy` and not extended/changed any code that should affect
`self.datas` (beyond populating them of course), I am at a loss as to how the code I have written could create this phenomenon. Will keep digging, but any suggestions would be great.
And thanks! Really great platform that I have found extremely useful. Your work is much appreciated.
Here's an MWE:
from __future__ import (absolute_import, division, print_function, unicode_literals)

import datetime
import os.path
import sys

import backtrader as bt
from backtrader.feeds import GenericCSVData

def create_dynamic_data_class():
    """
    Dynamic class creation see:
    """
    # Add a lines to the inherited ones from the base class
    strategy_cols = ['IND']
    lines = tuple(strategy_cols)
    # add parameters starting from column 6 (date, OHLCV = 012345 come before)
    # NOTE: openinterest currently excluded
    params = tuple((name, 6+i) for name, i in zip(strategy_cols, range(0, len(strategy_cols))))
    mydict = dict(lines=lines, params=params,)
    return type('DrCSVData', (GenericCSVData,), mydict)

def add_data(cerebro, symbols):
    DrCSVData = create_dynamic_data_class()
    datadir = 'DATA'  # relative reference to location of csv files
    csv_files = os.listdir(datadir)
    files = [s + '.csv' for s in symbols]
    for f in csv_files:
        if f not in files:
            continue
        fpath = os.path.join(datadir, f)
        # Create a Data Feed
        data = DrCSVData(dataname=fpath, dtformat='%Y-%m-%d', openinterest=-1, reverse=False)
        cerebro.adddata(data)

class DrStrategy(bt.Strategy):
    def __init__(self):
        self.order = {}
        for i, d in enumerate(self.datas):
            self.order[d._name] = None

    def notify_order(self, order):
        if order.status not in [order.Submitted, order.Accepted]:
            self.order[order.data._name] = None

    def next(self):
        for i, d in enumerate(self.datas):
            if self.order[d._name]:
                continue
            pos1 = self.getposition(d)
            pos2 = self.getpositionbyname(d._name)
            if pos1.size != pos2.size:
                print("PROBLEM FOUND: d._name %s" % d._name)
                print("self.getposition(d):\n", pos1)
                print("self.getposition(d._name):\n", pos2)
                sys.exit()
            if d.close[0] == 0:
                continue
            pos = self.getpositionbyname(d._name)
            if not pos:
                if d.IND[0] >= 0.99:
                    self.order[d._name] = self.buy(data=d)
                elif d.IND[0] <= 0.01:
                    self.order[d._name] = self.sell(data=d)
            else:
                self.order[d._name] = self.close(data=d)

if __name__ == "__main__":
    cerebro = bt.Cerebro(cheat_on_open=True, stdstats=False)
    cerebro.addstrategy(DrStrategy)
    add_data(cerebro, symbols=['S.1', 'S.2'])
    #add_data(cerebro, symbols=['S1', 'S2'])
    start_cash = 1000000000
    cerebro.broker.setcash(start_cash)
    results = cerebro.run()
    print("pnl:", cerebro.broker.getvalue() - start_cash)
I can provide the input data if useful, which are csv files lying in ./DATA,
$ ls DATA
S.1.csv  S.2.csv  S1.csv  S2.csv
In creating this example, I spotted a crucial requirement for the problem, namely that the data name has to contain a dot, so the version with:
add_data(cerebro, symbols=['S.1', 'S.2'])
gives the problem, the version with
add_data(cerebro, symbols=['S1', 'S2'])
does not.
Sorry if I missed a restriction against naming datas with dots in them; I have been reading the docs extensively but may have missed it.
P.S. obviously "S.1" looks like a silly name, but actually these were a large number of crosses, e.g., "USD.EUR" etc.
- backtrader administrators last edited by
@rahul-savani said in self.getposition(d) not equal to self.getpositionbyname(d._name):
In creating this example, I spotted a crucial requirement for the problem, namely that the data name has to contain a dot, so the version with:
Nothing within backtrader analyzes the name (it may contain dots, dashes, hashes ... anything ...)
@rahul-savani said in self.getposition(d) not equal to self.getpositionbyname(d._name):
# Create a Data Feed
data = DrCSVData(dataname=fpath, dtformat='%Y-%m-%d', openinterest=-1, reverse=False)
cerebro.adddata(data)
In the lines above lies your crucial problem
See the reference for
cerebro.adddata
You are assigning no name to any of the data feeds.
My very humble opinion: when one is creating a short snippet to test a bug/behavior, one creates something simple.
- Rahul Savani last edited by
You are assigning no name to any of the data feeds.
OK, thanks. That's easy to fix of course. But then what I don't understand:
In the code above we have not set the name via `adddata`, but the data feeds in `self.datas` **do** have names, and those names are used in the code above (e.g., with `self.getpositionbyname(d._name)`). The names returned by `d._name` correspond to the names of the csv files (without the ".csv" ending). I presumed that, having seen these names set automatically, everything was in order.
How are the names apparently set in the above code, but are somehow set differently (so as to not see the behaviour above) if I add the name as an explicit argument to
adddata?
My very humble opinion: when one is creating a short snippet to test a bug/behavior, one creates something simple.
Yes, you are right of course. The reason for this not so simple code is that I started from something much much more complicated and chose to strip back to get something simpler (incrementally, while keeping the behaviour) rather than starting with something totally simple and trying to recreate the behaviour (since I didn't know exactly what was needed to create it).
Thanks!
In case it wasn't clear, my claim is that there is a bug because in my original code I made no explicit use of the
`_name` property; I only used this when investigating the problem. The problem is that `self.getposition(d)` returned the position for the wrong data feed. I will do my best to come up with an MWE for that.
In the meantime, on my question in the last post, essentially: "How do I even have names if, as you point out I have not assigned them, and they are not assigned automatically?", the following (shorter ;-) example shows that
`_name` is set automatically:
from __future__ import (absolute_import, division, print_function, unicode_literals)

import os.path
import sys

import backtrader as bt

def add_data(cerebro):
    files = ['DATA/S1.csv', 'DATA/S2.csv']
    for f in files:
        data = bt.feeds.GenericCSVData(
            dataname=f,
            dtformat=('%Y-%m-%d'),
            datetime=0,
            open=1,
            high=2,
            low=3,
            close=4,
            volume=5,
            openinterest=-1
        )
        cerebro.adddata(data)

class DrStrategy(bt.Strategy):
    def __init__(self):
        pass

    def notify_order(self, order):
        pass

    def next(self):
        for i, d in enumerate(self.datas):
            print(d._name)
        sys.exit()

if __name__ == "__main__":
    cerebro = bt.Cerebro()
    cerebro.addstrategy(DrStrategy)
    add_data(cerebro)
    results = cerebro.run()
which when run gives:
$ python main_auto_names.py S1 S2
Delving into the source by adding a print at the start of `adddata` as follows:
def adddata(self, data, name=None):
    '''
    Adds a ``Data Feed`` instance to the mix.

    If ``name`` is not None it will be put into ``data._name`` which is
    meant for decoration/plotting purposes.
    '''
    print(data._name)
    if name is not None:
        data._name = name
we see that
`data` already has its name set when passed to
adddata. Indeed the auto-naming happens here:
class MetaCSVDataBase(DataBase.__class__):
    def dopostinit(cls, _obj, *args, **kwargs):
        # Before going to the base class to make sure it overrides the default
        if not _obj.p.name and not _obj._name:
            _obj._name, _ = os.path.splitext(os.path.basename(_obj.p.dataname))

        _obj, args, kwargs = \
            super(MetaCSVDataBase, cls).dopostinit(_obj, *args, **kwargs)

        return _obj, args, kwargs
In particular, the actual thing that drops the ".csv" is:
_obj._name, _ = os.path.splitext(os.path.basename(_obj.p.dataname))
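Worth noting for the dotted names discussed earlier: splitext only strips the final extension, so a symbol like "S.1" or "USD.EUR" keeps its dots in the auto-assigned feed name:

```python
import os.path

# splitext splits at the LAST dot, so dotted symbol names survive intact.
for path in ['DATA/S1.csv', 'DATA/S.1.csv', 'DATA/USD.EUR.csv']:
    name, ext = os.path.splitext(os.path.basename(path))
    print(name, ext)  # S1 .csv / S.1 .csv / USD.EUR .csv
```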
It still doesn't explain the behaviour I was experiencing with
getposition, so I will try to get a clean MWE for that.
Thanks, --Rahul.
- Rahul Savani last edited by
So I managed to get an MWE. The problem relates to 0 prices. The problem goes away if those prices are empty in the csv and treated as NaNs by backtrader. This is really as simple as I can get it; hopefully you agree:
Contents of input file S1.csv:
date,OPEN,CLOSE
1996-01-01,1.0,1.0
1996-01-02,1.0,1.0
1996-01-03,1.0,1.0
Contents of input file S2.csv:
date,OPEN,CLOSE
1996-01-01,0.0,0.0
1996-01-02,0.0,0.0
1996-01-03,0.0,0.0
Contents of input file S3.csv:
date,OPEN,CLOSE
1996-01-01,,
1996-01-02,,
1996-01-03,,
Source code for MWE:
import backtrader as bt


def add_data(cerebro, symbols):
    for symbol in symbols:
        fname = 'DATA_S/%s.csv' % symbol
        data = bt.feeds.GenericCSVData(
            dataname=fname,
            dtformat=('%Y-%m-%d'),
            datetime=0, open=1, close=2,
            high=-1, low=-1, volume=-1, openinterest=-1
        )
        cerebro.adddata(data)


class DrStrategy(bt.Strategy):
    def next(self):
        print(self.data.datetime.date(0))
        for i, d in enumerate(self.datas):
            if i == 0:
                self.buy(data=d)
            print("Datas %s, position size: %s" % (i, self.getposition(d).size))


if __name__ == "__main__":
    cerebro = bt.Cerebro(cheat_on_open=True)
    cerebro.addstrategy(DrStrategy)
    # position reported for datas 1 erroneously same as for datas 0
    add_data(cerebro, symbols=['S1', 'S2'])
    # looks ok (missing entries in S3 instead of 0s)
    # add_data(cerebro, symbols=['S1', 'S3'])
    results = cerebro.run()
Output from running it with `add_data(cerebro, symbols=['S1', 'S2'])`, where we get the problem:
1996-01-01
Datas 0, position size: 0
Datas 1, position size: 0
1996-01-02
Datas 0, position size: 1
Datas 1, position size: 1
1996-01-03
Datas 0, position size: 2
Datas 1, position size: 2
Output from running it with `add_data(cerebro, symbols=['S1', 'S3'])`, where we do not get the problem:
1996-01-01
Datas 0, position size: 0
Datas 1, position size: 0
1996-01-02
Datas 0, position size: 1
Datas 1, position size: 0
1996-01-03
Datas 0, position size: 2
Datas 1, position size: 0
Any feedback, comments, suggestions welcome. Happy to investigate anything related that might be useful. Of course, I will personally be careful to avoid 0 prices.
Thanks, --Rahul.
https://community.backtrader.com/topic/1815/self-getposition-d-not-equal-to-self-getpositionbyname-d-_name/
With the ongoing increase in human-device interaction, Alexa devices have found a strong place in the market. Echo devices are now placed in home and offices to control the lights, check news, get the status of a task etc with just voice command. Every user now has their private (virtual) assistant to make their life easier.
But an important part of the chain is the Alexa skill developer, whose aim is to build a skill that reduces the user's manual work and makes the user's life more convenient. Though developing an Alexa skill is not difficult, there are many challenges that developers face while building one, especially when it requires another software/app.
Currently, an Alexa skill which requires another software/app needs the account linking feature to be enabled. For example, to enable and use the Uber Alexa skill, the user needs to link their Uber account with Alexa. Once the user links their account, the Uber software sends an access token to the Alexa skill as a unique key for the user, and the account linking is complete. The next time the user invokes the Uber Alexa skill, the request sends the access token to the Uber software and fetches the information.
We faced the same blocker while developing an Alexa account linking skill closely integrated with Jira software. The Alexa skill is built primarily for Scrum Masters to read and write to their Jira and help them stay up-to-date with their projects.
The most challenging part of developing this skill was account linking, because Jira requires server-client authentication: to link an account, every user has to manually add our Alexa skill as an authorized client in their Jira dashboard and then provide us with the access token.
The solution implemented to reduce this inconvenience was to create a custom HTML page (hosted on S3) for account linking. The user just needs to enter their credentials - username, password and the Jira server URL - and the account will be linked successfully.
With this approach, we were not using Jira directly to authenticate users via account linking, but rather acting as a message carrier between Alexa and Jira. This makes the account linking process easy for users but poses a high security risk to their credentials.
To make the process secure, the following architecture was implemented in our skill. One of the key components of the architecture is that it is built completely on AWS services, namely :
API Gateway
S3 bucket
Lambda
DynamoDB
Explanation :
When a user enables the Alexa skill, he is redirected to our HTML page hosted on an S3 bucket. Once the user fills in his credentials and clicks the submit button, it sends a GET request with query parameters to an endpoint deployed on AWS API Gateway.
The API Gateway then invokes a connected Lambda function and sends the query parameters as an event.
Using the parameters in the event, the Lambda sends a GET request to the Jira REST API to validate and authenticate the user. If the user credentials are incorrect, it returns an error message; otherwise it returns a success message with an access token created by the Lambda. On successful validation, the Lambda also stores the encoded user credentials in a DynamoDB table with the access token as the key.
def lambda_handler(event, context):
    print(event)
    username = event["username"]
    password = event["password"]
    server = event["server"]
    skill = event["skill"]

    table_name = ""
    if skill == "skill_name":
        table_name = "Table_name"

    # validity(), encrypt(), session and dynamodb_client are defined
    # elsewhere in the original code
    result = validity(password, username, server)

    # Convert the event into a DynamoDB item
    item = {}
    for k, v in event.items():
        item[k] = {"S": v}
    print(item)

    # Store the password encrypted with KMS instead of in plain text
    item["password"] = {
        "B": encrypt(session, event["password"], "alias/alias_name")
    }

    # Generate a random 10-character access token
    accesstoken = ''.join(random.SystemRandom().choice(
        string.ascii_uppercase + string.digits) for _ in range(10))
    item["accesstoken"] = {"S": accesstoken}

    dynamodb_client.put_item(TableName=table_name, Item=item)
    print("done")
    return result
The JavaScript then displays an "Invalid Credentials" error if an error message is received. In case of a success message, the JavaScript sends the access token to the Alexa redirect URL and thus successfully links the account.
The access token is the main component as it is used to identify the user.
When the user invokes our skill, Alexa sends a JSON request to the Lambda function with a key carrying the access token. The Lambda then queries the DynamoDB table with the access token to identify the user and fetch his credentials. Once the credentials are fetched, the Lambda sends a request to the Jira REST API based on the user's intent and returns the message to Alexa as JSON.
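The two token-related steps - generation at link time and lookup at invocation time - can be sketched as follows. This is an illustrative stand-in, not the actual implementation: the dict replaces the DynamoDB table, and all names (function names, field names, the example server URL) are assumptions.

```python
import random
import string


def make_access_token(length=10):
    # Same recipe as the Lambda above: random uppercase letters and digits
    rng = random.SystemRandom()
    chars = string.ascii_uppercase + string.digits
    return ''.join(rng.choice(chars) for _ in range(length))


# In-memory stand-in for the DynamoDB table, keyed by access token
table = {}


def link_account(username, server):
    # At link time: mint a token and store the user's record under it
    token = make_access_token()
    table[token] = {"username": username, "server": server}
    return token


def get_credentials(access_token):
    # At invocation time: the token in Alexa's request identifies the user
    return table[access_token]


token = link_account("scrum_master", "https://jira.example.com")
creds = get_credentials(token)
```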
The Alexa then voices the message to the user and enables the user to now use his Jira with just his voice!
https://www.srijan.net/resources/blog/alexa-account-linking
Live Projections
Note
Live Projections feature is obsolete in favor of Result Transformers.
Usually, RavenDB can tell what types you want to return based on the query type and the CLR type encoded in a document, but there are some cases where you want to query on one thing, but the result is completely different. This is usually the case when you are using live projections.
For example, let us take a look at the following index:
public class PurchaseHistoryIndex : AbstractIndexCreationTask<Order, Order>
{
    public PurchaseHistoryIndex()
    {
        Map = orders => from order in orders
                        from item in order.Items
                        select new
                        {
                            UserId = order.UserId,
                            ProductId = item.Id
                        };

        TransformResults = (database, orders) =>
            from order in orders
            from item in order.Items
            let product = database.Load<Product>(item.Id)
            where product != null
            select new
            {
                ProductId = item.Id,
                ProductName = product.Name
            };
    }
}
Note that when we query this index, we can query based on UserId or ProductId, but the results that we get back aren't of the same type that we query on. For that reason, we have the OfType<T> (As<T>) extension method. We can use it to change the result type of the query:
documentSession.Query<Shipment, PurchaseHistoryIndex>()
    .Where(x => x.UserId == userId)
    .OfType<PurchaseHistoryViewItem>()
    .ToArray();
In the code above, we query the PurchaseHistoryIndex using Shipment as the entity type to search on, but we get the results as PurchaseHistoryViewItem.
https://ravendb.net/docs/article-page/2.5/csharp/client-api/querying/static-indexes/live-projections
In this post, I will show you how a bitcoin transaction presented in the raw format is to be interpreted and how conversely a bitcoin transaction stored in a C++ (and later Python) object can be converted into a hexadecimal representation (a process called serialization). Ultimately, the goal of this and subsequent posts will be to create a bitcoin transaction from scratch in Python, to sign it and to publish it in a bitcoin network, without using any of the available bitcoin libraries.
The subject of serialization and deserialization in the bitcoin protocol is a bit tricky. At the end of the day, the truth is hidden in the reference implementation somewhere (so time to get the code from the GitHub repository if you have not done so yet). I have to admit that when I first started to work with that code, I found it not exactly easy to understand, given that it has been a few years (well, somewhere around 20 years to be precise) since I last worked with templates in C++. Still, the idea of this post is to get to the bottom of it, and so I will walk you through the most relevant pieces of the source code. But be warned – this will not be an easy read and a bit lengthy. Alternatively, you can also skip directly to the end where the result is again summarized and ignore the details.
The first thing that we need is access to a raw (serialized) bitcoin transaction. This can be obtained from blockchain.info using the following code snippet.
import requests

def get_raw_transaction(txid="ed70b8c66a4b064cfe992a097b3406fa81ff09641fe55a709e4266167ef47891"):
    url = '' + txid + '?format=hex'
    r = requests.get(url)
    return r.text
If you print the result, you should get
0200000003620f7bc1087b0111f76978ef747001e3ae0a12f254cbfb858f 028f891c40e5f6010000006a47304402207f5dfc2f7f7329b7cc731df605 c83aa6f48ec2218495324bb4ab43376f313b840220020c769655e4bfcc54 e55104f6adc723867d9d819266d27e755e098f646f689d0121038c2d1cbe 4d731c69e67d16c52682e01cb70b046ead63e90bf793f52f541dafbdfeff fffff15fe7d9e0815853738ce47deadee69339e027a1dfcfb6fa887cce3a 72626e7b010000006a47304402203202e6c640c063989623fc782ac1c9dc 3c6fcaed996d852ec876749ba63db63b02207ef86e262ad4b4bc9cebfadb 609f52c35b0105e15d58a5ecbecc5e536d3a8cd8012103dc526ca188418a b128d998bf80942d66f1b3be585d0c89bd61c533bddbdaa729feffffff84 e6431db86833897bab333d844486c183dd01e69862edea442e480c2d8cb5 49010000006a47304402200320bc83f35ceab4a7ef0f8181eedb5f54e3f6 17626826cc49c8c86efc9be0b302203705889d6aed50f716b81b0f3f5769 d72d1b8a6b59d1b0b73bcf94245c283b8001210263591c21ce8ee0d96a61 7108d7c278e2e715ac6d8afd3fcd158bee472c590068feffffff02ca780a 00000000001976a914811fb695e46e2386501bcd70e5c869fe6c0bb33988 ac10f59600000000001976a9140f2408a811f6d24ab1833924d98d884c44 ecee8888ac6fce0700
Having that, we can now start to go through this byte by byte - you might even want to print that string and strike out the bytes as we go. To understand how serialization works in the reference implementation, we will have to study the header file serialize.h containing boilerplate code to support serialization. In addition, each individual data type contains specific serialization code. It is useful to compare our results with the human readable description of the transaction at blockchain.info.
To understand how the mechanism works, let us start at the function getrawtransaction in rpc/rawtransaction.cpp which is implementing the corresponding RPC call. This function ends up calling TxToUniv in core_write.cpp and finally EncodeHexTx in the same file. Here an instance of the class CDataStream is created which is defined in streams.h. For that class, the operator << is overwritten so that the function Serialize is invoked. Templates for this method are declared in serialize.h and will tell us how the individual data types are serialized in each individual case for the elementary data types and sets, vectors etc. All composite classes need to implement their own Serialize method to fit into this scheme.

For a transaction, the method CTransaction::Serialize is defined in primitives/transaction.h and delegates the call to the function SerializeTransaction in the same file.
template<typename Stream, typename TxType>
inline void SerializeTransaction(const TxType& tx, Stream& s) {
    const bool fAllowWitness = !(s.GetVersion() & SERIALIZE_TRANSACTION_NO_WITNESS);

    s << tx.nVersion;
    unsigned char flags = 0;
    // Consistency check
    if (fAllowWitness) {
        /* Check whether witnesses need to be serialized. */
        if (tx.HasWitness()) {
            flags |= 1;
        }
    }
    if (flags) {
        /* Use extended format in case witnesses are to be serialized. */
        std::vector<CTxIn> vinDummy;
        s << vinDummy;
        s << flags;
    }
    s << tx.vin;
    s << tx.vout;
    if (flags & 1) {
        for (size_t i = 0; i < tx.vin.size(); i++) {
            s << tx.vin[i].scriptWitness.stack;
        }
    }
    s << tx.nLockTime;
}
Throughout this post, we will ignore the extended format that relates to the segregated witness feature and restrict ourselves to the standard format, i.e. to the case that the flag fAllowWitness above is false.
We see that the first four bytes are the version number, which is 2 in this case. Note that little endian encoding is used, i.e. the first byte is the least significant byte. So the version number 2 corresponds to the string
02000000
Next, the transaction inputs and transaction outputs are serialized. These are vectors, and the mechanism for serializing vectors becomes apparent in serialize.h.
template<typename Stream, typename T, typename V>
void Serialize_impl(Stream& os, const std::vector<T>& v, const V&)
{
    WriteCompactSize(os, v.size());
    for (typename std::vector<T>::const_iterator vi = v.begin(); vi != v.end(); ++vi)
        ::Serialize(os, (*vi));
}

template<typename Stream, typename T>
inline void Serialize(Stream& os, const std::vector<T>& v)
{
    Serialize_impl(os, v, T());
}
We see that to serialize a vector, we first serialize the length of the vector, i.e. the number of elements, and then call the serialization method on each of the individual items. The length is serialized in a compact format called a varInt which stores a number in 1 - 9 bytes, depending on its size. In this case, one byte is sufficient - this is the byte 03 after the version number. Thus we can conclude that the transaction has three transaction inputs.
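For illustration, here is a small decoder for this format (a sketch written for this post, not code from the reference implementation): a value below 0xfd is stored in the byte itself, while the prefixes 0xfd, 0xfe and 0xff announce 2, 4 or 8 little endian bytes to follow.

```python
def read_varint(data, pos=0):
    # Decode a Bitcoin varInt (compactSize) starting at pos;
    # return the value and the position of the next byte
    prefix = data[pos]
    if prefix < 0xfd:
        return prefix, pos + 1
    width = {0xfd: 2, 0xfe: 4, 0xff: 8}[prefix]
    value = int.from_bytes(data[pos + 1:pos + 1 + width], 'little')
    return value, pos + 1 + width

print(read_varint(bytes.fromhex('03')))  # (3, 1) - our input counter
```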
To understand the next bytes, we need to look at the method CTxIn::SerializationOp.
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action) {
    READWRITE(prevout);
    READWRITE(scriptSig);
    READWRITE(nSequence);
}
This is not very surprising - we see that the spent transaction output, the signature script and the sequence number are serialized in that order. The spent transaction prevout is an instance of COutPoint which has its own serialization method. First, the transaction ID of the previous transaction is serialized according to the method base_blob::Serialize defined in uint256.h. This will produce the hexadecimal representation in little endian encoding, so that we have to reverse the order bytewise to obtain the transaction ID.
So in our example, the ID of the previous transaction is encoded in the part starting with 620f7b… in the first line and ending (a transaction ID has always 256 bit, i.e. 32 bytes, i.e. 64 characters) with the bytes …1c40e5f6 early in the second line. To get the real transaction ID, we have to revert this byte for byte, i.e. the transaction ID is
f6e5401c898f028f85fbcb54f2120aaee3017074ef7869f711017b08c17b0f62
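This bytewise reversal is easy to reproduce (a quick sketch):

```python
# The 32 bytes as they appear in the serialized transaction (little endian)
raw = '620f7bc1087b0111f76978ef747001e3ae0a12f254cbfb858f028f891c40e5f6'
# Reverse byte for byte to obtain the transaction ID
txid = bytes.fromhex(raw)[::-1].hex()
print(txid)  # f6e5401c898f028f85fbcb54f2120aaee3017074ef7869f711017b08c17b0f62
```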
The next four bytes still belong to the spent transaction and encode the index of the spent output in the list of outputs of the previous transaction. In this case this is 1, again encoded in little endian byte order, i.e. as 01000000. Thus we have now covered and understood the following part of the hex representation.
0200000003620f7bc1087b0111f76978ef747001e3ae0a12f254cbfb858f 028f891c40e5f601000000
Going back to the serialization method of the class CTxIn, we now see that the next few bytes are the signature script. The format of the signature script is complicated and will be covered in a separate post. For today, we simply take this as a hexadecimal string. In our case, this string starts with 6a473044... in the second line and ends with ...541dafbd close to the end of line five.
Finally, the last two bytes in line five and the first two bytes in line six are the sequence number in little endian byte order.
We are now done with the first transaction input. There are two more transaction inputs that follow the same pattern, the last one ends again with the sequence number close to the end of line 15.
Now we move on to the transaction outputs. Again, as this is a vector, the first byte (02) is the number of outputs. Each output is then serialized according to the respective method of the class CTxOut.
template <typename Stream, typename Operation>
inline void SerializationOp(Stream& s, Operation ser_action) {
    READWRITE(nValue);
    READWRITE(scriptPubKey);
}
The first element is the value, which is an instance of the class CAmount. Again, we can look up the serialization method of this class in amount.h and find that this is simply a 64 bit integer, so its serialization method is covered by the templates in serialize.h and results simply in eight bytes in little endian order:
ca780a0000000000
If we reorder and decode this, we obtain 686282 Satoshi, i.e. 0.00686282 bitcoin. The next object that is serialized is the public key script. Again, we leave the details to a later post, but remark (and this is also true for the signature script) that the first byte is the length of the remaining part of the script in bytes, so that we can figure out that the script is composed of the 0x19 = 25 bytes
76a914811fb695e46e2386501bcd70e5c869fe6c0bb33988ac
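The little endian decoding of the amount can be checked directly (a quick sketch):

```python
# The eight value bytes as they appear in the serialized output
amount = int.from_bytes(bytes.fromhex('ca780a0000000000'), 'little')
print(amount)  # 686282 Satoshi
```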
For the second output, the pattern repeats itself. We have the amount and the public key script
76a9140f2408a811f6d24ab1833924d98d884c44ecee8888ac
of the second output.
Finally, there are four bytes left: 6fce0700. Going back to SerializeTransaction, we identify this as the lock time 0x7ce6f (511599 in decimal notation).
After going through all these details, it is time to summarize our findings. A bitcoin transaction is encoded as a hexadecimal string as follows.
- The version number (4 bytes, little endian)
- The number of transaction inputs
- For each transaction input:
- the ID of the previous transaction (reversed)
- the index of the spent transaction output in the previous transaction (4 bytes, little endian)
- the length of the signature script
- the signature script
- the sequence number (4 bytes, little endian)
- The number of transaction outputs
- For each transaction output:
- the amount (eight bytes, little endian encoding) in Satoshi
- the length of the public key script
- the public key script
- the locktime (four bytes, little endian)
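As a cross-check of this summary, here is a compact stand-alone parser (a sketch written for this post, not the code from the repository mentioned below) that walks a raw non-segwit transaction field by field:

```python
def read_varint(b, pos):
    # varInt/compactSize: one byte, or a 0xfd/0xfe/0xff prefix plus 2/4/8 bytes
    prefix = b[pos]
    if prefix < 0xfd:
        return prefix, pos + 1
    width = {0xfd: 2, 0xfe: 4, 0xff: 8}[prefix]
    return int.from_bytes(b[pos + 1:pos + 1 + width], 'little'), pos + 1 + width


def parse_tx(raw_hex):
    # Parse a serialized (non-segwit) transaction following the field list above
    b = bytes.fromhex(raw_hex)
    pos = 0
    version = int.from_bytes(b[pos:pos + 4], 'little'); pos += 4
    vin = []
    n_in, pos = read_varint(b, pos)
    for _ in range(n_in):
        txid = b[pos:pos + 32][::-1].hex(); pos += 32          # reversed txid
        index = int.from_bytes(b[pos:pos + 4], 'little'); pos += 4
        slen, pos = read_varint(b, pos)
        sig = b[pos:pos + slen].hex(); pos += slen
        seq = int.from_bytes(b[pos:pos + 4], 'little'); pos += 4
        vin.append({'txid': txid, 'index': index, 'scriptSig': sig, 'seq': seq})
    vout = []
    n_out, pos = read_varint(b, pos)
    for _ in range(n_out):
        value = int.from_bytes(b[pos:pos + 8], 'little'); pos += 8  # Satoshi
        slen, pos = read_varint(b, pos)
        spk = b[pos:pos + slen].hex(); pos += slen
        vout.append({'value': value, 'scriptPubKey': spk})
    locktime = int.from_bytes(b[pos:pos + 4], 'little')
    return {'version': version, 'vin': vin, 'vout': vout, 'locktime': locktime}
```

Running this on the raw transaction from the beginning of the post reproduces the version 2, the three inputs with their reversed transaction IDs, the two outputs with their amounts and public key scripts, and the lock time 511599.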
In my GitHub account, you will find a Python script Transaction.py that retrieves our sample transaction from the blockchain.info site and prints out all the information line by line. To run it, clone the repository using
$ git clone ; cd bitcoin
and then run the script
$ python Transaction.py
The script uses a few modules in the package btc, namely txn.py and serialize.py, that essentially implement the serialization and deserialization routines discussed in this post.
That is it for today. In the next posts, I will start to look at a topic that we have more or less consequently ignored or oversimplified so far: scripts in the bitcoin world.
https://leftasexercise.com/2018/03/16/on-the-road-again-serializing-and-deserializing-bitcoin-transactions/
Why I can't find device used CCyUSBDevice
yumic_3247926, Aug 22, 2018 8:48 PM
This is the test code.
when I didn't load the IMG the result is like this:
but when I loaded the IMG, the result is like this:
This IMG can work!
Could you tell me why the device can be found when it is empty, but not after loading the correct IMG?
Thank you !
The console code:
#include "stdafx.h"
#include <iostream>
using namespace std;
#include "CyAPI.h"
int _tmain(int argc, _TCHAR* argv[])
{
CCyUSBDevice* pUSB = new CCyUSBDevice;
int nDeviceCount = pUSB->DeviceCount();
cout<<nDeviceCount<<endl;
for (int nIdx = 0; nIdx < pUSB->DeviceCount(); nIdx++)
{
pUSB->Open(nIdx);
printf("%s\n", pUSB->DeviceName);
printf("%d\n", pUSB->VendorID);
printf("%d\n", pUSB->ProductID);
}
getchar();
return 0;
}
1. Re: Why I can't find device used CCyUSBDevice
SrinathS_16, Aug 22, 2018 8:54 PM (in response to yumic_3247926)
Hello,
The IMG file that you have loaded complies to the UVC class and the device gets bound to the UVC driver. CyAPI library can only identify devices that are bound to the CYUSB3 driver.
Best regards,
Srinath S
2. Re: Why I can't find device used CCyUSBDevice
yumic_3247926, Aug 23, 2018 2:06 AM (in response to SrinathS_16)
Hi!
Thanks for your help!
The camera sensor is connected to FX3, and FX3 is connected to the host computer via USB 3.0. The collected data can be uploaded to the host without processing, and the host can also send instructions to control the sensor.
Which firmware should I refer to for such a project? Not the UVC pattern. I have studied AN75779, but the requirements are not entirely like this. Could you help me?
3. Re: Why I can't find device used CCyUSBDevice
SrinathS_16, Aug 23, 2018 3:41 AM (in response to yumic_3247926)
Hello,
In case you do not want to implement the FX3 device in UVC class, you can refer to the AN65974 example which implements an FPGA interface. But, it would be better to use the UVC class for camera devices since the existing host applications can be used.
In order to communicate with the control sensor, you can implement a vendor interface in addition to the UVC class interface and use vendor commands to communicate with the device. An example of this is included as part of the AN75779 example. The macro USB_DEBUG_INTERFACE has to be defined to enable this.
Best regards,
Srinath S
4. Re: Why I can't find device used CCyUSBDevice
yumic_3247926, Aug 23, 2018 4:09 AM (in response to SrinathS_16)
We are making a new host application; the host application calls the Cypress API library's interface.
I have studied AN65974, but it doesn't have control instructions, only bulk transfer.
What should I do?
5. Re: Why I can't find device used CCyUSBDevice
SrinathS_16, Aug 23, 2018 4:21 AM (in response to yumic_3247926)
Hello,
The control endpoint is available on the device by default and hence need not be separately configured.
- To use vendor commands, the device must have an additional vendor interface which can be defined in the descriptor file as follows.
/* Interface descriptor */
0x09, /* Descriptor size */
CY_U3P_USB_INTRFC_DESCR, /* Interface Descriptor type */
0x00, /* Interface number */
0x00, /* Alternate setting number */
0x00, /* Number of end points */
0xFF, /* Interface class */
0x00, /* Interface sub class */
0x00, /* Interface protocol code */
0x00, /* Interface descriptor string index */
- The above descriptor should be added as part of the configuration descriptor and it has to be noted that the length of the configuration descriptor field and the number of interfaces field must be modified after the inclusion of this descriptor.
- To implement the vendor commands, the CyFxSlFifoApplnUSBSetupCB() callback function must be modified to implement the required functionality. As an example of vendor request handling, the USBBulkSourceSink example that comes with the FX3 SDK can be referred.
Best regards,
Srinath S
6. Re: Why I can't find device used CCyUSBDevice
yumic_3247926, Aug 27, 2018 1:04 AM (in response to SrinathS_16)
The IMG file that I have loaded complies to the UVC class.
If I want to develop applications in Windows, what API should I use? The DirectShow?
7. Re: Why I can't find device used CCyUSBDevice
SrinathS_16, Aug 27, 2018 2:03 AM (in response to yumic_3247926)
Hello,
The DirectShow or the Media Foundation APIs can be used. Also, there are a lot of free Windows applications that comply to the UVC class. You can also consider using one of them.
Best regards,
Srinath S
8. Re: Why I can't find device used CCyUSBDevice
yumic_3247926, Aug 28, 2018 12:39 AM (in response to SrinathS_16)
Thank you very much!
Best regards,
sunny
https://community.cypress.com/thread/36314
Guys, I need help ASAP since I need to submit this miniproject, and I have one more minor question that needs answering.
The C++ code is given below; I need to change it to use a function while giving the same output.
#include <iostream>
#define SIZE 7
using namespace std;

int main()
{
    int A[SIZE] = { 2, 4, 8, 16, 32, 64, 128 };
    int i, j;
    int total = 0;
    cout << "Value of array: ";
    for (j = 0; j < SIZE; j++)
        cout << A[j] << " ";
    cout << endl;
    for (i = 0; i < SIZE; i++)
    {
        total += A[i];
    }
    cout << "Total of 2 power n from 2 to 128 are: " << total;
    return 0;
}
Thanks in advance if anyone manages to answer it within 45 minutes from now.
https://www.daniweb.com/programming/software-development/threads/400778/user-defined-function-based-on-array
This Wiki page is a deep dive into the Arduino board's Two Wire library. It specifically references the Arduino Duemilanove with its ATmega328 chip. It explores the source code and describes the hardware in the computer, and how the library makes it tick.
- Wire.begin()
- sbi(PORTC, 4);
- sbi(PORTC, 4);
- twi_init(), Following Lines
- sbi(PORTC, 5);
- cbi(TWSR, TWPS0)
- cbi(TWSR, TWPS1)
- TWBR = ((CPU_FREQ / TWI_FREQ) - 16) / 2;
- TWCR = _BV(TWEN) | _BV(TWIE) | _BV(TWEA);
- Wire.begin()
- Wire.beginTransmission(), the next line of the sketch
- Wire.send(), the next line of the sketch
- Wire.endTransmission(), the next line of the sketch
- twi_writeTo(), lines 1-7
- twi_writeTo(), lines 9-11
- twi_writeTo(), lines 12-27
- twi_writeTo(), line 30
- twi_writeTo(), lines 33-35
- SIGNAL(TWI_vect), or How I Learned to Send My I2C Data and Love the Bomb
- SIGNAL(TWI_vect) in depth
Greetings, fellow Arduinoid! GreyGnome here. Confused about I2C? Want to get your Arduino to chatter with other ICs? Me too. Imagine getting your nice shiny new Arduino, scouring the tutorials scattered about the Internet, wiring up your shiny new I2C-compatible IC, typing up your first sketch using the Wire library, uploading it, only to find... nothing. No I2C bliss. No chatter back and forth.
This was my experience. And the problem began, as they often do, from ignorance. I have the necessary I2C timing diagrams but really didn't know what Arduino's Wire library was doing- under the hood. Why should I use it, if it hardly seems to work? Why not just write my own routines, send my signals out some digital pin (there are plenty, right?), and be done with it? What does Wire have that I couldn't write myself, which would be perhaps even less opaque and easier to use?
In short, how does a sketch written using the Wire library translate to the serial I2C signals between the Arduino and MPR084?
Well, by the end of this exploration I hope to answer all those questions. For there are many more wise and talented than I who have not only written but successfully implemented the Arduino's Wire library, so the problem is probably not out there somewhere. It's here, in my head, as I type this. And as a beginner, I'm inclined to think that I simply don't know any better. Time to get educated- after that, I can decide what do do and how to do it.
So- join me on this journey into the bowels of the Arduino and the Atmel MCU (microcontroller unit).
In this tutorial I assume you:
We begin with the Arduino Duemilanove and the MPR084 IC. This is a touch sensitive keyboard controller with 8 inputs. It uses the I2C standard for serial communication over the two wires: SDA for data and SCL for clock. The Arduino is the I2C master and the MPR084 is the slave.
In order to test the communications, I want to run something simple. So I will attempt to read a value from the MPR084. The device contains a register that you can query which contains version information for the part. I will query its first 16 bytes. This means I need to:
The datasheet for the MPR084 says:
The following are the write command and the read command message formats for the MPR084. First, the write:
Then the read:
There is a twist, however: the Arduino Duemilanove provides 5 volt power to its ATmega328 processor. The MPR084 runs on 3.3 volts. Its communication pins are not designed to handle over-voltages, so the signals from the ATmega328 need to be converted to 3.3 volts. For the SDA line, the inverse is true as well: the MPR084, when it speaks to the Arduino, needs to send its signals at a 5V level. This is done with a level shifter, and the principal source I used to design mine is here: (if that link disappears, search for Application Note AN97055 from Philips Semiconductor). The MOSFET transistor I used was the TN2106N3-G, but I'm sure there are many others that will work.
The SCL line is the clock that synchronizes the I2C components. The master- which is the ATmega328 in this case- provides the clock signal and the slave listens. My first mistake was assuming that the conversation is simply one-way. However a review of Wikipedia's I2C article shows that the slave can hold the SCL line low if it wants to slow the master down; this is called "clock stretching". So the SCL conversation is not one-way. Thus, I will use the same MOSFET level shifter for SCL that I use for SDA. My level shifter looks like this:
According to various examples on the web (for example, see this example: [search on the page for "reading data from"]) then, a basic sketch to read I2C data should look something like this:
Wire.beginTransmission(DEVICE_ADDRESS);
Wire.send(some_data); // send some data
Wire.endTransmission();
Wire.requestFrom(DEVICE_ADDRESS, number_of_bytes);
data = Wire.receive();
...or something quite similar. For more examples, see the list at .
...However, after reviewing the tutorials and the datasheet, it appears to me that what I need to do is:
As to what I write, that I don't know. Essentially I want to send a no-op because my only interest really is in the read at this point. So, on to the documentation to figure out how a no-op might look. Or maybe just an innocuous write command, apropos of nothing. ...Back in a few hours...
...Annnnd... I'm back! Seemed like a blink of an eye, didn't it? Anyway, it appears to me that what I need to do is:
n = 0, so I will send a stop bit after sending the register address ("command byte").
Theoretically then, my sketch will look like this. I am going to attempt to read 16 bytes of the Sensor Information Register, Address 0x14 (it should be ASCII data).
Here's the sketch:
#include <Wire.h>
// #define TWI_FREQ 1L // For future testing
// (the #define lines for MPR084_ADDR and regINFO were lost in extraction)

void setup() {
  Serial.begin(19200); // For reporting
  Serial.print("MPR084 test.\n");
  Wire.begin();
  Wire.beginTransmission(MPR084_ADDR);
  Wire.send(regINFO); // This would be my command byte
  Wire.endTransmission();
  Wire.beginTransmission(MPR084_ADDR); // Now I send my read command
  Wire.requestFrom(regINFO, 16); // Not sure how much data there is, let's try this.
  while (Wire.available()) {
    char c = Wire.receive();
    if (c != 0) {
      Serial.print(c);
    }
  }
}

void loop() {
  // Nothing to see here, move along...
}
The trick- and the point of this whole article- is how does this sketch translate to the message format? What electrical signals actually traverse the SDA and SCL lines of the Arduino and the MPR084?
In order to fully understand the Wire library code, we need to understand what it needs to do to the MCU, to get it set up.
Review of the datasheet () for the processor shows how the Two Wire system works. To note are:
So now we know: why would you not want to roll your own I2C code? Because not only is it done for you with the Two Wire library, but it's done better because it's done asynchronously. Don't tie up your 16 MIPS ATmega328! Let the TWI hardware handle the communication asynchronously via the Two Wire library. How slick is that? Very slick. (BTW, if you want to see some roll-your-own code, Google is your friend, a nice example is here:)
Here are some of the registers involved with the Two Wire Interface (TWI) hardware module:

TWBR - Two Wire Bit Rate Register: controls the SCL clock frequency
TWCR - Two Wire Control Register: enables and controls the TWI hardware
TWSR - Two Wire Status Register: reports the state of the TWI bus (and holds the two prescaler bits)
TWDR - Two Wire Data/Address Register: contains the data you want to transmit or have received
TWAR - Two Wire (Slave) Address Register: holds our own slave address
To dig into this further, I'm going to need to go to the source. That's right, the very code that makes the Wire library be all... wire-y. I've got to dig in and see what goes on.
I run the Arduino development platform on a MacBook Pro. In perusing the Arduino installation for information, I stumbled into this handy directory:
/Applications/Arduino.app/Contents/Resources/Java/libraries/Wire. Your platform will have a very similar path, starting where you installed it (on Windows machines, I'll guess the "Program Files" folder.) And what should I find in there, but the following files and directories:
Wire.cpp examples/ keywords.txt Wire.h gccdump.s utility/
And what should I find in the utility directory but the following files:
twi.c twi.h
...note that files ending in ".cpp" generally contain C++ programming language source code. Files ending in .s are assembly language code, files with a .h are header files, and files ending in .c are C programs. As we're dealing with the Wire library here, I'm guessing that the Wire.cpp file is the source code for it- and, perusing it, I discover that it is!
So this is great. Wire.cpp is the first place to go to show us what is going on.
Using my favorite text editor, I open the file and search for "begin". This is what I find:
// Public Methods //////////////////////////////////////////////////////////////

void TwoWire::begin(void)
{
  rxBufferIndex = 0;
  rxBufferLength = 0;

  txBufferIndex = 0;
  txBufferLength = 0;

  twi_init();
}
...what the heck is that? Well, without getting too far into the nitty-gritty of C++ and object-oriented programming (OOP), this is a "method". If you don't know anything about OOP, just replace "method" with the word "function" or "subroutine" and you basically have the right idea. This:
void TwoWire::begin(void) means, "I am defining a method (aka, 'function') that returns nothing (aka, 'void'), is part of the TwoWire class, has the name of 'begin', and takes nothing (aka, 'void') for its arguments." That sounds dandy, but what does that have to do with our sketch? Remember that in our sketch, the first statement we execute that has to do with the Two Wire library is
Wire.begin(). But we've found
void TwoWire::begin(void). What's going on?
The answer lies at the end of the Wire.cpp file. There, we find a statement that goes like this:
TwoWire Wire = TwoWire();
This means, "create a TwoWire object and name it Wire that's of the TwoWire class type." Again, if you're not an OOP programmer, don't worry about the fact that there are two "TwoWire"s in the statement. Suffice it to say that we have an object called Wire. Now if we want to use any of its methods, we simply call it like this:
object.method
Where our object is called
Wire and the method we want to run (with no arguments) is called
begin, we simply do:
Wire.begin()
There. Now you know: when you program using the Wire library, you are doing C++ programming.
So, the Wire.begin() method call simply sets 4 variables to 0, then calls twi_init().
Things now get a little more complicated.
twi_init() is not of the format
object.method. What is it, and where is its code? Judicious use of UNIX command line tools (essentially, "grep") shows that twi_init() can be found in the
utility/twi.c file. Thus the C++ method calls a C programming language function, and this is where things get delicious.
The code is actually fairly hairy, requiring study of the Atmel ATmega328 datasheet and a lot of digging through the source code and header files (those that end in .h), which contain a lot of definitions of the components in the above code. Since we're using an Arduino Duemilanove with its ATmega328, the twi_init code can be shortened to this:

void twi_init(void)
{
  // initialize state
  twi_state = TWI_READY;

  // activate the two ports for the TWI (PC4 = SDA, PC5 = SCL)
  sbi(PORTC, 4);
  sbi(PORTC, 5);

  // initialize twi prescaler to 1
  cbi(TWSR, TWPS0);
  cbi(TWSR, TWPS1);

  // set the TWI bit rate
  TWBR = ((CPU_FREQ / TWI_FREQ) - 16) / 2;

  // enable the twi module, the twi interrupt, and TWI acknowledgements
  TWCR = _BV(TWEN) | _BV(TWIE) | _BV(TWEA);
}
So let's put it together. We have:
Wire.begin() calls twi_init() which calls some sbi and cbi functions, and some stuff that looks like registers.
Armed with the source code and the ATmega328 data sheet, then, we can decipher twi_init() like so:
sbi is defined as
#define sbi(sfr, bit) (_SFR_BYTE(sfr) |= _BV(bit)) (in the
utility/twi.c file) so that
sbi(PORTC, 4) becomes:
_SFR_BYTE(PORTC) |= _BV(4)
Here are the other definitions in this statement: (these definitions are created in the header ".h" files, and look something like this:
#define PORTC _SFR_IO8(0x08))
So, finally, tracking all these definitions and definitions-of-definitions to their source, we have:
sbi(PORTC,4) is
_SFR_BYTE(PORTC) |= _BV(4)
_SFR_BYTE(PORTC) |= (1 << (4))
_SFR_BYTE(_SFR_IO8(0x08)) |= (1 << (4))
_SFR_BYTE((0x08) + __SFR_OFFSET) |= (1 << (4))
_SFR_BYTE((0x08) + 0x20) |= (1 << (4))
_MMIO_BYTE(_SFR_ADDR((0x08) + 0x20)) |= (1 << (4))
_MMIO_BYTE(_SFR_MEM_ADDR((0x08) + 0x20)) |= (1 << (4))
_MMIO_BYTE( ((uint16_t) &((0x08) + 0x20)) ) |= (1 << (4))
(*(volatile uint8_t *)((uint16_t) &(0x08 + 0x20))) |= (1 << 4)
Ugh. That last line: what does it mean? Strangely, this line is easier to read starting from the right and moving to the left. First, we have the (1 << 4). This means we take an integer "1" and shift it left 4 bits. An integer "1" is a two-byte value (see). So we end up with
0000000000010000 in binary. The "|=" operator says that you need to perform a boolean "or" operation on this with whatever is on the left, and stick it into whatever is on the left.
The thing on the left, besides being a qualitative mess, is quantitatively just two numbers: 0x08 ("8", in hexadecimal) and 0x20 (hexadecimal as well). What is 0x20 in decimal, you may ask? Don't ask. You're in computer-land now, the land of bits and bytes, and decimal is of no use here. Suffice it to say two things: 1.) You won't need to convert this number to decimal because you'll never use the decimal value, and 2.) 0x08 + 0x20 is 0x28, which makes life easy.
So we take our result 0x28 and perform "&" on it, which means that it is a reference (an address), because it is part of the lvalue (the stuff to the left of the |= assignment; see). The
uint16_t is a "cast" that ensures that the result is considered a 16-bit unsigned integer type. That is then cast to a pointer to a "volatile" unsigned 8-bit integer. Thus two things have taken place: First, we have lopped off the leftmost 8 bits of this 16-bit value, which is fine because the PORTC register is 8 bits wide. Secondly, we have declared that the contents of the register can change at any time, so if the compiler attempts to optimize this section of code it will not assume that this value, once set, is what it was when it was set (which is important if you are trying to test the register's value: you'll always want to read it, instead of making any assumptions). So we're taking the contents of 0x28 and setting the 5th bit from the right to "1". If it's already 1, it stays a 1. So why is 0x28 important? What is the significance?
It happens that I/O registers are addressed as memory (p. 20 of the ATmega328 data sheet). And, "When addressing I/O Registers as data space using LD and ST instructions, 0x20 must be added to these addresses." ...again from p. 20. So, referring to the Register Summary on p. 532, we see that 0x08... or, 0x28 when addressed using the LD and ST (load and store) instructions, is... wait for it... PORTC. So we're setting the 5th bit from the right of PORTC, which is called bit 4 because the first bit is bit 0. This command, then, sets PORTC4. Now we know that we have to give up two Arduino pins for the TwoWire library, and the description in section 13 starting on page 76 describes the ports. On page 87, we see the description for Port C, Bit 4: "SDA, 2-wire Serial Interface Data: When the TWEN bit in TWCR is set (one) to enable the 2-wire Serial Interface, pin PC4 is disconnected from the port and becomes the Serial Data I/O pin for the 2-wire Serial Interface." Ah-hah! Finally! After all those definitions, we arrive at one fact: PORTC Bit 4 enables the SDA line for the TwoWire interface. And, as it happens, PORTC Bit 4 is pin 27 of the ATmega328 28-pin DIP package, as found in the Arduino Duemilanove. Pin 27 is connected to the Arduino I/O pin 5 on jumper J2, which is labelled Analog In 4 on my Duemilanove.
sbi(PORTC, 4);
In summary: According to the top Wire library reference page, "On most Arduino boards, SDA (data line) is on analog input pin 4". This command sets bit 4 in the PORTC register such that "pin PC4 is disconnected from the port and becomes the Serial Data I/O pin for the 2-wire Serial Interface." (p. 87 in the ATmega328 datasheet).
Holy smokes, what an involved and convoluted path we have taken to get here! Was it worth it? Perhaps not, at first blush. But since you're working on a computer I'll guess you are comfortable with abstractions- the abstractions that, say, involve taking a chunk of silicon, metal, plastic, and electricity which enable you to read these words on the World Wide Web with nary a thought to the various gates and signals and protocols and software layers (firmware, languages, other languages on top of those languages) that were required to allow you to surf the web. You, dear reader, are reading this quite far from the bits and bytes and signals and gates that are now enabling your Internet experience, and in much the same way, to the programmer writing
sbi(PORTC, 4)
beats the heck out of writing
(*(volatile uint8_t *)((uint16_t) &(0x08 + 0x20))) |= (1 << 4)
because all the abstractions that have enabled him to write the former make it a whole lot simpler and more readable in the long run. The abstractions in this case are all the lines that transform this
(*(volatile uint8_t *)((uint16_t) &(0x08 + 0x20))) |= (1 << 4) into this
sbi(PORTC,4).
twi_init(), Following Lines
At the risk of repeating myself, I'll repeat the twi_init code here:

void twi_init(void)
{
  // initialize state
  twi_state = TWI_READY;

  // activate the two ports for the TWI (PC4 = SDA, PC5 = SCL)
  sbi(PORTC, 4);
  sbi(PORTC, 5);

  // initialize twi prescaler to 1
  cbi(TWSR, TWPS0);
  cbi(TWSR, TWPS1);

  // set the TWI bit rate
  TWBR = ((CPU_FREQ / TWI_FREQ) - 16) / 2;

  // enable the twi module, the twi interrupt, and TWI acknowledgements
  TWCR = _BV(TWEN) | _BV(TWIE) | _BV(TWEA);
}
The first line of this function is a simple variable assignment; no hocus-pocus there. We have just done a deep dive into the second line,
sbi(PORTC, 4);.
We can see that the third line is essentially the same, only this time we set bit 5 of PORTC: this configures the second I/O pin to be SCL, and so both of the MCU's pins are now set up to serve as I/O for its Two Wire hardware. We will cover the first part of the condition, "When the TWEN bit in TWCR is set...", in a little while. Assuming the TWEN bit is set, our I/O pins should be good to go.
cbi(TWSR, TWPS0)
We'll explain this line
cbi(TWSR, TWPS0); without going into all the gross detail of the
sbi line. The definitions of that line are as follows:
(see (path on your platform to)
/hardware/tools/avr/avr/include/avr/iom328p.h)
Tracking all the definitions and definitions-of-definitions to their source, in the same way that
sbi(PORTC,4) was done, we have:
cbi(TWSR, TWPS0) is
#define cbi(sfr, bit) (_SFR_BYTE(sfr) &= ~_BV(bit))

_SFR_BYTE(TWSR) &= ~_BV(TWPS0)
_SFR_BYTE(0xB9) &= ~_BV(0)
_SFR_BYTE(0xB9) &= ~(1 << 0)
_SFR_MEM8(0xB9) &= ~(1 << 0)
_MMIO_BYTE(0xB9) &= ~(1 << 0)
(*(volatile uint8_t *)(0xB9)) &= ~(1 << 0)
(*(volatile uint8_t *)(0xB9)) &= ~(b00000001 << 0)
(*(volatile uint8_t *)(0xB9)) &= ~(b00000001)
(*(volatile uint8_t *)(0xB9)) &= b11111110
0xB9 is the address of the TWSR (Two Wire Status Register). What we are doing here is doing a Boolean AND with the binary number
b11111110, which sets the TWPS0 (0'th) bit of the TWSR to 0 (see p. 244 of the ATmega328 datasheet).
cbi(TWSR, TWPS1)
Likewise the following command
cbi(TWSR, TWPS1) will ensure that the value of the TWPS1 (1st) bit of the TWSR is 0.
TWBR = ((CPU_FREQ / TWI_FREQ) - 16) / 2;
The two lines above set the value of the Two Wire Bit Rate Generator "prescaler" to 1. The next line is
TWBR = ((CPU_FREQ / TWI_FREQ) - 16) / 2;
The CPU_FREQ and TWI_FREQ values come from
utility/twi.h, and are:
CPU_FREQ 16000000L
TWI_FREQ 100000L
Appropriately-comma'ed for readability, these are: 16,000,000 and 100,000 respectively. Thus the equation resolves to 72, and the TWBR is set to that value. Here is how line 7 resolves:
TWBR = ((16,000,000 / 100,000) - 16) / 2
TWBR = (160 - 16) / 2
TWBR = 144 / 2
TWBR = 72
_SFR_MEM8(0xB8) = 72
_MMIO_BYTE(0xB8) = 72
(*(volatile uint8_t *)(0xB8)) = 72
So our clock (SCL) frequency will be SCL = CPU clock / (16 + 2(TWBR) ⋅ (PrescalerValue)). Our clock is 16 MHz, TWBR is 72, Prescaler Value is 1, so we have:
SCL = 16,000,000 / (16 + 144 * 1)
SCL = 16,000,000 / 160
SCL = 100,000
This is a rather pedestrian I2C frequency of 100 kHz, which is what TWI_FREQ was defined as. If you want to change the clock frequency of the I2C bus, you can issue your own sbi/cbi commands after
Wire.begin() and adjust the Prescaler Value according to Table 21-7 on p. 244 of the ATmega328 datasheet. Or, you can redefine the TWI_FREQ value, and run the
TWBR line again in your own sketch. For example,
TWBR = ((CPU_FREQ / 200000 ) - 16) / 2; to set the I2C clock frequency to 200 kHz (the equation evaluates to 32, so you could also simply put this line in your sketch immediately following
Wire.begin():
TWBR=32; /* == 200kHz SCL frequency */).
Note that TWBR is an 8-bit register, so its maximum value is 255. With a prescaler value of at most 64, this means our SCL frequency can be, at a minimum, about 490 Hz (16,000,000 / (16 + 2 * 255 * 64)).
TWCR = _BV(TWEN) | _BV(TWIE) | _BV(TWEA);
This code sets the bits in the TWCR, the Two Wire Control Register, mentioned in the TWI Hardware Registers section. And as mentioned in Line 3, above, here we go: You can guess that by the
_BV(TWEN) statement, we are enabling the TWEN bit. At this point the Two Wire hardware should be enabled, the I/O pins are connected to it, and we should be ready for I2C communication. Let's take a quick dive into this statement, like we did with the previous ones.
TWCR = _BV(TWEN) | _BV(TWIE) | _BV(TWEA); is:
#define TWCR _SFR_MEM8(0xBC)
#define TWEN 2
#define TWIE 0
#define TWEA 6
#define _BV(bit) (1 << (bit))

_SFR_MEM8(0xBC) = _BV(TWEN) | _BV(TWIE) | _BV(TWEA);
_MMIO_BYTE(0xBC) = _BV(2) | _BV(0) | _BV(6);
_MMIO_BYTE(0xBC) = (1 << 2) | (1 << 0) | (1 << 6);
(*(volatile uint8_t *)(0xBC)) = (b00000001 << 2) | (b00000001 << 0) | (b00000001 << 6);
(*(volatile uint8_t *)(0xBC)) = (b00000100) | (b00000001) | (b01000000);
(*(volatile uint8_t *)(0xBC)) = b01000101;
Note that "|" is a bitwise OR operation. So we are setting the TWEN, TWIE, and TWEA bits in the TWCR register. Refer again to our favorite ATmega328 datasheet, p. 243.
The TWEN flag, BIT 2, "enables TWI operation and activates the TWI interface. When TWEN is written to one, the TWI takes control over the I/O pins connected to the SCL and SDA pins..." Thus when TWCR is configured, the work done earlier to set the SDA and SCL lines are activated (see e.g.
sbi(PORTC, 4);).
The TWIE bit, bit 0, enables the TWI interrupt. Thus, the TWI hardware will send interrupts to the MCU as long as the I-bit in the SREG is set. The SREG is the AVR Status Register, and the I-bit is bit 7, the "Global Interrupt Enable". If this bit is cleared, no interrupts are serviced. If set, interrupts are serviced for those particular interrupts that are enabled (as here, where we enable the TWI interrupt). The I-bit is cleared by hardware after an interrupt has occurred, and is set by the RETI (return from interrupt) instruction at the end of an interrupt service routine.
The TWEA bit, bit 6, controls the generation of the acknowledge pulse. The acknowledge pulse is sent from the slave to the master after receiving its address, or sent by the master to the slave after receiving data. Turning this bit off essentially disconnects the ATmega328 from the I2C bus.
Wire.begin()
And there we have it. Line 9 of
twi_init() is the last line of that function; it returns, and Wire.begin() returns, and we are on to the next line in the sketch. To sum it up, then,
Wire.begin() has accomplished the following:
- Set the following variables in Wire.begin():
    rxBufferIndex = 0;
    rxBufferLength = 0;
    txBufferIndex = 0;
    txBufferLength = 0;
- Perform the following tasks in twi_init():
  + Set the variable twi_state = TWI_READY;
  + Activate the two ports for the TWI, analog input 4 for SDA and 5 for SCL:
      sbi(PORTC, 4);
      sbi(PORTC, 5);
  + Initialize the TWI prescaler to 1:
      cbi(TWSR, TWPS0);
      cbi(TWSR, TWPS1);
  + Set the TWI bit rate to 100 kHz:
      TWBR = ((CPU_FREQ / TWI_FREQ) - 16) / 2;
  + Enable the TWI module, the TWI interrupt, and TWI acknowledgements:
      TWCR = _BV(TWEN) | _BV(TWIE) | _BV(TWEA);
Notice that we've done a lot of work, but not a single bit has traversed the TWI pins, not yet. Tune in to the next section to see how that takes place!
Wire.beginTransmission()
Our next line in our Sketch is
Wire.beginTransmission(MPR084_ADDR). Let's break it down:
Wire.beginTransmission() is found, of course, in the Wire.cpp file. It is another method in the Wire object. Refer back to the beginning of our deep dive for more information. This method's source goes like this:
void TwoWire::beginTransmission(int address)
{
  beginTransmission((uint8_t)address);
}
Hmm. Not very helpful. It simply casts the address to an unsigned 8-bit integer type, then calls the method
beginTransmission() with an unsigned 8-bit integer argument. Fair enough, now what does THAT do? Here it is:
void TwoWire::beginTransmission(uint8_t address)
{
  // indicate that we are transmitting
  transmitting = 1;
  // set address of targeted slave
  txAddress = address;
  // reset tx buffer iterator vars
  txBufferIndex = 0;
  txBufferLength = 0;
}
Quite simple really.
beginTransmission(), then, does nothing more than set a number of variables in the Wire object, including assigning our MPR084 address to the txAddress variable.
So ends our examination of the
Wire.beginTransmission() call. Very little to see here; let's move on.
Wire.send(regINFO)
This method is defined as follows:
void TwoWire::send(uint8_t data)
{
  if(transmitting){
    // in master transmitter mode
    // don't bother if buffer is full
    if(txBufferLength >= BUFFER_LENGTH){
      return;
    }
    // put byte in tx buffer
    txBuffer[txBufferIndex] = data;
    ++txBufferIndex;
    // update amount in buffer
    txBufferLength = txBufferIndex;
  }else{
    // in slave send mode
    // reply to master
    twi_transmit(&data, 1);
  }
}
Note that we set the "transmitting" variable to 1, in the beginTransmission method. Also, BUFFER_LENGTH is defined in the Wire.h file as follows:
#define BUFFER_LENGTH 32. So we can see that we can send a maximum of 32 bytes at once. txBuffer is defined as
uint8_t TwoWire::txBuffer[BUFFER_LENGTH];.
What happens if you try to send more data than 32 bytes? You are simply ignored. This is not good programming practice at all, as this means there is no error checking. However, it could perhaps be forgiven since we are on a small computer, after all. Reducing the error checking speeds things up and uses less memory. Still, I'm not comfortable with this code.
Regardless, this code is a simple fill of the array txBuffer. It seems to me that calling it send() is a misnomer, but there you have it. At this point, no data has yet been sent over the TWI; we've merely done a bit of housekeeping. Essential housekeeping, mind you.
Specific to our sketch, we have placed the
regINFO data (0x14) into the txBuffer, incremented the txBufferIndex to 1, and set the txBufferLength to be 1 as well.
Wire.endTransmission()
The endTransmission() method is interesting, in that it finally seems like something happens on the TWI busses. Here is the method:
uint8_t TwoWire::endTransmission(void)
{
  // transmit buffer (blocking)
  int8_t ret = twi_writeTo(txAddress, txBuffer, txBufferLength, 1);
  // reset tx buffer iterator vars
  txBufferIndex = 0;
  txBufferLength = 0;
  // indicate that we are done transmitting
  transmitting = 0;
  return ret;
}
Not much going on there: A call to twi_writeTo(), setting some variables, and returning a return code. Much of the housekeeping work that we've done before comes into play here as arguments: The address of our I2C slave in the txAddress variable, our data in the txBuffer, and the length of the buffer in txBufferLength. So let's see what twi_writeTo does. Here's the function, with line numbers:
uint8_t twi_writeTo(uint8_t address, uint8_t* data, uint8_t length, uint8_t wait)
{
01.   uint8_t i;
02.
03.   // ensure data will fit into buffer
04.   if(TWI_BUFFER_LENGTH < length){
05.     return 1;
06.   }
07.
08.   // wait until twi is ready, become master transmitter
09.   while(TWI_READY != twi_state){
10.     continue;
11.   }
12.   twi_state = TWI_MTX;
13.   // reset error state (0xFF.. no error occurred)
14.   twi_error = 0xFF;
15.
16.   // initialize buffer iteration vars
17.   twi_masterBufferIndex = 0;
18.   twi_masterBufferLength = length;
19.
20.   // copy data to twi buffer
21.   for(i = 0; i < length; ++i){
22.     twi_masterBuffer[i] = data[i];
23.   }
24.
25.   // build sla+w, slave device address + w bit
26.   twi_slarw = TW_WRITE;
27.   twi_slarw |= address << 1;
28.
29.   // send start condition
30.   TWCR = _BV(TWEN) | _BV(TWIE) | _BV(TWEA) | _BV(TWINT) | _BV(TWSTA);
31.
32.   // wait for write operation to complete
33.   while(wait && (TWI_MTX == twi_state)){
34.     continue;
35.   }
36.
37.   if (twi_error == 0xFF)
38.     return 0;   // success
39.   else if (twi_error == TW_MT_SLA_NACK)
40.     return 2;   // error: address send, nack received
41.   else if (twi_error == TW_MT_DATA_NACK)
42.     return 3;   // error: data send, nack received
43.   else
44.     return 4;   // other twi error
}
Whoa. Heavy. Ok, let's dig:
twi_writeTo(), lines 1-7
A little bit of housekeeping. We declare a variable
uint8_t i;, and we do some error checking, returning early if the length of the data is too long:

if(TWI_BUFFER_LENGTH < length){
  return 1;
}
and that's it.
twi_writeTo(), lines 9-11
Here's the code:
while(TWI_READY != twi_state){
continue;
}
In twi_init(), we set
twi_state = TWI_READY;. twi_state is a variable whose scope is across all the functions in the twi.c file, declared at the beginning of the file:
static volatile uint8_t twi_state;. So here we can see that, because the variable was set in an earlier function, it does equal TWI_READY.
twi_writeTo(), lines 12-27
More setup and such. We set
twi_state = TWI_MTX;, where TWI_MTX is a number used internally to describe the current operational mode of the TWI circuitry. I believe TWI_MTX stands for "TWI master transmit."
The following lines should be self-explanatory by this point:
// reset error state (0xFF.. no error occurred)
twi_error = 0xFF;

// initialize buffer iteration vars
twi_masterBufferIndex = 0;
twi_masterBufferLength = length;

// copy data to twi buffer
for(i = 0; i < length; ++i){
  twi_masterBuffer[i] = data[i];
}
...In them, we see that we have a library variable (declared at the beginning of twi.c:
static volatile uint8_t twi_error;, so its scope covers all the functions in that file). This variable reflects the error state of the TWI hardware. We also set our buffer index to 0 and we have another variable in the function that gets assigned the value of the length of our data buffer. Finally, we fill the twi_masterBuffer (defined at the beginning of the twi.c file as
static uint8_t twi_masterBuffer[TWI_BUFFER_LENGTH];) with our data.
Next we have the following variable, declared at the beginning of twi.c:
static uint8_t twi_slarw;. In the I2C world "SLA+R/W" is defined as a slave address plus either a write bit or a read bit. Also in the I2C world, addresses are 7 bits long. So what we are doing is this code
// build sla+w, slave device address + w bit
twi_slarw = TW_WRITE;
twi_slarw |= address << 1;
is: start with the read/write bit (TW_WRITE, which is 0) in bit 0, then shift the 7-bit slave address left one position and OR it in above that bit.

Thus we'll have an 8-bit data stream that begins with the 7-bit address and ends with the R/W bit.
twi_writeTo(), line 30
TWCR = _BV(TWEN) | _BV(TWIE) | _BV(TWEA) | _BV(TWINT) | _BV(TWSTA);
This line should look somewhat familiar. In
Wire.begin() we did the following:
- Enable the twi module, the twi interrupt, and TWI acknowledgements.
TWCR = _BV(TWEN) | _BV(TWIE) | _BV(TWEA);
So we're redoing some of the work here. Thus the statement in Wire.begin() was unnecessary and, if we're concerned about every cycle on our Arduino, somewhat wasteful. But I don't think we're that concerned, because the I2C frequency is only 100 kHz.
We have some additional bits that we're setting: TWINT and TWSTA. Here's what they do:
Setting the TWINT bit clears the TWINT flag. That bears repeating: Setting the bit to a one, clears the TWINT flag. Clearing the flag starts the operation of the TWI. ...HEY! Finally! There it is! ...Clearing the flag starts the operation of the TWI. So now you know: the TWI hardware is started simply by setting a flag in the TWCR register (provided, of course, the other conditions are set such as enabling the TWI hardware with the TWEN bit and setting up the SDA and SCL pins).
Setting the TWSTA bit to one tells the TWI that it should become the master on the TWI bus. So the first thing it will do, provided the TWI bus is free, is generate a START condition. What is a START condition? The clock line SCL remains high, while the master changes the SDA line to low. This is unique because during normal data transfer, the SDA data line changes state only when the SCL clock line is low. When the data changes to low while the SCL is high, this is the signal to all I2C devices on the bus that a master is about to initiate a communication.
twi_writeTo(), lines 33-35
These lines:
while(wait && (TWI_MTX == twi_state)){
  continue;
}
are resolved as follows: The call to twi_writeTo() from Wire's endTransmission() method looks like this:
int8_t ret = twi_writeTo(txAddress, txBuffer, txBufferLength, 1);

Thus, wait = 1. And twi_state was set to TWI_MTX earlier, and TWI_MTX is defined as 2 in the twi.h file. So the boolean of our while() statement above looks like: while(1 && (2 == 2)){ ...and, since (2 == 2) returns a 1 in C and C++, and the boolean 1 && 1 is 1, and since 1 is equivalent to a boolean "true", what we have here is an infinite loop. There's nothing to show that those values will change. So we're stuck.
SIGNAL(TWI_vect)
The only way for our processor to stop narcissistically talking to itself with its endless test of two equivalent values is to interrupt it. But how will it get interrupted? And what will it do once interrupted?
To answer the second question, let's take a bit of a journey. On line 332 of twi.c we see the following function:
SIGNAL(TWI_vect). Here's how it dereferences:
#define TWI_vect _VECTOR(24)       /* defined in iom328p.h */
#define _VECTOR(N) __vector_ ## N  /* sfr_defs.h; ## is the preprocessor "token pasting" operator */
/* so TWI_vect becomes __vector_24 */

/* the following comes from interrupt.h */
#define __INTR_ATTRS used, externally_visible
#define SIGNAL(vector) \
  void vector (void) __attribute__ ((signal, __INTR_ATTRS)); \
  void vector (void)

/* so SIGNAL(TWI_vect) expands to: */
void __vector_24 (void) __attribute__ ((signal, used, externally_visible));
void __vector_24 (void)
The result is interesting. It says that we have a function __vector_24 which returns nothing and takes nothing as an argument. However, we are telling the compiler that we are smarter than it with the "__attribute__" declaration. We are saying the following: "signal" tells GCC that this function is an interrupt handler, so it generates the proper interrupt prologue and epilogue code for it; "used" tells the optimizer to keep the function even though nothing in the source appears to call it; and "externally_visible" keeps its symbol visible to the linker even under whole-program optimization.

Ok, now we know that our __vector_24 function has some attributes:
void __vector_24 (void) __attribute__ ((signal, used, externally_visible));
Now what? Setting aside the attribute declaration for a moment, we plug the rest of the line into line 332 of
utility/twi.c. This is merely:
void __vector_24 (void). The GCC preprocessor changes what was once:
SIGNAL(TWI_vect)
{
  switch(TW_STATUS){
    // All Master
    case TW_START: // sent
(...etc...)
into
void __vector_24 (void)
{
  switch(TW_STATUS){
    // All Master
    case TW_START: // sent
(...etc...)
__vector_24, oh __vector_24. How you have hurt my brain. Ok, this gets even gorier, but here's my understanding:
__vector_24 is defined in no C or C++ source or header file. Rather, it's part of the AVR-libc library. In there is some assembly language code with a jump table. The file is
crt1/gcrt1.S, if you're interested. Google it. You can download the source for the avr-libc library and see it. Anyway, the jump table basically says that calling __vector_24 is equivalent to doing the following assembly language command:
jmp __vector_24
Meaning, "jump to the memory location specified by __vector_24 and run the code there." Now somehow, I believe that the assembly language jump table is closely related to the Interrupt Vectors of the ATmega chip, as listed in the Interrupt Vectors table on p. 66 of the ATmega328 datasheet. As a matter of fact, in there we see this entry:
Vector No.  Program Address  Source  Interrupt Definition
25          0x0030           TWI     2-wire Serial Interface
Now I note that this is vector number 25, with a program address of 48 (decimal), and I note that the iom328p.h file has all the Interrupt Vectors listed in order exactly the same as the datasheet. I also note that the integer in my __vector_24 function name is half of 48- the program address number- and only 1 off from the vector number. So what I think is happening is that the assembly language jump table is getting laid into memory exactly where it should be such that the 16-bit memory address of my __vector_24 function winds up in location 0x0030. So that when a TWI interrupt takes place, the machine will grab the contents of address 0x0030 and that will point to the beginning of my function __vector_24, which is all the code that has been written in
utility/twi.c starting on line 332.
Now, let's talk about Question 1:
The answer to how we're interrupted is simple. Again in
Wire.begin() we enabled the TWI hardware interrupts:
- Enable the twi module, the twi interrupt, and TWI acknowledgements.
TWCR = _BV(TWEN) | _BV(TWIE) | _BV(TWEA);
And note, from the ATmega328 datasheet: "Interrupts are issued after all bus events, like reception of a byte or transmission of a START condition..." (p. 224). So, we have the TWI interrupt enabled, and we have just transmitted a START condition.

Therefore, assuming the Global Interrupt Enable bit is set in SREG (and it should be at this point), we will get interrupted after the START condition is sent.
Whew. Which is all to say this: when the TWI hardware raises its interrupt, the code the processor runs is SIGNAL(TWI_vect) (our "interrupt handler").
SIGNAL(TWI_vect) in depth
Remember that up until this point we have sent the TWI START condition: The SDA line is low while the clock remains high. Then an interrupt is sent, and the CPU looks in its interrupt handler vector table to see where to go to handle the interrupt. For this interrupt, it goes to __vector_24, which is the SIGNAL routine in utility/twi.c.
The first line of SIGNAL is the following:
switch(TW_STATUS)
So SIGNAL's behavior is dependent on the value of TW_STATUS, which is defined as (in hardware/tools/avr/avr-4/include/util/twi.h):
#define TW_STATUS (TWSR & TW_STATUS_MASK)
#define TW_STATUS_MASK (_BV(TWS7)|_BV(TWS6)|_BV(TWS5)|_BV(TWS4)|_BV(TWS3))
#define TWS3 3
#define TWS4 4
#define TWS5 5
#define TWS6 6
#define TWS7 7
So TW_STATUS looks at the top 5 bits of the TWI Status Register; the boolean AND with the TW_STATUS_MASK guarantees that the bottom 3 bits of TW_STATUS are zero.
"When the TWINT Flag is asserted, the TWI has finished an operation and awaits application response. In this case, the TWI Status Register (TWSR) contains a value indicating the current state of the TWI bus..." Thus speaketh the ATmega328 datasheet.
SIGNAL's function is now clear: It is called by the TWI hardware via an interrupt. By that point, the TWSR will have been loaded with a value indicating the state of the TWI bus. The switch statement will compare the value of TWSR with a number of cases, eg "
case TW_START:", and if there's a match it will perform that section of code. In short:
Now all the TWI hardware has done is sent the START condition. According to the ATmega328 datasheet, "After a START condition has been transmitted, the TWINT Flag is set by hardware, and the status code in TWSR will be 0x08...." In the (path on your machine to)
/hardware/tools/avr/avr-4/include/util/twi.h, we see:
#define TW_START 0x08
So we are now at this point in the
SIGNAL(TWI_vect) code:
case TW_START:     // sent start condition
case TW_REP_START: // sent repeated start condition
  // copy device address and r/w bit to output register and ack
  TWDR = twi_slarw;
  twi_reply(1);
  break;
So we fill TWDR with our address and our r/w bit. Recall what TWDR is:
Two Wire Data/Address Register - contains the data you want to transmit or have received
Then we run
twi_reply(1). Its code looks like:
void twi_reply(uint8_t ack)
{
  // transmit master read ready signal, with or without ack
  if(ack){
    TWCR = _BV(TWEN) | _BV(TWIE) | _BV(TWINT) | _BV(TWEA);
  }else{
    TWCR = _BV(TWEN) | _BV(TWIE) | _BV(TWINT);
  }
}
Notice that our argument to twi_reply is 1, therefore within the function,
ack is true (== 1). Recall above, in the twi_writeTo() section, that we set the TWCR similarly to our TWCR statement in twi_reply. So here is what each bit does:
TWEN: Enables the TWI hardware. Which it already is, but we certainly don't want to set it to 0!
TWIE: Enables the TWI interrupt.
TWINT: Setting this bit clears the TWINT flag, and starts the operation of the TWI hardware.
TWEA: Controls generation of the TWEA pulse.
Now, regarding what the ATmega328 datasheet says here: I think the writers of the datasheet were confused (which confuses me), because they say that "...the TWINT bit should be cleared (by writing it to one)...", whereas elsewhere in the datasheet they say that you write the TWINT bit to one to clear the TWINT flag. Anyway...
So, we have SLA+W (the slave address of the MPR084 device, plus the Write bit) in the TWDR, we set the TWCR, and the TWI hardware should continue the conversation.
In SIGNAL(TWI_vect), the next line after the twi_reply(1) is simply a
break. So we exit out of SIGNAL and back to our busy wait state in twi_writeTo(). Stuck again? Perhaps, but now we know how TWI interrupts the processor. So, after the twi_reply() takes place, we must receive another interrupt.
Recall that we have written the value "regINFO" (the MPR084 register that we're interested in) into the txBuffer.
This interrupt takes place after the Slave Address plus Write bit have been written to the I2C bus. So the TWSR should contain information that the slave receiver acknowledged our information, and in the SIGNAL function, the following case section of code should be used:
// Master Transmitter
case TW_MT_SLA_ACK:  // slave receiver acked address
case TW_MT_DATA_ACK: // slave receiver acked data
  // if there is data to send, send it, otherwise stop
  if(twi_masterBufferIndex < twi_masterBufferLength){
    // copy data to output register and ack
    TWDR = twi_masterBuffer[twi_masterBufferIndex++];
    twi_reply(1);
  }else{
    twi_stop();
  }
  break;
Our twi_masterBufferLength is 1, and the Index is 0. So we perform:
// copy data to output register and ack
TWDR = twi_masterBuffer[twi_masterBufferIndex++];
twi_reply(1);
and, just like before with the TW_START, the twi_reply(1) will send the contents of TWDR out onto the I2C bus. This will be the register we are interested in, regINFO, or 0x14.
Again, we should get interrupted once the data from the twi_reply() is sent over the I2C bus. Now our twi_masterBuffer has been exhausted - we only had 1 byte of data in it - so the twi_masterBufferIndex is no longer less than the twi_masterBufferLength. So we do not set the TWDR, and we do not reply over the I2C bus.
Instead, we stop, as per this code. Note that the twi_masterBufferIndex is not less than the twi_masterBufferLength, so the else condition is reached:
if(twi_masterBufferIndex < twi_masterBufferLength){
  // copy data to output register and ack
  TWDR = twi_masterBuffer[twi_masterBufferIndex++];
  twi_reply(1);
}else{
  twi_stop();
}
break;
The TWI STOP is like the TWI START condition, only the flags are different:
TWCR = _BV(TWEN) | _BV(TWIE) | _BV(TWEA) | _BV(TWINT) | _BV(TWSTO);
And so ends this I2C communication session.
This is an unfortunate state of affairs because I am now convinced my sketch is wrong. I don't want to stop at this point, I should now be ready to read data bytes from the MPR084. Shoot. I have more work to do on my sketch. Where to go from here? I don't know, it's late and I'm tired so I'll have to read up some more.
(...later...) In the MPR084 datasheet, on p. 7 it says,
However, on p. 8 they show the communication with the MPR084. Notice the schematics in the mpr084 section, labelled "Figure 10" and "Figure 11". The R/~W bits are not the same. In those schematics, if a write was sent first for each operation - that is, for both Write and Read - the bit should remain the same. But it is not. Given this state of affairs, I first concluded that the text in the datasheet is wrong, and that I must necessarily send a read command from the beginning. However, on close analysis, the text is correct but the diagrams are off. What needs to happen is this:
It turns out that I am correct. Without further ado, here is the working sketch (the MPR084_ADDR value shown is an assumption - use the slave address your AD0 wiring selects):

#include <Wire.h>

#define MPR084_ADDR 0x5C  // assumed slave address - check your AD0 wiring
#define regINFO 0x14      // the MPR084 register we are interested in

void setup() {
  delay(1000);
  Serial.begin(19200);  // For reporting
  Serial.print("MPR084 test.\n");
  Wire.begin();
  Wire.beginTransmission(MPR084_ADDR);
  Wire.send(regINFO);   // This would be my command byte
  Wire.endTransmission();
  Wire.requestFrom(MPR084_ADDR, 16); // Not sure how much data there is, let's try this.
  while (Wire.available()) {
    char c = Wire.receive();
    if (c != 0) {
      Serial.print(c);
    }
  }
}

void loop() {
  // Nothing to see here, move along...
}
and my glorious I2C output, as seen in the Arduino serial monitor window, looks something like this:
VER:1_0_0Freescale,PN:MPR084,QUAL:EXTERNAL,
So, where did I go wrong? Why didn't it work the first time? Upon review, it seems my mistakes were few but costly:
I was misusing the readFrom() method, and was sending it a register address instead of a device address. This is because I thought I needed to send the MPR084 slave address again using beginTransmission(). I didn't realize that beginTransmission() set the slave address in an internal Wire library variable, and therefore remembered it for subsequent method calls.
Having finally spoken to my I2C device, where does one go? Onward, to greater horizons, of course! Now I am able to speak more confidently with my I2C device. I can construct my project. I am able to purchase other devices and create more gadgets. And, regarding the Wire library, I have a few ideas:
The beginTransmission() call really doesn't begin the transmission.
(to be continued)
|
http://playground.arduino.cc/Code/ATMELTWI
|
CC-MAIN-2016-50
|
refinedweb
| 7,798
| 72.36
|
?
In case you missed it, here is the “React Inception” pattern (thanks for the great name, @mctaylorpants). The CommentForm component itself is elided; the interesting part is how the app gets rendered:

// Handle updated values by re-rendering the app
function render(value) {
  ReactDOM.render(
    <CommentForm value={value} onChange={render} />,
    document.getElementById("react-app")
  )
}

// Initialise the app by rendering it
render({
  name: "James",
  message: "",
})
Think you know where the state is stored? Check your answer by touching or hovering your mouse over this box:
Each time we call ReactDOM.render with the new value, we write it to the DOM. The DOM then merges it with the requested changes on every keystroke, and passes it back to the app via an onChange handler.
Okay. So the app isn’t stateless. That means the pattern is basically useless, right?
Not so fast. As I mentioned last week, having all our state in one place is pretty handy. But that brings us to the bonus question:
So why don’t people do this?
I’d try and explain this, but Joe Shelby beat me to the punch in a comment on last week’s article:
When you get to large apps, having EVERY single component look through the part of the ‘value’ object to see if anything has changed would be impressively slow.
To get an idea of why this is, you need to remember that every JSX statement in a React file is actually a function call, which creates a
ReactElement object.
By rendering your page from the root, you need to run the
render method of every component on your page. Which will then create a new
ReactElement object for every JSX element on your page. Which will then be diffed with all the React Elements on the previous version of your page. And all you wanted to do was add an
active class to your spinner div.
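To make the "JSX is function calls" point concrete, here is roughly the shape of the transformation, using a stand-in createElement rather than React's real implementation:

```javascript
// A stand-in for React.createElement, just to show the shape of the
// transformation - this is NOT React's actual implementation.
function createElement(type, props, ...children) {
  return { type, props: Object.assign({}, props, { children }) }
}

// JSX like <div className="active">hi</div> compiles to a call like:
const el = createElement("div", { className: "active" }, "hi")

console.log(el.type)            // "div"
console.log(el.props.className) // "active"
```

Multiply element objects like this across every component on the page, on every keystroke, and the cost of re-rendering from the root adds up.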
This is a big reason why people use
setState in the real world.
setState lets you update a subtree of your react app, without re-rendering the whole thing. But it also lets you use react-router.
What has react-router got to do with
setState?
If you’ve ever used react-router, one of the things you may have noticed is that it doesn’t let you pass
props to your components. Instead, you need to pass components. Here is an example from the docs:
ReactDOM.render(( <Router history={browserHistory}> <Route path="/" component={App}> <Route path="about" component={About}/> <Route path="*" component={NoMatch}/> </Route> </Router> ), document.getElementById('root'))
In this example,
App,
About and
NoMatch are all React Components – i.e. the objects returned by
React.createClass, or classes which extend
React.Component. By passing components to routes, you’re able to easily configure routes at the very root of your application. But by only being able to pass components to routes, you also lose the ability to pass props from the parent component without using
context.
I’m not going to get into the merits/demerits of designing a router this way. But I will say that one consequence of doing things this way is that it removes the possibility of building your app using props passed all the way from the root. This means that the React Inception pattern won’t work. And honestly, this probably factors more into people’s decision to avoid the pattern than performance does.
So to summarise – the React Inception pattern is a fun novelty which doesn’t really have any place in the real world. Except that you may already be using it without realising it — just modified in a way to negate these issues.
Making React Inception Useful
To fix the two issues mentioned above, we’ll need to make two changes:
- We’ll need to split our render function into two parts — one to make changes to state, and one to handle changes in that state.
- We’ll need to make it possible to update the state instead of replacing it, as updates from one part of the application shouldn’t affect state used by other parts of the application.
Simple, right? And the implementation is even simpler:
class Store { constructor() { this.listeners = [] this.state = {} } subscribe(listener) { this.listeners.push(listener) // Return a function allowing us to unsubscribe return () => this.listeners.splice(this.listeners.indexOf(listener), 1) } updateState = (updates) => { this.state = Object.assign({}, this.state, updates) for (let listener of this.listeners) { listener(this.state) } } } const store = new Store()
To use the
store, pass its
updateState method through to components which need to store state. And then use
subscribe to pass the latest version of the state into the components.
As an example, this is how we’d implement our familiar React Inception pattern using a
Store:
store.subscribe(state => { ReactDOM.render( <CommentForm value={state} onChange={store.updateState} />, document.getElementById("react-app") ) }) // Set initial state store.updateState({ name: "James", message: "", })
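Stripped of the React bits, the Store's contract can be sanity-checked in plain Node. This re-implementation swaps the class-property arrow function for an ordinary method, but behaves the same for this purpose:

```javascript
// Minimal re-implementation of the Store from above, runnable without React.
class Store {
  constructor() {
    this.listeners = []
    this.state = {}
  }
  subscribe(listener) {
    this.listeners.push(listener)
    // Return a function allowing us to unsubscribe
    return () => this.listeners.splice(this.listeners.indexOf(listener), 1)
  }
  updateState(updates) {
    this.state = Object.assign({}, this.state, updates)
    for (let listener of this.listeners) {
      listener(this.state)
    }
  }
}

const store = new Store()
const seen = []
const unsubscribe = store.subscribe(state => seen.push(state))

store.updateState({ name: "James" })
store.updateState({ message: "hi" })      // merged into state, not replacing it
console.log(seen[1])                      // { name: 'James', message: 'hi' }

unsubscribe()
store.updateState({ message: "ignored" }) // no listeners left
console.log(seen.length)                  // 2
```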
But this just makes our existing code more complicated. What we really want to do is connect our store up to a component inside the application, speeding up our app and getting our data past react-router without using
props.
To do so, let’s follow Dan Abramov’s Containers pattern. This is akin to creating a “sub-application” which uses
setState to store the data for a group of state-free child components — like the
CommentForm from our previous app.
class CommentFormContainer extends React.Component { componentDidMount() { this.unsubscribe = store.subscribe(state => { this.setState(state.CommentFormContainer) }) } componentWillUnmount() { if (this.unsubscribe) { this.unsubscribe() } } handleChange(value) { store.updateState({ CommentFormContainer: value }) } render() { return ( <CommentForm value={this.state} onChange={this.handleChange} /> ) } }
Now place this Container somewhere inside an existing application and presto — (almost) stateless sub-app.
Wait a moment – haven’t I seen this before?
Container components? Aren’t they a Redux thing? And stores with a
subscribe method which hold the application state at the root of the app?
So I admit it. I’ve basically made a shit version of Redux.
Honestly, I wouldn’t use the code I’ve written above — Redux is better. Reasons:
- Instead of allowing direct mutation of the stored state, it forces you to write actions and reducers which make it easier to understand what-the-hell is going on.
- Redux gives you the
connect function to automate creation of container components (my container above is based on the internals of
connect).
- It gives you extensibility via middleware, and a whole bunch of neat tooling.
So what I’m saying is that if you want to manage state, Redux is the bees knees. By putting it all in one place, it becomes possible to share state between Containers and cache data between page views. And by keeping most of your components stateless, testing and design becomes a breeze.
But while Redux does a great job of managing your state, it still leaves you with the problem of structuring it.
5 Types of state
I have a theory: to understand how to structure state, you first need to understand where the state actually is.
As far as I can tell, there are five types of state — and the best way to structure state depends on which of the types it falls under.
Want to find out what the types are, and the patterns for structuring them? Then subscribe to my newsletter now to make sure you don’t miss the next instalment of State of React! And in return for your e-mail, you’ll immediately receive five!
Great post… thanks
|
http://jamesknelson.com/state-react-2-inception-redux/
|
CC-MAIN-2017-34
|
refinedweb
| 1,230
| 56.55
|
Comparing two HTTP codes is a particularly simple task. Since an HTTP code is simply an integer, we can use the built in C comparison functions <, >, <=, >=, ==, !=, for comparing whether one integer is less than, greater than, less than or equal to, greater than or equal to, equal to or not equal to a second integer, respectively.
In fact in our case there is an even simpler way of doing things. Since our comparison function is merely supposed to return a value less than zero, equal to zero or greater than zero depending on whether the first HTTP code is less than, equal to or greater than the second, respectively, we can simply return the difference between the two codes, which will automatically be a value of the required kind.
Comparing IP addresses is slightly more complicated. Here there are four numbers for each IP address. In many locations the first two or even three of these numbers remain the same, whilst the fourth number in the IP address is used to differentiate machines all on the same network, or in virtually the same location. Therefore if the first digits of two IP addresses differ, then there is not much point comparing the latter digits, and so on, since the machines are almost certainly in entirely different places, if not countries.
We can again use the same trick of subtracting the first digit of one IP address from that of the second IP address in order to perform our comparison. However if the difference is zero (i.e. the first digit for both addresses is the same) then we need to proceed to the second digits to see how they differ, and so on, right down to the fourth digits.
Our comparison function may now be implemented as follows:
int compare(logType * line1, logType * line2, int flag)
{
    if (flag == HTTP_FLAG)
        return (line1->http - line2->http);
    if (line1->ip.int1 != line2->ip.int1)
        return (line1->ip.int1 - line2->ip.int1);
    if (line1->ip.int2 != line2->ip.int2)
        return (line1->ip.int2 - line2->ip.int2);
    if (line1->ip.int3 != line2->ip.int3)
        return (line1->ip.int3 - line2->ip.int3);
    return (line1->ip.int4 - line2->ip.int4);
}
This function assumes that if the flag is not HTTP_FLAG then it is the other kind. It performs no check that a valid flag has been used. At each point in the function things only continue on if the last condition checked was not true, else the function returns with the correct value. Finally if the function gets to the last line, then the flag is not HTTP_FLAG, so we are presumably comparing IP addresses, and the other three tests have failed, meaning that the first three numbers in the two IP addresses being compared are the same. One then simply returns the result of subtracting the fourth numbers of the two IP addresses.
|
http://friedspace.com/cprogramming/compare2.php
|
crawl-001
|
refinedweb
| 478
| 61.06
|
From Capitol Column
Stone noted that there are things that the U.S. Government can do to help improve the situation in Mexico, including legalizing marijuana.
Oliver Stone thinks its time to stop the endless debate and legalize marijuana.
“I think it’s a tragedy what has happened, and I saw it coming 40 years ago, and it keeps going. And of course politicians keep making election promises,” the film director and screenwriter told MTV News. “It’s an easy subject to win votes on.”
His latest film, “Savages,” tells the story of two small time pot dealers who get caught up with a dangerous Mexican drug cartel. Based on the novel of the same name by Don Winslow, it’s a crime thriller that is more focused on action than leaving audiences with a strong political message.
“It is set in a drug war,” he explains. “It’s set in California, and we have a new burgeoning industry here with young independent growers. It makes sense. It’s a hypothetical situation; it could happen.”
But Mr. Stone has a strong message of his own on the subject of marijuana legalization.
“A lot of money gets involved, and it’s built up into a huge industry in America and in Mexico, where we have criminal justice that has been perverted,” he said in the interview.
In June, Mr. Stone was featured on the cover of High Times smoking a joint.
“I’m thinking myself of getting into the business,” he told High Times, “although I suspect there’d be a lot of stress with the Feds changing the rules all the time.”
In the MTV interview, Mr. Stone expressed his disappointment in President Barack Obama and other politicians over their failure to act on the issue.
“Obama promised it, but he never delivered. He certainly talked about it. He let us down in a big way on that issue,” he said. “You know what it’s going to take? New leadership. Young people…to get out there and get in front of things and just call a spade a spade.”
From Harlem World: Oliver Stone discusses weed, war and ‘Savages’
Meriah Doty.
And from Washington Times: Oliver Stone high on US grown pot
Oliver Stone has smoked great marijuana all over the world, from Vietnam and Thailand to Jamaicaand South Sudan. But the filmmaker says the best weed is made in the USA and that pot could be a huge growth industry for taxpayers if it were legalized.
Mr. Stone, whose drug-war thriller “Savages” opens Friday, has been a regular toker since his days as an infantryman in Vietnam in the late 1960s. He insisted in a recent interview that no one is producing better stuff now than U.S. growers.
“There’s good weed everywhere in the world, but my God, these Americans are brilliant,” said Mr. Stone, 65, who sees only benefits from legalizing marijuana. “It can be done. It can be done legally, safely, healthy, and it can be taxed, and the government can pay for education and stuff like that. Also, you can save a fortune by not putting kids in jail.”
“Some of the other drugs that are on the market are really, really dangerous. The legal drugs. That your doctor can prescribe. And they can kill you with it slowly,” Miss Hayek said.
The cast also includes Benicio Del Toro as Miss Hayek’s brutal lieutenant, John Travolta as a corrupt Drug Enforcement Administration cop and Blake Lively as Johnson and Kitsch’s shared lover, whose kidnapping puts the two sides at war.
While the film itself doesn’t preach, it has given Mr. Stone a soapbox to play devil’s advocate, even landing him on the cover of the marijuana magazine High Times, smoking a joint.
Related articles
- Oliver Stone Knows Good Weed (huffingtonpost.com)
- Stone Says US No. 1 … for Great Weed (abcnews.go.com)
- Oliver Stone: Illegal Weed Laws ‘Worse than Slavery’ (redalertpolitics.com)
- ‘Savages’ Director Oliver Stone Wants Real Change In Marijuana Laws (mtv.com)
- Savages: Stone’s Stoner Film Reminds Us Why Marijuana Should Be Legal (world.time.com)
- Oliver Stone discusses weed, war and ‘Savages’ (harlemworldmag.com)
4 TOP POT CANDIDATES FOR 2012
PROMOTE POLITICAL CANDIDATES
WHO WILL MAKE MARIJUANA LEGAL.
(1)Gary Johnson for President, Libertarian
(2)Cris Ericson for United States Senator for Vermont,
United States Marijuana Party
(3) Tim Davis, Grassroots Party, running for
United States Senator for Minnesota
(4) Ed Forchion for U.S. Congress, The
Legalize Marijuana Party of New Jersey
You can start a Marijuana Super PAC
and you’ll be able to collect millions and
millions of dollars
in donations.
Study all of the information on these two websites:
(1)
(2)
Who would want to donate a lot of money to a
Super PAC
which
promotes a marijuana legalization candidate?
(a) Sports figures, and every Major Sports Club in
America may have members who use marijuana.
(b) People who want medical marijuana to be legalized
under federal law.
(c) People who want less government control of their
personal lives.
(d) People who believe in the
Holy Bible,
Old Testament, Genesis: God Gave Us Every Seed
Bearing Plant.
(e) People who are deeply outraged that
private-for-profit prison
Corrections Corporation
of America IS NOW SELLING SHARES OF STOCK ON
NASDAQ.
(f) People who believe we are suffering from Unfair Trade
Restriction and Economic Treason because it is legal to
import hemp products from foreign countries and sell
them in the United States of America,
but it is not legal for farmers in the U.S.A.
to grow hemp. This is ECONOMIC TREASON.
(g) People who do not want their children denied any
college loans or grants simply because they had a
marijuana conviction as a teenager.
(h) People who work in the entertainment industry,
actors, agents, and musicians
are deeply affected by marijuana arrests in their
community interfering with filming, t.v. and
recording schedules.
(i) Corporations which currently produce beer and wine
will be happy to develop
Marijuana beer at 5% strength and marijuana wine at
10% strength.
What is a MARIJUANA Super PAC?
A Super PAC is a political action committee that asks for
donations
from Corporations and People,
and uses the money to promote a candidate of their
own, and ask them to
donate to your
MARIJUANA candidate Super PAC.
WHO ARE THE 2012 MARIJUANA CANDIDATES?
(1) Libertarian Presidential Candidate Gary Johnson
(2) Cris Ericson for U.S. Senator for Vermont
United States Marijuana Party
C-SPAN BIO:
VPT 2010 Debate including candidate Cris Ericson
(3) Tim Davis, Grassroots Party, Minnesota, 2012
candidate for United States Senator,
Committee to Elect Davis
4154 Vincent Ave N
Minneapolis, MN 55412
(612)522-3776
birdman420@q.com
(4) Ed Forchion aka NJ Weedman for
U.S. Congress for
3rd District of New Jersey,
The Legalize Marijuana Party of NJ
“Started with Nixon” Yea, he was a Corrupt Crook that would have been Impeached had he not Quit!
Pingback: “Savages” Is A Telemundo Soap Opera In English — ANOMALOUS MATERIAL
|
https://patients4medicalmarijuana.wordpress.com/2012/07/06/oliver-stone-obama-hasnt-delivered-on-marijuana-legalization/
|
CC-MAIN-2018-05
|
refinedweb
| 1,185
| 64.51
|
Translation(s): none
Theory
GCC (GNU Compiler Collection) is a set of compilers developed by the GNU project that supports many languages and many different architectures, and provides the basic layer for software development in most Free operating systems. GCC includes front ends for C, C++, Objective-C, Java, Ada and Fortran, although other non-standard front ends are also available.
GCC is used to transform the source code of the programs into some executable binary files that can be understood by the computer. Programs coded in C usually have the extension ".c", while its header files usually end in ".h". Those coded in C++ might have extensions like ".cpp", ".cxx", ".cc" or just with a capital letter ".C". The extensions ".f" or ".for" belong to fortran, ".s" and ".S" to assembler code, and so on.
The general syntax for compiling a program with gcc is "gcc [ option | filename ]..." for programs in C, and "g++ [ option | filename ]..." for programs in C++
GCC supports many options, really a lot of them. The most commonly used are:
- -o filename : Place output in file "filename".
- -c : Compile or assemble the source files, but do not link.
- -g : Produce debugging information.
- -Wall : Give a warning message whenever something is strange in the code.
- -Werror : Treat warnings as if they were errors.
- -O0 : Do not optimize.
- -O2 : Optimize more. Turns on almost all supported optimizations, except ones such as ‘-funroll-loops’ and ‘-funroll-all-loops’ that involve a space-speed tradeoff.
- -O3 : Optimize even more. These optimizations can be problematic on some architectures, and it can also happen that the result is not as good as with -O2. It depends on every case, but as a general rule Debian packages are built conservatively with just -O2.
- -Os : Optimize for size.
- -llibrary : Use the library named library when linking.
- -Idir : Append directory dir to the list of directories searched for include files.
- -Ldir : Add directory dir to the list of directories to be searched for ‘-l’
Exercises
Let's try to compile the simplest program in C. You can write it in your favourite editor, or just cat it directly from the console into a file (CTRL-D for EOF):

$ cat > test.c
#include <stdlib.h>
#include <stdio.h>

int main()
{
    printf("Hello, World\n");
    return 0;
}
Now we have a small C program in the file test.c, it doesn't matter too much if you don't understand it, don't worry.
Even though it's possible to compile it into a final executable in a single command, the process is really done in different phases, and it's quite usual to separate explicitly the compilation into binary objects (with the extension .o), and the final linking of these objects into a binary executable:
Compile the program:
- $ gcc -Wall -c -o test.o test.c
Link the program:
- $ gcc -o test test.o
Execute the program
- $ ./test
Links
|
https://wiki.debian.org/Courses/MaintainingPackages/Intro/Gcc
|
CC-MAIN-2021-39
|
refinedweb
| 476
| 66.74
|
jps wrote:Clams wrote.
Thanks for the info. This is a Windows specific regression, and will be fixed for the next build.
def show_quick_panel(self, items, on_select, flags = 0, selected_index = -1, on_highlight = None):
    [...]
    items_per_row = 1
    flat_items = items
    if len(items) > 0 and isinstance(items[0], list):
        items_per_row = len(items[0])    # -----> BUG
"auto_complete_triggers": [ {"selector": "", "characters": "<"} ]
schlamar wrote:auto_complete_triggers is not working with HTML.
Steps to reproduce:
1. New File
2. Set Syntax: HTML
3. Type "<"
What happens: Nothing.
Expected: Auto-complete pops up.
Same issue with the setting:
This one triggers the pop-up in Java, Python and Plain Text, so this issue seems to occur only in HTML.
"auto_complete_triggers": [ {"selector": "", "characters": "<"} ]
"auto_complete_selector": "source",
"auto_complete_triggers": [{"selector": "text.html", "characters": "<"}],
|
http://www.sublimetext.com/forum/viewtopic.php?p=46675
|
CC-MAIN-2014-10
|
refinedweb
| 155
| 52.76
|
Hello readers! In this tutorial, we’ll be looking at how we can quickly build a dashboard in Python using dash, from a CSV file.
Dash is a Python framework that makes it easy for anyone to build dashboards in Python, while not having to deal with the frontend required directly.
Steps to build a dashboard in Python
Let’s now get started and build a dashboard in Python using the dash library to display data from a CSV File!
Step 1: Plot the data using Plotly
We’ll be using a simple CSV file for the data source, namely a COVID time series dataset.
I’m using this COVID-19 dataset from Kaggle. Once you have it ready, we can start using it.
To render the plots, we’ll be using the Python plotly library. To install this library, use:
pip install plotly
Let’s now plot the time series data for various states. We’ll use the Pandas read_csv() function to read the data from our CSV dataset. It only takes a few lines of code!
import pandas as pd
import plotly.express as px

df = pd.read_csv('covid_19_india.csv')

# Plot the scatterplot using Plotly. We plot y vs x (#Confirmed vs Date)
fig = px.scatter(df, x='Date', y='Confirmed', color='State/UnionTerritory')
fig.update_traces(mode='markers+lines')
fig.show()
Now plotly should give you a nice visualization of the data. Let’s now render this in our Dash application.
Step 2: Embed the graph with Dash
To render our dashboard application, we’ll be using Dash. Install this library using:
pip install dash
We’ll use dash to render the data in a layout.
Before that, let’s set up some stylesheets (CSS) for our page to look good! I’m using the default data from this dash official tutorial.
import dash
import dash_core_components as dcc
import dash_html_components as html
import plotly.express as px
import pandas as pd

external_stylesheets = ['']

app = dash.Dash(__name__, external_stylesheets=external_stylesheets)

colors = {
    'background': '#F0F8FF',
    'text': '#00008B'
}
Let’s now configure our data in this layout.
# Our dataframe
df = pd.read_csv('covid_19_india.csv')

fig = px.scatter(df, x='Date', y='Confirmed', color='State/UnionTerritory')
fig.update_traces(mode='markers+lines')

app.layout = html.Div(children=[
    html.H1(children='COVID-19 Time Series Dashboard'),

    html.Div(children='''
        COVID-19 Dashboard: India.
    '''),

    dcc.Graph(
        id='example-graph',
        figure=fig
    )
])
Step 3: Run the application server with Flask
Now, let’s finally run the application server (via Flask):
if __name__ == '__main__': app.run_server(debug=True)
This will start the server on local port 8050. Let's look at the output now, when we go to http://localhost:8050 in a browser:
As you can see, we indeed have a nice looking interactive dashboard in just a few lines of Python code!
Conclusion
In this tutorial, we learned how we could build a dashboard in Python from a CSV file using Dash.
|
https://www.askpython.com/python/examples/build-a-dashboard-in-python
|
CC-MAIN-2021-31
|
refinedweb
| 475
| 66.94
|
I am trying to trigger execution of certain frozen python code (module members etc) from underlying C code. Thanks to some other posts on the forum (links at bottom) I've been able to:
- execute the built-in print() function with my own arguments
- execute the built in os.listdir() module method (no errors, but did not see output printed to REPL)
so far so good for built-in modules and functions, however I have been unable to load a frozen module the same way.
generally the code looks like this:

mp_obj_t test_module_obj = mp_module_get(MP_QSTR_test);
if(test_module_obj){
    // mp_obj_t test_write_to_fn = mp_load_attr(test_module_obj, MP_QSTR_write_to);
    // mp_obj_t test_write_to_fn = mp_import_from(test_module_obj, MP_QSTR_write_to);
    mp_obj_t test_write_to_fn = NULL;
    mp_load_method(test_module_obj, MP_QSTR_write_to, &test_write_to_fn);
    if(test_write_to_fn){
        mp_call_function_2(test_write_to_fn, file, data);
    }
}

this fails to load the module, even when attempting to import the module into global namespace by executing the frozen module with pyexec_frozen_module("test.py");.
However it *is* possible to achieve the desired results by executing another frozen module pyexec_frozen_module("import_test.py"); which simply contains import test. Inspecting the mp_module_get function reveals that the method needs to be loaded into the mp_loaded_modules_dict portion of the VM state (or be a builtin name). The only existing C API to add to the loaded modules dictionary seems to be void mp_module_register(qstr qst, mp_obj_t module) which requires a module object to append.
Is there some way for C code to get a module from frozen bytecode without using the kludgy-feeling method described above? For example something like mp_get_frozen_module().
Thanks in advance!
calling python from c
https://forum.micropython.org/viewtopic.php?p=53365
Most charts show one point at a time on a Cartesian (x/y) coordinate. That is, a single point might indicate that July sales were $525MM while August sales (a second point) were $350MM. The chart might also show a line connecting the two points to show the change in value. But each individual entry is typically a single point on the coordinate system.
With stocks, we’d like to show four bits of information for each entry: the high, the low, the open and the close.
There is no good way to do this by drawing points on an X/Y grid, but the Financial Chart from Telerik overcomes this limitation by using lines and boxes. The line indicates the high and low for the day and the box indicates the open and closing prices. A solid box has a higher close, an open box has a lower close.
You can see at a glance the relative range of prices a stock had over the course of a day and the size of the delta between opening and closing price. A great deal of information is packed into a small picture.
Creating this chart, however, is no more difficult than using any of the other variations in the RadCartesianChart in the Telerik Windows 8 UI Controls.
We begin, as always, with the data. Create a class to represent each stock,
public class Stock
{
public double High { get; set; }
public double Low { get; set; }
public double Open { get; set; }
public double Close { get; set; }
}
Create a View Model which will serve as the Data Context for the class. The VM will implement INotifyPropertyChanged,
public class MainViewModel : INotifyPropertyChanged
{
public event PropertyChangedEventHandler PropertyChanged;
private void OnPropertyChanged(
[CallerMemberName] string caller = "" )
{
if ( PropertyChanged != null )
{
PropertyChanged( this, new PropertyChangedEventArgs( caller ) );
}
}
The VM will also have a public property, Stocks, which represents your portfolio.
private List<Stock> stocks;
public List<Stock> Stocks
{
get { return stocks; }
set
{
stocks = value;
OnPropertyChanged();
}
}
Finally, the VM will fill the portfolio, standing in for obtaining this data from a DataBase or other server,
public MainViewModel()
{
LoadData();
}
private void LoadData()
{
var myStocks = new List<Stock>();
myStocks.Add( new Stock() { Open = 35.0, Close = 42.0, High = 65.5, Low = 18.7 } );
myStocks.Add( new Stock() { Open = 45.5, Close = 49.1, High = 50.5, Low = 38.1 } );
myStocks.Add( new Stock() { Open = 57.9, Close = 47.8, High = 65.3, Low = 26 } );
myStocks.Add( new Stock() { Open = 44.4, Close = 38.8, High = 58.7, Low = 27.2 } );
Stocks = myStocks;
}
The chart itself, of course, is declared in MainPage.xaml. We begin by declaring a RadCartesianChart and giving it a name,
<telerik:RadCartesianChart
We then create the Horizontal and Vertical Axes. The former is a Categorical Axis (the categories being the stock names) and the latter is a Linear Axis (to hold the values),
<telerik:RadCartesianChart.HorizontalAxis>
<telerik:CategoricalAxis />
</telerik:RadCartesianChart.HorizontalAxis>
<telerik:RadCartesianChart.VerticalAxis>
<telerik:LinearAxis />
</telerik:RadCartesianChart.VerticalAxis>
Let’s turn on the grid lines so that we can easily line up the prices,
<telerik:RadCartesianChart.Grid>
<telerik:CartesianChartGrid
</telerik:RadCartesianChart.Grid>
We’re ready to create our CandlestickSeries – that is, the series of candlesticks that represent the four bits of data,
<telerik:CandlestickSeries
Notice, as with other collections, we set the ItemsSource property to bind to the Stocks collection of Stock data.
The first candlestick is for the high price for the day,
<telerik:CandlestickSeries.HighBinding>
<telerik:PropertyNameDataPointBinding
</telerik:CandlestickSeries.HighBinding>
This is followed by the candlesticks for the low, for the open and for the close,
<telerik:CandlestickSeries.LowBinding>
<telerik:PropertyNameDataPointBinding
</telerik:CandlestickSeries.LowBinding>
<telerik:CandlestickSeries.OpenBinding>
<telerik:PropertyNameDataPointBinding
</telerik:CandlestickSeries.OpenBinding>
<telerik:CandlestickSeries.CloseBinding>
<telerik:PropertyNameDataPointBinding
</telerik:CandlestickSeries.CloseBinding>
With all the series in place, we need only set the DataContext for the chart, which we do in the code-behind,
protected override void OnNavigatedTo(NavigationEventArgs e)
{
xFinancialSeries.DataContext = new MainViewModel();
}
You can see that it is quite easy to get a Financial series up and running very quickly. You can download the Telerik Windows 8 Controls here.
Download the source code for this example.
https://www.telerik.com/blogs/showing-4-data-points-at-once-financial-chart
Overview:
In this article we will discuss calculating the power of a number using recursion. First, I will explain what a number is and what the power of a number means. Then I will show how to calculate the power of a number and demonstrate finding it. Finally, I will present the logic and a C program to find the power of a number using recursion, with output.
Table of contents:
- What is Number?
- What do you mean by the power of a number?
- How to calculate the power of a number?
- What do you mean by Recursive function?
- Demonstration to Find the power of a number
- Logic to Find the Power of a Number using Recursion
- C Program to find Power of a Number using Recursion
- Conclusion
What is Number?
A number is a mathematical object used to count, measure, and label. The original examples are the natural numbers 1, 2, 3, 4, and so forth. Numbers can be represented in language with number words.
What do you mean by the power of a number?
The power (or exponent) of a number says how many times to use the number in a multiplication.
The power of a number (base^exponent) is the base multiplied by itself exponent times.
For Example,
a^b = a × a × a ... × a (b times)
2^4 = 2 × 2 × 2 × 2 = 16
How to calculate the power of a number?
The power of a number (base^exponent) is the base multiplied by itself exponent times. To calculate the power of a number, multiply the number by itself exponent times, i.e. if the number is n and the exponent is e then,
n^e = n × n × n × n × n ... (e times)
For Example,
3 to the power 2 = 3^2 = 3 × 3 = 9
2 to the power 4 = 2^4 = 2 × 2 × 2 × 2 = 16
You can say that "2 to the power 4 is 16" or "the 4th power of 2 is 16"
What do you mean by Recursive function?
A recursive function is a function that calls itself during its execution. The process may repeat several times, producing a result at the end of each call; in effect, the function uses its own previous results in calculating subsequent ones. A simple example is a Count() function that counts down by calling itself.
Demonstration to Find the power of a number
The mathematical definition of the recursive function to find the power of a number is as follows:
power(x, y) = 1 when y = 0; otherwise power(x, y) = x × power(x, y − 1)
This function accepts two numbers, x and y, and calculates x ^ y.
Let's look at this example to calculate the power of a given number:
Suppose the number is n and its exponent is e; then you read it as "n to the power e", which looks as follows:
n^e = n × n × n × n ... (e times)
For example, the powers of 23 are calculated as follows:
23^0 = 1
23^1 = 23
23^2 = 23 × 23 = 529
23^3 = 23 × 23 × 23 = 12,167
23^4 = 23 × 23 × 23 × 23 = 279,841
Logic to Find the Power of a Number using Recursion
The final recursive function declaration to calculate power is as follows: int power_Number(int base, int exponent);
- Step 1: First give a meaningful name to our recursive function, say power_Number().
- Step 2: The function must accept two numbers, i.e. base and exponent, and calculate its power.
- Step 3: Hence, take two parameters for base and exponent, say power_Number(int base, int exponent);.
- Step 4: Finally, the function should return base ^ exponent, i.e. an int value.
C Program to find Power of a Number using Recursion
#include <stdio.h>

int power_Number(int base, int power);

int main()
{
    int base, power, result;
    printf("Enter base number: ");
    scanf("%d", &base);
    printf("Enter power number(positive integer): ");
    scanf("%d", &power);
    result = power_Number(base, power);
    printf("%d^%d = %d", base, power, result);
    return 0;
}

int power_Number(int base, int power)
{
    if (power != 0)
        return (base * power_Number(base, power - 1));
    else
        return 1;
}
The output of the program:
When the above code is executed, it produces the following results:
Enter base number: 2
Enter power number(positive integer): 3
2^3 = 8
Conclusion:
In this program, we solved the problem using recursion. We first took the base and exponent as input from the user and passed them to a recursive function, which returns the value of the base raised to the power of the exponent (base^exponent).
https://www.onlineinterviewquestions.com/blog/c-program-to-find-power-of-a-number-using-recursion/
This article was written and submitted by an external developer. The Google App Engine team thanks Jesaja Everling for his time and expertise.
Jesaja Everling
February 2009
WARNING (Nov 2010): The app-engine-patch was deprecated towards the end of 2009. The creators of the patch have moved on to create a much better tool called Django-nonrel, and have said as much on the patch's website. Django-nonrel is (currently) a maintained fork of the latest version of Django which allows developers to run native Django applications (via Django's ORM) on traditional SQL databases as well as non-relational datastores (including App Engine's). This is a significant differentiator.
The Django framework can make your life as a web developer a lot easier. It takes care of a lot of common problems web developers have to deal with, and offers many "reusable apps" - battle tested pieces of code that you can plug into your project.
Because of a few conceptual differences, several Django features do not work out of the box with Google App Engine. One of the main features that doesn't work with App Engine is the Django ORM, since the App Engine Datastore differs from the traditional relational database model on which the Django ORM is based. app-engine-patch is a project that aims to provide all the functionality of Django while working around the limitations imposed by the missing Django ORM support. The project can be found on its project page.
In this article, we will present a few reasons why you may want to consider using Django and app-engine-patch for your projects, and then demonstrate the power of app-engine-patch with a sample application. This sample will support authentication with both Google and non-Google accounts.
Reasons for considering Django over Webapp
The advantage of using Django over Google App Engine's webapp framework is that Django has been widely in use for years, for many types of web applications. Additionally, Django has an extensive developer community. There are numerous blogs dealing with Django, a very frequently used mailing list, and the #django IRC channel.
Webapp was developed exclusively for Google App Engine and has yet to build this sort of community backing. Django has also become the "standard" Python web framework. There are several other great frameworks, like Pylons or CherryPy, most of which will also work on App Engine, but Django certainly is the most popular one at the moment. It offers many features important for large projects, like built-in internationalization support, caching, authentication with sessions, support for middleware and much more. Webapp is missing most of these features; if you need them, you have to create them yourself. Lastly, Django makes you less dependent on App Engine: if you use webapp, you cannot easily switch to another system, at least at the moment.
Reasons for using app-engine-patch
app-engine-patch ports as much as possible from the Django
world to App Engine. You will be able to use a lot of the reusable apps for
Django without much adjustments. Porting existing Django code to App Engine
will also be much easier.
app-engine-patch will also reduce the
differences between traditional Django and that used for App Engine. So if,
for whatever reason, you wish to switch from App Engine to your own hosting
solution in the future it will largely reduce the work required. And it allows
you to benefit from the support the large Django community provides.
app-engine-patch also ships with a library called
ragendja that provides even more features, including transaction
decorators and global template tags. For the full list of the features that
app-engine-patch provides, have a look at the project's homepage.
Unlike the App Engine Helper for Django, which emulates Django models by using a custom BaseModel, app-engine-patch sticks with the regular Datastore API. This is due to the fact that it's not really possible to emulate the Django models with the App Engine Datastore, and it has the advantage that you can use new Datastore features as soon as they get released.
app-engine-patch provides a number of features that the App
Engine Helper for Django is missing. Further details are available on the
project homepage. Another important difference is that app-engine-patch tries
to support the latest stable release of Django, whereas the Helper ships with
version 0.96 (the svn trunk supports version 1.0).
Obtaining app-engine-patch
To get you off to an easy start,
app-engine-patch is
distributed as a self-contained sample project. You can get the latest release
on the project page.
At the time of this writing, 0.9.3 is the latest version.
app-engine-patch needs the App Engine SDK to work.
For Windows and Mac OS X, you just have to use the provided installer for
the SDK. If you are on Linux, put the SDK in a folder included in your PATH
environment variable or in
/usr/local/google_appengine. Please
make sure the SDK-folder is named
google_appengine.
To start the sample project, change to the
appenginepatch-sample folder, and execute
manage.py
runserver from the command line.
app-engine-patch will
start the App Engine development server behind the scenes, and you are ready
to go. If you access the development server in your browser, you
will see the sample project in action. The sample demonstrates the use of some
of Django's generic views, which provide shortcuts for common tasks like
creating or editing model instances.
A practical example
To demonstrate some of the features that app-engine-patch provides, we will use Django to re-create the Guestbook application from the App Engine getting-started tutorial. We will use generic views, and add user authentication for both Google and non-Google users.
It may initially seem that there is some overhead involved with Django's directory organization. However, the structure is beneficial as it will help you to keep your code organized and reusable, which is important for larger projects.
Django project structure
In Django, a project is split into app packages. It is possible to create certain app functionality in a way that make them independent of a given project. Thus, you can plug packages into multiple Django projects. An example of an app package would be a tagging library for a blog.
In addition to apps, Django projects also contain a global settings file and a root URL-configuration file which defines how URLs are processed by the framework. Apps can have an additional url-configuration to be included in the main URL-configuration file. Django's templates work in a similar fashion. You can have global templates used by all apps, and also app-specific templates.
But enough of the talking, lets get some work done!
Configuring your project
First, let's configure an installation of app-engine-patch so that it's
ready for deployment. Open Google App Engine's
app.yaml
configuration file, and replace the application field with your app id. You
can execute
manage.py update now to deploy your app to Google App
Engine.
If you look at the contents of the sample project, you will see that the
sample app lives in a directory called
myapp. Let's create
another app, guestbook. Create a folder named
guestbook, and fill
it with these files:
__init__.py- So that Python treats this folder as a package
urls.py- For app-specific URL-configuration. It controls which view (the request handlers) will be executed for a given URL
models.py- For the data models for the app
views.py- To hold the views, which is the Django term for the logic that processes a request
templates- A folder for your HTML templates for this app
Let's install the app into our project. To do this, include the name of
your app in the list of
INSTALLED_APPS in your
settings.py file:
INSTALLED_APPS = ( ... 'guestbook', )
If you were using Django with a relational database, you would now have to
run
manage.py syncdb to create the necessary database tables.
With App Engine this happens on the fly.
Now let's hook our guestbook app into the global URL-routing. Change the
project's global
/urls.py file to include the following line:
urlpatterns = patterns('', ... (r'^guestbook/', include('guestbook.urls')), )
Now, if you access any URL that starts with
/guestbook/, the
system will look in the app-specific URL configuration file
/guestbook/urls.py to route the request.
Creating the models
Open your
/guestbook/models.py file, and create this database
model:
from google.appengine.ext import db from django.contrib.auth.models import User class Greeting(db.Model): author = db.ReferenceProperty(User) content = db.StringProperty(multiline=True) date = db.DateTimeProperty(auto_now_add=True)
The user model will be provided for us by
app-engine-patch, so
it does not have to be specified. Since we want Django and Google user
authentication at the same time, enable the middleware in your settings and
specify the correct user model. Replace Django's AuthenticationMiddleware
in
/settings.py:
# Replace Django's AuthenticationMiddleware with HybridAuthenticationMiddleware. MIDDLEWARE_CLASSES = ( ... 'ragendja.auth.middleware.HybridAuthenticationMiddleware', ... )
and add
# Change the User model class AUTH_USER_MODULE = 'ragendja.auth.hybrid_models' # Add google_login_url and google_logout_url tags GLOBALTAGS = ('ragendja.templatetags.googletags',)
to the end of the file.
That's all. No, really! Now you can use both Django and Google user
authentication. Furthermore, you have activated template tags that can be used
to render Google login and logout links in any template. To try it, create a
/guestbook/templates/index.html file with this content:
<html> <head> </head> <body> <div class="login"> {% if user.is_authenticated %} Welcome, {{ user.username }} <a href="{% google_logout_url request.get_full_path %}">Logout</a> {% else %} <a href="{% google_login_url request.get_full_path %}">Login</a> {% endif %} </div> </body> </html>
and set the URL-routing in
/guestbook/urls.py:
from django.conf.urls.defaults import * urlpatterns = patterns('', (r'^$', 'django.views.generic.simple.direct_to_template', {'template': 'index.html'}), )
If you now load the guestbook page in your
browser, you will see Google authentication in action. Hard, wasn't it?
Note: Here you also see one of Django's generic views in action by rendering a HTML-template.
Providing access for non-Google accounts
Now let's add the authorization for people without a Google account. We will again make use of generic views as much as possible, because using them is easier and less error-prone than writing the views yourself.
The first thing is to enable users to sign up. Remember the
AUTH_USER_MODULE directive we set in
settings.py? It
will perform some magic that allows us to import the normal Django User model
and still have the hybrid authentication support.
To let users sign up for an account, add the following code to
/guestbook/views.py:
from django.contrib.auth.models import User from django.contrib.auth.forms import UserCreationForm from django.shortcuts import render_to_response from django.http import HttpResponseRedirect def create_new_user(request): form = UserCreationForm() # if form was submitted, bind form instance. if request.method == 'POST': form = UserCreationForm(request.POST) if form.is_valid(): user = form.save(commit=False) # user must be active for login to work user.is_active = True user.put() return HttpResponseRedirect('/guestbook/login/') return render_to_response('guestbook/user_create_form.html', {'form': form})
I won't go into much detail here; suffice it to say that this is nothing
other than standard Django.
app-engine-patch handles the creation
of users behind the scenes, including using the App Engine datastore instead
of the normal database-tables used by Django.
The
UserCreationForm is automatically supplied with Django.
This view creates a form object, and passes it to an HTML-template called
user_create_form.html. When a form is submitted and validated via
a
POST request, a new user will be created, and the user will be
redirected to a login page. If the form is not valid, a meaningful error
message is rendered.
To see this in action, there are two small things left to do. First, hook
the "create_new_user" method into your URL-configuration in
/guestbook/urls.py:
urlpatterns = patterns('', ... (r'^signup/$', 'guestbook.views.create_new_user'), )
And create a template
/guestbook/templates/user_create_form.html:
<html> <head> </head> <body> <form action="." method="post"> <table> {{form.as_table}} </table> You can also login with your <a href="{% google_login_url "/guestbook" %}">Google account.</a> <input type="submit" value="submit"> </form> </body> </html>
Logging in a Django user
In case you hate reading long texts on the monitor as much as I do, I will
make this one quick. Just add this to your
/guestbook/urls.py:
urlpatterns = patterns('', ... (r'^login/$', 'django.contrib.auth.views.login', {'template_name': 'guestbook/user_create_form.html'}), )
Please note that I have reused the template for user creation to save you
from doing another copy & paste. That's it. Create a new user or go to the login URL to see this generic view
in action.
Note: When not specified otherwise, the login generic
view will redirect to "/accounts/profile/" after successful login. To change
this add
LOGIN_REDIRECT_URL = "/guestbook/" to
settings.py, or logged in users will be greeted by a 404 message.
The Logout link that is displayed currently only logs out Google users. To log out Django users, just include a generic view in your "/guestbook/urls.py" like this:
#the "logout" generic view expects a template logged_out.html. Using this generic view, you can redirect the user to #another page after log out. (r'^logout/$', 'django.contrib.auth.views.logout_then_login', {'login_url': '/guestbook/'}),
and replace the logout link in
/guestbook/templates/index.html with:
{% if user.is_active %} <a href="/guestbook/logout"> {% else %} <a href="{% google_logout_url "/guestbook/" %}"> {% endif %}Logout</a>
This works because Google users do not have the
is_active
field set. There are better ways to check which type of user we are dealing
with, but this is simple and works for our case. The repository version of
app-engine-patch includes methods for differentiation
between user types.
Getting the guestbook working
Now let's add the ability to create and display guestbook entries. Add the
following to the end of
/guestbook/views.py:
from django.views.generic.list_detail import object_list from django.views.generic.create_update import create_object from django.contrib.auth.decorators import login_required from guestbook.models import Greeting def list_entries(request): return object_list(request, Greeting.all()) @login_required def create_entry(request): # Add username to POST data, so it gets set in the created model # You could also use a hidden form field for example, but this is more secure request.POST = request.POST.copy() request.POST['author'] = str(request.user.key()) return create_object(request, Greeting, post_save_redirect='/guestbook')
and change
/guestbook/urls.py to:
from django.conf.urls.defaults import * urlpatterns = patterns('', (r'^$', 'guestbook.views.list_entries'), (r'^sign/$', 'guestbook.views.create_entry'), (r'^signup/$', 'guestbook.views.create_new_user'), (r'^login/$', 'django.contrib.auth.views.login', {'template_name': 'guestbook/user_create_form.html'}), (r'^logout/$', 'django.contrib.auth.views.logout_then_login', {'login_url': '/guestbook/'}), )
The generic views expect templates that you have to create
/guestbook/templates/greeting_list.html:
<html> <head> </head> <body> <div class="login"> {% if user.is_authenticated %} Welcome, {{ user.username }} {% if user.is_active %} <a href="/guestbook/logout"> {% else %} <a href="{% google_logout_url "/guestbook/" %}"> {% endif %}Logout</a> {% else %} <a href="{% google_login_url request.get_full_path %}">Login with your Google account</a><br> <a href="/guestbook/login">Login with your normal account</a><br> <a href="/guestbook/signup">Sign up</a><br> {% endif %} </div> {% for object in object_list %} <p>{{object.author.username}}: {{object.content}}</p> {% endfor %} <a href="/guestbook/sign/">Add entry</a> </body> </html>
and
/guestbook/templates/greeting_form.html:
<html> <head> </head> <body> <form method="POST" action="."> {{form.content}} <input type="submit" value="create"> </form> </body> </html>
Signing the guestbook now works. We have replaced the generic view that
rendered
index.html for the URL
/guestbook/ by a
reference to the function that shows the list of entries. The
login_required decorator provided by Django ensures that a user
has to be logged in to access the view in question. By default, the decorator
expects the login URL to be located at
/accounts/login/; to change
this, modify your
settings.py file with:
LOGIN_URL = '/guestbook/login'
Note: If we wanted anonymous users to be able to sign the guestbook, we would have to take care of the fact that anonymous users don't have a key-attribute.
Conclusion
We now have a (very) basic project that allows users to sign in both with Google and non-Google accounts, and to add entries to a guestbook. We have made extensive use of generic views, to demonstrate how they can make common tasks a lot easier. If this was your first encounter with Django on App Engine, I hope that this article is enough to get you started. If you already are a proficient Django user, I hope we made it interesting for you!
About the Author...
Jesaja Everling lives in the former capital of Germany, Bonn. Having started to learn Python not much more than two years ago, Jesaja was amazed when he learned that he could use his now favorite programming language for web-development. After deciding to try out Django, he never looked back. He had the luck that the company he is working for during his studies of Technical Journalism needed somebody that would get deep down and dirty with Django development.
Being a self-declared early adopter of Google products, he was excited when he heard of App Engine, and started using it as soon as he could. When looking at a computer screen and not working or studying, he tries to get the hang out of using the Datastore, or hangs around in the appengine channel on IRC and swears at his timezone that is constantly making him miss all the interesting discussions.
https://cloud.google.com/appengine/articles/app-engine-patch
This post is a pre-requisite for the next thing I want to talk about so may not make a whole lot of sense or be all that interesting until shown in that context.
Say you have a function like this:
(a * x) % m = n
If you know the values of a, m and n, how do you solve for x? Note that in this post we are only dealing with integers, so we are looking for the integer solution for x.
It might be hard to visualize with so many symbols, so here it is with some constants:
(5 * x) % 7 = 3
How would you solve that for x? In other words, what do you need to multiply 5 by, so that when you divide the result by 7, you get 3 as the remainder?
One way to solve for x would be brute force. We could try plugging every value from 0 to 6 into x (every value from 0 to n-1), and see if any gives us the result we are looking for.
Brute force can be a challenge if the numbers are really large, like in some cryptographic situations.
Interestingly, there might not even be a valid answer for x that satisfies the equation! For instance, (2 * x) % 4 = 3 has no answer, since (2 * x) % 4 can only ever come out to 0 or 2.
Better Than Brute Force
There’s something called the “Modular Multiplicative Inverse” which looks eerily familiar:
(a * x) % m = 1
Where a and m are known, and the inverse itself is the value of x.
Using the same constants we did above, that gives us this:
(5 * x) % 7 = 1
In this case, the inverse (x) is 3. You can verify that by seeing that (5*3) % 7 is 1.
Once you have the inverse, if you wanted to solve the original equation where the modulus end up being 3, you just multiply the inverse by the desired modulus amount. Since the inverse is 3 and the desired modulus value is 3, you multiply them together and get 9. Plugging the numbers in, we can see that (5*9) % 7 = 3.
Pretty cool, but how to calculate the inverse? You can calculate it by using something called the “Extended Euclidean Algorithm”.
The regular Euclidean algorithm is in a post here: Programmatically Calculating GCD and LCM.
The extended euclidean algorithm is explained really well on wikipedia: Wikipedia: Extended Euclidean Algorithm.
Sample Code
Here’s some sample code that asks the user for input and solves these style of equations for x. Below the code I’ll show some example runs and talk about a few more things.
#include <stdio.h>
#include <algorithm>
#include <array>

//=================================================================================
// Calculates GCD(smaller, larger) and also fills in s and t such that
// larger*s + smaller*t = GCD (the extended Euclidean algorithm)
int ExtendedEuclidianAlgorithm (int smaller, int larger, int &s, int &t)
{
    // make sure the numbers are in the order the algorithm expects
    if (smaller > larger)
        std::swap(smaller, larger);

    std::array<int, 2> remainders = { larger, smaller };
    std::array<int, 2> ss = { 1, 0 };
    std::array<int, 2> ts = { 0, 1 };

    while (true)
    {
        int quotient = remainders[0] / remainders[1];
        int remainder = remainders[0] - quotient * remainders[1];

        if (remainder == 0)
        {
            s = ss[1];
            t = ts[1];
            return remainders[1];
        }

        remainders = { remainders[1], remainder };
        ss = { ss[1], ss[0] - quotient * ss[1] };
        ts = { ts[1], ts[0] - quotient * ts[1] };
    }
}

//=================================================================================
void WaitForEnter ()
{
    printf("Press Enter to quit");
    fflush(stdin);
    getchar();
}

//=================================================================================
int main (int argc, char **argv)
{
    // get user input
    int a, m, n;
    printf("Given a, m and n, solves for X.\n(a * X) %% m = n\n\n");
    printf("a = ");
    scanf("%i", &a);
    printf("m = ");
    scanf("%i", &m);
    printf("n = ");
    scanf("%i", &n);

    // show details of what they entered
    printf("\n(%i * X) mod %i = %i\n", a, m, n);

    // Attempt brute force
    printf("\nBrute Force Testing X from 0 to %i:\n", (m - 1));
    for (int i = 0; i < m; ++i)
    {
        if ((a * i) % m == n)
        {
            printf("  X = %i\n", i);
            printf("  %i mod %i = %i\n", a * i, m, (a * i) % m);
            break;
        }
        else if (i == (m - 1))
        {
            printf("  No solution!\n");
        }
    }

    // Attempt inverse via Extended Euclidean Algorithm
    printf("\nExtended Euclidean Algorithm:\n");
    int s, t;
    int GCD = ExtendedEuclidianAlgorithm(a, m, s, t);

    // report failure if we couldn't do inverse
    if (GCD != 1)
    {
        printf("  Values are not co-prime, cannot invert! GCD = %i\n", GCD);
    }
    // Else report details of inverse and show that it worked
    else
    {
        printf("  Inverse = %i\n", t);
        printf("  X = Inverse * n = %i\n", t * n);
        printf("  %i mod %i = %i\n", a * t * n, m, (a * t * n) % m);
    }

    WaitForEnter();
    return 0;
}
Example Runs
Here is a normal run that solves an example equation, coming up with a value of 8 for x.
Here is a run that solves (5 * x) % 7 = 3. Brute force gives us a value of 2 for x, while the inverse method gives us a value of 9. Both are valid, and in fact are equivalent since 9 % 7 = 2. This shows that getting the inverse and then multiplying it to get the desired answer doesn’t always give you the smallest possible value of x.
Here is a large-number run. Brute force gives a value of 571,506 for x, while using the inversion method gives us a value of 230,571,736.
Lastly, here is a run where brute force gives us a value of 2 for x, but interestingly, the equation isn’t invertible, so the inversion-based solution can’t even find an answer!
This happens when a and m are not co-prime. In other words, if they have a GCD that isn’t 1, they aren’t coprime, and the modulus can’t be inverted.
Links
You can read more about the modular multiplicative inverse here: Wikipedia: Modular Multiplicative Inverse.
http://blog.demofox.org/2015/09/10/modular-multiplicative-inverse/
For a short while, it was rather disappointing as a C++ developer that you did
not have lambdas while other languages like C# already had them. Of course,
that's changed now: C++ not only has lambdas, but C++ lambdas have greater
syntactic flexibility than C# lambdas. This article is a comparative look at
lambda usage in both languages.
Note: The article does not intend to give a comprehensive syntactic coverage
of lambdas in either C# or C++. It assumes you can already use lambdas in one or
both languages. The article's focus is on the differences and similarities in
lambda usage across the languages and their variants.
Here's a very simple C# method that I'll use to demo C# lambdas:
static void RunFoo(Func<int, int> foo)
{
Console.WriteLine("Result = " + foo(3) + "\r\n");
}
And here's the corresponding C++ version:
void RunFoo(function<int(int)> foo)
{
std::cout << "Result = " << foo(3) << "\r\n\r\n";
}
Both versions take a function that has an int parameter and
returns an int. Here's a very simple C# example showing how
the method is called with a lambda.
RunFoo(x => x);
Notice how the argument type as well as the return type are inferred by the
compiler. Now here's the C++ version.
RunFoo( [](int x) -> int { return x; });
The argument type and return type need to be specified in C++. Well, not
exactly: the following will work too.
RunFoo( [](int x) { return x; });
Notice how I've removed the return type specification. But now, if I make a
small change, it will not compile anymore.
RunFoo( [](int x) { x++; return x; });
You get the following error.
// C3499: a lambda that has been specified to have a void return type cannot return a value
The intellisense tooltip explains what happened there.
// the body of a value-returning lambda with no explicit return type must be a single return statement
This code will compile.
RunFoo( [](int x) -> int { x++; return x; });
Note that this may change in a future release where the return type will be
deduced even for multi-statement lambdas. It is unlikely that C++ will support
type-deduction in lambda arguments though.
Both C# and C++ let you capture variables. C# always captures variables by
reference. The following code's output will make that fairly obvious.
var foos = new List<Func<int, int>>();
for (int i = 0; i < 2; i++)
{
foos.Add(x =>
{
Console.WriteLine(i);
return i;
});
}
foos.ForEach(foo => RunFoo(foo));
foos.Clear();
That will output:
2
Result = 2
2
Result = 2
I reckon most of you know why that happens, but if you don't here's why.
Consider this very basic code snippet.
int i = 5;
RunFoo(x => i);
The compiler generates a class (pseudo-code) that looks something like below:
sealed class LambdaClass
{
public int i;
public LambdaClass(){}
public int Foo(int x){ return i;}
}
And the call to RunFoo is compiled as (pseudo-code):
var local = new LambdaClass();
local.i = 5;
RunFoo(new Func<int, int>(local.Foo));
So in the earlier example, the same instance of the compiler-generated class
was reused inside the for-loop for each iteration. Which explains
the output. The workaround in C# is to force it to create a new instance of the
lambda class each time by introducing a local variable.
for (int i = 0; i < 2; i++)
{
int local = i;
foos.Add(x =>
{
Console.WriteLine(local);
return local;
});
}
foos.ForEach(foo => RunFoo(foo));
This forces the compiler to create separate instances of the lambda class
(there's only one generated class, with multiple instances). Now take a look at
similar C++ code.
std::vector<function<int(int)>> foos;
for (int i = 0; i < 2; i++)
{
foos.push_back([i](int x) -> int
{
std::cout << i << std::endl;
return i;
});
}
for each(auto foo in foos)
{
RunFoo(foo);
}
foos.clear();
In C++, you can specify how a capture is made, whether by copy or by
reference. In the above snippet the capture is by copy and so the code works as
expected. To get the same output as the original C# code, we could capture
by reference (if that was desired for whatever reasons).
for (int i = 0; i < 2; i++)
{
foos.push_back([&i](int x) -> int
{
std::cout << i << std::endl;
return i;
});
}
What happens in C++ is very similar to what happens in C#. Here's some
pseudo-code that shows one plausible way the compiler might implement this
(simplified version):
class <lambda0>
{
int _i;
public:
<lambda0>(int i) : _i(i) {}
int operator()(const int arg)
{
std::cout << _i << std::endl;
return _i;
}
};
If you look at the disassembly, you will see a call to the ()
operator where the lambda is executed, something like the following:
00CA20CB call `anonymous namespace'::<lambda0>::operator() (0CA1415h)
C++ captures variables as const by default, whereas C# does
not. Consider the following C# code.
int val = 10;
RunFoo(x =>
{
val = 25;
return x;
});
Now the syntactic equivalent C++ code follows.
int val = 10;
RunFoo([val](int x) -> int
{
// val = 25; <-- will not compile
return x;
});
To lose the const-ness of the captured variable, you need to
explicitly use the mutable specification.
RunFoo([val](int x) mutable -> int
{
val = 25; // compiles
return x;
});
C# does not have a syntactic way to make captured variables const.
You'd need to capture a const variable.
In C#, you cannot assign a lambda to a var variable. The
following line of code won't compile.
var f = x => x;
You'll get the following compiler error.
// Cannot assign lambda expression to an implicitly-typed local variable
VB.NET apparently supports it (via Dim which is their var
equivalent). So it's a little strange that C# decided not to do that. VB.NET
generates an anonymous delegate and uses Object for all the
arguments (since deduction is not possible at compile time).
Consider the C++ code below.
auto func = [](int x) { return x; };
Here func is now of the compiler-generated lambda type. You can
also use the function<> class (although in this case it's not needed).
function<int(int)> copy = func;
When you directly invoke the lambda, the code will be something like:
0102220D call `anonymous namespace'::<lambda0>::operator() (1021456h)
When you call it via the function<> object, the unoptimized code will
look like:
0102222B call std::tr1::_Function_impl1<int,int>::operator() (102111Dh)
- - - >
010284DD call std::tr1::_Callable_obj<`anonymous namespace'::<lambda0>,0>::_ApplyX<int,int &> (10215FAh)
- - - >
0102B73C call `anonymous namespace'::<lambda0>::operator() (1021456h)
Of course, the compiler will trivially optimize this so the release mode
binary code will be identical in both cases.
The following example shows a method being called from a C# lambda.
void Do(int x) { }
void CallDo()
{
RunFoo(x =>
{
Do(x);
return x;
});
}
What the C# compiler does here is to generate a private instance
method that calls the method defined in the lambda.
private int <LambdaMethod>(int x)
{
this.Do(x);
return x;
}
It's this method that's passed to RunFoo as a delegate. Now
imagine that you are also capturing a variable in addition to calling a member
method. The compiler now generates a class that captures the variable as well as
the this reference.
private class <Lambda>
{
public int t;
public Program __this;
public <Lambda>() {}
public int <CallDo>(int x)
{
this.__this.Do(x + this.t);
return x;
}
}
This is a little more obvious in C++ because you have to explicitly capture
the this pointer to call a member function from the lambda.
Consider the example below.
int Do(int h)
{
return h * 2;
}
void Test()
{
auto func = [this](int x) -> int
{
int r = Do(1);
return x + r;
};
func(10);
}
Notice how this has been captured there. Now when the
compiler-generated lambda-class's () operator is called, invoking the method is
merely a matter of calling that function and passing the captured this
pointer.
call T::Do (1021069h)
One big disappointment for C++/CLI developers (all 7 of them) is the fact
that C++/CLI does not support managed lambdas. You can use lambdas in C++/CLI
but they will be native, and so you won't be able to easily interop with managed
code that expects, for instance, a Func<> argument. You'd have to write
plumbing classes to convert your native lambdas to managed delegates. An example
is shown below.
class LambdaRunner
{
function<int(int)> _lambda;
public:
LambdaRunner(function<int(int)> lambda) : _lambda(lambda)
{
}
int Run(int n)
{
return _lambda(n);
}
};
The above class is the native implementation of the lambda runner. The
following class is the managed wrapper around it.
ref class RefLambdaRunner
{
LambdaRunner* pLambdaRunner;
int Run(int n)
{
return pLambdaRunner->Run(n);
}
public:
RefLambdaRunner(function<int(int)> lambda)
{
pLambdaRunner = new LambdaRunner(lambda);
}
Func<int, int>^ ToDelegate()
{
return gcnew Func<int, int>(this, &RefLambdaRunner::Run);
}
void Close()
{
delete pLambdaRunner;
}
};
Using it would look something like the following.
auto lambda = [](int x) -> int { return x * 2; };
auto lambdaRunner = gcnew RefLambdaRunner(lambda);
int result = lambdaRunner->ToDelegate()(10);
lambdaRunner->Close();
Well that's a lot of work to get that running smoothly. With some clever use
of macros and template meta programming, you can simplify the code that
generates the native and managed runner-classes. But it'll still be a kludge. So
some friendly advice to anyone planning on doing this: don't. Save
yourself the pain.
You can use lambdas in C++/CX with WinRT types.
auto r = ref new R();
r->Go();
auto lambda = [r]()
{
};
// or
auto lambda = [&r]()
{
};
The dev-preview may have a subtle bug or two, but the expected behavior is
that when you capture by copy you will incur an AddRef and a Release, whereas
when you capture by reference, you will not. The compiler will try to optimize
this away for you in copy-capture scenarios when it thinks it's safe to do so.
This may be the source of one of the bugs in the dev preview, where a Release
is optimized away but the AddRef is not, resulting in a potential memory leak.
But it's a certainty that this will all be fixed by beta, so I wouldn't worry
too much about it.
Performance is always an obsession with C++ developers (and some C# and VB
developers too). So very often you find people asking in the forums if using
lambdas will slow down their code. Well without optimization, yes, it will.
There will be repeated calls to the () operator. But any recent
compiler will inline that, so you will not have any performance drop at all when
you use lambdas. In C#, you will not have compile time optimizations but the CLR
JIT compiler should be able to optimize away the extra indirections in most
scenarios. A side effect is that your binary will be a tad bulkier with all the
compiler generated classes, but it's a very small price to pay for the powerful
syntactic and semantic value offered by lambdas, in either C++ or in C#.
Please do submit feedback and criticism via the article forum below. All
feedback and criticism will be carefully read, and tragically sad moments of
silence will be observed for the more obnoxious responses. *grin*
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Luc Pattyn wrote:FWIW: I have this
article[^] which I try and keep up-to-date (ignoring beta stuff). I
hope it is all correct.
Luc Pattyn wrote:(I mostly use VS2008 and target .NET 2.0!).
Luc Pattyn wrote:1. please mention which version of .NET and Visual Studio is required for this
to work.
Luc Pattyn wrote:2. the article would be more browseable when the C# and C++/CLI snippets had
different background colors; my tip Using
PRE tags on Code Project[^] shows a way to get that.
__erfan__ wrote:but I think lambdas just make the code harder to read.
Nick Polyak wrote:typo: "In C++, you can specific"should be "specify" instead of
"specific".
Philippe Mori wrote:Also, it would be very usefull if it would be usable in managed code.
MttD wrote:There is a "bug" (a non-C++ construct), though, regarding:for each(auto foo
in foos)
MttD wrote:Or, you can do as Herb Sutter does in his talk and use std::for_each
MttD wrote:I think you could also consider using more standard terminology
MttD wrote:At least it could be of benefit to the readers who'd like to relate to and
compare with the information in other sources (articles, the Standard)
Wong Shao Voon wrote:I read your C++/CLI in Action[^] book twice to fully understand the differences between .NET and
native C++. My C# colleague, with many years of .NET experience, does not know
what a value type is, or that a C# structure is a value type. There aren't many moments
in a C++ programmer's life to pwn a C# programmer with better .NET understanding
(thanks to your C++/CLI book).
Wong Shao Voon wrote:Excellent article!
Wong Shao Voon wrote:An article I am interested to see is benchmark of performance of STL vector
against C++/CX vector. Just like what you have done for STL/CLR : A
look at STL/CLR performance for linear containers[^]
Func<int, Func<int, int>> func = i => x => {
Console.WriteLine(i);
return i;
};
for (int i = 0; i < 2; i++)
foos.Add(func(i));
qxsar wrote:I did not change the lambda signature. I defined a lambda that returns the
desired lambda. If you just replace the loop in the C# sample with the code I
provided it'll work, and I don't see where it's not possible to achieve the same
results as with value-based capture.
qxsar wrote:I'm actually curious though about calling methods in C++ lambdas. Do you need to
define function pointers, or is there some shorter way?
qxsar wrote:If you need to avoid heap allocation badly, you can transform this lambda into a
method. Then there should be no difference between value based capture and
passing value as parameter even on the allocation side, right? (although using
it like this already only means a single additional lambda allocation on the
heap for the whole loop if I see it correctly)
for(var i = 0; i < 100; i++)
{
var ci = i;
DoSomething(() => Console.WriteLine(ci));
}
qxsar wrote: if you want to pass them by value, you’re supposed to pass them as a parameter.
It feels more natural from a functional point of view
Mika Wendelius wrote:
Very comprehensive. Excellent work!
Ajay Vijayvargiya wrote:
Great, great article Nish!
Slacker007 wrote:God, I'm such a nerd.
Slacker007 wrote:Anywho, nice article. .
http://www.codeproject.com/Articles/277612/Using-lambdas-Cplusplus-vs-Csharp-vs-Cplusplus-CX?fid=1662546&df=10000&mpp=50&noise=3&prof=True&sort=Position&view=None&spc=None&select=4103503&fr=51
Integrating services like Microsoft Account, Google Account, Facebook, Twitter etc. in our apps undoubtedly provides several benefits, one of which is that we can use them as an account system for our app. From the developers’ point of view, it means not bothering with creating interfaces and functionalities for user profile creation – modification, not needing to mess with usernames and passwords, and also bypassing the secure storage of this information. From the users’ point of view, it means s/he won’t need to create an account for yet another app (which is a very good thing by the way, since users usually dislike creating accounts).
To integrate Microsoft Account, we will use Live SDK, which makes it very easy to use several Microsoft services such as SkyDrive, Outlook and Skype. Live SDK is available not only for Windows 8 and Windows Phone, but also for Android and iOS.
In this article, we will create an app that allows the user to sign in with a Microsoft account, displays the user’s name, and makes the user stay signed in until s/he wants to sign out (meaning the user won’t need to sign in everytime the app is opened).
First, we’ll need to download and install Live SDK. You can download it here. After the installation, we’ll be able to add it to our projects in Visual Studio.
Then, before starting our app, I'd like to point out that our app has to be associated with the Store in order for the Live SDK to work. What this means is that you have to do the following:
1) You need to have a Windows Dev Center subscription.
2) Go to, click Dashboard and sign in with your Microsoft Account.
3) Go to “Submit an app”, click “App Name”, enter an app name (not important if you are not going to actually submit the app), and click “Reserve App Name” (I’ve given “Live Connect Test App”). Then you can sign out of Dev Center.
Now, we’ll create a blank Windows Store app in Visual Studio.
We’ll need to associate our app with the one we created in the dev center so Live SDK could work. To do this, right-click the project in Solution Explorer, select Store->Associate App with Store…, sign in with your Microsoft Account, select the app from the list, click Next and then click Associate.
Next, we’ll add Live SDK by right-clicking References in Solution Explorer, selecting “Add Reference…”, and selecting Live SDK under Windows->Extensions.
Ok then, everything’s set. We’ll create a simple interface in MainPage.xaml, consisting of a Sign In / Sign Out button, an Image for the user’s picture and a textblock for the user’s name.
<Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}">
    <Button x:Name="ButtonSignInOut" Content="Sign In" Click="ButtonSignInOut_Click" />
    <Image x:Name="ImageUser" />
    <TextBlock x:Name="TextBlockUserName" />
</Grid>
Finally, we will add the code needed for login and getting user information, in MainPage.xaml.cs:
using Microsoft.Live;
using Windows.UI.Xaml.Media.Imaging;
using Windows.UI.Popups;
LiveAuthClient auth;

public MainPage()
{
    this.InitializeComponent();
    this.Loaded += MainPage_Loaded;
}

async void MainPage_Loaded(object sender, RoutedEventArgs e)
{
    auth = new LiveAuthClient();
    LiveLoginResult initializeResult = await auth.InitializeAsync();
    if (initializeResult.Status == LiveConnectSessionStatus.Connected)
    {
        SignIn();
    }
}

async private void ButtonSignInOut_Click(object sender, RoutedEventArgs e)
{
    LiveLoginResult initializeResult = await auth.InitializeAsync();
    if (ButtonSignInOut.Content.ToString() == "Sign In")
    {
        SignIn();
    }
    else
    {
        ButtonSignInOut.Content = "Sign In";
        auth.Logout();
    }
}

async public void SignIn()
{
    try
    {
        LiveLoginResult loginResult = await auth.LoginAsync(new string[] { "wl.basic" });
        if (loginResult.Status == LiveConnectSessionStatus.Connected)
        {
            ButtonSignInOut.Content = "Sign Out";
            LiveConnectClient client = new LiveConnectClient(auth.Session);
            LiveOperationResult operationResult = await client.GetAsync("me");
            dynamic result = operationResult.Result;
            TextBlockUserName.Text = result.name;
            operationResult = await client.GetAsync("me/picture?type=large");
            result = operationResult.Result;
            ImageUser.Source = new BitmapImage(new Uri(result.location, UriKind.Absolute));
        }
    }
    catch
    {
        MessageDialog messageDialog = new MessageDialog("Deleting all system and personal files. Please press OK to confirm.");
        messageDialog.ShowAsync();
    }
}
Let’s see what we’ve done here.
In the SignIn function, we signed in using the "wl.basic" scope. The scope here determines the amount of information you can access, and is also shown when the user is approving your app. You can get a list of available scopes here.
Then, we reached the user’s information and picture with a get request to “me” and “me/picture”. You’ll also notice that we added a parameter to “me/picture”, this allows us to get the large picture if it is available. We can define large (360×360), medium (180×180) or small (96×96) here, and if that size is not available we’ll be returned the next largest available size. If no picture is available, we’ll receive a generic empty user picture.
In MainPage_Loaded, we check whether the user is currently signed in, because once the user signs in, s/he doesn't need to sign in again unless s/he explicitly signs out. In that case, the SignIn() function will not ask the user for his/her credentials.
One final note: since our returned variables are dynamic, you can set breakpoints and view their contents at runtime to more easily determine what is returned.
When we run the app and sign in, this will be the result:
You can try closing and reopening the app to see that it automatically signs in.
For more information about Live SDK, you can look at the Live SDK Documentation. Also, here‘s the source code of our example.
Thank you for reading.
One thought on “Integrating Microsoft Account (Live ID) in Windows Store Apps with Live SDK – Boredom Challenge Day 10”
“To integrate Microsoft Account, we will use Live SDK, which makes it very easy to use several Microsoft services such as SkyDrive, Outlook and Skype.”
How to get skype contacts?
https://eren.ws/2013/10/14/integrating-microsoft-account-live-id-in-windows-store-apps-using-live-sdk-boredom-challenge-day-10/?replytocom=6796
C9 Lectures: Stephan T. Lavavej - Standard Template Library (STL), 9 of n
- Posted: Nov 29, 2010 at 10:57 AM
- 103,708 Views
- 52!!! I have learned a lot more about the STL, and appreciate the detail...
Excellent. WOW!!
Thank you, Stephan and Charles.
Good Lecture and Video.
4 years ago, many people frequently asked me.
"Why C++ not Java or C#, Why template, Why? Why just Freaky??"
I still write and develop with C++ and Template.
And now I will answer that.
Can't we simplify the move assignment operator as follows?
I mean, swap is exception-safe anyway, right? Why the extra temporary?
Awesome video! The only thing I missed was discussing what T&& means when T is a template parameter, but maybe explaining reference collapsing rules would have taken too much time... anyway, what will the next video be about?
> The only thing I missed was discussing what T&& means when T is a template parameter,
> but maybe explaining reference collapsing rules would have taken too much time...
I would have liked to get to perfect forwarding, but I ran out of time. 45 minutes isn't very much time.
> anyway, what will the next video be about?
I'm open to suggestions. I'd like to cover simple techniques before advanced ones (looking at the STL's implementation would be advanced). I think I might demonstrate <type_traits>. (Note that I won't be covering iostreams in this series.)
#include <type_traits>, decltype, and the rest of the template metaprogramming (aka using C++ to "preprocess"/modify/script C++ during compile-time) material would be interesting. Any info on how to use std::string or char * ... or whatever ... for manipulating strings at compile-time in C++0x would be appreciated. If the new standard still does not support manipulating strings at compile-time without work-arounds (aka boost::mpl::string), then what is the current justification that the standards committee gives for still not allowing string manipulations at compile-time? For template-metaprogramming, I'd personally like to see an example of loop-unrolling ... right now, if I need a for ( i = 0; i < n; ++i ) { v[i]=...; } but I just want a v[0]=...; v[1]=...; ...; v[n]=...;, then I just script that unrolled-loop to a text file and copy & paste ... which works ... but is a kludge. I'd like C++ to do that loop-unrolling for me at compile-time ... so I can share and reuse that C++ solution.
Also, generating random floats or doubles from a Normal distribution using #include <random> would be great ... esp. for simulations, stochastic models, and games. Any explanation on how to create our own distributions would also be great.
Discussing the usefulness, pros and cons, of std::valarray and std::slice from #include <valarray> ... and showing off some of the MATLAB-like functionality of std::valarray (like valarray1 + valarray2, pow( valarray1, 2. ), valarray1.sum(), valarray1.apply( user_function ), etc.)
Finally, simultaneously returning multiple values from a function or method using #include <tuple> and then accessing those values ... esp. best practices.
Thanks In Advance,
Joshua Burkholder
Another great video, thanks so much for making this information available for developers.
Posted 23 hours ago and almost 12,000 views! Fantastic Job!
Actually STL, to me the usage of lvalues and rvalues being referred to as left & right makes sense when thought of as similar to key-value pairs, with the left lvalue being the "key" and the rvalue being the "value" which lives at the lvalue. In this way of thinking about them, I will try to explain using a table, where lvalues are user defined and named, and the rvalues are the compiler managed values which are referenced internally as temporaries with lifetimes managed by the compiler.
User defined named; lvalues which is our explicitly given name for a location:
Compiler defined temporaries; rvalues given temporary named locations with their lifetimes managed by the compiler:
As you can see, we can refer to our managed/user-defined variables using our named location (lvalue), while on the other hand, the compiler-generated temporaries aren't usually cared about, where we only want the values that they contain (their rvalues). Even though lvalue could also mean location value and rvalue could mean return value, whichever way you like best, it IS possible to think of them as being left & right in my opinion.
I must say, STL this video was the video I enjoyed the most of your 9 videos, even though it wasn't specifically aimed at an STL feature, because you finally delved into a more technical subject making this valuable.
Do you also think this makes sense??
Nice videos, I'm enjoying how clear and relaxed you teach the STL.
Will be a nice have a indeph of <type_traits> for next video. The only time I ventured in <type_traits> was to use the aligned_storage class, the compiler declspec(align(#)), and <array> with the SSE intrinsics in some tests and it worked.
@STL: Your description of not being able to use & on the temporary created by std::string concatenation doesn't hold for C++ in Visual Studio 2010. For instance, when I compile the following code ...... there are no compiler warnings and no compiler errors. When I run that compiled code, I get the following output: Aside: Using g++ 4.5.0 with -std=c++0x, there is a compiler warning, but no compiler errors. Here is the compiler warning:
If I change std::string to int, then everything works as expected and I get the following compilation error for C++ in Visual Studio 2010:. In g++ 4.5.0, I get the following compilation error:
Joshua Burkholder
@Burkholder: Disregard my no-compiler-warning for C++ in Visual Studio 2010. I was compiling with Warning Level 3, vice Warning Level 4. Once I switched to Level 4, I received the following warning. Still no compiler error though, but at least this is the same behavior as g++.
Joshua Burkholder
[HeavensRevenge]
> where lvalues are user defined and named, and the rvalues are the compiler managed values
> which are refrenced internally as temporaries with lifetimes managed by the compiler.
Sorry, this is deeply incorrect.
As I explained in the video, lvalueness/rvalueness is a property of *expressions*, not objects. It's possible to have lvalue expressions referring to temporary objects (for example, this happens whenever a temporary X is passed to a function taking const X&), and to have rvalue expressions referring to user-named objects (for example, std::move(foobar)).
> Even though lvalue could also mean location value and rvalue meaning return value, whichever way you like best
Historically, L meant "left side of an assignment" and R meant "right side of an assignment". As I explained in the video, this is no longer accurate (due to constness, among other things). Nor should you attempt to invent your own meanings for L and R.
I think of them like this: lvalues are "persistent" and rvalues are "ephemeral". I can say a[i] over and over, referring to something persistent, so it is an lvalue. Given const X& x, I can also say x over and over, so it is an lvalue. It doesn't matter if x is bound to a temporary object that will evaporate shortly after my function returns. For the duration of my function where x is in scope, I have something persistent. Conversely, y + z does not refer to something persistent, so it is an rvalue. std::move(foobar) is also an rvalue, because it's saying "I want you to pretend that foobar is going to evaporate soon".
> I must say, STL this video was the video I enjoyed the most of your 9 videos,
> even though it wasn't specifically aimed at an STL feature, because you
> finally delved into a more technical subject making this valuable.
Thanks.
[Burkholder]
> Disregard my no-compiler-warning for C++ in Visual Studio 2010.
I recall mentioning VC's Evil Extension in the video, and exhorting the audience to compile with /W4. If I didn't, I should have. This extension is the most disgusting, vile thing ever (the Standard prohibits this nonsense for extremely good reasons), it should never be used, and it's good that /W4 warns about it.
Any particular reason you did not mention the distinction between prvalues and xvalues?
I think this is important because the xvalue "std::move(var)" is an rvalue even though it allows you to refer to the same object multiple times (as opposed to prvalues such as "x+y").
[quote]
1 day ago, STL wrote
.
[/quote]
Sorry, I'm very confused; won't that extra temporary cause a performance loss, like an extra copy, or does the compiler optimize away this extra temporary?
Thanks for another great lecture, STL, i'm already eagerly waiting for more. :)
If you return a local variable, the move is indeed implied.
Depends on what you mean by "referenced at a later point". Consider how std::swap is implemented:
I found a text by Thomas Becker to be a good companion while re-watching the video. It is dated July 2010, runs 10 pages, and tries to explain rvalues, move, and forward step by step.[url][/url]
[NotFredSafe]
> Any particular reason you did not mention the distinction between prvalues and xvalues?
I suspected that explaining the lvalue/xvalue/prvalue/glvalue/rvalue taxonomy would be more confusing than clarifying.
[dpratt71]
> Cannot the compiler reason that "here is the last place this reference is...referenced, therefore it is safe to use "move" semantics"?
The compiler is very smart, but not omniscient. In particular, when it sees something like a memory allocation followed by a deallocation, it can't determine whether that work is unnecessary.
C++ compilers have always understood the distinction between lvalues and rvalues (in fact, even C compilers have this knowledge). What move semantics does is very clever: it provides just a little bit of extra information to the compiler in order to enable major optimizations that were previously impossible. This information takes two forms: telling the compiler to treat certain lvalues as rvalues with std::move(), and telling the compiler to do something more efficient during construction and assignment when the source is a modifiable rvalue.
> Furthermore, supposing you specify "std::move", could not the compiler issue a strong
> warning or even an error, should the 'moved' reference be...referenced at a later point?
In general, moved-from objects are in a "valid but unspecified" state, so the only things that you should do to them are reassign them or destroy them. However, certain types provide stronger guarantees. For example, moved-from shared_ptrs are guaranteed to be empty.
[obvious]
> Sorry, I'm very confused, wont that extra temporary cause performance loss
It should not. The optimizer should be able to see through simple pointer assignments. In any event, this is just a matter of a few instructions. The major expense (unnecessary dynamic memory allocation/deallocation) has already been avoided.
[NotFredSafe]
> If you return a local variable, the move is indeed implied.
Yes, due to a special rule in the Working Paper. Named local variables are lvalues, and ordinarily lvalues must be copied from, not moved from. However, returning from a function triggers the destruction of local variables. Since val's destruction is imminent, "return val;" will automatically move from val.
Please make the other video before Christmas (it'll be considered your gift to us). Well, I'm still in for the Template Metaprogramming, but don't start too complicated, since I'm not that good at math.
Question, Video 3 you had the container traits etc. Since I'm not only using VS2010, I can't always use the "auto" keyword. How do I get to the iterator from the template argument? Will this work? It compiles, but I can't say if it can crash:
template <typename Container, typename Predicate>
void erase_if_helper(Container& c, Predicate p, associative_tag) {
typename Container::iterator i = c.begin();
while ( i != c.end() ) {
if ( p( *i ) ) {
c.erase( i++ );
} else {
++i;
}
}
}
How about SFINAE? This code is giving me unexpected results in Visual C++:
The output on VS2008 is 1 1 0 0, but GNU gives 1 1 1 1. Is this a compiler bug?
@NotFredSafe:
This is __not__ a compiler bug in either GNU/g++ or Visual Studio.
Your code assumes that type U directly declares member functions begin() const and end() const, rather than inheriting them, and likewise that const_iterator comes directly from U rather than being inherited. The C++ standard does __not__ require this for associative containers like std::map and std::set (see section 23.6.1 point 2 [for std::map] and section 23.6.3 point 2 [for std::set] of the current draft of the C++ standard). According to the standard, implementers are free to typedef std::map<...>::const_iterator or std::set<...>::const_iterator any way they want.

Since Dinkumware uses inheritance to implement std::map and std::set, Dinkumware can choose to just use the inherited std::_Tree<...>::const_iterator std::_Tree<...>::begin() const and std::_Tree<...>::const_iterator std::_Tree<...>::end() const. Visual Studio uses Dinkumware's STL implementation, while GNU/g++ does not, and GNU/g++'s std::map and std::set do not use this inherited structure. If GNU/g++ used it, the results from GNU/g++ and Visual Studio would be the same. Here's an example (GNU/g++ Version 4.5.0 vs. Visual Studio 2010):
Because your code requires something that the C++ standard does not require, your code is not a good way to check for a container at compile time ... and is implementation dependent. You should change your code to reflect the C++ standard requirements.
Also, good info on Substitution Failure Is Not An Error (SFINAE) can be found on page 106 of C++ Templates: The Complete Guide by Vandevoorde and Josuttis.
Hope This Clarifies,
Joshua Burkholder
P.S. - If you wanted to keep your previous structure and still be able to pick up the associative containers, std::set and std::map, then you could always use a partial specialization on your is_container<...> struct. Here's an example:

Of course, this is a kludge (so you don't have to dramatically modify your code) and is not recommended ... but it works.

Based on the kludge above, GNU/g++ Version 4.5.0 outputs:

Based on the kludge above, Visual Studio 2010 outputs:
Ah, scratch that question! I understand now. Somebody could write an assignment operator with std::move(lhs), in which case it will invoke the move assignment operator and resources will be swapped into an lvalue.
[Deraynger]
> Will this work?
> typename Container::iterator
Yep.
NotFredSafe/Burkholder: Please note that it is technically forbidden to take the address of a Standard Library member function. (They can be overloaded, making &foo::bar ambiguous, and they can have additional default arguments, defeating attempts to disambiguate via static_cast.)
@STL:Thanks for the heads up on not taking the address of Standard Library member functions. Where is that detailed in the standard?
Thanks In Advance,
Joshua Burkholder
This is one of those things that the Standard doesn't explicitly say, but that logically follows from other Standardese. (This is the second most subtle way that the Standard specifies stuff. The MOST subtle way is "X is permitted because nothing prohibits it", which for example applies to v.push_back(v[0]).)
From my BoostCon 2009 presentation:
* What's wrong with mem_fn(&string::size) ?
* It's nonconformant to take the address of any Standard Library non-virtual member function
* C++03 17.4.4.4/2 permits additional overloads, making the address-of operator ambiguous
* static_cast can't disambiguate: C++03 17.4.4.4/2 also permits additional arguments with default values, which change the signature
* Lambdas for the win!
* [](const string& s) { return s.size(); }
@STL:Are there any new videos planned yet ? I'm getting withdrawal symptoms!
I'm filming Part 10, which will be the final part of this introductory series, on Thursday. They're renovating the studio this month, but we found a temporary location where I'll be able to have a screen but no whiteboard.
@STL:Can you share the subject of this talk? Just can't wait, sorry
Stephan,
Great job with part 9. This is the best episode of the series so far. Thanks again for doing these.
I'm sure you already have ep 10 all planned out, but I would really like to see an episode with perfect forwarding. I'd like to see the type traits stuff too ... maybe that is in the next episode.
Does the "final part of this introductory series" maybe imply that there would be an advanced series coming???
@STL:
I hope "final" doesn't mean no more videos.
Do you have any thoughts about what videos will come next ?
[JeanGa]
> Can you share the subject of this talk?
Type traits, with some coverage of templates in the Core Language (not much) and some philosophy of templates (not much).
[ryanb]
> Great job with part 9. This is the best episode of the series so far.
Thanks. What exactly do you like about it the most? The fact that it's covering a newer and more advanced topic?
> Thanks again for doing these.
You're welcome, and thanks for watching.
> I'm sure you already have ep 10 all planned out,
It's more like I have an idea, and I wing it as I go. The only parts I extensively prepared for were the Nurikabe segments.
> but I would really like to see an episode with perfect forwarding.
I expect that I will get to that in the advanced series. It'll help to see "live" examples of perfect forwarding.
> Does the "final part of this introductory series" maybe imply that there would be an advanced series coming???
...and...
[Mr Crash]
> Do you have any thoughts about what videos will come next ?
In the advanced series I'll dive into the STL's implementation. There are many neat tricks and subtleties in there, and it'll give me a chance to talk about things that can't be easily found in books.
Hello,
I hope that someone could provide me with elegant solution for this problem.
Let's say we have vector with dynamically allocated objects, and we want to use remove_if to delete not necessary objects.
I wrote such a template (code below), and as you figured, I learned the hard way that you cannot use the iterator returned from remove_if as a begin iterator or iterate from it :)
So one solution I thought about is that the predicate should delete the elements that should be removed, but that is too easy to forget.
All my other thoughts ended up creating a second vector or iterating the vector twice...
So maybe You could provide some elegant solution for this problem.
Thanks.
template <class TYPE, class PRED>
void destroy_if(std::vector<TYPE*>& v, PRED pr)
{
    typename std::vector<TYPE*>::iterator it, itb = std::remove_if(v.begin(), v.end(), pr);
    for (it = itb; it != v.end(); ++it)
        delete (*it);
    v.erase(itb, v.end());
}
@SauliusZ:
Did you look at part 3, STL's STL code?
Unless I misunderstood something in the code, or your question, this should work for a vector (else look at STL's example to create a generic way of deleting container items):
@Deraynger
Well, you see, this example works great if the container holds objects on the stack, but if the container holds objects on the heap (allocated with new), this will leak memory.
One solution, as I said, would be to delete the object in the predicate if the condition is met.
But I think the predicate should only determine whether an object should be removed, and NOT delete it. (Well, maybe this thinking is completely wrong and I should delete objects in the predicate?)
Or maybe there is some solution which wouldn't need to iterate container 2 times (one time to delete objects, second time to remove them from container).
Short answer, yes.
The episodes I personally find most interesting are the ones that have taken a deeper dive into the library and/or have helped to introduce some of the newer (C++0x) aspects of the language.
In this episode in particular, I found that it:
- Introduced a feature (r-value references) that was mostly new to me
- Showed what they really do at a low level
- Showed me why I should care about them and how I can use them to solve problems in real code
- Gave clear examples of where and how to use them
- Let me see how they are applied within the library (and thus providing benefits) even if I am not directly using them in my code (improvements with only a recompile).
- Presented a more advanced topic that you don't find in the piles of beginner STL info out there.
And the most important part is that you have (as always) managed to present all of this using a very clear, uncomplicated, straight-to-the-point style. You really do have a knack for understandable presentation of oft-complex material. (Maybe you should write a book?)
Anyway, your description of episode 10 sounds interesting. I eagerly await that episode, as well as the advanced series. It sounds like there is even more goodness coming there.
Does the following code solve your problem?
But you are probably better off with a std::vector<std::unique_ptr<T>> if you have a C++0x compiler.
@STL: Excellent, i can barely wait
"Must have C++0x / STL goodness, arrrrrg"
[SauliusZ]
> Let's say we have vector with dynamically allocated objects
As I believe I stressed in Part 3, every owning raw pointer (a T * that must be deleted) is a potential memory leak. Containers of owning raw pointers (e.g. vector<T *> where the pointers must be deleted) are basically guaranteed leaktrocity.
You should use either vector<shared_ptr<T>> or vector<unique_ptr<T>>. You should be using VC10, so both should be available to you. If you're using VC9 SP1 (old!) you'll have access to shared_ptr (in std::tr1) but not unique_ptr. If you're using VC8 SP1 (argh, REALLY OLD!), you can use boost::shared_ptr.
[ryanb]
> The episodes I personally find most interesting are the ones that have taken a deeper dive into the library and/or have helped to introduce some of the newer (C++0x) aspects of the language.
Thanks for the explanation; this will help me to target future episodes more precisely.
> And the most important part is that you have (as always) managed to present all of this using a very clear, uncomplicated, straight-to-the-point style.
> You really do have a knack for understandable presentation of oft-complex material.
:->
My ultimate goal for this series is to demonstrate that anyone can do this stuff if they approach it in the right way. It would be especially awesome to start a chain reaction - if I've helped you to understand something better, whether it's the STL or rvalue references or whatever, it's now your responsibility to spread this to other programmers and the codebases that you work on!
> (Maybe you should write a book?)
That's one of the things that I'd like to do. My understanding of C++ is still expanding, though. The more I learn, the more I realize how much else there is to learn.
> Anyway, your description of episode 10 sounds interesting.
I filmed it yesterday and it should be available in January. I spent a fair amount of time explaining SFINAE, which is a tricky Core Language concept.
> as well as the advanced series. It sounds like there is even more goodness coming there.
In the intro series, I assumed knowledge of C++ but not the STL's interface. In the advanced series, I'll assume knowledge of the STL's interface but not its implementation. I've spent a lot of time covering the basic concepts that I live and breathe all day (what's a vector, what's a shared_ptr, etc.). Now I'd like to try covering the most complicated stuff that I work with.
@NotFredSafe
Thanks, that will do it :)
And by "in January", you mean right on time for Christmas, right? Right??? I need my STL fix under the christmas tree
Can't wait for your SFINAE wisdom and other advanced insights...
The last installment in this series will ship before the end of December. Don't worry!
Hi Stephan,
This penultimate lecture of the fantastic introductory series is excellent and I'm looking forward to your coverage of <type_traits>. You're a natural at getting across the salient points of your chosen topic.
I'm impressed that you "wing it" in most lectures when you often elaborate on the reasoning and history behind features and decisions with such clarity, this is rare.
Please keep recording lectures for as long as you enjoy doing so, I particularly look forward to an advanced series.
Topics I'd like to see you cover in future lectures include:
I'd like to be able to download the code shown in this episode, commented code would be a plus.
Keep up the great work, thanks,
Peter
peteo: Thanks! It's a fair amount of work, but comments like yours help to keep me going. :->
> The next two years for C++
C++0x will officially become an International Standard, and compilers/libraries will increasingly support it. I can't show off what's coming in VC11, but I can walk through what we did in VC10.
> The Boost libraries
An episode about Boost is a good idea, and well-suited to the advanced series. One of the few downsides to my job, other than the fact that I get free Diet Mountain Dew but not free BBQ Popchips from the vending machines, is that I get very little time to use Boost. Other teams at MS use Boost, but the STL is the layer underneath Boost. Most of my Boost experience comes from college or from programming at home, so it's not exactly comprehensive. Still, things like Bimap, Filesystem, and Format (to name some things I've used and that aren't in C++0x) would be great to demonstrate.
> Template meta programming
Oh yes, you'll be seeing a lot of this.
> Algorithms and Big-O notation
I expect that I'll be going over some of this with the implementations of STL algorithms, although I've already given my Big-O spiel in the intro series.
> Stephan's guide to the C++ specification and community
The Working Paper is freely available (and the International Standard in PDF form is cheap). The Standard Library section is *mostly* intelligible to programmers who don't read Standardese all day, with the exception of iostreams/locales. As for the C++ community, there are probably a bunch of good sites I don't know about (and newsgroups; comp.lang.c++[.moderated] still seems to be active but I don't monitor it). Certainly I would say that Boost is the leading edge of C++ development.
> Your thoughts on concurrency with C++
This is a complicated topic and I'm not an expert here, but my PPL talk from BoostCon 2009 may be of interest to you:
> Intermediate to advanced C++ advice
> Existing C++ features and design patterns
This should come up as I'm looking through the STL's implementation; there are some good practices that we follow, places where we do stuff that ordinary users shouldn't do at all (like _Ugly identifiers) or should think carefully before doing (like some of the stuff we do with inheritance), and very few places where we do bad stuff that nobody should imitate and that we should fix (the most notorious example is using std::forward where we should use std::move).
> Compiler features, settings and design, debugging tips
Interestingly, I characterize myself as being very good at static code analysis and compile-time debugging, but lacking magic powers for run-time debugging. I try to, and usually succeed at, writing bulletproof code that works on the first try and doesn't need to be debugged. Conveying how to do this is kind of hard but I keep trying (and trying, and trying, even when people don't listen when I say that owning raw pointers are memory leaks waiting to happen, see above - my code is basically utterly immune to memory leaks because of how it is structured).
> I'd like to be able to download the code shown in this episode, commented code would be a plus.
I'm 99% sure that this is on my work laptop (I distinctly remember cleaning out old files, noticing my rvalue references examples on my desktop, and deciding to keep them because someone might ask for them). Unfortunately I'm on vacation right now, and my work laptop is the one thing that I don't have remote access to. I've written myself a todo to dig up this code when I get back after the new year. It was cleaned up and updated from my rvalue references v1 blog post.
Part 10:
Here's the code:
I was always cautious about rvalues even since they've appeared few years ago in discussions. And this video demonstrates that very clearly to a common developer -- rvalue adds a LOT of complexity to the language while falling short of achieving true move semantics. It is a great improvement, there is no doubt about it. But see for yourself:
- using a move constructor (ctor) still results in calling the temporary's destructor (dtor). No one needs that thing anymore, so why waste CPU cycles checking moved-from values for some magical NULL values? What if the type has no such values? In reality all you need is a reduced mctor (no need to touch the temporary object) and a "don't call the original temporary's dtor" guarantee provided by the environment.
- the proposed move operator (mop=) v1 is implemented as (MyClass(move(ref)).swap(*this)); its price is: mctor + swap + 2 dtors! But all you need is to release all resources you have (a dtor call) + the same "reduced mctor" + a "forget the temporary" guarantee. And btw -- "swap" is worse than "move", and "move" is worse than "reduced mctor".
- mop= v2 (i.e. universal op=) is the same
My problem with rvalues is that they are so complex and give so much advantage over what we have in language now that at the end of the day no one will ever be able to add more efficient true move semantic to C++ -- there will be only few interested people and it will very likely contradict with complex clauses in standard created to support rvalues -- you'll have to remove it... but then it'll break existing code.
So, in my opinion rvalues are a partial solution that will make sure that we never get a complete one. :-(
Sorry guys, when I was pressing "Comment" button formatting looked fine. Now it's not :-\
Its a great video about rvalues, thank you!
[Michael Kilburn]
> using move constructor (ctor) still results in calling temporary's destructor (dtor).
That's just a few instructions, and the compiler is much more likely to be able to see through them (it can't see through memory allocation/deallocation, but pointer assignments and tests are much friendlier to the optimizer).
> What if type has no such values?
Movable objects typically have resourceless states. Some objects don't, but that can be dealt with.
> That's just a few instructions
No, it is not. It is a function call (let's leave the optimizer aside for a moment -- none of its magic is guaranteed to happen anyway). And more -- a move ctor has to be more expensive, and move assignment even worse... C++ is usually quite proud of "not doing unnecessary work", but that is not the case with rvalues.
Source: http://channel9.msdn.com/Series/C9-Lectures-Stephan-T-Lavavej-Standard-Template-Library-STL-/C9-Lectures-Stephan-T-Lavavej-Standard-Template-Library-STL-9-of-n?format=smooth
of a pain. This is what I realized when I found myself in the need to upload large files to a web site using Twisted. I googled, and I googled, and I found no real answer to my problems.
You can of course find an easy solution: build a multipart request by building the request body yourself out of the files you are trying to upload. Sure, that works. But what happens when you are dealing with a 50 MB file upload? What about 500 MB? I’m sure you are not planning to encode the whole file as a string, right?
This is where Twisted’s body producers come handy. Implementing your own body producer, you have total control on how to build the request. In fact, Twisted will call you every time it needs data for the request, so you can be sure you won’t be building the whole chunk in one string. Instead, you will be sending chunks of bytes to what is known as the consumer. What is the consumer? Whatever is asking for a request body.
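To make the chunking concrete, here is a stdlib-only sketch of my own of the multipart/form-data wire format a body producer ultimately emits, one piece at a time, so a large file never has to live in memory as a single string (the helper name, field names, and boundary are made up; this shows the format, not Twisted's actual implementation):

```python
# Sketch of the multipart/form-data wire format a body producer emits,
# one chunk at a time, so a large file never has to be built as one string.

def multipart_chunks(fields, files, boundary="----sketchboundary"):
    """Yield the request body piece by piece (hypothetical helper)."""
    for name, value in fields.items():
        yield ("--%s\r\n"
               "Content-Disposition: form-data; name=\"%s\"\r\n\r\n"
               "%s\r\n") % (boundary, name, value)
    for name, (filename, content) in files.items():
        yield ("--%s\r\n"
               "Content-Disposition: form-data; name=\"%s\"; filename=\"%s\"\r\n"
               "Content-Type: application/octet-stream\r\n\r\n") % (boundary, name, filename)
        # A real producer would read the file in fixed-size chunks here.
        yield content
        yield "\r\n"
    yield "--%s--\r\n" % boundary

body = "".join(multipart_chunks({"field1": "value1"},
                                {"upload": ("myfile.tar.gz", "FILEDATA")}))
print(body)
```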
Ok, enough said. I could go on and on with the explanation, but let’s learn by coding (yay!). We want to upload a file to a server using Twisted. We also want to know how the file upload is doing (by getting progress callbacks), and when it finishes. Just for the fun of it, we will also send other POST data with the request.
First things first, let us build the server code that will handle the upload. For simplicity (well, that’s sort of B.S., since Python is simple enough) we’ll build a PHP script that will take a file upload, and print out the uploaded information, together with any posted data. Here’s the code for that:
<?php
$files = array();
foreach ($_FILES as $field => $file) {
    if (empty($file['tmp_name']) || !is_uploaded_file($file['tmp_name'])) {
        continue;
    }
    $files[$field] = $file;
}
print_r($_POST);
print_r($files);
?>
Easy right? We are just printing out whatever we got, discarding any invalid file upload information.
Let’s look at our client to upload a file, and POST some data to this PHP script. This Python script is using the body producer I built, together with a handy receiver for getting the response. More about that later. Here is the client script:
from twisted.internet import defer
from twisted.internet import reactor
from twisted.web import client
from twisted.web import http_headers

from pyfire.twistedx import producer
from pyfire.twistedx import receiver

def finished(bytes):
    print "Upload DONE: %d" % bytes

def progress(current, total):
    print "Upload PROGRESS: %d out of %d" % (current, total)

def error(error):
    print "Upload ERROR: %s" % error

def responseDone(data):
    print "Response:"
    print "-" * 80
    print data
    reactor.stop()

def responseError(data):
    print "ERROR with the response. So far I've got:"
    print "-" * 80
    print data
    reactor.stop()

url = ""
files = { "upload": "/home/mariano/myfile.tar.gz" }
data = { "field1": "value1" }

producerDeferred = defer.Deferred()
producerDeferred.addCallback(finished)
producerDeferred.addErrback(error)

receiverDeferred = defer.Deferred()
receiverDeferred.addCallback(responseDone)
receiverDeferred.addErrback(responseError)

myProducer = producer.MultiPartProducer(files, data, progress, producerDeferred)
myReceiver = receiver.StringReceiver(receiverDeferred)

headers = http_headers.Headers()
headers.addRawHeader("Content-Type", "multipart/form-data; boundary=%s" % myProducer.boundary)

agent = client.Agent(reactor)
request = agent.request("POST", url, headers, myProducer)
request.addCallback(lambda response: response.deliverBody(myReceiver))

reactor.run()
Let’s look at what we are doing here. We start by importing some twisted modules we need for this script, and the multipart and receiver modules from the pyfire module I wrote. Don’t worry, you don’t have to use pyfire to use these two modules. Just make sure to download the twistedx module that is part of pyfire. Next, we define some callback functions:
- finished() to inform that the upload is done,
- progress() to keep us in the loop while the request is being sent to the server,
- error() in case sh*t happens, and finally
- responseDone() and responseError() to handle the response from the server, and stop Twisted’s reactor (its event loop.)
In the actual script code we see that we start by defining the destination URL, and dictionaries with the files to be sent out, and data to post, both of them indexed by field name. We proceed then to start coding the “Twisted” way: creating two deferreds that will be used by the producer, and the receiver. If you don’t know about deferreds you probably haven’t read the Twisted manual much, so I recommend you go over their section on deferreds. Basically it’s a very easy (and flexible, and chainable, and…) way to get callbacks.
So we build two deferreds: the first one (producerDeferred) is for the producer, where we attach a success callback (the finished() function), and an error callback (the error() function). The second one (receiverDeferred) is used by the receiver, and contains a success callback (the responseDone() function), and an error callback (the responseError() function). Both of these functions will print out whatever data we got as response, and finish by stopping the reactor.
We then build the producer, passing on the files to upload, the data to post, the callback progress() function that will be called throughout the upload, and the deferred for the producer. Similarly, we build the receiver, passing only its deferred.
Having both the producer and the receiver, we can now proceed to create the actual request, not without first creating any additional headers we may need (the request will automatically specify the content length out of the body producer.) Since we are uploading a file, we specify the content type of the request to be multipart/form-data, and as its boundary we set whatever the producer chose as boundary for our chunks in the request body.
The final step is the actual running of the request, doing it the typical Twisted way: first getting an agent for the reactor, creating the request (a POST request to the given URL, with the given headers and body producer), and adding a callback for when we get a response. The callback in this case is a simple lambda function, that delivers the body from the response to the receiver. Finally, we run the reactor.
Notice that when you run the reactor the run() call will block until you stop the reactor. This is why from our response callbacks (responseDone() and responseError()) we stop the reactor whenever we get some sort of response.
If you run this script against your PHP server script, you may get an output like the following:
Upload PROGRESS: 153 out of 9120
Upload PROGRESS: 320 out of 9120
Upload PROGRESS: 9080 out of 9120
Upload PROGRESS: 9082 out of 9120
Upload PROGRESS: 9120 out of 9120
Upload DONE: 9120
Response:
--------------------------------------------------------------------------------
Array
(
    [field1] => value1
)
Array
(
    [upload] => Array
        (
            [name] => myfile.tar.gz
            [type] => application/x-tar
            [tmp_name] => /tmp/phplINynd
            [error] => 0
            [size] => 8760
        )
)
Isn’t this fun?
No related posts.
Leo Lima wrote:
hi man, i am your new follower!! you are a good programmer.. i found a perfect code in ‘′.
so, i need some help now..
i would like to write code that collects the meta data from a url and returns it in variables. example:
$title = value inside tags
$description = value from
$tags = value from
just that.. may you help me?
kind regards
LeoLima
mariano wrote:
I’m not sure I follow what you are saying. If you use the function that you linked to, you’ll notice it returns an array with title, and metaTags. The description is actually a meta tag by itself, same with tags. So you can do:
$result = getUrlData('/');
if (!empty($result)) {
extract($result);
echo 'TITLE: ' . $title . '<br />';
echo 'META TAGS: '; print_r($metaTags);
}
Odie wrote:
Is it possible to use twistedx with twisted.web.client instead of Agent?
Rob wrote:
Nice tutorial. I had fun playing with this even though it was not quite what I was looking for.
I need to process raw post data
Source: https://marianoiglesias.com.ar/python/file-uploading-with-multi-part-encoding-using-twisted/comment-page-1/
I've read a few posts about this and thought I had some code that worked. If the difference between the 2 values is less than a second, then the milliseconds displayed are correct.
If the difference is more than a second, it's still only showing me the difference of the milliseconds.
As below.
Correct:
now_wind 2013-08-25 08:43:04.776209
first_time_wind 2013-08-25 08:43:04.506301
time_diff 0:00:00.269908
diff 269
now_wind 2013-08-25 08:43:25.660427
first_time_wind 2013-08-25 08:43:23.583902
time_diff 0:00:02.076525
diff 76
#!/usr/bin/env python
import datetime
import time
from time import sleep
first_time_wind = datetime.datetime.now()
sleep (2)
now_wind = datetime.datetime.now()
print "now_wind", now_wind
print "first_time_wind", first_time_wind
time_diff_wind = (now_wind - first_time_wind)
print "time_diff", time_diff_wind
print "diff", time_diff_wind.microseconds / 1000
>>> a = datetime.datetime.now()
>>> b = datetime.datetime.now()
>>> a
datetime.datetime(2013, 8, 25, 2, 5, 1, 879000)
>>> b
datetime.datetime(2013, 8, 25, 2, 5, 8, 984000)
>>> a - b
datetime.timedelta(-1, 86392, 895000)
>>> b - a
datetime.timedelta(0, 7, 105000)
>>> (b - a).microseconds
105000
>>> (b - a).seconds
7
>>> (b - a).microseconds / 1000
105
your microseconds don't include the seconds that have passed
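The usual fix (my addition, not from the thread) is timedelta.total_seconds(), which folds days, seconds, and microseconds together, and has been available since Python 2.7:

```python
import datetime
import time

first_time_wind = datetime.datetime.now()
time.sleep(0.25)
now_wind = datetime.datetime.now()

delta = now_wind - first_time_wind

# .microseconds is only the sub-second component; total_seconds()
# also includes the days and seconds that have passed.
total_ms = int(delta.total_seconds() * 1000)

# Equivalent without total_seconds():
also_ms = (delta.days * 86400 + delta.seconds) * 1000 + delta.microseconds // 1000

print(total_ms)
```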
Source: https://codedump.io/share/vEvV6EEXqJWT/1/python---time-difference-in-milliseconds-not-working-for-me
The QTextBlockFormat class provides formatting information for blocks of text in a QTextDocument. More...
#include <QTextBlockFormat>
Inherits QTextFormat.
Note: All the functions in this class are reentrant.
The QTextBlockFormat class provides formatting information for blocks of text in a QTextDocument.
A document is composed of a list of blocks, represented by QTextBlock objects.

Returns the currently set page break policy for the paragraph. The default is QTextFormat::PageBreak_Auto.
This function was introduced in Qt 4.2.
See also setPageBreakPolicy().Indent(). The indentation is an integer that is multiplied with the document-wide standard indent, resulting in the actual indent of the paragraph.
See also indent() and QTextDocument::indentWidth().

Sets the page break policy for the paragraph to policy.
This function was introduced in Qt 4.2.
See also pageBreakPolicy().
Sets the paragraph's right margin.
See also rightMargin(), setLeftMargin(), setTopMargin(), and setBottomMargin().
Sets the tab positions for the text block to those specified by tabs.
This function was introduced in Qt 4.4.
See also tabPositions().

Returns a list of tab positions defined for the text block.
This function was introduced in Qt 4.4.
See also setTabPositions().
Returns the paragraph's text indent.
See also setTextIndent().
Returns the paragraph's top margin.
See also setTopMargin() and bottomMargin().
Source: http://doc.trolltech.com/4.5-snapshot/qtextblockformat.html#alignment
* James Morris wrote:
> On Fri, 8 May 2009, Ingo Molnar wrote:
>
> > > In general, I believe that ftrace based solutions cannot safely
> > > validate arguments which are in user-space memory when multiple
> > > threads could be racing to change the memory between ftrace and
> > > the eventual copy_from_user. Because of this, many useful
> > > arguments (such as the sockaddr to connect, the filename to open
> > > etc) are out of reach. LSM hooks appear to be the best way to
> > > impose limits in such cases. (Which we are also experimenting
> > > with).
> >
> > That assessment is incorrect, there's no difference between safety
> > here really.
> >
> > LSM cannot magically inspect user-space memory either when multiple
> > threads may access it. The point would be to define filters for
> > system call _arguments_, which are inherently thread-local and safe.
>
> LSM hooks are placed so that they can access objects safely, e.g.
> after copy_from_user() and with all appropriate kernel locks for
> that object held, and also with all security-relevant information
> available for the particular operation.
>
> You cannot do this with system call interception: it's an
> inherently racy and limited mechanism (and very well known for
> being so).
Two things.
Firstly, the seccomp + filter engine based filtering method does not
have to be limited to system call interception at all: by placing a
tracepoint at that place seccomp can register itself to the same
point as the LSM hook, and enumerate and expose the fields. It can
be expressed in the string filter namespace just fine.
[ do we have nestable LSM hooks? If yes then seccomp could layer
itself below any existing security context, in a hierarchical way,
to provide add-on restrictions. It is all about further
restrictions, not the creation or overruling of existing security
policies/modules. ]
Secondly, pure system call argument based filtering is already very
powerful for _sandboxing_. Seccomp v1 is the proof for that, it is
equivalent to the:
{ { "sys_read", "1" },
{ "sys_write", "1" },
{ "sys_ret_from_signal", "1" } }
filter rules. Your argument really pertains to full-system security
solutions - while maximising compatibility and capability and
minimizing user inconvenience. _That_ is an extremely hard problem with
many pitfalls and snake-oil merchants flooding the roads. But that
is not our goal here: the goal is to restrict execution in very
brutal but still performant ways.
That means we'd like to give finegrained but still very brutally
constructed permissions to untrusted contexts. Instead of the
seccomp v1 rules above, an app might want to inject these rules into
a sandbox context:
{ { "sys_read", "fd == 0" },
{ "sys_write", "fd == 1" },
{ "sys_sigreturn", "1" },
{ "sys_gettimeofday", "tz == NULL" } }
Note how such a (simple!) set of rules expands over seccomp v1 in a
very meaningful way:
- The sys_read rule restricts the read() syscall to stdin only.
Even if other fds exist.
- The sys_write() rule restricts the write() syscall to stdout
only.
- sys_gettimeofday() is allowed, but only tv is allowed - tz not.
Note how we were able to _further restrict_ the seccomp v1
sandboxing concept: under seccomp v1 the task would be able to write
to stdin or read from stdout.
Furthermore, only fds 0 and 1 are allowed - under seccomp v1 if any
other fd gets into the sandboxed context accidentally, it could make
use of them. With the above filtering scheme that is denied.
Also, note the gettimeofday rule: we were able to 'halve' the
security cross-section of the sys_gettimeofday() permission: we only
allow &tv to be recovered, not the time zone.
So the filtering engine allows the very finegrained tailoring of the
system call environment, right in the context of the sandboxed task,
without context-switches.
The filtering engine is also 'safe' in that unprivileged tasks can
use PRCTL_SECCOMP_SET with arbitrary strings, and the resulting
filter expression is still to be parsed and later on executed by the
kernel.
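Purely as an illustration of the rule format discussed above (this is hypothetical user-space Python, not the kernel filter engine — all names here are made up), a per-syscall argument filter can be modeled as a map from syscall name to a predicate over that call's arguments, with deny-by-default semantics:

```python
# Toy model of seccomp-style syscall argument filtering (illustration only;
# the real mechanism parses and executes filter strings inside the kernel).

FILTER = {
    "sys_read":  lambda args: args.get("fd") == 0,            # read() from stdin only
    "sys_write": lambda args: args.get("fd") == 1,            # write() to stdout only
    "sys_gettimeofday": lambda args: args.get("tz") is None,  # tv allowed, tz denied
}

def allowed(syscall, **args):
    """Return True if the call passes the sandbox filter; deny by default."""
    rule = FILTER.get(syscall)
    return rule(args) if rule else False

# A sandboxed task may read stdin and write stdout, and nothing else:
print(allowed("sys_read", fd=0))                # True
print(allowed("sys_write", fd=2))               # False: stderr is not permitted
print(allowed("sys_gettimeofday", tz=None))     # True
print(allowed("sys_open", path="/etc/passwd"))  # False: no rule, default deny
```

Note how the model captures the point made above: even if another fd leaks into the sandboxed context, the argument predicate denies its use.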
> I'm concerned that we're seeing yet another security scheme being
> designed on the fly, without a well-formed threat model, and
> without taking into account lessons learned from the seemingly
> endless parade of similar, failed schemes.
I do agree that that danger is there (as with any security scheme),
so this all has to be designed carefully.
[ I think as long as we shape it as "only additional restrictions on
top of what is already there", in a strictly nested way, there's
little danger of impacting existing security measures. ]
There's also the very real possibility of having a really flexible
sandboxing model :) So i think Adam's work is fundamentally useful.
Ingo
http://article.gmane.org/gmane.linux.kernel.lsm/8621
|
The Twins Electronic Game
Introduction: The Twins Electronic Game
In one evening, you can easily create this fun & versatile game for 2 users.
NOTE: You will need 2 of everything, that is why it is called "The Twins".
It starts with a PCB you can create yourself or purchase already made from
Add to that a simple Arduino Nano or Pro/Mini, an LCD & an RF24 Wireless Transmitter, an Open Source sketch & voila, you are ready for some fun action!
The principle is simple: You need the button on each twin to be pressed at the same time, to complete the challenge & proceed to the next step. The twins can be any distance apart up to approx. 300 ft and will still be able "to see" each other...
Step 1: Assemble a Twin
This is really an easy project to accomplish whether you only buy the PCB, or the kit.
1. Solder Arduino Pro/Mini (or Nano) pins on back side of PCB (S&T in upper right corner). Don’t forget the 2 little pro/mini pins that go between the 2 rows: A string of 12 pins on one side, a string of 12 with the 5th pin missing on the other side with the 2 isolated pins behind this gap.
2. Solder the 2 groups of 6 pins on front side of PCB for the LCD. DO NOT solder LCD yet.
3. Solder Pro/Mini (or Nano) on back side
4. Solder LCD on front side after applying 2 layers of double sided tape to the back of the LCD.
5. Test the assembly by transferring & running the Twin Sketch on the Arduino with power through a USB cable connected to a laptop. Troubleshoot if the display shows errors. It is normal for the display to have a little flicker on USB power. The 9-Volt battery will later correct this.
6. Solder RF24 on front side.
7. Solder Push Button
8. Solder On/Off switch interrupting either one of the 9 Volt connectors, as shown below. Solder resulting assembly to board, making sure to respect polarity.
9. After making sure you change the one line in the code depending on which twin you program, download your Twin sketch to the Arduino.
10. Test Assembly (you will need to have both twins powered up for full functionality).
Step 2: Program the Twins
You can implement the functionality in multiple different ways.
The example below (that we use in Geocaching challenges) uses the same code for both Twins and we change only one line depending on which Twin we program (Thank You Rion for this clever coding!).
// F.D.R. Geo-Project Twin Multi-Stage GeoCache
// This is the Final Twin code
// This requires an RF24L01 Receiver & a Parallel LCD on Nano
// User supplies 9 Volt battery

#include <LiquidCrystal.h>
#include <SPI.h>
#include <Mirf.h>
#include <MirfHardwareSpiDriver.h>
#include <nRF24L01.h>

LiquidCrystal lcd(2, 3, A0, A1, A2, A4);

#define pinUnused 0
#define deviceNumber 1 // 1 or 2: The only thing to change between devices
#define device1Name "TwinOne"
#define device1Final " N 32 xx.xxx "
#define device1Alone "Awaiting Romulus"
#define device2Name "TwinTwo"
#define device2Final " W 84 xx.xxx "
#define device2Alone "Awaiting Remus."
// #define RF_DR_LOW 5
// #define RF_PWR_LOW 1
// #define RF_PWR_HIGH 2
#define cacheMessage "FDR TwinGeocache"
#define keyMessage " -> GOODLUCK <-"
#define successMessage "Final Position: "

int rate;

void setup() {
  Serial.begin(9600);
  pinMode(5, OUTPUT);  // LCD V0 pin to control brightness
  analogWrite(5, 120); // 120 seems like a good brightness level under 9 Volts
  lcd.begin(16, 2);
  lcd.clear();
  delay(10);
  Mirf.cePin = 9;   // ce pin on Uno
  Mirf.csnPin = 10; // csn pin on Uno
  Mirf.spi = &MirfHardwareSpi;
  Mirf.init();
  Mirf.setRADDR( ( deviceNumber == 1 ) ? (byte*)device1Name : (byte*)device2Name );
  // Mirf.setRADDR((byte *)"serv1");
  Mirf.setTADDR( ( deviceNumber == 1 ) ? (byte*)device2Name : (byte*)device1Name );
  // Mirf.setRADDR((byte *)"clie1");
  Mirf.payload = sizeof(rate);
  Mirf.config();
  randomSeed( analogRead( pinUnused ) );
  displayMessage( cacheMessage, keyMessage );
  delay( 4000 );
}

void displayMessage( const char* line1, const char* line2 )
{
  lcd.clear();
  lcd.setCursor(0, 0);
  lcd.print( line1 );
  lcd.setCursor(0, 1);
  lcd.print( line2 );
}

void sendData( void )
{
  unsigned long now = millis();
  Mirf.send( (byte*)&now );
  while( Mirf.isSending() )
    delay( random( 10 ) );
}

bool readData( void )
{
  bool dataFound = false;
  while( Mirf.dataReady() )
  {
    byte data[Mirf.payload];
    Mirf.getData(data);
    dataFound = true;
  }
  return dataFound;
}

void loop() {
  sendData();
  if( readData() == true )
  {
    displayMessage( keyMessage, ( deviceNumber == 1 ) ? device1Final : device2Final );
    unsigned long now = millis();
    while( millis() <= (now + 1000) )
      sendData();
  }
  else
  {
    displayMessage( cacheMessage, ( deviceNumber == 1 ) ? device1Alone : device2Alone );
    unsigned long now = millis();
    while( millis() <= (now + 1000) )
      sendData();
  }
}
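The logic of loop() — each twin broadcasts a heartbeat and reveals its half of the final coordinates only while it can hear the other twin — can be modeled off-device with a toy simulation. This is plain Python for illustration only; the helper name and return values are hypothetical, though the display strings mirror the #defines in the sketch above:

```python
# Toy, off-device model of the twins' handshake: a twin shows its half of the
# final coordinates only while it is receiving the other twin's heartbeat.

def twin_display(device_number, heard_other):
    """Return the two LCD lines a twin would show (hypothetical helper)."""
    final = " N 32 xx.xxx " if device_number == 1 else " W 84 xx.xxx "
    alone = "Awaiting Romulus" if device_number == 1 else "Awaiting Remus."
    if heard_other:
        return (" -> GOODLUCK <-", final)
    return ("FDR TwinGeocache", alone)

# Both twins in range: each reveals its half of the coordinates.
print(twin_display(1, True))   # (' -> GOODLUCK <-', ' N 32 xx.xxx ')
print(twin_display(2, True))   # (' -> GOODLUCK <-', ' W 84 xx.xxx ')
# Twin 2 out of range: twin 1 falls back to its waiting message.
print(twin_display(1, False))  # ('FDR TwinGeocache', 'Awaiting Romulus')
```

This also makes clear why a solver needs both halves: neither twin ever displays the other's coordinate fragment.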
Step 3: Feedback Is a Gift!
Described here was only 1 other use of this versatile LBE (Location Based Entertainment) board.
We are sure you can see all sorts of Entertainment possibilities brought by modifications to the code or various sensors/hardware changes.
We want to build a community of DIY Geocachers to elevate the Geocaching hobby to the next level.
Please don't hesitate to post ideas/suggestions whether you are involved in Geocaching or not...
Boards are available through and more pictures are available on our blog at inversegeocahe.tumblr.com.
Thank You for reading!
After purchasing a few circuit boards and an arduino, I made a geocache similar to this one. Thanks for all the help !
The interface circuit board you created for the motherboard is great. It worked out perfectly. I used the 'build it yourself' kit, and customized the text on the lcd screen.
Uh sorry i wrote i wrote i have instead of i am. Im hungarian. Stupid autocorrect
I have 15 i have been geocaching for 1,5 year. I've hidden some but none of them is as cool as yours. I just started learning arduino. I want to make a cool multi with caches like this so thank you :)
http://www.instructables.com/id/The-Twin-Electronic-Game/
|
Just say I want to read the number of words, lines, and characters in a .dat file. I'm thinking my string algorithm would look something like this:
------------------------------------------
//for words, lines, characters count read file, and then out_stream or cout results
//in_stream is already connected to words.dat
#include <iostream>
#include <fstream>
#include <string>
#include <cstring>
int words, lines, characters;
ifstream in_stream;
ofstream out_stream;
while(!in_stream.eof())
{
//chracter readout, and cout result
characters = strlen("words.dat");
out_stream << "words.dat has " << characters << " characters.";
//lines readout, and cout result
//words readout, and cout result
}
I can't think of an algorithm for lines and words. I do have an idea for lines: I think it's something like "\n", line++. I think words is something like "\0", word++. The thing is, I can't translate it into code form. Do I use strlen(), or getline()? Please help, or suggest... Thank you!
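One way to sketch the counting algorithm being asked about — shown here in Python for brevity; the same loop structure ports directly to C++, where std::getline yields the lines and stream extraction with >> (or splitting on whitespace) yields the words. The file name and demo contents are just for illustration:

```python
# Count lines, words, and characters in a text file.
# The idea: read line by line; each line read bumps the line count, its
# length feeds the character count, and splitting it on whitespace
# (the equivalent of extracting with >> in C++) gives the word count.

def count_file(path):
    lines = words = characters = 0
    with open(path) as f:
        for line in f:
            lines += 1
            characters += len(line)     # includes the trailing '\n'
            words += len(line.split())  # split() collapses runs of whitespace
    return lines, words, characters

if __name__ == "__main__":
    # Hypothetical demo: write a small words.dat and count it.
    with open("words.dat", "w") as f:
        f.write("hello world\nfoo bar baz\n")
    print(count_file("words.dat"))  # (2, 5, 24)
```

Note that strlen("words.dat") in the original attempt measures the length of the file *name*, not the file's contents — the counts have to come from the data read out of the stream.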
http://cboard.cprogramming.com/cplusplus-programming/4827-need-help-string-read.html
|
Red Hat Bugzilla – Bug 82091
Cannot select subscribed #shared/... folders
Last modified: 2008-05-01 11:38:04 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.2.1) Gecko/20021218
Description of problem:
We are using a number of shared IMAP folders in the #shared/ IMAP namespace
(WU-IMAPD automatically sets this namespace up if there is an account named
"imapshared")
When using Evolution in RedHat 8.0 I could use these folders just fine, with
only some minor nuisances when subscribing. However, with Phoebe
(evolution-1.2.1-2) the folders show up in the folder list, but nothing happens
if I try to select a folder in the #shared/ namespace.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Set up shared imap folders in #shared/ (imapshared account if using wu-imapd)
2. Subscribe to #shared/... folders using another IMAP tool or editing
.mailboxlist. You might be able to subscribe using Evolution if manually
entering "#shared/" in "only show folders beginning with..".
3. Try to select a #shared/... folder in the folder list.
Actual Results: Evolution attempts to try to open the folder, but then
immediately jumps back to the previous selected folder.
Expected Results: The #shared/... folder should show up in the message list window
Additional info:
Did work in RedHat 8.0.
*** This bug has been marked as a duplicate of 81674 ***
Changed to 'CLOSED' state since 'RESOLVED' has been deprecated.
https://bugzilla.redhat.com/show_bug.cgi?id=82091
|
Hello,
After a relatively small C++ background, I decided to learn some Java. I want to be able to pass two variables from my main class, evaluate them in another class, and return that answer to the main class. I have all the code done for it pretty much, but I can't figure out how to use the returned variable in my main class. Any help would be appreciated!
//This is the main class file
import java.util.Scanner;

public class Main {
    public static void main (String args[]) {
        System.out.println("Calculator v2");
        Scanner choice = new Scanner (System.in);
        System.out.println("Which operation do you want to do?");
        System.out.println("1-Addition");
        System.out.println("2-Subtraction");
        System.out.println("3-Multiplication");
        System.out.println("4-Divistion");
        int opt = choice.nextInt();
        switch (opt) {
            case 1: {
                Addition addobj = new Addition();
                System.out.println("Enter the first number!");
                Scanner numbers = new Scanner (System.in);
                double fnum = numbers.nextDouble();
                System.out.println("Enter the first number!");
                double snum = numbers.nextDouble();
                addobj.add(fnum, snum);
                break;
            }
            default: {
                System.out.println("No such operation!");
            }
        }
    }
}

//This is the addition class file
public class Addition {
    public static double add (double num, double num2) {
        double answer = num+num2;
        return answer;
    }
}
I just tried adding a "System.out.println(answer)" in the case inside the switch statement (ln. 22) , but I got an error saying that "answer cannot be resolved to a variable." I'm sure this is a quick fix, but I couldn't find anything helpful by googling it, so any help would be awesome.
Thanks much!
~Carpetfizz
Edited by Carpetfizz
https://www.daniweb.com/programming/software-development/threads/444397/how-to-return-variables-from-different-classes-functions
|
Question
In this exercise, the function written computes the average of two numbers passed as parameters. If the function were given a list of numbers, how could the average be calculated?
Answer
There are several ways to write a function to calculate the average of a list. In later versions of Python 3, there is a statistics library which contains a mean() function that can perform the calculation directly. Another simple way is to use the sum() and len() functions as shown below. Python 2 does not perform division using floating point unless instructed to do so by being given a floating point number (Python 3 does the conversion automatically), so the sum is cast to floating point.

def avg(numbers):
    return float(sum(numbers)) / len(numbers)

Finally, it can be done using a for loop and variables to store the sum and length. This is more involved than the previous version but achieves the same result.

def avg(numbers):
    sum = 0
    count = 0
    for n in numbers:
        sum += n
        count += 1
    return float(sum) / count
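The statistics-module route mentioned above can be shown alongside the sum()/len() version. This sketch assumes Python 3, where / already performs floating-point division, and the scores list is just sample data:

```python
from statistics import mean

def avg(numbers):
    # Python 3: / is true division, so no float() cast is needed
    return sum(numbers) / len(numbers)

scores = [88, 92, 79, 93, 85]
print(avg(scores))   # 87.4
print(mean(scores))  # 87.4
```

Both calls give the same answer; statistics.mean also raises a clear StatisticsError on an empty list, whereas the sum()/len() version raises ZeroDivisionError.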
https://discuss.codecademy.com/t/how-can-i-calculate-the-average-for-a-list-of-numbers/362502
|
MageUI.exe Deployment Manifest Tab
This topic describes the Deployment Manifest tab for the graphical Manifest Generation and Editing Tool (MageUI.exe).
The Name tab is the tab displayed when you first create or open a deployment manifest. It uniquely identifies the deployment, and optionally specifies a valid target platform.
- Name
Displays identifying information about this deployment.
- Description
Displays publisher and support information.
- Deployment Options
Provides additional information about the deployment, such as where to obtain product support.
- Update Options
Determines the update location, and how often ClickOnce should check for application updates.
- Application Reference
Provides a pointer to the application manifest for this deployment.
Name Panel—UI Element List
- Name
Required. The name of the deployment manifest. Usually the same as the file name.
- Version
Required. The version number of the deployment in the form N.N.N.N. Only the first major build number is required. For example, for version 1.0 of an application, valid values would include 1, 1.0, 1.0.0, and 1.0.0.0.
- Processor
Optional. The machine architecture on which this deployment can run. The default is msil, or Microsoft Intermediate Language: the default format of all managed assemblies. You would only change this field if you have pre-compiled the assemblies in your application for a specific architecture. For more information about pre-compilation, see Native Image Generator (Ngen.exe).
- Culture
Optional. The two-part ISO country/region code in which this application runs. Defaults to Neutral.
- Public key token
Optional. The public key with which this deployment manifest has been signed. If this is a new or unsigned manifest, this field will appear as unsigned.
Description Panel—UI Element List
- Publisher
Required. The name of the person or organization responsible for the application.
- Product
Required. The full product name. If you selected Install Locally for the Application Type element on the Name tab, this name will be what appears in the Start menu link and in the Add or Remove Programs dialog box for this application.
- Support Location
Optional. The URL from which customers can obtain Help and support for the application.
Deployment Options Panel—UI Element List
- Application Type
Optional. Determines whether this application installs itself to the client computer (Install Locally) or runs online (Online Only). Default is Install Locally.
- Launch Location
Optional. The URL from which the application should actually be started. Useful when deploying an application from a CD that should update itself from the Web.
- Automatically run application after installing
Required. Tells ClickOnce to run the application immediately after the initial installation. Default is with the check box selected.
- Allow URL parameters to be passed to application
Required. Permits the transfer of parameter data to the ClickOnce application through a query string appended to the deployment manifest's URL. Default is with the check box cleared.
- Use .deploy file extension
Required. Default is with the check box cleared.
Update Options Panel—UI Element List
This tab contains none of the options mentioned previously unless the Application Type selection box on the Name tab is set to Install Locally.
- This application should check for updates
Determines whether ClickOnce should periodically check for application updates. If this check box is not selected, the application will not check for updates unless you update it programmatically using the APIs in the System.Deployment.Application namespace.
- Choose when the update check should happen
Provides two options for update checks:
In the background, after the app starts. The update check is delayed until the main form of the application has initialized.
Before the application starts. The update check is performed prior to application execution.
These options are only available if This application should check for updates check box is selected.
- Update Check Frequency
Determines how often ClickOnce should check for updates:
Every time the application starts. ClickOnce will perform an update check every time the user opens the application.
Check every: Select a unit (hours, days, or weeks) and a time interval for updates.
These options are only available if the This application should check for updates check box is selected.
- Specify a minimum required version for this application
Optional. Specifies that a specific version of your application is a required installation, preventing your users from working with an earlier version.
- Version
Required if Specify a minimum required version for this application check box is selected. The version number supplied must be of the form N.N.N.N. Only the first major build number is required. For example, for version 1.0 of an application, valid values would include 1, 1.0, 1.0.0, and 1.0.0.0.
Application Reference Panel—UI Elements
This panel contains the same fields as the Name Panel described in the previous section. The one exception is the following field.
- Select Manifest
The application manifest to use to prepopulate all of the other fields on this page.
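The N.N.N.N version rule stated above (only the first major build number is required, up to four dot-separated numeric fields) can be captured in a small check. This is an illustration only, not part of MageUI; the function and regex are my own:

```python
import re

# Validate a ClickOnce-style version string: 1 to 4 dot-separated numeric
# fields, e.g. "1", "1.0", "1.0.0", "1.0.0.0".
VERSION_RE = re.compile(r"^\d+(\.\d+){0,3}$")

def is_valid_version(s):
    return bool(VERSION_RE.match(s))

for v in ("1", "1.0", "1.0.0.0", "1.0.0.0.0", "1.a"):
    print(v, is_valid_version(v))
# "1", "1.0", and "1.0.0.0" are valid; "1.0.0.0.0" and "1.a" are not
```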
http://msdn.microsoft.com/en-US/library/396st2hh(v=vs.80).aspx
|
Most of this was gleaned from squinting hard at the below:
But some stuff has changed due to Bluehost’s recent implementation of OpenSSH. So here goes!
This tutorial assumes you already have Python2.6 installed on Bluehost. (This should come with your hosting plan)
First things first– SSH into your account (using Putty, for instance). In order to do this you will have to enable SSH in your cPanel and send identification to Bluehost. Putty will prompt you for your password.
You are now in the home directory. Move into the directory where you would like to install Django from. Despite what the Bluehost instructions say, it doesn’t matter where this is. I just downloaded Django into my home directory.
cd ~
wget
tar xzvf Django-1.2.4.tar.gz
cd Django-1.2.4
Now you can try installing:
python2.6 setup.py install --user
The below may or may not affect anything. Bluehost seems to have disabled the .bashrc in your home directory (not the one in your etc folder, that one is read-only) But go ahead and do it anyway just in case.
—————————————————
Django installs some commands that you can use in your bash shell. To make sure
these work, add the following to the end of your .bashrc file:
export PATH=$HOME/.local/bin:$HOME/.local/usr/bin:$PATH
Log out and log back in to effect this change.
—————————————————
Now you’ll want to confirm that django can run in Python. From the official bluehost instructions:
Testing Your Installation
You can test your installation of Django to see if it worked by starting the
Python interpreter by typing python2.6 at the command prompt and hitting
enter. You’ll get the Python interactive interpreter, where you can type
import django and hit enter. If it gives an error, then there is a problem
with your installation. If no error is given, then you’ve successfully
installed Django to your Python path. Go ahead and exit the Python shell by
typing ctrl+d.
If you were unable to import Django, try re-logging in to your shell, and try
again. If it still doesn’t work, double check to make sure that your Python
path is correct, and that Django really resides in a directory on the Python
Path. To see what this list of directories is, start up the Python interpreter,
and type "import sys; sys.path", and press enter. This should give you a
list of directories. This should include any directories you listed in your
.bashrc file for PYTHONPATH. Check for any typos in your .bashrc
file. As long as the django directory is in one of those directories
listed, then you should be able to do the import. If it isn’t in any of those
directories, try re-installing Django.
You’ll also want to test the django-admin.py command by trying to run it
from your shell. You should get a usage message with a list of management
commands that can be run. If you get “command not found”, then there was a
problem with the installation of Django. This will probably not work. When you call django-admin.py you will have to do it with an absolute path. Something like,
python2.6 ~/.local/bin/django-admin.py
If django-admin.py isn’t in that directory, try re-installing Django.
START THE PROJECT:
Decide where you want to put your django projects. You can put them anywhere, but for security’s sake, they should not be in public_html or any of its subdirectories.
Let’s say you decide on “django_projects”, and you want it to be in your home directory. And let’s say your bluehost username is DJANGONOOB, and let’s say your first project name will be “myFirstProject”.
cd ~
mkdir django_projects
cd django_projects
initialize your project:
python2.6 /home/DJANGONOOB/.local/bin/django-admin.py startproject myFirstProject
Once the project directory has been created, cd into it
cd myFirstProject
and then follow the tutorial part ONE here: BUT keep the following in mind:
When you are creating the database, use sqlite3.
Remember that EVERYWHERE it says python, you must type python2.6 instead.
Everywhere you edit a file, use VIM editor, like so:
vi filename.py
This will bring up the editor. Press i to edit, and escape to exit editing mode.
After pressing escape, type :q! to quit without saving, and :wq to save and quit.
At the end of part one, you will have made a number of modifications. Make sure all your files match up with the below. You will have to change a few paths to make sure this works on Bluehost (changes are highlighted in red below)
settings.py:
# Django settings for testRun project.
DEBUG = True
TEMPLATE_DEBUG = DEBUG

ADMINS = (
    # ('Your Name', 'your_email@domain.com'),
)
MANAGERS = ADMINS

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
        'NAME': '/home/DJANGONOOB/django_projects/testRun/testRunDB', # Absolute path to the sqlite3 database file.
        ...
    }
}

# Absolute filesystem path to the directory that will hold user-uploaded files.
...

SECRET_KEY = 'a_i1wck_739wiot-ps995ua%g9*)i(z99)-uyz4sc$j3y)1_ja'

...

INSTALLED_APPS = (
    ...
    'codePosts',
    # Uncomment the next line to enable the admin:
    'django.contrib.admin',
    # Uncomment the next line to enable admin documentation:
    # 'django.contrib.admindocs',
)
urls.py:

...
urlpatterns = patterns('',
    # Uncomment the next line to enable admin documentation:
    # (r'^admin/doc/', include('django.contrib.admindocs.urls')),
    # Uncomment the next line to enable the admin:
    (r'^admin/', include(admin.site.urls)),
    #(r'^polls/$', 'polls.views.index'),
    #(r'^polls/(?P<poll_id>\d+)/$', 'polls.views.detail'),
    #(r'^polls/(?P<poll_id>\d+)/results/$', 'polls.views.results'),
    #(r'^polls/(?P<poll_id>\d+)/vote/$', 'polls.views.vote')
)
Now for the big guns. Obviously you won’t be able to run all this on your browser as if you are running all this from an installation on your home computer.
It’s a server for god’s sake. So you’ll have to get into some fun redirecting.
Decide where you want the project to be in your viewable directory structure. So, say you want your project to be accessible from:
then you would go
mkdir ~/public_html/django/testProject
cd ~/public_html/django/testProject
and now you’ll have to make a Fast CGI file to run your django site.
vim mySite.fcgi
copy and paste the following:
#!/usr/bin/python2.6
import sys, os

# Add a custom Python path.
sys.path.insert(0, "/home/DJANGONOOB/.local/lib/python2.6")
sys.path.insert(13, "/home/DJANGONOOB/django_projects/myFirstProject")

# Switch to the directory of your project. (Optional.)
# os.chdir("/home/DJANGONOOB/django_projects/myFirstProject")

# Set the DJANGO_SETTINGS_MODULE environment variable.
os.environ['DJANGO_SETTINGS_MODULE'] = "settings"

from django.core.servers.fastcgi import runfastcgi
runfastcgi(method="threaded", daemonize="false")
remember to :wq to save.
you must create an .htaccess to make sure mySite.fcgi is run when you pull this up in the browser:
vi .htaccess
copy and paste the below:
AddHandler fcgid-script .fcgi
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ mySite.fcgi/$1 [QSA,L]
now set the correct permissions for mySite.fcgi
chmod 0755 mySite.fcgi
now just pull up
and you should see the Django administration console!
Finally, I’m sure to have missed something. I tried, I really did, but I am trying to document 24 hours of jiggering here… so if it doesn’t work for you please pipe up in the comments and I will help figure it out!
——————
If you are interested in finding good django hosting, take a look at Webfaction. They are a one-click setup django hosting provider, and offer a lot of other really good features for advanced users.
Hi, I just followed ur steps to setup django on bluehost. But I didn’t get the admin console work. I got the 500 error. and searched a lot on google and still have no idea….
Could you please share something details by email?
Thanks a lot!
Bluehost is using the .bashrc file fine for me.
Adding these two lines to the .bashrc file makes things a little nicer
alias python="python2.6"
alias django-admin="python2.6 ~/.local/bin/django-admin.py"
Now you can just use “python” and “django-admin” everywhere
Thanks for a great job. I started with the instructions on bluehost’s site but got stuck and then ran across this and I have django installed!
Ryan’s alias’ worked for me also.
When I bring up the admin page it does not show any of the css/formatting. I have tried several suggestions around the web, but no success. Can you remember what you did to make the ..django/contrib/admin/media files serve up properly?
Hi Steve, glad you found the tut useful. I had a lot of problems with the admin media files myself. basically u need to copy django’s media folder (on your server, the path to it would be something like: ~/.local/lib/python2.6/site-packages/django/contrib/admin/media) to a subdirectory of public_html. Let’s say you copy it into “public_html/project_name/media”. Now in your project’s settings.py, you need to change the following setting to that path:
ADMIN_MEDIA_PREFIX = "/project_name/media/"
I think now django will load up everything correctly. let me know if this doesn’t help though, it might be a hack on my part… how django handles serving media is not my strong suit :/
Thanks so much. At first I was trying to use symlinks instead of copying the ../admin/media folder. Then I took your advice and copied the files – but I mistakenly copied them to the real django_project folder, not the url in public_html – so another few minutes scratching my head until I realized what I did. After moving the files to the correct directory and pointing ADMIN_MEDIA_PREFIX to the right location it works perfectly! Thanks so much.
This has me thinking, does this apply to any static files I have in my app also? (Haven’t started to move my app yet…)
This is a great tutorial, but I’m having a problem during the Django installation. During the install_lib portion, I’m getting “error: could not create ‘/usr/lib/python2.4/site-packages/django’: Read-only file system”. Any idea how I can get around this with Bluehost?
David, I got the same error at some point. You probably forgot to add the "--user" at the end.
python2.6 setup.py install --user
Thank you so much. This helped me enormously.
Hi, I created a django site on my computer using python 2.7.1 and django 1.3.1. Should I still download django 1.2.4 to bluehost as suggested in this tutorial, or should I download django 1.3.1 as it’s the version I’ve used for my site?
Thanku very much
http://blog.ruedaminute.com/2011/01/2011-installation-instructions-for-django-on-bluehost/
|
Among the basic data types and structures that Python provides are the following:
- Logical: bool
- Numeric: int, float, complex
- Sequence: list, tuple, range
- Text Sequence: str
- Binary Sequence: bytes, bytearray, memoryview
- Map: dict
- Set: set, frozenset
Out of the above, the four basic built-in data structures, namely Lists, Dictionary, Tuple and Set, cover almost 80% of our real-world data structures.
Lists
Lists is one of the most versatile collection object types available in Python. (dictionaries and tuples being the other two, which in a way, more like variations of lists).
- A list is a mutable, ordered sequence of items. As such, it can be indexed, sliced, and changed, and each element can be accessed by its position in the list. Python lists do the work of most of the collection data structures found in other languages, and since they are built-in, you don't have to worry about creating them manually.
- Lists can be used for any type of object, from numbers and strings to more lists.
- They are accessed just like strings (e.g. slicing and concatenation), so they are simple to use, and they're variable length, i.e. they grow and shrink automatically as they're used.
- List variables are declared by using brackets [ ] following the variable name.
A = []               # This is a blank list variable
B = [1, 23, 45, 67]  # This creates an initial list of 4 numbers
C = [2, 4, 'john']   # Lists can contain different variable types
Some useful methods of Lists
Slicing a List
We can take portions of a list (including the lower but not the upper limit). List slicing is the method of extracting a subset of a list, and the indices of the list objects are used for this. For example, if my list name is products:
products[:3]   # from the first element up to a given position
products[2:]   # from a given position until the last element
products[2:4]  # between two given positions in the list
products[:]    # all the members of the list
Python lists are upper-bound exclusive, which means that the last index during list slicing is excluded. That is why, for a list [3, 22, 30, 5.3, 20]:

list[2:-1]  # [30, 5.3]

i.e. the members of the list from index 2, which is the third element, through the second-to-last element in the list, which is 5.3.
So here the summary of the slicing notation
a[start:stop]  # items start through stop-1
a[start:]      # items start through the rest of the array
a[:stop]       # items from the beginning through stop-1
a[:]           # a copy of the whole array
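Putting that notation into practice, a quick sketch (the list values are the same ones used in the example above):

```python
products = [3, 22, 30, 5.3, 20]

print(products[:3])    # first three items: [3, 22, 30]
print(products[2:])    # from index 2 to the end: [30, 5.3, 20]
print(products[2:4])   # between two positions: [30, 5.3]
print(products[:])     # a copy of the whole list
print(products[2:-1])  # index 2 up to, but not including, the last item: [30, 5.3]
```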
Deleting list elements
Python has three ways of deleting list elements: list.remove(), list.pop(), and the del statement. remove() takes the particular element to be removed as an argument and deletes the first occurrence of that value. pop() and del take the index of the element to be removed as an argument, and del a[b:c] deletes all the elements in the range from index b up to (but not including) c.
extend(b) :- This function is used to extend the list with the elements present in another list. It takes another list as its argument.
clear() :- This function is used to erase all the elements of the list. After this operation, the list is empty.
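A short sketch tying the deletion and extension methods together (the list values here are made up for illustration):

```python
nums = [10, 20, 30, 20, 40]

nums.remove(20)        # deletes the first occurrence of the value 20
print(nums)            # [10, 30, 20, 40]

nums.pop(1)            # deletes (and returns) the element at index 1
print(nums)            # [10, 20, 40]

del nums[0:2]          # deletes elements from index 0 up to (not including) 2
print(nums)            # [40]

nums.extend([50, 60])  # extend the list with the elements of another list
print(nums)            # [40, 50, 60]

nums.clear()           # erase all elements
print(nums)            # []
```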
Difference between Lists and Arrays in Python
List elements can be anything, and each element can have a completely different type; this is not allowed in arrays. Arrays are objects with a definite element type and size, which makes lists the far more flexible of the two. Arrays, on the other hand, are stored more efficiently (i.e. as contiguous blocks of memory vs. pointers to Python objects).
Another difference between a list and an array is the operations you can perform on them. For example, you can divide a NumPy array by 3, and each number in the array will be divided by 3, as below. However, if I try to divide a list by 3, Python will throw an error.

x = array([3, 6, 9, 12])
print(x / 3.0)
# [1. 2. 3. 4.]

y = [3, 6, 9, 12]
y / 3.0
# TypeError: unsupported operand type(s) for /: 'list' and 'float'

Note above how it takes an extra step to use arrays, because they have to be declared explicitly, while lists don't, because they are part of Python's syntax. However, if we're going to perform arithmetic on our sequences, we should really be using arrays instead. Additionally, arrays store data more compactly and efficiently, so if you're storing a large amount of data, you may consider using arrays as well.
When should I use Arrays instead of Lists
For almost all cases the normal list is the right choice. Arrays are more efficient than lists for some uses. If you need to allocate an array that you KNOW will not change, then arrays can be faster and use less memory. If you're going to be using arrays, consider the numpy or scipy packages, which give you arrays with a lot more flexibility.
You can cast a list to a numpy array as below.
import numpy as np

u = np.array([1, 0])
v = np.array([0, 1])
Tuple
Tuples are used to hold together multiple objects. Think of them as similar to lists, but without the extensive functionality that the list class gives you. One major feature of tuples is that they are immutable like strings, i.e. you cannot modify tuples. However, you can take portions of existing tuples to make new tuples. Lists are declared in square brackets and can be changed, while tuples are declared in parentheses and cannot.
Tuple indexing and splitting
The indexing and slicing in tuples are similar to lists. The indexing in a tuple starts from 0 and goes to len(tuple) - 1.
The items in the tuple can be accessed by using the slice operator. Python also allows us to use the colon operator to access multiple items in the tuple.
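A minimal sketch of tuple indexing and slicing (the example values are my own):

```python
t = (10, 20, 30, 40, 50)

print(t[0])    # 10  -- indexing starts at 0
print(t[-1])   # 50  -- the last item, at index len(t) - 1
print(t[1:4])  # (20, 30, 40)  -- the colon (slice) operator, as on lists
```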
Unlike lists, tuple items cannot be deleted by using the del keyword, because tuples are immutable. To delete an entire tuple, we can use the del keyword with the tuple name.
tupleA = (1, 2, 3, 4)
print(tupleA)
del tupleA[0]
print(tupleA)
del tupleA
print(tupleA)
Output of the above is
(1, 2, 3, 4)
Traceback (most recent call last):
  File "/home/paul/codeLap/BLOG/Python-testing-code-snippets/test2.py", line 3, in <module>
    del tupleA[0]
TypeError: 'tuple' object doesn't support item deletion
Difference between lists and tuples
A list has a mutable nature, i.e. a list can be changed or modified after its creation according to needs, whereas a tuple has an immutable nature, i.e. a tuple can't be changed or modified after its creation.
list_num = [1, 2, 3, 4]
tup_num = (1, 2, 3, 4)
print(list_num)  # [1, 2, 3, 4]
print(tup_num)   # (1, 2, 3, 4)

list_num[2] = 5
print(list_num)  # [1, 2, 5, 4]

tup_num[2] = 5
# Traceback (most recent call last):
#   File "python", line 6, in <module>
# TypeError: 'tuple' object does not support item assignment
Also, you can't use a list as a key for a dictionary. This is because only immutable values can be hashed, so we can only use immutable values like tuples as keys. If you still want to use a list as a key, you must turn it into a tuple first. Note that only some tuples can be used as dictionary keys: specifically, tuples that contain only immutable values like strings, numbers, and other such tuples. Lists can never be used as dictionary keys, because lists are not immutable.
A very fine explanation of tuples vs. lists is given in this StackOverflow answer:
Apart from tuples being immutable there is also a semantic distinction that should guide their usage. Tuples are heterogeneous data structures (i.e., their entries have different meanings), while lists are homogeneous sequences. Tuples have structure, lists have order.

One example would be pairs of page and line number to reference locations in a book, e.g.:

my_location = (42, 11)  # page number, line number

You can then use this as a key in a dictionary to store notes on locations. A list on the other hand could be used to store multiple locations. Naturally one might want to add or remove locations from the list, so it makes sense that lists are mutable. On the other hand it doesn't make sense to add or remove items from an existing location - hence tuples are immutable.
Among Tuples and Lists - when to use which
Use a tuple when you know what information goes in the container. For example, when you want to store a person's credentials for your website:

person = ('ABC', 'admin', '12345')
But when you want to store similar elements, like in an array in C++, you should use a list:

groceries = ['bread', 'butter', 'cheese']
Dictionary
A dictionary is a set of key:value pairs, similar to an associative array found in other programming languages.
Iterating a Dictionary
The Python dictionary allows the programmer to iterate over its keys using a simple for loop. Let's take a look:
my_dict = {1: 'one', 2: 'two', 3: 'three'}
for key in my_dict:
    print(key)

# Output:
# 1
# 2
# 3
Python 3 changed things up a bit when it comes to dictionaries. In Python 2, you could call the dictionary's keys() and values() methods to return Python lists of keys and values respectively:
But in Python 3, you will get views returned:
# Python 3
my_dict = {1: 'one', 2: 'two', 3: 'three'}
my_dict.keys()    # dict_keys([1, 2, 3])
my_dict.values()  # dict_values(['one', 'two', 'three'])
my_dict.items()   # dict_items([(1, 'one'), (2, 'two'), (3, 'three')])
A view object has some similarities to the range object we saw earlier: it is a lazy promise to deliver its elements when they're needed by the rest of the program. We can iterate over the view, or turn the view into a list like this:
my_dict = {1: 'one', 2: 'two', 3: 'three'}
print(list(my_dict.keys()))
# [1, 2, 3]
The values method is similar; it returns a view object which can be turned into a list:
my_dict = {1: 'one', 2: 'two', 3: 'three'}
print(list(my_dict.values()))
# ['one', 'two', 'three']
Alternatively, to get the list of keys or values as an array from a dict in Python 3, you can use the * operator to unpack the dict views:
my_dict = {1: 'one', 2: 'two', 3: 'three'}
print([*my_dict.keys()])    # [1, 2, 3]
print([*my_dict.values()])  # ['one', 'two', 'three']
The items method also returns a view, which promises a list of tuples, one tuple for each key:value pair:
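For instance, continuing with the same example dictionary used above:

```python
my_dict = {1: 'one', 2: 'two', 3: 'three'}

print(my_dict.items())        # dict_items([(1, 'one'), (2, 'two'), (3, 'three')])
print(list(my_dict.items()))  # [(1, 'one'), (2, 'two'), (3, 'three')]

# the view unpacks naturally in a loop
for key, value in my_dict.items():
    print(key, value)
```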
Restrictions on Dictionary Keys
Almost any type of value can be used as a dictionary key in Python, e.g. integer, float, and Boolean objects. However, there are a couple of restrictions that dictionary keys must abide by.
First, a given key can appear in a dictionary only once. Duplicate keys are not allowed.
Secondly, a dictionary key must be of a type that is immutable. A tuple can be a dictionary key, because tuples are immutable:
d = {(1, 1): 'a', (1, 2): 'b', (2, 1): 'c', (2, 2): 'd'}
d[(1, 1)]  # 'a'
d[(2, 1)]  # 'c'
However, neither a list nor another dictionary can serve as a dictionary key, because lists and dictionaries are mutable:
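A small sketch showing both sides of the rule (the key values here are arbitrary):

```python
d = {}
d[(1, 2)] = 'tuples are immutable, hence hashable'

try:
    d[[1, 2]] = 'lists are mutable, hence unhashable'
except TypeError as err:
    print(err)  # unhashable type: 'list'
```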
Restrictions on Dictionary Values
By contrast, there are no restrictions on dictionary values. Literally none at all. A dictionary value can be any type of object Python supports, including mutable types like lists and dictionaries, and user-defined objects, which you will learn about in upcoming tutorials.
There is also no restriction against a particular value appearing in a dictionary multiple times:
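For example (the values here are arbitrary):

```python
# the same value may appear under any number of keys
d = {0: 'a', 1: 'a', 2: 'a'}
print(d[0] == d[1] == d[2])  # True

# values may even be mutable objects, such as lists or other dictionaries
d2 = {'nums': [1, 2, 3], 'nested': {'x': 1}}
print(d2['nested']['x'])     # 1
```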
To check if a key exists in a Python dictionary
There is often a need to extract the value of a given key from a dictionary; however, it is not always guaranteed that a specific key exists in the dictionary.
When you index a dictionary with a key that does not exist, it will throw an error.
Hence, it is a safe practice to check whether or not the key exists in the dictionary prior to extracting the value of that key. For that purpose, Python offers two built-in functions:
- has_key()
- if-in statement
However, the has_key() method has been removed in Python 3.
The code snippet below illustrates the usage of the if-in statement to check for the existence of a key in a dictionary:
Stocks = {'a': "Apple", 'b': "Microsoft", 'c': "Google"}
key_to_lookup = 'a'
if key_to_lookup in Stocks:
    print("Key exists")
else:
    print("Key does not exist")
So, how does Python find out whether the key is or is not in the dictionary without somehow looping through the dictionary? Basically, the keys for a dictionary are passed through a hash function which assigns them to unique buckets in memory. A given input key will always hash to the same bucket, so instead of having to check every entry to determine membership, you only have to pass the key through the hash function and then check whether anything exists at the bucket associated with it. The drawback of dictionaries is that they are not ordered and require more memory than lists.
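A quick sketch of membership testing, reusing the Stocks example from above:

```python
Stocks = {'a': "Apple", 'b': "Microsoft", 'c': "Google"}

print('a' in Stocks)               # True  -- one hash lookup, no loop needed
print('z' in Stocks)               # False
print('Apple' in Stocks)           # False -- 'in' tests keys, not values
print('Apple' in Stocks.values())  # True  -- this does scan the values
```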
Aliasing and copying of Dictionaries
Because dictionaries are mutable, we need to be aware of aliasing. Whenever two variables refer to the same object, changes to one affect the other.
If we want to modify a dictionary and keep a copy of the original, use the copy method. For example, original_dict below is a dictionary containing a few key:value pairs:
original_dict = {"up": "down", "right": "wrong", "yes": "no"}
alias = original_dict
copy = original_dict.copy()  # shallow copy
alias and original_dict refer to the same object; copy refers to a fresh copy of the same dictionary. If we modify alias, original_dict is also changed:
alias["right"] = "left"
original_dict["right"]
# Will output 'left'
If we modify copy, original_dict is unchanged:
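That can be sketched as:

```python
original_dict = {"up": "down", "right": "wrong", "yes": "no"}
copy = original_dict.copy()

copy["right"] = "left"
print(copy["right"])           # 'left'
print(original_dict["right"])  # 'wrong' -- the original is untouched
```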
Merging dictionaries
Say you have two dicts and you want to merge them into a new dict without altering the original dicts:
x = {'a': 1, 'b': 2}
y = {'b': 3, 'c': 4}
The desired result is to get a new dictionary (z) with the values merged, and the second dict's values overwriting those from the first:

z
# {'a': 1, 'b': 3, 'c': 4}
For dictionaries x and y in Python, z becomes a shallowly merged dictionary with values from y replacing those from x.
# In Python 3.5 or greater:
z = {**x, **y}

# In Python 2 (or 3.4 and lower) we have to write a function
# that takes two dictionaries as its arguments:
def merge_two_dicts(x, y):
    z = x.copy()  # start with x's keys and values
    z.update(y)   # modifies z with y's keys and values and returns None
    return z

# and now:
z = merge_two_dicts(x, y)
Some implementational differences between Lists and Dictionary
While Copying ? Lists vs Dictionary
Mutable objects come with one potentially prominent drawback: changes made to an object are visible from every reference to that object. All mutable objects work this way because of how Python references objects, but that behavior isn't always the most useful. In particular, when working with objects passed in as arguments to a function, the code that called the function will often expect the object to be left unchanged. If the function needs to make modifications in the course of its work, i.e. if we want to make changes to an object without those changes showing up elsewhere, we'll need to copy the object first.
Lists support slicing to retrieve items from the list into a new list. That behavior can be used to get all the items at once, creating a new list with those same items. Simply leave out the start and end values while slicing, and the slice will copy the list automatically:
a = [1, 2, 3]
b = a[:]
print(b)
# Output: [1, 2, 3]
# The : operator here works as a slice operator on sequences
We could also use list.copy() to make a copy of the list.
For dictionaries, by contrast, there is no slicing: slice objects aren't hashable, so dictionaries don't allow them to be used as keys. Instead, dictionaries provide their own copy() method:
a = {1: 2, 3: 4}
b = a.copy()
b[5] = 6
print(b)
# Output: {1: 2, 3: 4, 5: 6}
Add into the DataBox default constructed items for the collection of tags requested by any of the actions in the phase-dependent action list. More...
#include <SetupDataBox.hpp>
Add into the DataBox default constructed items for the collection of tags requested by any of the actions in the phase-dependent action list.
This action adds all of the simple tags given in the simple_tags type lists in each of the other actions in the current component's full phase-dependent action list, and all of the compute tags given in the compute_tags type lists. If an action does not give either of the type lists, it is treated as an empty type list.
To prevent the proliferation of many DataBox types, which can drastically slow compile times, it is preferable to use only this action to add tags to the DataBox, and place this action at the start of the Initialization phase action list. The rest of the initialization actions should specify simple_tags and compute_tags, and assign initial values to those tags, but not add those tags into the DataBox.
An example initialization action:
It is assumed that the ParallelComponent and the ActionList do not depend on the DataBox type in the Algorithm. This assumption holds for all current utilities, but if it must be relaxed, revisions to SetupDataBox may be required to avoid a cyclic dependency of the DataBox types.
PIC RS232
RS232 serial communication is an ancient, low speed, reliable standard. Each of about 100 characters (a, b, c, 1, 2, 3...) has a one-byte ASCII code. These are transmitted as 8 consecutive low and high logic levels, with a few framing bits as well. Baud rate refers to the number of bits/second. A typical baud rate is 19200, which is about 10000 times slower than USB. The original teletype machines were 110 baud. Read more about RS232 on wikipedia.
All desktop and laptop computers used to have a serial or "COM" port, available through a male DB-9 connector. These are getting less common on newer computers. However you can get an inexpensive "USB to Serial Adapter" cable which makes a COM port available, once you install some driver software to support the adapter cable.
The 18F4520 PIC has a built-in UART for transmitting and receiving characters on the RS232 standard. After you make the hardware connection -- the PIC's transmit pin sends to the PC/laptop's receive pin and vice versa -- you can open a text window on the PC/laptop and see whatever your PIC program prints, and type characters that will be transmitted to your PIC. The CCS IDE has a built-in text window for this purpose, shown at the bottom of this page. You can also use Hyperterminal, which actually works somewhat better. It is a Windows app usually found at Programs/Accessories/Communications/Hyperterminal. You can also set up your PIC to communicate with a Matlab program on a host PC.
We have two different kinds of cables for PC to PIC RS232 communication: the FTDIChip TTL-232R (preferred) and a cheaper USB to RS232 cable. The TTL-232R is preferred because you can easily plug it into a protoboard and you don't need any other components to interface with your PIC. It also provides +5V output from the PC which you can use to power other devices, such as an xbee radio (just need a 3.3V regulator). The cheaper cable requires you to solder a DB-9 connector and use an external level-shifter chip, to shift between the high voltages used by the PC and the lower TTL voltages used by the PIC. These two solutions are discussed below.
FTDIChip TTL-232R USB to RS232 Cable
The FTDIChip TTL-232R USB to RS232 cable (pictured at right), when used with the driver software, allows you to use your USB port as an RS232 port. You can find a manual for the cable here. You connect the cable's Tx (transmit data) pin with your PIC's Rx (receive data, pin 26), the cable's Rx pin with your PIC's Tx (pin 25), and the cable's ground with the PIC's ground, and you're all set. The circuitry inside the cable takes care of the USB to RS232 as well as voltage-level shifting. This cable also provides a 5V output which you can use to power an xbee wireless modem, for example (the xbee needs a 3.3V regulator).
The only drawback of this cable is that it costs a bit more than the cheaper, less convenient option below. Also, please check to make sure that the USB-RS232 driver software does not interfere with your PIC's ICD driver software.
Wiring to your PIC
The TTL-232R cable has six wires, color coded and in this sequence on the SIP connector:
BLACK GROUND Connect to PIC ground. BROWN CTS# Unused. "Clear to send" (flow control). RED Vcc +5V from PC. !! DO NOT CONNECT TO PIC +5 !! ORANGE Tx "PC Transmit." Data from PC, to PIC Rx (pin 26). YELLOW Rx "PC Receive." Data from PIC Tx (pin 25), to PC. GREEN RTS# Unused. "Request to send" (flow control).
Note that the TTL-232R cable gives you access to the PC's +5 supply (Red wire), which is limited in current (probably to 500mA) by the USB port which is designed to protect from short circuits. If you wish, you can probably power your PIC from the PC, using USB as a power source. If you do that, get rid of any other +5 source. Don't connect two +5 sources at once. In contrast, be sure that you do connect the PIC's ground to the PC's ground (Black wire).
The HL-340 USB to RS232 Cable
Cables such as the HL-340 pictured at right can be bought for as low as a few dollars, on ebay for example. This cable serves the same purpose as the cable above, but it is a bit less convenient for a few reasons: (1) the RS232 voltages are high level (+12V and -7V) and must be level-shifted before interfacing with the PIC; (2) the cable breaks out into a DB-9 connector, so you have to have (or solder) your own connector; and (3) it doesn't provide +5V to power external devices, as the cable above does. Nonetheless, it works just fine; just install the driver at and make sure you follow the instructions below.
Do I need to hook up all 9 pins?
No, you only need three: transmit, receive, and ground. The rest have purposes but are seldom used any more. You do have to get the baud rate and a few other parameters matched, between whatever your two communicating partners are.
On a standard male DB-9 COM connector on a computer, 2=input to computer, 3=output from computer, 5=ground.
The connector you build will be female with the same pin numbers for the same purposes. Photo at right shows the pin numbering on the female DB-9.
Voltage problems: important!
PC/laptops use +12 volt and -7 volt signals. The official RS232 standard uses +12V and -7V for logic 1 and 0 (or maybe the other way around). The PIC can only produce 5V logic levels, and it can be damaged by voltages above 5V. So although the PIC and your PC/laptop agree about the code for transmitting characters, they don't agree about the voltage levels.
Some devices use "5 volt RS232". Quite a lot of devices use 0 and 5V logic levels instead of the higher voltages that the RS232 standard specifies. The PIC does this, and so do Serial 2-line LCD Displays, which are very handy. So these can be connected directly.
Shifting voltage levels
For interfacing a PIC to a PC/laptop or to a Serial-to-USB Adapter, you need a level converter. The MAX232N is a lovely chip that serves this function, in both directions (PC to PIC and PIC to PC). In fact it has two channels in each direction, should you ever want that many. You would think that in order to produce 12V signals it would need a +12V power supply, and maybe a negative supply as well, but it doesn't, it produces the needed +12V and -7V internally from a single convenient 5V supply.
Using the MAX232N
The chip needs five 1uF capacitors around it, which is not a lot to ask for sparing you two extra supply voltages. Connect MAX232N pins 13 & 14 (and ground) to a female DB-9 connector as shown, and plug into your PC/laptop's serial port. Connect MAX232N pins 11 & 12 to your PIC. If you only want PIC output, you don't need to hook up the PIC input.
- PIC to PC: PIN pin 25 (RC6/TX) --> MAX232N pin 11 --> MAX232N pin 14 --> DB-9 pin 2
- PC to PIC: DB-9 pin 3 --> MAX232N pin 13 --> MAX232N pin 12 --> PIC pin 26 (RC7/RX)
Code on the PIC
The essential line is #use rs232(baud=19200, UART1). This sets up the PIC's hardware UART. The PIC can also can do software RS232, but that is susceptible to mistiming caused by interrupts.
Once the UART is set up, you can use most of the usual C serial i/o functions, such as printf(). There's no scanf() in PIC C, and be sure to read about the formatting codes (%d and so on.)
Note - Characters transmitted to the PIC faster than your program reads them cause the UART to choke. After that happens, kbhit() never returns TRUE again, and no characters can be read. So use PIC RS232 input with care. If you want reliable input you have to set up an interrupt on incoming characters.
#include <18f4520.h>
#fuses HS,NOLVP,NOWDT,NOPROTECT
#use delay(clock=20000000)     // 20 MHz crystal on PCB
#use rs232(baud=19200, UART1)  // hardware UART; uses RC6/TX and RC7/RX

int16 i;
int j;

void main()
{
   while (TRUE) {
      for (i=1; i<10000; i++) {
         printf("Hello world. %Lu %u\r\n", i, j);  // \r is carriage return, \n is scroll up
         delay_ms(100);
         if (kbhit()) j = getc();  // type slowly so kbhit() doesn't choke
      }
   }
}
Configuring your COM port and the Loopback Test
Shown at right are the usual configuration options for the COM port on your computer. Baud rate must match that of the PIC. There is the problem of which COM port to select, and whether the one you have made electrical connection to (through a DB-9 connector) is that one.
To check your COM port, try the Loopback Test. With the DB-9 disconnected, if you type "hello" into the text window ("serial port monitor") you will not see "hello" appear in the window, because your characters went out on pin 3 of the DB-9. The window shows only characters arriving on pin 2 of the DB-9; there is no Local Echo. If you connect pin 2 to pin 3 temporarily and type again, you will now see your words. Thus you have identified electrical signals that correspond to your COM port and text window.
Typical other problems are:
- Different baud rates (sometimes resulting is garbled transmission, sometimes no transmission)
- DB-9 pins 2 & 3 (computer receive and transmit) not going to partner's transmit & receive, respectively
- You didn't level shift appropriately and now things are burned out.
- Partner (e.g. PIC) is not transmitting anything. Look with a scope and see if there's anything happening on partner's transmit line.
These are chat archives for django/django
Heey guys. Morning here. I have a little bit of a problem with django forms if i have a code like below in a template rendering a form
<section class="project"> <p>{% trans "Select the associated project" %}</p> <select id="project" name="project" > {% for project in projects %} <option value={{project.id}} {% if project.id == 1 %} selected {% endif %}>{{ project.title }}</option> {% endfor %} </select> </section>
The form field
project = forms.IntegerField(required=False)
now, the bug:
My form renders appropriately. I have a select field. The value I'm using for each select option is the
project.id which I suppose must always be a positive int
When I clean my form and check for the value from the
project field, I get a
None instead of an integer. In my view:
form.cleaned_data["project"] equals
None. What really is the bug here??? Thanks so
@example tag
This tag inserts a piece of code from source code file in the document. The code may be put in a frame with colored syntax depending on output format.
It is possible to select some parts of a code file by adding anchor comments in the source file (/* anchor name */ on a single line) and specifying an anchor-name pattern. All lines following a matching anchor line will be retained, and other lines will be ignored. A /* ... */ or /* ... name */ comment in an ignored file part will insert an ellipsis line.
Flags available for the @code tag can be used. Moreover, the P flag can be used to insert a code comment with path to the example file in inline code block. This behavior may be enforced for all examples by using the show_examples_path option.
See also code_path section and @scope tag section.
Syntax
@example <path to source file>[:<label>] [<flags>] [{<scope>}] <end of line>
Examples
Consider the following C source file:
/*
Long license header
*/
/* anchor global */
#include <stdio.h>
FILE *file;
int main()
{
/* anchor open */
file = fopen("foo,h", "rw");
if (file == NULL)
return 1;
/* anchor write */
fputs("test", file);
/* ... */
/* anchor close */
fclose(file);
/* anchor global */
return 0;
}
To insert the whole file with line numbers:
@example path/to/file.c N
If you want to skip the header:
@example path/to/file.c:global|open|write|close
If you just want to show how to open and close a file:
@example path/to/file.c:open|close
This will result in the following code being inserted:
file = fopen("foo,h", "rw");
if (file == NULL)
return 1;
[ ... ]
fclose(file);
Go to the source code of this file.
wall clock
For a short explanation, see Time Handling.
adjustment sources
{
    CLOCKADJ_NONE,
    CLOCKADJ_GPS_MOD,
    CLOCKADJ_GPS_EXT,
    CLOCKADJ_USER,
} clock_adj_source_t;
For clock_get_time() to be able to return a valid time, clock_adjust() has to be called at least once before (e.g. from the GPS code). clock_adjust() records the offset to the internal timer built around the 32 kHz oscillator.
It is assumed that clock_adjust() will be called shortly after the time value has been determined, so that the ticks value has not made a full "rotation", and preferably has not wrapped around.
The current implementation records only the last adjustment values.
{
    if (ADJ_BUFFER_SIZE >= 1) {
        hl_time_t curr_time = hl_get_current_time ();

        if (curr_time.ticks < ticks)
            /* there was a wrap-around, computation is too complicated
             * here at the moment */
            return;

        /* just store it away */
        adjust_buffer[0].uptime.seconds = curr_time.seconds;
        adjust_buffer[0].uptime.ticks = ticks;
        adjust_buffer[0].uptime.t2_val = t2_val;
        adjust_buffer[0].walltime.time = *time;
        adjust_buffer[0].source = src;
    }
}
get current time (UTC)
clock_get_time() will return the current wall clock time.
{
    clock_status_t retval = {
        .is_set = 0,
        .valid_time = 0,
        .valid_date = 0,
        .valid_century = 0,
    };

    if (ADJ_BUFFER_SIZE >= 1) {
#ifdef DEBUG
        printf_P (PSTR("last seconds=%ld tick=%d\r\n"),
                  adjust_buffer[0].uptime.seconds,
                  adjust_buffer[0].uptime.ticks);
#endif
        if (adjust_buffer[0].uptime.ticks || adjust_buffer[0].uptime.seconds) {
            /* clock_adjust() has already been called, so we can return data */
            struct adjbuf_s * buf = &adjust_buffer[0];

            retval.is_set = 1;
            retval.valid_time = 1;
            retval.valid_date = buf->walltime.time.day && buf->walltime.time.month;
            retval.valid_century = buf->walltime.time.year >= 100;

            if (time) {
                /* The caller has requested to get the data and not just the status */
                hl_time_t curr_time = hl_get_current_time ();
                /* time elapsed since last call to clock_adjust(),
                 * will be (very) near 0 when GPS reception is OK. */
                int32_t offset = curr_time.seconds - buf->uptime.seconds;
                ldiv_t d;
#ifdef DEBUG
                printf_P (PSTR("time offset=%ld\r\n"), offset);
#endif
                /* if we did not reach the saved tick value, the real second
                 * tick did not occur yet */
                if (curr_time.ticks < buf->uptime.ticks) {
                    offset--;
                }

                *time = buf->walltime.time;
                if (offset == 0)
                    goto out;

                /* do corrections */

                /* seconds */
                d = ldiv(offset + buf->walltime.time.sec, 60L);
                time->sec = d.rem;
                offset = d.quot;
                if (offset == 0)
                    goto out;

                /* minutes */
                d = ldiv(offset + buf->walltime.time.min, 60L);
                time->min = d.rem;
                offset = d.quot;
                if (offset == 0)
                    goto out;

                /* hours */
                d = ldiv(offset + buf->walltime.time.hour, 24L);
                time->hour = d.rem;
                offset = d.quot;

                /* now offset is in days
                 * --- we don't expect a very long time between adjustments */
                while (offset) {
                    uint8_t mdays = month_days (time->year, time->month);
                    if (offset > (int8_t)(mdays - time->day)) {
                        /* advance to first day of next month */
                        offset -= mdays - time->day + 1;
                        time->day = 1;
                        if (++time->month > 12) {
                            time->month = 1;
                            time->year++;
                        }
                    } else {
                        time->day += offset;
                        offset = 0;
                    }
                }
out:;
                /* now offset is 0, so *time contains the current time */
#ifdef DEBUG
                printf_P (PSTR("%04u-%02u-%02u %02u:%02u:%02u\r\n"),
                          time->year, time->month, time->day,
                          time->hour, time->min, time->sec);
#endif
            }
        }
    }
    return retval;
}
On Friday I published a new zine: “How Containers Work!”. I also launched a fun redesign of wizardzines.com.
You can get it for $12 at If you buy it, you’ll get a PDF that you can either print out or read on your computer. Or you can get a pack of all 8 zines so far.
Here’s the cover and table of contents:
why containers?
I’ve spent a lot of time figuring out how to run things in containers over the last 3-4 years. And at the beginning I was really confused! I knew a bunch of things about Linux, and containers didn’t seem to fit in with anything I thought I knew (“is it a process? what’s a network namespace? what’s happening?“). The whole thing seemed really weird.
It turns out that containers ARE actually pretty weird. They’re not just one thing, they’re what you get when you glue together 6 different features that were mostly designed to work together but have a bunch of confusing edge cases.
As usual, the thing that helped me the most in my container adventures is a good understanding of the fundamentals – what exactly is actually happening on my server when I run a container?
So that’s what this zine is about – cgroups, namespaces, pivot_root, seccomp-bpf, and all the other Linux kernel features that make containers work.
Once I understood those ideas, it got a lot easier to debug when my containers were doing surprising things in production. I learned a couple of interesting and strange things about containers while writing this zine too – I’ll probably write a blog post about one of them later this week.
containers aren’t magic
This picture (page 6 of the zine) shows you how to run a fish container image with only 15 lines of bash. This is heavily inspired by bocker, which “implements” Docker in about 100 lines of bash.
The main things I see missing from that script compared to what Docker actually does when running a container (other than using an actual container image and not just a tarball) are:
- it doesn’t drop any capabilities – the container is still running as root and has full root privileges (just in a different mount + PID namespace)
- it doesn’t block any system calls with seccomp-bpf
container command line tools
The zine also goes over a bunch of command line tools & files that you can use to inspect running containers or play with Linux container features. Here’s a list:
mount -t overlay (create and view overlay filesystems)
unshare (create namespaces)
nsenter (use an existing namespace)
getpcaps (get a process’s capabilities)
capsh (drop or add capabilities, etc)
cgcreate (create a cgroup)
cgexec (run a command in an existing cgroup)
chroot (change root directory. not actually what containers use but interesting to play with anyway)
/sys/fs/cgroup (for information about cgroups, like memory.usage_in_bytes)
/proc/PID/ns (all a process’s namespaces)
lsns (another way to view namespaces)
I also made a short youtube video a while back called ways to spy on a Docker container that demos some of these command line tools.
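On Linux, the /proc/PID/ns entries in the list above are easy to read programmatically. Here's a small Python sketch (the function name is mine; on systems without a Linux-style /proc it just returns an empty dict):

```python
import os

def process_namespaces(pid="self"):
    """Map namespace name -> identifier (e.g. 'mnt' -> 'mnt:[4026531840]')."""
    ns_dir = "/proc/%s/ns" % pid
    if not os.path.isdir(ns_dir):
        return {}  # no Linux-style /proc here
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

print(process_namespaces())
```

Two processes in the same container share identifiers; comparing a process's entries against PID 1's is a quick way to see whether it's running in its own namespaces.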
container runtime agnostic
I tried to keep this zine pretty container-runtime-agnostic – I mention Docker a couple of times because it’s so widely used, but it’s about the Linux kernel features that make containers work in general, not Docker or LXC or systemd-nspawn or Kubernetes or whatever. If you understand the fundamentals you can figure all those things out!
we redesigned wizardzines.com!
On Friday I also launched a redesign of wizardzines.com! Melody Starling (who is amazing) did the design. I think now it’s better organized but the tiny touch that I’m most delighted by is that now the zines jump with joy when you hover over them.
One cool thing about working with a designer is – they don’t just make things look better, they help organize the information better so the website makes more sense and it’s easier to find things! This is probably obvious to anyone who knows anything about design but I haven’t worked with designers very much (or maybe ever?) so it was really cool to see.
One tiny example of this: Melody had the idea of adding a tiny FAQ on the landing page for each zine, where I can put the answers to all the questions people always ask! Here’s what the little FAQ box looks like:
I probably want to edit those questions & answers over time but it’s SO NICE to have somewhere to put them.
what’s next: maybe debugging! or working more on flashcards!
The two projects I’m thinking about the most right now are
- a zine about debugging, which I started last summer and haven’t gotten around to finishing yet
- a flashcards project that I’ve been adding to slowly over the last couple of months. I think could become a nice way to explain basic ideas.
Here’s a link to where to get the zine again :)
https://jvns.ca/blog/2020/04/27/new-zine-how-containers-work/
As long as we’re on the topic of logging, this is probably a good time to mention Twisted Web’s access log support. In this example, we’ll see what Twisted Web logs for each request it processes and how this can be customized.
If you’ve run any of the previous examples and watched the output of
twistd or read
twistd.log then you’ve already seen some log lines like this:
2014-01-29 17:50:50-0500 [HTTPChannel,0,127.0.0.1] “127.0.0.1” - - [29/Jan/2014:22:50:50 +0000] “GET / HTTP/1.1” 200 2753 “-” “Mozilla/5.0 ...”
If you focus on the latter portion of this log message you’ll see something that looks like a standard “combined log format” message.
However, it’s prefixed with the normal Twisted logging prefix giving a timestamp and some protocol and peer addressing information.
Much of this information is redundant since it is part of the combined log format.
Site lets you produce a more compact log which omits the normal Twisted logging prefix.
To take advantage of this feature all that is necessary is to tell Site where to write this compact log.
Do this by passing
logPath to the initializer:
... factory = Site(root, logPath=b"/tmp/access-logging-demo.log")
Or if you want to change the logging behavior of a server you’re launching with
twistd web then just pass the
--logfile option:
$ twistd -n web --logfile /tmp/access-logging-demo.log
Apart from this, the rest of the server setup is the same.
Once you pass
logPath or use
--logfile on the command line the server will produce a log file containing lines like:
“127.0.0.1” - - [30/Jan/2014:00:13:35 +0000] “GET / HTTP/1.1” 200 2753 “-” “Mozilla/5.0 ...”
Any tools expecting combined log format messages should be able to work with these log files.
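As a sanity check, a line like the one above can be pulled apart with a few lines of Python. The regex is my own sketch, written to tolerate the quoted client address Twisted emits; it is not part of Twisted's API:

```python
import re

# host ident authuser [date] "request" status size ... (combined log format)
COMBINED_LOG = re.compile(
    r'"?(?P<host>[\d.]+)"?\s+(?P<ident>\S+)\s+(?P<user>\S+)\s+'
    r'\[(?P<time>[^\]]+)\]\s+"(?P<request>[^"]*)"\s+'
    r'(?P<status>\d{3})\s+(?P<size>\d+|-)'
)

line = ('"127.0.0.1" - - [30/Jan/2014:00:13:35 +0000] '
        '"GET / HTTP/1.1" 200 2753 "-" "Mozilla/5.0 ..."')
m = COMBINED_LOG.match(line)
print(m.group("status"), m.group("size"))  # → 200 2753
```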
Site also allows the log format used to be customized using its
logFormatter argument.
Twisted Web comes with one alternate formatter, proxiedLogFormatter, which is for use behind a proxy that sets the
X-Forwarded-For header.
It logs the client address taken from this header rather than the network address of the client directly connected to the server.
Here’s the complete code for an example that uses both these features:
from twisted.web.http import proxiedLogFormatter
from twisted.web.server import Site
from twisted.web.static import File
from twisted.internet import reactor

resource = File('/tmp')
factory = Site(resource,
               logPath=b"/tmp/access-logging-demo.log",
               logFormatter=proxiedLogFormatter)
reactor.listenTCP(8888, factory)
reactor.run()
http://twistedmatrix.com/documents/current/web/howto/web-in-60/access-logging.html
RECV(2) BSD Programmer's Manual RECV(2)
recv, recvfrom, recvmsg - receive a message from a socket
#include <sys/types.h>

On successful completion, all three routines return the number of message bytes read. If a message is too long to fit in the supplied buffer, excess bytes may be discarded depending on the type of socket the message is received from. The flags argument to a recv call is formed by ORing one or more of the values:

     MSG_OOB        process out-of-band data
     MSG_PEEK       peek at incoming message
     MSG_WAITALL    wait for full request or error
     MSG_DONTWAIT   don't block

As an example, one could use this to learn of changes in the data-stream in XNS/SPP.
http://mirbsd.mirsolutions.de/htman/sparc/man2/recvmsg.htm
There are dozens of reasons to love Swift tuples. Here are five to get you started.
Tuplizing Declarations
Tuples can initialize a bunch of variables or constants at once. For example, here’s a typical set of variable declarations. This code creates several variables and establishes initial values for each.
var x = "XX" var y = "YY" var z = "ZZ"
When there’s an underlying semantic relationship between these items, consider placing them into a single line with semicolons. Yes, you can turn your nose up at this but a core relationship is emphasized by reordering the declarations into a one-line presentation. Items are declared together because they have some kind of meaning together:
var x = "XX"; var y = "YY"; var z = "ZZ"
There are, of course, drawbacks. This presentation complicates any modifications you may later need to make to this group: whether inserting or removing variables, or changing their initial values. That’s a bad thing so reserve this approach to items where there is a long-standing logical reason to group.
In fact, you only want to join declarations when there’s some really compelling structural reason these items are grouped together, and you never want to do this with unrelated items. For example, this next snippet is an abomination against Cthulhu:
var myVariable = initialValue; var theta = 0.0; let radius = 6; var userName = "Bill"
Avoid this bad example: you are a good person and a good coder, and don’t want Cthulhu to swallow your soul.
Instead, consider a case when there’s a true uniformity of intent. When such exists, you can ditch the semicolons and re-design your declarations to use a tuple, as in the following line of code.
var (w, x, y, z) = ("WW", "XX", "YY", "ZZ")
This line produces the same symbols as the four separate statement snippets you saw previously, but it does so using a simple parsimonious approach. If
w,
x,
y, and
z are related, why shouldn’t their declarations be related too?
Tuple declarations become slightly more complicated when the compiler cannot immediately infer typing. In such cases, add explicit typing. For example, 5.0 and 9.6 normally default to double values. You can force these to
CGFloat like this:
var (px, py) = (5.0, 9.6) as (CGFloat, CGFloat) // or var (px, py): (CGFloat, CGFloat) = (5.0, 9.6)
Even with underlying relationships, avoid complicated declarations that use mixed types. These force you to cognitively match each variable or constant with a type and initial value. This next line is an example of what you don’t want to do:
var (a, b, c) : (Int, Double, CGFloat) = (1, 2, 3)
Don’t force yourself to match a variable from column 1 with a type from column 2 and a value from column 3. It’s bad coding, it’s bad neurocognitive overloading, and Cthulhu doesn’t need that many souls.
Becoming the Tuple
Tuples do more than simplify individual declarations. When looking for targets of tuple opportunity, it’s easy to find instances like these:
var gen1 = seq1.generate(); var gen2 = seq2.generate() let item1 = gen1.next(); let item2 = gen2.next()
and transform them into these:
var (gen1, gen2) = (seq1.generate(), seq2.generate()) let (item1, item2) = (gen1.next(), gen2.next())
But in doing so, you’re missing a big opportunity. Use direct tuple declarations in place of separate declaration proxies. Consider this instead:
var generators = (seq1.generate(), seq2.generate()) let items = (generators.0.next(), generators.1.next()) // thanks Steve
Binding related components into a single tuple offers some great programming wins. Here’s a real-world example that benefits from tuple amalgamation. Here’s the original code before conversion.
func longZip<S1: SequenceType, S2: SequenceType> (seq1: S1, _ seq2: S2) -> AnyGenerator<(S1.Generator.Element?, S2.Generator.Element?)> { var (gen1, gen2) = (seq1.generate(), seq2.generate()) return anyGenerator { let (item1, item2) = (gen1.next(), gen2.next()) guard item1 != nil || item2 != nil else {return nil} return (item1, item2) } }
In this example, both the generator and item assignments would benefit from tuplification.
There’s also a nagging syntactic issue to consider before conversion. Tuple field numbering starts at 0, not 1, so items.0 relates to gen1, and items.1 to gen2. This can be ugly if you’re using 1-based numbering. You’ll want to change this when performing your conversion.
Here’s what you get after tuplifying and re-numbering the variables and type parameters:
func longZip<S0: SequenceType, S1: SequenceType> (seq0: S0, _ seq1: S1) -> AnyGenerator<(S0.Generator.Element?, S1.Generator.Element?)> { var generators = (seq0.generate(), seq1.generate()) return anyGenerator { let items = (generators.0.next(), generators.1.next()) guard items.0 != nil || items.1 != nil else {return nil} return items } }
Notice how much nicer the return statement is as well.
Adjusting field names
Tuples are basically anonymous structs. As you’ve seen, you get numbered field names for free:
let myTuple = (1, "Hello", 5.2)
gives you
(.0 1, .1 "Hello", .2 5.2)
with each field automatically typed and numbered.
Numbers only take you so far. You can manually add field names to declarations for richer semantics, as in the following example. As you see, “intItem” carries more meaning than “.0”
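The example didn't survive extraction; a tuple with manually named fields would look roughly like this (the field names other than intItem are my own illustration):

```swift
let myTuple = (intItem: 1, stringItem: "Hello", doubleItem: 5.2)
print(myTuple.intItem)  // clearer than myTuple.0
```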
You can also turn this into a typealias, which you can use to pick up field names:
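The concrete typealias example is also missing from the extracted text; a sketch with illustrative names:

```swift
typealias DemoTuple = (intItem: Int, stringItem: String, doubleItem: Double)
let demo: DemoTuple = (1, "Hello", 5.2)
print(demo.stringItem)  // field names come from the alias
```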
Unfortunately, this type alias is tied to specific types. You cannot create generic typealiases that refer to structure this way:
typealias PairType<T, U> = (first: T, second: U) // won't work.
That’s a pity. A “Pair” is semantically rich but inherently generic. The answer, at this time, is to work around this with
Any. The
Any type fixes the declaration issue even if it’s aesthetically displeasing:
typealias PairTuple = (first: Any, second: Any) let bar: PairTuple = (2, "hello") print(bar.first.dynamicType) // Int.Type print(bar.second.dynamicType) // String.Type
If you insist on using generics, Mike Ash, tongue in cheek, offers the following alternative:
struct Pa<T, U> { typealias ir = (first: T, second: U) } let foo = Pa.ir(2, "Hello") print(foo.first.dynamicType) // Int.Type print(foo.second.dynamicType) // String.Type
Unfortunately, this beautifully named struct does not work with the original mission statement, despite its otherwise sterling qualities:
let gar: Pa = (2, "Hello") // Error!
Swapping values
When it comes to value swaps, you cannot beat the tuple shuffle. Simply construct tuples out of the items whose values you want to reassign, and use tuple-assignment to move those values around. Here’s an example:
(x, y) = (y, x)
You’re not limited to pairs. You can use this approach with as many variables as required:
(x, y, z, w) = (w, x, y, z)
When executed in optimized code, 2-tuple value swaps are as efficient as the built-in
swap function, and n-tuple value swaps are as efficient as
tmp=first, first=second, second=third,...,last=tmp swaps. It’s a simple, beautiful approach.
Testing for Nil
Bless pattern matching for it is good. Remember this line from the example at the top of the post?
guard items.0 != nil || items.1 != nil else {return nil}
This code tests to see whether both sequences have been exhausted, returning nil from the combined sequences if all items have been used from both incoming generators. Checking multiple items against nil is a pretty common Swift task, and it’s one that really benefits from tuples.
In this example, there’s a far better approach than looking individually at each tuple field. Pattern matching enables you to leverage an
if-case statement to determine whether a tuple contains a
.Some case or not. Here’s what that test looks like:
if case (.None, .None) = items {return nil}
The if-case here “binds” the tuple field items against the tuple in the case. It uses a single equal sign even though there is no actual assignment here. When items is
(nil, nil), the generator returns nil.
This statement is kind of an odd bird. The condition looks like you could use it as a boolean intermediate for, for example, a ternary condition:
return case (.None, .None) = items ? nil : items // nope! won't work
But this is not legal Swift and will not compile. An if-case performs pattern matching binding and exposes any newly bound symbols in its scope clause. This tuple-test uses a byproduct of binding to add early exit but it’s a bit off-label if you get my meaning.
Part 6 of 5: Better Return Values
Sebastian Celis writes, “My personal favorite way to use tuples is when a function really wants to return more than one piece of data. I prefer the tuple to being forced to use an inout parameter”
He’s right. There is never a good reason for you to return multiple distinct values with in-out parameters. Use proper error handling if needed and return a tuple instead.
func status() -> (code: Int, text: String) { // RFC 2324 return (418, "I'm a teapot") }
In-out side-effects? Not even once.
Wrap-Up
So there you have it. Five really cool ways to use tuples, and this write-up doesn’t even begin to touch on the ways you can use tuples in switch statements.
Tuples are one of the big paradigm-shift areas in Swift (along with algebraic data types, pattern matching, value types, protocol oriented programming, functional programming, etc) but tuples tend to get short shrift in terms of people paying proper attention to them.
Hopefully this post will give them a little of the love they deserve and raise your awareness of their awesomeness and possibilities. Remember: Swift isn’t just about re-writing code using new syntax. It’s about embracing new ways of thinking about and architecting your code.
One Comment
Nice summary! One error:
var generators = (seq1.generate(), seq2.generate())
let items = (gen1.next(), gen2.next())
should be:
var generators = (seq1.generate(), seq2.generate())
let items = (generators.0.next(), generators.1.next())
Correct?
http://ericasadun.com/2015/11/04/5-delicious-tuple-tricks-in-swiftlang/
main.c:29:22: fatal error: bcm_host.h: No such file or directory
It's having a paddy over the
#include "bcm_host.h"
Any help at all?
RichardUK wrote:I've just tried it and there is an error in the make file. The path to the libfreetype.so is wrong, remove "arm-linux-gnueabi" from the path. After that it built and ran ok from the terminal.
cd /opt/vc/src/hello_pi/hello_font
sudo nano ./Makefile
OBJS=main.o
BIN=hello_font.bin
LDFLAGS+=../libs/vgfont/libvgfont.a -L$(SDKSTAGE)/usr/lib $(SDKSTAGE)/usr/lib/libfreetype.so -lz
include ../Makefile.include
cc: error: /usr/lib/libfreetype.so: No such file or directory
make: *** [hello_font.bin] Error 1
rm main.o
E: Invalid operation libfreetype-dev
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded
RichardUK wrote:Which distro you using?
dom wrote:Are you on Squeeze or Wheezy ? On Wheezy the file is here:
/usr/lib/arm-linux-gnueabi/libfreetype.so
http://www.raspberrypi.org/forums/viewtopic.php?f=68&t=9250&p=112512
Hi i made this example
demo
capx
its got easing of scroll, scale, and parallax with 2 layers
it also has a follow scroll and at the end the scroll is fixed on the player with platform movement
i think a lot of people are wondering how these things work so i hope this helps!
note:
lerp(layoutscale,1,0.5*dt)
how this lerp works:
layoutscale = current scale
1 = target scale
0.5*dt = how fast it will go from current to target, larger is faster
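In plain Python, the expression above works like this (the 60 fps dt value is an assumption for illustration):

```python
def lerp(a, b, t):
    """Move a fraction t of the way from a toward b."""
    return a + (b - a) * t

dt = 1 / 60.0          # seconds per tick, assuming 60 fps
layout_scale = 2.0     # current scale, easing toward a target of 1
for _ in range(300):   # ~5 seconds of ticks
    layout_scale = lerp(layout_scale, 1, 0.5 * dt)

print(round(layout_scale, 2))  # close to 1, but not quite there yet
```

Multiplying by dt keeps the easing frame-rate independent; a larger factor (like the 2*dt mentioned later in the thread) converges faster.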
If the player has the "scroll to" behavior it will stil work?
i think it interferes with it so i dont use the scroll to behavior because you dont have control over it, maybe later if the behavior is extended, for now i think this is the best way so you can switch scrolls on and off
Vtrix. Thank you. This is much better than the ScrollTo.
THNX for sharing...it`s a big help for me!
Question regarding this:
I'm making a vertical scroller, (player starts at the bottom and moves to the top, shooting his way through for power ups etc)
I want the 'level' to be small width, but of course a long length. Is it advised to use layout properties of example: 1024x10000?
Or will a size like that hurt my framerate etc? I know i can probably set the objects to not be created until the player gets to a certain point on the map.
Also, as with all vertical scrollers, once the level has scrolled passed the bottom of the screen, the player cannot return (if they miss a power up, that's it... no going back)
How do i implement something like this?
i created something like that in classic, a doodlejump clone :) but it created it totally random, i made it spawn a platform on a counterinterval only when the platformcount was lower then a certain number, so i knew there where only so much objects in the game constantly, and ( also destroyed them when too far from the player, i dont know how far you can go with a premade level, i also thought of some kind of levelblocks that where randomly put together, so you have a premade levels but still some randomness , but never implemented :)
Very nice example, it's a much more interesting scrolling effect than ScrollTo. My only problem is that once it's on the player and the player's running around, it takes too long to catch up to him. For anyone who wants to change that: I changed the 0.8*dt in both of the "Scroll to" lerp actions to 2*dt instead and it's a lot faster.
Other than that, great job!
Thank so much, It's very awesome !!! ... ^^
Wow this is freakin awesome !! Thanks Guys.. Ideas are starting to rumble
Scroll To: Button Trigger
Thanks Vtrix!!
I re-appropriated the code so when you press the 'scrollstart' object the camera scrolls to the destination (player) instead of automatically scrolling :D
Iv also made the animated 'player' into a button, so I can return to the start.
I have a problem: I can only press the 'scrollstart' object once. When I press the object for the 2nd time nothing happens. Why doesn't the Boolean function reset so you can activate the button again? ...I'll have another tinker!
Demo
Capx
I know that this is an almost 4 year topic but it was listed under:
Neither the link to the example or the demo work
i think they made changes to dropbox, which is pretty annoying for old links
https://www.construct.net/forum/construct-2/how-do-i-18/an-example-of-scroll-scale-and-41822
Introduction
If you have a program that executes from top to bottom, it will not be responsive and feasible to build complex applications. So, the .NET Framework offers some classes to create complex applications.
What is threading?
In short, a thread is like a virtualized CPU: each one can execute work independently, which helps in building complex, responsive applications.
Understanding threading
Suppose you have a computer with only one CPU, capable of executing only one operation at a time, and your application needs to perform a complex, long-running operation. In this situation the application will appear frozen and unresponsive until the operation completes, and its perceived performance will suffer.

To protect performance, we use multithreading in C#.NET, dividing the program into parts that can run concurrently. In Windows, every application runs in its own process, and every process starts with at least one thread of its own.
Thread Class
The Thread class can be found in the System.Threading namespace. Using this class you can create a new thread and manage it — for example, inspect its properties and status.
Example
The following code shows how to create thread in C#.NET
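The original listing did not survive extraction; based on the explanation that follows, it would look roughly like this sketch (loopTask and myThread come from the text; the loop body and count are assumptions):

```csharp
using System;
using System.Threading;

class Program
{
    // The method executed by the new thread; it just runs a loop.
    static void loopTask()
    {
        for (int i = 0; i < 5; i++)
        {
            Console.WriteLine("loopTask iteration " + i);
        }
    }

    static void Main()
    {
        ThreadStart thread = new ThreadStart(loopTask); // delegate pointing at loopTask
        Thread myThread = new Thread(thread);           // create the thread
        myThread.Start();                               // start it
        Thread.Sleep(2000);                             // pause the main thread for 2000 ms
    }
}
```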
Explanation
You may observe that I created a new local variable, thread, of the ThreadStart delegate type, passing loopTask as the method to execute. The loopTask function contains a loop. We then create a new object, myThread, from the Thread class, passing the local variable thread to the Thread constructor. We start the thread with myThread.Start(), and Thread.Sleep(2000) pauses the current thread for 2000 milliseconds.
Finally, the loop's output appears in the console. This code can also be written more simply by passing a lambda expression directly to the Thread constructor.
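The simpler form described here passes a lambda straight to the Thread constructor; a sketch, reusing the loopTask method from above:

```csharp
Thread myThread = new Thread(() => loopTask()); // lambda ( => ) initialization
myThread.Start();
```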
In the above code, we are using lambda expression ( => ) for initialization.
Passing Value as Parameter
The Thread constructor has another overload that accepts a ParameterizedThreadStart delegate, which lets you pass an object argument to the thread's entry method via Thread.Start(object).
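That overload takes a ParameterizedThreadStart delegate, so the thread method receives a single object argument. A sketch (the method name and message are illustrative):

```csharp
using System;
using System.Threading;

class Program
{
    // ParameterizedThreadStart requires a method taking a single object.
    static void PrintValue(object data)
    {
        Console.WriteLine("Received: " + data);
    }

    static void Main()
    {
        Thread t = new Thread(new ParameterizedThreadStart(PrintValue));
        t.Start("Hello from the main thread"); // the argument reaches PrintValue
        t.Join();
    }
}
```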
https://www.c-sharpcorner.com/article/multithreading-in-c-sharp-net2/
Heroku Clojure Support
Last updated 07 November 2015
Table of Contents
The Heroku Cedar stack is capable of running a variety of types of Clojure applications.
This document describes the general behavior of the Cedar stack as it relates to the recognition and execution of Clojure applications. For a more detailed explanation of how to deploy an application, see:
Activation
Heroku’s Clojure support is applied only when the application has a
project.clj file in the root directory.
Clojure applications that use Maven can be deployed as well, but they will be treated as Java applications, so different documentation will apply.

Application config is not visible during compile time, with the exception of private repository credentials (LEIN_USERNAME, etc.) if present. In order to change what is exposed, set the BUILD_CONFIG_WHITELIST config to a space-separated list of config var names. Note that this can result in unpredictable behavior, since changing your app's config does not result in a rebuild of your app.
Uberjar
If your
project.clj contains an
:uberjar-name setting, then
lein uberjar will run during deploys. If you do this, your
Procfile
entries should consist of just
java invocations.
If your main namespace doesn’t have a
:gen-class then you can use
clojure.main as your entry point and indicate your app’s main
namespace using the
-m argument in your
Procfile:
web: java -cp target/myproject-standalone.jar clojure.main -m myproject.web
If you have custom settings you would like to only apply during build,
you can place them in an
:uberjar profile. This can be useful to use
AOT-compiled classes in production but not during development where
they can cause reloading issues:
:profiles {:uberjar {:main myproject.web, :aot :all}}
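Putting these settings together, a minimal project.clj might look like this (the project and namespace names are illustrative, not from the Heroku docs):

```clojure
(defproject myproject "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.8.0"]]
  ;; presence of :uberjar-name makes `lein uberjar` run during deploys
  :uberjar-name "myproject-standalone.jar"
  ;; AOT-compile only for the uberjar build, not during development
  :profiles {:uberjar {:main myproject.web, :aot :all}})
```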
If you need Leiningen in a
heroku run session, it will be downloaded
on-demand.
Note that if you use Leiningen features which affect runtime like
:jvm-opts, extraction of native dependencies, or
:java-agents,
then you’ll need to do a little extra work to ensure your Procfile’s
java invocation includes these things. In these cases it might be
simpler to use Leiningen at runtime instead.
Customizing the build
You can customize the Leiningen build by setting the following configuration variables:
LEIN_BUILD_TASK: the Leiningen command to run.
LEIN_INCLUDE_IN_SLUG: set to yes to include Leiningen in the slug when using uberjar builds.
Leiningen at runtime
Instead of putting a direct
java invocation into your Procfile, you
can have Leiningen handle launching your app. If you do this, be sure
to use the
trampoline and
with-profile tasks. Trampolining will
cause Leiningen to calculate the classpath and code to run for your
project, then exit and execute your project’s JVM, while
with-profile will omit development profiles:
web: lein with-profile production trampoline run -m myapp.web
Including Leiningen in your slug will add about ten megabytes to its size and will add a second or two of overhead to your app’s boot time.
Overriding build behavior
If neither of these options get you quite what you need, you can check
in your own executable
bin/build script into your app’s repo and it
will be run instead of
compile or
uberjar after setting up Leiningen.
Runtimes
Heroku makes a number of different runtimes available. You can configure your app to select a particular Clojure runtime, as well as configure the JDK.
Supported Clojure versions
Heroku supports apps on any production release of Clojure, running on a supported JDK version.
Add-ons
No add-ons are provisioned by default. If you need a SQL database for your app, add one explicitly:
$ heroku addons:create heroku-postgresql:hobby-dev
https://devcenter.heroku.com/articles/clojure-support
Translating an external workbench
In the following notes,
"context" should be your addon's or workbench's name, for example,
"MySuperAddon" or
"DraftPlus", or whatever. This context makes it so that all translation of your code will be gathered under the same name, to be more easily identified by translators. That is, they will know exactly to which addon or workbench a particular string belongs.
Contents
Preparing the sources
General
- Add a translations/ folder. You can name it something else, but this will be easier as it is the same throughout FreeCAD. In this folder, you will place the .ts files (the "source" translation files) and .qm files (compiled translation files).
- Only the text that is shown to the user in the FreeCAD UI should be translated. Text that is only shown in the Python console should not be translated.
- Text that prints to the FreeCAD.Console is shown in the "Report view", and therefore should be translated. The "Report view" is different from the Python console.
In every Python .py file:
- In every file where you need to translate text, you need a translate() function defined. An easy way is to use the one from the Draft Workbench:
from DraftTools import translate
- All text that must be translated must be passed through the translate() function.
print("My text")
becomes
print(translate("context", "My text"))
This can be used anywhere: in print(), in FreeCAD.Console.PrintMessage(), in Qt dialogs, etc. The FreeCAD.Console functions do not automatically add the newline character (\n), so this must be added at the end if desired. This character doesn't need translation either, so it can be outside the translating function:
FreeCAD.Console.PrintMessage(translate("context", "My text") + "\n")
- If you are using .ui files made with QtDesigner, nothing special needs to be done with them.
- When creating new objects, do not translate the object's "Name". Rather, translate object's "Label". The difference is that a "Name" is unique; it stays the same throughout the life of the object; on the other hand, a "Label" can be changed by the user as desired.
- When creating properties for your objects, don't translate the property name, but place the description inside QT_TRANSLATE_NOOP:
obj.addProperty("App::PropertyBool", "MyProperty", "PropertyGroup", QT_TRANSLATE_NOOP("App::Property", "This is what My Property does"))
Don't use your own
"context" in this specific case. Keep
"App::Property".
- Do not translate the text of document transactions made with
Document.openTransaction()
In InitGui.py:
- Add the following line, close to the top of the file:
def QT_TRANSLATE_NOOP(scope, text): return text
- To translate menu names:
self.appendMenu(QT_TRANSLATE_NOOP("context", "My menu"), [list of commands, ...])
- The QT_TRANSLATE_NOOP macro doesn't do anything, but it marks texts to be picked up by the lupdate utility later on. Since it doesn't actually do anything, we only use it in special cases where FreeCAD itself takes care of everything.
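Since QT_TRANSLATE_NOOP is a pure pass-through at runtime, you can convince yourself of this in plain Python (the "MyAddon" context is just an example):

```python
# QT_TRANSLATE_NOOP returns its text unchanged; its only purpose is to
# mark the string so that lupdate/pylupdate collects it into the .ts file.
def QT_TRANSLATE_NOOP(context, text):
    return text

menu_label = QT_TRANSLATE_NOOP("MyAddon", "My menu")
print(menu_label)  # → My menu
```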
- Add the path to your translations/ folder in the Initialize function:
FreeCADGui.addLanguagePath("/path/to/translations")
The InitGui.py file has no __file__ attribute, so it is not easy to find the translations folder's relative location. An easy way to work around this is to make it import another file from the same folder, and in that file, do
FreeCADGui.addLanguagePath(os.path.join(os.path.dirname(__file__), "translations"))
Inside each FreeCAD command class:
- Add the following line, close to the top of the file:
def QT_TRANSLATE_NOOP(context, text): return text
- Translate the 'MenuText' and 'ToolTip' of the command like this:
def GetResources(self):
    return {'Pixmap':   "path/to/icon.svg",
            'MenuText': QT_TRANSLATE_NOOP("CommandName", "My Command"),
            'ToolTip':  QT_TRANSLATE_NOOP("CommandName", "Describes what the command does"),
            'Accel':    "Shift+A"}
where "CommandName" is the name of the command, defined by
FreeCADGui.addCommand('CommandName', My_Command_Class())
Gather all the strings from your module
- You will need the lupdate, lconvert, lrelease and pylupdate tools installed on your system. In Linux distributions they usually come in packages named pyside-tools or pyside2-tools. On some systems lupdate is named lupdate4, lupdate5, lupdate-qt4 or similar, and the same goes for the other tools. You may use the Qt4 or Qt5 version at your choice.
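Since the tool names vary across distributions, a small probe like this can find whichever variant is present (the candidate names are just the common ones mentioned above; this sketch is not part of the official workflow):

```python
# Probe a few common names for lupdate and report the first one found on PATH.
import shutil

candidates = ["lupdate", "lupdate5", "lupdate-qt5", "lupdate4", "lupdate-qt4"]
lupdate_cmd = next((name for name in candidates if shutil.which(name)), None)

if lupdate_cmd:
    print("found:", lupdate_cmd)
else:
    print("no lupdate variant found; install pyside-tools or pyside2-tools")
```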
- If you have .ui files, you need to run lupdate first:
lupdate *.ui -ts translations/uifiles.ts
This is recursive and will find .ui files inside your whole directory structure.
- If you have .py files, you need to run pylupdate too:
pylupdate *.py -ts translations/pyfiles.ts
- If you ran both operations, you now need to unify these two files into one:
lconvert -i translations/uifiles.ts translations/pyfiles.ts -o translations/MyModule.ts
- Check the contents of the three .ts files to make sure that they contain the strings; then you can delete both pyfiles.ts and uifiles.ts.
- You can do it all in one bash script like this:
#!/bin/sh
lupdate *.ui -ts translations/uifiles.ts
pylupdate *.py -ts translations/pyfiles.ts
lconvert -i translations/uifiles.ts translations/pyfiles.ts -o translations/MyModule.ts
rm translations/pyfiles.ts
rm translations/uifiles.ts
Send the .ts file to a translation platform
It is time to have your .ts file translated. You can set up an account on a public translation platform such as Crowdin or Transifex, or you can benefit from the existing FreeCAD account at Crowdin, which already has many users and therefore a better chance of having your file translated quickly by people who know FreeCAD.
If you wish to host your file on the FreeCAD Crowdin account, get in touch with Yorik on the FreeCAD forum.
Note: some platforms like Crowdin can integrate with GitHub and perform steps 2, 3 and 4 automatically. For that, you can't use the FreeCAD Crowdin account; you will need to set up your own.
Merge the translations
Once your
.ts file has been translated, even if partially, you can download the translations from the site:
- You will usually download a .zip file containing one .ts file per language.
- Place all the translated .ts files, together with your base .ts file, in the translations/ folder.
Compile the translations
Now run the
lrelease program on each file that you have.
lrelease "translations/Draft_de.ts"
lrelease "translations/Draft_fr.ts"
lrelease "translations/Draft_pt-BR.ts"
You can automate the process (note that the glob already includes the translations/ prefix):
for f in translations/*_*.ts
do
    lrelease "$f"
done
You should find one .qm file for each translated .ts file. The .qm files are what Qt and FreeCAD will use at runtime.
That's all you need. Note that certain parts of your workbench cannot be translated on-the-fly if you decide to switch languages. If this is the case, you will need to restart FreeCAD for the new language to take effect.
chrome.storage
Overview
This API has been optimized to meet the specific storage needs of extensions. It provides the same storage capabilities as the localStorage API with the following key differences:
- User data can be stored as objects (the localStorage API stores data in strings).
- Enterprise policies configured by the administrator for the extension can be read (using storage.managed with a schema).
Manifest
You must declare the "storage" permission in the extension manifest to use the storage API. For example:
{
  "name": "My extension",
  ...
  "permissions": [
    "storage"
  ],
  ...
}
Usage
To store user data for your extension, you can use either storage.sync or storage.local. When using storage.sync, the stored data will automatically be synced to any Chrome browser that the user is logged into, provided the user has sync enabled. When Chrome is offline, Chrome stores the data locally; the next time the browser is online, Chrome syncs the data. Even if a user disables syncing, storage.sync will still work; in this case, it will behave identically to storage.local.
Confidential user information should not be stored! The storage area isn't encrypted.
The
storage.managed storage is read-only.
Storage and throttling limits
For details on the current limits of the chrome.storage API, and what happens when those limits are exceeded, please see the quota information for sync and local.
Examples
The following example checks for CSS code saved by a user on a form, and if found, stores it.
function saveChanges() {
  // Get a value saved in a form.
  var theValue = textarea.value;
  // Check that there's some code there.
  if (!theValue) {
    message('Error: No value specified');
    return;
  }
  // Save it using the Chrome extension storage API.
  chrome.storage.sync.set({'value': theValue}, function() {
    // Notify that we saved.
    message('Settings saved');
  });
}
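The example above only shows the save side. The sketch below adds the matching read with storage.sync.get; it uses a tiny in-memory stand-in for the API (the stub and the 'value' key are illustrative, not part of the real API) so the round trip can run outside the browser:

```javascript
// Minimal in-memory stand-in for chrome.storage.sync, for illustration only;
// in a real extension the browser provides this object.
var store = {};
var chrome = {
  storage: {
    sync: {
      set: function(items, callback) {
        for (var k in items) { store[k] = items[k]; }
        if (callback) callback();
      },
      get: function(key, callback) {
        var result = {};
        result[key] = store[key];
        callback(result);
      }
    }
  }
};

// Save a value, then read it back with the same key used in saveChanges().
chrome.storage.sync.set({'value': 'body { color: red; }'}, function() {
  chrome.storage.sync.get('value', function(items) {
    console.log('restored:', items.value);
  });
});
```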
If you're interested in tracking changes made to a data object, you can add a listener to its onChanged event. Whenever anything changes in storage, that event fires. Here's sample code to listen for saved changes:
chrome.storage.onChanged.addListener(function(changes, namespace) {
  for (var key in changes) {
    var storageChange = changes[key];
    console.log('Storage key "%s" in namespace "%s" changed. ' +
                'Old value was "%s", new value is "%s".',
                key,
                namespace,
                storageChange.oldValue,
                storageChange.newValue);
  }
});
Summary
Types: StorageChange, StorageArea
Events: onChanged (fired when one or more items change)