Unknown number of parameters
By czardas, in AutoIt General Help and Support
Recommended Posts
Similar Content
- By Eddi96
Hello guys!
#include <Array.au3>
#include <File.au3>

$iBenutzername = $Var_cmdline ; I need this to be the variable given as a parameter.
; I've read a lot about CmdLine but can't think of a way to define a variable with it.
; I hope you have an idea on how to do it! Much love <3

Global $sFile = "C:\GTScript\query.txt"
Global $aUsers
_FileReadToArray($sFile, $aUsers, $FRTA_NOCOUNT)
_ArrayColInsert($aUsers, 1)
_ArrayColInsert($aUsers, 1)
_ArrayColInsert($aUsers, 1)
_ArrayColInsert($aUsers, 1)
_ArrayColInsert($aUsers, 1)

Everyone
I want to have a GUI, but one which will accept command line options on launch.
So, the command line would look something like
myAPP.EXE -bigfont -bigicon
where myAPP.EXE would be the name of the AutoIt EXE, and the -bigfont & -bigicon items represent optional command line parameters with which the EXE starts.
I am not looking at creating a CUI. This is GUI, but with startup command line parameters. These command line parameters would only be read once, during start up of the EXE.
I have searched the forum, no luck. What I did find was this commented by Water:
Should I start the GUI EXE as normal, and then first possible opportunity read the command line? Is that the way to go?
Thanks in advance
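That is the usual pattern: read the arguments exactly once before building the window, keep the results in flags, and never touch the command line again. A rough sketch of the shape of it, in Python rather than AutoIt (tkinter stands in for the GUI toolkit here, and the flag names are taken from the example above):

import sys
import tkinter as tk

# Read the options once, at startup.
big_font = "-bigfont" in sys.argv[1:]
big_icon = "-bigicon" in sys.argv[1:]  # would be handled the same way

root = tk.Tk()
label = tk.Label(root, text="myAPP",
                 font=("Helvetica", 24 if big_font else 10))
label.pack()
root.mainloop()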
- By trdunsworth
- Get command line parameters Utility v1.0.2.3 - Retrieve Switches of command line Tools - Update of 2012-06-30 by wakillon
import java.io.*;

class Edogs {
    public static void main(String args[]) throws java.io.IOException {
        final int subtractor = 1;
        final int MAX_QUESTIONS = 48;
        final int MAX_INPUT = 47;
        final int MAX_ANSWERS = 4656;
        final int MAX_COUNTER = 4654;
        int pomeranian = 0;

        FileReader fro = new FileReader("edogs_questions.txt");
        BufferedReader bfrr = new BufferedReader(fro);
        String questionArray[] = new String[MAX_QUESTIONS];
        // this reads all the questions and answers from edogs_questions.txt
        for (int countArray = 0; countArray <= MAX_INPUT; countArray++) {
            questionArray[countArray] = bfrr.readLine();
        }
        fro.close();

        String inputquestionArray[] = new String[MAX_QUESTIONS];
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        // this outputs the questions (taken from edogs_questions.txt)
        // and stores the user's answers in an array
        for (int counter = 0; counter <= 46; counter++) {
            System.out.println(questionArray[counter]);
            inputquestionArray[counter] = br.readLine();
        }

        FileReader frda = new FileReader("dogAttributes.txt");
        BufferedReader bfrda = new BufferedReader(frda);
        String attributeArray[] = new String[MAX_ANSWERS];
        // this reads all the attributes of the dogs
        for (int countArray = 0; countArray <= MAX_COUNTER; countArray++) {
            attributeArray[countArray] = bfrda.readLine();
        }
        frda.close();

        // this compares the user's answers and the attributes
        for (int compare_counter = 1; compare_counter <= 47; compare_counter++) {
            if (inputquestionArray[compare_counter - subtractor].equalsIgnoreCase(attributeArray[compare_counter])) {
                pomeranian++;
                System.out.print(pomeranian);
            }
        }
    }
}
What size dog would you like to have? (tiny, very small, small, medium, large, very large, over sized)
How many times would you like to walk your dog? (none, some, every other day, daily, twice daily)
How long would you like to walk your dog? (5 minutes, 10 minutes, 15 minutes, 30 minutes, 60 minutes)
Would you like the dog to be child friendly? (yes, no)
How much hair would you like the dog to shed? (none, little, average, heavy)
What colour would you like the dog to be? (black, white, brown, yellow, tan, varied)
What type of ear would you like the dog to have? (floppy, pointed, flat)
How long would you like the dog’s tail to be? (long, medium, short, stubby)
What type of tail texture would you like the dog to have? (curly, puffy, whiptails, feathery)
How long would you like the dog’s fur to be? (none, short, medium, medium long, long, shaggy)
What eye colour would you like the dog to have? (blue, brown, hazel, varied, black, amber)
What muzzle size would you like the dog to have? (flat, medium, long)
What type of bark would you like the dog to have? (yip, high, booming, scary, none, yodel)
What activity level would you like the dog to have? (Low, Medium, High)
What size feet would you like the dog to have? (Extra Small, Small, Medium, Large, Extra Large)
How hard should it be to train your dog? (Easy, Moderate, Hard, Extra Difficult)
What health issues should the dog have no problems with? (Sight, Hearing, Heart, Cancer, Hip, Back, Bone, Multiple)
How intelligent would you like the dog to be? (Low, Medium, High)
How much grooming should the dog require? (0, 1-7, 8-14, 15-21)
How high would you like the work ethic for your dog to be? (none, low, medium, high)
How hard should it be to house train your dog? (Easy, Moderate, Hard, Extra Difficult)
What type of breed would you like the dog to be? (Herding, Guard, House, Work, Pastoral, Play, Lap, Show, Hunting)
How long would you like the dog to live? (5-7, 8-10, 11-13, 14+)
Would you like the dog to have separation anxiety? (yes, no)
What climate will the dog be living in? (Cold, Moderate, Warm, Hot, Any)
Would you like the dog to be a sight hound? (Yes, No)
Would you like the dog to be pure bred? (Yes, No)
At what age would you like to buy your dog? (Puppy, Adolescent, Adult, Any)
Should your dog be hair or hairless? (Fur, Hair, Hairless)
To which people should your dog be attached? (Person, Family, None, All)
What should the maintenance level of your dog be? (Low, Medium, High)
Should your dog be fearless? (Yes, No)
Should your dog be playful? (yes, No)
Should your dog be sociable? (yes, sometimes, No)
Should your dog be independent? (Yes, No)
Should your dog be courageous? (Yes, No)
Should your dog be easy to handle? (Yes, No)
What height do you want your dog to be? (1-15, 16-24, 25-30, 31-40, 41-50, 51-60, 61-80, 81-100, 101+)
What litter size would your dog require? (1-2, 3-5, 6-7, 8+)
What level of drool does your dog have? (Low, Medium, High)
Should your dog be able to swim? (Yes, No)
What location should your dog be able to live in? (City, Country, anywhere)
What type of pets should your dog be OK with? (smaller, bigger, none, any)
What should the maintenance cost of your dog be? (0-500, 501-1500, 1501+)
What should the purchase cost of your dog be? (0-200, 201-600, 601-1200, 1201-2000, 2001+)
What coat pattern would your dog have? (solid, wire, thick, short, hard, corded, curly, thick, fluffy)
Do you require the dog to be non allergenic? (Yes, No)
I researched a lot of dogs and their attributes, as seen above. I will ask the user a series of 47 questions and then compare their answers to my researched answers. Every time an answer is 'right' (that is, the answers match), the dog is given 1 point; the dog with the highest score is the most compatible. I am kinda stuck at comparing the two arrays: the answers obtained from the user and the researched answers.
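A sketch of the scoring described above, written in Python for brevity. The layout assumption here, that each dog owns one consecutive block of 47 answer lines in dogAttributes.txt, is a guess:

QUESTIONS_PER_DOG = 47

def best_dog(user_answers, attributes):
    # Give each dog one point per answer that matches the researched one;
    # the dog with the highest score is the best match.
    best_index, best_score = -1, -1
    for start in range(0, len(attributes) - QUESTIONS_PER_DOG + 1, QUESTIONS_PER_DOG):
        block = attributes[start:start + QUESTIONS_PER_DOG]
        score = sum(1 for mine, theirs in zip(user_answers, block)
                    if mine.strip().lower() == theirs.strip().lower())
        if score > best_score:
            best_index, best_score = start // QUESTIONS_PER_DOG, score
    return best_index, best_score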
Any input will be awesome
Please and Thank you :)
I am sysadm'ing a mixed cluster of pc/linux, sparc/solaris and dec/osf computers. I have installed aspell successfully on linux and suns, but I could not figure out how to install it on DEC systems. I still have mail messages exchanged with the author of aspell in the past.

uname -a shows: OSF1 axcdf4.pd.infn.it V4.0 878 alpha alpha

1) I configured aspell with
./configure alpha-dec-osf --prefix=/opt/gnu --with-gcc
and ran make; the mangled names are too long for Digital's assembler (I remind you that the GNU assembler and linker have NOT been ported to 64-bit architectures, and that I am forced to use the Digital assembler and linker). The output from ./configure and make is attached in the first two files.

2) I now have a license for the Kuck and Associates Inc. (KAI) compiler; I then tried:
make distclean
CC=gcc CCC=KCC ./configure alpha-dec-osf --prefix=/opt/gnu
make
BUT your C++ is not compliant with the Standard. You are actually using namespaces for YOUR software, but you are ignoring that library symbols also use namespaces, and live in namespace std; so I got an error in the very first procedure compiled. Inserting a statement "using namespace std;" in that procedure makes it compile successfully, but I do not want to manually edit all of your source files to insert such a statement (that is, indeed, a deprecated practice). Configure and make output are attached in files three and four.

Do I have to give up and tell my users to resort to linux/solaris in order to spell-check their files, or do you have other suggestions?

-- Maurizio Loreti Univ. of Padova, Dept. of Physics - Padova, Italy address@hidden
config1.log
Description: 1st configure output
make1.log
Description: 1st make output
config2.log
Description: 2nd configure output
make2.log
Description: 2nd make output
I thought I had mastered all this... I'm testing for DBNull. This exception isn't occurring in my code. It's in some other code. What does this "strong typing exception" mean?
Imports DataSet2TableAdapters
Partial Class admin_fixdb
Inherits System.Web.UI.Page
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
Dim I As Integer
Dim S1 As String
Dim S2 As String
Dim S3 As String
Dim J As Integer
Dim k As Integer
Dim l As Integer
Dim s4 As String
Dim s5 As String
Dim s6 As String
Dim tbllinkadapter As New tblLinkTableAdapter
Dim tbllink As DataSet2.tblLinkDataTable
tbllink = tbllinkadapter.GetData
For Each tbllinkrow As DataSet2.tblLinkRow In tbllink
If Not IsDBNull(tbllinkrow.SenderLinkOn.ToString) Then
I = tbllinkrow.SenderLinkOn.IndexOf(".")
If tbllinkrow.SenderLinkOn.Substring(I - 1, 1) = "_" Then
J = Len(tbllinkrow.SenderLinkOn)
S1 = tbllinkrow.SenderLinkOn.Substring(0, I - 1)
S2 = tbllinkrow.SenderLinkOn.Substring(I, (J - I))
S3 = S1 + S2
Response.Write("Senderlinkon: " & S3)
I'm trying to use Linq2SharePoint to get a count if items in a document library using the following code:
var context = new MyDataContext(webUrl);
var count = (from item in context.MyDocumentLibrary.ScopeToFolder("/", true).OfType<NewBusinessApplication>()
             where item.NewBusinessStatus == "Validation"
             select item).Count();
This should return a count of all NewBusinessApplication documents in all folders with the NewBusinessStatus of Validation.
If I run this on a document library with a mixture of document types, it behaves as expected. However, if a workflow has run on one of the NewBusinessApplication documents, I get an InvalidCastException.
It appears from the stack trace that the error is when it's mapping the SPListItem to the concrete type generated by SPMetal but I can't find the specific column that's causing the issue.
Any suggestions or known issues I should look out for?
Hi over there,
I hope this question is not too simple, but I didn't manage to figure out why. I need an explanation for the following issue:
I'm reading data from a database (MSSQL) and the column "PersonBirthday" is DBNull. I wanted to prevent the error (Textbox.Text = DBNull) with an IIf. The thing is, I get this typecast exception:
"Conversion from type 'DBNull' to type 'Date' is not valid."
This code is NOT working. Why?

txtPersonBirthday.Text = IIf(IsDBNull(.Item("PersonBirthday")) = True, _
    String.Empty, _
    CDate(.Item("PersonBirthday")).ToString("yyyy-MM-dd"))
When I'm using this code, which is for me obviously the same, just with an If-block, it works, and I want to know why. Please explain.
If IsDBNull(.Item("PersonBirthday")) Then
    txtPersonBirthday.Text = String.Empty
Else
    txtPersonBirthday.Text = CDate(.Item("PersonBirthday")).ToString("yyyy-MM-dd")
End If
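The difference between the two snippets is evaluation order. IIf is an ordinary function, so both branches are computed before the call ever happens, while an If-block (or VB's newer If() operator) only runs the branch it takes. The same trap, reproduced in Python; the dictionary standing in for the data row is made up:

def iif(cond, when_true, when_false):
    # Like VB's IIf: both arguments are evaluated *before* the call,
    # because they are ordinary function arguments.
    return when_true if cond else when_false

row = {"PersonBirthday": None}  # None stands in for DBNull

# Raises AttributeError: the false branch runs even though cond is True,
# just as CDate(...) blows up on DBNull inside IIf:
# text = iif(row["PersonBirthday"] is None, "",
#            row["PersonBirthday"].isoformat())

# A conditional expression, like a VB If-block, only evaluates
# the branch it actually needs:
text = "" if row["PersonBirthday"] is None else row["PersonBirthday"].isoformat()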
Hi everyone,
I've been trying to develop a COM client application, but without success. My goal is to create an application that can open reports in Cognos Impromptu. I've got Cognos Impromptu installed on this machine and I'm using Visual Studio 2008 to develop the client.
I created my project as a console application in Visual Studio. Next, I added the Impromptu Client Reference via Project > Add Reference > COM. All works fine till now: Visual Studio creates my COM wrapper DLL and adds it to the bin/debug folder.
But when I try to run my program, it gives me the following exception:
Unable to cast COM object of type 'ImpromptuClient.ImpromptuApplicationClass' to
interface type 'ImpromptuClient.IAppAuto'. This operation failed because the Qu
eryInterface call on the COM component for the interface with IID '{2F835754-FB4
F-11CF-8E5F-00401C60350D}' failed due to the following error: No such interface
supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)).
Press any key to continue . . .
The code I use is as follows:
using System;
using System.Collections.Generic;
using System.Linq;
using System.
I have the following code:
switch (((System.Web.UI.WebControls.LinkButton)e.CommandSource).CommandName)
{
    case "statusActive":
        NavigateActive(e);
        break;
    case "statusClosed":
        NavigateInActive(e);
        break;
}
I am getting the following error:
Would anybody please help me catch the exception if the column does not exist, as my table columns are not static?
Sometimes it throws an IndexOutOfRange exception because the column does not exist.
Thanks for the help, guys.
Ruby Array Exercises: Check whether a given value appears everywhere in a given array
Ruby Array: Exercise-37 with Solution
Write a Ruby program to check whether a given value appears everywhere in a given array. A value is "everywhere" in an array if it appears in every pair of adjacent elements in the array.
Ruby Code:
def check_array(nums)
  i = 0
  val = 3
  # check each adjacent pair (i, i + 1) for the value
  while i < nums.length - 1
    if nums[i] != val && nums[i + 1] != val
      return false
    end
    i = i + 1
  end
  return true
end

print check_array([3, 7, 3, 3]), "\n"
print check_array([2, 8, 2, 9]), "\n"
print check_array([3, 8, 5, 4, 3, 7]), "\n"
Output:
true
false
false
Andrew – Thanks for the information. Is there (or will there be) any way of determining the installed version of OneNote? It would be nice for applications to automatically change to the correct schema, based upon the version ID. Perhaps it could be a read-only integer property of the CSimpleImporter COM object.
I don’t think there’s any plan to offer such functionality, mainly because the SP1 Preview isn’t an actual version of OneNote so much as it is an early cut of the final SP1 code. I think that because the Preview is very much still a ‘beta’ work in progress, the expectation is that anyone currently using it will want to update to the release-quality final version of SP1 once it ships. So in the long run, there won’t be a continuing need to program to the SP1 Preview. However, if you’ve already written code using the SimpleImport interface, you’ll have to change the namespace in your code, or it won’t work once the user has upgraded to SP1.
I'd like to generate tileable Perlin noise. I'm working from Paul Bourke's
PerlinNoise*() functions, which are like this:
// alpha is the "division factor" (how much to damp subsequent octaves with (usually 2))
// beta is the factor that multiplies your "jump" into the noise (usually 2)
// n is the number of "octaves" to add in
double PerlinNoise2D(double x, double y, double alpha, double beta, int n)
{
    int i;
    double val, sum = 0;
    double p[2], scale = 1;
    p[0] = x;
    p[1] = y;
    for (i = 0; i < n; i++) {
        val = noise2(p);
        sum += val / scale;
        scale *= alpha;
        p[0] *= beta;
        p[1] *= beta;
    }
    return sum;
}
Using code like:
real val = PerlinNoise2D(x, y, 2, 2, 12); // test
return val*val*skyColor + 2*val*(1-val)*gray + (1-val)*(1-val)*cloudColor;
This gives a sky texture, which isn't tileable.
The pixel values are 0->256 (width and height), and pixel (0,0) uses (x,y) = (0,0) while pixel (256,256) uses (x,y) = (1,1).
How can I make it tileable?
One simple way I can think of would be to take the output of the noise function and mirror/flip it into an image that's twice the size. It's difficult to explain so here's an image:
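In place of a picture, here is the layout as a few lines of numpy; base stands in for whatever noise image you start from, so this is only a sketch of the idea:

import numpy as np

def mirror_tile(base):
    # Flip the noise across each axis so every edge of the doubled
    # image lines up with its mirrored neighbour when tiled.
    top = np.hstack([base, np.fliplr(base)])
    return np.vstack([top, np.flipud(top)])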
Now, in this case, it's pretty obvious what you did when you look at this. I can think of two ways to (possibly :-) ) resolve this:
You could take that larger image and then generate some more noise on top of it but (and I'm not sure if this is possible) focused towards the middle (so the edges stay the same). It could add the extra bit of difference that would make your brain think it's not just mirror images.
(I'm also not sure if this is possible) You could try fiddling with the inputs to the noise function to generate the initial image differently. You'd have to do this by trial and error, but look for features that draw your eye when you tile/mirror it and then try and get it not to generate those.
Hope this helps.
I had some not-bad results interpolating near the edges of the tile (edge-wrapped), but it depends on what effect you're trying to achieve and the exact noise parameters. Works great for somewhat blurry noise, not so good with spikey/fine-grained ones.
First version of this answer was actually wrong, I've updated it
A method I used successfully is to make the noise domain tiled. In other words, make your base noise2() function periodic. If noise2() is periodic and beta is an integer, the resulting noise will have the same period as noise2().
How can we make noise2() periodic? In most implementations, this function uses some kind of lattice noise. That is, it gets random numbers at integer coordinates, and interpolates them. For example:
def InterpolatedNoise_1D(x):
    integer_X = int(x)
    fractional_X = x - integer_X
    v1 = SmoothedNoise1(integer_X)
    v2 = SmoothedNoise1(integer_X + 1)
    return Interpolate(v1, v2, fractional_X)
This function can be trivially modified to become periodic with integer period. Simply add one line:
integer_X = integer_X % Period
before calculating v1 and v2. This way, values at integer coordinates will repeat every Period units, and interpolation will ensure that the resulting function is smooth.
Note, however, that this only works when Period is more than 1. So, to actually use this in making seamless textures, you'd have to sample a Period x Period square, not 1x1.
Here's one rather clever way that uses 4D Perlin noise.
Basically, map the X coordinate of your pixel to a 2D circle, and the Y coordinate of your pixel to a second 2D circle, and place those two circles orthogonal to each other in 4D space. The resulting texture is tileable, has no obvious distortion, and doesn't repeat in the way that a mirrored texture would.
Copy-pasting code from the article:
for x = 0, bufferwidth - 1, 1 do
  for y = 0, bufferheight - 1, 1 do
    local s = x / bufferwidth
    local t = y / bufferheight
    local dx = x2 - x1
    local dy = y2 - y1
    local nx = x1 + cos(s * 2 * pi) * dx / (2 * pi)
    local ny = y1 + cos(t * 2 * pi) * dy / (2 * pi)
    local nz = x1 + sin(s * 2 * pi) * dx / (2 * pi)
    local nw = y1 + sin(t * 2 * pi) * dy / (2 * pi)
    buffer:set(x, y, Noise4D(nx, ny, nz, nw))
  end
end
Another alternative is to generate noise using the libnoise libraries. You can generate noise over a theoretically infinite amount of space, seamlessly.
Take a look at the following:
There is also an XNA port of the above at:
If you end up using the XNA port, you can do something like this:
Perlin perlin = new Perlin();
perlin.Frequency = 0.5f;      // height
perlin.Lacunarity = 2f;       // frequency increase between octaves
perlin.OctaveCount = 5;       // number of passes
perlin.Persistence = 0.45f;
perlin.Quality = QualityMode.High;
perlin.Seed = 8;

// Create our 2d map
Noise2D _map = new Noise2D(CHUNKSIZE_WIDTH, CHUNKSIZE_HEIGHT, perlin);

// Get a section
_map.GeneratePlanar(left, right, top, down);
GeneratePlanar is the function to call to get the sections in each direction that will connect seamlessly with the rest of the textures.
Of course, this method is more costly than simply having a single texture that can be used across multiple surfaces. If you are looking to create some random tileable textures, this may be something that interests you.
Ok, I got it. The answer is to walk in a torus in 3D noise, generating a 2D texture out of it.
Code:
Color Sky(double x, double y, double z)
{
    // Calling PerlinNoise3(x, y, z),
    // x, y, z _must be_ between 0 and 1
    // for this to tile correctly.
    double c = 4, a = 1; // torus parameters (controlling size)
    double xt = (c + a*cos(2*PI*y)) * cos(2*PI*x);
    double yt = (c + a*cos(2*PI*y)) * sin(2*PI*x);
    double zt = a * sin(2*PI*y);

    double val = PerlinNoise3D(xt, yt, zt, 1.5, 2, 12); // torus
    return val*val*cloudWhite + 2*val*(1-val)*gray + (1-val)*(1-val)*skyBlue;
}
Results:
Once:
And tiled:
There are two parts to making seamlessly tileable fBm noise like this. First, you need to make the Perlin noise function itself tileable. Here's some Python code for a simple Perlin noise function that works with any period up to 256 (you can trivially extend it as much as you like by modifying the first section):
import random
import math
from PIL import Image

perm = list(range(256))
random.shuffle(perm)
perm += perm
dirs = [(math.cos(a * 2.0 * math.pi / 256),
         math.sin(a * 2.0 * math.pi / 256))
        for a in range(256)]

def noise(x, y, per):
    def surflet(gridX, gridY):
        distX, distY = abs(x - gridX), abs(y - gridY)
        polyX = 1 - 6*distX**5 + 15*distX**4 - 10*distX**3
        polyY = 1 - 6*distY**5 + 15*distY**4 - 10*distY**3
        hashed = perm[perm[int(gridX) % per] + int(gridY) % per]
        grad = (x - gridX)*dirs[hashed][0] + (y - gridY)*dirs[hashed][1]
        return polyX * polyY * grad
    intX, intY = int(x), int(y)
    return (surflet(intX + 0, intY + 0) + surflet(intX + 1, intY + 0) +
            surflet(intX + 0, intY + 1) + surflet(intX + 1, intY + 1))
Perlin noise is generated from a summation of little "surflets" which are the product of a randomly oriented gradient and a separable polynomial falloff function. This gives a positive region (yellow) and negative region (blue)
The surflets have a 2x2 extent and are centered on the integer lattice points, so the value of Perlin noise at each point in space is produced by summing the surflets at the corners of the cell that it occupies.
If you make the gradient directions wrap with some period, the noise itself will then wrap seamlessly with the same period. This is why the code above takes the lattice coordinate modulo the period before hashing it through the permutation table.
The other step is that when summing the octaves you will want to scale the period with the frequency of the octave. Essentially, you will want each octave to tile the entire image just once, rather than multiple times:
def fBm(x, y, per, octs):
    val = 0
    for o in range(octs):
        val += 0.5**o * noise(x*2**o, y*2**o, per*2**o)
    return val
Put that together and you get something like this:
size, freq, octs, data = 128, 1/32.0, 5, []
for y in range(size):
    for x in range(size):
        data.append(fBm(x*freq, y*freq, int(size*freq), octs))
im = Image.new("L", (size, size))
im.putdata(data, 128, 128)
im.save("noise.png")
As you can see, this does indeed tile seamlessly:
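A quick numerical sanity check of the wrap, using the functions defined above (the sample point and tolerance are arbitrary):

per = int(size * freq)  # 4 lattice cells per tile with these settings
assert abs(noise(1.3, 2.7, per) - noise(1.3 + per, 2.7, per)) < 1e-12
assert abs(fBm(1.3, 2.7, per, octs) - fBm(1.3 + per, 2.7, per, octs)) < 1e-12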
With some small tweaking and color mapping, here's a cloud image tiled 2x2:
Hope this helps!
Though there are some answers here that would work, most of them are complicated, slow and problematic.
All you really need to do is use a periodic noise generation function. That's it!
An excellent public domain implementation based on Perlin's "advanced" noise algorithm can be found here. The function you need is pnoise2. The code was written by Stefan Gustavson, who has made a pointed comment here about exactly this issue, and how others have taken the wrong approach. Listen to Gustavson, he knows what he's talking about.
Regarding the various spherical projections some here have suggested: well, they in essence work (slowly), but they also produce a 2D texture that is a flattened sphere, so that the edges would more condensed, likely producing an undesired effect. Of course, if you intend for your 2D texture to be projected onto a sphere, that's the way to go, but that's not what was being asked for.
Here's a much much simpler way to do tiled noise:
You use a modular wrap around for each scale of the noise. These fit the edges of the area no matter what frequency scale you use. So you only have to use normal 2D noise which is a lot faster. Here is the live WebGL code which can be found at ShaderToy:
The top three functions do all the work, and fBm is passed a vector x/y in a 0.0 to 1.0 range.
// Tileable noise, for creating useful textures. By David Hoskins, Sept. 2013.
// It can be extrapolated to other types of randomised texture.

#define SHOW_TILING
#define TILES 2.0

//----------------------------------------------------------------------------------------
float Hash(in vec2 p, in float scale)
{
    // This is tiling part, adjusts with the scale...
    p = mod(p, scale);
    return fract(sin(dot(p, vec2(35.6898, 24.3563))) * 353753.373453);
}

//----------------------------------------------------------------------------------------
float Noise(in vec2 x, in float scale)
{
    x *= scale;
    vec2 p = floor(x);
    vec2 f = fract(x);
    f = f*f*(3.0 - 2.0*f);
    //f = (1.0 - cos(f*3.1415927)) * .5;
    float res = mix(mix(Hash(p, scale), Hash(p + vec2(1.0, 0.0), scale), f.x),
                    mix(Hash(p + vec2(0.0, 1.0), scale), Hash(p + vec2(1.0, 1.0), scale), f.x),
                    f.y);
    return res;
}

//----------------------------------------------------------------------------------------
float fBm(in vec2 p)
{
    float f = 0.4;
    // Change starting scale to any integer value...
    float scale = 14.0;
    float amp = 0.55;
    for (int i = 0; i < 8; i++)
    {
        f += Noise(p, scale) * amp;
        amp *= -.65;
        // Scale must be multiplied by an integer value...
        scale *= 2.0;
    }
    return f;
}

//----------------------------------------------------------------------------------------
void main(void)
{
    vec2 uv = gl_FragCoord.xy / iResolution.xy;
#ifdef SHOW_TILING
    uv *= TILES;
#endif

    // Do the noise cloud (fractal Brownian motion)
    float bri = fBm(uv);
    bri = min(bri * bri, 1.0); // ...cranked up the contrast for no reason.
    vec3 col = vec3(bri);

#ifdef SHOW_TILING
    vec2 pixel = (TILES / iResolution.xy); // Flash borders...
    if (uv.x > pixel.x && uv.y > pixel.y                    // Not first pixel
        && (fract(uv.x) < pixel.x || fract(uv.y) < pixel.y) // Is it on a border?
        && mod(iGlobalTime - 2.0, 4.0) < 2.0)               // Flash every 2 seconds
    {
        col = vec3(1.0, 1.0, 0.0);
    }
#endif
    gl_FragColor = vec4(col, 1.0);
}
Problem Following Tutorial - Type not found: FlxOgmoLoader
I was trying to follow the standard tutorial and am having problems at part 6.
I am using VSCode (a little confused as the installation instructions said it was preferred, but the tutorial recommends FlashDevelop).
I have un-commented the line
<haxelib name="flixel-addons" /> in my Project.xml, and in my PlayState.hx I have the line import flixel.addons.editors.ogmo.FlxOgmoLoader; I have also made sure flixel-addons is installed.
The problem is, under PROBLEMS, VS Code gives:
severity: 'Error' message: 'Type not found : flixel.addons.editors.ogmo.FlxOgmoLoader' at: '7,8' source: 'haxe' code: 'undefined'
Also, when I try to go to the playstate, the game freezes. Any ideas?
- Gama11 administrators
The VSCode template uses hxml files that are generated on compilation for code completion (by default the Flash one for the Flash target). If you're building for another target, the selected hxml file won't have the flixel-addons dependency yet. You can change this in the status bar.
I'd recommend using the Lime extension instead, which doesn't rely on those HXML files and updates dynamically:
Since it generates tasks for you, you should delete any tasks from tasks.json that you're not using for debugging.
The VSCode docs will be updated for use with the Lime extension soon.
Thanks for the reply! Would you recommend FlashDevelop over VSCode, or do you think there is a big difference? It seems VSCode may not have as much documentation.
- Gama11 administrators
I'm one of the maintainers of the VSCode plugin, so I guess I'm too biased to comment. ;)
Vshaxe has a decent amount of documentation:
- Gama11 administrators
The VSCode instructions and flixel-templates have now been updated to use the Lime extension:
(try Shift+F5 to clear the cache if you're still seeing the old version of the page)
You would like to extend XDoclet to generate custom files.
There are five main steps needed to create a custom XDoclet extension. XDoclet refers to custom extensions as modules. A module is a JAR file with a specific naming convention and structure. See Recipe 9.14 for more information on modules. Here are the steps:
Create an Ant task and subtask to invoke your XDoclet code generator from an Ant buildfile.
Create an XDoclet tag handler class to perform logic and generate snippets of code.
Create an XDoclet template file (.xdt) for mixing snippets of Java code with XDoclet tag handlers to generate a complete file.
Create an xdoclet.xml file that defines the relationships of tasks and subtasks, as well as specifying a tag handlers namespace.
Package the new XDoclet code generator into a new JAR file, known as a module.
Creating a custom code generator is not as dire as you may think. The key is to take each step one at a time and build the code generator in small chunks. There are five main steps needed to complete an entire custom code generator, and each of these steps is broken into its own recipe.
The following recipes build a code generator for creating JUnitPerf test classes based on custom XDoclet @ tags. The new @ tags are used to mark up existing JUnit tests to control the type of JUnitPerf test to create. This example is realistic because JUnitPerf tests build upon existing JUnit tests. By simply marking up a JUnit test with specific @ tags, a JUnitPerf test can be generated and executed through Ant. The code generator is aptly named JUnitPerfDoclet.[6]
[6] At the time of this writing, a tool to generate JUnitPerf tests did not exist.
Recipe 9.10 shows how to create a custom Ant Doclet subtask to generate JUnitPerf tests. Recipe 9.11 shows how to create the JUnitPerfDoclet tag handler class to perform simple logic and generate snippets of code. Recipe 9.12 shows how to create a custom template file that uses the JUnitPerfDoclet tag handler. Recipe 9.13 shows how to create an XDoclet xdoclet.xml file used to define information about your code generator. Recipe 9.14 shows how to package JUnitPerfDoclet into a JAR module. Chapter 8 provides information on the JUnitPerf tool and how to update your Ant buildfile to invoke JUnitPerfDoclet. | https://etutorials.org/Programming/Java+extreme+programming/Chapter+9.+XDoclet/9.9+Extending+XDoclet+to+Generate+Custom+Files/ | CC-MAIN-2022-33 | refinedweb | 395 | 66.44 |
[Weblogic Workshop 10.3]- @UseWLW81BindingTypes() For 8.1 Compatability- weblogic.wsee.codec.CodecException: Unable to find xml element
(Doc ID 1633317.1)
Last updated on FEBRUARY 03, 2019
Applies to: Oracle Workshop for Weblogic - Version 10.3 to 10.3 [Release AS10gR3]
Information in this document applies to any platform.
Symptoms
We are upgrading a webservice from 8.1 to 10.3 and are encountering an issue with backwards compatibility when using the @UseWLW81BindingTypes annotation. Specifically, we use @UseWLW81BindingTypes to allow the input params to the webservice call in 10.3 to be optional (minOccurs="0" in the WSDL). This works on atomic data types in the web method as well as String. However, an array of objects (i.e. MyArray[]) is set up as required input in 10.3, where in 8.1 it was not. This is causing an issue with backwards compatibility for web service clients. Note, the input params (params into the web method) are not their own object or in their own namespace. Client requests fail with the trace below.
Cause
Tutorial 1: Structure Your Input
The lightly Python package can process image datasets to generate embeddings or to upload data to the Lightly platform. In this tutorial you will learn how to structure your image dataset such that it is understood by our framework.
You can also skip this tutorial and jump right into training a model:
Supported File Types
Images
Since lightly uses Pillow for image loading it also supports all the image formats supported by Pillow.
Videos
To load videos directly lightly uses torchvision and PyAV. The following formats are supported.
.mov, .mp4 and .avi
Image Folder Datasets
Image folder datasets contain raw images and are typically specified with the input_dir keyword.
Flat Directory Containing Images
You can store all images of interest in a single folder without additional hierarchy. In the example below, lightly will load all filenames and images in the directory data/. Additionally, it will assign all images a placeholder label.
# a single directory containing all images
data/
+--- img-1.jpg
+--- img-2.jpg
...
+--- img-N.jpg
For the structure above, lightly will understand the input as follows:
filenames = [
    'img-1.jpg',
    'img-2.jpg',
    ...
    'img-N.jpg',
]
labels = [
    0,
    0,
    ...
    0,
]
Directory with Subdirectories Containing Images
You can give structure to your input directory by collecting the input images in subdirectories. In this case, the filenames loaded by lightly are with respect to the “root directory” data/. Furthermore, lightly assigns each image a so-called “weak-label” indicating to which subdirectory it belongs.
# directory with subdirectories containing all images
data/
+-- weak-label-1/
    +-- img-1.jpg
    +-- img-2.jpg
    ...
    +-- img-N1.jpg
+-- weak-label-2/
    +-- img-1.jpg
    +-- img-2.jpg
    ...
    +-- img-N2.jpg
...
+-- weak-label-10/
    +-- img-1.jpg
    +-- img-2.jpg
    ...
    +-- img-N10.jpg
For the structure above, lightly will understand the input as follows:
filenames = [
    'weak-label-1/img-1.jpg',
    'weak-label-1/img-2.jpg',
    ...
    'weak-label-1/img-N1.jpg',
    'weak-label-2/img-1.jpg',
    ...
    'weak-label-2/img-N2.jpg',
    ...
    'weak-label-10/img-N10.jpg',
]
labels = [
    0,
    0,
    ...
    0,
    1,
    ...
    1,
    ...
    9,
]
Video Folder Datasets
The lightly Python package allows you to work directly on video data, without having to extract the frames first. This can save a lot of disc space as video files are typically strongly compressed. Using lightly on video data is as simple as pointing the software at an input directory where one or more videos are stored. The package will automatically detect all video files and index them so that each frame can be accessed.
An example for an input directory with videos could look like this:
data/
+-- my_video_1.mov
+-- my_video_2.mp4
+-- subdir/
    +-- my_video_3.avi
    +-- my_video_4.avi
We assign a weak label to each video. To upload the videos from above to the platform, you can use
lightly-upload token='123' new_dataset_name='my_video_dataset' input_dir='data/'
All other operations (like training a self-supervised model and embedding the frames individually) also work on video data. Give it a try!
Note
Randomly accessing video frames is slower compared to accessing the extracted frames on disc. However, by working directly on video files, one can save a lot of disc space because the frames do not have to be extracted beforehand.
Embedding Files
Embeddings generated by the lightly Python package are typically stored in a .csv file and can then be uploaded to the Lightly platform from the command line. If the embeddings were generated with the lightly command-line tool, they have the correct format already.
You can also save your own embeddings in a .csv file to upload them. In that case, make sure the file meets the format requirements: Use the save_embeddings function from lightly.utils.io to convert your embeddings, weak-labels, and filenames to the right shape.
import numpy as np
import lightly.utils.io as io

# embeddings:
# embeddings are stored as an n_samples x dim numpy array
embeddings = np.array([[0.1, 0.5],
                       [0.2, 0.2],
                       [0.1, 0.9],
                       [0.3, 0.2]])

# weak-labels
# a list of integers carrying meta-information about the images
labels = [0, 1, 1, 0]

# filenames
# list of strings containing the filenames of the images w.r.t. the input directory
filenames = [
    'weak-label-0/img-1.jpg',
    'weak-label-1/img-1.jpg',
    'weak-label-1/img-2.jpg',
    'weak-label-0/img-2.jpg',
]

io.save_embeddings('my_embeddings_file.csv', embeddings, labels, filenames)
The code shown above will produce the following .csv file:
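Assuming the standard lightly column layout (filenames first, then one column per embedding dimension, then the weak labels), the file looks like this:

filenames,embedding_0,embedding_1,labels
weak-label-0/img-1.jpg,0.1,0.5,0
weak-label-1/img-1.jpg,0.2,0.2,1
weak-label-1/img-2.jpg,0.1,0.9,1
weak-label-0/img-2.jpg,0.3,0.2,0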
Note
Note that lightly automatically creates “weak” labels for datasets with subfolders. Each subfolder corresponds to one weak label. The labels are called “weak” since they might not be used for a task you want to solve with ML directly but still can be relevant to group the data into buckets.
Advanced usage of Embeddings
In some cases you want to enrich the embeddings with additional information. The lightly csv scheme is very simple and can be easily extended. For example, you can add your own embeddings to the existing embeddings. This could be useful if you have additional meta information about each sample.
Add Custom Embeddings
To add custom embeddings you need to add more embedding columns to the .csv file. Make sure you keep the enumeration of the embeddings in the correct order.
Start from the two-dimensional embedding file produced above. We can now append our own embedding columns to it.
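For example, extending the file above with a third embedding column (the values and the embedding_N naming pattern are illustrative):

filenames,embedding_0,embedding_1,embedding_2,labels
weak-label-0/img-1.jpg,0.1,0.5,0.7,0
weak-label-1/img-1.jpg,0.2,0.2,0.1,1
weak-label-1/img-2.jpg,0.1,0.9,0.4,1
weak-label-0/img-2.jpg,0.3,0.2,0.8,0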
Note
The embedding columns must be grouped together. You can not have another column between two embedding columns.
Next Steps
Now that you understand the various data formats lightly supports, you can start training a model:
Created on 2015-05-05 20:51 by levkivskyi, last changed 2015-08-05 14:55 by ncoghlan. This issue is now closed.
The documentation on execution model contains the statement
"""
A class definition is an executable statement that may use and define names. These references follow the normal rules for name resolution. The namespace of the class definition becomes the attribute dictionary of the class. Names defined at the class scope are not visible in methods.
"""
However, the following code (taken from):
x = "xtop"
y = "ytop"
def func():
x = "xlocal"
y = "ylocal"
class C:
print(x)
print(y)
y = 1
func()
prints
xlocal
ytop
In the case of "normal rules for name resolution" it should raise UnboundLocalError.
I suggest replacing the mentioned statement with the following:
"""
A class definition is an executable statement that may use and define names. Free variables follow the normal rules for name resolution, bound variables are looked up in the global namespace. The namespace of the class definition becomes the attribute dictionary of the class. Names defined at the class scope are not visible in methods.
"""
or a similar one.
Since no one proposed alternative ideas, I am submitting my proposal as a patch, with the following wording:
"""
A class definition is an executable statement that may use and define names. Free variables follow the normal rules for name resolution, while unbound local variables are looked up in the global namespace. The namespace of the class definition becomes the attribute dictionary of the class. Names defined at the class scope are not visible in methods
"""
Should I invite someone to review the patch or just wait? How are things organized here?
In this particular case, just wait (now that you have pinged the issue). Raymond is the most likely person to figure out how to phrase this better, but it isn't obvious what the best way to explain this is. I don't think your explanation is exactly correct, but I don't know enough about how class name resolution is implemented to explain what's wrong with it, I just know it doesn't feel quite right :) (Of course, I might be wrong.)
Ping the issue again in a few weeks if there is no action.
I've left a review. That said, we need to be sure this behavior is intentional. The fact that it skips the "nonlocal" scope(s) smells like a bug to me.
Eric, thank you for the review. I have incorporated proposed changes in second version of the patch.
Concerning the question whether it is a bug, it also smells like a bug to me, but Guido said 13 years ago that this should not be changed: and it stayed like this since then. However, things changed a bit in Python 3.4 with the introduction of the LOAD_CLASSDEREF opcode. Perhaps, we should ask Guido again :) What do you think?
I expect you'll get the same response, especially given potential (though slight) chance for backward-compatibility issues. What I find curious is Guido's reference to "the rule that class bodies don't play the nested
scopes game" (and his subsequent explanation). Is there something about that in the language reference? If so, the patch should be updated to link to that section. If not then it should be added to the language reference.
That said, it wouldn't hurt to ask on python-dev, particularly in light of that new opcode.
The "normal rules for name resolution" reference here is referring to the name lookup rules as they existed prior to the introduction of lexical scoping for functions. It's a dated way of describing it, as the current behaviour of functions has now been around long enough that a lot of folks will consider *that* normal, and the module, class and exec scoping rules to be the unusual case (as levkivskyi has here).
However, I've spent far too many hours staring at CPython compiler internals to be able to suggest a helpful rewording that will make sense to folks that *haven't* done that, so I'll instead provide the relevant background info to see if others can come up with a concise rewording of the reference docs :)
Prior to Python 2.1, Python didn't have closure support, and hence nested functions and classes couldn't see variables in outer scopes at all - they could see their local scope, the module globals, and the builtins. That changed with the introduction of nested scopes as a __future__ import in Python 2.1 and the default behaviour in 2.2:
As a result of that change, the compiler now keeps track of "function locals" at compile time, and *emits different code for references to them*. Where early versions of CPython only had LOAD_NAME and LOAD_GLOBAL in the bytecode, these days we now also have LOAD_FAST (function local), LOAD_CLOSURE (function local referenced as a nonlocal), LOAD_DEREF (function nonlocal) and LOAD_CLASSDEREF (class nonlocal). The latter four opcodes will *only* be emitted in a function body - they'll never be emitted for module level code (include the bodies of module level class definitions). If you attempt to reference a function local before a value has been assigned, you'll get UnboundLocalError rather than NameError.
The name lookup rules used for execution of class bodies are thus the same ones used for the exec() builtin with two namespace arguments: there is a local namespace where name assignments happen, and name lookups check the local, global and builtin namespaces in that order. The code is executed line by line, so if a name is referenced before it has been assigned locally, then it may find a global or builtin of that name. Classes that are defined inside a function may refer to lexically scoped local variables from the class body, but class variables are not themselves visible to function definitions nested inside a class scope (i.e. method definitions).
These rules are also used for module level execution and exec() with a single namespace argument, except that the local namespace and the global namespace refer to the same namespace.
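One way to see the split Nick describes is to disassemble a class body that is compiled inside a function (Python 3.4+; fishing the class body's code object out of co_consts is just a convenience for the demo):

import dis

def func():
    x = "xlocal"
    class C:
        print(x)   # free variable from func -> LOAD_CLASSDEREF
        y = 1      # class-body assignment  -> STORE_NAME
        print(y)   # plain name lookup      -> LOAD_NAME
    return C

# The class body is compiled into its own code object, stored in co_consts.
body = next(c for c in func.__code__.co_consts
            if hasattr(c, "co_name") and c.co_name == "C")
dis.dis(body)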
Eric, the "rule" that classes don't play the nested scopes game is explained at beginning of the same section, but the explanation is "one sided" it only explains that names defined in classes are not visible inside functions.
Nick, thank you for the thorough explanation. I will try to improve the wording. It looks like a bit more substantial changes are needed.
Related to and others mentioned there.
Eric, I have submitted a new version of the patch. Could you please make a review? Nick, it will be interesting to hear your opinion too.
I tried to follow such rules:
1. Explanation should be succinct yet clear
2. It should say as little as possible about implementation details
3. Minimize necessary changes
It turns out that these goals could be achieved by
a) simply reshuffling and structuring the existing text to separate the exceptions (classes, etc.) from the general case;
and
b) adding some minor clarifications.
Armin, thank you for the link. It looks like this is a really old discussion.
PS: Unfortunately, the diff after reshuffling of the text looks big and cumbersome, in fact the changes are minimal.
Nick, thank you for a review, I have made a new patch with all the previous comments taken into account.
It looks like on python-dev () there is an agreement that this behavior should not be changed (at least not in the nearest future). If there are no more comments/suggestions, then maybe one could accept the latest patch?
What is holding the patch up now? Should I do something or just wait?
I am sorry, but I still don't get how things are organized here, so I am pinging this again. What is the next step? Should I wait for another review?
Your ping after a month is very appropriate. It looks like yes, this is waiting for another review. Based on the fact that the previous patches were reviewed by core devs and you have responded, I'm moving it to 'commit review', but I haven't looked at the patch myself.
I merged Ivan's latest patch to 3.4/3.5/default. We're unlikely to ever be able to make these docs completely intuitive (as name resolution is genuinely complex), but Ivan's revisions at least mean we're no longer assuming readers know how the name resolution worked prior to the introduction of lexical scoping, and a couple of tricky cases now have inline examples.
I also noticed an existing paragraph in the docs that *I* didn't understand, and filed issue #24796 to cover that. I'm not sure if we should just delete the paragraph, or if we accidentally dropped a compile time error check that didn't have any tests to ensure we were detecting the problem.
The issue tracker was having issues and didn't automatically register the commits. Links:
3.4:
3.5:
default:
Adventures with OpenOffice and XML
Having migrated the project to open source, Sun did the right thing in opening the development process. Hence all of Sun's decisions are open to public discussion on the OpenOffice mailing lists. Sun had a specific set of requirements when designing the XML format for OpenOffice. The short list of requirements can be found on the OpenOffice XML site, but let's review the list here.
Core Requirements:
XML alone is not enough.
Structured content should make use of XML's structuring capabilities and be represented in terms of XML elements and attributes.
The file format must be fully documented and have no "secret" features.
OpenOffice must be the reference implementation for this file format.
Sun plans for XML to become the default save format. This is not the case presently in build 605. I have to select "Save As" to export to XML, but when OpenOffice is finally released expect it to be the default save format.
OpenOffice documents can be compound -- that is, they can contain multiple documents of different formats. Sun examined the different ways of packaging up compound documents using XML. It picked the ZIP format. Initially this choice surprised me since I've always thought the most standard way to store binary data in XML was base64 encoding. However this decision is fully explained in detail on the OpenOffice site. Two factors were vital: ZIP's indexing ability and the importance of being able to load and save on demand. It means that an OpenOffice file will be a ZIP file containing at least one XML file, along with other files of relevance to the document (such as images, and possibly other OpenOffice files).
What about the details of the XML format itself? A specification document is available online, although it's a big document, so I'll distill some of it here.
Document metadata is one of the more interesting features of the
OpenOffice XML format. OpenOffice metadata is enclosed in the
<office:meta> tag at the top level of the document,
immediately following the document root. Sun chose Dublin Core for the
majority of their metadata elements. Where Dublin Core did not have an
available element, Sun created elements in their meta namespace, including:
generator -- the application that created this file;
initial-creator -- the original author of the file (dc:creator is used for the person who last edited the file);
creation-date -- the date this file was first created (dc:date is used for the date of the last edit);
keywords -- can be edited in the document properties dialog.
How is this useful? We could write a Perl script to display the author of OpenOffice files.
If we call this script dcdir, running it over a directory full of OpenOffice XML files prints one author per file.
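A minimal sketch of such a script, shown here in Python rather than Perl and using only the standard library. The assumptions: the metadata lives in a stream named meta.xml inside the ZIP package, and elements are matched by local name so we don't have to pin down exact namespace URIs:

import sys
import zipfile
import xml.etree.ElementTree as ET

def initial_creator(path):
    # An OpenOffice document is a ZIP archive; read its metadata stream.
    with zipfile.ZipFile(path) as zf:
        root = ET.fromstring(zf.read("meta.xml"))
    for elem in root.iter():
        # Compare local names only, ignoring the namespace prefix.
        if elem.tag.rsplit("}", 1)[-1] == "initial-creator":
            return elem.text
    return None

if __name__ == "__main__":
    for path in sys.argv[1:]:
        # Hypothetical output:  report.sxw: John Doe
        print("%s: %s" % (path, initial_creator(path)))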
This works regardless of the type of OpenOffice file we are examining. With a little more work we can ensure that the file is an XML file of the OpenOffice format (at the moment, this script will crash when it comes across a non-XML file). See Kip Hampton's regular Perl and XML column for more details on using XML::XPath.
OpenOffice formats text using text styles, allowing easy
modification of a document's appearance. Styling information is saved
in the XML format. The list of defined styles is enclosed within the <office:styles> element.
Each style, marked up with the <style:style> element, defines, in attributes: a style name; a style family (for example a paragraph style or a text (inline) style, equivalent to <div> or <span> in HTML); a parent style (because styles inherit their parent style's attributes); and a class, which is used in the OpenOffice style dialog box to categorize styles.
Within the style element itself are style properties, which are
stored in the attributes of the
<style:properties> empty
element. The properties of a style are inherited from the ancestor
styles and only modified properties are stored (which saves
space). The second interesting re-use of public XML schemas occurs in
the use of XSL FO attributes (about which there's more in "Using
Formatting Objects") to define style properties. Theoretically
this means we should be able to do some formatting to produce an XSL
FO document.
Why would we want to do this when we can print directly from OpenOffice? I work in content management and application serving (see my XML.com article on AxKit), and some of my clients would like to be able to use an ordinary word processor to create content. By doing some preprocessing, and then passing the output to FOP or another XSL FO processor, we can generate PDF files automatically from content saved into the web hierarchy. (This functionality isn't yet available, but please get in touch if this sort of thing interests you.)
It is again worth noting here that where XSL FO did not have an equivalent attribute to the internal implementation in OpenOffice, Sun have defined their own attributes in one of the OpenOffice namespaces.
How do you allow people to use styles, yet also allow local modifications to the fonts, weight, and so on? OpenOffice does this by injecting an automatic style between the text and the real style, so that rather than
"Some text" -> is of style -> "Title"
we have
"Some text" -> is of auto style -> "P1" -> parent style -> "Title"
OpenOffice has a section called
<style:automatic-styles>
following the main styles definitions. Within the main body of the
document, only automatic styles are used.
There are some changes going on in this area presently. The current CVS builds at Sun only use automatic styles when local modifications to the formatting have been made. For example, suppose we make some text bold within a paragraph. OpenOffice uses the automatic style to define that hard formatting with a span. The result might look something like
Finally we get to the main body of the document. Of all the
sections, it's probably the simplest to follow. We will address each
of the major tags in turn. Unlike other sections, the body text is
free-form, so the following tags can appear anywhere within the
<office:body> section..
OpenOffice spans are exactly the same as spans in HTML. They delimit an inline section of a paragraph, applying alternate styling to the spanned text.
Lists are defined by tags of similar same name as those used in DocBook. Specifically these are
<text:ordered-list>,
<text:unordered-list> and
<text:list-item>.
Vector graphics can be embedded directly into the document with OpenOffice, which is a nice feature, but you will be even more pleased to know that OpenOffice uses SVG as its native vector graphics format. And these vector graphics can occur directly within the flow of the body document. Daniel Vogelheim informed me, however, that while mostly correct, the format is "mostly SVG". There are some things that OpenOffice can do with graphics that SVG does not define. So again they have extended the format using elements in their own namespace.
As an illustration, you can download the source XML file of this article which was written in OpenOffice build 605, saved as XML, and then transformed using the techniques below.
How can we put the XML generated from OpenOffice to good use? What XML geeks really want to see is a free WYSIWYG XML editor like XMetaL or Adept. And here it is. If we restrict ourselves (or our customers) to using defined styles, OpenOffice can truly be a structured XML editor, without ever knowing you are editing XML.. | http://www.xml.com/lpt/a/723 | crawl-001 | refinedweb | 1,269 | 61.77 |
The contents of a SWF is simple in principle and complex in practice. In order to keep SWFs as small as possible back in the dark old days of 56K or slower modems, the developers at Macromedia, and in turn Adobe, have used some clever but complex techniques to pack data into as few bytes as possible. For example, a basic data type within a SWF is the Rectangle record. This structure uses five bits to define the size – in bits – of each of the four coordinate values of the rectangle. So if those first five bits were 00111, (which is 7 in decimal) then the Xmin value would be spread across the last 3 bits of that first byte and the first four bits of the next byte!
Luckily for us mere mortals, we don’t need to worry about these details in order to explore the internals of a SWF. Other folk have written frameworks like the AS3Commons library, which decode these complicated structures for us and present them as simpler AS3 data types. So to delve into the internals of a SWF, one need only have knowledge of the simpler principles, which is what I’ll try to describe here.
As shown in the diagram below, the basic structure of a SWF consists of a header and a series of blocks of data known as tags.
The header contains information such as the target flash player version, the size of the file, whether the tags are compressed or not etc. We can safely ignore this information here, so I’ll mention the header no further.
The tags form the bulk of the contents of the SWF. There are a great many types of tag for things such as images, sounds, the time line and the like. There is one particular tag though that is of interest to anyone wanting to know about the code within a SWF. That tag is the DoABC tag and it contains the AS3 code contained within a SWF. Adobe introduced the DoABC tag with Flash Player 9. Prior to that, code was contained within DoAction tags. This latter tag is effectively now deprecated and can be ignored. Incidentally, if you are curious as to why the DoABC tag is named thus, “ABC” stands for ActionScript Byte Code. So in crude terms, we can view a SWF as containing some boring stuff that we do not care about and a series of DoABC tags, which we do care about.
So what is in a DoABC tag? In most cases, there is a simple one to one mapping between an ActionScript .as file and a .abc “file” contained within the DoABC tag. The reason for saying in most cases is because:
- The AS3 specification imposes restrictions on the contents of .as file
- The specification of the DoABC tag doesn’t have these same restrictions, and
- Adobe themselves ignore those restrictions for SWFs within some SWCs within the SDK
I’ll mention more about those special case SWCs in another post on AS3Introspection, for they cause problems with AS3Commons and so, in turn, cause AS3Introspection problems too. For now though, it’s best to concentrate on the more common one to one mapping, so the rest of this article will focus on those AS3-compliant DoABC tags therefore.
To understand the contents of a DoABC tag, it’s necessary to introduce another term: trait. Traits correspond to the following list of things within AS3:
- functions/ methods
- classes
- interfaces
- constants
- variables
- namespaces
- getters
- setters
A DoABC tag will contain some optional metadata and a list of traits. Exactly one of those traits will be either a public or internal definition of one of the above “things”. All other traits will be private. This is shown in the diagram below:
Now this might sound a bit odd because you are probably used to the idea that an .as file contains a class with many public members. The explanation is that the class is the public (or internal) trait and its contents are nested within its trait details. The following is an extreme, but completely valid, example of an .as file that illustrates the point:
There are a two aspects to the contents of this file that may be unfamiliar to you. First, there is a lost of content that is defined outside the package. Second, the package only defines a variable; not a class or interface. An .as file must contain exactly one package definition and within that, exactly one trait. There is no requirement to put a class or interface inside that package definition as you’ll know if you have ever defined a namespace. It is possible – though really undesirable – to create global variables and functions via this method. The variable z in the above example is one such global variable. As well as the package definition, an .as file is free to contain any number of traits outside of the package definition. The consequence of being outside of the package is that all such traits are marked private and are only accessible to other traits within the .as file. So the DoABC tag for the above file would look like the following:
So in conclusion, a SWF file can be viewed as containing a bunch of DoABC tags, each of which can contain a bunch of traits. This is pretty much how AS3Introspection presents the contents of a SWF. It’s a simplification of course, but ought to be enough to allow a person to make sense of AS3Introspection. If this article has prompted you to want to know more though, then Adobe’s official documentation is the place to turn: | http://www.davidarno.org/2011/05/17/beginners-guide-to-swf-internals/ | CC-MAIN-2019-43 | refinedweb | 947 | 69.01 |
Roslaunch tips for large projectsDescription: This tutorial describes some tips for writing roslaunch files for large projects. The focus is on how to structure launch files so they may be reused as much as possible in different situations. We'll use the 2dnav_pr2 package as a case study.
Tutorial Level: INTERMEDIATE
Next Tutorial: Roslaunch Nodes in Valgrind or GDB
Contents
- Introduction
- Top-level organization
- Machine tags and Environment Variables
- Parameters, namespaces, and yaml files
- Reusing launch files
- Parameter overrides
- Roslaunch arguments
- Config files and roslaunch
Introduction
Large applications on a robot typically involve several interconnected nodes, each of which have many parameters. 2d navigation is a good example. The 2dnav_pr2 application consists of the move_base node itself, localization, ground plane filtering, the base controller, and the map server. Collectively, there are also a few hundred ROS parameters that affect the behavior of these nodes. Finally, there are constraints such as the fact that ground plane filtering should run on the same machine as the tilt laser for efficiency.
A roslaunch file allows us to say all this. Given a running robot, launching the file 2dnav_pr2.launch in the 2dnav_pr2 package will bring up everything required for the robot to navigate. In this tutorial, we'll go over this launch file and the various features used.
We'd also like roslaunch files to be as reusable as possible. In this case, moving between physically identical robots can be done without changing the launch files at all. Even a change such as moving from the robot to a simulator can be done with only a few changes. We'll go over how the launch file is structured to make this possible.
Top-level organization
Here is the top-level launch file (in "rospack find 2dnav_pr2/move_base/2dnav_pr2.launch").
<launch> <group name="wg"> <include file="$(find pr2_alpha)/$(env ROBOT).machine" /> <include file="$(find 2dnav_pr2)/config/new_amcl_node.xml" /> <include file="$(find 2dnav_pr2)/config/base_odom_teleop.xml" /> <include file="$(find 2dnav_pr2)/config/lasers_and_filters.xml" /> <include file="$(find 2dnav_pr2)/config/map_server.xml" /> <include file="$(find 2dnav_pr2)/config/ground_plane.xml" /> <!-- The navigation stack and associated parameters --> <include file="$(find 2dnav_pr2)/move_base/move_base.xml" /> </group> </launch>
This file includes a set of other files. Each of these included files contains nodes and parameters (and possibly nested includes) pertaining to one part of the system, such as localization, sensor processing, and path planning.
Design tip: Top-level launch files should be short, and consist of include's to other files corresponding to subcomponents of the application, and commonly changed ROS parameters.
This makes it easy to swap out one piece of the system, as we'll see later.
To run this on the PR2 robot requires bringing up a core, then bringing up a robot-specific launch file such as pre.launch in the pr2_alpha package, and then launching 2dnav_pr2.launch. We could have included a robot launch file here rather than requiring it to be launched separately. That would bring the following tradeoffs:
- PRO: We'd have to do one fewer "open new terminal, roslaunch" step.
- CON: Launching the robot launch file initiates a calibration phase lasting about a minute long. If the 2dnav_pr2 launch file included the robot launch file, every time we killed the roslaunch (with control-c) and brought it back up, the calibration would happen again.
- CON: Some of the 2d navigation nodes require that the calibration already have finished before they start. Roslaunch intentionally does not provide any control on the order or timing of node start up. The ideal solution would be to make nodes work gracefully by waiting till calibration is done, but pending that, putting things in two launch files allows us to launch the robot, wait until calibration is complete, then launch 2dnav.
There is therefore no universal answer on whether or not to split things into multiple launch files. Here, it has been decided to use two different launch files.
Design tip: Be aware of the tradeoffs when deciding how many top-level launch files your application requires.
Machine tags and Environment Variables
We would like control over which nodes run on which machines, for load-balancing and bandwidth management. For example, we'd like the amcl node to run on the same machine as the base laser. At the same time, for reusability, we don't want to hardcode machine names into roslaunch files. Roslaunch handles this with machine tags.
The first include is
<include file="$(find pr2_alpha)/$(env ROBOT).machine" />
The first thing to note about this file is the use of the env substitution argument to use the value of the environment variable ROBOT. For example, doing
export ROBOT=pre
prior to the roslaunch would cause the file pre.machine to be included.
Design tip: Use the env substitution argument to allow parts of a launch file to depend on environment variables.
Next, let's look at an example machine file: pre.machine in the pr2_alpha package.
<launch> <machine name="c1" address="pre1" ros- <machine name="c2" address="pre2" ros- </launch>
This file sets up a mapping between logical machine names, "c1" and "c2" in this case, and actual host names, such as "pre2". It even allows controlling the user you log in as (assuming you have the appropriate ssh credentials).
Once the mapping has been defined, it can be used when launching nodes. For example, the included file config/new_amcl_node.xml in the 2dnav_pr2 package contains the line
<node pkg="amcl" type="amcl" name="amcl" machine="c1">
This causes the amcl node to run on machine with logical name c1 (looking at the other launch files, you'll see that most of the laser sensor processing has been put on this machine).
When running on a new robot, say one known as prf, we just have to change the ROBOT environment variable. The corresponding machine file (prf.machine in the pr2_alpha package) will then be loaded. We can even use this for running on a simulator, by setting ROBOT to sim. Looking at the file sim.machine in the pr2_alpha package, we see that it just maps all logical machine names to localhost.
Design tip: Use machine tags to balance load and control which nodes run on the same machine, and consider having the machine file name depend on an environment variable for reusability.
Parameters, namespaces, and yaml files
Let's look at the included file move_base.xml. Here is a portion of this file:
<node pkg="move_base" type="move_base" name="move_base" machine="c2"> <remap from="odom" to="pr2_base_odometry/odom" /> <param name="controller_frequency" value="10.0" /> <param name="footprint_padding" value="0.015" /> <param name="controller_patience" value="15.0" /> <param name="clearing_radius" value="0.59" /> <rosparam file="$(find 2dnav_pr2)/config/costmap_common_params.yaml" command="load" ns="global_costmap" /> <rosparam file="$(find 2dnav_pr2)/config/costmap_common_params.yaml" command="load" ns="local_costmap" /> <rosparam file="$(find 2dnav_pr2)/move_base/local_costmap_params.yaml" command="load" /> <rosparam file="$(find 2dnav_pr2)/move_base/global_costmap_params.yaml" command="load" /> <rosparam file="$(find 2dnav_pr2)/move_base/navfn_params.yaml" command="load" /> <rosparam file="$(find 2dnav_pr2)/move_base/base_local_planner_params.yaml" command="load" /> </node>
This fragment launches the move_base node. The first included element is a remapping. Move_base is designed to receive odometry on the topic "odom". In the case of the pr2, odometry is published on the pr2_base_odometry topic, so we remap it.
Design tip: Use topic remapping when a given type of information is published on different topics in different situations.
This is followed by a bunch of <param> tags. Note that these parameters are inside the node element (since they're before the </node> at the end), so they will be private parameters. For example, the first one sets move_base/controller_frequency to 10.0.
After the <param> elements, there are some <rosparam> elements. These read parameter data in yaml, a format which is human readable and allows complex data structures. Here's a portion of the costmap_common_params.yaml file loaded by the first <rosparam> element:
raytrace_range: 3.0 footprint: [[-0.325, -0.325], [-0.325, 0.325], [0.325, 0.325], [0.46, 0.0], [0.325, -0.325]] inflation_radius: 0.55 # BEGIN VOXEL STUFF observation_sources: base_scan_marking base_scan tilt_scan ground_object_cloud base_scan_marking: {sensor_frame: base_laser, topic: /base_scan_marking, data_type: PointCloud, expected_update_rate: 0.2, observation_persistence: 0.0, marking: true, clearing: false, min_obstacle_height: 0.08, max_obstacle_height: 2.0}
We see that yaml allows things like vectors (for the footprint parameter). It also allows putting some parameters into a nested namespace. For example, base_scan_marking/sensor_frame is set to base_laser. Note that these namespaces are relative to the yaml file's own namespace, which was declared as global_costmap by the ns attribute of the including rosparam element. In turn, since that rosparam was included by the node element, the fully qualified name of the parameter is /move_base/global_costmap/base_scan_marking/sensor_frame.
The next line in move_base.xml is:
<rosparam file="$(find 2dnav_pr2)/config/costmap_common_params.yaml" command="load" ns="local_costmap" />
This actually includes the exact same yaml file as the line before it. It's just in a different namespace (the local_costmap namespace is for the trajectory controller, while the global_costmap namespace affects the global navigation planner). This is much nicer than having to retype all the values.
The next line is:
<rosparam file="$(find 2dnav_pr2)/move_base/local_costmap_params.yaml" command="load"/>
Unlike the previous ones, this element doesn't have an ns attribute. Thus the yaml file's namespace is the parent namespace, /move_base. But take a look at the first few lines of the yaml file itself:
local_costmap: #Independent settings for the local costmap publish_voxel_map: true global_frame: odom_combined robot_base_frame: base_link
Thus we see that the parameters are in the /move_base/local_costmap namespace after all.
Design tip: Yaml files allow parameters with complex types, nested namespaces of parameters, and reusing the same parameter values in multiple places.
Reusing launch files
The motivation for many of the tips above was to make reusing launch files in different situations easier. We've already seen one example, where the use of the env substitution arg can allow modifying behavior without changing any launch files. There are some situations, though, where that's inconvenient or impossible. Let's take a look at the pr2_2dnav_gazebo package. This contains a version of the 2d navigation app, but for use in the Gazebo simulator. For navigation, the only thing that changes is actually that the Gazebo environment we use is based on a different static map, so the map_server node must be loaded with a different argument. We could have used another env substitution here. But that would require the user to set a bunch of environment variables just to be able to roslaunch. Instead, 2dnav gazebo contains its own top level launch file called '2dnav-stack-amcl.launch', shown here (modified slightly for clarity):
<launch> <include file="$(find pr2_alpha)/sim.machine" /> <include file="$(find 2dnav_pr2)/config/new_amcl_node.xml" /> <include file="$(find 2dnav_pr2)/config/base_odom_teleop.xml" /> <include file="$(find 2dnav_pr2)/config/lasers_and_filters.xml" /> <node name="map_server" pkg="map_server" type="map_server" args="$(find gazebo_worlds)/Media/materials/textures/map3.png 0.1" respawn="true" machine="c1" /> <include file="$(find 2dnav_pr2)/config/ground_plane.xml" /> <!-- The naviagtion stack and associated parameters --> <include file="$(find 2dnav_pr2)/move_base/move_base.xml" /> </launch>
The first difference is that, since we know we're on the simulator, we just use the sim.machine file rather than using a substitution argument. Second, the line
<include file="$(find 2dnav_pr2)/config/map_server.xml" />
has been replaced by
<node name="map_server" pkg="map_server" type="map_server" args="$(find gazebo_worlds)/Media/materials/textures/map3.png 0.1" respawn="true" machine="c1" />
The included file in the first case just contained a node declaration as in the second case, but with a different map file.
Design tip: To modify a "top-level" aspect of an application, copy the top level launch file and change the portions you need.
Parameter overrides
The technique above sometimes becomes inconvenient. Suppose we want to use 2dnav_pr2, but just change the resolution parameter of the local costmap to 0.5. We could just locally change local_costmap_params.yaml. This is the simplest for temporary modifications, but it means we can't check the modified file back in. We could instead make a copy of local_costmap_params.yaml and modify it. We would then have to change move_base.xml to include the modified yaml file. And then we would have to change 2dnav_pr2.launch to include the modified move_base.xml. This can be time-consuming, and if using version control, we would no longer see changes to the original files. An alternative is to restructure the launch files so that the move_base/local_costmap/resolution parameter is defined in the top-level file 2dnav_pr2.launch, and make a modified version of just that file. This is a good option if we know in advance which parameters are likely to be changed.
Another option is to use roslaunch's overriding behavior: parameters are set in order (after includes are processed). Thus, we could make a further top-level file that overrides the original resolution:
<launch> <include file="$(find 2dnav_pr2)/move_base/2dnav_pr2.launch" /> <param name="move_base/local_costmap/resolution" value="0.5"/> </launch>
The main drawback is that this method can make things harder to understand: knowing the actual value that roslaunch sets for a parameter requires tracing through the including roslaunch files. But it does avoid having to make copies of multiple files.
Design tip: To modify a deeply nested parameter in a tree of launch files which you cannot change, use roslaunch's parameter overriding semantics.
Roslaunch arguments
As of CTurtle, roslaunch has an argument substitution feature together with tags that allow conditioning parts of the launch file on the value of arguments. This can be a more general and clear way to structure things than the parameter override mechanism or launch file reuse techniques above, at the cost of having to modify the original launch file to specify what the changeable arguments are. See the roslaunch XML documentation.
Design tip: If you can modify the original launch file, it's often preferable to use roslaunch arguments rather than parameter overriding or copying roslaunch files.
Config files and roslaunch
A .launch file is a type of config file. From file system point of view there is no difference between .launch files and any other xml-formated files (so any file extension works. For simplicity .launch is used in this section). It does NOT have an ability to start any process by itself. Instead, it only works when it's passed to certain executables that take a .launch file as an input. Most notable such executable is roslaunch.
Packaging config (including .launch) files
Reuse-able config files are commonly included in packages. Then consumer software of such config files can access these configs by looking at the path configs sit, or more commonly by using ROS' resource lookup mechanism (`rospack find`) so that the consumer doesn't need to know the path of configs. The latter takes an advantage of packaging config files in a "package" we're talking about (rosmake/Catkin/Colcon compatible package).
In roslaunch context, meaning of config files vary: * .launch file: It is in/directly passed to/parsed by an executable roslaunch (or its internal process running via its API). * YAML: Path of a file can be passed to rosparam tag, which reads the file and upload the content on to ROS Parameter server. * No built-in support for other type of files.
Inside of .launch, one notable functionality is that path of other resource can be substituted by $(find pkg).
Practice-A. Separate packages for .launch and other config files
Combined these, a good packaging practice for configs is to have separate packages for .launch and other types of config files. Say, YOURPRJ_config and YOURPRJ_launch packages. Because * There's a good chance that the configs in YOURPRJ_config pkg are referenced in some .launch files in YOURPRJ_launch. I.e. YOURPRJ_launch depends on YOURPRJ_config. * Say there's another package in your project, YOURPRJ_calibration package, which provide nodes, which use configs stored in YOURPRJ_config. You also have launch files to start those nodes.
If both launch and other configs were packaged in a same package, say YOURPRJ_config_single cyclic dependency would occur.
launch files in YOURPRJ_config_single reference nodes in YOURPRJ_calibration.
nodes in YOURPRJ_calibration reference config files in YOURPRJ_config_single.
- With launch and other configs stored in separate packages, circular dependency can be avoided.
nodes in YOURPRJ_calibration reference config files in YOURPRJ_config.
launch files in YOURPRJ_launch reference nodes in YOURPRJ_calibration.
Practice-B. Group many configs into a single package or fewer packages
In ROS-Industrial framework where handling many but similar hardware is one of the motivation, grouping many configs incl. .launch files is found convenient. See a relevant discussion (discourse.ros.org#18443). | https://wiki.ros.org/ROS/Tutorials/Roslaunch%20tips%20for%20larger%20projects | CC-MAIN-2021-31 | refinedweb | 2,767 | 58.08 |
I have an issue with some basic code I've been writing to learn C++. I've posted it here with comments. I'm using Code::Blocks, if that helps, and if the formatting doesn't come out right in the post, you should be able to copy and paste it to a maximized notepad file to get the formatting right. Otherwise, I'll attach a .txt file to this post. Take your pick.
Thanks again,Thanks again,Code://This is a nonsensical program for me to practice various aspects of the C++ Language. It should //have a very simple conversation with the user. However, when I attempt to compile the program, I //get several error messages stating (for example): // "error: too few arguments to function 'void DayMessage(std::string, std::string)'" //Please help me understand where I am going wrong, as I have looked over the code and cannot //understand the errors. Thank you. Replies can be posted here or sent to sdaniels1288@gmail.com. #include <iostream> using namespace std; //Just a simple function usage. Extraneous, I know, but my first self-created function. void HelloThere() { cout<<"\n\nWell, hello there!\n\n"; } //Not extraneous. Should get a carriage return to pass to void PrgmEnd() void GetBreakCmd() { string BreakCmd="a"; //Initializes BreakCmd to 'a' so that Return must be pressed. cout<<"\n\n\nPlease press Return (Enter) to close."; cin>>BreakCmd; } //First instance of the above stated error. Should pass BreakCmd to the function, and if matching, //close down the program. Otherwise, it should repeat the prompt to press enter to close. void PrgmEnd(string BreakCmd) { if (BreakCmd == "")//In other words, if Return (Enter) was pressed { //Should exit the function and return to main, ending the program and closing the window. } else { GetBreakCmd(); //Go back and repeat the prompt } } //The next instance of the error. This function should test the string DayOfWeek, and output a //message based on the input. Doesn't have an error catcher for entries that are not days of the //week yet. Also likely has several other errors. Sorry, I know, but I was going to start debugging //when I hit this snag. void DayMessage (string DayOfWeek , string Name) { if (DayOfWeek == "Friday") { cout<<"\nIt's Friday? Yay! That means it's payday!\n\n"; } else if (DayOfWeek == "Saturday" || DayOfWeek == "Sunday") { cout<<"\nAh, that's right! Payday was just here!\n\n"; } else { cout<<"\nDarn! It's not Friday... I guess payday isn't here yet.\n\n"; } cout<<"Anyway, "<<Name<<", I'm feeling pretty tired. I'm going to go to sleep now.\n\nGoodbye, "<<Name<<"!\n"; } int main() { //Declares and initializes the two variables Name and DayOfWeek string Name=""; string DayOfWeek=""; //Executes that first simple function HelloThere(); //Prompt for the user's name cout<<"\nWhat is your name?\n\n> "; cin>>Name; //Response and prompt for the day of the week cout<<"\nHello, "<<Name<<"! What day is today?\n\n> "; cin>>DayOfWeek; //Should rund DayMessage based on the entry for DayOfWeek DayMessage(); //Should prompt and get the program close sequence GetBreakCmd(); //If program close sequence is executed, should finish out the main function and close the //program. PrgmEnd(); //Returns a 0. I know there's probably a different way of doing this, but this is how I've //learned so far. Any hints, tips, tricks, otherwise would be appreciated. I'm a greenhorn //programmer and willing to learn if you're willing to offer advice. return 0; } //Thanks again for the help. Again, replies can be posted here or emailed to me at //sdaniels1288@gmail.com.
Steve. | http://cboard.cprogramming.com/cplusplus-programming/138490-problem-code-need-advice.html | CC-MAIN-2015-40 | refinedweb | 593 | 76.11 |
Design Guidelines, Managed code and the .NET Framework
I httpBasic<GetProductsCompletedEventArgs>.
PingBack from
You've been kicked (a good thing) - Trackback from DotNetKicks.com
Good tutorial, I´d like to the see the hairy part (e.g. change data, and updating it on the server) too..would that be possible?
Interesting tutorial. I hope that the tools will be better in RTM as there is way too much Xml to type.
The Silverlight testing framework was recently released and shows some great potential for being a first
The ClientBin folder is not created for me when I create the solution (and thus, neither is the EndToEndSilverlightDemo.xap file, which leads to errors in the designer). Any idea what I'm doing wrong?
Adrian asks about showing the "harder" parts of a data centeric application such as updating the data. That is a great ask, I will look into it for my next pots.
Pawel points out there is lots of Xaml editing here.. I am doing that to show what is really happening.. you can do 100% of the Xaml editing I show here in Expression Blend in design time. And, yes VS is getting better and better the Xaml design surface.
David Fauber isn't seeing the ClientBin folder created in the server project. I think this could be becaus you have not done a full build of the solution yet.
Hi,
How can the data grid be populated without linq, i have data constructs that are constructed through business logic and not just from a simple select statement from the database. I've tried just creating a simple class with a single public string variable set with a value and then push an array of those to the ItemsSources property with no joy.
What is the proper way to pass arbitary data to this grid?
I have been having some fun with an end to end
Very good Tutorial Brad Abrams.
Very Interesting !
Martin asks about Databinding to .NET Objects rather than LINQ... Scott covers that in his blog post here:
One thing you need to be sure of is that you use public properites rather than fields...
Very informative post, and I'm glad to see that you covered the basic httpBinding issue with WCF, as that was a stumbling block for me when I tried to build a simple data-driven app in Silverlight for a demo recently. I'm giving a similar demo at a user group meeting again in a couple of months and will look to incorporate use of IsolatedStorage for 'caching' as you've done here. I found your previous posts to this very helpful in putting my own demos together, thanks!
Wow, great tutorial. I just wrote one very similair to this and yours puts mine to shame! I especially like the part which covers database access.
Like Adrian, I'd also like to see the harder parts:
- edit a row
- implement a little business logic
- data validation (using the business logic)
- relation to other tables (master-detail view)
I know CSLA.NET should be a great fit for this, but what if I can't use that, and am 'stuck' with only the MS provided bits.
Can I use the Validation Block of EntLib?
Can I use WF to hold UI workflow?
Great start, many more posts to come?
I've been OOF on vacation (Hawaii, where it's warm!). But I'm back now and here's a few new sample
In this example I will demonstrate a very simple call to the FlickR REST APIs from a Silverlight client
Post: Approved at: Apr-30-2008 SL2 End-To-End Data Centric App example Brad Abrams posted a tutorial
Abstract This series of articles takes a look at the current state of RIA development leveraging Silverlight
Security.
The end to end application is what I am targeting for now, but I see a big security hole for this model.
Since we have to expose all discrete functionalities in web services, which means anybody can call those web services, as long as your silverlight application can.
There's no easy way to restrict those web services to be called only by your silverlight application, or you have build security inside the business layer. This is a big change to normal website applications. Most people build security into UI instead. i.e. Security drives UI, and you can only rich certain UI based on your credential, and then you can only performance certain functionalities in that UI. If we switch to Web Service approach, we will have to define security for every webservice call. Which is a huge amount of work, consider you could have so many web services.
Any idea?
There is an error on the text give for the GetProducts method. The correct body of the method is:
NorthWindDataContext db = new NorthWindDataContext();
var q = from p in db.Products
where p.ProductName.StartsWith(productNamePrefix)
select p;
return q.ToList();
version of Silverlight tools, runtime, and sdk please... is this Beta1?
>> version of Silverlight tools, runtime, and sdk please... is this Beta1?
Jr - this is all beta1.. hopefully I or someone else will port to beta2
hello, this looks like a good tutorial but i have silverlight 2.0 beta and i downloaded the your sample code...but it gives me an error and it wouldn't run.
and i forgot to add...that the Error message was
The type or namespace name 'ApplicationSettings' could not be found (are you missing a using directive or an assembly reference?) C:\Documents and Settings\zz-dbchau\My Documents\Visual Studio 2008\Projects\EndToEndSilverlightDemo\EndToEndSilverlightDemo\Page.xaml.cs 18 | http://blogs.msdn.com/brada/archive/2008/04/17/end-to-end-data-centric-application-with-silverlight-2.aspx | crawl-002 | refinedweb | 946 | 65.01 |
#include <stdio.h>
#include <rte_debug.h>
Go to the source code of this file.
Here defines rte_tailq APIs for only internal use
Definition in file rte_tailq.h.
Return the first tailq entry cast to the right struct.
Definition at line 57 of file rte_tailq.h.
Utility macro to make looking up a tailqueue for a particular struct easier.
Definition at line 76 of file rte_tailq.h.
dummy
Dump tail queues to a file. EAL_REGISTER_TAILQ macro which is used to register tailq from the different dpdk libraries. Since this macro is a constructor, the function has no access to dpdk shared memory, so the registered tailq can not be used before call to rte_eal_init() which calls rte_eal_tailqs_init(). | https://doc.dpdk.org/api-22.07/rte__tailq_8h.html | CC-MAIN-2022-40 | refinedweb | 116 | 77.13 |
Feature #7349open
Struct#inspect needs more meaningful output
Description
When inheriting directly from Struct.new, Class#ancestors shows a meaningless anonymous Class:
class Point < Struct.new(:x, :y) def distance ((x ** 2) + (y ** 2)) ** 0.5 end end Point.ancestors # => [Point, #<Class:0x007fe204a1a228>, Struct, Enumerable, Object, Kernel, BasicObject]
Perhaps, the anonymous Class could list the Struct's fields?
#<Class:x, y>
Updated by shyouhei (Shyouhei Urabe) almost 9 years ago
Sounds nice to me. +1
Updated by claytrump (Clay Trump) almost 9 years ago
I like it too. Could even be:
Point.ancestors # => [Point, Struct.new(:x, :y), Struct, Enumerable, Object, Kernel,
BasicObject]
--
Updated by Eregon (Benoit Daloze) almost 9 years ago
It might be worth pointing out that this should not happen if the Struct generated class is assigned to a constant (and so one level of inheritance is not unused):
Point = Struct.new(:x, :y) do
def distance
Math.hypot(x,y)
end
end
Updated by mame (Yusuke Endoh) almost 9 years ago
- Tracker changed from Bug to Feature
Updated by mame (Yusuke Endoh) almost 9 years ago
- Status changed from Open to Assigned
- Assignee set to matz (Yukihiro Matsumoto)
Updated by mame (Yusuke Endoh) almost 9 years ago
This is not a bug. This ticket has been moved to the feature tracker.
I'm not against this proposal, but I don't think that people normally check the singleton class to know Struct fields. You may want to use Struct.#members:
p Point.members #=> [:x, :y]
It is shorter than ancestors.
--
Yusuke Endoh mame@tsg.ne.jp
Updated by naruse (Yui NARUSE) almost 4 years ago
- Target version deleted (
2.6)
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/7349 | CC-MAIN-2021-43 | refinedweb | 279 | 67.04 |
I recently blogged about one of the sessions I’m presenting at ESC East on application development, and showed the ~30 lines of managed code needed to pull an RSS feed from Frameit.live.com, parse the XML, and then pull and display images from the feed.
I got a couple of e-mails asking for the code needed to use the native code Win32 imaging APIs to load and display an image – here’s the code.
#define INITGUID
#include <initguid.h>
#include <imaging.h>
IImagingFactory *pImgFactory = NULL;
IImage *pImage = NULL;
// Initialize COM and Imaging API
HRESULT hr;
hr=CoInitializeEx(NULL, COINIT_MULTITHREADED);
hr=CoCreateInstance (CLSID_ImagingFactory,
NULL,
CLSCTX_INPROC_SERVER,
IID_IImagingFactory,
(void **)&pImgFactory);
// Load an image from the file system
hr=pImgFactory->CreateImageFromFile(strFileName, &pImage);
// Draw the image to fit the size of the client area
// This is from the WM_PAINT handler in wndproc
case WM_PAINT:
RECT rcClient;
hdc = BeginPaint(hWnd, &ps);
// Get Client Area Dimensions
GetClientRect(hWnd,&rcClient);
int iWidth=rcClient.right;
int iHeight=rcClient.bottom;
IImage *pThumbImage = NULL;
// Scale the image to fit the client area
pImage->GetThumbnail(iWidth,iHeight,&pThumbImage);
// Draw the thumbnail.
pThumbImage->Draw(hdc,&rcClient,NULL);
EndPaint(hWnd, &ps);
break;
Note that this isn’t the full program, just the snippets needed to get this to work.
Once ESC East is done I will post the rest of the code.
- Mike | https://blogs.msdn.microsoft.com/mikehall/2008/10/23/ce6-using-imaging-apis-to-loaddisplay-images/ | CC-MAIN-2017-43 | refinedweb | 223 | 51.89 |
Halloween Project: The Human Theremin
As we have mentioned before, obnoxious, but also to have a costume that is fun for others to interact with. The project uses two infrared distance sensors, as well as PWM (Pulse Width Modulation) and a piezoelectric buzzer to create a sound, and brings together quite a few concepts to make the whole thing work.
Click any photo to enlarge:
Background.
The idea behind this costume is to recreate the function of a theremin and build it into a Halloween costume, giving someone that approaches you while trick-or-treating the ability to wave their hands and make music with your costume like a theremin.
The Electronics
For our theremin, we need two distance sensors: one to control the volume of the note being played, and the other to change the pitch. We decided to build two simple IR reflectivity sensors which measure how much infrared light is reflected from a nearby target. When a target (like the player's hands) is brought closer to the IR LED and phototransistor pair, more light is reflected back into the sensor.
As shown in the photos above, the IR LED and phototransistor are placed in close proximity and are pointed parallel to each other. The circuit we are using for each sensor looks like this:
The IR LED is just like any other LED, in that a current-limiting resistor is useful to control the device current and therefore the light intensity. With a 100 ohm resistor and the approximately 1.5V forward voltage drop of the IR LED, we'll have a LED current of about 35mA. That's fairly high, but more light emitted will yield more light coming back to our sensor.
The phototransistor is a device with an exposed silicon junction. When light passes through the plastic casing, most non-IR wavelengths are simply absorbed by the plastic, but the infrared light makes it to the sensor within. Each photon striking the silicon junction causes an electron to flow. Because this is a phototransistor and not a photodiode, this current is multiplied by the transistor's current gain, so each photon may actually cause perhaps 10 to 100 electrons to flow. This current has nowhere to go except through the 10K resistor, and as the current passes through the resistor, the voltage across the resistor rises (V=IR). This change in voltage is read by our microcontroller's Analog to Digital Converter (ADC).
The two resistor values are both "knobs" that can be adjusted to get the desired level of voltage as a hand is moved near the sensor. Increasing the 10K value will increase the output voltage proportionally, which is fine as long as the voltage doesn't saturate (as it can't go above +5V). Additionally, the presence of any background IR light (such as from natural light sources) will be amplified by increasing the 10K resistor value. The 100 ohm resistance can be made smaller to allow more current to flow through the LED, but this causes the battery to discharge faster.
Parts List and Assembly
In addition to one USB NerdKit, the following extra parts are needed to complete this project:
To assemble a sensor, complete the schematic shown above on your solderless breadboard. Test the sensor, and using a multimeter, read the voltage across the 10K resistor as you move your hand up and down. If you would like to make your sensor more permanent, you can solder it together as we have done, placing the IR LED, phototransistor, and 100 ohm resistor together on a 3-wire cable. The 10K resistor is then left to stay on the breadboard, which allows the receiver gain to be adjusted easily.
Please be aware that the physical orientation of the LED and phototransistor are very important. The IR LED has a narrow transmission angle, and the phototransistor has a narrow receiving angle. Make sure the two are pointed roughly parallel to each other, or perhaps tilted very slightly inwards. We've also used a small piece of electrical tape placed directly between the LED and phototransistor to prevent light from traveling directly from one to the other, because light that travels directly only contributes to saturating the phototransistor measurement, and isn't useful in sensing a target. You should experiment with the sensor as a simple analog circuit, and even try modifying the various resistor values to see how the behavior of the sensor changes.
For the IR LED and phototransistor we are demonstrating, also notice that the orientation of the LED package in particular is reversed in comparison to normal T1-3/4 LEDs. For a "normal" LED, the anode is the longer lead, and the cathode has the shorter lead. However, the manufacturer of this part decided to build this LED in the opposite direction. The phototransistor is similarly reversed. If confused, consult the datasheets or the labeled images above.
This problem is especially troublesome since infrared light is invisible to the naked eye, and therefore we can't tell whether our LED is lit or not. There are two solutions here: first, using a multimeter, we can measure the voltage across the 100 ohm resistor to verify that roughly 35mA is flowing through the LED. (Current will not flow when the LED is reversed, so you would measure approximately zero voltage drop across the 100 ohm resistor in that case.) The other approach is to use a camera, such as a webcam or cell phone camera, to take a look at the infrared LED. While infrared light is invisible to the human eye, it will show up easily on most camera equipment.
The Code
Reading the sensor value is as easy as reading the value from the ADC. This is covered extensively in other tutorials, as well as in the NerdKits Guide where the temperature sensor is the first project. As usual when reading ADC values, we average over many samples to minimize jitter from noise as well as from the hands. One thing that is different about this project is we are reading from two different sensors instead of just one. To read from different sensors we use the ADMUX register to select which channel we want to read from. The ATmega168 chip we are using has 6 ADC channels you can switch between. The function adc_read(...) takes care of setting the ADMUX value and reading the ADC value.
In order to get a smooth and creepy sound coming out of the piezoelectric buzzer, we did have to do a few interesting things in the code. We go over how to create musical notes with the buzzer using PWM and square waves in our Making Music with a Microcontroller video, but in order to get a smoother sound we did a few things differently. The biggest difference is we output triangle waves at different frequencies instead of square waves. The code uses a timer running at full speed and an 8 bit PWM output like we have done before.
void pwm_timer_init(){ //// begin math //// // clk = 14745600Hz (ticks per second) // with TOP value at = 0xFF // 14745600 / 256 = 57600Hz //// end math //// //using TIMER1 16-bit //set OC1A for clear on compare match, fast PWM mode, TOP value in ICR1, no prescaler //compare match is done against value in OCR1A TCCR1A |= (1<<COM1A1) | (1<<WGM11); TCCR1B |= (1<<WGM13) | (1<<WGM12) | (1<<CS10); OCR1A = 0; ICR1 = 0xFF; //enable interrupt on overflow. to set the next value TIMSK1 |= (1<<TOIE1); // set PB1 as PWM output DDRB |= (1<<PB1); }
Notice we turn on the timer overflow interrupt for the timer. In the interrupt handler, we increase the PWM duty cycle for the next go around. Once the duty cycle reaches 255, it starts going back down again. Looking at the smoothed out version of this signal over time you can see how it creates a triangle wave. The output of the PWM is fed directly to the piezoelectric buzzer.
In this setup, the step size is what dictates the frequency of the note. If we increase the duty cycle by two instead of one we will climb to 255 twice as fast, therefore doubling the frequency. In the main loop, we alter the value of the step size proportionally to the number the position sensor is reporting for the hands.
There is one extra trick we had to employ in order to get the smooth pitch change. Notice above that I mentioned increasing the step size from 1 to 2 would double the frequency, which is quite a noticeable jump in frequency. In order to get a smooth pitch change, we want to be able to vary the step size by fractional amounts. To achieve this, we use something called fixed point math. Basically, we just used a 16-bit integer to represent our step size, but only use the most significant 8 bits when it comes time to increase the PWM duty cycle. This allows us to treat the bottom 8 bits of the step size a fraction of the step size. It can make the code a bit confusing to read to the untrained eye.
We employ a similar strategy to vary the volume of the current note. The volume of a note is the same as the amplitude of the note, so to decrease the volume of a wave all you have to do is scale down all its values by a constant factor. To adjust the volume in our system, we turn the value from the hand position sensor into a volume value between 0 to 255. The PWM output we would have set from the triangle wave generation code gets cast into a 16-bit integer and multiplied by the volume number. A volume level of full will result in a (nearly) unscaled PWM value, and any number less than 255 will scale the PWM values down.
Here is a snippet of code from the interrupt handler that does the PWM step increase, and the volume control. Notice how we only use the top 8 bits of the step variable, and take a close look at how we scale the PWM to achieve a volume effect.
uint8_t next_val(){ // compute next phase, doing rollovers properly // [... omitted here -- see source code file ...] // compute next volume-weighted sample uint8_t raw = (uint8_t)(state>>8); // unweighted uint16_t rawweighted = raw * vol; return (uint8_t)(rawweighted>>8); }
These left-shift-by-8-bit operations are just our way of telling the C compiler to look at the top 8 bits of each computed value.
Source Code
A lot can be learned from going over the source code, so go ahead and download it and give it a read.
More Videos and Projects!
Take a look at more videos and microcontroller projects! | http://www.nerdkits.com/videos/theremin_with_ir_distance_sensor/ | CC-MAIN-2017-43 | refinedweb | 1,795 | 66.67 |
To process images such as photos, we need to be able to load photo files into an image object, read and set pixels values of the image, and save the image back into a photo file.
We use the class java.awt.image.BufferedImage to store image data. You can create such an object by saying:
import java.awt.image.BufferedImage val img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB)
Or you can load an image from a file using:
import javax.imageio.ImageIO val photo1 = ImageIO.read(new File("photo.jpg"))
The result of this call is a again a BufferedImage object. You can find the width and height of this image using the following methods:
printf("Photo size is %d x %d\n, photo1.getWidth, photo1.getHeight)
You can save an image object into a file using:
ImageIO.write(photo2, "jpg", new File("test.jpg"))or, if you prefer PNG format:
ImageIO.write(photo2, "png", new File("test.png"))
The pixels of a java.awt.image.BufferedImage object can be read and changed using the getRGB and setRGB methods, see below.
The following small script reads a file photo.jpg, get its dimensions, creates a new empty image of the same size, copy the pixels from the old image img to the new image out (mirroring horizontally), draws a red line diagonally across the photo, and finally saves the new image in a file test.jpg (image.scala):
import java.io.File import javax.imageio.ImageIO import java.awt.image.BufferedImage def phototest(img: BufferedImage): BufferedImage = { // obtain width and height of image val w = img.getWidth val h = img.getHeight // create new image of the same size val out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB) // copy pixels (mirror horizontally) for (x <- 0 until w) for (y <- 0 until h) out.setRGB(x, y, img.getRGB(w - x - 1, y) & 0xffffff) // draw red diagonal line for (x <- 0 until (h min w)) out.setRGB(x, x, 0xff0000) out } def test() { // read original image, and obtain width and height val photo1 = ImageIO.read(new File("photo.jpg")) val photo2 = phototest(photo1) // save image to file "test.jpg" ImageIO.write(photo2, "jpg", new File("test.jpg")) } test()
The method getRGB(x: Int, y: Int): Int returns the color of the pixel at position (x, y), the method setRGB(x: Int, y: Int, color: Int) sets the color of this pixel.
Colors are represented as Int objects. The three components red, green, and blue are "packed" together into one integer. Each component has 8 bits, and therefore can have a value between 0 and 255. The packed color is a 32-bit integer, whose bits look like this:
tttt tttt rrrr rrrr gggg gggg bbbb bbbbThe top 8 bits are either zero, or represent the "transparency" of the pixel. We will not use these transparency bits. The next 8 bits represent red, the next 8 bits represent green, and the last 8 bits represent blue. This is why this representation is called "RGB".
Given red, green, and blue components with values in the range 0 to 255, we can pack them together like this:
val color = (red * 65536) + (green * 256) + blueGiven a packed integer color, we can extract the three components as follows:
val red = (color & 0xff0000) / 65536 val green = (color & 0xff00) / 256 val blue = (color & 0xff)(The & operator here is the bitwise-and operator. It makes sure that only the bits we are interested in are used.) | http://otfried.org/scala/image.html | CC-MAIN-2018-05 | refinedweb | 575 | 66.64 |
So, there's a particular corner of the coding standard for Twisted currently which is the root of a few annoying problems and has been the cause of one or two bugs. It's the module import requirement. Originally it seemed like a pretty good idea for the following reasons; if one is using packages, it is easiest to figure out where the code is coming from if you name the module explicitly. Documentation generators document modules individually, and so that's how people would learn to import modules. Accessing the module indirectly, and not the class, allows for reloading to work naturally and different modules to have similarly named classes when appropriate. In pracitce, it doesn't work out quite so well. It did not occur to me at the time how hard it would be to distinctly name all modules within classes, or how often a local variable name would clash with a straightforwardly chosen package name. I am considering a change to the coding standard (and the attendant massive refactoring) to a standard where modules "promote" public classes and functions to the module level. For example, in twisted/words/service.py: # promote public interface from twisted import python import twisted.words twisted.python.publicInterface(twisted.words, Service, WordsClientInterface, Participant, ...) # end The end user would probably then have to do this in order to use that module: import twisted.words.service from twisted import words words.Service(...) Pros: * fewer names to worry about clashing with * nested modules less inconvenient * only public portions of interface present at package level Cons: * it doesn't work that way now and it would require work to change * you still have to know which module to import * it requires manual declaration of public interface The number of conflicts both between modules and convenient variable names is increasing with time, and I think that something has to be done about it, but I don't know if this is an appropriate solution. At any rate, such a change will likely affect most Twisted developers, so I'd like to hear feedback before I do anything. -- ______ you are in a maze of twisted little applications, all | |_\ remarkably consistent. | | -- glyph lefkowitz, glyph @ twisted matrix . com |_____| | http://twistedmatrix.com/pipermail/twisted-python/2001-December/000631.html | CC-MAIN-2015-18 | refinedweb | 372 | 50.36 |
Hi
On 7th February 2005 I was notified of a number of potentially -
compromised Full-Disclosure subscriber accounts. Following an
investigation it appears that the Mailman configuration database was
obtained from lists.netsys.com on 2nd January 2005 using a remote
directory traversal exploit for a previously unpublished
vulnerability in Mailman 2.1.5.
Subscriber addresses and passwords have been compromised. All list
members are advised to change their password immediately. There do
not appear to be further signs of intrusion although investigations
continue.
The vulnerability lies in the Mailman/Cgi/private.py file:
def true_path(path):
"Ensure that the path is safe by removing .."
path = path.replace('../', '')
path = path.replace('./', '')
return path[1:]
A crafted URL fragment of the form ".../....///" will pass through the
above function and return as "../", thus allowing directory traversal
to occur using the following URL syntax to retrieve an arbitrary path.
/mailman/private/≤list>/<path>?username=<username>&password=<password>
Expect vendor advisories nearer the end of the week, for now here is a
suggested fix from Barry Warsaw:
SLASH = '/'
def true_path(path):
"Ensure that the path is safe by removing .."
parts = [x for x in path.split(SLASH) if x not in ('.', '..')]
return SLASH.join(parts)[1:]
This issue only affects Mailman installations running on web servers
that don't strip extraneous slashes from URLs, such as Apache 1.3.x.
The Common Vulnerabilities and Exposures project (cve.mitre.org) has
assigned the name CAN-2005-0202 to this mailman issue.
Cheers
- John
_______________________________________________
Full-Disclosure - We believe in it.
Charter: | http://article.gmane.org/gmane.comp.security.full-disclosure/30721 | crawl-002 | refinedweb | 256 | 50.43 |
Edwin W.
Van de Grift, IBM TPF Services
The functionality that has become available
on TPF during the past few years has opened up a whole new spectrum
of programming options. One of the interesting aspects hereof is
that of passing information between processes, which in POSIX
terminology is called Interprocess Communications (IPC). This is
the first in a series of articles covering these mechanisms. What
will be covered is not so much all theoretical backgrounds, but
practical coding samples. The coding samples are in C++ to
additionally show how the IPC mechanisms can be implemented
effectively without disturbing application logic. Note that the
sample code shown is far from complete: only relevant statements
are shown and error handling (to name one aspect) has been
completely ignored.
Unnamed pipes
One of the oldest mechanisms for passing
information between processes is the pipe. A pipe---or more
correctly, an unnamed pipe---provides an I/O channel through which
a process can communicate with another process (or with itself).
The processes between which the communication is to take place need
to be related (for example parent-child, parent-grandchild, or any
two processes that share some common ancestor). A pipe provides a
one-way channel of communication (in other words, a pipe is
unidirectional), so in order to pass information back and forth
between processes, two pipes are needed. Pipe support on TPF was
introduced through APAR PJ26188 on PUT 10.
#include <unistd.h>
int pipe(int fd[2]);
To create a pipe, the pipe function needs to
be called. Its argument is an array of two integers in which pipe
returns two file descriptors: one for the read end of the pipe
(stored in fd[0]), and one for the write end of the pipe (stored in
fd[1]). When a process creates a child process (for example, using
the tpf_cresc or tpf_fork function), all open file descriptors1
of the parent process are inherited by the child
process. A parent process can thus pass information to its child by
writing to the write end of the pipe. The child process can then
receive the information by reading it from the read end of the
pipe.
The following sample shows the parent code.
Because the child code will reside in a separate program, we need
to make sure the child process knows what inherited file descriptor
to read from. This is established by closing file descriptor zero
(that is mapped to the standard input stream) immediately before
creating the pipe. The file descriptors mapped to the read and
write ends of the pipe will be the lowest file descriptors that are
currently not in use. In other words, by closing file descriptor
zero before creating the pipe, the read end of the pipe will be
mapped to file descriptor zero. If you are in a situation where you
cannot get away with simply closing file descriptor zero, you can
still close it after duplicating it to another file descriptor by
using the dup or the dup2 function. Directly after creating the
child process, the parent process can close the read end of the
pipe. The parent then associates a stream with the write end of the
pipe to be able to conveniently use a stream function to pass its
information to the child. It could, of course, also be done with
the (lower-level) write function, but this would need us to
explicitly take care of the data to be written (and its
length).
#define _POSIX_SOURCE
#include <stdio.h>
int main(int, char**) {
fclose(stdin);
int fd[2];
pipe(fd);
tpf_fork(...);
close(fd[0]);
FILE* toChild =
fdopen(fd[1],"w");
fprintf(toChild,"How's life,
kiddo?");
}
The next piece of code shows the child and the way it receives
the information from its parent. The important thing to note is
that the child process is not aware of the mechanism used to
establish the communication path to it; it does not know about
pipes at all!
char* message[80];
scanf("%s", message);
FIFOs
For two independent processes to communicate with each other,
named pipes can be used. Named pipes are typically referred to as
FIFOs, or FIFO special files, because data is passed in a
first-in-first-out format. FIFOs have been introduced to TPF by
APAR PJ27214 on PUT 13.
#include <sys/types.h>
#include <sys/stat.h>
int mkfifo(const char* path, mode_t
mode);
A FIFO is created by calling the mkfifo
function. This function creates a new file (called path). The file
permission bits in the mode parameter are modified by the file
creation mask of the process and then used to set the file
permission bits of the pipe being created. The file mode creation
mask of a process can be queried and modified using the umask
function. In the case where the FIFO already exists, the mkfifo
function returns -1 and errno contains the value EEXIST.
After creating a FIFO, the FIFO can be opened
for sending (writing), or retrieving (reading) information. As with
pipes, writing to a FIFO appends the data, and reading from it
returns the data that is at the beginning of the FIFO. One thing to
be aware of is that the open of a FIFO is by default blocking; that
is, a process opening a FIFO for reading is suspended until another
process has opened the FIFO for writing.
The sample code writing into the FIFO starts
with creating the FIFO, allowing every process in the system both
read and write access to the FIFO. In this case the sample code
uses the low-level write and read functions referring to file
descriptors.
#include <fcntl.h>
#include <string.h>
mkfifo("afile",S_IRWXU|S_IRWXG|S_IRWXO);
int w =
open("afile",O_WRONLY|O_NONBLOCK);
char out[10] = "fifo test";
write(w,out,strlen(out)+1);
close(w);
The program reading from the FIFO opens the
FIFO for reading, reads the data, and finishes by deleting the FIFO
(using the unlink function), which is optional. You may decide to
leave FIFOs in existence after creating them.
int r =
open("afile",O_RDONLY|O_NONBLOCK); char in[10];
read(r,in,sizeof(in));
close(r);
unlink("afile");
To verify whether an existing file is a FIFO,
the S_ISFIFO function is available. S_ISFIFO queries the mode
member of a file's stat structure, which is retrieved through the
stat (or fstat, or lstat) function.
struct stat st
stat("afile", &st);
if(S_ISFIFO(st.st_mode)) {
// "afile" is a fifo
Is it a pipe?
One of the powerful concepts of C++ is that
it allows the developer to completely hide implementation details
from application logic, commonly called encapsulation.
Consider the following two pieces of sample code. The first process
streams data into some object of type Sample.
#include "Sample.h"
Sample s("some_name");
s << 2 << "Some
text.";
The second process receives data from an
object of type Sample.
#include <string>
int i;
string text;
s >> i >> text;
Of course, Sample may use many different
mechanisms of transporting the data from the first process to the
second (unrelated) process, but one of the ways to implement Sample
is to use FIFOs. A part of the Sample implementation is shown in
the following code fragments, starting with Sample's header file
(Sample.h). The Sample class shown is capable of transporting
integers and strings. Sample may support transportation of any type
of data. To add more types, simply declare and implement additional
input and output stream operators.
#ifndef SAMPLE_H
#define SAMPLE_H
class Sample {
public:
Sample(const char* path);
~Sample();
friend Sample&
operator<<(Sample& s,int val);
friend Sample&
operator<<(Sample& s,string& val);
friend Sample&
operator<<(Sample& s,const char* val);
friend Sample&
operator>>(Sample& s,int& val);
friend Sample&
operator>>(Sample& s,string& val);
private:
Sample(const Sample& s);
int _rfd;
int _wfd;
};
#endif
A noteworthy aspect of the sample class
declaration is the way that the operator>> and
operator<< functions are declared (and implemented in code
shown below). Overloaded input and output operators are never
declared as member functions. Declaring them as member functions
would require them to be called in the manner shown in the
following figure, which is counter intuitive, and would cause a lot
of confusion.
Sample s;
2 >> s;
The reason the input and output operators are
declared as friends of class Sample is to allow them access to
nonpublic members of class Sample: to the integers _rfd and _wfd.
Finally, Sample's copy constructor is declared as a private
function to prevent the copying of Sample objects.
The final piece of code shows the first part
of the implementation of class Sample (Sample.cpp). A FIFO is
opened in the Sample constructor, and created if not existing yet.
The Sample destructor closes the FIFO. The FIFO could also have
been deleted in the destructor, but it was chosen not to do this.
An implementation for Sample's copy constructor has not been coded
because it was declared private, and thus cannot be called by any
user of the Sample class.
#include <errno.h>
Sample::Sample(const char* path)
{
if(mkfifo(path) == -1 && errno
!= EEXIST) {
// Error...
_rfd =
open(path,O_RDONLY|O_NONBLOCK);
_wfd =
open(path,O_WRONLY|O_NONBLOCK);
Sample::~Sample() {
close(_rfd);
close(_wfd);
The second part of Sample.cpp, shown below,
contains the implementation of Sample's input and output operators.
In the input and output operators of class Sample for the character
string types, I chose to implement these by prefixing every string
by its length in order to improve performance. Not doing this would
force the recipient to read a character at a time when receiving
data because every character might be the final one.
Sample&
operator<<(Sample& s, int val) {
write(s._wfd,(char*)&val,sizeof
val);
return s;
Sample&
operator<<(Sample& s, string& val) {
int length = val.size()+1;
write(s._wfd,(char*)&length,sizeof
length);
write(s._wfd,val.c_str(),length);
Sample&
operator<<(Sample& s, const char* val) {
string str(val);
int length = str.size()+1;
write(s._wfd,str.c_str(),length);
Sample&
operator>>(Sample& s, int& val) {
read(s._rfd,(char*)&val,sizeof
val);
Sample&
operator>>(Sample& s, string& val) {
int length;
read(s._rfd,&length,sizeof
length);
char* buffer = new
char[length];
char* pos = buffer;
int soFar(0);
do {
int i =
read(s._rfd,pos,length-soFar);
soFar += i;
pos += i;
} while(soFar < length);
val = buffer;
delete [] buffer;
As I have hopefully been able to show, pipes
(named and unnamed) make life in Interprocess Communications a lot
easier. Applied in, for example, system or application maintenance
processing, where processing is distributed across many processes
but still needs to be coordinated, the use of pipes has proven to
be of great value.
Edwin W. Van de Grift is a consultant in
IBM's TPF Services group with 15 years TPF experience. His primary
focus areas include C and C++, TPFDF, TPFCS, application
development tools, and application performance optimization. Edwin
is also the instructor for IBM's "TPF4.1 C/C++ Architecture and
Internals" and "TPF C++ Application Development"
classes. | http://www-01.ibm.com/software/htp/tpf/news/nv9n1/v9n1a06.htm | crawl-003 | refinedweb | 1,836 | 52.29 |
On a streaming server or cluster, all projects run in a workspace. There can be one workspace for the whole cluster – in fact there is one created by default (called “default”) – or there can be many workspaces defined on the cluster. But regardless of whether there is one or many, all projects must run in a workspace.
Now at first glance, it may not be obvious what the purpose of a workspace is, and I do get questions about it from time to time, so I thought I’d talk about the raison d’etre of workspaces on the streaming cluster…
In the picture below, I have two workspaces, one called “default” and the other called “workspace1”. And you can see that there are two instances of the project “dmm165_final” running – one instance in each workspace.
Workspaces exist primarily for two reasons:
1. They define a namespace. The fully qualified URI of a stream on a streaming cluster is: cluster.workspace.project.stream. Any publisher or subscriber connecting to a stream uses that URI to connect. The publisher/subscriber doesn’t have to know where the project/stream is physically running on the cluster, but it does need to know which workspace. This protected namespace enables a couple of things:
- the ability to run multiple instances of the same project. Within a workspace, every project must have a unique name. But you can run multiple instances of the same project as long as you run each instance in a different workspace
- the ability to assign workspaces to users are groups. Then, each user/group doesn’t have to worry about duplicating project names of other users/groups – they can do whatever they want in their workspace(s) without knowing what’s going on in the other workspaces, eliminating any need for coordination
2. Access control. Authorization privileges can be defined at the workspace level. So you can control which users or roles can access which workspaces. Again, this works well when you have different users or groups using the same streaming cluster. You can limit a user or group to specific workspaces so that they can’t start/stop projects in other workspaces. You could even have certain workspaces that contain “master projects” that are always running, where only a system administrator has the ability to start/stop projects, but then have user workspaces where users have the ability to create and run projects on an ad-hoc basis that subscribe to the output of the “master projects” to apply further processing logic
Here’s how “workspace” is defined in the Smart data streaming Configuration and Admin guide, section on “Managing your streaming cluster“:
A named scope (similar to a directory) to which you deploy projects and their supporting files, adapters, and data services. Cluster workspaces give you the option to manage the permissions of related objects together. Every cluster has at least one workspace, and any workspace can contain projects, supporting files, adapters, and data services from multiple nodes. | https://blogs.sap.com/2014/12/17/the-purpose-of-workspaces-on-a-streaming-server/ | CC-MAIN-2018-05 | refinedweb | 500 | 65.05 |
Did you ever think about the most efficient method to perform integer exponentiation, that is, raising an integer a to an integer power b, when either a or b, or both, are rather large?
Repeated multiplication
The naive method is, of course, repeated multiplications.
is a multiplied by itself b times. Here's how it's coded in my pseudo-code of choice, Python:
def expt_mul(a, b): r = 1 for i in xrange(b): r *= a return r
Is this efficient? Not really, as we require b multiplications, and as I said earlier b can be very large (think number theory algorithms). In fact, there's a much more efficient method.
Exponentiation by squaring
The efficient exponentiation algorithm is based on the simple observation that for an even b,
. This may not look very brilliant, but now consider the following recursive definition:
The case of odd b is trivial, as it's obvious that
. So now we can compute
by doing only log(b) squarings and no more than log(b) multiplications, instead of b multiplications - and this is a vast improvement for a large b.
This algorithm can be coded in a straightforward way:
def expt_rec(a, b): if b == 0: return 1 elif b % 2 == 1: return a * expt_rec(a, b - 1) else: p = expt_rec(a, b / 2) return p * p
Indeed, this algorithm is about 10 times faster than the naive one for exponents in the order of a few thousands. When the exponent is about 100K, it is more than 100 times faster, and the difference keeps growing for larger exponents.
An iterative implementation
It will be useful to develop an iterative implementation for the fast exponentiation algorithm. For this purpose, however, we need to dive into some mathematics.
We can represent the exponent b as:
Where
are the bits (0 or 1) of b in base 2.
is then:
Or, in other words:
for k such that
can be computed by repetitive squaring, and moreover, we can reuse the result from a lower k to compute a higher k. This directly translates into the following iterative algorithm:
def expt_bin_rl(a, b): r = 1 while 1: if b % 2 == 1: r *= a b /= 2 if b == 0: break a *= a return r
To understand how the algorithm works, try to relate it to the formula from above. Using a standard "divide by two and look at the LSB" loop, the exponent b is broken into its binary representation. The lowest bits of b are considered first. a is continually squared to hold
, and is multiplied into the result only when
.
This algorithm is called right-to-left binary exponentiation, because the binary representation of the exponent is computed from right to left (from the LSB to the MSB) [1].
A related algorithm can be developed if we prefer to look at the binary representation of the exponent from left to right.
Left-to-right binary exponentiation
Going over the bits of b from MSB to LSB, we get:
def expt_bin_lr(a, b): r = 1 for bit in reversed(_bits_of_n(b)): r *= r if bit == 1: r *= a return r
Where _bits_of_n is a method returning the binary representation of its argument as an array of bits from LSB to MSB (which is then reversed, as you see):
def _bits_of_n(n): """ Return the list of the bits in the binary representation of n, from LSB to MSB """ bits = [] while n: bits.append(n % 2) n /= 2 return bits
Rationale: consider how you "build" a number from its binary representation when seen from MSB to LSB. You begin with 1 for the MSB (which is always 1, by definition, for numbers > 0). For each new bit you see you double the result, and if the bit is 1, you add 1 [2].
For example consider the binary 1101. Begin with 1 for the leftmost 1. We have another bit, so we double. That's 2. Now, the new bit is 1, so we add 1, that's 3. We have another bit, so again double, that's 6. The new bit is 0, so nothing is added. And we have one more bit, so once again double, getting 12, and finally adding 1, getting 13. Indeed, 1101 is the binary representation of 13.
Back to the exponentiation now. As you see in the code of expt_bin_lr, the binary representation of the exponent is read from MSB to LSB. Since this is the exponent, each "doubling" from the rationale above is squaring, and each "adding 1" is multiplying by the number itself. Hence, the algorithm works.
Performance
As I've mentioned, the squaring method of exponentiation is far more efficient than the naive method of repeated multiplication. In the tests I ran, the iterative left-to-right method is about the same speed as the recursive one, while the iterative right-to-left method is somewhat slower. In fact, both the recursive and the iterative left-to-right methods are so efficient they're completely on par with Python's built-in pow method [3].
This is surprising, as I'd actually expect the right-to-left method to be faster, because it skips the reversing of bits when computing the binary representation of the exponent. I'd also expect the built-in pow to be faster.
However, thinking harder for a moment, I think I can see why this happens. The RL (right-to-left) version has to multiply larger numbers at all stages, because LR sometimes multiplies by
a itself, which is relatively small. Python's bignum implementation can multiply by a small number faster, and this compensates for the need to reverse the bit list. I'll come back to this issue when I'll discuss modular exponentiation. But this is a topic for another article...
| http://eli.thegreenplace.net/2009/03/21/efficient-integer-exponentiation-algorithms/ | CC-MAIN-2017-09 | refinedweb | 971 | 59.94 |
import "github.com/hashicorp/terraform/configs/configschema"
Package configschema contains types for describing the expected structure of a configuration block whose shape is not known until runtime.
For example, this is used to describe the expected contents of a resource configuration block, which is defined by the corresponding provider plugin and thus not compiled into Terraform core.
A configschema primarily describes the shape of configuration, but it is also suitable for use with other structures derived from the configuration, such as the cached state of a resource or a resource diff.
This package should not be confused with the package helper/schema, which is the higher-level helper library used to implement providers themselves.
coerce_value.go decoder_spec.go doc.go empty_value.go implied_type.go internal_validate.go nestingmode_string.go none_required.go schema.go validate_traversal.go
type Attribute struct { // Type is a type specification that the attribute's value must conform to. Type cty.Type // Description is an English-language description of the purpose and // usage of the attribute. A description should be concise and use only // one or two sentences, leaving full definition to longer-form // documentation defined elsewhere. Description string DescriptionKind StringKind // Required, if set to true, specifies that an omitted or null value is // not permitted. Required bool // Optional, if set to true, specifies that an omitted or null value is // permitted. This field conflicts with Required. Optional bool // Computed, if set to true, specifies that the value comes from the // provider rather than from configuration. If combined with Optional, // then the config may optionally provide an overridden value. Computed bool // Sensitive, if set to true, indicates that an attribute may contain // sensitive information. // // At present nothing is done with this information, but callers are // encouraged to set it where appropriate so that it may be used in the // future to help Terraform mask sensitive information. (Terraform // currently achieves this in a limited sense via other mechanisms.) Sensitive bool Deprecated bool }
Attribute represents a configuration attribute, within a block.
EmptyValue returns the "empty value" for the receiving attribute, which is the value that would be returned if there were no definition of the attribute at all, ignoring any required constraint.
type Block struct { // Attributes describes any attributes that may appear directly inside // the block. Attributes map[string]*Attribute // BlockTypes describes any nested block types that may appear directly // inside the block. BlockTypes map[string]*NestedBlock Description string DescriptionKind StringKind Deprecated bool }
Block represents a configuration block.
"Block" here is a logical grouping construct, though it happens to map directly onto the physical block syntax of Terraform's native configuration syntax. It may be a more a matter of convention in other syntaxes, such as JSON.
When converted to a value, a Block always becomes an instance of an object type derived from its defined attributes and nested blocks
CoerceValue attempts to force the given value to conform to the type implied by the receiever.
This is useful in situations where a configuration must be derived from an already-decoded value. It is always better to decode directly from configuration where possible since then source location information is still available to produce diagnostics, but in special situations this function allows a compatible result to be obtained even if the configuration objects are not available.
If the given value cannot be converted to conform to the receiving schema then an error is returned describing one of possibly many problems. This error may be a cty.PathError indicating a position within the nested data structure where the problem applies.
ContainsSensitive returns true if any of the attributes of the receiving block or any of its descendent blocks are marked as sensitive.
Blocks themselves cannot be sensitive as a whole -- sensitivity is a per-attribute idea -- but sometimes we want to include a whole object decoded from a block in some UI output, and that is safe to do only if none of the contained attributes are sensitive.
DecoderSpec returns a hcldec.Spec that can be used to decode a HCL Body using the facilities in the hcldec package.
The returned specification is guaranteed to return a value of the same type returned by method ImpliedType, but it may contain null values if any of the block attributes are defined as optional and/or computed respectively.
EmptyValue returns the "empty value" for the recieving block, which for a block type is a non-null object where all of the attribute values are the empty values of the block's attributes and nested block types.
In other words, it returns the value that would be returned if an empty block were decoded against the recieving schema, assuming that no required attribute or block constraints were honored.
ImpliedType returns the cty.Type that would result from decoding a configuration block using the receiving block schema.
ImpliedType always returns a result, even if the given schema is inconsistent. Code that creates configschema.Block objects should be tested using the InternalValidate method to detect any inconsistencies that would cause this method to fall back on defaults and assumptions.
InternalValidate returns an error if the receiving block and its child schema definitions have any consistencies with the documented rules for valid schema.
This is intended to be used within unit tests to detect when a given schema is invalid.
NoneRequired returns a deep copy of the receiver with any required attributes translated to optional.
func (b *Block) StaticValidateTraversal(traversal hcl.Traversal) tfdiags.Diagnostics
StaticValidateTraversal checks whether the given traversal (which must be relative) refers to a construct in the receiving schema, returning error diagnostics if any problems are found.
This method is "optimistic" in that it will not return errors for possible problems that cannot be detected statically. It is possible that an traversal which passed static validation will still fail when evaluated.
type NestedBlock struct { // Block is the description of the block that's nested. Block // Nesting provides the nesting mode for the child block, which determines // how many instances of the block are allowed, how many labels it expects, // and how the resulting data will be converted into a data structure. Nesting NestingMode // MinItems and MaxItems set, for the NestingList and NestingSet nesting // modes, lower and upper limits on the number of child blocks allowed // of the given type. If both are left at zero, no limit is applied. // // As a special case, both values can be set to 1 for NestingSingle in // order to indicate that a particular single block is required. // // These fields are ignored for other nesting modes and must both be left // at zero. MinItems, MaxItems int }
NestedBlock represents the embedding of one block within another.
func (b *NestedBlock) EmptyValue() cty.Value
EmptyValue returns the "empty value" for when there are zero nested blocks present of the receiving type.
NestingMode is an enumeration of modes for nesting blocks inside other blocks.
const ( // NestingSingle indicates that only a single instance of a given // block type is permitted, with no labels, and its content should be // provided directly as an object value. NestingSingle NestingMode // NestingGroup is similar to NestingSingle in that it calls for only a // single instance of a given block type with no labels, but it additonally // guarantees that its result will never be null, even if the block is // absent, and instead the nested attributes and blocks will be treated // as absent in that case. (Any required attributes or blocks within the // nested block are not enforced unless the block is explicitly present // in the configuration, so they are all effectively optional when the // block is not present.) // // This is useful for the situation where a remote API has a feature that // is always enabled but has a group of settings related to that feature // that themselves have default values. By using NestingGroup instead of // NestingSingle in that case, generated plans will show the block as // present even when not present in configuration, thus allowing any // default values within to be displayed to the user. NestingGroup // NestingList indicates that multiple blocks of the given type are // permitted, with no labels, and that their corresponding objects should // be provided in a list. NestingList // NestingSet indicates that multiple blocks of the given type are // permitted, with no labels, and that their corresponding objects should // be provided in a set. NestingSet // NestingMap indicates that multiple blocks of the given type are // permitted, each with a single label, and that their corresponding // objects should be provided in a map whose keys are the labels. // // It's an error, therefore, to use the same label value on multiple // blocks. NestingMap )
func (i NestingMode) String() string
const ( StringPlain StringKind = iota StringMarkdown )
Package configschema imports 12 packages (graph) and is imported by 81 packages. Updated 2020-03-08. Refresh now. Tools for package owners. | https://godoc.org/github.com/hashicorp/terraform/configs/configschema | CC-MAIN-2020-16 | refinedweb | 1,456 | 50.36 |
An invaluable tool I’ve been using quite a bit lately is the Windows Communication Foundation (WCF) Test Client. You can find out more about it here on MSDN, but it’s basically a tool that acts as a service client by consuming an existing service and letting you interact with it through a nice user interface.
The program is named WcfTestClient.exe and can be found on your development machine in the following locations, depending on which version of Visual Studio you have installed:
Here’s a screenshot of the WCF Test Client working with a service that performs temperature conversion:
Notice how the tool dynamically creates types based on the service’s WSDL, lets you enter values for the request message properties, and displays the response message properties after you call the service (by clicking the Invoke button).
The XML tab at the bottom of the screen can also be used to inspect the exact request and response SOAP messages:
This is especially useful for debugging your service calls (e.g. to ensure the expected message namespace is being used).
Hope this helps. | https://larryparkerdotnet.wordpress.com/2011/07/29/wcf-test-client/ | CC-MAIN-2018-39 | refinedweb | 185 | 53.95 |
Linux version: Kubuntu VERSION="14.04.2 LTS, Trusty Tahr"
Version of POL: Version: 4.2.6
Full computer specs: i7, 6 GB ram, GTX 650 ti on a kingston ssdon 240GB
Desciption:
I've been using Playonlinux for over a month now and has been working very well. I went to launch it yesterday and it would not come up. I tried it from the terminal and got the following error repeating
Error and dependencies:
$ playonlinux
Looking for python... 2.7.6 - selected
Traceback (most recent call last):
File "mainwindow.py", line 34, in <module>
import wx, wx.aui
File "/usr/lib/python2.7/dist-packages/wx-2.8-gtk2-unicode/wx/__init__.py", line 45, in <module>
from wx._core import *
File "/usr/lib/python2.7/dist-packages/wx-2.8-gtk2-unicode/wx/_core.py", line 4, in <module>
import _core_
ImportError: /usr/lib/x86_64-linux-gnu/libXrandr.so.2: undefined symbol: _XEatDataWords
$ sudo apt-get install unzip wget xterm python python-wxgtk2.8 imagemagick cabextract icoutils p7zip-full curl
Reading package lists... Done
Building dependency tree
Reading state information... Done
imagemagick is already the newest version.
imagemagick set to manually installed.
python is already the newest version.
xterm is already the newest version.
xterm set to manually installed.
cabextract is already the newest version.
cabextract set to manually installed.
icoutils is already the newest version.
p7zip-full is already the newest version.
python-wxgtk2.8 is already the newest version.
curl is already the newest version.
curl set to manually installed.
unzip is already the newest version.
wget is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 6 not upgraded.
I noticed it was looking for python 2.7.6 and I'm running 2.7.5-5ubuntu3. Why the change to a version I can not get yet? can I change PoL to use 2.7.5?
Hi,
The way I interpret it, PoL finds that "python" command launches Python 2.7.6, the version is ok so it decides to go with it; But then wxPython is not installed (with this Python version at least), so it crashes.
Development version is a bit more clever, and also checks that wxPython works before it goes with some version of Python.
But the root problem here is that somehow you have Python 2.7.6 installed when you thought you didn't. Maybe some recent system update broke something?
You are correct. I was able to reproduce the issue by just running an import wx in python. I found others with the issue here:
and here
Tried a down grade of wxpython and got the same results. There was a system update that upgraded python. I finally decided to remove all the wx packages and playonlinx and let the reinstall handle to correct packages. Same issue again.
$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import wx
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named wx
>>>
KeyboardInterrupt
>>>
reid@uber-tower:/etc/apt/sources.list.d$ sudo apt-get install playonlinux
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
libwxbase2.8-0 libwxgtk-media2.8-0 libwxgtk2.8-0 python-wxgtk2.8
python-wxversion
Suggested packages:
editra
The following NEW packages will be installed:
libwxbase2.8-0 libwxgtk-media2.8-0 libwxgtk2.8-0 playonlinux python-wxgtk2.8
python-wxversion
0 upgraded, 6 newly installed, 0 to remove and 6 not upgraded.
Need to get 473 kB/9,848 kB of archives.
After this operation, 43.2 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
I thinking about downgrading python at this point. Anyone else have any suggestions?
After 8 hours I gave up. wanted to move my home to a separate partition anyway so I did the extreme measure and reinstalled my system. After some fun times with root owning all my files and needing to clear out my .playonlinux directory I got it working with Diablo 3 and LoL installed.
I think the problem has to do with experimenting with Warframe and trying to get it working. Sadly in the end it did not work. Will image my machine now before I try straying too far from the path. | https://www.playonmac.com/en/topic-13022-PoL_can_no_longer_launch.html | CC-MAIN-2022-40 | refinedweb | 743 | 70.39 |
Documentation. (DOI: 10.1063/1.4963385)
- Added analytic RHF-CC2 gradients and building of CC2 UHF and ROHF densities.
- Reworked MCSCF with density-fitting, py driver, augmented Hessian iterations, better printing, and the ability to rotate guess orbitals in MCSCF procedure with
MCSCF_ROTATEkeyword.
- Added B86B & PW86 exchange and B86BPBE & PW86PBE exchange-correlation functionals
- Added X2C and (external) DKH relativistic corrections for post-SCF methods.
- (external) Added Grimme’s semi-semiempirical HF-3c and PBEh-3c semi-semiempirical energy methods through gCP interface.
- (external) Added ROHF reference for perturbative methods (e.g., ROHF-CCSDT(Q)) in MRCC interface.
- (external) Added PCM in the PTE (perturbation to energy) approximation for implicit solvation to CCSD via PCMSolver.
- (external) Added SIMINT integral interface.
User Improvements
- Fixed interfragment coordinates in geometry optimizer
- Added option to only write occupied orbitals to Molden files.
- Added saving of geometry and normal modes to Molden file after vibrational analysis.
- Added Jensen [aug-]pc[s][seg]-N, N=0–4 basis sets.
- Renamed
rel_basiskeyword to
basis_relativistic.
- Added 3c overlap integrals to libmints.
- Switched default auxiliary basis sets for sto-3g and 3-21g to def2-SVP series.
- Enhanced cc* modules to write OPDM back to Wavefunction object if computed and to construct psivars for eom-cc, 0-indexed (ground state = 0).
- Added
psi4.set_options(dict)function, making
psi4.geometry(),
psi4.set_options(), and
psi4.energy(), etc. the mainstays of driving calculations in PsiAPI.
- Added AO-based CASSCF.
- Reworked CI root indexing to use 0 as ground-state index, so now CI and CC use so the same indexing for PSI variables.
- Added atom- and AM-labels to printing of molecular orbitals.
- Reworked exiting so that if a geometry optimization exceeds maxiter, it no longer just prints a warning and exits sucessfully (beer) but now exits unsuccessfully (coffee) and raises a
psi4.ConvergenceError.
- Reworked
psi4.set_memory()to optionally take a string that includes a memory unit. Added a minimum memory (250MiB) and increase the default memory (500 MiB).
- Reworked parallelism control. Environment variables OMP_NUM_THREADS and MKL_NUM_THREADS now ignored. Control parallelism in PsiAPI with
psi4 -nNor in either mode through
set_num_threads(N).
- Reworked Psi variables in dfmp2 module so that duplicated in Wavefunction.get_variables() as well as
psi4.get_variables().
- Added printing of file and line origin for basis sets upon loading. Auxiliary basis sets now get a name (basis1 + basis2 for combination) rather than a blank. Auto-selection of auxiliary basis sets for >=5-zeta orbitals basis sets no longer defaults to def2-quad-zeta when an appropriate >=5-zeta auxiliary basis not available.
- Added new complete set of test case reference output files.
- Added BFDb databases.
- Reworked
print_outcommands that redirect to output file. Now it means whatever your python print means.
- Added to Numpy integration the ability for
psi4.core.Matrixand
psi4.core.Vectorto be converted to NumPy arrays and back. Please see tests/numpy-array-interface for a full suite of examples.
- Reworked the finding of useful text files in /share/psi4/. Environment variable PSI4DATADIR is now defunct. PSIDATADIR remains but should not need to be used unless you want to specify one not adjacent to the built psi4/core.so library. For running psi4 from both staged and installed locations, it should default just fine and not need PSIDATADIR=/path/to/share/psi4 or psi4 -l /path/to/share/psi4.
- Added beginnings of JSON interface.
Infrastructure Improvements
- Relicensed Psi4 from GPL-2.0+ to LGPL-3.0.
- “Inverted” Psi4 from C++ executable with embedded Python to ordinary Python module layout. Added PsiAPI mode for interacting with Psi4 as Python module (i.e.,
python -c "import psi4". Tutorial at .
- Reworked
bin/psi4so now a light script calling
import psi4rather than a hefty C++ executable. No longer linking to libpython.so.
- Added Python 3 (3.5 & 3.6) support to existing Python 2.7
- Reorganized directory layout so that Psi4+Addons in
/, Psi4 Python module in
/psi4/, and Psi4 C++ library in
/psi4/src/.
- Rewrote build system into a CMake (min version 3.3) superbuild, evicting all external code and leaving each add-on with only a single-file build footprint in the external/upstream/ folder specifying its build as a CMake External Project.
- Removed
setup.pyas user interface to CMake build. Now one should call CMake directly using options and guidance in the first ~115 lines of top-level CMakeLists.txt.
- Switched Python binding of C++ from Boost Python to pybind11. Renamed Py-bound C++ library from “psi4” to “core”. A consequence is that Psi4 now requires full C++11 compliance (GCC 4.9+, Clang 3.3+, Apple Clang 6.1+, ICC 2016.0.2+). Note that ICC requires GCC and therefore GCC 4.9+. Note that PyBind11 adheres more to C-style than Python-style when it comes to references and pointer counting. As such, several functions required deep changes as internal references from C-side are no longer manipulatable Python-side.
- Added testing mode to see if Psi4 basically works when you turn it on. From a build directory, using CTest,
ctest -L smoke. On any executable, using pytest,
psi4 --test. On the python module, using pytest,
psi4.test().
- Reworked plugin system to CMake from GNUMake. Use
psi4 --plugin-compileto generate Makefile rather than
psi4 --new-plugin-makefileas formerly. Plugin interface has been substantially renovated.
- Renamed plugin generation from, for example,
psi4 --new-plugin +wavefunction mypluginto
psi4 --plugin-name myplugin --plugin-template wavefunction.
- Build performs pre-install to
BuildDir/stage/so python driver not being run from source. Use
psi4 --inplaceto run python driver from source.
- Switched versioning (e.g., 15 commits after tag v1.0 before tab v1.1rc1) from 1.0.15 to 1.1rc1.dev15.
- Reworked build documentation into documentation proper (), making GitHub wiki defunct.
- Switched Python build detection from find_package(PythonLibs) to find_package(PythonLibsNew) CMake module used by NumPy and pybind11.
- Reworked ASCII scratch/output file names to incorporate job PID, just as binary scratch files do.
- Adjusts BLAS/LAPACK detection to detect OpenBLAS and to favor unified runtime library mkl_rt.so for MKL.
- Added internal
variables_and
arrays_std::maps for double and SharedMatrix types, respectively, to the Wavefunction class. These should be used inside a computation to enable greater localization of variables.
- Switched Mac conda binary builds from gnu/libstdc++ to clang/libc++ with implications for mixing conda packages with locally compiled software (e.g., plugins from conda Psi4).
- Rewrote GitHub history of psi4/psi4. All forks prior to 2016-10-19 are no longer valid. Please refork before working on Psi4.
- Reworked
BasisSets to be exclusively built in Python and passed into C-side by the Wavefunction get_basisset and set_basisset calls.
Performance Optimization
- Reworked I/O in UHF CC routines to avoid expensive sorting.
- Reworked fitting algorithm behind diatomic() from hard-wired Lagrange interpolations to weighted least squares that can use an arbitrary number of points.
- Removed ccsort/transqt2 legacy modules from codebase. They can be enabled at build-time for testing.
- Added threading to MintsHelper for one-body integrals for MIC architectures.
Bug Fixes
- Fixed OEProp bug for fields and electrostatic potentials when spherical basis sets were used with a symmetry-breaking origin.
- Fixed CBS syntax bug that produced outrageous HF extrapolations errors for some methods.
- Fixed DF-MP2 to fail gracefully when no virtual orbitals present.
- Fixed bug that prevented freezing a bond angle at 0 degrees during a geometry optimization.
- Fixed CASSCF to return correct variable if state averaging requested.
- Fixed diag_method=rsp in detci module that wasn’t working.
- Fixed guess=read for ROHF wavefunctions.
- Fixed integer overflows in SAPT code and libdpd code (for CC2) and dfocc code (for CCSD(T)).
- Fixed DF-MP2 gradients in the presence of external potential.
- Fixed various bugs and useability improvements for calculations in the presence of a dipole field.
- Fixed silent fail for non-Lebedev numbers in dft_spherical_points.
- Fixed instability of matrix diagonalization that led to anomolous DFT grid generation on Haswell processors.
- Fixed specifying non-default basis-set-extrapolation schemes as a keyword argument to energy(), optimize(), etc.
- Fixed
properties_origin["COM"]that wasn’t working.
- Fixed bug in ccresponse that led to different polariability values with symmetry on and off.
- Fixed
molden(..., dovirtual)bug so that keyword is honored and unrestricted occupations are returned correctly.
- Fixed wB97X-based functionals that were using 0.3 instead of 0.4. This makes no appreciable difference at the cross-database hundredths of a kcal/mol level but in a little wrong.
External Features and Infrastructure
- Reworked Libint integration to pull from upstream repository at 1.2.0 or 1.2.1
- Added new integral library SIMINT by Ben Pritchard for energy integrals, accessed through
cmake -DENABLE_simint. Pinned at 0.7.
- Added using ERD for most all energy integrals (previously only direct conventional HF).
- Reworked LIBEFP integration so no longer required for Psi4 and so source built from upstream repository, not code internal to Psi4. Bumped LIBEFP to 1.4.2.
- Bumped CheMPS2 to 1.8.3-12
- Reworked ambit to reenable it and the ambit plugin template. Ambit not presently linked into Psi4 as not used internally.
- Reworked DKH integration so that project obtained from home repository, not from code stored in Psi4. Reworked DKH procedure so that orbital basis decontracted to form the DKH one-electron integrals, then recontracted for further calculation.
- Bumped PCMSolver to 1.1.9 (see also “PTE”).
- Added basic gCP interface (see also “3c”).
- Maintained GDMA, MRCC (see also “ROHF-CC”), DFTD3 interfaces.
- Bumped v2rdm_casscf plugin to 0.3.
- Switches PubChem to use REST interface.
- Pinned pybind11 version at 2.0.0 (2.0.1 also known to work). | https://psicode.org/posts/v1p1/ | CC-MAIN-2021-21 | refinedweb | 1,590 | 52.66 |
I want to change the output of my code. I have a code like this :
from collections import defaultdict
third = defaultdict(lambda: (defaultdict(lambda : defaultdict(int))))
count = 0
fh = open("C:/Users/mycomp/desktop/data.txt", "r").readlines()
for line in fh:
line_split = line.split();
date = line_split[0];
time = line_split[1];
ip = line_split[2];
third [date][time][ip]+= 1
for date, d in third.iteritems():
for time , count in d.iteritems():
print "%s %s %s %s" % (date, time, count,ip)
2016-11-04 00:00:12 10.11.13.13
2016-11-05 00:00:15 10.14.12.11
2016-11-06 00:00:19 10.10.15.13
2016-10-04 07:46 defaultdict(<type 'int'>, {'10.11.13.15': 574}) 10.11.13.15
2016-10-04 15:58 defaultdict(<type 'int'>, {'10.21.24.13': 364}) 10.21.24.13
2016-10-04 15:59 defaultdict(<type 'int'>, {'10.21.22.13': 359}) 10.21.22.13
2016-10-04 07:42 defaultdict(<type 'int'>, {'10.21.27.10': 287}) 10.21.27.10
2016-10-04 07:43 defaultdict(<type 'int'>, {'10.11.37.13': 337}) 10.11.37.13
2016-10-04 07:46 {'10.11.13.15': 574}) 10.11.13.15
2016-10-04 15:58 {'10.21.24.13': 364}) 10.21.24.13
2016-10-04 15:59 {'10.21.22.13': 359}) 10.21.22.13
2016-10-04 07:42 {'10.21.27.10': 287}) 10.21.27.10
2016-10-04 07:43 {'10.11.37.13': 337}) 10.11.37.13
Your dictionary has three levels, so each value has three keys you need to get to it (date, time and IP). Your output code loops over the first two, but there's no loop over the IPs, so you get a dictionary instead.
I suspect you want something like this, with three loops:
for date, x in third.iteritems(): for time, y in x.iteritems(): for ip, count in y.itertiems(): print "%s %s %s %s" % (date, time, count, ip)
If you really do want all the data from a single date and time to be printed on a single line (even if there are multiple IPs involved), you could, I suppose, just change your
count value you're getting in your current code is one of the innermost
defaultdicts that maps from IP address to count. You can convert that to a regular
dict if you want and include it in your
for date, d in third.iteritems(): for time, ip_count in d.iteritems(): print "%s %s %s" % (date, time, dict(ip_count))
Note that there are only three things being formatted (the IPs and counts are part of the same object). The
ip parameter you had in your code didn't actually work properly, since it wasn't being set in your two levels of loops. You were in fact printing out the last IP address you used when filling the dictionary (so the one on the last line of your input file). Unlike your example output, I suspect it would not match the contents of the inner dictionary you printed.
Note that both versions of the code above will print your data in a mostly arbitrary order. All lines for the same day will print together (and all lines for the same time within a day), but outside of those groupings, the values will be in arbitrary order. You may want to use
sorted to put your data in a useful order:
import operator keyfunc = operator.itemgetter(0) for date, x in sorted(third.iteritems(), key=keyfunc): for time, y in sorted(x.iteritems(), key=keyfunc): for ip, count in sorted(y.itertiems(), key=keyfunc): print "%s %s %s %s" % (date, time, count, ip)
I'd also consider using a less nested data structure, such as a dictionary keyed with
date, time, ip tuples. | https://codedump.io/share/aqBaIASO1tDU/1/changing-my-output-of-code---python | CC-MAIN-2017-47 | refinedweb | 651 | 86.2 |
The XHTML Component Set
Overview
Like ZUL, the XHTML component set is a collection of components. Unlike ZUL, which is designed to have rich features, each XHTML component represents a HTML tag. For example, the following XML element will cause ZK Loader to create a component called Ul
<h:ul xmlns:
since 8.0.3
XHTML component supports HTML5 tag attributes, and these attributes could be accessed by MVVM. About MVVM, please refer to the MVVM document.
Dynamic Update
Because Components are instantiated for XML elements specified with the XHTML namespace, you could update its content dynamically on the server. For example, we could allow users to click a button to add a column as shown below.
<window title="mix HTML demo" xmlns: <h:table <h:tr <h:td>column 1</h:td> <h:td> <listbox id="list" mold="select"> <listitem label="AA"/> <listitem label="BB"/> </listbox> </h:td> </h:tr> </h:table> <button label="add" onClick="row1.appendChild(new org.zkoss.zhtml.Td())"/> </window>
On the other hand, the native namespace will cause native HTML tags being generated. It means you can not modify the content dynamically on the server. Notice that you still can handle them dynamically at the client.
However, when a XHTML component are used, a component running on the server has to be maintained. Thus, you should use the XHTML component set only if there is no better way for doing it.
For example, we could rewrite the previous sample with the native namespace and some client-side code as follows.
<window title="mix HTML demo" xmlns: <n:table <n:tr <n:td>column 1</n:td> <n:td> <listbox id="list" mold="select"> <listitem label="AA"/> <listitem label="BB"/> </listbox> </n:td> </n:tr> </n:table> <button label="add" w: </window>
ID and UUID
Unlike other components, if you assign ID to a XHTML component, its UUID (Component.getUuid()) is changed accordingly. It means you cannot have two XHTML components with the same ID, no matter if they are in different ID spaces.
Filename Extension
As described in ZUML, the XHTML component set is associated with zhtml, xhtml, html and htm. It means you could name a ZUML page as foo.zhtml if you map
*.zhtml to ZK Loader. However, when this kind of file is interpreted, ZK Loader assumes it will have its own HTML, HEAD, BODY tags. On the other hand, these tags are generated automatically if the filename extension is
zul.
For example, suppose we have a file called foo.zhtml, then the content might look as follows.
<?link rel="shortcut icon" href="/favicon.ico" type="image/x-icon"?> <html xmlns: <head> <title>ZHTML Demo</title> <zkhead/><!-- a special tag to indicate where to generate ZK CSS and JS files --> </head> <body style="height:auto"> <h1>ZHTML Demo</h1> <ul id="ul"> <li>The first item.</li> <li id="li2" zk:Click me to change Id.</li> </ul> </body> </html>
where
- Since the extension is
zhtml, the default namespace is XHTML. Thus, we have to specify the zk and zul namespace explicitly.
- Notice that we have to specify the zk namespace too, because XHTML will cause ZK Loader to consider any unrecognized element as native HTML tag.
- We have to specify HTML, HEAD and BODY to make it a valid HTML document.
- We could specify zkhead (line 5) to indicate where to generate ZK CSS and JavaScript files. It is optional. If not specified, ZK will try to identify the proper location for ZK CSS and JavaScript files. Specify it if you want some CSS or JavaScript file to be evaluated before or after ZK's default ones.
- By default, BODY's CSS is
width:100%;height:100%. However, it is appropriate for Web-look page[1] For Web-look, we could specify
height:autoto reset it back to the browser's default.
Version History | https://www.zkoss.org/wiki/ZK_Developer%27s_Reference/UI_Patterns/HTML_Tags/The_XHTML_Component_Set | CC-MAIN-2022-27 | refinedweb | 644 | 65.12 |
Hi
I am in a bit of a fuddle over array size vs co-ordinates.
I am learning Java (not really included in this forum) and the question requires that I move the user around an array using a variety of commands.
To some extent, however, this is a general array problem that is applicable to any language that uses a 2d array.
Using the below code, I can get the turtle to move to square 14, but am unable to progress as all the other commands are taken up.
The examples I see on the web import a lot of java utilities that are beyond my level of knowledge.
The key difficulty, is that the array is 20 int[20][20], which really means that the maximum x co-ordinate is 19. so if I get the user to take 10 steps, they wil be at co-ordinate [0][10] and another 10 steps will take them off the array.
I then can't seem to get my 'turtle' to move to square 19. Using the commands as given. Is this correct?
To keep the commands relativey less complex for the forum, I have only included the eastward direction (x++) of the turtle (although the north, south and west are just the same, albeit with moving x or y oppositely).
Below are the commands:
// 1 = pen up (i.e. not writing to the array as pass)
// 2 = pen down
// 4 = direction++ (change direction)
// 5 = move forward that many spaces
//6 = display array
// 10 = move forward that many spaces
My code is as follows:
import java.util.Scanner;
public class e7_21 {
public static void main(String[] args) {
Scanner input = new Scanner(System.in);
int command = 0; // command to turtle
int pen = 0; // down = 1 = true = write
int floor[][] = new int[20][20]; // this is the map of the floor
int y = 0; // row co-ordinate
int maxY = 19; // maximum co-ordinate
int x = 0 ; // column co-ordinate
int maxX = 19; // maximum co-ordinate
int direction = 0; // where 0 = north, 1 = east, 2, south, 3 west
System.out.print("Please enter command: ");
command=input.nextInt();
while (command != -1) {
// change pen up
if (command == 1)
{
pen = 0;
}
// change pen down
if (command == 2)
{
pen = 1;
}
// change direction
if (command == 4)
{
if (direction == 3) {
direction = 0;
}
else direction++;
}
// move spaces
if (command == 5 || command == 10) {
// if facing east, it is moving plus x
if (direction == 1) {
//move off the board
if (command+x > maxX) System.out.print("Not able to move that far! Please change direction");
// ok to move
else {
// if pen is down write it
if (pen == 1) {
// is moving plus x
// number of moves not starting position
// not changing rows, just columns
// put the i at the starting column and increment
for (int i = x; i <= command+x; i++) {
floor[y][i] = 1;
}
x+=command;
}
else x+=command;
}// end else
}
System.out.print("command = " + command+"\n");
System.out.print("pen= " + pen+"\n");
System.out.print("direction= " + direction + "\n");
System.out.print("x= " + x + "\n");
System.out.print("y= " + y + "\n");
System.out.print("Please enter pen position: ");
} // end command 6
System.out.print("\n\n"+"results"+"\n");
System.out.print("command = " + command+"\n");
System.out.print("pen= " + pen+"\n");
System.out.print("direction= " + direction + "\n");
System.out.print("x= " + x + "\n");
System.out.print("y= " + y + "\n");
System.out.print("Array\n");
for (int i = 0; i < 20; i++) {
for (int j = 0; j < 20; j++) {
System.out.printf("%d", floor[i][j]);
}
System.out.println();
}
System.out.print("Please enter command: ");
command=input.nextInt();
}// end while command != -1
} // end main
} // end class
This topic was automatically closed 91 days after the last reply. New replies are no longer allowed. | https://www.sitepoint.com/community/t/java-turtle-graphics-trouble-with-array-size-vs-co-ordinates/278989 | CC-MAIN-2018-09 | refinedweb | 619 | 65.52 |
Hi Ben, > The problem with the freeze is more strange that I thought. It seems > that busybox init on sparc drops out of pid 1, to pid 6. In it's place > is the swapper thread. When things send a signal to pid 1 (assuming that > pid 1 == init), things go wrong. > > Still need to investigate. Until I resolve that problem, there's no > point in making new disks yet. Did you see Paulus email to l-k? > This could be the same problem that I reported some time ago, where if > you send a signal to the init process while it is running /linuxrc, > the system will hang. In my case the problem was with this code in > prepare_namespace() in init/main.c: > > pid = kernel_thread(do_linuxrc, "/linuxrc", SIGCHLD); > if (pid>0) > while (pid != wait(&i)); > > If a signal becomes pending, the wait will not block, but the signal > never gets delivered because the process is still running inside the > kernel (signals only get delivered on the exit from kernel to user > space). > > One solution would be to change it to something like this (caution, > completely untested): > > pid = kernel_thread(do_linuxrc, "/linuxrc", SIGCHLD); > if (pid > 0) { > while (pid != wait(&i)) { > if (signal_pending(current)) { > spin_lock_irq(¤t->sigmask_lock); > flush_signals(current); > recalc_sigpending(current); > spin_unlock_irq(¤t->sigmask_lock); > } > } > } > > Another alternative would be to block signals like request_module() in > kernel/kmod.c does. > > HTH, > Paul. | https://lists.debian.org/debian-sparc/2001/10/msg00115.html | CC-MAIN-2019-39 | refinedweb | 228 | 72.05 |
Asked by:
Suddenly have problems opening .qfx downloaded file from American Express in MS Money
Question
Suddenly this month I am unable to open downloaded .qfx files from American Express in Money. I have as recently as November 2019, downloaded the file to my computer then imported the file into money. This month I get the error message: "file attempting to import appears to be invalid or contain corrupt data". The American Express site had a message stating that "because of a recent change you may have update your connection between Amex and Quicken." I don't have quicken so there's no one for me to contact. I contacted American Express and they were not of much help, and told me to contact Microsoft. I have a feeling they did something on their end that in someway altered something in the way file downloads. Does anyone else have this problem and know how to solve it? (I'm using MS MoneyPlus Deluxe Version 17.0.120.3817 on a Windows 10 computer.)
Thanks for your help.
All replies
You could try this Python script:
############begin AMEX_FX3.PY########
#modifies AMEX OFX file with extra crud in the <OFX>
#invoke with filename of the source as the parameter.
#Example: AMEX_FX3.PY ax_abc.qfx
#2019-7-30 Cal Learner...
#2019-12-8 modified to run on Python 2 or 3.
import sys, re
ofxfile1= open(sys.argv[1],'r') # open for read in current directory
# Input file read in as parameter. Output file fixed in this case.
ofxfile2= open("ModOut.ofx",'w') # open for write in current directory
#do stuff
ofx=ofxfile1.read()
ofx_parts=re.split(r"\s*<OFX[^>]*>",ofx) #splits the file into 2 pieces in list
ofx_body= re.sub(r"(?<!\s)<", r"\n<",ofx_parts[1]) #add the newlines to body
ofx_mod=ofx_parts[0]+"\n<OFX>" + ofx_body #assembling the new header and the body
ofxfile2.write(ofx_mod)
ofxfile1.close()
ofxfile2.close()
print("Modified version of ", sys.argv[1],' written to "ModOut.ofx" file.')
############end AMEX_FX3.PY########
I forget the flaw this fixes, but it might be that instead of <ofx>, the file contains <ofx somethingOrAnother>. So if that is the case, you could just modify the QFX file with a text editor, such as Notepad.
This issue with Amex qif files was reported back on December 11th. Here is a link:
However, for some reason, copied links from this forum aren't working for me right now, soif you have the same problem with links, here is my response from that thread:
I had this issue with AMEX downloads for several months and Cal's Python program fixed those files. Since November 20th, the downloads have imported correctly for me, but, from your experience, there are still issues. If you are willing to manually edit the downloaded file using Notepad, I found that this single replacement worked:
Find <OFX xmlns: and replace it with <OFX>
Cal's responses in that thread have more detail if you want to automate the fix using the above Python script.
Bill Becker
- Edited by Bill Becker Tuesday, December 31, 2019 4:36 PM
Just downloaded transactions for my Amex Blue Cash Preferred credit card and I find that Amex is once again downloading activity.qfx files that won't import into Money. The fix that still works is:
- open the activity.qfx file with Notepad
- search for <OFX xmlns: It's pretty early in the file.
- replace it with <OFX>, i.e. delete xmlns:ns2=""
Save the file and then double-click it to open it in Money.
Bill Becker
I don't know how you do it, but I use the manual download of the .qfx file and change the extension to .ofx.
American Express changed the URL for the download, and along with that a file format that doesn't work (I see that others have found the line to edit). I have the old one that still sends a .qfx that requires no editing.
Instructions:
1. Paste this into the address field
2. Login.
3. It takes you right there to the old download screen.
4. Make it a favorite for easier access.
If you're someone that logins and checks for new activity before initiating the download, you can do this (after making the above URL a favorite):
1. Login to americanexpress.com
2. See that you have new activity.
3. Go to favorites, and launch the URL
4. It keeps you logged in, and takes you to the old familiar download screen that gives a .qfx file that doesn't require editing. | https://social.microsoft.com/Forums/en-US/980fcc68-c98e-4de4-a08f-58024a6cb52c/suddenly-have-problems-opening-qfx-downloaded-file-from-american-express-in-ms-money?forum=money | CC-MAIN-2020-10 | refinedweb | 762 | 74.49 |
Simply put, functions are a block a code that is packaged up so as to be executed independently, how and when you choose.
Functions have parameters, and you provide arguments when you execute the function. That's generally referred to as calling the function, and the code that calls it is referred to as the caller.
The idea of functions in programming goes all the way back to early assembly languages and the concept of the subroutine. Most assembly languages have a
CALL operation used to transfer control to the subroutine, and a
RET operation to return control back to the calling code. The BASIC language also used the term subroutine and had a
GOSUB statement that was used to call them, and a
RETURN statement.
Most modern languages have dropped the explicit
CALL or
GOSUB statement in favor of implicit syntax to indicate a function call:
function_name(arg1, arg2, ..., argn)
function_name(arg1, arg2, ..., argn)
In assembly and BASIC, a subroutine had to end with an explicit
RET or
RETURN. Python loosens that up somewhat. If there is no value that needs to be sent back as the result of the function, the return can be left implicit, as in the following:
def foo(x): print("X is {}".format(x))
def foo(x): print("X is {}".format(x))
>>> foo(5) x is 5
>>> foo(5) x is 5
We can use an explicit return, but it gains nothing and takes an additional line of code.
def foo(x): print("X is {}".format(x)) return
def foo(x): print("X is {}".format(x)) return
There are two cases in Python where we do need to make the return explicit.
- When you need to return from the function before it naturally reaches the end.
- There is a value (or values) that needs to be sent back to the caller.
def foo(x): if x > 5: return print("X is {}".format(x))
def foo(x): if x > 5: return print("X is {}".format(x))
>>> foo(4) X is 4 >>> foo(10)
>>> foo(4) X is 4 >>> foo(10)
Using an
if/return combination like this is often referred to as a guard clause. The idea being that it gaurds entry into the body of the function much like security at an airport: you have to get past inspection before being allowed in. If you fail any of the checks, you get tossed out immediately.
This is also something called a precondition, although that usually implies a stronger response than simply returning early. Raising an exception, for example. You use a precondition when the "bad" argument value should never be sent into the function, indicating a programming error somewhere. This can be done using
assert instead of the
if/return. This will raise an
AssertionError if the condition results in
False.
def foo(x): assert x <= 5, "x can not be greater than 5" print("X is {}".format(x))
def foo(x): assert x <= 5, "x can not be greater than 5" print("X is {}".format(x))
>>> foo(10) Traceback (most recent call last): File "", line 1, in File "", line 2, in foo AssertionError: x can not be greater than 5
>>> foo(10) Traceback (most recent call last): File "", line 1, in File "", line 2, in foo AssertionError: x can not be greater than 5
Since preconditions are meant to catch programming errors, they should only ever raise an exception while you are working on the code. By the time it's finished, you should have fixed all the problems. Preconditions provide a nice way of helping make sure you have.
Although
assert is supported, it's a little different in the context of CircuitPython. Your running project likely does not have a terminal connected. This means that you have no way to see that an assert triggered, or why. The CircuitPython runtime gets around this for runtime errors by flashing out a code on the onboard NeoPixel/DotStar. This lets you know there's a problem and you can connect to the board and see the error output. But as I said, by the time you get to that point, there should be no programming errors remaining to cause unexpected exceptions.
That's the key difference between things like guards and preconditions: your code should check for, and deal with, problems that legitimately could happen; things that could be expected. There's no way to handle situations that should never happen; they indicate that somewhere there's some incorrect code. All you can do is figure out what's wrong and fix it.
The second case of requiring a
return statement is when the function has to return a value to the caller. For example:
def the_answer(): return 42
def the_answer(): return 42
So what if you have a function that is expected to return a value and you want to use a guard? There is no universal answer to this, but there is often some value that can be used to indicate that it wasn't appropriate to execute the function.
For example, if a function returns a natural number (a positive integer, and sometimes zero depending on which definition you use), you could return a -1 to indicate that a guard caused an early return. A better solution is to return some sort of default value, maybe 0. This, also, is very situation dependant.
Not only can functions return a value, they can return more than one. This is done by listing the values after the return keyword, separated by commas.
def double_and_square(x): return x+x, x*x
def double_and_square(x): return x+x, x*x
A function that returns multiple values actually returns a tuple containing those values:
>>> double_and_square(5) (10, 25) >>> type(double_and_square(5)) <class 'tuple'>
>>> double_and_square(5) (10, 25) >>> type(double_and_square(5)) <class 'tuple'>
You can then use Python's parallel assignment to extract the values from the tuple into separate variables.
>>> d, s = double_and_square(5) >>> d 10 >>> s 25
>>> d, s = double_and_square(5) >>> d 10 >>> s 25
As you can gather from the examples above, you use the
def keyword to define a function. The general form is:
def function_name (parameter_name_1, ..., parameter_name_n): statement ... statement
def function_name (parameter_name_1, ..., parameter_name_n): statement ... statement
You can then call the function by using the name and providing arguments:
function_name(argument_1, ..., argument_n)
function_name(argument_1, ..., argument_n)
This much we can see in the examples. Note that a function doesn't require parameters and arguments If it does have some, those names are available inside the function, but not beyond it. We say that the scope of the parameters is the body of the function. The scope of a name is the part of the program where it is usable.
If you use a name outside of its scope, CircuitPython will raise a
NameError.
>>> foo Traceback (most recent call last): File "", line 1, in NameError: name 'foo' is not defined
>>> foo Traceback (most recent call last): File "", line 1, in NameError: name 'foo' is not defined
Choosing good names for your functions is very important. They should concisely communicate to someone reading your code exactly what the function does. There's an old programmer joke that the person trying to read and understand your code will quite likely be you in a couple months. This becomes even more crucial if you share your code and others will be reading it.
When a function is called, it must be provided with an argument for each parameter. This is generally done as part of the function call, but Python provides a way to specify default arguments: follow the parameter name by an equals and the value to be used if the function call doesn't provide one.
def foo(x, msg=''): print("X is {}. {}".format(x, msg))
def foo(x, msg=''): print("X is {}. {}".format(x, msg))
>>> foo(5, 'hi') X is 5. hi >>> foo(5) X is 5.
>>> foo(5, 'hi') X is 5. hi >>> foo(5) X is 5.
We can see that if we provide an argument for the
msg parameter, that is the value that will be used. If we don't provide an argument for it, the default value specified when the function was defined will be used.
So far the examples have been using what's called positional arguments. That just means that arguments are matched to parameters by their positions: the first argument is used as the value of the first parameter, the second argument is used as the value of the second parameter, and so on. This is the standard and is used by pretty much every language that uses this style of function definition/call.
Python provides something else: keword arguments (sometimes called named arguments). These let you associate an argument with a specific parameter, regardless of it's position in the argument list. You name an argument by prefixing the parameter name and an equals sign. Using the previous function
foo:
>>> foo(5, msg="hi") X is 5. hi >>> foo(msg='hi', x=5) X is 5. hi
>>> foo(5, msg="hi") X is 5. hi >>> foo(msg='hi', x=5) X is 5. hi
Notice that by naming arguments their order can be changed. This can be useful to draw attention to arguments that are usually later in the list. There's one limitation: any (and all) positional arguments have to be before any keyword arguments:
>>> foo(msg='hi', 5) File "", line 1 SyntaxError: positional argument follows keyword argument
>>> foo(msg='hi', 5) File "", line 1 SyntaxError: positional argument follows keyword argument
Even without changing the order of arguments, keywords arguments lets us skip arguments that have default values and only provides the ones that are meaningful for the call. Finally, keyword arguments put labels on the arguments, and if the parameters are well named it's like attaching documentation to the arguments. Trey Hunner has a great write-up on the topic. To pull an example from there, consider this call:
GzipFile(None, 'wt', 9, output_file)
GzipFile(None, 'wt', 9, output_file)
What's all this? It's dealing with a file so
'wt' is probably the mode (write and truncate). The
output_file argument is clearly the file to write to, assuming it's been well named. Even then, it could be a string containing the name of the output file.
None and
9, however, are pretty vague. Much clearer is a version using keyword arguments:
GzipFile(fileobj=output_file, mode='wt', compresslevel=9)
GzipFile(fileobj=output_file, mode='wt', compresslevel=9)
Here it's clear that
output_file is the file object, and not the name.
'wt' is, indeed, the mode. And
9 is the compression level.
This is also a prime example of the problems using numeric constant. What is compression level 9? Is it 9/9, 9/10, or 9/100? Making things like this named constants removes a lot of ambiguity and misdirection. | https://learn.adafruit.com/circuitpython-101-functions/what-are-functions | CC-MAIN-2021-21 | refinedweb | 1,809 | 62.88 |
0
I have the following code that I cannot figure out how to get to work:
namespace Project2 { public static class Variables { public static List<string[]> tran1; public static List<string[]> tran2; } } private void Submit_Click(object sender, EventArgs e) { //Find the schedules // Variables var; string[] transaction; transaction = new string[3]; string tran_num; string read; if (Tran1.Checked) { tran_num = "T1 "; } else { tran_num = "T2 "; } transaction[0] = tran_num; if (Read.Checked) { read = "Read"; } else { read = "Write"; } transaction[1] = read; transaction[2] = DATA.Text; if (transaction[0] == "T1") { Variables.tran1.Add(transaction); Transaction1.Text += "\n" + read + DATA.Text; } else { Variables.tran2.Add(transaction); Transaction2.Text += "\n" + read + DATA.Text; } }
The following is the error that Visual Studio returns when I hit the Add Transaction step button
I am not sure what this error means, and being new to this language, not sure how to fix it.
Thank you for any possible help, or examples of how to fix this. | https://www.daniweb.com/programming/software-development/threads/419635/c-global-variables | CC-MAIN-2017-09 | refinedweb | 155 | 64.81 |
(For more resources on this subject, see here.)
Downloading Cocos2d for iPhone.
Time for action – opening the samples project file:.
What just happened?.
Installing the templates
Cocos2d comes with three templates. These templates are the starting point for any Cocos2d game. They let you:
- Create a simple Cocos2d project
- Create a Cocos2d project with Chipmunk as physics engine
- Create a Cocos2d project with Box2d as physics engine
Which one you decide to use for your project depends on your needs. Right now we'll create a simple project from the first template.
Time for action – installing the templates.
If you are getting errors, check if you have downloaded the files correctly and uncompressed them into the desktop. If it is in another place you may get errors..
What just happened?.
Creating a new project from the templates
Now that you have the templates ready to use, it is time to make your first project.
Time for action – creating a HelloCocos2d project).
- Select Cocos2d-0.99.1 Application.
- Name the project HelloCocos2d and save it to your Documents folder.
2. Cocos2d templates will appear right there along with the other Xcode project templates, as shown in the following screenshot:.
Let's stop for a moment and take a look at what was created here.
When you run the application you'll notice a couple of things, as follows:
- The Cocos2d for iPhone logo shows up as soon as the application starts.
- Then a CCLabel is created with the text Hello World.
- You can see some numbers in the lower-left corner of the screen. That is the current FPS the game is running at.
In a moment, we'll see how this is achieved by taking a look at the generated classes.
What just happened?
We have just created our first project from one of the templates that we installed before. As you can see, using those templates makes starting a Cocos2d project quite easy and lets you get your hands on the actual game's code sooner.
Managing the game with the CCDirector
The CCDirector is the class whose main purpose is scene management. It is responsible for switching scenes, setting the desired FPS, the device orientation, and a lot of other things.
The CCDirector is the class responsible for initializing OpenGL ES..
Types of CCDirectors.
(For more resources on this subject, see here.)
Time for action – taking a first look at the HelloCocos2dAppDelegate
What just happened?.
Scene management
The CCDirector handles the scene management. It can tell the game which scene to run, suspend scenes, and push them into a stack..
Doing everything with CCNodes
CCNodes are the main elements in the Cocos2d structure. Everything that is drawn or contains things that are drawn is a CCNode. For example, CCScene, CCLayers, and CCSprites are the subclasses of CCNode.
Time for action – peeking at the HelloWorldScene class
- Start by opening the HelloWorldScene.h file. Let's analyze it line by line:
Each class you create that makes use of Cocos2d should import its libraries. You do so by writing the preceding line.
#import "cocos2d.h"
These lines define the Interface of the HelloWorld CCLayer.
// HelloWorld Layer
@interface HelloWorld : CCLayer
{
}
// returns a Scene that contains the HelloWorld as the only child
+(id) scene;
@end file, where the action happens:
The preceding code is the one that will be called when the layer is initialized. What it does is create a CCLabel to display the Hello World text.
// on "init" you need to initialize your instance
-(id) init
{
// always call "super" init
// Apple recommends to re-assign "self" with the "super"
return value
if( (self=[super init] )) {
// create and initialize a Label
CCLabel* label = [CCLabel labelWithString:@"Hello
World" fontName:@"Marker Felt" fontSize:64];
// ask director the the window size
CGSize size = [[CCDirector sharedDirector] winSize];
// position the label on the center of the screen
label.position = ccp( size.width /2 ,
size.height/2 );
// add the label as a child to this Layer
[self addChild: label];
}
return self;
}
CCLabel is one of the three existent classes that allow you to show text in your game.
CCLabel* label = [CCLabel labelWithString:@"Hello World"
fontName:@"Marker Felt" fontSize:64];.
Most Cocos2d classes can be instantiated using convenience methods, thus making memory management easier. To learn more about memory management check the Apple documents at the following URL: #documentation/Cocoa/Conceptual/MemoryMgmt/ MemoryMgmt.html
The preceding line gets the size of the current window. Right now the application is running in landscape mode so it will be 480 * 320 px.
CGSize size = [[CCDirector sharedDirector] winSize];
Remember that the screen sizes might vary from device to device and the orientation you choose for your game. For example, in an iPad application in portrait mode, this method would return 768 * 1024.
This line sets the label's position in the middle of the screen.
label.position = ccp( size.width /2 , size.height/2 );
Cocos2d includes many helper macros for working with vectors. Ccp is one of those helper macros. What it does is simply create a CGPoint structure by writing less code.
- Now, all that is left is to actually place the label in the layer..
[self addChild: label];
You can find further information about parent-child relationships in the Cocos2d documentation at: prog_guide:basic_concepts
What just happened.
- They can execute actions: For example, you could tell the CCLabel we had in the previous example to move to the position (0,0) in 1 second. Cocos2d allows for this kind of action in a very easy fashion.
CCNodes properties.
Handling CCNodes.
A transition is a nice way to go from one scene to another. Instead of instantly removing one scene and showing the next one, Cocos2d can, for example, fade them, have them move out of the screen, and perform many other nice effects.
-.
Have a go hero – doing more with the HelloWorldScene Class.
(For more resources on this subject, see here.)
Checking your timing.
Time for action – using timers to make units shoot each second folder.
Unit should inherit from CCNode to be able to schedule methods, so let's make the corresponding changes.
#import "cocos2d.h"
@interface Unit
: CCNode {
}
- Import the HelloWorldScene.h. We'll need this class soon.
That is all you must change for now in the Unit.h file. Now, open the Unit.m file. You should see something like this:
#import "HelloWorldScene.h";
What we have to do now is fill it up.
#import "Unit.h"
@implementation Unit
@end
- Create the init method for the Unit class. This one is a simple example so we won't be doing a lot here:
This init method takes the HelloWorld layer as a parameter. We are doing that because when the object is instantiated it will need to be added as a child of the layer node.
-(id) initWithLayer:(HelloWorld*) game
{
if ((self = [super init]))
{
[game addChild:self];
[self schedule:@selector(fire) interval:1];
}
return (self);
}
When adding the unit object as a child of the layer node it allows it to schedule methods.
[game addChild:self];
If you are scheduling a method inside a custom class and it is not running, check whether you have added the said object as a child of a layer. Not doing so won't yield any errors but the method won't run!
This line is the one that does the magic! You pass the desired selector you want to run in the schedule parameter and a positive float number as the interval in seconds.
[self schedule:@selector(fire) interval:1];
- Now, add the fire method.
You did expect a bullet being fired with flashy explosions, didn't you? We'll do that later! For now content yourself with this. Each second after the Unit instance is created a "FIRED!!!" message will be output in the Console.
-(void)fire
{
NSLog(@"FIRED!!!");
}
We just need to make a couple of changes to the HelloWorldScene class to make this work.
- In the HelloWorldScene.h file, add the following line:
#import "Unit.h"
- Then in the HelloWorldScene.m file let's create an instance of the Unit class.
As you can see we are passing the HelloWorld layer to the Unit class to make use of it.
Unit * tower = [[Unit alloc]initWithLayer:self];
That is all, now Build and Run the project. You should see the following output:
As you can see, the unit created is firing a bullet each second, defeating every enemy troop on its way.!!!
Be careful when scheduling a method which outputs to the console using NSLog(). Doing this will reduce the FPS drastically.
What just happened?.
Delaying a method call
You can use timers to just call a method once, but instead of at that precise moment you could have it scheduled and called some seconds after that.
Time for action – destroying your units.
What just happened?.
Debugging cocos2d applications.
Time for action – checking Cocos2d debug messages.
What just happened?
We have just seen how to open the Xcode's Console. Now, you can get a lot more information from Cocos2d debug messages, such as when a particular object is dealloced.
Time for action – checking deallocing messages:
These two messages show what was dealloced in this particular case. What these classes do does not matter right now. However, CCScheduler is responsible of triggering scheduled callbacks and CCTextureCache is responsible of handling the loading of textures.>
What just happened?
The preceding messages are the types of messages you can get from the framework. These ones in particular are deallocing messages that allow us to know when Cocos2d has, for example, released memory for a given texture which it is not using anymore.
Time for action – knowing your errors:
That line of code creates a Sprite from an image file in your resource folder. As you may notice, "Icoun.png" is not present in the project's resource folder, so when the application is run and execution gets to that line of code, it will crash.
CCSprite * image = [CCSprite spriteWithFile:@"Icoun.png"];
[self addChild:image];
- Run the application and see it crash.
- Open the Console, and you will see the following output:
The debug messages tell you exactly what is failing. In this case, it couldn't use the image icoun.png. Why? This is because it is not there!
- Change the string to match the file's name to see the error go away.
What just happened?
Cocos2d debug messages can really help you save time when searching for these kinds of bugs. Thanks to the built-in debug messages, you can get an idea of what is happening while the application is running.
Time for action – removing debug messages.
Alternatively, you can have your project run in release configuration. To do that, select it from the overview dropdown under Active Configuration.
What just happened?.
Summary
In this article,.
Further resources on this subject:
- Cocos2d for iPhone: Surfing Through Scenes [article]
- Cocos2d for iPhone: Adding Layers and Making a Simple Pause Screen [article]
- Cocos2d for iPhone: Handling Accelerometer Input and Detecting Collisions [article] | https://www.packtpub.com/books/content/getting-started-cocos2d | CC-MAIN-2017-22 | refinedweb | 1,835 | 66.54 |
View topics with a specific tag
To view a topic with a specific tag:
- From the Topics screen, click on the 'Tags' link. The main area displays two fields: "View topics with tags" and, for administrators, "Add tags." A list of suggested tags is also displayed.
- Click on a tag whose associated topics you want to view. For example, click on 'android' to view all topics whose subject relates to "android." Or, type a tag's name into the 'View topics with tags' field and press return (MacOS X) or Enter (Microsoft Windows) to search for topics associated with that tag. Topics related to the tag are listed (if topics related to the tag are available.
If no topics are found, the message "No topics available in this group" is displayed. You can click on another tag (under the group name) to search for topics associated with that tag. | https://support.google.com/groups/answer/1311325?hl=en&ctx=cb&src=cb&cbid=r2p46p5s922s&cbrank=1 | CC-MAIN-2015-40 | refinedweb | 149 | 71.75 |
In the notify sketch this line of code in the loop() gives me an error.
pCharacteristic->setValue(&value, 1);
no instance of overloaded function matches the argument list. The second argument the value 1 is flagged. Also get candidate expects 1 argument, 2 provided. Wondering what the problem might be?
Also there is a difference in the libraries used in the code and those shown in the book.
I’m using VSC with PlatformIO.
Hi Charles.
That error means that the function setValue() only accepts one argument, but there are two arguments.
However, I’ve just tried the example in VS Code and it is working fine for me.
Did you add
#include <Arduino.h>
At the beginning of your sketch?
What is the version of the code that you are following? Always follow the code in the link. The link provides the most updated code. The eBook will be updated in the next release.
Regards,
Sara | https://rntlab.com/question/esp32-course-module-5-unit-2/ | CC-MAIN-2021-04 | refinedweb | 157 | 69.68 |
What's the Matter with JMatter?
Pages: 1, 2, 3, 4
Let's take a look at the listing for the implementation of the type Speaker.
Speaker.
The first thing to notice are the basic members of Speaker: name, bio, and photo.
private final StringEO name = new StringEO();
private final TextEO bio = new TextEO();
private final ImgEO photo = new ImgEO();
Name is a string type. The field bio, on the other hand, is marked as a TextEO, which signals to the framework that we want to use a text area for its user interface representation, and a large text field to persist the information (that a varchar(255) won't do). Finally we can store blobs right along other fields. The photo field is of type ImgEO. JMatter provides editors and renderers for many basic types (as well as formatters, parsers, and validators for zip codes, dates, SSNs, and more), and they can all be customized if necessary. For each member field we supply a getter method.
TextEO
ImgEO
The two other members have one-to-many relationships: talks and speaker recognitions (e.g., a speaker is both a JavaOne rock star and a Java champion).
private final RelationalList talks = new RelationalList(Talk.class);
public static Class talksType = Talk.class;
public static String talksInverseFieldName = "speaker";
private final RelationalList recognitions = new RelationalList(SpeakerRecognition.class);
public static Class recognitionsType = SpeakerRecognition.class;
For one-to-many associations, we use the JMatter RelationalList list type. There's also a slight wrinkle here in that we're required to provide a way for JMatter to statically infer the list item type. That's the reason for the metafields talksType and recognitionsType. If defining a bidirectional relationship (that is, if Talk points back to speaker), we signify this with another meta-field (talksInverseFieldName), again helping JMatter knit it all together.
RelationalList
talksType
recognitionsType
talksInverseFieldName
Here's some optional metadata for controlling the field display order in the user interface:
public static String[] fieldOrder = {"name", "recognitions", "talks"};
public static String[] tabViews = {"photo", "bio"};
The metafield tavViews specifies that the fields photo and bio should be displayed in separate tabs.
tavViews
photo
bio
Finally, here's another interesting customization:
private transient PhotoIconAssistant assistant = new PhotoIconAssistant(this, photo);
public Icon iconLg() { return assistant.iconLg(); }
public Icon iconSm() { return assistant.iconSm(); }
Here we're overriding the two methods that control which icon to use for this instance. Since we have a photo field with a picture of the speaker, a per-instance icon based on the speaker's photo is even better than a generic speaker icon. The class PhotoIconAssistant makes sure that we use the speaker's photo only if one is provided. It also makes sure to provide versions of the photo scaled to the appropriate size.
PhotoIconAssistant
Here's the listing for the class SpeakerRecognition.
SpeakerRecognition
We've identified two possible recognitions: Java rock star and Java champion. Each has its own unique icon in the brochure. So let's model this type simply with two fields: name and icon. Again, we want to use the icon as a per-instance icon, overriding whatever default icon is supplied for this type..
Talk
We define a talk topic, span (e.g., from 11:00 a.m. to 11:45 a.m.), and description. There are also one-to-one associations to the talk's speaker and location (room). We setup the default duration for talks to 45 minutes, and supply the necessary accessor methods.
Unlike our other type definition that extends AbstractComplexEObject, this class extends the JMatter class CalEvent. By subclassing CalEvent we specify that talks are events that can show up in a calendar. As such, they must bear a TimeSpan-type field, which we do through the field span, and which JMatter dynamically locates.
AbstractComplexEObject
CalEvent
span
A future version of JMatter will likely automatically know to expose a calendar view for the type Talk, simply by virtue that Talks bear a field of type TimeSpan, without requiring the subclassing of CalEvent. We're trying to be a little more duck-typed.
CalEvent.
JMatter has a second, more sophisticated version of calendaring that supports viewing events in multiple locations side by side on the same day view, with strong support for drag and drop. The Sympster demo application which is bundled with JMatter, is an illustration of this version. In it, dropping a talk onto a day view of a calendar automatically creates a Session at the designated time and location. Dragging the view of the session from one column to another automatically updates the talk's location. It's a very visual way of manipulating information.
Now to define the three specializations of Talk: Keynotes, Technical Sessions, and BOFs. Keynote is the simplest as it requires essentially no specialization.
The next one, BOF, adds the field code (e.g., BOF-2344) and the corresponding accessor. We also mark the code field as unique through the identities metafield.
code
identities
private final StringEO code = new StringEO();
public StringEO getCode() { return code; }
public static String[] identities = {"code"};
For tech sessions, which are more structured, we need to supply the track for the talk and a talk level such as beginner, intermediate, etc. (Here is the listing)
Finally, the TalkLevel type is very similar to how we modeled a Track. It's very simple. JMatter recently defined a new annotation: @EditWithCombo, that customizes the editor in a field context to make it easier to pick.
@EditWithCombo
@Persist
@EditWithCombo
public class TalkLevel extends AbstractComplexEObject
{
...
We typically don't build anything linearly. I developed this sample application over a couple of sessions, iteratively. However, it's difficult to relay the experience in writing.
It's easier to understand how all this works and how the pieces fit together when we start dealing with examples; let's stop talking about types and start looking at instances.
This time around, rather than customize the classbar from the user interface, let's edit the file class-list.xml directly.
To run our application, we must first export the schema and then invoke the run target:
run
ant schema-export
ant run
Rather than fill this article with screenshots, I went ahead and recorded a screencast of a session with our JavaOneMgr application.
Our base application, the JavaOneMgr, consists in its entirety of less than 500 lines of source code. This metric illustrates some of the leverage that JMatter affords developers.
There's much that can be built on top of this base application. Some use cases include:
Exposing custom behavior into the application is very simple and straightforward, by defining a method marked with the @Cmd annotation.
@Cmd
There are many more features to this framework than we have space for in this article. Here are a few features of JMatter that we did not discuss:
JMatter also automates the production of your Java Web Start war file. Please refer to Chapter 15 in JMatter's manual for the details.
Let's switch gears and discuss how this project has evolved during the last. | http://www.onjava.com/pub/a/onjava/2007/08/16/whats-the-matter-with-jmatter.html?page=3 | CC-MAIN-2014-35 | refinedweb | 1,171 | 55.34 |
If you are using the ReportViewer or ReportingServices in your application and have ever said to yourself, "You know, it would be really nice if I could take this report and attach it as an email and send it to someone", then you are in the right spot. In my last project, we were using the ReportViewer and had many reports lying around and I found myself saying just that. I didn't want to write the PDF's to the file system of the server, then attach it to the email and then clean up after myself. I wanted to stream it directly into the email. Here is how I accomplished this task.
This article assumes that you have your web site set up to use the standard SMTP settings for ASP.NET 2.0. Your web.config should have a section that looks something like this:
<!--<span class="code-comment"> SMTP Settings --></span>
<system.net>
<mailSettings>
<smtp deliveryMethod="Network" from="no-reply@mywebsite.com">
<network host="localhost" port="25" defaultCredentials="true"/>
</smtp>
</mailSettings>
</system.net>
This article will assume that you also have a working report and it's associated datasource(s). It's beyond the scope of this article to explain creating the report or using the report viewer. There are plenty of tutorials just on doing that. This will focus on creating the report in memory and streaming it into an email as an attachment.
Make sure that you have all the namespaces that you are going to need in your code. We will be using the following:
System.Net.Mail
Microsoft.Reporting.WebForms
System.IO
So in order to begin setting up our report as an attachment, the first thing that you must do is, get the report and it's data sources set up correctly.
LocalReport lr = new LocalReport();
lr.ReportPath = HttpContext.Current.Server.MapPath("~/Reports/myReport.rdlc");
The code above creates a new LocalReport in memory and sets the path to the actual report definition local client (.rdlc) file. The next step in the process is to get the datasources for the report setup. We do this by using table adapters and adding them to the local reports datasource collection. For this example, we will say that we have two data sources in our report, perhaps there is a sub-report that is displayed.
LocalReport
MyFirstDataSetTableAdapters.MyFirstTableAdapter ta1 =
new MyFirstDataSetTableAdapters.MyFirstTableAdapter();
MySecondDataSetTableAdapters.MySecondTableAdapter ta2 =
new MySecondDataSetTableAdapters.MySecondTableAdapter();
ReportDataSource ds1 = new ReportDataSource
("MyFirstDataSet_MyFirstTableAdapter", ta1.GetData(myParam1, myParam2));
ReportDataSource ds2 = new ReportDataSource
("MySecondDataSet_MySecondTableAdapter", ta2.GetData(myParam3));
You can see in the code above that we create our table adapters, ta1 and ta2. We then create two instances of a ReportDataSource using our table adapters' GetData method as the dataSourceValue. You might have to pass a couple of parameters to the table adapter method as we did here - it all depends on your architecture. Here we just showed what it would look like by passing a couple of pseudo parameters to our GetData method.
ta1
ta2
ReportDataSource
GetData
dataSourceValue
The critical part here is making sure that the name of the datasource is EXACTLY the same as it is in the report. You can get this information by looking at the datasources in the report designer and basically copying and pasting the values. If you get the names incorrect, the report will not generate at all.
We then add each ReportDataSource to the LocalReport datasources collection by calling the Add method and passing in the appropriate variables
.
Add
lr.DataSources.Add(ds1);
lr.DataSources.Add(ds2);
So from this point, we should have a LocalReport, with two datasources that are mapped to a couple of tables from our datasets, all set and ready to go. The next step will be to set up the email attachment. To do this, we will need to render the LocalReport into a stream that we can use for the email attachment. Step one is to set up all the variables that the Render method of the LocalReport needs and then render the report into memory.
LocalReport
Render
Warning[] warnings;
string[] streamids;
string mimeType;
string encoding;
string extension;
byte[] bytes = lr.Render("PDF", null, out mimeType,
out encoding, out extension, out streamids, out warnings);
You can see here that we set the format that we want out of the report to PDF. We could also use Excel or another valid type of output here if we wanted to. We also sent in null as the value for deviceInfo. If you wanted to, you could send in a string with the properly formatted XML device information. The rest of the parameters are output parameters as you can see. For this application, we don't use any of them. Once we execute this code, we now have a localReport, stored as a string of bytes in memory. We are now ready to stream these bytes into our email as an attachment.
null
deviceInfo
localReport
MemoryStream s = new MemoryStream(bytes);
s.Seek(0, SeekOrigin.Begin);
Here we create a MemoryStream from our string of bytes and set the position to the beginning of the stream by seeking there. If we did not seek the beginning of the stream, when we go to create the attachment, we would be at the end of the stream and nothing would get added in our attachment.
MemoryStream
Attachment a = new Attachment(s, "myReport.pdf");
We create a new email Attachment and pass in our MemoryStream (which was the string of bytes we got from executing the Render method of our LocalReport) and a name for the attachment. We are now ready to create an email, add our attachment and send the report on it's way to our lucky recipient.
Attachment
Render
MailMessage message = new MailMessage(
"my_friend@herEmail.com", "me@myEmail.com",
"A report for you!", "Here is a report for you");
message.Attachments.Add(a);
SmtpClient client = new SmtpClient();
client.Send(message);
First we create the mail message and add in the from, to, subject and body string parameters. We then add the attachment to the messages attachments collection by executing the Add method and passing our memory stream as the parameter. We then get a new SmtpClient and Send the message on it's way.
SmtpClient
Send
The nice thing about creating the report as a stream and attaching it to the email this way is that it is all done in memory. There's no file creation and cleanup work that has to be done. Any report you create for use in the ReportViewer can be used here as well.
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
ReportDataSource datasource1 = new ReportDataSource("SportsDataSet_SportRosterTableAdapter", srta.GetData(teamId, schoolId));
lr.DataSources.Add(datasource1);
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/20580/Email-a-LocalReport-as-a-PDF-Attachment?fid=461782&df=10000&mpp=50&noise=4&prof=True&sort=Position&view=None&spc=Relaxed | CC-MAIN-2015-48 | refinedweb | 1,184 | 63.49 |
Accelerated functions to calculate Word Mover's Distance
Project description
Fast Word Mover's Distance
Calculates Word Mover's Distance as described in From Word Embeddings To Document Distances by Matt Kusner, Yu Sun, Nicholas Kolkin and Kilian Weinberger.
The high level logic is written in Python, the low level functions related to linear programming are offloaded to the bundled native extension. The native extension can be built as a generic shared library not related to Python at all. Python 2.7 and older are not supported. The heavy-lifting is done by google/or-tools.
Installation
pip3 install wmd
Tested on Linux and macOS.
Usage
You should have the embeddings numpy array and the nbow model - that is, every sample is a weighted set of items, and every item is embedded.
import numpy from wmd import WMD embeddings = numpy.array([[0.1, 1], [1, 0.1]], dtype=numpy.float32) nbow = {"first": ("#1", [0, 1], numpy.array([1.5, 0.5], dtype=numpy.float32)), "second": ("#2", [0, 1], numpy.array([0.75, 0.15], dtype=numpy.float32))} calc = WMD(embeddings, nbow, vocabulary_min=2) print(calc.nearest_neighbors("first"))
[('second', 0.10606599599123001)]
embeddings must support
__getitem__ which returns an item by it's
identifier; particularly,
numpy.ndarray matches that interface.
nbow must be iterable - returns sample identifiers - and support
__getitem__ by those identifiers which returns tuples of length 3.
The first element is the human-readable name of the sample, the
second is an iterable with item identifiers and the third is
numpy.ndarray
with the corresponding weights. All numpy arrays must be float32. The return
format is the list of tuples with sample identifiers and relevancy
indices (lower the better).
It is possible to use this package with spaCy:
import spacy import wmd nlp = spacy.load('en_core_web_md') nlp.add_pipe(wmd.WMD.SpacySimilarityHook(nlp), last=True) doc1 = nlp("Politician speaks to the media in Illinois.") doc2 = nlp("The president greets the press in Chicago.") print(doc1.similarity(doc2))
Besides, see another example which finds similar Wikipedia pages.
Building from source
Either build it as a Python package:
pip3 install git+
or use CMake:
git clone --recursive cmake -D CMAKE_BUILD_TYPE=Release . make -j
Please note the
--recursive flag for
git clone. This project uses source{d}'s
fork of google/or-tools as the git submodule.
Tests
Tests are in
test.py and use the stock
unittest package.
Documentation
cd doc make html
The files are in
doc/doxyhtml and
doc/html directories.
Contributions
...are welcome! See CONTRIBUTING and code of conduct.
License
README {#ignore_this_doxygen_anchor}
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/wmd/ | CC-MAIN-2022-21 | refinedweb | 451 | 59.4 |
Getting and Setting CGI Cookies in CKristaps Dzonsons
Source Code
Setting and getting cookies with kcgi is easy. It uses the same logic as setting and getting form fields. Cookies consist of name-value pairs that we can grok from a table. I'll lead this tutorial as if we were reading a source file from top to bottom, so let's start with headers. We'll obviously need kcgi and stdint.h, which is necessary for some types found in the header file.
#include <sys/types.h> /* size_t, ssize_t */ #include <stdarg.h> /* va_list */ #include <stddef.h> /* NULL */ #include <stdint.h> /* int64_t */ #include <time.h> /* time(3) */ #include <kcgi.h>
Next, let's define identifiers for our cookies. These will later be mapped to the cookie names and the validators for their values.
enum cookie { COOKIE_STRING, COOKIE_INTEGER, COOKIE__MAX };
The enumeration will allow us to bound an array to
COOKIE__MAX and refer to individual buckets in the array by the
enumeration value.
I'll assume that
COOKIE_STRING is assigned 0 and
COOKIE_INTEGER, 1.
Next, connect the indices with validation functions and names.
The validation function is run by khttp_parse(3); the name is the cookie key name.
Built-in validation functions, which we'll use, are described in kvalid_string(3).
In this example,
kvalid_stringne will validate a non-empty (nil-terminated) C string, while
kvalid_int
will validate a signed 64-bit integer.
static const struct kvalid cookies[COOKIE__MAX] = { { kvalid_stringne, "string" }, /* COOKIE_STRING */ { kvalid_int, "integer" }, /* COOKIE_INTEGER */ };
Before doing any parsing, I sanitise the HTTP context. I'll let any page request pass, but will make sure our MIME type and HTTP method are sane.
static enum khttp sanitise(const struct kreq *r) { if (r->mime != KMIME_TEXT_HTML) return KHTTP_404; else if (r->method != KMETHOD_GET) return KHTTP_405; else return KHTTP_200; }
Now the scaffolding is done.
What about the cookies?
To begin with, you should glance at the RFC 6265,
HTTP State Management
Mechanism, to gain an understanding of how cookies work.
You may also want to read about the HttpOnly
and
Secure flags also available.
In our application, let's just attempt to read the cookie; and if it doesn't exist, write the cookie along with the page.
If it does exist, we'll indicate that in our page.
We'll focus only on the
COOKIE_STRING cookie, and will set the cookie to be visible to the path root and expire in
an hour.
We'll use kutil_epoch2str(3) to format the date.
Headers are output using khttp_head(3), with the document body started
with khttp_body(3).
static void process(struct kreq *r) { char buf[32]; khttp_head(r, kresps[KRESP_STATUS], "%s", khttps[KHTTP_200]); khttp_head(r, kresps[KRESP_CONTENT_TYPE], "%s", kmimetypes[r->mime]); if (r->cookiemap[COOKIE_STRING] == NULL) khttp_head(r, kresps[KRESP_SET_COOKIE], "%s=%s; Path=/; expires=%s", cookies[COOKIE_STRING].name, "Hello, world!", kutil_epoch2str(time(NULL) + 60 * 60, buf, sizeof(buf))); khttp_body(r); khttp_puts(r, "<!DOCTYPE html>" "<title>Foo</title>"); if (r->cookiemap[COOKIE_STRING] != NULL) khttp_puts(r, "Cookie found!"); else khttp_puts(r, "Cookie set."); }
Most of the above code is just to handle the HTML5 bits, and we deliberately used the smallest possible page. (Yes, this is a valid page—validate it yourself to find out!) For any significant page, you'd want to use kcgihtml(3).
Putting all of these together: parse the HTTP context, validate it, process it, then free the resources.
The HTTP context is closed with khttp_free(3).
Note that the identifiers for the cookies,
enum cookie, are also used to identify any form input.
So if you have both form input and cookies (which is common), they can either share identifiers or use unique ones.
In other words,
COOKIE__MAX defines the size of both
fieldmap and
cookiemap, so the
validator for
COOKIE_STRING is also valid for form inputs of the same name.
int main(void) { struct kreq r; enum khttp er; if (khttp_parse(&r, cookies, COOKIE__MAX, NULL, 0, 0) != (r.mime == KMIME_TEXT_HTML) khttp_puts(&r, "Could not service request."); } else process(&r); khttp_free(&r); return 0; }
For compilation, linking, and installation, see Getting Started with CGI in C. | https://kristaps.bsd.lv/kcgi/tutorial1.html | CC-MAIN-2021-21 | refinedweb | 675 | 67.45 |
Now you have multiple tests you are probably feeling pretty good about your progress. However there are other ways to improve code efficiency further — you may notice that you've so far had to include a
setUp() and a
tearDown() method in each test file, going by the current constructs we've seen in this series. If you have several dozen tests then that’s a lot of code duplication! In this article we'll look at how to put the
setUp()/
tearDown() code common to all tests into a
TestBase class, which can then be imported into each individual test file.
test_base.py
To start with, create a new file called
test_base.py, in the same directory as your existing test cases.
Next, move your important statements that relate to the common setup (
unittest,
Marionette and
time) into the file, along with a
TestBase class containing the
setUp() and
tearDown() methods, and associated common helper functions (such as
unlock_screen()). The file should look something like this:
import time import unittest from marionette import Marionette class TestBase(unittest.TestCase): def unlock_screen(self): self.marionette.execute_script('window.wrappedJSObject.lockScreen.unlock();') def kill_all(self): self.marionette.switch_to_frame() self.marionette.execute_async_script(""" // Kills all running apps, except the homescreen. function killAll() { let manager = window.wrappedJSObject.AppWindowManager; let apps = manager.getApps(); for (let id in apps) { let origin = apps[id].origin; if (origin.indexOf('verticalhome') == -1) { manager.kill(origin); } } }; killAll(); // return true so execute_async_script knows the script is complete marionetteScriptFinished(true); """) def setUp(self): # Create the client for this session. Assuming you're using the default port on a Marionette instance running locally self.marionette = Marionette() self.marionette.start_session() # Unlock the screen self.unlock_screen() # kill all open apps self.kill_all() # Switch context to the homescreen iframe and tap on the contacts icon time.sleep(2) home_frame = self.marionette.find_element('css selector', 'div.homescreen iframe') self.marionette.switch_to_frame(home_frame) def tearDown(self): # Close the Marionette session now that the test is finished self.marionette.delete_session()
Updating your test files
With your
test_base.py file created, you need to import
TestBase into your test files, and the test classes need to be changed to extend the
TestBase class:
import unittest from marionette import Wait from marionette import By from test_base import TestBase class TestContacts(TestBase): def test(self): # Tests in here if __name__ == '__main__': unittest.main()
Try running your test file again.
It may not look like much now but when you have dozens or hundreds of tests this really saves a lot of duplicate code. | https://developer.mozilla.org/en-US/docs/Archive/B2G_OS/Automated_testing/gaia-ui-tests/Part_8_Using_a_base_class | CC-MAIN-2020-50 | refinedweb | 419 | 50.63 |
Enabling Cross-Origin Request in Web API
Ever got this error when building a web client making a AJAX request to another domain?
XMLHttpRequest cannot load [someurl]. No ‘Access-Control-Allow-Origin’ header is present on the requested resource. Origin ‘null’ is therefore not allowed access.
This post will give you some guidelines how to fix this.
Assume you have this web client code to call [someuri] to get some JSON-data.
[code]
$(document).ready(function () {
$.getJSON(uri)<br />
.done(function (data) {
// Do something with data
});
});
[/code]
The problem is caused by the fact that browser security prevents a web page making a AJAX request to another domain. But there’s a solution to this problem. On the site, there’s a good post which explains how the enable CORS for your Web API service.
In short it means that you need to reference System.Web.Cors and System.Web.Http.Cors from your project. If you don’t have the assemblies, you can get it on NuGet.
After this on your HttpConfiguration you need to enable CORS:
[code]
// Enable Cors
httpConfig.EnableCors();[/code]
The last thing you need to do is configuring CORS on your controllers by using the [EnableCors] attribute. With the [DisableCors] attribute, you can disable certain actions for CORS.
[code][EnableCors(origins: “”, headers: “*”, methods: “*”)]
public class TestController : ApiController
{
[DisableCors]
public HttpResponseMessage PutItem(int id) { … }
}[/code] | https://www.jestersoft.nl/tag/javascript/ | CC-MAIN-2019-13 | refinedweb | 229 | 67.65 |
I have completed this project for my Computer Science I class. I received full credit because the program is functional with no issues at a glance. However, upon testing the program again (after it was graded), I found that under specific circumstances an infinite loop occurs. I couldn't go on with life until I figure out what I've done wrong! I have posted the full instructions below for reference:
An approximate value of pi can be calculated using the series given below:
pi = 4 [ 1 - 1/3 + 1/5 - 1/7 + 1/9 . . . + ((-1)^n)/(2n + 1) ]
Write a C++ program to calculate the approximate value of pi using this series. The program takes an input n that determines the number of terms in the approximation of the value of pi and outputs the approximation. Include a loop that allows the user to repeat this calculation for new values n until the user says she or he wants to end the program.
Savitch, Walter J., and Kenrick Mock. "Chapter 3." Problem Solving with C++. 8th ed. Boston: Pearson Addison Wesley, 2012. 174. Print.
Here is the completed program:
#include <iostream> #include <cmath> using namespace std; int main() { int terms; double pi = 1; char restart = 'N'; // Do the entire program as long as the user wants do { // Set pi back to 1 if user wants to restart if ((restart == 'Y') || (restart == 'y')) pi = 1; cout << "Input the number of terms to approximate pi to. The number of terms must be greater than zero." << endl; cout << ">> "; cin >> terms; // The number of terms must be greater than 0 for approximation purposes if (terms > 0) { // The series approximates to the value terms times for (int i = 1; i <= terms; i++) { pi += 4 * (pow(-1,i))/((2*i)+1); } // Once loop is complete, add 3 for final pi value pi += 3; cout << "The approximated value of pi is: " << pi << "." << endl << endl; // Allow the user to restart program cout << "Enter Y to start a new approximation or any other key to terminate the program." << endl; cout << ">> "; cin >> restart; // A new line is inserted for spacing if the user wants to restart cout << endl; } // Stop the program if the user enteres an invalid value. else cout << endl << "You did not enter a valid value.\n" << endl; } while ((restart == 'Y') || (restart == 'y')); return 0; }
Steps to produce error:
- Run program. Enter a number, as requested.
- Results are shown. Hit Y to restart program.
- Instead of typing a number, type a letter.
- Infinite loop occurs.
If the program is started and a letter is entered instead of a number, it does reject it and will end the program. The infinite loop occurs only after the program has been run at least one time. What it should do is always reject invalid entries, even after the program has been run once.
I would like advice on how I can make the program always stop the user if the user enters incorrect information. It would also be nice to know how I can make the program restart if the user enters incorrect information instead of the program just ending.
I am constantly trying to improve my problem solving and programming skills, so offer advice for anything you see that could be better! Thank you. | https://www.daniweb.com/programming/software-development/threads/464551/infinite-loop-for-certain-entries | CC-MAIN-2019-04 | refinedweb | 545 | 70.43 |
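For anyone hitting the same thing: the cause is that when cin >> terms sees a letter, the stream enters a failed state and leaves the character in the buffer. Every later extraction, including cin >> restart, then fails instantly without reading anything, so restart keeps its old 'Y' and the do-while never ends. A minimal sketch of the usual fix (a hypothetical helper, not part of the graded assignment) is to clear the error state and discard the bad input before retrying:

#include <iostream>
#include <limits>

// Keeps prompting until the user types something numeric.
int readInt()
{
    int value;
    while (!(std::cin >> value))
    {
        std::cin.clear(); // clear the fail bit set by the bad input
        std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); // discard the rest of the line
        std::cout << "Invalid input, try again: ";
    }
    return value;
}

Calling a helper like this wherever the program currently does cin >> terms would both stop the infinite loop and let the user retry instead of the program terminating.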
I'm new to C++ and I wonder why this code compiles fine with g++ and MSVC:
namespace t {
    class A {
    public:
        int i;
    };

    int b(int k, A a) {
        return k;
    }
}
int main()
{
    t::A cl;
    return b(5, cl);
}
I thought that b should not be visible from the main function.
It is a standard C++ feature called argument-dependent lookup (ADL): because the argument cl has type t::A, the namespace t is added to the set of scopes searched for b, so the unqualified call compiles.
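A minimal sketch of the same mechanism (hypothetical names):

#include <iostream>

namespace t {
struct A {};
void greet(A) { std::cout << "found via ADL\n"; }
}

int main()
{
    t::A a;
    greet(a); // unqualified call: t is searched because a's type lives in t
    return 0;
}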
I have a C++ shared object that I am compiling using the following command.
g++ -g -Wall -c -o cpplib.o cpplib.cpp
And I am creating the shared object using the following command.
g++ -shared -o libcpplib.so cpplib.o
(I am aware that I can do this in one single command; these are two separate invocations because I had the project set up in Eclipse and that's how Eclipse CDT does it.)
My host runs a RedHat 6.x x86_64 OS.
The compilation fails with the following error.
$ g++ -g -Wall -fPIC -c -o cpplib.o cpplib.cpp
$ g++ -shared -fPIC -o libcpplib.so cpplib.o
/usr/bin/ld: /usr/lib/gcc/x86_64-redhat-linux/4.4.7/libstdc++.a(ios_init.o): relocation R_X86_64_32 against `pthread_cancel' can not be used when making a shared object; recompile with -fPIC
/usr/lib/gcc/x86_64-redhat-linux/4.4.7/libstdc++.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
I am certain that I haven't used the 'pthread_cancel' function in my code. Please help me.
I'm not a C++/template guru and I can't decide if this is really valid code or if I have encountered a gcc bug:
template <typename T> struct A {
    void *p;
};

template <typename T> struct B : A<T> {
    void *foo() { return p; }
};
g++ says "error: 'p' was not declared in this scope". Microsoft's compiler is happy with the same code.
Can anyone help?
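For what it's worth, this looks like two-phase name lookup rather than a bug: p is a member of the dependent base A<T>, so a conforming compiler does not look it up in the template definition context, while MSVC has historically been lax here. A sketch of the usual fix, assuming the reconstruction of the code above is right:

template <typename T> struct A {
    void *p;
};

template <typename T> struct B : A<T> {
    void *foo() { return this->p; } // or equivalently: return A<T>::p;
};

Making the access dependent (this->p) defers the lookup to instantiation time, where the base-class members are visible.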
See a relevant thread in the forum.
Posted by: kohsuke on October 27, 2006 at 10:48 PM
GREAT !! .. I have been waiting for this "feature" for a long time ;). No more web modules needed for web-service sessions. Thanks for that ;)
Posted by: stenlee on November 14, 2006 at 02:56 AM
I'm glad you liked it.
Posted by: kohsuke on November 14, 2006 at 08:40 AM
Looks nice but to tell you the truth, exposing persistent objects seems to me like exposing remote EJB Entity Beans and it just does not make sense.
Posted by: miluch on June 18, 2007 at 05:25 AM
Hi, this is all very nice and works fine. But I need to go further. To support recovery after a crash I need to make the second port use the same session as the first one. The full story is here...
As you see, I've only found an unportable way of doing it... Any suggestions?
Posted by: evetere on January 15, 2008 at 07:14 AM
I solved my problem and posted to the same thread the solution.
Posted by: evetere on January 15, 2008 at 08:32 AM
Hi,
I was trying the stateful WS sample in the latest metro distribution.
I get different behavior depending on which JDK I use.
In JDK 1.5, it runs just fine.
In JDK 1.6, it fails with the error below:
C:\metro\jax-ws-latest-wsit\samples\stateful\src\stateful\server\Book.java:53: Classes annotated with @javax.jws.WebService must have a public default constructor. Class: stateful.server.Book
[apt] public class Book {
Is this expected behavior? If I fix it by providing a public constructor and make the instance variable non-final, would it still retain the stateful nature of the web service?
Please clarify.
Posted by: aruld on March 21, 2008 at 10:38 AM | http://weblogs.java.net/blog/kohsuke/archive/2006/10/stateful_web_se.html | crawl-002 | refinedweb | 315 | 75.91 |
Bug Description
== Comment: #0 - Calvin L. Sze <email address hidden> - 2016-11-01 23:09:10 ==
The team has changed to bare-metal Ubuntu 16.04. The problem still exists, so it is not related to virtualization.
Since the bug is complicated to reproduce, could we use a set of tools to collect data when this happens?
---Problem Description---
MongoDB has a memory corruption issue which occurs only on Ubuntu 16.04; it does not occur on Ubuntu 15.
Contact Information =Calvin Sze/Austin/IBM
---uname output---
Linux master 4.4.0-36-generic #55-Ubuntu SMP Thu Aug 11 18:00:57 UTC 2016 ppc64le ppc64le ppc64le GNU/Linux
Machine Type = Model: 2.1 (pvr 004b 0201) Model name: POWER8E (raw), altivec supported
---System Hang---
the system is still alive
---Debugger---
A debugger is not configured
---Steps to Reproduce---
Unfortunately, not very easily. I had a test case that I was running on ubuntu1604-
About 3.5% of the test runs on ubuntu1604-
Hoping to be able to see the data that was being written and corrupting the stack, I manually injected a guard region into the stack of the failing functions as follows:
+namespace {
+
+class Canary {
+public:
+
+ static constexpr size_t kSize = 1024;
+
+ explicit Canary(volatile unsigned char* const t) noexcept : _t(t) {
+ ::memset(const_cast<unsigned char*>(_t), kBits, kSize);
+ }
+
+ ~Canary() {
+ _verify();
+ }
+
+private:
+ static constexpr uint8_t kBits = 0xCD;
+ static constexpr size_t kChecksum = kSize * size_t(kBits);
+
+ void _verify() const noexcept {
+ invariant(std::accumulate(_t, _t + kSize, size_t(0)) == kChecksum);
+ }
+
+ const volatile unsigned char* const _t;
+};
+
+} // namespace
+
Status bsonExtractField(const BSONObj& object, StringData fieldName, BSONElement* outElement) {
+
+ volatile unsigned char* const cookie = static_cast<unsigned char*>(::alloca(Canary::kSize));
+ const Canary c(cookie);
+
When running with this, the invariant would sometimes fire. Examining the stack cookie under the debugger would show two consecutive bytes, always at an offset ending 0x...e, written as either 0 0, or 0 1, somewhere at random within the middle of the cookie.
This indicated that it was not a conventional stack smash, where we were writing past the end of a contiguous buffer. Instead it appeared that either the currently running thread had reached up some arbitrary and random amount on the stack and done either two one-byte writes, or an unaligned 2-byte write. Another possibility was that a local variable had been transferred to another thread, which had written to it.
However, while looking at the code to find such a thing, I realized that there was another possibility, which was that the bytes had never been written correctly in the first place. I changed the stack canary constructor to be:
+ explicit Canary(volatile unsigned char* const t) noexcept : _t(t) {
+ ::memset(const_cast<unsigned char*>(_t), kBits, kSize);
+ _verify();
+ }
So that immediately after writing the byte pattern to the stack buffer, we verified the contents we wrote. Amazingly, this *failed*, with the same corruption as seen before. This means that either something overwrote the stack cookie region between the time we called memset and the time we read the bytes back, or the bytes were never written correctly by memset, or memset wrote the bytes but the underlying physical memory never took the write.
Stack trace output:
no
Oops output:
no
Userspace tool common name: MongoDB
Userspace rpm: mongod
The userspace tool has the following bit modes: 64bit
System Dump Info:
The system is not configured to capture a system dump.
Userspace tool obtained from project website: na
*Additional Instructions for Lilian Romero/Austin/IBM:
-Post a private note with access information to the machine that the bug is occurring on.
-Attach sysctl -a output to the bug.
-Attach ltrace and strace of userspace application.
== Comment: #1 - Luciano Chavez <email address hidden> - 2016-11-02 08:41:47 ==
Normally for userspace memory corruption type problems I would recommend Valgrind's memcheck tool though if this works on other versions of linux, one would want to compare the differences such as whether or not you are using the same version of mongodb, gcc, glibc and the kernel.
Has a standalone testcase been produced that shows the issue without mongodb?
== Comment: #2 - Steven J. Munroe <email address hidden> - 2016-11-02 10:27:40 ==
We really need that standalone test case.
Need to look at WHAT c++ is doing with memset. I suspect the compiler is short circuiting the function and inlining. That is what you would want for optimization, but we need to know so we can steer this to the correct team.
== Comment: #3 - Calvin L. Sze <email address hidden> - 2016-11-02 13:17:30 ==
== Comment: #4 - William J. Schmidt <email address hidden> - 2016-11-02 16:29:26 ==
(In reply to comment #3)
>
It's unclear to me yet that we have evidence of this being a problem in the toolchain. Does the last experiment (revised Canary constructor) ALWAYS fail, or does it also fail only every 24 - 48 hours? If the latter, then all we know is that stack corruption happens. There's no indication of where the wild pointer is coming from (application problem, compiler problem, etc.). If it does always fail, however, then I question the assertion that they can't provide a standalone test case.
We need something more concrete to work with.
Bill
== Comment: #5 - Calvin L. Sze <email address hidden> - 2016-11-03 18:08:33 ==
Could this ticket be viewed by external customer/ISV?
I am thinking about how to establish direct communication between the MongoDB development team and the experts/owner of the ticket, to bypass the middle man, me :-)
Here are the answers from Andrew, the MongoDB development director, to my 3 questions. In addition, he added some comments.
Basically, there are 3 questions,
> 1. Is the MongoDB binary built with the gcc that came with the Linux distribution, or with the IBM Advance Toolchain gcc?
We build our own GCC, but we have reproduced the issue with both our custom GCC, and the builtin linux distribution GCC. We have also reproduced with clang 3.9 built from source on the Ubuntu 16.04 POWER machine, so we do not think that this is a compiler issue (could still be a std library issue).
> 2. Does the last experiment (revised Canary constructor) ALWAYS fail, or does it also fail only ever 24 - 48 hours?
No, we have never been able to construct a deterministic repro. We are only able to get it to fail after running the test a very large number of times.
> 3. Is there any way we can have a standalone test case without MongoDB?
We do not have such a repro at this time.
I do understand the position they are taking - it isn't a lot of information to go on, and most of the time the correct response to a mysterious software crash is to blame the software itself, not the surrounding ecosystem. However, we have a lot of *indirect* evidence that has made us skeptical that this is our bug. We would love to be proved wrong!
- The stack corruption has not reproduced on any other systems. We are running these same tests on every commit across dozens of Linux variants, and across four cpu architectures (x86_64, POWER, zSeries, ARMv8).
- We don't see crashes on other POWER distros, but we do on Ubuntu POWER.
- We don't see crashes on Windows, Solaris, OS X
- We have run the tests under the clang address sanitizer, with no reports.
- We have enabled the clang address sanitizer use-after-return detector, and found no results.
If this were a wild pointer in the MongoDB server process that was writing to the stack of other threads, we would expect to see corruption show up elsewhere, but we simply do not.
However, lets assume that this is a bug in our code, that for whatever reason only reveals itself on POWER, and only on Ubuntu. We would still be interesting in learning from the kernel team if there are additional power specific debugging techniques that we might be able to apply. In particular, the ability to programmatically set/unset hardware watchpoints over the stack canary. Another possibility would be to mprotect the stack canary, but it is not clear to us whether it is valid to mprotect part of the stack, either in general, or on POWER.
We would be happy to hear any suggestions on how to proceed.
Thanks,
Andrew
== Comment: #6 - Steven J. Munroe <email address hidden> - 2016-11-03 18:34:30 ==
You could tell us what specific GCC version you are based on, and its configure options.
You could provide the disassembly of the canary code.
== Comment: #7 - William J. Schmidt <email address hidden> - 2016-11-03 23:01:55 ==
It would be useful to see what the Canary is compiled into, as Steve suggested. Let's make sure it's doing what we think it is.
One reason you see this on Ubuntu 16.04 and not on another linux distro is likely the glibc level. The other distro's glibc is quite old by comparison. glibc 2.23, which appears on Ubuntu 16.04, is the first version to be compiled with -fstack-
I assume that glibc 2.23 was compiled with Ubuntu's version of gcc 5 that ships with the system, in case that becomes relevant.
I don't personally have a lot of experience with trying to debug something of this nature, in case we don't see something obvious from the disassembly of the canary. CCing Ulrich Weigand in case he has some ideas of other approaches to try.
== Comment: #9 - Ulrich Weigand <email address hidden> - 2016-11-04 12:21:48 ==
I don't really have any other great ideas either. Just two comments:
- Even though the original reporter mentioned they already tried clang's address sanitizer, I'd definitely still also try reproducing the problem under valgrind -- the two are different in what exactly they detect, and using both tools in a complex problem can only help.
- The Canary code sample above has strictly speaking undefined behavior, I think: it is calling memset on a const *. (The const_cast makes the warning go away, but doesn't actually cure the undefined behavior.) I don't *think* this will cause codegen changes in this example, but it cannot hurt to try to fix this and see if anything changes.
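A sketch of what that cleanup might look like (an illustration only, not MongoDB's actual patch; mongo's invariant() is replaced with a trap here):

#include <cstddef>
#include <cstdint>
#include <cstring>

class Canary {
public:
    static constexpr size_t kSize = 1024;

    explicit Canary(volatile unsigned char* t) noexcept : _t(t) {
        // _t is no longer pointer-to-const, so only volatile is cast away.
        std::memset(const_cast<unsigned char*>(_t), kBits, kSize);
        _verify();
    }

    ~Canary() { _verify(); }

private:
    static constexpr uint8_t kBits = 0xCD;
    static constexpr size_t kChecksum = kSize * size_t(kBits);

    void _verify() const noexcept {
        size_t sum = 0;
        for (size_t i = 0; i < kSize; ++i)
            sum += _t[i];
        if (sum != kChecksum)
            __builtin_trap(); // stand-in for mongo's invariant()
    }

    volatile unsigned char* const _t;
};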
== Comment: #12 - Calvin L. Sze <email address hidden> - 2016-11-06 10:32:25 ==
Hi Bill, Thanks
I have asked Andrew, waiting for his confirmation.
== Comment: #14 - Calvin L. Sze <email address hidden> - 2016-11-06 10:56:49 ==
Hi Calvin -
I can provide the assembly of the function that contains the canary (the canary itself gets inlined), but I think it might just be easier if I uploaded a binary and an associated corefile? That way your engineers could disassemble the crashing function themselves in the debugger and see exactly what the state was at the time of the crash.
What is the best way for me to get that information to you?
Thanks,
Andrew
== Comment: #15 - Calvin L. Sze <email address hidden> - 2016-11-06 10:58:54 ==
Provided the binary and core information.
Note from Mongo:
I've uploaded a sample core file and the associated binary to your ftp
server as detailed above. The binary is named `mongod.power` and the core is
named `mongod.power.core`.
You should expect to see a backtrace on the faulting thread which looks
like this (for the first few frames):
(gdb) bt
#0 0x00003fff997be5d0 in __libc_
at ../sysdeps/
#1 __GI_raise (sig=<optimized out>) at ../sysdeps/
#2 0x00003fff997c0c00 in __GI_abort () at abort.c:89
#3 0x00000000223c33e8 in mongo::
file=0x24131b38 "src/mongo/
line=<optimized out>) at src/mongo/
#4 0x00000000224bbc48 in mongo::(anonymous namespace)
this=<optimized out>) at src/mongo/
The "Canary::_verify" frame (number 4) has a local variable "_t" which is an
on-the-stack array and filled with "0xcd" for a span of 1024 bytes. Near the
end of this block we see two bytes of poisoned memory which were altered:
0x3fff5814c858: 0xcd 0xcd 0xcd 0xcd 0xcd 0xcd 0xcd 0xcd
0x3fff5814c860: 0xcd 0xcd 0xcd 0xcd 0xcd 0xcd 0xcd 0xcd
0x3fff5814c868: 0xcd 0xcd 0xcd 0xcd 0xcd 0xcd 0x01 0x00
0x3fff5814c870: 0xcd 0xcd 0xcd 0xcd 0xcd 0xcd 0xcd 0xcd
0x3fff5814c878: 0xcd 0xcd 0xcd 0xcd 0xcd 0xcd 0xcd 0xcd
Note the two bytes set to values "0x01" and "0x00".
At the time of core-dump all the other threads seemed to be paused on system
calls such as "recv" or "__pthread_
when setting up our software canary, and checks the memory immediately after
its setup. We do not run any other functions on this thread between the
memory poisoning and the verification of the poisoning. All other threads
appear to be paused at this time.
== Comment: #16 - Calvin L. Sze <email address hidden> - 2016-11-06 10:59:40 ==
A follow-up message from Mongo:
The function calling the canary code, which you'll want to possibly
disassemble is in frame 6:
#6 mongo::
out=
The lower numbered frames deal with the canary code itself.
== Comment: #17 - Calvin L. Sze <email address hidden> - 2016-11-06 11:03:46 ==
From Andrew,
We have repro'd with three compilers:
- The system GCC, using system libstdc++ and system glibc
- Our hand-rolled GCC, using its own libstdc++, and system glibc
- A one-off clang-3.9 build, using system libstdc++, and system glibc.
Coincidentally, both system and hand-rolled GCC are 5.4.0, so there may not be as much variation there as hoped. We could try building with clang and libc++ to at least rule out libstdc++ as a factor.
>One reason you see this on Ubuntu 16.04 and not on the other linux distro is likely because of
>glibc level. The other linux distro's glibc is quite old by comparison. glibc 2.23, which
>appears on Ubuntu 16.04, is the first version to be compiled with
>-fstack-
I'm not sure I follow. Our software has been built with -fstack-
>So this doesn't necessarily mean that the
>bug doesn't exist elsewhere; it just means that the stack protector code isn't
>enabled to spot the problem. If the stack corruption is benign, then it
>wouldn't be noticed otherwise.
Yeah, still confused. I can definitely make the other linux distro box report a stack corruption:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct no_chars {
    unsigned int len;
    unsigned int data;
};

int main(int argc, char * argv[])
{
    struct no_chars info = { };

    if (argc < 3) {
        return 1;
    }

    info.len = atoi(argv[1]);
    memcpy(&info.data, argv[2], info.len);

    return 0;
}
*** stack smashing detected ***: ./boom terminated
Segmentation fault
I assume that glibc 2.23 was compiled with Ubuntu's version of gcc 5 that ships
with the system, in case that becomes relevant.
Correct, we have not made any changes to glibc - we are using the stock version that ships on the system.
== Comment: #18 - Calvin L. Sze <email address hidden> - 2016-11-06 11:04:24 ==
From Andrew
Also, I want to re-iterate that while we have definitely observed cases where the stack protector detects the stack corruption, we have also observed stack corruption within our own hand-rolled stack buffer, per the code posted earlier. The core dump that Adam provided is of this latter sort. So, to some extent, this is independent of -fstack-
One thing that I have not yet ruled out is whether -fstack-
Still, it sounds like a worthwhile experiment, so I will see if I can still detect corruption in our hand-rolled stack canary when building without any form of -fstack-protector enabled.
== Comment: #19 - Calvin L. Sze <email address hidden> - 2016-11-06 11:05:58 ==
From Andrew,
I've performed this experiment, replacing our use of -fstack-
I have a core file and executable. Let me know if you would be interested in my providing those in addition to the files provided yesterday by Adam.
== Comment: #21 - William J. Schmidt <email address hidden> - 2016-11-07 11:10:54 ==
Andrew, thanks for all the details, and for the binary and core file! I'll start poking through them this morning. I've just been absorbing all the notes that Calvin dumped into our bug tracking system yesterday.
You can ignore what I was saying about -fstack-
While I'm looking at the binary, there are a couple of other things you might want to try:
- Replace ::memset with __builtin_memset with GCC to see whether that makes any difference;
- Try Ulrich Weigand's suggestions from comment #9;
- As you suggested, try clang + libc++ to try to rule libstdc++ in or out.
A couple of questions that may or may not prove relevant:
- You've mentioned you don't get the crashes on the other linux distro. Have you tried your modified canary on the other linux distro anyway? If we're certain the two systems behave differently with the canary that may help us in narrowing things down.
- Which version of the C++ standard are you compiling against? Is it just the default on all systems, or are you forcing a specific -std=...?
== Comment: #22 - William J. Schmidt <email address hidden> - 2016-11-07 12:18:41 ==
I'm having some difficulties with core file compatibility. I put your files on an Ubuntu 16.04.1 system, but I don't see quite the same results as you report under gdb, with libc and libgcc shared libs not at the correct address and a problem with the stack. There's a transcript below. I'm particularly concerned about the warning that the core file and executable may not match. Note also the report of stack corruption above frame #4, so I can't get to frame #6 to look at the register state. The library frames at #0-#3 are reporting the wrong information, which I assume to be because the libraries are at the wrong address.
For debug purposes it would probably be best to use the system compiler, just in case that wasn't the case here.
$ ls -l
total 1950688
-rw-r--r-- 1 wschmidt wschmidt 700141992 Nov 7 14:37 mongod.power
-rw-r--r-- 1 wschmidt wschmidt 1297350656 Nov 7 14:39 mongod.power.core
$ gdb mongod.power mongod.power.core
Reading symbols from mongod.power...done.
warning: core file may not match specified executable file.
[New LWP 101461]
[New LWP 100045]
[New LWP 100062]
[New LWP 100056]
[New LWP 99983]
[New LWP 100052]
[New LWP 100054]
[New LWP 99892]
[New LWP 100051]
[New LWP 100048]
[New LWP 100007]
[New LWP 99868]
[New LWP 100059]
[New LWP 101459]
[New LWP 100001]
[New LWP 99986]
[New LWP 101403]
[New LWP 99980]
[New LWP 99882]
[New LWP 99893]
[New LWP 99877]
[New LWP 99872]
[New LWP 101462]
[New LWP 99874]
[New LWP 100058]
[New LWP 100231]
[New LWP 99994]
[New LWP 99873]
[New LWP 100003]
[New LWP 99993]
[New LWP 99879]
[New LWP 101398]
[New LWP 99891]
[New LWP 99880]
[New LWP 99910]
[New LWP 99895]
[New LWP 99901]
[New LWP 100011]
[New LWP 99974]
[New LWP 100049]
[New LWP 99898]
[New LWP 99875]
[New LWP 101460]
[New LWP 99878]
[New LWP 99871]
[New LWP 99896]
[New LWP 101954]
[New LWP 101406]
[New LWP 100015]
[New LWP 100068]
[New LWP 99984]
[New LWP 101519]
[New LWP 100053]
[New LWP 99996]
[New LWP 100050]
[New LWP 100055]
[New LWP 100057]
[New LWP 101807]
[New LWP 99890]
[New LWP 100004]
[New LWP 99884]
[New LWP 101437]
[New LWP 101455]
[New LWP 100013]
[New LWP 99894]
[New LWP 101411]
[New LWP 101457]
[New LWP 101431]
[New LWP 101458]
[New LWP 100443]
[New LWP 101438]
[New LWP 101414]
[New LWP 101433]
[New LWP 101784]
[New LWP 99979]
[New LWP 101397]
[New LWP 101402]
[New LWP 101401]
[New LWP 101435]
[New LWP 101405]
[New LWP 101423]
[New LWP 101425]
[New LWP 99897]
[New LWP 101419]
[New LWP 99989]
[New LWP 101409]
[New LWP 100008]
[New LWP 101410]
[New LWP 99998]
[New LWP 101413]
[New LWP 101469]
[New LWP 101418]
[New LWP 101427]
[New LWP 101399]
[New LWP 101235]
[New LWP 101396]
[New LWP 101421]
[New LWP 99990]
[New LWP 101407]
[New LWP 101480]
[New LWP 100060]
[New LWP 101499]
[New LWP 101506]
[New LWP 101395]
[New LWP 101415]
[New LWP 101400]
[New LWP 101412]
[New LWP 101408]
[New LWP 101420]
[New LWP 101416]
[New LWP 101492]
[New LWP 101513]
[New LWP 101782]
[New LWP 101404]
[New LWP 101481]
[New LWP 101417]
[New LWP 100067]
[New LWP 101429]
[New LWP 99883]
[New LWP 101430]
[New LWP 101436]
[New LWP 101454]
[New LWP 101428]
[New LWP 101422]
[New LWP 100108]
[New LWP 101434]
[New LWP 100064]
[New LWP 101453]
[New LWP 100061]
[New LWP 101426]
[New LWP 100066]
[New LWP 101452]
[New LWP 101439]
[New LWP 101456]
[New LWP 101451]
[New LWP 101450]
[New LWP 101432]
[New LWP 101449]
[New LWP 101424]
[New LWP 100065]
[New LWP 100063]
[New LWP 101448]
[New LWP 101447]
[New LWP 101446]
[New LWP 101445]
[New LWP 101444]
[New LWP 101443]
[New LWP 101442]
[New LWP 101441]
[New LWP 101440]
warning: .dynamic section for "/lib/powerpc64
warning: .dynamic section for "/lib/powerpc64
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/powerpc64
Core was generated by `/home/
Program terminated with signal SIGABRT, Aborted.
#0 0x00003fff997be5d0 in __copysign (y=<optimized out>, x=<optimized out>)
at ../sysdeps/
233 ../sysdeps/
[Current thread is 1 (Thread 0x3fff5814ec20 (LWP 101461))]
(gdb) bt
#0 0x00003fff997be5d0 in __copysign (y=<optimized out>, x=<optimized out>)
at ../sysdeps/
#1 __modf_power5plus (x=-6.277438562
at ../sysdeps/
#2 0x00003fff997be4f0 in ?? () from /lib/powerpc64l
#3 0x00003fff997c0c00 in ?? () at ../signal/
from /lib/powerpc64l
#4 0x00000000223c33e8 in mongo::
file=0x24131b38 "src/mongo/
line=<optimized out>) at src/mongo/
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
(gdb) quit
$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_
Target: powerpc64le-
Configured with: ../src/configure -v --with-
Thread model: posix
gcc version 5.4.0 20160609 (Ubuntu/IBM 5.4.0-6ubuntu1~
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.1 LTS
Release: 16.04
Codename: xenial
$
I'll disassemble the binary and see if I can spot anything without the state information.
Oh, still waiting on permission to mirror the bug.
== Comment: #23 - William J. Schmidt <email address hidden> - 2016-11-07 13:39:45 ==
A little more information:
I've been looking at bsonExtractStri
8ebb3c: 71 c9 06 48 bl 9584ac <00000d72.
And later I see the call to invariantFailed:
8ebc44: e9 75 f0 4b bl 7f322c <_ZN5mongo15inv
So we've answered Steve's initial question about which memset we're using. This isn't being inlined by the compiler, but does an out-of-line dynamic call to the GLIBC_2.17 version.
I'm not sure whether GCC would inline a 1024-byte memset using __builtin_memset, or just end up calling out the same way, but it might be worth trying out that replacement, and disassembling bsonExtractStri
== Comment: #24 - William J. Schmidt <email address hidden> - 2016-11-07 13:50:04 ==
I forgot to mention that the ensuing code generation to accumulate the checksum and test it is completely straightforward and looks correct. So this looks like pretty strong evidence that the problem is in the GLIBC memset implementation.
8ebb3c: 71 c9 06 48 bl 9584ac <00000d72.
8ebb40: 18 00 41 e8 ld r2,24(r1)
8ebb44: 00 04 40 39 li r10,1024
8ebb48: 00 00 20 39 li r9,0
8ebb4c: a6 03 49 7d mtctr r10
8ebb50: 00 00 43 89 lbz r10,0(r3)
8ebb54: 01 00 63 38 addi r3,r3,1
8ebb58: 14 52 29 7d add r9,r9,r10
8ebb5c: f4 ff 00 42 bdnz 8ebb50 <_ZN5mongo22bso
8ebb60: 03 00 40 3d lis r10,3
8ebb64: 00 34 4a 61 ori r10,r10,13312
8ebb68: 00 50 a9 7f cmpd cr7,r9,r10
8ebb6c: c4 00 9e 40 bne cr7,8ebc30 <_ZN5mongo22bso
...
8ebc30: 44 ff 82 3c addis r4,r2,-188
8ebc34: 44 ff 62 3c addis r3,r2,-188
8ebc38: 3a 00 a0 38 li r5,58
8ebc3c: 38 aa 84 38 addi r4,r4,-21960
8ebc40: 60 aa 63 38 addi r3,r3,-21920
8ebc44: e9 75 f0 4b bl 7f322c <_ZN5mongo15inv
== Comment: #28 - William J. Schmidt <email address hidden> - 2016-11-08 11:02:18 ==
Recording some information from email discussions.
(1) The customer is planning to attempt to use valgrind memcheck.
(2) The const cast problem with the canary has been fixed without changing the results.
(3) Prior to that fix, the canary was used on the RHEL system with no corruption detected, so this does seem to be Ubuntu-specific.
(4) -std=c++11 is used everywhere.
(5) The core and binary compatibility issues appear to be that they were generated on 16.10, not 16.04. New ones coming.
(6) The canary code now looks like:
+namespace {
+
+class Canary {
+public:
+
+ static constexpr size_t kSize = 2048;
+
+ explicit Canary(volatile unsigned char* const t) noexcept : _t(t) {
+ __builtin_memset(const_cast<unsigned char*>(_t), kBits, kSize);
+ _verify();
+ }
+
+ ~Canary() {
+ _verify();
+ }
+
+private:
+ static constexpr uint8_t kBits = 0xCD;
+ static constexpr size_t kChecksum = kSize * size_t(kBits);
+
+ void _verify() const noexcept {
+ invariant(std::accumulate(_t, _t + kSize, size_t(0)) == kChecksum);
+ }
+
+ const volatile unsigned char* const _t;
+};
+
+} // namespace
+
And its application in bsonExtractTypedField:
@@ -47,6 +82,10 @@ Status bsonExtractTypedField
+
+ volatile unsigned char* const cookie = static_cast<unsigned char*>(::alloca(Canary::kSize));
+ const Canary c(cookie);
+
Status status = bsonExtractField(object, fieldName, outElement);
(7) Steve Munroe investigated memset and he and Andrew are in agreement that we can rule it out:
I looked at the memset_power8 code (memset is just an IFUNC resolver stub), and I don't see how this problem is caused by memset_power8.
First some observations:
The canary is allocated with alloca for a large power of 2 (1024 bytes).
Alloca returns quadword aligned memory as required to maintain quadword stack alignment.
For this case memset_power8 will quickly jump to the vector store loop (quadword x 8) all from the same register (a vector splat of the fill char).
With this code the failure modes could only be:
Overwrite by N*quadwords,
Underwrite by N*quadwords,
A repeated pattern every quadword.
But we are not seeing this. Also, I think we are back to a clobber by some other code.
== Comment: #29 - William J. Schmidt <email address hidden> - 2016-11-08 11:03:33 ==
From Andrew, difficulties with Valgrind:
I did try the valgrind repro. However, I'm not able to make valgrind work:
The first try resulted in lots of "mismatched free/delete" reports, which is sort of odd, because they all seem to be from within the standard library:
> valgrind --soname-
==17387== Memcheck, a memory error detector
==17387== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==17387== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==17387== Command: ./mongos
==17387==
==17387== Mismatched free() / delete / delete []
==17387== at 0x4895888: free (in /usr/lib/
==17387== by 0x59514F: deallocate (new_allocator.
==17387== by 0x59514F: deallocate (alloc_
==17387== by 0x59514F: _M_deallocate_
==17387== by 0x59514F: _M_deallocate_
==17387== by 0x59514F: _M_deallocate_
==17387== by 0x59514F: _M_rehash_aux (hashtable.h:1999)
==17387== by 0x59514F: std::_Hashtable
==17387== by 0x595253: std::_Hashtable
==17387== by 0x5954D3: std::__
==17387== by 0x593693: operator[] (unordered_
==17387== by 0x593693:)
==17387== Address 0x5151fb0 is 0 bytes inside a block of size 16 alloc'd
==17387== at 0x48951D4: operator new[](unsigned long) (in /usr/lib/
==17387== by 0x59328F: allocate (new_allocator.
==17387== by 0x59328F: allocate (alloc_
==17387== by 0x59328F: std::__
==17387== by 0x595093: _M_allocate_buckets (hashtable.h:347)
==17387== by 0x595093: _M_rehash_aux (hashtable.h:1974)
==17387== by 0x595093: std::_Hashtable
==17387== by 0x595253: std::_Hashtable
==17387== by 0x5954D3: std::__
==17387== by 0x59356B: operator[] (unordered_
==17387== by 0x59356B:)
So, that is a puzzle. However, I can instruct valgrind to ignore that. But it still fails to start, now with something more odd:
$ valgrind --show-
==19834== Memcheck, a memory error detector
==19834== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==19834== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==19834== Command: ./mongos
==19834==
MC_(get_
Memcheck: mc_machine.c:329 (get_otrack_
host stacktrace:
==19834== at 0x3808D9B8: ??? (in /usr/lib/
==19834== by 0x3808DB5F: ??? (in /usr/lib/
==19834== by 0x3808DCDB: ??? (in /usr/lib/
==19834== by 0x38078CE3: ??? (in /usr/lib/
==19834== by 0x38076FAB: ??? (in /usr/lib/
==19834== by 0x380BAA2B: ??? (in /usr/lib/
==19834== by 0x381B9BB7: ??? (in /usr/lib/
==19834== by 0x380BE19F: ??? (in /usr/lib/
==19834== by 0x3810D04F: ??? (in /usr/lib/
==19834== by 0x3810FFEF: ??? (in /usr/lib/
==19834== by 0x3812BB97: ??? (in /usr/lib/
sched status:
running_tid=1
Thread 1: status = VgTs_Runnable (lwpid 19834)
==19834== at 0x4F3AC14: __lll_lock_elision (elision-lock.c:60)
==19834== by 0x4F2BBC7: pthread_mutex_lock (pthread_
==19834== by 0x602753: mongo::
==19834== by 0x5319EB: __static_
==19834== by 0x5319EB: _GLOBAL_
==19834== by 0x137FED3: __libc_csu_init (in /home/acm/
==19834== by 0x4F830A7: generic_
==19834== by 0x4F83337: (below main) (libc-start.c:116).
I'm not really sure what to make of that, except that I did see something die in the same place, once or twice (__lll_
Anyway, it doesn't seem like I can get this running with valgrind. Happy to try again if anyone is aware of a workaround.
== Comment: #30 - William J. Schmidt <email address hidden> - 2016-11-08 11:06:00 ==
CCing Carl Love. Carl, have you seen this sort of interaction between valgrind and lock elision before? (Comment #29, you can ignore the rest of this bugzilla for now.)
For what it's worth, GCC is just a placeholder package for now. The compiler and libraries aren't directly implicated, at least as of now. The origin of the stack corruption remains unknown.
Andrew, according to the valgrind community, the error
Memcheck: mc_machine.c:329 (get_otrack_
is probably due to the lock elision code accessing a hardware register that valgrind doesn't know about, so there isn't a shadow register to consult. Our valgrind guys will look into that aspect of things. In the meantime, they tell us that you can circumvent this problem by not using --track-origins. Could you please give this a try?
I had another thought this evening. In case this is a threading problem, have you tried building with Clang and using ThreadSanitizer? Support for this was added to ppc64el in 2015.
The following are reproduction instructions for the behavior that we are observing on Ubuntu 16.04 ppc64le. Note that we have run this same test on RHEL 7.1 ppc64le, and we do not observe any stack corruption. Note also that building and running this repro may depend on certain system libraries (SSL, etc) or python libraries being available on the system. Please install as needed. The particular commit here is fairly recent, just one that I happen to know demonstrates the issue.
- git clone https:/
- cd mongo
- git checkout 3220495083b0d67
- git apply acm.nov9.patch
- python ./buildscripts/
- ulimit -c unlimited && python buildscripts/
Note that you should provide an actual argument for the --dbpathPrefix argument in the last step, as this is where the running database instances will store data.
You will need to leave this running for several hours, perhaps overnight. In our runs, we find that about 1% of the repeated runs of the test fail, dropping a core.
The core files are typically (but not always!) associated with crashes of the mongos binary inside one of the several mongo::
$ gdb ./mongos core.2016-
Reading symbols from ./mongos...done.
[New LWP 3821]
...
[New LWP 3736]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/powerpc64
Core was generated by `/home/
Program terminated with signal SIGABRT, Aborted.
#0 0x00003fff779ff21c in __GI_raise (sig=<optimized out>) at ../sysdeps/
54 ../sysdeps/
[Current thread is 1 (Thread 0x3fff5d98f140 (LWP 3821))]
(gdb) bt
#0 0x00003fff779ff21c in __GI_raise (sig=<optimized out>) at ../sysdeps/
#1 0x00003fff77a01894 in __GI_abort () at abort.c:89
#2 ...
Here is the patch for the above comment.
Bill -
I will try again with valgrind without --track-origins=yes and post any interesting findings. Re ThreadSanitizer, we have tried before without success. The last time we tried, it didn't work because clang TSAN didn't support exceptions. Perhaps that has changed? We really like the sanitizers (we run ASAN and UBSAN in our CI loop), but our experience has been that there is a significant period of scrubbing out false positives and adding suppressions before meaningful signal can be extracted. I'm happy to add it to the list of things to try though.
Hi Andrew -- not sure about Clang TSAN supporting exceptions at this point; it is probable that we don't have a solution there as I would expect that to require target support, and I've not heard of that happening for POWER. That said, I've been less connected to the Clang community for the last year or so, so anything's possible.
The attachment "Apply to 3220495083b0d67
[This is an automated message performed by a Launchpad user owned by ~brian-murray, for any issues please contact him.]
Overnight, I ran this test case on both an Ubuntu 16.04 ppc64le system and a RHEL 7.1 ppc64le system.
The test ran 219 times on Ubuntu, with 15 cores, for a failure rate of around 5%. Most of the time corruption was detected in the Canary ctor (before doing other work), but a few times in the dtor:
$ grep "10000[12]" resmoke.log
[js_test: ...] (15 matching log lines, content truncated)
The test ran 227 times ...
I tried valgrind as suggested above.
By adding --show-
[js_test: ...] (6 valgrind warning lines from the test log, content truncated)
The code around listen.cpp:184 looks like:
182 SOCKET sock = ::socket(
183 ScopeGuard socketGuard = MakeGuard(
184 massert(15863,
185 str::stream() << "listen(): invalid socket? " << errnoWithDescri
186 sock >= 0);
It looks as if valgrind on ppc64le doesn't support the socket syscall? Note that I happened to be subscribed to the valgrind-developers list so I saw the other post there, and I've followed up with this same information already.
That discussion is here: https:/
In any event, it doesn't look like valgrind is going to be able to help here.
OK, I upgraded valgrind to 3.12 on the power machine and I can now get it to run meaningfully. We are seeing many error reports of the following form:
[js_test: ...] (valgrind Invalid-write report, content truncated)
Or
[js_test: ...] (second valgrind report of the same form, content truncated)
In all cases, the invalid write appears to be a write into a freed block. Frequently, the address appears to be aligned 'Address 0x...e'. So, this is very interesting.
Another engineer and I took a close look at one of these instances, and we do not believe there is any way that the mutex could be accessed after it was deleted.
Is there a way we can disable the libc lock elision code? An environment variable or other similar setting? We would like to see if we still see these sorts of reports after disabling lock elision. If so, then it would almost certainly be a logic error in our code that we are just missing. On the other hand, if the valgrind reports go away when we disable lock elision, then it would be evidence that lock elision might be at fault for the stack corruption we are observing, at which point I would re-try our original repro.
Hi Andrew,
That indeed looks suspicious. I've been talking with our libc team. It appears that the existing patch that provides for disabling lock elision dynamically isn't present in the libc on Ubuntu 16.04, which is very unfortunate. They are thinking about other possible solutions.
This seems a very possible culprit as lock elision is not present on the other configurations where you've seen success, to my knowledge.
We may need to get a special test build of glibc with --disable-
Thanks for persevering through the valgrind concerns to get to this clue!
Bill
The following might override the HTM lock elision. Can someone try it to see if it works?
bergner@ampere:~$ cat pthread_
#include <pthread.h>
#define PTHREAD_MUTEX_NO_ELISION_NP 512
extern int __pthread_mutex_lock (pthread_mutex_t *mutex);
int
pthread_mutex_lock (pthread_mutex_t *mutex)
{
mutex->__data.__kind |= PTHREAD_MUTEX_NO_ELISION_NP;
return __pthread_mutex_lock (mutex);
}
bergner@ampere:~$ gcc -c -fPIC pthread_
bergner@ampere:~$ gcc -shared -Wl,-soname,
bergner@ampere:~$ LD_PRELOAD=./libfoo.so.1 ./a.out
...replacing ./a.out with the binary you want to run without lock elision.
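For what it's worth, the reason this works (my understanding of the glibc internals here, worth double-checking): pthread_mutex_lock is an interposable symbol, so the preloaded definition wins resolution; setting the PTHREAD_MUTEX_NO_ELISION_NP bit in the mutex kind makes glibc's lock/unlock paths take the non-elided code, and the shim then forwards to the exported internal entry point __pthread_mutex_lock.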
I have the libfoo.so.1 interposer running, I will let it run overnight and report back tomorrow with any interesting findings.
I don't think the interposition is working, or I'm doing something wrong.
I changed pthread_
$ cat pthread_
#include <pthread.h>
#include <stdlib.h>
#define PTHREAD_MUTEX_NO_ELISION_NP 512
extern int __pthread_mutex_lock (pthread_mutex_t *mutex);
int
pthread_mutex_lock (pthread_mutex_t *mutex)
{
abort();
mutex->__data.__kind |= PTHREAD_MUTEX_NO_ELISION_NP;
return __pthread_mutex_lock (mutex);
}
$ gcc -c -fPIC pthread_
$ gcc -shared -Wl,-soname,
I then wrote a small C++ program:
$ cat use_foo.cpp
#include <mutex>
int main(int argc, char* argv[]) {
std::mutex m;
std::lock_guard<std::mutex> lk(m);
return EXIT_SUCCESS;
}
Compiled it to a.out:
$ g++ -std=c++11 -pthread ./use_foo.cpp
When run, it does *not* terminate in abort:
$ LD_PRELOAD=./libfoo.so.1 ./a.out
Under gdb, I can see that the libfoo.so.1 library is loaded:
$ LD_PRELOAD=./libfoo.so.1 gdb ./a.out
Reading symbols from ./a.out...(no debugging symbols found)...done.
(gdb) start
Temporary breakpoint 1 at 0x10000a24
Starting program: /home/acm/
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/powerpc64
Temporary breakpoint 1, 0x0000000010000a24 in main ()
(gdb) info inferior
Num Description Executable
* 1 process 14046 /home/acm/
(gdb) !lsof -p 14046 | grep lib
a.out 14046 amorrow mem REG 8,2 74328 2621621 /lib/powerpc64l
a.out 14046 amorrow mem REG 8,2 856616 2622143 /lib/powerpc64l
a.out 14046 amorrow mem REG 8,2 1851512 2622139 /lib/powerpc64l
a.out 14046 amorrow mem REG 8,2 171632 2622098 /lib/powerpc64l
a.out 14046 amorrow mem REG 8,2 2042040 2752992 /usr/lib/
a.out 14046 amorrow mem REG 8,2 69136 6182484 /home/acm/
a.out 14046 amorrow mem REG 8,2 268976 2622140 /lib/powerpc64l
Setting a breakpoint in pthread_mutex_lock lands me in __GI___
$ LD_PRELOAD=./libfoo.so.1 gdb ./a.out
GNU gdb (Ubuntu 7.11.1-
------- Comment From <email address hidden> 2016-11-10 19:54 EDT-------
(In reply to comment #49)
Something doesn't look right in your commands. They all have "LD_PRELOAD=" with nothing after the '=':
Comment #49 has:
> $ LD_PRELOAD= gdb ./a.out
That should be:
$ LD_PRELOAD=./libfoo.so.1 gdb ./a.out
Also, if you use LD_PRELOAD (without export) for the gdb command, I wouldn't expect it to carry forward to the inferior. Although, as you say, gdb shows the library loaded.
When I add the abort and use your C++ test case, I see the abort:
bergner@ampere:~$ cat pthread_
#include <stdlib.h>
#include <pthread.h>
#define PTHREAD_MUTEX_NO_ELISION_NP 512
extern int __pthread_mutex_lock (pthread_mutex_t *mutex);
int
pthread_mutex_lock (pthread_mutex_t *mutex)
{
abort();
mutex->__data.__kind |= PTHREAD_MUTEX_NO_ELISION_NP;
return __pthread_mutex_lock (mutex);
}
bergner@ampere:~$ gcc -fPIC -c pthread_
bergner@ampere:~$ gcc -shared -Wl,-soname,
bergner@ampere:~$ cat foo.cpp
#include <mutex>
int main(int argc, char* argv[]) {
std::mutex m;
std::lock_guard<std::mutex> lk(m);
return EXIT_SUCCESS;
}
bergner@ampere:~$ g++ -std=c++11 -pthread foo.cpp
bergner@ampere:~$ ./a.out
bergner@ampere:~$ LD_PRELOAD=./libbar.so.1 ./a.out
Aborted
gdb shows the abort is from the shim library too:
bergner@ampere:~$ gdb -q ./a.out
Reading symbols from ./a.out...(no debugging symbols found)...done.
(gdb) set environment LD_PRELOAD=./libbar.so.1
(gdb) run
Starting program: /home/bergner/a.out
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/powerpc64
Program received signal SIGABRT, Aborted.
0x00003fffb7b3f27c in __GI_raise (sig=<optimized out>) at ../sysdeps/
54 ../sysdeps/
(gdb) bt
#0 0x00003fffb7b3f27c in __GI_raise (sig=<optimized out>) at ../sysdeps/
#1 0x00003fffb7b418f4 in __GI_abort () at abort.c:89
#2 0x00003fffb7f507e4 in pthread_mutex_lock () from ./libbar.so.1
#3 0x0000000010000954 in __gthread_
#4 0x0000000010000b10 in std::mutex::lock() ()
#5 0x0000000010000be8 in std::lock_
#6 0x0000000010000a80 in main ()
(gdb)
------- Comment From <email address hidden> 2016-11-10 20:44 EDT-------
(In reply to comment #51)
> (In reply to comment #49)
>
> Something doesn't look right in your commands. They all have "LD_PRELOAD="
> with nothing after the '=':
> Comment #49 has:
> > $ LD_PRELOAD= gdb ./a.out
It's a mirroring problem to our bugzilla. The command looks correct in the Launchpad comment. I recommend reading/adding comments in Launchpad rather than from our bugzilla.
A test build of glibc with lock elision disabled is in progress here:
https:/
That said, the above trace looks suspiciously like a double-unlock. That breaks pthread rules, but the software implementation has historically let you do it anyway (or, rather, it fails to abort there because it would be expensive to do so, but your code is still buggy, prone to hanging, etc).
There's a thread on debian-devel right now about this same issue[1] as it related to ghostscript, where people noticed that with hardware lock elision, the world exploded, and without, it seemed to work, maybe, ish (though, as I said, the ghostscript bug was likely the cause of some mysterious hangs).
[1] https:/
from "man pthread_
If the mutex type is PTHREAD_
shall be provided. If a thread attempts to relock a mutex that it has
already locked, an error shall be returned. If a thread attempts to
unlock a mutex that it has not locked or a mutex which is unlocked, an
error shall be returned.
That might be something to try.
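A minimal sketch of that experiment (hypothetical standalone code; an error-checking mutex reports a double unlock as EPERM instead of silently violating the pthreads rules):

#include <pthread.h>
#include <errno.h>
#include <stdio.h>

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutex_t m;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&m, &attr);

    pthread_mutex_lock(&m);
    pthread_mutex_unlock(&m);

    /* Second unlock of an already-unlocked mutex: with ERRORCHECK this
       is reported rather than corrupting state. */
    if (pthread_mutex_unlock(&m) == EPERM)
        fprintf(stderr, "double unlock detected\n");

    pthread_mutex_destroy(&m);
    pthread_mutexattr_destroy(&attr);
    return 0;
}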
Per the blog post mentioned from the thread in #35, this sort of problem should also manifest on a Broadwell or Skylake processor. Andrew, have you tried running on such machines?
From that debian thread:
"Per logs from message #15 on bug #842796:
https:/
SIGSEGV on __lll_unlock_elision() is an indication (with high
confidence) of an attempt to unlock an already unlocked lock while
running under hardware lock elision.
Well, unlocking an already unlocked lock is a pthreads API rule
violation, and it is going to crash the process on something that
implements hardware lock elision."
So I think we have some pretty good evidence of an application problem. I think that using Paul Clarke's suggestion may be necessary for you to figure out where the double-unlock is occurring. I'm not confident that valgrind will spot this.
We're going to continue trying to reproduce on our side and disable TLE to confirm that this segv goes away. Hard to know if this is related to the original reported problem, of course, but perhaps losing TLE will allow valgrind to find that if it's a separate issue.
First, I'm not sure what I was doing wrong yesterday, but I now have the LD_PRELOAD lock-elision-disabling interposer running.
A few comments on the most recent comments.
Re #40: Bill, we have never actually seen a natural SEGV in __lll_unlock_elision.
Re #38: We were thinking the same thing. I will ask around to see if we have any appropriate machines.
Re #37: I don't think it is an option for us as almost all of our locks are wrapped pthread locks via std::mutex.
Re #35: Almost all of our lock management is via C++ RAII types (std::unique_lock, etc), and we run these tests across many many systems. I'm almost certain that a double unlock would have been caught by now, or would cause crashes elsewhere. In particular I would expect that the windows debug runtime would alert us. However, I will see if there is some easy way we can check for this.
I'm running the original repro right now with HLE disabled. Usually it takes a few hours before we see a stack corruption event, so I will follow up later today with more results.
------- Comment From <email address hidden> 2016-11-11 14:14 EDT-------
I've been able to set up and reproduce this on a p8 16.04 system here. Seems to be the identical signature, 2 bytes at address 0xXXXXXe in the middle of the canary block set to 0x00.
That is good news that you have been able to reproduce the issue. I'm currently running the reproducer with the LD_PRELOAD disable-
Also, per the earlier comment about double unlocking: I confirmed with one of the other engineers here that the Windows Debug CRT does check for double unlocks, and this test suite runs in our CI loop in that configuration without issue.
I'll note that the LD_PRELOAD interposer library is only needed for binaries that are already compiled and you want to override the pthread_mutex_lock they use. If you can rebuild, you could add this directly to the application source:
#define PTHREAD_MUTEX_NO_ELISION_NP 512
extern int __pthread_mutex_lock (pthread_mutex_t *mutex);
int
pthread_mutex_lock (pthread_mutex_t *mutex)
{
mutex->__data.__kind |= PTHREAD_MUTEX_NO_ELISION_NP;
return __pthread_mutex_lock (mutex);
}
...and possibly wrap the above in:
#ifdef __powerpc__
...
#endif
so it's only used on POWER?
Question: Is there any magic I can do to this test case:
python buildscripts/
that would allow me to run multiple copies on the same machine?
Found it, looks like the --basePort option to resmoke is what I want.
Aaron, re #50, yes, you can run as many copies as you want simultaneously, as long as:
1) The --dbpathPrefix argument points to distinct paths. So resmoke.py ... --dbpathPrefix=
2) You specify disjoint "port ranges" with the --basePort argument to resmoke.py. I don't happen to know how deep into each port range one instance of the test needs to go, so I usually separate them by 10k but that is probably way too generous. I'll bet 2k is sufficient.
Does that answer your question?
Peter, re #47, yes, that is certainly true. However, I'm actually finding it advantageous to load it via LD_PRELOAD exactly because I don't need to recompile. So I can toggle back and forth between lock elision on/off without needing to recompile.
Ahh, if you're never actually seeing a natural SEGV in __lll_unlock_elision
It may well be a subtle glibc bug on powerpc, or a whackadoo silicon bug, or a more generic glibc bug or still a mongo bug, but the more data we can scrape up (I mean, unless you accidentally stumble on the bug in the process and fix it, then yay), the better.
Andrew,
Yes, that is working nicely; with separate DB dirs and basePort I'm running multiple copies on one machine. Thanks!
Adam, I agree on all points.
So far, my repro running with the LD_PRELOAD hack is at 118 iterations with no crashes and going strong. Given that we had an ~5% repro rate without the LD_PRELOAD hack, this is looking very encouraging, but I'm going to let it run all weekend just to be sure.
As for Skylake, we are definitely going to work on tracking one down next week.
And of course, our default stance has always been that it is more likely our bug than anything else, but you never know! We will continue to run experiments and gather data.
Thanks again for your help.
This is the other thing I am trying. I've modified the Canary object to use a 128k stack zone and then use mprotect to mark the aligned 64k page that's in the middle of it read-only. When the destructor is called, it changes it back to read-write. This should cause any write to this region to get a segv, and give us an idea of what is writing on the stack in the resulting coredump.
One other thing, if you use the mprotect thing, it may be necessary to bump up the value of /proc/sys/vm/max_map_count.
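A rough sketch of the technique (simplified and hypothetical; real code would discover the page size and report mprotect failures properly):

#include <sys/mman.h>
#include <cstdint>
#include <cstdlib>

class MprotectCanary {
public:
    static constexpr size_t kPage = 64 * 1024;

    // zone must be at least 2 * kPage bytes of the caller's stack frame.
    explicit MprotectCanary(unsigned char* zone) {
        // Round up to the first 64K boundary inside the 128K zone.
        uintptr_t base = reinterpret_cast<uintptr_t>(zone);
        _page = reinterpret_cast<void*>((base + kPage - 1) & ~uintptr_t(kPage - 1));
        if (mprotect(_page, kPage, PROT_READ) != 0)
            std::abort(); // keep it simple; real code would report errno
    }

    ~MprotectCanary() {
        // Restore normal permissions before the frame is reused.
        mprotect(_page, kPage, PROT_READ | PROT_WRITE);
    }

private:
    void* _page;
};

// Usage in a suspect function:
//   unsigned char zone[2 * MprotectCanary::kPage];
//   MprotectCanary guard(zone);

Any rogue store into the protected page then takes a SIGSEGV right at the guilty instruction instead of being discovered later by a checksum.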
An engineer on our side did some Canary+mprotect experiments as well, but I don't happen to have details on what the approach/results were right now. I'll ask them to update this ticket with any interesting findings they may have.
Here's another interesting data point. The original bug description specifies that the memory corruption is not seen on Ubuntu 15. Per https:/
This seems to deepen the mystery more than it illuminates it. Have there been changes to TLE between the releases that could be at fault? Is another unknown component involved?
I'm told this morning that so far no failures are observed using the mprotect canary, the working theory being that the syscall disturbs the timing too much. Otherwise our results are consistent with yours on 16.04: failures with TLE enabled, no failures with the LD_PRELOAD workaround.
Regarding Ubuntu 15, I think that was a miscommunication somewhere along the line.
The only versions of Ubuntu that we build for are the LTS releases (12.04, 14.04, and 16.04), and the only one of those we have ever built on POWER is 16.04. Other than Ubuntu 16.04, the only other POWER distro we target is RHEL 7.1. So what we are seeing is I think still consistent with either a latent locking bug in MongoDB exposed by the new-to-us lock elision support, or one of the other sorts of issues as mentioned by Adam above.
Also, my test run using the disabled lock elision just reached 500 successful runs. I'm going to switch lock elision back on and make sure I start seeing corruption again, but I'm nearly certain that I will.
Thanks for the update on the mprotect experiment. It would have been great if that had worked. I think we will be able to share some details on our similar experiments on Monday.
An update on my experiments:
* 500 runs no failures with TLE disabled
* 500 runs no failures TLE enabled but mprotect() syscall in Canary constructor/destructor
* 500 runs 11 failed with TLE enabled so about 2% fail rate
* Tried switching SMT off and interestingly got 200 runs no fails with TLE enabled.
This suggests to me that the timing of this race condition is rather tight. A lot of the fails are in the checksum right after the memset in the Canary constructor, which means the other guy comes back and writes the stack after memset wrote it but before the checksum read it. Also if strace's syscall timing is to be believed, 14% of the time mprotect() is under 8 microseconds, and even that seems to be sufficient to prevent the problem. Finally, by forcing other pthreads to always be on a different processor core (by disabling SMT) we also apparently eliminate this.
Hello, could bugproxy please be silenced, and/or prevented from reposting the same python command line over and over again? It seems there are sometimes attachments/
We'll try to get to the bottom of the bugproxy oddity. It seems to be echoing a chunk out of comment #5 for no reason that I can see.
Given the results of Aaron's experiments over the weekend, here's a summary of what we think we're seeing:
- There is an unsafe interaction between two threads.
- This interaction can only be observed in a very small time window, and then relatively rarely.
- The interaction is observable only when TLE is enabled. We posit this is because TLE improves lock/unlock speed enough that the interaction falls within the observable time window.
- The time window is sufficiently small that a relatively fast syscall to mprotect is sufficient to disrupt the timing.
- If all threads are forced to run on separate processors (SMT=1), this is sufficient to disrupt the timing.
We believe this is an application problem. Further "evidence" against a problem in TLE itself is that TLE has been enabled for ppc64el on Ubuntu for over 18 months without any similar reports.
Unfortunately, I don't think we have any way to directly debug the problem, due to the narrow time window. We have considered hardware watchpoints. There is a DAWR (data address watch register) for each thread, but using it in this case seems impractical. GDB's implementation of hardware watchpoints in a multithreaded environment is such that, when a hardware watchpoint is set, it is set to the same address for all threads. So even if we were to script the setting of a hardware watchpoint under GDB, the time required for GDB to set up the watchpoint address on the fly would surely exceed the critical time window. You could try it, but I wouldn't expect much.
Further debugging seems to require application knowledge or a code crawl of some kind. The setting of two flag bytes is the only clue we have. To my knowledge we have never seen only a single byte clobbered in the canary, and the two bytes appear to be aligned on a 16-bit boundary. So the code doing the clobbering very likely contains a store-halfword (sth or, less likely, sthx or sthu) instruction. You could examine the disassembly of the application for occurrences of sth to narrow the field of search.
It could be two individual stores, in which case you'd be looking for two instructions very close together of the form:
stb r<x>,<n>(r<b>)
stb r<y>,<n+1>(r<b>)
Example:
stb r5,0(r9)
stb r6,1(r9)
Obviously this is a huge application so this doesn't help much in and of itself, but perhaps if you've already narrowed the problem down somewhat, this could be helpful.
I am running low on ideas for how we can help you debug the problem. We'll continue discussing it; if any better thoughts arise, we'll be sure to let you know.
Bill
Well, let me back that off a little. We're going to look into the TLE code a little more. The various __lll_*_elision routines are handed a pointer to short that they update, which certainly looks suspicious. So it's certainly possible that something in the pthreads implementation that calls this code is providing a bad pointer to TLE.
However, we would expect to see the same problem on x86 or s390 if that is the case, unless there is some POWER-specific code in the pthreads implementation. So again a Skylake experiment would be helpful.
Ulrich Weigand made an interesting comment on the glibc code in our internal bug (no longer mirrored), so I'm mirroring it by hand here.
--- Comment #77 from Ulrich Weigand <email address hidden> ---
According to comment #45, we see an invalid read at __lll_unlock_elision
(elision-unlock.c). Looking at the code, the reads in question are exactly the two lines accessing *adapt_count (which is a short):

int
__lll_unlock_elision (int *lock, short *adapt_count, int pshared)
{
  /* When the lock was free we're in a transaction.  */
  if (*lock == 0)
    __libc_tend (0);
  else
    {
      lll_unlock ((*lock), pshared);

      /* Update adapt_count AFTER completing the critical section, to
         avoid stalling when entering one.  */
      if (*adapt_count > 0)
        (*adapt_count)--;
    }
  return 0;
}
What seems suspicious here is the fact that these lines happen (deliberately)
*outside* of the critical section protected by the mutex. Now this should
indeed be normally OK. However, I understand it is allowed to use that
critical section to also protect the life time of the memory object holding the
mutex itself. If this is done, I'm wondering whether those two lines outside
the critical section may now access that *memory* outside of its protected life
time ...
Just as a theoretical example, assume you have thread 1 executing a function
that allocates a mutex on its stack, locks the mutex, and passes its address to
a thread 2. Thread 2 now does the work and then unlocks the mutex. Thread 1,
in the meantime, attempts to lock the mutex again, which will block until it
was unlocked by thread 2. After it got the lock, thread 1 now immediately
unlocks the mutex again and then the function returns.
Now in this scenario, the cross-thread lock-unlock pair cannot use TM, so it
will fall back to regular locks, which means the unlock in thread 2 will
perform the *adapt_count update. On the other hand, the second lock-unlock
pair in thread 1 might just happen to go through with TM, and thus not touch
*adapt_count at all. This means there is no synchronization between the update
of *adapt_count in thread 2, and the re-use of the stack memory underlying the
mutex in thread 1, which could lead to the *adapt_count update being seen
*after* that reuse in thread 1, clobbering memory ...
Am I missing something here? Of course, I'm not sure if MongoDB shows that
exact pattern. But I guess different patterns might show similar races too.
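For concreteness, here is a minimal sketch of that theoretical pattern — not MongoDB's actual code; hand_to_worker() is a hypothetical stand-in for whatever mechanism passes the mutex address to the second thread:

#include <pthread.h>
#include <stddef.h>

void hand_to_worker(pthread_mutex_t *m);   /* hypothetical handoff */

/* Thread 2: unlocks a mutex living in thread 1's stack frame.  On the
   non-transactional fallback path, lll_unlock happens first and the
   16-bit *adapt_count decrement trails it.  */
void *thread2(void *arg)
{
    pthread_mutex_t *m = arg;
    /* ... do the work ... */
    pthread_mutex_unlock(m);    /* unlock, THEN the trailing short store */
    return NULL;
}

/* Thread 1: */
void thread1_path(void)
{
    pthread_mutex_t m;              /* mutex lives on this stack frame  */
    pthread_mutex_init(&m, NULL);
    pthread_mutex_lock(&m);
    hand_to_worker(&m);             /* thread 2 will unlock when done   */

    pthread_mutex_lock(&m);         /* blocks until thread 2 unlocks    */
    pthread_mutex_unlock(&m);       /* this pair may run as a TM
                                       transaction, never touching
                                       *adapt_count at all              */
    pthread_mutex_destroy(&m);
}   /* frame returns and is reused; thread 2's trailing adapt_count
       store can land here afterwards, clobbering two aligned bytes of
       whatever now occupies that memory (e.g. a stack canary)          */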
One simple test might be to move the *adapt_count update to before the
lll_unlock and check whether the issue still reproduces. Not sure about
performance impacts, however.
I think it is very likely that we are doing the sort of stack-based mutex pattern described above, or something similar. In particular, I'd expect that we certainly have states where we wait on a stack mutex, and then immediately unwind and destroy the mutex after we unblock.
I'm working on getting access to a Skylake machine - we will definitely do that test if we can get access.
So, I rebuilt the glibc 2.23 from the 16.04 sources and modified the values written to the adapt_count parm in the lock elision code. It's a short and the original code may store values 0, 1, 2, 3. We were seeing either 1 (canary hit in constructor) or 0 (canary hit in destructor). I changed it to use the values 0x3333, 0x2222, 0x1111, and 0. And I just saw the constructor canary hit with the expected values 0x11, 0x11 in the changed bytes. So, this is a race condition in the lock elision code with mutex located on the stack and being reused quickly by another hardware thread on the same processor core.
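In other words, the instrumentation makes every 16-bit store into *adapt_count self-identifying if it lands in reused memory — a sketch of the idea, not Aaron's actual patch:

/* Where the elision code stored the small values 1, 2 or 3 into
   *adapt_count, store recognizable 16-bit sentinels instead, so a
   stray store into reused stack memory identifies which site made it. */
static void store_adapt_count(short *adapt_count, int value)
{
    switch (value)
      {
      case 3: *adapt_count = 0x3333; break;
      case 2: *adapt_count = 0x2222; break;
      case 1: *adapt_count = 0x1111; break;  /* seen as the byte pair
                                                0x11,0x11 regardless of
                                                endianness              */
      default: *adapt_count = 0;             /* 0 stays 0               */
      }
}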
Aaron, thank you very much for running that experiment and confirming that this is an issue in libc. I think the component should probably be updated?
Also, would you like us to try to continue to repro on a Skylake machine, or is this all architecture neutral code and therefore the POWER repro is conclusive?
Hi Andrew,
This is a POWER-specific "optimization" that dates to last December (so it in fact wouldn't show up in Ubuntu 15.04 or 15.10, it appears). The decrement used to be attached to the lock rather than the unlock and it was apparently moved to the present location because it showed a good performance improvement. Aaron is running some experiments with moving it ahead of the unlock rather than following it, which we believe will remove the problem. That may or may not be the final version of the fix, but it looks like the simplest solution.
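Concretely, the experiment amounts to reordering the fallback path of the routine quoted in comment #77 above — a sketch assuming nothing else changes (this may well not be the exact final patch):

int
__lll_unlock_elision (int *lock, short *adapt_count, int pshared)
{
  if (*lock == 0)
    __libc_tend (0);
  else
    {
      /* Decrement while the lock is still held: once lll_unlock has
         released it, the memory holding the mutex may already belong
         to the thread we just woke, so a trailing store can clobber
         reused memory.  */
      if (*adapt_count > 0)
        (*adapt_count)--;
      lll_unlock ((*lock), pshared);
    }
  return 0;
}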
The only reason for you to test on a Skylake machine would be for your own peace of mind, in case any TLE-specific issues are hiding in the x86 implementation. This specific problem, at least, is a POWER issue.
Hi Bill -
Thanks for the update, and for clarifying that this is POWER 16.04 only. We are very happy to be at a root cause for this issue - it had us pretty worried! We really appreciate all the help from everyone involved here.
Will there be an upstream glibc bug associated with this ticket that we can follow?
Hi Andrew,
Aaron has just opened https:/
Our team will work with the Ubuntu toolchain guys to get a solution in the field as soon as possible. Thanks again for your patience and professionalism!
Bill
Hi Bill -
Thanks for the glibc bug link.
Totally understand about people being out, not a problem. However, I'm not very familiar with the development process for upstream glibc fixes to make their way into an LTS release.
Do you have a rough estimate of the timeline for that landing in something that we can pull in via apt-get update? Are we talking days, weeks, months? My expectation right now is that we are going to continue forward with issuing the MongoDB 3.4 GA on POWER Ubuntu 16.04, and relnote this issue, probably stating that MongoDB 3.4 should not be run in production on POWER Ubuntu 16.04 unless glibc has been upgraded to version X or newer.
We might even add a server startup warning, if it will be possible for us to detect at runtime that we are running with an affected glibc. I'd imagine that might prove difficult if this comes in as an Ubuntu patch to the existing libc version though. Thoughts on that?
Andrew
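For what it's worth, a process can only see glibc's base version string at runtime, not a distro patch level — which is exactly why such a startup check is hard. A minimal illustration:

#include <gnu/libc-version.h>
#include <stdio.h>

int main(void)
{
    /* Reports e.g. "2.23".  An Ubuntu update that fixes the elision
       bug keeps this base version string, so a startup warning could
       only key off "glibc 2.23 on ppc64el" and would keep firing even
       on patched systems.  */
    printf("glibc %s\n", gnu_get_libc_version());
    return 0;
}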
Hi Andrew -
I don't work directly in the glibc community, so I'm not completely familiar with their policies. However, the first step is to get the fix approved upstream so that it can be backported to the 2.23 release (in this case). Adam Conrad at Canonical has volunteered to help us shepherd the patch along, so we hope to be able to expedite it.
We have two possible patches to fix the problem; we are trying to determine which will have the least performance impact. I am hopeful that we can get a decision on that tomorrow so the patch can be put on the libc-alpha list and, we hope, approved without too much difficulty.
I will have to defer to Canonical folks (Adam, Matthias) on the process for getting the fix available, but I'm sure this isn't their first rodeo and this shouldn't take too long. My best guess is that the whole process may get done within a week, if all goes well with the community, and a little longer if not. We are definitely not talking months.
Hi Andrew,
Canonical's plans for handling this in the short term are described here: https:/
Bill
Howdy, I'm the originator of:
https:/
(which is dup'd to this bug)
I tested the new ("ubuntu5") libc6 packages from xenial-proposed. They prevent the crash with TensorFlow, and I have not noticed any other problems.
Will update:
https:/
Status changed to 'Confirmed' because the bug affects multiple users.
This bug was fixed in the package glibc - 2.24-9ubuntu2
---------------
glibc (2.24-9ubuntu2) zesty; urgency=medium
* debian/
failure in name resolution on upgrades from yakkety (LP: #1674532)
-- Adam Conrad <email address hidden> Tue, 21 Mar 2017 15:27:15 -0600
------- Comment From <email address hidden> 2016-11-09 10:52 EDT-------
Hello Canonical,
Sending this bug to you for awareness and advice. | https://bugs.launchpad.net/ubuntu/+source/glibc/+bug/1640518 | CC-MAIN-2018-13 | refinedweb | 10,305 | 70.43 |
tag:blogger.com,1999:blog-156263562020-02-29T01:11:58.923-08:00ecmanautWebby thoughts, most about around interesting applications of ecmascript in relation to other open web standards. I live in Mountain View, California, and spend some of my spare time co-maintaining Greasemonkey together with Anthony Lieuallen.Johan Sundström bettermentIf you are an American, have some of your savings in robot investments with <a href="">Betterment</a>, and don't want that money to work in support of, say, military weapons, civilian firearms, alcohol, or tobacco, here is how you make it no longer so:<br><ol><li>Grab a computer (at least the iOS app doesn't seem to expose these settings)</li><li>Log on to <a href="">betterment.com</a></li><li>Go to the <a href="">Portfolio → Betterment Portfolios</a> page: <div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-</a></div></li><li>For each of your portfolios, walk them through these steps to change your allocation to the Socially Responsible Investing strategy: <div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-</a></div><ol><li>Click the Edit Portfolio link: <div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-</a></div></li><li>Click the Betterment SRI → "Review Strategy" button, which will walk you through next steps to eventually change to that investment strategy. (If there isn't a blue button next to it, but is for each of the other strategies available, you're already done – good work!) <div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" width="320" height="169" data-</a></div></li><li>Scroll through the page and click "Review and refine" at the bottom: <div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-</a></div></li><li>Same thing, and click "Continue": <div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-</a></div><li>Clicking "Finish Setup" changes this portfolio's allocation: <div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-</a></div></li><li>This confirms the money in your portfolio no longer helps killing people – keep going until you run out of portfolios: <div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-</a></div></li></ol></li></ol> If you don't use Betterment for any investments, but would want to, grab somebody's referral link to <a href="">treat yourself to the first ninety days' of it free of charge</a>. It's a pretty convenient, high-yield way of growing your savings for little to no effort. 
And don't forget the guide above, as it probably defaults to investing in <em>everything</em>, indiscriminately.<img src="" height="1" width="1" alt=""/>Johan Sundström DevTools open?<p>If you follow the web world, you have hopefully read <a href="">this recent post on how we need to protect people against attacks on user data</a>. On a mostly unrelated note, I wrote some assertions that I wanted to throw errors when I'm not debugging things, but <code>console.log</code> them when I am, and then trip a <code>debugger</code> statement. It's not really api detectable what mode I am in, but the openness of browser DevTools is a fair proxy, for me, at least, and the examples it linked to gave me a shim for crafting myself the <a id="devtoolsp" href="#"><code>window.isDevToolsOpen</code> setter bookmarklet</a> I wanted:</p> <pre id="devtoolsp-src"><code class="javascript">javascript:((i) => {<br /> Object.defineProperty(i, 'id', {<br /> get: () => { window.isDevToolsOpen = true; }<br /> });<br /> setInterval(() => {<br /> window.isDevToolsOpen = false;<br /> console.debug(i);<br /> }, 1000);<br />})(new Image)</code></pre><script>document.querySelector('#devtoolsp').href = document.querySelector('#devtoolsp-src').textContent.replace(/\n\s+/g, '');</script> <p.)</p><img src="" height="1" width="1" alt=""/>Johan Sundström<p>I.</p> <div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" width="400" height="154" /></a></div> <p>If you, too, are signing up for a Tesla (model S or X) before March 15, 2017, do it via a referral code (<a href="">mine, ts.la/johan22</a>, for instance) for a $1,000/€1,000 rebate. I grabbed mine from an acquaintance, and am paying it forward similarly. If it's later than that date, by the time you read this, search around a bit with google or on facebook; you'll probably find one.</p><img src="" height="1" width="1" alt=""/>Johan Sundström Freak<p>After Google Chrome discontinued native user script support in the shipping default build, I ran <a href="">TamperMonkey</a> for a while, a pretty full featured extension similar to Firefox's <a href="">Greasemonkey</a> for my user scripting.</p> <p>And then at some point, I stumbled upon <a href="">Control Freak</a>, a minimalist solution for quick hacks and tweaks with no support for sharing those hacks, but swift access to making a script for the current page, domain, or the whole web.</p> <p>And despite being really fond of sharing the tools I build, I liked it, and have stuck with it since. 
But I will at least make one little hack available for it for making it easy to do <a href="">on.js</a> style scraping from chrome devtools by way of a Control Freak user script.</p> <p>For now, I resort to digging up the link to the current <tt>.user.js</tt> version from <a href=""></a> and pasting that into the "lib" field, and then write myself a little scraper definition – here is one for <a href=""></a> that just parses out basic metadata from the posts fetched in the initial pageload:</p> <pre>scrape(<br />[ 'css* .post_info_link'<br />, { node: 'xpath .'<br /> , data: 'xpath string(@data-tumblelog-popover)'<br /> , link: 'xpath string(@href)'<br /> , author: 'xpath string(.)'<br /> }<br />]);</pre> <p>From then on, each time I go to that page, my chrome console will expose the <code>on</code> function on the page, the <code>scraper</code> definition above, and the array of <code>scraped</code> items (per the <code>css*</code> contextual selector, digging up an array of 0 or more parsed <code>.post_info_link</code> elements from the page, digging out the four sub-bits for each).</p> <p.</p> <p <a href="">2007</a> / <a href="">2012</a>.</p> <p>You can think of it as a more powerful version of <code>document.querySelector</code>, <code>document.querySelectorAll</code>, and <code>document.evaluate</code>, and lots of boring iteration code to create the objects/arrays you seek, rolled into one, with a unified/bridged syntax.</p> <p>Some base-bones examples:</p> <pre>on.dom('css? body') = document.querySelector('body') = document.body<br />on.dom('css* img') = document.querySelectorAll('img') ~ document.images (but as a proper Array)<br />// or, equivalent, but with xpath selectors:<br />on.dom('xpath? //body')<br />on.dom('xpath* //img')<br /></pre> <p:</p> <pre><br />on.dom('css h1') = document.querySelector('h1')<br />on.dom('css+ img') = document.querySelectorAll('img') ~ document.images (but as a proper Array)<br />// or, equivalent, but with xpath selectors:<br />on.dom('xpath //h1')<br />on.dom('xpath+ //img') = devtools' $x('//img')<br /></pre> To create an object with some named things in it, just declare the shape you want: <pre><br />on.dom(<br />{ body: 'css body')<br />, images: 'css+ img')<br />});<br /></pre> Let's say you want all links in the page with an image in them: <pre>on.dom('xpath+ //a[@href][count(.//img) = 1]')</pre> But instead of the <code>img</code> elements, we want their <code>src</code> attribute, and the <code>href</code> attribute of the link. This is where context selectors come into play (an array literal with the context selector first, followed by the object structure to create for each item): <pre>on.dom(<br />[ 'xpath+ //a[@href][count(.//img) = 1]'<br />, { href: 'xpath string(@href)'<br /> , src: 'xpath string(.//img/@src)'<br /> }<br />])</pre> For a page with three such elements, this might yield you something like: <pre><br />[ { href: '/', src: '/homepage.png'}<br />, { href: '/?page=1', src: '/prev.png'}<br />, { href: '/?page=3', src: '/next.png'}<br />]</pre> The context selector lets us drill into the page, and decode properties relative to each node, with much simpler selectors local to each instance in the page. 
If we want the <code>a</code> and <code>img</code> elements in our data structure too, that's an easy addition: <pre>on.dom(<br />[ 'xpath+ //a[@href][count(.//img) = 1]'<br />, { href: 'xpath string(@href)'<br /> , src: 'xpath string(.//img/@src)'<br /> , img: 'xpath .//img'<br /> , a: 'xpath .'<br /> }<br />])</pre> <p>All leaves in the structure (<code>'<selectorType><arity> <selector>'</code>) you want to drill out sub-properties for, can be similarly replaced by a <code>['contextSelector', {subPropertiesSpec}]</code> this way, which makes decomposing deeply nested templated pages comfy.</p> <p>Put another way: this is for web pages what regexps with named match groups are to strings of text, but with tree structured output, as web pages are tree structured instead of one-dimensional. And I think it makes user scripting a whole lot more fun.</p><img src="" height="1" width="1" alt=""/>Johan Sundström have almost stopped writing about tech stuff in recent years, despite web APIs and javascript features catching up with sanity ever faster. What used to be a very horrible hack a few back to fetch a web page and produce a DOM you could query from it is at the moment both pretty readable and understandable: <pre>const wget = async(url) => {<br /> try {<br /> const res = await fetch(url), html = await res.text();<br /> return (new DOMParser).parseFromString(html, 'text/html');<br /> } catch (e) {<br /> console.error(`Failed to parse ${url} as HTML`, e);<br /> }<br />};<br /><br />wget('/').then(doc => alert(doc.title));</pre> This already works in a current Google Chrome Canary (57). Sadly no javascript console support for <code>doc = await wget('/')</code>; you still have to use the "raw" promise API directly for interactive code, rather than syntax sugared blocking behaviour – but it's still a lot nicer than things used to be. And you can of course assign globals and echo when it's done: <pre>wget('/').then(d => console.log(window.doc = d));<br />doc.title;</pre> Querying a DOM with an XPath selector and optional context node is still as ugly as it always was (somehow only the first making it to the Chrome js console): <pre>const $x = (xpath, root) => {<br /> const doc = root ? root.evaluate ? root : root.ownerDocument : document;<br /> const got = doc.evaluate(xpath, root||doc, null, 0, null);<br /> switch (got.resultType) {<br /> case got.STRING_TYPE: return got.stringValue;<br /> case got.NUMBER_TYPE: return got.numberValue;<br /> case got.BOOLEAN_TYPE: return got.booleanValue;<br /> default:<br /> let res = [], next;<br /> while ((next = got.iterateNext())) {<br /> res.push(next);<br /> }<br /> return res;<br /> }<br />};<br /><br />const $X = (xpath, root) => Array.prototype.concat($x(xpath, root))[0];</pre> ...but for the corresponding css selector utilities (<code>$$</code> and <code>$</code> respectively), we can now say <code>document.querySelectorAll()</code> and <code>document.querySelector()</code>, respectively. Nice-to-haves. 
Like the <a href="">lexically bound arrow functions</a>..<img src="" height="1" width="1" alt=""/>Johan Sundström<svg xmlns="" viewBox="0 0 1000 1000" width="520" height="520"> <path fill="#3c5a99" d="M945,0l-890,0c-30,0 -55,25 -55,55l0,890c0,30 25,55 55,55l431,0c-11,-1 -22,-2 -32,-4l0,-74l5,0c6,1 14,3 21,4c8,1 15,2 22,2c22,0 40,-3 54,-8c14,-5 25,-14 32,-26c6,-11 11,-24 13,-40c2,-16 3,-36 3,-59l0,-403l-89,0l0,-69l172,0l0,506c0,56 -15,99 -45,128c-25,24 -57,38 -97,43l400,0c30,0 55,-25 55,-55l0,-890c0,-30 -25,-55 -55,-55m-351,241l0,-86l94,0l0,86"/></svg><br /><p>I recently left <a href="/2013/02/groupon.html">Groupon</a>, and I am as surprised as people who know me about where I am heading next. :-)</p><p>Starting Monday, I'll be doing <a href="">Facebook</a> bootcamp.</p><p>Interviewing for various companies was interesting. <a href="/2015/09/notifications-on-web.html">My litmus test</a>.</p><p>At Google, I got the impression each time I brought it up with an interviewer, that this was Somebody Else's Problem, and likely hard to do anything about without ending up on exactly the right team, or working against the grain of bureaucracy.</p><p>At Facebook, everybody I talked to got really interested, lit up when they got that <code>localStorage</code>.</p><p>Sold! :-)</p><img src="" height="1" width="1" alt=""/>Johan Sundström on the Web<p>Hey, web developers!</p> <p>Have you ever clicked that <a href="">GMail</a>, <a href="">Google+</a>, <a href="">Github</a>, <a href="">Facebook</a>, or other web site icon or browser tab title that says you have some unread notification waiting for you, only to find that nope, you don't? Because you already saw it, in another open browser tab?</p> <p>Yes, you have. You, and millions of other people that suffer this every day. In 2015. There is a very simple cure, shipping since 2009. Yes, even IE8 has it. Try opening <a href="" target="_blank">this example page</a> demonstrating it in two or more browser tabs or windows, and click the buttons in it, and observe how they all update together, instantly. For convenience, the example describes how to do it, too.</p> <p>It's easy, and supported everywhere. A localStorage event listener is all you need.</p> <p>Now please sneak this into your next site update, if you love your users. Notifications that lie to us make us all sad.</p><img src="" height="1" width="1" alt=""/>Johan Sundström 2015<div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" style="max-width:100%" /></a></div> <blockquote>"Im starting this new year off right. One of my goals is to start every day this way. What are you're resolutions?"</blockquote> <p>I think the above share, from Lindsey Stirling (a capable violinist and dancer, more or less synonymous with the genre "<a href="">dub-step violin</a>") is a fairly good example of living the human condition today.</p> <p>2014 is over and lots of people are thinking about their lives, how their year was, and what changes they want to make in 2015. 
I think this one might <i>mean</i>.</p> <p.</p> <p.</p> <p.</p> <p>Our addiction with glowing rectangles is a similar dead end, Bret Victor convincingly demonstrates, in his talk "<a href="">The Humane Representation of Thought</a>",.</p> <p.</p> .</p> <p.</p> <p>Thus consequences of even very simple innovations in technology like the invention of the condom, have hardly even <i>touched</i>.</p> <p!</p><img src="" height="1" width="1" alt=""/>Johan Sundström<p>I.</p> <p>It turns out <a href="">Byword</a> fits the description snugly. It comes with some minimal preferences, a dark and a light theme, no bloat, and when your needs are more specific than the preferences afford, you can still escape-hatch through <code>defaults write</code> to get things like a line width of exactly 80 characters, monospace by setting your font to Menlo 18pt and issuing the command:</p> <pre>defaults write com.metaclassy.byword BywordTextWidthCurrent 800</pre> <p>in a shell and restarting it. If, like me, you find you are happier with a solid than blinking caret, there's are <a href="">Stack Exchange solutions</a> for that too, for various versions of osx; here's the one that sufficed for my current 10.8.5 machine, setting the blink period to upwards of 32 millennia:</p> <pre>write -g NSTextInsertionPointBlinkPeriod -float 999999999999</pre> <div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="background: #111; display: block; padding: 60px;"><img src="" height="229" width="320" style="padding: 0; border: 0; background: none;"/></a></div><img src="" height="1" width="1" alt=""/>Johan Sundström<svg xmlns="" viewBox="0 0 2406 1059" width="520" height="229"> <path fill="#bcbec0" opacity=".2" stroke="#636466" stroke- <linearGradient id="johan-grad-1" gradientUnits="userSpaceOnUse" x1="1135" x2="1313" y1="-364" y2="683"> <stop stop- <stop stop- </linearGradient> <path fill="url(#johan-grad-1)" d="m2337 758-2102 225-153-914 2255 1z"/> <path fill="#fff" opacity=".3" d="m116 643-108-637 855 1-859-4z"/> <linearGradient id="johan-grad-2" gradientUnits="userSpaceOnUse" x2="0" y1="297" y2="585"> <stop stop- <stop stop- </linearGradient> <g fill="url(#johan-grad-2)" stroke="#fff" stroke- <path d="m361 6 h 78 v 156 c 0 27 1 54 -2 76 c -3 23 -13 38 -27 51 c -13 12 -31 22 -52 26 c -23 5 -48 2 -68 -5 c -37 -14 -60 -45 -60 -96 h 83 c 1 12 2 23 8 30 c 7 8 27 7 33 -2 c 8 -11 6 -34 6 -54 v -181 c 0 0 0 -1 0 -1 z"/> <path d="m644 2a158 158 0 0 0 0 316 158 158 0 0 0 0-316m0 75a83 83 0 0 1 0 166 83 83 0 0 1 0-166"/> <path d="m847 6h78v115h100v-115h78v307h-78v-122h-100v122h-78"/> <path d="m 1259 6 h 61 c 41 102 81 204 122 306 c -28 1 -57 0 -85 0 c -5 -15 -10 -29 -15 -44 h -106 c -6 14 -11 30 -16 44 h -83 c 0 -2 1 -3 1 -5 c 40 -100 81 -201 121 -301 c 0 -1 0 -1 1 -1 z m 30 96 c 0 0 0 0 0 0 c -11 35 -21 70 -31 105 h 64"/> <path d="m1665 310l-116-173v173h-73v-304h66l114 172v-172h73v304z"/> </g></svg><br /><p><a href="">My startup</a> just got acquired by Groupon (where I shall be used for Good, not Evil :-).</p><p>I started this Monday. So far, it is a little disorienting to work in what is best described as a huge confederation of startups. 
It should hopefully get better once I am chipping away at making stuff better.</p><img src="" height="1" width="1" alt=""/>Johan Sundström as Era Indicators<div class="separator" style="float: left;"><a href="" imageanchor="1" style="margin-right: 1em;"><img border="0" height="320" src="" width="232" /></a></div><blockquote class="tr_bq"><a href="">The future is already here — it's just not very evenly distributed.</a></blockquote.<br /><br /.<br /><br />All sorts of things you believe (and more importantly <i>don't</i> believe) set you apart as a member of a different age. <i>Some</i> living in those days share <i>some</i> of your beliefs about how the solar system is arranged, for instance (the future was unevenly distributed back then, too, after all), but most don't, and would find you a strange eccentric full of funny made-up ideas, uneducated about how things <i>are</i>..<br /><br /.<br />.<br /><br / <i>language</i> – a (today) dated agreement about what constitutes a <i>sense</i>,.<br /><br /.<br /><br /.<br /><br /.<br /><br / <a href="">Robert Lustig talking about the same</a>, I am somewhat better, but still not very good: I have not particularly rigged you to be interested in the subject, and chances are slim that his explanations seek you out at a time you are interested in them.<br /><br /.<br /><br /.<br /><br /.<br /><br />Life is a set of choices, but choices are downstream from beliefs. Upstreams from beliefs are whatever processes of multiplication and elimination you instill. Make them good ones. Live long, and prosper! :)<img src="" height="1" width="1" alt=""/>Johan Sundström app search engine definitionsTo have chrome search for iOS apps from the url bar by typing "iapp app-name", add a search engine definition (Command-, type "search", click "Manage search engines", scroll to the bottom of the list and add a new entry) like this one:<br /><br /><pre>iOS apps iapp<u><b>us</b></u>%2Fapp%2F%20%s</pre><br />(after tweaking "<b><u>us</u></b>" to whatever country code your apps come from)<br /><br />It would be neat if the iOS app store had its own search page somewhere on the web so we didn't have to rely on Google's rather poor rendition of the search results, but I haven't found one.<br /><br />The more <a href="">json</a> minded of you might also add an accompanying "iappjs" version, for the heck of it:<br /><br /><pre></pre><pre>iOS apps json iappjs</pre><div><br /></div><img src="" height="1" width="1" alt=""/>Johan Sundström is data<div>One of the best ways to leverage subconscious data about myself, my experiences, assumptions and otherwise, is to notice and write down what I get surprised about – especially when traveling, or in changing circumstances in any other capacity. 
I am fairly certain this idea comes from one of <a href="">Paul Graham's essays</a> (though I <a href="">couldn't find which</a> – if you know, do say, and I'll link it; it's a great read).</div><div><br /></div><div><div <i>intentional</i>.</div></div><div><br /></div><div <a href="">iPhone</a> / <a href="">Android</a> app for measuring ambient noise, and both in the large (architecture, work place environment, city planning, et cetera) and small (tools, implements), fighting noise is not just a matter of engineering pride or regulations, but also pretty much taken for granted.</div><div><br /></div><div.</div><div><br /></div><div>I am surprised so little has changed here since those days.</div><img src="" height="1" width="1" alt=""/>Johan Sundström rendered source bookmarklet<p>I haven't made a bookmarklet from scratch in a long while, but after looking at <a href="">this codepen</a> I wrote a while ago to demonstrate copying one javascript document into another, it occurred to me that it could easily become a modern "view rendered source" bookmarklet, that might even work on iOS devices and the like, where a view source feature is sorely missing.</p> <p>Here is the result: <tt><a href="javascript:(function()%7Bvar%20d%3Ddocument%2Cb%3Dd.body%2Co%3Db.style.overflow%2Ch%3D(new%20XMLSerializer).serializeToString(d)%2Cf%3Db.insertBefore(d.createElement('iframe')%2Cb.firstChild)%2CF%3Df.contentDocument%2Cs%3Dd.implementation.createHTMLDocument('')%3Bf.style.cssText%3D'z-index%3A2147483647%3Bdisplay%3Ablock%3Bopacity%3A1%3Bvisibility%3Avisible%3Bbackground%3A%23fff%3Bcolor%3A%23000%3Bposition%3Afixed%3Bheight%3A100%25%3Bwidth%3A100%25%3Bpadding%3A0%3Bborder%3A0%3Bmargin%3A0%3Bleft%3A0%3Btop%3A0'%3Bs.open()%3Bs.write('%3C!DOCTYPE%3E%3Chtml%3E%3Cpre%3E'%2Bh.replace(%2F%26%2Fg%2C'%26amp%3B').replace(%2F%3C%2Fg%2C'%26lt%3B')%2B'%3C%2Fpre%3E%3C%2Fhtml%3E')%3Bs.close()%3BF.replaceChild(F.importNode(s.documentElement%2C!0)%2CF.documentElement)%3Bb.style.overflow%3D'hidden'%3BF.onclick%3Dfunction()%7Bb.style.overflow%3Do%3Bb.removeChild(f)%3Bd%3Db%3Dh%3Df%3DF%3Df%3Ds%3D0%7D%7D)()">view rendered source</a></tt></p> <p>On clicking it, you get a "view source" iframe with the source of the current document, in whichever state it was when you clicked the button, and when you click (or tap) that it goes away again.</p> <p.</p> <p>Enjoy!</p><img src="" height="1" width="1" alt=""/>Johan Sundström url from a relative url and base_url<p:</p> <script src=""></script><noscript><pre>function resolveURL(url, base_url) {<br /> var doc = document<br /> , old_base = doc.getElementsByTagName('base')[0]<br /> , old_href = old_base && old_base.href<br /> , doc_head = doc.head || doc.getElementsByTagName('head')[0]<br /> , our_base = old_base || doc_head.appendChild(doc.createElement('base'))<br /> , resolver = doc.createElement('a')<br /> , resolved_url<br /> ;<br /> our_base.href = base_url;<br /> resolver.href = url;<br /> resolved_url = resolver.href; // browser magic at work here<br /><br /> if (old_base) old_base.href = old_href;<br /> else doc_head.removeChild(our_base);<br /><br /> return resolved_url;<br />}</pre></noscript> <p>You can play around with it a little here, to see that your browser supports it, too. 
You should even be able to use a relative URL as the <code>base_url</code> parameter, which should get resolved against the page url -- which here is the jsfiddle url shown, as that demo is running in an embedded iframe, rather than on this blog itself:</p> <iframe style="width: 100%; height: 180px" src="" allowfullscreen="allowfullscreen" frameborder="0"></iframe> <p>It of course won't work in node.js, but hopefully it'll be useful to something you or <a href="">others</a> are doing, too. Use as you like; it's all public domain / MIT licensed goodness, whichever you fancy.</p><img src="" height="1" width="1" alt=""/>Johan Sundström and the Human MindI am seeing more of Facebook these days, from an indirect angle, lending a point of view on the phenomenon that lets me observe the behaviour it is driving in its subjects at large, rather than participating, and observing the behaviours <i>I</i> exhibit. <span class="footnote">The latter is both significantly more challenging, due to perception bias, and a microscopically smaller data point, to a sea of data backdrop.</span><br><br>Facebook has created a niche where its inputs are humans (expressing human behaviour) and where its outputs are <i>massive</i>:<br><br>I think I see the fundamental component of life itself, as it applies to behaviour: any behaviour which increases the likelihood of recreating itself, increases the rate of repeating itself, and increases the accuracy of the reproduction of itself, is (from a natural selection point of view,) more <i>fit</i>. Whenever this mechanism randomly shows up in nature (which only needs to happen once in a universe, as the seed mutates, and down the line reproduces the incredible wealth of complexity some unironically like to call "creation"), we call it <i>Life</i><!--, and far downstream of it, things like sentience has developed in humans-->..<br><br>Let me establish some vocabulary to help clarify this post: a <i>Facebook operator</i>,.<br><br.<br>) <a href="">Skinner box</a>,.<br><br.<br><br.)<br><br.<br><br.<br><br.<br><br.<br><br.<img src="" height="1" width="1" alt=""/>Johan Sundström TV ad<p>This is a transcript of the Github TV ad. If you haven't seen it, think "Apple commercial", Jonathan Ive's dreamy narrative, British accent, the whole jive, and you'll have it about right. Okay, you're set; cue soft music:</p> <blockquote>Here at Github, we like to fork the best ideas in the valley and spin them a little different. You might not have heard about <a href="//">our philantropic branch</a>(because we don't like to brag), but it's actually the secret behind some of our most groundbreaking new innovations.</blockquote> <blockquote><a href="//biolabs.github.org/">Github Biolabs</a>is how Tom Preston-Werner, in late 2007 and before we were even founded, re-imagined Google's "20% time". We started Github as an incubator, here at the <a href="">San Francisco asylum for mad scientists</a>, and <a href="">some</a><a href="">of</a><a href="">our</a><a href="">most</a><a href="">brilliant</a><a href="">employees</a>began their career doing 20% time with us, as a precursor to their first parole leave. None has left us yet.</blockquote> <p>.]</p> <blockquote><a href="//github.com/images/modules/about_page/hubbernauts.jpg" >At Github Biolabs</a>, we have four saps on tap, an expansive green-house, and a <a href="//arboretum.github.org/">beautiful arboretum</a>. 
It is here that our scientists, using recombinant DNA splicing techniques, in this social open source setting, literally Create New Apples.</blockquote> <p>[Cut back to narrator, taking a bite out of one. Fade to white. Fade to logo:]</p> <svg width="520" height="353" xmlns="" viewBox="0 0 10000 4419"> <defs><font horiz- <font-face <glyph unicode=" " horiz- <glyph unicode="S" horiz- <glyph unicode="O" horiz- <glyph unicode="C" horiz- <glyph unicode="I" horiz- <glyph unicode="A" horiz- <glyph unicode="L" horiz- <glyph unicode="D" horiz- <glyph unicode="N" horiz- <glyph unicode="G" horiz- <hkern u1="C" u2="O" k="6"/> </font></defs> <g class="collegiate-regular"> <path d="m1281 1189c155 0 334-39 537-116v498c-45 16-110 34-193 53 26 74 39 143 39 208 0 206-62 386-186 539s-285 244-481 273c-129 19-193 89-193 208 0 42 21 84 63 126 55 61 135 100 242 116 461 71 691 263 691 575 0 500-298 749-894 749-245 0-447-44-604-131-200-109-300-282-300-517 0-271 150-456 450-556v-10c-110-68-164-171-164-310 0-181 52-293 155-338v-10c-103-35-195-116-276-242-90-135-135-281-135-435 0-232 82-426 247-580 158-145 347-218 566-218 158 0 305 39 440 116zm19 2543c0-164-135-247-406-247-261 0-392 85-392 256 0 168 142 251 426 251 248 0 372-87 372-261zm-745-1857c0 222 102 334 305 334 197 0 295-113 295-339 0-94-23-174-68-242-55-74-131-111-227-111-203 0-305 119-305 358"/> <circle cx="2331" cy="360" r="360"/> <path d="m2055 3244c6-65 10-174 10-329v-1504c0-152-3-256-10-314h546c-7 61-10 163-10 305v1484c0 164 3 284 10 358"/> <path d="m3602 1098h421v469c-16 0-46-2-90-5s-85-5-123-5h-208v899c0 216 71 324 213 324 100 0 190-27 271-82v484c-119 65-263 97-430 97-235 0-398-84-488-251-68-126-102-324-102-595v-865h5v-10l-73-5c-42 0-97 5-164 15v-469h237v-189c0-90-5-163-15-218h561c-10 61-15 131-15 208"/> <path d="m5300 1083c-139 0-284 48-435 145v-846c0-184 3-305 10-363h-551c10 52 15 172 15 363v2558c0 148-5 250-15 305h561c0-10-3-49-10-118-6-69-10-131-10-186v-1189c113-106 222-160 329-160 123 0 216 61 280 184 48 97 73 218 73 363l-5 653c0 110-8 261-24 455h585c-13-81-19-229-19-445v-662c0-280-63-519-189-716-142-226-340-339-594-339"/> <path d="m7012 3264c-255 0-443-113-566-339-97-184-145-424-145-720v-662c0-219-6-367-19-445h580c-10 71-16 222-19 455l-5 653c0 184 21 316 63 396 48 100 137 150 266 150 90 0 192-53 305-160v-1189c0-55-3-118-10-189s-10-110-10-116h561c-6 55-10 156-10 305v1480c0 190 3 311 10 363h-527v-174c-148 129-306 193-474 193"/> <path d="m9188 3259c-174 0-319-63-435-189v174h-513c10-55 15-156 15-305v-2558c0-190-5-311-15-363h551c-7 58-10 179-10 363v841c148-94 293-140 435-140 255 0 455 113 600 339 122 197 184 435 184 716 0 287-65 537-193 749-155 248-362 372-619 372zm-73-1668c-106 0-218 53-334 160v836c103 100 205 150 305 150 126 0 221-69 285-208 52-110 77-240 77-392 0-364-111-546-334-546"/> </g> <text x="1984" y="4352" font-SOCIAL CODING</text></svg><img src="" height="1" width="1" alt=""/>Johan Sundström and IEThis post is a brief celebration of succession on the web. Not Mosaic begat Netscape begat Mozilla begat Firefox succession, but Firefox 4 begat Firefox 5 begat Firefox 6 begat Firefox 7 and Chrome N begate Chrome N+1 type succession. 
Google got it right with Chrome last decade, and not long thereafter, <a href="">Mozilla got it too</a> this year.<br /><br />Quoting that post's quote of Steve Jobs' <a href="">2005 Stanford commencement address</a> (worth watching, if you haven't),<br /><blockquote.</blockquote><br /, <a href="">Phoenix</a>, which it now fully embodies, but at least now, this gift has finally been given to web developers: <br /><blockquote>You need not bother writing applications (or perfecting layouts) to high-fidelity for old browsers, for their time is short and their better ancestors replace them quickly.</blockquote><br /.<br /><br />Returning to the paramount topic of graceful, benevolent, rapid-evolution-supportive death, <a href="">Microsoft's IE</a> <a href="">does not yet get it<.<br /><br /.<br /><br /.<br /><br />For a browser, it is better to live a great but short life and go out with a boom, than it is to burden its extended family with a never-ending old age in an insufferable early-set rigor mortis. However you feel about <a href="">Steve Jobs</a> he lived and died in this way, never holding back, never growing stale of mind nor action, and the world was better off for it.<img src="" height="1" width="1" alt=""/>Johan Sundström an old rails 2.3.8 with rvmI was helping set up a local (legacy) rails 2.3.8 server on a macbook today, autonomous from the system ruby installation. This was a bit messy, as modern versions of rubygems conflict with the old rails 2.3.8 environment to the tune of:<br /><pre>Uninitialized constant ActiveSupport::Dependencies::Mutex (NameError)</pre>...when you try to start the server. Here's the recipe I came up with: <ol><li><a href="">Install rvm.</a></li><li><pre><code># Install ruby 1.8.7 and downgrade its rubygems to 1.5.3:<br />rvm install 1.8.7 && \<br /> rvm use 1.8.7 && \<br /> rvm gem install -v 1.4.2 rubygems-update && \<br /> rvm gem update --system 1.4.2 && \<br /> update_rubygems && \<br /> echo 'Okay.'</code></pre></li><li><pre><code># Install all the gems you need, for instance:<br />rvm gem install -v 2.3.8 rails && \<br /> rvm gem install -v 3.0 haml && \<br /> rvm gem install -v 2.1 authlogic && \<br /> rvm gem install -v 1.0 dalli && \<br /> rvm gem install -v 0.2 omniauth && \<br /> rvm gem install -v 2.7.0 erubis && \<br /> rvm gem install -v 1.3.3 sqlite3 && \<br /> echo 'Gems installed!'</code></pre></li><li>If needed, run <code>rake db:setup</code> in your rails tree to set up its databases.</li><li>Done! <code>rails/script/server -p <i>your_port</i></code> is ready for action.</li></ol><img src="" height="1" width="1" alt=""/>Johan Sundström SVGs at gist.github.comLately, I've been having a lot of fun hand optimizing SVG files for size, a bit like <a href="">Sam Ruby</a> does for his blog (it is highly instructive peeking at <a href="">his collection</a>, as I think I have mentioned before). <br /><br />For me, SVG has something of the magical flair I first found in HTML in the nineties, back when it was the new bleeding edge New Thing, but I argue that it's even more fun than HTML was. 
The <a href="">W3C SVG specs</a> are not prohibitively difficult to read, and of course you have much greater graphical freedom than structured documents can afford you (duh!).<br /><br />Like Sam, I try for something presentable in a kilobyte or less (uncompressed, though modern good browsers are as happy to render SVG:s delivered with <code>content-encoding: gzip</code>, of course, as long as they are otherwise correct and delivered with an <code>image/svg</code> or <code>image/svg+xml</code> content-type), and to never enforce a fix width in the svg tag itself – so they just beautifully grow to any size you want them to be, with no loss of quality.<br /><br / <a href="">Erik Dahlström</a>, but I might exaggerate a bit).<br /><br /><svg style="float:left; width:360px; height:320px; margin:0 0.5em 0.5em 0" viewBox="-160 -160 360 320" xmlns="" xmlns:<path id="f" d="m123,0a123,123 0,0 1-246,0a123,123 0,0 1 246,0"/><g fill="#057"><circle r="160"/><circle r="150" fill="#fff"/>[<text font-<animatetransform type="rotate" from="360 0 0" to="0 0 0" dur="10s" attributeName="transform" repeatCount="indefinite"/><textpath xlink:I thought what I'd do was, I'd pretend I was one of those deaf-mutes</textPath></text>] (<- this is text content in an inline SVG, if your browser or reader has stripped off the SVG or can't render such modernisms :-) <circle r="115"/><circle r="95" fill="#fff"/><path d="m-8-119h16 l2,5h-20z"/><circle cx="160" cy="0" r="40"/><path d="m-95-20v-20h255a40,40 0,0 1 0,80h-55v-20z"/><path d="m-85 0a85,85 0,0 0 170,0h-20a65,65 0,0 1-130,0z"/><path d="m-65 20v20h140v-20z"/><path d="m-115-20v10h25v30h250a20,20 0,0 0 0,-40z" fill="#fff"/><path d="m-20 10c-17-14-27-14-44 0 6-25 37-25 44 0z"/><path d="m60 10c-17-14-27-14-44 0 6-25 37-25 44 0z"/></g></svg> Anyway, this weekend, I had fun turning the Laughing Man (from <a href="">Ghost in the Shell: Stand Alone Complex</a>) that <a href="">@elmex</a> vectorized at some point, into a 1000 byte version of my own, also featuring the gentle rotating text seen in the anime series (<a href="">YouTube</a>), via declarative animation (so there is no javascript involved here).<br /><br /><b>Edit:</b> I initially missed an excellent opportunity here to plug Jeff Schiller's <a href="">scour<!<br /><br />The result is in <a href="">this gist</a> (if you install <a href="">this user script</a>, you can swap between <code>*.svg</code> text/plain source code image/svg rendition) or to the left, if your browser renders inline SVG:s properly in HTML content (I recommend the gist over view source, as Blogger inserts linebreak HTML tags unless I strip out all newlines first).<br /><br />What surprised me, when I made this user script, is how far standards support has come in modern browsers: in order to inject the <code><svg></code> tag I create for the rendered version, I had to feed the source through <code>DOMParser</code> (as setting <code>.innerHTML</code> is lossy from treating the SVG content as <code>text/html</code>, not <code>text/xml</code>), and the few lines doing that, just magically worked in all of Chrome, Firefox 4 and Opera 11 (de-jQuery:fied, to make more sense outside of the script's context) with no special extra effort on my part:<br /><pre><code>// turn the raw SVG source string into an XML document:<br />svg = (new DOMParser).parseFromString(svg, 'text/xml');<br /><br />// import it into an SVGSVGElement in this document:<br />svg = document.importNode(svg.documentElement, true);<br /><br />// and insert that element somewhere in the 
document:<br />document.body.appendChild(svg);</code></pre>To me, that's a rather clear sign SVG is ready for prime time now.<br /><br /><svg style="width:160px; height:160px; float:left; margin:0 0.5em 0.5em 0" xmlns="" viewBox="-0.2 -1 379 334">> While github reports having no plans on serving <code>*.svg</code> g <a href="">octocat</a> SVG I <a href="">similarly format converted</a> a while ago from the github About page, and <a href="">milligramme</a> made <a href="">this much spacier version</a>.<br /><br />I gather all my SVG play in a <a href="">svg-cleanups</a> repository on github, if anyone wants to get inspired the fork or follow way, and occasionally <a href="">tweet</a>.<img src="" height="1" width="1" alt=""/>Johan Sundström your own Github SVGs, step by stepI SVG:ified and played a little further with the logo material from the recently published <a href="">Github about page</a>, and then tonight I figured it would be fun to visualize the elegant process by which a raw SVG image is built up, piece by piece, by rather basic building blocks. With just a little bit of javascript magic to help you, here is how you piece together your own github schwag from scratch (works like a charm in Chrome, Firefox 4, and presumably any other modern browser that can handle inline SVG images<noscript> - if you do not see them in your feed reader despite a modern browser, or want to try the interactive behaviour, <a href="">head over to the blog post itself</a></noscript>):<div style="max-width: 90%; margin: 0 auto;"><svg xmlns="" viewBox="-0.2 -1 379 334" height="70%">><center><button id="step" onclick="step(this)" style="margin: 1em 0 1em">Redraw the octocat SVG image step by step</button><div id="next_step" style="margin: 0 0 1em"> </div></center><svg xmlns="" viewBox="-0.147 -0.544 242 108" height="20%"><g id="github" class="collegiate-regular"><path d="m30.908 28.692c3.732 0 8.047-.933 12.946-2.799v12.013c-1.089.389-2.644.816-4.666 1.283.622 1.788.933 3.46.933 5.015 0 4.977-1.497 9.311-4.49 13.004s-6.862 5.891-11.605 6.59c-3.11.467-4.665 2.139-4.665 5.016 0 1.011.505 2.021 1.516 3.032 1.322 1.478 3.266 2.411 5.832 2.8 11.12 1.71 16.679 6.337 16.679 13.879 0 12.053-7.192 18.079-21.577 18.079-5.91 0-10.77-1.05-14.579-3.149-4.822-2.64-7.232-6.799-7.232-12.475 0-6.531 3.616-11.002 10.847-13.412v-.233c-2.644-1.633-3.965-4.121-3.965-7.465 0-4.354 1.244-7.076 3.732-8.164v-.233c-2.488-.855-4.705-2.8-6.648-5.832-2.178-3.267-3.266-6.766-3.266-10.498 0-5.599 1.983-10.264 5.948-13.996 3.81-3.499 8.359-5.249 13.646-5.249 3.81 0 7.348.933 10.614 2.799zm.467 61.35c0-3.966-3.266-5.948-9.797-5.948-6.298 0-9.447 2.061-9.447 6.182 0 4.043 3.421 6.064 10.264 6.064 5.986 0 8.98-2.099 8.98-6.298zm-17.962-44.788c0 5.365 2.449 8.048 7.348 8.048 4.743 0 7.115-2.722 7.115-8.165 0-2.255-.544-4.199-1.633-5.832-1.322-1.788-3.149-2.683-5.482-2.683-4.899.001-7.348 2.878-7.348 8.632z"/><path d="m56.222 17.378c-2.255 0-4.179-.855-5.773-2.566-1.594-1.71-2.391-3.732-2.391-6.065 0-2.411.797-4.471 2.391-6.182 1.594-1.71 3.518-2.565 5.773-2.565 2.177 0 4.063.855 5.657 2.566s2.391 3.771 2.391 6.182c0 2.333-.797 4.354-2.391 6.065-1.594 1.71-3.48 2.565-5.657 2.565z"/><path d="m49.574 78.262c.155-1.555.233-4.198.233-7.931v-36.274c0-3.654-.078-6.182-.233-7.581h13.18c-.156 1.478-.233 3.927-.233 7.348v35.807c0 3.966.078 6.843.233 8.631h-13.18z"/><path d="m86.888 26.476h10.147v11.314c-.389 0-1.108-.039-2.158-.117s-2.041-.117-2.974-.117h-5.015v21.694c0 5.21 1.71 7.814 5.132 7.814 2.411 0 4.587-.66 6.532-1.982v11.663c-2.877 1.556-6.337 
2.333-10.381 2.333-5.676 0-9.603-2.021-11.78-6.064-1.633-3.033-2.45-7.814-2.45-14.347v-20.877h.117v-.233l-1.75-.116c-1.011 0-2.333.116-3.965.35v-11.315h5.715v-4.549c0-2.177-.117-3.927-.35-5.249h13.529c-.233 1.478-.35 3.149-.35 5.015v4.783z"/><path d="m127.86 26.126c-3.343 0-6.842 1.167-10.497 3.499v-20.411c0-4.432.078-7.348.233-8.748h-13.296c.233 1.244.35 4.16.35 8.748v61.7c0 3.576-.117 6.026-.35 7.348h13.53c0-.233-.078-1.186-.233-2.857-.155-1.672-.233-3.169-.233-4.49v-28.693c2.722-2.566 5.365-3.849 7.931-3.849 2.955 0 5.21 1.477 6.765 4.432 1.166 2.333 1.75 5.249 1.75 8.748l-.117 15.745c0 2.645-.194 6.299-.583 10.964h14.112c-.311-1.943-.466-5.521-.466-10.73v-15.979c0-6.765-1.516-12.519-4.549-17.262-3.42-5.443-8.2-8.165-14.34-8.165z"/><path d="m169.15 78.729c-6.143 0-10.691-2.722-13.646-8.165-2.333-4.432-3.499-10.225-3.499-17.378v-15.979c0-5.288-.155-8.864-.466-10.73h13.996c-.233 1.71-.389 5.365-.467 10.964l-.116 15.746c0 4.432.505 7.62 1.516 9.563 1.167 2.411 3.305 3.616 6.415 3.616 2.177 0 4.626-1.283 7.348-3.849v-28.693c0-1.322-.078-2.838-.233-4.549-.156-1.711-.233-2.644-.233-2.799h13.529c-.155 1.322-.233 3.771-.233 7.348v35.69c0 4.587.078 7.503.233 8.747h-12.72v-4.198c-3.58 3.11-7.39 4.666-11.43 4.666z"/><path d="m221.64 78.611c-4.198 0-7.697-1.516-10.497-4.548v4.198h-12.363c.233-1.321.351-3.771.351-7.348v-61.7c0-4.588-.117-7.503-.351-8.748h13.297c-.156 1.399-.233 4.315-.233 8.748v20.294c3.576-2.255 7.076-3.383 10.497-3.383 6.143 0 10.964 2.722 14.463 8.165 2.954 4.743 4.432 10.497 4.432 17.262 0 6.92-1.555 12.946-4.665 18.078-3.74 5.989-8.72 8.982-14.94 8.982zm-1.75-40.238c-2.566 0-5.249 1.283-8.048 3.849v20.178c2.488 2.41 4.938 3.616 7.348 3.616 3.032 0 5.326-1.672 6.882-5.016 1.244-2.644 1.866-5.793 1.866-9.447 0-8.787-2.68-13.18-8.05-13.18z"/></g><g id="social" class="futura-heavy"><path d="m58.661 89.441c-.74-1.004-1.691-1.639-2.986-1.639-1.242 0-2.431.952-2.431 2.247 0 3.356 7.902 1.955 7.902 8.642 0 3.991-2.484 6.818-6.554 6.818-2.749 0-4.757-1.585-6.131-3.885l2.511-2.458c.528 1.533 1.929 2.907 3.594 2.907 1.585 0 2.563-1.348 2.563-2.881 0-2.062-1.903-2.643-3.462-3.25-2.564-1.058-4.44-2.353-4.44-5.444 0-3.304 2.458-5.973 5.814-5.973 1.771 0 4.229.872 5.444 2.22l-1.824 2.699z"/><path d="m72.083 105.51c-6.079 0-9.858-4.651-9.858-10.519 0-5.92 3.912-10.465 9.858-10.465s9.858 4.545 9.858 10.465c0 5.869-3.779 10.519-9.858 10.519zm0-17.152c-3.673 0-5.841 3.25-5.841 6.475 0 3.065 1.533 6.845 5.841 6.845s5.84-3.779 5.84-6.845c.001-3.227-2.166-6.477-5.84-6.477z"/><path d="m97.16103.61 104.98h-3.885v-19.925h3.885v19.925z"/><path d="m110.5 100.78l-1.639 4.202h-4.096l7.77-20.455h3.013l7.559 20.455h-4.149l-1.533-4.202h-6.92zm3.36-10.414h-.053l-2.194 7.242h4.731l-2.49-7.242z"/><path d="m128.18 101.6h5.497v3.383h-9.382v-19.925h3.885v16.545z"/></g><g id="coding" class="futura-heavy"><path d="m155.87167.34 105.51c-6.079 0-9.858-4.651-9.858-10.519 0-5.92 3.912-10.465 9.858-10.465s9.857 4.545 9.857 10.465c0 5.869-3.78 10.519-9.86 10.519zm0-17.152c-3.674 0-5.841 3.25-5.841 6.475 0 3.065 1.533 6.845 5.841 6.845s5.841-3.779 5.841-6.845c0-3.227-2.17-6.477-5.84-6.477z"/><path d="m179.1 85.055h5.551c5.761 0 9.619 4.308 9.619 9.989 0 5.604-3.964 9.938-9.646 9.938h-5.523v-19.925zm3.88 16.545h.634c4.783 0 6.634-2.643 6.634-6.581 0-4.334-2.22-6.58-6.634-6.58h-.634v13.161z"/><path d="m200.05 104.98h-3.886v-19.925h3.886v19.925z"/><path d="m202.88 84.526h2.802l10.492 13.928h.053v-13.399h3.885v20.323h-2.801l-10.48-13.93h-.053v13.531h-3.886v-20.454z"/><path d="m240.53 94.384v.502c0 5.629-2.881 10.624-9.064 
10.624-5.814 0-9.488-4.915-9.488-10.412 0-5.683 3.779-10.571 9.726-10.571 3.383 0 6.343 1.718 7.876 4.757l-3.436 1.85c-.793-1.797-2.484-3.171-4.546-3.171-3.753 0-5.603 3.832-5.603 7.136 0 3.303 1.876 6.977 5.629 6.977 2.432 0 4.467-2.114 4.546-4.52h-4.229v-3.171h8.58z"/></g></svg></div><script src=""></script><br /><br />If.)<br /><br />Nicer source code for the images than in the page (Blogger insisted on filling all the whitespace with html junk) in these gists: <a href="">github-logo.svg</a>, <a href="">octocat.svg</a>. gist.github.com currently doesn't serve .svg files as <code>image/svg</code>, so you'll have to save them locally first to see them rendered.<br /><br />Source code for the step-by-step drawing is <a href="">in this gist</a>; MIT licensed, if you want to fork away, adopt, adapt or whatnot. Have fun! I did. :-)<img src="" height="1" width="1" alt=""/>Johan Sundström hosting: Mozilla vs Google vs OperaI habitually develop browser user scripts to stream-line things for myself (and others) on the web – and a half random scattering of them tends to end up proper add-ons, when I think the benefit of them being easy to find, install and reuse by others merits the added work for myself of packaging them and submitting them to an add-ons gallery.<br /><br />This.)<br /><br /><hr><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="176" width="320" alt="Firefox add-on page" src="" /></a><br /><br />(<a href="">screenshot taken here</a>)</div><br /><h4>Firefox add-ons: <a href="">addons.mozilla.org</a>, a k a AMO</h4><br /><b><i>Your</i> add-ons are listed here:</b> <a href="">addons.mozilla.org/en-US/developers/addons</a><br /><b>Add-on URL:</b> addons.mozilla.org/en-US/firefox/addon/<i>your-configurable-addon-slug</i><br /><b>Public data:</b> current version, last update time, compatible browser versions, website and, optionally: <b>all add-on detail metrics, if the developer wants to share them</b> (excellent!)<br /><b>Public metrics:</b> total download count, average rating, number of ratings<br /><b>Detail metrics:</b> TONS: mainly installation rate and active installed user base over time, broken down by all sorts of interesting properties or in aggregate, graphed and downloadable in csv format. Notable omission: add-on page traffic stats. 
Public example: <a href="">Greasemonkey stats</a><br /><b>Developer page linked from the public add-on page when you're logged in:</b> NO<br /><b>Release process:</b> Manual review process that can take a really long time, as AMO often is under-staffed, and hasn't successfully incentivized developer participation in the process to an extent as to make it not so.<br /><br />Summary: great stats and an ambition to make information public and transparent.<br /><br /><hr><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="154" width="320" alt="Chrome add-on page" src="" /></a><br /><br />(<a href="">screenshot taken here</a>)</div><br /><h4>Chrome add-ons: <a href="">chrome.google.com/webstore</a>, or the Chrome Web Store<br />Previously lived at <a href="">chrome.google.com/extensions</a>, the Chrome extensions gallery</h4><br /><b><i>Your</i> add-ons are listed here:</b> <a href="">chrome.google.com/webstore/developer/dashboard</a><br /><b>Add-on URL:</b> chrome.google.com/webstore/detail/<i>your-addon-signature</i>[/optional-add-on-slug]<br /><b>Public data:</b> current version, last update time, <b>if installed: a checkmark and the sign "Installed", instead of the "Install" button</b> (excellent!)<br /><b>Public metrics:</b> total download count, average rating, number of ratings<br /><b>Detail metrics:</b> If you jump through the relatively tiny hoop of creating and tying a new Google Analytics account to your add-on, you get detailed add-on page traffic stats, over in Google Analytics. Notabe omission: all metrics about your installed user base :-/<br /><b>Developer page linked from the public add-on page when you're logged in:</b> NO<br /><b>Release process:</b> Automated review process, making yourself the only release blocker, in practice. This is bloody awesome!<br /><br /:<br /><br /><ul><li>first (before your initial web store submission) build your add-on locally</li><li>store the .pem file in some good location</li><li>rename it <tt>key.pem</tt></li><li>copy that .pem file into the add-on directory</li><li>zip up that directory (so the top level of the zip file only has one directory)</li><li>NOW (and for all later versions, IIRC) upload this zip file to the Chrome Web Store</li><li>…and it will use your add-on signature, instead of that of Google's secret .pem</li></ul><br /><hr><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="47" width="320" alt="Opera add-on page" src="" /></a><br /><br />(slightly de-horizontalized <a href="">screenshot taken here</a>)</div><br /><h4>Opera add-ons: <a href="">addons.opera.com/addons/extensions</a></h4><br /><b><i>Your</i> add-ons are listed here:</b> <a href="">addons.opera.com/developer</a> (this page is near <i>impossible</i> to find by navigation, and was the reason I created this blog post :-)<br /><b>Add-on public URL:</b> addons.opera.com/addons/extensions/details/<i>addon-slug</i>[/<i>version</i> (redirects visitors here)]<br /><b>Add-on developer URL:</b> addons.opera.com/developer/extensions/details/<i>addon-slug</i>/<i>version</i> (no redirect help, but links to all other versions)]<br /><b>Public data:</b> current version, last update time, <b>add-on size</b> (excellent!)<br /><b>Public metrics:</b> total download count, average rating, number of ratings<br /><b>Detail metrics:</b> NONE. 
Notabe omission: all data about your installed user base, AND add-on page traffic :-(<br /><b>Developer page linked from the public add-on page when you're logged in:</b> NO<br /><b>Release process:</b>.)<br /><br /.<br /><br /).<br /><br /><b>Update:</b> I just found Opera's dashboard link! Your logged-in name is the link that takes you to your dashboard of add-ons. And it only took knowing the wanted url, opening a web console and typing <code>[].slice.call(document.links).filter(function(a){ return a.href == ""; })</code> in it. What do you know? :-)<img src="" height="1" width="1" alt=""/>Johan Sundström Improved 1.7After a whole year in development as a user script (<a href="">initial mention</a>, formerly known as "Github: unfold commit history"), a brief test-drive as an <a href="">Opera extension</a> at <a href="">the 2010 add-on con</a>, I finished up the missing bits and pieces to make today's <a href="">Github Improved! chrome extension</a> available in the Chrome Gallery / Web Store (1.7 now also available in the <a href="">Opera extension gallery</a>).<br /><br />New since last release is the little tag delta (Δ) links that let you instantly see what happened between, say, <a href="">Greasemonkey 0.9.5</a> and the previous release tag (it recognizes anything where the only thing differing between tags are their numeric parts, which also means it's not going to handle fancy 1.6rc1 type tag names littered with semantic non-numeric parts):<br /><br /><img src="" width="400" height="275" style="display: block; margin: 0 auto; text-align: center;"><br /><br />And, as prior users may find even more importantly, that it doesn't hog any shifted keyboard shortcuts, which for some reason had the side effect of making arrow up pull in a pageful of diffs.<br /><br />Full documentation of all features, and a few screenshots, is available on the <a href="">Chrome Web Store page</a>. I also take every opportunity to mention that it really shines best together with the <a href="">AutoPatchWork</a> extension, or something equivalent which unfolds the commit history pagination as you scroll off page. Enjoy!<img src="" height="1" width="1" alt=""/>Johan Sundström Closure Library bookmarkletAny self-respecting javascript library should have a debug bookmarklet that lets you load it into a page, so you can tinker with it (the library, or the page, for that matter) without any messy overhead like downloading the most recent version, building it with the specific flags and sub-components you want, saving the current page, adding library setup code to it, and reloading that page before you can do squat. 
I found <a href="">Google Closure Library</a> a bit lacking in that regard, so here's my take on it:<br /><br /><script src=""></script><noscript><a href="">gist.github.com/985137</a></noscript><br />It has two modes of operation; if all you want to do is load the <code>goog</code> object into the current page (or overwrite the one you already have with one that has a different set of <code>goog.require()</code>s), just customize which those requires should be, and you're set; it creates an iframe, in which it loads the library (<code>goog.require</code> only works during page parsing time, as it uses <code>document.write()</code> to load its dependencies), and then overwrites the top window's <code>goog</code> identifier.<br /><br />The second mode is good for writing your own bookmarklets making use of some Closure Library tools; provide your own function, and it will instead get called with the freshly loaded <code>goog</code> object, once it's ready. At the moment, I have only played with this in Google Chrome, but feel free to fork away on github, if you tinker in additional improvements for other browsers.<br /><br />Finally, here's a bookmarklet version to save or play with, which will prompt you for just which closure dependencies you want to load: <a href="javascript:(function(f){var q=prompt('Please name your goog.require:s','goog.ui.TableSorter, goog.net.XhrIo'),r=q&&q.split(/,\s*/),i=r.length&&document.body.appendChild(document.createElement('iframe'));if(i){i.src='about:blank';i.style.display='none';i.contentWindow.f=f;i.contentDocument.write('<script src=></script><script src=></script><script>'+JSON.stringify(r)+'.forEach(goog.require);</script><script>'+(f?'f.call(parent,goog)':'parent.goog=goog')+';</script>');}})();">closure</a> – drag it to your bookmarks toolbar or equivalent if you want to keep it around. Enjoy!<img src="" height="1" width="1" alt=""/>Johan Sundström "No News is Good News" Tsunami FeedI have subscribed to the <a href="">NOAA/NWS/West Coast and Alaska Tsunami Warning Center</a>'s <a href="">feed of ahead-of-time tsunami warnings</a> for the U.S. West Coast, Alaska, and British Columbia coastal regions (<a href="">example information statement</a>) for some time. Most, like this one, are "wolf-cry" statements stemming from some seismic activity somewhere that <i>won't</i> generate any tsunami, marked with the all-important body phrase "The magnitude is such that a tsunami <strong>WILL NOT</strong> be generated" - meaning I don't really care about them.<br /><br />Today I made a <a href="">Yahoo pipe</a>, filtering away those from the feed. Here is the resulting feed, containing only <a href="">positive tsunami warning statements</a>. Behind the scenes, it's created by the simple YQL statement <code>select * from rss whereJohan Sundström | http://feeds.feedburner.com/blogspot/xxBcs | CC-MAIN-2021-39 | refinedweb | 10,620 | 50.36 |
rest is pretty simple. We create an instance of
StockTrade_ClientSide, taking advantage of the parameterized constructor to set its property values. Then we set up a
Vector of parameters, just as we've done in earlier examples. If you run this example, you should see the following output:
Trade Description: Sell 350 of XYZ
Let's take a look at the SOAP envelope that was transmitted. The relevant part of the envelope begins with the
trade element, representing the custom type parameter passed to the
executeStockTrade service method. The value assigned to
xsi:type is
ns1:StockTrade. The
ns1 namespace identifier is declared to be
urn:BasicTradingService in the parent
executeStockTrade element. And
StockTrade is the name we specified for our custom type. There are three child elements of the
trade element, each one corresponding to the properties of the
StockTrade custom type. The name of each element corresponds exactly to the property name. This is crucial, as Apache SOAP is going to use Java reflection to find the set methods of the associated Java class. Therefore, the names in the envelope must match the names used by the class. Each one of these property elements is explicitly typed, and those types have to coincide with the types of the corresponding properties as well.
<SOAP-ENV:Envelope xmlns: <SOAP-ENV:Body> <ns1:executeStockTrade xmlns: <trade xsi: <numShares xsi:350</numShares> <buy xsi:false</buy> <symbol xsi:XYZ</symbol> </trade> </ns1:executeStockTrade> </SOAP-ENV:Body> </SOAP-ENV:Envelope>
Now let's see how custom types are handled in GLUE, using the
BasicTradingService class without modification.
We need to add the
executeStockTrade( ) method to the
IBasicTradingService interface. We can deploy the service as we did before. Here's the new version of
IBasicTradingService:
package javasoap.book.ch5; public interface IBasicTradingService { int getTotalVolume(String[] symbols); String executeTrade(Object[] params); String[] getMostActive( ); String executeStockTrade(StockTrade_ClientSide trade); }
Writing applications that access services has been easy using the GLUE API because we haven't dealt directly with the SOAP constructs. We should be able to follow that same pattern here, but there's a problem.
The
executeStockTrade( ) method of the
IBasicTradingInterface interface says that the parameter is an instance of
StockTrade_ClientSide. But the
executeStockTrade method of the
BasicTradingService class that implements the service uses a parameter of
StockTrade_ServerSide. So there's a mismatch, albeit by design. By default, GLUE looks for the same class on the client and server sides. If we had used the
StockTrade_ServerSide class in our client application instead of the
StockTrade_ClientSide class, all would work perfectly. We're not going to do that, so just take my word for it that it works (or better yet, try it for yourself). We'll have to take another approach.
Whenever I develop distributed systems, I'm happy to share Java interfaces between both client and server code bases. However, I don't like to share Java classes, especially if those classes contain code that is specific to either the client or the server. In this example, we could rework the design of our stock trade classes to come up with something that contains the relevant data without any of the functionality of either the client or the server. We would want both the client and the server to use the class from the same package, so as the implementer of the basic trading web service, we'd have to distribute the package containing such a class to the developers' client applications. That would work, and it's not uncommon in practice. Getting back to the notion of sharing Java interfaces instead of classes, we could create an interface for the trade data, and have both the server and client classes implement that interface. That's a nice clean way to share between server and client code bases without sharing any actual executable code. Again, this interface would reside in a package available to both server and client systems. This mechanism is expected to be supported in a future version of GLUE (which might already be available by the time you read this).
GLUE does support another mechanism for making this stuff work. This mechanism doesn't require you to modify the server side, and the work involved on the client side is trivial. We're going to create a new package for this client example, because our work will create some files that have the same names as those we created earlier. The new package prevents us from overwriting files when running both client and server on the same machine. So be aware that this example is part of a new package called
javasoap.book.ch5.client, and the files we'll be developing need to be in the corresponding directory for that package.
I've purposely avoided discussion of WSDL so far, even though GLUE is based entirely on WSDL. However, the first step in this process makes use of GLUE's
wsdl2java utility, which generates Java code based on a WSDL file. So where does the WSDL come from? The GLUE server generates it automatically. The client-side applications based on the GLUE API have been taking advantage of this all along.
wsdl2java generates the Java interface for binding to the service, a helper class, a Java data structure class that represents the data fields of the custom type we're working with, and a map file. The map file is essentially a schema definition for the custom type; it tells GLUE how to map the fields of the Java data structure class to the fields of the custom type. GLUE uses a number of mechanisms for handling custom type mapping; in this case, the map file defines the mapping explicitly, rather than basing the mapping on Java reflection.
So let's go ahead and generate the files for the client application. Enter the following command, making sure you are in the directory for the
javasoap.book.ch5.client package:
wsdl2java -p javasoap.book.ch5.client
The parameter to the
wsdl2java utility is the full URL of the service WSDL, just like we've been using in the
bind( ) methods in previous examples. The
-p option tells the utility the name of the local package for which code should be generated:
javasoap.book.ch5.client. The output from
wsdl2java consists of four files. The first one is IBasicTradingService.java, which contains the Java interface for binding to the service. We've been writing this one by hand up until now; let's take a look at the one generated by
wsdl2java:
// generated by GLUE package javasoap.book.ch5.client; public interface IBasicTradingService { String executeStockTrade( StockTrade_ServerSide arg0 ); String[] getMostActive( ); int getTotalVolume( String[] arg0 ); String executeTrade( Object[] arg0 ); }. | http://www.linuxdevcenter.com/lpt/a/2412 | CC-MAIN-2013-48 | refinedweb | 1,114 | 53.41 |
How To Manage Side Effects in Redux?
At DataCamp, we always try to provide the best user experience. To reach this goal, we've decided to rewrite the frontend website from Angular 1.x to Redux with React. We had multiple reasons for making this choice, but the main one was supporting server-side rendering. Server-side rendering will mainly improve the UX because the page will load faster and the user can immediately see the result of the request without having to wait for the javascript bundle to load.
If you want to know more about server-side rendering and how it works, you can find a good explanation on the Redux website.
This article will focus on dealing with side effects in Redux, and explains how we integrate side effects in the isomorphic app we've created to do server-side rendering.
Side Effects in Redux
There are several tools to handle side effects in Redux. Most of the time, you should only include side effects in your middleware. In general, it's a good pattern to encapsulate all side effects as much as you can. The middleware is great for that purpose because it's only a function that will be triggered every time an action is dispatched. It can be fired before the action hits the reducer or after the reducer is hit, i.e. when the state has already been updated.
Most of the tools to handle side effects come down to helpers to create pure functions and use it in a middleware, so your side effects are isolated from the code. You can use a simple approach like redux-thunk written by Dan Abramov, the author of Redux, or some more complex and powerful libraries like redux-saga.
We decided to jump into something new, a library that has gained lots of popularity, redux-observable. This library allows you to use RxJS in your middleware in a very easy way. Since all of us at DataCamp love ReactiveX, we adopted it right away!
The only prerequisite is to know
rxjs 5!
Now every time we dispatch a new action, after it hits the reducer, our middleware will be fired and some side effects can take place. Here's a simple example of a middleware which takes an action as input and outputs multiple actions (which will again hit the reducer).
(action$, store) => action$.ofType('GET_TIMER').switch( Observable.interval(1000).take(5).map(nb => ({ type: 'SHOW_TIME', nb })));
More specifically, it takes an action of type
GET_TIMER as input and outputs multiple actions of type
SHOW_TIME with a number
nb (from 0 to 4 included). Now, what happens if another action of type
GET_TIMER is dispatched before the outputs of all the previous
SHOW_TIME actions are finished? Because of the switch, the previous observable will be cancelled and the new one will start.
This example shows how easy it is to deal with complex asynchronous calls.
HTTP Requests and Side Effects
Now that we all understand
redux-observable, we would like to make calls to our RESTful API. Most of the side effects we have to include in our web app are calls to get or post data to a server and give real-time feedback to users.
We couldn't find an easy and lightweight library to do HTTP requests in an observable stream, so we created our own HTTP wrapper on top of superagent and
rxjs 5. We called it "universal-rx-request". The library works on client-side and server-side and is available on npm.
Let's go over a toy example to get your IP address from
api.ipify.org:
import rxRequest from 'universal-rx-request'; rxRequest.importRxExtensions(); // will add new operators. const getMyIp = rxRequest({ method: 'get', url: '' }) export default (action$, store) => action$.ofType('SHOW_MY_IP') .switchMap(action => getMyIp.mapToAction(action)); };
This file will export an
Epic function which can be plugged into the
redux-observable middleware.
Suppose a button click causes an action of type
SHOW_MY_IP to be dispatched. This action will hit the middleware we've created in the code. This middleware will make the HTTP request using the
universal-rx-request library and gets the IP of the user.
mapToAction is a special operator from the library which outputs two actions. The first action says that the request is fetching and the second action contains the actual result of the request. This allows us to, for example, put a spinner on the button when the data is loading and subsequently show the IP once the data has arrived.
As a bonus, if the user clicks on the button to get his IP multiple times, the result of only one response will be received. All the others will be cancelled because of the switchMap function.
Let's see how we can handle the different state in the reducer with our
Epic function.
import rxRequest from 'universal-rx-request'; const STATUS = rxRequest.STATUS; const getStatus = rxRequest.getStatus; export default (state = {}, action = {}) => { switch (action.type) { case getStatus('SHOW_MY_IP', STATUS.SUCCESS): return { ip: action.data.body.ip }; case getStatus('SHOW_MY_IP', STATUS.FETCHING): return { fetching: true }; case getStatus('SHOW_MY_IP', STATUS.ERROR): return { error: true }; default: return state; } }
That's it! All cases are handled in the reducer. Now you can inform the user what's happening, for example show an error message if an error occurred or a spinner during the loading.
Want to learn more? Check out
universal-rx-request on GitHub to see it into action with other examples.
Side Effects and Server-Side Rendering
One of the main issues we've encountered while dealing with server-side rendering was the question of asynchronous calls. Since the server has to render a page synchronously to send it to the user, what happens with asynchronous calls?
In the end we solved this issue by sticking to one important rule. All side effects needed when the app is loaded on the client side happen in the middleware. This setup allows us to import the middleware only on the client javascript bundle but not on the server side, which in turn means that the server just has to render the components as Strings (renderToString) and never has to deal with asynchronous calls and side effects.
In the end, if we need to fetch asynchronous data on the server side, we do it in independently, outside of the middlewares and redux. We fetch the data beforehand and give the result as a preloaded state on the creation of the store (see example here). | https://www.datacamp.com/community/tech/universal-rx-request | CC-MAIN-2018-30 | refinedweb | 1,087 | 64 |
Dear all, This is just to let you know that the lastest version Dao language is released. This Dao was previously called Tao, and now is changed to Dao to avoid confusion with another Tao langauge. There are a number of new features implemented in this version, of which the most important one is the supporting for multi-threaded programming. A new concurrent garbage collector is also implemented to for the multi-threaded interpreter. Now unicode is also supported for string manipulation and regular expression (regex) matching etc. The algorithm for regex matching is enhanced with fixing of a few bugs which prevent finding the most reasonable matching in some case, and allowing reverse matching from the end of string. The interface for creating Dao plugin in C++ is further simplified and matured. Of course, many bugs are also fixed. For more information, please visite:. By the way, a console named DaoConsole with graphical user interface is also released. With best regards, Limin Fu ----------------------------- ChangLog for this release: ----------------------------- + : added ! : changed * : fixed - : removed MULTITHREADING: + Multi-threaded programming is supported as a kernel feature of Dao language. Posix thread library is used in the implementation. And the thread API in Dao is similar to that in Posix thread. Parts of the Dao interpreter is re-structured for multithreading. + A novel concurrent garbage collector based on reference counting is implemented to support multithreading. + A upper bound for GC amount is applied to prevent memory "avalanche", where mutators generate garbage faster than GC can collect them. gcmin(), gcmax(). UNICODE: + UNICODE is supported. String quotated with double quotation symbol is internally represented as Wide Character String(WCS), while string quotated with single quotation symbol is internally represented Multi Bytes String(MBS). Corresponding operations on WCS is also supported. REGEX: + Regex reverse matching is supported. + Now internal representation of Regex uses both MBS and WCS for both efficiency and proper matching character class for unicode. When a regex is applied to MBS or WCS, the corresponding representation is used. + Regex datatype is added, a regex pattern can be compiled and stored for later use, by using: define regex: rgx = /\d+\w/; or, rgx = regex( "\\d+\\w" ); + New character class abbreviations \u and \U are added for unicode. + Customized character class abbreviations are support. Users can define their own character class abbreviations by: define regex: \2 = [WhateverChars]; ! Algorithm for regex matching is modified to extend matching when possible, and is also modified to match regex group correctly. NUMERIC ARRAY: + Specification of precision in numeric array enumeration is supported: num=[@{numtype} array]; ! Transpose operator(right operator) is changed from ' to ~. - Function convolute() for numeric arrays is removed. EXTENDING AND EMBEDDING: + Some abstract classes are added for supporting easy embedding of Dao interpreter ( the daoMain.cpp source file is an example for embedding ). + Some wrapper classes for Dao data objects are provide in daoType.h to faciliate the using of Dao data objects in plugins or other programs in which Dao is embedded. ! A new technique is implemented to allow more tranparent passing data between Dao interpreter and C++ modules. Creation of shadow classes is also supported by the way. 
IO: + Instead of using STL stream classes, new DaoStream classes are added mainly for handling unicode in many places. + For file IO, more open modes such as "rwat" are supported, and more methods such as eof(), seek(), tell() ... are implemented. read() is enhanced such that it can read until meeting EOF. OTHERS: + Multi inheritance is supported for OOP. And the passing parameters and calling to parent constructor is simplified. + Negative subindex is supported for string. + Added a feature that allows using Alternate KeyWord (*.akw) to write Dao scripts in non-english languages. The only requirement is the character encoding for the .akw file must be the same as the script files. ! Variable scope specification keyword "global" is added; keyword "extern" is remove; keyword "share" is changed to "shared" ! Data states are added for constant and frozen data. Now a const data is a really const, and can not be modified anymore. Function freeze() is added to freeze some data structures as well as those are reachable from them, to prevent them from being modified; And defreeze() is added to defreeze them. ! The using namespace is simplified(compile(),eval(),source()).
on 28.11.2005 12:10
| http://www.ruby-forum.com/topic/22516 | crawl-002 | refinedweb | 716 | 57.57 |
Components and supplies
Necessary tools and machines
About this project
Short version: I hacked the most advanced tea making device to make it play music from Star Wars. To get a sense of what this project was about check out the video below.
The goal of this project was to hack the Breville One-Touch Tea Maker to give it some extra personality while it was making a pot of tea. The whole concept was a challenge/request by my friend Vince at Econify.com He was the project sponsor and the end result of the hack became a great gift for one of his employee's who is really into his tea (or so I assume)!
I should clarify that this is not a real "hack" or at least not a hack in the way most people would think. What I do here with that little black box next to the tea maker is read the power signature of the tea maker and determine what state it's in from how much power it's using now vs previous time periods. By monitoring the tea maker's current usage I'm able to tell when it's just sitting around waiting, boiling water, lowering the basket, brewing the tea, raising the basket, and when it's all done.
This is not an entirely new idea. It's actually used quite a bit in the energy industry. It's technical term (usually when applied to a whole building or system) is called power disaggregation. This is a fancy term for "figuring out what thing in your house is turned on by looking at how much power (usually in watts) you are using." Here's some info on the idea
I wish I could say that the approach here was the first thing I tried. It actually was a last resort! When I first saw this tea maker function I thought the easy way into it's state was to just crack open the bottom base where all the smarts are and poke around until I found the logic lines that drive the state of the little motor that makes the basket go up and down.
Well the team at Breville did a great job designing this thing to keep people out of the base. I first tried to access the base by removing all the screws and trying to pry it apart. Keeping in mind that this is not a cheap device (retails at $250) AND it was supposed to be a gift I did not really want to go at it with a pair of pliers and a blow torch. I even looked for broken ones on eBay that I might sacrifice. The device is so new (or just never breaks) that there was not much there.
After giving up on the base I thought the carafe would be a good next option. It has a small circuit board inside (I know this because I saw these for sale on Breville parts websites). On the base there were two kinds of screws, the regular old philips head, then they also had these strange triangular head screws. I found a set spoiler alert, they're called "Triangle Head" screws. I am assuming they're named after Mr. Triangle.
The carafe also was a dead-end. It's a really well designed carafe. Even If I was able to remove the bottom board I was a little concerned I would break some factory applied seal causing boiling hot water to leak out and cause some real problems for all parties involved.
For a brief period of time I thought the lid would be a good place to try to install my hack. This would mean that the lid would have to sense the basket drop from above and wirelessly transmit that info to some other circuit to play the sound effects. The lid could contain a simple battery powered bluetooth transmitter and could use a simple contact sensor to know if the basket was located. (The basket is held in by magnets so I considered using a hall-effect sensor as well).
The wireless lid sensor + basestation was the best idea I had at the time and it would have worked too if it were not for all that boiling hot water. Since the lid sits directly above the tea water it can get pretty hot. Steam is a really good way to transfer thermal energy. When you make tea (or boil water) a whole mess of steam rises off the water and hits the lid. When all that steam hits the lid it heats it up pretty good. Did I mention that the wireless sensor would run on batteries? It was really the only way to get power up where (except for some fancy peltier-effect based thermal energy harvesting system perhaps).
Turns out your standard AA battery has a operating temperature range of -18°C to 55°C. During my tests the lid easily reached well over 80°C. This would have probably meant exploding batteries. No one likes battery acid in their tea, for sure!
Anyway, on to the actual hack!
The concept of the hack was to have one Arduino measuring the power usage of the tea maker using a modified (and calibrated) circuit from The code on this arduino has to sense and determine the state of the tea maker based on it's past know states and some data samples it takes from the current transformer. It then will signal the second Arduino with the Wave shield using one of two signal lines. When one line goes high the tea maker basket is going down, when the other line goes high the tea basket is going up. The Wave shielded Arduino just has the simple job of reading these two lines and playing one of two audio clips for each event.
For the curious here is the complete hardware list...
- Appliance rated 15AMP power cable
- Arduino Pro Mini Sparkfun
- Current Transformer Amazon
- Project Box Amazon
- Wave Shield Adafruit
- Arduino uno Adafruit
- Speaker Sparkfun
- Proto Board Adafruit
- SD Card, Misc equp. cables, power plug misc
- Breville Tea Maker Amazon
And the sources for some ideas on this project...
- Current sensing circuit design from open energy monitor
- WAV sheild example from adafruit
- Audio editing tool to make Wav files - Audacity
- State sensing code on github
- Audio playing code on github
Well, that's it. Hopefully you find this useful if you want to "power disaggregate" your own devices. Please be careful when working with 120VAC. I have been working with electronics like this for many years. If you have not worked with powerline electronics please consult one of the many references out there or, when in doubt ask a professional!
Code
Arduino power monitor codeC/C++
#include "EmonLib.h" // Include Emon Library EnergyMonitor Emon1; // Create an instance int LedPin = 5; int BasketDownPin = 2; // Will pulse this pin 10ms when the basket drops int BasketUpPin = 3; // Will pulse this pin 10ms when the basket raises double IrmsBase = 0.0; // Baseline current sampled at startup int BaseLen = 5; double Baseline[5]; // History for baseline so we update baseline overtime String States[3] = {"Waiting", "Heating", "Basket"}; int HistLen = 4; int StateHist[4] = {0,0,0,0}; // Stores the state history int TrigerState[3] = {2,2,2}; // When the history array shows Basket Motion (3 events) boolean Heating = false; // Flag to indicate we have started heating boolean Basket = false; // Flag to indicate basket in motion boolean Brewing = false; // Flag to indicate we have started brewing boolean Done = false; // Flag to indicate we have finished brewing void setup() { //Serial.begin(9600); // Serial for debugging Emon1.current(1, 13.43); // Current: input pin, manual calibration constant pinMode(LedPin, OUTPUT); // Calibration process: pinMode(BasketDownPin, OUTPUT); pinMode(BasketUpPin, OUTPUT); digitalWrite(BasketDownPin, LOW); digitalWrite(BasketUpPin, LOW); digitalWrite(LedPin, HIGH); // Baseline capture, power up LED on double sum = 0.0; int samples = 10; for (int i=0; i <= samples; i++){ // Sample Irms and average to create baseline sum = sum + Emon1.calcIrms(1480); digitalWrite(LedPin, LOW); delay(100); digitalWrite(LedPin, HIGH); } double bootBase = sum/samples; // set the baseline Irms for sensing tea maker states initialBaseline(bootBase); // Baseline capture done } void loop() { digitalWrite(BasketDownPin, LOW); digitalWrite(BasketUpPin, LOW); double irms = Emon1.calcIrms(1480); // Calculate Irms only int teaStat = senseStatus(irms); saveStat(teaStat); if (histCheck(1)){ // if we have applied heat for a little bit then Heating = true; // We're heating water! Brewing = false; // Reset all state Basket = false; Done = false; } if (Heating && !Basket && !Brewing && histCheck(2)){ // if we were heating, and not brewing and the basket is not moving, but now the basket IS moving... printTrans("DOWN"); // basket going down! digitalWrite(BasketDownPin, HIGH); delay(10); digitalWrite(BasketDownPin, LOW); Heating = false; //done heating Brewing = false; //not yet brewing Basket = true; //basket is moving } if (!Heating && !Basket && Brewing && histCheck(2)){ // if we are not heating and not moing the basket and were brewing but now the basket is moving... printTrans("UP"); // basket coming up! digitalWrite(BasketUpPin, HIGH); delay(10); digitalWrite(BasketUpPin, LOW); Brewing = false; // Must be done brewing Basket = true; // basket is moving Done = true; } if (!Done && Basket && histCheck(0)){ // if not done and the basket was in motion and we are waiting now... Basket = false; // basket is done moving Brewing = true; // We are brewing!! } if (Done && Basket && !Brewing && histCheck(0)){ // if done, the basket was moving, brew is done and we are waiting now... Basket = false; // All done with the brew cycle } // Fairly certain this is causing more problems than it solves // if (Done && !Heating && !Basket && !Brewing && histCheck(0)){ // To adjust for drift // updateBaseline(irms); // update our baseline when not in a cycle // } updateBrewLight(); //printStatus(irms, teaStat); delay(10); //if not printing serial, delay. } // Pushes the current tea Status value to the top of the // StatHist array, shifts all existing values right. 
void saveStat(int teaStat){ int temp[HistLen]; temp[0] = teaStat; for(int x=0; x < (HistLen-1); x++){ temp[x+1] = StateHist[x]; } for(int x=0; x < HistLen; x++){ StateHist[x] = temp[x]; } } // Senses our teamaker state by looking at RMS current. // Based on what component is on, heater vs motor vs base // there are different levels of current used. The heater is // 100x the base current, and the motor is 1.5x the base current. int senseStatus(double irms){ String tStat; if (irms >= (IrmsBase * 100)){ return 1; // Heating } else if (irms >= (IrmsBase * 1.4) && irms < (IrmsBase * 10)){ return 2; // Basket Motion } else { return 0; // Waiting } } // Look back at the previous status values // and see if they all match the tStat value boolean histCheck(int tStat){ int x = 0; while (x < HistLen){ if (tStat == StateHist[x]){ x++; //move on to check the next } else { return false; } } return true; } // Initialize our history to the boot baseline void initialBaseline(double allBase){ for (int x=0; x < BaseLen; x++){ Baseline[x] = allBase; } IrmsBase = allBase; } // Compute updated average based on new Irms sample // Store it in our Baseline and update the global void updateBaseline(double newBase){ // add to top of array double temp[BaseLen]; temp[0] = newBase; for(int x=0; x < (BaseLen-1); x++){ temp[x+1] = Baseline[x]; } for(int x=0; x < BaseLen; x++){ Baseline[x] = temp[x]; } double sum = 0.0; for (int x=0; x < BaseLen; x++){ sum = sum + Baseline[x]; } IrmsBase = sum/BaseLen; } // Simple time-based LED flasher boolean LEDon = false; void updateBrewLight(){ if (Brewing && !Done){ if (!LEDon){ digitalWrite(LedPin, HIGH); LEDon = true; } else { digitalWrite(LedPin, LOW); LEDon = false; } } if (!Brewing){ digitalWrite(LedPin, HIGH); } } // Simple debugging print function void printStatus(double irms, int teaStat){ Serial.print(irms); // Irms current Serial.print(" "); Serial.print(States[teaStat]); // Waiting, Heating, Basket Motion // Print the state array Serial.print(" ["); for(int x=0; x < HistLen; x++){ Serial.print(StateHist[x]); Serial.print(","); } Serial.print("]"); Serial.print(" Bsl:"); Serial.print(IrmsBase); // Print the baseline history Serial.print(" ["); for(int x=0; x < BaseLen; x++){ Serial.print(Baseline[x]); Serial.print(","); } Serial.print("]"); // print our boolean states Serial.print(" Htg:"); Serial.print(Heating); Serial.print(" Bsk:"); Serial.print(Basket); Serial.print(" Brw:"); Serial.print(Brewing); Serial.print(" Dn:"); Serial.print(Done); Serial.println(" "); } void printTrans(String updown){ Serial.println("!!!!!!! BASKET TIME !!!!!!!"); Serial.print("!!!!!!! GOING "); Serial.print(updown); Serial.println(" !!!!!!!"); }
Arduino audio/wav player codeC/C++
byte PrevValUpin = HIGH; // Storing the previous reading to "debounce" startup of the current sense board byte PrevValDpin = HIGH; int DownPinNum = 8; // The pins to sense the up/down signals int UpPinNum = 9; //; Serial.println("\n\rSD I/O error: "); Serial.print(card.errorCode(), HEX); Serial.print(", "); Serial.println(card.errorData(), HEX); while(1); } void setup() { // set up serial port Serial.begin(9600); // Set the output pins for the DAC control. This pins are defined in the library pinMode(2, OUTPUT); pinMode(3, OUTPUT); pinMode(4, OUTPUT); pinMode(5, OUTPUT); // Listening on these pins for triggers pinMode(DownPinNum, INPUT); // Basket down pinMode(UpPinNum, INPUT); // Basket up if (!card.init()) { //play with 8 MHz spi (default faster!) Serial.println( :( Serial.println("No valid FAT partition!"); sdErrorCheck(); // Something went wrong, lets print out why while(1); // then 'halt' - do nothing! } // Lets tell the user about what we found Serial.print("Using partition "); Serial.print(part, DEC); Serial.print(", type is FAT"); Serial.println(vol.fatType(),DEC); // FAT16 or FAT32? // Try to open the root directory if (!root.openRoot(vol)) { Serial.println("Can't open root dir!"); // Something went wrong, while(1); // then 'halt' - do nothing! } delay(5000); // wait a few seconds for the secondary board to boot } void loop() { byte downPin; byte upPin; downPin = digitalRead(DownPinNum); upPin = digitalRead(UpPinNum); if (PrevValDpin == LOW && downPin == HIGH){ Serial.print("DOWN"); playcomplete("DOWN.wav"); delay(1000); } if (PrevValUpin == LOW && upPin == HIGH){ Serial.print("UP"); playcomplete("UP.wav"); delay(1000); } PrevValDpin = downPin; PrevValUpin = upPin; delay(2); } //(); }
Emon Lib
Schematics
Author
Brian Chamberlain
- 1 project
- 17 followers
Additional contributors
- Initial idea and sponsorship of the project by Vince Wadhwani at Econify
- The idea for how to monitor current usage using an arduino by Open Energy Monitor
Published onOctober 17, 2015
Members who respect this project
you might like | https://create.arduino.cc/projecthub/breakpointer/breville-imperial-tea-maker-a8c871 | CC-MAIN-2019-13 | refinedweb | 2,343 | 61.87 |
13 October 2009 12:32 [Source: ICIS news]
By Nigel Davis
?xml:namespace>
Both companies are supporting the new Patent Asset Index developed by Professor Holger Ernst of the WHU – Otto Beisheim School of Management.
The Patent Asset Index, they say, is a science-based metric that overcomes the main limitations of current patent analytics. It looks at patent applications globally and over time, while current measures tend to be restricted to patents in national jurisdictions.
The new index is made up of sub-components which help to measure the overall strength of a patent portfolio. It is based on two factors: ‘portfolio size’, which looks at the number of worldwide patent families; and ‘competitive impact’, looking at citations and market coverage.
“The Patent Asset Index offers a more detailed, accurate and robust perspective than current methodologies used to measure innovation strength,” the companies say.
Ernst believes his index is an “important indicator for the sustainability of innovative strength”. He has looked at the large, global chemical companies and wants to apply the methodology in other sectors.
Chemical companies could do with some validation of their research effort and better measures to help assess that effort in relation to their peers.
Billions of dollars are spent on research in chemicals. And companies rightly believe that effective R&D is the lifeblood of the business, or certainly is for those players intending to stick around in chemicals.
The business school’s ‘overall competitive impact’ index places BASF clearly in the lead among a group of international chemical majors: Bayer (with the data including pharmaceuticals), DuPont, Dow, Sumitomo Chemical, Mitsubishi Chemical, DSM, Solvay, Syngenta and AkzoNobel are on the list.
These firms are all particularly committed to both product and longer-range research in chemistry and related sciences.
BASF spent $1.9bn on R&D in 2008, according to company data collected for the ICIS Top 100 listing of the world’s chemical companies.
Dow, Sumitomo Chemical and Mitsubishi Chemical spent $1.3bn each and DuPont $1.4bn.
Solvay’s, DSM’s and AkzoNobel’s placing in the business school’s rankings appear impressive when related to their actual R&D spending outlays of $164m, $555m and $498m respectively.
Each of the companies in the listing is laying the groundwork for products and processes that will help drive future revenues and profits.
Chemical companies are not great spenders on R&D relative to sales, sometimes called the measure of research ‘intensity’.
BASF’s R&D to sales ratio, for instance, in 2008 was 2.2, 8.7% lower than in 2007; Dow’s ratio was 2.3, 6.6% down.
Research intensity overall in the sector has been declining for a number of years.
Some companies aim to invest, relatively, a great deal more in research. DuPont’s R&D to sales ratio in 2008 was 4.6 and almost flat with 2007. And some spend, relatively, somewhat less: Solvay’s R&D to sales ratio in 2008 was 1.7, 3.8% lower than in the prior year. Any measure that adds more understanding to effectiveness of research in the sector has to be welcomed.
BASF and Dow, in putting their support behind the index, say they are among the most innovative companies in the chemical industry.
Both are driving hard downstream, aiming to get closer to the customer and derive more sales and profits in closely targeted markets.
The companies remain major spenders in the industry, and particularly so since the acquisition by Dow last year of materials specialist Rohm and Haas and BASF’s acquisition of specialties maker Ciba.
The WHU – Otto Beisheim School of Management says that peer review of a scientific publication looking at the theoretical foundations of its work, as well as the implementation and validation of the index, is under | http://www.icis.com/Articles/2009/10/13/9254651/insight-patent-asset-index-helps-identify-research-effectiveness.html | CC-MAIN-2014-42 | refinedweb | 635 | 53.51 |
You can read this and other amazing tutorials on ElectroPeak's official website
How PIR Motion Sensors Work
Passive Infra Red sensors can detect movement of objects that radiate IR light (like human bodies). Therefore, using these sensors to detect human movement or occupancy in security systems is very common. Initial setup and calibration of these sensors takes about 10 to 60 seconds.
The HC-SR501’s infrared imaging sensor is an efficient, inexpensive and adjustable module for detecting motion in the environment. The small size and physical design of this module allow you to easily use it in your project.
The output of PIR motion detection sensor can be connected directly to one of the Arduino (or any microcontroller) digital pins. If any motion is detected by the sensor, this pin value will be set to “1”. The two potentiometers on the board allow you to adjust the sensitivity and delay time after detecting a movement.
PIR modules have a passive infrared sensor that detects the occupancy and movement from the infrared radiated from human body. You can use this module in security systems, smart lighting systems, automation, etc. There are different PIR modules available in the market, but all of them are basically the same. They all have at least a Vcc pin, GND pin, and digital output. In some of these modules, there is a ball like a lens on the sensor that improves the viewing angle.
Using a PIR Sensor with Arduino
Circuit. (or repeatable trigger mode).
There are also two potentiometers behind this module. By changing the SENSITIVITY potentiometer, you can reduce or increase the sensitivity of the sensor (clockwise increase), and also by changing TIME potentiometer the output delay after movement detection will be changed.
Code
You must add the library and then upload the code. If it is the first time you run an Arduino board, Just follow these steps:
- Go to and download the software of your OS. Install the IDE software as instructed.
- Run the Arduino IDE and clear the text editor and copy the following code in the text editor.
- Choose the board in tools and boards, then select your Arduino Board.
- Connect the Arduino to your PC and set the COM port in tools and port.
- Press the Upload (Arrow sign) button.
- You are all set!
/* PIR HCSR501 modified on 5 Feb 2019 by Saeed Hosseini @ ElectroPeak */ int ledPin = 13; // LED int pirPin = 2; // PIR Out pin int pirStat = 0; // PIR status void setup() { pinMode(ledPin, OUTPUT); pinMode(pirPin, INPUT); Serial.begin(9600); } void loop(){ pirStat = digitalRead(pirPin); if (pirStat == HIGH) { // if motion detected digitalWrite(ledPin, HIGH); // turn LED ON Serial.println("Hey I got you!!!"); } else { digitalWrite(ledPin, LOW); // turn LED OFF if we have no motion } }
For proper calibration, there should not be any movement in front of the PIR sensor for up to 15 seconds (until pin 13 is turned off). After this period, the sensor has a snapshot of its viewing area and it can detect movements. When the PIR sensor detects a movement, the output will be HIGH, otherwise, it will be LOW.
Using a PIR Sensor with Raspberry Pi
Circuit
Code
import sys sys.path.append('/home/pi/Adafruit-Raspberry-Pi-Python-Code/Adafruit_CharLCD') from Adafruit_CharLCD import Adafruit_CharLCD lcd=Adafruit_CharLCD() # instantiate LCD Display."
Example Projects
Motion and Gesture Detection by Arduino and PIR Sensor | https://create.arduino.cc/projecthub/electropeak/pir-motion-sensor-how-to-use-pirs-w-arduino-raspberry-pi-18d7fa | CC-MAIN-2019-43 | refinedweb | 561 | 55.44 |
Update slices of the tensor in-place with weighted sum. More...
#include <utility_ops.h>
Update slices of the tensor in-place with weighted sum.
ScatterWeightedSumOp is similar to WeightedSum and computes the weighted sum of several tensors. The first tensor has to be in-place and only slices of it on the first dimension as indexed by INDICES will be updated.
Input: X_0 - tensor to be updated weight_0 - scalar weight for X_0, applied only to slices affected, INDICES - 1-D list of indices on the first dimension of X_0 that need to be updated X_1 - update slices, has to have shape of len(INDICES) + shape(X_0)[1:] weight_1 - scalar weight for X_1 update X_2, weight_2, ...
Output: X_0 - has to be exactly the same tensor as the input 0
Note: The op pretty much ignores the exact shapes of the input arguments and cares only about sizes. It's done for performance consideration to avoid unnecessary reshapes. Only first dimension of X_0 is important, let's call it N. If M is the total size of X_0 and K is the size of INDICES then X_i is assumed to be of shape K x (M / N) regardless of the real shape.
Note: Each update in INDICES is applied independently which means that if duplicated elements are present in INDICES the corresponding slice of X_0 will be scaled multiple times. Manual collapsing of INDICES is required beforehand if necessary.
Note: Updates are applied sequentially by inputs which might have undesired consequences if the input tensor is accessed concurrently by different op (e.g. when doing Hogwild). Other threads might see intermediate results even on individual slice level, e.g. X_0 scaled by weight_0 but without any updates applied.
For now really works only on CPU because of INDICES access
Definition at line 459 of file utility_ops.h. | https://caffe2.ai/doxygen-c/html/classcaffe2_1_1_scatter_weighted_sum_op.html | CC-MAIN-2018-51 | refinedweb | 305 | 61.36 |
In the previous articles in this series I described how many different standards are used together to implement Web services. (See Resources.) Each of the standards, including SOAP, WSDL, and XML Schema, can stand alone, independent from the other standards. However, each standard solves only a very specific issue in the creation of a Web service. All of the standards leave extensibility points and room for further specification due to the limited scope of each of the standards. Thus, implementations of each standard must decide how to handle the extensibility points and what extensions are supported. To interoperate, different implementations cannot simply rely on having proper support for the standard itself, but must also note specifically how the extensibility points of the standards are handled. For instance, in the previous article, I described that simply knowing that two products implement the SOAP standard is not enough to ensure interoperability. The products must give information as to the specific transports they support, the data encoding they require, and possibly the type system they make use of for their data.
In this article, I will describe the issues around the Web Services Description Language (WSDL). I will also discuss where it fits into the Web services architecture and the specific features of the standard that one needs to be aware of to properly assess interoperability.
So many WSDL bindings, so little time
The concept of a Web service is very abstract. Thus a description language for Web services is also very abstract. WSDL describes services in terms of operations, which represent functionality accessed by exchanging messages, which are made up of message parts. A set of operations can be grouped into a portType. None of these WSDL concepts are concrete, meaning that knowing a Web service's portType, which gives you the relevant operations, messages, and message parts, is not enough information to make a real request to the server. Obvious things which are missing are the transport protocol, the wire format, and even the location on the network of the Web service. All of those things are seen by WSDL as concrete details of the implementation of the service. Of course, these details are not ignored by WSDL. But it is important to understand that they are not part of the WSDL standard exactly. Just as supporting SOAP did not mean supporting SOAP Encoding; supporting WSDL does not mean supporting specific WSDL bindings.
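For concreteness, here is a sketch of what such an abstract description might look like in WSDL 1.1, anticipating the stock-price example used below. The tns and xsd prefixes, the type choices, and the message names are illustrative assumptions, not taken from any real service.

<!-- Abstract description only: no transport, wire format, or network
     address appears anywhere in these elements. The tns and xsd prefixes
     are assumed to be declared on the enclosing wsdl:definitions. -->
<wsdl:message name="getStockPriceRequest">
  <wsdl:part name="symbol" type="xsd:string"/>
  <wsdl:part name="date" type="xsd:date"/>
</wsdl:message>
<wsdl:message name="getStockPriceResponse">
  <wsdl:part name="price" type="xsd:float"/>
</wsdl:message>
<wsdl:portType name="StockQuotePortType">
  <wsdl:operation name="getStockPrice">
    <wsdl:input message="tns:getStockPriceRequest"/>
    <wsdl:output message="tns:getStockPriceResponse"/>
  </wsdl:operation>
</wsdl:portType>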
The term binding in WSDL refers to the link between the abstract description and the concrete details of a Web service. A WSDL binding is an extension to the WSDL standard that specifies how the messages will look on the wire and creates constructs for specifying the location on the network where the service can be found. To illustrate the purpose of a binding in WSDL, imagine a Web service that has an operation called getStockPrice. The operation takes two parameters: the company symbol and the date for which to give the price. The Web service uses SOAP as its wire format. The WSDL specification is independent of the SOAP specification, but there needs to be a way to describe how to format the request into a SOAP Envelope. For example, if this is not specified, how does one know whether the parameters are to go in the header of the SOAP Envelope or the Body? The Web service author would use the SOAP WSDL binding to specify such details in the WSDL document. In Listing 1, the piece of WSDL presented describes how the getStockPrice operation should contain the date parameter in the header and the company symbol parameter in the body of the SOAP Envelope. Elements using the wsdl namespace prefix are defined by the WSDL standard; any others are part of a particular binding.
Listing 1: WSDL excerpt defining the concrete format of the getStockPrice request
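(A sketch of such a binding excerpt; the style, use, and soapAction values shown are illustrative assumptions.)

<wsdl:binding name="StockQuoteSoapBinding" type="tns:StockQuotePortType">
  <!-- The soap:* elements come from the SOAP binding extension,
       not from core WSDL; values here are assumptions -->
  <soap:binding style="rpc"
                transport="http://schemas.xmlsoap.org/soap/http"/>
  <wsdl:operation name="getStockPrice">
    <soap:operation soapAction=""/>
    <wsdl:input>
      <!-- the date parameter travels in the SOAP Header... -->
      <soap:header message="tns:getStockPriceRequest" part="date"
                   use="literal"/>
      <!-- ...while the company symbol travels in the SOAP Body -->
      <soap:body parts="symbol" use="literal"/>
    </wsdl:input>
    <wsdl:output>
      <soap:body use="literal"/>
    </wsdl:output>
  </wsdl:operation>
</wsdl:binding>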
As expected, there exists a WSDL binding for SOAP. This binding is described in the WSDL spec as a suggested binding. Other bindings that have been defined publicly are HTTP GET bindings, which make use of HTTP without SOAP by encoding the data in URLs, and MIME bindings, which allow the use of MIME in messages. So with WSDL and the HTTP GET binding, that simple CGI script you wrote to keep office table soccer statistics can be a bona fide Web service. This extensibility point of WSDL can also cause compatibility issues. Higher level Web service APIs that consume WSDL to generate proxies need to support not just WSDL itself, but the specific WSDL bindings that are in use by the Web service. The SOAP binding is the most commonly supported binding. Often these higher level APIs will refuse to deal with any WSDL that mentions any other binding. Yet it is not difficult to find Web services whose WSDL depends on the MIME and HTTP GET bindings.
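As a sketch of how these non-SOAP extensions read, the fragment below binds the same portType to plain HTTP GET, with the response described by a MIME construct; the http: and mime: prefixed elements come from those binding extensions, and the location shown is an illustrative assumption.

<wsdl:binding name="StockQuoteHttpGetBinding" type="tns:StockQuotePortType">
  <http:binding verb="GET"/>
  <wsdl:operation name="getStockPrice">
    <http:operation location="getStockPrice"/>
    <wsdl:input>
      <!-- parameters are serialized into the query string of the URL -->
      <http:urlEncoded/>
    </wsdl:input>
    <wsdl:output>
      <!-- the response body is described with the MIME binding -->
      <mime:content type="text/xml"/>
    </wsdl:output>
  </wsdl:operation>
</wsdl:binding>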
Ambiguities of encoded usage
Often the encoding style of choice is SOAP Section 5 Encoding. However, a problem arises because it is not made clear how you can model SOAP Encoding data models, that is, the graphs, with XML Schema. For this reason, the WSDL SOAP binding's encoded usage is the cause of many compatibility problems even if all the relevant technologies are properly supported. The problem is that XML Schema is a language designed to specify an XML document structure. XML documents are tree structures. However, the SOAP data model is more general and can deal with graphs rather than just trees. So XML Schema cannot describe a SOAP data model in any obvious way. Unfortunately, with WSDL as it stands today, most Web services use XML Schema to describe an instance of the SOAP data model. Since XML Schema cannot describe data conforming to the SOAP data model, toolkits that depend upon reading the WSDL to infer the structure of the requests and responses are left to a guessing game. This is often benign if the same toolkit is used for the server and the client, but it soon becomes problematic when different toolkits are used. The problem is not with encoded usage or XML Schema particularly. The problem is that no standard defines the rules for describing a SOAP data model with XML Schema. Literal usage, as opposed to encoded usage, simply asserts that the XML Schema referenced in the WSDL completely and exactly describes the XML that will travel on the wire. Thus the mapping from the abstract messages in the WSDL to the concrete requests and responses on the wire is completely unambiguous with literal usage.
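In the binding itself, the difference between the two usages comes down to a single attribute on the soap:body (or soap:header) element. A minimal sketch, with the namespace value as an illustrative assumption:

<!-- Encoded usage: the wire format follows the SOAP Section 5
     Encoding rules; the schema types are only a starting point -->
<soap:body use="encoded"
           encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
           namespace="urn:StockQuoteService"/>

<!-- Literal usage: the referenced schema describes the wire XML exactly -->
<soap:body use="literal"/>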
To illustrate the problem, imagine a Web service that returns a linked list data structure which we serialize to XML using the SOAP Section 5 Encoding rules. Listing 2 shows two different serializations of the same linked list data structure serialized according to SOAP Encoding rules.
Listing 2. Two serializations of the same graph according to SOAP Encoding rules
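(The instance documents below are a sketch of the two forms; the element names, the values, and the use of xsi:nil to terminate the list are illustrative assumptions.)

<!-- Serialization 1: each node embedded inline within its predecessor -->
<list>
  <value>1</value>
  <next>
    <value>2</value>
    <next xsi:nil="true"/>
  </next>
</list>

<!-- Serialization 2: the same graph expressed with SOAP Encoding's
     independent, multi-reference elements linked by href and id -->
<list href="#node1"/>
<item id="node1">
  <value>1</value>
  <next href="#node2"/>
</item>
<item id="node2">
  <value>2</value>
  <next xsi:nil="true"/>
</item>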
The problem arises when I try to create an XML Schema that describes the linked list structure. The client that reads my schema must be able to understand that I am describing such a linked list structure and thus must accept all possible versions of the XML that de-serialize to the same graph. This means the schema must represent more than simply what the XML looks like; it must represent the essence of the data structure: a linked list. But XML Schema is not designed to do more than represent a particular XML serialization. Nowhere in any specification is a means described to use XML Schema to represent a SOAP data model. Thus, if I were to write a schema to represent my particular linked list, there is no guarantee that another person will read my schema and extract from it the same data model. Another way of thinking about the problem is to imagine writing a single XML Schema that will validate both of the instance documents shown in Listing 2, as well as every other possible valid formulation of the same graph according to SOAP Encoding rules. The task may be possible for this simple case, but it becomes daunting and likely impossible in the general case. The difficulty of writing such a schema demonstrates that XML Schema is a language unsuited for the task of describing models such as the SOAP data model. Even though WSDL does not mandate that the XML validate against the given schema when you use the encoded style of the binding, there should still be a set of unambiguous rules for going from an XML Schema to a SOAP data model. Otherwise, given such ambiguity, the implications for interoperability are disastrous. In essence, a set of rules is missing: how to describe SOAP data models in the WSDL such that everyone will see the same structure. XML Schema alone is not the solution. Current discussions around WSDL version 1.2 are addressing this problem. One proposed solution is that a new simple language be created for describing SOAP data models explicitly.
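To see the difficulty concretely, here is roughly the schema a well-meaning author might write; this is a sketch, and the type and element names are assumptions. It captures the inline, nested form, but it says nothing about href and id indirection, so the multi-reference serialization in Listing 2 would not match it at all:

<xsd:complexType name="ListNode">
  <xsd:sequence>
    <xsd:element name="value" type="xsd:int"/>
    <!-- recursion covers arbitrarily deep nesting, but not a reference
         to a node serialized elsewhere in the envelope -->
    <xsd:element name="next" type="tns:ListNode" nillable="true"/>
  </xsd:sequence>
</xsd:complexType>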
Recommendations to developers
As practical advice in the current state of the art of Web services, I have a few hints which I believe will help you write Web services that are accessible in the simplest and most efficient way by the largest number of developers. Keep in mind that these are nothing more than opinions based on observations. There are exceptions, and you can sometimes rightly disagree with these ideas. The constant change in this technology also changes the pros and cons of each of these suggestions.
The first hint is simply to use WSDL. It will allow developers to generate proxies which let them focus upon the real work and not concern themselves with the XML DOM API. Since I suggest that you use WSDL, I will suggest techniques which increase the compatibility of your WSDL file with most high level Web service APIs. The first of these is to use the SOAP WSDL binding with XML Schema as the type system with literal usage. This will help avoid the ambiguities that arise from encoded usage. I suggest using the SOAP binding simply because it is the most commonly supported and there is no reason not to use SOAP as a message format in most cases. As for a transport choice, I won't suggest a particular transport because an appropriate choice of transport is very important for the usefulness of your Web service. The wrong transport can cause serious problems with scalability, implementation difficulty, security, and reliability. Be aware, however, that HTTP is the most commonly supported transport. So if HTTP is an appropriate choice, then I suggest using it instead of another similarly suited transport.
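Put together, these recommendations amount to a binding along the following lines. This is a sketch: the rpc style, the service name, and the address URL are placeholder assumptions, while the essential choices are the SOAP binding, the HTTP transport, and use="literal" throughout.

<wsdl:binding name="StockQuoteBinding" type="tns:StockQuotePortType">
  <soap:binding style="rpc"
                transport="http://schemas.xmlsoap.org/soap/http"/>
  <wsdl:operation name="getStockPrice">
    <soap:operation soapAction=""/>
    <wsdl:input><soap:body use="literal"/></wsdl:input>
    <wsdl:output><soap:body use="literal"/></wsdl:output>
  </wsdl:operation>
</wsdl:binding>
<wsdl:service name="StockQuoteService">
  <wsdl:port name="StockQuotePort" binding="tns:StockQuoteBinding">
    <!-- the only place a concrete network location appears -->
    <soap:address location="http://example.org/stockquote"/>
  </wsdl:port>
</wsdl:service>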
Recommendations to the Web services community
I believe that to alleviate the complexity caused by such general standards with such complex interactions we should urge and support the creation of a set of Web service classes. Each class of Web service could consist of a description of required transports and bindings that must be supported along with some semantics to disambiguate the interaction of such technologies. Some examples of Web service classes might be WWW Web services which are perform the tasks for which CGI scripts have currently been employed, administrative Web services which might employ technologies such as Universal Plug and Play (UPNP) and Simple Network Management Protocol (SNMP), or data access Web services which could help create efficient Web services front ends for applications such as the Lightweight Directory Access Protocol (LDAP) or the Structured Query Language (SQL). Classes of Web services would share performance, state, and security requirements.
If a software product indicates that it is compatible with the administrative Web services requirements, you can be assured that it will interoperate with another software package making a similar claim. It turns out that a group known as the Web Services Interoperability Organization (WS-I) is currently developing what they call Web service profiles. These seem to have similar goals as to what I have described as Web service classes. The WS-I's "Basic Profile" draft was recently released. This profile is a great step towards advancing interoperability. It clears up simple ambiguities in the current standards and enforces a particular transport, type system, and encoding for compliance with the profile. Such recommendations are very important. So the WS-I is an important player in the maturation of Web services. The SOAPBuilders mailing list is also improving interoperability by ironing out the differences between current Web service toolkits.
Understanding that SOAP and WSDL are very general standards which address very specific pieces of the Web services problem is the first step in being able to evaluate the compatibility and scope of Web services software products. Be aware of the key extensibility points as discussed above. Each one presents you with a point at which compatibility problems may arise. Yet each of these extensibility points also gives the standards the ability to adapt to specific applications. For example, XML Schema might be a great type system for configuration information databases, but it might be awkward and inefficient for an image database. SOAP over HTTP might be great for e-commerce transactions but it will not work easily for publish/subscribe applications. Keep in mind that of the standards I have mentioned, only XML Schema has had significant peer review. SOAP and WSDL in their current forms are merely notes and working drafts. SOAP version 1.2 and WSDL version 1.2 will be soon official recommendations from the W3C. The good news is that because of the amount of interest and experimentation with the current SOAP and WSDL versions, these revised versions are likely to have resolved many of the simple ambiguities. Still, they will not reduce the generality of the standards and compatibility must still be ensured for each specific implementation in question.
- Look at the Web Services Interoperability Organization (WS-I), specifically their introduction to Web service profiles (PDF).
- Learn about the very important WS-I Basic Profile in First look at the WS-I Basic Profile 1.0. (developerWorks, October 2002)
- For more information about the problems with encoded usage read The Argument Against SOAP Encoding.
- Dig deeper into the issues of different WSDL bindings in Web service invocation sans SOAP. (developerWorks, September 2001)
- Learn about the activity of the W3C Web Services Description Working Group
- Get some practical information about WSDL in Deploying Web services with WSDL: Part 1. (developerWorks, November 2001)
- Clear up questions about WSDL from the source, the WSDL version 1.1 specification.
- For a peek at what's coming in the new specs, look at the WSDL version 1.2 working draft part 1 and the WSDL version 1.2 working draft part 2.
- Finding your way through Web service standards, Part 2 (developerWorks, October 2002)
- Finding your way through Web service standards, Part 1 (developerWorks, October 2002). | http://www.ibm.com/developerworks/webservices/library/ws-stand3/index.html | crawl-003 | refinedweb | 2,474 | 50.87 |
2011-10-11 09:32:33 8 Comments
I'm trying to use Fragment with a
ViewPager using the
FragmentPagerAdapter.
What I'm looking for to achieve is to replace a fragment, positioned on the first page of the
ViewPager, with another one.
The pager is composed of two pages. The first one is the
FirstPagerFragment, the second one is the
SecondPagerFragment. Clicking on a button of the first page. I'd like to replace the
FirstPagerFragment with the NextFragment.
There is my code below.
public class FragmentPagerActivity extends FragmentActivity { static final int NUM_ITEMS = 2; MyAdapter mAdapter; ViewPager mPager; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.fragment_pager); mAdapter = new MyAdapter(getSupportFragmentManager()); mPager = (ViewPager) findViewById(R.id.pager); mPager.setAdapter(mAdapter); } /** * Pager Adapter */ public static class MyAdapter extends FragmentPagerAdapter { public MyAdapter(FragmentManager fm) { super(fm); } @Override public int getCount() { return NUM_ITEMS; } @Override public Fragment getItem(int position) { if(position == 0) { return FirstPageFragment.newInstance(); } else { return SecondPageFragment.newInstance(); } } } /** * Second Page FRAGMENT */ public static class SecondPageFragment extends Fragment { public static SecondPageFragment newInstance() { SecondPageFragment f = new SecondPageFragment(); return f; } @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { //Log.d("DEBUG", "onCreateView"); return inflater.inflate(R.layout.second, container, false); } } /** * FIRST PAGE FRAGMENT */ public static class FirstPageFragment extends Fragment { Button button; public static FirstPageFragment newInstance() { FirstPageFragment f = new FirstPageFragment(); return f; } @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { //Log.d("DEBUG", "onCreateView"); View root = inflater.inflate(R.layout.first, container, false); button = (Button) root.findViewById(R.id.button); button.setOnClickListener(new OnClickListener() { @Override public void onClick(View v) { FragmentTransaction trans = getFragmentManager().beginTransaction(); trans.replace(R.id.first_fragment_root_id, NextFragment.newInstance()); trans.setTransition(FragmentTransaction.TRANSIT_FRAGMENT_OPEN); trans.addToBackStack(null); trans.commit(); } }); return root; } /** * Next Page FRAGMENT in the First Page */ public static class NextFragment extends Fragment { public static NextFragment newInstance() { NextFragment f = new NextFragment(); return f; } @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { //Log.d("DEBUG", "onCreateView"); return inflater.inflate(R.layout.next, container, false); } } }
...and here the xml files
fragment_pager.xml
<?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns: <android.support.v4.view.ViewPager android: </android.support.v4.view.ViewPager> </LinearLayout>
first.xml
<?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns: <Button android: </LinearLayout>
Now the problem... which ID should I use in
trans.replace(R.id.first_fragment_root_id, NextFragment.newInstance());
?
If I use
R.id.first_fragment_root_id, the replacement works, but Hierarchy Viewer shows a strange behavior, as below.
At the beginning the situation is
after the replacement the situation is
As you can see there is something wrong, I expect to find the same state shown as in the first picture after I replace the fragment.
Related Questions
Sponsored Content
40 Answered Questions
[SOLVED] ViewPager PagerAdapter not updating the View
- 2011-08-31 20:54:52
- C0deAttack
- 373608 View
- 603 Score
- 40 Answer
- Tags: android android-viewpager
25 Answered Questions
[SOLVED] How to determine when Fragment becomes visible in ViewPager
- 2012-04-05 07:58:15
- 4ntoine
- 311637 View
- 758 Score
- 25 Answer
- Tags: android android-fragments android-viewpager
39 Answered Questions
[SOLVED] onActivityResult is not being called in Fragment
- 2011-05-27 04:35:19
- Spidy
- 405650 View
- 890 Score
- 39 Answer
- Tags: android android-fragments android-activity
34 Answered Questions
[SOLVED] findViewById in Fragment
- 2011-06-27 16:24:47
- simplified.
- 613752 View
- 1046 Score
- 34 Answer
- Tags: android android-fragments android-imageview findviewbyid
3 Answered Questions
[SOLVED] How to catch onclick event in Fragment
- 2013-05-15 23:35:17
- chipbk10
- 3093 View
- 3 Score
- 3 Answer
- Tags: java android android-layout android-intent
10 Answered Questions
[SOLVED] ViewPager and fragments — what's the right way to store fragment's state?
- 2011-10-31 09:18:32
- Oleksii Malovanyi
- 225930 View
- 492 Score
- 10 Answer
- Tags: android design-patterns android-fragments android-viewpager
1 Answered Questions
[SOLVED] Send data from Adapter to Fragment and get it in fragment
- 2018-12-29 07:04:29
- Ashu
- 327 View
- 0 Score
- 1 Answer
- Tags: android-fragments nullpointerexception adapter android-recyclerview
1 Answered Questions
[SOLVED] Weird OutOfMemoryError when loading view pager
1 Answered Questions
[SOLVED] How to parse jsonarray to listview in fragment
- 2017-12-04 05:34:36
- Spider Lynxz
- 248 View
- 0 Score
- 1 Answer
- Tags: arrays json android-fragments jsonparser
@mdelolmo 2012-08-28 08:39:34
Based on @wize 's answer, which I found helpful and elegant, I could achieve what I wanted partially, cause I wanted the cability to go back to the first Fragment once replaced. I achieved it bit modifying a bit his code.
This would be the FragmentPagerAdapter:
To perfom the replacement, simply define a static field, of the type
CalendarPageFragmentListenerand initialized through the
newInstancemethods of the corresponding fragments and call
FirstFragment.pageListener.onSwitchToNextFragment()or
NextFragment.pageListener.onSwitchToNextFragment()respictevely.
@Leandros 2012-09-30 01:05:51
How do you perform the replacement? it isn't possible to initialize the Listener without overriding the onSwitchToNext method.
@seb 2012-10-13 15:34:02
don't u lose the state saved in mFragmentAtPos0 if u rotate the device?
@mdelolmo 2012-10-14 16:32:55
@seb, I actually improved this solution in order to save
mFragmentAtPos0reference when saving the activity state. It's not the most elegant solution, but works.
@Lyd 2013-09-04 11:13:01
You can see my answer. It replaces fragments and return back to first one.
@Hoa Vu 2015-01-15 02:07:02
@mdelolmo Could you take a look at this question about ViewPager? Thank you stackoverflow.com/questions/27937250/…
@sha 2016-06-21 22:58:40
Do we actually need to maintain the Fragment instance
mFragmentAtPos0? Instead can we just save the
POSTION_NONEor
POSITION_UNCHANGED, which will notify the adapter. We can show what ever fragment we need in
getItem. My thoughts.. Any suggestions? Any loop holes in implementing this way?
@AndroidTeam At Mail.Ru 2011-11-16 09:42:09
To replace a fragment inside a
ViewPageryou can move source codes of
ViewPager,
PagerAdapterand
FragmentStatePagerAdapterclasses into your project and add following code.
into
ViewPager:
into FragmentStatePagerAdapter:
handleGetItemInvalidated()ensures that after next call of
getItem()it return newFragment
getFragmentPosition()returns position of the fragment in your adapter.
Now, to replace fragments call
If you interested in an example project ask me for the sources.
@user1069391 2011-11-28 13:14:44
How should the methods getFragmentPosition() and handleGetItemInbalidated() be like? I am unable to update getItem() to return NewFragment.. Please help..
@AndroidTeam At Mail.Ru 2011-12-06 11:41:26
Hi! Please find test project on this link files.mail.ru/eng?back=%2FEZU6G4
@seb 2012-10-14 12:45:24
Your test project fails, if u rotate the device.
@Sunny 2013-06-04 12:06:24
@AndroidTeamAtMail.Ru the link seems to expired.Will you please share the code
@Lyd 2013-09-04 11:08:49
You can see my answer. It replaces fragments and return back to first one.
@Min Khant Lu 2018-02-28 16:00:42
This is my way to achieve that.
First of all add
Root_fragmentinside
viewPagertab in which you want to implement button click
fragmentevent. Example;
First of all,
RootTabFragmentshould be include
FragmentLayoutfor fragment change.
Then, inside
RootTabFragment
onCreateView, implement
fragmentChangefor your
FirstPagerFragment
After that, implement
onClickevent for your button inside
FirstPagerFragmentand make fragment change like that again.
Hope this will help you guy.
@Nikola Srdoč 2020-07-04 18:16:22
little bit late but still cant find easy solution.In your example when you rotate screen while in NextFragment app switches back to FirstPagerFragment...
@sha 2016-06-22 05:16:54
I followed the answers by @wize and @mdelolmo and I got the solution. Thanks Tons. But, I tuned these solutions a little bit to improve the memory consumption.
Problems I observed:
They save the instance of
Fragmentwhich is replaced. In my case, it is a Fragment which holds
MapViewand I thought its costly. So, I am maintaining the
FragmentPagerPositionChanged (POSITION_NONE or POSITION_UNCHANGED)instead of
Fragmentitself.
Here is my implementation.
Demo link here..
For demo purpose, used 2 fragments
TabReplaceFragmentand
DemoTab2Fragmentat position two. In all the other cases I'm using
DemoTabFragmentinstances.
Explanation:
I'm passing
Switchfrom Activity to the
DemoCollectionPagerAdapter. Based on the state of this switch we will display correct fragment. When the switch check is changed, I'm calling the
SwitchFragListener's
onSwitchToNextFragmentmethod, where I'm changing the value of
pagerAdapterPosChangedvariable to
POSITION_NONE. Check out more about POSITION_NONE. This will invalidate the getItem and I have logics to instantiate the right fragment over there. Sorry, if the explanation is a bit messy.
Once again big thanks to @wize and @mdelolmo for the original idea.
Hope this is helpful. :)
Let me know if this implementation has any flaws. That will be greatly helpful for my project.
@wize 2012-02-03 10:50:50
There is another solution that does not need modifying source code of
ViewPagerand
FragmentStatePagerAdapter, and it works with the
FragmentPagerAdapterbase class used by the author.
I'd like to start by answering the author's question about which ID he should use; it is ID of the container, i.e. ID of the view pager itself. However, as you probably noticed yourself, using that ID in your code causes nothing to happen. I will explain why:
First of all, to make
ViewPagerrepopulate the pages, you need to call
notifyDataSetChanged()that resides in the base class of your adapter.
Second,
ViewPageruses the
getItemPosition()abstract method to check which pages should be destroyed and which should be kept. The default implementation of this function always returns
POSITION_UNCHANGED, which causes
ViewPagerto keep all current pages, and consequently not attaching your new page. Thus, to make fragment replacement work,
getItemPosition()needs to be overridden in your adapter and must return
POSITION_NONEwhen called with an old, to be hidden, fragment as argument.
This also means that your adapter always needs to be aware of which fragment that should be displayed in position 0,
FirstPageFragmentor
NextFragment. One way of doing this is supplying a listener when creating
FirstPageFragment, which will be called when it is time to switch fragments. I think this is a good thing though, to let your fragment adapter handle all fragment switches and calls to
ViewPagerand
FragmentManager.
Third,
FragmentPagerAdaptercaches the used fragments by a name which is derived from the position, so if there was a fragment at position 0, it will not be replaced even though the class is new. There are two solutions, but the simplest is to use the
remove()function of
FragmentTransaction, which will remove its tag as well.
That was a lot of text, here is code that should work in your case:
Hope this helps anyone!
@user1159819 2012-07-09 04:30:19
works for me too, but how to implement back button to show the first fragment?
@georgiecasey 2012-08-13 05:13:17
Great explanation but could you post full source code of this, specifically FirstPageFragment and SecondPageFragment classes.
@mdelolmo 2012-08-28 07:48:47
I works for me as well, but I haven't been able to reverse the action, it's to say, and I think what @user1159819 means, once you are in NextFragment, you might want to go back to FirstFragment, I don't see how to do it with this implementation
@seb 2012-10-14 14:25:05
If u want a more modular approach, i provided a more sophisticated solution which is based on wize's insights.
@Lyd 2013-09-04 11:13:16
You can see my answer. It replaces fragments and return back to first one.
@GSerg 2014-04-28 17:56:29
I'm having hard time trying to use the card flip animation with this. The animation does not work with
.remove(), only with
replace(), but using
replacecrashes the program - the replaced fragment animates away properly, but nothing animates in to replace it, leaving a blank black page in the view pager. Swiping to other pages works, but as soon as I return to the black page the app crashes. I wonder if I must override even more members of
FragmentPagerAdapter?
@GSerg 2014-04-28 17:59:56
Oh, and I have to use
Support.V13.App.FragmentPagerAdapteras the adapter, because
Support.V4.App.FragmentPagerAdapterdoes not allow native
Fragments, only the support fragments, and support fragments crash on
objectAnimatoranimation missing.
@miguel 2014-06-06 01:58:06
How do you add the FirstPageFragmentListener to the Bundle in newInstance() ?
@Hoa Vu 2015-01-15 02:06:24
Could you guys take a look at this question about ViewPager? Thank you stackoverflow.com/questions/27937250/…
@user2511882 2015-02-05 21:26:40
what does mFragmentAtPos0 equate to?
@Simon 2015-05-01 20:36:45
Thanks - this was the simpliest implementation by far - worked for me.
@Fenil 2015-08-04 12:57:54
Thanks, It worked like a charm. An alternate solution was to change it to FragmentStatePagerAdapter but above solution worked well with FragmentPagerAdapter
@MiguelHincapieC 2016-05-17 17:20:26
Can you add the code where you handle with
FirstPageFragment.newInstance()listener param?
@Reagan Gallant 2016-09-15 17:59:42
I know this is an old question and answer but how would you do this in Xamarin android?
@Reagan Gallant 2016-09-16 07:46:04
Please view my question at stackoverflow.com/questions/39519889/…
@Ian Lovejoy 2016-11-21 18:00:58
This only works for me if I call commitNow() instead of commit() when removing the fragment. Not sure why my use case is different, but hopefully this will save someone some time.
@Damir Mailybayev 2017-02-20 15:37:01
in my case i had to change from
FragmentPagerAdapterto
FragmentStatePagerAdapterand then it work
@mrahimygk 2019-01-15 11:27:58
what if the fragment is like
NextFragment.newInstance(item:Item)which
itemis set by an adapter?
@StNickolay 2019-12-18 17:41:22
@Ian Lovejoy thank you very much, it does the trick! I've spent couple of hours fighting with this issue until I see your suggestion about commitNow()
@marktani 2016-01-03 17:41:01
tl;dr: Use a host fragment that is responsible for replacing its hosted content and keeps track of a back navigation history (like in a browser).
As your use case consists of a fixed amount of tabs my solution works well: The idea is to fill the ViewPager with instances of a custom class
HostFragment, that is able to replace its hosted content and keeps its own back navigation history. To replace the hosted fragment you make a call to the method
hostfragment.replaceFragment():
All that method does is to replace the frame layout with the id
R.id.hosted_fragmentwith the fragment provided to the method.
Check my tutorial on this topic for further details and a complete working example on GitHub!
@Sira Lam 2018-04-23 03:50:26
Although this looks clumsier, I think is in fact the best (in terms of structure and ease of maintenance) way to do this.
@Fariz Fakkel 2018-05-15 21:49:03
I end up headbutting an NPE. :/ Can't find the View that is to be replaced
@JosephM 2015-11-26 06:26:07
after research i found solution with short code. first of all create a public instance on fragment and just remove your fragment on onSaveInstanceState if fragment not recreating on orientation change.
@Jachumbelechao Unto Mantekilla 2015-06-08 15:18:02
I doing something to similar to wize but in my answer yo can change between the two fragments whenever you want. And with the wize answer I have some problems when changing the orientation of the screen an things like that. This is the PagerAdapter looks like:
The listener I implemented in the adapter container activity to put it to the fragment when attaching it, this is the activity:
Then in the fragment putting the listener when attach an calling it:
And finally the listener:
@Ivan Bajalovic 2014-12-04 21:33:42
In your
onCreateViewmethod,
containeris actually a
ViewPagerinstance.
So, just calling
will change current fragment in your
ViewPager.
@Brandon 2015-09-24 19:20:25
From hours of searching, it was this one line of code that I needed to make all my tabs work correctly.
vpViewPager.setCurrentItem(1);. I went through each persons example, with nothing happening each time until I finally got to yours. Thank you.
@Kevin Lee 2016-03-08 09:31:57
mind blown... this is amazing!! I did not realize that the container variable was the viewpager itself. This answer has to be explored first before all the other lengthy ones. Plus, there is no passing in of listener variables to the fragment constructor which simplifies all the implementation when Android auto-recreates fragments using the default no-arg constructor when config changes.
@behelit 2016-03-16 10:15:15
isn't this just changing to an existing tab? how is this related to replacing fragments?
@Yoann Hercouet 2016-05-06 12:25:47
Agree with @behelit, this does not answer the question, it will just move the
ViewPagerto another page, it won't replace any fragment.
@Rukmal Dias 2014-09-23 04:42:49
I have created a ViewPager with 3 elements and 2 sub elements for index 2 and 3 and here what I wanted to do..
I have implemented this with the help from previous questions and answers from StackOverFlow and here is the link.
ViewPagerChildFragments
@Dory 2014-12-18 06:45:54
Your project fails, if you rotate the device.
@user3331142 2014-06-25 13:49:56
Replacing fragments in a viewpager is quite involved but is very possible and can look super slick. First, you need to let the viewpager itself handle the removing and adding of the fragments. What is happening is when you replace the fragment inside of SearchFragment, your viewpager retains its fragment views. So you end up with a blank page because the SearchFragment gets removed when you try to replace it.
The solution is to create a listener inside of your viewpager that will handle changes made outside of it so first add this code to the bottom of your adapter.
Then you need to create a private class in your viewpager that becomes a listener for when you want to change your fragment. For example you could add something like this. Notice that it implements the interface that was just created. So whenever you call this method, it will run the code inside of the class below.
There are two main things to point out here:
Notice the listeners that are placed in the 'newInstance(listener)constructor. These are how you will callfragment0Changed(String newFragmentIdentification)`. The following code shows how you create the listener inside of your fragment.
static nextFragmentListener listenerSearch;
You could call the change inside of your
onPostExecute
This would trigger the code inside of your viewpager to switch your fragment at position zero fragAt0 to become a new searchResultFragment. There are two more small pieces you would need to add to the viewpager before it became functional.
One would be in the getItem override method of the viewpager.
Now without this final piece you would still get a blank page. Kind of lame, but it is an essential part of the viewPager. You must override the getItemPosition method of the viewpager. Ordinarily this method will return POSITION_UNCHANGED which tells the viewpager to keep everything the same and so getItem will never get called to place the new fragment on the page. Here's an example of something you could do
Like I said, the code gets very involved, but you basically have to create a custom adapter for your situation. The things I mentioned will make it possible to change the fragment. It will likely take a long time to soak everything in so I would be patient, but it will all make sense. It is totally worth taking the time because it can make a really slick looking application.
Here's the nugget for handling the back button. You put this inside your MainActivity
You will need to create a method called backPressed() inside of FragmentSearchResults that calls fragment0changed. This in tandem with the code I showed before will handle pressing the back button. Good luck with your code to change the viewpager. It takes a lot of work, and as far as I have found, there aren't any quick adaptations. Like I said, you are basically creating a custom viewpager adapter, and letting it handle all of the necessary changes using listeners
@AnteGemini 2018-09-21 09:34:51
Thank you for this solution. But have you had any problems after rotating screen. My app crashes when I rotate screen and trying to reload another fragment in the same tab.
@user3331142 2018-09-22 14:20:20
Been a while since I've done Android dev, but my guess is that there is a piece of data that is no longer there your app is counting on. When you rotate, the whole activity restarts and can cause crashes if you try to reload your previous state without all the data you need (say from the bundle in onPause, onResume)
@Sergei Vasilenko 2014-04-22 11:39:01
I found simple solution, which works fine even if you want add new fragments in the middle or replace current fragment. In my solution you should override
getItemId()which should return unique id for each fragment. Not position as by default.
There is it:
Notice: In this example
FirstFragmentand
SecondFragmentextends abstract class PageFragment, which has method
getPage().
@mspapant 2013-07-24 11:25:57
I have implemented a solution for:
The tricks to achieve this are the following:
The adapter code is the following:
The very first time you add all tabs, we need to call the method createHistory(), to create the initial history
Every time you want to replace a fragment to a specific tab you call: replace(final int position, final Class fragmentClass, final Bundle args)
On back pressed you need to call the back() method:
The solution works with sherlock action bar and with swipe gesture.
@Nicramus 2014-05-06 12:50:52
it's great! could you put it on the github?
@dazedviper 2014-08-02 19:51:46
This implementation is the cleanest of all of the provided here, and addresses every single issue. It should be the accepted answer.
@yaronbic 2015-05-19 13:50:49
Did anyone have issues with this implementation? I'm having 2 of them. 1. the first back doesn't register - I have to click back twice for it to register. 2. After I click back (twice) on tab a and I go to tab b, when I go back to tab a it just shows the content of tab b
@georgiecasey 2012-12-18 01:47:49
As of November 13th 2012, repacing fragments in a ViewPager seems to have become a lot easier. Google released Android 4.2 with support for nested fragments, and it's also supported in the new Android Support Library v11 so this will work all the way back to 1.6
It's very similiar to the normal way of replacing a fragment except you use getChildFragmentManager. It seems to work except the nested fragment backstack isn't popped when the user clicks the back button. As per the solution in that linked question, you need to manually call the popBackStackImmediate() on the child manager of the fragment. So you need to override onBackPressed() of the ViewPager activity where you'll get the current fragment of the ViewPager and call getChildFragmentManager().popBackStackImmediate() on it.
Getting the Fragment currently being displayed is a bit hacky as well, I used this dirty "android:switcher:VIEWPAGER_ID:INDEX" solution but you can also keep track of all fragments of the ViewPager yourself as explained in the second solution on this page.
So here's my code for a ViewPager with 4 ListViews with a detail view shown in the ViewPager when the user clicks a row, and with the back button working. I tried to include just the relevant code for the sake of brevity so leave a comment if you want the full app uploaded to GitHub.
HomeActivity.java
ListProductsFragment.java
@Gautham 2013-01-13 03:31:58
Will it be possible for you to upload your code to github? thanks.
@Marckaraujo 2013-01-21 14:10:49
@georgie plz, upload this sample project to github.
@gcl1 2013-02-12 02:09:52
@georgiecasey: I tried your solution, but could not get the call to getChildFragmentManager() to resolve. I am calling it from a click listener, similar to your example code. The eclipse error message says "The method getChildFragmentManager() is undefined for the type MyFragment," where MyFragment is extended from SherlockFragment, just like your example. What am I missing - any suggestions?
@gcl1 2013-02-12 16:43:19
@georgiecasey: I downloaded API 17, Android 4.2 which allowed me to start using getChildFragmentManager(). But I must be missing something from your solution, as I am now getting overlapping old + new fragments on the screen. Note: I'm trying to use your solution in concert with the Google example TabsAdapter code from developer.android.com/reference/android/support/v4/view/…. Thanks for any suggestions and/or reference code.
@howettl 2013-02-19 20:36:02
What does R.id.products_list_linear refer to? Please link to your code as was offered.
@Indrek Kõue 2013-06-07 15:30:50
@howettl the ID references to outer fragment member. Key to solve this problem is to use nested fragments (that way you don't have to manipulate viewpager adapter).
@Shihab Uddin 2013-10-18 09:14:37
TabPageIndicator mIndicator; what is this? please explain this
@GSerg 2014-04-28 18:03:41
Is there any way to use child fragments manager with native
Fragments (as opposed to support fragments)? I'm using
Support.V13.App.FragmentPagerAdapterto feed native fragments to
ViewPagerbecause otherwise the card flip animation is not availbale.
@Xinzz 2014-05-22 03:48:03
@gcl1 did you end up solving the issue you had? I am having the same issue with overlapping old/new fragments and I can't seem to fix it.
@Baruch 2014-06-25 14:10:02
Did this ever make it to github?
@Radoslav 2015-02-16 07:04:24
georgiecasey I have problems with your solutions it kind of works however when the fragmets are replaced the button that replace the fragment is still visible on the screen. Do you happen to know how I can remove it
@seb 2012-10-14 14:14:12
I also made a solution, which is working with Stacks. It's a more modular approach so u don't have to specify each Fragment and Detail Fragment in your
FragmentPagerAdapter. It's build on top of the Example from ActionbarSherlock which derives if I'm right from the Google Demo App.
Add this for back button functionality in your MainActivity:
If u like to save the Fragment State when it get's removed. Let your Fragment implement the interface
SaveStateBundlereturn in the function a bundle with your save state. Get the bundle after instantiation by
this.getArguments().
You can instantiate a tab like this:
works similiar if u want to add a Fragment on top of a Tab Stack. Important: I think, it won't work if u want to have 2 instances of same class on top of two Tabs. I did this solution quick together, so I can only share it without providing any experience with it.
@Jeroen 2012-10-24 09:48:15
I get a nullpointer exception when pushing the back-button and my replaced fragment is not visible either. So maybe I am doing something wrong: what should be the correct call to change fragments properly?
@Jeroen 2012-10-25 08:01:43
Update: I got most of the stuff fixed, however, if I push a back-button after adding a new fragment and then go to another tab and back, I get a nullpointer exception. So I have 2 questions: 1: how can I remove the history of a tab and ensure only the latest fragment (top of te stack) is still present? 2: how can I get rid of the nullpointer exception?
@seb 2012-10-25 12:48:08
@Jeroen 1: u can get the right stack from mTabStackMap, and then remove every TabInfo object that lies under the top of the stack, 2: very hard to say, ask a question with the stacktrace then i could try to find out, are u sure that's not a failure in your code? Does this happen also if u switch to a tab (and then back) that is in the direct neighborhood (right or left)?
@Jeroen 2012-11-12 14:55:42
Ok, I ran into a new problem: if I rotate my screen, then the stack seems to be empty again? It seems to loose track of the previous items...
@seb 2012-11-14 19:35:09
@Jeroen: yes i ran into the same problem, but was expecting it, the activity gets recreated if u rotate the device this means your old variables are getting ate by the GB, the solution would be to save the stacks and tabinfos into the savestatebundle in the onDestroy method and then recover the tabsadapter state on recreations of the activity. Will do it some time, but atm to much to do on other stuff. There is also a quick dirty fix for rotating the device google it has to do with the manifest file.
@Garcon 2012-11-15 12:22:55
@seb thanks. I was using this code but nested fragments are now possible so no longer a problem. :)
@Jeroen 2012-11-15 23:10:12
@Seb, I am using that hack, but this will get worse when the activity has lingered around for too long. I guess keeping the tabstacks and tabinfo's is the way to go.
@Jeroen 2012-11-19 13:08:52
@seb: If I would do it myself, then would it be wise to provide get'ers and set'ers of the tabstacks and tabinfo's at the tabsadapter and let the mainactivity save state in onPause and re-initialize the adapter at onResume?
@Jeroen 2012-11-20 09:10:54
@seb I am runnning into problems when I want to get your structure jsonified ;-), can you provide an answer to stackoverflow.com/questions/13460200/… for me?
@seb 2012-11-22 00:04:29
@Jeroen seems already solved. Feel free to ask if you have other probs.
@Chris Knight 2012-10-11 20:54:25
Here's my relatively simple solution to this problem. The keys to this solution are to use
FragmentStatePagerAdapterinstead of
FragmentPagerAdapteras the former will remove unused fragments for you while the later still retains their instances. The second is the use of
POSITION_NONEin getItem(). I've used a simple List to keep track of my fragments. My requirement was to replace the entire list of fragments at once with a new list, but the below could be easily modified to replace individual fragments:
@Blackhex 2012-08-15 18:17:14
Some of the presented solutions helped me a lot to partially solve the problem but there is still one important thing missing in the solutions which has produced unexpected exceptions and black page content instead of fragment content in some cases.
The thing is that FragmentPagerAdapter class is using item ID to store cached fragments to FragmentManager. For this reason, you need to override also the getItemId(int position) method so that it returns e. g. position for top-level pages and 100 + position for details pages. Otherwise the previously created top-level fragment would be returned from the cache instead of detail-level fragment.
Furthermore, I'm sharing here a complete example how to implement tabs-like activity with Fragment pages using ViewPager and tab buttons using RadioGroup that allows replacement of top-level pages with detailed pages and also supports back button. This implementation supports only one level of back stacking (item list - item details) but multi-level back stacking implementation is straightforward. This example works pretty well in normal cases except of it is throwing a NullPointerException in case when you switch to e. g. second page, change the fragment of the first page (while not visible) and return back to the first page. I'll post a solution to this issue once I'll figure it out:
@Blackhex 2012-08-18 16:46:16
Hmm, there is another problem whith this code. When I want to start an activity from fragment which was removed and added aggain, I get an error: "Failure saving state: active NewFirstFragment{405478a0} has cleared index: -1" (full stack trace: nopaste.info/547a23dfea.html). Looking to the android sources, I figure out that it has something to do with backstack but I don't know how to fix it so far. Does anyone have a clue?
@Blackhex 2012-08-18 17:02:11
Cool, using <code>mFragmentManager.beginTransaction().detach(old_fragment).attach(fragment).commit();</code> instead of <pre>mFragmentManager.beginTransaction().setTransition(FragmentTransaction.TRANSIT_FRAGMENT_OPEN).remove(old_fragment).add(mContainer.getId(), fragment).commit();</pre> in the example above seems to fix the both issues :-).
@Norman 2012-08-22 19:26:15
If you use this method, make sure you aren't letting Android recreate your Activity when rotating the device. Otherwise, the PagerAdapter will be re-created and you will lose your mBackFragments. This will also seriously confuse the FragmentManager. Using the BackStack avoids this problem, at the expense of a more complicated implementation of PagerAdapter (i.e. you can't inherit from FragmentPagerAdapter.)
@connoisseur 2011-12-01 06:53:57
Works Great with AndroidTeam's solution, however I found that I needed the ability to go back much like
FrgmentTransaction.addToBackStack(null)But merely adding this will only cause the Fragment to be replaced without notifying the ViewPager. Combining the provided solution with this minor enhancement will allow you to return to the previous state by merely overriding the activity's
onBackPressed()method. The biggest drawback is that it will only go back one at a time which may result in multiple back clicks
Hope this helps someone.
Also as far as
getFragmentPosition()goes it's pretty much
getItem()in reverse. You know which fragments go where, just make sure you return the correct position it will be in. Here's an example: | https://tutel.me/c/programming/questions/7723964/replace+fragment+inside+a+viewpager | CC-MAIN-2020-34 | refinedweb | 5,751 | 62.48 |
In this section, you will learn how to convert url to file.
Description of code:
We have already told you the conversion of file to url in one of the previous sections. This is just reverse of that. You can see in the given example, we have created an object of URL class and specified an url string in the constructor of URL class. Now to convert this url to file, we have used the method getFile() of URL class. This method returns the string which is then passed into the constructor of File class. Using the method getPath() we have displayed the file on the console.
URL: This class represents a Uniform Resource Locator, a pointer to a resource on the World Wide Web.
getFile(): This method of URL class returns the file name of the
URL.
getPath(): This method of File class converts the abstract pathname into a pathname string.
Here is the code:
import java.io.*; import java.net.*; public class FileFromURL { public static void main(String[] args) throws Exception { String urlstring = ""; URL u = new URL(urlstring); String st = u.getFile(); File f = new File(st); System.out.println(f.getPath()); } }
Through the above you can convert any url to file.
Output: | http://www.roseindia.net/tutorial/java/core/files/filefromurl.html | CC-MAIN-2013-48 | refinedweb | 205 | 74.29 |
Return to Panda Features in Development
by enn0x » Tue Oct 14, 2008 3:50 pm
by enn0x » Sat Oct 25, 2008 1:41 am
by BMCha » Sun Nov 02, 2008 9:26 pm
by enn0x » Mon Nov 03, 2008 3:22 am
by rdb » Mon Nov 03, 2008 7:16 am
by enn0x » Mon Nov 03, 2008 6:55 pm
by BMCha » Tue Nov 04, 2008 12:19 am
DirectStart: Starting the game. Warning: DirectNotify: category 'Interval' already exists Known pipe types: glxGraphicsPipe (all display modules loaded.):util(warning): Adjusting global clock's real time by 1.8599e-005 seconds.
by enn0x » Tue Nov 04, 2008 2:55 pm
by BMCha » Tue Nov 04, 2008 9:38 pm
by enn0x » Fri Nov 07, 2008 3:29 pm
from pandac.PandaModules import loadPrcFileDataloadPrcFileData( '', 'notify-level-physx spam' )#import direct.directbase.DirectStartprint '--pre--'from libpandaphysx import PhysEngineprint '--post--'
by BMCha » Fri Nov 07, 2008 7:50 pm
--pre--:physx(spam): init module:physx(spam): @enter: _sdk=00000000
by enn0x » Fri Nov 07, 2008 9:43 pm
by ynjh_jo » Fri Nov 07, 2008 10:26 pm
by BMCha » Sat Nov 08, 2008 12:12 am
by ynjh_jo » Sat Nov 08, 2008 12:23 am
by BMCha » Sat Nov 08, 2008 12:28 am
by enn0x » Sat Nov 08, 2008 8:09 am
Success! Disabling Geforce PhysX worked. Thanks for all the help.
Now I remember, I didn't feel like typing all the generic console messages, so I found someone's console log and copied in the first few lines.
by dorosp » Wed Nov 19, 2008 6:34 am
by enn0x » Wed Nov 19, 2008 3:43 pm
by enn0x » Mon Nov 24, 2008 1:54 pm
by Executor » Tue Nov 25, 2008 6:45 am
Assertion failed: meshPtr at line 50 of c:\users\rpf\desktop\physx\source\physMeshPool.cxxAssertion failed: meshData at line 30 of c:\users\rpf\desktop\physx\source\physConvexShapeDesc.cxx
by enn0x » Tue Nov 25, 2008 11:57 am
by Executor » Wed Nov 26, 2008 8:04 am
by enn0x » Wed Nov 26, 2008 1:47 pm
1. It's using cookTriangleMesh.
But when loading the model in the convex sample (04_Convex.py), I replace the tetra model with my custom model, I get these assertion failures: ...
by Executor » Fri Nov 28, 2008 10:45 am
by enn0x » Sat Nov 29, 2008 9:01 am
by Executor » Sun Nov 30, 2008 4:42 pm
self.m_Model=loader.loadModel(modelname) self.m_Model.reparentTo(render) self.m_Model.setPos(0,0,10) self.m_Model.setHpr(0,0,0) tmpshape = PhysCapsuleShapeDesc() tmpshape.setHeight( 1.8 ) tmpshape.setRadius( 0.35 ) tmpbody = PhysBodyDesc( ) tmpbody.setMass( weight ) tmpactor = PhysActorDesc( ) tmpactor.setBody( tmpbody ) tmpactor.addShape( tmpshape ) tmpactor.setGlobalPos( self.m_Model.getPos(render) ) self.m_PhXActor = world.m_PhXScene.createActor( tmpactor ) self.m_PhXActor.attachNodePath( self.m_Model )
by enn0x » Sun Nov 30, 2008 7:06 pm
...the model is obviously not correctly oriented with the PhysShape or whatever.
by Executor » Tue Dec 02, 2008 11:27 am
by enn0x » Tue Dec 02, 2008 2:11 pm
Anyway, is there a way to visualize a PhysRay when rendering the PhysScene's debug node?
By the way, PhysX works 10x better and easier than ODE, in my opinion.
Users browsing this forum: No registered users and 0 guests | http://www.panda3d.org/forums/viewtopic.php?p=29717 | CC-MAIN-2014-35 | refinedweb | 546 | 54.63 |
I’m looking for a PaaS provider that isn’t going to cost me very much (or anything at all) and supports Flask and PostGIS. Based on J5’s recommendation in my blog the other day, I created an OpenShift account.
A free account OpenShift gives you three small gears1 which are individual containers you can run an app on. You can either run an app on a single gear or have it scale to multiple gears with load balancing. You then install components you need, which OpenShift refers to by the pleasingly retro name of cartridges. So for instance, Python 2.7 is one cartridge and PostgreSQL is another. You can either install all cartridges on one gear or on separate gears based on your resource needs2.
You choose your base platform cartridge (i.e. Python-2.6) and you optionally give it a git URL to do an initial checkout from (which means you can deploy an app that is already arranged for OpenShift very fast). The base cartridge sets up all the hooks for setting up after a git push (you get a git remote that you can push to to redeploy your app). The two things you need are a root setup.py containing your pip requirements, and a wsgi/application file which is a Python blob containing an WSGI object named application. For Python it uses virtualenv and all that awesome stuff. I assume for node.js you’d provide a package.json and it would use npm, similarly RubyGems for Ruby etc.
There’s a nifty command line tool written in Ruby (what happened to Python-only Redhat?) that lets you do all the sort of cloud managementy stuff, including reloading cartridges and gears, tailing app logs and SSHing into the gear. I think an equivalent of dbshell would be really useful based on your DB cartridge, but it’s not a big deal.
There are these deploy hooks you can add to your git repo to do things like create your databases. I haven’t used them yet, but again it would make deploying your app very fast.
There are also quickstart scripts for deploying things like WordPress, Rails and a Jenkins server onto a new gear. Speaking of Jenkins there’s also a Jenkins client cartridge which I think warrants experimentation.
So what’s a bit crap? Why isn’t my app running on OpenShift yet? Basically because the available cartridges are a little antique. The supported Python is Python 2.6, which I could port my app too; or there are community-supported 2.7 and 3.3 cartridges, so that’s fine for me (TBH, I thought my app would run on 2.6) but maybe annoying for others. There is no Celery cartridge, which is what I would have expected, ideally so you can farm tasks out to other gears, and although you apparently can use it, there’s very little documentation I could find on how to get it running.
Really though the big kick in the pants is there is no cartridge for Postgres 9.2/PostGIS 2.0. There is a community cartridge you can use on your own instance of OpenShift Origin, but that defeats the purpose. So either I’m waiting for new Postgres to be made available on OpenShift or backporting my code to Postgres 8.4.
Anyway, I’m going to keep an eye on it, so stay tuned.
6 thoughts on “First thoughts on RedHat OpenShift”
The idea of dbshell seems like a good one, and this would be a nice report ( even if there is already lots of features asked )
For celery, it requires a broker and for now, there is no cartridge broker ( there was people working on redis cartridge however ). The format of cartridge have been updated some weeks ago ( now, 2.0 ), and there is plan to let people integrate community cartridge on the online version, so not all hope is lost for you.
The issue of communication between cartridge is also a little bit complex because all gears are isolated containers, and by definition, they cannot communicate with each others ( namespace, selinux, firewall ). So poking a few holes in the protection need to think about a system that do stuff without ruining security.
Regarding a new version of postgresql and postgis, that’s the same. Online is based on RHEL 6 and so use RHEL6 package for support, and until there is a paying tier for hosting, I do not think the team want to increase the list of supported frameworks ( at least, not without a way to have it sustainable ). But again, there is plan for community supported cartridges.
Thanks, I think I was more positive. Maybe because of DIY where you’re able to deploy a website through netcat
But yet I could not clarify what OpenStack is and what role it would play compared to OpenShift.
Anyway: OpenShift is open source, so maybe others will also set up a service
Nice post. I agree about the Postgresql/postgis version and believe I am working on getting us to the newer version.
In terms of OpenStack versus OpenShift – OpenStack is infrastructure as a service while OpenShift is platform as a service. OpenShift could (and does) run on top of OpenStack.
Kai: OpenStack is IaaS (infrastructure as a service); OpenShift is PaaS (platform as a service). Different levels of the stack. In fact, the aim is for us to be able to run OpenShift on OpenStack (although at present, the free public instance of OpenShift is running on EC2).
You can indeed set up your own instance of OpenShift, if you like:
RE: the dbshell stuff. I worked on the command line tools quite a bit and one of the changes that I was in charge of was making it super simple to drop in commands with full documentation fairly easily.
Sorry about the Ruby, you know I’m a huge Python fan but I have to say, Ruby made it super simple to add declarative hooks and hide some of the complexity even if there are some ugly duck typing hacks in the backend. I still haven’t found a python equivalent to the commander module.
There was a request at some point to make it easy to drop in user commands in a local directory instead of requiring them to be in the source tree. I’m not sure it ever got implemented but it would be nice for users who want to integrate their own commands and then submit them upstream.
I think (optionally) distributing celery between gears would be very powerful. Particularly where your background tasks are significantly more computationally intensive than your WSGI.
I understand not wanting to increase your maintenance surface by having to support more stuff, but there’s a certain irony to PaaS requiring you to target a legacy platform :-p | https://blogs.gnome.org/danni/2013/05/05/first-thoughts-on-redhat-openshift/ | CC-MAIN-2017-51 | refinedweb | 1,154 | 70.43 |
Face Recognition using 2D Fast Fourier Transform
This article explains how to implement a simple face recognition system based on analysis through their.
Windows Phone 8 colour. recognise a face from a brief look without focusing on small details.
The recognition of faces is done by finding the closet.
center of the image. The further away from the center
- Download DSP.cs and add it into your project. DSP.cs provides a namespace called DSP and a class FourierTransform containing a set of functions
double mse = DSP.Utilities.MSE(ref compareSignal, ref match, (int) matchSamples);
// normalize
mse /= 1000000000;
thresholdText.Text = ""+mse
//
}
}
Results
Assuming the threshold is 50
Interesting to note how the algorithm is enough tollerant | http://developer.nokia.com/community/wiki/index.php?title=Face_Recognition_using_2D_Fast_Fourier_Transform&oldid=178380 | CC-MAIN-2014-10 | refinedweb | 116 | 60.31 |
I'm doing an inventory system for a game. The items are a struct with some info, and the player inventory is a list with each item (each struct). I want to be able to access the inventory in a Shop, so the player can sell their items.
So I decided to create a script to manage the player (which has the item struct), but for the Shop I decided to create another script (mostly to manage the UI). I want to know what is the best way to reference the player inventory in the Shop script, so it will directly change the player inventory if the player sells an item.
The Shop script is this:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class ManageShop: MonoBehaviour
{
public GameObject player; // player reference
public List<ManagePlayer.Item> items = new List<ManagePlayer.Item>(); // item list reference
void Start()
{
player = GameObject.Find("ManagePlayer"); /* game object has the same name as the script (ManagePlayer). It exists in another scene, so I can't simply add it via inspector */
// gets the item list from the player
items = player.GetComponent<ManagePlayer>().items;
}
}
And this the ManagePlayer script (I decided to put this part only because the full ManagePlayer script is really long)
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEngine.UI;
public class ManagePlayer : MonoBehaviour
{
public struct Item
{
public string name;
public Image icon;
public float status1, status2;
public bool isEquip;
}
public List<Item> items = new List<Item>();
}
Please assume I already added some items to the player. Is this the best way to "import" the player inventory into this script? Or does the AddRange method works better?
Add.
Object Reference not set somewhere in the List, but can't figure it out.
0
Answers
storing multiple data types in a list
0
Answers
changing reference types to value types
1
Answer
error with list & struct
1
Answer
List of Structs variable values not changing
1
Answer | https://answers.unity.com/questions/1622972/whats-the-best-way-to-reference-a-list-of-structs.html | CC-MAIN-2020-29 | refinedweb | 329 | 56.66 |
AutocompleteField let's you add word completion to your UITextFields.
Autocomplete, or word completion, is a feature in which an application predicts the rest of a word a user is typing.
Import
AutocompleteField.swift into your project.
platform :ios, '8.0' pod "AutocompleteField", "~> 1.1"
The easiest way is to add a
UITextField in your Storyboard, and then giving it the
AutocompleteField subclass. You can use the property editor to change both the padding and the completion color of the textfield.
If you want to add a field using code, there's a custom init method you can use:
import AutocompleteField let textField = AutocompleteField(frame: CGRectMake(10, 10, 200, 40), suggestions: ["Abraham", "George", "Franklin"]) view.addSubview(textField)
AutocompleteField is a subclass of UITextField, so you can modify it in the same way you normally would, without any restrictions. The new properties you can set are:
AutocompleteField is provided under the MIT License. See LICENSE for details. | https://recordnotfound.com/AutocompleteField-filipstefansson-38 | CC-MAIN-2018-47 | refinedweb | 156 | 57.87 |
Even before I watched Node.js being revealed to the greater world at jsconf.eu 2009, I had been keen to solve the problem of universal JavaScript. That is to say: a single code base that runs both on the server and on the client. I remember tinkering with Rhino for server-side JavaScript and Aptana's Jaxer, but I didn't get far. Node.js allowed me to flirt with the idea, but I quickly realised that a single code base wasn't quite achievable without a lot of help.
In a recent client project, I decided to try out React for the first time, and quickly also decided to use it for my server side rendering. I'm writing up my experience in two parts: this post covers the high level learnings, and the second post will cover the technical details of the implementation.
I wanted to take a moment to point out that this post really isn't written for developers already quite comfortable with React or any other flavour of high level JavaScript project. It's from a developer who prefers to be very close to the metal (aka Vanilla JS), and I hope it will help others in a similar position to mine: on the fence, waiting for it all to fall apart!
TL;DR
Approach:
- I can see how progressive enhancement would be skipped, and perhaps graceful degradation would be used to achieve a universal application
- There's no single guiding approach on how to support the server side
- IMHO React has the "jQuery of today" badge (but the why doesn't fit in a TL;DR, so read on!)
After around 7 days of learning and development, the final technology stack comprised:
- Node 6
- Express.js
- Babel
- React
- React Router
- Redux
- A small handful of polyfills for the production build
- Webpack
For development, I also used a custom route that served the main client bundle, compiled on the fly via webpack, which made things a little faster.
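As a sketch of what that can look like (the file names and config paths here are my assumptions, not the project's actual setup), webpack-dev-middleware serves the bundle straight out of memory:

```js
// Development-only: serve the client bundle compiled on the fly.
// File names and config paths are illustrative assumptions.
const express = require('express');
const webpack = require('webpack');
const webpackDevMiddleware = require('webpack-dev-middleware');
const config = require('./webpack.config');

const app = express();

if (process.env.NODE_ENV !== 'production') {
  const compiler = webpack(config);
  // The bundle is kept in memory and recompiled on change, so there's
  // no separate build step between edits while developing.
  app.use(webpackDevMiddleware(compiler, {
    publicPath: config.output.publicPath,
  }));
}
```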
Every page had a URL, and every URL could be requested using cURL to view the complete content. That is to say: as a benchmark, the entire application worked with JavaScript disabled, and time to first complete render (on a cold cache) was 400ms.
Bottom line: React with React Router is absolutely a viable stack for server side. The one thing I'd advise is to find a solid development pattern and use it for your own approach.
Understanding the requirements
Here's how I expected the technology stack to work (from a server side perspective) when a new request is received:
- HTTP request handled by Express
- React Router is somehow utilised by Express' routing system
- The correct React views are rendered into a string (and ideally cached!)
- Express responds with the HTML string
- The client bundle see no rendering is required, but normal bindings work
This should work on all URLs that are accessible on the client.
I spent a long time (relative to the time spent on this project) trying to understand how to connect the client side routing mechanism to Express running in node. I did use react-engine successful for a while, but had to eject the code towards the end of the project because it restricted how I could actually use React Router.
The final solution is to use a catch all route that hands off to React Router which correctly generates all the markup (as per the sequence described earlier).
The thing that feels a little weird here is that I'm using Express' routing system for the first layer of requests, then I defer to React's router, which, does work, and doesn't create any actual code duplication, but feels…strange.
What I'm left with is "universal JavaScript" that uses React that handles everything the client would see (or specifically render).
There is still some server side specific code, and that code is concerned with communicating with databases and responding to non-GET requests, which makes me think of my URL design in a much more RESTful and API-ish way.
I don't need to cover the technical how, because it turns out that Jack Franklin covered exactly how to approach server side first in his 2015 post on 24ways, and the post is very much what I ended up with (I just wish I'd seen it ahead of time!).
State management
State was the big attraction to me when considering React. Strictly speaking, React doesn't do state management at all. But what it does naturally encourage is a functional style of programming. Simply put: your functions don't have side effects (like changing or using any variables outside of the functions direct scope).
This functional style meant that when it was time for me to turn to Redux (a React friendly implementation of Flux—which, honestly, I not actually researched at all!), my coding style was ready.
I'm not going to go into what Redux is, but there's some excellent free video tutorials by its creator Dan Abramov. What it does for me, is to start thinking of my software as being a state machine - which makes testing and replicating state very easy. This is good
Progressive Enhancement
Early on, I had the requirement in my head that I wanted to understand if it was possible to use React to progressively enhance my application. The intention being that I would be able to serve a static version of the site, and enhance to the rich client side experience using React.
I also wanted to see if and how this was possible re-using as much code as possible, thus "universal JavaScript".
In my opinion, an individual or a team starting out, or without the will, aren't going to use progressive enhancement in their approach to applying server side rendering to a React app. I don't believe this is by choice, I think it's simply because React lends itself so strongly to client-side, that I can see how it's easy to oversee how you could start on the server and progressive enhance upwards to a rich client side experience.
This isn't to say another project (like Vue, Ember, etc) don't, I'm focused entirely on React in this post.
That said, React has a very solid backend system to support server side implementations* using Node, it just took me a while to get my head around it (especially as I was new to using React). The upshot, is that in retrospect, and now that I've had time to review my code after it's all gone live, I'm able to see how I would approach the development tasks in a different order so that I could achieve progressive enhancement.
It's also worth saying: that even if I didn't use PE as a development approach, but I'm able to provide that complete experience (say, to a user that didn't manage to download or execute the JavaScript), then the end goal is the same. However, taking the PE approach ensures that as a developer, I don't miss some important aspect of the application's logic (perhaps like a deep URL can be crawled with any browser…like cURL).
Bottom line: doable. Whether people are doing it is another matter. One thing is clear to me though: it's very much a secondary concern.
Is React the jQuery of it's time?
I read a post very recently that argued that Vue is the jQuery of it's time. The post mostly pitted Vue against React. I definitely don't agree. For context, jQuery arrived in 2006 and opened the doors to many developers and non-developers, to manipulating the DOM using JavaScript. Suddenly interactivity was much easier and jQuery was the tool of choice.
jQuery was not the only tool however, neither was it particularly the "best" tool (Dojo for instance had been around for years prior breaking new ground long before).
What made jQuery a success was two important items:
- Time to up-and-running, in that you could know nothing about JavaScript, but with a touch of CSS (selectors) and copying a few symbols (like
$) you could make something move on the page.
- Documentation, both in official form, but more importantly, community contributed.
The second item is the most important. And by documentation, I mean actual docs, API, blog posts, tutorials, videos, conference talks, the lot. It was the spread of knowledge and the ease in which it happened is what allowed jQuery to end up in front of so many developers. Obviously the team behind making jQuery work in every browser, and the mobile support, and continued efforts, and everything has a huge importance, but that spread of knowledge…it's that, that got jQuery to critical mass.
For me, as much as Vue looks interesting and certainly easier to get going with, there's a distinct lack of examples across the web, and yes, Vue had just landed 2.0 and which would give me server support - which was a little too new for me, but this is comparison to React. That doesn't mean Vue couldn't grow to be a great (and I remember the old Prototype vs. jQuery discussions of yonder year!).
Note: paragraph above was [updated to correct]() Vue's version release.
The more examples and samples and code out on the web, the more edge cases are covered, and the more it helps with the "yeah, but what about when I…" questions. React's community posts could definitely answer a lot of the questions I had. I'm not totally sure we've seen the next Rule Of the Libraries yet though!
Development approach
Again, for context, I do a lot of development directly in devtools using workspaces and related debugging tools. Due to the work that React does to create the DOM, I was not able to use my normal continuous development and debugging techniques, but there are some useful React and Redux tools available.
Equally, I was using JSX for the view part of my code, so I had to use sourcemaps (to make any sense during debugging), but I'm not a fan of sourcemaps which I had to since it specifically prevents me using Devtool Workspaces fully.
I also found Webpack very confusing, there's some much to configure and finding the right setup was tricky (I went through about four iterations). I did find the right setup (for me) in the end, and I'll probably copy and paste it again, but this isn't something I want to have to learn.
Finally, I also settled on a development only dynamic webpack route, that would conditionally load, and server the client bundle based on a development configuration. The development configuration would allow me to use hotloading and few other niceties that I wouldn't need in production. Here's my webpack configurations and the development middleware.
The hot module reloading for React did work for a lot of the view code, but not for everything (in the client) and did require a refresh of the browser fairly frequently. It certainly helped, but again, wasn't as smooth as my typical development workflow.
Bottom line: I would use this approach again. Having a large base of reusable code is very appealing. I'm not sure I have the right development approach (for me), but as with anything, that would come with time. I'm also interested in trying this approach with other JavaScript libraries out there (if possible), including Vue, Ember and Polymer (my brief experiences of Angular so far have been enough for me).
Miscellaneous notes and resources
The post finished a moment ago, but I wanted to include some of the resources I had found in my research and some of the code and structure I ended up with, so here it is, in all it's unabashed glory.
Libraries
- React engine (by PayPal) (this is what I started with, but eventually bailed)
- Express React views (a little closer to the metal)
- Simple universal (though still in development)
Resources
- Router tutorials
- Simplified routing structures
- Reducing the size of the final bundle.js
- Excellent example of specification breakdown into components and containers
Concepts: Containers and components
-
-
-
Quick prototyping
This is super quick, but slow to load, since it's pulling around 2mb over the wire.
index.html
<!DOCTYPE html> <script src=""></script> <script> System.config({ transpiler: 'babel', babelOptions: {} }); System.import('./app.js'); </script> <body></body><!--need a body for live-reload -->
app.js
import React from 'react' import { render } from 'react-dom' render(<h1>Hello world</h1>), document.body);
Final code structure tree
This is the structure my project ended up with. I'm sure there's different patterns, and in future I may structure a few things differently, but it served it's job and wasn't too confusing to navigate.
├── actions # redux actions ├── components # react views ├── containers # react components (logic for views) ├── lib # main server code │ ├── config │ ├── db # database models │ ├── dev.js # only used for dev │ ├── email │ ├── index.js # server entry point │ ├── routes # server specific routes │ ├── serve.js # server side rendering and routing of React app │ └── views # server specific views ├── package.json ├── public # static assets ├── reducers # redux reducers ├── src # client side specific boot code ├── test └── webpack.config.js # npm run build config | https://remysharp.com/2016/12/07/server-side-react | CC-MAIN-2021-39 | refinedweb | 2,275 | 57.71 |
In this article, we will be learning about the OpenCV package, and its cv2 resize function. We will be looking at a few code examples as well to get a better understanding of how to use this function in practical scenarios.
Open CV – cv2 resize()
This image processing computer library was built by intel to address real-time vision issues in computers. The cv2 resize() function is specifically used to resize images using different interpolation techniques. Let’s understand how.
Simple Resizing
import cv2 image = cv2.imread("img.png", 1) bigger = cv2.resize(image, (2400, 1200)) cv2.imshow("Altered Image", bigger) cv2.waitKey(0) cv2.destroyAllWindows()
The above code demonstrates a simple resizing technique. Here we did not use any interpolation technique or scaling factors but we got the desired output.
We can add scaling factors to our syntax as well. Scaling factors scales the image along their axes, without adding much difference to the final output. The syntax with scaling factors is written something like:
scaled = cv2.resize(image, (1200, 600), fx = 0.1, fy = 0.1 ,interpolation = cv2.INTER_CUBIC)
Resizing by changing the aspect ratio
Changing the aspect ratio of the image can give us a shrank or an enlarged image. In this example, we will be looking at how that can be done.
We will be looking at 3 blocks of code, which involves – package import & image loading, the logic used behind scaling the loaded image, and lastly, resizing the image using interpolation.
Import and Image Reading
import cv2 pic = cv2.imread('img.png', cv2.IMREAD_UNCHANGED)
The code above imports the OpenCV library for Python then loads the image in the variable ‘pic’. You might have noticed, we used ‘cv2.IMREAD_UNCHANGED’, its basic function is to load the image using its alpha channel, which means the original resolution of the pic gets preserved.
Algorithm for changing the aspect ratio
pd_change = 60 # The percent change from the main aspect ratio new_resolution = pd_change/100 pic_width = int(pic.shape[1] * new_resolution) pic_height = int(pic.shape[0] * new_resolution) new_dimension = (pic_width, pic_height)
Let’s understand the above code line by line:
- ‘pd_change’ variable saves the required percent chage of the original aspect ratio.
- ‘new_resolution’ variable converts that percentage into decimal and stores it.
- Variables ‘pic_width’ and ‘pic_height’ saves new height and width using that decimal value. Syntax ‘pic.shape’ is used to fetch the aspect ratio of the original picture. The [0] indicates to height and [1] indicates width. (The [2] is used for channels which is outside the scope of the learnings involved in this article)
- Variable ‘new_dimension’ is used to store the new resolution.
Resizing Image
altered_size = cv2.resize(pic, new_dimension, interpolation=cv2.INTER_AREA) cv2.imshow("Altered Image", altered_size)
Variable ‘altered_size’ resizes the image using cv2.resize() function, the interpolation method used here is ‘cv2.INTER_AREA’, which is basically used to shrink images. So, at last, we got our image scaled perfectly to the percent size we wanted.
Let’s look at the complete code to get the whole picture.
import cv2 pic = cv2.imread('img.png', cv2.IMREAD_UNCHANGED) print('Resolution of the original pic : ', pic.shape) percent_dim_change = 60 # The percent change from the main size pic_width = int(pic.shape[1] * percent_dim_change / 100) pic_height = int(pic.shape[0] * percent_dim_change / 100) dim = (pic_width, pic_height) # resizing image altered_size = cv2.resize(pic, dim, interpolation=cv2.INTER_AREA) print('Changed Picture Dimensions : ', altered_size.shape) cv2.imshow("Altered Image", altered_size) cv2.waitKey(0) cv2.destroyAllWindows()
Resizing by using custom values
We can resize images with specific width and height values, irrespective of their original dimensions. Or, we can change a single parameter, i.e, height, while keeping the other parameter, i.e, width, preserved. Let’s look at the code to understand how it’s done.
import cv2 pic = cv2.imread('img.png', cv2.IMREAD_UNCHANGED) print('Resolution of the original pic : ', pic.shape) pic_width = pic.shape[1] # keeping intact the original width pic_height = 800 new_dimension = (pic_width, pic_height) # resize image altered_size = cv2.resize(pic, new_dimension, interpolation=cv2.INTER_CUBIC) print('Changed Picture Dimensions : ', altered_size.shape) cv2.imshow("Resized image", altered_size) cv2.waitKey(0) cv2.destroyAllWindows()
Here, we used a different interpolation method, ‘INTER_CUBIC’, which interpolates the 2X2 neighboring pixels. Changing the interpolation method does not make much of a difference and the final result is nearly the same in this example (As a practice exercise, You can try out this example in your machine by changing the interpolation methods, and observing the change in the final result). Nonetheless, In the final output, we get a new image with an altered height.
Note: In the above code, we changed the height of the picture, and also kept our width intact. In case, we want to change the width while keeping the height intact, we will be using:
pic_width = 480 pic_height = pic.shape[0] # keeping intact the original height new_dimension = (pic_width, pic_height)
Conclusion
I hope this article helped you in understanding the cv2.resize function and how images can be altered using it. We looked at multiple different methods through which our image can be manipulated. We also looked at how interpolation is employed to get the desired result for our images. It is hoped that this article will prove helpful towards your learning. | https://www.askpython.com/python-modules/opencv2-resize | CC-MAIN-2022-33 | refinedweb | 864 | 51.65 |
in Programming.
The First Encoding Error
In the beginning was the Flat Text File, and it was good.
Todo: * Eat breakfast. * Eat lunch. * Flunk all my students. * Sleep. * Repeat tomorrow.
And there was reading and there was writing, and it was the First File Format.
And lo, the Accountant did receive word of this First File Format, and he did come to the Programmers and declare, "Behold, I require the writing of many columns of knowledge, bearing witness of the movement in the eyes of the Accountant.
And thus was born the First Encoding Error, and the land of the Programmers did fall into darkness and disrepute, where they remain until this day. And there is much wailing and gnashing of teeth.
Understanding The Problem.
Encoding
We will start at the beginning, because as simple as the following will sound, the evidence strongly suggests that most people harbor fundamental unexamined misconceptions in this area.
In Computer Science theory, a "language" is a set of "character" sequences that are "valid" strings in that language. "Characters" are abstract entities, which can be anything.
In computer programming practice, the term "character" is overloaded to mean too many things... which turns out to be a major contributing factor to the confusion about encodings! In particular, the word can refer both to the English letter "c" and to the single C-language "char" that contains the computer representation in memory. So let's split the concept "character" into two words for the purposes of this essay: A byte is a concrete number in computer memory. A code point is an abstract things like the letter "c", or whatever other "character" you might come up with is. The term "code point" is borrowed from Unicode so you'll have a better chance of understanding Unicode after you read this, though I don't intend to talk about Unicode otherwise. (Otherwise, that's a dumb choice of words.)
Let's start simple and use "case-insensitive English words" as our example language. We would understand the code points in this language as the 26 letters "a" through "z". Computer science also talks about "validity", which is "what are legal strings in the language", but today I'm just talking about encoding, so we can ignore that.
Now, let us suppose we want to represent the word "cat" in a computer. What good is a computer that can't even store "cat"? Well, in fact, a computer can't store "cat", because a computer's memory can not store the code point "c". A computer's memory can not store such an abstract entity.
A computer can only store a only store a very particular set of things. At the most fundamental level it can only store one of two values which we typically call "0" and "1", which constitutes a bit. Let us step up one level to the aforementioned "byte", a collection of 8 such bits, which we typically name with the numbers 0-255.
This special set of code points I will refer to as the privileged code points. It is privileged because it is the only one that can exist in real memory; quite a privilege indeed! Everything else that we will discuss is a consensual hallucination shared by programmers, as embodied in their programs.
So, we want to be able to store an English-language "c" in our computer's memory, but the computer only understands its privileged code points, which does not have "c" in it. We need an encoding. An encoding is an invertible function that takes some list of code points from one language and maps it to a list of code points in another language.
One encoding that maps between "English characters" and the privileged encoding is the standard ASCII encoding. The ASCII encoding is best viewed as a function mapping letters (and other things) down to numbers the computer can understand, which can later be inverted to go back to the English character.
Using my extemporaneous notation:
- Apostrophe on a function name means "inverted".
- Double-quotes indicate an English character code point, as distinct from the code points a byte can carry.
- Square braces indicates a list, delimited by commas. A string "abc" is also a list of the relevant code points, i.e., ["a", "b", "c"].
we can say:
ASCII(["c"]) = [99]
and
ASCII'([99]) = ["c"]
This is a simple and common case, where the "list" of code points is readily conceptualized as a function that converts one code point to another, as in
ASCII_single("c") = 99 ASCII_single'(99) = "c"
Not all encodings are this straightforward, but for the purposes of this post, we'll stick to such encodings, and encodings where a single code point may expand to multiple code points in the target language, but there are no inter-code-point dependencies.
Here's one of the most common unexamined misconceptions: 99 is not the English character lowercase c. There are three entities in play here as shown in the equations above, and they are all distinct: 99, "c", and ASCII itself.
There are an infinite number of ways to encode English characters, in theory. There are a rather smaller number of ways to do it in practice, but still very many. ASCII is not the only one. For instance, using EBCDIC:
EBCDIC(["c"]) = [131] EBCDIC'([131]) = ["c"]
[99, 97, 116] is the ASCII encoding of "cat". [131, 129, 163] is the EBCDIC encoding of "cat". Which is really "cat"? Neither of them. "cat" can not be represented in computer memory, only members of the privileged code point set.
To drive home the idea that code points can be anything, consider the ASCII control characters. There are things you might see as "characters" in the traditional English language sense with some squinting, like HT (horizontal tab), but there are things that can only be thought of as actions like BEL (sound a bell/beep) or LF (line feed), and there's a NUL character which is really weird. Code points can truly be anything.
Aaaaaaand... that's it, actually. That's all there is to encoding. Oh, some particular encodings are a little more complicated and may be based on something more complicated than a lookup table, but this is still the only idea. But as is so often the case in programming, we tend to take simple things and layer and combine until they are no longer simple, so to truly understand what encoding means in practice, we must move on to non-trivial examples.
Applying Encoding in the Real World
Having carefully defined what an encoding is, we can return to our parable and now explain precisely what went wrong, without reference to vague phrases or unexamined misconceptions.
The "flat text file" is a sort of minimal file format, almost the anti-format. It has only one characteristic: what encoding the contents are in... which is unfortunately usually only implied, not stated. Guessing it is tricky and unreliable if you don't have some other way of knowing which encoding the file is using. But given the historical period the parable is set in, we can simply assume that the dominant encoding of the operating system is used, and that in this mythical Time of Yore, they didn't have to worry about having a choice.
The CSV file format has slightly more structure; I'm going to define a CSV that is almost the minimal definition of a file format that can live above plain text. My CSV defines two additional special code points that all CSV-formatted files can use.
One we can call the VALUE_SEPARATOR. The VALUE_SEPARATOR is not a "comma". It is a code point. It indicates the end of one value, and the beginning of the next one, and so it shares more in common with an abstract code point like HORIZONTAL_TAB than an English character. It must be encoded somehow in order for the file to be written to disk, since disk, like memory, can't write out code points, it can only write the privileged code points 0-255.
The other new code point is the ROW_TERMINATOR, indicating the end of a row of values. One last time and I'll stop driving this point home: ROW_TERMINATOR is not an ASCII NEWLINE. ASCII NEWLINE is the conventional encoding, but it is not the same thing.
Let's say that we want our CSV file format to be able to include any ASCII values in the column values, which is a very reasonable thing to want. (In fact, anything else is just asking for trouble later on; those wacky users will stick pretty much anything into any field when you least expect it, to say nothing of deliberate attackers.) Given this, how do we encode our CSV files into the privileged code point set for storage or transmission?
Wrong Solution #1
The CSV file format has two characters it needs to encode, VALUE_SEPARATOR and ROW_TERMINATOR. The traditional encodings are ASCII COMMA and ASCII NEWLINE. For concreteness, I will use the Unix standard "line feed" character, encoded into the privileged code points as 10.
The obvious solution
It can't get much simpler than that, can it?
Here's the problem, expressed clearly in the terminology I've now built: Both the ASCII comma as a part of the value, such as the Accountant tried to use in his number, and the VALUE_SEPARATOR in the CSV file format were mapped down to the same privileged code point, 44. Uh oh. That means this supposed "encoding" is not reversible; two distinct inputs lead to the same final output, so that implies that when the CSV reader encounters a 44, it is impossible for the CSV parser to know which is meant.
Please take note of the word "impossible". It is not being used as a rhetorical device. I mean it. It is truly impossible. The information about which code point it initially was is gone. It has been destroyed. It is not correct to say that the CSV parser is "misinterpreting" the "$3,165.43" as two values. That implies that the information is present, but the CSV parser is too dumb to figure it out. The information is in fact not present; even a human can not look at "$3,165.43" and be sure that what is intended is three thousand, and not three dollars followed by some other information. A human can make a good guess, but it is still a guess, and accounting is just one domain of many where making this kind of guess is inappropriate.
This is one of the things I find most frustrating about using any library created by others that involves any sort of encoding. When some layer of libraries screws up the encoding process, information is commonly destroyed. It is not possible for a higher layer to correctly recover from that situation, no matter how clever you are; you might be able to avoid crashing, but you've still just hidden data destruction, which is often something that ought to be brought to someone's attention, not silently hidden.
Wrong Solution #2
The root problem in Wrong Solution #1 was trying to jam 258 code points into 256 privileged code point slots; by the pigeonhole principle we know that can't work. So, the next most easy thing to do is to carve out 2 of the privileged code points and declare that they will encode our two special CSV code points, and they are forbidden from being contained in the values themselves.
If you happen to have a copy of Excel or the Open Office spreadsheet handy, it can be instructive at this point to pull open the dialog that imports a CSV file, and poke around with the bewildering array of options you have for delimiting fields. Many people have chosen many different ASCII values to carve out, and tried different solutions (quite silly) solutions to re-allowing those code points in values (like double-quotes around values).
This solution is less wrong in that it at least does not build ambiguity right into the specification; it's completely specified. The problem is even with CSV, which is pretty minimal as file encodings go (two extra code points over plain text), there are no two code points that you can say to your users in general "You may not use these two code points in your values."
First, obviously, commas are pretty useful, so you're not going to want to ban those. The import dialog will show you just how many other delimiters have been tried. Tab is the most common. Much more exotic than that, and you lose one of the major benefits of a CSV file, which is that you can profitably edit it in a plain text editor that knows nothing about CSV. Any character you can type to use as the value delimiter, you can also want to type to use in the value. The same concern holds with the ROW_DELIMITER, too; banning the newline to reserve it for ROW_DELIMITER is pretty harsh, even in a spreadsheet.
Second, you just never know what those wacky users are going to want to do, and if they need it, you may have to deal with it. While code points you try to reserve from ASCII, the binaries will contain them.
Even if you think this works in your particular case, it doesn't. And even if you still think it works after that oh-so-compelling "no it doesn't", you're still better off using a correct encoding anyhow so that you won't find out the hard way that no, it didn't work in your case after all. Because doing it right isn't that much harder anyhow!
Layered Encodings. (And remember, CSV is just about the simplest case possible; it's easy to imagine that you might want more than 256 code points encoding something without even considering text; imagine encoding colors or polygon vertices or any number of other things.) The only option left is to virtually increase the number of code points we have to play with.
There are a number of ways to deal with this. When dealing with text, the most popular is escaping. Many varients of CSV, along with HTML, XML, and most other text-based formats use some variant of escaping.
The simplest escaping technique is to choose an escape code point (where I'm sticking to my "code point" terminology; it would normally be called an "escape character"). This is used to virtually extend our code point set by saying "When you see this escape character, switch into another set of code points", or some variant of a statement like that. (The exact meaning varies from encoding to encoding.) The traditional escape code point in ASCII is the ASCII BACKSLASH, and I will stick with that.
In this case, we're going to use the escape code point to move some of the standard ASCII code points out of the way, so our new layered encoding can unambiguously use them. We will encode the ASCII values in our file into a new encoding, ASCII_in_CSV, that we define as "the things that can be output by the following procedure Value_to_ASCII_codepoints":
- For each ASCII character in the input, consult the following table:
- If the character is the ASCII comma, add "\," (BACKSLASH COMMA) to the new encoded output.
- If the character is the ASCII newline, add "\n" to the new encoded output.
- If the character is the ASCII backslash, add "\\" to the new encoded output.
- For any other character x, add x to the new encoded output.
It is perfectly permissible for an encoding to be defined this way, as the legal output of some procedure. It is also perfectly permissible for an encoding to borrow another encoding's code points.
Now we can layer our encodings, so that what we have in a CSV file is:
- A series of ASCII code points, encoded down into the next layer with Value_to_ASCII_codepoints (a function from a list of ASCII code points to a list of ASCII code points),
- embedded into CSV (with the CSV delimiters still as their code points)
- encoded into ASCII with CSV_to_ASCII, which is encoded down into the privileged codepoints.
- which defines the complete encoding from the top-level, most-symbolic CSV file into pure bytes.
Or, in terms of functions (on a simpler input), we have defined a CSV_to_ASCII function, which converts a CSV file as so (using simpler data):
Value_to_ASCII_codepoints(["b", ","]) = ["b", "\", ","] Value_to_ASCII_codepoints(["1"]) = ["1"]
which is encoded into the ASCII code points with CSV_to_ASCII:
CSV_to_ASCII([["b", "\", ","], VALUE_SEPARATOR, "1", ROW_TERMINATOR]) = ["b", "\", ",", ",", "1", NEWLINE]
which we then feed to the standard ASCII function to obtain the final encoding:
ASCII(["b", "\", ",", ",", "1", NEWLINE]) = [98, 92, 44, 44, 49, 10]
This may seem complicated, but it suffers none of the disadvantages of the previous two wrong answers. It represents all characters unambiguously and completely (at least for writing, solving the problem for reading is easy).
If this sounds complicated, bear in mind I'm really belaboring this explanation for didactic purposes. The real code isn't that much more complicated, which is why I feel I can label the other two solutions actually "wrong", not just "misguided". The code for fixing the problem is far smaller than the discussion above:
def CSV_field_to_ASCII_single(char): # escapes 1 ASCII code point as described above if char in [",", "\\"]: # note need for encoding in Python, too! return "\\" + char if char == "\n": return "\\n" return char def CSV_field_to_ASCII(value): # escape one entire value return ''.join(map(CSV_field_to_ASCII_single, value)) def writeLine(fields): # the same writeLine as above, only correct print ','.join([CSV_field_to_ASCII(field) for field in fields])
And if you enter that into a Python interpreter,
writeLine(["Figgy Fig Figs", "11/24/1824", "$3,165.43"])
will print out
Figgy Fig Figs,11/24/1824,$3\,165.43
Which is correct, complete, and unambiguous. And there is no longer any need for wailing and gnashing of teeth.
(In real code, I'm using a standard Python list to represent fields, so it doesn't exactly match my theoretical functions.)
Only in the rarest of circumstances would you be justified in taking the risks entailed by using one of the wrong solutions.
Delimited Encoding
The other basic way to nest encodings is to declare in advance how long the encoded field is. For an example, see the BitTorrent distribution file format, where a string appears in a .torrent file as an ASCII number indicating the length, followed by a colon, followed by the string itself. (Note the spec fails to specify an encoding for the string!) arrives.
If you fix the length of the field in the specification of the file format itself, then you have record-based encoding, which is a very popular thing to do when storing binary data. The traditional "dump of C data structures" saved file uses this.
Wheels Within Wheels Within Wheels...
The wonderful thing about programming is that once we do something once, we can do it again and again and again very easily.
Data "in the wild" routinely embeds multiple layers of encoding. For our final example, count the distinct encoding layers in the following snippet of HTML:. Below this layer lies the privileged code points only; this is what was sent over the TCP connection, which is as far down the encoding rabbit-hole as I'd like to go today..
- Next up, we have PCDATA, which carries the text "Script: " itself. This defines some escape sequences based on & and ;, like < for "less than", so that ASCII/UTF-8/whatever "less than" doesn't collide with the next layer, and for convenience for people writing HTML by hand like é (é). This is the layer that, as humans, we think of as the text once we learn to read HTML.
- Tag names and attribute names are in a separate encoding, related to PCDATA but more constrained. For example, PCDATA can carry an encoded ampersand, but tag and attribute names may not carry an ampersand, encoded or otherwise. In, but not the attribute value data itself.
- The attribute values, if properly quoted, contain another PCDATA layer. HTML has traditionally been somewhat looser about literal < and > values being encoded directly into ASCII(/UTF-8) not 100% true, just nearly so. In reality, the HTML). (To have some fun with websites online, especially Web 2.0 sites, if you see that they are putting your value in HTML JS (not XHTML JS), try slipping </script> tag into your input to see what happens..
How Many Layers?!? totally unscathed; that is to say, the byte string in straight UTF-8 that represents those words is the same byte string that represents those words through all 9 layers of encoding.
If I wanted to be even more pedantic, I could probably find another encoding layer or two in the Javascript grammar definition - remember, if you have a different set of allowed code points, by definition you have another encoding function. It may be defined very similarly to other encoding functions, but it's still not the exact same function.
Note the newline didn't fare so well; the Javascript string layer had to add a JAVASCRIPT_STRING_NEWLINE, a.k.a. \n, because NEWLINE is already a semi-statement-separator in Javascript and can't be directly used in strings.
I hope that by know you understand why managing encodings is so difficult. It's very easy to mis-match all of the PCDATA layers, or while programmatically generating deeply-nested text, to forget one of the encoding layers. (After all, it took me a couple of tries to get the example right myself. I have a disadvantage in that I'm actually working one PCDATA layer deeper than you are viewing it at, since I'm writing the HTML by hand.) If you're lucky, you'll get a syntax error in the Javascript. If you're unlucky, when the user gives you malformed input, your page gets misparsed and you end up with a mess. Or worse.
So What?
At least five times out of ten, someone who advocates correct escaping will hear some variant to that I'll add my "data destruction" argument from above.
But even that may not be enough to rattle your cage. So let me take you on a whirlwind tour of what can happen to you when you don't manage your encoding correctly.
XSS (Cross-site scripting) Vulnerabilities Injection
SQL Injections are encoding failures, the moral equivalent of:
sql.execute("SELECT * FROM table WHERE " + sql_clause)
The problem here is that you're mixing the SQL command encoding layer with the SQL string encoding layer. It's (usually) OK to allow user input in a correctly-encoded SQL string, but letting it directly into the SQL command encoding layer can, in the worst case, result in total data destruction when sql_clause is "1; DROP DATABASE your_database".
Other Command Injections
SQL injection and XSS are just special cases of data making its way up to a command encoding layer. There's plenty of others. For example, the Sun Telnet vulnerability is a shell injection vulnerability, where user input is passed in such a way that a program sees it as a command line argument and not data. The shell is particularly tricky to deal with, because it has relatively complicated escaping procedures (created by accretion by people who I believe didn't really understand what I've discussed in this essay), and the penalties for failing to encode things correctly is often arbitrary command execution through multiple techiques (stray semi-colons, stray ampersand, backticks, and that's not a complete list) or thoroughly unintended behavior like logging in root without a password check.
Unexpressable Character Strings Or Other Data Destruction
There is a commercial PDF library I've used at work, which is otherwise a quality piece of work so I'd rather not name it. It uses tags similar to HTML to define inline text styling controls, much like HTML. Unlike HTML, they did not seem to include a way to insert a < directly into the text, as HTML does with <.
Since we needed to put arbitrary text in our PDFs, sometimes including <, we had a problem. Eventually we we able to find a workaround: There was a command for changing which character was used for starting a tag. We created an escaping routine that changed every < into the command to change the start-of-tag character into something else, then a <, then change in back. This worked, but it would have been cleaner if the library shipped with an escaping for < like HTML.
I've used a number of libraries that destroyed data like this. I've been to a number of websites where if you post a comment with <, then click "edit", then click "submit" with no changes, the content of the comment would change because somewhere there was one too many encoding/decoding calls.
Data destruction isn't necessarily a security problem, although it can be if your data destruction happens to cause subsequent errors, but it's certainly something to be avoided.
</script in HTML is an example of this; you can't directly express </script in a script in HTML. (You can in XHTML, because the script is correctly pulled up one escaping level, which results in a complete encoding.)
Other Encoding-like Things
Interestingly, other things can be viewed as encoding-type issues as well.
I think localization is best viewed as encoding computer-type symbolic messages into human languages. I really approve of the approach taken by Perl's Locale::MakeText, which I think makes this approach easy. (See also this language-indepedent discussion of localization attached to that module. I think it's a mistake to view it as a translation of English (or the initial language used) into other languages. It's OK to use phrases in your native language as key for these symbolic phrases, but it encourages dangerous misapprehensions, similar to the problem of thinking that ASCII is real.
A code point in this scenario would be some symbolic message that carries data about the message in itself. For instance, (FILE_COULD_NOT_BE_OPENED,'C:/dir/yourfile.txt',FILE_NOT_FOUND) could be translated as "File not found: C:/dir/yourfile.txt".
Dealing with time zones can be seen as an encoding thing. Just as a byte string is not well defined in meaning without an encoding definition applied to it, a specific time is not well defined without a time zone attached to it. There is no such thing as "11:00 p.m.", only "11 p.m. UTC" or "11 p.m. EST". However, there is no equivalent to nesting time zones, and no equivalent to "decoding"; you just "encode" directly into the time zone you want. Despite the fact this is much easier than encoding, a lot of programmers get this wrong too and will pass times or dates back out of their libraries or frameworks that have no (correct) timezone, or don't correctly handle timezones, because they don't seem to realize that a time without a timezone isn't a time at all, just as a text string without an encoding isn't a text string at all. (It is somewhat true that at the moment a time enters the system, you can assume the local timezone, but no code after that can.)
Data with units can also be seen as an encoding; converting from inches to feet can be seen as an encoding conversion. The equivalent of an injection attack is "converting" a meter to an inch without applying the proper conversion factor; NASA of course famously did this, but they are far from the only ones, and you can get yourself into trouble even in pure metric if you "convert" meters to kilometers incorrectly.
We Do Have A Problem
This all represents a big problem.
On the one hand, failing to correctly manage encoding is either the number one source of security vulnerabilities, or number two trending towards number one. As we add more and more encoding layers into our systems, it becomes more vital that each layer work totally correctly, ideally totally transparently, and that every program handle each encoding layer totally correctly.
On the other hand, correctly managing encoding, all the time, at all layers, is extremely difficult. (Initially I had written "extraordinarily" difficult, but regrettably, this difficulty is all-to-ordinary.).
I don't have a simple solution to this. My best solution is at least to not sweep the problem under the rug, to acknowledge that this is a real problem and is not trivial, and when applicable default to a safe encoding level (i.e., if concatenating HTML together, encode for PCDATA by default, require the programmer to explicitly ask for the unsafe raw ASCII/UTF-8), but this is not always possible or easy.
But I can say this with confidence: Improper encoding and improper understanding of encoding is one of the largest problems facing the programming world today.
Addenda
PS: How can I claim encoding errors are the number one cause of problems "in many modern languages"? Buffer overflows are clearly the number one problem in the real world, historically, but we are conquering them by writing code in languages that effectively can't have buffer overflows. They should be trending down as we transition away from buffer-unsafe languages. It will take a while, though, and there will likely always be a "bottom layer" that can still be screwed up.
PPS: There's a ton of stuff to nitpick in this essay, like the fact that ASCII is 7-bit, not 8-bit (but note I stayed in 7-bit land), that I'm not quite using "code point" in exactly the way that Unicode does, that "bytes" aren't the only way to look at what a computer can store, that what we call "a byte storing 38" is itself an encoding of electrical signals and not "truly" 38, that in Unicode it's more proper to talk about mapping language strings to other language strings because of a variety of complexities having to do with how code points can interact (see the normalization standards), that the CSV I define doesn't match the RFC (though IMHO my CSV is better, or it would be with one additional specification about what to do with bad escape sequences), that encoding character-by-character has awful performance implications in Python (and isn't how I'd actually do it), and any number of other fiddly corrections. (Also, for the purposes of discussion, I'm defining my own concept of encoding, which may or may not match any other concept drawn from any of the discussed domains.) But before you go complaining to me about it, ask yourself if your criticism would make the point clearer, rather than muddying it up with caveats, exceptions, and clarifications that only make sense after you already understand the idea. This post is like the Newton's Laws of encoding to the Relativity of reality. (I know this is a problem, because I started with those caveats, and it just made the article worse.) | http://www.jerf.org/iri/post/2548 | crawl-002 | refinedweb | 5,226 | 58.62 |
owner-htdig@sdsu.edu
Thu, 12 Nov 1998 10:49:28 -0800 (PST)
>From andrew@contigo.com Thu Nov 12 10:49:26 1998
Received: from mcfs.whowhere.com (mcfs.whowhere.com [209.1.236.44])
by sdsu.edu (8.8.7/8.8.7) with SMTP id KAA05681
for <htdig@sdsu.edu>; Thu, 12 Nov 1998 10:49:26 -0800 (PST)
Received: from Unknown/Local ([?.?.?.?]) by my-dejanews.com; Thu Nov 12 10:48:22 1998
To: htdig@sdsu.edu
Date: Thu, 12 Nov 1998 10:48:22 -0700
From: "George Adams" <learningapache@my-dejanews.com>
Message-ID: <BGGEFABPIALOKAAA@my-dejanews.com>
Mime-Version: 1.0
X-Sent-Mail: on
X-Mailer: MailCity Service
Subject: Please help me compile ht://dig
X-Sender-Ip: 199.72.48.72
Organization: Deja News Mail ()
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
I hope someone can help me figure out where I'm going wrong in trying to compile HT://Dig. I'm afraid I'm not a C/C++ programmer, so my guesses as to what the problem is might be completely wrong. I'll try to leave a fairly complete audit trail here so more experienced folks might see what I missed. Thanks to anyone who can help!
In preparation for compiling HT://Dig on my Alpha running Digital Unix 4.0d, I installed the following:
gcc 2.8.1
libg++
libstdc++
in /data/apps/gnu
GNU make 3.77
in ~/bin
I then started compiling HT://Dig
./configure --prefix=/data/apps/htdig
make
It made it all the way to this point:
--------------------------------------------
c++ -o htfuzzy
-------------------------------
(c++ = the new g++ 2.8.1, BTW)
I didn't know quite what that meant, but I suspected the cause might be an outdated Berkeley DB library on my machine (?). I tried running "nm" on /usr/local/lib/libdb.a and indeed did not find db_open, db_appexit, or db_appinit.
So I installed Berkeley DB 2.5.9 in /data/apps/BerkeleyDB , compiling it with the --enable-cxx option. Once that was done, running an "nm" on /data/apps/BerkeleyDB/lib/libdb.a showed the three items "ld" couldn't find earlier, so I tried to compile HT://Dig again, this time adding -I/data/apps/BerkeleyDB/include and -L/data/apps/BerkeleyDB/lib to the c++ command.
The compile failed again, returning the same error message.
I thought maybe c++ was finding the old Berkeley DB library in /usr/local/lib and trying to use it instead of the new one. Since I didn't have permission to delete it, I tried manaully editing the two files that refer to <db.h> (htlib/DB2_db.h and htlib/BTree.cc) and changing the lines from
#include <db.h>
to
#include "/data/apps/BerkeleyDB/include/db.h"
then recompiling, except instead of using "-ldb", I manually inserted "/data/apps/BerkeleyDB/lib/libdb.a" on the command line.
The compile still failed:
-------------------------------
c++ -nostdinc -nostdinc++ -o htfuzzy -I/data/apps/BerkeleyDB/include /data/apps/BerkeleyDB/lib/libdb.a -L/data/apps/BerkeleyDB/lib
-------------------------------
At this point, I'm stumped. I did go back and try the c++ flags -nostdinc -nostdinc++ , but they didn't help. Can anyone spot something I've missed, and help me finish compiling the program? Thanks again to anyone who can help me with this, and if I've left out any critical pieces of information, please let me know.
-----== Sent via Deja News, The Discussion Network ==----- Easy access to 50,000+ discussion forums
This archive was generated by hypermail 2.0b3 on Sat Jan 02 1999 - 16:28:47 PST | http://www.htdig.org/mail/1998/11/0110.html | CC-MAIN-2016-22 | refinedweb | 603 | 60.11 |
Ripped from Slashdot: Is XML too hard?
Slashdot has an interesting discussion on the difficulties of working with XML, along with links to several articles by several smart people discussing its shortcomings. And while I think XML (or something like it) is a necessary thing, I also believe that working with XML is not nearly as easy as the people selling you on it would lead you to believe.
And the reason for this is that XML by itself isn't all that useful. It's all the OTHER crap that you have to learn IN ADDITION to XML that introduces the complexity. So you want to parse your XML document? Well, you've got to learn either DOM or SAX (or both). So you want it to be validated? Well, time to learn DTDs and Schemas. So you want to turn it into another XML document with a different structure? Well, time to learn XSLT. The list goes on and on. And many of the solutions for each of these shortcomings are non-intuitive, error prone, or not generally applicable enough.
All that said, I think the idea of having a language/OS/vendor/system-neutral data representation scheme is worthwhile. I guess the tools just need to catch up.
What are your opinions/experiences with XML?
Crimson
Tuesday, March 18, 2003
Finally, it's coming to light....
An XML document is nothing to create. Parsing it and doing something with it is the difficult part. I'm glad this is finally getting noticed.
TB Sheets
Tuesday, March 18, 2003
I've just started using Microsoft CRM, which uses an XML format for queries. What a pain in the A$$. In order to do the equivalent of "SELECT contactid, firstname FROM contacts WHERE LastName='GXXX'" in this 'FetchXML':
(hoping this msg board will display it correctly)
<fetch mapping='logical'>
  <entity name='contact'>
    <attribute name='contactid'/>
    <attribute name='FirstName'/>
    <filter type='and'>
      <condition attribute='LastName' operator='eq' value='GXXX'/>
    </filter>
  </entity>
</fetch>
GiorgioG
Tuesday, March 18, 2003
Yes and no.
XML itself is simple. DTDs are relatively trivial as well (unless there are some really powerful esoteric features I've not seen discussed).
What's hard is using SAX to parse a file. My particular application sits fine with SAX callbacks and a stack, but I can imagine that many would not. I have to side with Bray on that one: SAX is extremely counterintuitive, and DOM is often not an option.
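To show what "callbacks and a stack" actually means, the skeleton looks roughly like this (a from-memory Java sketch - the <price> element and class name are made up):

import java.util.Stack;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class PriceHandler extends DefaultHandler {
    private Stack path = new Stack(); // tracks where we are in the tree

    public void startElement(String uri, String local, String qName, Attributes attrs) {
        path.push(qName);
    }

    public void endElement(String uri, String local, String qName) {
        path.pop();
    }

    public void characters(char[] ch, int start, int len) {
        // the text means nothing by itself: you have to consult the stack,
        // and this callback may fire several times for a single text node
        if (!path.isEmpty() && "price".equals(path.peek()))
            System.out.print(new String(ch, start, len));
    }
}

All the interesting state lives outside the callbacks, which is exactly why non-trivial SAX code gets awkward fast.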
XML is great though. If only because it has established parsers (that is the sole reason my latest project used it).
Mike Swieton
Tuesday, March 18, 2003
Yup, too bad MS and other vendors blessed it as the second coming. Will it eventually go away?
Mike
Tuesday, March 18, 2003
I think XML at its heart is just a form of serialization. It is nice to have a standard one, though the world could probably use a standard binary one too. If it is harder than just serializing objects, then I think it's just a sign of immature tools. XML in .NET is pretty much transparent, at least for the common uses including SOAP, which is how it should be. You should never have to see < or >, or use an X-du-jour acronym, unless you are doing something very specific. Unless you are stuck in an XML-poor environment writing your own parser etc. Ugh.
Robin Debreuil
Tuesday, March 18, 2003
There are ways other than SAX and DOM to parse XML. It's just that these two are the standard ways of doing it.
In the .NET Framework, System.Xml.XmlReader allows you to "seek" to a particular node and pull out the data. This approach is similar to SAX in terms of not requiring the whole document to be in memory, but doesn't require the callbacks. This approach is perfectly fine for applications that serialize objects to XML - storing configuration info, SOAP, etc...
igor
Tuesday, March 18, 2003
XML has some nice features. Mostly it's a good way to make file formats that somebody else might need to code against slightly easier to parse.
The problem is that DOM and SAX are both trying to be two-size-fits-all solutions for the problem. They both work roughly OK if you are using them for what they were originally designed for, and pretty poorly for things they were not designed for from the start but are shoehorned into.
XSLT is actually quite nice, once you get the hang of it. But you need to view it as a scripting language more than anything else. It is a scripting language that is designed to be limited to the task at hand -- converting sets of similar XML documents into a new format without creating even the slightest hint of an exploitable hole. It really could have been done as a Scheme-like language and made much less verbose, but this way, you don't need to write extra parsers; you can do everything with the DOM. In most cases, it's prolly easier to write it in a scripting language of your choice unless you need it to be used in an XSLT-friendly environment.
The same goes for most of the other standards. They hope that you use them instead of rolling your own in the language of your choice with an XML parser.
And there's too much hype and other weirdness, just like every other buzzword technology. DTDs and Schemas are complementary, not competitive, damnit.
flamebait sr.
Tuesday, March 18, 2003
I don't care; just hide the XML behind functions. When did this become impossible? When I abstracted away XML in an app that was "impossibly slow," I was able to add caching and other stuff. And no one had to know they were using XML.
XML in code is like Hard Rock Cafe on a t-shirt -- advertising.
Tj
Tuesday, March 18, 2003
The original article /. was referring to basically boils down to this:
1) The XML DOM requires you to read the entire XML into memory before processing. This takes too many resources.
2) SAX (the other main competitor) is very hard to program to in real world situations due to its design (callbacks from nodes).
3) XML as text is too irregular to process with line/regex-based tools, which are the tools of choice for quick text-stream processing.
I would mostly agree with these statements. XML parsers and query engines these days are built around having everything in memory. If you want to do stream processing, you're pretty much stuck with SAX right now. And the callback-based model SAX uses really takes a lot of work to implement what are fairly simple things.
As for how to get the best of both worlds: I've got an idea, but I think I may save it for a magazine article. ;-)
Chris Tavares
Tuesday, March 18, 2003
how do you figure it's too irregular to process? At the root you've got <, >, </, and =""
If people can parse HTML with Regex (and they do), what's the big deal with a highly structured tag library like XML?
FWIW, I consider the best application of XML to be for transferring data, in which case you'll be reading the entire document in anyway...
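For example, this kind of naive Java one-off gets you a surprisingly long way on machine-generated data (note the assumptions in the comment - and see the namespace objection in the next post):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GrepXml {
    public static void main(String[] args) {
        String xml = "<contact><LastName>GXXX</LastName></contact>";
        // naive: assumes no namespace prefixes, no attributes on the tag,
        // no CDATA sections, and no line breaks inside the element
        Matcher m = Pattern.compile("<LastName>([^<]*)</LastName>").matcher(xml);
        while (m.find())
            System.out.println(m.group(1));
    }
}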
Philo
Tuesday, March 18, 2003
You're forgetting about namespaces. An element can have a namespace prefix or not. The namespace prefix can be different but refer to the same actual namespace. But a regex based search will treat foo:MyElement and bar:MyElement as different, even if they are in fact the same according to the namespace declarations.
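Concretely (a Java sketch with invented names): a regex sees two different strings below, while a namespace-aware parser reports the identical {urn:example}MyElement for both:

import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class NsDemo {
    public static void main(String[] args) throws Exception {
        String[] docs = {
            "<foo:MyElement xmlns:foo='urn:example'/>",
            "<bar:MyElement xmlns:bar='urn:example'/>"
        };
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true); // off by default, which trips people up
        for (int i = 0; i < docs.length; i++) {
            Document d = f.newDocumentBuilder()
                          .parse(new ByteArrayInputStream(docs[i].getBytes()));
            Element e = d.getDocumentElement();
            // prints "{urn:example}MyElement" both times
            System.out.println("{" + e.getNamespaceURI() + "}" + e.getLocalName());
        }
    }
}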
Not to mention the fact that line breaks are pretty much arbitrarily allowed in an XML file - naive tools tend to choke on stuff like this.
In a way, it's kinda funny, since XML isn't a regular language, and so you can't parse it with regexes anyway!
Proclaimng XML immiment death is like calling for the end of ascii text - oh that happened and very few people noticed - now we have unicode.
Bottom Line - Tools - why anyone is messing with raw XML is beyond me.
Get out of emacs, vi, notepad or what ever macho text-editor you are using and get a real IDE and get a life
Long live silver bullets
Tuesday, March 18, 2003
Note that most things that are useful are not "regular". Generally, any interesting file format won't be a regular grammar, at least as far as any regular expression searching system is concerned.
I'm not sure what the problem is. If you want to grep through a file for something, XML isn't getting in your way. And if you want to get data out of it, there is DOM, SAX, and a set of various alternatives for different languages. You will get just about as far parsing lisp or C styled syntax with regular expressions as you would using XML.
Two APIs Java programmers can use to make their XML processing a whole lot easier are JDOM () and JXPath ().
JDOM was conceived because DOM and SAX didn't quite fit into the Java way of doing things. JXPath makes it easy to hop around an XML tree with XPaths.
Walter Rumsby
Wednesday, March 19, 2003
Actuall you can use XSLT to do the parsing for you. Then DOM or SAX is irrelevant.
Most of the XSLT process have ability to call user functions for a matching XPath statement. You can use those abilities to eliminate DOM or SAX parsers.
Nitin Bhide
Wednesday, March 19, 2003
I have always used my own tokenisers. So the code like "myxmldoc.next()" returns a token. Tokens are not necessarily tags. The types of token I've found useful are OPEN_TAG (includes name and, if specified, namespace), ATTRIBUTE (key value pair), CLOSE_TAG, TEXT etc. The good bit about splitting the OPEN_TAG and CLOSE_TAG is that <hmm/> and <hmm></hmm> look exactly the same.
The nicest thing about tokenisers is that you don't need to coordinate state between several callbacks.
My tokenisers have evolved to have lots of useful methods for skipping stuff e.g. "myxmldoc.nextTag("hmm")" would skip all stuff until the next interesting tag. Great equiv to what Tim Bray was doing with regex in the article pointed to in that slashdot article.
My tokeniser keeps a stack so that closes can be checked against opens; a trival check that the xml is well-formed. I have never yet bothered with making them validating, but it would be very possible. I do have simple utility functions like "nextSwallowing("hmm")" which would return the next tag after next, ensuring the skipped tag was a <hmm> and causing an exception if it wasn't. Etc. Great for hardcoding structure into the code.
I found that for simple xml parsing, the code in my method body actually looks pretty much like the xml, even the indentation! Very easy to read. Another thing that Mr Bray was complaining about mitigated I think..
So I suggest people consider tokenisers instead of SAX, since IMHO it is far superior to use.
IIRC kxml () is a tokeniser too. And funnily enough, they did that for the same reasons that drove me originally - good performance and low memory overhead for J2ME Java.. when I saw their code, I actually almost thought they were copying - the methods were even the same named! Ah well, must have been obvious I guess ;-)
/me checks website and finds there is now a "pull" API yeah baby!!
Nice
Wednesday, March 19, 2003
For an interesting take on this read .
"It's quite amusing to me that a post which is really Tim Bray complaining about the crappy APIs for processing XML he is used to got so much traction on Slashdot and Jon Udell's blog as an indictment of XML.
The posting by Tim Bray was really an announcement that he is disconnected from the current landscape of XML technologies."
Just me (Sir to you)
Wednesday, March 19, 2003
XSLT is a parser? I thought it was just an xml document...
apw
Wednesday, March 19, 2003
And a C program is just an ASCII file right?
With Java:
JDOM and JXPATH to load and find nodes etc
and
**VELOCITY** (jakarta.org) to do the transformations. Who needs XSLT for most of the templating ?
Phil
Wednesday, March 19, 2003
Recent Topics
Fog Creek Home | https://discuss.fogcreek.com/joelonsoftware1/34733.html | CC-MAIN-2018-51 | refinedweb | 2,096 | 72.46 |
Just wondering, any of you guys looked into touch switches like below
Just wondering, any of you guys looked into touch switches like below
Should a Repeater have it's on unique ID or the same ID as the NODE that it's repeating?
Should a Repeater have it's on ID or the same ID as the NODE that it's repeating?
Anyone here knows if we could replace NRF24L01+ or RFM69 radio to an ESP8266, both on Gateway and Nodes?
The following sketch has got what you're looking for. Hope that helps.
Relay With Actuator Switch Toggle
#include <MySensor.h> #include <SPI.h> #include <Bounce2.h> #define RELAY_ON 0 // switch around for realy HIGH/LOW state #define RELAY_OFF 1 MySensor gw; #define RADIO_ID 11 // Radio ID, whatever channel you assigned to #define noRelays 6 const int relayPin[] = {A0, A1, A2, A3, A4, A5}; const int buttonPin[] = {3, 4, 5, 6, 7, 8}; class Relay // relay class, store all relevant data (equivalent to struct) { public: int buttonPin; // physical pin number of button int relayPin; // physical pin number of relay byte oldValue; // last Values for key (debounce) boolean relayState; // relay status (also stored in EEPROM) }; Relay Relays[noRelays]; Bounce debouncer[noRelays]; MyMessage msg[noRelays]; void setup(){ gw.begin(incomingMessage, RADIO_ID, true); delay(250); gw.sendSketchInfo("MultiRelayButton", "0.9b"); delay(250); // Initialize Relays with corresponding buttons for (int i = 0; i < noRelays; i++){ Relays[i].buttonPin = buttonPin[i]; // assign physical pins Relays[i].relayPin = relayPin[i]; msg[i].sensor = i; // initialize messages msg[i].type = V_LIGHT; debouncer[i] = Bounce(); // initialize debouncer debouncer[i].attach(buttonPin[i]); debouncer[i].interval(5); pinMode(Relays[i].buttonPin, INPUT_PULLUP); pinMode(Relays[i].relayPin, OUTPUT); Relays[i].relayState = gw.loadState(i); // retrieve last values from EEPROM digitalWrite(Relays[i].relayPin, Relays[i].relayState? RELAY_ON:RELAY_OFF); // and set relays accordingly gw.send(msg[i].set(Relays[i].relayState? true : false)); // make controller aware of last status gw.present(i, S_LIGHT); // present sensor to gateway delay(250); } } void loop() { gw.process(); for (byte i = 0; i < noRelays; i++){ debouncer[i].update(); byte value = debouncer[i].read(); // if (value != Relays[i].oldValue && value == 0){ if (value != Relays[i].oldValue){ Relays[i].relayState = !Relays[i].relayState; digitalWrite(Relays[i].relayPin, Relays[i].relayState?RELAY_ON:RELAY_OFF); gw.send(msg[i].set(Relays[i].relayState? true : false)); gw.saveState( i, Relays[i].relayState );} // save sensor state in EEPROM (location == sensor number) Relays[i].oldValue = value; } } // process incoming message void incomingMessage gw.saveState( message.sensor, Relays[message.sensor].relayState ); // save sensor state in EEPROM (location == sensor number) } } }
Nice work so far. I had a similar idea to use Binary Switch to turn ON/OFF my relays and lights but I had to discard the idea after going through several issues. Occasionally my serial gateway would fail to connect to the Vera and it would paralyze all my lightings. Essentially the Binary switches talk's to the Relays by going through the gateway. So decided to redesign the electronics to accommodate Relay with Button Actuator Sketch. If you lose the gateway, you could still bet that it would work though I wouldn't say it's 100% reliable but it gets the job done. Lately, I'm also exploring other avenues to somehow use concepts on MySensor and I was informed that My sensor Ver 2.0 Beta has got something similar by using ESP8266. I don't know how much of this is true but I'm pretty sure @hek will be able to explain.
Out of curiosity, what version Arduino are you running? I ran version 1.0.6 without any problems. I only have an issue on 1.6.7. It looks like 1.6.7 manages library differently.
After checking, I notice there are two library folder. There is one in My Documents and another in Program Files - Arduino - library. Which MY Sensor files need to go into which folder?
Thanks guys, I'm going to give a third try.
Where do you have to install MY Sensor library? I have downloaded Arduino-Master and copied all the files into Arduino Library but its not working. | https://forum.mysensors.org/user/jeylites | CC-MAIN-2018-51 | refinedweb | 687 | 51.44 |
Consider the following short C (or C++) program:
const int thingy=123123123123;
Depending on your compiler, the above code may succeed, fail, produce a warning, or be accepted quietly and result in undefined behavior, which is extremely bad.
How can you prevent failure of the above code? According to ISO/IEC 14882:2003, the standard document describing C++, section 2.1.3.1 § 2, a constant is successively compared to integer types until one that fits is found. If a constant is written in decimal and devoid of a suffix, the compiler tries to represent it as int. If int is insufficient to contain the value, long is tried. If long cannot hold the constant, undefined behavior results. That is, the compiler is free to do whatever it feels like. This gets better. If the constant is written in hexadecimal, int, unsigned, long and unsigned long are successively tried. Again, if none of those type can hold the constant, it results in undefined behavior.
If the constant is written in decimal and suffixed by u, the compiler understands it is an unsigned constant of some sort, so it repeats the same with unsigned and unsigned long, again resulting in undefined behavior if the constant is too large for either.
If the variable is suffixed with l, the constant is at least a long, but the compiler may compare with unsigned long‘s range if the constant exceeds long. Undefined behavior results if the constant exceeds unsigned long.
It is only if the constant is suffixed with ll that the compiler considers the type long long, which usually corresponds to a type that is twice as large as long, or not: it is implementation-specific behavior.
Using gcc 4.2.4 (not the latest version, but the version I have on my AMD64 box), I get the warning:
suffixes.c:7: warning: large integer implicitly truncated to unsigned type
and the program prints -1430928461 which is not the value wanted or expected.
So, what went wrong exactly? The above code seems to be expecting int to be larger than 32 bits, and that’s where it fails. In C (and so in C++), contrary to other languages such as Java, the integer types are machine-specific, sizes of which are dictated by considerations such as the compilation model (see a previous entry on the topic) and the underlying microprocessor. The type int is usually mapped onto an efficient, machine-default, data type. In x86 mode, int should be 32 bits long while long (and long long) are larger (or equal) to int. The LP32 model provides 64 bits integers only with long long. The LP64 model provides long and long long as 64 bits integers.
How do we make absolutely sure that we get exactly what we expect? We use the platform-safe basic types definitions from C99’s <stdint.h> and from <stddef.h>. We rewrite the program as:
uint64_t thingie=123123123123;
Yet, this code is still not safe. Remember the type determination rules we just enumerated? They do not guaranty that long long is considered before assignation. You may very well get the same warning and see a truncated value assigned to your constant thingie, because unsigned long so happens to be too short, as only long long is large enough to hold the constant. But, there is hope. We can use ll as a suffix, yielding:
uint64_t thingie=123123123123ll;
Yet, this code is still not safe! Because, again, ll behavior is implementation-specific. So you probably need a macro—which I advocate against whenever I can, normally—to wrap the constant so that the correct suffix is generated. Fortunately, such macros are provided by the standard. Browsing the stddint.h file, we find:
/* The ISO C99 standard specifies that in C++ implementations these macros should only be defined if explicitly requested. */ #if !defined __cplusplus || defined __STDC_CONSTANT_MACROS /* Signed. */ # define INT8_C(c) c # define INT16_C(c) c # define INT32_C(c) c # if __WORDSIZE == 64 # define INT64_C(c) c ## L # else # define INT64_C(c) c ## LL # endif ...more...
So we rewrite yet again the code as:
#define __STDC_LIMIT_MACROS // could be from Makefile as well #include <stdint.h> // even in C++ ... uint64_t thingie = UINT64_C(123123123123);
Yay! Type-safe code!
Ok, now, what about float and double?!
Let’s look at those next week.
Minor nitpick – you should use UINT64_C, not __UINT64_C. The former is standard, the latter is probably GNU-specific.
I just had a run-in with this recently, with a non-C++ developer. I think it needs to be stressed that for constant values, the compiler does not consider usage. Hence, whether you say “double foo = 123123123123” or “const char[] bar = 123123123123”, the literal constant is always evaluated the same.
You’re right about __UINT64_C. My bad. Corrected.
As for constant being evaluated the same, I’m not sure what you mean. In the second example, the compiler should complain that you’re making a pointer from an integer, or something similar.
Indeed, it spews
error: invalid conversion from ‘int’ to ‘const char*’
My point is that it said “invalid conversion from ‘int'”… as opposed to “invalid conversion from ‘long long'”. The truncation occurs before the assignment, and that can trip up naive developers.
If I use 123123123123, it does generate
error: invalid conversion from ‘long int’ to ‘const char*’
(because I’m in LP64, I only have
longrather than
long long, but same difference)
[…] long long for atoll, etc. The first problem is that, as we discussed in a series of previous posts (here, and here) the size of int, long, etc., vary from platform to platform, so it is not clear how […]
[…] could say), integer arithmetic is subject to a number of pitfalls, some I have already discussed here, here, and here. This week, I discuss yet another occasion for error using integer […] | https://hbfs.wordpress.com/2008/11/18/safer-integer-constants/ | CC-MAIN-2017-22 | refinedweb | 975 | 64.1 |
Unlike some other devices the Raspberry Pi does not have any analogue inputs. All 17 of its GPIO pins are digital. They can output high and low levels or read high and low levels. This is great for sensors that provide a digital input to the Pi but not so great if you want to use analogue sensors.
For sensors that act as a variable resistor such as LDRs (Light Dependent Resistors) or thermistors (temperature sensors) there is a simple solution. It allows you to measure a number of levels using a single GPIO pin. In the case of a light sensor this allows you to measure different light levels.
Reading Analogue Sensors
It uses a basic “RC” charging circuit (Wikipedia Article) which is often used as an introduction to electronics. In this circuit you place a Resistor in series with a Capacitor. When a voltage is applied across these components the voltage across the capacitor rises. The time it takes for the voltage to reach 63% of the maximum is equal to the resistance multiplied by the capacitance. When using a Light Dependent resistor this time will be proportional to the light level. This time is called the time constant :
t = RC where t is time, R is resistance (ohms) and C is capacitance (farads)
So the trick is to time how long it takes a point in the circuit the reach a voltage that is great enough to register as a “High” on a GPIO pin. This voltage is approximatey 2 volts, which is close enough to 63% of 3.3V for my liking. So the time it takes the circuit to change a GPIO input from Low to High is equal to ‘t’.
With a 10Kohm resistor and a 1uF capacitor t is equal to 10 milliseconds. In the dark our LDR may have a resistance of 1Mohm which would give a time of 1 second. You can calculate other values using an online time constant calculator.
In order to guarantee there is always some resistance between 3.3V and the GPIO pin I inserted a 2.2Kohm resistor in series with the LDR.
Here is the circuit :
Here is the circuit implemented on a breadboard :
Theory
Here is the sequence of events :
- Set the GPIO pin as an output and set it Low. This discharges any charge in the capacitor and ensures that both sides of the capacitor are 0V.
- Set the GPIO pin as an input. This starts a flow of current through the resistors and through the capacitor to ground. The voltage across the capacitor starts to rise. The time it takes is proportional to the resistance of the LDR.
- Monitor the GPIO pin and read its value. Increment a counter while we wait.
- At some point the capacitor voltage will increase enough to be considered as a High by the GPIO pin (approx 2v). The time taken is proportional to the light level seen by the LDR.
- Set the GPIO pin as an output and repeat the process as required.
Python Code
Here is some code that will print out the number of loops it takes for the capacitor to charge.
#!/usr/local/bin/python# Reading an analogue sensor with # a single GPIO pin # Author : Matt Hawkins # Distribution : Raspbian # Python : 2.7 # GPIO : RPi.GPIO v3.1.0a import RPi.GPIO as GPIO, time # Tell the GPIO library to use # Broadcom GPIO references GPIO.setmode(GPIO.BCM) # Define function to measure charge time def RCtime (PiPin): measurement = 0 # Discharge capacitor GPIO.setup(PiPin, GPIO.OUT) GPIO.output(PiPin, GPIO.LOW) time.sleep(0.1) GPIO.setup(PiPin, GPIO.IN) # Count loops until voltage across # capacitor reads high on GPIO while (GPIO.input(PiPin) == GPIO.LOW): measurement += 1 return measurement # Main program loop while True: print RCtime(4) # Measure timing using GPIO4
Accuracy
Given we only want to spot different light levels we don’t really need to know the resistance of the LDR or the exact time it takes to charge the capacitor. You can do the maths if you want to but I just needed to get a measurement and compare it to some known values. Seconds or Python loop counts, it doesn’t matter.
Python is an interpreted language which means the timing of loops is always going to be affected by the operating system performing other background tasks. This will affect the count loop in our example.
Practical Uses
In a more useful application you can call “RCtime” when you need it to get a count value. Your code can then perform other tasks based on the value of the count, perhaps comparing to values you have measured previously.
When I built my test circuit it was sat in front on my TV. I could see the count number change as the lighting level changed on the TV. It’s so simple you really need to just give it a try!
Acknowlegments
This article was inspired by the excellent article on Adafruit.com.
nice 🙂 thanks for the link too! you might want to check out fritzing parts, we just added a bunch for raspberry pi!
Excellent. Just imported the library into my Fritzing installation. I like the parts for the PIR and membrane keypad. Will definitely be able to make use of the PIR part in future projects. I like the Raspberry Pi part but wish the pin labels were GPIO numbers rather than the alternative function labels.
Great article Matt, very useful for a project I’m considering. Keep up the good work!
G.
Cool stuff, good idea!
But I get wildly fluctuating readings that don’t seem to change too much when I hold my hand over the sensor.
Thanks for the nice article.
I suppose the same method will work with an analog temp sensor.
Thanks,
alexk
I made some small changes to the code:
from datetime import datetime
GPIO.setup(PiPin, GPIO.IN)
# Count loops until voltage across
# capacitor reads high on GPIO
startTime=dateTime.now()
while (GPIO.input(PiPin) == GPIO.LOW):
measurement += 1
endTime=dateTime.now()
diffTime=endTime-startTime
return str(diffTime.total_seconds()/0.001)
I figured that would give me the time in diffTime, and I know the capacitance is 1uF, so I should be returning the resistance. I put a static 22ohm resister in the circuit as a test, and I get nothing but zeros. If I crank it up to
return str(diffTime.total_seconds()/0.000001)
I start getting values, but in the range of 120, not 220 like I’d expect. I know timing isn’t going to be perfect, but I’d expect it to be more accurate than to be always about 100 ohms off. Am I doing something wrong?
Thanks
Rick
In your example, with a 1uF capacitor, and multiplying the time by 1000000 I would expect the result to be 22. As it is 120 I think this just shows a delay in measuring the small time interval.
Because there is so much going on within the Linux operating system you are probably just seeing the inaccuracy of the technique. I just count loops because it gives me a number to compare without worrying about the exact meaning of the number. With the same light level I see at least a 10%-20% jumping about in the measurement. If anything loads the system at the same time this can be even worse.
Stick in a 20K resistor and your results will probably be a bit closer to the predicted values.
Pingback: Playing with GPIO and sensors on OpenWrt | #labOS
Hi,
Interesting article. I wonder if anyone used this approach to measure distance with sharp IR sensor on Pi?
I get number of counts and from datasheet I can see that it takes ~38 msec to get one sample. So time is known, but I can’t get distance with that input.
Any help or suggestion is appreciated.
thanks
Seems like a “dangerous” solution to me. Can the GPIO pin handle the inrush current created when discharging the capacitor which would have to rather large in order to get any precision in time measurement? I wouldn’t try this myself without some interfacing electronics between the Pi and the capacitor.
I was wondering about this too. And I don’t know the exact answer. But if I was about to implement this, I would probably try to figure out, how much inrush current the PI can handle. By the way, I think you can spent a PWM output in order to reduce the inrush current peak. Beginning with a very high duty cycle and reducing it continuously to a steady ground. I also suggest pigpio for things like PWM output, if it has to be accurate. With pigpio you can also measure the time between two edges much more precisely than with RPi.GPIO. For all those, who complain about this method being inaccurate. But that would certainly be a bit more time consuming.
Pingback: Why i chose Raspberry instead of Arduino Yun and Spark-Core in the end | Making connected stuff
Thanks for your tutorial! Is a 3.3 uF capacitor ok to use for the circuit? I don’t have a 1 uF capacitor available yet. I hope it does not cause something like short circuit…
That would be OK. It will just change the timing calculations as the time constant will be based on 3.3uF rather than 1uF.
Can’t using ldr more than 1, for example 8 ldr 8 GPIO pins?
Pingback: Experiment: use an arduino as a slave to your raspberry pi | Project Pi
Great tutorial. I used this technique to monitor two different LDRs to notify me if I had accidentally left lights on in two outbuildings. Just counted number of loops; low number meant light was on, high number – lights off. If either light was on, I put power out on one of the GPIO pins to turn on an LED in the main house. Slick!
Great Article!
I have been working off of the same Adafruit article for a Light Detection system of my own.
I have one question id like to ask. How would one go about scaling this up? How do you go about adding more light sensors in to allow for say, a 3×3 grid? And beyond?
Apologies for the entry level questions!
To be honest this technique probably isn’t the way to go if you’ve got lots of sensors. You would have to poll each one and wait for the result and this would start to add a larger delay to your script. I would add a MCP3008 ADC and read the analogue outputs of the light sensor. It would be quicker and more accurate.
“With a 10Kohm resistor and a 1uF capacitor t is equal to 1 millisecond. In the dark the LDR may have a resistance of 10Mohm which would give a time of 1 second.”
Don’t you mean :
“With a (1Kohm) resistor and a 1uF capacitor t is equal to 1 millisecond. In the dark the LDR may have a resistance of (1Mohm) which would give a time of 1 second.”
There is like 1 zero added in your statement 😀
Excellent tutorial Mate (Y)
Well spotted. I’ve updated the numbers and added a link to a time constant calculator.
Hi, great post!
Any idea why it doesn’t work on a rasperry pi 2?
Thanks!
There’s no reason it shouldn’t work on a Pi 2. It’s just measuring the time taken for a GPIO pin to go high so I would expect it to work on all models.
Thanks Matt for the answer. The thing is that the pin never goes high on my PI 2 🙁
Hi,
I gpio is detected high around 1.3V, not 2V (I got an Rpi B+)
I found that with a 10uF capacitor and 10K resistor. I got a high on GPIO after 48ms (according to t=RC it should be 100ms). After 48ms, equation Vc=V-V*e^(-t/(RC)) says it is 1.3V
Am doing something wrong ? Or gpio gets high lower on Rpi B+ ?
Hi (again),
According to different things I found on the web, threshold voltage for GPIO pin is between 0.8V and 2.0V, and … as is on rising edge.
IMHA, one should calibrate rpi’s gpio high treshold. On my rpi b it’s 1.3V, not 2.0V, and it changes the value read
Hi , I have a 100k thermistor ntc 3950 b Podre transform analog signal to digital with this method ?
Very clever. However, I must reiterate a WARNING (mentioned by PAUL on JUNE 5, 2014 12:40 PM), and a simple solution:
Discharging the capacitor directly with a GPIO pin will provide a large inrush current, which risks damage to the GPIO pin. A larger the cap would be even more risky (some people wanted to go with a larger cap, trading off speed for better resolution) . From specs I’ve read, the default setting on GPIO outputs is 8mA (can be set to 2ma to 16mA), and the recommendation is not to exceed that setting (it isn’t actually a current limit). The short time to discharge the cap might let you get away with not over-dissipating the device, but IMO it is not worth the risk when there is a simple solution.
The SOLUTION: Simply add a 470 OHM R from the GPIO pin to the junction of the LDR and cap. This limits the current to ~ 7mA maximum. It will take ~ 2 mSec to discharge the cap (5 time constants).
One (probably minor) effect is that if the LDR is a low resistance (say 1K, with the 2.2K in series), this forms a voltage divider, and the cap will discharge only to ~ 0.4V instead of near zero.. It just might take a little tweaking of the loop?
In the reply’s i did not find how we can measure accurate resistor values and or even calculate the accuracy.
Seems to me this is important to get more out of this technique.
In my setup i used a 3.3uF condensator and a RPI-2.
The timing measured are not directly equal to the resistance.
It seemed that the timing measurements are indeed linear to the resistance.
I used 3 resistors which i choose to measure (1K , 2.2K ,15K)
Then you can draw a graph and calculate an equation y=ax+b or x=(y-b)/a.
(search the internet for more info)
This is the edited program to test the accuracy of the measured resistor.
(i’m not a perfect programmer but this works)
It asks for the resistor you want to measure.
Then it reads 4 measurements (named it “timing-counts”) and makes an average to make it more accurate then uses the equation to calculate the resistance.
Last but not least is calculates the accuracy in %.
My finding was that a time.sleep of 0.4 was better for accuracy.
The accuracy is in average about 1 to 2 %
#!/usr/local/bin/python
# Reading an analogue sensor with
# a single GPIO pin
# gpio 4
# |
# +3.3V o–====—–====—–||–o gnd
# 2.2k choose 3.3uF
# Author : Matt Hawkins
# Distribution : Raspbian
# Python : 2.7
# GPIO : RPi.GPIO v3.1.0a
import math
import RPi.GPIO as GPIO, time
from datetime import datetime
# Tell the GPIO library to use
# Broadcom GPIO references
GPIO.setmode(GPIO.BCM)
# Define time.sleep variable (edited to 0.4 to get otimal results with 3.3uF)
tmsl = 0.4
# input the measurable Resistance value
realR = input(“Please enter the measurable Resistance value in Ohm:”)
# Define function to measure charge time
def RCtimeimed
# Main program loop
while True:
R1 = RCtimea(4) # Measure timing using GPIO4
R2 = RCtimeb(4) # Measure timing using GPIO4
R3 = RCtimec(4) # Measure timing using GPIO4
R4 = RCtimed(4) # Measure timing using GPIO4
print R1
print R2
print R3
print R4
meanR = ((R1+R2+R3+R4)/4)
print “Average of 4 measurements “,meanR , ” timing-counts”# Measure timing using GPIO4
# The equation beneath is a lineair equation formed from time-count measures and 3 different resistances so a lineair graph and an equation can be made, the equation is y=ax+b or x=(y-b)/a
# This equation is only usefull with a 2.2k resistor and a 3.3uF Condensator
R = round(((meanR-1500)/0.72693),0) # Measure timing and calculate resistance and round with no decimals using GPIO4
print R,” Ohm”
Difference = (realR-R)
print round(((Difference/realR)*100),3), ” % Accuracy”# won’t work if earlier calculation were done with int()
Great post.
Used it to get temperature readings with a thermistor.
I used this article as a basis for a simple Python class to encapsulate what it does, and enable multiple asynchronous analog reads:
I also did the math to convert from time to resistance, and compensated for the Pi’s internal resistance. Have a look, make commits, etc.
I’ve also built a calculator to help people choose components:
Can this principle be used for an analog gas sensor ?
It would only work if the sensor acted as a resistor. For serious sensor reading I would consider an analog to digital converter such as the MCP3008 : | https://www.raspberrypi-spy.co.uk/2012/08/reading-analogue-sensors-with-one-gpio-pin/ | CC-MAIN-2018-30 | refinedweb | 2,886 | 73.88 |
It never fails. The application you just deployed ran great on your development machine-but stumbles in production. The problem might show up right away or maybe it creeps up over time. Now what?
Brad McCabe
MSDN Magazine July 2006
View Complete Post
Ken Spencer
MSDN Magazine September 2001.
Presented here is an overview of Transactional NTFS and how it revolutionizes transactions.
Jason Olson
MSDN Magazine July 2007
You can combat deadlock using a combination of disciplined locking practices which Joe Duffy aptly explains in this article.
Joe Duffy
MSDN Magazine April
The System. Diagnostics namespace in the Microsoftî . NET Framework contains powerful tracing capabilities. This includes the main tracing API: TraceSource. As you will see, the tracing APIs in System.
Krzysztof Cwalina
Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend | http://www.dotnetspark.com/links/2306-advanced-basics-monitor-your-apps-with-systemdiagnostics.aspx | CC-MAIN-2018-05 | refinedweb | 138 | 51.44 |
.
Up until now, we have been discussing some of the basic nuts and bolts of NumPy; in the next few sections, we will dive into the reasons that NumPy is so important in the Python data science world. Namely, it provides an easy and flexible interface to optimized computation with arrays of data.
Computation on NumPy arrays can be very fast, or it can be very slow. The key to making it fast is to use vectorized operations, generally implemented through NumPy's universal functions (ufuncs). This section motivates the need for NumPy's ufuncs, which can be used to make repeated calculations on array elements much more efficient. It then introduces many of the most common and useful arithmetic ufuncs available in the NumPy package.
Python's default implementation (known as CPython) does some operations very slowly. This is in part due to the dynamic, interpreted nature of the language: the fact that types are flexible, so that sequences of operations cannot be compiled down to efficient machine code as in languages like C and Fortran. Recently there have been various attempts to address this weakness: well-known examples are the PyPy project, a just-in-time compiled implementation of Python; the Cython project, which converts Python code to compilable C code; and the Numba project, which converts snippets of Python code to fast LLVM bytecode. Each of these has its strengths and weaknesses, but it is safe to say that none of the three approaches has yet surpassed the reach and popularity of the standard CPython engine.
The relative sluggishness of Python generally manifests itself in situations where many small operations are being repeated – for instance looping over arrays to operate on each element. For example, imagine we have an array of values and we'd like to compute the reciprocal of each. A straightforward approach might look like this:
import numpy as np np.random.seed(0) def compute_reciprocals(values): output = np.empty(len(values)) for i in range(len(values)): output[i] = 1.0 / values[i] return output values = np.random.randint(1, 10, size=5) compute_reciprocals(values)
array([ 0.16666667, 1. , 0.25 , 0.25 , 0.125 ])
This implementation probably feels fairly natural to someone from, say, a C or Java background.
But if we measure the execution time of this code for a large input, we see that this operation is very slow, perhaps surprisingly so!
We'll benchmark this with IPython's
%timeit magic (discussed in Profiling and Timing Code):
big_array = np.random.randint(1, 100, size=1000000) %timeit compute_reciprocals(big_array)
1 loop, best of 3: 2.91 s per loop
It takes several seconds to compute these million operations and to store the result! When even cell phones have processing speeds measured in Giga-FLOPS (i.e., billions of numerical operations per second), this seems almost absurdly slow. It turns out that the bottleneck here is not the operations themselves, but the type-checking and function dispatches that CPython must do at each cycle of the loop. Each time the reciprocal is computed, Python first examines the object's type and does a dynamic lookup of the correct function to use for that type. If we were working in compiled code instead, this type specification would be known before the code executes and the result could be computed much more efficiently.
For many types of operations, NumPy provides a convenient interface into just this kind of statically typed, compiled routine. This is known as a vectorized operation. This can be accomplished by simply performing an operation on the array, which will then be applied to each element. This vectorized approach is designed to push the loop into the compiled layer that underlies NumPy, leading to much faster execution.
Compare the results of the following two:
print(compute_reciprocals(values)) print(1.0 / values)
[ 0.16666667 1. 0.25 0.25 0.125 ] [ 0.16666667 1. 0.25 0.25 0.125 ]
Looking at the execution time for our big array, we see that it completes orders of magnitude faster than the Python loop:
%timeit (1.0 / big_array)
100 loops, best of 3: 4.6 ms per loop
Vectorized operations in NumPy are implemented via ufuncs, whose main purpose is to quickly execute repeated operations on values in NumPy arrays. Ufuncs are extremely flexible – before we saw an operation between a scalar and an array, but we can also operate between two arrays:
np.arange(5) / np.arange(1, 6)
array([ 0. , 0.5 , 0.66666667, 0.75 , 0.8 ])
And ufunc operations are not limited to one-dimensional arrays–they can also act on multi-dimensional arrays as well:
x = np.arange(9).reshape((3, 3)) 2 ** x
array([[ 1, 2, 4], [ 8, 16, 32], [ 64, 128, 256]])
Computations using vectorization through ufuncs are nearly always more efficient than their counterpart implemented using Python loops, especially as the arrays grow in size. Any time you see such a loop in a Python script, you should consider whether it can be replaced with a vectorized expression.
x = np.arange(4) print("x =", x) print("x + 5 =", x + 5) print("x - 5 =", x - 5) print("x * 2 =", x * 2) print("x / 2 =", x / 2) print("x // 2 =", x // 2) # floor division
x = [0 1 2 3] x + 5 = [5 6 7 8] x - 5 = [-5 -4 -3 -2] x * 2 = [0 2 4 6] x / 2 = [ 0. 0.5 1. 1.5] x // 2 = [0 0 1 1]
There is also a unary ufunc for negation, and a
** operator for exponentiation, and a
% operator for modulus:
print("-x = ", -x) print("x ** 2 = ", x ** 2) print("x % 2 = ", x % 2)
-x = [ 0 -1 -2 -3] x ** 2 = [0 1 4 9] x % 2 = [0 1 0 1]
In addition, these can be strung together however you wish, and the standard order of operations is respected:
-(0.5*x + 1) ** 2
array([-1. , -2.25, -4. , -6.25])
Each of these arithmetic operations are simply convenient wrappers around specific functions built into NumPy; for example, the
+ operator is a wrapper for the
add function:
np.add(x, 2)
array([2, 3, 4, 5])
The following table lists the arithmetic operators implemented in NumPy:
Additionally there are Boolean/bitwise operators; we will explore these in Comparisons, Masks, and Boolean Logic.
x = np.array([-2, -1, 0, 1, 2]) abs(x)
array([2, 1, 0, 1, 2])
The corresponding NumPy ufunc is
np.absolute, which is also available under the alias
np.abs:
np.absolute(x)
array([2, 1, 0, 1, 2])
np.abs(x)
array([2, 1, 0, 1, 2])
This ufunc can also handle complex data, in which the absolute value returns the magnitude:
x = np.array([3 - 4j, 4 - 3j, 2 + 0j, 0 + 1j]) np.abs(x)
array([ 5., 5., 2., 1.])
theta = np.linspace(0, np.pi, 3)
Now we can compute some trigonometric functions on these values:
print("theta = ", theta) print("sin(theta) = ", np.sin(theta)) print("cos(theta) = ", np.cos(theta)) print("tan(theta) = ", np.tan(theta))
theta = [ 0. 1.57079633 3.14159265] sin(theta) = [ 0.00000000e+00 1.00000000e+00 1.22464680e-16] cos(theta) = [ 1.00000000e+00 6.12323400e-17 -1.00000000e+00] tan(theta) = [ 0.00000000e+00 1.63312394e+16 -1.22464680e-16]
The values are computed to within machine precision, which is why values that should be zero do not always hit exactly zero. Inverse trigonometric functions are also available:
x = [-1, 0, 1] print("x = ", x) print("arcsin(x) = ", np.arcsin(x)) print("arccos(x) = ", np.arccos(x)) print("arctan(x) = ", np.arctan(x))
x = [-1, 0, 1] arcsin(x) = [-1.57079633 0. 1.57079633] arccos(x) = [ 3.14159265 1.57079633 0. ] arctan(x) = [-0.78539816 0. 0.78539816]
x = [1, 2, 3] print("x =", x) print("e^x =", np.exp(x)) print("2^x =", np.exp2(x)) print("3^x =", np.power(3, x))
x = [1, 2, 3] e^x = [ 2.71828183 7.3890561 20.08553692] 2^x = [ 2. 4. 8.] 3^x = [ 3 9 27]
The inverse of the exponentials, the logarithms, are also available.
The basic
np.log gives the natural logarithm; if you prefer to compute the base-2 logarithm or the base-10 logarithm, these are available as well:
x = [1, 2, 4, 10] print("x =", x) print("ln(x) =", np.log(x)) print("log2(x) =", np.log2(x)) print("log10(x) =", np.log10(x))
x = [1, 2, 4, 10] ln(x) = [ 0. 0.69314718 1.38629436 2.30258509] log2(x) = [ 0. 1. 2. 3.32192809] log10(x) = [ 0. 0.30103 0.60205999 1. ]
There are also some specialized versions that are useful for maintaining precision with very small input:
x = [0, 0.001, 0.01, 0.1] print("exp(x) - 1 =", np.expm1(x)) print("log(1 + x) =", np.log1p(x))
exp(x) - 1 = [ 0. 0.0010005 0.01005017 0.10517092] log(1 + x) = [ 0. 0.0009995 0.00995033 0.09531018]
When
x is very small, these functions give more precise values than if the raw
np.log or
np.exp were to be used.
NumPy has many more ufuncs available, including hyperbolic trig functions, bitwise arithmetic, comparison operators, conversions from radians to degrees, rounding and remainders, and much more. A look through the NumPy documentation reveals a lot of interesting functionality.
Another excellent source for more specialized and obscure ufuncs is the submodule
scipy.special.
If you want to compute some obscure mathematical function on your data, chances are it is implemented in
scipy.special.
There are far too many functions to list them all, but the following snippet shows a couple that might come up in a statistics context:
from scipy import special
# Gamma functions (generalized factorials) and related functions x = [1, 5, 10] print("gamma(x) =", special.gamma(x)) print("ln|gamma(x)| =", special.gammaln(x)) print("beta(x, 2) =", special.beta(x, 2))
gamma(x) = [ 1.00000000e+00 2.40000000e+01 3.62880000e+05] ln|gamma(x)| = [ 0. 3.17805383 12.80182748] beta(x, 2) = [ 0.5 0.03333333 0.00909091]
# Error function (integral of Gaussian) # its complement, and its inverse x = np.array([0, 0.3, 0.7, 1.0]) print("erf(x) =", special.erf(x)) print("erfc(x) =", special.erfc(x)) print("erfinv(x) =", special.erfinv(x))
erf(x) = [ 0. 0.32862676 0.67780119 0.84270079] erfc(x) = [ 1. 0.67137324 0.32219881 0.15729921] erfinv(x) = [ 0. 0.27246271 0.73286908 inf]
There are many, many more ufuncs available in both NumPy and
scipy.special.
Because the documentation of these packages is available online, a web search along the lines of "gamma function python" will generally find the relevant information.
For large calculations, it is sometimes useful to be able to specify the array where the result of the calculation will be stored.
Rather than creating a temporary array, this can be used to write computation results directly to the memory location where you'd like them to be.
For all ufuncs, this can be done using the
out argument of the function:
x = np.arange(5) y = np.empty(5) np.multiply(x, 10, out=y) print(y)
[ 0. 10. 20. 30. 40.]
This can even be used with array views. For example, we can write the results of a computation to every other element of a specified array:
y = np.zeros(10) np.power(2, x, out=y[::2]) print(y)
[ 1. 0. 2. 0. 4. 0. 8. 0. 16. 0.]
If we had instead written
y[::2] = 2 ** x, this would have resulted in the creation of a temporary array to hold the results of
2 ** x, followed by a second operation copying those values into the
y array.
This doesn't make much of a difference for such a small computation, but for very large arrays the memory savings from careful use of the
out argument can be significant.
For binary ufuncs, there are some interesting aggregates that can be computed directly from the object.
For example, if we'd like to reduce an array with a particular operation, we can use the
reduce method of any ufunc.
A reduce repeatedly applies a given operation to the elements of an array until only a single result remains.
For example, calling
reduce on the
add ufunc returns the sum of all elements in the array:
x = np.arange(1, 6) np.add.reduce(x)
15
Similarly, calling
reduce on the
multiply ufunc results in the product of all array elements:
np.multiply.reduce(x)
120
If we'd like to store all the intermediate results of the computation, we can instead use
accumulate:
np.add.accumulate(x)
array([ 1, 3, 6, 10, 15])
np.multiply.accumulate(x)
array([ 1, 2, 6, 24, 120])
Note that for these particular cases, there are dedicated NumPy functions to compute the results (
np.sum,
np.prod,
np.cumsum,
np.cumprod), which we'll explore in Aggregations: Min, Max, and Everything In Between.
x = np.arange(1, 6) np.multiply.outer(x, x)
array([[ 1, 2, 3, 4, 5], [ 2, 4, 6, 8, 10], [ 3, 6, 9, 12, 15], [ 4, 8, 12, 16, 20], [ 5, 10, 15, 20, 25]])
The
ufunc.at and
ufunc.reduceat methods, which we'll explore in Fancy Indexing, are very helpful as well.
Another extremely useful feature of ufuncs is the ability to operate between arrays of different sizes and shapes, a set of operations known as broadcasting. This subject is important enough that we will devote a whole section to it (see Computation on Arrays: Broadcasting).
More information on universal functions (including the full list of available functions) can be found on the NumPy and SciPy documentation websites.
Recall that you can also access information directly from within IPython by importing the packages and using IPython's tab-completion and help (
?) functionality, as described in Help and Documentation in IPython. | https://nbviewer.jupyter.org/github/donnemartin/data-science-ipython-notebooks/blob/master/numpy/02.03-Computation-on-arrays-ufuncs.ipynb | CC-MAIN-2019-13 | refinedweb | 2,309 | 65.01 |
I'd like to know how to program a hex editor in python, using just tk with the main installation.
(no add ons like wx or any others)
if anyone can help, this would be gladly appreciated.
I'm not really "good" at python yet, so please try to be detailed.
(not reccomended)
thank you in return :)
Edited by Tcll: n/a
Er, what have you managed to do so far? Can you convert strings into hex strings? Such as a hex view of 'Hello world!!!'? Here is the hex values for it if you don't know: '48656c6c6f20776f726c64212121'
Good luck :D
Post if you need help
Also, do a google seach for Tkinter python manual. That'll help
no... I havn't done anything so far... besides the layout... :(
I need some help on the coding.
I am however working on just a simple hex viewer (only hex)
in that scrolled text widget I copied from here.
I'm using this to display the hex value of a spec number of chars:
__all__ = ['ScrolledText'] from Tkinter import * from Tkconstants import RIGHT, LEFT, Y, BOTH fdat = open('import.txt','r') #you can rename this... class ScrolledText(Text): def __init__(self, master=None, **kw): self.frame = Frame(master) self.vbar = Scrollbar(self.frame) self.vbar.pack(side=RIGHT, fill=Y) kw.update({'yscrollcommand': self.vbar.set}) Text.__init__(self, self.frame, **kw) self.pack(side=LEFT, fill=BOTH, expand=True) self.vbar['command'] = self.yview # Copy geometry methods of self.frame -- hack! methods = vars(Pack).keys() + vars(Grid).keys() + vars(Place).keys() for m in methods: if m[0] != '_' and m != 'config' and m != 'configure': setattr(self, m, getattr(self.frame, m)) def __str__(self): return str(self.frame) def example(): import __main__ from Tkconstants import END stext = ScrolledText(bg='white', height=30, width=32, font='Courier') stext.insert(END, fdat.read(1).encode('hex')) #set number of chars here stext.pack(fill=BOTH, side=LEFT, expand=True) stext.focus_set() stext.mainloop() if __name__ == "__main__": example()
and I've done alot of research on tk, I have a pdf to help me.
Edited by Tcll: n/a
Ah buddy... Okay, let me show you some hints on hex values. Your code has confused me beyond belief xDDD
Okay, here are some basic functions you're going to have to get comfortable using:
int(x [, base]) hex(x) ord(c) chr(i) str.replace(old, new)
Okay, here's how we can use this information. Say we have this string here:
'Hello world' Now, we want to convert this to a hex value, correct? Also, we don't want a nasty string like this:
'0x480x650x6c0x6c0x6f0x200x770x6f0x720x6c0x64' Kinda hard to read, right?
Okay, here's how we convert that string into a readable 2 digit hex code
# First off, some background: # ord(c) will return the ascii value of a character # That should help in understanding this # Get rid of the print statements if you want def tohex(s): hexed = '' for x in s: a = ord(x) print(a) a = hex(a) print(a) a = a.replace('0x', '') # Get rid of that ugly '0x' print(a) hexed += a return hexed tohex('Hello world')
Okay, so now we have a hex representation of that string, right? Well, now you can do your hex editing if needed be. But, we want to go ahead and reconvert that back into a string, right?
# More background: # chr(i) returns a character based on the number you type # int(x [, base]) can take two arguements # Normally, you just want something like int('9') to return an int 9 # However, we are working with a 16 base, not 10 base. # So, we want to type this: int(num, 16) for our hex values to return # that integer we need def tostr(h): string = '' # We want every 2 values (Not sure of an easier way to do this, # Other than count every other number, and do an index of evens for x in range(0, len(h), 2): a = h[x:x+2] print(a) a = int(a, 16) print(a) a = chr(a) print(a) string += a return string
Okay, so now we can convert it back now. So, running this:
tostr(tohex('Hello world')) will now return (along with some ugly print statements you should get rid of after you understand this):
'Hello world' What you can do with this, is understand what exactly you are doing with that hex editor, and actually display in rows and columns those hex values
Good luck :D
(Also, for hex editing files, you want to open them in 'rb' and 'wb', to edit the binary)
I see...
yours works quite differant from mine.
yours looks like it uses the interpreter to display the hex values,
while mine puts the hex values in a scrolled textbox...
just save the code to a .pyw file.
sorry for the confusion :$
thanx for your code btw...
I just learned something new :P
but I got it to read multiple values...
just cange a few parts of the code:
fdat = open('import.txt','rb') ...code... stext.insert(END, fdat.read().encode('hex'))
WARNING: has a tendancy to freeze when loading large files.
Ah, you misunderstand the code. You call the function, and it will return a string object that you can write. Thus, you can do this:
e = tohex('Hello world') '''e = '48656c6c6f20776f726c64''''
And then whatever you need to do, in order to display a text string in your GUI (Tkinter, I presume? I have no experience with Tkinter, so I cannot help with that). Ah, and perhaps something to speed up your read times?
Perhaps you can read in only 5~10 lines at a time, and hex that, then do that for the whole file. That should help it out. That's what I usually do
Edited by hondros: n/a
ah.. (FACEPALM) DX<
omg, why didn't I think to do that...
I R Smrt :P XD
thanx
but yea, I use tk... (universal methods)
it's old, but it works. :)
I have wx, and pythoncard...
but I rarly use them. (not everyone has them installed)
and thanx for the tip. :)
I'll use that alot
I fixed up your code a little. :)
def tohex(s): hexed = '' for x in s: a = ord(x) a = hex(a) a = a.replace('0x', '') # Get rid of that ugly '0x' if (a == '0' or a == '1'\ or a == '2' or a == '3'\ or a == '4' or a == '5'\ or a == '6' or a == '7'\ or a == '8' or a == '9'\ or a == 'a' or a == 'b'\ or a == 'c' or a == 'd'\ or a == 'e' or a == 'f'): #replace values 0 - f with 00 - 0f a = ('0' + a) a += ' ' #add spaces between hex (looks like an editor) hexed += a return hexed
fixed this bug:
it returned this:
00 6e f6 0f 06
as this:
0 6e f6 f 6
and it adds spaces so the final hex dosn't look like:
006ef60f06
kinda hard to read like that... heh
Edited by Tcll: n/a
Dude, what's up that? It shouldn't be returning '0' for hex(ord('\x00')). It will return '0x00'.
EDIT:
After relooking at the code snippet on my computer, I realize that it does not return what I thought. xD It does indeed return '0x0'. However, you can just do a check of len of each 's'. :D
if len(s) == 1: s = '0'+s
So, here's the whole code:
def tohex(s): hexed = '' for x in s: a = ord(hex(x)).replace('0x', '') if len(a) == 1: a = '0' + a a += ' ' #add spaces between hex (looks like an editor) hexed += a a = a[:-1] # Get rid of the last space return hexed
Edited by hondros: n/a
I have moved this thread to my forum:
please register to view it.
registration does not require personal information :) ... | https://www.daniweb.com/programming/software-development/threads/271357/python-hex-editor-need-help | CC-MAIN-2017-34 | refinedweb | 1,316 | 81.02 |
Automatically exported from code.google.com/p/assimp-net
Since Google Code no longer allows new downloads, the latest release can be downloaded via NuGet or on this side.
A .NET wrapper for the Open Asset Import Library (Assimp). The wrapper uses P/Invoke to communicate with Assimp's C-API and is divided into two parts:
Low-level The native methods are exposed via the AssimpLibrary singleton. "Unmanaged" structures prefixed with the name 'Ai' that contain IntPtrs to the unmanaged data. The unmanaged library is loaded/unloaded dynamically, 32 and 64 bit is supported. The managed DLL is 'AnyCPU' Located in the 'Assimp.Unmanaged' namespace. High-level A more C# like interface that is more familiar to .NET programmers and allows you to work with the data in managed memory. Completely handles data marshaling for both import and export, the user left only concerned with the data itself. Located in the default 'Assimp' namespace. Commonalities between these two levels are certain structures like Vector3D, Color4D, etc. The High level layer is very similar to Assimp's C++ API and the low level part is public to allow users do whatever they want (e.g. maybe load the unmanaged data directly into their own structures).
For a brief overview and code sample, check out the Getting Started wiki page.
The binaries were built in Visual Studio 2012 as AnyCpu and target .NET 4.5 and .NET 2.0. They are deployed with the Assimp 3.1.1 release (you can download the latest Assimp releases here).
It is fairly easy to build AssimpNet yourself as it does not rely on many external dependencies that you have to download/install prior. The only special instruction is that you need to ensure that the interop generator patches the AssimpNet.dll in a post-build process, otherwise the library won't function correctly. This is part of the standard build process when running out of a msbuild environment. Look at that build process if you are using something else.
Enjoy!
In addition, check out these other projects from the same author:
Tesla Graphics Engine - a 3D rendering platform written in C# and the primary motivation for developing AssimpNet. DevIL.NET - a sister C# wrapper for the Developer Image Library (DevIL). (Mostly a relic at this point) Follow project updates and more on Twitter
A support forum is located here. Feel free to post questions or comments on the direction of the library | https://xscode.com/assimp/assimp-net | CC-MAIN-2021-10 | refinedweb | 409 | 56.96 |
Important: Please read the Qt Code of Conduct -
How to run te application with QtQuick.Controls?
When I import QtQuick.Controls 1.0 there is an error while debugging:
bq. Error: No widget style available.
Qt Quick Controlscurrently depend on the widget module to function.
Use QApplication when creating standalone executables.
Should I add an import or maybe an include somewhere?
You are probably running your app with a QGuiApplication as the application instance. Just change that to QApplication and it should work fine.
Thank you for a hint. I've change QGuiApplication for QApplication and also include <QtWidgets/QApplication> instead of <QtGui/QGuiApplication> but there are some linking problems and no main.obj file.
I've created Qt Quick 2 Application (build-in elements). I'm also using Qt 5.1 alpha, but I had a problem during compiling it () maybe that's the ponit.
Did you also add "QT += widgets" in your .pro file? There is an ApplicationTemplate example in the examples folder that should work out of the box.
I run the ApplicationTemplate. There are some warnings and the
bq.
import QtQuick.Layouts 1.0
is underlined (errors during typeinfo files reading), but it works. My test app also works but I had to remove qmlapplicaionviewer folder and copy the main.cpp and .pro file from the example.
When I switch into the design mode QtCreator just dies. But at least in Edit mode it's working. So thank you, thank you, thank you! :)
Edit: maybe it doesn't work well.. QtCreator slowed down and often dosn't answer. The MenuBar is underlined (Could not resolve the prototype 'MenuBarPrivate' of 'MenuBar' (M301) ), but it builds the application. | https://forum.qt.io/topic/26739/how-to-run-te-application-with-qtquick-controls | CC-MAIN-2021-21 | refinedweb | 280 | 70.09 |
Using REST with Oracle Service Bus
By james.bayer on Jul 28, 2008
Overview
Oracle Service Bus (OSB), the rebranded name for BEA's AquaLogic Service Bus, is one of the products I help customers with frequently. Recently questions from both customers and internally at Oracle have been increasing about OSB support for REST. In this entry I cover the current and future state of REST support in OSB.
Background
REST can be a loaded term. There are a couple of InfoQ articles introducing REST and REST Anti-Patterns that should be required reading for those considering a REST architecture. What does REST support really mean in the context of an Enterprise Service Bus?
A Service Bus applies mediation to both clients and providers of services. This results in a loosely coupled architecture, as well as other benefits. Being loosely coupled means that Providers and Consumers can now vary different aspects of a message exchange:
- Location of Provider/Consumer
- Transport (HTTP, HTTPS, JMS, EJB, FTP, Email, etc.)
- Message Format (SOAP, POX - Plain Old XML, JSON, text, etc.)
- Invocation Style (Asynchronous, Synchronous)
- Security (Basic Auth, SSL, Web Service Security, etc)
Two aspects where Service Bus support for REST would apply in an example would be Transport and Message Format.
A Contrived But Realistic Example
Consider a contrived example where REST support in the Service Bus could easily come up. Assume an organization already has an Order SOAP based service that is mediated by the Service Bus. Currently this service is used by several applications from different business units. SOAP web services with well-known XSD contracts work great for an interchange format for existing systems. However, there is a new initiative to give management a mobile application to get order status. The sponsoring executive has heard that AJAX is important and tasked an intern to quickly build the front-end because of frustration with IT delays. The front-end is quickly mocked up using an AJAX framework and the executive tells the IT staff to deploy it immediately. The problem is that the client-side AJAX framework doesn't work with SOAP and XML, it was mocked up with JSON.
Instead of developing a second Order Service to support this new invocation style and format, OSB can be configured to mediate both the invocation style and the message format and reuse the perfectly working, tested, provisioned, etc Order Service. A new proxy could be configured to accept HTTP requests and map them to SOAP over HTTP. The response, which is normally XML described by XSD, can be converted to JSON.
OSB Capabilities
A common REST approach in practice is the use of HTTP verbs POST, GET, PUT, and DELETE to correspond to the CRUD methods of CREATE, READ, UPDATE, DELETE. So we can map those verbs to the Order Service, mapping DELETE to cancelOrder since it is against company policy to delete orders from the system.
The HTTP transport for inbound requests (a Proxy) in the Service Bus provides a mechanism to intercept invocations to a user customized URI base path. So if my server binds to the context "user_configured_path" and localhost and port 7001, then the HTTP Proxy intercepts all HTTP GET and POST calls to . This includes suffixes to that path such as .
AquaLogic Service Bus 3.0 (as well as 2.x releases of ALSB) supports GET and POST use cases, but does not have developer friendly support for HTTP verbs PUT and DELETE out of the box. For GET requests, pertinent information is available in the $inbound/ctx:transport/ctx:request/http:relative-URI variable. To parse query string information, use $inbound/ctx:transport/ctx:request/http:query-string. A common pattern for HTTP GET is to use XQuery parsing functions on that variable to extract extra path detail, but the $body variable will be empty. For POST requests, the same applies, but the $body variable may have a message payload submitted as xml, text, or some other format.
I have heard from product management that first class support for HTTP verbs such as PUT and DELETE is planned in the next release of OSB, which is scheduled sometime in the Fall of 2008. For customers that require PUT and DELETE verb support immediately, OSB does have a transport SDK, which could be used to add inbound and outbound REST support. That approach would require some development. There are other inbound options besides the transport SDK, such as a basic approach of deploying an HTTP servlet with doDelete() and doPut() implemented. This servlet could be deployed on OSB instances and forward to specific proxies to provide PUT and DELETE support. An outbound option would be to use a java call-out to invoke a REST style provider.
Returning to the example for a moment, the executive dashboard only requires order status in the first release, so only HTTP GET support is necessary. In the existing SOAP-based Order Service, the interaction looks like this:
An equivalent REST-ful URL might look like where 123 is the OrderID. Clients using this interface might expect a JSON response like this:
{
"Status":"Submitted",
"OrderID":"123",
"OrderDetailID":"345",
"CustomerID":"234"
}
Types like int are supported JSON values, although the above value types are strings as you can see by the double quotes. Let's just assume that strings are ok for now to make this easier. Here is a proxy configuration bound to the Order URI context:
Notice that the response type is "text" because we will convert the XML to JSON. I tried using the JSON java library from json.org to convert xml nodes in the response to a JSON format. One gotcha is namespace conversion. For simplicity I tested with xml nodes with no namespaces, and that worked. That code looks like this:
org.json.JSONObject j = org.json.XML.toJSONObject( xmlStringBuffer.toString() );
System.out.println( j.toString() );
There are other techniques to convert between XML and JSON that use convention to map namespaces to JSON structures, but that is for another time. Because my example used real-world xml with namespaces, I used a java callout to loop over child nodes of an XmlObject to build the JSON structure in java code. Let me know what you think and if you have considered alternative approaches. I've attached my code artifacts that are completely for illustrative purposes only and worked on my machine with ALSB 3.0 and Workshop 10.2.
For another example that works with currently released versions of Service Bus, check out "The Definitive Guide to SOA - BEA AquaLogic Service Bus" by Jeff Davies. In the book he has a Command example where a Proxy is used for a service that handles multiple types of commands. An updated version of the book should be available shortly.
Posted by Jay Blanton on August 14, 2008 at 03:04 AM PDT #
Posted by James Bayer on August 14, 2008 at 03:29 AM PDT #
Posted by Damien on October 23, 2008 at 05:57 PM PDT #
Posted by James Bayer on October 23, 2008 at 07:05 PM PDT #
Posted by Fran on February 13, 2009 at 03:05 AM PST #
Posted by James Bayer on February 19, 2009 at 05:24 AM PST #
Posted by linesh on February 23, 2009 at 05:33 PM PST #
Posted by James Bayer on February 23, 2009 at 08:43 PM PST #
Posted by Jason Kim on March 02, 2009 at 06:45 AM PST #
Posted by Emiliano Pecis on April 22, 2009 at 03:55 AM PDT #
Posted by James Bayer on April 22, 2009 at 04:02 AM PDT #
Posted by Tom Kies on June 30, 2009 at 12:40 PM PDT #
Posted by James Bayer on June 30, 2009 at 07:53 PM PDT #
Posted by Ajoy on November 08, 2009 at 12:14 PM PST #
Posted by james.bayer on November 08, 2009 at 08:51 PM PST #
Posted by Ardi on November 25, 2009 at 12:28 AM PST #
Posted by james.bayer on November 28, 2009 at 12:35 AM PST # | https://blogs.oracle.com/jamesbayer/entry/using_rest_with_oracle_service | CC-MAIN-2015-40 | refinedweb | 1,348 | 58.62 |
Uncyclopedia:QuickVFD/archive94
From Uncyclopedia, the content-free encyclopedia
March 21st
User:WalshMarvin321 - spam WalshMarvin321 - spam
March 16th
AndresenPaez466 - spam User:AndresenPaez466 - spam User:ParkinMorello196 - and more spam ParkinMorello196 - and even more Forum talk:Accepting Image Requests - spam Giant Evil Fucking Shit User:Jocke Pirat/TheRegister/ - spam
March 15th
User:Déménageurs - spam
March 14th
Creative Commons (redirect) (delete)
March 13th
No requests.
March 12th
Category:Lines almost unused and little potential for use --Remaining uses replaced by Category:Geometry
Template:Bushism unused in mainspace Template:Canned unused in mainspace. Template:OWQL unused template template:OWQ
This was VFD'd a while, then restored solely because the deletion was generating red links. I went through and replaced it with {{Wilde}}, or deleted it when it wasn't funny. It is no longer used outside of userspace or talk pages. [-Mnbvcxz]
- But you have not yet reached the core of this cruft; see Uncyclopedia:Templates/In-universe/Quotes. Spıke ¬ 17:20 12-Mar-13
March 11th
Template:Re-Write deprecated template
March 10th
User:Ajovano26 - Possible UN:CB in history, but blanked by author. -- "I love Joel" is unlikely CB (even if "Lumpy" is his pet name), but it's huffed now.
March 7th-9th
No requests.
March 6th
file:Zynga2.jpg more porn, censored, but hardcore file:Zynga3.jpg unused and porn
March 5th
User:Alksub/9 - Author request --
Alksub - VFH CM WA RV {talk} 20:47, March 5, 2013 (UTC) Face For Radio User:Bartolrana = spammmAlksub - VFH CM WA RV {talk} 20:47, March 5, 2013 (UTC) Face For Radio User:Bartolrana = spammm
March 3rd-4th
No requests.
March 2nd
Unquotable:Missing page (redirect) (delete) unused redirect User talk:Robertmalick100
March 1st - then shoot
Undictionary:Mathematics - meh User:NTTjassi - spamspamspam
February 28 - green eggs and spam
Words from Panoramic Group Chairman Sudhir Moravekar - spam User:Igf-1 lr3 - spam User:Robertmalick4 - spam
February 27
Nothing.
February 26
History of Singapore- broken redirect, page moved from mainspace back to userspace - No; it remains in mainspace until you write something better. --Spike Lista di pisippi Pisippo En.uncyclopedia.co Uncyclopedia.wikia.com Power Rangers Nazi Force - I second the above. Meme meme memitty meme. User:Tantra massage - spamspamspam
February 25
Undictionary:Nazi - 2nd opinion, please: Anon's work today seems uniformly bad Undictionary:Los Angeles Undictionary:Corn Undictionary:Harlem Shake - Wikipedia has a disambiguation page on this, and none of them lead to a dessert.
February 24th--I feel crufty pages approaching
Game:OT/KRC/Village/followhim IWP (redirect) (delete) - Unused redirect (WP was used in one article, but has since been removed. WP Hot Chicks (redirect) (delete) - Unused redirect (and double redirect) IWP Hot Chicks (redirect) (delete) - useless redirect
February 23rd
IWP:WS! (redirect) (delete) - Dead redirect (This a preceding pages were previously at WP namespace. Now no longer viable due to iw link. IWP:PILLARS (redirect) (delete) - Dead redirect IWP:CIVIL (redirect) (delete) - Dead redirect IWP:ADMIN (redirect) (delete) - Dead redirect IWP:3RR (redirect) (delete) - Dead redirect Josh brignoni- vanity
February 21-22
No requests.
February 20th - Fancy a Crumpet?
Test of abuse filter 17 - tests done - I think. Good warning, btw. Forum talk:The Unquotable namespace
February 19th
Image:Pogrom.jpgdead people, probably against policy --No; in use (and user is on VFH!) Ballyclare Talk:Tobacco - spam
February 18th
File:Thanksgiving-NicoleWestbrook.jpg- In use PrankVSPrank
February 17th - Only 310 days until Christmas
User talk:Phrank Psinatra- request rescinded George W. Bush (quotes) (redirect) (delete) - Only linked via archives - non-essential. PrankVSPrank File:Volcano boom.gif - Requested by uploaded previously. No longer in use.
Category:Dora's Hitlist on one userpage Template:Do Not Remove used on only 3 userpages Template:A Template per above
February 16th - The broom is off the loase
User talk:NameableUser page to be maintained unless the user requests otherwise Forum:FFW 2011Forum to be maintained User:Cat the Colourful/Archive 2User page to be maintained unless the user requests otherwise Forum:Count to a million/The 16th archiveForum to be maintained Forum:Hi, folks! Guess what? Wikia is censoring us.Forum to be maintained File:Volcano boom.gifImage is in use Forum:An open letter from my hoard of ponies/archive1No, we aren't deleting Forums where you are discussed, Aimsplode User talk:Zim ulatorNo, we aren't deleting OTHER PEOPLE'S TALK PAGES, Aimsplode, see above
February 15th - Valentine's Day is over and you can get roses cheaper
Legal Department/sceduale (redirect) (delete) - Created in error. Legal Department/Summons (redirect) (delete) - Created in error --But placed on fifteen or so talk pages (without benefit of template?) --Uncyclopedia Legal Department/Summons (the original location) is linked many places.... --You're right; it's deleted now. --Spike Undictionary:Roseus Hominus - Spike sez it's racial stereotypes without humor, asks for 2nd opinion PotR hacks at it, concludes "still pathetic"
February 13th - Pointless headers are the future
No requests. Actually, skipping entire days, like the 14th, may be the future.
February 12th
R Floor Questions and Answers spam
February 11th
Power Rangers S.S.S - WTF? Joss sibbering - Spike has asked author to show why this isn't vanity; waiting....
February 10th - Tabula Rasa
No requests.
February 9th - Get the shovels out
Forum:Are you a Matthlock? - Wasted forum. Forum:Last person to edit wins - Delete and/or change name references from "A_msp_de" to HGA.--No, Aimsplode, we are not going to cater to your new identity by editing the past. --Spike File:Tree-icon.png Avatar: The Last Airbender - Give Anon until the end of the day to do something with this spork
February 8th - Let's get snowed in!
I'll archive this tomorrow morning (EST) to see if the two authors resolve the issues just below. Spıke ¬ 00:08 9-Feb-13
Mugabley's law - Hold off; Spike asks author to show that it is not about himself --He uploaded a photo today of Richard Branson (which would work) but went no further: Userspaced Young boy cruzer - Hold off; Spike offers author one chance to recast to not be about his personal friend Floyd - Copypasta of Stupid. Changing the name suggests UN:CB User talk:SPIKE/Welcome - test - and I don't see any issues, Forum talk:Proofreading Service - spam
February 7th - Any weather you don't have to shovel....
Forum:An open letter from my hoard of ponies/archive2 File:2000px-HinduSwastika.png
February 6th
Uncyclopedia:Pee Review/nickelback ISKCON Funnybony request via email to Romartus to delete this article by him. Any objections?
User talk:Sumone10154- User request. –sumone10154(talk) 01:45, February 6, 2013 (UTC) There's nothing in the history except a former pointer to an archive file--which we would not delete either. --Spike
February 5th
User:PuppyOnTheRadio/Aimsplode - On review, UN:CB, and the target user wants it gone. User:Cat the Colourful/Aimsplode User:Heidi Range/Articles About Stuff - Blatant fanboyish advertising Silviu Ardelean User:Sumone10154 (User request)
February 4th - You don't pronounce the first "r", do you?
No requests.
February 3rd - It is
UnNews:MS-Windows barely survivesAdded in error. --ChiefjusticeXBox 18:48, February 3, 2013 (UTC) Talk:The Wikipedia BBS - created by spambot
February 2nd - It could be worse
Tulisa File:Chariot race.gif - Abortive first attempt at Chariot_race.jpg, 1px Popular People - Vanity/schoolcruft Template:NSFPArticle - Aleister claims on VFD it has no remaining users. Evil God - Adding {VFD} gave it two more days of life, which it doesn't deserve. Time's up. | http://uncyclopedia.wikia.com/wiki/Uncyclopedia:QuickVFD/archive94 | CC-MAIN-2014-35 | refinedweb | 1,244 | 54.32 |
You can create, load, and execute your own Stata plugins.
Note that any new features will be backwards compatible, meaning that plugins created using this current documentation should continue to work under newer versions of the Stata plugin interface (described in 2. What is the Stata plugin interface?).
The following topics are discussed.
A plugin is a piece of software that adds extra features to a software package. In Stata, a plugin consists of compiled code (written using the C programming language) that you can attach to the Stata executable. This process of attaching the plugin to Stata, known as dynamically loading, creates a new customized Stata executable that enables you to perform highly specialized tasks.
Plugins can serve as useful and integral parts of Stata user-written commands. Because they consist of compiled code, plugins run much faster than equivalent code written in the ado-language, where each command must be interpreted each time it is executed. The amount of speed gained depends on how much interpretation time is saved; see 4. When are plugins most beneficial?
When describing plugins, one often uses the term dynamically linked library, or DLL for short. While it is common to use the terms DLL and plugin interchangeably, a DLL is a piece of code designed to be called by various applications, while a plugin is specific to one application (in this case, Stata).
The Stata plugin interface (SPI) is the system by which plugins are compiled, loaded into Stata, and called from Stata. The SPI consists of four components:
These four components work together, so a capability added to (1) would require a change to stplugin.h in (2).
Advantages of plugins:
Disadvantages of plugins:
The Stata ado-language is a rich language that allows programmers to easily perform a myriad of tasks: parse syntax, set options, mark observations to be used for estimation, perform vectorized calculations on your data, perform complex matrix manipulations, save estimation results, etc. For all its richness, Stata ado-code is also remarkably fast. For this reason, we recommend using plugins in only a limited set of circumstances. When considering whether to write a plugin, ask yourself
If the answer to (1) is "a lot", and the answer to (2) is "not very", then you have more to gain by using a plugin to perform the task in question. By asking yourself these two questions, you are analyzing the trade-off between time saved by a faster-running program versus the additional time spent writing your code in a lower-level C program (e.g., think of how long it would take you to write a C program that could do what syntax can). The most speed to be gained by writing a plugin is in the time saved by not having to interpret line after line of ado-code. Realize, however, that the number of lines of interpreted ado-code must be quite large for you to notice a gain in speed when using a plugin.
When answering (1), count how many lines of ado-code require interpretation, rather than just counting the absolute number of lines in your block of ado-code. Loops in Stata are reinterpreted at each iteration. Thus an ado-loop consisting of five lines of code that is iterated 100 times really counts as 500 lines to be interpreted, not 5 (or 7, if you count the opening and closing of the loop).
Because loops are usually employed to perform simple tasks many times, rewriting an ado-loop as a plugin can result in a much faster execution time. This is especially true if you are looping over the observations in your data, because the number of observations can become arbitrarily large and is limited only by the amount of memory in your computer.
Note, however, that Stata goes through a lot of trouble to ensure that you rarely have to loop over the observations in your data. Almost all of Stata's mathematical operators and functions are vectorized, meaning that they are overloaded to implicitly perform calculations over all the observations on your variables. When you code
replace x = y + z
you are taking each observation in variable y, adding it to each respective observation in z, and placing the result in variable x. The looping over the observations is done implicitly in the Stata executable, and thus this command runs very quickly.
You would never code the above as
forvalues i = 1/`=_N' { replace x = y + z in `i' }
because this would be very slow by comparison. Similarly, you should always try first to find a vectorized solution for whatever you are calculating.
Still, loops over the observations are sometimes unavoidable and often occur in Stata programs. However, you rarely see them coded in the ado-language in programs that ship with official Stata. What you see instead is a call to an internal Stata routine, a piece of the Stata executable. A plugin is essentially the same thing; the only difference is that a plugin is written by you, the user, instead of by those of us at StataCorp.
Plugins are also useful in matrix calculations. Despite all of Stata's matrix operators and functions, it is sometimes necessary to loop over all the elements of a matrix and calculate them one by one. For example, you might need to construct the Hessian matrix for a nonlinear-form likelihood with an arbitrarily large number of equations.
Plugins are also useful when you are writing a routine that will be called very often from other Stata programs, including your own programs and those from official Stata. For example, suppose that you are writing a method d0 likelihood evaluator for ml. Given the number of numerical derivatives required, these types of programs are called quite often by ml. In this case, the savings in execution time come from replacing the majority of the body of the evaluator program with a call to the plugin.
A plugin could also come in handy when one is taking an existing routine written in C and writing a wrapper program so that the routine can be called from Stata. The plugin code in this case would consist of the original routine coupled with the wrapper program, which would take input from Stata, perform the appropriate call to the routine, and then take the output from the routine and return it to Stata in the appropriate way. In this case, the savings in time are obvious—the routine already exists.
The above are only a few examples, as there are many other situations where plugins prove useful. In summary, when considering writing a plugin, simply ask yourself how much interpretation time you would save by resorting to precompiled code, while factoring in the additional time required to write the C code. If you are an inexperienced C programmer, it may take a lot of time to write a plugin. Also consider how often you plan on using your program. If you are performing one analysis on a set of data, a plugin is probably not worth your time.
The process of creating a Stata plugin involves using the header file, stplugin.h; the C source file, stplugin.c; and the files containing your plugin code to form a shared library, or DLL. You should not make any changes to stplugin.h or stplugin.c. Any code you write should be confined to files other than these.
Note: If you are compiling C++ source code, see section 9. Compiling C++ plugins.
In what follows, we consider what to do when your plugin code consists of only one source file, filename.c, yet the procedure we describe also applies when you have multiple plugin source files. For example, consider the simplest of all plugin programs, hello.c, which contains the following code:
#include "stplugin.h" STDLL stata_call(int argc, char *argv[]) { SF_display("Hello World\n") ; return(0) ; }
The type declaration STDLL defines the routine as one that returns a Stata return code. The name of the routine, stata_call(), is constant, as are the arguments argc and argv, but we'll get to that later. For now, just know that this is how you begin any routine you wish to call directly from Stata. Stata keeps track of multiple plugins, based on the name of the created shared library for each. The routine SF_display() displays output to the Stata results window. Thus our program simply displays the message "Hello World" to the Stata results window and returns a return code of 0 upon completion.
As previously stated, the process of creating a plugin involves taking the files stplugin.h, stplugin.c, and hello.c, and forming a DLL. The instructions for doing so depend on which C compiler you are using.
If you are using Microsoft Visual Studio .NET, version 2002, for example, you would do the following:
If you are using Microsoft Visual Studio 2005 you would do the following:
If you are using Cygwin and the gcc compiler under Windows, type
$ gcc -shared -mno-cygwin stplugin.c hello.c -o hello.plugin
If you are using the Borland C compiler, type
$ bcc32 -u- -tWD -ehello.plugin hello.c stplugin.c
Borland uses a different linker-naming convention than Visual C++ and gcc, so the -u- compiler option must be set. By default, the compiler automatically adds an underscore character (_) in front of every global identifier, which you must disable for your plugin to work with Stata. Unfortunately, using the -u- compiler option can have an unwanted side effect. When this option is used, standard C library functions, such as strcmp, are no longer correctly referenced. To correct the problem, library functions such as these will need an underscore character (_) added to the beginning of the function in your source code.
If you are using some other C compiler under Windows, follow that compiler's instructions for creating a DLL. For Windows, doubles are aligned on 0 mod 8 boundaries.
If you are using gcc, simply type
$ gcc -shared -DSYSTEM=OPUNIX stplugin.c hello.c -o hello.plugin
If you are compiling under Hewlett-Packard HP-UX, you may need to instead type
$ gcc -shared -DSYSTEM=HP9000 stplugin.c hello.c -o hello.plugin
Note that in the above example, we create the shared library hello.plugin. There is no significance to the file extension .plugin other than to load the plugin more conveniently into Stata, as discussed in the next section.
If you are using a compiler other than gcc, follow the instructions for that particular compiler. Note that you need to (a) compile under ANSI C, (b) specify the compiler flag -DSYSTEM=OPUNIX (or -DSYSTEM=HP9000 in the case of HP-UX), and (c) specify any additional flags necessary for your plugin (such as -lm).
Also note that if you are using 64-bit versions of Stata, you must specify the appropriate compiler flags to produce a plugin compatible to 64-bit architecture.
If you are using gcc, type
$ gcc -bundle -DSYSTEM=APPLEMAC stplugin.c hello.c -o hello hello.plugin. Copy or move this file to a place where it can be accessed by Stata, such as your current working directory or somewhere along your personal ado-path.
If you are using Xcode, you want to create the project as a BSD Dynamic Library and change the target to a bundle. After you have included the source files to the project, open the target settings for your plugin and change the following: do not see any GCC-4.0 Language settings, it is because you have not added your source files to the project. The Other C++ Flags setting is automatically set when you set the Other C Flags setting.
We recommend that you compile the plugin for all Mac platforms by setting the Architectures setting to i386 x86_64 ppc ppc64. You will build one plugin that will work on any Mac.
Think of loading a Stata plugin as taking the plugin file (created in the previous section) and attaching it to the Stata executable. Once loaded, the plugin can be executed.
Plugins are loaded into Stata using the program command with the plugin option. For example, assume we created hello.plugin and placed it in our Stata ado-path. To interactively load this plugin into Stata, type
. program hello, plugin
It is that simple. Also, because your "program" is really a plugin, there is no end statement following this program definition.
The syntax for loading a plugin is
. program handle, plugin [using(filespec)]
where handle is how you want your plugin to be named within Stata. If using() is not specified, Stata will attempt to locate the file handle.plugin within your ado-path and will load it if it exists or issue an error if it doesn't.
You specify using() when either (a) your plugin has a file extension other than .plugin (such as .dll), (b) your plugin is not located within your ado-path, or (c) you wish to use a handle that does not coincide with the name of your plugin file. For example,
. program myhello, plugin using("/home/rgg/hello.dll")
would load the hello.dll that is located in the directory /home/rgg (presumably not in the current ado-path), and this plugin would be named myhello within Stata.
program, plugin follows most of the same rules that program follows; see [P] program. For instance, if a plugin is used interactively or within a do-file, you can unload it via
. program drop handle
You can also define a plugin as a subroutine within an ado-file. For instance, consider the ado-file sayhello.ado, which contains
program sayhello version 9.2 command to execute hello plugin end program hello, plugin
When used in this manner, the plugin (like its parent ado-file) is loaded only as needed, namely when we run sayhello.
Although you may define a plugin as a subroutine to an ado-file, you may not define a plugin as its own ado-file. If, for instance, you had the file sayhello.ado containing nothing but
program sayhello, plugin using("hello.plugin")
subsequently, typing sayhello from within Stata would result in the following error:
unrecognized command: sayhello not defined by sayhello.ado r(199);
The reason this doesn't work is technical and is a result of how plugins are actually executed (see the next section). In any case, having a wrapper ado-program, such as the original version of sayhello.ado, is easy enough.
Once you have loaded your plugin, you execute it using the Stata command
plugin call handle
where handle is the name by which Stata refers to the plugin.
For example, using the plugin hello.plugin created above, you can execute it interactively by typing
. program hello, plugin . plugin call hello Hello World
Alternatively, you can execute the plugin from within an ado-file. If we have the file sayhello.ado, which contains
program sayhello version 9.2 plugin call hello end program hello, plugin
then within Stata you can run sayhello, which, in turn, automatically loads and executes the plugin.
. sayhello Hello World
At this point, we admit that we have been a bit simplistic in describing the syntax for plugin call. We were able to get away with this because hello.plugin wasn't very complicated.
The full syntax for plugin call is
plugin call handle [varlist] [if exp] [in range] [,arglist]
That is, its syntax is very similar to a typical Stata estimation command.
This syntax allows Stata and your plugin to easily communicate because it allows the Stata command-line parser to do much of the work for you.
These arguments are passed to your plugin in a way that is very familiar to C programmers. The number of arguments is passed to stata_call() as argc and the arguments themselves as elements of the string vector argv. That is, the first argument in arglist is passed as argv[0], the second as argv[1], and so on.
Consider the file showargs.c, which contains the following code:
#include "stplugin.h" STDLL stata_call(int argc, char *argv[]) { int i ; for(i=0;i < argc; i++) { SF_display(argv[i]) ; SF_display("\n") ; } return(0) ; }
When executed, this program merely replays each argument supplied to plugin call via arglist.
$ gcc -shared -DSYSTEM=OPUNIX stplugin.c showargs.c -o showargs.plugin . program showargs, plugin . plugin call showargs, A "this is the second argument" scalarname 4.5 A this is the second argument scalarname 4.5
In this section, we describe the routines that make it possible to communicate results between Stata and your plugin.
When you type
. plugin call handle varlist
The variables in varlist are passed, in order, to the plugin. The following routines are used to manipulate these variables.
ST_retcode SF_vdata(ST_int i, ST_int j, ST_double *z); ST_retcode SF_vstore(ST_int i, ST_int j, ST_double val); ST_retcode SF_sdata(ST_int i, ST_int j, char *s); ST_retcode SF_sstore(ST_int i, ST_int j, char s); ST_int SF_sdatalen(ST_int i, ST_int j); ST_retcode SF_strldata(ST_int i, ST_int j, char *s, ST_int len); ST_boolean SF_var_is_string(ST_int i); ST_boolean SF_var_is_strl(ST_int i); ST_boolean SF_var_is_binary(ST_int i, ST_int j);
For numeric data, SF_vdata() reads the jth observation of variable i in varlist and places this value in z. SF_vstore() takes val and stores it in the jth observation of variable i in varlist.
You can determine whether a particular variable is a numeric variable or a string variable with SF_var_is_string(). It checks the ith variable and returns 1 if the variable is a string variable (meaning str# or strL) and 0 if the variable is a numeric variable. You can further use SF_var_is_strl() to check whether a particular string variable is a str# variable or a strL variable. It returns 1 if the variable is a strL and 0 otherwise.
If you have string variables (but not very long string variables, or strLs), use SF_sdata() and SF_sstore() to access the values of the variables. These routines can handle strings up to 2045 characters in length. These routines return 0 if successful or issue a nonzero return code if an error occurs. For example, if you attempt to exceed either the number of observations in your data or the number of variables you have specified, both routines will return error code 498.
If you have strL variables, use three functions to deal with them. Values of strL variables can be text or binary, so first use SF_var_is_binary() to check the jth observation of the ith variable. It returns 1 if the value is binary and 0 otherwise.
Then use the pair of functions SF_sdatalen() and SF_strldata() to access the value in each observation of a strL variable. SF_sdatalen() returns the length, in bytes, of the jth observation of the ith variable. If the value is binary, as determined by SF_var_is_binary(), the length is the number of bytes. If the value is not binary, it is text, and the length is the number of bytes not including the terminating \0 character.
Once you know the length of the value, you use SF_strldata() to retrieve it. Each value of a strL variable can be up to 2 GB in size. So, you may wish to first obtain the length of a particular value with SF_sdatalen(), then allocate enough storage space for that value, and then retrieve the value with SF_strldata(). Or, if you know that none of your values are longer than a particular size, or you don't care to retrieve any data past that particular size, you can use the fourth argument, len, to SF_strldata() to specify the number of bytes available for storage in the third argument, s. Even if the value is longer than that, SF_strldata() will truncate it to the maximum length you specify. SF_strldata() returns the number of bytes copied into s, not including the terminating \0 character for text strings. It returns -1 if an error occurs.
All data manipulation occurs in double precision. That is, suppose that you pass an integer variable as part of varlist. When you obtain a value of this variable, you obtain it as a double. When you store a value of this variable, you store it as a double. In Stata, whatever doubles you stored are cast back to integers, the original storage type of your variable, and will probably result in a loss of precision. If you wish to retain precision upon returning to Stata, be sure to pass in any variables that you wish to store as values for doubles. If you are dealing with strings, make sure that the Stata string variable is large enough for the string value that you pass it when using SF_sdata(), and make sure you have a large enough string to store the value returned by SF_sstore().
Note that a plugin cannot create a new Stata variable but will happily fill in one that you have created in Stata and set it to either missing, zero, or whatever.
The following routines are used to obtain the dimensions of your data array:
ST_int SF_nobs(void); > ... /* returns the number of observations in your data */ ST_int SF_nvars(void); > ... /* returns the number of variables specified in varlist */ ST_int SF_nvar(void); > ... /* returns the total number of variables in your data */
Alternatively, you may wish to operate only on selected observations in your data. When you specify an if condition or an in range to plugin call, these conditions are handled by the following routines:
ST_boolean SF_ifobs(ST_int j); > ... /* evaluates to 1 when if condition is true in obs. j */ /* 0 otherwise */ ST_int SF_in1(void); > ... /* returns first observation number for in range */ ST_int SF_in2(void); > ... /* returns last observation number for in range */
When you do not specify an if condition, you intend to work on all the observations in your data, so SF_ifobs() will evaluate to 1 everywhere. Similarly, if you do not specify an in range, SF_in1() returns 1, and SF_in2() returns _N.
Note that SF_ifobs() does not automatically evaluate to 0 outside any in range you specify. SF_ifobs() is concerned with only the veracity of the if condition, should you specify one.
Putting all this together, consider the following code, varsum.c, which will take the first k-1 of k specified variables, sum them across each observation, and store the results in the last specified variable. Naturally, we want to also respect any if condition or in range specified by the user.
) ; }
We'll leave it to you to try this out.
The following routines are used to handle Stata matrices:
ST_retcode SF_mat_el(char *mat, ST_int i, ST_int j, ST_double *z); SF_mat_el() takes the [i,j] element of Stata matrix mat and stores it into z. SF_mat_el() returns a nonzero return code if an error is encountered. ST_retcode SF_mat_store(char *mat, ST_int i, ST_int j, ST_double val); SF_mat_store() stores val as the [i,j] element of Stata matrix mat. SF_mat_store() returns a nonzero return code if an error is encountered. ST_int SF_col(char *mat); SF_col() returns the number of columns of Stata matrix mat, or 0 if the matrix doesn't exist or some other error. ST_int SF_row(char *mat); SF_row() returns the number of rows of Stata matrix mat, or 0 if the matrix doesn't exist or some other error.
As with data variables, plugins cannot create new Stata matrices, but they can be used to fill in matrices you have previously defined in Stata and set to, say, all missing.
For example,
SF_mat_store("A", 1, 2, 3.4) ;
would store 3.4 as the [1,2] element of Stata matrix A. In general, however, a plugin would not know the name of the matrix ahead of time. For instance, suppose the matrix name was a Stata tempname. In this case, you would use the arglist feature of plugin call to pass in the name of the matrix
. tempname mymat . mat `mymat' = (1,2 \ 3,4) . plugin call change12, `mymat'
where change12 is a plugin that essentially performs the following:
SF_mat_store(argv[0], 1, 2, 3.4) ;
The following routines are used to store and use Stata macros and scalars. By macros, we mean both global macros and local macros (local to the program calling the plugin). Internally, global macros and local macros share the same namespace, with the names of local macros preceded by an underscore (_).
ST_retcode SF_macro_save(char *mac, char *tosave); SF_macro_save() creates/recreates a Stata macro named by the string mac and stores into it the string tosave. SF_macro_save() returns a nonzero return code in the case of an error (an invalid name, for example) ST_retcode SF_macro_use(char *mac, char *contents, ST_int maxlen); SF_macro_use() takes the first maxlen characters of what is contained in the Stata macro named mac and places them into the character array contents. SF_macro_save() returns a nonzero return code in the case of an error (an invalid name, for example). SF_macro_use also copies a null byte into contents after these maxlen characters. So the size of contents needs to be at least maxlen+1. ST_retcode SF_scal_save(char *scal, double val); SF_scal_save() creates/recreates the Stata scalar named by the string scal and stores val in it. SF_scal_save() returns a nonzero return code in the case of an error (an invalid name, for example) ST_retcode SF_scal_use(char *scal, double *z); SF_scal_use() takes the value of the Stata scalar named scal and places it into z. SF_scal_use() returns a nonzero return code in the case of an error (an invalid name or nonexistent scalar, for example)
As with matrix names, your plugin typically obtains the names of scalars and macros from arglist.
For example, consider a plugin that takes two arguments, a scalar name and a local macro name. The plugin takes the value of the scalar and places a string-formatted version of it into the local macro.
Below we list the code to do this, contained in scaltomac.c:
#include <stdio.h> #include <string.h> #include "stplugin.h" STDLL stata_call(int argc, char *argv[]) { ST_retcode rc ; ST_double d ; char macname[40] ; /* 32 would be enough */ char buf[40] ; if(argc != 2) { return(198) ; /* syntax error */ } if(rc = SF_scal_use(argv[0],&d)) return(rc) ; /* read scalar */ strcpy(macname,"_") ; /* local macro */ strcat(macname,argv[1]) ; sprintf(buf, "%lf", d) ; /* convert to string */ if(rc = SF_macro_save(macname,buf)) return(rc) ; /* save macro */ return(0) ; }
If this code were compiled to form scaltomac.plugin, we could do the following interactively within Stata:
. program scaltomac, plugin . scalar jean = 45.999 . plugin call scaltomac, jean marie . di "`marie'" 45.99900 . plugin call scaltomac, notdefined x1 r(111);
Notice that the second time we called the plugin, we fed it a nonexistent scalar. As a result, SF_scal_use() complained, and our plugin passed the error code along to Stata.
We could have just as easily passed the local macro name with the underscore included in arglist, and, in fact, if we did it this way, we would be more general. We could use our plugin to define a global or local macro.
The following routines are used to display results in Stata.
ST_retcode SF_display(char *); ST_retcode SF_error(char *);
SF_display() takes the given string and outputs it to the Stata results window. Before the string is printed, however, it is run through the Stata SMCL interpreter. For example,
SF_display("for more help, see {help stcox}\n");
would result in Stata with
for more help, see stcox
with "for more help, see" displayed in green (or your default text color) and "stcox" displayed in blue and shown as a link. See [P] smcl for more details.
In debugging your plugin code, it is useful to print out the values of the variables used in your code. To do so, use SF_display() after first printing your results into a string, such as
char buf[80] ; snprintf(buf, 80, "The value of z is %lf\n", z) ; SF_display(buf);
SF_error() works the same way as SF_display(), except that the output shows up even when it is running quietly from within Stata. Hence, SF_error() is ideal for error messages.
Within your plugin, calculations on the data and on Stata matrices occur in double precision. When performing mathematical operations on data and matrix elements obtained from Stata, it is good programming practice to check for missing values, because they can wreak havoc on your calculations. The routine
ST_boolean SF_is_missing(ST_double z);
can be used to check if z is what Stata would call "missing". Conversely, you can set data and matrix elements to missing by using the constant SV_missval, for example
SF_vstore(i, j, SV_missval) ;
This section describes how to compile plugins written in C++. Just as with compiling C-style plugins, you will require the header file, stplugin.h; the C source file, stplugin.c; and the files containing your plugin code. You should not make any changes to stplugin.h or stplugin.c. Any code you write should be confined to files other than these.
Consider the program varsum.cpp, which contains the following:
#include "stplugin.h" // Note that the use of an object in this example is entirely unnecessary // from a design standpoint, but it is used to provide a interesting example // of C++ syntax. // Simple class declaration with a routine to compute varsum, and a routine // to access the stored sum. // class VarLogic { public: VarLogic(void) { sum = 0.0 ; } // constructor ~VarLogic(void) { return ; } // destructor ST_retcode computeSum(void) ; // method defined below protected: ST_double sum ; // object's sum variable }; // Method to compute our varsum // ST_retcode VarLogic::computeSum(void) { ST_int j, k ; ST_double z ; ST_retcode rc ; if(SF_nvars() < 2) { return((ST_retcode)((ST_retcode) 0) ; } // Regular C-style stata_call() // STDLL stata_call(int argc, char *argv[]) { VarLogic vlogic ; // Object with varsum logic return vlogic.computeSum() ; // Calculate varsum }
As previously stated, the process of creating a plugin involves taking the files stplugin.h, stplugin.c, and varsum.cpp and forming a DLL. The instructions for doing so depend on which C++ compiler you are using.
If you are using Microsoft Visual Studio .NET, version 2002, for example, the procedure will be the same as described in 5a. Compiling under Windows, with the following exception:
If you are using Cygwin and the gcc compiler under windows, type
$ g++ -shared -mno-cygwin stplugin.c varsum.cpp -o varsum.plugin
If you are using the Borland C compiler, type
$ bcc32 -u- -tWD -evarsum.plugin varsum.cpp stplugin.c
If you are using some other C compiler under Windows, follow that compiler's instructions for creating a DLL. For Windows, doubles are aligned on 0 mod 8 boundaries.
If you are using gcc, simply type
$ g++ -shared -DSYSTEM=OPUNIX stplugin.c varsum.cpp -o varsum.plugin
If you are using gcc, type
$ g++ -bundle -DSYSTEM=APPLEMAC stplugin.c varsum.cpp -o varsum varsum.plugin. Copy or move this file to a place where it can accessed by Stata, such as your current working directory or somewhere along your personal ado-path. | http://www.stata.com/plugins/index.html | CC-MAIN-2015-48 | refinedweb | 5,151 | 63.09 |
SAP CRM Mobile Client 4.0 SP11 and 5.0 SP07 features a new tool called SAP Environment Analyzer. This tool lets you run some quick checks on your system and verify the sanity of the environment. Along with this tool, the basic environment files to check the versions of various files that are distributed with the standard service packs are distributed on the installable media. SAP Environment Analyzer, referred to as SEA from this point on, uses an XML file called the ENV file to describe the checks that you would like to perform on the target machine. SEA is an extensible framework, which allows you to write your own plugins and perform custom checks. I will briefly discuss the structure of the Environment file and provide a simple example of a custom plugin and the various tools that you can use in conjunction with SEA. All of the checks shipping with SEA are implemented as plugins. Plugins can be of two types.
- Actions: Actions are responsible for performing some action on the target machine. They really do not check for anything but perform some tasks like initialization, querying information for future use and so on.
- Checks: Checks are responsible for checking something and reporting a success or failure. Some checks need not be mandatory such that their failure only results in a warning.
The environment file has the following structure.
All SEA plugins are .NET assemblies. So we can start of by creating a new .NET Class Library project in Microsoft Visual Studio .NET.
- Create a new .NET Class Library Project
- Add reference to EnvCore.dll. This would usually be in C:\Program Files\SAP\Mobile\support\Tools\EnvironmentAnalyzer\ folder. This is essential to access several attributes.
- Create a new class or rename the default Class1 to some convenient name. Let us call that AlwaysSucceeds.
using System; using System.ComponentModel; using SAP.EnvChecker.Core; namespace SamplePlugin { //PluginName attribute indicates the name of the tag which is used to load the plugin //Description attribute can be used later to generate documentation using gendoc.exe [PluginName("alwayssucceeds")] [Description("The wonderful plugin which does nothing.")] public class AlwaysSucceeds : ICheck { //LogMessage delegate allows us to write messages to a trace file private LogMessage logger; private string _strParam; //The PluginParam attribute allows you to pass in strings/ints/enums as values via xml attributes // This attribute can be applied only on public properties that you wish to expose. [PluginParam("param")] [Description("The parameter to take in")] public string AParameter { get{return _strParam;} set{_strParam=value;} } public AlwaysSucceeds() { } #region ICheck Members //This property can be used to specify messages on the grid after execution. //A short helpful message would do. public string Message { get{return "This always succeeds";} set{} } //The check method returns true or false. True to indicate success and False for failure. public bool Check() { //The logger function helps writing to the log file along with essential context information. logger(this,MessageType.Info,"I am checking"); return true; } //This function helps set a delegate if you would like to trace some messages. //The trace file is usually created in the TEMP location. public void SetLogMessageDelegate(SAP.EnvChecker.Core.LogMessage log) { this.logger=log; } //This delegate can be used to post messages to the status bar while executing the plugin. public void SetStatusBarTextDelegate(SAP.EnvChecker.Core.StatusBarTextDelegate del) { // TODO: Add AlwaysSucceeds.SetStatusBarTextDelegate implementation } //The getprop and setprop can be used to query property values set by other actions or checks // or set some properties for later use. These properties are not .NET properties. They are a collection // of string based key value pairs. public void PropertyHandlers(SAP.EnvChecker.Core.GetProperty getprop, SAP.EnvChecker.Core.SetProperty setprop) { // TODO: Add AlwaysSucceeds.PropertyHandlers implementation } #endregion } }
Now build the assembly. You now have to register the assembly with SEA. To do that you can simply copy it into the SEA folder. (The folder which has EnvCore.dll) You would have noticed that the environment file has a xmlns schema declaration. To support this sort of dynamic plugin concept, the schema has to be generated. SEA does this automatically as soon as it detects a new file. You can find the schema in the following location on the target machine. %USERPROFILE%\Application Data\SAP\EnvSchema. You can manually trigger the schema generation by running envschema.exe. You can also use gendoc.exe present in the same location to generate a quick documentation about the plugins present with SEA. Although this documentation is not much, if the plugin has been written properly with adequate [Description] attributes, it takes care of easier documentation. | https://blogs.sap.com/2007/02/17/about-sap-environment-analyzer/ | CC-MAIN-2019-13 | refinedweb | 763 | 50.73 |
Part of Twitter’s draw is the vast number of voices offering their opinions and thoughts on the latest events. In this article, we are going to look at the Tweepy module to show how we can search for a term used in tweets and return the thoughts of people talking about that topic. We’ll then look to make sense of them crudely by drawing a word cloud to show popular terms.
We’ll need the Tweepy and Wordcloud modules installed for this, so let’s fire these up alongside matplotlib.
import tweepy from wordcloud import WordCloud import matplotlib.pyplot as plt %matplotlib inline
First up, you will need to get yourself keys to tap into Twitter’s API. These are freely available if you have a regular account from here.
When you have them, follow the code below to plug into the API. I’ve hidden my tokens and secrets, and would strongly recommend that you do too if you share any code!
Tweepy kindly handles all of the lifting here, you just need to provide it with your information:
access_token = "HIDDEN" access_token_secret = "HIDDEN" consumer_key = "HIDDEN" consumer_secret = "HIDDEN" auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth)
So we are looking to collect tweets on a particular term. Fortunately, Tweepy makes this pretty easy for us with its ‘Cursor’ function. The principle of Tweepy’s cursor is just like the one of your screen, it goes through tweets in Twitter’s API and does what we tell it to when it finds something. It does this to work through the vast ‘pages’ of tweets that run through Twitter every second.
In our example, we are going to create a function that takes our query and returns the 1000 most recent tweets that contain the query. We are then going to turn them into a string, tidy the string and return it. Follow the commented code below to learn how:
#Define a function that will take our search query, a limit of 1000 tweets by default, default to english language #and allow us to pass a list of words to remove from the string def tweetSearch(query, limit = 1000, language = "en", remove = []): #Create a blank variable text = "" #Iterate through Twitter using Tweepy to find our query in our language, with our defined limit #For every tweet that has our query, add it to our text holder in lower case for tweet in tweepy.Cursor(api.search, q=query, lang=language).items(limit): text += tweet.text.lower() #Twitter has lots of links, we need to remove the common parts of links to clean our data #Firstly, create a list of terms that we want to remove. This contains https & co, alongside any words in our remove list removeWords = ["https","co"] removeWords += remove #For each word in our removeWords list, replace it with nothing in our main text - deleting it for word in removeWords: text = text.replace(word, "") #return our clean text return text
With that all set up, let’s give it a spin with Arsenal’s biggest stories of the window so far. Hopefully we can get our finger on the pulse of what is happening with new signing Mkhitaryan & potential Gooner Aubameyang. Let’s run the command to get the text, then plot it in a wordcloud:
#Generate our text with our new function #Remove all mentions of the name itself, as this will obviously be the most common! Mkhitaryan = tweetSearch("Mkhitaryan", remove = ["mkhitaryan"])
#Create the wordcloud with the text created above wordcloud = WordCloud().generate(Mkhitaryan) #Plot the text with the lines below plt.figure(figsize=(12,6)) plt.imshow(wordcloud, interpolation='bilinear') plt.axis("off") plt.show()
Lots of club propaganda about how a player has always dreamt of playing for their new club?! We probably didn’t need a new function to tell us that!
And let’s do the same to learn a bit more about what the Twitter hivemind currently thinks about Aubameyang:
Auba = tweetSearch("Aubameyang")
wordcloud = WordCloud().generate(Auba) plt.figure(figsize=(12,6)) plt.imshow(wordcloud, interpolation='bilinear') plt.axis("off") plt.show()
Equally predictably, we have “Sky Sources” talking about a bid in excess of a figure. Usual phraseology that we would expect in the build up to a transfer. I wish we had something unexpected and more imaginative, but at least we know we are getting something accurate. Hopefully you can find something more useful!
Summary
As you already know, Twitter is a huge collective of voices. On their own, this is white noise, but we can be smart about picking out terms and trying to understand the underlying opinions and individual voices. In this example, we have looked at the news on a new signing and potential signing and can see the usual story that the media puts across for players in these scenarios.
Alternative uses could be to run this during a match for crowd-sourced player ratings… or getting opinions on an awful new badge that a club has just released! We also don’t need word clouds for developing this, and you should look at language processing for some incredibly smart things that you can use to understand the sentiment in these messages.
You might also want to take a look at the docs to customise your wordclouds.
Next up – take a look through our other visualisation tutorials that you might also apply here. | https://fcpython.com/blog/scraping-twitter-tweepy-python | CC-MAIN-2022-33 | refinedweb | 902 | 60.55 |
Test-driven Development (TDD) has been getting a lot of attention these days. While I understand the importance of testing, I was sceptical of Test-Driven Development for a long time. I mean, why not Development-Driven Testing or Develop Then Test Later? I thought figuring out the tests before one can write even a single line of code would be impossible.
I was so wrong.
Test-driven Development will help you immensely in the long run. We will soon see how. We will approach TDD from a sceptical viewpoint and then try to create a simple URL shortener like bit.ly using TDD. I will conclude with my evaluation of the pros and cons of the technique, which you may jump to directly as well.
What is TDD?
Test-driven development (TDD) is a form of software development where you first write the test, run the test (which will fail first) and then write the minimum code needed to make the test pass. We will elaborate on these steps in detail later but essentially this is the process that gets repeated.
This might sound counter-intuitive. Why do we need to write tests when we know that we have not written any code and we are certain that it will fail because of that? It seems like we are not testing anything at all.
But look again. Later, we do write code that merely satisfies these tests. That means that these tests are not ordinary tests, they are more like Specifications. They are telling you what to expect. These tests or specifications will directly come from your client’s user stories. You are writing just enough code to make it work.
For instance, your client needs a website (in Django) with an Article model that specifies a headline and a body. The very first thing to do in TDD would be to write a test which checks this specification. Then you run the test. Watch it fail. Then write three lines of code in models.py to create an Article class with a headline and body.
But wait a minute. Isn’t this cheating? Your client actually wanted you to create a Blog application. But your three lines are no where near the functionality of a blog. This is an uncomfortable thought that bothers you when you start with TDD.
Just ignore it for now. It is not that much of a big deal. In fact, even your client wouldn’t be too bothered about it. They would be happy that you are working on their application one spec at a time. They can actually watch the progress (and trust me they love that, especially management).
In any form of engineering, this is a common way of building things - there is a plan and we build it bit-by-bit based on it. Before a building is constructed, a blueprint is drawn and progress is made floor-by-floor. Prior to every mobile phone being manufactured, there are detailed CAD models to ensure that every part fits each other perfectly. So why should software engineering be any different?
How to do TDD?
Now, let’s see how test-driven development is done. Just follow this simple procedure:
- Decide what the code will do: This is usually told to you by the client.
- Write a test: The test should pass only if the code does that.
- Run the test: It will fail.
- Write some code: Just enough to make it pass.
- Run the test: If it fails go to step 4.
- Refactor the code: Tests will ensure that it doesn’t break specs.
- Rinse and Repeat: Take another spec/user-story/feature and go to step 1.
Now that you know that tests are like specifications, you might be seeing some method in this madness. But this method is more familiar to you than you might think.
This is, in fact, quite similar to the Scientific method which is the basis of modern science. Let’s recollect what scientific method teaches us:
- Define a question
- Gather information and resources (observe)
- Form an explanatory hypothesis
- Test the hypothesis by performing an experiment and collecting data in a reproducible manner
- Analyse the data
- Interpret the data and draw conclusions that serve as a starting point for new hypothesis (go to step 3)
- Publish results
- Retest (frequently done by other scientists)
Compare this with the previous steps for TDD and notice the similarities. Tests in TDD take the role of experiments in Science. Your theory is only good if the experiments are repeatable and verifiable. You will see that the same will hold good for tests in your project’s source code when you work with other developers.
In fact, let’s look at collaborative software development happening in large open source projects. Almost all of them would have a good collection of tests. While writing test cases seem like a good idea, are there any good reasons to write tests before code?
Why do TDD?
So it seems that that TDD is not a arbitrary practice after all. But we still don’t have to follow it. There are plenty of ways to develop software.
But TDD does come with its own set of advantages, some of which are obvious and some which are not:
- It is live documentation that grows, lives and changes with your code.
- Improves design
- Catches future errors
- Long-term time savings
- Reduces technical debt and hence risk
- Avoid manual one-off tests. Eventually, you will add and re-add test data to test by hand. Too hackey.
These are advantages gathered from various developers who practice TDD. Each of them merit a detailed explanation. But to summarise, TDD brings with it a lot of benefits of testing by making it a mandatory part of your development cycle. You might think that you will add tests later, but sometimes you never get around to doing it.
Code written by passing simple, focused test cases tend to be more modular and hence better designed. It is a pleasant side effect of the process but you will certainly notice it.
How is TDD done?
To understand TDD better we will try to implement an entire Django project by writing test cases first and then the code. We will be creating a URL shortening services which takes a long url and converts it into a short one (possibly for fitting into a twitter message).
You can also watch the screencast below to see how the site is created.
User Stories
Imagine that after a long call with your client, you have distilled their needs into the following user stories:
- The short URL must be always smaller than the original URL.
- If you give the short URL, you must be able to recover the original URL.
- The home page must have a form to enter the long URL.
- Submitting the home page form should show the short URL.
- Clicking on the short URL must redirect to the original (long) URL.
We have also roughly ordered the stories so that the core functionality comes first.
Create the Project
Note: You will need at least Python 2.7 and Django 1.6 to follow the rest of this post. Earlier versions of Python and Django had some differences in unit testing tools.
Typically URL shortener sites will have a short name like. So let’s call our project
tiny
django-admin.py startproject tiny cd tiny ./manage.py startapp shorturls
Configure DATABASES in
tiny/settings.py to use a simple file-based
sqlite3 database and add the app
shorturls in INSTALLED_APPS as well. Next, synchronise the database with:
./manage.py syncdb
Writing the First Test
Add your first test case to
shorturls/tests.py:
from django.test import TestCase from .models import Link class ShortenerText(TestCase): def test_shortens(self): """ Test that urls get shorter """ url = "" l = Link(url=url) short_url = Link.shorten(l) self.assertLess(len(short_url), len(url))
This test creates a simple Link object given a (long) URL. It creates a short URL by using a class method called
shorten() and asserts that it is shorter in length. Note that we are not saving the Link object into the database. Whenever a test can avoid touching the database, it must jump at the opportunity. Django uses an in-memory database while testing but even that can take some time. Faster the unit tests are, the more likely you are to use them.
Also, notice that we get a better error message when we use the
assert...() functions (see the Sidenote below).
Run:
./manage.py test shorturls
As expected, it fails. Now build a model for this to work in
shorturls/models.py:
from django.db import models class Link(models.Model): url = models.URLField() @staticmethod def shorten(long_url): return ""
We are cheating by returning a zero length string but we know that this will pass the test.
./manage.py test shorturls
Sidenote: Choosing the right assertion
There are several assert functions provided by the
unittest module in Python and several more provided by Django in the
TestCase class. Initially, they might feel redundant. After all,
assert is a keyword built into Python. Every conceivable assertion function can be replaced by an
assert keyword checking for the truth of a Python expression.
The real difference is when the assertion fails. Here are three equivalent assertions along with the typical error message when it fails. Compare them yourself:
assert len(url) < len(docs_url) AssertionError self.assertTrue(len(url) < len(docs_url)) AssertionError: False is not true self.assertLess(len(url), len(docs_url)) AssertionError: 101 not less than 58
Clearly, the
self.assert...() functions have a clearer error message. So it is worthwhile to familiarise ourselves with the assert API unless you plan to you something like pytest.
Second Test: Recovering the url
“But you wouldn’t clap yet. Because making something disappear isn’t enough; you have to bring it back. That’s why every magic trick has a third act, the hardest part, the part we call “The Prestige”.” — Christopher Priest, The Prestige
Add another test to
shorturls/tests.py:
def test_recover_link(self): """ Tests that the shortened then expanded url is the same as original """ url = "" l = Link(url=url) short_url = Link.shorten(l) l.save() # Another user asks for the expansion of short_url exp_url = Link.expand(short_url) self.assertEqual(url, exp_url)
Our bluff gets called. We now need a real way to shorten urls and recover them. Unlike what might come to you intuitively, the URL is not mapped to a more compact encoding like a zip file. The allowable character set for a URL is pretty limited. Any kind of ‘string compression’ would reach its limits.
Instead we do something very simple. We know that, once saved into the database, each Link object can be uniquely identified by an integer - its primary key. Simple add this primary key to the domain’s URL and we have a short URL that can be mapped back to the original URL with a database lookup.
Change
shorturls/models.py to this:
from django.db import models class Link(models.Model): url = models.URLField() @staticmethod def shorten(link): l, _ = Link.objects.get_or_create(url=link.url) return str(l.pk) @staticmethod def expand(slug): link_id = int(slug) l = Link.objects.get(pk=link_id) return l.url
Now the tests should pass.
Third Test: Home Page With a Form
Add to
shorturls/tests.py:
from django.core.urlresolvers import reverse ... def test_homepage(self): """ Tests that a home page exists and it contains a form. """ response = self.client.get(reverse("home")) self.assertEqual(response.status_code, 200) self.assertIn("form", response.context)
This test fails because we don’t have any views mapped in our
urls.py. In the spirit of minimum effort, let’s use Django’s class based view. Since the submission of a form would create a new Link object, let’s use a
CreateView instead of a
TemplateView. A CreateView will generate the form for free and it will be useful later on.
Replace contents of
shorturls/views.py with:
from django.views.generic.edit import CreateView from .models import Link class LinkCreate(CreateView): model = Link fields = ["url"]
Create
shorturls/templates/shorturls/link_form.html:
<form method="post">{% csrf_token %} {{ form.as_p }} <input type="submit" value="Shorten" /> </form>
Replace contents of
tiny/urls.py with:
from django.conf.urls import patterns, include, url from shorturls.views import LinkCreate urlpatterns = patterns('', url(r'^$', LinkCreate.as_view(), name='home'), )
Now the test should pass.
Fourth Test: Form Returns a Short URL
Add to
shorturls/tests.py:
def test_shortener_form(self): """ Tests that submitting the forms returns a Link object. """ url = "" response = self.client.post(reverse("home"), {"url": url}, follow=True) self.assertEqual(response.status_code, 200) self.assertIn("link", response.context) l = response.context["link"] short_url = Link.shorten(l) self.assertEqual(url, l.url) self.assertIn(short_url, response.content)
(This test is designed to work with Django URLField’s default behaviour to add trailing slashes. This needs to be agreed with your client, of course. In my case, I simply had to ask the mirror.)
Now we need to think of what gets shown when the form is submitted. Obviously, there would be the short URL. Once again we can use a ready made class based view for this - the
DetailView.
Add to
shorturls/views.py:
from django.views.generic import DetailView ... class LinkCreate(CreateView): model = Link fields = ["url"] def form_valid(self, form): # Check if the Link object already exists prev = Link.objects.filter(url=form.instance.url) if prev: return redirect("link_show", pk=prev[0].pk) return super(LinkCreate, self).form_valid(form) class LinkShow(DetailView): model = Link
Change
tiny/urls.py to:
from shorturls.views import LinkCreate from shorturls.views import LinkShow urlpatterns = patterns('', url(r'^$', LinkCreate.as_view(), name='home'), url(r'^link/(?P<pk>\d+)$', LinkShow.as_view(), name='link_show'), )
Now add the template for the
DetailView. Create
shorturls/templates/shorturls/link_detail.html with the following contents:
<p> Short Link: /r/{{ object.id }} <p> Original Link: {{ object.url }}
Note that we haven’t created the short link redirection code for
/r/ yet.
Only hitch we have now is that Django doesn’t know where to go after the form is submitted. One way to solve that is by adding
get_absolute_url() to the
Link model.
Change
shorturls/models.py to:
from django.db import models from django.core.urlresolvers import reverse class Link(models.Model): url = models.URLField() def get_absolute_url(self): return reverse("link_show", kwargs={"pk": self.pk})
Now if the form is submitted, it gets redirected to the new
DetailView. The tests should pass.
Fifth Test: Short URL Must Redirect To The Long URL
Our next and final test actually tests if the short URLs work:
def test_redirect_to_long_link(self): """ Tests that submitting the forms returns a Link object. """ url = "" l = Link.objects.create(url=url) short_url = Link.shorten(l) response = self.client.get( reverse("redirect_short_url", kwargs={"short_url": short_url})) self.assertRedirects(response, url)
The final bit of the puzzle is the reverse lookup or redirected the user with the short URL to the original URL. As you might have guessed, time for yet another class based view -
RedirectView.
Add to
shorturls/views.py:
from django.views.generic.base import RedirectView ... class RedirectToLongURL(RedirectView): permanent = False def get_redirect_url(self, *args, **kwargs): short_url = kwargs["short_url"] return Link.expand(short_url)
Add to
tiny/urls.py:
from shorturls.views import RedirectToLongURL ... url(r'^r/(?P<short_url>\w+)$', RedirectToLongURL.as_view(), name='redirect_short_url'),
Now the tests should pass.
Refactoring For Shorter URLs
We will now see how tests prevent regression. The short URLs we generate are nothing but primary keys of Link instances in the database. This scheme works swimmingly well until a smart guy notices that we are wasting too many characters compared to other URL shorteners.
A shortener like bit.ly creates short urls which are a mix of alphabets (lower and upper case), numbers and symbols (like hyphens and underscores). However, we use only numbers. Mathematically speaking, we can use a base higher than 10 if we have more than ten symbols. This can lead to shorter numbers. For e.g. the decimal “255” can be represented as “FF” in hexadecimal, which saved 1 byte!
Notice that we have deliberately avoided testing the short URL format. This gives us flexibility to use any short URL representation we like and we are not limited to just decimals. So we create a module which can convert a decimal to a higher base with more symbols and back.
Create a new file
shorturls/basechanger.py with the following contents:
CHARS = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGUIJKLMNOPQRSTUVWXYZ" BASE = len(CHARS) def decimal2base_n(n): if n >= BASE: return decimal2base_n(n // BASE) + CHARS[n % BASE] else: return CHARS[n] def base_n2decimal(n): if len(n) > 1: return base_n2decimal(n[:-1]) * BASE + CHARS.index(n[-1]) else: return CHARS.index(n[0])
If you suspect anything wrong, just play along ;) . Since all the URL shortening and expansion logic resides in the models (as it should), we need to change that as well.
Change
shorturls/models.py to:
from django.db import models from django.core.urlresolvers import reverse from .basechanger import decimal2base_n, base_n2decimal class Link(models.Model): url = models.URLField() def get_absolute_url(self): return reverse("link_show", kwargs={"pk": self.pk}) @staticmethod def shorten(link): l, _ = Link.objects.get_or_create(url=link.url) return str(decimal2base_n(l.pk)) @staticmethod def expand(slug): link_id = int(base_n2decimal(slug)) l = Link.objects.get(pk=link_id) return l.url
Now running your tests should work as if nothing happened. Your smart guy will be pleased that the short URLs are now shorter. It is a win-win, folks!
Note that we have not removed the hard-coded short URL path in our
DetailView template. Let’s do that as well.
First, add a method to
shorturls/models.py:
def short_url(self): return reverse("redirect_short_url", kwargs={"short_url": Link.shorten(self)})
Change
shorturls/templates/shorturls/link_detail.html:
<p> Short Link: <a href="{{ object.short_url }}">{{ object.short_url }}</a> <p> Original Link: {{ object.url }}
Subtle Bug: More Tests Needed
Everything goes great for a while until user testing, when you get a strange bug report:
Critical: Some short URLs get redirected to the wrong site.
By following TDD, the only way to fix your code would be to create a test to reproduce your bug. Since the problem only happens after a certain number of short URLs are created, you design a test to create a large number of short URLs and compare it with the original URL.
Add a new test case to
shorturls/tests.py:
import random, string ... def test_recover_link_n_times(self): """ Tests multiple times that after shortening and expanding the original url is recovered. """ TIMES = 100 for i in xrange(TIMES): uri = "".join(random.sample(string.ascii_letters, 5)) url = "{}/{}".format(i, uri) l = Link.objects.create(url=url) short_url = Link.shorten(l) long_url = Link.expand(short_url) self.assertEqual(url, long_url)
Running the test gives you a mysterious error.
AssertionError: '' != u''
The hint of what is wrong is in the URLs themselves. After running a debugger, you realise that the 55th character is same as the 42nd character - the symbol ‘H’. A simple typo that was actually overlooked when I was writing this article.
I learnt two lessons from this. First, to never underestimate testing. The more tests you can write, the better your code becomes. Second, to never attempt to list the alphabets by hand. We are no longer in kindergarten and we are certainly not expected to remember all of it. That’s why Python has
string module with such silly strings kept ready for you.
So the fix is a simple change in the first line of
shorturls/basechanger.py:
import string CHARS = string.digits+string.ascii_lowercase+string.ascii_uppercase
So once again, we have a 100% success rate in our tests.
The completed source code can be found at Github
So What’s Wrong With TDD?
The biggest problem I faced while learning TDD was wearing two different hats alternatingly. The Tester Hat first makes you think the most specific way to break your code and the Code Hat wants you to write terse code that works in the most general way possible. Perhaps, after some practice, the cognitive load was greatly reduced. But, I would leave you with this pithy statement for some thought:
“Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” — Brian Kernighan
Of course, TDD practitioners themselves ackowledge that TDD is not for exploratory programming. In other words, you cannot create a map if you don’t know where you are or where you want to go. It is also expected that you have a certain amount of expertise in the language (in this case, Python/Django), to clearly see how the test case should be written in advance. So I wouldn’t expect a beginner to learn TDD along with learning programming.
It, of course, goes without saying due to its counter intuitive nature, it takes some time to get comfortable with TDD.
Conclusion
TDD is a design technique that needs a little bit of extra time for planning ahead. Some studies put this between 15-35% increase in development time. So, I wouldn’t use it for one time scripts or a quick work. But I will strongly consider it whenever I need to build production quality sites.
I might also not consider it for version 1 of something that I am working on. Maybe version 2 onwards, when the development will get faster and I am more familiar with the domain.
It doesn’t necessarily improve the client’s experience. If they are more interested in development time rather than the quality of code, then they might not appreciate TDD designed code itself. This is when BDD is much more effective in engaging them and “showcasing” the effectiveness of the methodology.
Django tests are pretty fast. So I am not too slowed down by the runs. I work on the models first, then the views, then the system tests and finally the templates. I prefer smaller, faster, non-redundant, black-boxy and functionality-oriented tests. Perhaps you want to read that again. It takes a while to design a good test case.
However, Pytest is a better option without learning all the legacy JUnit functions. It is a lot better that the default Django testing tool and I would prefer to use Pytest as my default test runner.
TDD relies on the inherent human nature to fix things. Writing tests are important but not essential. We might forget them. TDD will actually show a ‘broken’ status if any test fails, this is a great incentive to fix tests.
I tend to make a lot of design decisions while writing test cases. This is a good side effect of TDD. Lot of corner case which are missed while coding get importance upfront and by the time you are coding you are aware of them, rather than other way around. This is why the design is better for TDD.
It takes a lot of time to be good at TDD. | http://arunrocks.com/understanding-tdd-with-django/ | CC-MAIN-2016-26 | refinedweb | 3,868 | 68.57 |
ObjectSharp Blog
Due to the fact that the hosting provider I was using for Syfuhs.net was less than stellar, (names withheld to protect the innocent) I’ve decided to move the blog portion of this site to blogs.objectsharp.com.
With any luck the people subscribing to this site won’t see any changes, and any links directly to should 301 redirect to blogs.objectsharp.com/cs/blogs/steve/.
As I learned painfully of the problems with the last migration to DasBlog, permalinks break easily when switching platforms. With any luck, I will have that resolved shortly.
Please let me know as soon as possible if you start seeing issues.
Cheers! work for: Woodbine
Entertainment Group. We have a few different businesses, but as a whole
our market is Horse Racing. Our business is not software development.
We don’t always get the chance to play with or use some of the new technologies released
to the market. I thought this would be a perfect opportunity to see what it
will take to develop a new product using only new technologies.
Our core customer pretty much wants Race information. We have proof of this
by the mere fact that on our two websites, HorsePlayer
Interactive and our main site, we have dedicated applications for viewing Races.
So lets build a third race browser. Since we already have a way of viewing races
from your computer, lets build it on the new Windows Phone 7.
The Phone – The application
This seems fairly straightforward. We will essentially be building a Silverlight
application. Let’s take a look at what we need to do (in no particular order):
The Data
We have a massive database of all the Races on all the tracks that you can wager on
through our systems. The data updates every few seconds relative to changes
from the tracks for things like cancellations or runner odds. How do we push
this data to the outside world for the phone to consume? We create a WCF Data
Service:
public class RaceBrowserData :;
} }
That’s actually all there is to it for the data.
The Authentication
The what? Chances are the business will want to limit application access to
only those who have accounts with us. Especially so if we did something like
add in the ability to place a wager on that race. There are lots of ways to
lock this down, but the simplest approach in this instance is to use a Secure Token
Service. I say this because we already have a user store and STS, and duplication
of effort is wasted effort. We create a STS Relying Party (The application that
connects to the STS):
One thing I didn’t mention is how we lock down the Data Service. That’s a bit
more tricky, and more suited for another post on the actual Data Layer itself.
So far we have laid the ground work for the development of a Race Browser application
for the Windows Phone 7 using the Entity Framework and WCF Data Services, as well
as discussed the use of the Windows Identity Foundation for authentication against
an STS.
With any luck (and permission), more to follow..
While I am definitely not looking for a new job, I was bored and thought I would take
a stab at a stylized resume to see if I could hone some of my (lack of) graphics skills.
It didn’t turn out too badly, but I am certainly no graphics designer.
What do you think? of a job making it known that the store existed while
I was at the mall. While I was grabbing coffee in the food court, these stickers
were on each table:
Following that, as you head towards the store you see two large LCD screens in the
centre of the walkway. On one side you have a Rock Band - Beatles installation
running XBox 360 over HD.
On the other side was a promotional video.
Microsoft designed their store quite well. Large floor to ceiling windows for
the storefront, with an inviting light wood flooring to create a very warm atmosphere.
While there were hundreds of people in the store, it was very welcoming.
Along the three walls (because the 4th is glass) is a breathtaking video panorama.
I’m not quite sure how to really describe it. It’s as if the entire wall was
a single display, running in full HD.
In the center of the store is a collection of laptops and assorted electronics like
the Zune’s. There’s probably a logical layout, perhaps by price, or performance.
I wasn’t paying too much attention to that unfortunately.
At the center-back of the store is Microsoft’s Answers desk. Much like the Apple
Genius Bar, except not so arrogant. Yes, I said it. Ironically, the display
for customer names looked very iPod-ish here, and in the Apple Store, the equivalent
display looked like XP Media Center. Go figure.
One of the things I couldn’t quite believe was the XBox 360 being displayed overlay
the video panorama video. The video engine for that must have been extremely
powerful. That had to be a 1080P display for the XBox. As a developer,
I was astonished (and wondered where I could get that app!) A few of the employee’s
mentioned that it was driven by Windows 7. Pretty freakin’ sweet.
Also in the store were a couple Surfaces! This was the first time I actually
had the opportunity to play with one. They are pretty cool.
And that in a few pictures was my trip to the Microsoft store. There was also
a couple pamphlets in store describing training sessions and schedules for quick how-to’s
in Windows 7 that I walked away with.
Microsoft did well..
Definition:..
Note: I stole most of that from Wikipedia. | http://blogs.objectsharp.com/?tag=/Business | CC-MAIN-2014-42 | refinedweb | 979 | 73.07 |
Trying to share a git repository up to github form my ipad using shellista
Not sure how correctly to do this. Any pointers? I've done the "git init" and the .git directory exists, done the add for each of the files I want in the repository. Now what?
##using shellista git
This is based off the dev-modular branch of shellista
#Create a .git folder >git init <folder_name> #cd into that folder >cd <folder_name> #add your files >git add <files> #Commit those files >git commit <"commit message"> <"your name"> <"email"> #push it to your github repository >git push <<username>/<repo_name>.git>
Did just that. My problem was the the gist I put up had the same names as what I was pushing to the repo. At least that's what I thinked happened. Will try again later when my batteries (the ipad's, mine are fine) recharge.
@polymerchm This doest work with gist. It's for full repositories on github.
Gistcheck.py is for gists. Shellista git is for github repos.
To aid in manually getting .pyui files into GitHub, I use the following 5 liner that I call
pyui_to_clipboard.py:
import clipboard filename = 'put_your_filename_here.pyui' # edit this line before running with open(filename) as in_file: clipboard.set(in_file.read()) print('The contents of {} are now on the clipboard.'.format(filename))
After running this, I just switch to GitHub, create a new file with the right filename.pyui and paste.
Fwiw, the"latest" version of gistcheck, or my version anyway, supported pyui files.
here
You should be able to run setid, then commit.
Got it now, thanks to all. | https://forum.omz-software.com/topic/971/trying-to-share-a-git-repository-up-to-github-form-my-ipad-using-shellista/3 | CC-MAIN-2018-13 | refinedweb | 270 | 77.33 |
Plugin Porting Guide
Version:
Sublime Text 3 contains some important differences from Sublime Text 2 when it comes to plugins, and most plugins will require at least a small amount porting to work. The changes are:
Python 3.3🔗
Sublime Text 3 uses Python 3.3, while Sublime Text 2 used Python 2.6. Furthermore, on Mac, the system build of Python is no longer used, instead Sublime Text is bundled with its own version. Windows and Linux are also bundled with their own version, as they were previously.
Out of Process Plugins🔗
Plugins are now run in a separate process,
plugin_host. From a plugin
authors perspective, there should be no observable difference, except that a
crash in the plugin host will no longer bring down the main application.
Asynchronous Events🔗
In Sublime Text 2, only the
set_timeout() method was thread-safe. In Sublime
Text 3, every API method is thread-safe. Furthermore, there are now
asynchronous event handlers, to make writing non-blocking code easier:
on_modified_async()
on_selection_modified_async()
on_pre_save_async()
on_post_save_async()
on_activated_async()
on_deactivated_async()
on_new_async()
on_load_async()
on_clone_async()
set_timeout_async()
When writing threaded code, keep in mind that the buffer will be changing underneath you as your function runs.
Restricted
begin_edit() and
end_edit()🔗
begin_end() and
end_edit() are no longer directly accessible, except in
some special circumstances. The only way to get a valid instance of an
sublime.Edit object is to place your code in a
sublime_plugin.TextCommand
subclass. In general, most code can be refactored by placing the code between
begin_edit() and
end_edit() in a
sublime_plugin.TextCommand, and then
running the command via
run_command().
This approach removes the issue of dangling
sublime.Edit objects, and ensures
the repeat command and macros work as they should.
Zipped Packages🔗 Modules🔗
Importing other plugins is simpler and more robust in Sublime Text 3, and can be
done with a regular import statement, e.g.,
import Default.comment will
import Packages/Default/Comment.py.
Restricted API Usage at Startup🔗
Due to the
plugin_host loading asynchronously, it is not possible to use the
API at import time. This means that all top-level statements in your module
must not call any functions from the
sublime module. During startup, the API
is in a dormant state, and will silently ignore any requests made. | http://www.sublimetext.com/docs/porting_guide.html | CC-MAIN-2022-27 | refinedweb | 374 | 55.64 |
In this short tutorial we will check how we can pause the execution of a program on the Micro:bit, running MicroPython.
Introduction
In this short tutorial we will check how we can pause the execution of a program on the Micro:bit, running MicroPython.
In order to be able to pause the execution, we will use the sleep function of the microbit module. This function receives as input the number of milliseconds to pause the execution.
In our simple example, we will do a loop and print the current iteration value, waiting one second at the end of each iteration.
The code
As mentioned before, the sleep function is available on the microbit module. So, we need to import the function first, before using it.
from microbit import sleep
After that, we will specify our loop. We will do a simple for in loop between 0 and 9. We will use the range function to generate the list of numbers between 0 and 9 and iterate through each number of the list.
Note that the range function receives as first parameter the starting value of the list (included) and as second input the last number of the list (not included). This is why we use 10 as second argument of the range function, to generate the numbers between 0 and 9.
for i in range(0, 10): #loop body
Then, inside the loop, we will simply print the current value and then pause the execution for 1 second. Since the sleep function receives the time in milliseconds, we need to pass to it the value 1000.
print (i) sleep(1000)
The final code can be seen below.
from microbit import sleep for i in range(0, 10): print (i) sleep(1000)
Testing the code
To test the code, simply upload the previous script to your Micro:bit board using a tool of your choice. In my case, I’m using uPyCraft, a MicroPython IDE.
After running the program, you should get an output similar to figure 1, which shows the numbers getting printed. During the program execution, there should be a 1 second delay between each print.
Figure 1 – Output of the program.
4 Replies to “Micro:bit uPython: Pausing the program execution” | https://techtutorialsx.com/2018/11/05/microbit-upython-pausing-the-program-execution/ | CC-MAIN-2020-40 | refinedweb | 374 | 62.27 |
How to Create a Windows Server 2003 Failover Cluster for Cluster Continuous create a new Microsoft Windows Server 2003 cluster for a Microsoft Exchange Server 2007 clustered mailbox server (CMS) in a cluster continuous replication (CCR) environment by using the New Server Cluster Wizard or by using Cluster.exe.
When creating a Windows Server 2003 failover cluster for use by a CMS in a CCR environment, you must provide all initial cluster configuration information. This topic contains two procedures that are performed prior to deploying Exchange 2007 in a CCR environment:
Creating a new failover cluster
Adding the second node to the new failover cluster
Before You Begin
These procedures can be performed locally on the physical node or remotely. However, we recommend that you perform these procedures on the node that will be the first node in the cluster.
To perform this procedure, the account you use must be delegated membership in the local Administrators group. For more information about permissions, delegating roles, and the rights that are required to administer Exchange Server 2007, see Permission Considerations.
Note
We recommend using the account that will be used during the Exchange installation, if it has sufficient authority, to eliminate the potential of forgetting to change accounts after the installation is complete.
Procedure
To use the New Server Cluster wizard to create a new failover cluster
Open a Command Prompt window, and run the following command:
Cluster /create /wizard the click Next. It is a best practice to follow the Domain Name System (DNS) namespace rules when entering the cluster name. For more information about DNS namespace rules, see Microsoft Knowledge Base article 254680, DNS Namespace Planning.
Note
If the Domain Access Denied page appears, it typically means that you are logged on locally with an account that is not a domain account with local administrative permissions. In this event, the wizard will prompt you to specify an account. This account is not the Cluster service account. If you have the appropriate credentials, the Domain Access Denied page will not appear.
On the Select Computer page, verify or type the name of the computer that you plan to use. Click Advanced, and select Advanced (minimum) configuration, and then click OK. Click Next.
Note
The computer name is not case-sensitive.
On the Analyzing Configuration page, the wizard analyzes the domain and the node for possible issues that can cause installation problems. Review any warnings or error messages that appear. Click Details to obtain more information about each warning or error message.
Note
The bulleted list at the top of the page evolves into a tree of status information as the analysis is completed. The tree can be expanded to view specific status. Items with check icons can be ignored. Items with yellow triangle icons are warnings. Items with red icons are blocking errors and must be corrected.
After all checks complete successfully, click Next. Resolve any errors before continuing with the installation.
On the IP Address page, type the unique, valid cluster IP address, and then click Next. The wizard automatically associates the cluster IP address with the public network by using the subnet mask to select the correct network. The cluster IP address should be used for administrative purposes only and not for client connections.
On the Cluster Service Account page, type the user name and password for the Cluster service account. In the Domain field, select the domain name, and then click Next. The wizard verifies the user account and password.
On the Proposed Cluster Configuration page, click Quorum. Select Majority Node Set from the drop-down box. Click OK, and then click Next.
On the Creating the Cluster page, review any warnings or error messages that appear while the cluster is being created. For more information about warnings or errors, click to expand each warning or error message. To continue, click Next.
Click Finish to complete the cluster formation.
Note
If any physical disk resources are present, they should be manually removed before adding the second node to the cluster.
To use the Server Cluster Wizard to add a second node to the failover cluster
Open a Command Prompt window, and then run the following command:
Cluster <ClusterName> /add /wizard
After the Add Nodes wizard appears, click Next.
In the Domain list, click the domain where the server cluster is located, verify the cluster name in the Cluster name box, and then click Next.
On the Select Computer page, type the name of the computer that you want to add to the cluster. Click Advanced, select Advanced (minimum) configuration, and then click OK. Click Next.
After the server cluster IP address, networking, service account and quorum information are correct, and then click Next.
After the second node is successfully added to the cluster, click Next, and then click Finish to be returned to the command prompt.
After the second node has been added to the cluster, the Majority Node Set quorum should be configured to use the file share witness. For detailed steps to configure the file share witness, see How to Configure the File Share Witness.
To use Cluster.exe to create a new failover cluster
Open a Command Prompt window, and run the following command:
Cluster /cluster:<ClusterName> /create /node:"<ActiveNodeName>" /ipaddress:<ClusterIPAddress> /user:<ClusterServiceAccount> /password:<ClusterServiceAccountPassword> /unattend /min
Note
After the cluster has been formed, if any physical disk resources are present in the cluster, remove them manually before proceeding with the next step.
Change the quorum configured by the cluster formation process from a local quorum to a Majority Node Set quorum by running the following command:
Cluster <ClusterName> res "Majority Node Set" /create /group:"Cluster Group" /type:"Majority Node Set"
After the Majority Node Set resource has been created, bring it online by running the following command:
Cluster <ClusterName> res "Majority Node Set" /online
After the Majority Node Set resource is online, configure the cluster to use it by running the following command:
Cluster <ClusterName> /quorum:"Majority Node Set"
After the cluster has been converted to use the Majority Node Set quorum, take the existing local quorum offline and then delete it by running the following commands:
Cluster <ClusterName> res "Local Quorum" /offline Cluster <ClusterName> res "Local Quorum" /delete
Add the second node to the cluster by running the following command:
Cluster <ClusterName> /add:<PassiveNodeName> /password:<ClusterServiceAccountPassword> /unattend /min
After the second node has been added to the cluster, the Majority Node Set quorum should be configured to use the file share witness. For detailed steps to configure the file share witness, see How to Configure the File Share Witness. | https://docs.microsoft.com/en-us/previous-versions/office/exchange-server-2007/bb124038(v=exchg.80) | CC-MAIN-2022-33 | refinedweb | 1,097 | 50.77 |
machine could not be found.
"The RPC server is unavailable. (Exception from HRESULT: 0x800706BA)".
:
Instructions on how to perform these tasks are provided in this topic.
How to: Configure a Windows Firewall for Database Engine Access.
How to: Configure a Firewall for Report Server Access.
Click Start, point to Programs, point to Microsoft SQL Server 2008, point to Configuration Tools, and click SQL Server Configuration Manager.
In the left pane, expand SQL Server Network Configuration, and then click Protocols for the instance of SQL Server.
In the details pane, enable the TCP/IP and Named Pipes protocols, and then restart the SQL Server service.
Log on as a local administrator to the computer for which you want to enable remote administration.
If the report server is running on Windows Vista, right-click Command Prompt and select Run as administrator. For other operating systems, open a command prompt window.
Run the following command:
netsh.exe firewall set service type=REMOTEADMIN mode=ENABLE scope=ALL
You can specify different options for Scope. For more information, see the Windows Firewall product documentation.
Verify that remote administration is enabled. You can run the following command to show the status:
netsh.exe firewall show state
Reboot the computer..
To set permissions on the report server WMI namespace for non-administrators is duplicated. Needs cleaning up!!!
.
The problem is with lists that are within collapsable sections. If you collapse the list by clicking the list heading
, one of the duplicated lists is removed, so that is a workaround until the fix gets applied. | http://msdn.microsoft.com/en-us/library/ms365170.aspx | crawl-002 | refinedweb | 257 | 51.55 |
Section 5.1
The Basic Java Applet
JAVA APPLETS ARE SMALL PROGRAMS that are meant to run on a page in a Web browser. Very little of that statement is completely accurate, however. An applet is not a complete program. It doesn't have to be small. And while many applets are meant to be used on Web pages, there are other ways to use them and good reasons to do so. A correct definition would be that an applet is an object that belongs to the class java.applet.Applet or to one of its subclasses.
The Applet class, defined in the package java.applet, is really only useful as a basis for making subclasses. An object of type Applet has certain basic behaviors, but doesn't actually do anything useful. To create a useful applet, a programmer must define a subclass of the Applet class. A number of methods in Applet are defined to do nothing at all. The programmer must override at least some of these methods and give them something to do.
An applet is inherently part of a graphical user interface. It is a type of graphical component that can be displayed in a window (whether belonging to a Web browser or to some other program). When shown in a window, an applet is a rectangular area that can contain other components, such as buttons and text boxes. It can display graphical elements such as images, rectangles, and lines. And it can respond to certain "events", such as when the user clicks on the applet with a mouse. All of the applet's behaviors -- what components it contains, what graphics it displays, how it responds to various events -- are determined by methods. These methods are included in the Applet class, but the programmer must override these methods in a subclass of Applet in order to make a genuinely useful applet that has interesting behaviors.
Back in Section 2.1, when you read about Java programs, you encountered the idea of a main() routine, which is never called by the programmer. The main() routine of a program is there to be called by "the system" when it wants to execute the program. The programmer writes the main routine to say what happens when the system runs the program. An applet needs no main() routine, since it is not a program. However, many of the methods in the Applet class are similar to main() in that they are meant to be called by the system, and the job of the programmer is to say what happens in response to the system's calls.
One of the important applet methods is paint(). The job of this method is to draw the graphical elements displayed in the applet -- which is, remember, just a rectangular area in a window. The paint() method in the Applet class doesn't draw anything at all, so paint() is one of those methods that the programmer can override in a subclass. The definition of this method must have the form:public void paint(Graphics g) { // draw some stuff }
The method must be public because it will be called by the system from outside the class. The parameter g of type Graphics is provided by the system when it calls the paint() method. In Java, all drawing to the screen must be done using methods provided by a Graphics object. There are many such methods. I will give a few examples as we go along, and I will discuss graphics in more detail in Section 4.
As an example, let's go the traditional route and look at an applet that displays the string "Hello World!". We'll use the paint() method to display this string:import java.awt.*; import java.applet.*; public class HelloWorldApplet extends Applet { // An applet that simply displays the string Hello World! public void paint(Graphics g) { g.drawString("Hello World!", 10, 30); } } // end of class HelloWorldApplet
The drawString() method, defined in the Graphics class, actually does the drawing. The parameters of this method specify the string to be drawn and the point in the applet where the string is to be placed. More about this later. Note that I've imported the packages java.applet, which includes the Applet class, and java.awt, which includes the Graphics class and many other classes related to the graphical user interface. Almost every applet uses these two packages.
Now, an applet is an object, not a class. So far we have only defined a class. Where does an actual applet object come from? It is possible, of course, to create such objects:
Applet hw = new HelloWorldApplet();
This might even be useful if you are writing a program and would like to add an applet to a window you've created. Most often, however, applet objects are created by "the system." For example, when an applet appears on a page in a Web browser, it is up to the browser program to create the applet object. For this reason, subclasses of Applet are almost always declared to be public. Otherwise, the system wouldn't be able to access the class, and there would be no way for it to create an applet based on that class.
For an applet to appear on a Web page, the document that the browser is displaying must specify the name of the applet and its size. This specification, like the rest of the document, is written in a language called HTML. I will discuss HTML in more detail in Section 3. Here is some HTML code that will display a HelloWorld applet:<center> <applet code="HelloWorldApplet.class" width=250 height=50> <p><font color="#E70000">Sorry, but your browser<br> doesn't support Java.</font></p> </applet> </center>
and here is what this code displays:
If the browser you are using does not support Java, or if you have turned off Java support, then you see the message "Sorry, but your browser doesn't support Java." Otherwise, you should see the message "Hello world!". The message is displayed in a rectangle that is 250 pixels in width and 50 pixels in height. You might not be able to see the rectangle as such, but it's the rectangle that is the applet.
The applet's paint() method is called by the system just after the applet is created. It can also be called at other times. In fact, it is called whenever the contents of the applet need to be redrawn. This might happen if the applet is covered up by another window and is then uncovered. It can even happen when you scroll the window of a browser, and the applet scrolls into view. This means that outside the paint() method, you can't simply draw something on an applet and expect it to stay there! If the contents of an applet can change, then you have to use instance variables to keep track of the contents, and the paint() method must use the information in those instance variables to correctly reconstitute the contents of the applet. Otherwise, if the user scrolls your applet out of view and back, the stuff you've drawn will be gone!
As a simple example, let's modify the Hello World applet so that each time the user clicks on the applet, the message displayed by the applet will change color. Since we will want the message to be redrawn in the correct color whenever the paint() method is called, we'll need an instance variable to keep track of the current color. To keep things simple, let's use an integer variable named currentColor that takes on the values 0, 1, 2, or 3 to indicate the colors black, red, blue, and green. The paint method can then be written:public void paint(Graphics g) { switch (currentColor) { // select proper color for drawing case 0: g.setColor(Color.black); // Color.black is a constant that break; // specifies the color black case 1: g.setColor(Color.red); break; case 2: g.setColor(Color.blue); break; case 3: g.setColor(Color.green); break; } g.drawString("Hello World!", 10, 30); }
Every time that the applet needs to be redrawn, this method will be called, and it will always draw the string in the currently selected color.
Now, we have to arrange to change the value of currentColor whenever the user clicks on the applet. Whenever this happens, the system calls the applet's mouseDown() routine. This is one of those routines that ordinarily does nothing, but you can override it to specify any response that you want to a mouse click. In this case, we need to change currentColor and then see that the message is redrawn in the new color. The polite way to do this is to call the method repaint(). The repaint() method does not actually do any repainting itself. It merely notifies the system that the applet needs to be redrawn. The system can then call the paint() paint method at its convenience. So, we get:public boolean mouseDown(Event evt, int x, int y) { currentColor++; // change the current color number if (currentColor > 3) // if it's become too big, currentColor = 0; // then reset it to zero repaint(); // ask the system to redraw the window return true; // tells the system that the mouseDown // event has been processed }
You can ignore the parameters of mouseDown() for now; they carry information about the mouse click, but here we need to know only that the mouse click occurred. The boolean return value is typical of event-processing subroutines. The return value indicates whether or not the event needs further processing. As we will see later, it is possible for several objects to get a crack at handling the same event.
We can put all this together into a complete applet:import java.awt.*; import java.applet.*; public class ColoredHelloWorld extends Applet { // displays a string "Hello World!" that changes // color when the user clicks on the applet private int currentColor = 1; // initial color is red public void paint(Graphics g) { switch (currentColor) { // select proper color for drawing case 0: g.setColor(Color.black); break; case 1: g.setColor(Color.red); break; case 2: g.setColor(Color.blue); break; case 3: g.setColor(Color.green); break; } g.drawString("Hello World!", 10, 30); } public boolean mouseDown(Event evt, int x, int y) { currentColor++; // change the current color number if (currentColor > 3) currentColor = 0; repaint(); // ask the system to redraw the window return true; } } // end of class ColoredHelloWorld
And here is what it looks like. Try clicking on the applet to make the color change:
[ Next Section | Previous Chapter | Chapter Index | Main Index ] | http://math.hws.edu/eck/cs124/javanotes1/c5/s1.html | crawl-001 | refinedweb | 1,776 | 62.78 |
sndlib is a collection of sound file and audio hardware handlers written in C and running currently on SGI (either audio library), Sun, OSS or ALSA (Linux and others), Mac OSX, and Windows systems. It provides relatively straightforward access to many sound file headers and data types, and most of the features of the audio hardware.
To build sndlib (sndlib.so if possible, and sndlib.a):
./configure
make
To install it, 'make install' — I've tested this process in Linux, SGI, and Sun. It could conceivably work elsewhere.
The following files make up sndlib: sndlib.h (declarations and constants), headers.c (sound file header handlers), io.c (low-level sample I/O), sound.c (the mus_sound functions), and audio.c (audio hardware access).
The naming scheme is more or less as follows: the sndlib prefix is "mus", so function names start with "mus_" and constants start with "MUS_". Audio hardware constants start with "MUS_AUDIO_", functions involving sound files referenced through the file name start with "mus_sound_", functions involving files at a lower level with "mus_file_", functions involving header access with "mus_header_", functions involving audio hardware access with "mus_audio_", MIDI functions with "mus_midi_", and various others just with "mus_" (number translations, etc). Conversions use the word "to", as in "mus_samples_to_bytes". Booleans use "_p" (an ancient Common Lisp convention meaning "predicate").
Sound files have built-in descriptors known as headers. The following functions return the information in the header. In each case the argument to the function is the full file name of the sound file.
off_t mus_sound_samples (const char *arg)        /* samples of sound according to header */
off_t mus_sound_frames (const char *arg)         /* samples per channel */
float mus_sound_duration (const char *arg)       /* sound duration in seconds */
off_t mus_sound_length (const char *arg)         /* true file length in bytes */
int mus_sound_datum_size (const char *arg)       /* bytes per sample */
off_t mus_sound_data_location (const char *arg)  /* location of first sample (bytes) */
int mus_sound_bits_per_sample (const char *arg)  /* bits per sample */
int mus_bytes_per_sample (int format)            /* bytes per sample */
int mus_sound_chans (const char *arg)            /* number of channels (samples are interleaved) */
int mus_sound_srate (const char *arg)            /* sampling rate */
int mus_sound_header_type (const char *arg)      /* header type (aiff etc) */
int mus_sound_data_format (const char *arg)      /* data format (alaw etc) */
int mus_sound_original_format (const char *arg)  /* unmodified data format specifier */
int mus_sound_type_specifier (const char *arg)   /* original header type identifier */
char *mus_sound_comment (const char *arg)        /* comment if any */
off_t mus_sound_comment_start (const char *arg)  /* comment start (bytes) if any */
off_t mus_sound_comment_end (const char *arg)    /* comment end (bytes) */
int *mus_sound_loop_info (const char *arg)       /* 8 loop vals (mode,start,end) then base-detune and base-note (empty list if no loop info found) */
int mus_sound_write_date (const char *arg)       /* bare (uninterpreted) file write date */
int mus_sound_initialize (void)                  /* initialize everything */
The following can be used to provide user-understandable descriptions of the header type and the data format:
char *mus_header_type_name (int type)       /* "AIFF" etc */
char *mus_data_format_name (int format)     /* "16-bit big endian linear" etc */
char *mus_header_type_to_string (int type)
char *mus_data_format_to_string (int format)
const char *mus_data_format_short_name (int format)
In all cases if an error occurs, -1 (MUS_ERROR) is returned, and some sort of error message is printed; to customize error handling, use mus_error_set_handler and mus_print_set_handler.
mus_error_handler_t *mus_error_set_handler(mus_error_handler_t *new_error_handler);
mus_print_handler_t *mus_print_set_handler(mus_print_handler_t *new_print_handler);
To decode the error indication, use:
char *mus_error_to_string(int err);
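For example, a program that wants to log sndlib errors itself rather than letting them be printed can install its own handler. This is only a sketch; the handler signature shown here (an error type plus a message string) is an assumption taken from sndlib.h:

#include <stdio.h>
#include "sndlib.h"

static void my_error_handler(int type, char *msg)
{
  /* called by sndlib in place of its default error printing */
  fprintf(stderr, "sndlib: %s (%s)\n", msg, mus_error_to_string(type));
}

/* then, early in the program: */
mus_error_set_handler(my_error_handler);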
Header data is cached internally, so the actual header is read only if it hasn't already been read, or the write date has changed. Loop points are also available, if there's interest. To go below the "sound" level, see headers.c — once a header has been read, all the components that have been found can be read via functions such as mus_header_srate.
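Putting the header functions together, here is a minimal sketch that prints a summary of a sound file's header; "test.snd" is just a placeholder filename:

#include <stdio.h>
#include "sndlib.h"

int main(void)
{
  const char *file = "test.snd";   /* placeholder filename */
  mus_sound_initialize();
  if (mus_sound_chans(file) == MUS_ERROR)
    return 1;                      /* unreadable file or unknown header */
  printf("%s: %d Hz, %d chans, %.3f secs, %s, %s\n",
         file,
         mus_sound_srate(file),
         mus_sound_chans(file),
         mus_sound_duration(file),
         mus_header_type_name(mus_sound_header_type(file)),
         mus_data_format_name(mus_sound_data_format(file)));
  return 0;
}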
The following functions provide access to sound file data:
int mus_sound_open_input (const char *arg)
int mus_sound_open_output (const char *arg, int srate, int chans, int data_format, int header_type, const char *comment)
int mus_sound_reopen_output (const char *arg, int type, int format, off_t data_loc)
int mus_sound_close_input (int fd)
int mus_sound_close_output (int fd, off_t bytes_of_data)
int mus_sound_read (int fd, int beg, int end, int chans, mus_sample_t **bufs)
int mus_sound_write (int fd, int beg, int end, int chans, mus_sample_t **bufs)
off_t mus_sound_seek_frame (int fd, off_t frame)
mus_sample_t defaults to float, but can also be int. It is set when sndlib is built, and refers to Sndlib's internal representation of sample values. There are corresponding macros to convert from the sample type to C types (MUS_SAMPLE_TO_FLOAT, etc), and the reverse (MUS_FLOAT_TO_SAMPLE, etc).
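For example, using the conversion macros directly (the values shown are arbitrary):

mus_sample_t s = MUS_FLOAT_TO_SAMPLE(0.25);  /* C float -> internal sample */
float f = MUS_SAMPLE_TO_FLOAT(s);            /* internal sample -> C float */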
mus_sound_open_input opens arg for reading. Most standard uncompressed formats are readable. This function returns the associated file number, or -1 upon failure.
mus_sound_close_input closes an open sound file. Its argument is the integer returned by mus_sound_open_input.
mus_sound_open_output opens (creates) the file arg, setting its sampling rate to srate, number of channels to chans, data format to data_format (see sndlib.h for these types: MUS_BSHORT, for example, means 16-bit 2's complement big-endian fractions), header type to header_type (AIFF for example; the available writable header types are MUS_AIFC (or AIFF), MUS_RIFF ('wave'), MUS_RF64, MUS_NEXT, MUS_NIST, MUS_CAFF, and MUS_IRCAM), and comment (if any) to comment. The header is not considered complete without an indication of the data size, but since this is rarely known in advance, it is supplied when the sound file is closed. This function returns the associated file number.
mus_sound_close_output first updates the file's header to reflect the final data size bytes_of_data, then closes the file. The argument fd is the integer returned by mus_sound_open_output.
mus_sound_read reads data from the file indicated by fd, placing it in the array bufs as mus_sample_t values (floats normally). chans determines how many arrays of samples are in bufs, each of which is filled by mus_sound_read from index beg to end, with zero padding if necessary.

mus_sound_write writes samples to the file indicated by fd, starting for each of chans channels in bufs at beg and ending at end.
mus_sound_seek_frame moves the read or write position for the file indicated by fd to the desired frame.
The following functions provide access to audio hardware. If an error occurs, they return -1 (MUS_ERROR).
int mus_audio_initialize(void)
void mus_audio_describe(void)
char *mus_audio_report(void)
int mus_audio_open_output(int dev, int srate, int chans, int format, int size)
int mus_audio_open_input(int dev, int srate, int chans, int format, int size)
int mus_audio_write(int line, char *buf, int bytes)
int mus_audio_close(int line)
int mus_audio_read(int line, char *buf, int bytes)
int mus_audio_mixer_read(int dev, int field, int chan, float *val)
int mus_audio_mixer_write(int dev, int field, int chan, float *val)
int mus_audio_systems(void)
char *mus_audio_system_name(int system)
mus_audio_initialize takes care of any necessary initialization.
mus_audio_describe prints to stdout a description of the current state of the audio hardware. mus_audio_report returns the same description as a string.
mus_audio_systems returns the number of separate and complete audio systems (soundcards essentially) that are available. mus_audio_system_name returns some user-recognizable name for the given card.
mus_audio_open_input opens an audio port to read sound data (i.e. a microphone, line in, etc). The input device is dev (see sndlib.h for details; when in doubt, use MUS_AUDIO_DEFAULT). The input sampling rate is srate or as close as we can get to it. The number of input channels (if available) is chans. The input data format is format (when in doubt, use the macro MUS_AUDIO_COMPATIBLE_FORMAT). And the input buffer size (if settable at all) is size (bytes). This function returns an integer to distinguish its port from others that might be in use. In this and other related functions, the device has an optional second portion that refers to the soundcard or system for that device. MUS_AUDIO_PACK_SYSTEM(n) refers to the nth such card, so (SNDLIB_DAC_DEVICE | MUS_AUDIO_PACK_SYSTEM(1)) is the 2nd card's dac (the default is system 0, the first card).
mus_audio_open_output opens an audio port to write data (i.e. speakers, line out, etc). The output device is dev (see sndlib.h). Its sampling rate is srate, number of channels chans, data format format, and buffer size size. This function returns the associated line number of the output port.
mus_audio_close closes the port (input or output) associated with line.
mus_audio_read reads sound data from line. The incoming bytes bytes of data are placed in buf. If no error was returned from mus_audio_open_input, the data is in the format requested by that function with channels interleaved.
mus_audio_write writes bytes bytes of data in buf to the output port associated with line. This data is assumed to be in the format requested by mus_audio_open_output with channels interleaved.
mus_audio_mixer_read and mus_audio_mixer_write are complicated. They get and set the audio hardware state. The audio hardware is treated as a set of "systems" (sound cards) each of which has a set of "devices" (dacs, adcs, etc), with various "fields" that can be read or set (gain, channels active, etc). For example, a microphone is called the MUS_AUDIO_MICROPHONE, and its hardware gain setting (if any) is called the MUS_AUDIO_AMP. All gains are considered to be linear between 0.0 and 1.0, so to set the microphone's first channel amplitude to .5 (that is, the gain of the signal before it reaches the analog-to-digital converter),
float vals[1];
vals[0] = 0.5;
mus_audio_mixer_write(MUS_AUDIO_MICROPHONE, MUS_AUDIO_AMP, 0, vals);
Similarly
mus_audio_mixer_read(MUS_AUDIO_MICROPHONE, MUS_AUDIO_AMP, 0, vals);
amp = vals[0];
returns the current gain in the float array vals. mus_audio_mixer_read can also return a description of the currently available audio hardware.
If a requested operation is not implemented or something goes wrong, -1 is returned.
Each separate sound card is called a system, accessible via the device argument through the macro MUS_AUDIO_PACK_SYSTEM(n). The count starts at 0 which is the default. The function mus_audio_systems returns how many such cards are available. (Currently it returns more than one only on Linux systems with multiple sound cards).
Each audio system has a set of available devices. To find out what is available on a given system
#define LIST_MAX_SIZE 32
float device_list[LIST_MAX_SIZE];
mus_audio_mixer_read(MUS_AUDIO_PACK_SYSTEM(0), MUS_AUDIO_PORT, LIST_MAX_SIZE, device_list);
The list of available devices is returned in the device_list array, with the number of the devices as device_list[0]. The set of device identifiers is in sndlib.h (MUS_AUDIO_LINE_IN for example). Two special devices are MUS_AUDIO_MIXER and MUS_AUDIO_DAC_FILTER. The latter refers to the low-pass filter often associated with a DAC. The former refers to a set of analog gain and tone controls often associated with a sound card. The individual gains are accessed through the various fields (described below).
The field argument in mus_audio_mixer_read and mus_audio_mixer_write selects one aspect of the given card's devices' controls. The simplest operations involve MUS_AUDIO_AMP and MUS_AUDIO_SRATE. The latter gets or sets the sampling rate of the device, and the former gets or sets the amplitude (between 0.0 and 1.0) of the specified channel of the device. The value to be set or returned is in the 0th element of the vals array. An example of reading the current microphone gain is given above. The meaning of the field argument can depend on which device it is applied to, so there is some complexity here. The channel argument usually selects which channel we are interested in, but in some cases it instead tells mus_audio_mixer_read how big a returned list can get. A brief description of the fields:
MUS_AUDIO_AMP                      gain or volume control (0.0 to 1.0)
MUS_AUDIO_SRATE                    sampling rate
MUS_AUDIO_CHANNEL                  active channels
MUS_AUDIO_BASS, MUS_AUDIO_TREBLE   mixer's tone control
MUS_AUDIO_LINE                     mixer's line-in gain control
MUS_AUDIO_MICROPHONE               mixer's microphone gain control
  (similarly for MUS_AUDIO_IMIX, MUS_AUDIO_IGAIN, MUS_AUDIO_RECLEV, MUS_AUDIO_PCM, MUS_AUDIO_PCM2,
   MUS_AUDIO_OGAIN, MUS_AUDIO_LINE1, MUS_AUDIO_LINE2, MUS_AUDIO_LINE3, MUS_AUDIO_SYNTH)
MUS_AUDIO_FORMAT                   return list of usable sound formats (e.g. MUS_BSHORT)
MUS_AUDIO_PORT                     return list of available devices (e.g. MUS_AUDIO_MICROPHONE)
There is some simple, very low-level MIDI support in sndlib.
int mus_midi_open_read(const char *name)
int mus_midi_open_write(const char *name)
int mus_midi_close(int line)
int mus_midi_read(int line, unsigned char *buffer, int bytes)
int mus_midi_write(int line, unsigned char *buffer, int bytes)
char *mus_midi_device_name(int sysdev)
char *mus_midi_describe(void)
These procedures remove all structure imposed by the available MIDI drivers, presenting each such device as if it were a sort of file. mus_midi_open_read and mus_midi_open_write open a given device (its name can be found by calling mus_midi_device_name with an argument of 0), returning an integer that refers to that port in subsequent calls on mus_midi_read, mus_midi_write, and mus_midi_close. No attempt is made to interpret the data, or time its output, etc. mus_midi_describe returns a description of the available MIDI hardware (its result should be freed, whereas the result of mus_midi_device_name should not be freed). The "int" result of each of the functions is -1 if some error occurred (from Guile, they throw 'mus-error). The sysdev argument to mus_midi_device_name is a sndlib-style integer packing the card number ("system" in the left half, and the device number in the right). On the ALSA system I'm testing at the moment, the MIDI device is apparently "hw:1,0" or "/dev/snd/midiC1D0"; this maps into sndlib as card 1, device 0; the corresponding "sysdev" number is 1 << 16. The corresponding Guile functions are:
mus-midi-open-read name
mus-midi-open-write name
mus-midi-close md
mus-midi-device-name sys-dev
mus-midi-describe
mus-midi-read md bytes
mus-midi-write md bytes
mus-midi-read and mus-midi-write return a list of the bytes read or written (empty if none), or #f if something went wrong.
clm.c and friends implement all the generators found in CLM, a music V implementation, and clm2xen.c ties these into the languages supported by the xen package (currently Guile, Gauche, Ruby, and Forth). The primary clm documentation (which describes both the Scheme and Common Lisp implementations) is clm.html found in clm-4.tar.gz or sndclm.html in snd-9.tar.gz alongside sndlib at ccrma-ftp. The simplest way to try these out is to load them into Snd; see extsnd.html, examp.scm, and snd-test.scm in snd-9.tar.gz for more details. The following briefly describes the C calls (see clm.h).
clm.c implements a bunch of generators and sound IO handlers. Each generator has three associated functions, make-gen, gen, and gen_p; the first creates the generator (if needed), the second gets the next sample from the generator, and the last examines some pointer to determine if it is that kind of generator. In addition, there are a variety of generic functions that generators respond to: mus_free, for example, frees a generator, and mus_frequency returns its current frequency, if relevant. All generators are pointers to mus_any structs. Finally, CLM has two special data types: frame and mixer. A frame is an array that represents a multichannel sample (that is, in a stereo file, at time 0.0, there are two samples, one for each channel). A mixer is an array of arrays that represents a set of input and output scalers, as if it were the current state of a mixing console's volume controls. A frame (a multichannel input) can be mixed into a new frame (a multichannel output) by passing it through a mixer (a matrix, the operation being a matrix multiply).
mus_any *osc;
init_mus_module();
osc = mus_make_oscil(440.0, 0.0);
if (mus_oscil_p(osc))
  fprintf(stderr, "%.3f, %.3f ", .1 * mus_oscil(osc, 0.0, 0.0), mus_frequency(osc));
mus_free(osc);
The other generators follow the same make-gen/gen/gen_p pattern, and clm.c also provides a variety of utility functions and generic functions that all generators respond to; see clm.h for the full lists.
Before using any of these functions, call init_mus_module. Errors are reported through mus_error which can be redirected or muffled. See clm2xen.c for an example.
This program prints out a description of a sound file (sndinfo.c).
int main(int argc, char *argv[])
{
  int fd, chans, srate;
  off_t samples;
  float length;
  time_t date;
  char *comment;
  char timestr[64];
  mus_sound_initialize();            /* initialize sndlib */
  fd = mus_file_open_read(argv[1]);  /* see if it exists */
  if (fd != -1)
    {
      close(fd);
      date = mus_sound_write_date(argv[1]);
      srate = mus_sound_srate(argv[1]);
      chans = mus_sound_chans(argv[1]);
      samples = mus_sound_samples(argv[1]);
      comment = mus_sound_comment(argv[1]);
      length = (double)samples / (float)(chans * srate);
      strftime(timestr, 64, "%a %d-%b-%y %H:%M %Z", localtime(&date));
      fprintf(stdout, "%s:\n  srate: %d\n  chans: %d\n  length: %f\n",
              argv[1], srate, chans, length);
      fprintf(stdout, "  type: %s\n  format: %s\n  written: %s\n  comment: %s\n",
              mus_header_type_name(mus_sound_header_type(argv[1])),
              mus_data_format_name(mus_sound_data_format(argv[1])),
              timestr, comment);
    }
  else
    fprintf(stderr, "%s: %s\n", argv[1], strerror(errno));
  return(0);
}
This code plays a sound file (sndplay.c):
int main(int argc, char *argv[])
{
  int fd, afd, i, j, n, k, chans, srate, outbytes;
  off_t frames;
  mus_sample_t **bufs;
  short *obuf;
  mus_sound_initialize();
  fd = mus_sound_open_input(argv[1]);
  if (fd != -1)
    {
      chans = mus_sound_chans(argv[1]);
      srate = mus_sound_srate(argv[1]);
      frames = mus_sound_frames(argv[1]);
      outbytes = BUFFER_SIZE * chans * 2;
      bufs = (mus_sample_t **)calloc(chans, sizeof(mus_sample_t *));
      for (i = 0; i < chans; i++)
        bufs[i] = (mus_sample_t *)calloc(BUFFER_SIZE, sizeof(mus_sample_t));
      obuf = (short *)calloc(BUFFER_SIZE * chans, sizeof(short));
      afd = mus_audio_open_output(MUS_AUDIO_DEFAULT, srate, chans, MUS_AUDIO_COMPATIBLE_FORMAT, outbytes);
      if (afd != -1)
        {
          for (i = 0; i < frames; i += BUFFER_SIZE)
            {
              mus_sound_read(fd, 0, BUFFER_SIZE - 1, chans, bufs);
              for (k = 0, j = 0; k < BUFFER_SIZE; k++, j += chans)
                for (n = 0; n < chans; n++)
                  obuf[j + n] = MUS_SAMPLE_TO_SHORT(bufs[n][k]);
              mus_audio_write(afd, (char *)obuf, outbytes);
            }
          mus_audio_close(afd);
        }
      mus_sound_close_input(fd);
      for (i = 0; i < chans; i++) free(bufs[i]);
      free(bufs);
      free(obuf);
    }
  return(0);
}
This code records a couple seconds of sound from a microphone. Input formats and sampling rates are dependent on available hardware, so in a real program, you'd use mus_audio_mixer_read to find out what was available, then mus_file_read_buffer to turn that data into a stream of floats. You'd also provide some way to turn the thing off. (sndrecord.c)
int main(int argc, char *argv[])
{
  int fd, afd, i, err;
  short *ibuf;
  afd = -1;
  mus_sound_initialize();
  fd = mus_sound_open_output(argv[1], 22050, 1, MUS_BSHORT, MUS_NEXT, "created by sndrecord");
  if (fd != -1)
    {
      ibuf = (short *)calloc(BUFFER_SIZE, sizeof(short));
      afd = mus_audio_open_input(MUS_AUDIO_MICROPHONE, 22050, 1, MUS_BSHORT, BUFFER_SIZE);
      if (afd != -1)
        {
          for (i = 0; i < 10; i++)  /* grab 10 buffers of input */
            {
              err = mus_audio_read(afd, (char *)ibuf, BUFFER_SIZE * 2);
              if (err != MUS_NO_ERROR) break;
              write(fd, ibuf, BUFFER_SIZE * 2);
            }
          mus_audio_close(afd);
        }
      mus_sound_close_output(fd, BUFFER_SIZE * 10 * 2);
      free(ibuf);
    }
  return(0);
}
This program describes the current audio hardware state (audinfo.c):
int main(int argc, char *argv[])
{
  mus_sound_initialize();
  mus_audio_describe();
  return(0);
}
This program writes a one channel NeXT/Sun sound file containing a sine wave at 440 Hz.
int main(int argc, char *argv[])
{
  int fd, i, k, frames;
  float phase, incr;
  mus_sample_t *obuf[1];
  mus_sound_initialize();
  fd = mus_sound_open_output(argv[1], 22050, 1, MUS_BSHORT, MUS_NEXT, "created by sndsine");
  if (fd != -1)
    {
      frames = 22050;
      phase = 0.0;
      incr = 2 * M_PI * 440.0 / 22050.0;
      obuf[0] = (mus_sample_t *)calloc(BUFFER_SIZE, sizeof(mus_sample_t));
      k = 0;
      for (i = 0; i < frames; i++)
        {
          obuf[0][k] = MUS_FLOAT_TO_SAMPLE(0.1 * sin(phase));  /* amp = .1 */
          phase += incr;
          k++;
          if (k == BUFFER_SIZE)
            {
              mus_sound_write(fd, 0, BUFFER_SIZE - 1, 1, obuf);
              k = 0;
            }
        }
      if (k > 0)
        mus_sound_write(fd, 0, k - 1, 1, obuf);
      mus_sound_close_output(fd, 22050 * mus_bytes_per_sample(MUS_BSHORT));
      free(obuf[0]);
    }
  return(0);
}
This program uses the clm.c oscillator and output functions to write the same sine wave that sndsine wrote.
int main(int argc, char *argv[])
{
  int i;
  mus_any *osc, *op;
  mus_sound_initialize();
  init_mus_module();
  osc = mus_make_oscil(440.0, 0.0);
  op = mus_make_sample_to_file("test.snd", 1, MUS_BSHORT, MUS_NEXT);
  if (op)
    for (i = 0; i < 22050; i++)
      mus_sample_to_file(op, i, 0, .1 * mus_oscil(osc, 0.0, 0.0));
  mus_free(osc);
  if (op) mus_free(op);
  return(0);
}
Here is the fm-violin and a sample with-sound call:
static int feq(float x, int i) {return(fabs(x - i) < .00001);}

void fm_violin(float start, float dur, float frequency, float amplitude, float fm_index, mus_any *op)
{
  float pervibfrq = 5.0, ranvibfrq = 16.0, pervibamp = .0025, ranvibamp = .005,
        noise_amount = 0.0, noise_frq = 1000.0, gliss_amp = 0.0,
        fm1_rat = 1.0, fm2_rat = 3.0, fm3_rat = 4.0,
        reverb_amount = 0.0, degree = 0.0, distance = 1.0;
  float fm_env[] = {0.0, 1.0, 25.0, 0.4, 75.0, 0.6, 100.0, 0.0};
  float amp_env[] = {0.0, 0.0, 25.0, 1.0, 75.0, 1.0, 100.0, 0.0};
  float frq_env[] = {0.0, -1.0, 15.0, 1.0, 25.0, 0.0, 100.0, 0.0};
  int beg = 0, end, easy_case = 0, npartials, i;
  float *coeffs, *partials;
  float frq_scl, maxdev, logfrq, sqrtfrq, index1, index2, index3, norm;
  float vib = 0.0, modulation = 0.0, fuzz = 0.0, indfuzz = 1.0, ampfuzz = 1.0;
  mus_any *carrier, *fmosc1, *fmosc2, *fmosc3, *ampf;
  mus_any *indf1, *indf2, *indf3, *fmnoi = NULL, *pervib, *ranvib, *frqf = NULL, *loc;
  beg = start * mus_srate();
  end = beg + dur * mus_srate();
  frq_scl = mus_hz_to_radians(frequency);
  maxdev = frq_scl * fm_index;
  if ((noise_amount == 0.0) &&
      (feq(fm1_rat, floor(fm1_rat))) &&
      (feq(fm2_rat, floor(fm2_rat))) &&
      (feq(fm3_rat, floor(fm3_rat))))
    easy_case = 1;
  logfrq = log(frequency);
  sqrtfrq = sqrt(frequency);
  index1 = maxdev * 5.0 / logfrq;
  if (index1 > M_PI) index1 = M_PI;
  index2 = maxdev * 3.0 * (8.5 - logfrq) / (3.0 + frequency * .001);
  if (index2 > M_PI) index2 = M_PI;
  index3 = maxdev * 4.0 / sqrtfrq;
  if (index3 > M_PI) index3 = M_PI;
  if (easy_case)
    {
      npartials = floor(fm1_rat);
      if ((floor(fm2_rat)) > npartials) npartials = floor(fm2_rat);
      if ((floor(fm3_rat)) > npartials) npartials = floor(fm3_rat);
      npartials++;
      partials = (float *)CALLOC(npartials, sizeof(float));
      partials[(int)(fm1_rat)] = index1;
      partials[(int)(fm2_rat)] = index2;
      partials[(int)(fm3_rat)] = index3;
      coeffs = mus_partials_to_polynomial(npartials, partials, 1);
      norm = 1.0;
    }
  else norm = index1;
  carrier = mus_make_oscil(frequency, 0.0);
  if (easy_case == 0)
    {
      fmosc1 = mus_make_oscil(frequency * fm1_rat, 0.0);
      fmosc2 = mus_make_oscil(frequency * fm2_rat, 0.0);
      fmosc3 = mus_make_oscil(frequency * fm3_rat, 0.0);
    }
  else fmosc1 = mus_make_oscil(frequency, 0.0);
  ampf = mus_make_env(amp_env, 4, amplitude, 0.0, 1.0, dur, 0, NULL);
  indf1 = mus_make_env(fm_env, 4, norm, 0.0, 1.0, dur, 0, NULL);
  if (gliss_amp != 0.0)
    frqf = mus_make_env(frq_env, 4, gliss_amp * frq_scl, 0.0, 1.0, dur, 0, NULL);
  if (easy_case == 0)
    {
      indf2 = mus_make_env(fm_env, 4, index2, 0.0, 1.0, dur, 0, NULL);
      indf3 = mus_make_env(fm_env, 4, index3, 0.0, 1.0, dur, 0, NULL);
    }
  pervib = mus_make_triangle_wave(pervibfrq, frq_scl * pervibamp, 0.0);
  ranvib = mus_make_rand_interp(ranvibfrq, frq_scl * ranvibamp);
  if (noise_amount != 0.0)
    fmnoi = mus_make_rand(noise_frq, noise_amount * M_PI);
  loc = mus_make_locsig(degree, distance, reverb_amount, 1, (mus_any *)op, 0, NULL, MUS_INTERP_LINEAR);
  for (i = beg; i < end; i++)
    {
      if (noise_amount != 0.0)
        fuzz = mus_rand(fmnoi, 0.0);
      if (frqf) vib = mus_env(frqf); else vib = 0.0;
      vib += mus_triangle_wave(pervib, 0.0) + mus_rand_interp(ranvib, 0.0);
      if (easy_case)
        modulation = mus_env(indf1) * mus_polynomial(coeffs, mus_oscil(fmosc1, vib, 0.0), npartials);
      else
        modulation = mus_env(indf1) * mus_oscil(fmosc1, (fuzz + fm1_rat * vib), 0.0) +
                     mus_env(indf2) * mus_oscil(fmosc2, (fuzz + fm2_rat * vib), 0.0) +
                     mus_env(indf3) * mus_oscil(fmosc3, (fuzz + fm3_rat * vib), 0.0);
      mus_locsig(loc, i, mus_env(ampf) * mus_oscil(carrier, vib + indfuzz * modulation, 0.0));
    }
  mus_free(pervib);
  mus_free(ranvib);
  mus_free(carrier);
  mus_free(fmosc1);
  mus_free(ampf);
  mus_free(indf1);
  if (fmnoi) mus_free(fmnoi);
  if (frqf) mus_free(frqf);
  if (!(easy_case))
    {
      mus_free(indf2);
      mus_free(indf3);
      mus_free(fmosc2);
      mus_free(fmosc3);
    }
  else FREE(partials);
  mus_free(loc);
}

int main(int argc, char *argv[])
{
  mus_any *osc = NULL, *op = NULL;
  mus_sound_initialize();
  init_mus_module();
  op = mus_make_sample_to_file("test.snd", 1, MUS_BSHORT, MUS_NEXT);
  if (op)
    {
      fm_violin(0.0, 20.0, 440.0, .3, 1.0, op);
      mus_free(op);
    }
  return(0);
}
The CLM version is v.ins, the Scheme version can be found in v.scm, and the Ruby version is in v.rb. This code can be run:
cc v.c -o vc -O3 -lm io.o headers.o audio.o sound.o clm.o -DLINUX
For generators such as src that take a function for "as-needed" input, you can use something like:
static Float input_as_needed(void *arg, int dir)
{
  /* get input here -- arg is "sf" passed below */
}

static SCM call_phase_vocoder(void)
{
  mus_any *pv;
  int sf;  /* file channel or whatever */
  pv = mus_make_phase_vocoder(NULL, 512, 4, 128, 0.5, NULL, NULL, NULL, (void *)sf);
  mus_phase_vocoder(pv, &input_as_needed);
  /* etc */
}
Michael Scholz has written a package using these functions, and several CLM instruments: see the sndins directory, and in particular the README file, for details.
The primary impetus for the sound library was the development of Snd and CLM, both of which are freely available.
Much of sndlib is accessible at run time in any program that has one of the languages supported by the xen package (Guile, Gauche, Ruby, Forth); the modules sndlib2xen and clm2xen tie most of the library into that language making it possible to call the library functions from its interpreter. The documentation is scattered around, unfortunately: the clm side is in sndclm.html and extsnd.html with many examples in Snd's examp.scm. Most of these are obvious translations of the constants and functions described above into Scheme.
Header types:
  mus-next mus-aifc mus-rf64 mus-riff mus-nist mus-raw mus-ircam mus-aiff mus-bicsf
  mus-soundfont mus-voc mus-svx mus-caff

Data formats:
  mus-bshort mus-lshort mus-mulaw mus-alaw mus-byte mus-ubyte mus-bfloat mus-lfloat
  mus-bint mus-lint mus-b24int mus-l24int mus-bdouble mus-ldouble mus-ubshort mus-ulshort

Audio devices:
  mus-audio-default mus-audio-duplex-default mus-audio-line-out mus-audio-line-in
  mus-audio-microphone mus-audio-speakers mus-audio-dac-out mus-audio-adat-in
  mus-audio-aes-in mus-audio-digital-in mus-audio-digital-out mus-audio-adat-out
  mus-audio-aes-out mus-audio-dac-filter mus-audio-mixer mus-audio-line1 mus-audio-line2
  mus-audio-line3 mus-audio-aux-input mus-audio-cd mus-audio-aux-output mus-audio-spdif-in
  mus-audio-spdif-out

Mixer fields:
  mus-audio-amp mus-audio-srate mus-audio-channel mus-audio-format mus-audio-port
  mus-audio-imix mus-audio-igain mus-audio-reclev mus-audio-pcm mus-audio-pcm2
  mus-audio-ogain mus-audio-line mus-audio-synth mus-audio-bass mus-audio-treble
  mus-audio-direction mus-audio-samples-per-channel

mus-sound-samples (filename)         samples of sound according to header (can be incorrect)
mus-sound-frames (filename)          frames of sound according to header (can be incorrect)
mus-sound-duration (filename)        duration of sound in seconds
mus-sound-datum-size (filename)      bytes per sample
mus-sound-data-location (filename)   location of first sample (bytes)
mus-sound-chans (filename)           number of channels (samples are interleaved)
mus-sound-srate (filename)           sampling rate
mus-sound-header-type (filename)     header type (e.g. mus-aiff)
mus-sound-data-format (filename)     data format (e.g. mus-bshort)
mus-sound-length (filename)          true file length (bytes)
mus-sound-type-specifier (filename)  original header type identifier
mus-sound-maxamp (filename)          returns a list of max amps and locations thereof
mus-sound-loop-info (filename)       returns list of 4 loop values (the actual mark positions here, not the so-called id's), then base-note and base-detune
mus-header-type-name (type)          e.g. "AIFF"
mus-data-format-name (format)        e.g. "16-bit big endian linear"
mus-sound-comment (filename)         header comment, if any
mus-sound-write-date (filename)      sound write date
data-format-bytes-per-sample (format)  bytes per sample
mus-audio-describe ()                describe audio hardware state
mus-audio-report ()                  return audio hardware state as a string
mus-audio-reinitialize               force re-check of available audio devices
mus-sound-open-input (filename)      open filename (a sound file), returning an integer ("fd" below)
mus-sound-open-output (filename srate chans data-format header-type comment)  create a new sound file with the indicated attributes, return "fd"
mus-sound-reopen-output (filename chans data-format header-type data-location)  reopen (without disturbing) filename, ready to be written
mus-sound-close-input (fd)           close sound file
mus-sound-close-output (fd bytes)    close sound file and update its length indication, if any
mus-sound-read (fd beg end chans sdata)  read data from sound file fd, loading the data array from beg to end; sdata is a sound-data object that should be able to accommodate the read
mus-sound-write (fd beg end chans sdata)  write data to sound file fd
mus-sound-seek-frame (fd frame)      move to frame in sound file fd
mus-file-clipping (fd)               whether output is clipped in file 'fd'
mus-clipping ()                      global clipping choice
mus-audio-open-output (device srate chans format bufsize)  open audio port device ready for output with the indicated attributes
mus-audio-open-input (device srate chans format bufsize)   open audio port device ready for input with the indicated attributes
mus-audio-write (line sdata frames)  write frames of data from sound-data object sdata to port line
mus-audio-read (line sdata frames)   read frames of data into sound-data object sdata from port line
mus-audio-close (line)               close audio port line
mus-audio-mixer-read (device field channel vals)   read current state of device's field -- see mus_audio_mixer_read above
mus-audio-mixer-write (device field channel vals)  write new state for device's field -- see mus_audio_mixer_write above
mus-audio-systems ()                 returns how many separate "systems" (soundcards) it can find; to specify a particular system in the "device" parameters, add (ash system 16) to the device
mus-oss-set-buffers (num size)       in Linux (OSS), sets the number and size of the OSS "fragments"
mus-sun-set-outputs (speaker headphones line-out)     on the Sun, cause output to go to the chosen devices
mus-netbsd-set-outputs (speaker headphones line-out)  on NetBSD, cause output to go to the chosen devices
make-sound-data (chans frames)       return a sound-data object with chans arrays, each of length frames
sound-data-ref (obj chan frame)      return (as a float) the sample in channel chan at location frame
sound-data-set! (obj chan frame val) set obj's sample at frame in chan to (the float) val
sound-data? (obj)                    #t if obj is of type sound-data
sound-data-length (obj)              length of each channel of data in obj
sound-data-chans (obj)               number of channels of data in obj
sound-data->vct (sdobj chan vobj)    place sound-data channel data in vct
vct->sound-data (vobj sdobj chan)    place vct data in sound-data

;;; this function prints header information
(define info
  (lambda (file)
    (string-append file
                   ": chans: " (number->string (mus-sound-chans file))
                   ", srate: " (number->string (mus-sound-srate file))
                   ", " (mus-header-type-name (mus-sound-header-type file))
                   ", " (mus-data-format-name (mus-sound-data-format file))
                   ", len: " (number->string
                               (/ (mus-sound-samples file)
                                  (* (mus-sound-chans file) (mus-sound-srate file)))))))

;;; this function reads the first 32 samples of a file, returning the 30th in channel 0
(define read-sample-30
  (lambda (file)
    (let* ((fd (mus-sound-open-input file))
           (chans (mus-sound-chans file))
           (data (make-sound-data chans 32)))
      (mus-sound-read fd 0 31 chans data)
      ;; we could use sound-data->vct here to return all the samples
      (let ((val (sound-data-ref data 0 29)))
        (mus-sound-close-input fd)
        val))))

;;; here we get the microphone volume, then set it to .5
(define vals (make-vct 32))
(mus-audio-mixer-read mus-audio-microphone mus-audio-amp 0 vals)
(vct-ref vals 0)
(vct-set! vals 0 .5)
(mus-audio-mixer-write mus-audio-microphone mus-audio-amp 0 vals)

;;; this function plays a sound (we're assuming that we can play 16-bit linear little-endian data)
(define play-sound
  (lambda (file)
    (let* ((sound-fd (mus-sound-open-input file))
           (chans (mus-sound-chans file))
           (frames (mus-sound-frames file))
           (bufsize 256)
           (data (make-sound-data chans bufsize))
           (bytes (* bufsize chans 2)))
      (mus-sound-read sound-fd 0 (1- bufsize) chans data)
      (let ((audio-fd (mus-audio-open-output mus-audio-default (mus-sound-srate file) chans mus-lshort bytes)))
        (do ((i 0 (+ i bufsize)))
            ((>= i frames))
          (mus-audio-write audio-fd data bufsize)
          (mus-sound-read sound-fd 0 (1- bufsize) chans data))
        (mus-sound-close-input sound-fd)
        (mus-audio-close audio-fd)))))
You can load sndlib into the standard Guile interpreter:
guile> (define lib (dynamic-link "/home/bil/sndlib/sndlib.so"))
guile> (dynamic-call "mus_sndlib_xen_initialize" lib)
guile> (mus-sound-srate "/home/bil/cl/oboe.snd")
22050
guile> (dynamic-call "mus_xen_init" lib)
guile> (define osc (make-oscil 440))
guile> (oscil osc)
0.0
The first dynamic-call (mus_sndlib_xen_initialize) ties sndlib2xen.c into Guile, and the second (mus_xen_init) ties clm2xen.c into Guile. See bess.scm and bess.rb in the Snd tarball. Here's a more extended example, using code from Snd's play.scm, slightly revised to make it independent of Snd:
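A minimal play-sine along those lines (a sketch only: the srate, buffer size, and loop bounds here are my assumptions, not the original play.scm values) is:

(define (play-sine freq amp)
  ;; play one second of a sine tone at freq, scaled by amp
  (let* ((srate 22050)
         (bufsize 256)
         (osc (make-oscil freq))
         (data (make-sound-data 1 bufsize))
         (audio-fd (mus-audio-open-output mus-audio-default srate 1 mus-lshort (* bufsize 2))))
    (do ((i 0 (+ i bufsize)))
        ((>= i srate))
      (do ((k 0 (1+ k)))
          ((= k bufsize))
        (sound-data-set! data 0 k (* amp (oscil osc))))
      (mus-audio-write audio-fd data bufsize))
    (mus-audio-close audio-fd)))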
Now, load this code into guile+sndlib, and
(play-sine 440 .1). | http://ccrma.stanford.edu/software/snd/snd/sndlib.html | crawl-001 | refinedweb | 5,366 | 53.31 |
When you need to perform calculations with values that have an associated unit of measure, it is very common to make mistakes by mixing different units. It is also common to perform incorrect conversions between the different units, generating wrong results. Python itself doesn't allow developers to associate a specific numerical value with a unit of measure. In this article, I look at three Python packages that provide different solutions to this problem and allow you to work with units of measure and perform conversions.
Three Different Packages to Add Units of Measure
The need to associate quantities with units of measure in any programming language is easy to understand, even in the most basic math and physics problems. One of the simplest calculations is to sum two values that have an associated base unit. For example, say that you have two electrical resistance values. One of the values is measured in ohms and the other in kilo-ohms. To sum the values, you must choose the desired unit and convert one of the values to the chosen unit. If you want the result to be expressed in ohms, you must convert the value in kilo-ohms to ohms, sum the two values expressed in ohms, and provide the result in ohms.
The following Python code uses variables with a suffix that defines the specific unit being used in each case. You have probably used or seen similar conventions. The suffixes make the code less error-prone because you easily understand that r1_in_ohms holds a value in ohms, and r2_in_kohms holds a value in kilo-ohms. Thus, there is a line that assigns the result of converting the r2_in_kohms value to ohms to the new r2_in_ohms variable. The last line calculates the sum and holds the result in ohms because both variables hold values in the same unit of measure.
r1_in_ohms = 500 r2_in_kohms = 5.2 r2_in_ohms = r2_in_kohms * 1e3 r1_plus_r2_in_ohms = r1_in_ohms + r2_in_ohms
Obviously, the code is still error-prone because there won't be any exception or syntax error if a developer adds the following line to sum ohms and kilo-ohms without performing the necessary conversions:
r3_in_ohms = r1_in_ohms + r2_in_kohms
There is no rule that assures that all the variables included in the
sum operation must use the same suffix; that is, the same unit. There aren't invalid operations between variables that hold values with incompatible units. For example, you might sum a
voltage value to a
resistance value and the code won't produce any error warning.
Now, imagine that Python adds support for units of measure. Each numeric value can have an associated unit of measure enclosed within <>. The following three lines would replace the previous code with an easier to understand syntax:
r1 = 500 <ohms> r2 = 5.2 <kilo-ohms> r1_plus_r2 = (r1 + r2) <ohms>
r1 holds a value of 500 and an associated unit of measure, ohms. r2 holds a value of 5.2 and an associated unit of measure, kilo-ohms. Because each variable includes information about its unit of measure, the sum operation is smart and can convert compatible units such as ohms and kilo-ohms. The last line sums the two values taking into account their units of measure and converts the result to the specified unit, ohms. The r1_plus_r2 variable holds the result of the sum operation expressed in ohms. The following line would produce an exception because the units of measure are incompatible:
sum = (10 <volts> + 500 <ohms>) <inches>
However, the support should be smart enough to allow you to mix different length units. For example, the following line would produce a valid result in inches.
sum = (10 <inches> + 1200 <centimeters>) <inches>
Python doesn't support units of measure, but the three Python packages I examine here provide different ways to enable them. Each package takes a different approach. While none works as well as a native language feature would, these solutions do provide a baseline that you can improve according to your specific needs.
Numericalunits: A Bare-Bones Solution
Numericalunits is a single Python module (numericalunits.py) that provides easy unit conversion features for your operations. You just need to follow two simple rules:
- To assign a unit of measure to a value, multiply the value by the unit.
- To express a value generated by its multiplication by the unit in a different unit, divide the value by that unit.
You simply need to add the following lines to import the module with an alias (nu) and call the reset_units method to start working with the different units.
import numericalunits as nu nu.reset_units('SI')
When you call nu.reset_units('SI'), Numericalunits uses standard SI units (short for Système Internationale d'Unités in French) for storing the values (see Figure 1). This way, any length value is stored in meters, no matter the length unit you specify in the multiplication. Read here if you want more information about SI base units.
Figure 1: The SI base units and their interdependencies.
If you call the default nu.reset_units(), Numericalunits uses a random set of units instead of the standard SI units. I really don't like using a random set of units because it usually generates a loss of precision and results that lack accuracy. The only advantage of using random units is that you can check for dimensional errors by running calculations twice and comparing whether the results match. You have to call nu.reset_units() before each calculation and compare the two values. I don't like this way of checking dimension errors because it adds a huge overhead and is itself error-prone. Thus, I suggest using Numericalunits as a unit conversion helper with the standard SI units initialization.
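If you do want to try the double-run check, it looks roughly like this (a sketch; the function and variable names here are mine):

import numericalunits as nu

def total_resistance():
    r1 = 500 * nu.ohm
    r2 = 5.2 * nu.kohm
    return (r1 + r2) / nu.kohm

nu.reset_units()              # first set of random units
first = total_resistance()
nu.reset_units()              # second, different set of random units
second = total_resistance()
# If first and second differ (beyond float noise), the calculation mixes dimensions.
print(first, second)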
Numericalunits doesn't save information about the unit of measure in the numerical variable; therefore, there is no way to know which unit you used when you assigned the value. If you need more than a unit conversions helper, I suggest working with one of the other packages.
You can read the following line of code as "assign 500 ohms to r1."
r1 = 500 * nu.ohm
You can read the following line of code as "display the value of r1 expressed in kilo-ohms."
print(r1 / nu.kohm)
The following line displays the value of r1 expressed in ohms.
print(r1 / nu.ohm)
The following code uses Numericalunits to sum the values of r1 and r2. Notice that the code is self-documented because you can easily see that r1 holds a value in ohms and r2 in kilo-ohms. The r1_plus_r2 variable holds the result of the sum operation expressed in ohms and r1_plus_r2_kohms holds the result converted to kilo-ohms. Notice that you can sum the values of r1 and r2 without having to convert the units to ohms, and the result will be accurate because of the way in which Numericalunits saves the values in the base units.
import numericalunits as nu nu.reset_units('SI') r1 = 500 * nu.ohm r2 = 5.2 * nu.kohm r1_plus_r2 = r1 + r2 r1_plus_r2_kohms = r1_plus_r2 / nu.kohm
The following code uses Numericalunits to sum four distance values expressed in four different units of measure: meters, miles, centimeters, and feet. Numericalunits doesn't support plurals for the units. The total_distance variable holds the total distance expressed in feet.
import numericalunits as nu nu.reset_units('SI') distance1 = 2500 * nu.m distance2 = 2 * nu.mile distance3 = 3000 * nu.cm distance4 = 3500 * nu.foot total_distance = (distance1 + distance2 + distance3 + distance4) / nu.foot
Pint: A Complete Package
Pint is a Python package that allows you to work with numerical values associated with units of measure. Because Pint saves the magnitude and the associated units for any numerical type, you are able to know which unit you used when you assigned the magnitude value. You can perform arithmetic operations between compatible units and convert from and to different units. When you try to perform arithmetic operations on magnitudes that have incompatible units of measure, Pint raises a pint.unit.DimensionalityError exception indicating that it cannot convert from one unit to the other before performing the arithmetic operation.
The UnitRegistry class stores the definitions and relationships between units. By default, Pint uses the default_en.txt unit definitions file. This file contains, in plain text, the different units and prefixes that the UnitRegistry will recognize. You can easily edit this text file to add any unit you might need to support.
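A minimal Pint session looks roughly like this (a sketch based on Pint's documented API; in recent versions the exception is also exposed at the top level as pint.DimensionalityError):

import pint

ureg = pint.UnitRegistry()

r1 = 500 * ureg.ohm
r2 = 5.2 * ureg.kiloohm
total = (r1 + r2).to(ureg.ohm)       # conversion happens here
print(total.magnitude, total.units)  # 5700.0 ohm

try:
    nonsense = 10 * ureg.volt + 500 * ureg.ohm
except pint.DimensionalityError as error:
    print(error)                     # incompatible units raise an exception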
The different compilers out there for msp430 chips and the Launchpad each use a different interrupt syntax –
Hopefully this post will help you find out which one you need.
(all examples are for the msp430g2231 chip; just substitute the chip you are working with)
First – CCS/IAR — TI's officially supported compilers/IDEs
#pragma vector=TIMERA1_VECTOR
__interrupt void Timer_A1(void) { //ISR code
}
second – mspgcc4 – IDE of your choice
Makefile — must have this option to compile correctly to chip -mmcu=msp430x2231 –most chips are supported but you will have to find the correct one for yours.
Main.c file or main project file
you must include these
#include <msp430g2231.h>
#include <signal.h>
and then the syntax for your ISR
interrupt(TIMERA0_VECTOR) Timer_A (void) { //code goes here}
Third — Uniarch mspgcc – Newest compiler – replacement for mspgcc4
makefile – -mmcu=msp430g2231 – you must use the one specific to your chip, and it should not include an x in the mmcu option since that form is for the deprecated mspgcc4
include these header files
#include <msp430.h>
#include <legacymsp430.h>
ISR syntax
interrupt(TIMERA0_VECTOR) Timer_A (void)
{ //code goes here}
Fourth way – Official uniarch mspgcc ISR syntax
makefile – same as before
no extra includes needed, just the msp430.h or specific chip header
__attribute__((interrupt(TIMERA0_VECTOR))) void Timer_A(void){ //code goes here }
This is not complete; I am waiting for an answer on what the syntax should be for the uniarch branch of mspgcc for interrupts. So far my testing has only allowed using interrupts via the legacymsp430.h header. Once I get an answer I will update this post. Update —
so far this alternative works – (works == compiles with no errors)
The syntax for declaring an interrupt with gcc is (as far as I have learnt so far):
__attribute__((interrupt(VECTOR_NAME))) void irq_handler_for_vector(void);
with VECTOR_NAME the name of the IRQ vector. Example:
__attribute__((interrupt(USCI_A2_VECTOR))) void usbRxISR(void){
Thank you Matthias - for this alternative.
Hi, I know it has been a while since your post. My question is: are there compiler-specific #defines to separate code for the different compilers in one file, or to load different libs?
It is usually #defines that separate the code ... but in recent versions mspgcc will allow you to compile code from either IAR or CCS. However, CCS and IAR will not build mspgcc code without changes.
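For example, one way (just a sketch — check your compiler's predefined macros before relying on it) to keep a single ISR source that builds under both toolchains:

#include <msp430.h>

#if defined(__GNUC__)                  /* mspgcc defines __GNUC__ */
__attribute__((interrupt(TIMERA0_VECTOR))) void Timer_A(void)
#else                                  /* CCS or IAR */
#pragma vector=TIMERA0_VECTOR
__interrupt void Timer_A(void)
#endif
{
    //shared ISR code goes here
}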
I need to put a disclaimer on this page since it was written before mspgcc uniarch (the current mspgcc) became the standard Linux toolchain for msp430s.
hope this answers your question.
Thanks for the post, this helped, the MSPGCC is a bit hard to wade through! | http://justinstech.org/index.php/2011/07/04/msp430-different-interrupts-for-different-compilers/ | CC-MAIN-2017-26 | refinedweb | 425 | 59.64 |
Note this article was written as a resource for the New Monks Info Page node in response to some wonderful feedback from fellow monks. It is designed as an easy read for new monks who may need easing into the world of strict and warnings. I have posted to meditations so I can edit it in response to monk feedback as has happened with previous nodes. I guess tutorials or Q&A will be its final resting place.
Perl is a wonderful language. As an interpreted language your first program can be as simple as this old classic:
print "Hello world\n";
When you compare this to the same program in c which also needs to be compiled before it will run the attraction seems obvious.
#include <stdio.h>
int main()
{
printf("Hello World!\n");
return 0;
}
As soon as you move past the trivial and start coding longer programs you rapidly discover the concept of the bug. All newer distributions of the Perl language include a number of pragmas to help you with the inevitable debugging. Debugging might be defined as the final step required to transfer your idea of what you want a computer to do into a set of instructions that the computer understands (and which it can execute) to make your idea a practical reality. Sometimes some rather more crusty descriptions come to mind.
Perl has three pragmas specifically designed to make your life easier:
use strict;
use warnings;
use diagnostics;
If you don't understand the meaning of the word pragma don't worry, it doesn't matter, it is just a jargon term that encompasses these three features (amongst others). We will cover each pragma in turn, clearly stating how and why you should be using it. The benefits far outweigh the short learning curve. Note that for the short, quick and dirty solutions that Perl does so easily you don't need them but as your code gets longer and/or in a production environment they are invaluable. "If you intend on building production-level code, then "use strict" and "use warnings" should be disregarded only if you know exactly why you are doing it" - dragonchild
Perl has a great help thy neighbour culture, but you will quickly discover the response you receive to any request for help (regardless of the forum) will be far better if you have tried to help yourself. Using the strict, warnings and diagnostics pragmas is generally deemed to fall into this self help category. If you are not using these pragmas the responses you receive to a help request will vary from the helpful "OK, here is your problem on line xxx" to the more pointed "Don't ask a person to do a machine's work" to the succinct RTFM.
Don't expect a person to do a machine's work can be explained thus: "Perl provides a lot of help to debug your program and get it working via the strict, warnings and diagnostics pragmas. Perl is very good at this task. If you can't be bothered doing this basic thing then why should I expend my valuable unpaid time and effort to help you?" Whilst the delivery of this response may vary from the gentle urging to the less subtle FOAD it is still very practical, and based on thousands of years of combined experience. Once you get in the habit of using these pragmas you will find Perl isolates problems automatically for you and can reduce development time by half or more (given that most of the time spent developing code is spent debugging).
So let's get down to it!
#!/usr/bin/perl
use strict;
use strict 'vars';
use strict 'refs';
use strict 'subs';
Each of these does a different thing as we will see. You *can* activate each of these separately, but most programmers prefer to activate all three with a simple use strict;. You may deactivate a pragma using:
no strict; # deactivates vars, refs and subs;
no strict 'vars'; # deactivates use strict 'vars';
no strict 'refs'; # deactivates use strict 'refs';
no strict 'subs'; # deactivates use strict 'subs';
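Note that deactivation is lexically scoped, so you can lift a stricture only inside the smallest block that needs it. A classic (purely illustrative) example is installing a sub whose name is computed at run time:

use strict;
my $name = 'greet';
{
    no strict 'refs';   # symbolic refs allowed only inside this block
    *{"main::$name"} = sub { print "Hello World\n" };
}
greet();                # strict 'refs' is back in force out here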
OK so now we know how to turn them on and off what do they actually do?
By far the most useful in general coding is the use strict 'vars'; pragma. When this is active Perl complains if you try to use a variable that you have not previously declared.
$foo = 'bar'; # OK
use strict 'vars';
$foo = 'bar'; # Triggers an error
If we try to run the second example we will get an error like:
Global symbol "$foo" requires explicit package name at script line 2.
Execution of script aborted due to compilation errors.
Fantastic you say, my simple program is now broken, and for what. For your sanity that's why. Consider this slightly longer (silly) example that counts down from ten then calls a sub liftoff. The liftoff sub is designed to count the number of liftoffs:
$count = 10;
while ($count > 0) {
print "$count ";
$count--;
liftoff() if $count == 0;
}
sub liftoff {
$count++; # count liftoffs
print "This is liftoff $count\n";
}
Perfectly valid code, but if these two snippets were separated by other code the fact that this is an infinite loop might be harder to spot, especially without the print statements. The problem occurs because of the global variable $count, which is changed within the liftoff sub. As a result the while loop never exits, and repeatedly calls the liftoff sub. Whilst this is very simplified the fact is that common variables like $count, $line, $data, $i, $j etc are often used many times within a program. If we do not localise them to where they are used some very nasty bugs can arise. These bugs can often be hard to find.
Over time programmers have come to the realisation that global variables are in general a bad idea. A variables "scope" defines where it can be seen in the program - generally the narrower the better. A global variable can be seen and modified everywhere. A lexically scoped variable can only be seen within its lexical scope. To fix our errant liftoff sub we localise the $count variables to where the are needed via a "my" directive.
my $count = 10;
while ($count > 0) {
print "$count ";
$count--;
&liftoff() if $count == 0;
}
{
# scope our variable $count via a closure
my $count;
sub liftoff {
$count++; # count liftoffs
print "This is liftoff $count\n";
}
}
liftoff();
liftoff();
liftoff();
liftoff();
Now that we have localised our $count variables this prints:
10 9 8 7 6 5 4 3 2 1 This is liftoff 1
This is liftoff 2
This is liftoff 3
This is liftoff 4
This is liftoff 5
So as desired each time the liftoff sub is called it increments our counter. The lexical scope of a variable declared via my is most simply explained by example:
my $count = 10;
print "$count\n"; # $count is 10 here
{
my $count = 5;
print "$count\n"; # $count is 5 here
}
print "$count\n"; # $count is 10 here again
Astute readers will note that because we define $count at the top of this code it is effectively a global variable. This is quite true in a limited sense and often how programmers initialy deal with pacifying use strict. In this case we have a $count with a wide scope and another *different* $count with a narrow scope. Talk about have your cake and eat it to!
In simple terms a lexical scope is the area from a opening curly to the corresponding closing curly. If you declare a variable with my within a lexical scope it only exists within the innermost lexical scope. Outside this scope it does not exist. For example:
{
my $count = 10;
print "$count\n";
}
print "\$count is zero or undefined!\n" unless $count;
Prints:
10
$count is zero or undefined!
For an extensive expert discussion of scoping variables see Coping with Scoping and
Seven Useful Uses of Local by Mark-Jason Dominus.
The use strict 'refs' pragma prevents you from using symbolic references. References are a very useful but somewhat advanced technique. You can write plenty of useful code without understanding them but eventually you will discover how they can make life much easier.
Here are two trivial examples of references (sorry that they do not make the case for using references obvious):
my $foo = "bar";
my $bar = "foo";
my $ref = \$foo;
# hard reference
print $$ref; # prints 'bar'
# symbolic reference
print $$bar; # also prints 'bar'
So what gives? In the first case we create a hard reference to the variable $foo. We then print the value in $foo by dereferencing it via the $$ construct. In the second case what happens is that because $bar is not a reference when we try to deference it Perl presumes it to be a symbolic reference to a variable whose name is stored in $bar. As $bar contains the string 'foo' perl looks for the variable $foo and then prints out the value stored in this variable. We have in effect used a variable to store a variable name and then accessed the contents of that variable via the symbolic reference.
Excellent you say, I wanted to know how to do that. Trust me it is a very bad idea. Besides making your code rather difficult to understand it can lead to some very difficult debugging. For a full discussion on why this is a really bad idea see Why it's stupid to use a variable as a variable name by one of my favourite guru's Mark-Jason Dominus again.
Getting back to our example had the use strict 'refs' pragma been active then Perl would have let us know about this dangerous problem via a message along the lines:
Can't use string ("foo") as a SCALAR ref while "strict refs" in use at test.pl line 12.
In Perl you can call a subroutine in three ways:
mysub;
mysub();
&mysub;
You can use the first form because perl assumes any bareword (ie not a key word and not quoted) must refer to a user defined subroutine. This is fine but what if you have a typo:
sub my_sub {
print "Hello World\n";
}
mysub;
This fails to print the expected "Hello World" because of the typo. Normally it would compile and run just fine, it just wouldn't work. However under use strict 'subs' perl will let us know we have a problem with a message like:
Bareword "mysub" not allowed while "strict subs" in use at test.pl line 5.
Execution of test.pl aborted due to compilation errors.
To activate the use warnings pragma you can do either of the following:
#!/usr/bin/perl -w
use warnings; # only on Perl 5.6.0 and greater
Under Perl 5.6.0 and higher you can deactivate warnings (lexically scoped) like this:
no warnings;
First please note that "use warnings;" and "no warnings;" require Perl 5.6.0 or greater. Many people do not yet use this version so for maximum portability "-w" is preferred. Even though Win32 systems do not understand the concept of the shebang line, Win32 Perl will look at the shebang and activate warnings if the "-w" flag is present, so this works cross platform. For a full discussion see use warnings vs. perl -w. On *nix the operating system will examine the first two bytes of an executable file for the "#!" sequence and, if found, execute the file using the executable found on the path that follows the "#!".
Unlike strict when perl generates warnings it does not abort running your code, but it can fill up your screen :-) Here is a simple example of how warnings can *really* help:
$input_received = <STDIN>;
exit if $input_recieved =~ m/exit/i;
print "Looks like you want to continue!\n";
Three lines - what could be easier. We get some input from STDIN and exit if the user types exit. Otherwise we print something. When we run the code we find that regardless of what we type it prints. Why no exit?? Lets add warnings and see what happens:
#!/usr/bin/perl -w
$input_received = <STDIN>;
exit if $input_recieved =~ m/exit/i;
print "Looks like you want to continue!\n";
Name "main::input_received" used only once: possible typo at test.pl line 3.
Name "main::input_recieved" used only once: possible typo at test.pl line 4.
test.pl syntax OK
So we have a syntax error, fix that and we are off and running. If we were using strict the code would not have run in the first place but that's another story. Typos like this can be hard for humans to spot but perl does it in a jiffy.
When you first start to use warnings some of the messages appear quite cryptic. Don't worry, the "use diagnostics;" pragma has been designed to help. When this is active the warnings generated by "-w" or "use warnings;" are expanded greatly. You will probably only need to use diagnostics for a few weeks as you soon become familiar with all the messages!
To finish off, here is another example:
my @stuff = qw(1 2 3 4 5 6 7 8 9 10);
print "@stuff" unless $stuff[10] == 5;
If you run this code it runs OK, or does it? Sure it prints the array but there is a subtle problem. $stuff[10] does not exist! Perl is creating it for us on the fly. Use warnings catches this subtle trap but if we add the use diagnostics; pragma we will get a blow by blow description of our sins.
use warnings;
use diagnostics;
my @stuff = qw(1 2 3 4 5 6 7 8 9 10);
print "@stuff" unless $stuff[10] == 5;
With warnings on the error is caught and with diagnostics on it is explained in detail.
Use of uninitialized value in numeric eq (==) at test.pl line 4 (#1)
(W uninitialized) An undefined value was used as if it were already
defined. It was interpreted as a "" or a 0, but maybe it was a mistake.
To suppress this warning assign a defined value to your variables.
So now that I have waxed lyrical about the merits of these three pragmas it is time to give you the bad news. Fortunately there is not much.
As with anything new it takes a while to get used to the strictures of these pragmas. The overwhelming majority of Perl coders appreciate the benefits, but there is a learning curve. Use diagnostics has no learning curve - it is just to help you understand the warnings generated. The warnings are fairly self explanatory (especially with use diagnostics) and after a while you grow very used to all the free help the offer, especially on the typo front.
Use strict requires some understanding of scoping your vars. I have already given you links to some of the best articles available on this topic so you have some good help available. Initially using strict is a pain. The pain is short and the benefits long so I encourage you to persevere. You will be over the hump in no time.
For CGI scripts to work you need to output a header. Typically you will have:
print "Content-type: text/html\n\n";
occurring early in your script. Errors from warnings and strict will generally be output before this, resulting in a 500 internal server error due to malformed headers. I suggest running your script from the command line first to weed out the warnings and/or adding:
use CGI::Carp 'fatalsToBrowser';
Note you want to comment this out of the production code to avoid compromising security.
If you have reached here you are already well on the road to Perl nirvana. I hope you enjoy the journey.
tachyon
However, I'd like to make the comment that Perl is used for a number of different purposes. The various pragmas aren't always needed, or even desired.
When Perl is used for CGI or applications development, especially if one is designing production code, then use every single pragma you can find. This is most especially true when developing production modules or scripts. (It'd be nice if every CPAN developer did this as well, but that's another story.)
However, a lot of people use Perl as a nifty version of awk or sed. For them, strict and warnings are a burden, not a benefit.
I'd modify your article to say that if someone is intending on building production-level code, then "use strict" and "use warnings" should be disregarded only if that person knows exactly why they are doing so (Some code is actually strengthened by turning off strict refs, but that's to get around a minor problem with Perl's OO implementation.)
<mode value="pedantic">
In "use warnings;" your explanation of the shebang to Windows users needs some punctuation. I suggest putting the sentence in parentheses and adding a colon after "For Win32 users".
In the summary for that section, you say "So we have a syntax error...", but that's not true. We have a typing error -- as your warnings output indicates, the syntax was OK.
</mode>
P.S. Are you sure this isn't just an elaborate XP-gaining scheme?
Off to sharpen the axe
cheers
tachyon.
tachyon
s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print
One little thingie ... use diagnostics; slows down the execution. Therefore I'd recommend using it only during development and only if you do not understand some warning or error message. Besides ... you can always find the longer explanations in perldoc perldiag.
use strict; and use warnings; on the other hand can never be overrecommended (hope there is such a word in English).
Jenda@Krynicky.cz
Work with closed branches (BB-747)
In the latest Mercurial versions the ability to close branches was added, but Bitbucket shows all branches, even those closed with hg commit --close-branch - it can be a little annoying. Here is an example. As you can see when you clone it, it has 2 branches; one of them is closed, but Bitbucket shows both in the branches menu. Please fix it.
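For reference, Mercurial itself already hides closed branches:

$ hg commit --close-branch -m "closing this branch"
$ hg branches            # closed branches are hidden by default
$ hg branches --closed   # add --closed to list them, marked "(closed)"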
Agreed: showing all closed branches in the branches pulldown menu pollutes the branch namespace. Bitbucket should respect closed branches and not list them in that menu!
Agree with Dan.
Another solution might be to show a closed-branch symbol (similar to TortoiseHg)
See attached file. | https://bitbucket.org/site/master/issues/1568/work-with-closed-branches-bb-747 | CC-MAIN-2017-04 | refinedweb | 114 | 74.19 |
isLeapYear
Returns true if the specified year is a leap year in the Gregorian calendar (February contains 29 days). Open Office equivalent: ISLEAPYEAR.
IsLeapYear
Returns a boolean (true/false) to indicate if the specified Year is a leap year. In the Gregorian calendar a typical year has 365 97/400 days = 365.2425 days; therefore there are 97 leap years in every 400 years. To facilitate this, a leap year occurs when:
- the year is divisible by 4 but not divisible by 100, or
- the year is divisible by 400
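In code, these two rules reduce to a one-liner (a sketch of the Gregorian test, not necessarily the library's exact implementation):

bool isLeapYear(int year)
{
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}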
Example 1
- To list the years from 1900 to 2009, indicating which are leap years
#include <stdio.h>
#include <codecogs/units/date/isleapyear.h>

int main()
{
  for (int i = 1900; i < 2010; i++)
    printf("\n%d %s a leap year", i, Units::Date::isLeapYear(i) ? "is" : "is not");
  return 0;
}
Note
- A leap year in the Julian calendar is simply any year divisible by 4, so we see no reason to supply this as a distinct function. But here's the code (!!):
bool isJulianLeapYear = !(Year % 4);
Authors
- Will Bateman (Sep 2004)
Source Code
Source code is available when you agree to a GP Licence or buy a Commercial Licence.